In 2023 and 2024, a spate of highly publicized cyber attacks struck some of the largest U.S. financial institutions with footprints in mortgage, from servicing giant Mr. Cooper and consumer-direct mortgage lender loanDepot to title insurance provider Fidelity National Financial.
Wholesale lenders Fairway Independent Mortgage Corp. and Nations Direct Mortgage were breached. So was title insurance and settlement services giant First American Financial Corp. And some cyber incidents that occur never hit the headlines, experts say, because not all breaches must be publicly disclosed.
In June of this year, Towne Mortgage Co. reported a ransomware attack that led to sensitive customer data potentially being copied from its systems. In November, New American Funding disclosed a data breach stemming from a third-party notary services vendor that resulted in the exposure of borrower data used in loan closings, such as names, addresses and Social Security numbers.
In November, the FBI opened an investigation into a cyberattack at SitusAMC, a third-party vendor that provides services across the mortgage lifecycle, from underwriting to pricing to securities valuations, for the likes of JPMorgan Chase, Citi and Morgan Stanley.
For all their differences, mortgage companies and cybercriminals share an unquenchable thirst for consumer data. Recent advancements in artificial intelligence have simultaneously lowered the bar for bad actors trying to filch consumer data and upped the ante for mortgage companies whose operations increasingly rely on stockpiling it.
Experts warn that cyber threats are evolving faster than the mortgage industry’s capacity — or willingness — to combat them.
The shifting cyber threat landscape
Michael Nouguier, chief information security officer and partner in charge of cybersecurity services for Richey May, a tax, audit and business advisory firm serving various sectors including financial services, says AI has “absolutely” changed the threat landscape facing mortgage companies.
“The two biggest entry points for a mortgage company are email and poor systems management,” he tells Scotsman Guide. “In the last two years, business email compromise has dwarfed ransomwares and dwarfed advanced attacks on organizations because it’s a quick buck.”
Nouguier recently helped a mortgage client who accidentally paid a cybercriminal $19,000 due to a compromised business email. The ACH information on an invoice had been edited, causing the company’s chief financial officer to pay the threat actor and not the intended recipient. That easily could have been $100,000, he says.
Large language models like OpenAI’s ChatGPT and Anthropic’s Claude were introduced in late 2022 and early 2023 with barely a warning label in place and internal security features that cyber experts say are relatively simple to circumvent. Known vulnerabilities at mortgage companies have become much easier to exploit at scale.
“We used to train people to look for misspellings, broken English and grammatical errors in emails, but now everybody just writes their emails in ChatGPT, so it’s perfectly orchestrated,” explains Nouguier. “It is industry-focused and specific. It can really cater toward the individual. The ease of entry has just been truncated dramatically.”
Three years of thin margins and low mortgage production have not helped to shift what Nouguier describes as an industry-endemic mindset that regards cybersecurity investment as a non-revenue-generating expense, compared to a more traditional business enablement tool like a loan origination system.
“The processes and procedures we have in place to mitigate these attacks are not increasing with the increased attacks,” he says. “Attacks that leverage AI in some form have increased, but the adoption to protect with AI have not really increased.”
Nouguier is optimistic that mortgage companies’ security postures will improve over the next few years, however, as state financial regulators and counterparties like government-sponsored enterprises Fannie Mae and Freddie Mac require more stringent, and even independent, cybersecurity audits.
Regulating AI risks, not AI innovation
Over the past few years, AI in the mortgage industry has become as much a branding strategy as an operations edge — the “supersize” equivalent for vendors, lenders and recruiters promising faster, slicker and stickier mortgage technology.
The debate over how to regulate AI in a manner that protects consumers, financial markets and critical infrastructure without restraining innovation has landed state legislatures in direct conflict with President Donald Trump, who has sought to strip states of their ability to erect guardrails around AI’s expansion and usage.
Because advancements in AI are happening so quickly, says Curtis Knuth, president and CEO of Service 1st, a credit reporting agency that resells credit reports and assorted data primarily into the mortgage industry, there will be a necessary catch-up period for governance and internal controls.
For now, however, Knuth says there are benefits to letting innovation reign. The hands-off approach of the current administration benefits entrepreneurs who under previous administrations had grown accustomed to “constantly having your hand slapped for moving forward and advancing technology,” he says.
“There’s not a lot in it for you if at every step you have to worry about being sued either by the CFPB [Consumer Financial Protection Bureau] or private actors,” explains Knuth. “At this point, I’m glad that there is a Trump administration in place because it does give us the ability to test this [technology] out, but at some point, there is going to have to be a pullback.”
On a bipartisan basis, state legislatures have insisted on preserving their ability to regulate AI, despite the president’s preference. In July, all but one U.S. senator rejected a proposal in Trump’s signature tax-and-spending bill that would have established a 10-year moratorium banning states from regulating AI in any way.
Determined to curtail state AI laws, Trump recently adopted a different strategy. An executive order the president signed in mid-December directs the U.S. attorney general to create an “AI Litigation Task Force” to challenge state AI laws.
The order also directs the U.S. Department of Commerce to draw up a list of state AI regulations “that conflict with national AI policy priorities,” while threatening to withhold funding under a broadband internet deployment program from states with AI laws.
“I think the natural place for regulation and oversight of an enabling technology like AI, which operates across the country and the world, is at the federal level,” says Theo Ellis, founder and CEO of Friday Harbor, an AI-native mortgage technology provider. “This is clearly something related to interstate commerce and it’s clearly something where it’s not in anyone’s interest to have a patchwork of approaches.”
Ellis believes it would be a mistake for the government to repeat past delays in, for example, establishing a digital privacy framework, which prompted states to lead that effort — creating the patchwork of privacy laws in place today.
Navigating the AI regulatory void
Myriad state and local AI laws are already in force as additional legislation continues to emerge, offering guidelines and enforcement tools for a range of AI-related impacts on consumers and businesses.
Passed in March 2024, for example, Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act restricts AI tools from replicating artists’ voices without their consent. The Artificial Intelligence Bill of Rights, containing measures to protect minors from harmful online content and the unauthorized use of people’s images and likenesses, was introduced to the Florida state legislature by Sen. Tom Leek, R-Ormond Beach, last week.
Taking effect at the end of June, the Colorado Artificial Intelligence Act is the first state-level law to impose comprehensive governance, disclosure and enforcement standards addressing algorithmic discrimination, consumer privacy and automated decision-making, among a range of other issues.
Supporters of a federal AI framework claim that a patchwork of 50 different state-level AI frameworks slows innovation and raises compliance costs.
That argument is not without merit, says Toby Wells, president of Cornerstone Servicing — particularly for mortgage companies that are already highly regulated at the state and federal levels.
“I think all servicers would always advocate for some centralized governance that would generally be federal as opposed to state level to dictate at least the high-level parameters in which you can operate,” Wells tells Scotsman Guide.
“From a servicing perspective, when we get something wrong, we don’t get it wrong once,” he adds. “We get it wrong tens of thousands of times because whatever we do is replicated.” Hence, he says Cornerstone has taken a measured approach to AI, aiming to perfect low-hanging integrations instead of reinventing end-to-end systems.
Right now, most mortgage companies “are on their own journey,” Wells says, implementing AI in a siloed fashion without cross-industry dialogue for sharing best practices or standardizing AI compliance. That presents an opportunity for servicers to create a centralized framework and share best practices on their own.
“At some point AI engines will be able to distill down quick decisions to customers, but I think there’s a lot of discovery that has to go into that,” says Wells. “In theory, those tools are there. That data architecture is there. That decisioning apparatus is there. But boy, that’s just something you cannot be wrong about.”
AI risks transcend mortgage lending
“We’re critical infrastructure,” says Kyle Draisey, chief information security officer and head of cybersecurity for Sagent, a loan servicing technology provider backed by Warburg Pincus, a private equity firm with a reported $86 billion in assets under management. Sagent technology powers more than $2 trillion in mortgage servicing portfolios.
Prior to joining Sagent last May, Draisey was senior technical director at major defense contractor BAE Systems, designing cyber and intelligence tools for the U.S. Department of Homeland Security, federal law enforcement and a three-letter intelligence agency that he says “you see in movies a lot and has stars up on the wall.”
“We are the servicing technology backbone of mortgages and large consumer loans,” explains Draisey — the mortgage industry equivalent of the powerlines and substations of the electrical grid, or the runways and radar of U.S. air traffic control.
“That’s what Sagent does. That’s what Black Knight does,” Draisey adds, referring to Sagent’s top competitor. “That’s a huge chunk of the U.S. economy.”
A recent survey by Dun & Bradstreet of more than 2,000 senior professionals working in financial services and insurance across major markets in the U.S. and Europe highlighted the extent to which global financial institutions recognize the severity of AI-enhanced cyber risks. Nearly 80% of respondents said cybersecurity vulnerability was the priority risk facing their companies.
But a significant preparedness gap remains, the survey revealed. Cybersecurity was the business risk that respondents said their firms were least prepared to mitigate, with roughly 38% of respondents feeling that their companies were not fully prepared for cyber risks.
“Governance is playing catch-up, and that’s not just in the mortgage industry. That’s in the technology space globally,” says Draisey. Two years ago, he delivered a speech to the Intelligence and National Security Alliance, a public-private partnership for the intelligence community, pitching the idea of an “ISAC” for AI, which does not currently exist.
Introduced in 1998 pursuant to Presidential Decision Directive 63, the Information Sharing and Analysis Centers (ISACs) are organizations designed to facilitate information sharing and threat preparedness within every vertical of critical infrastructure, such as aviation, food, health care, national defense and oil and natural gas.
Sagent, other large mortgage companies and many of the world’s largest banking and insurance institutions are members of the Financial Services ISAC (FS-ISAC), which even has a mortgage subcommittee. These veritable safe spaces for information sharing between competitors, coordinated by the National Council of ISACs, close the loop between U.S. intelligence agencies and private companies that function, in many ways, like a first line of cyber defense.
Draisey, who previously sat on the national defense and information technology ISACs, sees cybersecurity as a “team sport.” He believes that an AI-specific ISAC would provide a venue for collectively and deliberatively securing financial infrastructure, thus improving large and small companies’ resilience to cyber threats while enhancing consumer protections.
“Let’s just pull back the curtain,” says Draisey. Without divulging corporate secrets, he wishes competitors like Sagent and Black Knight could share observations on the shifting threat landscape.
“Tell us what you’re doing,” he continues, “about how you’re implementing those responsible and secure methods of having AI, to where you’re helping people do more with their job in a safe and responsible manner.”



