Residential Magazine

Artificial intelligence should be respected but not feared

By Neil Pierson

Cars pull up at a drive-through restaurant and food orders are given to a computer that can understand human speech. A small flying machine sweeps over crowded city streets to deliver a package. A college student turns in an essay that was written by an invisible robot.

Science fiction pioneers H.G. Wells, Jules Verne and Mary Shelley might have smiled wryly if they were here to see it. These scenarios would’ve been pure fantasy to most people in the 19th century, but here in the 21st century, they’re actually happening.


Late last year, McDonald’s opened its first automated restaurant in North Texas, where artificial intelligence (AI) takes and delivers orders. A Seattle-area pizza chain recently announced plans to use AI-powered drones as part of its delivery service. And this past January, a survey of 1,000 U.S. college students found that 30% had used the AI tool ChatGPT to help write assignments, even though most of them considered it a form of cheating.

Mortgage lenders, too, are in the early stages of sorting out AI’s potential benefits and ramifications. Already, some lenders have rolled out proprietary next-generation technologies, such as an AI-driven personal assistant designed to streamline communications tasks for real estate partners. But experts like Adam Carmel, CEO of mortgage fintech provider Polly, caution that there is much to consider before AI goes mainstream.

“Historically, it’s harder to implement new technology solutions in heavily regulated industries like the mortgage industry,” Carmel says. “So, I think that you’re going to see these solutions make their way into the industry, albeit I think it’s going to be at a measured pace.”

McKinsey & Co. has conducted a global survey on AI in each of the past five years. At the end of 2022, the company reported that 50% of respondents were using some form of AI, up from 20% in 2017. But the adoption rate has since slipped from its 2019 peak of 58%. Among the corporations that have AI capabilities embedded in at least one of their business functions today, the most commonly used forms of the technology are robotic process automation, computer vision (the interpretation of images and videos), natural-language text processing and virtual agents.

AI, of course, is not free from detractors or potential dangers. The late physicist Stephen Hawking warned that “the development of artificial intelligence could spell the end of the human race.” Beyond this rhetoric, there are emerging real-world consequences such as job losses, social media manipulation and socioeconomic inequality.

Candice Nonas, managing consultant for professional services firm RGP, cites research showing that AI tools are prone to programmer bias. Among these studies is a 2022 report from the National Institute of Standards and Technology in which the group argues that human and systemic bias are largely responsible for bias in AI. And the Consumer Financial Protection Bureau has stated its intention to stamp out “digital redlining” by holding companies responsible for discriminatory AI-driven credit decisions.

“While AI can be a helper to us, a supporter to us, I don’t think it’s at a place today where we can confidently turn over certain human judgments — especially as they relate to protection that we get under the federal government — to make wholesale decisions for us,” Nonas says.

She points to a recent report from the Federal Deposit Insurance Corp.’s Office of Inspector General, in which cybersecurity and data privacy were listed among the top challenges for the U.S. bank regulator. Ransomware attacks at banks, for example, more than doubled from 2020 to 2021, reaching an aggregate dollar volume of $886 million. Because these large institutions lack agility, Nonas says, security needs to be shored up — particularly with third-party vendors — before banks become heavily dependent on AI technologies.

She also says that mortgage companies must think holistically about the three lines of defense to strengthen their risk-management programs. This is often a challenge because executives tend to implement tech solutions for the profit-generating first line (operations personnel) while overlooking the cost-generating second and third lines (compliance and audit personnel).

“When we talk to bank CEOs, they’re very proud of the AI and the technology that they’re using on the front end,” Nonas says. “But when it comes to managing risk, auditing, rolling up information in order to do regulatory reporting, that’s still very much manual and error prone. In order for AI and technology to be most powerful, you need to have it in all lines of defense.”

Carmel believes that automation-driven job losses should be of minimal concern to mortgage professionals at the moment. Historically speaking, he says, these advancements make people much more efficient. As AI-based lending tools are further developed, much of the attention should be placed on a system of checks and balances that will alleviate compliance concerns and build consumer trust.

“Loan officers, I’ve always said, are the ultimate entrepreneurs,” Carmel says. “They’re going to receive the most amount of new technology — as they should. They’re the revenue generators in the mortgage industry.” ●

