Freddie Mac Bulletin and Executive Order: Implications for Mortgage AI
December is not even halfway over and already we have two big developments on the AI housing policy front. I'm sure many of us have seen the buzz on the Freddie Mac AI bulletin and the executive order. Here's my take on what it means for us in mortgage.
Freddie Mac Bulletin Interesting Things
The first interesting thing is the lack of any mention of generative artificial intelligence. The implication here is that the more traditional definition of artificial intelligence applies. My definition of artificial intelligence is the very broad field of computer science focused on enabling machines to perform tasks that typically require human intelligence.
One of the most interesting things about the bulletin is that it does NOT say anything about use cases. It does obligate seller/servicers to furnish documentation on the types of AI/ML used, as well as the "purpose and manner" of such use, but it falls short of saying the industry must provide a use case list. Look here for a mortgage AI use case list. Look here and here for a description of AI types, especially those prevalent in mortgage. This is interesting to me because I think the use case list is where the rubber meets the road in mortgage AI. I also think it's one place for differentiation in a mortgage company's AI strategy. I appreciate that this was not called for, as I think it's very specific to each organization.
Another very interesting thing about the bulletin is the extent to which Freddie Mac's own technology will show up. Freddie Mac, of course, makes extensive use of AI/ML systems in its own stack so naturally this will show up in the documentation. I wonder if this means that seller/servicers will also have to test Freddie Mac technology for adherence to Freddie Mac's requirements.
Also interesting is the following statement "[l]egal and regulatory requirements involving AI are understood, managed, and documented" - I really wish this was easy. We have what we have "always" had to do from a process integrity and data privacy perspective, then we have the patchwork of state requirements (not new to us in mortgage). There is no decoder ring for the rules, and what I generally say is that we have to anticipate where the puck is going to go. I'm happy to see Freddie Mac give us a less gelatinous sense of the puck's destination. Continue reading for my take on the executive order that could change all this.
To the left of the December 2025 line is what I taught in my classes before last week.
I found the use of the term "trustworthy AI" as opposed to "responsible AI (RAI)" interesting. Upon further review, this certainly harkens back to the NIST definition, which is very close to the RAI framework we teach in our classes. I will start using this term and framework instead. In terms of the trustworthy AI framework defined by NIST, please keep in mind that guardrails and evaluations are a central foundation of any well-implemented framework.
Guardrails are technical and non-technical measures implemented to prevent unsafe, unethical, and unreliable use in production environments. Evaluations measure how well generative AI performs against expectations, helping to ensure outputs are accurate, relevant, and aligned with user goals.
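To make those definitions concrete, here is a minimal sketch in Python of one guardrail and one evaluation. Everything in it is illustrative: the PII rule, the golden set, and the `model_answer_fn` callable are hypothetical stand-ins, not anyone's production framework.

```python
import re

# Hypothetical output guardrail: block a response that appears to leak
# an SSN before it reaches a borrower-facing channel.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def output_guardrail(response: str) -> str:
    """Return a safe fallback if the model output trips the rule."""
    if SSN_PATTERN.search(response):
        return "[Response withheld: possible PII detected]"
    return response

# Hypothetical evaluation: score model answers against a small golden set.
GOLDEN_SET = [
    {"question": "What is the maximum LTV for this program?", "expected": "97%"},
    {"question": "Is an escrow waiver allowed?", "expected": "No"},
]

def evaluate(model_answer_fn) -> float:
    """Fraction of golden-set questions answered exactly right."""
    correct = sum(
        1 for case in GOLDEN_SET
        if model_answer_fn(case["question"]).strip() == case["expected"]
    )
    return correct / len(GOLDEN_SET)

# Usage: evaluate(lambda q: "97%")  # -> 0.5
```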
The center of the cheeseburger is the NIST definition of trustworthy AI; the buns and sidebars are the author's own addition. A trustworthy AI framework is unevidenceable (is that a word?) without guardrails and evaluations.
The last thing in the bulletin I found interesting was the segregation of duties language. I'll be honest, I hadn't really thought about this before. In context it makes sense. This language is trying to prevent a very specific failure mode. Specifically, it's designed to help ensure that the people who benefit from using an AI system are not also the people who define the risk, “measure” the risk, and sign off that the risk is acceptable. Makes sense.
Specific Steps Mortgage Companies Should Take to Comply with the Freddie Mac Bulletin
Incorporate your AI/ML uses and applications into your policies and procedures. Create a matrix articulating same so you can provide it to Freddie Mac upon request. I would also strongly suggest creating or updating your use case matrix; it's not specifically asked for, but it's a good practice. Take a risk-based lens (add that as a column in your matrix; a sketch of what such a matrix might look like follows these steps).
Review your current governance processes for AI/ML systems and ensure you have the right operational structures in place for accountability and prevention of conflicts of interest.
Get your guardrails and evals in place. I cannot emphasize this enough. This is the key to trustworthy AI. We do not trust the foundation model providers and AI vendors to just "take care of it" for us. We must verify. The accountability expected (and expressed in the form of indemnification) is not really new; as always, as a seller/servicer, you are responsible. As the former CFPB director said, "there is no special exemption for artificial intelligence."
Get your AI controls defined and risk-mapped. This is central to the validatable control and security framework you should already have.
Monitor (and store evidence of that monitoring) your AI systems; a minimal sketch of evidence capture also follows these steps. This will be tricky for those of us using foundation models (so basically all of us). I'm unsure what the expectation is from Freddie Mac on the testing and monitoring required for things like Microsoft Copilot and enterprise deployments of foundation models. Is the expectation that we will do our own testing of these platforms for data poisoning and adversarial inputs? That's a pretty heavy load for anyone, really, and requires a pretty hefty dose of technical skill in addition to access that won't be given.
And finally, I'd suggest reaching out to Freddie Mac to discuss the implications of this policy and how you intend to implement it. It's a great group out there and I'm sure they would be happy to engage.
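As promised above, here is a sketch of what a use case matrix might look like, expressed as Python data for concreteness. The columns and the two rows are my suggestion and entirely illustrative, not a Freddie Mac-prescribed schema.

```python
# Illustrative AI/ML use case matrix rows; the columns are my suggestion,
# not a Freddie Mac-prescribed schema.
USE_CASE_MATRIX = [
    {
        "use_case": "Document classification in loan setup",
        "ai_type": "Machine learning (supervised classifier)",
        "purpose_and_manner": "Route incoming borrower docs to the right queue",
        "risk_tier": "Low",
        "owner": "Loan Operations",
    },
    {
        "use_case": "GenAI guideline Q&A for underwriters",
        "ai_type": "Large language model with retrieval augmented generation",
        "purpose_and_manner": "Assistive answers with citations; a human decides",
        "risk_tier": "Medium",
        "owner": "Underwriting",
    },
]
```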
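And for the monitoring step, a minimal sketch of evidence capture: a timestamped, append-only log of evaluation results. The schema and the 0.95 threshold are assumptions for illustration, not a regulatory standard.

```python
import datetime
import json

def record_monitoring_evidence(system_name: str, eval_score: float,
                               log_path: str = "ai_monitoring_evidence.jsonl") -> None:
    """Append a timestamped monitoring result so there is an audit trail."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_name,
        "eval_score": eval_score,
        "threshold": 0.95,  # assumed internal tolerance, not a Freddie Mac number
        "passed": eval_score >= 0.95,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Usage: record_monitoring_evidence("underwriting-qa-bot", 0.97)
```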
December 11 Executive Order Interesting Things
Definitely the most interesting thing about the executive order (to me, anyway) was the statement that the administration intends to "initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws". Personally, I would welcome a set of requirements for appropriate use of AI systems. Right now, it's really hard to know what to do. Anticipating where the puck is going to go puts a major damper on innovation. Many companies are paralyzed by the lack of clarity, which stymies creativity. I don't think it would be so bad to have a decoder ring. I acknowledge that I may regret this statement in the future.
Also interesting to me is the extent to which these two sets of guidance were released in coordination with each other. I have to think they were reviewed in concert. One does not conflict with the other, but if the Freddie Mac bulletin had been produced by a state government, I wonder how it would be reviewed under the executive order.
Specific Steps Mortgage Companies Should Take to Comply with the Executive Order
Nothing really to do here except watch and wait. Of course, it's a great idea to keep the lines of communication open with your state examiners and regulators, see what they are doing and thinking about it. Until something changes, there are about 150 distinct AI-related laws, ordinances, and legislative proposals out there that we should understand and determine for ourselves if they apply to us.
By Tela G. Mathias, Chief Nerd and Mad Scientist at PhoenixTeam, CEO at Phoenix Burst
Eleven Reasons Why the Mortgage Industry Isn't Further Along with GenAI Adoption
I got asked a really excellent question yesterday: "Why isn't the mortgage industry further along with genAI adoption?" I really should have had a better answer, given that my one job is to help the industry adopt genAI and I pretty much eat, sleep, and breathe mortgage AI. I just hadn't really sat down to formulate my thoughts, and my answer was pretty meh. So I took some time this morning to do a better job. Here's my take, in no particular order.
#1 - It's actually really hard to do at scale
It's hard. I have a commercial product in addition to observing and partnering with organizations to scale genAI solutions and it's just really hard. The tech can be fragile. There is an enormous amount of error analysis to do if you want to get it right. Mortgage is flooded with choices (so many amazing genAI demos by vendors, so few credible evaluation results), creating decision fatigue. There is so much to learn and figuring out the core tech is challenging.
#2 - The mortgage technology ecosystem
The industry technology ecosystem is still sandwiched between tech that was already aging and the constellation of wrap and ancillary applications that sprouted up post-2008. The ecosystem is unbelievably complex, with numerous workarounds and control reports, and humans who have to make the tech work to get things done. We have multiple lines of defense used to verify that the technology has done the right thing. Often, we see four lines of review for one decision. That's really challenging to integrate with.
#3 - Fear of getting it wrong
Pre-genAI, there were already hundreds of thousands of rules to implement to make and service a mortgage. Maybe 150,000 rules, and at least a million pages of documents to comply with. And that was before genAI. The consequences of "getting things wrong" were already high. Fines, buybacks, consent orders... not to mention the financial and emotional costs to the homeowners. Getting it wrong is a big deal. Now enter genAI. It will never be 100% correct. It just won't; that's not how it works. In an industry where perfection is the standard (even though the humans in the process are not perfect), it's hard to introduce technology that is not rules-based.
#4 - The people set the pace
I often make the mistake, like many of us do, of thinking that everyone thinks like I do. I eat new technology for breakfast. I thrive in uncertainty. I enjoy the pressure created when the stakes are high. I like change, it keeps things interesting. This is a very myopic way of thinking. Everyone does not think like I do, and what a boring, chaotic world that would be if they did. Each human on this AI journey is on, well, their own personal journey. We can buy all the tech we want, and eventually, even with AI, there's a person somewhere who has to use it or derive value from it. It's not the tech that sets the pace, it's the people. And frankly, I'm kind of grateful for that. AI headlines are kind of scary. Maybe the human adoption throttle is a good thing.
#5 - Talent gaps
Unless we have bajillions of dollars, the talent bar is so high that it's effectively unachievable. We are all looking for these unicorns - these super savvy, genAI native, AI experts who know mortgage and have people skills. All for like $150K. Guess what guys, not happening. So we all have to kind of fumble around to find the talent, grow the talent, partner to acquire the talent. It's just really hard. And it's really hard to tell where the tech actually is. What can actually be implemented safely and at scale? You literally have to trawl the developer community to see what the real deal is with agents. There are so many PowerPoints. Who has time to sift through them all and then pressure test them? And then there's all the completely-unsexy-yet-utterly-necessary error analysis. Guess what? That takes humans who really know the business. You know where those people are? Yeah, they are in the business.
#6 - The unrelenting pace of change
This one is really daunting, even for me, and this is my whole job. The pace of change is unlike anything I have ever seen in 25 years of tech. I don't have a fancy Silicon Valley pedigree, but I sure spend a lot of time making up for it, and I just can't keep up with every aspect of every potentially useful thing in the AI space. I can't go to every conference, and even if I could, it still wouldn't be enough. Just when we think we've figured something out, there's the next new thing. RAG was it, then agents were it, then agentic workflow was it, now it's neurosymbolic AI. There is not an organization in the world of any size that can ingest this kind of change in an immediately productive way.
#7 - Competing priorities
We all have lives to care for, many of us have families to feed (or at least a cat or houseplant). We are all running a business in one of the most uncertain times in our country's history. We all want to create time and space to experiment and learn. But we still have beans to count, and we only count two types of beans in mortgage - heads and dollars. That's just the way it is. So the pressure to generate revenue or reduce expense is absolutely unrelenting. And that's not going to stop. It takes a truly rare executive team with the emotional and intestinal fortitude to invest what it takes to figure all this out.
#8 - Hallucination rates
Then there's just the basic fact of hallucination rates. It's a thing. In order to produce the truly fantastic results we can get out of a large language model, we must be able to accept variability - and that variability can be inaccurate. Say it with me now folks, this is probabilistic technology. If you need 100% accurate answers 100% of the time, genAI is not for you. But I will challenge the idea that we actually need 100% accuracy 100% of the time in mortgage. Mostly because I know with 100% certainty that we don't have it today. This requires what Sequoia has called a stochastic mindset shift (I wrote about this here).
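If you want to see the probabilistic nature for yourself, measure it. Here is a minimal sketch: ask the same question many times and see how often the modal answer comes back. The `ask_model` callable is a hypothetical wrapper around whatever LLM you use, with sampling (temperature above zero) enabled.

```python
from collections import Counter

def consistency_rate(ask_model, question: str, n: int = 20) -> float:
    """Ask the same question n times; return how often the most common
    (modal) answer appears. Anything below 1.0 is the variability at work."""
    answers = [ask_model(question).strip().lower() for _ in range(n)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / n

# Usage: consistency_rate(my_llm_wrapper, "Is this loan eligible for program X?")
```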
#9 - There’s no instruction manual
Yes, we operate in a completely rules-based industry. But is it really? What about all that interpretation we have to do of the federal rule set? What about all those VA circulars? Lender letters? We already don't have an instruction manual, and now we add still-in-the-oven, paradigm-changing technology to the mix. Talent gaps. No help from regulators or the White House. The state patchwork. It's a mess, and we take all the risk ourselves. We do all the learning on our own. It's just really hard.
#10 - Organizational inertia
Moving an organization of any size (even a small one) is hard. We are just ingrained in how we do what we do. Every system is perfectly designed to produce the result it produces. We have settled into a way that we understand, doing a thing we know. The weight of what we have built, especially in mortgage, holds us down. 2008 crushed us all. That's where these four lines of defense came from. TRID crushed us all. It literally cost tens to hundreds of millions of dollars to implement change of that size. Everything about our organizations is optimized to carry that weight. We are stuck, all of us.
#11 - Serious resistance to process reimagination
I added this one after thinking through this for a while, so unfortunately my top ten list is now a top eleven list. So much for clickbait. It's very hard to reimagine what we do. Even if we can reimagine it, then we have to make it real. A great example is the regulatory change process. The true cost of regulatory change is not tracked, not really. It is a fantastically distributed process that touches every single part of our organizations and all the technology and people involved. From the attorney who summarizes the change, to the tester working for a third-party vendor who implements a piece of code that changes a calculation. No one counts all those steps. We don't know what it really takes, what it really costs. And if we don't know how it is, it's very hard to know how it could be.
So there you have it, faithful readers, if you're out there. My top eleven reasons why we are not further along with this genAI thing. Do not despair, however, the change is here and the time is now. We'll get through it eventually, we always do.
By Tela Mathias, Chief Nerd and Mad Scientist, PhoenixTeam | CEO, Phoenix Burst
I recently wrote about the now infamous and largely eye-rolled MIT study. You'll remember its sensational headline that 95% of organizations are getting a zero percent return on their genAI investments. Yikes. I won't rehash that article here, but I will offer an alternative that didn't get quite so much press coverage: a three-year longitudinal study published annually by the Wharton School that reached a very different conclusion.
Author's note - I have created a publicly available Google Drive for all the interesting things I find that seem useful for sharing. You should be able to access it, let me know if you cannot.
Wharton's slightly more scientific study found that of 800 leaders surveyed, 75% report positive ROI from their genAI investments. Hmm.
One of the headlines of the Wharton study was that "most firms now measure ROI, and roughly three of four already see positive returns". Darn it, the ROI problem continues to be devilishly confounding. This genAI business continues to be such an emotional roller-coaster. So let's dig in, and then I'll give you my $0.02 on what it means to us in mortgage.
Methodology
This one was slightly more scientific. It was based on 15-minute online "quantitative" surveys of 800 leaders, and the study has been repeated each year since 2023, which I have to say was very forward-looking of Wharton. I put quantitative in quotation marks because, while I am sure the data they gathered was numerical, the source of the information was people describing their experience and is, therefore, qualitative.
Definitely bigger than the MIT study, and assessed over a longer time horizon. Still qualitative and rich with opportunities for bias.
Main Observation #1: GenAI Usage is Now Mainstream
There was a lot to this first observation that was very validating. I haven't seen a good study on adoption since the Deming study from late 2024 (there is still no better one, unfortunately). Yes, genAI is mainstream. Yes, 46% of leaders are using it every day and 80% are using it at least weekly. Still, of note, one in six executives surveyed was not using it (I commend them for their honesty). Those executives and companies are in for a cold awakening if they don't join us on the AI train.
Very interesting to note that "practical, repeatable use cases supporting employee productivity" see the most adoption, with IT, legal, and HR being the furthest ahead. I continue to believe that adoption is good, employee productivity is a great place to start, and also the truly differentiating companies will be making their inroads much deeper into operations. Unsurprisingly, the study finds that operations as a business area is among the furthest behind. Well yeah, it's the hardest.
Main Observation #2: 75% of Respondents Reported Positive ROI
One thing holding us all back, though the study framed it as a positive, is the idea that "accountability is now the lens". Yes, we need to find the value for sure, and also we are still early. Part of the value is in the learning, and the struggle. We can easily get stuck in a spiral of ROI purgatory when we load up a small set of narrow use cases with the full spend of getting started.
Budgets are moving from "one-off pilots to performance justified investments" and budgets are being moved from existing cost centers to genAI adoption. Again, yes, we need to justify investment, but I'm telling you it should just be one lens, not the sole picture.
One of the headline conclusions here is "budget discipline + ROI rigor are becoming the operating model for genAI investment". This to me is a sign that bean counters (absolutely not judging the bean counters, love the bean counters) are winning the executive perspective.
Main Observation #3: Culture is the Adoption Throttle, Not Technology
This was the most interesting section of the study for me, by far. "People set the pace"... I love this, and I could not agree more strongly that this is true. The gap between how we are using genAI (the "everyday AI" that is mainstream) and what it can actually do is a massive chasm.
We are seeing that almost 70% of leaders reported they have some kind of Chief AI Officer role, indicating that accountability for AI adoption has moved into the C-suite. At the same time that confidence in AI and its ability to provide value grows, capability is falling short.
"Capability building is falling short of ambition. Despite nearly half of organizations reporting technical skill gaps, investment in training has softened, and confidence in training as the primary path to fluency is down. Some firms are pivoting to hiring new talent, yet recruiting advanced genAI skills remains a top challenge."
Unless you have millions of dollars, recruiting advanced genAI skills is absolutely impossible, especially in mortgage. The only way to get it is to grow it at home, or partner for it. And when partnering is chosen, it needs to come with a coaching and "teaching by doing" component. I continue to believe that every organization needs to have in-house genAI talent, and that means investing in general education, applied education, education that favors application over attendance. Oh, and learning and applying new ways of thinking.
"Cartoon style scene of several friendly bear cubs swirling playfully around glowing streams of futuristic technology - floating holograms, data ribbons, and soft light particles..." Created on Midjourney
This is where we see that "people set the pace". In the past few weeks at PhoenixTeam, we have been up to our eyeballs in Phoenix Burst product adoption and workforce enablement, both separately and together. At the heart of adoption is the people. Literally. There is no adoption without the people. I can be guilty of this myself: we focus so much and so hard on the tech and the product and the experience, and we lose sight of the hearts of the people. The fear. The lack of trust. The domain of "change management".
The study echoes: "the human side remains the bottleneck and a key potential accelerant. Morale, change management, and cross functional coordination remain persistent barriers. Without deliberate role design, coaching, and time to practice, 43% of leaders warn of skill atrophy, even as 89% believe genAI tools augment work."
What does this mean for us in mortgage?
I had a great friend say to me last week about adoption struggles, "perhaps it's FOAK?" (first of a kind), which I had to look up. (Embarrassed to admit it, but there it is; authenticity is the best route to achieve authentic human connection.)
Thank you, ChatGPT, for once again helping me learn and put words to things I knew but only felt.
And yes, tying back to the study, for those of us that are working on really challenging areas, where the metrics are not well established, where the tasks are not necessarily repeatable, where the path ahead has to be redesigned -- we are running into FOAK. Taking a step back to think about the people, and learning from what they are telling us is really important. And forging ahead, being adaptable, being resilient, and staying positive.
In no particular order, what I think all this means to us in mortgage:
When we invest in applied learning, the people adapt better. I suggest that mortgage leaders look at their talent strategies, really think about what it means to the people, and create tailored learning strategies based on real change and thoughtful consideration about what is actually changing about actual jobs.
Remember, whatever feedback you are getting, it's valid. Even if you don't agree with it. That is an opportunity to open our minds to what someone else is thinking, doing, and feeling. Their experience is their experience. I suggest that mortgage leaders inspect their own feelings, inspect how they are using genAI (if at all), and then talk to others at all levels about their experience. It probably won't be the same.
If you want fast adoption and fast ROI, go with the easy stuff. Just know that the easy stuff is commoditizing faster than you know, and differentiating on fast and easy isn't a thing.
There is so much happening out there, it's very hard to keep up. I hope you will reach out and share what you are learning and applying, what's working and what's not. We are listening and trying to find solutions.
By Tela Gallagher Mathias, Chief Nerd and Mad Scientist at PhoenixTeam
What does the infamous "MIT study" really mean to us in mortgage?
Everyone is hating on the MIT study published in July, which claimed that 95% of organizations are getting a zero percent return on their genAI investments. This report, published by the MIT Media Lab, has been extensively debated by both critics and advocates, including some of the most recognized and respected voices on the AI circuit.
The 2.27% current impact represents the portion of total possible value that organizations are realizing today from agentic AI. (iceberg.mit.edu)
"Despite $30-$40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.... Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L Impact."
That's a pretty spectacular claim. I certainly agree that finding the return on investment (ROI) is harder than expected, and I have seen teams swirl looking for that spectacular 2-4x ROI on one or just a handful of use cases. I think this study also ignites fear in all of us product companies looking to really make a difference in mortgage. We don't really want to talk about how hard it is to find meaningful and lasting change. So let's just put that out in the open.
What does the article actually say?
The main point is to argue that the key differentiator between success and failure is systems that learn. It argues that the classic ChatGPT model of assistive or conversational AI is great for short thinking tasks, and falls apart for long thinking due to lack of memory. It argues that agents are necessary to achieve real organizational value, and that there is a window of about 18 months to settle on partnerships that will help organizations really capitalize on the AI advantage.
I don't think these conclusions are wrong. In fact, I agree. However, I think they are, at best, weakly supported by a sparse set of anecdotal data in a study that has an agenda.
So basically it's a study to put data behind the claim that agents are the key to real value unlock, and that the time is now to seize the advantage. That's the bottom line, and I think it's useful. Yes, there are many reasons to hate on the study, but the bottom line strikes me as mostly valid.
Better than a bunch of hallway conversations?
The study was based on 52 interviews across "enterprise stakeholders", a "systematic analysis" of about 300 public AI initiatives, and surveys with 152 leaders. Not a super big or scientific study from my perspective. But still, let's put away the pitchforks. It's better than nothing, right? I think some of the best insights are revealed in the quotes.
"The hype on linked in says everything has changed, but in our operations, nothing fundamental has shifted." Little bit of victim mentality here but ok, yes there is a lot of hype and the PowerPoints do not agree with what is actually happening.
"If I buy a tool to help my team work faster, how do I quantify the impact? How do I justify it to my CEO when it won't directly move revenue or decrease measurable costs?". Preach - this is like THE problem. We only count two types of beans in mortgage - headcount and revenue. One has to go down and the other has to go up. otherwise we have no ROI.
"[ChatGPT is] excellent for brainstorming and first drafts, but it doesn't retain knowledge of client preferences or learn from previous edits. It repeats the same mistakes and requires extensive context input for each session. For high stakes work, I need a system that accumulates knowledge and improves over time." Yes and no on this one. The more I use ChatGPT, the better it performs relative to what I want it to do. It does anticipate what I will ask, and I have to provide less context. But yes, on an individual question basis, memory is an issue.
"I can't risk client data mixing with someone else's model, even if the vendor says it's fine". This is completely true and I hear it all the time.
A high bar for defining success.
The report had a pretty high bar for the definition of success. Said in my words, success is defined as meaningful impact on the P&L, measured six months post deployment. Keep in mind, this wasn't actually measured, this was based on what those interviewed or surveyed said.
I've been a large scale commercial software product manager for a lot of my career. I've had many glorious successes and just as many spectacular failures. By this definition, I'm sure at least some of my successes would be failures. And if you consider what it takes to move in federal, I think success would be even more scarce. This definition applies to a narrow spectrum of small, turnkey, commercial solutions where you can turn it on and see immediate P&L impact.
While this is definitely the goal for all of us, I'm just not sure it's a realistic definition for the rest of the world. Or maybe I'm the one with the outdated perspective (ok, ok, probably it's a me problem and I am being defensive). I do base a lot of my experience on what the process has been like in the past. I certainly agree that in a world where we can go concept to cash in a week, we should be able to move the needle on the P&L in a matter of months.
Learning systems and the agentic web.
The authors are from the Networked AI Agents in Decentralized Architecture (NANDA) team at MIT. NANDA is a research initiative focused on how agentic, networked AI systems will impact organizational performance. They conduct research and host events that explore the future of what they call the agentic web, defined as "billions of specialized AI agents collaborating across a decentralized architecture".
Agentic AI, according to NANDA researchers, is the class of systems that embeds persistent memory and iterative learning by design, directly addressing what they see as the learning gap in assistive AI solutions like ChatGPT and wrapper-based AI solutions.
That is also a high bar, in terms of the definition of an AI agent. In my classes and workshops, I typically define an AI agent as having four key characteristics, the ability to (see the toy loop after this list):
Perceive, understand, and remember context.
Reason about a problem.
Plan and take action.
Use tools.
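To make the four characteristics concrete, here is a toy agent loop in Python. It is a teaching sketch, not a production framework: the single hard-coded tool and the stubbed `plan_next_step` stand in for what a real agent would delegate to an LLM.

```python
memory: list[str] = []  # 1. perceive, understand, and remember context

def calculator(expression: str) -> str:  # 4. a tool the agent can use
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def plan_next_step(goal: str, context: list[str]) -> tuple[str, str | None]:
    """2./3. Reason about the problem and plan an action.
    A real agent would call an LLM here; this stub hard-codes one plan."""
    if not any("result" in item for item in context):
        return ("calculator", "12 * 30")  # decide a tool is needed
    return ("done", None)

def run_agent(goal: str) -> str:
    memory.append(f"goal: {goal}")  # perceive the goal
    while True:
        action, arg = plan_next_step(goal, memory)
        if action == "done":
            return memory[-1]
        result = TOOLS[action](arg)  # take action with a tool
        memory.append(f"result of {action}({arg}): {result}")

print(run_agent("What is 12 * 30?"))  # -> result of calculator(12 * 30): 360
```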
I adapted this definition from Jensen Huang at NVIDIA GTC earlier this year, so admittedly maybe it's time for me to evolve my definition; it has been about four months or so. I like my definition because it's easy to communicate and remember, and it is easy to contrast with assistive or conversational AI. But just because it's easy doesn't make it right. NANDA has a much more complicated perspective, resting on a foundation of what they call decentralized AI.
Decentralized AI enables collaboration among individuals and organizations that have complementary assets without a central oversight function. The idea is sharing to achieve value rather than relying on central functions (or monopolistic vendors).
This idea of an agentic web resting upon a network of decentralized AI systems is complicated, and requires a level of technical sophistication that I really don't have. But I get the concept, and it makes theoretical sense. It just seems... really hard. It requires a lot of humans (?) to do a lot of sophisticated things around the world. Meanwhile in mortgage we are still just trying to figure out agents beyond the call center, research functions, and development acceleration (where agents are well established).
The five myths about genAI in the enterprise.
This I did find useful. It was a little section that did a good job painting the picture of common myths in genAI, some of which I agreed with, and I did stop and think about all of them.
Myth #1: AI will replace most jobs in the next few years. Yeah, no. Certainly, across all major technological disruptions in the history of disruption, jobs became obsolete and new jobs were created. We are seeing fewer jobs for entry-level team members. The Stanford Digital Economy Lab, using ADP employment data, found that entry-level hiring in "AI exposed jobs" has dropped 13% since large language models started proliferating.
Myth #2: Generative AI is transforming business. The study suggests that adoption is high but transformation is rare. I can echo this sentiment, this is what I see as well. I see very little truly transformational adoption in our industry.
Myth #3: Enterprises are slow in adopting new tech. The study indicates this is a myth, but then goes on to say "enterprises are extremely eager to adopt AI and 90% have seriously explored buying an AI solution". Exploring and adopting are just not the same thing, so I disagreed here.
Myth #4: The biggest thing holding back AI is model quality, legal, data, and risk. They argue, as I've pointed out already, that it's the lack of system learning that is the biggest barrier. This may be true, but model quality is a real problem, and it's what I hear about the most. The question I get most often is "how do you know it's right" (followed closely by "is it safe"), so I don't 100% agree with the authors' sentiment on this one.
Myth #5: The best enterprises are building their own tools. They state that "internal builds fail twice as often". This one is hard for me to substantiate as I tend to work with organizations that buy and build, perhaps with a slight lean towards the buy side. Naturally, this means my perspective will be skewed. I did find this fascinating, though, and in theory it makes sense. I'll have to dig into this one more and see.
Bottom line for us mere mortals in mortgage.
So cutting through all the jargon and the NANDA rabbit holes I explored through my study of the study, here's what I take away from all this for us in mortgage AI.
We need to redefine, or at least expand, our definition of an agent and think more thoughtfully about agentic AI. I continue to believe we have to start where we are, and agentic AI adoption is still very, very early for us in mortgage. I continue to believe that good CI/CD pipelines for retrieval augmented generation (RAG) are the right foundation to start with for organizations that want to build (a minimal sketch of a RAG regression gate follows this list). I don't generally advise organizations to skip steps, but maybe I should.
We need to be even more aggressive about agentic AI adoption and seeking out high-value, low-complexity agentic use cases. I am already doing this, but not with enough vigor, so I will place more emphasis on this.
We are still really early. The authors argue there is an 18-month window of opportunity. This is likely true in other industries but based on what I am seeing, our window is a bit longer, say 24 to 48 months. Longer in federal, of course. But it's coming.
And finally, perhaps most importantly, we have to continue to double-down on adaptive systems, and continue to incorporate what we learn and see from the actual operations into everything we build. This applies to our product strategy, our educational strategies, and workforce transformation considerations as well. This will take more thought and introspection, I'll let you know what I land on.
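As promised above, here is a minimal sketch of what I mean by a CI/CD gate for RAG: a regression test that fails the build if a golden answer stops citing its source. The `retrieve` and `generate` stubs are hypothetical stand-ins for your vector search and LLM calls, and the golden question (and its seasoning answer) is purely illustrative.

```python
# Golden set: questions whose answers must cite specific guide sections.
GOLDEN = [
    {"question": "What is the seasoning requirement for a cash-out refi?",
     "must_cite": "guide_section_4301"},
]

def retrieve(question: str) -> list[str]:
    # Stand-in for a vector store query.
    return ["guide_section_4301: cash-out refis require 12 months seasoning"]

def generate(question: str, passages: list[str]) -> dict:
    # Stand-in for an LLM call that answers with citations.
    return {"answer": "12 months", "citations": ["guide_section_4301"]}

def test_rag_citations():
    """Run in CI; fail the build on a citation regression."""
    for case in GOLDEN:
        passages = retrieve(case["question"])
        response = generate(case["question"], passages)
        assert case["must_cite"] in response["citations"], (
            f"Regression: {case['question']!r} no longer cites {case['must_cite']}"
        )

test_rag_citations()  # passes with the stubs above
```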
By Tela Mathias, Chief Nerd and Mad Scientist, PhoenixTeam