
My take on 2026 is that it will be a race to achieve what I call the mortgage AI "FOMO" strategy as fast as possible. FOMO, of course, is fear of missing out. It's a perfectly reasonable strategy, since achieving FOMO faster than everyone else is a differentiating edge. The FOMO strategy includes things like voice agents, knowledge bots, generative AI for data gathering, and a bunch of other things (reach out if you want the list).

We can contrast FOMO with what I see as the AI future: a radically more effective mortgage organization that embraces controlled autonomy. The AI future includes things like seamless compliance change integration, a consumer data flywheel that generates new opportunities while also uncovering service quality limitations, and a move from reactive lines of defense to proactive lines of offense. (Again, reach out if you want more details.)
The gap between FOMO and the AI future is large.
The long-term differentiating value won't come from FOMO. It will come from a total reimagination of how mortgage works. We are very focused today on task-level automation: removing friction from the current process, improving the value of KPIs we already have. That is not the same as looking at mortgage end-to-end to find and forge the new way of working.
Kind of stark, I suppose, but we really need to think existentially about our business. The questions I ask of my clients are:
Gartner poses the first question in terms of "defend, extend, and upend", which is a really helpful framing, although a little too watered down. We can blend these ideas and express them in terms of spend - maybe we want to survive for now, differentiate in the next two years, and completely crush everything ultimately. That suggests we need to "catch up" first and lean into our differentiation points, while also setting in motion at least one major reimagination. No one has unlimited time and unlimited budget; we have to prioritize.
The three year horizon is a useful thought exercise because it gives the illusion of time, thereby removing some fear and enabling a bit of clear thinking. The reality is that no one has three years to wait for AI.
This process reimagination is taking different forms, and will be unique and proprietary to each company (which means I won't write about it). It is a perfect example of the truly unique human value we can offer as experienced operators and experts in our crafts. Yes we can use genAI to expedite our process, come up with some idea remixes - but the real innovation will come from operators, not machines.
In addition to process reimagination, there are real technical hurdles to overcome. Ethan Mollick, of course, calls this the jagged frontier. My take on this is playing out in my lab this week. I'm working on four different challenges at the moment. I'll talk more about the use case I'm working on as I get it figured out.
Agent mania has officially taken hold, and the insanity it is bringing has reached a fever pitch. Let's step back a moment. So you can build an agent. So what? You can build an agent in 45 minutes if you want to; we do it in our classes all the time. Not everything needs an agent, which is the first (and perhaps the most important) thing to accept. Once we pass that gate, how we scope agents matters a lot. Scope them narrow and scope them atomic.
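To make "narrow and atomic" concrete, here is a minimal sketch of what that scoping discipline looks like in code. Everything here is illustrative - the `Agent` class, the `lookup_rate` tool, and the rates themselves are made up for this example, not any vendor's API - and a real agent would call an LLM where noted.

```python
from dataclasses import dataclass
from typing import Callable

# A deliberately narrow agent: one responsibility, one tool, clear
# inputs and outputs. Names here are illustrative, not a real framework.

@dataclass
class Agent:
    name: str
    instruction: str             # the single job this agent is scoped to do
    tool: Callable[[str], str]   # exactly one tool keeps the scope atomic

    def run(self, request: str) -> str:
        # In a real system this step would call an LLM with `instruction`
        # and let it decide whether to invoke the tool; here we call it
        # directly to keep the sketch self-contained.
        return self.tool(request)

def lookup_rate(loan_type: str) -> str:
    # Stand-in for a real pricing-engine call; rates are fabricated.
    rates = {"30yr-fixed": "6.5%", "15yr-fixed": "5.9%"}
    return rates.get(loan_type, "unknown product")

rate_agent = Agent(
    name="rate-lookup",
    instruction="Answer rate questions for supported products only.",
    tool=lookup_rate,
)

print(rate_agent.run("30yr-fixed"))   # one agent, one question, one answer
```

The point of the shape, not the code: when an agent has one instruction and one tool, it is testable, auditable, and hard to misuse - which is exactly what "atomic" buys you.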
I think one of the major technical hurdles of 2026 is agent orchestration (although some say this is not needed if agents are scoped and designed right). How do you reliably create networks of agents that work together to achieve an objective or related set of objectives? What is the contract between the agents, and how is it memorialized, enforced, and made transparent? I'm looking at a few things here - dust, crew, and gastown. I'll report back with what I discover. Maybe agent orchestration doesn't even matter. (Gasp!)
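Here is one way to think about that contract question, as a hand-rolled sketch (this is not how dust, crew, or gastown actually do it - it's just the underlying idea): make the contract a typed message that every agent must accept and return, so every handoff can be validated and leaves an audit trail.

```python
from dataclasses import dataclass

# Sketch of an orchestration contract: a typed Handoff message that every
# agent accepts and returns, validated at each hop. All names illustrative.

@dataclass
class Handoff:
    task: str       # what the next agent is being asked to do
    payload: dict   # the data being passed along
    source: str     # which agent produced this, for an audit trail

def validate(handoff: Handoff) -> Handoff:
    # Enforce the contract at every hop instead of trusting agents blindly.
    if not handoff.task or not isinstance(handoff.payload, dict):
        raise ValueError(f"contract violation from {handoff.source}")
    return handoff

def intake_agent(h: Handoff) -> Handoff:
    # Toy "agent": tags the payload as parsed and passes it along.
    return Handoff(task="verify", payload=dict(h.payload, parsed=True), source="intake")

def verify_agent(h: Handoff) -> Handoff:
    # Toy "agent": verifies only what intake actually parsed.
    verified = h.payload.get("parsed", False)
    return Handoff(task="done", payload=dict(h.payload, verified=verified), source="verify")

def orchestrate(initial: Handoff, pipeline) -> Handoff:
    # A minimal orchestrator: run agents in order, validating every handoff.
    h = validate(initial)
    for agent in pipeline:
        h = validate(agent(h))
    return h

result = orchestrate(
    Handoff(task="intake", payload={"doc": "1003"}, source="user"),
    [intake_agent, verify_agent],
)
print(result.payload)   # {'doc': '1003', 'parsed': True, 'verified': True}
```

The memorialization and transparency questions fall out of the same structure: the `Handoff` type is the memorialized contract, `validate` is the enforcement, and the `source` field is the beginning of transparency.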
Oh, context engineering... maybe the most important consideration. How do you shove just the right amount of context into the context window? How do you compress a long and circuitous history into a meaningful collection of right-sized bites? This is really the trick, I think, to solving the memory problem. We humans have such an interesting and not well understood way of remembering. How does our brain differentiate between important and unimportant? Why do you remember what you remember and forget the things you forget? How do we get a machine to do that reliably?
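A toy version of that important/unimportant decision can be sketched as a context budgeter: pin the facts someone has marked important, then fill whatever budget remains with the most recent turns. This is a simplification I'm making for illustration - token counting is faked with word counts, where a real system would use the model's tokenizer, and the importance flag is supplied by hand rather than learned.

```python
# A toy context budgeter: pinned facts first, then the most recent turns
# that still fit. Word count stands in for a real tokenizer.

def build_context(history, budget_tokens):
    def cost(turn):
        return len(turn["text"].split())

    pinned = [t for t in history if t.get("important")]
    recent = [t for t in reversed(history) if not t.get("important")]

    used = sum(cost(t) for t in pinned)
    kept = []
    for turn in recent:                      # newest first
        if used + cost(turn) > budget_tokens:
            break                            # budget exhausted
        kept.append(turn)
        used += cost(turn)

    # Restore chronological order: pinned facts, then surviving recent turns.
    return pinned + list(reversed(kept))

history = [
    {"text": "Borrower prefers a 15 year term", "important": True},
    {"text": "Small talk about the weather"},
    {"text": "Discussed rate lock timing"},
    {"text": "Asked about closing costs"},
]
ctx = build_context(history, budget_tokens=12)
print([t["text"] for t in ctx])
```

Even this crude version surfaces the real question from the paragraph above: who decides what's important? Here a human flagged it; the hard problem is getting a machine to set that flag reliably.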
This is an old new problem. Congratulations to our colleagues at Promptfoo on their transition to OpenAI; I saw that coming years ago, truthfully. I only thought it would happen faster. Ian Webster and Michael D'Angelo deserve all the awesomeness that I hope will come to them for being out in front on this. I hope they will remain as accessible, approachable, and passionate as they have been to date. In any event, doubling down on agent evals is a thing I continue to explore using the open-source promptfoo framework.
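For anyone new to evals, the core loop is simple even though the tooling around it (promptfoo configures this declaratively, in YAML, with many more assertion types) is much richer. Here is the bare idea in Python, with a fake agent standing in for the system under test; none of this is promptfoo's actual API.

```python
# The skeleton of an agent eval: run each case, check an assertion,
# report a pass rate. The agent here is a stub, not a real model.

def fake_agent(question: str) -> str:
    # Stand-in for the agent under test; a real eval would call your system.
    return "A rate lock freezes your rate for a set period."

cases = [
    {"input": "What is a rate lock?", "must_contain": "rate"},
    {"input": "What is a rate lock?", "must_contain": "period"},
]

def run_evals(agent, cases):
    results = []
    for case in cases:
        output = agent(case["input"])
        results.append(case["must_contain"].lower() in output.lower())
    return sum(results) / len(results)   # pass rate, 0.0 to 1.0

print(run_evals(fake_agent, cases))   # 1.0 for this toy agent
```

The value of a framework like promptfoo is everything this sketch leaves out: model providers, richer assertions, red-teaming, and reporting - but if you understand this loop, you understand what an eval is.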
And last but not least, the intersection of deterministic and probabilistic: neurosymbolic AI. This is the idea that we use reasoning capabilities when it makes sense, and code-based solutions when it doesn't. Yes, we have an amazing magical hammer (large language models), but not every problem is a nail, amirite? Code-based solutions are still super useful. Maybe you need 100% accuracy 100% of the time. Guess what - probabilistic technology probably isn't the right solution. So I'm working on this too.
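The deterministic/probabilistic split can be as simple as a router: anything that must be exact goes to plain code, and only open-ended questions go to a model. A sketch, with a standard amortization formula on the deterministic side and a stubbed model call (not a real API) on the other:

```python
# A minimal neurosymbolic-style router: exact math goes to code,
# open-ended reasoning goes to a (stubbed) model call.

def compute_monthly_payment(principal, annual_rate, years):
    # Standard amortization formula: must be right 100% of the time,
    # so no model involved.
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def llm_answer(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"[model would reason about: {prompt}]"

def route(task):
    if task["kind"] == "payment":
        return compute_monthly_payment(task["principal"], task["rate"], task["years"])
    return llm_answer(task["prompt"])

exact = route({"kind": "payment", "principal": 300000, "rate": 0.065, "years": 30})
print(round(exact, 2))   # roughly $1,896/month for this example loan

open_ended = route({"kind": "qa", "prompt": "explain PMI to a first-time buyer"})
print(open_ended)
```

The design choice is the whole point: the payment calculation never touches the model, so it can never hallucinate, while the explanation task gets the flexibility a model provides.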
Man, now I'm tired. I wonder if a week is enough to put all these things together? Might need two. Anyway, hope to hear from some of you with your thoughts. We will be at NVIDIA GTC next week, come find me if you are too.
