Insights

Perspectives on AI, technology, and compliance transformation to help you move faster, smarter, and with more clarity.

featured insights

January 28, 2026

Getting Started with Vibe Coding in Four Steps

We are gearing up for the innovation track at MBA Servicing, and one of the labs we are running is on vibe coding (you can sign up for the lab here; it's free to MBA registrants). There are plenty of things to worry about with generative AI (safety matters!), but today I thought I'd keep it light. Vibe coding has changed everything for me, and caused me to unthink and rethink the entire paradigm of discovery in product development.

Yuchen Jin is the PhD cofounder and CTO of Hyperbolic. He works on machine learning and distributed systems. He's a total baller and hilarious to boot. I'm a superfan.

Vibe coding is the process of using a software development agent to bring ideas to life through natural language expression and iteration. You express your ideas in plain English, and the machine creates a representation of your idea (software). You don't have to be an engineer, and you don't (directly) write any code. The term was originally coined by Andrej Karpathy in a now-infamous X post. Andrej is one of the original founders of OpenAI and a well-known AI researcher and educator. I'm a superfan.

Before we get into the process I used to get started, let's address the controversy. This is a hotly debated topic, with superfans and haters alike. The main controversy in the AI community comes down to:

  1. Security. The code "works" and you can ship it (publish it to the internet, for example), but if you are not careful, it will come with all the classic security vulnerabilities that professional software engineers know to avoid. Traditional software development processes typically control for this.
  2. Scalability. If you can't explain how it works, it's much harder to debug, maintain, and enhance, especially in a team environment. It's difficult to vibe code as a team, although I have had some success with this in Replit. I do agree that it's not currently a scalable practice.
  3. Maintainability. Without guardrails and good development practices, you are likely to create a spaghetti mess of code that is overly complex, inconsistent, duplicative, and very hard to refactor. It can create a ton of technical debt.
  4. Homogenization. The things we build with developer agents can have a very similar feel and tend to work similarly, especially as you are just getting started. There is concern that we are all just converging to a sea of sloppy AI-generated apps, diminishing the value of software and losing the true craft of software engineering.

These are all definitely real risks, but the massive-scale success of Anthropic's Claude Code shows that it can be done, although their practices probably fall more in line with AI-assisted software engineering.

Personally, I value the software engineering profession MORE than I ever have now that I am so much closer to the craft. I don't see the craft being diminished, I see it being elevated.

As a personal aside, I am the CTO at PhoenixTeam, and I suppose I'm a new kind of CTO as I do not have a classic software engineering background. (I usually make a joke about vector embeddings here, about how I am more "developery" than some and less than others, but you have to know a bit for this to be funny.) Vibe coding has allowed me to get so much closer to the practice of software engineering. Without genAI, and what I've done over the past two years with genAI, I definitely could not call myself a CTO with any confidence.

Tela and others as two-dimensional vector representations. Not to scale. I am pretty much nowhere near these guys from a product perspective, but you get the idea.

But I digress and have said too much about vector embeddings; I hope you have hung on to this point. On to the four steps. I started my vibe coding journey relatively early, in late 2024 (how is that early, amirite?). And once I started I was completely hooked. Things that have historically taken months and multiple roles, I could do in a weekend. All alone. That was it for me. I started with Replit and then moved on to Claude Code, which is where I am now.

These are the four steps I took to get started and how I advise others who are just getting started with vibe coding.

Step 1: Be Amazed

Find a tool. You have to just get in there and do stuff. There are a lot of choices: Lovable, Bolt, Replit, and others. I personally like Replit, but it's what I know, so I have that bias. You can start for free.

  1. Prompting proficiency is a suggested prerequisite for effective vibe coding, but certainly not required.
  2. Start small. You can use the zero-shot prompt below to get started on a free Replit account.
  3. Think big – what’s something you always wanted to build but never had the time/team?
  4. Literally no engineering is required to get started, though systems thinking helps.

Build a simple mortgage calculator. Inputs: home price, down payment, interest rate (%), loan term (years). On clicking Calculate, show loan amount, monthly payment, and total interest — all formatted as USD. Test it to make sure it works and that we can save results to a history. Pre-populate it with realistic values.

The above prompt should render your application in about 2.5 minutes.
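If you're curious what the agent has to get right under the hood, the heart of that calculator is the standard amortization formula. Here's a minimal Python sketch of the math (the variable names are mine, and the app Replit generates for you will look different):

```python
def mortgage_summary(home_price: float, down_payment: float,
                     annual_rate_pct: float, term_years: int) -> dict:
    """Standard amortization math behind the calculator prompt above."""
    principal = home_price - down_payment      # loan amount
    r = annual_rate_pct / 100 / 12             # monthly interest rate
    n = term_years * 12                        # number of monthly payments
    if r == 0:                                 # edge case: 0% financing
        monthly = principal / n
    else:
        monthly = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
    total_interest = monthly * n - principal
    return {
        "loan_amount": round(principal, 2),
        "monthly_payment": round(monthly, 2),
        "total_interest": round(total_interest, 2),
    }

# Realistic pre-populated values, as the prompt asks for
print(mortgage_summary(400_000, 80_000, 6.5, 30))
# Roughly: $320,000 loan, ~$2,023/month, ~$408,000 total interest
```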

Step 2: Struggle

The zero-shot prompt above should work right out of the gate, but it may not. It could have bugs. This is part of the struggle. Don't give up here! Especially as you get more proficient and try harder things, it won't work right away.

  1. These tools are constantly improving. You will hit the “jagged frontier”.
  2. When in doubt, start a new chat/conversation.
  3. Ask the agent to take stock of where it is, familiarize itself with the app, and reset context.
  4. Take a breath. Start over. You got this!
Something like this should pop out. Then you can "vibe": make changes and debug.

Step 3: Iterate

Then you just jam out. In the above example, I made the button pink, which you can see below. It took about 58 seconds on a relatively slow internet connection. How cool is that?

  1. As you begin, you will not know precisely what you want.
  2. Don’t be afraid to start over completely multiple times. The tooling is nascent.
  3. Take note of what you learn so you can use it for your “clear thinking” version.
Sadly, you won't get rich with this idea, but hey, it took less than five minutes!

Step 4: Arrive at Clear(er) Thinking

As you work through the struggle, you will discover so much more about your idea. This is the fun and also depressing part. You will discover you need a login, you will figure out that you should have started with data requirements. You will start over. You will get angry. Keep going.

  1. Data requirements will become clear and central.
  2. The flaws and learnings of prior iterations become requirements.
  3. Once you know what you really want, your prompt and products will improve dramatically.
  4. Continually refine and keep track of your prompt.

Here's a more complex prompt you can try. In this example, I load my logos and the Fannie Mae Seller/Servicer guide as reference for retrieval augmented generation (RAG). I've built this app at least ten times, which is how I arrived at this prompt. This one probably won't work in one turn, but you can debug it within about 20 minutes. It will take about 15 minutes to render, so go get your coffee.

I want to create a super simple mortgage lead tracking website for a mortgage loan originator, this is for a demo I am doing for PhoenixTeam mortgage so use the logo and style it in their brand guidelines. Make sure the logo has good contrast, perhaps a white background for the app. Use a modern, clean font. Give the application a modern feel. Don’t use a lot of colors, keep it simple. Let’s include some test leads but make sure we can delete the leads and put in real ones. The second feature I want is a chatbot that will allow the user to search the Fannie Mae Seller guide (which I have attached here) using OpenAI integration with version 4o and retrieval augmented generation. The chat bot should NEVER depart from the context and ONLY refer to the guide through RAG, but it should not be overly restrictive and should answer questions about Fannie Mae when asked. Make sure you set up a user ID so the chatbot feature can work. The chatbot window should wrap so the full text of what the user is typing can be seen. Make sure the bot window is prominent and allows me to see all the text, scrolling when necessary. Make sure it has some starter questions I can select. Create detailed answers, not simply summaries. Use a generous response length. The final feature I want is user authentication with their replit account. Set me up as the initial user with the initial test leads so authentication will work.

Voila!
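If you want to peek at what the RAG piece of that prompt is actually doing, here's a deliberately minimal Python sketch, assuming the Seller guide has already been split into text chunks. The chunking, variable names, and prompt wording here are my own illustrative assumptions; the Replit integration wires all of this up for you.

```python
from math import sqrt
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assume the Seller guide has already been split into text passages
guide_chunks = ["...guide passage 1...", "...guide passage 2...", "...guide passage 3..."]

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

chunk_vectors = embed(guide_chunks)

def answer(question: str, top_k: int = 3) -> str:
    q_vec = embed([question])[0]
    # Retrieve the guide passages most similar to the question
    ranked = sorted(zip(guide_chunks, chunk_vectors),
                    key=lambda pair: cosine(q_vec, pair[1]), reverse=True)
    context = "\n\n".join(chunk for chunk, _ in ranked[:top_k])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided guide excerpts. "
                        "Give detailed answers, not just summaries."},
            {"role": "user",
             "content": f"Guide excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What are the reserve requirements for a second home?"))
```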

And that's kind of it to get started. From here you may want a finer grain of control, in which case I cannot enthuse enough about Claude Code. I am in it every single day. If you go with Claude Code, you will need a way to deploy, in which case I recommend Railway. You will probably also need a database; you can use Supabase for that. Be prepared for many more struggles. I'll cover advanced vibe coding in another article. Happy vibing!


By Tela G. Mathias, Chief Nerd and Mad Scientist, PhoenixTeam

featured insights

January 22, 2026

Why AI Adoption Is a Human Opportunity in Addition to a Technical One

As featured in Chrisman Commentary, Daily Mortgage News

By Tela Mathias, Chief Nerd and Mad Scientist

Generative artificial intelligence (genAI) did not arrive quietly in the mortgage industry. It burst onto the scene with accelerated timelines and forced uncomfortable questions about relevance, speed, and survival. For many leaders, AI remains a buzzword or a vague mandate handed down from the boardroom. For others, it has become an existential inflection point. The difference between those two perspectives is not technology. It is mindset, and a gut-based belief that radical change is upon us.

In late 2023, as generative AI tools matured rapidly, the realization set in that traditional approaches to work were no longer necessary. Tasks that once required armies of analysts, spreadsheets stitched together by hand, and months of effort could suddenly be decomposed, analyzed, and rebuilt in days. In mortgage servicing alone, years of regulatory guides, handbooks, and policy documents had historically demanded painstaking manual effort. Generative AI offered a fundamentally different path forward.

That moment forced a decision that many technology-driven businesses now face: wait for the market to define the future, or actively shape it. The metaphor often used is simple but stark. You can eat the bear, or the bear can eat you. In an industry built on legacy systems, regulatory pressure, and deep technical debt, standing still is no longer a neutral choice.

Being an AI builder today is not about chasing every new tool or trend. It is about living at the intersection of stability and experimentation. On one side sits the established generative AI stack, the techniques and architectures that are already reliable enough for production use. On the other side is the bleeding edge, where tools change weekly and experimentation is constant.

For those deeply embedded in this work, the day often starts early, or at least mine does. The quiet hours of the morning provide space for creative thinking, learning, and trial and error. This is where new models are tested, workflows are prototyped, and failures are reframed as data points rather than mistakes. It is also where a crucial mindset takes hold: the beginner’s mindset.

The beginner’s mindset is essential because generative AI does not reward rigid thinking. It rewards curiosity, play, and iteration. Much like a child approaching art without fear of mistakes, effective AI builders treat failures as “happy accidents,” learning what does not work in order to uncover what might.

One of the most profound shifts generative AI introduces is scale. AI does not replace talent; it amplifies it. It takes natural ability and multiplies it by removing friction, repetitive work, and manual constraints. For the first time, individuals can move from concept to deployed prototype in hours rather than months. In our lifetime, entire products will be imagined, built, tested, and deployed securely at scale by a single person in a weekend.

This is why AI feels almost disorienting at first. Long-standing barriers between idea and execution have collapsed. What once was only possible through a carefully coordinated “process” now simply requires clarity of thought and the ability to communicate a goal in natural language. English, not code, is the fastest-growing programming language in the world.

Yet this power comes with a caveat. AI amplifies humanity. It does not possess judgment, innovate, or aspire. Those remain uniquely human characteristics, especially in an industry like mortgage lending, where decisions affect people’s lives and financial futures.

For a novice, generative AI is not much more than a research assistant. For those with creative mastery, it’s an exponential force multiplier.

Despite the promise of “anything you can imagine, you can build,” AI still operates along what researchers call the jagged frontier. Some complex tasks are suddenly trivial, while others remain stubbornly out of reach. Nowhere is this more apparent than in regulated industries like mortgage lending.

The hardest problem is not ideation or prototyping. It is the messy middle: moving from a compelling prototype to a secure, compliant, scalable production system that integrates with decades of legacy technology. Mortgage systems carry the weight of post-2008 regulation, layered applications, and fragmented workflows. AI-native solutions struggle to thread through that complexity cleanly.

Closing the messy middle is the next frontier. Progress is happening, but it will take time, partnerships, and new ways of thinking about policy, data, and integration. That is because mortgage AI today is not one thing. It is a spectrum.

On one end sits traditional or narrow AI, technologies the industry has used for years: optical character recognition, rules-based underwriting engines, document classification, machine learning models, and natural language processing for call centers. These tools quietly power much of the modern mortgage process.

Generative AI is the newer layer. It includes conversational agents, workflow orchestration, automated summarization, development acceleration, and intelligent decision support. In 2025, much of its adoption followed a familiar pattern: fear of missing out. Organizations rushed to deploy table-stakes tools simply to keep pace with competitors.

That phase is ending. In 2026, differentiation will not come from using the same copilots and summarization tools as everyone else. It will come from creative, thoughtful applications of AI that are rooted in real business understanding. It will come when organizations figure out how to unleash human creativity and weave it through the fabric of both the organization and the people within it.

The biggest opportunity to accelerate AI adoption is not technology. It is people.

Across organizations, there is a widening gap between what AI is capable of and what teams are actually doing with it. Fear plays a central role. Fear of job loss. Fear of compliance missteps. Fear of the unknown. Until those fears are addressed directly, no amount of strategy decks or vendor demos will unlock real transformation. It is the people that have to be unlocked, so that their innately human qualities can be applied in previously unimaginable ways to the most confounding problems.

Attendance is not application. Sitting through an AI presentation does not change how work gets done. It certainly does not tap into that beginner’s mindset or inspire fresh perspective. Change happens only when individuals see how AI makes their own work better, faster, and more meaningful. Organizations become 10x when their people do.

This is especially true in mortgage lending, where responsibility to borrowers, regulators, and the public is profound. There are no shortcuts. AI must be deployed responsibly, transparently, and in partnership with the humans who use it.

From a bean counting perspective, AI adds the most value today in highly repeatable, measurable tasks. These are the same tasks historically targeted for outsourcing, which raises a natural question about return on investment. The answer lies in differentiation. We count two types of beans in mortgage – heads and dollars. So long as that is what defines “value”, we will continue to look for ROI in automation when what we should really be looking at is reimagination and human spirit amplification. We don’t have beans for that.

If everyone automates the same workflows, no one gains a competitive edge. Real value comes from combining human creativity with AI capability to rethink processes entirely, not just accelerate them. The goal is not to replace judgment, but to free it.

Leaders should resist the temptation to outsource thinking to machines. First drafts, strategic ideas, and original insight still belong to humans. AI is a collaborator, not an author of vision.

For organizations wondering how to keep up without burning out, the advice is grounded and practical. First, imagine your business three years from now. Ask whether it is still relevant and what roles will look like in that future. Second, have honest conversations with people about fear, uncertainty, and opportunity. Without that, transformation stalls.

The technology is advancing faster than any organization can absorb. That is not a failure. It is a reminder that leadership, culture, and trust determine whether AI becomes a force for growth or a source of anxiety.

For the first time since the introduction of Desktop Underwriter (DU) in the 1990s, the mortgage industry has a chance to meaningfully confront its technical debt. Generative AI, deployed responsibly, offers a path to simplify complexity rather than layer on more tools. That opportunity will not realize itself automatically.

The bear is here. Whether the industry eats it or is eaten will depend less on algorithms and more on people willing to create space for beginning again.

featured insights

January 20, 2026

Leading When Digital is Cheap and AI Slop is Everywhere

It's hard to create impact in a sea of slop. (Coined by tech journalist Casey Newton, AI slop is the term for the flood of low-quality, AI-generated content created rapidly to attract eyeballs and sell or promote things.) One of the effects of this AI slop era is to diminish the value of online and offline content. What we see and read online is now subject to a new critical lens of "is this real?" and "was this generated by AI?". This effect is carrying over to offline content as well - if everyone is an expert, who and what can we rely on?

Last week we closed out the third AI exchange in our series on AI's impact on the workforce. One of the questions I wanted to answer was how to lead when digital is cheap and AI slop is everywhere. What's different about an AI-fluent leader? How do we find and amplify the things that matter for teams of humans and machines, working together? How do we help a group see clearly in a sea of bland and remixed messages?

Big caveat - I can only speak from a place of my own lived experience so this writing is from that perspective. Different leaders do different things and can be equally or more successful, I just offer my $0.02 for what it's worth.

Leadership

I went down a frustrating leadership rabbit hole last week preparing my opening remarks for the AI exchange. I figured I would find a quick, easy-to-lean-on definition of leadership. After about 30 minutes I settled on a definition of leadership as helping a group to see clearly, choose a direction, and move together, while owning the consequences. Leadership is a quality a person can have, a thing a person or people can do, and a process.

AI-Fluent Leadership

Unlike genAI-native technology (which I'll shorten to AI-native for this article), we don't have truly "AI-native" leaders yet since the AI-natives are only about three years old right now. I'll define AI-fluent leadership as:

Helping a group see clearly when AI (and AI slop) is an integral part of everything we do, choose a direction that will be informed by both humans and machines, and move together with people that have widely variable levels of AI-fluency, all while owning the consequences of what humans and machines do.

AI-Fluent Leadership Actions

I am, or am at least trying to be, an AI-fluent leader. Maybe not a good one, but that is a subject for another day. So what has changed about what I do? The first and most obvious change in my actions is that I use genAI all the time, every single day, constantly. I use it to create new things, accelerate my work, and amplify my creative impact. I have become supercharged; I can do all the things I used to be able to do at an even more vigorous level.

I also build things all the time. At least once a day I am in Claude Code building something new, or tweaking something I built already. I also use the things I build. Not all the things, sometimes I build things that turn out not to be useful, but this has been perhaps the most profound change in my way of working. Later in my career, especially in the last ten years or so, I became a pretty prolific creator. GenAI has now unlocked the inventor and individual builder in me.

For better or for worse - I write all my own content. And it's really time-consuming. I use writing to crystallize perspectives, consider alternatives, and learn. Inevitably I have to research, pull threads, maybe try something new - that's just my process. I think it's important to have our unique voice, and have the voice come through. I don't even use genAI to edit, as I have come to believe the occasional typo is a good thing; it shows I'm not a bot.

AI-Fluent Leadership Perspectives

I now believe that anything we can imagine, we can make. We can have an idea, and bring that idea to life in a day. The only limits are the limitations imposed by physical reality, and even that is changing with accelerated computing and embodied AI (think robotics). Even if I can't do it, could a robot do it? As Jensen says, when we take the effective cost of something to zero or approximately zero, previously unimaginable things become possible.

I also believe that quality matters way more than it used to, and this is because digital has become so cheap. Creating "good enough" is now trivial, which means "good enough" isn't good enough any more. It has to be great. A side effect of this is the importance of offline results. Actual physical things. Yes, the world has gone paperless, and I'm going paperfull. I'm like the salmon fighting the current. I want things to matter more, and I believe a part of that is taking the time and putting in the effort for a quality physical thing. I want to create feelings and memories that stand out.

My nine-year-old daughter believes there are no mistakes in art, just happy accidents. This to me is an absolutely perfect manifestation of the beginner's mindset, and I love it. I believe in happy accidents, there is always learning in failure, and sometimes failing is succeeding. Not always, but a lot of the time.

AI-Fluent Leadership Expectations

I expect more from myself as a leader and from my leaders. I expect us to be able to move faster. I expect us to be able to scale out quickly. It's probably not a realistic expectation, but I expect leaders to be able to move at something at least approximating the pace of genAI. We talk about being on genAI time - where a day is a week, a week is a month, and a month is a year. I expect us to move with that kind of intensity.

Ethan Mollick talks about the jagged frontier, and I expect AI-fluent leaders to push the jagged frontier all the time. One of the most frustrating challenges in this AI future is bumping up against that boundary - that thing that should be possible but just isn't. Even more frustrating is the fact that the impossible could become possible at any point, and so we have to keep trying. What is possible isn't settled - just give it a week.

This AI future is hard and it moves fast. I do expect all this to take time, which I realize is counter to my first point (but as I often say - two opposing things can be true at the same time). I expect that we are all works in progress and we have to give ourselves grace. We have to give others grace as well. We have to keep trying and learning from those happy accidents.

AI-Fluent Qualities

Bob Sternfels, the global managing partner of McKinsey, described three qualities that are required to succeed in the AI future. I'm not a huge McKinsey fan, but I did find his remarks insightful; they were really good. He talks about what the models can't do, and how these uniquely human characteristics are the differentiating qualities that have already increased in significance since the mass availability of genAI.

The first is aspiration, setting the right ambitions and getting others to believe. The second is judgement, being able to know what is right and what is wrong when there are no easy answers. And third, he describes true creativity as contrasted with statistical remixing.

AI-Fluent Leadership Worries

One of the most significant pitfalls of AI-fluent leadership is the risk of creating a two-tiered workforce. I don't have an answer here, and I'm not even sure I have the right questions. As a society, do we have an obligation to create pathways for workers who are not AI-fluent and do not intend to become AI-fluent? Is the answer to that question different for each of us as leaders in our organizations?

As a leader at my company and in my industry, for at least the next ten to 20 years, we will have these pathways. We have clients that are not AI-fluent, and really haven't started their journey. We have and will continue to meet them where they are. We also have clients that are trailblazing in the AI future. We have and will continue to meet them where they are as well. And for all clients, regardless of where they are on their AI journey, our goal is to deliver outcomes for them. That hasn't changed.

I worry about the entry level professional roles in my industry, and roles for people entering our workforce from non-traditional paths. We simply won't have the traditional starter jobs anymore. I turned 50 last weekend, and the entry level jobs from 25 years ago for people like me don't or won't exist anymore. I have four kids - ages eight, nine, 11, and 22. I believe that we need to both prepare them for the path as well as prepare the path for them. My job as a parent is to help them find their way, help them to be happy, kind, and financially independent (when possible). My job as an employer is to create entry level opportunities, even if it's a little fuzzy what those opportunities are.

By Tela G. Mathias, Chief Nerd and Mad Scientist at PhoenixTeam

featured insights

January 6, 2026

How to Get Agents with Agency

What even is agency, amirite? To me, it has now become this word that I've heard so many times it has lost its meaning as a real word. (Apparently this phenomenon is called “verbal satiation”.) But I digress. Let's start this off with the definition of agency, which I had to look up to feel confident.

To have agency is to take action and make choices that influence outcomes. To say “something has agency” means it can decide between options (even if the options are limited), act intentionally, and cause effects in the world.

This can be contrasted with just waiting for things to happen. In my classes I use the metaphor of being proactive (agentic) as opposed to reactive (assistive). Responding to a request as opposed to proactively pursuing an objective. Having spent a lot of time in the last quarter building and deploying agents, here are some key concepts that may be useful for you on your journey to create agents that actually have agency.

Agency = Context + Planning + Memory + Tools + Action + Adaptivity

Context

Context engineering is the discipline of deliberately designing, assembling, and managing everything an AI model sees before it responds. You can think of it as the fuller realization of what started out as prompt engineering. Prompt engineering is what you say, context engineering is what the model knows, remembers, assumes, is constrained by, and is allowed to do at any moment. Yeah, it's kind of a Big Deal.

Consider an agent whose scope is to correct misapplied payments and waive late fees, when appropriate. "Context" in this use case might include:

  1. Information about the loan from the system or systems that have it, including prior payment data from the servicing system.
  2. Information the customer might provide about what happened expressed as a complaint or inquiry.
  3. The specific circumstances around the one payment in question (i.e. "the facts").
  4. The rules (constraints) about what has to happen to a payment when it's received.
  5. The last few things that happened and the next few things that are expected to happen.
  6. The specific span of control the agent has (what is it actually allowed to do).

All of this has to be engineered and cared for. The systems have to be known and able to be integrated with. The data has to be high quality and consistent, and conditions for bad data understood and designed into the workflow. From a testing perspective, we have to know the conditions we should expect, and also plan for the conditions we failed to anticipate.

Let's talk about data. The data informs the prompts informs the data. Rich data requires rich prompts, and when the prompts don't "match" the data, we can get really bad results. When I set up a test harness for my agent, I have to have a good set of realistic data because, I can assure you, the agent will find all the little warts and holes and the agent's plan will have that bad context baked right in. I find that I engineer the data over time as my understanding evolves. Then I revise the prompts. Then I learn new things about the data. Then I revise the prompts.... You get the idea.

Agents do stupid shit sometimes. Context engineering helps to improve the judgement of the agent to do less stupid shit less frequently.
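To make "context" concrete, here's a minimal Python sketch of what an assembled context object for the payment-correction agent might look like. The structure, field names, and sample data are illustrative only, not a real servicing integration:

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Everything the payment-correction agent sees before it responds."""
    loan: dict                 # loan facts from the servicing system(s)
    payment_history: list      # prior payments, including the one in question
    customer_statement: str    # the complaint or inquiry in the customer's own words
    payment_facts: dict        # the specific circumstances of the disputed payment
    business_rules: list       # constraints on how a received payment must be applied
    recent_events: list        # what just happened and what is expected to happen next
    allowed_actions: list = field(default_factory=lambda: [
        "reapply_payment", "recommend_fee_waiver", "escalate_to_human",
    ])                         # the agent's span of control

# Illustrative data; in practice every field below is an engineered, tested integration
ctx = AgentContext(
    loan={"loan_id": "123456", "type": "conventional", "status": "current"},
    payment_history=[{"date": "2025-12-01", "amount": 1850.00, "applied_to": "escrow"}],
    customer_statement="My December payment went to escrow instead of principal and interest.",
    payment_facts={"date": "2025-12-01", "amount": 1850.00, "expected_split": "P&I"},
    business_rules=["Apply payments to interest, then principal, then escrow, then fees."],
    recent_events=["late_fee_assessed_2025-12-16"],
)
```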

Planning

On its surface, this is relatively straightforward - use a large language model (LLM) to make a plan to solve the problem. There is another aspect to planning, however, which is the actual orchestration of the workflow, knowing when to call tools, knowing when to call deterministic functions, and knowing the scope. In the context of our example, planning might include:

  1. What information is required before we can make the specific interaction plan to correct a misapplied payment and handle the associated late fee.
  2. An overall flow for handling this specific type of problem - classify the problem as payment related, verify the identity of the specific borrower, gather additional payment history data.
  3. The specific plan for the actual interaction - determine correct application of funds, apply funds, determine what to do with late fee, make recommendation to human in the loop.
  4. Take a specific set of action steps to resolve the problem.

So there are multiple aspects to planning - workflow orchestration (which may be a combination of deterministic and probabilistic steps), the actual recommended plan, the validated plan, and the resolution plan.
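Here's a rough sketch of that mix, reusing the ctx object from the Context sketch above. The step names and gates are illustrative; in a real agent the draft plan comes from an LLM call with the context packed into the prompt:

```python
# Deterministic whitelist of steps the orchestrator knows how to execute
KNOWN_STEPS = {
    "gather_payment_history", "classify_problem", "verify_identity",
    "determine_correct_application", "apply_funds",
    "evaluate_late_fee_waiver", "recommend_to_hitl",
}

def make_plan(ctx) -> list[str]:
    """Planning = deterministic gates + a probabilistic draft + validation."""
    # 1. Deterministic prerequisite: do we have what we need before planning at all?
    if not ctx.payment_history:
        return ["gather_payment_history"]

    # 2. Probabilistic step: an LLM would draft this from the context; stubbed here
    #    as the flow described above.
    draft = ["classify_problem", "verify_identity",
             "determine_correct_application", "apply_funds",
             "evaluate_late_fee_waiver", "recommend_to_hitl"]

    # 3. Validation: drop anything outside the steps the orchestrator actually supports.
    return [step for step in draft if step in KNOWN_STEPS]

plan = make_plan(ctx)   # the validated plan, ready for execution and HITL review
```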

Memory

It's funny to come from a field where we have sought after statelessness for so long. Not to get too technical, as I am already way over my skis talking about things I don't understand well, but service-based architectures (the wave before now) were all about resiliency and moving away from monoliths. The idea is that one part of the system can break, and the rest of the system stays intact.

A stateless service tends to scale well, fail predictably, be easier to understand, and minimize coordination complexity. This is helpful for us mere mortals. So where did the state actually go? It went into databases, event logs, workflow engines, and... humans. We had state, it was just distributed. Now we have this whole new way of making systems (the agentic way), and the key distinction is understanding and evolving state. And this is one of the critical parts of memory.

Back to our example of the agent that corrects misapplied payments and waives late fees. What does memory mean? It means:

  1. We know what came before, the payments that were made, the fact that there was or was not a late fee applied.
  2. We know what happened in the initial interaction, why the customer contacted us, the channel, how they "usually" interact, whether they were frustrated.
  3. We understand the original plan and the adapted plan based on the feedback from the human in the loop. Perhaps the original plan waived the late fee and the HITL overrode that and ultimately the agent did not waive the fee.
  4. The next time this consumer contacts or there is another "related" situation, the agent knows about the misapplied payment that happened in this event context.

So memory can be short term or long term, inside the interaction or apart from it, and it can apply to the future or not. For those of you using Claude Code, this is a critical part of why Claude "compacts conversations". It takes a set of context that is too large or inefficient to pass each time and keep "in memory", and compacts it for continued use. That compacted context is not typically available in the next session unless it is stored elsewhere, which is a conversation for another article.
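A minimal sketch of short-term versus long-term memory for our agent might look like this (the structure is illustrative, not any particular framework's API):

```python
from datetime import datetime, timezone

class AgentMemory:
    """Short-term memory lives inside the interaction; long-term memory outlives it."""

    def __init__(self):
        self.short_term: list[dict] = []      # this interaction: plan, overrides, tone
        self.long_term: dict[str, list] = {}  # keyed by loan, across interactions

    def remember_turn(self, event: dict):
        self.short_term.append({**event, "at": datetime.now(timezone.utc).isoformat()})

    def compact(self, loan_id: str):
        """Roughly what 'compacting the conversation' means: summarize and persist."""
        summary = {
            "events": [e["kind"] for e in self.short_term],
            "outcome": next((e["detail"] for e in self.short_term
                             if e["kind"] == "final_decision"), None),
        }
        self.long_term.setdefault(loan_id, []).append(summary)
        self.short_term.clear()   # the raw turns are gone unless stored elsewhere

memory = AgentMemory()
memory.remember_turn({"kind": "plan_proposed", "detail": "waive late fee"})
memory.remember_turn({"kind": "hitl_override", "detail": "do not waive fee"})
memory.remember_turn({"kind": "final_decision", "detail": "fee not waived"})
memory.compact("123456")   # the next interaction on this loan can recall the override
```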

Tools

Looking at Claude Code, there are hundreds if not thousands of tools. Claude gives you clues to this when it is thinking, as it notes the number of tools it used in a particular step. I've started to pay attention to this. Tools are just what they sound like - a tool is any capability the agent can call to do something in the world beyond just generating text.

In our example context, tools might be:

  1. Get payment history.
  2. Generate summary for human in the loop.
  3. Waive late fee.
  4. Store document.
  5. Create customer notification.

There can be lots and lots of tools, but keep in mind that the more tools there are, the more ways there are that things can go sideways with your agent. Yes, Claude Code has hundreds or thousands of tools. Anthropic's overall post-money valuation was reported at $183B in September 2025. It's safe to say they have way more money and talent to develop Claude Code than we do for whatever agents we are creating. Stay humble, guys; keep it small and iterate.
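In practice, a tool is usually declared to the model as a name, a description, and a typed input schema. Here's a sketch of the five tools above in that style; the schema format is simplified and illustrative, not any particular vendor's spec:

```python
TOOLS = [
    {
        "name": "get_payment_history",
        "description": "Fetch the last N payments for a loan from the servicing system.",
        "parameters": {"loan_id": "string", "months": "integer"},
    },
    {
        "name": "generate_hitl_summary",
        "description": "Summarize the situation and recommendation for the human in the loop.",
        "parameters": {"loan_id": "string", "recommendation": "string"},
    },
    {
        "name": "waive_late_fee",
        "description": "Waive a specific late fee. Requires prior human approval.",
        "parameters": {"loan_id": "string", "fee_id": "string", "approved_by": "string"},
    },
    {
        "name": "store_document",
        "description": "Persist a generated document to the document repository.",
        "parameters": {"loan_id": "string", "document": "string"},
    },
    {
        "name": "create_customer_notification",
        "description": "Send the borrower a notification about the resolution.",
        "parameters": {"loan_id": "string", "message": "string"},
    },
]
# The shorter this list, the fewer ways things can go sideways.
```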

Adaptivity

There are two types of adaptation we are talking about here - adapting to environmental changes (new data being a big one) and adapting to the human in the loop (HITL). I might also throw in adapting to predefined stopping points - sometimes called "stop hooks" - although that's not really adaptation, that's just executing a plan when a set of plan conditions are met. In any event, adapting to the environment and new or changed context is a critical part of agency. Something changes, and I need to change - I had a plan, now I need to make a new plan. On my own. Without being asked. Or, I need to stop and check in to make a new plan (this is a stop hook).

And then there is human-in-the-loop (HITL) adaptivity. Making a plan, having tools, and executing a plan is really meaningless if it's the wrong plan. In fact, it's worse than meaningless, it can be actually destructive and create harm for humans. Yes we want to provide good context engineering to prevent the wrong plan from being executed, and also, especially in mortgage, we will continue to have humans in the loop for many use cases for many years.

In our example, we might adapt to (see the sketch after this list):

  1. New information - another payment comes in, a check bounces, we receive a bankruptcy notification.
  2. Changed information - turns out the loan type is actually different than what we thought, changing the rules for what we have to do.
  3. Human policy latitude - maybe the system says we shouldn't waive the late fee but in fact the human authority makes a different decision. We will, therefore, need to take a different set of actions (make and execute a new plan).
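Here's a rough sketch of that adaptation loop, reusing the context, planning, and memory pieces from the earlier sketches. The executor and event kinds are illustrative stubs:

```python
def execute(step: str, ctx) -> dict:
    """Stub executor: a real agent dispatches to a tool or a deterministic function."""
    return {"kind": "step_completed", "detail": step}

def run(ctx, memory):
    plan = make_plan(ctx)                    # from the Planning sketch
    for step in plan:
        event = execute(step, ctx)
        memory.remember_turn(event)

        if event["kind"] == "new_information":
            # A new payment, a bounced check, a bankruptcy notice: replan on updated context
            ctx.recent_events.append(event["detail"])
            return run(ctx, memory)

        if event["kind"] == "hitl_override":
            # The human decided differently: fold that into context and make a new plan
            ctx.business_rules.append(event["detail"])
            return run(ctx, memory)

    memory.compact(ctx.loan["loan_id"])      # stop hook: plan conditions met, wrap up

run(ctx, memory)   # ctx and memory come from the earlier sketches
```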

And it goes on and on. I find the best way to really see all this in action is to, well, make it happen. I encourage each of you to create your own agent, or reach out to me and I'd be happy to help you get started on your journey. And naturally, feel free to work with various partners in the industry (including us and the Phoenix Burst team) who already have agents available to do useful things in mortgage.

If you are looking for education, the Mortgage Bankers Association training calendar for AI is already up on their website, and we will be focusing a lot more on agents this year. Hope to see you in a class sometime soon. Happy building!

Artificial Intelligence

Getting Started with Vibe Coding in Four Steps

January 28, 2026

Artificial Intelligence

Why AI Adoption Is a Human Opportunity in Addition to a Technical One

January 22, 2026

Artificial Intelligence

Leading When Digital is Cheap and AI Slop is Everywhere

January 20, 2026

Artificial Intelligence

How to Get Agents with Agency

January 6, 2026

Artificial Intelligence

Why 2026 could be the year mortgage AI delivers

January 1, 2026

Artificial Intelligence

Freddie Mac Bulletin and Executive Order Implications on Mortgage AI

December 13, 2025

Artificial Intelligence

Eleven Reasons Why the Mortgage Industry Isn't Further Along with GenAI Adoption

December 8, 2025

Artificial Intelligence

Looking for ROI in All the Wrong Places

November 17, 2025

Artificial Intelligence

What does the infamous "MIT study" really mean to us in mortgage?

October 28, 2025

Our Company

Blue Phoenix Awarded $215 Million VA Loan Guaranty DevSecOps Contract | A PhoenixTeam and Blue Bay Mentor-Protégé Joint Venture

October 3, 2025

Artificial Intelligence
Our Thoughts

Ten Not Very Easy Steps to Achieve AI Workforce Transformation

September 29, 2025

Our Thoughts

MISMO Fall Summit Recap: Our Take on the Summit, AI, and the Road Ahead

September 25, 2025


Towards Determinism in Generative AI-Based Mortgage Use Cases

September 22, 2025

Our Company

PhoenixTeam Achieves SOC 2 Compliance, Strengthening Security and Trust in Its Phoenix Burst GenAI Platform

September 9, 2025

Artificial Intelligence

My Journey with Claude Code and Running Llama 70b on My Mac Pro

September 4, 2025

Our Company

Tela Mathias recognized as a 2025 HousingWire Vanguard

September 2, 2025

Our Thoughts
Artificial Intelligence

Top Ten Insights on GenAI in Mortgage

August 25, 2025

Our Company

PhoenixTeam Awarded $49M Contract to Modernize USDA’s Guaranteed Underwriting System, Expanding Rural Homeownership

August 18, 2025


Case Study: The Messy and Arduous Reality of Workforce Upskilling for the AI Future

July 28, 2025

Artificial Intelligence

What is uniquely human? AI impacts on the workforce.

July 7, 2025

Artificial Intelligence

251-Page Compliance Change in Hours, Not Months

June 20, 2025

Our Thoughts
Artificial Intelligence

The Medley of Misfits – Reflections from Day 2 at the AI Engineer World’s Fair

June 5, 2025

Our Thoughts
Artificial Intelligence

AI Engineer World’s Fair: What We’re Seeing

June 4, 2025

Artificial Intelligence

Departing from Determinism and into the Stochastic Mindset

May 30, 2025

Artificial Intelligence

The Agents Are Here and They Are Coming for our Kids

May 6, 2025

Our Company

Built for What’s Next: Welcome to the New PhoenixTeam Website

April 29, 2025

Artificial Intelligence
Our Company

Phoenix Burst Honored with MortgagePoint Tech Excellence Award for GenAI Compliance Innovation

April 7, 2025

Artificial Intelligence
Our Thoughts

From Trolling to Subscribing – An Alternative to Compliance Insanity

March 10, 2025

Artificial Intelligence
Our Thoughts

Supercharge LLM Performance with Prompt Chaining

February 24, 2025

Artificial Intelligence
Our Thoughts

From Program Management to Program Efficiency and Innovation

February 20, 2025

Artificial Intelligence
Our Thoughts

The Evolution of Service Level Agreements: Why AI Evaluations Matter in Mortgage

February 12, 2025

Artificial Intelligence
Our Thoughts

An Impassioned Plea for AI-Ready Mortgage Policy Data

February 3, 2025

Artificial Intelligence

The Role of Mortgage Regulators in Generative AI

January 28, 2025


PhoenixTeam Announces Partnership with Mortgage Bankers Association to Offer GenAI Education for Mortgage Professionals

January 13, 2025

Artificial Intelligence
Our Thoughts

Calculating AI ROI in Mortgage: Strategies for Success

December 16, 2024

New Contract
Our Company

PhoenixTeam Awarded $5 Million Contract for HUD Section 3 Reporting System Modernization

November 25, 2024

Artificial Intelligence
Our Thoughts

The History of Artificial Intelligence in Mortgage

November 21, 2024

Artificial Intelligence
Our Thoughts

Adoption of GenAI is Outpacing the Internet and the Personal Computer

November 8, 2024

Artificial Intelligence
Our Company
Phoenix Burst

PhoenixTeam Launches Phoenix Burst — A Generative AI Platform to Accelerate the Product and Change Management Lifecycle

October 22, 2024

Artificial Intelligence
Our Thoughts

Application of Large Language Model (LLM) Guardrails in Mortgage

October 17, 2024

Artificial Intelligence

Where is GenAI Going in Mortgage?

September 9, 2024

Recognitions
Our Company

Tela Mathias Wins HousingWire's Vanguard Award

September 4, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 9 | The Lindsay Bennett Test: A Live Assessment of Phoenix Burst with a Product Leader

July 19, 2024

Artificial Intelligence
Our Thoughts

A Practical Approach to AI in Mortgage

July 18, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 8 | Inside Phoenix Burst: Transforming Software Development with AI

July 11, 2024

Accessible AI Talks
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 7 | Leading a Gen AI Team to Production

July 10, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

AI Reflections After Getting Lots of Feedback

June 21, 2024

Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc.’s Annual List of Best Workplaces for 2024

June 18, 2024

Our Thoughts

MISMO Spring Summit 2024: Key Insights and Takeaways

June 18, 2024

Our Community

Join Us in Making a Difference: Hope Starts with a Home Charity Drive

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts

Freeing the American People from the Bondage of Joyless Mortgage Technology

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 6 | The Role of AI in Solution Design

May 25, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 5 | The Problem of Product Design

May 22, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 4 | The Problem of Requirements

May 8, 2024

Artificial Intelligence
Phoenix Burst

What is a value engineer?

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 3 | Problem of Shared Understanding

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 2 | The Imagine Space and More

May 6, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 1 | Introduction with Guest: Brian Woodring

May 6, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Storyboards Matter: Three Insights for Using AI to Accelerate Product Design and Delivery

May 3, 2024

Our Company

PhoenixTeam Designated as One of MISMO's First Certified Consultants, Shaping the Future of Mortgage Industry Standards

April 24, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Two Rules of Gen AI

April 18, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Three Simple Steps to Kickstart your AI Journey Today

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Peanut Butter and Jelly Sandwich AI Experiment

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Starting Our AI Journey

April 10, 2024

Our Company

PhoenixTeam CEO and COO Make Inc.’s 2024 Female Founders List

April 10, 2024

Our Company

PhoenixTeam has earned its spot on Inc. 5000 Regionals: Mid-Atlantic

March 4, 2024

Our Company
Our Thoughts

Key Insights and Takeaways from MISMO Winter Summit 2024

January 29, 2024

Our Company
Our Thoughts

Leader of the Year Interview with Jacki Frazer

January 18, 2024

Our Company
Our Thoughts

Why Phoenix - Shawn Burke

December 28, 2023

Agile
Our Thoughts

PhoenixTeam at Agile + DevOps East 2023: Key Insights and Takeaways

November 17, 2023

Spotlights
Our Company
Our Thoughts

Why Phoenix — Vicki Withrow

November 3, 2023

Our Thoughts

Breaking Barriers: The Extraordinary Woman Who Redefined Workplace Equality

October 20, 2023

Our Company

PhoenixTeam: In the Arena

October 12, 2023

Agile
Our Company

Defining the Word “Done” The Phoenix Way

September 14, 2023

Product
Our Company

PhoenixTeam Approved as DOD SkillBridge Partner to Help Active-Duty Military Service Members Re-Enter Civilian Workforce

August 18, 2023

Recognitions
Our Company

PhoenixTeam featured on Inc. 5000 List of America’s Fastest-Growing Private Companies for the 4th Consecutive Year!

August 15, 2023

Agile
Our Company

Military Veteran’s Transition from Active Duty to Civilian Life as Lean-Agile Methodologist and Coach

August 1, 2023

Product
Recognitions
Our Company
Our Work

Veteran Founded Technology Venture Blue Phoenix Expands Reach with GSA IT-70 Award

July 24, 2023

New Contract
Our Company
Our Work

PhoenixTeam Begins New Partnership with HUD for FHA Catalyst

June 9, 2023

Our Company

PhoenixTeam is Excited to Announce Becky Griswold as its Newest Partner

June 1, 2023

Product
Our Work

PhoenixTeam proves value realization begins with product discovery

March 30, 2023

New Contract
Our Company
Our Work

PhoenixTeam Strengthens Partnership with U.S. Department of Agriculture

March 29, 2023

Recognitions
Our Company

PhoenixTeam Featured on 2023 Inc. Regionals Mid-Atlantic for Third Consecutive Year

February 28, 2023

Recognitions
Our Company

PhoenixTeam Announces 2022 Annual Company Award Winners!

January 30, 2023

Recognitions
Our Community

PhoenixTeam Ranks #15 on Washington Business Journal’s 2022 Fastest Growing Companies

October 21, 2022

Recognitions
Our Community

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces in Technology™

September 7, 2022

Recognitions
Our Company

PhoenixTeam Featured on Inc. 5000 List of America’s Fastest-Growing Private Companies

August 16, 2022

Recognitions
Our Company

Fortune and Great Place to Work® Rank PhoenixTeam #53 2022 Best Medium Workplaces™

August 8, 2022

Recognitions
Our Company

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces for Millennials™

July 18, 2022

Recognitions
Our Company

PhoenixTeam Ranks Among Highest-scoring Businesses on Inc. Magazine’s Annual List of Best Workplaces for 2nd Consecutive Year

May 10, 2022

Recognitions
Our Company

PhoenixTeam Featured on 2022 Inc. Regionals Mid-Atlantic for Second Consecutive Year

March 15, 2022

Our Community

PhoenixTeam Goes Pink for Breast Cancer Awareness Month

October 15, 2021

Our Company

PhoenixTeam shows up strong at the MISMO Fall 2021 Summit

October 5, 2021

Recognitions
Our Community

PhoenixTeam makes the 2021 Inc. 5000 list for 2nd consecutive year!

August 17, 2021

Salesforce
Our Work

PhoenixTeam is Now a Salesforce Partner!

June 30, 2021

Our Company

Introducing the newly designed PhoenixTeam Website

June 29, 2021

Recognitions
Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc. Magazine's Annual List of Best Workplaces for 2021!

May 12, 2021

Our Company
Our Thoughts

The Importance of Continuous Learning for Team Members

April 20, 2021

Salesforce
Our Work

PhoenixTeam’s Three Pillars to Successfully Implementing Salesforce

April 6, 2021


