Many of you who know me have heard me describe my mission – freeing the American people from the bondage of joyless mortgage technology. For many years, this has been my easy and immediate answer to the question of what I do and why I do it. There is entirely too little joy out there in the software world, and there are entirely too few days on the planet. I’ve had a lot of loss in life, as we all have, and as I age, it becomes that much more important to spend my days with people I love doing things I feel passionate about.

Much of the dominant mortgage technology out there is . . . of a much older vintage than the modern technology products that get so much attention from the product legends of our time. Servicing operators are among the most innovative users out there, simply because they have had to fashion a staggering number of workarounds required to serve customers and implement the mountain of compliance requirements inherent in the heavily regulated field of mortgage. The technology debt created by this fragile and complex ecosystem is felt most painfully by the operators, otherwise known as “The Business”.

I love my work. I love my clients, my teams, my partners. I love the problems I face every day with my customers. There is certainly no shortage of opportunity to fix things and make them better for users. That continues to be my focus. However, with the mass availability of generative artificial intelligence (genAI), I now also spend a small fraction of my time on my new passion project – putting joy and purpose back in software development with genAI based approaches.

As we’ve discussed before, this is both existential to my business and professionally fulfilling. I’ve had a lot of fun working with our small but mighty AI team on a product we are incubating at PhoenixTeam. Phoenix Burst started its life with a very practical problem: How can we rapidly plow through the massive body of regulatory requirements in mortgage, parse out the requirements, and automatically generate software requirements? Being in the mortgage technology business, this is a problem we face daily, across a diverse commercial and federal customer base. We have more than 100 people who tackle this challenge every day, so it seemed like a useful place to start. Enter Phoenix Burst.



We had many twists and turns along the way while testing our idea. None of our team members really had any experience developing around generative AI technologies, so we had a steep learning curve. We learned the fundamentals and managed to graduate from AI elementary school. We think we’ve stumbled onto something cool and, much more importantly, useful. We got some feedback along the way, although not nearly as much as we need, so we are planning to roll out the concept within the company. We have a captive audience of Phoenicians who need help, and we hope to get massive amounts of feedback and ideas from them.

One of the great things about what we are doing with genAI is that if we get it right, it will certainly work in mortgage for our persevering servicing operators, but our hypothesis is that it will also work in any domain space where people are trying to cut time to value in software development. And let’s face it, that’s pretty much every domain space. We are hoping to empower product development teams everywhere to make more valuable software faster.


So, what is next for us? We are thrilled to showcase the development version of Phoenix Burst at the Mortgage Industry Standards Maintenance Organization (MISMO) AI Summit in San Francisco this coming Monday, June 3, 2024. Not only are we “bursting” with excitement to demo our MVP, but we are even more excited about the feedback and ideas Burst will invite. From there, we will roll out a beta product to a small internal user group, followed by a generally available (GA) release to all of PhoenixTeam and our early adopter launch later this year. I hope to see you at the Summit, in person or virtually. Come find me; I will have stickers and Post-it notes for you!

Tune in for another episode of our Accessible AI Talks series, where we explore the power of AI in software development. In part 6, Tela Mathias, Chief Value Engineer and Managing Partner, is joined by tech visionary Brian Woodring for an in-depth discussion on solution design.

In this episode, we:

🔍 Explore the critical role of solution architecture in delivering scalable, high-quality software.

🤔 Analyze the current capabilities and limitations of generative AI in architecture design.

💡 Share real-world experiments and insights on how AI can revolutionize the architecture process.

Whether you’re a tech enthusiast, a software developer, or simply curious about the future of AI in the tech industry, this episode will be packed with valuable insights and practical takeaways. Make sure to hit that “attend” button or follow PhoenixTeam to stay updated! And make sure to tune in for our next talk where we’ll discuss gains across the software development lifecycle.

We’re excited to bring you Part 5 of our Accessible AI series, featuring our very own Tela Mathias and the brilliant Brian Woodring. This episode dives deep into the fascinating world of product design and how AI, especially generative AI, is poised to revolutionize the field.

In this episode, Tela and Brian explore the challenges and triumphs of integrating AI into product design. They share real-world experiments, like the fun yet insightful “peanut butter and jelly sandwich storyboard,” and discuss how AI can streamline design processes that traditionally take weeks, now accomplished in minutes.

Discover how AI can help create consistent personas, generate compelling storyboards, and even assist in high-fidelity design mockups. While the technology isn’t perfect yet, the potential is incredible, and we’re on the cusp of some game-changing advancements.

Tune in now to learn how AI can enhance your design process and keep your projects ahead of the curve!

#AccessibleAI #ProductDesign #GenerativeAI #TechInnovation #PhoenixTeam #FutureOfDesign

Value Engineer

By Tela Gallagher Mathias, Managing Partner and COO, PhoenixTeam

What is a value engineer?

In our Accessible AI Talks series with Brian Woodring, and in several of our recent articles, I’ve introduced the concept of the “value engineer”. We have seen some engagement around this idea, so I thought I would expand on it in this article.

It is really hard to make software. Too hard. At times, it is utterly thankless: teams work tirelessly to bring a product to market, only to have it miss the mark or miss the window. As one of my favorite clients reminds me so often – time is never our friend. Team environments can be brutal without the right people and the right leadership, and once trust is lost in a team, it is very difficult to regain. I’ve said to my kids that trust is lost in buckets and earned in drops. The same goes for teams and the people on them. Lack of trust leads to a toxic team environment where we end up in teams of one – each person or pocket of people working for themselves. Psychological safety erodes. Internally motivated people start to retreat into themselves.

Throw in the lack of a shared vision, and forget about it. Nothing valuable is getting shipped. And remember, that is an agile team’s single goal – to ship software its users perceive as valuable.

Now – it is not all bad out there. Despite my ostensible gloom and doom, I have the privilege of serving on some great teams – internal to Phoenix as well as my client teams. Teams that continuously adapt to change, that learn from the past and acknowledge it, that pivot when things are not working. These teams are doing great. However, it is no longer enough to learn from the past. It is now time to learn from the future because the future is happening now – right before our eyes – and it is squarely based on AI and generative AI.

The jobs in software development are changing. Somewhere between one and three years from now, we will see a rapid decline in the number of opportunities for product team members who have less than five years’ experience. And I’m talking about all the roles – from product owners to software engineers, and everything in between. Why? Because AI is eating everything, and AI-augmented software development is here. As Brian pointed out in one of our recent videos, there have been many advances in the developer automation space. There are plenty of copilot applications, and code generation is not as new as it seems. And that is all great. It is, however, predicated on having the right idea, the right value proposition, the right understanding, and the right language for the right audience.

Enter the value engineer. Value engineering is the process of removing the waste and manual work from software development and letting generative AI handle it. The value engineer sits on the product development loop and curates results. He, she, or they are equipped with a new augmented product development platform that empowers them to create every artifact across the software development lifecycle with a single click (or maybe a few, but you get the idea).

(Em)powered by Phoenix Burst

We believe that a new role will emerge, one that pairs directly with the customer and the software engineer to rapidly deliver a valuable product in about as much time as it takes today to achieve shared understanding (yes, that can take a while but, again, you get the point). If we can understand a domain, and rapidly identify its special parts – the parts AI cannot find for us, the parts that come from decades of experience – we can create a product to enable it.

The value engineer is a modern-day superhero for modern-day product development. He, she, or they are a master product owner, a skilled test automation engineer, a powerful guerrilla tester, a product designer, a software-shipping solution architect – all rolled into one. And this role is emerging now. At PhoenixTeam, we are developing a product that will enable everyone to be a value engineer, and we are using it now. First it was a custom generative pre-trained transformer (GPT) to test our ideas. Now it is an (in development) AI-powered software development platform. At least that is what we hope it will be. We are still figuring it out. Join us for a preview at the MISMO AI summit in June in Las Vegas where we will show the first real version of our idea – Phoenix Burst.

Accessible AI Talks | The Problem of Requirements

Just dropped Part 4 of our Accessible AI Talks series. Tela and Brian continue the conversation and dive into the nitty-gritty of requirements in product development.

They’re breaking it down for you, sharing insights from their diverse backgrounds. Brian, with his software engineering expertise, sheds light on the misconception of over-specifying requirements. Let’s face it, we don’t need to micromanage button placements! Tackling the essentials is key to effective communication and shared vision among teams.

We’re envisioning a future where diverse artifacts combine to paint the full picture. From user stories to AI-generated insights, the possibilities are endless!

And be sure to join us next time as we tackle the problem of product design in our quest to make AI accessible for all.

Let us know what you think in the comments and thanks for tuning in!

AI Talks - Part 3 - The Problem of Shared Understanding

Part 3 of our Accessible AI Talks series is here, brought to you by Tela Gallagher Mathias and Brian Woodring. Today, we’re diving into how AI is reshaping software development and zooming in on shared understanding.

Ever been on a project where things slip through the cracks? We all know the struggle. Shared vision is key, as Brian said, “It doesn’t begin until you have a shared vision of success.”

But why is it difficult to achieve and maintain this shared understanding? It all boils down to a couple of things: every human interprets things differently and the dreaded time crunch.

AI is the game-changer here, ensuring seamless integration for all team members, anytime. Imagine a world where AI handles the grunt work, allowing us to focus on building rockin’ new products!

Let us know what you think and stay tuned for our next segment on how AI is revolutionizing requirements. We’re adapting with fresh approaches beyond physical proximity. Dream big, innovate bigger!

Accessible AI Talks - Part 2

In this Talk we discuss how we can use AI and generative AI in the ‘imagine’ space of software development. We explore the ways AI will fundamentally change the way software development works and how it will expedite time-to-value. Tela Gallagher Mathias will be joined by our guest, Brian Woodring, a renowned figure in software engineering and technology leadership.

You’ll learn:

  • The critical role of the imagine space, where teams brainstorm ideas and determine how to add value swiftly.
  • Rapid Feedback and Idea Testing
  • The limitations of minimum viable products (MVPs) and the case for maximum viable products that deliver substantial value to users, prompting immediate adoption.
  • Challenges in Maintaining Shared Understanding
  • The potential of AI, particularly generative AI, to expedite idea generation, validation, and maintaining shared understanding among team members.

And tune in for our next talk where we’ll discuss finding and keeping a shared understanding, and the pivotal role generative AI plays in advancing software development.

Accessible AI Talks - Part 1

Tela Mathias recently kicked off PhoenixTeam’s Accessible AI series with Brian Woodring where they discussed how AI and generative AI can streamline time-to-value to deliver high quality software. In the interview, you’ll learn:

  • Their experiences in software development and the importance of learning from past failures and successes.
  • The significance of AI in accelerating software development, its impact on time-to-value, and its accessibility to a broader range of professionals.
  • The emergence of new roles, such as the “value engineer”.
  • Current experiments with generative AI.

Brian Woodring puts it best: “Technology is for everyone… the ability to deliver more value than ever before is incredibly exciting.”

Storyboards Matter: Three Insights for Using AI to Accelerate Product Design and Delivery

By Jeremy Romano, Managing Practitioner | Experience Design & Product Management

In software development, storyboarding is the essential blueprint for crafting exceptional user experiences. The product design phase and its artifacts are often the most misunderstood, underutilized, and notably absent components of the product and software development processes. Software development is more effective when guided by a design thinking methodology: Empathize, Define, Ideate, Prototype, Test, Implement, and Iterate. Products achieve optimal outcomes when they incorporate a user-centered process: Understand, Explore, Design, Evaluate, Refine, Implement, and Maintain. Despite these well-established frameworks, most organizations are constrained by resources and time, cutting corners and making compromises that often lead to losing sight of the end-user’s needs and the business goals they aim to achieve. Even the most well-intentioned and successful teams face these struggles.

Over the past three decades, I have collaborated with top-tier brands to create award-winning products. I’ve witnessed product design, a core element of delivering great software that delights customers, become significantly less important to organizations. Now, with AI and Generative AI consuming the market, we believe a window of opportunity is wide open for organizations to embrace the power of AI for product design, specifically, storyboarding.

Why did we choose storyboards for our proof of concept (POC)?

We wanted to prove that one of the most valuable tools for building impactful products can be created faster than ever before and with a high degree of quality and accuracy. While most teams lack the skills, time, or resources to effectively produce high-quality design artifacts, using AI to storyboard will transform how organizations visualize and communicate the product vision from the user’s perspective. Taking it one step further and harkening back to Brian Woodring’s statement that “technology is for everyone”, we want even the most artistically inexperienced people to be able to imagine and create impactful visualizations.

Our hypothesis is that if AI can enable anyone to create visually compelling narratives, without the associated cost and time challenges and with just the click of a button, the industry will see faster delivery, greater adoption, and happier, more productive end-users.

Our AI Storyboarding Journey

Generative AI is powerful and challenging because it always generates a response. The scenes in our initial image generations were “awesome” but kind of useless, characterized by strangely abstract elements, wild metaphors, and a variety of artistic styles—from watercolor to oil painting, and from photorealistic to Disney-like. While the raw output from the language model was captivating, it often strayed from our intended narrative, and we watched as the age, race, and gender of our characters changed unexpectedly and with no prompt.

Insight #1: Developing our Cast of Characters

The most impactful learning from our POC was in character design. Drawing from cinema and traditional character design, we embraced the concept of using props. The language model required prompts with precise characteristics, so for us, ‘props’ included specifying ages, hair color, and even learning about various mustaches, beards, and hairstyles, such as the imperial mustache, curtain beard, afro puffs, and even the differences between a pixie cut, a pageboy, a bob, and a long bob. I also discovered the vast array of glasses styles available—rectangle, square, round, cat-eye, brow-line, and aviator (see Image #1). It was this level of detail that allowed us to refine and solidify our cast of characters in a more uniform fashion (see image #2).

Image #1

Once we fine-tuned our approach, we were able to nail down PhoenixTeam’s cast of characters, each with a unique persona and personality.

Image #2
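The “props” approach is essentially prompt templating: pin every visual attribute down explicitly so the image model has nothing left to improvise. A minimal sketch of the idea, with hypothetical prop names and values (not our actual character sheet), might look like this:

```python
# Hypothetical character "props" sheet. Every visual attribute the model might
# otherwise randomize (age, hair, glasses, wardrobe) is pinned down explicitly.
CHARACTER_PROPS = {
    "name": "Ava",
    "age": "mid-40s",
    "hair": "silver pixie cut",
    "glasses": "round tortoiseshell",
    "wardrobe": "navy blazer over a white shirt",
}

def character_prompt(props: dict) -> str:
    """Serialize the props into a stable, repeatable description for the image model."""
    details = ", ".join(
        f"{key.replace('_', ' ')}: {value}"
        for key, value in props.items()
        if key != "name"
    )
    return f"{props['name']}, a recurring character ({details})"

print(character_prompt(CHARACTER_PROPS))
```

Because every generation reuses the same serialized description, the character is far more likely to stay visually stable from one panel to the next.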

Insight #2: Defining the Scene Style

The second key insight was the importance of limiting the range of artistic styles for the AI. By narrowing down the color palette, specifying line weight and type, and providing a reference style for it to emulate, along with a clear focus for the “camera,” we achieved more consistent and repeatable results.

Insight #3: Maintaining a Coherent Story Flow from Start to Finish

The third breakthrough involved enabling our Value Engineers to select one of our characters and a series of user stories, which, when combined with our backend integration of artistic guidelines and camera settings, empowered the AI to autonomously generate consistent and engaging narratives. This ‘secret sauce’ has been a game-changer in maintaining a coherent story flow from the first panel to the last (see image #3), allowing us to weave seamless narratives that compellingly drive the story forward. We are now able to generate storyboards directly from user stories and acceptance criteria without any human intervention.

Image #3
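Putting the three insights together, the flow from user stories to panel prompts can be sketched roughly as follows. The style guidelines, character description, and user stories here are illustrative assumptions, not the actual “secret sauce”:

```python
# Illustrative pipeline: each user story becomes one storyboard panel, and every
# panel prompt repeats the same character description and style constraints so
# the model produces consistent, repeatable results across the whole story.
STYLE_GUIDELINES = (
    "flat vector illustration, limited palette of navy, coral, and cream, "
    "medium line weight, eye-level medium shot"
)

def storyboard_prompts(character: str, user_stories: list[str]) -> list[str]:
    """Combine a fixed character and style with each story to form panel prompts."""
    total = len(user_stories)
    return [
        f"Panel {panel} of {total}: {character} {story}. Style: {STYLE_GUIDELINES}."
        for panel, story in enumerate(user_stories, start=1)
    ]

panels = storyboard_prompts(
    "Ava, a loan servicing operator with a silver pixie cut,",
    [
        "reviews a new compliance requirement",
        "drafts user stories with her team",
        "demos the finished feature to a delighted customer",
    ],
)
for p in panels:
    print(p)
```

Holding the character and style constant while varying only the story text is what keeps the narrative coherent from the first panel to the last.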

AI, for now, is not a magical solution that solves all problems; its role as an incredibly powerful assistant, or, as John Comiskey puts it, “your ultimate wingman”, invites us to become active and engaged curators. Generative AI possesses the transformative power to break down longstanding barriers, enabling creativity and collaboration at unprecedented scale and speed. We can generate and share ideas more rapidly, pushing the boundaries of innovation and design thinking to make better software faster. This technology allows us to shape a future where AI and human creativity unite to create more meaningful and impactful design outcomes. I invite you to join this exciting journey, and experiment with AI in your projects. Comment below to share your experiences and insights with storyboarding or other image-generating results.

By Kenny Akridge, Managing Director at PhoenixTeam

If you have been following this series and have already experimented with one or more Large Language Models (LLMs) such as ChatGPT, you are undoubtedly convinced that Gen AI is completely revolutionary and that it is here to stay. If not, I encourage you to visit our earlier posts and try out some of the simple examples. Before we explore the rules, let’s get a clear understanding of what Gen AI is.

Generative AI (aka Gen AI or GenAI): a type of artificial intelligence that can create new content – pictures [like the graphic above], music, or writing – all on its own. It learns how to do this by studying lots of examples, and then it uses what it has learned to make new things that have never been made before. Think of it like a really smart robot that can draw, write stories, or compose music just by understanding patterns from what it has seen before.

Full transparency – ChatGPT generated that definition. I think it is mostly a good definition though. That brings us to the first rule.

Rule Number One of Gen AI: “AI should almost always do most of the work.”

Why would I write a definition for Gen AI from scratch when the AI can do it for me? I have applied this rule so often for a little over a year now that it has become second nature. To a large degree, it has replaced Google for me. Need to write a complex Excel formula? Gen AI can do that. Need to OCR something? Gen AI can do that. Want to find patterns in a series of data? You guessed it – Gen AI can do that too. There really isn’t much that it can’t do. I find myself constantly telling people around me to “just let the AI do it.”

So, this is amazing, right? I mean, ChatGPT can complete most of the work for my tasks. You might even wonder if this article was composed by AI. Sadly, ChatGPT did not write this article. It would have saved me a lot of time for sure. Unfortunately, this article falls outside of the “almost always” condition of the rule. When I need something to be genuine, to be authentic, to be delivered in my own voice, there is no substitute (at least not yet). That said, I did consult with ChatGPT on many aspects of this article. “Is ‘almost always’ a clause?” ChatGPT said no, it’s a stipulation or a condition.

OK, so why does Gen AI only do most of the work? Enter the second rule.

Rule Number Two of Gen AI: “You almost always need a human for the last mile.”

I borrowed the term “last mile” from the transportation industry. In short, it means “the final leg of delivering goods or services.” Despite all its powerful abilities, Gen AI doesn’t always produce high-quality outputs with acceptable accuracy. Sometimes it guesses and guesses wrong. Sometimes it hallucinates – tells fiction as if it were truth. That’s why most work product needs to be reviewed and polished by a human.

Let’s expand on the human element in AI-assisted application development. Building on these rules, we hypothesize that an artificial intelligence-based collaboration framework, enhanced by both machine and human content curation, will dramatically reduce the time to value in software development. By integrating human insight and feedback directly into the AI development and deployment cycles, we can create high quality software much more rapidly. We will dive deeper into this process in future articles.
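As a rough illustration of that hypothesis, a human-in-the-loop cycle can be as simple as: the AI drafts, a human curator either accepts the draft or returns feedback, and the feedback feeds the next draft. The functions below are stand-ins for a real model call and a real reviewer, not an actual integration:

```python
def generate_draft(prompt: str, feedback: list[str]) -> str:
    # Stand-in for an LLM call; a real implementation would send the prompt
    # plus the accumulated feedback to a model.
    notes = "; ".join(feedback) if feedback else "none"
    return f"Draft for '{prompt}' (incorporating feedback: {notes})"

def curate(prompt: str, review, max_rounds: int = 3) -> str:
    """Loop until the human reviewer accepts the draft or the rounds run out."""
    feedback: list[str] = []
    draft = generate_draft(prompt, feedback)
    for _ in range(max_rounds):
        verdict = review(draft)      # human judgment: None means "accept as-is"
        if verdict is None:
            return draft
        feedback.append(verdict)     # fold the correction into the next pass
        draft = generate_draft(prompt, feedback)
    return draft

# Example reviewer that accepts once the draft reflects one correction.
def reviewer(draft: str):
    return None if "plain language" in draft else "use plain language"

final = curate("loan payoff requirements", reviewer)
print(final)
```

The point of the sketch is the shape of the loop, not the stubs: the human handles the last mile, and each round of human curation makes the next AI pass better.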

You’ve made it this far… so make sure to hit that follow button to stay connected. We’re here to share the latest and greatest insights on all things AI including upcoming interviews from AI experts.

Check out previous articles from our AI series: