The Medley of Misfits – Reflections from Day 2 at the AI Engineer World’s Fair
By Tela Mathias
I love being at events like this – I feel like I have met “my people”. We are this weird, eclectic, smart, funny, and super enthusiastic bunch of nerds. Just really nerdy. And I love it. We are just a medley of misfits. So many great things at Day 2 – but the two major highlights were at the beginning and the end. Simon Willison is a hilariously competent and compelling speaker, and definitely part of our medley of misfits. And closing the day with Greg Brockman was an absolute inspiration. The theme of yesterday was “the power of optimism”, but maybe that’s just because I’m an optimistic person.
Spark to System: Building the Open Agentic Web with Asha Sharma
Wish I knew his name, but the demo guy at Microsoft was ON POINT. He showed what you can do with GitHub Copilot and I have to say – wow. The intersection of spaces, Jira, agents, and agent task assignment really told a good story. Imagine that an agent is just another team member, logged in and working like you or me. Imagine that you could assign a task to an agent – readme file generation was the example they used – and then the agent does the work and updates the work item. Now imagine that you need a team member to build a machine learning model; yeah, you can assign that to a coworker agent too.
This definitely made me want to make sure that we are making maximum use of the full set of Microsoft capabilities at the team level. It was not, however, enough to make me move from AWS Bedrock to Azure AI Foundry. Maybe I’ll regret this decision at some point but I’m sticking with Bedrock for now. You’re welcome, Amazon Web Services.
State of Startups and AI 2025 with Sarah Guo
Talk about overcoming adversity, Sarah Guo is a presentation boss. None of the technology was working, AV was a hot mess, and honestly, they barely figured it out. She used the time with mastery, and I was riveted by her take.
Sarah is the founder of Conviction, an AI venture capital company. She was speaking on the state of startups in 2025 and providing practical advice. One of the things they are very interested in and encourage founders to think about (you know how VCs love their analogies) is: “Cursor for [_X_]”. In our case it would be “Cursor for Compliance” – that’s not where we are yet, but we will be.
One of the reasons Cursor has been so successful is because it was built by engineers for engineers. And engineers know engineers. She put the cherry on top of what we have been hearing for the past six months: content is king. Knowing your customer, knowing your domain space, really building what you know for a market you know – that continues to be the moat.
Domain is king. Needs no further explanation.
Show up informed. Have a product that has an opinion. Have a product that reflects what we know, what our customers know.
Requiring a prompt is a bug, not a feature. Loved this one, and it validated what we have done. The idea that a user has to prompt the system to do what they need is a bug – the system should just do what you need. And present thoughtful outputs at the appropriate times to the right people, in an excellent UX. I mean, it’s easy really.
The moat is execution. Just out execute everybody. Move fast. Continue to move fast. Get to market. (I’m here for it, sister!)
Copilots are still underrated and viable solutions. This was kind of a relief, honestly. I straddle federal, commercial mortgage, and Silicon Valley. I see so many different stages on the adoption curve, and different stages of technology delivery maturity. It is really hard to go from the AI future to the mortgage now. I struggle with what we can/should actually do with all this light speed tech, and this was a helpful sentiment.
BE IRONMAN. Think of your solution as a supercharged companion. Some things Tony Stark has to do, some things the suit does autonomously. Over time the suit does more and more and Tony does less, but also does different things. Be Ironman.
I loved the idea that building the Ironman suit is the path of least frustration. Start with what you know, you can always make it better. Sarah Guo is a BOSS. Loved her.
2025 in LLMs so Far with Simon Willison
I had, sadly, never heard of Simon Willison. He was fall-out-of-your-chair funny. I love his personal eval, “produce an SVG of a pelican riding a bike”. This reminded me of Ethan Mollick and his “otter taking a plane ride” eval. So Simon was there to talk about the past year in LLMs, but there was too much, so he skinnied his scope down to the past six months.
The reason he uses the pelican riding the bike is because (a) he’s tired of the other benchmarks and has lost trust in them, and (b) it’s a great test: producing the SVG requires technical prowess, the pelican has very difficult anatomical structures that are incompatible with riding a bicycle, and the bicycle seems simple but is actually a challenge for humans to illustrate due to its interesting geometry.
Some of the key points made clear in the past six months:
Local is good now.
Prices of good models have absolutely plummeted, which is a good thing for us. We will continue to see a crushing pace on the releases of new models and model upgrades. The basic message here is that there was so much improvement that you really do have to pay attention.
Humorous discussion of the infamous OpenAI sycophancy bug. Evidently the prompts that were used to fix it leaked, so you can see the actual before-and-after documentation – fascinating. That one was hilarious.
Somber noting of the Grok White Genocide horror show. Enough said there. I just can’t with Elon.
Evidently Claude 4 will “rat you out to the feds” for certain prompts and content generation. I really had no idea, but I guess it makes sense. I’m not sure how I feel about this.
The Impact of AI on Consulting
This one was near and dear to my consulting roots. I had never heard of the company, but I really resonated with what they were talking about. They talk about the staffing models – traditional pyramid v. inverted pyramid (relying on junior staff to do most of the work v. relying on senior staff to do most of the work). And their hypothesis on the future for professional services is the inverted pyramid in the center, with traditional pyramids of agents at each side. This makes a lot of sense to me. Not sure I would have illustrated it this way but intuitively, it’s the right move.
I was surprised that they did not discuss voice agents more specifically; I think the opportunity there is massive. Imagine if you could interview an entire company in, like, two hours. Yeah, voice agents. I’m here for it.
Windsurf Everywhere, Doing Everything, All at Once
I’m so glad that we pivoted away from automating software development because man, Windsurf has pretty much crushed that. This one was personal: when we started this AI journey in December of 2023, I was hell-bent on “push button, get software”. Many of our early research meetings were about this idea of automating software development. As we learned more, and really listened to our industry feedback, we realized we needed to be a lot more specific and much closer to the market – hence mortgage compliance change management, of which software development is a key part.
And that was a really good pivot. Windsurf is going to absolutely crush this space. Their vision is ridiculously bold – to be everywhere, doing everything, all at once. And I believe them when they talk about how they intend to do it. I think they will crush everyone, they certainly would have crushed my original product concept. Phew – dodged a bullet there.
Reflections from Greg Brockman, President of OpenAI
Greg really gave Jensen a run for his money on being my idol and personal hero (#jensenisstillthegoat). I’m slightly embarrassed to admit I did not know him before this closing keynote. Well, I certainly won’t forget him. He was absolutely inspiring. This will be the subject of a separate article. Too short on time to do it justice.
The key theme of day one at the AI Engineer World’s Fair was evaluations (and agents, of course), which I am up to my eyeballs in as we prepare for enterprise adoption of Phoenix Burst so the timing was apropos. As a refresher, evaluations are:
The systematic assessment and measurement of the performance of LLMs and their applications.
A series of tests and metrics meticulously crafted to judge the “production-readiness” of your application.
Crucial instruments that offer deep insight into how your application interacts with user inputs and real-world data.
Robust evaluation means ensuring that your application not only adheres to technical specifications but also resonates with user expectations and proves its worth in practical scenarios.
Think of evals as the way we test genAI systems to ensure they are actually good. Many of you have heard me say this before – but every genAI demo is amazing. As long as the vibe is good, you can generally have a great demo. But the real value is in the content, and if the content is not “better” then really you don’t have much. Evals are how you measure better. I’ve written before about evals as the new SLAs, and that continues to be true. It’s not real until you understand the evals.
Beyond Benchmarking – Strategies for Evaluating LLMs in Production (Taylor Smith, AI Developer Advocate, Red Hat)
Great session with Taylor Smith. I’m already at least an amateur when it comes to evals, certainly no Eugene Yan but I can hold my own. This 80-minute workshop included at least 45 minutes of a quasi-doomed-from-the-start hands-on activity. As usual, as soon as the Jupyter notebooks come out, I know it’s time for me to step out. But the content and presenter were great. I hadn’t thought about it this way before, but she placed benchmarking within the context of the superset of model evaluations. Meaning model benchmarks are just a specialized instance of a form of evaluations.
We homed in on two major forms of evaluation – system performance and model performance, both of which are equally important. The latter is primarily focused on content, and the former is around the AI-flavored traditional system performance metrics (latency, throughput, cost, scalability). She placed these within an “evaluation pyramid”.
Tela’s advice – it’s easy to get stuck in eval purgatory, going round and round and round forever and getting nowhere. Just start. It’s easy to start (and much harder to scale) but there’s a framework. Here I am talking specifically about domain-specific content evals; these are the differentiated aspects of your application – your moat.
Vibe check – everyone starts here, this is why most genAI demos are great. You get a good vibe from the content.
Human evaluations – this is where the hard work really sets in, and you need content subject matter experts for this. This is a painstakingly precise activity to create or acquire “known good” baseline content and compare model results to the baseline. We use these results to identify bugs, prompt optimization needs, and any fundamental flaws (which, of course, you hope you won’t find).
System evaluations – once you have your human evals, you move to automating them in the system so they can be rerun any time you make a change. This is really the new way of performing regression testing (a minimal sketch of what this can look like follows below).
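Here is that sketch in Python – the file name (baseline.jsonl), the pipeline stub, and the crude token-overlap scorer are all hypothetical stand-ins for whatever judge you actually use, so treat this as shape, not substance:

```python
import json

def similarity(a: str, b: str) -> float:
    """Crude token-overlap scorer. A real eval would use an LLM-as-judge,
    embedding similarity, or domain-specific checks instead."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def run_pipeline(prompt: str) -> str:
    """Stand-in for your actual genAI application."""
    return "Escrow analysis must be completed annually."

# baseline.jsonl (hypothetical file) holds the SME-approved "known good"
# content produced during human evals: {"prompt": ..., "expected": ...}
with open("baseline.jsonl") as f:
    baseline = [json.loads(line) for line in f]

failures = [
    case for case in baseline
    if similarity(run_pipeline(case["prompt"]), case["expected"]) < 0.8
]
print(f"{len(failures)} of {len(baseline)} eval cases regressed")
```

The point is simply that the SME-built baseline becomes a rerunnable test suite – change a prompt or a model, rerun, and see what regressed.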
Trust me, you need evals. Evals are the only reliable way to get to production. And we are up to our eyeballs in them. So much gratitude for Vicki Lowe Withrow and her amazing curation team. But I digress. Two great examples of why you need evals – Stable Diffusion and the infamous Google AI glue pizza incident.
Bloomberg found that “the world, according to Stable Diffusion is run by white male CEOs, women are rarely doctors, lawyers, or judges, men with dark skin commit crimes, while women with dark skin flip burgers.” Yikes. Come on guys, we can do better.
Why does this happen?
arXiv (pronounced “archive”) is where most AI papers are published first, sometimes months before they get peer reviewed and fully vetted. A paper published by researchers at Rice University (Professor Richard Baraniuk) called Self-Consuming Generative Models Go MAD pointed out that “our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD), making analogy to mad cow disease.”
An autophagous loop (also called a self-consuming loop) is when AI models are trained on data generated by previous AI models, creating a feedback cycle where the model essentially "eats its own tail".
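A toy illustration of the effect – nothing like the paper’s actual experiments, just a Gaussian repeatedly fit to, and resampled from, its own output with no fresh data entering the loop:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 200)  # generation 0: "fresh real data"

for gen in range(1, 301):
    mu, sigma = samples.mean(), samples.std()
    # Each new generation is fit to, and sampled from, the previous
    # generation's synthetic output only -- an autophagous loop.
    samples = rng.normal(mu, sigma, 200)
    if gen % 60 == 0:
        print(f"generation {gen}: std = {sigma:.3f}")
```

With no real data anchoring the loop, the spread (diversity) drifts rather than holding steady, and over enough generations it decays – the toy version of MAD.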
Building Multimodal AI Agents from Scratch
I randomly met Papanii Okai standing in the hallway waiting for this one to start, what a small world. I mean “of all the gin joints in all the towns in all the world…”. Was great to see a fellow industry zealot out in the wild. Jupyter notebooks also made a significant appearance at this one, which is where I made my exit. The opening was a lot of primer material on agents, but we did get into patterns for creating multimodal agents, which we have done, but I hadn’t put it together that that’s what we did.
Quick refresher – there are four main components of an AI agent: perception (how it gets information), planning and reasoning, tools (external interfaces, actions), and memory.
Perception – the mechanism used to gather information from its environment. Think text, images, speech, a multimodal mix, and physical sensor data.
Planning and reasoning – the process of figuring out how to solve the problem and then creating a task based on that understanding. Plans can be made with or without feedback. Without feedback, a zero-shot or few-shot approach is used; with feedback, frameworks like ReAct (Reasoning and Acting) come in – a prompting paradigm that combines reasoning and action-taking in language models, creating a loop.
Tools – these are typically functions that come with two types of instructions: when they should be called and the arguments for their use. The tool needs to be defined in the tool schema (see the sketch after this list).
Memory – enables the agent to remember, reason, and learn from past interactions. Memory is a complex topic; for agents it takes two forms – short-term memory (think one conversation) and long-term memory (think multiple conversations over time). It is memory that enables personalization.
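To make the four components concrete, here’s a minimal, self-contained Python sketch. The “planner” is a toy rule standing in for an LLM’s reasoning step, and get_rate is a hypothetical tool – shape, not substance:

```python
import json

def get_rate(state: str) -> str:
    # Hypothetical tool; a real one would hit an API or database.
    return json.dumps({"state": state, "rate_pct": 6.875})

# Each tool carries a schema: when to call it, and what arguments it takes.
TOOLS = {
    "get_rate": {
        "fn": get_rate,
        "description": "Look up the current mortgage rate for a US state.",
        "parameters": {"state": "two-letter state code"},
    }
}

memory: list[dict] = []  # short-term memory: the running conversation

def plan(observation: str) -> dict:
    """Toy planner standing in for the LLM's reasoning step (ReAct-style:
    reason over the observation, then choose an action)."""
    if "rate" in observation.lower():
        return {"action": "get_rate", "args": {"state": "VA"}}
    return {"action": "respond", "args": {"text": "No tool needed."}}

def run_agent(user_input: str) -> str:
    memory.append({"role": "user", "content": user_input})   # perception
    decision = plan(user_input)                              # planning/reasoning
    if decision["action"] in TOOLS:                          # tool use
        result = TOOLS[decision["action"]]["fn"](**decision["args"])
        memory.append({"role": "tool", "content": result})   # memory
        return f"Tool result: {result}"
    return decision["args"]["text"]

print(run_agent("What's the mortgage rate in Virginia?"))
```

In a real agent, plan() is a model call and the loop runs until the planner decides it’s done; everything else is plumbing.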
Multimodality refers to the ability of machine learning models to process, understand, and generate data in different forms. I didn’t realize there were different embedding models for different media, which, of course, makes sense. The really big takeaway from this session was the emerging alternatives to typical RAG approaches to parsing, chunking, and vectorizing – which many of us know is a major pain in the ass. The pain points:
Context loss at chunk boundaries.
Complex element extraction pipelines.
Parallel embedding models like CLIP (OpenAI), where images and text go down separate paths and you end up with unrelated items whose vector embeddings sit inappropriately close to each other.
There is a new type of transformer available, VLM-based, where “screenshots are all you need” (see what they did there?). Preparing mixed-modality data for retrieval can otherwise require data transformers, vision transformers, and possibly table-to-text converters. This alternative has the document snapped one page at a time and fed into the VLM. The amazing benefits of this were espoused, but the audience was skeptical and brought up many valid questions that had so-so answers. Worth looking at for sure, but the silver bullet we all want is not yet out there.
This is actually NOT the pattern we discussed but it was the best I could find. You just take basically a screen snap, page by page, and feed that into the VLM.
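A rough sketch of the screenshot-per-page idea in Python. pdf2image is a real library (it renders PDF pages as images), but the embedders are placeholders for a VLM-based retriever (ColPali-style), and the file name is hypothetical:

```python
import numpy as np
from pdf2image import convert_from_path  # real library; renders PDF pages as images

rng = np.random.default_rng(0)

def embed_image(img) -> np.ndarray:
    """Placeholder for a VLM-based page embedder; returns random vectors
    here so the sketch runs end to end."""
    return rng.normal(size=128)

def embed_text(text: str) -> np.ndarray:
    """Placeholder for the matching text-query embedder."""
    return rng.normal(size=128)

# "loan_agreement.pdf" is a hypothetical file. One screenshot per page --
# no parsing, no chunking, no table extraction.
pages = convert_from_path("loan_agreement.pdf")
page_vecs = np.stack([embed_image(p) for p in pages])

def top_pages(question: str, k: int = 3) -> np.ndarray:
    q = embed_text(question)
    scores = page_vecs @ q / (np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q))
    return np.argsort(scores)[::-1][:k]  # indices of the k most relevant pages

# The winning page images then go straight into the VLM's context as images.
print(top_pages("What is the prepayment penalty?"))
```

The appeal is obvious – the whole extraction pipeline collapses into “snap pages, embed pages” – which is also why the audience pushed on cost, scale, and retrieval quality.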
Model Maxxing with OpenAI (Ilan Bigio, Developer Experience Engineer, OpenAI)
Ilan Bigio was a great presenter. There was a lot of good content covered here, even though it wandered a bit at the end. The basic message that was reinforced up front is stick with prompt engineering/tuning as long as possible and until you really know that you might be able to do better – then consider fine tuning. Meaning, you have good command of your evals, you know you can do better, and you have exhausted what can be done with prompting. This was validating.
As a refresher for some of us, prompting is like a bunch of general-purpose tools that you can use to do a wide variety of things. Fine tuning is like a precision laser-guided table saw. You can do a smaller number of things incredibly well. Prompting has a low barrier, low(er) cost (relative to fine tuning), and is generally enough for most problems. Fine-tuning incurs a higher up-front cost, takes longer to implement, and is good for specialized performance gains of a particular type.
Three types of fine tuning were covered – supervised fine tuning (SFT), direct preference optimization (DPO), and reinforcement fine tuning (RFT). With SFT, think “imitation”; DPO is “more like this and less like that”; and RFT is the epic “learn to figure it out”. I’m not fully grasping how RFT works but…wait for it… I’ll figure it out. (See what I did there?)
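The difference shows up in the shape of the training data. Schematically (these are illustrative shapes, not any vendor’s exact schema): SFT wants prompt-plus-ideal-response pairs, while DPO wants a preference pair per prompt:

```python
# SFT: "imitation" -- show the model the gold-standard output to copy.
sft_example = {
    "messages": [
        {"role": "user", "content": "Summarize this servicing guideline ..."},
        {"role": "assistant", "content": "<the SME-approved summary>"},
    ]
}

# DPO: "more like this, less like that" -- a preferred/rejected pair.
dpo_example = {
    "prompt": "Summarize this servicing guideline ...",
    "preferred": "<the summary reviewers liked>",
    "rejected": "<the summary reviewers disliked>",
}
```

RFT replaces the fixed targets with a grader that scores attempts, which is why it can “learn to figure it out” on tasks where you can check answers but can’t easily write them.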
I hope Ilan won't mind I borrowed his diagram.
AWS and Anthropic Networking
I had high hopes for the AWS and Anthropic event, especially because I had to effectively submit a proposal for why I should be accepted to attend this event. I figured if there was an application process, surely this would be top notch. Two highlights here, hearing directly from Anthropic about their new products and vision for the future, and, of course, finding the non-men at the event. Yes, as per usual, it’s a sea of men at these events (nothing against men), with just a light dusting of non-men. I did manage to find a few of my people.
I was not aware of what could be done with Claude Code until seeing the demo at this event, and it was powerful. It has the feel of a command line application, but I have no doubt that I will be able to use it effectively based on what I saw. I suspect it will be able to do a lot of things that the team currently does in Replit, perhaps more effectively.
The "to-do" capability promises to be next level and apparently you can interrupt the flow and alter the to-dos. Whoa.
MongoDB Networking
Among the most interesting parts of the day was at the very end. After leaving the okish AWS/Anthropic event early, I decided to head over to the host hotel for the MongoDB welcome reception. I met the very interesting Mark Myshatyn, who is literally “the AI guy” for Los Alamos Labs. I asked him, “So what exactly does Los Alamos Labs do?” and he goes, “Did you see Oppenheimer? We do that.” Whoa. He’ll be speaking at the AWS Summit in DC; if you are attending you won’t want to miss it.
Tela’s Parting Thoughts on Day 1
It was an amazing day, and we are just getting started. The people, the content, the experience – so worth it. I do these events so I can continue to find out what’s going on and adapt it to the work I do for my mortgage clients. I intend to:
Continue work we are doing with multi-agent solutions, especially expand our use of multi-modal agents and possible alternatives to traditional RAG pipelines.
Work through our extensive eval optimization via prompt tuning and consider, if needed, fine tuning. I’m not convinced we’ll need it but we might, and I won’t be afraid of it.
Explore the use of Claude Code as an adjunct to our current tooling for product development acceleration.
Continue to feel the enormous gratitude I have for the opportunity to attend events like these, especially in difficult times.
Departing from Determinism and into the Stochastic Mindset
By Tela Mathias
The third annual AI Ascent event, an invite-only, elite event hosted by Sequoia Capital, started with a mind-blowing presentation by Sonya Huang, Pat Grady, and Konstantine Buhler. I’d say it took me at least two hours to get through the 28-minute opener as apparently, I had a lot of catching up to do. I kept hearing unfamiliar terms like “segfault”, “test time compute”, “vibe revenue”, and the “uncanny valley theory”. I wanted to truly take in what I was seeing so I had to chase all these rabbits down the rabbit holes. I was a little bit in awe of the different dimension of thought these presenters and this audience were in.
Looking back over the past 70 years, they shared the major technology waves notable in each decade, laying the context for the raging AI adoption we see today. Each of these waves was additive, each laying the foundation for easier adoption of the next, and now the waves are coming faster than ever.
Sequoia thinks of AI broadly within the context of three major technology segments – mobile, cloud, and now AI. Then they look at the total addressable market for each segment as a way of illustrating the ridiculously large market for AI. I spent about 30 minutes alone in understanding this one chart.
The top row is the cloud transition, and the first circle is the global software market when the transition began – 6B of the 350B total revenue realized. Today that market is at least 650B, with 400B realized, so a bigger market was created as a result of the technology. The bottom row represents the global software and services market that can be addressed with AI, and the tiny segment of 15B is what Sequoia sees as realized today. It absolutely dwarfs the cloud opportunity, and the inclusion of services represents the acknowledgement that AI offers both tools and the opportunities to transform tools to radically change the way we approach services. I see this in my own business; virtually all commercial services we offer today are absolutely powered by AI.
We are just starting to see the transition from AI as a tool, to AI as delivering an outcome – and outcomes are the purview (traditionally) of services. Hence the total addressable market is a staggering 10T. Yes, that’s a T. Which brings me to the next chart I spent about 20 minutes on.
I was foolishly unaware of (but intuitively applying) the new physics of distribution, which says that to have a successful product, you need three things – your target customer has to be aware of your offering (awareness), they have to have the desire to purchase your product (desire), and they have to have the ability to purchase your product (action).
I didn’t put these things together, but from an awareness perspective, no one cared about cloud at first. Marc Benioff was out there pounding the pavement for anyone who would listen. With mobile, it took a while for Blackberry to get clobbered (Blackberry forever – RIP). But with ChatGPT (v3.5, with the human user interface), it blew up in the first week.
Which brings us to desire, represented as the combined number of active users (in millions) on Reddit and Twitter. This number represents the way people find out about cool stuff and is a proxy for desire. At the start of cloud, these things didn’t even exist, so the number is zero. We had about 4M with mobile, and now we have 1.8+ BILLION. And finally, action. With cloud, there were only 200M people connected to the internet to listen to Benioff and now, at 5.6B, every household and business in the world is connected. So you can see that the foundation for AI had been materially laid before ChatGPT 3.5 got here. There were no barriers to adoption, which is not AI-specific – this is the new reality of technology distribution.
So we’ve established that there is massive opportunity to create value (10T TAM), and Sequoia and the industry at large believe that this value will come from the application layer (Sam Altman reinforced this later in his Q&A session). The new race for startups, then, is between foundation model providers (the tech-out perspective) and vertical-specific application developers coming with deep customer intimacy (the customer-in perspective).
The second scaling law (test time compute) focuses on enhancing AI model performance during inference rather than training – so rather than more extensive model training, we allow the models to think more. When we combine this reasoning with tool use and interagent communication protocols, this lets foundation model providers get pretty damn close to the application layer. And the race is on. (This confluence of technology and market factors also creates what Sequoia calls the agent economy, which I won’t address in this article).
What to do with this as an AI startup?
Sequoia believes, as do I, that 90% of building an AI company is just building a company. The rest comes down to the Leone Merchandising Cycle and building moats around the stages in the cycle.
As a tech startup we can compete with foundation model providers (and other businesses, for that matter) at each step:
Vision – Customers often don’t know exactly what they want; we can have an opinion where foundation models probably won’t.
Product – We can provide an end-to-end solution to a customer problem, rather than throwing a tool over the fence and hoping the customer will find a way to use it.
Engineering – We can build data flywheels with product usage data; this is really the only information that absolutely no one else will have.
Marketing – We can be of the industry, for the industry, by the industry; we can know our market better than anyone else. And certainly better than a foundation model provider.
Sales – We can speak the language of the customer natively. We send mortgage people in to talk with mortgage companies.
Support – We can put a “big ole bear hug” around our customers. Sequoia kind of scoffed at this but said that what Palantir has done with forward-deployed engineers is certainly valuable, and foundation model providers are very unlikely to do this.
The last thing I’ll cover is the stochastic mindset shift in the AI future. This is a departure from traditional deterministic thinking, born out of traditional software development where you program a system to do a thing and it will always do that thing. Given a set of inputs A, you will always get B. We love that. It’s so comforting to live in this world. It’s binary. There’s no grey. But isn’t the world full of grey? Can’t two opposing concepts be true at the same time? I can be awesome just the way I am and also have serious room for growth and improvement? This is the stochastic shift away from deterministic thinking and into probabilistic thinking. Sometimes, given a set of inputs A, you might get output C even when perhaps the answer should be B. And this is the window for uncertainty, dialectical thinking, and human creativity. Oh how I love the grey.
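A toy way to see the mindset shift in code (purely illustrative, not anyone’s actual system):

```python
import random

def deterministic(a: str) -> str:
    return a.upper()  # given input A, you always get B

def stochastic(a: str) -> str:
    # given input A, you usually get B -- but sometimes C
    return random.choices([a.upper(), a.title()], weights=[0.9, 0.1])[0]

print({deterministic("hello") for _ in range(1000)})  # {'HELLO'} -- always
print({stochastic("hello") for _ in range(1000)})     # {'HELLO', 'Hello'}
```

Testing, monitoring, and even customer expectations all have to be rebuilt around the second function, not the first – which is exactly why evals matter so much.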
There was so much more covered, but this article is already too long so I’ll leave you with my parting thoughts. As I started my journey with these 28 minutes, I was, honestly, overwhelmed at what I just didn’t understand. They were not even speaking my language. But after having had a day or so to process what I learned, I find that much of it is intuitive. I have a lot that I will take away and have already started to implement with my teams, but I am comforted by how natural much of this feels. That doesn’t mean that I’m ahead, however. Only that I have to run a lot faster to keep up. I’m so grateful that these materials were made available to the public.
The Agents Are Here and They Are Coming for our Kids
By Tela Mathias, Chief Mad Scientist at PhoenixTeam
We’ve been in the lab for the past few weeks tackling the agent problem and today it really started to connect. I’ll start with a reminder about what an agent is. The foundation of agentic AI is its ability to reason. An AI agent is characterized by the ability to perceive and understand context, reason about a problem, plan and take action, and use tools.
The agent problem, of course, is understanding and rapidly deploying… agents. Why does this matter? This matters because it is a huge unlock in time to value. When we master agents, we eliminate (or at the very least drastically reduce) reliance on coding for solution delivery. And that means we can move at a blistering pace.
We believe that in the AI future, both the front end and the back end of solution delivery are commoditized. There is no moat in foundation models (the “backend”), and anything we can imagine, we can build (the “frontend”). So what, then, are the differentiators? Original thinking and vertical industry expertise. We believe that in the future (which is happening now, by the way), there is a new formula for differentiation.
Differentiation = Original Thinking + Domain Expertise + Fast Scale
Considering this, we’ve been in the lab working on going “concept to cash” (idea to production value realization) in five days. Imagine having an idea on Monday and seeing it through to production by Friday. Wouldn’t that be amazing? I know this is possible. I can’t say we’ve got it quite down to five days, but we are not that far off. What if we could pick an agentic use case from a menu, and have that use case up and running within a week? For that matter, what if it was two weeks? Certainly way better than how long it takes to get stuff into production in a more classic approach.
I still love classic, don’t get me wrong. We have plenty of customers where we work with AI accelerators to deliver software using traditional agile scrum, with continuous discovery and continuous delivery. That is very much still a thing. But there is another way too, and a way that will become a massive differentiator for those clients boldly willing to go where no one has gone before.
So you are thinking – “yes, let’s go!”. Well, hold on a minute. The tooling isn’t there yet. Where it is there, it doesn’t scale. The accelerators are nascent, and connecting the front end to the back end is still a long way off from “push button, get software”. But it’s coming, and it’s coming fast. Things like the Model Context Protocol (MCP) will ultimately make building agents like building with Legos. The pieces are all there and you just snap them together. Voice agents will create the ability to do customer discovery at scale in, like, a morning. Coding agents are already accelerating development.
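A taste of the Lego feel, assuming the FastMCP helper from the official MCP Python SDK (the API may shift; the server name and tool are hypothetical – treat this as a sketch):

```python
from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("mortgage-compliance")  # hypothetical server name

@mcp.tool()
def check_guideline(doc_text: str, guideline_id: str) -> str:
    """Check a document excerpt against a servicing guideline (stubbed)."""
    # Real logic would live here; once registered, any MCP-capable
    # agent can discover and call this tool.
    return f"{guideline_id}: no exceptions found in {len(doc_text)} chars"

if __name__ == "__main__":
    mcp.run()
```

That’s the snap-together part: the tool is defined once, and every MCP-aware agent or client gets it for free.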
"Agents building technology with legos", created by Midjourney.
The human side of this is interesting and has to be cared for. There is a lot of fear. Jobs are changing. The things we have to know how to do come with a really steep learning curve, and not all of us are ready. I talk often now about what this means for my kids. This generation will be the last that was born before ChatGPT. They are also the unique set of children that had a critical time in their lives and learning utterly disrupted by COVID. They will engage with the world and learn in fundamentally different ways. The education systems are not changing fast enough, which puts the burden on parents. We are the ones that have to help our children adapt. And we don’t even know what that means, so it’s a bit of the blind leading the blind.
The agents are here, and they are coming for our kids.
I have to just acknowledge how scary that is. I do worry. I worry about my oldest at 22, and my youngest at seven (and everyone in between – my nine- and eleven-year-olds). But we really don’t have a choice. The future is coming and the only way out is through.
For existing professionals, it is also scary out there. Will I be replaced? The way I mastered my craft is not the way anymore. Is my craft even still relevant? The reason I know anything about subservicing is because I spent three years photocopying checks in the second subbasement of HUD in conjunction with litigation support for a federal case against a Ginnie Mae master subservicer whose assets were seized by the FDIC. That was 1998 and even then it was old (they were raided by the FBI in the early 1980s).
As much as I came to hate this job, gosh did I learn a lot. Precision. Attention to detail. Escrow account commingling. 11710A monthly reporting. A goldmine really, still useful today. This is simply not a job anyone will do in the AI future. Agents will sift through massive quantities of structured and unstructured data, decide what needs to be done, and do it. How will new professionals learn these things? Does it even matter? What do they have to learn?
Yes, yes, I know this is an 11710E but the internet didn't have what I was looking for...
I had a fascinating conversation with a senior executive talking about interns last week. They have their interns learning to do things “the classic way”. Stare and compare. Good old-fashioned math and analysis. This is a valid approach, but it is not the one we are taking. We have our interns living in the AI future. We are paying them to learn, explore, try, and fail fast – all using “the now way”. I have no idea if this is “right” but it’s the path we chose. For now?
I choose to see the sunny side of this, and really I try to see the sunny side of all things. The alternative is depression and despair, and I don’t want to live like that. We all survived the internet, and we will survive this. Those of us who are out in front will pave the way for everyone else. It’s bumpy and painful but it’s better than being bored.