The AI startup boom comes with a reset: we’re back to solving hard technical problems just to get products off the ground. This is a shift from the startups of the 2010s, when many technical questions had already been answered and packaged into developer frameworks or well-understood playbooks, and much of the effort went instead into acquiring market insights and figuring out growth mechanics.
With AI, we’re returning to technical roots. Today’s AI APIs from companies like OpenAI are the equivalent of dial-up internet lines in the mid-nineties. If you were working on the first e-commerce sites, you had to invent your own database and internet infrastructure; deploying a live website in 1995 took months of pioneering engineering and new software paradigms. By 2024, you can ship a working website solo, from a laptop, over a weekend, a testament to the years spent building out internet infrastructure.
Once the current AI wave started unfolding in 2022, many people believed AI was magical: an elixir that could make companies build themselves. I think these people are seriously missing the mark. The real excitement with AI is that it opens up new markets; it lets us solve problems we couldn’t attack before. For example, you can now build an AI-powered customer service representative that is genuinely high quality, not just a gimmick. This wasn’t possible before ChatGPT, because we didn’t have intelligence “on tap” (accessible through an API) as a building block.
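To make “intelligence on tap” concrete, here is a minimal sketch of using an LLM API as a building block for a support agent. It assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the company, system prompt, and customer message are invented for illustration.

```python
# Minimal sketch: "intelligence on tap" as a building block (assumes the OpenAI Python SDK).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical support-agent setup; the company and policy are invented for illustration.
system_prompt = (
    "You are a customer support agent for Acme Kettles. "
    "Be concise and polite, and ask for an order number if one is missing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "My kettle arrived with a cracked lid. What can you do?"},
    ],
)

print(response.choices[0].message.content)
```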
However, the old challenges of designing and building an initial product, finding product-market fit, and figuring out distribution haven’t disappeared. Sure, applying AI can give your product a “wow” effect that helps it go viral and aids distribution. But for users to stick around, you need to cross the AI demo-to-product valley: you need serious engineering to tackle the design, accuracy, consistency, and latency (an often underrated problem) of your AI-backed product.
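What that engineering looks like depends on the product, but even a crude harness goes a long way: send the same request several times, then check how much the answers disagree and how long they take. A rough sketch, again assuming the OpenAI Python SDK, with a placeholder support question:

```python
# Rough sketch of a consistency/latency check for an LLM-backed feature.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the question is a placeholder.
import time
from collections import Counter

from openai import OpenAI

client = OpenAI()

def ask(question: str) -> tuple[str, float]:
    """Return the model's answer and the wall-clock latency in seconds."""
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content.strip(), time.perf_counter() - start

question = "A customer reports a cracked lid on arrival. Refund, replace, or escalate? Answer in one word."
runs = [ask(question) for _ in range(5)]

print("answer spread:", Counter(answer for answer, _ in runs))       # consistency
print("worst latency: %.2fs" % max(latency for _, latency in runs))  # latency
```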
On top of that, you will most likely be wrong about your first idea. If your founding team is too small or lacks the necessary skills, your iteration cycle will be too slow, and you’ll give up before reaching the key insight and the first inklings of product-market fit.
If you look at early breakout AI products like Perplexity, Cursor, or Circleback, you’ll find they each had three or four co-founders. Each team had to make AI-specific innovations in both UX and AI infrastructure.
So why can’t you just hire people to work on these problems? Without the early innovations, you don’t have a product, and without a product, you don’t have a strong hypothesis around product-market fit. It’s a well-known mantra that hiring too early, before finding product-market fit, is a deadly mistake for startups: you end up hiring for roles that aren’t clearly defined because your product isn’t solid yet. Plus, the “full-stack AI engineer” role hasn’t yet emerged, as the technology is still too bespoke. Your early team needs a founder’s mindset: people ready to figure things out as they go, rather than having things figured out for them (an employee mindset).
Designer, AI Engineer, Developer
What are the main skills needed to launch a successful AI application? I’ll riff off the timeless post by Tom Preston-Werner, where he talks about web applications. We can loosely redefine those roles as Designer, AI Engineer, and Developer.
In his post, he says:
“A web application is nothing more than an experience created by design.” Users can’t see what technology you use or whether you follow an agile development process. All they experience is what’s on the screen. It can’t be confusing, it can’t look amateurish, and it can’t have spelling errors. If the UX is bad, the web application is bad. It’s that simple.
Adapted to the AI era:
“An AI application is nothing more than an experience created by design.” Users can’t see (or hear—a new frontier with AI!) what model you use or whether you rolled out your own AI agent framework. All they experience is what’s on the screen or what comes out of the speakers. It can’t be confusing, it can’t look amateurish, and it can’t have spelling errors. If the UX is bad, the AI application is bad. It’s that simple.
He goes on to say:
The way you get good UX is by having a good designer. Someone on the team must be skilled not only in making things pretty, but in making them usable as well. Without good UX/visual design, you may as well not even bother. It’s impossible to stress how important this is.
This holds true. If you look at Perplexity or Cursor, both had significant UX innovations from the start. For Perplexity, it was quoting sources in a way that made sense to a non-technical user. For Cursor, it was letting users give the underlying model the right context early on (e.g., @-mentions of code files).
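To make the Cursor example concrete, here is a rough sketch of how @-mentions might be resolved into file contents and placed ahead of the user’s request in the prompt. The mention syntax and prompt layout are invented for illustration; this is not how Cursor actually implements it.

```python
# Illustrative sketch: resolve @-mentioned files into model context.
# The mention syntax and prompt layout are invented; this is not Cursor's implementation.
import re
from pathlib import Path

MENTION = re.compile(r"@([\w./-]+)")

def build_prompt(user_message: str) -> str:
    """Inline the contents of @-mentioned files ahead of the user's request."""
    blocks = []
    for path_str in MENTION.findall(user_message):
        path = Path(path_str)
        if path.is_file():
            blocks.append(f"--- {path} ---\n{path.read_text()}")
    context = "\n\n".join(blocks) or "(no files found)"
    return f"Relevant files:\n{context}\n\nUser request:\n{user_message}"

print(build_prompt("Refactor the retry logic in @src/http/client.py to use exponential backoff."))
```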
Then he talks about the role of the Architect:
Once you have an idea of what you’re creating, you need to figure out how to make it happen. That’s where the Architect comes in. With the recent explosion of open source solutions to common problems like databases, web frameworks, job processors, messaging systems, etc, you need a team member that has a broad understanding of the technology landscape. The choices you make early on will impact your company for many years, and the wrong choices can spell disaster. The role of the Architect is to choose the best tools for the job, and to decide when new tools need to be created.
In the world of AI applications, this role is for the AI Engineer. This person figures out how to assemble the emerging AI stack and use techniques like prompt engineering to turn the nebulous idea of AI-powered interactions into tangible engineering pieces. The role of the AI Engineer is to deal with AI’s stochastic nature, determine how to provide relevant context, and design data and feedback loops essential for crossing the AI demo-to-product valley—taking a demo that works some of the time and turning it into a reliable product. Unlike traditional software architecture, AI engineering is more about innovating than selecting existing solutions. At this stage, solutions to common problems are being discovered by individual teams developing AI applications. It took decades for the open-source web stack to emerge, and we’re in just year two of the widespread AI rollout.
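One concrete piece of that data and feedback loop is simply logging every model interaction together with the user’s reaction, so failures can be replayed and turned into evaluation cases later. A minimal sketch, with an invented log format:

```python
# Minimal sketch of a feedback loop: log every model call plus the user's verdict
# as JSON lines, so failures can be replayed and turned into evaluation cases later.
# The log format and field names are invented for illustration.
import json
import time
from pathlib import Path

LOG_PATH = Path("interactions.jsonl")

def log_interaction(prompt: str, completion: str, feedback: str | None = None) -> None:
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
        "feedback": feedback,  # e.g. "thumbs_up", "thumbs_down", or None
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

def failure_cases() -> list[dict]:
    """Collect thumbs-down interactions as candidates for an evaluation set."""
    if not LOG_PATH.exists():
        return []
    with LOG_PATH.open() as f:
        return [rec for line in f if (rec := json.loads(line)).get("feedback") == "thumbs_down"]
```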
On the role of Developer, Tom writes:
Design and architecture dictate what you build and how you build it, but without someone to do the construction, you’re dead in the water. The role of the Developer is to turn the wishes of the Designer into reality while staying within the constraints that the Architect has put forth.
This role hasn’t changed much. You still need someone great to do the actual building.
Finally, Tom makes a great point that these roles don’t have to be spread across different people:
The three roles of the Designer, the Architect, and the Developer may reside in a single person, but it’s much more common to see groups of two or three people satisfy all these skills. In fact, the best founding teams are those where everyone fills some combination of roles. This fosters an environment of friendly argument that leads to better decisions.
I believe it’s extremely unlikely that one person can fulfill all three roles of Designer, AI Engineer, and Developer, as the surface area is simply too large—especially on the AI engineering side. It might be possible to fit them into two exceptionally capable people, though.
This is The AI Minimum Viable Team to build an AI application. Of course, there’s another circle around the core: building the actual business. With a killer product, this part might come easily, or in many cases require yet another role—a marketer or salesperson.