Welcome to the 50th episode of Deep Tech Catalyst, the channel by The Scenarionist where science meets venture!
The global demand for AI is surging exponentially, and the need for a robust infrastructure to sustain its growth has never been more critical.
Innovators in science and technology who aim to build investable companies in this field must master both the venture capital perspective and a working understanding of the hardware and software capabilities needed to meet real market demand and deliver adequate returns to investors.
We are excited to welcome Michael Stewart, Managing Partner at M12, Microsoft's Venture Capital Fund, who is joining us to provide an overview of the sector and help aspiring founders navigate the initial steps at the crossroads of laboratory innovation and industry application.
Key Themes Covered:
📈 Framing the AI Infrastructure and Value Chain
🤖 3 Key Capability Bottlenecks in Compute Infrastructure
🎯 3 Tips to Design a Customer-Driven Deep Tech Solution
🔍 Product-Market Fit Through a VC Lens: Software vs Deep Tech
📈 Framing the AI Infrastructure and Value Chain
From a venture capital lens, understanding AI demand involves looking beyond just software or hardware as isolated elements. Instead, it requires viewing the AI ecosystem as an integrated system, a concept that is well-illustrated by the electronics and semiconductor industries.
A Shift from Servers to Global Networks
In today’s AI landscape, the "system" is no longer just a single server or box: it encompasses entire buildings and networks of data centers spread across the globe to keep up with the exponential pace of computing.
This holistic approach, which is increasingly critical as AI advances, is called System Technology Co-Optimization (STCO), and it emphasizes optimizing entire systems rather than individual components.
The current AI applications generating revenue today are largely built on legacy technology, developed for an era with different computing needs and architectures.
While this has worked so far, we’re now entering a transformative phase.
The demand for AI is becoming more specific and complex, moving beyond general computing needs. This shift is not about one new technology disrupting an equilibrium; it's about rethinking the relationship between humans and machines.
AI is reshaping how we leverage technology, electricity, and even robotics. This revolution requires a new perspective on system design to accommodate the expanding AI workloads, which will continue to evolve and grow in complexity.
“The technology that is running AI today to create the applications that are generating revenue is largely built on a foundation that was designed really for a different era of technology, a different kind of compute substrate, a different architecture, even for the applications. And that's fine. That's fine during the period of time when most of the demand we're seeing is kind of general compute and AI is a piece of that. It could be like 5%, it could be 10%, it could be whatever percent you want to imagine, but growing. Now, as we've turned this corner to see what the potential could be, the disruptive potential, this is a lot bigger than really one new technology disrupting an equilibrium. This is more about reshaping how humans work with machines, how humans leverage the mechanical advantage of electricity and robots, and things like that.”
Opportunities for Entrepreneurs and Innovators
Startups have an incredible opportunity to enter a landscape that is shifting from scaling existing paradigms to building entirely new ones.
This new approach will be shaped around AI workloads, which will likely diverge from today’s applications and become more specialized and differentiated.
Addressing this "all-hands-on-deck" moment involves contributions from every corner of the technology ecosystem. Software developers, from algorithm designers to application creators, play crucial roles in advancing AI.
On the hardware side, innovation is needed in deep tech areas, including memory technology, logic technology, and interconnect technology. Every aspect of the infrastructure—from how power is delivered to circuits to the conversion of power from AC to DC—holds potential for efficiency improvements.
To move toward an STCO-like future, every gap and inefficiency in current technologies must be explored and optimized.
The demand for AI will continue to grow, and this trend is unlikely to reverse.
The focus is now on understanding where startups and innovators can make their mark, how quickly they can address market needs, and how they can balance costs and readiness of technologies to be seamlessly integrated into the AI ecosystem.
In this era of continuous demand, entrepreneurs and technologists have a pivotal role in advancing the infrastructure that will sustain AI's future growth.
🤖 3 Key Capability Bottlenecks in Compute Infrastructure
There are several critical areas for the future of AI infrastructure, with immense potential for technological and commercial advancements. For entrepreneurs and technologists, the key to success will be identifying specific applications that benefit from these technologies and focusing on integrating them into viable products and services.
1. Edge Computing: A Complex Frontier for Innovation
The debate on where computing should happen—whether in the cloud or closer to where data is generated—has been ongoing, and edge computing is a focal point in that conversation. It's not just a technological issue: it's also about customer preferences, security concerns, and even regulations or laws that can influence the decision.
The question of the edge is crucial and will remain so for a long time because it is essentially unsolved. While the cloud computing architecture paradigm is more established, it continues to evolve, and the edge presents a more complex challenge.
Moving data is expensive and inefficient, so from an engineering and technological standpoint it is almost always better to perform compute where the most data is created.
This is a significant focus for AI investors because generative AI's need for live, fresh, on-site data connects directly to revenue and value, and that connection will only strengthen in the next era.
In fact, today, most of the data is left on the table—not computed, sensed, recorded, or observed, simply because there's only so much computing that can be crammed in. The question of how much compute can be brought to the data remains crucial.
From a technologist's point of view, compute-in-memory is one response to this challenge: it attacks precisely the point where it is expensive and difficult to build large amounts of very high-speed memory tightly integrated with compute.
Compute architectures have long followed the von Neumann model: processing elements connected over a bus or bridge to data banks such as DRAM, hard drives, or storage-class memory. This area is long overdue for major disruption.
Edge technologies will likely respond first, adapting the general architectures already applied to devices such as drones, security cameras, and smart home installations.
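To see why this bottleneck invites disruption, a back-of-envelope comparison helps. The sketch below contrasts the energy cost of moving operands across a von Neumann bus with the cost of the arithmetic itself; all figures are rough, order-of-magnitude assumptions, not vendor data.

```python
# Back-of-envelope comparison of data-movement vs. arithmetic energy.
# All figures are illustrative, order-of-magnitude assumptions,
# not measurements from any specific chip.

PJ_PER_MAC = 0.5          # energy for one multiply-accumulate, picojoules (assumed)
PJ_PER_BYTE_DRAM = 20.0   # energy to fetch one byte from off-chip DRAM (assumed)
PJ_PER_BYTE_SRAM = 1.0    # energy to fetch one byte from on-chip SRAM (assumed)

def energy_ratio(pj_per_byte, bytes_per_mac=2):
    """Energy spent moving operands relative to computing with them."""
    return (pj_per_byte * bytes_per_mac) / PJ_PER_MAC

print(f"DRAM fetch / MAC energy: {energy_ratio(PJ_PER_BYTE_DRAM):.0f}x")
print(f"SRAM fetch / MAC energy: {energy_ratio(PJ_PER_BYTE_SRAM):.0f}x")
```

Even with these loose numbers, off-chip data movement dwarfs the compute it feeds, which is exactly the gap compute-in-memory architectures aim to close.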
2. Redesigning Chip Architectures
With edge devices being highly power-constrained, there is a limited amount of compute that can be integrated into their power envelopes. This limitation has driven the redesign of chip architectures around these power envelopes and the thermal design parameter (TDP). The primary constraint is the available power.
However, it is often unclear which model will be executed. For example, the application could involve:
Computer vision
A large language model (LLM)
A new type of application
Determining how these architectures translate into a service or product has also been challenging.
The result is a highly heterogeneous set of architectures with little commonality, unlike the standardization seen in the cloud. The cloud focuses on standardizing around a few types of chips for each application, connecting them in various ways, delivering power differently, and managing bandwidth and network connectivity, but not so much the processor architecture itself.
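The power-envelope constraint can be made concrete with simple arithmetic. The sketch below sizes how much compute fits into a few hypothetical edge power budgets; the device wattages and the efficiency figure are illustrative assumptions, not product specifications.

```python
# Rough sizing of an edge accelerator against its power envelope.
# Device wattages and efficiency are hypothetical assumptions.

def achievable_tops(tdp_watts, tops_per_watt):
    """Peak compute that fits inside a thermal design power (TDP) budget."""
    return tdp_watts * tops_per_watt

# Hypothetical power budgets available for ML compute, in watts.
devices = {
    "security camera": 2.0,
    "smart-home hub":  5.0,
    "drone":           10.0,
}

EFFICIENCY = 4.0  # assumed TOPS per watt for an INT8 edge NPU

for name, tdp in devices.items():
    print(f"{name}: ~{achievable_tops(tdp, EFFICIENCY):.0f} TOPS within {tdp} W")
```

The point of the exercise: the power envelope, not the algorithm, sets the ceiling, which is why edge chip architectures end up redesigned around TDP rather than around any one model.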
3. The Challenge of Memory Innovation
Among the components of AI infrastructure, memory technology stands out as an area where innovation is especially challenging but highly necessary. Memory innovation is harder than almost any other area of investing because the forces that must be contended with are completely different, even compared to chip architecture. Yet in the AI era, the need for new types of memory, which require new process and device technologies, has pushed all the vendors to become much more innovative in the products they offer.
🎯 3 Tips to Design a Customer-Driven Deep Tech Solution
Early interaction with potential customers presents an opportunity for Deep Tech founders to discover where their tech application can become a product and provide significant value. Let’s explore some basics for early value proposition design.
1. Understanding Your Customer Base
The first question to ask is: “What kind of customer am I talking to? Am I talking to a customer who is a buyer to serve an application, or am I talking directly to the application provider?” This question is crucial because the pools of profit that these two parties serve are completely different.
Large hyperscalers, for example, have a mandate to create high-performance compute options for customers, balancing a large number of features that their customers want.
Application providers, on the other hand, have been grappling with the question of whether to have their own compute resources or move everything to the cloud.
For instance, if you have a device technology or other innovations like power savings or cooling, understanding how these deliver bottom-line value to different customer types will be crucial.
2. Is this a 10-30% Improvement or a 10x, 100x Type of Proposition?
Once the customer type is clear, the next step is understanding how your technology adds value to their operations. As an entrepreneur with an innovative and unique solution that could save power, time, and money, you need to understand if this customer is a large consumer of compute. Will they benefit significantly from the advantages you bring? If so, how well do you understand the potential impact? Is this a 10-30% improvement or a 10x, 100x type of proposition? These are totally different worlds.
3. Be Intellectually Honest
It's key to be intellectually honest with yourself, your co-founders, and your team. Get in a room, and diagram out where you're really trying to go for the 10x, 100x impact. If you can achieve it, think about how to target the right customers who would benefit most from that type of advantage.
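One way to run that whiteboard exercise honestly is to put numbers on it. The sketch below compares the bottom-line impact of an incremental improvement against a 10x or 100x one for a hypothetical customer; the spend figure and improvement factors are invented for illustration.

```python
# Sketch of the bottom-line impact question: does a given improvement
# matter to this customer? The spend figure is hypothetical.

def annual_savings(compute_spend, improvement_factor):
    """Savings if the same workload costs 1/improvement_factor as much."""
    return compute_spend * (1 - 1 / improvement_factor)

SPEND = 10_000_000  # hypothetical yearly compute bill, USD

for factor, label in [(1.2, "20% better"), (10, "10x"), (100, "100x")]:
    print(f"{label}: saves ${annual_savings(SPEND, factor):,.0f}/yr")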
🔍 Product-Market Fit Through a VC Lens: Software vs Deep Tech
For Deep Tech founders, one of the most difficult questions to answer is whether their technology has achieved product-market fit.
Product-Market Fit in Software Startups
Most investors are looking for evidence that the application or technology has achieved product-market fit.
If the focus is on software as an investment window, it's usually easy to determine when you have product-market fit because people start buying what you have. And then it's up to the entrepreneur to scale up how much of that is available.
For this, you can turn to your top 3 hyperscalers or other providers to get more compute and provide more services. This doesn't mean you're not innovating on the product; you still need to be innovative, but you can establish product-market fit very early.
If by the Series A timeframe, you have not shown product-market fit, you could be in very big trouble.
Deep Tech is Something Different
On the other hand, Deep Tech involves very capital-intensive fields such as AI chips, robots, quantum computing, and fusion energy.
To some founders, it can seem like what's sufficient is just answering the question: "Is there a need for this technology?" They might provide a thousand-word argument referencing various sources, essentially writing a paper to make an intellectual argument that there's a need for this technology.
However, this approach often doesn't work out so well because, in the cases of these very complex technologies, things can go in a number of different directions.
The Role of the Letter of Intent (LOI)
So, where to start to prove to an investor that there’s an unmet need in the very early stages? A customer Letter of Intent (LOI) is oftentimes useful. It's one of the best ways to prove that there is some applicability and market application for the technology. An LOI can show an investor that the potential customer base values this because the LOI process for most large customers is not something just anyone can get. Certain customers will be able to get more value out of such an LOI, while others will never give those LOIs because there's not much of an incentive for them to do it.
The Limitations of LOIs
In the investor's mind, an LOI is not binding. It's better than nothing and better than just a phone call, but it's still just a piece of paper. For the entrepreneur, never be satisfied with just that piece of paper. Keep pushing for more. An LOI is a starting point; it should be seen as a temporary momentary sigh of relief, but then push toward what the real business looks like and how many other customers you can get such commitments from.
Serving Needs That Aren't Current Market Demands
The more you go into the science fiction areas of some of these new technology applications, the more you can find companies serving needs that aren't current market demands.
This is the crux of the difficulty on the investor-entrepreneur side.
"If I introduce a plausible means of teleportation right now, I would say that as product-market fit, like if they could if that really worked, I mean, look, I could use that right now. At the same time, finding a customer who's going to say, OK, well, here's what I would pay for teleportation. I mean, that could that could be quite difficult."
In the more concrete realm, like AI, you have a much better chance of connecting the value that's created, the pricing that you can observe in the market, to the cost of your solution because there's a liquid market for tokens and AI.
That's a big gift.
A lot of these other science fiction-like technologies don't have that yet. So it's on you to really learn as much as possible from the customer and get them over on your side, win them over to speak forcefully about the value that you're creating.