The Man Who Measures Manufacturing Scaleup Risk: A Conversation with Christian Okoye
Part 1 of the Decoding the Learning Curve Series: How one infrastructure investor's frustration with cost overruns is spawning a data revolution in advanced manufacturing
The story of American manufacturing's next chapter won't be written in steel and concrete alone—it will be written in data, patterns, and the rigorous measurement of what actually works. Christian Okoye understood this when he left his perch at Generate Capital to found Occam Edge, a company dedicated to something that sounds deceptively simple: helping novel hardware projects model the future state of their technoeconomics.
But this isn't about crystal balls or consultant hand-waving. It's about recognizing that even the most cutting-edge "first-of-a-kind" projects aren't really first-of-a-kind at all. They're new combinations of known building blocks—like DNA sequences using the same base pairs to create infinite variation—and if you understand those blocks, really understand them, you can predict which projects will achieve their promised cost projections and which will become cautionary tales.
In an industry where "it will get cheaper at scale" has become an article of faith rather than proven fact, Okoye is bringing the receipts. His framework for analyzing budget risks and learning curves (how costs decline as production scales) is giving founders, investors, and policymakers a common language for what has long been discussed in vague generalities.
We sat down with Okoye to understand why he left an investment career to tackle this problem, what his data reveals about manufacturing success and failure, and why understanding your learning curve might be the difference between building the next Tesla and becoming the next Solyndra.
Milo Werner: You've talked about a specific moment with an engineer that crystallized this problem for you. Can you walk us through what happened?
Christian Okoye: I was working with a portfolio company—we were getting our butts kicked by cost overruns on their first commercial project—and I spent six months embedded with their head engineer. I asked him to walk me through how they were predicting costs, and it turned out all of it was in his head. Just pure intuition and experience, no data, no systematic approach.
When we started looking at what they'd done in the past and began projecting forward based on actual data, we were able to get some control over the costs. But what struck me was that this wasn't unique to this company. I started going down a rabbit hole asking why more people don't use data to forecast costs versus assuming "I think this will take X amount, which needs Y people, which costs Z." It's almost always wrong.
Here's the thing—I put my daughter to bed every night. Even though each night is different and she's a huge variable, I've learned from countless bedtime routines exactly when I need to start the process to get her down between 7:30 and 8pm. We're all subject to the planning fallacy, but if you look at past times you've done something, or similar things others have done, that's a much better reference point.
But aren’t we talking about true “first-of-a-kind” projects? How can we draw on past experience when we're building something entirely new?
That's the key insight—it's fundamentally not true that these early projects are completely novel. When you break these technologies down and talk to the engineers, they're often integrating subsystems that have been well understood for decades. Even the "new" things are built on a backbone of proven components.
A friend pointed me toward some under-the-radar data sources and academic papers that dig into what drives cost overruns and how learning curves bring costs down. Across different technologies—from solar to nuclear—there are identifiable patterns. It's not just about unit size or scale; there are deeper factors like modularity and system complexity that we can quantify.
Electric Hydrogen’s modular electrolyzer plants
You analyze everything from solar panels to nuclear reactors. What's the most surprising learning curve you've studied?
Green hydrogen has been fascinating because it's a combination of two different modules. The electrolyzer system, which draws on decades of fuel cell knowledge, has a pretty strong and predictable learning curve. We can bring those costs down significantly over time.
But here's the surprise: the balance of plant—the broader infrastructure around the electrolyzer—can be two-thirds or more of the cost and doesn't come down as easily. That's where the learning curve has been slower and has held back overall cost reductions.
Working with companies like Electric Hydrogen, who are focused on standardizing and modularizing that balance of plant, has shown that if you can control that part of the system, the green premium can drop dramatically. The company has been able to reduce total installed costs by as much as 60% with this approach. It's been eye-opening to see how much of the learning curve success hinges not just on the core technology, but on simplifying and streamlining the entire system around it.
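The dynamic Okoye describes can be made concrete with a small sketch. Under Wright's law, each doubling of cumulative production cuts a subsystem's cost by its learning rate; when subsystems learn at different rates, the slow learner comes to dominate total cost. The cost shares and rates below are hypothetical illustrations of the split he describes (electrolyzer roughly one-third of cost and learning fast, balance of plant two-thirds and learning slowly), not Electric Hydrogen's actual figures.

```python
def blended_cost(shares_and_rates, doublings):
    """Total cost multiplier (relative to the first unit) when each
    subsystem follows its own learning rate.

    shares_and_rates: list of (initial cost share, learning rate) pairs,
    where a learning rate of 0.18 means cost falls 18% per doubling.
    """
    return sum(share * (1 - rate) ** doublings
               for share, rate in shares_and_rates)

# Hypothetical split: electrolyzer = 1/3 of cost, fast learner (18%);
# balance of plant = 2/3 of cost, slow learner (5%).
system = [(1 / 3, 0.18), (2 / 3, 0.05)]

# After ten doublings of cumulative production, the electrolyzer's
# contribution has shrunk to ~5% of the original system cost, while the
# balance of plant still accounts for ~40% of it.
print(round(blended_cost(system, 10), 3))  # → 0.445
```

The takeaway matches the interview: the overall curve is gated by the slowest-learning subsystem, which is why standardizing the balance of plant moves the needle more than further electrolyzer gains.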
This seems like it could fundamentally change how founders approach their businesses. How should they think differently if they understood their learning curves?
Founders need to shift from generic "value engineering" talk to truly quantifying how their learning curve makes them competitive. Everyone throws out nebulous "10-20% learning rate" figures, but the real differentiator is showing how your technology stacks up against incumbents using data-driven comparisons.
If you can identify similar subsystems or standardize parts of your design, you can tell a much more compelling story about how you'll drive costs down over time. It's about engineering your system to capture those cost benefits—thinking through modularity, standardization, and bringing more of the process in-house so you can control and predict cost declines.
When founders understand that, they can present a much stronger case for why their technology will become cost-competitive over time.
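For readers unfamiliar with how those "10-20%" figures translate into cost projections, a minimal sketch of the standard Wright's-law formulation may help. The function and numbers here are illustrative, not drawn from Okoye's framework.

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate):
    """Wright's law: each doubling of cumulative production cuts unit
    cost by `learning_rate` (e.g. 0.20 for a 20% learning rate)."""
    b = -math.log2(1 - learning_rate)  # experience exponent
    return first_unit_cost * cumulative_units ** (-b)

# At a 20% learning rate, 1,024 units is ten doublings, so the cost
# multiplier is 0.8 ** 10 ≈ 0.107: a $100 first unit falls to ~$10.74.
print(round(unit_cost(100.0, 1024, 0.20), 2))  # → 10.74
```

A founder's pitch, in Okoye's framing, amounts to defending two inputs to this function: a credible learning rate benchmarked against comparable subsystems, and a realistic path to the cumulative volume that delivers the doublings.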
You've talked about creating something like a "Kelley Blue Book" for industrial project risk. What would that change?
Right now, everyone operates with their own internal heuristics and assumptions about what makes a project risky. But if you had a standardized reference that said, "Projects with these characteristics tend to have these kinds of cost outcomes," it would give everyone a common baseline.
Technology providers could clearly articulate why their system should be viewed differently. If you can show how your design choices—modularity, standardization, controlling more of the balance of plant—lead to a better risk profile compared to the benchmark, that becomes really credible for building investor confidence.
It would also help investors themselves. Instead of piecing together a risk picture from scattered data points, they'd see how a new project compares to a broader universe of similar projects. It brings transparency and consistency to something that's currently very opaque and often subjective.
Why do you think the industry has been slow to adopt data-driven forecasting?
I wouldn't say there's flat-out resistance; it's more natural skepticism. A lot of folks assume every novel technology is a complete snowflake, something totally unique that can't be benchmarked.
The real challenge is breaking the assumption of uniqueness and recognizing that these systems have a kind of DNA we can analyze. Once we understand how those genetic building blocks fit together, we can start to forecast cost implications more accurately. It's less about pushing something brand-new and more about showing that what looks novel is often just a new combination of known elements—the same base pairs arranged in a different sequence.
The work Okoye is doing at Occam Edge represents more than just better spreadsheets or risk models. It's about transforming manufacturing from an industry of heroic one-offs to one of predictable, systematic improvement. In a world where American manufacturing needs to compete on process innovation, not just breakthrough technology, understanding and optimizing your learning curve isn't just helpful—it's essential.
This is the first in our "Decoding the Learning Curve" series, where we'll explore how modularity, system complexity, and design decisions determine whether technologies achieve their promised cost reductions. In upcoming posts, we'll dive deep into the modularity framework that separates winners from losers, examine why some technologies like nuclear actually get more expensive over time, and provide practical tools for engineering your own learning curve advantage.
For more of Christian's insights on data-driven manufacturing analysis, check out his blog FOAKing Data Nerds and Occam’s website. Feel free to contact him at: christian@occam-edge.com.
What aspects of your manufacturing technology could benefit from better learning curve analysis? Join the conversation in the NextGen community or share your thoughts on what you'd like to see covered in this series.