> If MMF doesn’t exist today, building a startup around it means betting on model improvements that are on someone else’s roadmap. You don’t control when or whether the capability arrives.
I love this. There's a tendency to extrapolate past performance gains into the future, even though the primary driver of those gains (pre-training scaling) appears to have run its course. Continued improvements now seem to come from rapid breakthroughs in RL training methodologies and, to a lesser degree, architectures.
People should treat this as a significant shift. Under scaling, the path forward was far more predictable than it is now. That means you probably shouldn't build in anticipation of future capabilities, because it's uncertain when, or whether, they will arrive.
Product-market fit has a prerequisite that most AI founders ignore: before the market can pull your product, the model must be capable of doing the job. That's Model-Market Fit. When MMF unlocks, markets explode (legal, coding...).