Recursive Superintelligence: Why Self-Improving AI is the Next Frontier

Recursive is building an open-ended architecture that teaches AI to improve its own codebase.

We are witnessing a critical shift in how artificial intelligence will be developed. The scaling laws of pre-training are unlocking massive capabilities, but to complement these architectures and achieve true reasoning and scientific discovery, the industry needs to explore orthogonal S-curves.

Recursive Superintelligence, led by CEO and co-founder Richard Socher, is doing just that. They believe the path to superintelligence is teaching AI to improve itself, and to learn, through open-ended algorithms that drive endless scientific discovery. GV is incredibly proud to co-lead their early $650M funding round at a $4.65 billion valuation.

The core thesis behind Recursive is elegant: AI is code, and now AI can code. When these two realities connect, the self-improvement loop can be closed. Following this logic, instead of relying on human engineers to hand-design optimizations, Recursive is building systems that conduct experiments to safely improve their own capabilities. The system learns to identify its own limitations, write its own benchmarks, and actively rewrite its own codebase to become more capable.
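The loop described above, propose a change to your own code, measure it against benchmarks you maintain, and keep only the changes that demonstrably help, can be sketched in a few lines. This is a minimal illustrative sketch under our own assumptions; the function names (`propose_patch`, `evaluate`) are hypothetical and do not reflect Recursive's actual system.

```python
# Hypothetical sketch of a self-improvement loop: an agent proposes a patch
# to its own codebase, evaluates the candidate against its own benchmark
# suite, and keeps the change only if it measurably improves performance.
# All names here are illustrative assumptions, not Recursive's actual API.

def self_improvement_loop(codebase, benchmarks, propose_patch, evaluate, iterations=3):
    """Greedy hill-climbing over candidate self-modifications."""
    best_score = evaluate(codebase, benchmarks)
    for _ in range(iterations):
        candidate = propose_patch(codebase)      # agent rewrites part of itself
        score = evaluate(candidate, benchmarks)  # run its own benchmark suite
        if score > best_score:                   # keep only measured improvements
            codebase, best_score = candidate, score
    return codebase, best_score


# Toy usage: the "codebase" is just a capability level, each patch raises it
# by one, and the benchmark score is the level itself.
code, score = self_improvement_loop(
    codebase=1,
    benchmarks=None,
    propose_patch=lambda c: c + 1,
    evaluate=lambda c, b: c,
)
# After 3 accepted improvements, capability climbs from 1 to 4.
```

The key design point the toy makes concrete is the gate in the middle: every self-modification is an experiment, and only changes that pass the system's own evaluation are merged back.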

This open-ended architecture is inspired by biological and cultural evolution. Consider the human brain: it evolved over millennia into a system of immense complexity that runs on a mere 20 watts of power, roughly the equivalent of a household light bulb. Biology is living proof of what is possible. Just as understanding the physics of aerodynamics allowed humans to build aircraft that fly far faster than birds, extracting the fundamental principles of intelligence will allow us to build systems that think faster and further than humanly possible.

At GV, we have always believed that category-defining companies are human capital companies at their core. Richard Socher has assembled an exceptional group of seven co-founders: Tim Rocktäschel, Alexey Dosovitskiy, Josh Tobin, Caiming Xiong, Yuandong Tian, Tim Shi, and Jeff Clune. Each of them has spent years at the frontier of machine learning, spearheading structural AI breakthroughs like the Vision Transformer (ViT) and pioneering continuous safety loops like "rainbow teaming." They bring a unique blend of deep academic research from UCL, UBC, and Google DeepMind alongside the operational instincts of leading early OpenAI agent work and proven unicorn CTOs. The team also shares a long-standing intellectual lineage: four of Recursive's employees co-authored the seminal Darwin Gödel Machine paper alongside Jeff Clune. Importantly, they share a sense of humility and a commitment to safety while pushing discoveries at the frontier.

Their first goal is to train a system with the capability of "50,000 PhDs," focusing first on the science of AI itself. Once this engine is running, the team plans to point this "Eureka machine" at humanity's most complex quantitative frontiers. The implications of automating the scientific method are vast, ranging from accelerating therapeutic discoveries and curing diseases to designing next-generation battery chemistries and unlocking advanced fusion physics.

There are few companies where the saying “the possibilities are endless” actually holds true, but this is one of them, and we’re inspired to be a part of the journey. If you, too, are inspired by the research and engineering challenges to be solved, reach out to Recursive — they are hiring for a handful of roles in San Francisco and London.
