
The Lemurian Labs Origin Story

Musings on the world of AI

Since the age of 15, I have held the belief that AI and robotics together would have a transformational impact on society as we know it. They would function as catalysts, enabling us to do more than we could ever imagine. For almost 65 years, we have been trying to build AI systems and intelligent robots that can operate at or beyond human levels, but we have consistently fallen short.

Advancements in deep learning and reinforcement learning made it feel like we would soon be able to realize autonomous robotics, but after years of working in AI, it was starting to feel like we had reached a standstill and that the current approaches would be insufficient.

We still did not have AI models that could learn through interactions in the real world, maintain temporal context to enable better predictions, and effectively deal with a changing world. But we did know that larger models trained on more data would outperform smaller models. Around this time, I had begun to explore transformer-based architectures, and it felt like they might be the missing piece.

So, in 2018, my co-founder, Vassil, and I got together and started Lemurian to build a hybrid transformer-convolution foundation model for autonomous robotics, and a platform to manage the end-to-end lifecycle of these models so that all robotics companies could more easily leverage AI.

But in our pursuit, we very quickly realized that the kind of model we wanted to build would need to be much larger than we had originally thought. Training it would have taken over a month on more than ten thousand GPUs. We went out to speak with other AI developers and started noticing that companies were already training models that required exaflops of compute, and were planning future model runs that would require zettaflops. It became quite clear that these models were going to grow a lot larger, and within the decade would require a yottaflop to train. This was alarming because the first exascale computer wouldn't be available for another 3 years.
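To put that scaling in perspective, here is a rough back-of-the-envelope sketch of how a total training FLOP budget translates into wall-clock time on a GPU cluster. The GPU count, per-GPU throughput, and utilization figures below are illustrative assumptions, not the actual numbers from our planning at the time.

```python
# Back-of-the-envelope estimate of training time from a total compute budget.
# All numbers below are illustrative assumptions, not actual planning figures.

def training_days(total_flops, num_gpus, flops_per_gpu, utilization=0.4):
    """Wall-clock days to train, given a total FLOP budget.

    total_flops   -- total floating point operations required by the training run
    num_gpus      -- number of accelerators in the cluster
    flops_per_gpu -- peak throughput of a single accelerator, in FLOP/s
    utilization   -- fraction of peak throughput actually sustained (assumed)
    """
    sustained_flops_per_s = num_gpus * flops_per_gpu * utilization
    return total_flops / sustained_flops_per_s / 86_400  # seconds per day

# A hypothetical yottaFLOP-scale run (1e24 FLOPs) on 10,000 GPUs,
# each with ~1e14 FLOP/s of peak throughput:
print(f"{training_days(1e24, 10_000, 1e14):.0f} days")  # ~29 days at 40% utilization
```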

[Figure: PetaFLOP-days Growth]

This widening gap between the compute that models require and the compute that hardware can deliver meant that training state-of-the-art deep learning models would become increasingly unsustainable, and deploying them at scale would require orders of magnitude more compute. Vassil and I were frustrated by this realization and agreed that it boded poorly for the future of AI and would only handicap innovation. We felt something had to be done, and we took it as an opportunity to reimagine what we, as AI developers, need and want from accelerated computing, so that others like us could actually get their hands on the compute needed to train models and push the state of the art forward.

We believe that we are still at the dawn of AI. It has extraordinary potential, but fully realizing it would require consuming a significant portion of the world's energy budget to power the data centers for AI. That is an outcome we refuse to accept. We cannot callously continue with things as they are simply because they are familiar and easy. And so, we reimagined accelerated computing to reorient it for sustainability while still delivering breakthroughs in performance.

We founded Lemurian Labs with the sole mission of making AI accessible, affordable, and efficient for everyone. We brought together experts in AI, compilers, numerical algorithms, computer arithmetic, and computer architecture to challenge the status quo. We designed a software stack, data type, and computer architecture holistically around the needs of the developer to advance AI responsibly, ensuring that AI developers have the tools to be more effective and to easily develop and deploy the workloads of the future.