May 11, 2023

No Priors 🎙️116: Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection

EPISODE DESCRIPTION:
Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi, a personal intelligence.

Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship.

Sarah and Elad also discuss Mustafa’s upcoming book, The Coming Wave (expected release September 12, 2023), which examines the political ramifications of AI and digital biology revolutions.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleymn

Show Notes:
[00:06] - From Conflict Resolution to AI Pioneering
[10:36] - Defining Intelligence
[15:32] - DeepMind's Journey and Breakthroughs
[24:45] - The Future of Personal AI Companionship
[33:22] - AI and the Future of Personalized Content
[41:49] - The Launch of Pi
[51:12] - Mustafa’s New Book The Coming Wave

-----------------------------------------------------------------------

Sarah: Today on No Priors, we're speaking with Mustafa Suleyman, co-founder of DeepMind, the pioneering AI lab acquired by Google in 2014 for $650 million. And now co-founder and CEO of Inflection, along with Reid Hoffman and Karén Simonyan. Inflection just launched their first public product, Pi, last week. Mustafa, welcome to No Priors. Thanks so much for joining us.

Mustafa: Thanks for having me. I'm super excited to be here.

Elad: Yeah, we're very excited to have you today. I think one thing that'd be great to maybe start with is just a little bit of your personal story, because I think you have a really unique background. You're well-known obviously for DeepMind and your pioneering work in the AI world. But I think before all that, you worked on a Muslim youth helpline. You started a partnership and consultancy that was focused on conflict resolution to navigate social problems. I'd just love to hear a little bit more about the early days of things that you did before DeepMind. And then maybe we can talk a little about DeepMind and sort of more recent stuff as well.

Mustafa: Yeah, sure. I mean, the truth is I was very much a kind of change-the-world kid growing up, like a big believer in grand visions, doing good, having a huge impact in the world. That was always kind of what drove me. So I grew up in London and went to Oxford, but at the end of the second year of my philosophy degree, I was kind of getting a bit frustrated with the sort of theoretical nature of it all. It was full of hypothetical moral quandaries. And so a friend that I met at Oxford was starting a telephone counseling service, a kind of helpline, and it really appealed to me. It was a non-judgmental, non-directional, secular support service for young British Muslims. This was about six months after the 9/11 attacks, and so there was quite a lot of rising Islamophobia and the government was talking a lot about anti-terrorism. In general, I think migrant communities were feeling the pressure. This was a support service that was staffed entirely by us, by young people. I was 19 at the time. I spent almost three years working pretty much full-time on that.

It was an incredible experience because it was basically my first startup, and fundraising was the name of the game, except the numbers were much, much smaller than they are these days. The service was staffed by almost a hundred volunteer young people, which was just amazing because we felt like we could actually do something. It was quite liberating and energizing to actually give this a shot. I was very much inspired by human rights principles. It was deliberately not religious, even though it used some of the culturally sensitive language that helped people feel heard and understood. So yeah, it's had a very formative impact on my outlook.

Elad: Yeah, no, it's super interesting and I think we can talk more about that in the context of AI in a little bit. One other thing that you did is you also started a consultancy where you worked as a negotiator and facilitator, and I believe you worked with clients like the United Nations, the Dutch government, and others. Can you tell us a little bit more about that work as well?

Mustafa: Yeah. I mean, I was always trying to figure out how to scale my impact, and I quite quickly realized that delivering a sort of one-to-one service via a nonprofit was not going to scale a great deal, even though it had an amazing impact on a kind of human to human level. And so I was super interested in these meta structures. Like how does the UN actually influence behavior at the country level, and how could we run more efficient decision making processes where there's tension and disagreement? So we worked all over the world actually, in Israel-Palestine and in Cyprus between the Greeks and the Turks. My colleagues worked in South Africa, Colombia, and Guatemala.

I think it really taught me that learning to speak other people's social languages is actually an acquired skill, and you really can do it with a little bit of attention to detail and some patience and care. It's kind of a superpower, being able to deeply hear other people and make them feel heard such that they're better able to empathize with people that they disagree with. And that's been an important theme throughout my career, something I've always been interested in.

So I co-founded that and worked on it for, I think, three years, and soon realized the limitations of large scale human processes. I mean, in 2009, I facilitated one part of the climate negotiations in Copenhagen. It was a kind of remarkable experience. 192 countries, literally a thousand NGOs and activists, many different academics, everyone proposing a different solution, a different definition of the problem. In one way, it was sort of inspiring to see so many different cultures and ideas coming together to try to form consensus around an issue that was clearly of existential importance. On the other hand, it was just deeply depressing that we weren't able to achieve consensus. It took another half a decade, until 2015, to even get mild consensus on this.

I think that was sort of an eye-opener for me. I was like, "The world's governance systems are not going to keep up with both the exponential challenges that we face from globalization and carbon emissions, but also technology." And that was the next thing that I saw on the landscape.

Elad: So how did this lead into your interest in AI? And I believe that you met Demis when you were quite young, and I think he and your other co-founder worked together later in the lab. But I'm a little bit curious how your background and interests in these sorts of global issues then transformed into an interest in AI and the founding of DeepMind.

Mustafa: Yeah, well, around about that time actually, I guess it was 2008 or so, I was starting to keep an eye on Facebook's rise, and I was like, "This is incredible." I mean, this was a two or three year old platform at that point, and it had hit a hundred million monthly actives. And that was just a mind-blowing number to me. It was obvious that this wasn't just a kind of neutral platform for giving people access to information or connecting people with other people. Because I had come from a conflict resolution background, our entire approach was about the frame: "What is the frame of a conversation? How do you organize space? How do you prepare individuals to have a constructive disagreement? How do you set up the environment basically to facilitate dialogue?"

And so that was the lens through which I looked at Facebook. I was like, "Well, this is a frame. There's a choice architecture here. There are significant design choices which are going to incentivize certain behaviors." Obviously at that point there wasn't really ranking, but even just having a thumbs up, or the choice of which button you place in what order, and how you arrange information on the page, all of that drives behaviors in one way or another. That was a big realization to me because I was like, "Well, this is actually reframing the default approach to human connection at a scale that is completely unimaginable." I mean, perhaps only akin to the default expectations in a religion, for example. Everyone grows up with an idea that there is a patriarchy, a male God, that there's a particular role for women. Until a few decades ago, that was just an implied sort of undertone to an entire social structure for thousands of years. And that's kind of what I mean by frame. There are these implicit design choices which cause hundreds of millions of people to change their behavior.

Elad: Yeah. And I think that's super interesting because I remember working on a bunch of Facebook apps at the time when the platform launched. People were purposefully thinking about that stuff, but on the micro level, right? How do we get more users? How do we get people to convert? How do we drive certain behaviors? And so everybody, I think, was very explicitly thinking about this as a behavioral change platform, but not at the level of society. We were thinking about it in the context of just like, "How do you get more people to use this thing?" And so I think it's really interesting that people then later realize the big ramifications of this in terms of how that actually cascades in terms of social behaviors and other things. How did that lead to starting DeepMind?

Mustafa: Well, it was clear to me from that moment on, I left Copenhagen in 2009 thinking this is not the path to significant positive social change. It still needs to continue and I support those processes obviously, but I'm just saying it is just not something that I feel I could continue to work on. And so my heart was set on technology at that point.

So I reached out to Demis, who was the brother of my best friend from when I was a kid. We got together, we had a coffee, and actually we went and played poker at one of the casinos in London because we both love games. We're both super competitive, both good at poker. On that night, I think we both got knocked out pretty early in the tournament. So we sat around drinking Diet Coke, talking about ways to change the world. We were basically having exactly this conversation, like, "Is it going to be..." I mean, obviously at that point I was mostly inspired by platforms and software and social apps and connectivity and so on. Whereas Demis was way more in the kind of robotics land and sci-fi land. I mean, he was fully thinking that the way to manage the economy, the way to make economic decisions, was to simulate the entire economy. He had obviously just come off the back of his games like Evil Genius and Black & White and so on, which were kind of simulation based games. So I think that was his default frame at that point.

And then we spent many months talking and spent a lot of time with Shane Legg as well. Shane was really the core driver of the ideas and the language around artificial general intelligence. I mean, he had worked on that for his PhD with Marcus Hutter on definitions of intelligence. I found that super inspiring. I think that was actually the turning point for me, that it was pretty clear that we at least had a thesis around how we could distill the sort of essence of human intelligence into an algorithmic construct. For his PhD thesis, I think, he put together 80 definitions of intelligence and aggregated those into a single formulation, which was that intelligence is the ability to perform well across a wide range of problems. He basically gave us a measurement, an engineering kind of measurement, that allowed us to constantly measure progress towards whether we were actually producing an algorithm which was inherently general, which could do many things well at the same time.

Sarah: Is that the working definition you use for intelligence today?

Mustafa: Actually, no. I've changed. I think there's a more nuanced version of that. I think that's a good definition of intelligence, but I think in a weird way it's over-rotated the entire field on one aspect of intelligence, which is generality. And I think OpenAI, and then subsequently Anthropic and others, have taken up this default sort of mantra that all that matters is: can a single agent do everything? Can it be multimodal? Can it do translation and speech generation, recognition, et cetera, et cetera?

I think there's another definition which is valuable, which is the ability to direct attention or processing power to the salient features of an environment given some context. So actually, what you want is to be able to take your raw processing horsepower and direct it in the right way at the right time, because it may be that a certain tone or style is more appropriate given a context. It may be that a certain expert model is more suitable. Or it may be that you actually need to go and use a tool. And obviously, we're starting to see this emerge.

And in fact, I think the key, and we can get into this obviously in a moment, but I think the key element that is going to really unlock this field is actually going to be the router in the middle of a series of different systems which are specialized. Some of which don't even look like AI at all. They might just be traditional pieces of software, databases, tools and other sorts of things, but it's the router or the kind of central brain which is going to need to be the key decision maker. And that doesn't necessarily need to be the largest language model that we have.

Elad: It's really interesting because I feel like a lot of what you described is actually how the human brain seems to work, in terms of having something a little bit closer to a mixture of experts or MoE model, where you have the visual cortex responsible for visual processing, and then you have another piece of the brain specifically responsible for empathy, and you have mirror neurons. It feels like the brain is actually this ensemble model in some sense, with some routing depending on the subsystem you're trying to access. And so the generality approach seems like it almost goes at odds with some of those pieces of it, unless you're just talking about some part of the hippocampus or something, right?

Mustafa: Well, I think that's long been the inspiration, right? I think for everybody, neural networks are the obvious example. But many other elements, reinforcement learning and so on, are all brain inspired. I think there's been a lot of talk about sparsity as well, which is what you're describing. So far we've had to do very dense all-to-all connections because we haven't quite learned the algorithms for sparse activations. But I think that's going to be a very promising area. And in many ways, what I'm describing doesn't actually require sparse activations, because you could just train a decision making engine in the middle to know when to use which size model, right? So maybe in some context you would want the highest quality, super expensive, 20 second latency model. And in most other contexts, a super fast three second mini model might work fine.

I think that's going to be the key unlock actually. And quite remarkably, that's an engineering problem perhaps more than it is an AI problem, which is just a pretty surreal moment if you actually observe that, given where we are in the field.
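(A minimal sketch of the router idea Mustafa describes: a central decision-maker dispatching each request to the most suitable specialized backend, some of which are just traditional software rather than AI. The backend names, the keyword heuristic, and the latency figures are illustrative assumptions, not Inflection's implementation.)

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Backend:
    name: str
    latency_s: float                 # rough response time (illustrative)
    handler: Callable[[str], str]    # what actually serves the request

# Specialized systems sitting behind the router; note that one of them
# is plain software (a database lookup), not a model at all.
BACKENDS = {
    "mini_llm": Backend("fast 3-second mini model", 3.0,
                        lambda q: f"[mini model answers: {q}]"),
    "large_llm": Backend("high-quality 20-second model", 20.0,
                         lambda q: f"[large model answers: {q}]"),
    "database": Backend("structured lookup", 0.2,
                        lambda q: f"[database query for: {q}]"),
}

def route(query: str) -> str:
    """The 'central brain': direct processing power to the salient backend.

    In a real system this decision-maker could itself be a small learned
    model; a crude keyword-and-length heuristic stands in for it here.
    """
    if "weather" in query.lower() or "order status" in query.lower():
        backend = BACKENDS["database"]    # a tool, not AI at all
    elif len(query.split()) > 30:         # crude proxy for a hard request
        backend = BACKENDS["large_llm"]   # worth the extra latency
    else:
        backend = BACKENDS["mini_llm"]    # fast path for most contexts
    return backend.handler(query)

print(route("What's the weather in London?"))
print(route("Say hi to my mum"))
```

Note that nothing here requires the router to be the largest model in the system; it only has to make a good dispatch decision.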

Elad: When you started DeepMind, I think it was reasonably unpopular to do what you were doing. And so I think you ended up getting funded by Founders Fund and Peter Thiel and Elon Musk. But I remember at the time there were three or four parties that funded a lot of AI things, and then nobody else was really doing it, given the types of approaches you were taking, saying, "We're going to build these big AI systems that can do all sorts of things," right?

Mustafa: Yeah, I mean it was wacky. I can't say that enough. Especially for the first two years, because we founded it in 2010. For most of the spring and summer of 2010, actually most of the rest of that year, I was going to the Gatsby Computational Neuroscience Unit at UCL, sneaking in with Demis and Shane to just sit in on the lunches that Peter Dayan ran. I remember Shane sort of saying to me, "The language here is machine learning."

Elad: Yeah. You couldn't say AI.

Mustafa: Don't say AI. And I was like, "Okay. Okay, I'll keep my mouth shut. Don't worry." We certainly didn't say AGI. That was pretty weird. I mean, there weren't very many funders for us. Peter Thiel, to his credit, did actually have significant vision here, although he sold pretty early, I think, and now doesn't seem to be in the game. But yeah, he certainly saw it first. I think that all changed pretty quickly. First with AlexNet of course in 2012, and then with DQN, the Atari paper, in 2013, and then a kind of succession of breakthroughs after AlphaGo, and people got more sort of aware of it. But it still surprises me the extent to which the rest of the world is suddenly waking up, and obviously we've seen that go crazy in the last six months.

Elad: Yeah. And then I guess one last question on sort of your time with Google and DeepMind, because I think there's a lot of really exciting things to talk about in the context of Inflection and the broader field and world: what are some of the things you were most excited to have the team create at DeepMind over the years, or some of the breakthroughs that you're most proud of?

Mustafa: Yeah. Well, I mean in some ways we definitely sort of pioneered the deep reinforcement learning effort. I think in principle, it's a very promising direction. I mean, you clearly want some mechanism by which you can learn from raw perceptual data, and that directly feeds into a reinforcement learning algorithm that can update and essentially iterate on that in real time with respect to some reward function, whether that's online or offline, directly interacting with the real world in real time, or it's in a kind of batch simulation mode.

That turned out to be very valuable for a specific type of problem where a game-like environment had a very structured scalar reward and we could play that game many millions of times. That's part of the reason why we started the AlphaFold project, because it was actually my group that was looking around for other applications of DQN- and AlphaGo-like tools. And in a hackathon that we did one week, someone stumbled across this problem. We'd actually looked at it back in 2013 when it was called Foldit, which was a very small scale kind of version of this.

Elad: And just for context, sorry to interrupt. AlphaFold was focused on folding proteins, which at the time was a really hard problem, right? People were trying to do this molecular modeling and they couldn't really make any real headway with lots of the traditional approaches. And then your group at DeepMind really started pioneering how to think about protein folding in a different way. So sorry to interrupt. I just wanted to give context for people listening.

Mustafa: I think the hackathon was probably 2016. And then as soon as we saw the hackathon start to work, we actually scaled up the effort and hired a bunch of outside consultants to help us with the domain knowledge. And then I think the following year we entered the CASP competition. So these things take a long time, longer than I think people realize. That was a very big effort by DeepMind and eventually became a company-wide strike team. So in hindsight, these things do take a huge amount of effort.

Elad: Yeah. The fascinating thing here is that the work started with AlphaGo, which was how to play Go better, or how to beat people at Go. And then the same underlying approach could be morphed and applied to protein folding, which I think is an amazing sort of leap or connection to make. I used to work as a biologist, and I remember you'd spend literal years trying to crystallize proteins in different solutions. You'd do all these different salt concentrations in each well, so that if the protein crystallized, you could hit it with X-rays, and then you'd interpret those X-rays to look at the structure. And so you had to do this really hard chemistry and physics to get any information about a protein at all. And then you folks, with the machine, ran through literally every protein sequence in every database for every organism, and you were able to then predict folding, which is pretty amazing. It's very striking.

Mustafa: Yeah. I mean, if I were to summarize the core thesis of DeepMind, the motivation for generality was that you would be able to learn a rewarding behavior in one environment and transfer, in a more compressed or efficient representation, the insights that had made you successful in that environment to the next environment. That transfer learning has always been the key goal, and this was one of the very exciting proof points that it is increasingly looking likely that that's possible.

So I definitely think that's pretty cool because when you think about the sorts of problems that we are facing in the world today, we don't have obvious answers lying around. There's no genius insight that's just waiting to be applied. We actually have to discover new knowledge. And I think that's the attraction of artificial intelligence. That's why we want to work on these models because we're sort of at the limit of what the smartest humans in the world are capable of inventing. We have very pressing urgent global challenges from food supply to water, to decarbonization, to clean energy, transportation with a rising population that we really want to solve. So amidst all of the stresses and the fears about everything that's being worked on at the moment, it is important to keep in mind that there is an important north star that everybody is working towards, and we just got to keep focused on those goals rather than be too sidetracked by some of the fears.

Sarah: Let's talk about Inflection. What was the motivation for starting another company?

Mustafa: Well, I guess back in sort of 2018, 2019, it wasn't clear that neural networks were going to have a significant impact in language. If you just think about it intuitively, for the previous five years or so, CNNs had been effective at learning structure locally in the input, in an image. So pixels in an image that were correlated in space tended to produce sub-features, which were a good representation of what you were trying to predict. Maybe there were lines and edges, and they grew into eyes and faces and scenes and so on. That kind of hierarchy just intuitively seemed to make sense and seemed to apply to audio and other modalities, right? Whereas if you think about it, a lot of the structure for predicting the next word or letter or token in a sentence is very spread out, far removed from the immediate next step of the prediction.

And so it didn't look like that was working. And then to be honest, when GPT-3 came out, that was a big revelation. I had seen the GPT-2 work, but it hadn't quite clicked for me that this was significant. It was really only when I saw the GPT-3 paper that my eyes were opened wide to this possibility. It's pretty amazing that you could attend to a very, very seemingly sparse representation and use that to predict something which, on the face of it, seemed like there were billions of possibilities of what might come next in a sentence, or maybe tens of millions or something, but a lot.

And for me, it was early 2020 that I went to work at Google and I got involved in the large language model efforts. I got involved in what was called the Meena team at the time. I know that you guys had Noam on the show recently. Noam's super awesome. It was me and Noam, Daniel Coakley, and a few others. And it was just unbelievable what was being built there. When I joined, these were pretty small models; very quickly we scaled them up. It became the LaMDA group. We started seeing how it could potentially be used in various kinds of search. We started looking at retrieval, grounding for improving factuality. We started getting a feel for all the hallucinations and so on. That was just really a mind-blowing few years for me.

During my last year there, in 2021, I tried pretty hard to get things launched at Google. We were all kind of banging on the table being like, "Come on, this is the future." And obviously David Luan from Adept was also in and around that group. So the three of us in our own ways were pushing pretty hard for launch. It wasn't meant to be. Timing is everything. It wasn't the right timing for Google for various reasons. I was just like, "Look, this has to be out there in the world. This is clearly the new wave of technology."

And so in January I left, got together with Karén, my co-founder, who I worked with at DeepMind for seven years. We bought his company back in 2014. He led the deep learning scaling team at DeepMind for years and worked on all the big breakthroughs. And then of course Reid Hoffman, who's been one of my closest friends for 10 years. We've always talked about starting something together and I was like, "This is the obvious thing. Now is the time for sure." And so the rest is history. It's been a wild ride since then.

Sarah: It makes me feel a little bit better that somebody who's been such a pioneer in the field, and working on this all the time, is still constantly surprised, as I am also constantly surprised. I remember when you were first starting to get this going. Another thing I was surprised by is the focus you... I mean, I came around to it in writing the investment memo, but you have this focus on the idea of companionship rather than information as the right initial approach. You've talked about, worked on, and thought about empathy for humans and other populations for a long time. It seems counterintuitive, like why companionship?

Mustafa: Yeah, it's a great question. So to step back from that first, I think my core insight about what was missing for LaMDA was interaction feedback. In a funny way, that was exactly what was motivating Karén too; having beaten all the academic benchmarks and achieved SOTA many times, he had come to the same conclusion I had. I had seen the same thing from LaMDA. What we were missing was user feedback. And actually when you think about it, all of our interfaces today are fundamentally about interaction. You're giving your browser feedback all the time. You're giving that web service feedback, same with an app or anything that you interact with. It's actually a dialogue.

And so the way I'd position LaMDA at Google is that conversation is the future interface. Google is already a conversation. It's just an appallingly painful one, right? You say something to Google, it gives you an answer in 10 blue links. You say something about those 10 blue links by clicking on them. It generates that page. You look at that page, you say something to Google by how long you spend on that page, what you click on, how much you scroll up and down, et cetera, et cetera. And then you come back to the search page and you update your query and you say something again to Google about what you saw. That's a dialogue. And Google learns like that.

The problem is it's using 1980s Yellow Pages to have that conversation. And actually now we can do that conversation in fluent natural language. I think the problem with what Google has sort of, I guess in a way, accidentally done to the internet, is that it has basically shaped content production in a way that optimizes for ads. And everything is now SEOed within an inch of its life. You go on a webpage and all the text has been broken out into sub bullets and subheaders separated by ads. You spend five to seven or 10 seconds just scrolling through the page to find the snippet of the answer that you actually wanted. But most of the time you're just looking for a quick snippet. And if you are reading, it's just in this awkward format. And that's because if you spend 11 seconds on the page instead of five seconds, that looks like high quality content to Google and it's "engaging." So the content creator is incentivized to keep you on that page.

That's bad for us because what we want is a succinct-

Sarah: We as humans.

Mustafa: Well, we as humans, all humans, clearly want a high quality, succinct, fluent natural language answer to the questions that we ask. And then crucially, we want to be able to update our response without thinking, "How do I change my query and write this?" We've learned to speak Google. It's a crazy environment. We've learned to Google, right? That's just a weird lexicon that we've co-developed with Google over 20 years. Now that has to stop. That's over. That moment is done and we can now talk to computers in fluent natural language, and that is the new interface. So that's what I think is going on.

Sarah: Maybe we should back up for a second and just tell people about what Pi is.

Mustafa: Sure. Yeah. So building on all of that, we think that Pi... I think that everyone in the next few years is going to have their own personal AI. So there's going to be many different types of AI. There will be business AIs, government AIs, non-profit AIs, political AIs, influencer AIs, brand AIs. All of those AIs are going to have their own objective, aligned to their owner, which is to promote something, sell something, persuade you of something. My belief is that we all as individuals want our own AIs that are aligned to our own interests and on our team and in our corner. That's what a personal AI is. And ours is called Pi, Personal Intelligence. It is, as you said, there to be your companion. We've started off with a style that is empathetic and supportive. We tried to ask ourselves at the beginning like, "What makes for great conversation?"

When you have a really flowing, smooth generative interaction with somebody, what's the essence of that? And I think there's a few things. The first is the other person really does listen to you and they demonstrate that they've heard you by reflecting back some of what you've said. They add something to the conversation. So it's not just regurgitation, but they introduce another nugget, another fact. They ask you follow up questions and they're curious and interested in what you say. Sometimes there's a bit of spice, right? They throw in something silly or surprising or random or kind of wrong and it's endearing and you're like, "Oh, we're connecting." And so we've tried to capture that in our first version, and this really is just a first version; it's actually not even our biggest model at the moment. We are just putting out a first version that is skinned for this kind of interaction so that we can learn and improve. It really makes for a good companion, someone that is thoughtful and kind and interested in your world as a first start.

Elad: You're working on these sort of personalized intelligence or personal agents and you mentioned how you think in the future there'll be all these different types of agents for representing different businesses or causes or political groups or the like. What do you think that means in terms of how the web exists and how it's structured? So to your point, the web is effectively really based on a lot of SEO and sort of Google is the access point. What happens to webpages or what happens to the structure of the internet?

Mustafa: I think it's going to change fundamentally. I think that most computing is going to become a conversation. A lot of that conversation is going to be facilitated by AIs of various kinds. So your Pi is going to give you a summary of the news in the morning, right? It's going to help you keep learning about your favorite hobby, whether it's cactuses or motorcycles. And so every couple days it's going to send you new updates, new information in a summary snippet that really kind of suits exactly your reading style and your interests and your preference for consuming information. Whereas a website, the traditional open internet just assumes that there's a fixed format and that everybody wants a single format. And generative AI clearly shows us that we can make this dynamic and emergent and entirely personalized.

So if I was Google, I would be pretty worried because that old school system does not look like it's going to be where we're at in 10 years' time. It's not going to happen overnight. There's going to be a transition. But these kinds of succinct, dynamic, personalized, interactive moments are clearly the future in my opinion.

Sarah: The other group of people that is clearly worried is anybody with a website where their business is that website. I spent a lot of time talking to publishers in April because they were freaking out. What advice would you have for people who generate content today?

Mustafa: Well, I think that an AI is kind of just a website or an app, right? So you can still have... Let's say that you have a blog about baking and so on. You can still produce super high quality content with your AI. And your AI will be, I think, a lot more engaging and interactive for other people to talk to. So to me, any brand is already kind of an AI, it's just using static tools. So for a couple hundred years, the ad industry has been using color and shape and texture and text and sound and image to generate meaning, right? It's just they release a new version every six months or every year. It's the same thing that applies to everybody, like TV ads used to be, right? Whereas now that's going to become much more dynamic and interactive.

So I really don't subscribe to this view that there's going to be one or five AIs. I think this is completely misguided and fundamentally wrong. There are going to be hundreds of millions of AIs or billions of AIs. They'll be aligned to individuals. What we don't want is autonomous AIs that can operate completely independently and wander off doing their own thing. I'm really not into that vision of the world. That doesn't end well.

But if a blogger has their own AI that represents their content, then I imagine a world where my Pi will go out and talk to that AI and say, "Yeah, my Mustafa is super interested to learn about baking. He can't crack an egg, so where does he need to start?" And then Pi will have an interaction and be like, "Oh, that was really kind of funny and interesting. Mustafa will really like that." And then Pi will come back to me and be like, "Hey, I found this great AI today. Maybe we could set up a conversation; you'll find something super interesting." Or they'll record this little clip of me and the other AI interacting, and here's a three-minute video or something like that. That'll be how new content gets produced, I think. It'll be your AI, your Pi, your personal AI acting as your interlocutor, accessing the rest of the world. Which is basically, by the way, what Google does at the moment, right? Google crawls what are essentially AIs that are statically produced by the existing methods, has a little interaction with them, ranks them, and then presents them to you.

Elad: Back to your original point on Facebook, I think one thing Facebook has been criticized for is the creation of context bubbles, where the only information that you see is information that you kind of inherently believe, or the feed is kind of tailored to you. If you think about some of these AI agents, one could argue they're going to be the extreme form of this in the downside case. In the upside case, obviously there's other versions of this. But in the downside case, it will just constantly use the feedback from you to reinforce things you already strongly believe, whether they're correct or not. And so I'm a little bit curious how you think about this as we go through this new platform shift. You mentioned that you identified some of these issues quite early on with some of the Facebook or other social platforms; how do you think about that in the context of AI agents?

Mustafa: I think that is the default trajectory without intervention. So that might be a controversial view, but I think that the platforms were never neutral. That was the big lie. I think that was, frankly to me, very obvious from the very beginning. The choice architecture is a ranking, it's not a clean feed. Clearly there's billions of bits of content. So you have to select what to show. And what to show is a huge sort of political cultural influence on how we end up. And so of course AI is an accelerated version of that. My take is that all of us, AI companies as well as the old social media platforms, have to embrace the platform responsibility of curation and try to be as transparent as possible about what that curation actually looks like, what is excluded. Here I think that the Valley probably needs to be a bit more open to the European approach.

The reality is that we have to figure out as a society which bodies we trust to make decisions which influence recommendation algorithms or AI algorithms, right? And if that's a requirement for transparency of training or if it's a requirement for transparency with respect to content that has been excluded or what has been upvoted or down voted, fundamentally we have to make these things accountable to democratic structures. And that means that democratic structures need to sort themselves out pretty sharpish and actually have some functioning bodies that can provide real oversight without everybody fainting over the accusations that this is censorship and being super childish about that because now really is the time to actually get that a bit more straightened out and have some kind of responsible interactions with these companies. Because you're right, these are going to be very, very powerful systems.

Sarah: This is my bias coming in, but that seems like a harder hill to climb than the AGI hill. I don't want to-

Elad: I think we all agree with that.

Mustafa: I hope not. I think I do agree, but I hope not.

Sarah: Yeah. Yeah. Well, we can all work on it. So you described Pi as the first foray that you guys can get out into the world and learn from and improve with. What does improvement mean? Are you measuring emotional intelligence? What is better?

Mustafa: Yep. Yep, we're certainly measuring emotional intelligence. We're measuring the fluidity of the conversation. We're measuring how respectful it is. We're measuring how even-handed it is. We've already had a couple of errors where it's made some politically biased remarks, and we've tried super hard to make sure that it's even-handed. No matter how sort of racist, homophobic, or misogynistic you are, it should never be dismissive, disrespectful, or judgmental of you. It's there to talk through issues and make you feel heard and take feedback. It tries very hard to take feedback.

So yeah, we are measuring all of those kinds of things. But the next phase of where we're headed, obviously, is that we really think this is going to be your ultimate personal digital assistant. And it is going to, as I said, interact with other AIs to make decisions, buy your groceries, manage your sort of domestic life, help you book vacations, and find fun information and that kind of stuff. So it's going to get increasingly more down that route. The other thing is that quite soon it will have the ability to access real-time content on the web. So it'll be able to look up the weather and news and other kinds of fresh content like sports results, provide citations, and increasingly add a lot more of those practical utility features that you would expect from your personal intelligence.

Sarah: So in my early conversations with my Pi, I guess maybe I shouldn't be so surprised, it's very human and people like to talk about themselves. But I immediately invested a reasonable amount of effort in personalizing it, right? I'm like, "Okay, here are a bunch of things about me that you should know, what I'm like and my interests and how you can be useful to me." What surprised you in usage? Or maybe you expected it, but what would surprise our listeners?

Mustafa: Yeah, that's a great question. I mean, a lot of people proactively share a huge amount of personal information. At the moment, our memory is not that long. It's about a hundred messages, which is actually still quite a lot, surprisingly a lot. But what we would really like is to be able to grab that knowledge and store it in your own personal brain and have Pi be your kind of second mind, able to remember all of your subtle preferences, likes, habits, relations and so on, to be super useful to you. I think in time some people will want to connect other data sources like email and documents and drive. I think some people are already starting to do that and so on.

It's very interesting to see what people ask Pi to ask us to do. So they're like, "Can you tell your developers that I really love this voice. I'm really enjoying talking to..." I think it was P2. We've just called them P1, P2, P3, P4, our voices. And of course some people are like, "Can you tell your developers that it should really know that I wrote the following stories for Forbes, but I didn't write this story on this other topic." I was just like, "Dude, that was a journalist yesterday or the day before." So yeah, seeing what people give us feedback on is really, really helpful.

Sarah: Okay. Inflection today is still a relatively small team. What's it like as a company culturally? You guys are recruiting, what are you looking for?

Mustafa: Yeah, we are a pretty small team. We are about 30 people. We've hand selected a very, very talented team of AI scientists and engineers. Everybody on the technical side goes by MTS, member of technical staff. It's super important to us that we don't draw a strong distinction between researchers, scientists, engineers, data scientists, and anything else. To us, that equality and respect is really important, and we've seen that go wrong at other labs previously. I think it's an important modification because everybody makes a really big contribution. We're very much an applied AI company, so we don't publish and we're not really focused on research, even though fundamentally what we do do is applied research in production. I mean, we run some of the largest language models in the world. We have state-of-the-art performance across many of the main benchmarks, with the exception of coding, because we don't have Pi generate code and it's not a priority for us.

So yeah, it's a very energetic, very high standards environment. We are very focused on ICs. So everybody is an exceptional individual contributor and mostly self-directed. We don't do managers just yet. It's just two of us doing management, which unbelievably has worked so, so well because we have such senior, experienced people and they're very driven; they know what to do.

My experience of building teams like this over the last decade and a half is that the best people really just want to work with really high quality people, be given outstanding amounts of resources and freedom and focus on a shared goal. So we have a very sort of explicit company goal. Every six weeks we ship. And in our seventh week, we come together in person to do a hackathon and really push super hard as a team because that forms great bonds and it's really fun. We have drinks and dinner and hang out and stuff like that. It's a week of intensity, which closes our launch, and then we plan again for the next six weeks. So it's actually a really nice rhythm. And I found that most people make up the second half of their OKRs anyway, and a 12-week cycle is just too long and BS. So six weeks is actually perfect and it creates a lot of accountability and a lot of fun.

Elad: So one thing that a lot of people talk about is, "How do these models actually scale? What is the basis for the next generation of these types of models and their performance? Where does it asymptote?" How do you think about scalability? How do you think about the underlying silicon that drives it? Is it a data issue? Is it a compute issue? I'm just really interested in how you think more broadly about these really large scale models, since you're building many of them now.

Mustafa: Well, the incredible thing about where we've got to at this point is that all of the progress, in my opinion, is a function of compounding exponentials. So over the last decade, the amount of compute that we've used to train the largest models in the world has increased by an order of magnitude every single year. So I went back and had a look at the Atari DQN paper that we published in 2013. That used just two petaFLOPs. Some of the biggest models that we're training today at Inflection use 10 billion petaFLOPs. So nine orders of magnitude in nine years is just insane. So I feel like it's super important to stay humble and acknowledge that there is this epic wave of exponentials which is unfolding around us, which is actually shaping the industry. And so when it comes to predictions, you have to just look at the exponential; it's pretty clear what's going on. That's just on the amount of compute side.
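(A quick back-of-the-envelope check of the growth rate, using only the figures quoted above; a sketch, not precise training-compute accounting.)

```python
import math

# Training-compute figures as quoted in the conversation, in petaFLOPs
dqn_2013 = 2            # Atari DQN paper, 2013
inflection_now = 10e9   # Inflection's biggest current models

ratio = inflection_now / dqn_2013
print(f"growth: ~{math.log10(ratio):.1f} orders of magnitude")  # ~9.7
print(f"per-year factor over 9 years: ~{ratio ** (1/9):.0f}x")  # ~12x, i.e. roughly 10x per year
```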

The data side, I think everyone's super familiar with. We're using vast amounts of data and that's continuing. But I think the other thing that people don't always appreciate is that the models are also getting much more efficient. So one of the big breakthroughs of last year, which got some attention but probably not as much as it deserved given how many breakthroughs there were, was the Chinchilla paper, which I'm sure a bunch of you will be familiar with. It's a very, very significant result showing that we could actually train much smaller models with much more data for longer, that this was actually compute optimal, and achieve essentially comparable performance to the models that were previously being trained. And so that gives us an indication that it's very early in the space for architectures, and these models are highly underoptimized and there's a lot of low hanging fruit.
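(As a rough illustration of the Chinchilla result: the sketch below uses the widely cited approximations from the paper, roughly C ≈ 6·N·D training FLOPs and a compute-optimal ratio of about 20 tokens per parameter. The constants and the helper function are illustrative rules of thumb, not Inflection's recipe.)

```python
import math

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Approximate compute-optimal (params, tokens) split a la Chinchilla.

    Assumes C ~= 6 * N * D and D ~= 20 * N; both are rules of thumb
    from the paper, so treat the outputs as order-of-magnitude guides.
    """
    # C = 6 * N * D with D = r * N  =>  N = sqrt(C / (6 * r))
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a ~5.76e23 FLOP budget, roughly Chinchilla-scale training compute
n, d = chinchilla_optimal(5.76e23)
print(f"~{n / 1e9:.0f}B params on ~{d / 1e12:.1f}T tokens")
# -> ~69B params and ~1.4T tokens, close to the paper's 70B/1.4T model
```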

And so that's what we found over the last year and a half. Actually, the lead author of Chinchilla, Jordan Hoffmann, is on my team here at Inflection. We have a bunch of really outstanding people who have produced a number of really awesome proprietary innovations building on work like that. And so I think both trajectories are going to play out. Scaling up and building larger models is definitely going to deliver returns; we are obviously pursuing that. We have one of the largest supercomputers in the world. And at the same time, we are going to see much more efficient architectures, which are going to mean that many, many people can access these models. And in that sense, it's the coming wave of contradictions in AI. That's what's happening.

Sarah: I have one last question for you.

Mustafa: Sure.

Sarah: So you are working on a book. I know you can't say much about it yet. But why? You're a pretty busy guy.

Mustafa: I love reading. I love writing and I love thinking about stuff. What I've realized over the years is that the best way to sharpen your thoughts is to create hard deadlines. So that was one of the main things. And I'll be honest, did I regret multiple times over the last year and a half agreeing to a book deal with Penguin Random House at the same time as doing a startup? Yes. Like multiple times I was tearing my already quite gray hair out. But it's nearly finished and it has been absolutely phenomenal. And yeah, I've super enjoyed it. The book's called The Coming Wave, and it's about the consequences of the AI revolution and the synthetic biology revolution over the next decade for the future of the nation state. And I try to intersect the political ramifications with the technology trajectories at the same time. So it's been a lot of fun.

Sarah: My hobbies are also this trivial, Mustafa. So, good. Thank you so much for joining us. Congratulations on the launch. And for our listeners, you can try it at inflection.ai and find Pi in the App Store.

Mustafa: Thanks so much. It was really fun talking to you both. See you soon.