April 27, 2023

No Priors 🎙️115: Listener Q&A - Elad & Sarah on AI Investment Hype, Foundation Models, Regulation, Opportunity Areas and More (TRANSCRIPT)

EPISODE DESCRIPTION:

This week on No Priors, Sarah and Elad answer listener questions about tech and AI. Topics covered include the evolution of open-source models, Elon AI, regulating AI, areas of opportunity, and AI hype in the investing environment. Sarah and Elad also delve into the impact of AI on drug development and healthcare, and the balance between regulation and innovation.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Show Notes: 

[0:00:06] - The March of Progress for Open Source Foundation Models 

[0:06:00] - Should AI Be Regulated?

[0:13:49] - Investing in AI and Exploring the AI Opportunity Landscape

[0:23:28] - The Impact of Regulation on Innovation

[0:31:55] - AI in Healthcare and Biotech

Sarah Guo: Hey, everyone, welcome to No Priors. Today we're going to switch things up a bit and just hang out and answer listener questions about tech and AI.

Elad Gil: The topics people want us to talk about include everything from the evolution of open source models to the balkanization of AI, Elon AI, which I think will be super interesting to cover, regulating AI, and AI hype in the investing environment. Let's start with the march of progress for open source models. I guess, Sarah, what have you been paying attention to and what are some of the more interesting things that you view happening right now?

Sarah Guo: Yeah. There's nothing out there today in open source that is like GPT-4, 3.5, or Anthropic Claude quality. There's one player out in front, and that's OpenAI, but I think the landscape has changed a lot over the last couple months. Facebook's LLaMA is quite good. Many startups are just using it despite its licensing issues, assuming Mark won't come after them. And then you have a number of other releases that have happened. Together just released a pre-training dataset, which seems quite good. Stability just released Stable Diffusion XL in the image gen space. And so I think the larger dynamic is that there's been an increasing number of people and teams that now know how to train large models.

The cost of a flop is only going to go down. There's a lot of investment in distilling models. And a lot of researchers that you and I know would claim that it's going to be 5X cheaper to train the same size model the second time around, once you've made your mistakes and know what you're doing. And then you have these other accelerants: you can use these models to annotate your datasets and increasingly do advanced self-supervision. And VCs are going to continue to fund foundation model efforts, including open source foundation model efforts. If I were a betting woman, and I am, I'd bet there's a 3.5-level model in the open source ecosystem within a year. And I didn't personally believe that would be true a few months ago.

Elad Gil: I guess that puts it about two to three years behind when GPT-3.5 came out though. And so do you think that's going to be the ongoing trend? That there'll be a handful of companies that are ahead of open source by one or two generations?

Sarah Guo: Yeah, I think that's the status quo, if we just straight-line project. I imagine that will continue to happen. And the real question is, can you stay in the lead if you are OpenAI and get paid for that, or is that the objective of the organization anyway? I think if you have a great leader and a lot of resources, and a lot of really talented people, that's not something I want to bet against.

Elad Gil: Is there anything you think is coming in terms of other big shifts in the model world, either on the open source side or more generally?

Sarah Guo: Yeah. We should also talk about just stuff that you are interested in investing in and generally paying attention to. But I think the big idea that's been very popular over the last few weeks is autonomous agents. And I don't think that's a... I want to hear what you think about this too. I don't think that's necessarily an architectural change, but for our listeners, the basic idea is to orchestrate LLMs in this iterative loop towards some high-level goal, where they're doing planning, and memory, and prioritization, and reflection. And so you're not necessarily changing the architecture of the LLM itself, but this orchestration allows you to do many new things, possibly. The classic example being: make money on the internet for me. And there's a good number of hackers trying to figure out how to make agents that, for example, analyze demand, find a supplier, set up a drop-ship Shopify store, generate ads, then promote that store on social. The whole loop being one call to an agent with this high-level goal of, make money on the internet for me. Do you think this stuff is interesting around autonomous agents?
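[A minimal sketch of the agent loop Sarah describes: plan, act, remember, repeat toward a high-level goal. The llm() function here is a hypothetical stand-in for any model API, not a real library call.]

```python
def llm(prompt: str) -> str:
    # Hypothetical completion call; wire this to your model provider.
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []  # running log of actions and their outcomes
    for _ in range(max_steps):
        # Planning: ask the model for the single next action, given the
        # goal plus everything the agent has done so far.
        plan = llm(
            f"Goal: {goal}\nHistory so far: {memory}\n"
            "What single action should be taken next? Reply DONE if finished."
        )
        if plan.strip() == "DONE":
            break
        # Acting: a real agent would call a tool here (search, storefront
        # setup, ad generation); in this sketch the model just narrates it.
        result = llm(f"Carry out this step and describe the outcome: {plan}")
        # Reflection and memory: record the step so the next loop sees it.
        memory.append(f"did: {plan} -> got: {result}")
    return memory

# e.g. run_agent("make money on the internet for me")
```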

Elad Gil: I think it's super interesting. And there's the old saying that the future is here, it's just not evenly distributed. And I feel like that's one of those things that people in the AI community have been talking about for a while, and there have been very clear ways to do it. And then I think there's one or two people that went and implemented interesting things there in terms of AutoGPT or other things. And then everybody's like, oh my gosh, this can happen. And I think a lot of people in the community are like, this is really cool. But at the same time, of course it can happen. Because effectively you have some form of context as an AI agent is acting, and then you use that context to inform the next motion and update the prompt or what the model's going to do.

I think there's other forms of memory that people have been talking about that are super interesting. How do you make that a bit more of a cohesive part of how an LLM or AI agent functions? Because right now, effectively every time you start a new instance of ChatGPT, a new chat, you've lost the context on all the other sessions you've had. And so a lot of what people are thinking about is, how do I create ongoing context so that whatever chatbot, or whatever API I'm using, remembers everything else I've done with it over time, or perhaps everything it's done with every other user over time. And then that becomes really powerful because you're effectively crowdsourcing an understanding of the world, and then integrating it into an AI system and agent. And so suddenly you have global context. Imagine if you as a person understood the life of every other person who's lived, and then you had all the context around what that means in terms of just how you operate in the world. And so I think [inaudible 00:05:19]-
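[A rough sketch of the ongoing-context idea Elad describes: persist what each session learned and fold it back into every later prompt, instead of starting each chat blank. The file-based store and the llm() stub are illustrative assumptions, not any real product's API.]

```python
import json
from pathlib import Path

MEMORY_FILE = Path("assistant_memory.json")  # assumed local store

def llm(prompt: str) -> str:
    # Hypothetical completion call; wire this to your model provider.
    raise NotImplementedError

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(note: str) -> None:
    notes = load_memory()
    notes.append(note)
    MEMORY_FILE.write_text(json.dumps(notes))

def chat(user_message: str) -> str:
    # Unlike a fresh chat session, every call starts from the accumulated
    # context of all prior sessions.
    context = "\n".join(load_memory())
    reply = llm(f"Context from earlier sessions:\n{context}\n\nUser: {user_message}")
    remember(f"user: {user_message} / assistant: {reply}")
    return reply
```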

Sarah Guo: I wouldn't operate anymore, Elad. We'd be hive mind.

Elad Gil: Yeah, exactly. It's just the hive mind, I think that's where we're all heading. You heard it here first.

Sarah Guo: Okay, fine. While we're on this topic of directionally AGI, there have been a lot of calls for regulation of AI, from Sam Altman, to Satya, to Elon Musk. Do you think AI should be regulated?

Elad Gil: I think the first question is, why do people want to regulate to begin with? And I think there's two or three reasons. One is, if you're an incumbent, it actually really benefits you to get lock-in. And one of the best ways you can get lock-in as an industry is to have regulators get involved, because they start blocking innovation and creativity and new efforts. There's that famous chart of where prices have gone up by industry and where they've gone down, and they've largely gone down in areas that have been unregulated traditionally, that's things like software or certain types of food products, or other things. And then there's areas where prices have gone up dramatically. And that's education, it's healthcare, it's housing, it's the most regulated industries. Regulation tends to lock in incumbents. It means you have fewer people making drugs and you have fewer people doing all sorts of things that could actually be quite useful. So that's one thing.

The second is, I think some people are just scared. And in some cases you could say, well, there's reasons to be scared. What if the AI is used to unleash a virus? Or what if the AI is used to cause war? And if you look at the history of the 20th century, humans have done that pretty well on their own already. It's not a new concept that bad things will happen, and often they're driven by other people, versus technology. And of course technology can have accidents or can be misused, but fundamentally, usually people have driven a lot of the really bad things that have happened over time. And there's a really long history of doomers who are wrong. And I should say, by the way, on AI I'm a short-term optimist, long-term doomer. I actually think eventually there may be an existential threat from AI, but I think in the next 10 years everything will be okay.

There may be accidents or maybe terrible things that happen, but fundamentally it won't be any different from any other period. But if you look at the doomerism, in the past it's things like public intellectuals worried about swine flu, and nothing happened. A lot of people worried in the 70s about overpopulation: we're going to have too many people and the world will starve, and we're going to have global famine. And that didn't happen. And so we have a lot of examples of people in the past predicting doom when nothing happened. And we also had that during COVID, where a lot of people said COVID is the worst thing that ever happened to the world. And then they would be hosting dinner parties unmasked inside with large groups later that evening. And so I just think you have to look at people's actions versus their words.

And fundamentally, my view would be let's not regulate right now, at least most things. I think the things that maybe should be regulated are things related to export controls. So there may be advanced chip technology that we don't want to get out of the country, and we already have those export controls on other capabilities. We may want to limit the use of AI for certain defense applications. Do we really want a really smart hyper-intelligent AI agent driving swarms of offensive drones or weaponry? And so there may be some need to do some global regulation for things like that, or at least something like what we've done for chemical weapons. And then I think in the long run, we may want to think twice about advanced robotics and their implications as AI becomes more of an existential threat to humanity.

But overall, if I had to choose right now I'd say don't regulate in the short run, except for those areas that I mentioned. And then I think that the big pivot point for regulation may actually come during the 2024 election. Because I think that's the moment that people will show examples of AI being used to influence the election or influence voting behavior, just like ads influence voting behavior. But AI could write better ad copy or do other things. I worry a bit about that becoming the reason that people claim they should regulate things, just like they got really aggressive about social networking. I don't know. What do you think?

Sarah Guo: I largely agree with that. I feel like it's worth describing what I think are the two more rational cases. By the way, I think it's too early to regulate. I just want to make sure that's very clear. But I think the two rational cases I've heard, because I keep asking smart people that I don't think are taking cynical actions why they're afraid, or short-term afraid, or why they think this makes sense. And the two things I've heard are, one, this is unlike the past because of the speed of progression. This hard takeoff idea. And I see you nodding and smiling, but when very, very smart people who are working at the state of the art tell me that they're concerned within a 10-year band for humanity, because of the ability of this current generation of models to be used to train the next generation of models, and we're all very bad at thinking about compounding.

I'm like, okay, that's not a completely unreasonable point of view. I think the other is more of a tactical thing for the industry. Which is, as you said, whether it be the election or some other trigger, there's a version of the reaction to this from people who are afraid or from political opportunists that goes in two directions. One is mass surveillance, or one is complete lockdown. I think the tactical thing is to try to create a democratic process that gets ahead of it with something that's a reasonable path forward. But largely, I feel it's very early to be figuring that out. And then you also have the problem of, if you're talking about the more existential risks or the AGI risk, alignment research is very tied to capability research. And so it's impossible to be like, we're going to stop making any progress on research, but figure out how to control this stuff.

Elad Gil: Yeah, absolutely. And I think related to that, I think it's really important, to your point, to separate out almost what I consider technology risk from species risk. Technology risk is, there's some bad things that can happen due to technology being abused, and that could be a nuclear disaster or that could be an AI being used to shut down a pipeline, or to crash a flight, or to do something really bad. And those sorts of things already happen, but you could imagine it could accelerate it. In that case, you could literally turn off a bunch of servers. You could turn off every machine on the planet if you really needed to, and humanity would keep going. And it'd be a reset, but we'd reset fine. Separate from that, there's species level risk. Is there an existential threat to humanity? And that's like an asteroid hits the planet and kills everybody.

And I think a lot of the people who talk about these things mix those two things. And I think the true doomer view is, well, AGI eventually becomes a species and we compete with it, and then it wipes out all humans. And in order for an AI to rationally want to kill everybody, you'd need some replacement for the physical world, because eventually all the hard drives would burn out and the AI would die, if it existed as a species or a life form. So you need physical form for the AI in order for it to truly be an existential threat. And that's why if I were to focus on an area, it'd probably be robotics or something like that, because that's where you suddenly give physical form to something. And if you're like, oh, isn't it great if AI can build my house, and AI can now build a data center and now build a solar farm? And eventually, now build a factory.

You've basically created an external system that no longer needs people. And that's when I think there's real risk. And that's why on the 10-year time horizon I'm not that worried because robotics and atoms in the real world takes a lot of time. So even if you have this hyper-intelligent thing running, the reality is if you really needed to, you could turn off every server on the planet.

Sarah Guo: Yeah, I agree with embodiment being a key piece in this theory of the AIs are going to kill us. We're pretty far away from that. Okay, one question we got from listeners, and that I'm sure you get all the time: there's a ton of hype in the AI investing and startup world right now. What do you think of it? Is it justified? Is it appropriate?

Elad Gil: Yeah, I think we've both lived through a couple different hype cycles now. There were hype cycles around social and mobile, and then the cloud, and then multiple crypto hype cycles. And the reality is, out of all those hype waves interesting things emerged. And maybe in the standard hype cycle, 95 or 99% of things fail, but there's still the 1% that work. Or maybe it's 5% work and 1% end up being spectacular. And I think the hard part usually is to know what's actually going to work, because so many things seem so overlapping and similar. And so I remember when the mobile wave happened, or mobile and social at the same time, a bunch of different people I know started mobile photo apps, and each one of those things took off. And so you'd suddenly see something go from zero to a million users in a week. Just literally it just spread virally, and none of them stuck.

They all burnt out, and the only one that really stuck was Instagram. And in part, that's because Instagram emphasized filters, which things like Camera Plus already had. And then in part it emphasized a network. It's like, let's have a follow model like Twitter. And that's the thing that really worked. And so it feels to me like if you'd gotten excited about the overall cycle, you were right, but if you got involved with the wrong set of photo apps or you built the wrong thing, then you were wrong in some sense. I guess you were right about the trend, but wrong about the specific substantiation of it. And it seems like the same thing here. And so I think often it's that question of, Peter Thiel has a good saying, which is you don't want to be the first in market, you want to be the last standing.

And so I think it's a similar thing here. How do you end up being the last person standing, or last company? And it may be the same thing as being the first mover, it's Amazon and books, or things like that. But sometimes it means you actually do something a bit smarter and you come later in the cycle and it's fine. Are there specific areas you're most excited about in this wave or cycle, or opportunities that you think these things that are obviously going to happen, or are important to happen?

Sarah Guo: Absolutely. And I want your ideas. Some of them are shared ideas, to be fair. But I would agree, and I'll actually add a data point. I was just over at OpenAI yesterday, and they're biased, perhaps in a way that I'm also definitely biased. But a friend was saying they actually think that investors are being somewhat wary at the application level right now, because they can't figure out what's going to be left standing. It's a very different competitive dynamic. But the market is extreme for researcher-led foundation model companies, because everybody is pretty sure OpenAI is going to be around. And I agree the applications are going to be non-obvious. But as one example, any investor that claims they knew image generation from text was a killer use case a year or two ago, besides you, is just empirically wrong. Given David's completely investor-free cap table and amazing business.

David, in case you're listening, I still love Midjourney and want to invest. That's why this podcast exists. But in terms of specific things that I'm interested in now, I'd say I think there are a lot of things on the application side that are exciting. To start with some of those, I think voice synthesis and dubbing are going to be just a huge unlock for content providers and publishers. I'd like to back something in that space. I was just talking to some people at a very large financial institution, and they said the biggest potential cost savings, on the order of tens of millions of dollars a year for us, is in turning every line of code we have into explanations for a regulator. And that's at once pretty specific to them, but also not. I think in the areas of audit, tax compliance, accounting, reconciliation, there's a lot of natural language understanding that could be better served by semantic understanding. And so I think that's an obvious area.

I think annotation is changing again, and we can use... This is a very specific idea, but we can use LLMs to do much more here. We talked about agents. And then this isn't necessarily a specific company idea, but I think architecturally retrieval is a field of active research, but the idea of personalizing LLMs with enterprise data is an important but very tricky one. You have to do data management, you have issues in scalability, in sync, access control. You likely want to apply traditional IR. If you own both retrieval and the model, you can do very magical things. And so I think the ChatGPT retrieval plugin is super cool, but it just doesn't serve a whole host of use cases, and I think this entire half of the stack is still missing. Those are a couple of the things that we're explicitly hunting around. But, what are you paying attention to?
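[A toy sketch of the retrieval pattern Sarah is pointing at: embed enterprise documents, pull the closest ones to a query, and stuff them into the prompt. embed() and llm() are hypothetical stand-ins, and a production system would add the data management, sync, and access control she mentions.]

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical embedding call; wire this to your embedding model.
    raise NotImplementedError

def llm(prompt: str) -> str:
    # Hypothetical completion call; wire this to your model provider.
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def answer(query: str, documents: list[str], k: int = 3) -> str:
    # In production the index lives in a vector DB with traditional IR
    # layered on top; a plain in-memory list is enough to show the idea.
    index = [(doc, embed(doc)) for doc in documents]
    q = embed(query)
    top = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:k]
    context = "\n---\n".join(doc for doc, _ in top)
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```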

Elad Gil: Yeah, I think we have a lot of overlap, as you know. I'm super interested in voice synthesis, dubbing, and related areas, both in terms of infrastructure but then also in terms of application areas. And so I think that's going to be a really big sea change that perhaps people aren't paying enough attention to. I'm actually quite long on compliance in general. I've done a bunch of things like AGENTS.inc and Medallion, and other compliance-related companies in the old world. And so I think that's just an area where converting spreadsheets and offline processes, and random checks and docs into code is really powerful, and there's always going to be demand for it. I think there's a lot to do on the app side. I actually am maybe on the other side of people who think that it's impossible to tell what's good, and nothing's defensible, and everything's just a wrapper on GPT, or whatever.

And I actually think there's tons and tons to do there. I mean Harvey AI, which we're both involved with, I think is a great example on the legal side. But I think there's two dozen things like that to build over time, and it probably takes five years for all those things to get discovered and built and substantiated. I don't think it's this year there'll be 12 of them, but I think every year there'll be a couple really interesting ones. And then there's probably a lot to do on the tooling side. Obviously LangChain is a hot one in the area. But there's everything from people exploring vector DBs like Chroma, on through to other forms of infrastructure, like LlamaIndex and other things.

I just think there's a lot to be done at every level of the stack. It'll be interesting to ask what happens on the foundation model side. Because to some extent the question is whether we've locked in a few of the leaders, or there's more to come. And I think the Elon Musk startup that's rumored to exist is an interesting example of a new entrant. And back to regulation, Musk was asking for a six-month moratorium on progress, which seems to be very self-serving if you're simultaneously starting an LLM company. And [inaudible 00:20:09]-

Sarah Guo: Just hold off until I catch up, right?

Elad Gil: Yeah. And if I was in that position, I'd do the same thing. Don't get me wrong. It's just meant as, remember people's incentives. But I do think there may be some interesting things to do on the foundation side. And I do think some people are doing that in a vertical-specific way. They're saying, hey, we're going to build a healthcare-specific model, and we're going to build a... Bloomberg did their Bloomberg GPT, or whatever it was called, on the financial side. And so I think you can clearly see these verticals emerge. And a lot of people obviously are debating, will a general purpose model discover all those use cases or are you going to have bespoke vertical models? And what part of the actual logic and synthesis and magic of these AI models comes from the fact that you've trained on a massive amount of data and language and then you're applying it to a specific area with potentially unique data that's overlaid?

Or is it something that can be dealt with vertically specific and you don't need that broad based understanding of the world? I think that's a really interesting area of exploration. And I have no idea what to predict there. I don't know if you have any thoughts on that.

Sarah Guo: Well, I would agree. I think there is real opportunity for vertical-specific models where you can imagine that control, for either compliance or safety, or just performance and reliability of input data, makes sense. As well as if there are architectural differences. Because, for example, you have multimodal data in healthcare and pharma. If you are looking at protein structures and radiology and healthcare records, it's not clear that you would want to train that in exactly the same way as a general web text model. I think that makes sense. On the broader foundation model question... We were talking about open source at the beginning. I think that OpenAI will continue to be a leader. Anthropic is very dangerous here, a really talented team. But the number of people who know how to train large models is going up, and the cost of a flop is going down. And so I think there's just a lot of incentive in the ecosystem for additional players to compete. What do you think is the opportunity for incumbents, or how should they react to all of this?

Elad Gil: Yeah. Obviously, with every technology wave there's a differential split in terms of where market cap, revenue, employees, innovation, et cetera goes in terms of incumbents versus startups. And every wave is a little bit different. The internet wave was almost... It's probably 80% startups in terms of value, and 20% incumbents. And then mobile was the other way around, it was 80% incumbents and 20% startups. The big platforms for mobile were Google and Apple, but then you had a lot of interesting apps like Instagram and Uber and others emerge. For crypto it was 100% startup value. And it feels like in this wave it's probably 80/20 again. Google will probably become a player. OpenAI is closely aligned with Microsoft. And then Salesforce with AI is probably Salesforce. It probably isn't a new company. It might be. I actually think certain companies are vulnerable for the first time because of these capabilities, and that includes everything from ERP providers, where there's a defensive moat through integrations.

And obviously, this could make integrating your data into multiple things really easy and fast. Instead of six months to roll out SAP, maybe you could have a next-gen approach where it takes a day or two on a new product to do all the integrations that you would've spent six months of consulting on. And so there may be certain types of companies that are vulnerable. But the reality is, I think in most cases if an incumbent is already doing something and they're quick to integrate it, then it works great. The one area that may be really interesting is almost like there's probably room for a new private equity approach. Where if you think about how private equity companies bid on things, they basically look at cash flows and costs and all the rest of it. And if you can radically decrease cost for people-heavy businesses by using LLMs as a replacement for certain types of work, or at least an augmentation, then you can differentially bid on companies as a private equity shop.

And so I think people who do buyouts could have this as a strategy. I don't know that any of them will, because most of them tend not to be very technology savvy. But I think there's really interesting alternative things to do at scale there that tend to be underdiscussed. The healthcare side that you mentioned earlier is fascinating. Because if you look at the cost of developing a drug, for example, say it's a billion or $2 billion to develop a drug, whatever it is. Most of the early-stage development is in the tens of millions of dollars at most. And so I think a lot of the default focus of people who don't understand healthcare very well is to say, I want to use this for drug development. And it may help with certain aspects of drug development later, but usually I think the places in healthcare where this will really get applied fast are on the more operational or services-intensive side.

It's healthcare delivery. It's lowering the cost of a doctor visit or telemedicine. It's making payments easier and more streamlined if you're dealing with insurance reimbursement. And so I think there's really exciting things to be done there. Color, a company I co-founded, is, for example, thinking about different application areas. And I just think there's a real wealth of fruitful areas for people to explore if they're healthcare savvy. And of course with healthcare, the technology usually isn't the issue, usually the go-to-market is the hard thing. I think market access is really hard there.

Sarah Guo: Yeah. I'd push back on that a little bit. I'd start with saying I agree on just the operational friction in healthcare that we can take down. There's so many processes. If you look at prior authorization, it's a battle on two sides to fill forms and compare EHR data and clinical recommendations against a policy. And so there's a piece of that you can't get rid of, because the insurance company has an incentive not to pay, and hopefully providers are trying to provide the best care. But there is a piece you can get rid of. We have models that can read data, try to understand it, fill out a form. And so I think that there are lots of interesting applications there.

The minor pushback, and you know much more about healthcare and pharma than I ever will, but VC is the job of having opinions anyway. And I think if this wave of AI can change the cost curve in drug development, it's because you're not actually impacting the 10 or $20 million up front on what's traditionally considered research. You're increasing the probability that you're right. And so all of the expensive recruiting and clinical trials become more efficient because you're right more often. But you would understand more about that than I do.

Elad Gil: Maybe. Yeah. I think the hard part is that a lot of drug development ends up being, hey, this works great in mice, and let's try it in people now. And to your point, there may be things that you can learn heuristically in terms of, when do things translate versus not? But I think one piece of it is just basic biological differences. And then the second piece of it is, this is back to the point on regulatory capture. To some extent, the incumbents have an incentive to drive up the cost of drug development so no new startups can actually ever enter. In terms of actually making [inaudible 00:27:24]-

Sarah Guo: Oh, that's very cynical.

Elad Gil: ... to launch a drug. Oh, yeah. But it's interesting, it really is this weird regulatory capture. And so look at the last time a biotech company, outside of Moderna, which I think is an exception because of COVID, hit, I don't remember whether it was 30, 40, $50 billion in market cap, something like that. The last year such a thing was founded was in the late 80s. So it's been at this point, what is that, 35, 40 years without a new major biotech company started. In terms of biopharma actually developing drugs. That's shocking. In tech, during that same time period, there are dozens of companies. And if you actually look at the aggregate market cap of the entire biopharma industry. And as a reminder, healthcare is 20% of GDP, and pharma is about 20% of that, or 10% of that.

If you add up the top four or five tech companies, their market cap equals that of the entire biopharma industry, and that includes Pfizer and Eli Lilly, and Genentech and Amgen, and all these companies as well as all the small startups and all the mid-cap companies and everything else. And so then you ask, why is that? And these are very profitable companies. They have software-like margins in some cases. And so as you start digging into the industry, you realize, wow, there's strong reasons for incumbents to remain incumbents. And there is this regulatory process that really delays things quite a bit. In some cases rightfully, in some cases wrongfully. And if you look, for example, at the COVID era, we were able to develop multiple vaccines and do clinical trials on multiple drugs really, really fast. Part of that was we had a lot of patients, but part of that was we removed all the regulatory constraints. And we didn't have mass-scale adverse events and bad things happening to people, we just moved really fast.

This actually also happened during World War II. Winston Churchill wanted a way to treat soldiers in the field for gonorrhea, and so they rediscovered and developed penicillin in nine months. They, again, removed all the regulatory constraints, and boom, nine months later they had a drug that worked really well and was safe. And so I think it's something to really think about deeply in terms of, what are the incentives that we're driving against, and how are we thinking about cost benefits societally? But also, the second you start adding a lot of regulation, things slow way down and innovation goes way down, and cost goes way up. And that's the reason, per the earlier conversation, that for regulation of AI, a few things make sense, export controls make sense, but for most things it's probably a really bad idea right now.

Sarah Guo: I would agree with that. I do think that there is-

Elad Gil: That was my rant, by the way.

Sarah Guo: No, no, stay on the soapbox. Learned something about gonorrhea today. But if you think about the power of government, and I'm strongly on the reduce-regulation, encourage-innovation side. You also have these wartime examples of production of airplanes in World War II going from a few hundred planes to 6,000, also in less than a year. And there we're talking about atoms, not bits. You have to build plants and figure out all these engineering processes. And so I think that there are ways in which, from an industrial policy, national security perspective, if we wanted to be winning in AI in a really durable way, I think the paths are pretty clear actually. People need compute, and we have to make it a priority in the United States.

But I would also say, in the field of pharma, I remember asking you, I don't know, seven, eight years ago, "Hey, Elad, I know you're interested in aging and weight loss, and the intersection of areas where the demand is very consumer-driven." You might break out of... And demand, and also the ability to access different solutions that are on the edge of consumer purchase, especially as we have more web-diagnosed, doctor-network-diagnosed prescriptions. "Do you think this is interesting?" And I'd send you a company or two, and you gave me the same extremely consistent view, which was, "Hey, despite the PhDs, the data-driven person and investor inside me says, don't do this, just do tech companies." So no change.

Elad Gil: I think that the healthcare services and operations side is super interesting right now, due to LLMs. And so that's an area where I think there's lots and lots of room to do interesting things. And I have invested in some software-related companies in the past, like Benchling or Medallion in these areas. But I think it's really about, what's the healthcare infrastructure that can be served through software? And then, how can LLMs accelerate it? I think drug development can be extremely useful societally, and really important and impactful, and obviously there can be really great outcomes for people. As well as financially it could be a really great thing. But it just comes back to, why hasn't anybody built a generational company in a really long time in the area? And there's all sorts of reasons behind that. We tried that when I co-founded Color.

The whole focus was trying to make healthcare more accessible to people, and I still really believe in that mission. It's more just, what are the obstacles to getting there for different types of companies, and do you want to take on those obstacles? And if nobody takes them on, society really suffers. And so it's almost like, how can you make sure that you remove as many obstacles as possible while still safeguarding the public so that people don't get hurt by this stuff? But at the same time, perhaps these things have gotten too extreme and that really strangles the ability for the industry to innovate in ways that it could otherwise. It's a really interesting area. Are there any other topics that we should cover from the audience?

Sarah Guo: I'm good. What do you think, Elad?

Elad Gil: I think we got it all.

Sarah Guo: Thanks to everyone who submitted their questions.