
March 02, 2023

No Priors 🎙️ Episode 106, with Daphne Koller from insitro: what is the role of academia in the age of scaling AI? (TRANSCRIPT)

EPISODE DESCRIPTION: Life-saving pharmaceuticals continue to grow more costly to discover, yet at the same time recent advances in using machine learning for the life sciences and medicine are extraordinary. Are we on the verge of a paradigm shift in biotech?

This week on the podcast, a pioneer in AI, Daphne Koller, joins Sarah Guo and Elad Gil to help us explore that question. Daphne is the CEO and founder of insitro — a company that applies machine learning to pharmaceutical discovery and development, specifically by leveraging “induced pluripotent stem cells,” which we'll get into explaining. Daphne was a computer science professor at Stanford, co-founder and co-CEO of Coursera, is a MacArthur fellow, and was named by Time as one of the world's 100 most influential people.

Show Links: 

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @Saranormous | @EladGil | @DaphneKoller

SARAH:

We've talked about computational biology for decades, but drugs keep getting more expensive to discover, and at the same time, recent advances in using machine learning for the life sciences and medicine are extraordinary. Are we on the verge of a paradigm shift in biotech? We're thrilled to have a pioneer in AI, Daphne Koller, on the show to help us explore that question. She's CEO and founder of insitro, a company that applies machine learning to pharmaceutical discovery and development, specifically by leveraging induced pluripotent stem cells, which we'll get into explaining. Daphne was a computer science professor at Stanford, co-founder and co-CEO of Coursera, is a MacArthur fellow, and was named by Time as one of the world's 100 most influential people. We could go through all her other work, but we'd run out of time. Daphne, welcome to the podcast.

DAPHNE:

Thank you Sarah. It's a pleasure to be here.

SARAH:

As we were saying, we won't ask you to walk through every part of your amazing life story, but you came to biology as a computer science application years into your career. What sparked you going down that route?

DAPHNE:

My initial interest in biology came from the technical side, in the sense that the data sets, and this is way back in the mid-nineties, the data sets that were available to machine learning research at the time were kind of boring and not very inspiring. So things like classifying text into 20 different newsgroups. And I found that there were more interesting data sets, technically, to be had on the biology side back then, as we were starting, for example, to measure the activity of genes across the entire genome in multiple samples. So initially it was really more from a technological perspective, but then I ended up actually having an interest in biology in its own right, and ultimately ended up having a bifurcated lab at Stanford where half my lab did core machine learning work, published in traditional computer science venues, and the other half did core biology work that was published in Nature and Cell and Science. And what was really interesting is that most of my computer science colleagues had no idea that I did biology, and most of my life science colleagues had no idea I was in a computer science department. So it was a bit of a bifurcated existence, but it was a lot of fun.

SARAH:

One more historical question for you. You wrote the book on probabilistic graphical models. When I asked a mutual friend what I should ask you, he suggested, you know, what motivated that work and how has that field changed?

DAPHNE:

Just like in most fields, there is a swing of a pendulum. A lot of the early work in probabilistic graphical models was hugely influential in bringing artificial intelligence more into the world of machine learning and working with numerical data rather than just symbolic AI. And then I think the advent of deep learning pushed that to the side a little bit, because there was so much power that could be gained from basically the kind of pattern recognition from raw inputs, raw images, text, and so on, without having to worry very much about interpretable representations. What I think we're starting to see right now is the pendulum starting to swing back, in the sense that there is a greater understanding that you really need a bit of both. You need that hugely powerful pattern recognition that we get from deep learning, but you also need the ability to reason about things like causality, and you also need some interpretability of your deep learning models so that you can potentially convey to a clinician why you made the decision that you did. And so what we're ending up with as a really powerful paradigm is some kind of synthesis of the ideas from both of these disciplines coming together.

ELAD:

You went from Stanford, I believe, to co-founding Coursera with Andrew Ng, and then you went to Calico a few years after that. I'm sort of curious, what made you decide to go into Calico? Because you mentioned your career was split between the life sciences and computer science, and so you went down the computer science, online learning route and then you went back into biology. So I'm a little bit curious what drove you back in?

DAPHNE:

So actually, I'm gonna go back and answer the earlier part of that, which is what took me to Coursera in the first place, because I think it feeds into what took me away. So throughout much of my career at Stanford, I had an increasing sense of urgency that I needed to make an impact in the world, a real impact on real people, not something that was at one step or two steps removed by training great students and having them go and do amazing things, but something that I get to experience myself. And so when the work that I was doing at Stanford on technology-assisted education gave rise to the launch of those first Stanford massive open online courses, and we saw just how much impact those were having, I felt like it was too amazing of an opportunity to pass up and just assume that if I didn't do this, then somehow other people would take on the flag and carry it forward.

I felt like there was an incredible need to go and actually have that impact myself and make sure that it was done right. And so that led to my departure from Stanford on what was supposed to be a two-year leave of absence to go and found Coursera. And I had the full intention to go back to Stanford at some later point and resume my faculty life. That didn't happen. Stanford has a very strict leave of absence policy, and when they came two years later and said, so are you coming back? and I responded that it wasn't really the right time, I needed to see the project through for another year or so, they said that that was not an option. I ended up doing this completely crazy thing, which is resigning an endowed chair from Stanford and staying in industry. My mother thought I was nuts.

I think she still thinks I'm nuts <laugh>, but I ended up staying at Coursera for a total of about five years. And five years was kind of a reasonable point to take a step back and reflect. And when I did that, this was in early 2016, I realized that while I'd been deep in the trenches building Coursera, the machine learning world had totally transformed, because as a reminder, I left Stanford for Coursera in late 2011, just before the machine learning revolution really took off in 2012. And so I suddenly lifted my head, looked around me and said, wow, machine learning really is transforming the world but not really having much of an impact in the life sciences. And so I left Coursera in good hands; Coursera is a wonderful company, but it's not really a deep technology company and certainly not a science company.

And I decided that where I could have a really disproportionate impact was in bringing these two disciplines together, because there are just not a lot of people who had the benefit, as I did, of spending basically, you know, 20 years doing machine learning and maybe a decade doing biology, and could really speak both languages and figure out how to synthesize them. But since I'd been in industry for five years and away from science and even away from machine learning, I didn't quite know where I wanted to go and what I wanted to do. And so I turned for advice, actually more than anything else, to Art Levinson, who is the former CEO of Genentech and the former chairman of Google and Apple. And I figured that if there was anyone who would know how to bring those two fields together, he was probably uniquely qualified to do that.

And so I asked him for advice, and he was, I think admittedly, self-serving in his advice. He said, you should come to Calico. And honestly, I didn't know much about what Calico did other than it worked on aging, which seemed like a really important problem to think about. But I did know that it's not many times that one has the opportunity to work with a luminary like Art Levinson. And I'd also by that point met Hal Barron, who's another person I have tremendous respect for. And I figured this was, you know, a really interesting way to spend some time and learn from these wonderful people. And I learned a ton during my time at Calico. It was only 18 months, because ultimately I realized that I didn't want to be at a company focused on a particular biology; I wanted one that really built a platform for doing drug discovery differently, addressing some of the points that you, Sarah, made in your introduction about how drug discovery is this incredibly fraught, largely unsuccessful and very expensive endeavor. And so how could I make that happen differently? And it didn't seem like Calico was necessarily the right place to take on what was really a platform company build. And so that's why I left and founded insitro.

ELAD:

Were there any specific insights from Calico that drove the founding of insitro? Or was it just more the exposure to biopharmaceuticals and how things are developed that really drove your thinking that maybe ML and AI would have a real application area there?

DAPHNE:

I think that it was really the exposure for the first time to how biopharmaceuticals were developed, as you said. At Stanford I had worked a lot at the intersection of machine learning, data science and biology, and realized just how much power these machine learning technologies can have when applied even to small data sets. And certainly, as the technology had evolved tremendously since then and data sets were becoming considerably larger and richer, there was an even larger opportunity to make a huge difference. And so that's what led my move back into that intersection, and then therefore to Calico. But I think it was really the realization, I guess twofold. One is that the way in which you turned insights into therapeutic interventions was so old-fashioned and so unaccommodating of the use of data that I felt there had to be a better way to do this.

Which I think the industry has since started to demonstrate across the board in many different companies. And I think the other thing that made me make that shift is that whereas data in life science is growing tremendously, data in aging, and specifically human aging, is really hard to get, because human aging is a very long process. And in order to get data on the longitudinal trajectory of human aging today, you would have needed to start collecting data, you know, 20 or 30 years ago. And the cohorts are rather small. And so I felt like there was a huge opportunity in this intersection, but maybe aging wasn't the first place where one could most beneficially apply it, at least from my perspective.

ELAD:

Yeah. When you look across drug development, because I guess right now it costs a billion to a billion and a half dollars to develop a drug successfully, and it takes a decade-plus to actually get there. When I look at the potential areas that are challenging in the industry, there's sort of the initial small molecule selection and design, or alternatively the pathway or cell type that you're using. Separate from that, there's the clinical trial itself and how do you figure out who to enroll and how to deal with the data and the patients and everything else. There's all the calibration around diagnostics and endpoints and clinical endpoints and how you think, and all those places seem like there could be real uses of AI. How did you choose what insitro is actually gonna do, given how much room there actually is to innovate in this area relative to data? To your point, I mean, it's shocking how little's done, right? It's like awful

DAPHNE:

<laugh>. I completely agree. And yeah, in some sense the wealth of opportunities here is one of the biggest challenges, because everywhere you look, there is a big opportunity for machine learning to be deployed in a potentially quite significant way. Sometimes I have these discussions with the increasingly few people within biopharma who think that, yeah, this machine learning thing is a fad that will go away, or maybe that machine learning is gonna be this thing that helps you in a particular point area, like, you know, X-ray, it can improve this narrow little vertical, but that's pretty much what it's going to do. And my analogy is that it's not like X-ray crystallography, it's like computers. You're gonna use it everywhere, and it's going to be transformative everywhere. It's not gonna be the silver bullet unless you figure out how to use it most effectively, but the opportunities are pretty much endless across the entire process from beginning to end.

So with that, how did we pick what we ended up working on? You know, I thought about this, and you could divide the process, as many do, into three large chunks. One is the original biology discovery, which is what targets do we employ, in what indications, and maybe in what patient population; that's kind of the first chunk. Then there's turning those targets into therapeutic matter, which is a molecular design process. And then at the end there's the enablement of the clinical trials, in terms of actually actualizing patient selection or biomarkers for efficacy and things like that. And all of those are important and all of those are valuable, but if you look at the actual numbers of what makes drug discovery so expensive, it is the fact that 95% of drug programs fail. They just do not succeed. And the biggest reason why they don't succeed is not because the clinical trial was poorly designed.

That still happens, but it's not the biggest reason. Nor is it because the molecule doesn't hit its target and modulate it in the right way. That too happens, but again, it's an increasingly small number of situations, because pharma companies have gotten better and better at making therapeutic matter. The place where most programs fail is because we're just not modulating the right thing. It's the wrong target in the wrong indication or the wrong patient population. So if you really wanna bring down that two and a half billion dollar number, what you have to do is bring down this completely mind-blowing statistic of 95% of drug programs failing into something that is much more manageable, so that a successful program doesn't have to carry on its back all of the many expensive failures of all the things that didn't quite make it. And so I figured that it was maybe the hardest thing to do, but also the thing that was gonna be the most impactful.

SARAH:

So how do you approach that problem, the target identification problem, as a computer science and now computer science and biology person?

DAPHNE:

Yeah, you know, it's really hard, right? Because when you think about it, it's the one area where you really don't have the right type of training data, at least not obviously, because the question you're asking yourself is, if I make this therapeutic intervention in this patient, what is it gonna do clinically? And that is the thing about which you don't have data until the very end of the process, which is called a clinical trial. And so how do you train a machine learning model that doesn't have training data to train it, right? And so the direction that we've chosen to take is actually a two-pronged approach, and it's the synthesis of the two that we think is particularly powerful. We bring in data from two quite different sources. One is data from human individuals, where we don't get to do experiments, but we have experiments of nature.

Each of us is an experiment of nature, where nature has modulated our genetics into, you know, different activity levels of individual genes, where some of them behave this way and others behave that way. And we can look at that mapping from genotype to phenotype as a surrogate of what a therapeutic intervention would do in those humans. So that's great, but it limits you to those experiments of nature, and the experiments of nature are not necessarily the same as what a therapeutic intervention would do. And so what we've done in parallel is to create our own data in our own wet lab, where we make interventions in cellular systems and measure the phenotypic consequences there, again using very large scale data with very high content modalities. And so the machine learning is actually used, I would say, in three different ways. One is to interrogate the phenotypic consequences of genetic variation in humans, looking at very high content data like imaging, where we know machine learning works really well, and different types of omic modalities, transcriptomics, proteomics, and so on, to really understand that mapping between genetics and phenotype.

We similarly look at the mapping between genetic interventions, which in this case we get to actually direct ourselves by doing genome editing of cells, and ask, what are the phenotypic consequences of modulating this gene in this cell background, reading out large, high-content data to really understand how cell state responds to these interventions. And so the machine learning is used on each of those two separately, and then also to bring them together, so that you can kind of think about building cellular models that are predictive of human clinical outcomes, which is ultimately what we're looking to do: to replace the sort of untranslatable animal models with something that is much more driven by human biology.

SARAH:

When you think about, again, the focusing of insitro, what domains do you decide to work in first? Because this approach should be quite horizontal. And of course then you have, you know, the complexity of what that cellular model can be.

DAPHNE:

It for sure is, and again, focusing has always been a challenge, in the sense that there are so many opportunities, and how do we say no to some of 'em? So what we've done is tried to go into areas where we think there is both a large unmet need, in the sense that the current tools that we're deploying are just not very effective, and at the same time where we think that the technologies that we are developing internally provide us with a unique, differentiated advantage. So one of those areas has been neuroscience, because as we know, the unmet need there is humongous. There are very few effective therapeutic interventions in neuroscience, and that's partly because of the model systems that we've been using, specifically animal models; while one can quibble about how relevant they are in other therapeutic areas, in neuroscience it is very clear that they're probably not. And that's one of the reasons why things work so well in, whatever, curing mice of schizophrenia, whatever the heck that means, and then don't have much of an impact in human schizophrenia, because it's not really even the same disease. Right? So that's on the unmet need side. And on the opportunity side, we know that induced pluripotent stem cells are actually relatively easily differentiated into neurons.

SARAH:

We have mostly a computer science, not biology, audience. Can you just explain how you get a Daphne or an Elad neuron at all?

DAPHNE:

Okay, so in order to get a Daphne neuron in the lab, you take either a white blood cell from me or a skin cell from me, and you go through a process of what's called reprogramming, which is a technology that received a Nobel Prize a number of years ago, and which allows you to turn it into what is basically a stem cell, which means a cell that can then take any lineage. It doesn't have to form a skin cell, which is where it came from; it can form a liver cell or a heart cell or a brain cell. That is why it's called an induced pluripotent stem cell, or iPSC: induced because you force it to be pluripotent, which means it can go in any different direction. And then with that stem cell, depending on what you do to it, it can be transformed, as I said, into a neuron or a cardiomyocyte, which is a heart cell, and so on and so forth.

And so you can effectively get the effect of our genetics in these cellular systems. And similarly, you can make an even more pointed change by editing those cells and say, if there is a genetic variant that we know causes a particular disease or significantly increases the chances of such a disease, we can introduce that into different genetic backgrounds and then do almost an in vitro case-control, which is the same cell with and without the genetic variant: what are the differences? And, very carefully positioned for tech people, this is like an A/B test. This in vitro A/B test is something that allows us to really get at those differences that are specifically associated with this disease-causing variant. So that is one aspect of the capability that drove us towards our therapeutic areas. The other is, as I said, we have a two-pronged strategy.

One is the data that we produce in the lab, and one is data that we collect from humans. So we also looked for areas in which the data from humans is relatively readily available. And in neuroscience we have an increasing number of brain MRIs. I think there will be even more now with the approval of some of the earliest Alzheimer's drugs, because it's gonna be part of the process by which people are either selected to receive the drug or not, depending on whether their brain MRI shows certain aspects of disease. The other areas that we've gone into are metabolism and oncology, because again, those are areas where disease-relevant data that is high content, that is unbiased and truly informative about the disease state, is collected quite abundantly as part of the standard of care. And so again, we tried to look for areas where there's large unmet need and where the two types of capabilities that we bring to bear can be deployed.

SARAH:

That makes sense. If you think about something like, you know, neurodegenerative diseases, Alzheimer's, et cetera, is it single cell? Who can say, but it feels unlikely. What's beyond single cell, and do you guys do organoid research? Is that within the scope of insitro?

DAPHNE:

Yeah, no, that's a great question. So a lot of complex diseases are not encompassed within a single cell lineage. However, I think even there one can study, in many cases, not always, the disease state by looking at a cell type that is clearly relevant to the disease and perhaps pushing it out of its comfort zone. So for example, in some of the work that we've done in metabolic disease, I mean, it's clear that hepatocytes are not the be-all and end-all of what it takes to make a diseased liver, but you can push the hepatocyte out of its comfort zone by putting in the right combination of, you know, fatty acids and maybe various immune system factors or whatever, to create a disease state that is much more similar to what you see in its natural environment. That having been said, it's clearly the case that we're not going to be able to recapitulate the entire complexity of the disease state for a lot of those diseases.

And so one of the things that we do, and this is in the spirit of being pragmatic and prioritizing: there are plenty of things that we can do today where the disease does manifest sufficiently in a single cell lineage. And so we go after those first and we defer some of the other ones to a later stage, because technologies such as organoids, for example, that encompass multiple cell types in a single, you know, little micro-brain or micro-liver, whatever, or sometimes these things called organs-on-chips, which allow you to actually create things that are more than even a single organ (they start to create sort of the flow between different organ systems), those are technologies that other people are currently developing, and they're getting better by the day. And so we feel like there's a lot of value that we can bring with the capabilities that are out there, even if we know they're reductionist, even if we know they don't fully capture the disease, but they capture enough of the disease so that we can bring medicines to patients. And maybe in three years we'll have another tranche of diseases that are unlocked by the technological tidal waves that we're all riding.

ELAD:

You mentioned there are sort of two areas of exploration for insitro right now. One was metabolic disease and cancer, I guess that's really three areas, and then the second is neurological areas. I was just sort of curious how far you wanna take these in terms of the actual development of drugs in-house versus partnering out. And I noticed you had things like relationships with BMS and others for ALS and dementia and a few other areas. So I'm a little bit curious about how far you actually wanna take the development of drugs yourself versus partnering with others, and how you think about that in the context of building a company and culture.

DAPHNE:

That's a great question, and the answer is that we are going to be relatively pragmatic about this as well, and about what makes sense in terms of maximizing the impact that we have on patients. So one of the things that we have going for us, I think, over a lot of other companies is that what we've built is an engine for generating novel insights, novel targets. So it's not the situation that a lot of companies are in, which is you have one program, two programs, and if you kind of sell those off, then you're left with an empty cupboard, and then what do you do? You're not a company anymore. So what we think is, because we have this engine, we have the opportunity to have some of those programs be done in partnership with others, some of those perhaps even be entirely out-licensed to others, while the engine continues to give us additional insights, maybe even better insights as we expand, for example, into new indications using new technologies.

On the other hand, to think about it from the complementary perspective, some of the targets that we find ourselves having emerge from our platform are ones around which there's already a drug available, because, you know, there are only 20,000 genes. And so sometimes someone may have developed a drug and just didn't deploy it in the right indication or didn't deploy it in the right patient population. And we don't believe that the only thing that makes our existence worthwhile is if we come up with new chemical matter towards those targets. So we might go to the asset owner and say, hey, let's work together to bring that asset to patients faster. And that can usually shave off, you know, two, three, maybe even five years from the development of a program, because you've already made the drug, sometimes you've already put it in people, you've shown that it's safe, you have a good biomarker for when it's working and when it's not. All those things can really slow down a program if you're starting from absolutely square one and a brand new target. And so we hope to be very pragmatic in terms of what we develop in-house and what we develop with others, with the goal of really trying to maximize the impact that the platform can bring to as many patients as possible.

ELAD:

How much work, if any, are you doing on the biomarker side? Because I think one of the points that you just raised is really interesting. When I look at a lot of clinical drug development, a lot of it is waiting for clinical endpoints that may take months or years to really substantiate. And so sometimes the FDA or others will be willing to accept certain clinical biomarkers as sort of intermediary steps, or things that tend to vary relative to the trait or the outcome. Are you doing biomarker development as well? Because that seems like such a great area for the application of ML, and yet it seems like there's so little work in terms of actually translating ML into the real world for biomarkers in particular.

DAPHNE:

I completely agree, and I think there's research that shows that drugs that have a biomarker are about twice as likely to be successful in the clinic as ones that do not. By the way, there's also data showing that drugs that have support in human genetics are twice as likely to succeed as ones that do not. And so we are deep believers in both of those. And I think that because our focus is so much on human data, a lot of the insights that come out of analysis of human clinical data do actually give you a biomarker for which patients are likely to benefit from a particular therapeutic intervention. And so in some ways you can think of clinical biomarkers as coming out almost for free, if you will, not for free, but as a consequence of the work that we're doing anyway, as long as we pay attention and don't just say, as a lot of companies do, oh, we found the target, we're just gonna go and apply it in all comers.

Because honestly, one of the big things that causes drugs to fail is that you are trying to apply them more broadly; if I'm being cynical, sometimes it's to maximize the revenues that you can get from a drug, versus trying to figure out exactly in which patients it's gonna work. And one of the things you asked earlier, Elad, was what did I learn at Calico? One of the things that I learned there, and there were a lot of former Genentech people there, as one would expect given the pedigree of the company, one of them told me that one of the earliest precision oncology drugs was Herceptin, which goes after HER2-positive breast cancer patients, and that if they had tried to run a Herceptin clinical trial in an all-comer breast cancer population, you would've needed a population of 10,000 in the clinical trial, which is a very large clinical trial.

And even then you might not have seen a sufficiently strong, statistically significant signal, because the adverse side effects (every drug has adverse side effects) in the non-responders may have outweighed the very strong benefits in the responders. So the fact that they had the right patient population in the clinical development of Herceptin was absolutely critical to creating a successful and reasonably sized clinical trial. And so I think that is a pattern that many more people in the drug development industry should be following. And frankly, a lot of them have started to see the benefits of this, so we're not the only ones going in there. But I do think, to your point, Elad, that we have a differentiated technology stack that will hopefully allow us to get even better, more accurate biomarkers via machine learning on high-content data.

ELAD:

Yeah, you mentioned two really key points, I feel, to expediting drug development. There's the biomarker part, and then there's finding the right patients relative to the drug. And I think that's also very famous for the HRD drugs, where there's a specific set of pathways such that if you didn't actually select out the patients with specific mutations, the drugs didn't work, and the second you focused on that population, it worked extremely well, right? And so there's lots of examples of that where you just have to figure out who you're actually targeting. There's a really great interview from a couple years ago with Janssen, who started Janssen Pharmaceuticals, where he talked about how he felt that a lot of drug regulation and the length of time it takes to develop drugs was driven by an almost overly safety-focused view of the world. Like there wasn't a strong series of cost-benefit trade-offs or willingness to sub-segment patient populations or really look at data in a rich way. And we've seen recently with things like COVID that we can really expedite both drug development, vaccine development, everything, right? We did things in six months that normally would take 10 years during COVID because we decided we could do it. How much time do you think an ML-first company or ML-first approach can really cut out of drug development? Or do you think it's purely a regulatory issue in terms of those timelines?

DAPHNE:

I think that's a complicated question, and I think it has elements of both. I think, first, there does need to be a discussion with the regulators around what might be feasible from a regulatory approval perspective with different kinds of biomarkers. There are also elements that I think are very legitimate questions, like how do you collect the relevant biomarker in a robust, reproducible way from different patients? What kind of lab protocols would one need in order to have that be collected robustly? That's not always trivial. You can have the most beautiful, sophisticated biomarker that works in a very carefully designed research environment and is not gonna work in the wild as part of the standard of care. So I think the regulator does have legitimate questions that need to be answered there. But I do think that with that discussion, and especially if you can front-load it and have the discussion with the regulators not at the very end when you show up with your NDA package, but at an earlier stage, saying, okay, what would it take in order to make this reasonable from your perspective, what questions would you like to see answered?

I think there is a legitimate opportunity to actually accelerate things. Having said that, I think one needs to be realistic about what is and is not feasible. In COVID, we were in the fortunate or unfortunate position that there were a lot of patients with COVID. It was rampant. And so you were able to fill your clinical trials relatively quickly, and the disease progression was relatively fast. If you're doing an Alzheimer's trial, the disease progression is what it is, and you need to wait long enough to see a delta in the cognition curve in order to convince yourself that there is in fact a difference, that your drug is making a difference. Now, I think there is an opportunity to try and create proxy biomarkers; amyloid beta is an example of that. There have been questions about whether it is the right proxy for cognition or not. My guess would be that it is for some patients and probably not for others. So it's a mixed bag, to our earlier point about heterogeneity and finding the right patient population. But I think that is a thing that we need to gain conviction around over time. And so ultimately there's only so much that you can speed up biology in certain cases, because biology takes as long as it takes.

ELAD:

Yeah, it's interesting, because I feel like that's a mindset that those of us who have worked on both computer science and biology have to learn, right? You are so used to just being able to manipulate some data in the cloud and then you get an answer, versus waiting for years for a readout or to make progress. When you think about how you built out the team at insitro and how you built out the culture, how did you think about having each side learn about the different aspects that each side provides? And in general, how did you think about the culture of a company that could bridge both things?

DAPHNE:

You know, it's really hard, and I think building the right culture is one of the most challenging things that we had to do at insitro, and at the same time, I think, a big competitive advantage, because doing it is really not very easy. You have to bring in people who truly have a learning mindset, in terms of being interested enough to learn about something that for many is a totally different set of concepts and even ways of thinking about the world. So you need computer scientists who are willing to learn about this fuzzy, ill-behaved field of biology where things don't do what they're supposed to do. You know, when you program a computer, yeah, you can have bugs, but ultimately, assuming you did the right things, the same thing will happen. And that's not true in biology.

SARAH:

We just don't know that much <laugh>.

DAPHNE:

Exactly. And these things are living beings, so they don't respond in the same way even day after day. And so it's really hard. And then conversely, you have the scientist's mindset, which sometimes gets frustrated with the engineer's okay, we can take those building blocks and put them together and this is what will happen. And science is not like that. And so you have to create a bridge between the different cultures, the different jargons, the different mindsets, and really both get people who are willing to learn about the other discipline, but also just engage in meaningful ways with people who are different from themselves.

SARAH:

What did you mean when you said science is just not like that, in terms of manipulating building blocks?

DAPHNE:

So there are so many variables that have a huge effect on the system that sometimes we only vaguely appreciate, and sometimes don't appreciate at all. A colleague told me an anecdote about an experiment where some days it went perfectly well, and then other days the cells just died. And they tried to figure out what was going on. It turns out that the days the cells died were the days when there was a particular technician who really had a fondness for onion sandwiches. And so it turns out that the onion on his breath actually ended up, you know, making the cells less happy. And you just don't even think about these things if you're an engineer, right? The other really interesting mindset difference between how scientists and how engineers approach the world is, when you show an engineer or computer scientist a bunch of dots, usually the natural inclination is to try and find the pattern, the thing that explains as many of the points as you can, because that is the thing around which you will engineer your system. If you're a scientist, oftentimes what you look for are the outliers, the exceptions, because those exceptions are often the beginnings of scientific discovery, because they're the beginning of a thread.

It's like, why did this one behave differently from everybody else? And that gives rise to a new discovery. So again, the mindsets are just so different.

ELAD:

Was there anything you did from a process perspective to help bridge these things? So for example, I remember at Color we often tried to embed a bioinformatician with a team of systems engineers, and they'd learn off of each other, but then everybody on the team, you know, it could be a bench scientist, it could be somebody else, would participate in a scrum, which was a concept that they weren't used to, right, on the biology side, for example. It was more a way to set things up so that everybody does things on weekly cadences, and you don't just do long-term planning, you also do way more short-term planning than you normally would in a lab. Or, you know, there's different approaches to almost try and bridge those divides. Were there any things that you specifically did along those lines, or were there other approaches that you took from a tangible perspective?

DAPHNE:

Well, so first of all, we do bring in people with their different mindsets, and we try and create sort of bridges between them. So we have product managers who do scrums and do, you know, these agile planning processes, and we apply that also to our platform development, even on the biology side. But at the same time, you know, drug discovery projects, which are years long, you don't do scrums for. There is a timeline, and when you have a, whatever, 45-day differentiation for your iPS cells, it takes 45 days, and there's no point in doing an agile scrum in the middle. You just need to wait for the cells to do their thing. And so we have project managers and we have product managers, and we make sure they communicate with each other, but they each deploy their discipline in their own way.

But to your question about what we did: a lot of it comes down to really being deliberate about culture and values. And so one of the things that we did at the very beginning of the company is we laid out a set of behavioral norms, which, you know, you can think of as values. And the one that is, I think, among my favorites, maybe my favorite, is actually the last one. They're ordered not in order of importance, but from what we do to how we do it, and it is that we engage with each other openly, constructively, and with respect. Each of the words matters. Engagement means we don't silo ourselves and just sit with our tribe; we really have an engagement with others. Openly means being open to asking naive questions, and at the same time being open to naive suggestions from someone from a discipline other than your own.

Because sometimes the question of why don't we do things this way is actually a really good idea, when you don't come in with a preconceived notion of, oh, because that's how we've always done it. Constructively means that when you make these suggestions, it has to be with the goal of making the outcome better, rather than being the smartest person in the room, which is a big problem in companies with a lot of smart people. And the respect is really the respect for what everyone brings to the table. And I think that's really important, because there are a lot of, and please forgive me, a lot of tech people who come into the life sciences, and it's like, we are the smartest, we're machine learning, we're gonna solve everything. And they don't respect the challenges of the other discipline. Sometimes they don't even take the time to learn what the challenges of the other discipline are, and that immediately raises hackles on the other side. And, you know, from there the conversation can only get worse. So I think it's really important to have that respect for both sides, for all sides.

SARAH:

We have a lot of tech people, engineers, founders, researchers as listeners. What would you be working on if you weren't working on insitro? Like, what else are you paying attention to in digital bio or AI, assuming people are attuned to having that culture of openness and respect and constructive thinking?

DAPHNE:

So I think that's a great question. And this really is the golden age of AI and machine learning, and there are just so many different ways in which it can be deployed in useful ways. I mean, my personal compass has always been that we should be deploying this towards areas where we make life better for people. So I've tried to veer towards applications that are really about improving life, improving health, versus, you know, selling more ads or whatever. Not that, you know, I mean, I guess selling ads is good too, but for me it's really about how do we make life better. So I think there are a lot of really exciting opportunities right now. I think that intersection, or that interface, if you will, between biology and technology is one of the richest areas that exist today, because each of these fields has been making a huge amount of progress in its own right.

We all hear about, you know, AI much more in the news because of ChatGPT and so on, and it's something that everyone can really relate to and understand, but the toolkit that biologists have available to them, with CRISPR and pluripotent stem cells and huge advances in microscopy and such, is maybe not quite as visible to the everyday person, but it's equally dramatic, I think, in terms of what it unlocks. And so bringing those two together creates so many opportunities for change, not just in drug discovery, which is where I happened to pick my own trajectory, but in agriculture technology, in environmental technology, in energy, in biomaterials, maybe materials that are much less destructive to the environment and such, with better properties, in food tech. I think there's just a tremendous wealth of directions that one can take those fields and bring them together in interesting ways.

Having said that, I think there are other really beneficial societal directions in which one can deploy this. I think we're only starting to see the applications of machine learning and AI to, say, energy, other than things like biofuels, because the data just haven't been as readily available. But I'm sure that will change. Similarly, I think, going back to my Coursera days and even my Stanford days, there are the benefits of machine learning in education and really personalizing learning experiences to individual learners, maybe having a more beneficial experience than just letting ChatGPT write their essays for them. I think there are a lot of opportunities to really deepen and enhance learning experiences for students. So I think there are almost unlimited things that one could do. One just needs to be committed to finding them, versus falling into the sort of comfortable place of going to one of the tech giants and just doing something that earns you a lot of money, which is, I guess, nice for you, but maybe not as good in terms of making the world better.

SARAH:

You've worked with great success in areas that are perhaps traditionally harder to make money in as a startup: ed tech, health tech. There's not traditionally a ton of budget, or there's an impedance mismatch, you know, you have regulatory controls or whatever it is that makes it more challenging traditionally than many other areas of software. But what advice would you give to founders who wanna work in these areas in particular?

DAPHNE:

So I think there is, I'm hoping, a realization among investors that there are entire untapped ecosystems where technology can make a difference and hasn't. And so if you look at what we did at Coursera, for example, ed tech had always been a backwater of investment, and yet we were very fortunate to have been able to attract fairly significant funding even at the very early stages, because we had an idea that our investors found compelling and differentiated from what others had done. So I guess I'm a believer, and maybe I'm an optimist, that if you have a really good idea that is differentiated from what others have done, where the impact is something you can make clear, as we were able to do with those first early MOOCs, people will have confidence that you can turn that into something that is revenue-bearing and will be willing to, you know, go with it for a while.

That having been said, I would say that ultimately, and this is I guess how I feel about maybe the other half of the question, which is, is this gonna be the place where you make the most money with the greatest amount of certainty? Maybe not. But I believe that we only have one life to live, and ultimately what you want to be able to do is to look back on your life at some point and say, I have done something that's really worthwhile and important. And I think that's something that is important for people to keep in mind as they decide where to spend their time.