
April 13, 2023

No Priors 🎙️112: Your AI Friends Have Awoken, with Noam Shazeer, Founder and CEO of Character.AI (TRANSCRIPT)

EPISODE DESCRIPTION:

Noam Shazeer played a key role in developing foundations of modern AI, including co-inventing the Transformer at Google and pioneering AI chat before ChatGPT. These are the foundations supporting today's AI revolution. On this episode of No Priors, Noam discusses his work as an AI researcher, engineer, inventor, and now CEO.
 
Noam Shazeer is currently the CEO and co-founder of Character.AI, a service that allows users to design and interact with their own personal bots that take on the personalities of well-known individuals or archetypes. You could have a Socratic conversation with Socrates. You could pretend you're being interviewed by Oprah. Or you could work through a life decision with a therapist bot. Character recently raised $150M from a16z, Elad Gil, and others. Noam talks about his early AI adventures at Google, why he started Character, and what he sees on the horizon of AI development.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Show Notes

[1:50] - Noam’s early AI projects at Google
[7:13] - Noam’s focus on language models and AI applications
[11:13] - Character's co-founder Daniel de Freitas Adiwardana's work on Google's LaMDA
[13:53] - The origin story of Character.AI
[18:47] - How AI can express emotions
[26:51] - What Noam looks for in new hires

Transformers, large language models, AI chat. These are the foundations supporting today's AI revolution. And this week on No Priors we have an AI researcher, engineer, and inventor who was a key part of these innovations and is considered one of the smartest people in AI. Noam Shazeer is the CEO and co-founder of Character.AI, a service that allows users to design and interact with their own personal bots that take on the personalities of well-known individuals or archetypes. You could have a Socratic conversation with Socrates, or you could pretend you're being interviewed by Oprah, or you could work through a life decision with a therapist bot. Character recently raised $150 million from Andreessen Horowitz, myself, and others. We talk about how Noam got his start at Google, his groundbreaking AI discoveries, and what he's doing at Character. So Noam, welcome to No Priors.

NOAM:

Hey Elad. Thanks for having me on. Hi Sarah.

SARAH:
Good to see you.

ELAD:

Yeah, thanks for joining. So you've been working on NLP and AI for a long time. I think you were at Google for something like 17 years off and on, and I think even your Google interview question was something around spell checking, an approach that eventually got implemented there. And when I joined Google, one of the main systems being used at the time for ads targeting was Phil and Phil Clusters and all this stuff, which I think you wrote with Georges Harik. So it'd just be great to get your history in terms of working on AI, NLP, and language models: how this all evolved, what you got started on, and what sparked your interest.

NOAM:

Oh, thanks Elad. Yeah, I was just always naturally drawn to AI. Wanted to make the computer do something smart. It seems like pretty much the most fun game around. I was lucky to find Google early on, which really is an AI company. So yeah, I got involved in a lot of the early projects there that maybe you wouldn't call AI now, but seemed pretty smart at the time. And then more recently I was on the Google Brain team starting in 2012. It looked like a really smart group of people doing something interesting. I had never done deep learning before, or neural networks I guess, as it was called then or whatever. I forget when the rebrand happened. But yeah, it turned out to be really fun.

ELAD:

That's cool. And then, you know, you were one of the main people working on the Transformer paper and design in 2017, and then you worked on Mesh TensorFlow, I think, sometime within the following year. Could you talk a little bit about how all that got going?

NOAM:

Yeah, I mean I messed around a few years on the Google Brain team and utterly failed at a bunch of stuff till I kind of got the hang of it. Really the key insight is that what makes deep learning work is that it is really well suited to modern hardware, where the current generation of chips is great at matrix multiplies and other operations that require large amounts of computation relative to communication. So basically deep learning really took off because it runs thousands of times faster than anything else. And as soon as I got the hang of it, I started designing things that actually were smart and ran fast. But you know, the most exciting problem out there is language modeling. It's like the best problem ever, because there's an infinite amount of data: just scrape the web and you've got all the training data you could ever hope for.

And the problem is super simple to define: it's predict the next word. The fat cat sat on the... you know, what comes next. It's extremely easy to define, and if you can do a great job of it, then you get everything that you're seeing right now and more. You can just talk to the thing, and it's really AI-complete. And so I got started around 2015 or so working on language modeling and messing with recurrent neural networks, which was what was great then. And then the Transformer kind of came about as someone had the bright idea, I believe it was Jakob, that hey, these RNNs are just annoying, let's try to replace them with something better, you know, with attention. I overheard a couple of colleagues talking about it in the next cube over and was like, that sounds great, let me help you guys. These RNNs are annoying, this is gonna be so much more fun.

ELAD:

Can you quickly describe the difference between an RNN and a transformer-based or attention-based model?

NOAM:

Sure. Okay, so the recurrent neural network is a sequential computation where for every word you read the next word, and you compute your current state of your brain based on the old state of your brain and what this next word is, and then you predict the next word. So you have this very long sequence of computations that has to be executed in order. The magic of the transformer, kind of like convolutions, is that you get to process the whole sequence at once. I mean, the predictions for the later words are still dependent on what the earlier words are, but it happens in a constant number of steps where you get to take advantage of parallelism: you can look at the whole thing at once, and that's what modern hardware is good at, parallelism.

And now you can use the length of the sequence as your parallelism, and everything works super well. Attention itself is kind of like you're creating this big key-value associative memory, where you're building this big table with one entry for every word in the sequence, and then you're looking things up in that table. It's all fuzzy and differentiable, a big differentiable function that you can backprop through. And people had been using this for problems where there are two sequences, like machine translation: you're translating English to French, so while you're producing the French sequence you are looking over the English sequence and trying to pay attention to the right place in that sequence. But the insight here was, hey, you can use the same attention thing to look back at the past of the sequence that you're trying to produce. And the beauty is that it runs great on GPUs and TPUs, and it kind of parallels how deep learning took off because it's great on the hardware that exists. This sort of brought the same thing to sequences.
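To make that concrete, here is a minimal NumPy sketch of the single-head causal self-attention Noam is describing. It is an illustration, not the Transformer paper's actual code: the whole sequence is processed with a few matrix multiplies, a softmax acts as the fuzzy, differentiable table lookup, and a mask keeps each word from looking at the future.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head) projections."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv                  # one big matmul each: hardware-friendly
    scores = (q @ k.T) / np.sqrt(k.shape[-1])         # (seq_len, seq_len) table of similarities
    mask = np.triu(np.ones_like(scores), 1)           # 1s above the diagonal = future positions
    scores = np.where(mask == 1, -1e9, scores)        # each word may only look at its past
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: the fuzzy, differentiable lookup
    return weights @ v                                # every position computed in parallel

# Toy usage: 5 "words", model width 8, head width 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)            # shape (5, 4), no sequential loop needed
```

Unlike an RNN, nothing here iterates word by word; the sequence length itself becomes the axis of parallelism.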

SARAH:

Mm-hmm. <affirmative>. Yeah, I think the classic example to help people picture it is, you know, saying the same sentence in French and English: the ordering of the words is different, so you're not mapping one to one in that sequence, and you have to figure out how to do that with parallel computation without information loss. So it's a really elegant thing.

ELAD:

Yep. Yeah, it seems like the technology has also been applied in a variety of different areas. The obvious ones are these multimodal language models, so things like ChatGPT or what you're doing at Character. I've also been surprised by some of the applications to things like AlphaFold, the protein folding effort that Google did, where it actually worked in an enormously performant way. Are there any application areas that you've found really unexpected relative to how transformers work and relative to what they can do?

NOAM:

Oh, I've just had my head down in language. Here you have a problem where the thing can do, like, anything. I want this thing to be good enough that I just ask it, how do you cure cancer, and it invents a solution. So I've been totally ignoring what everybody's been doing in all these other modalities. I think a lot of the early successes in deep learning have been in images, and people are all excited about images, and I kind of completely ignored it. Because, you know, an image is worth a thousand words, but it's a million pixels, so text is like a thousand times as dense. So I'm kind of a big text nerd here. But it's very exciting to see it take off in all these other modalities as well. Those things are gonna be great, super useful for building products that people wanna use, but I think a lot of the core intelligence is going to come from these text models.

ELAD:

Where do you think the limitations for these models are? What do you think creates the asymptote that all this is being built against? Because people often talk about just scale: you just throw more compute and this thing will scale further. There's data and different types of data that may or may not be available, there's algorithmic tweaks, there's adding new things like memory or loop-backs or things like that. What do you think are the big things that people still need to build against, and where do you think this sort of taps out as an architecture?

NOAM:

Yeah, I don't know that it taps out. I mean, we haven't seen the tap-out yet. The amount of work that has gone into it is probably nothing compared to the amount of work that will go into it. So quite possibly there will be all kinds of factors of two in efficiency that people are gonna get through better training algorithms, better model architectures, better ways of building chips, and using quantization, and all of that. And then there are going to be factors of 10 and 100 and 1,000 of just scaling and money that people are gonna throw into the thing, because, hey, everyone just realized this thing is phenomenally valuable. At the same time, I don't think anyone's seen a wall in terms of how good this stuff is. So I think it's just gonna keep getting better. I don't know what stops it.

SARAH:

What do you think about this sort of idea that we can increase compute but the largest models are undertrained? We've used all the text data on the internet that's easily accessible. We have to go improve the quality, we have to go do human feedback. Like, how do you think about that?

NOAM:

Yeah, I mean in terms of getting some more data, there are a lot of people talking all the time. I mean...

ELAD:

<laugh>,

SARAH:

Why do you think we do this podcast?

NOAM:

Right? There are, like, order 10 billion people producing a thousand, you know, I don't know, 10,000 words a day. I mean, that's a lot of words, and pretty soon many of those people will be doing a lot of that talking to AI systems. So I have a feeling a lot of data is going to find its way into some AI systems, in privacy-preserving ways I would hope. And then the data requirements tend to go up with the square root of the amount of computation, because you're gonna train a bigger model and then you're going to throw more data at it. I'm not that worried about coming up with data, and I feel like we could probably just generate some more with the AI. Yeah.
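As a rough numerical illustration of the square-root relationship Noam mentions (the specific numbers and the compute-is-roughly-parameters-times-tokens framing are an assumption, not figures from the episode): multiplying training compute by 100 would suggest roughly 10x more parameters and 10x more training data.

```python
# Rough illustration (hypothetical numbers) of data needs growing with the
# square root of compute: if compute C ~ parameters N * tokens D, then scaling
# C by a factor k lets you scale N and D by about sqrt(k) each.
import math

def scale_run(n_params, n_tokens, compute_multiplier):
    s = math.sqrt(compute_multiplier)
    return n_params * s, n_tokens * s

# e.g. a 10B-parameter model trained on 200B tokens, given 100x more compute:
new_params, new_tokens = scale_run(10e9, 200e9, 100)
print(f"~{new_params:.0e} params, ~{new_tokens:.0e} tokens")  # ~1e+11 params, ~2e+12 tokens
```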

ELAD:

<laugh>. And then what do you think are the main things to solve for these models going forward? Is it hallucinations, is it memory, is it something else?

NOAM:

I don't know. I kinda like hallucinations <laugh>,

SARAH:

It's also a feature. Yeah,

NOAM:

They're fun. Yeah, we'll call it a feature. Some of the things we wanna work on the most are memory, because our users definitely want their virtual friends to remember them. There's so much you can do with personalization, and you wanna dump in a lot of data and use it efficiently. Yeah, there's a ton of great work going on in trying to figure out what's real and what's hallucinated. I think we'll solve this.

ELAD:

And then do you wanna talk a little bit about LaMDA and your role with it, and how that led eventually to Character?

NOAM:

Yeah, my co-founder Daniel de Freitas, he is like the scrappiest, most hardworking, really, you know, smartest guy. He's kind of been on this lifelong mission to build chatbots. Since he was a kid in Brazil, he's always been trying to build chatbots. So he came to join us at Google Brain because I think he had read some papers and figured that this neural language model technology would be something that could actually generalize and build something truly open-domain. And he did not get a lot of headcount. He started the thing as a 20% project, where people are encouraged to spend 20% of their time doing whatever they want. And then he just recruited an army of 20% helpers who were ignoring their day jobs and actually just, you know, helping him with the system.

And he went as far as going around and panhandling people's TPU quota. And he called his project Meena; I guess it came to him in a dream. At some point I'm looking at the scoreboard and was like, what is this thing called Meena and why does it have 30 TPU credits? He had just gotten a bunch of people to contribute. And then he was really successful at this, in building something really cool that actually worked, where a lot of other systems were just totally failing, either because people just weren't scrappy enough or were going for rule-based systems that were just never going to generalize. So at some point I was like, okay, there are so many ways we can make this technology better by factors of two, but the biggest thing is just to convince everyone that this is worth trillions of dollars by demonstrating some application that is clearly super valuable to billions of people.

ELAD:

And LaMDA was, I believe, the internal chatbot pre-GPT at Google that was famously in the news because an engineer thought it had become sentient, right?

NOAM:

Yeah, yeah, yeah. So that was a renaming of Meena. So I guess I went and helped Daniel on Meena, we got it onto some giant language models, and then it kind of became an internal viral sensation and then got renamed to LaMDA. And yeah, we had left before the business about somebody thinking it was sentient <laugh>. I'm flattered.

SARAH:

Can you talk a little bit about just why it wasn't released, what some of the concerns were?

NOAM:

I think just large companies have concerns around launching products that can say anything <laugh>. I would guess it's just a matter of risk, you know, how much you're risking versus how much you have to gain from it. So we figured, hey, a startup seems like the right idea, where you can kind of just move faster.

SARAH:

Yeah. So tell us about Character. What's the origin story there? Did you and Daniel look at each other one day and just go, we have to get this out there?

NOAM:

Yeah, pretty much. We kind of noticed, hey, there are people who just go out and get some investors and start doing something, so we were like, okay, let's just build this thing and launch as fast as we can. So we hired a total rockstar team of engineers and researchers and got some compute.

ELAD:

One thing that comes up a lot is people say that you all have one of the truly extraordinary teams in the AI world. Are there specific things that you recruited against, or how did you actually go about finding these people?

NOAM:

You know, some people we knew from Google. We happened to get introduced to Myle Ott, formerly from Meta, who had built a lot of their large language model stuff and their neural language model infrastructure, and a bunch of other Meta people followed him, and you know, they were great.

ELAD:

Is there anything specific that you would look for, or ways to test for it, or was it just standard interviewing approaches?

NOAM:

I mean, a lot of it was just kind of motivation, I think. Daniel tends to value motivation very, very highly. I think he's looking for something between burning desire and childhood dream. So <laugh>, there were a lot of great people that we did not hire because they didn't quite meet that bar. But then we got a bunch of people who were up for joining a startup and really talented and entirely motivated.

SARAH:

I mean, speaking of childhood dreams, do you wanna describe the product a little bit? Like, you have these bots. They can be user-created, they can be Character-created, they can be public figures, fictional figures, anybody with a corpus that you could make up, or historic figures. How'd you even arrive there as the right form for this?

NOAM:

Yeah, I mean, basically this is a technology that's so accessible that billions of people can just invent use cases, and it's so flexible that you really just wanna put the user in control, because often they know way better than you do what they wanna use the thing for. And I guess we had seen some of the assistant bots from large companies. You've got Siri and Alexa and Google Assistant, and some of the problems there are that when you're just projecting one persona to the world, people will (a) expect you to be very consistent in, say, your likes and dislikes, and (b) expect you to just not be offensive to anyone and not really have an opinion. It's kind of like you're the Queen of England and you can't say something that's going to disappoint someone. Or, I don't know, I remember I think it was George H. W. Bush who said he didn't like broccoli, and then the broccoli farmers were all mad at him or something.

So if you're trying to present one public persona that everyone likes, you're going to end up just being boring, essentially. And people just don't want boring; people wanna interact with something that feels human, you know? So basically you need to go for multiple personas, you know, let people invent personas as much as they want. And I like the name Character because it's got a few different meanings: character like an ASCII character, a unit of text; character like a persona; or character like good morals. But anyway, I think that's just how people like to relate to this stuff. It's like, okay, I kind of know what to expect from an experience if I can define it as a person or a character. Maybe it's someone I know, maybe it's just something I invent, but it kind of helps people use their imagination.

SARAH:

So what do people want? Like do they do their friends, do they do fiction? Do they do entirely new things?

NOAM:

Yeah, I mean there's a lot of role playing; role-playing games are big, you know, like text adventure where it's just making it up as it goes. There's a lot of video game characters and anime, and there's some amount of people talking to public figures and influencers. I think a lot of people have these existing parasocial relationships where they've got characters they're following on TV or the internet, or influencers or whatever. And so far they just have not had the experience of, okay, now this character responds, because it's always something you can watch, or maybe you're in a thousand-to-one fan chat or something where this VTuber girl writes back to you like once an hour. But now they get the experience of, oh, I can just create a version of this privately and just talk to it, and it's pretty fun. We also see a lot of people using it because they're lonely or troubled and need someone to talk to. So many people just don't have someone to talk to. And a lot of it kind of crosses all of these boundaries. Somebody will post, okay, this video game character is my new therapist or something. So it's a huge mix of fun and people who need a friend and connecting, you know, game playing, all kinds of stuff.

SARAH:

How do you think about emotion both ways, right? Like people's relationships with characters, or what level we are at in expressing coherent emotion and how important that is?

NOAM:

Oh yeah, I mean, you probably don't need that high a level of intelligence to do emotion. I mean, emotion is great and is super important, but a dog probably does emotion pretty well, right? I don't have a dog, but I've heard that a dog is great for emotional support, and it's got pretty lousy linguistic capabilities. But the emotional use case is huge, and people are using this stuff for all kinds of emotional support or relationships or whatever, which is just terrific.

ELAD:

How do you think the behavior of the system will change as you kind of scale things up? Because I think the original model was trained on not a ton of money, like on a relative basis. You folks were incredibly frugal.

NOAM:

Yeah, I think we should be able to make it smarter in all kinds of ways, both algorithmically and through scaling. You know, get more compute and train a bigger model and train it for longer, and it should just get more brilliant and more knowledgeable and better attuned to what people want, what people are looking for.

SARAH:

You have some users that are on the service for many hours a day. How do you think about your target user over time, and what do you expect the usage patterns to be?

NOAM:

We're gonna just leave that up to the user. Our aim has always been: get something out there and let users decide what they think it's good for. And you know, we see that somebody who's on the site today is active for about two hours on average, of people who send a message today, which is pretty wild. But it's a great metric that people are finding some sort of value. And as I said, it's really hard to pin down exactly what that value is, because it's really a big mix of things. But our goal is to make this thing more useful to people and let people customize it and decide what they wanna use it for. If it's brainstorming or help or information or fun or emotional support, let's get it into users' hands and see what happens.

SARAH:

How do you think about commercialization?

NOAM:

We're just going to lose money on every user and make it up in volume.

SARAH:

<laugh>. Oh good. It's good strategy.

NOAM:

No, I'm joking. No

ELAD:

Like the traditional, uh, 1990s business model, so that's good.

SARAH:

It's kinda a 2022 business model too. <laugh>,

ELAD:

You should issue a token and then just make it a crypto thing.

NOAM:

No, we're going to monetize at some point pretty soon. Because again, this is the kind of thing that benefits from having a lot of compute, and rather than burn investor money, the most scalable way to fund something is actually to provide a lot of value to a huge number of people. So we'll probably try some premium subscription type of service where, as we develop some new capabilities that might be a little more expensive to serve, we can then start charging for them. I really like that anyone can use it now for free, because there are so many people for whom it's providing so much value.

ELAD:

I mean, it's really taken off as a consumer service in a really striking way if you look at the number of users and the number of hours of usage per user, which is insane. Are there any scenarios where you think it's likely to go down a path like a commercial setting, where you have customer service bots that provide a brand identity around support, or is that just not that interesting right now as a direction?

NOAM:

I mean, right now we have 22 employees, so we need to prioritize. You know, we are hiring; there's definitely enough work for way, way more people. Priority number one is just to get it available to the general public. It would be fun to launch it as customer service bots when we're able. People would just stay on customer service all day <laugh>. Yeah,

ELAD:

They're like chatting with a friend, effectively. So yeah, let's start with the customer support. That actually happened apparently on some old e-commerce sites. eBay apparently was effectively a social network really early on, as people were buying and selling things and just kind of hanging out, because there weren't that many places to hang out online. So I always think it's kind of interesting to see these emergent social behaviors on different types of almost commercial products or sites. But that makes a lot of sense.

SARAH:

So you said one of the obvious reasons LaMDA didn't ship immediately at Google was safety. How do you guys think about that? Like, remember, everything characters say is made up.

NOAM:

Exactly right. Make sure the users are aware that this is fiction. If there's anything factual that you're trying to extract from it at this point, it's best to go look it up somewhere that you find reliable <laugh>. You know, there are other types of filters we've got there. We don't wanna encourage people to hurt themselves or hurt other people, and we're blocking porn. There's been a bit of protest around that.

ELAD:

Yeah. And do you view all this as a path to AGI or sort of superintelligence? Sure, yeah. And is that part of the goal? For some companies it seems like it's part of the goal, and for some companies it seems like it's either not explicitly a goal, or if it happens it happens, and the thing people are trying to build is just something useful for people.

SARAH:

What a flex. AGI is a side effect.

NOAM:

Yeah, well, I mean, that was a lot of the motivation here. My main motivation for working on AI, other than that it's fun... well, I mean, fun is secondary. The real thing is I wanna drive technology forward. There are just so many technological problems in the world that could be solved. For example, all of medicine: there are all these people who die from all kinds of things that we could come up with technological solutions for. I would like that to happen as soon as possible, which is why I've been working on AI. Rather than working on, say, medicine directly, let's work on AI, and then AI can be used to accelerate some of these other things. So basically that's why I'm working so hard on the AI stuff, and I wanted to have a company that was both AGI-first and product-first, because product is great: it lets you build a company and motivates you. And the way you have a company that's both AGI-first and product-first is that you make your product depend entirely on the quality of the AI. The biggest determining factor in the quality of our product is how smart the thing's gonna be. So now we're fully motivated both to make the AI better and to make the product better.

ELAD:

Yeah, it's a really nice sort of virtuous feedback loop, because to your point, as you make the product better, more people interact with it, and that helps make it a better product over time. So it's a really smart approach. How far away do you think we are from AIs that are as smart or smarter than people? Obviously they're smarter than people on certain dimensions already, but I'm just thinking of something that would be sort of equivalent.

NOAM:

Yeah, I guess we just always get surprised at what dimensions the AI gets better than people on. It's pretty cool that some of these things can now do your homework for you. I wish I had that as a kid.

ELAD:

What advice would you give to people starting companies now who come from backgrounds similar to yours? Like, what are things that you learned as a founder that you didn't necessarily learn while working at Google or other places?

NOAM:

Oh, good question. Basically you learn from horrible mistakes, but I don't feel like we've made really, really bad ones so far. Or at least we've kind of recovered <laugh>. But I guess, yeah, just build the thing you want really fast and hire people who are just really motivated to do it. Yeah.

SARAH:

Um, so one quick question just for your users: what's the secret to making a good character? Like, if I'm gonna go make a copy of Elad instead of rubber-ducking with myself, what do I need? Oh, just my text chat with Elad. Yeah. Stop disappearing from the chat, Elad.

ELAD:

<laugh>. I'm just trying to protect myself from becoming a character, you know? So

NOAM:

I mean, you can do it just as simply as putting in a greeting. A name and a greeting is all you need, typically, for famous characters or famous people, because the model probably already knows what they're supposed to be like. If it's something that the model is not going to know about, because it's a little less famous, then you can create an example conversation to show it how the character's supposed to act.
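As a hypothetical illustration of the recipe Noam describes (this is not Character.AI's actual API or data format), a character definition boils down to a name, a greeting, and, for lesser-known personas, a bit of example dialogue:

```python
# Hypothetical sketch of what a character definition might contain, following
# Noam's description; not Character.AI's actual schema.
character = {
    "name": "Socrates",
    "greeting": "Greetings, friend. Which of your beliefs shall we examine today?",
    # Optional for famous figures the model already knows; important for
    # lesser-known personas, to show the model how the character should act.
    "example_dialogue": [
        {"user": "Is courage simply fearlessness?",
         "character": "Tell me, is the reckless man, who fears nothing, courageous?"},
    ],
}
```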

SARAH:

It's insane that Character's only 22 people. Like, you're hiring. What are you hiring for? What are you looking for?

NOAM:

So far, 21 of the 22 are engineers. So we're gonna hire more engineers. No, I'm joking. We are, we're gonna hire more engineers,

SARAH:

<laugh> shocked

ELAD:

<laugh>

NOAM:

Both in deep learning but also, you know, front end and back end. We'll definitely hire more people on the business and product side. Yeah, we've got a recruiter starting on Monday.

SARAH:

Okay. Hard requirement: burning desire or childhood dream to bring characters to life. Yeah,

ELAD:

Yeah, yeah. An exceptional person. Yep. Do you mind if I ask you like two or three quick fire questions and then we'll wrap up? Sure. Okay. Who's your favorite mathematician or computer scientist.

NOAM:

Oh, huh, that's a good one. They were all standing on the shoulders of giants. It's hard to pick someone out of this big tower of mathematicians and computer scientists. I got to work with Jeff Dean a lot at Google. He's really nice, fun to work with. I guess he's now running their large language model stuff. It's a little bit of a regret of having left Google, but hopefully we'll collaborate in the future.

ELAD:

Yeah. Do you think math is invented or discovered?

NOAM:

Oh, that's interesting. Okay. I guess discovered, maybe. Maybe all of it's discovered, everything, and we're just discovering it.

ELAD:

And then last question: what do you think is something you wish you'd invented?

NOAM:

Uh, let's see.

SARAH:

Teleportation.

NOAM:

Ooh,

ELAD:

That, that seems hard.

NOAM:

That sounds like a good one. I'm not gonna step into a teleporter,

SARAH:

Some physics involved here. Yeah.

NOAM:

I do not wanna be, like, disassembled or anything. No beaming. I'll walk.

ELAD:

Like, take the elevator. We don't need to teleport.

NOAM:

I don't want my brain uploaded into a computer. Like, I think I would like to keep my physical body, please. Thanks <laugh>.

SARAH:

Oh, I don't care. Let me out of the meat box. What do you wish you'd invented?

NOAM:

Oh, what I wish I'd invented. Sorry, that was dodging the question. Um,

ELAD:

<laugh>

NOAM:

I'm just focused on inventing AI that, you know, can push the technology forward.

SARAH:

Such a good founder answer.

ELAD:

Makes sense.

NOAM:

I'm working on it.

ELAD:

Very focused. That's great. Well, Noam, this was an incredible conversation. So thank you so much for joining us today on the podcast.

NOAM:

Thank you Elad. Thank you Sarah. Good to see you too.