
March 16, 2023

No Priors 🎙️ 108, with Sridhar Ramaswamy, Cofounder/CEO Neeva: What is the future of Search? (TRANSCRIPT)

SARAH:

For the first time in decades, one of the Internet's most important products, web search, feels like it might be at risk of disruption. Bing has allied with OpenAI to integrate LLMs, Google is committed to launching new products, and new startups are emerging. A 16-year Google veteran who most recently led the Internet's most profitable business as SVP in charge of Google ads, commerce, and privacy, Sridhar Ramaswamy co-founded the challenger AI-powered search platform Neeva in 2019. Sridhar, I've learned so much from you as an investing partner, founder, and friend. Welcome to the podcast.

SRIDHAR:

Thank you. Very excited to be here. Same, I've learned so much about companies and investing in tech from you.

SARAH:

Let's start with the background. Tell us about the motivation to start Neeva when you were already part of creating the dominant search product.

SRIDHAR:

Yeah, so Neeva is a little bit of back-to-basics thinking. When I left Google, I knew I wanted to start a company. I spent a lot of time with Vivek talking about what we wanted to work on, and we ultimately came to the conclusion that we were actually really excited about search. There's the geek in us that liked to help people find the information they needed, and we were also ambitious enough to think that, you know, 20 years in, we could rethink the search product and create a better one. Our aha moment is a little bit of an abstract aha moment, which is we said: if we didn't have to deal with ads, if we didn't have to worry about monetizing, we truly could start from back to basics. As, uh, both of you know, in startups it's as much about taking advantage of opportunity as it is the original direction that you set.

So the first three years of Neeva were really about building a better private search engine, and honestly it also taught us a lot of pretty harsh lessons about consumers and, uh, you know, whether they were ready for change or not. And really, what we saw happen with AI and large language models last year was the aha moment when we realized: wait, we can have the great principles that we started Neeva with and create a much, much better experience. So that's a little bit of the journey to where we are. But at our core, Neeva was: there must be a better search product. It cannot be that there's one company, one religion, one product for the whole world.

SARAH:

So I think many people who use Google every day would say it's actually pretty good. And as somebody who was working on this, you could see — I think sometimes users are blind when they have a, you know, a default that's this strong. Um, what were the things you thought could be better? And, if I could add to that, how does that factor into the Neeva mission?

SRIDHAR:

Yeah, so I mean, an important part, at least early on, was the private and the ads-free, and, you know, we have to say that we underestimated how much people, especially in the US, would care about it. As you know, figuring out consumers is a very tricky thing. People will often not do what they say they will do, or will not even admit to things that they will or will not do. That's just the nature of the game. We were surprised, for example, that we did so much better in Europe compared to the United States. You don't really think of them as being that different, but in practice, in terms of how many people care, it is actually very different. So a lot of the early Neeva was really about how do we use the power of being privacy-focused and ads-free to create a truly better experience.

So we've tried a number of things. They have achieved varying degrees of success — for example, the integration of, uh, things like personal data and personal preferences. But I would say the fundamental challenge of Neeva, especially in the United States, has been: how do you get people to take that initial step of caring enough to want to change their search engine? Once you actually get people to do that, the job gets considerably easier, and they begin to see all of the things that were not really that great about that experience. Again, as a startup founder, as a consumer startup founder, I think these are pretty harsh lessons in consumer psychology, but ones that, you know, one has to learn.

SARAH:

So more recently you guys had a big breakthrough in terms of experience and consumer openness to AI summaries, which look very different from traditional search. Can you just talk about how this product came about and what you had to build to enable it?

SRIDHAR:

Yeah, so in some sense AI summaries, <laugh> — I am sure there are many Google engineers and execs that'll tell you, wait, we've been doing this for 15 years. It's kind of true. Google launched something called featured snippets a long time ago, I think 2010 or '11. Google's always known that an answer right in the main search experience trumps all. Google actually knows this really well — Elad will remember this. Google knocked out live.com, Bing's predecessor, as the top image search product in the world by integrating image search right into the search experience. It turns out live.com, as Bing was called then, was the one that had the best image search experience; Google knocked it out by putting image search into the search experience. The same thing happened with Yelp and with local: it didn't matter how good Yelp was, if you could show an answer right in the search experience, that basically won. Similarly, featured snippets — which is really picking out the two or three lines from a website that are exactly the answer the user is looking for — was always a big win.

People love the product. It goes back essentially to Occam's Razor: the simplest explanation is that anything that minimizes work, people are going to love. And so if you give an answer instead of letting people click on something, of course they're going to like it. This is the reason why, you know, the currency conversion widget on Google is wildly popular. It's not that you and I can't click and go to another site, but it's like, ah, it's right there. And so answers in that sense are old. But the fundamentals of search have always been that you got back a set of opaque links, and of course Google's entire business, the trillion-dollar business, is built on this again obvious fact that you and I cannot really tell between a good link and a bad link. We can say a little bit — if it's the New York Times, our brain basically tells us, ah, that's a good site.

You know, for most sites we don't really know; we click, we find out. But the opacity and the linear scanning order have always been an important part of how search has worked. This consistent desire on the part of users, whether stated or not, to get to the answer in the fastest possible way is an important thing to remember. But things like featured snippets were never deployable at scale. You know, the technology simply was not there. Even when Google put the full might of its mighty machine against the problem, the coverage never really extended beyond like five, six, seven percent, and it would make website owners really unhappy — they're like, you're taking away my clicks. And so it was always this edgy feature that Google would be like, yes, we can show this, but not really too much. Our aha moment with large language models was: wait a minute, for the first time you have these models that can take any content and come up with a summary that gets to the heart of what a page is saying. And oftentimes you have to do it in the context of the query.

If you have a blog, for example, that has six sections and your query is really about one of those sections, then you better find the right section to summarize. And so a lot of it was just realizing that this was essentially previously unsolvable — and summarization in particular is this frustratingly vague concept. You and I can do a reasonable job if given a bunch of different kinds of content to summarize, but actually making a machine learning model do that in general is a tough thing. So a lot of last year was really about understanding that, but also trying to make it work at scale, which was a big effort on our part. We decided that we didn't really want to be beholden to, say, using OpenAI's API for doing things like summarizing a 4-billion-page index. We built a lot of the technology in-house, but the final cumulative product is these cited summaries, which really is one fluid answer to a pretty complicated question or query.
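[Editor's note: the query-conditioned summarization idea Sridhar describes here — pick the section of a page the query is actually about, then summarize only that — can be sketched in a few lines. This is a toy illustration, not Neeva's pipeline: the word-overlap scorer stands in for a real retrieval model, and `summarize` stands in for an LLM call.]

```python
def score(section: str, query: str) -> int:
    """Crude relevance signal: count query words that appear in the section."""
    words = set(section.lower().split())
    return sum(1 for w in query.lower().split() if w in words)

def pick_section(sections: list[str], query: str) -> str:
    """Choose the section most relevant to the query before summarizing."""
    return max(sections, key=lambda s: score(s, query))

def summarize(section: str, max_sentences: int = 1) -> str:
    """Stand-in for an LLM summarizer: keep only the leading sentences."""
    sentences = [s.strip() for s in section.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

# A hypothetical blog with three unrelated sections.
blog = [
    "Our trip began in Lisbon. The food was wonderful.",
    "Battery life on the new laptop is excellent. It lasts twelve hours.",
    "We also review the keyboard. The keys feel mushy.",
]
best = pick_section(blog, "how long does the laptop battery last")
print(summarize(best))  # prints: Battery life on the new laptop is excellent.
```

The point of the sketch is the two-step shape: condition the retrieval step on the query first, so the summarizer never sees the five irrelevant sections.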

Obviously, you know, many people are doing this now, but for us that was the aha moment of: wait, we can write answers — a single authoritative answer — for 50, 60, 70% of queries. And large language models, as you folks well know, are also general-purpose learners. The exact same tech that can summarize a piece of text can also be used to pull out structured information. We realized that we were basically sitting on a gold mine beyond compare in terms of a better search experience. You know, most of what you see with cited summaries is in the context of information-seeking queries, but there's a whole lot of work coming that can tackle different kinds of commercial queries. So this is the beginning of a lot of work that can be done to make the search experience better. But the core really is: if we can provide a believable answer to a question, people are always going to prefer that over any number of links that you can give them. People don't like clicking on links.

ELAD:

Yeah, it's really interesting because, uh, you know, I overlapped with you at Google, and one of the things I worked on for a while was mobile search. And I remember, to your point, we tried to surface what at the time we were calling one-boxes, you know, that would trigger with images or trigger with, uh, location information. And it's pretty amazing that you're able to get to such high amounts of coverage just using the LLM side. How do you think about — because I remember when we were building those individual pieces, there was a lot of custom work: there were custom indices for news and crawls, and then there were custom ranking algorithms; everything had sort of specialization. How do you think about the other 30 or 40% that you're covering? Or is the idea eventually to do everything via LLMs? Is that prohibitive from a cost perspective? I guess more generally, how do you think about information retrieval-related problems in this new world, and how do you map the different types of search queries and the different types of results against that?

SRIDHAR:

It's a great question. Uh, so for example, in the 55 to 60% that I'm talking about, I'm actually excluding the one-boxes that we already fire, so it doesn't include the stock cards or the weather cards and stuff like that. In fact, we are working on a Poe integration, and part of what the Poe team is saying is like, wait, wait, if somebody asks for weather, just give it back. You have the information already. It's not that hard.

ELAD:

For clarity, Poe is the, uh, Quora app.

SRIDHAR:

Yeah, Poe is the Quora app. It's like — uh, I don't know what's the right way to put it — it's like a chatbot aggregator; it's a pretty cool app. You can take some of the one-boxes in, and even there, by the way, this code for triggering, as you point out, Elad, used to be really annoying code. Sometimes it would be regular expressions; it's basically a giant, you know, ball of wax when it comes to figuring out how to trigger right. LLMs actually make some of that stuff easier if you want to extract structured information, even from user-typed queries. And I'm sure most tech people have dealt with this at some point in their life or other: all of us have nightmares about writing Beautiful Soup code in order to parse webpages. It's basically regular expression parsing over ever-changing websites.
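[Editor's note: the "annoying triggering code" Sridhar mentions looks roughly like the sketch below — hand-written regular expressions deciding whether a query fires a one-box. The patterns here are hypothetical, for illustration only; the closing comment contrasts them with the LLM-extraction approach he describes.]

```python
import re

# Old-school one-box triggering: a pile of hand-maintained regexes that
# decide whether a query should fire the weather card. Brittle by design.
WEATHER_PATTERNS = [
    re.compile(r"^weather (in|for) (?P<place>.+)$"),
    re.compile(r"^(?P<place>.+) weather$"),
]

def trigger_weather(query: str):
    """Return the extracted place if a weather pattern fires, else None."""
    for pat in WEATHER_PATTERNS:
        m = pat.match(query.lower())
        if m:
            return m.group("place")
    return None

print(trigger_weather("weather in tokyo"))               # prints: tokyo
print(trigger_weather("do i need an umbrella in tokyo"))  # prints: None
# An LLM-based extractor would instead be prompted to emit structured
# output, e.g. {"intent": "weather", "place": "tokyo"}, and would handle
# phrasings no regex author anticipated.
```

The second query is the failure mode: the user clearly wants weather, but no regex author wrote that pattern, which is why extraction via a model generalizes where the trigger list cannot.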

It's horrible. We have done a bunch of it in the first two-ish years of Neeva. That stuff is also easily generalizable with the smallest model that there is at this point. I don't feel that there's a natural limit to how much LLMs can be used with search. I do feel, however, <laugh>, that there's a very strong limit to how many questions can be usefully answered, and you realize with a shock that search engines are actually pretty terrible at a lot of tail queries that you and I will now no longer think twice about putting into a chatbot. What do I mean by that? The other day, you know, Jason Calacanis, who like you folks has a big podcast, typed "how are the Knicks doing this year?" into Neeva and a bunch of other search engines, and he was like, ah, this AI stuff does not work.

But the real answer is, no one in their right mind is going to think of typing "how are the Knicks doing this year" into Google search, because it just never gave great answers for stuff like this. Tail queries have always been served poorly. I don't think that is going to change instantly, but queries that can be meaningfully answered — I think a lot of them can be answered with LLMs. For what it's worth, the approach that we are taking, which is very much the beginning of how large language models can be applied to retrieval problems, is this technique called retrieval-augmented generation. Again, uh, you know, a lot of your listeners know this: it's basically how you combine a search engine as a tool that a large language model uses. And even there, there's going to be generalization. There is zero reason why we can't recognize that you actually typed in an arithmetic expression and fire off a Python interpreter, or some other API. So again, even in terms of what search engines can do, we are very much at the beginning. I think we are going to expect a lot more from these kinds of interfaces, and the difference between a chatbot and a search engine that combines a chatbot and retrieval is going to just look more and more blurry going forward. So hard questions will continue to be hard, but a lot of questions that we expect answers for, I think, will be eminently answerable with LLMs as one of the tools that go in.
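[Editor's note: the retrieval-augmented generation loop Sridhar names can be sketched in miniature — retrieve the best-matching documents, then hand them to the model as grounded, citable context. Everything below is hypothetical (the corpus, the keyword scorer standing in for a real ranker, and the prompt format); the final LLM call is omitted.]

```python
# Toy two-document corpus; a real system retrieves from a web-scale index.
CORPUS = {
    "doc1": "The Knicks won 47 games in the 2022-23 season.",
    "doc2": "Paris is the capital of France.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by how many query words they contain (toy ranker)."""
    def overlap(text: str) -> int:
        words = set(text.lower().split())
        return sum(w in words for w in query.lower().split())
    return sorted(CORPUS.values(), key=overlap, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model: numbered snippets first, then the question.
    The snippet numbers are what lets the answer carry citations."""
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(retrieve(query)))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

print(build_prompt("how are the Knicks doing"))
```

The design choice worth noting is that the model never answers from memory alone: the retrieval step selects fresh evidence per query, which is what makes a cited summary possible for tail questions like the Knicks one.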

ELAD:

Super exciting. Relatedly, when I've seen people model out the costs of using LLMs versus more traditional IR approaches, LLMs seem to be more expensive per query. And you know, I know that when Satya Nadella was talking about integrating these things into Bing, he almost had this "your margin is my opportunity" style perspective relative to Google, right? I don't know if it's true or not in terms of how that would substantiate over time, but it almost felt like the claim was that, you know, Bing was okay almost subsidizing LLMs integrated into search to try and draw down or sort of hurt the margin on the Google side. How do you think about, you know, the potentially cost-prohibitive nature of LLMs for search? Is it really a thing? Do you deal with it with semiconductors or small models or other things, or is it not really that important of a consideration?

SRIDHAR:

Well, first of all, his comment might have meant two things. There are two ways to think about margin: one is the cost of serving, and the other is the margin that Google makes, say, on an Apple deal. It's not clear which one he was talking about. But this is a topic that you've written a lot on, Elad, when it comes to just LLMs and cost. We saw something dramatic happen where OpenAI reduced the cost of its API by a factor of 10. That's a little insane this early on. But if you go back to the basics of your question and think roughly, you know, an average very large model call takes about 5 cents — that is astronomical, because you're talking a $50 cost, you know, for serving a thousand queries. Now, the average RPM for US queries is about 40 to 50 dollars, and clearly that'll be a very high cost.

Uh, the rest of the world is a lot lower, by the way; my memory is it's on the order of $20 if you average over the whole world. And I'm sure you folks also know that Sydney, for example, will issue up to three queries for every question that you ask. I mean, it's an arbitrary limit, but sometimes we need to ask more than one question in order to answer well. Put that way — um, yes, this is an astronomical cost. But personally, I feel that there is more and more evidence that you don't need the full power of the largest, biggest model to get most things done. Certainly the way we think about cost: for summarization, for example, we are very comfortable using models that are in the five to ten billion parameter range. We are very good at fine-tuning them.

There's a human feedback loop that is about to kick in there. So whatever can be done with very large models for large classes of problems — our attitude is, we'll do them all day long for the kinds of problems that we care about, and we are fine running six kinds of models instead of running one model that is going to conquer them all. And so I do feel like for a lot of known problems, model size is not really going to be an issue, and there's going to be an ongoing reduction both in the size of models and therefore the cost to serve them. Satya, of course, might be referring to the margin that Apple pays out, and, um, if I were them, I would offer, you know, Apple a hundred percent of rev share in order to get at the traffic. It's a way to establish a beachhead. By the way, there's precedent: Google gave more than a hundred percent to AOL and close to a hundred percent to Yahoo in its early years. That's how you make markets. They obviously will be trying everything.

SARAH:

You're saying that we should expect these players — or that it'd be rational for them — to play even more aggressively from an economics perspective than we've seen so far?

SRIDHAR:

Oh, absolutely, absolutely. You know, part of the problem with Bing's growth has been that Google has fought it off very effectively on the business side. Of course, it hasn't helped that it is common perception — whether it's deserved or not is a different story — that Bing's search quality is not as good as Google's. For what it's worth, there are very few people on the planet that can objectively judge search engine quality. And so they need a way to break through and establish a meaningful presence, and so it is perfectly rational for them to start with a better product, but then go out of their way to establish a beachhead, establish a market, because that is going to pay off in a pretty big way for them down the line.

SARAH:

Every part of this game feels like an expensive game to play. And I wanted to ask you about just the building of search, even aside from training LLMs. I remember there was a lot of skepticism when Neeva first started, including from yourself, about how any startup could afford to build a new search engine — from an engineering talent perspective, the sheer ambition of the technical project, and the infrastructure cost. You've built an all-star team, but obviously can't spend a billion dollars as a startup. Can you talk a little bit about what's been most challenging to build?

SRIDHAR:

Yeah, search is one of these things where you need a fair amount of scale before you have any kind of meaningful product. With an ad system, for example, I can tell you how to build one with a three-person team, um, because it's limited data. Or if you're building a new mail client, it's a small problem — yes, you'll have scale problems, but only after you have a million users, not on day one. Search, where you have to start from scratch, is problematic from that perspective, a little like setting up a new mobile network, simply because you have to do a lot of work to be seen as even vaguely competitive. And so everything from how we went about doing our crawl to how we built our index has been a struggle — I won't deny it. And it's one of these problems where, you know, grown men and women, seasoned engineers, will just run away after a while.

They'll work on it for three months, and they'll be like, I can't deal with this, I just need to go. And it's disconcerting to, you know, kind of watch that. But having said that, you know, we do have an amazing team. One of our engineers, for example, was just brilliant at engineering a system that ran completely on flash, in which we could do things like super rapid iteration, replace the entire index over the space of two days, or put in arbitrary amounts of information for experimentation in a much more flexible way. Problems that took Google like 15 years to solve, we had solved out of the gate, simply because he had run into many, many of these problems before. We were also opportunistic: you know, to the point of LLMs being these universal input-output machines, we realized that a lot of problems that Google solved with massive scale and user data, we could in fact solve with LLMs.

So we use them a lot for things like query rewriting. Similarly, extracting structured information — take weather: it turns out people will ask about weather in many wondrous ways, and we are in the process of replacing a hard-coded system with one that's based on an LLM to extract structure. So we have taken shortcuts wherever we can in order to do this. It is a daunting problem, but I'll tell you, the single biggest positive thing for the team was actually launching answers. Because up until then, they sort of had this feeling of: even if we were to be as good as, if not better than, Google, no one will care — people can't tell between lists of links anyway. Once you turn that into, yes, here is an actual answer that my mom can take a look at and say is way better than a bunch of links, all of a sudden there's excitement. And so there's the actual psychology — all of you deal with teams — of what excites the team. And really, over the past few quarters, as people have realized, oh wait, this can be a transformational experience, it's been like a big jolt of electricity through everybody, just in terms of how excited they are, how hard they work, and things like that.

ELAD:

Yeah, that's very exciting progress. I guess one question related to that is distribution, because you mentioned, you know, consumer habits are quite sticky. On the distribution side, I remember even back when I was at Google many years ago — over a decade ago, probably more than that now, 15 years ago or something — hundreds of millions of dollars a year were being spent on distribution, and obviously that number's grown with the Apple deal and other things. And so do you view it as distribution through superior product? Is it specific integrations or partnerships? How do you think about getting that consumer interest?

SRIDHAR:

Uh, distribution is hard. There's just no question about it. Habits are hard to change. You can dislodge some of this with a superior product; you can dislodge some of it with dollars. Part of the reason why we released this app called Gist, which was a very different take on search, is we very deliberately said: if we wanted search to look like Instagram stories, what should it look like? It's an experiment; we hope it'll do well. And so sometimes you have to look for change — sort of the locus of change. The other thing that we are also actively looking at is, uh, you know, in this moment there's going to be an enormous amount of uncertainty about things like: is search engine traffic basically going to disappear for websites? Are LLMs going to disrupt the aggregator-publisher relationship in a fundamental way? We are now realizing that we can offer a superior search experience to lots of publishers, whether it's a Reddit, a boston.com, or anyone else — we can give them conversational search on their corpus. So we are going to try a set of different things. We've actually had a fair amount of success working with privacy products like Dashlane, and obviously there are other folks that we are talking to, like Proton Mail, about how we could work better together. Distribution continues to be easily my top worry for how Neeva gets scale.

ELAD:

I guess related to distribution and business model: you opted for a privacy-centric subscription service without ads quite early, and I think at the time that was very innovative thinking, right? Now that other products — ChatGPT, et cetera — are all sort of coming out with these subscription-based approaches, I was just sort of curious how you thought about it. When do you think a product should be supported by subscriptions? When should it be supported by ads? And how do you think about it in the context of this type of product?

SRIDHAR:

I mean, for us it was a way to stand out. It was to give us a clear runway. Thoughtfully done, ads monetization is an incredible juggernaut, as everyone on this podcast knows, in terms of the kind of scale that it can bring and how it can disconnect monetization from the product. So it's almost like a separate team that is working on it, and, you know, when it's very successful it can actually get kind of annoying. I'm sure none of us likes watching broadcast TV anymore — sports broadcasts drive me crazy when I think about how many ads I have to sit through. Ads sort of come with elements of self-destruction built in; it's par for the course. When you're doing it, it's always attractive to do things like show more ads. In some ways, you know, hybrid approaches — starting ads-free and maybe using ads as an additional mechanism — might be more sustainable.

Even though, you know, reasonable people will argue that most people that come to ads later tend to be even less discriminating about how many ads they show, and about ads quality, than the people that had them from the start. You know, I worked on it, so it's also my old team, but Google search ads actually tried very hard to hang on to quality bars, to hang on to user metrics, for a very, very long time. Compare them to somebody like Amazon: today I find Amazon's search experience a joke, because it is so full of ads — and actually misleading ads — that it's really hard to find what is going on. I think both are viable options. There are structural elements that then come into what you should adopt: if you're in the business of providing answers, like ChatGPT is, ads just become a whole lot harder to do — you're betting on the quality of answers. But for many other products that are about more casual consumption, whether it's social media or even where search might go, I think it's an open question where it'll ultimately settle. I point out to people that something like a Gist experience, which is a summary followed by a series of cards — you can stick ads in there. We are not planning to do that, but there are many different ways to solve problems.

ELAD:

In the early days of Google, one of the arguments being made for ads was that the signal of willingness to pay was a way to actually boost a meaningful link for somebody. In other words, if there is somebody who's willing to promote a link, that in and of itself was a signal of the potential quality of that link for the potential user. Do you think that's a true statement, or do you think it used to be? You know, are commerce signals a good boost for actual ranking?

SRIDHAR:

They can be, but I think the bigger truth is that smart people will come up with great explanations for everything that they do, as long as it's convenient to them. The best religions to have are the ones that are aligned with your business interests. And of course the ads team is going to say that. Um, there's some amount of truth in it, but that clearly is not an explanation for two screenfuls of ads when you're searching on your phone. I find this whole thing of "ads enable Google to make free products" or "ads enable Facebook to be available to Ecuadorian people" — made by billionaires sitting in Palo Alto — to be entirely self-serving. My attitude is like: yep, we can make money with ads, it works pretty well, we are rich. It's okay.

SARAH:

If we just sort of project out a little bit and say these summaries, cited or not, these chatbot experiences, these answers are really compelling to consumers — how do you see the relationship between search and content producers changing in the long term? Right? If these summaries take traffic from publishers, do we lose the incentive to publish content on the internet?

SRIDHAR:

I think that's one of the big unknowns. I think what is going to happen is that some of the larger content creators — you know, I would put people like, uh, Reddit and Quora, some of the forward-thinking ones, very much in that bucket — are going to say: we want to be part of search, uh, but we don't really want to be part of your answers. Like, you know, taking our data and sticking it into LLMs is not really allowed by our crawl policy. But smaller publishers are not really going to be able to do this. The bigger ones are going to have things like their own chatbots, so that you can browse Reddit content or Quora content. So I liken the current moment to, you know, basically dropping a bomb, a giant impulse, into the center of how a lot of us get at information.

This is going to radiate out from here to a whole bunch of sites, to the content ecosystem. It's a little hard in my mind to predict, but it does feel like there might be more centralization, more consolidation, when it comes to content creation. Your average small blog, which could subsidize itself, which could monetize itself with advertising, is going to find it hard to compete in this answer world — especially if the expected experience for everybody is going to be: I don't really want to read giant pages, I want to be talking to you. Gimme a bit of a summary of what you're going to say, then I'll ask follow-up questions. All those experiences are possible, but not for every blog that there is. So I think there is potentially a very different platform that is going to evolve for how content is created, one that looks a little bit different from how it is today.

ELAD:

You know, when you were at Google, your team was doing machine learning and AI at a scale that I think roughly didn't exist anywhere else, and you were very forward-thinking in terms of then applying really interesting cutting-edge technologies at Neeva and creating one of the first and most interesting, you know, LLM-based search engines, right? Which I think is super exciting work. What else are you predicting gets most disrupted within the AI world beyond search, or what are some areas that you think are coming over the next few years?

SRIDHAR:

I mean, we talked about content and how it's going to get disrupted. I'm not even talking about synthetic content — yes, there will be synthetic content, but I think there will be techniques, a cat-and-mouse game, for detecting it. But an obvious place where content is generated — actually, ironically, it's going to be advertising. I can see how personalized advertising actually plays a pretty big role, especially when it gets to be multimodal. I joke to people that Michael Jordan is going to be telling you to buy his Air Jordans — like, you know, look you in the eye and speak your name and so on and so forth. So advertising, with its closed loop for optimization and the relentless focus on efficiency, actually is a natural area. And obviously there are a lot of companies that are saying things like, oh, we can apply LLM technology to every other information function, whether it's mail or how we consume documents.

But what I find, you know, interesting is that we have a set of incumbent technology companies that are actually very smart and very driven. Think about it: Microsoft being this innovative this late into the game — you don't hear about stuff like that from IBM, not at the scale of consumers and the whole world. So I think they're all going to react pretty quickly and incorporate a lot of it. So I don't know how much pure SaaS innovation there is going to be on products that we take for granted. I'm not saying there's not going to be any, but it's a little bit harder. One of the areas I'm personally very excited about is the generalization concept that I spoke about earlier, which is: if you think of LLMs as like machine language, then the natural thing is, how do you combine them with the various tools that we use — search engines, calculators, APIs, programs, other websites?

So I think action transformers are going to be an incredibly powerful area. The technology is very nascent. So unlike, say, you know, OpenAI's ability to crank out new generations of LLMs, I don't think that tech is yet at a point where people can build lots of applications on top of it. But to me that is potentially a big breakthrough — not just for things like RPA, but also potentially for: hey, can you create an AI SRE? Can you create an AI code reviewer? Can you create, fill in the blank? I think that's incredibly exciting, but I think the technology is also quite a bit more nascent than what we have just come to expect will happen with language models.

ELAD:

Yeah, the agentification of the world is a very exciting future, so we'll, uh, await it with bated breath. Um, as we wrap up, is there anything else you'd like to talk about that we didn't touch on?

SRIDHAR:

You know, it's trite, it is repeated, but as a technologist, this is a really exciting moment where I do think that this is powerful new technology, and it's also getting democratized very rapidly. You know, my take is that WhatsApp was the seminal moment of mobile computing: here a team of 30 people could create a product for the whole world. To me, that represented the power of mobile platforms. And, uh, if two years from now, three college kids, you know, 20 years old, are able to build a brand new application that uses the things that we know for sure — whether it's web servers or databases, but also language models — in a fundamental way, and we say, wow, we never thought of that — you know, that feels very possible. That is what is really exciting about where we are. Yeah. In the meanwhile, super excited for where we are able to take search with Neeva, and I appreciate all your wisdom and support.

SARAH:

I'm counting on that to happen, actually, and I think a lot of people think it will too. Sridhar, an incredible conversation, as always. Thank you for joining us on the podcast. We appreciate it.

SRIDHAR:

Thank you, Sarah. Thank you, Elad.

ELAD:

Thanks for joining us.