Fireside chat with Surojit Chatterjee (Founder, Ema)
In a conversation between Manu Rekhi and Surojit Chatterjee, Surojit shares insights from his journey in the AI industry, drawing from his experiences at major tech companies and his current venture, Ema. He emphasizes the importance of focusing on solving real customer problems rather than getting caught up in technological trends. Surojit discusses the future of AI, predicting a workplace where AI employees work alongside humans, enhancing productivity and innovation. He also touches on the challenges of enterprise sales and pricing models, advocating for outcome-based pricing to better align with customer needs.
Transcript
Chapter 1: Democratization of AI Technology
Manu:
It's my honor to have Surojit Chatterjee as a keynote today. I've known Surojit for almost 18 years. He spent his first 25 years in enterprise and leadership roles at Oracle, Symantec, Google, Flipkart, and Coinbase.
And last year, he crossed over and started an AI enterprise startup, Ema. We'll talk more about this in our conversation. Surojit also generously donates his time to mentor founders and has written numerous angel checks.
At his core, though, Surojit is a builder at the forefront of AI. And given his background, he has a very unique perspective on AI and who will capture more value. Is it the enterprises like Google, Meta, OpenAI, and Tencent who are building foundational LLM models?
Or are the AI startups building point solutions on top of those foundational models going to capture more value? You've already heard from nine startups today. So, Surojit, should we get right into it?
Surojit:
Absolutely. Can't wait. Perfect.
Manu:
As you know, the internet protocols developed in the 70s and 80s came out of government-funded projects and were kept open. There were no barriers, no rules. And this allowed for the proliferation of the internet as we know it.
However, this is not the case with AI. Hundreds of billions have been spent by companies like Google, Facebook, OpenAI, Tencent and others. And they all are trying to become the AI platform.
The winners will obviously grab a lot of value. And they could also levy taxes on startups that are unreasonable. Now, can startups build confidently on them?
I mean, you're building as well. So, I'd love to hear your thoughts and then jump into it.
Surojit:
Yeah. The first thing here is, look, the first transformer model was expensive to build. But as people have now figured out, it's not very difficult to build a new GPT model or new LLM.
I think the CAPEX required is coming down exponentially. You are already seeing this, right? Startups can raise money and build new models.
Open-source models are available. So I don't think this will be a barrier. Startups will have a lot of choice, building not just on one model but across multiple models.
These foundational models will compete with each other, so prices will keep going down. We already see that with, say, OpenAI decreasing prices over 90% or something in the last few years.
We'll see the performance getting better, accuracy getting better, prices go down. This is what happens in every technological revolution. So I'm not worried about this at all.
Manu:
Yeah. So as you look at what other AI technology can propel this adoption even faster: we've already seen a rapid pace of innovation. What do you see around the corner?
Surojit:
Yeah. See, I think the cost of running these models will come down. There will be innovation in hardware.
Also you probably won't need specialized hardware anymore. You should be able to run an LLM on your phone, for example. I think that will happen in the next five to seven years.
I think the other big thing is accuracy of these models. I think the hallucination that we see today will come down a lot in the next few years. And those are key blockers today in terms of building meaningful, useful applications.
As it has always happened, I think those things will get solved automatically, or at least that's what I hope.
Chapter 2: Leveraging Proprietary Data
Manu:
Perfect. So you mentioned something very interesting: that on your mobile device, on your smartphone, you can run an entire model. Can you walk us through what possible new things you could do that are not possible today?
Surojit:
Yeah. Today you are wearing probably your smartwatch. It's measuring your heartbeat, your pulse, all kinds of things, right?
People are wearing all kinds of devices that measure their blood sugar, stress level, and so on. With a model on your phone, you can very safely get a very personalized medical assistant, like your personal doctor. The model could look at your personal data and recommend things for you.
It may even do scheduling for you on your calendar, book travel for you. There are a very large number of things that become possible as these models get cheaper. And not just models running on your phone; that may happen or may not happen, and every application may not need it.
You will see all kinds of new games, for example, that may use models running right on your phone. But the point is, when these models become real commodity, easily available, whether running on your phone or on cloud, very cheap, very fast, very accurate, it will open doors for all kinds of new applications that we probably cannot imagine today. The rate of growth is almost exponential at this point in this technology, and we have seen this in past, like, technology cycles.
You know, when the first iPhone came out, it was very hard to imagine what applications could be built that would change lives. And today we see that, right? It was hard to imagine an Uber on the day smartphones like the iPhone were announced, or a DoorDash, or any such application that really uses things that are unique to the phone, like your location.
And we'll see the same thing happening: there will be applications that are probably hard to imagine today. So I'm very excited about what entrepreneurs can build. There'll be a lot of creativity needed to think about that future and how you can leverage it.
Manu:
I can't wait. That's pretty amazing. So one of the things you mentioned was your phone, with LLMs running on your personal information, your data, and actually making a lot more sense of the world around you in conjunction with what is important to you, right?
So data becomes really important, access to siloed private data. So how important is proprietary data and the development of AI-specific solutions, as you mentioned?
And maybe if you can share a couple points on your take on SLMs and DLMs.
Surojit:
Yeah, I think the next generation of innovation will be based on how these models can use proprietary data and build applications, right? Or what applications can you build on top of these models using proprietary data? See, LLMs haven't seen most of humanity's data.
Now, the entire data on the internet is probably 1% of all the data humanity has. Most of your data is locked up in large enterprises. It's also with you, your personal data, probably.
I think the important thing, though, is that it's very hard to make these LLMs work with proprietary data today. There's a lot of technology needed to make them work grounded in your data, rather than give answers based on data they have read somewhere on the internet. All that technology is improving very fast.
Our company does a lot of that work as well, but the entire industry is working on it. And you talked about SLMs and DLMs. Yeah, I think those will become very commonplace. It'll be super simple to create a brand new small model using your data; it'll take one click or something.
There'll be technology available, software layers available that will enable you to do that. We train a lot of small models today, small models based on customers' data that's dedicated to a customer or domain-specific data that we'll use across multiple customers for a given industry. And we see a real advantage and real improvement using those models.
So I think in the future the foundational model layer is commoditized, and I'll encourage everyone to build across multiple models. Startups will create many SLMs and DLMs. Customers may be creating their own models and plugging them into your application. And this will be very commonplace.
People in high school, students will build models. That's probably already happening.
Chapter 3: AI on Mobile Devices
Manu:
No, absolutely. Especially where you live. So it's interesting.
You mentioned you've gone from being a big-company executive to running your own company. As you've developed as a person, going from having all the resources in the world to having your own company, maybe walk us through, and help the audience understand, the core principles you follow as you build your AI company, Ema. And maybe mention a little bit of what Ema does as well, so we have more context.
Surojit:
So think of Ema as an agentic operating system. Given an enterprise automation problem, we have a unique technology that can take that problem, split it apart, and create an agentic mesh to solve it. And then we orchestrate those agents using our technology.
That's at a high level. In terms of the core principles we followed, first thing is, and I think this is really important for startups to internalize, any startup to internalize, only solve problems that are yours. Don't solve somebody else's problem.
For example, when we started working on this almost two years back, the LLMs were not that great. I mean, they are better, not perfect, but they were much worse. Very slow, inaccurate, at very high cost.
Now some people will say, okay, let's solve for those problems first. We made a very deliberate decision that those problems will get solved. Let's figure out how to solve the end customer's problem, how to leverage these LLMs to automate complex workflows.
That's the problem we have been solving. And as we all see, the costs are coming down, LLMs are getting faster, accuracy is getting better. So it's very important to understand which layer of problems you want to solve.
And again, it's okay if some startup wants to create a faster LLM; power to them. But that's not the company we wanted to build. The other important principle was, from day one, we decided that we are building not another piece of software, not another SaaS application, but an AI employee, because this technology now enables us to mimic humans, to automate the things that are repetitive, tedious, the soul-crushing parts of your role in any company.
So when you think about that, we are building AI employees, so we have to really understand how this AI employee will interact with human employees. So there was a lot of focus from early on, on this human-AI interaction. How humans will give feedback, how AI will give feedback back to humans.
How humans can manage and monitor the performance of these AI employees. And that's a big part of the innovation we have done: thinking about the product side of things, I'd say, not just core technology like making the model faster and so on.
Chapter 4: AI Employees and Workflow Automation
Manu:
That's very profound. Do you think other founders, like those who presented today and elsewhere, will adopt that framing more? Because selling is also changing, right?
Because you're not buying a seat, you're not buying a license; you're saying you're selling a superhuman employee that doesn't have HR issues, that you don't need to pay vacation time or PTO for, right? And it works 24/7. It's a mind shift in how you think through this.
So tell me a little bit more about how do you think about pricing? Because this is sort of a very different way of thinking about it.
Surojit:
Yeah. I think pricing, and the entire procurement process companies have today, has to change, right? Think about it for a second.
Today, with most software, you go and try to put your hand into an IT budget of some kind. But if you are building an AI employee that works with other human employees, should you not be looking at the total personnel budget, which is many times larger than the IT budget? But that takes time, right?
Building that understanding on the customer side will probably take more time. The other aspect of pricing is that you can't price this software, or this new entity, I'd say, per seat, because the whole idea is that you are going to have fewer humans behind this software, and probably fewer humans behind your other SaaS software, because this AI employee is actually turning the knobs, taking actions, and reducing your overall software cost and human cost. So we made a decision to price per outcome or per consumption rather than per seat.
And I think this will be quite common for generative AI and agentic AI applications in the future. And to me, this is actually the right thing to do. Customers, enterprises, have always complained that they bought so much SaaS software, so many seats, and the software is mostly either underused or never used.
That problem goes away. There is another core challenge that you have to tackle, because these are employees.
These are kind of human-like. So they also make mistakes sometimes that your traditional software does not make. How do you convince a customer that, oh, you can buy this new type of software that may make mistakes once in a while?
Again, the logic is not hard, which is, will you fire your best employee if they made a mistake, a small mistake? You won't. Similarly, you shouldn't fire your AI employee if they make a mistake.
Of course, if the cost of a mistake is very high, you need to put the right safeguards in place. And that's the responsibility of the entrepreneurs, the startups building these AI employees: to put the right safeguards in place and prompt the human when the AI is not certain about the answer, or there is some doubt about whether it's correct. There's a lot of work needed in that area.
But I think the bottom line is, this requires a whole new type of thinking on the customer's part as well.
Manu:
Pretty fascinating. And I would love to have you come back in a year, maybe with a couple more founders, so we can see how this positioning actually took hold with pricing, right? Because if you're thinking about it, I imagine other people are too.
And as you were speaking, I was thinking about the advent of supercomputers about 50 years ago. These were very expensive machines that did specific tasks, but you had to rent time, the hours you could use on these supercomputers. So usage was the model: if somebody used more, they paid more.
In a way it was also outcome-based: what do you need the machine to do for you? The tasks that were most relevant got done. So the more things change, the more they sometimes circle back.
I would love to invite you back maybe in a year to update this audience and others on how selling a superhuman employee actually worked. But it's absolutely fascinating. So as we start wrapping up, I would love to finish with a prediction, or a recap: looking ahead five to 10 years from now, you touched upon both the startups and the value they bring, and the enterprises.
And it seems like there isn't a clear winner. I think, done well, they both have value in different parts of the ecosystem and chain. But can you walk us through, maybe recap, what you think happens in the next five to 10 years?
Maybe give some advice to some of the C-level execs that are in the audience, but also to the startup founders that are also listening to you as well.
Chapter 5: Challenges in AI Adoption and Pricing Models
Surojit:
Yeah, I'm actually curious to hear your views too. First, I think every new technology has this trend, right? It seems like it's moving slowly, then it moves very rapidly.
It's the classic S-curve or something. So while it seems like there's a lot of expectation and maybe it's not doing its job today, there's also a lot of FUD out there: oh, it will steal all my data, maybe it's harmful.
I think a lot of those problems will get solved in the next few years. And you will see a sudden acceleration of adoption. So in five years or so, I think the application of AI, particularly generative AI and agentic AI, will be pervasive.
I imagine a future workplace where every team will have some human employees and some AI employees, with humans managing AI employees, and sometimes AI employees managing or monitoring humans, which, by the way, is already happening in contact centers. We have agent QA that monitors how agents are working and gives feedback in real time. We'll see these AI employees working with each other.
So we'll see a very different workplace than what we see today. And it will be across every industry, across every role. Of course, some industries will probably adapt faster than other industries, but we'll see this pretty pervasive application of generative AI everywhere in a few years.
Manu:
There are some really big industries that have been resistant to change. And I think it's because the right pieces were not quite there. If you look at healthcare, typically as technology gets deployed, the cost of delivery comes down.
It happens in every industry, except in healthcare. And I think that's because healthcare hasn't had the right pieces come together. So I'm pretty hopeful that with generative AI there will be some transformative things in healthcare, which will actually benefit all of us.
Now, the other part is around education, learning. How do you get the best teachers? If you have an amazing teacher, you learn better.
So I think a generative AI teacher can supplement good teachers, or even make the bad ones better, and move them up the chain. In my utopian world, what I'm assuming is that artificial intelligence actually improves human intelligence. This competition framing that the press loves is, oh, this is going to replace, this is going to displace.
And I think it's more going to enhance. A hundred years ago, if you told workers, hey, you will have two days off every week, they would have laughed at you, right? And now we take it for granted; I mean, the 40-hour workweek wasn't guaranteed until only 70 years ago.
So why couldn't 40 hours a week become eight hours a week, or 20 hours a week? There'll be other things we have to figure out as human beings, what we do with that time. But the good part is that a lot of the tedious stuff you mentioned, the stuff we don't like to do, the boring parts of your work or your personal life, can actually be managed for you.
There's always more TikTok videos to spend time watching. So I'm sure people will find ways to do that.
Surojit:
Just to add to that, I actually think when humans are freed up from repetitive tasks, they do more interesting, innovative stuff. You can see what happened in the Industrial Revolution. Before that, most of humanity was just farming.
When you free them up, free all those brains up to do more useful, interesting stuff, you see the world all around us, right? Everything we see and take for granted was probably invented in the last 200 years or even less, right?
Chapter 6: Speculations on Human Augmentation
Manu:
Yeah, Surojit, this is mind-blowing. And thank you for dropping all this knowledge on our attendees. It's always a pleasure talking to you, whether as a friend over a drink or on this keynote.
And I always walk away learning something from you, right? No matter what. Thank you so much.
Wishing you a pretty good day. Thank you so much. This was fun.
All right. Perfect. Thank you, Surojit.
So, Surojit, we have a few amazing questions from the audience. And please keep them coming, because we may have a few more minutes here with Surojit. By the way, I can't thank the nine founders enough for an amazing presentation.
I learned a lot from them, even though I've spent so much time with them. And I'm sure you all did as well. But before we get into the questions, I would like to thank everyone for showing up today.
And by the way, my chief of staff tells me there's an average of 31 to 85 follow-up conversations that people have requested with the founders, right? And these are C-level execs from Fortune 1000 companies to potential future investors. And this is at least 3x better than last year's responses from the audience.
So, I think my founders may actually need Surojit's Ema to help them get through all this backlog of follow-ups. But jokes aside, Surojit, should we jump right into some of the questions? Let's do it.
Okay. Do you have a favorite that you found? Should we start?
Surojit:
Yeah, I like the question from Murali. He's asking, how do you figure out whether the outcomes are right or wrong? Measuring outcomes is very complex.
How do you assign value to outcomes? It's a very good question. I think the answer is, really, it depends.
For different use cases, the expectation on outcomes is quite different. So you have to work with customers upfront to understand what outcome they are expecting and how they would rate it. For example, if it's generating some doc or blog post, right?
Whether it's good or bad is a little bit subjective. So we had to do some of that work upfront, which is basically asking, hey, how will you evaluate this outcome? Really, it's like when you are hiring someone. When I would interview for a job a long time back, I'd always ask this question of my future manager.
Hey, how would you evaluate me? How do you measure success? It's the same thing you have to do.
Manu:
No, perfect. Now, there's another question by Chapinder Singh. The question is, do you think AI can solve the enterprise data problem?
It is hard to distill insight from the petabytes of data enterprises have. That has been a challenge for many years with big data, and it is getting worse every year, right? So, do you want to pontificate on this question and see if you have an answer?
Surojit:
Yeah, I think this is actually a rich area for innovation. A lot of work will happen in this area. AI is solving some of the data problems.
With generative AI, you don't need to clean your data as much. You can throw a bunch of data at it, and it can figure things out, which is a huge advancement over the previous generation of AI. But with structured data, there's a lot more to be done.
Interestingly, generative AI works better in unstructured data situations, but actually stumbles a little bit on structured data. A lot of work has to happen there. I don't have a solution in mind, but I am expecting many companies will be formed and entrepreneurs will solve the data problem.
Manu:
It's an ongoing thing. Next year, when we have you come back, maybe there'll be more definitive answers. Tej, another founder, has a really good question.
And this goes back to the pricing discussion you and I just had. One of the biggest hurdles may be selling based on outcomes. Are enterprises softening up to this model?
In his experience, it has been really hard to even start such a conversation, because you get shut down. And he can see how AI employees are a place to start. So I think you've got a good seed started here.
But do you want to sort of take Tej's question and sort of dive deeper?
Surojit:
Yeah, it's not easy. Procurement is not even in tune with this kind of pricing today. So a lot of work has to be done.
Helping customers understand that this is like another employee, framing the problem better for them, that has been useful for us, or effective for us. But more work is needed. In particular, I think the analyst community, the influencer community, and the consulting community need to do more work, get excited about this concept, and help customers understand.
Manu:
Okay, perfect. Now, Roy actually took up your earlier speculation about the future, especially about mobile phones having LLMs run on them. He's asking: can you speculate a little further on use cases 10 years from now, if our future iPhones and Androids have 1000x more AI power built into them?
Surojit:
Oh, speculation. The good thing about speculation is you can say anything about 10 years out. I think your phone will really understand you very, very well.
You'll probably have a bunch of AI-enabled devices in your home in 10 years' time. Everything will have some AI built in, just like everything is connected now.
Everything is an internet device. I think you'll probably see some domestic robots coming up. I'm very excited about that, by the way.
Like some robot folding my clothes and washing dishes, actually taking the dishes from the table and washing them. And all of that will probably be controlled by your phone; you can just talk to your phone and it does it, or it manages it. It becomes like the manager of your AI staff at your home. And I'm very excited about the healthcare opportunity here, like personalized medicine. You have a lot of wearable devices monitoring all types of health signals from your sensors.
That can give you really proactive warnings about what you should do, what you should not do, what to eat, when to exercise more. It's like your personal trainer, personal coach, even for your emotional state, right? It can be almost like a buddy following you, advising you, and helping you with everything.
Manu:
So instead of my daughter nagging me, now my phone's going to nag me as well, right? Yes. So Roy, I think you have to live long enough and stay healthy enough that the technology can keep nagging you even more.
So getting a little bit more technical: Dilip asked this question. Which tasks use the most energy and add the most latency in the current generation of AI hardware? And what can the hardware industry do to improve these issues?
Surojit:
Look, calls to LLMs are very expensive, right? But I don't have the answer for what happens internally in LLMs, whether there are differences in how much hardware energy they're spending. We don't operate at that layer. I don't know, Manu, if you have a better answer.
Manu:
No, yeah, I'm not a hardware guy. But I think this has always happened in history: software pushes the envelope on hardware, hardware has to catch up, and vice versa, right? NVIDIA made it possible to run things that were not possible before.
Intel focused on the wrong things and they've been left behind and NVIDIA took that market. And I think this will continue to sort of go back and forth between how much processing power hardware you have and what software can actually do to push the envelopes on these things. So I think this is just an old thing, back and forth that will continue to happen.
So my chief of staff tells me he's just allowed people to come on live as well. So Sandeep Gupta, do you want to ask your question directly to Surojit?
Sandeep:
Sure. Yes, Surojit, thanks for your insights and continued clarification on some of this. Just in the last week, Salesforce and ServiceNow announced their agentic platforms, with hundreds of pre-built applications and industry use cases. And this is just the beginning; it's been building for the last 12 months as well.
And I see more platforms launching. So now you have agentic platforms springing up everywhere, and it becomes even more challenging for the enterprise to see which agentic platform they should build their agents on. There's also a data problem; somebody already asked that question.
How do you see that playing out? Is it confusing for the enterprise?
Surojit:
Yeah, I think the next 18 months to two or three years will probably be very confusing, because everybody is figuring it out; the industry structure is getting figured out. Everybody is announcing everything, and it's not easy for a customer to understand the layers of software, who does what, right?
So sometimes there's this confusion, but this is normal with a new category, a new, disruptive technology. It'll all settle down. I do think it's hard for customers to build and deploy their own agents. There is probably a very systematic underestimation of the effort needed to make agents work with adequate performance in the enterprise.
I think that recognition will come. A year back, a year and a half back, whenever I would talk to a customer, they would say, oh, I can just talk to ChatGPT, right? Why do I need to talk to you?
Like, I can just ask ChatGPT. Then they would say, oh, I can just LangChain it, and so on. I can just write some RAG.
I have an engineer who showed me that in one afternoon she could do this, right? So why do I need to buy a license? I think now there is recognition.
It's not as simple as it looks. Building a demo is very quick and easy in generative AI; building a real, useful product takes much longer, with specialization and different levels of talent. It will all settle down.
It's a matter of time. I think the industry has to go through these phases. Perfect.
Manu:
So we're about five minutes before our time. Maybe it's time to ask maybe one more question. Looks like I'm showing up on both screens.
Surojit:
That's my AI camera; it has a brain of its own. I'll try to fix it.
Manu:
So if you could get Rakesh to come online, like to ask a question.
If not, I'll ask his question. So, yeah: AI is solving many isolated problems in marketing and sales, in a similar vein to what was done during digital transformation. Do you think there's a need for a more holistic approach, to create a connected experience across the entire enterprise?
Maybe a chain of agents?
Surojit:
Yeah, absolutely. In fact, this is our premise in building Ema as well.
We don't think point solutions are sustainable or effective. And there is also high risk in deploying lots of point solutions. So we create suites of AI employees: there's a sales and marketing suite.
There is an HR suite. There is a customer center automation suite. And those employees work with each other.
They communicate and work with each other. There'll be a need for that, right? Just like with human employees, you don't want to hire someone who doesn't talk to anybody, does not collaborate, does not communicate.
Same thing: you need AI employees to talk to each other, talk to their human colleagues, collaborate, and communicate.
Manu:
Okay, Shikhin, could you put Roy to ask a question?
Okay, I think he's here.
Perfect, thank you. All right, so do you want to ask your question, Roy?
Roy:
Yeah, look, I've traveled to the future many times. I've been a huge fan of Star Trek and Star Trek: The Next Generation and the Borg ship and all of that good stuff. So I have to ask this question. There are already people beginning to do this: there was a World Chess Championship in Paris just last week, and there was a person who was basically paralyzed from the neck down, or whatever; he could barely move his hands, and he had some kind of implant in his head that allowed him to look at the board and the computer screen and move pieces.
So these embedded types of things are beginning to happen, right? So if you had to speculate that, and if we all had this AI augmented systems implanted in our head, I'm just curious to hear from somebody who's at the forefront of AI technologies, if you were to speculate on where that is going and what's the future of humanity, I'd be curious to hear your thoughts. I'm keenly watching what Neuralink is doing.
Surojit:
They have already now, I think, second or third human trial they're doing. I think that will happen. I need this for myself.
Every time my wife tells me something to do on the way back from work, I forget. I need a special chip in my brain that will remind me. And I probably need more memory.
Jokes apart, I think human augmentation will happen. It may not be in 10 years; maybe 15, maybe 20. It's inevitable, and it will have real implications, like the example you gave, Roy: it's really improving the lives of many people. But in general, I think people with typical abilities may also use augmentation to access memory quickly. If I had a Neuralink-type chip and couldn't remember an actor's name, I could just look it up with my brain without taking my phone out of my pocket. I think the future is one where everything on our phone may move into our brain via a chip. It started with a desktop in front of you; now it's a phone in your pocket. People are already mumbling to Siri through their headphones. Soon it may just be in your brain, and you'll communicate that way.
It's not too hard to imagine that it will happen.
Manu:
So, before we go too far into a world that looks like The Matrix: Roy, I hope you'll get to meet Surojit in person soon, so that conversation can continue.
DM, I think you're next. We're out of time, but Surojit, do you have a few more minutes to hang out while these last questions get asked? Everybody else in the audience: it's been a fantastic session with the panelists and our keynote speaker.
If you want to stay on for a few more minutes, please do; it depends on how long Surojit takes on these questions. But I do want to respect your time, so if you need to move on to your next meeting or get back to email, please feel free to do so.
DM:
Thank you, Manu. Thanks, Surojit, for the wonderful insights. I wanted to hear more of your thoughts on pricing.
You're right that the prevailing model is more SaaS-based, and outcome-based pricing can be a little more complicated. Are there any emerging models that simplify outcome-based pricing?
Surojit:
I think the simplification is packaging: bundles of tasks. Say, okay, I'm going to answer 100,000 tickets for you per year, which is more than the number of tickets you anticipate getting at your service desk, and this is the price.
So it's kind of like SaaS, but not exactly, because there's a number attached, and you can charge more for overages. You can simplify outcome-based pricing to task completion, with an agreed definition of what counts as a completed task.
I'm sure there will be a lot of experimentation with pricing in this space in the coming months and years, so I'm also watching to see what others do and what interesting concepts come out.
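The bundle-plus-overage model Surojit sketches reduces to a simple calculation. This is an illustrative sketch with made-up numbers (bundle size, fee, and overage rate are hypothetical), not Ema's actual pricing:

```python
def bundle_price(tasks_completed: int,
                 bundle_size: int = 100_000,
                 bundle_fee: float = 250_000.0,
                 overage_rate: float = 3.0) -> float:
    """Outcome-based 'bundle plus overage' pricing.

    The customer pays a flat annual fee for a bundle of completed tasks
    (e.g. resolved service-desk tickets); tasks beyond the bundle are
    billed per task at the overage rate.
    """
    overage = max(0, tasks_completed - bundle_size)
    return bundle_fee + overage * overage_rate


# Within the bundle: flat fee only.
print(bundle_price(80_000))   # 250000.0
# 10,000 tickets over the bundle: flat fee plus per-ticket overage.
print(bundle_price(110_000))  # 280000.0
```

The key contractual detail, as Surojit notes, is not the arithmetic but the agreed definition of what counts as a completed task.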
Manu:
And to that point, I've thought a lot about it since our conversation on pricing; you made me think in very different ways. So maybe, if you're open to it, we can set up a separate conversation to dive deeper and share what's working and what's not, because the fastest way to move this forward is sharing: you're dealing with enterprises in one sector, other founders in others, and we have a group where founders discuss these things. The more sharing that happens, the faster this moves forward. Nexit, do you want to ask your burning question?
Nexit:
No, it wasn't quite a burning question, but I thought the last question should have a bit of a joke in it. Seriously, though: AI employees have to interact with regular employees, right? Personalities get in the way; speed of conversation gets in the way.
So how do you slow an AI agent down to talk to me, because I speak so slowly, or speed it up to talk to Manu?
Surojit:
All our AI employees today communicate in text, so that problem is eliminated, or doesn't exist yet. But yes, we have had requests to incorporate voice. You can give feedback or instructions; conceptually, it's no different from instructing a human employee: make sure this answer is succinct, or elaborate on this one.
Our users do that all the time. Our agents can take instructions and behave differently.
You can do the same with language, for example: you can instruct the agent to answer in Spanish. For voice, you could give a different kind of instruction, like speak slowly or speak faster. Of course, in a real situation like customer support, imagine a voice agent talking directly to a customer; that's a real issue.
If you're talking to a non-English speaker, say in international customer support, you probably want to slow down a little. I think all of that will be possible with just instructions.
So building these agents carefully is essential: understanding and anticipating these use cases.
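In most agent stacks, the instruction-driven behavior Surojit describes amounts to extra text prepended to the model's system prompt. A minimal sketch of the idea; the function and the instruction strings are hypothetical, not Ema's API:

```python
def build_system_prompt(role: str, instructions: list[str]) -> str:
    """Compose an agent system prompt: a base role description
    followed by per-user style instructions (language, pace, tone)."""
    lines = [role, "Follow these style instructions:"]
    lines += [f"- {item}" for item in instructions]
    return "\n".join(lines)


prompt = build_system_prompt(
    "You are a customer-support agent.",
    ["Answer in Spanish.", "Speak slowly and keep answers succinct."],
)
print(prompt)
```

The same mechanism covers both examples from the conversation: swapping the answer language and adjusting pace are just different instruction strings handed to the agent.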
Nexit:
Fantastic, thank you.
Chapter 7: The Importance of Industry Collaboration
Manu:
Okay, thanks. I think we have time for one more question. Neeraj, can you come online?
If not, I'll ask it for him. While Neeraj comes on: his question is about OpenAI's latest launch. Is he on?
Neeraj:
I'm on, Manu.
Manu:
Ah, okay. Neeraj, why don't you ask your question yourself rather than me paraphrasing?
Neeraj:
Sure. So, yesterday's announcement by OpenAI of o1, or Strawberry, their new reasoning-capable LLM: is that a fundamental step change? Will it impact a lot of what we discussed today about agentic applications, and the many things that prior LLMs were not really capable of doing with just next-word prediction?
Surojit:
We've actually been testing it, and we're very excited, because these models show their inner monologue and can slow down and think now.
For us this is great news: you want better reasoning ability so your agents work better. But you still need to solve the last-mile problem. If you're asking whether o1 can do everything for an enterprise, the answer is no. It's like a very smart individual with strong reasoning ability, but it still needs to learn the enterprise context, be able to take actions, and understand the particular workflow and process that needs to be automated. So I'm actually excited every time one of these releases comes out. In general, my advice to all of you: don't build against the technology trend; build in a way that takes advantage of it, which I talked about in the keynote as well.
Fully expect that LLMs will get better; the question is what you can do when they do.
Manu:
Yeah, that's absolutely right. It's like how you couldn't have imagined Uber without the cloud, smartphones, Apple and Android launching their platforms, and Google Maps: a confluence of technologies allowed Uber to exist. Same with Airbnb, same with Instacart. And you wouldn't expect Apple to build against Uber or Instacart.
To Surojit's point, you need to really focus on what your customers' pain points are. Your large enterprise customers and consumers aren't going to rework their entire workflows themselves just because agents are easy to build; someone still has to train the agent, maintain it, and keep improving it.
It's almost like being a parent: your child has an amazing amount of capacity and brainpower, but you help them along as they grow. I think this is similar.
So, on that note, I think we're a good seven minutes over time. I know questions would keep coming in, but from the bottom of my heart, thank you, Surojit, for joining us again and answering some wonderful questions. On behalf of all our founders, my partners, and my support staff and team, I'd like to thank everyone for taking time out of your busy schedules to listen to us, and the amazing founders who went deep with us.
Surojit dropped some amazing knowledge. So without further ado, we're going to wrap up here. Wishing you all a wonderful weekend. Thank you all.
Thank you, Surojit.
Surojit:
Thank you so much. This was fun.
This transcript has been edited for clarity.