A New Chatbot Is a ‘Code Red’ for Google’s Search Business
Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.
Three weeks ago, an experimental chatbot called ChatGPT made its case to be the industry’s next big disrupter. It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.
Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread – the arrival of an enormous technological change that could upend the business.
For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chatbot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive has described the effort to respond as make or break for Google’s future.
ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chatbots, because of the many ways the technology could damage its business.
Google has spent several years working on chatbots and, like other Big Tech companies, has aggressively pursued artificial intelligence technology. Google has already built a chatbot that could rival ChatGPT. In fact, the technology at the heart of OpenAI’s chatbot was developed by researchers at Google.
Called LaMDA, or Language Model for Dialogue Applications, Google’s chatbot received enormous attention in the summer when a Google engineer, Blake Lemoine, claimed it was sentient. This was not true, but the technology showed how much chatbot technology had improved in recent months.
Google may be reluctant to deploy this new tech as a replacement for online search, however, because it is not suited to delivering digital ads, which accounted for more than 80% of the company’s revenue last year.
“No company is invincible; all are vulnerable,” said Margaret O’Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”
Because these new chatbots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of color. They can generate toxic language, including hate speech.
All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to take their chances with complaints in exchange for growth.
Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalize the company’s lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.
“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a startup that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”
Sundar Pichai, Google’s CEO, has been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times. Employees have also been tasked with building AI products that can create artwork and other images, similar to OpenAI’s DALL-E technology, which has been used by more than 3 million people.
From now until a major conference expected to be hosted by Google in May, teams within Google’s research, Trust and Safety, and other departments have been reassigned to help develop and release new AI prototypes and products.
As the technology advances, industry experts believe, Google must decide whether it will overhaul its search engine and make a full-fledged chatbot the face of its flagship service.
Google has been reluctant to share its technology broadly because, like ChatGPT and similar systems, it can generate false, toxic and biased information. LaMDA is available to only a limited number of people through an experimental app, AI Test Kitchen.
Google sees this as a struggle to deploy its advanced AI without harming users or society, according to a memo viewed by the Times. In one recent meeting, a manager acknowledged that smaller companies had fewer concerns about releasing these tools but said Google must wade into the fray or the industry could move on without it, according to an audio recording of the meeting obtained by the Times.
Other companies have a similar problem. In 2016, Microsoft released a chatbot, called Tay, that spewed racist, xenophobic and otherwise filthy language, and the company was forced to pull it from the internet almost immediately – never to return. In recent weeks, Meta took down a newer chatbot for many of the same reasons.
Executives said in the recorded meeting that Google intended to release the technology that drove its chatbot as a cloud computing service for outside businesses and that it might incorporate the technology into simple customer support tasks. It will maintain its trust and safety standards for official products, but it will also release prototypes that do not meet those standards.
It may limit those prototypes to 500,000 users and warn them that the technology could produce false or offensive statements. Since its release on the last day of November, ChatGPT – which can produce similarly toxic material – has been used by more than 1 million people.
“A cool demo of a conversational system that people can interact with over a few rounds, and it feels mind-blowing? That is a good step, but it is not the thing that will really transform society,” Zoubin Ghahramani, who oversees the AI lab Google Brain, said in an interview with the Times last month, before ChatGPT was released. “It is not something that people can use reliably on a daily basis.”
Google has already been working to enhance its search engine using the same technology that underpins chatbots like LaMDA and ChatGPT. The technology – a “large language model” – is not merely a way for machines to carry on a conversation.
Today, this technology helps the Google search engine highlight results that aim to directly answer a question you have asked. In the past, if you typed “Do aestheticians stand a lot at work?” into Google, it did not understand what you were asking. Now, Google correctly responds with a short blurb describing the physical demands of life in the skin care industry.
Many experts believe Google will continue to take this approach, incrementally improving its search engine rather than overhauling it. “Google Search is fairly conservative,” said Margaret Mitchell, who was an AI researcher at Microsoft and Google, where she helped to start its Ethical AI team, and is now at the research lab Hugging Face. “It tries not to mess up a system that works.”
Other companies, including Vectara and a search engine called Neeva, are working to enhance search technology in similar ways. But as OpenAI and other companies improve their chatbots – working to solve problems with toxicity and bias – the technology could become a viable replacement for today’s search engines. Whoever gets there first could be the winner.
“Last year, I was despondent that it was so hard to dislodge the iron grip of Google,” said Sridhar Ramaswamy, who previously oversaw advertising for Google, including Search ads, and now runs Neeva. “But technological moments like this create an opportunity for more competition.”