The Google Gemini conspiracy theory
Google's latest AI product launch fiasco has sparked an intriguing conspiracy theory about the company's intentions and the future of generative AI.
- Google had to pull parts of its fancy Gemini AI model after it spat out inaccurate images and text.
- One theory is that Googlers are too woke and their biases have infected what could be a useful AI tool.
- There's another more extreme and interesting theory, though.
Google's latest AI product launch went so badly that there's now a conspiracy theory floating around claiming the fiasco was executed on purpose.
Just like an AI model's output, this theory is not really believable. But it says a lot about Google's current predicament, especially how exposed the company's Search business is to generative AI disruption.
To catch you up: Google had to pull part of its fancy new Gemini AI model after it spat out inaccurate pictures, including depicting Google cofounders Larry Page and Sergey Brin as Asian. Some of the text output was laughably bad, too.
One theory is that Googlers are too woke and their biases have infected what could be a powerful and useful AI model. For context, this is a company that previously tagged black people in photos as "gorillas," so it's been trying hard to avoid such racist outcomes in its AI work.
The other idea
The other theory is that Google is purposely messing up generative AI product launches because the company secretly doesn't want the tech to catch on.
"If we were conspiracy theorists, one could argue that Google's Gemini fiasco only further erodes our trust of the machines, further pushing out the timeline for genAI consumer adoption," Mark Shmulik, a top internet analyst at Bernstein, wrote in a research note this week.
Why is this extreme idea even floating around? Because this new technology is such a threat to Google's Search business.
The existing Google Search product is stuffed with ads at the top of results. There have been more and more of them over the years, and they've been less and less clearly marked as ads. The result has been more people clicking on ads and huge increases in revenue and profit. A purer genAI chatbot experience would leave fewer spots for ads. When an AI just answers your question with one thing and it's bang on, that's it. No need to click on anything, including ads.
In this future scenario, Google's main source of money might evaporate, at least for a few years while it scrambles to find something else that could replace what is probably the world's most profitable business.
I asked Google for comment on this conspiracy theory. I also asked how the company might handle a future where everyone's questions are answered directly and perfectly, without the need to click on ads or anything else.
A Google spokeswoman declined to comment and referred me to an internal memo CEO Sundar Pichai sent to employees this week. You can read about that here.
A paid subscription future
One potential answer is already out there. Google recently launched a paid subscription service that includes Gemini Advanced, a powerful AI chatbot, along with other Google goodies. It costs $20 a month.
Is this Google's future? Is the ad business over? Pichai has been talking up subscriptions as a business model for AI lately. And I've written before that the generative AI future will not be free.
Let's say 1 billion people subscribe to the Gemini Advanced package and pay $20 a month. That's $240 billion in revenue a year. That revenue probably wouldn't be as profitable as Google Search's, but it's still a lot.
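For what it's worth, here's the back-of-envelope math as a minimal sketch. The subscriber count and price are the hypothetical numbers above, not actual Google figures:

```python
# Back-of-envelope: hypothetical Gemini Advanced subscription revenue.
# These inputs are the article's illustrative assumptions, not Google data.
subscribers = 1_000_000_000   # 1 billion paying users (hypothetical)
price_per_month = 20          # USD per subscriber per month

annual_revenue = subscribers * price_per_month * 12
print(f"${annual_revenue / 1e9:,.0f} billion per year")  # -> $240 billion per year
```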