The Thingification of AI
The broken-gadget era is upon us.
This is Atlantic Intelligence, a limited-run series in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.
Recent weeks have seen the introduction of new consumer gadgets whose entire selling point is artificial intelligence. Humane, a company started by ex-Apple employees, released an “AI Pin” that a user wears like a boutonniere; it answers spoken questions, can recognize and comment on objects through its camera, and projects a rudimentary text display onto the wearer’s palm. At $699 with a $24 monthly fee, the device was positioned as a kind of smartphone replacement, though reviews have not been kind, calling the Pin slow, difficult to use, and error-prone.
Last week, my colleague Caroline Mimbs Nyce reported on the Rabbit R1, a less ambitious and more affordable handheld gadget that similarly presents an AI assistant as its entire selling point. Yet, like the AI Pin, it has severe issues: “It managed to speak a summary of a handwritten page when I asked, though only with about 65 percent accuracy,” Caroline writes. “I was able to use the gadget to order an acai bowl on DoorDash, although it couldn’t handle any customizations. (I wanted peanut butter.) And I never got Uber to work. (Though at one point, the device told me the request had failed when it in fact hadn’t, leaving me on the hook for a $9 ride I didn’t even take.)”
AI has its place in consumer hardware, of course. But for now, that place seems to be the device you’re reading this newsletter on, where services such as ChatGPT, Google Gemini, and Claude are a dime a dozen.
— Damon Beres, senior editor
I Witnessed the Future of AI, and It’s a Broken Toy
By Caroline Mimbs Nyce
This story was supposed to have a different beginning. You were supposed to hear about how, earlier this week, I attended a splashy launch party for a new AI gadget—the Rabbit R1—in New York City, and then, standing on a windy curb outside the venue, pressed a button on the device to summon an Uber home. Instead, after maybe an hour of getting it set up and fidgeting with it, the connection failed.
The R1 is a bright-orange chunk of a device, with a camera, a mic, and a small screen. Press and hold its single button, ask it a question or give it a command using your voice, and the cute bouncing rabbit on screen will perk up its ears, then talk back to you. It’s theoretically like communicating with ChatGPT through a walkie-talkie. You could ask it to identify a given flower through its camera or play a song based on half-remembered lyrics; you could ask it for an Uber, but it might get hung up on the last step and leave you stranded in Queens.
What to Read Next
- Things get strange when AI starts training itself: “Programs that teach and learn from one another could warp our experience of the world and unsettle our basic understandings of intelligence,” Matteo Wong writes.
P.S.
I recently revisited my colleague Kaitlyn Tiffany’s 2021 article about the “dead internet theory,” a conspiracy theory that has proven uncomfortably prescient about the generative-AI era. “Much of the ‘supposedly human-produced content’ you see online was actually created using AI, [a conspiracy theorist who uses the online handle] IlluminatiPirate claims, and was propagated by bots,” Kaitlyn wrote. Many of the theory’s specifics are well beyond the bounds of plausibility and good taste. Yet the web is indeed being stuffed with synthetic content these days—to the detriment of all.
— Damon