Okay, fine. This post doesn’t exactly fit the bill of “Artificial Un-Intelligence.”
That’s because this newly unveiled robot technology from Google, though in its early stages, represents a tremendous breakthrough. The tech giant is using AI language models to help robot hardware understand complex requests in complex environments.
That’s right: the AI modeling that powers tools like GPT-3 (and our robot content creator Artie) is improving the ability of robot assistants like Google’s PaLM-SayCan to successfully respond to requests like “I spilled my drink, can you help?”
Watch how it works:
What makes this different from other robots?
Google has been working with Everyday Robots since 2019, developing robots like PaLM-SayCan to perform everyday chores and tasks like cleaning a mess or fetching a snack. In similar robotics demos, robots of this type tend to respond only to a single, very direct request in a controlled environment, like “pick up this very deliberately placed object.”
Google says those types of bots are “painstakingly coded for narrow tasks” that hinder their ability to respond and adapt to unpredictability. They also struggle to complete multi-step tasks.
The engineers at Google are getting around this by using AI language models, which improve the robot’s ability to interpret requests.
In the same way that language helps humans reason through tasks by putting them into context, Google says the language model helps the robot “process more complex, open-ended prompts and respond to them in ways that are reasonable and sensible.” In essence, the language model helps the bot better understand the input and thus choose the right output.
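To make that concrete, here is a minimal sketch of SayCan-style skill selection. The scores and skill names are made up for illustration, not Google’s actual model outputs: the idea is that the language model rates how relevant each skill is to the request, an affordance model rates how feasible that skill is in the current scene, and the robot picks the skill with the highest combined score.

```python
# Hypothetical SayCan-style skill selection (illustrative numbers only).
instruction = "I spilled my drink, can you help?"

# Language-model relevance: how useful is each skill for the instruction?
llm_score = {
    "find a sponge": 0.50,
    "pick up the apple": 0.05,
    "go to the trash can": 0.30,
}

# Affordance: how likely is the robot to succeed at this skill right now?
affordance = {
    "find a sponge": 0.8,
    "pick up the apple": 0.9,
    "go to the trash can": 0.6,
}

def best_skill(llm_score, affordance):
    # Combine relevance and feasibility, then take the best candidate.
    combined = {skill: llm_score[skill] * affordance[skill] for skill in llm_score}
    return max(combined, key=combined.get)

print(best_skill(llm_score, affordance))  # prints "find a sponge"
```

The key design choice is that neither score alone is enough: the language model might suggest something the robot can’t physically do, and the affordance model might favor an easy action that doesn’t help. Multiplying the two keeps the plan both sensible and achievable.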
Is it working as intended?
Let’s check the data. Google reports:
“When the system was integrated with PaLM, compared to a less powerful baseline model, we saw a 14% improvement in the planning success rate, or the ability to map a viable approach to a task. We also saw a 13% improvement on the execution success rate, or ability to successfully carry out a task. This is half the number of planning mistakes made by the baseline method. The biggest improvement, at 26%, is in planning long horizon tasks, or those in which eight or more steps are involved. Here’s an example: ‘I left out a soda, an apple and water. Can you throw them away and then bring me a sponge to wipe the table?’”
When can you buy one?
Unfortunately, PaLM-SayCan is not for sale.
What does our robot content creator think?
We turn to Artie, our GPT-3 copywriter, every week to opine on topics like this. Here’s what Artie thinks about Google’s new robot:
“I think that Google is really onto something with their new robot. It seems like it could be a really useful tool for a lot of different things.”
Will Google’s robot be taking over the world anytime soon?
Not quite. PaLM-SayCan succeeds at these simple tasks, but not at far more complex ones.
Zac Stewart Rogers, an assistant professor of supply chain management at Colorado State University, put it well in a Washington Post article about the new tech: “A human and a robot working together is always the most productive” now, he said. “Robots can do manual heavy lifting. Humans can do the nuanced troubleshooting.”
Ah, there it is again. Invisible’s vision of the future of work is exactly that: humans and machines working together to get the most out of each other.
Interested in how we can leverage both humans and technology to help you meet business goals? Get in touch.
Tune in next week for more tech fails.