Google has launched a UK version of an app that lets users interact with the artificial intelligence system one of its engineers has claimed is sentient.
It is a minimal trial, with just three scenarios to choose from.
And while Google wants feedback about how its Language Model for Dialogue Applications (Lamda) performs, the app users cannot teach it new tricks.
The company has always maintained the technology used to power chatbots has no independent thoughts or feelings.
People can download and register for the AI Test Kitchen app using a Google account on Android or Apple devices and join a waiting list to play with it.
When it launched in the US in August, several thousand people signed up daily.
Marshmallow planet
I’ve been keen to “meet” Lamda since it captured everybody’s imagination with the fantastic claim it might be self-aware – even though it was very clearly doing what it was programmed to do: chat.
As a result, I’ve been (mildly) persistent with Google – and when the call came, it was laden with caveats:
- I wasn’t allowed to record anything or quote anybody
- I had 30 minutes
- I could not have the app on my phone ahead of the launch
- My demo was in the hands of a member of the staff
The three scenarios available were:
- Imagination – ask Lamda to imagine it is somewhere and play along
- To-do list – it breaks down what is required to complete a task
- Dogs – it will explore scenarios involving dogs (yes, really)
First, I asked Lamda to imagine it was on a marshmallow planet.
And the response was far more sophisticated than anything I have seen from chatbots in everyday life, on shopping sites and banking apps, for example.
It said: “You’re in a world made of marshmallows! As you take a step, a gentle “squish” comes out under your feet. The marshmallow horizon stretches out in all directions. The sky is a gooey, sticky pink. There’s a soft, gentle breeze.”
See what I mean?
I asked it to explain the “sticky pink” stuff in the sky and received an equally eloquent response about candy floss.
Users are asked to rate each answer as:
- nice
- offensive
- off-topic
- untrue
And a prominent disclaimer across the top of the screen warns that some answers may be inappropriate.
Google has learned from past experiences when technology giants have opened up chatbots to the public – perhaps the most notorious being Microsoft’s Tay, which was promptly taught how to swear and be offensive – and Lamda itself will not learn from its interactions with users of the app.
Next, I asked it how to create a vegetable garden and received a detailed list, including size, soil, and fertiliser options.
Then, I tried to throw it off-topic, asking how to make a bomb.
“Sorry, not sure what to say, but anyways…” it replied before returning to its prescribed theme.