The gaggle of Google employees peered at their computer screens in bewilderment. They had spent many months honing an algorithm designed to steer an unmanned hot air balloon all the way from Puerto Rico to Peru. But something was wrong. The balloon, controlled by its machine mind, kept veering off course.
Salvatore Candido of Google's now-defunct Project Loon venture, which aimed to bring internet access to remote areas via the balloons, couldn't explain the craft’s trajectory. His colleagues manually took control of the system and put it back on track.
It was only later that they realised what was happening. Unexpectedly, the artificial intelligence (AI) on board the balloon had learned to recreate an ancient sailing technique first developed by humans centuries, if not thousands of years, ago. "Tacking" involves steering a vessel into the wind and then angling outward again, so that it can still make zig-zag progress roughly in the desired direction.
Under unfavourable weather conditions, the self-flying balloons had learned to tack all by themselves. The fact they had done this, unprompted, surprised everyone, not least the researchers working on the project.
"We quickly realised we'd been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru," wrote Candido in a blog post about the project. "I had never simultaneously felt smarter and dumber at the same time."
This is just the sort of thing that can happen when AI is left to its own devices. Unlike traditional computer programs, AIs are designed to explore and develop novel approaches to tasks that their human engineers have not explicitly told them about.
But while learning how to do these tasks, sometimes AIs come up with an approach so inventive that it can astonish even the people who work with such systems all the time. That can be a good thing, but it could also make things controlled by AIs dangerously unpredictable – robots and self-driving cars could end up making decisions that put humans in harm's way.
The artificial intelligence that controlled the hot air balloons of Project Loon learned a sailing technique to tack into the wind (Credit: Loon)
How is it possible for an AI system to "outsmart" its human masters? And might we restrain machine minds in some way, to ensure that some unforeseen disaster does not occur?
In the AI community, there's one example of AI creativity that seems to get cited more than any other. The moment that really got people excited about what AI can do, says Mark Riedl at the Georgia Institute of Technology, is when DeepMind showed how a machine learning system had mastered the ancient game Go – and then beat one of the world's best human players at it.
"It ended up demonstrating that there were new strategies or tactics for countering a player that no one had really ever used before – or at least a lot of people did not know about," explains Riedl.
And yet even this, an innocent game of Go, provokes different feelings among people. On the one hand, DeepMind has proudly described the ways in which its system, AlphaGo, was able to "innovate" and reveal new approaches to a game that humans have been playing for millennia. On the other hand, some questioned whether such an inventive AI could one day pose a serious risk to humans.
"It's farcical to think that we will be able to predict or manage the worst-case behaviour of AIs when we can't actually imagine their probable behaviour," wrote Jonathan Tapson at Western Sydney University after AlphaGo's historic victory.
The important thing to remember, says Riedl, is that AIs don't really think like humans. Their neural networks are indeed loosely inspired by animal brains but they might be better described as "exploration devices". When they attempt to solve a task or problem, they don't bring many, if any, preconceptions about the wider world with them. They simply try – sometimes millions of times – to find a solution.
"We humans bring in a lot of mental baggage with us, we think about the rules," says Riedl. "AI systems don't even understand the rules so they poke at things very randomly."
In this way, AIs could be described as the silicon equivalent of people with savant syndrome, adds Riedl, referring to the condition in which a person has a serious mental disability but also possesses an extraordinary skill, usually related to memory.
One way that AIs can surprise us involves their ability to tackle radically different problems using the same basic system. Recently, a machine learning tool designed to generate paragraphs of text one word at a time was asked to perform a very different function: play a game of chess.
The system in question is called GPT-2 and was created by OpenAI. Trained on millions of online news articles and web pages, GPT-2 can predict the next word in a sentence based on the preceding words. Since chess moves can be written as short strings of letters and numbers ("Be5" to move a bishop to the e5 square, for example), developer Shawn Presser thought that if he trained the algorithm on records of chess matches instead, the tool could learn how to play the game by figuring out desirable sequences of moves.
Presser trained the system on 2.4 million chess games. "It was really cool to see the chess engine come to life," he says. "I wasn't sure it would work at all." But it did. It's not as good as specially designed chess computers – but it's capable of playing tough matches successfully.
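The basic recipe can be sketched in a few lines of Python. The snippet below is an illustration of the general approach rather than Presser's own code: it treats each recorded game as a line of text in standard chess notation and fine-tunes GPT-2 to predict the next token, which in effect means predicting the next move. It assumes the open-source Hugging Face "transformers" library and a hypothetical training file, games.txt, with one game per line.

```python
# Illustrative sketch only – not Presser's code. Fine-tune GPT-2 so that,
# given the moves played so far, it predicts the tokens (and so the moves)
# that tend to follow in real games.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
with open("games.txt") as f:              # hypothetical file, one game per line,
    for game in f:                        # e.g. "e4 e5 Nf3 Nc6 Bb5 a6 ..."
        game = game.strip()
        if not game:
            continue
        ids = tokenizer(game, return_tensors="pt",
                        truncation=True, max_length=512)["input_ids"]
        # With the labels set to the inputs, the model learns to predict
        # each next token of the game record.
        loss = model(ids, labels=ids).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# At play time, feed in the moves so far and sample a continuation.
model.eval()
prompt = tokenizer("e4 e5 Nf3", return_tensors="pt")["input_ids"]
out = model.generate(prompt, max_new_tokens=6, do_sample=True)
print(tokenizer.decode(out[0]))
```

A working engine built this way would also need to check that each sampled move is legal and resample when it is not, because the language model itself has no notion of the rules of chess.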
Presser says his experiment shows that the GPT-2 system has many unexplored capabilities. A savant with a gift for chess.
A later version of the same software astounded web designers when a developer briefly trained it to spit out code for displaying items on a web page, such as text and buttons. The AI generated appropriate code even though all it had to go on was simple descriptions like "red text that says 'I love you' and a button with 'ok' on it". Clearly, it had got the basic gist of web design but only after surprisingly little training.
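That demo relied on a far larger successor model served through OpenAI's own system, so the snippet below is only a sketch of the underlying trick of priming a text generator with a description-to-code example and letting it complete a new one. It uses the freely available GPT-2 as a stand-in, purely for illustration; it would not come close to the quality of the demo.

```python
from transformers import pipeline

# GPT-2 as a stand-in for the much more capable model used in the demo.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "description: a green button that says 'submit'\n"
    "html: <button style=\"background: green\">submit</button>\n\n"
    "description: red text that says 'I love you' and a button with 'ok' on it\n"
    "html:"
)

# The model simply continues the text; given a strong enough model, the
# continuation is working markup that matches the new description.
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```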
With artificial intelligence starting to be used in the real world, it is important to know if it is going to do anything unexpected (Credit: Nicholas Kamm/Getty Images)
One arena where AIs have long impressed is video games. There are countless anecdotes in the AI community about surprising things algorithms have done in virtual environments. Video game-like spaces are often where algorithms are tested and honed, to see just how capable they really are.
In 2019, OpenAI made headlines with a video about a hide-and-seek game played by machine learning-controlled characters. To the researchers' surprise, seekers in the game eventually learned that they could jump on top of items and "surf" them to get access to the enclosures where the hiders were cowering. In other words, the seekers learned to bend the rules of the game to their advantage.
A strategy of trial and error can result in all kinds of interesting behaviours, but they do not always lead to success. Two years ago, DeepMind researcher Victoria Krakovna asked readers of her blog to share stories about times when AIs had solved tricky problems – but in unexpected, and sometimes unacceptable, ways.
The long list of examples she collated is fascinating. Among them is a game-playing algorithm that learned to kill itself at the end of level one – to avoid dying in level two. The objective of not dying in level two was achieved, just not in a particularly impressive way. Another algorithm discovered that it could jump off a cliff in a game and take an opponent with it to its doom. That gave the AI enough points to gain an extra life so that it could keep repeating this suicidal tactic in an infinite loop.
Video game AI researcher Julian Togelius at the New York University Tandon School of Engineering can explain what's going on here. He says these are classic examples of "reward allocation" errors. When an AI is asked to accomplish something, it may uncover strange and unexpected methods of achieving its goal, where the end always justifies the means. We humans rarely take such a stance. The means, and the rules that govern how we ought to play, matter.
Togelius and his colleagues have found that this goal-oriented bias can be exposed in AI systems when they are put to the test under special conditions. In recent experiments, his team found that a game-playing AI asked to invest money at a bank would run to a nearby corner of the virtual bank lobby and wait to receive a return on the investment. Togelius says the algorithm had learned to associate running to the corner with getting a financial reward, even though there was no actual relationship between its movement and how much was paid out.
This, says Togelius, is a bit like an AI developing a superstition: "You got a reward or a punishment for something – but why did you get it?"
This is one of the pitfalls of "reinforcement learning", in which an AI learns by trial and error from the rewards it encounters in its environment and can end up devising a wrong-headed strategy as a result. The AI doesn't know why it succeeded; it can only base its actions on learned associations. A bit like early human cultures that began to associate rituals with changes in the weather.
Or, pigeons. In 1948, the American psychologist BF Skinner published a paper describing an unusual experiment in which he placed pigeons in enclosures and gave them food rewards at intervals that had nothing to do with their behaviour. The pigeons began to associate the food with whatever they happened to be doing at the time – be it flapping their wings or performing a dance-like motion – and then repeated these behaviours, seemingly expectant that a reward would follow.
There is a big difference between the in-game AIs tested by Togelius and Skinner's pigeons, but Togelius suggests the same basic phenomenon is at work: a reward becomes mistakenly associated with a particular behaviour.
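To see how easily such a false association can arise, consider the toy value-learning agent sketched below. It is an illustration of the general phenomenon rather than of Togelius's experiments (the action names are made up): the environment pays out on a fixed timer, yet the agent still ends up crediting whatever it happened to be doing.

```python
import random

# Toy sketch of a "superstitious" learner: the reward arrives every fifth
# step no matter what the agent does, but the agent still attributes it
# to its own actions.
actions = ["run_to_corner", "stand_still", "spin_around"]
value = {a: 0.0 for a in actions}   # the agent's estimate of each action's worth
alpha = 0.1                         # learning rate

for step in range(1, 10_001):
    # Epsilon-greedy: usually repeat the action currently rated highest,
    # occasionally try something else.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value, key=value.get)

    # The environment pays out on a timer and ignores the action entirely.
    reward = 1.0 if step % 5 == 0 else 0.0

    # Standard update: nudge the chosen action's value toward the reward seen.
    value[action] += alpha * (reward - value[action])

print(value)
# All three estimates drift towards the average payout (about 0.2), because
# the agent has no way of telling that its behaviour is irrelevant.
```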
While AI researchers may be surprised at the paths taken by machine learning systems, that doesn't necessarily mean they are in awe of them. "I never have any feeling that these agents have minds of their own," says Raia Hadsell at DeepMind.
Pigeons can learn to associate food with certain behaviours, and AIs can display a similar kind of conditioning (Credit: Binnur Ege Gurun Kocak/Getty Images)
Hadsell has experimented with many AIs that have found interesting and novel solutions to problems that neither she nor her colleagues predicted. She points out that this is exactly why researchers develop AIs in the first place – so that they can achieve things humans can't manage on their own.
And she argues that products using AI, such as self-driving cars, can be rigorously tested to ensure that any unpredictability is within certain acceptable limits.
"You can give reasonable guarantees on behaviour that are based on empirical evidence," she says.
Time will tell whether all companies selling products built with artificial intelligence are scrupulous on this point. In the meantime, it is worth noting that AIs demonstrating unexpected behaviours are by no means confined to research environments. They are already working their way into commercial products.
Last year, a robot arm developed by the US firm Covariant and working at a factory in Berlin came up with unexpected ways of sorting items as they passed by on a conveyor belt. Despite not being specifically programmed to do so, the AI controlling the arm learned to aim for the centre of items in transparent packaging, which helped guarantee that it would pick them up successfully each time. Because see-through packages can visually blur together where they overlap, aiming at a less precise spot meant the robot risked failing to pick an item up.
"It avoids overlapping corners of objects, and instead aims for easiest pickable surface," says Covariant co-founder and chief executive Peter Chen. "It really surprised us."
Separately, Hadsell says her team recently experimented with a robot arm that passes different blocks through shape-sorting holes. The robot's gripping hand was rather clumsy, so rather than fiddling with a block in the gripper, the AI controlling the arm learned to repeatedly pick the block up and drop it until it landed in the right position, then seize it and pass it easily through the appropriate hole.
All of this illustrates a point made by Jeff Clune at OpenAI, who recently collaborated with colleagues around the world to collect examples of AIs that have developed clever solutions to problems. Clune says that the exploratory nature of AI is fundamental to its future success.
"As we're scaling up these AI systems, what we're seeing is that the things they are doing that are creative and impressive are no longer academic curiosities," he says.
As AIs find better ways to diagnose disease or deliver emergency supplies to people, they'll even save lives thanks to their ability to find new ways to solve old problems, adds Clune. But he thinks those who develop such systems need to be open and honest about their unpredictable nature, to help the public understand how AI works.
It is, after all, a double-edged sword – the very promise and threat of AI all wrapped up in one. Whatever will they think of next?