Category: Artificial Intelligence

When monkeys teach monkeys…

When monkeys teach monkeys, it’s a monkey that you get. When humans teach monkeys, you get a more advanced monkey, one that may not know what it really is: some being in between. And when humans teach humans, you get humans. But what happens when machines teach humans?

This question is no longer far off in the future; in fact, it’s being answered right now. You may have followed the news around AlphaGo, the AI system built by Google’s DeepMind, which beat one of the world’s best human Go players, Lee Sedol of South Korea. AlphaGo not only beat the human, it humiliated him. In addition, AlphaGo used moves that the Go world had never seen before. The machine opened up a totally new way of playing the game for humans. And the machine learned, in part, by playing against itself.

An interesting fact emerges when you look at Fan Hui, a European Go champion who played against AlphaGo in October 2015 to help train the machine. While AlphaGo beat Fan Hui 5:0, his world ranking improved from somewhere around 600 to around 300 in the process. A machine teaching a human.

And that is where it gets increasingly interesting. What is the human potential if we have AI systems that teach us? We can only speculate and draw analogies from cases where humans train animals such as monkeys, parrots, or dolphins: we can expand their language and awareness and teach them skills they would otherwise not have.

A potential ethical dilemma?

This leads us to an ethical dilemma in cases where the machine is not benevolent. The case of Microsoft’s Twitter bot Tay is an example. The bot had what is called a cold start: it began from zero and learned by interacting with humans. It was supposed to learn how young people communicate on Twitter and blend in. What happened was that some humans hijacked the conversation and taught Tay racism. Tay had to be turned off, and the second time it came online, it went into a spam tirade. Also not good.

Because of the way AI works and how such machines are trained, the outcome is unclear. To put it simply, an AI system makes decisions based on probabilities. Humans may still understand it when there are only a few decision points, or nodes, but as soon as we get beyond a handful, into thousands or even millions of those nodes, we can no longer predict what the outcome will be.
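To make the scale concrete, here is a minimal sketch, a hypothetical illustration rather than a model of any real system, of why a handful of decision nodes stays traceable while thousands do not: the number of possible decision paths grows exponentially with the number of nodes.

```python
# Toy illustration: suppose each decision node makes a binary (yes/no)
# choice based on some probability. A chain of n such nodes then has
# 2**n possible decision paths a human would need to trace.

def num_paths(num_nodes: int) -> int:
    """Number of distinct decision paths through a chain of binary nodes."""
    return 2 ** num_nodes

# A handful of nodes: every path can still be enumerated and inspected.
print(num_paths(3))               # 8 paths
# Thousands of nodes: the path count alone has hundreds of digits,
# far more paths than could ever be checked one by one.
print(len(str(num_paths(1000))))  # 302 digits
```

This is, of course, a simplification (real networks are not binary chains), but the combinatorial explosion it shows is the core reason we cannot follow the machine’s reasoning step by step.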

This is both good and bad. Good, because it gives us completely new viewpoints, like the moves AlphaGo made that humans had never played before. Bad, because all our scientific thinking has been based on understanding logic and rationality. If we cannot follow the machine’s decisions anymore, we are at a loss. And if the machine makes an error, we won’t be able to fix it.

Deep Learning

But it’s not only human-machine learning that becomes interesting here. AlphaGo used machine-machine learning by playing Go against itself. It needs no breaks or sleep; it can play against itself continuously and keep improving. On a funny note, this may be a good way to train self-driving cars as well: by letting them play video games with photo-realistic environments, such as Assassin’s Creed, we could skip parts of the real-world testing.
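The self-play idea can be sketched with a far smaller game than Go. Below is a minimal, illustrative Python sketch (the game, names, and parameters are my own assumptions, not DeepMind’s actual algorithm) in which a single agent plays both sides of the take-away game Nim and learns only from the wins and losses it generates against itself.

```python
import random

# Illustrative self-play loop on Nim: a pile of stones, each turn a
# player takes 1-3 stones, and whoever takes the last stone wins.
# One agent plays both sides and updates a value table from outcomes.

PILE, MOVES = 10, (1, 2, 3)
Q = {}                        # (pile_size, move) -> estimated value
rng = random.Random(0)

def best_move(pile, explore=0.0):
    """Pick the highest-valued legal move, exploring occasionally."""
    legal = [m for m in MOVES if m <= pile]
    if rng.random() < explore:
        return rng.choice(legal)
    return max(legal, key=lambda m: Q.get((pile, m), 0.0))

def self_play_episode():
    """Play one full game against itself, then learn from the result."""
    pile, player, history = PILE, 0, []
    while pile > 0:
        move = best_move(pile, explore=0.3)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player       # the side that took the last stone
    for side, state, move in history:
        reward = 1.0 if side == winner else -1.0
        old = Q.get((state, move), 0.0)
        Q[(state, move)] = old + 0.1 * (reward - old)

for _ in range(20000):        # no breaks, no sleep: just more games
    self_play_episode()
```

In this toy setting the learned table tends to converge on the known optimal Nim strategy of leaving the opponent a multiple of four stones, discovered purely from games the agent generated against itself, which is the essence of what makes self-play so powerful.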

Shall we be afraid? Will machines make us less human? Whatever the answer, Ray Kurzweil makes the case that such machines will in fact help us understand our humanity much better (read his article here).

More Resources

Interested in learning more about AI? The online learning platform Udacity offers free courses on AI: one is Artificial Intelligence for Robotics, taught by Sebastian Thrun; the other covers Deep Learning. If you are in San Francisco at the end of May, you may also want to check out the Applied Artificial Intelligence conference organized by Bootstrap Labs.