What is good? And what is evil? Take a moment to think about how you define those two terms and how you learned to distinguish them.
This is not a trivial question. In fact, people have been thinking about the concepts of good and evil, and how to define them, for thousands of years. Every religion has found its very own definitions and metaphors. In Christianity, for example, the Bible clearly defines who is good and who is evil: God and Jesus, as well as all who follow their rules for living, are good. All who turn away from God, reject the Ten Commandments, or let themselves be seduced by the devil are evil. Being evil, however, is not an irreversible path to ruin: the Bible teaches that true regret for one's evil deeds and the forgiveness of sins through God can lead back to the “good” way.
In Buddhism, the distinction between good and evil is much less sharp than in Christianity. Ask yourself: is a person who kills someone else but truly regrets it afterwards fundamentally good or evil? Is a person who steals regularly to feed their children good or evil? In Buddhism, a person is not good or evil per se; only their actions are. And the actions we put into practice will eventually fall back upon us in the future – this is called Karma.
Now what does this have to do with computers? Imagine you create a robot that has awareness, a mind. Imagine further that this robot asks itself fundamental questions like: Who am I? Where do I come from? How would this robot be able to learn about the concepts of good and evil? Of course, we could simply teach it what we consider to be good and evil, but wouldn’t that automatically restrict the robot’s potential? Maybe there is a much better definition or concept of good and evil – but if we teach a robot what good and evil are, it will never be able to find a (possibly) better approach.

This is exactly the situation we are facing in the field of Artificial Intelligence (AI). AI is mostly driven by Machine Learning (ML) at the moment, and ML needs a lot of data to learn. Machine Learning is in fact nothing more than finding the right parameters in an equation to calculate a result. In supervised machine learning, we give a computer some input (for example, lots of images of dogs) together with the expected output (the tag “dog” on these images). In unsupervised learning, by contrast, we don’t tell the algorithm which outcome we expect; we just give it some input data, and the computer tries to cluster this information. Here’s an image to visualize this: the top row represents supervised learning, the bottom row unsupervised learning. Note that in unsupervised learning, the algorithm is able to cluster images of the same species. However, it’s not able to put the tag “dog” on them, as it was never taught this (in supervised learning, it was).
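The contrast between the two settings can be sketched in a few lines of code. This is a minimal toy example, not a real image pipeline: the 2-D points, labels, and function names are all invented for illustration. The supervised part learns from labeled examples and can answer with the tag “dog”; the unsupervised part sees the same points without labels and can only separate them into anonymous clusters.

```python
def mean(points):
    """Component-wise mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: inputs come WITH the expected output (a label). ---
labeled = [((1.0, 1.2), "dog"), ((0.8, 1.0), "dog"),
           ((5.0, 5.2), "giraffe"), ((5.3, 4.9), "giraffe")]

# "Training" here is just computing one centroid per label.
centroids = {}
for label in {lab for _, lab in labeled}:
    centroids[label] = mean([p for p, lab in labeled if lab == label])

def classify(point):
    """Predict the label whose centroid is nearest."""
    return min(centroids, key=lambda lab: dist2(point, centroids[lab]))

print(classify((0.9, 1.1)))  # a new point near the "dog" examples

# --- Unsupervised: the same points, but NO labels are given. ---
unlabeled = [p for p, _ in labeled]

# One crude clustering pass: group each point with its nearest seed.
seeds = [unlabeled[0], unlabeled[2]]
clusters = [[], []]
for p in unlabeled:
    nearest = 0 if dist2(p, seeds[0]) < dist2(p, seeds[1]) else 1
    clusters[nearest].append(p)

# The algorithm separates the two species, but can only call them
# "cluster 0" and "cluster 1" -- it has no concept of "dog".
print([len(c) for c in clusters])
```

The unsupervised half ends up with the same grouping as the supervised half; the only thing it is missing is the name of each group, which is exactly the point made above.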
Now here’s the thing: although it looks as if the computer actually “thinks” about how it arranges the images in unsupervised learning, it does not. In the end, the computer only compares pixel values, recognizes that giraffes typically have an elongated feature (the neck, obviously), and therefore pushes all the images with this feature into one group. People faced with the same grouping problem classify these images differently: they typically attach emotions to pictures and think of giraffes not merely as “elongated features”, but rather see Africa, a safari, or other animals that share the giraffe’s habitat in their mind’s eye. And this is of fundamental importance: Machine Learning means adjusting the parameters of a fixed formula. Thinking would mean changing and adapting this formula on a regular basis. If we want to get further with Artificial Intelligence, the question arises whether we should move from Machine Learning to Machine Thinking. Machine Thinking could mean, for example, a program adapting its own source code in order to improve. Machine Thinking is all about reacting to the environment and adapting. This sounds similar to Reinforcement Learning (RL), but in RL a human still has to tell the algorithm what goal to reach and which actions help to achieve it.
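The phrase “adjusting the parameters of a fixed formula” can be made concrete with a tiny sketch. Assume, for illustration, a formula y = w·x + b whose structure a programmer has fixed in advance; gradient descent on a few invented data points (generated from y = 2x + 1) may only tune the two numbers w and b, never the formula itself. The learning rate and iteration count below are arbitrary choices, not prescriptions.

```python
# Toy data drawn from the "true" relation y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # the only things the machine is allowed to adjust
lr = 0.05         # learning rate, chosen by a human

# Gradient descent on the mean squared error of y = w*x + b.
for _ in range(2000):
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# The structure y = w*x + b was fixed by the programmer; the machine
# cannot decide on its own to try a different kind of formula.
print(round(w, 2), round(b, 2))
```

After training, w and b settle near 2 and 1. Everything the machine “learned” lives in those two numbers; the step from here to Machine Thinking would be letting the program change the shape of the formula itself.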
The ultimate goal would be a computer that could answer questions like “What is evil?” without having to learn the answer from humans first. This sounds like “Deep Thought”, the supercomputer in The Hitchhiker’s Guide to the Galaxy – and in fact, that is the goal: a thinking computer. Even though we are not there yet, Machine Learning and especially Reinforcement Learning are already big steps in the right direction. With Machine Thinking, we could bring AI to the next level.