Without a doubt, Artificial Intelligence is the most intensively developed field of IT. Thanks to AI, devices can work out the right operating parameters on their own: from the automotive ABS system, through smartphones that recognise smiles so we can take a selfie at the best possible moment, to selecting parameters for metal-smelting processes. It’s all labelled “Artificial Intelligence”. Some people associate it with a brain that thinks the way people do. Unfortunately, that’s not quite right: the only resemblance is that Artificial Intelligence uses neural networks built from neurons loosely modelled on human ones. And that’s about it. Still, I will keep using references to humans to describe these phenomena.
I think we can leave the “How it works” technicalities for workshops organised for those truly interested in building such networks. Today, we’ll talk at the abstract level, namely about how such a network “thinks”.
The question in the title may seem a bit infantile. To show how a neural network “thinks”, we need an easily understandable example.
A short story about… an AI creature
Let’s imagine we have a creature with nine eyes. Four pairs of eyes look sideways, and one central eye looks forward. What’s more, the eyes are fixed, which means they can’t move in any direction. The creature, let’s call him Z, gets around on legs that allow him to walk forwards and rotate left or right. But he can’t walk backwards, which is obviously a serious burden. Z loves apples, but only red ones, as green and unripe ones make him feel sick.
Being a child creature
Z does not exactly know which eye sees what, so he keeps bumping into walls and—what’s worse—eating green apples.
When Z makes an error (by bumping into a wall or eating a green apple), the neurons receive information that the decision was wrong. When the creature eats a red apple, the neural network receives information that it was a very good decision. This feedback mechanism in neural networks is called the backpropagation method.
This way, Z will slowly learn which decisions are good for him and which are bad. This system of “rewards and punishments” means that at some point the creature makes fewer errors, though sadly they still happen. In such cases we say that the Artificial Intelligence is at a level of, say, 97%, which means that 3% of its decisions are wrong.
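The “rewards and punishments” idea can be sketched in a few lines of code. This is a minimal illustration, not the author’s implementation: it assumes a single-layer network with one weight per (eye, action) pair, and the names (actions, learning rate) are mine.

```python
import random

ACTIONS = ["forward", "rotate_left", "rotate_right"]
N_EYES = 9
LEARNING_RATE = 0.1  # illustrative value, not from the article

# weights[action][eye]: how strongly each eye's reading favours each action
weights = {a: [random.uniform(-1, 1) for _ in range(N_EYES)] for a in ACTIONS}

def decide(eye_readings):
    """Pick the action whose weighted sum of eye readings is highest."""
    scores = {a: sum(w * x for w, x in zip(weights[a], eye_readings))
              for a in ACTIONS}
    return max(scores, key=scores.get)

def learn(eye_readings, action, reward):
    """Reward > 0 (red apple): strengthen the weights that led to the action.
    Reward < 0 (bumping into a wall, green apple): weaken them."""
    for i, x in enumerate(eye_readings):
        weights[action][i] += LEARNING_RATE * reward * x
```

Repeat “sense, decide, learn” enough times and the weights drift towards decisions that earned rewards: the same trial-and-error loop Z goes through.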
Where does AI get its knowledge from?
Every neural network needs certain things to learn what we want it to learn. Those things are “data sets” the neural network can use to gain knowledge. Data sets come in two types:
The first is a ready-made data set, e.g. patients’ test results paired with the diseases detected. Thanks to this, we know what input data should be fed to the neural network (the equivalent of Z’s eyes) — here, the patients’ test results. We also know what the neural network’s response should be (what Z’s legs should do), i.e. which disease is detected based on that data.
The second set is the result of a simulation. In the video above, we created a virtual maze for Z with apples randomly scattered around. The simulation feeds sensor readings to the neural network, receives information about the creature’s movement in return, and then simulates that movement.
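The simulation loop described above can be sketched like this. The `network_decide` and `read_sensors` functions are placeholder stubs I made up to stand in for the real network and maze; only the loop structure reflects the text.

```python
import random

def network_decide(sensor_readings):
    # Placeholder for the neural network: picks a movement at random.
    return random.choice(["forward", "rotate_left", "rotate_right"])

def read_sensors(maze):
    # Placeholder: the distance each of Z's nine eyes sees (made-up values).
    return [random.random() for _ in range(9)]

def simulate(steps):
    maze = {}  # the virtual maze state (walls, apples) would live here
    movements = []
    for _ in range(steps):
        readings = read_sensors(maze)    # 1. simulation feeds sensor readings
        move = network_decide(readings)  # 2. network returns a movement
        movements.append(move)           # 3. simulation applies that movement
    return movements
```

Each pass through the loop produces one more training situation, which is why a simulation never runs out of data.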
In both cases, it’s important to have enough data. A simulation will always produce plenty. When you take real-world information, you need to make sure there is a sufficient amount of it. As a simplified rule of thumb, the minimum amount of data is taken to be
10^n — where n is the number of “decisions” your network can make.
In Z’s case, who can only walk forward or rotate left or right, n equals 3, so you need at least 1,000 pieces of data for the network to learn. In a simulation, there are definitely more such situations. Now think how much data you would need to detect 10 different diseases based on blood test results. Quite a lot, right?
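The rule of thumb above fits in a one-line helper (the function name is mine):

```python
def min_examples(n_decisions):
    # Minimum data points = 10 to the power of the number of decisions.
    return 10 ** n_decisions

print(min_examples(3))   # Z: forward, rotate left, rotate right -> 1000
print(min_examples(10))  # 10 diseases -> 10,000,000,000
```

The exponential growth is the point: each extra decision multiplies the data requirement by ten.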
After a few minutes of learning, Z is running around and eating red apples like crazy. In the meantime, he has learned which eye looks in which direction, and he has understood that the distances seen by the outermost eyes matter less than a wall seen by the centre eye.