How to create a brain!
So, you want to create a brain?
There are a few things you already need before starting this tutorial:
- Interest
- the above
And by a brain, I actually mean the functional brain. In this tutorial, we will see how you can make your very own, functioning brain. Or at least the idea behind it.
To create one, we first need to look at the fundamental building blocks of any nervous system.
(If you already think you know enough about neurons, excitatory and inhibitory postsynaptic potentials, you can skip this part. Just look at the final mathematical representation of the neuron and get going!)
The neuron
Great things are done by a series of small things brought together.
-Vincent Van Gogh
A neuron is a cell. Just like any other cell in the body. But it has a property that makes it a little special.
Can it think? Can it imagine? Does it know it exists? Does it know that it is currently processing information about itself? Most probably not. (You can never be certain about metaphysical questions like these.)
So what makes it so special?
It conducts electricity in only one direction. And when it is stimulated enough, it fires a charge of its own to stimulate other neurons.
You may say, “But! How can that alone be the cause for so much that our brain can do?!”. Wires conduct, muscle fibers conduct (that’s right!), and even your skin conducts, right?
Neurons conduct one way
But what makes it special is that when many such cells come together, they can perform functions that none of them can alone. Just like how atoms, which are 99.9% empty space, come together to form the solid matter that we know and touch. (If you are hyper geeky and claim that the probability of an electron existing somewhere in space does not equate to empty space, I am sorry, but… come on, you get the picture!)
Forget about the labels
The connection one neuron has with another is called a “synapse” (SIN-aps). When one neuron is excited enough, it transmits impulses along its “axon” to other neurons. This can either excite or inhibit the postsynaptic neuron (a fancy term for the receiving neuron).
The postsynaptic neuron likewise receives such connections from many other neurons. As many as thousands! And each of these can be excitatory or inhibitory.
Not only that, but HOW MUCH excitation or inhibition a synapse causes depends on its location. There is a place called the “axon hillock” where a synapse is most effective at what it does, causing excitation or inhibition, and there are the peripheries, where it is less so.
When the postsynaptic neuron is excited enough, it fires! Causing the same process all over again.
Now, “HOW MUCH” the neuron has a potential to excite, let’s call it the “weight” of the synapse. Inhibitory synapses have negative weights.
And how much the neuron must be excited to fire, let’s call it its threshold.
So, in summary, functionally, a neuron is just something that adds up all of the inputs it receives and fires if the sum is greater than its threshold. Mathematically, we represent it as below:
Mathematical representation of a neuron:

output = f(w1·x1 + w2·x2 + w3·x3 + …) = f(Σ wi·xi)

The w1, w2, w3 represent the weights and x1, x2, x3 the inputs from other neurons (1 if it fires, 0 if it does not). The sigma function (Σ) therefore just adds up the weights of the activated neurons.
The activation function (f) is such that the output is 1 if the sum is above the threshold and 0 if it is not.
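This functional picture of a neuron fits in a few lines of Python. A minimal sketch; the function name and the example weights below are my own, not from the figure:

```python
def neuron(inputs, weights, threshold):
    # Sigma: add up the weights of the activated inputs
    total = sum(w * x for w, x in zip(weights, inputs))
    # Activation function f: fire (1) if the sum is above the threshold
    return 1 if total > threshold else 0

# Two excitatory synapses (weight 0.6 each) and one inhibitory one (weight -0.5)
print(neuron([1, 1, 0], [0.6, 0.6, -0.5], 1))  # 1.2 > 1, so it fires -> 1
print(neuron([1, 1, 1], [0.6, 0.6, -0.5], 1))  # 0.7 <= 1, so it stays silent -> 0
```

Notice how activating the inhibitory input drags the sum below the threshold and silences the neuron.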
A simple Neural Network
Now that we have the basics of how a neuron functions, let’s jump straight into the big question. How does our brain do what it does? How can it associate so many things at once and give out complicated responses in a fraction of a second?
To understand how it does that let’s start with something simple, a reflex action and climb our way up.
A reflex pathway
(Note for geeks: Yeah, sorry about the sensory neuron relaying in the anterior horn instead of the posterior, the absence of a pseudounipolar dorsal root ganglion neuron and… does it EVEN matter?)
When a sensory neuron (purple) is stimulated, here in this figure by the candle flame, a signal is sent to the neuron supplying the muscle (blue). There is also a relay neuron (green) that sends a signal up to the brain (and sends inhibitory signals to the antagonistic muscle), but let’s ignore that for now. Representing this in a simple neural net below:
We can see that we don’t have weights yet, but because it is a simple circuit and all the signals are excitatory, let’s assume all the weights are 1.5 and set the threshold of each neuron to 1, so that 1.5 is MORE than enough to set it off.
This means everything is just excitatory, and whenever the input is present, you get an output. Simple, right?
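In code, the whole reflex is a single synapse with weight 1.5 feeding a neuron with threshold 1. A sketch reusing the summing-neuron idea from earlier; the variable names are mine:

```python
def neuron(inputs, weights, threshold):
    # Fire (1) if the weighted sum of inputs is above the threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# One excitatory synapse, weight 1.5, threshold 1
flame_on_skin = 1
print(neuron([flame_on_skin], [1.5], 1))  # 1.5 > 1 -> 1: the muscle fires
print(neuron([0], [1.5], 1))              # no stimulus -> 0: nothing happens
```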
A more complicated network
Now let’s look at a slightly more complex reflex.
Yes, the “reflex” to itch.
This one is a bit more complex because the body responds differently depending on where the itch is. Say there is some insect on your back, slightly to the right of the midline: your right hand goes for the itch. If it is slightly to the left, however, there is a huge change in response: the left hand goes for the itch. The itch reflex is actually a spinal reflex, and it is functional even with just the spinal cord intact. It therefore requires a more intricate pathway, because both the feeling of the itch and the side of the itch need to be inputs for the output (whether to scratch at all, and towards which side).
Let’s first see a neural network for the pathway to scratch on the right side with the right hand due to a right side itch:
The inputs are the itch itself and the touch sensation on the right side, caused by whatever the itch is due to. These inputs go into sensory neurons that synapse with a motor neuron, and this motor neuron moves the right hand.
Now observe the weights of the synapses. The threshold is 1 (T = 1), but the weight of each synapse is only 0.8.
This means one neuron alone can’t fire the motor neuron. Both of them together, though, can set it off, because the sum of the weights (0.8 + 0.8 = 1.6) is greater than the threshold (1). We have effectively created what is known as a logical AND gate, which requires both inputs to be True for the result to be True.
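Here is that AND gate as a quick Python sketch (same summing neuron as before; the input names are mine):

```python
def neuron(inputs, weights, threshold):
    # Fire (1) if the weighted sum of inputs is above the threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# AND gate: each synapse weighs 0.8, the threshold is 1
for itch in (0, 1):
    for right_touch in (0, 1):
        fired = neuron([itch, right_touch], [0.8, 0.8], 1)
        print(itch, right_touch, "->", fired)  # fires only when both inputs are 1
```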
Now let’s include the other side as well. (The transparency of a synapse indicates its weight.) Each neuron can also be called a node.
This looks slightly more complicated but is just two AND gates combined. When there is an itch and the left-side touch receptors are activated, nodes 2 and 3 (sensory neurons) get activated respectively. This causes a total synaptic potential of 0.8 at the “R” motor neuron (below the threshold of 1) and a total of 1.6 at the “L” motor neuron, which is above the threshold and sets it off. As a result, the left hand’s muscles get activated.
For the sake of simplicity and scalability, let us represent all possible synapses in this circuit, and give Weight = 0 to those that don’t exist, effectively nullifying their existence. This is so that we can expand our circuit to do a lot more than the “normal itch response”.
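The full circuit, with every possible synapse written out and weight 0 for the absent ones, can be sketched like this. I am assuming a node order from the figure (node 1 = right touch, node 2 = itch, node 3 = left touch); the helper names are mine:

```python
def neuron(inputs, weights, threshold):
    # Fire (1) if the weighted sum of inputs is above the threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Every sensory node synapses on both motor neurons; absent synapses get weight 0
weights_R = [0.8, 0.8, 0.0]  # R motor neuron listens to: right touch, itch
weights_L = [0.0, 0.8, 0.8]  # L motor neuron listens to: itch, left touch
THRESHOLD = 1

def scratch(right_touch, itch, left_touch):
    sensory = [right_touch, itch, left_touch]
    return (neuron(sensory, weights_R, THRESHOLD),   # right hand
            neuron(sensory, weights_L, THRESHOLD))   # left hand

print(scratch(1, 1, 0))  # itch on the right -> (1, 0): right hand scratches
print(scratch(0, 1, 1))  # itch on the left  -> (0, 1): left hand scratches
```

Listing the zero-weight synapses explicitly is what makes the circuit scalable: expanding its behaviour later is just a matter of changing numbers in the weight lists.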
Congratulations! You have built your first scalable neural net!
Now for the exciting part. What happens when we can tweak these weights? What happens then?
Just changing these weights slightly can have a huge impact on how the circuit behaves for a particular input.
Let’s start our tweaking…
We changed the weight of the synapse from the 3rd sensory node to the R motor node from 0 to 0.8. So, what happens? The normal itch response still works, but along with it, something funny occurs. When the touch receptors on both sides of the body are touched, that alone activates the R motor node. So the person now scratches with the right hand if you touch him on both sides of the back! Just a small change, but a significant difference in response.
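The tweak is a one-number change in the sketch from before (again assuming node order: right touch, itch, left touch):

```python
def neuron(inputs, weights, threshold):
    # Fire (1) if the weighted sum of inputs is above the threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

weights_R = [0.8, 0.8, 0.8]  # node 3 -> R tweaked from 0 to 0.8
weights_L = [0.0, 0.8, 0.8]  # unchanged

both_sides_touched = [1, 0, 1]  # touch on both sides, no itch at all
print(neuron(both_sides_touched, weights_R, 1))  # 1: the right hand scratches!
print(neuron(both_sides_touched, weights_L, 1))  # 0: the left hand stays put
```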
So, let’s say we want to create a response where the person scratches on both sides, even if one side itches. Done!
By tweaking these weights like this, you have effectively created an “intelligent” circuit that can be programmed to scratch any side based on any input of data, from the sensory neurons. Want to scratch on the right if someone touches on the left side? Sure, just change some weights!
What we have here is a sort of intelligence. Nowhere near the capacity of a tapeworm, let alone a human brain, and this neural net is just 1 layer deep, meaning there is only one layer of neurons between the input and output layers. Yet we can achieve a lot even with this neural circuit. With deep neural networks, the possibilities are almost limitless. One example is how Google recognizes your voice almost always correctly in its mobile Assistant software. This field of using artificial neural networks to make machines intelligent is called Deep Learning, a subdivision of machine learning that has been gaining a lot of attention recently.
These deep neural nets can identify images and even paint them (below is an example)! Recently one even composed its own piece of music. They are now almost capable of understanding natural human language, and of diagnosing diseases more accurately than the average human doctor. One even recreated a Nobel Prize-winning physics experiment in ONE HOUR, which took us humans years! Another defeated the international champion at Go, the game claimed to be the most difficult to master and said to require “intuition” to win. The era of Artificial Intelligence is now, thanks to computers fast enough to crunch the data needed for realistic models of the brain, and to more efficient algorithms.
This post just covered the basics of Neural Circuits.
Coming up next, let’s see how we can use these deep neural networks to recreate how our brain perceives an image, and use that to identify and classify actual images with a computer.
Stay tuned for How to create a brain — Part 2.