Thursday, November 7, 2013

A Step Towards Human-type AI


Mankind emulating nature... again!

Computers are getting ever more powerful, but the human brain remains far more efficient than even our most powerful supercomputers, in both its energy consumption and the computations it performs. So it makes sense to try to create computers that emulate the human brain. But which parts should we be trying to emulate?

Well, one fundamental difference between today's computers and the human brain is the functionality of their smallest components: the computer's transistor and the brain's neuron.

The transistor is based on a binary system, meaning it can produce only two distinct values, which we dub 0 and 1. The neuron has no such two-value limitation; it can produce a signal of any value.

So why does this fundamental difference between today's computers and the brain matter?


How do computers work?

Let's start with how computers work. All of a computer's functionality is built on this binary system, so if you want to program some math into it, you have to deal with the limitation that it can't represent numbers like 1/3 exactly.

Now you might think it should be sufficient to use an estimate, say 0.3333, in place of the true value 1/3, but this doesn't work well: each calculation carries that small error, and when billions of calculations are chained together to produce the computer's intended result, the individual errors add up to a lot.
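Both effects are easy to see in a few lines of Python (my own illustration, not tied to any particular system):

    from fractions import Fraction

    x = 1.0 / 3.0       # what the hardware actually stores for "1/3"
    print(Fraction(x))  # 6004799503160661/18014398509481984 -- close, but not 1/3

    # Even a simple decimal like 0.1 has no exact binary representation,
    # so adding it ten times does not give exactly 1.0:
    total = 0.0
    for _ in range(10):
        total += 0.1
    print(total)         # 0.9999999999999999
    print(total == 1.0)  # False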

So you might say that we can use more transistors for each number, thereby allowing for more precision, but this doesn't work well either, because it ignores how the error grows in these calculations. Instead of growing linearly, in other words proportionally with the number of calculations, the error grows geometrically: with every iteration of computation the error multiplies on itself instead of adding to itself. The error can grow until it is hundreds of times larger than the true value, making the estimate completely useless.
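A classic textbook illustration of this geometric growth (my choice of example, borrowed from standard numerical analysis courses, not from the research discussed below) is this innocent-looking recurrence:

    import math

    # The integrals I(n) = integral from 0 to 1 of x^n * e^(x-1) dx satisfy
    #     I(n) = 1 - n * I(n-1),  with I(0) = 1 - 1/e.
    # Every true I(n) lies between 0 and 1, but the recurrence multiplies
    # whatever error is in I(n-1) by n, so the tiny rounding error in I(0)
    # is amplified roughly n!-fold -- geometric, not linear, growth.
    I = 1.0 - 1.0 / math.e
    for n in range(1, 21):
        I = 1.0 - n * I

    print(I)  # hundreds in magnitude, though the true I(20) is about 0.045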

To solve this error problem, we turn to a field of mathematics known as Numerical Analysis -- a field so old that its earliest known artifact, a Babylonian clay-tablet approximation of the square root of 2, dates to between 1800 and 1600 BC -- which deals with creating algorithms that keep the error in numerical calculations manageable (by making it grow linearly instead of geometrically).
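One classic algorithm of this kind (my choice of illustration; the field has many) is Kahan's compensated summation, which captures the rounding error of each addition and feeds it back into the next one, so the total error stays roughly constant instead of growing with the number of terms:

    import math

    def kahan_sum(values):
        # Compensated (Kahan) summation: carry the rounding error of each
        # addition forward instead of discarding it.
        total = 0.0
        compensation = 0.0               # low-order bits lost so far
        for x in values:
            y = x - compensation         # re-inject the previously lost error
            t = total + y                # low-order bits of y may be lost...
            compensation = (t - total) - y   # ...so recover them here
            total = t
        return total

    values = [0.1] * 1_000_000
    exact = math.fsum(values)            # correctly rounded reference sum
    print(sum(values) - exact)           # naive loop drifts (about 1e-6 here)
    print(kahan_sum(values) - exact)     # compensated: essentially zero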

You might say that this works well, and you'd be right, at least for the applications we see in daily life. For example, digital TVs rely on such algorithms because they receive video feeds at 30 frames per second (fps) while driving their screens at, say, 60 fps. The TV must create 30 extra frames each second that never came from the feed. It does this with numerical algorithms that interpolate new frames from the existing ones in the video feed. This interpolation of course produces some error, but the algorithms do well enough at minimizing it that most people never notice.
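In sketch form, the simplest possible version of that idea looks like this (plain per-pixel blending; real TVs use fancier motion-compensated interpolation, and the function name here is my own invention):

    import numpy as np

    def midpoint_frame(frame_a, frame_b):
        # Estimate the in-between frame as the per-pixel average of its two
        # neighbors. Every estimated pixel carries some interpolation error,
        # but usually too little for a viewer to notice.
        a = frame_a.astype(np.float64)
        b = frame_b.astype(np.float64)
        return np.clip((a + b) / 2.0, 0, 255).astype(np.uint8)

    # Two tiny grayscale "frames" standing in for the 30 fps feed:
    frame1 = np.zeros((4, 4), dtype=np.uint8)      # dark frame
    frame2 = np.full((4, 4), 100, dtype=np.uint8)  # brighter frame
    print(midpoint_frame(frame1, frame2))          # every pixel is 50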

But consider the newer field of computerized facial recognition. Today's systems can take hours to do a job that humans do in a fraction of a second. So maybe if we built computer chips whose smallest component can take on any value in a continuous range, instead of being limited to just 0 and 1, we would be one step closer to creating a human-type artificial intelligence, also known as artificial general intelligence (AGI).


New Invention: Artificial synapse

And this step may have already been achieved. Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have created an artificial synapse that works much like the synapses in the human brain.* And actually it mimics the brain's synapse in more ways than I've already described.

The human brain has roughly 86 billion neurons, each connected to as many as 10,000 others. Neurons connect to each other by extending axons to the receiving ends of other neurons, called dendrites. The junction where an axon meets a dendrite is what we call the synapse. These synapses continuously adapt to stimuli, strengthening some connections and weakening others. On a physiological level this is what we know as learning, and it enables the kind of rapid, highly efficient computational processes that make Siri and Blue Gene seem pretty stupid.
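As a cartoon of that strengthen-some, weaken-others process, here's a toy version of the classic Hebbian learning rule (a textbook rule, not a model of any real brain circuit):

    import numpy as np

    n = 5
    weights = np.zeros((n, n))      # synaptic strengths among n neurons
    learning_rate, decay = 0.1, 0.01

    for step in range(100):
        activity = np.zeros(n)
        if step % 2 == 0:
            activity[[0, 1, 2]] = 1.0   # group A fires together
        else:
            activity[[3, 4]] = 1.0      # group B fires together
        # Hebb's rule: neurons that fire together wire together...
        weights += learning_rate * np.outer(activity, activity)
        weights *= 1.0 - decay          # ...and unused connections fade

    np.fill_diagonal(weights, 0.0)      # ignore self-connections
    print(weights.round(2))  # strong within each group, zero across groups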


How the synapse works

What this new brain-inspired device does is control the flow of information in a circuit while also physically adapting to changing signals. This physical adaptation mimics something our brains do: tune an axon's conduction by adding or removing an electrically insulating fatty layer, known as the myelin sheath, that wraps around the axon.

"The transistor we've demonstrated is really an analog to the synapse in our brains," says co-lead author Jian Shi, a postdoctoral fellow at SEAS. "Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of it's connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons."

The artificial device uses oxygen ions to achieve the same kind of plasticity that calcium ions provide in the biological synapse. When a voltage is applied, these ions slip in and out of the crystal lattice of a thin film of samarium nickelate just 80 nanometers across, about 1/1000th the width of a strand of hair. The film acts as the synaptic channel between two platinum "axon" and "dendrite" terminals. The varying concentration of ions in the nickelate raises or lowers its conductance -- and just as in the human brain's synapse, that conductance correlates with the strength of the connection.

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid, the source of the oxygen ions. An external circuit multiplexer converts the time delay between signals into a voltage that it applies to the ionic liquid, creating an electric field that drives ions into or out of the nickelate. The entire device, just a few hundred microns long -- the width of a few strands of hair -- is embedded in a silicon chip.
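Pulling that behavior together, here's a toy software model of the device (my own simplification with made-up numbers; the actual physics in the paper is far richer):

    class ArtificialSynapse:
        # Toy model: each voltage pulse drives oxygen ions into or out of
        # the nickelate film, nudging its conductance up or down -- and the
        # new conductance persists, so the device "remembers" its history.
        def __init__(self, conductance=0.5):
            self.conductance = conductance        # the synaptic "strength"

        def pulse(self, voltage):
            # Positive pulses strengthen (potentiation), negative pulses
            # weaken (depression); 0.05 is an arbitrary illustrative step.
            self.conductance += 0.05 * voltage
            self.conductance = min(max(self.conductance, 0.0), 1.0)

    s = ArtificialSynapse()
    for _ in range(5):
        s.pulse(+1.0)        # repeated stimulation strengthens the link
    print(s.conductance)     # about 0.75 (floating point, after all!)
    s.pulse(-1.0)            # a reverse pulse weakens it
    print(s.conductance)     # about 0.70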


What's next?

So this synaptic device brings us one step closer to emulating the architecture of the human brain. But to be clear, there is another fundamental difference between today's computers and the human brain, one I expect will be much harder to solve. Today's computer architectures are designed so that algorithms can be programmed onto them, in other words so that software can be installed. The architecture of the human brain, by contrast, allows it to program itself. And more importantly, unlike today's computers, nobody can program a person's brain except that person.

One possible path to answers here is to understand the neuron's ability to extend its axon to meet other neurons and form new connections, as this seems to be the next most fundamental difference between the brain and today's computers.

That said, emulating the architecture of the human brain isn't necessary to achieve human-type AI. We should be able to emulate the human brain's ability to program itself without emulating its physical architecture.** In this view, the key to human-type AI will be found in the field of philosophy, not computer science.


-----------------------------------------------------------------------------------

* Shi, Jian, et al. "A correlated nickelate synaptic transistor." Nature Communications (2013): Web. 11 Nov 2013.

** Deutsch, David. "Creative Blocks." Aeon Magazine, 12 Oct 2013: Web. 20 Nov 2013.
