Neuromorphic design could help reverse engineer biological neural nets
08-10-2021 | By Robin Mitchell
Recently, Samsung Electronics published a paper describing a method for copying a biological neural network and pasting its design into memory, without needing to understand the network itself. What is neuromorphic computing, what did the paper describe, and could it be the solution to developing intelligent machines?
What is neuromorphic computing?
Neuromorphic computing is a method for creating AI systems that achieve intelligence by mimicking the behaviour of biological neural networks. In the brains of living organisms, neurons act as basic computational units that receive and send electrical impulses. Each neuron forms connections to other nearby neurons, and these links present resistance to impulses such that an impulse can only travel from one neuron to another if the signal is strong enough.
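This threshold behaviour can be illustrated with a minimal sketch (an illustration of the concept above, not anything from Samsung's paper): a connection only passes an impulse on when the combined incoming signal is strong enough.

```python
# Minimal sketch: an impulse crosses to a neighbouring neuron only if the
# summed incoming signal exceeds the connection's threshold. The threshold
# value of 1.0 is an arbitrary illustrative choice.

def fires(incoming_impulses, threshold=1.0):
    """Return True if the summed input is strong enough to propagate."""
    return sum(incoming_impulses) >= threshold

print(fires([0.3, 0.4]))  # weak combined signal is blocked -> False
print(fires([0.6, 0.7]))  # strong combined signal passes -> True
```

Real neurons integrate inputs over time and exhibit far richer dynamics, but this all-or-nothing gating is the essential idea that neuromorphic circuits replicate.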
This concept can be mapped to electrical circuits using basic processing units that act as neurons and connect electrically to other processing units. Another method for producing a neuromorphic circuit is an array of interconnected memristors. Each memristor has a resistance that depends on the current that has previously flowed through it. This can be used to create a neural net with weighted nodes and the ability to learn while in use (as using the grid reinforces the neural net).
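A toy model makes the memristor-array idea concrete. The sketch below is purely illustrative (the conductance values and update rule are assumptions, not any real device's characteristics): each crossbar cell's conductance acts as a synaptic weight, column currents are the weighted sums of the row voltages (Ohm's law plus Kirchhoff's current law), and each use slightly strengthens the conductance along active paths, crudely mimicking history-dependent memristance.

```python
# Illustrative memristor-crossbar sketch: conductances are weights,
# reading the array computes weighted sums, and use reinforces the grid.
import numpy as np

class Crossbar:
    def __init__(self, conductance):
        self.g = np.array(conductance, dtype=float)  # per-cell conductance

    def read(self, voltages):
        """Column currents: I_j = sum_i G[i, j] * V[i]."""
        return np.array(voltages) @ self.g

    def reinforce(self, voltages, rate=0.01):
        """Toy learning rule: conductance grows where current flowed."""
        v = np.array(voltages)
        self.g += rate * np.outer(v, self.read(v))

xbar = Crossbar([[1.0, 0.5],
                 [0.2, 0.8]])
print(xbar.read([1.0, 1.0]))  # weighted sum per column -> [1.2, 1.3]
```

The appeal of this layout is that the physics performs the multiply-accumulate "for free", which is why memristor arrays are a popular substrate for neuromorphic hardware.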
Samsung Electronics researchers publish a paper on the proposal for neuromorphic design
Recently, Samsung Electronics researchers published a paper in the journal Nature Electronics on how a future neuromorphic AI could be developed with the use of real neurons, potentially allowing the brains of living organisms to be reverse-engineered.
The first step to reverse-engineering biological neural networks is a semiconductor that carries an array of billions of nanoelectrodes. This array is placed in contact with neurons, and each electrode in the array, being far smaller than an individual neuron, can take accurate electrical readings. The second step involves taking the recorded electrical pattern and turning it into a network of weighted connections that reproduces the electrical behaviour of the biological neural net.
The final step takes the generated pattern and programs it into a non-volatile memory such as an array of memristors. This device should then behave precisely the same as the biological neural net, essentially copying and pasting the structure and functionality of a biological neural net.
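The three steps above can be sketched end to end. Everything in this example is a hypothetical stand-in for the real process: the array sizes, the least-squares weight fit, and the 16-level quantisation are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the "copy and paste" flow: record activity,
# fit a weighted net to it, then program the weights into discrete
# non-volatile conductance levels.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: the nanoelectrode array records input/output activity.
inputs = rng.normal(size=(200, 8))   # 200 readings across 8 electrodes
true_w = rng.normal(size=(8, 4))     # hidden biological "wiring"
outputs = inputs @ true_w            # recorded responses

# Step 2: turn the recorded pattern into a weighted net (least squares).
weights, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

# Step 3: "paste" the net into non-volatile memory by snapping each
# weight to the nearest of 16 discrete conductance levels.
levels = np.linspace(weights.min(), weights.max(), 16)
programmed = levels[np.abs(weights[..., None] - levels).argmin(axis=-1)]

# The programmed chip approximates the original wiring up to
# quantisation error (at most half the spacing between levels).
print(np.max(np.abs(programmed - true_w)))
```

In reality each step is vastly harder (noisy recordings, spiking dynamics, device variability), but the pipeline shape is the same: measure, fit, program.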
While such a design would be possible using modern technology for a single slice of brain tissue (i.e. a 2D plane of neurons), it would become vastly more challenging for 3D brain structures. The human brain alone has on the order of 100 billion neurons, and this does not include the connections between neurons, which are essential for the brain to learn and understand its environment.
Regardless, the development of 3D memory technologies could allow complex AIs to be produced quickly by training biological neural nets in a laboratory and then copying their electrical properties to a chip.
Could such a method develop truly intelligent AIs?
As previously stated, the biggest challenge faced by researchers is that the brain is a three-dimensional structure, which makes it very difficult to record electrical activity deep inside tissue. Furthermore, any damage done to neurons almost always adversely affects their ability to function.
It is unlikely that researchers will be able to recreate 3D neural structures. However, the ability to create 2D planes of neurons, train them, and then monitor their behaviour is more likely to be fact than fiction. At the end of the day, neurons are simply basic processors: they receive electrical signals from other neurons and fire their own signal when some basic criteria are met.
As such, it should be more than possible for researchers to grow two-dimensional arrays of neurons on top of an array of electrodes and then record what the neurons are doing.
Crucially, copying neural behaviour could lead to the first AI circuits exhibiting true intelligence. While AIs can outperform humans on many tasks and learn from data, they are still not considered intelligent in the way humans are. For example, a chess AI could beat most grandmasters, but it would not be aware that it was playing chess, nor would it know what chess was.
The copying of neural nets may be a way to program true intelligence into computers. Instead of trying to create intelligence from the ground up, it is copied from nature and essentially simulated on a chip. Of course, if a brain were perfectly copied into electronics, the resulting system should be able to think like a person, and if it can think, it may also be self-aware.
Is it moral to pursue such technologies? What if the neural nets we copy have some fundamental element of consciousness that we do not understand? Could such a device suffer? These questions would have to be explored if neural nets could be easily copied into neuromorphic devices.