Researchers Develop Graphene-based Synaptic Transistor for Neuromorphic Devices

18-08-2022 | By Robin Mitchell

Recently, researchers from the University of Texas and Sandia National Laboratories announced the development of a graphene-based transistor that exhibits properties similar to neurons. What is neuromorphic computing, what did the researchers develop, and how can neuromorphic computation help improve AI in the future?

What is neuromorphic computing?

Over the past two decades, numerous developments have been made in the field of AI, with neural nets developed in research labs now being integrated into everyday devices worldwide. When AI was first being developed, researchers focused their attention on image recognition, and few had anticipated the wider applications of AI, from smart sensors and predictive maintenance to analysing shopping habits.

Traditional AI models utilise neural networks with multiple hidden layers to emulate intelligence. Unlike classical programming methods, a neural net can infer useful results from data it has never seen before. A good example of this can be found in object recognition algorithms: an AI can be trained to distinguish cats from dogs using a training dataset and then recognise cats and dogs in brand-new images it has never encountered. Even if the cat is an entirely different colour, breed, or shape, the AI will stand a good chance of differentiating it from a dog by looking at common characteristics such as ear shape, eye shape, and overall profile.

These neural networks consist of weighted nodes that accept inputs from other nodes, and training the network involves adjusting the weight of each input to each node so that the correct output is generated. In circuitry, such networks typically reduce to large matrix calculations, which GPUs handle efficiently. Fast forward to 2022, and dedicated neural processors can accelerate these networks further by mimicking the circuitry found in GPUs (albeit designed explicitly for running neural networks).
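To illustrate, here is a minimal sketch (not the researchers' code, and using purely illustrative layer sizes and random values) of how a single layer of weighted nodes reduces to the kind of matrix calculation that GPUs and neural processors accelerate:

```python
import numpy as np

rng = np.random.default_rng(0)

inputs = rng.random(4)          # outputs from the previous layer (4 nodes)
weights = rng.random((3, 4))    # one row of weights per node in this layer
biases = rng.random(3)

# The whole layer is evaluated as a single matrix-vector product, which is
# exactly the kind of operation GPUs and neural processors are built to accelerate.
outputs = np.maximum(0, weights @ inputs + biases)   # ReLU activation
print(outputs)
```

Training consists of nudging the entries of the weight matrix until the outputs match the desired answers, which is why so much of AI workload boils down to repeated matrix arithmetic.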

But a new type of AI processing, neuromorphic computing, is gaining traction with researchers. Unlike traditional neural networks, such as convolutional neural nets, neuromorphic computing uses spiking neural networks that operate almost identically to neurons in living tissue.

Instead of using weighted nodes, a spiking neural network consists of neurons connected to other neurons via links, and the strength of these links can be adjusted during learning cycles. Additionally, each neuron only produces an output pulse (i.e. a spike) when its input threshold is met. Thus, a neuromorphic computing device operates over time, with pulses firing between neurons, and adjusting the links between neurons changes the overall behaviour. In fact, a true neuromorphic device would improve its ability through repeated tasks that reinforce the connections between key neurons.
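As a rough illustration of the idea, the following sketch models a single leaky integrate-and-fire neuron; the threshold, leak factor, link strengths, and input spike pattern are illustrative assumptions rather than values from any real device:

```python
import numpy as np

threshold = 1.0
leak = 0.9                             # fraction of potential retained each step
weights = np.array([0.4, 0.3, 0.6])    # link strengths from three input neurons
potential = 0.0

# Each row records which of the three input neurons spiked at that time step.
input_spikes = np.array([
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
])

for t, spikes in enumerate(input_spikes):
    # Accumulate weighted input spikes while the stored potential slowly leaks away.
    potential = potential * leak + weights @ spikes
    if potential >= threshold:
        print(f"t={t}: output spike")
        potential = 0.0                # reset after firing
    else:
        print(f"t={t}: no spike (potential={potential:.2f})")
```

The neuron only fires when enough spikes arrive close together in time, and strengthening the link weights makes it fire more readily, which is how learning reshapes the network's behaviour.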

Researchers develop a graphene-based neuromorphic transistor

Recognising the advantages of neuromorphic devices, researchers from the University of Texas and Sandia National Laboratories have recently developed a graphene-based transistor that exhibits neuromorphic abilities.

The new transistors are manufactured using a combination of graphene and Nafion (a polymer membrane material) to form the channel and gate of the transistor, with gold contacts used to connect the device to external test systems. The conductivity of the transistor depends on the history of the gate current, such that positive-edged currents reduce the conductivity while negative-edged currents increase it.
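As a toy model only (the actual device physics will differ, and the sensitivity and conductance limits below are assumptions), this sketch captures the described behaviour of a conductance that remembers the history of gate-current pulses:

```python
def update_conductance(conductance, gate_pulse, sensitivity=0.05,
                       g_min=0.1, g_max=1.0):
    """Return the new conductance after one gate-current pulse (arbitrary units).

    Positive pulses lower the conductance, negative pulses raise it, and the
    size of the change scales with the pulse magnitude (all assumptions).
    """
    conductance -= sensitivity * gate_pulse
    return max(g_min, min(g_max, conductance))   # clamp to a physical range

g = 0.5
for pulse in [+1.0, +1.0, -0.5, -2.0]:           # illustrative gate pulses
    g = update_conductance(g, pulse)
    print(f"pulse {pulse:+.1f} -> conductance {g:.2f}")
```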

Furthermore, the change in conductivity depends on the size of the gate current, meaning that the new device can not only store information but also perform rudimentary calculations (an essential capability for neuromorphic devices). As such, these devices can take on the role of synapses between neurons, accepting inputs from other neurons and applying a weighted calculation to incoming spikes through changes in conductivity.
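Building on the toy model above, and again using purely illustrative values, the stored conductance can then act as a synaptic weight applied to incoming spikes:

```python
def synapse_output(spike, conductance):
    """Weight an incoming spike (0 or 1) by the stored conductance."""
    return spike * conductance

conductance = 0.8                 # state 'remembered' from earlier gate pulses
incoming_spikes = [1, 0, 1, 1]    # spikes arriving from an upstream neuron

# Each spike is scaled by the device's conductance, i.e. its synaptic weight,
# before being passed on to the receiving neuron.
weighted = [synapse_output(s, conductance) for s in incoming_spikes]
print(weighted)
```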

Not only did the final device show great flexibility, but it also contains no toxic compounds and is biologically inert, giving it potential use in implanted devices for neural-to-computer links.

How can neuromorphic computing help AI in the future?

As neuromorphic devices operate similarly to neurons in the human brain, it is believed that neuromorphic computing could be the technology that enables machine thinking. While the convolutional neural networks used in traditional AIs can be easy to design and train, they are also highly energy-demanding and require vast amounts of computation (even when a dedicated AI processor is used).

A neuromorphic device would only need to be configured once and then left to run. Without large processors evaluating each neural connection, neuromorphic devices run in real time (similar to the human brain) and can produce answers far more quickly than traditional AI models. Furthermore, synapses that improve their performance over time at the transistor level give rise to an AI that can learn by doing, without needing large training datasets or firmware updates.

The human mind is not a digital machine, and neuromorphic computation taps into the advantages of analogue computing to recreate the mechanism that drives the human brain.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.