
How computationally complex are biological neurons?



Our soft brains look very different from the hard silicon chips of a computer processor, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of jelly. In other words, its form does not matter; it is only the computing power that matters.”

Today’s most powerful artificial-intelligence systems use a type of machine learning called deep learning. Their algorithms learn tasks by processing large volumes of data through hidden layers of interconnected nodes, in structures known as deep neural networks.

As the name implies, deep neural networks were inspired by the real neural networks in the brain, and their nodes are modeled on real neurons; or at least on what neuroscientists knew about neurons in the 1950s, when an influential neuron model called the perceptron was born.
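
To make the comparison concrete, here is a minimal sketch of a perceptron-style artificial neuron, the kind of simple unit that deep-network nodes are loosely based on: it weights each input, sums them with a bias, and fires only if the total crosses a threshold. The numbers and the function name are illustrative choices, not taken from the study.

```python
import numpy as np

def perceptron_fire(inputs, weights, bias):
    # Classic perceptron-style decision: weight each input, sum them,
    # and "fire" (output 1) only if the total crosses zero.
    total = np.dot(weights, inputs) + bias
    return 1 if total > 0 else 0

# Three hypothetical input signals and learned connection strengths.
inputs = np.array([0.9, 0.2, 0.4])
weights = np.array([0.5, -1.0, 0.8])
print(perceptron_fire(inputs, weights, bias=-0.3))  # -> 1 (the unit fires)
```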

Since then, our understanding of the computational complexity of individual neurons has grown dramatically, and biological neurons have turned out to be far more complex than artificial ones. But how much more?

To find out, David Beniaguev, Idan Segev, and Michael London of the Hebrew University of Jerusalem trained a deep neural network to mimic the computations of a simulated biological neuron. They showed that the network needed five to eight layers of interconnected “neurons” to represent the complexity of a single biological neuron.

Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” says Beniaguev. He had expected that three or four layers would be enough to capture the computations performed inside the cell.

Timothy Lillicrap, who designs decision-making algorithms at Google, says of the results: “It may be necessary to reconsider the old tradition of roughly comparing neurons in the brain with artificial neurons in machine learning.”

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive input signals and, based on that information, decide whether to send a signal of their own to other neurons. Artificial neurons rely on a simple calculation to make this decision, but decades of research have shown that the process is far more complicated in biological neurons.

Computational neuroscientists use an input-output function to model the relationship between the inputs received on a neuron’s long, tree-like branches, called dendrites, and the neuron’s decision to send a signal. It is this function that the authors of the new study taught an artificial deep neural network to imitate, in order to determine its complexity.

The researchers began by creating a massive simulation of the input-output function of a neuron with distinct dendritic trees at its top and bottom, known as a pyramidal neuron, from a rat’s cortex. They then fed the simulation into a deep neural network with up to 256 artificial neurons in each layer and kept adding layers until the network could predict the simulated neuron’s output from its input with 99% accuracy at millisecond resolution. The deep neural network succeeded with at least five (but no more than eight) layers, which in most of the networks amounted to roughly 1,000 artificial neurons for a single biological neuron.
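
The sketch below is only a toy illustration of that procedure, not the study’s actual pipeline: the biophysical simulation is replaced by a stand-in function, the training inputs are random, and the architecture is a plain fully connected network rather than the one used in the paper. It shows the general idea of sweeping over depths until a network can reproduce a simulated neuron’s input-output behavior.

```python
import torch
import torch.nn as nn

def simulated_neuron(x):
    # Placeholder for the detailed biophysical simulation used in the study:
    # here it is just an arbitrary nonlinear function of the input pattern.
    return torch.tanh(x.sum(dim=1, keepdim=True))

def make_dnn(n_layers, width=256, n_inputs=128):
    # Fully connected network with n_layers hidden layers of `width` units each.
    layers, d = [], n_inputs
    for _ in range(n_layers):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))
    return nn.Sequential(*layers)

# Sweep over depths, echoing the study's search for the shallowest accurate network.
for depth in range(1, 9):
    net = make_dnn(depth)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):                 # toy training loop on random inputs
        x = torch.randn(64, 128)         # a batch of "synaptic input" patterns
        y = simulated_neuron(x)          # target: the simulated neuron's response
        loss = nn.functional.mse_loss(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"{depth} hidden layers -> final training loss {loss.item():.4f}")
```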

Simulation of a neuron

The computational complexity of a single neuron, such as the pyramidal neuron on the left, stems from its dendritic branches, which are bombarded with input signals. These produce local voltage changes, indicated by the neuron’s changing colors (red means high voltage, blue means low voltage), before the neuron decides whether to send its own signal, called a spike. Here it spikes three times, as shown by the traces of individual branches on the right, where the colors indicate the dendrites’ locations from top (red) to bottom (blue).

According to Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, the results of the new study establish a link between biological and artificial neurons. But the study’s authors caution that it is not yet a straightforward correspondence. “The relationship between how many layers you have in a neural network and the complexity of the network is not obvious,” says London. So we cannot really say, for example, how much complexity is gained by going from four layers to five, nor can we say that the need for 1,000 artificial neurons means a biological neuron is exactly 1,000 times as complex.

Using many more neurons within each layer might eventually yield a deep neural network with a single layer that does the job, but it would probably take much more data and time for the algorithm to learn. “We tried many architectures with many depths and many things, and mostly we failed,” says London.

The authors have shared their code to encourage other researchers to find cleverer solutions with fewer layers. But given how difficult it was to find a deep neural network that could mimic the neuron with 99% accuracy, the authors are confident that their result provides a meaningful basis of comparison for further research.

The results may offer a new way to relate the neural networks that classify images to the brain, says Lillicrap. These networks often require more than 50 layers. If each biological neuron is like a five-layer artificial neural network, then perhaps a 50-layer image-classification network is equivalent to only about 10 real neurons in a biological circuit.

The authors also hope that their result will change the architecture of today’s state-of-the-art deep networks in artificial intelligence. “We want to replace current deep-network technology with something closer to how the brain works,” says Segev. They propose that each simple unit in today’s deep networks be replaced by a unit that better represents a real neuron: in this scenario, AI researchers and engineers could plug in a five-layer deep network as a “mini network” standing in for each artificial neuron.
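
As a purely hypothetical illustration of that idea, the sketch below replaces each conventional unit with a small multi-layer “mini network” that emits a single scalar, and builds a layer by tiling many of them. The class names, depth, and widths are arbitrary choices for illustration, not the architecture proposed by the authors.

```python
import torch
import torch.nn as nn

class NeuronAsMiniNetwork(nn.Module):
    # Hypothetical drop-in unit: instead of a single weighted sum plus nonlinearity,
    # each "neuron" is itself a small multi-layer network producing one scalar.
    def __init__(self, n_inputs, hidden=32, depth=5):
        super().__init__()
        layers, d = [], n_inputs
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 1))  # one output, like a single conventional unit
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class MiniNetworkLayer(nn.Module):
    # A layer built from such units: each mini network reads the full input
    # vector and contributes one element of the layer's output.
    def __init__(self, n_inputs, n_units):
        super().__init__()
        self.units = nn.ModuleList(NeuronAsMiniNetwork(n_inputs) for _ in range(n_units))

    def forward(self, x):
        return torch.cat([u(x) for u in self.units], dim=1)

layer = MiniNetworkLayer(n_inputs=16, n_units=8)
print(layer(torch.randn(4, 16)).shape)  # -> torch.Size([4, 8])
```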

But some researchers question whether this would really benefit AI. Anthony Zador, a neuroscientist at Cold Spring Harbor Laboratory in the United States, says it remains an open question, and that the new work lays the groundwork for testing it.

Beyond its implications for artificial intelligence, the new paper also adds to a growing consensus among scientists about the computational power of dendritic trees and, by extension, of individual neurons. In 2003, three neuroscientists showed that the dendritic trees of a pyramidal neuron perform complex computations by modeling them as a two-layer artificial neural network.

In the new paper, the authors examined which features of the pyramidal neuron gave rise to the much greater complexity of their five-to-eight-layer deep neural networks. They concluded that it comes from the dendritic trees and from specific receptors on the dendrites’ surface that receive chemical messengers, findings consistent with earlier work in the field.

Some believe the result means that neuroscientists should make the study of single neurons a higher priority. “This paper makes thinking about individual dendrites and neurons much more important than it was before,” says Konrad Kording, a computational neuroscientist at the University of Pennsylvania.

Others, such as Lillicrap and Zador, suggest that focusing on neurons embedded within circuits will also be important for understanding how the brain actually uses the computational complexity of single neurons.

However, the language of neural networks may provide new insight into the power of neurons and, ultimately, of the brain. “Thinking in terms of layers, depths and widths gives us an intuitive sense of computational complexity,” says Grace Lindsay, a computational neuroscientist at University College London. Lindsay also cautions, however, that the new work is still comparing one model to another.

Unfortunately, it is currently impossible for neuroscientists to record the full input-output function of a real neuron, so there is probably more happening in biological neurons that we do not yet know about. In other words, real neurons may be even more complex. “We’re not sure that between five and eight is really the final number,” says London.

