Artificial Neurons May Make Artificial Intelligence More Efficient

In a paper published in Science Advances on 26 January, Michael L. Schneider and his team describe their latest achievement, which could be a key step toward making artificial intelligence more human-like. The team created superconducting computing chips modeled after neurons in the brain. These chips allow information to be processed more quickly and efficiently, and could pave the way for brain-inspired machine-learning software in advanced computing devices.

Schneider, a physicist at the US National Institute of Standards and Technology (NIST), explains that this accomplishment in neuromorphic computing improves the efficiency of certain computational tasks, such as decision-making, although he believes that “There must be a better way to do this, because nature has figured out a better way to do this.”

Image credit: ktsdesign/Shutterstock

Presently, artificial intelligence software, such as Google’s automated language- and image-classification programs, also uses artificial neurons. However, because conventional hardware is not designed to run complex ‘brain-like’ algorithms, these tasks are still not as energy-efficient as the human brain.

Theoretically, hardware that mimics the human brain can run complex tasks more efficiently than conventional electronic systems, because the two process information in fundamentally different ways. In conventional systems, transistors process information at regulated intervals and in fixed amounts (i.e. as 0 or 1 bits). Neuromorphic systems, on the other hand, can hold and combine information from a number of sources, producing an electrical impulse only once enough input has accumulated, which mimics how data is processed in the human brain.
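The accumulate-then-fire behaviour described above can be sketched in software. The following is a minimal, hypothetical integrate-and-fire model for illustration only; the weights, threshold, and input values are assumptions of this sketch and are not details from Schneider's hardware:

```python
# Minimal integrate-and-fire sketch: a neuron sums weighted inputs
# from several sources and emits a spike (1) only when the running
# total crosses a threshold, unlike a transistor's fixed 0/1 steps.

def integrate_and_fire(inputs, weights, threshold=1.0):
    """Accumulate weighted inputs; fire and reset when the threshold is crossed."""
    potential = 0.0
    spikes = []
    for step in inputs:
        # Sum the contribution of every input source at this time step.
        potential += sum(w * x for w, x in zip(weights, step))
        if potential >= threshold:
            spikes.append(1)   # neuron fires
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # sub-threshold: keep accumulating
    return spikes

# Three input sources delivering small signals over four time steps.
sources = [(0.25, 0.25, 0.0),
           (0.25, 0.25, 0.25),
           (0.25, 0.0, 0.0),
           (0.5, 0.25, 0.25)]
weights = (1.0, 1.0, 1.0)
print(integrate_and_fire(sources, weights))  # → [0, 1, 0, 1]
```

The neuron stays silent while its potential is below the threshold and spikes only on the steps where accumulated input crosses it, which is the key difference from a clocked transistor that outputs a value on every cycle.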

NIST is not the only group trying to develop neuromorphic hardware, but its use of niobium superconductors as electrodes that mimic neurons could potentially solve the inefficiency of transmitting information across the synapse-like connections between them. The team filled the synapse-like gap between the superconductors with nanoclusters of magnetic manganese, which can be aligned to point in different directions by altering the magnetic field in the gap. This permits the system to encode information both in the level of electricity and in the direction of magnetism, allowing for greater computing power without taking up additional space.

This development allows the synapses to fire up to one billion times per second, much faster than other neuromorphic systems and significantly faster than human neurons, while using only one ten-thousandth of the energy. In addition, the synthetic neurons can accumulate data from nine sources before passing it on.

However, the technology still needs further development before it can be used in real-world applications. Complex computing would require millions of synapses, and it is not yet known whether the technology can be scaled to that level. Moreover, the synapses operate only at extremely low temperatures and must therefore be cooled with liquid helium. Schneider argues that cooling the devices requires less energy than operating conventional electronic systems of equal power. This means the technology may be better suited to large data centers than to small devices, a view shared by computer engineer Steven Furber, who studies neuromorphic computing at the University of Manchester, UK.

Furber also believes that practical applications for this technology are a long way off. He stresses that “The device technologies are potentially very interesting, but we don’t yet understand enough about the key properties of the [biological] synapse to know how to use them effectively.”

It can take 10 years or more for new computing devices to reach the commercial market, and Furber therefore believes that a number of different technological approaches should be developed in parallel. Progress in neuromorphic computing is likely to accelerate as neuroscientists provide further insight into the workings of the human brain.

This story is reprinted from material from Nature, with editorial changes made by Azo Network. The original article can be found here.

Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.


Please use the following format to cite this article in your essay, paper or report:

    Robinson, Isabelle. (2018, February 02). Artificial Neurons May Make Artificial Intelligence More Efficient. AZoM. Retrieved on June 27, 2022.
