Ultralow power AI chips are possible thanks to SNNs
https://dataconomy.ru/2022/07/11/ai-chip-ultralow-power-levels-snns/ | Mon, 11 Jul 2022

Spiking neural networks (SNNs) are artificial networks that can be used to build intelligent robots or to learn more about the mechanisms behind the brain. Despite its small size, the human brain packs a significant amount of computing power, and it processes neural signals as discrete, precisely timed spikes; many researchers are therefore interested in developing artificial networks that do the same.

The brain contains roughly 100 billion tiny neurons, each synaptically coupled to about 10,000 others, and it represents information through the timing of electrical spiking patterns. Mimicking these neurons in hardware on a small device, while keeping computation energy-efficient, has proven difficult. Recently, MIT researchers built a new LEGO-like AI chip with sustainability in mind.

SNNs rely on a network of artificial neurons

In a recent study, scientists in India developed ultralow-energy artificial neurons that allow SNN hardware to be packed more densely. IEEE Transactions on Circuits and Systems I: Regular Papers published the findings on May 25, according to IEEE Spectrum.


SNNs rely on a network of artificial neurons in which a current source charges a leaky capacitor until a threshold is reached and the neuron fires, at which point the stored charge resets to zero, much as biological neurons spike once their membrane potential crosses a threshold. However, many present-day SNN implementations need sizable transistor currents to charge their capacitors, which leads to high power consumption or artificial neurons that fire too quickly.
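The charge-to-threshold-and-reset behavior described above is essentially the textbook leaky integrate-and-fire (LIF) neuron. Below is a minimal Python sketch of that dynamic; it is a generic illustration, not the researchers' BTBT circuit, and every parameter value is an arbitrary assumption.

def simulate_lif_neuron(input_current_nA, dt_ms=0.1, tau_ms=20.0,
                        r_megaohm=10.0, v_threshold_mV=1.0):
    """Simulate one leaky integrate-and-fire (LIF) neuron.

    The membrane behaves like a leaky capacitor: input current charges it,
    the leak drains it, and the neuron fires and resets to zero whenever
    the voltage crosses the threshold.
    """
    v_mV = 0.0
    spike_times_ms = []
    for step, i_nA in enumerate(input_current_nA):
        # Leaky integration: dV/dt = (-V + R*I) / tau
        v_mV += dt_ms * (-v_mV + r_megaohm * i_nA) / tau_ms
        if v_mV >= v_threshold_mV:
            spike_times_ms.append(step * dt_ms)
            v_mV = 0.0  # reset, like the capacitor dumping its stored charge
    return spike_times_ms

# A constant 0.2 nA drive over 500 simulated milliseconds yields a regular spike train.
spikes = simulate_lif_neuron([0.2] * 5000)
print(len(spikes), "spikes; first spike at", round(spikes[0], 1), "ms")

The trade-off Ganguly's team is attacking is visible even in this toy model: reaching the threshold quickly requires a large charging current, and a large charging current costs energy, which is why an ultralow tunneling current is attractive.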

In their research, Professor Udayan Ganguly of the Indian Institute of Technology and his colleagues developed an SNN that charges its capacitors with band-to-band-tunneling (BTBT) current, a novel and compact current source.

Using BTBT requires less energy because the quantum tunneling current charges the capacitor at an ultralow rate; the aim, much like MIT's LEGO-like AI chip, is an energy-efficient design. Here, quantum tunneling refers to the quantum-mechanical phenomenon that lets current pass through the forbidden energy gap of the artificial neuron's silicon.


The BTBT method also removes the need for large capacitors to store large amounts of charge, opening the door to smaller capacitors on a chip and saving area. When the researchers tested their BTBT neuron approach using 45-nanometer commercial silicon-on-insulator transistor technology, they saw significant energy and area reductions.

“In comparison to state-of-art [artificial] neurons implemented in hardware spiking neural networks, we achieved 5,000 times lower energy per spike at a similar area and 10 times lower standby power at a similar area and energy per spike,” said Ganguly.

The researchers then used their SNN to implement a speech recognition model inspired by the brain's auditory cortex. Using 20 artificial neurons for initial input coding and 36 additional artificial neurons, the model successfully recognized spoken words, demonstrating the approach's real-world viability.


Notably, this kind of technology could suit a variety of tasks, such as voice-activity detection, speech classification, motion-pattern recognition, navigation, biological-signal monitoring, and more. Ganguly points out that while these applications can run on existing servers and supercomputers, SNNs could make it possible to run them on edge devices, such as mobile phones and Internet of Things sensors, especially where energy is at a premium.

Although his team has demonstrated the effectiveness of their BTBT approach for specific applications such as keyword detection, he says they aim to demonstrate a general-purpose, reusable neurosynaptic core for a wide range of applications and customers, and they have founded a startup, Numelo Tech, to pursue commercialization.


The ultimate goal, Ganguly says, is to develop “an extremely low-power neurosynaptic core and developing a real-time on-chip learning mechanism, which are key for autonomous biologically inspired neural networks. This is the holy grail.” Efforts like this, much like MIT's LEGO-like AI chip, matter for building a more sustainable computing industry; recent Schneider Electric research likewise finds that many sector professionals are not meeting their IT sustainability promises.

Neuromorphic computing and the future of AI
https://dataconomy.ru/2022/04/06/neuromorphic-computing-engineering/ | Wed, 06 Apr 2022

Neuromorphic computing is a growing computer engineering approach that models and builds computing devices inspired by the human brain. Neuromorphic engineering focuses on using biology-inspired algorithms to design semiconductor chips whose elements behave like brain neurons, and on building systems that work within this new architecture.

What is neuromorphic computing?

Neuromorphic computing gives machines the ability to think more creatively, recognize things they have never seen before, and react accordingly. The human brain is remarkably good at understanding cause and effect and adapts to change swiftly, whereas even a slight change in their environment can render AI models trained with traditional machine learning methods inoperable. Neuromorphic computing aims to overcome these challenges with brain-inspired computing methods.

Gartner believes that traditional computing technology based on legacy semiconductor architecture will hit a digital barrier by 2025, necessitating a paradigm shift such as neuromorphic computing. Meanwhile, Emergen Research predicts that the worldwide market for neuromorphic processing will reach $11.29 billion by 2027.

How does neuromorphic computing work?

Neuromorphic computing constructs spiking neural networks: spikes from individual electronic neurons activate other neurons down a cascading chain, mimicking how the brain and nervous system send and receive signals. This lets neuromorphic chips compute more flexibly and broadly than conventional systems, which orchestrate computations in rigid binary steps; spiking neurons fire asynchronously, without a fixed pattern or clock.

Neuromorphic computing achieves this brain-like performance and efficiency by building artificial neural networks out of “neurons” and “synapses.” Analog circuitry connects these artificial neurons and synapses and can modulate the amount of current flowing between nodes, replicating the varying intensity of natural brain signals.
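As a rough picture of this event-driven cascade, the following Python sketch wires a handful of spiking neurons together with weighted synapses. It is a conceptual illustration only, not any particular chip or software stack, and the weights, threshold, leak factor, and drive current are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8
threshold = 1.0
leak = 0.9  # multiplicative decay of each membrane potential per time step

# weights[i, j] is the synaptic strength from neuron i to neuron j
# (zero on the diagonal: no self-connections).
weights = rng.uniform(0.2, 0.6, size=(n_neurons, n_neurons))
np.fill_diagonal(weights, 0.0)

potentials = np.zeros(n_neurons)
for t in range(50):
    potentials[0] += 0.3          # external input drives only neuron 0
    fired = potentials >= threshold
    if fired.any():
        print(f"t={t:2d}  spikes from neurons {np.flatnonzero(fired).tolist()}")
        # Each spike deposits weighted charge onto downstream neurons...
        potentials += weights[fired].sum(axis=0)
        # ...and the neurons that fired reset, as spiking neurons do.
        potentials[fired] = 0.0
    potentials *= leak            # graded, analog-like decay between events

Note that work happens only when a spike arrives; a neuron receiving no input simply decays quietly, which is where much of the energy saving of event-driven designs comes from.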

Neuromorphic chips solve complex issues and adapt to new settings swiftly

These graded, analog signal changes are what spiking neural networks (SNNs) in the brain can detect, and they are absent from conventional neural networks, which exchange less nuanced digital values.

Neuromorphic technology also envisions a new chip architecture that mixes memory and processing on each neuron rather than having distinct areas for one or the other.

Traditional chip designs based on the von Neumann architecture usually include a separate memory unit, central processing unit (CPU), and data paths. As the computer completes a task, information must be shuttled back and forth between these components many times. The resulting loss of time and energy whenever data transport between components stalls the computation is known as the von Neumann bottleneck.
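A rough back-of-envelope comparison shows why this data movement, rather than the arithmetic itself, tends to dominate the energy budget. The per-operation figures below are approximate values often cited for 45 nm silicon (for example in Horowitz's ISSCC 2014 tutorial) and are assumptions used purely for illustration.

# Approximate, commonly cited 45 nm energy costs; treat them as rough
# illustrative assumptions, not measurements of any specific chip.
ENERGY_PJ = {
    "32-bit float add (compute)": 0.9,
    "32-bit SRAM read (on-chip)": 5.0,
    "32-bit DRAM read (off-chip)": 640.0,
}

ops = 1_000_000  # one million operations of each kind
for name, picojoules in ENERGY_PJ.items():
    microjoules = ops * picojoules / 1e6
    print(f"{name:28s}: {microjoules:8.1f} uJ per million ops")

On numbers like these, fetching an operand from off-chip memory costs hundreds of times more energy than the arithmetic performed on it, which is precisely the overhead that collocating memory and processing aims to remove.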

Neuromorphic computing provides a better way to handle massive amounts of data. It enables chips to be simultaneously very powerful and efficient. Depending on the situation, each neuron can perform processing or memory tasks.

Researchers are also working on alternate ways to model the brain's synapse by utilizing quantum dots and graphene, as well as memristive technologies such as phase-change memory, resistive RAM, spin-transfer torque magnetic RAM, and conductive bridge RAM.


Advantages of neuromorphic computing

Current hardware is well suited to traditional neural network and machine learning computations, but it tends to prioritize either performance or power, frequently achieving one at the expense of the other. In contrast, neuromorphic systems deliver both rapid computation and low energy consumption.

Neuromorphic computing has two primary goals. The first is to build a cognition machine that learns, retains data, and draws logical conclusions the way humans do. The second is to learn more about how the human brain works.

Neuromorphic chips can perform many tasks simultaneously. Their event-driven operation lets them adapt to changing conditions, and systems with computing capabilities such as generalization are very flexible and robust. They are also highly fault-tolerant: data is stored redundantly, so even minor failures do not degrade overall performance. Finally, they can tackle complex problems and adapt to new settings swiftly.

Neuromorphic computing use cases

For compute-intensive activities, edge devices like smartphones currently have to hand off processing to a cloud-based system, which processes the query and transmits the answer back to the device. With neuromorphic systems, that query would not have to be shunted back and forth; it could be handled right on the device. The biggest motivator for neuromorphic computing, though, is the hope it offers for the future of AI.

AI systems are, by nature, highly rule-based: they are trained on data until they can produce a particular outcome. Our minds, on the other hand, handle ambiguity and adapt far more naturally.

Researchers aim to make the next generation of AI systems capable of handling more brain-like challenges. One of these is constraint satisfaction, in which a machine must find the best solution to a problem subject to many restrictions.

Neuromorphic chips are more comfortable with issues like probabilistic computing, where machines handle noisy and uncertain data. Other concepts, such as causality and non-linear thinking, are still in their infancy in neuromorphic computing systems but will mature once they become more widespread.

Neuromorphic computer systems available today

Many neuromorphic systems have been developed and utilized by academics, startups, and the most prominent players in the technology world.

Intel's neuromorphic chip Loihi has 130 million synapses and 131,000 neurons and was designed for spiking neural networks. Scientists are using Loihi chips to develop artificial skin and powered prosthetic limbs. Intel Labs has also announced its second-generation neuromorphic research chip, codenamed Loihi 2, together with Lava, an open-source software framework.

IBM unveiled its neuromorphic TrueNorth chip in 2014; systems built from arrays of TrueNorth chips scale to 64 million neurons and 16 billion synapses. IBM recently announced a collaboration with the US Air Force Research Laboratory to develop a “neuromorphic supercomputer” known as Blue Raven. While the technology is still being developed, one use may be smarter, lighter, and less energy-hungry drones.

The Human Brain Project (HBP), a 10-year effort funded by the European Union that began in 2013, was established to further understanding of the brain through six areas of study, including neuromorphic computing. The HBP has inspired two major university-led neuromorphic projects, SpiNNaker and BrainScaleS. A million-core SpiNNaker system was switched on in 2018, becoming the world's largest neuromorphic supercomputer at the time, and the University of Manchester aims to scale it up to model up to a billion neurons in the future.

The examples from IBM and Intel focus on computational performance, while the university projects use neuromorphic computers as tools for learning about the human brain. Both approaches are necessary, since advancing AI requires both kinds of insight.


Intel Loihi neuromorphic chip architecture

Behind the Loihi chip architecture lies the idea of bringing computing power from the cloud to the edge. The second-generation Loihi neuromorphic research chip was introduced in 2021 alongside Lava, an open-source software framework for applications inspired by the human brain. Neuromorphic computing adapts key features of neural architectures found in nature to create a new model of computer architecture; Loihi was developed as a reference for the chips that would power such computers, and Lava for the applications that would run on them.

According to the research paper "Advancing Neuromorphic Computing with Loihi," the most striking feature of neuromorphic technology is its ability to mimic how the biological brain has evolved to handle dynamic and often unpredictable real-world environments. Neuromorphic chips that mimic such biological systems include neurons, synapses for inter-neuron connections, and dendrites, which allow a neuron to receive messages from many other neurons.

A Loihi 2 chip consists of microprocessor cores plus 128 fully asynchronous neuron cores interconnected by a network-on-chip (NoC). All communication between the neuron cores, which are optimized for neuromorphic workloads, takes place as spike messages that mimic the signaling of neural networks in a biological brain.
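The following toy Python sketch conveys the flavor of that spike-message passing between neuron cores. It is a conceptual illustration only, not Intel's Loihi 2 microarchitecture or the Lava API; the core layout, fan-out table, weight, and threshold are all invented for the example.

from collections import defaultdict, deque

# Routing table: which (core, neuron) targets receive a spike from each source.
fanout = {
    ("core0", 0): [("core1", 0), ("core1", 1)],
    ("core1", 0): [("core2", 0)],
    ("core1", 1): [("core2", 0)],
    ("core2", 0): [],
}
potential = defaultdict(float)
THRESHOLD = 1.0
WEIGHT = 0.6

# Inject four input spikes into core0/neuron 0 and let the messages cascade.
message_queue = deque([("core0", 0)] * 4)
while message_queue:
    target = message_queue.popleft()
    potential[target] += WEIGHT
    if potential[target] >= THRESHOLD:
        potential[target] = 0.0                  # reset after firing
        print(f"{target} fired; routing spikes to {fanout[target]}")
        message_queue.extend(fanout[target])     # spike messages over the 'NoC'

Because everything is expressed as discrete messages, a core with no incoming spikes does no work at all, which mirrors the asynchronous, event-driven character of the real hardware.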

Rather than copying the human brain directly, neuromorphic designs take inspiration from it in different ways. In the Loihi chip, for example, part of the chip functions as the neuron's nucleus to model biological neuron behavior, with a piece of code describing each neuron. Other neuromorphic systems instead realize biological synapses and dendrites using asynchronous digital complementary metal-oxide-semiconductor (CMOS) technology.

What is a memristor, and why is it important for neuromorphic computing?

The memristor, a core electronic component in many neuromorphic chips, has been shown to behave much like a brain synapse because it exhibits a similar kind of plasticity. It is used to build artificial structures that mimic the brain's ability to process and memorize data. Until its discovery, the three essential passive electrical components were the capacitor, the resistor, and the inductor; the memristor caused a stir because it was proposed as a fourth fundamental passive element alongside them.

A memristor is a passive two-terminal component that maintains a relationship between the time integrals of the current through it and the voltage across it. As a result, its resistance changes according to the device's memristance function, allowing its memory to be read out with small read charges. A memristor remembers its past state even when no power is applied, and memristors can serve as memory and processing units at the same time.
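In textbook form, this relationship can be written compactly (standard notation, with q the charge and φ the flux linkage, i.e., the time integrals of current and voltage):

\varphi = f(q), \qquad M(q) = \frac{d\varphi}{dq}, \qquad
v(t) = \frac{d\varphi}{dt} = M\big(q(t)\big)\, i(t)

Because the memristance M depends on the charge q that has already flowed through the device, its resistance encodes its history, which is exactly the synapse-like memory effect exploited in neuromorphic chips.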

Moving computation to the memory eliminates the von Neumann bottleneck

Leon Chua first described the memristor theoretically in 1971. In 2008, a research team at HP Labs built the first practical memristor from a thin film of titanium dioxide, and many other materials have since been explored by various companies for memristor development.

The von Neumann bottleneck is the delay incurred while data travels from a device's memory to its processing unit, which must wait for the data it needs before it can compute. Neuromorphic chips avoid this because their computations occur in the memory itself: much like the brain, the memristors that underpin many neuromorphic chips can serve both memory and computation functions, and they have been used to build some of the first inorganic artificial neurons.


Spiking neural networks (SNN) and artificial intelligence

The first generation of artificial intelligence emulated rule-based logic, used to arrive at logical conclusions in a specific, restricted context; such tools are ideal for monitoring or optimizing a process. The focus then shifted to AI's perception and sensing abilities: deep neural networks, which mimic some of the brain's parallelism and efficiency, have already been deployed using conventional hardware approaches such as SRAM- or flash-based designs, and innovative DNN solutions can help lower energy consumption for edge applications.

The next generation of artificial intelligence, which is already emerging, extends these capabilities toward facets of human intellect such as autonomous adaptation and genuine comprehension. This shift is critical to overcoming the limitations of today's machine-learning-based inference, whose judgments are often deterministic, literal interpretations of events that lack context.

Teaching machines to think in a new way

To automate common human activities, the next generation of AI must be able to respond to new circumstances and abstractions. Spiking neural networks (SNNs) also aim to replicate the temporal behavior of neurons and synapses, which allows for greater energy efficiency and flexibility.

The ability to adapt to a rapidly changing environment is one of the most challenging aspects of human intelligence, and one of neuromorphic computing's most intriguing problems is learning from unstructured stimuli while remaining as energy-efficient as the human brain. The computational building blocks within neuromorphic systems are logically comparable to human neurons, and the spiking neural network (SNN) is a new way of arranging these components to mimic natural neural networks.

A spiking neural network is a form of artificial intelligence that produces outputs from the responses of its individual neurons. Every neuron in the network can be triggered independently, and each one transmits pulsed signals to other neurons, immediately changing their electrical state. By encoding information in these signals and in their timing, SNNs mimic natural learning processes, dynamically remapping the synapses between artificial neurons in response to stimuli.
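A minimal sketch of this timing-based synaptic remapping is the classic spike-timing-dependent plasticity (STDP) rule shown below. It is a generic textbook rule written in Python, not the learning mechanism of any particular neuromorphic product, and all constants are illustrative assumptions.

import math

A_PLUS, A_MINUS = 0.05, 0.055   # potentiation / depression amplitudes
TAU_MS = 20.0                   # width of the STDP timing window

def stdp_weight_change(t_pre_ms, t_post_ms):
    """Weight update for a single pre/post spike pair.

    If the presynaptic spike arrives just before the postsynaptic one (so it
    plausibly helped cause it), the synapse is strengthened; if it arrives
    just after, the synapse is weakened.
    """
    dt = t_post_ms - t_pre_ms
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_MS)    # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU_MS)       # post before pre: depress

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 71.0)]:
    w += stdp_weight_change(t_pre, t_post)
    print(f"pre {t_pre} ms, post {t_post} ms -> weight {w:.3f}")

The weight ends up depending on the relative timing of spikes rather than on their mere presence, which is the sense in which SNNs encode information in time as well as in signal strength.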
