Analog deep learning paves the way for energy-efficient and faster computing

Analog deep learning, a new branch of artificial intelligence, promises quicker processing with less energy use. The amount of time, effort, and money needed to train ever more complex neural network models is soaring as researchers push the limits of machine learning.

Similar to how transistors are the essential components of digital processors, programmable resistors are the fundamental building blocks of analog deep learning. By repeating arrays of programmable resistors in intricate layers, researchers have built a network of analog artificial “neurons” and “synapses” that can perform calculations just like a digital neural network. This network can then be trained to carry out difficult AI tasks such as image recognition and natural language processing.

What is the goal of the research team?

An interdisciplinary MIT research team set out to increase the speed of a particular kind of artificial analog synapse they had previously created. By using a practical inorganic material in the fabrication process, they gave their devices a speed boost of a million times over earlier iterations, which also makes them roughly a million times faster than the synapses in the human brain.

This inorganic component also contributes to the resistor’s exceptional energy efficiency. The new material is compatible with silicon production methods, in contrast to materials employed in the earlier iteration of their device. This modification has made it possible to fabricate nanometer-scale devices and may open the door to their incorporation into commercial computing hardware for deep-learning applications.


“With that key insight and the very powerful nanofabrication techniques we have at MIT.nano, we have been able to put these pieces together and demonstrate that these devices are intrinsically very fast and operate with reasonable voltages. This work has really put these devices at a point where they now look really promising for future applications,” explained the senior author Jesús A. del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science (EECS).

“The working mechanism of the device is the electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field and push these ionic devices to the nanosecond operation regime,” stated senior author Bilge Yildiz, the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering and Materials Science and Engineering.


“The action potential in biological cells rises and falls with a timescale of milliseconds since the voltage difference of about 0.1 volt is constrained by the stability of water. Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons without permanently damaging it. And the stronger the field, the faster the ionic devices,” explained senior author Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and materials science and engineering professor.

Thanks to these programmable resistors, a neural network can be trained far faster and more cheaply than ever before. This could speed up the process by which scientists develop deep learning models, which could subsequently be used for fraud detection, self-driving cars, or image analysis in medicine.


“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car; this is a spacecraft,” adds lead author and MIT postdoc Murat Onen.

The research, published in Science, is titled “Nanosecond Protonic Programmable Resistors for Analog Deep Learning.” Coauthors include Frances M. Ross, the Ellen Swallow Richards Professor in the Department of Materials Science and Engineering; postdocs Nicolas Emond and Baoming Wang; and Difei Zhang, an EECS graduate student.

Why is analog deep learning faster?

For two key reasons, analog deep learning is faster and more energy-efficient than its digital counterpart. First, because computation is performed in memory rather than in a processor, massive amounts of data are not constantly shuttled between the two. Second, analog processors carry out operations in parallel: because all computation happens simultaneously, an analog processor doesn’t need more time to perform operations as the matrix grows.
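To make the in-memory idea concrete, below is a minimal NumPy sketch of an idealized resistor crossbar: the weight matrix lives in the array as conductances, input activations are applied as voltages, and each output current is a weighted sum produced in a single physical step via Ohm’s and Kirchhoff’s laws. The shapes and values are illustrative assumptions, not parameters of the MIT device.

```python
import numpy as np

# Idealized analog crossbar: the weights live in the array as conductances
# (siemens), so the matrix-vector product is computed where the data is stored.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 8))  # conductance matrix = network weights
v_in = rng.uniform(0.0, 0.2, size=8)      # input activations encoded as voltages

# Ohm's law per cell (I = G * V) and Kirchhoff's current law per output row:
# every multiply-accumulate happens simultaneously in the physical array.
i_out = G @ v_in                          # output currents = weighted sums
print(i_out)
```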

A protonic programmable resistor is the main component of MIT’s new analog processor technology. These nanoscale resistors are arranged like a chessboard in an array. One nanometer is one billionth of a meter.

Learning occurs in the human brain through the strengthening and weakening of synapses, the connections between neurons. Deep neural networks have long adopted this approach, with training algorithms programming the network weights. This new processor enables analog machine learning by varying the electrical conductance of protonic resistors.


The motion of the protons governs the conductance. To increase conductance, more protons are pushed into a resistor channel; to decrease it, protons are pulled out. This is done with an electrolyte (similar to a battery’s) that conducts protons but blocks electrons.
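As a rough mental model of this push-pull programming (a toy sketch, not the device physics reported in the paper), each voltage pulse can be treated as moving a fixed increment of protons into or out of the channel, nudging the conductance up or down between hardware limits. The step size and bounds below are made-up illustrative numbers.

```python
def apply_pulses(g, n_pulses, step=1e-6, g_min=1e-6, g_max=1e-4):
    """Toy model of programming a protonic resistor.

    Positive pulses push protons into the channel (conductance rises);
    negative pulses pull them out (conductance falls). All values are
    illustrative assumptions, not measured device parameters.
    """
    g = g + n_pulses * step
    return min(max(g, g_min), g_max)  # real devices saturate at physical limits

g = 5e-5                  # starting conductance (siemens)
g = apply_pulses(g, +10)  # potentiate: increase the stored weight
g = apply_pulses(g, -3)   # depress: decrease the stored weight
print(f"programmed conductance: {g:.2e} S")
```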

The researchers investigated various electrolyte materials to create a protonic resistor that is programmable, extremely quick, and highly energy efficient. Instead of the organic compounds used in other devices, Onen concentrated on inorganic phosphosilicate glass (PSG).


PSG is essentially silicon dioxide, the powdery desiccant found in the little bags that come in new furniture boxes to remove moisture. Under humidified conditions, it has been investigated as a proton conductor for fuel cells, and it is also the most common oxide used in silicon manufacturing. PSG is made by adding tiny amounts of phosphorus to the silicon dioxide, which gives it special characteristics for proton conduction.

Onen postulated that an improved PSG might have high proton conductivity at room temperature without the need for water, which would make it an ideal solid electrolyte for this application. He was right.

Shocking speed

Because PSG has many nanometer-sized pores whose surfaces act as pathways for proton diffusion, it allows for rapid proton transport. It can also endure extremely powerful, pulsed electric fields. This is crucial, Onen explains, because increasing the voltage across the device enables protons to move at dizzying speeds.

“The speed certainly was surprising. Normally, we would not apply such extreme fields across devices to prevent them from turning into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster than we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons. It is almost like teleporting. The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field,” said Li.


The protons do not damage the material, so the resistor can run for millions of cycles without degrading. This new electrolyte enables a programmable protonic resistor that is a million times faster than the group’s previous device and operates well at room temperature, which is crucial for incorporating it into computing hardware.

PSG is an electrically insulating material, so almost no electric current flows through it as the protons move. This, Onen adds, makes the device extremely energy efficient.

According to del Alamo, the researchers intend to re-engineer these programmable resistors for high-volume manufacture now that they have proven their efficacy. After that, they can scale up the resistor arrays for use in systems and analyze their properties.

They also intend to study the materials in order to remove bottlenecks that limit the voltage required to efficiently transfer protons into, through, and out of the electrolyte.

“Another exciting direction these ionic devices can enable is energy-efficient hardware to emulate the neural circuits and synaptic plasticity rules that are deduced in neuroscience, beyond analog deep neural networks. We have already started such a collaboration with neuroscience, supported by the MIT Quest for Intelligence,” said Yildiz.


“The collaboration that we have will be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time, it is very exciting,” explains del Alamo.

“Intercalation reactions such as those found in lithium-ion batteries have been extensively explored for memory devices. This work demonstrates that proton-based memory devices deliver impressive and surprising switching speed and endurance. It lays the foundation for a new class of memory devices for powering deep learning algorithms,” stated William Chueh, associate professor of materials science and engineering at Stanford University, who was not involved with this study.


“This work demonstrates a significant breakthrough in biologically inspired resistive-memory devices. These all-solid-state protonic devices are based on exquisite atomic-scale control of protons, similar to biological synapses but at orders of magnitude faster rates. I commend the interdisciplinary MIT team for this exciting development, which will enable future-generation computational devices,” added Elizabeth Dickey, the Teddy & Wilton Hawkins Distinguished Professor and head of the Department of Materials Science and Engineering at Carnegie Mellon University, who also was not involved with this study.

The 3 Reasons Why Companies Should Use HPC

Researchers have estimated that 25 years ago, around 100GB of data was generated every day. By 1997, we were generating 100GB every hour, and by 2002 the same amount of data was generated in a second. We’re on a trajectory – by 2018 – to generate 50TB of data every single second, the equivalent of 2,000 Blu-ray discs: a simply mind-boggling amount of information.
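A quick sanity check of the Blu-ray comparison, assuming the 25GB capacity of a single-layer disc (the disc type is an assumption; the figure isn’t specified above):

```python
# Back-of-the-envelope check of the figures above.
blu_ray_gb = 25          # assumed capacity of a single-layer Blu-ray disc
rate_tb_per_s = 50       # projected data generation rate for 2018, TB/s
print(rate_tb_per_s * 1000 / blu_ray_gb)  # -> 2000.0 discs per second
```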

While the amount of data continues to skyrocket, data velocity is keeping pace. Some 90% of the data in the world was created in the last two years alone, and while data growth and speed are occurring faster than ever, data is also becoming obsolete faster than ever.

All of this leads to substantial challenges in identifying relevant data and quickly analyzing complex relationships to determine actionable insights. That certainly isn’t easy, but the payoff can be substantial: CIOs gain better insight into the problems they face daily, and can ultimately better manage their businesses.

Predictive analytics has become a core element behind making this possible. And while machine learning algorithms have captured the spotlight recently, there’s an equally important element to running predictive analytics – particularly when both time-to-result and data insight are critical: high performance computing. “Data intensive computing,” or the convergence of High Performance Computing (HPC), big data and analytics, is crucial when businesses must store, model and analyze enormous, complex datasets very quickly in a highly scalable environment.

Firms across a number of industry verticals, including financial services, manufacturing, weather forecasting, life sciences & pharmaceuticals, cyber-reconnaissance, energy exploration and more, all use data intensive computing to enable research and discovery breakthroughs, and to answer questions that are not practical to answer in any other way.

There are a number of reasons why these organizations turn to data intensive computing:

Product Improvement

In manufacturing, the convergence of big data and HPC is having a particularly remarkable impact. Auto manufacturers, for example, use data intensive computing on both the consumer side and the Formula 1 side. On the consumer end, the auto industry now routinely captures data from customer feedback and physical tests, enabling manufacturers to improve product quality and driver experience. Every change to a vehicle’s design impacts its performance; moving a door bolt even a few centimeters can drastically change crash test results and driver safety. Slightly re-curving a vehicle’s hood can alter wind flow which impacts gas mileage, interior acoustics and more.

In Formula 1 racing, wind flow is complicated by the interplay of wind turbulence between vehicles. During a race, overtaking another vehicle, for example, is inherently difficult: drivers are trying to pass on a twisting track in close proximity to one another, where wind turbulence becomes highly unpredictable. To understand the aerodynamics between cars travelling at over 100 miles per hour on a winding track, engineering firms have turned to data intensive computing to produce images like the one below:

 

Computational Fluid Dynamics simulation of a passing maneuver with unsteady flow, moving mesh and rotating tires. (Image courtesy of Swift Engineering, Inc.)

 

Simulation and data analysis enables auto manufacturers to make changes far more quickly than when running physical tests alone, as they try to address new challenges by altering a car’s material components and design layout. On the consumer side, this leads to the development of more fuel-efficient and safer vehicles. On the Formula 1 side, modeling is key to producing safer and faster supercars.

Scalability Limits

The promise of data intensive computing is that it can bring together the newest data analytics technologies with traditional supercomputing, where scalability is king. This marriage of technologies empowers the development of platforms that can solve the most complex problems in the world.

Developed for supercomputing, globally addressable memory and low latency network technologies bring the ability to achieve new levels of scalability to analytics. Achieving application scalability can only be done if the networking and memory features of the systems are large, efficient and scalable.
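One standard way to see why efficient memory and networking are decisive at scale is Amdahl’s law. The article doesn’t invoke it by name, but the sketch below, with assumed serial fractions standing in for communication and memory overhead, shows how even a small non-scalable portion of an application caps the achievable speedup.

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Maximum speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Illustrative (assumed) serial fractions: 5% vs 0.5% non-scalable work.
for s in (0.05, 0.005):
    print(f"serial={s:.1%}: speedup on 1,024 processors = "
          f"{amdahl_speedup(s, 1024):.0f}x")
# Even 5% serial work limits 1,024 processors to roughly a 20x speedup,
# which is why scalable memory and low-latency interconnects matter so much.
```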

Notably, two apex cloud virtues are feature richness and flexibility. To maximize these virtues, the cloud sacrifices user architectural control, and it consequently fails to meet the challenge of applications that require scale and complexity. Companies across different verticals need to find the right balance between the flexibility of the cloud and the power of scalable systems. Striking the proper balance yields the best ROI and, ultimately, segment leadership in a highly competitive business landscape.

Data intensive computing… as a service

Just as the cloud is a delivery mechanism for generic computing, data intensive, scalable-system results can now be delivered without necessarily purchasing a supercomputer. Deloitte Advisory Cyber Risk Services – a breakthrough threat analytics service – takes a different approach to HPC and analytics. Deloitte uses the high performance technologies Spark, Hadoop, and the Cray Graph Engine, all powered by the Urika-GX analytics platform, to provide insight into how an organization’s IT infrastructure and data look to an outside aggressor. Most importantly, this service is available through a subscription-based model as well as through system acquisition.

Deloitte’s platform combines supercomputing technologies with a software framework for analytics. It is designed to help companies discover, understand and take action against cyber attackers, and the US Department of Defense currently uses it to glean actionable insights on potential threat vectors.

HPC: The Answer to Unresolved Questions

In the end, the choice to implement a data intensive computing solution comes down to the amount of data an organization has, and how quickly analysis is required. For those tackling the world’s most complicated problems, gaining unknown insights into data provides a distinct competitive advantage. Fast-moving datasets help spur innovation, inform strategy decisions, enhance customer relationships, inspire new products and more.

So if an organization is struggling to maintain its framework productivity, data intensive computing may well provide the fastest, most cost-effective solution.

 


Image: Sandia Labs, NCND 2.0

IBM Puts Forth the Most Sophisticated Computer Systems Ever Built – Meet the z13 Mainframe

Tuesday saw the release of IBM’s latest mainframe offering – the z13 Mainframe – touted by the giant as the most powerful and secure system ever built.

The system has the capacity to deliver scale and economics together with real-time encryption and analytics, catering to the need for speed and safety across the trillions of transactions of the mobile economy.

“Every time a consumer makes a purchase or hits refresh on a smartphone, it can create a cascade of events on the back end of the computing environment. The z13 is designed to handle billions of transactions for the mobile economy.  Only the IBM mainframe can put the power of the world’s most secure data centers in the palm of your hand,” said Tom Rosamilia, senior vice president, IBM Systems.

“Consumers expect fast, easy and secure mobile transactions. The implication for business is the creation of a secure, high performance infrastructure with sophisticated analytics,” he added.

The z13’s features include:

  • First system with the ability to process 2.5 billion transactions a day – the equivalent of 100 Cyber Mondays every day of the year.
  • First system to make practical real-time encryption of all mobile transactions at any scale.
  • First mainframe system with embedded analytics providing real-time insights on all transactions.

The system is the result of a $1 billion investment, five years of development, more than 500 new patents, and collaboration with more than 60 clients, according to IBM’s press release.

Read more here.


(Image credit: IBM)

5 of the Coolest Tech Developments in 2014

In 2014, we witnessed some of the coolest and craziest tech stories of our time. A future where our cars drive themselves, drones deliver our mail, and our surroundings adapt to our preferences automatically is no longer inconceivable – in fact, these scenarios are highly plausible.

We trawled through our 2014 news archives, and have come up with five of the coolest tech developments that occurred in 2014. Obviously, “cool” is an entirely subjective measurement; if you have any stories you feel have been criminally overlooked, leave a comment below.

1. The Supercomputer the Size of a Postage Stamp
Back in August, IBM unveiled a microchip which can pack a million computational units into an area the size of a postage stamp. The building blocks for the chip, dubbed TrueNorth, are not your conventional binary ones and zeros – they’re “neurosynaptic cores” of 256 neurons each. These cores were piloted by IBM back in 2011, and are based on our understanding of how the brain works.
Whilst it will undoubtedly take some time for such a technology to become readily and commercially useful, this complete reimagining of computational units is certainly exciting.
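For a sense of how those building blocks add up to a million computational units, the arithmetic below uses the 4,096-core configuration from IBM’s TrueNorth announcement, an assumption not spelled out in the text above:

```python
# TrueNorth back-of-the-envelope: neurosynaptic cores of 256 neurons each.
cores = 4096             # assumed core count from IBM's 2014 announcement
neurons_per_core = 256
print(cores * neurons_per_core)  # -> 1048576, i.e. roughly a million units
```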
Read more here >

2. The Rise of the Self-Driving Car
We are closer than ever to self-driving cars becoming a reality. Tesla revealed its cars will handle 90% of driving duties by 2016; Britain is already pondering self-driving automotive legislation; several drastically different designs for the car of the future have already surfaced. In May, Google announced they’ll be building their own self-driving cars rather than modifying vehicles built by other manufacturers. According to Dataconomy’s news report:
Currently, Google’s robo-cars have covered 700,000 miles of public road since it began working on the vehicles in 2009. The company plans to build 200 prototype cars in Detroit and according to Urmson, “we’ll see these vehicles on the road within the year.”
Read more here >

3. A Robot Designs a Sauce
2014 has undoubtedly been a huge year for IBM Watson. IBM announced at the start of the year that it would be creating a business unit around the artificially intelligent computer, which it expects to generate $10 billion in annual revenue within the next 10 years. Watson’s applications have been diverse – from fashion to the military services, including a freemium model of “Watson Analytics” that makes its cognitive computing technology widely available. But our favourite application was undoubtedly using Watson to create a delicious barbecue sauce:
After implementing an algorithm meant to test Watson’s capacity for creativity, it spat out a recipe mixing white wine, butternut squash, tamarind, and Thai chilies. Although it was Watson’s first foray into condiments, one of its first testers described it as a “golden, algorithmic elixir” that was nothing short of “delicious.” What’s more, Watson didn’t include any additives such as sugar or preservatives.
The resulting sauce, Watson’s “Bengali Butternut BBQ sauce”, sounds as delicious as it is groundbreaking.
Read more here >

4. Brain-to-Brain Communication Achieved for the First Time in Human History
Back in September, an international team of neuroscientists carried out “direct and non-invasive brain-to-brain transmission of information between humans” for the first time in history.
The two “brains” in question belonged to two people situated 5,000 miles apart, in India and France. The team harnessed ‘internet-connected electroencephalogram and robot-assisted, image-guided transcranial magnetic stimulation’ to transmit the words “hola” and “ciao” from one human to the other.
Obviously, it’s a long road from transmitting one word to facilitating whole non-invasive brain-to-brain conversations. Yet, this first tentative step is undeniably huge.
Read more here >

5. IBM and Fujifilm’s Record-Breaking Tape
Back in May, IBM and Fujifilm announced they had broken the world record for data stored on a single square inch of tape. The “double-coated” tape achieves an areal data density of 85.9 billion bits per square inch on linear magnetic particulate tape. At this density, a standard tape cartridge could store 154 terabytes of uncompressed data, 62 times more than existing cartridges. Why is this so cool? Mark Lantz, a research scientist and manager of exploratory tape at IBM Research, explains:
“Today, most storage technologies like HDD [hard-disk drive], flash, and DRAM [dynamic random access memory] are facing or will very soon face very difficult challenges to continue scaling. In contrast, our demonstration shows that tape can continue scaling at the current rate of doubling cartridge capacity every two years for at least the next 10 years.”
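A quick check of the capacity arithmetic above; the roughly 2.5TB baseline it implies matches the native capacity of then-current LTO-6 cartridges, though that is an inference rather than a figure from the announcement:

```python
# Implied baseline capacity behind the "62 times better" claim.
new_cartridge_tb = 154
improvement_factor = 62
print(new_cartridge_tb / improvement_factor)  # -> ~2.48 TB per cartridge
```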
Read more here >

Many of the technologies listed here are still very much in their infancy. But in years to come, we’ll be able to look back on 2014 as a landmark year in the development of these futuristic- and potentially groundbreaking- technologies.


(Featured Image credits: IBMblr, Saad Faruque, The Guardian, PLOS One & Scott Schiller)
