The history of Machine Learning – dates back to the 17th century

Contrary to popular belief, the history of machine learning, which enables machines to learn tasks for which they are not specifically programmed, and train themselves in unfamiliar environments, goes back to the 17th century.

Machine learning is a powerful tool for implementing artificial intelligence technologies. Because of its ability to learn and make decisions, machine learning is frequently referred to simply as AI, even though it is technically a subdivision of AI technology. Until the late 1970s, machine learning was just one component of AI's progress. It then diverged and evolved on its own, eventually becoming an important function in cloud computing and e-commerce. Today, ML is a vital enabler in many cutting-edge technology areas, and scientists are already working on quantum machine learning approaches.

Remembering the basics

Before embarking on our historical adventure that will span several centuries, let’s briefly go over what we know about Machine Learning (ML).

Today, machine learning is an essential component of business and research for many organizations. It employs algorithms and neural network models to help computers get better at performing tasks. Machine learning algorithms create a mathematical model from data – also known as training data – without being specifically programmed.

The brain cell interaction model that underpins modern machine learning is derived from neuroscience. In 1949, psychologist Donald Hebb published The Organization of Behavior, in which he proposed the idea of “endogenous” or “self-generated” learning. However, it took centuries of development, and unlikely inventions such as a data-storing weaving loom, before the field arrived at the kind of understanding Hebb articulated in 1949. The developments that followed were just as astonishing, and on some occasions jaw-dropping.

The history of Machine Learning

For ages, we, the people, have been attempting to make sense of data, process it to obtain insights, and automate this process as much as possible. And this is why the technology we now call “machine learning” emerged. Now buckle up, and let’s take on an intriguing journey down the history of machine learning to discover how it all began, how it evolved into what it is today, and what the future may hold for this technology.

· 1642 – The invention of the mechanical adder

Blaise Pascal created one of the first mechanical adding machines as an attempt to automate data processing. It employed a mechanism of cogs and wheels, similar to those in odometers and other counting devices.

Pascal was inspired to build a calculator to assist his father, the superintendent of taxes in Rouen, with the time-consuming arithmetic computations he had to do. He designed the device to add and subtract two numbers directly, and to multiply and divide through repeated addition and subtraction.

The history of machine learning: Here is a mechanical adder or a basic calculator

The calculator had articulated metal wheel dials with the digits 0 through 9 displayed around the circumference of each wheel. To input a digit, the user inserted a stylus into the corresponding space between the spokes and turned the dial until a metal stop at the bottom was reached, much like operating the rotary dial on an old telephone. The number appeared in the window at the top of the calculator. The user then simply dialed in the second number to be added, and the accumulator displayed the total. The carry mechanism, which advances the next dial by one whenever a dial passes nine, was another feature of this machine.
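For illustration only, and not a faithful model of the Pascaline's gearing, here is a minimal Python sketch of the odometer-style carry idea, with digits stored least-significant first:

```python
def add_on_dials(a_digits, b_digits):
    """Add two numbers represented as lists of decimal digits
    (least significant digit first), carrying one to the next
    dial whenever a dial rolls past nine - roughly the behaviour
    of the Pascaline's carry mechanism."""
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        total = a + b + carry
        result.append(total % 10)   # digit shown on this dial
        carry = total // 10         # carry passed to the next dial
    if carry:
        result.append(carry)
    return result

# 87 + 45 = 132, digits stored least-significant first
print(add_on_dials([7, 8], [5, 4]))  # -> [2, 3, 1]
```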

· 1801 – The invention of the data storage device

When looking at the history of machine learning, there are lots of surprises, and one of the first is a data storage device. Believe it or not, the first data storage device was a weaving loom. It was a loom created by the French inventor Joseph-Marie Jacquard that used punched cards to arrange threads. These cards amounted to a program that controlled the loom and allowed a procedure to be repeated with the same outcome every time.

The history of Machine Learning: A Jacquard loom showing information punchcards, National Museum of Scotland

The Jacquard machine used interchangeable punched cards to weave cloth in any pattern without human intervention. Punched cards were later used by Charles Babbage, the famous English inventor, as an input-output medium for his theoretical Analytical Engine, and by Herman Hollerith to feed data to his census machine. They were also used to input data into early digital computers before being superseded by electronic storage.

· 1847 – The introduction of Boolean Logic

In Boolean logic (also known as Boolean algebra), all values are either True or False, and these truth values are used to check the conditions that selection and iteration rely on. This is how Boolean operators work. George Boole created the AND, OR, and NOT operators with this logic, answering questions of true or false, yes or no, and binary 1s and 0s. These operators are still at the heart of web searches today.

Boolean algebra has since been brought into artificial intelligence to address some of the problems associated with machine learning. One of the discipline's main drawbacks is that many machine-learning algorithms are black boxes: we know little about how they arrive at their outputs. Models such as decision trees and random forests can describe the functioning of a system, but they don't always provide excellent results. Boolean algebra is used to overcome this limitation: it has been applied in machine learning to produce sets of understandable rules that can still achieve quite good performance.
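As a minimal sketch of the idea, the snippet below expresses a transparent classification rule purely with Boolean operators; the feature names and thresholds are invented for illustration:

```python
# Illustrative only: Boolean operators expressing a human-readable rule.
# The feature names and threshold values are hypothetical.
def approve_loan(income, has_defaulted, years_employed):
    stable_job = years_employed >= 2          # True / False
    good_history = not has_defaulted          # NOT
    sufficient_income = income > 30_000       # True / False
    # AND / OR combine the conditions into one transparent rule
    return (stable_job and good_history) or (sufficient_income and good_history)

print(approve_loan(income=45_000, has_defaulted=False, years_employed=1))  # True
print(approve_loan(income=20_000, has_defaulted=True,  years_employed=5))  # False
```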

After reading the history of machine learning, you might want to check out 75 Big Data terms everyone should know.

· 1890 – The Hollerith Machine took on statistical calculations

Herman Hollerith developed the first combined mechanical calculation and punch-card system to compute statistics from millions of individuals efficiently. It was an electromechanical machine built to assist in summarizing data stored on punched cards.

The history of machine learning: Statistical calculations were first made with electromechanical machines

Processing the 1880 census in the United States had taken roughly eight years. Because the Constitution requires a census every ten years, a faster method was needed, and the tabulating machine was created to help process the 1890 census data. Later versions were widely used in commercial accounting and inventory management, giving rise to a class of machines known as unit record equipment and to the data processing industry.

· 1943 – The first mathematical model of a biological neuron presented

The scientific article “A Logical Calculus of the Ideas Immanent in Nervous Activity,” published by Walter Pitts and Warren McCulloch, introduced the first mathematical model of neural networks. For many, that paper was the real starting point for the modern discipline of machine learning, which led the way for deep learning and quantum machine learning.

McCulloch and Pitts's 1943 paper built on Alan Turing's “On Computable Numbers” to provide a means of describing brain activity in general terms, demonstrating that basic components linked in a neural network could have enormous computational capability. The paper received little attention until its ideas were taken up by John von Neumann, the architect of modern computing, Norbert Wiener, and others.

· 1949 – Hebb successfully related behavior to neural networks and brain activity

In 1949, Canadian psychologist Donald O. Hebb, then a lecturer at McGill University, published The Organization of Behavior: A Neuropsychological Theory. This was the first time that a physiological learning rule for synaptic change had been made explicit in print and became known as the “Hebb synapse.” 

The history of machine learning: Neural networks are used in many AI systems today

In this book, Hebb also introduced what became known as cell assembly theory. His model came to be referred to as Hebbian theory, Hebb's rule, or Hebb's postulate, and models that follow this idea are said to exhibit “Hebbian learning.” As stated in the book: “When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

Hebb’s model paved the way for the development of computational machines that replicated natural neurological processes

Hebb referred to a combination of neurons that can be regarded as a single processing unit as a “cell assembly,” and the pattern of connections among these assemblies determined how the brain changes in response to stimuli.

Hebb's model of how the mind functions has had a significant influence on how psychologists view stimulus processing. It also paved the way for the development of computational machines that replicate natural neurological processes, such as machine learning. And although chemical transmission turned out to be the major form of synaptic communication in the nervous system, modern artificial neural networks still rest on the abstraction of signals passing along weighted connections that Hebbian theory was built around.
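A minimal numerical sketch of Hebbian learning, assuming toy firing patterns and an arbitrary learning rate: the weight between two units grows whenever they are active together.

```python
import numpy as np

rng = np.random.default_rng(0)
learning_rate = 0.1

# Activities of a presynaptic unit x and a postsynaptic unit y
# over a few time steps (arbitrary toy data).
x = rng.integers(0, 2, size=20)          # 0/1 firing pattern
y = x.copy()                             # y tends to fire with x
w = 0.0                                  # synaptic weight

for x_t, y_t in zip(x, y):
    # Hebb's rule: the weight increases when both units are active together
    w += learning_rate * x_t * y_t

print(f"final weight: {w:.2f}")          # grows with co-activation
```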

·  1950 – Turing found a way to measure the thinking capabilities of machines

The Turing Test is a test of artificial intelligence (AI) for determining whether or not a computer thinks like a human. The term “Turing Test” derives from Alan Turing, an English computer scientist, cryptanalyst, mathematician, and theoretical biologist who invented the test.

Turing argued that it is impossible to define intelligence in a machine directly; instead, if a computer can mimic human responses under specific conditions, it may be said to have artificial intelligence. The original Turing Test requires three terminals, each physically separated from the others. One terminal is controlled by a computer, while humans use the other two.

The history of Machine Learning: The IBM 700 series made scientific calculations and commercial operations easier, but the machines also provided the world with some entertainment (Image courtesy of IBM)

During the experiment, one of the humans serves as the questioner, with the second human and the computer acting as respondents. The questioner poses questions to both respondents within a specified format, context, and area of study. After a set duration or number of queries, the questioner is asked to decide which respondent was human and which was the machine. The test is repeated many times. If the questioner identifies the machine correctly in no more than half of the test runs, the computer is considered to have demonstrated artificial intelligence.

The test was named after Alan Turing, who pioneered machine learning during the 40s and 50s. In 1950, Turing published a “Computing Machinery and Intelligence” paper to outline the test.

· 1952 – The first computer learning program was developed at IBM

Arthur Samuel’s Checkers program, which was created for play on the IBM 701, was shown to the public for the first time on television on February 24, 1956. Robert Nealey, a self-described checkers master, played the game on an IBM 7094 computer in 1962. The computer won. The Samuel Checkers program lost other games to Nealey. However, it was still regarded as a milestone for artificial intelligence and provided the public with an example of the abilities of an electronic computer in the early 1960s.

The more the program played, the better it performed: it learned which moves made up winning strategies, a rudimentary "supervised learning mode," and incorporated them into its algorithm.

Samuel's program was a groundbreaking story for its time: a computer could beat a human at checkers. Electronic creations were challenging humanity's intellectual advantage, and to the largely technology-illiterate public of 1962, this was a significant event. It established the groundwork for machines to do other intelligent tasks better than humans, and people started to wonder: would computers surpass humans in intelligence? After all, computers had only been around for a few years, and the field of artificial intelligence was still in its infancy…

Moving on in the history of machine learning, you might also want to check out Machine learning engineering: The science of building reliable AI systems.

· 1958 – The Perceptron was designed

In July 1958, the United States Office of Naval Research unveiled a remarkable invention: the Perceptron. An IBM 704, a five-ton computer the size of a room, was fed a series of punch cards and, after 50 tries, learned to distinguish cards marked on the left from cards marked on the right.

It was a demonstration of the "perceptron," which, according to its inventor, Frank Rosenblatt, was "the first machine capable of generating an original thought."

“Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction,” Rosenblatt observed in 1958. “Yet we are about to witness the birth of such a machine – a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control.”

He was right about his vision, but it took almost half a century to deliver on it.
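The perceptron itself boils down to a simple update rule: nudge the weights only when a prediction is wrong. Below is a hedged NumPy sketch on toy data (not the original IBM 704 experiment), learning an AND-like separator:

```python
import numpy as np

# Toy linearly separable data: label is 1 only when both inputs are 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                       # a few passes over the data
    for xi, target in zip(X, y):
        prediction = int(w @ xi + b > 0)  # step activation
        error = target - prediction
        # Rosenblatt's update: move the boundary only on mistakes
        w += lr * error * xi
        b += lr * error

print(w, b)                               # learned separator
print([int(w @ xi + b > 0) for xi in X])  # -> [0, 0, 0, 1]
```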

· The 60s – Bell Labs’ attempt to teach machines how to read

The term “deep learning” was inspired by a report from the late 1960s describing how scientists at Bell Labs were attempting to teach computers to read English text. The emergence of artificial intelligence, or “AI,” as a field in the 1950s set the stage for what is now known as machine learning.

· 1967 – Machines gained the ability to recognize patterns 

The “nearest neighbor” algorithm was created, allowing computers to conduct rudimentary pattern detection. When the program was given a new object, it compared it to the existing data and assigned it the class of its nearest neighbor, meaning the most similar item in memory.

The history of machine learning: Pattern recognition is the basis of many AI developments achieved till now

The pattern recognition algorithm is credited to Evelyn Fix and Joseph Hodges, who introduced the k-nearest neighbor rule, a non-parametric method for pattern classification, in an unpublished 1951 report for the US Air Force School of Aviation Medicine.
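The nearest-neighbor idea fits in a few lines: label a new point with the label of the most similar stored example. A minimal 1-nearest-neighbor sketch on toy two-dimensional data:

```python
import numpy as np

# Stored examples (toy 2-D data) and their labels
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [3.8, 4.0]])
y_train = np.array(["A", "A", "B", "B"])

def nearest_neighbor(x_new):
    # Compare the new object with all existing data ...
    distances = np.linalg.norm(X_train - x_new, axis=1)
    # ... and give it the label of the most similar item in memory
    return y_train[np.argmin(distances)]

print(nearest_neighbor(np.array([1.1, 0.9])))  # -> "A"
print(nearest_neighbor(np.array([3.9, 4.1])))  # -> "B"
```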

· 1979 – One of the first autonomous vehicles was invented at Stanford

The Stanford Cart was a decades-long endeavor that evolved in various forms from 1960 to 1980. It began as a study of what it would be like to operate a lunar rover from Earth and was eventually revived as an autonomous vehicle. Initially a remote-controlled, television-equipped mobile robot, the student-built cart could eventually maneuver around obstacles in a room on its own.

The history of Machine Learning - dates back to the 17th century
The history of Machine Learning: The famous Stanford Cart (Image courtesy of Stanford University)

A computer program was created to drive the Cart through cluttered spaces, gathering all of its knowledge of the world from onboard TV images. The Cart used a form of stereopsis to locate objects in three dimensions and estimate its own motion. Based on a model built from this data, it planned an obstacle-avoiding route to the target destination, and the plan was updated as the Cart encountered new obstacles along the way.

We are talking about the history of machine learning here, but data science is also advancing in many areas today; we have prepared a couple of interesting articles on these topics before.

· 1981 – Explanation-based learning pointed toward supervised learning

Gerald Dejong pioneered explanation-based learning (EBL) in a journal article published in 1981. EBL laid the foundation for modern supervised learning, because training examples supplement prior knowledge of the world. The program analyzes the training data and discards unneeded information to create a general rule that can be applied to future instances. For example, if the software is instructed to concentrate on the queen in chess, it will discard pieces that have no immediate effect on her.

· The 90s – Emergence of various machine learning applications 

Scientists began to apply machine learning to data mining, adaptive software, web applications, text learning, and language learning in the 1990s. They created computer programs that could analyze massive amounts of data and draw conclusions, or learn, from the results. Machine learning came into its own as researchers were finally able to develop software that could learn and improve on its own, requiring little human input.

· The Millennium – The rise of adaptive programming

The new millennium saw an unprecedented boom in adaptive programming, and machine learning has gone hand in hand with adaptive solutions for a long time. These programs can identify patterns, learn from experience, and improve themselves based on feedback from the environment.

Deep learning is an example of adaptive programming: algorithms learn to "see" and distinguish objects in pictures and videos, which is the underlying technology behind Amazon Go stores, where customers are charged as they walk out without having to stand in line.

The history of Machine Learning: Amazon GO shops charge customers as they walk out without standing in line (Image courtesy of Amazon)

· Today – Machine learning is a valuable tool for all industries

Machine learning is one of today's cutting-edge technologies and has helped us improve not just industrial and professional processes but also day-to-day life. This branch of artificial intelligence uses statistical methods to create intelligent computer systems capable of learning from the data sources available to them.

The history of machine learning: Medical diagnosis is one area that ML will change soon

Machine learning is already being utilized in various areas and sectors. Medical diagnosis, image processing, prediction, classification, learning associations, and regression are just a few of its applications. Machine learning algorithms learn from previous experience or historical data and use that experience to produce outcomes.

Organizations use machine learning to gain insight into consumer trends and operational patterns, as well as the creation of new products. Many of today’s top businesses incorporate machine learning into their daily operations. For many businesses, machine learning has become a significant competitive differentiator. In fact, machine learning engineering is a rising area.

· Tomorrow – The future of Machine Learning: Chasing the quantum advantage

Our article could have ended here, since we have reached the present day in the history of machine learning, but it doesn't, because tomorrow holds even more…

For example, Quantum Machine Learning (QML) is a young theoretical field investigating the interaction between quantum computing and machine learning methods. Quantum computing has recently been shown to have advantages for machine learning in several experiments. The overall objective of Quantum Machine Learning is to make things move faster by combining what we know about quantum computing with conventional machine learning. The idea of Quantum Machine Learning is derived from classical Machine Learning theory and interpreted in that light.

The application of quantum computers in the real world has advanced rapidly during the last decade, with the potential benefit becoming more apparent. One important area of research is how quantum computers may affect machine learning. It’s recently been demonstrated experimentally that quantum computers can solve problems with complex correlations between inputs that are difficult for traditional systems.

According to Google’s research, quantum computers may be more beneficial in certain applications. Quantum models generated on quantum computing machines might be far more potent for particular tasks, allowing for quicker processing and generalization on fewer data. As a result, it’s crucial to figure out when such a quantum edge can be exploited…

Quantum machine learning: Search for an impact

Quantum Machine Learning (QML) is a young theoretical research discipline exploring the interplay of quantum computing and machine learning approaches. In the last couple of years, several experiments demonstrated the potential advantages of quantum computing for machine learning.

The overall goal of Quantum Machine Learning is to make things move more quickly by combining what we know about quantum computing with traditional machine learning. The concept of Quantum Machine Learning arises from classical Machine Learning theory and is interpreted from that perspective.

Search for an impact

In recent years, quantum computing has advanced quickly in both theory and practice, and the potential benefit in real applications has become more apparent. One key area of study is how quantum computers might impact machine learning. It has recently been proven experimentally that quantum computers can naturally solve specific problems with complex correlations between inputs that are extraordinarily difficult for conventional systems.

Quantum computers may be more useful in specific fields, according to Google’s research. Quantum models created on quantum computers may be far more potent for select applications, potentially offering faster processing and generalization on less data. As a result, it is critical to determine when such a quantum advantage can be obtained.

The notion of quantum advantage is often expressed in terms of computational benefits: for a well-defined problem with given inputs and outputs, can a quantum computer produce a more accurate result than a traditional machine in the same amount of time?

Quantum computers have impressive advantages for several algorithms, such as Shor's algorithm for factoring large numbers into primes or the quantum simulation of quantum systems. However, the availability of data can dramatically change how difficult a problem is to solve and how beneficial a quantum computer may be. Understanding when a quantum computer can help in machine learning therefore depends on both the task and the available data.

Let's refresh the basic ideas before we set off on a journey through labs and theory to explore how quantum computing may change the future of machine learning.

What is quantum computing?

Quantum computing is a method for performing calculations that would otherwise be impossible with conventional computers. A quantum computer works by utilizing qubits. Qubits are similar to your regular bits found in a PC, but they can be put into a superposition and share entanglement. 

Classical computers can execute deterministic classical operations or generate probabilistic processes using sampling techniques. Quantum computers, on the other hand, can utilize superposition and entanglement to perform quantum computations that are almost impossible to replicate at scale with conventional computers.

What is machine learning?

Machine learning is a form of artificial intelligence that focuses on using data and algorithms to imitate the way humans learn, gradually improving the accuracy of its predictions. Historical data is fed into machine learning algorithms to generate new output values.

Organizations use machine learning to gain insight into consumer trends and operational patterns, as well as the creation of new products. Many of today’s top businesses incorporate machine learning into their daily operations. For many businesses, machine learning has become a significant competitive differentiator. In fact, machine learning engineering is a rising area.

Google's quantum beyond-classical experiment showed that a quantum computer with 53 noisy qubits could complete in 200 seconds a calculation that would take 10,000 years using current algorithms on the world's most powerful classical computer. This marks the beginning of a new era in computing, known as the Noisy Intermediate-Scale Quantum (NISQ) era, and quantum computers with hundreds of noisy qubits are anticipated in the near future.

By using Quantum Machine Learning (QML), it is possible to solve ‘impossible problems’

Building blocks of Quantum Machine Learning

Quantum Machine Learning (QML) is based on two foundations: quantum data and hybrid quantum-classical models:

Quantum data

Any data source in a natural or artificial quantum system is regarded as quantum data. Quantum data exhibits superposition and entanglement, which lead to joint probability distributions that would require an exponential number of conventional computational resources to represent or maintain. The data generated by NISQ processors are noisy and usually entangled just before the measurement. Heuristic machine learning extracts useful classical information from noisy entangled data.
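The "exponential number of conventional computational resources" is easy to make concrete: the state of n qubits is described by 2^n complex amplitudes, so storing it classically blows up quickly. A small back-of-the-envelope calculation:

```python
# How much classical memory a full n-qubit state vector would need,
# assuming one complex128 (16 bytes) per amplitude.
for n_qubits in (20, 30, 40, 50):
    amplitudes = 2 ** n_qubits              # state vector length
    bytes_needed = amplitudes * 16          # complex128 = 16 bytes each
    print(f"{n_qubits} qubits -> {amplitudes:,} amplitudes "
          f"({bytes_needed / 2**30:,.2f} GiB to store classically)")
```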

Superposition States

Remember how we talked about quantum computing using “qubits” instead of “bits”? A bit is a binary digit, and it serves as the basis of conventional computing. The term “qubit” refers to a Quantum Binary Digit. Qubits can be in multiple states simultaneously, unlike bits, which can only have two states: 0 and 1.

Imagine you toss a coin. A coin has two sides: heads (1) and tails (0). While the coin is spinning in mid-air, we do not know which side it will show until we stop it or it hits the ground. In a loose sense, the spinning coin represents both 0 and 1 at once; it displays just one side only when you stop it to look.

The same idea applies to qubits and is called the superposition of two states. Superposition means the qubit has some probability of being found in each state; when we measure it, just as when the coin lands, the superposition dissipates into a single definite outcome.
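A hedged sketch in plain NumPy (no quantum hardware or quantum SDK involved): a qubit put into an equal superposition by a Hadamard gate yields 0 or 1 with equal probability when sampled, mirroring the coin analogy above.

```python
import numpy as np

rng = np.random.default_rng(42)

ket0 = np.array([1.0, 0.0])                   # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                              # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2            # Born rule: |amplitude|^2

# "Tossing the coin" many times: each measurement collapses to 0 or 1
samples = rng.choice([0, 1], size=10_000, p=probabilities)
print(probabilities)                          # [0.5 0.5]
print(np.bincount(samples) / len(samples))    # roughly [0.5 0.5]
```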

Entanglement

Quantum entanglement is a correlation between qubits that has no classical counterpart. If two qubits are prepared in a particular entangled state, for example one in which their spins are always opposite, then measuring one of them instantly determines the other: when one is found spin-up, the other is found spin-down, and there is no outcome in which both are in the same state. Such qubits are said to be entangled, and this phenomenon is known as quantum entanglement.
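The same NumPy-only style can illustrate entanglement: sampling from a two-qubit state whose amplitudes are concentrated on |01> and |10> always yields opposite outcomes for the two qubits.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-qubit state (|01> + |10>) / sqrt(2): the outcomes are always opposite.
# Basis ordering: |00>, |01>, |10>, |11>
state = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
probabilities = np.abs(state) ** 2            # [0, 0.5, 0.5, 0]

outcomes = rng.choice(4, size=10, p=probabilities)
for o in outcomes:
    qubit_a, qubit_b = o >> 1, o & 1          # decode the two measured bits
    print(qubit_a, qubit_b)                   # always 0,1 or 1,0 - never equal
```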

A quantum neural network (QNN) is a parameterized quantum computational model
ideally carried out on a quantum computer

Hybrid models

A quantum model can represent and generalize data with a quantum mechanical origin, but such models cannot be run on conventional computers alone, and near-term quantum processors are still small and noisy. The idea of combining NISQ processors with conventional co-processors to create more powerful machines has been around for a long time; to be effective, NISQ processors must work in tandem with classical co-processors.
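A toy sketch of the hybrid control loop, assuming a simulated one-qubit circuit in place of real NISQ hardware: the "quantum" subroutine returns an expectation value, and a classical optimizer adjusts the circuit parameter via the parameter-shift rule. Real hybrid workflows would run the subroutine on a quantum processor through a library such as PennyLane or TensorFlow Quantum.

```python
import numpy as np

def quantum_expectation(theta):
    """Simulated 'quantum' subroutine: prepare RY(theta)|0> and return
    the expectation value of Z (this part would run on a QPU in a real
    hybrid setup)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2           # <Z> = cos(theta)

target = -1.0                                      # want the qubit flipped to |1>
theta, lr = 0.1, 0.2

for step in range(200):                            # classical co-processor loop
    # parameter-shift rule: exact gradient of <Z> with respect to theta
    grad = (quantum_expectation(theta + np.pi / 2)
            - quantum_expectation(theta - np.pi / 2)) / 2
    loss_grad = 2 * (quantum_expectation(theta) - target) * grad
    theta -= lr * loss_grad                        # classical parameter update

print(theta, quantum_expectation(theta))           # theta -> pi, <Z> -> -1
```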

What did Google prove exactly?

In the “Power of Data in Quantum Machine Learning” paper published in Nature Communications, Google examined the issue of quantum advantage in machine learning to determine when it will be relevant. Google researchers explored how the complexity of a problem varies with the amount of data available and how this can sometimes allow classical learning models to compete with quantum algorithms. They then constructed a practical screening method for determining when a given data embedding can offer a quantum advantage in the kernel setting.

The researchers used the findings from this screening procedure, together with learning bounds, to produce an innovative approach that projects selected aspects of feature maps from a quantum computer into classical space. The result is a method for quantum machine learning that incorporates insights from classical machine learning and exhibits one of the best empirical separations between quantum and classical learning demonstrated so far.

Quantum Machine Learning: Prediction accuracy as a function of the number of qubits (n) for a problem engineered to maximize the potential for learning advantage in a quantum model. The data is shown for two different training data sizes (N).

We have mentioned how the idea of quantum advantage over a classical computer is often framed in terms of computational complexity classes. Problems in bounded-error quantum polynomial time (BQP), such as factoring large numbers and simulating quantum systems, are believed to be easier to solve on quantum computers than on classical ones. Problems that can be solved efficiently on classical computers belong to bounded-error probabilistic polynomial time (BPP).

Data changes everything 

Google's research showed that learning algorithms equipped with data from a quantum process, such as a natural process like fusion or a chemical reaction, form a new class of problems (called BPP/Samp) that can efficiently perform some tasks that traditional algorithms without data cannot; this class is a subclass of the problems efficiently solvable with polynomial-sized advice (P/poly). This demonstrates that, in some cases, understanding the potential quantum advantage for a machine learning task requires looking at the available data.

Google has demonstrated that the availability of data has a significant impact on the question of whether quantum computers can aid in machine learning. They developed a practical set of tools for examining these questions and used them to develop a new projected quantum kernel method with many advantages over existing approaches.

The researchers also worked toward the largest numerical demonstration to date, using 30 qubits, of the learning benefits of quantum embeddings. While a full computational advantage on a real-world application has yet to be seen, this research lays the groundwork for future progress.

What is Quantum Machine Learning (QML)?

Quantum kernels handle “unsolvable” ML problems

Several quantum machine learning algorithms have postulated exponential speedups over traditional machine learning techniques by assuming that conventional data can be provided to the algorithm in the form of quantum states.

Many attempted quantum machine learning algorithms can only be described as “heuristics,” meaning their efficacy has no formal proof. Researchers are tackling this problem by exploring approaches that align with the real-world requirements of data access, storage, and processing. Havlíček et al.'s proposal for quantum-enhanced feature spaces, also known as quantum kernel algorithms, shows particular promise.
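The plumbing of a kernel method is classical no matter where the kernel values come from: a Gram matrix of pairwise similarities is handed to a standard learner. In a quantum kernel algorithm the entries would be estimated from state overlaps on a quantum device; the hedged sketch below substitutes an ordinary RBF kernel as a placeholder and relies on scikit-learn's precomputed-kernel interface.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

def kernel_matrix(A, B):
    """Stand-in for a quantum kernel: in a quantum kernel method each
    entry k(a, b) would be estimated from state overlaps on a quantum
    processor. Here a plain RBF kernel serves as a placeholder."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

gram = kernel_matrix(X, X)                    # pairwise similarities
clf = SVC(kernel="precomputed").fit(gram, y)

X_test = np.array([[0.2, -0.1], [3.1, 2.9]])
print(clf.predict(kernel_matrix(X_test, X)))  # expected: [0 1]
```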

Despite the popularity of these quantum kernel approaches, the most crucial question remains the same:

Are quantum machine learning algorithms using kernel methods capable of delivering a provable advantage over classical algorithms?

Last year, IBM released a quantum kernel algorithm that, given only classical access to data, provides a provable exponential speedup over classical machine learning techniques for a specific category of classification problems. Classification is among the most fundamental tasks in machine learning: an algorithm is trained on a set of data, known as a training set, in which each data point carries one of two labels. After the training phase comes a testing phase, in which the algorithm must label a new data point it has never encountered before.

A simple example is when we provide a computer with photographs of dogs and cats, and it determines whether future photos are of dogs or cats based on this data. The goal of a practical quantum machine learning algorithm for classification should be to provide a correct label in a time that scales polynomially with the size of the data.

IBM proved that quantum kernel methods are superior to classical ones for a particular classification task. The algorithm uses a time-tested traditional machine learning technique to learn the kernel function, which identifies the features in the data that matter for classification. The key to its quantum advantage is that one can construct a family of datasets for which only quantum computers can recognize the inherent labeling patterns, while to conventional computers the data looks like random noise.

IBM proved the advantage by tackling a well-known problem that separates conventional and quantum computation: computing discrete logarithms in a cyclic group, in which all group members can be generated from a single element by repeating one mathematical operation. The discrete log problem arises in cryptography; it can be solved on a quantum computer using Shor's well-known algorithm but is thought to require superpolynomial time on a classical computer.

Quantum neural networks: A myth or a reality?

Artificial neural networks (ANNs) are machine learning models that have been used extensively in classification, regression, compression, generative modeling, and statistical inference. You can also read about the history of neural networks here. Their common feature is linear operations alternating with nonlinear transformations (such as sigmoid functions), which may be fixed in advance or learned during training.
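That structure of linear operations alternating with nonlinear transformations is easy to see in a forward pass; here is a minimal two-layer sketch with random, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random, untrained parameters for a 3 -> 4 -> 1 network
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    h = sigmoid(W1 @ x + b1)      # linear map, then nonlinearity
    return sigmoid(W2 @ h + b2)   # and again

print(forward(np.array([0.5, -1.0, 2.0])))
```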

Fundamental concerns about the efficacy of neural networks have persisted despite their success in many areas over the last decade. Are there any strict guarantees for their optimization and for the predictions they give? And although they can easily overfit the training data, how do they nonetheless achieve good generalization performance?

In the QML literature, ANNs have received significant attention. The primary research directions have been to speed up the training of traditional models and to construct networks in which all the parts, from single neurons to learning algorithms, are run on a quantum computer (a so-called “quantum neural network”).

The first research on quantum neural networks was published in the 1990s, and several papers have followed. The limited success in this area can be attributed to the fundamental tension between the linearity of quantum mechanics and the essential role of nonlinear components in ANNs, as well as to the fast pace of developments within classical ANNs.

The first research on quantum neural networks
was published in the 1990s

The study of accelerated training of neural networks using quantum resources has focused primarily on restricted Boltzmann machines (RBMs). RBMs are generative models (i.e., models that allow new observational data to be generated based on prior knowledge) that are well suited to being studied from a quantum perspective owing to their strong connections with the Ising model.

It has been shown that computing the log-likelihood of, and sampling from, an RBM is computationally hard. Unfortunately, many Markov chain Monte Carlo (MCMC) methods have drawbacks that make them poorly suited to this purpose, and even with MCMC, drawing samples can be costly for models with many neurons. Quantum resources may help to reduce the cost of training.
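For context, the classical training step that quantum sampling aims to make cheaper looks roughly like contrastive divergence (CD-1), which replaces a long MCMC chain with a single Gibbs step. A hedged NumPy sketch of one CD-1 weight update on a tiny binary RBM (biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 6, 3, 0.05
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # one training vector

# Positive phase: sample hidden units given the data
p_h0 = sigmoid(v0 @ W)
h0 = (rng.random(n_hidden) < p_h0).astype(float)

# Negative phase: one Gibbs step (the expensive part MCMC would repeat)
p_v1 = sigmoid(h0 @ W.T)
v1 = (rng.random(n_visible) < p_v1).astype(float)
p_h1 = sigmoid(v1 @ W)

# CD-1 weight update: data correlations minus model correlations
W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
print(W.round(3))
```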

There are two primary categories of quantum algorithms for training RBMs. The first is based on techniques from quantum linear algebra and quantum sampling. Wiebe and colleagues developed algorithms to train an RBM using amplitude amplification and quantum Gibbs sampling; one variant requires quadratically fewer examples to train the RBM, but its scaling in the number of edges is far worse than that of contrastive divergence.

Another advantage of this method is that it can also be used to train full Boltzmann machines. A full Boltzmann machine is one in which the neurons correspond to the nodes of a complete graph (i.e., they are fully connected). Although full Boltzmann machines have more parameters than RBMs, they are not used in practice because of the high training cost, and, to date, the true potential of large-scale full Boltzmann machines remains unknown.

The second approach for training RBMs is based on quantum annealing (QA), a model of quantum computation that encodes problems in the energy function of an Ising model. A quantum annealer is used to generate spin configurations, which are then used to train the RBM. These are physical implementations of RBMs and come with their own problems; one of them is estimating the effective thermodynamic temperature of the physical machine.

To overcome this problem, researchers introduced an algorithm to estimate the effective temperature and benchmarked the performance of a physical device on some simple problems. A second, more critical analysis of quantum training of RBMs used numerical models to show how the limitations that first-generation quantum machines are likely to have, in terms of noise, connectivity, and parameter tuning, severely restrict the applicability of quantum methods.

Quantum computers can be used and trained like neural networks, so we can achieve Quantum Machine Learning

The recently proposed quantum Boltzmann machine is a hybrid approach between classically trained ANNs and a fully quantum neural network. In this model, the standard RBM energy function gains a purely quantum (i.e., off-diagonal) term that, according to the authors, allows a richer class of problems to be modeled (i.e., problems that would otherwise be hard to model classically, such as quantum states).

Whether these models can provide any advantage for classical tasks is unknown. Researchers believe that quantum Boltzmann machines might help reconstruct a quantum state’s density matrix from measurements (this operation is known in the quantum information literature as “quantum state tomography”).

Conclusion: We are close to Quantum Machine Learning

Although there is no agreement on what exactly constitutes a quantum ANN, research over the last two decades has aimed to create networks governed entirely by quantum-mechanical laws and features. Most of these papers have failed to reproduce basic features of ANNs (for example, the attractor dynamics of Hopfield networks). Arguably, the single greatest challenge for a quantum ANN is that quantum mechanics is linear, whereas ANNs require nonlinearities.

Two recent proposals address the problem of modeling nonlinearities by employing measurements and by including several overhead qubits in each node's input and output. However, these models still lack important features of a fully quantum ANN: the model parameters remain classical, and it has not been shown that the models converge within a polynomial number of iterations. The papers' authors acknowledge that, in their current form, the most likely applications of these ideas are learning quantum objects rather than improving on classical data. Finally, no attempts have been made so far to model nonlinearities directly on amplitudes.
