
Quantum Optimization and Quantum Learning: A Survey




Abstract:

Quantum mechanics, which has received widespread attention, is evolving rapidly. The computing power and high parallelism of quantum mechanisms equip the quantum field with broad application scenarios and new vitality. Inspired by nature, intelligent algorithms have long been a research hotspot; they form a frontier interdisciplinary subject integrating biology, mathematics and other disciplines. Naturally, combining quantum mechanisms with intelligent algorithms injects new vitality into artificial intelligence systems. This paper first lists major breakthroughs in the development of the quantum domain, then summarizes existing quantum algorithms from two aspects: quantum optimization and quantum learning. After that, the related concepts, main contents and research progress of quantum optimization and quantum learning are introduced respectively. Finally, experiments conducted by simulating quantum computing show that quantum intelligent algorithms are strongly competitive with traditional intelligent algorithms and possess great potential.
Typical models of quantum intelligent algorithms. This paper summarizes the existing quantum algorithms from two aspects: quantum optimization and quantum learning.
Published in: IEEE Access ( Volume: 8)
Page(s): 23568 - 23593
Date of Publication: 28 January 2020
Electronic ISSN: 2169-3536

CCBY - IEEE is not the copyright holder of this material. Please follow the instructions via https://creativecommons.org/licenses/by/4.0/ to obtain full-text articles and stipulations in the API documentation.
SECTION I.

Introduction

In the early 1980s, Benioff and Feynman put forward the idea of quantum computing, pointing out that computers using quantum mechanics could be more effective than classical counterparts when dealing with specific problems [1]. Feynman proposed applying quantum mechanics to computational problems. According to [2], complex quantum systems can be simulated by standard quantum systems to solve problems that classical computers cannot. Although nobody knew how to realize such a quantum simulator at the time, Feynman’s idea directly shaped the development of quantum computing. The concept of the quantum Turing machine was then presented, and the existence of universal models based on quantum mechanics was theoretically proved in [3]. Simply speaking, any computation a classical computer can perform can also be realized with quantum models. The first quantum algorithm was given in [4]; on the proposed problem, quantum computation achieves exponential acceleration over classical computation. Subsequent quantum algorithms [5], [6] showed that quantum computing has advantages over classical computers in solving certain problems. However, these problems were artificially constructed, so their practical influence was limited.

In 1994, Shor proposed a quantum algorithm (the Shor algorithm) for factorizing large integers [7], which attracted many researchers. Factoring large integers, for which no polynomial-time classical algorithm is known, is the guarantee of RSA public-key cryptosystem security. The Shor algorithm shows that only polynomial time is needed with quantum computing, meaning RSA could be easily broken. After that, the quantum search algorithm proposed by Grover can effectively find a particular item in an unordered database. Taking advantage of quantum parallelism, the Grover algorithm examines all items in superposition in each iteration, which greatly reduces the complexity of the search problem. Since the Shor and Grover algorithms appeared, the unique computing style of quantum computing and its huge potential in information processing have attracted extensive attention.

Quantum algorithms have had a profound impact on algorithm design. How to introduce the powerful storage and computing advantages of quantum computing into existing algorithms is of widespread concern. Intelligent algorithms have always been a hotspot, so the idea of combining quantum theory with intelligent computing arises naturally. By utilizing the characteristics of quantum parallel computing, the shortcomings of intelligent algorithms can be efficiently remedied.

Existing quantum intelligent algorithms include: quantum evolutionary algorithm (QEA), quantum particle swarm optimization (QPSO), quantum annealing algorithm (QAA), quantum neural network (QNN), quantum Bayesian network (QBN), quantum wavelet transform (QWT), quantum clustering (QC), etc. It should be noted that these algorithms are designed to conform to, or are inspired by, the characteristics of quantum mechanisms, adopting the advantages of quantum computing to varying degrees. As quantum hardware development lags behind, these algorithms cannot yet be tested on a real quantum computer. Nevertheless, by simulating the process of quantum computing, these algorithms show competitiveness over traditional intelligent algorithms.

From the perspective of application, intelligent algorithms can be divided into two categories: optimization and learning. On this basis, QEA, QPSO, QAA etc. are unified as quantum optimization algorithms; QNN, QBN, QWT, QC etc. are unified as quantum learning algorithms. In this paper, according to the above two topics, typical quantum algorithms are introduced in detail.

The structure of this paper is as follows: the development of quantum computing and the main content of quantum intelligent algorithms are introduced in INTRODUCTION. Then, according to the two modules of quantum optimization and quantum learning, typical quantum intelligent algorithms are presented and summarized in QUANTUM OPTIMIZATION and QUANTUM LEARNING. After that, TYPICAL APPLICATIONS lists detailed applications and verifies the performance of quantum intelligent algorithms. Finally, a conclusion is presented in CONCLUSION.

SECTION II.

Quantum Optimization

Using the unique characteristics of quantum computing, intelligent optimization algorithms can be improved into quantum intelligent optimization methods. Quantum optimization not only maintains the powerful global search ability and excellent robustness of intelligent algorithms, but also absorbs the advantages of quantum computing. By using the parallel computing ability of quantum mechanisms, it can improve population diversity and accelerate the search, thus greatly improving the efficiency of intelligent optimization. In this part, some representative quantum optimization methods are explained. Firstly, QEA, QPSO and QICA are introduced in detail. Then, the main improvement hotspots in existing research are summarized. Finally, a summary is made.

A. Quantum Evolutionary Algorithm

QEA is a new evolutionary algorithm that adopts concepts from quantum computing. [8] combined quantum theory with the genetic algorithm for the first time, introducing the concept of the quantum genetic algorithm and opening up the field of integrating quantum computation and evolutionary computation. Han proposed a genetic quantum algorithm (GQA) [9], and then extended it into a quantum evolutionary algorithm [10]. Compared with the traditional evolutionary algorithm, QEA has the advantages of rich population diversity, good global search ability, fast convergence and easy integration with other algorithms. Over the past decades, QEA has attracted wide attention and yielded fruitful results.

1) Algorithm Description

Compared with traditional evolutionary algorithms, QEA uses quantum bits, which enable a quantum chromosome to represent a superposition of multiple states, bringing richer population diversity. Thanks to this special coding method, the population size of QEA can be very small, and even a single individual can maintain diversity. Moreover, QEA is more suitable for parallel structures, which leads to higher search efficiency.

QEA based on quantum rotation gates encodes each individual in the population with quantum bits. For example, $Q\left ({ t }\right )=\left \{{q_{1}^{t},q_{2}^{t},\ldots ,q_{n}^{t} }\right \}$ denotes a quantum population, where $t$ denotes the current iteration and $n$ denotes the population size. The $j$ th individual in the $t$ th generation can be expressed as:\begin{equation*} q_{j}^{t}=\begin{bmatrix} \alpha _{j1}^{t} & \alpha _{j2}^{t} & \ldots & \alpha _{jm}^{t}\\ \beta _{j1}^{t} & \beta _{j2}^{t} & \ldots & \beta _{jm}^{t} \end{bmatrix}\end{equation*} where $m$ stands for the coding length of an individual. For each bit, $\alpha $ denotes the probability amplitude of 0 and $\beta $ denotes the probability amplitude of 1, and every column satisfies $\alpha ^{2}+\beta ^{2}=1$ . This coding enables a quantum chromosome of $m$ qubits to represent a superposition of $2^{m}$ basis states, expanding the information capacity of chromosomes in evolutionary algorithms. When $\alpha $ or $\beta $ approaches 0 or 1, the quantum chromosome collapses to a deterministic solution.
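As an illustration, the qubit encoding and the measurement (collapse) step can be sketched as follows; the uniform initialization to $1/\sqrt{2}$ and the function names are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def init_population(n, m):
    """Initialize n quantum individuals of m qubits each; every amplitude pair
    (alpha, beta) is set to 1/sqrt(2), so each basis state is equally likely."""
    alpha = np.full((n, m), 1 / np.sqrt(2))
    beta = np.full((n, m), 1 / np.sqrt(2))
    return alpha, beta

def measure(alpha, beta, rng):
    """Collapse each qubit to a classical bit: bit j becomes 1 with
    probability beta_j^2 (and 0 with probability alpha_j^2)."""
    return (rng.random(alpha.shape) < beta ** 2).astype(int)

rng = np.random.default_rng(0)
alpha, beta = init_population(n=4, m=8)
bits = measure(alpha, beta, rng)
# every column satisfies the normalization alpha^2 + beta^2 = 1
assert np.allclose(alpha ** 2 + beta ** 2, 1.0)
```

Repeated measurement of the same chromosome yields different bit strings, which is exactly how a small quantum population maintains diversity.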

The procedure of QEA can be described in TABLE 1:

TABLE 1. The procedure of QEA.

QEA based on quantum gates adopts quantum coding and updates through quantum gates to produce a better generation with larger probability. The use of quantum chromosomes provides QEA with strong robustness and parallel processing ability. Due to the low degree of communication between quantum chromosomes and the high degree of parallelism of the algorithm, QEA has great potential for dealing with large-scale data.
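The measure-evaluate-rotate loop can be sketched as a minimal simulation. The OneMax objective, the fixed rotation-angle magnitude and the sign rule (rotate each qubit toward the corresponding bit of the best solution found so far) are simplifying assumptions made here for illustration; practical QEAs typically select the angle from a lookup table:

```python
import numpy as np

def rotate(alpha, beta, theta):
    """Apply the quantum rotation gate to each amplitude pair:
    [a'; b'] = [[cos t, -sin t], [sin t, cos t]] [a; b]."""
    a = np.cos(theta) * alpha - np.sin(theta) * beta
    b = np.sin(theta) * alpha + np.cos(theta) * beta
    return a, b

def qea_onemax(m=16, n=10, generations=50, delta=0.05 * np.pi, seed=1):
    """Minimal QEA on the OneMax problem (maximize the number of 1 bits)."""
    rng = np.random.default_rng(seed)
    alpha = np.full((n, m), 1 / np.sqrt(2))
    beta = np.full((n, m), 1 / np.sqrt(2))
    best_bits, best_fit = None, -1
    for _ in range(generations):
        # collapse: measure each qubit, then evaluate the bit strings
        bits = (rng.random((n, m)) < beta ** 2).astype(int)
        fitness = bits.sum(axis=1)
        g = int(fitness.argmax())
        if fitness[g] > best_fit:
            best_fit, best_bits = int(fitness[g]), bits[g].copy()
        # rotate each qubit toward the corresponding bit of the best solution
        theta = delta * np.where(best_bits == 1, 1.0, -1.0)
        alpha, beta = rotate(alpha, beta, theta)
    return best_bits, best_fit
```

Because the amplitudes drift toward the historical best, the population converges while early measurements still explore; an elitist archive (here, `best_bits`) keeps the best solution from being lost.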

2) Research Progress

a: Encoding Scheme
i) Bloch Spherical Coordinate Coding:

A qubit Bloch spherical coordinate coding is proposed in [11], which matches a point on the Bloch sphere to a qubit. By taking the three Bloch spherical coordinates as gene bits, a qubit can be easily represented using Bloch spherical coding. This encoding method can handle continuous optimization problems, avoid the randomness brought by measurements and increase the chance of obtaining the global optimum.

ii) Real Number Encoding:

A real-coded QEA is defined in [12], which achieves faster convergence and higher precision for high-dimensional function optimization. Referring to the niche mechanism, initial individuals are divided into real-coded chromosome subpopulations in [13], each subgroup using the local search ability of immune operators to find the optimal solution. In addition, the gradient of the objective function can be used to speed up the search process. The selection of parameters often carries weight in meta-heuristics; utilizing an adaptive real-coded QEA, [14] is able to avoid tuning of evolutionary parameters and solve the power economic load dispatch problem effectively.

iii) Hybrid Encoding:

An improved form of diploid coding is adopted and a fuzzy neural network based on QEA is proposed in [15]. In addition, using triploid real encoding or multi-nary compound states of probability angle can also make QEA suitable for multi-bit encoding.

b: Improvement on Operators

Substantial work focuses on the basic operators, specifically the introduction of new operators and the improvement of update strategies.

i) Introducing New Operators:

Continuing the idea of the traditional genetic algorithm, different crossover and mutation operators can also be applied to the quantum genetic algorithm. New operators can help to jump out of local optima and effectively avoid premature convergence. Utilizing the independent search ability of quantum individuals, a new crossover operator and a single-qubit mutation operator are designed in [16] to enhance the search performance of the algorithm. Drawing on quantum coherence characteristics, a quantum crossover mechanism is proposed in [17], which operates on multiple individuals at the same time to construct new individuals and can even work when most individuals are identical. [18] refers to the concept of normative knowledge in cultural algorithms and introduces cultural operators into QEA; the use of normative knowledge can guide individuals to search effective regions, improving accuracy and convergence speed. Based on hybrid quantum evolution, a phased optimal algorithm is put forward in [19], where a greedy repair operator is designed to enhance the convergence rate and an adaptive grid operator is introduced to keep the dispersion of the Pareto solutions.

ii) Improving the Update Strategy:

Quantum rotation gates have a great influence on the whole evolutionary process, and appropriate rotation gates can effectively enhance the ability to find the global optimum. [20] adopts an adaptive adjustment of the search grid and updates quantum gates through quantum bit phase comparison. Quantum probability amplitudes have chaotic characteristics, and quantum rotation gates change the probability amplitudes from chaos to determination so as to complete evolution. Therefore, using a pre-generated chaotic sequence to update quantum gates can significantly decrease the computational complexity and enable the algorithm to be applied in real-time environments; from this point of view, updating quantum rotation gates by chaotic operation is proposed in [21]. In addition, [22] solves for the rotation angle and direction through multi-objective optimization, then uses these parameters in the QEA optimization process. Quantum rotation gates are improved using a two-point crossover operator in [23] to ensure current solutions converge to chromosomes with higher fitness. A new rotation angle is defined in [24] and adaptively adjusted according to the evolution generation; also, by adopting the $\mathrm {H}\varepsilon $ gate, a corrective manipulation of the probability amplitude is conducted to prevent the amplitudes from saturating at extreme values. To avoid any tuning of parameters, [25] uses a two-qubit representation: while the rotation angle of the qubits in the first set is varied, the probability amplitude of the second qubit is effectively employed during the computation. In addition, in evolutionary algorithms, the elitist strategy can ensure optimal individuals are not disturbed, which guarantees convergence. This strategy can also be applied to QEA.

Like the operation shown in [26], an elitist QEA can prevent the optimal solution from being destroyed, rendering evolution faster. There are also methods that adaptively change the rotation angle according to fuzzy reasoning or the simulated annealing algorithm.

c: Improvement on Population
i) Improvement on Population Structure:

The population structure of QEA is classified into ring, grid, binary tree, cluster, square, ladder, etc. in [27], and grid is recognized as the best population structure. The grid population structure is subdivided into square, rectangle and strip in [28], and QEA is designed to dynamically adjust the population structure according to individual fitness and population entropy. Each node in the grid structure proposed in [29] represents an individual, which helps maintain the diversity of the population. In addition, [15] proposes a new coarse-grained parallel QEA with a hierarchical ring structure. A ring is used as the population structure in [30] to ensure each individual is adjacent to only two neighbors, and the population size is also changed during evolution to maintain diversity. Using small-world theory and complex network theory, [31] models the relationships among individuals in QEA, dividing the population into local small groups, which can effectively avoid premature convergence. [32] proposes a synchronous cellular QEA based on L5 neighborhoods, in which individuals are located on a lattice and the neighbors of each individual all undergo QEA. Different individuals communicate through overlapped neighborhoods to evolve the population.

ii) Improvement on Population Size:

The parallel characteristic of QEA can be effectively utilized by dividing all individuals into independent subgroups according to a certain topological structure. On the basis of coarse-grained parallel QEA, [33] proposes a pairwise exchange algorithm to deal with individual migration, which makes full use of the advantages of local search and global search. [29] changes the size of population dynamically. When the population size increases, the randomly added population will bring in diversity. When the population size decreases, removing individuals of poorer performance will narrow down the search scope and accelerate the convergence. [26] adopts the niche technique to increase the diversity of the population, through which the similarity among individuals is largely reduced.

d: Combination With Other Algorithms

Common ideas for integrating other algorithms include combining global optimization with local optimization to balance exploration and exploitation, and combining pre-processing and post-processing methods to solve specific problems. The improved algorithms can fully demonstrate the advantages of the original algorithms while effectively avoiding their disadvantages.

[34] draws on the idea of differential evolution: QEA is good at global search while differential evolution is skilled at local search, so the combination renders the search more efficient. Based on the coarse-grained model, a hybrid algorithm combining parallel QEA and a local search algorithm is proposed in [35]. [36] incorporates the shuffled frog leaping algorithm to regulate the phase of the quantum bits, through which a balance of local and global search is achieved and the running speed is improved. In order to optimize combinational logic circuits, [37] combines QEA with local particle swarm optimization, and a circuit design scheme with the minimum number of gates is obtained using multi-objective optimization; this hybrid optimization method is obviously superior to other evolutionary algorithms. A multi-universe parallel quantum genetic algorithm is proposed in [17], which combines QEA with independent component analysis to separate blind source signals. [38] utilizes QEA to evolve different parameters in fuzzy C-means for better clustering.

In addition, some improvements focus on absorbing the characteristics of other algorithms into QEA, which can effectively avoid the defects and make full use of the advantages of each algorithm. Particle swarm optimization is embedded in QEA in [39], in which quantum bits are used to represent particles and their positions to accelerate the aggregation speed. [40] proposes two hybrid approaches based on the hybridization of Firefly algorithm (FA) to solve the quality of service multicast routing problem. Two approaches, FAQEA1 and FAQEA2, are introduced, which embed the evolutionary equation of FA in the operator of QEA and replace the operator of QEA using the evolutionary equation of FA respectively.

e: Theoretical Research

Theoretical research mostly analyzes the differences and similarities between QEA and other classical evolutionary algorithms from the perspective of QEA's operating mechanism. According to the theory of quantum entanglement, genetic algorithms can be considered a kind of quantum parallel computing in nature [41]. Typically, [42] shows in depth that the process from non-equilibrium to equilibrium in quantum systems is highly comparable to the convergence process of a population in a genetic algorithm (GA). Therefore, the combination of quantum systems and GA can be argued to be effective. Details are shown in TABLE 2:

TABLE 2. Similarities.

In addition, [43], [44] show that the essence of QEA is an estimation of distribution algorithm (EDA). As the comparison in FIGURE 1 shows, an EDA establishes a probability model from the distribution of individuals and samples this model to generate the new population, while QEA generates the new population through collapse of the probability amplitudes. Collapse in QEA corresponds to sampling in EDA, which allows the population to evolve better. Compared with EDA, QEA has the advantages of rich diversity and strong adaptability.

FIGURE 1. Comparison between QEA and EDA.

3) Summary

This part first introduces the basic process of QEA, and then sums up existing research from five aspects. There are abundant studies on the improvement of operators and on hybridization ideas; these studies effectively utilize the characteristics of QEA and solve specific problems from both local and global aspects by incorporating domain knowledge. In addition, current improvements on coding schemes mainly draw on the experience of the traditional genetic algorithm; the uniqueness of quantum coding has not been fully exploited, which has great potential and should be given sufficient attention. Most theoretical studies start from an analysis of QEA's operating mechanism and prove the rationality of QEA from the perspective of evolution. It should be pointed out that the convergence of QEA is closely related to its parameters, so a more thorough proof should be made in this respect. Future research should also focus on practical applications, so that the efficient performance of QEA can be fully utilized.

B. Quantum Particle Swarm Optimization

Swarm intelligence is a new optimization method. Since its emergence in the 1980s, it has attracted widespread attention and has become a research hotspot in the field of optimization and a frontier interdisciplinary area. Swarm intelligence is a heuristic search paradigm that optimizes a given function based on swarm behavior, and its optimization process embodies randomness, parallelism and distribution. A particle swarm optimization algorithm with quantum behavior is proposed in [45]: the motion state of a particle is described using the quantum uncertainty principle, combining PSO with quantum mechanics. [46] extends quantum particle swarm optimization to the field of multi-objective optimization.

1) Algorithm Description

In particle swarm optimization (PSO), model parameters are hard to determine, changes of particle positions lack randomness, and the global optimum is not easily found. QPSO discards the velocity property, so the update of particle positions has no relation to previous movement, which increases the randomness of the particle position. In classical mechanics, the trajectory of a particle is determined by its velocity and current state. In quantum mechanics, however, the motion of a particle is uncertain and its state is described by a wave function $ \Psi (\vec {x},t)$ . In three-dimensional space, the wave function satisfies:\begin{equation*} \left |{ \Psi }\right |^{2}dxdydz=Q\,dxdydz\tag{1}\end{equation*} where $Q\,dxdydz$ represents the probability that the particle appears in the volume element at position $(x,y,z)$ at time $t$ . $\left |{ \Psi }\right |^{2}$ is a probability density function and satisfies (2):\begin{equation*} \int _{-\infty }^\infty {\left |{ \Psi }\right |^{2}dxdydz} =\int _{-\infty }^\infty {Q\,dxdydz} =1\tag{2}\end{equation*} The time evolution of $\Psi (\vec {x},t)$ is given by the Schrödinger equation:\begin{equation*} ih\frac {\partial }{\partial t}\Psi \left ({ \vec {x},t }\right )=\hat {H}\Psi (\vec {x},t)\tag{3}\end{equation*} where $\hat {H}$ is the Hamiltonian operator and $h$ here denotes the (reduced) Planck constant. For a single particle with mass $m$ in a potential field $ V(\vec {x})$ :\begin{equation*} \hat {H}=-\frac {h^{2}}{2m}\nabla ^{2}+V(\vec {x})\tag{4}\end{equation*}

Assume the particle is in a $\delta $ potential well centered at $p$ . Taking a particle in one-dimensional space as an example, the potential energy function can be expressed as:\begin{equation*} V\left ({ x }\right )=-\gamma \delta \left ({ x-p }\right )=-\gamma \delta (y)\tag{5}\end{equation*} With $y=x-p$ , $\hat {H}$ can be expressed as:\begin{equation*} \hat {H}=-\frac {h^{2}}{2m}\frac {d^{2}}{dy^{2}}-\gamma \delta (y)\tag{6}\end{equation*} The Schrödinger equation for this model then becomes:\begin{equation*} \frac {d^{2}\Psi }{dy^{2}}+\frac {2m}{h^{2}}\left [{ E+\gamma \delta \left ({ y }\right ) }\right ]\Psi =0\tag{7}\end{equation*} Integrating (7) over $[-\varepsilon ,\varepsilon]$ and letting $\varepsilon \to {0}^{+}$ , (8) can be obtained:\begin{equation*} \Psi ^\prime \left ({ {0}^{+} }\right )-\Psi ^\prime \left ({ {0}^{-} }\right )=-\frac {2m\gamma }{h^{2}}\Psi (0)\tag{8}\end{equation*} For $y\ne 0$ , (7) reduces to (9):\begin{align*}&\frac {d^{2}\Psi }{dy^{2}}-\beta ^{2}\Psi =0 \\&\beta =\sqrt {-\frac {2mE}{h^{2}}}\quad (E< 0)\tag{9}\end{align*} Under the boundary condition:\begin{equation*} \left |{ y }\right |\to \infty ,\quad \Psi \to 0\tag{10}\end{equation*} the solution of (9) decays as:\begin{equation*} \Psi (y)\approx e^{-\beta \left |{ y }\right |}\tag{11}\end{equation*}
Consider a solution of the form:\begin{equation*} \Psi \left ({ y }\right )= \begin{cases} Ce^{-\beta y}&y>0\\ Ce^{\beta y}&y< 0\\ \end{cases}\tag{12}\end{equation*} where $C$ is a constant. According to (8):\begin{equation*} -2C\beta =-\frac {2m\gamma }{h^{2}}C\tag{13}\end{equation*} So $\beta $ can be solved:\begin{equation*} \beta =\frac {m\gamma }{h^{2}}\tag{14}\end{equation*} and \begin{equation*} E=E_{0}=-\frac {h^{2}\beta ^{2}}{2m}=-\frac {m\gamma ^{2}}{2h^{2}}\tag{15}\end{equation*} Since the wave function must satisfy the normalization condition, the following holds:\begin{equation*} \int _{-\infty }^{+\infty } {\left |{ \Psi (y) }\right |^{2}dy=\frac {\left |{ C }\right |^{2}}{\beta }} =1\tag{16}\end{equation*} So $\left |{ C }\right |=\sqrt \beta $ and $L=\frac {1}{\beta }=\frac {h^{2}}{m\gamma }$ , where $L$ is called the characteristic length of the potential well. The normalized wave function can then be expressed as:\begin{equation*} \Psi \left ({ y }\right )=\frac {1}{\sqrt {L} }e^{-\frac {\left |{ y }\right |}{L}}\tag{17}\end{equation*} The corresponding probability density function $Q$ is:\begin{equation*} Q\left ({ y }\right )=\left |{ \Psi (y) }\right |^{2}=\frac {1}{L}e^{-\frac {2\left |{ y }\right |}{L}}\tag{18}\end{equation*}

The probability density of a particle's position in quantum space is given by (18). The Monte Carlo method is used to simulate the collapse of the probability amplitude, so that positions of particles in classical mechanical space are obtained. Let $s$ be a random number uniformly distributed on $[0,1/L]$ :\begin{equation*} s=\frac {1}{L}rand\left ({ 0,1 }\right )=\frac {1}{L}u,\quad u=rand(0,1)\tag{19}\end{equation*} Substituting $s$ for $Q$ in (18):\begin{equation*} \frac {1}{L}u=\frac {1}{L}e^{-\frac {2\left |{ y }\right |}{L}}\tag{20}\end{equation*} So:\begin{equation*} y=\pm \frac {L}{2}\ln \left({\frac {1}{u}}\right)\tag{21}\end{equation*} Because $y=x-p$ :\begin{equation*} x=p\pm \frac {L}{2}\ln \left({\frac {1}{u}}\right)\tag{22}\end{equation*}

Equation (22) simulates the measurement of particle positions in quantum space and is the core iteration of QPSO. By constantly updating the attractor $p$ and the characteristic length $L$ , an efficient search of particles over the whole decision space is realized according to the motion laws of quantum mechanics.
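A minimal simulation of this core iteration might look as follows, assuming the common construction in which the attractor lies between the personal and global bests and $L$ is proportional to the distance from the mean best position; the function names, bounds and contraction-expansion schedule are illustrative choices:

```python
import numpy as np

def qpso(f, dim, n=20, iters=200, beta0=1.0, beta1=0.5, seed=0):
    """Sketch of QPSO minimizing f: each particle collapses around an
    attractor p via x = p +/- (L/2) ln(1/u), per equation (22)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    pbest = x.copy()
    pfit = np.array([f(xi) for xi in x])
    g = int(pfit.argmin())
    for t in range(iters):
        alpha = beta0 - (beta0 - beta1) * t / iters   # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)                    # mean of personal bests
        phi = rng.random((n, dim))
        p = phi * pbest + (1 - phi) * pbest[g]        # attractor between pbest and gbest
        u = 1.0 - rng.random((n, dim))                # uniform on (0, 1], avoids log(1/0)
        L = 2 * alpha * np.abs(mbest - x)             # characteristic length
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        x = p + sign * (L / 2) * np.log(1 / u)        # core update, equation (22)
        fit = np.array([f(xi) for xi in x])
        improved = fit < pfit
        pbest[improved], pfit[improved] = x[improved], fit[improved]
        g = int(pfit.argmin())
    return pbest[g], pfit[g]

# usage: minimize the 5-dimensional sphere function
best, val = qpso(lambda v: float(np.sum(v ** 2)), dim=5)
```

Note that no velocity appears anywhere: the random sign and the logarithmic factor alone drive both exploration (large $L$ early on) and exploitation (shrinking $L$ as particles cluster around the bests).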

$p$ can be constructed in multiple ways and the local optimum of particles is often used as an attractor. Considering the distance between the current position and the local optimum, $ L$ is constructed in [47]. For multi-objective optimization problems, [46] constructs $L$ using the global and local optima of subproblems.

2) Research Progress

a: Encoding Scheme

In order to improve the inefficiency when dealing with discrete problems, [48] introduces operators on discrete binary variables; in this algorithm, the trajectory is a probability change of a coordinate value. In addition, a fast and simple discrete PSO with a quantum-style representation is created in [49]. For combinatorial optimization, a discrete PSO method based on the concept of QEA is proposed in [50]: quantum angles are defined and confined to $[-\pi /2, 0]$ , a new adaptive speed update method is adopted, and particles can be transferred from decimal code to binary code under the idea of QEA, which helps to solve discrete problems. Binary QPSO is designed in [51] drawing on the advantages of GA, and is more effective than the binary version of PSO. The algorithm redefines the position and the distance between two positions, and adjusts the iteration equation to adapt to the binary search space. In addition, local attractors are obtained using a crossover operation to endow QPSO with characteristics of GA. A new qubit representation, called the quantum angle, is defined in [52], and all subgroups cooperate to prevent stagnation of evolution. Unlike classical QPSO, particles in [53] are encoded based on Bloch spheres; each particle includes three Bloch coordinates that update synchronously, which expands the search scope and accelerates the optimization.

b: Improvement on Operations

A considerable amount of work focuses on the modification of operators. For constrained problems, [54] studies Gauss, Chaos, Cauchy and Levy operators using a penalty mechanism. A mutation mechanism is introduced in [55] to mutate the global optimal particle with a Cauchy distribution. A new Sobol mutation operator is proposed in [56], which uses the quasi-random Sobol sequence to find new solutions. Compared with random probability distributions, a quasi-random sequence covers the search area more evenly, thus increasing the chance of finding a better solution. In [57], the position and velocity information of each particle is used to adaptively adjust the inertia factor, and a non-linear dynamic adjustment strategy for the acceleration factors together with a mutation operation is introduced to reduce the probability of being trapped in local optima. [58] presents a novel reverse operation to improve QPSO, moving each particle, with 50% probability, away from its original position according to probability theory. In [59], the best particle is randomly selected to participate in the current search domain; the mean best position is changed by a mutation strategy and an enhancement factor is incorporated to strengthen the global search capability. [60] proposes a QPSO with extrapolation, in which particles take an extrapolation operator when degradation occurs.

In addition, based on public history and mutant particles, a mutation operation is conducted on the best particles in [61]. The global optimum and the average optimum are mutated respectively using the Cauchy distribution in [62]; the scale parameters of the mutation operator are adjusted by an annealing strategy, which improves the adaptive ability of the algorithm. [63] replaces the global optimum with a randomly selected individual optimum. A recombination operator is used in [64] to interpolate and generate new solutions in the search space. In [65], random selection of the optimal individual is introduced, and a linear weight parameter represents the importance of a particle according to its fitness.

c: Improvement on Population

The diversity of the population has always been a key point. For multi-objective optimization, a new distance measurement is used in [66] to maintain performance: the inertia weight of particles is set dynamically, non-dominated sorting is used to evolve the population, and the Pareto minimax distance is adopted to keep diversity, so global search and local search are well balanced. [67] makes full use of cooperative and competitive search among different subpopulations to make QPSO more efficient. To prevent premature convergence, a threshold is set in [68] to avoid clustering. A diversity-guided QPSO is proposed in [69], [70], which sets a lower limit for diversity; once the diversity falls below this limit, the global optimum particle is mutated. Besides, random perturbation can be introduced to compensate for the loss of diversity during evolution [71].
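The diversity-guided mechanism above relies on a measurable notion of swarm diversity. A minimal Python sketch follows; the normalized mean-distance-to-centroid measure and the threshold value are illustrative assumptions, not the exact formulas of [69]-[71]:

```python
import numpy as np

def swarm_diversity(positions, low, high):
    """Mean distance of particles to the swarm centroid,
    normalized by the diagonal length of the search space."""
    centroid = positions.mean(axis=0)
    dists = np.linalg.norm(positions - centroid, axis=1)
    return dists.mean() / np.linalg.norm(high - low)

rng = np.random.default_rng(0)
low, high = np.zeros(2), np.full(2, 10.0)

# A tightly clustered swarm versus one spread over the whole space:
clustered = 5.0 + 0.01 * rng.standard_normal((20, 2))
spread = rng.uniform(low, high, size=(20, 2))

d_low = swarm_diversity(clustered, low, high)
d_high = swarm_diversity(spread, low, high)

d_min = 0.05                     # illustrative lower limit for diversity
mutate_gbest = d_low < d_min     # diversity collapsed: mutate the global best
```

When the measure drops below the limit, the diversity-guided variants trigger mutation of the global optimum particle rather than letting the swarm stagnate.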

In addition, the clustering coefficient and the characteristic distance are used to guide diversity in [72]; during evolution, their thresholds are adjusted adaptively. When the clustering coefficient is large and the characteristic distance is small, the population scatters and exploration is strengthened; otherwise, the population gathers and exploitation is performed. An exchange strategy is proposed in [73], which establishes two particle swarms; once the whole swarm falls into a local optimum or solutions are not improved after a certain number of iterations, the exchange strategy is implemented. Collaborative search is carried out in [74] by utilizing the mutual benefit among groups; cooperative search is specially designed to overcome the curse of dimensionality and can easily deal with high-dimensional optimization problems. A hierarchical clustering method is used in [75] to solve dynamic optimization problems and improve the ability of tracking the optimal solution; convergence checks, overcrowding checks and overlapping checks are also used to maintain the diversity of the population.

d: Combination With Other Algorithms

Like PSO, QPSO can suffer from premature convergence. Combining QPSO with other algorithms can effectively increase diversity, avoid premature convergence and improve the probability of converging to the global optimum. To address the problem of easily falling into local optima, [76] proposes a diversity-guided method which sets an attraction state and an expansion state for the population: if the diversity is less than the pre-set value, QPSO enters the attraction state and an immune clonal algorithm is used to conduct local search. A neighborhood search strategy is introduced into QPSO in [77], which combines local and global search to improve diversity, and parallel technology is used to shorten the search time. [78] combines shuffled complex evolution and QPSO to ensure the efficiency of optimization in both low- and high-dimensional problems.

In addition, QPSO can be applied to optimize the parameters of other algorithms, for example to improve the performance of neural networks. In [79], QPSO is used to optimize the weights of an autoencoder neural network and the parameters of the softmax layer to help the autoencoder classify more accurately. [80] focuses on the optimization of network topology, in which hyper-parameters of a neural network, such as the number of layers and the number of neurons in each layer, are tuned using QPSO.

e: Theoretical Research

The proof of the convergence of QPSO is a hot topic in theoretical study. Global convergence of QPSO is closely related to its parameters, so some studies focus on how to select appropriate parameters to ensure convergence. For example, by discussing adaptive parameter control methods, [81] illustrates how to set parameters to guarantee convergence. The behavior of a single particle in QPSO is analyzed in [47] from the perspective of probability measure, with the purpose of finding the upper bound of the contraction-expansion (CE) coefficient: any CE coefficient below this bound ensures the convergence of particles. The CE coefficient is also studied in [82], which introduces a Q-learning based control method that tunes the coefficient adaptively.
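For concreteness, the role of the CE coefficient can be seen in a minimal QPSO sketch. The update below follows the common delta-potential-well form of QPSO (mean best position, local attractor, logarithmic jump); the linearly decreasing schedule for the coefficient beta is one typical choice, not the specific scheme of [81] or [82]:

```python
import numpy as np

rng = np.random.default_rng(42)

def qpso_step(x, pbest, gbest, beta):
    """One QPSO position update in the delta-potential-well model.
    beta is the contraction-expansion (CE) coefficient; convergence
    analyses bound the beta values that keep particles contracting."""
    n, d = x.shape
    mbest = pbest.mean(axis=0)                       # mean best position
    phi = rng.uniform(size=(n, d))
    p = phi * pbest + (1 - phi) * gbest              # local attractor
    u = rng.uniform(1e-12, 1.0, size=(n, d))
    sign = np.where(rng.uniform(size=(n, d)) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

def sphere(x):
    return (x ** 2).sum(axis=1)

# Minimize the sphere function with beta decreasing linearly from 1.0 to 0.5,
# a commonly reported setting for convergent behavior.
n, d, iters = 30, 5, 300
x = rng.uniform(-10, 10, size=(n, d))
pbest, pval = x.copy(), sphere(x)
for t in range(iters):
    beta = 1.0 - 0.5 * t / iters
    x = qpso_step(x, pbest, pbest[pval.argmin()], beta)
    val = sphere(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
```

With beta kept inside the convergent range, the swarm contracts towards the global minimum of the sphere function; too large a beta makes the logarithmic jumps diverge instead.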

Besides, selection of the potential well type is critical for the convergence of QPSO. [83] explores the motion pattern of particles in square potential well and proposes the ternary correlation QPSO based on square potential well.

3) Summary

This part first introduces the basic process of QPSO and then summarizes the existing research. Abundant studies focus on the mutation operator, which enables QPSO to jump out of local optima quickly and increases the probability of finding the global optimum. In addition, the population distribution of swarm intelligence search algorithms has always received widespread attention: whether considering its role in guiding the search or the ability to explore the whole space, the setting of population size and the control of diversity should be highly valued. QPSO also shows great potential for constrained optimization, integer programming and multi-objective optimization, so the corresponding applications should be further studied.

C. Quantum Immune Clonal Algorithm

Artificial immune systems draw on the information processing of the vertebrate immune system. Constructing new intelligent algorithms based on immune terminology and basic principles provides novel ideas for solving problems. The immune clonal selection algorithm is an intelligent algorithm that obtains strong local optimization ability by increasing the population size through clonal operators. A quantum system is a parallel distributed processing system in nature, so combining biological evolution with quantum theory leads to better simulation of information processing.

1) Algorithm Description

[84] proposed a quantum-inspired immune clonal algorithm (QICA) in 2008. The qubit-coded chromosomes can represent multiple states at the same time, bringing abundant population information. In addition, the clonal operator easily propagates the information of the current optimal individual to the next generation and leads the population to evolve towards a better direction. The main idea of QICA is that cells capable of recognizing antigens are selected for reproduction.

TABLE 3 lists the procedure of QICA, in which $Q(t)$ denotes the antibody population using qubit at the $t$ th generation, $P'(t)$ denotes the antibody population using classical bit and $B(t)$ denotes the best antibody population using classical bit in the subpopulation.

TABLE 3 Procedure of QICA

Obviously, in order to maintain the diversity of solutions and expand the searching scope, the immune clonal method adopts the strategy of replicating the parent generation, which expands the solution space at the cost of computing time. Since qubit encoding possesses the characteristics of parallel computing, it is effective to introduce it into the immune clonal algorithm.
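A minimal sketch of the two QICA ingredients just described, qubit-coded antibodies and the clonal operator, is given below. The observation convention (bit $i$ collapses to 1 with probability $|\beta_i|^2 = 1-\alpha_i^2$) is one common choice, and the function names are illustrative, not those of [84]:

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(alpha):
    """Collapse a qubit-coded antibody Q(t) to a classical bit string P'(t):
    bit i becomes 1 with probability 1 - alpha_i**2 (i.e. |beta_i|**2)."""
    return (rng.uniform(size=alpha.shape) > alpha ** 2).astype(int)

def clone(antibody, n_clones):
    """Clonal operator: proliferate one antibody into a subpopulation,
    whose copies are then mutated and evaluated independently."""
    return np.tile(antibody, (n_clones, 1))

# A 6-qubit antibody initialized in the equal superposition (alpha = 1/sqrt(2)),
# so each observation yields each bit pattern with equal probability:
alpha = np.full(6, 1 / np.sqrt(2))
bits = observe(alpha)       # one classical antibody for affinity evaluation
subpop = clone(alpha, 4)    # cloned qubit antibodies forming a subpopulation
```

Because every observation of the same qubit antibody can yield a different bit string, a single qubit individual implicitly covers many classical candidates, which is the parallelism the paragraph above refers to.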

2) Research Progress

a: Improvement on Operators

Mutation operators help to accelerate convergence, and crossover operators enhance the exchange of information to increase the diversity of the population. Many strategies have been proposed to improve the immune clonal operators. [84] utilizes quantum recombination as the crossover operation, through which more antibodies are involved during the evolution. A new selection scheme and a novel mutation operator with a chaos-based rotation gate are proposed in [85] to create new populations. In [86], chaos variables produced by logistic mapping are introduced into the quantum rotation gates to improve the searching capability. A hybrid quantum crossover is adopted in [87] to balance exploration and exploitation in the immune maturation process. In [88], clonal selection and entire cloning based on Pareto dominance are adopted: antibodies are divided into dominated and non-dominated ones, and the non-dominated ones are selected.

Some methods adopt a memory mechanism, and the design of the affinity function is also improved. In [89], [90], the optimal solution is obtained through a mechanism in which cross-mutation is accomplished by immune cells; memory cells are produced while similar antibodies are suppressed. The memory strategy in [91] realizes information transfer during the course of evolution. [92] uses replicator dynamics to model the behavior of the quantum antibody. The mechanisms of immunological memory and immunologic regulation are adopted in [93], endowing QEA with antibody memory enhancement. [94] introduces an expression of antigen-antibody affinity. Moreover, [95] solves the sensitivity problem of the scale parameter and the slow iteration by designing effective immune operators and embedding a potential evolution formula into the affinity of a multi-elitist immune clonal optimization.

In addition, [96] uses a repair operator to amend infeasible solutions so as to ensure diversity. [97] utilizes infeasible solutions to improve constrained multi-objective optimization. Quantum observing entropy is introduced in [98] to evaluate the population evolutionary level, by which relevant parameters are adjusted accordingly.

b: Improvement on Population

Improvement on population is mainly aimed at population size. The common strategy is to divide the original population into several sub-populations according to certain criterion, so that sub-populations can evolve in a parallel mode to obtain more efficient results.

In [99], an antibody is proliferated and then divided into a subpopulation, each represented by multi-state gene qubits. In [100], [101], individuals are divided into independent sub-colonies, called universes; each universe evolves independently using QICA, and information among universes is exchanged by emigration. [102] proposes a niche method, in which the population is automatically divided into subpopulations and local search is carried out using the immune mechanism.

[103] proposes a computational model, which consists of a population space based on QEA and a belief space based on immune vaccination. The population space and the belief space create their own population and conduct evolution independently and in parallel. Besides, these spaces are able to exchange information to constitute a dual-evolution mechanism and the convergence speed can be greatly accelerated in this way. To preserve the diversity of the population, [104] adopts a suppression algorithm and a truncation algorithm to create a new population.

c: Combination With Other Algorithms

Combined with different strategies, QICA can effectively avoid its shortcomings and solve practical problems more efficiently as a pre-processing or post-processing means. [105] proposes an adaptive multiscale Bandelet based on QICA and wavelet packets for image representation, which is of low complexity and thus can be implemented rapidly. To address the problem of image segmentation, [106] introduces the watershed algorithm into QICA to alleviate over-segmentation. A load balancing strategy is adopted in [107] to accomplish task scheduling and allocation. Based on evolutionary game theory, the QICA in [92] embeds an evolutionary game and maps the process of a quantum antibody finding the optimal solution to the process of a player pursuing maximum utility. [103] integrates QICA into the cultural framework. In addition, traditional Fuzzy C-Means (FCM) clustering is usually based on image intensity alone, so segmentation results can be rather unsatisfactory when images are corrupted by noise; to address this problem, [108] modifies the FCM objective function and uses QICA to optimize it. Moreover, to balance exploration and exploitation, [104] combines an artificial immune system based on binary encoding (BAIS) and a quantum-inspired artificial immune algorithm (QAIS): QAIS is responsible for exploration of the search space, while BAIS is applied for exploitation using a reverse mutation.
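As an illustration of the FCM-based combination, the standard FCM objective that such a hybrid hands to QICA for minimization can be sketched as follows. The exact modified objective of [108] differs; this sketch shows only the classical intensity-based form, and the variable names are illustrative:

```python
import numpy as np

def fcm_objective(X, centers, U, m=2.0):
    """Classical FCM objective J = sum_i sum_k u_ik**m * ||x_k - c_i||**2,
    where U[i, k] is the membership of sample k in cluster i."""
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)  # (c, n)
    return ((U ** m) * d2).sum()

rng = np.random.default_rng(0)
# Two tight groups of 2-D feature vectors (e.g. pixel features):
X = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])

# Crisp memberships: cluster 0 = first group, cluster 1 = second group.
good_U = np.vstack([np.r_[np.ones(5), np.zeros(5)],
                    np.r_[np.zeros(5), np.ones(5)]])
good_centers = np.array([[0.0, 0.0], [3.0, 3.0]])
bad_centers = np.array([[1.5, 1.5], [1.5, 1.5]])   # both centers misplaced

j_good = fcm_objective(X, good_centers, good_U)
j_bad = fcm_objective(X, bad_centers, good_U)
```

The optimizer's job, whether alternating FCM updates or a QICA search over candidate centers, is to drive this objective towards configurations like `good_centers` rather than `bad_centers`.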

d: Theoretical Research

Theoretical studies of QICA mainly focus on its convergence. Using a Markov chain, [109] proves that the quantum-inspired evolutionary algorithm based on immune operators is completely convergent. [91] proposes a memory strategy to realize information transfer during the course of evolution, and theoretical analysis proves that the quantum-inspired immune memory algorithm converges to the global optimum.

3) Summary

Compared with QEA and QPSO, research on and applications of QICA are relatively scarce and deserve further development. Immune cloning increases the population size locally in exchange for local optimization ability: by maintaining the diversity of solutions and expanding the search space, it shows distinct local exploitation ability. The existing quantum immune clonal algorithms mainly focus on new operators and on combinations with other algorithms. Most of the ideas come from improvements of other quantum optimization methods and thus have universal characteristics. However, it should be pointed out that QICA draws on the information processing mode of the animal immune system, while the common improvement ideas fail to exploit this point of view and make full use of immune mechanisms such as memory cells. Simply speaking, the common immune clonal algorithms only use the main framework of artificial immune cloning and have not made a deep exploration of it. Making full use of the animal immune system and continuing to strengthen the characteristics of immunology should be the main development trend of QICA in the future.

SECTION III.

Quantum Learning

Quantum computing has shown tremendous advantages in intelligent optimization. Therefore, people put forward the idea of combining quantum theory with learning algorithm, hoping to introduce the advantages of quantum computing on the basis of existing learning algorithms and achieve more fruitful results. In this part, two classical algorithms, quantum neural network (QNN) and quantum clustering (QC) are elaborated specifically. Besides, contents of algorithms and relevant research progresses are introduced and the related works are summarized.

A. Quantum Neural Network

With the advancement of psychology, neuroscience, computer information processing and artificial intelligence, conditions for exploring human consciousness with natural-science methods have matured, and many valuable research results in neurocomputing have been achieved. The well-known M-P neuron model, which uses a threshold logic unit to simulate biological neurons, is proposed in [110] and opens the prelude of neural network research. To simulate the plasticity of synapses, the Hebb rule is proposed in [111], which lays the foundation for constructing a learning neural network model. After that, a learning mechanism is added to the original M-P model, and the theory of neural networks is put into practice for the first time in [112]. After several decades of development, the artificial neural network (ANN) has achieved extensive success in many fields such as pattern recognition, automatic control, signal processing and assistant decision-making. The excellent performance of neural networks makes them one of the hotspots of quantum research.

In 1995, Kak proposed the concept of quantum neural computing for the first time [113]; the thought of combining neural computing with quantum computing to form a new computing paradigm pioneered the field of quantum study. Menneer discussed quantum neural networks (QNN) comprehensively from the multi-universe point of view and considered that QNN works better than traditional neural networks. After that, new QNN models have been brought forward. Based on the Grover search algorithm, a quantum associative memory model is proposed in [114]. In [115], the model of a quantum neuron is described and its mechanism and corresponding training algorithm are discussed; meanwhile, it is proved that a single quantum neuron can perform the XOR function, which a single classical neuron cannot achieve.

1) Algorithm Description

a: Quantum M-P Model

Each input of a neuron has a weight coefficient $w$ , which simulates excitation and inhibition of synapses in cerebral neurons and is used to describe the connection strength. Referring to the classical M-P model, the conceptual model of a quantum M-P model is shown in FIGURE 2:

FIGURE 2. Quantum M-P Model.

Corresponding outputs of a neuron are expressed as follows:\begin{equation*} O=\sum \nolimits _{j} w_{j}\phi _{j},\quad j=1,2,\ldots,2^{n}\tag{23}\end{equation*} where $2^{n}$ denotes the total number of inputs, $\phi _{j}$ denotes a quantum state and $w_{j}=(w_{j1},w_{j2},\ldots,w_{j2^{n}})$ is a vector. Quantum states are represented by Dirac symbols, and the calculated output of the quantum M-P model can be expressed as follows:\begin{align*} O=&\sum \nolimits _{j} O_{j} =\sum \nolimits _{j} \sum \nolimits _{i} w_{ji}x_{ji} =\sum \nolimits _{j} \sum \nolimits _{i} w_{ji}\vert x_{1},\ldots,x_{2^{n}}\rangle, \\& i=1,2,\ldots,2^{n}\tag{24}\\ O_{j}=&\sum \nolimits _{i=1}^{2^{n}} w_{ji}x_{ji} \\=&w_{j1}\vert 0,0,\ldots,0\rangle +w_{j2}\vert 0,0,\ldots,1\rangle +\cdots +w_{j2^{n}}\vert 1,1,\ldots,1\rangle\tag{25}\end{align*}

  • If the states $\phi _{j}$ are orthogonal, $w$ is an orthogonal matrix, and the output can be expressed as the following quantum unitary transformation:\begin{equation*} O=\begin{pmatrix} w_{11} & w_{12} & \cdots & w_{12^{n}}\\ w_{21} & w_{22} & \cdots & w_{22^{n}}\\ \vdots & \vdots & \ddots & \vdots \\ w_{2^{n}1} & w_{2^{n}2} & \cdots & w_{2^{n}2^{n}} \end{pmatrix} \begin{pmatrix} \vert 0,0,\ldots,0\rangle \\ \vert 0,0,\ldots,1\rangle \\ \vdots \\ \vert 1,1,\ldots,1\rangle \end{pmatrix}\tag{26}\end{equation*}

  • If the states $\phi _{j}$ are not orthogonal, the relationship between input and output can be modified as:\begin{equation*} O_{ik}=\sum \nolimits _{j} w_{ij}\phi _{j}\cdot \phi _{k},\quad j=1,2,\ldots,2^{n}\tag{27}\end{equation*}

$\phi _{j}\cdot \phi _{k}$ represents the inner product of two states, thus the output can be expressed as follows:\begin{equation*} O=\begin{pmatrix} w_{11} & \cdots & w_{12^{n}}\\ \vdots & \ddots & \vdots \\ w_{2^{n}1} & \cdots & w_{2^{n}2^{n}} \end{pmatrix} \begin{pmatrix} \phi _{1}\cdot \phi _{1} & \cdots & \phi _{1}\cdot \phi _{2^{n}}\\ \vdots & \ddots & \vdots \\ \phi _{2^{n}}\cdot \phi _{1} & \cdots & \phi _{2^{n}}\cdot \phi _{2^{n}} \end{pmatrix} \begin{pmatrix} \vert 0,\ldots,0\rangle \\ \vdots \\ \vert 1,\ldots,1\rangle \end{pmatrix}\tag{28}\end{equation*}

For both the orthogonal and the non-orthogonal case of $\phi _{j}$ , certain functions can be achieved by selecting an appropriate $W$ . Based on the above quantum M-P model, the weights can be updated following the procedure in TABLE 4:

TABLE 4 Procedure of Updating the Weight in Quantum M-P Model
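A small numerical sketch of the orthogonal case (Eq. (26)) follows: basis states are represented as one-hot amplitude vectors, and an illustrative unitary $W$ (a NOT gate on the last qubit) is applied. The particular choice of $W$ is an assumption for demonstration only:

```python
import numpy as np

n = 2                      # number of qubits, so 2**n basis states
dim = 2 ** n

# For orthogonal input states the quantum M-P output O = W|x> is a unitary
# transformation. As an illustrative W, take the permutation matrix that
# flips the last qubit: |j> -> |j XOR 1>. Permutation matrices are unitary.
W = np.zeros((dim, dim))
for j in range(dim):
    W[j ^ 1, j] = 1.0

assert np.allclose(W.T @ W, np.eye(dim))   # unitarity check

# Input superposition: equal amplitudes on |00> and |10>.
x = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
O = W @ x                  # output amplitudes: now on |01> and |11>
```

Because $W$ is unitary, the output state keeps unit norm, which is what distinguishes the orthogonal case from the non-orthogonal one in Eq. (28), where a Gram matrix of inner products enters as well.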

b: Quantum Hopfield Network

Hopfield introduced the concept of an energy function into the network and established a stability criterion, making the network a dynamic system with a feedback mechanism capable of solving dynamic problems. Referring to the classical Hopfield neural network, a conceptual model of the quantum Hopfield network (QHNN) is presented in FIGURE 3:

FIGURE 3. Conceptual model of QHNN.

There are $N$ neurons in the network. The output of each neuron is fed back to all other neurons as one of their inputs, but not to itself. The weight matrix $W$ is therefore a symmetric matrix with zero diagonal, whose elements satisfy $w_{ij}=w_{ji}$ and $w_{ii}=0$ .

According to the Schrödinger equation and the quantum linear superposition principle, $W$ can be written as follows:\begin{equation*} W=\frac {1}{P_{s}}\sum \nolimits _{i=1}^{P_{s}} \vert \phi _{i}\rangle \langle \phi _{i}\vert =\sum \nolimits _{i} p_{i}W_{i}\tag{29}\end{equation*}

$p_{i}$ denotes the probability that $W$ collapses to $W_{i}$ ; $P_{s}$ denotes the total number of images or patterns stored in the QHNN, which is also the number that can be recognized; $\vert \phi _{i}\rangle $ (respectively $W_{i}$ ) denotes a single stored image or pattern, and $\langle \phi _{i}\vert $ is the conjugate transpose of $\vert \phi _{i}\rangle $ . When an external image is input into the network, after quantum measurement the network collapses into a stored image or pattern with a certain probability, thus realizing image recognition.

According to the principle of quantum linear superposition and the matrix composed of quantum states, the quantum learning algorithm to determine weights by quantum unitary evolution is in TABLE 5:

TABLE 5 The Quantum Learning Algorithm

Usually, a traditional Hopfield network with $N$ neurons can store about $0.14N$ images. Because of the great difficulty of recognizing a large number of images or patterns, researchers have been looking for a breakthrough. Compared with traditional networks, QHNN achieves the ability to recognize $2^{N}$ images, which brings an exponential increase in storage capacity and thus a new style of model construction.
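The storage rule of Eq. (29) can be sketched numerically: $W$ is built as an average of outer products of stored states, and probing it with an input state yields the overlap that governs the collapse probability. The two stored patterns below are illustrative:

```python
import numpy as np

# Two stored patterns |phi_i>, encoded as normalized complex vectors over
# the 2**N computational basis states (here N = 2 qubits, 4 basis states).
phi1 = np.array([1, 0, 0, 0], dtype=complex)
phi2 = np.array([0, 0, 1, 0], dtype=complex)
patterns = [phi1, phi2]

# Weight matrix W = (1/P_s) * sum_i |phi_i><phi_i|   (Eq. 29)
Ps = len(patterns)
W = sum(np.outer(p, p.conj()) for p in patterns) / Ps

# Probing with an input state: the expectation <x|W|x> is the total
# probability weight with which measurement collapses the network
# onto one of the stored patterns.
x = (phi1 + phi2) / np.sqrt(2)
overlap = np.real(x.conj() @ W @ x)
```

For this equal superposition of the two stored patterns, the overlap is 1/2: half the weight on each stored pattern, reflecting that measurement returns either one with equal probability. $W$ is Hermitian by construction, as a mixture of projectors must be.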

2) Research Progress

a: Improvement on Training Algorithm

QNN is developing rapidly and continuously. To train networks more efficiently, abundant new training algorithms have been proposed. [114] presents a quantum computational learning algorithm that takes advantage of the unique abilities of quantum computation. [116], [117] bring forward a quantum back-propagation learning rule (QBP) and reconstruct a QBP neural network by removing the dummy input. A learning algorithm for the quantum neuron is proposed and its properties are explored in [118]. Reference [119] presents a randomized training algorithm that searches each node's weights independently, reducing the complexity in the meantime.

The complex-valued version of the training algorithm is another focus. [120] introduces a complex-numbered version of the back-propagation algorithm, and [121] investigates the characteristics of the learning rule. In [122], a quantum conjugate gradient back-propagation network is constructed: rather than the steepest descent algorithm, a conjugate gradient algorithm is chosen to accelerate convergence. Besides, by adding an error component to the conventional error function, [123] proposes a back-propagation algorithm for the complex-valued version. To address the local minima problem, [124] puts forward a back-propagation algorithm with individually adaptive gain parameters.

Besides, [125] utilizes an improved PSO to train the connection weights and thresholds between different layers of QNN. To improve learning performance, instead of using back-propagation algorithm, a real-coded genetic algorithm is applied in [126] to facilitate the supervised training of the multi-layer QNN.

In addition, combination strategies are also absorbed in the construction of learning algorithms. A model of a feedback quantum neuron as well as a novel multi-user detection algorithm are shown in [127]. For the speech enhancement task, a method based on a quantum BP neural network is proposed in [128]. The focus of [129], [130] is associative memory, showing the implementation of associative information processing in the quantum field.

b: Improvement on Model

According to different research purposes, the quantum model has been improved to make it more suitable for corresponding applications. An extension of the Hopfield model is proposed in [131] to solve constraint satisfaction problems, and the concept of known-energy systems based on QNN is introduced in [132]. [133] presents a self-organizing QNN that performs pattern classification automatically through a quantum competitive process, which is faster than traditional classification methods. Reference [134] designs the structure of a quantum storage network and shows its practical pattern storage algorithm. After that, on the basis of quantum linear superposition, [135] presents a network whose storage matrix elements are distributed in a probabilistic way, making its storage capacity increase exponentially. Networks in [136] are presented as quantum computational agents that gain learning ability by implementing a reinforcement learning algorithm. Reference [137] proposes a quantum-inspired neuron using a controlled-rotation gate, in which the discrete sequence input is represented by qubits and the target qubits are controlled for rotation. Reference [138] proposes a quantum version of the multilayer self-organizing neural network, in which single-qubit rotation gates are operated and linear indices of fuzziness are incorporated as the system errors to adjust weights. Reference [139] proposes a quantum parallel bi-directional self-organizing neural network to realize real-time pure color image denoising: each constituent updates weighted connections using quantum states, and rotation gates are adopted to represent weighted inter-links. Reference [140] seeks to model language using the mathematical framework of quantum mechanism; this framework unifies linguistic units in a complex-valued vector space, and a complex-valued network is built for the task of semantic matching.

c: Theoretical Research

Some studies focus on the stability analysis of QNN. The stability of complex-valued neural networks is analyzed in [141], and the conditions are relaxed in [142]. Reference [143] proposes an energy function for the higher-order complex-valued Hopfield neural network and then studies the stability conditions to prove convergence. An intrinsic similarity between artificial neural networks and quantum theory is highlighted and analyzed in [144], and the dynamic features of QNN are also investigated in detail. A class of discrete-time recurrent neural networks with multivalued neurons in synchronous update mode is discussed in [145], and the complete stability of the network is established.

There are also some studies from the perspective of network learning, which focus on the theoretical analysis of network performance. Reference [146] describes how several optimization problems can be quickly solved by highly interconnected neural networks of simple analog processors. In [147], performance is improved by using superpositions of neural states and a probability interpretation during the observation of the output states. As for the exploration of computational power, [115] demonstrates that a single quantum neuron is capable of performing the XOR function, which is unrealizable with a single classical neuron; this means that a single quantum neuron has the same computational power as a two-layer perceptron. Reference [148] considers a model using particles in a two-humped quantum potential as a neuron; moreover, the possibility of building the simplest logical elements from the introduced quantum particles is shown.

Moreover, [149] shows that any quantum system has the dynamical structure of a Hopfield-like associative artificial neural network, and the influence of the number of neurons on learning with the gradient descent method is described in [150]. Besides, [151] discusses whether the adoption of quantum computational means will affect systems of agents and the autonomy of individuals.

3) Summary

Compared with quantum optimization, the development of QNN started relatively late. However, artificial neural networks have shown tremendous vitality in many fields, leading the direction of artificial intelligence. In view of this, the prospects of QNN are broad, with great room for breakthroughs. Since QNN was first proposed, the related improvements have been rich and the versions of QNN abundant, each specifically designed for different problems. Building on the strong computing power of artificial neural networks and the advantages of quantum mechanism, QNN injects new vitality into the framework of neural networks. So far, QNN is far less popular than the artificial neural network, so in addition to theoretical study, the practical application of QNN should also become a focus of future work.

B. Quantum Clustering

Cluster analysis plays a very important role in the field of data mining and is a useful tool for data analysis and knowledge discovery. Its purpose is to classify sample objects into several meaningful categories according to their similarity. Data clustering has been widely used in data mining, computer vision, information retrieval and pattern recognition. Generally, clustering methods are divided into the following categories: partition-based, hierarchy-based, density-based, grid-based and model-based algorithms [152]. As a new kind of clustering algorithm, quantum clustering (QC) has attracted more and more attention, produced a large number of excellent theoretical results, and achieved extensive success in many fields.

According to the core idea of the algorithm, quantum clustering can be divided into two categories: clustering based on quantum optimization algorithms and clustering inspired by quantum mechanics. In the design of clustering algorithms based on evolutionary computation, the principal problems are the coding of individuals, the distance measurement and the selection of an appropriate objective function [153]. A clustering technology based on the genetic algorithm is proposed in [154]: according to the evolutionary scheme, an evolutionary strategy is used to find the optimal solution in the target space, not only updating the clustering centers but also reducing the dependence on the initial clustering centers. In addition, a quantum clustering algorithm inspired by quantum mechanics is introduced in [155], which uses the gradient descent method to find the minima of the potential energy and determine the cluster centers. The physical basis of QC is illustrated in [156] by treating clustering as a physical system: by solving the Schrödinger equation and using gradient descent, the minima of the resulting potential function can be obtained, which correspond to cluster centers.

1) Algorithm Description

According to different design philosophies, quantum clustering can be divided into clustering based on quantum optimization and clustering inspired by quantum mechanics.

  1. Clustering based on quantum optimization converts the clustering process into an optimization problem. To describe the clustering results, a criterion is used to evaluate the performance of the algorithm and the similarity within the real classes. The mathematical form is as follows:\begin{equation*} P\left ({ C^{\ast } }\right )=\mathop {min}\limits _{C\in \Omega }{P(C)}\tag{32}\end{equation*}

    $\Omega $ is the feasible set of clustering results, $C$ is a division of the data set and $P$ is a criterion function, which usually reflects the similarity of data. By searching for the minimum of $P$ , classification can be transformed into an optimization problem whose optimal solution is obtained using a quantum optimization algorithm. Any quantum intelligent algorithm that absorbs quantum thought can be used to optimize the above problem. Compared with other clustering algorithms, the objective function $P$ under the quantum mechanism is nothing special. However, due to the use of quantum optimization in QC, the feasible solution set $\Omega $ is slightly different from that in traditional clustering: each solution in the feasible set is constructed under the idea of quantum mechanism and expressed using qubits. Clustering based on quantum optimization can enhance the ergodicity of the solution space and the diversity of the population. Since the optimal solution is expressed using qubit probability amplitudes, the probability of obtaining a global optimum is further increased.

  2. The basic idea of clustering inspired by quantum mechanics is: clustering focuses on the distribution of samples in a scale space, while quantum mechanics studies the distribution of particles in a quantum space, so a clustering problem can be solved by referring to the thought of quantum mechanics. Concretely, the Schrödinger equation is used to obtain a potential energy function, and cluster centers are then determined from the point of view of potential energy.
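To make the optimization view in item 1 concrete, the following sketch encodes one candidate solution as qubit-style probability amplitudes, repeatedly measures it into hard assignments, and scores each measurement with a within-cluster sum-of-squares criterion $P$ . This is a minimal illustration, not the full QEA/QPSO search: `measure`, `criterion_P` and the repeated-measurement loop are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(amplitudes, k):
    """Collapse qubit-style amplitudes into a hard clustering: sample an
    assignment for each point from the squared-amplitude probabilities."""
    probs = amplitudes ** 2
    probs /= probs.sum(axis=1, keepdims=True)
    return np.array([rng.choice(k, p=row) for row in probs])

def criterion_P(X, labels, k):
    """Within-cluster sum of squares: one common choice for the criterion P."""
    total = 0.0
    for c in range(k):
        members = X[labels == c]
        if len(members):
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return total

# two well-separated Gaussian blobs as toy data
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
k = 2
amplitudes = rng.random((len(X), k))   # qubit-style encoding of one solution
best_P, best_labels = np.inf, None
for _ in range(200):                   # repeated measurement; a crude
    labels = measure(amplitudes, k)    # stand-in for the QEA/QPSO search
    P = criterion_P(X, labels, k)
    if P < best_P:
        best_P, best_labels = P, labels
```

In a real quantum optimizer the amplitudes themselves would be evolved between measurements; here they stay fixed to keep the sketch short.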

The time-independent Schrödinger equation is expressed as:\begin{equation*} H\Psi =\left ({ -\frac {\sigma ^{2}}{2}\nabla ^{2}+V(p) }\right )\Psi =E\Psi\tag{33}\end{equation*}

$\Psi (p)$ is a wave function, $V(p)$ is a potential energy function, $H$ is a Hamilton operator, $E$ is the energy eigenvalue of $H$ and $\sigma $ is a parameter that adjusts the width of the wave function.

In QC, a Gaussian kernel function with a Parzen window is used to estimate the wave function (i.e., the probability distribution of the sample points):\begin{equation*} \Psi \left ({ p }\right )=\sum \nolimits _{i=1}^{N} e^{-{\parallel p-p_{i}\parallel }^{2}/2\sigma ^{2}}\tag{34}\end{equation*}

(34) corresponds to the observation set $\left \{{p_{1},p_{2},\ldots ,p_{N} }\right \}\subset \Re ^{d}$ , $p_{i}={(p_{i1},p_{i2},\ldots ,p_{id})}^{T}\in \Re ^{d}$ in the scale space. The Gaussian function can be used as a kernel that defines a nonlinear mapping from the input space to the Hilbert space, and $\sigma $ can be considered as a kernel parameter that adjusts the width.
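The Parzen-window estimate in (34) is straightforward to compute; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def wave_function(p, points, sigma):
    """Parzen-window estimate of Psi(p), Eq. (34): a sum of Gaussian
    kernels centred on the observed samples."""
    sq_dists = ((points - p) ** 2).sum(axis=1)
    return np.exp(-sq_dists / (2 * sigma ** 2)).sum()

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
psi = wave_function(np.array([0.0, 0.0]), points, sigma=0.5)
```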

When the wave function $\Psi \left ({ p }\right )$ is known and there is only one single point $p_{1}$ in the input space, the potential energy function obtained by solving the Schrödinger equation is:\begin{equation*} V\left ({ p }\right )=\frac {1}{2\sigma ^{2}}\left ({ p-p_{1} }\right )^{T}(p-p_{1})\tag{35}\end{equation*}

The corresponding energy eigenvalue of the operator $H$ , $E=d/2$ , is the minimum possible eigenvalue of $H$ , where $d$ is the dimension of the samples.

In general, the potential energy function with samples obeying a Gaussian distribution is:\begin{align*} V\left ({ p }\right )=&E+\frac {\left ({ \frac {\sigma ^{2}}{2} }\right )\nabla ^{2}\Psi }{\Psi } \\=&E-\frac {d}{2}+\frac {1}{2\sigma ^{2}\Psi }\sum \nolimits _{i} {\parallel p-p_{i}\parallel }^{2} e^{-\frac {{\parallel p-p_{i}\parallel }^{2}}{2\sigma ^{2}}}\tag{36}\end{align*}

Assuming that $V$ is nonnegative, $E$ can be determined from the above formula by requiring the minimum of $V$ to be zero.

The gradient descent method is used to find the minima of the potential energy function, which serve as the cluster centers. The core iteration is as follows:\begin{equation*} y_{i}\left ({ t+\Delta t }\right )=y_{i}\left ({ t }\right )-\eta (t)\nabla V(y_{i}(t))\tag{37}\end{equation*}

The initial point is set as $y_{i}\left ({ 0 }\right )=p_{i}$ , $\eta (t)$ is the learning rate and $\nabla V$ is the gradient of the potential energy. More sophisticated global minimum search methods can be found in chapter 10 of [157]. Finally, particles move in the direction of decreasing potential energy, which means that the data gradually move towards their cluster centers and eventually gather. In this way, the central points of clustering can be determined by QC and points close to each other are grouped together.
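Putting (34), (36) and (37) together, the following sketch descends the potential numerically. The central-difference gradient replaces the analytic $\nabla V$ for brevity, and all names and parameter values ($\sigma $ , $\eta $ , the iteration count) are illustrative.

```python
import numpy as np

def potential(p, points, sigma):
    """V(p) from Eq. (36) up to the additive constant E, which does not
    affect the gradient used in Eq. (37)."""
    sq = ((points - p) ** 2).sum(axis=1)
    w = np.exp(-sq / (2 * sigma ** 2))
    psi = w.sum()
    d = points.shape[1]
    return -d / 2 + (sq * w).sum() / (2 * sigma ** 2 * psi)

def grad(f, p, eps=1e-5):
    """Central-difference gradient; a simple stand-in for the analytic form."""
    g = np.zeros_like(p)
    for j in range(len(p)):
        e = np.zeros_like(p)
        e[j] = eps
        g[j] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0, 0.2, (15, 2)), rng.normal(2, 0.2, (15, 2))])
sigma, eta = 0.5, 0.1
y = points.copy()                       # y_i(0) = p_i
for _ in range(60):                     # Eq. (37): descend the potential
    y = np.array([yi - eta * grad(lambda q: potential(q, points, sigma), yi)
                  for yi in y])
```

After the iterations, the replicas `y` contract towards the two potential minima, so nearby points end up grouped around the same center.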

Compared with traditional clustering algorithms, some advantages of quantum mechanics-inspired clustering algorithms are as follows: (i) the focus is on the selection of cluster centers rather than the search for boundaries; (ii) cluster centers are not determined randomly or as simple geometric centers, but depend completely on the potential information of the data; (iii) the number of clusters does not need to be set in advance.

2) Research Progress

a: Clustering Inspired by Quantum Mechanics
i) Parameter Adjustment:

Appropriate parameters have a great impact on the performance of the algorithm. However, in QC the kernel scale parameter often needs to be estimated through repeated experiments. To address this problem, several studies have been carried out. A method for estimating the kernel width parameter is proposed in [158], and [159] proposes a framework to select suitable values of $\sigma $ by optimizing cluster separation and consistency. [160] observes that the potential field can be assimilated with the density of the data and uses the K-nearest-neighbors distribution to estimate the scale parameter. Besides, [161] proposes the parameter-estimated QC to achieve better performance.

ii) Modification on Distance:

In QC, the measured distance between two samples is relatively fixed, thus methods aiming to improve the distance setting have been proposed. Based on a change of the metric distance, [158] proposes an improved QC. In [162], an exponential form is used in the distance function to replace the Euclidean distance, which improves the iteration efficiency and achieves better clustering results.

iii) Combination With Other Algorithms:

According to the specific application, QC is often combined with other methods to solve practical problems. From the view of information theory, [163] combines Renyi entropy with the kernel method. The fuzzy neural network is able to handle non-linear and complex data, but determining the structure of the model is a difficult yet important issue. [164] presents a fusion of the fuzzy C-Means clustering method and QC, in which the structure of the network is determined at different levels. Also, in [165], a quantum state machine equips random fuzzy membership input with the fuzzy C-Means soft clustering algorithm to deal with remotely sensed multi-band image segmentation. Moreover, a quantum local potential function network is discussed in [166], which inherits the outstanding characteristics of QC by constructing the waves and the potential functions.

b: Clustering Based on Quantum Optimization
i) Improvement on Algorithm:

QEA, QPSO, QICA and other optimization methods can be used in the research of clustering problems. Based on the specific mechanism of different algorithms, corresponding improvement ideas can be put forward.

Combined with fuzzy theory, a particle swarm optimization approach is improved for image clustering in [167]. [168] introduces detour distance into QPSO and applies a particle-escaping principle to avoid the phenomenon that the updated cluster center particles sink into the area of the obstacles. To address the problem of predefining the number of clusters in PSO, [169] dynamically updates the number of cluster centroids during the iteration, preventing particles from over-congregating near the boundaries of the solution space and making the algorithm search for optima in different dimensions. Reference [170] models the task of clustering a complex network as a multi-objective optimization and deals with it using QPSO, which is the first attempt to utilize quantum mechanism based discrete PSO to solve network clustering.

When dealing with clustering problems, the slow convergence of QEA has been an important problem compared with other heuristic algorithms. By adding a fast repair facility, [171] sharply accelerates the search process. [172] presents a quantum-inspired method using GA to automatically find the number of clusters when processing image data sets. In previous quantum evolutionary clustering methods, the fixed relationship between the clusters and the data points completely ignores the dataset distribution. Taking this factor into consideration, [173] modifies the function evaluating the degrees of belonging by absorbing inspiration from possibilistic clustering. Based on the clonal selection principle and the immunodominance theory, an immunodominance operator is introduced into the clonal selection process in [174], which gains priori knowledge online and realizes information sharing among different individuals. [175] combines the quantum clustering method with a multi-elitist immune algorithm to avoid getting stuck in local extremes. Besides, by embedding a potential evolution function into the affinity calculation of multi-elitist immune clonal optimization, [95] applies QC to image segmentation.

ii) Combination With Other Algorithms:

Compared with other clustering algorithms, swarm intelligence possesses the ability that it can quickly converge to the global optimum and effectively avoid falling into the local solution. With this advantage, QPSO-based quantum clustering can be easily combined with other algorithms. [176] proposes to combine QPSO with K-Medoids, which introduces the rapid global convergence of QPSO to separate the global clusters firstly and then find the optimal exact solutions by K-Medoids. In [177], QPSO is coupled with the fuzzy C-Means clustering algorithm. The global search ability of QPSO assists in avoiding stagnation in local optima while the soft clustering of FCM helps a lot in partitioning data based on membership probabilities.

In addition, the combination strategy also includes using QC as the pre-processing or post-processing method to achieve more accurate classification results according to the actual application scenarios. For example, [38], [178] combine QC with fuzzy C-Means clustering and use QC to evolve different values which are necessary to be known in advance to perform clustering process using fuzzy C-Means.

iii) Summary:

This part explains QC from two aspects: clustering based on quantum optimization and clustering inspired by quantum mechanics. Clustering based on quantum optimization regards clustering as an optimization problem with certain criteria, then conducts clustering by mature optimization methods such as QEA and QPSO; the main hotspot is the selection of clustering criteria. Clustering inspired by quantum mechanics uses the gradient descent method to find the minima of the quantum potential energy so as to determine the cluster centers. This method makes full use of quantum mechanics, draws on the particle distribution in quantum space, and fully exploits the potential information of the data. In addition, the determination of the wave function and the parameter setting of the model play an important role in the clustering process and still need further exploration.

SECTION IV.

Typical Applications

In this part, several typical applications of quantum optimization and quantum learning are presented from the perspective of experimental proof. Through the modeling of different problems, practical problems are converted to be solved by quantum optimization or quantum learning methods. The experimental results also show that algorithms based on quantum mechanism achieve better results and possess great application potential.

A. QWEA Applied to SAR Image Segmentation

The goal of segmentation is to partition an image into disjoint regions. In this part, the problem based on partition clustering is viewed as a combinatorial optimization problem. The improved algorithm (QWEA) firstly uses watershed algorithm to segment the original image into small blocks, then through QEA, the optimal combination is obtained to form the final results.

1) Related Work

Image segmentation is a technology that divides the image into regions with different characteristics and extracts interesting objects. It is a basic content of image understanding. Generally speaking, there are several commonly used methods for image segmentation: region-based segmentation, edge-based segmentation, the combination of region-edge based segmentation and other advanced methods.

Watershed transform is a commonly used image segmentation algorithm with the characteristics of being simple and fast. It can obtain continuous closed edges and make full use of the edge information obtained from gradient surface. Main steps are listed in TABLE 6:

TABLE 6 Main Steps of Watershed Algorithm

However, watershed transform is sensitive to noise, thus leading to over-segmentation easily. After the post-processing of the over-segmentation, unnecessary details can be removed while main parts can be retained [179]. The process of reducing over-segmentation can be regarded as an optimization process, in which the objective can be the criterion of consistency or difference between regions.

2) Modeling the Task

Texture image segmentation can be regarded as a combinatorial optimization problem. After initial segmentation, an appropriate combination of regions is searched for as the clustering result, and the search process can be optimized by QEA. Firstly, the image is segmented by the watershed algorithm to obtain over-segmented regions; secondly, the texture eigenvalues of each region are counted; then, the combination of the texture eigenvalues of these regions is optimized by QEA; at last, the final results are obtained when regions belonging to the same category are merged. The detailed implementation of the above procedure (QWEA) is shown in TABLE 7:

TABLE 7 Procedure of QWEA

3) Experiment

QWEA, a genetic clustering method (GAC) [154], a clustering algorithm based on QEA using texture features (QEAC) and K-Means (KM) are conducted on three texture images respectively and results are shown in FIGURE 4–​FIGURE 6.

FIGURE 4. Results of image segmentation using different algorithms.

FIGURE 5. Results of image segmentation using different algorithms.

FIGURE 6. Results of image segmentation using different algorithms.

The initial population size of QWEA and QEAC is $N=20$ . Parameters of GAC are: population size $ N=20$ , crossover probability $p_{c}=0.75$ , mutation probability $p_{m}=0.1$ . The update threshold of these four algorithms is $\varepsilon =1{0}^{-5}$ and the window size of feature extraction is 7. The size of figures below is $256\times 256$ .

FIGURE 4(a) shows a Ku-band SAR image of the Rio Grande River in the central United States, containing river, vegetation and crop areas. FIGURE 5(a) is a SAR image including river and urban areas. FIGURE 6(a) is an X-SAR sub-image with a resolution of 5 cm and has four regions: a river, urban areas and two types of crops. The initial segmentation using the watershed algorithm is given in (b) and the segmentation results of QWEA, GAC, QEAC and KM are given in (c)-(f) respectively.

FIGURE 4 shows that these methods can divide the river area very well. However, when distinguishing vegetation and crops, results of QEAC, GAC and KM are far inferior to QWEA in continuity and regional consistency. All these methods can separate urban areas and rivers easily in FIGURE 5, yet QWEA maintains better regional consistency and image edges. FIGURE 6(d), (e) and (f) show that GAC, QEAC and KM incorrectly classify crop areas into river while QWEA in FIGURE 6(c) can identify these regions well, and obviously reduce the probability of misidentification.

Experimental results show that compared with other methods, QWEA possesses better region consistency, accurate region edge and less clutters when applied to SAR image.

4) Conclusion

In this part, three algorithms are compared with QWEA: KM, GAC and QEAC. KM (K-Means) is a classical clustering method with a clear and simple procedure; because KM is relatively simple, its results are not ideal. GAC borrows genetic thought, using population search according to the evolutionary mechanism, and is a typical swarm intelligent optimization method. QEAC is based on QEA and texture features, and is also a quantum intelligent algorithm. As quantum intelligent algorithms, QWEA and QEAC produce better segmentation results than GAC and KM. Compared with GAC, QEAC and QWEA absorb quantum ideas and adopt a quantum coding style to make chromosomes carry more information, thus greatly improving the diversity of the population and making it easier to find the global optimal solution. Due to the use of the watershed algorithm to preprocess the data and the extraction of discrete wavelet energy features, QWEA performs better than QEAC. When processing SAR images, QWEA shows better regional consistency, more accurate region edge division, and less clutter than the other methods.

B. Dynamic-Context Cooperative QPSO Applied to Medical Image Segmentation

In QPSO, the way particles are updated determines the performance of the algorithm. In this part, we incorporate a new method that dynamically updates the context vector to update particles.

1) Related Work

a: Cauchy Mutation

In QPSO, the global optimal particle attracts the other particles in the population, as well as the average optimal particle. The Cauchy mutation probability is defined as $p_{m}=1/N$ , where $N$ is the size of the population; it is used to decide whether to mutate the global optimum $gbest$ and the average optimum $mbest$ . The mutation is:\begin{align*} x^{\ast }=&x+\alpha \times m(x) \tag{40}\\ m\left ({ x }\right )=&b\times \frac {1}{\pi }\times \frac {1}{x^{2}+b^{2}}\tag{41}\end{align*} where $b=0.2$ , $m(x)$ is a random variable following the Cauchy distribution of (41) and $x^{\ast }$ is the value after mutation.
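A hedged sketch of the mutation in (40)-(41): a Cauchy variate with scale $b$ can be drawn by the standard inverse-transform $b\tan (\pi (u-1/2))$ for uniform $u$ ; the function name and vector interface are illustrative.

```python
import math
import random

def cauchy_mutation(x, alpha=1.0, b=0.2, rng=random):
    """Cauchy mutation of a position vector, Eqs. (40)-(41): each
    coordinate is perturbed by a Cauchy-distributed variate with scale b,
    drawn via the inverse transform b * tan(pi * (u - 0.5))."""
    return [xi + alpha * b * math.tan(math.pi * (rng.random() - 0.5))
            for xi in x]

random.seed(0)
gbest = [1.0, 2.0, 3.0]
mutated = cauchy_mutation(gbest)
```

In the algorithm this would be applied to $gbest$ and $mbest$ with probability $p_{m}=1/N$ per iteration; the heavy Cauchy tails occasionally produce large jumps that help escape local optima.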

b: Dynamic Selection of Contraction Factor $\alpha$

The convergence rate and the convergence degree of iteration are directly determined by the contraction factor $\alpha $ [180]. According to mathematical statistics, when $\alpha $ is between [0.3, 0.8], the average optimal position of the objective function will change continuously.

When the contraction factor $\alpha $ is between [0.3, 0.8], the dynamic setting of $\alpha $ according to [35] is as follows:\begin{equation*} \alpha =\frac {(\alpha _{1}-\alpha _{2})\times (N_{max}-N)}{N_{max}}+\alpha _{2}\tag{42}\end{equation*} $\alpha _{1}$ and $\alpha _{2}$ are the initial and final values of $\alpha $ respectively, $N$ is the current iteration number and $N_{max}$ is the total number of iterations.
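Eq. (42) is a linear schedule; a one-line sketch (names illustrative):

```python
def contraction_factor(n, n_max, a1=0.8, a2=0.3):
    """Eq. (42): alpha decreases linearly from a1 at iteration 0
    down to a2 at iteration n_max."""
    return (a1 - a2) * (n_max - n) / n_max + a2
```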

c: QPSO Based on Context Collaboration

The main idea of QPSO based on context collaboration (CCQPSO) [181] is: randomly generate multiple probabilities and use the Monte Carlo method to measure and produce multiple individuals; select the best of these particles, compare it with the best individual in the population and choose the better one as the context vector; evaluate the other individuals by comparing each dimension of the particle with the context vector to get the next generation. CCQPSO makes full use of quantum uncertainty through multiple measurements. In addition, the time used increases linearly and the convergence is accelerated. The basic process of CCQPSO is shown in FIGURE 7.

FIGURE 7. The basic process of CCQPSO.
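The dimension-wise evaluation against the context vector can be sketched as follows, using a sphere function as a stand-in objective; this illustrates only the cooperative step, not the full CCQPSO update.

```python
import numpy as np

def sphere(x):
    """Stand-in objective: minimise the sum of squares."""
    return float((x ** 2).sum())

def cooperative_update(particles, context, f=sphere):
    """Dimension-wise cooperation: each particle's coordinates are tried
    one at a time inside the context vector; an improvement is kept."""
    best = context.copy()
    for p in particles:
        for j in range(len(best)):
            trial = best.copy()
            trial[j] = p[j]
            if f(trial) < f(best):
                best = trial
    return best

rng = np.random.default_rng(2)
particles = rng.normal(0, 1, (5, 3))
context = rng.normal(0, 1, 3)
new_context = cooperative_update(particles, context)
```

Because only improving coordinates are accepted, the context vector's objective value never worsens, which is the source of the accelerated convergence described above.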

2) Modeling the Task

In the later iterations of QPSO, diversity inevitably deteriorates because of the aggregation of the population. Generally, at the end of the search, particles converge and the search space becomes limited. In order to escape local optima, we adopt the strategies of Cauchy mutation and dynamic selection of the contraction factor to improve CCQPSO and propose a new algorithm (MCQPSO). The framework of MCQPSO is in TABLE 8:

TABLE 8 The Framework of MCQPSO

3) Experiment

MCQPSO, an improved cooperative QPSO algorithm (SunCQPSO) [182] and a weight based QPSO algorithm (WQPSO) [183] are used to perform multi-threshold segmentation respectively in four CT brain images.

All four brain CT images are $512\times 512$ in size and the number of classes is set to 4. The population size is 20 and the contraction factor ranges from 0.3 to 0.8. For MCQPSO, the number of evolutions is 100 and the number of independent iterations is 10. Besides, the number of cooperative measurements is 5.

The segmentation results of the brain CT images are shown in FIGURE 8–​FIGURE 10, where (a), (b), (c) and (d) represent the original CT image and the MCQPSO, SunCQPSO and WQPSO segmentation results respectively. The segmented parts are marked with red boxes.

FIGURE 8. Results of “CT 872.”

FIGURE 9. Results of “CT 772.”

FIGURE 10. Results of “CT 869.”

As can be seen from the results, MCQPSO performs better than SunCQPSO and WQPSO. In addition, according to the Otsu criterion, the larger the inter-class variance, the more accurate the result; the smaller the variance, the more robust the result. Taking FIGURE 10 as an example, the inter-class variance and the variance are shown in TABLE 9. MCQPSO has a larger inter-class variance and a smaller variance, which shows that the proposed MCQPSO obtains better results. Combining the visual effect and the statistical results, MCQPSO can improve the accuracy of image segmentation very well.

TABLE 9 Results of “CT 869.”

4) Conclusion

WQPSO and SunCQPSO are compared with MCQPSO. All these methods are quantum intelligent algorithms based on reasonable improvements of QPSO. The difference is that WQPSO is an improved QPSO with a weighted mean best position according to the fitness values of the particles: based on an analysis of the mean best position, a linearly increasing weight parameter is introduced to render the importance of particles in the population as they evolve. The idea of collaboration is adopted in SunCQPSO to make particles cooperate more efficiently so that they can update the information of each dimension. MCQPSO is improved on the basis of SunCQPSO and also uses the cooperative mechanism to update particles. The difference is that MCQPSO adopts Cauchy mutation and dynamic selection of the contraction factor $\alpha $ . In addition, MCQPSO conducts multiple measurements to make full use of quantum uncertainty. These operations enable MCQPSO to fully possess quantum properties and search for the optimal solution more efficiently.

C. Quantum-Inspired Immune Clonal Multi-Objective Optimization

Based on the concept and principle of quantum computing, a quantum-inspired immune clonal multi-objective optimization algorithm (QICMOA) is proposed to solve extended 0/1 knapsack problems.

1) Related Work

Multi-objective optimization originates from the design and modeling of practical complex systems. Compared with single-objective optimization, multi-objective optimization is more complex and often needs to optimize several conflicting objectives at the same time. Basically, a multi-objective optimization problem has a set of optimal solutions, whose elements are called Pareto optimal solutions or non-dominated solutions.

The 0/1 knapsack problem is a typical combinatorial optimization problem which can serve as a reference in other fields such as business, cryptography and applied mathematics, and deserves in-depth study. By changing the number of knapsacks, this single-objective problem can be converted into a multi-objective one. Define multi-objective 0/1 knapsack problems with $n$ knapsacks and $m$ items as follows:\begin{align*}&Maximize~F(x)=(f_{1}(x),f_{2}(x),\ldots ,f_{n}(x)) \\&\mathrm {subject~to}~\sum \nolimits _{j=1}^{m} {w_{ij}x_{j} \le }c_{i} \quad i= 1,2,\ldots , n\tag{43}\end{align*} where $f_{i}\left ({ x }\right )=\sum \nolimits _{j=1}^{m} {p_{ij}x_{j}} (i=1,2,\ldots ,n)$ and $x_{j}=1 (j=1,\ldots ,m)$ if item $j$ is selected. $x=(x_{1},x_{2},\ldots ,x_{m})\in \{0,1\}^{m}$ is a binary vector. For a knapsack $i$ with capacity $c_{i}$ , $p_{ij}$ denotes the profit and $w_{ij}$ the weight of item $j$ . The goal of this multi-objective knapsack problem is to search for a set of Pareto solutions which approximate the true Pareto front.
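A small sketch of evaluating (43) and of Pareto dominance in the maximization setting; the data values are invented for illustration.

```python
def evaluate(x, profits, weights, capacities):
    """Eq. (43): objective vector and feasibility of a 0/1 selection x,
    where all n knapsacks share the same item choice."""
    objs = [sum(p * xi for p, xi in zip(row, x)) for row in profits]
    feasible = all(sum(w * xi for w, xi in zip(row, x)) <= c
                   for row, c in zip(weights, capacities))
    return objs, feasible

def dominates(a, b):
    """Pareto dominance for maximisation: a is no worse in every
    objective and strictly better in at least one."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

profits = [[5, 4, 3], [2, 6, 1]]      # p_ij for n = 2 knapsacks, m = 3 items
weights = [[2, 3, 1], [1, 2, 2]]      # w_ij
capacities = [4, 3]                   # c_i
objs, ok = evaluate([1, 0, 1], profits, weights, capacities)
```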

2) Modeling the Task

Quantum immune clonal multi-objective optimization combines the immune dominance concept and antibody clonal selection theory, using qubits to encode dominant antibodies, and conducting clone, recombination and update operations on antibodies with smaller crowding density. Dominant antibodies are able to evolve and their affinities are designed using the crowding distance.

The main loop of quantum-inspired immune clonal multi-objective optimization algorithm (QICMOA) is shown in TABLE 10.

TABLE 10 The Main Loop of QICMOA

3) Experiment

In this part, nine multi-objective 0/1 knapsack problems are solved using SPEA [184], NSGA [185], VEGA [186], NPGA [187] and QICMOA. The test data sets are available from [188], and 2, 3 and 4 knapsacks containing 250, 500 and 750 items are considered respectively in the comparison.

Based on the coverage performance metric, represented by the symbol $I$ , TABLE 11 compares QICMOA (Q) with SPEA (S), NSGA (NS), VEGA (V) and NPGA (NP). For example, $I (Q, S)$ denotes the coverage of the solutions obtained by SPEA by the solutions obtained by QICMOA. The bigger the value of $I (Q, S)$ , the better the solutions obtained by Q.
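The coverage metric $I(A,B)$ is commonly defined as the fraction of solutions in $B$ weakly dominated by at least one solution in $A$ (Zitzler and Thiele's $C$ metric); a sketch under that assumption:

```python
def weakly_dominates(a, b):
    """For maximisation: a is at least as good as b in every objective."""
    return all(ai >= bi for ai, bi in zip(a, b))

def coverage(A, B):
    """Coverage I(A, B): fraction of solutions in B weakly dominated
    by at least one solution in A. I(A, B) = 1 means A covers all of B."""
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)

A = [(4, 4), (5, 2)]
B = [(3, 3), (5, 1), (2, 5)]
c = coverage(A, B)   # (3,3) and (5,1) are covered, (2,5) is not
```

Note that $I$ is not symmetric, which is why TABLE 11 reports both $I(Q,S)$ and $I(S,Q)$ .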

TABLE 11 The Coverage of Two Sets for the Nine 0/1 Knapsack Problems

For 2 knapsacks with 250 items (2–250), TABLE 11 indicates that QICMOA behaves better than the other four algorithms. Specifically, solutions obtained by QICMOA weakly dominate solutions obtained by SPEA and clearly dominate solutions obtained by NSGA, VEGA and NPGA. For 2 knapsacks with 500 and 750 items (2–500 and 2–750), the performance of QICMOA is exceptional compared with the others over 30 independent runs. For 3 knapsacks with 250 items (3–250), QICMOA is better than SPEA, and solutions obtained by NSGA, VEGA and NPGA are clearly weakly dominated by solutions obtained using QICMOA. For 3 knapsacks with 500 and 750 items (3–500 and 3–750), solutions obtained by QICMOA weakly dominate the solutions obtained by the other four algorithms to a certain extent. In particular, for 3 knapsacks with 500 items, all solutions of NPGA and VEGA are clearly weakly dominated by those of QICMOA. For 4 knapsacks with 250 items (4–250), all $I(Q,S)$ are smaller than $I(S,Q)$ , which indicates that SPEA performs better than QICMOA to a certain degree. However, for 4 knapsacks with 500 and 750 items (4–500 and 4–750), QICMOA works better.

4) Conclusion

In this part, four algorithms are compared with QICMOA: SPEA, NSGA, VEGA and NPGA, all of which are classical multi-objective optimization methods. Different from these traditional methods, QICMOA is a heuristic intelligent algorithm built on the ideas of quantum mechanism and immune cloning. Besides, QICMOA absorbs the advantages of common multi-objective optimization methods. The fitness of each Pareto optimal individual is assigned as the average distance of the two Pareto optimal individuals on either side of it along each objective, the crowding distance proposed in NSGA-II. In QICMOA, only the less crowded Pareto optimal individuals in the trade-off front are selected for cloning and recombination. It is an effective combination of quantum intelligent optimization and traditional multi-objective optimization, which explains its strong performance.

D. Quantum Clustering for Community Detection

Inspired by the mechanism of quantum computing, QC is applied to complete the clustering in the feature space to find the corresponding communities in the network space.

1) Related Work

In the real world, many complex systems can be abstractly represented as networks. In the study of the physical significance and mathematical properties of complex networks, it is found that many real networks have a common characteristic - community structure. That is to say, the network is composed of several communities. Internal nodes of a community are closely connected while nodes from different communities are connected relatively sparsely.

Generally, a concrete network can be represented abstractly using a graph $G=\left ({ V,E }\right )$ , which is composed of a point set $V$ and an edge set $E$ . $\omega _{ij}$ denotes the weight of the edge connecting node $i$ and node $j$ , and the connection degree between node $i$ and node $j$ can be expressed as $M_{ij}$ . With the adjacency matrix of nodes expressed as $A$ , the co-adjacency matrix $M$ is:\begin{equation*} M=\left ({ A^{2}+2A+I }\right ).\ast \left ({ A+I }\right )\tag{44}\end{equation*} where $.\ast $ denotes the elementwise product.

When there is no edge between two nodes, the structural similarity is defined as 0; the more common neighbors two nodes have, the bigger their structural similarity. Structural similarity is defined as follows:\begin{equation*} s_{ij}=\frac {M_{ij}}{\sqrt {\left |{ \Gamma \left ({ i }\right ) }\right |\cdot \left |{ \Gamma \left ({ j }\right ) }\right |} }=\frac {M_{ij}}{\sqrt {\sum \nolimits _{u\in \Gamma \left ({ i }\right )} \omega _{iu}^{2}} \cdot \sqrt {\sum \nolimits _{u\in \Gamma \left ({ j }\right )} \omega _{ju}^{2}}}\tag{45}\end{equation*}
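Eqs. (44)-(45) can be sketched as follows, reading $.\ast $ as the elementwise product and $\left |{ \Gamma (i) }\right |=\sum _{u}\omega _{iu}^{2}$ ; this is an assumption-laden illustration, not the paper's exact implementation.

```python
import numpy as np

def structural_similarity(A):
    """Eqs. (44)-(45): co-adjacency M = (A^2 + 2A + I) .* (A + I)
    (elementwise product), normalised by the weighted degree terms."""
    N = A.shape[0]
    I = np.eye(N)
    M = (A @ A + 2 * A + I) * (A + I)      # elementwise mask keeps linked pairs
    deg = np.sqrt((A ** 2).sum(axis=1))    # sqrt(sum_u w_iu^2)
    deg[deg == 0] = 1.0                    # guard against isolated nodes
    return M / np.outer(deg, deg)

# small unweighted graph: a triangle (0,1,2) with a pendant node 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = structural_similarity(A)
```

Unconnected pairs such as nodes 0 and 3 get similarity 0 because the $(A+I)$ factor zeroes that entry of $M$ .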

The structural similarity matrix $S$ can be decomposed as $S=Q\Lambda Q^{T}$ , where $\Lambda $ is a diagonal matrix composed of the eigenvalues $\lambda _{1},\lambda _{2},\cdots ,\lambda _{N}$ ($\lambda _{1}\ge \lambda _{2}\ge \cdots \ge \lambda _{N}$ ), and $Q$ is the corresponding eigenmatrix with eigenvectors $q_{1},q_{2},\cdots ,q_{N}$ as column vectors. Principal component analysis (PCA) is used to obtain the transformed data as follows:\begin{equation*} \Phi _{l}=\Lambda _{l}^{\frac {1}{2}}Q_{l}^{T}\tag{46}\end{equation*}

$\Lambda _{l}$ is a diagonal matrix containing the first $l$ eigenvalues, $Q_{l}$ is a matrix composed of the first $l$ corresponding eigenvectors, and $\Phi _{l}$ is a principal component matrix of shape $\left ({ N\times l }\right )$ (the product in Eq. (46) is $l\times N$ ; its transpose is used so that each row corresponds to one node). $\Phi _{l}$ is used as input for QC, with each row corresponding to a point in the original data set.
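The spectral embedding of Eq. (46) can be sketched as below; the small symmetric matrix stands in for a real structural similarity matrix $S$ :

```python
import numpy as np

# An assumed small symmetric similarity matrix standing in for Eq. (45)'s S.
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])

# Decompose S = Q Lambda Q^T; eigh returns ascending order, so re-sort descending.
eigvals, Q = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
eigvals, Q = eigvals[order], Q[:, order]

l = 2  # number of principal components retained
# Eq. (46): Phi_l = Lambda_l^{1/2} Q_l^T, transposed so each ROW of the
# resulting (N x l) matrix is the embedded point for one node.
Phi_l = (np.diag(np.sqrt(eigvals[:l])) @ Q[:, :l].T).T
print(Phi_l.shape)
```

These rows are the particles that QC subsequently evolves in the feature space.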

An improved QC is used by utilizing a K-nearest-neighbor strategy. The set of nodes adjacent to node $p$ is denoted $\Gamma \left ({ p }\right )$ . It can be assumed that the wave function of a node is only affected by the nodes connected to it. Based on this assumption, the wave function of node $p$ and the potential energy function can be rewritten as follows:\begin{align*} \Psi \left ({ p }\right )=&\sum \nolimits _{p_{i}\in \Gamma \left ({ p }\right )} e^{-\left \|{ p-p_{i} }\right \|^{2}/2\sigma ^{2}} \tag{47}\\ V\left ({ p }\right )=&E+\frac {\left ({ \sigma ^{2}/2 }\right )\nabla ^{2}\psi }{\psi } \\=&E-\frac {d}{2}+\frac {1}{2\sigma ^{2}\psi }\sum \nolimits _{p_{i}\in \Gamma \left ({ p }\right )} {\left \|{ p-p_{i} }\right \|^{2}\exp \left [{ -\frac {\left \|{ p-p_{i} }\right \|^{2}}{2\sigma ^{2}} }\right ]} \tag{48}\end{align*}

The adjacency matrix is sparse: a large number of its elements are zero. In the improved QC, for a network with $N$ nodes, the time complexity of each iteration is reduced to $O\left ({ 2L+N }\right )$ , where $L$ is the total number of edges in the network and $2L+N\ll N^{2}$ . For large-scale networks, only nodes and their adjacency information are taken into account, without interference from other nodes, which greatly reduces the running time of the algorithm. At the same time, the introduction of adjacency information also helps the performance of QC.
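A hedged sketch of evaluating Eqs. (47)-(48) under the neighbor-only assumption; the point coordinates, neighbor lists, and the values of $\sigma$ and $d$ are all assumed toy inputs. Each evaluation touches only $\left |{ \Gamma \left ({ p }\right ) }\right |$ terms, which is where the $O\left ({ 2L+N }\right )$ per-iteration cost comes from:

```python
import numpy as np

sigma, d = 0.5, 2   # assumed kernel width and embedding dimension

# points[i] is the embedded position of node i (a row of Phi_l);
# neighbors[i] lists indices adjacent to node i. Toy path graph 0-1-2.
points = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
neighbors = {0: [1], 1: [0, 2], 2: [1]}

def psi(i):
    """Eq. (47): Gaussian wave function of node i, restricted to Gamma(i)."""
    p = points[i]
    return sum(np.exp(-np.sum((p - points[j]) ** 2) / (2 * sigma ** 2))
               for j in neighbors[i])

def potential(i, E=0.0):
    """Eq. (48): V(p) = E - d/2 + (1 / (2 sigma^2 psi)) sum_j ||p-p_j||^2 exp(.)."""
    p = points[i]
    acc = sum(np.sum((p - points[j]) ** 2)
              * np.exp(-np.sum((p - points[j]) ** 2) / (2 * sigma ** 2))
              for j in neighbors[i])
    return E - d / 2 + acc / (2 * sigma ** 2 * psi(i))

print(potential(0))
```

In the full algorithm, the points would then descend the gradient of $V$ so that members of one community collapse toward a shared potential minimum.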

2) Modeling the Task

A new community detection method based on QC (QCCD) is proposed. Firstly, structural similarity is used to measure the strength of the connections between nodes in the network, and spectral features are extracted to transform the community detection problem into a data clustering problem. Then, QC is applied to complete the clustering in the feature space and find the corresponding communities in the network space. During the QC process, the introduction of node adjacency information not only improves the local analysis ability of the algorithm, but also reduces the time complexity. The procedure of QCCD is given in TABLE 12:

TABLE 12 The Procedure of QCCD
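The stages described above can be condensed into a hedged end-to-end sketch. The gradient step below is a simplified surrogate that moves each point toward its neighbors (ascending the wave function of Eq. (47)) rather than a full descent on $V\left ({ p }\right )$ , and the final grouping of converged points into communities is omitted:

```python
import numpy as np

def qccd(A, l=2, sigma=0.5, steps=20, lr=0.05):
    """Illustrative sketch of the QCCD pipeline (not the authors' exact code).
    A: (N x N) adjacency matrix. Returns the final point positions."""
    N = A.shape[0]
    I = np.eye(N)
    M = (A @ A + 2 * A + I) * (A + I)                  # Eq. (44)
    gamma = (A ** 2).sum(axis=1)                       # degrees for 0/1 weights
    S = M / np.sqrt(np.outer(gamma, gamma))            # Eq. (45)
    w, Q = np.linalg.eigh(S)
    order = np.argsort(w)[::-1][:l]
    # Eq. (46) embedding; abs() guards against tiny negative eigenvalues.
    P = Q[:, order] * np.sqrt(np.abs(w[order]))
    neighbors = [np.flatnonzero(A[i]) for i in range(N)]
    for _ in range(steps):
        grad = np.zeros_like(P)
        for i in range(N):
            for j in neighbors[i]:
                diff = P[i] - P[j]
                grad[i] += diff * np.exp(-diff @ diff / (2 * sigma ** 2))
        P -= lr * grad   # simplified QC step: points drift toward neighbors
    return P

# Assumed toy input: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
P = qccd(A)
print(P.shape)
```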

3) Experiment

Three contrast algorithms are set up: Niu [189] is a spectral method combining QC with the standard cut criterion; Fu [190] uses K-Means to optimize modularity density and discover communities; Newman [191] is a spectral bisection method based on the modularity matrix.

In order to evaluate the results of community division, two widely used evaluation indicators are introduced: normalized mutual information (NMI) [192] and modularity Q [193]. In addition, the Jaccard score (JS) [194] is adopted from the clustering perspective.
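Sketches of two of these indicators, written from their standard definitions (the exact variants used in [192]-[193] may differ in details):

```python
import numpy as np
from collections import Counter
from math import log

def modularity(A, labels):
    """Newman modularity: Q = (1/2m) sum_ij [A_ij - k_i k_j / 2m] delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)
    two_m = A.sum()
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

def nmi(a, b):
    """Normalized mutual information between two label assignments."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    mi = sum(c / n * log(c * n / (pa[x] * pb[y])) for (x, y), c in pab.items())
    ha = -sum(c / n * log(c / n) for c in pa.values())
    hb = -sum(c / n * log(c / n) for c in pb.values())
    return mi / ((ha * hb) ** 0.5) if ha and hb else 1.0

# Identical partitions (up to relabeling) give NMI = 1.
print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))   # 1.0
```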

QCCD is tested on six real world networks: Zachary’s Karate Club [195], Dolphin social network [196], Journal index network [197], American College Football [198], Santa Fe Institute (SFI) [198] and The Science Network [198].

Results of QCCD on the real-world networks are given in TABLE 13. Errors denotes the number of nodes wrongly classified and NC denotes the number of network communities obtained.

TABLE 13 Results of QCCD in Real Network

In TABLE 13, for Karate and Journal, QCCD obtains the correct partition results (NMI = 1). For Football and Dolphins, it also obtains the correct number of communities, with six nodes and one node wrongly classified, respectively. Because SFI and Science have no ground-truth partitions, only NC and Q can be reported.

Distributions of points before and after conducting QC are shown in FIGURE 11. The real network partition is marked with different colors and partitions detected by algorithms are circled. Because a large number of communities exist in Football, its result is not very clear. For the other three networks, nodes in the network are regarded as particles in quantum space, and nodes belonging to the same community eventually move to the same area under QC, which further widens the differences between communities and improves the compactness within each community.

FIGURE 11. - Distribution of points before and after QC, (a)(b)(c)(d) represent results of Karate, Football, Dolphins, and Journal respectively. In addition, (#−1) represents the two-dimensional principal component mapping of the original network and (#−2) represents the final distribution after 20 iterations using QC.

TABLE 14 shows results of QCCD compared with the other three contrast algorithms on four real-world networks. Newman focuses on maximizing the network modularity function Q; on this basis, Fu uses K-Means to optimize the modularity density. Both QCCD and Niu are built on QC, so their discovery of communities depends entirely on the potential information of the samples. TABLE 14 shows that the QC-based algorithms have better community detection ability than Fu and Newman. Comparing QCCD with Niu, both achieve the same results on Dolphins. However, QCCD partitions Karate and Journal correctly, while Niu does not. For Football, the JS and NMI values obtained by Niu are higher than those of QCCD, but Niu divides the network into 15 communities, quite different from the real partition; QCCD obtains 12 communities, with only a minority of nodes misclassified. In summary, the performance of QCCD is better.

TABLE 14 Comparisons of Four Algorithms on Real Networks

4) Conclusion

In this paper, three algorithms are used to compare with QCCD, namely Niu, Fu and Newman. Fu uses K-Means to optimize modularity density and then finds communities; Newman is a spectral bisection method based on the modularity matrix; Niu extracts spectral information based on the standard cut criterion and then completes community detection with QC. QCCD follows a framework similar to spectral clustering: it extracts the feature information of the original network and transforms the community detection problem in a complex network into a clustering problem in data space. Most clustering methods are sensitive to initial values and noise, and can thus easily be trapped in local optima. The focus of QC, however, is the selection of cluster centers, which depends entirely on the potential information of the data itself, and the number of clusters does not need to be preset. Therefore, QC is a suitable clustering choice compared with other clustering methods.

SECTION V.

Conclusion

This paper summarizes the existing quantum algorithms from two aspects: quantum optimization and quantum learning. Firstly, the related concepts and development history of quantum optimization and quantum learning are introduced. Then, classical algorithms are described in detail and their development is summarized. Finally, related experimental evidence is given. As a new kind of algorithm, quantum intelligent algorithms combine the high efficiency of global search with the high parallelism, powerful storage, and computing advantages of quantum computing, which can effectively avoid the shortcomings of traditional intelligent algorithms and improve efficiency. Experiments also show that, compared with traditional intelligent algorithms, quantum intelligent algorithms exhibit strong competitiveness and possess great potential.
