Cluster Synchronization as a Mechanism of Free Recall in Working Memory Networks

This article studies free recall in working memory networks through the lens of cluster synchronization.


I. INTRODUCTION
Synchronization is among the key collective behaviors of many complex networks, including biological networks. A prominent example is a network of biological neurons whose activities govern the underlying mechanisms of motion, cognition, and memory.
The importance of understanding human memory functioning is evident from its central role in our cognitive functions [1], and from its role as the main inspiration behind the development of artificial memory networks [2]. In general, the human memory system can be categorized into short-term and long-term memory modules. Among the short-term modules is working memory (WM), which is responsible for holding and processing information in a temporary fashion in the service of higher-order cognitive tasks, for example, reasoning and decision-making [3].
Among the most important types of memory tasks associated with WM is free recall: the reactivation of a stored memory item (pattern) in the absence of the original stimulus. A classical example is an experiment in which a subject is exposed to a set of words and is then asked to recall them in any order. Empirical studies on human subjects have established free recall as a measure of working memory capacity [4]. Nonetheless, the precise mechanisms underlying free recall dynamics in the human brain are not yet fully understood [5]. Several abstract neuro-computational models [6], as well as more detailed spiking neural network models [7], have been built based on different neurobiological hypotheses, at different levels of abstraction, to account for human experimental data on WM.
Attractor neural networks, dynamical networks that evolve towards a stable pattern, have been employed for understanding the mechanisms of memory [8], [9], [10], [11], [12], [13], including WM [7], and display bursty limit cycle replay of encoded memories [14]. The most studied attractor neural network model is the Hopfield model, capable of storing different patterns as stable equilibria [15], [16]. The abstract Hopfield model represents a memory network with a fully connected graph and symmetric weights, capable of storing only binary values. The model has limitations in its capacity as well as in recalling previously stored patterns [6]. The latter has motivated designs with more than two states, including modular recurrent networks of Potts type [17]. The model in this paper originates from a biologically relevant modular model of Potts type [5]. The network model in [5] is composed of H hypercolumns, each of which consists of a bundle of M elementary units, minicolumns, interacting via lateral inhibition, such that each hypercolumn module acts as a winner-take-all microcircuit.
The modular WM dynamic network, as described above, is supposed to encode, hold, and recall information. Several explanations account for the WM's ability to hold memory patterns during a delay period between learning and recall, either by persistent [18], [19], [20], [21] or bursty activity [14], [22], [23] of neural populations, or a combination of both [24], [25]. The model we consider [5] assumes a fast Hebbian plasticity learning rule, i.e., changes in synaptic strength, combined with slow adaptation in a recurrent neural network. This model allows us to assume constant weights during a very brief holding period for studying the immediate free recall of WM, which is among the standard memory exercises.
In this article, we focus on the post-learning phase, aiming to show the role of synchronization in WM free recall. In fact, the role of sustained synchronization, bursty synchronization, or both in free recall or replay of WM has been intensely discussed in experimental research. In particular, it has been shown that specific areas of the human cortex synchronize when engaged in WM tasks [26], [27]. In this sense, several prior works have considered synchronization as a major player in WM functioning [28], [29], [30], [31]. Considering mathematical analysis, the similarities between attractor and pulse-coupled neural networks have been explored in [11], where the role of temporal synchronization in the associative memory feature of these networks has been shown.

A. MAIN CONTRIBUTIONS
This paper studies immediate free recall of the biologically relevant non-spiking WM model in [5]. We assume that the WM network has been trained to learn an information set composed of several memory patterns. The model assumes that all encoded patterns are orthogonal, i.e., each minicolumn is associated with only one of the patterns, and their couplings are governed by a fast Bayesian-Hebbian plasticity rule (see Section II). We assume that the network has learned a few patterns and that the coupling weights and biases are constant during a very brief post-learning interval, and study the post-learning dynamics. Our contributions are formulating free recall of WM as a cluster synchronization problem, and characterizing bounds on coupling and bias values for achieving this collective behaviour. Cluster synchronization governing the free recall process consists of two main mechanisms: simultaneous activities of minicolumns representing each of the encoded patterns, and time-divided activities of minicolumns representing different patterns. First, for a network composed of H hypercolumns and M minicolumns, using the concept of uniform ultimate boundedness, we characterize coupling conditions under which the relative states of all minicolumns representing an arbitrary pattern are bounded and asymptotically confined within a compact set. We then show that the relative state dynamics of each two minicolumns belonging to the same pattern are in fact input-to-state stable, and continue by proving exponential stability of the synchronized solution for the network (Theorem 1). Second, we prove that assigning non-identical bias values to minicolumns corresponding to different patterns prevents simultaneous reactivation of the encoded patterns (Theorem 2). We then give sufficient conditions for achieving cluster synchronization in a H × M network (Corollary 1). Third, assuming that cluster synchronization is achieved for a H × 2 network, we study the behaviour of the synchronized network to derive conditions under which both of the clusters, i.e., encoded patterns, are activated in turn (Theorem 3). Specifically, we compare the behaviour of two H × 2 networks with identical and non-identical coupling weights and bias values. We obtain bounds on couplings and bias values under which both encoded patterns are recalled. Our analysis shows that having non-identical couplings and bias values for different patterns enhances free recall. Numerical simulations are given to validate our theoretical analysis.
To the best of our knowledge, revealing the mechanisms of WM free recall from the viewpoint of synchronization, as well as applying nonlinear stability theory for this purpose, has not been addressed in the literature. A preliminary result considering a network of H hypercolumns, each composed of 2 minicolumns, communicating over an all-to-all homogeneous network, i.e., with equal coupling weights and bias terms, was presented in our conference paper [32]. In comparison, this article elaborates on the selected model and its biological relevance, considers a H × M heterogeneous network over a regular graph, and uses a new set of techniques, leading to new results and insights.
The rest of this article is organized as follows. Section II reviews the model, explains its biological justifications, presents the problem formulation, and gives the definition of free recall in view of synchronization. The analyses for synchronization are presented in Section III. Section IV presents simulation results, and Section V summarizes the concluding remarks.

B. PRELIMINARIES
Here, we review some preliminaries from nonlinear systems and stability theory [33].
Definition 1 (Definition 4.6 [33]): The solutions of ẋ = f(t, x) are: (1) uniformly ultimately bounded with ultimate bound b if there exist positive constants b and c, independent of t₀ ≥ 0, and for every a ∈ (0, c), there is T = T(a, b) ≥ 0, independent of t₀, such that ‖x(t₀)‖ ≤ a implies ‖x(t)‖ ≤ b for all t ≥ t₀ + T; (2) globally uniformly ultimately bounded if the above holds for arbitrarily large a.
Lemma 1 (Lemma 4.6 [33]): Suppose f(t, x, u) is continuously differentiable and globally Lipschitz in (x, u), uniformly in t. If the unforced system ẋ = f(t, x, 0) has a globally exponentially stable equilibrium point at the origin x = 0, then the system ẋ = f(t, x, u) is input-to-state stable.
Lemma 2 (Lemma 2.1 [33], Poincaré-Bendixson Criterion): For ẋ = f(x), let M be a closed bounded subset of the plane such that: (i) M contains no equilibrium points, or contains only one equilibrium point at which the Jacobian matrix ∂f/∂x has eigenvalues with positive real parts (hence, the equilibrium point is an unstable focus or unstable node); (ii) every trajectory starting in M stays in M for all future time. Then M contains a periodic orbit of ẋ = f(x).

II. MODEL AND PROBLEM FORMULATION
We consider an abstract model of working memory based on a simplification of a (non-spiking) attractor neural network originating from a modular recurrent neural network model [5]. This network is composed of H modules (hypercolumns), each of which consists of M elementary units (minicolumns). In this model, each memory item (pattern) is encoded into H attributes (represented by hypercolumns), each of which is composed of M intervals (represented by minicolumns). Fig. 1 shows a configuration of a network composed of 3 hypercolumns, each containing 3 minicolumns.
The dynamics of each minicolumn s_ij obey (1)-(3) [5], where s_ij ∈ R, a_ij ∈ R, and o_ij ∈ (0, 1) represent the activation, adaptation, and output of minicolumn ij, respectively. Constant parameters τ_m > 0, τ_a > 0, and g_a > 0 denote the time constants and the adaptation gain. We assume that the WM network has been trained to learn an information set composed of several patterns using an exogenous input I_ij. In the learning phase [5] (which is outside the scope of our paper), the weight ω_rl,ij and bias value β_ij are determined, as in (4), in terms of the estimated probabilities of pre-synaptic activation p_r, post-synaptic activation p_ij, and co-activation p, where g_β is a gain parameter, ε is a low cut-off value to avoid log(0), and log_ε x = log max(ε, x). It is assumed that for a very short time period the weights and bias values, ω_rl,ij and β_ij respectively, stay constant. Moreover, the external current I_ij, which is used for training, is set to zero during the free recall process. That is, during the recall, it holds that ω̇_rl,ij = 0, β̇_ij = 0, and I_ij = 0, as stated in (5). In this paper, we study the post-learning dynamics, i.e., the model represented by (1)-(3) subject to the modifications in (5). Following the learning rule in (4), we assume that the minicolumns whose activities are correlated are connected with positive (excitatory) couplings, while the minicolumns whose activities are uncorrelated are connected with negative (inhibitory) couplings, as described in Assumption 3.
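The exact equations (1)-(3) are given in [5] and are not reproduced in this extraction. As a minimal numerical sketch, the following simulation assumes post-learning dynamics of the form τ_m ṡ_ij = β_ij + Σ ω_rl,ij o_rl − g_a a_ij − s_ij, τ_a ȧ_ij = o_ij − a_ij, with a softmax output per hypercolumn; this form, and all parameter values, are our own illustrative assumptions, not taken from Table 1.

```python
import numpy as np

def softmax_rows(S):
    """Softmax across the minicolumns of each hypercolumn (output map)."""
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def simulate(H=4, M=3, steps=5000, dt=1e-3, tau_m=0.01, tau_a=0.5, g_a=1.0,
             w_j=(1.3, 1.1, 1.2), w_inh=0.2, beta=(110.0, 112.0, 115.0), seed=0):
    """Euler integration of an assumed form of the post-learning model:
    tau_m*ds_ij = beta_j + sum_{r!=i} w_rl,ij*o_rl - g_a*a_ij - s_ij,
    tau_a*da_ij = o_ij - a_ij   (hedged reconstruction; see [5] for (1)-(3))."""
    rng = np.random.default_rng(seed)
    S = rng.normal(0.0, 0.1, (H, M))
    A = np.zeros((H, M))
    w_j = np.asarray(w_j)[:M]
    beta = np.asarray(beta)[:M]
    for _ in range(steps):
        O = softmax_rows(S)
        col = O.sum(axis=0)           # total output per pattern
        same = col[None, :] - O       # same-pattern input (r != i, l == j)
        other = (H - 1) - same        # other-pattern input (rows of O sum to 1)
        drive = w_j * same - w_inh * other
        S = S + dt / tau_m * (beta + drive - g_a * A - S)
        A = A + dt / tau_a * (O - A)
    return S, A, softmax_rows(S)
```

With Assumption 1 (orthogonal patterns), minicolumn j of every hypercolumn belongs to pattern ξ_j, so the full coupling matrix collapses to the same-pattern/other-pattern split used above.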
We here introduce two terms: homogeneous and heterogeneous networks. The former refers to a network where all coupling weights are equal; then, based on the Bayesian-Hebbian learning rule (4), all bias values are also equal. The latter, a heterogeneous network, models a more general setting by assuming that minicolumns corresponding to different patterns are coupled with different coupling weights and bias values.

A. ASSUMPTIONS AND BIOLOGICAL RATIONALE OF THE MODEL
Here, we describe the notions of hypercolumn and minicolumn based on some biological justifications.
Minicolumns have been identified as the basic unit of the neocortex: narrow chains of 80 to 250 neurons extending vertically across the cellular layers [34]. The influential research of Hubel & Wiesel [35] describes the hypercolumn as a complete set of minicolumns working together as a small machine [36]. From a morphological perspective, it is a set of neighboring cortical minicolumns, competing dynamically via lateral inhibition and exhibiting an orderly progression of their respective parameters [37].
Following [38], we consider orthogonal encoding, that is, no minicolumns are shared between distinct patterns. Orthogonality between memory patterns serves as a useful abstraction of the very sparse activity in higher-order cortex [40], for instance in studying the storage of many random binary patterns in cortical associative memory [41]. Besides, sparse orthogonal encoding is considered a key element in hippocampal neural activity, impacting the episodic memory capacity that is important in navigation. Examples include experimental findings confirming sparse orthogonal patterns in the mouse retrosplenial cortex involved in spatial navigation [42].
Assumption 1: The stored patterns are orthogonal. That is, each memory pattern engages only one minicolumn in every hypercolumn.
In the rest of the paper, without loss of generality, it is assumed that each pattern ξ_j is represented by the activation of minicolumn j of each hypercolumn i.

B. PROBLEM FORMULATION
We now provide a definition for WM free recall using the notion of network synchronization. Given the post-learning WM model in (1)-(5), under the assumptions in Section II-A, the encoded patterns are recalled in an arbitrary fashion, provided that the three following properties are satisfied: P1 the activities of minicolumns representing pattern ξ_j are synchronized, i.e., their corresponding error dynamics are exponentially stable; P2 the activities of minicolumns representing different patterns do not achieve exact synchronization; P3 the synchronized minicolumns in different clusters alternate in the role of being the winner, i.e., having the maximum level of activity, in their corresponding hypercolumns. The main goal of this paper is to derive sufficient conditions, in the form of lower and upper bounds, on the coupling weights and bias values such that the solutions of the WM network model under analysis satisfy properties P1-P3.
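On simulated trajectories, P1-P3 can be checked with simple numerical proxies. The sketch below is purely illustrative (the proxies and thresholds are ours, not part of the formal definitions): P1 as vanishing within-pattern spread, P2 as non-identical cluster trajectories, and P3 as every pattern winning at some time.

```python
import numpy as np

def recall_diagnostics(S):
    """S: array (T, H, M) of minicolumn activations s_ij over time.
    Returns numerical proxies for properties P1-P3 (illustrative only)."""
    T, H, M = S.shape
    # P1: within-pattern synchronization error (spread across hypercolumns)
    p1 = max(np.ptp(S[:, :, j], axis=1).max() for j in range(M))
    # P2: for each pair of patterns, the cluster-mean trajectories must
    # differ somewhere, i.e., the clusters are not exactly synchronized
    means = S.mean(axis=1)                                   # (T, M)
    pair_gaps = [np.abs(means[:, j] - means[:, l]).max()
                 for j in range(M) for l in range(j + 1, M)]
    p2 = min(pair_gaps)
    # P3: winner alternation: every pattern holds the maximum at some time
    winners = S.argmax(axis=2)                               # (T, H)
    p3 = set(int(w) for w in np.unique(winners)) == set(range(M))
    return p1, p2, p3

# synthetic trajectory: identical minicolumns within a pattern,
# phase-shifted oscillations across patterns
T, H, M = 300, 4, 3
t = np.linspace(0, 2 * np.pi, T)
S = np.stack([np.stack([np.sin(t + 2 * np.pi * j / M)] * H, axis=1)
              for j in range(M)], axis=2)
p1, p2, p3 = recall_diagnostics(S)
```

By construction the synthetic trajectory satisfies all three proxies: zero within-pattern spread, distinct cluster means, and a rotating winner.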
Assumption 2: The underlying graph of the network with the node dynamics in (1)-(3) is connected and undirected, i.e., ω_rl,ij = ω_ij,rl. The graph connecting all minicolumns of each pattern ξ_j is regular, i.e., all nodes in that graph have the same degree.

III. ANALYSIS: SYNCHRONIZATION FOR FREE RECALL
This section presents the stability analysis for the network modeled in (1)-(5). First, we derive sufficient conditions for the synchronization of the minicolumns representing a pattern, e.g., ξ_j, in a H × M network (property P1). We then continue by deriving sufficient conditions under which the M synchronized patterns form M separate clusters. The latter guarantees property P2 and prevents simultaneous reactivation of two of the encoded patterns. Last, we focus on a H × 2 network, assume that the minicolumns of each of the two patterns are synchronized, and study conditions for their recall. Based on properties P2 and P3, the recall translates to alternating possession of the maximum level of activation, i.e., being the winner, between the minicolumns representing pattern ξ_1 and those representing pattern ξ_2. To this aim, we study conditions for the existence of a limit cycle behaviour for the two minicolumns within a hypercolumn in both homogeneous and heterogeneous networks.
Assumption 3: For the network with the node dynamics in (1)-( 3), the absolute value of the coupling weight of each of the inhibitory (negative) connections is equal to ω > 0. All minicolumns representing pattern ξ j are coupled with the weight ω j > 0, and the bias value for their dynamics is equal to β j > 0.
Before we start with the analysis, for brevity of notation, define the relative states s_rl,ij = s_rl − s_ij, a_rl,ij = a_rl − a_ij, o_rl,ij = o_rl − o_ij, and β̃_j,ℓ = β_j − β_ℓ.
Lemma 3: Consider a heterogeneous network with the node dynamics in (1)-(3) under Assumptions 1-3. The dynamics corresponding to the relative state of each two minicolumns representing pattern ξ_j obey (6) and (7).
Theorem 1: Consider a heterogeneous network with node dynamics in (1)-(3) under Assumptions 1-3. The relative state dynamics corresponding to each two minicolumns representing pattern ξ_j, given in (6) and (7): 1) are globally uniformly ultimately bounded in the set S_u, 2) are input-to-state stable, 3) in S_u, possess an exponentially stable equilibrium at the origin.
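Equations (6) and (7) are not reproduced in this extraction. The sketch below illustrates the boundedness and input-to-state stability claims of Theorem 1 on a generic pairwise error system of the assumed form τ_m ė_s = −e_s − g_a e_a + c·u, τ_a ė_a = u − e_a, where u = o_ij − o_kj is the bounded output mismatch (|u| < 1) and c lumps the coupling gains; the form and all constants are our own illustrative assumptions.

```python
import numpy as np

def error_traj(c=1.5, g_a=1.0, tau_m=0.01, tau_a=0.5, dt=1e-4, steps=100000,
               e_s0=10.0, e_a0=0.0, forced=True):
    """Pairwise synchronization-error dynamics driven by a bounded mismatch u."""
    e_s, e_a = e_s0, e_a0
    hist = np.empty(steps)
    for k in range(steps):
        t = k * dt
        # worst-case bounded mismatch |u| < 1 (square wave), or zero input
        u = np.sign(np.sin(20 * t)) * 0.99 if forced else 0.0
        e_s += dt / tau_m * (-e_s - g_a * e_a + c * u)
        e_a += dt / tau_a * (u - e_a)
        hist[k] = e_s
    return hist

forced = error_traj(forced=True)
unforced = error_traj(forced=False)
```

Consistent with the theorem, the forced trajectory enters and stays in a compact set whose size scales with c and g_a (the role played by S_u), while with u ≡ 0 the error decays exponentially to the origin, as input-to-state stability requires.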

REACTIVATION OF MULTIPLE PATTERNS
Theorem 1 establishes the synchronization of the minicolumns representing pattern ξ_j. However, the recall of multiple encoded patterns requires the satisfaction of properties P2 and P3 as well. In what follows, we first consider a H × M network and prove that property P2 is satisfied if the bias values corresponding to different patterns are non-identical. As mentioned before, based on the learning rule in (4), the latter requires that the coupling weights are different for different encoded patterns. We then combine the results of Theorems 1 and 2 and present sufficient conditions for cluster synchronization in a H × M network.
Theorem 2: Consider a heterogeneous network with node dynamics in (1)-(3) under Assumptions 1-3. The relative state dynamics, ṡ_ij,kℓ(t), corresponding to two minicolumns ij and kℓ representing two different patterns ξ_j and ξ_ℓ do not possess an equilibrium at the origin, provided that η|ω_j − ω_ℓ| ≠ |β̃_j,ℓ| and β_j ≠ β_ℓ, ∀ j ≠ ℓ, with η = H − 1.
Corollary 1: Consider a heterogeneous network with node dynamics in (1)-(3) under Assumptions 1 and 2. Sufficient conditions guaranteeing that the solution of the network system converges to M locally exponentially stable synchronous clusters are: (1) the coupling weights of all inhibitory connections are equal to −ω < 0; (2) for the network corresponding to each pattern ξ_j, the positive coupling weights are equal to ω_j > 0 and the bias values are equal to β_j > 0; (3) the coupling weights and bias values corresponding to the minicolumns of each two patterns ξ_j and ξ_ℓ satisfy the conditions of Theorem 2.
Corollary 1 presents sufficient conditions for cluster synchronization, properties P1 and P2, of multiple encoded patterns in the WM network modeled in (1)-(3). We now study conditions under which a H × 2 network satisfies all properties P1-P3. We denote the two encoded patterns by ξ_1 and ξ_2, and assume that the minicolumns representing pattern ξ_1 (ξ_2) are synchronized. In order to verify whether or not each of the two encoded patterns is recalled, we write the relative state dynamics between two minicolumns i1 and i2 in an arbitrary hypercolumn i, representing patterns ξ_1 and ξ_2. The relative dynamics across the two patterns obey (9) and (10), where v_1 = η(ω + ω_1), v_2 = η(ω + ω_2), β̃_1,2 = β_1 − β_2, and η = H − 1. Notice that based on the definition of the output o_ij in (3), i.e., o_ij = e^{s_ij}/Σ_h e^{s_ih}, we can write o_i1 and o_i2 as functions of the relative state s_i1,i2. Replacing the above in (9) and (10) gives (14) and (15). In what follows, we study the properties of the equilibria of the system in (14) and (15), considering both homogeneous and heterogeneous networks (see Section II).
Theorem 3: Consider a synchronized network of H hypercolumns and M = 2 minicolumns with the relative dynamics in (14) and (15). The set M in (16), where γ_1 = v_2 + g_a + β̃_1,2 and η = H − 1, is forward invariant for the system. Moreover, the system in (14) and (15) has at most three equilibria, unless both v_1 − v_2 = −2β̃_1,2 and v_1 + v_2 = 2g_a + 4 hold. Furthermore: 1) for the homogeneous network, if the time constants τ_m and τ_a satisfy the condition derived in [32], the relative state dynamics in (14) and (15) exhibit a limit cycle behaviour; 2) for the heterogeneous network, given the network parameters v_1, v_2, τ_m, τ_a, g_a, there exists β̃_1,2 ≠ 0 such that the relative state dynamics in (14) and (15) have a unique equilibrium, s*_1,2, in M. Moreover, if the Jacobian matrix J in (17) at the equilibrium s*_1,2 has at least one unstable eigenvalue, then the system in (14) and (15) exhibits a limit cycle behaviour.
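For M = 2 the substitution invoked above can be written out explicitly: with s := s_i1,i2, the softmax output (3) reduces to a logistic function of the relative state,

```latex
o_{i1} = \frac{e^{s_{i1}}}{e^{s_{i1}} + e^{s_{i2}}} = \frac{1}{1 + e^{-s}}, \qquad
o_{i2} = 1 - o_{i1}, \qquad
o_{i1,i2} = o_{i1} - o_{i2} = \tanh\!\left(\frac{s}{2}\right).
```

This is why (9) and (10) collapse to a planar system in (s_i1,i2, a_i1,i2), and why, at an equilibrium, s = v_1 o_i1 − v_2 o_i2 − g_a o_i1,i2 + β̃_1,2, i.e., the intersection condition y_1 = y_2 used in Section IV and Appendix D.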
Based on the above result, non-identical bias values change the location of the equilibrium of the system in (14) and (15), creating a unique equilibrium which can be made unstable by tuning the coupling weights and the other network parameters. Based on the results of Theorem 3, we expect that the conditions of Theorem 2, together with those of Theorem 3, guarantee the recall of at least two patterns in a H × M network.

A. DISCUSSION: LIMITATIONS AND FUTURE WORK
The biological plausibility of the model, originally derived to explain human behavioral experiments in WM immediate free recall tasks [23], is discussed in Section II-A and the references therein. This article studies a non-spiking model. Among the important future directions is the analysis of more detailed network dynamics, for instance, spiking models as well as models derived from measured data in WM tasks. This article assumes that the coupling weights, learned during the encoding phase, stay constant during the recall task. This assumption is justified owing to the immediate nature of the recall task considered in this article. In our mathematical analysis, we have assumed that the network topology is a regular graph. We envision that relaxing the aforementioned assumption to general undirected graphs, which is left as future work, will not affect the main message of this article, i.e., cluster synchronization as the WM's free recall mechanism. This article has studied two possible learning settings: homogeneous and heterogeneous networks. For the latter, we have assumed that the coupling weights and bias terms corresponding to each encoded pattern are equal. This assumption is inspired by the Bayesian-Hebbian learning rule (4), which depends on the probability of simultaneous activation of the minicolumns corresponding to a pattern; we have thus approximated the excitatory connections between minicolumns of an encoded pattern by equal coupling weights. However, the analysis of a more general heterogeneous network, in particular in the presence of uncertainties [13], is an interesting future direction.
An important validation approach for the model and our analysis is through experiments. A closely related, but spiking, WM model capable of capturing the temporal dynamics of a realistic attractor memory network based on experimental data has been discussed in [43]. For the model of this paper, experimental validation is an open future direction.
In a nutshell, our mathematical analysis sheds light on the WM model's immediate free recall mechanism. We have shown that the adaptation dynamics, the size and sign of the weights, the bias values, and the properties of the coupling function determine the network's behavior. Stability analysis reveals which assumptions and dynamics are important in achieving a particular behavior. This understanding opens a future avenue for tuning and control.

IV. SIMULATION RESULTS
To validate our main theoretical results, we consider a network of H = 4 hypercolumns, each composed of M = 3 minicolumns. The minicolumns representing each pattern are interconnected with an all-to-all topology. The model parameters are provided in Table 1. The time constants and the adaptation gain are set according to [5].
In Fig. 2 the results of Theorems 1 and 2 are verified. The coupling weights are set to ω_1 = 1.3, ω_2 = 1.1, ω_3 = 1.2, ω = 0.2, and the bias values to β_1 = 110, β_2 = 112, β_3 = 115. It is assumed that three distinct patterns have been presented to the network during the training phase. The simulations in Fig. 2(a)-(c) show that the positively correlated minicolumns corresponding to each of the three patterns asymptotically synchronize. In Fig. 2(d) we report the time-evolution of the states of a single hypercolumn: every minicolumn is activated in turn. The winner-take-all behaviour is clear at the output level, with each minicolumn having the highest level of activation for a short period of time (≈ 150 ms), after which its level of activation decreases and the other minicolumns are activated (Fig. 3(b)). These simulations confirm that all three different patterns are effectively stored in the network and recalled in the form of a limit cycle (Fig. 3(a)).
We now verify the results of Theorem 3 for a 4 × 2 network, assuming that two synchronized clusters have been formed, representing the two patterns. Considering the dynamics in (14) and (15), define y_1 = s_i1,i2 and y_2 = v_1 o_i1 − v_2 o_i2 − g_a o_i1,i2 + β̃_1,2. The equilibria of the system (14) and (15) are located at the intersections of y_1 and y_2. For the heterogeneous network, the pair (v_1, v_2) is set to (11, 12), and β̃_1,2 = 6. The unique equilibrium is at s*_i1,i2 = 0.13. Computation of the eigenvalues of the Jacobian matrix in (17) gives (−20.4, 64.7), hence an unstable equilibrium. Figs. 4(a) and 5(a) show the system's unique equilibrium and its corresponding limit cycle. For a homogeneous network, we set v = 11 and β̃_1,2 = 0. We verify the presence of a unique equilibrium in Fig. 4(b) and the resulting limit cycles in Fig. 5(b).
Fig. 6 further illustrates the effect of tuning the relative bias term β̃_1,2 on the location of the equilibrium point for the heterogeneous network. For the system represented in the left plot, the parameter set (v_1, v_2, g_a, β̃_1,2) is set to (11, 12, 1, 1.8). Since this setting gives three equilibria, β̃_1,2 is then increased to 14. The latter results in a unique equilibrium at s*_i1,i2 = 24. Computation of the eigenvalues of the Jacobian matrix in (17) gives (0, 3.13). Thus, we obtain an unstable equilibrium leading to a limit cycle behaviour.
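The effect shown in Fig. 6 can be reproduced directly from the intersection condition y_1 = y_2, using the expression of y_2 stated in Appendix D (for M = 2, o_i1 reduces to a logistic function of the relative state). The grid and tolerances below are our own choices:

```python
import numpy as np

def sigma(s):
    return 1.0 / (1.0 + np.exp(-s))

def y2(s, v1, v2, g_a, beta12):
    # y2 = v1*o_i1 - v2*o_i2 - g_a*o_{i1,i2} + beta_{1,2}, with o_i1 = sigma(s)
    return v1 * sigma(s) - v2 * sigma(-s) - g_a * (sigma(s) - sigma(-s)) + beta12

def equilibria(v1, v2, g_a, beta12, lo=-50.0, hi=50.0, n=200001):
    """Equilibria of the planar system are the intersections y1(s) = s with y2(s)."""
    s = np.linspace(lo, hi, n)
    f = y2(s, v1, v2, g_a, beta12) - s
    crossings = np.flatnonzero(np.diff(np.sign(f)) != 0)
    return s[crossings]          # grid point just before each sign change

three = equilibria(11, 12, 1, 1.8)   # parameter set of Fig. 6, left plot
one = equilibria(11, 12, 1, 14.0)    # increased relative bias, right plot
```

The first setting yields three intersections and the second a single one near s* = 24, matching the values reported above.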

V. CONCLUSION
We studied free recall of an abstract, yet biologically plausible, dynamic network model of working memory (WM). We have shown that WM free recall is a cluster synchronization problem, i.e., simultaneous activities of minicolumns representing an encoded pattern, together with time-divided activities of minicolumns representing different patterns. Using nonlinear systems and stability theory, bounds on coupling and bias values were derived that guarantee M synchronized clusters for a H × M network. Furthermore, for a H × 2 network, we studied cluster synchronization assuming both identical and non-identical coupling and bias values, and derived conditions under which both of the encoded patterns are recalled. The latter result provides insight into network tuning for the recall of multiple patterns in a H × M network. Our analysis also shows that having non-identical couplings and bias values for different patterns increases the chance of their free recall. Future avenues include extending the mathematical framework for the heterogeneous network, as well as establishing connections with experimental results. Further, the effect of uncertainties on the given model deserves investigation.

APPENDIX A PROOF OF LEMMA 3
Proof: Recall (1) and (5). Compute ṡ_ij,kj. Based on Assumption 3, all inhibitory (negative) couplings have the same weight and all minicolumns representing pattern ξ_j are coupled with the weight ω_j; hence (−ω_ij,rℓ + ω_kj,rℓ) o_rℓ = 0 for r ≠ i, k and ℓ ≠ j (20), where ω_ij,rj = ω_kj,rj = ω_j and ω_ij,rℓ = ω_kj,rℓ = −ω. Applying the above to the dynamics of ṡ_ij,kj gives (21). Based on the definition of the output in (3), we have the identity in (23). Substituting (23) into (21) gives (6), which completes the proof.

APPENDIX B PROOF OF THEOREM 1
Proof: Following Definition 4.6 and Theorem 4.18 in [33], define V(s, a), a continuous, positive, and radially unbounded function. First, we rewrite V(s, a) = Σ_{i,k,j} V_ij,kj. Calculating V̇_ij,kj = τ_m ṡ_ij,kj s_ij,kj + τ_a ȧ_ij,kj a_ij,kj, and using o_ij ∈ (0, 1), so that −1 < o_ij,kj < 1, it holds, with ω̄ = ω + ω_j, that V̇_ij,kj < 0 provided |s_ij,kj| > 2ω̄ and |a_ij,kj| > 2g_a are satisfied. Since V̇_ij,kj < 0, then V̇(s, a) ≤ 0 is also satisfied. Hence, the solutions of (6) and (7) are ultimately bounded with respect to S_u. (2) The system (6) and (7) can be written as ẋ = Ax + Bu, with u = o_ij,kj, |u| < 1. If u = 0, the origin is globally exponentially stable, and since u is bounded, input-to-state stability is concluded (see Lemma 4.18 [33]). The solutions' bound is exactly the one characterized above.
(3) Considering the dynamics in (6), the nonlinear term is o_ij,kj = o_ij − o_kj, which takes values in the interval (−1, 1). In fact, it holds that |o_ij,kj| ≤ |tanh(s_ij,kj)|. Replacing o_ij,kj by tanh(s_ij,kj) in (6) and (7), and linearizing the system around the equilibrium (0, 0), gives the linear system in (28), where x = [s_ij,kj, a_ij,kj]ᵀ. The linearized system is exponentially stable if the matrix in (28) is Hurwitz (see Theorem 4.13 [33]), which is guaranteed since the network's parameters are positive.

APPENDIX C PROOF OF THEOREM 2
Proof: First, we write the relative state dynamics between two minicolumns ij and kℓ, in hypercolumns i and k, representing patterns ξ_j and ξ_ℓ, as in (29) and (30). Now, assume exact synchronization for the minicolumns in pattern ξ_j, as well as for pattern ξ_ℓ. The aim is to show that if the minicolumns of one of these patterns are the winners at time t, they are the only winners; that is, no two patterns are recalled simultaneously. Assuming exact synchronization for patterns j and ℓ, (29) and (30) reduce to a simplified pair of relative dynamics, with η = H − 1. In case a simultaneous reactivation of these two patterns occurs, o_ij,kℓ = 0 should hold. Thus, the equilibrium of the second dynamics is a*_ij,kℓ = 0. Now, since o_ij = o_kℓ, if it holds that η|ω_j − ω_ℓ| ≠ |β̃_j,ℓ|, the equilibrium of the first dynamics cannot be the origin. Thus, s_ij,kℓ is stabilized at a non-zero value. Since j and ℓ are arbitrary, this holds for any two minicolumns in different patterns, including each two minicolumns in the same hypercolumn (due to orthogonality). Thus, s_ij,kℓ ≠ 0 gives o_ij,kℓ ≠ 0, which contradicts the assumption o_ij,kℓ = 0 and completes the proof.

APPENDIX D PROOF OF THEOREM 3
Proof: In the first part, we show that the trajectories converge to a forward invariant set. Consider a quadratic Lyapunov function V in the relative states. Calculating the derivative of V shows that V̇ < 0 outside the set M in (16); hence M is forward invariant [33]. We now discuss the existence and stability of equilibria within this set. Consider the dynamics in (14) and (15). The equilibria are located at the intersections of the two functions y_1 = s_i1,i2 and y_2 = v_1 o_i1 − v_2 o_i2 − g_a o_i1,i2 + β̃_1,2. The system may have infinitely many equilibria if it has an equilibrium at the origin, i.e., y_1(0) = y_2(0), and it holds that dy_1/ds_i1,i2(0) = dy_2/ds_i1,i2(0) = 1. These conditions are satisfied if both v_1 − v_2 = −2β̃_1,2 and v_1 + v_2 − 2g_a = 4 hold. Otherwise, since y_2 is a monotone function (see below for the proof) with two bounded limits, similar to a tanh(·) function, the system (14) and (15) has at most three equilibria. We now focus on the conditions which give a limit cycle (LC) as an attractor of the system for the two cases: homogeneous and heterogeneous networks. For the homogeneous network, we refer to our result in [32]. In brief, the conditions given in part (1) of Theorem 3 guarantee that the linearized system has a conjugate pair of imaginary eigenvalues, leading to a Hopf bifurcation.
For the heterogeneous network, we first argue that for a given set of network parameters, there exists β̃_1,2 ≠ 0 such that the system has only one equilibrium. Calculating the derivative of the function y_2 = v_1 o_i1 − v_2 o_i2 − g_a o_i1,i2 + β̃_1,2, we obtain dy_2/ds_i1,i2 = e^{s_i1,i2}/(1 + e^{s_i1,i2})² (v_1 + v_2 − 2g_a). Given v_1, v_2, g_a, the above derivative is either positive or negative for all s_i1,i2; hence y_2 is a monotone function, which crosses the line y_1 = s_i1,i2 at no more than three points. Thus, there exists a bias term β̃_1,2 such that the number of equilibria is reduced to one by vertically shifting y_2. Now, calculating the Jacobian of the system (14) and (15) around this equilibrium point gives the matrix in (17). Provided that at least one of the eigenvalues of the Jacobian matrix J has a positive real part, based on the Poincaré-Bendixson Criterion [33], the system has a periodic solution, hence a limit cycle, which ends the proof.
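The monotonicity argument can be sanity-checked numerically; the sketch below compares a central finite difference of y_2 against the closed-form derivative above (the parameter values are arbitrary illustrations):

```python
import numpy as np

def sigma(s):
    return 1.0 / (1.0 + np.exp(-s))

def y2(s, v1, v2, g_a, beta12):
    # y2 = v1*o_i1 - v2*o_i2 - g_a*o_{i1,i2} + beta_{1,2}, with o_i1 = sigma(s)
    return v1 * sigma(s) - v2 * sigma(-s) - g_a * (sigma(s) - sigma(-s)) + beta12

v1, v2, g_a, beta12 = 11.0, 12.0, 1.0, 6.0
s = np.linspace(-5.0, 5.0, 101)
h = 1e-6
numeric = (y2(s + h, v1, v2, g_a, beta12) - y2(s - h, v1, v2, g_a, beta12)) / (2 * h)
closed = np.exp(s) / (1.0 + np.exp(s)) ** 2 * (v1 + v2 - 2.0 * g_a)
max_err = np.abs(numeric - closed).max()
```

Since v_1 + v_2 − 2g_a > 0 for these values, the derivative is strictly positive and y_2 is increasing, as used in the proof.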

FIGURE 1. Network of three hypercolumns, each composed of three minicolumns, depicted as circles. Memory patterns are color-coded in blue, green, and red. Each minicolumn is positively (excitatory) coupled to the minicolumns in other hypercolumns if their activities are correlated, depicted using colored lines. Negative (inhibitory) couplings are shown as grey, dashed lines.

FIGURE 2. Simulation results of a heterogeneous network of H = 4 hypercolumns, each composed of M = 3 minicolumns. Each plot (a), (b), (c) corresponds to all the correlated minicolumns belonging to the same pattern across all the hypercolumns. Plot (d) shows the time-evolution of all minicolumns of one hypercolumn.

FIGURE 4. System in (14) and (15): the equilibria are at the intersections of y_1 and y_2. Plots (a) and (b) illustrate the location of the equilibrium for the heterogeneous and homogeneous networks, respectively.

FIGURE 6. System in (14) and (15) for a heterogeneous network with three equilibria (left plot).By increasing the relative bias value β 1,2 , a unique equilibrium is obtained (right plot).