Control Flow in Active Inference Systems—Part I: Classical and Quantum Formulations of Active Inference

Living systems face both environmental complexity and limited access to free-energy resources. Survival under these conditions requires a control system that can activate, or deploy, available perception and action resources in a context-specific way. In this Part I, we introduce the free-energy principle (FEP) and the idea of active inference as Bayesian prediction-error minimization, and show how the control problem arises in active inference systems. We then review classical and quantum formulations of the FEP, with the former being the classical limit of the latter. In the accompanying Part II, we show that when systems are described as executing active inference driven by the FEP, their control flow systems can always be represented as tensor networks (TNs). We show how TNs as control systems can be implemented within the general framework of quantum topological neural networks, and discuss the implications of these results for modeling biological systems at multiple scales.


Introduction
Living things offer remarkable examples of complex, multi-level control policies that guide adaptive function at several scales. At the same time, they are made of components which are usually thought of as physical objects obeying simple rules; how can these two perspectives be unified in a rigorous manner? The framework of active inference answers this question by providing a completely general, scale-free formal framework for describing interactions between physical systems in cognitive terms. It is based on the Free Energy Principle (FEP), first introduced in neuroscience [1,2,3,4,5] before being extended to living systems in general [6,7,8,9] and then to all self-organizing systems [10,11,12,13]. The FEP states that any system that interacts with its environment weakly enough to maintain its identifiability over time 1) has a Markov blanket (MB) that separates its internal states from the states of its environment [14,15,16,17,18], and 2) behaves over time in a way that asymptotically minimizes a variational free energy (VFE) measured at its MB. Equivalently, the FEP states that any system with a non-equilibrium steady-state (NESS) solution to its density dynamics (and hence an MB) will act so as to maintain its state in the vicinity of its NESS. Any system compliant with the FEP can be described as engaging, at all times, in active inference: a cyclic process in which the system observes its environment, updates its probabilistic "Bayesian beliefs" (i.e., posterior or conditional probability densities) over future behaviors, and acts on its environment so as to test its predictions and gain additional information. The internal dynamics of such a system can be described as inverting a generative model (GM) of its environment that furnishes predictions of the consequences of its actions on its MB.
As a fully general principle, the FEP applies to all physical systems, not just to behaviorally interesting, plausibly cognitive systems such as organisms or autonomous robots [10]. Intuitively, behavior is interesting - to external observers and, we can assume, to the behaving system itself - when it is complex, situation-appropriate, and robust in the face of changing environmental conditions. Friston et al. [13] characterize interesting systems as "strange particles", whose internal (i.e., cognitive) states are influenced by their actions only via perceived environmental responses; such systems have to "ask questions" of their environments in order to get answers [19]. Such systems, even bacteria and other basal organisms [20,21,22,23], have multiple ways of observing and acting upon their environments and deploy these resources in context-sensitive ways. In operations-research language, they exhibit situational awareness, i.e., awareness of the context of actions [24], and deploy attention systems to manage the informational, thermodynamic, and metabolic costs of maintaining such awareness [12,22]. Situational awareness depends on both short- and long-term memory, or more technically, on the period of time over which precise [Bayesian] beliefs exist, sometimes referred to as the temporal depth or horizon of the GM [20,21]. Upper limits can, therefore, be placed on behavioral complexity by examining the capacity and control of memory systems from the cellular scale [25] upwards. Living systems from microbial mats to human societies employ stigmergic memories [22] and hence have "extended minds" [26] in the sense of the literature on embodied, embedded, enactive, extended, and affective (4EA) cognition [27,28]. Such memories must be both readable and writable; hence any system using them must have dedicated, memory-specific perception-action capabilities.
Any system with multiple perception-action (or stimulus-response) capabilities requires a control system that enables context-guided perception and action and precludes the continuous, simultaneous deployment of all available perception-action capabilities. Such self-organization entails the selection of a particular course of action - i.e., a policy - from all plausible policies entertained by the system's GM. In the active inference framework, the system's internal states - hence its GM - can be read as encoding posterior probability densities (i.e., Bayesian beliefs) over the causes of its sensory states, including, crucially, its own actions. This leads to the notion of planning and control as inference [29,30,31], with the ensuing selection of an action given by the most likely policy. In bacteria such as E. coli, for example, mutual inhibition between gene regulatory networks (GRNs) for different metabolic operons permits the expression of specific carbon-source (e.g., sugar) metabolism pathways only when the target carbon source is detected in the environment [32]. The control of foraging behavior via chemotaxis employs a similar, in this case bistable, mechanism [33]. Such mechanisms are active in multicellular morphogenesis, for example, in the head-versus-tail morphology decision in planaria [34]. In the human brain, mutual inhibition between competing visual processing streams is evident in binocular rivalry (switching between distinct scenes presented to left and right eyes) or in the changing interpretations of ambiguous figures such as the Necker cube [35,36]; similar competitive effects are observed in other sensory pathways [37]. It also characterizes the competitive interaction between the dorsal and ventral attention systems, which implement top-down and bottom-up targeting of sensory resources, respectively [38]. It is invoked at a still larger scale in global workspace models of conscious processing, in which incoming information streams must compete, with each
inhibiting the others, for "access to consciousness" [39,40]. Mutual inhibition creates an energetic barrier that the control system implementing switching must expend free-energy resources to overcome; the controller must not only turn "on" the preferred system, but also turn "off" the inhibition. The required free-energy expenditure in turn induces hysteresis, and hence the non-linear, winner-takes-all "switch" behavior in the time domain. Such barriers and their temporal consequences persist in more complex control systems whenever two perception-action capabilities are either functionally incompatible or too expensive to deploy simultaneously.
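The winner-takes-all character of mutual inhibition can be illustrated with a toy two-unit circuit. The following sketch is purely illustrative - the sigmoidal activation, the gain, the inhibition weight, and the drive values are all assumptions, not parameters from the text - but it shows the key point: under identical inputs, the circuit's steady state depends on which unit was active before, i.e., on its history.

```python
import numpy as np

def sigmoid(u, gain=8.0):
    return 1.0 / (1.0 + np.exp(-gain * u))

def settle(drive_x, drive_y, x, y, w=1.5, dt=0.01, steps=5000):
    """Relax a two-unit mutual-inhibition circuit to a fixed point."""
    for _ in range(steps):
        dx = -x + sigmoid(drive_x - w * y)   # x is inhibited by y
        dy = -y + sigmoid(drive_y - w * x)   # y is inhibited by x
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Identical drives, different histories: the circuit stays with whichever
# unit was already active -- a winner-takes-all memory of past context.
x1, y1 = settle(1.0, 1.0, x=1.0, y=0.0)
x2, y2 = settle(1.0, 1.0, x=0.0, y=1.0)
print(f"history A: x={x1:.3f}, y={y1:.3f}")   # x wins
print(f"history B: x={x2:.3f}, y={y2:.3f}")   # y wins
```

The cross-inhibition term `w` plays the role of the energetic barrier in the text: the larger it is, the stronger the external drive needed to flip the circuit from one attractor to the other.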
Switching between perception-action capabilities can be regarded, from a theoretical, FEP perspective, as selecting a plausible policy, or plan, supported by the GM. Technically, the probability distribution over policies or plans can be computed from a free energy functional expected under the posterior predictive density over possible outcomes, as described in §2.1 below. The control system that implements the switching process can be considered to employ the GM to predict, or assign a probability distribution to, each perception-action capability (i.e., policy) as a function of context [41,42]. We can consider the GM to generate probabilistic "beliefs" about the consequences of actions, where here a "belief" is just a mathematically described structure, e.g., a classical conditional probability density or a quantum state with an assigned amplitude. "Planning" or "control" can, therefore, always be cast as inference - again in the basal sense of computation - implemented by variational message passing or "belief propagation" on a (normal-style) factor graph: a graph with nodes corresponding to the factors of a probability distribution and undirected edges corresponding to message-passing channels. Factor graphs can be combined with message-passing schemes, with the messages generally corresponding to sufficient statistics of the factors in question, to provide an efficient computation of functions such as marginal densities [43,44]. Hence one can formalize control - under the FEP - in terms of control as inference, which implies that there is a description of control in terms of message passing on a factor graph. When the GM is over discrete states, this implies a description of control in terms of tensor operators.
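Belief propagation on a factor graph can be made concrete in a few lines. The sketch below is illustrative - the three-node chain and all probability values are assumptions chosen for the example - and computes a posterior marginal by passing sufficient-statistic messages along the chain, then checks the result against brute-force marginalization of the full joint.

```python
import numpy as np

# A minimal discrete GM as a chain factor graph:
#   p(s1, s2, o) = p(s1) p(s2|s1) p(o|s2)
p_s1 = np.array([0.6, 0.4])                     # prior over s1
p_s2_s1 = np.array([[0.9, 0.1], [0.2, 0.8]])    # rows: s1, cols: s2
p_o_s2 = np.array([[0.8, 0.2], [0.3, 0.7]])     # rows: s2, cols: o

o = 1  # observed outcome

# Backward messages (the sufficient statistics passed along the chain):
m_o_to_s2 = p_o_s2[:, o]                 # likelihood message o -> s2
m_s2_to_s1 = p_s2_s1 @ m_o_to_s2         # marginalize s2 out
posterior_s1 = p_s1 * m_s2_to_s1
posterior_s1 /= posterior_s1.sum()

# Brute-force check against the full joint density:
joint = p_s1[:, None] * p_s2_s1 * p_o_s2[None, :, o]
check = joint.sum(axis=1) / joint.sum()

print(posterior_s1, check)
```

On larger graphs the message-passing route avoids ever materializing the full joint, which is the efficiency gain the factor-graph formalism is used for.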
Nearly all simulations of planning - under discrete state-space GMs - use the factor-graph formalism. Crucially, the structure of the factor graph embodies the structure of the GM and, effectively, the way that any system represents the (apparent causes of) data on its MB; i.e., the way it "carves nature at its joints" into states, objects, and categorical features. Under the (classical) FEP, the factors that constitute the nodes of the factor graph correspond to the state-space factorization in a mean-field approximation, as used by physicists, or by statisticians to implement variational Bayesian (a.k.a. approximate Bayesian) inference [45]. See [46] for technical details, [47] for an application to the brain, and Supplementary Information, Table 1 for a list of selected applications.
We show in this paper that control flow in such systems can always be formally described as a tensor network (TN), a factorization of some overall tensor (i.e., high-dimensional matrix) operator into multiple component tensor operators that are pairwise contracted on shared degrees of freedom [48]. In particular, we show that the factorization conditions that allow the construction of a TN are exactly the same as those that allow the identification of distinct, mutually conditionally independent (in quantum terms, decoherent) sets of data on the MB, and hence allow the identification of distinct "objects" or "features" in the environment. This equivalence allows the topological structures of TNs - many of which have been well characterized in applications of the TN formalism to other domains [48] - to be employed as a classification of control structures in active inference systems, including cells, organisms, and multi-organism communities. It allows, in particular, a principled approach to the question of whether, and to what extent, a cognitive system can impose a decompositional or mereological (i.e., part-whole) structure on its environment. Such structures naturally invoke a notion of locality, and hence of geometry. The geometry of spacetime itself has been described as a particular TN - a multiscale entanglement renormalization ansatz (MERA) [49,50,51] - suggesting a deep link between control flow in systems capable of observing spacetime (i.e., capable of implementing internal representations of spacetime) and the deep structure of spacetime as a physical construct.
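The basic move of factoring an overall tensor into pairwise-contracted components can be sketched in a few lines. In this illustrative example (the tensor is randomly generated; the two-site split is the simplest possible network), an order-3 tensor is split via an SVD into two tensors sharing a "bond" index, and contracting that shared index recovers the original tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary order-3 tensor T with indices (a, b, c), dimension 2 each.
T = rng.normal(size=(2, 2, 2))

# Factor T into a two-site network: reshape to a matrix over (a) x (b, c),
# take an SVD, and absorb the singular values; the shared index is the bond.
M = T.reshape(2, 4)
U, S, Vh = np.linalg.svd(M, full_matrices=False)
A = U                                     # tensor 1: indices (a, bond)
B = (np.diag(S) @ Vh).reshape(2, 2, 2)    # tensor 2: indices (bond, b, c)

# Contract the shared bond index to recover T exactly.
T_rec = np.einsum('ak,kbc->abc', A, B)
print(np.allclose(T, T_rec))  # True
```

Repeating this split index-by-index yields a matrix product state; truncating the bond (keeping only large singular values) is how TN methods exploit conditional-independence structure, which is exactly the separability condition discussed above.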
We begin in §2 by analyzing the control-flow problem in three different representations of active inference. First, we employ the classical, statistical formulation of the FEP [10,11] in §2.1 to describe control flow as implementing discrete, probabilistic transitions between dynamical attractors on a manifold of computational states. We then reformulate the physical interaction in quantum information-theoretic terms in §2.2; in this formulation [12], components of the GM can be considered to be distinct quantum reference frames (QRFs) [52,53] and represented by hierarchical networks of Barwise-Seligman classifiers [54] as developed in [55,56,57,58]. Control flow then implements discrete transitions between QRFs. The third step, in §2.3, employs the mapping between hierarchies of classifiers and topological quantum field theories (TQFTs) developed in [59]. Here, control flow is implemented by a TQFT, with transition amplitudes given by a path integral. The second and third of these representations provide formal characterizations of intrinsic (or "quantum") context effects that are consistent with both the sheaf-theoretic treatment of contextuality in [60,61] and the Contextuality by Default (CbD) approach of [62,63]; see also the discussion in [57] and [59, §7.2]. The underlying theme is that contextuality arises from the non-existence of any globally definable (maximally connected) conditional probability distribution across all possible observations (see, e.g., [64] for a review from a more general physics perspective). Extending our earlier analysis [57], we discuss reasons to expect that active inference systems will generically exhibit such context effects.
We then develop in §3 a fully general tensor representation of control flow, and prove that this tensor can be factored into a TN if, and only if, the separability (or conditional statistical independence) conditions needed to identify distinct features of, or objects in, the environment are met. We show how TN architecture allows classification of control flows, and give two illustrative examples. We discuss in §4 several established relationships between TNs and artificial neural network (ANN) architectures, and how these generalize to topological quantum neural networks [59,65], of which standard deep-learning (DL) architectures are a classical limit [66]. We turn in §5 to implications of these results for biology, and discuss how TN architectures correlate with the observational capabilities of the system being modeled, particularly as regards abilities to detect spatial locality and mereology. We consider how to classify known control pathways in terms of TN architecture and how to employ the TN representation of control flow in experimental design. We conclude by looking forward to how these FEP-based tools can further integrate the physical and life sciences.

Formal description of the control problem

2.1 The attractor picture
Let U be a random dynamical system that can be decomposed into subsystems with states µ(t), b(t), and η(t) such that the dependence of the µ(t) on the η(t), and vice versa, is only via the b(t). In this case, the b(t) form an MB separating the µ(t) from the η(t). We will refer to the µ(t) as "internal" states, to the η(t) as "environment" states, and to the combined π(t) = (b(t), µ(t)) as "particular" (or "particle") states [10]. The FEP is a variational or least-action principle stating that any system - that interacts sufficiently weakly with its environment - can be considered to be enclosed by an MB; i.e., any "particle" with states π(t) = (b(t), µ(t)) will evolve in a way that tends to minimize a variational free energy (VFE) F(π) that is an upper bound on (Bayesian) surprisal. This free energy is effectively the divergence between the variational density encoded by internal states and the density over external states conditioned on the MB states. It can be written [10, Eq. 2.3]:

F(π) = E_{Q_µ}[ln Q_µ(η) − ln p(η, π)] = D_KL[Q_µ(η) ∥ p(η | b)] − ln p(π).

The VFE functional F(π) is an upper bound on the surprisal (a.k.a. self-information) I(π) = −ln p(π) ≥ −ln p(b) because the Kullback-Leibler divergence term (D_KL) is always nonnegative. This KL divergence is between the density over external states η, given the MB state b, and a variational density Q_µ(η) over external states parameterized by the internal state µ. If we view the internal state µ as encoding a posterior over the external state η, minimizing VFE is, effectively, minimizing a prediction error under a GM encoded by the NESS density. In this treatment, the NESS density becomes a probabilistic specification of the relationship between external or environmental states and particular (i.e., "self") states. We can interpret the internal and active MB states in terms of active inference, i.e., a Bayesian mechanics [11], in which their expected flow can be read as perception and action, respectively. Here "active" states are a subset of the MB states that are not influenced by
environmental states and - for the kinds of particles considered here - do not influence internal states. In other words, active inference is a process of Bayesian belief updating that incorporates active exploration of the environment. It is one way of interpreting a generalized synchrony between two random dynamical systems that are coupled via an MB.
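The bound described above can be checked numerically in a discrete toy model. In the sketch below, the joint density over external and blanket states, the observed blanket state, and the variational densities are all illustrative assumptions; the point is that F upper-bounds the blanket-state surprisal for any variational density and is tight when that density equals the exact posterior.

```python
import numpy as np

# A discrete toy world: joint density p(eta, b) over 3 external states
# and 2 blanket states (values are assumptions chosen for illustration).
p_joint = np.array([[0.30, 0.05],
                    [0.10, 0.20],
                    [0.05, 0.30]])   # rows: eta, cols: b

b = 0                                # observed blanket state
p_b = p_joint[:, b].sum()            # p(b)
post = p_joint[:, b] / p_b           # exact posterior p(eta | b)

def vfe(q):
    """F = D_KL[q || p(eta|b)] - ln p(b), an upper bound on -ln p(b)."""
    kl = np.sum(q * np.log(q / post))
    return kl - np.log(p_b)

surprisal = -np.log(p_b)
q_bad = np.array([1 / 3, 1 / 3, 1 / 3])   # a poorly tuned variational density
print(vfe(q_bad) >= surprisal)            # True: F upper-bounds surprisal
print(np.isclose(vfe(post), surprisal))   # True: bound is tight at q = posterior
```

Gradient descent on F with respect to the variational density is what "perception" amounts to in this picture; the KL term vanishes exactly when the internal state encodes the true posterior.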
If the "particle" π is a biological cell, it is natural to consider the MB b to be implemented by the cell membrane and the "internal" states µ to be the internal macromolecular or biochemical states of the cell; indeed, it is this association that motivated the application of the FEP to cellular life [5]. In this case, the NESS corresponds to the state, or neighborhood of states, that maintains homeostasis (or more broadly, allostasis [67,68,69]) and hence maintains the structural and functional integrity of π as a living cell. This activity of self-maintenance has been termed "self-evidencing" [70]; systems compliant with the FEP can be considered to be continually generating evidence of - or for - their continued existence [10].
In the terminology of [13], cells are "strange particles": their signal transduction pathways monitor (components of) the states of their environments, but do not directly monitor their actions on their environments (i.e., their own active states). The consequences of any action can only, therefore, be deduced from the response of the environment. In this situation, causation is always uncertain: whether an action by the environment on the cell - what the cell detects as an environmental state change - is a causal consequence of an action the cell has taken in the past cannot be determined by the data available to the cell. Every action, therefore, increases VFE, while every observation (potentially) decreases it. The (apparent) task of the cell's GM is to minimize the increases, on average, while maximizing the decreases.
The Bayesian mechanics afforded by the FEP implies a (classical) thermodynamics; indeed, the FEP can be read as a constrained maximum entropy or caliber principle [71,72]. This follows from the fact that inference, i.e., self-evidencing, entails belief updating, and belief updating incurs a thermodynamic cost via the Jarzynski equality [73,74,75]. This cost provides a lower bound on the thermodynamic free energy required for metabolic maintenance. For example, a cell's actions on its environment - e.g., chemotactic locomotion - are largely driven by the need to acquire thermodynamic free energy. The cell's GM cannot, therefore, minimize VFE by minimizing action [76]; instead, it must successfully predict which actions will replenish its free-energy supply. As actions are energetically expensive, this requires trading off short-term costs against long-term goals. As shown in [41], selective pressures operating on different timescales favor the development of metaprocessors that control lower-level actions in a context-dependent way; these are often implemented via a hierarchical GM [77]. Such meta-level control provides probabilistic models of risk-sensitive actions in context.
While such systems may be described as regulating free-energy-seeking actions, they also regulate information-seeking actions, i.e., curiosity-driven exploration [78,79,80]. This follows because VFE can be expressed as complexity minus accuracy [81]. The expected free energy (EFE), conditioned upon any action, can therefore be scored in terms of expected complexity and expected inaccuracy. Expected complexity is "risk" and corresponds to the degree of belief updating that incurs a thermodynamic cost, leading to risk-sensitive control (e.g., phototropism). Expected inaccuracy corresponds to "ambiguity", leading to epistemic behaviors (e.g., searching for lost keys under a streetlamp) [42].
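The risk-plus-ambiguity decomposition of EFE can be made concrete in a two-state, two-outcome toy model. The likelihood matrix, the preference density, and the predicted state densities below are all illustrative assumptions; the sketch scores two hypothetical policies by the standard decomposition of EFE into risk (KL divergence from preferred outcomes) plus ambiguity (expected entropy of the likelihood).

```python
import numpy as np

def H(p):
    """Shannon entropy in nats."""
    return -np.sum(p * np.log(p))

# Likelihood p(o|s): rows = hidden states, cols = outcomes (assumed values).
A = np.array([[0.9, 0.1],
              [0.5, 0.5]])           # state 1 yields ambiguous outcomes
C = np.array([0.8, 0.2])             # preferred (prior) outcome density

def efe(q_s):
    """Risk and ambiguity of a policy's predicted state density q(s)."""
    q_o = q_s @ A                    # posterior predictive over outcomes
    risk = np.sum(q_o * np.log(q_o / C))
    ambiguity = np.sum(q_s * np.array([H(A[0]), H(A[1])]))
    return risk, ambiguity

r1, a1 = efe(np.array([0.9, 0.1]))   # policy 1: mostly the precise state
r2, a2 = efe(np.array([0.1, 0.9]))   # policy 2: mostly the ambiguous state
print(f"policy 1: risk={r1:.3f}, ambiguity={a1:.3f}, G={r1 + a1:.3f}")
print(f"policy 2: risk={r2:.3f}, ambiguity={a2:.3f}, G={r2 + a2:.3f}")
```

Here policy 2 is penalized both for landing in states with uninformative outcomes (ambiguity) and for predicting outcomes far from the preferred density (risk), so an EFE-minimizing controller would select policy 1.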
When context-dependent control is considered, the neighborhood of the NESS resolves into a network of local minima corresponding to fixed perception-action loops separated by energetic barriers that the control system must overcome to switch between loops. For example, in a cell, this energetic barrier comprises the energy required to activate one pathway while de-activating another, which may include the energetic costs of phosphorylation, other chemical modifications, additional gene expression, etc. Different pairs of pathways can be expected to be separated by energetic barriers of different heights, generating a topographically complex free-energy landscape that coarse-grains, in a long-time average, to the neighborhood of the NESS, i.e., to the maintenance of allostasis [68,69,82].
As noted earlier, we can think of controllable perception-action loops as nodes on a factor graph, with the edges corresponding to pathways for control flow, and the transition probabilities labeling the edges as inversely proportional to the energetic barrier between loops. This allows the GM for meta-level (i.e., hierarchical) control to be represented as a message-passing system as described in [47]. The presence of very high energetic barriers can render such a GM effectively one-way, as seen in the context-dependent switches between signal transduction pathways and GRNs that characterize cellular differentiation during morphogenesis. Biological examples include modifications of bioelectric pattern memories in planaria, which can create alternative-species head shapes that eventually remodel back to normal [83], or produce two-headed worms whose morphology is permanent and which regenerate as two-headed in perpetuity [84].
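The dependence of edge transition probabilities on barrier height can be sketched with Boltzmann-weighted escape rates. The exponential form and the barrier values below are illustrative assumptions (a Kramers/Arrhenius-style toy, not a model from the text); the sketch shows how an asymmetric barrier pair yields the effectively one-way switching just described.

```python
import numpy as np

def switch_prob(dE_forward, dE_backward):
    """Relative forward/backward switch probability for one barrier pair.

    Barrier heights are in units of k_B * T; rates follow exp(-dE).
    """
    k_f, k_b = np.exp(-dE_forward), np.exp(-dE_backward)
    return k_f / (k_f + k_b)

print(switch_prob(2.0, 2.0))    # symmetric barriers: 0.5, freely reversible
print(switch_prob(2.0, 12.0))   # asymmetric barriers: effectively one-way
```

In this picture, cellular differentiation corresponds to the second case: the backward barrier is so high that, on physiological timescales, the transition graph behaves as a directed edge.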

2.2 The QRF picture
Cellular information processing has traditionally been treated as completely classical, i.e., as implemented by causal networks of macromolecules, each of which undergoes classical state transitions via local dynamical processes that are conditionally independent of the states of other parts of the network. While the "quantum" nature of proteins and other macromolecules is broadly acknowledged, the scale at which quantum effects are important remains controversial, with straightforward single-molecule decoherence models predicting decoherence times of attoseconds (10⁻¹⁸ s) or less [85,86]: several orders of magnitude below the timescales of processes involved in molecular information processing [87]. While functional roles for quantum coherence in intramolecular information processing have been demonstrated, intermolecular coherence remains experimentally elusive [88,89,90,91].
The free-energy budgets of both prokaryotic and eukaryotic cells are, however, orders of magnitude smaller than would be required to support fully classical information processing at the molecular scale, suggesting that cells employ quantum coherence as a computational resource [92]. Indirect evidence of longer-range, tissue-scale coherence in brains has also been reported [93]. Reformulating the FEP in quantum information-theoretic terms enables it to describe situations in which long-range coherence, and hence quantum computation, cannot be neglected.
Following the development in [12], we consider a bipartite decomposition U = AB of a finite, isolated system U for which the interaction Hamiltonian H_AB = H_U − (H_A + H_B) is sufficiently weak over the time period of interest that the joint state of U is separable (i.e., factors) as |U⟩ = |A⟩|B⟩. In this case, we can choose orthogonal basis vectors |i⟩_k so that:

H_AB = β^k K_B T Σ_{i=1}^N α_i^k M_i^k,    (2)

where K_B denotes Boltzmann's constant, T is the absolute temperature of the environment, k = A or B, the M_i^k are N mutually orthogonal Hermitian operators with eigenvalues in {−1, 1}, the α_i^k ∈ [0, 1] are such that Σ_i α_i^k = 1, and β^k ≥ ln 2 is an inverse measure of k's thermodynamic efficiency that depends on the internal dynamics H_k; see [56,58,94,95] for further motivation and details of this construction and [96] for a pedagogical review. This description is purely topological, attributing no geometry to either U or ℬ; hence it allows the "embedding space" of perceived "objects" to be an observer-dependent construct. It has several relevant consequences:

• We can regard A and B as separated, and determined by independent measures. They are separated by - and interact via - a holographic screen ℬ that can be represented, without loss of generality, by an array of N non-interacting qubits, where N is the dimension of H_AB [94,95].
• A and B can be regarded as exchanging finite N-bit strings, each of which encodes one eigenvalue of H_AB [94].
• A and B have free choice of basis for H_AB, corresponding to free choice of local frames at ℬ, e.g., free choice, for each qubit q_i on ℬ, of the local z axis and hence the z-spin operator s_z that acts on q_i [96].
• Choice of basis corresponds to a choice of the zero-point of the total energy by each of A and B. The systems A and B are, therefore, in general at informational, but not at thermal, equilibrium [12].
• As A and B must obtain from B or A, respectively, whatever thermodynamic free energy is required, by Landauer's principle [73,99,100], to fund the encoding of classical bits on ℬ (as well as any other irreversible classical computation), A and B must each devote some sector F of ℬ to free-energy acquisition. The bits in F are "burned as fuel" and so do not contribute input data to computations. Waste-heat dissipation by one system is free-energy acquisition by the other. The free-energy sectors F_A and F_B of A and B need not align as subsets of qubits on ℬ; that is, qubits that A regards as free-energy sources may be regarded by B as informative outputs, and vice versa [56,58].
• The actions of the internal dynamics H_A and H_B on ℬ can be represented by A- and B-specific sets of QRFs, each of which both "measures" and "prepares" qubits on ℬ. Each QRF acts on the qubits in some specific sector of ℬ, breaking the permutation symmetry of Eq. (2) [56,58,59]. Only QRFs acting on sectors other than F implement informative computations; we will therefore restrict attention to these QRFs.
• Each "computational" QRF can, without loss of generality, be represented by a cone-cocone diagram (CCCD) comprising Barwise-Seligman classifiers and infomorphisms between them [54,55]. The apex of each such CCCD is, by definition, both the category-theoretic limit and colimit of the "input/output" classifiers that correspond, formally, to the operators M_i^k in Eq. (2) [56,58,59].
Typically, a CCCD is structured as a distributed information flow from the input classifiers, through the apex, to the output classifiers. The holographic screen ℬ functions as an MB separating A from B. It can be regarded as having an N-qubit Hilbert space H_ℬ = ⊗_{i=1}^N H_{q_i}. While H_ℬ is strictly ancillary to H_U = H_A ⊗ H_B, the classical situation can be recovered in the limit in which the entanglement entropies S(|A⟩), S(|B⟩) → 0 by considering the products H_A ⊗ H_ℬ and H_B ⊗ H_ℬ to be "particle" state spaces for A and B, respectively. In this classical limit, the states of H_ℬ become the blanket states of an MB that functions as a classical information channel [94,95,96]. In quantum holographic coding, for example, ℬ is often represented by a polygonal tessellation of the hyperbolic disc, with qubits represented by polygonal centroids. A specific TN model of a pentagon code is developed in [97]; see in particular their Fig. 4. The geometric description of ℬ as implementing holographic coding, and its classical limit as an MB structured as a directed acyclic graph (DAG), is further explored in the setting of topological quantum neural networks (TQNNs) in [98].
In this quantum-theoretic picture, "systems" or "objects" observed and manipulated by A or B correspond to sectors on ℬ that are the domains of particular QRFs deployed by A or B, respectively [58,12,59]. To simplify notation, we use the same symbol, e.g., 'Q', to denote both a QRF Q and the sector dom(Q) on ℬ. Any identifiable system X factors into a "reference" component R that maintains a time-invariant state |R⟩ or, more generally, state density ρ_R, which allows re-identification and hence sequential measurements over extended time, and a "pointer" component P with a time-varying state |P⟩ or density ρ_P. It is this pointer component, named for the pointer of an analog instrument, that is the "state of interest" for measurements. The QRFs R and P clearly must commute, and the sectors R and P clearly must be mutually decoherent [58,12,59]. All "system" sectors must be components of some overall sector E that corresponds to the "observable environment." The recording of measurement outcomes to a classical memory, and the reading of previously recorded outcomes from memory, can similarly be represented by a QRF Y. As dom(Y) is a sector on ℬ, recorded memories of A are exposed to, and hence subject to modification by, B and vice versa. Both the observable environment E and the memory sector Y must be disjoint from, and decoherent with, the free-energy sector F.
As actions on ℬ encode classical data, they have an associated free-energy cost of at least ln(2) K_B T per bit [73,99,100] that must originate from the source at F. Time-energy complementarity associates a minimum time of h/[ln(2) K_B T], where h is Planck's constant, with this energy expenditure. We can, therefore, associate actions on ℬ, including memory writes, with "ticks" of an internal time QRF, which we denote t_A and t_B for A and B, respectively. Assuming all observational outcomes are written to memory, we can represent the situation as in Fig. 1. The time QRF is effectively an outgoing bit counter that can be represented by a groupoid operator G_ij: t_i → t_j [56]. As outgoing bits are oriented in opposite directions with respect to ℬ for A and B, the time "arrows" t_A and t_B point in opposite directions. Hence A and B can both be regarded as "interacting with their own futures" as discussed in [96]. Measurements of a system X can be considered sequential if: 1) they are separated in time according to the internal time QRF, and 2) their outcomes are recorded to memory to enable comparability across time. We show in [59] that sequential measurements can always be represented by one of two schemata. Using the compact notation of [59] to represent a QRF S, we can represent measurements of a physical situation in which one system divides into two, possibly entangled, systems with a diagram of the form of Diagram (6); parametric down-conversion of a photon exemplifies this kind of process, and the reverse process can be added to the diagram. In the second type of sequential measurement process, represented by Diagram (7), the pointer-state QRF P is replaced with an alternative QRF Q with which it does not commute; sequences in which position and momentum, or s_z and s_x, are measured alternately are examples. As both P and Q must commute with R, the commutativity requirements for S are satisfied.
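The per-bit energy and time bounds quoted above are easy to evaluate numerically. Assuming, for illustration, an ambient temperature of 300 K:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34      # Planck constant, J*s
T = 300.0               # ambient temperature, K (an assumed value)

E_bit = math.log(2) * k_B * T        # Landauer bound per classical bit
t_min = h / E_bit                    # time-energy complementarity bound

print(f"E_bit = {E_bit:.3e} J")      # ~2.87e-21 J
print(f"t_min = {t_min:.3e} s")      # ~2.31e-13 s
```

At physiological temperatures, each classical bit written to the screen thus costs on the order of 10⁻²¹ J and cannot be written faster than a few hundred femtoseconds, giving a concrete scale for the "ticks" of the internal time QRF.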
The sequences of operations depicted in Diagrams (6) and (7) clearly raise the questions of how control is implemented, and of how the context changes that drive control flow are detected. Before turning to these questions in §3, we review a path-integral representation of QRFs, show that the same representation captures the behavior of any system X identified by a QRF, and discuss the questions of multiple observers and quantum contextuality.

2.3 The TQFT picture
As a least-action principle, the FEP is fundamentally a statement about the paths followed by the joint system U through its state space. The classical FEP is amenable to a path-integral formulation [13] that expresses the expected value of any observable (functional) Ω[x(t)] of paths x(t) through the relevant state space as ([101], Eq. 6):

E[Ω[x(t)] | x_0] = ∫ Ω[x(t)] p(x(t) | x_0) Dx(t),

where x_0 is the initial state and p(x(t)|x_0) is the conditional probability of the path x(t).
Quantum theory generalizes this expression by, effectively, replacing Ω[x(t)] with an automorphism on the relevant Hilbert space and p(x(t)|x_0) with an amplitude for x(t) given the initial state x_0. For some finite-dimensional Hilbert space H, the manifold of all such automorphisms is a cobordism on H, which is by definition a TQFT on H [102].
We show in [59] that any sequential measurement of any sector X of B induces a TQFT on X, considered as a projection of the N-dimensional boundary Hilbert space H_qi associated with B. In particular, measurement sequences of the form of Diagram (6) can be mapped to cobordisms, i.e., to manifolds of maps between two designated boundaries, of the form: while sequences of the form of Diagram (7) can be mapped to cobordisms of the form: In either case, F : CCCD → Cob is the functor from the category CCCD of CCCDs (and hence of QRFs) to the category Cob of finite cobordisms required to define a TQFT. In general, we can state: Theorem 1 ([59] Thm. 1). For any morphism F of CCCDs in CCCD, there is a cobordism S such that a diagram of the form of Diagram (9) or (10) commutes.
We refer to [59] for the proof.
Theorem 1 applies to any sequential measurement; therefore, it applies to measurements of a sector X followed by measurements of the associated memory sector Y, or vice versa. Assuming for convenience dim(X) = dim(Y), we can consider a composite operation; this composite operator Q is, by Theorem 1, a TQFT [98]. Hence the operation of recording observational outcomes for a sector X, made at t, to memory, and then comparing them to later observations at t + Δt, is formally equivalent to propagating the "system" X forward in time from t to t + Δt.
Identifying QRFs as "internal" TQFTs allows a general analysis of information exchange between multiple QRFs deployed by a single system, e.g., A. Because all QRFs act on B, information exchange between QRFs requires a channel that traverses B. Any such channel is itself a QRF, one deployed by B. Considering A to comprise two observers, one deploying Q_1 and the other deploying Q_2, that interact via a local operations, classical communication (LOCC) [103] protocol provides an example: In a LOCC protocol, one channel is considered "classical" while the other is considered "quantum"; however, this language masks the fact that both channels are physical. As pointed out in [104], all media supporting classical communication are physical, and interactions with these media are always local measurements or preparations. Hence the two channels in a LOCC protocol are physically equivalent (both are TQFTs implemented by B), although their conventional semantics are different.
Diagram (12) can, clearly, also represent externally-mediated communication between any two functional components of a system, e.g., macromolecular pathways within a cell or functional networks within a brain. We show in [98] that whenever Q_1 and Q_2 are deployed by distinct (technically, separable or mutually decoherent) "observers" or "systems," they fail to commute, i.e., the commutator [Q_1, Q_2] ≠ 0. As shown in [63], such context dependence can, in principle, be captured classically if sufficient measurements of the context can be implemented. Such measurements would, however, have to access all of B. The existence of an MB prevents such access; in the current setting, A has access to B only via B. The finite energetic cost of measurement, and the consequent requirement for a thermodynamic sector F, prevents measurement even of all of B by any finite physical system. Hence, we can expect physical systems, including all biological systems, to employ only local context-dependent control to switch between mutually non-commuting (sets of) QRFs. How context switches implemented by QRF switches induce evolution, development, and learning was introduced in [22]. Some specifics of context switching will be discussed in §5.
3 Tensor network representation of control flow

Tensor networks and holographic duality
Entanglement and quantum error correction, two concepts developed in quantum information theory, have proved to play a fundamental role in unveiling quantum gravity [108]. At the origin of this insight is the discovery by Bekenstein and Hawking [109,110,111,112] that the second law of thermodynamics can be preserved in the gravitational field of a black hole if the latter has an entropy proportional to the area of its horizon, scaled by the inverse of the Newton gravitational constant G. This entropy is maximal, as implied by the second law itself, providing an upper bound for possible configurations of matter within a region of the same size [113,114].
The number of local degrees of freedom counted by this entropy thus does not scale with the volume, a fact that motivated the formulation of the holographic conjecture [115], which suggests a division between the information that can only be retrieved on the boundary world and a merely apparent bulk world. The AdS/CFT correspondence realized the holographic conjecture, postulating a duality between gravity in asymptotically AdS space and quantum field theory on the spatial infinity of the AdS space [116]. Giving literal meaning to the duality, Ryu and Takayanagi (RT) proposed that the entanglement of a boundary region satisfies the same law as the black hole entropy, with the area of the black hole horizon replaced by the area of an extremal surface that bounds the bulk region under scrutiny.
While on the boundary the theory can be specified by assigning a particular conformal field theory (CFT), in the bulk the geometry can be associated with specific entanglement structures of the quantum systems. This is, for instance, what happens for the ground states of a CFT associated with an AdS space: the RT surface area grows more slowly than the volume of the boundary. When the boundary is at equilibrium, in a thermal state of finite temperature, the bulk geometry corresponds to that of a black hole, its horizon being parallel to the boundary and its size increasing with the temperature. The RT surface is then confined between the boundary and the black hole horizon, approaching the boundary at higher temperature as its entropy increases. These considerations suggest the existence of a subtle link interconnecting the structure of space-time and quantum entanglement, and hence that a theory of quantum gravity must be fundamentally holographic, with states satisfying the RT formula for some bulk geometry.
The existence of an exact correspondence between bulk gravity and quantum theory at the boundary raises possible tensions with locality. This has been discussed in the literature in terms of local reconstruction theory [117,118,119]: variables in the bulk (e.g., bulk spins) can be controlled instantaneously from the boundary, but doing so requires simultaneous access to a large portion of the boundary; locality and the speed-of-light upper bound do not hold exactly in this theory. Nonetheless, local observers confined to small regions of the boundary still experience locality and an upper limit on the speed of information exchange, in a way that is reminiscent of quantum error-correcting codes (QECCs) in quantum information theory: information is stored redundantly, in such a way that when part of it is corrupted, reconstruction of the information is still possible. Locality in the bulk is therefore a QECC property of the encoding map that realizes the duality between bulk and boundary. These properties are, in turn, strictly connected to the RT formula, which provides the entanglement resource necessary for QECC behavior to emerge.
The RT formula and QECC are properties satisfied by several different classes of models, among which are TNs [120]. TNs were first introduced in condensed matter physics as variational wave-functions of strongly correlated systems [121,122]. TNs are many-body wavefunctions that can be derived by composing few-body quantum states, which are themselves tensors. A prototypical building block is, e.g., the Einstein-Podolsky-Rosen (EPR) entangled pair of qubits: in an entangled basis, measured qubits are in some entangled pure state and can be composed with the remaining ones with increasing complexity, so that complicated quantum entanglement can be derived by entangling only a few qubits at a time [123].
Particularly relevant for its implications for the reconstruction of space-time structure is the multi-scale entanglement renormalization ansatz (MERA) [124]. TNs can be naturally related to holographic duality by noting that their entanglement entropy can be controlled by their graph geometry. Some versions of TNs characterized by RT entanglement entropy and QECC have been constructed using stabilizer codes [125,126] and random tensors with large bond dimension [127]. TNs with random tensors at each node can be regarded as random states restricted by the topology of the network.
Exactly as random states are almost maximally entangled, random TNs exhibit, through the RT formula, almost maximal entanglement, providing a large family of states with interesting properties for exploring holographic duality. Furthermore, for random TNs the RT formula holds in generic spaces with not necessarily hyperbolic geometry, pointing toward an extension of holographic duality beyond AdS, to more general configurations in quantum gravity. Moreover, at least in three dimensions, random tensor networks have been related to the gravitational action by means of the Regge calculus [128].
On the other hand, since geometry emerges as a specification of the entanglement structure, one may expect the Einstein equations to be connected to the dynamics of entanglement as well. For small perturbations around the ground state of a CFT on the boundary, linearized Einstein equations have been derived from the RT formula [129,130]. Indeed, the conformal symmetry enables a relation between the energy-momentum and the entanglement entropy; consequently, the area of the extremal surface can be connected to the energy-momentum distribution at the boundary, which is equivalent to the linearized Einstein equations.
The dynamics on the boundary, on the other side, shows chaotic behaviour, with scrambling of single-particle operators, which evolve into multi-particle operators [131]. Maximal chaotic behaviour, recovered in the growth of ladder-operator commutators, is encoded in out-of-time-ordered correlation (OTOC) functions, characterized by exponential growth in time and temperature. A model endowed with these properties is, e.g., the Sachdev-Ye-Kitaev model, developed to describe certain condensed matter systems such as gapless spin fluids [132,133,134]. On the other hand, operator scrambling is also related to QECC: the chaotic dynamics at the boundary instantiates a QECC preserving quantum information, efficiently hidden (and protected) behind the horizon. Nevertheless, this has led to many questions, concerning whether the information behind the horizon is eventually accessible from the boundary through non-local measurements, the fate of the local degrees of freedom hitting the singularity, and the relation between the causal structure of the bulk and a smooth geometry across the horizon.

General results
We now prove a general result: Theorem 2. A system A exhibits non-trivial control flow if, and only if, its control flow can be represented by a TN.
and examine some of its corollaries. We begin by defining: Definition 1. Control flow is trivial if a system deploys only one QRF.
As any collection of mutually-commuting QRFs can be represented as a single QRF [57,59], any system that deploys only mutually-commuting QRFs exhibits trivial control flow.
Systems that deploy only a single QRF "do the same thing" regardless of context, and so do not qualify as "interesting" in the sense used here.As noted above, no finite physical system can measure the entire state of its boundary with a single QRF, so no such system can simultaneously measure and act on its entire context.Any system A that deploys multiple QRFs in sequence cannot, as noted above, avoid contextuality due to unobservable effects, mediated by the action of H B , of the action of Q i on the state measured by Q j .Every action taken by an "interesting" system, in other words, at least transiently increases the VFE at its boundary.
Consider, then, a system A that deploys multiple, distinct QRFs Q_1, Q_2, ..., Q_n, where n ≪ N = dim(H_AB). Classical control flow in A can then be represented by a matrix CF = [P_ij], where P_ij is the probability of the control transition Q_i → Q_j. As noted earlier, any such transition has an energetic cost, which must be paid with free energy from sources in F.
The matrix CF is a 2-tensor. Theorem 2 states that this tensor can be decomposed into a TN. We prove it as follows: Proof (Thm. 2). Suppose first that control flow in a system A can be represented by a TN. A TN is, by definition, a factorization of a tensor operator into a network of tensor operators. This network can be either hierarchical or flat; if it is hierarchical, each layer can be considered a flat TN. Hence no generality is lost in considering just the case of a flat TN, which is an operator contraction T = ... T_ij T_jk T_kl ..., where summation on shared indices is left implicit. In general, T_jk ≠ (T^T)_jk = T_kj; hence these expressions do not commute. They therefore represent non-trivial control flow. Conversely, any non-trivial control flow can be written, at any fixed scale or level of abstraction, as a linear sequence of (in general probabilistic) operators. The fixed order of operators in the sequence can be encoded formally by adding "spatial" indices as needed to allow contraction over shared indices. Hence any non-trivial control flow at a fixed scale can be written as a flat TN. This construction can be repeated at each larger scale to produce a hierarchical TN over a collection of "lowest-scale" TNs.
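The flat-TN factorization of the control-flow 2-tensor can be illustrated numerically; the following sketch (sizes and the SVD-based factorization are our illustrative choices, not a construction from the text) splits a stochastic CF matrix into two contracted tensors:

```python
import numpy as np

# Illustrative sketch: a control-flow 2-tensor CF = [P_ij], with P_ij the
# probability of the transition Q_i -> Q_j, factored into a flat TN of two
# tensors via SVD. Sizes and values are arbitrary assumptions.
rng = np.random.default_rng(0)
CF = rng.random((4, 4))
CF /= CF.sum(axis=1, keepdims=True)  # rows sum to 1: a stochastic 2-tensor

U, s, Vt = np.linalg.svd(CF)
A = U * np.sqrt(s)             # first tensor of the flat network
B = np.sqrt(s)[:, None] * Vt   # second tensor

# Contracting over the shared index recovers CF: a minimal flat TN.
assert np.allclose(A @ B, CF)

# The factors are not symmetric, so the order of contraction matters:
assert not np.allclose(A @ B, B @ A)
```

The final assertion illustrates the non-commutativity invoked in the proof: the ordered contraction of the factors is what encodes non-trivial control flow.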
We can now examine two corollaries of this result: Corollary 1. Decoherent reference sectors exist on B if and only if control flow can be implemented by a TN.
Proof. Decoherent sectors require independently-deployable, non-commuting QRFs. This requires a control structure that factors, i.e., by Theorem 2, a TN. Conversely, a TN factors the control structure, making QRFs independently deployable, which renders their sectors decoherent.
Equivalently, the GM factors if, and only if, control flow can be implemented by a TN.
Corollary 2. The TN of any system compliant with the FEP is a decomposition of the Identity.
Proof. The FEP applies to systems with a NESS, and drives such systems to return to (the vicinity of) the NESS after any perturbation. Hence at a sufficiently large scale, the TN of any such system is a cycle, i.e., a decomposition of the Identity.
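A minimal numerical illustration of Corollary 2 (the matrix encoding below is our illustrative assumption): a cyclic control flow over ten states, represented as a permutation 2-tensor, composes once around the cycle to the Identity:

```python
import numpy as np

# A ten-step cyclic control flow encoded as a permutation 2-tensor:
# each application advances the (one-hot) state by one step in the cycle.
P = np.roll(np.eye(10), 1, axis=0)

# Composing the cycle once around decomposes the Identity (Corollary 2).
assert np.allclose(np.linalg.matrix_power(P, 10), np.eye(10))
```

Any perturbation of a single step thus leaves the large-scale cycle, and hence the Identity decomposition, intact.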
Many standard TN models, e.g., MERAs, assume boundary conditions asymptotically far, in numbers of lowest-scale operators, from the region of the network that is of interest. Identifying such asymptotic boundary conditions yields a cyclic system. Theorem 2, together with its corollaries, provides a natural, formal means of classifying systems by their control architectures. At a high level, two characteristics distinguish systems with different architectures:
• Hierarchical depth indicates the number of "virtual machine" layers [135] the architecture supports. The interfaces between these layers implement coarse-graining, removing from the higher-level representation all dimensions, and hence all information, that are contracted out of the lower-level operators.
• Number and location of contractions that yield unitary operators, and hence build in entanglement between lower-level operators.The natural limit is a MERA, in which every pair of lower-level operators is entangled at every hierarchical level [48].
The control-flow architecture, in turn, specifies the structure of the "layout" of distinguishable sectors on B, and hence of detectable features/objects in the environment. Locality on B requires a hierarchical TN; detectable entanglement requires a MERA-like TN. Locality is required for detectable features/objects to appear to have components, i.e., a nested decomposition. Any QRF for geometric space, and hence for spacetime, must be hierarchical, and must be a MERA if entanglement in space is to be detected. A MERA is required, in particular, if the use of coherence between spatially-separated systems as a computational or communication resource is to be detectable.
To illustrate the classification of systems by hierarchical level, consider the ten-step cyclic TN shown in Diagram (13): and its extension to a hierarchy as shown in Diagram (14): where red, blue, and green colors indicate distinct hierarchical "layers" of tensor contractions. We have trained artificial neural networks (ANNs) to execute these TNs as the sequences of state transitions shown in Table 1. The first sequence (Dataset 1) is the ten-step cycle shown in Diagram (13); the second sequence (Dataset 2) layers the coarse-grained state transitions of Diagram (14) onto this ten-step cycle. In Dataset 2, a two-bit tag is used to differentiate the "low-level" from the coarse-grained "high-level" cycles. An example state transition from a randomly-generated initial state is shown in Fig. 2; the red-on-green bit pattern effectively moves "up" one step on each state-transition cycle.
Table 1: Datasets used in ANN simulations. Dataset 1 specifies a ten-step cycle A → B → ... → J → A. Dataset 2 specifies this same cycle, with three coarse-grained cycles layered on top. The tags (0,0), (0,1), (1,0), and (1,1) distinguish the data for the low- and high-level cycles.

We trained two ANNs, one to execute each of the control cycles shown in Table 1. The networks are each composed of three layers, as illustrated in Fig. 3, with network sizes of [10, 50, 10] and [10, 200, 10], respectively, for the input, hidden, and output layers. The units in the hidden layer use the rectified linear unit (ReLU) activation function and the neurons in the output layer use the hyperbolic tangent activation function. The network is connected in a feedforward way, in which each neuron in one layer connects to every neuron in the next layer. Since the ANN serves as a switch-state controller, we use a training scheme, similar to one-class classification [136], in which the training data are the only data that the network learns to produce. In so doing, the network learns to overfit the training data, and any input outside of the designated state encoding is discarded. The network is, therefore, not expected to deviate from the learned pattern. The network learns both control regimes with 100% accuracy after training with 3,000 randomly-generated 10-bit inputs.
Figure 3: Feed-forward network architecture used to learn the control cycles specified in Table 1. Each node is connected to every node of the next layer, as shown here for the first and last nodes only. The labels 'T' and 'T+1' indicate time steps in the executed control flow.
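A minimal sketch of this kind of switch-state controller follows; it assumes one-hot encodings of the ten states A..J and the [10, 50, 10] layer sizes, but the learning rate, epoch count, and plain gradient descent are illustrative choices, not the paper's exact training setup:

```python
import numpy as np

# Sketch: a [10, 50, 10] feed-forward controller (ReLU hidden, tanh output)
# trained to execute the ten-step cycle A -> B -> ... -> J -> A.
# One-hot state encoding and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(42)
X = np.eye(10)                       # current states, one-hot
Y = np.roll(np.eye(10), -1, axis=0)  # successor states in the cycle

W1 = rng.normal(0, 0.5, (10, 50)); b1 = np.zeros(50)  # hidden layer (ReLU)
W2 = rng.normal(0, 0.5, (50, 10)); b2 = np.zeros(10)  # output layer (tanh)

lr = 0.1
for _ in range(10000):
    H = np.maximum(0, X @ W1 + b1)      # ReLU activations
    O = np.tanh(H @ W2 + b2)            # tanh outputs
    dO = (O - Y) * (1 - O**2) / len(X)  # mean MSE gradient through tanh
    dH = (dO @ W2.T) * (H > 0)          # backprop through ReLU
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

# After training, the controller executes the ten-step cycle (read by argmax).
pred = np.tanh(np.maximum(0, X @ W1 + b1) @ W2 + b2)
assert (pred.argmax(1) == Y.argmax(1)).all()
```

Reading the output by argmax implements the "designated state encoding" constraint: only the ten learned one-hot patterns are ever produced.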
In the more realistic case of noisy input data, where binary states can be flipped, the Bidirectional Associative Memory (BAM), a minimal two-layer nonlinear feedback network [137], is a viable alternative to a shallow feed-forward ANN. The architecture is shown in Fig. 4. This BAM network learns to associate the initial and final states in Table 1, with performance similar to that of the feed-forward network.
Figure 4: Architecture of the Bidirectional Associative Memory (BAM) network employed here.As in Fig. 3, only the connections of the first and last nodes are shown explicitly.
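A minimal sketch of Hebbian BAM recall for the cycle in Table 1 follows; the bipolar encoding is standard for BAMs, but the winner-take-all readout is our simplification of the usual sign-threshold recall, adopted here because the cycle's patterns are correlated:

```python
import numpy as np

# Sketch: Hebbian BAM associating each of the ten states A..J with its
# successor in the cycle. Bipolar (+1/-1) one-hot codes and the
# winner-take-all readout are illustrative simplifications.
states = 2 * np.eye(10) - 1        # bipolar one-hot codes for A..J
nxt = np.roll(states, -1, axis=0)  # associated successor states

W = states.T @ nxt                 # Hebbian outer-product weight matrix

# Bidirectional recall: forward X -> Y and backward Y -> X.
fwd = (states @ W).argmax(1)
bwd = (nxt @ W.T).argmax(1)
assert (fwd == np.arange(1, 11) % 10).all()  # each state recalls its successor
assert (bwd == np.arange(10)).all()          # each successor recalls its source
```

The bidirectional weight matrix is what gives the BAM its tolerance to flipped bits: recall settles on the stored pair closest to the noisy input.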

Implementing control flow with TQNNs
Tensor Networks can be naturally associated with the matrix elements of physical scalar products among topological quantum neural networks (TQNNs). Physical scalar products indeed encode the dynamics of TQFTs, since they fulfill their constraints, imposing flatness of the curvature and gauge invariance. Thus, the matrix elements associated with scalar products can be seen as evolution matrix elements for the spin-network states that span the Hilbert spaces of TQNNs.

Tensor networks as classifiers for TQNNs
A notable example is provided by BF theories [138], a class of TQFTs particularly well studied in the mathematical physics literature, which enables expressing effective theories of particle physics, gravity, and condensed matter, and provides as well a general framework for implementations of models of quantum information and quantum computation, machine learning (ML), and neuroscience. These are defined on the principal bundle M of a connection A for some internal gauge group G, with algebra g, according to the action on a d-dimensional manifold

S[B, A] = ∫_M Tr[B ∧ F],

where B is an ad(g)-valued (d-2)-form, F denotes the field strength of A, which is a 2-form, and the trace Tr is over the internal indices of g, ensuring gauge invariance of the Lagrangian density L = Tr[B ∧ F] of the BF theory.
Variation with respect to the conjugated variables, the connection A and the frame field B, which close a canonical symplectic structure, provides the equations of motion of the theory [138]:

F = 0,    d_A B = 0,

respectively the curvature constraint, imposing the flatness of the connection, and the Gauß constraint, imposing invariance under gauge transformations, where d_A denotes the covariant derivative with respect to the connection A.
At the quantum level, the states of the kinematical Hilbert space of the theory, which fulfill the Gauß constraint by construction, can be represented in terms of cylindrical functionals Cyl supported on graphs Γ that are unions of segments γ_i whose end points meet in nodes n, with holonomies H_γi[A] of the connection A (elements of the group G) assigned to the γ_i and intertwiner operators v_n (invariant tensor products of representations) assigned to the nodes n.
For G = SU(2), spin-networks |Γ, j_γ, ι_n⟩, supported on Γ and labelled by the spins j_γ of the irreducible representations of the group elements assigned to the γ and by the intertwiner quantum numbers ι_n associated to the v_n, provide a basis of the kinematical Hilbert space of the theory. In terms of functionals of Cyl, one obtains the holonomy representation, which is related to the "spin and intertwiner" representation of |Γ, j_γ, ι_n⟩ by means of the Peter-Weyl transform. This allows us to decompose the spin-network cylindrical functional as [139]: with the Wigner matrices D^(j) providing representation matrices of the SU(2) group elements.
The functorial evolution among spin-networks is ensured by the projector operator [140], which implements the curvature constraint in the physical scalar product ⟨in|P|out⟩ among states. We may then regard |in⟩ as elements of the Hilbert space and, without loss of generality, pick those resulting from composing tensorially in Cyl k representations of holonomies. We may further denote them as |j_1 . . . j_k⟩, with some ordering prescription to associate the topological structure of Γ to the sequence of spin labels. Physically evolved states P|in⟩ are distinguished from the former by labelling them as |j̃_1 · · · j̃_k⟩. Similarly, we introduce |out⟩ as the tensor product of (n−k) representations of holonomies, and denote these states as |i_1 . . . i_{n−k}⟩. The matrix elements of ⟨in|P|out⟩ then naturally give rise [98] to an n-tensor.

Geometric RG flow for TQNNs and TNs
The mathematical structures of TQNNs summarized in Sec. 4.1 picture systems "at equilibrium", for which TQFTs provide a topological stability that percolates into the related transition amplitudes. Nonetheless, it is worth considering how stochastic noise might interfere with the topological order ensured by TQFTs, and studying the role of "out-of-equilibrium" physics in the evolution of the systems under scrutiny.
Out-of-equilibrium dynamics is instantiated by considering a heat-flow evolution of the fundamental fields of the theory with respect to a thermal time τ. Typical Langevin equations, complemented with stochastic noise, describe through their convergence toward the equations of motion of the theory the relaxation toward equilibrium of the field configurations representing specific systems [141]. In general, given some fields φ_σ, with classical equations of motion derived, according to the variational principle δS/δφ_σ = 0, from an action S over a Euclidean manifold M, the associated Langevin equations read:

∂φ_σ/∂τ = −δS/δφ_σ + η_σ,

with η_σ a stochastic noise term. The theory at equilibrium is characterized by the symmetries of the equations of motion δS/δφ_σ = 0, which are broken in the transient phase [142]; these symmetries are consistent with, and in the case of BF theories actually generated by, the theories at equilibrium.
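A toy discretization of such a Langevin relaxation can make the picture concrete; the following sketch uses a single scalar degree of freedom with a quadratic action, and all names and parameters are illustrative assumptions:

```python
import numpy as np

# Toy Euler-Maruyama discretization of a Langevin equation
#   d(phi)/d(tau) = -dS/d(phi) + eta,
# for one scalar degree of freedom with the quadratic action
# S(phi) = (phi - phi_eq)**2 / 2. All parameters are illustrative.
rng = np.random.default_rng(7)
phi_eq = 1.0          # equilibrium configuration (fixed point of the drift)
phi = 5.0             # far-from-equilibrium initial condition
dtau, sigma = 0.01, 0.1
for _ in range(5000):
    drift = -(phi - phi_eq)                       # -dS/dphi
    noise = sigma * np.sqrt(dtau) * rng.normal()  # stochastic term eta
    phi += drift * dtau + noise

# The field relaxes to fluctuate near the equilibrium of the drift term.
assert abs(phi - phi_eq) < 0.5
```

In thermal time, the drift term drives convergence toward the classical equation of motion, while the noise keeps the configuration fluctuating around it, mirroring the transient symmetry breaking described above.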
A prototype of geometric heat flow, which goes under the name of the Ricci flow, was introduced by Hamilton and then used by Perelman to prove the Poincaré conjecture. Here the gravitational field g_μν is the basic configuration-space field, while the drift terms are the vacuum Einstein equations of motion, which are expressed by requiring that the components of the Ricci tensor vanish, i.e., R_μν = 0. The Ricci flow then reads

∂g_μν/∂τ = −2 R_μν,

considering now a Lorentzian manifold M. The Ricci flow equations can be further complemented by introducing the Ricci target R^T_μν = κ²(T_μν − (1/2) g_μν T), expressed in terms of the Newton constant G = κ²/(8π) and the energy-momentum tensor of matter T_μν, so as to obtain at equilibrium the Einstein equations R_μν = R^T_μν. The stochastic version of the Ricci flow, with the heat equation turning into a Langevin equation, was introduced and developed in [142] for a generic gravitational system in the presence of matter fields, described by an action S for gravity and matter. Starting from

∂g_μν/∂τ = −2(R_μν − R^T_μν) + η_μν,

in which a multiplicative noise η_μν = η g_μν appears, the Hamiltonian analysis of the stochastic Ricci flow (SRF) in the Arnowitt-Deser-Misner (ADM) variables has been derived [142].
An essential by-product of this discussion, from the Ricci flow perspective, is that the equilibration trajectories correspond to those of a renormalization group (RG) flow. The thermal time τ plays the role of a scale parameter that individuates a dimension in the bulk, which is out of equilibrium. The boundaries are recovered asymptotically in τ, in the infra-red regime, and are by definition at equilibrium and thus symmetric.
For a particular class of TQFTs, the BF theories introduced in Sec. 4.1 for implementing TQNNs and TNs, the geometric RG flow acquires a specific expression as the TQFT equivalent of the gravitational Ricci flow [143].

TNs as a generalization of the main model architectures in ML
The use of TNs is an emerging topic in the ML community, and the integration between the two is quite natural. A TN structure can be viewed as an ML model in which the parameters are adjusted to learn the classification of a data set. Conversely, as Ref. [144] notes, machine learning can aid in determining a factorization of a TN that approximates a data set. Moreover, TNs are also used to compress the layers of ANN architectures, besides a variety of other uses. Tensor networks are becoming increasingly popular as a powerful tool for representing and manipulating high-dimensional data, as in image and video classification tasks in which the data are represented as high-dimensional tensors. High efficiency, flexibility, and ease of use are making them a dominant choice for many AI applications. Furthermore, besides being used to represent data, TNs can be used to process data by exploiting a number of operators, which makes them an effective technique for processing data in ML applications.
As is well known, TNs are particularly well suited for representing quantum many-body states, in which the dimension of the Hilbert space is exponentially large in the number of particles. The corresponding ML approach consists in:
• lifting data to an exponentially higher-dimensional space;
• applying a linear classifier f(x) = W Φ(x) in that non-linear feature space;
• compressing the weights by using TNs.
The output of the model is a separation of classes that would not be linearly separable in the original space. In particular, the decision function is the overlap of the weight tensor W with the feature map tensor Φ, as in Fig. 5. The weight tensor W can be approximated by the decomposition in Fig. 6.
Regularization and optimization are built as a constructive product of low-order tensors, while weight compression is performed using the Matrix Product States (MPS) decomposition. If we view Deep Neural Networks as piecewise compositions of linear discriminators (logistic regression functions), then the TN framework appears as a generalization of the main model architectures found in the ML literature, e.g., Support Vector Machines, kernel models, and Deep Neural Networks.
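The MPS (tensor-train) compression step can be sketched with sequential SVDs on a toy weight tensor; the 2^6 shape and full-rank (lossless) truncation below are illustrative assumptions, not a tuned compression scheme:

```python
import numpy as np

# Sketch: MPS / tensor-train factorization of a weight tensor W via
# sequential SVDs. The toy shape (six 2-dimensional indices) and the
# full-rank truncation are illustrative assumptions.
rng = np.random.default_rng(3)
W = rng.normal(size=(2,) * 6)

cores, rest, rank = [], W.reshape(2, -1), 1
for _ in range(5):
    U, s, Vt = np.linalg.svd(rest, full_matrices=False)
    r = len(s)
    # Each core carries (left bond, physical index, right bond).
    cores.append(U.reshape(rank, 2, r) if cores else U.reshape(1, 2, r))
    rest = (s[:, None] * Vt).reshape(r * 2, -1)
    rank = r
cores.append(rest.reshape(rank, 2, 1))

# Contracting the cores over shared bond indices recovers W exactly
# (at full rank; compression would truncate the small singular values).
out = cores[0]
for c in cores[1:]:
    out = np.tensordot(out, c, axes=([-1], [0]))
W_rec = out.reshape((2,) * 6)
assert np.allclose(W_rec, W)
```

Truncating the singular value spectrum at each step is where the compression, and the implicit regularization discussed above, comes from.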
The literature concerning the use of tensor theory in traditional ML is becoming large. A short review starts with a seminal paper by Stoudenmire and Schwab [146], which demonstrated how algorithms for optimizing TNs can be adapted to supervised learning tasks by using MPS (tensor trains) to parametrize non-linear kernel learning models. Novikov, Trofimov, and Oseledets [147] have shown how an exponentially large tensor of parameters can be represented in a factorized format called the Tensor Train (TT), with the consequence of obtaining a regularization of the model. Glasser, Pancotti, and Cirac [148] explored the connection between TNs and probabilistic graphical models by introducing the concept of a "generalized tensor network architecture" for ML. Ref. [149] then designed a generative model, i.e., a traditional machine learning model that learns joint probability distributions from data and generates samples accordingly, by using MPS. Ref. [150] made use of autoregressive MPSs to build an unsupervised learning model that goes beyond proof-of-concept by showing performance comparable to standard traditional models. Finally, Ref. [151] analyzes the contribution of polynomials of different degrees to the supervised learning performance of different architectures.

Implications for biological control systems
Scale-free biology requires a smooth transition from quantum-like to classical-like behavior.Typical representations of metabolic, signal-transduction, and gene-regulatory pathways are entirely classical, even though many of their steps involve electron-transfer or other mechanisms that are acknowledged to require a quantum-theoretic description [87,152].As noted earlier, free-energy budget considerations suggest that both prokaryotic and eukaryotic cells employ quantum coherence as a computational resource [92].Emerging empirical evidence for longer-range entanglement in mammalian brains suggests that large-scale networks may also be using quantum coherence as a resource [93].Control flow models must, therefore, support the possibility of quantum computation in biological systems.Hierarchical TNs that include unitary components, e.g., MERA-type models, provide this capability.
In prokaryotes, the primary tasks of control flow are adapting metabolism to available resources via metabolite-driven gene regulation [153] and initiating DNA replication and cell division when conditions are favorable. We can, therefore, expect shallow hierarchies of effectively classical control transitions in these organisms. Eukaryotes, however, are characterized by both intracellular compartmentalization and morphological degrees of freedom at the whole-cell scale. We have shown previously that the FEP will induce "neuromorphic" morphologies, i.e., morphologies that segregate inputs from outputs and enable a fan-in/fan-out computational architecture, in any system with morphological degrees of freedom [154]. Such systems can be expected to have deep control hierarchies at the cellular level, with hierarchical structure correlating with morphological structure in morphologically-complex cells such as neurons [155], and in multicellular assemblages at all scales. As well as managing metabolism and replication, such systems must implement active exploration of the environment, communication with other systems, and, crucially for cognition, the writing and reading of stigmergic memories. Thus we can expect such systems to implement QRFs for spacetime and for specific kinds of objects, e.g., conspecifics and suitable substrates for recording stigmergic memories. Such QRFs rely on symmetries, and hence on redundancy of encoded (or encodable) information; they depend, in other words, on the availability of error-correcting codes [25,156]. The implementation of spacetime as a quantum error-correcting code by TNs has been extensively studied by physicists; see [157] for a review and [98] for a detailed analysis using the present formalism. The use of spacetime as an error-correcting code by organisms, e.g., the implementation of translational and rotational invariance of objects by dorsal visual processing in mammals [158,159], is well understood phenomenologically, but the details of neural implementation remain to be elucidated.
Both the context-sensitivity of control networks and the occurrence of context effects in them due to non-commutativity of QRFs can be expected to increase with their complexity and hierarchical depth. "Bowtie" networks with high fan-in/fan-out to/from multi-use proteins or second messengers such as Ca²⁺ are increasingly recognized as ubiquitous in higher eukaryotic cells [160]. Such networks have the general form of the CCCD depicted in Diagram (3). Frequently, such networks evolve via compression of information (e.g., toward shared second messengers, as in [Ca²⁺]-based interactions [161,162]) as an efficiency-increasing mechanism. Bowties introduce semantic ambiguities that must be resolved by context. Each incoming signal has its own governing semantics, but the relevant context can depend on boundary conditions that can be exceedingly difficult (if not impossible) to predetermine (see e.g., [163,164] for general discussions of the history and semantic depth of this problem). As pointed out in [22], a context change x → y is semantically problematic if, for a fixed set {o_i} of observations, the conditional probability distributions P(o_i|x) and P(o_i|y) are well defined, but the joint distribution P(o_i|x ∨ y) is not [106]. This occurs whenever the QRFs for x and y do not commute [57, Th. 7.1]. As suggested by Diagram (3), this context-switching problem affects deep learning using VAEs [165]; see e.g., the application to antimicrobial peptides in [166]. In general, the structure of Diagram (3) can serve as a convenient benchmark for distinguishing signal-transduction networks that incorporate co-deployable versus non-co-deployable QRFs [57].
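The non-commutativity criterion can be made concrete with a minimal single-qubit sketch (our illustration; the Pauli operators Z and X here stand in for the QRFs deployed in contexts x and y). Each context yields a well-defined outcome distribution on a fixed state, but because the operators do not commute, no single joint distribution reproduces both contexts.

```python
import numpy as np

# Two measurement "contexts" on a single qubit, modeled illustratively as
# projective measurements of non-commuting observables.
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # context x
X = np.array([[0, 1], [1, 0]], dtype=complex)   # context y

# Non-commutativity: [Z, X] != 0.
comm = Z @ X - X @ Z
print(np.allclose(comm, 0))  # False

# A fixed state |psi>.
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)

def outcome_probs(obs, psi):
    """P(o_i | context) from the spectral decomposition of obs."""
    vals, vecs = np.linalg.eigh(obs)
    return {round(v.real): abs(vecs[:, k].conj() @ psi) ** 2
            for k, v in enumerate(vals)}

p_x = outcome_probs(Z, psi)  # well-defined P(o_i | x), sums to 1
p_y = outcome_probs(X, psi)  # well-defined P(o_i | y), sums to 1
# No joint classical distribution P(o_i | x or y) reproduces both,
# precisely because the two context observables do not commute.
```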
"Quantum" context effects due to non-commutativity have, interestingly, been reported even at the scale of human language use. The "Snow Queen" experiment [167] challenged subjects with distinct, mutually inconsistent meanings of terms such as 'kind', 'evil', or 'beautiful' in different contexts, and detected statistically significant context effects using the CbD formalism [62,63]. Such effects cannot be explained by linguistic ambiguity, misreading, etc. This language-driven contextuality is taken up in the setting of psycholinguistics and distributional semantics in [168], which combines CbD with sheaf-theoretic [60,61] methods to systematically study semantic ambiguity as creating meaning/sense discrepancies in statements like "It was about time", "She had time on her hands to win the heat", "West led with a queen", etc.
While the notion of "languages" has thus far been applied to cells, tissues, and even nonvertebrates in a mostly metaphorical way, we can speculate that linguistic approaches to understanding the interplay between context dependence and semantic ambiguity may be useful to biology in general. Immune cells (e.g., T cells) are, for example, "programmed" or "trained" by their progenitor cells to respond to local cellular signals and ambient conditions in particular ways. Unexpected context changes may induce responses that are dysfunctional at the organism scale, including chronic disorders [169]; these can be considered consequences of discrepancies between the "actual" semantics of incoming signals and the semantics expected by the immune system's "language." This suggestion of possible "linguistic" contextuality is consonant with the hypothesis of [170] that the immune system is a cognitive (living) system implementing its own system of language and grammar, which may be prone to disorders of communication analogous to those discussed in [168]. Similar context effects have been observed in microbiological systems [171]; there, discrepancies in experimentally derived classical probabilities arising from lactose-glucose interference signaling in E. coli can only be explained in terms of non-classical probabilities. We note that the expression 'quantum-like' [172] is often used for such effects; however, their formal structure is exactly that given by quantum theory.
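The signature of such non-classical probabilities is a violation of the classical law of total probability by a context-dependent interference term. A minimal numerical sketch, using hypothetical probabilities rather than the values measured in [171]:

```python
import math

# Hypothetical numbers, for illustration only.
# Classical law of total probability:
#   P(A) = P(A|B) P(B) + P(A|not B) P(not B)
p_B = 0.5            # e.g., one signaling context present
p_A_given_B = 0.2    # response probability in that context
p_A_given_notB = 0.6 # response probability in the other context

classical = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Quantum-like model: amplitudes for the two "paths" interfere, adding a
# context-dependent term delta = 2 cos(theta) sqrt(P(A|B)P(B)P(A|~B)P(~B)).
theta = 2.0  # relative phase between the two paths (hypothetical)
delta = 2 * math.cos(theta) * math.sqrt(
    p_A_given_B * p_B * p_A_given_notB * (1 - p_B))
quantum_like = classical + delta

print(classical)      # 0.4
# quantum_like differs from the classical value whenever cos(theta) != 0,
# which is the kind of discrepancy a purely classical model cannot produce.
```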
We expect that further research into quantum biology will open significant perspectives on human/mammalian physiology and cognitive capabilities along the lines suggested in the present article. For example, allostatic maintenance, briefly alluded to in §2.1, can be seen as a process that regulates a body's physiological conditions relative to costs and benefits while dynamically allocating resources for the overall adaptability of an organism within its internal environment. The allostatic and anticipatory mechanisms are implemented by the visceromotor cortical regions, which generate autonomic, hormonal, and immunological predictions leading to interoceptive inference [68,69,70,82,173,174,176,177]. This process of inference in humans and mammals putatively utilizes predictive coding for homeostasis-allostasis through a hierarchy of cellular- to organ-level systems, in turn connecting interoception to exteroception and proprioception [82,173,177,178,179]. The basic principles follow from how allostasis protects against potential surprise by going somewhat beyond the error signaling needed for homeostatic maintenance: it minimizes the free energy of internal state trajectories so as to avoid surprise, as discussed in §2.1. The net effect of the process is consonant with the Good Regulator theorem of [175], which shows that regulation of a given system requires an internal model of that system. A further perspective is to emphasize the predictive nature of an integrated, complex, allostatic-interoceptive cortical system capable of supporting a spectrum of psychological phenomena including memory and emotions [177] (cf. [82]). Accordingly, cognitive conditions such as depression and autism have been described as abnormalities of allostatic-interoceptive inference, in which predictive coding mechanisms are impaired by aberrant assimilation and mistuning of prediction errors (putatively a connectivity issue), conceivably pointing to a root cause of many known cognitive conditions [82,173,178].
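As a purely illustrative caricature (our construction, not a model from the cited literature), the predictive-coding picture of homeostatic-allostatic regulation can be sketched as two coupled gradient updates: a perceptual update that revises the interoceptive prediction toward the sensed state, and an active update that moves the bodily state toward a setpoint encoded as a prior.

```python
def regulate(setpoint, sensed, steps=100, lr=0.1):
    """Toy predictive-coding loop for a single interoceptive variable."""
    prediction = 0.0
    state = sensed
    for _ in range(steps):
        error = state - prediction      # interoceptive prediction error
        prediction += lr * error        # perceptual update: revise the model
        # "allostatic" / active update: act on the body so that sensation
        # comes to match the prior setpoint (here, a direct nudge)
        state += lr * (setpoint - state)
    return prediction, state

prediction, state = regulate(setpoint=37.0, sensed=35.0)
# Both the prediction and the regulated state converge toward the 37.0 prior.
```

The point of the sketch is only that error minimization can proceed by changing either the model (perception) or the world (action); allostasis corresponds to acting in anticipation, before error signaling alone would force a correction.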
We anticipate that this fully general, context-sensitive model of control flow will be important for understanding morphogenesis, which is not simply a feed-forward emergent system, but rather a highly context-sensitive, error-minimizing process [23]. Specifically, the collective intelligence of cells during embryonic development, organ regeneration, and metamorphosis can create and repair specific complex structures despite a wide range of perturbations [180]. Changes in the genome, the number of cells, or the starting configuration can often be overcome: bisected embryos develop into normal twins, amputated salamander limbs regrow to normal, and planarian fragments regenerate into perfect little worms [181].
The competency of cellular collectives to reach the correct target morphology despite even drastic interventions requires an understanding of how they navigate, via context-sensitive control flow, problem spaces including anatomical morphospace [182] as well as physiological and transcriptional spaces [156,183]. Understanding the navigation policies used by unconventional collective intelligences can not only illuminate creative problem-solving on rapid timescales (such as the ability to regulate genes to accommodate an entirely novel stressor [184] without evolutionary adaptation), but may also have implications for predicting and managing the goals and behavioral repertoires of synthetic beings [185].

Conclusion
We have shown here how the problem of defining control flow arises in active inference systems, and provided three formal representations of the problem. We have proved that control flow in such systems can always be represented by a tensor network, provided illustrative examples, and shown how the general formalism of topological quantum neural networks can be used to implement a general model of control flow. These results provide a general formalism with which to characterize context dependence in active inference systems at any scale, from that of macromolecular pathways to that of multi-organism communities. They suggest that the concept of communication by language is not just metaphorical when applied to biological systems in general, but rather an appropriate and productive description of interactional dynamics.
We view these results as a further step toward fully integrating the formal models, concepts, and languages of physics, biology, and cognitive science. This integration is not reductive. It rather allows us to classify systems using natural measures of organizational and computational complexity, and to understand how interactions between simpler systems can implement the more complex behavior of the larger systems that they compose.
under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3). M.L. gratefully acknowledges funding from the Guy Foundation and the John Templeton Foundation, Grant 62230. A.M. wishes to acknowledge support by the Shanghai Municipality, through grant No. KBH1512299; by Fudan University, through grant No. JJH1512105; by the Natural Science Foundation of China, through grant No. 11875113; and by the Department of Physics at Fudan University, through grant No. IDH1512092/001.

Figure 1 :
Figure 1: Cartoon illustration of QRFs required to observe and write a readable memory of an environmental state |E⟩. The QRFs E and Y read the state from E and write it to the memory Y, respectively. Any identified system S must be part of E. The clock G_ij is a time QRF that defines the time coordinate t_A. The dashed arrow indicates the observer's thermodynamic process that converts free energy obtained from the unobserved sector F of B to waste heat exhausted through F. Adapted from [58], CC-BY license.
is then a pair of QRF sequences that can be identified with TQFTs that measure and record an outcome, mapping H_X → H_Y, and dually use an outcome read from memory to prepare a state, mapping H_Y → H_X, respectively, as in Diagram 11. As shown in [57], Theorem 3.4, using the CCCD representation, noncommutativity of QRFs induces quantum contextuality, i.e., dependence of measurement results on "non-local hidden variables" that characterize the measurement context [105,106,107]. In the current context, such hidden variables characterize the action of H_B on B, affecting what A will observe next in every cycle of A-B interaction.