Article

The World as a Neural Network

Department of Physics, University of Minnesota, Duluth, MN 55812, USA
Entropy 2020, 22(11), 1210; https://doi.org/10.3390/e22111210
Submission received: 9 September 2020 / Revised: 19 October 2020 / Accepted: 23 October 2020 / Published: 26 October 2020
(This article belongs to the Section Statistical Physics)

Abstract
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons). We first consider stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from the equilibrium by Hamilton–Jacobi equations (with free energy representing Hamilton's principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors with the state vector of neurons representing the hidden variables. We then study stochastic evolution of the hidden variables by considering $D$ non-interacting subsystems with average state vectors, $\bar{x}_1, \dots, \bar{x}_D$, and an overall average state vector, $\bar{x}_0$. In the limit when the weight matrix is a permutation matrix, the dynamics of $\bar{x}_\mu$ can be described in terms of relativistic strings in an emergent $D+1$ dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions that are described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to the entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that were described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other.

1. Introduction

Quantum mechanics is a remarkably successful paradigm for modeling physical phenomena on a wide range of scales ranging from $10^{-19}$ m (i.e., high-energy experiments) to $10^{+26}$ m (i.e., cosmological observations). The paradigm is so successful that it is widely believed that, on the most fundamental level, the entire universe is governed by the rules of quantum mechanics and even gravity should somehow emerge from it. This is known as the problem of quantum gravity, which has not been solved so far, but some progress has been made in the contexts of AdS/CFT [1,2,3], loop quantum gravity [4,5,6], and emergent gravity [7,8,9]. Although extremely important, the problem of quantum gravity is not the only problem with quantum mechanics. The quantum framework also starts to fall apart with the introduction of observers. Everything seems to work very well when observers are kept outside of a quantum system, but it is far less clear how to describe macroscopic observers in a quantum system, such as the universe itself. The realization of the problem triggered an ongoing debate on the interpretations of quantum mechanics, which remains unsettled to this day. On one side of the debate, there are proponents of the many-worlds interpretation claiming that everything in the universe (including observers) must be governed by the Schrödinger equation [10], but then it is not clear how classical probabilities would emerge. On the other side of the debate, there are proponents of the hidden variables theories [11], but there it is also unclear what the role of the complex wave-function is in a purely statistical system. It is important to emphasize that a working definition of observers is necessary not only for settling some philosophical debates, but also for understanding the results of real physical experiments and cosmological observations.
In particular, a self-consistent and paradox-free definition of observers would allow us to understand the significance of Bell's inequalities [12] and to make probabilistic predictions in cosmology [13]. To resolve the apparent inconsistency (or incompleteness) in our description of the physical world, we shall entertain the idea of a theory more fundamental than quantum mechanics. A working hypothesis is that, on the most fundamental level, the dynamics of the entire universe is described by a microscopic neural network that undergoes learning evolution. If correct, then not only macroscopic observers, but, more importantly, quantum mechanics and general relativity should correctly describe the dynamics of the microscopic neural network in the appropriate limits. (Note that the idea of using neural networks to describe gravity is not new and was recently explored in the contexts of quantum neural networks [14], AdS/CFT [15] and emergent gravity [16].)
In this paper, we shall first demonstrate that near equilibrium the learning evolution of a neural network can indeed be modeled (or approximated) with the Madelung equations (see Section 5), where the phase of the complex wave-function has a precise physical interpretation as the free energy of a statistical ensemble of hidden variables. The hidden variables describe the (classical) state of the individual neurons whose statistical ensemble is given by a partition function and the corresponding free energy. This free energy is a function of the trainable variables (such as bias vector and weight matrix), whose stochastic and learning dynamics we shall study (see Section 4). Note that, while the stochastic dynamics generically leads to the production of entropy (i.e., the second law of thermodynamics), the learning dynamics generically leads to the destruction of entropy (i.e., the second law of learning). As a result, in equilibrium, the time-averaged entropy of the system remains constant and the corresponding dynamics can be modeled using quantum mechanics. It is important to note that the entropy (and entropy production) that we discuss here is the entropy of either hidden or trainable variables, which need not vanish even for pure states. Of course, one can also discuss mixed states, and then the corresponding von Neumann entropy gives an additional contribution to the total entropy.
The situation changes dramatically whenever some of the degrees of freedom are not thermalized. While it should, in principle, be possible to model the thermalized degrees of freedom using quantum theory, the non-thermalized degrees of freedom are not likely to exactly follow the rules of quantum mechanics. We shall discuss two non-equilibrium limits: one that can nevertheless be described using classical physics (e.g., Hamiltonian mechanics) and another that can be described using gravitational physics (e.g., general relativity). The classical limit is relevant when the non-equilibrium evolution of the trainable variables is dominated by the entropy destruction due to learning, while the stochastic entropy production is negligible. The dynamics of such a system is well approximated by the Hamilton–Jacobi equations with the free energy playing the role of Hamilton's principal function (see Section 6). The gravitational limit is relevant when even the hidden variables (i.e., state vectors of neurons) have not yet thermalized and the stochastic entropy production governs the non-equilibrium evolution of the system (see Section 9). In the long run, all of the degrees of freedom must thermalize, and then quantum mechanics should provide a correct description of the learning system.
It is well known that, during learning, the neural network is attracted towards a network with a low complexity, a phenomenon also known as dimensional reduction or what we call the second law of learning [16]. An example of a low complexity neural network is one that is described by a permutation weight matrix or one that is made out of one-dimensional chains of neurons. (Note that a similar phenomenon was recently observed in the context of the information graph flow [17].) If the set of state vectors can also be divided into non-interacting subsets (or subsystems) with average state vectors, $\bar{x}_1, \dots, \bar{x}_D$, and an overall average state vector, $\bar{x}_0$, then the dynamics of $\bar{x}_\mu$ can be described with relativistic strings in an emergent $D+1$ dimensional space-time (see Section 8). In general, the subsystems would interact, and then the emergent space-time would be described by a gravitational theory, such as general relativity (see Section 9). Note that, in either case, the main challenge is to figure out exactly which degrees of freedom have already thermalized (and, thus, can be modeled with quantum mechanics) and which degrees of freedom are still in the process of thermalization and should be modeled with other methods, such as Hamiltonian mechanics or general relativity. In addition, we shall discuss yet another method, which is motivated by the holographic principle, and it is particularly useful when the bulk neurons are still in the process of equilibration, but the boundary neurons have already thermalized (see Section 10).
The paper is organized as follows. In Section 2, we review the theory of neural networks and, in Section 3, we discuss a thermodynamic approach to learning. In Section 4, we derive the action that governs the dynamics of the trainable variables by applying the principle of stationary entropy production. The action is used to study the dynamics near equilibrium in Section 5 (which corresponds to the quantum limit) and further away from equilibrium in Section 6 (which corresponds to the classical limit). In Section 7, we study the non-equilibrium dynamics of the hidden variables and, in Section 8, we argue that, in certain limits, the dynamics can be described in terms of relativistic strings in the emergent space-time. In Section 9, we apply the principle of stationary entropy production to derive the action that describes the equilibration of the emergent space-time (which corresponds to the gravitational limit) and, in Section 10, we discuss when the gravitational theory can have a holographic dual description as a quantum theory. In Section 11, we summarize and discuss the main results of the paper.

2. Neural Networks

We start with a brief review of the theory of neural networks by following the construction that was introduced in Ref. [16]. The neural network shall be defined as a neural septuple $(x, \hat{P}_{in}, \hat{P}_{out}, \hat{w}, b, f, H)$, where $x \in \mathbb{R}^N$ is the state vector of neurons, $\hat{P}_{in}$ and $\hat{P}_{out}$ are the projection operators to subspaces spanned by, respectively, the $N_{in}$ input and $N_{out}$ output neurons, $\hat{w} \in \mathbb{R}^{N \times N}$ is a weight matrix, $b \in \mathbb{R}^N$ is a bias vector, $f: \mathbb{R} \to \mathbb{R}$ is an activation function and $H: \mathbb{R}^N \times \mathbb{R}^N \times \mathbb{R}^{N \times N} \to \mathbb{R}$ is a loss function. This definition is somewhat different from the one usually used in the literature on machine learning, but we found that it is a lot more useful for analyzing physical theories in the context of a microscopic neural network that we are interested in here. We shall not distinguish between different layers and, so, all $N$ neurons are connected into a single neural network with connections that are described by a single $N \times N$ weight matrix, $\hat{w}$. The matrix can be viewed as an adjacency matrix of a weighted directed graph with neurons representing the nodes and elements of the weight matrix representing directed edges. However, we will distinguish between two different types of neurons: the boundary neurons, $N_\partial = N_{in} + N_{out}$, and the bulk neurons, $N_\mathfrak{b} = N - N_\partial$. Similarly, the boundary and bulk projection operators are defined, respectively, as $\hat{P}_\partial = \hat{P}_{in} + \hat{P}_{out}$ and $\hat{P}_\mathfrak{b} = \hat{I} - \hat{P}_\partial$.
The state vector of neurons, x R N , or just state vector, evolves in discrete time-steps, according to equation
$$x(t+1) = f\big(\hat{w}\, x(t) + b\big) \tag{1}$$
which can also be written in terms of components
$$x_i(t+1) = f\big(w_{ij}\, x_j(t) + b_i\big). \tag{2}$$
Note that summations over repeated indices are implied everywhere in the paper unless stated otherwise (e.g., $w_{ij}x_j = \sum_j w_{ij}x_j$, $\frac{\partial^2 F}{\partial q_k^2} = \sum_k \frac{\partial^2 F}{\partial q_k^2}$ and $\left(\frac{\partial F}{\partial q_k}\right)^2 = \sum_k \left(\frac{\partial F}{\partial q_k}\right)^2$). A crucial simplification of the dynamical system (1) was to assume that the activation map $f: \mathbb{R}^N \to \mathbb{R}^N$ acts separately on each component (2) with some activation function $f(x)$. The logistic function $f(x) = (1+\exp(-x))^{-1}$ and the rectified linear unit $f(x) = \max(0, x)$ are some important examples of activation functions, but we shall use the hyperbolic tangent $f(x) = \tanh(x)$, which is also widely used in machine learning. The main reason is that the hyperbolic tangent is a smooth odd function with a bounded range, which greatly simplifies the analytical calculations that we shall carry out in the paper.
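As a concrete illustration of the update rule (1) and (2), the following sketch iterates a small network with the hyperbolic tangent activation (the network size, weights, and biases are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                               # illustrative number of neurons
w = rng.normal(0.0, 0.5, (N, N))    # weight matrix w_ij
b = rng.normal(0.0, 0.1, N)         # bias vector b_i

def step(x, w, b):
    """One application of the activation map: x(t+1) = f(w x(t) + b), with f = tanh."""
    return np.tanh(w @ x + b)

x = rng.normal(0.0, 1.0, N)
for _ in range(50):                 # iterate the discrete-time dynamics (1)
    x = step(x, w, b)
```

Because tanh is bounded, every component of the state vector stays strictly inside $(-1, 1)$ no matter how long the dynamics is iterated.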
The main problem in machine learning, or the main learning objective, is to find a bias vector, b , and a weight matrix, w ^ , which minimize some suitably defined loss function H ( x , b , w ^ ) . In what follows, we shall consider two loss functions: the “bulk” loss and the “boundary” loss. The bulk loss function is defined as a local sum over all neurons
$$H(x, b, \hat{w}) = \frac{1}{2}\Big(x - f(\hat{w}x + b)\Big)^T\Big(x - f(\hat{w}x + b)\Big) + V(x) = \frac{1}{2}\sum_i\Big(x_i - f(w_{ij}x_j + b_i)\Big)^2 + \sum_i V(x_i). \tag{3}$$
The first term represents the sum over squares of local errors or, equivalently, differences between the state of a neuron before, $x_i$, and after, $f(w_{ij}x_j + b_i)$, a single execution of the activation map. The second term represents a local objective, such as a binary classification of the signal $x_i$. For example, if $V(x_i) = -\frac{m}{2}x_i^2$, then the values of $x_i$ closer to the lower and upper bounds are rewarded and values in-between are penalized. Although the bulk loss is much easier to analyze analytically, in practice it is often more useful to define the boundary loss function by summing over only the boundary neurons,
$$H_\partial(x, b, \hat{w}) = H\big(\hat{P}_\partial x,\ \hat{P}_\partial b,\ \hat{P}_\partial^T \hat{w} \hat{P}_\partial\big). \tag{4}$$
In fact, the boundary loss is usually used in supervised learning, but, as was argued in [16], the bulk loss is more suitable for unsupervised learning tasks.
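The bulk loss (3) is straightforward to evaluate numerically. The sketch below is a hypothetical implementation (the mass-term potential is taken as $V(x_i) = -\frac{m}{2}x_i^2$, which rewards values near the bounds as described above); it checks that at a fixed point of the activation map the loss reduces to the local objective alone:

```python
import numpy as np

def bulk_loss(x, w, b, m=0.0):
    """Bulk loss (3): squared local errors plus the local objective V(x_i) = -(m/2) x_i^2."""
    err = x - np.tanh(w @ x + b)          # x_i - f(w_ij x_j + b_i)
    return 0.5 * err @ err - 0.5 * m * np.sum(x**2)

rng = np.random.default_rng(1)
N = 4
w = rng.normal(size=(N, N))
x_fp = np.zeros(N)                        # x = 0 is a fixed point of the map when b = 0
loss_fp = bulk_loss(x_fp, w, np.zeros(N), m=0.3)   # only V survives, and V(0) = 0
```

For $m = 0$ the loss is a sum of squares and hence non-negative for any state vector.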
Instead of following the dynamics of the individual states, which might be challenging, one can use the principle of maximum entropy [18,19] to derive a canonical ensemble of states [16]. The corresponding canonical partition function is
$$Z(\beta, b, \hat{w}) = \int d^N x\; e^{-\beta H(x, b, \hat{w})} \tag{5}$$
and the free energy is
$$F(\beta, b, \hat{w}) = -\frac{1}{\beta}\log Z(\beta, b, \hat{w}). \tag{6}$$
At a constant “temperature”, $T = \beta^{-1} = \mathrm{const}$, the ensemble can evolve with time either due to the internal (or what we shall call hidden) dynamics of the state vector, $x(t)$, or due to the external (or what we shall call training) dynamics of the bias vector, $b(t)$, and weight matrix, $\hat{w}(t)$. The partition function for the bulk loss function (3) with a mass-term potential, $V(x_i) = -\frac{m}{2}x_i^2$, and a hyperbolic tangent activation function, $f(x) = \tanh(x)$, was calculated in [16] using the Gaussian approximation. The result is
$$Z(\beta, b, \hat{w}) \approx (2\pi)^{N/2} \det\Big(\hat{I}(1-\beta m) + \beta\hat{G}\Big)^{-1/2} \tag{7}$$
where
$$\hat{G} \equiv \big(\hat{I} - \hat{f}'\hat{w}\big)^T\big(\hat{I} - \hat{f}'\hat{w}\big) \tag{8}$$
and $\hat{f}'$ is a diagonal matrix of first derivatives of the activation function,
$$f'_{ii} \equiv \frac{df(y_i)}{dy_i}\bigg|_{y_i = w_{ij}x_j + b_i}. \tag{9}$$
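The Gaussian-approximation formula (7) can be evaluated directly for a random network. The sketch below (illustrative sizes and parameters, not taken from the paper) builds the operator $\hat{G}$ of (8) from the diagonal matrix of tanh derivatives (9) and checks that it is symmetric and positive semi-definite, so the determinant in (7) is well defined:

```python
import numpy as np

rng = np.random.default_rng(2)
N, beta, m = 6, 0.5, 0.1                    # illustrative size and parameters
w = rng.normal(0.0, 0.3, (N, N))
b = rng.normal(0.0, 0.1, N)
x = rng.normal(0.0, 1.0, N)

fprime = 1.0 - np.tanh(w @ x + b)**2        # diagonal of f', since (tanh y)' = 1 - tanh^2 y
A = np.eye(N) - fprime[:, None] * w         # I - f' w, with f' acting as a diagonal matrix
G = A.T @ A                                 # the operator G of equation (8)

M = (1.0 - beta * m) * np.eye(N) + beta * G # matrix inside the determinant in (7)
Z_gauss = (2.0 * np.pi)**(N / 2) * np.linalg.det(M)**(-0.5)
```

Since $\hat{G} = \hat{A}^T\hat{A}$ is positive semi-definite and $1 - \beta m > 0$ here, the resulting partition function is real and positive.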

3. Thermodynamics of Learning

Given the partition function, the average loss can be calculated by a simple differentiation,
$$U(\beta, b, \hat{w}) = \big\langle H(x, b, \hat{w}) \big\rangle = -\frac{\partial}{\partial\beta}\log Z(\beta, b, \hat{w}) = \frac{\partial}{\partial\beta}\Big(\beta F(\beta, b, \hat{w})\Big). \tag{10}$$
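The relation $U = \langle H \rangle = \frac{\partial}{\partial\beta}(\beta F)$ can be spot-checked numerically for a toy single-neuron ensemble by computing the partition function on a grid and comparing $\langle H \rangle$ with a finite-difference derivative of $\beta F$ (all parameter values below are illustrative, not taken from the paper):

```python
import numpy as np

w_, b_ = 0.7, 0.2                       # illustrative single-neuron parameters

def H(x):
    # bulk loss for N = 1 with m = 0: H = 0.5*(x - f(w x + b))^2
    return 0.5 * (x - np.tanh(w_ * x + b_))**2

x = np.linspace(-10.0, 10.0, 40001)     # integration grid
dx = x[1] - x[0]

def Z(beta):
    return np.sum(np.exp(-beta * H(x))) * dx

def F(beta):
    return -np.log(Z(beta)) / beta

beta = 2.0
U_avg = np.sum(H(x) * np.exp(-beta * H(x))) * dx / Z(beta)   # <H>
eps = 1e-4                                                    # central finite difference
U_der = ((beta + eps) * F(beta + eps) - (beta - eps) * F(beta - eps)) / (2 * eps)
```

The two estimates of the average loss agree up to finite-difference error.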
If the neural network was trained for a long time, then the weight matrix and the bias vector are in a state that minimizes (at least locally) the average loss function and then its variations with respect to w ^ and b must vanish,
$$\frac{\partial U(\beta, b, \hat{w})}{\partial w_{ij}} = \frac{\partial^2}{\partial w_{ij}\,\partial\beta}\Big(\beta F(\beta, b, \hat{w})\Big) = 0, \qquad \frac{\partial U(\beta, b, \hat{w})}{\partial b_i} = \frac{\partial^2}{\partial b_i\,\partial\beta}\Big(\beta F(\beta, b, \hat{w})\Big) = 0. \tag{11}$$
We shall call this state the state of the learning equilibrium. An important property of the equilibrium, which follows from (11), is that the total free energy must decompose into a sum of two terms
$$F(\beta, b, \hat{w}) = A(\beta) - \frac{1}{\beta}C(b, \hat{w}). \tag{12}$$
Likewise, the total entropy must also decompose into a sum of two terms,
$$S_x(\beta, b, \hat{w}) = \beta^2 \frac{\partial F(\beta, b, \hat{w})}{\partial\beta} = \beta^2 \frac{\partial}{\partial\beta}\left(A(\beta) - \frac{1}{\beta}C(b, \hat{w})\right) = S_0(\beta) + C(b, \hat{w}) \tag{13}$$
where the first term is the familiar thermodynamic entropy
$$S_0(\beta) = \beta^2 \frac{\partial A(\beta)}{\partial\beta} = \beta\big(U(\beta) - A(\beta)\big) \tag{14}$$
and the second term, C ( b , w ^ ) , is related to the complexity of the neural network (see Ref. [16]).
As the learning progresses, the average loss, $U(\beta)$, decreases, the temperature parameter, $\beta^{-1}$, decreases, and, thus, one might expect that the thermodynamic entropy, $S_0$, should also decrease. However, it is not the thermodynamic entropy, $S_0$, but the total entropy, $S_x$ (whose exponent describes the accessible volume of the configuration space for $x$), that should decrease with learning. We call it the second law of learning:
Second Law of Learning: 
the total entropy of a learning system can never increase during learning and is constant in a learning equilibrium,
$$\frac{d}{dt}S_x \le 0. \tag{15}$$
In the long run the system is expected to approach an equilibrium state with the smallest possible total entropy, S x , which corresponds to the lowest possible sum of the thermodynamic entropy, S 0 ( β ) , and of the complexity function C ( b , w ^ ) .
For a system transitioning between equilibrium states at constant temperature, T = 1 / β , variations of the free energy must vanish, d F = 0 , and then Equation (12) takes the form of the first law,
$$dA - T\,dC = dU - T\,dS_x = dU - T\,dS_0 - T\,dC = 0, \tag{16}$$
or what we call the first law of learning:
First Law of Learning: 
the increment in the loss function is proportional to the increment in the thermodynamic entropy plus the increment in the complexity
$$dU = T\,dS_x = T\,dS_0 + T\,dC. \tag{17}$$

4. Entropic Mechanics

So far, the neural networks were analyzed by considering statistical ensembles of the state vectors, x , but the bias vector, b , and weight matrix, w ^ , were treated deterministically. The next step is to promote b and w ^ to stochastic variables in order to study their near-equilibrium dynamics. In the next section, we will show that the training dynamics of b and w ^ can be approximated by Madelung equations, with x playing the role of the hidden variables. For this reason, we shall refer to the bias vectors and weight matrices as “trainable” variables and to the state vectors as “hidden” variables. This does not mean that the trainable variables are the quantized versions of the corresponding classical variables, but only that their stochastic evolution near equilibrium can often be described by quantum mechanics.
Consider a family of trainable variables, $b(q)$ and $\hat{w}(q)$, parametrized by dynamical parameters $q_k$, where $k = 1, \dots, K$. Typically, the number of parameters $K$ is much smaller than $N + N^2$ (i.e., the number of parameters required to describe a generic vector $b$ and a generic matrix $\hat{w}$), and the art of designing a neural architecture is to come up with functions $b(q)$ and $\hat{w}(q)$ which are most efficient in finding solutions. To make the statement more quantitative, consider an ensemble of neural networks described by a probability distribution $p(t, q)$, which evolves with time according to a Fokker–Planck equation
$$\frac{\partial p}{\partial t} = \frac{\partial}{\partial q_k}\left(D\frac{\partial p}{\partial q_k} - \frac{dq_k}{dt}\,p\right). \tag{18}$$
If we assume that the learning evolution (or the drift) is in the direction of the gradient of the free energy,
$$\frac{dq_k}{dt} = \gamma \frac{\partial F}{\partial q_k} \tag{19}$$
then
$$\frac{\partial p}{\partial t} = \frac{\partial}{\partial q_k}\left(D\frac{\partial p}{\partial q_k} - \gamma\frac{\partial F}{\partial q_k}\,p\right). \tag{20}$$
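A minimal finite-difference integration of the drift–diffusion equation (20) can be sketched as follows (the free-energy profile $F(q)$ and the constants $D$ and $\gamma$ are hypothetical, chosen only so that the explicit scheme is stable); total probability is preserved by the flow:

```python
import numpy as np

D, gamma = 0.1, 0.05                  # hypothetical diffusion and learning constants
q = np.linspace(-5.0, 5.0, 401)
dq = q[1] - q[0]
F = -0.5 * q**2                       # hypothetical free-energy profile F(q)
dFdq = np.gradient(F, dq)

p = np.exp(-q**2)                     # initial distribution
p /= p.sum() * dq                     # normalize so that the integral of p is 1

dt = 1e-3
for _ in range(2000):
    # probability flux: D dp/dq - gamma dF/dq * p, as in equation (20)
    flux = D * np.gradient(p, dq) - gamma * dFdq * p
    p = p + dt * np.gradient(flux, dq)

total = p.sum() * dq                  # should remain close to 1
```

Because the right-hand side of (20) is a total divergence and the flux vanishes at the edges of the grid, the normalization survives the evolution.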
This may be a good guess on short time scales when the free energy does not change much, but in general both $p(t,q)$ and $F(t,q)$ can depend on time explicitly and implicitly through the variables $q$. To describe such dynamics, we shall employ the principle of stationary entropy production (see Ref. [20]):
Principle of Stationary Entropy Production: 
The path taken by a system is the one for which the entropy production is stationary.
The principle can be thought of as a generalization of both the maximum entropy principle [18,19] and the minimum entropy production principle [21,22], which is often used in non-equilibrium thermodynamics. In the context of neural networks, it is beneficial to have a large entropy, as it implies a higher rate with which new solutions can be discovered. Subsequently, the optimal neural architecture should be the one for which the entropy destruction is minimized or, equivalently, the entropy production is maximized. This justifies the use of the principle in the context of optimal learning systems [16].
The Shannon entropy of the distribution p ( t , q ) (not to confuse with S x ( β , q ) ) is given by
$$S_q(t) = -\int d^K q\; p(t, q)\log p(t, q) \tag{21}$$
and using (20), the entropy production is given by
$$\frac{dS_q}{dt} = -\int d^K q\; p\,\frac{\partial \log p}{\partial t} - \int d^K q\; \log p\,\frac{\partial p}{\partial t} = -\frac{d}{dt}\int d^K q\; p - \int d^K q\; \log p\,\frac{\partial p}{\partial t} = -\int d^K q\; \log p\,\frac{\partial}{\partial q_k}\left(D\frac{\partial p}{\partial q_k} - \gamma\frac{\partial F}{\partial q_k}\,p\right)$$
which can be simplified (after integrating by parts and ignoring the boundary terms, i.e., by assuming periodic or vanishing boundary conditions),
$$\frac{dS_q}{dt} = \int d^K q\; \frac{\partial p}{\partial q_k}\left(\frac{D}{p}\frac{\partial p}{\partial q_k} - \gamma\frac{\partial F}{\partial q_k}\right) = \int d^K q\; \sqrt{p}\left(-4D\frac{\partial^2}{\partial q_k^2} + \gamma\frac{\partial^2 F}{\partial q_k^2}\right)\sqrt{p}. \tag{22}$$
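The integration-by-parts identity (22) can be checked numerically: evolving $p$ by one small Fokker–Planck step of (20) and differencing the Shannon entropy should reproduce $\int \left[ D (\partial p/\partial q)^2/p + \gamma\, p\, \partial^2 F/\partial q^2 \right] dq$. A sketch with an illustrative Gaussian $p$ and quadratic $F$ (both hypothetical):

```python
import numpy as np

D, gamma = 0.1, 0.05
q = np.linspace(-6.0, 6.0, 1201)
dq = q[1] - q[0]
F = -0.5 * q**2                       # hypothetical free energy
p = np.exp(-0.5 * (q - 1.0)**2)       # hypothetical distribution
p /= p.sum() * dq

def entropy(p):
    return -np.sum(p * np.log(p + 1e-300)) * dq

dFdq = np.gradient(F, dq)
flux = D * np.gradient(p, dq) - gamma * dFdq * p
dpdt = np.gradient(flux, dq)          # right-hand side of (20)

dt = 1e-5
dS_numeric = (entropy(p + dt * dpdt) - entropy(p)) / dt   # direct entropy change

d2Fdq2 = np.gradient(dFdq, dq)
dS_formula = np.sum(D * np.gradient(p, dq)**2 / p + gamma * p * d2Fdq2) * dq
```

The two numbers agree up to discretization error, confirming that the boundary terms dropped in the derivation are indeed negligible here.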
This quantity is a functional of both p ( t , q ) and F ( t , q ) and, thus, in addition to modeling the dynamics of the probability distribution, we must also model the dynamics of the free energy.
The total rate of change of the free energy is given by
$$\frac{d}{dt}F(t, q) = \frac{\partial F(t, q)}{\partial t} + \frac{dq_k}{dt}\frac{\partial F(t, q)}{\partial q_k} = \frac{\partial F(t, q)}{\partial t} + \gamma\left(\frac{\partial F(t, q)}{\partial q_k}\right)^2 \tag{23}$$
where the first term represents the change of the free energy due to dynamics of hidden variables, x , and the second term represents the change in the free energy due to dynamics of trainable variables, b and w ^ . In what follows, it will be convenient to denote the time-averaged rate of change of free energy as
$$\left\langle \frac{d}{dt}F(t, q) \right\rangle_t \equiv -V(q). \tag{24}$$
Subsequently, according to the principle of stationary entropy production, the dynamics of p ( t , q ) and F ( t , q ) must be such that the entropy production is extremized subject to a constraint
$$\frac{\partial F}{\partial t} + \gamma\left(\frac{\partial F}{\partial q_k}\right)^2 + V = 0. \tag{25}$$
The optimization problem can be solved by defining the following “action”,
$$S_q[p, F] = \int_0^T dt\, \frac{dS_q}{dt} + \mu \int_0^T dt \int d^K q\; p\left(\frac{\partial F}{\partial t} + \gamma\left(\frac{\partial F}{\partial q_k}\right)^2 + V\right), \tag{26}$$
where μ is a Lagrange multiplier, and then the “equations of motion” are obtained by setting variations of the action to zero,
$$\frac{\delta S_q}{\delta p} = \frac{\delta S_q}{\delta F} = 0. \tag{27}$$

5. Quantum Mechanics

In the previous section, we developed a stochastic description of the trainable variables $q$, which describe the weight matrix $\hat{w}(q)$ and the bias vector $b(q)$. We argued that, on short time-scales, the dynamics of the probability distribution $p(t,q)$ and of the free energy $F(t,q)$ is given by Equations (20) and (23), but on longer time-scales an approximate dynamics can be obtained using the principle of stationary entropy production. The corresponding “action” is given by (26), which can be rewritten using (22),
$$S_q[p, F] = \int_0^T dt \int d^K q\; \sqrt{p}\left(-4D\frac{\partial^2}{\partial q_k^2} + \gamma\frac{\partial^2 F}{\partial q_k^2} + \mu\frac{\partial F}{\partial t} + \mu\gamma\left(\frac{\partial F}{\partial q_k}\right)^2 + \mu V\right)\sqrt{p}. \tag{28}$$
The five terms on the right hand side represent:
(1)
$-4D\frac{\partial^2}{\partial q_k^2}$, the entropy production due to stochastic dynamics of the $q_k$'s,
(2)
$\gamma\frac{\partial^2 F}{\partial q_k^2}$, the entropy production due to learning dynamics of the $q_k$'s,
(3)
$\mu\frac{\partial F}{\partial t}$, the free energy production due to dynamics of the $x_i$'s,
(4)
$\mu\gamma\left(\frac{\partial F}{\partial q_k}\right)^2$, the free energy production due to learning dynamics of the $q_k$'s, and
(5)
$\mu V$, the (negative of the) total time-averaged free energy production.
Note that the entropy production due to stochastic dynamics is usually positive (due to the second law of thermodynamics), but the entropy production due to learning dynamics is usually negative (due to the second law of learning). While the learning entropy production is expected to dominate the dynamics far away from an equilibrium, the stochastic entropy production is expected to give the main contribution near equilibrium.
From (28), the equations of motion (27) are obtained by setting variations to zero,
$$\frac{\delta S_q[p, F]}{\delta F} = \gamma\frac{\partial^2 p}{\partial q_k^2} - \mu\frac{\partial p}{\partial t} - 2\mu\gamma\frac{\partial}{\partial q_k}\left(\frac{\partial F}{\partial q_k}\,p\right) = 0 \tag{29}$$
$$\frac{\delta S_q[p, F]}{\delta p} = -\frac{4D}{\sqrt{p}}\frac{\partial^2 \sqrt{p}}{\partial q_k^2} + \gamma\frac{\partial^2 F}{\partial q_k^2} + \mu\frac{\partial F}{\partial t} + \mu\gamma\left(\frac{\partial F}{\partial q_k}\right)^2 + \mu V = 0. \tag{30}$$
It is convenient to define a velocity vector
$$u_k \equiv 2\gamma\frac{\partial F}{\partial q_k} \tag{31}$$
and then (29) can be expressed as a Fokker–Planck equation
$$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial q_k}\big(u_k\, p\big) + \boxed{\frac{\gamma}{\mu}\frac{\partial^2 p}{\partial q_k^2}} \tag{32}$$
and (30) as a Navier–Stokes equation (after differentiating with respect to $q_j$)
$$\frac{\partial u_j}{\partial t} + u_k\frac{\partial u_j}{\partial q_k} + \boxed{\frac{\gamma}{\mu}\frac{\partial^2 u_j}{\partial q_k^2}} = -2\gamma\frac{\partial}{\partial q_j}\left(V - \frac{4D}{\mu\sqrt{p}}\frac{\partial^2 \sqrt{p}}{\partial q_k^2}\right). \tag{33}$$
Several comments are in order. First of all, the Fokker–Planck Equation (32) differs from the “stochastic” Fokker–Planck Equation (20). This is a consequence of our assumption that (20) is only valid on very short time scales, while, according to the principle of stationary entropy production, Equations (32) and (33) must be valid on much longer time-scales. Secondly, if $\mu > 0$, then the kinematic viscosity in the Navier–Stokes Equation (33), $\nu = -\gamma/\mu$, is negative, which is a consequence of the second law of learning. And, finally, if we neglect the entropy production due to learning (i.e., the term $\gamma\frac{\partial^2 F}{\partial q_k^2}$ in (28)), then the resulting equations of motion would be the same as (32) and (33), but with the terms in boxes set to zero. These are the well-known Madelung equations, which are equivalent to the Schrödinger equation
$$i\,\frac{4D}{\gamma}\frac{\partial \Psi}{\partial t} = \left(-4D\frac{\partial^2}{\partial q_k^2} + V\right)\Psi \tag{34}$$
for the wave-function defined as
$$\Psi \equiv \sqrt{p}\,\exp\left(\frac{i\gamma}{4D}F\right). \tag{35}$$
Moreover, in this limit, the action (28) takes the form of the Schrödinger action
$$S_q[\Psi] = \int_0^T dt \int d^K q\; \Psi^*\left(-4D\frac{\partial^2}{\partial q_k^2} + V - i\,\frac{4D}{\gamma}\frac{\partial}{\partial t}\right)\Psi. \tag{36}$$
Therefore, we conclude that near equilibrium, i.e., when the first term in (28) is much larger than the second term, our system can be modeled by quantum mechanics.

6. Hamiltonian Mechanics

The next step is to consider a non-equilibrium dynamics of the trainable variables, which is relevant when the second term in (28) is much larger than the first term. This corresponds to a limit when the entropy destruction is dominated by the learning dynamics and the stochastic entropy production is negligible. The corresponding Fokker–Planck equation remains the same as before (32), but the Navier–Stokes Equation (33) is greatly simplified,
$$\frac{\partial u_j}{\partial t} + u_k\frac{\partial u_j}{\partial q_k} + \frac{\gamma}{\mu}\frac{\partial^2 u_j}{\partial q_k^2} = -2\gamma\frac{\partial V}{\partial q_j}. \tag{37}$$
In this limit, the dynamics of the free energy $F$ does not depend on the probability distribution $p$ and, thus, Equation (37) decouples from (32) and can be solved separately. In terms of the free energy, the equation of motion (30) is
$$-\frac{\partial F}{\partial t} = V + \gamma\left(\frac{\partial F}{\partial q_k}\right)^2 + \frac{\gamma}{\mu}\frac{\partial^2 F}{\partial q_k^2} \tag{38}$$
which can be thought of as a Hamilton–Jacobi equation for Hamilton's principal function $F$ and a Hamiltonian function
$$H\left(q_k, \frac{\partial F}{\partial q_k}, \frac{\partial^2 F}{\partial q_k^2}\right) = V + \gamma\left(\frac{\partial F}{\partial q_k}\right)^2 + \frac{\gamma}{\mu}\frac{\partial^2 F}{\partial q_k^2}. \tag{39}$$
However, note that, in classical mechanics, the Hamiltonian function only depends on the $q_k$'s and $\frac{\partial F}{\partial q_k}$'s, but, in our case, it also depends on one more variable, $\sum_k \frac{\partial^2 F}{\partial q_k^2}$.
From Equations (19) and (31), we get
$$\frac{dq_j}{dt} = \gamma\frac{\partial F}{\partial q_j} = \frac{1}{2}u_j \tag{40}$$
and then (38) can be rewritten as
$$\frac{dF}{dt} = \frac{\partial F}{\partial t} + \frac{dq_k}{dt}\frac{\partial F}{\partial q_k} = -\frac{\gamma}{\mu}\frac{\partial^2 F}{\partial q_k^2} - V. \tag{41}$$
In the limit when the entropy production (due to both learning and stochastic dynamics) is negligible, i.e., $\left|\frac{\gamma}{\mu}\frac{\partial^2 F}{\partial q_k^2}\right| \ll V$, Equations (40) and (41) can be used in order to obtain classical equations of motion
$$\frac{d^2 q_j}{dt^2} = -\gamma\frac{\partial V}{\partial q_j}. \tag{42}$$
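In this limit the trainable variables obey Newtonian dynamics, so any symplectic integrator conserves the associated energy $\frac{1}{2\gamma}\left(\frac{dq}{dt}\right)^2 + V$. A leapfrog sketch for a hypothetical quadratic potential $V(q) = q^2/2$ (the value of $\gamma$ is illustrative):

```python
import numpy as np

gamma = 0.5                       # illustrative learning-rate constant

def dVdq(q):                      # dV/dq for the hypothetical potential V(q) = q^2/2
    return q

q, v = 1.0, 0.0                   # initial position and velocity
dt = 1e-3
for _ in range(10000):            # leapfrog (kick-drift-kick)
    v -= 0.5 * dt * gamma * dVdq(q)
    q += dt * v
    v -= 0.5 * dt * gamma * dVdq(q)

E  = 0.5 * v**2 / gamma + 0.5 * q**2    # energy (1/2γ) v^2 + V(q)
E0 = 0.5 * 1.0**2                        # initial energy: v = 0, q = 1
```

The energy drift stays at the level of the integrator's truncation error, as expected for a symplectic scheme.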
In the opposite limit, $V \ll \left|\frac{\gamma}{\mu}\frac{\partial^2 F}{\partial q_k^2}\right|$, the equation for the free energy (41) takes the following form,
$$\frac{\partial F}{\partial t} = -\gamma\left(\frac{\partial F}{\partial q_k}\right)^2 - \frac{\gamma}{\mu}\frac{\partial^2 F}{\partial q_k^2}, \tag{43}$$
which has a simple time-independent (i.e., $\frac{\partial F}{\partial t} = 0$) solution that is given by
$$F = C_0 + \frac{1}{\mu}\sum_k \log(C_k + \mu q_k) \tag{44}$$
where $C_0$ and the $C_k$'s are arbitrary coefficients. Note that $\frac{\partial F}{\partial t} = 0$ corresponds to a limit when the free energy production due to the dynamics of the $x_i$'s is negligible or, in other words, when the training dataset is not dynamical (as is often the case in machine learning).
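The solution (44) can be verified in closed form: each mode contributes $\gamma (C_k + \mu q_k)^{-2}$ from the gradient-squared term and $-\gamma (C_k + \mu q_k)^{-2}$ from the second-derivative term, which cancel. A numerical spot-check with arbitrary (hypothetical) constants:

```python
import numpy as np

mu, gamma = 0.7, 0.3                       # arbitrary positive constants
C0, Ck = 1.2, np.array([2.0, 3.0, 4.0])    # arbitrary coefficients
qk = np.array([0.5, 1.0, 1.5])

# For F = C0 + (1/mu) * sum_k log(C_k + mu q_k):
grad = 1.0 / (Ck + mu * qk)                # dF/dq_k
lap  = -mu / (Ck + mu * qk)**2             # d^2F/dq_k^2
residual = gamma * np.sum(grad**2) + (gamma / mu) * np.sum(lap)
```

The residual vanishes to machine precision, so (44) is indeed a stationary solution.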
The solution (44) has the exact form of the free energy for a canonical ensemble (7),
$$F = \frac{1}{2\beta}\log\det\big((1-\beta m)\hat{I} + \beta\hat{G}\big) - \frac{N}{2\beta}\log(2\pi) = \frac{1}{2\beta}\sum_i \log\big((1-\beta m) + \beta\lambda_i\big) - \frac{N}{2\beta}\log(2\pi), \tag{45}$$
with μ = 2 β and the dynamical variables q i set to the eigenvalues λ i of the operator G ^ . In this limit, the average loss is
$$U = \frac{\partial(\beta F)}{\partial\beta} = \frac{1}{2}\sum_i \frac{\lambda_i}{1+\beta\lambda_i} = \sum_i \lambda_i\frac{\partial F}{\partial\lambda_i}, \tag{46}$$
where, for simplicity, we have set the mass parameter to zero, $m = 0$. This equation can be thought of as a virial theorem for our learning system, where $-\frac{\partial F}{\partial \lambda_i}$ is the “force” acting on a “particle” at position $\lambda_i$. More generally, the eigenvalues $\lambda_i$ could be arbitrary functions of the $q_i$'s and time $t$, and then
$$\frac{\gamma}{\mu}\sum_k\frac{\partial^2 F}{\partial q_k^2} = \frac{\gamma}{\mu}\sum_{i,j,k}\frac{\partial^2 F}{\partial\lambda_i\partial\lambda_j}\frac{\partial\lambda_i}{\partial q_k}\frac{\partial\lambda_j}{\partial q_k} = -\frac{\gamma\beta}{2\mu}\sum_{i,k}\big((1-\beta m)+\beta\lambda_i\big)^{-2}\left(\frac{\partial\lambda_i}{\partial q_k}\right)^2 = -\frac{2\gamma\beta}{\mu}\sum_{i,k}\left(\frac{\partial F}{\partial\lambda_i}\right)^2\left(\frac{\partial\lambda_i}{\partial q_k}\right)^2 = -\frac{2\gamma\beta}{\mu}\sum_{i,j,k}\frac{\partial F}{\partial\lambda_i}\frac{\partial\lambda_i}{\partial q_k}\,\delta_{ij}\,\frac{\partial\lambda_j}{\partial q_k}\frac{\partial F}{\partial\lambda_j} = -\frac{2\gamma\beta}{\mu}\sum_{i,j,k,m,n}\frac{\partial F}{\partial q_m}\frac{\partial q_m}{\partial\lambda_i}\frac{\partial\lambda_i}{\partial q_k}\,\delta_{ij}\,\frac{\partial\lambda_j}{\partial q_k}\frac{\partial q_n}{\partial\lambda_j}\frac{\partial F}{\partial q_n} = -\frac{2\gamma\beta}{\mu}\sum_{i,k,m,n}\frac{\partial F}{\partial q_m}\frac{\partial q_m}{\partial\lambda_i}\left(\frac{\partial\lambda_i}{\partial q_k}\right)^2\frac{\partial q_n}{\partial\lambda_i}\frac{\partial F}{\partial q_n} \tag{47}$$
where we assumed that the matrix $\frac{\partial\lambda_i}{\partial q_j}$ is invertible. This implies that, for the canonical free energy (45), the Hamiltonian function (39) can be written in terms of only first derivatives of Hamilton's principal function $F$,
$$H\left(q_k, \frac{\partial F}{\partial q_k}\right) = V + \gamma\,\frac{\partial F}{\partial q_m}\left(\delta_{mn} - \frac{2\beta}{\mu}\sum_{i,k}\frac{\partial q_m}{\partial\lambda_i}\left(\frac{\partial\lambda_i}{\partial q_k}\right)^2\frac{\partial q_n}{\partial\lambda_i}\right)\frac{\partial F}{\partial q_n}, \tag{48}$$
and, thus, the system is Hamiltonian although the kinetic term may not be canonical.

7. Hidden Variables

We have seen that neural networks can exhibit both quantum (Section 5) and classical (Section 6) behaviors if the dynamics of the trainable variables $q$ (or, equivalently, of the bias vector $b$ and weight matrix $\hat{w}$) is followed explicitly, but the dynamics of the hidden variables (or the state vectors $x$) was expressed only implicitly through $\frac{\partial F}{\partial t}$. For this reason, it was convenient to think of the (classical) state vectors $x$ as hidden random variables whose individual dynamics was shadowed by our statistical description. In this section, we shall instead be interested in the non-equilibrium dynamics of the hidden variables, which is relevant, for example, on time-scales that are much smaller than the thermalization time.
Recall that the state of the individual neurons evolves according to (1) which can be approximated to the leading order as
$$\bar{x}_i^{(0)}(t+1) \approx \big(\hat{f}'_0\big)_{ii}\, w_{ij}\, \bar{x}_j^{(0)}(t) \tag{49}$$
where $\hat{f}'_0 = \hat{f}'$ is the matrix of first derivatives of the activation function (9). More generally, we can consider $D$ non-interacting subsystems of state vectors (e.g., $D$ separate sets of training data), denoted by $x^{(d)}$, where $d = 1, \dots, D$. Subsequently, the overall distribution of the state vectors is in general multimodal with $D$ local maxima, $\bar{x}^{(d)}$, and each of these maxima evolves according to
$$\bar{x}_i^{(d)}(t+1) \approx \big(\hat{f}'_d\big)_{ii}\, w_{ij}\, \bar{x}_j^{(d)}(t) \tag{50}$$
where
$$\bar{x}^{(0)} = \sum_d \bar{x}^{(d)} \tag{51}$$
and
$$\big(\hat{f}'_d\big)_{ii} \equiv \frac{df(y_i)}{dy_i}\bigg|_{y_i = w_{ij}\bar{x}_j^{(d)} + b_i}. \tag{52}$$
It is convenient to define a continuous time coordinate τ such that
$$\frac{\partial \bar{x}_i^{(\mu)}(\tau)}{\partial\tau} = \alpha\Big(\bar{x}_i^{(\mu)}(t+1) - \bar{x}_i^{(\mu)}(t)\Big) \tag{53}$$
where $\mu = 0, 1, \dots, D$ and $\alpha$ is an auxiliary parameter. Although the different subsystems are represented by different hidden variables, the $x^{(d)}$'s, they are all processed by the very same neural network described by the same trainable variables $b$ and $\hat{w}$. In this respect, the hidden variables do not interact directly with each other, but they do interact (minimally) through the trainable variables, $b$ and $\hat{w}$. If such (minimal) interactions are negligible, then $\frac{\partial \bar{x}_i^{(c)}}{\partial \tau} \frac{\partial \bar{x}_i^{(d)}}{\partial \tau} \propto \delta_{cd}$ with no summations over the index $i$. Subsequently,
$$\frac{\partial \bar{x}_i^{(0)}}{\partial\tau}\frac{\partial \bar{x}_i^{(0)}}{\partial\tau} = \sum_d \frac{\partial \bar{x}_i^{(d)}}{\partial\tau}\frac{\partial \bar{x}_i^{(d)}}{\partial\tau} \quad \text{for all } i \tag{54}$$
or
$$\eta_{\mu\nu}\,\frac{\partial \bar{x}_i^{(\mu)}}{\partial\tau}\frac{\partial \bar{x}_i^{(\nu)}}{\partial\tau} = 0 \quad \text{for all } i \tag{55}$$
where $\eta = \mathrm{diag}(-1, 1, \dots, 1)$. However, in general, the minimal interactions cannot be ignored, and then
$$g^{i}_{\mu\nu}\,\frac{\partial \bar{x}^{(\mu)}_i}{\partial\tau}\frac{\partial \bar{x}^{(\nu)}_i}{\partial\tau} = 0 \quad \text{for all } i \tag{56}$$
where the metric tensor $g^{i}_{\mu\nu}$ describes the strength of the interactions. Of course, such a description is only valid if the minimal interactions are weak, which is the assumption that we are going to make.
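One way the no-summation condition $\frac{\partial \bar{x}^{(c)}_i}{\partial\tau}\frac{\partial \bar{x}^{(d)}_i}{\partial\tau} \propto \delta^{cd}$ can be realized is by subsystem velocities with disjoint supports over the neurons; in that case the Minkowski constraint (55) holds exactly for every i. A minimal numerical sketch under this assumption (the sizes N and D and the disjoint-support pattern are hypothetical choices, not part of the construction):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 12, 3
# velocities of D subsystems with disjoint supports over the N neurons,
# so that (dx_i^(c)/dtau)(dx_i^(d)/dtau) vanishes for c != d at every i
v = np.zeros((D, N))
for d in range(D):
    v[d, d * 4:(d + 1) * 4] = rng.random(4)
v0 = v.sum(axis=0)                    # velocity of the overall average x^(0)

eta = np.diag([1.0] + [-1.0] * D)     # emergent Minkowski metric
V = np.vstack([v0, v])                # index mu = 0, 1, ..., D
constraint = np.einsum('mn,mi,ni->i', eta, V, V)
assert np.allclose(constraint, 0)     # holds separately for every neuron i
```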
To estimate the dynamics of the hidden variables $\bar{\mathbf{x}}^{(\mu)}$, we assume that the activation function is linear, $\hat{f}_d = \hat{I}$ (with the slope set to one without loss of generality); then, from (49) and (50), we have
$$\bar{x}^{(\mu)}_i(t+1) \approx w_{ij}\, \bar{x}^{(\mu)}_j(t) \tag{57}$$
and (53) becomes
$$\frac{\partial \bar{x}^{(\mu)}_i}{\partial\tau} \approx \alpha\left(w_{ij} - \delta_{ij}\right)\bar{x}^{(\mu)}_j. \tag{58}$$
According to the second law of learning, it is expected that the neural network must have evolved to a network with a very low complexity, such as a network whose weight matrix is a permutation matrix
$$\hat{w} = \hat{\pi}. \tag{59}$$
For example, consider a permutation matrix with only a single cycle that (up to permutations of elements) is given by
$$\pi_{ij} = \begin{cases} 1 & \text{if } i - 1 = j \pmod{N}\\ 0 & \text{otherwise.} \end{cases} \tag{60}$$
Subsequently, Equation (58) can be rewritten as
$$\frac{\partial \bar{x}^{(\mu)}_i}{\partial\tau} = \alpha\, \bar{x}^{(\mu)}_{i-1\,(\mathrm{mod}\,N)}(t) - \alpha\, \bar{x}^{(\mu)}_i(t). \tag{61}$$
If we take a continuous limit by defining x ¯ ( μ ) ( τ , σ ) , such that
$$\frac{\partial \bar{x}^{(\mu)}(\tau,\sigma)}{\partial\sigma} \equiv \alpha\left(\bar{x}^{(\mu)}_i(t) - \bar{x}^{(\mu)}_{i-1\,(\mathrm{mod}\,N)}(t)\right) \tag{62}$$
then (61) becomes
$$\frac{\partial \bar{x}^{(\mu)}}{\partial\tau} = -\frac{\partial \bar{x}^{(\mu)}}{\partial\sigma}. \tag{63}$$
This equation has a simple solution describing a periodic “right-moving” wave. In the light-cone coordinates $\xi^{\pm} \equiv \tau \pm \sigma$, the equation of motion (63) is
$$\frac{\partial \bar{x}^{(\mu)}}{\partial \xi^{+}} = 0 \tag{64}$$
and the constraint Equation (55) is
$$\eta_{\mu\nu}\,\frac{\partial \bar{x}^{(\mu)}}{\partial \xi^{-}}\frac{\partial \bar{x}^{(\nu)}}{\partial \xi^{-}} = 0. \tag{65}$$
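The single-cycle dynamics above is easy to verify numerically: with a linear activation and $\hat{w} = \hat{\pi}$, one update is a cyclic shift of the state vector, and N updates return it to itself. A minimal sketch (the network size N = 8 and the random initial state are hypothetical choices):

```python
import numpy as np

N = 8
# single-cycle permutation matrix: pi[i, j] = 1 if i - 1 = j (mod N)
pi = np.zeros((N, N))
for i in range(N):
    pi[i, (i - 1) % N] = 1.0

rng = np.random.default_rng(0)
x0 = rng.random(N)

# one update with linear activation and w = pi shifts the state by one
# neuron along the cycle: a discrete "right-moving" wave
assert np.allclose(pi @ x0, np.roll(x0, 1))

# after N updates the single cycle returns the state to itself
x = x0.copy()
for _ in range(N):
    x = pi @ x
assert np.allclose(x, x0)
```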

8. Relativistic Strings

In the last section, we showed that an equation for a “right-moving” wave (64) can emerge in a statistical description of D minimally interacting subsystems of state vectors. A natural question arises: can a “left-moving” wave also emerge in some limit and, if so, can the dynamics be described in terms of relativistic strings in an emergent space-time? To answer this question, we first note that the permutation weight matrix (59) (with an arbitrary number of cycles) satisfies
$$\hat{\pi}^{T}\hat{\pi} = \hat{\pi}\hat{\pi}^{T} = \hat{I} \tag{66}$$
and thus
$$\hat{G}_{\hat{\pi}} \equiv \left(\hat{\pi} - \hat{I}\right)^{T}\left(\hat{\pi} - \hat{I}\right) = \hat{I} - \hat{\pi} - \hat{\pi}^{T} + \hat{\pi}^{T}\hat{\pi} = \hat{G}_{\hat{\pi}^{T}}. \tag{67}$$
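Both identities can be checked numerically for a random permutation; a small sketch (the size N = 7 is a hypothetical choice):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 7
P = np.eye(N)[rng.permutation(N)]    # a random N x N permutation matrix
I = np.eye(N)

assert np.allclose(P.T @ P, I)       # permutation matrices are orthogonal
G  = (P - I).T @ (P - I)             # G for weight matrix P
Gt = (P.T - I).T @ (P.T - I)         # G for weight matrix P^T
assert np.allclose(G, 2 * I - P - P.T)
assert np.allclose(G, Gt)            # both weight matrices give the same G
```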
Because the free energy (45) depends on $\hat{\pi}$ only through $\hat{G}$, the very same ensemble of state vectors can equally likely evolve either towards $\hat{\pi}$ or towards $\hat{\pi}^{T}$. However, if the exact state of the microscopic weight matrix is unknown, then one must consider an ensemble that contains both options, and then the average state vector is given by
$$\bar{x}^{(\mu)}_i = \frac{1}{2}\int d^{N}x^{(\mu)}\, p\!\left(x^{(\mu)}, \hat{\pi}\right) x^{(\mu)}_i + \frac{1}{2}\int d^{N}x^{(\mu)}\, p\!\left(x^{(\mu)}, \hat{\pi}^{T}\right) x^{(\mu)}_i \equiv \frac{1}{2}\bar{x}^{(\mu-)}_i + \frac{1}{2}\bar{x}^{(\mu+)}_i \tag{68}$$
where the two terms represent statistical averages with respect to the two distributions.
Following the analysis of the previous section, the dynamics of $\bar{x}^{(\mu-)}_i$ and $\bar{x}^{(\mu+)}_i$ can be obtained from (58) for the respective weight matrices,
$$\frac{\partial \bar{x}^{(\mu-)}_i}{\partial\tau} \approx \alpha\left(\pi_{ij} - \delta_{ij}\right)\bar{x}^{(\mu-)}_j \tag{69}$$
$$\frac{\partial \bar{x}^{(\mu+)}_i}{\partial\tau} \approx \alpha\left(\pi^{T}_{ij} - \delta_{ij}\right)\bar{x}^{(\mu+)}_j. \tag{70}$$
In a continuum limit the equations are given by
$$\frac{\partial \bar{x}^{(\mu-)}}{\partial\tau} = -\frac{\partial \bar{x}^{(\mu-)}}{\partial\sigma} \tag{71}$$
$$\frac{\partial \bar{x}^{(\mu+)}}{\partial\tau} = +\frac{\partial \bar{x}^{(\mu+)}}{\partial\sigma} \tag{72}$$
whose solutions represent, respectively, the right- and left-moving waves. Then the dynamics of the hidden variables (68) is indeed given by a 1 + 1 dimensional wave equation
$$\frac{\partial^{2}}{\partial\tau^{2}}\bar{x}^{(\mu)}(\tau,\sigma) = \frac{\partial^{2}}{\partial\sigma^{2}}\bar{x}^{(\mu)}(\tau,\sigma). \tag{73}$$
In the light-cone coordinates, the wave equation is
$$\frac{\partial}{\partial\xi^{-}}\frac{\partial}{\partial\xi^{+}}\bar{x}^{(\mu)}(\tau,\sigma) = 0 \tag{74}$$
and the constraints
$$\eta_{\mu\nu}\,\frac{\partial \bar{x}^{(\mu)}}{\partial\xi^{-}}\frac{\partial \bar{x}^{(\nu)}}{\partial\xi^{-}} = \eta_{\mu\nu}\,\frac{\partial \bar{x}^{(\mu)}}{\partial\xi^{+}}\frac{\partial \bar{x}^{(\nu)}}{\partial\xi^{+}} = 0. \tag{75}$$
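On the discrete ring, any superposition of a right-mover and a left-mover solves the discrete wave equation exactly, which is what the continuum wave Equation (73) captures. A small numerical sketch (the ring size and the random wave profiles are hypothetical choices):

```python
import numpy as np

N = 16
rng = np.random.default_rng(2)
r, l = rng.random(N), rng.random(N)

def x(t):
    # equal-weight superposition of a right-mover and a left-mover
    return 0.5 * (np.roll(r, t) + np.roll(l, -t))

t = 3
d2_tau   = x(t + 1) - 2 * x(t) + x(t - 1)                    # second time difference
d2_sigma = np.roll(x(t), -1) - 2 * x(t) + np.roll(x(t), 1)   # second space difference
assert np.allclose(d2_tau, d2_sigma)                         # discrete wave equation
```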
The action that gives rise to the wave Equation (74) and the constraints (75) is the Polyakov action, which can be written in a covariant form as
$$A = \int d\sigma\, d\tau\, \sqrt{-h}\, h^{ab}\, \eta_{\mu\nu}\, \frac{\partial \bar{x}^{(\mu)}}{\partial\xi^{a}}\frac{\partial \bar{x}^{(\nu)}}{\partial\xi^{b}} \tag{76}$$
where $h^{ab}$ is the world-sheet metric and $h$ is its determinant.
In summary, we showed that D non-interacting subsystems of state vectors $\mathbf{x}^{(d)}$ can be described by D + 1 scalar fields in 1 + 1 dimensions. Alternatively, one can view the configuration space of the scalar fields as an emergent space-time, and then our system can be described by the motion of relativistic strings in D + 1 dimensions (76). This is very similar to what is usually done in string theory, with one major difference: our strings arise from the dynamics of the average state vectors $\bar{\mathbf{x}}^{(\mu)}$, not from the dynamics of the bias vector b and weight matrix ŵ, which undergo learning. Recall that the trainable variables b and ŵ (or, equivalently, q) can be modeled near equilibrium by quantum mechanics (Section 5) and further away from equilibrium by classical mechanics (Section 6). In contrast, the state vectors $\bar{\mathbf{x}}^{(\mu)}$ represent hidden variables of the quantum theory, but their dynamics (in certain limits) is conveniently described by relativistic strings.

9. Emergent Gravity

In Section 7, we showed that interactions between D subsystems can be described by Equation (56), but up until now the analysis was restricted to $g^{i}_{\mu\nu} = \eta_{\mu\nu}$. In this section, we generalize the construction to more general metric tensors $g^{i}_{\mu\nu}$, which are functions of the discrete parameter i and, consequently, functions on the emergent space-time. Moreover, we shall not make any simplifying assumptions about the operator $\hat{G}$ or, equivalently, about the weight matrix $\hat{w}$ and the activation map $\hat{f}_d$. Then, by following the procedure of the previous sections, we arrive at a discrete action for the hidden variables (or state vectors),
$$A = \sum_{i} g^{i}_{\mu\nu}\left(\alpha^{2}\left\langle x^{(\mu)}_i\, G_{ij}\, x^{(\nu)}_j\right\rangle_{x} + \frac{d\bar{x}^{(\mu)}_i}{d\tau}\frac{d\bar{x}^{(\nu)}_i}{d\tau}\right) \tag{77}$$
with the corresponding equation of motion
$$g^{i}_{\mu\nu}\left(\frac{\partial^{2}}{\partial\tau^{2}}\bar{x}^{(\nu)}_i - \alpha^{2}\, G_{ij}\, \bar{x}^{(\nu)}_j\right) = 0. \tag{78}$$
The equation of motion (78) and the corresponding action (77) can be considered as generalizations of the wave Equation (73) and of the string action (76), respectively. Nevertheless, the string action is recovered in the limit of a flat target space, $g^{i}_{\mu\nu} = \eta_{\mu\nu}$, for a permutation weight matrix, $\hat{w} = \hat{\pi}$, and for a linear activation function, $\hat{f}_d = \hat{I}$.
To study the dynamics in the emergent space-time, it is convenient to rewrite (77) as
$$A = \int d^{D+1}X\, \sqrt{-g}\, g_{\mu\nu}\, T^{\mu\nu} \tag{79}$$
where $g$ is the determinant of $g_{\mu\nu}$ and
$$\sqrt{-g}\, T^{\mu\nu} \equiv \sum_{i}\left(\alpha^{2}\left\langle x^{(\mu)}_i\, G_{ij}\, x^{(\nu)}_j\right\rangle_{x} + \frac{d\bar{x}^{(\mu)}_i}{d\tau}\frac{d\bar{x}^{(\nu)}_i}{d\tau}\right)\prod_{\alpha}\delta\!\left(X^{\alpha} - \bar{x}^{\alpha}_i\right) \tag{80}$$
is the energy-momentum tensor density. The equilibrium dynamics of neural networks was first modeled using the principle of maximum entropy with a constraint imposed on the loss function [16], but to study the non-equilibrium dynamics of the trainable variables, the principle of stationary entropy production had to be used, with a constraint imposed on the dynamics of the free energy (25). In this section, we study the non-equilibrium dynamics of the hidden variables, and so the constraint should be imposed on the action that describes the dynamics of the state vectors (79). Then, according to the principle of stationary entropy production, the quantity that must be extremized is
$$S_{x}[g] = \int d^{D+1}X\, \sqrt{-g}\, R(g) + \kappa\left(\mathcal{A} - \int d^{D+1}X\, \sqrt{-g}\, g_{\mu\nu} T^{\mu\nu}\right) \tag{81}$$
where $\sqrt{-g}\, R(g)$ is the local entropy production density, $\kappa$ is a Lagrange multiplier, and $\mathcal{A}$ is a constant that represents the average of $A$. Note that the energy-momentum tensor density (80) does not depend on the metric, and so varying the corresponding term in (81) with respect to the metric produces the desired result
$$\frac{\delta}{\delta g^{\alpha\beta}}\int d^{D+1}X\, \sqrt{-g}\, g_{\mu\nu} T^{\mu\nu} = \sqrt{-g}\, T_{\alpha\beta}. \tag{82}$$
However, if we are not following the microscopic dynamics of all of the elements of the bias vector and weight matrix, then it is more useful to define
$$\mathcal{L}_{M}(g, \mathbf{Q}) \equiv -\left\langle g_{\mu\nu} T^{\mu\nu}\right\rangle_{\mathbf{q}} \tag{83}$$
where Q represents the trainable variables in q (or, equivalently, in b and ŵ) that were not averaged over. Then, the action (81) can be written as
$$S_{x}[g, \mathbf{Q}] = \int d^{D+1}X\, \sqrt{-g}\left(R(g) + \kappa\, \mathcal{L}_{M}(g, \mathbf{Q})\right) + \kappa\mathcal{A} \tag{84}$$
where $\mathcal{L}_{M}(g, \mathbf{Q})$ plays the role of the “matter” Lagrangian, and then the energy-momentum tensor should be defined as
$$\sqrt{-g}\, T_{\alpha\beta} \equiv -\frac{\delta}{\delta g^{\alpha\beta}}\int d^{D+1}X\, \sqrt{-g}\, \mathcal{L}_{M}(g, \mathbf{Q}). \tag{85}$$
The parameter κ is a Lagrange multiplier which imposes a “global” constraint
$$\frac{\delta S_{x}[g, \mathbf{Q}]}{\delta \kappa} = \mathcal{A} + \int d^{D+1}X\, \sqrt{-g}\, \mathcal{L}_{M}(g, \mathbf{Q}) = 0 \tag{86}$$
but one can also impose the constraint “locally” by demanding that
$$\mathcal{A} = -\frac{2}{\kappa}\int d^{D+1}X\, \sqrt{-g}\, \Lambda \tag{87}$$
and then the total action becomes
$$S_{x}[g, \mathbf{Q}] = \int d^{D+1}X\, \sqrt{-g}\left(R(g) - 2\Lambda + \kappa\, \mathcal{L}_{M}(g, \mathbf{Q})\right) \tag{88}$$
where Λ is the “cosmological constant”.
Recall that the deviations of the metric $g_{\mu\nu}(X)$ (or $g^{i}_{\mu\nu}$) from the flat metric $\eta_{\mu\nu}$ represent local interactions between subsystems (56). Therefore, if our system is in the process of equilibration, then the entropy production should be a local function of the metric tensor. Using a phenomenological approach due to Onsager [23], we can expand the entropy production around equilibrium [24],
$$\sqrt{-g}\, R = \sqrt{-g}\, L^{\mu\nu\alpha\beta\gamma\delta}\, g_{\alpha\beta,\mu}\, g_{\gamma\delta,\nu}$$
where
$$g_{\alpha\beta,\mu} \equiv \frac{\partial g_{\alpha\beta}}{\partial X^{\mu}}$$
and $\sqrt{-g}\, L^{\mu\nu\alpha\beta\gamma\delta}$ is the Onsager tensor density. The overall space of such tensors is rather large, but it turns out that a very simple and highly symmetric choice leads to general relativity:
$$\sqrt{-g}\, L^{\mu\nu\alpha\beta\gamma\delta} = \frac{\sqrt{-g}}{4}\left(2 g^{\alpha\gamma} g^{\beta\nu} g^{\mu\delta} - g^{\alpha\gamma} g^{\beta\delta} g^{\mu\nu} - g^{\alpha\beta} g^{\gamma\delta} g^{\mu\nu}\right). \tag{93}$$
After integrating by parts, neglecting boundary terms and collecting all other terms, we get
$$\int d^{D+1}X\, \sqrt{-g}\, R = \int d^{D+1}X\, \sqrt{-g}\, g^{\mu\nu}\left(2\Gamma^{\alpha}_{\ \nu[\mu,\alpha]} + 2\Gamma^{\beta}_{\ \nu[\mu}\Gamma^{\alpha}_{\ \alpha]\beta}\right) = \int d^{D+1}X\, \frac{\sqrt{-g}}{4}\left(2 g^{\alpha\gamma} g^{\beta\nu} g^{\mu\delta} - g^{\alpha\gamma} g^{\beta\delta} g^{\mu\nu} - g^{\alpha\beta} g^{\gamma\delta} g^{\mu\nu}\right) g_{\alpha\beta,\mu}\, g_{\gamma\delta,\nu}$$
where
$$\Gamma^{\mu}_{\ \gamma\delta} \equiv \frac{1}{2}\, g^{\mu\nu}\left(g_{\nu\gamma,\delta} + g_{\nu\delta,\gamma} - g_{\gamma\delta,\nu}\right)$$
and
$$\Gamma^{\alpha}_{\ \mu\nu,\beta} \equiv \frac{\partial}{\partial X^{\beta}}\,\Gamma^{\alpha}_{\ \mu\nu}.$$
Thus, upon varying (88) with respect to the metric, we get the Einstein equations
$$R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = \kappa\, T_{\mu\nu} \tag{95}$$
where the Ricci tensor is defined as usual
$$R_{\mu\nu} \equiv 2\Gamma^{\alpha}_{\ \nu[\mu,\alpha]} + 2\Gamma^{\beta}_{\ \nu[\mu}\Gamma^{\alpha}_{\ \alpha]\beta}. \tag{96}$$
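As a consistency check, the Christoffel and Ricci definitions above can be evaluated symbolically on a known example; a sketch using sympy for the metric of a 2-sphere of radius a (this toy metric is an illustration only, not part of the construction in the text):

```python
import sympy as sp

# Christoffel symbols and Ricci tensor from the standard definitions above,
# checked on the round metric of a 2-sphere of radius a
theta, phi, a = sp.symbols('theta phi a', positive=True)
X = [theta, phi]
g = sp.diag(a**2, a**2 * sp.sin(theta)**2)
ginv = g.inv()
n = 2

def Gamma(m, i, j):
    # Gamma^m_{ij} = (1/2) g^{mk} (g_{ki,j} + g_{kj,i} - g_{ij,k})
    return sum(sp.Rational(1, 2) * ginv[m, k] *
               (sp.diff(g[k, i], X[j]) + sp.diff(g[k, j], X[i]) - sp.diff(g[i, j], X[k]))
               for k in range(n))

def Ricci(i, j):
    # R_{ij} = d_m Gamma^m_{ij} - d_j Gamma^m_{im}
    #          + Gamma^m_{mk} Gamma^k_{ij} - Gamma^m_{jk} Gamma^k_{im}
    r = sum(sp.diff(Gamma(m, i, j), X[m]) - sp.diff(Gamma(m, i, m), X[j]) for m in range(n))
    r += sum(Gamma(m, m, k) * Gamma(k, i, j) - Gamma(m, j, k) * Gamma(k, i, m)
             for m in range(n) for k in range(n))
    return sp.simplify(r)

R = sp.simplify(sum(ginv[i, j] * Ricci(i, j) for i in range(n) for j in range(n)))
assert sp.simplify(R - 2 / a**2) == 0    # known Ricci scalar of a 2-sphere
```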
Note that, according to definition (93), the Onsager tensor need not be positive definite; this would be inconsistent with the second law of thermodynamics, but it is permitted by the second law of learning.
It is important to highlight that the Einstein Equations (95) were obtained from a particular form of the Onsager tensor (93), which is very simple and also highly symmetric. In this respect the result is phenomenological, but one might wonder whether the symmetries of the Onsager tensor can also be derived from first principles. Moreover, since neural networks can exhibit an approximate behavior described by quantum mechanics (see Section 5), it would be interesting to see if the symmetries of quantum field theories (such as the standard model) might also emerge from the learning dynamics of a microscopic neural network. Note that the emergence of symmetries is extremely important not only for modeling physical systems with neural networks, but also for designing more efficient artificial neural networks (see Ref. [16]). In this paper, we only considered the emergence of the quantum phase (i.e., $U(1)$ symmetry in Section 5) and the emergence of space-time (i.e., $SO(1, D)$ symmetry in Section 8), and we leave the emergence of more general symmetries for future studies.

10. Holography

In the preceding sections, we applied the principle of stationary entropy production to study the dynamics of neural networks in two different limits. In the first limit, the trainable variables q were treated stochastically, but their dynamics was constrained by the hidden variables x through the free energy, F. The resulting dynamics of the system was shown to exhibit quantum and classical behaviors described by the functional $S_q[p, F]$ (see (28)). In the second limit, the hidden variables x were treated stochastically, but their dynamics was constrained by the trainable variables q through the action, A. The resulting dynamics of the system was shown to exhibit a behavior described by the action of a gravitational metric theory, such as general relativity, $S_x[g, \mathbf{Q}]$ (see (88)). The two limits are certainly very different: the “gravitational” theory describes very sparse and deep neural networks, while in the “quantum” theory the network can be very dense and shallow. However, one might wonder whether it is possible to map the sparse and deep neural network to the dense and shallow neural network without losing the ability of the neural network to learn. If the answer is affirmative, then this would imply that the two descriptions (quantum and gravitational, or dense and sparse, or shallow and deep) are dual, and either one can be used to describe the learning dynamics.
In this section, we shall explore the idea that the duality not only exists, but is also holographic, in the sense that the degrees of freedom of the gravitational theory, i.e., $\mathbf{x}$, $\mathbf{b}$ and $\hat{w}$, can be mapped to only the boundary degrees of freedom of the quantum theory, i.e., $\mathbf{x}'$, $\mathbf{b}'$ and $\hat{w}'$. The non-equilibrium dynamics of both systems is governed by the principle of stationary entropy production, and to justify such a mapping the entropy production of the gravitational system $\Delta S_x$ should correspond to the entropy production of the quantum system $\Delta S_q$. Roughly speaking, this means that the uncertainty in the positions of neurons in the bulk, $\mathbf{x}$, should correspond to the uncertainty in the values of the quantum variables on the boundary, i.e., $\mathbf{b}'$ and $\hat{w}'$. For example, consider a mapping that is defined by
$$\mathbf{x}' = \hat{P}\,\mathbf{x} \tag{97}$$
$$\mathbf{b}' = \hat{P}\,\mathbf{b} \tag{98}$$
$$\hat{w}'(\epsilon) = \hat{P}\, \epsilon\hat{w}\left(\hat{I} - \epsilon\hat{w}\right)^{-1}\hat{P}^{T} \tag{99}$$
where $\hat{P}$ projects onto the boundary neurons.
In a microscopic picture, the gravitational system consists of long chains of neurons (see Section 7) connecting different pairs of boundary neurons, i and j, and the length of these chains is encoded in the elements of the boundary weight matrix,
$$d(i, j) = \log_{\epsilon}\left(w'_{ij}(\epsilon)\right) - 1. \tag{100}$$
The smaller the element $w'_{ij}$, the larger the number of intermediate bulk neurons connecting i to j. Whenever any two chains of neurons i-j and k-l have a chance of intersecting and forming two other chains of neurons i-l and k-j, the entropy of the bulk theory changes. On the other side of the duality, the same event can lead the corresponding elements $w'_{ij}$, $w'_{kl}$, $w'_{kj}$ and $w'_{il}$ to change or, in other words, to entropy production in the boundary theory. Thus, it is not too unreasonable to expect that the entropy productions in the two systems are related.
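The claim that the boundary weights encode chain lengths can be illustrated on a single bulk chain; a minimal sketch (the chain size and the value of ε are hypothetical, and the geometric series $\epsilon\hat{w} + (\epsilon\hat{w})^2 + \dots = \epsilon\hat{w}(\hat{I} - \epsilon\hat{w})^{-1}$ sums over all bulk paths):

```python
import numpy as np

eps = 0.1
Nb = 6                          # bulk neurons forming a chain 0 -> 1 -> ... -> 5
w = np.zeros((Nb, Nb))
for a in range(Nb - 1):
    w[a + 1, a] = 1.0           # one edge from neuron a to neuron a + 1

# effective weights: geometric sum over all bulk paths,
# eps*w + (eps*w)^2 + ... = eps*w (I - eps*w)^{-1}
w_eff = eps * w @ np.linalg.inv(np.eye(Nb) - eps * w)

# d(0, 5) = log_eps w'_{50} - 1 counts the intermediate bulk neurons
d = np.log(w_eff[5, 0]) / np.log(eps) - 1
assert np.isclose(d, 4)         # neurons 1, 2, 3, 4 lie between 0 and 5
```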
The holographic duality can be formulated more precisely by considering the action functionals that determine the dynamics in both theories. In the boundary theory, the action $S_q[p(\mathbf{q}), F(\mathbf{q})]$ is given by Equation (28), and in the bulk theory the action $S_x[g(X), \mathbf{Q}(X)]$ is given by Equation (88). For the two systems to be dual, the two actions must be proportional,
$$S_{x}[g(X), \mathbf{Q}(X)] \propto S_{q}\left[p(\mathbf{q}), F(\mathbf{q})\right], \tag{101}$$
or, using (28) and (88),
$$\int d^{D+1}X\, \sqrt{-g}\left(R(g) - 2\Lambda + \kappa\, \mathcal{L}_{M}(g, \mathbf{Q})\right) \propto \int dt\, d^{K}q\; p\left(\frac{4D^{2}}{\sqrt{p}}\frac{\partial^{2}\sqrt{p}}{\partial q_k^{2}} + \boxed{\gamma\, \frac{\partial^{2} F}{\partial q_k^{2}}} + \mu\frac{\partial F}{\partial t} + \mu\gamma\left(\frac{\partial F}{\partial q_k}\right)^{2} + \mu V\right). \tag{102}$$
The left-hand side describes the bulk gravitational theory, the right-hand side describes the boundary theory, and the duality transformation is nothing but a change of variables between $g, \mathbf{Q}$ and $p, F$. Note, however, that the boundary theory can only be approximated by quantum mechanics in the limit when the entropy production due to learning (i.e., the quantity in the box in (102)) is subdominant. Therefore, the holography described by (101) should be considered more general than the holography discussed, for example, in the context of the AdS/CFT correspondence, where the CFT side is quantum and the AdS side is gravitational.

11. Discussion

In this paper, we discussed the possibility that the entire universe on its most fundamental level is a neural network. This is a very bold claim. We are not just saying that artificial neural networks can be useful for analyzing physical systems [25] or for discovering physical laws [26]; we are saying that this is how the world around us actually works. In this respect, it could be considered a proposal for a theory of everything, and as such it should be easy to prove wrong. All that is needed is to find a physical phenomenon which cannot be described by neural networks. Unfortunately (or fortunately), that is easier said than done. It turns out that the dynamics of neural networks is so complex that one can only understand it in very specific limits. The main objective of this paper was to describe the behavior of neural networks in the limits when the relevant degrees of freedom (such as bias vector, weight matrix, state vector of neurons) can be modeled as stochastic variables that undergo a learning evolution. In this section, we briefly discuss the main results and their implications for a possible emergence of quantum mechanics, general relativity, and macroscopic observers from a microscopic neural network.
Emergent quantum mechanics is a relatively new [27,28], but rapidly evolving field [20,29,30,31,32,33], which is based on a set of very old ideas dating back to the works of de Broglie and Bohm. The de Broglie–Bohm theory (also known as pilot wave theory or Bohmian mechanics) was originally formulated in terms of non-local hidden variables [12], which makes it an easy target. The main new insight is that quantum mechanics may not be a fundamental theory, but only a mathematical tool that allows one to carry out statistical calculations in certain dynamical systems. If correct, then one should be able to derive all of the essential ingredients (complex wave-function, Schrödinger equation, etc.) from first principles. In this paper, we did exactly that for a dynamical system of a neural network which contains two different types of degrees of freedom: trainable (e.g., bias vector and weight matrix) and hidden (e.g., state vector of neurons). What we showed is that the dynamics of the trainable variables near equilibrium is described by Madelung (or, equivalently, Schrödinger) equations with free energy (for a canonical ensemble of hidden variables) representing the quantum phase (see Section 5), and further away from equilibrium their dynamics is described by Hamilton–Jacobi equations with free energy representing the Hamilton’s principal function (see Section 6). This demonstrates that neural networks can indeed exhibit emergent quantum and also classical behaviors. It is important to emphasize that the learning dynamics was essential; the stochastic dynamics alone would not have produced the desired result.
Emergent (or entropic) gravity is also a relatively new field [7,8,9], but it is far less clear if or when progress is being made. The main problem is that emergent gravity is not just about gravity, but is also about emergent space [17,34,35,36], emergent Lorentz invariance [37,38,39,40,41], emergent general relativity [24,42,43], etc. Quite remarkably, neural networks open up a new avenue to address all of these problems in the context of the learning dynamics. It turns out that a dynamical space-time can indeed emerge from a non-equilibrium evolution of the hidden variables (i.e., the state vector of neurons) in a manner that is very similar to string theory. In particular, if one considers D minimally-interacting (through bias vector and weight matrix) subsystems with average state vectors, $\bar{\mathbf{x}}_1$, …, $\bar{\mathbf{x}}_D$ (and the total average state vector $\bar{\mathbf{x}}_0$), then the dynamics of $\bar{\mathbf{x}}_\mu$ can be modeled with relativistic strings in an emergent D + 1 dimensional space-time (see Section 7 and Section 8), and if the interactions are described by a metric tensor, then the dynamics can be modeled with Einstein equations (see Section 9). Once again, not only stochastic, but also learning dynamics was essential for the equilibration of the emergent space-time to exhibit the behavior of a gravitational theory such as general relativity. This demonstrates that the dynamics of a neural network in the appropriate limits can be approximated by both emergent quantum mechanics and emergent general relativity, but the two limits are very different. The gravitational theory describes very sparse and deep neural networks, while in the quantum theory the neural network can be very dense and shallow. However, it is possible that there exists a holographic duality mapping the bulk neurons of the deep and sparse network to the boundary neurons of the shallow and dense network (see Section 10).
We now come to one of the most controversial questions: how can macroscopic observers emerge in a physical system? The question is extremely important not only for settling some philosophical debates, but for understanding the results of real physical experiments [12] and cosmological observations [13]. As was already mentioned, our current understanding of fundamental physics does not allow us to formulate a self-consistent and paradox-free definition of observers, and the possibility that observers are an emergent phenomenon is certainly worth considering. Indeed, if both quantum mechanics and general relativity are not fundamental, but emergent phenomena, then why can macroscopic observers not also emerge in some way from a microscopic neural network? Of course, this is a much more difficult task and we are not going to resolve it completely, but we shall mention an old idea that might be relevant here: the principle of natural selection. We are not talking about cosmological natural selection [44], but about the good old biological natural selection [45], although the two might actually be related. Indeed, if the entire universe is a neural network, then something like natural selection might be happening on all scales, from cosmological ($>10^{+15}$ m) and biological ($10^{+2}$ to $10^{-6}$ m) all the way to subatomic ($<10^{-15}$ m) scales. The main idea is that some local structures (or architectures) of neural networks are more stable against external perturbations (i.e., interactions with the rest of the network) than other local structures. As a result, the more stable structures are more likely to survive and the less stable structures are more likely to be exterminated. There is no reason to expect that this process might stop at a fixed time or might be confined to a fixed scale, and so the evolution must continue indefinitely and on all scales.
We have already seen that, on the smallest scales, the learning evolution is likely to produce structures of very low complexity (i.e., the second law of learning), such as one-dimensional chains of neurons, but this might just be the beginning. As the learning progresses, these chains can chop off loops, form junctions, and, according to natural selection, the more stable structures would survive. If correct, then what we now call atoms and particles might actually be the outcomes of a long evolution starting from some very low complexity structures, and what we now call macroscopic observers and biological cells might be the outcome of an even longer evolution. Of course, at present, the claim that natural selection may be relevant on all scales is very speculative, but it seems that neural networks do offer an interesting new perspective on the problem of observers.

Funding

This research received no external funding.

Acknowledgments

This work was supported in part by the Foundational Questions Institute (FQXi).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Witten, E. Anti-de Sitter space and holography. Adv. Theor. Math. Phys. 1998, 2, 253.
  2. Susskind, L. The World as a hologram. J. Math. Phys. 1995, 36, 6377.
  3. Maldacena, J.M. The Large N limit of superconformal field theories and supergravity. Int. J. Theor. Phys. 1999, 38, 1113.
  4. Ashtekar, A. New Variables for Classical and Quantum Gravity. Phys. Rev. Lett. 1986, 57, 2244–2247.
  5. Rovelli, C.; Smolin, L. Loop Space Representation of Quantum General Relativity. Nucl. Phys. B 1990, 331, 80.
  6. Ashtekar, A.; Bojowald, M.; Lewandowski, J. Mathematical structure of loop quantum cosmology. Adv. Theor. Math. Phys. 2003, 7, 233–268.
  7. Jacobson, T. Thermodynamics of space-time: The Einstein equation of state. Phys. Rev. Lett. 1995, 75, 1260.
  8. Padmanabhan, T. Thermodynamical Aspects of Gravity: New insights. Rep. Prog. Phys. 2010, 73, 046901.
  9. Verlinde, E.P. On the Origin of Gravity and the Laws of Newton. J. High Energy Phys. 2011, 1104, 029.
  10. Everett, H. Relative State Formulation of Quantum Mechanics. Rev. Mod. Phys. 1957, 29, 454–462.
  11. Bohm, D. A Suggested Interpretation of the Quantum Theory in Terms of ’Hidden Variables’ I. Phys. Rev. 1952, 85, 166–179.
  12. Bell, J. On the Einstein Podolsky Rosen Paradox. Physics 1964, 1, 195–200.
  13. Vanchurin, V.; Vilenkin, A.; Winitzki, S. Predictability crisis in inflationary cosmology and its resolution. Phys. Rev. D 2000, 61, 083507.
  14. Dvali, G. Black Holes as Brains: Neural Networks with Area Law Entropy. Fortsch. Phys. 2018, 66, 1800007.
  15. Hashimoto, K.; Sugishita, S.; Tanaka, A.; Tomiya, A. Deep learning and the AdS/CFT correspondence. Phys. Rev. D 2018, 98, 046019.
  16. Vanchurin, V. Towards a theory of machine learning. arXiv 2020, arXiv:2004.09280.
  17. Vanchurin, V. Information Graph Flow: A geometric approximation of quantum and statistical systems. Found. Phys. 2018, 48, 636.
  18. Jaynes, E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957, 106, 620–630.
  19. Jaynes, E.T. Information Theory and Statistical Mechanics II. Phys. Rev. 1957, 108, 171–190.
  20. Vanchurin, V. Entropic Mechanics: Towards a stochastic description of quantum mechanics. Found. Phys. 2019, 50, 40.
  21. Prigogine, I. Etude Thermodynamique des phénoménes irréversibles. Bull. Acad. Roy. Belg. Cl. Sci. 1945, 31, 600–606.
  22. Klein, M.J.; Meijer, P.H.E. Principle of minimum entropy production. Phys. Rev. 1954, 96, 250–255.
  23. Onsager, L. Reciprocal relations in irreversible processes, I. Phys. Rev. 1931, 37, 405–426.
  24. Vanchurin, V. Covariant Information Theory and Emergent Gravity. Int. J. Mod. Phys. A 2018, 33, 1845019.
  25. Carleo, G.; Cirac, I.; Cranmer, K.; Daudet, L.; Schuld, M.; Tishby, N.; Vogt-Maranto, L.; Zdeborova, L. Machine learning and the physical sciences. Rev. Mod. Phys. 2019, 91, 045002.
  26. Wu, T.; Tegmark, M. Toward an artificial intelligence physicist for unsupervised learning. Phys. Rev. E 2019, 100, 033311.
  27. Adler, S. Quantum Theory as an Emergent Phenomenon; Cambridge University Press: Cambridge, UK, 2004.
  28. ’t Hooft, G. Emergent Quantum Mechanics and Emergent Symmetries. AIP Conf. Proc. 2007, 957, 154–163.
  29. Blasone, M.; Jizba, P.; Scardigli, F. Can quantum mechanics be an emergent phenomenon? J. Phys. Conf. Ser. 2009, 174, 012034.
  30. Grossing, G.; Fussy, S.; Mesa Pascasio, J.; Schwabl, H. The Quantum as an Emergent System. J. Phys. Conf. Ser. 2012, 361, 012008.
  31. Acosta, D.; de Cordoba, P.F.; Isidro, J.M.; Santander, J.L.G. Emergent quantum mechanics as a classical, irreversible thermodynamics. Int. J. Geom. Meth. Mod. Phys. 2013, 10, 1350007.
  32. Fernandez De Cordoba, P.; Isidro, J.M.; Perea, M.H. Emergent quantum mechanics as a thermal ensemble. Int. J. Geom. Meth. Mod. Phys. 2014, 11, 1450068.
  33. Caticha, A. Entropic Dynamics: Quantum Mechanics from Entropy and Information Geometry. Annalen Phys. 2019, 531, 1700408.
  34. Swingle, B. Entanglement Renormalization and Holography. Phys. Rev. D 2012, 86, 065007.
  35. Almheiri, A.; Dong, X.; Harlow, D. Bulk Locality and Quantum Error Correction in AdS/CFT. J. High Energy Phys. 2015, 1504, 163.
  36. Cao, C.; Carroll, S.M.; Michalakis, S. Space from Hilbert Space: Recovering Geometry from Bulk Entanglement? Phys. Rev. D 2017, 95, 024031.
  37. Laughlin, R.B. Emergent relativity. Int. J. Mod. Phys. A 2003, 18, 831–854.
  38. Bednik, G.; Pujolas, O.; Sibiryakov, S. Emergent Lorentz invariance from Strong Dynamics: Holographic examples. J. High Energy Phys. 2013, 11, 064.
  39. Vanchurin, V. A quantum-classical duality and emergent space-time. In Proceedings of the 10th Mathematical Physics Meeting, 2019; pp. 347–366.
  40. Vanchurin, V. Differential equation for partition functions and a duality pseudo-forest. arXiv 2019, arXiv:1910.11268.
  41. Vanchurin, V. Dual Path Integral: A non-perturbative approach to strong coupling. arXiv 2019, arXiv:1912.09265.
  42. Barcelo, C.; Visser, M.; Liberati, S. Einstein gravity as an emergent phenomenon? Int. J. Mod. Phys. D 2001, 10, 799–806.
  43. Cao, C.; Carroll, S.M. Bulk entanglement gravity without a boundary: Towards finding Einstein’s equation in Hilbert space. Phys. Rev. D 2018, 97, 086003.
  44. Smolin, L. Did the Universe Evolve? Class. Quantum Gravity 1992, 9, 173–191.
  45. Darwin, C. On the Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life; Harvard University Press: Cambridge, MA, USA, 1859.
Vanchurin, V. The World as a Neural Network. Entropy 2020, 22, 1210. https://doi.org/10.3390/e22111210