Time-dependent variational principle for open quantum systems with artificial neural networks [Blogpost]
Descriptions of Open Quantum Systems (OQS)
In a quantum mechanical description, the state of a system is given by a wave function if the system is perfectly isolated, or, more generally, by a density matrix if the system is open, i.e. coupled to an environment whose degrees of freedom are not tracked.
The number of parameters needed to store such a state exactly grows exponentially with the system size, and this scaling is even worse for the density matrix than for the wave function: for N spin-1/2 particles the wave function contains 2^N complex amplitudes, while the density matrix contains 4^N entries.
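As a rough back-of-the-envelope illustration (not taken from the original post), the naive storage cost blows up as follows:

```python
# Naive memory cost of storing a state of N spin-1/2 particles exactly,
# assuming 16 bytes per complex double-precision entry.
for n in [10, 20, 30, 40]:
    psi_entries = 2 ** n          # wave function amplitudes
    rho_entries = 4 ** n          # density matrix entries
    print(f"N={n:2d}:  |psi> ~ {16 * psi_entries / 1e9:.3g} GB,"
          f"  rho ~ {16 * rho_entries / 1e9:.3g} GB")
```

Already at 40 spins the density matrix would require roughly 10^16 GB of memory, which is why a compressed parameterization is unavoidable.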
At the same time, the quantum states relevant for many applications of interest carry a lot of internal structure, which can be exploited to parameterize them efficiently. The question of what constitutes a good parameterization scheme has attracted a lot of research in recent years, especially since it was discovered that Artificial Neural Networks (ANNs) present a suitable option.
How does the ANN encode a quantum state?
As density matrices may not be the ideal object to approximate using an ANN, we can 'translate' the density matrix into the POVM formalism. Here, instead of a complex Hermitian matrix, the state is represented by a probability distribution P(a) = Tr[ρ M_a] over the outcomes a of an informationally complete set of measurements (a POVM). Because the POVM is informationally complete, no information is lost: the density matrix can always be recovered from the probabilities.
The ANN can be viewed as an extremely complex non-linear function that maps input strings of measurement outcomes a = (a_1, ..., a_N) to the corresponding probabilities P(a).
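A minimal sketch of this 'translation' for a single qubit, assuming the symmetric tetrahedral POVM as the informationally complete measurement (a common choice in this context; the post does not specify which POVM is used):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Symmetric tetrahedral POVM: M_a = (I + s_a . sigma) / 4,
# with the four unit vectors s_a pointing to the corners of a tetrahedron.
s = np.array([[0, 0, 1],
              [2 * np.sqrt(2) / 3, 0, -1 / 3],
              [-np.sqrt(2) / 3,  np.sqrt(2 / 3), -1 / 3],
              [-np.sqrt(2) / 3, -np.sqrt(2 / 3), -1 / 3]])
M = [(I2 + v[0] * sx + v[1] * sy + v[2] * sz) / 4 for v in s]

# An arbitrary single-qubit density matrix (|+><+| with some dephasing)
rho = 0.5 * np.array([[1, 0.8], [0.8, 1]], dtype=complex)

# 'Translate' rho into the POVM distribution P(a) = Tr[rho M_a]
P = np.array([np.real(np.trace(rho @ Ma)) for Ma in M])
print("P(a) =", P, " sum =", P.sum())

# Informational completeness: rho is recovered via the overlap matrix
# T_ab = Tr[M_a M_b],  rho = sum_{a,b} P_a (T^-1)_ab M_b
T = np.array([[np.real(np.trace(Ma @ Mb)) for Mb in M] for Ma in M])
rho_rec = sum(c * Mb for c, Mb in zip(np.linalg.inv(T) @ P, M))
print("max reconstruction error:", np.abs(rho_rec - rho).max())
```

For N qubits the POVM elements are tensor products of such single-qubit elements, so the outcome string a takes one of 4^N values and P(a) becomes exactly the kind of high-dimensional discrete distribution the ANN is asked to encode.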
Autoregressive networks and the RNN
One pressing question is what kind of network architecture is suited to encode these discrete probability distributions. One may observe that each probability can be factorized into a product of conditional probabilities, P(a_1, ..., a_N) = P(a_1) P(a_2 | a_1) ... P(a_N | a_1, ..., a_{N-1}). This is precisely the structure that autoregressive networks, such as the Recurrent Neural Network (RNN), represent natively: the network produces one conditional at a time, which keeps the encoded distribution exactly normalized and allows uncorrelated samples to be drawn directly, without any Markov chain, as sketched below.
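The following is a minimal numpy sketch of such an autoregressive RNN over POVM outcomes; the network sizes, cell type, and initialization are arbitrary placeholders rather than the architecture used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoregressive model: a single-layer RNN whose softmax output at step i
# parameterizes the conditional P(a_i | a_1, ..., a_{i-1}).
N_SITES, N_OUT, N_HID = 6, 4, 16        # 4 POVM outcomes per site, 16 hidden units
W_in = 0.1 * rng.standard_normal((N_HID, N_OUT))
W_hid = 0.1 * rng.standard_normal((N_HID, N_HID))
W_out = 0.1 * rng.standard_normal((N_OUT, N_HID))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def log_prob(a):
    """log P(a) = sum_i log P(a_i | a_<i); normalized by construction."""
    h, prev, logp = np.zeros(N_HID), np.zeros(N_OUT), 0.0
    for a_i in a:
        h = np.tanh(W_in @ prev + W_hid @ h)    # hidden state summarizes a_<i
        p = softmax(W_out @ h)                  # conditional over the 4 outcomes
        logp += np.log(p[a_i])
        prev = np.eye(N_OUT)[a_i]               # one-hot of the outcome just seen
    return logp

def sample():
    """Draw a ~ P outcome by outcome -- no Markov chain, no autocorrelation."""
    h, prev, a = np.zeros(N_HID), np.zeros(N_OUT), []
    for _ in range(N_SITES):
        h = np.tanh(W_in @ prev + W_hid @ h)
        p = softmax(W_out @ h)
        a_i = rng.choice(N_OUT, p=p)
        a.append(int(a_i))
        prev = np.eye(N_OUT)[a_i]
    return a

a = sample()
print("sample:", a, "  log P(a) =", log_prob(a))
```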
The Time-Dependent Variational Principle (TDVP)
The dynamics we wish to represent in the model are those generated by the Lindblad master equation, $\dot{\rho} = -i[H,\rho] + \sum_k \gamma_k \big( L_k \rho L_k^\dagger - \tfrac{1}{2}\{L_k^\dagger L_k, \rho\} \big)$. Since this equation is linear in $\rho$, and the POVM probabilities are linear in $\rho$, it translates into a linear equation of motion for the probabilities $P(a)$, so the target update $\dot{P}(a)$ is known.
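For concreteness, here is a minimal sketch of Lindblad dynamics for a single driven, decaying qubit; the Hamiltonian, jump operator, and rates are chosen purely for illustration:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator sigma^-

omega, gamma = 1.0, 0.5                          # drive and decay rate (illustrative)
H = 0.5 * omega * sx

def lindblad_rhs(rho):
    """drho/dt = -i[H, rho] + gamma (L rho L^+ - 1/2 {L^+ L, rho}) with L = sigma^-."""
    unitary = -1j * (H @ rho - rho @ H)
    jump = sm @ rho @ sm.conj().T
    anti = sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm
    return unitary + gamma * (jump - 0.5 * anti)

# Start in the excited state and integrate with a simple Euler scheme
rho = np.array([[1, 0], [0, 0]], dtype=complex)
dt = 1e-3
for _ in range(int(10 / dt)):
    rho = rho + dt * lindblad_rhs(rho)
print("<sigma_z> at t = 10:", np.real(np.trace(sz @ rho)))
```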
The question we then wish to answer is the following: What is the update of the network parameters that optimally corresponds to the update of the probabilities encoded by these very parameters?
The answer to this question is given by the TDVP formalism. Here one compares, by means of a suitable distance measure, the known update of the probabilities to the update of the probabilities that stems from a (yet unknown) parameter update, and then picks the parameter update that minimizes this distance.
Importantly, for the right distance measures, such as the KL-divergence or the Hellinger distance, this minimization reduces to a linear system of equations for the parameter update, built from a Fisher-information-like metric and a force vector that can both be estimated from samples drawn from the network itself.
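The resulting update has roughly the following structure. This is a hedged sketch of the generic Fisher-metric form; the concrete estimators, centering, and regularization used in the paper may differ:

```python
import numpy as np

# TDVP sketch: minimizing e.g. the Hellinger distance between P_{theta + theta_dot*dt}
# and the exactly propagated P + dt * dP/dt leads, to leading order, to
#     S @ theta_dot = F
# with O_k(a) = d log P_theta(a) / d theta_k,
#     S_kk' = E_P[O_k O_k'],   F_k = E_P[O_k * (dP/dt)(a) / P(a)].

def tdvp_update(samples, grad_log_p, lindblad_rate, eps=1e-6):
    """Estimate theta_dot from Monte Carlo samples a ~ P_theta.

    grad_log_p(a)    -> array (n_params,), the log-derivatives O_k(a)
    lindblad_rate(a) -> float, (dP/dt)(a) / P(a) from the POVM master equation
    eps              -> illustrative diagonal regularization of the metric
    """
    O = np.array([grad_log_p(a) for a in samples])      # (n_samples, n_params)
    r = np.array([lindblad_rate(a) for a in samples])   # (n_samples,)
    S = O.T @ O / len(samples)                           # Fisher-like metric
    F = O.T @ r / len(samples)                           # force vector
    return np.linalg.solve(S + eps * np.eye(S.shape[0]), F)

# One time step of the parameters then reads: theta <- theta + dt * tdvp_update(...)
```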
The above discussion is summarized in the figure below.
Fig. 1: The density matrix and POVM formalism in contrast to the variational approach based on the neural network, whose parameters are dynamically adapted according to the TDVP.
Dissipative quantum dynamics with the TDVP
To benchmark our approach we looked at two toy model systems, in 1D and 2D respectively. Both are anisotropic Heisenberg models with different anisotropies, whose Hamiltonian is given by $H = \sum_{\langle i,j \rangle} \left( J_x \sigma^x_i \sigma^x_j + J_y \sigma^y_i \sigma^y_j + J_z \sigma^z_i \sigma^z_j \right)$, where the sum runs over neighbouring spins and $J_x$, $J_y$, $J_z$ set the coupling strengths along the three spin directions; in addition, each spin is coupled to a Markovian environment through local jump operators.
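For small chains such benchmarks can be cross-checked against exact Lindblad simulations, for example with QuTiP. The sketch below assumes nearest-neighbour couplings, open boundaries, and local decay as the dissipation channel; the actual couplings, boundary conditions, dissipators, and rates used in the post may differ:

```python
import numpy as np
import qutip as qt

N = 6                                    # small chain, exactly solvable
Jx, Jy, Jz, gamma = 0.9, 1.0, 1.1, 0.5   # placeholder couplings and rate

def op(single, site):
    """Embed a single-site operator at position `site` in the N-spin chain."""
    ops = [qt.qeye(2)] * N
    ops[site] = single
    return qt.tensor(ops)

# H = sum_j (Jx sx_j sx_{j+1} + Jy sy_j sy_{j+1} + Jz sz_j sz_{j+1})
H = 0
for j in range(N - 1):
    H += Jx * op(qt.sigmax(), j) * op(qt.sigmax(), j + 1)
    H += Jy * op(qt.sigmay(), j) * op(qt.sigmay(), j + 1)
    H += Jz * op(qt.sigmaz(), j) * op(qt.sigmaz(), j + 1)

# Local dissipation: decay sigma^- on every site with rate gamma (illustrative choice)
c_ops = [np.sqrt(gamma) * op(qt.sigmam(), j) for j in range(N)]

# Start from all spins up and track the mean magnetization <sigma_z>
rho0 = qt.tensor([qt.basis(2, 0)] * N)
e_ops = [sum(op(qt.sigmaz(), j) for j in range(N)) / N]
times = np.linspace(0, 5, 51)
result = qt.mesolve(H, rho0, times, c_ops, e_ops)
print("mean magnetization at final time:", result.expect[0][-1])
```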
Fig. 2: (a) and (b): Mean magnetizations and next-nearest neighbour connected correlation functions in the anisotropic 1D Heisenberg model.
Let us now turn to a physically motivated example: the dissipative dynamics of a confinement model. Confinement in many-body dynamics refers to the finite length over which correlations between spins spread if there is an energy penalty associated with alignment in a specific direction. Such a system is realized, for example, by the Transverse Field Ising Model (TFIM) with an additional longitudinal field.
The larger this longitudinal field, the larger the energy cost of flipped domains and the shorter the distance over which correlations can spread.
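A hedged sketch of how this confinement model with dephasing could be set up, again with QuTiP; the field strengths, rate, and sign conventions are placeholders rather than the values used in the post (whose benchmark uses a 32-spin chain with periodic boundaries):

```python
import numpy as np
import qutip as qt

N = 8                                  # small stand-in for the 32-spin chain
J, hx, hz, gamma = 1.0, 0.5, 0.4, 0.1  # placeholder couplings, fields, dephasing rate

def op(single, site):
    ops = [qt.qeye(2)] * N
    ops[site] = single
    return qt.tensor(ops)

# TFIM with longitudinal field (periodic boundary conditions):
# H = -J sum_j sz_j sz_{j+1} + hx sum_j sx_j + hz sum_j sz_j
H = 0
for j in range(N):
    H += -J * op(qt.sigmaz(), j) * op(qt.sigmaz(), (j + 1) % N)
    H += hx * op(qt.sigmax(), j) + hz * op(qt.sigmaz(), j)

# Dephasing: jump operators sigma^z_j with rate gamma
c_ops = [np.sqrt(gamma) * op(qt.sigmaz(), j) for j in range(N)]
```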
Fig. 3: (a) and (b): Mean magnetizations and suppressed spreading of correlations in a 32-spin chain with periodic boundary conditions that is subject to the above Hamiltonian and to dephasing.
Further reading: https://arxiv.org/pdf/2104.00013