https://www.linkedin.com/in/leonard-wossnig/

Leonard Wossnig - Stealth | LinkedIn

About

Leonard Wossnig is an entrepreneur and executive. He is Co-Founder of a new stealth…

Experience & Education


Publications

arXiv June 1, 2018


Adversarial learning is one of the most successful approaches to modelling high-dimensional probability distributions from data. The quantum computing community has recently begun to generalize this idea and to look for potential applications. In this work, we derive an adversarial algorithm for the problem of approximating an unknown quantum pure state. Although this could be done on error-corrected quantum computers, the adversarial formulation enables us to execute the algorithm on near-term quantum computers. Two ansatz circuits are optimized in tandem: one tries to approximate the target state, the other tries to distinguish between target and approximated state. Supported by numerical simulations, we show that resilient backpropagation algorithms perform remarkably well in optimizing the two circuits. We use the bipartite entanglement entropy to design an efficient heuristic for the stopping criterion. Our approach may find application in quantum state tomography.

See publication
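The stopping heuristic above relies on the bipartite entanglement entropy of a pure state, which is cheap to compute classically for small systems. A minimal sketch (assuming NumPy; this is an illustration of the quantity, not code from the paper): reshape the state vector across the bipartition and take the entropy of the squared Schmidt coefficients.

```python
import numpy as np

def entanglement_entropy(state, dim_a, dim_b):
    """Bipartite von Neumann entanglement entropy of a pure state.

    Reshaping the state vector into a dim_a x dim_b matrix makes its
    singular values the Schmidt coefficients of the bipartition.
    """
    psi = np.asarray(state, dtype=complex).reshape(dim_a, dim_b)
    schmidt = np.linalg.svd(psi, compute_uv=False)
    probs = schmidt**2
    probs = probs[probs > 1e-12]          # drop numerical zeros
    return float(-np.sum(probs * np.log2(probs)))

# Product state |00>: zero entanglement.
product = np.array([1, 0, 0, 0], dtype=complex)

# Bell state (|00> + |11>)/sqrt(2): maximal entanglement (1 ebit).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

print(entanglement_entropy(product, 2, 2))  # ~0.0
print(entanglement_entropy(bell, 2, 2))     # ~1.0
```

The entropy ranges from 0 (product state) to log2(min(dim_a, dim_b)) (maximally entangled), which is what makes it usable as a convergence signal.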

arXiv November 1, 2017


Optimization problems in disciplines such as machine learning are commonly solved with iterative methods. Gradient descent algorithms find local minima by moving along the direction of steepest descent, while Newton's method takes curvature information into account and thereby often improves convergence. Here, we develop quantum versions of these iterative optimization algorithms and apply them to polynomial optimization with a unit norm constraint. In each step, multiple copies of the current candidate are used to improve the candidate using quantum phase estimation, an adapted quantum principal component analysis scheme, as well as quantum matrix multiplications and inversions. The required operations scale polylogarithmically in the dimension of the solution vector and exponentially in the number of iterations. Therefore, the quantum algorithm can be …

See publication
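The classical contrast the abstract draws can be seen on a toy quadratic (a sketch assuming NumPy; the matrix and step size are illustrative): gradient descent needs many small steps, while Newton's method, whose step uses the Hessian, lands on the minimum of a quadratic in a single iteration.

```python
import numpy as np

# Minimize f(x) = 1/2 x^T A x - b^T x, whose unique minimum solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite Hessian
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

# Gradient descent: many small steps along -grad.
x_gd = np.zeros(2)
eta = 0.1
for _ in range(200):
    x_gd = x_gd - eta * grad(x_gd)

# Newton's method: one step -H^{-1} grad suffices on a quadratic (H = A).
x_newton = np.zeros(2) - np.linalg.solve(A, grad(np.zeros(2)))

x_star = np.linalg.solve(A, b)           # exact minimizer
print(np.allclose(x_newton, x_star))     # True
```

The quantum versions described above perform the analogous linear-algebra steps (matrix inversion, multiplication) on amplitude-encoded vectors rather than explicitly.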

arXiv


We present a quantum algorithm for simulating the dynamics of Hamiltonians that are not necessarily sparse. Our algorithm is based on the assumption that the entries of the Hamiltonian are stored in a data structure that allows for the efficient preparation of states that encode the rows of the Hamiltonian. We use a linear combination of quantum walks to achieve a poly-logarithmic dependence on the precision. The time complexity of our algorithm, measured in terms of circuit depth, is $O(t\sqrt{N}\lVert H\rVert\,\mathrm{polylog}(N, t\lVert H\rVert, 1/\epsilon))$, where $t$ is the evolution time, $N$ is the dimension of the system, and $\epsilon$ is the error in the final state, which we call precision. Our algorithm can directly be applied as a subroutine for unitary implementation and quantum linear system solvers, achieving a $\widetilde{O}(\sqrt{N})$ dependence for both applications.

See publication
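For context, the exact classical baseline for a dense $N$-dimensional Hamiltonian is an $O(N^3)$ eigendecomposition; the quantum algorithm's appeal is its $\sqrt{N}$ dependence. A minimal sketch of that baseline (assuming NumPy; the Hamiltonian here is a random dense Hermitian matrix for illustration):

```python
import numpy as np

def evolve(H, t, psi0):
    """Exact evolution psi(t) = exp(-i H t) psi0 for a dense Hermitian H,
    computed via eigendecomposition -- the O(N^3) classical baseline."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi0

rng = np.random.default_rng(0)
N = 8
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (M + M.conj().T) / 2                  # dense, non-sparse Hermitian matrix
psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0

psi_t = evolve(H, t=1.3, psi0=psi0)
print(abs(np.linalg.norm(psi_t) - 1.0) < 1e-10)  # evolution is unitary: True
```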

arXiv


Simulating the time-evolution of quantum mechanical systems is BQP-hard and expected to be one of the foremost applications of quantum computers. We consider the approximation of Hamiltonian dynamics using subsampling methods from randomized numerical linear algebra. We propose conditions for the efficient approximation of state vectors evolving under a given Hamiltonian. As an immediate application, we show that sample-based quantum simulation, a type of evolution where the Hamiltonian is a density matrix, can be efficiently classically simulated under specific structural conditions. Our main technical contribution is a randomized algorithm for approximating Hermitian matrix exponentials. The proof leverages the Nyström method to obtain low-rank approximations of the Hamiltonian. We envisage that techniques from randomized linear algebra will bring further insights …

See publication
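The Nyström method at the heart of this result builds a low-rank approximation of a positive semidefinite matrix from a subset of its columns. A minimal sketch (assuming NumPy; the rank-4 test matrix and the column choice are illustrative — in practice columns are sampled, e.g. by leverage scores):

```python
import numpy as np

def nystrom(H, cols):
    """Nystrom approximation of a PSD matrix from a subset of its columns:
    H ~ C W^+ C^T, where C = H[:, cols] and W = H[np.ix_(cols, cols)]."""
    C = H[:, cols]
    W = H[np.ix_(cols, cols)]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
# Build an exactly rank-4 PSD matrix so a 4-column sketch can recover it.
F = rng.normal(size=(50, 4))
H = F @ F.T

H_approx = nystrom(H, cols=[0, 1, 2, 3])
print(np.allclose(H, H_approx, atol=1e-6))  # exact recovery at matching rank
```

When the Hamiltonian is (approximately) low rank, the matrix exponential can then be computed on the much smaller sketch, which is what makes the classical simulation efficient under the paper's structural conditions.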

Physical Review Letters


Solving linear systems of equations is a frequently encountered problem in machine learning and optimization. Given a matrix A and a vector b, the task is to find the vector x such that Ax = b. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of O(κ²√n polylog(n)/ε) for an n×n dimensional A with bounded spectral norm, where κ denotes the condition number of A, and ε is the desired precision parameter. This amounts to a polynomial improvement over known quantum linear system algorithms when applied to dense matrices, and poses a new state of the art for solving dense linear systems on a quantum computer. Furthermore, an exponential improvement is achievable if the rank of A is polylogarithmic in the matrix dimension. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient …

See publication
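As a point of comparison, the classical direct solve costs O(n³) and returns the full vector x, whereas the quantum algorithm outputs an amplitude-encoded state |x⟩. A minimal classical sketch (assuming NumPy; matrix and sizes are illustrative) showing the two quantities that enter the quantum runtime, the spectral norm bound and the condition number κ:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
A = rng.normal(size=(n, n))
A = A / np.linalg.norm(A, 2)              # bounded spectral norm, as assumed above
b = rng.normal(size=n)

x = np.linalg.solve(A, b)                 # classical O(n^3) direct solve
kappa = np.linalg.cond(A)                 # condition number entering the quantum runtime

print(np.allclose(A @ x, b))              # True
print(kappa > 1.0)                        # True: kappa = 1 only for isometries
```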

-


The Harrow-Hassidim-Lloyd (HHL) quantum algorithm for sampling from the solution of a linear system provides an exponential speed-up over its classical counterpart. The problem of solving a system of linear equations has a wide scope of applications, and thus HHL constitutes an important algorithmic primitive. In these notes, we present the HHL algorithm and its improved versions in detail, including explanations of the constituent subroutines. More specifically, we discuss various quantum subroutines such as quantum phase estimation and amplitude amplification, as well as the important question of loading data into a quantum computer, via quantum RAM. The improvements to the original algorithm exploit variable-time amplitude amplification as well as a method for implementing linear combinations of unitary operations (LCUs) based on a decomposition of the operators using Fourier and Chebyshev series. Finally, we discuss a linear solver based on the quantum singular value estimation (QSVE) subroutine.

See publication
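The core of HHL is easy to state classically: expand b in the eigenbasis of a Hermitian A and divide each coefficient by its eigenvalue, the step HHL performs with phase estimation plus a controlled rotation. A minimal classical sketch of that step (assuming NumPy; the matrix is an illustrative well-conditioned Hermitian example):

```python
import numpy as np

def invert_via_eigenvalues(A, b):
    """Classical analogue of HHL's core step: write b = sum_j beta_j |u_j>
    in the eigenbasis of Hermitian A, then return sum_j (beta_j / lambda_j) |u_j>."""
    evals, evecs = np.linalg.eigh(A)
    beta = evecs.conj().T @ b             # eigenbasis coefficients beta_j
    return evecs @ (beta / evals)         # rescale each component by 1/lambda_j

rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))
A = M + M.T + 10 * np.eye(6)              # Hermitian, shifted to be well conditioned
b = rng.normal(size=6)

x = invert_via_eigenvalues(A, b)
print(np.allclose(A @ x, b))              # True: this recovers the solution of Ax = b
```

The 1/λ rescaling is also why κ, the ratio of the largest to smallest eigenvalue magnitude, dominates HHL's runtime: small eigenvalues require large (and rarely successful) rotations.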

Proc. R. Soc. A


Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for …

See publication

-


We develop a quantum-classical hybrid algorithm for function optimization that is particularly useful in the training of neural networks, since it makes use of particular aspects of high-dimensional energy landscapes. Due to a recent formulation of semi-supervised learning as an optimization problem, the algorithm can further be used to find the optimal model parameters for deep generative models. In particular, we present a truncated saddle-free Newton's method based on recent insights from optimization, the analysis of deep neural networks, and random matrix theory. By combining these with specific quantum subroutines, we arrive at a new quantum-classical hybrid algorithm design. Our algorithm is expected to perform at least as well as existing classical algorithms while achieving a polynomial speedup. The speedup is limited by the required classical read-out; omitting this requirement can in theory lead to an exponential speedup.

See publication
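The saddle-free Newton idea referenced above replaces the inverse Hessian H⁻¹ by |H|⁻¹, where |H| takes the absolute value of each eigenvalue: plain Newton is attracted to saddle points, while the rescaled step repels them. A minimal sketch on the textbook saddle f(x, y) = x² − y² (assuming NumPy; the function and point are illustrative):

```python
import numpy as np

def saddle_free_newton_step(grad, hess):
    """Saddle-free Newton step -|H|^{-1} grad, where |H| replaces each
    Hessian eigenvalue by its absolute value."""
    evals, evecs = np.linalg.eigh(hess)
    abs_inv = evecs @ np.diag(1.0 / np.abs(evals)) @ evecs.T
    return -abs_inv @ grad

# f(x, y) = x^2 - y^2 has a saddle point at the origin.
hess = np.diag([2.0, -2.0])               # constant Hessian of f
x = np.array([0.5, 0.5])
grad = np.array([2 * x[0], -2 * x[1]])    # gradient of f at x

newton = -np.linalg.solve(hess, grad)     # plain Newton jumps straight to the saddle
sfn = saddle_free_newton_step(grad, hess)

print(x + newton)                         # [0. 0.]: the saddle point
print(x + sfn)                            # [0. 1.]: moves away from the saddle in y
```

Along the negative-curvature direction y, plain Newton moves uphill toward the saddle while the saddle-free step moves downhill, which is exactly the behavior that matters in high-dimensional neural-network landscapes dominated by saddles.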

-


Quantum mechanics fundamentally forbids deterministic discrimination of quantum states and processes. However, the ability to optimally distinguish various classes of quantum data is an important primitive in quantum information science. In this work, we train near-term quantum circuits to classify data represented by non-orthogonal quantum probability distributions using the Adam stochastic optimization algorithm. This is achieved by iterative interactions of a classical device with a quantum processor to discover the parameters of an unknown non-unitary quantum circuit. This circuit learns to simulate the unknown structure of a generalized quantum measurement, or positive operator-valued measure (POVM), that is required to optimally distinguish possible distributions of quantum inputs. Notably, we use universal circuit topologies with a theoretically motivated circuit design, which guarantees that our circuits can in principle learn to perform arbitrary input-output mappings. Our numerical simulations show that shallow quantum circuits can be trained to discriminate among various pure and mixed quantum states, exhibiting a trade-off between minimizing erroneous and inconclusive outcomes, with performance comparable to theoretically optimal POVMs. We train the circuit on different classes of quantum data and evaluate the generalization error on unseen mixed quantum states. This generalization power distinguishes our work from standard circuit optimization and provides an example of quantum machine learning for a task that has inherently no classical analogue.

See publication
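For two pure states, the theoretical optimum the trained circuits are benchmarked against is the Helstrom bound. A minimal sketch of that benchmark (assuming NumPy; the states and equal priors are illustrative):

```python
import numpy as np

def helstrom_success(psi, phi, p=0.5):
    """Optimal success probability for discriminating two pure states
    with prior p on psi, via the Helstrom bound:
        P_opt = 1/2 + 1/2 * sqrt(1 - 4 p (1 - p) |<psi|phi>|^2)."""
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 + 0.5 * np.sqrt(1 - 4 * p * (1 - p) * overlap)

zero = np.array([1.0, 0.0])               # |0>
one = np.array([0.0, 1.0])                # |1>
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # |+>, non-orthogonal to |0>

print(helstrom_success(zero, zero))       # 0.5: identical states, pure guessing
print(helstrom_success(zero, plus))       # ~0.854: better than guessing, below 1
print(helstrom_success(zero, one))        # 1.0: orthogonal states, deterministic
```

Non-orthogonal states sit strictly between the two extremes, which is the trade-off between erroneous and inconclusive outcomes that the trained POVMs navigate.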

Honors & Awards

  • Google PhD Fellowship

Google

Apr 2019

Google PhD Fellowship in Quantum Computing

  • Royal Society PhD Fellowship with Simone Severini

Royal Society

Sep 2017
