
Conversation

@josephbowles
Contributor

@josephbowles josephbowles commented Oct 6, 2025

Demo: Generative quantum advantage for classical and quantum problems

Summary:
This is a new demo for this paper: https://arxiv.org/abs/2509.09033

It is a quantum machine learning paper. The demo is rather critical: its main aim is to draw attention to the fact that the authors' approach is in a sense 'cheating', given the common understanding of the term 'learning' in ML, and that the assumptions they rely on are not at all common in real-world ML.

The text is hopefully fairly low on sass, however. I have tried to be as matter-of-fact as possible whilst still calling the authors out where necessary. The most opinionated section is the last, where I try to convince the reader that what they do is not ultimately useful for QML.

Relevant references:
https://arxiv.org/abs/2509.09033

Possible Drawbacks:

Related GitHub Issues:


  • GOALS — Why are we working on this now?
    This paper was released recently by Google/Caltech and has received, and will continue to receive, a lot of attention as an important result in the field. Publishing a critical demo like this is part of our 'thought leader' mission.

  • AUDIENCE — Who is this for?

PhD+ researchers in quantum computing.

  • KEYWORDS — What words should be included in the marketing post?

quantum machine learning
generative AI
complexity

  • Which of the following types of documentation is most similar to your file?
    (more details here)

    • Demo

@github-actions

github-actions bot commented Oct 6, 2025

👋 Hey, looks like you've updated some demos!

🐘 Don't forget to update the dateOfLastModification in the associated metadata files so your changes are reflected in Glass Onion (search and recommendations).

Please hide this comment once the field(s) are updated. Thanks!

@github-actions

github-actions bot commented Oct 6, 2025

Your preview is ready 🎉!

You can view your changes here

Deployed at: 2025-11-03 09:49:36 UTC

@josephbowles josephbowles requested a review from vbelis October 7, 2025 14:38
Comment on lines +20 to +23
*Theorem 1 (Informal: Classically hard, quantumly easy generative models). Under standard
complexity-theoretic conjectures, there exist distributions p(y|x) mapping classical n-bit strings
to m-bit strings that a quantum computer can efficiently learn to generate using classical data
samples, but are hard to generate with classical computers.*
Contributor

Hi! What do you think about changing the format of these special boxes to something like the `.. admonition::` note class? (see the green boxes in Maria's demo). Let me know and, if so, I can implement the change.

Contributor Author

Good idea, have added it here. Feel free to add them elsewhere :)

but also to be able to learn it efficiently from data, and they investigate a specific scenario in
which this is possible.

In this demo we will unpack one of the main results of the paper to understand its core mechanics.
Contributor

of their paper, that paper, Huang's paper... any of those.

Contributor

actually, just wondering if you might wanna cite the paper several times during the demo. Maybe not for this case, but there is another instance "(see Appendix H2 of the paper)" where it might be helpful.

Contributor Author

Have added some more citations

# 2. Entangle the qubits by performing controlled-Z gates between some pairs of nearest neighbour
# qubits on the lattice. If the qubits are horizontal neighbours on the lattice, a CZ is always
# applied.
# 3. Perform a single qubit Z rotation :math:`U_{z}(\theta_{ij})=\exp(-\frac{i}{2}\theta_{ij}Z)` with
Contributor

I feel like the phrase "single qubit Z rotation" should have a hyphen somewhere haha just don't know where:
single-qubit Z rotation

Contributor Author

Makes sense, have updated all the 'single qubit' instances to 'single-qubit'
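As a quick numerical aside (my own illustration, not code from the demo diff): the single-qubit Z rotation quoted above, :math:`U_{z}(\theta)=\exp(-\frac{i}{2}\theta Z)`, can be sanity-checked against its closed form. Since :math:`Z^2 = I`, the exponential reduces to a diagonal matrix of phases; a minimal NumPy sketch (the angle value is arbitrary):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle, for illustration only
I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Since Z^2 = I, the matrix exponential has the closed form
# exp(-i*theta*Z/2) = cos(theta/2) * I - i * sin(theta/2) * Z
U = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Z

# which is just a diagonal matrix of opposite phases
U_phases = np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])

assert np.allclose(U, U_phases)
```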

# 2. Perform steps 2-4 as before
#
# In order to be able to prove the result, the choice and distribution of possible input states must
# satisfy a particular property called ‘local decoupling’ (see Appendix C2 of the paper). One
Contributor

and yeah, maybe it is easier for the reader if we cite the paper in other places too, so they can click the hyperlink and don't have to go looking for it.


In this demo we will unpack one of the main results of the paper to understand its core mechanics.
We will see that the problem is constructed so that learning the hard distribution boils down to
performing single qubit tomography, and we will debate the scope of this technique in relation to
Contributor

Suggested change
performing single qubit tomography, and we will debate the scope of this technique in relation to
performing single qubit tomography, and we will discuss the scope of this technique in relation to

#
# For each :math:`x`, the probability distribution :math:`p(\boldsymbol{y}|x)` corresponds to an IDQNN
# where—rather than all qubits being in the :math:`\vert + \rangle` state—each input qubit can be prepared in either the
# :math:`\vert 0 \rangle` or :math:`\vert + \rangle` state, which is determined by the input :math:`x`. We therefore adapt the first step of
Contributor

here it is zero or plus, and in line 149 it is plus or minus

Contributor Author

Nice catch!

Collaborator

@vbelis vbelis left a comment

I really like it! I believe it strikes the perfect balance between being critical but also technical and not too opinionated.

# Recipe for sampling from the deep circuit
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# 1. Generate all bits :math:`y_{ij}` for :math:`i<4` uniformly at random.
Collaborator

Perhaps make explicit for the reader that we first classically generate these bits?

# into the precise structure we had above by inserting the classically controlled Z gates at every
# layer. If we happen to sample the all zero bitstring for :math:`y_{ij}` values that control these
# gates, then we will sample from this hard distribution. In this sense the distribution
# :math:`p(\boldsymbol{y})` is ‘hard’ since any classical algorithm will fail to reproduce this part
Collaborator

Please feel free to judge if it's unnecessary: I am not sure this is clear: "this part of the distribution" (for the general audience of the demo). Perhaps an extra sentence that highlights the thought process of the argument even more: "Even if a classical model could learn most p(y) configurations, the one corresponding to y=00...0 would be hard, since it corresponds to learning the output distribution of a universal quantum circuit".

# :align: center
#
# The authors argue that the conditional distribution :math:`p(\boldsymbol{y}|x)` should also be
# considered hard to sample from classically, since for the input :math:`x=0` we can use the argument
Collaborator

Maybe consider explicitly stating which "argument of the previous section"? Something like "Similarly to the argument above, for x=1,2 we can classically simulate but for x=0 not. Hence, p(y|x) is not classically simulable as a whole".

# authors used this trick to simulate a shallow IDQNN circuit on 816 qubits using a deep circuit with
# 68 qubits. To do this they actually work with a deep circuit on a 2D lattice, and map it to a
# shallow circuit on a 3D lattice. This obviously complicates things a bit (and makes drawing pictures
# a lot harder!) so we will stick to the 2D vs 1D example above; in the end, it will contain
Collaborator

Could mention somewhere what the authors call "compressed" circuits to make it easier for the reader if they want to read the paper after the demo.

# Although the above suggests the conditional distribution is unknown, we actually know a lot about
# it. In particular, we need to work under the assumption that we know the exact structure of the
# quantum circuits that produce the data, except for the rotation angles :math:`\theta_{ij}`
# (i.e. this is included in the \`prior knowledge’ of the problem). To learn, we therefore just need
Collaborator

Suggested change
# (i.e. this is included in the \`prior knowledge’ of the problem). To learn, we therefore just need
# (i.e. this circuit structure is included in the \`prior knowledge’ of the problem). To learn, we therefore just need

# (i.e. this is included in the \`prior knowledge’ of the problem). To learn, we therefore just need
# to infer the parameters :math:`\theta_{ij}` from the data, which will allow us to generate new data by
# simply implementing the resulting circuits. This is very different from real world problems the
# typical situation in classical generative machine learning, where a precise parametric form of the
Collaborator

This is one of the most crucial sentences in the demo; it is the first time our main message is communicated. Perhaps it could be made even more powerful without breaking the flow. Perhaps something along these lines:

  • "This situation is very different from real world problems. The precise parametric form of the ground truth distribution is not known in practical applications of classical generative machine learning." (or any type of ML really)
  • "This situation is very different from the typical problems that classical generative machine learning is applied to. In real world problems, a precise parametric form of the ground truth distribution is not known."


In this demo we will unpack one of the main results of the paper to understand its core mechanics.
We will see that the problem is constructed so that learning the hard distribution boils down to
performing single qubit tomography, and we will debate the scope of this technique in relation to
Collaborator

What about adding a comment that this is not even tomography, since we are not doing multiple projective measurements, but just one? Too sassy?

# produced the data was necessary in order to learn. In reality, such a precise form of the
# ground truth distribution is not known and models have to be learnt with vastly less prior
# knowledge. To evoke the eternal cat picture analogy, it is like we are provided with a model that
# generates near perfect pictures of cats, and all we need to do is find the right parameters for the
Collaborator

Honestly, even the color tuning of the fur seems far more complex than deducing the single-qubit rotation. I like the cat analogy, feel free to remove it if you want to stay more technical, no hard preference.

# capture highly non-linear correlations, such as how edges, textures, and object parts co-occur
# across different spatial locations. In the quantum setup, although the distribution for the input
# :math:`x=0` is undoubtedly complex, the data that is used for learning comes from the much simpler
# product distributions in which there are no correlations between bits.
Collaborator

Maybe remind the reader why that is so? "Reminder: the input x, i.e. the data, is a sample from the uniform distribution"

Comment on lines 243 to 250
# This is a measurement on a rotated single qubit state, for which the expectation value for
# :math:`y_{12}` is
#
# :math:`\langle y_{12} \rangle = (1-\cos(\theta_{12}))/2`.
#
# Rearranging this equation we have
#
# :math:`\theta_{12} = \arccos(1 - 2\langle y_{12} \rangle)`.
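As an aside on this quoted hunk: rearranging :math:`\langle y_{12} \rangle = (1-\cos(\theta_{12}))/2` gives :math:`\theta_{12} = \arccos(1 - 2\langle y_{12} \rangle)`, and the inversion is easy to check numerically. A minimal sketch (my own illustration, not code from the demo; the angle and sample count are arbitrary) draws bits with :math:`P(y=1) = (1-\cos\theta)/2` and recovers :math:`\theta` from the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)

theta_true = 0.9  # hypothetical rotation angle, for illustration
p_one = (1 - np.cos(theta_true)) / 2  # <y> = (1 - cos(theta)) / 2

# Sample many measurement outcomes y in {0, 1}
samples = rng.random(200_000) < p_one
y_mean = samples.mean()

# Invert the expectation value: theta = arccos(1 - 2<y>)
theta_est = np.arccos(1 - 2 * y_mean)

assert abs(theta_est - theta_true) < 0.05
```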
Contributor

do you want these equations to be inline or centered? right now, they are aligned to the left

Contributor Author

Have centred them

daniela-angulo and others added 6 commits October 28, 2025 14:06
Just fixing a tiny mistake in the date of publication so we don't forget later
Co-authored-by: Daniela Angulo <42325731+daniela-angulo@users.noreply.github.com>
Co-authored-by: Daniela Angulo <42325731+daniela-angulo@users.noreply.github.com>
Co-authored-by: Vasilis Belis <48251467+vbelis@users.noreply.github.com>
Co-authored-by: Daniela Angulo <42325731+daniela-angulo@users.noreply.github.com>
Co-authored-by: Daniela Angulo <42325731+daniela-angulo@users.noreply.github.com>