From Fibonacci Chains on Paper to Messy Reality on Hardware

Author: Gauge Freedom, Inc. (lead: Marcelo M. Amaral)

TL;DR

  • Quantum computing is no longer about whether it will work, but about which stack, for which problem, under which constraints.
  • The people building the main platforms (IBM, D-Wave, Quantinuum, etc.) are fully occupied making the hardware itself scale; most users can’t afford to master every SDK, architecture, and failure mode.
  • “Standard” algorithms that look fine in textbooks often fail in practice because of embedding penalties, noise, leakage, and schedule choices.
  • In our new preprint, A Hierarchy of Fibonacci Forbidden-Word Hamiltonians: From the Golden Chain to the Plastic Chain and Aperiodic Order, we use a structured family of Hamiltonians as workloads and show on a D-Wave annealer that naïve approaches break down—while carefully tuned reverse annealing and embeddings recover >99% success.
  • We believe there is growing room for independent, multidisciplinary agents—people who can move across platforms, translate real problems into hardware-aware formulations, and use AI to explore and debug quantum programs that most teams won’t want to hand-craft.
  • Gauge Freedom is positioning its research and tooling right at that interface: physics-rich benchmarks, multi-platform experiments, and advisory work that’s grounded in actual data, not just slides.

1. The Quantum Question Has Changed

For about a decade, the dominant question around quantum computing was: Will any of this actually work?

In 2025, the question is shifting to something more practical:

Which architecture, for which problem, with what trade-offs?

A few signals from the last 1–2 years:

  • D-Wave reported a materials-simulation experiment where their annealing system outperformed a leading classical supercomputer on a problem of scientific relevance, framing it as “quantum advantage” for an annealing-based device.
  • IBM has been steadily advancing its superconducting roadmap, from Condor to Heron and now Nighthawk and Loon, explicitly targeting quantum advantage by 2026 and fault-tolerant computing by 2029, with real-time error-correction decoders implemented on classical AMD hardware.
  • Quantinuum’s trapped-ion H2 system continues to set records in quantum volume and has been used to demonstrate certified randomness and key error-correction milestones at 56+ qubits.

Meanwhile, other players are pushing architectures tailored to error correction and modularity (for example, IQM’s new Halocene line focused on QEC research).

This is no longer a world with one canonical “quantum computer.” Instead, we have at least:

  • Gate-based superconducting machines (IBM, Google, IQM, etc.).
  • Trapped-ion systems with high fidelities and all-to-all or racetrack connectivity (Quantinuum and others).
  • Annealing systems optimized for combinatorial optimization and certain quantum dynamics (D-Wave).

Each has different native operations, connectivity constraints, noise profiles, and cost models.

For enterprises, labs, and even small teams, this creates a new kind of problem:

How do we decide what to run where—and how do we know if it’s actually working as advertised?


2. Why Real Workloads and Hands-On Experience Matter

On slides, the workflow often looks like this:

  1. Pick a “standard” algorithm (QAOA, VQE, some annealing formulation, etc.).
  2. Map the problem onto qubits through a known encoding.
  3. Run it on hardware; collect the results; compare to classical baselines.
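
For a toy illustration of step 2, a hard constraint like "no two adjacent 1s" can be encoded as a QUBO with a linear reward and a pairwise penalty. The sketch below is plain Python with a brute-force check rather than any vendor SDK; the field and penalty values are illustrative choices, not settings from our experiments:

```python
from itertools import product

def no_adjacent_ones_qubo(n, penalty=2.0, field=-1.0):
    """Build a QUBO for: favor 1s (linear field) subject to
    'no two adjacent 1s' (quadratic penalty on each adjacent pair)."""
    Q = {}
    for i in range(n):
        Q[(i, i)] = field            # reward each bit set to 1
    for i in range(n - 1):
        Q[(i, i + 1)] = penalty      # penalize adjacent 1s
    return Q

def energy(Q, x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def brute_force_ground_states(Q, n):
    states = list(product([0, 1], repeat=n))
    energies = {x: energy(Q, x) for x in states}
    e_min = min(energies.values())
    return e_min, [x for x, e in energies.items() if e == e_min]

Q = no_adjacent_ones_qubo(4)
e_min, gs = brute_force_ground_states(Q, 4)
# The minima are the densest bit patterns with no adjacent 1s.
```

At these sizes an exhaustive check is trivial (dimod's ExactSolver offers the same thing); the pain described below only appears once the encoded problem meets real couplers and embeddings.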

In practice, that picture is missing the most painful details:

  • Embedding penalties and layout: For annealers and some gate-based devices, naive embeddings can destroy the effective problem structure or introduce huge overhead in couplers and ancillae.
  • Noise and leakage: The error model you think you have is rarely the one you actually experience once control electronics and crosstalk get involved.
  • Schedule and control: On an annealer, for instance, the choice of schedule, pause, and reverse-annealing parameters can completely change whether the system finds nontrivial solutions or just gets stuck.
  • Algorithmic “comfort zone”: Many teams reach for the same few named algorithms regardless of whether they match the underlying hardware.

These are not problems you solve by reading one API doc. They are the kinds of issues that show up only when you repeatedly run structured workloads, vary the knobs, and keep track of failure modes.

That’s where we see a gap emerging:

  • The platform teams (IBM, D-Wave, Quantinuum, etc.) are rightly focused on improving their own stack—hardware, firmware, compilers, error correction.
  • The users (enterprises, domain scientists, investors, public programs) want honest performance answers across stacks, phrased in their own language.
  • In between, we need independent, multidisciplinary agents who can move across platforms, model classes, and even different programming paradigms—and who are willing to get their hands dirty with embeddings, schedules, and AI-assisted experiment design.

3. A Case Study: Fibonacci Forbidden-Word Hamiltonians as Benchmarks

Our new preprint,
“A Hierarchy of Fibonacci Forbidden-Word Hamiltonians: From the Golden Chain to the Plastic Chain and Aperiodic Order” (Marcelo Maciel Amaral, arXiv:2511.10672), grew out of exactly this need: to have structured, physics-meaningful workloads that reveal something nontrivial about both algorithms and hardware.

Very briefly:

  • We start from the well-known golden chain, a 1D Hamiltonian describing interacting Fibonacci anyons, whose ground states can be encoded as binary strings obeying a simple local constraint (“no SS”).
  • We generalize this to an infinite hierarchy of 1D, frustration-free Hamiltonians by forbidding the minimal forbidden factors of the Fibonacci word up to length F_K, the K-th Fibonacci number.
  • Each rung K in the hierarchy corresponds to a stricter set of local “forbidden words.” The size of the ground-state space scales like λ_K^N, where λ_K is an effective growth constant and N is the system size.
  • As K increases, the λ_K values form a staircase of entropies:
    • The base rung (golden chain) is tied to the golden ratio φ≈1.618.
    • The first genuinely new rung, the Plastic chain, is governed by the plastic constant ρ≈1.3247, with a neat four-term recurrence controlling the ground-state counts.
    • Higher rungs keep tightening the constraints until we reach an aperiodic fixed point with λ=1 and zero entropy.

Mathematically, this gives us a controlled way to flow from a high-entropy topological phase to a zero-entropy aperiodic phase by turning on more forbidden patterns—a kind of explicit renormalization-group trajectory in “entropy space.”
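
The base of the staircase can be checked in a few lines of plain Python. This is a sketch under stated assumptions: we encode the golden chain's "no SS" rule as "no 11" on open-boundary bit strings, and we verify the plastic constant only as the real root of x³ = x + 1; the paper's four-term recurrence for the Plastic chain's ground-state counts is not reproduced here:

```python
from itertools import product

def count_no_11(n):
    """Count binary strings of length n with no two adjacent 1s
    (the golden-chain 'no SS' constraint in a 0/1 encoding)."""
    return sum(
        1 for s in product([0, 1], repeat=n)
        if all(not (a and b) for a, b in zip(s, s[1:]))
    )

# The counts satisfy the Fibonacci recurrence, so consecutive
# ratios converge to the golden ratio phi.
counts = [count_no_11(n) for n in range(1, 15)]
ratio = counts[-1] / counts[-2]
phi = (1 + 5 ** 0.5) / 2

# The plastic constant rho is the real root of x^3 = x + 1;
# a quick Newton iteration pins it down numerically.
rho = 1.3
for _ in range(50):
    rho -= (rho ** 3 - rho - 1) / (3 * rho ** 2 - 1)
```

The same brute-force counting strategy extends to any finite forbidden-word set, which is how small instances of the higher rungs can be validated classically before touching hardware.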

But the part that matters for this Field Note is what happens when we use these models as actual workloads on real hardware.


4. What Happened on the D-Wave Annealer

We implemented small instances of these Hamiltonians as higher-order binary optimization problems and mapped them to a D-Wave Advantage system via standard HOBO→QUBO reductions and embeddings.
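
The HOBO→QUBO step is where trouble typically starts. The standard Rosenberg quadratization replaces a product x1*x2 with an auxiliary bit y, enforced by the penalty P*(x1*x2 - 2*x1*y - 2*x2*y + 3*y), which vanishes exactly when y = x1*x2. A minimal plain-Python sketch (not the Ocean SDK; the cubic coefficient and P are illustrative), with a brute-force check that the reduced problem's minima project back onto the original ones:

```python
from itertools import product

def cubic_energy(x1, x2, x3):
    """Toy higher-order objective: a single cubic term."""
    return -2.0 * x1 * x2 * x3

def quadratized_energy(x1, x2, x3, y, P):
    """Rosenberg reduction: y stands in for x1*x2. The penalty is
    0 when y == x1*x2 and at least P otherwise."""
    penalty = P * (x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y)
    return -2.0 * y * x3 + penalty

def ground_states(energy_fn, nbits):
    states = list(product([0, 1], repeat=nbits))
    energies = [energy_fn(*s) for s in states]
    e_min = min(energies)
    return e_min, {s for s, e in zip(states, energies) if e == e_min}

e_cubic, gs_cubic = ground_states(cubic_energy, 3)

# With P large enough, the quadratized minimum matches, and
# projecting out y recovers the original ground states.
e_quad, gs_quad = ground_states(
    lambda a, b, c, y: quadratized_energy(a, b, c, y, P=4.0), 4
)
projected = {(a, b, c) for (a, b, c, y) in gs_quad}
```

If P is too small relative to competing terms, the hardware can lower energy by violating y = x1*x2; that is one way reduction artifacts end up dominating the physics, as we saw at the higher rungs.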

A simplified summary of what we saw:

  • Rung K=3 (Golden chain, quadratic penalties):
    • Trivial for the annealer: we achieved essentially 100% success, recovering all theoretically predicted ground states for the tested system sizes.
    • This is the regime where a “standard” mapping + standard forward annealing is perfectly adequate.
  • Rung K=4 (Plastic chain, cubic penalties):
    • The problem is already noticeably harder.
    • We clearly see a unit spectral gap consistent with our theoretical construction, but only a modest fraction of reads land in the true ground-state manifold.
    • We recover most, but not all, distinct ground states in the experiment.
  • Rungs K≥5 (higher-order constraints):
    • Here, the naive approach breaks down.
    • Forward annealing with off-the-shelf reductions and embeddings struggles badly: success probabilities drop toward zero for many embeddings, and the system is dominated by reduction artifacts and embedding variability rather than the “physics” of the target Hamiltonian.
    • However, once we switch to reverse annealing, seeded near good configurations, and tune reduction strengths and embeddings, success probabilities jump above 99% on the same logical instance.

The important point is not “D-Wave is good” or “D-Wave is bad.” It’s this:

The same logical Hamiltonian can look either impossible or easy depending on how you formulate and drive it on real hardware.

The difference is not a new theorem; it is experience:

  • recognizing that the initial mapping gave too much weight to reduction penalties,
  • noticing that certain embeddings are pathological,
  • knowing when to give up on pure forward annealing and treat reverse annealing as a local-refinement primitive.
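
That reverse-annealing move has a simple shape: start fully annealed at s = 1, ramp down to an intermediate s, hold there so the system can explore locally around the seed, then quench back to s = 1. The sketch below builds such a schedule in plain Python; the (time, s) pairs echo the shape D-Wave's anneal_schedule argument expects, but every numeric value is illustrative, not a setting from our runs:

```python
def reverse_anneal_schedule(s_target, hold, ramp, quench):
    """Build a reverse-annealing schedule as (time_us, s) pairs:
    start fully annealed (s=1), ramp down to s_target, hold there
    for local exploration, then quench back to s=1."""
    if not 0.0 < s_target < 1.0:
        raise ValueError("s_target must be strictly between 0 and 1")
    schedule = [
        (0.0, 1.0),
        (ramp, s_target),
        (ramp + hold, s_target),
        (ramp + hold + quench, 1.0),
    ]
    times = [t for t, _ in schedule]
    assert all(b > a for a, b in zip(times, times[1:])), "times must increase"
    return schedule

# Example: pause at s = 0.45 for 80 us between 5 us ramps.
sched = reverse_anneal_schedule(s_target=0.45, hold=80.0, ramp=5.0, quench=5.0)
```

In a real run the seed configuration is supplied separately (D-Wave exposes this via initial_state and reinitialize_state), and hold length and s_target are exactly the kinds of knobs that decided success or failure at K ≥ 5.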

This is exactly the kind of lesson that turns into a benchmark and, over time, a service.


5. Where Gauge Freedom Fits

Gauge Freedom is the holding company behind a set of DBA initiatives that sit in an unusual corner of the landscape:

  • We care about gauge theory, constrained Hamiltonians, and anyonic models as serious physics.
  • We also care about practical, auditable computation, including cryptographic receipts (Intelexta), reproducible benchmarks, and cost / energy tracking.
  • And we are comfortable mixing quantum hardware experiments with AI-assisted workflows—using large language models to help generate, refactor, and analyze quantum code, without pretending they magically “solve” quantum computing.

In the quantum space, we see three complementary roles for ourselves:

  1. Physics-rich benchmark design
    • Use models like the Fibonacci forbidden-word hierarchy, quasicrystal Hamiltonians, and anyonic codes as structured workloads.
    • Run them across platforms (annealing, gate-based, trapped-ion) to expose where each stack shines or struggles.
  2. Independent, multi-platform advisory
    • Help teams decide:
      • When a gate-based approach makes more sense than annealing (and vice versa).
      • How to encode their real optimization / simulation problems.
      • Which control knobs to treat as first-class citizens (schedules, embeddings, error-mitigation, reverse annealing, etc.).
    • Translate between “physics talk” and “business or domain talk.”
  3. Tooling and data products (future GaugeBench)
    • We’re gradually turning these experiments into reusable internal tooling and datasets to support our work with partners.

The new Fibonacci hierarchy paper is not a product pitch. It’s a proof of method:

  • Take a mathematically clean, physically meaningful family of models.
  • Implement them on real hardware.
  • Observe where naive expectations fail.
  • Extract the knobs (like reverse annealing and embeddings) that actually decide success.

That’s exactly the loop we want to keep running—for quantum computing, and for human–AI symbiosis more broadly.


6. Looking Ahead

Over the coming months, we plan to share more Field Notes on:

  • Gate-based vs annealing in practice: when each architecture is a good fit, and when it’s fighting the problem.
  • How anyonic models and constrained-Hamiltonian codes can serve as practical benchmarks, not just beautiful math.
  • How we’re using AI (including our own tools) to help with quantum experiment design, code generation, and reproducibility—without outsourcing judgment to a black box.

If you’re:

  • building or operating quantum hardware,
  • evaluating quantum technologies for your organization, or
  • working on problems where topology or constrained Hamiltonians matter,

we’re always open to conversations and collaborations.

For those who want the technical details, the full preprint is here:

A Hierarchy of Fibonacci Forbidden-Word Hamiltonians: From the Golden Chain to the Plastic Chain and Aperiodic Order
Marcelo M. Amaral (Gauge Freedom, Inc.)
arXiv:2511.10672 · DOI: 10.48550/arXiv.2511.10672

Further reading:

D-Wave quantum advantage announcement
IBM quantum roadmap
Quantinuum H2 system overview