Quantum Computing Skepticism, Part 2: My View and Responses to Skeptical Claims—Featuring John Preskill, Scott Aaronson, Dave Bacon, Aram Harrow, and Boaz Barak

In a previous post I presented the skeptical views toward quantum computing of Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, Liam McGuinness, Moshe Vardi, and a few others. As I mentioned there, I am quite interested in understanding the diverse perspectives of other people on this issue. Let me add that I am also very interested in connections with mathematics. (Majorana zero modes, discussed in my previous post, are an example of an area with rich mathematical connections.)

In this post, I briefly describe and give links to my own skeptical view. I move on to give descriptions of the debate and counterarguments by John Preskill, Aram Harrow, Scott Aaronson, Dave Bacon, Boaz Barak, and others. Finally, I add a few thoughts on the connections between my arguments and the physical skeptical arguments and include a roadmap to the debate from my perspective. (This roadmap was also presented in a separate post and is slightly expanded here.)

A quick update about Microsoft’s claims of creating topological qubits: Sergey Frolov and Vincent Mourik (who made major contributions to scrutinizing earlier claims in this area) will have a live videotaped discussion on February 28, 2025 on the topic “How close are we to a topological qubit?” Here is the Zoom registration link.

My own theory that quantum error-correction is not possible. 

From my perspective, the question of whether quantum computation is possible is among the most important open scientific questions of our time. I hold the view that building scalable quantum computers—and even achieving some early milestones on the path to quantum computing—is impossible. Given the optimistic consensus among many scientists, I believe the burden of proof lies with skeptics like myself. This necessitates continuous examination and refinement of our arguments and the exploration of new ways to articulate and develop them.

My argument against quantum computing is:

The empirical probability distributions produced by intermediate-scale quantum computers are combinations of noise-sensitive distributions, which are useless for computation, and computationally very primitive robust distributions. At larger scales this combination does not support robust quantum information, but it does support robust classical information. It cannot lead to quantum supremacy nor to good-quality quantum error-correcting codes.

Noise sensitivity, which plays a central role in my argument, is a mathematical theory developed by Benjamini, Schramm, and myself in the late 1990s. (It is arguably related to chaotic phenomena in physics.)
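To give a concrete (and admittedly toy) sense of the notion, here is a minimal Monte Carlo sketch written for this post: it estimates the correlation between f(x) and f(y), where y is obtained from a uniformly random x by flipping each bit independently with probability ε. For a noise-sensitive function such as parity the correlation essentially vanishes, while for a noise-stable function such as majority it stays bounded away from zero. The function names and parameters below are only for illustration.

```python
import random

def parity(x):            # a noise-sensitive function
    return sum(x) % 2

def majority(x):          # a noise-stable function
    return 1 if 2 * sum(x) > len(x) else 0

def flip(x, eps):
    """Noisy copy: flip each bit independently with probability eps."""
    return [b ^ (random.random() < eps) for b in x]

def noisy_correlation(f, n=101, eps=0.1, trials=20000):
    """Monte Carlo estimate of Cov(f(x), f(x_eps)) for a uniformly random x."""
    fx = fy = fxy = 0.0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = flip(x, eps)
        a, b = f(x), f(y)
        fx += a; fy += b; fxy += a * b
    fx /= trials; fy /= trials; fxy /= trials
    return fxy - fx * fy

print("parity  :", noisy_correlation(parity))    # close to 0: noise sensitive
print("majority:", noisy_correlation(majority))  # bounded away from 0: noise stable
```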

Here are some links.

(i) G. Kalai, The argument against quantum computers, in: Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence (M. Hemmo and O. Shenker, eds.), pp. 399–422, Springer, 2019.

(ii) G. Kalai, The quantum computer puzzle, Notices AMS, May 2016. (Expanded version.)

My papers after 2014 rely strongly on my study of noisy BosonSampling with Guy Kindler.

(iii) G. Kalai and G. Kindler, Gaussian Noise Sensitivity and BosonSampling, 2014.

The general arguments, various connections to mathematics and TOC, and a preliminary study of Google’s 2019 experiment can be found in this paper:

(iv) G. Kalai, The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims, in: The Intercontinental Academia Laws: Rigidity and Dynamics (M. J. Hannon and E. Z. Rabinovici, eds.), World Scientific, 2024. arXiv:2008.05188, 2020.

In the wider context of “mathematical computer science: theory and practice”:

(v) G. Kalai, Three puzzles on mathematics, computation, and games, Proc. ICM 2018;

(vi) A 2011 paper summarizing my work on correlated errors: How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation.

(vii) My first paper on the topic, from 2005: G. Kalai, Thoughts on Noise and Quantum Computation.

Blog posts: The Argument Against Quantum Computers – A Very Short Introduction (2020) ; Inaugural address at the Hungarian Academy of Science: The Quantum Computer – A Miracle or Mirage (2022). Movies: Why Quantum Computers Cannot Work: The Movie! (2014); Cartoons: If Quantum Computers are not Possible Why are Classical Computers Possible? (2017); Interview: Quanta Magazine. A recent blog post that led to interesting discussion: Seven Assertions about Quantum Computing.

Challenges to My Skeptical View

As I mentioned at the beginning of this post, I hold the view that building scalable quantum computers—and even achieving some of the early milestones in quantum computing—is impossible. These early milestones include certain claims of “quantum supremacy” that have already been made, but which, in my view, do not hold water. Another early milestone that I expect to be beyond reach is the creation of logical qubits and gates of very good quality.

The roadmap to this debate, outlined below, includes three major aspects of my research (items 4, 5, and 6):

  1. The claim that “above the fault-tolerance threshold,” systematic relationships will emerge between the desired quantum state and the noise.
  2. The claim that samples produced by NISQ  computers represent a low-level computational complexity class, LDP (as identified by Guy Kindler and me), and that such samples inherently exhibit noise-sensitive behavior.
  3. In addition, samples from NISQ computers will include a large noise sensitive component. This means that the empirical distribution is inherently non-stationary and unpredictable. 

These directions present intriguing mathematical challenges, and exploring their connections to various issues in quantum physics—including those raised by other quantum computing skeptics—would be valuable. Testing my perspective and specific conjectures against experimental progress remains an ongoing challenge. Some of the specific predictions I have made can already be tested with current experiments.

Furthermore, critically examining current experimental work is itself a worthwhile endeavor. This scrutiny touches on significant technical and conceptual statistical problems and intersects with the broader and crucial issue of the “replication crisis” in various scientific fields.

Two frequently asked questions

Q1: What is the computational complexity class LDP?

A1: LDP is the class of probability distributions P on 0-1 vectors (x_1,x_2,\dots, x_n) of length n that can be approximated by polynomials of bounded degree D. This class is a subclass of the probability distributions that can be approximated by {\bf AC}^0 circuits (bounded-depth circuits), and it is a very tiny subclass of the class BPP of probability distributions that can be approximated by classical computation.
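As a toy illustration (my own sketch, not code from any of the papers above) of what “bounded degree D” means in the Fourier–Walsh basis: a distribution is, roughly speaking, in LDP when it is close to its degree-D truncation. In the example below the target distribution is a slightly biased “parity signal” over 4 bits; truncating at degree 2 wipes the signal out, leaving the uniform distribution. All names and parameters are illustrative.

```python
from itertools import combinations, product
import numpy as np

n, D = 4, 2
points = list(product([0, 1], repeat=n))
N = len(points)

def chi(S, x):
    """Fourier-Walsh character: the product of (-1)^{x_i} over i in S."""
    return np.prod([(-1) ** x[i] for i in S])

# Toy target: almost uniform, slightly biased toward even-parity strings
# (a "parity signal", which has full Fourier degree n).
p = np.array([1.0 + 0.3 * (-1) ** sum(x) for x in points])
p /= p.sum()

# Degree-D truncation: keep the Fourier coefficients \hat p(S) only for |S| <= D.
q = np.zeros(N)
for d in range(D + 1):
    for S in combinations(range(n), d):
        p_hat = sum(p[j] * chi(S, x) for j, x in enumerate(points)) / N
        q += p_hat * np.array([chi(S, x) for x in points])

# The truncation always sums to 1, although in general it may dip below 0 at some points.
print("total variation distance from p:", 0.5 * np.abs(p - q).sum())   # about 0.15 here
```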

Q2: If indeed the empirical distribution is inherently non-stationary and unpredictable, couldn’t it still represent quantum supremacy? It looks as if noise makes simulating it on a classical computer even harder!

A2: Consider, for concreteness, samples of bitstrings, namely 0-1 vectors of length n. Non-stationary and unpredictable behavior of the samples means that the individual bitstrings in a sample are functions of many unknown parameters of the noise. If every bitstring depends, among other things, on the temporal parameters of the noise, then the complexity of the sample—measured in terms of input size—fails to demonstrate any computational advantage.

This commonly asked question illustrates how, once someone gets into the mindset of believing in miraculous, naturally occurring quantum computational supremacy that simply needs to be “harnessed,” it becomes difficult to abandon this perspective.

_______________

I move now to present arguments by proponents of quantum computers about the debate. John Preskill is a quantum computing proponent who has offered thoughtful and balanced descriptions of the “pros and cons” in some of his works.

Papers by John Preskill

Following are links to some very useful papers by John Preskill. Preskill’s homepage contains a lot of interesting lectures, links, and other material, and so does the group blog “Quantum Frontiers“.

p.1 Quantum computers pro and con (1998). This paper passes the “test of time” with flying colors, and it includes a lot of interesting issues and ideas. The paper was, in part, a response to the 1996 paper of S. Haroche and J. M. Raimond, Quantum computing: dream or nightmare? (It appears on pages 341-344 of the following full proceedings of an interesting 1996 conference.) One theme of the paper is the question regarding the abilities of quantum computers with “a few tens of qubits” (NISQ in modern language).

Among a lot of other interesting assertions, Preskill proposed the following statement: “The performance of devices with just a few tens of qubits cannot be easily simulated or predicted.”

One way to interpret this statement (in modern terms later coined by Preskill himself) is that “NISQ computers can lead to quantum computational supremacy”—an idea that has been widely promoted and studied since around 2010. (This is likely what John had in mind.) Another interpretation of the very same statement is that NISQ computers produce inherently unpredictable empirical distributions, which is a key ingredient of my theory. 

p.2 Quantum computing and the entanglement frontier  (2012).  This is a very interesting paper where (among other things) the term “quantum supremacy” was introduced. 

p.3 Sufficient condition on noise correlations for scalable quantum computing (2013). This paper responds to concerns by Robert Alicki and myself regarding inherently correlated errors.

p.4 Quantum computing in the NISQ era and beyond (2018). In this hugely cited paper the term “NISQ” (for “noisy intermediate-scale quantum”) was coined. Let me mention that a two-scale analysis of the computational power of quantum circuits is also a crucial ingredient in my 2016 paper on quantum computation (and subsequent papers).

Seeing John Preskill at the Google “ten septillions over classical” announcement video reminded me of the words of the Hebrew song “לאן התגלגלת יא דודו” (Where did you get yourself into, ya Dudu) 🙂 .

Next we give some details and links on my debate with Aram Harrow, on Dave Bacon’s early writings, on Scott Aaronson’s views, and on a 2017 blogpost by Boaz Barak. 

The 2012 Kalai-Harrow debate and other Internet Discussions

My 2012 debate with Aram Harrow was kindly hosted by Ken Regan and Dick Lipton on GLL (Gödel’s Lost Letter and P=NP).

Here are the links to the eight posts: My initial post “Perpetual Motion of The 21st Century?” was followed by three posts by Aram: the first, “Flying Machines of the 21st Century?”, the second, “Nature does not conspire”, and the third, “The Quantum super-PAC”. We then had two additional posts, “Quantum refutations and reproofs” and “Can you hear the shape of a quantum computer?”. Finally, we had two concluding posts: “Quantum repetition” and “Quantum supremacy or classical control?” Some general issues raised in the discussion are listed in this comment and these follow-ups (1,2,3,4,5,6).

Cris Moore wrote in the very first comment of the debate: 

It’s reasonable to take as a physical postulate (…) that NP-complete problems can’t be solved by a physical device in polynomial time (…). But I don’t see why the factoring problem deserves this status.

Sampling according to the values of permanents, a task outside the polynomial hierarchy, should raise eyebrows even among those who are not impressed by factoring! (And going outside PH can already be achieved by bounded-depth quantum circuits.)

There were additional discussions on GLL, Shtetl Optimized (SO), my blog, and other blogs. Here are a few selected posts with interesting discussions: A Few Slides and a Few Comments From My MIT Lecture on Quantum Computers (2013); Why Quantum Computers Cannot Work: The Movie! (2014); Quantum Supremacy and Complexity (2016). I wrote several posts summarizing my debate with Aram, where I tried to present the different participants, different points of view, and amusing moments: I, II, III.

 

Scott Aaronson’s Writings and Views

In 2004 Scott wrote an interesting paper, Multilinear formulas and skepticism of quantum computing. The paper calls on skeptics to offer “Sure/Shor” separators, namely quantum states that are possible with quantum computers but beyond reach according to skeptical views.

Since the end of 2005, Scott’s blog “Shtetl Optimized” (SO) has included many interesting discussions on matters related to quantum computation. Scott’s lecture about skepticism of quantum computing (which later turned into a chapter of his book Quantum Computing Since Democritus) is discussed in this 2008 SO post. Another interesting post and discussion thread from 2006 is Reasons to believe II: quantum edition. There, Scott started out by repeating (“for the ten billionth time”) the Official Scott Aaronson Quantum Computing Position Statement:

  • It’s entirely conceivable that quantum computing will turn out to be impossible for a fundamental reason.
  • This would be much more interesting than if it’s possible, since it would overturn our most basic ideas about the physical world.
  • The only real way to find out is to try to build a quantum computer.
  • Such an effort seems to me at least as scientifically important as (say) the search for supersymmetry or the Higgs boson.
  • I have no idea — none — how far it will get in my lifetime.

Since around 2010 Scott raised and strongly promoted the idea that NISQ computers (specifically boson-samplers) can demonstrate quantum computational supremacy. The argument for why this is impossible has been central to my own research since 2013. For people who can digest the idea of efficient quantum factoring, BosonSampling provides something even harder to digest: precise sampling according to values of permanents – a task outside of the polynomial hierarchy! (This is item 9 in our roadmap.)
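To make the permanent connection concrete, here is a small self-contained sketch (my own illustration, not tied to any particular BosonSampling paper): Ryser’s formula computes a permanent exactly, but in time exponential in the matrix size, and in ideal BosonSampling the outcome probabilities are proportional to squared absolute values of permanents of submatrices of a unitary. The matrix sizes and the random unitary below are purely illustrative.

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Ryser's formula: the exact permanent of an n x n matrix in O(2^n * n^2) time."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, list(cols)].sum(axis=1)   # each row summed over the chosen columns
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# In ideal BosonSampling the outcome probabilities are proportional to |perm(A)|^2
# for submatrices A of a unitary U; even evaluating one probability this way is exponential.
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))[0]
print(abs(permanent(U[:5, :5])) ** 2)
```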

In recent years, Scott has expressed more and more confidence that the skeptical view has been refuted. See, e.g., his post: My “Quantum Supremacy: Skeptics Were Wrong” 2020 World Speaking Tour. Scott also argues that “20 years of covering hyped, dishonestly-presented non-milestones in quantum computing” gives more credibility to his support of what he regards as “real milestones.”

Scott sometimes expresses his opinions in an emotional or unrestrained manner. His writings and lectures sometimes include exaggerations or inaccuracies, often to emphasize his perspective or for comedic effect. While these features attract interest and attention, they can pose difficulties in technical discussions.

 

Boaz Barak’s 2017 Post

Boaz Barak wrote in 2017 an interesting post, The different forms of quantum computing skepticism. Boaz’s post is centered around a description of four scenarios (or possible “worlds,” in the style of Russell Impagliazzo) about the matter. Boaz’s worlds are called Superiorita (quantum computers are realistic and computationally superior), Popscitopia (quantum computers are realistic and can solve NP-complete problems), Skepticland (quantum computers are not realistic), and Classicatopia (quantum algorithms can be efficiently simulated on classical computers). The comment thread also contains a distinction between quantum computing believers who regard quantum fault-tolerance as the basic reason for the ability to overcome noise, and those believers (“fundamentalists”) who regard quantum fault-tolerance as a “red herring” and think that, in principle, the noise could be reduced to the extent that we could run Shor’s algorithm on quantum computers with no need for quantum fault-tolerance. Boaz concluded his post by stating where he stands, noting that a decade earlier he had held the opposite stance. In his 2017 post Boaz regarded quantum computing as the beautiful choice. In addition, he believed that noise reflects only engineering difficulties that are not fundamental barriers and that “with sufficient hard work and resources the noise can be driven down to as close to zero as needed”. I reblogged Boaz’s post and added some commentary here.

Let me also mention that Or Sattath and Uriel Shinar wrote a paper relating QKD (quantum key distribution schemes) and quantum computing skepticism. They argued that the inability to maintain quantum memory (referred to by Or and Uriel as “Quamnesia”) could be a blessing for certain cryptographic tasks!

Boaz himself wrote and hosted several interesting posts about computation and physics; let me mention especially his enlightening post on black holes.

Dave Bacon’s old posts over The Quantum Pontiff

Early on, there were interesting discussions on skepticism of quantum computers on Dave Bacon’s blog “The Quantum Pontiff”.  I will only mention a few posts.

b.1 Here is a 2006 post titled  What Are the Assumptions of Classical Fault-Tolerance?

Dave concluded his post with the following suggestion:

I just want to suggest that the more we learn about the theory of fault-tolerant quantum computation, the more we recognize its similarity to probablistic classical computation. I call this general view of the world the “principal of the ubiquitous factor of two in quantum computing.” The idea being mostly that quantum theory differs from probablistic theory only in the necessity to deal with phase as well as amplitude errors, or more generally, to deal with going from probabilities to amplitudes.

b.2 My participation

I believe this was the first time I participated in a blog discussion, and I presented a couple of early conjectures and intuitions in the comment section of Dave’s blog. Much of the discussion was about Dave’s question of why classical computation is possible. In one comment I raised the intuition that the noise for complex states inherently carries similar complexity, and in the following comment I offered a mathematical formulation. (I tried to put this intuition on rigorous mathematical grounds in the following years.)

b.3 The paper of Alicki, Lidar, and Zanardi.

Earlier, an interesting post from 2005, Thermodynamics is Tricky Business, contains Bacon’s response to the paper of Alicki, Lidar, and Zanardi. (The post appeared on the same day the paper was posted on the arXiv!)
Bacon summarized the main argument of Alicki, Lidar, and Zanardi as follows:
The main argument put forth in this paper is that if we want to have a quantum computer with Markovian noise as is assumed in many (but not all) of the threshold calculations, then this leads to the following contradictions. If the system has fast gates, as required by the theorem, then the bath must be hot and this contradicts the condition that we need cold ancillas. If, on the other hand, the bath is cold, then the gates can be shown to be necessarily slow in order to get a valid Markovian approximation.
 
There is a very interesting discussion thread featuring Alicki, Lidar, Gottesman, and others. 
 
b.4 The paper by Alicki and Michał Horodecki.
 
In another 2006 post, Bacon discussed the paper “Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems,” by Robert Alicki and Michał Horodecki. Especially interesting was a comment by John Preskill offering Kitaev’s (hypothetical) four-dimensional toric code as a counterexample. A subsequent paper by Alicki with Michał Horodecki, Paweł Horodecki, and Ryszard Horodecki studied Kitaev’s 4D model. My current perspective is that neither the 2D toric code nor the 4D version is feasible for a quantum computer or for direct physical implementation, since they (likely) go beyond the LDP computational complexity class.
 
Here is a picture of Michał Horodecki and me. Debates and disagreements are part of what makes academic life so rewarding!
 

On the Scope of Certain Physical Laws in a World Without Quantum Fault Tolerance

Here are a few thoughts on how my arguments, introduced at the beginning of this post, connect with those of Robert Alicki and other skeptical physical arguments. As I said before, I do not expect a mathematical theorem proving the impossibility of quantum computers, nor do I foresee a derivation of their impossibility from an agreed-upon physical principle. However, in a world devoid of quantum fault tolerance—or even in restricted cases where quantum fault tolerance does not occur—the scope of certain physical laws may extend beyond what is strictly dictated by their mathematical derivations.

Robert Alicki, in his first critical paper on quantum computers, relied on the time-energy uncertainty principle (TEUP). This principle was rigorously established by (Yakir) Aharonov, Massar, and Popescu for completely unknown Hamiltonians. However, (Dorit) Aharonov and Atia regarded the TEUP as a misconception, since strong violations follow from Shor’s algorithm—though these violations depend on the existence of quantum fault tolerance. In a world without quantum fault tolerance, the TEUP may hold in greater generality.

Another tension arises from the reversibility of noiseless quantum evolution, which allows complex quantum processes to be run both forward and backward in time. This seems, in itself, to conflict with thermodynamics. Alicki, along with several coauthors, has provided detailed thermodynamic arguments suggesting inherent limitations on quantum computation—arguments that effectively “kill” quantum information while preserving only robust classical information. These arguments, however, did not convince the broader physics community, as there were notable counterarguments, which Alicki himself addressed in his papers (and Dave Bacon in his blog). If an argument emerges demonstrating that quantum fault tolerance is fundamentally impossible—one that does not rely on thermodynamics—it could lend further justification to Alicki’s thermodynamic claims.

A third example of a similar nature is the no-cloning theorem. Mathematically, it is well established (and not difficult) that one cannot clone a completely unknown quantum state. However, for a known quantum state described by a quantum circuit, it seems possible to run the circuit multiple times to generate as many copies as needed—or is it? For complex quantum states, achieving this requires quantum fault tolerance. In a world without quantum fault tolerance, a stronger version of the no-cloning theorem may hold. (See the fourth prediction in this post.)

Finally, one might ask whether a world devoid of quantum fault tolerance also precludes the existence of stable non-Abelian anyons. While such a world would certainly rule out very stable topological qubits—and, in fact, any kind of highly stable qubit—the implications for non-Abelian anyons remain an open question. I raised this as a question in an earlier post, along with an idea for how to approach it.

Roadmap for the Debate on Quantum Computing (slightly expanded)

Here is a roadmap to the debate about quantum computers from my perspective. Skeptical claims are in blue, responses to these claims are in red. Points 1-10 represent skeptical claims made over the years, while points a-s are replies to these skeptical claims. Points (i)-(xii) are replies to some of these replies. Underlined claims were made by me. Green boldface indicates points of agreement. (We published a somewhat shorter version also as a separate post.)

  1. The understanding of noise is a crucial ingredient of any model of computation. Noise may cause quantum computing to fail.

    • a) Noise is not inherent. There is no inherent “noise” in quantum mechanics.

      • (i) A realistic computational model must include also the modeling of errors.

    • b) Quantum error correction and quantum fault tolerance have the potential to deal successfully with noise and various forms of inaccuracies.

    • c) Topological quantum computing gives an alternative avenue for quantum computing that does not depend on the circuit model.
  2. General forms of quantum noise (in particular, correlated errors) will fail quantum error correction.

    • d) These types of general noise are not physical.

    • e) These types of general noise would cause classical computation to fail as well.

    • f) For arbitrary forms of noise, if the error rate is sufficiently small, log-depth quantum computing survives.

  3. Specific physical noise models differ from the ideal model.

    • g) We will be able to extend the threshold theorem to these models as well.

      • (ii) All those extensions still assume common properties that might be unrealistic.

  4. There will be a systematic relationship between the “signal” and the noise. For example, the noise for a pair of entangled qubits will be highly correlated. (This argument extends to attempts to build quantum gadgets needed for topological quantum computing.)

    • h) This cannot be: how would nature know what the intended state is? It is like saying, “wet roads cause rain to pour.”

      • (iii) This systematic relation can be a general rule if you consider special-purpose quantum computational devices.

      • (iv) This systematic relation holds for NISQ (noisy intermediate scale quantum) systems and more generally for systems that do not apply quantum fault tolerance. This systematic relation describes quantum physics “above the quantum fault tolerance threshold.”

    • i) Such a principle would harm classical computation as well, unless you believe that by a miracle the threshold for fault tolerance lies between what is required for classical and quantum computation.

      • (v) The argument about the miraculous location of the threshold is incorrect because classical computers are not universal quantum computers.

  5. NISQ computers represent a primitive classical computational power (LDP), which implies an inherent reason they cannot achieve quantum computational supremacy. This suggests there is an absolute lower bound on noise rates that prevents effective quantum error correction. 

    • j) It is incorrect to apply asymptotic insights (on the number of qubits) to systems with only tens or a few hundred qubits.

      • (vi) We commonly apply asymptotic insights to small systems.

    • k) But why are classical information and computation possible?

      • (vii) The computational class LDP still supports classical information.

    • l) The Google 2019 experiment refutes this argument.

      • (Agreed upon) Google’s 2019 supremacy claims appear to be incorrect.

    • m) Some aspects of the Google 2019 experiment and its replications are promising and support basic assumptions about quantum noise.

      • (viii) These aspects of Google 2019 appear to be incorrect as well.

    • n) (New) The 2019 Google experiment was long ago superseded. Several recent experiments, including Google’s “septillion years advantage over classical” and other results from IBM, Quantinuum, QuEra, and USTC, are more robust.

      • (ix) We need to scrutinize these assertions as well. Gradual experimental progress is expected to hit a barrier, and fantastical experimental claims are expected to be falsified.

  6. Samples from NISQ computers will include a large noise sensitive component. This means that the empirical distribution is inherently non-stationary and unpredictable. 

    • o)  The samples may still express quantum computational supremacy. It is just that we cannot harness or use this supremacy.

      • (x) Each bitstring in the sample (and even each bit) is a function of many temporal noise parameters; there is no reason to think that this function represents unusual computational power, and there is no computational supremacy hiding in this picture.

  7. Building quantum computers is a ridiculously hard engineering task.

    • p) It is very hard but not ridiculously hard; progress is being made, and a long-term approach is necessary.

      • (xi) Even achieving near-term tasks is inherently impossible.

      • (Agreed upon) We may have an opportunity to test this matter in the next few years.

  8. (Simplified) Quantum computations conflict with important principles and laws from physics and computing theory:

    • 8.1 The time-energy uncertainty principle.

    • 8.2 The effective Church-Turing thesis.

    • 8.3 Several principles of thermodynamics.

    • 8.4 Bohr’s correspondence principle.

    • q) (Simplified) These physical and computational rules are not universally correct, which is part of what makes quantum computation so remarkable as it would enable novel physical and computational behaviors.

  9. Quantum computers will allow sampling according to the exact values of permanents. Achieving this is implausible given that computing permanents is #P-complete.
    • r) Yes!! This is the computational trademark twist of Nature.

  10. Quantum computers will (likely) enable us to compute efficiently the nth digit of the fine-structure constant for values of n so large that it has no physical meaning, as easily as we compute the digits of e and π with today’s digital computers.
    • s) Yeah, why not?

Summary

Proponents of Quantum Computers: BQP is the computational complexity class representing our quantum physical world. BQP characterizes the complexity of quantum processes on a microscopic scale, as evidenced by the challenges of simulating quantum physics on classical computers. Building quantum computers is primarily an engineering task, and pursuing this endeavor will drive exciting scientific progress across various fields. Many promising avenues exist, ranging from different implementations of quantum circuits to non-abelian anyons and topological quantum computing. Numerous impressive experimental results have already been achieved.

Skeptics of Quantum Computers: BPP is the computational class representing computation in the physical world. Classical information and computation are sustained in our noisy quantum world through the “averaging” of noisy quantum information. Quantum computational supremacy and high-quality quantum error correction are inherently impossible, and there is no compelling evidence for them in natural quantum processes. Advances in experimental quantum computing should be scrutinized carefully. Gradual experimental progress is expected to hit a barrier, and fantastical experimental claims are unlikely to withstand rigorous examination.

A “meta claim” of proponents is that there is no symmetry between the two sides of the debate: the prevalent view shared by many experts is that quantum computation is possible.


Majorana Zero Modes and Topological Qubits

This post contains the first item, devoted to Majorana zero modes, from an ambitious planned post on some quantum physics mysteries, related to quantum information and computation. (Some items are also related to noise sensitivity and associated Fourier analysis.)  

Majorana zero modes are fascinating quantum objects in their own right whose creation would be an important first step towards topological quantum computers.

I decided to post this item separately because a few days ago, Microsoft announced the successful creation of both Majorana zero modes and topological qubits. (See this post over at “Shtetl Optimized”, and also here.) This announcement (like earlier announcements in the area) is controversial, and the paper should be carefully scrutinized.

Majorana zero modes and topological quantum computing


On the left you can see David DiVincenzo’s famous seven-step roadmap to quantum computers (the picture is taken from a paper by Devoret and Schoelkopf). On the right is an analogous roadmap for topological quantum computing. My argument about quantum computation would exclude reaching high-quality topological qubits, and I am curious whether my theory allows the creation of Majorana zero modes.

Question 1: Is it possible to create convincing demonstrations of Majorana zero modes (MZM)? 

Majorana zero modes are quantum states whose creation is considered a crucial first step towards topological quantum computing. 

A few more details on the physics: a Majorana fermion (proposed by Majorana in 1937) is a fermion that is its own antiparticle. Majorana fermions have not been decisively observed in high-energy physics, but it is commonly believed that they describe certain neutrinos. Condensed-matter analogs of Majorana fermions were constructed, and they are represented by Bogoliubov quasiparticles. Majorana zero modes are Majorana fermions with an additional property, and they have not (convincingly) been constructed experimentally so far. See the 2015 review paper Majorana zero modes and topological quantum computation by S. Das Sarma, M. Freedman, and C. Nayak. The area of topological quantum computing was pioneered by Kitaev in the late 1990s. Here is a classic 2003 survey article, Topological quantum computation, by Freedman, Kitaev, Larsen, and Wang.

Where do we stand now? Since 2012, there have been a dozen or so experiments claiming to achieve MZM. (And quite a few other experiments claiming nonabelian anyons and related phenomena.) Proponents regard these experiments as good evidence but admit that other explanations not involving MZM are still possible. Opponents claim that in all the experiments it is much more likely that some mundane “quantum dots” were observed. In addition, opponents have raised concerns regarding the quality and reliability of MZM experiments, and at least one major MZM paper had to be retracted. Proponents disagree with some of these concerns and find them harmful to the field. Researchers from both sides believe that MZM can eventually be experimentally created. We mentioned this debate here; see also this SO post (and discussion), “Sayonara Majorana”.

There are some attempts to demonstrate MZM on NISQ computers; see papers by Stenger et al., Rančić, and Mi et al. (Even if successful, these attempts will only show some aspects of MZMs and not others.) Representing MZM on NISQ computers opens the door to a “no go” approach based on the complexity class LDP of Guy Kindler and me.

Question 2: Is it the case that samples from states demonstrating MZM on a NISQ computer go (for large numbers of qubits) beyond the Kalai-Kindler LDP complexity class?

A positive answer to Question 2 would give some support to the assertion that MZMs are inherently out of reach (namely, to a negative answer to Question 1).

The computational class LDP

LDP is the class of probability distributions P on 0-1 vectors (x_1,x_2,\dots, x_n) of length n that can be approximated by polynomials of bounded degree D. This class is a small subclass of the probability distributions that can be approximated by {\bf AC}^0 circuits (bounded-depth circuits), which is itself a very tiny subclass of the class BPP of probability distributions that can be approximated by classical computation. The class LDP arose as a complexity class for sampling problems, but we can adopt the same definitions for real functions (or Boolean functions) of n 0-1 variables.

Functions in LDP are efficiently learnable. (This is in contrast with functions in the much larger class {\bf AC}^0.)
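Here is a minimal sketch of why bounded-degree functions are efficiently learnable, in the spirit of the “low-degree algorithm” of Linial, Mansour, and Nisan. This is my own toy illustration, and the target function, sample sizes, and parameters are arbitrary choices: the algorithm estimates every Fourier coefficient of degree at most D from random labeled examples and uses the sign of the resulting low-degree expansion as its hypothesis.

```python
import itertools, random
from math import prod

n, D, m = 10, 2, 5000

def target(x):
    """The unknown +/-1 function to be learned; all its Fourier weight is at degree <= 2."""
    return x[0] * x[3]

train = []
for _ in range(m):
    x = [random.choice([-1, 1]) for _ in range(n)]
    train.append((x, target(x)))

# Empirical Fourier coefficients \hat f(S) = E[ f(x) * prod_{i in S} x_i ] for all |S| <= D.
f_hat = {}
for d in range(D + 1):
    for S in itertools.combinations(range(n), d):
        f_hat[S] = sum(y * prod(x[i] for i in S) for x, y in train) / m

def hypothesis(x):
    """Sign of the estimated degree-<=D expansion."""
    value = sum(c * prod(x[i] for i in S) for S, c in f_hat.items())
    return 1 if value >= 0 else -1

test = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(1000)]
accuracy = sum(hypothesis(x) == target(x) for x in test) / len(test)
print("agreement with the unknown target:", accuracy)   # close to 1
```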

Additional details, comments, links and references 

The very recent news from Microsoft

1) About the very recent Microsoft announcement. It is based on a paper that is just being published by “Nature” and that was released as an arXiv preprint a year ago. Apparently, it is also based on further experimental work claimed by the Microsoft team in the past year. A curious aspect of the paper is that the editors and referees do not endorse the authors’ claim that Majorana zero modes were demonstrated, but see other qualities in the physical devices described in the paper that merit publication. (Achieving a topological qubit is a further difficult step beyond MZMs.) Update: There is an interesting discussion in the comment thread over at SO and in other places where considerable concerns are raised regarding Microsoft’s claims. (See also the comment thread below for further links.) Another update: Sergey Frolov and Vincent Mourik will have a live videotaped discussion on February 28, 2025 on the topic “How close are we to a topological qubit?” Here is the Zoom registration link.

About the theory

2) There have been several ideas about implementing topological quantum computing since the early 2000s. It is related to “nonabelian anyons,” remarkable (still hypothetical) states of matter predicted by physicists as early as the 1980s. The 1991 paper by Greg Moore and Nick Read, “Nonabelions in the fractional quantum hall effect,” plays a special role. See this Wikipedia article about “anyons“. Nonabelian anyons are related to fascinating mathematics.
 
3) “Kitaev’s wires” were especially simple (they are sometimes referred to as “poor man’s nonabelian anyons”), but there were good physics reasons why they are not realistic. (Kitaev’s surface codes are abelian anyons.)
 
4) In 2010 two groups of researchers offered a physical way to implement “Majorana zero modes.” One group was Y. Oreg, G. Refael, and F. von Oppen: Helical liquids and Majorana bound states in quantum wires, and the other was R. M. Lutchyn, J. D. Sau, and S. Das Sarma: Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures.

Experimental progress 

5) In 2012 the first experimental Majorana zero modes were reported in the paper Signatures of Majorana fermions in hybrid superconductor-semiconductor nanowire devices by V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven.
 
6) A year later, the paper Spin-resolved Andreev levels and parity crossings in hybrid superconductor–semiconductor nanostructures by Eduardo J. H. Lee, Xiaocheng Jiang, Manuel Houzet, Ramon Aguado, Charles M. Lieber, and Silvano De Franceschi offered a much more mundane explanation of the states reported in item 5 as “Andreev particles”.
 

More experimental claims and controversy

7) Since then there have been a dozen or so experimental claims about achieving Majorana zero modes. Proponents regard these experiments as good evidence but admit that other explanations not involving MZM are still possible. Opponents claim that in all the experiments it is much more likely that some mundane quantum states were observed. 
 
8) In addition, opponents raised concerns regarding the quality and reliability of MZM experiments and at least one major  MZM paper had to be retracted.  Proponents disagree with some of these concerns and find them harmful to the field.  (See also this article in Quanta Magazine.) 
 

A few more comments

9) The attempts to simulate MZMs on quantum circuits, if successful, will only demonstrate (with poor fidelity) some aspects of physically constructed MZMs. In the other direction, showing that “MZM states” constructed by quantum circuits are not in LDP would provide an argument against the possibility of physically constructing MZMs.
 
10) Some researchers thought that topological quantum computing is the only viable avenue to quantum computing, since achieving the error rate needed for quantum circuits is infeasible. Mike Freedman gave a lecture at Microsoft, “Quantum computing: the minority report,” with this message, and this sentiment is expressed in the abstract of the 2003 survey article by Freedman et al. mentioned above.
 

Late and early conversations

 
11) In recent years, I had useful conversations on the subject with Adi Stern and with Sergey Frolov. I am thankful to them both.

12) In my very early days of skepticism (2005), Nick Read challenged me to extend my skeptical study to topological quantum computing. (John Preskill made a similar challenge in an email correspondence a year later; see below.) BTW, here is a nice 2012 article, Topological phases and quasiparticle braiding, by Nick. My general argument about quantum computers also excludes the possibility of good-quality topological qubits, and Question 2 above is related to the feasibility of MZMs.

13) Even earlier, in the late 1990s, before I got personally interested in quantum computing, I had some nice conversations (sometimes over lunch) about topological quantum computing, physics, and knot invariants, at Microsoft (Mike Freedman, Alexei Kitaev) and in Jerusalem (Dorit Aharonov, Dror Bar-Natan, and Michael Larsen).
 

My lecture at Microsoft: Good news and bad news

15) In 2018 I was invited to give a lecture at Microsoft about quantum computers. I concluded my abstract with the sentence:
 
“So the bad news is that Microsoft will fail in its quantum computer endeavor and the good news is that Google and IBM will fail as well.”
 
The Microsoft people felt uncomfortable about this sentence and asked me to consider modifying the abstract, to which I gladly agreed, writing instead:
 
“The good news is that understanding the failure of quantum computers promises interesting insights and new connections for quantum physics.” 
 

Three recent posts on quantum computing

  1. Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, and Others – A Summary of Some Skeptical Views On Quantum Computing.
  2. Roadmap for the Debate about Quantum Computers
  3. Seven Assertions about Quantum Computing.

From a 2005 email of Nick Read

 (March 20, 2005) …I’m a bit interested in another form of QC. so-called topological QC … Some ground states I invented for quantum Hall effect systems would be quite suitable for this, if they exist in nature.

In this approach, decoherence should be less of an issue because information is stored non-locally (indeed “topologically”) and is supposed not to be susceptible to noise. You will need to confront this form of QC if you really intend to prove “pessimistic” general results!

From a 2006 email of John Preskill: 

 (April 23, 2006) “The core of the idea of quantum fault tolerance is that the logical information processed by a quantum computer can be stored in a form that is inaccessible to local observers and therefore robust against local noise. A particularly vivid realization of this idea is the basis of topological quantum computing: for n widely separated anyons in a nonabelian topologically ordered two dimensional medium, there is an exponentially large Hilbert space describing the possible ways for the anyons to be “fused” together. All of the states in this Hilbert space look identical to any observer who can inspect the anyons only one at a time. Furthermore, the information can be processed if the anyon world lines execute a braid in spacetime, even though the anyons never come close to one another.

It might be interesting to think about what plausible noise model might prevent a highly entangled quantum state from being created in a topological quantum computer. Nonabelian anyons have not yet been seen experimentally, but probably will be within the next couple of years (in experiments with fractional quantum Hall systems). It might then be possible to test your hypotheses concerning nonlocal noise in such systems in the next ten years or so.”

A question to my readers: I joined Gmail in 2002, and just noticed that emails from before 2005 are no longer kept in the system. Do you know anything about it?  


Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, and Others – A Summary of Some Skeptical Views On Quantum Computing.

In this post, I provide links, references, and a brief discussion of the skeptical views regarding quantum computing held by Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, and a few others. In the next post I will briefly describe my own skeptical view and move on to give counterarguments by proponents of quantum computing. This pair of posts is a spinoff of a planned post that deals with a variety of quantum mysteries, many of which intersect with quantum computation and quantum information.

The debate on quantum computing

The question of whether quantum computation is possible is among the most fascinating open scientific questions of our time. I hold the view that building scalable quantum computers—and even achieving some early milestones on the path to quantum computing—is impossible. Skeptical perspectives on quantum computing have existed since the inception of the field. For instance, in the early 1990s, concerns raised by Rolf Landauer, William Unruh, and others helped motivate the development of quantum error correction and the “threshold theorem,” and in 1996, shortly after the threshold theorem was proved, Serge Haroche and Jean-Michel Raimond wrote a paper, “Quantum computing: dream or nightmare?”, that raised concerns about the feasibility of quantum error correction and fault-tolerant quantum computation.

Personally, I am very interested not only in uncovering the “ground truth” about our physical world but also in understanding the diverse perspectives on this issue. Over the years, I have had engaging discussions with numerous colleagues. In this post, I primarily give the floor to my fellow skeptics. Without further ado, let us begin with Robert Alicki.

Robert Alicki’s papers and viewpoint

The earliest paper by Alicki that I am aware of that is skeptical towards quantum computing is from 2000.
 

a) Alicki’s papers from 2000 and the connection to the Heisenberg Energy-Time uncertainty

R. Alicki, “Can quantum computer perform better than classical?” In this paper Alicki argued (among other things) that the Heisenberg energy-time uncertainty may put some limits on quantum computation. There is a subsequent paper, “On Non Efficiency of Quantum Computer,” which further developed this connection.

It is interesting to note that the 2016 paper “Fast-forwarding of Hamiltonians and Exponentially Precise Measurements” by Yosi Atia and Dorit Aharonov presents a very strong violation of the energy-time uncertainty relation (referred to by Yosi and Dorit as a “misconception”) based on Shor’s factorization algorithm. 

 

b) Critique of the mathematical assumptions of the “threshold theorem”

There are several papers where Alicki analyzed several models of decoherence and criticized the mathematical assumptions of the threshold theorem as physically unrealistic. In a 2004 paper, “Quantum error correction fails for Hamiltonian models,” Robert argued that “the existing schemes of fault-tolerant quantum computation designed for discrete-time models and based on quantum error correction fail for continuous-time Hamiltonian models even with Markovian noise.” The 2005 paper Internal Consistency of Fault-Tolerant Quantum Error Correction in Light of Rigorous Derivations of the Quantum Markovian Limit (authors: R. Alicki, D. A. Lidar, P. Zanardi) raises concerns about the physical consistency of the mathematical assumptions of the threshold theorem. There is a special section listing possible objections to the claimed inconsistencies. (Thermodynamics plays a crucial role in Alicki, Lidar, and Zanardi’s analysis.) There is an additional one-page 2007 paper by Robert with a critique of the assumptions of two then-recent fault-tolerance papers. Robert had several other related papers relevant to the criticism of quantum computers, proposing models for quantum noise, and some papers (with Michał Horodecki) analyzing Kitaev’s proposed states. (I will come back to some of these papers and interesting related discussions toward the end of this post.)
 

c) A “no go theorem” based on thermodynamics.

I am aware of two additional papers by Alicki that give a thermodynamic argument for the impossibility of quantum computing. The first paper is: R. Alicki and M. Horodecki, Can one build a quantum hard drive? A no-go theorem for storing quantum information in equilibrium systems (2006). The paper asserted that thermodynamics implies that the set of effectively stabilized states of a large system always possesses a classical structure (a simplex) and hence cannot support quantum information. It was motivated by the theory of KMS states.

In the next paper, Alicki acknowledged that many physicists do not find the argument convincing and that some physicists doubt the relevance of thermodynamics altogether: Quantum memory as a perpetuum mobile? Stability vs. reversibility of information processing (2009). The paper contains an interesting discussion about the relevance of thermodynamics to quantum information processing.

d) “Critique of fault-tolerant quantum information processing”

The paper summarized some of Alicki’s earlier arguments and proposed some new points of view. It appeared in a 2012 book edited by Lidar and Brun. The introduction asserts that quantum computing “presents a bold challenge to the rather well established principles called the Strong Church-Turing Thesis and the Bohr Correspondence Principle.” The paper goes on to critically discuss the threshold theorems for fault-tolerant quantum computation, and the idea of quantum memory based on topological degrees of freedom.
 

e) Alicki’s reaction to the 2019 “quantum supremacy experiment”.

In September 2019 the Google quantum supremacy claims became publicly known (the paper was leaked from NASA servers a month before publication). A few days later I wrote about my early impressions of the paper and its fantastic claims, and I also asked Robert what he thought about the excitement. Robert responded (September 2019):

Generally, I am more pessimistic than you. I think that QC is a new type of postmodern science based on hype, self-promotion and lack of serious criticism.

Robert relied on his own bad experience with a 2009 Nature paper by Ansmann et al. on “Violation of Bell’s inequality in Josephson phase qubits.” Robert claimed to have found serious flaws in that paper and wrote, “I could not believe that respectable scientists were able to do such things.” “For example,” Robert wrote, “they claim the 244(!!!) standard deviation violation of Bell inequality while in fact the violation was of the order of experimental error.”

On the technical issues Robert wrote:

I still believe that I have strong arguments against fault-tolerance in QC which exclude e.g. useful implementations of Shor algorithm. I cannot exclude some temporal “windows of quantum supremacy” for certain very specific tasks similarly to the windows of “analog supremacy” in the 60-ties.  However, I do not believe that this is the case.  My doubts are based on the experience with gates implemented by quantum dots. I have written a paper (PRA 2004) with Horodecki’s and some experts in quantum dots from Wroclaw showing that the error per 1-qubit gate cannot be smaller than 0.1%. I am sure that superconducting qubits cannot do better because the ultimate physical mechanisms of decoherence are based on the same electromagnetic interactions. Taking into account that multiqubit errors are more malicious I think that such accuracy (claimed to be reached by Google QC) is too low for quantum supremacy.

This paper by Alicki proposed a classical analog demonstration of “quantum supremacy,” following Google’s 2019 quantum supremacy paper.


Top left: with Michel Dyakonov (Jerusalem, 2014); top right: with Robert Alicki (Gdynia, 2014); bottom left: Oded Goldreich; bottom middle: with Michael Ben-Or, Dorit Aharonov, and Robert Alicki (Jerusalem, 2005); bottom right: Leonid Levin. (Click to enlarge.)

 

Leonid Levin’s point of view

Leonid Levin wrote a short (very dense and sarcastic) essay, Polynomial Time and Extravagant Models (Section 2 of his 2003 article The Tale of One-Way Functions).

Here is a quote from Leonid’s paper: 

The major problem is the requirement that basic quantum equations hold to multi-hundredth if not millionth decimal positions where the significant digits of the relevant quantum amplitudes reside. We have never seen a physical law valid to over a dozen decimals. Typically, every few new decimal places require major rethinking of most basic concepts. Are quantum amplitudes still complex numbers to such accuracies or do they become quaternions, colored graphs, or sick-humored gremlins? I suspect physicists would doubt even the laws of arithmetic pushed that far. In fact, we know that the most basic laws cannot all be correct to hundreds of decimals: this is where they stop being consistent with each other!

Leonid concludes this part of his paper with the following statement: “The rest of the article ignores any extravagant models and stands fully committed to the Polynomial Overhead Church–Turing Thesis: Any computation that takes t steps on an s-bit device can be simulated by a Turing Machine in s^{O(1)} t steps within s^{O(1)} cells.”

And here is  what Leonid wrote to me:

(2013) “Your conjectures are
interesting, and refreshing on the background of mindless wishful thinking
of most QC community. The question of WHAT EXACTLY is the obstacle
to quantum computing is very interesting (for physics, not for CS).

Yet, if your conjectures are refuted or circumvented, other obstacles
would come instead.”

 
(2015) “My skepticism is not based on
specific obstacles. When I read Harry Potter or Tolkien, I do not
bother with finding specific obstacles for implementing the tools
envisioned in these wonderful books. Simple computations show that
QC ideas are far beyond the realms for which we have any evidence.
So this is Sci.Fi in my view with no need of specific obstacles.

And yet it is an interesting thought experiment that may provoke
deeper understanding of QM.”

See also this 1994(!) posting by Leonid.

Oded Goldreich’s point of view

Oded Goldreich holds the view (On Quantum Computing, 2005, and an earlier version from 2004) that quantum computers represent a fantastical possibility in a wondrous universe that is unlikely to be credible.

Oded starts his essay by making the following claims: “Firstly, the Laws of QM are merely a refutable model (or assumption) about the real world, they are not the reality itself nor can they ever be proved to provide a full description of reality.” And, “Secondly, as far as I know (and here I may be wrong), QM says that certain things are not impossible, but it does not say that every thing that is not impossible is indeed possible.” 

Here is another quote of Oded:

From my perspective/understanding of TOC (theory of computing), the QC is weird model unworthy of study. It tells us nothing about what we are really interested in. (Still, at rare times, it may be of help (like result is area A may sometimes allow to solve a problem in area B, or just inspire thinking).)

Oded also writes: “I wish to stress that I am not concerned with the question of whether or not QC that factor integers of the size used in commercial applications of RSA will be built in our life-time. I am concerned of the question of whether the (simplified, in my mind) model of QC (with unit cost operations) is an adequate model of reality. Needless to say, this question seems to lie more in the domain of physics than in that of computer science.”

He also argues about the analogy between randomized computation and quantum computation: “At times, I heard the believers of QC draw an analogy between QC and randomized computation. In my opinion, this analogy is highly misleading. The key point about QC is the ability to actually maintain a huge vector of (exponentially many) “possibilities”, whereas a randomized algorithm merely maintains one possibility (indeed selected at random) in each time. The probability space of the execution of a probabilistic algorithm is a mental experiment taking place in the analysis (not in reality).”

Here is a post about Oded’s 60th birthday conference.

Michel Dyakonov’s point of view

Michel Dyakonov describes his point of view in his 2020 book Will We Ever Have a Quantum Computer? Dyakonov regards quantum computers as a ridiculously hard technological task and as an absurd idea from the physics point of view. Dyakonov mentioned that in his view the progress so far is very limited: quantum computers cannot honestly factor 21 into 7×3. He describes his main points as follows:

  1. The hopes for scalable quantum computing are founded entirely on the “threshold theorem”: once the error per qubit per gate is below a certain value, indefinitely long computations are possible.
  2. The mathematical proof of the threshold theorem heavily relies on a number of assumptions supposed to be fulfilled exactly, as axioms.
  3. In the physical world nothing can be exact when one deals with continuous quantities. For example, it is not possible to have zero interaction between qubits; it can only be small.
  4. Since the basic assumptions cannot be fulfilled exactly, the question is: What is the required precision with which each assumption should be fulfilled?
  5. Until this crucial question is answered, the prospects of scalable quantum computing will remain very doubtful.

The book followed several earlier articles which had already raised these main points.

Dyakonov regards the task of building quantum computers as ridiculously difficult and compares it to building houses with one-atom-thin walls and teaching donkeys to speak English.

Wolfram, ’t Hooft, Baez, Laughlin, Leggett…

Among the other scientists who have expressed skepticism about quantum computing are Stephen Wolfram, Gerard ’t Hooft, Robert B. Laughlin, and John Baez. (Anthony Leggett proposed an explanation for the impossibility of certain related quantum states.) I will add links to their views here if they become available to me.

More recent quantum skepticism: Svozil, Gourianov,  McGuinness, and Vardi.

In the 2017 article Delivering on a quantum promise in Physics World, Karl Svozil writes: “While many of the quantum manifesto’s short- and medium-term goals appear feasible, some of the long-term goals might not be achievable even in principle.” (In contrast, my own skeptical view is that some crucial short-term goals, primarily creating the necessary quantum error-correcting codes, are infeasible.)

Nikita Gourianov’s Financial Times article The Quantum Computing Bubble is discussed in this Shtetl Optimized post.

Moshe Vardi  gives a very nice description of  the two sides of the debate and concludes with a declaration “Count me a quantum skeptic!” in his 2019 article Quantum Hype and Quantum Skepticism for Communications of the ACM.  (My reading is that Moshe does not endorse the skeptical point of view that quantum computing is infeasible but does not dismiss it either. He expresses skepticism about other aspects of the grand technological and commercial vision of quantum computers.)

Liam McGuinness’s skepticism is related to the Heisenberg limit in quantum metrology and to related laws of quantum physics.  To present McGuinness’s opinion, let me offer a very brief introduction to quantum sensing: in quantum sensing, N independent measurements based on N particles offer an improvement by a factor of \sqrt{N} in precision. This improvement is referred to as the “standard quantum limit”.  There are further claims, both theoretical and empirical, that if these N particles are entangled, an improvement by a factor of N compared to a single particle can be reached.  This factor-N improvement using entanglement is referred to as the “Heisenberg limit.” 
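In formulas, and up to constants, the achievable precision for estimating a parameter \theta scales as

\displaystyle \Delta\theta_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad \Delta\theta_{\mathrm{Heisenberg}} \sim \frac{1}{N},

so with N=100 particles the standard quantum limit gives a tenfold improvement over a single measurement, while the Heisenberg limit would give a hundredfold improvement.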

McGuinness argues (against the prevalent view as well as a large body of experimental claims) that entanglement does not offer a substantial advantage for sensing and does not allow an improvement over the “standard quantum limit” (towards the “Heisenberg bound”). He also argues that quantum computation is out of reach, as in his opinion it requires breaking even the Heisenberg bound (exponentially). McGuinness’s reasoning does not refer to noise; see his paper The case against entanglement improved measurement precision.

Pictures!

My 2014 post Pictures from Recent Quantum Months has some pictures with Robert Alicki, Michel Dyakonov, Michal Horodecki, Yuri Gurevich, John Sidles, Connie Sidles, and Rico Picone. Here is a picture with Scott Aaronson and Guy Kindler (2015). There are two pictures of Aharonov-Alicki-Ben-Or-Kalai, from 2005 and 2013.

The claim of intractable engineering barriers. (Possible in principle and not in practice.)

Is it possible that something works in theory but not in practice? This is a cool philosophical question. 

Immanuel Kant argued that this is not possible in his essay “Über den Gemeinspruch: Das mag in der Theorie richtig sein, taugt aber nicht für die Praxis” (“On the Common Saying: That May Be Correct in Theory, But It Is of No Use in Practice”, 1793) (English translation); Scott Aaronson made a similar argument in his post Mistake of the Week: “X works on paper, but not in the real world”. Aram Harrow offered quite the opposite view, writing in our debate:

“There are many reasons why quantum computers may never be built. We may fail to bring the raw error rate below the required threshold while controlling and interacting large numbers of qubits. We may find it technically possible to do this, but too costly to justify the economic value of the algorithms we are able to implement. We might even find an efficient classical simulation of quantum mechanics, or efficient classical algorithms for problems such as factoring. The one thing I am confident of is that we are unlikely to find any obstacle in principle to building a quantum computer.”

Current optimism in the community (written before the Willow announcement).

My impression is that people in the quantum computing community are quite optimistic about the future of quantum computers. Here is a nice recent post from SO: Quantum Computing: Between Hope and Hype (and my comment), and here is a post that I wrote (among other things) about Dorit Aharonov’s lecture, in which she expressed optimism that skeptics (and specifically me) will have to change their minds in view of recent experimental progress.

Some technological breakthroughs from other areas that may give reasons for hope

Two recent technological breakthroughs that can serve as “role models” for quantum computing are: a) the move from CPUs to GPUs, which did wonders for neural networks and deep learning; b) the success of anti-missile missiles, and perhaps also of laser-based anti-missile systems. (Apropos anti-missile missiles, see this post about Chomskian linguistics.)

Chaos and Quantum mechanics

Both Alicki and Dyakonov mentioned in their writings that the tension between chaos in classical physics and the behavior of (ideal) quantum systems could be related to an explanation for the impossibility of quantum computing. Chaos is also related to my theory: noise sensitivity, which plays a role in my argument, is a mathematical theory related to chaotic phenomena in physics. (This theory differs from Lorenz’s mathematical theory of chaos and has roots in early work of Wiener on chaos; see the section on Wiener Chaos and Lorenz Chaos in this post.)
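For readers unfamiliar with the notion, here is one standard way to state the basic definition: a sequence of Boolean functions f_n on \{0,1\}^{m_n} is noise sensitive if for every fixed \epsilon>0,

\displaystyle \lim_{n\to\infty}\ \mathrm{Cov}\big(f_n(x),\,f_n(x^{\epsilon})\big)=0,

where x is uniformly random and x^{\epsilon} is obtained from x by independently resampling each coordinate with probability \epsilon.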

 


Roadmap for the Debate about Quantum Computers

Here is a roadmap for the debate about quantum computers from my perspective. Skeptical claims are in blue, responses to these claims are in purple. Points 1-8 represent skeptical claims made over the years, while points a-p are replies to these skeptical claims. Points (i)-(x) are replies to some of these replies. Underlined claims were made by me. Green boldface indicates points of agreement. 

  1. The understanding of noise is a crucial ingredient of any model of computation. Noise may cause quantum computing to fail.

    • a) Noise is not fundamental. There is no inherent “noise” in quantum mechanics.

      • (i) A realistic computational model must also include the modeling of errors.

    • b) Quantum error correction and quantum fault tolerance have the potential to deal successfully with noise and various forms of inaccuracies.

    • c) Topological quantum computing gives an alternative avenue for quantum computing that does not depend on the circuit model.
  2. General forms of quantum noise (in particular, correlated errors) will fail quantum error correction.

    • d) These types of general noise are not physical.

    • e) These types of general noise would cause classical computation to fail as well.

    • f) For arbitrary forms of noise, if the error rate is sufficiently small, log-depth quantum computing survives.

  3. Specific physical noise models differ from the ideal model.

    • g) We will be able to extend the threshold theorem to these models as well.

      • (ii) All those extensions still assume common properties that might be unrealistic.

  4. There will be a systematic relationship between the “signal” and the noise. For example, the noise for a pair of entangled qubits will be highly correlated. (This argument extends to attempts to build quantum gadgets needed for topological quantum computing.)

    • h) This cannot be: how would nature know what the intended state is? It is like saying, “wet roads cause rain to pour.”

      • (iii) This systematic relation can be a general rule if you consider special-purpose quantum computational devices.

      • (iv) This systematic relation holds for NISQ (noisy intermediate scale quantum) systems and more generally for systems that do not apply quantum fault tolerance. This systematic relation describes quantum physics “above the quantum fault tolerance threshold.”

    • i) Such a principle would harm classical computation as well, unless you believe that by a miracle the threshold for fault tolerance lies between what is required for classical and quantum computation.

      • (v) The argument about the miraculous location of the threshold is incorrect because classical computers are not universal quantum computers.

  5. NISQ computers represent a primitive classical computational power (LDP), which implies an inherent reason they cannot achieve quantum computational supremacy. This suggests there is an absolute lower bound on noise rates that prevents effective quantum error correction. 

    • j) It is incorrect to apply asymptotic insights (on the number of qubits) to systems with only tens or a few hundred qubits.

      • (vi) We commonly apply asymptotic insights to small systems.

    • k) But why, then, are classical information and computation possible?

      • (vii) The computational class LDP still supports classical information.

    • l) The Google 2019 experiment refutes this argument.

      • (Agreed upon) Google’s 2019 supremacy claims appear to be incorrect.

    • m) Some aspects of the Google 2019 experiment and its replications are promising and support basic assumptions about quantum noise.

      • (viii) These aspects of Google 2019 appear to be incorrect as well.

    • n) (New) The 2019 Google experiment was long ago superseded. Several recent experiments, including Google’s “septillion years advantage over classical” and other results from IBM, Quantinuum, QuEra, and USTC, are more robust.

      • (ix) We need to scrutinize these assertions as well. Gradual experimental progress is expected to hit a barrier, and fantastical experimental claims are expected to be falsified.

  6. Building quantum computers is a ridiculously hard engineering task.

    • o) It is very hard but not ridiculously hard; progress is being made, and a long-term approach is necessary.

      • (x) Even achieving near-term tasks is inherently impossible.

      • (Agreed upon) We may have an opportunity to test this matter in the next few years.

  7. (Simplified) Quantum computations conflict with important principles and laws from physics and computing theory:

    • 7.1 The time-energy uncertainty principle.

    • 7.2 The effective Church-Turing thesis.

    • 7.3 Several principles of thermodynamics.

    • 7.4 Bohr’s correspondence principle.

    • p) (Simplified) These physical and computational rules are not universally correct, which is part of what makes quantum computation so remarkable as it would enable novel physical and computational behaviors.

  8. Quantum computers will allow sampling according to the exact values of permanents; achieving this is implausible given that computing permanents is #P-complete.

Summary

Proponents of Quantum Computers: BQP is the computational complexity class representing our quantum physical world. BQP characterizes the complexity of quantum processes on a microscopic scale, as evidenced by the challenges of simulating quantum physics on classical computers. Building quantum computers is primarily an engineering task, and pursuing this endeavor will drive exciting scientific progress across various fields. Many promising avenues exist, ranging from different implementations of quantum circuits to non-abelian anyons and topological quantum computing. Numerous impressive experimental results have already been achieved.

Skeptics of Quantum Computers: BPP is the computational class representing computation in the physical world. Classical information and computation are sustained in our noisy quantum world through the “averaging” of noisy quantum information.  Quantum computational supremacy and high-quality quantum error correction are inherently impossible, and there is no compelling evidence for them in natural quantum processes. Advances in experimental quantum computing should be scrutinized carefully. Gradual experimental progress is expected to hit a barrier, and fantastical experimental claims are unlikely to withstand rigorous examination.

A “meta claim” of proponents is that there is no symmetry between the two sides of the debate: the prevalent view shared by many experts is that quantum computation is possible. 

_____________

Future Plans for Quantum Posts

I am planning an ambitious post, “Some Quantum Mysteries,” which will explore several mysteries in quantum physics, many of which are connected to quantum information theory. The first item is about the elusive Majorana zero modes, and the second item is about the security of quantum key distributions.

As a spinoff of the mysteries post, I am also writing two posts on the debate surrounding quantum computing. The first will summarize a few skeptical perspectives on quantum computing from Robert Alicki, Michel Dyakonov, Leonid Levin, Oded Goldreich, and others. The second post will begin with the roadmap presented here, briefly outline my own skeptical views with relevant links, and then delve into the broader debate, including counterarguments from John Preskill, Aram Harrow, Scott Aaronson, Dave Bacon, Boaz Barak, and others.

Seven assertions about quantum computing

In my previous post, I presented seven assertions about quantum computing that emerged from my research. My primary goal was to articulate these assertions as clearly as possible. (Of course, there is much to discuss regarding the arguments for their validity, precise quantitative formulations, falsifiable predictions, and their place within the broader context of the quantum computing endeavor.)


Seven Assertions about Quantum Computing.

The purpose of this post is to present seven assertions about quantum computing that arose in my research. I welcome questions and remarks and will gladly clarify or elaborate on them.

Four Predictions About Quantum Computation in General

  1. Inherent Noise in Two-Qubit Gates
    Two-qubit gates will inherently be noisy. Engineering efforts to reduce this noise will encounter a barrier that prevents achieving the quality required for quantum fault-tolerance.

  2. Correlated Errors in Cat States
    Cat states will inevitably experience substantially correlated errors.  (A cat state {\frac{1}{\sqrt 2}}\left|00\right\rangle +{\frac{1}{\sqrt 2}} \left|11\right\rangle represents entanglement between two qubits.)

  3. Limitations of NISQ Devices
    Samples generated from boson sampling and from intermediate-scale quantum computers correspond to a primitive computational complexity class called LDP (low-degree polynomials). As a result, they are incapable of demonstrating quantum supremacy or achieving high-quality quantum error-correction. This argument imposes a computational-complexity-based limit on the engineering efforts to reduce errors in NISQ (Noisy Intermediate-Scale Quantum) devices. (A toy illustration of the low-degree mechanism appears right after this list.)

  4. Noise Sensitivity in Small Quantum Circuits
    The empirical distributions of samples obtained from 12-qubit random circuits inherently contain a significant noise-sensitive component. This means that the empirical distributions are necessarily non-stationary  (changing over time), and are inherently unpredictable. 
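Here is the toy illustration promised in item 3 above (my own sketch, not a computation from my papers): under the standard noise operator, the degree-k Fourier–Walsh coefficient of a function on the discrete cube is multiplied by \rho^k, so high-degree structure decays rapidly with the degree and what survives noise is, in effect, a low-degree polynomial.

```python
# Toy illustration: the noise operator damps a degree-k Fourier-Walsh coefficient by rho^k.
import itertools
import numpy as np

n = 4          # number of bits in this toy example
rho = 0.8      # noise correlation parameter (rho = 1 corresponds to no noise)

def walsh(S, x):
    """Walsh character chi_S(x) = (-1)^{sum_{i in S} x_i}."""
    return (-1) ** sum(x[i] for i in S)

def fourier_coeffs(f):
    """Fourier-Walsh coefficients of f : {0,1}^n -> R under the uniform measure."""
    points = list(itertools.product([0, 1], repeat=n))
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    return {S: np.mean([f(x) * walsh(S, x) for x in points]) for S in subsets}

# A toy "signal": the parity of all n bits -- the most noise-sensitive Boolean function.
f = lambda x: (-1) ** sum(x)

for S, c in fourier_coeffs(f).items():
    if abs(c) > 1e-9:
        print(f"degree {len(S)} coefficient {c:+.3f} becomes {c * rho ** len(S):+.3f} after noise")
# Parity has a single nonzero coefficient, of degree n, so it shrinks by rho^n;
# low-degree parts of a general function survive the noise much better.
```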

Three Assertions About Google’s 2019 Quantum Supremacy Experiment

  1. Statistical Unreasonableness in Fidelity Predictions
    Predictions based on the fidelities (error rates) of individual components of Google’s experiment were successful in a statistically unreasonable manner. (The type of prediction in question is recalled after this list.)

  2. Lack of Separation Between Calibration and Sampling Stages
    Contrary to Google’s description of their experiment, there was no strict separation between the calibration stage and the stage of generating samples from large quantum circuits.

  3. Lack of Transparency in Crucial Data
    Google has not provided essential data required to rigorously scrutinize their experiment. Specifically:

    • Over five years after the experiment, the team has not shared the individual two-qubit gate fidelities. This lack of disclosure is unreasonable.
    • Details about the calibration stage remain concealed under commercial secrecy.
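For context, the type of prediction referred to in the first assertion above is, roughly speaking, the standard multiplicative estimate of the fidelity of a whole circuit from the measured fidelities of its individual components,

\displaystyle \mathcal F \approx \prod_{g\in \mathrm{gates}} (1-e_g)\ \prod_{q\in \mathrm{qubits}} (1-e_q),

where e_g denotes the error rate of gate g and e_q the readout error rate of qubit q.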

Assertions (1)–(4) form part of my broader argument against quantum computers (see this post for a brief description and this paper); assertions (5)–(7), the three assertions about Google’s experiment listed above, are part of the case against the Google 2019 quantum supremacy experiment (see this post for a summary and links to my relevant papers). 

I believe all seven assertions are correct; however, they are not definitive. For the four assertions in the first part, the arguments are partially heuristic, and I do not anticipate stronger claims in the form of a mathematical theorem proving that quantum computers are impossible, nor a derivation of their impossibility from an agreed-upon physical principle.  Additionally,  in both parts, there are several counterarguments that warrant consideration.


More Annotated Pictures from Fall/Winter 2024

  • Pictures from Jerusalem, Tel Aviv, and Athens (!) and Rehovot.
  • Google‘s new 10,000,000,000,000,000,000,000,000 supremacy announcement;
  • Sarnak’s lectures; the “A-Bass note”, fullerenes, and 3-polytopes with hexagonal and pentagonal faces (answer to TYI58);
  • What was the music in the ancient Jewish Temple?
  • Winter school in Rehovot; Terrific quantum error-correcting codes; 
  • A picture with my grandchildren and more.

December 2024

Jerusalem: Einstein Institute of Mathematics


With Itamar Zwick and Shahar Mozes and Yuka.

Tel Aviv Museum and skyline


Nava de-Shalit, Udi de-Shalit, Mazi and I on the rooftop of the Tel Aviv Museum.

Google‘s 10,000,000,000,000,000,000,000,000 supremacy announcement


The Google Willow announcement. (Left:) Sundar Pichai’s tweet and Elon Musk’s reply. (Right:) The announcement boosted stock values for quantum computing companies (but had no effect on Bitcoin’s price). Google’s claim of an exponential increase in error-reduction capability is based on the alleged exponential growth of the function Λ: Λ(5)=1; Λ(7)=2 🙂 .

Here are some remarks on the latest sensational supremacy claim. Just a few hours before the announcement, I explained, based on Google‘s 2019 claims, why the scientific claims made by Google Quantum AI should not be taken too seriously.

Until this latest announcement, I mistakenly believed (and hoped) that after the 2019 supremacy claim, the Google Quantum AI team had largely shifted toward more fruitful directions. However, the new “septillion” claim retrospectively justifies our efforts over the past five years to meticulously scrutinize Google’s 2019 supremacy experiment.

Beyond serious concerns regarding Google’s methodology, these new extraordinary supremacy claims also appear to conflict with results on classical spoofing by Gao et al. and by Aharonov et al.

Does an assertion by Sundar Pichai, followed by a “wow” from Elon Musk, now constitute a new scientific proof system? We have already seen trillion-dollar companies and multi-billionaires (arguably) reshape democracy—should we now expect them to redefine the nature of science as well?  

The Cascade Lectures in Combinatorics

Athens!


For the first time since the beginning of the terrible war, we went on a few days’ trip abroad, to wonderful Athens.


For a mathematician, seeing all these Greek letters coming alive is an interesting experience!

Sarnak’s lectures and workshop


Peter Sarnak

Fullerenes

Peter Sarnak talked about the remarkable class of simple three-dimensional polytopes with (only) pentagonal and hexagonal faces. Let F be the class of planar 3-connected cubic graphs with n vertices such that all faces (including the outer face) are 5-gons or 6-gons. (Equivalently, F can be regarded as the class of graphs of simple 3-polytopes with 5-sided and 6-sided faces.) Such polytopes arise in chemistry, where they are called fullerenes. It follows from Euler’s theorem that among the 2-faces there are always exactly 12 pentagons (the short computation is recalled below). We asked in the previous post

Test your intuition (58): Does the number of graphs in F with n vertices grow exponentially or polynomially with the number of vertices?

Answer to test your intuition (58): The number is roughly n^9. It is polynomial in n.

Peter mentioned remarkable papers by Thurston, Shapes of polyhedra and triangulations of the sphere (1998), and by Engel, Goedgebeur, and Smillie, Exact enumeration of fullerenes (2023).  I am not aware of a direct combinatorial argument for the polynomial growth rate. (ChatGPT’s answer was: exponential.)
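For completeness, here is the short computation promised above, showing that a fullerene always has exactly 12 pentagonal faces: since the graph is cubic, 3V=2E; counting face-edge incidences gives 2E=5f_5+6f_6=6F-f_5, so f_5=6F-2E; and substituting V=2E/3 into Euler’s formula V-E+F=2 gives 6F-2E=12, hence

\displaystyle f_5=12.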

Now, what about higher dimensions?

What is the origin of the term “The A-Bass note”?

I asked Peter if the term is related to the famous mathematician Hyman Bass. But, no, it is related to the bass note in music and refers to the lowest eigenvalues. 

New members lectures at the Israeli Academy of Sciences and Humanities 

Lectures by four of the new members of the Israeli Academy of Sciences and Humanities. Top left: Miki Elad; top right: Ashraf Brik; bottom left: Tamar Dayan; bottom right: Edwin Seroussi. I have always liked these short lectures aimed at a broad audience. 

Edwin Seroussi is an Israeli musicologist. He is often asked, “Professor, what was the music in the Jewish Temple?” His fascinating lecture was devoted to answers to this question. A prominent role in the talk was played by Shabbethai Meshorer Bass (1641–1718), a Jewish scholar born in Kalisz in 1641, who moved to Prague at the age of 14 after both his parents died in persecutions during the Russo-Swedish war. Bass was a bass singer in the Altneuschule synagogue of Prague, and his name was derived from his position. 

You may wonder if Hyman Bass is related to Shabbethai Bass, but as Hyman (Chaim) told me the answer is:  “No, My parents were both from Lithuania. ‘Bass’ was the Ellis Island (immigration) translation of Bashekevich. At least one fourth generation descendent has reinstalled that name.”

Winter school on expansion in groups, combinatorics, and complexity

Speakers of the first day: Gil Cohen, Nir Lazarovich, Amir Yehudayoff, Esty Kelman, and Pavel Panteleev.

Outstanding quantum error-correcting codes

Pavel Panteleev gave a beautiful lecture on classical and quantum low-density parity-check (LDPC) codes, good classical locally testable codes (LTCs), and the quest for good quantum locally testable codes. (Here, “good” is not just a compliment; it refers to codes with positive rate.) The recent paper Maximally Extendable Product Codes are Good Coboundary Expanders by Gleb Kalachev and Pavel Panteleev brings us closer to this goal, with high-dimensional expanders (HDX) playing a crucial role. 

We previously discussed some earlier breakthroughs here and here. (And Quanta Magazine covered them here and here.) Here is a post about a 2014 workshop on Quantum PCP and/or Quantum Codes and HDX.

Gleb and Pavel are also leading contributors to classical spoofing algorithms for random circuit sampling. Their algorithms enabled the confirmation of certain fidelity estimates for large circuits in Google‘s 2019 quantum supremacy paper. Over the past few years, Yosi Rinott, Tomer Shoham and I have had several fruitful collaborations with them on this topic, and they have been very helpful to our project. 

More pictures


With my grandchildren Yonatan, Yoav, and Ilan

With Boris Solomyak (left), Moshe White and Dror Bar-Natan (right). Dror told me about his 2022 paper with Roland van der Veen on exciting, efficiently computable knot invariants. Stay tuned!

Dror also found three pictures from the late eighties. Yuval Peres, Dror, Shahar Mozes, and me. (The right picture from Princeton and the other two from Yale.) 

Tel Aviv beach


Test Your Intuition (58): Polyhedra with 5-sided and 6-sided faces.

Let F be the class of planar 3-connected cubic graphs with n vertices such that all faces (including the outer face) are either pentagons or hexagons. Equivalently, F can be viewed as the family of graphs of simple 3-polytopes with n vertices and only 5-sided and 6-sided faces.

Test your intuition: Does the number of graphs in F with n vertices grow exponentially or polynomially with n?


Annotated Pictures from Fall 2024

Apart from pictures, I write about the very first “Test your Intuition” question, a pioneering work of Tutubalin, the hierarchy of valuations of Lehman, Lehman, and Nisan, and three conjectures of Miki Tarsi.

September 2024

Rothschild Symposium


From left to right: Nati Linial, Andrei Broder, me, and Yosi Azar. (The Rothschild Symposium, September.)

It was great to see Andrei after not meeting in person for a couple of decades.

Both Andrei Broder and Yosi Azar are mentioned in my 2008 post Test Your Intuition (1). Here is the question again:

Question:  Suppose that we sequentially place n balls into n boxes by putting each ball into a randomly chosen box. It is well known that when we are done, the fullest box has with high probability (1+o(1))\ln n/\ln \ln n balls in it. Suppose instead that for each ball we choose two boxes at random and place the ball into the one which is less full at the time of placement. What will be the maximum occupancy?
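For readers who would like to test their intuition empirically before recalling the answer, here is a quick simulation sketch (my own illustration, not part of the original post):

```python
# Compare the maximum load of n balls in n boxes for the one-choice and two-choice processes.
import random

def max_load(n, choices):
    boxes = [0] * n
    for _ in range(n):
        picks = [random.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda i: boxes[i])   # place the ball in the least-full picked box
        boxes[best] += 1
    return max(boxes)

n = 100_000
print("one choice: ", max_load(n, 1))   # typically on the order of ln n / ln ln n
print("two choices:", max_load(n, 2))   # compare with the one-choice value
```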


Amichai Lampert, Hillel Furstenberg and Tammy Ziegler, Noam Lifshitz and me, Emmanuel Breuillard

Emmanuel Breuillard discussed central limit theorems for Heisenberg groups and more general nilpotent Lie groups, with a particular focus on his 2023 paper co-authored with Timothée Bénard. (Above his picture I attached the definition of the Heisenberg group; it is recalled below.) This fascinating area builds on the pioneering 1960 work of Valerii Nikolaevich Tutubalin, and the new paper answers a question posed by Tutubalin in the 1960s. Two decades ago, Emmanuel himself made significant progress in this field during his doctoral thesis, which is when we first met at Yale.
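For reference, the (continuous) Heisenberg group is the group of unipotent upper-triangular 3\times 3 matrices

\displaystyle H=\left\{\begin{pmatrix}1 & a & c\\ 0 & 1 & b\\ 0 & 0 & 1\end{pmatrix}:\ a,b,c\in\mathbb R\right\},

the simplest non-abelian nilpotent Lie group; the discrete Heisenberg group is obtained by restricting a, b, c to \mathbb Z.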


Shiri Ron gave a great lecture titled Strengthening truthfulness: Is it possible?  (taken literally this is a very timely subject 🙂 ). The slide on the right showcases a fascinating hierarchy of valuations (real functions over the discrete cube) introduced by Benny Lehman, Danny Lehman, and Noam Nisan in their 2002 paper Combinatorial Auctions with Decreasing Marginal Utilities. The class of real functions defined on the discrete cube (equivalently, on subsets of \{1,2,\dots, n\}) plays a significant role in both optimization theory and game theory. An intriguing question would be to study the Fourier-Walsh expansions of functions within each class.

 


with Michael Ben-Or and Shafi Goldwasser

October 2024

Rosh Hashana and Sukkot


Tel Aviv, Rosh Hashana, Lior and his nephew Yonatan


Jerusalem, Sukkot 2024, in the streets of my childhood.

Reichman University Memorial Exhibit

A visual memorial to the victims of October 7 at Reichman University

November 2024


Amazing (?) coincidences! I first met Avi and Edna on the train to Jerusalem and when we left the train we also met my son Hagai.


Nati Linial, Avi Wigderson, Michal Linial, Miki Tarsi, me, Anita Tarsi, Yehuda Afek, and Edna Wigderson. (We met just before a lecture by Nati.)

And here are three long-standing beautiful conjectures by Miki Tarsi and collaborators.

Alon-Tarsi conjecture (link: Wolfram MathWorld): An n \times n Latin square is said to be odd if it contains an odd number of rows and columns that are odd permutations; otherwise, it is said to be even. The Alon-Tarsi conjecture states that for even n, the number of even Latin squares differs from the number of odd Latin squares. This was proved by Drisko in 1988 when n is of the form 2^rp and p is a prime. (Here are slides of a nice lecture by Eric Moorhouse.)

Alon-Jaeger-Tarsi conjecture: For any (finite) field F with at least four elements and any non-singular matrix M over F, there is a vector x such that both x and Mx have only non-zero entries. Here is a 2021 paper, The Alon-Jaeger-Tarsi conjecture via group ring identities, by János Nagy and Péter Pál Pach, where the conjecture is proved for fields of sufficiently large prime order. 
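To make the statement concrete, here is a tiny brute-force check of a single instance over \mathbb F_5 (my own toy illustration; the matrix below is just an arbitrary invertible example):

```python
# Brute-force check of one Alon-Jaeger-Tarsi instance: over F_5, find a vector x such that
# both x and Mx have no zero entry, for a fixed invertible matrix M.
import itertools
import numpy as np

p, n = 5, 3
M = np.array([[1, 2, 0], [0, 1, 3], [4, 0, 2]])   # det = 26, nonzero mod 5, so M is invertible over F_5
assert round(np.linalg.det(M)) % p != 0

for x in itertools.product(range(1, p), repeat=n):   # entries of x are already nonzero
    if all(v % p != 0 for v in M.dot(x)):
        print("witness x =", x, " Mx mod p =", tuple(int(v) % p for v in M.dot(x)))
        break
```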

Jaeger-Linial-Payan-Tarsi conjecture: For any prime number p, there is a constant c such that for any n, the union (with repetitions) of the vectors of any family of c linear bases of \mathbb Z_p^n forms an additive basis of \mathbb Z_p^n (i.e., any element of \mathbb Z_p^n can be expressed as the sum of a subset of these vectors). Here is the paper by Jaeger-Linial-Payan-Tarsi and a paper by Alon-Linial-Meshulam where the conjecture is discussed.

About Miki Tarsi: In Summer 1978, we spent two weeks together at a summer school in combinatorics in Montreal. Among the speakers were Vera Sós, Richard Wilson, and Haim Hanani. While in Montreal, Miki wrote a fascinating children’s thriller about ants for his son Hagai.


Jiaoyang Huang, Theo Mckenzie, Horng-Tzer Yau: Ramanujan Property and Edge Universality of Random Regular Graphs

A central problem in combinatorics, probability theory, and analysis is to understand the spectrum of  random d-regular graphs G with n vertices. The following paper marks a huge leap in our understanding of this problem. 

Ramanujan Property and Edge Universality of Random Regular Graphs, by Jiaoyang Huang, Theo Mckenzie, and Horng-Tzer Yau.

Abstract: We consider the normalized adjacency matrix of a random d-regular graph on N vertices with any fixed degree d\geq 3 and denote its eigenvalues as

\displaystyle \lambda_1=d/\sqrt{d-1}\geq \lambda_2\geq\lambda_3\geq\cdots\geq \lambda_N.

We establish the following two results as N\rightarrow \infty.

  • (i) With high probability, all eigenvalues are optimally rigid, up to an additional N^{o(1)} factor.
    Specifically, the fluctuations of bulk eigenvalues are bounded by N^{-1+o(1)}, and the fluctuations of edge eigenvalues are bounded by N^{-2/3+o(1)}.
  • (ii) Edge universality holds for random d-regular graphs. That is, the distributions of \lambda_2 and \lambda_N converge to the Tracy-Widom distribution associated with the Gaussian Orthogonal Ensemble.

As a consequence, for sufficiently large N, approximately 69% of d-regular graphs on N vertices are Ramanujan, meaning

\displaystyle\max \{\lambda_2,|\lambda_N|\}\leq 2.


Discussion

This new terrific paper connects to some of the most exciting developments in mathematics of the last fifty years: 

(1) Random matrix theory, the Tracy-Widom theory, and universality.

The joint probability density of eigenvalues in random matrices is expressed by the following formula:

\displaystyle \frac{1}{Z_{\beta,n}}\prod_{k=1}^n e^{-\frac{\beta}{4}\lambda_k^2}\prod_{i<j}|\lambda_j-\lambda_i|^\beta ,

where β=1,2,4 corresponds to the different random matrix ensembles (GOE, GUE, and GSE, respectively), though other values of β are also important! This formula gives a lot of information on the distribution of individual eigenvalues. See also this post of Terry Tao on the Gaussian ensembles in random matrix theory. 

In the early 1990s, Craig Tracy and Harold Widom identified the distribution of the normalized largest eigenvalue of a random real symmetric matrix (GOE, β=1) and of a random complex Hermitian matrix (GUE, β=2).  The new paper proves universality for adjacency matrices of random d-regular graphs. Here “universality” means that “after proper normalization and shifts, spectral statistics agree with those from GOE.” (This was conjectured by Sarnak and by Miller, Novikoff, and Sabelli.) 

Random matrix theory also has important applications in combinatorics, physics, and number theory. For example, Baik, Deift, and Johansson established a connection between the Tracy-Widom distribution and combinatorics in their 1999 paper, On the distribution of the length of the longest increasing subsequence of random permutations. 
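Here is a small numerical illustration (my own rough sketch) of the Tracy-Widom edge fluctuations mentioned above:

```python
# Sample GOE matrices and look at the fluctuations of the largest eigenvalue, which, centered
# at the spectral edge 2 and rescaled by N^(2/3), approximately follow Tracy-Widom (beta = 1).
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 300
largest = []
for _ in range(trials):
    A = rng.normal(size=(N, N))
    H = (A + A.T) / np.sqrt(2 * N)        # GOE normalization: spectrum roughly fills [-2, 2]
    largest.append(np.linalg.eigvalsh(H)[-1])

scaled = (np.array(largest) - 2.0) * N ** (2 / 3)
print("mean of the scaled largest eigenvalue:", scaled.mean())
# For large N this should be close to the Tracy-Widom (beta = 1) mean, roughly -1.2.
```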

(2) Ramanujan graphs. 

Ramanujan graphs were constructed in the 1980s independently by Margulis and by Lubotzky, Phillips and Sarnak (who also coined the term).

Consider the adjacency matrix of a d-regular graph G with n vertices. The eigenvalues d and -d are trivial: d always appears, while -d appears only if G is bipartite. A graph is called a Ramanujan graph if all its nontrivial eigenvalues belong to the interval

[-2\sqrt{d-1},+2\sqrt{d-1}].

A theorem of Alon and Boppana asserts that for every \epsilon>0 there are only finitely many d-regular graphs all of whose non-trivial eigenvalues have absolute value less than 2\sqrt{d-1}-\epsilon. (Note that Huang, Mckenzie, and Yau further normalize the adjacency matrices so that the interval becomes [-2,2] for every d.) 

Complete graphs and complete bipartite graphs are Ramanujan, and the challenge is to construct Ramanujan graphs where the degree d is fixed and the number of vertices n is large. The Ramanujan property proved by Margulis and by Lubotzky, Phillips and Sarnak for the graphs they constructed relies on deep number theory. The original Ramanujan graphs (both bipartite and not) of Margulis and of Lubotzky, Phillips and Sarnak had degrees of the form p+1, where p is a prime. Moshe Morgenstern extended this to degrees of the form q+1 for every prime power q.

A decade ago, Marcus, Spielman, and Srivastava introduced a new construction for arbitrary degrees in the bipartite case using Bilu-Linial graph lifting (see the 2013 post New Ramanujan graphs).

However, a major open question remained

Are there d-regular Ramanujan (non-bipartite) graphs when d is not a prime power?

As far as I know, this question is answered in the affirmative for the first time by Huang,  Mckenzie, and Yau in the new paper. 

(3) Random regular graphs 

In 2008, Joel Friedman proved that a random d-regular graph is, with high probability, nearly Ramanujan — a result that confirmed a conjecture by Noga Alon. A graph is said to be nearly Ramanujan if its nontrivial eigenvalues lie within the interval [-2\sqrt{d-1}-o(1),+2\sqrt{d-1}+o(1)]. We reported in earlier posts about a simple proof by Charles Bordenave and a related recent result by Chen, Garza-Vargas, Tropp, and van Handel. Let me also mention Doron Puder’s 2014 beautiful paper: Expansion of random graphs: new proofs, new results.

A major open question was  

Are random d-regular graphs Ramanujan? Are random 3-regular graphs Ramanujan? 

Early in the study of Ramanujan graphs, opinions were divided. Peter Sarnak conjectured that the answer is negative with high probability as the number of vertices tends to infinity, while Noga Alon conjectured the opposite—that it is positive with high probability. The two even made a bet about this question, though the exact terms of the wager were eventually forgotten.

Subsequent computer simulations suggested that the probability of a random graph being Ramanujan lies strictly between 0 and 1. This observation aligns with conjectures by Sarnak, Miller, Novikoff, and Sabelli. The new paper computes the precise probability: a random d-regular graph is Ramanujan with probability c+o(1), where c is roughly 0.69 (this constant remains the same across all values of d).  
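One can get a feel for this numerically; here is a rough simulation sketch of my own (using numpy and networkx; the parameters are arbitrary):

```python
# Estimate the fraction of random d-regular graphs that are Ramanujan, i.e. whose nontrivial
# adjacency eigenvalues satisfy max(lambda_2, |lambda_N|) <= 2*sqrt(d-1).
import numpy as np
import networkx as nx

d, n, trials = 3, 200, 200
count = 0
for _ in range(trials):
    G = nx.random_regular_graph(d, n)
    eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    lam2, lamN = eig[-2], eig[0]          # second-largest and smallest eigenvalues
    if max(lam2, abs(lamN)) <= 2 * np.sqrt(d - 1):
        count += 1
print(f"Ramanujan fraction ≈ {count / trials:.2f}")   # the paper predicts about 0.69 asymptotically
```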

There is a rich theory of random regular graphs with many exciting results and problems. Let me mention a 1994 paper of Wormald, brought to my attention by Geva Yashfe, asserting that with high probability random 3-regular graphs are 3-edge colorable (and even Hamiltonian). 

Thanks to Nati Linial and Peter Sarnak who told me about the new paper.

Two slightly related stories

(1) The Tracy-Widom distribution is expected to emerge in the context of planar first-passage percolation. However, proving this result seems far out of reach at present.

(2) A decade ago or so, Yuval Peres told me that familiarity with the law of the iterated logarithm serves as a good test of one’s knowledge of probability theory. This remark motivated me to study the law, to ask multiple experts to explain it to me, and to try to come up with related problems to test my intuition. This led me to pose a question on MathOverflow: Laws of Iterated Logarithm for Random Matrices and Random Permutation.

Elliot Paquette and Ofer Zeitouni later resolved my problem for random matrices in their paper Extremal eigenvalue fluctuations in the GUE minor process and the law of fractional logarithm. A year later, when I met Yuval again and shared my progress in understanding the law of the iterated logarithm with him, Yuval remarked that while the law of the iterated logarithm is a necessary condition for understanding probability theory, it is not sufficient 🙂 . 

Another connection is that the law of the iterated logarithm is related to the genesis of the Khintchin inequalities. (This is briefly described, with references, in Ron Blei’s book Analysis in Integer and Fractional Dimensions.) Khintchin’s inequality is a grandparent of the hypercontractive inequalities, which have broad applications in combinatorics, including results related to first-passage percolation, and even a little for random matrices. 
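For reference, Khintchin’s inequality states that for independent random signs \epsilon_1,\dots,\epsilon_n, real coefficients a_1,\dots,a_n, and every 0<p<\infty, there are constants A_p, B_p depending only on p such that

\displaystyle A_p\Big(\sum_{i=1}^n a_i^2\Big)^{1/2}\ \le\ \Big(\mathbb E\,\Big|\sum_{i=1}^n a_i\epsilon_i\Big|^p\Big)^{1/p}\ \le\ B_p\Big(\sum_{i=1}^n a_i^2\Big)^{1/2}.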


The Answer to TYI (57): In Dimension Three or More, Intuitive Norms are Euclidean

Consider \mathbb R^n equipped with a norm. Given a finite set of points K and a point x, we consider T(x,K), the sum of distances from x to the points in K. Next we consider the set of points M(K) that attain the minimum of T(x,K). Such a point is called a geometric median of K with respect to the norm. We say that the norm is intuitive if, for every finite K, M(K) has non-empty intersection with the convex hull of K.

For the Euclidean norm, the geometric median is also known as the Fermat-Weber point, the Torricelli point, and Haldane’s median.

Our first question in TYI (57) was whether there are non-intuitive norms. We further asked which norms are intuitive.

(In the question I used “nice” instead of “intuitive” and did not mention the term “geometric median”.)
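Readers who want to experiment can compute geometric medians numerically and test hull membership along these lines (a rough sketch of my own using scipy; the point set, norms, and tolerance are arbitrary choices, and this particular toy example will not by itself exhibit a non-intuitive configuration, but the same code can be used to search for one):

```python
# Compute a geometric median of K under a given norm by direct minimization, then check
# whether it lies (approximately) in the convex hull of K via nonnegative least squares.
import numpy as np
from scipy.optimize import minimize, nnls

def geometric_median(K, norm):
    T = lambda x: sum(norm(x - p) for p in K)            # T(x, K): sum of distances to K
    return minimize(T, K.mean(axis=0), method="Nelder-Mead").x

def in_convex_hull(x, K, tol=1e-6):
    # Find nonnegative weights w with K^T w ≈ x and sum(w) = 1.
    A = np.vstack([K.T, np.ones(len(K))])
    b = np.append(x, 1.0)
    _, resid = nnls(A, b)
    return resid < tol

K = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])   # a toy point set in R^3
for name, norm in [("l2", np.linalg.norm), ("l1", lambda v: np.abs(v).sum())]:
    m = geometric_median(K, norm)
    print(name, "median ≈", np.round(m, 3), " in hull:", in_convex_hull(m, K))
```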

The answer is given by a recent theorem of Shay Moran, Alexander Shlimovich, and Amir Yehudayoff in their paper Intuitive norms are Euclidean:

Answer to TYI (57): In the plane all norms are intuitive; in dimensions 3 and more, only Euclidean norms are.

Update: Amir told me that the result had already been proved by Durier, and he gave the following historical description:

  • The first to prove this is Durier (94).
  • There is a sequence of more detailed results by Lewicki (95), and Benitez, Fernandez and Soriano (2002).
  • Garkavi and Klee (60s) proved a similar result for the Chebyshev point (the L-infinity version of the geometric median).
  • Mendoza and Pakhrou (2005) proved the same result for all L-p versions.