Quantum theory provides an extremely accurate description of fundamental processes in physics. It thus seems likely that the theory is applicable beyond the mostly microscopic domain in which it has been tested experimentally. Here, we propose a Gedankenexperiment to investigate the question of whether quantum theory can, in principle, have universal validity. The idea is that, if the answer were yes, it must be possible to employ quantum theory to model complex systems that include agents who are themselves using quantum theory. Analysing the experiment under this presumption, we find that one agent, upon observing a particular measurement outcome, must conclude that another agent has predicted the opposite outcome with certainty. The agents’ conclusions, although all derived within quantum theory, are thus inconsistent. This indicates that quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner.
According to quantum theory, a measurement may have multiple possible outcomes. Single-world interpretations assert that, nevertheless, only one of them "really" occurs. Here we propose a gedankenexperiment where quantum theory is applied to model an experimenter who herself uses quantum theory. We find that, in such a scenario, no single-world interpretation can be logically consistent. This conclusion extends to deterministic hidden-variable theories, such as Bohmian mechanics, for they impose a single-world interpretation.
A thought experiment has shaken up the world of quantum foundations, forcing physicists to clarify how various quantum interpretations (such as many-worlds and the Copenhagen interpretation) abandon seemingly sensible assumptions about reality.
IN WATCHING the flow of events over the past decade or so, it is hard to avoid the feeling that something very fundamental has happened in world history. The past year has seen a flood of articles commemorating the end of the Cold War, and the fact that "peace" seems to be breaking out in many regions of the world. Most of these analyses lack any larger conceptual framework for distinguishing between what is essential and what is contingent or accidental in world history, and are predictably superficial. If Mr. Gorbachev were ousted from the Kremlin or a new Ayatollah proclaimed the millennium from a desolate Middle Eastern capital, these same commentators would scramble to announce the rebirth of a new era of conflict.
And yet, all of these people sense dimly that there is some larger process at work, a process that gives coherence and order to the daily headlines. The twentieth century saw the developed world descend into a paroxysm of ideological violence, as liberalism contended first with the remnants of absolutism, then bolshevism and fascism, and finally an updated Marxism that threatened to lead to the ultimate apocalypse of nuclear war. But the century that began full of self-confidence in the ultimate triumph of Western liberal democracy seems at its close to be returning full circle to where it started: not to an "end of ideology" or a convergence between capitalism and socialism, as earlier predicted, but to an unabashed victory of economic and political liberalism.
August 29, 2018 at 09:37AM
Building on this, could we create a list of governments and empires and rank them in order of the length of their spans? There may be subtleties in changes of regimes in some eras, but generally things are probably reasonably well laid out. I wonder if the length of life of particular governments follows a power law? One would suspect it might. ❧
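The power-law hunch in this note can at least be made concrete. The sketch below is a minimal illustration, not an analysis of real historical data: it draws synthetic "lifespans" from a Pareto distribution and then recovers the exponent with the standard continuous maximum-likelihood estimator, which is how one would test the idea against an actual list of regime durations.

```python
import numpy as np

def powerlaw_alpha(lifespans, x_min):
    """MLE for the exponent of a continuous power law p(x) ~ x^-alpha, x >= x_min:
    alpha = 1 + n / sum(ln(x / x_min))."""
    x = np.asarray([d for d in lifespans if d >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

rng = np.random.default_rng(0)
true_alpha, x_min = 2.5, 10.0  # hypothetical exponent and minimum span (years)
# Synthetic "government lifespans": shifted Pareto draws with density exponent 2.5.
samples = x_min * (1.0 + rng.pareto(true_alpha - 1.0, size=5000))
print(round(powerlaw_alpha(samples, x_min), 2))  # should land near 2.5
```

With a real dataset of regime spans one would also need to estimate `x_min` itself and compare the power-law fit against alternatives such as a lognormal, since heavy-tailed data rarely discriminate between the two on their own.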
August 29, 2018 at 09:43AM
Highlights, Quotes, Annotations, & Marginalia
The triumph of the West, of the Western idea, is evident first of all in the total exhaustion of viable systematic alternatives to Western liberalism. ❧
August 29, 2018 at 08:53AM
What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government. ❧
What if, in fact, we’ve only just found a local maximum? What if in the changing landscape there are other places we could potentially get to competitively that supply greater maxima? And possibly worse, what if we need to lose value to get from here to unlock even more value there?
August 29, 2018 at 08:56AM
Hegel believed that history culminated in an absolute moment – a moment in which a final, rational form of society and state became victorious. ❧
and probably not a bad outcome in an earlier era that thought of things in terms of clockwork and lacked the ideas of quantum theory and its attendant uncertainties.
August 29, 2018 at 08:59AM
Believing that there was no more work for philosophers as well, since Hegel (correctly understood) had already achieved absolute knowledge, Kojève left teaching after the war and spent the remainder of his life working as a bureaucrat in the European Economic Community, until his death in 1968. ❧
This is depressing on so many levels.
August 29, 2018 at 09:05AM
Paul Kennedy’s hugely successful “The Rise and Fall of the Great Powers”, which ascribes the decline of great powers to simple economic overextension. ❧
Curious how this may relate to the more recent “The End of Power” by Moisés Naím. It doesn’t escape one that the title of the book somewhat echoes the title of this particular essay.
August 29, 2018 at 09:18AM
But whether a highly productive modern industrial society chooses to spend 3 or 7 percent of its GNP on defense rather than consumption is entirely a matter of that society’s political priorities, which are in turn determined in the realm of consciousness. ❧
It’s not so much the percentage spent on defense goods as how quickly a society could ramp up production of goods, services, and people to defend itself compared to the militaries of its potential aggressors.
In particular, most of the effort should go to the innovation side of war materiel. The atomic bomb is a particularly nice example: conceptualizing and then executing on it allowed the US to win the war in the Pacific and hasten the end of the war in Europe. Even if we had otherwise held massive stockpiles of people or other weapons, our enemies could potentially have equaled them and dragged the war on interminably. It was the unknown unknown, arrived at via innovation, that unseated Japan, and innovation coming out of almost any country in the modern age could potentially do the same to us.
August 29, 2018 at 09:24AM
Weber notes that according to any economic theory that posited man as a rational profit-maximizer, raising the piece-work rate should increase labor productivity. But in fact, in many traditional peasant communities, raising the piece-work rate actually had the opposite effect of lowering labor productivity: at the higher rate, a peasant accustomed to earning two and one-half marks per day found he could earn the same amount by working less, and did so because he valued leisure more than income. The choices of leisure over income, or of the militaristic life of the Spartan hoplite over the wealth of the Athenian trader, or even the ascetic life of the early capitalist entrepreneur over that of a traditional leisured aristocrat, cannot possibly be explained by the impersonal working of material forces, ❧
Science could learn something from this. Science is so focused on idealized positive outcomes that it isn’t paying attention to negative outcomes or using them to better define its outline or overall shape. We need to define a scientific opportunity cost and apply it to the negative side of research to better understand and define what we’re searching for.
Of course, how can we define a new scientific method (or amend/extend it) to better take into account negative results–particularly in an age when so many results aren’t even reproducible?
August 29, 2018 at 09:32AM
FAILURE to understand that the roots of economic behavior lie in the realm of consciousness and culture leads to the common mistake of attributing material causes to phenomena that are essentially ideal in nature. ❧
August 29, 2018 at 09:44AM
“Protestant” life of wealth and risk over the “Catholic” path of poverty and security. ❧
Is this simply a restatement of the idea that most of “the interesting things” happen at the border or edge of chaos? The Catholic ethic is firmly inside the stable arena while that of the Protestant ethic is pushing the boundaries.
August 29, 2018 at 09:47AM
Hence it did not matter to Kojève that the consciousness of the postwar generation of Europeans had not been universalized throughout the world; if ideological development had in fact ended, the homogenous state would eventually become victorious throughout the material world. ❧
This presupposes that homeostasis could ever be achieved.
One thinks of phrases like “The future is here, it just isn’t evenly distributed.” But much of what we know about evolving systems indicates that homeostasis isn’t necessarily a good thing. In many cases it means eventual “death” rather than evolution toward a longer-term lifespan. Again, Kauffman’s ideas about co-evolving systems and evolving landscapes may provide some guidance here. What if we’re just at a temporary local maximum, but changes in the landscape modify that fact? What then? Shouldn’t we be looking for other potential distant maxima as well?
August 29, 2018 at 09:52AM
But that state of consciousness that permits the growth of liberalism seems to stabilize in the way one would expect at the end of history if it is underwritten by the abundance of a modern free market economy. ❧
Writers spend an awful lot of time focused too carefully on the free market economy, but don’t acknowledge a lot of the major benefits of the non-free market parts which are undertaken and executed often by governments and regulatory environments. (Hacker & Pierson, 2016)
August 29, 2018 at 10:02AM
Are there, in other words, any fundamental “contradictions” in human life that cannot be resolved in the context of modern liberalism, that would be resolvable by an alternative political-economic structure? ❧
Churchill famously said “…democracy is the worst form of Government except for all those other forms that have been tried from time to time…”
Even within this quote it is implicit that there are many others. In some sense he’s admitting that we might possibly be at a local maximum but we’ve just not explored the spaces beyond the adjacent possible.
August 29, 2018 at 10:08AM
For our purposes, it matters very little what strange thoughts occur to people in Albania or Burkina Faso, for we are interested in what one could in some sense call the common ideological heritage of mankind. ❧
While this seems solid on its face, we don’t know what the future landscape will look like. What if climate change brings about massive destruction of homo sapiens? We need to be careful about how and why we explore both the adjacent possible and the distant possible. One day we may need them, and our current local maximum may not serve us well.
August 29, 2018 at 10:10AM
I feel like this word captures very well the exact era of Trumpian Republicanism in which we find ourselves living.
August 29, 2018 at 10:37AM
After the war, it seemed to most people that German fascism as well as its other European and Asian variants were bound to self-destruct. There was no material reason why new fascist movements could not have sprung up again after the war in other locales, but for the fact that expansionist ultranationalism, with its promise of unending conflict leading to disastrous military defeat, had completely lost its appeal. The ruins of the Reich chancellery as well as the atomic bombs dropped on Hiroshima and Nagasaki killed this ideology on the level of consciousness as well as materially, and all of the pro-fascist movements spawned by the German and Japanese examples like the Peronist movement in Argentina or Subhas Chandra Bose’s Indian National Army withered after the war. ❧
And yet somehow we see these movements anew in America and around the world. What is the difference between then and now?
August 29, 2018 at 11:46AM
This is not to say that there are not rich people and poor people in the United States, or that the gap between them has not grown in recent years. But the root causes of economic inequality do not have to do with the underlying legal and social structure of our society, which remains fundamentally egalitarian and moderately redistributionist, so much as with the cultural and social characteristics of the groups that make it up, which are in turn the historical legacy of premodern conditions. ❧
August 29, 2018 at 11:47AM
But those who believe that the future must inevitably be socialist tend to be very old, or very marginal to the real political discourse of their societies. ❧
and then there are the millennials…
August 29, 2018 at 11:51AM
Beginning with the famous third plenum of the Tenth Central Committee in 1978, the Chinese Communist party set about decollectivizing agriculture for the 800 million Chinese who still lived in the countryside. The role of the state in agriculture was reduced to that of a tax collector, while production of consumer goods was sharply increased in order to give peasants a taste of the universal homogenous state and thereby an incentive to work. The reform doubled Chinese grain output in only five years, and in the process created for Deng Xiaoping a solid political base from which he was able to extend the reform to other parts of the economy. Economic Statistics do not begin to describe the dynamism, initiative, and openness evident in China since the reform began. ❧
August 29, 2018 at 11:58AM
At present, no more than 20 percent of its economy has been marketized, and most importantly it continues to be ruled by a self-appointed Communist party which has given no hint of wanting to devolve power. ❧
If Facebook were to continue to evolve at its current rate and with its potential power as well as political influence, I could see it attempting to work the way China does in a new political regime.
August 29, 2018 at 12:04PM
IF WE ADMIT for the moment that the fascist and communist challenges to liberalism are dead, are there any other ideological competitors left? Or put another way, are there contradictions in liberal society beyond that of class that are not resolvable? Two possibilities suggest themselves, those of religion and nationalism. ❧
August 29, 2018 at 12:19PM
This school in effect applies a Hobbesian view of politics to international relations, and assumes that aggression and insecurity are universal characteristics of human societies rather than the product of specific historical circumstances. ❧
August 29, 2018 at 12:30PM
But whatever the particular ideological basis, every “developed” country believed in the acceptability of higher civilizations ruling lower ones ❧
August 29, 2018 at 12:37PM
Perhaps this very prospect of centuries of boredom at the end of history will serve to get history started once again. ❧
Has it started again with nationalism, racism, and Trump?
August 29, 2018 at 12:48PM
Welcome to qubyte.codes! The personal site of Mark Stanley Everitt.
I'm also interested in the social side and ethics of software development. I'm a regular mentor at Codebar in Brighton.
I lived and worked in Tokyo for a number of years, initially as an academic (I hold a PhD in quantum optics and quantum information), and later as a programmer. I speak a little Japanese.
If you're interested in code I've published, I'm qubyte on GitHub.
Andrew Jordan reviews Peter Woit's Quantum Theory, Groups and Representations and finds much to admire.
To be published by Cambridge University Press in April 2018.
Upon publication this book will be available for purchase through Cambridge University Press and other standard distribution channels. Please see the publisher's web page to pre-order the book or to obtain further details on its publication date.
A draft, pre-publication copy of the book can be found below. This draft copy is made available for personal use only and must not be sold or redistributed.
This largely self-contained book on the theory of quantum information focuses on precise mathematical formulations and proofs of fundamental facts that form the foundation of the subject. It is intended for graduate students and researchers in mathematics, computer science, and theoretical physics seeking to develop a thorough understanding of key results, proof techniques, and methodologies that are relevant to a wide range of research topics within the theory of quantum information and computation. The book is accessible to readers with an understanding of basic mathematics, including linear algebra, mathematical analysis, and probability theory. An introductory chapter summarizes these necessary mathematical prerequisites, and starting from this foundation, the book includes clear and complete proofs of all results it presents. Each subsequent chapter includes challenging exercises intended to help readers to develop their own skills for discovering proofs concerning the theory of quantum information.
Co-Founder and CEO at Cambridge Quantum Computing
The books introduce subjects like rocket science, quantum physics and general relativity — with bright colors, simple shapes and thick board pages perfect for teething toddlers. The books make up the Baby University series — and each one begins with the same sentence and picture — This is a ball — and then expands on the titular concept.
We discuss properties of the "beamsplitter addition" operation, which provides a non-standard scaled convolution of random variables supported on the non-negative integers. We give a simple expression for the action of beamsplitter addition using generating functions. We use this to give a self-contained and purely classical proof of a heat equation and de Bruijn identity, satisfied when one of the variables is geometric.
Just over a year ago a senior Google engineer (Greg Corrado) explained why quantum computers, in the opinion of his research team, did not lend themselves to Deep Learning techniques such as convolutional neural networks or even recurrent neural networks.
As a matter of fact, Corrado’s comments were specifically based on Google’s experience with the D-Wave machine, but as so often happens in the fast-evolving Quantum Computing industry, the nuance that the then architecture and capacity of D-Wave’s quantum annealing methodology did not (and still does not) lend itself to Deep Learning or Deep Learning Neural Network (“DNN”) techniques was quickly lost in the headline. The most quoted part of Corrado’s comments became a sentence that further reinforced the view that Corrado (and thus Google) was negative about Deep Learning and Quantum Computing per se, and it was quickly conflated to be true of all quantum machines, not just D-Wave:
“The number of parameters a quantum computer can hold, and the number of operations it can hold, are very small” (full article here).
The headline for the article that contained the above quote was “Quantum Computers aren’t perfect for Deep Learning“, which simply serves to highlight the less-than-accurate inference. I have now lost count of the number of times someone has misquoted Corrado, or attributed his quote to Google’s subsidiary DeepMind, as a way of pointing out limitations in quantum computing when it comes either to Machine Learning (“ML”) more broadly or to Deep Learning more specifically.
Ironically, just a few months before Corrado’s talk, a paper by a trio of Microsoft researchers led by the formidable Nathan Wiebe (co-authored by his colleagues Ashish Kapoor and Krysta Svore), representing a major dive into quantum algorithms for deep learning that would be advantageous over classical deep learning algorithms, was quietly published on arXiv. The paper got a great deal less publicity than Corrado’s comments; in fact, even as I write this article more than 18 months after the paper’s v2 publication date, it has only been cited a handful of times (copy of the most recent updated paper here).
Before we move on, let me deal with one obvious inconsistency between Corrado’s comments and the Wiebe/Kapoor/Svore (“WKS”) paper and acknowledge that we are not comparing “apples with apples”. Corrado was speaking specifically about the actual application of Deep Learning in the context of a real machine – the D-Wave machine – whilst WKS are theoretical quantum information scientists whose “efficient” algorithms need a machine before they can be applied. However, that is also my main point in this article. Corrado was speaking only about D-Wave, and Corrado is in fact a member of the Quantum Artificial Intelligence team, so it would be a major contradiction if Corrado (or Google more broadly) felt that Quantum Computing and AI were incompatible!
I am not speaking here only about the semantics of the name of Corrado’s team. The current home page, as of Nov 27th 2016, for Google’s Quantum AI unit (based out of Venice Beach, LA) has the following statement (link to the full page here):
“Quantum Computing merges two great scientific revolutions of the 20th century: computer science and quantum physics. Quantum physics is the theoretical basis of the transistor, the laser, and other technologies which enabled the computing revolution. But on the algorithmic level today’s computing machinery still operates on “classical” Boolean logic. Quantum computing is the design of hardware and software that replaces Boolean logic by quantum law at the algorithmic level. For certain computations such as optimization, sampling, search or quantum simulation this promises dramatic speedups. Soon we hope to falsify the strong Church-Turing thesis: we will perform computations which current computers cannot replicate. We are particularly interested in applying quantum computing to artificial intelligence and machine learning. This is because many tasks in these areas rely on solving hard optimization problems or performing efficient sampling”
There is a lot to digest in that quote, including the tantalising statement about the strong “Church-Turing Thesis” (“CTT”). This is a very rich area of debate and research which, if followed even trivially in this article, would take up far more space than is available. For those interested in the foundational aspects of CTT, you could do worse than invest a little time listening to the incomparable Scott Aaronson, who spoke over the summer on this topic (link here). One last word on CTT while we are on the subject: few, if any, will speculate right now that quantum computers will actually threaten the original Church-Turing Thesis, and in the talk referenced above Scott does a great job of outlining just why that is the case. Ironically, the title of his talk is “Quantum Supremacy”, and the quote I have taken from Google’s website comes directly from the team led by Hartmut Neven, who has stated very publicly that Google will achieve that standard (i.e. Quantum Supremacy) in 2017.
Coming back to Artificial Intelligence and quantum computing, we should remember that even as recently as 14 to 18 months ago, most people would have been cautious about forecasting the advent of even small-scale quantum computing. It is easy to forget, especially in the heady days since mid 2016, but none of Google, IBM or Microsoft had unveiled their advances, and as I wrote last week (here), things have clearly moved on very significantly in a relatively short space of time. Not only do we have an open “arms” race between the West and China to build a large-scale quantum machine, but we have a serious clash of some of the most important technology innovators in recent times. Amazingly, scattered in the mix are a small handful of start-ups who are also building machines. Above all, however, the main takeaway from all this activity, from my point of view, is that I don’t think it should be surprising that converting “black-box” neural network outputs into probability distributions will become the focus for anyone approaching DNN from a quantum physics and quantum computing background.
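For readers wondering what “converting neural network outputs into probability distributions” means concretely: the standard classical device is the softmax map applied to a network’s raw scores (logits). A minimal sketch, not tied to any particular framework:

```python
import numpy as np

def softmax(logits):
    """Map raw network scores to a probability distribution."""
    z = logits - np.max(logits)  # shift by the max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

# Three raw class scores become three probabilities summing to one,
# with the largest score receiving the largest probability mass.
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs.sum())  # 1.0 up to floating point
```

The quantum-physics resonance the paragraph hints at is natural: measurement outcomes in quantum mechanics are themselves probability distributions over states, so this translation step is home turf for quantum-trained researchers.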
It is this significant advance that means that, for the very same reason Google/IBM/Microsoft talk openly about their plans to build a machine (and, in Google’s case, acknowledge that they have now actually built a quantum computer of their own), one of the earliest applications likely to be tested on even prototype quantum computers will be some aspect of Machine Learning. Corrado was right to confirm that, in the opinion of the Google team working at the time, the D-Wave machine was not usable for AI or ML purposes. It was not his fault that his comments were misreported. It is worth noting that one of the people most credibly seen as the “grandfather” of AI and Machine Learning, Geoffrey Hinton, is part of the same team at Google that has adopted the Quantum Supremacy objective. There are clearly amazing teams assembled elsewhere, but where quantum computing meets Artificial Intelligence, it’s hard to beat the sheer intellectual firepower of Google’s AI team.
Outside of Google, a nice and fairly simple way of seeing how the immediate boundary between the theory of quantum machine learning and its application on “real” machines has been eroded is to compare two versions of exactly the same talk by one of the sector’s early cheerleaders, Seth Lloyd. Here is a link to a talk that Lloyd gave through Google Tech Talks in early 2014, and here is a link to exactly the same talk delivered a couple of months ago. Not surprisingly Lloyd, as a theorist, brings a similar approach to the subject as WKS, but in the second of the two presentations he also discusses one of his more recent preoccupations: analysing large data sets using algebraic topological methods that can be manipulated by a quantum computer.
For those of you who might not be familiar with Lloyd I have included a link below to the most recent form of his talk on a quantum algorithm for large data sets represented by topological analysis.
One of the most interesting aspects illuminated by Lloyd’s position on quantum speed-up using quantum algorithms for classical machine learning operations is his use of the example of the “Principal Component Analysis” algorithm (“PCA”). PCA is one of the most common machine learning techniques in classical computing, and Lloyd (and others) have been studying quantum computing versions for at least the past 3 to 4 years.
Finding a use case for a working quantum algorithm that can be implemented in a real use case such as one of the literally hundreds of applications for PCA is likely to be one of the earliest ways that quantum computers with even a limited number of qubits could be employed. Lloyd has already shown how a quantum algorithm can be proven to exhibit “speed up” when looking just at the number of steps taken in classifying the problem. I personally do not doubt that a suitable protocol will emerge as soon as people start applying themselves to a genuine quantum processor.
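As a point of reference for what a quantum PCA routine would be speeding up, here is classical PCA in a few lines: center the data, diagonalize its covariance matrix, and project onto the leading eigenvectors. The data below is synthetic and purely illustrative, constructed so that one direction dominates the variance.

```python
import numpy as np

def pca(data, n_components):
    """Return the top principal components and the data projected onto them."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components] # pick the largest ones
    components = eigvecs[:, order]
    return components, centered @ components

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
# Columns 1 and 2 are perfectly correlated; column 3 is small independent noise,
# so almost all the variance lies along a single direction.
data = np.hstack([base, 0.9 * base, 0.1 * rng.normal(size=(200, 1))])
components, projected = pca(data, 1)
print(projected.shape)  # one score per sample: (200, 1)
```

Lloyd’s claimed quantum speed-up concerns exactly the eigendecomposition step: for a density matrix encoding the covariance, the leading eigenvectors can in principle be revealed with far fewer operations than an explicit classical diagonalization, under the usual caveats about state preparation and readout.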
At Cambridge Quantum Computing, my colleagues in the quantum algorithm team have been working on the subject from a different perspective in both ML and DNN. The most immediate application using existing classical hardware has come from the guys that created ARROW>, who have looked to build gradually from traditional ML through to DNN techniques for detecting and then classifying anomalies in “pure” time series (initially represented by stock prices). In the recent few weeks we have started advancing from ML to DNN, but the exciting thing is that the team has always looked at ARROW> in a way that lends itself to being upgraded with possible quantum components that can in turn be run on an early-release, smaller-scale quantum processor. Using a team of quantum physicists to approach AI problems so they can ultimately be worked off a quantum computer clearly has some advantages.
There are, of course, a great many areas other than the seemingly trivial sphere of finding anomalies in share prices where AI will be applied. In my opinion the best recently published overview of the whole AI space (incorporating the phase transition to quantum computing) is the Fortune article (here) that appeared at the end of September, and not surprisingly medical and genome-related AI applications for “big”-data-driven deep learning figure highly in the part of the article that covers the current state of affairs.
I do not know exactly how far we are away from the first headlines about quantum processors being used to help generate efficiency in at least some aspects of DNN. My personal guess is that deep learning dropout protocols that help mitigate the over-fitting problem will be the first area where quantum computing “upgrades” are employed and I suspect very strongly that any machine that is being put through its paces at IBM or Google or Microsoft is already being designed with this sort of application in mind. Regardless of whether we are years away or months away from that first headline, the center of gravity in AI will have moved because of Quantum Computing.
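For concreteness, here is what a classical dropout layer, the over-fitting mitigation mentioned above, actually does. This is a generic sketch of "inverted" dropout, not a claim about any vendor's implementation: at training time each activation is kept with probability `keep_prob` and the survivors are rescaled so the expected output is unchanged, while at test time the layer is the identity.

```python
import numpy as np

def dropout(activations, keep_prob, rng, train=True):
    """Inverted dropout: randomly zero activations during training,
    rescaling survivors by 1/keep_prob so the expectation is preserved."""
    if not train:
        return activations          # identity at inference time
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(2)
acts = np.ones((1000, 64))
dropped = dropout(acts, keep_prob=0.8, rng=rng)
# Roughly 20% of entries are zeroed, yet the mean stays close to 1.0.
print(round(dropped.mean(), 2))
```

The randomness is the point: because dropout amounts to sampling from an exponentially large ensemble of thinned networks, it is plausible (as the paragraph speculates) that sampling-oriented quantum hardware would be tried here first.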
It is argued that if the non-unitary measurement transition, as codified by Von Neumann, is a real physical process, then the "probability assumption" needed to derive the Second Law of Thermodynamics naturally enters at that point. The existence of a real, indeterministic physical process underlying the measurement transition would therefore provide an ontological basis for Boltzmann's Stosszahlansatz and thereby explain the unidirectional increase of entropy against a backdrop of otherwise time-reversible laws. It is noted that the Transactional Interpretation (TI) of quantum mechanics provides such a physical account of the non-unitary measurement transition, and TI is brought to bear in finding a physically complete, non-ad hoc grounding for the Second Law.
Remarkable progress in quantum information theory (QIT) has allowed the formulation of mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. We further demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy.