With the news of Google achieving “quantum supremacy”, I found it pertinent to speak about the current landscape of quantum computing. What does this mean for us as a potential threat? And what is currently being done in the world of cryptography to protect and prepare us?
Google, along with a few other major tech players, is currently developing its own quantum computer. The companies working on this are some of the largest in tech, ranging from IBM to Lockheed Martin to Microsoft. Before we dive into the importance of what Google has achieved, we must first understand the current working environment.
Quantum computing development began toward the end of the 20th century, revolving around the theoretical plausibility of non-classical computation. Three decades later, with the advancement of quantum processors, Google has proclaimed that it has achieved quantum supremacy. Google says its current quantum computer, “Sycamore”, completed a problem that would take today’s top supercomputers 10,000 years to compute.
IBM has disputed Google’s quantum supremacy claim. In computational complexity theory, quantum supremacy means solving a problem superpolynomially faster on a quantum computer than on the best classical supercomputers. IBM says that Google has actually achieved a quantum advantage rather than supremacy: a real but far smaller speedup on the problem that “Sycamore” solved. IBM states:
We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.
A quantum advantage is only a polynomial speedup rather than a superpolynomial one. The distinction rests on Cobham’s thesis, which identifies feasible computation with polynomial time. With a few tweaks, IBM says, it could further reduce the classical time needed to solve this problem. Supremacy is achieved only when the classical computation is simply not feasible at any realistic scale.
This is still proof of the speedups that quantum computers offer: with only 53 quantum bits, beating today’s supercomputers by even a few hours shows the progress quantum computation has made over the past few years. IBM has been at the forefront of computational technology since the middle of the 20th century, so it is not a disinterested observer here. Even if Google hasn’t achieved quantum supremacy, time will further widen the complexity gap.
With many people in tech, especially in blockchain, talking about this, the same few myths tend to surface every time there is news from the quantum field:
Most of the time, the individuals who suspect encryption-breaking tech to be right around the corner are the least informed. There is a potential threat to RSA, but it is MANY years down the road, and that’s IF it is even plausible.
The General Number Field Sieve is one of the most efficient known algorithms for factoring large integers, but it still cannot factor integers of the sizes used in modern keys.
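To give a sense of scale, here is a sketch that evaluates the standard heuristic GNFS complexity estimate, exp((64/9)^(1/3) · (ln n)^(1/3) · (ln ln n)^(2/3)), for a few common RSA modulus sizes. The exact operation counts are heuristic, not measured costs, so treat the numbers as rough orders of magnitude:

```python
import math

def gnfs_cost_estimate(bits):
    """Heuristic GNFS work factor L_n[1/3, (64/9)^(1/3)] for an
    n-bit modulus, returned as log2 of the operation count."""
    ln_n = bits * math.log(2)              # ln(2^bits)
    c = (64 / 9) ** (1 / 3)
    ln_cost = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_cost / math.log(2)           # convert nats -> bits of work

for bits in (512, 1024, 2048):
    print(f"RSA-{bits}: roughly 2^{gnfs_cost_estimate(bits):.0f} operations")
```

The growth is subexponential, which is why 512-bit keys have fallen to classical machines while 2048-bit keys remain well out of reach.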
The RSA cryptosystem is one of the first public-key algorithms ever created and makes up much of the backbone of internet infrastructure. RSA creates a public/private key pair derived from two large prime numbers; the system is built on asymmetric encryption. What makes RSA so difficult for an attacker to break is recovering those primes from their product — a large-integer factorization for which no efficient classical method is known.
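To make this concrete, here is a toy RSA key generation and round trip using the classic textbook primes 61 and 53. This is an illustration only — real keys use primes of a thousand bits or more, plus padding and other protections this sketch omits:

```python
# Toy RSA (illustration only; NOT secure at this size).
p, q = 61, 53                 # the two secret primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # 3120, kept secret
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: e*d == 1 (mod phi)

msg = 65
cipher = pow(msg, e, n)       # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
assert plain == msg

# An attacker who can factor n back into p and q can recompute phi
# and d — which is why RSA's security rests on the hardness of factoring.
```

Note the last comment: everything secret (phi, d) falls out immediately once n is factored, which is exactly the step quantum computers threaten.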
There are many different potential attacks that have been theorized against these systems. Scott Aaronson has an excellent FAQ regarding quantum computing.
Q2. If Google has indeed achieved quantum supremacy, does that mean that now “no code is uncrackable”?
There are two issues here. First, the devices currently being built by Google, IBM, and others have 50–100 qubits and no error-correction…With known error-correction methods, that could easily translate into millions of physical qubits, and those probably of a higher quality than any that exist today. I don’t think anyone is close to that, and we have no idea how long it will take.
But the second issue is that, even in a hypothetical future with scalable, error-corrected QCs, on our current understanding they’ll only be able to crack some codes, not all of them. By an unfortunate coincidence, the public-key codes that they can crack include most of what we currently use to secure the Internet: RSA, Diffie-Hellman, elliptic curve crypto, etc. But symmetric-key crypto should only be minimally affected. And there are even candidates for public-key cryptosystems (for example, based on lattices) that no one knows how to break quantumly after 20+ years of trying, and some efforts underway now to start migrating to those systems.
Bruce Schneier has written in multiple blog posts that quantum computing poses a threat to cryptography. He isn’t sure what the danger will look like, and he says it is far off if it materializes at all; he isn’t fearmongering, and his views are grounded in reason. In an academic article linked from one of his posts, the researchers estimate that the cost of integer factorization, as well as of discrete logarithms over finite fields, could be brought down to about eight hours. That estimate is theoretical and assumes 20 million noisy qubits, while the current “Sycamore” has 53. Eight hours to crack RSA is an extremely short amount of time, but getting to that point on the quantum computing timeline seems a long way off. In 2009, the state of the art was close to 10 qubits, so in ten years the field’s largest machines have grown by only about 40 qubits. That doesn’t mean the technology will scale linearly; I’m pointing out how long it has taken to get this far. Breakthroughs in managing qubit decoherence would help further, bringing us closer to fault-tolerant quantum computing.
Both Aaronson and Schneier acknowledge that what has been achieved is very promising. It is still very far from anything that would put the cryptosystems the internet is built upon in immediate danger.
Finally, there is the quantum integer-factorization algorithm: Shor’s algorithm. Given a large integer N, Shor’s algorithm can find its prime factors. This is the crux of the RSA problem: if the factors can be discovered, private keys can be derived from their public keys. Progress against quantum decoherence will allow operations on larger and larger sets of qubits, which will eventually make running Shor’s algorithm against large enough key sizes possible. Jonathan Hui wrote a great post unpacking Shor’s algorithm and how it could factorize a 2048-bit key. The idea behind Shor’s algorithm is that it looks for the period of a modular exponentiation function. It doesn’t iterate through all possible factors of N; instead, the period-finding subroutine yields exactly the information needed to factor the large number.
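The period-finding reduction can be mimicked classically for tiny numbers. The sketch below brute-forces the period r of a^x mod N and then takes gcd(a^(r/2) ± 1, N) to recover the factors; only the period-finding step is what a quantum computer speeds up, and the brute-force loop here is exactly the part that becomes infeasible classically at RSA sizes:

```python
from math import gcd

def factor_via_period(N, a):
    """Classically mimic Shor's reduction: find the period r of
    a^x mod N by brute force, then use gcd(a^(r//2) +/- 1, N).
    Returns a factor pair, or None if this choice of a fails."""
    assert gcd(a, N) == 1
    # Brute-force the period: smallest r > 0 with a^r == 1 (mod N)
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:                  # need an even period; retry with another a
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:             # trivial square root of 1; retry
        return None
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor_via_period(15, 7))   # period of 7 mod 15 is 4 -> factors (3, 5)
```

Some choices of a fail (odd period or a trivial root), in which case the algorithm simply retries with a different random base — the same retry logic Shor’s quantum version uses.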
Shor’s algorithm and a few other quantum algorithms are waiting on the sidelines for quantum hardware to reach a level of fault tolerance high enough for them to run without issues. That looks to be a little ways off.
There is also a potential risk to Bitcoin and other blockchains, since they are all based on public-key cryptography. The Bitcoin wiki lays out contingency plans in case SHA-256 is broken:
SHA-256 is broken
Situation: Severe, 0-day failure of SHA-256. First/second preimage resistance or collision resistance can be defeated with only a few days of work.
Attacker may be able to defeat OP_CHECKSIG, which hashes transactions before signing.
Attacker may be able to split the network by creating identical transactions or blocks with the same hashes.
Attacker may be able to create blocks very quickly.
The alert system may be compromised.
Users will be notified to shut down their clients. Note that the attacker may be able to send valid alerts, which could disrupt notification efforts.
OP_CHECKSIG will be changed to use some other hash outside of old blocks.
All addresses in the version-1 chain that have a known public key and at least one unspent output will have their public keys hardcoded into the client. When a version-2 transaction spends one of these version-1 outputs, the hardcoded public key will be used instead of the hash.
The version-1 chain will be securely hashed into a hash tree. At least the root of the version-1 tree will be hardcoded into the client.
All hashing Bitcoin does will use the new hashing algorithm.
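To make the threat model above concrete, here is a short sketch of what SHA-256 produces and the three properties the wiki scenario assumes have failed. The input bytes are an arbitrary stand-in, not real transaction data:

```python
import hashlib

# SHA-256 as Bitcoin uses it: a fixed 256-bit digest of arbitrary data.
digest = hashlib.sha256(b"some transaction bytes").hexdigest()
print(digest)

# The contingency plan is about these properties failing:
#  - preimage resistance:   given a digest, finding ANY input that hashes
#    to it should take ~2^256 work
#  - second-preimage resistance: given an input, finding a DIFFERENT input
#    with the same digest should also take ~2^256 work
#  - collision resistance:  finding any two colliding inputs should take
#    ~2^128 work (the birthday bound)
assert len(digest) == 64   # 256 bits = 64 hex characters
```

A “0-day failure” in the wiki’s sense means one of these work factors collapses to something achievable “with only a few days of work”.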
There is a second section focusing on if ECDSA is broken:
ECDSA is broken
Situation: an attacker can sign for a public key that he does not own the private key for in only a few days of work.
Attacker can spend money that is not his in a large number of cases. Transactions to addresses that have never been used before may be protected if SHA-256 and RIPEMD-160 are still strong.
Alert system may be compromised.
If the attacker can’t get the private key from the public key easily and a stronger algorithm that can use ECDSA keys is available:
Switch to the stronger algorithm.
Get users to update. Alerts will be compromised.
OP_CHECKSIG should use some other signing algorithm.
As soon as the new version of Bitcoin is run, it should automatically send all old transactions somewhere else using the new algorithm.
Get users to update immediately. Alerts will be compromised.
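The note above that never-used addresses “may be protected if SHA-256 and RIPEMD-160 are still strong” is worth unpacking: until an output is spent, the chain stores only a hash of the public key, not the key itself. The sketch below illustrates the idea with a made-up stand-in key; Bitcoin actually hashes with SHA-256 followed by RIPEMD-160, but SHA-256 alone is used here for portability:

```python
import hashlib

# Why a never-used address can survive an ECDSA break: before any spend,
# the blockchain exposes only a hash commitment to the public key.
public_key = b"\x02" + b"\x11" * 32        # stand-in 33-byte compressed pubkey
address_hash = hashlib.sha256(public_key).digest()

# An attacker who can break ECDSA still sees only address_hash. To target
# this address they must first invert the hash to recover public_key —
# so BOTH the hash function AND ECDSA would have to fall.
print(address_hash.hex())
```

This is why the wiki’s ECDSA contingency urges moving coins to fresh, unused addresses: revealing a public key (by spending) is what exposes it to a signature-scheme break.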
Both of these issues can be mitigated relatively quickly; the catch is that the fixes would need to be adopted on short notice. The patches must also be resilient against the newly discovered threat, or the fix will share the same fate as its predecessor.
Educate yourselves and learn how to remain private and secure. If you have any questions regarding securing digital assets or any other security questions related to cryptocurrency security, you can reach me at [email protected]