Quantum information is the information of the state of a quantum system.

[Image caption: Optical lattices use lasers to separate rubidium atoms (red) for use as information bits in neutral-atom quantum processors, prototype devices which designers are trying to develop into full-fledged quantum computers.]

To get upper bounds for a specific code and specific noise model, we can sometimes map the model to one of statistical mechanics. The threshold then corresponds to the point of a phase transition. See this paper for an example of how to do this, and the references therein for others.

Other than the threshold, another important factor is how easy it is to do quantum computation on the stored information. The surface code is quite bad at this, which is a major reason that people still consider other codes, despite the great advantages of the surface codes. The surface code can only do the X, Z and H gates very simply, but they aren't enough. The color code can also manage the S gate without too much trouble, but that still just restricts us to the Clifford gates. Expensive techniques like magic state distillation will still be needed in both cases to get the additional operations required for universality. Codes in higher dimensions, such as the 3D gauge color codes, can let you do a full universal gate set in a straightforward and fault-tolerant way. Unfortunately, they pay for this by being much less realistic to build. These slides might point you in the right direction for more resources on this matter.

It's also worth noting that even within the family of surface codes there are variations to explore. The stabilizers can be changed to an alternating pattern, or a YYYY stabilizer can be used, to better deal with certain noise types. More drastically, we could even make quite big changes to the nature of the stabilizers. There are also the boundary conditions, which are what distinguish a planar code from a toric code, etc. These and other details give us lots to optimize over.

Some time ago (I don't remember the details any more), I tried to calculate an upper bound on a fault-tolerant threshold. I suspect the assumptions that I made to get there wouldn't apply to every possible scenario, but I came up with an answer of 5.3% (non-paywall version). The idea was roughly to make use of a well-known connection between error correction codes and the distillation of multiple noisy Bell states into a single, less noisy Bell state. In essence, if you have multiple noisy Bell states, one strategy for making a single high-quality Bell state is to teleport the codewords of an error correction code through them. It's a two-way relationship: if you come up with a better distillation strategy, that defines a better error correcting code, and vice versa. So I wondered what would happen if you allowed a concatenated scheme of distillation of noisy Bell pairs, but allowed some errors to occur when applying the various operations. This would map directly to fault tolerance via concatenated error correcting codes. But the different perspective allowed me to estimate a threshold beyond which the noise accumulation would simply be too high, and thus the error correction scheme would fail.

Different works have made different assumptions. For example, this one restricts to specific gate sets, and derives an upper bound of 15% on the fault-tolerant threshold in a specific case (but then the question arises as to why you wouldn't pick the scheme with the highest upper bound, rather than the lowest!).
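To make the "distillation with noisy operations" idea concrete, here is a minimal numerical sketch. It is not the calculation behind the 5.3% figure above: it uses the standard BBPSSW recurrence for Werner-state Bell pairs, and the gate-noise model (the output pair is replaced by a maximally mixed pair with probability p_gate) plus all the function names are my own assumptions for illustration.

```python
# Toy model: concatenated Bell-pair distillation with noisy local operations.
# Watch whether iterating the distillation step still improves the fidelity.

def bbpssw_step(f: float) -> float:
    """Fidelity after one noiseless BBPSSW round on two Werner pairs of
    fidelity f. The denominator is the success probability of the round."""
    num = f**2 + ((1 - f) / 3) ** 2
    den = f**2 + 2 * f * (1 - f) / 3 + 5 * ((1 - f) / 3) ** 2
    return num / den

def noisy_step(f: float, p_gate: float) -> float:
    """One distillation round with imperfect operations: with probability
    p_gate the output pair is replaced by a maximally mixed pair (f = 1/4)."""
    return (1 - p_gate) * bbpssw_step(f) + p_gate * 0.25

def distill(f0: float, p_gate: float, rounds: int = 50) -> float:
    """Fidelity after `rounds` levels of concatenated distillation."""
    f = f0
    for _ in range(rounds):
        f = noisy_step(f, p_gate)
    return f

if __name__ == "__main__":
    f0 = 0.85  # fidelity of the raw Bell pairs
    for p in (0.0, 0.02, 0.05, 0.10, 0.15):
        print(f"p_gate = {p:.2f}: fidelity after 50 rounds = {distill(f0, p):.4f}")
```

With perfect operations the fidelity converges to 1 under concatenation. As p_gate grows, the achievable fidelity is capped below 1, and beyond some value the iteration makes the pairs worse rather than better; that change of behaviour is the toy analogue of the threshold estimate described above.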
As far as I'm aware, the surface code is still regarded as the best. With an assumption of all elements failing with equal probability (and doing so in a certain way), it has a threshold of around 1%. Note that the paper you linked to doesn't have a 3D surface code. It is the decoding problem that is 3D, due to tracking changes to the 2D lattice over time. As I think you suspected, this is the required procedure when trying to keep the stored information coherent for as long as possible. Check out this paper for an earlier reference on some of these things.

Exact threshold numbers mean you need a specific error model, as you know. And for that you need a decoder, which ideally adapts to the specifics of the error model while remaining fast enough to keep up. Your definition of what is fast enough for the task at hand will have a big effect on what the threshold is.
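To see how a threshold only becomes a concrete number once you fix an error model and a decoder, here is a deliberately simple sketch. It uses a classical repetition code under independent bit-flip noise with a majority-vote decoder, rather than a surface code with matching over the 3D syndrome history; the names and parameters are mine, chosen only for illustration.

```python
import random

def trial(d: int, p: float) -> bool:
    """One shot of a distance-d repetition code under iid bit-flip noise,
    decoded by majority vote. A logical error occurs exactly when more
    than half of the d bits flip, so the vote picks the wrong codeword."""
    flips = sum(random.random() < p for _ in range(d))
    return flips > d // 2

def logical_error_rate(d: int, p: float, shots: int = 20_000) -> float:
    """Monte Carlo estimate of the logical error rate."""
    return sum(trial(d, p) for _ in range(shots)) / shots

if __name__ == "__main__":
    random.seed(1)
    for p in (0.05, 0.20, 0.35, 0.45, 0.55):
        rates = {d: logical_error_rate(d, p) for d in (3, 7, 11)}
        print(f"p = {p:.2f}:", rates)
```

Below the threshold of this particular model and decoder (50% here), increasing the distance suppresses the logical error rate; above it, adding qubits only makes things worse. The roughly 1% surface code figure comes from the same kind of experiment, but with circuit-level noise and a decoder matching over the syndrome history.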