Now the argument for GGGQEP comes in two parts – the GGG and the QEP – and, in its most simplified form, it can be framed as follows:

Gadolinium Gallium Garnet – or GGG for short – has a number of exotic properties that make it a perfect candidate for the next level of massive data storage. Firstly, it is crystalline in nature, and its ingots can easily be grown by the Czochralski seeding process. Secondly, it has a cubic lattice, and hence any 3D laser-writing upon it, and subsequent reads, can be conducted with utmost accuracy. Thirdly, its hardness on the Mohs scale falls somewhere between that of orthoclase feldspar and topaz – which is neither too hard to process economically at scale, nor too soft to risk data compromise under frequent, and prolonged, read-write cycles. And lastly, GGG has already been used in a related field – magnetic bubble memory – as a substrate for magneto-optical films. Hence, its magnetic and optical properties, including its refractive index, are well documented.

Now for some elementary, but important, figures. The molecular formula of GGG is Gd3Ga5O12. One mole of GGG (i.e. 6.022 * 10^23 molecules) has a mass of about 1012.35 grams which, at a density of 7.08 g/cm^3, evaluates to a molar volume of about 143 cm^3. If this were crafted into a perfect cube, the sides would be about 5.23 cm long. For the sake of simplicity, this can be rounded off to 5 cm. So a GGG cube of roughly 5cm*5cm*5cm would contain close to one mole of molecules. Through a rather extensive extrapolation, it can be worked out that GGG holds data (in bits) equivalent to about 12.5% of the total number of molecules present. Hence, one mole can hold about 7.53*10^22 bits of data, which evaluates to roughly 8,160 exabytes (taking one exabyte as 2^60 bytes – about one billion gigabytes). For the sake of perspective, the entire *indexed* internet, at the moment, holds about 700 exabytes of data. So with a single GGG cube measuring roughly 5 cm per side, more than 11 internets, at present size, can be comfortably accommodated.
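These figures can be double-checked with a few lines of Python, using the standard atomic weights (Gd ≈ 157.25, Ga ≈ 69.723, O ≈ 15.999 g/mol) and taking the 12.5% bit-density figure above as a given assumption:

```python
AVOGADRO = 6.022e23      # molecules per mole
DENSITY = 7.08           # g/cm^3, the published density of GGG

# Molar mass of Gd3Ga5O12 from standard atomic weights
molar_mass = 3 * 157.25 + 5 * 69.723 + 12 * 15.999   # ~1012.35 g/mol

molar_volume = molar_mass / DENSITY                  # ~143 cm^3 per mole
cube_side = molar_volume ** (1 / 3)                  # ~5.23 cm

# The assumed bit density: stored bits = 12.5% of the molecule count
bits = 0.125 * AVOGADRO
exabytes = bits / 8 / 2**60                          # binary exabytes (2^60 bytes)

print(f"molar mass : {molar_mass:.2f} g/mol")
print(f"cube side  : {cube_side:.2f} cm")
print(f"capacity   : {exabytes:.0f} exabytes")
```

The capacity comes out at roughly 8,160 binary exabytes per mole, which is where the "more than 11 internets" comparison comes from.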

* * *

Quantum Electronic Processing – or QEP for short – is an area still fraught with all manner of cognitive and execution hurdles. Cognitive in the sense that its central idea is deeply counter-intuitive. And execution in the sense that, once the idea is grasped, the implementation still raises some major problems. But there is a promising solution, and hence both the cognitive and the execution hurdles will be addressed here. Quantum processing differs from traditional processing in one major way: it doesn't deal only with the definite binary states of 0 and 1, but also with superpositions of the two. Effectively, qubits – the basic units of quantum computing – can occupy any weighted combination of 0 and 1… and they can hold all of those combinations *simultaneously*. Due to this, quantum processing introduces an inherent *parallelism* into computing. And this, in turn, makes quantum computing potentially several million times faster and more efficient than traditional, transistor-based, bit processing.

So much for Moore’s law, it would seem.

Unfortunately, in order to harness the vast power of qubits, one major hurdle has to be overcome in the execution phase. Somehow, the quantum dynamics in such a process have to be observed *before* they actually occur. Otherwise, any interference with the dynamics in the conventional sense – say, by trying to observe them *while they are happening* – will immediately destroy the quantum superposition parallelism, as all the quantum waves collapse into the traditional binary states – either a 0 or a 1. In other words, if the quantum processor is accessed in the conventional way, it behaves exactly like a traditional processor – able to work out only one function at a time. In order for it to operate at its full parallelism potential, no observations of its quantum dynamics can be made while they are occurring, *or after* they have occurred. But there is no restriction against observing the dynamics *before* they've occurred – if such a thing can be achieved.
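The collapse described above can be sketched in a few lines of Python – purely classically, as an illustration of the bookkeeping rather than a simulation of real quantum hardware; the state representation and function names here are illustrative only:

```python
import math
import random

# A qubit's state is held here as a pair of amplitudes (a, b) with
# a^2 + b^2 = 1: a^2 is the probability of reading a 0, b^2 of reading a 1.

def make_superposition():
    """An equal superposition of 0 and 1 (what a Hadamard gate produces)."""
    amp = 1 / math.sqrt(2)
    return (amp, amp)

def measure(state):
    """Measurement collapses the superposition: afterwards the qubit is
    definitely 0 or definitely 1, and the parallelism is gone."""
    a, _ = state
    if random.random() < a ** 2:
        return 0, (1.0, 0.0)   # collapsed to a definite 0
    return 1, (0.0, 1.0)       # collapsed to a definite 1

qubit = make_superposition()
bit, qubit = measure(qubit)    # the first read is a fair coin flip...
print("read", bit, "- every further read now gives", measure(qubit)[0])
```

The point of the toy model is the one-way door: before `measure`, both amplitudes coexist; after it, the state is a plain bit and repeated reads just return that same bit.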

Incredibly, there exists a way of getting around the quantum observation problem: quantum entanglement. This is a phenomenon that occurs when two similar subatomic particles, such as photons or electrons, interact with each other for an instant, and thereafter retain correlated, but opposite, values for such traits as polarization, spin and momentum. For instance, if one entangled particle has a clockwise spin, its entangled partner will have an anticlockwise spin. The interesting thing about such particles is that they retain this correlation regardless of the distance between them – and the correlations have experimentally been found to appear instantaneous. The most recent experiments place a lower bound on any hypothetical influence between the particles at *at least ten thousand* times the speed of light. So no communication can be happening in any classical sense, as this would violate relativity.
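The spin anticorrelation described above can be sampled in a short Python sketch – with the hedge that a classical sampler like this reproduces only the same-axis anticorrelation, not the full quantum statistics that violate Bell's inequalities:

```python
import random

def measure_pair():
    """Sample one joint measurement of an entangled pair along the same
    axis: each outcome alone is a fair coin flip, but the two are always
    opposite, regardless of the distance between the particles."""
    first = random.choice(("clockwise", "anticlockwise"))
    second = "anticlockwise" if first == "clockwise" else "clockwise"
    return first, second

spin_a, spin_b = measure_pair()
print(spin_a, "<->", spin_b)   # always opposite spins
```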

Quantum entanglement can be utilized in quantum computing by using one particle of an entangled pair for measurements, while leaving the other particle inside the quantum processor. As long as the quantum processor does not interfere with the quantum superposition state of this second particle, measurements on the first particle can consistently, and reliably, give information about the quantum states of the two. The only challenge is to avoid triggering a decoherence of the two particles – which can be achieved by ensuring that all measurements of the entangled particle remain within the dephasing margins the particles themselves set, or by deploying optical pulsing mechanisms in the system. In this way, the total amount of information being processed per unit time is capped only by the bound set by Holevo's theorem.
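Holevo's theorem gives that cap a concrete form: a readout of n qubits can yield at most n classical bits of accessible information, however ingeniously the qubits were prepared. A minimal sketch:

```python
import math

def holevo_max_bits(n_qubits):
    """Holevo's theorem: a measurement of n qubits yields at most n
    classical bits of accessible information - i.e. log2 of the
    dimension (2^n) of the system's state space."""
    return math.log2(2 ** n_qubits)

# The cap grows only linearly in the number of qubits measured:
print(holevo_max_bits(1), holevo_max_bits(300))
```

So while the superposition lets n qubits *process* exponentially many amplitudes in parallel, only n classical bits can ever be extracted per readout – which is the "cap" the paragraph above refers to.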

The rationale behind creating the GGGQEP (Gadolinium Gallium Garnet Quantum Electronic Processor) system is simple. The two factors that limit computing capacity most are storage volume and processing speed. Other factors, such as data transfer rates, are easily dealt with by using such things as optical, instead of electrical, communications pathways. Bigger file sizes are likely to appear as storage capacities increase, but even now, file systems such as Btrfs, XFS, and even the common NTFS can theoretically handle file sizes measured in exabytes. The increased computation needed in user space can, in turn, be handled using quantum algorithms such as Simon's algorithm and Shor's algorithm, along with the several related algorithms arising from the Abelian hidden subgroup problem.
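For a flavour of how Shor's algorithm factors numbers, here is a classical skeleton in Python, with the quantum period-finding step replaced by brute force – so it is exponentially slower than the real thing, but the surrounding number theory is the same:

```python
from math import gcd
from random import randint

def find_period(a, n):
    """The step a quantum computer does exponentially faster: find the
    smallest r > 0 with a^r = 1 (mod n). Brute-forced here."""
    value, r = a % n, 1
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_factor(n):
    """Classical skeleton of Shor's algorithm for an odd composite n."""
    while True:
        a = randint(2, n - 1)
        g = gcd(a, n)
        if g > 1:
            return g                  # lucky guess already shares a factor
        r = find_period(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            return gcd(pow(a, r // 2, n) - 1, n)

print(shor_factor(15))                # a nontrivial factor: 3 or 5
```

Everything here except `find_period` runs in polynomial time; the quantum hardware's only job is to replace that one brute-force loop.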

Through extreme laser engineering of GGG at the molecular level, and a bit of lateral thinking about quantum dynamics, the future of computing seems boundless. Most of the constituent technologies described here are already operational – some in futuristic prototypes, and others in such facilities as DARPA, NASA and CERN. Whether or not GGG is already in use as a storage medium remains “classified”, but from a theoretical perspective, there isn't anything stopping its use. What remains to be done, therefore, is to combine all the various technologies, and create the next generation of computers. This, judging by current trends, might happen within this lifetime… and computing will change fundamentally, and forever.

Dare to dream