As the tools of science (telescopes, particle accelerators, gene sequencers, and the like) have grown more sophisticated and precise, the data they gather threatens to overwhelm current statistical and machine learning approaches to analysis and understanding. These instruments generate hundreds of petabytes of data that must be processed to extract the findings scientists seek. To put that scale in context, a single petabyte is equivalent to roughly 500 billion pages of printed text, or about 20 million tall file cabinets stuffed with paper.
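For readers who want to sanity-check that analogy, the page count follows from simple division; the back-of-envelope sketch below assumes a printed page holds about 2,000 characters (roughly 2 KB of plain text), which is an assumption of this illustration rather than a figure from the article.

```python
# Back-of-envelope check of the petabyte analogy.
BYTES_PER_PETABYTE = 10**15
BYTES_PER_PAGE = 2_000  # assumed: ~2,000 characters per printed page

pages = BYTES_PER_PETABYTE / BYTES_PER_PAGE
print(f"{pages:.0e} pages per petabyte")  # 5e+11, i.e. about 500 billion
```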
With current classical algorithms, analyzing hundreds of petabytes of image-based data is particularly challenging and time-consuming, so scientists are eager for more efficient ways to handle and process this complex scientific information. To this end, quantum computing could offer a promising approach to speeding up analysis in a wide variety of fields, including image processing.
A team from the Scientific Data and Applied Mathematics and Computational Research (AMCR) divisions at Lawrence Berkeley National Laboratory (Berkeley Lab) recently published work in Scientific Reports highlighting its solution: a novel, unified framework called the quantum pixel representation (QPIXL) that outperforms other quantum image representations. QPIXL produces circuits that require fewer quantum gates without introducing additional (ancilla) qubits. The framework also includes a compression algorithm that further reduces gate complexity by up to 90% without significantly sacrificing image quality. An implementation of the team's algorithms is publicly available as part of the Quantum Image Pixel Library (QPIXL++).
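The compression idea can be sketched in a few lines. In an FRQI-style encoding, each pixel value becomes a rotation angle in the circuit; after a Walsh-Hadamard-like transform of those angles, many entries are close to zero and can be dropped, removing the corresponding rotation gates. The Python sketch below is a minimal illustration under those assumptions only; the function names and the threshold value are hypothetical and do not reflect the QPIXL++ API.

```python
# Illustrative sketch (not the QPIXL++ API) of angle-threshold compression
# for a quantum image encoding. Assumes an FRQI-style encoding in which each
# pixel contributes one rotation angle to the circuit.
import numpy as np

def pixels_to_angles(pixels):
    """Map normalized pixel values in [0, 1] to rotation angles in [0, pi/2]."""
    return np.asarray(pixels, dtype=float) * (np.pi / 2)

def walsh_hadamard(a):
    """Normalized fast Walsh-Hadamard transform (length a power of two)."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a / len(a)

def compress_angles(angles, threshold):
    """Zero out transformed angles below the threshold; each zeroed angle
    corresponds to one rotation gate removed from the circuit."""
    t = walsh_hadamard(angles)
    t[np.abs(t) < threshold] = 0.0
    return t

# Toy 4x4 grayscale image, flattened to 16 pixels.
rng = np.random.default_rng(0)
image = rng.random(16)
compressed = compress_angles(pixels_to_angles(image), threshold=0.05)
kept = np.count_nonzero(compressed)
print(f"rotation gates kept: {kept}/{len(image)} "
      f"({100 * (1 - kept / len(image)):.0f}% removed)")
```

Each zeroed angle eliminates a rotation gate, which is the sense in which thresholding trades a small amount of image fidelity for a much shallower circuit.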