      Quantum error correction used to actually correct errors

      news.movim.eu / ArsTechnica · Wednesday, 3 April - 15:08 · 1 minute

    [Image: Quantinuum's H2 "racetrack" quantum processor, a chip with a device shaped like two triangles connected by a bar. (credit: Quantinuum)]

    Today's quantum computing hardware is severely limited in what it can do by errors that are difficult to avoid. There can be problems with everything from setting the initial state of a qubit to reading its output, and qubits will occasionally lose their state while doing nothing. Some of the quantum processors in existence today can't use all of their individual qubits for a single calculation without errors becoming inevitable.

    The solution is to combine multiple hardware qubits to form what's termed a logical qubit. This allows a single bit of quantum information to be distributed among multiple hardware qubits, reducing the impact of individual errors. Additional qubits can be used as sensors to detect errors and allow interventions to correct them. Recently, there have been a number of demonstrations that logical qubits work in principle.

    On Wednesday, Microsoft and Quantinuum announced that logical qubits work in more than principle. "We've been able to demonstrate what's called active syndrome extraction, or sometimes it's also called repeated error correction," Microsoft's Krysta Svore told Ars. "And we've been able to do this such that it is better than the underlying physical error rate. So it actually works."
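
    To make "better than the underlying physical error rate" concrete, here's a toy classical analog of repeated error correction: a minimal sketch assuming a three-bit repetition code and bit-flip noise only. Real quantum codes must also handle phase errors and can't read their data qubits directly; the parity checks below merely stand in for the ancilla-based syndrome measurements Svore describes, and none of this is Microsoft's or Quantinuum's actual protocol.

```python
import random

def noisy_round(bits, p):
    """Each physical bit independently flips with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def syndrome(bits):
    """Parity checks on adjacent pairs: they reveal where an error
    sits without reading the encoded value itself, which is the
    role ancilla qubits play in quantum syndrome extraction."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Majority vote, the simplest recovery step."""
    majority = int(sum(bits) >= 2)
    return [majority] * 3

def logical_error_rate(p, rounds=10, trials=100_000):
    failures = 0
    for _ in range(trials):
        bits = [0, 0, 0]                 # encode logical 0 as 000
        for _ in range(rounds):
            bits = noisy_round(bits, p)
            if any(syndrome(bits)):      # error detected...
                bits = correct(bits)     # ...so intervene
        failures += sum(bits) >= 2       # logical value flipped
    return failures / trials

p, rounds = 0.01, 10
uncorrected = 1 - (1 - p) ** rounds      # bare bit over same duration
print(f"unprotected bit error rate: {uncorrected:.4f}")
print(f"logical error rate:         {logical_error_rate(p, rounds):.4f}")
```

    With p = 0.01, the protected logical error rate comes out roughly thirty times lower than that of an unprotected bit over the same ten rounds, which is the qualitative behavior the companies are claiming for real hardware.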

    Read 19 remaining paragraphs

      Quantum computing progress: Higher temps, better error correction

      news.movim.eu / ArsTechnica · Wednesday, 27 March - 22:24 · 1 minute

    [Image: Conceptual graphic of symbols representing quantum states floating above a stylized computer chip. (credit: vital)]

    There's a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about what technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have each committed to a different technology, while a collection of startups are exploring an even wider range of potential solutions.

    We probably won't have a clearer picture of what's likely to work for a few years. But there's going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we're going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.

    Hot stuff

    Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we'd need tens of thousands of hardware qubits before we could do anything practical.
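
    As a back-of-the-envelope check on those numbers, here's a minimal sketch assuming the widely studied surface code in its "rotated" layout, where a distance-d logical qubit needs d^2 data qubits plus d^2 - 1 measurement qubits; the code distances and the 100-logical-qubit machine size are illustrative assumptions, not figures from the article.

```python
def physical_per_logical(d):
    """Rotated surface code: d^2 data qubits + d^2 - 1 ancillas."""
    return 2 * d**2 - 1

logical_qubits = 100   # illustrative machine size
for d in (5, 7, 11):   # higher distance suppresses more errors
    per = physical_per_logical(d)
    print(f"distance {d:2d}: {per:3d} physical/logical, "
          f"{per * logical_qubits:,} physical qubits total")
```

    At distance 11, the per-logical cost already exceeds 200 hardware qubits, and even this modest machine lands in the tens of thousands of physical qubits, which is the arithmetic behind estimates like the one above.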

    Read 21 remaining paragraphs

      Antibodies against anything? AI tool adapted to make them

      news.movim.eu / ArsTechnica · Wednesday, 20 March - 22:26

    [Image: A ribbon diagram representing the backbone structure of a protein.]

    Antibodies are incredibly useful. Lots of recently developed drugs rely on antibodies that bind to and block the activity of specific proteins. They're also great research tools, allowing us to identify proteins within cells, purify both proteins and cells, and so on. Therapeutic antibodies have provided our first defenses against emerging viruses like Ebola and SARS-CoV-2.

    But making antibodies can be a serious pain, because it involves getting animals to make them for us. You need to purify the protein you want the antibodies to stick to, inject it into an animal, and wait for the animal to produce antibodies as part of an immune response. From there, you either purify the antibodies or purify the cells that produce them. It's time-consuming, doesn't always work, and sometimes produces antibodies with properties you're not looking for.

    But thanks to developments in AI-based protein predictions, all that hassle might become unnecessary. A recently developed diffusion model for protein structures has been adapted to antibody production and has successfully designed antibodies against flu virus proteins.
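
    For readers new to diffusion models, here's a minimal sketch of the core generative loop in one dimension: data is progressively noised along a fixed schedule, and sampling runs that schedule backward, denoising pure noise into draws from the data distribution. The Gaussian "data" is chosen so the ideal denoiser has a closed form; a protein model like the one described learns its denoiser with a neural network over 3D backbone coordinates, so treat this as an illustration of the principle, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "data" distribution (stand-in for protein coordinates).
mu, sigma = 3.0, 0.5

# Standard DDPM schedule: x_t = sqrt(ab_t)*x0 + sqrt(1 - ab_t)*eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def ideal_eps(x, t):
    """Optimal noise prediction for Gaussian data. At step t the
    noised marginal is N(sqrt(ab)*mu, ab*sigma^2 + 1 - ab), so the
    score is analytic; a real diffusion model learns this function."""
    ab = alpha_bar[t]
    var = ab * sigma**2 + (1.0 - ab)
    score = -(x - np.sqrt(ab) * mu) / var
    return -np.sqrt(1.0 - ab) * score

# Deterministic (DDIM-style) reverse pass: start from pure noise,
# repeatedly estimate the clean sample, then step one notch less noisy.
x = rng.standard_normal(10_000)
for t in range(T - 1, 0, -1):
    eps = ideal_eps(x, t)
    x0_hat = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
    x = (np.sqrt(alpha_bar[t - 1]) * x0_hat
         + np.sqrt(1 - alpha_bar[t - 1]) * eps)

print(f"target:  mean={mu:.2f} std={sigma:.2f}")
print(f"sampled: mean={x.mean():.2f} std={x.std():.2f}")
```

    The sampled statistics converge on the target distribution, and that ability to turn noise into plausible samples is what gets repurposed for generating candidate antibody structures.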

    Read 13 remaining paragraphs

      Alternate qubit design does error correction in hardware

      news.movim.eu / ArsTechnica · Friday, 9 February - 17:57

    [Image: A complicated set of wires and cables hooked up to copper-colored metal hardware. (credit: Nord Quantique)]

    There's a general consensus that performing any sort of complex algorithm on quantum hardware will have to wait for the arrival of error-corrected qubits. Individual qubits are too error-prone to be trusted for complex calculations, so quantum information will need to be distributed across multiple qubits, allowing monitoring for errors and intervention when they occur.

    But most ways of making these "logical qubits" needed for error correction require anywhere from dozens to over a hundred individual hardware qubits. This means we'll need anywhere from tens of thousands to millions of hardware qubits to do calculations. Existing hardware has only cleared the 1,000-qubit mark within the last month, so that future appears to be several years off at best.

    But on Thursday, a company called Nord Quantique announced that it had demonstrated error correction using a single qubit with a distinct hardware design. While this has the potential to greatly reduce the number of hardware qubits needed for useful error correction, the demonstration involved a single qubit—the company doesn't even expect to demonstrate operations on pairs of qubits until later this year.

    Read 13 remaining paragraphs

      Quantum computing startup says it will beat IBM to error correction

      news.movim.eu / ArsTechnica · Tuesday, 9 January - 21:49 · 1 minute

    [Image: The current generation of hardware, which will see rapid iteration over the next several years. (credit: QuEra)]

    On Tuesday, the quantum computing startup QuEra laid out a road map that will bring error correction to quantum computing in only two years and enable useful computations using it by 2026, years ahead of when IBM plans to offer the equivalent. Normally, this sort of thing would be dismissed as hype. Except the company is QuEra, a spinoff of the Harvard University lab that demonstrated the ability to identify and manage errors using hardware that's similar in design to what QuEra is building.

    Also notable: QuEra uses the same type of qubit that a rival startup, Atom Computing, has already scaled up to over 1,000 qubits. So, while the announcement should be viewed cautiously—several companies have promised rapid scaling and then failed to deliver—there are some reasons it should be viewed seriously as well.

    It’s a trap!

    Current qubits, regardless of their design, are prone to errors during measurements, operations, or even when simply sitting there. While it's possible to improve these error rates so that simple calculations can be done, most people in the field are skeptical it will ever be possible to drop these rates enough to do the elaborate calculations that would fulfill the promise of quantum computing. The consensus seems to be that, outside of a few edge cases, useful computation will require error-corrected qubits.

    Read 16 remaining paragraphs

      Multiple ChatGPT instances combine to figure out chemistry

      news.movim.eu / ArsTechnica · Wednesday, 20 December - 19:14 · 1 minute

    [Image: A lab with chemicals but no people present. The lab's empty because everyone's relaxing in the park while the AI does their work. (credit: Fei Yang)]

    Despite rapid advances in artificial intelligence, AIs are nowhere close to being ready to replace humans at doing science. But that doesn't mean they can't help take some of the drudgery out of the daily grind of scientific experimentation. For example, a few years back, researchers put an AI in control of automated lab equipment and taught it to exhaustively catalog all the reactions that can occur among a set of starting materials.

    While useful, that still required a lot of researcher intervention to train the system in the first place. A group at Carnegie Mellon University has now figured out how to get an AI system to teach itself to do chemistry. The system requires a set of three AI instances, each specialized for different operations. But, once set up and supplied with raw materials, you just have to tell it what type of reaction you want done, and it'll figure it out.

    An AI trinity

    The researchers indicate that they were interested in understanding what capacities large language models (LLMs) can bring to the scientific endeavor. So all of the AI systems used in this work are LLMs, mostly GPT-3.5 and GPT-4, although some others—Claude 1.3 and Falcon-40B-Instruct—were tested as well. (GPT-4 and Claude 1.3 performed the best.) But rather than using a single system to handle all aspects of the chemistry, the researchers set up distinct instances that cooperate in a division of labor, and called the result "Coscientist."
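
    The division-of-labor pattern itself is easy to sketch without committing to any vendor's SDK. Everything below is hypothetical: call_llm is a placeholder for whichever LLM endpoint you have access to, and the three roles are illustrative rather than the paper's exact split.

```python
from dataclasses import dataclass

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call (hosted API or local model).
    Wire this up to whichever client library you actually use."""
    raise NotImplementedError("plug an LLM client in here")

@dataclass
class Agent:
    """One specialized LLM instance, defined by its system prompt."""
    name: str
    system_prompt: str

    def ask(self, message: str) -> str:
        return call_llm(self.system_prompt, message)

# Three specialized instances cooperating on one request; the split
# below is illustrative, not the exact roles used in the paper.
planner = Agent("planner",
                "Break a requested chemical reaction into concrete lab steps.")
chemist = Agent("chemist",
                "Given one lab step, specify reagents, quantities, and conditions.")
coder = Agent("coder",
              "Translate a fully specified step into control code for "
              "automated liquid-handling hardware.")

def run_reaction(request: str) -> list[str]:
    """Plan the reaction, flesh out each step, emit hardware programs."""
    plan = planner.ask(request)
    programs = []
    for step in plan.splitlines():
        if step.strip():
            details = chemist.ask(step)
            programs.append(coder.ask(details))
    return programs

# Example (requires call_llm to be implemented):
# run_reaction("Perform a Suzuki coupling with the reagents on deck")
```

    In practice, the coder's output would need validation before being sent to real lab hardware, but the structure shows how narrow system prompts turn one general-purpose model into a team of specialists.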

    Read 17 remaining paragraphs

      If AI is making the Turing test obsolete, what might be better?

      news.movim.eu / ArsTechnica · Friday, 15 December - 00:16 · 1 minute

    [Image: A white android sitting dejectedly at a table with an alcoholic drink. (credit: mevans)]

    If a machine or an AI program matches or surpasses human intelligence, does that mean it can simulate humans perfectly? If yes, then what about reasoning—our ability to apply logic and think rationally before making decisions? How could we even identify whether an AI program can reason? To try to answer this question, a team of researchers has proposed a novel framework that works like a psychological study for software.

    "This test treats an 'intelligent' program as though it were a participant in a psychological study and has three steps: (a) test the program in a set of experiments examining its inferences, (b) test its understanding of its own way of reasoning, and (c) examine, if possible, the cognitive adequacy of the source code for the program," the researchers note .

    They suggest the standard methods of evaluating a machine's intelligence, such as the Turing Test, can only tell you if the machine is good at processing information and mimicking human responses. The current generations of AI programs, such as Google's LaMDA and OpenAI's ChatGPT, for example, have come close to passing the Turing Test, yet the test results don't imply these programs can think and reason like humans.

    Read 22 remaining paragraphs

      Quantum computer performs error-resistant operations with logical qubits

      news.movim.eu / ArsTechnica · Wednesday, 6 December - 22:05

    [Image: Some of the optical hardware needed to get QuEra's machine to work: a table-top setup with lots of lenses and mirrors in precise locations. (credit: QuEra)]

    There's widespread agreement that most useful quantum computing will have to wait for the development of error-corrected qubits. Error correction involves distributing a bit of quantum information—termed a logical qubit—among a small collection of hardware qubits. The disagreements mostly focus on how best to implement it and how long it will take.

    A key step toward that future is described in a paper released in Nature today. A large team of researchers, primarily based at Harvard University, has now demonstrated the ability to perform multiple operations on as many as 48 logical qubits. The work shows that the system, based on hardware developed by the company QuEra, can correctly identify the occurrence of errors, and this can significantly improve the results of calculations.

    Yuval Boger, QuEra's chief marketing officer, told Ars: "We feel it is a very significant milestone on the path to where we all want to be, which is large-scale, fault-tolerant quantum computers."

    Read 22 remaining paragraphs

      IBM adds error correction to updated quantum computing roadmap

      news.movim.eu / ArsTechnica · Monday, 4 December - 15:40 · 1 minute

    [Image: The family portrait of IBM's quantum processors, a series of silver-covered chips, with the two new arrivals (Heron and Condor) at right. (credit: IBM)]

    On Monday, IBM announced that it has produced the two quantum systems that its roadmap had slated for release in 2023. One of these is based on a chip named Condor, which is the largest transmon-based quantum processor yet released, with 1,121 functioning qubits. The second is based on a combination of three Heron chips, each of which has 133 qubits. Smaller chips like Heron and its successor, Flamingo, will play a critical role in IBM's quantum roadmap—which also got a major update today.

    Based on the update, IBM will have error-corrected qubits working by the end of the decade, enabled by improvements to individual qubits made over several iterations of the Flamingo chip. While these systems probably won't place things like existing encryption schemes at risk, they should be able to reliably execute quantum algorithms that are far more complex than anything we can do today.

    We talked with IBM's Jay Gambetta about everything the company is announcing today, including existing processors, future roadmaps, what the machines might be used for over the next few years, and the software that makes it all possible. But to understand what the company is doing, we have to back up a bit to look at where the field as a whole is moving.

    Read 20 remaining paragraphs