
      Alternate qubit design does error correction in hardware

      news.movim.eu / ArsTechnica · Friday, 9 February - 17:57

    Image of a complicated set of wires and cables hooked up to copper colored metal hardware. (credit: Nord Quantique)

    There's a general consensus that performing any sort of complex algorithm on quantum hardware will have to wait for the arrival of error-corrected qubits. Individual qubits are too error-prone to be trusted for complex calculations, so quantum information will need to be distributed across multiple qubits, allowing monitoring for errors and intervention when they occur.

    But most ways of making these "logical qubits" needed for error correction require anywhere from dozens to over a hundred individual hardware qubits. This means we'll need anywhere from tens of thousands to millions of hardware qubits to do calculations. Existing hardware has only cleared the 1,000-qubit mark within the last month, so that future appears to be several years off at best.
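To make that overhead concrete, here is a quick back-of-envelope sketch. The per-logical-qubit overhead range comes from the figures above; the counts of logical qubits needed for useful algorithms are an illustrative assumption, not a published requirement:

```python
# Hardware qubits needed = logical qubits x physical qubits per logical qubit.
# The 50-100 overhead range is from the text; the logical-qubit counts are
# an illustrative assumption.
for per_logical in (50, 100):        # "dozens to over a hundred" hardware qubits
    for logical in (1_000, 10_000):  # assumed range for useful algorithms
        total = logical * per_logical
        print(f"{logical:>6,} logical x {per_logical:>3} each = {total:>9,} hardware qubits")
```

The products span roughly 50,000 to 1,000,000 hardware qubits, which is where the "tens of thousands to millions" estimate comes from.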

    But on Thursday, a company called Nord Quantique announced that it had demonstrated error correction using a single qubit with a distinct hardware design. While this has the potential to greatly reduce the number of hardware qubits needed for useful error correction, the demonstration involved a single qubit—the company doesn't even expect to demonstrate operations on pairs of qubits until later this year.



      Quantum computing startup says it will beat IBM to error correction

      news.movim.eu / ArsTechnica · Tuesday, 9 January - 21:49 · 1 minute

    The current generation of hardware, which will see rapid iteration over the next several years. (credit: QuEra)

    On Tuesday, the quantum computing startup QuEra laid out a roadmap that would bring error correction to quantum computing in only two years and enable useful computations using it by 2026, years ahead of when IBM plans to offer the equivalent. Normally, this sort of thing could be dismissed as hype. Except the company is QuEra, a spinoff of the Harvard University lab that demonstrated the ability to identify and manage errors using hardware that's similar in design to what QuEra is building.

    Also notable: QuEra uses the same type of qubit that a rival startup, Atom Computing, has already scaled up to over 1,000 qubits. So, while the announcement should be viewed cautiously—several companies have promised rapid scaling and then failed to deliver—there are some reasons it should be viewed seriously as well.

    It’s a trap!

    Current qubits, regardless of their design, are prone to errors during measurements, operations, or even when simply sitting there. While it's possible to improve these error rates so that simple calculations can be done, most people in the field are skeptical it will ever be possible to drop these rates enough to do the elaborate calculations that would fulfill the promise of quantum computing. The consensus seems to be that, outside of a few edge cases, useful computation will require error-corrected qubits.



      Multiple ChatGPT instances combine to figure out chemistry

      news.movim.eu / ArsTechnica · Wednesday, 20 December - 19:14 · 1 minute

    The lab's empty because everyone's relaxing in the park while the AI does their work. (credit: Fei Yang)

    Despite rapid advances in artificial intelligence, AIs are nowhere close to being ready to replace humans for doing science. But that doesn't mean that they can't help automate some of the drudgery out of the daily grind of scientific experimentation. For example, a few years back, researchers put an AI in control of automated lab equipment and taught it to exhaustively catalog all the reactions that can occur among a set of starting materials.

    While useful, that still required a lot of researcher intervention to train the system in the first place. A group at Carnegie Mellon University has now figured out how to get an AI system to teach itself to do chemistry. The system requires a set of three AI instances, each specialized for different operations. But, once set up and supplied with raw materials, you just have to tell it what type of reaction you want done, and it'll figure it out.

    An AI trinity

    The researchers indicate that they were interested in understanding what capacities large language models (LLMs) can bring to the scientific endeavor. So all of the AI systems used in this work are LLMs, mostly GPT-3.5 and GPT-4, although some others—Claude 1.3 and Falcon-40B-Instruct—were tested as well. (GPT-4 and Claude 1.3 performed the best.) But, rather than using a single system to handle all aspects of the chemistry, the researchers set up distinct instances to cooperate in a division-of-labor arrangement and called the result "Coscientist."
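As a rough illustration of that division-of-labor idea (not the paper's actual architecture), one can sketch several model instances with distinct roles passing work down a pipeline. The role names and the `ask` stub below are hypothetical; a real system would call an LLM API at that point:

```python
# Hedged sketch of a division-of-labor pipeline among model instances.
# Role names and the `ask` stub are illustrative assumptions, not the
# paper's actual architecture or any real API.
def ask(role, prompt):
    # Stand-in for an LLM call; a real system would query a model here.
    canned = {
        "planner":  "1) look up Suzuki coupling  2) pick reagents  3) write protocol",
        "searcher": "Suzuki coupling joins aryl halides and boronic acids with a Pd catalyst",
        "coder":    "dispense('aryl_halide'); dispense('boronic_acid'); heat(80)",
    }
    return canned[role]

def coscientist_like(goal):
    plan = ask("planner", f"Plan how to: {goal}")
    facts = ask("searcher", f"Research the steps: {plan}")
    code = ask("coder", f"Write equipment commands using: {facts}")
    return code

print(coscientist_like("perform a Suzuki coupling"))
```

The point of the structure is that each instance sees only its own sub-task, so no single model has to plan, research, and write equipment code all at once.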



      If AI is making the Turing test obsolete, what might be better?

      news.movim.eu / ArsTechnica · Friday, 15 December - 00:16 · 1 minute

    A white android sitting at a table in a depressed manner with an alcoholic drink. (credit: mevans)

    If a machine or an AI program matches or surpasses human intelligence, does that mean it can simulate humans perfectly? If yes, then what about reasoning—our ability to apply logic and think rationally before making decisions? How could we even identify whether an AI program can reason? To try to answer this question, a team of researchers has proposed a novel framework that works like a psychological study for software.

    "This test treats an 'intelligent' program as though it were a participant in a psychological study and has three steps: (a) test the program in a set of experiments examining its inferences, (b) test its understanding of its own way of reasoning, and (c) examine, if possible, the cognitive adequacy of the source code for the program," the researchers note.

    They suggest the standard methods of evaluating a machine's intelligence, such as the Turing Test, can only tell you if the machine is good at processing information and mimicking human responses. The current generations of AI programs, such as Google's LaMDA and OpenAI's ChatGPT, for example, have come close to passing the Turing Test, yet the test results don't imply these programs can think and reason like humans.



      Quantum computer performs error-resistant operations with logical qubits

      news.movim.eu / ArsTechnica · Wednesday, 6 December - 22:05

    Some of the optical hardware needed to get QuEra's machine to work. (credit: QuEra)

    There's widespread agreement that most useful quantum computing will have to wait for the development of error-corrected qubits. Error correction involves distributing a bit of quantum information—termed a logical qubit—among a small collection of hardware qubits. The disagreements mostly focus on how best to implement it and how long it will take.

    A key step toward that future is described in a paper released in Nature today. A large team of researchers, primarily based at Harvard University, has now demonstrated the ability to perform multiple operations on as many as 48 logical qubits. The work shows that the system, based on hardware developed by the company QuEra, can correctly identify the occurrence of errors, and this can significantly improve the results of calculations.
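A classical analogy gives a feel for how redundancy catches errors, with the caveat that real quantum codes are far subtler (quantum states can't simply be copied). Here a "logical" bit is spread across three noisy "physical" bits and recovered by majority vote; the error probability is illustrative:

```python
import random

# Classical analogy: spread one "logical" bit across several noisy
# "physical" bits and recover it by majority vote. Real quantum codes
# differ (no copying of states), but the redundancy idea is similar.
def noisy_copy(bit, p_err=0.05):
    # Flip the bit with probability p_err (illustrative error rate).
    return bit ^ (random.random() < p_err)

def logical_readout(bit, p_err=0.05, n_copies=3):
    copies = [noisy_copy(bit, p_err) for _ in range(n_copies)]
    return int(sum(copies) > n_copies // 2)  # majority vote

random.seed(0)
trials = 100_000
raw_errors = sum(noisy_copy(0) for _ in range(trials))
enc_errors = sum(logical_readout(0) for _ in range(trials))
print(f"raw error rate:     {raw_errors / trials:.4f}")
print(f"encoded error rate: {enc_errors / trials:.4f}")
```

With a 5 percent per-bit error rate, the majority-voted readout fails only when at least two of the three copies flip, so its error rate drops to roughly 3p², under 1 percent.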

    Yuval Boger, QuEra's chief marketing officer, told Ars: "We feel it is a very significant milestone on the path to where we all want to be, which is large-scale, fault-tolerant quantum computers."



      IBM adds error correction to updated quantum computing roadmap

      news.movim.eu / ArsTechnica · Monday, 4 December - 15:40 · 1 minute

    The family portrait of IBM's quantum processors, with the two new arrivals (Heron and Condor) at right. (credit: IBM)

    On Monday, IBM announced that it has produced the two quantum systems that its roadmap had slated for release in 2023. One of these is based on a chip named Condor, which is the largest transmon-based quantum processor yet released, with 1,121 functioning qubits. The second is based on a combination of three Heron chips, each of which has 133 qubits. Smaller chips like Heron and its successor, Flamingo, will play a critical role in IBM's quantum roadmap—which also got a major update today.

    Based on the update, IBM will have error-corrected qubits working by the end of the decade, enabled by improvements to individual qubits made over several iterations of the Flamingo chip. While these systems probably won't place things like existing encryption schemes at risk, they should be able to reliably execute quantum algorithms that are far more complex than anything we can do today.

    We talked with IBM's Jay Gambetta about everything the company is announcing today, including existing processors, future roadmaps, what the machines might be used for over the next few years, and the software that makes it all possible. But to understand what the company is doing, we have to back up a bit to look at where the field as a whole is moving.



      Atom Computing is the first to announce a 1,000+ qubit quantum computer

      news.movim.eu / ArsTechnica · Tuesday, 24 October - 14:02 · 1 minute

    The qubits of the new hardware: an array of individual atoms. (credit: Atom Computing)

    Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180-qubit quantum computer and will be making it available to customers next year. The system represents a major step forward for the company, which had previously built just one system based on neutral atom qubits—one that operated with only 100 qubits.

    The error rate for individual qubit operations is high enough that it won't be possible to run an algorithm that relies on the full qubit count without it failing due to an error. But it does back up the company's claims that its technology can scale rapidly and provides a testbed for work on quantum error correction. And, for smaller algorithms, the company says it'll simply run multiple instances in parallel to boost the chance of returning the right answer.
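The math behind that parallel-runs strategy is simple: if a single run returns the right answer with probability p, and a correct answer can be recognized when it appears, the chance that at least one of n independent runs succeeds is 1 - (1 - p)^n. A minimal sketch, with p chosen arbitrarily:

```python
# Probability that at least one of n independent runs succeeds, given a
# per-run success probability p (illustrative value; real rates vary).
def at_least_one_success(p, n):
    return 1 - (1 - p) ** n

for n in (1, 5, 20):
    print(f"n={n:>2}: P(success) = {at_least_one_success(0.3, n):.3f}")
```

Even with a modest per-run success rate, a few dozen parallel instances make finding at least one correct result very likely.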

    Computing with atoms

    Atom Computing, as its name implies, has chosen neutral atoms as its qubit of choice (other companies work with ions instead). These systems rely on a set of lasers that create a series of locations that are energetically favorable for atoms. Left on their own, atoms will tend to fall into these locations and stay there until a stray gas atom bumps into them and knocks them out.



      IBM has made a new, highly efficient AI processor

      news.movim.eu / ArsTechnica · Friday, 20 October - 18:31 · 1 minute

    Image of a series of chips on a black background, with one chip labelled. (credit: IBM)

    As the utility of AI systems has grown dramatically, so has their energy demand. Training new systems is extremely energy intensive, as it generally requires massive data sets and lots of processor time. Executing a trained system tends to be much less involved—smartphones can easily manage it in some cases. But, because you execute them so many times, that energy use also tends to add up.

    Fortunately, there are lots of ideas on how to bring the latter energy use back down. IBM and Intel have experimented with processors designed to mimic the behavior of actual neurons. IBM has also tested executing neural network calculations in phase change memory to avoid making repeated trips to RAM.

    Now, IBM is back with yet another approach, one that's a bit of "none of the above." The company's new NorthPole processor has taken some of the ideas behind all of these approaches and merged them with a very stripped-down approach to running calculations to create a highly power-efficient chip that can efficiently execute inference-based neural networks. For things like image classification or audio transcription, the chip can be up to 35 times more efficient than relying on a GPU.



      New analysis suggests human ancestors nearly died out

      news.movim.eu / ArsTechnica · Friday, 1 September - 18:56 · 1 minute

    Image of an excavation of a human skeleton. (credit: Getty Images)

    Multiple lines of evidence indicate that modern humans evolved within the last 200,000 years and spread out of Africa starting about 60,000 years ago. Before that, however, the details get a bit complicated. We're still arguing about which ancestral population might have given rise to our lineage. Somewhere around 600,000 years ago, that lineage split from the one leading to Neanderthals and Denisovans, and both of those lineages later interbred with modern humans after some of them left Africa.

    Figuring out as much as we currently know has required a mix of fossils, ancient DNA, and modern genomes. A new study argues there is another complicating event in humanity's past: a near-extinction period where almost 99 percent of our ancestral lineage died. However, the finding is based on a completely new approach to analyzing modern genomes, and so it may be difficult to validate.

    Tracing diversity

    Unless a population is small and inbred, it will have genetic diversity: a collection of differences in its DNA ranging from individual bases up to large rearrangements of chromosomes. These differences are tracked when testing services estimate where your ancestors were likely to originate. Some genetic differences arose recently, while others have been floating around our lineage since before modern humans existed.
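A toy simulation shows why population size leaves a trace in diversity. In the standard Wright-Fisher model, each generation resamples gene copies from the previous one, and small populations lose heterozygosity (the chance two copies of a gene differ) much faster than large ones; all numbers below are illustrative, not the study's:

```python
import random

# Toy Wright-Fisher drift: a small population (a bottleneck) loses
# genetic diversity much faster than a large one. All numbers are
# illustrative, not taken from the study.
def final_heterozygosity(pop_size, generations, rng, freq=0.5):
    for _ in range(generations):
        # Each generation resamples 2N gene copies at the current frequency.
        freq = sum(rng.random() < freq for _ in range(2 * pop_size)) / (2 * pop_size)
    return 2 * freq * (1 - freq)  # chance two random copies differ

def mean_diversity(pop_size, generations=50, reps=100, seed=1):
    rng = random.Random(seed)
    return sum(final_heterozygosity(pop_size, generations, rng)
               for _ in range(reps)) / reps

large = mean_diversity(pop_size=1_000, reps=10)
small = mean_diversity(pop_size=50, reps=100)
print(f"mean diversity after 50 generations, N=1000: {large:.3f}")
print(f"mean diversity after 50 generations, N=50:   {small:.3f}")
```

It's this kind of signature—diversity lost during a crash and inherited by every later generation—that methods working backward from modern genomes try to detect.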
