
      Atom Computing is the first to announce a 1,000+ qubit quantum computer

      news.movim.eu / ArsTechnica · Tuesday, 24 October - 14:02 · 1 minute

    The qubits of the new hardware: an array of individual atoms. (credit: Atom Computing)

    Today, a startup called Atom Computing announced that it has been doing internal testing of a 1,180 qubit quantum computer and will be making it available to customers next year. The system represents a major step forward for the company, which had built only one prior system based on neutral atom qubits, a machine that operated with just 100 qubits.

    The error rate for individual qubit operations is high enough that any algorithm relying on the full qubit count would fail before completing. But the system does back up the company's claims that its technology can scale rapidly, and it provides a testbed for work on quantum error correction. For smaller algorithms, the company says it will simply run multiple instances in parallel to boost the chance of returning the right answer.
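    Running copies in parallel pays off in a way that's easy to quantify: if a single run returns the correct answer with probability p, then k independent runs all fail with probability (1 - p)^k. A quick sketch with illustrative numbers (not Atom Computing's actual error rates):

```python
# Chance that at least one of k independent runs returns the right answer,
# given a per-run success probability p (illustrative values only).
def p_at_least_one_success(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

# Even a modest per-run success rate climbs quickly with repetition.
print(round(p_at_least_one_success(0.10, 1), 3))   # 0.1
print(round(p_at_least_one_success(0.10, 50), 3))  # 0.995
```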

    Computing with atoms

    Atom Computing, as its name implies, uses neutral atoms as its qubits (other companies are working with ions). These systems rely on a set of lasers that create a series of locations that are energetically favorable for atoms. Left on their own, atoms tend to fall into these locations and stay there until a stray gas atom bumps into them and knocks them out.

    Read 17 remaining paragraphs | Comments


      IBM has made a new, highly efficient AI processor

      news.movim.eu / ArsTechnica · Friday, 20 October - 18:31 · 1 minute

    (credit: IBM)

    As the utility of AI systems has grown dramatically, so has their energy demand. Training new systems is extremely energy intensive, as it generally requires massive data sets and lots of processor time. Executing a trained system tends to be much less involved—smartphones can easily manage it in some cases. But, because you execute them so many times, that energy use also tends to add up.
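    The arithmetic behind that "adds up" is simple: a tiny per-query cost times a huge number of queries eventually dwarfs even an expensive training run. With invented round numbers (not measurements of any real system):

```python
# Back-of-the-envelope comparison of a one-time training cost with the
# accumulated cost of answering queries (all numbers are invented round
# figures for illustration, not measurements of any real system).
training_kwh = 1_000_000           # one-time energy cost of training
energy_per_query_kwh = 0.001       # energy cost of a single inference
queries_per_day = 10_000_000       # how often the trained model is executed

daily_inference_kwh = energy_per_query_kwh * queries_per_day
days_to_match_training = training_kwh / daily_inference_kwh

# At these rates, serving queries matches the entire training cost in
# roughly 100 days, and the total keeps accumulating after that.
```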

    Fortunately, there are lots of ideas on how to bring the latter energy use back down. IBM and Intel have experimented with processors designed to mimic the behavior of actual neurons. IBM has also tested executing neural network calculations in phase change memory to avoid making repeated trips to RAM.

    Now, IBM is back with yet another approach, one that's a bit of "none of the above." The company's new NorthPole processor merges some of the ideas behind all of these approaches with a very stripped-down method of running calculations, producing a power-efficient chip that can execute neural network inference efficiently. For things like image classification or audio transcription, the chip can be up to 35 times more efficient than relying on a GPU.

    Read 14 remaining paragraphs | Comments


      New analysis suggests human ancestors nearly died out

      news.movim.eu / ArsTechnica · Friday, 1 September, 2023 - 18:56 · 1 minute

    (credit: Getty Images)

    Multiple lines of evidence indicate that modern humans evolved within the last 200,000 years and spread out of Africa starting about 60,000 years ago. Before that, however, the details get a bit complicated. We're still arguing about which ancestral population might have given rise to our lineage. Somewhere around 600,000 years ago, that lineage split from the one that gave rise to Neanderthals and Denisovans, and both of those lineages later interbred with modern humans after some of them left Africa.

    Figuring out as much as we currently know has required a mix of fossils, ancient DNA, and modern genomes. A new study argues there is another complicating event in humanity's past: a near-extinction period where almost 99 percent of our ancestral lineage died. However, the finding is based on a completely new approach to analyzing modern genomes, and so it may be difficult to validate.

    Tracing diversity

    Unless a population is small and inbred, it will have genetic diversity: a collection of differences in its DNA ranging from individual bases up to large rearrangements of chromosomes. These differences are what testing services track when they estimate where your ancestors were likely to originate. Some genetic differences arose recently, while others have been floating around our lineage since before modern humans existed.

    Read 20 remaining paragraphs | Comments


      New robot searches for solar cell materials 14 times faster

      news.movim.eu / ArsTechnica · Thursday, 24 August, 2023 - 15:09 · 1 minute

    RoboMapper in action. (credit: Aram Amassian)

    Earlier this year, two-layer solar cells broke records with 33 percent efficiency. The cells are made of a combination of silicon and a material called a perovskite. However, these tandem solar cells are still far from the theoretical limit of around 45 percent efficiency, and they degrade quickly under sun exposure, limiting their usefulness.

    The process of improving tandem solar cells involves searching for the perfect materials to layer on top of each other, with each capturing some of the sunlight the other misses. One class of candidates is the perovskites, which are defined by their peculiar rhombus-in-a-cube crystal structure. Many chemicals, in a variety of proportions, can adopt this structure. To make a good candidate for tandem solar cells, the combination of chemicals needs to have the right bandgap—the property responsible for absorbing the right part of the sun’s spectrum—be stable at normal temperatures, and, most challengingly, not degrade under illumination.

    The number of possible perovskite materials is vast, and predicting the properties that a given chemical composition will have is very difficult. Trying all the possibilities out in the lab is prohibitively costly and time-consuming. To accelerate the search for the ideal perovskite, researchers at North Carolina State University decided to enlist the help of robots.

    Read 9 remaining paragraphs | Comments


      IBM team builds low-power analog AI processor

      news.movim.eu / ArsTechnica · Wednesday, 23 August, 2023 - 19:23

    (credit: IBM)

    Large language models, the AI tech behind things like ChatGPT, are just what their name implies: big. They often have billions of individual computational nodes and huge numbers of connections among them. All of that means lots of trips back and forth to memory and a whole lot of power use to make that happen. And the problem is likely to get worse.

    One way to potentially avoid this is to mix memory and processing. Both IBM and Intel have made chips that equip individual neurons with all the memory they need to perform their functions. An alternative is to perform operations in memory, an approach that has been demonstrated with phase-change memory.
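    The appeal of in-memory computing is that a memory array can perform the core neural network operation, the multiply-accumulate, as a physical side effect of being read. A digital sketch of the idea (illustrative only, not IBM's actual hardware interface):

```python
import numpy as np

# Digital model of an analog in-memory multiply-accumulate (illustrative
# only). In a phase-change crossbar, weights are stored as device
# conductances G, inputs arrive as voltages V, and each column's output
# current is I[j] = sum_i V[i] * G[i, j] (Ohm's law plus Kirchhoff's
# current law). The matrix-vector product happens where the weights
# live, with no round trips to separate RAM.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances: one weight per cell
V = rng.uniform(0.0, 1.0, size=4)       # input voltages: the activations

I = V @ G                               # column currents = the MAC result
```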

    Now, IBM has followed up on its earlier demonstration by building a phase-change chip that's much closer to a functional AI processor. In a paper published Wednesday in Nature, the company shows that its hardware can perform speech recognition with reasonable accuracy and a much lower energy footprint.

    Read 13 remaining paragraphs | Comments


      What does it take to get AI to work like a scientist?

      news.movim.eu / ArsTechnica · Tuesday, 8 August, 2023 - 18:27

    (credit: Andriy Onufriyenko)

    As machine-learning algorithms grow more sophisticated, artificial intelligence seems poised to revolutionize the practice of science itself. In part, this will come from the software enabling scientists to work more effectively. But some advocates are hoping for a fundamental transformation in the process of science. The Nobel Turing Challenge, issued in 2021 by noted computer scientist Hiroaki Kitano, tasked the scientific community with producing a computer program capable of making a discovery worthy of a Nobel Prize by 2050.

    Part of the work of scientists is to uncover laws of nature—basic principles that distill the fundamental workings of our Universe. Many of them, like Newton’s laws of motion or the law of conservation of mass in chemical reactions, are expressed in a rigorous mathematical form. Others, like the law of natural selection or Mendel’s law of genetic inheritance, are more conceptual.

    The scientific community consists of theorists, data analysts, and experimentalists who collaborate to uncover these laws. The dream behind the Nobel Turing Challenge is to offload the tasks of all three onto artificial intelligence.

    Read 22 remaining paragraphs | Comments


      GPT-3 aces tests of reasoning by analogy

      news.movim.eu / ArsTechnica · Monday, 31 July, 2023 - 19:55

    (credit: zoom)

    Large language models are a class of AI algorithm that relies on a large number of computational nodes and an equally large number of connections among them. They can be trained to perform a variety of functions—protein folding, anyone?—but they're mostly recognized for their capabilities with human languages.

    LLMs trained to simply predict the next word that will appear in text can produce human-sounding conversations and essays, although with some worrying accuracy issues. The systems have demonstrated a variety of behaviors that appear to go well beyond the simple language capabilities they were trained to handle.
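    The "predict the next word" objective is simple enough to demonstrate with a toy model. A count-based bigram predictor (no relation to GPT-3's actual neural architecture, which operates over subword tokens) captures the basic idea:

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from bigram counts (illustrative only;
# real LLMs learn these statistics with neural networks, not tables).
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1            # tally what follows each word

def predict_next(word: str) -> str:
    # Return the most frequently observed continuation.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # cat  ("cat" follows "the" twice, "mat" once)
```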

    We can apparently add analogies to the list of items that LLMs have inadvertently mastered. A team from the University of California, Los Angeles has tested the GPT-3 LLM using questions that should be familiar to any American who has spent time on standardized tests like the SAT. In all but one variant of these questions, GPT-3 managed to outperform undergrads who had presumably taken these tests just a few years earlier. The researchers suggest that this indicates that large language models are capable of reasoning by analogy.

    Read 12 remaining paragraphs | Comments


      New legged robots designed to explore planets as a team

      news.movim.eu / ArsTechnica · Friday, 21 July, 2023 - 17:56 · 1 minute

    The robots exploring a simulated alien environment. (credit: ETH Zurich / Takahiro Miki)

    While rovers have made incredible discoveries, their wheels can hold them back, and erratic terrain can mean damage. There is no replacing something like Perseverance, but sometimes rovers could use a leg up, and they could get that from a small swarm of four-legged robots.

    They look like giant metal insects, but the trio of ANYmal robots customized by researchers at ETH Zurich was tested in environments as close to the harsh lunar and Martian terrain as possible. Robots capable of walking could assist future rovers and mitigate the risk of damage from sharp edges or loss of traction in loose regolith. Not only do the ANYmals’ legs help them literally step over obstacles, but these bots work most efficiently as a team. They are each specialized for particular functions but still flexible enough to cover for each other—if one glitches, the others can take over its tasks.

    “Our technology can enable robots to investigate scientifically transformative targets on the Moon and Mars that are unreachable at present using wheeled rover systems,” the research team said in a study recently published in Science Robotics.

    Read 11 remaining paragraphs | Comments


      Is distributed computing dying, or just fading into the backdrop?

      news.movim.eu / ArsTechnica · Tuesday, 11 July, 2023 - 13:44 · 1 minute

    This image has a warm, nostalgic feel for many of us. (credit: SETI Institute)

    Distributed computing erupted onto the scene in 1999 with the release of SETI@home, a nifty program and screensaver (back when people still used those) that sifted through radio telescope signals for signs of alien life.

    The concept of distributed computing is simple enough: You take a very large project, slice it up into pieces, and send out individual pieces to PCs for processing. There is no inter-PC connection or communication; it’s all done through a central server. Each piece of the project is independent of the others; a distributed computing project wouldn't work if a process needed the results of a prior process to continue. SETI@home was a prime candidate for distributed computing: Each individual work unit was a unique moment in time and space as seen by a radio telescope.
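    That structure, independent work units handed out by a central coordinator, is easy to sketch (a schematic of the model, not SETI@home's actual protocol):

```python
from concurrent.futures import ThreadPoolExecutor

# Schematic of the distributed-computing model: a big job is sliced into
# work units, each processed with no knowledge of the others, and only
# the central coordinator ever sees the combined result. The thread pool
# stands in for the volunteers' PCs.
def process_unit(unit: list[int]) -> int:
    # Stand-in for real analysis, e.g. scanning one chunk of telescope data.
    return sum(x * x for x in unit)

signal = list(range(100))
units = [signal[i:i + 10] for i in range(0, len(signal), 10)]  # slice the job

with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_unit, units))  # farm out the units

total = sum(results)  # the coordinator recombines the independent results
```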

    Twenty-one years later, SETI@home shut down, having found nothing; an incalculable number of PC cycles and a great deal of electricity had been spent with nothing to show for it. We have no way of knowing all the reasons people quit (feel free to tell us in the comments section), but having nothing to show for it is a pretty good reason.

    Read 15 remaining paragraphs | Comments