      Lightening the load: AI helps exoskeleton work with different strides

      news.movim.eu / ArsTechnica · Monday, 1 July - 17:31 · 1 minute

    Two people using powered exoskeletons to move heavy items around, as seen in the movie Aliens. Right now, the software doesn't do arms, so don't go taking on any aliens with it. (credit: 20th Century Fox)

    Exoskeletons today look like something straight out of sci-fi. But the reality is that they are nowhere near as robust as their fictional counterparts. They’re quite wobbly, and the software policies that regulate how they work take long hours to handcraft, a process that has to be repeated for each individual user.

    To bring the technology a bit closer to Avatar’s Skel Suits or Warhammer 40k power armor, a team at North Carolina State University’s Lab of Biomechatronics and Intelligent Robotics used AI to build the first one-size-fits-all exoskeleton that supports walking, running, and stair-climbing. Critically, its software adapts itself to new users with no need for any user-specific adjustments. “You just wear it and it works,” says Hao Su, an associate professor and co-author of the study.

    Tailor-made robots

    An exoskeleton is a robot you wear to aid your movements—it makes walking, running, and other activities less taxing, the same way an e-bike adds extra watts on top of those you generate yourself, making pedaling easier. “The problem is, exoskeletons have a hard time understanding human intentions, whether you want to run or walk or climb stairs. It’s solved with locomotion recognition: systems that recognize human locomotion intentions,” says Su.
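
    The locomotion-recognition idea Su describes can be sketched as a toy classifier. Everything below (the feature names, thresholds, and labels) is an illustrative invention, not the study's actual model, which learns its control policy with AI rather than using hand-set rules like these:

```python
# Hypothetical locomotion-intent classifier. The features, thresholds,
# and labels are illustrative inventions; a real exoskeleton learns
# its policy from data rather than relying on hand-tuned rules.

def classify_intent(stride_hz: float, incline_deg: float) -> str:
    """Guess the wearer's locomotion mode from stride rate and slope."""
    if incline_deg > 10.0:      # steep slope under the feet: stairs
        return "stairs"
    if stride_hz > 2.2:         # fast cadence: running
        return "run"
    return "walk"

print(classify_intent(1.8, 0.0))   # walk
print(classify_intent(2.6, 0.0))   # run
print(classify_intent(1.5, 15.0))  # stairs
```

    The point of the AI approach in the study is precisely to avoid handcrafting thresholds like these for every user.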


      Researchers describe how to tell if ChatGPT is confabulating

      news.movim.eu / ArsTechnica · Thursday, 20 June - 19:32 · 1 minute

    (credit: Aurich Lawson | Getty Images)

    It's one of the world's worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that's indistinguishable from when they get things right. There are a number of reasons for this. The AI could have been trained on misinformation; the answer could require an extrapolation from facts that the LLM isn't capable of making; or some aspect of the LLM's training might have incentivized a falsehood.

    But perhaps the simplest explanation is that an LLM doesn't recognize what constitutes a correct answer but is compelled to provide one. So it simply makes something up, a habit that has been termed confabulation.

    Figuring out when an LLM is making something up would obviously have tremendous value, given how quickly people have started relying on them for everything from college essays to job applications. Now, researchers from the University of Oxford say they've found a relatively simple way to determine when LLMs appear to be confabulating that works with all popular models and across a broad range of subjects. And, in doing so, they've developed evidence that most of the alternative facts LLMs provide are a product of confabulation.
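
    The excerpt doesn't spell out the method, but one family of checks works by sampling several answers to the same question and measuring how much they disagree. The sketch below is a generic, simplified version of that idea: it groups answers by exact text (a real system would need to group them by meaning) and computes the entropy of the resulting distribution.

```python
import math
from collections import Counter

def answer_entropy(answers: list[str]) -> float:
    """Shannon entropy (bits) of the sampled-answer distribution.
    Low entropy: the model repeats one answer. High entropy: it keeps
    changing its story, which is a warning sign of confabulation."""
    counts = Counter(a.strip().lower() for a in answers)
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A model that is sure of itself keeps giving the same answer,
# while a confabulating one scatters across alternatives.
consistent = ["Paris", "paris", "Paris", "Paris"]
scattered = ["Paris", "Lyon", "Marseille", "Nice"]

assert answer_entropy(consistent) < answer_entropy(scattered)
```

    Grouping by exact string is the weak link here; the hard part of the real research is deciding when two differently worded answers mean the same thing.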


      Exploration-focused training lets robotics AI immediately handle new tasks

      news.movim.eu / ArsTechnica · Friday, 10 May - 18:22 · 1 minute

    A woman performs maintenance on a robotic arm. (credit: boonchai wedmakawand)

    Reinforcement-learning algorithms in systems like ChatGPT or Google’s Gemini can work wonders, but they usually need hundreds of thousands of shots at a task before they get good at it. That’s why it’s always been hard to transfer this performance to robots. You can’t let a self-driving car crash 3,000 times just so it can learn crashing is bad.

    But now a team of researchers at Northwestern University may have found a way around it. “That is what we think is going to be transformative in the development of the embodied AI in the real world,” says Thomas Berrueta, who led the development of Maximum Diffusion Reinforcement Learning (MaxDiff RL), an algorithm tailored specifically for robots.

    Introducing chaos

    The problem with deploying most reinforcement-learning algorithms in robots starts with the built-in assumption that the data they learn from is independent and identically distributed. The independence, in this context, means the value of one variable does not depend on the value of another variable in the dataset—when you flip a coin two times, getting tails on the second attempt does not depend on the result of your first flip. Identical distribution means that the probability of seeing any specific outcome is the same on every trial. In the coin-flipping example, the probability of getting heads is the same as getting tails: 50 percent for each.
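
    The coin-flip example can be checked directly in a short simulation: each flip ignores history (independence) and keeps the same 50/50 odds (identical distribution), which is exactly the property a robot's stream of correlated sensor readings lacks.

```python
import random

random.seed(0)

# Ten thousand i.i.d. coin flips: each draw ignores history,
# and the odds of heads never change.
flips = [random.choice("HT") for _ in range(10_000)]
frac_heads = flips.count("H") / len(flips)

# Independence check: among flips that immediately follow a tails,
# heads should still show up about half the time.
after_tails = [b for a, b in zip(flips, flips[1:]) if a == "T"]
frac_heads_after_tails = after_tails.count("H") / len(after_tails)

assert abs(frac_heads - 0.5) < 0.05
assert abs(frac_heads_after_tails - 0.5) < 0.05
```

    A robot's consecutive camera frames or joint angles would fail the second check badly: each sample depends heavily on the one before it.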


      High-speed imaging and AI help us understand how insect wings work

      news.movim.eu / ArsTechnica · Monday, 22 April - 20:16 · 1 minute

    A time-lapse showing how an insect's wing adopts very specific positions during flight. (credit: Florian Muijres, Dickinson Lab)

    About 350 million years ago, our planet witnessed the evolution of the first flying creatures. They are still around, and some of them continue to annoy us with their buzzing. While scientists have classified these creatures as pterygotes, the rest of the world simply calls them winged insects.

    There are many aspects of insect biology, especially their flight, that remain a mystery to scientists. One is simply how they move their wings. The insect wing hinge is a specialized joint that connects an insect’s wings with its body. It’s composed of five interconnected plate-like structures called sclerites. When the underlying muscles shift these plates, the wings flap.

    Until now, it has been tricky for scientists to understand the biomechanics that govern the motion of the sclerites even using advanced imaging technologies. “The sclerites within the wing hinge are so small and move so rapidly that their mechanical operation during flight has not been accurately captured despite efforts using stroboscopic photography, high-speed videography, and X-ray tomography,” Michael Dickinson, Zarem professor of biology and bioengineering at the California Institute of Technology (Caltech), told Ars Technica.


      Quantum error correction used to actually correct errors

      news.movim.eu / ArsTechnica · Wednesday, 3 April - 15:08 · 1 minute

    Quantinuum's H2 "racetrack" quantum processor. (credit: Quantinuum)

    Today's quantum computing hardware is severely limited in what it can do by errors that are difficult to avoid. There can be problems with everything from setting the initial state of a qubit to reading its output, and qubits will occasionally lose their state while doing nothing. Some of the quantum processors in existence today can't use all of their individual qubits for a single calculation without errors becoming inevitable.

    The solution is to combine multiple hardware qubits to form what's termed a logical qubit. This allows a single bit of quantum information to be distributed among multiple hardware qubits, reducing the impact of individual errors. Additional qubits can be used as sensors to detect errors and allow interventions to correct them. Recently, there have been a number of demonstrations that logical qubits work in principle.

    On Wednesday, Microsoft and Quantinuum announced that logical qubits work in more than principle. "We've been able to demonstrate what's called active syndrome extraction, or sometimes it's also called repeated error correction," Microsoft's Krysta Svore told Ars. "And we've been able to do this such that it is better than the underlying physical error rate. So it actually works."
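
    To see what "repeated error correction" means mechanically, here is a purely classical toy of the simplest error-correcting code, the three-bit repetition code. It is not Quantinuum's protocol (real quantum codes must also handle phase errors and cannot copy quantum states), but it shows the core loop: parity checks (syndromes) locate an error without reading the encoded bit, and the correction is applied over and over as errors keep arriving.

```python
import random

random.seed(1)

def syndrome(bits):
    """Two parity checks between neighboring bits. They flag and locate
    an error without ever reading the encoded value directly."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Flip whichever bit the syndrome points at (if any)."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

# Encode logical 0 as three copies, then run many rounds of
# "maybe an error happens, then extract the syndrome and correct."
bits = [0, 0, 0]
for _ in range(1000):
    if random.random() < 0.1:            # occasional single bit-flip
        bits[random.randrange(3)] ^= 1
    bits = correct(bits)

assert bits == [0, 0, 0]  # the logical state survived every round
```

    Real quantum error correction replaces these classical parity checks with measurements of stabilizer operators, but the correct-without-looking loop has the same shape.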


      Quantum computing progress: Higher temps, better error correction

      news.movim.eu / ArsTechnica · Wednesday, 27 March - 22:24 · 1 minute

    Conceptual graphic of symbols representing quantum states floating above a stylized computer chip. (credit: vital)

    There's a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about what technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have all committed to different technologies, while a collection of startups is exploring an even wider range of potential solutions.

    We probably won't have a clearer picture of what's likely to work for a few years. But there's going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we're going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.

    Hot stuff

    Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we'd need tens of thousands of hardware qubits before we could do anything practical.
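
    The overhead arithmetic behind that last sentence is worth making explicit; the figures below are round illustrative numbers, not any vendor's specification.

```python
# Round illustrative figures: roughly 100 hardware qubits per logical
# qubit, and a few hundred logical qubits for a practically useful
# machine, put the total deep into the tens of thousands.
physical_per_logical = 100
logical_qubits_wanted = 200
hardware_qubits_needed = physical_per_logical * logical_qubits_wanted
print(hardware_qubits_needed)  # 20000
```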


      Antibodies against anything? AI tool adapted to make them

      news.movim.eu / ArsTechnica · Wednesday, 20 March - 22:26

    A ribbon diagram representing the backbone structure of a protein.

    Antibodies are incredibly useful. Lots of recently developed drugs rely on antibodies that bind to and block the activity of specific proteins. They're also great research tools, allowing us to identify proteins within cells, purify both proteins and cells, and so on. Therapeutic antibodies have provided our first defenses against emerging viruses like Ebola and SARS-CoV-2.

    But making antibodies can be a serious pain, because it involves getting animals to make antibodies for us. You need to purify the protein you want the antibodies to stick to, inject it into an animal, and let the animal produce antibodies as part of an immune response. From there, you either purify the antibodies or purify the cells that produce them. It's time-consuming, doesn't always work, and sometimes produces antibodies with properties that you're not looking for.

    But thanks to developments in AI-based protein predictions, all that hassle might become unnecessary. A recently developed diffusion model for protein structures has been adapted to antibody production and has successfully designed antibodies against flu virus proteins.


      Alternate qubit design does error correction in hardware

      news.movim.eu / ArsTechnica · Friday, 9 February - 17:57

    A complicated set of wires and cables hooked up to copper-colored metal hardware. (credit: Nord Quantique)

    There's a general consensus that performing any sort of complex algorithm on quantum hardware will have to wait for the arrival of error-corrected qubits. Individual qubits are too error-prone to be trusted for complex calculations, so quantum information will need to be distributed across multiple qubits, allowing monitoring for errors and intervention when they occur.

    But most ways of making these "logical qubits" needed for error correction require anywhere from dozens to over a hundred individual hardware qubits. This means we'll need anywhere from tens of thousands to millions of hardware qubits to do calculations. Existing hardware has only cleared the 1,000-qubit mark within the last month, so that future appears to be several years off at best.

    But on Thursday, a company called Nord Quantique announced that it had demonstrated error correction using a single qubit with a distinct hardware design. While this has the potential to greatly reduce the number of hardware qubits needed for useful error correction, the demonstration involved a single qubit—the company doesn't even expect to demonstrate operations on pairs of qubits until later this year.


      Quantum computing startup says it will beat IBM to error correction

      news.movim.eu / ArsTechnica · Tuesday, 9 January, 2024 - 21:49 · 1 minute

    The current generation of hardware, which will see rapid iteration over the next several years. (credit: QuEra)

    On Tuesday, the quantum computing startup QuEra laid out a road map that will bring error correction to quantum computing in only two years and enable useful computations using it by 2026, years ahead of when IBM plans to offer the equivalent. Normally, this sort of thing should be dismissed as hype. Except the company is QuEra, which is a spinoff of the Harvard University lab that demonstrated the ability to identify and manage errors using hardware that's similar in design to what QuEra is building.

    Also notable: QuEra uses the same type of qubit that a rival startup, Atom Computing, has already scaled up to over 1,000 qubits. So, while the announcement should be viewed cautiously—several companies have promised rapid scaling and then failed to deliver—there are some reasons it should be viewed seriously as well.

    It’s a trap!

    Current qubits, regardless of their design, are prone to errors during measurements, operations, or even when simply sitting there. While it's possible to improve these error rates so that simple calculations can be done, most people in the field are skeptical it will ever be possible to drop these rates enough to do the elaborate calculations that would fulfill the promise of quantum computing. The consensus seems to be that, outside of a few edge cases, useful computation will require error-corrected qubits.
