
      Quantum effects of D-Wave’s hardware boost its performance

      news.movim.eu / ArsTechnica · Wednesday, 19 April, 2023 - 19:52

    Image of large, black metal boxes that house D-Wave hardware.

    The D-Wave hardware is, quite literally, a black box. (credit: D-Wave)

    Before the first qubit had even been developed, theoreticians had done the work showing that a sufficiently powerful gate-based quantum computer would be able to perform calculations that could not realistically be done on traditional computing hardware. All that was needed was hardware capable of implementing the theorists' work.

    The situation was essentially reversed when it came to quantum annealing. D-Wave started building hardware that could perform quantum annealing without a strong theoretical understanding of how its performance would compare to standard computing hardware. And, for practical calculations, the hardware has sometimes been outperformed by more traditional algorithms.

    On Wednesday, however, a team of researchers, some at D-Wave and others at academic institutions, is releasing a paper comparing its quantum annealer with different methods of simulating its behavior. The results show that the actual hardware has a clear advantage over simulations, though there are two caveats: errors eventually cause the hardware to deviate from ideal performance, and it's not clear how well this performance edge translates to practical calculations.



      Large language models also work for protein structures

      news.movim.eu / ArsTechnica · Thursday, 16 March, 2023 - 19:01 · 1 minute

    Artist's rendering of a collection of protein structures floating in space

    (credit: CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY)

    The success of ChatGPT and its competitors is based on what are termed emergent behaviors. These systems, called large language models (LLMs), weren't trained to output natural-sounding language (or effective malware); they were simply tasked with tracking the statistics of word usage. But, given a large enough training set of language samples and a sufficiently complex neural network, their training resulted in an internal representation that "understood" English usage and a large compendium of facts. Their complex behavior emerged from far simpler training.
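    At its simplest, "tracking the statistics of word usage" just means counting which words tend to follow which. The toy bigram model below is a deliberately minimal sketch of that idea—real LLMs learn vastly richer representations with deep neural networks, not lookup tables—with a made-up three-sentence corpus:

    ```python
    from collections import Counter, defaultdict

    def train_bigrams(corpus):
        """Count how often each word follows another -- the crudest
        possible version of tracking word-usage statistics."""
        counts = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev][nxt] += 1
        return counts

    def most_likely_next(counts, word):
        """Predict the most frequent successor of `word`, or None if unseen."""
        if word not in counts:
            return None
        return counts[word].most_common(1)[0][0]

    corpus = [
        "the cat sat on the mat",
        "the cat ate the fish",
        "the dog sat on the rug",
    ]
    model = train_bigrams(corpus)
    print(most_likely_next(model, "the"))  # → "cat" (follows "the" twice)
    ```

    Everything interesting about an LLM—the "understanding" the article describes—comes from replacing these raw counts with a learned, generalizing representation.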

    A team at Meta has now reasoned that this sort of emergent understanding shouldn't be limited to languages. So it has trained an LLM on the statistics of the appearance of amino acids within proteins and used the system's internal representation of what it learned to extract information about the structure of those proteins. The result is not quite as good as the best competing AI systems for predicting protein structures, but it's considerably faster and still getting better.

    LLMs: Not just for language

    The first thing you need to know to understand this work is that, while the "language" in "LLM" refers to the models' original development for language processing tasks, nothing about them is specific to language. In fact, the term "large" is far more informative: all LLMs have a large number of nodes—the "neurons" in a neural network—and an even larger number of values describing the weights of the connections among those nodes. While they were first developed to process language, they can potentially be applied to a variety of tasks.



      Do better coders swear more, or does C just do that to good programmers?

      news.movim.eu / ArsTechnica · Tuesday, 14 March, 2023 - 18:35

    A person screaming at his computer.

    (credit: dasilvafa)

    Ever find yourself staring at a tricky coding problem and thinking, “shit”?

    If those thoughts make their way into your code or the associated comments, you’re in good company. When undergraduate student Jan Strehmel from Karlsruhe Institute of Technology analyzed open source code written in the programming language C, he found no shortage of obscenity. While that might be expected, Strehmel’s overall finding might not be: The average quality of code containing swears was significantly higher than the average quality of code that did not.
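    The basic kind of scan involved can be sketched in a few lines. The word list and the crude token match below are hypothetical stand-ins—Strehmel's actual methodology, including how code quality was scored, is considerably more involved:

    ```python
    import re

    # Hypothetical word list for illustration; a real study would use a
    # curated lexicon and a proper code-quality metric.
    SWEARS = {"damn", "hell", "crap", "shit"}

    def swear_count(source: str) -> int:
        """Count swear-word occurrences in C source text, including
        words embedded in snake_case identifiers and comments."""
        tokens = re.findall(r"[a-z]+", source.lower())
        return sum(tok in SWEARS for tok in tokens)

    code = "/* this damn pointer is NULL again */ int fix_the_damn_bug(void);"
    print(swear_count(code))  # → 2 (one in the comment, one in the identifier)
    ```

    Correlating counts like these against an independent quality score across many repositories is what would let you ask Strehmel's question at scale.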

    “The results are quite surprising!” Strehmel said. Programmers and scientists may have a lot of follow-up questions. Are the researchers sure there aren’t certain profanity-prone programmers skewing the results? What about other programming languages? And, most importantly, why would swears correlate with high-quality code? The work is ongoing, but even without all the answers, one thing’s for sure: Strehmel just wrote one hell of a bachelor’s thesis.



      Is the future of computing biological?

      news.movim.eu / ArsTechnica · Wednesday, 1 March, 2023 - 16:30

    Image of neurons glowing blue against a black background

    (credit: Andriy Onufriyenko)

    Trying to make computers more like human brains isn't a new phenomenon. However, a team of researchers from Johns Hopkins University argues that there could be many benefits in taking the concept a bit more literally by using actual neurons, though there are some hurdles to clear before we get there.

    In a recent paper , the team laid out a roadmap of what's needed before we can create biocomputers powered by human brain cells (not taken from human brains, though). Further, according to one of the researchers, there are some clear benefits the proposed “organoid intelligence” would have over current computers.

    “We have always tried to make our computers more brain-like,” Thomas Hartung, a researcher at Johns Hopkins University’s Environmental Health and Engineering department and one of the paper’s authors, told Ars. “At least theoretically, the brain is essentially unmatched as a computer.”



      Google’s improved quantum processor good enough for error correction

      news.movim.eu / ArsTechnica · Wednesday, 22 February, 2023 - 23:18 · 1 minute

    Image of two silver squares with dark squares embedded in them.

    Two generations of Google's Sycamore processor. (credit: Google Quantum AI)

    Today, Google announced a demonstration of quantum error correction on the next generation of its Sycamore quantum processor. The iteration isn't dramatic—it has the same number of qubits, just with better performance. And getting quantum error correction to work isn't really the news—the company managed that a couple of years ago.

    Instead, the signs of progress are a bit more subtle. In earlier generations of processors, the qubits were error-prone enough that adding more of them to an error-correction scheme created more problems than the added corrections solved. In this new iteration, adding more qubits actually drives the error rate down.
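    The threshold behavior described here has a simple classical analogue: a repetition code, where a bit is copied n times and decoded by majority vote. Below a threshold error rate, adding copies lowers the logical error rate; above it, more copies make things worse. This is only an illustration of the principle—Google's demonstration uses quantum surface codes, not this classical scheme:

    ```python
    from math import comb

    def logical_error(p, n):
        """Probability that a majority of n independent copies flip,
        given each copy flips with probability p (n odd)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    p = 0.05                             # physical error rate, below threshold
    print(logical_error(p, 3))           # ~0.0073: 3 copies beat 1
    print(logical_error(p, 5))           # ~0.0012: 5 copies beat 3
    print(logical_error(0.6, 3))         # ~0.648: above threshold, copies hurt
    ```

    Crossing that break-even point—where growing the code shrinks the error rate instead of amplifying it—is exactly the milestone the new processor generation demonstrates, in its quantum form.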

    We can fix that

    The functional unit of a quantum processor is a qubit, which is anything—an atom, an electron, a hunk of superconducting electronics—that can be used to store and manipulate a quantum state. The more qubits you have, the more capable the machine is. By the time you have access to several hundred, it's thought that you can perform calculations that would be difficult or impossible to do on traditional computer hardware.
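    A minimal way to see what "storing and manipulating a quantum state" means: a single qubit's state is just two complex amplitudes, and a gate is a linear map on them. The sketch below applies a Hadamard gate in plain Python—real hardware, of course, manipulates physical systems, not arrays:

    ```python
    import math

    # A qubit state is a pair (alpha, beta) with |alpha|^2 + |beta|^2 = 1;
    # measuring yields 0 with probability |alpha|^2, 1 with |beta|^2.
    def hadamard(state):
        """Apply the Hadamard gate, which maps |0> to an equal superposition."""
        a, b = state
        s = 1 / math.sqrt(2)
        return (s * (a + b), s * (a - b))

    zero = (1 + 0j, 0 + 0j)       # the |0> basis state
    plus = hadamard(zero)         # equal superposition of |0> and |1>
    print(abs(plus[0]) ** 2)      # → 0.5: either outcome equally likely
    back = hadamard(plus)         # H is its own inverse
    print(back)                   # → back to |0> (up to rounding)
    ```

    The hard part of building a quantum computer is keeping many such states coherent while chaining thousands of gates—which is where the error rates discussed above come in.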



      Grid of atoms is both a quantum computer and an optimization solver

      news.movim.eu / ArsTechnica · Thursday, 16 February, 2023 - 12:30 · 1 minute

    Image of elaborate optical hardware

    (credit: QuEra)

    Quantum computing has entered a bit of an awkward period. There have been clear demonstrations that we can successfully run quantum algorithms, but the qubit counts and error rates of existing hardware mean that we can't solve any commercially useful problems at the moment. So, while many companies are interested in quantum computing and have developed software for existing hardware (and have paid for access to that hardware), the efforts have been focused on preparation. They want the expertise and capability needed to develop useful software once the computers are ready to run it.

    For the moment, that leaves them waiting for hardware companies to produce sufficiently robust machines—machines that don't currently have a clear delivery date. It could be years; it could be decades. Beyond learning how to develop quantum computing software, there's nothing obvious to do with the hardware in the meantime.

    But a company called QuEra may have found a way to do something that's not as obvious. The technology it is developing could ultimately provide a route to quantum computing. But until then, it's possible to solve a class of mathematical problems on the same hardware, and any improvements to that hardware will benefit both types of computation. And in a new paper, the company's researchers have expanded the types of computations that can be run on their machine.



      Twitter ditches free access to data, potentially hindering research

      news.movim.eu / ArsTechnica · Friday, 10 February, 2023 - 16:49

    Image of blue birds with speech bubbles.

    (credit: Sean Gladwell)

    Twitter owner Elon Musk has recently decided to close down free access to Twitter's application programming interface (API), which gives users access to tweet data. The data provided by the social media platform has many different uses. Third-party programs like Tweetbot—which helps users customize their feeds—have relied on Twitter's APIs, for example.

    Experts in the field say the move could harm academic research by hindering access to data used in papers that analyze behavior on social media. When USC professor of computer science Kristina Lerman first heard about the move, she said her team started “scrambling to collect the data we need for some of the projects we have going on this semester,” though the urgency subsided when more details were released, she told Ars.

    Twitter will begin offering basic access to its API for $100 per month. Few details have been released yet, but Twitter’s website shows tiers of access with different limits on tweet volume, along with limits on other features like filtering. The higher tiers cost more.



      What are companies doing with D-Wave’s quantum hardware?

      news.movim.eu / ArsTechnica · Monday, 2 January, 2023 - 12:00


    (credit: Getty Images)

    While many companies are now offering access to general-purpose quantum computers, they're not currently being used to solve any real-world problems, as they're held back by issues with qubit count and quality. Most of their users are either running research projects or simply gaining experience with programming on the systems in the expectation that a future computer will be useful.

    There are quantum systems based on superconducting hardware that are being used commercially; it's just that they're not general-purpose computers.

    D-Wave offers what's called a quantum annealer. The hardware is a large collection of linked superconducting devices that use quantum effects to reach energetic ground states for the system. When properly configured, this end state represents the solution to a mathematical problem. Annealers can't solve the same full range of mathematical problems as general-purpose quantum computers, such as the ones made by Google, IBM, and others. But they can be used to solve a variety of optimization problems.
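    What "reaching an energetic ground state" buys you can be illustrated classically: encode a problem as an energy function over spins, and the lowest-energy spin configuration is the answer. The three-spin example below is made up for illustration and simply brute-forces the minimum rather than annealing toward it, which is what D-Wave's hardware does physically:

    ```python
    from itertools import product

    # A tiny Ising problem: minimize E = sum of J_ij * s_i * s_j over
    # spins s_i in {-1, +1}. The couplings form a "frustrated" triangle
    # (a made-up example, not D-Wave's actual qubit topology).
    J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}

    def energy(spins):
        """Ising energy of one spin assignment."""
        return sum(j * spins[i] * spins[k] for (i, k), j in J.items())

    # Exhaustive search over all 2^3 assignments stands in for annealing.
    ground = min(product([-1, 1], repeat=3), key=energy)
    print(ground, energy(ground))  # a minimum-energy assignment, energy -3
    ```

    Brute force scales as 2^n, which is why reaching the ground state of thousands of coupled spins physically, rather than by enumeration, is the whole point of the annealer.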



      DeepMind’s latest AI project solves programming challenges like a newb

      news.movim.eu / ArsTechnica · Thursday, 8 December, 2022 - 21:15 · 1 minute

    Blurred hands are typing on a laptop computer in the dark with illuminated keyboard and illegible mystic program code visible on the screen.

    If an AI were asked to come up with an image for this article, would it think of The Matrix? (credit: EThamPhoto)

    Google's DeepMind AI division has tackled everything from StarCraft to protein folding. So it's probably no surprise that its creators have eventually turned to what is undoubtedly a personal interest: computer programming. In Thursday's edition of Science, the company describes a system it developed that produces code in response to programming challenges typical of those used in human programming contests.

    On an average challenge, the AI system could score near the top half of participants. But it had a bit of trouble scaling, being less likely to produce a successful program on problems where more code is typically required. Still, the fact that it works at all without having been given any structural information about algorithms or programming languages is a bit of a surprise.

    Rising to the challenge

    Computer programming challenges are fairly simple in structure: people are given a task and must produce code that performs it. In an example given in the new paper, programmers are given two strings and asked to determine whether the shorter of the two could be produced by substituting backspaces for some of the keypresses needed to type the longer one. Submitted programs are then checked to see whether they provide a general solution to the problem or fail when additional examples are tested.
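    That backspace problem has a compact greedy solution, matching the two strings from the right. This is an independent sketch of one way to solve it (assuming a backspace pressed on an empty buffer does nothing), not the code the DeepMind system produced:

    ```python
    def can_obtain(s: str, t: str) -> bool:
        """Can typing s, while pressing backspace instead of some of the
        keys, produce t? Scan both strings from the right: a character of
        s that can't match must be where a backspace was pressed, and that
        backspace also erases one earlier typed character, so skip two
        positions in s. Any leftover prefix of s can always be erased,
        since backspace on an empty buffer is a no-op."""
        i, j = len(s) - 1, len(t) - 1
        while i >= 0 and j >= 0:
            if s[i] == t[j]:
                i -= 1
                j -= 1
            else:
                i -= 2  # backspace consumes s[i]'s slot plus one earlier char
        return j < 0

    print(can_obtain("ababa", "ba"))  # → True
    print(can_obtain("a", "b"))       # → False
    ```

    The greedy right-to-left scan runs in linear time; the automated judging the article describes would then hammer such a submission with additional hidden test cases.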
