• A jargon-free explanation of how AI large language models work

      news.movim.eu / ArsTechnica · Monday, 31 July, 2023 - 11:00

An illustration of words connected by lines. (credit: Aurich Lawson / Ars Technica)

    When ChatGPT was introduced last fall, it sent shockwaves through the technology industry and the larger world. Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn’t realize how powerful they had become.

    Today, almost everyone has heard about LLMs, and tens of millions of people have tried them out. But not very many people understand how they work.

If you know anything about this subject, you’ve probably heard that LLMs are trained to “predict the next word” and that they require huge amounts of text to do this. But that tends to be where the explanation stops. The details of how they predict the next word are often treated as a deep mystery.
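To make that phrase concrete, here is a deliberately tiny sketch of next-word prediction built from nothing but word-pair counts. The toy corpus is invented for illustration, and real LLMs replace this lookup table with a neural network trained on enormous amounts of text, but the objective is the same idea: turn a context into a probability distribution over the next word.

```python
# A toy illustration of "predict the next word": count which word follows
# which in a small, made-up corpus, then turn the counts into probabilities.
# Real LLMs learn such probabilities with billions of parameters instead of
# a lookup table, but the training objective is the same basic idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def next_word_distribution(prev_word):
    """Return P(next word | previous word) as a dict of probabilities."""
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_distribution("the"))   # {'cat': 0.5, 'mat': 0.5}
print(next_word_distribution("cat"))   # {'sat': 0.5, 'slept': 0.5}
```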


• Deepfakes for scrawl: With handwriting synthesis, no pen is necessary

      news.movim.eu / ArsTechnica · Thursday, 26 January, 2023 - 21:39

An example of computer-synthesized handwriting generated by Calligrapher.ai. (credit: Ars Technica)

Thanks to a free web app called calligrapher.ai, anyone can simulate handwriting with a neural network that runs in a browser via JavaScript. After typing a sentence, the site renders it as handwriting in nine different styles, each of which is adjustable with properties such as speed, legibility, and stroke width. It also allows downloading the resulting faux handwriting sample as an SVG vector file.
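One way to see why the output is an SVG path rather than a font is to look at what handwriting-synthesis models of this kind typically emit: a sequence of pen offsets with pen-up flags. The sketch below is illustrative and not calligrapher.ai's actual code; it converts such a stroke sequence (the offsets here are made up) into a minimal SVG document, with the stroke-width argument standing in for the sort of adjustable property the site exposes.

```python
# A hand-rolled sketch (not calligrapher.ai's code) of turning pen movements
# into SVG: handwriting-synthesis models emit a sequence of offsets
# (dx, dy, pen_lifted) rather than glyphs from a font, and each stroke
# sequence maps naturally onto SVG path commands.

def strokes_to_svg(strokes, stroke_width=2.0):
    """Convert (dx, dy, pen_lifted) offsets into a minimal SVG document."""
    x, y = 100.0, 100.0          # arbitrary starting point on the canvas
    commands = [f"M {x:.1f} {y:.1f}"]
    for dx, dy, pen_lifted in strokes:
        x, y = x + dx, y + dy
        # "M" moves without drawing (pen up), "L" draws a line (pen down).
        commands.append(f"{'M' if pen_lifted else 'L'} {x:.1f} {y:.1f}")
    path = " ".join(commands)
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">'
        f'<path d="{path}" fill="none" stroke="black" '
        f'stroke-width="{stroke_width}"/></svg>'
    )

# A few fake pen offsets standing in for what a neural network would sample.
fake_strokes = [(5, -3, 0), (4, 6, 0), (6, -2, 1), (3, 4, 0)]
print(strokes_to_svg(fake_strokes))
```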

The demo is particularly interesting because it doesn't use a font. Typefaces that look like handwriting have been around for over 80 years, but every copy of a given letter comes out identical no matter how many times you use it.

    During the past decade, computer scientists have relaxed those restrictions by discovering new ways to simulate the dynamic variety of human handwriting using neural networks.


• Disney’s new neural network can change an actor’s age with ease

      news.movim.eu / ArsTechnica · Wednesday, 30 November, 2022 - 23:13

An example of Disney's FRAN re-aging tech, showing the original image on the left and re-aged rows of older (top, at age 65) and younger (lower, at age 18) examples of the same person. (credit: Disney)

    Disney researchers have created a new neural network that can alter the visual age of actors in TV or film, reports Gizmodo. The technology will allow TV or film producers to make actors appear older or younger using an automated process that will be less costly and time-consuming than previous methods.

Traditionally, when special effects staff on a video or film production need to make an actor look older or younger (a technique Disney calls "re-aging"), they use either a 3D scanning and modeling process or 2D frame-by-frame digital retouching of the actor's face with tools similar to Photoshop. That process can take weeks or longer, depending on the length of the work.

    In contrast, Disney's new AI technique, called Face Re-aging Network (FRAN), automates the process. Disney calls it "the first practical, fully automatic, and production-ready method for re-aging faces in video images."
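Disney has not released FRAN's code, so the toy network below is only a rough, hypothetical sketch of one way such a re-aging model could be structured, loosely in the spirit of the delta-based approach Disney describes: a convolutional network sees a frame along with the actor's current and target ages and predicts a per-pixel adjustment that is added back onto the original image. The layer sizes, age encoding, and names here are placeholders, not FRAN's actual architecture.

```python
# A schematic, hypothetical sketch (not Disney's FRAN weights or architecture):
# the network sees the input frame plus two channels holding the source and
# target ages and predicts a per-pixel adjustment that is added back onto the
# original frame, so most of the actor's appearance passes through untouched.
import torch
import torch.nn as nn

class ToyReAger(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 2 constant channels holding the two ages.
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # per-pixel RGB delta
        )

    def forward(self, frame, input_age, target_age):
        b, _, h, w = frame.shape
        # Broadcast the normalized ages into two extra image-sized channels.
        ages = torch.stack([input_age, target_age], dim=1) / 100.0
        age_maps = ages.view(b, 2, 1, 1).expand(b, 2, h, w)
        delta = self.net(torch.cat([frame, age_maps], dim=1))
        return frame + delta  # re-aged frame = original + predicted change

frame = torch.rand(1, 3, 64, 64)                     # a stand-in video frame
model = ToyReAger()
older = model(frame, torch.tensor([30.0]), torch.tensor([65.0]))
print(older.shape)  # torch.Size([1, 3, 64, 64])
```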


• A dish of neurons may have taught itself to play Pong (badly)

news.movim.eu / ArsTechnica · Thursday, 13 October, 2022 - 18:16

In culture, nerve cells spontaneously form the structures needed to communicate with each other. (credit: JUAN GAERTNER / Getty Images)

    One of the more exciting developments in AI has been the development of algorithms that can teach themselves the rules of a system. Early versions of things like game-playing algorithms had to be given the basics of a game. But newer versions don't need that—they simply need a system that keeps track of some reward like a score, and they can figure out which actions maximize that without needing a formal description of the game's rules.
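The sketch below illustrates that idea at its smallest scale. The agent never sees the rules of the made-up "game" function, only an observation and a score, and a simple one-step value update is enough for it to discover which action maximizes the reward in each situation; every name and number here is invented for illustration.

```python
# A minimal sketch of "learn from the score alone": the agent never sees the
# game's rules, only an observation and a reward, and a one-step value update
# is enough for it to find the score-maximizing action for each observation.
import random
from collections import defaultdict

def hidden_game(state, action):
    """A black box to the agent: reward 1 only if the action matches the state."""
    return 1.0 if action == state else 0.0

q = defaultdict(float)          # q[(state, action)] -> estimated value
actions = [0, 1, 2]

for step in range(5000):
    state = random.choice([0, 1, 2])            # observation from the game
    if random.random() < 0.1:                   # explore occasionally
        action = random.choice(actions)
    else:                                       # otherwise exploit what we know
        action = max(actions, key=lambda a: q[(state, a)])
    reward = hidden_game(state, action)
    # Nudge the value estimate toward the observed reward.
    q[(state, action)] += 0.1 * (reward - q[(state, action)])

print({s: max(actions, key=lambda a: q[(s, a)]) for s in [0, 1, 2]})
# With high probability prints {0: 0, 1: 1, 2: 2}: the agent recovered the rule.
```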

A paper published in the journal Neuron takes this a step further by using actual neurons grown in a dish full of electrodes. This added an additional level of complication, as there was no way to know what the neurons would actually find rewarding. The fact that the system seems to have worked may tell us something about how neurons can self-organize their responses to the outside world.

    Say hello to DishBrain

The researchers behind the new work, who were primarily based in Melbourne, Australia, call their system DishBrain. And it's based on, yes, a dish with a set of electrodes embedded in its floor. When neurons are grown in the dish, these electrodes can do two things: sense the activity of the neurons above them or stimulate those neurons. The electrodes are large relative to the size of neurons, so both the sensing and the stimulation (which can be thought of as roughly analogous to reading and writing information) involve a small population of neurons rather than a single one.
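The sketch below is a schematic of that read/write loop, not the actual DishBrain hardware or software: one group of electrodes "writes" the game state into the culture as stimulation, while other groups are "read" as populations to drive the paddle. The electrode responses are random stand-ins for real recordings, and all the names, group sizes, and thresholds are invented for illustration.

```python
# A schematic of the read/write loop described above, not the real DishBrain
# setup: one electrode group "writes" the ball position in as stimulation,
# and two other groups are "read" as populations to move the paddle. The
# spike counts here are random placeholders for actual recordings.
import random

SENSORY = list(range(0, 8))       # electrodes used to stimulate (write)
MOTOR_UP = list(range(8, 12))     # electrodes read to move the paddle up
MOTOR_DOWN = list(range(12, 16))  # electrodes read to move the paddle down

def stimulate(electrodes, ball_position):
    """Placeholder for writing: pick a stimulation site from the ball's position."""
    site = electrodes[int(ball_position * (len(electrodes) - 1))]
    return site  # in a real system this would drive a voltage pulse

def read_spike_counts(electrodes):
    """Placeholder for reading: return a fake spike count per electrode."""
    return [random.randint(0, 5) for _ in electrodes]

paddle = 0.5
for frame in range(3):
    ball_position = random.random()              # 0.0 (top) to 1.0 (bottom)
    stimulate(SENSORY, ball_position)            # write the game state in
    up = sum(read_spike_counts(MOTOR_UP))        # read the populations out
    down = sum(read_spike_counts(MOTOR_DOWN))
    paddle += 0.05 if up > down else -0.05       # more "up" activity -> move up
    print(f"frame {frame}: ball={ball_position:.2f} paddle={paddle:.2f}")
```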
