      Producing more but understanding less: The risks of AI for scientific research

      news.movim.eu / ArsTechnica · Wednesday, 6 March - 18:08 · 1 minute

    [Image: 3D illustration of a brain with wires. Current concerns about AI tend to focus on its obvious errors. But psychologist Molly Crockett and anthropologist Lisa Messeri argue that AI also poses potential long-term epistemic risks to the practice of science. (credit: Just_Super/E+ via Getty)]

    Last month, we witnessed the viral sensation of several egregiously bad AI-generated figures published in a peer-reviewed article in Frontiers, a reputable scientific journal. Scientists on social media expressed equal parts shock and ridicule at the images, one of which featured a rat with grotesquely large and bizarre genitals.

    As Ars Senior Health Reporter Beth Mole reported, looking closer only revealed more flaws, including the labels "dissilced," "Stemm cells," "iollotte sserotgomar," and "dck." Figure 2 was less graphic but equally mangled, rife with nonsense text and baffling images. Ditto for Figure 3, a collage of small circular images densely annotated with gibberish.

    The paper has since been retracted, but that eye-popping rat penis image will remain indelibly imprinted on our collective consciousness. The incident reinforces a growing concern that the increasing use of AI will make published scientific research less trustworthy, even as it increases productivity. While the proliferation of errors is a valid concern, especially in the early days of AI tools like ChatGPT, two researchers argue in a new perspective published in the journal Nature that AI also poses potential long-term epistemic risks to the practice of science.

      How to get started with machine learning and AI

      news.movim.eu / ArsTechnica · Wednesday, 22 June, 2022 - 13:00 · 1 minute

    "It's a cookbook?!"

    Enlarge / "It's a cookbook?!" (credit: Aurich Lawson | Getty Images)

    "Artificial Intelligence" as we know it today is, at best, a misnomer. AI is in no way intelligent, but it is artificial. It remains one of the hottest topics in industry and is enjoying a renewed interest in academia. This isn't new—the world has been through a series of AI peaks and valleys over the past 50 years. But what makes the current flurry of AI successes different is that modern computing hardware is finally powerful enough to fully implement some wild ideas that have been hanging around for a long time.

    Back in the 1950s, in the earliest days of what we now call artificial intelligence, there was a debate over what to name the field. Herbert Simon, co-developer of both the Logic Theory Machine and the General Problem Solver, argued that the field should have the much more anodyne name of “complex information processing.” This certainly doesn’t inspire the awe that “artificial intelligence” does, nor does it convey the idea that machines can think like humans.

    However, "complex information processing" is a much better description of what artificial intelligence actually is: parsing complicated data sets and attempting to make inferences from the pile. Some modern examples of AI include speech recognition (in the form of virtual assistants like Siri or Alexa) and systems that determine what's in a photograph or recommend what to buy or watch next. None of these examples are comparable to human intelligence, but they show we can do remarkable things with enough information processing.
