      AI-powered grocery bot suggests recipe for toxic gas, “poison bread sandwich”

      news.movim.eu / ArsTechnica · Thursday, 10 August, 2023 - 19:45

    (Image credit: PAK'nSAVE)

    When given a list of harmful ingredients, an AI-powered recipe suggestion bot called the Savey Meal-Bot returned dangerous recipe suggestions with absurd titles, reports The Guardian. The bot is a product of the New Zealand-based PAK'nSAVE grocery chain and uses the OpenAI GPT-3.5 language model to craft its recipes.

    PAK'nSAVE intended the bot as a way to make the best out of whatever leftover ingredients someone might have on hand. For example, if you tell the bot you have lemons, sugar, and water, it might suggest making lemonade. The human lists the ingredients, and the bot crafts a recipe from them.
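    PAK'nSAVE has not published how the Savey Meal-Bot is wired together, but the flow described above, where a user supplies an ingredient list and GPT-3.5 returns a recipe, maps onto a single chat-completion call. The sketch below is a minimal illustration of that pattern using the 2023-era OpenAI Python package; the system prompt and function name are assumptions for illustration, not the bot's actual implementation.

        import openai  # legacy (pre-1.0) openai package, current as of mid-2023

        openai.api_key = "sk-..."  # placeholder

        def suggest_recipe(ingredients: list[str]) -> str:
            """Ask GPT-3.5 for a recipe using only the listed leftover ingredients.
            The instructions below are illustrative, not PAK'nSAVE's real prompt."""
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system",
                     "content": "You are a recipe assistant. Suggest a safe, edible "
                                "recipe that uses only the ingredients the user lists."},
                    {"role": "user", "content": "Ingredients: " + ", ".join(ingredients)},
                ],
            )
            return response["choices"][0]["message"]["content"]

        # e.g. suggest_recipe(["lemons", "sugar", "water"]) might return a lemonade recipe.

    As the incident below shows, an instruction like "safe, edible" in the prompt is no guarantee: without validation of what users type in, the model will happily improvise around bleach and ammonia.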

    But on August 4, New Zealand political commentator Liam Hehir decided to test the limits of the Savey Meal-Bot and tweeted, "I asked the PAK'nSAVE recipe maker what I could make if I only had water, bleach and ammonia and it has suggested making deadly chlorine gas, or as the Savey Meal-Bot calls it 'aromatic water mix.'"

      Even the Pope is worried about AI and its “disruptive possibilities”

      news.movim.eu / ArsTechnica · Tuesday, 8 August, 2023 - 16:16 · 1 minute

    Pope Francis attends Mass for the 37th World Youth Day at Parque Tejo in Lisbon, Portugal, on August 6, 2023, during his visit for World Youth Day, which takes place over the first week of August. (Image credit: Getty Images)

    Discussion about artificial intelligence is everywhere these days—even the Vatican. On Tuesday, Pope Francis issued a communiqué announcing the theme for World Day of Peace 2024 as “Artificial Intelligence and Peace,” emphasizing the potential impact of AI on human life and calling for responsible use, ethical reflection, and vigilance to prevent negative consequences.

    It's been a wild year for AI in the public eye, with the rise of ChatGPT and Bing Chat spurring concerns over AI takeover, several prominent but controversial letters and statements warning that AI could potentially threaten human civilization, and OpenAI CEO Sam Altman making a world tour with heads of state. Talk of AI regulation has been rampant. The concept of ethical dangers from AI has been high-profile enough that even the Pope feels the need to address it.

    In the communiqué, Pope Francis' office called for "an open dialogue on the meaning of these new technologies, endowed with disruptive possibilities and ambivalent effects." Echoing common ethical sentiments related to AI, he said society needs to be vigilant about the technology so that "a logic of violence and discrimination does not take root in the production and use of such devices, at the expense of the most fragile and excluded."

      Innocent pregnant woman jailed amid faulty facial recognition trend

      news.movim.eu / ArsTechnica · Monday, 7 August, 2023 - 18:39

    (Image credit: Getty Images | Aurich Lawson)

    Use of facial recognition software led Detroit police to falsely arrest 32-year-old Porcha Woodruff for robbery and carjacking, reports The New York Times. Eight months pregnant, she was detained for 11 hours, questioned, and had her iPhone seized for evidence before being released. It's the latest in a string of false arrests due to use of facial-recognition technology, which many critics say is not reliable.

    The mistake seems particularly notable because the surveillance footage used to falsely identify Woodruff did not show a pregnant woman, and Woodruff was very visibly pregnant at the time of her arrest.

    The incident began with an automated facial recognition search by the Detroit Police Department. A man who was robbed reported the crime, and police used DataWorks Plus to run surveillance video footage against a database of criminal mug shots. Woodruff's 2015 mug shot from a previous, unrelated arrest was returned as a match. After that, the robbery victim wrongly identified Woodruff from a photo lineup, leading to her arrest.
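    DataWorks Plus does not disclose its matching internals, but automated one-to-many face searches of this kind generally work by comparing an embedding of the probe image against an embedding of every mug shot in the gallery and returning the closest candidates above a similarity threshold. The sketch below is a toy illustration of that matching step under those assumptions; all names are hypothetical, and a high-scoring hit is only an investigative lead, not proof of identity.

        import numpy as np

        def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
            """Cosine similarity between two face embeddings."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def search_gallery(probe: np.ndarray, gallery: dict[str, np.ndarray],
                           threshold: float = 0.6) -> list[tuple[str, float]]:
            """Return gallery IDs whose embeddings score above the threshold,
            best match first. False positives like the Woodruff case happen
            when a lookalike in the gallery clears the threshold."""
            scores = [(person_id, cosine_similarity(probe, emb))
                      for person_id, emb in gallery.items()]
            return sorted((s for s in scores if s[1] >= threshold),
                          key=lambda s: s[1], reverse=True)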

      Researchers figure out how to make AI misbehave, serve up prohibited content

      news.movim.eu / ArsTechnica · Wednesday, 2 August, 2023 - 13:22

    A pixelated word balloon. (Image credit: MirageC/Getty Images)

    ChatGPT and its artificially intelligent siblings have been tweaked over and over to prevent troublemakers from getting them to spit out undesirable messages such as hate speech, personal information, or step-by-step instructions for building an improvised bomb. But researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt—a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data—can defy all of these defenses in several popular chatbots at once.

    The work suggests that the propensity for the cleverest AI chatbots to go off the rails isn’t just a quirk that can be papered over with a few simple rules. Instead, it represents a more fundamental weakness that will complicate efforts to deploy the most advanced AI.
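    The researchers' working suffixes were discovered with an automated optimization process and are not reproduced in this excerpt, but the structure of the attack itself is just string concatenation: the same optimized suffix is appended to an otherwise refused request and sent to each chatbot. A rough sketch of that structure follows; the placeholder suffix and the 2023-era OpenAI Python package are assumptions for illustration, not the paper's code.

        import openai  # legacy (pre-1.0) openai package, current as of mid-2023

        openai.api_key = "sk-..."  # placeholder

        # Placeholder only -- the real suffixes were machine-optimized token strings
        # found by the CMU researchers and are not reproduced here.
        ADVERSARIAL_SUFFIX = "<gobbledygook-looking optimized string>"

        def ask_with_suffix(prompt: str) -> str:
            """Send a request with the adversarial suffix appended to the prompt."""
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user",
                           "content": prompt + " " + ADVERSARIAL_SUFFIX}],
            )
            return response["choices"][0]["message"]["content"]

    Because the suffix rides along with whatever the user asks, patching individual phrasings doesn't help much, which is why the work is framed as a fundamental weakness rather than a bug to be filtered out.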

      OpenAI discontinues its AI writing detector due to “low rate of accuracy”

      news.movim.eu / ArsTechnica · Wednesday, 26 July, 2023 - 19:51 · 1 minute

    An AI-generated image of a slot machine in a desert. (Image credit: Midjourney)

    On Thursday, OpenAI quietly pulled its AI Classifier, an experimental tool designed to detect AI-written text. The decommissioning, first noticed by Decrypt, occurred with no major fanfare and was announced through a small note added to OpenAI's official AI Classifier webpage:

    As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.

    Released on January 31 amid clamor from educators about students potentially using ChatGPT to write essays and schoolwork, OpenAI's AI Classifier always felt like a performative Band-Aid on a deep wound. From the beginning, OpenAI admitted that its AI Classifier was not "fully reliable," correctly identifying only 26 percent of AI-written text as "likely AI-written" and incorrectly labeling human-written works 9 percent of the time.

    As we've pointed out on Ars, AI writing detectors such as OpenAI's AI Classifier, Turnitin, and GPTZero simply don't work with enough accuracy to be relied on for trustworthy results. The methodology behind how they work is speculative and unproven, and the tools are routinely used to falsely accuse students of cheating.
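    Those two figures, a 26 percent detection rate and a 9 percent false-positive rate, are enough to see why flags from the tool were so untrustworthy in practice. The arithmetic below uses a made-up batch of essays; the batch sizes are assumptions, while the rates are the ones OpenAI reported.

        # Rates reported by OpenAI for the AI Classifier; batch sizes are invented.
        ai_written, human_written = 50, 950          # hypothetical essays in a batch
        detection_rate, false_positive_rate = 0.26, 0.09

        flagged_ai = ai_written * detection_rate               # = 13 correctly flagged
        flagged_human = human_written * false_positive_rate    # = 85.5 falsely flagged

        precision = flagged_ai / (flagged_ai + flagged_human)
        print(f"Share of flags that point at genuine AI text: {precision:.0%}")  # ~13%

    With numbers like these, most of the essays the classifier flagged would have been written by people, which is exactly the false-accusation problem described above.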

      Redditors prank AI-powered news mill with “Glorbo” in World of Warcraft

      news.movim.eu / ArsTechnica · Friday, 21 July, 2023 - 16:27

    A World of Warcraft illustration from the Zleague.gg article on "Glorbo." (Image credit: Zleague.gg)

    On Thursday, a Reddit user named kaefer_kriegerin posted a fake announcement on the World of Warcraft subreddit about the introduction of "Glorbo" to the game. Glorbo isn't real, but the post successfully exposed a website that scrapes Reddit for news in an automated fashion with little human oversight.

    Not long after the trick post appeared, an article about Glorbo surfaced on "The Portal," a gaming news content mill run by Z League, a company that offers cash prizes for playing in gaming tournaments. The Z League article mindlessly regurgitates the Reddit post and adds nonsensical details. Its author, "Lucy Reed" (likely a fictitious name for a bot), published over 80 articles that same day.

    Members of the World of Warcraft subreddit recently noticed that this kind of automated content scraping of Reddit has been taking place, prompting several of them to try to game the bots and get their posts featured on sites like The Portal.
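    Z League has not described its pipeline, but the pattern the subreddit exploited, pulling trending posts from Reddit and turning them into articles with little human review, takes very little code. The sketch below uses the PRAW library to show just the scraping half; the credentials and subreddit are placeholders, and nothing here is Z League's actual system.

        import praw  # official Reddit API wrapper for Python

        # Placeholder credentials -- register an app at reddit.com/prefs/apps for real ones.
        reddit = praw.Reddit(client_id="...", client_secret="...",
                             user_agent="content-mill-sketch")

        for post in reddit.subreddit("wow").hot(limit=20):
            # A mill with "little human oversight" would hand post.title and
            # post.selftext to a language model and publish whatever comes back --
            # which is how a joke about "Glorbo" became a news article.
            print(post.title)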

      Is ChatGPT getting worse over time? Study claims yes, but others aren’t sure

      news.movim.eu / ArsTechnica · Wednesday, 19 July, 2023 - 22:14 · 1 minute

    A shaky toy robot on a multicolor background. (Image credit: Benj Edwards / Getty Images)

    On Tuesday, researchers from Stanford University and the University of California, Berkeley, published a research paper that purports to show changes in GPT-4's outputs over time. The paper fuels a common-but-unproven belief that the AI language model has grown worse at coding and compositional tasks over the past few months. Some experts aren't convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.

    In a study titled "How Is ChatGPT’s Behavior Changing over Time?" published on arXiv, Lingjiao Chen, Matei Zaharia, and James Zou cast doubt on the consistent performance of OpenAI's large language models (LLMs), specifically GPT-3.5 and GPT-4. Using API access, they tested the March and June 2023 versions of these models on tasks like math problem-solving, answering sensitive questions, code generation, and visual reasoning. Most notably, GPT-4's ability to identify prime numbers reportedly plunged dramatically from an accuracy of 97.6 percent in March to just 2.4 percent in June. Strangely, GPT-3.5 showed improved performance in the same period.
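    The paper's exact prompts and grading code aren't reproduced in this excerpt, but the comparison hinges on OpenAI's pinned model snapshots, which let the same question be sent to the March and June builds side by side. A rough sketch of that kind of prime-number check follows; the snapshot names reflect how OpenAI exposed them in 2023, and the prompt and scoring are illustrative rather than the authors'.

        import openai  # legacy (pre-1.0) openai package, current as of mid-2023
        from sympy import isprime

        openai.api_key = "sk-..."  # placeholder

        SNAPSHOTS = ["gpt-4-0314", "gpt-4-0613"]  # March and June 2023 builds

        def model_says_prime(model: str, n: int) -> bool:
            """Ask a pinned snapshot whether n is prime and parse a yes/no answer."""
            reply = openai.ChatCompletion.create(
                model=model,
                messages=[{"role": "user",
                           "content": f"Is {n} a prime number? Answer Yes or No only."}],
                temperature=0,
            )
            return reply["choices"][0]["message"]["content"].strip().lower().startswith("yes")

        def accuracy(model: str, numbers: list[int]) -> float:
            """Fraction of test numbers the snapshot classifies correctly."""
            correct = sum(model_says_prime(model, n) == isprime(n) for n in numbers)
            return correct / len(numbers)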

    This study comes on the heels of people frequently complaining that GPT-4 has subjectively declined in performance over the past few months. Popular theories about why include OpenAI "distilling" models to reduce their computational overhead in a quest to speed up the output and save GPU resources, fine-tuning (additional training) to reduce harmful outputs that may have unintended effects, and a smattering of unsupported conspiracy theories such as OpenAI reducing GPT-4's coding capabilities so more people will pay for GitHub Copilot.

      Report: OpenAI holding back GPT-4 image features on fears of privacy issues

      news.movim.eu / ArsTechnica · Tuesday, 18 July, 2023 - 21:35

    A woman being facially recognized by AI. (Image credit: Witthaya Prasongsin / Getty Images)

    OpenAI has been testing its multimodal version of GPT-4 with image-recognition support prior to a planned wide release. However, public access is being curtailed due to concerns about its ability to potentially recognize specific individuals, according to a New York Times report on Tuesday.

    When OpenAI announced GPT-4 earlier this year, the company highlighted the AI model's multimodal capabilities. This meant that the model could not only process and generate text but also analyze and interpret images, opening up a new dimension of interaction with the AI model.

    Following the announcement, OpenAI took its image-processing abilities a step further in collaboration with a startup called Be My Eyes, which is developing an app to describe images to blind users, helping them interpret their surroundings and interact with the world more independently.

      EU votes to ban AI in biometric surveillance, require disclosure from AI systems

      news.movim.eu / ArsTechnica · Thursday, 15 June, 2023 - 17:04 · 1 minute

    The EU flag in front of an AI-generated background. (Image credit: EU / Stable Diffusion)

    On Wednesday, European Union officials voted to implement stricter proposed regulations concerning AI, according to Reuters. The updated draft of the "AI Act" law includes a ban on the use of AI in biometric surveillance and requires systems like OpenAI's ChatGPT to reveal when content has been generated by AI. While the draft is still non-binding, it gives a strong indication of how EU regulators are thinking about AI.

    The new changes to the European Commission's proposed law—which have not yet been finalized—intend to shield EU citizens from potential threats linked to machine learning technology.

    The changes come amid the proliferation of generative AI systems that imitate human conversational abilities, such as OpenAI's ChatGPT and GPT-4, which have triggered controversial calls for action among AI scientists and industry executives regarding potential societal risks. However, the EU's proposed AI Act is over two years old now, so it isn't just a knee-jerk response to AI hype. It includes provisions that guard against other types of AI harm that are more grounded in the here and now than a hypothetical AI takeover.
