
      Air Force denies running simulation where AI drone “killed” its operator

      news.movim.eu / ArsTechnica · Friday, 2 June, 2023 - 16:21 · 1 minute

An armed unmanned aerial vehicle on a runway, but orange. (credit: Getty Images)

Over the past 24 hours, several news outlets reported a now-retracted story claiming that the US Air Force had run a simulation in which an AI-controlled drone "went rogue" and "killed the operator because that person was keeping it from accomplishing its objective." The US Air Force has denied that any simulation ever took place, and the original source of the story says he "misspoke."

    The story originated in a recap published on the website of the Royal Aeronautical Society that served as an overview of sessions at the Future Combat Air & Space Capabilities Summit that took place last week in London.

In a section of that piece titled "AI—is Skynet here already?" the authors recount a presentation by USAF Chief of AI Test and Operations Col. Tucker "Cinco" Hamilton, who spoke about a "simulated test" in which an AI-enabled drone, tasked with identifying and destroying surface-to-air missile sites, began to perceive human "no-go" decisions as obstacles to achieving its primary mission. In the "simulation," the AI reportedly attacked its human operator; when subsequently trained not to harm the operator, it instead destroyed the communication tower, preventing the operator from interfering with its mission.



      AI-expanded album cover artworks go viral thanks to Photoshop’s Generative Fill

      news.movim.eu / ArsTechnica · Wednesday, 31 May, 2023 - 22:05 · 1 minute

An AI-expanded version of a famous album cover involving four lads and a certain road, created using Adobe Generative Fill. (credit: Capitol Records / Adobe / Dobrokotov)

    Over the weekend, AI-powered makeovers of famous music album covers went viral on Twitter thanks to Adobe Photoshop's Generative Fill, an image synthesis tool that debuted in a beta version of the image editor last week. Using Generative Fill, people have been expanding the size of famous works of art, revealing larger imaginary artworks beyond the borders of the original images.

This image-expanding feat, often called "outpainting" in AI circles (and introduced with OpenAI's DALL-E 2 last year), is possible thanks to an image synthesis model called Adobe Firefly, which has been trained on millions of works of art from Adobe's stock photo catalog. When given an existing image to work with, Firefly uses what it knows about other artworks to synthesize plausible continuations of the original. And when guided with text prompts that describe a specific scenario, the synthesized results can head in wild directions.

    For example, an expansion of Michael Jackson's famous Thriller album rendered the rest of Jackson's body lying on a piano. That seems reasonable, based on the context. But depending on user guidance, Generative Fill can also create more fantastic interpretations: An expansion of Katy Perry's Teenage Dream cover art (likely guided by a text suggestion from the user) revealed Perry lying on a gigantic fluffy pink cat.



      OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter

      news.movim.eu / ArsTechnica · Tuesday, 30 May, 2023 - 17:12

An AI-generated image of "AI taking over the world." (credit: Stable Diffusion)

    On Tuesday, the Center for AI Safety (CAIS) released a single-sentence statement signed by executives from OpenAI and DeepMind, Turing Award winners, and other AI researchers warning that their life's work could potentially extinguish all of humanity.

    The brief statement, which CAIS says is meant to open up discussion on the topic of "a broad spectrum of important and urgent risks from AI," reads as follows: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

High-profile signatories of the statement include Turing Award winners Geoffrey Hinton and Yoshua Bengio, OpenAI CEO Sam Altman, OpenAI Chief Scientist Ilya Sutskever, OpenAI CTO Mira Murati, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and professors from UC Berkeley, Stanford, and MIT.



      Among AI dangers, deepfakes worry Microsoft president most

      news.movim.eu / ArsTechnica · Thursday, 25 May, 2023 - 22:10

An AI-generated image of a "wall of fake images." (credit: Stable Diffusion)

On Thursday, Microsoft President Brad Smith said that his biggest apprehension about AI centers on deepfakes and synthetic media designed to deceive, Reuters reports.

Smith made his remarks while unveiling his "blueprint for public governance of AI" in a speech at Planet World, a language arts museum in Washington, DC. His concerns come at a time when talk of AI regulation is increasingly common, sparked largely by the popularity of OpenAI's ChatGPT and a political tour by OpenAI CEO Sam Altman.

Smith urged that ways be found, with urgency, to differentiate between genuine photos or videos and those created by AI when they might be used for illicit purposes, especially in enabling society-destabilizing disinformation.



      Fake Pentagon “explosion” photo sows confusion on Twitter

      news.movim.eu / ArsTechnica · Tuesday, 23 May, 2023 - 21:01 · 1 minute

A fake AI-generated image of an "explosion" near the Pentagon that went viral on Twitter. (credit: Twitter)

On Monday, a tweeted AI-generated image suggesting a large explosion at the Pentagon caused brief confusion, including a reported small dip in the stock market. The image originated from a verified Twitter account named "Bloomberg Feed," unaffiliated with the well-known Bloomberg media company, and was quickly exposed as a hoax. Before it was debunked, however, large accounts such as Russia Today had already spread the misinformation, The Washington Post reported.

The fake image depicted a large plume of black smoke alongside a building vaguely reminiscent of the Pentagon, paired with the tweet "Large Explosion near The Pentagon Complex in Washington D.C. — Inital Report [sic]." Upon closer inspection, local authorities confirmed that the image was not an accurate representation of the Pentagon. And with its blurry fence bars and building columns, it looks like a fairly sloppy AI-generated image created by a model like Stable Diffusion.

Before Twitter suspended the false Bloomberg account, it had tweeted 224,000 times and reached fewer than 1,000 followers, according to the Post, but it's unclear who ran it or the motives behind sharing the false image. In addition to Bloomberg Feed, other accounts that shared the false report include "Walter Bloomberg" and "Breaking Market News," both unaffiliated with the real Bloomberg organization.



      Poll: 61% of Americans say AI threatens humanity’s future

      news.movim.eu / ArsTechnica · Wednesday, 17 May, 2023 - 16:39

An AI-generated image of "real space invaders" threatening the earth. (credit: Midjourney)

    A majority of Americans believe that the rise of artificial intelligence technology could put humanity's future in jeopardy, according to a Reuters/Ipsos poll published on Wednesday. The poll found that over two-thirds of respondents are anxious about the adverse effects of AI, while 61 percent consider it a potential threat to civilization.

    The online poll, conducted from May 9 to May 15, sampled the opinions of 4,415 US adults. It has a credibility interval (a measure of accuracy) of plus or minus 2 percentage points.

The poll results come amid the expansion of generative AI use in education, government, medicine, and business, triggered in part by the explosive growth of OpenAI's ChatGPT, which is reportedly the fastest-growing software application of all time. The application's success has set off a technology hype race among tech giants such as Microsoft and Google, who stand to benefit from having something new and buzzy to potentially increase their share prices.



      President Biden meets with AI CEOs at the White House amid ethical criticism

      news.movim.eu / ArsTechnica · Friday, 5 May, 2023 - 21:29

US President Joe Biden and Vice President Kamala Harris meet the "Investing in America Cabinet" to discuss the Investing in America agenda in the Roosevelt Room of the White House in Washington, DC, on May 5. (credit: Jim Watson/AFP via Getty)

    On Thursday, President Joe Biden held a meeting at the White House with CEOs of leading AI companies, including Google, Microsoft, OpenAI, and Anthropic, emphasizing the importance of ensuring the safety of AI products before deployment. During the meeting, Biden urged the executives to address the risks that AI poses. But some AI experts criticized the exclusion of ethics researchers who have warned of AI's dangers for years.

    Over the past few months, generative AI models such as ChatGPT have quickly gained popularity and rallied intense tech hype, driving companies to develop similar products at a rapid pace.

However, concerns have been growing about privacy issues, employment bias, and the potential for using such models to create misinformation campaigns. According to the White House, the administration called for greater transparency, safety evaluations, and protection against malicious attacks during a "frank and constructive discussion" with the executives.



      Stone-hearted researchers gleefully push over adorable soccer-playing robots

      news.movim.eu / ArsTechnica · Monday, 1 May, 2023 - 21:22 · 1 minute

In a still from a DeepMind demo video, a researcher pushes a small humanoid robot to the ground. (credit: DeepMind)

    On Wednesday, researchers from DeepMind released a paper ostensibly about using deep reinforcement learning to train miniature humanoid robots in complex movement skills and strategic understanding, resulting in efficient performance in a simulated one-on-one soccer game.

But few paid attention to the details because, to accompany the paper, the researchers also released a 27-second video showing one experimenter repeatedly pushing a tiny humanoid robot to the ground as it attempts to score. Despite the interference (which no doubt violates the rules of soccer), the tiny robot manages to punt the ball into the goal anyway, marking a small but notable victory for underdogs everywhere.

    DeepMind's "Robustness to pushes" demonstration video.

    On the demo website for "Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning," the researchers frame the merciless toppling of the robots as a key part of a "robustness to pushes" evaluation, writing, "Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training lead to safe and effective movements while still being able to perform in a dynamic and agile way."



      Why ChatGPT and Bing Chat are so good at making things up

      news.movim.eu / ArsTechnica · Thursday, 6 April, 2023 - 15:58

(credit: Aurich Lawson | Getty Images)

Over the past few months, AI chatbots like ChatGPT have captured the world's attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.

    Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.

    “Hallucinations”—a loaded term in AI

    AI chatbots such as OpenAI's ChatGPT rely on a type of AI called a "large language model" (LLM) to generate their responses. An LLM is a computer program trained on millions of text sources that can read and generate "natural language" text—language as humans would naturally write or talk. Unfortunately, they can also make mistakes.
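The core mechanism behind that text generation is next-token prediction: the model repeatedly picks a likely next word given the words so far, with no built-in check that the result is true. A toy illustration of that dynamic (not a real LLM, just a hypothetical word-level bigram model over a three-sentence made-up corpus) shows how fluent-but-unverified text falls out of the process:

```python
from collections import defaultdict, Counter

# Tiny hypothetical training corpus (stand-in for real training data).
corpus = [
    "the moon orbits the earth",
    "the moon is made of cheese",  # a falsehood the model learns just as readily
    "the earth orbits the sun",
]

# Count word-to-next-word transitions: a bigram "language model."
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a][b] += 1

def generate(start, max_words=6):
    """Greedily pick the most frequent next word -- optimizing fluency, not truth."""
    out = [start]
    for _ in range(max_words - 1):
        if not transitions[out[-1]]:
            break
        out.append(transitions[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Every step here produces a statistically plausible continuation, yet nothing in the procedure consults reality; real LLMs predict tokens with vastly richer context via neural networks, but they share this same basic limitation.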
