      Apple releases eight small AI language models aimed at on-device use

      news.movim.eu / ArsTechnica · Yesterday - 20:55

    An illustration of a robot hand tossing an apple to a human hand. (credit: Getty Images)

    In the world of AI, what might be called "small language models" have been growing in popularity recently because they can be run on a local device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They're mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.

    Apple's new AI models, collectively named OpenELM for "Open-source Efficient Language Models," are currently available on Hugging Face under an Apple Sample Code License. Since there are some restrictions in the license, it may not fit the commonly accepted definition of "open source," but the source code for OpenELM is available.

    On Tuesday, we covered Microsoft's Phi-3 models, which aim to achieve something similar: a useful level of language understanding and processing performance in small AI models that can run locally. Phi-3-mini features 3.8 billion parameters, but some of Apple's OpenELM models are much smaller, ranging from 270 million to 3 billion parameters in eight distinct models.
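
    For a sense of what "small enough to run directly on a smartphone" means in practice, the sketch below loads a sub-billion-parameter checkpoint through Hugging Face's transformers library and generates a short completion locally. The model ID, the trust_remote_code flag, and the Llama-family tokenizer pairing are assumptions about how the weights are published, not details confirmed by the article.

```python
# Minimal sketch: run a small language model locally via transformers.
# Assumptions (not from the article): the smallest checkpoint is published
# as "apple/OpenELM-270M" and pairs with a Llama-family tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M"            # assumed smallest variant
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # assumed tokenizer pairing (gated repo)

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Small language models are useful because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

    At 16-bit precision, a 270-million-parameter model needs roughly half a gigabyte of memory for its weights, which is what makes the on-device pitch plausible even before quantization.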

    Read 7 remaining paragraphs | Comments

      Millions of IPs remain infected by USB worm years after its creators left it for dead

      news.movim.eu / ArsTechnica · Yesterday - 18:49 · 1 minute

    (credit: Getty Images)

    A now-abandoned USB worm that backdoors connected devices has continued to self-replicate for years since its creators lost control of it and remains active on thousands, possibly millions, of machines, researchers said Thursday.

    The worm—which first came to light in a 2023 post published by security firm Sophos—became active in 2019 when a variant of malware known as PlugX added functionality that allowed it to infect USB drives automatically. In turn, those drives would infect any new machine they connected to, a capability that allowed the malware to spread without requiring any end-user interaction. Researchers who have tracked PlugX since at least 2008 have said that the malware has origins in China and has been used by various groups tied to the country’s Ministry of State Security.

    Still active after all these years

    For reasons that aren’t clear, the worm creator abandoned the one and only IP address that was designated as its command-and-control channel. With no one controlling the infected machines anymore, the PlugX worm was effectively dead, or at least one might have presumed so. The worm, it turns out, has continued to live on in an undetermined number of machines that possibly reaches into the millions, researchers from security firm Sekoia reported.
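
    The article doesn't describe Sekoia's tooling, but the standard way to arrive at a figure like "possibly millions" is to sinkhole the abandoned command-and-control address and count the unique source IPs that keep phoning home. A minimal sketch of that tally, assuming a simple one-connection-per-line log format rather than Sekoia's actual pipeline:

```python
# Minimal sketch: count unique source IPs per day in sinkhole logs.
# Assumed log format: "YYYY-MM-DDTHH:MM:SS,src_ip" per connection; this is
# an illustration, not Sekoia's actual measurement pipeline.
import csv
from collections import defaultdict

def unique_ips_per_day(log_path: str) -> dict[str, int]:
    seen: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for timestamp, src_ip in csv.reader(f):
            seen[timestamp[:10]].add(src_ip)  # bucket by "YYYY-MM-DD" prefix
    return {day: len(ips) for day, ips in sorted(seen.items())}
```

    Unique IPs are only a proxy for infected machines: NAT hides many machines behind one address, while DHCP churn counts a single machine several times, which is why the infection count stays "undetermined."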

    Read 10 remaining paragraphs | Comments

      School athletic director arrested for framing principal using AI voice synthesis

      news.movim.eu / ArsTechnica · Yesterday - 15:30 · 1 minute

    Illustration of a robot speaking. (credit: Getty Images)

    On Thursday, Baltimore County Police arrested Pikesville High School's former athletic director, Dazhon Darien, and charged him with using AI to impersonate Principal Eric Eiswert, according to a report by The Baltimore Banner. Police say Darien used AI voice synthesis software to simulate Eiswert's voice, leading the public to believe the principal made racist and antisemitic comments.

    The audio clip, posted on a popular Instagram account, contained offensive remarks about "ungrateful Black kids" and their academic performance, as well as a threat to "join the other side" if the speaker received one more complaint from "one more Jew in this community." The recording also mentioned names of staff members, including Darien's nickname "DJ," suggesting they should not have been hired or should be removed "one way or another."

    The comments led to significant uproar from students, faculty, and the wider community, many of whom initially believed the principal had actually made the comments. A Pikesville High School teacher named Shaena Ravenell reportedly played a large role in disseminating the audio. While she has not been charged, police indicated that she forwarded the controversial email to a student known for their ability to quickly spread information through social media. This student then escalated the audio's reach, which included sharing it with the media and the NAACP.

    Read 5 remaining paragraphs | Comments

      Cisco firewall 0-days under attack for 5 months by resourceful nation-state hackers

      news.movim.eu / ArsTechnica · 2 days ago - 20:55 · 1 minute

    A stylized skull and crossbones made out of ones and zeroes. (credit: Getty Images)

    Hackers backed by a powerful nation-state have been exploiting two zero-day vulnerabilities in Cisco firewalls in a five-month-long campaign that breaks into government networks around the world, researchers reported Wednesday.

    The attacks against Cisco’s Adaptive Security Appliances firewalls are the latest in a rash of network compromises that target firewalls, VPNs, and network-perimeter devices, which are designed to provide a moated gate of sorts that keeps remote hackers out. Over the past 18 months, threat actors—mainly backed by the Chinese government—have turned this security paradigm on its head in attacks that exploit previously unknown vulnerabilities in security appliances from the likes of Ivanti, Atlassian, Citrix, and Progress. These devices are ideal targets because they sit at the edge of a network, provide a direct pipeline to its most sensitive resources, and interact with virtually all incoming communications.

    Cisco ASA likely one of several targets

    On Wednesday, it was Cisco’s turn to warn that its ASA products have received such treatment. Since November, a previously unknown actor tracked as UAT4356 by Cisco and STORM-1849 by Microsoft has been exploiting two zero-days in attacks that go on to install two pieces of never-before-seen malware, researchers with Cisco’s Talos security team said. Notable traits in the attacks include:

    Read 12 remaining paragraphs | Comments

      Deepfakes in the courtroom: US judicial panel debates new AI evidence rules

      news.movim.eu / ArsTechnica · 2 days ago - 20:14

    An illustration of a man with a very long nose holding up the scales of justice. (credit: Getty Images)

    On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference's Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.

    The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI's ChatGPT or Stability AI's Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.

    In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials:

    Read 9 remaining paragraphs | Comments

      Hackers infect users of antivirus service that delivered updates over HTTP

      news.movim.eu / ArsTechnica · 3 days ago - 21:03

    (credit: Getty Images)

    Hackers abused an antivirus service for five years in order to infect end users with malware. The attack worked because the service delivered updates over HTTP, a protocol vulnerable to attacks that corrupt or tamper with data as it travels over the Internet.

    The unknown hackers, who may have ties to the North Korean government, pulled off this feat by performing a man-in-the-middle (MitM) attack that replaced the genuine update with a file that installed an advanced backdoor instead, said researchers from security firm Avast today.

    eScan, an AV service headquartered in India, has delivered updates over HTTP since at least 2019, Avast researchers reported. This protocol presented a valuable opportunity for installing the malware, which is tracked in security circles under the name GuptiMiner.
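
    The underlying weakness is that plain HTTP gives the update client no way to tell a genuine definitions file from one swapped in mid-route. A generic sketch of the usual countermeasure follows, fetching over TLS and refusing to install anything whose SHA-256 digest doesn't match a value obtained out of band; the URL and digest are placeholders, and this is not eScan's actual update mechanism.

```python
# Generic sketch: download an update over HTTPS and verify its SHA-256 digest
# before installing. URL and expected digest are placeholders, not eScan's.
import hashlib
import urllib.request

UPDATE_URL = "https://updates.example.com/av/definitions.bin"  # placeholder
EXPECTED_SHA256 = "0" * 64                                     # placeholder

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    with urllib.request.urlopen(url) as resp:  # TLS protects the transport
        payload = resp.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"update rejected: unexpected digest {digest}")
    return payload
```

    Over plain HTTP, a man-in-the-middle can rewrite the payload silently; with TLS plus a pinned digest (or, better, a publisher signature check), a tampered file fails closed instead of executing.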

    Read 10 remaining paragraphs | Comments

      Microsoft’s Phi-3 shows the surprising power of small, locally run AI language models

      news.movim.eu / ArsTechnica · 3 days ago - 20:47

    An illustration of lots of information being compressed into a smartphone with a funnel. (credit: Getty Images)

    On Tuesday, Microsoft announced a new, freely available lightweight AI language model named Phi-3-mini, which is simpler and less expensive to operate than traditional large language models (LLMs) like OpenAI's GPT-4 Turbo. Its small size is ideal for running locally, which could bring an AI model comparable in capability to the free version of ChatGPT to a smartphone without needing an Internet connection to run it.

    The AI field typically measures AI language model size by parameter count. Parameters are numerical values in a neural network that determine how the language model processes and generates text. They are learned during training on large datasets and essentially encode the model's knowledge into quantified form. More parameters generally allow the model to capture more nuanced and complex language-generation capabilities but also require more computational resources to train and run.

    Some of the largest language models today, like Google's PaLM 2, have hundreds of billions of parameters. OpenAI's GPT-4 is rumored to have over a trillion parameters, spread across eight 220-billion-parameter models in a mixture-of-experts configuration. Both models require heavy-duty data center GPUs (and supporting systems) to run properly.
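
    A quick back-of-envelope calculation shows why parameter count maps so directly onto where a model can run: weight memory is roughly the parameter count times the bytes used per parameter, before activations or the KV cache are considered. The figures below are estimates for illustration, not vendor-published requirements.

```python
# Back-of-envelope weight-memory estimates: parameters * bytes per parameter.
# Ignores activations, KV cache, and runtime overhead; illustrative only.
def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    return params * bytes_per_param / 1e9

for name, params in [("OpenELM-270M (smallest)", 270e6),
                     ("Phi-3-mini", 3.8e9),
                     ("hundreds-of-billions-class LLM", 300e9)]:
    fp16 = weight_memory_gb(params, 2.0)   # 16-bit weights
    int4 = weight_memory_gb(params, 0.5)   # 4-bit quantized weights
    print(f"{name}: ~{fp16:,.1f} GB at fp16, ~{int4:,.1f} GB at 4-bit")
```

    Phi-3-mini lands around 7.6 GB at 16-bit and under 2 GB quantized to 4-bit, which is phone-plausible; a 300-billion-parameter model needs hundreds of gigabytes and the data center GPUs mentioned above.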

    Read 8 remaining paragraphs | Comments

      Windows vulnerability reported by the NSA exploited to install Russian backdoor

      news.movim.eu / ArsTechnica · 4 days ago - 20:36

    Kremlin-backed hackers exploit critical Windows vulnerability reported by the NSA. (credit: Getty Images)

    Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years in attacks that targeted a vast array of organizations with a previously undocumented backdoor, the software maker disclosed Monday.

    When Microsoft patched the vulnerability in October 2022—at least two years after it came under attack by the Russian hackers—the company made no mention that it was under active exploitation. As of publication, the company’s advisory still made no mention of the in-the-wild targeting. Windows users frequently prioritize the installation of patches based on whether a vulnerability is likely to be exploited in real-world attacks.

    Exploiting CVE-2022-38028, as the vulnerability is tracked, allows attackers to gain system privileges, the highest available in Windows, when combined with a separate exploit. Exploiting the flaw, which carries a 7.8 severity rating out of a possible 10, requires low existing privileges and little complexity. It resides in the Windows print spooler, a printer-management component that has harbored previous critical zero-days . Microsoft said at the time that it learned of the vulnerability from the US National Security Agency.
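
    Since the flaw lives in the print spooler, a natural first question for a Windows administrator is simply which hosts are running the Spooler service at all. The article doesn't prescribe this check, so treat the snippet below as an illustrative aside; it assumes psutil is installed on a Windows host.

```python
# Illustrative only: report whether the Windows Print Spooler service is
# running on this host. Requires psutil on Windows; not a mitigation the
# article prescribes, just context for where CVE-2022-38028 resides.
import psutil

def spooler_status() -> str:
    try:
        return psutil.win_service_get("Spooler").status()  # e.g. "running"
    except Exception as exc:  # non-Windows host or missing service
        return f"unavailable ({exc})"

if __name__ == "__main__":
    print("Print Spooler:", spooler_status())
```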

    Read 7 remaining paragraphs | Comments

      Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

      news.movim.eu / ArsTechnica · 7 days ago - 13:07 · 1 minute

    A sample image from Microsoft for "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time." (credit: Microsoft)

    On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don't require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

    "It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors," reads the abstract of the accompanying research paper titled, "VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time." It's the work of Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo.

    The VASA framework (short for "Visual Affective Skills Animator") uses machine learning to analyze a static image along with a speech audio clip. It is then able to generate a realistic video with precise facial expressions, head movements, and lip-syncing to the audio. It does not clone or simulate voices (like other Microsoft research) but relies on an existing audio input that could be specially recorded or spoken for a particular purpose.

    Read 11 remaining paragraphs | Comments