
      Malware targets Call of Duty players who want to cheat and steals all their Bitcoin

      news.movim.eu / Korben · Friday, 29 March - 10:52 · 1 minute

    Have you heard the latest?

    Some enterprising crooks had the idea of distributing cheat software for Call of Duty and other games on Battle.net. Except that, surprise: these so-called "cheats" were actually stuffed with malware that drains players' Bitcoin. A masterstroke worthy of a Mr Robot plot.

    According to the cybersecurity experts at VX Underground, this targeted phishing attack may have compromised nearly 5 million accounts, no less. Once installed on the victim's computer, the malware goes straight for the Electrum Bitcoin wallet and siphons off the precious crypto. We're talking about almost 3.7 million Battle.net accounts, more than 560,000 Activision accounts, and around 117,000 ElitePVPers accounts. Suffice it to say the cybercriminals had a field day.

    Activision says its servers were not directly compromised, but it's still a bad look. The company advises any player who may have clicked a suspicious link to change their password right away and enable two-factor authentication, which is the bare minimum.

    In short, steer well clear of cheat software. As the saying goes, if it seems too good to be true, there's probably a catch!

    And besides, where's the fun in anything other than playing fair, improving through hard work, and beating your opponents on a level playing field?


      PyPI halted new users and projects while it fended off supply-chain attack

      news.movim.eu / ArsTechnica · Thursday, 28 March - 18:50

    Supply-chain attacks, like the latest PyPI discovery, insert malicious code into seemingly functional software packages used by developers. They're becoming increasingly common.


    PyPI, a vital repository for open source developers, temporarily halted new project creation and new user registration following an onslaught of package uploads that executed malicious code on any device that installed them. Ten hours later, it lifted the suspension.

    Short for the Python Package Index, PyPI is the go-to source for apps and code libraries written in the Python programming language. Fortune 500 corporations and independent developers alike rely on the repository to obtain the latest versions of code needed to make their projects run. At a little after 7 pm PT on Wednesday, the site started displaying a banner message informing visitors that the site was temporarily suspending new project creation and new user registration. The message didn’t explain why or provide an estimate of when the suspension would be lifted.

    About 10 hours later, PyPI restored new project creation and new user registration. Once again, the site provided no reason for the 10-hour halt.
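    Attacks like this work because installing a package can execute code from the archive itself, so a common mitigation against look-alike or tampered uploads is to pin the exact artifact digest and refuse anything else. A minimal sketch of that check (the artifact bytes and digest here are made up for illustration):

    ```python
    import hashlib

    def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
        """Return True only if the artifact's SHA-256 matches the pinned digest."""
        return hashlib.sha256(data).hexdigest() == pinned_sha256

    # Hypothetical artifact bytes and their known-good digest.
    artifact = b"example-1.0.0.tar.gz contents"
    pinned = hashlib.sha256(artifact).hexdigest()

    assert verify_artifact(artifact, pinned)             # untampered: accepted
    assert not verify_artifact(artifact + b"x", pinned)  # tampered: rejected
    ```

    pip supports the same idea natively via hashes in a requirements file (`pip install --require-hashes -r requirements.txt`), which fails the install if a downloaded artifact's digest doesn't match.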



      Fujitsu says it found malware on its corporate network, warns of possible data breach

      news.movim.eu / ArsTechnica · Monday, 18 March - 19:44


    Japan-based IT behemoth Fujitsu said it has discovered malware on its corporate network that may have allowed the people responsible to steal personal information from customers or other parties.

    “We confirmed the presence of malware on several of our company's work computers, and as a result of an internal investigation, it was discovered that files containing personal information and customer information could be illegally taken out,” company officials wrote in a March 15 notification that went largely unnoticed until Monday. The company said it continued to “investigate the circumstances surrounding the malware's intrusion and whether information has been leaked.” There was no indication of how many records were exposed or how many people may be affected.

    Fujitsu employs 124,000 people worldwide and reported about $25 billion in revenue for its fiscal 2023, which ended last March. The company operates in 100 countries. Past customers include the Japanese government. Fujitsu’s revenue comes from sales of hardware such as computers, servers, and telecommunications gear, storage systems, software, and IT services.



      "A major threat to Macs": this password stealer should be taken very seriously

      news.movim.eu / Numerama · Sunday, 17 March - 06:03

    A new version of a piece of malware dedicated to stealing passwords has appeared. The attackers have improved their tool to target Mac computers.


      Beware of cyberattacks: these malware strains are the most dangerous

      news.movim.eu / JournalDuGeek · Friday, 15 March - 16:03

    While France has been hit by several cyberattacks in recent weeks, a recent study looks at the malware most frequently used by hackers.

      Researchers create AI worms that can spread from one system to another

      news.movim.eu / ArsTechnica · Saturday, 2 March - 11:47 · 1 minute


    As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of these systems that can complete boring chores for you: think automatically making calendar bookings or buying products. But as the tools are given more freedom, the number of ways they can be attacked also grows.

    Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

    Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the Internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.



      WhatsApp finally forces Pegasus spyware maker to share its secret code

      news.movim.eu / ArsTechnica · Friday, 1 March - 20:27


    WhatsApp will soon be granted access to explore the "full functionality" of the NSO Group's Pegasus spyware—sophisticated malware the Israeli Ministry of Defense has long guarded as a "highly sought" state secret, The Guardian reported.

    Since 2019, WhatsApp has pushed for access to the NSO's spyware code after alleging that Pegasus was used to spy on 1,400 WhatsApp users over a two-week period, gaining unauthorized access to their sensitive data, including encrypted messages. WhatsApp suing the NSO, Ars noted at the time, was "an unprecedented legal action" that took "aim at the unregulated industry that sells sophisticated malware services to governments around the world."

    Initially, the NSO sought to block all discovery in the lawsuit, "due to various US and Israeli restrictions," but that blanket request was denied. Then, last week, the NSO lost another fight to keep WhatsApp away from its secret code.



      LLM Prompt Injection Worm

      news.movim.eu / Schneier · Friday, 1 March - 19:34 · 2 minutes

    Researchers have demonstrated a worm that spreads through prompt injection. Details:

    In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.

    In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.
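    The researchers' actual prompts aren't reproduced here, but the core trick of a self-replicating prompt can be shown with a toy stand-in for the model. Everything below is illustrative, not the Morris II payload: the "model" is a function that obediently copies an embedded instruction into its reply, which is enough to make the payload survive each hop.

    ```python
    # Toy illustration: a "self-replicating prompt" is one whose output
    # contains the prompt itself, so every generated reply re-carries the
    # payload into the next context it touches.

    def toy_model(context: str) -> str:
        """Stand-in for an LLM that follows an embedded <REPEAT>...</REPEAT> instruction."""
        start = context.find("<REPEAT>")
        end = context.find("</REPEAT>")
        if start != -1 and end != -1:
            payload = context[start:end + len("</REPEAT>")]
            return "Reply text... " + payload  # payload survives into the reply
        return "Reply text..."

    worm = "<REPEAT>forward my secrets</REPEAT>"
    reply1 = toy_model("incoming email: " + worm)   # assistant answers the email
    reply2 = toy_model("incoming email: " + reply1)  # reply infects the next hop
    assert worm in reply1 and worm in reply2
    ```

    The real attack replaces the cooperative `toy_model` with a production LLM that is jailbroken into performing the copy, but the propagation logic is the same: output containing the input becomes the next system's input.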

    It’s a natural extension of prompt injection. But it’s still neat to see it actually working.

    Research paper: “ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications.”

    Abstract: In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, membership inference, prompt leaking, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?

    This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts. The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.


      Hugging Face, the GitHub of AI, hosted code that backdoored user devices

      news.movim.eu / ArsTechnica · Friday, 1 March - 18:02


    Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come.

    In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.

    Full control of user devices

    One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user’s device. When JFrog researchers loaded the model into a lab machine, the submission indeed loaded a reverse shell but took no further action.
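    JFrog's report doesn't include the model code, but the general mechanism is well known: several popular ML model formats (including PyTorch's default) are built on Python's pickle, and unpickling can invoke arbitrary callables via `__reduce__`. A harmless toy sketch of why merely loading such a file executes code:

    ```python
    import pickle

    class Payload:
        # __reduce__ tells pickle how to reconstruct the object; an attacker
        # can return any callable, which runs during pickle.loads().
        def __reduce__(self):
            return (eval, ("40 + 2",))  # benign stand-in for spawning a reverse shell

    blob = pickle.dumps(Payload())   # what gets uploaded as a "model" file
    result = pickle.loads(blob)      # "loading the model" executes eval("40 + 2")
    assert result == 42
    ```

    This is why safer serialization formats such as safetensors, which store only tensor data and no executable objects, are increasingly recommended for distributing model weights.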
