
      Elon Musk predicts superhuman AI will be smarter than people next year

      news.movim.eu / TheGuardian · Tuesday, 9 April - 11:38

    His claims come with a caveat that shortages of training chips and growing demand for power could limit plans in the near term

    Superhuman artificial intelligence that is smarter than anyone on Earth could exist next year, Elon Musk has said, unless the sector’s power and computing demands become unsustainable before then.

    The prediction is a sharp tightening of an earlier claim from the multibillionaire that superintelligent AI would exist by 2029. Whereas “superhuman” is generally defined as being smarter than any individual human at any specific task, “superintelligent” is instead often defined as being smarter than the combined abilities of all humans at any task.

    Continue reading...

      One engineer’s curiosity may have saved us from a devastating cyber-attack | John Naughton

      news.movim.eu / TheGuardian · Saturday, 6 April - 15:00 · 1 minute

    In discovering malicious code that endangered global networks in open-source software, Andres Freund exposed our reliance on insecure, volunteer-maintained tech

    On Good Friday, a Microsoft engineer named Andres Freund noticed something peculiar. He was using a software tool called SSH for securely logging into remote computers on the internet, but the interactions with the distant machines were significantly slower than usual. So he did some digging and found malicious code embedded in a software package called XZ Utils that was running on his machine. XZ Utils is a critical utility for compressing (and decompressing) data on the Linux operating system, the OS that powers the vast majority of publicly accessible internet servers across the world, which means that virtually every such machine runs it.

    Freund’s digging revealed that the malicious code had arrived on his machine via two recent updates to XZ Utils, and he alerted the Open Source Security mailing list, revealing that those updates were the result of someone intentionally planting a backdoor in the compression software. It was what is called a “supply-chain attack” (like the catastrophic SolarWinds attack of 2020), in which malicious software is not injected directly into targeted machines but distributed by infecting the regular software updates to which all computer users are wearily accustomed. If you want to get malware out there, infecting the supply chain is the smart way to do it.
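
    The compromised releases were identified quickly: the public advisory, CVE-2024-3094, covers XZ Utils versions 5.6.0 and 5.6.1. As a rough illustration only, and not something from Naughton’s piece, the short Python sketch below reports which xz release a machine has installed and flags those two versions; the format of the "xz --version" output and the affected-version list are the only assumptions it makes.

    # Minimal sketch (not from the article): report the locally installed xz
    # release and flag the two versions named in the public advisory
    # (CVE-2024-3094 covers XZ Utils 5.6.0 and 5.6.1).
    import re
    import subprocess

    AFFECTED_VERSIONS = {"5.6.0", "5.6.1"}  # releases named in the advisory

    def installed_xz_version():
        """Return the installed xz version string, or None if xz is unavailable."""
        try:
            output = subprocess.run(
                ["xz", "--version"], capture_output=True, text=True, check=True
            ).stdout
        except (FileNotFoundError, subprocess.CalledProcessError):
            return None
        # Typical first line of output: "xz (XZ Utils) 5.4.1"
        match = re.search(r"xz \(XZ Utils\) (\d+\.\d+(?:\.\d+)?)", output)
        return match.group(1) if match else None

    if __name__ == "__main__":
        version = installed_xz_version()
        if version is None:
            print("xz does not appear to be installed")
        elif version in AFFECTED_VERSIONS:
            print(f"xz {version} is one of the affected releases; update it")
        else:
            print(f"xz {version} is not one of the two affected releases")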

    Continue reading...

      Google set to charge for internet searches with AI, reports say

      news.movim.eu / TheGuardian · Thursday, 4 April - 14:32

    Cost of artificial intelligence service could mean leaders in sector turning to subscription models

    Google is reportedly drawing up plans to charge for AI-enhanced search features, in what would be the biggest shake-up of the company’s revenue model in its history.

    The radical shift is a natural consequence of the vast expense required to provide the service, experts say, and would leave every leading player in the sector offering some variety of subscription model to cover its costs.

    Continue reading...

      Why I wrote an AI transparency statement for my book, and think other authors should too | Kester Brewin

      news.movim.eu / TheGuardian · Thursday, 4 April - 10:36

    Until we have a reliable way to test whether a text was written with artificial intelligence, writers need a tool to maintain trust in their work. So I decided to be completely open with my readers

    “Where do you get the time?” For many years, when I’d announce to friends that I had another book coming out, I’d take responses like this as a badge of pride.

    These past few months, while publicising my new book about AI, God-Like, I’ve tried not to hear in those same words an undertone of accusation: “Where do you get the time?” Meaning, you must have had help from ChatGPT, right?

    Continue reading...

      ‘Many-shot jailbreaking’: AI lab describes how tools’ safety features can be bypassed

      news.movim.eu / TheGuardian · Wednesday, 3 April - 13:38

    Paper by Anthropic outlines how LLMs can be forced to generate responses to potentially harmful requests

    The safety features on some of the most powerful AI tools that stop them being used for cybercrime or terrorism can be bypassed simply by flooding them with examples of wrongdoing, research shows.

    In a paper from the AI lab Anthropic, which produces the large language model (LLM) behind the ChatGPT rival Claude, researchers described an attack they called “many-shot jailbreaking”. It is as simple as it is effective.

    Continue reading...

      Wearable AI: will it put our smartphones out of fashion?

      news.movim.eu / TheGuardian · Sunday, 31 March - 11:00 · 1 minute

    Portable AI-powered devices that connect directly to a chatbot without the need for apps or a touchscreen are set to hit the market. Are they the emperor’s new clothes or a gamechanger?

    Imagine it: you’re on the bus or walking in the park when you realise that some important task has slipped your mind. You were meant to send an email, catch up on a meeting, or arrange to grab lunch with a friend. Without missing a beat, you simply say aloud what you’ve forgotten, and the small device that’s pinned to your chest, or resting on the bridge of your nose, sends the message, summarises the meeting, or pings your buddy a lunch invitation. The work has been taken care of, without you ever having to prod the screen of your smartphone.

    It’s the sort of utopian convenience that a growing wave of tech companies are hoping to realise through artificial intelligence. Generative AI chatbots such as ChatGPT exploded in popularity last year, as search engines like Google, messaging apps such as Slack and social media services like Snapchat raced to integrate the tech into their systems. Yet while AI add-ons have become a familiar sight across apps and software, the same generative tech is now making an attempt to join the realm of hardware, as the first AI-powered consumer devices rear their heads and jostle for space with our smartphones.

    Continue reading...

      How did a small developer of graphics cards for gamers suddenly become the third most valuable firm on the planet? | John Naughton

      news.movim.eu / TheGuardian · Saturday, 30 March - 16:00

    By turning his computer chip-making company Nvidia into a vital component in the AI arms race, Jensen Huang has placed himself at the forefront of the biggest gold rush in tech history

    A funny thing happened on our way to the future. It took place recently in a huge sports arena in San Jose, California, and was described by some wag as “AI Woodstock”. But whereas that original music festival had attendees who were mainly stoned on conventional narcotics, the 11,000 or so in San Jose were high on the Kool-Aid so lavishly provided by the tech industry.

    They were gathered to hear a keynote address at a technology conference given by Jensen Huang, the founder of computer chip-maker Nvidia, who is now the Taylor Swift of Silicon Valley. Dressed in his customary leather jacket and white-soled trainers, he delivered a bravura 50-minute performance that recalled Steve Jobs in his heyday, though with slightly less slick delivery. The audience, likewise, recalled the fanboys who used to queue for hours to be allowed into Jobs’s reality distortion field, except that the Huang fans were not as attentive to the cues he gave them to applaud.

    Continue reading...

      ‘It’s very easy to steal someone’s voice’: how AI is affecting video game actors

      news.movim.eu / TheGuardian · Friday, 29 March - 10:02

    The increased use of AI to replicate the voices and movements of actors has benefits, but some are concerned about how and when it might be used, and who might be left short-changed

    When she discovered her voice had been uploaded to multiple websites without her consent, the actor Cissy Jones told the sites to take it down immediately. Some complied. “Others who have more money in their banks basically sent me the email equivalent of a digital middle finger and said: don’t care,” Jones recalls by phone.

    “That was the genesis for me to start talking to friends of mine about: listen, how do we do this the right way? How do we understand that the genie is out of the bottle and find a way to be a part of the conversation or we will get systematically annihilated? I know that sounds dramatic but, given how easy it is to steal a person’s voice, it’s not far off the mark.

    Continue reading...

      AI ‘apocalypse’ could take away almost 8m jobs in UK, says report

      news.movim.eu / TheGuardian · Wednesday, 27 March - 05:00

    Women, younger workers and lower paid are at most risk from artificial intelligence, says IPPR thinktank

    Almost 8 million UK jobs could be lost to artificial intelligence in a “jobs apocalypse”, according to a report warning that women, younger workers and those on lower wages are at most risk from automation.

    The Institute for Public Policy Research (IPPR) said that entry-level, part-time and administrative jobs were most exposed to being replaced by AI under a “worst-case scenario” for the rollout of new technologies in the next three to five years.

    Continue reading...