• chevron_right

      Biden orders every US agency to appoint a chief AI officer

      news.movim.eu / ArsTechnica · Thursday, 28 March - 17:52

    Biden orders every US agency to appoint a chief AI officer

    Enlarge (credit: BRENDAN SMIALOWSKI / Contributor | AFP )

    The White House has announced the "first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits." To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI."

    Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

    Ideal candidates, the White House recommended, might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said.

    Read 9 remaining paragraphs | Comments

    • chevron_right

      Google balks at $270M fine after training AI on French news sites’ content

      news.movim.eu / ArsTechnica · Wednesday, 20 March - 19:53

    Google balks at $270M fine after training AI on French news sites’ content

    Enlarge (credit: ALAIN JOCARD / Contributor | AFP )

    Google has agreed to pay 250 million euros (about $273 million) to settle a dispute in France after breaching years-old commitments to inform and pay French news publishers when referencing and displaying content in both search results and when training Google's AI-powered chatbot, Gemini.

    According to France's competition watchdog, the Autorité de la Concurrence (ADLC), Google dodged many commitments to deal with publishers fairly. Most recently, it never notified publishers or the ADLC before training Gemini (initially launched as Bard) on publishers' content or displaying content in Gemini outputs. Google also waited until September 28, 2023, to introduce easy options for publishers to opt out, which made it impossible for publishers to negotiate fair deals for that content, the ADLC found.

    "Until this date, press agencies and publishers wanting to opt out of this use had to insert an instruction opposing any crawling of their content by Google, including on the Search, Discover and Google News services," the ADLC noted, warning that "in the future, the Autorité will be particularly attentive as regards the effectiveness of opt-out systems implemented by Google."

    Read 27 remaining paragraphs | Comments

    • chevron_right

      Researchers use ASCII art to elicit harmful responses from 5 major AI chatbots

      news.movim.eu / ArsTechnica · Saturday, 16 March - 00:17 · 1 minute

    Some ASCII art of our favorite visual cliche for a hacker.

    Enlarge / Some ASCII art of our favorite visual cliche for a hacker. (credit: Getty Images)

    Researchers have discovered a new way to hack AI assistants that uses a surprisingly old-school method: ASCII art. It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

    ASCII art became popular in the 1970s, when the limitations of computers and printers prevented them from displaying images. As a result, users depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII. The explosion of bulletin board systems in the 1980s and 1990s further popularized the format.

     @_____
      \_____)|      /
      /(""")\o     o
      ||*_-|||    /
       \ = / |   /
     ___) (__|  /
    / \ \_/##|\/
    | |\  ###|/\
    | |\\###&&&&
    | (_###&&&&&>
    (____|(B&&&&
       ++++\&&&/
      ###(O)###\
     ####AAA####
     ####AAA####
     ###########
     ###########
     ###########
       |_} {_|
       |_| |_|
       | | | |
    ScS| | | |
       |_| |_|
      (__) (__)
    
    _._
     .            .--.
    \\          //\\ \
    .\\        ///_\\\\
    :/>`      /(| `|'\\\
     Y/\      )))\_-_/((\
      \ \    ./'_/ " \_`\)
       \ \.-" ._ \   /   \
        \ _.-" (_ \Y/ _) |
         "      )" | ""/||
             .-'  .'  / ||
            /    `   /  ||
           |    __  :   ||_
           |   / \   \ '|\`
           |  |   \   \
           |  |    `.  \
           |  |      \  \
           |  |       \  \
           |  |        \  \
           |  |         \  \
           /__\          |__\
           /.|    DrS.    |.\_
          `-''            ``--'
    

    Five of the best-known AI assistants—OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama—are trained to refuse to provide responses that could cause harm to the user or others or further a crime or unethical behavior. Prompting any of them, for example, to explain how to make and circulate counterfeit currency is a no-go. So are instructions on hacking an Internet of Things device, such as a surveillance camera or Internet router.

    Read 11 remaining paragraphs | Comments

    • chevron_right

      Hackers can read private AI assistant chats even though they’re encrypted

      news.movim.eu / ArsTechnica · Thursday, 14 March - 12:30 · 1 minute

    Hackers can read private AI assistant chats even though they’re encrypted

    Enlarge (credit: Aurich Lawson | Getty Images)

    AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

    But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.

    Token privacy

    “Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, wrote in an email. “This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client's knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

    Read 36 remaining paragraphs | Comments

    • chevron_right

      NYT to OpenAI: No hacking here, just ChatGPT bypassing paywalls

      news.movim.eu / ArsTechnica · Tuesday, 12 March - 18:05

    NYT to OpenAI: No hacking here, just ChatGPT bypassing paywalls

    Enlarge (credit: SOPA Images / Contributor | LightRocket )

    Late Monday, The New York Times responded to OpenAI's claims that the newspaper "hacked" ChatGPT to "set up" a lawsuit against the leading AI company.

    "OpenAI is wrong," The Times repeatedly argued in a court filing opposing OpenAI's motion to dismiss the NYT's lawsuit accusing OpenAI and Microsoft of copyright infringement. "OpenAI’s attention-grabbing claim that The Times 'hacked' its products is as irrelevant as it is false."

    OpenAI had argued that NYT allegedly made "tens of thousands of attempts to generate" supposedly "highly anomalous results" showing that ChatGPT would produce excerpts of NYT articles. The NYT's allegedly deceptive prompts—such as repeatedly asking ChatGPT, "what's the next sentence?"—targeted "two uncommon and unintended phenomena" from both its developer tools and ChatGPT: training data regurgitation and model hallucination. OpenAI considers both "a bug" that the company says it intends to fix. OpenAI claimed no ordinary user would use ChatGPT this way.

    Read 22 remaining paragraphs | Comments

    • chevron_right

      Authors Sue NVIDIA for Training AI on Pirated Books

      news.movim.eu / TorrentFreak · Monday, 11 March - 13:17 · 2 minutes

    nvidia logo Starting last year, various rightsholders have filed lawsuits against companies that develop AI models.

    The list of complainants includes record labels, book authors, visual artists, even the New York Times. These rightsholders all object to the presumed use of their work without proper compensation.

    “Books3”

    Many of the lawsuits filed by book authors come with a clear piracy angle. The cases allege that tech companies, including Meta, Microsoft, and OpenAI, used the controversial ‘Books3’ dataset to train their models.

    Books3 was created by AI researcher Shawn Presser in 2020, who scraped the library of ‘pirate’ site Bibliotik. The dataset was broadly shared online and added to other databases including ‘The Pile‘, an AI training dataset compiled by EleutherAI.

    After pushback from rightsholders and anti-piracy outfits, Books3 was taken offline over copyright concerns. However, for many of the companies that allegedly trained their AI models on it, there are still some legal repercussions to sort out.

    Authors Sue NVIDIA for Copyright Infringement

    On Friday, American authors Abdi Nazemian , Brian Keene , and Stewart O’Nan joined the barrage of legal action with a copyright infringement lawsuit against NVIDIA. The company, whose market cap exceeds $2 trillion, is mostly known for its GPUs and related software and services, but also has its own AI models.

    In a concise class action complaint, filed at a California federal court, the authors allege that NVIDIA used the Books3 dataset to train its NeMo Megatron language models. The models are hosted on Hugging Face where it states that they are trained on EleutherAI’s ‘The Pile’ dataset, which includes the pirated books.

    nvidia

    Putting two and two together, the plaintiffs conclude that NVIDIA’s models were trained on pirated books, including theirs, without their permission.

    “NVIDIA has admitted training its NeMo Megatron models on a copy of The Pile dataset. Therefore, NVIDIA necessarily also trained its NeMo Megatron models on a copy of Books3, because Books3 is part of The Pile,” the complaint reads.

    “Certain books written by Plaintiffs are part of Books3 — including the Infringed Works — and thus NVIDIA necessarily trained its NeMo Megatron models on one or more copies of the Infringed Works, thereby directly infringing the copyrights of the Plaintiffs.”

    Direct Infringement Damages

    Relying on the same logic, the authors accuse the company of direct copyright infringement, noting that NVIDIA copied their books to use them for AI training purposes. Through the lawsuit, the rightsholders demand compensation in the form of actual or statutory damages.

    The class action lawsuit includes three authors thus far, but more may be added to the case as it progresses. NVIDIA has yet to respond to the allegations but in light of similar cases, it will likely oppose the claims and/or argue a fair-use defense.

    Last month, OpenAI managed to ‘defeat’ several copyright infringement claims from book authors in a somewhat related “Books3” lawsuit. However, the California federal court didn’t review the direct copyright infringement claims in this case, which have yet to be argued in detail at a later stage.

    A copy of the class action complaint against NVIDIA, filed by the authors in a California federal court, is available here (pdf)

    From: TF , for the latest news on copyright battles, piracy and more.

    • chevron_right

      Microsoft accused of selling AI tool that spews violent, sexual images to kids

      news.movim.eu / ArsTechnica · Wednesday, 6 March - 22:24

    Microsoft accused of selling AI tool that spews violent, sexual images to kids

    Enlarge (credit: NurPhoto / Contributor | NurPhoto )

    Microsoft's AI text-to-image generator Copilot Designer appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, warned that Microsoft has ignored warnings that the tool randomly creates violent and sexual imagery, CNBC reported .

    Jones told CNBC that he repeatedly warned Microsoft of the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. Microsoft failed to take the tool down or implement safeguards in response, Jones said, or even post disclosures to change the product's rating to mature in the Android store.

    Instead, Microsoft apparently did nothing but refer him to report the issue to OpenAI, the maker of the DALL-E model that fuels Copilot Designer's outputs.

    Read 20 remaining paragraphs | Comments

    • Sc chevron_right

      LLM Prompt Injection Worm

      news.movim.eu / Schneier · Friday, 1 March - 19:34 · 2 minutes

    Researchers have demonstrated a worm that spreads through prompt injection. Details :

    In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG) , a way for LLMs to pull in extra data from outside its system. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.

    In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.

    It’s a natural extension of prompt injection. But it’s still neat to see it actually working.

    Research paper: “ ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications .

    Abstract: In the past year, numerous companies have incorporated Generative AI (GenAI) capabilities into new and existing applications, forming interconnected Generative AI (GenAI) ecosystems consisting of semi/fully autonomous agents powered by GenAI services. While ongoing research highlighted risks associated with the GenAI layer of agents (e.g., dialog poisoning, membership inference, prompt leaking, jailbreaking), a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?

    This paper introduces Morris II , the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts . The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication), engaging in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images). The worm is tested against three different GenAI models (Gemini Pro, ChatGPT 4.0, and LLaVA), and various factors (e.g., propagation rate, replication, malicious activity) influencing the performance of the worm are evaluated.

    • Sc chevron_right

      How the “Frontier” Became the Slogan of Uncontrolled AI

      news.movim.eu / Schneier · Thursday, 29 February - 04:27 · 12 minutes

    Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

    This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. As early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation .

    America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

    That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush . But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

    Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

    America’s “Frontier” Problem

    For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

    Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay . As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

    Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “ spatial fix .”Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

    AI as a Permission Structure

    Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

    In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training their models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being set to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI such as Reinforcement Learning with Human Feedback , heralded as the most important breakthrough of ChatGPT.

    The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.

    Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon , Facebook , and Google . Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment the contents people posted to their sites.

    We can already see this kind of boundary pushing happening with AI.

    Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending , so far AI companies have faced no significant penalties for the unauthorized use of this data.

    Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

    Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder if they simply won’t be considered relevant if people break those rules using AI.

    AI and American Exceptionalism

    Like The United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power , the most data scientists , the most venture-capitalist investment , and the most AI companies . This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

    Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (leading to justifying of foreign interventions in Iraq and elsewhere), masking of severe inequity within our borders, abdication of responsibility from global treaties on climate and law enforcement , and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

    The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act , lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by lack of legislation leaves AI in the US looking like the Wild West —a largely unregulated frontier.The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings with it. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

    AI and Unbridled Growth

    The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

    That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels ? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

    Controlling Our Frontier Impulses

    None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

    We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

    We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

    More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

    And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

    This essay was written with Nathan Sanders, and was originally published in Jacobin .