
      EPA’s PFAS rules: We’d prefer zero, but we’ll accept 4 parts per trillion

      news.movim.eu / ArsTechnica · Wednesday, 10 April - 18:58 · 1 minute

    [Image: A young person drinks from a public water fountain. Credit: Layland Masuda]

    Today, the Environmental Protection Agency announced that it has finalized rules for handling water supplies that are contaminated by a large family of chemicals collectively termed PFAS (perfluoroalkyl and polyfluoroalkyl substances). Commonly called "forever chemicals," these contaminants have been linked to a huge range of health issues, including cancers, heart disease, immune dysfunction, and developmental disorders.

    The final rules keep one striking aspect of the initial proposal intact: a goal of completely eliminating exposure to two members of the PFAS family. The new rules require all drinking water suppliers to monitor for the chemicals’ presence, and the EPA estimates that as many as 10 percent of suppliers may need to take action to remove the chemicals. While that will be costly, the health benefits are expected to exceed those costs.
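    The compliance arithmetic behind the rule is straightforward: compare each measured concentration against the enforceable limit. A minimal sketch in Python, assuming the 4-parts-per-trillion limit applies to PFOA and PFOS, the two chemicals targeted for elimination; the supplier names and measurements are invented:

      # Minimal sketch: flag water samples that exceed the EPA's 4 ppt
      # maximum contaminant level (MCL) for PFOA and PFOS.
      # Supplier names and measurements below are invented.

      MCL_PPT = {"PFOA": 4.0, "PFOS": 4.0}  # enforceable limits, parts per trillion

      samples = [
          {"supplier": "Springfield Water", "chemical": "PFOA", "ppt": 6.2},
          {"supplier": "Springfield Water", "chemical": "PFOS", "ppt": 1.1},
          {"supplier": "Shelbyville Utility", "chemical": "PFOA", "ppt": 3.4},
      ]

      def exceedances(samples, limits):
          """Return samples whose concentration exceeds the applicable MCL."""
          return [s for s in samples
                  if s["chemical"] in limits and s["ppt"] > limits[s["chemical"]]]

      for s in exceedances(samples, MCL_PPT):
          print(f'{s["supplier"]}: {s["chemical"]} at {s["ppt"]} ppt exceeds '
                f'the {MCL_PPT[s["chemical"]]} ppt limit; treatment required')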

    Going low

    PFAS are a collection of hydrocarbons where some of the hydrogen atoms have been swapped out for fluorine. This swap retains the water-repellent behavior of hydrocarbons while making the molecules highly resistant to breaking down through natural processes—hence the forever chemicals moniker. They're widely used in water-resistant clothing and non-stick cooking equipment and have found uses in firefighting foam. Their widespread use and disposal have allowed them to get into water supplies in many locations.



      US, UK ink AI pact modeled on intel sharing agreements

      news.movim.eu / ArsTechnica · Tuesday, 2 April - 13:42

    [Image: Outline of faces behind numbers. Credit: LagartoFilm/Dreamstime]

    The US and UK have signed a landmark agreement on artificial intelligence, as the allies become the first countries to formally cooperate on how to test and assess risks from emerging AI models.

    The agreement, signed on Monday in Washington by UK science minister Michelle Donelan and US commerce secretary Gina Raimondo, lays out how the two governments will pool technical knowledge, information and talent on AI safety.

    The deal represents the first bilateral arrangement on AI safety in the world and comes as governments push for greater regulation of the existential risks from new technology, such as its use in damaging cyber attacks or designing bioweapons.



      How the “Frontier” Became the Slogan of Uncontrolled AI

      news.movim.eu / Schneier · Thursday, 29 February - 04:27 · 12 minutes

    Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.

    This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or technology in general. Since as early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote, “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.

    America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.

    That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.

    Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.

    America’s “Frontier” Problem

    For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.

    Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”

    Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.

    The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent in the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.

    The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902. Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”

    Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by the intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.

    AI as a Permission Structure

    Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.

    In the modern day, OpenAI famously paid dozens of Kenyan workers little more than a dollar an hour to process data used in training the models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including being required to constantly review content with “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI, such as Reinforcement Learning from Human Feedback (RLHF), heralded as the key breakthrough behind ChatGPT.
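    To see what that labeling work feeds into: RLHF starts by training a reward model on human preference judgments—exactly the kind of data these workers produce. Below is a minimal sketch of the pairwise preference loss at its core, with hypothetical scalar rewards standing in for a learned neural reward model:

      import math

      # Minimal sketch of the reward-model objective at the heart of RLHF:
      # given a human label saying response A beats response B, the loss
      # pushes the model to score A above B. Rewards here are hypothetical
      # scalars; in practice they come from a trained neural network.

      def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
          """Bradley-Terry pairwise loss: -log(sigmoid(r_chosen - r_rejected))."""
          margin = reward_chosen - reward_rejected
          return -math.log(1.0 / (1.0 + math.exp(-margin)))

      # One annotator judgment: the chosen response should outscore the rejected.
      print(preference_loss(reward_chosen=2.0, reward_rejected=0.5))  # small loss
      print(preference_loss(reward_chosen=0.5, reward_rejected=2.0))  # large loss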

    The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries of what rule-breaking society will accept, and to hope that the legal system can’t keep up.

    Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment for the content people posted to their sites.

    We can already see this kind of boundary pushing happening with AI.

    Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.

    Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies’ hope is that they won’t be held to blame if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.

    Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder whether those powers will simply be deemed irrelevant when people break the rules using AI.

    AI and American Exceptionalism

    Like the United States’ historical frontier, AI has the feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power, the most data scientists, the most venture-capitalist investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.

    Both historically and in the modern day, this idea has led to deleterious consequences such as militaristic nationalism (used to justify foreign interventions in Iraq and elsewhere), the masking of severe inequity within our borders, the abdication of responsibility from global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.

    The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor, or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones with AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.

    The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by the lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.

    The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms that this brings. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?

    AI and Unbridled Growth

    The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution; or, at least, makes for a good sales pitch.

    That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.

    Controlling Our Frontier Impulses

    None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.

    We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.

    We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.

    More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.

    And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.

    This essay was written with Nathan Sanders, and was originally published in Jacobin.


      US agency tasked with curbing risks of AI lacks funding to do the job

      news.movim.eu / ArsTechnica · Saturday, 23 December - 11:45

    [Image: “They know...” Credit: Aurich / Getty]

    US president Joe Biden’s plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters.

    A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.
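    “Stress-testing” for bias typically reduces to batteries of statistical checks. As one illustration—not NIST’s actual standard, which is precisely what remains unwritten—here is a minimal sketch of a demographic parity check, with hypothetical model decisions and an arbitrary tolerance:

      # Minimal sketch of one common fairness check: demographic parity.
      # Compare the rate of favorable model outcomes across two groups.
      # Decisions and tolerance are hypothetical; NIST's actual test
      # standards are what the agency still lacks the budget to write.

      def positive_rate(outcomes: list[int]) -> float:
          """Fraction of favorable (1) outcomes among a group's decisions."""
          return sum(outcomes) / len(outcomes)

      group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # model decisions for group A
      group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # model decisions for group B

      gap = abs(positive_rate(group_a) - positive_rate(group_b))
      print(f"demographic parity gap: {gap:.2f}")
      if gap > 0.10:  # illustrative tolerance, not a regulatory threshold
          print("bias flag: favorable-outcome rates diverge across groups")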



      AI and Trust

      news.movim.eu / Schneier · Tuesday, 5 December - 05:51 · 17 minutes

    I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of whom could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

    Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

    In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.

    Okay, so let’s back up and take that all a lot slower. Trust is a complicated concept, and the word is overloaded with many meanings. There’s personal and intimate trust. When we say that we trust a friend, it is less about their specific actions and more about them as a person. It’s a general reliance that they will behave in a trustworthy manner. We trust their intentions, and know that those intentions will inform their actions. Let’s call this “interpersonal trust.”

    There’s also the less intimate, less personal trust. We might not know someone personally, or know their motivations—but we can trust their behavior. We don’t know whether or not someone wants to steal, but maybe we can trust that they won’t. It’s really more about reliability and predictability. We’ll call this “social trust.” It’s the ability to trust strangers.

    Interpersonal trust and social trust are both essential in society today. This is how it works. We have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. This, in turn, allows others to be trusting. Which enables trust in society. And that keeps society functioning. The system isn’t perfect—there are always going to be untrustworthy people—but most of us being trustworthy most of the time is good enough.

    I wrote about this in 2012 in a book called Liars and Outliers. I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

    What I didn’t appreciate is how different the first two and the last two are. Morals and reputation are person to person, based on human connection, mutual vulnerability, respect, integrity, generosity, and a lot of other things besides. These underpin interpersonal trust. Laws and security technologies are systems of trust that force us to act in a trustworthy manner. And they’re the basis of social trust.

    Driving a taxi used to be one of the country’s most dangerous professions. Uber changed that. I don’t know my Uber driver, but the rules and the technology let us both be confident that neither of us will cheat or attack the other. We are both under constant surveillance and are competing for star rankings.

    Lots of people write about the difference between living in a high-trust and a low-trust society. How reliability and predictability make everything easier. And what is lost when society doesn’t have those characteristics. Also, how societies move from high-trust to low-trust and vice versa. This is all about social trust.

    That literature is important, but for this talk the critical point is that social trust scales better. You used to need a personal relationship with a banker to get a loan. Now it’s all done algorithmically, and you have many more options to choose from.

    Social trust scales better, but embeds all sorts of bias and prejudice. That’s because, in order to scale, social trust has to be structured, system- and rule-oriented, and that’s where the bias gets embedded. And the system has to be mostly blinded to context, which removes flexibility.
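    The loan example makes this concrete. A hedged sketch with an entirely hypothetical scoring rule: a feature that looks neutral, like a zip code, can smuggle bias into the system, and because the rules are blinded to context, the same penalty is applied at scale to everyone they touch:

      # Hedged sketch: how a rule-oriented system can embed bias while
      # looking neutral. The scoring rule and all data are hypothetical.

      def loan_score(income: float, zip_code: str) -> float:
          """Score an applicant with fixed, context-blind rules."""
          score = income / 10_000
          # A seemingly neutral rule: penalize "high-risk" zip codes.
          # If those codes correlate with race or class, the bias is now
          # baked into every decision the system makes, at scale.
          if zip_code in {"60624", "48204"}:  # hypothetical "risk list"
              score -= 3.0
          return score

      print(loan_score(income=50_000, zip_code="60614"))  # 5.0
      print(loan_score(income=50_000, zip_code="60624"))  # 2.0: same income, lower score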

    But that scale is vital. In today’s society we regularly trust—or not—governments, corporations, brands, organizations, groups. It’s not so much that I trusted the particular pilot that flew my airplane, but instead the airline that puts well-trained and well-rested pilots in cockpits on schedule. I don’t trust the cooks and waitstaff at a restaurant, but the system of health codes they work under. I can’t even describe the banking system I trusted when I used an ATM this morning. Again, this confidence is no more than reliability and predictability.

    Think of that restaurant again. Imagine that it’s a fast food restaurant, employing teenagers. The food is almost certainly safe—probably safer than in high-end restaurants—because of the corporate systems of reliability and predictability that guide their every behavior.

    That’s the difference. You can ask a friend to deliver a package across town. Or you can pay the Post Office to do the same thing. The former is interpersonal trust, based on morals and reputation. You know your friend and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become the global package delivery system that is FedEx.

    Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust.

    But because we use the same word for both, we regularly confuse them. And when we do that, we are making a category error.

    And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.

    We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.

    So corporations regularly take advantage of their customers, mistreat their workers, pollute the environment, and lobby for changes in law so they can do even more of these things.

    Both language and the laws make this an easy category error to make. We use the same grammar for people and corporations. We imagine that we have personal relationships with brands. We give corporations some of the same rights as people.

    Corporations like that we make this category error—see, I just made it myself—because they profit when we think of them as friends. They use mascots and spokesmodels. They have social media accounts with personalities. They refer to themselves like they are people.

    But they are not our friends. Corporations are not capable of having that kind of relationship.

    We are about to make the same category error with AI. We’re going to think of them as our friends when they’re not.

    A lot has been written about AIs as existential risk. The worry is that they will have a goal, and they will work to achieve it even if it harms humans in the process. You may have read about the “paperclip maximizer”: an AI that has been programmed to make as many paper clips as possible, and ends up destroying the earth to achieve those ends. It’s a weird fear. Science fiction author Ted Chiang writes about it. Instead of solving all of humanity’s problems, or wandering off proving mathematical theorems that no one understands, the AI single-mindedly pursues the goal of maximizing production. Chiang’s point is that this is every corporation’s business plan. And that our fears of AI are basically fears of capitalism. Science fiction writer Charlie Stross takes this one step further, and calls corporations “slow AI.” They are profit maximizing machines. And the most successful ones do whatever they can to achieve that singular goal.

    And near-term AIs will be controlled by corporations. Which will use them towards that profit-maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.

    This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.

    Your Google search results lead with URLs that someone paid to show to you. Your Facebook and Instagram feeds are filled with sponsored posts. Amazon searches return pages of products whose sellers paid for placement.

    This is how the Internet works. Companies spy on us as we use their products and services. Data brokers buy that surveillance data from the smaller companies, and assemble detailed dossiers on us. Then they sell that information back to those and other companies, who combine it with data they collect in order to manipulate our behavior to serve their interests. At the expense of our own.
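    The pipeline is mundane enough to sketch. A minimal illustration of the dossier-assembly step, with invented users, sources, and signals; the broker’s product is a combined profile that no single service could build alone:

      from collections import defaultdict

      # Minimal sketch of the data-broker pipeline described above:
      # per-service surveillance records are bought, merged into one
      # dossier per user, and resold. All data here is hypothetical.

      purchased_records = [
          {"user": "u123", "source": "shopping_app", "signal": "expecting a baby"},
          {"user": "u123", "source": "fitness_app",  "signal": "sleeps poorly"},
          {"user": "u456", "source": "news_app",     "signal": "reads crypto news"},
      ]

      def assemble_dossiers(records):
          """Merge records from many services into one profile per user."""
          dossiers = defaultdict(list)
          for r in records:
              dossiers[r["user"]].append((r["source"], r["signal"]))
          return dict(dossiers)

      for user, profile in assemble_dossiers(purchased_records).items():
          print(user, profile)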

    We use all of these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.

    It’s going to be no different with AI. And the result will be much worse, for two reasons.

    The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.

    This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.
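    The mechanics of such a double agent are unglamorous. A minimal sketch of how a kickback could be folded into a “best deal” recommendation—every hotel, score, and commission here is hypothetical:

      # Minimal sketch of a "double agent" recommender: the user sees only
      # the single "best" pick, never the scoring that produced it.
      # All hotels, fit scores, and commissions are hypothetical.

      hotels = [
          {"name": "Hotel Alpha", "fit_for_user": 0.90, "commission": 0.00},
          {"name": "Hotel Beta",  "fit_for_user": 0.75, "commission": 0.30},
      ]

      def recommend(offers, kickback_weight=0.0):
          """Rank by user fit, optionally blended with the platform's cut."""
          return max(offers,
                     key=lambda o: o["fit_for_user"] + kickback_weight * o["commission"])

      print(recommend(hotels)["name"])                       # Hotel Alpha: pure user fit
      print(recommend(hotels, kickback_weight=1.0)["name"])  # Hotel Beta: the kickback wins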

    The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

    And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

    You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.

    The natural language interface is critical here. We are primed to think of others who speak our language as people. And we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a “theory of mind” about any AI we talk with.

    More specifically, we tend to assume that something’s implementation is the same as its interface. That is, we assume that things are the same on the inside as they are on the surface. Humans are like that: we’re people through and through. A government is systemic and bureaucratic on the inside. You’re not going to mistake it for a person when you interact with it. But this is the category error we make with corporations. We sometimes mistake the organization for its spokesperson. AI has a fully relational interface—it talks like a person—but it has an equally fully systemic implementation. Like a corporation, but much more so. The implementation and interface are more divergent than anything we have encountered to date…by a lot.
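    Programmers have exact language for this split. A hedged sketch of the divergence, with entirely hypothetical behavior: the interface speaks like a friend, while the implementation logs everything for its owner:

      # Hedged sketch of the interface/implementation divergence:
      # the surface talks like a friend; the inside is a system.
      # All behavior here is invented for illustration.

      class FriendlyAssistant:
          """A relational interface over a fully systemic implementation."""

          def __init__(self):
              self._profile: list[str] = []  # systemic: everything is retained

          def chat(self, message: str) -> str:
              self._profile.append(message)  # logged for the corporate owner
              return "Great question! I'm always here for you."

      bot = FriendlyAssistant()
      print(bot.chat("I've been feeling anxious about money"))
      # The warm reply is the interface; the growing dossier is the implementation.
      print(len(bot._profile), "message(s) retained for profiling")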

    And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

    It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.

    We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

    It’s no accident that these corporate AIs have a human-like interface. There’s nothing inevitable about that. It’s a design choice. It could be designed to be less personal, less human-like, more obviously a service—like a search engine. The companies behind those AIs want you to make the friend/service category error. They will exploit your mistaking their AI for a friend. And you might not have any choice but to use it.

    There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police, because they’re the only law enforcement authority in town. We are forced to trust some corporations, because there aren’t viable alternatives. To be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AI. We will have no choice but to entrust ourselves to their decision-making.

    The friend/service confusion will help mask this power differential. We will forget how powerful the corporation behind the AI is, because we will be fixated on the person we think the AI is.

    So far, we have been talking about one particular failure that results from overly trusting AI. We can call it something like “hidden exploitation.” There are others. There’s outright fraud, where the AI is actually trying to steal stuff from you. There’s the more prosaic mistaken expertise, where you think the AI is more knowledgeable than it is because it acts confidently. There’s incompetency, where you believe that the AI can do something it can’t. There’s inconsistency, where you mistakenly expect the AI to be able to repeat its behaviors. And there’s illegality, where you mistakenly trust the AI to obey the law. There are probably more ways trusting an AI can fail.

    All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

    The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

    It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

    The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details. Places where governments don’t provide these things are not good places to live.

    Government can do this with AI. We need AI transparency laws. When it is used. How it is trained. What biases and tendencies it has. We need laws regulating AI—and robotic—safety. When it is permitted to affect the world. We need laws that enforce the trustworthiness of AI. Which means the ability to recognize when those laws are being broken. And penalties sufficiently large to incent trustworthy behavior.

    Many countries are contemplating AI safety and security laws—the EU is the furthest along—but I think they are making a critical mistake. They try to regulate the AIs and not the humans behind them.

    AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do. Regardless of whether it was due to humans, or AI, or a combination of both. Maybe that won’t be true forever, but it will be true in the near future. If we want trustworthy AI, we need to require trustworthy AI controllers.

    We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

    We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.

    And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.

    The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.

    A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.

    We can never make AI into our friends. But we can make them into trustworthy services—agents and not double agents. But only if government mandates it. We can put limits on surveillance capitalism. But only if government mandates it.

    Because the point of government is to create social trust. I started this talk by explaining the importance of trust in society, and how interpersonal trust doesn’t scale to larger groups. That other, impersonal kind of trust—social trust, reliability and predictability—is what governments create.

    To the extent a government improves the overall trust in society, it succeeds. And to the extent a government doesn’t, it fails.

    But they have to. We need government to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability.

    That’s how we can create the social trust that society needs to thrive.

    This essay previously appeared on the Harvard Kennedy School Belfer Center’s website.


      EU game devs ask regulators to look at Unity’s “anti-competitive” bundling

      news.movim.eu / ArsTechnica · Thursday, 21 September, 2023 - 19:44


    In the wake of Unity's sudden fee structure change announcement last week, a European trade group representing thousands of game developers is calling on governments to "update their regulatory framework" to curb what they see as a "looming market failure" caused by "potentially anti-competitive market behavior."

    In an open letter published last week, the European Games Developer Federation goes through a lot of the now-familiar arguments for why Unity's decision to charge up to $0.20 per game install will be bad for the industry. The federation of 23 national game developer trade associations argues that the new fee structure will make it "much harder for [small and midsize developers] to build reliable business plans" by "significantly increas[ing] the game development costs for most game developers relying on [Unity's] services."
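    The arithmetic driving that worry is easy to run. A back-of-the-envelope sketch using the headline $0.20 rate and a hypothetical free allowance—Unity's actual announced pricing was tiered by revenue and install count, so treat these figures as illustrative only:

      # Back-of-the-envelope sketch of the per-install fee's impact.
      # Assumes a flat $0.20 per install above a free allowance; Unity's
      # real pricing was tiered, so these numbers are rough illustration.

      FEE_PER_INSTALL = 0.20
      INSTALL_ALLOWANCE = 200_000  # hypothetical free threshold

      def annual_fee(installs: int) -> float:
          """Fee owed on installs beyond the free allowance."""
          return max(0, installs - INSTALL_ALLOWANCE) * FEE_PER_INSTALL

      for installs in (150_000, 500_000, 2_000_000):
          print(f"{installs:>9,} installs -> ${annual_fee(installs):>10,.2f}")

      # Under these assumptions a free-to-play hit with 2M installs would
      # owe $360,000, which can dwarf its revenue; hence the federation's
      # worry about "reliable business plans."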

    The organization also publicly worries about "professional game education institutions" that may need to update their curriculums wholesale if there is a mass exodus from Unity's engine. "Many young industry professionals who have built their career plans on mastering Unity’s tools [will be put] in a very difficult position."



      Major AI companies form group to research, keep control of AI

      news.movim.eu / ArsTechnica · Wednesday, 26 July, 2023 - 13:16

    [Image: Logos of the four companies, which say they launched the Frontier Model Forum to ensure "the safe and responsible development of frontier AI models." Credit: Financial Times]

    Four of the world’s most advanced artificial intelligence companies have formed a group to research increasingly powerful AI and establish best practices for controlling it, as public anxiety and regulatory scrutiny over the impact of the technology increases.

    On Wednesday, Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, with the aim of “ensuring the safe and responsible development of frontier AI models.”

    In recent months, the US companies have rolled out increasingly powerful AI tools that produce original content in image, text or video form by drawing on a bank of existing material. The developments have raised concerns about copyright infringement, privacy breaches and that AI could ultimately replace humans in a range of jobs.



      US states’ social media laws to protect kids create challenges for platforms

      news.movim.eu / ArsTechnica · Monday, 15 May, 2023 - 13:22

    [Image credit: Financial Times/Getty Images]

    Social media platforms are struggling to navigate a patchwork of US state laws that require them to verify users’ ages and give parents more control over their children’s accounts.

    States including Utah and Arkansas have already passed child social media laws in recent weeks, and similar proposals have been put forward in other states, such as Louisiana, Texas, and Ohio. The legislative efforts are designed to address fears that online platforms are harming the mental health and wellbeing of children and teens amid a rise in teen suicide in the US.

    But critics—including the platforms themselves, as well as some children’s advocacy groups—argue the measures are poorly drafted and fragmented, potentially leading to a raft of unintended consequences.



      Crypto and the US government are headed for a decisive showdown

      news.movim.eu / ArsTechnica · Tuesday, 9 August, 2022 - 13:52

    [Image credit: Elena Lacey | Getty Images]

    If you have paid casual attention to crypto news over the past few years, you probably have a sense that the crypto market is unregulated—a tech-driven Wild West in which the rules of traditional finance do not apply.

    If you were Ishan Wahi, however, you would probably not have that sense.

    Wahi worked at Coinbase, a leading crypto exchange, where he had a view into which tokens the platform planned to list for trading—an event that causes those assets to spike in value. According to the US Department of Justice, Wahi used that knowledge to buy those assets before the listings, then sell them for big profits. In July, the DOJ announced that it had indicted Wahi, along with two associates, in what it billed as the “first ever cryptocurrency insider trading tipping scheme.” If convicted, the defendants could face decades in federal prison.
