      Paper: Stable Diffusion “memorizes” some images, sparking privacy concerns

      news.movim.eu / ArsTechnica · Wednesday, 1 February, 2023 - 18:37 · 1 minute

    An image from Stable Diffusion’s training set (left) compared to a similar Stable Diffusion generation (right) when prompted with "Ann Graham Lotz." (credit: Carlini et al., 2023)

    On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. The results challenge the view that image synthesis models do not memorize their training data and that training data might remain private if it is not disclosed.

    Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Anything that adds fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

    However, Carlini's results are not as clear-cut as they may first appear. Discovering instances of memorization in Stable Diffusion required 175 million image generations for testing and preexisting knowledge of which images were in the training set. The researchers extracted only 94 direct matches and 109 perceptual near-matches out of the 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario.
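
    To put those counts in perspective, the back-of-the-envelope arithmetic below reproduces the quoted rate from the figures in the paragraph above. It is only an illustrative Python sketch using the numbers as reported; the variable names are ours, not the researchers'.

        # Illustrative arithmetic only -- counts taken from the reporting above,
        # not from the researchers' own code or data.
        direct_matches = 94          # near-pixel-perfect extractions
        near_matches = 109           # perceptual near-duplicates
        candidates_tested = 350_000  # high-probability-of-memorization prompts

        direct_rate = direct_matches / candidates_tested
        combined_rate = (direct_matches + near_matches) / candidates_tested

        print(f"direct-match rate:      {direct_rate:.4%}")    # ~0.0269%, i.e. roughly 0.03 percent
        print(f"including near matches: {combined_rate:.4%}")  # ~0.0580%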


      New Go-playing trick defeats world-class Go AI—but loses to human amateurs

      news.movim.eu / ArsTechnica · Monday, 7 November, 2022 - 19:43 · 1 minute

    Go pieces and a rulebook on a Go board. (credit: Getty Images)

    In the world of deep-learning AI, the ancient board game Go looms large. Until 2016, the best human Go player could still defeat the strongest Go-playing AI. That changed with DeepMind's AlphaGo, which used deep-learning neural networks to teach itself the game at a level humans cannot match. More recently, KataGo has become popular as an open source Go-playing AI that can beat top-ranking human Go players.

    Last week, a group of AI researchers published a paper outlining a method to defeat KataGo by using adversarial techniques that take advantage of KataGo's blind spots. By playing unexpected moves outside of KataGo's training set, a much weaker adversarial Go-playing program (that amateur humans can defeat) can trick KataGo into losing.

    To wrap our minds around this achievement and its implications, we spoke to one of the paper's co-authors, Adam Gleave, a Ph.D. candidate at UC Berkeley. Gleave (along with co-authors Tony Wang, Nora Belrose, Tom Tseng, Joseph Miller, Michael D. Dennis, Yawen Duan, Viktor Pogrebniak, Sergey Levine, and Stuart Russell) developed what AI researchers call an "adversarial policy." In this case, the researchers' policy uses a mixture of a neural network and a tree-search method (called Monte-Carlo Tree Search) to find Go moves.
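
    For readers unfamiliar with the tree-search half of that combination, the sketch below shows the core loop of a generic Monte-Carlo Tree Search (the UCT variant). It is a simplified illustration, not KataGo's engine or the researchers' adversarial policy, and the GameState interface it assumes (legal_moves, play, is_terminal, current_player, result) is a hypothetical stand-in for a real Go implementation.

        # Generic Monte-Carlo Tree Search (UCT) sketch -- illustrative only.
        # Assumes a hypothetical GameState with: legal_moves(), play(move) -> new
        # state, is_terminal(), current_player(), and result(player) returning
        # 1.0 for a win, 0.5 for a draw, and 0.0 for a loss.
        import math
        import random

        class Node:
            """A search-tree node; `player` is whoever made `move` to reach it."""
            def __init__(self, legal_moves, parent=None, move=None, player=None):
                self.parent, self.move, self.player = parent, move, player
                self.children = []
                self.untried = list(legal_moves)
                self.visits, self.wins = 0, 0.0

            def uct_child(self, c=1.4):
                # Pick the child with the best average payoff plus an
                # exploration bonus that shrinks as the child is visited more.
                return max(self.children,
                           key=lambda ch: ch.wins / ch.visits
                           + c * math.sqrt(math.log(self.visits) / ch.visits))

        def mcts_move(root_state, iterations=1000):
            root = Node(root_state.legal_moves())
            for _ in range(iterations):
                node, state = root, root_state
                # 1. Selection: walk down fully expanded nodes via UCT.
                while not node.untried and node.children:
                    node = node.uct_child()
                    state = state.play(node.move)
                # 2. Expansion: try one previously unexplored move.
                if node.untried:
                    move = node.untried.pop(random.randrange(len(node.untried)))
                    mover = state.current_player()
                    state = state.play(move)
                    child = Node(state.legal_moves(), parent=node,
                                 move=move, player=mover)
                    node.children.append(child)
                    node = child
                # 3. Simulation: finish the game with random moves.
                while not state.is_terminal():
                    state = state.play(random.choice(list(state.legal_moves())))
                # 4. Backpropagation: score each node for the player who
                #    moved into it, then climb back to the root.
                while node is not None:
                    node.visits += 1
                    if node.player is not None:
                        node.wins += state.result(node.player)
                    node = node.parent
            # Play the most-visited move from the root.
            return max(root.children, key=lambda ch: ch.visits).move

    In AlphaZero-style programs such as KataGo, the random rollout and the generic exploration term are replaced by a neural network's value and policy estimates; the sketch above keeps the plain random-rollout version for brevity.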
