
      Stability AI plans to let artists opt out of Stable Diffusion 3 image training

      news.movim.eu / ArsTechnica · Thursday, 15 December, 2022 - 22:42 · 1 minute

    An AI-generated image of a person leaving a building, thus opting out of the vertical blinds convention. (credit: Ars Technica)

    On Wednesday, Stability AI announced it would allow artists to remove their work from the training dataset for an upcoming Stable Diffusion 3.0 release. The move comes after an artist advocacy group called Spawning tweeted that Stability AI would honor opt-out requests collected on its Have I Been Trained website. However, the details of how the plan will be implemented remain incomplete and unclear.

    As a brief recap, Stable Diffusion, an AI image synthesis model, gained its ability to generate images by "learning" from a large dataset of images scraped from the Internet without consulting any rights holders for permission. Some artists are upset because Stable Diffusion can generate images that potentially rival the work of human artists in unlimited quantities. We've been following the ethical debate since Stable Diffusion's public launch in August 2022.

    To understand how the Stable Diffusion 3 opt-out system is supposed to work, we created an account on Have I Been Trained and uploaded an image of the Atari Pong arcade flyer (which we do not own). After the site's search engine found matches in the Large-scale Artificial Intelligence Open Network (LAION) image database, we right-clicked several thumbnails individually and selected "Opt-Out This Image" from a pop-up menu.
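    The site's search runs over the LAION index, and a comparable lookup can be done programmatically. Below is a minimal sketch using the open source clip-retrieval client, under the assumption that the public knn.laion.ai endpoint is still serving the laion5B-L-14 index as it did in late 2022; the site's own backend may differ.

        # Minimal sketch: querying the LAION-5B CLIP retrieval index, the same kind
        # of search Have I Been Trained performs. Assumes `pip install clip-retrieval`.
        from clip_retrieval.clip_client import ClipClient

        client = ClipClient(
            url="https://knn.laion.ai/knn-service",  # public LAION backend (assumed still reachable)
            indice_name="laion5B-L-14",              # CLIP ViT-L/14 index over LAION-5B
            num_images=10,                           # nearest neighbors to return
        )

        # Text search: find LAION entries whose CLIP embedding is close to this caption.
        for r in client.query(text="Atari Pong arcade flyer"):
            # Each match carries the source URL, the scraped caption, and a similarity score.
            print(f"{r['similarity']:.3f}  {r['url']}  {r['caption'][:60]}")

    Each returned entry is a URL and caption pair; LAION distributes metadata about images rather than the images themselves.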



      Artist finds private medical record photos in popular AI training data set

      news.movim.eu / ArsTechnica · Wednesday, 21 September, 2022 - 15:43 · 1 minute

    Censored medical images found in the LAION-5B data set used to train AI. The black bars and distortion have been added. (credit: Ars Technica)

    Late last week, a California-based AI artist who goes by the name Lapine discovered private medical record photos taken by her doctor in 2013 referenced in the LAION-5B image set, which is a scrape of publicly available images on the web. AI researchers download a subset of that data to train AI image synthesis models such as Stable Diffusion and Google Imagen.
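    LAION-5B itself is distributed as metadata, roughly five billion URL and caption pairs, so "downloading a subset" means fetching the referenced images. Here is a hedged sketch using the open source img2dataset tool; "laion_subset.parquet" is a hypothetical local file holding a slice of that metadata, with column names matching the ones LAION's releases use.

        # Sketch: fetching the images behind a LAION metadata shard with img2dataset
        # (pip install img2dataset). "laion_subset.parquet" is a hypothetical filename.
        from img2dataset import download

        download(
            url_list="laion_subset.parquet",  # hypothetical local metadata shard
            input_format="parquet",
            url_col="URL",                    # column names from the LAION releases
            caption_col="TEXT",
            output_folder="laion_images",
            output_format="webdataset",       # tar shards, convenient for training pipelines
            image_size=256,                   # resize while downloading
            processes_count=8,
            thread_count=32,
        )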

    Lapine discovered her medical photos on a site called Have I Been Trained that lets artists see if their work is in the LAION-5B data set. Instead of doing a text search on the site, Lapine uploaded a recent photo of herself using the site's reverse image search feature. She was surprised to discover a set of two before-and-after medical photos of her face, which had only been authorized for private use by her doctor, as reflected in an authorization form Lapine tweeted and also provided to Ars.
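    Reverse image search works by embedding the uploaded photo with CLIP and looking for its nearest neighbors in the index, which is how a personal photo can surface even when no caption names the person. The same clip-retrieval client sketched above accepts an image query; "recent_photo.jpg" is a hypothetical local file, and this approximates rather than reproduces the site's internal pipeline.

        # Sketch: reverse image search against the LAION-5B index. The client embeds
        # the local file with CLIP and returns the closest LAION entries.
        from clip_retrieval.clip_client import ClipClient

        client = ClipClient(
            url="https://knn.laion.ai/knn-service",  # same public endpoint as above (assumed)
            indice_name="laion5B-L-14",
            num_images=5,
        )

        for m in client.query(image="recent_photo.jpg"):  # hypothetical local photo
            print(f"{m['similarity']:.3f}  {m['url']}")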

    Lapine has a genetic condition called Dyskeratosis Congenita. "It affects everything from my skin to my bones and teeth," Lapine told Ars Technica in an interview. "In 2013, I underwent a small set of procedures to restore facial contours after having been through so many rounds of mouth and jaw surgeries. These pictures are from my last set of procedures with this surgeon."
