      Google Risks Devouring the Entire Web

      x843k · Monday, 14 August 2023 - 07:58 · 6 minutes

    #Google has already cannibalized the web. What worries me, in particular, is the manipulative power of these tools: people no longer read the content on websites but settle for the search engine's answers. #ai

    Source: www.editorialedomani.it · "Google Risks Devouring the Entire Web" · Andrea Daniele Signorelli · 8-10 minutes

    The new experimental search engine, based on generative artificial intelligence, uses content published on websites to produce complete answers to users' searches, creating a vicious cycle that risks cannibalizing the internet.

    Until a few years ago, Google had a single task: to direct users to the web content that, according to the algorithm governing the search engine, had the best chance of answering their queries.

    From a certain point of view, Google was an altruistic tool: aside from ads and sponsored results, its work not only made users' lives easier (as they would otherwise have been lost in the sea of the web) but also allowed online publications to receive a substantial portion (if not the majority) of their traffic.

    Over time, things began to change, taking a leap in 2018 with the latest version of "featured snippets." Through these previews, Google no longer just shows the classic links but places, at the top of the page, a box containing the excerpt from a web page most relevant to our search.

    A few lines that, in many cases - match results, song lyrics, dates of historical events, and other brief pieces of information - are enough to satisfy the user's request, so there is no longer any need to click on the link provided.

    A machine of answers

    For example, if you search for "how fast does a cheetah run," you won't need to click on the first link that appears (which redirects to the Focus.it website) to receive the desired information: it will be enough to read the snippet, which states that the maximum speed of the cheetah is 112 kilometers per hour.

    If you search for the lyrics of any song, they will be displayed in full in the preview, eliminating the need to click through to the site that originally published them.

    As experts noted at the time, with this crucial change Google had (at least in part) ceased to be a mere search engine and was becoming, more and more, a "machine of answers." This evolution, however, inevitably damaged online publications (which, to varying degrees, create content precisely in order to be found on Google).

    Quantifying the damage precisely is impossible: the growth or decline of traffic tied to SEO (search engine optimization, the practice of optimizing content to achieve the best possible position on Google) depends on a myriad of factors, making it impossible to isolate any single one.

    Either SEO or death

    What is certain, though, is that despite the changes, Google has continued to be a fundamental traffic driver for all the most important websites in the world.

    Even a site like The New York Times receives - according to data from the analytics firm Semrush - about 25 percent of its traffic from Google. That share jumps to 65 percent (and almost 90 percent once all other search engines are added) for sites like WikiHow, which are designed specifically to answer the questions users ask Google (the same applies to the far more authoritative Wikipedia).

    Many Italian publications also rely on SEO to gain traffic and generate advertising revenue: from Salvatore Aranzulla's tech-advice website to sites offering recipes, household tips, Christmas gift ideas, answers to all sorts of trivia questions, and more.

    These are often professional operations generating significant profits (Aranzulla's website brought in 3.8 million euros in 2021), but their traffic risks being decimated by Google's increasingly evident ambition to become the only site people need when looking for information.

    Zero clicks

    Things are now getting even more complicated. Last May, Alphabet (Google's parent company) introduced the beta version of the Search Generative Experience (SGE). Like ChatGPT and other large language models (machine-learning systems capable of generating text of all kinds), Google's experimental search engine can rework the mass of information on the websites it accesses to generate text that matches our searches.

    For instance, if we search for information on the best budget computers, SGE uses material from various specialized sites to generate the entire result (the same goes for biographies of historical figures, economic information, video game reviews, and all sorts of trivia), showing only a few of the links it drew on, tucked away in a corner.
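    To make the mechanism concrete, here is a minimal, self-contained sketch (in Python) of the retrieve-then-generate pattern that systems like SGE are broadly described as following. Everything in it - the tiny corpus, the word-overlap ranking, and a "generation" step that merely stitches the retrieved text together - is an illustrative stand-in, not Google's actual pipeline:

        # Toy sketch of an SGE-style "answer machine": retrieve pages,
        # compose an answer from their text, relegate the links to a footnote.
        # Corpus, ranking, and the "generation" step are illustrative stand-ins.
        corpus = {
            "focus.it/cheetah": "The cheetah can run at a top speed of about 112 km/h.",
            "example.org/lions": "Lions can reach roughly 80 km/h in short bursts.",
            "example.org/cars": "A budget city car typically tops out near 160 km/h.",
        }

        def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
            """Rank pages by naive word overlap with the query (toy retrieval)."""
            words = set(query.lower().split())
            ranked = sorted(
                corpus.items(),
                key=lambda item: -len(words & set(item[1].lower().split())),
            )
            return ranked[:k]

        def answer(query: str) -> str:
            """Build a direct answer from retrieved pages, citing only a few links."""
            sources = retrieve(query)
            summary = " ".join(text for _, text in sources)  # stand-in for an LLM rewrite
            links = ", ".join(url for url, _ in sources)
            return summary + "\n(sources: " + links + ")"

        print(answer("how fast does a cheetah run"))

    What the sketch makes visible is structural: the answer is assembled entirely from the publishers' text, while the links that fed it are reduced to a footnote the user has little reason to click.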

    All of this inevitably means that the vast majority of users will no longer click on any link, limiting themselves to the text generated by the search engine. "Google's goal is to provide 'zero-click' search, which uses the information from publications and authors who spend time and effort creating content, without offering them any benefit," TechRaptor's CEO, Rutledge Daugette, told CNBC.

    Once the current limits on the reliability of its answers are overcome, the path that began with previews will reach its culmination: Google will no longer direct users to other sites but will become the only portal they need to consult. In doing so, however, Google risks creating a vicious cycle. More than one, in fact.

    A miniature Internet

    Let's take this in order. As Daugette points out, all the articles that SGE reworks to produce its text were originally created by online publications, often precisely in order to receive traffic from Google.

    If search engines start cannibalizing others' work to produce, entirely on their own, the texts shown to users, the online publications used as sources will in turn have less incentive to create new content.

    Google is aware of all this, to the extent that it stated through a spokesperson that it will continue to "prioritize approaches that generate valuable traffic for a wide range of creators, supporting the health of the open web."

    The problem, however, as Justin Pot wrote in The Atlantic, is that "SGE's very premise necessarily implies more content on Google and less traffic sent to websites. [All this] could lead us to a smaller version of the internet, with fewer sites, less content, and consequently a worse experience for everyone."

    Vicious cycles

    And so we come to the first vicious cycle: if fewer sites publish content because they are no longer economically incentivized to do so, where will SGE find new material to generate its answers? As Barry Diller, chairman of the IAC media group, has suggested, online publications could also react by preventing Google from collecting their material from the web unless it pays fair compensation.
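    The blunt instrument publishers already have for such a refusal is the robots.txt exclusion protocol. Here is a minimal sketch, under the assumption that a publication wants to remain indexed for ordinary search while withholding its content from generative-AI crawlers; the AI-specific user-agent tokens shown (OpenAI's GPTBot, and Google-Extended for Google's AI systems) are ones those companies have documented, though honoring robots.txt is always voluntary on the crawler's side:

        # robots.txt - stay in ordinary search, refuse generative-AI crawlers.
        # Compliance is voluntary: a crawler must choose to honor these rules.

        User-agent: Googlebot
        Allow: /

        User-agent: GPTBot
        Disallow: /

        User-agent: Google-Extended
        Disallow: /

    This is arguably the mechanism Diller's proposal presupposes: opting out is technically trivial, but it only becomes bargaining power - material in exchange for compensation - if many publishers do it at once.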

    However, all this could further benefit content created by those who are not interested in receiving traffic but only in increasing their online visibility.

    This is the case with articles written by so-called "content writers," writers employed by companies to promote their business. Most content with titles like "the best alternatives to Airbnb" is, in fact, written by competitors of the well-known short-term rental platform, exploiting SEO rules for promotional purposes (and therefore not paying much attention to the quality and accuracy of the information).

    Moreover, tools like ChatGPT are increasingly used to churn out vast amounts of such material, leading to another vicious cycle: editorial content written by one artificial intelligence is used by another (such as SGE) to produce further texts, which in turn circulate on the web and become the source of yet more artificially generated articles.

    A constant cannibalization that risks inundating the internet with content that is increasingly uniform, less reliable, and of ever more dubious quality. Is this truly the future of the web?