• FauxLiving@lemmy.world
    4 days ago

    The point of the article is that there is a difference between a bot which is just scraping data as fast as possible and a user requesting information for their own use.

    Cloudflare doesn’t distinguish these things. It would be like Cloudflare blocking your browser because it was automatically fetching JavaScript from multiple sources in order to render the page you navigated to.

    I’m sure you can imagine how annoyed you would be with Cloudflare if you had to solve four CAPTCHAs to load a single web page or, as here, had a page fail to load elements you requested because Cloudflare thinks fetching JavaScript or pre-caching links is the same as web crawler activity.

    • r00ty@kbin.life
      4 days ago

      Yes, but my point is that I cannot tell the difference. If they can convince Cloudflare that they deserve special treatment and an exemption, then they can probably get it.

      I would argue that whether there is a difference “depends”, though. There are two problems as I see it, and they are only potentially not guilty of one of them.

      The first problem is that AI crawlers are a genuine DDoS, and I think this is the main reason most of us (myself included) do not want them. They cause performance issues by essentially speedrunning the collection of every unique piece of data on your site. If they are dynamic, as the article says, then they are potentially not doing this; I cannot say for sure.

      The second problem is that many sites are monetized through ad revenue or otherwise motivated by actual organic traffic. In this case, I would bet some money that this company is taking the data from these sites, providing neither ad revenue nor organic traffic, and serving it to the querying user with its own ads included. In that case, this is also very, very bad.

      So their beef is only potentially partially valid. Like I say, if they can convince Cloudflare, and people like me, to add exceptions for them, then great. So far, though, I’m not convinced. AI scrapers have a bad reputation in general, and it’s deserved. They need to do a LOT to escape that stigma.

      • FauxLiving@lemmy.world
        4 days ago

        This isn’t about AI crawlers. This is about users using AI tools.

        There’s a massive difference in server load between a user summarizing one page from your site and a bot trying to hit every page simultaneously.

        “The second problem is that many sites are monetized through ad revenue or otherwise motivated by actual organic traffic.”

        Should Cloudflare block users who use ad block extensions in their browser now?

        The point of the article is that Cloudflare is blocking legitimate traffic, created by individual humans, by classifying that traffic as bot traffic.

        Bot traffic is blocked because it creates outsized server load; user-created traffic doesn’t do that.

        People use Cloudflare to protect their sites against bot traffic so that human users can access the site without it being DDoSed. By classifying user-generated traffic and scraper-generated traffic as the same thing, Cloudflare is misclassifying traffic and blocking human users from accessing websites.

        Websites are not able to opt out of this classification scheme. If they want to use Cloudflare for bot protection, they also have to accept that users using AI tools cannot access their sites, even if the website owner wants to allow it. Cloudflare is blocking legitimate traffic and not letting its customers opt out of this scheme.

        It should be pretty easy to understand how a website owner would be upset if their users couldn’t access their website.

        • r00ty@kbin.life
          4 days ago

          And their “AI tool” looks just like the hundreds of AI scraping bots. And I’ve already said the answer is easy: they need to differentiate themselves enough to convince Cloudflare to make an exception for them.

          Until then, they’re “just another AI company scraping data”.

          • FauxLiving@lemmy.world
            4 days ago

            Well, Cloudflare is adding the ability to whitelist Perplexity and other AI sources to the control panel (default: on).

            Looks like they differentiated themselves enough.

            • r00ty@kbin.life
              4 days ago

              That option is likely to be only for paid accounts. Freebie users like me have to write our own anti-bot WAF rules or, as I do, just put every page I expect a user to visit behind a managed challenge. Adding exceptions uses up precious space in those rules, which I’ve already used for exceptions for genuine instance-to-instance traffic.
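
              Roughly, such a rule amounts to something like the sketch below. This is only an illustration: the zone ID, API token, federation paths and user-agent check are placeholders, and the expression syntax is from memory, so double-check it against Cloudflare’s Rulesets API docs before relying on it.

              ```python
              # Rough sketch: push a single "managed challenge anything that isn't
              # federation traffic or a verified bot" custom rule to a zone via
              # Cloudflare's Rulesets API. Zone ID, token, paths and user-agent
              # string below are placeholders, not a real configuration.
              import requests

              ZONE_ID = "your-zone-id"        # placeholder
              API_TOKEN = "your-api-token"    # placeholder; needs zone WAF edit rights

              # Expression in Cloudflare's rule language: challenge requests that are
              # not from a verified bot and do not look like instance-to-instance
              # (ActivityPub) traffic. The paths and user-agent match are examples.
              EXPRESSION = (
                  'not cf.client.bot '
                  'and not http.request.uri.path contains "/inbox" '
                  'and not http.request.uri.path contains "/.well-known/" '
                  'and not http.user_agent contains "Lemmy"'
              )

              payload = {
                  "rules": [
                      {
                          "action": "managed_challenge",
                          "expression": EXPRESSION,
                          "description": "Challenge likely scrapers, skip federation traffic",
                          "enabled": True,
                      }
                  ]
              }

              # The http_request_firewall_custom phase holds a zone's custom WAF rules.
              # Note: PUT on the phase entrypoint replaces the existing rules in it.
              resp = requests.put(
                  f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
                  "/rulesets/phases/http_request_firewall_custom/entrypoint",
                  headers={"Authorization": f"Bearer {API_TOKEN}"},
                  json=payload,
              )
              resp.raise_for_status()
              print(resp.json())
              ```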

              But I am glad they were able to convince Cloudflare. Good for them.

    • pressanykeynow@lemmy.world
      3 days ago

      “Cloudflare doesn’t distinguish these things.”

      It does.

      You just set a user agent like “AI bot request initiated by user”, and website owners will decide for themselves whether to allow your traffic or not.
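
      A rough sketch of what that can look like on the client side (the agent name and contact URL here are made-up examples, just to show the idea):

      ```python
      # Sketch of a user-initiated AI fetcher that identifies itself honestly
      # and respects robots.txt. The agent name and URL are made-up examples.
      import urllib.robotparser

      import requests

      USER_AGENT = (
          "ExampleAI-UserFetch/1.0 "
          "(+https://example.com/bot; on-demand, user-initiated)"
      )

      def fetch_for_user(url: str) -> str | None:
          """Fetch one page on a user's behalf, declaring who we are so the
          site owner can decide whether to allow or block this traffic."""
          robots = urllib.robotparser.RobotFileParser()
          robots.set_url(requests.compat.urljoin(url, "/robots.txt"))
          robots.read()
          if not robots.can_fetch(USER_AGENT, url):
              return None  # the site owner said no; respect that

          resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=15)
          resp.raise_for_status()
          return resp.text
      ```

      Because the string says plainly what the traffic is, blocking it (or not) becomes the site owner’s call rather than Cloudflare’s guess.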

      If your bot pretends to not be a bot, it should be blocked.

      Edit: Btw, OpenAI does this.