Also known as snooggums on midwest.social and kbin.social.

  • 0 Posts
  • 22 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • the university or the government does not project these things and adjust the available program sizes

    They kinda do, but only the part where they increase program sizes after demand already exists and wind them down once the market is saturated. They can't really plan far ahead when they don't know whether something will be in demand, and they don't want to tell students to avoid a program they offer just because there are too many graduates. Add the four or five years it takes to graduate and you get a system that lags behind reality even if the planning was better.

    But a well-designed post-secondary education means graduates can go into similar or related fields; they aren't limited to what's on their diploma except in their own minds.


  • Personally I think that claim is a decent one: user-initiated requests should not be subject to robot limitations, and are not the source of DDoS attacks on web sites.

    They are one of the sources!

    The scraping AI does when a user enters a prompt is DDoSing sites on top of the scraping for training data that is already DDoSing them. These shitty companies slam the same sites over and over in the least efficient way possible because they don't reuse the data they scraped for training when a user prompt triggers a web search.

    Scraping once extensively and scraping a bit less but far more frequently have similar impacts.
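
    To make the re-fetching point concrete, here's a minimal sketch (Python; the cache TTL and `fetch` helper are hypothetical illustrations, not anyone's actual implementation) of the kind of reuse that gets skipped when every prompt re-downloads the page instead of serving an earlier copy:

    ```python
    import time
    import urllib.request

    # Hypothetical illustration: a tiny cache so a page fetched once
    # is not re-downloaded for every new prompt that mentions it.
    _cache: dict[str, tuple[float, str]] = {}
    CACHE_TTL = 3600  # assumption: reuse a copy for an hour

    def fetch(url: str) -> str:
        # Plain HTTP GET; stands in for whatever the scraper actually does.
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def get_page(url: str) -> str:
        now = time.time()
        if url in _cache and now - _cache[url][0] < CACHE_TTL:
            return _cache[url][1]  # reuse: no extra load on the site
        body = fetch(url)          # per-prompt scraping hits the origin every time
        _cache[url] = (now, body)
        return body
    ```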

  • But a user-initiated operation isn't the same as a bot.

    Oh fuck off with that AI company propaganda.

    The AI companies already overwhelmed sites to get training data and are repeating their shitty scraping practices when users interact with their AI. It’s the same fucking thing.

    Search engine web crawlers don't scrape pages every time a user searches, the way AI tools do. Both crawlers and scrapers are bots; whether a human initiates the operation or it runs on a schedule matters less than the fact that the two behave very differently, and only one of them respects robots.txt.
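
    For reference, Python's standard library even ships a robots.txt parser; a minimal sketch of the check a well-behaved crawler performs before each fetch (the site URL and user-agent string here are placeholders):

    ```python
    from urllib import robotparser

    # Parse the site's robots.txt the way a polite crawler would.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder site
    rp.read()

    # A well-behaved bot runs this check before every fetch;
    # the scrapers being complained about simply skip it.
    if rp.can_fetch("ExampleBot/1.0", "https://example.com/some/page"):
        print("allowed to fetch")
    else:
        print("disallowed, skip this URL")
    ```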