Ever had an “AI” show up at 2AM on an emergency call to fix a gas leak? How about an “AI” to cook a breakfast sandwich? Maybe an “AI” is taking over babysitting while you’re out of town…? No?
“AI” doesn’t do anything. But if your job primarily revolves around words or pictures on a screen, maybe “AI” can help you with that.
AI isn’t going to take anyone’s job.
We will fire a bunch of workers while delusional nepo babies try to figure out why an autocomplete bot can't think critically or do any complex tasks, then they will close their businesses or rehire people after a few years of failure, and it won't impact the owners' quality of life in any way because they have more wealth than they will ever need.
We should absolutely have a UBI that’s funded by taxing 100% of wealth over a set threshold and redistributing it perpetually.
Agreed for the most part, but I disagree about the 100% taxes thing. I think we should instead cap inheritance/gifts, not income. You can be as wealthy as you want, but once you die, it all goes back to the common pot.
I don’t care about rich people, I mostly just care about generational wealth.
I mean, those are kinda two sides of the same coin. Both ways to limit the compounding of wealth in few hands.
I’m open to all these ideas, and more
Wealth and income are two different things. We should tax wealth savagely, i.e., the ownership of assets, and we should also tax income, but to a lesser degree.
Just to level set: income refers to the flow of money earned over a period, like a salary or wages, while wealth represents the accumulated assets minus liabilities at a specific point in time.
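If it helps, here’s that distinction as a toy calculation; every number and asset name below is made up purely for illustration:

```python
# Illustrative only: every figure below is made up.

# Income is a flow: money earned over a period.
monthly_salary = 4_000                      # hypothetical wage
annual_income = monthly_salary * 12         # 48,000 over the year

# Wealth is a stock: assets minus liabilities at a point in time.
assets = {"house": 300_000, "savings": 25_000, "car": 10_000}
liabilities = {"mortgage": 220_000, "car_loan": 6_000}
net_wealth = sum(assets.values()) - sum(liabilities.values())

print(f"Annual income (flow):  {annual_income}")  # 48000
print(f"Net wealth (stock):    {net_wealth}")     # 109000
```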
I think money as we know it is already over. We don’t need AI for that. Just look at prices, wages and the economy.
- Eat the rich
- Luxury gay space communism
You joke but post scarcity anarchism is probably the only truly viable post capitalist society where the state actually has a real chance of withering away. That means good praxis is anything which reduces scarcity - both in the form of technological developments and sustainability/ecology. And yes, harm reduction measures which foster collaboration and social cohesion and create actualized humans with real agency and a real stake in their own communities.
The problem with so much leftist thought is precisely that it denies agency to those it seeks to liberate. “Luxury gay space communism” is a meme, but it’s based on a post-left idea which is actually far more rooted in reality than a lot of ML orthodoxy.
When democratic governance withers what fills the power vacuum is feudalism.
Technofeudalism is feudalism with computers.
Ironically, to create a space that selects for and protects distributed decisionmaking (the desire of most sane anarchists), you need a strong government!
Anarchism is a project. It’s not just a matter of eliminating the state. That would just result in Mad Max.
You need people to work together to meet each other’s needs. I help you because I might need help someday, too. That builds a real community. And then maybe, just maybe, we solve each other’s problems enough that the state is unnecessary.
Is it a pipe dream? Maybe. But the steps towards that are worth doing, anyway.
Of course, power dynamics can never be eliminated from interpersonal relationships (whether by breeding or by enculturation).
Instead, power can be regulated and managed, to maximize distributed decisionmaking, and to protect those decisionmakers who could not or would not protect themselves.
In a free for all, feudalism will always result. The strong and the willing will rule over the weak and the unwilling.
There have to be limits to the power dynamics. Those limits will have to be enforced to protect the vulnerable, the gullible, and the unwilling (those who have the capability to exercise power, but refuse by choice), etc. This requires advanced democratic governance with a very strong government.
Doing away with the government is just a speedrun toward technofeudalism.
Working to create a protected space that selects for distributed decisionmaking is the actual project. That’s an actually sane, worthwhile and achievable goal.
AI can’t do my job.
I’m the guy they call when the machines go down.
I fully suspect some billionaire will invent “vibe repairing”.
Followed quickly by “vibe bankruptcy proceedings”
“What then?”
“Same as it ever was!”
We all fight over resources that actually matter (like food, water, shelter and security) instead of the previous thing (money), for the enjoyment of our overlords.
Seriously, the people who have the power to change the outcome of the future seem to either straight-up not be planning for this future scenario, or be planning for a horribly dystopian version of it.
Oh, they’ve planned for it. They have their billionaire bunkers. Bezos has three that we know of.
Same as it ever was… which was that money wasn’t needed.
Do you need money within your neighborhood or your family? Do you pay people for doing you a favor?
If an AI puts you out of work it should have to pay your salary.
Or more likely it was a shitty job that shouldn’t have been done by a human in the first place.
Hell, even just like three quarters of your salary would be okay for most. You’d have to cut your cost of living, but you’d never have to work again.
There will still be money, we just won’t have any. The rich will have armies of robots and watch us all starve to death.
Oh no, money will keep being money. We will just never see a penny and finally be doomed to be full slaves. As intended by the system and those that designed it.
But the billionaires won’t need us as slaves once they have their fleets of robots.
You can always repurpose an asset and use it in another way, or resell it. They won’t need us for what we do today, but maybe they’ll get a liking for human flesh afterwards. Always useful.
The current tech/IT sector is heavily relying on and riding hype trains. It’s a bit like the fashion industry that way. But this AI hype so far has only been somewhat useful.
Current general LLMs are decent for prototyping or example output to jump-start you into the general direction of your destination, but their output always needs supervision and most often it needs fixing. If you apply unreliable and constantly changing AI to everything and completely throw out humans just because it’s cheaper, you’ll get vastly inferior results. You’ll probably get faster results, but they will have tons of errors, which introduces tons of extra problems you never had before. I can see AI fully replacing some jobs in specific areas where errors don’t matter much. But that’s about it. For all other jobs or purposes, AI will be an extra tool, nothing more, nothing less.
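To put that workflow in code: a minimal sketch of “AI drafts, a human reviews, nothing ships unreviewed.” The functions here are placeholders I made up, not any vendor’s actual API:

```python
# Rough sketch of "AI drafts, a human reviews, nothing ships unreviewed".
# generate_draft() and human_review() are placeholders, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False
    notes: str = ""

def generate_draft(prompt: str) -> Draft:
    # Stand-in for whatever LLM call you would actually make.
    return Draft(text=f"[model output for: {prompt}]")

def human_review(draft: Draft) -> Draft:
    # Stand-in for a person checking facts, fixing errors, and signing off.
    draft.notes = "checked facts, fixed two errors"
    draft.approved = True
    return draft

def produce(prompt: str) -> str:
    draft = generate_draft(prompt)   # fast, but unreliable on its own
    reviewed = human_review(draft)   # the step you can't throw out
    if not reviewed.approved:
        raise RuntimeError("refusing to publish unreviewed model output")
    return reviewed.text

print(produce("summarize the incident report"))
```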
AI has its uses within specific domains, when trained only on domain-specific and truthful data. You know, things like AlphaZero or AlphaGo. Or AIs revealing previously unknown methods to reach the same goal. But these general AIs like ChatGPT, which are trained on basically the whole web with all the crap in it… they’re never going to be truly great. And they’re also getting worse over time, i.e. not improving much at all, because the web will be even fuller of AI-generated crap in the future, and the AIs slurp up all that crap too. The training data gets muddier over time. The promise of AIs getting ever more powerful as time goes on is just a marketing lie. There’s most likely a saturation curve, and we’re probably very close to saturation already, where it won’t really get any better. You could already see this by comparing the jump from GPT-3 to GPT-4 (big) and then GPT-4 to GPT-5 (much smaller). Or take a look at FSD cars. Also not really happening, unless you like crashes. Of course, the companies want to keep the illusion rolling, so they’ll always claim the next big revolution is just around the corner. They profit from investments and monthly paying customers, and as long as they can keep that illusion up and profit from it, they don’t even need to fulfill any more promises.
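For anyone unfamiliar with the term, a “saturation curve” just means diminishing returns. Here’s a toy illustration with invented numbers (not benchmark data) showing each successive jump shrinking:

```python
import math

def capability(effort: float, ceiling: float = 100.0,
               midpoint: float = 1.0, steepness: float = 1.5) -> float:
    """Toy logistic (saturation) curve; parameters are invented, not fitted."""
    return ceiling / (1.0 + math.exp(-steepness * (effort - midpoint)))

# Five evenly spaced "generations" of effort/compute/data:
levels = [1, 2, 3, 4, 5]
scores = [capability(e) for e in levels]
jumps = [round(b - a, 1) for a, b in zip(scores, scores[1:])]
print(jumps)  # [31.8, 13.5, 3.6, 0.9] -- each jump smaller than the last
```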
Current general LLMs are decent for prototyping or example output to jump-start you into the general direction of your destination, but their output always needs supervision and most often it needs fixing.
This.
LLMs do not produce anything that can be relied upon confidently without human review, and after the bubble pops, that’s only going to become more true.
Hell, I’m glad the first time I ever used it, it gave me a ~~bugged~~ hallucinated and false reply. I asked it to give me a summary of the 2023 Super Bowl and learned that Patrick Mahomes kicked a field goal to win the game.
If.