A Critical Juncture - Point 1: Soon, there will be no jobs.
This is part of a special series written by Board Member Charles Pellicane on AI Ethics. We will release these articles on Fridays.
It’s true. Jobs are going to disappear. It’s not going to be tomorrow. It’s not going to be next week or next month or next year. But, and get used to me saying this, it is inevitable. Right now, someone is working hard at designing artificial intelligence to eliminate your job. And someone else is working hard at designing artificial intelligence that can eliminate their job. And so on and so forth on down the line. Instead of turtles all the way down, it’s coders cannibalizing their own employment, like a line of progressively larger fish, each about to munch down on the smaller one in its mouth. A fractal of efficiency or a black hole of better profits, take your pick.
Companies are now advertising their AI services as replacements for people, including AI for HR functions. AI hiring AI. AI firing AI. Where does that leave you and me? Not in a good place. In a capitalistic system built on competition, we cannot compete. AI can work 24/7, 365, with no religious holidays, sick kids, or dying relatives.
We used to think certain jobs would be safe from AI. We also used to think computers would never beat a human at chess or Go, but Deep Blue beat Garry Kasparov and Google DeepMind’s AlphaGo beat Fan Hui. Like passengers on a sinking ship, some thought we could swim to the lifeboats of creative tasks or interpersonal work, but that is wrong. We will be replaced by more empathetic machines and more creative computers. There is no example of work that a machine will not eventually be able to do in place of a human.
Marketing? AI for the copy, AI for the content, AI for the strategy. Manufacturing? Robots, assembly lines, and sensors. Janitor? Build a robot. Sculptor? Just a good enough 3D printer that can handle any type of material. Truck driver? Autonomous vehicles. CEO? A smart enough computer running the right decision and leadership algorithms through thousands of case studies to always select the most advantageous choice for the business. Author? Please, just feed AI a topic and watch it write. Singer? Hologram superstars with AI voices never get tired or sore or scratchy or…
You get the point. And I know, there are still glitches. AI hallucinates and struggles to count the r’s in “strawberry” (although it seems that was recently fixed), among other problems, but that’s today. The first airplane was made of canvas and wood and bicycle parts. Only 120 years later, we have sent people to the Moon and rovers to Mars while millions of people fly daily. Where is AI going to be 120 years from now? Again, we will engineer, improve, and maximize our way to no work for humans. It. Is. Inevitable.
If you are struggling to imagine a world where AI does every task, start with any single task. Now imagine that a computer, robot, or machine has been specifically designed to handle it and has performed that task countless times while being improved upon constantly. Even with uniquely human traits like empathy, if the profit is there, someone will look to design the human element out of the work. AI therapists already exist, and while they may not be any good now, there will come a time when they are.
Even problems with AI will be fixable by AI. One can imagine software fixing other software, all without human intervention. Here are just a few examples of AI tools, marketed as of early 2025, that aim to eliminate human workers:
Microsoft’s Sales Agent, Salesforce’s Agentforce, Meta’s Customer Service Agents, Copy.ai, Synthesia, Gong.io, Intercom, Ada, Motion, Fathom, Otter.ai, Deel, Xero AI, Krisp, Fireflies, etc.
Reviewing this list, it’s evident that even if these products are making promises they can’t currently keep, enough resources, research, and development will find a way to profit from this. The next industrial revolution is the removal of humans from the process entirely. AI augmenting humans is just an evolutionary step towards replacement. After all, human error is so common we have a name for it; why not eliminate even the possibility? Wouldn’t that be more efficient and therefore more profitable? That is, of course, the capitalist response: remove the inefficiency and increase profits regardless of the externalities. Which brings me to Point 2: Capitalism is a snake that will eat its own tail.