Social Media is Bad For You, and AI Will Be Too

“Try Gemini for free!”
“AI is now part of your subscription!”
“Our AI will help you write like a pro!”

…until “You have used your monthly AI credits. Click here to upgrade.”

Recognizing that all public AI has been released by profit-driven companies, how much do you want your virtual assistant/chatbot/therapist to know about you? “Tell it your problems and needs; it’s here to help you live your best life!”… while Google/X/Amazon/Meta, etc.,* quietly scrape your data to build a detailed personality profile and “help offer you the goods and services you need.”

It’s all bullshit: marketing for the sake of sales and quarterly profits. And, like 23andMe, your data may get sold.

Anthropic, maker of Claude, is an exception, at least for now. The company touts “Privacy-first AI that helps you create in confidence” and “We treat AI safety as a systematic science.” This is rare stakeholder focus from a big for-profit company: explicitly doing well by doing good. Hopefully it will sustain that orientation when finances get tight.

Why do we go along with this usurious extraction, watching the show and voyeuristically trying to replace experience with artificial supplements? Like highly processed food, tech, and now AI, seduces you, triggering dopamine much like sugar and carbs. AI wants you to consume it, to enjoy it, and to seek more. Like all humans, you get hooked on the thing that makes you feel good, safe, and understood. You become addicted, which makes Facebook morally equivalent to Big Tobacco.

And just when extraction capitalism has you hooked, the fees and losses kick in, as at a casino. Used-up monthly AI credits and starter AI packages are the room and meal comps. “Once they’re reeled in, they happily pay a monthly fee.”

The limited advertising you see during AI chatbot sessions is a rare instance of business recognizing that it would be creepy to overtly sell things based on what people have said about their mothers. That self-restraint will likely dissolve in a hail of advertising as users become increasingly complacent, following the evolution of the web over the last 30 years: Idealism → Implementation → Massive public participation and contribution → Advertising → Now: content almost completely obscured by ads and pop-ups, and loaded with third-party cookies, or worse.

Data harvesting in AI is not about immediate display ads; it’s about profiling individual people for long-term gain. As new ways to sell present themselves, all that data on you will prove extremely valuable.

Sadly, the liberal enterprise, long a bulwark against bad behavior, is compromised. There will be no near-term US controls, education, or regulation on commercial AI. Instead, we must define and declare the value we place on human well-being over the seduction of tech, stakeholders over profit, creativity over regurgitation. How do we set the stage to use tech to strengthen the social contract, rather than letting it be corrupted by self-interest? Is there a practical political path to such an outcome?

Barring that, the profit motive will prevail, because there are no meaningful controls on AI technology, its application, or outrageous profit from it.

As the Beatles put it (sort of), “Will AI walk away from a fool and their money?”

* See an April 2025 Scientific American article comparing Big Tech to Big Tobacco.
