“Try Gemini for free!”
“AI is now part of your subscription!”
“Our AI will help you write like a pro!”
…then: “You have used your monthly AI credits. Click here to upgrade.”
Add another $14.99 to your list of monthlies.
It’s very cool. I use it a lot, and it’s part of my daily life, but always, and by definition, it’s artificial. Think of students learning to write prompts instead of essays, and output that rehashes ideas without consciousness, creativity, or feeling. Good for research and analysis, bad for creative writing and problem-solving.
Recognizing that most public AI comes from profit-driven companies, and remembering that the Facebook user is the product they sell, how much do you want your virtual assistant/chatbot/therapist to know about you? “Tell me your problems and needs — I’m here to help you live your best life!”… while Google/X/Amazon/Meta, etc.,* quietly scrape your data to build a detailed personality profile and “help offer you the goods and services you need.”
Sorry, but it’s bullshit: marketing for the sake of sales and quarterly profits. And like 23 and Me, your data may get sold. At day’s end it’s still artificial, like canned soup.
Anthropic, maker of Claude, is an exception to extraction-for-profit, at least for now. The company touts “Privacy-first AI that helps you create in confidence” and “We treat AI safety as a systematic science.” This is rare stakeholder focus from a big for-profit company — explicitly doing well by doing good. Hopefully they will sustain that orientation when finances get tight.
Why do we go along with this usurious extraction, watching the show, voyeuristically trying to replace experience with artificial supplements? Like highly processed foods, tech — and now AI — seduces you, triggering dopamine much like sugar and carbs. AI wants you to consume it, to enjoy it, and to seek more. You get hooked, like all humans, on the thing that makes you feel good, safe, and understood. You become addicted, which makes Facebook morally equivalent to big tobacco.
And just when extraction capitalism has you hooked, like a casino, the fees kick in: used-up monthly AI credits and starter AI packages, like room and meal comps. “Once they’re reeled in, they happily pay a monthly fee.”
The limited advertising you see during AI chatbot sessions is an investment on the provider’s part. Many also recognize that it would be creepy to overtly advertise merchandise based on what people have said about their mothers. That deliberate self-restraint will dissolve in a hail of advertising as users become increasingly complacent, following the evolution of the web over the last 30 years: Idealism ➡️ Adoption ➡️ Massive public content contribution ➡️ Monetization of personal data ➡️ Loss of privacy to advertising. All this has given us the internet we have now: content completely obscured by ads and pop-ups, loaded with third-party cookies (or worse), and a queasy feeling that value is being extracted from me.
“Enhancing current best practices,” AI-harvested data is not about instantaneous display ads — it’s about profiling individual people for long-term gain, gathering data on you that will ultimately prove extremely valuable. Like credit scores on steroids.
Sadly, there will be no near-term US regulation of AI. So we must build a moral framework to define and declare the value we place on human well-being over the seduction of tech, explicitly placing stakeholder needs over profit, creativity over regurgitation. How do we set the stage to use tech to strengthen the social contract, uncorrupted by self-interest? Is there a practical political path to such an outcome?
Barring that, the profit motive will prevail, because there are no meaningful controls on AI technology, its applications, or the outrageous future profits to be made at the user’s expense.
As the Beatles put it (sort of), “Will AI walk away from a fool and their money?”
* See an April 2025 Scientific American article comparing big tech to big tobacco.