Just like every new tool ever, AI is electrifyingly jacked – Newsletter 1-3

AI Jacked

Happy northern-hemisphere summer’s end! This week we dive into the ethics of AI.

AI is simultaneously a great advance and a menacing spectre; we don’t actually know what we’ve created, or what we’re creating.

However, like every tool before it, AI will do what it’s told, for better or worse: intent is all-important.

I love tools and tech, always have, and AI is truly amazing. But it’s not alive, it’s not conscious, and it certainly isn’t human. It has no intrinsic values, morals, ethics, feelings, thoughts, or desires. It just follows instructions. Behaviors arise from, and are the responsibility of, its leaders, designers, developers, and users.

Definitions

We don’t actually know what “intelligence” is, human or otherwise. Or rather, it has meant so many different things over time that it has no stable meaning. Efforts to measure intelligence objectively have never succeeded.

Also, and I haven’t seen this point made much, applying the word “intelligence” to anything artificial demeans us flesh-and-blood folks, because humans transcend their technologies. Confoundingly, there’s a weird, popular, and dangerous idea that AI is “better” or “smarter” than people.

The word “intelligence” leads us to think about the technology incorrectly. Either we need to stop using the term “AI” (unlikely), or we need to recognize that human intelligence is but one aspect of being, and that AI is derivative of it.

Alternative Questions

How are human consciousness, experience, and embodiment different from generative AI? 
Is AI more than a tool? If so, how? Some people already want to grant AI the status and legal protections of self-aware entities.
Aren’t human developers responsible for their AIs? Aren’t their intentions manifested in the systems?

Preventing Harm

Thinking ahead about consequences, mature developers (like Anthropic, purveyors of Claude) have values-based, stakeholder-focused procedures that ask, over and over: “To what end? For what purpose? How will it help? How could it harm?”

AI Tools


Like hammers, Facebook, and cookware, AI does what its developers and users intend. It is not conscious, it has no inner experience, and it won’t destroy things (or us) unless we tell it to…


Of course, we are telling it to: weaponized AI is already abundant in military applications. All the more reason to attend to purposes.


Runaway AI?


Living organisms are programmed by their DNA for self-preservation and reproduction. There are no controls or guardrails. From bacteria to Homo sapiens, we do whatever is necessary to adapt, hide, migrate, and so on, in order to survive.


AI should never do those things, either on its own or at human direction.


Stakeholder-centric Values


The trick is in the values we choose as a baseline: articulating up front what is important to us. Right now it’s a loosey-goosey free-for-all, with money playing an overwhelming role. OpenAI, for example, was just valued at roughly $300B in its latest funding round, while shady backroom AIs swindle octogenarians.


The ethics lens offers a simple solution: put stakeholder needs ahead of extreme profit, power, lying, cheating, and stealing. Pretty unrealistic on a grand scale, but conscientious folks are trying.


Published values or ethics statements help set a standard against which to measure success and failure. They tend to keep people honest, provided they’re authentic and enforceable. Marketing will say such things limit revenue and growth. Agreed: they may not lead to a billion dollars, but they’ll make the work a whole lot more fun, responsible, and sustainable.


Besides, a business that seeks quick profit by any means necessary is unlikely to declare its values publicly. And it probably isn’t reading this newsletter…


At the end of the day, AI, like technology in general, seldom causes good or evil apart from the intent behind it.


Read more in this week’s blog post:
