Cooking Up Science


Applied science offers reliable repeatability, like a cooking recipe refined iteratively over time, or getting your phone settings right. Good ethical practices are similar: keep improving and adapting with as much mindfulness as you can muster.

Like human habits, for good or ill, computers perform repetitively, doing things over and over. For example, software systems "learn" from errors, testing hypotheses against data in the cloud, under human direction, and making adjustments. Like making ratatouille over and over until it's really good.
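That learn-from-errors loop can be sketched in a few lines. This is a toy illustration only (the `refine` function and its numbers are invented for this example, not any real system): guess, measure the error against a target, adjust, repeat.

```python
# A minimal sketch of iterative refinement: nudge one parameter
# (think: minutes in the oven) toward a target by repeatedly
# measuring the error and adjusting -- a toy stand-in for how
# software systems "learn" from errors.

def refine(target, guess, rate=0.5, rounds=20):
    for _ in range(rounds):
        error = target - guess   # test the current guess against the data
        guess += rate * error    # make an adjustment proportional to the error
    return guess

best = refine(target=42.0, guess=10.0)
print(round(best, 2))  # converges toward 42.0
```

Each pass shrinks the remaining error, the same way repeated batches of ratatouille converge on a good recipe.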

Since information systems drive repeatability at warp speed, we begin to see apparent "artificial intelligence" — useful language models like GPT. However, they mimic "cognition" based on information on the web. The technology is neither sentient nor an AI "singularity."

Instead it is an enabler, and like all tools before it, it will be used for good and ill on a continuum. Without deliberately imposed human values, AI contains no moral dictums, no social guidelines — no constraints, like the computer run amok in 2001: A Space Odyssey, TRON, or an early episode of Star Trek.

Ethics and morals, by contrast, are about people, about relief from inevitable suffering, and helping each other. Without explicit imposition, AI’s output simply parrots information from humanity’s output — the good, the bad and the ugly.

AI is neither magic nor divine; it has no "personhood," and only its human enablers are (vaguely) accountable for its harms, like a drone strike. Similarly, responsibility for agents, bots, systems, AIs, etc. falls to the people and groups that create them. People can be held accountable; systems, like corporations, are harder to pin down.

If you leave your casserole in the oven an extra two minutes this time, maybe it’ll come out fluffier. It’s not the oven’s fault if you forget about it and it turns into carbon.
