AI is already being used for political deception. Will it walk away from fools and their money?
It’s incredibly useful. It created the image above. It answers my questions, seems to have a personality, and gives the appearance of consciousness — our finest golem ever. And yet, it’s still just a tool.
Like all tools, it will be used for good and ill. The distinction emerges in hindsight, when we see the consequences of the ethics that guided its purpose, development, and guardrails. Human-focused AI, designed to give everyone access and unbiased guidance, developed with usability in mind, and built with careful mitigation of potential harms, is likely to have a positive impact.
A system designed solely to maximize profit is not.
Google’s and Amazon’s recommendations are profit-oriented: their algorithms present advertised and high-margin items first, driving revenue from both sales margins and ad fees. Sellers who meet the algorithm’s criteria see huge volume, while those relegated to page 2 see almost none.
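To make that concrete, here is a minimal sketch of a profit-weighted ranker. The weights, fields, and numbers are my own illustrative assumptions, not any retailer’s actual formula; the point is that once margin and ad spend enter the score, the best match for the shopper no longer wins.

```python
# Hypothetical sketch of a profit-weighted ranker. The weights and fields
# are illustrative assumptions, not any real retailer's algorithm.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    relevance: float   # how well the item matches the query, 0..1
    margin: float      # profit margin on a sale, 0..1
    ad_bid: float      # what the seller pays for placement, 0..1

def rank(items: list[Item], w_rel=0.4, w_margin=0.35, w_ad=0.25) -> list[Item]:
    """Sort items by a blended score in which profit terms outweigh relevance."""
    score = lambda i: w_rel * i.relevance + w_margin * i.margin + w_ad * i.ad_bid
    return sorted(items, key=score, reverse=True)

items = [
    Item("best match, low margin", relevance=0.95, margin=0.05, ad_bid=0.0),
    Item("sponsored, high margin", relevance=0.60, margin=0.40, ad_bid=0.90),
]
for item in rank(items):
    print(item.name)
# The sponsored, high-margin item ranks first despite being a worse match.
```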
Algorithms are increasingly used to deny insurance claims. AI will discuss the matter with you, anticipate your objections, and offer you alternatives. It will automatically escalate pressure if you are not responsive enough.
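What might that escalation look like in practice? A hypothetical sketch, with thresholds and messages that are purely my assumptions rather than any insurer’s actual system:

```python
# Hypothetical sketch of an automated claims-pressure loop.
# The thresholds and messages are illustrative assumptions only.
ESCALATION_LADDER = [
    "Your claim is under review; a settlement offer is available now.",
    "This offer expires soon. Accept to avoid further delays.",
    "Final notice: declining may result in denial of your claim.",
]

def next_message(days_unresponsive: int) -> str:
    """Escalate the tone the longer the claimant fails to respond."""
    step = min(days_unresponsive // 7, len(ESCALATION_LADDER) - 1)
    return ESCALATION_LADDER[step]

for days in (0, 10, 30):
    print(days, "->", next_message(days))
```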
AI intensifies established methods: supercharged upselling, personalized product placement, and dubious, profit-motivated chatbots.
What happens as AI enters the financial markets and “advises” the uninformed?
Interesting. I have a client that is using AI to analyze how an individual invests: taking input to determine investment priorities and interests, and making suggestions that align with those interests and the recognized returns on investment. It uses an LLM to analyze both interests and potential growth opportunities. One caution I see is the need to provide transparency into the process. The system needs to be up-front about how it is doing what it is doing.
Thanks Robert! Transparency in AI systems is both needed and difficult to implement — the system may not give the same answer twice, for example. You are exactly right about being up front, and if your learning model conforms closely to your ethics, those policies are likely to get codified into how the system actually works. Transparency comes when you can show that your AI has applied your policies and principles effectively and consistently… “you can trust us.” Failure can mean brand compromise in an instant — think CrowdStrike.