Besides telling people to eat rocks and cook with gasoline, AI is ripe for deceitful applications. Will you walk away from fools and their money if AI can grab it for you?
It has no remorse, no innate conscience: it’s still just a tool, and it does what it’s told (or taught) to do.
Like all tools, it will be used for good and ill. The distinction emerges in hindsight, in the consequences of the ethics that guided its purpose, development, and guardrails. Human-focused AI, designed to provide access and unbiased guidance for everyone, developed with usability in mind, and built with careful mitigation of potential harms, is likely to have a positive impact.
Even a system that operates legally can behave unethically if it is designed solely to maximize profit, working against the interests of both its customers and its vendors.
Google and Amazon tread a fine line: their recommendations are profit-oriented, and they manipulate customers towards high-margin items. Algorithms present advertised and high-margin items first, which drives revenue from both sales and advertising. Sellers who meet the algorithm’s criteria see huge volume, while those on page 2 see almost none.
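To make the incentive concrete, here is a minimal sketch of a profit-weighted ranking. The scoring scheme, field names, and weights are all hypothetical illustrations, not any retailer’s actual algorithm: relevance to the shopper is just one term, and margin and ad spend tilt the ordering.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float   # how well the item matches the query (0-1)
    margin: float      # seller's profit margin (0-1)
    ad_bid: float      # normalized advertising spend (0-1)

def rank(listings, margin_weight=0.5, ad_weight=0.5):
    """Rank by relevance plus profit-motivated boosts.

    A purely customer-focused ranking would sort by relevance alone;
    the extra weights push high-margin and advertised items upward.
    """
    def score(item):
        return item.relevance + margin_weight * item.margin + ad_weight * item.ad_bid
    return sorted(listings, key=score, reverse=True)

catalog = [
    Listing("best match, low margin",     relevance=0.9, margin=0.1, ad_bid=0.0),
    Listing("decent match, high margin",  relevance=0.6, margin=0.8, ad_bid=0.0),
    Listing("weak match, advertised",     relevance=0.4, margin=0.5, ad_bid=0.9),
]

for item in rank(catalog):
    print(item.name)
```

With these made-up numbers, the weakest match for the shopper lands on top because its ad bid and margin outscore the relevance gap; setting both weights to zero restores the customer-first ordering. That single knob is the "fine line" in question.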
For example, algorithms are increasingly used to deny insurance claims, collect payments, and influence elections. AI is happy to discuss your claim with you, anticipate your objections, and offer various payment plans. It will automatically escalate pressure if you are not responsive enough. If you’d like to speak with a representative, your wait time will be… 96 minutes.
AI intensifies established methods: supercharged upselling, personalized product placement, and dubious, profit-motivated chatbots.
What happens when AI, directed malevolently, provides seemingly trustworthy financial planning services, “advising” the uninformed about investments and their retirement?
Interesting. I have a client that is using AI to analyze how an individual invests: taking input to determine investment priorities and interests, and making suggestions that align with those interests and the expected returns on investment. It uses an LLM to analyze both interests and potential growth opportunities. One caution I see is the need to provide transparency into the process. The system needs to be up-front about how it is doing what it is doing.
Thanks Robert! Transparency in AI systems is both necessary and difficult to implement; the system may not give the same answer twice, for example. You are exactly right about being up front: if your learning model conforms closely to your ethics, then those policies are likely to get codified into how the system actually works. Transparency comes when you can show that your AI has applied your policies and principles effectively and consistently… “you can trust us.” Failure can mean brand compromise in an instant (think CrowdStrike).