AI as a New Moral Compass

OpenAI invests in research on algorithms for understanding morality

AI

November 25, 2024

OpenAI has awarded a grant to researchers at Duke University to study algorithms capable of predicting human moral judgments, according to TechCrunch, citing documents filed with the U.S. Internal Revenue Service.

The project aims to teach AI to resolve ethical conflicts

The initiative, known as "AI-Morality Research," will run until 2025. Its leader, Professor Walter Sinnott-Armstrong, declined to comment on interim results. However, he previously co-authored a book with his colleague Jana Borg on the potential of artificial intelligence to serve as a "moral GPS" that helps people make ethical decisions.

The researchers have prior experience developing algorithms that help decide who should receive donor organs. They have also studied the scenarios in which people are willing to entrust complex decisions to AI.

OpenAI’s goal is to train algorithms to “predict human moral judgments” in complex medical, legal, and business conflicts.
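For readers curious what "predicting human moral judgments" might look like in practice, the minimal sketch below frames it as supervised text classification: dilemma descriptions paired with the judgments people most commonly endorse. The dilemma texts, labels, and model choice are hypothetical illustrations and are not drawn from the Duke project.

```python
# Illustrative sketch only: one possible framing of moral-judgment prediction
# as supervised text classification. All data and modeling choices here are
# hypothetical and do not reflect the Duke/OpenAI research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: short dilemma descriptions and the judgment
# a majority of surveyed people endorsed.
dilemmas = [
    "A hospital gives the last donor kidney to the patient most likely to survive.",
    "A lawyer withholds evidence that would exonerate the opposing party.",
    "A company delays a product recall to protect quarterly earnings.",
    "A doctor breaks confidentiality to warn a patient's family of a credible threat.",
]
judgments = ["acceptable", "unacceptable", "unacceptable", "acceptable"]

# Simple baseline: TF-IDF features fed into logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(dilemmas, judgments)

# Predict the majority judgment for a new, unseen scenario.
new_case = ["A manager hires a relative over a better-qualified applicant."]
print(model.predict(new_case))
```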

Sam Altman's company is also preparing to launch an AI agent, codenamed "Operator," which could become its next breakthrough in artificial intelligence.