AI will be key to addressing social, economic, and ecological challenges at a global scale. However, its limitations must also be acknowledged.
AI & Cities: Risks, Applications and Governance, a report published by the United Nations Human Settlements Programme (UN-Habitat) in collaboration with the Mila-Quebec Artificial Intelligence Institute, points to some of these risks. “In order for an algorithm to reason, it must gain an understanding of its environment,” the authors write. “This understanding is provided by the data. Whatever assumptions and biases are represented in the dataset will be reproduced in how the algorithm reasons and what output it produces.”
As noted earlier, AI turns human-defined goals into mathematical ones. But if those goals rest on existing preconceptions, the data collected to serve them will reproduce and reinforce those assumptions.
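This reproduction of bias can be made concrete with a minimal, hypothetical sketch. The data below is invented for illustration: historical records in which one group was favored. A naive model that simply learns the per-group rate from that data carries the skew straight into its predictions.

```python
from collections import defaultdict

# Hypothetical historical records: (group, positive_outcome).
# The data encodes a past bias: group "A" received far more positive outcomes.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# A naive "model" that learns the outcome rate per group from the data.
outcomes = defaultdict(list)
for group, positive in history:
    outcomes[group].append(positive)
learned = {g: sum(v) / len(v) for g, v in outcomes.items()}

# The model faithfully reproduces the bias in its training data:
# it favors group A and disfavors group B, regardless of any individual merit.
def predict(group):
    return learned[group] >= 0.5

print(predict("A"), predict("B"))  # True False
```

Nothing in the algorithm is malicious; it optimizes exactly as instructed, and the skew in the dataset becomes the skew in its output.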
AI also falls short in evaluating its own performance. As the UN-Habitat report notes, “While it may be tempting to see algorithms as neutral ‘thinkers,’ they are neither neutral nor thinkers.” AI has no grasp of wider context, so it can only produce results that satisfy its pre-defined optimization goals, which may conflict with broader considerations or, worse, serve a misleading agenda.
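The mismatch between a pre-defined objective and wider considerations can also be sketched in a few lines. The scenario and numbers below are hypothetical: a ranking system told to maximize predicted clicks, with accuracy deliberately left out of its objective.

```python
# Hypothetical content items: (title, predicted_clicks, accuracy).
articles = [
    ("Shocking claim!", 0.9, 0.2),
    ("Careful analysis", 0.4, 0.95),
    ("Balanced report", 0.5, 0.9),
]

# The optimizer sees only its pre-defined goal: predicted clicks.
# Accuracy sits outside the objective, so it cannot influence the ranking.
ranked = sorted(articles, key=lambda a: a[1], reverse=True)
print(ranked[0][0])  # Shocking claim!
```

The system performs flawlessly by its own measure while undermining the wider goal, because the wider goal was never part of the mathematics it was given.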
AI systems are mathematical and cannot integrate nuance. As a result, subjective, qualitative information is sometimes excluded from, or underrepresented in, their findings.