Artificial Intelligence
Urban AI and Ways of Knowing
8 min
AI is the talk of the hour. With systems like ChatGPT and Midjourney becoming more popular and seemingly more sophisticated, we are increasingly confronted with the many ethical questions these technologies engender.
In May 2024, OpenAI's "Sky" voice for ChatGPT made headlines for sounding strikingly similar to the voice of actor Scarlett Johansson. Sky was taken offline following complaints from the public and the actor herself. The incident highlights the importance of consent and of robust ethical guidelines.
In another case, the PIGEON (Predicting Image Geolocations) project from Stanford University, in which an AI system could identify the locations of non-geotagged photos, raised concerns among privacy experts, since a person's location is personal and sensitive information.
While highlighting and addressing the ethical implications of AI is crucial, there’s an elephant in the room: AI has an epistemological problem.
Epistemology broadly refers to ways of knowing. It concerns the theory of knowledge: what we know, how we know it, and from where we know it.
In this article, I reflect on this issue, mainly considering my department’s (ITC-PGM) research focus relating to AI, cities, and the Majority World.
I first heard about AI's epistemological issues at the workshop AI Ethics from The Majority World: Reconstructing the Global Debate through Decolonial Lenses, held in Bonn, Germany, in May 2024. Neurobiologist Dr. Melanie Cheung from New Zealand reiterated this point during her talk on decolonising AI methodologies.
For AI technologies, epistemology means the formalisation of the knowledge systems within which an AI is trained and developed. In our increasingly datafied cities, epistemology is linked to how data is produced and what is represented in that data. This has implications across various domains, such as research, governance, and policymaking. People and knowledges not represented in the data are sidelined, which creates disparity and marginalisation in urban societies.
For AI ethics in cities, epistemology matters in two ways. On the one hand, we need to be more aware of the epistemological foundations that shape urban AI development. On the other hand, we need to critically engage with how epistemology shapes our understanding of what's right and wrong in terms of urban AI.
Urban AI refers to an AI system “that incorporates data derived from the urban environment, which is then processed by algorithms”.
Urban AI is distinct from other forms of AI on three grounds. Here, I use those three characteristics to draw attention to epistemological issues with urban AI that merit the attention of geospatial specialists.
1. The urban complexities with which the AI interacts.
Cities are complex and unique. And so are the knowledges and experiences of people who live in them. In turn, these knowledges and experiences shape a city.
Shannon Mattern, in her book A City Is Not a Computer: Other Urban Intelligences, offers useful insights into these knowledges and intelligences of the city.
She explains that in the race to make cities into giant computer-like information-processing systems, disadvantaged populations and their knowledges are often structurally removed from cities' knowledge politics and memory. For example, urban areas and their dwellers may be erased in the process of digitisation because they were never part of the original database. The unique knowledges of the erased are never formalised in the urban environment within which urban AI learns and grows.
Thus, certain knowledges often do not find their way into the design and development of urban AI. What takes over instead are the knowledges of dominant social groups. In the case of urban AI, it is often Western-centric knowledge systems that guide development.
This phenomenon of urban AI sidelining local knowledges and perpetuating Western-centric knowledges reproduces colonialism in digital forms.
2. The specific policy contexts within which urban AI operates.
Urban AI is embedded within specific policy contexts. The creation of such policies is heavily influenced by the local political environment. Marginalised populations and knowledges are often inadequately represented or structurally hidden in the process of policy formulation.
Samuel Segun, a speaker at the Bonn workshop, highlighted that neo-liberal, corporate-driven AI policy contexts prioritise financial profit over values rooted in African knowledge systems, such as dignity and community upliftment. He cited facial recognition technology (FRT) as an example, arguing that reducing people's faces to mere data points and labels impacts human dignity.
Moreover, the data used to train FRT have largely been collected and labelled in Western contexts, which makes misclassification of faces more common outside of them.
Such development priorities of urban AI that ignore local contexts risk further marginalising underprivileged groups in urban societies in the Majority World.
We also have to ask ourselves how multiple knowledges come to matter when companies like Niantic, the US-based development firm behind Pokémon Go, use player-contributed scans to train their Large Geospatial Model.
At the moment, there is little research on how such developments hang together and how they shape and change what we are able to know about the world.
3. The hybridity of urban AI with its materiality and infrastructural component.
The hybridity of urban AI is also linked to AI epistemologies. The physical and material infrastructure of urban AI can have a profound impact on nature and climate.
Erick Tambo, another speaker at the workshop, noted that current AI development trends, which focus primarily on economic gains, could harm the environment and undermine environmental sustainability.
Another speaker, Dodzi Koku Hattoh, supported this with the example of AI infrastructure, such as data centres, which can have a significant negative impact on the environment.
Listening to both speakers, I inferred that without knowledge systems that value inclusivity and plurality in both the human and non-human world, urban AI is bound to turn urban spaces into profit-making spaces, leaving a trail of environmental harm from its resource-intensive physical and material infrastructure.
That may happen because current narratives around urban AI and sustainability focus on one aspect of the issue and sideline others: AI for sustainability (using AI to combat climate change) rather than sustainable AI (making AI itself environmentally sustainable).
Several speakers at the workshop suggested interesting measures to combat such epistemological imbalances in AI development.
Dylan Merrick argues that the first step towards addressing such imbalances is to focus on community-based ownership and to define AI together with Indigenous or local groups. Such approaches allow for relational interpretations of AI and could bring diversified and contextualised knowledges into today's Western-centric AI epistemologies.
A focus on relationality is also advocated by the African Ubuntu philosophy – "I am because we are". Wakanyi Hoffman states that Ubuntu philosophy emphasises that a person is a product of relationships, highlighting the importance of the interrelationships between humans, nature, and technology.
Modestha Mensah argues that such epistemologies rooted in principles of relationality, communality, care, and responsibility could offer us a way to re-imagine AI for social justice and environmental integrity rather than for economic profit.
Implementing such suggestions is easier said than done. The discussions at the workshop taught me that there is a gap between critical AI scholarship and AI development in practice.
This is where the role of ITC is crucial. ITC offers an ideal mix of AI development and use on the one hand and, though still nascent, critical research on AI on the other.
Perhaps with more inter-departmental dialogues, student projects informed by such critical perspectives, and collaborations on the topic of urban AI, ITC could lead the way for more just and inclusive urban and GeoAI where a plurality and diversity of knowledges are valued and leveraged for a better future.
This article benefitted tremendously from the insightful comments and additions from Dr. Fran Meissner.
If you're interested in learning more about critical perspectives on AI, check out our free course on Geoversity called GeotechE: Geotechnology Ethics.
Header image: Chris 73 / Wikimedia Commons Creative Commons Attribution-Share Alike 3.0 Unported