The white saviour industrial complex and global AI governance

In the realm of international development, the ‘white saviour’ trope has long been a subject of critique and controversy. This phenomenon, often rooted in colonialist attitudes, positions Western individuals or entities as benevolent rescuers of non-Western communities, usually without acknowledging or addressing systemic multidimensional inequalities, colonial/racial privilege, or the local agency of indigenous communities. The white saviour complex has not only perpetuated harmful stereotypes but has also undermined the efforts and voices of those it claims to help.

As artificial intelligence (AI) emerges as a global force with the potential to advance sustainable development goals, a new manifestation of the white saviour industrial complex is taking shape within emerging global AI governance.

Global governance and the (colonial) race towards AI supremacy

The race towards AI supremacy has increasingly mirrored colonial-era power dynamics, with established and emerging powers striving to secure dominance in global AI technology and its governance. In this contemporary digital race, wealthier nations, primarily from the Global North, leverage their significant resources and technological advancements to dictate the terms of AI development and deployment. This pursuit often marginalises and sidelines the contributions and needs of the Global Majority, perpetuating historical patterns of exploitation and inequality.

The development of AI systems is inextricably linked to the continuities of historical injustices, constituting a ‘colonial supply chain of AI’. There are global economic and political power imbalances in AI production, with value being extracted from the labour of workers in the Majority world to benefit Western technology companies. This perpetuates an ‘international division of digital labour’ that concentrates the most stable, well-paid AI jobs in the West, while exporting the most precarious, low-paid work to the Majority world.

The competitive drive for AI supremacy is not just about technological innovation but also about control over ideologies, global narratives, economic power, and geopolitical influence.

Additionally, AI development is often shaped by Western values and knowledge, marginalising non-Western alternatives and limiting possibilities for decolonising AI – a reflection of a broader pattern of ‘hegemonic knowledge production’ within the ‘colonial matrix of power’.

As the old adage goes, “there is nothing new under the sun”: from the international development sector and development economics research to science more broadly, the epistemic challenges in global AI governance reflect historical structural inequities. Much of the research and policy development in many academic fields has historically been conducted by scholars and institutions based in the Global North. This dominance has shaped research agendas, methodologies, and policy recommendations in ways that may not fully align with the needs and perspectives of the Global Majority.

Scholars from the Global North often have more resources, better funding, and greater access to academic networks, which allows them to dominate many fields. Likewise, the ethical, legal, social, and policy (ELSP) research on AI development and deployment is often led by Western academics who prioritise issues, solutions, and policies that resonate with Western perspectives. Consequently, Western academics risk overlooking or misinterpreting other contexts and needs, creating a global AI divide that does not reflect the socioeconomic realities and lived experiences of the Global Majority and perpetuating existing forms of marginalisation.

Navigating the white saviour complex in global AI governance

Much like traditional international development initiatives, AI governance often involves Western-developed solutions being implemented in non-Western contexts, imposed through a ‘copy and paste’ approach to Western ‘golden standards’, with several significant consequences. For example, Western nations often export their governance models as ‘golden standards’, assuming that these frameworks will be universally applicable. However, this approach neglects the unique social, political, and economic landscapes of non-Western countries. For instance, AI regulations designed for the European Union may not be suitable for countries with different governance structures or developmental priorities.

These solutions, often designed for Western innovation ecosystems, do not account for non-Western local nuances, needs, or cultural contexts. As a result, they are frequently ineffective or even harmful, reinforcing dependency rather than fostering self-sufficiency. This can lead to a loss of agency and autonomy, perpetuating cycles of dependency and underdevelopment with devastating intergenerational effects.

Furthermore, the imposition of Western AI governance frameworks can reinforce a cycle of dependency, where non-Western countries rely on external expertise and solutions rather than developing their own capacities. This dynamic can stifle local innovation and self-sufficiency, leading to long-term detrimental effects on local governance and technological development.

While there are increasing calls from academia and civil society organisations for a decolonial informed approach (DIA) to global AI governance, voices from the Global Majority remain underrepresented in many global AI forums and discussions. Western technical experts and policymakers dominate the conversation, often marginalising those who are most affected by AI technologies. This exclusion mirrors historical patterns of disenfranchisement and reinforces geopolitical power imbalances.

The ethical considerations surrounding AI governance, such as data privacy, bias mitigation, and transparency, may be inadequately addressed when Western frameworks are applied without adaptation. For example, data privacy laws that work in Western contexts may not account for cultural attitudes towards privacy in non-Western societies, leading to ethical dilemmas and potential harms. While these frameworks are important and often align with Western versions of ‘democracy’ and the ‘rule of law’, lessons from history reveal that this emerging ethical imperialism may not fully encompass the lived experiences, diverse values, and ethical considerations of different cultures in our global society. Imposing a singular ethical perspective can be seen as a form of ethical imperialism, where Western norms are prioritised over non-Western local traditions and beliefs.

Technical artifacts, including AI systems, are not value neutral: they are developed by individuals and organisations that bring their own values, beliefs, and biases into the design process. For instance, if the majority of AI researchers come from a particular cultural or socioeconomic background, their perspectives will likely dominate the development process, leading to systems that reflect their worldviews. This can result in algorithms that prioritise certain types of data or decision-making processes that align with those values, while neglecting others. These embedded values can influence how technologies are designed, implemented, and utilised, potentially perpetuating existing power dynamics and intersectional inequalities.

Another concern is that many international development assistance (IDA) organisations that increasingly fund access to digital public goods (DPG), digital public infrastructure (DPI), and AI-related policies still maintain an inherent colonial culture in which, as Corinne Gray writes, “Suddenly, we find ourselves in a world where the act of calling out racism is more offensive than racism itself.”

From data and algorithmic bias to the inequality associated with technological transitions, we must critically address undertones of injustice to ensure that AI advancements contribute to equitable digital development rather than reinforcing historical injustices and the systemic power imbalances of colonisation.

The South African context: an insidious manifestation of the white saviour trope

In South Africa, the white saviour trope takes on subtle nuances, but with very harmful effects. The country’s history of apartheid has left deep racial and economic divides, with a privileged minority often holding significant power and influence, particularly in knowledge creation and the digital economy. Today, this dynamic is evident in how privileged minorities are supported to position themselves as the ‘voices of African people’ and advocates of ‘African values’ on the global stage in discussions related to frontier technologies and the overall digital economy.

An unsurprising phenomenon, since, according to the World Bank, “The dualism that stems from the legacy of demographic and spatial exclusion in South Africa is reflected in the digital economy landscape, and a large share of South Africans remain disconnected from the opportunities it has created.”

Certain privileged individuals and groups assume the role of leaders representing broader Indigenous local communities, often legitimised by tokenistic relationships with Indigenous subordinates. These ‘African voices’ may not genuinely reflect the diverse perspectives and needs of the Indigenous majority. Instead, their viewpoints often align more closely with their own interests in virtue signalling, the business of international aid, or the Western institutions they are affiliated with.

These self-appointed spokespersons, backed by generous funding, often act as intermediaries between local communities and international entities. However, their legitimacy and commitment to true diversity, equity, and inclusion (DEI) are often questionable, as they benefit from the status quo of being palatable to international donors. As intergenerational beneficiaries of colonisation and apartheid, they have limited motivation to challenge systemic issues in practice: when situations call for the allyship, ethics, and decolonisation these individuals publicly advocate for, they end in cognitive dissonance and default to protecting white fragility.

By ignoring performative allyship, maintaining colonial practices in IDA, and encouraging privileged minorities to dominate the narrative on the socio-technical disruptions associated with new technologies for the Global Majority, we risk perpetuating historical cycles of disenfranchisement, hindering genuine progress towards real equity and epistemic justice and leaving the voices of the marginalised and the real victims unheard.

Moving towards truly responsible global AI governance

To counter the white saviour industrial complex in global AI governance, a shift towards more inclusive and equitable practices is necessary, one that places positionality and reflexivity at the centre of global AI governance and of ELSP research on the digital economy more broadly. This shift could be based upon the following approaches.

Inclusive representation is paramount: voices from the Global Majority and marginalised communities should be included in global AI governance discussions. This means creating platforms and opportunities, including resource allocation, for diverse perspectives to be heard and considered.

Context-specific solutions should be considered. AI solutions should be tailored to local contexts, with ethical, legal, social, cultural, and economic factors addressed through engagement with local experts and communities to understand their unique needs and challenges. Local experts should also be supported with the capacity to create home-grown solutions as well as to contribute to technical discussions on the global stage.

Moreover, funding to boost collaborative frameworks must be prioritised. Developing collaborative governance frameworks that involve multiple stakeholders, including governments, civil society, and the private sector, can help create more balanced and effective policies. These frameworks require concerted funding and should prioritise bottom-up co-creation and mutually beneficial partnerships, rather than top-down imposition.

Ethical pluralism is key. Recognising and respecting the plurality of ethical perspectives is essential. Global AI governance should be flexible enough to incorporate different ethical frameworks and values, allowing for a more nuanced and comprehensive approach to AI ethics.

Finally, there is a need to decolonise research and policy. In both AI governance and development economics, it is important to decolonise research and policy-making processes. This involves including reflexivity on researcher positionality, valuing indigenous knowledge systems, promoting local research initiatives, and ensuring that policy recommendations are grounded in local realities, including through thought leadership of indigenous technical experts representing their communities.

The white saviour industrial complex in global AI governance reflects broader historical and systemic issues. As AI continues to shape our world, it is imperative that we address these issues head-on. We can move towards a more just and equitable global AI governance landscape by reflecting on the atrocities of the past and ensuring that our collective efforts to foster inclusivity in the digital age respect local contexts and embrace ethical pluralism, reflexivity, and a decolonial informed approach. This exercise will not only enhance the effectiveness of truly global AI solutions for good but also empower communities worldwide to shape their own technological futures.

Shamira Ahmed is a pioneering policy entrepreneur. As founder and executive director of the Data Economy Policy Hub (DepHUB), she is the first indigenous African woman to establish an independent think tank in South Africa. Shamira was a 2023-2024 Policy Leader Fellow at the EUI Florence School of Transnational Governance. She is an active member of many global expert working groups. Shamira has published a wide range of knowledge products that focus on diverse areas such as measuring the data-driven digital economy, sustainable digital transformation, and the multidimensional aspects of crafting human-centred responsible transnational AI governance, that benefits the Global Majority.

This post was first published on EUIdeas.
