Volume 17, Issue 2, August 2020

Algorithmic Colonization of Africa

Abeba Birhane*


© 2020 Abeba Birhane
Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Abstract
We live in a world where technological corporations hold unprecedented power and influence. Technological solutions to social, political, and economic challenges are rampant. In the Global South, technology that is developed with Western perspectives, values, and interests is imported with little regulation or critical scrutiny. This work examines how Western tech monopolies, with their desire to dominate, control and influence social, political, and cultural discourse, share common characteristics with traditional colonialism. However, while traditional colonialism is driven by political and government forces, algorithmic colonialism is driven by corporate agendas. While the former used brute force domination, colonialism in the age of AI takes the form of ‘state-of-the-art algorithms’ and ‘AI driven solutions’ to social problems. Not only is Western-developed AI unfit for African problems, but the West’s algorithmic invasion also impoverishes the development of local products while leaving the continent dependent on Western software and infrastructure. By drawing examples from various parts of the continent, this paper illustrates how the AI invasion of Africa echoes colonial era exploitation. This paper then concludes by outlining a vision of AI rooted in local community needs and interests.

Keywords
Algorithmic Colonization; Africa; artificial intelligence; algorithms

Cite as: Abeba Birhane, "Algorithmic Colonization of Africa" (2020) 17:2 SCRIPTed 389 https://script-ed.org/?p=3888
DOI: 10.2966/scrip.170220.389


* PhD Candidate, School of Computer Science, University College Dublin, Ireland and Lero - The Irish Software Research Centre [email protected]

1       Introduction

Traditional colonial power seeks unilateral power and domination over colonized people. It declares control of the social, economic, and political sphere by reordering and reinventing social order in a manner that benefits it. In the age of algorithms, this control and domination occur not through brute physical force but rather through invisible and nuanced mechanisms such as control of digital ecosystems and infrastructure. Common to both traditional and algorithmic colonialism is the desire to dominate, monitor, and influence social, political, and cultural discourse through the control of core communication and infrastructure mediums. While traditional colonialism is often spearheaded by political and government forces, digital colonialism is driven by corporate tech monopolies – both of which are in search of wealth accumulation. The line between these forces is fuzzy, as they intermesh and depend on one another. Political, economic, and ideological domination in the age of AI takes the form of ‘technological innovation’, ‘state-of-the-art algorithms’, and ‘AI solutions’ to social problems. Algorithmic colonialism, driven by profit maximization at any cost, assumes that the human soul, behaviour, and action are raw material free for the taking. Knowledge, authority, and power to sort, categorize, and order human activity rest with the technologist, for whom we are merely data-producing “human natural resources”.[1]

In Surveillance Capitalism, Zuboff[2] remarks that “conquest patterns” unfold in three phases. First, the colonial power invents legal measures to provide justification for invasion. Then, declarations of territorial claims are asserted. These declarations are then legitimized and institutionalized, serving as tools for conquering by imposing a new reality. These invaders do not ask permission as they build ecosystems of commerce, politics, and culture and declare legitimacy and inevitability. Conquests by declaration are invasive and sometimes serve as a subtle way to impose new facts on the social world; for the declarers, they are a way to get others to agree with those facts. For technology monopolies, such processes allow them to take things that live outside the market sphere and declare them as new market commodities. In 2016, Facebook declared that it was creating a population density map of most of Africa using computer vision techniques, population data, and high-resolution satellite imagery.[3] Facebook, in the process, assigned itself authority over mapping, controlling, and creating population knowledge of the continent. In doing so, not only did Facebook assume that the continent (its people, movement, and activities) is up for grabs for the purpose of data extraction and profit maximization; by creating the population map, Facebook also assumed authority over what is perceived as legitimate knowledge of the continent’s population. Statements such as “creating knowledge about Africa’s population distribution”, “connecting the unconnected”, and “providing humanitarian aid” served as justification for Facebook’s project. For many Africans this echoes old colonial rhetoric: “we know what these people need, and we are coming to save them. They should be grateful.”
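
To make concrete what such a mapping pipeline involves, here is a minimal sketch of dasymetric population redistribution: a regional census total is allocated across satellite-image tiles in proportion to the buildings detected in each tile. This is a hedged illustration of the general technique only, not Facebook’s actual system; the computer-vision building detector is stubbed out and every number is invented.

```python
# Minimal sketch of dasymetric population mapping of the kind described above.
# The building detector is stubbed out; all counts and totals are invented.
import numpy as np

def redistribute_population(building_counts: np.ndarray, census_total: float) -> np.ndarray:
    """Allocate a region's census population across tiles in proportion to buildings."""
    weights = building_counts / building_counts.sum()
    return census_total * weights

# Assumed output of a (not shown) computer-vision building detector over a 3x3 grid of tiles.
counts = np.array([[12.0, 0.0, 3.0],
                   [45.0, 30.0, 2.0],
                   [8.0, 1.0, 0.0]])
density_map = redistribute_population(counts, census_total=25_000.0)
print(density_map.round(0))  # estimated people per tile
```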

Currently, much of Africa’s digital infrastructure and ecosystem is controlled and managed by Western monopoly powers such as Facebook, Google, Uber, and Netflix.[4] These tech monopolies present such exploitations as efforts to “liberate the bottom billion”, to bank the ‘unbanked’, or to connect the ‘unconnected’ – the same colonial tale, now under the guise of technology. “I find it hard to reconcile a group of American corporations, far removed from the realities of Africans, machinating a grand plan on how to save the unbanked women of Africa. Especially when you consider their recent history of data privacy breaches (Facebook) and worker exploitation (Uber)”, Kimani writes.[5] Nonetheless, algorithmic colonialism dressed in “technological solutions for the developing world” receives applause and rarely faces resistance or scrutiny.

It is important, however, to note that this is not a rejection of AI technology in general, or even of AI that is originally developed in the West, but a rejection of a particular business model advanced by big technology monopolies, one that imposes particular harmful values and interests while stifling approaches that do not conform to those values. When practiced cautiously, access to quality data and the use of various technological and AI developments indeed hold potential benefits for the African continent and the Global South in general. Access to quality data and to secure infrastructure for sharing and storing data, for example, can help improve the healthcare and education sectors. Gender inequalities, which plague every social, political, and economic sphere in Ethiopia, for instance, have yet to be exposed and mapped through data. Such data is invaluable in informing long-term gender-balanced decision making, an important first step towards societal and structural changes. Such data also aids general societal-level awareness of gender disparities, which is central for grassroots change. Crucial issues across the continent surrounding healthcare and farming, for example, can be better understood, and better solutions sought, with the aid of locally developed technology. A primary example is a machine learning model that can diagnose early stages of disease in the cassava plant, developed by the Kenyan researcher Wayua and her team.[6]
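
As an illustration of what such a locally built diagnostic tool might look like under the hood, the sketch below fine-tunes a generic image classifier on cassava leaf photographs via transfer learning. The directory layout, class count, and model choice are assumptions made purely for illustration; this is not a description of Wayua’s actual system.

```python
# Hypothetical transfer-learning sketch for cassava leaf disease classification.
# Assumed layout: cassava_images/<disease_label>/<photo>.jpg
import tensorflow as tf

NUM_CLASSES = 4  # assumed labels, e.g. healthy, mosaic, brown streak, bacterial blight

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cassava_images", image_size=(224, 224), batch_size=32)

# Reuse ImageNet features; train only a small classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```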

Having said that, the marvels of technology and its benefits to the continent are not what this paper sets out to discuss. There already exist countless die-hard techno-enthusiasts, both within and outside the continent, some of whom are only too willing to blindly adopt anything ‘data-driven’ or AI-based without a second thought about the possible harmful consequences. Mentions of ‘technology’, ‘innovation’, and ‘AI’ continually and consistently bring with them evangelical advocacy, blind trust, and little, if any, critical engagement. They also bring with them invested parties that seek to monetize, quantify, and capitalize on every aspect of human life, often at any cost. The atmosphere during one of the major technology conferences in Tangier, Morocco embodies this tech-evangelism. CyFyAfrica 2019, The Conference on Technology, Innovation, and Society,[7] is one of Africa’s biggest annual conferences, attended by policy makers, UN delegates, ministers, governments, diplomats, media, tech corporations, and academics from over 65 (mostly African and Asian) nations. Although these leaders want to place “the voice of the youth of Africa at the front and centre”, the atmosphere was one that can be summed up as a race to get the continent ‘teched-up’. Efforts to implement the latest state-of-the-art machine learning tool or the next cutting-edge application were applauded and admired, while the few voices attempting to bring forth discussion of the harms that might emerge with such technology were buried under the excitement. Given that the technological future of the continent is overwhelmingly driven and dominated by such techno-optimists, it is crucial to pay attention to the cautions that need to be taken and the lessons that need to be learned from other parts of the world.

2       Context Matters

One of the central questions that needs attention in this regard is the relevance and appropriateness of AI software developed with the values, norms, and interests of Western societies to users across the African continent.[8] Value systems vary from culture to culture, including what is considered a vital problem and a successful solution, what constitutes sensitive personal information, and opinions on prevalent health and socioeconomic issues. Certain matters that are considered critical problems in some societies may not be considered so in others. Solutions devised in one culture may not transfer well to another; in fact, the very problems a solution is set out to solve may not be considered problems in other cultures.

The harmful consequences of a lack of awareness of context are most stark in the health sector. In a comparative study examining early breast cancer detection practices in Sub-Saharan Africa (SSA) and high-income countries, Black and Richmond (2019)[9] found that applying what has been ‘successful’ in the West, i.e. mammograms, to SSA is not effective in reducing mortality from breast cancer. A combination of contextual factors, such as a lower age profile, presentation with advanced disease, and limited available treatment options, all suggest that self-examination and clinical breast examination serve women in SSA better as early detection methods than medical practice designed for their counterparts in high-income countries. Throughout the continent, health care is one of the major areas where ‘AI solutions’ are actively sought and Western-developed technological tools are imported. Without critical assessment of their relevance, the deployment of Western eHealth systems might pose more harm than benefit.

The importing of AI tools made in the West by Western technologists may not only be irrelevant and harmful due to a lack of transferability from one context to another; it also hinders the development of local products. For example, “Nigeria, one of the more technically developed countries in Africa, imports 90% of all software used in the country. The local production of software is reduced to add-ons or extensions creation for mainstream packaged software.”[10] The West’s algorithmic invasion simultaneously impoverishes development of local products while also leaving the continent dependent on its software and infrastructure.

3       Data are people

The African equivalent of Silicon Valley’s tech start-ups can be found in every possible sphere of life around all corners of the continent — in ‘Sheba Valley’ in Addis Abeba, ‘Yabacon Valley’ in Lagos, and ‘Silicon Savannah’ in Nairobi, to name a few — all pursuing ‘cutting-edge innovations’ in sectors like banking, finance, healthcare, and education. They are headed by technologists and financiers, from both within and outside the continent, who seemingly want to ‘solve’ society’s problems, using data and AI to provide quick ‘solutions’. As a result, the attempt to ‘solve’ social problems with technology is rife, and this is exactly where problems arise. Complex cultural, moral, and political problems that are inherently embedded in history and context are reduced to problems that can be measured and quantified – matters that can be ‘fixed’ with the latest algorithm. As dynamic and interactive human activities and processes are automated, they are inherently simplified to the engineers’ and tech corporations’ subjective notions of what they mean. The reduction of complex social problems to a matter that can be “solved” by technology also treats people as passive objects for manipulation. Humans, however, far from being passive objects, are active meaning seekers embedded in dynamic social, cultural, and historical backgrounds.[11]

The discourse around ‘data mining’, ‘abundance of data’, and a ‘data-rich continent’ shows the extent to which the individual behind each data point is disregarded. This muting of the individual, a person with fears, emotions, dreams, and hopes, is symptomatic of how little attention is given to matters such as people’s well-being and consent, which should be the primary concerns if the goal indeed is to ‘help’ those in need. Furthermore, this discourse of ‘mining’ people for data is reminiscent of the coloniser attitude that declares humans as raw material free for the taking.

Data is necessarily always about something and never about an abstract entity. The collection, analysis, and manipulation of data potentially entails monitoring, tracking, and surveilling people. This necessarily impacts people directly or indirectly, whether it manifests as a change in their insurance premiums or a refusal of services. The erasure of the person behind each data point makes it easy to ‘manipulate behaviour’ or ‘nudge’ users, often towards outcomes profitable for companies. Considerations around the wellbeing and welfare of the individual user, the long-term social impacts, and the unintended consequences of these systems on society’s most vulnerable are pushed aside, if they enter the equation at all. For companies that develop and deploy AI, at the top of the agenda is the collection of more data to develop profitable AI systems, rather than the welfare of individual people or communities. This is most evident in the FinTech sector, one of the most prominent digital markets in Africa. People’s digital traces, from their interactions with others to how much they spend on their mobile top-ups, are continually surveyed and monitored to form data for making loan assessments. Smartphone data such as browsing history, likes, and locations are recorded, forming the basis for a borrower’s creditworthiness.
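
A deliberately simplified sketch of the kind of scoring pipeline described above may clarify the point: a borrower’s digital traces are reduced to a handful of features, and a single number then gates the loan. Every feature name, weight, and threshold here is invented for illustration; no real lender’s model is being described.

```python
# Toy illustration of metadata-based credit scoring; all weights are invented.
import math

def credit_score(user: dict) -> float:
    """Map a borrower's phone metadata to an assumed probability of repayment."""
    features = {
        "avg_topup_amount": user["topups_total"] / max(len(user["topups"]), 1),
        "contacts_count": len(user["contacts"]),
        "night_browsing_ratio": user["night_sessions"] / max(user["sessions"], 1),
    }
    weights = {"avg_topup_amount": 0.002,     # invented
               "contacts_count": 0.01,        # invented
               "night_browsing_ratio": -1.5}  # invented
    z = -1.0 + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))  # logistic link

# The person disappears behind the score: one number decides the loan.
borrower = {"topups_total": 1200, "topups": [100] * 12,
            "contacts": ["a"] * 80, "night_sessions": 5, "sessions": 40}
approved = credit_score(borrower) > 0.5
```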

AI technologies that aid decision-making in the social sphere are, for the most part, developed and implemented by the private sector, whose primary aim is to maximise profit. Protecting individual privacy rights and cultivating a fair society is therefore the least of their concerns, especially if such practice gets in the way of “mining” data, building predictive models, and pushing products to customers. As decision-making over social outcomes is handed to predictive systems developed by profit-driven corporations, not only are we allowing our social concerns to be dictated by corporate incentives, we are also allowing moral questions to be dictated by corporate interest. ‘Digital nudges’, behaviour modifications developed to suit commercial interests, are a prime example. As ‘nudging’ mechanisms become the norm for ‘correcting’ individuals’ behaviour, eating habits, or exercise routines, those developing predictive models are bestowed with the power to decide what ‘correct’ is. In the process, individuals that do not fit our stereotypical ideas of a ‘fit body’, ‘good health’, and ‘good eating habits’ end up being punished, outcast, and pushed further to the margins. When these models are imported as state-of-the-art technology that will save money and ‘leapfrog’ the continent into development, Western values and ideals are enforced, whether deliberately or unintentionally.

4       Blind trust in AI hurts the most vulnerable

The use of technology within the social sphere often, intentionally or accidentally, focuses on punitive practices, whether it is to predict who will commit the next crime or who may fail to repay their loan. Constructive and rehabilitative questions, such as why people commit crimes in the first place or what can be done to rehabilitate and support those that have come out of prison, are rarely asked. Technology designed and applied with the aim of delivering security and order necessarily brings cruel, discriminatory, and inhumane practices to some. The cruel treatment of the Uighurs in China[12] and the unfair disadvantaging of the poor[13] are examples in this regard. Similarly, as cities like Harare,[14] Kampala, and Johannesburg[15] introduce facial recognition technology, the question of its accuracy (given such systems are trained on unrepresentative demographic datasets) and relevance should be of primary concern – not to mention the erosion of privacy and the surveillance state that emerges with these technologies.

With the automation of the social comes the automation and perpetuation of historical bias, discrimination, and injustice. As technological solutions are increasingly deployed and integrated into social, economic, and political spheres, so are the problems that arise with the digitization and automation of everyday life. Consequently, the harms of digitization and ‘technological solutions’ affect individuals and communities that are already at the margins of society. For example, as Kenya embarks on a project of national biometric IDs for its citizens, it risks excluding racial, ethnic, and religious minorities that have historically been discriminated against. Enrolling in the national biometric ID scheme requires documents such as a national ID card and birth certificate; however, these minorities have historically faced challenges acquiring such documents. If the national biometric system comes into effect, these minority groups risk being rendered stateless and will face challenges registering a business, getting a job, or travelling.[16] Furthermore, sensitive information about individuals is extracted, raising questions such as where this information will be stored, how it will be used, and who has access to it.

FinTech and the digitization of lending have come to dominate the ‘Africa rising’ narrative, a narrative which supposedly will ‘lift many out of poverty’. Since its arrival on the continent in the 1990s, FinTech has largely been portrayed as a technological revolution that will ‘leap-frog’ Africa into development. The typical narrative[17] preaches the microfinance industry as a service that exists to accommodate the underserved and a system that creates opportunities for the ‘unbanked’ who have no access to a formal banking system. Through its microcredit system, the narrative goes, Africans living in poverty can borrow money to establish and expand their microenterprise ventures. However, a closer critical look reveals that the very idea of FinTech microfinancing is a reincarnation of colonial-era rhetoric that works for Western multinational shareholders. These shareholders get wealthier by leaving Africa’s poor communities in perpetual debt. In Bateman’s words: “like the colonial era mining operations that exploited Africa’s mineral wealth, the microcredit industry in Africa essentially exists today for no other reason than to extract value from the poorest communities.”[18] Far from a tool that ‘lifts many out of poverty’, FinTech is a capitalist market premised upon the profitability of the perpetual debt of the poorest. For instance, although Safaricom is 35% owned by the Kenyan government, 40% of its shares are controlled by Vodafone, a UK multinational corporation, while the remaining 25% are held mainly by wealthy foreign investors.[19] According to Loubere,[20] Safaricom reported an annual profit of US$620 million in 2019, which was directed into dividend payments for investors. Like traditional colonialism, wealthy individuals and corporations of the Global North continue to profit from some of the poorest communities, except now it takes place under the guise of ‘revolutionary’ and ‘state-of-the-art’ technology. Despite the common discourse of paving a way out of poverty, FinTech in fact profits from poverty. It is an endeavour engaged in the expansion of its financial empire through indebting Africa’s poorest.

The loose regulation and lack of transparency and accountability under which the microfinance industry operates, as well as the overhyped promise of technology, make it difficult to challenge and interrogate its harmful impacts. Like traditional colonialism, those that benefit from FinTech, microfinancing, and various lending apps operate from a distance. For example, Branch[21] and Tala,[22] two of the most prominent FinTech apps in Kenya,[23] operate from their California headquarters and export “Silicon Valley’s curious nexus of technology, finance, and developmentalism”.[24] The expansion of Western-led digital financing systems, furthermore, brings with it a negative knock-on effect on local traditional banking and borrowing systems that have long existed and functioned in harmony with locally established norms and mutual compassion.

5       Lessons from the Global North

Globally, there is increasing awareness of the problems that arise with automating social affairs, illustrated by ongoing attempts to integrate ethics into computer science programs[25] within academia, various ‘ethics boards’ within industry, and various proposed policy guidelines. These approaches to developing, implementing, and teaching responsible and ethical AI take multiple forms, perspectives, and directions, and present a plurality of views. This plurality is not a weakness but rather a desirable strength, one necessary for accommodating a healthy, context-dependent remedy. Insisting on a single AI integration framework for the ethical, social, and economic issues that arise in various contexts and cultures is not only unattainable but also imposes a one-size-fits-all, single worldview. Companies like Facebook that enter African ‘markets’ or embark on projects such as creating population density maps with little to no regard for local norms or cultures are in danger of enforcing a one-size-fits-all imperative. Similarly, for African developers, start-ups, and policy makers working to solve local problems with home-grown solutions, what is considered ethical and responsible needs to be seen as inherently tied to local contexts and experts.

AI, like Big Data, is a buzzword that gets thrown around carelessly; what it refers to is notoriously contested across various disciplines, and oftentimes it is mere mathematical snake oil[26] that rides on overhype. Researchers within the field, reporters in the media, and industries that benefit from it all contribute to the overhype and exaggeration of the capabilities of AI. This makes it extremely difficult to challenge the deeply engrained attitude that ‘all Africa is lacking is data and AI’. The sheer enthusiasm with which data and AI are subscribed to as gateways out of poverty or disease would make one think that social, economic, educational, and cultural problems are immutable unless Africa imports state-of-the-art technology.

The continent would do well to adopt a dose of critical appraisal when regulating, deploying, and reporting on AI. This requires challenging the mindset that portrays AI as having God-like power and as something that exists and learns independently of those that create it. People create, control, and are responsible for any system. For the most part, such people consist of a homogeneous group of predominantly white, middle-class males from the Global North. Like any other tool, AI reflects the human inconsistencies, limitations, and biases, the political and emotional desires of the individuals behind it, and the social and cultural ecology that embeds it. Like a mirror, it reflects how society operates – unjust and prejudiced against some individuals and communities.

AI tools that are deployed in various spheres are often presented as objective and value-free. Some automated systems in domains such as hiring[27] and policing[28] are even put forward with the explicit claim that these tools eliminate human bias; automated systems, after all, apply the same rules to everybody. Such a claim is in fact one of the single most erroneous and harmful misconceptions as far as automated systems are concerned. As O’Neil explains, “algorithms are opinions embedded in code”.[29] This widespread misconception further prevents individuals from asking questions and demanding explanations. How we see the world and how we choose to represent it is reflected in the algorithmic models of the world that we build. The tools we build necessarily embed, reflect, and perpetuate socially and culturally held stereotypes and unquestioned assumptions.
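
A toy example, invented here purely for illustration, shows how a rule that is ‘applied to everybody’ still embeds an opinion:

```python
# Invented screening rule, not from any real system. Applying it uniformly
# does not make it neutral: the threshold is an opinion about what a 'good'
# candidate looks like, and it silently penalizes caregivers, the formerly
# ill, and the formerly incarcerated.
def passes_screen(candidate: dict) -> bool:
    return candidate["longest_employment_gap_months"] <= 12
```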

For example, during the CyFyAfrica 2019 conference,[30] the Head of Mission of the UN Security Council Counter-Terrorism Committee Executive Directorate addressed work being developed globally to combat online terrorism. Unfortunately, the Director focused explicitly on Islamic groups, portraying an unrealistic and harmful image of global online terrorism. Contrary to such portrayal, more than 60 percent of U.S. mass shootings in 2019 were, for instance, carried out by white-nationalist extremists.[31] In fact, white supremacist terrorists carried out more attacks than any other type of group in recent years in the U.S.

In Johannesburg, one of the most surveilled cities in Africa, ‘smart’ CCTV networks provide a powerful tool to segregate, monitor, categorize, and punish individuals and communities that have historically been disadvantaged. Vumacam,[32] an AI-powered surveillance company, is fast expanding throughout South Africa, normalizing surveillance and erecting apartheid-era segregation and punishment under the guise of ‘neutral’ technology and security. Vumacam currently provides a privately owned video-management-as-a-service infrastructure,[33] with a centralized repository of video data from CCTV. Kwet explains that in the apartheid era, passbooks served as a means to segregate the population, inflict mass violence, and incarcerate black communities.[34] Similarly, “[s]mart surveillance solutions like Vumacam are explicitly built for profiling, and threaten to exacerbate these kinds of incidents.” Although the company claims its technology is neutral and unbiased, what it deems ‘abnormal’ and ‘suspicious’ behaviour disproportionately maps onto those that have historically been oppressed. What the Vumacam software flags as ‘unusual behaviour’ tends to be dominated by the black demographic, most commonly those that do manual labour such as construction workers.[35] According to Clarno, “[t]he criminal in South Africa is always imagined as a black male”.[36] Despite its claim to neutrality, Vumacam software perpetuates this harmful stereotype.

Stereotypically held views drive what is perceived as a problem and the types of technology we develop to ‘resolve’ them. In the process we amplify and perpetuate those harmful stereotypes. We then interpret the findings through the looking glass of technology as evidence that confirms our biased intuitions and further reinforces stereotypes. Any classification, clustering, or discrimination of human behaviours and characteristics that AI systems produce reflects socially and culturally held stereotypes, not an objective truth.

A robust body of research in the growing field of Algorithmic Injustice[37],[38] illustrates that various applications of algorithmic decision-making result in biased and discriminatory outcomes. These discriminatory outcomes often affect individuals and groups that are already on society’s margins, those that are viewed as deviants and outliers – people who do not conform to the status quo. Given that the most vulnerable are affected by technology disproportionately, it is important that their voices are central to the design and implementation of any technology that is used on or around them. However, contrary to this, many of the ethical principles applied to AI are firmly utilitarian; the underlying principle is the best outcome for the greatest number of people. This, by definition, means that solutions that centre minorities are never sought. Even when unfairness and discrimination in algorithmic decision-making processes are brought to the fore — for instance, upon discovering that women have been systematically excluded from entering the tech industry,[39] minorities forced into inhumane treatment,[40],[41] and systematic biases embedded into predictive policing systems[42] — the ‘solutions’ sought do not often centre those that are disproportionately impacted. Mitigating proposals devised by corporate and academic ethics boards are often developed without the consultation and involvement of the people that are affected. Prioritizing the voice of those disproportionately impacted every step of the way, including in the design, development, and implementation of any technology, as well as in policymaking, requires actually consulting and involving vulnerable groups of society. This, of course, requires a considerable amount of time, money, effort, and genuine care for the welfare of the marginalized, which often goes against most corporations’ business models. Consulting those who are potentially likely to be negatively impacted might also (at least as far as the West’s Silicon Valley is concerned) seem beneath the ‘all-knowing’ engineers who seek to unilaterally provide a ‘technical fix’ for any complex social problem.

6       Conclusion

As Africa grapples with digitizing and automating various services and activities on the one hand, and protecting people from the consequential harms that technology causes on the other, policy makers, governments, and firms that develop and apply various technologies in the social sphere need to think long and hard about what kind of society we want and what kind of society technology drives. Protecting and respecting the rights, freedoms, and privacy of the very youth that the leaders want to put at the front and centre should be prioritised. This can only happen with guidelines and safeguards for individual rights and freedoms that are put in place and continually maintained, revised, and enforced. In the spirit of the communal values that unify such a diverse continent, ‘harnessing’ technology to drive development means prioritizing the welfare of the most vulnerable in society and the benefit of local communities, not distant Western start-ups or tech monopolies.

The question of the technologization and digitalisation of the continent is also a question of what kind of society we want to live in. The continent has plenty of techno-utopians but few that would stop and ask difficult and critical questions. African youth solving their own problems means deciding what we want to amplify and show to the rest of the world; it means shifting the tired portrayal of the continent (hunger and disease) by focusing attention on the positive, vibrant culture (such as philosophy, art, and music) that the continent has to offer. It also means not importing the latest state-of-the-art machine learning systems or other AI tools without questioning their underlying purpose and contextual relevance, who benefits from them, and who might be disadvantaged by their application. Moreover, African youth in the AI field means creating programs and databases that serve various local communities, not blindly importing Western AI systems founded upon individualistic and capitalist drives. In a continent where much of the Western narrative is dominated by negative images such as migration, drought, and poverty, using AI to solve our problems ourselves starts with a rejection of such stereotypical images. This means using AI as a tool that aids us in portraying how we want to be understood and perceived: a continent where community values triumph and nobody is left behind.


[1]     Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (London: Profile Books, 2019).

[2]     Ibid.

[3]     Faine Greenwood, “Facebook Is Putting Us All on the Map Whether We like It or Not” (Medium, 29 July 2019), available at https://onezero.medium.com/facebook-is-putting-us-all-on-the-map-whether-we-like-it-or-not-c3f178a8b430 (accessed 26 October 2019).

[4]     Michael Kwet, “Digital Colonialism Is Threatening the Global South” (Al Jazeera, 13 March 2019), available at  https://www.aljazeera.com/indepth/opinion/digital-colonialism-threatening-global-south-190129140828809.html (accessed 18 July 2019).

[5]     Michael Kimani, “5 Reasons Why Facebook’s New Cryptocurrency ‘Libra’ is Bad News for Africa” (Kioneki, 28 June 2019), available at https://kioneki.com/2019/06/28/5-reasons-why-facebooks-new-cryptocurrency-libra-is-bad-news-for-africa/ (accessed 28 October 2019).

[6]     Karen Hao, “The future of AI research is in Africa” (MIT Technology Review, 21 June 2019),  available at https://www.technologyreview.com/2019/06/21/134820/ai-africa-machine-learning-ibm-google/   (accessed 10 November 2019).

[7]     Observer Research Foundation, “CYFY Africa” (ORF, 2019) available at https://www.orfonline.org/cyfy-africa/ (accessed 20 September 2019).

[8]     Crystal Biruk, Cooking Data: Culture and Politics in an African Research World (Durham: Duke University Press, 2018).

[9]     Eleanor Black and Robyn Richmond, “Improving early detection of breast cancer in sub-Saharan Africa: why mammography may not be the way forward.” (2019) 15(1) Globalization and Health 3.

[10]    Knowledge Commons Brasil, “Digital Colonialism & the Internet as a Tool of Cultural Hegemony”, available at https://web.archive.org/web/20190731000456/http://www.knowledgecommons.in/brasil/en/whats-wrong-with-current-internet-governance/digital-colonialism-the-internet-as-a-tool-of-cultural-hegemony/ (accessed 10 November 2019).

[11]    Abeba Birhane, “Descartes Was Wrong: ‘a Person Is a Person through Other Persons’” (Aeon, 2017), available at https://aeon.co/ideas/descartes-was-wrong-a-person-is-a-person-through-other-persons (accessed 22 July 2020).

[12]    Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority” (New York Times, 14 April 2019), available at https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html (accessed 24 June 2019).

[13]    Mary Madden, “The Devastating Consequences of Being Poor in the Digital Age” (New York Times, 25 April 2019), available at https://www.nytimes.com/2019/04/25/opinion/privacy-poverty.html (accessed 10 November 2019).

[14]    Farai Mudzingwa, “Mnangagwa’s Govt Getting Facial Recognition Tech From China” (TechZim, 13 April 2018), available at https://www.techzim.co.zw/2018/04/mnangagwas-govt-getting-facial-recognition-tech-from-china/ (accessed 10 April 2020).

[15]    Heidi Swart, “Joburg’s New Hi-Tech Surveillance Cameras: A Threat to Minorities That Could See the Law Targeting Thousands of Innocents.” (Daily Maverick, 28 September 2018), available at https://www.dailymaverick.co.za/article/2018-09-28-joburgs-new-hi-tech-surveillance-cameras-a-threat-to-minorities-that-could-see-the-law-targeting-thousands-of-innocents/  (accessed 15 July  2019).

[16]    Abdi Latif Dahir and Carlos Mureithi, “Kenya’s High Court Delays National Biometric ID Program” (New York Times, 31 January 2020), available at https://www.nytimes.com/2020/01/31/world/africa/kenya-biometric-ID-registry.html?referringSource=articleShare (accessed 5 April  2020).

[17]    Nadeem Hussain, “Microfinance and Fintech” (MIT Technology Review, 22 November 2017), available at http://www.technologyreview.pk/microfinance-and-FinTech/ (accessed 20 March 2020).

[18]    Milford Bateman, “The problem with microcredit in Africa” (Africa is a Country, 9 October 2019), available at https://africasacountry.com/2019/09/a-fatal-embrace (accessed 2 April 2020).

[19]    Nicholas Loubere, “The Curious Case of M-Pesa’s Miraculous Poverty Reduction Powers” (The Developing Economics Blog, 14 June 2019), available at https://developingeconomics.org/2019/06/14/the-curious-case-of-m-pesas-miraculous-poverty-reduction-powers (accessed 28 March 2020).

[20]    Ibid.

[21]    ‘Branch’, available at https://branch.co/about (accessed 3 April 2020).

[22]    Forbes, “Tala”, available at https://www.forbes.com/companies/tala/ (accessed 3 April 2020).

[23]    Kevin Donovan and Emma Park, “Perpetual Debt in the Silicon Savannah” (Boston Review, 20 September 2019), available at https://bostonreview.net/class-inequality-global-justice/kevin-p-donovan-emma-park-perpetual-debt-silicon-savannah (accessed 5 April 2020).

[24]    Ibid.

[25]    Casey Fiesler, Natalie Garrett, and Nathan Beard, “What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis.” (2020) Symposium on Computer Science Education (SIGCSE’20), available at https://dl.acm.org/doi/abs/10.1145/3328778.3366825 (accessed 5 April 2020).

[26]    Arvind Narayanan, “The 2019 Arthur Miller Lecture on Science and Ethics” (Massachusetts Institute of Technology STS Program, 18 November 2019), available at https://sts-program.mit.edu/event/arthur-miller-lecture-on-science-and-ethics/ (accessed 18 March 2020).

[27]    HireVue, “Pre-Employment Assessment & Video Interview Tools”, available at https://www.hirevue.com/ (accessed 2 December 2019).

[28]    PredPol, “Predict Prevent Crime: Predictive Policing Software”, available at https://www.predpol.com/ (accessed 1 December 2019).

[29]    Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (London: Penguin Books, 2017).

[30]    Observer Research Foundation, supra n. 7.

[31]    “White Supremacist Extremism JIB” (Scribd, 2017), available at https://www.scribd.com/document/356288299/White-Supremacist-Extremism-JIB  (accessed 3 September 2019).

[32]    Vumacam, “Vumacam, A Smart Surveillance Solution”, available at https://www.vumacam.co.za/features/ (accessed 27 March 2020).

[33]    Michael Kwet, “Smart CCTV Networks Are Driving an AI-Powered Apartheid in South Africa” (Vice, 22 November 2019), available at https://www.vice.com/en_us/article/pa7nek/smart-cctv-networks-are-driving-an-ai-powered-apartheid-in-south-africa?utm_campaign=sharebutton (accessed 22 March 2020).

[34]    Ibid.

[35]    Ibid.

[36]    Andy Clarno, Neoliberal Apartheid: Palestine/Israel and South Africa after 1994 (Chicago: University of Chicago Press, 2017).

[37]    Ruha Benjamin, Race after Technology: Abolitionist Tools for the New Jim Code (Cambridge: Polity, 2019).

[38]    Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).

[39]    Anja Lambrecht and Catherine Tucker, “Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads” (2016) SSRN Electronic Journal, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2852260 (accessed 5 April 2020).

[40]    Chris Buckley, Paul Mozur and Austin Ramzy, “How China Turned a City into a Prison” (New York Times, 4 April 2019), available at https://www.nytimes.com/interactive/2019/04/04/world/asia/xinjiang-china-surveillance-prison.html?smid=tw-share (accessed 18 June 2019).

[41]    Mozur, supra n. 12.

[42]    Rashida Richardson, Jason Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice” (2019) 94 New York University Law Review Online 192-233.
