Why global actors are striving to soften disruption and tackle bias in AI
Growing up, Marcel Atemkeng dreamed of becoming a pilot, but since his family couldn’t afford aviation school, he decided to study mathematics and computer science instead. “I must confess that AI and Big Data were not in my dreams when I was younger,” he says.
Atemkeng developed a passion for space science, followed NASA-led discoveries closely and watched every documentary on National Geographic. Then, one day, while watching a space science programme on TV, he heard the narrator talk about plans to build the world’s largest and most sensitive radio telescope – the Square Kilometre Array (SKA) – in South Africa and Australia, and realised there was financial support available for students interested in conducting research at the intersection of radio telescopes, applied mathematics, AI and computer science. Soon after applying, he was awarded a PhD scholarship at Rhodes University and began working with vast amounts of data.
“For only four hours of simulated data from the telescope, my code could run for five days before outputting the results,” Atemkeng says. “I became passionate about the amount of data the SKA would produce in the future – a data stream likely to be in the order of PB/s. At this data scale, even using the most powerful supercomputers, the computation will always remain a major challenge. So I became interested in developing tools for big data, which can be implemented with AI and machine learning algorithms.”
For Atemkeng, most of Africa’s problems – from poverty and food security for its rapidly growing population to education and healthcare access in rural areas – could be addressed using machine learning and big data analytics. His work was recently recognised by Deep Learning Indaba – an organisation that aims to strengthen machine learning and AI in Africa by ensuring Africans play an active role in shaping it – with a Kambule Doctoral Award for advancing “our quest for a better understanding of our universe”.
“The SKA is a complex and challenging project,” says Atemkeng. “It offers Africa a unique opportunity to showcase its scientific and technological capabilities.”
Atemkeng is part of a growing network of AI and big data experts working to build solutions that have global impact yet remain sensitive to local priorities – and who, in the process, are shaping the future of AI in their own image.
Over the past few years, developments in AI, big data and other Fourth Industrial Revolution technologies have transformed our world in profound ways. In the workplace and training industry alone, machine learning and predictive technologies have ushered in new possibilities in terms of inclusion and diversification. Last year, as college campuses migrated online in response to the COVID-19 pandemic, AI-powered chatbots capable of simulating human conversation were deployed to interact with students in automated yet personalised ways. In firms across the world, AI recruitment software that automates and standardises the interview process already allows companies to sift through larger-than-ever pools of applicants and find the right match. AI-fuelled e-learning platforms are also revolutionising learners’ journeys by hyper-personalising paths based on performance and learning style, catering to students’ individual needs and helping them reach their full potential.
Yet, more often than not, advances in AI have been accompanied by accusations of bias directed at tools that are often introduced with the aim of enhancing objective decision-making. Back in 2016, it only took one day for Microsoft’s Tay chatbot to be taken offline after interacting with trolls on Twitter and absorbing racist and misogynistic language. Two years later, Amazon scrapped its AI-based recruitment engine because of bias against women, after it was trained to select applicants based on patterns in resumes that mostly came from men in the tech industry. In AI-based learning systems, there are growing concerns about how students’ personal data might be shared and used by algorithms to create personalised learning environments, with calls to develop data ownership models that are ethical and transparent. And last year, having cancelled exams due to the COVID-19 pandemic, the British government abandoned an algorithm that predicted results based on students’ existing grades and their schools’ historic performance, after bright students from less advantaged schools saw their expected grades drop.
“As a powerful new technology destined to reshape most sectors, AI will do so in ways that tend to reflect people's bias and ideologies,” Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, says. “At the same time, we’re in this exciting developmental phase where we have an opportunity to try and make sure that the use of this technology is steered towards shaping the kind of equitable future that we want.”
In recent years, countless organisations, community initiatives and companies have emerged worldwide to do just that – research what could go wrong, raise awareness, soften disruption, tackle bias and ensure the benefits and risks of AI are distributed equally.
Late last year, in conversation with Sam Charrington of the TWIML (This Week in Machine Learning) podcast, the founder of non-profit Deep Learning Indaba and Senior Research Scientist at Google’s DeepMind, Shakir Mohamed, explained why he felt the need to start the organisation, and just how much the community he helped create has evolved and grown since its early days.
When Deep Learning Indaba was first launched in 2016, it was mostly a collective with a mission to strengthen machine learning and AI in Africa. It was born out of the widespread awareness that African AI specialists were nowhere to be found at international conferences, and evolved alongside similar grassroots organisations – such as Data Science Africa and Black in AI – that choose to focus on research, network-building and training to address the diversity crisis in AI. Five years later, Deep Learning Indaba has grown from an annual event – Indaba refers to an open conference traditionally held by the Zulu and Xhosa peoples of South Africa – into a well-established non-profit promoting the advancement of AI in Africa through several different programmes.
“We support communities in individual countries across the continent as they know themselves what it is they need,” Mohamed says. “The idea was to create the kind of community where people know each other, see each other, understand they are not alone, and we recognise excellence and leadership through awards.”
As the recipient of a Deep Learning Indaba award, Atemkeng was invited as a keynote speaker to several national and international conferences and workshops in AI and machine learning, and will have the opportunity to present his work at the University of Oxford.
More than anything, he feels that the experience has transformed him “into a world-class academic”, which in turn has strengthened his desire to transfer his skills to the younger generation as a lecturer – “students will apply them to solve real-life problems across different sectors in Africa,” he says, starting a chain reaction.
Today, the Indaba project is no longer just about the organisation itself, but about the machinery it set in motion to lift the African AI community together, making it possible for an ecosystem of local solutions to emerge and thrive across the continent.
For Mohamed, grassroots organisation Masakhane is one such example. Founded by researchers who met at one of the Indaba events, the group has evolved into a community of people passionate about natural language processing (NLP) and African languages, determined to strengthen NLP research with the awareness that, according to their mission statement, “the tragic past of colonialism has been devastating for African languages in terms of their support, preservation and integration and this has resulted in technological space that does not understand our names, our cultures, our places, our history.” Masakhane’s goal is to ultimately ensure new technological advances will be shaped “for Africa, by Africans.”
Rose Delilah Gesicho, programme coordinator for the Nairobi Women in Machine Learning and Data Science community, sees community development as central to building a more diverse future for AI.
“We bring women who are interested in AI, machine learning and data science together and we give them the resources they need,” she says. “We also invite women working in the field across Africa and Kenya to show that AI is meant to be inclusive. It’s a way to encourage women to pursue AI and also equip them with the knowledge and skills that are required so we can increase representation in STEM.”
Similar efforts have emerged beyond Africa, with training groups and initiatives to decolonise and broaden the AI conversation now present in Eastern Europe, India, South America, Southeast Asia and elsewhere.
“What you see is communities across the world taking responsibility for their own development, their own trajectory, their own understanding, their own path, their own connectivity, using the resources that are available to them, and bringing that into their own conversation with their own cultural values, their own unique experience,” Mohamed says. “The world of AI today is a much more global space than it was five years ago, because of the amazing work of these communities.”
Alongside the growing number of training communities and organisations aimed at diversifying AI design and making it more inclusive, new players have emerged in the education sector to ensure new technologies also benefit those most likely to be negatively affected by them, and that the risks and potential of AI integration are understood and reclaimed at the grassroots level.
Back in 2017, McKinsey estimated that between 400 million and 800 million workers globally would be displaced by 2030 due to automation.
“New jobs will be available,” the report reads. “However, people will need to find their way into [them]. Of the total displaced, 75 million to 375 million may need to switch occupational categories and learn new skills.”
Yet, the impact of intelligent machines in the workplace likely won’t be distributed evenly. According to global forecasting firm Oxford Economics, for instance, manufacturing jobs are particularly vulnerable to this shift, with 20 million of them at risk of displacement by automation.
Companies like Axonify in Canada, which was selected as a 2020 Technology Pioneer by the World Economic Forum, are contributing to upskilling and reskilling higher risk groups by developing highly personalised, AI-powered learning solutions to train frontline employees – from grocery workers and retail associates to manufacturing line employees – in ways that accommodate their busy schedules.
“One of the challenges of supporting frontline employees is that they’re very time-limited,” says JD Dillon, Chief Learning Architect at Axonify. “They have to clock in and get to work, and when their shift is over, they clock out and go home. We apply the idea of ‘microlearning’ in order to fit that very targeted, AI-driven training into the couple of minutes they can find in their day.”
Microlearning embeds a habit of learning into a worker’s routine and has little in common with more traditional approaches to education and training. AI is involved to the extent that it generates individual learning journeys based on data such as an employee’s work experience, their performance against the results they’re held accountable for, and more. An algorithm determines where to focus training for each employee at any given time. The startup also recently developed an AI-driven content authoring tool that generates inquiry-based learning content automatically.
“The goal is to help people do their best today so that they can look down the road and find additional development opportunities,” says Dillon. “I try to make things very grounded as opposed to thinking that people will develop blockchain skills through microlearning. Maybe, but for me it’s about supporting the cashier in becoming the next assistant manager, helping them build their confidence so they can decide where they want to go next.”
The Factory2Fit project, which ran between 2016 and 2019 and was funded by the European Union, similarly focused on using AI to empower workers by finding human-centred solutions in factory environments particularly exposed to the impact of automation. The project actively engaged workers from pilot companies through participatory design activities and data models that took into account their perspectives and preferences alongside their skills, performance and industry needs. The purpose was to find new ways for people and intelligent systems to work together.
Upon completion, project coordinator Eija Kaasinen noted that “any smart factory solution should keep in mind that the technology itself is not smart; it is the combination of advanced technology and human practical knowledge that is smart”.
Back in 2018, to ensure the impact of emerging technologies would be understood and shaped by all stakeholders involved, including civil society at large, the University of Helsinki and tech consultancy Reaktor developed a series of free online courses open to anyone interested in learning more about AI. Initially, the course was aimed at improving AI literacy within Finland – “to prove that AI should not be left in the hands of a few elite coders,” COO of Reaktor Megan Schaible said. Since then, Elements of AI has reached more than 600,000 students from 170 countries, contributing to demystifying AI as an intricate discipline for the few and starting, instead, a worldwide conversation.
Elements of AI was recently selected as part of a wider research project funded by Business Finland, aimed at developing new technologies for the future of learning. In particular, the course will serve as a case study for researchers seeking insights into how personal data should be collected and used by digital learning platforms to benefit users first, and how it can be shared ethically to understand learning behaviour in digital learning environments.
There’s one common thread running through these initiatives, communities and organisations: the desire to transform AI into a force for good by diversifying the tech sector, educating the wider public on the risks and potential of these technologies, ensuring no one is disproportionately affected by the consequences of such deep technological shifts, and ultimately putting people back at the heart of the conversation and the AI design process.
In order to achieve this, according to Dr Stephen Cave of the University of Cambridge, it may be helpful to start thinking of AI as a bureaucratic system that needs to be kept in check and adjusted as it grows rather than an anthropomorphic robot with its own agency, as it is often represented in popular culture.
“AI is a very categorical way of thinking that most resembles a bureaucracy,” he says. “A soulless machine that is an efficient means of managing large populations, and it does so according to rules which require categorising people so they can be processed according to those rules. We've had 100 years of dealing with bureaucracies; we know they can be heartless and opaque, and produce as many wrong decisions as right ones. We know there are things we need to do about that, like trying to instil transparency. We're constantly fighting to make them more aligned with the broad values we want our society to have. And that’s what we should do with AI, and why we need to be able to have this conversation.”
Back at Rhodes University, Atemkeng, who was recently appointed as a permanent member of the faculty, now leads a small group of master’s and PhD students – the Rhodes AI Research Group – on topics such as machine learning, big data, astronomical data processing and more. He sees his role as more than just about advancing AI knowledge in Africa.
“I chose to be an academic because I want to transfer my skills to the younger generation,” he says. “I want to provide them with the possibility of a better future.”
“At ITCILO, we’re aiming to bring AI to the world of our global participants and let them experiment in a safe zone to address sustainability challenges that they are facing in their daily worlds.” Tom Wambeke, Chief of Learning Innovation at ITCILO
“I see this as a very positive moment for the ITC and the world in general. We have already started using immersive technologies and data to understand the preferences and habits of our learners, and shape our training programmes to respond to learners’ needs. We’re embracing the lifelong learning approach, accompanying university leavers for 50 years and longer by designing learning activities that are relevant throughout their life.” Charles Crevier, Manager of the Social Protection, Governance and Tripartism Programme at ITCILO
“AI represents an opportunity for learning, but it’s also a question of how inclusive we go with the way we design it and choose to use it. It is essential to inform users that come into contact with this technology unknowingly, and have an open conversation with learners.” Francesca Biasiato, Programme Officer for Gender Equality & Diversity Inclusion at ITCILO
“Regarding the rollout of AI in less developed countries, the challenge is going to be one of bandwidth, which may limit the optimisation of certain AI protocols that require high-speed connectivity. [...] Also, when it comes to AI, you're beginning to deal with something that's not visible. It's like magic of sorts, where you do something and voilà there's the answer. In a lot of African folk knowledge, we have these magical things, but when formal Western education was introduced in Africa, these customs and traditions were suppressed. Now we have these inventions that relate back to that. It's gonna be interesting to see how AI begins to be viewed as something that is different from indigenous mythology.” Gamelihle Sibanda, currently seconded by the United Nations to the South Africa Government as Chief Technical Adviser to the Expanded Public Works Programme