Media Monitoring Africa (MMA) welcomes the opportunity to provide this submission to the Working Group on discrimination against women and girls, setting out the perspective of young people and media workers affected by the gendered effects of artificial intelligence (AI) in Africa.
It is by now well established that digital technologies, including AI, have significant effects on the right to gender equality and that many of the consequences of these technologies are particularly gendered. For example, technology-facilitated gender-based violence, which includes the use of AI-generated mis- and disinformation as well as deep-fakes, hate speech, and targeted harassment bots, disproportionately targets women and gender minorities. Disturbingly, AI enables such content to be created and distributed at unprecedented speed, with little ability to track or halt it.
It must be emphasised that while gender is a key determinant of the rights consequences of AI in the present age, intersecting characteristics such as race, nationality, sexual orientation, disability, geography, and income further mediate the effects on individuals. An intersectional approach that takes stock of overlapping experiences of discrimination is therefore vital. In the South African context, linguistic diversity poses a particularly significant challenge for the deployment of AI systems, which are typically trained in only a small number of globally dominant languages, risking exacerbating the digital divide and threatening the future of minority languages. Even where AI systems attempt to engage with these other languages, the lower volumes of available training data result in less accurate and less useful tools.
Read our full submission here.
