Research

Artificial Intelligence

CINDERELLA'S SHOE WON'T FIT SOUNDARYA: AN AUDIT OF FACIAL PROCESSING TOOLS ON INDIAN FACES

Gaurav Jain & Smriti Parsheera

Working Paper, December 2021

The increasing adoption of facial processing systems in India is fraught with concerns of privacy, transparency, accountability, and missing procedural safeguards. At the same time, we also know very little about how these technologies perform on the diverse features, characteristics, and skin tones of India's 1.34 billion-plus population. In this paper, we test the face detection and facial analysis functions of four commercial facial processing tools on a dataset of Indian faces. The tools display varying error rates in face detection and in gender and age classification. The gender classification error rate for Indian female faces is consistently higher than that for males, with the highest female error rate at 14.68%. In some cases, this error rate is much higher than that shown by previous studies for females of other nationalities. Age classification errors are also high: even after allowing an acceptable error margin of plus or minus 10 years from a person's actual age, age prediction failures range from 14.3% to 42.2%. These findings point to the limited accuracy of facial processing tools, particularly for certain demographic groups, and the need for more critical thinking before adopting such systems.
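
The metrics reported in the audit are simple aggregate error rates. As a rough illustration only (not the authors' actual code or dataset), the sketch below assumes a hypothetical predictions file with columns `detected`, `true_gender`, `pred_gender`, `true_age`, and `pred_age`, and computes per-gender classification error and the share of age predictions falling outside the plus or minus 10-year margin used in the paper.

```python
# Hypothetical sketch of the audit's error metrics (not the paper's code).
# Assumes a CSV "predictions.csv" with boolean column `detected` and columns
# true_gender, pred_gender, true_age, pred_age for each test image.
import pandas as pd

AGE_TOLERANCE = 10  # plus or minus 10 years, as in the paper

df = pd.read_csv("predictions.csv")

# Face detection error: share of images in which the tool found no face.
detection_error = 1 - df["detected"].mean()

# Gender classification error, reported separately per true gender label,
# computed over images where a face was detected.
detected = df[df["detected"]]
gender_error = (
    (detected["pred_gender"] != detected["true_gender"])
    .groupby(detected["true_gender"])
    .mean()
)

# Age prediction failure: predictions outside the +/- 10 year margin.
age_failure = (
    (detected["pred_age"] - detected["true_age"]).abs() > AGE_TOLERANCE
).mean()

print(f"Face detection error rate: {detection_error:.2%}")
print("Gender classification error rate by true gender:")
print(gender_error.map("{:.2%}".format))
print(f"Age prediction failure rate (+/-{AGE_TOLERANCE} yrs): {age_failure:.2%}")
```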

A CONVERSATION ON AI ACTIVISM

Sarayu Natarajan & Smriti Parsheera

ACM Interactions, January - February 2021

In this dialogue we raise some conceptual and practical concerns for AI activism, situating the piece in the Indian context. We discuss the role of human agents in every step of the AI process, the opportunities and risks this presents, the politics of AI, and avenues for AI activism.

CODES AND COALITION: A PATH TO GLOBAL GOVERNANCE OF AI?

Digital Debates — CyFy Journal, 2021

A global consensus on the governance of AI and other critical technologies currently seems far from sight. Given this, the rise of non-binding coalitions, such as the AI principles adopted by OECD members and endorsed by the G20, and the Global Partnership on AI, appears to be the new normal in global technology governance. These frameworks could offer the benefit of peer learning and informal scrutiny without the heavy-handedness of binding international norms, which appear both infeasible and undesirable at this time. But the coalition-based approach comes with its own set of challenges, particularly in terms of gaps in priorities, participation, and perspectives. Notably, AI discussions are currently concentrated mainly within a group of advanced or emerging economies, leaving large parts of the developing world out of these conversations. This leaves them to either accept the available principles as a fait accompli or remain excluded from the gains of AI knowledge-sharing systems, both of which are less than optimal solutions.

ADOPTION AND REGULATION OF FACIAL RECOGNITION TECHNOLOGIES IN INDIA: WHY AND WHY NOT?

Data Governance Network Working Paper No. 5, 5 December 2019

The widespread adoption of facial recognition technologies by the public and private sectors, without any meaningful debate or regulation, raises a number of concerns. These concerns revolve around issues of transparency, privacy and civil liberties, accuracy and effectiveness, and evidence of biased outcomes. This paper outlines the various contexts in which the use of this technology is being discussed in India and the challenges that it presents on account of the lack of an informed policy debate and appropriate legal and procedural safeguards. It focuses, in particular, on the proposed National Automated Facial Recognition System and the many ways in which it falls short of satisfying the tests laid down by the Supreme Court in the Puttaswamy right to privacy case.

Paper | Blog post

FAIRNESS AND NON-DISCRIMINATION

Smriti Parsheera, Philip Catania, Sebastian Cording, Diogo Cortiz, Massimo Donna, Kit Lee & Arie van Wijngaarden

Responsible AI: A Global Policy Perspective, Charles Morgan ed., ITechLaw, 2019

The power and influence of AI systems continue to grow as they are integrated into ever more aspects of our daily lives. The impact becomes particularly serious when algorithmic decision making extends to fundamental questions such as whether we are profiled as a potential suspect, whether we get called for an interview, or whether our mortgage application is approved. It is therefore essential that the developers and adopters of these systems be cognizant of the ways in which AI systems can perpetuate and exacerbate existing biases, the reasons for such biases, and the ways to mitigate biased outcomes. In this chapter we look at some of the areas where automated decision-making is being deployed at scale, the risks that this poses to individuals, and the emerging thinking on measures to ensure fairness and non-discrimination in AI applications.

A GENDERED PERSPECTIVE ON ARTIFICIAL INTELLIGENCE

International Telecommunication Union's Kaleidoscope Conference, Santa Fe, Argentina, November 2018 (Awarded the second best paper prize)

Focusing on the role of gender in the knowledge-making processes around artificial intelligence, this paper highlights the imbalanced power structure of those processes and the consequences of that imbalance. It then proposes a three-stage pathway towards bridging this gap. This includes adopting a set of publicly developed standards on AI, which should embed the concept of “fairness by design”, and investing in research and development of technological tools that can help translate ethical principles into actual practice.

Paper | Presentation