Inclusiveness and AI panel with the I2Hub...

Wednesday, May 8, 2024 (12:00 - 13:30 EST)

What are systemic black boxes in AI systems?
Throwing off their lids for inclusivity

In this panel session we are interested in extending the trope of the “black box” to explore the different types of black box problems occurring across AI socio-technical systems from cradle to grave. We will not only expose some of these black boxes within AI systems (e.g., environmental concerns, procurement, design, policy, implementation, and use) with a view to opening their lids; we will also explore the different ways in which individuals are thinking about how to blow up these black boxes with a view towards diversity, inclusion, belonging, and justice. This may involve extending XAI practices to wider contexts and systems, or entirely different approaches and tactics that are more participatory or holistic in nature.

Background: What is a systemic black box in Artificial Intelligence (AI) systems?

In the context of AI systems, a black box refers to a technical device, system, or object viewed in terms of its inputs and outputs, with limited or no knowledge of its internal workings and processes. Its implementation is “opaque” (black). It is a device, system, or object that has “lost track” of the inputs that informed its decision-making structure to produce outputs, or, more precisely, one that was never keeping track. This is part of what is described as the “black box problem”, which is challenging on many levels (a short illustrative sketch follows the list below), for example:

  • difficulty in fixing AI systems when problematic outcomes are produced;
  • “trust” questions about system robustness for handling a variety of contexts and situations; and
  • inaccuracies, missing experiences and perspectives, and embedded unequal power relations and discriminatory features in the data worlds whose reflections and amplifications are used to train and build these systems, which then serve to reproduce those harms and inequities in future contexts and AI decision-making processes.
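
To make this opacity concrete, the sketch below shows the user-facing, input-to-output view of a black box; the model and synthetic data are purely illustrative assumptions, not any particular production system.

```python
# A minimal sketch (hypothetical model, synthetic data) of the input -> output
# view of a black box: the caller sees a decision but learns nothing about how
# it was produced or what data shaped the decision-making structure.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for real training data; in practice, the provenance of
# this data is exactly what the black box problem obscures.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# From the user's side the system is just input -> output:
applicant = X[:1]
print(model.predict(applicant))        # a decision, e.g. [1] ...
print(model.predict_proba(applicant))  # ... and a score, with no account of why
```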

Several value structures and belief systems underpin and permeate AI development, where black boxes can almost be seen as a type of social norm:

  • The belief that black boxes are a necessary evil in the development of sufficiently accurate AI models, supported by the dichotomous reasoning that equates the increased complexity of an AI model with accuracy and casts simpler, easier-to-interpret, transparent AI models as not-so-accurate (see the sketch after this list).
  • The model-object assumptions that turn individuals into passive recipients of algorithmic outcomes that are built “for them” and designed and implemented for convenience of use. A user provides an input to produce an instant output without any knowledge of how this industrial knowledge sausage is made. This is also a powerful gatekeeping mechanism for inclusion and exclusion practices.
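
As a rough illustration of the dichotomy in the first bullet, the sketch below (again an assumed setup on synthetic data, not anyone's real pipeline) contrasts a shallow decision tree, whose rules can be printed and read end to end, with a larger ensemble that typically scores higher but resists the same inspection.

```python
# A rough sketch of the "complex = accurate, simple = not-so-accurate" framing
# (synthetic data; real results vary by task and often contradict this belief).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A shallow tree is inspectable: every decision rule can be printed and read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree))
print("tree accuracy:", tree.score(X_te, y_te))

# An ensemble of hundreds of trees usually scores somewhat higher, but it
# cannot be read the same way.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("forest accuracy:", forest.score(X_te, y_te))
```

Whether that accuracy gap, when it appears at all, justifies giving up legibility is precisely the value judgment this belief smuggles in.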

There have been responses to concerns around the black box problem, some regulatory in nature but predominantly in the form of Explainable AI (XAI) or transparent AI practices. XAI may take the form of increased documentation of process, but more commonly it frames the problem of the black box as a technical one that can be solved through a better technical solution. However, this does not always address concerns about who is and is not included within these processes, or concerns about interpretability, access, or distributive justice in data-driven decision-making.
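
For a sense of what a typical technical XAI response looks like, and of its limits, here is a minimal sketch using permutation importance from scikit-learn (the model and data are assumptions for illustration): it can say which features a model leans on, but nothing about who was included in the data or how the surrounding system is governed.

```python
# One common post-hoc XAI technique, sketched with scikit-learn's permutation
# importance: shuffle each feature in turn and measure how much the model's
# score drops, which ranks the inputs the model relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```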

About the speakers:

Dr. Cristián Bravo

Dr. Bravo is an Associate Professor and Canada Research Chair in Banking and Insurance Analytics at the University of Western Ontario, Canada, and Director of the Banking Analytics Lab. His research focuses on data science, analytics, and credit risk, particularly exploring multimodal deep learning, causal inference, and social network analysis to understand relations between consumers and financial institutions. With over 75 publications in prestigious journals and conferences, he contributes significantly to operational research, finance, and computer science. He frequently appears on CBC News’ Weekend Business Panel and has been quoted by The Wall Street Journal, WIRED, CTV, The Toronto Star, The Globe and Mail, and Global News discussing banking, finance, and artificial intelligence.

Dr. Nisarg Shah

Dr. Shah is an Associate Professor of Computer Science at the University of Toronto. He is also a Research Lead for Ethics of AI at the Schwartz Reisman Institute for Technology and Society, a Faculty Affiliate of the Vector Institute for Artificial Intelligence, and a member of the Board of Advisors of the nonprofit AIGS Canada. His work has received prestigious recognitions such as the Kalai Prize from the Game Theory Society (2024), "Innovators Under 35" from MIT Technology Review Asia Pacific (2022), "AI's 10 to Watch" from IEEE Intelligent Systems (2020), the Victor Lesser Distinguished Dissertation Award from IFAAMAS (2016), and a PhD Fellowship from Facebook (2014-15). Shah conducts research at the intersection of computer science and economics, addressing issues of fairness, efficiency, elicitation, and incentives that arise when humans are affected by algorithmic decision-making. His recent work develops theoretical foundations for fairness in fields such as voting, resource allocation, matching, and machine learning. He co-developed the not-for-profit website Spliddit.org, which has helped more than 250,000 people make provably fair decisions in their everyday lives. He earned his PhD in computer science at Carnegie Mellon University and was a postdoctoral fellow at Harvard University.

Dr. Ruth Bankey

Dr. Bankey is on the Board of Directors of the Inclusive Innovation Hub (I2Hub), which creates & supports a community of transformation in inclusive innovation practice and research by increasing access, co-creating new knowledge, building networks & collaborating to rethink what inclusivity means in & for innovation. As part of the I2Hub she is working with other members on developing an “inclusive innovation prototype for systems change” as part of the 2024 u-lab 2x cohort at the Presencing Institute, MIT. She is a collaborator on the SSHRC Partnership Development Grant “Enabling Systems Transitions towards Inclusive Innovation”, for which the I2Hub plays a leadership role and within which she is specifically concerned with the question: what are leading practices in inclusiveness in the development of Artificial Intelligence technologies? Her focus on inclusive innovation also extends to other areas of her work, for example as a collaborator on “Meaningfully Engaging the Public in Artificial Intelligence”, a SSHRC Insight Grant. She also chairs the Board of Directors of both the Capacity Building Institute of Canada (CBI) and Sustainable Capacity Solutions, ENGOs whose vision is to see the environmental non-profit sector equipped with the crucial tools and resources needed to overcome capacity constraints and foster community resilience, and she works in one of the Canada Revenue Agency’s program areas in its Business Intelligence, Research & Analytics Division (BIRAD).
