The Risk of Compounding Inequality

March 30, 2023

A Q&A with Maria De-Arteaga, Assistant Professor in the Information, Risk, and Operations Management Department at the McCombs School of Business and member of the Good Systems project, Designing Responsible AI Technologies to Curb Disinformation. Maria is a speaker in a roundtable at the 2023 Good Systems Symposium focused on how to use AI to advance racial justice and combat disinformation.

Why is a professor at a business school talking about artificial intelligence?  
At the McCombs School of Business, a core objective of our research is to understand the impact that technologies have on organizations and society, and to design new technologies and interventions grounded in this understanding. At McCombs, our slogan is “human centered, future focused,” and responsible AI is central to this mission.    
 
In your opinion, when it comes to AI and society, what's the most pressing ethical concern? 
The risk of compounding inequality. This is a risk when we consider the use of AI to automate or support decisions in high-stakes settings, such as hiring and allocating public resources. It is also a concern when we consider the invisible, underpaid labor behind technologies such as generative models, as well as the implications of these new tools on the future of work.  
 
From cell phones to electric cars, the World Wide Web to ChatGPT, regulation and legislation almost always lag behind technological advancement and innovation. So why would things be any different with AI tech? 
They aren’t. There are several examples of new AI tools being deployed in high-stakes settings without adequate oversight or regulation, resulting in a large human cost. Part of this is because good regulation requires a good understanding of the risks, as well as of the effects of particular interventions, and this knowledge is non-trivial. This argument is often made by those claiming that it is “too early” to create regulation. While I agree that there is a big risk of bad regulation, I don’t think this is an excuse for irresponsible deployment. If we believe that we do not yet understand something well enough to regulate it, then we should not be deploying it in high-stakes settings. 
 
Can you elaborate on what you’ll be talking about at the Good Systems Symposium? Is there any one event or speaker you are most excited to see?    
I will be discussing our research on the design of responsible AI technologies to curb misinformation. In particular, I will discuss our work on diversity in the machine learning pipeline and its intersection with issues of justice in misinformation detection systems.  

I am really looking forward to learning from Meme Styles about her work on data activism and Afrofuturism, and I am also excited for the panel on AI and surveillance in smart cities.  

Maria De-Arteaga will be a panelist in a research-to-practice roundtable discussion entitled “Using AI to Advance Racial Justice and Combat Disinformation” at 9 a.m. on Tuesday, April 4, at the Good Systems Symposium 2023.
 

Grand Challenge:
Good Systems