Positive Friction – An Interview with Sharon Strover

April 3, 2023

Sharon Strover is a professor of communications in the Moody College of Communication and chair of Good Systems at The University of Texas at Austin. We sat down with Strover in advance of the 2023 Good Systems Symposium, taking place April 3-4 on UT Austin’s campus.

Mis- and disinformation are key points of interest for many communications scholars right now. How to reckon with a post-truth age from a communications standpoint is a tricky enough question to answer when the issues arising are human-made. When AI technology is also one of the culprits, things get very complicated indeed.

“As a professor of communications, I’ve always been interested in how we interact with media-related technologies of all kinds,” said Strover, who teaches in the School of Journalism and Media and Department of Radio-Television-Film. “From conventional TV and radio to the kinds of information and services provided by computers and the Internet.”

And while web-based communications are often seen as the logical technological progression from TV and radio, the information dynamic has shifted: from a one-way process, where TV and radio simply feed us information, to one where we consume information on the Internet and, in exchange, those providing it collect and harvest our personal data.

What’s So Unique About Good Systems?       

“There are six Good Systems research projects in total, and they are all taking a deeper dive into ethical issues in AI than ever before. Not only that, but each research group is collaborating with the others to identify commonalities across research fields. This is a multifaceted and complex issue, and sharing expertise across disciplines will help us come to a better understanding of the totality of what’s at stake and, therefore, help us find macro solutions.”

– Sharon Strover, professor, Moody College of Communication, and chair, Good Systems

“Systems gather a variety of data about our behavior – online and offline,” Strover said. Surveillance technology, for example, doesn’t even ask for permission.      

“All of this data ends up in a database somewhere,” she added. “What becomes of it has a lot of curious and, in some cases, serious ramifications.”

Good Systems is an interdisciplinary research initiative at UT Austin focused not simply on what AI can do, but on what AI should do for society. As Good Systems chair and a founding member, Strover believes the most pressing ethical concern in AI technology currently facing society is how our personal data is being used.

“The systems of message manipulation, and quiet data gathering, that have become fundamental economic engines for online services are deeply troubling,” Strover said. “We have no control over what data is gathered about us and by whom, how it is used and reused, sold and resold.”

Strover believes this issue remains under-scrutinized for two reasons: too many people are making money from our data, and data policy is considered boring, so people aren’t interested.

One potential solution is perhaps even more boring than data policy itself, especially in a sector as fast-paced as technology.

The approach Strover advocates is sometimes referred to as friction. Introducing friction into any sector of technology is rarely met with much enthusiasm, though. Putting the brakes on “progress,” for any reason, runs almost counter to the American spirit of enterprise.

She is not alone in thinking this. The Future of Life Institute published an open letter calling for all AI research labs to pause training of AI systems more powerful than GPT-4 for six months. As of the end of March, it had been signed by over 2,500 people, including Tesla’s Elon Musk and UT Austin’s Peter Stone, professor of computer science and a leader in the ethical design of AI.

“Not only that, but our current regulatory system is poorly equipped to grapple with the impacts of these systems,” Strover added. “We need to slow down and start being more methodical about how we implement new systems.”

There are numerous examples – from the World Wide Web to the electric car – of how policy and regulation lag behind technological progress to the point where severe negative impacts must happen before authorities sit up and take notice.

Allowing advances in AI to continue untethered could lead to problems on an unprecedented scale. “With AI technology, there’s a lot at stake because of the scale of generative AI models, large language models and visual models finding their way into a lot of the indicators we as humans rely upon to understand the world,” Strover warned.

Despite being in uncharted territory, there are steps we can take to help mitigate major issues in AI, such as algorithmic bias.

“We need to cultivate a greater understanding and literacy around how algorithms operate, enabling people, particularly decision-makers embedded in institutions, to recognize the operations of algorithms in various circumstances so that they better understand what it is they’re working with,” she said.

Sharon Strover will be interviewing Chad Jenkins, professor of robotics at the University of Michigan, in a keynote conversation on “Navigating Human-Robot Relationships and the Future of Work” on April 3 at the 2023 Good Systems Symposium.
 

Grand Challenge:
Good Systems