Understanding corporate data responsibility through the user’s eyes – Scholar Q&A

November 6, 2023 • Jonathan McVerry

Lim and Shin

Look up information about the Cayman Islands, and you can expect advertising for the vacation spot to show up in your Facebook feed. For most consumers, that kind of personalized advertising is not new. However, new algorithms powered by artificial intelligence and widespread data harvesting have changed the landscape. Scholars Joon Soo Lim, Syracuse University; Chunsik Lee and Junga Kim, University of North Florida; and Donghee Shin, Texas Tech University, are leading a project that will examine corporate data responsibility (CDR) and its relation to consumer trust. Specifically, they will be studying algorithms. How can perceptions of CDR be measured? What conditions are needed to ensure trust in the system? Lim, a three-time Page Center scholar, and Shin explain the project in this Q&A. Shin, Lee and Kim are all first-time scholars with the Center. The study is part of the Page Center’s 2023 call for research proposals on digital analytics.

Here at the Page Center, we’ve studied and written a lot about CSR (corporate social responsibility). Can you explain CDR (corporate data responsibility)?

Lim: That’s a good question. We are also curious about the use of the term. I found a couple of scholars, especially European scholars, who have proposed the term CDR. Defining it is part of this project. We believe corporate data responsibility might be one part of corporate social responsibility. In the future, for example, corporations will need to be proactive in preventing negative incidents from happening and really embrace this idea of CDR.

Shin: CDR is increasingly important because we know the scandal in which Facebook shared customer data with Cambridge Analytica. We know about the whistleblower, Frances Haugen, and Facebook’s misuse and manipulation of consumer data. We know the case of TikTok, which continues to drive people to radicalized or extreme content. We know there is that kind of risk when a corporation uses data. And now more and more companies are utilizing AI and embedding it in their services, and they are publicizing or releasing statements on how they use, store and analyze data responsibly.

There are a lot of areas involved in this line of work – communication, law, technology. Can you expand on the interdisciplinary aspects of the research, as well as its importance to your project?

Lim: Making people understand something so they are prepared for potential risk is an important role of public relations. Not just from a consumer privacy protection standpoint, but also for protecting brand equity. We are approaching this project not as AI scientists, because nowadays computer scientists are studying and practicing what one might call algorithmic ethics. Some of the ethics of an algorithm can be explained on a corporate site, but it is very hard for ordinary people to understand. Too much hinges on high-profile scandals like Cambridge Analytica and TikTok. It might be really difficult for companies to balance their practices with real communication. So, we are approaching this issue from a different perspective to learn what kinds of factors might affect people's understanding of AI. Also, what are their perceptions and their potential attitudes and beliefs? What kinds of factors among algorithmic ethics and privacy experience may be related to trust?

Shin: We started this project from a micro perspective, looking at consumers and how they process information from an AI algorithm – like a news recommendation system or a product recommendation system. The model we created has implications at the institutional level. For example, how can a company better explain its use of AI to the public or to consumers? How can it help consumers understand that it is doing the right thing with their data? How does the user understand that when they open something with an AI algorithm, data is being collected? What is their actual understanding of the practice? Because it's a very complicated matter.

What’s the origin story of this project?

Lim: The research idea was based on our observation of what was happening around the world. We don’t receive personalized recommendations from just big tech companies anymore, but also from small companies and even academic publishers. It’s not just Amazon, Google and Netflix. Retailers like Zara, the Spanish clothing company, recently gave store managers around the world access to its massive customer data collection, including geolocation data. So, in real time, a local store manager can adapt what they are selling.

Shin: I've been inspired by the work of Shyam Sundar [director of the Center for Socially Responsible Artificial Intelligence and faculty member at the Donald P. Bellisario College of Communications at Penn State]. He talked about human-AI interaction and emphasized the importance of user processing. When I started, I worked for a university in Korea, and in Korea there is a similar problem with personalized news provided by AI algorithms. Of course, it's very convenient and beneficial for people to have personalized news, but at the same time it creates an echo chamber. It’s a rabbit hole or a bubble, and that’s a problem. There’s a great deal of public criticism, legal issues, as well as market issues. When I was in Dubai and Abu Dhabi, there was the same problem. The government was using an AI algorithm to set agendas and frame the news. So, I think CDR is a very important concept to understand. I'm very excited about this work and giving a foundation to CDR, something that I think is the next generation of CSR.

What are the steps to making this project happen? What’s your timeline?

Lim: We have been discussing the next steps in conducting our research survey. The survey is very interesting because we try to represent a range of companies – not just big tech, but also smaller companies. But in this study we want to test the idea based on what the big tech companies are doing, like Amazon, Spotify and TikTok. Those companies represent the best in what they do, whether it’s video sharing, music streaming or e-commerce. So, we selected those diverse companies, and then we will ask people about their perceptions.
I think it might also be very beneficial to consider some practical approaches, adding insight into how the research findings can be developed into practical ideas, like how to build a brand index or an AI trust index.

Shin: I'm excited about this project because the work is new but relates to work I’ve done over the past several years and builds upon previous findings. We are very confident in producing meaningful findings. We want to build a foundation for companies so we can help them conduct their business and collect and store corporate data responsibly.