‘How does this apply to the real world?’ by senior research fellow Lee Ahern

November 5, 2020 • Lee Ahern


As a researcher and educator, I seldom get to share my views.

Just kidding.

As a researcher and educator, I share my views all the time. That is what we do. We publish research regularly (or try to) that reflects how we believe “the world works” (based on evidence and sound reasoning, we hope).

We stand up in front of students to explain concepts and theories and how we think “the world works.” More recently, we are not standing in front of students at all but sitting at home in front of our laptops.

But we realize that no matter how rigorous and transparent we are, we do not have a monopoly on knowledge. Theories evolve and even the “laws of nature” can change as science advances. Societies and cultures advance, and new realities emerge.

It was a unique opportunity, therefore, when Penn State World Campus invited me to do a webinar exploring how and why our master’s degree in strategic communication might provide students with skills and knowledge that will be useful in the “real world.”

When people talk about “real world” skills, they are usually talking about applied knowledge, such as how to use a specific piece of software or create a particular type of content. These kinds of skills are important, but you don’t need graduate school to learn them. There are seemingly limitless training resources at the end of a simple Google search.

So if we don’t have theories and concepts that reflect “the truth” with mathematical certainty, and we don’t need to do platform- and industry-specific skills training, how does graduate school “apply to the real world”?

A number of things came to mind:

  • We teach concepts that can be applied across situations and issues.
  • We teach skills that can be brought to bear across different industries and contexts.

But after some thought, I concluded that the most important way higher education applies to the real world is by focusing on things the real world doesn’t. And the best example of this aligns perfectly with the mission of the Page Center: ethics.

In the real world, the operative question is whether you can do something, not whether you should. Higher education does not teach you what’s right and what’s wrong, but it does provide the opportunity to explore the ethical dimensions of strategic communication, along with useful frameworks for considering them.

For example, in my recent World Campus webinar, this ethical lens was turned toward the emerging area of algorithmic content curation, recommendations and ad targeting. The combination of big data and artificial intelligence has created a new media environment fraught with new ethical questions. The discussion, moderated by Patrick Plaisance, the Don W. Davis Professor in Ethics in the Bellisario College of Communications, explored perspectives developed by Christian Sandvig (University of Michigan) for considering the implications of artificial intelligence-driven media and advertising.

This deeper dive into a more expansive ethical toolbox is exactly the kind of thing higher education provides the luxury of discussing. More likely in the real world would be a discussion of how much higher the click-through and conversion rates might be for “more relevant” content and ads.

This is not to say that practitioners don’t think about ethics, only that priorities are different, and dedicating a lot of time and brainpower to sorting out the ethical dimensions of new technology is not always possible.

First up was a discussion of algorithmic determinism: the idea that exposure to curated content and advertising creates a mediated reality where “the algorithm knows best,” and accepting its repeated suggestions seems the surest path to happiness. In just a few years, people have become so reliant on digital navigation systems that they will follow recommended routes even when they know the way, assuming their automated map knows more about traffic conditions than they do.

While this is relatively harmless, and even positive for navigation given the potential to reduce traffic by re-routing travelers away from congested areas, it is worth considering whether other automated suggestions might be more ethically complex.

Does seeing the same jacket promoted to you on social media and through remarketing technology create a mediated reality in which you start to think the jacket must be the thing for you to wear? Ultimately, algorithmic determinism is a potential threat to autonomy. If and when the acceptance of digital recommendations and suggestions becomes too automatic, our active and conscious participation in making decisions is diminished.
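To make that remarketing loop concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration: an assumed starting purchase intent and an assumed per-exposure “nudge.” The point is not the values but the compounding shape of repeated exposure.

```python
# Toy model of the remarketing loop described above.
# ASSUMPTIONS (illustration only): a 2% starting purchase intent and a
# 30% relative "mere-exposure" nudge per repeated impression.
def remarketing_loop(initial_intent=0.02, nudge=1.3, impressions=10):
    intent = initial_intent
    for i in range(1, impressions + 1):
        intent = min(1.0, intent * nudge)  # each exposure compounds the last
        print(f"after impression {i:2d}: purchase intent ~ {intent:.1%}")

remarketing_loop()
# The jacket never changes; only the mediated reality around it does.
```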

Next was an exploration of “who is responsible” in the context of the co-production of knowledge. It is easy to blame the algorithm when we don’t like the results—too much hate speech, extremism and racism.

But algorithms are driven by the data that feeds into them. Most AI is useless and meaningless without data, and the data comes from us. So it is impossible to think about “who is responsible” without looking in the mirror. We can lament click-bait and ad-driven content farms, but they would not exist if people didn’t click. The co-production of knowledge provides an important ethical lens through which to view a lot of what is going wrong in the emerging media environment.
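A small simulation can show how literally the data comes from us. The click probabilities below are invented for illustration, and the “algorithm” is nothing more than a rule that shows whichever item we have clicked on more often; yet it is enough to let clickbait crowd out everything else.

```python
import random

random.seed(0)

# ASSUMED click probabilities, invented for illustration: clickbait
# draws more clicks even if readers value it less.
true_click_rate = {"in-depth report": 0.05, "clickbait": 0.30}
clicks = {name: 1.0 for name in true_click_rate}  # smoothed click counts
shows = {name: 1.0 for name in true_click_rate}   # smoothed show counts

for _ in range(10_000):
    # The "algorithm": show whichever item has the higher observed
    # click-through rate. No editorial judgment, only our past behavior.
    choice = max(true_click_rate, key=lambda n: clicks[n] / shows[n])
    shows[choice] += 1
    if random.random() < true_click_rate[choice]:
        clicks[choice] += 1

for name, count in shows.items():
    print(f"{name}: shown {count:,.0f} times")
# Clickbait ends up dominating the feed: the ranking simply reflects
# our clicks back at us.
```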

It can be disheartening to think about the negative impacts of new technology. It can make you nostalgic for the good old days of three broadcast television networks (four if you count PBS). But this can be a trap as well. The old media environment was not great for everyone; many voices were privileged while others were marginalized or excluded.

We can’t go back, so the discussion needs to be about how we move forward. With algorithms making more and more decisions for us, or to assist us, based on the data we feed into them, we have to decide collectively whether we like the results.

On a somewhat more hopeful note, we talked about the concept of algorithmic auditing. Reading an algorithm’s source code line by line is neither useful nor practical. Many systems run to millions of lines of code, with no one person understanding all of it, and many are tightly protected intellectual property.

But even if we had complete transparency and could somehow read all the code, it would not give us the whole picture. We have to let the programs run and learn on the data we shed through every aspect of our digital lives (and we are nearing the point where all of our behavior leaves a digital jet-trail).

But we have to consciously examine the output at the other end, and the impact it has on our lives, and think critically about the result. Is it a net good? Together as a society, we have to examine these outcomes and decide—that’s the idea of algorithmic auditing. And it’s not enough to be passive observers; we need to develop deliberate and deliberative processes for monitoring and optimizing the role AI plays in our mediated lives.
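As a sketch of what “examining the output” can look like in practice, here is a minimal black-box audit in Python. The targeting function, its zip-code proxy rule and the probe profiles are all invented for illustration; a real audit would probe a live system whose internals the auditor cannot read.

```python
import random

random.seed(1)

def black_box_targeter(profile):
    """Stand-in for an opaque system whose code we cannot read.
    The biased rule inside is invented so the audit has something to find."""
    show_probability = 0.5
    if profile["zip_code"].startswith("191"):  # hypothetical proxy variable
        show_probability -= 0.3
    return random.random() < show_probability  # True = shown the job ad

# Probe the system with matched synthetic profiles that differ only in
# the attribute under audit; judge the outputs, not the source code.
probe_groups = {"191xx zip codes": "19104", "other zip codes": "60601"}
for label, zip_code in probe_groups.items():
    shown = sum(black_box_targeter({"zip_code": zip_code}) for _ in range(10_000))
    print(f"{label}: ad shown {shown / 10_000:.1%} of the time")
# The gap between groups is the audit's finding. Deciding whether that
# gap is acceptable is the deliberative, societal part of the process.
```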

In a short time, we were able to identify some important ways higher education helps address real-world problems and, I hope, to provide an example of how the most applicable kind of knowledge is the kind the real world does not teach you to apply.

Lee Ahern has been a senior research fellow at the Page Center since 2014. He is an associate professor of public relations-advertising at Penn State's Bellisario College of Communications. He also leads the College's Science Communication Program.