Fiona McEvoy is a writer, researcher, and founder of YouTheData.com, a platform for discussing the societal impact of tech and AI. With a background in the arts and philosophy, Fiona brings her perspective to topics like algorithmic bias, deep fakes, emotional AI, and what she loosely calls “algorithmic influence”: the way that AI systems impact our behavior. We sat down with Fiona to talk about some misuses of AI, the dangers of letting algorithms cultivate and curate our choices, and why “responsible AI” should hopefully become a redundant term in the future.
“We really shouldn’t talk about ‘ethical AI’”
AI itself is not ethical—people are. “We really shouldn’t talk about ‘ethical AI,’” Fiona said, “because AI is just a system. It’s built by people, it runs on data that comes from people, it’s deployed by people. And those people are responsible for the way it exists and the way it’s used in the world.”
How do people build AI responsibly? According to Fiona, it’s about making sure everyone involved in the process—across development, deployment, and use—consciously evaluates what they’re using and how they’re using it, and continually anticipates potential harms. The end goal is to be accountable to those impacted by the algorithm’s decisions.
“There are some decisions that AI really shouldn’t be making”
Where the impact of an algorithm’s decisions has social consequences, Fiona believes there must be diverse people involved at every step of the way. Sometimes that might lead to “just accepting that there are some decisions that AI really shouldn’t be making.”
As one example of potential misuse, Fiona has recently been thinking a lot about AI in hiring. Increasingly, video interviews are fed into algorithms that use the footage to judge whether candidates are motivated, or anxious, or enthusiastic. These systems are based on what Fiona described as “junk science”: the idea that facial expressions can be directly used to interpret emotions. As Fiona explained, “How my face expresses enthusiasm may be very obvious—but sometimes it may not be!” Furthermore, “the cultural and generational differences in the way we express ourselves are huge.”
Fiona finds the whole concept more than a little disturbing. “We already know that these systems can be horribly biased. Getting into a ‘brave new world,’ where cameras are trained on us—trying to constantly guess who we are from how we move, rather than what we say—is, I think, a problematic evolution.”
“It’s important that the ‘nudge’ techniques don’t turn into ‘shove’ techniques”
Fiona has been deeply interested in “how our choices are cultivated and curated by algorithms.” To a large degree, it’s very convenient to be shown just the right product on a site like Amazon or to get personalized search results on Google. Fiona compared this to having a tailored piece of clothing made: “You give away all your measurements, which you’d normally never do, because you know you’re going to get something that you’re going to like out of it.”
Yet, Fiona said, “it’s important that the parameters—the ‘nudge’ techniques—don’t turn into ‘shove’ techniques.” The algorithms are incentivized to make us more predictable—after all, if our tastes suddenly change, the AI’s suggestions become less accurate. It can be dangerous if “we start to act within the bounds that we’ve been shown” and end up with tunnel vision. When we increasingly allow third parties to mold and shape our choices, we are not only giving up our self-determination; the risk is that “this doesn’t allow us to evolve...it keeps us kind of static.”
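To make that feedback loop concrete, here is a toy simulation (the categories and parameters are invented for illustration, not drawn from any real recommender): when a system only resurfaces whatever we already clicked, the range of what we get shown tends to narrow on its own.

```python
import random

# Toy simulation of the feedback loop Fiona warns about: a recommender
# that suggests items in proportion to past clicks. Because users can
# only click what they are shown, early clicks compound over time.
# Categories and counts are invented for the sketch.

random.seed(0)
CATEGORIES = ["history", "sci-fi", "cooking", "politics", "poetry"]
clicks = {c: 1 for c in CATEGORIES}  # start with no strong preference

for step in range(200):
    total = sum(clicks.values())
    # The "nudge": recommend in proportion to what was clicked before.
    shown = random.choices(
        CATEGORIES, weights=[clicks[c] / total for c in CATEGORIES]
    )[0]
    clicks[shown] += 1  # the user can only engage with what they see

# Exposure drifts toward whatever happened to be clicked early on,
# rather than staying balanced across all five categories.
print(clicks)
```

Nothing in the loop pushes the user anywhere deliberately; the narrowing is a side effect of optimizing for predictable engagement, which is exactly why a gentle nudge can shade into a shove.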
Fiona shared a few examples of how society is “adapting to AI, rather than the other way around.” During the pandemic, more and more schoolwork passed through automated online grading systems. Kids realized they could game the system by simply inserting keywords from their textbooks, since the algorithms were only looking to match certain terms and didn’t care about anything else. Something similar is happening with AI in the adult world. If you’re preparing a resume, Fiona explained, “the advice now is don’t try to be interesting, don’t try to be funny, because it’s off-putting to the algorithms.”
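As an illustration of why such systems are so easy to game, here is a minimal sketch of a keyword-matching grader. The rubric and answers are made up for the example, and real products are more elaborate, but the failure mode Fiona describes is the same.

```python
# A minimal sketch of a naive keyword-matching grader. The rubric
# keywords and sample answers below are invented for illustration.

RUBRIC_KEYWORDS = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def grade(answer: str) -> float:
    """Score an answer by the fraction of rubric keywords it contains."""
    words = set(answer.lower().split())
    return len(RUBRIC_KEYWORDS & words) / len(RUBRIC_KEYWORDS)

# A thoughtful paraphrase that avoids the exact terms scores zero...
print(grade("Plants turn light into sugar using a green pigment"))  # 0.0

# ...while a bare list of textbook keywords gets full marks.
print(grade("photosynthesis chlorophyll sunlight glucose"))  # 1.0
```

Once students work out that only the keywords count, the rational move is to write for the matcher rather than for a reader—the homogenization Fiona describes below.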
The way people have done homework or applied to jobs has changed a lot over the last 50 years, regardless of AI. But Fiona worries something else is happening here. “Evolution is fine,” she said, “but evolution to make us all alike and sort of static and the same within a category doesn’t feel like evolution—it feels like homogenization.”
“This is largely an exercise in trying to anticipate harm”
As a writer and researcher, Fiona is approached by many startups and younger companies that want to do the right thing with AI and get ahead of the pack. “Those with an appetite for mitigating risk are quite wise to make sure that their processes are fit for ethical AI,” she said. Implementing AI responsibly “is largely an exercise in trying to anticipate harm.” In other words, teams should think through, extensively, how and where things could go wrong: Who uses the product, and how might they accidentally or deliberately misuse it? And when something does go wrong, who is responsible—who is the first person the team would pick up the phone and call?
It’s also important for companies that work on AI to incentivize employees to put up their hands and report when something is wrong. “And make sure that’s seen as a positive,” Fiona said, “rather than: ‘Let’s not complain about this right now, let’s get this product to market.’”
Not long ago, Fiona explained, there was no such thing as a Data Privacy Officer; now everyone knows this is an area that has to be taken very seriously. Hopefully, the same thing can happen with AI. “Responsible AI” needs to not feel “strange” or “extra,” Fiona said. “I almost hope the terminology goes away.”
For previous episodes, please visit our Resource Hub.
If you have any questions or would like to nominate a guest, please contact us.