The ethics of UX research in AI: questions the field needs to answer

I have so many questions

I held space for an unconference session at UX Y'all recently. My topic was AI ethics in UX research. If you don't know what an unconference is, here is an article that explains it pretty well. Essentially, participants drive the conversation or present on topics of their own choosing. At UX Y'all, only one session offered an unconference format. We were able to submit topic suggestions, and mine was selected for the session.

One of my favorite things about the session was that I didn't actually lead it. Instead, a group of amazing UX researchers joined me at the table with only the topic "AI ethics in UX research" as a prompt. I had some preconceived notions about what our conversation would be. I thought we might talk about the use of AI note-takers, data security in LLMs, or whether synthetic users are an ethical way to conduct research. But that wasn't the case at all.

Instead, we launched into a deep conversation about what it means to conduct the research that supports the creation of AI products that eventually replace (or could replace) human jobs. In the 45 minutes we had, we were really only able to reflect on and begin forming some critical questions that the field will need to grapple with as we move forward in research on AI-powered products. Here are a few of the higher-level ones we touched on.

  • If we are conducting research on a product that “makes people more efficient,” and that product might lead to those “more efficient” people losing their jobs, whose problems are we actually solving?

  • If we aren't solving the users’ problems, are we even conducting UX research?

  • If the purpose of UX is to make people's lives better, what are the ethical implications of conducting research on efficiencies that don't?

  • What does it mean to advocate for the user if both the user and the researcher are concerned that the suggested changes will lead to job loss?

  • If UX research is meant to help identify what could go wrong, where is the line between issues with the product and issues with how the product is leveraged? If what could go wrong is that people could lose their jobs, should we be flagging that?

  • What is the role of the researcher in ensuring there are humans in the loop? Is this our responsibility?

  • Since we are often on the front line introducing new products to users, what is our role in socializing AI as part of the future of work?

And finally,

  • Is helping build these types of efficiencies, knowing what we know, even ethical?

While we did discuss these questions and share examples of what we are seeing in our work, they are so large and so complex that we certainly didn't solve them in the 45 minutes we had together. What I feel we did do is define some very important conversations that the field must be having. And while this conversation took place among researchers in the context of UX research, I would argue that these questions certainly extend into all of UX, and could also be applied to product. I mean, I could say that they apply to everyone… but at that point the questions have become so abstract it would be hard to do anything with them.

The jaded part of me wonders if this is one of the reasons that UX professionals are finding themselves cut out of the loop when it comes to AI. Even when we aren't focused on AI-powered products, we often find ourselves stuck in a battle between business goals and user needs. As this conflict intensifies, is one way to avoid grappling with some of these ethical questions simply to take UX practitioners out of the picture? Mind you, I'm not saying that the other people we work with in tech aren't willing to have ethical conversations. What I'm highlighting is that the people whose job it is to hold that space are being removed from the room. Is the UX debt I've been writing about also indicative of a larger ethical debt?

I'd like to explore these things in the coming months. Stay tuned.
