UX researchers are AI ambassadors
We are often the front line of trust
I've been interviewing for UX research roles for three months now. In most cases, the companies I'm talking with are building AI-powered products. That has been exciting, as it's an area I genuinely enjoy. Talking about the intersection of AI and UX research has led me to reflect on what it means to be a UX researcher and how the role might change as we continue to interface with users of AI-powered products.
Most of my experience has been in B2B SaaS products, which means that while plenty of people, including myself, talk to the buyers of the products, fewer people talk directly to the users who are in the workflows day to day. Mind you, I'm not saying that customers can't be users, but it is not uncommon for the people purchasing a product to rarely use it themselves. This can create a disconnect between what a buyer is trying to accomplish and the user experience on the other end. I cannot tell you how many times a user has told me something in direct conflict with feedback we have gotten from the buyers at the same company. In these cases, my role becomes not only data collector and insights deliverer, but also diplomatic bearer of incongruent news.
I don't mind this role. I worked for a very long time in conflict and crisis management, so in many ways I am uniquely suited to handle it.
As more and more AI-powered tools come to market, buyers are chasing the value they promise. I've written before that many of these tools are going to market too quickly, without the UX research and design that would make them intuitive and delightful to use. The gap between the promise of these tools and the experience of using them seems even wider than with other tools I have researched. And when AI enters the equation, there are further considerations around trust and safety that must be addressed at the user level.
This means that UX researchers are now on the front line of not only a potential gap between what customers and users think about a product, but also what they think about AI. We are ambassadors for the product and for the technology itself, managing expectations about how the tools are positioned and how the company demonstrates its commitment to transparency and safety. While these conversations may happen with buyers, there is a high likelihood that users have never been exposed to the responsible AI conversation at all.
What this means for researchers is that they should not only be versed in personas, workflows, roadmaps, and other aspects of product development, but also be well aware of how their company is approaching AI as a tool. They should be able to point to any public-facing transparency statements, governance statements, or other policies around AI and its use in the products. Not being aware of these things could deepen a user's concern or distrust, because their primary contact, their window into the product, cannot assure them of the safety of their data or their experience using the tools.
It feels like a heavy responsibility. However, this is just another layer on top of the ways user researchers already serve as ambassadors for their companies. Someone who takes that role seriously has likely already done the work to cover these additional user needs.
If that's you, well done! If it's not, I highly recommend you do the work to get there. You will be a much stronger researcher for it.