
Building a UX Research Repository that Works
How I leveraged AI tools to improve team performance
My team had a problem…
We had:
Four years of research reports that were informing our roadmap.
An immense data lake of user information that we could learn from.
Relationships with colleagues across the company who reached out to us to be thought partners.
…but we still weren’t getting the value out of all of this that we should have been.
The problem was one of siloing and scale
Research was only accessible through researchers, so we spent hours searching, compiling, and summarizing instead of gathering new data.
Data was siloed and report-based, so cross-study analysis was nearly impossible.
Researchers held the knowledge, so when they left, so did much of what they had learned about our users.
Our team was not as efficient or effective as it could be.
Our needs
We reviewed the goals for building a repository and narrowed them down to four specific requirements:
- We needed a tool that could store not only reports, but also survey data, videos, and raw data from A/B tests and analytics.
- We were tired of the limited search functions in tools like Confluence or our Extranet, which were not built for our research and returned only documents as search results.
- We were increasingly working cross-functionally and wanted to put the data where everyone could query it.
- Anything we implemented had to take the same amount of time or less to build and maintain than we were already spending searching and sharing data on our own.
We started a process to search for the ideal tool that would meet these requirements:
- Go beyond the problem: We brainstormed all of the needs and problems our team had, no matter the area.
- Map problems to tools: We pulled together a list of available tools that could offer a repository option, and mapped them to the other problems as well.
- Say hi to AI: The tools that seemed to meet the most needs had one thing in common: they were powered by AI.
But that stuff is expensive...
We knew we would need to demonstrate the value of purchasing an AI tool when we already had other products where we could (kind of) store our data.
We knew the answer lay in the fourth requirement: net-neutral time to build and maintain.
We conducted a proof of concept using a free trial of a tool called Marvin to show our leadership that buying it would be worth it. During the trial, we:
Did a month-long diary study on the team to get a baseline for how much time we spent on research activities and on delivering insights.
Ran four studies, including interviews, focus groups, an A/B test, and the ingestion of a survey.
Fed a series of old reports into the tool.
Invited teammates into the trial to query the data with the directive, “try to break it.”
Proof of concept findings
We saved a lot of time:
Using the tool, we were able to decrease the time to complete a study by 15%.
It was easy to search:
Everyone who participated in the trial was able to query the data effectively.
All of our data could be stored:
There were no limitations on the kind of data we could put into the tool, and all of it was queryable.
We needed to onboard and train our team… and the model itself.
- I developed a series of trainings to onboard team members to the tool and ensure they used it properly.
This included:
Ethical use of AI.
How to check AI’s work.
Prompt engineering.
- I needed to prepare an LLM using only the data my team generated. To do this, I:
Added data into the tool a little bit at a time.
Tested it with standard prompts that had known answers (a rough sketch of this check follows the list).
Invited teammates to test with different prompts.
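Conceptually, each known-answer check was simple: keep a set of prompts whose correct answers we already knew from past reports, run them against the repository, and track how many came back right. Below is a minimal, hypothetical sketch of that loop in Python. The `query_repository` function is a stand-in for whatever query interface your tool exposes (it is not Marvin’s actual API), and the prompts and expected answers are purely illustrative.

```python
# Minimal sketch of a known-answer evaluation loop for a research repository.
# `query_repository` is a hypothetical stand-in for the tool's query interface;
# the prompts and expected answers below are illustrative only.

KNOWN_ANSWER_PROMPTS = [
    {
        "prompt": "What was the top usability issue in the 2022 checkout study?",
        "expected_keywords": ["coupon field", "error message"],
    },
    {
        "prompt": "How many participants completed the onboarding diary study?",
        "expected_keywords": ["12 participants"],
    },
]

def query_repository(prompt: str) -> str:
    """Placeholder for the repository's query call (not a real API)."""
    raise NotImplementedError("Wire this up to your tool's query interface.")

def run_known_answer_checks() -> float:
    """Return the share of prompts whose answer contains the expected facts."""
    passed = 0
    for case in KNOWN_ANSWER_PROMPTS:
        answer = query_repository(case["prompt"]).lower()
        if all(keyword.lower() in answer for keyword in case["expected_keywords"]):
            passed += 1
        else:
            print(f"Check failed: {case['prompt']}")
    return passed / len(KNOWN_ANSWER_PROMPTS)

# Example usage once wired up: print(f"{run_known_answer_checks():.0%}")
```

Tracking a percentage like this after each new batch of data is one way to watch whether accuracy holds up as the corpus grows.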
- I built out a series of documents to support the team in using the model.
Training and onboarding materials.
A library of prompts (an illustrative entry is sketched below).
Information on data security and ethical use, including FAQs for market-facing teams.
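To give a sense of what a prompt-library entry might look like, here is a small, hypothetical Python structure (not copied from our actual library) with reusable templates and placeholders the team can fill in:

```python
# Illustrative sketch of prompt-library entries; the names, fields, and wording
# are hypothetical, not taken from the actual library.

PROMPT_LIBRARY = {
    "cross_study_theme": {
        "description": "Surface themes about a feature across all studies.",
        "template": (
            "Across all studies that mention {feature}, summarize the top "
            "three user pain points and cite the study each one comes from."
        ),
        "notes": "Always ask for citations so the answer can be spot-checked.",
    },
    "discovery_kickoff": {
        "description": "Help a PM check what we already know before new research.",
        "template": (
            "What do we already know about {topic} from prior research, and "
            "what open questions remain?"
        ),
        "notes": "Good first query before requesting a new study.",
    },
}

def fill_prompt(name: str, **values: str) -> str:
    """Fill a template from the library with concrete values."""
    return PROMPT_LIBRARY[name]["template"].format(**values)

# Example: fill_prompt("cross_study_theme", feature="checkout flow")
```

A short notes field like this can capture when and how to use each prompt, which is the kind of guidance the onboarding materials need to carry.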
Final outcomes
After six months of using the tool, here are some top-line results:
My team can complete studies 25% faster
As we became more familiar with the tool, we were able to leverage its interview and analysis features to complete studies more quickly.
We help our partner teams work faster
Having the ability to query all data helps PMs complete discovery faster, and refine research questions with less back-and-forth with researchers.
From end to end, initiatives take an average of two fewer weeks to complete.
The reporting format within the tool cut the read time on our reports by 50%.
The team gets accurate results
The LLM we built delivered results with 96% accuracy on day one, a figure that climbed to 98% over the next two quarters.
The team is more satisfied
We ran a pre-post eNPS survey on satisfaction with our work. After full implementation of the tool, the team’s satisfaction with UXR went up by 8 points.