Case workers are the front line for patients in need of social supports. They need to find the right resources, fast, to help people in emergent situations.

This case study outlines how I led the research to improve the search and referral experience in a social services resource directory.

After I launched our first NPS survey, I found that more than half of user comments focused on the search experience. Users were unhappy with resource quality, search functionality, and high rejection rates.

I partnered with my colleagues in Customer Success to map the unhappiest users to open escalations and churn risks among our buyers, and found significant potential to retain revenue while improving the user experience.

This extended to our public-facing resource directory as well.

I defined the users and their needs, and worked with my colleagues in product to develop the roadmap for improvements.

One of the challenges we faced was that users were turning to Google as a workaround when our search did not work for them. But how does a growth-stage startup compete with that? I conducted industry research to find an answer.

The answer: we don't. We had to be more than just search.

Product is a team sport, and I had a great team to work with. We used the data I had collected to develop a long-term plan that would progressively add functionality to the search and referral process.

The goal was to launch the first AI-powered social services resource directory.

My role in phase one was to determine where our data was failing us and make suggestions for improvement.

Once we started iterating, I conducted tests to ensure the AI-powered resource updates were meeting user needs, and worked to earn the trust of users who were concerned about AI being introduced into their workflows.

We saw fewer searches in which users applied filters, and we shortened the average time to complete the workflow by 22 seconds.

From there, I tested AI matching models to validate improved search relevancy and to ensure we could deliver results our users could trust.

My research also surfaced quick wins. In response to users wanting quick-click referral selections, we added a favorites feature.

Enhanced search launched with a 96% accuracy rate, and our PSAT score rose by nearly 1.5 points from pre-launch to post-launch.

In the midst of research for phase three, I was laid off in a restructuring. Here is an outline of what I was working on and what I planned to do had I been on the team for the duration of the project.

Why would I share a project I did not get to complete?

Because what I learned on this project is foundational to how I approach research on AI-powered products.

Next

Redesigning user onboarding