Under our team's overarching goal of "make finding the right influencers as easy as possible", we had received customer feedback that our current filter sets were not grouped intuitively and that the experience of using them was "clunky".
The product manager I worked with took our usage data from Google Analytics and formatted it in Google Sheets for legibility. When graphed, the data clearly showed a long-tailed distribution, with some filters being essentially unused. We decided to run some qualitative observational studies with our internal services team (who use the software on behalf of our clients) to add context and further understand the cause of this distribution. As I observed our services team search for influencers, I took extensive notes in a Notion document that was shared with our tech lead and PM. I also observed some user trainings focused on "finding the right influencers", and shared findings in the same way. Because of this shared knowledge, when discussing potential solutions, everyone was aware of the research that had been done and could discuss it on an equal footing.
Simultaneously, we performed exploratory research around "best practices" for filtering UX, focusing specifically on situations where the ideal results set is a group of things rather than a singular ideal result. These findings (mostly screenshots, GIFs, and links to articles) were collated into a separate "inspiration" document, where the pros and cons of each solution were discussed via comments.
From this usage data and feedback, we saw that users were often making their searches too narrow, limiting the size of their results set prematurely. When discussing this with the product manager, we agreed that this was because of the large number of options offered. From this, we decided to greatly reduce our filter offerings to simplify the experience. This also gave us the opportunity to re-organize our filters so that the ones most often used were easily accessible.
An additional benefit of these studies was that they let us observe the "clunky" user experience firsthand and co-design solutions with our services team. To improve the experience of using the filters, we needed to make small fixes to the micro-interactions of our filtering UX so that people could more accurately apply filters and preview their results sets.
My peer in product and I discussed various groupings for the filter sets, as well as how to order them appropriately. These prototype groupings were then quickly tested with our internal team to ensure that we weren't making biased decisions:
We also conducted additional rounds of user testing on the positioning of applied filters relative to their groupings, ensuring that the relationship between the results set and the applied filters was easy to parse:
I worked hand in hand with engineering as they built the feature, offering feedback on interactions and discussing potential tradeoffs as they came up. Before our full release, we included a select group of end users in a "beta test" via a feature flag, allowing us to observe the filters' use directly and gather any last-minute feedback - which was overwhelmingly positive. The triad of me, our PM, and the lead engineer QA-ed the feature extensively before its full release.
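The beta-test gating described above can be sketched as a simple per-user allowlist. This is a minimal, hypothetical illustration (the names `isFeatureEnabled`, `filtersVariant`, and the flag shape are all assumptions; a real product would likely lean on a dedicated flag service), not the team's actual implementation:

```typescript
// Minimal sketch of per-user feature flagging for a beta test.
// All names and shapes here are hypothetical.
type UserId = string;

interface FeatureFlag {
  name: string;
  enabledFor: Set<UserId>; // explicit beta allowlist
}

const newFiltersFlag: FeatureFlag = {
  name: "redesigned-filters",
  enabledFor: new Set(["user-123", "user-456"]),
};

function isFeatureEnabled(flag: FeatureFlag, userId: UserId): boolean {
  return flag.enabledFor.has(userId);
}

// Beta users see the new filter UI; everyone else keeps the old one
// until the full release flips the flag on for all users.
function filtersVariant(userId: UserId): "new" | "old" {
  return isFeatureEnabled(newFiltersFlag, userId) ? "new" : "old";
}
```

The advantage of the flag approach, as described above, is that the new filters could be observed in real use and rolled back instantly if feedback was negative.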
Throughout the user testing process, we were asked for numerous additional features that would provide value (such as saved searches and "similar" influencer recommendations). We decided to table these until our core filtering functionality met a base level of usability, given their complexity both to implement and to use.
We identified which filters to eliminate (over 25% of the previous set) based on the feedback from our internal team and our analytics data. Of note, after further discussions with our marketing and sales teams, we chose to retain slightly more filters than originally planned, for strategic reasons.
When reorganizing the filters, we relied on our user testing of the potential groupings, eventually settling on a more intuitive solution where the four most commonly used filters were placed in a single menu. Additionally, we changed the filtering mechanism to allow users to type in numerical bounds for filters such as follower count and engagement rate, where there may be specific values they are trying to isolate.
To help mitigate the common pattern we observed during our observational studies, where users were prematurely reducing their results set, we added a "results set preview" to help coach users on how their filters would affect the number of results shown, with the hope that they would learn which filters were most valuable for their specific brand.
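The typed numeric bounds and the results-set preview can be sketched together: the preview simply counts how many influencers would survive the pending filter set before the user commits it. This is an illustrative sketch only (the `Influencer`, `FilterSet`, and `previewResultCount` names are hypothetical, and the real product surely filtered server-side rather than over an in-memory array):

```typescript
// Hypothetical sketch of numeric-bounds filtering plus a results-set preview.
interface Influencer {
  followerCount: number;
  engagementRate: number; // e.g. 0.042 for 4.2%
}

interface NumericBounds {
  min?: number; // either bound may be left open
  max?: number;
}

interface FilterSet {
  followerCount?: NumericBounds;
  engagementRate?: NumericBounds;
}

// A value passes if it sits inside the (possibly open-ended) bounds.
function within(value: number, bounds?: NumericBounds): boolean {
  if (!bounds) return true;
  if (bounds.min !== undefined && value < bounds.min) return false;
  if (bounds.max !== undefined && value > bounds.max) return false;
  return true;
}

// The preview counts matches without replacing the current results,
// so users see how narrow their search is becoming before committing.
function previewResultCount(influencers: Influencer[], filters: FilterSet): number {
  return influencers.filter(
    (i) =>
      within(i.followerCount, filters.followerCount) &&
      within(i.engagementRate, filters.engagementRate)
  ).length;
}
```

Showing this count as filters are staged is what gives users the feedback loop described above: they can see a filter collapse the results set to near zero and back off before committing.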
To further address the "clunky" user experience, we improved the experience of applying, viewing, and editing filters by giving them dedicated space in our header and following general best practices for applying filter sets. Notably, the filter reduction also allowed us to increase the size of our search bar and add additional helper text.
We saw a significant reduction in support tickets about our filters, in addition to unsolicited feedback from both internal and external users that the new filters were much easier to use. An unexpected result was that the number of searches run increased by 47%.
We continued to monitor our analytics data for filter "trends", and remained in close contact with our support and training teams to understand customer use cases for any filters we had deprecated.
Regarding the search increase, we wanted to further investigate user satisfaction with search results, as well as the value to the end user of using the filters and the search bar in conjunction with one another. This is an area of research I wish we had pursued at the concept phase of our project; our research was much more reactive than proactive, and we potentially could have delivered more value to the end user by focusing on the search experience rather than filtering.