Modernizing Blue Nile's product filters significantly reduced user-error rates and increased discoverability and conversions.
Blue Nile was an early innovator in the e-commerce game, introducing a customer-centric online buying model for diamonds, engagement rings, and jewelry.
For any company selling a product online, discoverability is key. Years of user input, backed by data analysis, revealed that our users were struggling to discover products and experiencing increased cognitive load when using our product sorting and filtering system.
Updating our product filters was an opportunity for Blue Nile to again demonstrate our dedication to our customers by making it faster and easier for users to find what they were looking for. It would also drive significant improvements in click-through rates, conversions, and brand loyalty.
As UX Designer and Manager of User Testing, under CX Manager Michelle Meyer, I led the effort to research, perform user testing, and design the new filtering system with input from our in-house design team.
Prior to starting this project, we had examined user reviews and Google Analytics data. Next, we developed a common language for communicating. Finally, we examined the current filter experience, where our competitors were failing or succeeding, and which e-commerce companies were driving filter innovation.
As we began this project, we quickly realized that we needed a common language to promote accurate communication between departments and stakeholders. So we started by defining each element of the filter/sort functionality and distributing a visual aid for quick reference.
AUDITING THE CATALOG PAGES
As we began gathering data from different departments, we realized that we needed to address filters on more than 350 catalog pages to create a master list of every filter set, name, and icon. During this process we discovered redundant catalog pages, missing filter sets, and errors in the filter counts, all of which would have to be addressed as we moved through this project.
RESEARCHING BEST PRACTICES
Fortunately for me, experts in the field of human–computer interaction have already done a great deal of testing and research surrounding the process of digital filtering and sorting functionalities. I was able to do a deep dive into the recommendations provided by the Nielsen Norman Group, The Baymard Institute, and others. I took copious notes and provided a summary of my findings to stakeholders.
UNDERSTANDING THE COMPETITION
With a firmer idea of what I needed to look for, both in terms of functionality and aesthetics, I performed a competitive analysis to understand how our competitors were filtering their products. Additionally, I looked outside of the jewelry industry for inspiration from e-commerce companies that I felt were meeting their users' needs in a particularly effective and attractive way.
SUBDIVIDING OUR USER AUDIENCE
As part of our company style guide, we already had two user persona "couples" for whom we'd defined demographics and psychographics. However, because both of our key personas interact with filters, we needed a different way to segment our target audiences. So instead of a biographical breakdown, we segmented our audience into the five types of shoppers to better predict their behavior.
While researching filter styles, I kept sketches and notes on the best approaches to the new filter design. So when it came time to focus on design, we already had a few strong sketches to work from and could spend our time on digital wireframe concepting.
After a brief whiteboard brainstorming session for initial filter design concepts — inspired by our research — we began wireframing for all three device types (desktop, tablet, & mobile) and exploring the application of brand style colors and patterns to the base wireframe models.
We completed a deep dive on menu placement, dropdown models, filter menus, sort functionality, applied-filter notifications, product counts, filter counts, and even type-area length to accommodate localization (language translation).
Given the simplicity of the design, availability of pre-existing assets, and the speed with which a high fidelity prototype could be assembled, we opted for a hi-fi prototype for testing purposes. I used a combination of Adobe and Axure software to model user interaction with the two most commonly used filters — metals and gemstones.
Due to the quantity of filters we had, we wanted to perform card sort testing to understand how users were defining jewelry-related terms, and how helpful those terms were. Once we had our final recommendations for our list of filters and filter sets, we performed usability testing on interactive prototypes to understand how our changes would be received.
OPEN CARD SORT
Performing an open card sort test allowed us to understand how users grouped similar filters. Our goal for this effort was to understand whether we had filter types grouped correctly, and how users named those groups. We worked with Optimal Workshop, which provides a card sort testing platform. I then watched the videos, documented the results, and presented findings to the design team.
CLOSED CARD SORT
Performing the closed card sort test was a way to further understand how users fit content into a pre-existing structure. Users were asked to place filters into pre-named groups specifically chosen to help us discover how users defined words or concepts with similar meanings, such as "ring" vs. "band", and how important those small differences were to users when filtering.
A/B USABILITY TESTING
Once we had our filter names and sets sorted out, we updated our prototypes and began A/B testing to analyze how the changes we were proposing would impact the user's success rate and brand impression. Testing the original filters against the new filters resulted in an overwhelmingly positive response — especially on mobile devices.
Once we had our testing results, we were able to move forward with completing final concepts for handoff to the development team.
While the underlying structure of what we were asking the development team to build was relatively simple in design, the sheer number of moving parts, multiplied by the complexity of the algorithms involved, provided quite the challenge.
Our overhaul of the product filter functionality required significant changes to the algorithms used to populate the filter and sort menus and the product results. I felt it was important to make sure the developers and quality assurance team had a point of reference, so I created a cheat sheet with use cases and visual examples for every if/else pathway the user might take. I also provided a list of icon use cases to help developers identify filters requiring additional imagery.
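The core of those algorithms can be thought of as two steps: narrowing the product list by the selected filters, then recomputing the counts shown beside each remaining filter option. A minimal sketch of that idea, with illustrative names and a deliberately simplified data model (not Blue Nile's production code):

```typescript
// Hypothetical, simplified product record for illustration only.
interface Product {
  metal: string;
  gemstone: string;
}

// Narrow the product list by the selected metal filters.
// An empty selection means "no filter applied" for that filter set.
function applyFilters(products: Product[], metals: Set<string>): Product[] {
  return products.filter((p) => metals.size === 0 || metals.has(p.metal));
}

// Recompute the per-option count displayed next to each gemstone filter,
// based on the already-narrowed product list.
function countByGemstone(products: Product[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const p of products) {
    counts.set(p.gemstone, (counts.get(p.gemstone) ?? 0) + 1);
  }
  return counts;
}
```

Each branch of logic like this (filter applied vs. not, zero results, combined filter sets) corresponds to one if/else pathway on the cheat sheet.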
The final product was a sleek, refined experience that allowed users to quickly discover products, change their sort order, and add their bling to their shopping bag with efficiency.
It was important to us that our users could not only intuit how the filters functioned, but also that the filter sets they used most frequently were readily available to them. However, we also wanted to keep our less frequently used filters accessible. To achieve this, we hid any filters that did not fit in a single streamlined row (dependent on device size) and placed them in a list of additional filter sets. This allows users to customize their filtering experience to meet their needs.
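The overflow behavior described above amounts to a simple partition: filter sets fill the visible row in priority order until the row width is exhausted, and everything after that point moves into the additional-filters list. A minimal sketch under those assumptions (names and widths are illustrative, not the actual implementation):

```typescript
// Hypothetical filter-set model: a label plus its rendered width in pixels.
interface FilterSet {
  name: string;
  width: number;
}

// Partition filter sets into the visible row and the overflow list.
// Once one set fails to fit, all later sets overflow too, preserving order.
function splitFilters(
  filters: FilterSet[],
  rowWidth: number
): { visible: FilterSet[]; more: FilterSet[] } {
  const visible: FilterSet[] = [];
  const more: FilterSet[] = [];
  let used = 0;
  for (const f of filters) {
    if (more.length === 0 && used + f.width <= rowWidth) {
      visible.push(f);
      used += f.width;
    } else {
      more.push(f); // shown in the additional filter sets list
    }
  }
  return { visible, more };
}
```

Recomputing this split on device resize is what makes the visible row "dependent on device size."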
On mobile, one of our greatest concerns was making sure the user always felt confident that their filters had been applied, as the filter UI takes up the full mobile screen. To achieve this, we added a placebo button allowing users to "Apply" their settings, even though the developers had coded the filters to apply on selection. The addition of this single non-functional button drastically reduced the user errors, frustration, and page abandonment we had seen in the past.
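In code terms, the placebo pattern means the filtering work happens at selection time, and the "Apply" button only dismisses the panel. A minimal sketch of that separation (class and method names are my own, not the shipped code):

```typescript
// Hypothetical model of the full-screen mobile filter panel.
class MobileFilterPanel {
  applied = new Set<string>();
  open = true;

  // Filters take effect the moment they are tapped.
  toggle(filter: string): void {
    if (this.applied.has(filter)) {
      this.applied.delete(filter);
    } else {
      this.applied.add(filter);
    }
  }

  // The "placebo" Apply button: no filtering happens here.
  // It only closes the panel, giving the user confirmation.
  apply(): void {
    this.open = false;
  }
}
```

Because `apply()` never touches the filter state, the button can never cause an inconsistency between what the user selected and what is applied; it exists purely to reassure.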
The new catalog filter system succeeded on multiple levels. Google Analytics page tracking showed a 25% reduction in bounce rate on catalog pages and an increase in click-through rates. Data tracking revealed that users were applying multiple filters per page, including those hidden in the "More Filters" dropdown menu. Error-case logging showed a reduction in the frequency with which users accidentally reset their filters and/or filtered their results down to zero products. Finally, compared to benchmark studies, user testing revealed a 169% improvement in filter discoverability. Overall, these changes resulted in an increase in conversion rates and improvements in user feedback scores.