Learn the results of the Baymard Institute’s year-long, extensive usability research study on how users construct product search queries and navigate the results, specifically on e-commerce sites. Despite testing multi-million-dollar sites, more than 700 search-specific usability issues arose during testing. In this talk, Baymard will cover 6 common search UX pitfalls where e-commerce sites’ search UI and logic often misalign with user expectations and user behavior.
Intended Audience
Anyone working in e-commerce search who is looking for actionable takeaways: developers, designers, product managers, etc.
Attendee Takeaway
You’ll walk away with 6 very specific and actionable “e-commerce search UX” takeaways based on thousands of hours of large-scale e-commerce search UX research.
Speaker
Kathryn Totz, Senior UX Researcher, Baymard Institute
Transcript:
[Kathryn Totz]
Hi, and welcome to 6 UX Lessons from Testing Ecommerce Search Experiences. My name is Kathryn Totz and I’m a UX Researcher at Baymard Institute.
Now, since 2009, Baymard has conducted more than 71,000 hours’ worth of large-scale research on all aspects of the e-commerce user experience, including search, across desktop, mobile web and mobile apps. And when we’re not doing our own large-scale research, we do client work for companies like these to provide ideas for how to improve their e-commerce UX. Before we dive into some of our research findings for search, let me first quickly introduce Baymard’s research methodology and the overall structure of our research foundation.
At Baymard we perform large-scale UX testing using a variety of methodologies, including large-scale qualitative usability testing with more than 1,900 user sessions, in-lab eye-tracking studies and quantitative studies spanning nearly 15,000 participants. Throughout our testing, we aim not necessarily to say anything specific about the test sites themselves, but rather to uncover overall patterns of user behavior. And then from there, we determine which general design patterns work and which ones don’t across the e-commerce user experience.
Throughout all of our testing, users have run into more than 11,000 specific and preventable usability issues. And we’ve distilled all of these into almost 600 best-practice UX guidelines that show the exact design patterns we’ve observed in testing to consistently cause issues for users, as well as the design patterns that consistently solve them.
And then we’ve taken all 600 of our weighted guidelines and used them to manually benchmark some of the largest and most recognizable e-commerce sites across the United States and Europe, leading to the world’s most comprehensive e-commerce UX benchmark database, with more than 40,000 UX performance scores and 35,000 best and worst practice examples. And all of this we bundle into our Baymard Premium Research Catalog. So, this is where the findings of today’s presentation come from.
This brings us back to Baymard’s research findings for search usability. So, if we take a look at our overall search benchmark, you can see that the overall state of e-commerce search is actually a user experience well below what might be expected, with around 61% of all sites performing below an acceptable search performance level. Now that said, this still-mediocre overall performance is actually an improvement compared to earlier benchmarks from, say, 2017, where over 85% of sites were below that acceptable overall search UX performance range.
Now, while users may still be able to use search on these sites, this performance score is a pretty clear indication that the e-commerce search experience isn’t nearly as easy to use as it should be, and that users’ search success rates can be dramatically improved on most of these sites, even when we’re looking at these top 60 major e-commerce sites, which have an abundance of resources.
And so, today we’ll share some of the most important ways that sites can begin to improve their onsite search tools.
Of course, the biggest reason to improve search as part of the overall e-commerce user experience is that it’s such a big part of many users’ product-finding strategy. And when they perform a search and receive no or poor results, how can they know if it’s because the site doesn’t carry the products they’re looking for, or just because the search engine doesn’t support the kind of query that they entered?
In our large-scale UX testing, we observed that in this situation, most users will conclude that the site actually doesn’t carry the products they’re looking for, even when the site actually does. And these users will typically end up then abandoning the site. And not only that, but this also damages the user’s perception of the brand going forward, because they’ve now developed a kind of misunderstanding of what the site carries as well as how it performs, which is going to make them less likely to return in the future.
Generally speaking, users are going to assume that all matching products are returned in a search. So, if they don’t find anything they like, that makes them more likely to leave the site. So, just as important as returning a few items that match the expected results is making sure that the results don’t leave anything out that could be relevant.
So to this end, one of the most important ways to help users use search to find what they want is to make sure your search supports the most common types of search queries. And the first of these is product type searches.
On 60% of e-commerce sites, according to our benchmark, search requires users to type the exact same product type jargon that the site itself uses. So, it’ll fail to return relevant products for alternate terms or synonyms. So, for example, using the term “blow dryer” when “hairdryer” is what is used on the site, or “multifunction printer” versus “all-in-one printer”. And this statistic is actually unchanged since 2017.
So, in the example on the left here, this user’s search for “writing table” returned some relevant results, but not the 100-plus results that the term “writing desk” would have returned.
And so, when the user terminology is not the same as the site terminology, a search tool that doesn’t take this into account is going to give users who intend to find the same products, but use different words to describe them, very different experiences. And some users are going to receive fewer or lower-quality results.
So instead, to support product type searches, search needs to integrate a synonym dictionary covering all category and product names. This just helps ensure that search is going to actually honor users’ intent rather than their knowledge of site- or industry-specific terminology. And in particular, sites should account for things like regional or international differences, non-domain-expert terminology and outdated tech terminology, as well as just general synonyms.
Now, this is obviously going to involve a manual process in a lot of cases, but users may easily abandon the site if they mistakenly believe the site doesn’t carry what they’re looking for, just based on that difference in vocabulary.
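To make this concrete, here’s a minimal sketch of query-time synonym expansion, assuming a simple keyword-based search backend; the dictionary entries, term mappings and function names are purely illustrative, not taken from any particular site or search engine.

```typescript
// A minimal sketch of query-time synonym expansion (illustrative data only).
const synonymDictionary: Record<string, string[]> = {
  "hair dryer": ["blow dryer", "blowdryer", "hairdryer"],
  "all-in-one printer": ["multifunction printer", "mfp"],
  "writing desk": ["writing table", "bureau"],
};

// Build a reverse lookup so any variant maps to the site's canonical term.
const canonicalTerm = new Map<string, string>();
for (const [canonical, variants] of Object.entries(synonymDictionary)) {
  canonicalTerm.set(canonical, canonical);
  for (const variant of variants) {
    canonicalTerm.set(variant, canonical);
  }
}

// Expand a raw user query so that "writing table" also matches "writing desk".
function expandQuery(rawQuery: string): string[] {
  const normalized = rawQuery.trim().toLowerCase();
  const canonical = canonicalTerm.get(normalized);
  // Search on both the user's wording and the canonical site terminology.
  return canonical && canonical !== normalized ? [normalized, canonical] : [normalized];
}

console.log(expandQuery("Writing Table")); // ["writing table", "writing desk"]
```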
We’ve also observed in testing that users often include additional qualifiers in their search query to look for products with very specific attributes. So for instance, a query like this can actually be understood as a product type search with two particular features. And it’s important that the search engine understands that these included features are not on the same level of the query as the product type itself, but rather serve to narrow or modify that product type. So in this example, results for “red shoes” or “knitted blanket” are not going to be relevant, even though a rudimentary search tool might identify those as matches based on the terms that are used in this query.
To further illustrate, here’s an example from one of our real mobile test sessions, where this user used a kind of feature search in an attempt to home in on the very specific details that she wanted in an office chair. So, she searched for “office chair”, “blue casual”, where blue is that feature. But as you can see, there was only one result and it was completely irrelevant to her search for office chairs.
During feature searches, users have this expectation that entering these product attributes will essentially allow them to filter the search results to view the suitable ones, which obviously in this case was not successful at all.
In effect, the search tool should recognize the product type as separate from these features or modifiers, and then use the features essentially as product filters. In fact, there’s often a direct relationship between the features that users search for and the product-specific filters that you likely already have in place. And in many cases, the search tool can react to this kind of search by actually pre-applying the filters to the category that matches the user’s product type search.
And in testing, we observed that this approach definitely provided an additional level of transparency into the search logic for users. And it also makes it easy for them to modify or adjust their search parameters.
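As a rough illustration of that logic, here’s a minimal sketch that splits a query into a product type plus feature modifiers and maps those modifiers onto existing filters; the category names, filter vocabulary and sample query are assumptions for illustration only.

```typescript
// A minimal sketch of splitting a query into product type + feature filters.
interface ParsedQuery {
  category: string | null;
  filters: Record<string, string>;
}

const knownCategories = ["office chair", "blanket", "shoes"];
const filterVocabulary: Record<string, string[]> = {
  color: ["red", "blue", "black"],
  material: ["knitted", "leather", "mesh"],
};

function parseFeatureQuery(rawQuery: string): ParsedQuery {
  let remainder = rawQuery.trim().toLowerCase();
  const filters: Record<string, string> = {};

  // Pull out any terms that match existing filter values...
  // (a real implementation would tokenize rather than substring-match)
  for (const [filterName, values] of Object.entries(filterVocabulary)) {
    for (const value of values) {
      if (remainder.includes(value)) {
        filters[filterName] = value;
        remainder = remainder.replace(value, " ").replace(/\s+/g, " ").trim();
      }
    }
  }

  // ...and treat what is left as the product-type part of the query.
  const category = knownCategories.find((c) => remainder.includes(c)) ?? null;
  return { category, filters };
}

// "blue mesh office chair" -> category "office chair" with color/material pre-applied
console.log(parseFeatureQuery("blue mesh office chair"));
```

The key idea is the split itself: the product type decides which category to show, and the remaining feature terms become pre-applied filters the user can see and adjust.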
Now, whereas features have very clear definitions, themes have somewhat softer boundaries, but they’re nevertheless a very common modifier for users searching for particular items. So, these would be, for example, searches like a “living room rug”, a “spring jacket” or a “retro lamp”, which don’t necessarily have a very concrete definition of what qualifies as, say, retro. In our overall search UX benchmark, 46% of sites don’t support this kind of thematic search query. And that makes it harder for users to translate the kind of product they’re thinking of in their head into words that they can search for and that the tool is going to understand, helping them find those products.
Again, despite being an algorithmic tool, it’s important for search to respond to these kinds of real-world themes, use that vocabulary and return relevant results, even for these less concrete terms. Now, clearly a great deal of interpretation is going to be required to support these kinds of thematic searches, both in the meaning of the actual query itself and also in the internal tagging of products, but it’s a vital step to make sure that search responds accurately to some of the most common query types that users are likely to enter.
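One way to approach this, sketched below under the assumption that products carry editorially assigned theme tags, is to match the softer, theme-like parts of a query against that tag vocabulary; the products and tags shown are made up for illustration.

```typescript
// A minimal sketch of thematic search via product theme tags (illustrative data).
interface Product {
  id: number;
  name: string;
  themeTags: string[]; // e.g. "retro", "spring", "living room"
}

const products: Product[] = [
  { id: 1, name: "Arc Floor Lamp", themeTags: ["retro", "mid-century"] },
  { id: 2, name: "Lightweight Denim Jacket", themeTags: ["spring", "casual"] },
  { id: 3, name: "Wool Area Rug 200x300", themeTags: ["living room"] },
];

// Match the theme-like parts of a query against the tag vocabulary.
// In practice this would be combined with the product-type matching above.
function thematicSearch(query: string): Product[] {
  const terms = query.toLowerCase();
  return products.filter((p) => p.themeTags.some((tag) => terms.includes(tag)));
}

console.log(thematicSearch("retro lamp").map((p) => p.name));    // ["Arc Floor Lamp"]
console.log(thematicSearch("spring jacket").map((p) => p.name)); // ["Lightweight Denim Jacket"]
```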
Now, even when supporting the most common query types, users may still not find exactly what they were hoping for from the search results page. In practice, users might end up with too few results, maybe because their original query was too narrow; or maybe they have too many results because their search was too broad. It’s also possible that the results are just not what they expected, and so they want to reformulate their query to try a different strategy.
And in our large-scale UX testing, we’ve observed that users revised their search query an average of 2.2 times, going through multiple iterations of a query before settling on a search results page to explore. So, clearing users’ search queries each time they are submitted actually makes it a lot more difficult to use the search feature, because in order to make that iteration, users are going to have to retype their query again and again. And of course this is going to make it take longer for them to use search to find products. And in some cases it can even nudge users to abandon using search as a product-finding strategy.
So, to support users’ natural inclination to revise that initial query, search results pages should persist the previous query within the search bar, which again just helps users remember how they had previously searched and lets them add additional parameters to what’s already there. Additionally, the original query should be presented elsewhere directly on the page, so that if users are distracted while they are changing that search query, they still have that reference.
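As a small illustration, here’s a sketch of how a results page might restore the submitted query both in the search bar and elsewhere on the page, assuming a conventional “?q=” URL parameter; the element IDs are hypothetical.

```typescript
// A minimal sketch of persisting the submitted query on the results page.
function restoreQueryOnResultsPage(): void {
  const query = new URLSearchParams(window.location.search).get("q") ?? "";

  // Keep the previous input in the search bar so users can refine it in place...
  const searchInput = document.querySelector<HTMLInputElement>("#search-input");
  if (searchInput) {
    searchInput.value = query;
  }

  // ...and also echo it elsewhere on the page as a persistent reference.
  const heading = document.querySelector<HTMLElement>("#results-heading");
  if (heading && query) {
    heading.textContent = `Search results for "${query}"`;
  }
}

document.addEventListener("DOMContentLoaded", restoreQueryOnResultsPage);
```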
Another component of a good search tool that is often very top of mind for our UX audit clients is robust autocomplete functionality. Autocomplete keyword query suggestions, where users are provided with a list of potential search queries based on their initial input, are pretty ubiquitous now on e-commerce sites, which really makes them a web convention. Sites without any kind of autocomplete search suggestions are pretty rare, but the feature does remain unsupported by some sites.
In fact, users have become so reliant on autocomplete as a key part of their product-finding process that during our large-scale testing, we observed that breaking with this expectation actually caused users to believe that the site search was outright broken. And while some users spent additional time to fully type out and submit their query in that, you know, kind of old-fashioned way in the absence of autocomplete, others at this point actually abandoned search and instead turned to navigation to find what they were looking for.
Long-term exposure to the kind of autocomplete available in search engines and on other e-commerce sites means that most users simply expect it to be there. So for instance, 78% of subjects in our latest rounds of mobile testing relied on autocomplete.
However, it’s not enough to simply support autocomplete suggestions, but the suggested results themselves have to be highly relevant and valuable.
And in particular, in testing, we observed issues when sites’ autocomplete features only suggested products and not actual queries. When query suggestions are lacking, users can be pushed into a specific product too soon in their product-exploration process, when they’re just beginning to consider the variety that the site has to offer.
Keyword query suggestions are not only the baseline expectation that users have when using search autocomplete; they also provide a variety of benefits: reassuring users that the site has results for all of the suggested queries, providing an overview of the diverse options that the site carries that they might not have even thought of, and guiding them to an appropriately specific results page. Effectively, autocomplete suggestions are not just a way to save the user from typing out their entire query. In fact, in several instances in testing, we’ve observed the suggestions actually slow users down, because they stop typing and read those suggestions that are displayed.
However, the end results are much more effective because the autocomplete helps users to formulate better search queries. So, those extra few seconds invested in improving a query upfront will pay off through better search results in the long run.
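To illustrate the difference from product-only suggestions, here’s a minimal sketch of keyword query suggestions drawn from a precomputed list of popular queries; the query list, matching and ranking are simplified assumptions rather than a recommended implementation.

```typescript
// A minimal sketch of keyword query suggestions for autocomplete (illustrative data).
const popularQueries: string[] = [
  "office chair",
  "office chair with lumbar support",
  "office desk",
  "offset umbrella",
];

function suggestQueries(input: string, limit = 5): string[] {
  const prefix = input.trim().toLowerCase();
  if (!prefix) return [];

  // Prefer prefix matches, then fall back to substring matches,
  // so short inputs still surface a diverse set of query ideas.
  const prefixMatches = popularQueries.filter((q) => q.startsWith(prefix));
  const substringMatches = popularQueries.filter(
    (q) => !q.startsWith(prefix) && q.includes(prefix)
  );
  return [...prefixMatches, ...substringMatches].slice(0, limit);
}

console.log(suggestQueries("offi"));
// ["office chair", "office chair with lumbar support", "office desk"]
```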
Now, when it comes to search queries, it should be no surprise that users aren’t perfect spellers, but they really shouldn’t have to be when it comes to getting good search results. Users on both desktop and mobile are prone to making spelling errors. And on mobile, we actually observed that they’re even more common because of the small mobile keyboard. Yet despite their frequency, 27% of our UX benchmark sites don’t yield useful results if users misspell just a single character in their search for a product title.
And when users receive no or poor results from a search, even one that contains some errors, they might not realize their mistake, and then they’re going to misinterpret the lack of results as the site not having what they need.
So, to illustrate with another example from testing, this mobile user typed too quickly and she accidentally misspelled the word “backpack”, which yielded no autocomplete results. And that resulted in the need to go back and revise her query to include that missing letter. Now, from that initial misspelling, it’s pretty clear to us as humans what her intended query was. And so, a robust search tool needs to also be able to make these inferences, even when there are spelling errors and typos present.
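One common way to make those inferences, sketched below as an assumption rather than a prescription, is to compare query terms against a known vocabulary using Levenshtein edit distance and accept close matches; the vocabulary and the two-edit threshold are illustrative.

```typescript
// A minimal sketch of typo tolerance via Levenshtein edit distance.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

const vocabulary = ["backpack", "blanket", "office chair", "hair dryer"];

// Map a misspelled term to its closest known term if it's within two edits.
function correctTerm(term: string): string {
  let best = term;
  let bestDistance = 3; // only accept corrections within 2 edits
  for (const candidate of vocabulary) {
    const d = editDistance(term.toLowerCase(), candidate);
    if (d < bestDistance) {
      best = candidate;
      bestDistance = d;
    }
  }
  return best;
}

console.log(correctTerm("backpck")); // "backpack"
```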
So, an ideal search tool will be smart enough to catch basic misspellings and typos, and then still return the same accurate and relevant results for the intended query as if it had been spelled correctly. This obviously is going to give users the most seamless experience, allowing them to continue to those anticipated results without having to rethink and retype their query.
And finally, we’ll touch on some of our most important research findings for optimizing on-site search for mobile.
Now, interestingly, we’ve observed across multiple rounds of desktop and mobile testing that users are more likely to use search as a primary strategy on mobile: 60% versus 40%. Many users find mobile sites overall more difficult to use because the menus are often condensed and hidden by default, there’s not as easy an overview of the navigation, and of course, simply targeting navigation links can be a lot more difficult on mobile interfaces compared to desktop.
Because of this increased inclination towards search, it’s especially important for search to be findable and easy to use on mobile. And one issue we’ve observed for mobile search is when the search tool doesn’t display a submit button for completing a search. Now, users can use the mobile keyboard to submit a search, but this is often not the preferred method, and a large proportion of mobile users are going to pause and then try to find that submit button, causing a significant delay.
And you can see in this example from testing just how troubled this user was by the lack of that search button.
Instead, of course, including a clear submit button gives users the option they’re looking for, and just makes sure that there’s no delay in users submitting that query.
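For illustration, here’s a minimal sketch of a mobile search form built with plain DOM APIs that pairs the input with an explicit, visible submit button; the “/search” action and labels are assumptions.

```typescript
// A minimal sketch of a search form with an explicit submit button.
function buildSearchForm(): HTMLFormElement {
  const form = document.createElement("form");
  form.action = "/search";
  form.method = "get";

  const input = document.createElement("input");
  input.type = "search"; // also gives mobile keyboards a "search"/"go" key
  input.name = "q";
  input.placeholder = "Search products";

  // A clearly visible submit button, so users aren't forced to rely on
  // the keyboard's submit key alone.
  const button = document.createElement("button");
  button.type = "submit";
  button.textContent = "Search";

  form.append(input, button);
  return form;
}

document.addEventListener("DOMContentLoaded", () => {
  document.body.appendChild(buildSearchForm());
});
```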
Now, mobile search is also prone to issues due to the smaller mobile viewport, with things like sticky elements, navigation options and even the mobile keyboard itself obscuring autocomplete suggestions and search results. So, just keep that in mind when placing these potentially competing elements, to help users focus on search and prevent any overlapping issues.
Finally, of course, due to that mobile viewport, users are also prone to issues with readability and with accurately targeting links, again especially within the search autocomplete. So, using a large font size, wrapping lengthy query suggestions rather than truncating them, and just ensuring adequate hit areas and spacing helps prevent some of these potential mobile interaction issues.
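And as a final sketch, here’s one way those mobile readability and hit-area considerations could be expressed as simple CSS-in-JS style rules applied to autocomplete suggestions; the specific pixel values and selectors are illustrative rather than benchmark-derived.

```typescript
// A minimal sketch of mobile-friendly styles for autocomplete suggestions.
const mobileSuggestionStyles: Partial<CSSStyleDeclaration> = {
  fontSize: "16px",          // large enough to read comfortably on mobile
  lineHeight: "1.4",
  minHeight: "44px",         // a commonly cited minimum touch-target size
  padding: "10px 16px",      // extra spacing enlarges the hit area between items
  whiteSpace: "normal",      // wrap lengthy query suggestions...
  overflowWrap: "break-word",
  textOverflow: "clip",      // ...rather than truncating them with an ellipsis
};

// Apply the styles to each suggestion element in the autocomplete list.
function styleSuggestions(listEl: HTMLElement): void {
  listEl.querySelectorAll<HTMLElement>("li").forEach((item) => {
    Object.assign(item.style, mobileSuggestionStyles);
  });
}
```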
So, that covers six of Baymard’s essential research findings for improving the e-commerce search experience. But of course, a best-in-class search tool is going to require a lot more than just these considerations. Now, through our research on desktop and mobile, we’ve identified almost 50 other important search parameters that all need to align in order to ensure the best possible search experience. And these are available in our Complete Research Catalog in Baymard Premium.
And we also have almost 600 other UX research best-practice guidelines, spanning the entire e-commerce experience, as well as our audit services. So, if you’re interested in learning more about our research findings, I hope you’ll check us out at www.Baymard.com.
Thank you so much for sharing your time with me today learning about some of our research findings for improving e-commerce search. I hope you found some valuable insights.