Case Study 02

Search & Results:
Removing 14 filters
to improve search

ACV's search results page (SRP) had 32 active filters. They'd been added one at a time over 4 years, each by a team convinced their filter was the one dealers really needed. The cumulative effect was a search experience so complex that dealers were calling support to ask how to find cars.

Role Lead Designer — strategy, research, execution
Timeline 4 months (discovery → launch)
Team 1 Designer, 1 PM, 5 Engineers, Data Science
Platform iOS, Android, Web
Status Shipped — currently in production

100,000 vehicles. 5 minutes. One decision.

At any given time, ACV's marketplace has 80,000–120,000 active vehicle listings. Dealers aren't browsing — they're sourcing inventory for their lots. They have specific needs: right make, right condition, right price band, right geography for transport costs. The SRP is the tool that helps them find those vehicles in the window between customer appointments.

Search performance directly correlates to auction participation. Dealers who find relevant inventory bid. Dealers who can't find relevant inventory leave — and come back less frequently.

The baseline

At project start: 32 available filters, 18 visible by default on mobile. Average session-to-bid conversion was 14%. Support ticket volume for "couldn't find vehicle" was the second-highest category after payment issues.

The filter system was designed for a power user who no longer existed.

The original SRP was built for desktop, for dealers who spent hours a day on the platform, and for an inventory size of ~20,000 listings. All three of those conditions had changed. 80% of sessions were now mobile. Average session length had dropped from 18 minutes to 6. Inventory had grown 5x.

But the filter system hadn't changed. Every feature request from a vocal power user had been implemented as a permanent filter. The result was a UI that required a scroll just to see all available filters — before you'd even started filtering.

The real problem wasn't filter quantity. It was filter architecture. Dealers weren't being served by the right filters in the right moment. They were being asked to configure a search experience every time, from scratch, with no memory of their preferences and no signal about what would help them find what they needed faster.

The decisions that shaped the redesign

Rejected: Keep all 32 filters, improve the layout. Dealers had come to rely on specific filters, and removing access creates friction.

Chosen: Data-ranked top-8 default, full library accessible. Pull filter usage analytics; the 8 most-used filters go in the default strip, everything else behind a "More filters" sheet. Nothing is removed — just reorganized.

Phase 2: Personalized filters based on purchase history. Show dealers filters relevant to what they typically buy.
Why we chose this: Usage data showed that 78% of all filter interactions touched 8 or fewer filters. The remaining 24 filters accounted for 22% of interactions — but they were distributed across thousands of niche use cases, not concentrated in a few. Hiding them behind one tap preserved access while dramatically reducing the default cognitive load. The hardest part of this decision was internal: every team that owned a "hidden" filter had to be convinced their filter was still accessible, not deleted. We ran a filter-by-filter stakeholder review that took 3 weeks but created the alignment we needed to ship.
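The ranking mechanics are simple to sketch. Assuming an interaction log like the one below (filter names and counts are illustrative stand-ins, not ACV's real analytics), the default strip is just the top 8 filters by usage, and the coverage number falls out of the same computation:

```python
from collections import Counter

# Hypothetical filter-interaction log: one entry per filter tap.
# The real analysis ran against the analytics warehouse; these
# names and volumes are illustrative only.
interactions = (
    ["make"] * 3200 + ["model"] * 2900 + ["year"] * 2400
    + ["price"] * 2300 + ["mileage"] * 1900 + ["condition"] * 1500
    + ["distance"] * 1300 + ["title_status"] * 1100
    + ["auction_lane"] * 300 + ["seller_rating"] * 250
    + ["transport_estimate"] * 200
)

counts = Counter(interactions)
total = sum(counts.values())

# The 8 most-used filters form the default strip; everything else
# moves behind the "More filters" sheet.
default_strip = [name for name, _ in counts.most_common(8)]
coverage = sum(counts[name] for name in default_strip) / total

print(default_strip)
print(f"{coverage:.0%} of interactions covered by the default strip")
```

The useful property of this approach is that it's an argument, not an opinion: each team's filter keeps or loses its default-strip slot based on the same measurable criterion.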
Mobile-incompatible: Sidebar filter panel (desktop pattern). Persistent visibility of all active and available filters.

Chosen: Sticky pill strip above results. Active filters display as removable pills, always visible, one tap to remove. A filter-count badge on the "Filters" button shows total active filters at a glance.
Why we chose this: Dealers needed to be able to modify searches quickly — add a filter, remove a filter, scan results, adjust. The pill strip pattern gives them a persistent view of their current search state without consuming result space. The "clear all" button at the end of the strip is prominent but not dangerous — it requires the same tap as removing individual filters, just more efficient when starting over.
Rejected: Newest first. Dealers had always used this. Familiar, predictable.

Considered: Ending soonest. Drives urgency; good for auction mechanics.

Chosen: Relevance score. A model trained on purchase history, search patterns, and geographic preference. The new default, but the user can switch to any sort with one tap.
Why we chose this: A/B test (run before full launch) showed relevance sort increased bid rate from position 1–5 in results by 31% versus recency sort. Dealers who were shown vehicles most likely to match their buying patterns bid faster. The risk was introducing a "black box" sort that dealers couldn't understand — we mitigated this by making the sort prominent, labeled clearly, and switchable in one tap. We also added a "Why this vehicle?" tooltip for the top relevance matches that explains what signals drove the ranking. That transparency feature was the difference between dealer acceptance and dealer skepticism.
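The transparency mechanism can be sketched independently of the model itself. The production ranker was a trained model; the hand-weighted linear scorer below is a simplified stand-in (signal names and weights are hypothetical) showing how per-signal contributions can drive both the ranking and a "Why this vehicle?" explanation:

```python
# Simplified, explainable relevance scorer. The production model was
# trained on purchase history, search patterns, and geography; the
# signals and weights here are illustrative stand-ins.
WEIGHTS = {
    "make_match": 0.4,        # dealer frequently buys this make
    "price_band_match": 0.3,  # within the dealer's typical price band
    "distance_score": 0.2,    # closer vehicles cost less to transport
    "condition_match": 0.1,
}

def score_vehicle(signals: dict) -> tuple:
    """Return (score, top reasons) so the UI can explain the ranking."""
    contributions = {
        name: WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS
    }
    score = sum(contributions.values())
    # Strongest signals first: these feed the "Why this vehicle?" tooltip.
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:2]
    return score, reasons

score, reasons = score_vehicle(
    {"make_match": 1.0, "price_band_match": 0.8, "distance_score": 0.5}
)
```

Because the explanation is derived from the same contributions that produce the score, the tooltip can never drift out of sync with the ranking, which is what makes a "black box" sort feel accountable.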

The costs of simplification

What we gained

Dramatically reduced default complexity

18 visible default filters → 8. The reclaimed mobile screen real estate let results appear above the fold on every device we tested.

What we gave up

Instant access to niche filters

Dealers who used specialty filters (auction lane, seller rating, transport estimate) now had to go one tap deeper. This created real friction for ~8% of our buyer base.

Ongoing tension

Relevance sort vs. dealer autonomy

Relevance sort works well in aggregate but surfaces algorithmic mistakes individually. One bad recommendation from a dealer's perspective undermines trust in the system. We're still working on explainability.

What we gained

Reduced support ticket volume

"Couldn't find vehicle" support tickets dropped 51% post-launch. Part of this was better filter UX, part was relevance sort surfacing vehicles dealers wanted before they had to search for them.

Results at 90 days

+18% Search-session-to-bid conversion
−47% Abandoned search sessions
−51% "Couldn't find vehicle" support tickets

What I'd do differently

The stakeholder alignment work took longer than the design work. Every team that owned a filter had to be brought along. I should have started those conversations 6 weeks earlier and run filter attribution analysis before research began, not after.

Relevance sort explainability was underbaked at launch. The "Why this vehicle?" tooltip was added in week 3 of rollout after dealer feedback. It should have been in scope from day one. Black-box algorithms erode trust faster than bad algorithms.

We didn't build a filter usage feedback loop. We made decisions based on historical filter analytics but didn't build instrumentation to track how filter usage patterns changed post-launch. We're flying partially blind on whether our "top 8" choices are still the right 8.
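The missing instrumentation is small. A minimal sketch, assuming a hypothetical event schema (nothing here reflects ACV's actual pipeline): log each filter interaction with the surface it came from, so filters used heavily from the "More filters" sheet can be flagged as candidates for promotion into the default strip.

```python
from collections import Counter

# Stand-in for an analytics pipeline; event names and fields
# are hypothetical.
events = []

def track_filter_used(filter_name, surface):
    """surface: 'default_strip' or 'more_sheet'."""
    events.append({"event": "filter_used",
                   "filter": filter_name,
                   "surface": surface})

def more_sheet_usage(event_log):
    """Filters used often from the sheet are promotion candidates."""
    return Counter(e["filter"] for e in event_log
                   if e["surface"] == "more_sheet")

track_filter_used("make", "default_strip")
track_filter_used("seller_rating", "more_sheet")
track_filter_used("seller_rating", "more_sheet")
promotion_candidates = more_sheet_usage(events).most_common(1)
```

With this in place, the "top 8" becomes a periodically re-evaluated decision rather than a one-time snapshot.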