ACV's search results page (SRP) had 32 active filters. They'd been added one at a time over four years, each by a team convinced its filter was the one dealers really needed. The cumulative effect was a search experience so complex that dealers were calling support to ask how to find cars.
At any given time, ACV's marketplace has 80,000–120,000 active vehicle listings. Dealers aren't browsing — they're sourcing inventory for their lots. They have specific needs: right make, right condition, right price band, right geography for transport costs. The SRP is the tool that helps them find those vehicles in the window between customer appointments.
Search performance correlates directly with auction participation. Dealers who find relevant inventory bid. Dealers who can't find relevant inventory leave — and come back less frequently.
At project start: 32 available filters, 18 visible by default on mobile. Average session-to-bid conversion was 14%. Support ticket volume for "couldn't find vehicle" was the second-highest category after payment issues.
The original SRP was built for desktop, for dealers who spent hours a day on the platform, and for an inventory size of ~20,000 listings. All three of those conditions had changed. 80% of sessions were now mobile. Average session length had dropped from 18 minutes to 6. Inventory had grown 5x.
But the filter system hadn't changed. Every feature request from a vocal power user had been implemented as a permanent filter. The result was a UI that required a scroll just to see all available filters — before you'd even started filtering.
The real problem wasn't filter quantity. It was filter architecture. Dealers weren't being served by the right filters in the right moment. They were being asked to configure a search experience every time, from scratch, with no memory of their preferences and no signal about what would help them find what they needed faster.
18 visible default filters → 8. The reclaimed mobile screen real estate let results appear above the fold on every device we tested.
Dealers who used specialty filters (auction lane, seller rating, transport estimate) now had to go one tap deeper. This created real friction for ~8% of our buyer base.
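The tiering described above — a small default set, with specialty filters one tap deeper — could be sketched roughly like this. The filter names and tier assignments are illustrative (drawn from the filters mentioned in this writeup), not ACV's actual catalog:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FilterDef:
    key: str
    label: str
    tier: str  # "primary" renders by default; "secondary" sits one tap deeper

# Illustrative catalog — the real filter set and tier assignments are assumptions.
FILTERS = [
    FilterDef("make", "Make", "primary"),
    FilterDef("condition", "Condition", "primary"),
    FilterDef("price_band", "Price band", "primary"),
    FilterDef("distance", "Distance", "primary"),
    FilterDef("auction_lane", "Auction lane", "secondary"),
    FilterDef("seller_rating", "Seller rating", "secondary"),
    FilterDef("transport_estimate", "Transport estimate", "secondary"),
]

def visible_by_default(filters):
    """Only primary-tier filters render on the initial SRP view."""
    return [f for f in filters if f.tier == "primary"]
```

The point of encoding tiers as data rather than hardcoding the UI is that the "top 8" can be revisited later without touching layout code — which matters given how the default set was chosen.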
Relevance sort works well in aggregate but surfaces algorithmic mistakes individually. One bad recommendation, from a dealer's perspective, undermines trust in the whole system. We're still working on explainability.
"Couldn't find vehicle" support tickets dropped 51% post-launch. Part of this was better filter UX, part was relevance sort surfacing vehicles dealers wanted before they had to search for them.
The stakeholder alignment work took longer than the design work. Every team that owned a filter had to be brought along. I should have started those conversations 6 weeks earlier and run filter attribution analysis before research began, not after.
Relevance sort explainability was underbaked at launch. The "Why this vehicle?" tooltip was added in week 3 of rollout after dealer feedback. It should have been in scope from day one. Black-box algorithms erode trust faster than bad algorithms.
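A minimal version of a "Why this vehicle?" explanation could map the top-contributing relevance signals to dealer-readable phrases. The signal names, labels, and scoring here are assumptions for illustration, not ACV's actual ranking features or tooltip logic:

```python
def explain_match(signal_scores, top_n=2):
    """
    Turn per-signal relevance contributions into a short, dealer-readable
    reason string. Signal names and labels are illustrative placeholders.
    """
    labels = {
        "make_match": "matches your usual makes",
        "price_fit": "in your typical price band",
        "distance": "low estimated transport cost",
        "recent_views": "similar to vehicles you viewed recently",
    }
    # Keep only the strongest positive contributors, most influential first.
    top = sorted(signal_scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    reasons = [labels.get(name, name) for name, score in top if score > 0]
    if not reasons:
        return ""
    return "Why this vehicle? " + "; ".join(reasons)
```

Even a crude explanation like this gives the dealer something to argue with, which is the trust-building step a black-box ranking lacks.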
We didn't build a filter usage feedback loop. We made decisions based on historical filter analytics but didn't build instrumentation to track how filter usage patterns changed post-launch. We're flying partially blind on whether our "top 8" choices are still the right 8.
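The missing feedback loop amounts to per-interaction instrumentation: one structured event per filter action, so post-launch usage can be compared against the historical analytics the "top 8" choices were based on. The event schema below is an assumption, sketched for illustration:

```python
import json
import time

def log_filter_event(sink, session_id, filter_key, action, value=None):
    """
    Emit one structured event per filter interaction. `sink` stands in for
    whatever event pipeline would receive these; the schema is hypothetical.
    """
    event = {
        "ts": time.time(),
        "session_id": session_id,
        "filter": filter_key,
        "action": action,  # e.g. "apply", "clear", "expand_secondary"
        "value": value,
    }
    sink.append(json.dumps(event))
    return event
```

Counting "expand_secondary" events against "apply" events on buried filters would answer the open question directly: if a filter behind the extra tap is applied in a large share of sessions, it's a candidate to promote back into the default 8.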