If you collect passive feedback, what you perceive as a neck-and-neck battle between two options may actually be a clear-cut victory for one. What's required is normalized feedback. The App Store is one example of feedback that isn't normalized. Apps ask for reviews from users having a positive experience; in our case, people we think are setting up dates. That's blended with passive feedback: people who are unhappy with a product change, or who were banned for misconduct, tend to leave angry reviews. Sometimes the feedback is actively solicited, just not by the developer: think #deleteuber. The people who are quietly disappointed, or who have an underwhelming experience, aren't well represented.

So what is normalized feedback and how do you get it?

I'm only aware of one technique that really delivers, and it's derived from the same idea that goes into randomized control trials (RCTs). Because you can't control for each important variable independently, you need to jumble everyone up and separate them into equally sized buckets. You can then analyze behavior and look for patterns after the fact.
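As a minimal sketch of the "jumble everyone up" step: deterministic hashing is one common way to split users into equally sized, well-mixed buckets. The function and experiment name below are illustrative, not from the original.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, n_buckets: int = 2) -> int:
    """Deterministically assign a user to one of n equally sized buckets.

    Hashing the user id together with an experiment name mixes the
    population: each user's assignment is stable, but uncorrelated with
    their demographics and with other experiments, so every important
    variable ends up roughly evenly distributed across buckets without
    being controlled for individually.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# Buckets come out roughly equal in size for any large population.
buckets = [assign_bucket(f"user-{i}", "new-feedback-prompt") for i in range(10_000)]
print(buckets.count(0), buckets.count(1))  # roughly 5000 each
```

Because the assignment is a pure function of the ids, it needs no stored state, and rerunning the analysis later reproduces the same groups.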

If you have existing users you want to research, it's easy to repurpose existing experiment infrastructure to gather feedback. The only problem is that RCTs require lots of data points, particularly to measure small changes, and feedback on an idea is speculative; what you really need is feedback on a change that's already been made. In the quest for normalized feedback, this is a fancy way of saying: learn by shipping.

If a change is researched in the form of an RCT, feedback recorded during or after the trial is normalized and can be analyzed for winners and losers after the fact. I call this quant leads qual. Imagine a trial where a success metric is tracked for all users. After the trial period, an analysis of the test group might discover that a particular demographic is overrepresented among the losers. Now the qualitative comes into play: interview users in those pockets to learn how they feel about and respond to the change. Finally, research and iterate on what can be done to improve their experience. It's possible to model the global impact of a particular issue by scaling the affected demographic in the test up to the total user base.
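The quantitative half of quant-leads-qual can be sketched in a few lines. All numbers here are made up for illustration (demographic labels, counts, and the one-million-user base are hypothetical): find which demographic is overrepresented among the losers, then scale it up to model global impact.

```python
from collections import Counter

# Hypothetical trial records: (demographic, succeeded) per test-group user.
test_group = (
    [("18-24", False)] * 40 + [("18-24", True)] * 60
    + [("25-34", False)] * 10 + [("25-34", True)] * 90
)

losers = [demo for demo, ok in test_group if not ok]
loser_share = Counter(losers)
group_share = Counter(demo for demo, _ in test_group)

# A demographic is overrepresented among losers when its share of the
# losers exceeds its share of the whole test group (ratio > 1).
for demo in group_share:
    ratio = (loser_share[demo] / len(losers)) / (group_share[demo] / len(test_group))
    print(demo, round(ratio, 2))  # 18-24 comes out at 1.6, 25-34 at 0.4

# Scale the affected demo up to the total user base to model global impact.
total_users = 1_000_000
demo_fraction = group_share["18-24"] / len(test_group)  # 0.5 of the test group
failure_rate = 40 / 100                                  # losers within that demo
affected = total_users * demo_fraction * failure_rate
print(int(affected))  # 200000 users potentially affected at full rollout
```

The 18-24 cohort here is whom you would interview next; the scaled estimate says whether the fix is worth prioritizing.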

[Figure: users across demographic and test groups. Distribution of unsolicited feedback can change depending on many factors.]

Testing is particularly effective at catching downstream effects. Users experience your product as it is today, and it's not their job to anticipate the second-order effects of a change. Adding a new feature may reduce engagement on existing ones. How many Instagram likes come at the cost of Facebook posts?

On the other hand…

Before dismissing the vocal minority, take extra care to understand why they are making the effort to speak up in the first place. Just because their opinion isn't popular doesn't mean it's wrong, or unimportant. It could be the cornerstone of their enthusiasm for the product.

Particularly if a product is social in nature, the vocal are individuals with certain traits, not traits that define whole groups of individuals. People use products in different ways for different reasons. Endlessly doubling down on what is already done well for an existing cohort makes a product unlikely to grow. Tread with caution.


  • Democratic product process requires equal representation
  • Vocalized opinions are strong, but may not be representative
  • Normalizing feedback is a powerful tool to understand all users
  • Randomize who gets a trial behavior first, interview after: quant leads qual
  • Keep an eye out for downstream effects, experience never exists in a vacuum


