What is Sofar Sounds?

Sofar Sounds is a music community that enables music lovers to discover new artists through secret shows, called “Sofars,” which take place in unexpected locations and provide an intimate concert experience. Tickets range from $18–25 in the US, and audiences tend to be small, about 30 people per show. Attendees don’t learn the show’s exact location or lineup until about 48 hours beforehand, which makes each concert feel unique and personal for musicians and audiences alike.

The Brief

Sofar Sounds wants to create a feature that will enable attendees to tip musicians during and after the event, as well as connect with musicians and other attendees.


We believe that allowing Sofar Sounds’ audiences to tip musicians and engage with other attendees will not only enable musicians to earn more money but also foster a stronger connection between audiences and artists at Sofar events.

Process & Responsibilities

We used the five-step Design Process to define, design, and test our solutions. Working as a team of three, my group had five days to conduct research and synthesis, and another five days to ideate, design, prototype, test, and iterate.

My responsibilities included:

  • Research: writing screener surveys and interview questions, deploying screener surveys, structuring the user research plan, and scheduling and conducting interviews
  • Synthesis (as a group): conducting affinity mapping, defining personas, and designing and mapping user journeys
  • Ideation: digitizing the feature prioritization matrix using the MoSCoW method, facilitating two rounds of design studio (as a group), and designing mid-fi wireframes in Sketch
  • Prototyping & Testing: designing a mid-fi InVision prototype, running two rounds of usability testing, facilitating test debriefs and design iterations, and performing quality checks on high-fidelity wireframes and the prototype



To better understand the problem space, we conducted user research to learn how people interact at concerts*. We hoped to identify user goals, behaviors, and pain points pertaining to the concert experience.

*As a semi-frequent concertgoer who has attended at least one music-related event per month for the last five years, this was definitely one of those “you are not your user” moments, and I had to set all personal biases aside while conducting user interviews.

**EXCLUSIVE** photos of musicians who agreed to being interviewed for research purposes. Take that, Pitchfork!

Screener Surveys

Early on, we identified two user types for whom we would be designing this project: concert attendees and musicians.

We distributed two screener surveys to our personal networks using Google Forms: one survey for music lovers (attendees), and another for musicians. These screener surveys were designed to help us identify potential participants in our user research.

Screener Challenges

Here’s the thing about screener surveys, though: their effectiveness can be a hit or miss depending on your budget, timeline, location, and personal network. Our musician-focused survey was great for gathering broad insights on artists’ motivations, platforms for self-promotion, and income satisfaction, but terrible for scheduling in-person interviews.

Although we got survey responses from some very seasoned musicians—like, at least one of them played @ Coachella last year, okay????—very few agreed to participate in an in-person interview, and none of them were available for an interview during our very limited research window.

So we improvised. Thankfully, we’d slipped a trick question into the other screener survey:

GOD BLESS BROOKLYN’S DIY MUSIC SCENE, which has been a more effective networking platform than LinkedIn has ever been in my life.

At least six of our “music lover” respondents had performed at a concert or music festival in the last year! Some of them had Spotify accounts!

Call it double-dipping if you will, but I call it being resourceful. We were able to chat with at least four interviewees as both “attendees” and “musicians.”


We conducted in-person interviews with eight participants, four from each user type. Before the interviews, we developed a research plan listing a set of questions for each user type. Each interview lasted about 15 minutes and was recorded through a combination of handwritten notes and Otter Voice Memos.

After the interviews, our team gathered to write notes and synthesize our research.
