Operating much like code reviews, this approach promotes a higher level of understanding between our two teams and strengthens our patterns and design systems by formalizing a way to share feedback. Our engineering team is mostly remote, so it also gives us a structured, asynchronous way to communicate.
By keeping the review process objective and letting it create space for questions, our teams are able to ship higher-quality work in a way that honors both the Product Design and Engineering teams' contributions to a project.
In the Beginning
There was a time in the early days of ATTN: when our Product Design and Front-end Engineering teams were the same few people. Okay, mostly the same person — me (👋).
The upside? Our team was able to ship features fast because I designed them as we built them. If I wanted to test something, I tested it. At the time our team also had (and still has!) a pattern library that describes and forms the foundation of our design system, which undoubtedly made some decisions easier than always designing from scratch.
Even with this pattern library in place, I still found myself exploring new design work in the form of campaign initiatives, iterative page layouts, and product-tested feature additions (which usually meant introducing new patterns or modifying existing ones). Many of those decisions lacked the space and time for a full design process; most of the work happened in the browser and was reviewed for the first time in a code review or on a staging instance. That meant little research and little time for feedback.
I supplemented with data where I could, relying heavily on tools like Optimizely and Google Analytics to validate feature direction based on user engagement.
Eventually our team grew, and we brought on our first UX Design hire who could dedicate time to a full design process. With this hire, we quickly had to adjust to a new way of doing things. Instead of reviewing everything in pull requests, our designer was able to introduce spec'd design mocks as well as new design tooling that explained his vision, allowing for iterative feedback before anything reached code.
We also brought on more Front-end team members around this time and quickly began to scale up our production and overall output. Still a small team (👋 new friends), we had our UX Designer review implemented designs — but normally after features had launched. Unfortunately, that made any fixes hard to prioritize in future sprints when they competed with functional bugs and new project initiatives. It did, however, help promote healthy communication between the designer and engineers.
The Need for a Process
As time went on, in an effort to avoid building up a backlog of design-override cruft, we introduced a process that inserted our designer as a reviewer before features shipped to production.
To avoid an ownership standoff, I wanted to set boundaries and cap these reviews by establishing the following objectives:
Design Reviews !== Code Reviews
Designers who can code? Love them (Heck, I am one!). During these reviews, however, the engineer has final say on how that code is written. Designers are encouraged to dive into the code to give suggestions if it benefits or simplifies communication on why an element needs to be adjusted.
Engineers no longer have to review design work in code reviews
For example, it no longer becomes the front-end reviewer’s responsibility to catch issues in a Pull Request regarding color use, font-sizing, and spacing that may stray from the mock design.
This objective made logical sense to us — a designer has worked within these mocks for days (or in some cases, weeks) prior to a front-end engineer touching them. If something is off, a designer will be able to catch it quickly since they know what to look for in the first place.
This also frees up our engineers to focus their feedback within code reviews towards the way things are built, instead of the way things look.
Changes requested by a designer have to be categorized with HIGH, MEDIUM, or LOW priority
Even though design reviews are meant to save time overall, I still wanted us to be realistic about the priorities of the build — especially in situations where we have launch dates to factor into our timelines.
By categorizing every change request with a priority level, engineers can order their tasks based on their sprint progress. All high-priority items must be addressed; medium and low are at the engineer's discretion. If a change request can't be met, the engineer becomes responsible for filing it as a bug fix and putting it in the backlog to be prioritized in a future sprint.
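To make the format concrete, a set of prioritized change requests might read something like the following (the items themselves are invented for illustration, not from an actual review):

```markdown
- **HIGH:** CTA button color doesn't match the mock; must be fixed before launch
- **MEDIUM:** Card padding is tighter than spec'd; address if sprint time allows
- **LOW:** Hover transition feels abrupt; consider easing it in a later pass
```

Each item pairs the priority with a short, specific description, so the engineer can triage at a glance.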
Questions and conversation before a review is encouraged
These reviews in no way supplement or replace collaboration during a feature build. We have project slack channels meant for discussion and questions around design feedback, handoffs, and implementation questions that engineers and designers are encouraged to use prior to review as needed.
So How Does It Actually Work?
Our engineering team primarily works in GitHub and Trello. In an effort to mirror code reviews as much as possible, we use a markdown template housed in our team wiki and open an issue in a project's repository for every design review once an engineer is ready for feedback. Engineers open the review, alert PDx via GitHub and Slack, and designers then self-assign based on availability and familiarity with the project.
After a design review is completed, the Engineer can then open a PR with the established changes for an official code review.
Our Design Review markdown template includes the following:
- Link to the staging instance being reviewed
- Link to submitted design mocks and spec’d files for reference
- Summary checklist of high level changes that a designer should review
- Notes from the Engineer that highlight outstanding questions or make note of known issues that a reviewer should know about
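Sketching those four sections out, a template along these lines might look like this (the headings and placeholder wording are illustrative, not our actual wiki template):

```markdown
## Design Review: <feature name>

- **Staging instance:** <link>
- **Design mocks / spec'd files:** <link>

### High-level checklist for the reviewer
- [ ] Color usage matches the mocks
- [ ] Typography (sizes, weights, line heights)
- [ ] Spacing and layout at the agreed breakpoints

### Notes from the engineer
- Known issue: <anything the reviewer should expect to see>
- Open question: <anything that needs a design call>
```

Keeping the template in the wiki means every review issue starts from the same structure, which makes reviews easier to scan and compare later.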
After reviewing, designers are required to submit change requests that follow the HIGH/MEDIUM/LOW format noted above. They should also submit screenshots or QuickTime videos to help illustrate their experience whenever that clarifies their feedback.
We also limit design reviews to only one OS and one browser on both desktop and mobile to keep these as straightforward and simple as possible. We prioritize QA testing separately and through a different approach.
While this approach works for our team — it is by no means perfect.
There are times when design reviews can step too far into QA territory, but we do our best to avoid it. We're also experimenting with when in the build design reviews make the most sense, and it varies between projects. Sometimes it's best to review by component; other times it's better to review a full page.
My advice would be to first establish a system that works for both your design team and your engineering team. It’s key that everyone is on board and willing to do the work these reviews take. We’ve found that documenting them is helpful as opposed to doing them verbally. If we do have a session over Hangouts or Skype, we do our best to then document our conversation in the review dialogue to reference later.
It's also important to create boundaries and objectives (like those I listed above) that support your team's priorities. The key is to avoid ownership confusion and to keep from getting stuck in an endless feedback loop of changes. At the end of the day, communication is what makes this process succeed.