Here is the first of a series of case studies I wrote a few years back on a very specific aspect of my work on embedded software at Withings, and never shared online. Hopefully it will be useful to you people!
Arriving at Withings in December 2012, the first two products I had the chance to work on were the Smart Body Analyzer (the new scale) and the Pulse, the brand’s first activity tracker. I’ll be talking about the Pulse in a second article.
Both were being developed in parallel. They needed to share the same identity, as they were designed to be complementary and work with the same companion app.
When I joined the team, the project was in industrialisation mode; hardware choices were final.
The Smart Body Analyzer was built on the knowledge acquired from Withings’ first scale, the Body Scale, which had neat tricks like identifying the user based on their weight. The hardware interaction was super simple and worked pretty well, without much improvement left to make.
The interface was a succession of 3 screens (Weight, Fat Mass and Body Mass Index). The Smart Body Analyzer would add Heart Rate monitoring, Temperature and Air Quality monitoring (CO2 level), increasing the number of screens and the complexity of the information displayed.
The Smart Body Analyzer’s screen
The scale has a 2.4 x 1.6 inch (6 x 4 cm) LCD screen with a resolution of 128 x 64 pixels. It is meant to be read from a distance of 5 to 7 feet (1.5 to 2 m).
- It has no grey levels (only black and white), so no possibility for anti-aliasing.
- Pixels are not squares but rectangles (0.8 x 1 ratio)
- The screen has a very strong afterglow
The reason for the odd pixel ratio was to make a bigger screen while keeping a limited number of pixels, with a height and width that remain multiples of 8, which is easier to manage from an embedded software perspective.
Learning from the Body Scale
The Smart Body Analyzer (SBM) used the exact same screen as the existing Body Scale. As you can see on the visual, the screen features a large, rounded typeface.
Three problems were identifiable:
- The rounded font type made the pixels very visible
- The pixels being rectangular and not square, what should have been quarter circles appeared oval:
- The last problem is not visible on a snapshot. When the user steps on the scale, it takes a few seconds for the scale to converge to the right weight measurement.
During that time, the numbers on the screen keep changing.
As the screen has a high afterglow (meaning pixels take a long time to turn OFF), rapidly changing numbers were difficult to read:
Addressing the pixel ratio problem
Most of those problems could be overcome with visual solutions.
Using the pixel ratio (0.8 x 1), we can recreate a perfect visual square using a grid of 5 x 4 physical pixels (0.8 x 5 = 4, 1 x 4 = 4):
Then, building a matrix on that square, we can draw a 3 by 5 digital display, enough to display any number.
To maximize the size of the digits for better legibility, I added wider spaces inside the matrix; playing with half-square units further improved the grid, in both legibility and style:
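The pixel-ratio arithmetic above can be sketched in a few lines. This is not from the original article, just a minimal illustration of why a side of 4 visual units is the smallest square that lands on whole pixels, assuming the 0.8 x 1 ratio given earlier:

```python
from fractions import Fraction

# Physical aspect of one pixel, taken from the article: 0.8 wide x 1 tall.
PIXEL_W = Fraction(8, 10)
PIXEL_H = Fraction(1)

def square_in_pixels(side_units):
    """Return (columns, rows) of physical pixels covering a visual square
    of the given side length, measured in visual units."""
    return side_units / PIXEL_W, side_units / PIXEL_H

# Side 4 is the smallest size where both counts are whole pixels:
cols, rows = square_in_pixels(4)
print(cols, rows)  # -> 5 4
```

Using exact fractions avoids floating-point surprises; the same reasoning can be done on paper, as in the grid above.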
Addressing the afterglow problem
In order to minimize the afterglow’s effect on legibility, we could maximize the overlapping zones between numbers by sticking to the matrix, thus reducing the screen’s perceived flickering:
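One way to picture this (a toy sketch, not the original firmware) is to count how many pixels change state when the display switches from one digit to the next. On a high-afterglow LCD every toggled pixel smears, so digits that share a matrix, and therefore overlap heavily, flicker less. The 3x5 bitmaps below are made up for illustration:

```python
# Toy 3x5 bitmaps: '#' = pixel ON, '.' = pixel OFF.
DIGIT_1 = [
    "..#",
    "..#",
    "..#",
    "..#",
    "..#",
]

DIGIT_7 = [
    "###",
    "..#",
    "..#",
    "..#",
    "..#",
]

def toggled_pixels(a, b):
    """Count pixels that change state (ON <-> OFF) between two bitmaps."""
    return sum(
        1
        for row_a, row_b in zip(a, b)
        for pa, pb in zip(row_a, row_b)
        if pa != pb
    )

# Going from 1 to 7 only toggles the two extra pixels of 7's top bar:
print(toggled_pixels(DIGIT_1, DIGIT_7))  # -> 2
```

The fewer pixels toggle between consecutive values, the less the afterglow is perceived as flicker during the weight convergence.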
The result is the following set of digits. They would be used to display the main information on the screen.
We also needed a smaller alphanumerical font for labels and secondary information. The existing font was bold, leaving small openings.
Displayed as white on black, the light tended to eat into the inner negative space, making it harder to read.
I designed the new font to give the verticals a stronger weight than the horizontals, and more inner space. It looks thinner and is more readable:
Using the screen’s characteristics
Here is a video of the UI of the new scale at launch.
The new fonts are in use.
The screen’s strong afterglow and high latency prevent any moving transitions, but they create a natural fade-in / fade-out effect that is heavily used in the UI, for instance in the transitions between screens and in the spinner.
Delivering assets for embedded UI
Items were then named and delivered to the Embedded Software team as GIF files, then translated into byte arrays they could use.
I simply kept a Photoshop file with old-school slicing areas that generated a number of assets that could be automatically compiled by the dev team.
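The GIF-to-byte-array step might look something like the sketch below (an assumption on my part, not Withings’ actual tool): each 1-bit image row is packed 8 horizontal pixels per byte, MSB first, which is also why asset widths that are multiples of 8 are convenient on the firmware side:

```python
def pack_bitmap(rows):
    """Pack a 1-bit image into a flat byte array, row after row.

    rows: list of strings of '.' (OFF) / '#' (ON);
          width must be a multiple of 8 (one byte = 8 pixels, MSB first).
    """
    out = []
    for row in rows:
        assert len(row) % 8 == 0, "width must be a multiple of 8"
        for i in range(0, len(row), 8):
            byte = 0
            for bit, pixel in enumerate(row[i:i + 8]):
                if pixel == "#":
                    byte |= 0x80 >> bit  # MSB = leftmost pixel
            out.append(byte)
    return out

# A hypothetical 8x3 icon and its packed form:
icon = [
    "########",
    "#......#",
    "########",
]
print([hex(b) for b in pack_bitmap(icon)])  # -> ['0xff', '0x81', '0xff']
```

On the firmware side, the resulting list would typically be emitted as a C byte array and blitted straight into the display buffer.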
For complex animations, the constraint was the available memory.
We could not keep long series of pictures in the firmware.
As a result, most animations needed to be computer generated.
Processing proved to be a great tool; computer-generated animations built in Processing were relatively easy to duplicate on the firmware.
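A procedural animation in this spirit can be sketched as follows (my own illustration, not the actual Withings code): instead of storing a series of frames, each frame is computed from a time step, so a Processing prototype ports almost line for line to firmware. The segment counts here are hypothetical:

```python
# A spinner made of segments around a circle: only the head and a short
# trail are lit, and each frame is derived from the step counter alone,
# so no frame images need to be stored in memory.

SEGMENTS = 8  # hypothetical number of spinner segments
TRAIL = 3     # hypothetical number of lit segments, head included

def spinner_frame(step):
    """Return the indices of the segments that are ON at a given step."""
    head = step % SEGMENTS
    return [(head - k) % SEGMENTS for k in range(TRAIL)]

for step in range(3):
    print(spinner_frame(step))
# step 0 -> [0, 7, 6]
# step 1 -> [1, 0, 7]
# step 2 -> [2, 1, 0]
```

Because the frame is a pure function of the step counter, the same logic can run in the firmware’s refresh loop with no stored assets at all.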
Here is the prototype of the “Birthday screen” shown earlier:
Most issues and opportunities only show up at that point, when you are actually looking at the device, at the right distance, in the right conditions, showing it to different people.