Agency and attribution in user interface string design

This is going to sound obvious, but technology is useful because it does things. Swing a hammer and it will transfer force. Throw a boomerang and it will (usually) come back. Simple tools like these exhibit behaviors because we interact with them in intentional ways, but the behaviors are rather straightforward, and they typically cease as soon as we set the tools down. Other technology has a little more autonomy: An alarm clock makes noise when a certain time is reached. An air conditioner kicks in when a preset temperature threshold is exceeded. These technologies do things without proximate input, but they still behave according to rules trivial enough that we can think of them as completely mechanical, totally lacking meaningful agency or intent.

Computer software is rather different. Its behaviors and capabilities are complex enough that we tend to — either by the makers’ design, or simply by intuition — assign agency and character. At its best, software can be “intelligent” and “take care of things” for us. At its worst, software can “do dumb things” we didn’t want it to do, or even, if it’s just not working right at all, “have a bad day”. We know these products aren’t alive or conscious (…yet), but we nonetheless ascribe to them a certain autonomous existence, even personality.

UX designers should wield this intentionally. The more autonomy a product exhibits, and the more complex its behaviors get, the more it becomes something users can trust or distrust, love or hate. But software is not human, of course, and it’s not entirely obvious how exactly this agency and attribution should be designed. Should the software present itself as self-aware? (“I’m sorry, I couldn’t find your account.”) Should it simply be a medium for its makers? (“Sorry, we couldn’t find your account.”) Should it avoid the question altogether? (“Your account couldn’t be found.”) There is no universal right answer, but it’s helpful to know the options — and ideally be consistent, so a user knows what to expect. Here are some of the most common patterns:

Third-person product agency

One approach is to simply use the product as the actor and reference it by name. A piece of software can sync, open, configure, uninstall, add, use, remove, and perform a long list of other verbs. These interactions aren’t necessarily all mechanical: Weather would like to use my location, Git doesn’t care about remote names, and Zune apparently will listen when I tell it things. Overall, third-person product agency is a pretty useful and straightforward pattern that generally makes it quite clear what’s happening — even when it takes some poetic license and gives software desires. This is a great option to use in many cases.

macOS Weather: “Weather would like to use your current location.”
Dropbox Paper: “Paper is currently experiencing performance issues.”
GitHub: “Git doesn’t care what you name your remotes…”
Zune: “Tell Zune three of your favorite artists.”
Visual Studio: “Please wait while Windows configures Microsoft Visual Studio…”

Channeled human agency

Another approach is to leave the agency with the people who made the product capable of acting in the first place — its makers — and channel it through the product. This most commonly manifests in “we” statements: “We’re creating your account”, “We’re searching your inbox”, “We’re backing up your files”, etc. This can be a comforting reminder that the technology you’re using is ultimately the result of people who are (hopefully) trying to be helpful, even if they’re not directly helping at that instant. However, channeled human agency can raise awkward questions: Is there literally a person searching through my inbox right now? This pattern is a good option when a little extra human touch is desirable, but it is best avoided when it creates uncomfortable privacy implications.

GitHub Importer: “We’re importing commits from your other repository.”
Outlook People web app: “We didn’t find anything to show here.”
Upwork error page: “Due to technical difficulties we are unable to process your request.”
Windows Live Messenger: “We can’t sign you in…”
Google Flights: “We recommend that you book both flights at the same time”

Embodied agency

It’s also possible to give agency to a specific section or component of a product, often one with anthropomorphic characteristics or personality. This pattern typically shows up in conversational interfaces: virtual assistants and embedded chatbots. These agents refer to themselves as “I” and may have eyes or a full face. Embodied agency can encourage users to interact in a more open and natural way, but it also significantly raises user expectations: If I’m really supposed to talk to this thing, it sure better be able to understand me. The tolerance for error and forgiveness for silly mistakes go way down, and trust can be easily lost if the agent underperforms. This pattern can work very well when a product is able to exhibit significant intelligence and carry a conversation at something close to a human level, but otherwise it may be asking for trouble.

Cortana: “Hello. What can I do for you?”
Siri: “Go ahead, I’m listening…”
Google Assistant: “Hi, how can I help?”
Microsoft CaptionBot: “I am not really confident, but I think it’s a person holding a glass of beer.”
Computershare: “Hi, I’m Penny, Computershare’s virtual agent.”

First-person product agency

An alternative to embodied agency is for the software as a whole to present itself in the first person, but without any obvious human-like or conversational disposition. The software wants me to give it a moment, or can’t log me in, or is offering to resize an image for me — but there’s no one in sight. I can’t speak for everyone here, but I find this pattern disconcerting and borderline upsetting. It’s the user experience equivalent of thinking you’re alone by the campfire and suddenly hearing footsteps behind you. It’s not impossible to use first-person product agency effectively, but there’s rarely a great reason to choose it over one of the other patterns.

Framer: “I can move myself to the Applications folder if you’d like.”
CleanMyMac: “Give me a moment.”
LANDR: “Hold on! I’m coming.”
Tweaks.com Logon Changer: “Do you want me to create a copy and automatically adjust the image?”
PowerPoint: “Tell me what you want to do”

Unattributed agency

Finally, thanks to the flexibility of natural language, we can also avoid the actor question altogether by using grammatical constructs that don’t specify agency — typically sentence fragments. In the case of iOS Wi-Fi password sharing, “Successfully shared your Wi-Fi password” could reasonably be expanded into a complete sentence as any of:

  • “You (the user) successfully shared your Wi-Fi password”
  • “iOS (the product) successfully shared your Wi-Fi password”
  • “We (the product’s maker, Apple) successfully shared your Wi-Fi password”

In addition to sentence fragments, unattributed agency also tends to rely on the passive voice: “The downloaded files are being extracted”, rather than saying any particular person or thing “is extracting the downloaded files.” This pattern works particularly well in user interface design, where space is often at a premium; if something can be said in fewer words, say it in fewer words. It’s robust, easy to follow, and a great default choice when there’s no particular reason to use anything else.

iOS: “Successfully shared your Wi-Fi password.”
Photoshop Elements: “Please wait while the downloaded files are being extracted.”
Songkick: “Scanning your playlists and Library to track artists.”
macOS Photos: “Preparing Library…”
iCloud: “Can’t load Reminders.”

Wrap-up

While all agency patterns do have something to offer, some are frequently better bets. In general:

  1. Start with unattributed agency: “Syncing files…”
  2. Add third-person product agency to the mix as desired: “FileSync is still syncing files…”
  3. Use channeled human agency when a more personal touch is helpful: “We’re sorry, your files couldn’t be synced.”
  4. Save embodied agency for full-fledged conversational interfaces: “Hi! I’m Filo, I’ll help you sort through your stuff to find old files. What are you looking for today?”
  5. Avoid first-person product agency unless all of the above simply won’t work.

And whatever you do… be consistent!

Word: “Tell me what you want to do” (where “me” refers to Word) and “Tell me more” (where “me” refers to the user)
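
One practical way to guard against that kind of drift is to centralize UI strings in a single catalog, so the product’s voice is decided in one place and stray first-person strings are easy to spot in review. Here’s a minimal TypeScript sketch of the idea; the product name “FileSync” and the message keys are hypothetical, invented for illustration:

    // messages.ts: a hypothetical string catalog for a product called "FileSync".
    // Default register is unattributed agency; the product name appears only
    // where third-person agency is explicitly wanted, and "we" is reserved
    // for apologies. First-person "I" strings are flagged below.
    const PRODUCT_NAME = "FileSync";

    const messages = {
      // 1. Unattributed agency: the default for status and errors.
      syncing: "Syncing files…",
      syncFailed: "Files couldn’t be synced.",

      // 2. Third-person product agency: name the product when it helps.
      stillSyncing: `${PRODUCT_NAME} is still syncing files…`,
      wantsAccess: `${PRODUCT_NAME} would like to access your photos.`,

      // 3. Channeled human agency: reserve "we" for a more personal touch.
      syncApology: "We’re sorry, your files couldn’t be synced.",
    };

    // A crude review-time check that flags strings slipping into
    // first-person product agency.
    for (const [key, text] of Object.entries(messages)) {
      if (/\b(I|me|myself)\b/.test(text)) {
        console.warn(`"${key}" uses first-person product agency: ${text}`);
      }
    }

A check like this can’t judge tone, of course, but it makes the catalog the one place where the product’s voice is decided, which also pays off when the strings eventually go to translators.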
