Captology describes the shaded area where computing technology and persuasion overlap (recreated from BJ Fogg’s CHI 98 paper “Persuasive Computers”).
Interactive technologies hold many advantages over traditional media precisely because they are interactive. They also have advantages over human persuaders: they can be more persistent (e.g., software update reminders), offer anonymity (useful for sensitive topics), access and manipulate large amounts of data (e.g., Amazon recommendations), use many styles and modes (text, graphics, audio, video, animation, and simulation), scale easily, and are pervasive. This last advantage is even more pronounced today, with mobile phones an extension of our arms and a growing proliferation of smart devices, embedded computing, IoT, wearable technology, augmented reality, virtual reality, and AI-powered virtual assistants embedded in anything and everything around us. In addition, today’s technological advances allow us to time and target moments of persuasion for high impact, since it is easy to know a user’s location, context, time, and routine, and to give them the ability to act immediately. This could be a reminder from your smartwatch to stand or move, or an offer from the coffee shop while you are a few blocks away.
Ethics, new technology, and interactive media
The use of persuasion in traditional media has raised ethical questions for decades. New media and pervasive technology raise even more, some stemming directly from the advantages pervasive technology has over traditional media and human persuaders. Anyone using persuasive methods to change people’s minds or behavior should thoroughly understand the ethical implications and impact of their work. One of the key responsibilities of a designer in any design process is to be an advocate for the user. This role becomes even more crucial when persuasion techniques are intentionally used in design, since users may be unaware of the tactics at play. Worse, some users may not be capable of detecting these tactics at all, as may be the case with children, seniors, or other vulnerable users.
BJ Fogg provides six factors that give interactive technologies an advantage over users when it comes to persuasion:
- Persuasive intent is masked by novelty.
The web and email are no longer novel, and most of us have wised up to deceptive web practices and the promises of Nigerian princes, but we still find novelty in new mobile apps, voice interfaces, AR, and VR. Not long ago, the Pokémon Go craze raised many ethical questions.
- Positive reputation of new technology.
While “It must be true — I saw it on the internet” is now a punchline, users are still being persuaded to like, comment, share, retweet, spread challenges, and make fake news or bot-generated content viral.
- Unlimited persistence.
Would you want a used-car salesman following you around after your first visit, continually trying to sell you a car? Thankfully that does not happen in real life, but your apps and devices are with you all the time, and the ding and the glowing screen can persuade you persistently, even at times and in places that are otherwise inappropriate. This past Lent, my son took a break from his mobile device. When he turned it back on after Easter, he had hundreds of notifications and alerts from one mobile game, offering all sorts of reminders and incentives to come back and play.
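Unlimited persistence is also a design decision that can be constrained in code. As a minimal sketch (the class, method names, and limits below are all hypothetical, not from any real notification API), a re-engagement system could cap how many notifications it queues and expire stale ones, so a returning user is not greeted with hundreds of old alerts:

```python
from datetime import datetime, timedelta

class NotificationThrottle:
    """Hypothetical guard against 'unlimited persistence': cap how many
    re-engagement notifications can pile up, and drop stale ones."""

    def __init__(self, max_pending=3, max_age_days=7):
        self.max_pending = max_pending
        self.max_age = timedelta(days=max_age_days)
        self.pending = []  # list of (timestamp, message)

    def push(self, message, now=None):
        now = now or datetime.now()
        self.pending.append((now, message))
        # Keep only the most recent max_pending notifications.
        self.pending = self.pending[-self.max_pending:]

    def deliver(self, now=None):
        now = now or datetime.now()
        # Silently expire anything older than max_age instead of
        # dumping weeks of nags on a returning user.
        fresh = [(t, m) for t, m in self.pending if now - t <= self.max_age]
        self.pending = []
        return [m for _, m in fresh]
```

Under this sketch, a game that queued a daily nag for forty days would deliver at most three recent messages when the user returns, rather than forty.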
- Control over how the interaction unfolds.
Unlike human persuasion, where the person being persuaded can react and change course, technology offers predefined options controlled by its creators, designers, and developers. When designing voice interfaces, creators have to define what their skill can do, and for everything else fall back to “Sorry, I can’t help with that.” Just last month, a social network blocked access to its mobile website, asking me to install its app to see the content, with no escape or dismiss option.
- Can affect emotion while still being emotionless.
New technology doesn’t have emotion. Even with recent advances in artificial intelligence, machines do not feel emotion the way humans do. Returning to the Google Duplex assistant call mentioned at the beginning: issues can arise when people are unaware that the voice on the other end is an emotionless machine, and treat it as another person just like them.
- Cannot take responsibility for negative outcomes of persuasion.
What happens when something goes wrong and the app or the technology cannot take responsibility? Do the creators shoulder it, even when their persuasion strategies have unintended outcomes or are misused by their partners? Mark Zuckerberg accepted responsibility for the Cambridge Analytica scandal before and during the congressional hearings.
With these unfair advantages at our disposal, how do we as creators, designers, and developers make ethical choices in our designs and solutions? For a start, step back and consider the ethical implications and impact of our work, then take a stand for our users. Many designers are pushing back and being vocal about the ethically questionable nature of some tech products and designs. There’s Tristan Harris, a former Google design ethicist, who has spoken out about how tech companies’ products hijack users’ minds. Sean Parker, Napster founder and former president of Facebook, described how Facebook was designed to exploit human “vulnerability.” And Basecamp’s Jonas Downey ruminates on how most software products are owned and operated by corporations whose business interests often contradict their users’ interests.
Design code of conduct
AIGA, the largest professional membership organization for design, has a series on Design Business and Ethics. “Design Professionalism” author Andy Rutledge also created a Code of Professional Conduct. Both are very detailed and cover the business of design, but not specifically the ethics of design that impacts or influences human behavior. Other professionals who affect the human mind do have ethical principles and codes of conduct, like those published by the American Psychological Association and the British Psychological Society. The purpose of these codes is to protect participants as well as the reputation of psychology and psychologists themselves. When using psychology in our designs, we can examine how the ethical principles of psychologists apply to our work as creators, designers, and developers.
Principles and questions
Using the Ethical Principles of Psychologists as a framework, I defined how each principle applies to persuasive design and listed questions related to ethical implications of design. These are by no means exhaustive, but are intended to be food for thought in each of these areas. Note: when you see ‘design’ in the questions below, it refers to persuasive techniques used in your design, app, product, or solution.
Principle A: Beneficence and nonmaleficence
Do no harm. Your decisions may affect the minds, behavior, and lives of your users and others around them, so be alert and guard against misusing the influence of your designs.
- Does your design change the way people interact for the better?
- Does the design aim to keep users spending more time than they intended to?
- Does the design make it easy to access socially unacceptable or illegal items that your users would not have easy access to otherwise?
- How may your partners (including third party tools and SDKs) or “bad guys” misuse your design, unknown to you?
- Would you be comfortable with someone else using your design on you?
- Would you like someone else to use this design to persuade your mother or your child?
Principle B: Fidelity and responsibility
Be aware of your responsibility to your intended users, unintended users, and society at large. Accept appropriate responsibility for the outcomes of your design.
- During the design process, follow up answers to “How might we…?” with “At what cost?”
- What is the impact of your design/product/solution? Who or what does it replace or impact?
- If your design were used opposite from your intended use, what could the impact be?
- Does your design change social norms, etiquette, or traditions for the better?
- Will the design put users in harm’s way or make them vulnerable, intentionally or unintentionally (one study estimated that Pokémon GO caused more than 100,000 traffic accidents)? How can that be prevented?
Principle C: Integrity
Promote accuracy, honesty, and truthfulness in your designs. Do not cheat, misrepresent, or engage in fraud. Where deception may be ethically justifiable to maximize benefits and minimize harm, carefully consider the need for it and its possible consequences, and take responsibility for correcting any resulting mistrust or other harmful effects.
- Do you need users’ consent? When asking for their consent, are they aware of what exactly they are consenting to?
- What’s the intent of the design? Is it in the best interest of the user or the creator? Are you open and transparent about your intentions?
- Does your design use deception, manipulation, misrepresentation, threats, coercion, or other dishonest techniques?
- Are users aware or informed if they are being monitored, or is it covert?
- Is your design benefiting you or the creators at the expense of your users?
- What would a future whistleblower say about you and your design?
Principle D: Justice
Exercise reasonable judgment and take precautions to ensure that your potential biases and the limitations of your expertise do not lead to or condone unjust practices. Your design should benefit both the creators and users.
- Does your design contain any designer biases built in (gender, political, or other)?
- Does your design advocate hate, violence, crime, or propaganda?
- If you did this in person, without technology, would it be considered ethical?
- What are the benefits to the creators / business? What are the benefits to the users? Are the benefits stacked in favor of the business?
- Do you make it easy for users to disconnect? Do users have control and the ability to stop, without being subject to further persuasion through other channels?
Principle E: Respect for people’s rights and dignity
Respect the dignity and worth of all people, and the rights of individuals to privacy and confidentiality. Special safeguards may be necessary to protect the rights and welfare of vulnerable users.
- Are your designs using persuasion with vulnerable users (children, seniors, the poor)?
- Does your design protect users’ privacy and give them control over their settings?
- Does the design require unnecessary permissions to work?
- Can your design use a less in-your-face technique to get the same outcome (e.g., speed monitors on roads instead of surveillance)?
- Does your design make your users a nuisance to others? How can you prevent that?
If you have been designing with white-hat techniques, you may appreciate the ethical issues discussed here. If you have been designing in the gray or black areas, thank you for making it all the way to the end. Ethics matter in persuasive design because ethical designs do not prey on the disadvantages users face when interacting with technology. As creators, designers, and developers, we have a responsibility to stand up for our users.
Do good. Do no harm. Design ethically.
Further reading
- Influence: The Psychology of Persuasion, by Robert B. Cialdini
- Evil by Design, by Chris Nodder
- Ethics for Designers (downloadable toolkit): https://www.ethicsfordesigners.com/
- The Tarot Cards of Tech (to help you consider the impact of technology): https://www.artefactgroup.com/the-tarot-cards-of-tech/
- SimCity and designer bias: https://www.gamasutra.com/view/feature/172835/how_do_you_put_the_sim_in_simcity.php
- Upcoming Android features to make your smartphone less addictive: https://www.washingtonpost.com/news/the-switch/wp/2018/05/08/google-wants-to-cure-your-smartphone-addiction/?utm_term=.f23824450581