
If you’re an online marketer, you could be forgiven for feeling like the GDPR is an atom bomb that was just dropped on your business model. The consensus seems to be that almost all companies with an online presence will be essentially forced to comply with the GDPR, whether they are in the EU or not. Of course, if you’re a user whose data has been sold, rented, stolen, or otherwise harvested, things look a whole lot rosier. One of the basic premises of the GDPR is an opt-in model: “Data collection for European users, for example will require frequent and explicit consent (‘opt-in’), which can be withdrawn at any time ‘without detriment.’” (Downes, Larry. “GDPR and the End of the Internet’s Grand Bargain.” Harvard Business Review, 2018.) Another key tenet is “data portability”: “…the data subject shall have the right to transmit those personal data… retained by an automated processing system, into another one…” (GDPR, Article 20.)
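The regulation leaves open what a portable transfer between two “automated processing systems” would actually look like. As a minimal sketch, assume one service serializes a user’s personal data into a structured, machine-readable document that a second system can ingest; the function and field names below are invented for illustration, not drawn from any real GDPR tooling:

```python
import json

def export_user_data(user):
    """Serialize a user's personal data into a machine-readable,
    portable format (JSON here). Field names are purely illustrative."""
    return json.dumps({
        "schema_version": "1.0",
        "personal_data": {
            "name": user.get("name"),
            "email": user.get("email"),
            "preferences": user.get("preferences", {}),
        },
    }, indent=2)

def import_user_data(blob):
    """A second automated processing system ingests the same export."""
    return json.loads(blob)["personal_data"]

# Round trip: export from one system, import into another.
exported = export_user_data({"name": "Ada", "email": "ada@example.com"})
imported = import_user_data(exported)
```

The hard part in practice is not the serialization but agreeing on a shared schema, which is exactly what the sketch hand-waves with its made-up `schema_version` field.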

There is plenty of low-hanging compliance fruit that companies can harvest now: updating privacy policies, hiring data protection officers, disclosing cookies, and being transparent and ethical in data collection. But this obscures a lot of nuance. For example, how will personalized web services (Amazon, Spotify, Netflix, etc.) function if users choose not to opt in? Or, stated from the design perspective, how do they “gracefully degrade” in the absence of user permission, or with only partial permission? Do users need to opt in every time a third-party service is called via an API, for example? It’s not hard to imagine a great many more opt-ins as users traverse the various services that make up most modern web-based products. Perhaps an interesting side effect will be increased innovation around single sign-on and authentication technologies. However, it almost certainly means disrupted user experiences and task flows (at least in the short run), as users consider each request for personal data. Because “Data collectors can be held responsible for violations by third-party users” (Downes, Larry. “GDPR and the End of the Internet’s Grand Bargain.” Harvard Business Review, 2018), it might also spell the end for businesses that provide third-party functionality to products. The assumption is that companies will try to limit their exposure by building more of their products in-house, where they can control liability (think of third-party payment services like PayPal, Square, or Venmo). Alternatively, the opposite could be true: companies might outsource liability for users’ personal data to third-party specialists.
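What “graceful degradation” could mean in code is a personalization feature that checks consent first and falls back to a generic, data-free experience when consent is absent. A minimal sketch, assuming a hypothetical recommendation function (the consent flag and data shapes are my own, not any real service’s API):

```python
def recommend(opted_in, history, catalog):
    """Return personalized recommendations only if the user opted in;
    otherwise degrade gracefully to a generic, non-personalized list.
    No personal data is consulted on the fallback path."""
    if opted_in and history:
        # Personalized path: a stable sort puts familiar items first.
        return sorted(catalog, key=lambda item: item not in history)
    # Fallback path: e.g., an editorially curated top-three list.
    return catalog[:3]

catalog = ["jazz", "rock", "ambient", "folk"]
# Opted out: generic results, no personal data used.
generic = recommend(False, None, catalog)
# Opted in: the user's (hypothetical) listening history reorders the list.
personal = recommend(True, ["ambient"], catalog)
```

The design point is that the consent check sits at the top of the feature, so withdrawing consent simply routes every call to the anonymous branch rather than breaking the service outright.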

Another interesting takeaway is the rising popularity of conversational (“chat”) bots and the perceived need companies have to create “relationships” with their customers. In the world of value-based marketing, relationship building seems to be the chosen strategy for creating these personalized experiences. “An investment company, for example, may want to ask each prospect how much money she is looking to invest… If these questions are asked ‘so we can sell to you better,’ it is unlikely that the prospect will answer or engage. But, if these questions are asked ‘so that we can send you a weekly email that describes investment options…’ now the prospect may happily answer the questions because she will get something from the exchange of data.” (Wirth, Karl. “Personalization and Privacy in a GDPR World.” 2018.) The obvious implication is that asking customers for personal data via an impersonal dialog message may be a tougher sell than having a “personalized” chatbot ask for their permission. Of course, this creates a chicken-and-egg situation in which the chatbot needs permission to personalize its responses in order to build a personalized relationship with the user.

I’m somewhat skeptical, however, that users will entirely buy the idea that chatbots are anything more than glorified telephone assistants. Consumer sentiment seems to back up this view: “…many consumers no longer view this tradeoff as fair. In research conducted in 2015 by the University of Pennsylvania’s Annenberg School for Communication, 84% of Americans reported a desire for control over what marketers can learn about them online.” (Yeager, Bryan. “The GDPR: A Game-Changer for Personalized Marketing?” Gartner for Marketers, 2018.) And let’s not forget the cautionary tale of “Clippy,” Microsoft’s much-maligned early foray into personal (though not personalized) assistants. (Meyer, Robinson. “Even Early Focus Groups Hated Clippy.” The Atlantic, 2015.)

This opens a larger philosophical debate about “privacy by design” (the GDPR principle that businesses architect privacy into their products and services from the ground up). It seems likely that there will be a kind of arms race between the principles of privacy advocates, as enshrined in the GDPR, and those of marketers and businesses. It’s safe to assume that most businesses will pursue a two-pronged strategy: complying with the GDPR while also looking for competitive advantage around it. One highlight: a recent O’Reilly survey found that 43% of respondents are adopting the GDPR’s privacy-by-design approach and checking their analytics against it. (Lorica, Ben, and Nathan, Paco. “The State of Machine Learning Adoption in the Enterprise.” O’Reilly, 2018.) In short, architecting privacy into a product or service at launch doesn’t necessarily keep up with advances in technology, or with business models designed to thwart that privacy; there is a strong profit incentive to subvert the GDPR’s protections. Instead, privacy advocates should treat privacy as an ongoing challenge, the same way cybersecurity is approached: as the ways to thwart privacy protections evolve, so should the measures to defend them.

How might this apply to AI agents that don’t “collect” data about users but are capable of inferring it from public records or other known sources? There’s no explicit opt-in to a bot that crawls the web autonomously. One might assume that the author of the agent would be considered a collector in such a case.

A concrete (and obvious) example of a service experience that relies on personalization in different ways is Amazon. One of Amazon’s core consumer value propositions is online retail. At a minimum, consumers need to reveal credit information to pay for purchases, and an address they are in some way associated with to have those purchases shipped. These services would not function as currently designed without this information, and I’m not aware of (legal) technologies that would let users anonymously pay for and ship purchases (although this is a large part of the “dark web”). While it’s possible that Bitcoin or other blockchain-based payment approaches might work, they don’t seem to be supported currently. I assume this is because anonymity is often viewed as a way to facilitate criminal activity, not to mention that it conflicts with the many business models built around selling personal user data (the very models that motivated the GDPR in the first place). And these are just Amazon’s core services.

Consider other Amazon features and products, like Alexa running on Echo (or other current smart speakers). In the case of Alexa, users are essentially allowing Amazon to “bug” their home. Not only does it mine the ambient conversation around it, it can also interface with its environment via actuators in a “smart home.” It knows the state of the home and, like Santa Claus, probably even knows when you’ve been bad or good! Smart-home technology is an example of a product that likely wouldn’t exist in a strict GDPR world, although EULAs or non-networked home automation might offer some kind of workaround. (I’m doubtful that the GDPR would let users sign away their personal data in a EULA.)

What I find most frightening, however, is not single services or products from individual companies. It’s the joining together of individually relinquished personal data. For example, I might give Google permission to personalize my account so I can use Gmail. I might also give Amazon my personal data (as described above) so I can buy goods and services or control my smart home. Individually, I might consent to all of these things, but what if Google and Amazon were to enter into some kind of data-sharing agreement? Now they both have access to all that data and can form inferences about me that I didn’t even know were possible. This might sound far-fetched to the general public, but there are companies whose entire business model is the joining and selling of consumer data, and companies routinely enter into these kinds of deals. Consider that in 2009, researchers at Carnegie Mellon were able to guess the last four digits of people’s Social Security numbers using publicly accessible information. Just think what’s possible almost 10 years later. The GDPR has arrived not a moment too soon.
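The mechanics of such a join are trivial once both parties hold a common identifier, such as an email address. A toy sketch (both datasets and all field names are invented) of how two individually innocuous records combine into a profile neither company held alone:

```python
# Two independently collected datasets, each fairly innocuous on its own.
mail_provider = {"ada@example.com": {"city": "Berlin", "logins_per_day": 14}}
retailer = {"ada@example.com": {"purchases": ["smart lock", "baby monitor"]}}

def join_profiles(a, b):
    """Join two datasets on a shared identifier. The merged record
    supports inferences that neither party could make alone."""
    return {key: {**a.get(key, {}), **b.get(key, {})}
            for key in set(a) | set(b)}

merged = join_profiles(mail_provider, retailer)
# The combined profile now links location, behavior, and purchases
# under one identity.
profile = merged["ada@example.com"]
```

The point is that no new collection happens here at all; the privacy loss comes entirely from correlating data each user had already consented to give away separately.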
