March 17, 2012
SXSW came and went – and so did the second installment of our discussion about personalization, “How to personalize without being creepy,” on Sunday – a lively conversation with journalists and marketers alike (search Twitter for #sxnotcreepy for the play-by-play):
The room was packed. To break the ice, we kicked off with a few examples of personalized content and advertising “in the wild,” using green (not creepy) and red (creepy!) voting cards to gauge the mood in the room: what is acceptable, and when does personalization start to feel creepy? We never made it through the top 10 examples we had prepared, because after a few warm-ups we put re-targeting and the recent Target story to a vote – and that’s when the real room discussion kicked off:
You can read the complete story by Charles Duhigg on the NY Times website, but the essence is this: a teenage girl suddenly received coupons for diapers and other pregnancy-related items in the mail, and her upset father accused Target of trying to encourage her to get pregnant. When Target called a week later to apologize once more, it was the father who apologized instead: his daughter was indeed pregnant but hadn’t told him at the time. Target knew his daughter was pregnant before he did – how did they know?
Target habitually collects customer data from credit cards, coupons, online orders, etc. It turned out that Target had hired statisticians to identify how shopping habits could be used to predict whether a woman was pregnant and what her due date would be. The statisticians identified a number of products that were good indicators (think unscented soaps, lotions, etc.), and their algorithm had determined that the girl in question was most likely pregnant, at the end of her first trimester.
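An indicator-based prediction like this can be sketched as a simple weighted score. To be clear, the product names, weights, and threshold below are invented for illustration – this is not Target’s actual model, just a minimal sketch of the idea:

```python
# Hypothetical sketch of an indicator-based "pregnancy score".
# Product names, weights, and the threshold are invented for illustration.

PREGNANCY_INDICATORS = {
    "unscented_lotion": 0.30,
    "unscented_soap": 0.25,
    "calcium_supplement": 0.35,
    "large_cotton_balls": 0.20,
}

def pregnancy_score(purchases):
    """Sum the weights of any indicator products found in a purchase history."""
    return sum(PREGNANCY_INDICATORS.get(item, 0.0) for item in purchases)

def likely_pregnant(purchases, threshold=0.6):
    """Flag a shopper whose combined indicator weight crosses the threshold."""
    return pregnancy_score(purchases) >= threshold
```

A shopper buying unscented lotion and calcium supplements would cross the (made-up) threshold, while a basket of unrelated groceries would not.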
Target learned its lesson from this PR disaster, however. Instead of sending more girls pregnancy-related coupons, it has taken to obfuscating its knowledge by sprinkling other, completely unrelated (and largely useless) items into the same coupon mailings – making the pregnancy-related items appear random. But does that make it better or worse?
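The obfuscation step amounts to padding the targeted coupons with random unrelated ones and shuffling the result. Again a hypothetical sketch – the item names and padding count are invented:

```python
import random

# Hypothetical sketch of coupon obfuscation: pad targeted coupons with
# random unrelated items so the targeted ones look like part of a random mix.
UNRELATED_ITEMS = ["lawn mower", "wine glasses", "garden hose", "video game"]

def obfuscated_coupons(targeted, pad_count=3, rng=random):
    """Mix targeted coupons with random decoys and shuffle the mailing."""
    mixed = list(targeted) + rng.sample(UNRELATED_ITEMS, pad_count)
    rng.shuffle(mixed)
    return mixed
```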
In the following discussion a few nuggets crystallized as important when discussing the topic of targeting, privacy, and personalization:
- The “black box magic” breeds a lot of the concern and rejection around personalization: if it isn’t obvious to a user how data was obtained, it starts getting creepy. If it turns out you’ve been collecting data secretly all along, even a very transparent opt-in to personalization leaves a foul taste (e.g. personalizing someone’s news feed based on everything they read before opting in, without their knowing it was being tracked).
- Context and the relationship with the user matter. Customers are much more at ease getting personalized experiences from a brand they trust and with which they have a relationship that warrants deeper knowledge (that’s why eHarmony’s “black box” matching algorithm appears much less creepy than it could).
- There is distinct value in creating more relevance for the user. Cutting out stuff users don’t want to see or don’t care about is inherently a good thing, but getting it right is difficult: intent is a fleeting state, and once it has passed, relevance changes dramatically. (Amazon was cited as a prime example: it continues to suggest related items years after a purchase, e.g. a one-off present; recently it added a “Fix this recommendation” option to give users greater control.)
- Your data is the currency that buys a lot of free experiences – applications, news, services. You can’t expect a discussion of internet privacy to remain separate from a discussion of monetization, advertising, and subscriptions. The link isn’t binary, but there is a relationship that consumers need to understand.
- We all like free stuff, especially on the internet. Of the entire audience (a group acutely aware of privacy and data usage for advertising), only two people paid for their email service – everyone else used a free service monetized with behavioral or contextual advertising. That isn’t inherently bad – but it’s important to remember: “If you’re not paying for it, you’re not the customer – you are the product.”
The themes that emerged from the discussion for keeping an experience from feeling creepy revolved around four main pillars:
- Consent – Let the user opt-in (or at least opt-out).
- Control – Allow users to change their minds and to control the depth and breadth of the data being used; be transparent about what is used and how. This is where re-targeting appears creepy – there is no opt-in, and few people know how to control it.
- Trade-off – People understand that even free stuff has a cost. Should advertising-financed offerings give users a choice between higher costs with no tracking and lower costs with tracking? Let users participate in that trade-off decision.
- Trust – The consensus appeared to be that personalized experiences are much more palatable coming from a company the user has a trusting relationship with, or sees as an authority in its field.
The discussion also left a number of open questions:
- Creepy or not – does it even matter? Do people use Facebook less because they see their friends’ pictures in ads? Will there be a dip in sales at Target because of the story above?
- How does all this personalization work? Where does the data come from? Do companies just identify my behavior, or do they actually know “me”? How does re-targeting work?
- How can I navigate this topic as a company? What are the industry initiatives? Are there best practices and guidelines for using data? Most importantly – how is the legal framework changing in the US and in Europe?
Stay tuned on this blog for some posts delving deeper into these questions!