Samsung, Ericsson, Google
In the era of big data and personalization, websites and (mobile) applications collect an increasingly large amount of personal information about their users. The large majority of users decide to disclose some but not all of the information that is requested from them. They trade off the anticipated benefits against the privacy risks of disclosure, a decision process that has been dubbed the “privacy calculus”. Such decisions are inherently difficult, though, because they may have uncertain repercussions later on that are hard to weigh against the (possibly immediate) gratification of disclosure. How can we help users balance the benefits and risks of information disclosure in a user-friendly manner, so that they can make good privacy decisions?
Transparency and control
Privacy experts recommend giving users comprehensive control over what data they wish to share, and more transparency about the implications of their decisions. Transparency and control are also at the heart of existing or planned regulatory schemes. However, research in the past few years has unveiled a fair number of situations in which transparency and control do not increase people’s privacy, or even decrease it.
For example, while users claim they want full control over their data, they shy away from the hassle of actually exercising this control. Systems like Facebook that manage large amounts of personal data have to resort to “labyrinthine” privacy controls. As a result, most Facebook users do not seem to know the implications of their own privacy settings. Similarly, informing users about the rationale behind information requests does not make them more discerning in their privacy decisions, but merely makes them worry about privacy in general. For example, displaying a privacy label on an e-commerce website (a supposed vote of confidence) may decrease purchases rather than increase them.
Evidently, transparency and control do not work well in practice. Due to the complexity of privacy decisions and users’ bounded rationality, an increase in transparency and control often aggravates the problem by introducing choice overload and information overload.
Privacy nudges
An alternative approach to support users’ privacy decisions is to introduce a subtle yet persuasive cue (i.e. a "nudge") that makes people more likely to decide in one direction or the other. Nudges make it easier for people to make the right choice, without limiting their ability to choose freely.
A justification is a nudge that provides a succinct reason to disclose or not disclose a certain piece of information. Different types of justifications include giving a reason for the request (e.g. “We ask you for your age because our app is unsuitable for younger users”), highlighting the benefits of disclosure (e.g. “Our personalized results will be much better if you tell us your gender”), and appealing to the social norm (e.g. “90% of our users shared their location”). We have found that justifications are regarded as helpful, but overall they do not increase users’ disclosure or satisfaction; if anything, they decrease them. The other approach to nudging users’ privacy decisions is to provide sensible defaults. For example, framing a disclosure decision as an opt-in or opt-out decision, or changing the order of the requests, can influence how likely users are to disclose the information.
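To make this concrete, the sketch below shows one hypothetical way a system could attach such nudges to a disclosure request. The DisclosureRequest structure, the justification texts, and the build_request helper are illustrative assumptions made for this write-up, not part of the project itself.

    from dataclasses import dataclass

    # Hypothetical sketch: a disclosure request that carries a nudge, i.e. a
    # justification message plus a default framing (pre-checked = opt-out framing).
    @dataclass
    class DisclosureRequest:
        item: str              # the requested data item, e.g. "location"
        justification: str     # succinct reason shown next to the request
        default_opt_in: bool   # True = box pre-checked (opt-out framing)

    # Example justification styles discussed above (reason, benefit, social norm).
    JUSTIFICATIONS = {
        "reason":      "We ask for your {item} because our app needs it to work properly.",
        "benefit":     "Our personalized results will be much better if you share your {item}.",
        "social_norm": "90% of our users shared their {item}.",
    }

    def build_request(item: str, style: str, default_opt_in: bool) -> DisclosureRequest:
        """Assemble a nudged disclosure request for one data item."""
        return DisclosureRequest(
            item=item,
            justification=JUSTIFICATIONS[style].format(item=item),
            default_opt_in=default_opt_in,
        )

    if __name__ == "__main__":
        req = build_request("location", "social_norm", default_opt_in=True)
        print(req.justification)   # "90% of our users shared their location."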
The problem with privacy nudges is that they are the same for every user. Nudges that increase disclosure may improve personalization results, but may also cause more privacy-minded users to feel ‘tricked’ into disclosing more information than they would like. Protective nudges may help privacy-minded users, but may make it more difficult for less privacy-minded individuals to enjoy the benefits that disclosure would provide. Generally speaking, such one-size-fits-all nudges do not work because privacy decisions are highly user- and context-dependent: the fact that one person has no problem disclosing a certain item in a particular context does not mean that disclosure is equally likely for a different person, a different item, or a different context. Likewise, what is a convincing justification for a certain person to disclose a certain item in a particular context may be a completely irrelevant reason for a different person, a different item, or a different context.
Project goal: Developing a Privacy Adaptation Procedure
To move beyond the “one-size-fits-all” approach to privacy, the core goal of this project is to develop a Privacy Adaptation Procedure that offers tailored privacy decision support. This procedure first predicts users’ privacy preferences and behaviors based on their known characteristics. It then provides automatic initial default settings in line with each user’s “disclosure profile”: imagine that your smartphone learns about your privacy preferences and suggests default privacy settings for newly downloaded apps accordingly. The procedure will also show “tailored disclosure justifications” in situations where they are needed, but only to users who can be expected to react rationally to them, so that they do not cause privacy scares among other users.
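As an illustration only, here is a minimal Python sketch of the two adaptation steps just described, assuming (hypothetically) that a small set of disclosure profiles is already known and that labeled data about users is available; the user features, profile names, classifier choice, and default mappings are all invented for this example.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Step 1: predict a user's disclosure profile from known characteristics,
    # here (hypothetically) age, smartphone experience (0/1), and general
    # privacy concern on a 1-5 scale.
    X_train = np.array([[25, 1, 2], [61, 0, 5], [34, 1, 4], [19, 1, 1]])
    y_train = np.array(["open", "protective", "selective", "open"])  # known profiles
    profile_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # Step 2: map each profile to sensible default settings for a newly installed app.
    DEFAULTS = {
        "open":       {"location": True,  "contacts": True,  "camera": True},
        "selective":  {"location": True,  "contacts": False, "camera": False},
        "protective": {"location": False, "contacts": False, "camera": False},
    }

    def suggest_defaults(user_features):
        """Predict the user's disclosure profile and return matching default settings."""
        profile = profile_model.predict([user_features])[0]
        return profile, DEFAULTS[profile]

    print(suggest_defaults([29, 1, 3]))  # e.g. ('open', {'location': True, ...})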
In this project we have already taken several steps towards this goal. For example, we have shown that in many domains one can identify distinct subgroups of users with similar privacy preferences (“disclosure profiles”). The existence of such subgroups considerably simplifies the Privacy Adaptation Procedure. We have also demonstrated that tailored justifications can increase disclosure and satisfaction.
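For illustration, the following sketch shows one simple way such subgroups could be identified: clustering users’ per-item disclosure decisions with k-means. The toy data and the choice of k-means are assumptions made here for readability, not the project’s actual analysis.

    import numpy as np
    from sklearn.cluster import KMeans

    # Rows = users, columns = data items; 1 = disclosed, 0 = withheld.
    disclosure = np.array([
        # location, contacts, age, income
        [1, 1, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 1, 0],
        [0, 0, 0, 0],
        [1, 0, 1, 0],
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(disclosure)
    print(kmeans.labels_)           # disclosure-profile assignment for each user
    print(kmeans.cluster_centers_)  # per-item disclosure rate within each profile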
Expected benefits of the Privacy Adaptation Procedure
The proposed Privacy Adaptation Procedure strikes a balance between giving users no control over, or information about, their privacy at all (which is inadequate in highly sensitive situations, and may deter privacy-minded individuals) and giving them full control and information (which, due to the burden it imposes, is unsuitable in all other situations and for all other users). The procedure relieves users of part of the burden of privacy decisions by providing the privacy-related information and the amount of control that are useful, but not overwhelming or misleading. It thus enables users to make privacy-related decisions within the limits of their bounded rationality.
More Information:
Prof. Kobsa’s privacy research was featured in the Hot Research section of the Spring/Summer 2012 issue of the ISR Connector newsletter.