
Journal of Strategic Information Systems 24 (2015) 33–43


Handle with care: How online social network providers' privacy policies impact users' information sharing behavior

Jin Gerlach, Thomas Widjaja, Peter Buxmann
Technische Universität Darmstadt, Chair of Information Systems | Software Business & Information Management, Hochschulstraße 1, D-64289 Darmstadt, Germany

Article history: Received 7 November 2013; Accepted 23 September 2014; Available online 12 October 2014.

Keywords: Privacy policy; Privacy risk; Secondary data use; Online social network.

Abstract: Privacy policies determine online social network providers' options to monetize user data. However, these statements also intrude on users' privacy and, thus, might reduce their willingness to disclose personal information, which in turn limits the data available for monetization. Given these conflicting interests, we conducted an experimental survey to investigate the relationship between privacy policies and users' reactions. We show that users' privacy risk perceptions mediate the effect that changes in policies' monetization options have on users' willingness to disclose information. Our findings emphasize privacy policies as a delicate managerial concept for companies relying on data monetization.

© 2014 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.jsis.2014.09.001

Introduction

Online social networks (OSNs) are among the most popular websites on the Internet. It is vital for OSN providers to accurately characterize their network members on the basis of user-provided data, as targeted advertising is a cornerstone of most OSN businesses. However, the commercial use of data raises privacy issues for users (e.g., Acquisti and Gross, 2006). Users particularly fear that providers might misuse shared information, sell it to third parties, or aggregate the information into comprehensive personality profiles (Krasnova et al., 2009a).
Providers use their privacy policies to disclose how they handle user data (e.g., Milne and Culnan, 2004; Tsai et al., 2011),
and individuals who want to participate must agree to the OSN’s policy when signing up. Previous research has emphasized
the important role privacy policies play in users’ assessments of Internet services in general (e.g., Earp and Baumer, 2003; Hui
et al., 2007; Xie et al., 2006). Note that users need not read the policies in their entirety to become knowledgeable about their
contents, since media reports effectively make the policies’ commitments transparent to (potential) users—as has been seen in
the cases of Google and Facebook (e.g., Rizk et al., 2009; Vaknin, 2012; Washington Post, 2012). Furthermore, current tech-
nological developments allow users to visually evaluate a website’s privacy policy without having to read it (e.g., Gross, 2014).
For an OSN provider, the privacy policy's contents are an integral part of the business model, both enabling and restricting business opportunities. For example, a privacy-unfriendly policy might open avenues for the provider to monetize user data, but such policies also tend to scare off potential users or lead participants to disclose less information. In contrast,
a more privacy-friendly policy may attract many users, but simultaneously restrict the provider’s ability to utilize their data.
This trade-off makes it crucial for OSN providers to find a suitable data-handling strategy and articulate it in a privacy policy.
We term this trade-off pricing-by-privacy, which is similar to classic price-sales functions, where vendors have to find a balance between high prices coupled with low sales (privacy-unfriendly policies and smaller amounts of user data) and low
prices coupled with high sales (privacy-friendly policies and larger amounts of user data). The photo-sharing network Instagram provides an example of the challenge this trade-off poses for OSN providers. After being acquired by Facebook, Instagram changed its privacy policy in a way that would enable the provider to monetize user photos without having to pay or even notify the creators. This announcement immediately ignited a revolt within the community, and many users declared their intention to delete their accounts. Instagram reacted quickly, admitting that its updated privacy policy was confusing and promising to change the wording of the statements (McCullagh and Tam, 2012).
The central role that the contents of privacy policies play in the OSN context makes it crucial to understand the interplay
between the divergent interests of providers and users in this regard. However, there is a lack of studies on this topic in the
privacy literature (Bélanger and Crossler, 2011). This is surprising, given privacy policies' significant role in the businesses of OSN providers. A few studies have, however, examined the possible influences of privacy policies on users' behavior. Two issues in this research are particularly noteworthy: first, it has yielded inconsistent results concerning the significance of the hypothesized effects (e.g., Berendt et al., 2005; Hui et al., 2007). Second, existing studies have taken a more traditional perspective on privacy policies, stemming from an e-commerce context in which revenue streams are typically based on the trade of goods and services. In such a traditional context, a provider's success usually depends less on the monetization of user data and, consequently, less on explicit formulations within the privacy policies. As a result, these studies have used rather coarse concepts of privacy policies (e.g., absence vs. presence). However, as we will detail below, the nature of
privacy policies has changed and revenue for OSN providers is based on secondary data use, which Culnan has defined as ‘‘the
use of personal information for other purposes subsequent to the original transaction between an individual and an organi-
zation when the information was collected'' (Culnan, 1993: 342). A concept of privacy policies is therefore required that includes providers' options for monetizing user data, as such statements are central to OSN providers' revenue streams.
Bearing in mind this potential for research as well as the pricing-by-privacy trade-off, we integrate OSN providers’ busi-
ness goals and users’ privacy needs to investigate the research question: How are an OSN provider’s data-handling practices, as
reflected in the contents of its privacy policy, linked to users’ willingness to disclose personal information?
To answer this question, we first elaborate on the conceptual differences between privacy policies used in more tradi-
tional settings and those that result from OSN providers’ business models. We subsequently present our research framework
and introduce users’ perceptions of privacy risks as a mediator of the stimulus–response link between privacy policies and
the resulting user behavior, a link that previous research has treated as a black box. Based on an experiment with 1116 par-
ticipants, we analyze the impact that changes in privacy policies’ contents have on user reactions, systematically varying the
extent to which the policies allow for monetizing user data and consequently have the potential to harm users’ privacy. Note
that this research does not aim to provide a comprehensive model of users’ information sharing behavior, as excellent work
addressing the facilitators or inhibitors of such behavior is presented elsewhere (e.g., Dinev and Hart, 2006; Dinev et al.,
2008; Hann et al., 2007; Wakefield, 2013). This article rather takes an in-depth look at the interplay between privacy policies’
contents and users’ privacy-related perceptions and behavior in the context of OSNs.

Conceptual background

Privacy policies and their nature in an OSN context

An online privacy policy is a statement that informs users how a service provider handles personal user information (Awad and
Krishnan, 2006; Mai et al., 2010). As previously mentioned, studies examining the effect of privacy policies on users’ willingness to
provide personal information have led to mixed findings. For example, Berendt et al. (2005) found that the contents of privacy
policies do not have a significant effect on users’ information-sharing behavior. They provided a privacy-friendly policy to one
group and a privacy-unfriendly policy to another group. When members of these groups were asked to voluntarily provide their
address to a shopping agent, no significant behavioral differences were observed. Metzger (2006) confirmed this result, finding
that the presence of a privacy-friendly policy had no significant effect on participants’ information-sharing behavior. These results
seem to contradict a study conducted by Miyazaki and Fernandez (2000) who analyzed commercial websites’ privacy policies
along with consumer perceptions regarding their privacy practices. The authors found a significant link between the presence
of privacy-related statements and the likelihood of online purchases from the corresponding companies. In the same vein, Hui
et al. (2007) conducted an online shopping experiment to investigate the effect of a privacy policy on participants’ information
disclosures. They found that the presence of a privacy policy had a marginally significant effect on information disclosure.
To date, research has taken a more traditional perspective on the nature of privacy policies. Prior studies focusing on the
context of e-commerce have often assumed that posting a privacy policy is inherently beneficial to companies (e.g., Hann
et al., 2007; Hui et al., 2007; Xie et al., 2006). This holds true for companies that primarily generate revenue by trading goods
online, as the privacy policy can assure customers that no information is misused. However, business models of OSNs are
often based on the idea of secondary data use and, recently, more and more companies other than OSN providers have
started using their customers’ data beyond the original purposes. Thus, the function of privacy policies has changed in recent
years and differs significantly for OSNs compared to a more traditional context. For a conventional e-commerce merchant, a
privacy policy can signal trustworthiness (e.g., Hui et al., 2007; Pan and Zinkhan, 2006; Xie et al., 2006). In contrast, a privacy
policy directly enables and restricts business opportunities for OSNs by ensuring that the provider has particular rights to
monetize user data. Hence, OSN providers face the pricing-by-privacy trade-off and must weigh the effect their secondary
data use has on (potential) users concerned about privacy against opportunities for data monetization. To date, the extended
nature of privacy policies for many of today’s businesses and OSN providers has been overlooked.

The permissiveness of privacy policies

An OSN provider facing the pricing-by-privacy trade-off needs to have a clear understanding of how the opportunities
enabled by the statements of a privacy policy might have negative consequences for its users. In accordance with
Bandura and Walters (1963), we define the permissiveness of a privacy policy as the extent to which its statements allow
the provider to compromise an individual’s privacy in order to freely monetize the individual’s personal data. The statements
in a privacy policy thus entail a certain level of permissiveness. For example, consider a policy stating that the provider
requests information about its users’ age, gender, and interests to display targeted advertising. This policy gives the provider
a certain opportunity to monetize user data but simultaneously has the potential to affect the users’ privacy (e.g., Phelps
et al., 2000). If the policy also states that the provider may use users’ current geographic locations along with the previous
information for targeted advertising (enabling it to use location-based advertising), the policy’s level of permissiveness
increases. This, moreover, increases the potential to intrude on users’ privacy because the policy allows for actions that affect
users’ privacy in a more immediate way (e.g., Clarke and Wigan, 2011; Xu et al., 2009).
As discussed earlier, previous research has produced inconsistent findings on whether privacy policies significantly influ-
ence user behavior. A reason for this may be that the policies presented to participants in these studies implied different
levels of permissiveness. For example, Hui et al. (2007) analyzed how the mere presence of a privacy policy affects user
behavior. However, their study did not elaborate on the contents of the particular policy used, so the company’s stance
on the use of personal information was unclear. Similarly, Xie et al. (2006) tested the effect that presenting a privacy policy
to users has on their willingness to provide personal information. Their policy assured the participants that the company in
question had a secure connection and a privacy statement as well as a third-party privacy seal. But, again, the authors did not
provide information about the contents of the privacy statement. Berendt et al. (2005) and Metzger (2006) both compared
the behavioral outcomes of presenting study participants with privacy policies that provide weak versus strong privacy pro-
tection. However, a comparison of the studies’ strong (and weak) policies shows dissimilarities between the companies’
data-handling practices, and thus the two studies’ implications for users’ privacy differ.
Moreover, the rather coarse differentiation of privacy policies in these studies (e.g., the mere presence or absence of a pol-
icy, or whether the policy is strong or weak) makes it difficult to assess effects that slight changes to only parts of a policy’s
content might have on users’ perceptions. Yet from the perspective of OSN providers, it is crucial to delineate the conse-
quences of adding or removing different statements to or from a privacy policy concerning monetization options. Character-
izing privacy policies according to their level of permissiveness will account for the possibility that different policies may
present different opportunities for data monetization and thus have different possibilities for intruding on users’ privacy.

Conceptual framework

This study investigates the relationship between the contents of privacy policies and user behaviors in OSNs. We there-
fore investigate a causal chain from a privacy policy’s permissiveness, through users’ perceived privacy risks, and finally to
users' willingness to disclose personal information (Fig. 1).

Fig. 1. Conceptual framework. Providers' behavior (privacy policy permissiveness) relates to users' reaction and behavior: H1 denotes the direct effect of permissiveness on willingness to disclose, H2 the path mediated by perceived privacy risk.
A provider’s decisions regarding its data handling strategies result in a privacy policy that entails a certain level of per-
missiveness. The policy’s permissiveness is the extent to which the provider is free to pursue data monetization objectives
based on its users’ data. From the users’ perspective, however, the level of permissiveness also determines their anticipation
of the extent to which the provider may engage in unwanted behavior vis-à-vis the users’ personal information.
An Internet user’s perceived privacy risk is defined as ‘‘the degree to which an individual believes that a high potential for
loss is associated with the release of personal information to a firm’’ (Smith et al., 2011: 1001). We propose that risk percep-
tions play an intermediate role in the relationship between a privacy policy’s permissiveness and user behavior. Finally, we
define users’ willingness to disclose personal information as the extent to which they are willing to provide information while
participating in an OSN (Krasnova et al., 2010). This construct represents our main dependent variable, indicating the
amount of valuable data a provider will receive from its users.

Hypothesis development

As discussed above, the uncontrolled levels of permissiveness that characterize the privacy policies used in previous
research might explain these studies’ mixed results. Therefore, it seems worthwhile to analyze whether explicit variations
in permissiveness actually do influence user behavior. We argue that a privacy policy’s permissiveness should affect users’
information-sharing behavior, as they may anticipate greater or lesser potentials for harmful behavior on the part of the
provider (e.g., Earp et al., 2005). For example, a user might be willing to reveal personal data to a provider that requests
information about the user’s political, religious, or sexual orientation. However, the user might reconsider sharing informa-
tion if the provider also requests a street address since this would not only enable targeted advertising but would also allow
third parties to approach the user more directly than just online. Furthermore, an individual might be more willing to dis-
close personal information if the provider’s privacy policy guarantees that the data will only be used for purposes related to
the network's functionality (e.g., to search for other users). Moreover, a provider could declare that any transfer of user pro-
files to third parties requires the users’ explicit permission. This could increase users’ confidence in providing personal infor-
mation, countering fears that their information might be disclosed in marketing transactions without their knowledge
(Culnan, 1993; Phelps et al., 2000).
Despite studies that have not found privacy policy statements to influence users’ willingness to provide information (Berendt
et al., 2005; Metzger, 2006), there is evidence that a policy’s permissiveness does affect behavior. Hui et al. (2007) have shown
that users worry about the amount of information requested by a provider. In addition, Phelps et al. (2000) found that consumers
are more hesitant to provide information when the data collected by a provider increases the likelihood that specific individuals
can be identified. Finally, 69% of the participants in a study conducted by Hoffman et al. (1999) stated that a site’s unwillingness
to state how the data will be used is a reason for refusing to provide personal information to an Internet provider. These
examples illustrate ways in which users might react to providers depending on the permissiveness of their privacy policies.
Our first hypothesis proposes a stimulus–response effect. By understanding a privacy policy’s permissiveness, users
should be able to more clearly assess the potential for unwanted behavior on the part of the provider when disclosing per-
sonal data (Milne and Culnan, 2004). Users should therefore adjust their information-sharing behavior accordingly:

H1. A privacy policy’s permissiveness is negatively related to users’ willingness to disclose personal information.

Besides the effects of different levels of permissiveness, another possible reason for previous studies’ mixed outcomes is
the omission of intermediate variables. To date, studies have not explored intervening constructs that mediate the relation-
ship between privacy policies and user reactions (Bélanger and Crossler, 2011). Like H1, previous studies have treated privacy policies' effects on user behavior as a black box (e.g., Berendt et al., 2005; Hui et al., 2007). An exception is the study
by Li et al. (2011), who proposed that the presence of a privacy policy has an indirect effect on users’ information sharing
behavior via their perceptions of risk. However, the study found a privacy policy’s mere presence (whatever its level of per-
missiveness) to have a non-significant effect on users’ risk perceptions. In the following, we argue that perceived privacy
risks should nevertheless be included in an analysis of privacy policy permissiveness and user behavior because they trans-
mit the effects of the permissiveness on that behavior.
With respect to the link between privacy risks and user behavior, several studies have found users’ privacy risk percep-
tions to be key predictors of their online behavior, for instance their willingness to disclose personal information (Dinev and
Hart, 2006; Malhotra et al., 2004). The rationale behind this is that increased privacy risks are expected to reduce the net
benefits associated with sharing information on a website, which leads to less information disclosure (Culnan and Bies,
2003; Dinev and Hart, 2006).
In contrast, the influence a privacy policy’s content (especially its level of permissiveness) has on the perception of privacy
risks has not been subject to empirical studies beyond the non-significant relationship that Li et al. (2011) found. Despite this,
we argue that a privacy policy’s permissiveness affects the perceived privacy risk by addressing its underlying structure. Risk
perceptions are often described as the mathematical product of two factors: the more likely and the more severe the potential
for harm to an individual, the higher that individual’s perception of risk will be (Dowling and Staelin, 1994; Peter and Tarpey,
1975). Furthermore, from a user’s point of view, a privacy policy serves as a means to reduce information asymmetries between
the user and the provider in respect of the latter’s handling of personal information (Milne and Culnan, 2004). Hence, with
reduced information asymmetries, a user should be able to more clearly assess distributions for both factors (likelihood and
severity) constituting the risk of unwanted behavior by the provider. If it becomes apparent that sharing information with
the provider implies monetization practices on the part of the provider with highly probable or severe adverse outcomes
for a user’s privacy, that user’s perception of risk is likely to increase. If a policy contains ambiguous statements, and thus only
partially reduces information asymmetries, the user will nevertheless draw conclusions about the meaning of these state-
ments, relying on heuristics to assess the risk of using the service (Slovic et al., 2004; Trumbo, 1999).
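Expressed compactly, this two-factor structure of risk perception can be written as follows. The notation is our illustration in the spirit of Peter and Tarpey's (1975) perceived-risk model, not a formula from this study:

$$ \text{OPR} = \sum_{i} P_i \cdot S_i $$

where OPR is the overall perceived risk, and $P_i$ and $S_i$ denote the perceived likelihood and severity of privacy harm $i$ (e.g., unwanted secondary use or third-party transfer). A more permissive policy raises the $P_i$ (and, for statements such as location collection, the $S_i$) that a user can infer, and thus raises overall perceived risk.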
Our second hypothesis captures the idea that perceived privacy risks have a mediating role regarding the effect of a pol-
icy’s permissiveness on users’ willingness to disclose information. In terms of the widely used stimulus–organism–response
framework (e.g., Jacoby, 2002), we argue that an individual’s perception of a policy’s permissiveness (stimulus) leads to an
assessment of privacy risks by the individual (organism) which will, in turn, result in a certain behavior (response). It is
important to note that the intervening psychological assessment is of a subjective nature. Therefore, two ‘‘organisms’’ assess-
ing the same scenario might have very different privacy risk perceptions (Conchar et al., 2004; Dowling and Staelin, 1994;
Hann et al., 2007). Hence, a significant share of variance should remain unexplained if perceived privacy risks are omitted
from this relation. In sum, we hypothesize:

H2. Privacy risk perceptions mediate the effect of a privacy policy’s permissiveness on users’ willingness to disclose personal
information on OSNs. The higher a privacy policy’s degree of permissiveness, the greater the users’ perceptions of privacy
risks, which are in turn negatively related to their willingness to disclose information.

Methodology

As mentioned above, prior research’s mixed findings on privacy policy contents’ effects on user behavior are possibly due
to these studies’ coarse differentiation of privacy policies (e.g., Berendt et al., 2005; Hui et al., 2007; Metzger, 2006; Xie et al.,
2006). Owing to the centrality of privacy policies for OSN providers’ business models, we have chosen to investigate this
effect in detail, using an experimental survey as our research method.
The primary advantage of experiments lies in establishing causality through the effective manipulation of independent
variables. However, it is also necessary to keep several factors constant in the research setting to prevent confounding factors
(e.g., Coolican, 2009; Kirk, 2003). We thus decided to use a hypothetical scenario, which is a common approach in privacy-
related research (e.g., Hann et al., 2007; Malhotra et al., 2004; Pan and Zinkhan, 2006). Hypothetical scenarios are used to
prevent influences from external variables, such as brand effects and loyalty (e.g., Pan and Zinkhan, 2006). To keep the idea
of the OSN’s capabilities relatively constant among the participants, the survey contained a detailed description of the net-
work’s features, representing functionalities common to most real OSNs (e.g., contact lists, private messaging, and content
sharing). In order to avoid variance introduced by differing perceptions of the network’s user base, our description of the
hypothetical OSN specifically stated: ‘‘many of your friends and colleagues are members of the network as well.’’
It needs to be mentioned that, although experimental settings allow for observing an effect ceteris paribus, it is possible (and even likely) that, in practice, other factors also drive variation in the dependent variables. We address the issues related to this characteristic of our research method in the discussion section of this article.

Privacy policy construction

We used vignettes to vary the permissiveness of the OSN’s privacy policies. Vignettes are used to study systematically
varied contexts, creating a standardized understanding of the stimulus across respondents (Alexander and Becker, 1978).
All participants received the same OSN description and measurement items, but each participant was randomly presented
with one of eight vignettes that implemented privacy policies with different degrees of permissiveness.
To create the different vignettes for different levels of permissiveness, we employed a 2 × 2 × 2 factorial design (three dimensions with two manifestations each, yielding the eight vignettes). As business
models of OSNs rely heavily on secondary data use, we follow Culnan’s (1993) proposal that a company’s data-processing
activities comprise three elements: the acquisition, use, and transfer of customer information. Given this backdrop, we used
the following dimensions to vary the contents of a privacy policy, significant to an OSN provider’s revenue:

– The comprehensiveness of data collection reflects statements regarding the amount of personal information acquired
from the user for monetization purposes. The more information a provider acquires about a user, the more precisely
the user can be characterized.
– The purposes of data use reflect a provider’s statements on how personal information will be used. The greater the vari-
ety of data use options a privacy policy allows for, the more opportunities a provider will have to monetize the data.
– A user’s control over the transfer and use of personal information reflects statements regarding the extent to which the
provider can use that personal information without needing to worry about the user’s interference. For example, a pro-
vider could acquire the right to transfer user information to third parties without having to ask the user for permission.

These three dimensions capture current business practices well (e.g., Mai et al., 2010; Malhotra et al., 2004). An exami-
nation of three popular OSNs’ privacy policies (Facebook, Google+, and LinkedIn) shows that statements regarding the com-
prehensiveness of data collection, the purposes of data use, and users’ abilities to control the transfer and use of their data
are integral parts of these policies.
We formulated a permissive manifestation and a non-permissive phrasing for each dimension (Table 1). All individual
statements underwent iterations of pretesting and were rephrased accordingly. Specifically, we asked pretest participants
to assess the permissiveness associated with the statements. This procedure ensured that the permissive (non-permissive) manifestations of the three policy dimensions were perceived as equally permissive (non-permissive). Note that the vignettes resulting from this framework are not as detailed as typical real-world privacy policies; using statements of realistic length would have impeded the balanced levels of permissiveness required for a systematic manipulation of our independent variable.

Table 1
Privacy policy construction framework.

Comprehensiveness
– Non-permissive (0): Regarding the data you provide, solely your name and (if provided) your age and gender will be collected by the provider for further use.
– Permissive (1): The provider collects any personal data that you either provide while using the service or that is provided by your contacts about yourself. Furthermore, computer-based technology will be employed to identify your interests more precisely. Additionally, the provider collects your current geographic position.

Purposes of use
– Non-permissive (0): The provider uses this collected data for the purpose of making contact search and maintenance easier for you (e.g., to search for other users).
– Permissive (1): This data collected will potentially be used for future purposes, which the provider does not need to explicate to you.

Control and transfer
– Non-permissive (0): You can alter or delete this collected data at any point of time. Sharing this data with third parties is only possible if you actively agree to it.
– Permissive (1): Additionally, you transfer the right to the provider to share this collected data with third parties.
We then conducted a Q-sort among 11 expert judges to validate whether, from an OSN provider’s point of view, our state-
ments could be seen as manifestations of their underlying dimensions. The overall placement ratio was 92.4% and the aver-
age Cohen’s Kappa was 80.0%. Both values indicate an appropriate level of validity (e.g., Moore and Benbasat, 1991).
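Validity indices of this kind can be reproduced with standard tools. The following sketch is illustrative only: the statement labels and judge data are hypothetical, and scikit-learn's cohen_kappa_score is used here for pairwise inter-judge agreement (the paper does not state which kappa variant was averaged):

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical Q-sort data: the dimension each statement was written for,
# and the dimensions assigned by (here) three of the expert judges.
intended = ["comprehensiveness", "purposes", "control", "purposes", "control"]
judges = [
    ["comprehensiveness", "purposes", "control", "purposes", "control"],
    ["comprehensiveness", "purposes", "control", "control", "control"],
    ["comprehensiveness", "purposes", "purposes", "purposes", "control"],
]

# Overall placement ratio: share of judgments hitting the intended dimension.
hits = sum(a == t for judge in judges for a, t in zip(judge, intended))
print("placement ratio:", hits / (len(judges) * len(intended)))

# Average pairwise Cohen's kappa across judges (chance-corrected agreement).
kappas = [cohen_kappa_score(a, b) for a, b in combinations(judges, 2)]
print("average kappa:", sum(kappas) / len(kappas))
```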
In each of the eight vignettes, the privacy policies followed the construction order: comprehensiveness, purposes of use,
and the control and transfer of personal information. For the remainder of this study, we denote a constructed privacy policy
as a triple, with ‘‘0’’ representing a non-permissive manifestation of a policy dimension and ‘‘1’’ representing the permissive
version. For example, we presented the policy denoted by the triple (1, 1, 0) to the participants as follows:

1. The provider collects any personal data that you either provide while using the service or that is provided by your con-
tacts about yourself. Furthermore, computer-based technology will be employed to identify your interests more pre-
cisely. Additionally, the provider collects your current geographic position.
2. This data collected will potentially be used for future purposes, which the provider does not need to explicate to you.
3. You can alter or delete this collected data at any point of time. Sharing this data with third parties is only possible if
you actively agree to it.
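To make the construction concrete, here is a minimal sketch of how the eight vignettes follow from the 2 × 2 × 2 design; the dictionary keys and abridged phrasings are ours, with the exact statements given in Table 1:

```python
from itertools import product

# Abridged non-permissive (index 0) and permissive (index 1) phrasings per
# dimension; see Table 1 for the full statements used in the study.
PHRASINGS = {
    "comprehensiveness": ("Only name, age, and gender are collected.",
                          "Any provided data, inferred interests, and "
                          "geographic position are collected."),
    "purposes_of_use": ("Data is used only to support contact search.",
                        "Data may be used for unspecified future purposes."),
    "control_and_transfer": ("Data is editable; third-party sharing requires "
                             "active consent.",
                             "The right to share data with third parties is "
                             "transferred to the provider."),
}
ORDER = ["comprehensiveness", "purposes_of_use", "control_and_transfer"]

def build_vignette(triple):
    """Assemble a policy in the fixed construction order from a 0/1 triple."""
    return [PHRASINGS[dim][level] for dim, level in zip(ORDER, triple)]

# All eight vignettes; a policy's permissiveness level is the sum of its triple,
# e.g., (1, 1, 0) combines permissive collection and use statements with the
# non-permissive control statement and receives level 2.
for triple in product((0, 1), repeat=3):
    print(triple, "permissiveness:", sum(triple), build_vignette(triple)[0])
```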

Measures and pretest

Having ensured that all permissive (non-permissive) phrasings were rated as equally permissive (non-permissive) during
the pretest, we varied a privacy policy’s permissiveness as the number of permissive phrases within each policy. For exam-
ple, policy (0, 1, 0) received a permissiveness level of 1 while policy (1, 1, 1) received a level of 3. All latent variables were
measured on 7-point scales using indicators established in information systems research.
Drawing on extant research on privacy-related behavior, we included trust in Internet firms (Dinev and Hart, 2006) as
well as age and gender (e.g., Smith et al., 2011) as control variables in our analysis. A user’s general trust in Internet firms
(i.e., ‘‘the confidence that personal information submitted to Internet websites will be handled competently, reliably, and
safely,’’ (Dinev and Hart, 2006: 64) should increase the user’s perception of safety in an Internet environment and thus
positively relate to the user’s willingness to provide personal information (e.g., Dinev and Hart, 2006; Hui et al., 2007). Fur-
thermore, there is evidence that demographic differences might impact privacy-related behavior (e.g., Smith et al., 2011).
We conducted an additional preliminary study in order to test the scales and the experimental setup (n = 63). Each par-
ticipant was asked to assess the items and to leave comments, which we used to further enhance our questionnaire. These
procedures ensured a high degree of understandability for both the measures and the privacy policies. As noted earlier, we
followed previous studies and introduced a hypothetical OSN in our questionnaire. Our pretest showed that all participants
could easily imagine the OSN service we presented and found its characteristics easy to understand.
To assess the internal consistency of the measures, we computed Cronbach’s alpha for all latent variables. All scales met
the required threshold of .70 (e.g., MacKenzie et al., 2011), except for the control variable trust, which fell marginally short (.68). In an exploratory factor analysis, all factors had eigenvalues greater than 1, with the first factor explaining 34.3% of the total variance. This also indicates that our data are unlikely to be affected by common method bias (Podsakoff et al., 1984). The final scales are presented in the Appendix.
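For readers who want to verify such reliability figures on their own data, a minimal Cronbach's alpha implementation looks as follows (illustrative code, not the authors'; the simulated responses are placeholders):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale sums
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated 7-point responses to a 3-item scale driven by one latent trait.
rng = np.random.default_rng(42)
latent = rng.normal(4.0, 1.2, size=(200, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.8, (200, 3))), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high, as items share variance
```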

Procedure and sample

Data was collected from German Internet users in April and May 2012. We recruited participants by posting our survey
link on three websites: a news website, the website of a radio station focusing on spoken content (e.g., news, reports, etc.),
and the website of a radio station explicitly addressing a younger audience. This diversification should ensure that our sam-
ple represents a broad spectrum of individuals, from digital natives to digital immigrants and including both the working
population and the non-working population. The questionnaires were available for three consecutive weeks. Each partici-
pant was randomly presented with one of the eight vignettes. The items for general trust in Internet firms were surveyed
before the vignettes were presented to the participants in order to prevent biasing attributable to more or less permissive
policy contents. Participants’ perceived privacy risks and willingness to disclose personal information were assessed after
the policy was presented in order to capture the policy-specific perceived risks and user behavior.
To further ensure that only users who would actually read or skim privacy policies in a real setting were considered in our
analysis of policies’ effects on risk perceptions, all participants were asked: ‘‘If you were interested in joining this network,
would you, under real circumstances, read the provider’s privacy policy?’’ We only presented the privacy policy to partici-
pants who chose either ‘‘probably yes’’ (37%) or ‘‘I think I would skim the policy’’ (44%) before assessing the remaining mea-
surement items. These numbers are consistent with prior empirical findings that the majority of users read privacy policies
when they visit a website for the first time or are unclear about the provider’s data-handling practices (Earp and Baumer,
2003; Milne and Culnan, 2004). Participants who chose the option ‘‘probably not’’ were excluded from our analysis. Hence,
our overall sample was reduced from 1375 to 1116 participants (664 male and 452 female). Turning to the participants’ ages,
8.2% were 19 or younger, 21.5% were between 20 and 29, 21.1% were between 30 and 39, 22.8% were between 40 and 49,
14.8% were between 50 and 59, and 11.6% were 60 or older. As participants were randomly assigned to the different permis-
siveness conditions, we obtained group sizes between 118 and 161 participants per experimental condition. Duncan and Waller's multiple-range test confirmed that no pair of groups differed significantly from each other (at the .05 level) with respect to the control variables.
As in most studies, our procedure might be subject to a non-response bias. Specifically, it is possible that participants who
chose to complete our survey were more interested in Internet privacy issues than those who did not participate. To analyze
this possibility, Armstrong and Overton (1977) suggest comparing early respondents with late respondents. The underlying
assumption is that individuals who respond early are more interested in the survey’s subject than are late respondents.
Hence, we analyzed whether the first 100 and the last 100 respondents differed significantly with respect to our study’s vari-
ables. Running a multivariate analysis of variance including all study and control variables, we did not observe significant
differences between early and late respondents.
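This early-versus-late comparison can be sketched as below; the column names ('risk', 'disclose', 'trust', 'age') are hypothetical stand-ins for the study and control variables, and statsmodels' MANOVA is one way to run the omnibus test:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def nonresponse_check(df: pd.DataFrame, n: int = 100):
    """Compare the first and last n respondents (df ordered by submission)."""
    sub = pd.concat([df.head(n).assign(group="early"),
                     df.tail(n).assign(group="late")])
    # Multivariate test across all study and control variables at once;
    # a non-significant result suggests no early/late (non-response) bias.
    return MANOVA.from_formula("risk + disclose + trust + age ~ group",
                               data=sub).mv_test()
```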

Results

We used OLS regression analyses to test our hypotheses. Our hypotheses concern either a direct effect of privacy policies’
permissiveness on users’ willingness to disclose personal information or an indirect link between the two variables mediated
by privacy risk perceptions. To simultaneously test the two hypotheses, we used Baron and Kenny’s (1986) four-step
approach, which is commonly used to analyze mediated effects (e.g., De Jong and Elfring, 2010; Venkatesh and Bala,
2012). The first condition for establishing a mediating role of perceived privacy risk is that the permissiveness of a privacy
policy must significantly affect privacy risk perceptions. Thus, we regressed risk perceptions on privacy policy permissive-
ness and the control variables. Table 2 (model 2) shows that, in addition to the significant effects of trust and age, a privacy
policy's permissiveness affects perceived privacy risks positively and significantly (β = .38, p < .001).
The second condition for confirming our mediation hypothesis is that the permissiveness of a privacy policy must affect a
user’s willingness to disclose personal information. This also represents our stimulus–response hypothesis (H1). Hence, in
the next step, we regressed our dependent variable on the independent variable and the control variables. As Table 2 (model 4) shows, a privacy policy's permissiveness is negatively related to users' willingness to disclose personal information (β = -.09, p < .001). Thus, our data confirmed H1.

Table 2
Mediation test regression analysis.

                                   Perceived privacy risk     Users' willingness to disclose
Independent variables              Model 1    Model 2         Model 3    Model 4    Model 5    Model 6
Trust in internet firms            -.25***    -.25***         .24***     .24***     .16***     .16***
Age                                .07*       .08**           -.22***    -.22***    -.20***    -.20***
Gender                             .00        .01             .00        .01        .00        .00
Privacy policy permissiveness                 .38***                     -.09***               -.03
Perceived privacy risk                                                              -.31***    -.32***
R²                                 .07        .22             .12        .13        .21        .21
Adjusted R²                        .07        .22             .12        .13        .20        .21
Difference in R²                   .07        .15             .12        .01        .08        .00
p of F-test for R² difference      .00        .00             .00        .00        .00        .27

Values show standardized regression coefficients. * p < .05. ** p < .01. *** p < .001.
Third, we regressed users' willingness to disclose personal information on perceived privacy risks, privacy policy permissiveness, and the control variables. Table 2 (model 6) displays the results of this regression, showing that privacy risk perceptions significantly and negatively affect users' willingness to disclose personal information (β = -.32, p < .001). Furthermore, the effect of privacy policy permissiveness on users' willingness to disclose personal information became insignificant, indicating full mediation (Baron and Kenny, 1986).
Finally, to further assess the significance of the mediating effect, we conducted Sobel's test for mediation. The Sobel test statistic (z = 6.42, p < .001) confirmed a significant mediating effect of privacy risk perceptions. Comparing the stimulus–response model (model 4) to the mediated model (model 5), we also observed a significant increase in R², by .08.
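For reference, the Sobel statistic tests the indirect effect formed by path a (permissiveness → perceived risk) and path b (perceived risk → willingness to disclose). In its standard form (the paper does not state which variant was used):

$$ z = \frac{\hat{a}\,\hat{b}}{\sqrt{\hat{b}^{2} s_{\hat{a}}^{2} + \hat{a}^{2} s_{\hat{b}}^{2}}} $$

where $s_{\hat{a}}$ and $s_{\hat{b}}$ are the standard errors of the two path estimates; |z| > 1.96 indicates a significant indirect effect at the .05 level.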
In sum, these steps revealed that a privacy policy's permissiveness directly affects users' willingness to disclose information (β = -.09, p < .001, model 4), confirming H1. However, when we included perceived privacy risks as a mediator, the direct effect of privacy policy permissiveness on users' willingness to disclose information became insignificant. As both the effect of privacy policy permissiveness on perceived privacy risks (β = .38, p < .001, model 2) and that of risk perceptions on willingness to disclose information (β = -.32, p < .001, model 6) were significant, full mediation was established, confirming H2 (Baron and Kenny, 1986).
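The four-step procedure and the Sobel test can be reproduced in a few lines. This is an illustrative sketch with hypothetical column names ('perm', 'risk', 'disclose', plus the controls), not the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

CONTROLS = "trust + age + gender"

def mediation_test(df: pd.DataFrame):
    # Step 1: the stimulus must predict the mediator (permissiveness -> risk).
    m1 = smf.ols(f"risk ~ perm + {CONTROLS}", df).fit()
    # Step 2: the stimulus must predict the response (permissiveness -> disclosure).
    m2 = smf.ols(f"disclose ~ perm + {CONTROLS}", df).fit()
    # Steps 3-4: with the mediator included, the mediator must stay significant;
    # full mediation if the direct effect of 'perm' becomes insignificant.
    m3 = smf.ols(f"disclose ~ perm + risk + {CONTROLS}", df).fit()

    # Sobel z for the indirect effect a*b (cf. the formula above).
    a, se_a = m1.params["perm"], m1.bse["perm"]
    b, se_b = m3.params["risk"], m3.bse["risk"]
    z = a * b / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return m1, m2, m3, z
```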

Discussion

We subsequently discuss our findings against the backdrop of our study’s experimental nature and the consequences of
this research design. In order to isolate the single effect of our experimental factor, other possible influences, besides our
control variables, on the dependent variable (i.e., users’ willingness to disclose information) were excluded from this study.
Hence, the results of our experiment suggest that privacy policies affect risk perceptions and thus influence users’ behavior,
ceteris paribus. However, this also implies that, due to additional antecedents of users' risk perceptions and behavior, the effect observed in this study might not surface in pure form in a real-world setting. For example, a real-world OSN has a reputation and a history regarding the handling of user data, and both should influence users' risk perceptions and
behavior. Therefore, our discussion does not give a comprehensive explanation of users’ information sharing behavior (as
research regarding this endeavor exists elsewhere; see, e.g., Dinev and Hart, 2006; Hann et al., 2007; Krasnova et al.,
2010; Wakefield, 2013). Instead, it discusses the implications of knowing how (ceteris paribus) privacy policies influence
users’ perceived privacy risks and behavior.

Implications for research

Our research aimed to clarify how an OSN provider’s data-handling practices, reflected in its privacy policy’s contents, are
linked to users’ willingness to disclose personal information. We subsequently discuss the different facets of our theoretical
contribution with regard to this research question.
First, our study provides an improved understanding of the mechanism underlying the effect of privacy policies’ contents on
users’ risk perceptions and behavior in OSN settings. We chose this direction because previous research has yielded incon-
sistent results on how privacy policies affect users' perceptions and behaviors (e.g., Bélanger and Crossler, 2011; Berendt et al., 2005; Hui et al., 2007). Our findings shed light on this issue in two ways: on the one hand, as our randomly assigned experimental groups did not differ in structure, the significant differences we found between users' reactions were attributable to differences in the privacy policies' contents. Thus, our study complements previous studies, providing theoretical and empirical
arguments that privacy policies actually do affect users’ perceptions and behavior (e.g., Miyazaki and Fernandez, 2000;
Hui et al., 2007). On the other hand, we have extended previous stimulus–response models on this topic (e.g., Berendt
et al., 2005; Hui et al., 2007) by opening up the ‘‘black box,’’ suggesting a stimulus–organism–response perspective. As
two individuals assessing the very same scenario might differ in their subjective risk perceptions, omitting perceived privacy
risks in this relationship might have led to a lack of explanatory power and thus confounded previous results. In this respect,
we provide evidence that perceptions of privacy risks fully mediate the effects of OSN providers’ privacy policies on users’
willingness to disclose information.
Second, privacy policies represent the interface between users’ and providers’ interests and our study provides a more
differentiated conceptualization in this regard. Our three-dimensional concept of privacy policies could help future studies
interested in a more detailed analysis of such policies’ consequences for users and providers, as previous empirical research
has relied on rather coarse (i.e., binary) operationalizations of these policies (i.e., presence vs. absence or strong vs. weak).
We further elaborated on this direction and proposed a concept that is more comprehensive from a provider’s perspective.
We consequently found that fine-grained changes in policies’ contents have a significant linear effect on users’ risk percep-
tions and behavior. Along these lines, we introduced the notion of a privacy policy’s permissiveness, as it mirrors providers’
monetization options regarding their users’ data which result from a policy as a legal document. Indeterminate levels of
permissiveness might have contributed to the mixed results of previous research and make it difficult to compare these find-
ings. Our study shows that an increase in permissiveness leads to an increase in perceived privacy risks, which paves the way
to investigate the pricing-by-privacy trade-off as a whole. That is, future studies might analyze how changes in a privacy
policy's permissiveness affect users' perceptions and translate back into providers' earnings. Future studies could, moreover, investigate the optimal degree of permissiveness: the level that maximizes a provider's profits by balancing monetization options against the amount of user-provided data actually available for monetization.
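One stylized way to frame this optimization problem (our illustration, not a model from the paper): let v(p) denote the per-unit monetization value enabled by a policy of permissiveness p, and d(p) the amount of data users disclose, which our results suggest decreases in p through perceived risk. The provider then seeks

$$ p^{*} = \arg\max_{p} \; \pi(p) = v(p)\, d(p), \qquad v'(p) > 0, \; d'(p) < 0, $$

so that the marginal monetization gain of a more permissive policy is weighed against the marginal loss of disclosed data.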

Implications for practice

Our study contributes to practice by underlining the role of privacy policies as a crucial managerial concept for companies
facing the pricing-by-privacy trade-off. We provide evidence that differences in privacy policies’ levels of permissiveness
ceteris paribus affect users’ willingness to provide valuable information. Based on these insights, we derive the following
practical implications.
This study is based on a comprehensive conceptualization of privacy policies, which is closely related to providers’ busi-
ness objectives, that is, secondary data use. The findings suggest that a provider’s decisions regarding the amount of data
collected, the provider’s degrees of freedom governing data usage, and the user’s ability to control the transfer and use of
data all affect users’ risk perceptions (everything else constant). This provides some leeway for managers who need to care-
fully balance their decisions. Assuring a more privacy-friendly practice (a decrease in permissiveness) on one dimension
might compensate for the privacy risk arising from an increase in permissiveness on another dimension. If a certain data-
processing practice is mandatory due to business-related objectives, its risk-related effect might be mitigated if an OSN
provider eases the policy on other dimensions. For example, if gathering comprehensive user information is necessary for
targeted advertising, the provider can counter perceptions of increased privacy risks by assuring users that their data will
not be transferred to third parties.
If it is not possible to decrease the permissiveness of privacy policies due to strategic reasons, the mediating role of
perceived privacy risks, as identified in this study, enables providers to employ different approaches to reduce users’ risk
perceptions. Thereby, extant IS literature on users’ risk formation processes offers several insights into how to make users
feel safer with regard to their privacy (e.g., Dinev et al., 2013; Krasnova et al., 2010; Malhotra et al., 2004). For example, pro-
viders could try to further increase the benefits of their services or refer users to legal regulations that protect their privacy
(Dinev et al., 2013). Furthermore, OSN providers could engage in trust building measures to affect users’ risk perceptions
(e.g., Krasnova et al., 2010; Malhotra et al., 2004).

Limitations and further research

Our findings should be understood in the light of their limitations. First, our results are based on an experimental study
design, which uses a hypothetical OSN. Thus, they should not be generalized without reflection, as our hypothetical provider, unlike a real-world OSN, has no reputation or history. Other factors in real-world settings might therefore
overshadow the effect observed ceteris paribus in this study. Future work could build on our results and investigate how, for
example, individual perceptions of a provider’s reputation and history affect users’ reactions and thereby complement our
findings.
Using an experimental setup entails a second limitation relating to the length of privacy policies used in this study. In
practice, privacy policies may contain more text than the policies constructed within our framework. Using real-world
privacy policies would have prevented us from systematically manipulating our experimental factor, the policies’ permis-
siveness. However, it is possible that the length of a privacy policy might influence users’ perceptions of privacy risks,
for instance, by creating the feeling that a company’s data-handling practices are a rather complex matter. Furthermore,
the contents of real privacy policies usually contain additional aspects, which might be less relevant to providers but
capture the attention of users (e.g., Earp et al., 2005). Thus, when reading a lengthy privacy policy, users’ focus might
not be exactly congruent with the dimensions isolated in this study. Nevertheless, the policy dimensions used in this
study are often highlighted in popular real-world OSNs’ privacy policies (e.g., Facebook, Google+, and LinkedIn). More-
over, there is evidence that users capture the essence of longer privacy policies, as media reports communicate the contents of privacy policies to users and non-users in condensed form (e.g., Rizk et al., 2009; Vaknin, 2012;
Washington Post, 2012).
Finally, our research has emphasized the extended role of privacy policies for businesses which depend on secondary
data use. Continuing along these lines, we suggest that it would be fruitful for future research on privacy policies to address
challenges for companies whose business models are based on user data monetization. In particular, although our study
focused on users’ willingness to disclose information as a dependent variable, it is also possible that privacy risk percep-
tions will lead to the non-adoption of services because of their secondary data use. Such a relationship could intensify the
consequences of pricing-by-privacy, making it even more important for providers to carefully adjust their data handling
strategies.

Appendix A

Measures and scale properties. All items were measured on 7-point Likert scales with anchors 1 = "strongly disagree" and 7 = "strongly agree".

Perceived privacy risk (M = 5.35, SD = 1.66, α = .95; adapted from Malhotra et al., 2004):
– In general, it would be risky to store personal information on this social network.
– There would be a high potential for risk associated with storing personal information on this social network.
– The risk would be high that the provider of this social network would use personal information for inappropriate purposes.
– Storing personal data on this social network would involve many unexpected problems.

Trust in internet firms (M = 2.91, SD = 1.13, α = .68; Dinev and Hart, 2006):
– Internet websites are safe environments in which to exchange information with others.
– Internet websites are reliable environments in which to conduct business transactions.
– Internet websites handle personal information submitted by users in a competent fashion.

Willingness to disclose personal information (M = 2.22, SD = 1.50, α = .93; Krasnova et al., 2009b, 2010):
– On this network, I would provide a lot of information about things that represent me personally.
– On this network, I would save a detailed profile of my person.
– On this network I would save a lot of information that would characterize me as a person truthfully.

Notes: n = 1116; M = mean value; SD = standard deviation; α = Cronbach's alpha.

References

Acquisti, A., Gross, R., 2006. Imagined communities: awareness, information sharing, and privacy on the Facebook. In: 6th Workshop on Privacy Enhancing
Technologies, Cambridge, UK. Springer-Verlag, Berlin, pp. 36–58.
Alexander, C.S., Becker, H.J., 1978. The use of vignettes in survey research. Public Opinion Quarterly 42 (1), 93–104.
Armstrong, J.S., Overton, T.S., 1977. Estimating nonresponse bias in mail surveys. Journal of Marketing Research 14 (3), 396–402.
Awad, N.F., Krishnan, M.S., 2006. The personalization privacy paradox: an empirical evaluation of information transparency and the willingness to be
profiled online for personalization. MIS Quarterly 30 (1), 13–28.
Bandura, A., Walters, R.H., 1963. Social Learning and Personality Development. Holt, Rinehart & Winston, New York.
Baron, R.M., Kenny, D.A., 1986. The moderator–mediator variable distinction in social psychology research: conceptual, strategic, and statistical
considerations. Journal of Personality and Social Psychology 51 (6), 1173–1182.
Bélanger, F., Crossler, R.E., 2011. Privacy in the digital age: a review of information privacy research in information systems. MIS Quarterly 35 (4), 1017–A36.
Berendt, B., Guenther, O., Spiekermann, S., 2005. Privacy in e-commerce: stated preferences vs. actual behavior. Communications of the ACM 48 (4), 101–
106.
Clarke, R., Wigan, M., 2011. You are where you’ve been: the privacy implications of location and tracking technologies. Journal of Location Based Services 5
(3–4), 138–155.
Conchar, M.P., Zinkhan, G.M., Peters, C., Olavarrieta, S., 2004. An integrated framework for the conceptualization of consumers’ perceived-risk processing.
Journal of the Academy of Marketing Science 32 (4), 418–436.
Coolican, H., 2009. Research Methods and Statistics in Psychology. Routledge, New York.
Culnan, M.J., 1993. ‘‘How did they get my name?’’: an exploratory investigation of consumer attitudes toward secondary information use. MIS Quarterly 17
(3), 341–363.
Culnan, M.J., Bies, R.J., 2003. Consumer privacy: balancing economic and justice considerations. Journal of Social Issues 59 (2), 323–342.
De Jong, B.A., Elfring, T., 2010. How does trust affect the performance of ongoing teams? The mediating role of reflexivity, monitoring, and effort. Academy of
Management Journal 53 (3), 535–549.

Dinev, T., Hart, P., 2006. An extended privacy calculus model for e-commerce transactions. Information Systems Research 17 (1), 61–80.
Dinev, T., Hart, P., Mullen, M.R., 2008. Internet privacy concerns and beliefs about government surveillance – an empirical investigation. Journal of Strategic
Information Systems 17 (3), 214–233.
Dinev, T., Xu, H., Smith, J.H., Hart, P., 2013. Information privacy and its correlates: an empirical attempt to bridge and distinguish privacy-related concepts.
European Journal of Information Systems 22 (3), 295–316.
Dowling, G.R., Staelin, R., 1994. A model of perceived risk and intended risk-handling activity. Journal of Consumer Research 21 (1), 119–134.
Earp, J.B., Antón, A.I., Aiman-Smith, L., Stufflebeam, W.H., 2005. Examining internet privacy policies within the context of user privacy values. IEEE
Transactions on Engineering Management 52 (2), 227–237.
Earp, J.B., Baumer, D.B., 2003. Innovative web use to learn about consumer behavior and online privacy. Communications of the ACM 46 (4), 81–83.
Gross, G., 2014. New software targets hard-to-understand privacy policies. PC World. <http://www.pcworld.idg.com.au/article/548329/new_software_targets_hard-to-understand_privacy_policies/> (accessed 08.07.14).
Hann, I.-H., Hui, K.-L., Lee, S.-Y., Png, I.P.L., 2007. Overcoming online information privacy concerns: an information-processing theory approach. Journal of
Management Information Systems 24 (2), 13–42.
Hoffman, D.L., Novak, T.P., Peralta, M., 1999. Building consumer trust online. Communications of the ACM 42 (4), 80–85.
Hui, K.-L., Teo, H.H., Lee, S.-Y.T., 2007. The value of privacy assurance: an exploratory field experiment. MIS Quarterly 31 (1), 19–33.
Jacoby, J., 2002. Stimulus–organism–response reconsidered: an evolutionary step in modeling (consumer) behavior. Journal of Consumer Psychology 12 (1),
51–57.
Kirk, R.E., 2003. Experimental design. In: Weiner, I.B., Schinka, J.A., Velicer, W.F. (Eds.), Handbook of Psychology, Research Methods in Psychology, vol. 2.
Wiley, New York, pp. 23–45.
Krasnova, H., Guenther, O., Spiekermann, S., Koroleva, K., 2009a. Privacy concerns and identity in online social networks. Identity in the Information Society
2 (1), 39–63.
Krasnova, H., Kolesnikova, E., Guenther, O., 2009b. ‘‘It won’t happen to me!’’: self-disclosure in online social networks. In: Nickerson, R., Sharda, R. (Eds.),
Proceedings of the 15th Americas Conference on Information Systems, San Francisco, CA, pp. 1–9.
Krasnova, H., Spiekermann, S., Koroleva, K., Hildebrand, T., 2010. Online social networks: why we disclose. Journal of Information Technology 25 (2), 109–
125.
Li, H., Sarathy, R., Xu, H., 2011. The role of affect and cognition on online consumers’ decision to disclose personal information to unfamiliar online vendors.
Decision Support Systems 51 (3), 434–445.
MacKenzie, S.B., Podsakoff, P.M., Podsakoff, N.P., 2011. Construct measurement and validation procedures in MIS and behavioral research: integrating new
and existing techniques. MIS Quarterly 35 (2), 293–A5.
Mai, B., Menon, N.M., Sarkar, S., 2010. No free lunch: price premium for privacy seal-bearing vendors. Journal of Management Information Systems 27 (2),
189–212.
Malhotra, N.K., Kim, S.S., Agarwal, J., 2004. Internet users’ information privacy concerns (IUIPC): the construct, the scale, and a causal model. Information
Systems Research 15 (4), 336–355.
McCullagh, D., Tam, D., 2012. Instagram apologizes to users: we won't sell your photos. CNET. <http://news.cnet.com/8301-1023_3-57559890-93/instagram-apologizes-to-users-we-wont-sell-your-photos/> (accessed 08.07.14).
Metzger, M.J., 2006. Effects of site, vendor, and consumer characteristics on web site trust and disclosure. Communication Research 33 (3), 155–179.
Milne, G.R., Culnan, M.J., 2004. Strategies for reducing online privacy risks: why consumers read (or don’t read) online privacy notices. Journal of Interactive
Marketing 18 (3), 15–29.
Miyazaki, A.D., Fernandez, A., 2000. Internet privacy and security: an examination of online retailer disclosures. Journal of Public Policy and Marketing 19
(1), 54–61.
Moore, G.C., Benbasat, I., 1991. Development of an instrument to measure the perceptions of adopting an information technology innovation. Information
Systems Research 2 (3), 192–222.
Pan, Y., Zinkhan, G.M., 2006. Exploring the impact of online privacy disclosures on consumer trust. Journal of Retailing 82 (4), 331–338.
Peter, J.P., Tarpey Sr., L.X., 1975. A comparative analysis of three consumer decision strategies. Journal of Consumer Research 2 (1), 29–37.
Phelps, J., Nowak, G., Ferrell, E., 2000. Privacy concerns and consumer willingness to provide personal information. Journal of Public Policy and Marketing 19
(1), 27–41.
Podsakoff, P.M., Todor, W.D., Grover, R.A., Huber, V.L., 1984. Situational moderators of leader reward and punishment behaviors: fact or fiction?
Organizational Behavior and Human Performance 34 (1), 21–63.
Rizk, R., Marx, D., Schrepfer, M., Zimmerman, J., Guenther, O., 2009. Media coverage of online social network privacy issues in Germany. In: Nickerson, R.,
Sharda, R. (Eds.), Proceedings of the 15th Americas Conference on Information Systems, San Francisco, CA, pp. 1–9.
Slovic, P., Finucane, M.L., Peters, E., MacGregor, D.G., 2004. Risk as analysis and risk as feelings: some thoughts about affect, reason, risk, and rationality. Risk
Analysis 24 (2), 311–322.
Smith, H.J., Dinev, T., Xu, H., 2011. Information privacy research: an interdisciplinary review. MIS Quarterly 35 (4), 980–A27.
Tsai, J.Y., Egelman, S., Cranor, L., Acquisti, A., 2011. The effect of online privacy information on purchasing behavior: an experimental study. Information
Systems Research 22 (2), 254–268.
Trumbo, C.W., 1999. Heuristic-systematic information processing and risk judgment. Risk Analysis 19 (3), 391–400.
Vaknin, S., 2012. Five ways Google's unified privacy policy affects you. CNET. <http://howto.cnet.com/8301-11310_39-57388626-285/five-ways-googles-unified-privacy-policy-affects-you/> (accessed 08.07.14).
Venkatesh, V., Bala, H., 2012. Adoption and impacts of interorganizational business process standards: role of partnering synergy. Information Systems
Research 23 (4), 1131–1157.
Wakefield, R., 2013. The influence of user affect in online information disclosure. Journal of Strategic Information Systems 22 (2), 157–174.
Washingtonpost.com, 2012. Google privacy policy: who will be affected and how you can choose what information gets shared. The Washington Post. <http://www.washingtonpost.com/business/economy/google-privacy-policy-who-will-be-affected-and-how-you-can-choose-what-information-gets-shared/2012/01/26/gIQA69fNVQ_story.html> (accessed 08.07.14).
Xie, E., Teo, H.H., Wen, W., 2006. Volunteering personal information on the internet: effects of reputation, privacy notices, and rewards on online consumer
behavior. Marketing Letters 17 (1), 61–74.
Xu, H., Teo, H.-H., Tan, B.C.Y., Agarwal, R., 2009/2010. The role of push-pull technology in privacy calculus: the case of location-based services. Journal of
Management Information Systems 26 (3), 135–173.
