The effect of fair information practices and data collection methods on privacy-related behaviors: A study of mobile apps

https://doi.org/10.1016/j.im.2020.103284

Highlights

  • We tested the effect of two intervention strategies on privacy-related decisions.

  • Intervention strategies were found to have a significant effect on PDC and PIR.

  • Our findings have novel implications for the mobile app industry.

Abstract

To capitalize on valuable consumer and transactional data from mobile apps, companies should make ethical decisions and adopt strategies that reduce privacy concerns, because such concerns present critical challenges for corporate social responsibility. In this study, we tested the effect of two intervention strategies, Fair Information Practices and the data collection method, on privacy-related decisions. The results show that both intervention strategies have a significant effect on perceived data control and perceived risks, and in turn on behavioral intention. Our findings have novel theoretical and managerial implications for those who want to promote ethical business practices in the mobile app industry.

Introduction

The increasing affordability and capabilities of mobile devices and mobile apps have enabled companies to collect huge amounts of user data [[1], [2], [3]]. In particular, cookies and GPS, along with consumers’ transactional data, allow companies to track user preferences and provide accurate location-based predictions and recommendations [4]. When processed effectively and innovatively, consumer data supply actionable real-time information to improve operations, facilitate innovation, optimize resource allocation, reduce costs, and enhance decision-making [4,5]. Recent big data analytics capabilities and tools further accentuate the potential benefits companies could gain from access to large amounts of consumer data [4].

While companies could capitalize on consumer data to improve market advantage, privacy concerns stand as the biggest roadblock to monetizing these data [3,6]. These concerns also present critical challenges for ethical business practices [3,6]. Specifically, consumers are concerned about how their personal data will be processed, stored, shared, and used [7]. They are also worried about vulnerabilities in mobile technologies that could lead to data leakages, hacking, and data theft [7]. All these concerns may stop consumers from using mobile apps. Because the success of mobile apps depends on the usage rate [8,9], a low adoption rate is a loss to companies, considering that the mobile app market is one of the biggest within the IT sector: its total revenue for 2016 was 76.5 billion dollars, a figure expected to grow to 101 billion dollars in 2020 [10]. Therefore, companies ought to strategize on how to reduce consumers’ privacy concerns, so as to increase the usage rate and present a better corporate image [11]. In this paper, we tested the effect of two company intervention strategies, Fair Information Practices (FIPs) and the data collection method, on privacy-related decisions linked to the use of mobile apps. These intervention strategies are framed using the control-risk literature, which gives our research model a strong theoretical foundation [12,13].

We contend that using FIPs to instill consumer confidence in a company’s data management practices is a good strategy. The privacy literature posits that FIPs have a regulatory endorsement effect, which conveys a perception of “fairness” [14,15]. Companies could harness this power of FIPs to shape positive perceptions of their data management practices. Existing FIP studies have examined various aspects, including origin [16], challenges in the implementation process [12,17,18], companies’ compliance [12,[16], [17], [18], [19], [20], [21]], perceptions [6,22,23], and enforcement [[24], [25], [26], [27]]. A further inspection of Appendix A, which covers studies on FIPs or closely related variables, shows that most focused on fixed platforms or general scenarios, while only five focused on mobile technologies. Among the studies on fixed platforms or general scenarios, only a few used experimental research. Culnan and Armstrong [14] conducted a preexperimental study with secondary data; however, because the data came from secondary sources, there was no control group, and the research items were proxy measures of privacy concerns and other related variables, the results should be interpreted with caution. Nemati and Van Dyke [27] conducted quasi-experimental research using t-test and ANOVA techniques; however, the results showed a nonsignificant effect of FIPs on trust and risk perception (the study did not include behavioral intention). Liu, Marchewka, Lu and Yu [28] conducted an experimental study on the effect of FIPs on trust and behavioral intention, but they included only two scenarios, FIPs and non-FIPs, and did not study interactions with other factors. To fully understand the power of FIPs, it is necessary to assess their effectiveness in various situations. Other studies that focused on fixed platforms are either descriptive in nature [12,16,[18], [19], [20]] or do not consider all aspects of FIPs. For example, Awad and Krishnan [29], Bellman et al. [24], Chellappa and Pavlou [30], Li, Sarathy and Xu [31], Milne and Boza [23], Milne and Rohm [32], and Xu et al. [25] examined some aspects of FIPs but considered neither all the FIPs dimensions nor their interaction with other interventions. In addition, all these studies are nonexperimental and thus do not allow testing of causality.

As for studies focused on mobile platforms, Karyda, Gritzalis, Park, and Kokolakis [17] conducted a descriptive study on the obstacles to implementing FIPs. Although Libaque-Saenz, Chang, Kim, Park and Rho [6] and Libaque-Sáenz, Wong, Chang, Ha, and Park [22] focused on the mobile sector, their scope was the secondary use of personal information by network operators (i.e., a situation users face after their data have already been collected), not user interaction with mobile devices before data collection. Prior literature contends that the data collection stage is more sensitive for users than the postrelease process itself [33]. In addition, none of these studies used an experimental design to assess causality. Finally, although Xu, Gupta, Rosson, and Carroll [7] focused on mobile apps and used an experimental design, they neither included all the FIPs principles (only choice) nor focused on behavioral intention (their focus was privacy concerns). Nor did they include another internal factor; rather, they focused on external factors, namely, government intervention and industry regulation. Thus, a gap remains in the literature on the effectiveness of FIPs in various contexts.

In short, existing studies have not investigated the effect of all FIPs dimensions on consumers’ privacy-related decisions within the mobile app context. Mobile apps, and mobile platforms in general, differ from fixed platforms because the former are characterized by portability, mobility, and permanent availability [34]. Hence, these platforms can be used anywhere and at any time. In contrast, fixed platforms are usually used in predetermined environments, such as an office or a home [35]. In addition, mobile apps run on mobile devices (e.g., mobile phones), which are regarded as personal and individual items because users always carry them and rarely share them with others [36]. Fixed platforms, however, can be used by many people, such as family members and office workers [37]. Furthermore, the location-awareness features of the mobile Internet can be used to determine users’ physical locations [38], unlike the fixed Internet, which does not expose where consumers are located. To further substantiate these differences, we developed an additional survey to determine user perceptions about which device (fixed or mobile) stores more of their personal information. First, an extensive list of items (pieces of data) was developed based on the General Data Protection Regulation (GDPR) and prior research [[39], [40], [41]]. These items were reviewed by three researchers to assess the clarity and appropriateness of the questions. Finally, we gathered 150 responses through Mechanical Turk, the same platform we used to collect data for assessing our research model, as discussed in the Methodology section. Appendix B shows the source of each question in this survey, while Fig. 1 clearly shows that users perceive mobile devices as storing far more of their personal information than fixed platforms.
In fact, according to Ghose [40], smartphones store information about who we are, where we are from, where we go, where we have been, our location, what we need, what we have bought, and what our interests are. In addition, the IT Security Survey 2019 revealed that although most fixed computers have antivirus software installed, about 37.8% of mobile phones lack any antivirus solution [42]. These features of mobile platforms (active collection of personal information and a lack of antivirus programs) may heighten users’ risk perceptions compared to fixed platforms, and thus additional analysis is required. We argue that the effect of FIPs remains an accepted “black box” because it “assumes the status of a taken-for-granted truth where its label replaces its contents” [16].

Beyond FIPs, we argue that companies could also intervene by adjusting their data collection methods. Data collection activities have raised concerns about data control and the associated potential risks [23,43,44]. Aggressive data collection may give the impression of privacy invasion and affect consumers’ decision to use an app. For example, quitters of Facebook are motivated by privacy concerns when they commit “virtual identity suicide” [45]. They worry that Facebook makes their location available, which may reveal patterns of their day-to-day activities and place them at risk [46]. At the same time, they feel they are losing control over personal data because their friends may post information about them without restrictions [47]. Prior research on data collection methods has focused on theoretical discussions [48], their effect on the use of u-commerce [49], and perceptions of personalization [50,51]. However, the interaction of these methods with FIPs remains unexplored. Our study provides empirical evidence on the role of data collection methods in shaping consumers’ willingness to engage in privacy-related behaviors when using mobile apps, and on their interaction with FIPs.

In summary, our study fills previously discussed research gaps by adding a theoretical explanation on the effect of FIPs on user privacy-related decisions. Specifically, we use an experimental design to support causality and include all dimensions of FIPs, and the interaction of FIPs with other important internal factors, such as data collection methods, to understand the power of this intervention on user privacy-related decisions. In addition, we focus on the mobile context that has not yet been fully studied.

The rest of the paper is organized as follows. The next section provides the literature review and develops the hypotheses. The subsequent two sections detail the research methodology and show the results of the study. We then present the discussion and implications as well as the limitations and future research. Finally, we provide a conclusion for the paper.

Section snippets

Fair information practices (FIPs) and other recent privacy policy developments

FIPs are a “set of internationally recognized practices for addressing the privacy of information about individuals” [52]. FIPs were originally proposed in a 1973 report of the US Secretary’s Advisory Committee on Automated Personal Data Systems, published in response to public concern over the automated collection of individuals’ personal information. They were set out in their most influential form in 1980 by the Organisation for Economic Co-operation and Development [52]. In the 1990s, the Federal

Research methodology

We adopted an experimental design for this research. The presence or absence of FIPs and the data collection method were manipulated using a 2 (FIPs versus NO-FIPs) × 2 (AUTO versus non-AUTO) between-subjects factorial design (Fig. 4). We used a scenario-based methodology because scenarios are descriptions of possible future situations and the methodology supports causality [101]. Prior research based on experimental design has used this methodology to manipulate similar interventions in
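The 2 × 2 between-subjects logic above can be illustrated with a small sketch. The cell means below are invented for illustration (they are not the study's results); the decomposition into two main effects and an interaction is the standard analysis such a factorial design supports.

```python
# Hypothetical sketch of a 2 (FIPs vs. NO-FIPs) x 2 (AUTO vs. non-AUTO)
# between-subjects design. Cell means are illustrative, not the paper's data.

# Mean perceived-data-control score (7-point scale) per experimental cell
cell_means = {
    ("FIPs", "AUTO"): 4.8,
    ("FIPs", "non-AUTO"): 5.4,
    ("NO-FIPs", "AUTO"): 3.1,
    ("NO-FIPs", "non-AUTO"): 3.9,
}

grand = sum(cell_means.values()) / 4  # grand mean across all four cells

# Main effect of FIPs: difference between the FIPs and NO-FIPs row averages
fips_avg = (cell_means[("FIPs", "AUTO")] + cell_means[("FIPs", "non-AUTO")]) / 2
nofips_avg = (cell_means[("NO-FIPs", "AUTO")] + cell_means[("NO-FIPs", "non-AUTO")]) / 2
fips_effect = fips_avg - nofips_avg

# Main effect of the data collection method: AUTO vs. non-AUTO column averages
auto_avg = (cell_means[("FIPs", "AUTO")] + cell_means[("NO-FIPs", "AUTO")]) / 2
nonauto_avg = (cell_means[("FIPs", "non-AUTO")] + cell_means[("NO-FIPs", "non-AUTO")]) / 2
auto_effect = auto_avg - nonauto_avg

# Interaction: does the FIPs effect differ between collection methods?
interaction = (cell_means[("FIPs", "AUTO")] - cell_means[("NO-FIPs", "AUTO")]) - (
    cell_means[("FIPs", "non-AUTO")] - cell_means[("NO-FIPs", "non-AUTO")]
)

print(f"grand mean={grand:.2f}, FIPs effect={fips_effect:.2f}, "
      f"AUTO effect={auto_effect:.2f}, interaction={interaction:.2f}")
```

With four cell means, the FIPs main effect compares row averages, the data-collection main effect compares column averages, and the interaction asks whether the FIPs effect changes across collection methods; significance testing of these contrasts would normally use ANOVA, as in the study.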

Manipulation and control checks

We took several steps to verify the salience of our manipulations. First, the conditions on the existence or absence of FIPs and the use of automatic (AUTO) versus non-automatic (non-AUTO) data collection were checked using yes/no questions to confirm that the respondents understood the scenarios (Appendix D). Second, the manipulation check for FIPs asked whether the respondents believed their personal information would be used fairly (1 = completely false; 7 = completely true) (Appendix D). The t-test result shows the participants in the
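A manipulation check of this kind boils down to comparing mean ratings between conditions. The following sketch uses simulated 7-point fairness ratings (not the study's data) and a hand-rolled Welch t statistic, assuming the two groups may have unequal variances.

```python
# Hypothetical sketch of a manipulation check: comparing perceived-fairness
# ratings (1-7 scale) between FIPs and NO-FIPs conditions with Welch's t.
# The ratings below are simulated, not the study's data.
import math
import random

random.seed(7)
fips_group = [min(7, max(1, random.gauss(5.5, 1.0))) for _ in range(50)]
no_fips_group = [min(7, max(1, random.gauss(3.5, 1.0))) for _ in range(50)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    se = math.sqrt(va / len(a) + vb / len(b))          # standard error of the difference
    return (ma - mb) / se

t = welch_t(fips_group, no_fips_group)
print(f"t = {t:.2f}")
```

A large positive t would indicate that participants in the FIPs condition indeed rated the treatment of their information as fairer, i.e., that the manipulation was salient.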

Discussion and implications

Our study has several key findings that make novel contributions to the privacy literature. First, by demonstrating the significant impact of FIPs and data collection methods on PDC and perceived risks, and in turn on behavioral intention, we show how the intervention strategies effectively influence privacy intention. In developing and validating the substantive model, our study contributes new findings to the privacy literature and reaffirms control-risk as a useful framework to analyze contemporary

Conclusion

The success of mobile apps depends on the usage rate. However, privacy concerns often stand as a roadblock that may hinder consumers’ adoption of these apps. In this study, we tested the effect of two company intervention strategies, FIPs and the data collection method, on privacy-related decisions. We found that the intervention strategies are effective in influencing privacy-related decisions. This finding suggests companies could deploy our validated intervention strategies to increase the

CRediT authorship contribution statement

Christian Fernando Libaque-Sáenz: Conceptualization, Investigation, Methodology, Writing - original draft. Siew Fan Wong: Writing - review & editing, Funding acquisition. Younghoon Chang: Project administration, Visualization, Formal analysis. Edgardo R. Bravo: Supervision.

Acknowledgment

This study was funded by the Malaysian Ministry of Education, Fundamental Research Grant Scheme (FRGS) (Grant Number FRGS/1/2014/SS05/SYUC/02/1).


References (135)

  • M. Workman et al., Security lapses and the omission of information security measures: a threat control model and empirical test, Comput. Human Behav. (2008)
  • Y. Chang et al., The role of privacy policy on consumers’ perceived privacy, Gov. Inf. Q. (2018)
  • K.-W. Wu et al., The effect of online privacy policy on consumer privacy concern and trust, Comput. Human Behav. (2012)
  • M.-C. Lee, Factors influencing the adoption of Internet banking: an integration of TAM and TPB with perceived risk and perceived benefit, Electron. Commer. Res. Appl. (2009)
  • B. Erdogan, Antecedents and consequences of justice perceptions in performance appraisals, Hum. Resour. Manag. Rev. (2002)
  • R. Vaidyanathan et al., Who is the fairest of them all? An attributional approach to price fairness perceptions, J. Bus. Res. (2003)
  • S. Conger et al., Personal information privacy and emerging technologies, Inf. Syst. J. (2013)
  • G. Chittaranjan et al., Mining large-scale smartphone data for personality studies, Pers. Ubiquitous Comput. (2013)
  • K. Shilton et al., Linking platforms, practices, and developer ethics: levers for privacy discourse in mobile application development, J. Bus. Ethics (2017)
  • A. McAfee et al., Big data: the management revolution, Harv. Bus. Rev. (2012)
  • C.F. Libaque-Saenz et al., The role of perceived information practices on consumers’ intention to authorise secondary use of personal data, Behav. Inf. Technol. (2016)
  • H. Xu et al., Measuring mobile users’ concerns for information privacy
  • X. Hu et al., Are mobile payment and banking the killer apps for mobile commerce?
  • D. Takahashi, The App Economy Could Double to $101 Billion by 2020 (2016)
  • K. Martin, Understanding privacy online: development of a social contract approach to privacy, J. Bus. Ethics (2016)
  • M.J. Culnan et al., Consumer privacy: balancing economic and justice considerations, J. Soc. Issues (2003)
  • T. Dinev et al., Information privacy and correlates: an empirical attempt to bridge and distinguish privacy-related concepts, Eur. J. Inf. Syst. (2013)
  • M.J. Culnan et al., Information privacy concerns, procedural fairness, and impersonal trust: an empirical investigation, Organ. Sci. (1999)
  • G. Midgley, Systemic intervention for public health, Am. J. Public Health (2006)
  • M. Karyda et al., Privacy and fair information practices in ubiquitous environments: research challenges and future directions, Internet Res. (2009)
  • S. Pearson, Taking account of privacy when designing cloud computing services
  • M.J. Culnan, Protecting privacy online: is self-regulation working?, J. Public Policy Mark. (2000)
  • M. Warkentin et al., The influence of the informal social learning environment on information privacy policy compliance efficacy and intention, Eur. J. Inf. Syst. (2011)
  • C.F. Libaque-Sáenz et al., Understanding antecedents to perceived information risks: an empirical study of the Korean telecommunications market, Inf. Dev. (2016)
  • S. Bellman et al., International differences in information privacy concerns: a global survey of consumers, Inf. Soc. (2004)
  • H. Xu et al., Information privacy concerns: linking individual perceptions with institutional privacy assurances, J. Assoc. Inf. Syst. (2011)
  • H. Xu et al., Research note—effects of individual self-protection, industry self-regulation, and government regulation on privacy concerns: a study of location-based services, Inf. Syst. Res. (2012)
  • H.R. Nemati et al., Do privacy statements really work? The effect of privacy statements and fair information practices on trust and perceived risk in e-commerce, Int. J. Inf. Secur. Priv. (2009)
  • N.F. Awad et al., The personalization privacy paradox: an empirical evaluation of information transparency and the willingness to be profiled online for personalization, MIS Q. (2006)
  • R.K. Chellappa et al., Perceived information security, financial liability and consumer trust in electronic commerce transactions, Logist. Inf. Manag. (2002)
  • H. Li et al., Understanding situational online information disclosure as a privacy calculus, J. Comput. Inf. Syst. (2010)
  • G.R. Milne et al., Consumer privacy and name removal across direct marketing channels: exploring opt-in and opt-out alternatives, J. Public Policy Mark. (2000)
  • L. Brandimarte et al., Misplaced confidences: privacy and the control paradox, Soc. Psychol. Personal. Sci. (2013)
  • E. Kim et al., Understanding shopping routes of offline purchasers: selection of search-channels (online vs. offline) and search-platforms (mobile vs. PC) based on product types, Service Business (2018)
  • M. Hiltunen et al., Mobile User Experience (2002)
  • M. Chae et al., What’s so different about the mobile Internet?, Commun. ACM (2003)
  • R.T. Watson et al., U-commerce: expanding the universe of marketing, J. Acad. Mark. Sci. (2002)
  • P. Kannan et al., Wireless commerce: marketing issues and possibilities
  • H. Chen et al., Business intelligence and analytics: from big data to big impact, MIS Q. (2012)
  • A. Ghose, TAP: Unlocking the Mobile Economy (2017)

    Christian Fernando Libaque-Saenz ([email protected]) is an associate professor and the head of the Engineering Department at Universidad del Pacífico (Peru). Christian has received his B.S. degree in Telecommunications Engineering from the National University of Engineering (Peru), and his M.A. and Ph.D. degrees in Information and Telecommunication Technology from the Korea Advanced Institute of Science and Technology (KAIST). Before starting his studies at KAIST, Christian worked for the Peruvian Ministry of Transport and Communications. Christian’s research interests include digital divide, privacy, ICT strategy, human-computer interaction, and spectrum management. His publications have appeared in journals such as the Government Information Quarterly, Telecommunications Policy, Behaviour and Information Technology, Telematics and Informatics, Telecommunication Systems as well as in international conferences.

    Siew Fan Wong ([email protected]) is an adjunct professor with Sunway University, Malaysia. She received her Ph.D. degree in MIS from the University of Houston, Texas. Her research interests involve organizational IT strategy, information privacy, business analytics, and technology addiction. Her publications have appeared in journals such as the Information & Management, Journal of Global Information Management, International Journal of Information Management, Government Information Quarterly, Industrial Management & Data Systems, and Cyberpsychology, Behavior, and Social Networking.

    Younghoon Chang ([email protected]) is an associate professor in the School of Management and Economics at Beijing Institute of Technology, Beijing, China. He received his Ph.D. degree in Business & Technology Management from Korea Advanced Institute of Science and Technology (KAIST), South Korea. His research interests include Information privacy, AI and robot management, e-business, business analytics, and digital health. His articles have appeared in the Information & Management, Information Systems Frontiers, Government Information Quarterly, Telecommunications Policy, Journal of Global Information Management, Behavior and Information Technology, Industrial Management & Data Systems as well as in the proceedings of international conferences. He is currently serving as an associate editor of Asia Pacific Journal of Information Systems, and an editorial review board member of Journal of Computer Information Systems and Industrial Management and Data Systems.

    Edgardo R. Bravo ([email protected]) is currently an associate professor in the Engineering Department at Universidad del Pacifico (Peru). He received his Ph.D. in Management Sciences from ESADE (Spain), his MBA from ESAN-University (Peru), and his B.S. in Systems Engineering from National University of Engineering (Peru). His research interests are human-computer interaction, digital divide, privacy, and social networks. He has published in Behavior & Information Technology, Information Technology & People, Cognition, Technology & Work, Energies, and top international conferences. He has managed diverse areas in public and private organizations for more than eighteen years. Also, he has been a consultant for international agencies.
