Contextual Integrity: Nissenbaum, Creepiness, and Data Privacy
Why did the data go to therapy? Because it needed to talk about its privacy issues! – ChatGPT
Data privacy becomes increasingly relevant as more people put their lives onto the internet without control over, or knowledge of, where that content may end up. This lack of control contributes to the creepiness of certain data practices and explains why we are so resistant to some elements of information transparency.
Tene and Polonetsky introduce creepiness through a series of examples in which information sharing was “not illegal; it was distasteful” (Tene and Polonetsky 62). They develop this theme of legal gray areas throughout their paper, discussing ways companies can present themselves in a better light and avoid controversy.
A central example of such “creepy” behavior is the app Girls Around Me, which was available for download in 2012. Girls Around Me allowed its users to access the locations of nearby women based on their GPS locations and Facebook statuses (Tene and Polonetsky 62). The app itself was not illegal in any sense of the word: it violated no legislation and relied on publicly available data. What made it seem creepy was the social context into which it placed that data. Through its particular aggregation and presentation of publicly available information, the application facilitated and normalized the violation of individuals’ privacy and perpetuated exploitative gender dynamics. Accordingly, it sparked significant privacy discourse. Yet before this stalkerish app emerged, there were many similar ones, such as Highlight, which detected nearby users and shared their app profiles (e.g., favorite books or music), or Banjo, which alerted users when their social media friends (e.g., on Facebook or Twitter) were nearby (Tene and Polonetsky 61-62). Those apps were not met with immediate disapproval; after all, they did not seem to go against social and ethical norms.
This brings us to Tene and Polonetsky’s thesis: the “creepiness” of data privacy practices is grounded in social norms, which is part of why it is so difficult to create regulations that keep up with rapidly shifting societal rules (Tene and Polonetsky 65, 71, 73). To prove their point, they bring up the example of caller ID. Although caller ID is socially acceptable and even desirable today, when it was introduced in the 1980s it was perceived as a privacy violation. To address the current “creepiness” of data practices, Tene and Polonetsky suggest that with any innovation or new product launch, users should be brought along carefully, educated, and allowed to object. For example, they recommend that companies use “obfuscation and search” to make discovering things more difficult, or frame new products in more familiar terms so that less tech-savvy users can understand them. They also suggest that users set their expectations based on their perception of a brand, and that regulators analyze users’ expectations rather than corporate statements.
Is this the full picture, however? While better public understanding of data privacy can certainly reduce public distrust of technology, will it truly solve the problem of technological privacy? By focusing merely on the emotions of the user, Tene and Polonetsky fail to touch the core ethics of data privacy. After all, the feeling of creepiness is a moral emotion, and such emotions are not reliable representations of actual moral wrongs (Fischer and Fredericks). Recall, once again, the case of Girls Around Me and the other apps that collected similarly public data. The difference in public response to these comparable uses of publicly available personal data raises an important point for the privacy discourse: a considerable part of privacy’s value is determined consequentially. Rather than caring about what type of information gets revealed, we care much more about whether its revelation violates our ethical norms. We value privacy because it helps us prevent information-based harm and informational inequality, promotes autonomy and freedom, and preserves critical human relationships and democracy.
To deal with the complex issues that arise in data privacy, we should employ the contextual integrity approach. Instead of ascribing the dichotomy of “private” and “public” to information, we should consider data privacy within its context of politics, convention, and cultural expectations, as well as its long-term consequences. What implications does this have for policy-making?
The landmark case Katz v. United States set the stage for digital privacy and civil liberties by determining that the Fourth Amendment’s protections extend to electronic communication and data (Richards 72). Charles Katz was convicted of using a public telephone booth to transmit illegal gambling information. The FBI caught him by placing a listening device on the outside of the booth and later used the recordings as evidence in court. The Supreme Court ruled in Katz’s favor, holding the surveillance unlawful because it breached a reasonable expectation of privacy within a telephone booth. This case extended the ideas of physical privacy already present in the Constitution and the Bill of Rights into the digital world, setting off a series of future cases and discourse about digital privacy (“Katz v. United States”).
While such recommendations can help companies frame products in a friendlier way, a user’s feeling of creepiness is a self-protective response that guards their data against uses they did not consent to. By proposing solutions that help companies manipulate perceptions rather than change practices, Tene and Polonetsky miss the core of the issue that Nissenbaum addresses more directly. She argues instead for the more fundamental approach of using contextual integrity as a benchmark (Nissenbaum 102).
This involves using the context of the relationship between consumer and provider, because we exist in many different realms, each with its own expectations of privacy (e.g., medical records vs. favorite songs) (Nissenbaum 119). She describes three principles that inform today’s legislation against the misuse of personal information. First, she advocates for the “limiting of surveillance of citizens and use of information about them by agents of government” (Nissenbaum 110). The report on Automated Personal Data Systems by the US Department of Health, Education, and Welfare’s Secretary’s Advisory Committee emphasized the need to limit the power of the state and other large institutions: “the net effect of computerization is that it is becoming much easier for record-keeping systems to affect people than for people to affect record-keeping systems” (qtd. in Nissenbaum). Second, she recommends restricting access to sensitive or private personal information, both in how it is collected and in how it is disseminated. Contextual integrity applies here because the appropriate degree of collection or dissemination depends on the sensitivity of the information (Nissenbaum 110). Finally, she emphasizes the sanctity of private or personal spaces: consumers should feel safe knowing that their information is secure in their private spaces (Nissenbaum 112).
Yet these principles fail to encompass true privacy, as seen in California v. Greenwood, in which law enforcement arrested Billy Greenwood for drug possession after looking through his garbage. In the 1988 ruling, the Court held that because he had left the garbage out in a public place to be collected, he had no right to privacy in it (“California v. Greenwood, 486 U.S. 35 (1988)”). This ruling leaves room for future privacy breaches: even when no law is violated, letting strangers look through your garbage in the hope of gleaning private information about you is a commonly agreed-upon violation of privacy (Richards 90).
Therefore, Nissenbaum proposes adding an element of contextual integrity to decisions about privacy. She asks for privacy norms to be evaluated in terms of the effects of the rules on the interests and preferences of the affected parties, how those norms align with ethical, political, and societal values, and how well they account for the context of the situation. Nissenbaum proposes five parameters for describing that context: (1) the data subject, (2) the sender of the data, (3) the recipient of the data, (4) the information type, and (5) the transmission principle, i.e., the agreed-upon privacy norms governing the flow (Martin and Nissenbaum). In a way, this acts like a zero-trust transaction: the rules at play must be clearly identified before trust can be established. In general, she determines that decisions on privacy should be based on the context of the sphere in which the event occurs. For example, within the patient-doctor medical sphere, the patient is expected to trust the doctor with their medical information, but the doctor does not reciprocate, nor does she share her own medical information. In a sphere of friendship, by contrast, there is an expectation of secrecy when sharing information about personal lives. When spheres intersect and information is shared outside its contextual sphere, privacy is breached (Nissenbaum 119-120).
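To make the five-parameter model concrete, here is a minimal sketch in Python. Everything in it is illustrative: the names, the toy norms, and the allow-list check are assumptions made for exposition, not an implementation drawn from Nissenbaum or Martin and Nissenbaum. The sketch models an information flow as a tuple of the five parameters and, in the zero-trust spirit described above, flags any flow that no declared transmission norm explicitly sanctions.

```python
from dataclasses import dataclass

# Hypothetical model of Nissenbaum's five parameters; all names and
# norms below are illustrative assumptions, not a published API.
@dataclass(frozen=True)
class Flow:
    subject: str      # (1) whom the information is about
    sender: str       # (2) who transmits the information
    recipient: str    # (3) who receives it
    info_type: str    # (4) what kind of information it is
    context: str      # the social sphere in which the flow occurs

# (5) Transmission principles: the flows each context's norms sanction.
# Like a zero-trust transaction, anything not explicitly allowed is flagged.
NORMS = {
    "medical": {("patient", "patient", "doctor", "medical record")},
    "friendship": {("friend", "friend", "friend", "personal life")},
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """True only if the flow's own context explicitly sanctions this
    subject/sender/recipient/info-type combination."""
    allowed = NORMS.get(flow.context, set())
    return (flow.subject, flow.sender, flow.recipient, flow.info_type) in allowed

# The patient confiding in a doctor fits the medical sphere's norms...
print(respects_contextual_integrity(
    Flow("patient", "patient", "doctor", "medical record", "medical")))    # True
# ...but the same record flowing from doctor to advertiser breaches them.
print(respects_contextual_integrity(
    Flow("patient", "doctor", "advertiser", "medical record", "medical")))  # False
```

The allow-list design mirrors the point of the framework: appropriateness is judged against the norms of the flow’s originating context, not against a global public/private dichotomy.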
This ideal of contextual integrity has many implications. No element of life is ungoverned by the norms of informational flow; everything happens within a sphere of cultural, political, or conventional expectations, each with its own distinct set of norms. One concept worth delving into further is information’s alignment with distributive justice principles. Much information can be considered an economic object, giving an extreme advantage to any organization or corporation with access to data that can be extrapolated into predictions about human behavior, used to manipulate crowd decisions, and easily converted into revenue through sales or similar means (Nissenbaum 125-126).
Works Cited
Allen, Anita. Unpopular Privacy. Oxford University Press, 2011.
“California v. Greenwood, 486 U.S. 35 (1988).” Justia Law, 1988, supreme.justia.com/cases/federal/us/486/35/.
Fischer, Jeremy, and Rachel Fredericks. “The Creeps as a Moral Emotion.” Ergo, an Open Access Journal of Philosophy, vol. 7, no. 6, 2020, https://doi.org/10.3998/ergo.12405314.0007.006. Accessed 27 Aug. 2021.
“Katz v. United States.” National Constitution Center, constitutioncenter.org/the-constitution/supreme-court-case-library/katz-v-united-states.
Martin, Kirsten E., and Helen Nissenbaum. “Privacy of Public Data.” SSRN Electronic Journal, 2016, https://doi.org/10.2139/ssrn.2875720. Accessed 26 Oct. 2019.
Nissenbaum, Helen. “Privacy as Contextual Integrity.” Washington Law Review, vol. 79, no. 1, Feb. 2004, p. 119, digitalcommons.law.uw.edu/wlr/vol79/iss1/10/.
Richards, Neil. Why Privacy Matters. Oxford University Press, 2021.
Tene, Omar, and Jules Polonetsky. “A Theory of Creepy: Technology, Privacy and Shifting Social Norms.” Yale Journal of Law & Technology, 2013, yjolt.org/theory-creepy-technology-privacy-and-shifting-social-norms. Accessed 1 Mar. 2023.