Ethics in Technology

8. Privacy, Surveillance, and Data Ethics

Big Data and Privacy; Public vs. Private; Urban Surveillance and Smart Cities; Data Collection and Consent; Cloud Computing; Data Ownership and Open-Source Solutions

As our lives become increasingly intertwined with digital technologies, the boundaries between public and private spheres have grown more complex. The preceding chapters have explored how technology shapes our identities, relationships, and access to opportunities, while also highlighting the responsibilities of both developers and consumers in navigating ethical challenges. From the duties of tech users and professionals to the impact of technology on justice, equity, and personal well-being, we have seen that ethical decision-making is rarely straightforward, often requiring us to balance competing values and anticipate unintended consequences.

Building on these foundations, this chapter delves into the critical issues of privacy, surveillance, and data ethics. In a world driven by big data, cloud computing, and ubiquitous connectivity, questions about who owns our information, how it is collected, and for what purposes it is used have become central to the ethical landscape. We will examine the evolving definitions of privacy in the digital age, the rise of urban surveillance and smart cities, and the ethical dilemmas posed by large-scale data collection and consent. By considering the implications of data ownership and the responsibilities of both individuals and organizations, this chapter aims to equip readers with the tools to critically assess the ethical dimensions of privacy and surveillance in contemporary society.

Big Data and Privacy

The rise of big data has fundamentally transformed the landscape of personal privacy. Every day, individuals generate vast amounts of digital information through online interactions, purchases, social media activity, and even passive data collection via mobile devices and smart home technology. This information is not only collected by the platforms and services individuals use directly, but is also routinely shared with third-party data aggregators, sold to marketers, and analyzed by a wide array of organizations seeking to infer deeper insights about users’ behaviors and preferences. The sheer scale and interconnectedness of data collection means that a single piece of personal information – such as an email address or geolocation – can be replicated, cross-referenced, and stored in dozens, if not hundreds, of separate databases worldwide.

A crude (but conservative) estimate suggests that for any active digital user, there may be hundreds to thousands of copies of their personal data distributed across various entities. Each online service, retailer, social media platform, and app may create its own record; data brokers and aggregators further multiply these records as they buy, sell, and combine data sets; and backup systems, cloud storage, and analytics platforms add further redundancy. Moreover, big data analytics can infer additional attributes and connections, effectively creating new “copies” of information by extrapolating from existing data points. This exponential proliferation makes it nearly impossible for individuals to fully track or control the spread of their digital footprints.
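To get a feel for how quickly such copies multiply, consider the rough calculation below. It is only a back-of-envelope sketch; every number in it is an illustrative assumption rather than a measured value.

```python
# Back-of-envelope estimate of how many copies of one person's data may exist.
# All figures are illustrative assumptions, not measurements.
services_with_account = 60     # apps, retailers, and platforms holding a record
backups_per_service = 3        # primary store plus redundant backups
brokers_per_service = 5        # aggregators a service may share or sell data to
copies_per_broker = 2          # each broker's own store plus its backups

direct_copies = services_with_account * backups_per_service
brokered_copies = services_with_account * brokers_per_service * copies_per_broker

print("Direct copies:  ", direct_copies)                    # 180
print("Brokered copies:", brokered_copies)                  # 600
print("Total estimate: ", direct_copies + brokered_copies)  # 780
# Even with modest assumptions, the total lands in the hundreds before
# counting analytics platforms or newly inferred attributes.
```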

Consider this scenario: A large cell phone provider sells a week's worth of anonymized (de-identified) location data points for a particular region to a data aggregation and analysis group. The data was requested because a store in this region wants to learn more about the individuals who showed up for its big sale event. The aggregator starts by focusing on all of the cell phones that visited that particular store during the event.

But they don't stop there. They then review where the data points go after leaving the store to see the other destinations for these devices. They discover that quite a number of phones left the store and went to food establishments. This might suggest that the store should have some ‘snacks’ available during its next sale event. They also discover that a number of the devices went to a competitor’s store… this may be interesting to both the original store and to the competitor.

But then they follow the phones to their ‘final destinations’ for that day to see where they ended up that night, and they repeat this process for each day (not just the sale day) to see what else they can learn about the ‘anonymous’ data points. Wherever a phone consistently ends up at the end of the day is very likely its owner’s home, and tracking where the phones go each day is likely to reveal other patterns as well. So much for anonymized data!
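A minimal sketch of how such re-identification can be automated is shown below. The ping records, field names, and late-night window are all hypothetical and greatly simplified; the point is only that a few lines of analysis can turn ‘anonymous’ location points into a likely home location.

```python
# Sketch: the most common late-night location per device is likely a home.
# The ping data and field names below are hypothetical and simplified.
from collections import Counter

pings = [
    # (device_id, hour_of_day, location)
    ("device_A", 14, "store"),
    ("device_A", 15, "food_court"),
    ("device_A", 23, "123 Elm St block"),
    ("device_A", 23, "123 Elm St block"),   # next night, same block
    ("device_B", 14, "store"),
    ("device_B", 22, "competitor_store"),
    ("device_B", 23, "456 Oak Ave block"),
]

def likely_home(device_id, pings, night_hours=range(21, 24)):
    """Return the most frequent late-night location for a device, if any."""
    nightly = [loc for dev, hour, loc in pings
               if dev == device_id and hour in night_hours]
    return Counter(nightly).most_common(1)[0][0] if nightly else None

for device in sorted({dev for dev, _, _ in pings}):
    print(device, "->", likely_home(device, pings))
```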

Privacy, in its most common definition, refers to the ability of individuals to control the collection, usage, and distribution of their personal information. It encompasses the right to decide what information is shared, with whom, and for what purposes. In the digital age, however, this expectation is increasingly challenged. The default practices of data collection, the complexity of data flows, and the lack of transparency in how information is shared or sold mean that true control over personal data is often illusory. While privacy remains a foundational value and a legal right in many jurisdictions, the reality is that maintaining a reasonable expectation of privacy online requires significant effort, technical literacy, and often, a willingness to opt out of many modern conveniences. Thus, while the principle of privacy is still widely recognized, its practical realization in the age of big data is fraught with challenges and, for many, may no longer be a fully reasonable expectation without substantial systemic change.

Public vs. Private

In the previous section, we considered the expectation of privacy in the age of big data. It now makes sense to differentiate between the legal and the ethical definitions of the terms ‘public’ and ‘private’.

The distinction between ‘public’ and ‘private’ is foundational in both legal and ethical discussions, yet the definitions and boundaries can shift depending on context. Legally, ‘public’ typically refers to spaces, actions, or information that are accessible or visible to the general population and where individuals have a reduced expectation of privacy. ‘Private,’ on the other hand, denotes areas, behaviors, or data that are restricted to individuals or select groups, where a higher expectation of privacy is recognized and protected by law.

Ethically, the distinction often hinges on the reasonable expectations of those involved. Consider the scenario of looking through an open window into someone’s home: while the window may be open and the view technically accessible from a public sidewalk, most people would agree that peering inside or, more invasively, taking a photo or video crosses an ethical line. The act transitions from a passive observation in a public space to an active intrusion into someone’s private life, highlighting how context and intent matter.

Similarly, recording audio or video of people inside a grocery store – where there is a general expectation of being in a semi-public space – differs ethically (and sometimes legally) from recording those same people outside on a public sidewalk. The boundaries blur further in places like restaurants, public transportation, or even online forums, where the mix of public accessibility and private interaction complicates the ethical calculus.

Additional examples illustrate these nuances. In a workplace, conversations in a private office are generally considered private, while those in a break room may not be. In digital contexts, posting on a public social media page is typically considered public, but sending a direct message is private – though the technical ability to copy, share, or leak messages challenges this expectation. Even in public spaces, certain activities, such as using a restroom or changing clothes in a locker room, retain strong legal and ethical protections of privacy despite their location.

Ultimately, the legal definitions of public versus private are shaped by statutes and case law, often focusing on the impact of actions on society versus individuals. Ethically, the distinction is more fluid, relying on context, societal norms, and the reasonable expectations of those involved. As technology continues to blur these boundaries – through ubiquitous cameras, data collection, and online sharing – it becomes increasingly important to critically examine not just what is legally permissible, but what is ethically respectful of individuals’ privacy and autonomy.

Urban Surveillance and Smart Cities

Urban surveillance and the development of smart cities have introduced a range of technologies that promise to enhance public safety, improve efficiency, and optimize city services. Traffic cameras, for instance, are widely deployed to monitor intersections, enforce traffic laws, and provide real-time data to manage congestion. These systems can reduce accidents and improve emergency response times by allowing authorities to quickly identify and address incidents. Similarly, vehicle tracking – enabled through license plate readers and various connected sensors – can help locate stolen vehicles, optimize public transportation routes, and even support environmental goals by monitoring emissions and traffic patterns.

However, these same technologies raise significant concerns about privacy and the potential for misuse. Traffic cameras and vehicle tracking systems can be repurposed for mass surveillance, enabling authorities or third parties to monitor individuals’ movements without their knowledge or consent. This persistent observation can erode the sense of urban anonymity and create a chilling effect on personal freedom, as people may change their behaviors if they feel constantly watched. The aggregation of vehicle movement data, when combined with other data sources, can reveal sensitive patterns about individuals’ routines and associations.

Facial recognition technology represents another powerful but controversial tool in the smart city arsenal. On the positive side, it can assist in locating missing persons, identifying suspects in criminal investigations, and enhancing security at large public events. Yet, the deployment of facial recognition in public spaces has sparked intense debate over accuracy, bias, and the risk of wrongful identification. Moreover, the widespread use of facial recognition can enable pervasive government or corporate monitoring, undermining civil liberties and disproportionately impacting marginalized communities.

Other notable examples include smart utility meters and environmental sensors. Smart meters can help residents and city officials monitor and reduce energy and water consumption, contributing to sustainability goals and lowering costs. Environmental sensors, such as those monitoring air quality or flood risks, can provide early warnings and improve public health outcomes. Yet, both technologies collect detailed data about residents’ habits and activities, raising questions about who has access to this information and how it might be used beyond its intended purpose.

Ultimately, while urban surveillance and smart city technologies offer clear benefits – improved safety, efficiency, and sustainability – they also introduce complex ethical challenges. The risk of cyberattacks, unauthorized data sharing, and the erosion of privacy demands robust governance, transparent policies, and meaningful public engagement to ensure that technological progress does not come at the expense of individual rights and community trust.

Data Collection and Consent

The distinction between explicit and implied consent is central to understanding how data is collected and used in the digital environment. Explicit consent requires a clear, affirmative action from the user – such as checking a box, signing a form, or clicking an “I Agree” button – indicating unambiguous agreement to the collection and processing of their data. This type of consent is often accompanied by detailed language in End-User License Agreements (EULAs) or Terms of Service (ToS), specifying what data will be collected, how it will be used, and who it may be shared with. For example, a ToS might state, “We collect your name, email, and usage data to provide and improve our services,” and the company requires the user to actively accept these terms before proceeding.

Implied consent, by contrast, is inferred from a user’s actions or the context in which those actions occur. If a user continues to browse a website after being notified of a cookie policy, or submits a contact form expecting a response, their behavior is interpreted as agreement to certain data practices – even if they have not explicitly acknowledged them. Implied consent is often used for routine or less sensitive data collection, but it is inherently less transparent and can lead to ambiguity or disputes over what the user actually agreed to.
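To make the distinction concrete, the sketch below shows one way a service might record consent so that its scope, method, and revocation can be demonstrated later. The class and field names are assumptions for illustration, not a description of any particular service's implementation.

```python
# Sketch of a consent record a service might keep to demonstrate what was
# agreed to, how, and whether it was revoked. Field names are assumptions.
# (Type syntax requires Python 3.10+.)
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purposes: list[str]        # explicitly enumerated uses, e.g. ["usage_analytics"]
    method: str                # "explicit" (checkbox, signature) or "implied" (continued use)
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: datetime | None = None

    def allows(self, purpose: str) -> bool:
        """A purpose is permitted only if it was listed and consent is not revoked."""
        return self.revoked_at is None and purpose in self.purposes

    def revoke(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

consent = ConsentRecord("user-42", ["account_management", "usage_analytics"], "explicit")
print(consent.allows("usage_analytics"))       # True
print(consent.allows("targeted_advertising"))  # False: never listed, so not permitted
consent.revoke()
print(consent.allows("usage_analytics"))       # False after revocation
```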

A critical nuance in these agreements is the use of open-ended language regarding data use. For instance, a clause might state, “We may use your data for purposes such as backups or translation to another language.” The phrase “such as” does not restrict the company to only those listed uses; rather, it leaves the door open for additional, unspecified uses of the data. The company’s stated justification for this language (when one is offered at all) is that it provides flexibility for operational needs. However, it can also be used to mask broader data exploitation. For example, data collected for “service improvement” could be repurposed for targeted advertising or profiling, or even sold to third parties – uses not explicitly disclosed in the original agreement, yet made legal through the agreement as written.

The concept of intent becomes central here. While a company may claim its intent is benign – such as improving user experience or ensuring data security – the same permissions can be leveraged for more intrusive or profit-driven activities, like behavioral advertising, location tracking, or sharing data with law enforcement or other organizations without further user notification. Other examples include using voice recordings from smart speakers to train AI beyond the stated purpose, or aggregating fitness tracker data for insurance risk assessment, even when the original consent was for personal health monitoring.

Legally, the sufficiency of consent – whether explicit or implied – depends on the jurisdiction and the sensitivity of the data involved. Regulations like the European Union’s General Data Protection Regulation (GDPR) require explicit, informed consent for most personal data processing, especially for sensitive categories, and place the burden on organizations to demonstrate that valid consent was obtained.

Ethically, the bar is even higher: true consent should be informed, freely given, and revocable, with users fully understanding both the scope and intent of data collection. In practice, however, the complexity of agreements and the opacity of data flows make it difficult to prove that users have genuinely understood or agreed to all possible uses of their data.

Likewise, demonstrating the true intent of a company’s data practices is challenging, as broad or ambiguous language can be exploited for purposes far beyond those originally disclosed. As a result, proving both consent and intent remains a fraught process, highlighting the ongoing need for clearer communication, stronger regulation, and more transparent data practices.

Cloud Computing

Cloud computing has become deeply integrated into the daily routines of non-corporate users, offering convenience and flexibility across a range of applications. Common examples include file storage and sharing services like Google Drive, Apple iCloud, and Dropbox, which allow users to save documents, photos, and videos remotely and access them from any device. Email services such as Gmail and Yahoo Mail rely on the cloud to store messages and attachments, making communication seamless and accessible from anywhere. Social media platforms like Facebook and Instagram use cloud infrastructure to let users upload and share photos, videos, and other content. Streaming services, including Netflix and Spotify, leverage the cloud to deliver on-demand entertainment to millions, while cloud-based productivity suites like Google Docs and Microsoft 365 enable real-time collaboration and document editing without the need for local software installations.

The primary appeal of these cloud-based applications lies in their promise of accessibility across devices, ease of use, and the ability to synchronize data across multiple devices. Users are drawn to the convenience of automatic backups, the ability to share files instantly, and the reduction in the need for physical storage or device-specific software. Cloud computing also supports mobile banking, online education, and even health and fitness tracking, making it a central pillar of modern digital life.

However, as discussed in the previous section on data collection and consent, the agreements users accept when adopting cloud services often grant providers broad rights over their personal information. While the stated intent may be to facilitate backups or enhance user experience, the legal language typically allows providers to use, analyze, and even share user data for purposes far beyond those original use cases. This creates a significant imbalance: the value extracted from user data – through targeted advertising, analytics, or third-party partnerships – can exceed the utility provided to the user in the form of basic storage or convenience.

Figure 13: There is no cloud...

Consider this figure describing ‘the cloud’. Once we understand that there is no cloud, only someone else’s computer, we begin to understand that we are simply using their machines for storage, and that they are able to inspect essentially everything we put up there!

Transparency remains a major issue. Once data is uploaded to the cloud, users have little visibility into where it is stored, how it is processed, or with whom it is shared. The lack of clear, accessible information about data practices means that users cannot easily verify how their information is being used or if it is being sold or repurposed for profit. This opacity is compounded by the trend of phasing out traditional, locally-installed productivity software in favor of cloud-based, subscription-only models. Companies are increasingly steering users toward exclusive cloud solutions to ensure recurring revenue, gain greater control over software updates, and – crucially – maintain ongoing access to user data. As a result, users are often left with little choice but to accept these terms if they wish to continue using familiar tools, further eroding their control over personal information and privacy in the digital age.

Data Ownership and Open-Source Solutions

Data ownership refers to the legal rights, control, and authority an individual or entity has over specific sets of data, including how that data is accessed, used, modified, shared, or deleted. It is about both possession and responsibility, granting the owner the power to determine the fate of the data and to enforce those rights legally and ethically. Data ownership is foundational for accountability, privacy, and security in a world where personal and organizational data are invaluable assets.

Questions to Consider About Data Ownership:

  • Does a person own their own name, or is it merely a public identifier?
  • Who owns an individual’s email address: the person, the email provider, or both?
  • If you purchase a phone, do you own all the data stored on it, or does the manufacturer or service provider retain some rights?
  • Is your fingerprint your property, or does an entity that collects and stores its digital representation (e.g., for authentication) share ownership?
  • Who owns your DNA sequence: you, your healthcare provider, or the company that analyzes it?
  • If a company collects your location data via a mobile app, do you retain ownership, or does the company claim rights through its terms of service?
  • Who owns the photos and messages you upload to social media platforms – you, the platform, or both?
  • If you generate creative works (art, writing, code) using a cloud-based app, do you own the content, or does the app provider have rights to it?
  • When you use voice assistants, do you own the recordings, or does the service provider?
  • Who owns aggregated or anonymized data derived from your personal information?
  • If your data is sold to third parties, do you still have any ownership or control over it?
  • Who owns the metadata (such as timestamps, device info, or usage statistics) generated by your interactions with digital services?
  • If a government agency collects your data for public health or security, do you retain any ownership or rights over that data?

These questions illustrate the complexity and spectrum of data formats – ranging from personally identifiable information (PII) like names and fingerprints, to digital content, behavioral metadata, and even biological data. Legally, ownership can depend on jurisdiction, contractual agreements, and the nature of the data, while ethically, many argue individuals should retain primary rights and control over their personal information.

Some types of data, such as biometric identifiers (fingerprints, facial scans, DNA) and commonly accessed data (emails, social media posts), are inherently difficult to isolate and protect due to the way they are collected, stored, and shared across platforms and organizations. Once digitized and uploaded, these data types often become subject to broad terms of service that can dilute individual ownership and control.

By contrast, data that a user creates – such as documents, code, or media files – can actually be more readily controlled and protected! Open-source software and personal computing resources provide the mechanisms by which users can take a modicum of control over the digital information they create. Open-source solutions empower users to retain ownership by allowing them to store, manage, and modify their data locally or on self-hosted platforms, free from restrictive proprietary agreements. This approach not only enhances privacy and security but also aligns with the ethical principle that individuals should have meaningful control over their own digital creations and personal information.

There are many open-source solutions for the vast majority of computing activities that typical users engage in. Here is a brief list (as of this publication) of some of the most popular open-source titles, organized by their typical uses and suitable for non-corporate users:

Operating Systems

  • Ubuntu
  • Linux Mint
  • Debian
  • Fedora
  • Manjaro
  • OpenBSD
  • FreeBSD
  • Puppy Linux

Personal Information Managers & Email

  • Thunderbird
  • Evolution
  • KOrganizer/KMail

Office Applications

  • LibreOffice
  • OnlyOffice
  • Calligra Suite

Artistic and Image Editing

  • GIMP (Photo Editing)
  • Inkscape (vector graphics)
  • Krita (digital painting)

Video Editing and Production

  • Shotcut
  • Blender (also for 3D modeling and animation)
  • OBS Studio (Open Broadcaster Software)

Audio Editing and Production

  • Audacity
  • LMMS (Linux Multi Media Studio)
  • Ardour

Other Productivity and Creative Tools

  • VLC Media Player (media playback)
  • Nextcloud (personal cloud storage and collaboration)
  • Joplin (note-taking and to-do lists)
  • Scribus (desktop publishing)
  • Darktable (photo workflow and raw development)
  • Calibre (e-book management)
  • Rocket.Chat (team communication)
  • Jupyter Notebook (interactive computing and data science)

These tools provide robust alternatives to proprietary solutions and empower users to retain greater control over their data and creative output.
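Beyond choosing open-source applications, a user can keep a degree of control even when data must pass through someone else's servers by encrypting it locally before it is uploaded. The sketch below shows one minimal way to do this, assuming the third-party cryptography package and a local file named notes.txt; real key management would need considerably more care.

```python
# Minimal sketch: encrypt a file locally before handing it to a cloud service,
# so the provider stores ciphertext it cannot read.
# Assumes the third-party 'cryptography' package and a local file 'notes.txt'.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> bytes:
    """Read a local file and return its encrypted contents."""
    with open(path, "rb") as f:
        return Fernet(key).encrypt(f.read())

def decrypt_bytes(token: bytes, key: bytes) -> bytes:
    """Recover the original plaintext from the encrypted bytes."""
    return Fernet(key).decrypt(token)

key = Fernet.generate_key()          # keep this key only on devices you control
ciphertext = encrypt_file("notes.txt", key)
# ...upload 'ciphertext' to the cloud instead of the original file...
assert decrypt_bytes(ciphertext, key) == open("notes.txt", "rb").read()
```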



Textbook Definitions – Privacy, Surveillance, and Data Ethics

  • privacy – The right and ability of individuals to control the collection, use, and sharing of their personal information, ensuring freedom from unwarranted intrusion into their lives.
  • surveillance – The monitoring or observation of individuals or groups, often by authorities or organizations, to collect information or ensure security, which can threaten privacy if unwarranted.
  • data ethics – The moral principles and guidelines that govern the collection, analysis, and use of data, emphasizing privacy, transparency, accountability, and fairness.
  • big data – Extremely large and complex datasets generated from various sources, analyzed to reveal patterns, trends, and associations, especially relating to human behavior.
  • cloud computing – The delivery of computing services – including storage, processing, and software – over the internet, allowing users to access and manage data and applications remotely.
  • ubiquitous connectivity – The state of being continuously connected to digital networks and services from virtually anywhere, enabling constant data exchange.
  • urban surveillance – The use of technology such as cameras, sensors, and tracking systems in cities to monitor public spaces and activities for safety, efficiency, or control.
  • data collection – The process of gathering information from various sources, either actively or passively, for analysis, storage, or decision-making.
  • consent – Permission granted by individuals for the collection and use of their data, which should be informed, freely given, and revocable.
  • data ownership – The legal rights and control an individual or entity has over specific data, including how it is accessed, used, shared, or deleted.
  • passive data collection – Gathering information from users without their direct input or awareness, often through background processes or device sensors.
  • smart home technology – Devices and systems within a home that use internet connectivity to automate and control functions such as lighting, security, and climate.
  • data aggregators and brokers – Entities that collect, combine, and sell data from multiple sources, often creating detailed profiles of individuals.
  • cloud storage – A service that allows users to save data on remote servers accessed via the internet, rather than on local devices.
  • lack of transparency – The absence of clear, accessible information about how data is collected, used, or shared, making it difficult for individuals to understand or control their data.
  • expectation of privacy – The belief or assumption that one’s personal information or activities will not be observed or disclosed without consent.
  • Public – Legally and ethically, spaces, actions, or information accessible to the general population, where individuals have a reduced expectation of privacy.
  • Private – Spaces, actions, or information restricted to individuals or select groups, where a higher expectation of privacy is recognized and protected.
  • reasonable expectations – What an average person would consider appropriate regarding privacy or data use in a given context.
  • context – The circumstances or setting in which data is collected, used, or observed, which influence privacy expectations and ethical considerations.
  • intent – The purpose or motivation behind collecting, using, or sharing data, which affects the ethical evaluation of those actions.
  • societal norms – The shared expectations and rules within a community that shape perceptions of privacy, consent, and acceptable data practices.
  • Traffic cameras – Cameras installed in public areas to monitor vehicle flow, enforce traffic laws, and enhance public safety.
  • license plate readers – Automated systems that capture and process images of vehicle license plates for law enforcement or traffic management.
  • connected sensors – Devices embedded in infrastructure or vehicles to collect and transmit data on movement, environment, or system status.
  • mass surveillance – The large-scale monitoring of populations, often by governments, using technology to collect and analyze vast amounts of data.
  • Facial recognition – Technology that identifies or verifies individuals by analyzing facial features from images or video.
  • accuracy – The degree to which a system or process correctly identifies, measures, or represents information, crucial for fair outcomes in surveillance and data use.
  • bias – Systematic errors or prejudices in data collection, analysis, or technology that can lead to unfair or discriminatory outcomes.
  • wrongful identification – Incorrectly matching or labeling an individual by surveillance or recognition systems, leading to potential harm.
  • civil liberties – Fundamental rights and freedoms, such as privacy and free expression, that are protected from excessive government or organizational intrusion.
  • marginalized communities – Groups that experience discrimination or disadvantage, often disproportionately affected by surveillance and data misuse.
  • cyberattacks – Malicious attempts to access, disrupt, or damage digital systems or data.
  • Explicit consent – Clear, affirmative agreement to data collection or processing, usually given through direct actions like checking a box or clicking “I Agree”.
  • End-User License Agreements (EULAs) – Legal contracts between software providers and users outlining the terms for using the software, including data rights.
  • Terms of Service (ToS) – Agreements specifying the rules, responsibilities, and data practices associated with using a digital service.
  • Implied consent – Permission inferred from a person’s actions or the context, rather than a direct statement or agreement.
  • cookie policy – A statement on a website detailing how cookies are used to collect and process user data.
  • General Data Protection Regulation (GDPR) – A comprehensive European Union law that governs data protection and privacy, emphasizing informed, explicit consent and user rights.
  • informed – Having adequate information to understand the implications and risks before agreeing to data collection or use.
  • freely given – Consent provided voluntarily, without coercion or undue pressure.
  • revocable – The ability to withdraw consent at any time, stopping further data collection or use.
  • Cloud Computing – The practice of using remote servers on the internet to store, manage, and process data, rather than relying on local hardware.
  • accessibility across devices – The capability to use data and applications seamlessly from multiple devices via cloud services.
  • synchronize data across multiple devices – Keeping files, settings, and information consistent and updated on all user devices through cloud-based solutions.
  • automatic backups – The process of regularly copying data to a remote server to prevent loss and ensure recovery.
  • Data ownership – The legal and ethical right to control, access, and manage one’s own data, including decisions about its use and sharing.
  • personally identifiable information (PII) – Any data that can be used to identify a specific individual, such as names, addresses, or Social Security numbers.
  • biological data – Information derived from an individual’s biological characteristics, including DNA, fingerprints, and other biometrics.
  • biometric identifiers – Unique physical or behavioral traits, such as fingerprints, facial scans, or iris patterns, used for identification.
  • fingerprints – Distinctive patterns on the tips of fingers, often used as a biometric identifier.
  • facial scans – Digital representations of facial features used for identification or authentication.
  • DNA – The genetic material that carries an individual’s hereditary information, unique to each person.
  • Open-source software – Software with publicly available source code that can be freely used, modified, and distributed by anyone.
  • personal computing resources – Devices and infrastructure owned and controlled by individuals, enabling them to manage and store their own data locally.
