Ethics in Technology

9. Digital Communication, Social Media, Misinformation and Democracy

Social Media Ethics; Cyberbullying and Harassment; Deepfakes; Misinformation; Manipulation; Free Speech vs. Hate Speech; Influencer Culture; Media Literacy

The digital revolution has profoundly transformed the ways in which people connect, share ideas, and participate in civic life. This chapter explores how the tools and platforms that facilitate these interactions also raise complex ethical questions that touch on every aspect of our personal and collective existence. The rapid spread of information – and the ease with which it can be shaped or distorted – has forced societies to confront new challenges regarding trust, credibility, and the responsibilities of both individuals and institutions. These dynamics are deeply interwoven with our earlier discussions on privacy, data ethics, and the broader societal impacts of technology, and together they highlight the need for nuanced approaches to digital citizenship.

As digital spaces become central to public discourse, the boundaries between private expression and public consequence have blurred. The ethical dilemmas introduced here are not isolated; they are amplified by the same technological advancements that enable unprecedented connectivity and innovation. Issues explored in previous chapters – such as the responsibilities of tech developers and consumers, the vulnerabilities of digital identities, and the implications of surveillance – are now seen through the lens of how information is shared, consumed, and manipulated. This chapter examines the ways in which digital communication shapes social norms, influences decision-making, and can both empower and undermine democratic processes. It is here that the ethical frameworks introduced at the outset of this text are put to the test, as readers are invited to consider how technology mediates our relationships with each other and with the wider world.

Social Media Ethics

Social media has redefined how individuals engage with information and with one another, creating a dynamic environment where both users and platforms play crucial ethical roles. As consumers, people are constantly exposed to a vast array of content – news, opinions, entertainment, and more – often algorithmically curated and designed to maximize engagement rather than ensure accuracy. This places a unique responsibility on users to critically evaluate the information they encounter. Ethical participation means more than simply sharing or reacting; it involves considering the potential impact of one's posts and interactions. Users must weigh the value of free expression against the potential harm caused by spreading misinformation, engaging in harmful rhetoric, or participating in online harassment. The rise of digital anonymity can sometimes embolden individuals to act in ways they would not in face-to-face interactions, underscoring the importance of empathy, respect, and accountability in online spaces.

Platforms, on the other hand, bear a distinct set of ethical responsibilities. While users must exercise personal judgment, social media companies are tasked with balancing the principles of free speech with the need to prevent harm and maintain a safe, inclusive environment – all while keeping their financial bottom line in mind. This balancing act often manifests in debates over censorship – where does moderation cross the line into undue suppression of ideas? Platforms must also grapple with the challenge of distinguishing between legitimate satire and deliberately misleading content. The expectation of fact-checking is a contentious issue: while some argue that platforms should take a more active role in verifying information, others warn of the dangers of overreach and the potential for bias in content moderation. Ultimately, both participants and platforms share an ethical obligation to foster an online ecosystem that encourages constructive dialogue, protects against harm, and upholds the integrity of public discourse – a challenge that grows ever more complex as the digital landscape continues to evolve.

Cyberbullying and Harassment

Cyberbullying and harassment are two closely related forms of harmful behavior that occur through digital channels. Cyberbullying is defined as the use of technology – such as social media, messaging apps, or online games – to harass, threaten, embarrass, or target another person. It often involves repeated actions intended to harm, and can include sending mean or aggressive messages, spreading rumors, posting embarrassing photos or videos, or deliberately excluding someone from online groups. Harassment is a broader term that encompasses any unwanted behavior intended to annoy, threaten, or intimidate another person, and in a digital context, this can range from persistent unwanted messages to explicit threats or hate speech. Both cyberbullying and harassment can have severe emotional and psychological consequences, especially since digital content can be widely and permanently distributed.

Examples of these behaviors are numerous and can include cyberstalking, where an individual monitors or follows someone’s online activity obsessively, often with threatening intent; doxxing, which involves maliciously sharing someone’s personal information online without consent; and the distribution of inappropriate material, such as revenge porn, which is the sharing of explicit images or videos without consent to humiliate or blackmail the victim. Other mechanisms include impersonation (creating fake profiles to harm someone’s reputation), trolling (posting inflammatory or offensive comments to provoke a reaction), and flaming (sending hostile and insulting messages). These actions not only violate privacy but can also escalate into situations where victims feel unsafe in both digital and physical spaces.

From a young age, many children and adolescents may be exposed to digital environments where the culture of “trash-talking” – playful or aggressive banter often aimed at opponents in online games – is prevalent. While initially intended as harmless competition, such behavior can quickly escalate if not moderated, leading to more serious forms of cyberbullying or harassment. The anonymity and distance provided by digital platforms can embolden individuals to cross ethical boundaries. As a result, what begins as teasing can easily spiral into targeted campaigns of abuse. Over time, repeated exposure to or participation in such behavior can desensitize young people to the harm caused by their words and actions, making it crucial for both individuals and platform providers to foster respectful and accountable online communities.

Deepfakes, Misinformation and Manipulation

Deepfakes, misinformation, and manipulation represent some of the most complex ethical challenges in today’s digital landscape. Deepfakes – realistic, AI-generated images, videos, or audio – can blur the line between truth and fiction, with both creative and destructive potential. On the positive side, deepfake technology has been used to enhance public awareness campaigns, such as the “Malaria Must Die” initiative, where David Beckham appeared to speak in nine different languages, helping to reach a global audience. In media, Reuters has employed AI-generated presenters for personalized news summaries, making content more accessible and engaging. Other beneficial uses include voice cloning for individuals with speech impairments, de-aging actors for films, and creating immersive educational or historical experiences.

However, deepfakes have also led to significant legal and ethical controversies. Lawsuits have arisen over non-consensual use of individuals’ likenesses – most notably in cases involving revenge porn, where deepfakes have been used to create explicit content without consent, leading to litigation and demands for stricter regulation. High-profile cases also include financial scams, where deepfake voices or videos impersonated executives to authorize fraudulent transactions, resulting in millions in losses and subsequent lawsuits. Celebrities and public figures have similarly pursued legal action against unauthorized deepfake impersonations that damaged their reputations or misled the public.

Misinformation and manipulation, meanwhile, are often amplified by automated tools such as bots, which can flood social media platforms with false or misleading content. Bots are designed to mimic human behavior, allowing them to interact with users, post comments, and even “like” or share content en masse. This orchestrated activity can artificially boost the visibility of certain narratives, pushing misinformation into users’ feeds and trending lists. The intent is often to manipulate public opinion, influence elections, or sow discord by making fringe ideas appear more widely accepted than they actually are. The combination of deepfakes and bot-driven misinformation creates a potent tool for manipulation, challenging both individuals and platforms to discern fact from fiction in an increasingly synthetic information environment.
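
As a rough illustration of how such orchestrated activity might be flagged, consider the Python sketch below, which looks for texts posted verbatim by many distinct accounts within a short time window. The Post structure, thresholds, and function name are illustrative assumptions invented for this example, not a real platform’s API.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str      # posting account's identifier
    text: str         # the post's content
    timestamp: float  # seconds since epoch

def flag_coordinated_posts(posts, window_seconds=600, min_accounts=20):
    """Return texts posted verbatim by many distinct accounts in one window."""
    buckets = defaultdict(set)  # (text, time-window index) -> accounts seen
    for p in posts:
        buckets[(p.text, int(p.timestamp // window_seconds))].add(p.account)
    # The same text repeated by many different accounts in the same window
    # is a common (if naive) signal of bot-driven amplification.
    return {text for (text, _), accounts in buckets.items()
            if len(accounts) >= min_accounts}

Real detection systems combine many signals – posting cadence, account age, follower networks – precisely because any single heuristic like this one is easy for bot operators to evade.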

Free Speech vs. Hate Speech

The legal definitions of “free speech” and “hate speech” have evolved through a complex interplay of constitutional principles, court decisions, and ongoing debates about ethics and public order. In the United States, the First Amendment protects freedom of speech as a foundational right, barring the government from restricting expression based on viewpoint, even when that expression is offensive or hateful. The intent behind this legal framework was to uphold robust public discourse and protect minority voices, recognizing that ethical considerations – such as the need to prevent harm and promote dignity – must be balanced against the imperative of open debate. Over time, courts have clarified that such expression may generally be restricted only in narrow circumstances, such as when it directly incites imminent lawless action or constitutes a true threat.

Despite these legal boundaries, ethical debates persist over what constitutes acceptable speech. Hate speech, while not legally defined in the U.S., is generally understood as expression intended to vilify, humiliate, or incite hatred against a group or class of people based on characteristics such as race, religion, gender, or sexual identity. The challenge arises because the same words or phrases can be interpreted differently depending on the observer’s perspective, cultural background, or personal experience. Can you think of phrases that one group has defended as a rallying cry of ‘free speech’ while another condemns anyone who uses the very same words as engaging in ‘hate speech’?

When communities or governments attempt to define and regulate these terms, the result is often confusion, ambiguity, or outright contradiction. The subjective nature of what constitutes hate speech or offensive speech means that any attempt to codify these concepts risks either overreach – suppressing legitimate debate – or underreach – failing to protect vulnerable groups from harm. This tension is heightened in diverse societies, where different groups may have conflicting values and interpretations of what is ethical or acceptable. As a result, legal definitions rarely align perfectly with the full spectrum of ethical considerations, and the process of defining these terms remains a contentious and evolving challenge for both lawmakers and society at large.

Influencer Culture

Influencer Culture refers to the social phenomenon in which individuals – both online and off – build communities around themselves and exert significant commercial and non-commercial influence over their followers. This culture is not new: throughout history, prominent figures such as royalty, philosophers, political leaders, and celebrities have shaped public opinion, set trends, and influenced consumer behavior. In the digital age, however, the barriers to becoming an influencer have dropped dramatically, and the speed and reach of influence have expanded exponentially.

Before the rise of social media, influencers included figures like Eleanor Roosevelt, who used her newspaper column and radio appearances to shape public opinion and advocate for social causes. In the 20th century, celebrities such as The Beatles, Marilyn Monroe, and Audrey Hepburn became trendsetters whose choices in fashion, music, and lifestyle were widely emulated. Today, influencers are typically individuals who have built large followings on platforms like Instagram, YouTube, and TikTok. These influencers often rise to prominence without formal credentials or specific expertise; rather, they excel at social media engagement and, perhaps, have a likable or convincing personality.

As influencer culture has grown, so too have debates about the responsibilities of influencers themselves. Some have faced backlash and legal repercussions for promoting harmful products, spreading misinformation, or engaging in unethical behavior. In response, there have been calls – and sometimes legal actions – to hold influencers accountable for the consequences of their actions, particularly when those actions mislead or harm their audiences. This includes demands for greater transparency in sponsored content, as well as accountability for endorsing products or ideas that may have negative real-world effects.

The rise of influencers goes beyond mere entertainment. For many followers, influencers fill voids left by traditional institutions, offering advice, companionship, or a sense of belonging that may be missing from their everyday lives. Influencers often create parasocial relationships – one-sided bonds where followers feel a personal connection to the influencer – which can be a source of comfort, inspiration, or even identity formation. This dynamic can make influencers powerful agents of change but also places significant responsibility on their shoulders.

Despite the potential for lasting impact, many influencers experience the ephemeral nature of fame. The phrase “15 minutes of fame” is especially apt, as viral success can be fleeting, and the public’s attention is fickle. Some influencers exhaust their popularity through overexposure, scandal, or controversial behavior, leading to a rapid loss of followers and influence. Others “crash and burn” more dramatically, facing public backlash or legal issues that end their careers as quickly as they began. This cycle highlights both the opportunities and the risks inherent in influencer culture, underscoring the need for ethical awareness and resilience in the digital age.

Media Literacy

Media Literacy is the ability to access, analyze, evaluate, create, and act using all forms of communication. It goes beyond simply understanding information; it involves critical thinking about the messages we encounter, their sources, and their impact. Media literacy empowers individuals to navigate the complex media landscape, discerning credible information from misinformation or manipulation.

A cornerstone of media literacy is the use of multiple sources to verify facts. By comparing information from various reputable outlets, consumers can identify patterns, inconsistencies, or biases. Evaluating the credibility of sources is also essential. This includes considering the reputation of the publisher, the author’s expertise, and the presence of citations or references to original research. Traditional methods also involve checking for objectivity, transparency about funding or affiliations, and whether the information is current and relevant.

Determining whether information is factual or opinion-based requires careful analysis. Facts are statements that can be objectively verified with evidence, while opinions reflect personal beliefs or interpretations. Facts are typically presented with quantifiable data, without qualification, and with the intent to inform, whereas opinions are often expressed through adjectives and adverbs intended to persuade or otherwise elicit an emotional response. One rough heuristic for judging whether content leans fact-based or opinion-based is to count its parts of speech: if a piece contains notably more numerals, nouns, and verbs (objective language) than adjectives and adverbs (subjective language), it may lean fact-based; if subjective language dominates, the piece is likely more opinion than fact.
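
A minimal sketch of this counting heuristic appears below, written in Python using the NLTK library and its Penn Treebank part-of-speech tags. The tag groupings, scoring formula, and function name are illustrative assumptions; the heuristic is intentionally crude and should supplement, not replace, the source-evaluation practices described above.

import nltk

# One-time model downloads; exact resource names can vary across NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

# Penn Treebank tags for numerals, nouns, and verbs (treated here as objective)
OBJECTIVE_TAGS = {"CD", "NN", "NNS", "NNP", "NNPS",
                  "VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}
# Tags for adjectives and adverbs (treated here as subjective)
SUBJECTIVE_TAGS = {"JJ", "JJR", "JJS", "RB", "RBR", "RBS"}

def objectivity_score(text):
    """Score in [-1, 1]: positive leans fact-based, negative leans opinion-based."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    objective = sum(tag in OBJECTIVE_TAGS for tag in tags)
    subjective = sum(tag in SUBJECTIVE_TAGS for tag in tags)
    total = objective + subjective
    return 0.0 if total == 0 else (objective - subjective) / total

print(objectivity_score("The bill passed 62 to 38 on March 4."))
print(objectivity_score("Frankly, this outrageous bill is utterly, shamefully reckless."))

The first example should score positive and the second negative, matching the intuition described in the text; on real articles, of course, the signal is far noisier.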

Content creators, if they are to act within an ethical framework, bear the responsibility of producing accurate and transparent media. This means clearly distinguishing between facts and opinions, disclosing conflicts of interest, and correcting errors promptly. Creators should also be mindful of the potential impact of their messages on audiences, striving to avoid harm and promote informed understanding.

Content consumers, on the other hand, must approach media with a critical mindset. This includes questioning the motives behind messages, recognizing bias, and seeking out diverse perspectives. Consumers should also engage in reflection about how media influences their thoughts and behaviors, and take action – such as sharing reliable information or educating others – to contribute positively to public discourse. By embracing these practices, both creators and consumers can foster a media environment that supports truth, accountability, and informed civic participation.

Textbook Definitions – Digital Communication, Social Media, Misinformation and Democracy

  • Social Media Ethics – The moral principles and guidelines that govern responsible, respectful, and ethical behavior on social media platforms.
  • Maximize engagement – Strategies designed to increase user interaction, such as likes, shares, and comments, on digital content.
  • Accuracy – The degree to which information is free from errors, distortions, or misrepresentations.
  • Ethical participation – Engaging online in a manner that is respectful, honest, and mindful of the impact on others.
  • Misinformation – False or inaccurate information that is spread, regardless of intent to deceive.
  • Harassment – Unwanted behavior intended to annoy, threaten, or intimidate another person, especially repeatedly.
  • Accountability – The obligation to take responsibility for one’s actions and accept the consequences.
  • Censorship – The suppression or prohibition of speech, writing, or other forms of expression considered objectionable or harmful.
  • Moderation – The process of monitoring and managing online content to ensure it complies with rules or standards.
  • Suppression – The deliberate act of preventing information or expression from being shared or seen.
  • Satire – The use of humor, irony, or exaggeration to criticize or mock people, ideas, or institutions.
  • Misleading content – Information that is designed or likely to deceive or misinform the audience.
  • Fact-checking – The process of verifying the accuracy of claims made in content or statements.
  • Bias in content moderation – Prejudiced or unfair treatment in the review and management of online content.
  • Cyberbullying – The use of digital technology to harass, threaten, embarrass, or target another person.
  • Cyberstalking – The repeated use of digital technology to monitor, follow, or harass someone.
  • Doxxing – The malicious act of publicly revealing private or identifying information about an individual without their consent.
  • Inappropriate material – Content that is offensive, explicit, or otherwise unsuitable for its intended audience.
  • Revenge porn – The distribution of explicit images or videos without consent, often to humiliate or blackmail.
  • Impersonation – Pretending to be someone else online, often for malicious or deceptive purposes.
  • Trolling – Posting inflammatory, offensive, or disruptive comments or messages to provoke a reaction.
  • Flaming – Sending hostile and insulting messages, often in online discussions or forums.
  • Deepfakes – Realistic, AI-generated images, videos, or audio that can make it appear someone said or did something they did not.
  • Impersonated executives – Individuals falsely represented as company leaders, often in scams or fraudulent schemes.
  • Bots – Automated software programs designed to perform tasks online, such as posting messages or mimicking human behavior.
  • Trending – The state of being widely discussed or shared on social media at a given time.
  • Free speech – The right to express opinions and ideas without fear of government retaliation or censorship.
  • Hate speech – Expression intended to vilify, humiliate, or incite hatred against a group or class of people.
  • Open debate – The free exchange of ideas and perspectives in public discourse.
  • Overreach – Excessive or unjustified restriction of rights, such as speech, beyond what is necessary or appropriate.
  • Underreach – Failing to provide sufficient protection or regulation, resulting in harm or injustice.
  • Influencer Culture – The social phenomenon in which individuals build communities and exert significant influence over their followers’ opinions and behaviors.
  • Credible information – Information that is trustworthy, reliable, and supported by evidence.
  • Reputable outlets – Media sources known for accuracy, fairness, and reliability in reporting.
  • Credibility of sources – The degree to which a source is considered trustworthy and authoritative.
  • Objectivity – The practice of presenting information in a neutral and unbiased manner.
  • Transparency – Openness and clarity about intentions, actions, and sources of information.
  • Fact-based – Information that is grounded in verifiable evidence and data.
  • Opinion-based – Information that reflects personal beliefs, interpretations, or judgments.
  • Questioning motives – The act of critically examining the reasons behind someone’s actions or statements.
  • Recognizing bias – Identifying personal or systemic prejudices that may affect the presentation or interpretation of information.

CC BY NC SA