Ethics in Technology: Chapter 11. Artificial Intelligence (AI), Automation and Robotics, and Algorithmic Ethics

11. Artificial Intelligence (AI), Automation and Robotics, and Algorithmic Ethics

Levels of AI; AI Moral Agency; Autonomous Vehicles; Chatbots; Robotics and Robot Ethics; Algorithmic Bias; Automation; Predictive Policing

Figure 15: Robot typing at computer.

The story of automation is one of both disruption and transformation, shaping the very fabric of society from the earliest days of agriculture to the dawn of the Information Age. In the agricultural era, simple tools and animal-driven machines revolutionized food production, freeing human labor for other pursuits. The Industrial Revolution brought mechanized factories and assembly lines, dramatically increasing productivity but also displacing traditional crafts and altering social structures. The advent of computers in the 20th century marked another transformation, automating complex calculations and data management and laying the groundwork for the digital revolution. Today, as we enter the era of artificial intelligence (AI), automation, and robotics, the pace of change is accelerating at an unprecedented rate, touching every aspect of our economic, social, and personal lives.

Technologies such as advanced AI, autonomous vehicles, chatbots, and robotics are no longer confined to research labs or science fiction – they are rapidly becoming integral to how we work, communicate, and make decisions. AI systems now perform tasks ranging from diagnosing medical conditions to driving cars and moderating online content. Automation is transforming industries, from manufacturing and logistics to finance and customer service, while algorithmic decision-making increasingly shapes everything from hiring practices to law enforcement through predictive policing. This growing ubiquity brings both promise and peril: while these technologies offer the potential for greater efficiency, safety, and convenience, they also raise profound ethical questions about bias, accountability, and the distribution of power and opportunity.

As these innovations continue to evolve, we must grapple with the sustainability of our current economic and social systems. Will the continued rise of AI, automation, and robotics lead to widespread job displacement, deepen existing inequalities, or erode human agency? Or can these technologies be harnessed to create a more just, equitable, and sustainable society? The answers to these questions will depend not only on technical advancements, but also on the ethical frameworks and policies we establish to guide their development and deployment.

Levels of AI

Artificial Intelligence (AI) exists along a spectrum of complexity and capability, often described in terms of “levels.” Early AI systems, such as expert systems, were designed to mimic the decision-making abilities of human specialists within narrow domains – think medical diagnosis or troubleshooting technical issues. These systems rely on predefined rules and logic, and while they can outperform humans in specific, well-defined tasks, they lack the flexibility and adaptability of broader intelligence. At the other end of the spectrum is Artificial General Intelligence (AGI), a theoretical form of AI that can understand, learn, and apply knowledge across a wide range of tasks at a human-like level. Beyond AGI lies Artificial Superintelligence (ASI), which would surpass human intelligence in virtually every field, including creativity, problem-solving, and social intelligence.
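
To make the contrast concrete, the following sketch shows how a narrow, rule-based expert system works: a fixed set of hand-written rules applied to an input, with no ability to generalize beyond them. The rules and symptoms here are invented purely for illustration, not drawn from any real diagnostic system.

```python
# Minimal illustration of a rule-based "expert system": hard-coded rules in a
# narrow domain (hypothetical triage advice), with no ability to generalize.

def triage_advice(symptoms: set[str]) -> str:
    """Return advice based solely on predefined rules; unknown inputs fall through."""
    rules = [
        ({"fever", "stiff neck"}, "Urgent: refer for immediate evaluation."),
        ({"fever", "cough"},      "Likely respiratory infection: recommend clinic visit."),
        ({"sneezing"},            "Likely allergy or cold: recommend rest and fluids."),
    ]
    for required, advice in rules:
        if required <= symptoms:          # a rule fires only on an exact pattern match
            return advice
    return "No rule matches: escalate to a human specialist."

print(triage_advice({"fever", "cough"}))        # matches a predefined rule
print(triage_advice({"fatigue", "dizziness"}))  # outside the rule base -> escalate
```

Anything outside the rule base simply falls through, which is precisely the inflexibility that distinguishes such systems from broader intelligence.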

Most of what is marketed as “AI” today – such as large language models (LLMs) and natural language processing (NLP) systems – falls far short of AGI or ASI. These models, including popular chatbots and content generators, are trained on vast, curated datasets but do not actively or continuously learn from new data once deployed. Instead, they are periodically “tuned” by their creators, often for specific domains or applications, which can introduce or reinforce biases and inaccuracies present in the training data. The curated nature of these datasets means that AI outputs can reflect the perspectives, limitations, and prejudices of the data and those who select it, leading to algorithmic bias and fairness issues. Despite rapid advances, none of today’s mainstream AI systems possess the autonomy, adaptability, or self-awareness associated with AGI.

The path to AGI – and, by extension, ASI – remains uncertain, but many experts believe that once AGI is achieved, an immediate and unstoppable transition to ASI will follow. Given the potential for self-improvement and recursive learning – without curated input, human interruption, or specified domain limitations – this prospect raises profound questions about control and safety. The assumption that AGI or ASI could be reliably “controlled” is widely regarded as hubristic, given the unpredictable nature and potential power of such systems.

Compounding these concerns is the lack of universal ethical definitions or standards in the data used to train AI, making it impossible to predict what kind of “ethical center” an advanced AI might develop. As a result, society faces urgent questions about how to guide the development of increasingly capable AI systems in ways that align with shared values and long-term human interests.

AI Moral Agency

Current AI systems – including expert systems, large language models, and other advanced tools – are best understood as sophisticated instruments rather than independent moral agents. These systems currently lack consciousness, intentionality, and the capacity for ethical judgment, so moral agency and culpability remain with the humans who design, deploy, and use them. Developers are responsible for building systems that are safe and fair, operators must ensure proper oversight, and users must understand the tool’s limitations and risks. Attributing moral agency to these tools can lead to confusion, misplaced accountability, and the dangerous illusion that ethical responsibility can be delegated to technology.

The conversation shifts dramatically when considering the hypothetical emergence of Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI). If an AI system were to achieve human-level understanding, autonomy, and the ability to make independent decisions (which a number of AI researchers and companies are actively pursuing), the question of moral agency becomes more complex and contentious. Would such a system deserve to be treated as a moral agent, or even as a legal entity, responsible for its actions?

This debate is reminiscent of the gradual transfer of moral agency from parent to child: children initially lack full moral responsibility, which rests instead with their parents or guardians. As children develop autonomy and understanding, they gradually assume agency for their own actions.

Similarly, if AGI or ASI were to demonstrate genuine autonomy and ethical reasoning, there could be a case for shifting some degree of responsibility from the creators or users to the AI itself. However, this transition would be fraught with uncertainty, as we currently lack clear ethical rubrics, legal frameworks, or even a consensus on what would constitute an “ethical center” for such entities.

Autonomous Vehicles

Autonomous vehicles (AVs) are rapidly transforming transportation, with trucking and freight leading the way in the adoption of high-level autonomy. The Society of Automotive Engineers (SAE) defines six levels of vehicle autonomy, from Level 0 (no automation) to Level 5 (full automation, with no human intervention required at any point). Most consumer vehicles today feature Level 2 or Level 3 autonomy, offering driver assistance and partial automation. However, the most groundbreaking developments are occurring at Levels 4 and 5, where vehicles can operate independently in specific conditions or, eventually, in all environments.
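
One simple way to represent this classification in software is as an ordered enumeration. The sketch below paraphrases the SAE levels described above; the shorthand level names and the supervision check are of our own devising, not official SAE terminology.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, as summarized in the text."""
    NO_AUTOMATION      = 0  # human performs all driving tasks
    DRIVER_ASSISTANCE  = 1  # a single assist feature (e.g., adaptive cruise control)
    PARTIAL_AUTOMATION = 2  # combined steering/speed assist; driver must supervise
    CONDITIONAL        = 3  # system drives in limited conditions; driver must take over on request
    HIGH_AUTOMATION    = 4  # no human needed within a defined operational domain
    FULL_AUTOMATION    = 5  # no human intervention required in any environment

def human_supervision_required(level: SAELevel) -> bool:
    # Levels 0-2 require constant supervision; Level 3 still requires a fallback-ready driver.
    return level <= SAELevel.CONDITIONAL

print(human_supervision_required(SAELevel.PARTIAL_AUTOMATION))  # True
print(human_supervision_required(SAELevel.HIGH_AUTOMATION))     # False
```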

In the United States, fully autonomous trucking is no longer a distant vision. Aurora Innovation launched driverless trucks on the I-45 corridor between Dallas and Houston in 2025. Other companies such as Kodiak Robotics, Gatik, and Waabi are also advancing hub-to-hub autonomous trucking, particularly in states like Texas, Arizona, and Florida, where regulations are more permissive.

Internationally, companies such as China’s Inceptio Technology, along with on-road trials in Germany, are pushing the envelope in large-scale autonomous truck deployment. These trucks promise to address driver shortages, increase operational efficiency, and reduce costs, with the potential to revolutionize logistics and supply chains globally.

One of the most compelling arguments for autonomous vehicles is their potential to dramatically reduce vehicular crashes. Human error is a contributing factor in over 90% of traffic crashes; by removing fatigue, distraction, and impaired driving from the equation, AVs could save thousands of lives annually.
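
A rough back-of-envelope calculation illustrates the scale of the claim. The figures below – roughly 40,000 annual U.S. traffic deaths and a range of assumed AV effectiveness – are illustrative assumptions for the sake of the sketch, not data from this chapter.

```python
# Back-of-envelope estimate of lives potentially saved by AVs.
# All inputs are illustrative assumptions, not figures from the text.

annual_traffic_deaths = 40_000   # rough recent U.S. annual figure (assumption)
human_error_share     = 0.90     # crashes with human error as a factor (assumption)

for av_effectiveness in (0.25, 0.50, 0.75):  # share of error-related deaths AVs prevent
    lives_saved = annual_traffic_deaths * human_error_share * av_effectiveness
    print(f"If AVs prevent {av_effectiveness:.0%} of error-related deaths: "
          f"~{lives_saved:,.0f} lives saved per year")
```

Even under conservative assumptions, the potential benefit runs to thousands of lives per year, which is why safety is so central to the case for AVs.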

However, the transition is not without challenges. Legal and ethical questions loom large: when an autonomous vehicle is involved in a crash, who is responsible – the manufacturer, the software developer, the fleet operator, or the owner? Current legal frameworks are struggling to keep pace, and there is ongoing debate about how to assign liability and ensure accountability as vehicles become more autonomous. These questions will only grow in importance as AV technology becomes more ubiquitous, raising fundamental issues about trust, transparency, and the future of transportation.

Chatbots

Chatbots have evolved dramatically from their origins as simple, rule-based programs designed for entertainment or to answer basic questions. Early chatbots, like ELIZA in the 1960s, relied on scripted responses and could only handle straightforward, predictable interactions. As technology advanced, chatbots became popular in business settings for providing 24/7 customer service, automating frequently asked questions, and reducing the workload for human agents. The introduction of natural language processing (NLP) and machine learning (ML) allowed chatbots to better understand context and intent, leading to more sophisticated conversational agents that could manage more complex queries. Today, chatbots are widely used not only for customer service but also for telemarketing, sales, and customer engagement, often serving as the first point of contact between companies and their customers.
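
The gap between scripted chatbots and genuine conversation is easiest to see in code. The sketch below imitates an ELIZA-style, rule-based bot: a handful of invented patterns paired with canned responses, and a generic deflection for anything the script does not anticipate.

```python
import re

# ELIZA-style scripted chatbot: pattern -> canned response. Anything outside
# the script falls through to a generic deflection, which is where rule-based
# bots tend to frustrate users.

SCRIPT = [
    (re.compile(r"\bhours\b|\bopen\b", re.I),    "We are open 9am-5pm, Monday through Friday."),
    (re.compile(r"\brefund\b|\breturn\b", re.I), "You can request a refund within 30 days of purchase."),
    (re.compile(r"\bhuman\b|\bagent\b", re.I),   "Transferring you to a human representative."),
]

def reply(message: str) -> str:
    for pattern, response in SCRIPT:
        if pattern.search(message):
            return response
    return "I'm sorry, I didn't understand that. Could you rephrase?"

print(reply("What are your hours?"))                    # scripted answer
print(reply("My order arrived damaged and I'm upset"))  # falls outside the script
```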

Despite these advancements, significant limitations persist. Most chatbots, even those powered by large language models, are trained on curated datasets and operate within restricted domains; they struggle to adapt when conversations deviate from expected patterns, often resulting in user frustration when the system cannot process nuanced or evolving requests. Additionally, modern chatbots increasingly use synthesized voice recordings, complete with intonations and inflections, to simulate emotion and create a more “human-like” interaction. This can enhance user experience but also blurs the line between machine and human, raising important ethical questions:

  • Is it ethical to replace human customer service jobs with chatbots, especially when the technology is still imperfect?
  • Should companies be required to disclose when a customer is interacting with a chatbot rather than a real person?
  • What are the risks of chatbots providing false, misleading, or “hallucinated” information to users?
  • How can companies ensure that chatbots do not exploit users by establishing artificial relationships or manipulating emotions?
  • Who is responsible if a chatbot causes harm, either through misinformation or inappropriate interactions?
  • Should there be regulations governing the use of voice synthesis to prevent deception or emotional manipulation?
  • How can biases and inaccuracies in chatbot responses be effectively identified and corrected?
  • What safeguards should be in place to protect vulnerable populations from exploitation by automated systems?
  • How can transparency and accountability be maintained as chatbots become more autonomous and integrated into everyday life?

These questions highlight the ethical complexities that accompany the rapid integration of chatbots into business and society, underscoring the need for thoughtful oversight and responsible development as the technology continues to advance.

Robotics and Robot Ethics

Robotics is the interdisciplinary field of engineering and computer science focused on the design, construction, operation, and use of programmable machines – robots – that can replicate, substitute, or assist human actions in various tasks. Some of the earliest robots were ancient automata, such as mechanical birds in ancient Greece and water clocks in China, but the modern concept of the robot emerged in the 20th century with inventions like George Devol’s Unimate, the first industrial robotic arm, which began operating at a General Motors facility in 1959. The field of robotics was further defined by Isaac Asimov’s introduction of the “Three Laws of Robotics,” which have influenced ethical thinking about robots ever since.

Isaac Asimov’s Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although they originated in science fiction, these three laws have become a foundational starting point for many philosophical and ethical positions on how robotics should be developed and used.
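
As a thought experiment, the three laws can be read as a strict priority ordering over a proposed action. The sketch below encodes that ordering using hypothetical yes/no judgments (whether an action harms a human, was ordered by a human, or endangers the robot); real systems, of course, cannot reduce such judgments to simple booleans.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Hypothetical evaluation of a proposed robot action."""
    harms_human: bool       # would the action injure a human (or inaction allow harm)?
    ordered_by_human: bool  # was it ordered by a human?
    endangers_robot: bool   # would it damage or destroy the robot itself?

def permitted(action: Action) -> bool:
    # First Law dominates everything else.
    if action.harms_human:
        return False
    # Second Law: obey human orders unless they conflict with the First Law (checked above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when neither higher law is at stake.
    return not action.endangers_robot

print(permitted(Action(harms_human=False, ordered_by_human=True,  endangers_robot=True)))   # True: orders outrank self-preservation
print(permitted(Action(harms_human=True,  ordered_by_human=True,  endangers_robot=False)))  # False: First Law overrides orders
```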

Most industrial robots today are fully programmed using programmable logic controllers (PLCs) or computer numerical control (CNC) systems, enabling them to perform repetitive tasks such as welding, assembly, and painting within tightly controlled environments. These robots are typically limited to their pre-programmed domains and cannot adapt to new tasks without human intervention or reprogramming.
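
Conceptually, such a robot runs a fixed cycle of steps, repeating them indefinitely until a human reprograms it. The simplified loop below, with invented step names, captures that rigidity.

```python
import itertools

# A pre-programmed industrial robot is essentially a fixed cycle of steps: it
# repeats the same sequence indefinitely and cannot handle anything outside it
# without being reprogrammed by a human.

WELD_CYCLE = [
    "move_to_part",
    "clamp_part",
    "weld_seam",
    "release_part",
    "return_to_home",
]

def run_cycles(num_cycles: int) -> None:
    for cycle, step in itertools.product(range(1, num_cycles + 1), WELD_CYCLE):
        print(f"cycle {cycle}: {step}")  # in a real system this would trigger actuators

run_cycles(2)
```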

However, advances in robotics have produced machines capable of operating in more diverse and less structured environments, such as autonomous mobile robots, manufacturing and warehouse automation systems, and even robots that can assist in surgery or explore hazardous locations. These more advanced robots use sensors, AI, and machine learning to make decisions and adapt to changing conditions, reducing the need for direct human oversight and expanding the potential applications of robotics.

Some ethical questions raised by the increasing use of robotics include:

  • What are the societal consequences of job displacement caused by robotics without corresponding changes in the existing economic model?
  • Should robots be used for police or military operations, and what are the risks of delegating lethal force to machines?
  • Is it ethical to use robots to administer medicines or perform medical procedures, and who is responsible if something goes wrong?
  • Should robots be permitted to manufacture or design other robots, potentially accelerating automation and reducing human oversight?
  • How do we ensure safety and accountability when robots operate in public or shared spaces?
  • What rights, if any, should humans have to intervene in or override robot decisions in critical situations?
  • How can we prevent bias or discrimination in robots programmed for social or service roles?
  • Should there be universal standards or regulations for the ethical design and deployment of robots?
  • How do we balance innovation with the need to protect vulnerable populations from unintended harm caused by robotics?

These questions highlight the complex ethical landscape that accompanies the rapid advancement and integration of robotics into society.

Algorithmic Bias

Algorithmic bias arises because AI systems are fundamentally shaped by the data used to train them, the domains they are intended to operate within, and the objectives set by their developers. Most AI is trained on curated datasets that reflect the perspectives, limitations, and sometimes the prejudices of those who collect and label the data. These models are typically fixed within a specific domain, meaning their understanding and decision-making are limited to the patterns present in their training environment. Furthermore, the intended outcomes – what the AI is supposed to optimize or predict – are defined in advance by the tool’s creators, embedding their assumptions and priorities into the system. This results in inherent biases, which can become self-perpetuating as the AI consistently produces outputs that reinforce the patterns and disparities present in its training data.

Imagine a hypothetical, national healthcare system that adopts an AI-powered tool to help prioritize patients for specialist referrals. The model is trained on historical data from urban hospitals, where access to care and patient demographics differ significantly from rural areas. Because the data underrepresents rural patients and overrepresents certain ethnic groups, the AI learns to prioritize urban, majority-population patients for referrals. Over time, this bias is amplified: rural and minority patients are systematically deprioritized, leading to poorer health outcomes and widening existing disparities. The system’s recommendations are trusted as “objective” because they come from an advanced AI, making it difficult for affected groups to challenge the results or for administrators to recognize the underlying bias.
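
A toy numerical sketch, using entirely invented figures, shows the mechanism at work: a model that “learns” from historical referral rates simply reproduces them, so patients with identical clinical need are scored differently depending on which group dominated the training data.

```python
# Toy illustration (invented numbers): a scoring model trained on historical
# referral rates reproduces those rates, even when clinical need is identical
# across groups.

# Historical training data: (group, was_referred). Urban patients were referred
# more often because of access, not because of greater need.
history = [("urban", True)] * 80 + [("urban", False)] * 20 \
        + [("rural", True)] * 30 + [("rural", False)] * 70

def learn_referral_rates(records):
    """'Train' the model: learn each group's historical referral rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [referred for g, referred in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = learn_referral_rates(history)  # the "model" is just learned base rates per group

# Two new patients with identical clinical need are scored differently
# purely because of which group dominated the training data.
for patient_group in ("urban", "rural"):
    print(f"{patient_group} patient priority score: {model[patient_group]:.2f}")
```

The output (0.80 for the urban patient, 0.30 for the rural patient) reflects nothing about individual need; it simply echoes the historical disparity, now dressed up as an “objective” score.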

If machine learning environments begin to “learn on their own” – continuously updating their models based on new data – the risk of algorithmic bias may become even more pronounced. Without explicit mechanisms to recognize and correct for bias, an AI could reinforce and amplify prejudices present in both its initial and ongoing data streams. How would such a system recognize that its data is incomplete or skewed? Could it ever truly understand the social and ethical context behind the data it consumes? Would it be able to distinguish between correlation and causation, or between majority patterns and minority needs? If an AI is left to self-train, who is responsible for monitoring and correcting its outputs, and how can we ensure transparency and accountability in such a dynamic system?

These questions highlight the risk that algorithmic bias may be inevitable unless there is continuous human oversight, robust auditing, and deliberate efforts to diversify and scrutinize training data. But how will this be accomplished if the creators of the AI systems are allowed to claim ‘trade secrets’ or ‘national security’ and then withhold this information? As AI systems become more autonomous, the challenge of ensuring fairness and ethical outcomes will only grow more complex – demanding vigilance, innovation, and a commitment to equity at every stage of development and deployment.

Automation

Automation refers to the use of technology to perform tasks without human intervention, marking a fundamental shift from humans merely using tools to tools independently executing work. The earliest automation can be traced back to inventions like water mills and mechanical clocks, which reduced the need for constant human oversight. The Industrial Revolution accelerated this trend with machines such as the Jacquard loom and assembly line systems, which automated textile production and manufacturing processes. Over time, automation evolved from simple mechanical aids to sophisticated systems capable of performing complex, repetitive, or hazardous tasks with minimal human input.

The primary drivers for automation include improving health and safety by removing humans from dangerous environments, surpassing human physical and cognitive limitations, increasing speed and productivity, reducing fatigue and stoppages, and enhancing accuracy and consistency. Automation also allows for 24/7 operation, minimizes waste, and ensures higher quality control, all of which contribute to significant cost savings and increased profits for businesses. While these benefits are often framed in terms of operational efficiency, flexibility, and safety, they are ultimately subordinate to economic motivations: the adoption of automation is primarily justified by its potential to reduce labor costs, increase output, and boost competitiveness in the marketplace.

Today, automation extends far beyond manufacturing. In logistics, automated warehouses and self-driving delivery vehicles streamline supply chains. In healthcare, robotic surgery and automated diagnostics improve precision and efficiency. Financial services use algorithmic trading and automated fraud detection, while agriculture benefits from autonomous tractors and drones for planting and crop monitoring. Even creative industries are seeing automation in content generation and design.

If automation continues unchecked across all sectors, it could potentially replace most traditional forms of human employment, fundamentally challenging the status quo of the current economic model. The question remains: can our existing economic systems – rooted in wage labor and job-based income – sustain the rapid pace of automation adoption? Or, will we need to rethink how value, work, and livelihood are distributed in a world where machines do most of the work?

Predictive Policing

Before the existence of formal laws, societies were governed by shared ethical norms – unwritten rules about right and wrong that guided individual and collective behavior. Laws and legal systems only emerged after these societal ethics were violated, requiring a codification of values into enforceable rules to maintain order and address breaches. Policing, as a profession and practice, arose to uphold these laws, maintain social order, and protect the community through the prevention, detection, and investigation of crime. The role of policing has always been closely tied to ethics, as officers are entrusted with significant power and discretion, and their actions can profoundly affect life, liberty, and public trust.

Policing, however, has not always been a force for good. Throughout history, the institution has been subject to abuse – ranging from corruption and discrimination to excessive use of force and the protection of political interests over public welfare. These abuses highlight the ongoing tension between the ideals of ethical policing – courage, respect, empathy, and public service – and the realities of institutional culture and unchecked discretionary power. The evolution of policing models, from crime control to social peacekeeping, reflects an ongoing struggle to balance authority, accountability, and the ethical imperative to serve the public fairly and justly.

Predictive policing is a recent development that uses algorithms and data analysis to forecast where crimes are likely to occur or who might be involved, with the aim of deploying resources more efficiently and preventing crime before it happens. Proponents argue that predictive policing can improve efficiency, reduce crime rates, and help allocate police resources more effectively. However, critics warn that these systems can amplify existing biases, lack transparency, and lead to over-policing of already marginalized communities. The risks of algorithmic bias, lack of oversight, and ethical ambiguity – discussed in previous sections – are especially acute in predictive policing, where flawed data or unchecked models can result in large-scale injustices, erode public trust, and perpetuate cycles of discrimination.
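
Critics often describe this as a runaway feedback loop: patrols are sent where recorded crime is highest, but crime is mostly recorded where patrols are present. The toy simulation below, with invented numbers and identical true crime rates in both districts, shows how a small initial disparity can lock in over time.

```python
# Toy simulation of the feedback-loop critique of predictive policing (all
# numbers invented): patrols go wherever recorded crime is highest, and crime
# is only recorded where patrols go, so an initial disparity locks in.

true_daily_incidents = {"district_A": 10, "district_B": 10}  # identical underlying crime
recorded = {"district_A": 12, "district_B": 10}              # district_A starts slightly higher

patrol_days = {"district_A": 0, "district_B": 0}

for day in range(365):
    # Predictive model: deploy today's patrol to the district with the most recorded crime.
    target = max(recorded, key=recorded.get)
    patrol_days[target] += 1
    # Incidents are only recorded where the patrol is present.
    recorded[target] += true_daily_incidents[target]

print("Patrol days after one year:", patrol_days)
print("Recorded incidents:        ", recorded)
# district_A receives every patrol and accumulates all recorded crime, even
# though the true incident rates in the two districts are identical.
```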

As predictive policing becomes more prevalent, the amplification of these risks could manifest in widespread surveillance, unfair targeting, and diminished civil liberties. Without rigorous ethical standards, transparency, and accountability, predictive policing could undermine the very societal values and ethical foundations that laws and policing were meant to protect.

Textbook Definitions – Artificial Intelligence (AI), Automation and Robotics, and Algorithmic Ethics

  • Automation – The use of technology, machines, or systems to perform tasks with minimal or no human intervention, streamlining processes and increasing efficiency.
  • Information Age – The current era characterized by the rapid transmission, processing, and accessibility of information through digital technology and computing.
  • agricultural era – A historical period when societies were primarily based on farming and the cultivation of crops and livestock.
  • Industrial Revolution – The period of major industrialization during the late 18th and early 19th centuries marked by the shift from hand production to machines and factory systems.
  • artificial intelligence (AI) – The development of computer systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, and problem-solving.
  • Robotics – The branch of technology that deals with the design, construction, operation, and application of robots to perform automated tasks.
  • Autonomous vehicles (AVs) – Vehicles equipped with technology that enables them to navigate and operate without direct human control.
  • Chatbots – Software applications that simulate human conversation using text or voice, often for customer service or information retrieval.
  • hiring practices – The methods and criteria organizations use to recruit, select, and employ personnel.
  • Predictive policing – The use of data analysis and algorithms to forecast potential criminal activity and inform law enforcement strategies.
  • bias – A systematic inclination or prejudice in favor of or against certain outcomes, groups, or data, often leading to unfair or inaccurate results.
  • accountability – The obligation to explain, justify, and take responsibility for one's actions or decisions.
  • distribution of power and opportunity – The way authority, resources, and chances for advancement are allocated among individuals or groups in a society.
  • sustainability – The capacity to maintain or support processes, systems, or resources over the long term without depleting them.
  • job displacement – The loss of employment opportunities due to technological change, automation, or other factors.
  • human agency – The capacity of individuals to act independently and make their own free choices.
  • expert systems – Computer programs that emulate the decision-making abilities of human experts in specific domains using predefined rules.
  • domains – Specific areas of knowledge, activity, or expertise within which a system or individual operates.
  • predefined rules and logic – Explicitly programmed instructions and decision criteria that govern the behavior of a system or process.
  • Artificial General Intelligence (AGI) – A theoretical form of AI capable of understanding, learning, and applying knowledge across a wide range of tasks at a human-like level.
  • Artificial Superintelligence (ASI) – A hypothetical AI that surpasses human intelligence in all respects, including creativity, reasoning, and problem-solving.
  • large language models (LLMs) – Advanced AI models trained on extensive text data to generate, summarize, and understand human language.
  • natural language processing (NLP) – The field of AI focused on enabling computers to interpret, process, and generate human language.
  • curated datasets – Carefully selected and organized collections of data used to train or evaluate AI models.
  • tuned – Adjusted or refined by developers to improve a model’s performance or adapt it to specific tasks or domains.
  • reinforce biases – To perpetuate or amplify existing prejudices or patterns present in training data through repeated outputs.
  • Algorithmic bias – Systematic and repeatable errors in AI outputs that result from biases in the data, design, or implementation of algorithms.
  • recursive learning – A process where AI systems iteratively update and improve themselves by learning from their own outputs or new data.
  • ethical center – The core set of moral principles or values that guide decision-making and behavior in an individual or system.
  • independent moral agents – Entities capable of making ethical decisions and being held responsible for their actions without external control.
  • moral agency – The ability to discern right from wrong and to be held accountable for one’s actions.
  • culpability – The degree to which an individual or entity is responsible for a fault or wrong.
  • Human error – Mistakes or failures in judgment, perception, or action made by people, often leading to unintended consequences.
  • simulate emotion – The act of mimicking or reproducing emotional expressions or responses using technology.
  • automata – Self-operating machines or mechanisms, often designed to follow a predetermined sequence of operations.
  • Three Laws of Robotics – A set of ethical rules devised by science fiction writer Isaac Asimov to govern the behavior of robots.
  • programmable logic controllers (PLCs) – Industrial digital computers used to control manufacturing processes or machinery.
  • computer numerical control (CNC) – The automated control of machining tools and 3D printers by means of a computer.
  • manufacturing and warehouse automation – The use of automated systems and robots to perform tasks in production and storage facilities with minimal human involvement.
  • intended outcomes – The specific goals or results that a system or process is designed to achieve.
  • self-perpetuating – Capable of continuing or reinforcing itself without external input or intervention.
  • systematically deprioritized – Consistently assigned lower importance or priority in a structured or organized manner.
  • correlation – A statistical relationship or association between two or more variables.
  • causation – The action of causing something to happen; a direct cause-and-effect relationship.
  • majority patterns – Trends or behaviors that are most common within a given dataset or population.
  • minority needs – The specific requirements or interests of less-represented groups within a population.
  • oversight – The act of supervising, monitoring, or regulating processes or organizations to ensure proper conduct.
  • auditing – The systematic examination and evaluation of processes, systems, or data to ensure accuracy, compliance, and integrity.
  • algorithmic trading – The use of computer algorithms to automatically execute financial trades at high speed and volume.
  • autonomous tractors – Self-driving agricultural vehicles capable of performing tasks such as plowing, planting, and harvesting without human intervention.
  • drones – Unmanned aerial vehicles operated remotely or autonomously for various purposes, including surveillance, delivery, and data collection.
  • Policing – The activities and responsibilities of maintaining public order, enforcing laws, and preventing and investigating crime.
  • corruption – Dishonest or unethical conduct by those in power, typically involving bribery or the abuse of authority for personal gain.
  • discrimination – Unfair or prejudicial treatment of individuals or groups based on characteristics such as race, gender, or age.
  • excessive use of force – The application of more physical power than is necessary or justified in a given situation, often by law enforcement.
  • unchecked discretionary power – Authority exercised without sufficient oversight, limits, or accountability, increasing the risk of abuse.
  • authority – The legitimate power to make decisions, enforce rules, and command obedience.
  • accountability – The requirement to answer for one’s actions and decisions, especially in positions of power or responsibility.
