Computing and Moral Responsibility
“When technology is injected into a setting, the roles and responsibilities of those in that setting change. What might have been obvious ways to assess responsibility in a less technologically-supported environment become obscured when the human-technology balance is recalibrated with the introduction of new technology. In such cases, the new responsibilities and relationships need to be identified and articulated so that people know what their new role entails and the necessary requisites for performing it.” (Cass 1996, 71)
Discussions of computing and moral responsibility have largely revolved around the need to reassess such roles and responsibilities, often in the face of injuries or deaths resulting from computer error. Three main lines of inquiry are especially prominent. First, several authors have sought to identify responsibility for computer system use, especially when such use results in error or harm. These efforts include analyses of particular case studies, the identification of barriers to responsibility, recommendations for overcoming these barriers, and clarification of the responsibilities of the various parties involved. Second, several authors have discussed whether and why computers themselves might be held morally responsible. Third, several authors have offered guidance regarding whether and when humans might responsibly give up decision-making power to computer systems.
Many of the concerns raised in the analysis of responsibility for computer use could equally well be applied to other forms of technology. However, the contemporary use of computers to model human cognition, philosophical discussions about the possibility of computational consciousness, the prevalence of science fiction portrayals of insightful and caring androids, and the malleability of computer technology all encourage us to imagine that computers (unlike other technologies) might have (or someday achieve) cognitive capabilities on par with humans. In short, while blaming a gun is clearly inappropriate (i.e., guns don't kill people, people kill people), blaming a computer is just plausible enough to tempt us to do so under the illusion that we might get away with it. Thus, a careful exploration of whether computers could be held responsible and (if so) when they should be given responsibilities is warranted. Such discussions challenge us not merely to better understand the potential and limitations of computing technology, but also to better understand our own human strengths and weaknesses.
- 1. Responsibility
- 2. Responsibility for Computer Use
- 2.1 Case Studies [Not Yet Available]
- 2.2 Barriers to Responsibility
- 2.3 Recommendations for Overcoming Barriers
- 2.4 Clarification of Responsibilities
- 3. Can Computers Be Morally Responsible?
- 4. Can Humans Responsibly Give Decision-Making Power to Computers?
- Bibliography
- Other Internet Resources
- Related Entries
1. Responsibility
Explicit or implicit in many discussions of computing and responsibility is Hart's (1985) analysis of different senses of responsibility, including Role-Responsibility, Causal-Responsibility, Liability-Responsibility, and Capacity-Responsibility. These senses (and others identified by individual authors as needed) provide a framework for exploring computers and moral responsibility.
Role-responsibility refers to the performance or fulfillment of the duties attached to a person's social role. On the one hand, one is responsible for performing the duties attached to his or her role; on the other hand, one is responsible if he or she actually performs these duties.
Causal-responsibility is a retrospective sense of responsibility, entailing the existence of a causal relationship between the agent and the consequences for which the agent is (arguably) responsible.
Liability-responsibility refers to responsibility for causing harm in violation of the law. Typically, liability entails fault, which Feinberg (1968) analyzes as having a causal condition (i.e., the act caused the harm), a fault condition (i.e., the agent was at fault because the act made harm inappropriately likely, either intentionally or through negligence), and a causal relevance condition (i.e., the sort of harm made likely was of the same sort as the harm actually caused). Liability-responsibility primarily focuses on the identification of who (or what) caused the harm (intentionally or through negligence), and who (or what) shall pay for it.
Finally, capacity-responsibility involves the capacity to understand the conduct required by the relevant norms (be they legal, moral, or otherwise), to reason about what to do in light of these requirements, and to control one's conduct in accord with the results of such deliberation (Hart 1985). Capacity-responsibility has two senses. On the one hand, one is (retrospectively) responsible for having performed a particular action or brought about a particular consequence (e.g., he is responsible for hitting his sister). On the other hand, one's psychological capacities may be such that the question of whether one is responsible for a particular action does not sensibly arise.
As noted above, these senses (and others identified by individual authors as needed) provide a framework for exploring computers and moral responsibility. They enable us to analyze in retrospect who is responsible for the consequences of computer use (e.g., Which humans played a causal role? Did they do so in a faulty (i.e., intentionally or negligently harmful) way? Who (if anyone) is liable to punishment or compensation?). They enable us to anticipate and strive to prevent future problems (e.g., What role responsibilities do the various parties in the creation, implementation, and use of computers have? Are the various parties aware of, capable of understanding, and actually fulfilling these responsibilities? What punishments or fines are they liable to should they fail?). They enable us to consider whether computers themselves could be responsible (e.g., Are they capable of understanding and controlling their actions; in particular, can they act intentionally?). Finally, they enable us to assess what decisions (if any) computers should not be allowed to make (e.g., Are computers better than humans at performing some roles? Is letting a computer make decisions negligent? Does turning decisions over to computers undermine or diminish our capacity for understanding and fulfilling our responsibilities as human agents?).
Despite these distinctions, responsibility is frequently conflated with liability. However, an exclusive focus on liability (especially who is legally required to compensate for harms) overlooks many relevant and useful aspects of responsibility, and several authors attempt to refocus our attention on these more valuable aspects. For example, Nissenbaum (1994; 1996) and Kuflik (1999) focus on accountability or answerability as a crucial aspect of responsibility, and Gotterbarn (2001) and Ladd (1989) reject a negative account of responsibility (i.e., one that is primarily about blaming and punishing) and embrace instead a positive account of responsibility (i.e., one that focuses on the identification of, and fulfillment of, duties based on roles or relationships). These critiques are especially important when assessing responsibility for computer use because the suggested alternatives are non-exclusive and better acknowledge the complexity of the systems under review. As Ladd puts it, “[o]ne person's being responsible does not entail that other persons are not also responsible; [in addition] responsibility need not always be direct and proximate; it may be and more commonly is indirect and remote” (1989, 213).
2. Responsibility for Computer Use
Computers are the products of more or less responsible humans; as such, most analyses of the responsibility for the use of such systems (as well as the consequences of such use) focus on the humans who create, implement, and use these systems. When focusing on moral (as opposed to legal[1]) responsibility, the prime motivation for identifying responsible parties is the recognition that “accountability serves as a powerful tool for bringing about better practices, and consequently more reliable and trustworthy systems” (Nissenbaum 1994, 74). Toward this end, Leveson and Turner (1993) consider the case of the Therac-25 malfunction at length; Gotterbarn (2001), Friedman and Kahn (1992), Johnson and Mulvey (1995), Nissenbaum (1994; 1996), and Ladd (1989) all explore the general question of responsibility for computer use; and Murray (2001) and Cass (1996) provide concrete guidance to the various parties involved.
This section includes a brief discussion of several particular and noteworthy cases, identification of several major barriers to responsibility, recommendations for overcoming these barriers, and clarification of the responsibilities of the various parties involved.
- 2.1 Case Studies [Not Yet Available]
- 2.2 Barriers to Responsibility
- 2.3 Recommendations for Overcoming Barriers
- 2.4 Clarification of Responsibilities
2.1 Case Studies
[Not Yet Available]
2.2 Barriers to Responsibility
Nissenbaum (1994; 1996) identifies four barriers to accountability: the problem of many hands, bugs, blaming the computer, and ownership without liability. These barriers are discussed as well by other authors (Friedman and Kahn 1992; Gotterbarn 2001; Johnson and Mulvey 1995; Ladd 1989), and two additional barriers emerge from the literature: poor articulation of norms, and the assumption of the ethical neutrality of computers. A discussion of each of these barriers follows.
- 2.2.1 The problem of many hands
- 2.2.2 Bugs
- 2.2.3 Blaming the computer
- 2.2.4 Ownership without liability
- 2.2.5 Poor articulation of norms
- 2.2.6 Assumption of ethical neutrality
2.2.1 The problem of many hands
The problem of many hands results from the fact that complex computer systems are produced by groups of people (e.g., project teams, corporations), thereby making it difficult to identify who is responsible for errors or harmful consequences of use. As Gotterbarn (2001) notes, this problem is due in part to a malpractice model of responsibility, which looks to assign blame and mete out punishment, and in part to an individualistic model of responsibility, which looks to assign responsibility to one person. This latter, individualistic model is inadequate for complex computer systems in particular, for a number of reasons.
As Nissenbaum (1994; 1996) points out, using Feinberg's analysis of fault discussed above, the person who satisfies the fault condition (i.e., the programmer or designer) is typically not the person who satisfies the causal condition (i.e., the user). In addition, the institutional settings in which these systems are developed are composed of groups; thus decisions are often made by more than one person (so no one person satisfies the fault condition) and are often implemented by more than one person (so no one person satisfies the causal condition). Further, the practice of code re-use results in multiple, and not necessarily coordinated, hands in the creation of systems. Finally, it is not unusual to re-use part of a system, or even a whole system, in a new application; thus, the hands that implement the new application may be poorly coordinated with the hands that created it.
This problem is evident when, e.g., creators complain that they lack control over how users use the system (Johnson and Mulvey 1995), when designers blame clients for providing inadequate specifications (Gotterbarn 2001), when users expect creators and implementers to accept responsibility in the face of disastrous consequences (Johnson and Mulvey 1995), and when the identification of several responsible parties is taken to imply diminished responsibility for all (Gotterbarn 2001). Invoking notions of collective responsibility may help those involved accept a stronger sense of responsibility (see, e.g., Murray 2001), but it may also inhibit the improvement of both practice and technology if it does not include an investigation of what went wrong (Ladd 1989).
2.2.2 Bugs
Not only are the human organizations which create these systems complex, but so too are the computer systems themselves. This increasing complexity makes it harder to identify and fix bugs both before the system is used and after errors are detected; it also makes it easier to justify not looking closely at problems in retrospect. Nissenbaum (1994; 1996) argues that because there are no clear standards of negligence, the presence of bugs is not only to be expected, but it is often excused. Further, Gotterbarn (2001) argues that using terms such as ‘bugs’ and ‘computer error’ rather than, e.g., ‘programmer error’ discourages humans from interpreting these errors as their own, thereby inhibiting efforts to understand and prevent similar problems in the future.
2.2.3 Blaming the computer
A third barrier to responsibility is the ease with which we can blame computers for harmful consequences (Nissenbaum 1994; 1996). This is the result of several interrelated factors. First, the computer is often the proximate cause of the harm, thus satisfying the causal condition of fault, while humans may be far removed from the consequences and thus may not obviously satisfy the causal condition.
Second, people often attribute intentionality to computers (so computers appear to satisfy the fault condition as well). Such attributions of intentionality make sense (whether or not they are appropriate) because the consequences of computer use are not always easily or obviously interpretable as human action. Indeed, Friedman and Kahn (1992) note that when computational systems are designed to interact with the user as though the system were a human—thereby creating the illusion of an intermediary agent—such anthropomorphization is encouraged.
Third, these computers perform the same tasks previously performed by humans who were not only responsible for completing these tasks but could also be held responsible for any harms that resulted; this replacement of humans by computers creates the illusion that “we have ‘delegated’ or ‘abdicated’ our decision-making powers to computers and have made computers responsible for outcomes for which human beings used to be responsible” (Ladd 1989, 219). This illusion is especially strong in closed-loop systems (i.e., where the system both makes and acts on decisions without human oversight), when the system has become established as an expert in the field, and in some types of instructional technology (Friedman and Kahn 1992).
Fourth, blaming computers makes sense in light of the unreasonably high expectations users have of computers (Johnson and Mulvey 1995): when those expectations are violated (as they will be), it is the system that has violated them and thus becomes the target of blame.[2] As Gotterbarn (2001) notes, the complexity of the system compounds this by making it difficult to re-direct those expectations, and the subsequent blame, onto the appropriate humans involved.
Finally, assuming the individualistic model of responsibility, once we have blamed the computer, there is no need to investigate other, human factors.
2.2.4 Ownership without liability
A fourth barrier to responsibility is the current practice of extending the privileges and rights of ownership of software and computer systems without also demanding that owners accept responsibility for their products (Nissenbaum 1994; 1996). As Johnson and Mulvey (1995) note, users expect owners and creators to accept responsibility for disastrous consequences; this makes sense based on their experience with other products and services, some of which are governed by strict liability (i.e., liability without having to demonstrate fault). Rather than accepting (full or partial) responsibility, owners and creators shirk responsibility by, e.g., blaming the client for providing inadequate specifications (Gotterbarn 2001) and appealing to their lack of control over how their system is implemented or used (Johnson and Mulvey 1995).
2.2.5 Poor articulation of norms
Many of these barriers have, as an underlying problem, the poor articulation and vague understanding of the relevant norms, a concern raised explicitly by Johnson and Mulvey (1995). Without a clear understanding of what each party in the creation, implementation, and use of a system is responsible for doing, we are poorly placed to assess fault (which contributes to the problem of many hands). Without a clear understanding of programming standards, we are poorly placed to distinguish innocent bugs from intentional or negligent programmer error (contributing to the interpretation of errors as ‘bugs’). Without a clear understanding of what we can reasonably expect from the system, we are vulnerable to disappointed expectations (and the temptation to blame the computer that has disappointed us). Finally, without a clear understanding of when creators and owners are to be held liable, users (and the community at large) are vulnerable to bearing the brunt of any harms that result.
2.2.6 Assumption of ethical neutrality
The final barrier to responsibility is the assumption that technology is ethically neutral. In contrast with blaming the computer, this assumption prevents us from considering the impact that technological choices have on our actions. Ladd (1989) suggests that this is due in part to the transparency of computer systems: when we don't notice them, we fail to consider how the technology affects our actions. In other words, the assumption that technology is ethically neutral is a barrier to responsibility because it obscures our responsibility for the choice to use technology, as well as for the choice of which technology to use.
Ultimately, however, this assumption of ethical neutrality is false. As Ladd points out, “computer technology has created new modes of conduct and new social institutions, new vices and new virtues, new ways of helping and new ways of abusing other people” (1989, 210-11). Further, the analytic distinction between means (i.e., tool or technology) and end is obscured in practice.
Unfortunately, several features about the field of computing reinforce the assumption of ethical neutrality. For example, Gotterbarn (2001) notes that computing has matured in theoretical fields such as mathematics rather than practical fields such as engineering and applied science; as such, problems and solutions are articulated and developed in a context in which their impact on humanity is less visible than it should be, leading to a myopic problem-solving style that gives no attention to the context of the problem and the consequences of the solution.
2.3 Recommendations for Overcoming Barriers
Three main areas of recommendations are prevalent. First, we must ensure that our understanding of (and assumptions about) responsibility are appropriate for the task at hand, namely using our practice of responsibility to improve both practice and technology. Second, we should re-design computer systems to reveal that they are not responsible. Third, we should clearly articulate those norms most relevant to the creation, implementation, and use of computer systems.
- 2.3.1 Ensure our understanding of responsibility is appropriate for the task at hand
- 2.3.2 Redesign the computer system
- 2.3.3 Clearly articulate norms
2.3.1 Ensure our understanding of responsibility is appropriate for the task at hand
As noted in the earlier discussion of responsibility (section 1), Nissenbaum recommends that we “[k]eep accountability distinct from liability to compensate” (1994, 79), where liability is focused on punishment and compensating the victim, and accountability is focused on assessing the actions of all the agents involved. Gotterbarn's (2001) and Ladd's (1989) advocacy of positive rather than negative responsibility serves the same end. Further, Nissenbaum (1994; 1996) and Ladd (1989) explicitly advocate assuming that someone is responsible “no matter how difficult it may be to determine” (Ladd 1989, 216), “unless, after careful consideration, we conclude that the malfunction in question is, indeed, no one's fault” (Nissenbaum 1994, 79).
2.3.2 Redesign the computer system
The temptation to blame the computer can be addressed by redesigning the computer to make its lack of capacity-responsibility more visible (Friedman and Millett 1997; Friedman and Kahn 1992). For example, redesigning computer systems so as to minimize if not eliminate the felt presence of the computational system (e.g., by permitting direct manipulation of files and objects) helps to eliminate the illusion that the computer is an agent and, therefore, responsible. In addition, opting for open-loop systems (where the system merely “recommends a course of action to a human user who may or may not choose to follow the recommendation”) and participatory design methods (involving the users in the design of the system) helps to integrate users as active decision-makers in both the use and creation of these systems (Friedman and Kahn 1992, 11).
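To illustrate the contrast between closed-loop and open-loop designs, the following minimal sketch (our own illustration, not drawn from Friedman and Kahn; all function and parameter names are hypothetical) shows a system that acts on its own output versus one that only surfaces a recommendation and waits for an explicit human decision:

```python
# A minimal sketch of the closed-loop / open-loop contrast discussed above.
# All names are illustrative, not taken from the literature.

def compute_recommendation(reading: float) -> str:
    """Stand-in for whatever analysis the system performs."""
    return "shut down pump" if reading > 100.0 else "no action"

def closed_loop_step(reading: float, actuate) -> None:
    """Closed-loop: the system both decides and acts; no human oversight."""
    actuate(compute_recommendation(reading))

def open_loop_step(reading: float, actuate, confirm) -> None:
    """Open-loop: the system merely recommends; a human may or may not follow the advice."""
    recommendation = compute_recommendation(reading)
    if confirm(f"System recommends: {recommendation!r}. Follow it?"):
        actuate(recommendation)

if __name__ == "__main__":
    def log_action(action: str) -> None:
        print(f"executing: {action}")

    def always_yes(prompt: str) -> bool:
        """Stand-in for a real human decision-maker."""
        print(prompt)
        return True

    closed_loop_step(120.0, log_action)             # acts immediately, no human in the loop
    open_loop_step(120.0, log_action, always_yes)   # acts only after (simulated) human confirmation
```

The point of the open-loop variant is simply that the human remains the locus of decision-making: the system's output is advice, not action, so the user's role as decision-maker stays visible.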
Two other strategies are worth noting. First, we might reconsider the wisdom of using the computer system to begin with (Ladd 1989; Moor 1979; Kuflik 1999). In short, those overseeing the operation of the computer in question must accept their responsibility for the decision to use the computer system in the first place, and might perhaps exercise their responsibility by revoking that decision (for further discussion, see section 4).
Although it is taken for granted that computers must be able to match or exceed the accuracy, efficiency, and reliability of the humans whose tasks they now perform, there has so far been little effort to match or exceed the responsibility of these humans. Thus, a second strategy is to build responsible computers (Ronald and Sipper 2001). Toward that end, Thompson (1999) recognizes that training, certification, and assessment are used when preparing humans to take on responsibility, and advocates that similar measures be taken with respect to computer systems. (Whether computers could be responsible is discussed in section 3.)
2.3.3 Clearly articulate norms
Recognizing that an awareness of norms is central to our practice of responsibility, and that it plays a key role in the process of professionalization, Johnson and Mulvey (1995) argue that it is crucial to clearly articulate norms.
Establish norms regarding the relationship between designer (creator) and client. Johnson and Mulvey (1995) appeal to a fiduciary model to clarify the norms for the relationship between the designer / programmer and the client. The fiduciary model is appropriate when “one party [e.g., the designer] has special expertise and the other party [e.g., the client] seeks that expertise to improve his or her decision making” (61), and it encourages shared decision-making grounded in an ongoing, trusting relationship. Further, clarifying what each party can expect from the other is an important step toward professionalization.
Establish norms regarding collaboration with affected parties. Appealing to Niebuhr's responsibility ethic, Dillard and Yuthas (2001) point out that responsible behavior involves identifying and working with the affected members of the community, taking into account that community's history and future, and being prepared to account for one's actions. Rather than describe particular behaviors (i.e., those that result from a decision-making process) as being responsible, they focus instead on responsibility in the context of the decision-making process, noting in particular the need to consult with all affected members of the community.
Establish norms regarding the production and use of computer systems. Nissenbaum (1994; 1996) recommends that we articulate “guidelines for producing safer and more reliable computer systems” (1994, 79); Gotterbarn recommends that these standards go beyond mere “‘due care’ (i.e., avoidance of direct harm)” to include as well “a concern to maximize the positive effects for those affected by computing artifacts” (2001, 229).
Establish norms of behavior for the various roles involved in creating, implementing, and using computer systems. These recommendations are surveyed in section 2.4.
Establish norms regarding (possibly strict) liability. Despite her general focus on positive responsibility, Nissenbaum responds to the inevitability of bugs and ownership without liability by recommending that we “impose strict liability for defective consumer-oriented software, as well as for software whose impact on society and individuals is great” (1994, 79).
2.4 Clarification of Responsibilities
Two authors in particular—Murray (2001) and Cass (1996)—have attempted to articulate clear norms for the various roles involved in the creation of computer systems. Murray's norms focus on role responsibility, while Cass explicitly includes moral responsibilities as well. Both provide useful guidelines for improving practice through increased awareness of responsibilities, as well as providing standards by which to assess fault.
Murray is primarily focused on role responsibility rather than moral responsibility. Overall, Murray's concern is with demonstrating the interdependent nature of computing projects; toward this end, he argues that “The goal must be to develop an IT project management philosophy that establishes the idea that a project's success is everyone's success and a failure is everyone's failure” (2001, 29).
Responding to the challenges posed by introducing an expert system (ES) into contexts where it will be consulted by non-experts, Cass re-defines the various roles involved in the design, creation, and use of the expert system to explicitly include a discussion of when an agent filling that role can be held morally responsible for a bad outcome. In addition to providing guidance for assessing responsibility after the fact, this analysis also helps to “heighten people's awareness of their obligations and culpability in the ES-mediated problem-solving process” (1996, 69). Toward this end, Cass appeals to an Aristotelian account of responsibility in which “someone is morally responsible for an action if she is the causal agent of the action and acted knowingly and voluntarily” (70). Since all those involved in the design, creation, and use of the expert system satisfy the causal criterion, Cass's analysis largely focuses on identifying the knowledge that each person is expected to have and to share with others, and identifying any potentially coercive circumstances that might undermine the voluntary nature of their actions.
Some particularly relevant responsibilities include the following. First, the manager is responsible for creating an environment that fosters the “free flow of critical information about potential problems” (Cass 1996, 73). The expert is responsible for sharing her expertise (both knowledge-based and procedural), identifying the limits of that expertise, identifying the limits of the expert system as a non-human expert, anticipating “problems users may encounter in understanding and / or applying the domain expertise” (Cass 1996, 74), and compensating for those limits and problems or (if the problems are too great) refusing to participate in the project at all. In addition, since knowledge is relevant to assessing responsibility, the responsibility of the user will depend on whether the user is a domain-expert or a domain-novice. The domain-expert uses the expert system to seek a second opinion, or possibly as part of their own professional training; thus, the domain-expert user can “critically evaluate” the consultation as well as the resulting advice. Two factors can undermine a domain-expert's responsibility: first, coercive policies that demand that the advice of the ES be followed regardless of the user's assessment (thus undermining the user's ability to act voluntarily); second, lack of relevant environmental or contextual information (thus undermining the user's ability to act knowingly). In contrast, the domain-novice is ignorant about the domain of expertise; as such, the novice is responsible for compensating for this ignorance by using the help and explanation features of the expert system.
Cass reminds the reader that while each person involved in the design, creation, and use of the ES is responsible for having and sharing the relevant knowledge from their own domain of expertise, and for learning enough about other domains of expertise to reasonably ensure that cross-domain communication is accurate, even the best efforts cannot guarantee that all assumptions made in this process are correct; in short, we cannot guarantee that there is no involuntary ignorance.
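Read as a decision procedure, Cass's Aristotelian test is a conjunction of three conditions, with the knowledge and voluntariness conditions filled in differently for each role. The sketch below is our own illustrative restatement (the class and field names are hypothetical, not Cass's); it shows, for example, how a coercive use policy removes a domain-expert user's responsibility by undermining voluntariness:

```python
# An illustrative restatement of Cass's Aristotelian criteria; names are ours, not Cass's.
from dataclasses import dataclass

@dataclass
class Participant:
    role: str                      # e.g., "manager", "expert", "domain-expert user", "domain-novice user"
    causal_contribution: bool      # did the participant's actions contribute causally to the outcome?
    had_relevant_knowledge: bool   # e.g., domain expertise, contextual information, or use of help features
    acted_voluntarily: bool        # False under policies that compel following the ES advice

def morally_responsible(p: Participant) -> bool:
    """Responsible only if causal agency, knowledge, and voluntariness all hold."""
    return p.causal_contribution and p.had_relevant_knowledge and p.acted_voluntarily

# A domain-expert user whose employer mandates following the expert system's advice:
coerced_expert = Participant("domain-expert user", True, True, False)
print(morally_responsible(coerced_expert))  # False: voluntariness is undermined by the coercive policy
```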
3. Can Computers Be Morally Responsible?
As anticipated above in the discussion of barriers to responsibility (section 2.2.3), and as backed up by empirical research by Friedman and Millett (1997), and by Moon and Nass (1998), humans do attribute responsibility to computers.[3] Of course, that we may be inclined to blame computers does not entail that we are justified in so doing. Although computer systems may clearly be causally responsible for the injuries and deaths that resulted from their flawed operation, it is not so clear that they can be held morally responsible for these injuries or deaths. Indeed, many authors simply assume that computers are not candidates for attributions of moral responsibility; Ladd, for example, claims that “it is a bit of an anthropomorphic nonsense to ascribe moral responsibility to systems, whether they be technological or social” (1989, 218). As discussed above (section 2.3.2), the common recommendation is to redesign the computers to make their lack of responsibility more visible to the user (Friedman and Millett 1997; Friedman and Kahn 1992). Nevertheless, the possibility that computers can be (morally) responsible has been explored by several authors.[4]
Most notably, Dennett's account of intentionality in terms of an intentional stance licenses attributions of responsibility to computers. His approach is most clearly applied in “When HAL Kills, Who's to Blame?” (1997), in which he argues that IBM's Deep Blue is the best candidate for the role of responsible opponent of Kasparov. In addition, Dennett (1997) argues that intentionality in general[5]—and higher-order intentionality (e.g., beliefs about beliefs) in particular[6]—along with worldly knowledge and the ability to process ‘perceptual’ input are prerequisites for moral responsibility. These are characteristics that 2001: A Space Odyssey's HAL 9000 computer is portrayed as having, and which Dennett suggests that real-life robots such as Rodney Brooks' Cog might someday possess. Finally, by identifying several potentially exculpating factors—insanity, brainwashing (or, more appropriately, programming), and duress (including either self-defense or loyalty to a goal)—he implicitly suggests that HAL is a legitimate candidate for moral responsibility in general precisely because these sorts of excusing or exempting factors can seriously be applied to HAL.
Bechtel (1985) appeals to a modified version of Dennett's concept of intentional systems to support his claim that intentionality is possible for a computer and, therefore, that computers could be responsible for their (intentional) decisions.[7] None of this is to let humans off the hook; although humans may not bear responsibility for the computers' decisions themselves, Bechtel claims that they will still “bear responsibility for preparing these systems to take responsibility” (1985, 297).
Looking to the future when a computer could pass as a human in conversation (i.e., is certified as a Turing Chatterbox), Ronald and Sipper (2001) question whether Turing Chatterboxes could be held accountable for their actions; they suggest that our current practice of holding manufacturers responsible will eventually break down and “[t]he scenario [will] become less like a manufacturer producing a (guaranteed) product and more like that of parenting a child ‘caveat emptor’” (574). As such, we will need to attend not merely to whether computers can be held responsible, but also to how we can make them responsible.
In contrast, Friedman and Kahn (1992) argue that computers cannot be moral agents because (according to Searle (1980) and his Chinese room argument) they lack the intentionality that is necessary for responsibility (1992, 9).
Finally, concerned with the cases of properly functioning and well-designed computers which nevertheless make errors, Snapper (1985) argues that, despite worries about the control the programmer has over the program—and therefore over the computer's output—computers are capable of deliberate choice. However, appealing to an Aristotelian analysis of moral responsibility, he further argues that they are incapable of the appropriate mental attitude (e.g. regret) toward these decisions; their decisions cannot be understood as voluntary and they therefore cannot be morally responsible for their decisions.[8]
4. Can Humans Responsibly Give Decision-Making Power to Computers?
Kuflik (1999) identifies six senses of responsibility: (1) Causal Responsibility, (2) Functional Role Responsibility, (3) Moral Accountability, (4) an honorific sense of responsibility, (5) Role Responsibility, and (6) Oversight Responsibility. Making use of these six senses, he asks:
How much responsibility (in either sense (2) or sense (5)), could responsible (sense (3)) human beings responsibly (sense (4)) allocate to a computer, without at the same time reserving to themselves oversight-responsibility (sense (6))? (1999, 189)
Despite some minor differences, Bechtel (1985), Ladd (1989), Moor (1979), Nissenbaum (1994; 1996), and Kuflik (1999) all agree that responsible humans cannot responsibly allocate all responsibility to computers.
Ladd (1989) argues that computer control (of machines or systems) is sufficiently similar to human control for computers to be given control in some situations. However, computers are better suited to control than humans in certain situations (e.g., those demanding fast, accurate information processing over long, unbroken periods of time), and humans are better suited to control than computers in other situations (e.g., those prone to surprises and accidents); as such, Ladd argues that “human beings are better than computers where ‘judgment’ is required” (1989, 223). Ultimately, whether or not judgment is involved, Ladd concludes that in safety-critical situations (i.e., “where there is a possibility that a computer error might lead to disastrous consequences” (1989, 223)) humans must preserve the ability to intervene and take back control from the computer.
Moor (1979) argues that although computers are able to make decisions, they should not necessarily be given the power to make decisions. Agreeing with Ladd that neither humans nor computers are automatically the best choice for decision-making control, Moor argues that, “[w]ithin the context of our basic goals and values (and the priorities among them) we must empirically determine [on a case by case basis] not only the competence of the computer decision maker but the consequences of computer decision making as well” (1979, 129). He nevertheless insists that there is one area of decision-making that should be denied computers: “Since we want computers to work for our ends” we should deny them the power to make decisions regarding “our basic goals and values (and priorities among them)” (1979, 129). In addition, Moor anticipates that if we are irresponsible in our use of computer decision making—i.e., if we do not identify the nature of the computer's competency, demonstrate that competence, and get clear about why using the computer to make such decisions furthers our basic goals and values—such use will have the unacceptable consequence of eroding human responsibility and moral agency.
Kuflik (1999) also argues that, since computers are fallible and, therefore, capable of gross deviations, “humans should not altogether abdicate their oversight responsibility” (194); however, distinguishing between the ability to override the computer on a case-by-case basis and the ability to override it through periodic reviews, Kuflik appeals to our familiar experience interacting with experts (e.g., doctor-patient relationships) to provide intuitive guidance about how to balance these two types of oversight.
Reaching similar conclusions by appeal to notably different concerns, Thompson (2001) argues that we should not give computer systems responsibility for judging us (e.g., as computerized judges) unless the computer were capable of (self-consciously) recognizing its shared participation in humanity with us (if such a thing were even possible); otherwise the computer would lack compassion and could not take responsibility for its decisions.
Bibliography
- Allen, C., G. Varner, and J. Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental and Theoretical Artificial Intelligence, 12: 251-261.
- Bechtel, W., 1985, “Attributing Responsibility to Computer Systems”, Metaphilosophy, 16.4: 296-306.
- Bynum, T.W., 1985, “Artificial Intelligence, Biology, and Intentional States”, Metaphilosophy, 16.4: 355-377.
- Cass, K., 1996, “Expert Systems as General-Use Advisory Tools: An Examination of Moral Responsibility”, Business and Professional Ethics Journal, 15.4: 61-85.
- Dejoie, R., G. Fowler, D. Paradice, 1991, Ethical Issues in Information Systems, Boston, MA: Boyd and Fraser.
- Dennett, D. C., 1973, “Mechanism and Responsibility”, in Essays on Freedom of Action, T. Honderich (ed), Boston: Routledge & Kegan Paul.
- -----, 1984, Elbow Room: The Varieties of Free Will Worth Wanting, Cambridge, MA: Bradford Books-MIT Press.
- -----, 1995, The Intentional Stance. Cambridge, MA: Bradford Books-MIT Press.
- -----, 1997, “When HAL Kills, Who's to Blame? Computer Ethics”, in HAL's Legacy: 2001's Computer as Dream and Reality, D. G. Stork (ed), Cambridge, MA: MIT Press.
- Dillard, J.F., and K. Yuthas, 2001, “A responsibility ethics for audit expert systems”, Journal of Business Ethics, 30.4: 337-359.
- Feinberg, J., 1968, “Collective Responsibility”, The Journal of Philosophy, 65: 674-688. Revised and Rpt. in Doing and Deserving: Essays in the Theory of Responsibility, J. Feinberg (ed.), Princeton: Princeton University Press, 1970.
- -----, 1970, “Sua Culpa”, in Doing and Deserving: Essays in the Theory of Responsibility, J. Feinberg (ed.), Princeton: Princeton University Press, Rpt. in Ethical Issues in the Use of Computers, D. G. Johnson and J. W. Snapper (eds.), Belmont, CA: Wadsworth Publishing, 1985.
- Floridi, L., and J. W. Sanders, forthcoming, “On the Morality of Artificial Agents”, Ethics of Virtualities: Essays on the Limits of the Bio-Power Technologies, A. Marturance and L. Introna (eds.), Culture Machine. London: Athlone Press.
- Forester T., and P. Morrison, 1994, Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, 2nd Edition, Cambridge MA: MIT Press.
- Friedman, B., 1990, “Moral Responsibility and Computer Technology”, Eric Document Reproduction Services (EDRS).
- -----(ed.), 1997, Human Values and the Design of Computer Technology, Stanford: CSLI Publications; NY: Cambridge University Press.
- -----, and P. H. Kahn, Jr., 1992, “Human Agency And Responsible Computing - Implications For Computer-System Design”, Journal Of Systems And Software, 17: 7-14.
- -----, and L. I. Millett, 1997, “Reasoning About Computers as Moral Agents: A Research Note”, in Human Values and the Design of Computer Technology, B. Friedman (ed.), Stanford: CSLI Publications; NY: Cambridge University Press.
- Gips, J., 1995, “Towards the Ethical Robot”, in Android Epistemology, K. Ford, C. Glymour, and P. Hayes (eds.), Menlo Park, CA: AAAI Press / The MIT Press.
- Gotterbarn, D., 1995, “The Moral Responsibility of Software Developers: Three Levels of Professional Software Engineering”, Journal of Information Ethics, 4.1: 54-64.
- -----, 2001, “Informatics and Professional Responsibility”, Science and Engineering Ethics, 7.2: 221-230.
- Hart, H. L. A., 1985, “Punishment and Responsibility”, Rpt. in Ethical Issues in the Use of Computers, D. G. Johnson and J. W. Snapper (eds.), Belmont, CA: Wadsworth Publishing.
- Johnson, D. G., 2001, Computer Ethics, 3rd Edition, Upper Saddle River, NJ: Prentice Hall.
- -----, and J. M. Mulvey, 1995, “Accountability and Computer Decision Systems”, Communications of the ACM, 38.12: 58-64.
- -----, and J. W. Snapper, 1985, Ethical Issues in the Use of Computers, Belmont, CA: Wadsworth Publishing.
- Kuflik, A., 1999, “Computers in Control: Rational Transfer of Authority or Irresponsible Abdication of Autonomy?”, Ethics and Information Technology, 1: 173-184.
- Ladd, J., 1989, “Computers and Moral Responsibility”, in The Information Web: Ethical and Social Implications of Computer Networking, C. Gould (ed.), Boulder: Westview Press.
- Leveson, N. G., and C. S. Turner, 1993, “An Investigation of the Therac-25 Accidents”, Computer, 26.7: 18-41.
- Moon, Y., and C. Nass, 1996, “How ‘Real’ are Computer Personalities? Psychological Responses to Personality Types in Human-Computer Interaction”, Communication Research, 23: 651-674.
- -----, 1998, “Are Computers Scapegoats? Attributions of Responsibility in Human-Computer Interaction”, International Journal Of Human-Computer Studies, 49.1: 79-94.
- Moor, J., 1979, “Are There Decisions Computers Should Never Make?” Nature and System 1: 217-229. Rpt. in Ethical Issues in the Use of Computers, D. G. Johnson and J. W. Snapper (eds.), Belmont, CA: Wadsworth Publishing, 1985.
- -----, 1995, “Is Ethics Computable?” Metaphilosophy, 26: 1-21.
- Murray, J.P., 2001, “Recognizing the Responsibility of a Failed Information Technology Project as a Shared Failure”, Information Systems Management, 18 (2): 25-29.
- Nass, C. I., Y. Moon, J. Morkes, E.-Y. Kim, and B. J. Fogg, 1997, “Computers are Social Actors: A Review of Current Research”, in Human Values and the Design of Computer Technology, B. Friedman (ed.), Stanford: CSLI Publications; NY: Cambridge University Press.
- Nissenbaum, H., 1994, “Computing and Accountability”, Communications of the ACM, 37.1: 73-80.
- -----, 1996, “Accountability in a Computerized Society”, Science and Engineering Ethics, 2: 25-42. Rpt. in Human Values and the Design of Computer Technology, B. Friedman (ed.), Stanford: CSLI Publications; NY: Cambridge University Press, 1997.
- Reeves, B., and C. I. Nass, 1996, The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places. Stanford: CSLI Publications; NY: Cambridge University Press.
- Ronald, E. M. A., and M. Sipper, 2001, “Intelligence is not enough: On the socialization of talking machines”, Minds and Machines, 11.4: 567-576.
- Searle, J., 1980, “Minds, Brains and Programs”, The Behavioral and Brain Sciences, 3: 417-424.
- Snapper, J. W., 1985, “Responsibility for Computer-Based Errors”, Metaphilosophy, 16: 289-295.
- -----, 1998, “Responsibility for Computer-Based Decisions in Health Care”, in Ethics, Computing, and Medicine, K. W. Goodman (ed.), NY: Cambridge University Press.
- Szolovits, P., 1996, “Sources of Error and Accountability in Computer Systems: Comments on ‘Accountability in a Computerized Society’”, Science and Engineering Ethics, 2.1: 43-46.
- Thompson, H. S., 1999, “Computational Systems, Responsibility, and Moral Sensibility”, Technology in Society, 21.4: 409-415.
- Versenyi, L., 1974, “Can Robots Be Moral?”, Ethics, 84: 248-259.
- Wallace, R. J., 1994, Responsibility and the Moral Sentiments, Cambridge, MA: Harvard University Press.
Other Internet Resources
Video Presentations
- Computing and Philosophy at Oregon State University. Video archives of presentations at the 2001, 2002, and 2003 CAP@OSU conferences.
- CEPE2000: Computer Ethics: Philosophical Enquiry. Presentations from the CEPE2000 Conference (Dartmouth College, 14-16 July 2000) available in RealVideo.
Bibliographies
- The Tavani Bibliography of Computing, Ethics, and Social Responsibility. Edited by Herman Tavani, Rivier College. This is an extensive and useful bibliography, and is quickly becoming (if it has not already become) the standard bibliographic reference in the field.
Journals On-line
- ETHICOMP Journal. Publishes papers presented at the ETHICOMP conference series, an international series of interest to those working in computer ethics.
- Ethics and Information Technology. Published by Kluwer; peer-reviewed journal spanning a wide variety of issues and approaches to exploring the intersection of moral philosophy and information / communications technology.
- Metaphilosophy. Published by Blackwell; peer-reviewed journal that has been publishing articles in the area of computer ethics (including computing and responsibility) since the 1980s.
- Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science. Published by Kluwer; peer-reviewed journal that occasionally includes articles related to the ethics / moral responsibility of artificially intelligent computer systems.
Centers
- Research Center on Computing and Society (at Southern Connecticut State University)
- Centre for Computing and Social Responsibility (at De Montfort University)
- Computer Professionals for Social Responsibility (CPSR)
Organizations
- INSEIT: International Society for Ethics and Information Technology
- IACAP: International Association for Computing and Philosophy. Concerned with computing and philosophy broadly construed, including the use of computers to teach philosophy, the use of computers to model philosophical theory, as well as philosophical concerns raised by computing.
Related Entries
artificial intelligence | Chinese room argument | computer ethics | information technology and moral values | intentionality | responsibility | science and technology studies | Turing test