Phenomenological Approaches to Ethics and Information Technology

First published Sat Feb 19, 2005; substantive revision Thu Jun 29, 2017

Information and communication technology (simply referred to as ‘information technology’ here) is changing many aspects of human endeavour and existence. For most this is beyond dispute. What are contested are the social and ethical implications of these changes. Possible sources of these disputes are the multiple ways in which one can conceptualize and interpret the information technology/society interrelationship. Each of these ways of conceptualizing and interpreting it enables one to see the information technology/society relationship differently and therefore construe its social and ethical implications in a different manner. At the center of this technology/society interrelationship we find many complex questions about the nature of the human, the technical, agency, autonomy, freedom and much more. This is indeed a vast intellectual landscape, which obviously cannot be explored here in its fullness. This entry is about just one particular perspective on this landscape. It is primarily concerned with the phenomenological approach to interpreting information technology and its social and ethical implications. It should be noted from the start that there is no unified phenomenological tradition or approach, to information technology in particular or to other phenomena more generally. The phenomenological tradition consists of many different approaches that share certain characteristics (certain family resemblances, one might say) but not all. We may however suggest, with Don Ihde (2003, 133), that they all accept that “phenomenology investigates the conditions of what makes things appear as such [as that which we take them to be].” Differently stated, phenomenology suggests that there is a co-constitutive relationship between us and the phenomena we encounter in our engagement with the world. In this sense phenomenologists would suggest that to understand the technology/society relationship we need to reveal how they co-constitute each other—i.e. draw on each other for their ongoing meaning and sense. We will elaborate more precisely what this means in section 2 below. However, in order to understand the distinctiveness of the phenomenological approach, other possible ways of interpreting this technology/society relationship will also be outlined briefly below.

It can be said that information technology has become, in a very real sense, ubiquitous. Most everyday technologies such as elevators, automobiles, microwaves, watches, and so forth depend on microprocessors for their ongoing operation. Most organizations and institutions have become reliant on their information technology infrastructure to a lesser or greater degree. Indeed information technology is seen by many as a cost-efficient way to solve a multitude of problems facing our complex contemporary society. One can almost say that information technology has become construed as the default technology for solving a whole raft of technical and social problems such as health provision, security, governance, etc. One could also argue that it has become synonymous with society’s view of modernization and progress. For most it seems obvious that information technology has made it possible for humans to continue to construct increasingly complex systems of coordination and social ordering—systems without which contemporary society would not be able to exist in its present form. To say the least, we, as contemporary human beings, have our manner of being made possible through a rather comprehensive entanglement with information and communication technology. Indeed, the economic, organizational and social benefits of information technology are not widely disputed. The dispute is more often about the way information technology is changing or transforming the social domain, and in particular, the ethical domain. This dispute is largely centered around different ways of conceptualizing and interpreting the nature of our entanglement with information technology. This is not merely an academic debate about different and competing theoretical ‘models.’ Rather, these different ways of conceptualizing our entanglement are central to our understanding of how we go about managing our increasingly entangled relationship with information technology.

1. Views on the Nature of Information Technology

It seems obvious that a world with information technology is somehow different from a world without information technology. But what is the difference? Is it a difference of degree (faster, closer, clearer, etc.) or is it a difference of kind? Does technology shape society or society shape technology, or both shape each other? What is the nature of this shaping? Is it in practices, in ways of thinking, or is it more fundamental? The answers to these questions will obviously influence the judgements we make about the social and ethical implications of information technology when we consider the policy and practical concerns of using information technology in a particular domain (such as commerce, education or government).

The answers to these questions are also grounded, to a large extent, in one’s particular, implicit or explicit, ontology of information technology itself: what is the nature—the way of being—of information technology as such? Obviously many different ontological positions are possible and have emerged. Nevertheless, it may be useful for the purposes of this entry to discern at least two contrasting and prevailing views against which the contribution of phenomenology can be rendered visible. The purpose of this section is solely to make the distinctive contribution of phenomenology visible, rather than to present all possible approaches comprehensively. For example, obvious omissions from the approaches suggested here are the Marxist or Critical Theory perspective as reflected in the work of Andrew Feenberg (1991, 1999), and a poststructuralist perspective as reflected in the work of Jean Baudrillard (1983) and Paul Virilio (1994). This said, the approaches that are included here are reasonably widely held and as such function as a helpful contrast to the contribution of phenomenology.

1.1 Information Technology as an Artifact or Tool

The most common view of information technology is that it is an artifact or tool simply available, to use or not to use, in order for humans to achieve their objectives and outcomes. Some of these tools might be very useful and others not. When users (and the term ‘user’ here is important) take up a tool or artifact (word processor, mobile phone, etc.) it will tend to have an impact on the way they do things. For example, if I write with a word processor I would tend to have different writing practices than I would with pen and paper—for example, with the ‘cutting and pasting’ function of a word processor I can simply type out my ideas and then reorder them later without being preoccupied with the composition of the whole from the start. Thus, tools function as extensions of human capabilities, allowing us to achieve what we cannot achieve with the body alone—and one might add that some tools are more ‘extensive’ than others, as our pen vs. word processor example suggests. According to this view, we need to understand the impact that the use of the tools (information technology in this case) has on human society as they are taken up and used in everyday practices. For example, how will communication with mobile phones change our social interaction and social relationships? In asking such a question this view does not primarily concern itself with the development of the technology—why and how it came about in the first instance. It mostly assumes that the design of a particular technology is rational and objective; one might call it an engineering or scientific solution to a particular problem. It also mostly assumes that the particular technology—mobile phones in this case—operates in a more or less uniform manner in different social practices and settings. In other words, it assumes that a particular technology has certain determinate effects on, or in, the context of its use. This way of conceptualizing information technology leads to questions such as “what is the impact of the mobile phone on communication patterns” or “what is the impact of mobile phones on privacy and privacy expectations”. This view of technology is often criticized for a greater or lesser degree of technological determinism. Technological determinism is the view that technology more or less causes certain ways of doing or ways of organizing to come about. For example, a technological determinist may argue that the Internet’s open and non-hierarchical architecture can more or less cause a society that uses it to become more open and less hierarchical. The work of Postman (1993) is an example of this type of critical evaluation of the impact of technology on society.

1.2 Information Technology as Socially Constructed Artifacts and Actors

Many scholars argue that the tool and impact view of information technology outlined above does not give an adequate account of the relationship between information technology and society (Bijker, Pinch, and Hughes 1987, Bijker 1995, Law 1991, Latour 1991). Firstly, it does not take into account that the technology does not simply appear but is the outcome of complex and socially situated development and design practices. In this development and design process many alternative options become excluded in favour of the technology that is now available—obviously with important social and normative implications. In other words there are many cultural, political and economic forces that shape the particular options suggested as well as the way the selected options become designed and implemented (Bijker, Pinch, and Hughes 1987). It is not only technology that ‘impacts’ on society; technology itself is already the outcome of complex and subtle social processes and practices—in other words it is socially constructed in a very direct and significant manner. In short: our current technologies were not inevitable; things could have been substantially otherwise. Moreover, these scholars argue that when we look at the actual uses of particular technologies we discover that users use them in many diverse and often unexpected ways—leading to many and diverse unintended consequences. Both in its design and in its actual use there is an ongoing reciprocal relationship in which society and technology co-construct each other; they act through and upon each other. Technologies are not simply passive tools, waiting for us to use them. They circumscribe our possibilities and enact significant ‘scripts’ (Latour 1991) that shape us as much as we shape them. As Latour (2005, 107) suggests, technologies (as actors) make us do things but “not by transporting a force that would remain the same throughout as some sort of faithful intermediary, but by generating transformations manifested in many unexpected events triggered in the other mediators that follow them along the line.” The degree to which the technology/society distinction is useful at all (ontologically or analytically) varies between the different constructivist authors—for example, Brey (1997) identifies three different strands of constructivist approaches. Those who support this view claim that it is very difficult to make general statements about the ‘impact’ of a technology in general terms. One can, at most, speak of some general trends for which many exceptions will invariably exist. For the proponents of this constructivist view it is important to understand, through detailed descriptive accounts, the particular ways in which technologies emerge and become embedded in particular social practices. Examples of such studies can be found in the work of Bijker (1995), Law (1991) and Latour (1991).

1.3 Information Technology as an Ongoing Horizon of Meaning and Action

For the phenomenologist both the tool and impact view of technology and the constructivist view of the technology/society relationship are entirely valid in their own terms but not adequate (Heidegger 1977, Borgmann 1984, Winograd and Flores 1987, Ihde 1990, Dreyfus 1992, 2001, Verbeek 2005). They argue that these accounts of technology, and the technology/society relationship, posit technology and society as if speaking about the one does not immediately and already draw upon the other for its ongoing sense or meaning. For the phenomenologist society and technology co-constitute each other from the very start; they are each other’s ongoing condition or possibility for being what they are. For them technology is not just the artifact, ‘out there’ as it were. Rather, the artifact already emerges from a prior ‘technological’ attitude towards the world (Heidegger 1977), or is indeed already the constitutive possibility for the human to be human as such (Stiegler 1998, 2009). For example, as the already technologically oriented human beings that we are, we will tend to conceive of communication as a problem requiring a technological solution. Thus, technology is already the outcome of a technological way of looking at, and relating ourselves to, the world. Once in place, technology allows the world to ‘show up’ in particular ways (Introna and Ilharco 2003). For example, you are a different person to me with a mobile phone than without one. With a mobile phone you become disclosed, or show up, as ‘contactable’, ‘within reach’ as it were. It is this way of thinking about information technology, as co-original with the human and as a horizon of meaning and action, that we want to develop further before considering how these various ways of conceptualizing technology shape our views on the social and ethical implications of such technologies. Before proceeding it should be noted that the more recent work of Bruno Latour (2002, 2005) suggests that he has taken up many of the insights of phenomenology. Thus, the later Latour (2002, 2005) can be seen as a bridging figure between the constructivist tradition and the phenomenological tradition (for more detailed arguments in this regard refer to Graham Harman’s (2009) book on Latour’s metaphysics).

2. Phenomenological Approaches to Technology

We started this discussion by suggesting that phenomenology investigates the conditions of what makes things appear as such, or, that phenomenology suggests that there is a co-constitutive relationship between us and the phenomena we encounter in our engagement with the world. Before we move on it might be helpful to clarify this point with a simple everyday example. Let us take the human experience of listening to music and consider it phenomenologically. From the perspective of physics and physiology, music is constituted by a flux of waves of particular frequencies to which the inner ear may be sensitive. Indeed, once so analyzed, it is possible to create a technological device such as a tape recorder that is sensitive to these same sounds, and can even replay them on command. Human beings, however, when they hear sounds in everyday life never take them simply as a stream of sounds; rather, they find themselves already listening to something particular—a cry for help, an automobile braking, construction noise, or a piece of music. Indeed it would take a very strange sort of attitude to hear sounds and take them as a flux of waves of particular frequencies. Listening is different from registering or recording. To listen is to already take sounds as this or that. In listening, the taking of sound as music implies an already existing sense of what music is, something that makes it possible for us to take these sounds as music rather than noise. Furthermore, in listening to music, this listening is informed by an ongoing sense (or unity) of movement, rhythm, tone, scale, style, and so forth. This ongoing active unity provides an active and ongoing framework (or necessary background) that enables me, in the experience of listening (right now), to simultaneously ‘retain’ the sounds I no longer hear (the past), and in anticipation to ‘fill in’ the sounds I am not yet hearing but already anticipate (the future). As a phenomenological being I find myself listening to music, not merely recording sounds after the manner of a technological device. For phenomenologists the relevant question is: What are the transcendental conditions that make it possible for humans to listen to music, as music, rather than merely record sounds?

What is it that enables us to encounter music in its fullness even though we are always given, at any particular point in time, only some limited aspect of such a phenomenon (the current note I am hearing)? The answer of phenomenology is that it is the transcendental horizon or conditions that make our encounter with the world possible. One could say that the transcendental is the background, or horizon, that makes the meaningful experience of the foreground possible. Yet insofar as such a formulation suggests a background that is somehow separate and ‘behind’ that which appears in the foreground, it would be incorrect. The transcendental horizon is always and immediately already present in the very appearing as such—this is exactly what makes a horizon ‘disappear’ or withdraw from our focal awareness. It is so evident that it simply does not come up as an issue. It is this seemingly ‘forgotten’ constitutive horizon that is the focus of phenomenology. All phenomenological approaches have as their focus a ‘return’ to this vital co-constitutive interplay between the ‘foreground’ and the ‘background’. Thus, all phenomenological studies share at least the underlying view that technology and society co-constitute each other by being each other’s reciprocal and ongoing condition or possibility for being what they are. As such they continually draw on each other for their ongoing sense or meaning. For the purposes of this entry it might be best to indicate some of the existing phenomenological studies of technology in four different but intimately connected strands (this is not a comprehensive list, it is rather indicative of common themes or approaches):

  • Phenomenology as a fundamental critique of the technological attitude as such (Martin Heidegger 1977);
  • Original technicity and the human being (Bernard Stiegler 1998, 2009);
  • The technological attitude as manifested in our contemporary relationship with particular technologies (Hubert Dreyfus 1992 and Albert Borgmann 1984);
  • A phenomenology of the human/technology relationship (Don Ihde 1990).

2.1 A Fundamental Critique of the Technological Attitude

Probably the most famous phenomenological analysis of technology—or rather of the technological attitude that gives rise to artifacts—is Martin Heidegger’s (1977) essay The Question Concerning Technology. This essay is important as a reference point because it illustrates forcefully the most important and distinctive claim of phenomenology vis-à-vis technology. Technology is not merely an artifact or our relationship with this or that artifact; rather, the artifact—and our relationship with it—is already an outcome of a particular ‘technological’ way of seeing and conducting ourselves in and towards the world. Heidegger (1977) famously claimed that “the essence of technology is nothing technological” (p. 4).

For Heidegger the essence of technology is the way of being of modern humans—a way of conducting themselves towards the world—that sees the world as something to be ordered and shaped in line with projects, intentions and desires—a ‘will to power’ that manifests itself as a ‘will to technology.’ It is in this technological mood that problems show up as already requiring technical solutions. The term ‘mood’ here is used to refer to a collectively held ‘sense’ (or way of grasping) of an event or a situation; we often use it when we refer to the ‘mood of the meeting’ or the ‘mood of our times.’ He calls this technological mood enframing (Gestell in German). With this term he wants to denote that the modern mood takes or approaches the world as always and already framed, i.e., enframed. Heidegger claims that for us, living in the technological age, the world is already framed as a resource available to us, to be made and shaped for our ongoing possibilities to express our particular projects, to be whatever we are, as business people, engineers, consultants, academics, teenagers, etc. In short: technology makes sense because we already live in the technological age or mood where the world (and we as beings that are never ‘out’ of the world) is already framed in this way—as available resources for the ongoing challenging and ordering of the world by us. For him the essence of technology is not the particular artifacts but the technological mood that makes this or that particular artifact show up as meaningful and necessary.

Heidegger claims that there were other times in human history, a pre-modern time, where humans did not orient themselves towards the world in a technological way—simply as resources for our purposes. He suggests that in ancient Greek culture humans’ relationship with the world was one of ‘letting be’; a world approached in an attitude or mood of reciprocal ‘care.’ This must not be taken as some sort of romantic vision of the past in which everybody in ancient Greece ‘cared’ for the world in contrast to the modern technological mood where everybody takes the world as something to be challenged and ordered. It is rather saying that the mood, the horizon, in which we conduct ourselves tends to dispose us in particular ways. Obviously there were also artifacts in ancient times. However, according to Heidegger this ‘pre-technological’ age (or mood) is one where humans’ relation with the world and artifacts, their way of being disposed, was poetic and aesthetic rather than technological (enframing). The act of making and shaping the artifact was directed by a different attitude or mood. Since the world was not taken as ‘available for ordering’ a certain reciprocal care and intimacy was possible (and cultivated) in which the world was ‘let to be’—to let the world show itself in its own terms. He claims that the pre-modern craftsman makes “the old wooden bridge” that “lets the river run its course.” However, in the horizon of the technological mood this same river is disclosed as a possibility for a “hydroelectric plant” that turns the river into a reservoir, a resource on standby for our projects, challenged and framed as available.

There are many who disagree with Heidegger’s account of the modern technological attitude as the ‘enframing’ of the world (Feenberg 1999, Pitt 2000). For example Andrew Feenberg (1999) argues that Heidegger’s account of modern technology is not borne out in contemporary everyday encounters with technology. When we look more carefully we see many individual situations in which the technological attitude does not hold sway, where people have intimate relationships with artifacts that cannot merely be dismissed as instances of enframing. Others, such as Hubert Dreyfus and Albert Borgmann, have extended Heidegger’s work into more specific critiques of particular technologies and particular contemporary ways of being.

2.2 Original Technicity and the Human Being

Bernard Stiegler is mostly known for his multi-volume work Technics and Time (La technique et le temps). In this work he argues that the human and the technical are co-original—in other words, the technical did not emerge out of the (already constituted) human, nor the human out of the (already constituted) technical; rather, these two ontological domains co-constituted each other from the very start. To make his argument he draws on the work of Heidegger and Derrida but also, very significantly, on the work of the paleoanthropologist André Leroi-Gourhan—in particular his widely recognized Gesture and Speech (originally published in French in 1964). In this work Leroi-Gourhan argues that there is a fundamental continuity from the biological to the sociological and that this continuity is realized through the mediation of technology. He sees the use of tools (made possible by the freeing of the hands in the upright position) as a process of ‘exteriorization’ in which the process of evolution is transferred from the zoological domain to the technical domain—or, as Stiegler suggests, “the continuation of life by means other than life” (1998, 50). Thus, unlike other mammals, humans remain generalists who specialize—as and when required—by exteriorizing these specialized capabilities in the technical domain outside of the human body. This exteriorization is also simultaneously reflected (or mirrored) back as a process of interiorization in which the technologies become embodied by the humans who use them. Stiegler uses the notion of ‘epiphylogenesis’ to describe this extra-genetic co-evolution of the human and the technical. Through epiphylogenesis culture becomes possible as ‘the inorganic organization of memory’ (1998, 174, my emphasis). Thus, for him “[t]he human invents himself in the technical by inventing the tool—by becoming exteriorized technologically” (1998, 141). Moreover, Stiegler suggests that without these technically inscribed memory systems we would not be able to exist in time. Without this material technicity we humans would be unable to experience the past and would have nothing to ‘select’ from in order to invent the future. Thus, without technicity (what he calls “organized inorganic matter”) we would exist in a perpetual present without any hope of transcending it; as such, culture and society would not be possible. This point is also made by Latour (2005) in his discussion of technologies as mediators. Thus, for Stiegler the constitutive transcendental horizon of the human is technicity, from which emerge the conditions of possibility of time, society and culture.

If the claims made by Stiegler are correct then this fundamental unity between the human and the technical (as expressed through the idea of epiphylogenesis) has important implications for how we think about our relationship with technology. The technical is not just something ‘out there’; it is also immediately something ‘in here’, at the very source of our humanity. This means that when we design new technological systems we are also designing the sort of humans that we are (or will become). We are adding to the archive that will afford future generations their possibilities for being and simultaneously framing the way our human past will be recalled and remembered. This co-evolution of the human and the technical also means that we cannot escape the technical; our being is always already technical. It is not something alien that we can get rid of, or choose to be without. Within this perspective technology becomes the central question in which many of the most fundamental questions of what it means to be human become opened up in new and unexpected ways (this is why Stiegler argues that technicity is the central philosophical question, one that has been forgotten ever since the Greeks).

The radical nature of Stiegler’s project means that there are many who disagree with him. For example, Vaccari (2009) argues that the fundamental genetic concepts of ‘inscription’ and ‘transmission’ at the centre of Stiegler’s argument have been severely criticized, especially in the biological sciences where they originate. He also suggests that there is a certain determinism at the heart of Stiegler’s programme, one that has been strongly contested by constructivists such as Latour.

2.3 The Technological Attitude in Contemporary Society and Technologies

Phenomenology has not only been used to address questions of philosophical anthropology, as one might suggest is the case with Heidegger and Stiegler. Hubert Dreyfus (1992) has used it very effectively to provide a devastating critique of the classical program in artificial intelligence (AI) research. In his critique Dreyfus (1992) argues that the way skill development has been understood in the past is wrong. He argues—using the earlier work of Heidegger in Being and Time—that the classical conception of skill development, going back as far as Plato, assumes that we start with particular cases and then abstract from these to discover and internalize more and more sophisticated and general rules. Indeed, he argues, this is the model that the early artificial intelligence community uncritically adopted. In opposition to this view he argues, with Heidegger, that what we observe when we learn a new skill in everyday practice is in fact the opposite. We most often start with explicit rules or preformulated approaches and then move to a multiplicity of particular cases as we become experts. His argument draws directly on Heidegger’s account in Being and Time of humans as beings that are always already situated in-the-world. As humans ‘in-the-world’ we are already experts at going about everyday life, at dealing with the subtleties of every particular situation—that is why everyday life seems so obvious. Thus, the intricate expertise of everyday activity is forgotten and taken for granted by AI as an assumed starting point.

Dreyfus proceeds to give an account of five stages of becoming an expert as a way to critique the programme of AI. In his account a novice acts according to conscious and context-free rules and generally lacks a sense of the overall task and of situational elements. The advanced beginner adds, through experience, situational aspects to the context-free rules and so gains access to a more sophisticated understanding of the situation. The relationship between the situational aspects and the rules is learned through carefully chosen examples, as this relationship is difficult to formalize. The competent person will have learnt to recognize a multiplicity of context-free rules and situational aspects. However, this may lead to being overwhelmed, as it becomes difficult to know what to include or exclude. The competent individual therefore learns to take a particular perspective on the situation, thereby reducing its complexity. Such ‘taking of a stand’, however, involves a certain level of risk that requires commitment and personal involvement. For the proficient performer most tasks are performed intuitively. As an involved and situated actor the relevant situational aspects show up as part of the ongoing activity and need not be formalized. Nevertheless, a pause may still be required to think analytically about a relevant response. For the expert, relevant situational aspects as well as appropriate actions emerge as an implicit part of the ongoing activity within which the expert is totally absorbed, involved, and committed. The task is performed intuitively almost all the time. In the ongoing activity of the expert thousands of special cases are discriminated and dealt with appropriately.

With this phenomenological account of skill development in hand it is easy to see the problem for the development of AI programmes. Computing machines need some form of formal rules (a program) to operate. Any attempt to move from the formal to the particular, as described by Dreyfus above, will be limited by the ability of the programmer to formulate rules for such a shift. Thus, what the computer lacks (and we as human beings have) is an already there familiarity with the world that it can draw upon as the transcendental horizon of meaning to discern the relevant from the irrelevant in ongoing activity—i.e., the computer is not already a skilled actor (a being-in-the-world in Heidegger’s terms). The critique of Dreyfus was one of the major factors that pushed AI researchers into new ways of thinking about AI—in particular the development of the embodied cognition programme of the Massachusetts Institute of Technology (MIT) AI Lab.
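To make the structure of this critique concrete, consider the following deliberately simple sketch (a hypothetical illustration, not drawn from Dreyfus or from any actual AI system): a rule-based ‘driver’ encoding a few explicit, context-free rules of the kind a novice might be given. Every situational nuance that the expert handles intuitively must, for such a system, be anticipated in advance and written down as a further rule.

    # A toy, hypothetical rule-based "driver": a handful of explicit,
    # context-free rules of the sort a novice might be given. The rules and
    # situations below are invented purely for illustration.
    RULES = [
        (lambda s: s.get("light") == "red", "stop"),
        (lambda s: s.get("speed", 0) > s.get("limit", 50), "slow down"),
        (lambda s: s.get("pedestrian_ahead", False), "stop"),
    ]

    def act(situation):
        """Return the action of the first rule that matches, if any."""
        for condition, action in RULES:
            if condition(situation):
                return action
        return "no rule applies"  # no background familiarity to fall back on

    print(act({"light": "red"}))            # -> stop (anticipated case)
    print(act({"speed": 70, "limit": 50}))  # -> slow down (anticipated case)
    # A situation an experienced driver copes with effortlessly, but which the
    # rule set mishandles: an officer waving traffic through a red light.
    print(act({"light": "red", "officer_waving_through": True}))  # -> stop (wrongly)

One can of course always add a further rule for the waving officer, but each such addition presupposes that the programmer has already discriminated that situation as relevant, and it is precisely this background discrimination which, on the phenomenological account, is not itself rule-governed.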

What Dreyfus highlighted in his critique of AI was the fact that technology (AI algorithms) does not make sense by itself. It is the assumed, and forgotten, horizon of everyday practice that makes technological devices and solutions show up as meaningful. If we are to understand technology we need to ‘return’ to the horizon of meaning that made it show up as the artifacts we need, want and desire. We also need to consider how these technologies reveal (or disclose) us. For example, the microscope can reveal us as ‘inquisitive’, the gun as ‘aggressive’, the mobile phone as ‘communicative’, and so forth. The ongoing co-constitution of society and technology, phenomenology’s insight, can help us to understand and make sense of complex information technology, such as AI, but also of more mundane technologies such as word processors (Heim 1999).

In thinking about our relationship with technology in contemporary life, Albert Borgmann (1984) takes up the question of the possibility of a ‘free’ relation with modern technology in which everything is not already ‘framed’ (in Heidegger’s sense) as resources for our projects. He agrees with Heidegger’s analysis that modern technology is a phenomenon that tends to ‘frame’ our relation with things, and ultimately with ourselves and others, in a one-dimensional manner—the world as simply available resources for our projects. He argues that modern technology frames the world for us as ‘devices’. By this he means that modern technologies as devices hide the full referentiality (or contextuality) of the world—the worldhood of the world—upon which they depend for their ongoing functioning. Differently stated, they do not disclose the multiplicity of necessary conditions for them to be what they are. In fact, just the opposite: they hide the effort necessary for them to be available for use. A thermostat on the wall that we simply set at a comfortable temperature now replaces the process of chopping wood, building the fire and maintaining it. Our relationship with the environment is now reduced to, and disclosed to us as, a control that we simply set to our liking. In this way devices ‘de-world’ our relationship with things by disconnecting us from the full actuality (or contextuality) of everyday life. By relieving us of the burden—of making and maintaining fires in our example—our relationship with the world becomes disclosed in a new way, as simply there, already available for us. Obviously this is sometimes necessary, otherwise the burden of everyday life might just be too much. Nevertheless, if the ‘device mood’ becomes the way in which we conduct ourselves towards the world then this will have important moral and ethical implications for others, who may then become disclosed as devices.

Against such a disengaging relationship with things in the world Borgmann argues for the importance of focal practices based on focal things. Focal things solicit our full and engaging presence. We can think of the focal practice of preparing and enjoying a meal with friends or family as opposed to a solitary consumption of a fast food meal. If we take Borgmann’s analysis seriously we might conclude that we, as contemporary humans surrounded by devices, are doomed increasingly to relate to the world in a disengaged manner. Such a totalizing conclusion would be inappropriate since the prevailing mood does not determine our relationship with those we encounter. Nevertheless, Borgmann’s analysis does point to the possibility of the emergence of a device mood—as we increasingly depend on devices—and our moral obligation not to settle mindlessly into the convenience that devices may offer us. Otherwise we might, as Heidegger (1977) argued, become the devices of our devices.

2.4 A (Post) Phenomenology of the Human/Technology Relationship

Phenomenology does not only function as an approach for revealing and critiquing our relationship with technology, as suggested by Heidegger, Dreyfus and Borgmann. Don Ihde (1990, 1995, 2002, 2010) has used the resources of phenomenology to give a rich and subtle account of the variety and complexity of our relationships with technology—what he refers to as a postphenomenology of technology (Ihde 2009, 2010; Selinger 2006). With postphenomenology Ihde wants to move away from ‘transcendental’ (and often dystopian) grand narratives of the technological to a more grounded empirical analysis of the human/technology relationship. This move to the empirical is often described as the ‘empirical turn’ in the philosophy of technology (Achterhuis 2001). Postphenomenology is a relational ontology, which proposes that the subject/object (or human/technology) relation is not just interactional but also co-constitutive (Rosenberger and Verbeek 2015). Moreover, this co-constitutive relation is fundamentally mediated. There are no direct relations between subject and object—only ‘indirect’ ones in which technologies often function as mediators, not to connect but to co-constitute. As such, the human-world relation is typically a human-technology-world relation. In this relation there are no pre-given subjects or pre-given objects (which then connect through some form of mediator). Rather, mediation is the original source from which a specific subjectivity and objectivity emerge, or become enacted, as part of specific situated doings. Ihde defines this originally mediated nature of existence as embodiment (Ihde 2011). Postphenomenology aims to describe the varieties of subjectivities/objectivities that emerge through different embodiments. What sort of subjects do we become (and what does the world become) through the embodiments of the microscope, the telescope, self-driving cars, the computer screen, and so forth?

In thinking about these embodied human/technology relationships Ihde characterizes four different I-technology-world relationships. The first type of relationship he calls embodiment relations. In this case technology is taken up as the very medium of subjective perceptual experience of the world, thus transforming the subject’s perceptual and bodily sense. In wearing my eyeglasses I do not only see through them; they also become ‘seen through.’ In functioning as that which they are, they withdraw into my own bodily sense as a part of the ordinary way I experience my surroundings. He denotes this relationship as having the form [I-glasses]-world. He further argues that this relationship has a necessary ‘magnification/reduction structure’ associated with it. Embodiment relations simultaneously magnify or amplify and reduce or place aside (screen out) what is (and is not) experienced through them. The moon through the telescope is different from the moon in the night sky perceived by the naked eye. The person at the other end of the online chat is made present to me across a great distance at the expense of being reduced to text on the screen.

The second type of human/technology relation is what he calls hermeneutic. In this type of relation the technology functions as an immediate referent to something beyond itself. Although I might fix my focus on the text or the map, what I actually see (encounter) is not the map itself but rather immediately and simultaneously the world it already refers to, the landscape already suggested in the symbols. In this case the transparency of the technology is hermeneutic rather than perceptual. As I become skilled at reading maps they withdraw to become for me immediately and already the world itself. He denotes this relationship as having the form I-[map-world].

The third type of human/technology relation Ihde calls ‘alterity’ relations. In these relations technology is experienced as a being that is otherwise, different from me, technology-as-other. Examples include things such as religious icons and intelligent robots (Sony’s robot dog AIBO, for example). In my interaction with these technologies they seem to exhibit a ‘world of their own.’ As I engage them they tend to disengage me from the world of everyday life and point to the possibility of other worlds, hence their pervasiveness in activities such as play, art and sport. He denotes these as having the form I-technology-[world], indicating that the world withdraws into the background and the technology emerges as a focal entity with which I momentarily engage—as when I play with my robot dog, for example.

Ihde also recognizes a fourth type of human/technology relation in which technology is not directly implicated in a conscious process of engagement on the part of the human actor. Ihde refers to these as background relations. Examples include automatic central heating systems, traffic control systems, and so forth. These systems are ‘black-boxed’ in such a way that we do not attend to them yet we draw on them for our ongoing everyday existence. They withdraw as ongoing background conditions. Although he does not designate it as such one might formalize these relations in the form: I-[technology]-world. These invisible background technologies can be powerful in configuring our world in particular ways yet escape our scrutiny.

Ihde’s phenomenological description of the human/technology relation provides a meaningful taxonomy or framework for giving an account of many everyday technology relations in a manner that can facilitate our consideration of the social and ethical implications of information technology. For example, the withdrawal of technology, into my body, into my perception and into the background, has important political and ethical implications for its design and implementation, especially if one considers that every disclosure of the world ‘through’ technology is also immediately a concealment of other possible disclosive relations. The car discloses possibilities—for getting to places quickly—but also conceals, in its withdrawal, the resources (roads, fuel, clean air, etc.) necessary for it to be what it is; these resources act as devices in Borgmann’s terminology. Indeed we often lose sight of this magnification/reduction structure as we simply use these technologies. As these technologies become more and more pervasive—almost a necessary condition of everyday life—it becomes more and more difficult to see that which has become concealed in their withdrawal. With Ihde’s typology of I-technology-world relations we might be able to bring what has become concealed back to the foreground for our critical attention and ethical reflection. Let us now consider how these different ways of approaching the information technology/society relationship might circumscribe our thinking about the social and ethical implications of information technology.

3. Ethics and Information Technology

3.1 The Impact of Information Technology and the Application of Ethical Theory

Much of the ethical debate about computers, and information technology more generally, has been informed by the tool and impact view of information technology (discussed in section 1.1 above). Within this tradition a number of issues have emerged as important. One such issue is whether computers (or information and communication technology more generally) generate new types of ethical problems that require new or different ethical theories, or whether it is just more of the same (Gorniak 1996). These debates are often expressed in the language of the impact of information technology on particular values and rights (Johnson 1985, 1994). Thus, within this approach we have discussions about the impact of CCTV or web cookies on the right to privacy, the impact of the digital divide on the right to access information, the impact of the piracy of software on property rights, and so forth. In these debates Jim Moor (1985) has argued that computers reveal policy vacuums that require new thinking and the establishment of new policies. Others have argued that the resources provided by classical ethical theory such as utilitarianism, consequentialism and deontological ethics are more than enough to deal with all the ethical issues emerging from our design and use of information technology (Gert 1999).

Irrespective of whether information technology creates new types of ethical problems that require new ethical theory or whether established ethical theory is sufficient, one tends to find the debate centered on questions of policy intended to regulate or justify conduct vis-à-vis the negative impact produced by certain uses or implementations of IT. These policies are seen, and presented, as ways to regulate or balance competing rights or competing values in the context of the impact of IT. For example, what sort of policies do we need to protect our children when they go on the Internet? How would these policies affect the right to free speech? Or, what sort of policies do we need to secure the rights of producers of digital products? How would these policies affect the right of society to reasonable access to these products? (Lipinski & Britz 2000). Furthermore, these debates are most often directed at an institutional level of discourse—i.e., with the intention to justify the policies or conduct for governments, organizations and individuals. In these debates on the impact of technology, ethicists are primarily conceived as presenting arguments for justifying a particular balance of values or rights over and against other possibilities within the context of specific uses or implementations of IT. In presenting these arguments ethicists normally apply ethical theories (such as consequentialism, utilitarianism, deontological ethics, etc.) to new cases or problems presented by the use, or perceived impact, of the particular technology.

3.2 The Politics of Information Technology and a Disclosive Ethics

The constructivist view of the information technology/society relationship (discussed in section 1.2 above) tends to lead to a different kind of reflection on the ethical significance of information technology. Constructivists suggest that the problem with the tool and impact view of society/technology relations is that it assumes that the domain of society is where the ends (values, assumptions and politics) are located and that the domain of the technical is merely a neutral means towards these ends. This view is often expressed in the slogan ‘guns do not kill people, people do’, meaning guns are merely a means to an end (which could be peaceful or violent depending on the user of the gun). In contrast, social constructivists tend to argue that technology, as socially constructed, is already political as such and therefore already suggests an ethical concern. By this they mean that technology, by its very design, includes certain interests and excludes others. This does not mean that designers are always aware that they are making political and ethical decisions. In fact they are mostly not. They are mostly trying to solve very mundane everyday ‘technical’ problems when they build technologies. Nevertheless, they always make assumptions or take for granted certain values and beliefs (mostly their own) when they construct artifacts. For example, the ATM bank machine assumes a particular person in front of it. It assumes a person that is able to see the screen, read it, remember and enter a personal identification number (PIN), and so forth. It is not difficult to imagine a whole section of society that does not conform to this assumption. If you are blind, in a wheelchair, have problems remembering, or are unable to enter a PIN because of a disability, then your interest in getting access to your account will be excluded by the actual design of the ATM. In this way the ATM embodies a particular understanding of the world (and of the humans) in front of it. This is why Langdon Winner (1980) argued that artifacts (and technological systems) always already embody interests, values, etc.—that is to say, they are always political from the start. This does not mean that users cannot intentionally or unintentionally reinterpret the way technology suggests or ‘affords’ possibilities to suit their own needs. Users often ‘read’ and use technology in ways unintended by the designers or implementers. However, as these technological affordances become embedded in larger infrastructures (practices, systems, spaces, organizations, etc.) it becomes increasingly difficult to use the technology in ways other than those it was set up to afford (or not).

If information technology is political—i.e., it already includes or excludes certain interests—then it is also immediately ethical. For the constructivist it is the particular way in which interests become built into the technology, and into the practices within which it is embedded, that is ethically significant (Brey 2000, 2004, 2006). Moreover, they often argue that ethical reflection should be an inherent part of the design process—an approach referred to as value sensitive design (Friedman 1997). Of particular concern is the way information technology ‘hides’ these values and interests in the logic of software algorithms and hardware circuits (Introna & Nissenbaum 2000). Everyday technologies such as utensils are mostly open to scrutiny by the reflexive user. Information technology, on the other hand, is mostly not open to such scrutiny (Brey 2000). Important assumptions and biases in information technology are mostly obscured and subsumed in ‘black boxes’ in ways that make it difficult even for experts to scrutinize them. It is not possible for the ordinary computer user to scrutinize the assumptions and biases embedded in the code of the Microsoft Windows operating system. Embedded in the software and hardware code of information technology applications are complex rules of logic and categorization that may have material consequences for those using them, and for the production of social order more generally (Introna & Wood 2004, Introna & Nissenbaum 2000). In this view of information technology ethics, the task of ethics is to open up the ‘black box’ of information technology and reveal or disclose the values and interests it embodies for scrutiny and reflection—not only in its final design but also in the process of development (Introna 2007). Such an approach to the ethics of information technology is most often informed by technology studies within the science, technology and society (STS) tradition, as proposed by Bijker (2003).

3.3 Information Technology, Ethics and Our Human Way of Being

From the preceding discussion it should be clear that phenomenologists would tend not to concern themselves only with this or that artifact or technology as such. They would rather be concerned with the world (or mood, as already suggested) that made these artifacts or technologies seem necessary or obvious in the first place. They would also be concerned with the ways in which particular technologies ‘frame’ and reveal us, or our world, as we draw on them. They would claim that it is this ongoing co-constitution that we should focus on if we are to understand the social and ethical implications of ICT and new media (Verbeek 2008). This does not preclude the possibility that we could also consider the impact of particular technologies, as well as unpack particular technologies to understand the values and interests they imply. However, the phenomenologists would argue that the impact analysis (section 3.1) and the disclosive analysis (section 3.2) could be enhanced if they were situated in a broader phenomenological analysis. Such an additional analysis might add another level of critical reflection that could be important in understanding and justifying different possible futures. One might describe the phenomenological approach as an iterative process of ontological disclosure in which a world (relevant social practices or involvement whole) and a technology (nexus of relevant technologies) are taken as mutually constitutive interpretive contexts in which the one renders the other intelligible—i.e., grounds it as a ‘seemingly’ meaningful way to be. In this iterative process there is a progressive uncovering of the constitutive conditions that are necessary for particular ways of seeing or doing in the world—or in particular social practices—to make sense and be meaningful in the way they are taken to be. For example, in Heidegger’s analysis of modern technology, as outlined above in section 2.1, he identifies the emergence of calculative thinking as a necessary condition for seeing the world as resources available for our purposes. However, this calculative orientation is itself conditioned by a particular way of approaching the world, which he traces back to Greek thought. Let us consider this approach in more detail through an example of virtuality.

4. Phenomenology, Ethics and Information Technology: The Case of Virtuality

As indicated above, it would be misleading to suggest that there is a vast literature on the phenomenological approach to the social and ethical implications of information technology. Clearly, the work of Stiegler, Heidegger, Dreyfus, Borgmann and Ihde discussed above could be described as critical work that aims to open up a horizon for social and ethical reflection. Nevertheless, there seems to be at least one information technology theme that has attracted some sustained attention from phenomenologists (especially with regard to its ethical implications)—the phenomenon of virtualization or virtuality. The term ‘virtuality’ is used here to refer to the mediation of interaction through an electronic medium between humans as well as between humans and machines. The Internet (or Cyberspace as it is known in cultural discourse) is the most evident example of the virtualization of interaction.

The development of the Internet and the subsequent extension of computer networks into all domains of everyday life have prompted much speculation about the way in which this information technology will change human existence, especially our notion of sociality and community. Much of this speculation suggests that the virtualization of human interaction has led to a multitude of new possibilities for humans—such as cyber communities, virtual education, virtual friendships, virtual organizations, virtual politics, and so forth. Clearly, such claims about the transformation of the social domain have important implications for our understanding of ethics. One might suggest that most of our current thinking about ethics implies a certain sense of community based on reciprocal moral obligations that are largely secured through situated, embodied practices and institutions that are often overlapping and mutually inclusive. If these practices and institutions become virtualized then it would seem that we need to reconsider some of our most fundamental human categories.

The proponents of the virtualization of society (and its institutions) argue that virtuality extends the social in unprecedented ways (Fernback 1997, Rheingold 1993a, 1993b, Turkle 1995, 1996, Benedikt 1991, Horn 1998). They argue that it opens up an entirely new domain of social being. For example, Rheingold (1993a) argues that it offers “tools for facilitating all the various ways people have discovered to divide and communicate, group and subgroup and regroup, include and exclude, select and elect. When a group of people remain in communication with one another for extended periods of time, the question of whether it is a community arises. Virtual communities might be real communities, they might be pseudocommunities, or they might be something entirely new in the realm of social contracts.” (p. 62). According to the proponents, this new social space is novel in that it offers completely new ways to be and relate. They argue that through the plasticity of the medium it is possible to conceive, construct and present our identities in almost boundless ways. Turkle (1996) suggests that cyberspace “make[s] possible the construction of an identity that is so fluid and multiple that it strains the very limits of the notion [of authenticity]. People become masters of self-presentation and self-creation. There is an unparalleled opportunity to play with one’s identity and to ‘try out’ new ones. The very notion of an inner, ‘true self’ is called into question… the obese can be slender, the beautiful can be plain. The ‘nerdy’ can be elegant. The anonymity of Multi-User Dungeons (MUDs), such as Second Life (you are known only by the name you gave your characters), provides ample room for individuals to express unexplored ‘aspects of the self’” (p. 158). The claims by Rheingold, Turkle and others are certainly bold. If they are right then virtuality may indeed represent entirely new possibilities for humans to relate, extend, and express themselves, possibilities which should be encouraged, especially for those who have become excluded from the traditional domains of social relations, due to disability for example.

Those who treat the Internet as an artifact may suggest that we look at the impact of mediation (or virtualization) on communication and relations of power; for example, the fact that certain social prejudices are circumvented because the individual responding to my online application for a particular service is not confronted with my physical appearance. They may also suggest, as Turkle (1995, 1996) has done, that we look at the way virtualization makes the presentation of self and identity more plastic, and encourage us to think through the consequences of this for ongoing social interaction. The social constructivists may suggest that we need to look at the assumptions and values embedded in the artifacts as such (as was suggested above). They may, for example, suggest that we consider the implicit assumptions about the nature of communication when considering e-mail applications—for instance, the fact that most e-mail applications assume and emulate the structure of a physical letter. They would argue that we need to trace through how people interpret this ‘letter’ structure to communicate and share objects (such as files and pictures) with others, as well as the sorts of communication such a structure excludes.

Phenomenologists would suggest that these responses are all important, but that they assume something more primary—i.e., the conditions that render such acts as the presentation of the self, ongoing communication and sharing meaningful and significant in the first instance. They might suggest that these social acts are all grounded in an already presumed sense of community. They might further argue that social interaction, community and identity (as we know them) are phenomena that are local, situated and embodied, and that are characterized by mutual involvement, concern and commitment (Dreyfus 2001; Borgmann 1999; Ihde 2002; Introna 1997; Coyne 1995; Heim 1993). In other words, these phenomena draw on an implied sense of involvement, place, situation, and body for their ongoing meaning. For example, Borgmann (1999) argues that the “unparalleled opportunity” of virtuality suggested by Turkle comes at a ‘cost.’ To secure “the charm of virtual reality at its most glamorous, the veil of virtual ambiguity must be dense and thick. Inevitably, however, such an enclosure excludes the commanding presence of reality. Hence the price of sustaining virtual ambiguity is triviality” (p. 189). Indeed, such a ‘fluid and multiple’ identity is only feasible as long as it is “kept barren of real consequences”. Dreyfus (1999, 2001) argues, in a similar vein, that without situated and embodied engagement there can be no commitment and no risk. Both argue that in such an environment moral engagement is limited and human relations become trivialized. Ihde (2002) does not go as far as Borgmann and Dreyfus in discounting the virtual as ‘trivial.’ Nevertheless, he does claim that “VR bodies are thin and never attain the thickness of flesh. The fantasy that says we can simultaneously have the powers and capabilities of the technologizing medium without its ambiguous limitations is a fantasy of desire” (p. 15).

Coyne (1995), drawing on the work of Heidegger, argues that the proximity of community has nothing to do with physical distance. He argues that proximity is rather a matter of shared concerns—i.e., my family is ‘close’ to me even if they are a thousand miles away, and my neighbors may be ‘distant’ to me even if they are next door. Levinas (1991, 1996) takes this claim even further. He suggests that proximity has nothing to do with either social or geographic distance. For him proximity is an ethical urgency that unsettles our egocentric existence. Proximity is the face—or our always already facing—of the Other (all other human beings) that unsettles the ongoing attempts by the ego to ‘domesticate’ the infinitely singular Other (a proper name) into familiar categories (race, ethnicity, gender, etc.). For the phenomenologist any electronic communication (or any other communication) will find its meaning in a prior horizon of proximity. If we do not already share certain concerns then electronic mediation will not create proximity, even if it does seem to break down the geographic distances between us—even if it is ‘shrinking’ the world, as it were. These authors suggest that our sense of community, and the moral reciprocity it implies, comes from a sustained and situated engagement in which mutual commitments and obligations are secured in the proximity of an already shared horizon of ongoing meaning. In a similar vein Silverstone (2002, 2003), drawing on the work of Levinas, argues for the importance of maintaining a ‘proper distance’ in which proximity and responsibility are maintained. He argues that in the modern world of the Internet and increased mobility the stranger becomes ‘my neighbour’: ‘and we are all neighbours to one another now’. In the mediated world we become inundated with the solicitations of the multitude of others that increasingly appear on our screens. How ought we to respond? We cannot allow the world, reconstituted through the new media, to turn into mere images, pixels on the screen. We must recognize ‘that I have as much responsibility for the stranger, that other who is either, physically or metaphysically, far from me, as I do for my neighbour’ (Silverstone 2003: 480). Thus, according to Silverstone, the ambiguity of a world of ‘closeness’ and simultaneous ‘distance’ of the other that the new media constitute is an altogether different way of being with others, one that requires a new ethic of ‘proper distance’ in which the possibility of facing the other, as Other, is not lost in the ethereality of our clicks. It is clear from these examples that the ethical question for phenomenology is mostly also an ontological question—that is, what sort of world or way of being are we becoming, as opposed to the sort of world we value and want? The phenomenologists would argue that these fundamental choices may only become visible if we approach new media and ICT (and the ethics they imply) from a phenomenological point of view.

Not all agree with this phenomenological analysis that seems to privilege the face-to-face. Feenberg (1999, and 2004 in Other Internet Resources), although he is not a phenomenologist as such, nevertheless uses phenomenological insights to argue that the messages exchanged are not ‘thin’ but can also be ‘thick.’ The messages exchanged through e-mail, for example, are also situated and already imply a certain minimal level of reciprocity and commitment—the mere fact of their exchange implies such a minimal horizon of meaning for the act of exchange itself to make sense. He argues: “the interpreted message stands in for the world, is in effect a world. In the case of mediated communication, a person and the social context of their presence are delivered in the message” (emphasis added). He argues that community is an intersubjectively constructed phenomenon that emerges from a mutual connection that may imply a physical co-presence but is not restricted to it. He acknowledges that a mediated community may be different and have its own particular problems. Nevertheless, he insists that “community needs to be interpreted from the inside out, not as a geographical fact.” In a similar manner Powers (2004) argues, with reference to the well-known LambdaMOO ‘virtual rape’ case, that virtuality can lead to real moral wrongs, even if these virtual worlds seem trivial and ‘shallow’. Introna and Brigham (2007) also argue that virtuality affords an opportunity to reconsider the meaning of the traditionally assumed community in a rather fundamental way. They suggest that the idea of virtual communities as ‘thin’ and ‘shallow’—lacking the depth that local proximity in face-to-face communities brings—privileges a certain view of community premised upon shared values, or shared concerns, embedded in local situated face-to-face interaction and practices. They argue that in such communities ethics is often rooted in a notion of reciprocity (which may not be altruistic at all, and may even be economic). In contrast, they suggest, the virtual stranger (popping up on my screen as if out of nowhere) radically disturbs such an idea of reciprocity (and community). In responding to the virtual stranger the often presumed source of our ethics becomes visible. In the anonymity of the interface I have to really decide (become truly responsible) precisely because the stranger can so easily be ignored.

The phenomenological analysis and critique of virtuality is important because it forces us to reconsider some of our most fundamental human categories—especially our moral categories. From these debates, indicated above, it is apparent that a simple dichotomy that poses the virtual as ‘thin and trivial’ and situated embodied co-presence (often referred to as the ‘real’) as ‘thick and significant’ is too simple to be helpful. From the phenomenological analysis presented it follows that one of the most important aspects of community is the presumed density of its mutually referring concerns and involvements (irrespective of mediation). Traditional communities often develop this dense referentiality, quite implicitly, by sharing involvement (and therefore concern) in many overlapping practices and institutions. It is conceivable that individuals who already share certain involvements and concerns (such as individuals who share a debilitating disease) could become an online virtual community because they are in a very real sense already a community. It also seems evident that individuals who share limited involvements and concerns (such as playing games in MUDs) are unlikely to become a community simply because they share a virtual space. Without a dense horizon of mutual involvement and concern upon which we can base ourselves, all choices and actions become equally significant or insignificant—i.e., trivial. Taylor (1991), in discussing the ethics of authenticity, provides a good summary of the importance of this communal horizon of significance for the construction of a ‘thick’ self and therefore a ‘thick’ community: “The agent seeking significance in life, trying to define him- or herself meaningfully, has to exist in a horizon of important questions [shared concerns]. That is what is self-defeating in modes of contemporary culture that concentrate on self-fulfillment in opposition to the demands of society, or nature, which shut out history and the bonds of solidarity. These self-centred ‘narcissistic’ forms are indeed shallow and trivialised” (p. 40).

5. Conclusion

The purpose of this entry is to provide the reader with a sense of the phenomenological approach to information technology and its social and ethical implications by contrasting it with two other approaches. It might be useful to summarize our discussion in the Table below. Obviously such a summary must be seen within the context of the whole entry and will be subject to the normal problems of such summative statements, namely that they do not always do justice to the ideas so summarized. Nevertheless, as a general guide it might help to further clarify some of the contrasts that the entry tried to suggest as useful in understanding the distinctiveness of the phenomenological approach.

Artifact / tool Approach
  • View of technology / society relationship: Technologies are tools that society draws upon to do certain things it would not otherwise be able to do. When tools become incorporated into practices they tend to have a more or less determinable impact on those practices.
  • Approach to ethical implications of technology: The task of ethics is to analyze the impact of technology on practices by applying existing or new moral theories so as to construct guidelines or policies that will ‘correct’ the injustices or infringements of rights caused by the implementation and use of the particular technology.

Social Constructivist Approach
  • View of technology / society relationship: Technology and society co-construct each other from the start. There is an ongoing interplay between social practices and technological artifacts (both in their design and in their use). This ongoing interplay means that technological artifacts and human practices become embedded in a multiplicity of ways that are mostly not determinable in any significant way.
  • Approach to ethical implications of technology: The task of ethics is to be actively involved in disclosing the assumptions, values and interests being ‘built into’ the design, implementation and use of the technology. The task of ethics is not to prescribe policies or corrective action as such but to continue to open the ‘black box’ for scrutiny and ethical consideration and deliberation.

Phenomenological Approach
  • View of technology / society relationship: Technology and society co-constitute each other from the start; they are each other’s condition of possibility to be. Technology is not the artifact alone; it is also the technological attitude or disposition that made the artifact appear as meaningful and necessary in the first instance. Moreover, once in existence, the artifacts and the disposition that made them meaningful also disclose the world beyond the mere presence of the artifacts.
  • Approach to ethical implications of technology: The task of ethics is ontological disclosure: to open up and reveal the conditions of possibility that make particular technologies (and not others) show up as meaningful and necessary. It seeks to interrogate these constitutive conditions (beliefs, assumptions, attitudes, moods, practices, discourses, etc.) so as to problematize and question the fundamental constitutive sources of our ongoing being-with technology.

Bibliography

  • Achterhuis, H. (ed.), 2001, American Philosophy of Technology: The Empirical Turn, Bloomington: Indiana University Press.
  • Baudrillard, J., 1983, Simulations, New York: Semiotext(e).
  • Bijker, W. E., 1995, Of Bicycles, Bakelites and Bulbs. Toward a Theory of Sociotechnical Change, Cambridge, MA: MIT Press.
  • –––, 2003, “The Need for Public Intellectuals: A Space for STS,” Science, Technology & Human Values, 28(4): 443–50.
  • –––, T. Pinch, and T. Hughes, 1987, The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, Cambridge, MA: MIT Press.
  • Borgmann, A., 1984, Technology and the Character of Contemporary Life, Chicago: University of Chicago Press.
  • –––, 1999, Holding On to Reality, Chicago/London: University of Chicago Press.
  • Brey, P., 1997, “Philosophy of Technology meets Social Constructivism,” Techné: Journal of the Society for Philosophy and Technology, 2(3&4): 56–79.
  • –––, 2000, “Disclosive Computer Ethics,” Computers and Society, 30(4): 10–16.
  • –––, 2004, “Ethical Aspects of Face Recognition Systems in Public Places,” Journal of Information, Communication & Ethics in Society, 2(2): 97–109.
  • –––, 2006, “Freedom and Privacy in Ambient Intelligence,” Ethics and Information Technology, 7(3): 157–166.
  • Coyne, R., 1995, Designing Information Technology in the Postmodern Age: From Method to Metaphor, Cambridge, MA: MIT Press.
  • Dreyfus, H.L., 1999, “Anonymity versus commitment: The dangers of education on the internet,” Ethics and Information Technology, 1(1): 15–20.
  • –––, 2001, On the Internet, London: Routledge.
  • –––, 1992, What Computers Still Can’t Do: A Critique of Artificial Reason, Cambridge, MA: MIT Press.
  • Feenberg, A., 1991, Critical Theory of Technology, Oxford: Oxford University Press.
  • –––, 1999, “Technology and Meaning,” in Questioning Technology, London and New York: Routledge, 183–199.
  • Fernback, J., 1997, “The Individual within the Collective: Virtual Ideology and the Realization of Collective Principles,” in Steven G. Jones (ed.), Virtual Culture: Identity and Communication in Cybersociety, London: Sage, 36–54.
  • Friedman, B. (ed.), 1997, Human Values and the Design of Computer Technology, New York: Cambridge University Press and CSLI, Stanford University.
  • Gert, B., 1999, “Common Morality and Computing,” Ethics and Information Technology, 1(1): 57–64.
  • Gorniak, K., 1996, “The Computer Revolution and the Problem of Global Ethics,” Science and Engineering Ethics, 2(2): 177–190.
  • Harman, G., 2009, Prince of Networks: Bruno Latour and Metaphysics, Melbourne: Re.press.
  • Heidegger, M., 1977, The Question Concerning Technology and Other Essays, New York: Harper Torchbooks.
  • Heim, M., 1993, The Metaphysics of Virtual Reality, New York: Oxford University Press.
  • –––, 1999, Electric Language, New Haven: Yale University Press.
  • Horn, S., 1998, Cyberville. Clicks, Culture, and the Creation of an Online Town, New York: Warner Books.
  • Ihde, D., 1990, Technology and the Lifeworld: From garden to earth, Bloomington and Indianapolis: Indiana University Press.
  • –––, 1995, Postphenomenology: Essays in the Postmodern Context, Evanston: Northwestern University Press.
  • –––, 2002, Bodies in Technology, Minneapolis: University of Minnesota Press.
  • –––, 2003, “If Phenomenology is an Albatross, Is Post-phenomenology possible?” in Don Ihde and Evan Selinger (eds.), Chasing Technoscience: Matrix for Materiality, Indianapolis: Indiana University Press, 15–26.
  • –––, 2010, Heidegger’s Technologies: Postphenomenological Perspectives, New York: Fordham University Press.
  • Introna, L.D., 1997, “On Cyberspace and Being: Identity, Self and Hyperreality.” Philosophy in the Contemporary World, 4(1&2): 16–25.
  • –––, 2007, “Maintaining the Reversibility of Foldings: Making the ethics (politics) of information technology visible,” Ethics and Information Technology, 9(1): 11–25.
  • Introna, L.D., and Brigham, M., 2007, “Reconsidering Community and the Stranger in the Age of Virtuality,” Society and Business Review, 2(2): 166–178.
  • Introna, L.D., and Ilharco, F.M., 2003, “The Ontological Screening of Contemporary Life: A Phenomenological Analysis of Screens,” European Journal of Information Systems, 13(3): 221–234.
  • Introna, L.D., and Nissenbaum, H., 2000, “The Internet as a Democratic Medium: Why the politics of search engines matters,” Information Society, 16(3): 169–185.
  • Introna, L.D., and Wood, D., 2004, “Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems,” Surveillance and Society, 2(2&3): 177–198.
  • Irwin, S., and Ihde, D., 2016, Digital Media: Human–Technology Connection, Lanham; Boulder; New York; London: Lexington Books.
  • Johnson D. G., 1985, Computer Ethics, Englewood Cliffs, NJ: Prentice-Hall.
  • –––, 1994, Computer Ethics, 2nd edition, Englewood Cliffs, NJ: Prentice-Hall.
  • Latour, B., 1991, “Technology is society made durable,” in J. Law (ed.), A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge, 103–131.
  • –––, 2002, “Morality and Technology: The End of the Means,” Theory, Culture & Society, 19(5&6): 247–60.
  • –––, 2005, Reassembling the Social: An Introduction to Actor-Network-Theory, Oxford: Oxford University Press.
  • Law, J. (ed.), 1991, A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge.
  • Levinas, E., 1991, Otherwise than Being or Beyond Essence, Dordrecht: Kluwer Academic Publishers.
  • –––, 1996, “Ethics as First Philosophy,” in The Levinas Reader, S. Hand (ed.), London: Blackwell, 75–87.
  • Lipinski, T. A. and Britz, J. J., 2000, “Rethinking the Ownership of Information in the 21st Century: Ethical Implications.” Ethics and Information Technology, 2(1): 49–71.
  • Moor, J. H., 1985, “What is computer ethics?” Metaphilosophy, 16(4): 266–279.
  • Pitt, J.C., 2000, Thinking about Technology: Foundations of the Philosophy of Technology, New York: Seven Bridges Press.
  • Postman, N., 1993, Technopoly: The Surrender of Culture to Technology, New York: Alfred A. Knopf.
  • Powers, T.M., 2004, “Real wrongs in virtual communities,” Ethics and Information Technology, 5(4): 191–198.
  • Rheingold, H., 1993a, “A Slice of Life in My Virtual Community,” in L. Harasim (ed.), Global Networks: Computers and International Communication, Cambridge, MA: MIT Press, 57–80.
  • –––, 1993b, The Virtual Community: Homesteading on the Electronic Frontier, Reading, MA: Addison-Wesley. [Preprint available online.]
  • Rosenberger, R., 2012, “Embodied Technology and the Dangers of Using the Phone While Driving,” Phenomenology and the Cognitive Sciences, 11(1): 79–94.
  • Rosenberger, R., and Verbeek, Peter-Paul (eds.), 2015, Postphenomenological Investigations: Essays on Human–Technology Relations, London: Lexington Books.
  • Selinger, Evan (ed.), 2006, Postphenomenology: A Critical Companion to Ihde, Albany: State University of New York Press.
  • Silverstone, R., 2002, “Complicity and Collusion in the Mediation of Everyday Life,” New Literary History, 33(4): 761–80.
  • –––, 2003, “Proper Distance: Towards an Ethics for Cyberspace,” in G. Liestol, A. Morrison and T. Rasmussen (eds.), Digital Media Revisited, Cambridge, MA: MIT Press, 469–91.
  • Stiegler, B., 1998, Technics and Time, 1: The Fault of Epimetheus, Stanford: Stanford University Press.
  • –––, 2009, Technics and Time, 2: Disorientation, Stanford: Stanford University Press.
  • Taylor, C., 1991, The Ethics of Authenticity, Cambridge, MA: Harvard University Press.
  • Turkle, S., 1996, “Parallel lives: Working on identity in virtual space,” in D. Grodin and T. R. Lindlof (eds.), Constructing the Self in a Mediated World, London: Sage, 156–175.
  • –––, 1995, Life on the Screen: Identity in the Age of the Internet, New York: Simon and Schuster.
  • Vaccari, A., 2009, “Unweaving the Program: Stiegler and the Hegemony of Technics,” Transformations, 17, available online.
  • Verbeek, P.P., 2005, What Things Do: Philosophical Reflections on Technology, Agency, and Design, University Park: Pennsylvania State University Press.
  • –––, 2008, “Obstetric Ultrasound and the Technological Mediation of Morality – A Postphenomenological Analysis.” Human Studies, 31(1): 11–26.
  • Virilio, P., 1994, The Vision Machine, Bloomington, IN: Indiana University Press.
  • Winner, L., 1980, “Do Artifacts Have Politics?” Daedalus, 109: 121–36.

Other Internet Resources

Copyright © 2017 by
Lucas Introna <l.introna@lancaster.ac.uk>
