Philosophy of Man and Technology

prof.dr.ir. Peter-Paul Verbeek


The limits of humanity: on technology, ethics, and human nature

Lecture presented on the occasion of the appointment as professor of Philosophy of Man and Technology at the Faculty of Behavioural Sciences, University of Twente, on Thursday 15 October 2009

by prof.dr.ir. Peter-Paul Verbeek


Rector Magnificus; Dean of the Faculty of Behavioural Sciences; family and friends, colleagues and students.

"All beings hitherto have created something beyond themselves; and do you want to be the ebb of this great tide, and would rather return to the beasts than surpass man?" In 1883 Friedrich Nietzsche used these words to have Zarathustra announce the advent of a new kind of human (Nietzsche 1985, 27). For Zarathustra, the time was ripe for a successor to the imperfect and submissive thing that mankind was. Man should no longer be seen as an end in himself, but as a bridge to a higher being: the Übermensch (ibid., 28).

Nietzsche's image of the Übermensch has since had a chequered history; at its lowest point it became an icon for Hitler's eugenics programme to breed a pure Aryan race. But there is another, more appropriate reading, in which the Übermensch stands not for a Superman who will replace us, but for a better way of being human: the Übermensch as the highest form of humanity. Nevertheless, many people are still uncomfortable with the idea that humanity might not be a reference point, not an end in itself, but merely the bridge to something better. The image of the Übermensch seems to demand that we abandon that which best characterizes us and gives us dignity: our humanity. And to abandon that would be to abandon the foundations of morality.

Recent technological developments have breathed new life into the image of the Übermensch, and also into the difficult ethical questions it poses. The convergence of nanotechnology, biotechnology, information technology, and the cognitive and neurosciences has given us more and more ways of intervening in human functioning. Numerous examples of this can be found at this very university. For instance, Wim Rutten's group is working hard on the development of neuroimplants: tiny devices connected directly to the brain or other parts of the nervous system, enabling 'deep brain stimulation' which can, for example, reduce the effects of Parkinson's disease. In a team led by Clemens van Blitterswijk, techniques are being developed for cultivating human tissue from stem cells, for instance to repair bone damage caused by cancer. And Albert van den Berg's colleagues are using nanotechnology to create chip-sized laboratories that can be swallowed, like a pill, to detect intestinal cancer from within at an early stage.

The Creation of a Cyborg

So what is going to happen to us, now that technologies like these are penetrating ever deeper into the body? With prosthetics, implants and artificial tissues, and also with advanced diagnostics and embryo selection, we seem to be able to interfere substantially with human nature. 'Making someone better' used to mean curing their diseases; today it also seems to mean enhancing them as human beings. In this way new technologies are giving new shape to the image of the Übermensch. We now seem to be capable of going 'beyond' humanity – and some are already dreaming about improved versions of, or even a successor to, Homo sapiens.

New technologies, then, have brought us to the limits of humanity. In practical terms: can the hybrids we create still be called 'human'? And in ethical terms: does humanity have a borderline that should not be crossed? I should like to devote this address to these 'limits of humanity'. The newest generation of technologies does indeed pose a challenge to the philosophy of technology. These technologies demand a reconceptualization of the relationship between people and technology, because they are creating human-technology relationships that are entirely unprecedented. But most of all, as I shall argue, it is high time that the philosophy of technology started playing a role in today's ethical discussions on human enhancement.

Let me make myself clear: this does not mean that I think there is something wrong with humankind as it is, and that we ought to replace ourselves with a 'better' version as quickly as possible. I will argue, rather, that it is an inherently human characteristic that people continually look for ways to reshape themselves. The most recent technologies offer new ways of doing this, but at the same time they raise questions about the best way of going about it; and it is in this search for the best way of dealing with the technological possibilities that we find the 'higher man' of which Nietzsche spoke.

In developing this idea in my address I shall take three steps. First of all, from the point of view of the philosophy of technology, these new human-technology relationships need to be further conceptualized. Secondly, from a philosophical-anthropological standpoint, I will examine the implications of these new human-technology relationships for the way we should try to understand human beings. Lastly, I will expand on the consequences of this approach for the ethics of technology. In doing so I will also be sketching the outline of a significant part of my own research over the next few years.


1. HUMANS AND TECHNOLOGY

New relationships between people and technology

In recent decades the philosophy of technology has devoted much attention to an analysis of the relationships between humans and technologies, with Don Ihde's work playing a central role. Starting from the phenomenological idea that human existence can be understood only in terms of our relationship to reality, Ihde has investigated the many ways in which this relationship is actually mediated by technology (Ihde 1990). People can incorporate technologies, as when wearing a pair of glasses which one does not look at but looks through. Other technologies we have to read, in the way that a thermometer gives information on temperature or an ultrasound machine gives a representation of an unborn child. People can also interact with technology, as when operating a DVD player or setting a central heating thermostat. Finally, within the framework sketched by Ihde, technologies can also play a role in the background of our experience. The fan noise made by a computer and the illumination provided by room lights are not experienced directly, but form a context within which people experience reality. Ihde's work has comprehensively shown how technology, in the different relationships that people can have with it, plays a role in the establishment of interpretative frameworks, scientific knowledge, and cultural practices.

This framework has been of considerable value to the contemporary philosophy of technology, but the technological developments just described – which have been made possible by the convergence of 'nano', 'bio', 'info', and 'cogno' – go beyond it. The central focus of Ihde's schema is technology which gets used: glasses, telescopes, hammers, and hearing aids. However, the newest technologies are increasingly responsible for man-machine relationships that can no longer be characterized as 'use' configurations. For instance, the development of intelligent environments, of which the Ambient Intelligence programme initiated by Philips is a prime example, leads to a configuration that might better be called immersion: here, people are immersed in an environment that reacts intelligently to their presence and activities. These technologies do not have what Ihde calls a 'background relationship' with people, because they engage in interaction with them; they are therefore more than just a 'context'.

Neuro-implant for Deep Brain Stimulation

An entirely opposite route is taken by the 'anthropotechnologies' I mentioned earlier, to use a term coined by Peter Sloterdijk (1999): technologies which redesign human beings at the physical level. These technologies are not of the exterior, the environment, but of the interior – within the human body. This relationship goes beyond that of incorporation; it might be said to represent a merging, as it becomes difficult to draw a distinction between the human and the technological. When a deaf person is given a degree of hearing capability thanks to a cochlear implant connected directly to their auditory nerve, then this 'hearing' is a joint activity of the human and the technological; it is the configuration as a whole that 'hears', and not a human being whose 'hearing' is restored thanks to technology (cf. Verbeek 2008).

Autonomy: the limit of humanity?

Both these technological trends – outwards, towards the environment, and inwards, towards the body – are blurring the borderline between humans and technology. They are also making technology increasingly invisible: it does its work without allowing us to adopt an explicit relationship to it. And this is undoubtedly one of the reasons that some people see the current convergence of technological domains as a potential threat. When our environments start meddling with us of their own accord, and when technologies start merging with our bodies, it feels as if we are losing our grip on what happens to us. Our frontiers appear to evaporate: externally, in our environments, and internally, within our own bodies, it seems that technologies are running the show. A living room that decides independently how warm it should be, what colour the lighting should be, and whether the phone is allowed to ring reduces our autonomy considerably; and the same is unquestionably true of brain implants that mitigate the symptoms of Parkinson's disease but which also bring about personality changes. When the boundary between the human and the technological is blurred, we also appear to have to give up that which makes us most human: our autonomy, the freedom to organize our lives as we see fit. After all, without this autonomy we are but slaves to technology. A world in which people are directed by devices which do their work invisibly, whether in the environment or from within the body, perfectly embodies the Brave New World dystopia that is so widely feared.

It is no exaggeration to say that the relationship between technological power and human autonomy has been an obsession for the classic critique of technology. From Lewis Mumford's Megamachine to Charlie Chaplin's Modern Times, the core theme has been: how are we to escape from the dominance of technology? How are we to prevent technology from taking power over people and thereby alienating them from themselves and their surroundings?

The reality, however, is considerably more complex. In actual fact we have never been autonomous with regard to technology, not even with regard to technologies we simply 'use' and which are not concealed in the environment or within our bodies. One of the most important insights to have emerged from contemporary approaches to the philosophy of technology is the realization that technology plays a fundamental mediating role in human experience and activity. Our personal contacts are mediated by telephones and computers; our opinions and ideas are mediated by newspapers, televisions and computer screens; and our movements are mediated by cars, trains and aeroplanes. Technology has even played a crucial role in the ethical domain, as I have elaborated in recent years. The decision on whether a pregnancy should be terminated if the child has a genetic disorder, for instance, is not an autonomous choice; to an important degree it is prestructured by the way a modern technology such as ultrasound scanning presents the unborn child (Verbeek, forthcoming 2010). We must give up the idea that we exercise a sovereign authority over technology and that we employ technologies merely as neutral means towards ends that have been autonomously determined. The truth is that we are profoundly technologically mediated beings.

For modern people like ourselves, however, products of the Enlightenment, this fact is rather hard to swallow. After all, the modern self-image of the autonomous subject, freed by the Enlightenment from dictatorship, ignorance and dependence, has already suffered some serious dents, as Freud's A General Introduction to Psychoanalysis showed all too clearly. Copernicus evicted us from the centre of the universe by having the Earth rotate around the sun; then Darwin took away our unique position in Creation by linking humans to other animals through evolution. Finally, Freud took it upon himself to deal the third blow to our modern self-image by showing that the ego, far from being its own master, is itself the product of a complex interaction with the subconscious (Freud 1989).

Today's technological developments continue to unmask the modern autonomous subject, but by other means than those of philosophy. Freud's list of unmaskers of the modern subject was composed entirely of thinkers who showed that we should try to understand people in a different way; the list has since been expanded to include a series of scientists who have questioned human autonomy in yet other ways. These include Emile Aarts of Philips, for instance, one of the brains behind Ambient Intelligence (Aarts and Marzano 2003), as well as many scientists working at this very university.

The film you are now watching was made by Wim Rutten's team. When I saw it for the first time it sent a shock right through me. It shows a boundary being crossed in a way which elicits a certain astonishment and awe – much as did the first pictures of the moon landing, the first heart transplant operation, or the first test tube baby. This film shows nerve fibres attaching themselves to electrodes. The pictures were taken here in Enschede, on the University of Twente campus. They represent a potentially revolutionary development, because this technology makes it possible to plug devices into our nervous system. The border between the human and the technological is being crossed here as easily as putting a plug into an electrical socket.

Nerve fibres grow into an electrode. This is a time-lapse recording of a nerve fibre in a microchannel (10 micrometres wide), dividing and growing towards the electrodes of a neural prosthesis. The speed of this growth is about 0.5 mm per day. The pictures were taken by Paul Wieringa MSc, a member of the Neurotechnology group at the University of Twente which is led by Professor Wim Rutten.

Other declared opponents of 'human improvement' include Leon Kass, erstwhile chairman of the President's Council on Bioethics in the US, and Francis Fukuyama, a prominent neoconservative thinker in the US.

Border blurring

What good does it do to equate today's blurring of the border between humans and technology with the unmasking of the 'autonomous subject'? Does this approach leave us no option but to simply accept that we are slaves to technology, free only to display the occasional bout of subversive behaviour? Can we even talk about ethical limits to technology if our minds and bodies are entirely mediated and directed by that technology? Must we simply accept that the border between humans and technology is a fiction, and deliver ourselves to the machines?

No, of course not! Precisely that would mean the end of humanity. Precisely that is what Nietzsche meant by a return to the beasts, instead of aiming for the highest in what is human. "Do you want to be the ebb of this great tide, and would you rather return to the beasts than surpass man?" The diagnosis that humankind is controlled by technology, and that no more than token subversive resistance can be offered, fails to appreciate how each is interwoven with the other. There is an interplay between humans and technologies within which neither technological development nor human beings have autonomy. Humankind is a product of technology, just as technology is a product of humankind. This does not mean that we are the hapless victims of technology; neither does it mean that we should try to escape from its influence.

In contrast to such a dialectical approach, which sees the relationship between humans and technology in terms of oppression and liberation, we need a hermeneutic approach. Within such an approach – hermeneutics is the study of meaning and interpretation – technology forms the tissue of meaning within which our existence takes shape. We are as autonomous with regard to technology as we are with regard to language, oxygen, or gravity. It is absurd to think that we can rid ourselves of this dependency, because in doing so we would do away with ourselves. Technology is part of the human condition. We must learn to live with it – in every sense of the word. In other words, we must shape our existence in relation to technology.

And this is where we encounter a metaphysical issue which in my view forms the crux of the philosophy of technology. At the source of the dialectical approach to the philosophy of technology, and its narrative of oppression versus liberation, lies a very specific metaphysical concept of the relationship between humankind and reality. As the French philosopher Bruno Latour has argued, this concept, which has characterized all of post-Enlightenment modernism, draws a fundamental distinction between 'subjects' and 'objects'. Subjects are active, have intentionality and freedom; objects are lifeless, passive, and at best serve as the projections or instruments of human intentions (Latour 1991). Such a metaphysics makes it impossible to properly discern the interrelatedness and interconnectedness of subject and object – of humankind and technology. The moral load of technology, the technologically mediated character of human freedom, and all the ways that people express their humanity through relationships with technology – all of this is rendered invisible by a modernistic metaphysics which radically separates subjects and objects and diametrically opposes them. However, what has hitherto remained absent in a non-modernistic or amodern perspective of the type proposed by Latour is a more detailed concept of humanity as interwoven with technology in this way, and an ethics to replace the unilateral rejection that is characteristic of the classic critique of technology.
We must develop a concept of humankind which goes beyond the 'autonomous subject' that wants to be purged of all outside influence, and we must develop an ethics that goes beyond safeguarding this purity and which looks further than the risks, the violations of privacy, and the other threats that 'technology' poses to 'humanity'.


2. ANTHROPOLOGY

Human nature as the limit?

Surprisingly enough, technology has always played a large role in philosophical anthropology – the domain of philosophy that occupies itself with knowledge of mankind. A core concept here is that we come into the world as imperfect beings, and have to cope as best we can by means of technology. We are Mängelwesen, as Gehlen (1940) put it, with a nod to Herder. Because humans have no specialized organs or instincts, they cannot survive long in a natural environment. We have to supplement ourselves in order to continue to exist; and for this reason the relationship between the human organism and technology has always had an important place in philosophical anthropology.

Ernst Kapp's Grundlinien einer Philosophie der Technik (1877) was the first study to subject this relationship to closer scrutiny. His central thesis was that of 'organ projection': consciously or unconsciously, technologies were the projections of human organs. A hammer was the projection, in the material domain, of what the fist is in the organic domain; a saw was a projection of human teeth. The telegraph network that was being constructed in Kapp's day was the projection of the nervous system. Kapp's position amounts to an inverted Cartesianism: where Descartes had sought to understand the organic in terms of the mechanical, Kapp does exactly the opposite, explaining the mechanical world in terms of the organic, and technology in terms of nature. We create a material world of technology by externalizing aspects of ourselves – and in using this technology we discover more and more about ourselves.

The relationship between the organic and the technological was subsequently elaborated in more detail by Hermann Schmidt. He distinguished three stages in the development of technology (Schmidt 1954). Kapp's analysis was concerned with the first stage: that of the tool. Here the motive power required is derived from human work, and human intelligence is required to use the tool for a given purpose. The second stage is that of the machine. This powers itself, but still needs to be operated by humans in order to be put to use. The third and final stage is that of the automaton. This derives both its motive power and its purposeful deployment from technology itself. In a sense, human beings are superfluous in the third stage; the automaton is physically and intellectually self-reliant.

The technological character of human existence

Gehlen then built on Schmidt's work by asking again how these technologies related to people as organic beings. He also distinguished three types of human-technology relationship: organ replacement, as a hammer substitutes for a fist; organ strengthening, as a microscope expands the capabilities of the human eye; and organ facilitation, as the invention of the wheel made it possible to move heavy objects without imposing this load in full on the human body (Gehlen 2003, 213). Gehlen also noted that the organic was being increasingly replaced by the inorganic. Technology was increasingly occupying positions that belonged to people – and according to Gehlen this was a development that could turn against humanity.

Each in their own way, these three positions depict the relationship between organic human beings and non-organic technologies. But in the light of the considerations I developed in the first part of this address, they all fall short of the mark. Contemporary technological developments, which go beyond the 'use' configuration, simply do not fall into any of these categories. Let us take the example of the deep brain stimulation currently being developed at this university. This technology cannot be understood as organ projection; which organ could possibly be projected in this case? The technology also goes beyond the tool–machine–automaton sequence. The human-technological hybrid that results from the implantation of a deep brain stimulation device would seem, rather, to herald the next step in this development: that of the cyborg – a being that is partly human and partly technological in nature (cf. Haraway 1991). The replacement of the organic by the inorganic which Gehlen dreaded is not at issue here. On the contrary, the organic is at centre stage: it merges with the inorganic simply in order to function better. While in classical philosophy the body functioned as the entirely natural borderline between the human and the technological, the newest anthropotechnologies make this borderline considerably less clear. These technologies do not 'project' the body and they do not 'strengthen' it; they merge with it, and in so doing form a new body. The human body is no more able to function as the limit of humanity than the concept of human autonomy was.

If we are to fully understand the new step that is being taken in the relationship between humanity and technology, we have to clear a large conceptual hurdle. After all, we see ourselves as 'natural' and technology as 'artificial', and it is on this basis that we experience a blurring of the distinction between humanity and technology as an encroachment on human authenticity and a departure from the 'human condition'. Classical Greece had already distinguished between technè (technology, craftsmanship) and physis (nature) as two different forms of poièsis (making): while physis makes itself, technè is a human intervention. A flower blooms of its own accord, but a building or a painting has to be created by human hands. Technology is the work of man, but mankind himself is not the product of technology.

The French philosopher Bernard Stiegler has claimed, however, that it is exactly this distinction between technè and physis that needs to be overhauled (Stiegler 1998). After all, it is as a Mängelwesen that mankind has always had to intervene technologically in nature, and in so doing has always lived in an 'artificial' environment which formed the context for human development – for human evolution, if you will. At the organic level, too, people have been closely knit with technology from the very beginning. This notion of originary technicity throws an entirely new light on the question of the limits of humanity: it shows that there has in fact never been a clear divide between humankind and technology.
The cyborg – a fusion of the mechanical and the organic – does not embody the alienation of humankind from itself, but actually depicts its basic structure. The fact that we are continuously reshaping ourselves is the very thing that makes us human. Technology is part and parcel of human nature, and recent technological developments have simply given this theme a new and more radical interpretation. We are shaping our lives not just in an existential way, but also in a biological one – something we had always done without realizing it, in Stiegler's view, but which is becoming more and more explicit because of the current pace of technological developments.

The human condition

The fundamental interconnectedness of humans and technology means that 'the human condition' is not a constant factor to which we could ethically appeal. What makes us human, both in the existential sense and in the biological sense, is historical. It has become what it is now, and it will continue to develop. This historical, rather than essentialist, character of the human condition has profound consequences. It means that none of the central dimensions of our human existence – our natality, mortality, freedom and intentionality, but also our appearance and gender – will remain the same forever.

Pre-implantation diagnostics, for instance, makes it possible to prevent the development of embryos with certain genetic properties. Quite apart from the ethical question of whether the application of this technology is desirable, it is clear that human natality is changed by the availability of this technology. To bring a child into the world who carries certain hereditary traits suddenly becomes something for which people can take personal responsibility. In fact, in extreme cases people could even be held responsible for it, as in the so-called 'wrongful life' lawsuits in which children sue their doctors, or even their parents, for the fact that they were born at all.

Pre-implantation diagnostics: towards a new human condition?

The same applies to our mortality. New technological developments in the areas of palliative care, euthanasia, and intensive care mean that mortality today is not what it was for previous generations. The end of our life is no longer something that we simply undergo, but something we have to make choices about. This is independent of any moral judgement about the desirability of technological intervention at the end of life; the simple fact of the availability of these technologies means that we become responsible.

Even human freedom and intentionality – seen so often as the crown jewels of humanity, in comparison to (some) animals and plants – are subject to continuous technological change, as is demonstrated by the deep brain stimulation example. This technology uses a neuro-implant to impart electrical signals directly to someone's brain, and thereby influence their intentionality. A famous case described in the Dutch medical journal Nederlands Tijdschrift voor Geneeskunde recounts how the condition of a patient suffering from Parkinson's disease improved markedly after DBS (Leentjens et al. 2004). But while the symptoms of Parkinson's disease were ameliorated, his behaviour also changed, and in uninhibited ways that were completely unfamiliar to his family and friends. He took up with a married woman, bought her a second house and a holiday home abroad, bought several cars, was involved in a number of traffic accidents, and eventually had his driving licence taken away. The man had no idea that his own behaviour had changed – until the DBS was switched off. But at that moment his Parkinson's symptoms returned with such severity that he became entirely bedridden and dependent. There appeared to be no middle way; he would have to choose between a life with Parkinson's disease, bedridden – or a life without the symptoms, but so uninhibited that he would get himself into continual trouble. Eventually he chose – with the DBS switched off! – to be admitted to a psychiatric hospital, where he could switch the DBS on and suffer fewer symptoms of the disease, but where he would also be protected against himself.

This case raises all sorts of issues about freedom and responsibility, issues which touch the very limits of what it is to be human. This man lived as two parallel personalities and was only aware of the fact while in one of them; moreover, he made the explicit choice to go on living in the one which was not aware of it. In circumstances like this it is difficult to judge whether a free choice was possible, or for that matter who the authentic person was who was doing the choosing.

In short, technology alters the human condition, and it shows, in a radical way, how historical we are. This does not mean that 'humankind' is subordinate to 'technology' in the way that the classical philosophy of technology feared; it means that we must continue to find new ways of shaping our technologically mediated existence. Even as cyborgs, we are still thrown ('geworfen') into existence, and the challenge of our lives is how to shape this existence ('ent-werfen'). The question is: how? This brings us to the domain of the last part of my address: ethics.

3. ETHICS

Towards a non-humanistic ethics

The analysis I have given so far, of an increasingly blurred borderline between humans and technology, might give the impression of being entirely ethically nihilistic. After all, if no real borderline can be drawn between humans and technology, and if we never were as autonomous and authentic as we thought, then what is the use of ethics? If technology mediates our whole existence, from birth to death and everything in between, then why would we trouble to look at technology through the lens of ethics?

If this was your impression, then I am happy to say that I can reassure you. In my view, the analysis I have presented so far, framed as it is by the philosophy of technology and philosophical anthropology, only really comes into its own as a contribution to the ethics of technology. Putting the borderline between people and technology into perspective certainly does not mean that from now on 'anything goes'. On the contrary: it means that the aim of the ethics of technology must be to give shape, in a sound and responsible way, to the relationship between people and technology.

This is going to be no simple matter, however; today's ethics of technology leaves much to be desired. It is dominated by what I have called an 'externalistic' approach towards technology (Verbeek, forthcoming 2010). The basic model is that there are two spheres, one of humanity and one of technology, and that it is the task of ethics to ensure that technology does not transgress too far into the human sphere. To stay within the paradigm of the 'limits of humanity': in this model ethics is a border guard whose job it is to prevent an unwanted invasion. However, in the light of the analysis I have presented here of the relationship between humans and technology, this model is inadequate; it draws a distinction between a 'human' domain and a 'technological' domain which is ultimately untenable. And while brain implants, tissue engineering, and embryo selection have already begun their advance, this ethics is painting itself into a corner by only being willing to consider the question of whether such technological developments are morally acceptable or not.

So in ethics, too, we must cross the boundary between subject and object. We must no longer see ethics as a matter concerning the subject alone, but as a coproduction of subject and object. Over recent years I have elaborated one possible direction for such an 'amodern' ethics by researching the moral dimensions of technology. The case I mentioned earlier, of the moral significance of ultrasound, has served as a guiding example here: the ethical decisions surrounding abortion cannot be seen as autonomous human moral choices, because they are shaped, to an important degree, by the way that technologies like ultrasound present the unborn child. However, as I have explained, the technological developments at the heart of this address involve a different configuration from that of the use of technology, and in so doing embody a new form of the interrelatedness of human subjects and technological objects. In this merging configuration, not only our existential but also our biological life is shaped in interaction with technology. And the ethical questions here are considerably more delicate.

This was well illustrated by the furore that arose ten years ago after Peter Sloterdijk gave his renowned speech Rules for the Human Zoo (Sloterdijk 1999). In this speech, Sloterdijk argued that the latest technologies offer entirely different media by which we could give shape to our humanity, media other than those of the word. While texts had always been used to tame people, new technologies were making it possible to breed them, and according to Sloterdijk it was high time to start pointing these new possibilities in the right direction. But while philosophers racked their brains about the texts and ideas that formed people, the actual material re-creation of humanity was proceeding apace. In a provocative formulation, Sloterdijk proposed that 'rules for the human zoo' were needed; people did not live merely as conscious minds in a universe of ideas, but also as organic beings in a biotope – a 'zoo' – and it was this organic dimension of our existence which now needed our full attention.

Humanity: outdated and on sale? (Photograph © Jan Verberne, Enschede)

The German academic world was in uproar after this speech. Sloterdijk's plea that rules should be developed for human 'cultivation' was immediately associated with Nazi eugenics programmes. Simply posing the question of how best to shape the interrelatedness of humans and technology, then, had evidently been a step too far. But while intellectuals struggled to outdo each other in political correctness and in proclaiming the evils of eugenics, the ethical questions stood, and remained unanswered. This was a clear instance of the failure of the modernistic perspective on ethics; while in the real world humans and technologies are becoming ever more intertwined, ethics stands on the sidelines, busying itself with dividing up the estate.

Jürgen Habermas, for instance, who according to reports was active behind the scenes in the attack on Sloterdijk, has since published a book in which he explicitly states that genetic intervention should be allowed only for therapeutic purposes: all interventions aimed at human enhancement, such as pre-implantation genetic diagnosis and genetic enhancement, are morally unacceptable, because these technologies mean that we take decisions on behalf of others about what kind of life is worth living (Habermas 2003). By erasing the difference between the 'grown' and the 'made', these technologies attack the 'autonomous authorship of existence' and the 'moral self-understanding' of the person so 'programmed' (ibid., 52). Today's anthropotechnologies treat people not as self-actualizing subjects, but as the 'instruments of our preferences' – and this Habermas finds utterly unacceptable.

I naturally share the belief that we should respect the rights of others as far as possible, and that we should treat people as ends in themselves and not as means to an end. There must be very few people in this society who do not share this view. But it is a fiction to suppose that a society is imaginable in which people can take entirely autonomous decisions on what kind of life is worth living. The remarkable thing about technology is that it contributes continuously to the way we answer questions about the good life. Genetic intervention and pre-implantation diagnostics have added new components to an existing repertoire. These new technologies do indeed cross the border between 'growing' and 'making', between physis and technè, as Habermas states; but this does not mean that we are incapable of dealing responsibly with them.

Instead of making ethics a border guard who decides the extent to which technological objects may be allowed to enter the world of human subjects, ethics should be directed towards the quality of the interaction between humans and technology. This does not mean that every form of such interaction is desirable, nor that we are entitled to tinker with ourselves at random. I agree with Habermas that we should not passively accept every genetic 'enhancement' of human beings, and that respect for individual freedom and for human dignity must play an important role in this matter. But the distinction between 'therapy' and 'enhancement' fails to provide an appropriate vantage point. We cannot employ the criterion that we must stop at the point where the 'restoration' of an original situation gives way to the creation of a new human being; after all, the 'original situation' does not exist, and we have always used technology to create ourselves anew. So the question is not so much where we have to draw the line – for humans, or for technologies – but how we are best to shape the interrelatedness between humans and technology that has always been a hallmark of the human condition.
We need an ethics that does not fixate on the issue of whether a given technology is morally acceptable or not, but which looks at the quality of the life that is lived with technology. I should like to close my address with a proposal for just such an ethics.


The good life

In elaborating a non-modernistic ethics it is useful to follow the approach that was adopted in classical antiquity, which was obviously, by definition, non-modernistic. At the core of classical ethics is the concept of 'the good life'. This had not so much to do with the question of 'how I should behave', as a moral subject in a world of objects, as with the question of how to live. The good life was directed by aretè – a term frequently translated as 'virtue', but which is better rendered by the word 'excellence'. Ethics, then, was about mastering the art of living.

Michel Foucault has shown that the ethics of the good life revolved around the shaping of one's own subjectivity. Foucault's research was directed specifically towards the ethics of sexuality, and he demonstrated that in classical times this ethics did not boil down to adherence to commandments and prohibitions but to finding the best way of dealing with lust and passion. Passions impose themselves on us, so to speak, and ethics was about choosing not to follow these passions blindly but to establish an open relationship with them: finding an appropriate use for them. Steven Dorrestijn has argued that an ethics of technology could look similar. When technological means force themselves upon us incessantly, the art of living in a technological culture is the art of shaping our own mediated subjectivity. This ethics of self-constitution offers – not only in the use of technology, but also in the configuration of its merging with us – a fruitful alternative to existing ethical positions.

This approach also gives the ethics of self-constitution a very concrete meaning. Its central question becomes: what do we want to make of human beings? For some, this question appears to be an expression of pure hubris: the overweening pride and recklessness to think that we should be allowed to tinker as we like with human nature. But though this might look like overconfidence, in fact what it amounts to is an assumption of responsibility. In fact, it is the very refusal to take these technologies seriously – their categorical rejection – which marginalizes an ethics from the outset. The technological developments themselves continue to move on, and squeaky-clean ethicists who do no more than grumble on the sidelines miss the opportunity to contribute to the responsible development and the responsible use of these technologies. The world is already full of antidepressants, Ritalin, amniocentesis, prostheses and deep brain stimulation; it is high time that ethics moved on from considering simply whether or not these are acceptable and started addressing the issue of the best way to embed such technologies in our society.

The principal question in the ethics of self-constitution is this: what is a good human life? When we allow technology to be accompanied by this ethical question, instead of setting it at odds with ethics, it becomes possible to pose explicit questions about those aspects of human existence that are affected by technology, and to decide which considerations might therefore be relevant. Pre-implantation diagnostics, for instance, can help to alleviate suffering, because serious diseases can be detected before an embryo develops any further.
At the same time, the existence of this technology can affect social norms, in that people become increasingly responsible for the birth of a child with a serious disease – as is already the case for parents of babies with Down's syndrome. Deep brain stimulation, as we have seen, can have far-reaching effects on personality, effects which can even lead to people having different views and making different choices than would have been the case in the absence of this technology. These are more than just 'side effects'; the use of DBS can mean that a person consciously elects to become a different person, thereby intervening materially in their own freedom and intentionality.

When we direct attention towards the quality of these human-technology configurations, it immediately becomes clear why the ethics of technology needs to be closely connected to philosophical anthropology. A good ethical discussion of contemporary technology has to be closely knit with a philosophical-anthropological analysis of the relationships between humans and technology, and of the impact of technology on human subjectivity. Thinking about the question of what we want to make of ourselves thereby becomes a way of taking responsibility for the technology currently under development: responsibility for our own existence, but also for that of others. Responsibility for the design of life with technology. Responsibility for a good way of being human. A more detailed investigation of these responsibilities will form an important part of my own research in the years to come.

The Übermensch

Lastly, these considerations throw new light on the image of Nietzsche's Übermensch. Nietzsche's position that we must 'create something beyond ourselves' is not a plea for the creation of some sort of Superman that will sneer at today's humans as if they were no more than pathetic creatures. The Übermensch is the person who takes full responsibility for his or her own existence – an existence that is formed in relation to other people, to social structures, and to technological developments. With respect to anthropotechnology this means that we must move beyond the current debate between supporters and opponents of 'human enhancement', and have ethics address the question of how we can best give form to ourselves with these technologies.

It is exactly this openness towards the interwoven nature of humans and technology, and a continuous readiness to engage with it in all its forms, which will form the foundation of ethics. An ethics that is closely interlaced with anthropology. An ethics that equips designers to ask the right questions when developing new technologies – whether these are anthropotechnologies, or technologies that show familiar 'use' configurations. An ethics that also equips people to interact with technologies in new ways to give form to their existence and to their lives with others. It is the very fact that we can shape ourselves which makes us human. The Übermensch is the human who has learned to deal wisely with that power. This is exactly what is being asked of us now, in the technological culture in which we live.

Ethics as guidance for technology

Rector, ladies and gentlemen, these are the considerations which outline the space within which my research will take place over the coming years. In the research projects in which I am currently engaged, I intend to study the relationship between humans and technology in more detail, and to give clearer form to the guiding role for ethics which I have in mind. In doing so, incidentally, I will not limit myself to the anthropotechnologies which blur the physical borderline between humans and technology, but will continue all my existing research into human-technology relationships and the moral significance of technology.

For instance, I am currently working, with great pleasure and inspiration, together with PhD student Steven Dorrestijn on the IOP project Design for Usability, in which we are collaborating with Industrial Design colleagues in Twente, Delft and Eindhoven to study the relations between products and users. Steven and I have been looking in particular at the impact of products on user behaviour and its ethical aspects. How can designers best anticipate this impact? What is the wisest way to shape the behavioural influence that a product invariably exerts? What kind of subjects emerge from the impact of these products, and what do the most desirable human-technology configurations look like?

On the 1st of October the new MVI project Telecare at Home was launched, in which Nelly Oudshoorn, Val Jones and I will work together in the area of telemonitoring in care settings, and within which Asle Kiran will be doing postdoctoral research. The project is aimed at understanding the impact of telemonitoring on the nature and quality of care provision and on patient experience of this care, with a view to enriching the ethical dimension of the design, application and use of these technologies. Katinka Waelbers' doctoral project on the technological mediation of responsibility, which I am supervising together with Tsjalling Swierstra, is another project I would like to mention here – along with those of Nynke Tromp at TU Delft on Design for Society, and of Hanneke Miedema at Wageningen University on the design of sustainable animal production systems. The relationship between humans and technology lies at the heart of all these projects, and they all give special attention to the ways in which good design practices can anticipate this relationship.

My own research into the 'limits of humanity' is concentrated in the VIDI project I am currently working on together with PhD student Lucie Dalibert. This project is a study of the philosophical-anthropological and ethical aspects of human enhancement technologies. We are focusing our attention on philosophical theorization on the one hand, and on contributing to the identification and answering of ethical questions during the design of such technologies on the other. How are we to better understand and conceptualize the increasing merging of humans and technology? And how do we ensure that this merging takes the best possible form?

In all these projects I intend to bring about a connection between philosophical anthropology and ethics. The central questions are always: what configurations of humans and technologies are at stake here? What would be desirable forms for this technology to take? And what would good practices of design and use look like? Linking insights into the nature and structure of human-technology relationships with ethical reflection is of crucial importance in these investigations. In this way I hope to further articulate what the philosophical accompaniment and guidance of technological developments could entail – both here, at the University of Twente, and elsewhere.

Acknowledgements

I should like to close with a word of thanks, first and foremost to those without whom I would not be standing here today, and who have shaped me as a person and an academic. My PhD supervisor and teacher Hans Achterhuis, to whose warm personality and inspiring, stimulating presence I owe so much: I can only hope that I may play a comparable role for others in the future. I also want to thank Pieter Tijmes and Petran Kockelkoren, who introduced me to philosophy as a student, and who lit the philosophical fire within me. From them I learned the rigour of philosophical writing, but also the importance of making philosophy a public activity.


I should like to thank the Board of Governors of the University of Twente for the confidence they have placed in me. Rest assured that I shall continue to exercise my profession with enthusiasm. I am proud to be able to work at a university that has always stood for the importance of seeing technology in its social context. I shall make every effort to contribute to this profile, by continuing to link my work with the many fascinating developments taking place in the university's technical faculties.

I thank the Dean of the Behavioural Sciences faculty for the trust he has placed in me, and for the fruitful way in which, for many years, I have been able to collaborate with him as the director of the Philosophy of Science, Technology and Society programme. Hubert: the good life is a regular topic of our conversations, though in a less abstract sense than I have just described, and it is in this sense that I look forward to our future collaboration as well.

I would like to thank Philip Brey, chairman of the Department of Philosophy, and all my departmental colleagues for our inspiring collaboration. I consider it a privilege to be part of such a large group of people all concerned with the philosophy of technology, and with a real interest in each other's work. I should like to make particular mention of our research group, Philosophical Anthropology and Human-Technology Relations. Petra Bruulsema, a rock-steady departmental member for as long as anyone can remember, has already dubbed the group 'The Black Hand Gang'. Steven, Petran and René, and now also Ciano, Asle and Lucie: I hope we continue to break new ground in the philosophy of technology, and to get our hands dirty doing so!

The many other colleagues with whom I am delighted to work are too numerous to name here, but I would like to thank them all the same: my colleagues in the Behavioural Sciences faculty and in the Centre for Telematics and Information Technology; everyone working with me in the Philosophy of Science, Technology, and Society master programme; my colleagues in the IOP project Design for Usability and in the MVI project Telecare at Home; and last but not least, everyone at the 3TU Centre for Ethics and Technology, under the inspirational leadership of Jeroen van den Hoven.

A special word of thanks is due to my parents, who set me so lovingly on the road of life and who are now so close to the end of that road. Mother: when the news of my appointment arrived, it seemed impossible that you would live to see my inauguration – but again and again you have found the strength to stay one step ahead of the disease, with the zest for life and the care you have always had. Father: your disease prevents you from being here today in person, but you are here, not just because a video camera is recording this, but because my philosophical disposition comes from you and was carefully fed and encouraged by you.

Dear Levi, Domien and Micha, it is wonderful to have your cheerfulness and warmth around me every day, and to be shown the world through your eyes. Take today: Micha thinks I look most like a penguin, and when my gown was delivered Domien ran upstairs to put on his Zorro cape and pose next to me. At the same moment Levi called out: "Now I know what the letters 'Prof. Dr. Ir.' stand for – Professor Dokter in zijn jurk (Professor Doctor in a dress)!" All of you continually put my life into the right perspective, and I'm very happy you do.


My dearest Annette, how special it is that I can give this address in the very same place where we were married eight years ago. If I have learned anything about the limits of humanity, then I have done so at your side; for life together with you is limitless glory. Quod dixi dixi.

References


Aarts, E. and S. Marzano (2003). The New Everyday: Views on Ambient Intelligence. Rotterdam: 010 Publishers.
Agamben, G. (1998). Homo Sacer: Sovereign Power and Bare Life. Stanford, CA: Stanford University Press.
Bostrom, N. (2005). 'In Defence of Posthuman Dignity'. Bioethics, Vol. 19, No. 3, pp. 202-214.
Brijs, S. (2005). De Engelenmaker. Amsterdam: Amstel Uitgevers.
Coolen, M. (1992). De machine voorbij: over het zelfbegrip van de mens in het tijdperk van de informatietechniek. Meppel/Amsterdam: Boom.
Dorrestijn, S. (2006). Michel Foucault et l'éthique des techniques: Le cas de la RFID. Nanterre: Université Paris X (Mémoire).
Ellul, J. (1954). La Technique ou l'Enjeu du siècle. Paris: A. Colin.
Foucault, M. (2006). De woorden en de dingen. Amsterdam: Boom {1966}.
Freud, S. (1989). Inleiding tot de Psychoanalyse. Meppel: Boom {1917}.
Gehlen, A. (1940). Der Mensch. Seine Natur und seine Stellung in der Welt. Berlin: Junker und Dünnhaupt.
Gehlen, A. (2003). 'A Philosophical-Anthropological Perspective on Technology'. In: R.C. Scharff and V. Dusek (eds.), Philosophy of Technology: The Technological Condition. Oxford: Blackwell, pp. 213-220.
Habermas, J. (2003). The Future of Human Nature. Cambridge: Polity Press.
Hage, J.C. (2004). 'Wrongful life en rechtswetenschap'. In: E. Engelhard, T. Hartlief and G. van Maanen (eds.), Aansprakelijkheid in gezinsverband. Den Haag: BJu, pp. 221-250.
Haraway, D. (1991). 'A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century'. In: D. Haraway, Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, pp. 149-181.
Heidegger, M. (1927). Sein und Zeit. Tübingen: Max Niemeyer Verlag.
Heidegger, M. (1954). 'Die Frage nach der Technik'. In: Die Technik und die Kehre. Stuttgart: Verlag Günther Neske.
Houellebecq, M. (2005). De mogelijkheid van een eiland. Amsterdam: Arbeiderspers.
Ihde, D. (1990). Technology and the Lifeworld. Bloomington/Minneapolis: Indiana University Press.
Kapp, E. (1877). Grundlinien einer Philosophie der Technik. Zur Entstehungsgeschichte der Kultur aus neuen Gesichtspunkten. Braunschweig: Verlag George Westermann.
Latour, B. (1991). Nous n'avons jamais été modernes. Paris: La Découverte.
Latour, B. (1994). 'On Technical Mediation: Philosophy, Sociology, Genealogy'. Common Knowledge 3, pp. 29-64.
Leentjens, A.F.G., F.R.J. Verhey, V. Visser-Vandewalle and Y. Temel (2004). 'Manipuleerbare wilsbekwaamheid: een ethisch probleem bij elektrostimulatie van de nucleus subthalamicus voor ernstige ziekte van Parkinson'. Ned Tijdschr Geneeskd 148, pp. 1394-1398.
Lemmens, P. (2008). Gedreven door techniek: De menselijke conditie en de biotechnologische revolutie. Box Press Uitgeverij.
Nietzsche, F. (1985). Aldus sprak Zarathustra: Een boek voor allen en voor niemand. Amsterdam: Wereldbibliotheek {1883-1885}.
Plessner, H. (1928). Die Stufen des Organischen und der Mensch: Einleitung in die philosophische Anthropologie. Berlin/Leipzig: De Gruyter.
Schmidt, H. (1954). 'Die Entwicklung der Technik als Phase der Wandlung des Menschen'. Zeitschrift des VDI 96 (1954), no. 5, pp. 118-122.
Sloterdijk, P. (1999). Regeln für den Menschenpark: Ein Antwortschreiben zu Heideggers Brief über den Humanismus. Frankfurt/M: Suhrkamp.
Stiegler, B. (1998). Technics and Time, 1: The Fault of Epimetheus. Stanford, CA: Stanford University Press.
Swierstra, Tsj. (2000). Kloneren in de polder: Analyse van het maatschappelijk debat over klonen en kloneren in Nederland. Den Haag: Rathenau Instituut.
Valkenburg, G. (2009). Politics by All Means: An Enquiry into Technological Liberalism. Delft: Simon Stevin Series in Philosophy of Technology.
Verbeek, P.P. (2005). What Things Do: Philosophical Reflections on Technology, Agency, and Design. University Park, PA: Penn State University Press.
Verbeek, P.P. (2008). 'Cyborg Intentionality: Rethinking the Phenomenology of Human-Technology Relations'. Phenomenology and the Cognitive Sciences 7(3), pp. 387-395.
Verbeek, P.P. (forthcoming 2010). Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press.
Vries, G. de (1999). Zeppelins – over filosofie, technologie en cultuur. Amsterdam: Van Gennep.

