Bernard Stiegler

23 November 2018, Institute of Ereignis, Shanghai

For anyone wanting a precise analysis of what we refer to today as artificial intelligence, which seems now to have become the horizon of everyday life (and I will return to this question), it is necessary to begin from the following postulate: all noetic intelligence is artificial. This implies that there is such a thing as non-noetic intelligence. And it also implies that, generally speaking, noetic life is intelligent in a specific way, which is that of artifice.

I claim that there is non-noetic intelligence in the sense implied when Marcel Détienne and Jean-Pierre Vernant talk about metis, but also in the sense invoked by Kevin Kelly when he wrote an article that presented forms of life, of whatever kind, as forms of intelligence, where each of these forms has evolved in a different way over the last three billion years or more. Speaking in this way is for Kelly a matter of opposing what he calls the myth of super-intelligence, but it is also to speak against Descartes: it is to posit that life is never just machinic – and here we should also mention Georges Canguilhem’s ‘Machine and Organism’.1

Intelligence, here, whether in its ‘natural’ or ‘artificial’ forms, but I prefer to say in its organic or organological forms (I will clarify this a bit later on), is the accomplishment of a goal or an aim. There is no necessity at all for this goal to be a conscious representation, as Francisco Varela shows in a drawing in which he ridicules this kind of ‘representational’ hypothesis. What is involved with noetic intelligence, however, is, in principle, access to consciousness, insofar as it has the capacity to access what Heidegger called the as such – Heidegger being himself someone who deconstructs the metaphysics of representation. Intelligence, whether noetic or otherwise, is in a general way what orients behaviour: it constitutes an animation, as Aristotle will say in On the Soul, in which vegetative, sensible and noetic souls draw intelligence from what he calls the ‘first unmoving mover’, and where the intelligence that is the soul is above all movement, which is also to say, phusis.

In order to precisely distinguish (without opposing) the organic (vegetative and sensible) forms of intelligence from the organological (noetic) forms, we must firstly recall what Aristotle remained unaware of, namely that, some three million or so years ago, there arose the conditions for what would later, some forty thousand years ago, become noetic intelligence, in which Georges Bataille would recognize himself, and in relation to which he said: here it is we who begin, those who painted these animals are our ancestors, our father, this is evidently so, it is obviously so, and recognizing this evidence is a key feature of noesis itself. Here is exactly what he writes in Lascaux, or The Birth of Art:

It is ‘Lascaux man’ about whom we can surely say, and for the first time, that, making works of art, he evidently resembled us, he was one of us, our fellow man.2

Bataille will go on to say that the kind of intelligence involved in the work of art is the intelligence of play – I will not develop this point now, but it is fundamental in order to understand what it means for the question of noetic imagination (and I will discuss this next year in Hangzhou). Having said this, we can begin to understand why what we today refer to as artificial intelligence is a continuation of the process of the exosomatization of noesis itself, such as it begins firstly with fabricating exosomatization, making things by hand, and continues with hypomnesic exosomatization, as that which makes it possible to access lived experiences of memory and imagination, which have accumulated since the origin of the play of works, as Bataille considers them, and which engender, in passing through writing, instruments of observation, calculating machines whose principles were established by Leibniz, and analogue technologies, which form the basis of the culture industries – and here the question of their role in the ‘post-truth era’ arises as never before. All of this then, writing, telescopes, calculating machines and the analogue recording technologies of the culture industries, all of this has generated a perpetual and techno-logical evolution of what Kant called the faculties – whether they are lower, that is, functions of noesis, or whether they are higher, and thereby constitute faculties in the sense we refer to them in universities, and which regularly enter into conflict.

Why do such conflicts arise? Because there are exosomatic evolutions of the hypomnesic supports of noesis, and this generates tensions – which can be social as well as noetic.

Two years ago in Nanjing, I tried to show (and I will come back to this next year) that what Kant called the lower faculties – intuition, understanding, imagination and reason, which are put to work by the higher faculties that are those of knowing, desiring and judging – are functions that are produced through the process of their exteriorization, which Hegel was already able to see, but without truly seeing it. It is Marx who will be the first to understand this, and it will then be reformulated by Lotka, who will do so from a biological standpoint and by coining a new term: exosomatization, or exosomatic evolution and exosomatic organs. Here, the intelligence of the body is produced in being supplemented, inasmuch as it makes possible an exteriorization of experience, and the constitution of what I call (using Husserlian terminology) collective secondary retentions: the latter are retained in individual memories, but they are retained there collectively, forming what we also refer to as knowledge, which can be transmitted from generation to generation, and which metastabilizes the conditions of life – these conditions of life being negentropic, that is, struggling against the entropic effects of human behaviour, which is something we discover in the Anthropocene, through the analysis of what the IPCC calls anthropogenic forcings, which fundamentally threaten life, and in particular noetic life – life that is worthy of being lived by a noetic soul. All of this is what leads to a life that is unworthy of noesis – and ultimately becomes incompatible with life as a whole, as the 15,364 signatories of a recent scientific text declare.

It is in this context, at the end of the Anthropocene – and these scientists indeed tell us that it is reaching its end, which must then also be ours – that we see the advent of artificial intelligence as an ordinary reality of everyday life. What then should be the function of what is today called ‘artificial intelligence’, where this refers to a technology of reticular, ubiquitous super-computing that automates the majority of processes by which behavioural flows are managed, where this has fundamental effects on both modes of production and modes of exchange in all their forms, and where, in its current stage, these have been transformed into functions of consumption?

What we today call artificial intelligence is not what was on the horizon of the Macy conferences, the project of which was formulated in Dartmouth by Marvin Minsky with Claude Shannon, Allen Newell, Herbert Simon, and so on. It is a reticular AI, based on what Clarisse Herrenschmidt has called reticular writing, which is linked to the networking together of three and a half billion individuals – via an apparatus that becomes exospherical, constantly evolving, and now based on the ‘platform capitalism’ described by Benjamin Bratton – and that makes possible the production and exploitation of what I call ‘digital pheromones’.

The possibility of such digital pheromones was in a way already raised by Norbert Wiener in 1948, when he worried about the possibility that cybernetics could give rise to what he called a ‘fascist ant-state’.3 That the human could regress to the stage of the ant is a possibility contained in the fact that this human abandons his knowledge – his knowledge being the path by which he must struggle against entropy. That such a possibility exists, that is, that cybernetic exosomatization can generate an industrial artificial stupidity, is the question that must guide us here. Intelligence becomes artificial as soon as it is made possible by artefacts and itself makes these artefacts possible – thanks to that astonishing faculty of dreaming which, according to the palaeo-anthropologist Marc Azéma, characterizes the human being: Azéma posits that man dreams, as do animals, but that he also does so by producing, drawing and writing, by day-dreaming, such ex-pression being the beginning of a process of exo-somatization by which man realizes his dreams. The faculty of dreaming is, then, here the faculty of the realization of dreams, and such is noetic intelligence according to Paul Valéry.

But as soon as it becomes artificial, such intelligence can also generate an artificial stupidity: the pharmakon that is the artifice thereby engendered can lead to regression and to self-destruction. Such artificial stupidity is what Alvesson and Spicer describe as ‘functional stupidity’, in a well-known article that has since become a book – and it also generates what Tijmen Schep describes as ‘social cooling’, which John Pfaltz analyses as an increase in the rate of entropy in social networks. This artificial stupidity, therefore, is also a technique for the production of lures and traps to in some way deceive humans, but here, beyond stupidity, we must also refer to the necessity of putting faults or accidents into music, as was the case with software created by IRCAM, which produced only absolutely ‘right’ notes – for example for the Queen of the Night aria in Mozart’s The Magic Flute – but the ‘music’ this produced was unbearable.

And this is an issue we also see with trading software – which raises the question of the virtues and of the necessity of imperfection, that is, the necessity of negentropic locality – which we must interpret via John Stuart Mill and the necessity of diversity. Artificial stupidity also means cognitive overflow syndrome, that is, the functional destruction of attention, or, again, it is what worries Adam Smith in 1776 in The Wealth of Nations.

The possibility of artificial stupidity is what characterizes artificial intelligence, which, as we have already said with Kevin Kelly, can be distinguished from natural intelligence. Natural intelligence cannot commit acts of stupidity: it can only fail, which means, ultimately, to die. Taking up a thesis of Nick Bostrom – but we could also refer here to Bergson, who thinks intelligence in terms of a relation to action – Kelly himself argues that life in general amounts to a series of conquests of intelligence. He argues this while criticizing the perspective of those he calls ‘singularitans’, who maintain five assumptions which, when examined closely, are not based on any evidence.4

The first of these ‘misconceptions’, and the most common, begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence – as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius – almost as if intelligence were a sound level in decibels […] with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it.

So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view. […] A more accurate chart of the natural evolution of species is a disk radiating outward, like this one first devised by David Hillis at the University of Texas and based on DNA. […] Every one of these species has undergone an unbroken chain of three billion years of successful reproduction, which means that bacteria and cockroaches are as highly evolved as humans.

Into this ‘mandala’, however, we must also introduce the perspective of Alfred J. Lotka, for whom ‘natural’ intelligence becomes ‘artificial’, which is also to say, discovers the possibility of its own stupidity, as highlighted by Arnold Toynbee – when the morphogenesis that is endosomatic organogenesis continues outside wet tissue, and does so as exorganogenesis. The latter generates exosomatic organs that modify the trajectory of motor acts, such as occurs with the flint tool, some of which are arrows capable of travelling at 350 kilometres per hour, and today’s rockets, which launch themselves to orbital velocity – twenty-eight thousand kilometres per hour – are the continuation of this capacity in a direction that opens up exospherical spaces. But in addition, these exosomatic organs also engender accumulations of psychic retentions, which thereby become collective and constitute what Roger Bartra calls an ‘exocerebrum’ and what Karl Popper calls World Three and objective knowledge.

This third world, however, is the world of what I call hypomnesic tertiary retentions, a world composed not just of exosomatic organs but of retentional accumulations, and where Lotka shows that these are orthogenetic, that is, bearers of non-Darwinian selection processes, making possible the establishment of scalar relations between different orders of magnitude, something completely different from, for example, relations between cells, organs, bodies, milieus and so on. Here we should really turn to Durkheim and to his book, The Elementary Forms of the Religious Life, in which he studies totemism, but I no longer have time to do so now. But we should also note that échelle as ladder, which is also the ladder dreamed of by Jacob, who has a primordial role in Judaeo-Christian monotheism, then becomes échelle as scale, and technologies of scalability are at the heart of those ‘economies of scale’ characteristic of the industrial and capitalist stage of exosomatization. Furthermore, platforms that utilize and develop reticulated artificial intelligence are based on specific technologies of scalability, managing multi-scale data ranging from infra-organic medical ‘nanomachines’ to exospherical infrastructures capable of handling medical data at the scale of the technosphere.

It is worth noting that it is on the basis of totemic classification that Durkheim posits that the Aristotelian and Kantian theory of the categories should be completely rethought. Now, the biosphere may be one scale located within the cosmos. But to this we must add the fact that, from the moment such changes of scale and arrangements of orders of magnitude arise, which is something that occurs with the exosomatic organs that are technical objects in all their forms (including language), from this moment, this biosphere becomes a technosphere. Within this technosphere, moreover, entropy, negentropy and anti-entropy, whose local equilibria had metastabilized over the course of three or four billion years, find themselves totally overthrown by those exosomatic organs that are pharmaka, that is, organs that can as easily increase entropy as contain it, defer it and transform it through the ‘art of living’, as Alfred North Whitehead put it. And the function of artificial intelligence should be, in this way, to minimize entropy and to increase negentropy and anti-entropy.

Artificial stupidity, then, is what persists in accelerating entropy instead of deferring it, and does so by destroying knowledge, which, alone, is capable of generating positive bifurcations. It would be entirely possible to take advantage of the analytical possibilities of algorithms in order to defer entropy. But in order to do so, it would be necessary to modify data structures, to press algorithms into the service of the constitution of deliberative scales reconstituting neganthropic knowledge, that is, dialogically transindividuated knowledge, and to make automation serve disautomatization within the framework of a new macro-economy in which value would be defined according to the increase of negentropy. In the current model, however, the criteria of value are entropic.

Behind this question, there are those of the relationships between calculability, locality, incalculability and deliberation – which is equally to say, those of the relationships between understanding, imagination and reason. In Automatic Society, I have argued that algorithms constitute a hypertrophy of the understanding – and that the latter is always artificial, and based on tertiary retentions inasmuch as they configure the schematism and the categories. These questions of epistemology and technology, of the industrial future and new macro-economic models, must all be brought together. It is precisely in order to do so that a program is currently under development in the Plaine Commune territory, in the northern suburbs of Paris. And in terms of the question of macro-economics – which is also a question of the function of knowledge and therefore of the episteme in Foucault’s sense and of epistemology in Bachelard’s sense – it is an attempt, in the epoch of algorithmic and articulated artificial intelligence, to draw conclusions from Marx’s statements about fixed capital and the general intellect in the Grundrisse.

I argued earlier that with Kelly’s model, inspired by Hillis, it is necessary to specify the conditions of the passage from natural intelligence to artificial intelligence. I would like to conclude by adding some further remarks on this point.

  1. We must think this passage both with Whitehead and with Canguilhem, with respect to biology for example, and more generally with respect to the role of knowledge in the technical form of life, and as a vital function that can be thought only starting from biology, but precisely as what requires that which is no longer only biological, and which leads Georges Canguilhem to make statements that are quite close to being post-Darwinian and very close to those of Lotka concerning orthogenesis.
  2. We must specify the question of metis and distinguish it from noesis: cognition, in the sense that this word has in the so-called ‘cognitive sciences’, is not knowledge in Popper’s sense. The passage from cognition to knowledge requires an exosomatic exteriorization and the constitution of what Leroi-Gourhan calls a third kind of memory, very close to what Popper calls World Three, and what I myself call the epiphylogenesis that forms with the accumulation of tertiary retentions. It is this question of exosomatization that Kelly completely ignores when he writes that:

We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition. We don’t really think with just our brain; rather, we think with our whole bodies. These suites of cognition vary between individuals and between species. A squirrel can remember the exact location of several thousand acorns for years, a feat that blows human minds away. So in that one type of cognition, squirrels exceed humans.

And yet:

Your calculator is a genius in math; Google’s memory is already beyond our own in a certain dimension. We are engineering AIs to excel in specific modes.

But these specific modes are only functions. It is not just a question of functions, but of faculties – if we take it as given that we must rethink the concept of faculty from the exosomatic perspective.

… the faculties are social, and not just psychic, and that is the whole issue involved in the conflict of the faculties.

Here we should read Ignace Meyerson’s Les fonctions psychologiques et les oeuvres, along with Vernant. Now, when Kelly writes that:

In the future, we will invent whole new modes of cognition that don’t exist in us and don’t exist anywhere in biology. When we invented artificial flying we were inspired by biological modes of flying, primarily flapping wings. But the flying we invented – propellers bolted to a wide fixed wing – was a new mode of flying unknown in our biological world.

When Kelly writes this, what he describes is precisely exosomatization, but he does not see it as such, and he does not see in what way it stems from the works produced during the Upper Palaeolithic, upon whose appearance Bataille meditates. ‘To build machines capable of beating humans’: this is the very goal of exosomatization. Why would we bother to make an automobile – or a bow and arrow – if these exosomatic organs were not quicker than humans? Here, however, the question is of noetic functionality. What indeed is noesis? It is what struggles against the perverse effects generated by exosomatization, but always by generating other processes of exosomatization. This is what Freud describes in Civilization and Its Discontents.

But in that case, it is not just a matter of the exosomatization of the exosomatic organisms that we have been ever since the dawn of hominization: it is also a matter of social organizations. And the latter amount to complex exorganisms, composed of the simple exorganisms that we are, which together form social groups of longer duration than the individuals who form them, as is the case for all civilizations. Such complex exorganisms are, however, prone to becoming massively anthropic, and they can therefore collapse, and today, more than ever, the role of politics consists in struggling against this pharmacological tendency.

Kelly eventually points out that the ‘Turing machine’ and ‘the Church-Turing hypothesis’ are misleading:

no computer has infinite memory or time. When you are operating in the real world, real time makes a huge difference, often a life-or-death difference. Yes, all thinking is equivalent if you ignore time.

But this indicates that what matters here are scales of time – as well as scales of space, and hence of speed.

The only way to have equivalent modes of thinking is to run them on equivalent substrates. [The] only way to get a very human-like thought process is to run the computation on very human-like wet tissue.

What is at stake in the organic tissues of humans, that is, what is at stake in their bodies, is, however, their relationship to death, where the locus of this relationship does not reside only within this body but, precisely, within what I call the noetic necromass, that is, within what Popper called World Three – which means, for example, the Trinity College library in Dublin – which is being shifted onto new substrates, a shift that requires a total reconsideration of the conditions of a new era of exosomatic noesis, conditions themselves fundamentally composed of organizations – without which it will be impossible to avoid collapse.

No thought that in fact thinks thinks like any others, and this is what points to the real challenge: anti-anthropic bifurcation is what exceeds all calculation – and the question is the function of calculation and its limit in a neganthropic field, that is, a localized field, whereas the generalization of calculation, and the totalizations to which this generalization gives rise in this or that locality, destroys this locality5 – and this locality is the biosphere itself, in its relation to the cosmos, a question that was opened in these terms by Vernadsky in 1926.

The biosphere is the condition of biodiversity. Today, the question is how to make the technosphere the possibility of a new noodiversity.

Translated by Daniel Ross.

1 Georges Canguilhem, Knowledge of Life, trans. Stefanos Geroulanos and Daniela Ginsburg (New York: Fordham University Press, 2008), ch. 4.

2 Georges Bataille, Prehistoric Painting: Lascaux, or The Birth of Art, trans. Austryn Wainhouse (Geneva: Skira, 1955), p. 11, translation modified.

3 Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society (London: Free Association Books, 1989), p. 52.

4 Kevin Kelly, ‘The Myth of a Superhuman AI’, Wired (25 April 2017), available at: <https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/>.

5 The question is: where is the limit of intelligence? Is it not clear that this is a matter of entropy – and of entropy within a finite locality? The limit is not quantitative, according to Kevin Kelly: for example, it is ‘not on a Moore’s law rise. AIs are not getting twice as smart every 3 years, or even every 10 years’. And to shift these limits, Kelly posits that ‘we should engineer friendly AIs and figure out how to instill self-replicating values that match ours’. But the question here is the categorization that is accomplished along with the algorithmic, which calls for a new ‘transcendental deduction’ of algorithmically generated categories.
