This post is the result of a discussion between David (a friend of mine and an engineering student) and myself. It ought to be read alongside his post on the same topic, which takes quite a different perspective on many of these matters. As he states, it is quite unlikely that, even between us, we will cover every point either of us made, though I will certainly make an effort to do so.
I now forget how the debate arose, but its main theme ended up as follows: Are modern computers conscious/self-conscious to any degree and what is it exactly that makes them so, or indeed differentiates them from humans in this respect? It began as a rather scientific/technological discussion but turned out to involve a good deal of metaphysics (in which neither of us can claim to be well versed, though we certainly learnt much in the process).
To start, I should note that where David refers to intelligence, I more often than not mean consciousness. In my opinion, intelligence of certain kinds is something computers already possess to varying degrees: their ability to perform calculations and to analyse some forms of data far surpasses that of humans, whereas they are not nearly so adept at, for example, holistic analysis or creative thinking.
Before I get to the core of the discussion, it is important to first (try to) define a few terms. There is no general consensus on the exact meaning of consciousness, but the introduction of the Wikipedia article offers a good idea of what I refer to when using the word. Self-consciousness (or, more accurately, self-awareness) is a much easier concept to define, if still not a concrete one: if anything can actively identify itself in a mirror (whether a physical or a conceptual one), then it can be deemed self-aware. Several animals other than humans, such as chimpanzees, dolphins, and elephants, have been labelled self-aware on the basis of this test.

Now the question is whether computers can currently demonstrate this. An example given by David was a computer recognising its existence within a network by pinging itself via a remote device (if I remember correctly). His argument is that if the computer receives a successful reply, then it can clearly determine that it exists (the remote device acting as the mirror in this example) and is therefore self-aware. I dispute this argument primarily by asking whether the computer actively/explicitly realises that it exists. First, consider that it would be easy enough to fool the computer into believing that it does not exist on the network by returning a fake reply (or none at all). Also, in effect the programmer is telling the computer that it exists if it receives a successful reply, which fails to meet my criterion for self-awareness; in a way, the programmer is imparting his own realisation of the computer’s existence into it. (I sketch such a check in code below.)

Humans, on the other hand, can actively come to the conclusion that they exist, even without sensory information. They need not be told that they exist, but only to think about it; the famous statement by René Descartes, “Cogito, ergo sum” (“I think, therefore I am”), can be seen as proof of this. The same argument applies to the mirror test for self-awareness in animals, although the difference there is that observers have to judge (albeit with high confidence) whether the animal has shown signs of self-awareness. David disputed this explanation, suggesting that a person raised without any contact with others would not have the ability to come to the conclusion of their own existence. However, the situation then becomes similar to that of other intelligent, self-aware animals which have not been trained in any meaningful way. I do concede that it is theoretically impossible to be sure of self-awareness in anything other than yourself on the basis of “Cogito, ergo sum”, though the fact that humans and animals have not been explicitly/consciously programmed gives a good indication that their self-awareness arose independently.
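To make David’s example concrete, here is a minimal sketch of such a check (my own construction, not David’s actual code: the mirror host, port, and the exists_on_network helper are all hypothetical, standing in for whatever remote device actually answers). Note how both of my objections appear directly in it:

```python
import socket

# Hypothetical "mirror" service that simply echoes whatever it receives.
# The host name is a placeholder; port 7 is the classic echo protocol.
MIRROR_HOST = "mirror.example.com"
MIRROR_PORT = 7

def exists_on_network() -> bool:
    """Conclude that this machine 'exists' if the mirror echoes our ping."""
    try:
        with socket.create_connection((MIRROR_HOST, MIRROR_PORT), timeout=5) as s:
            # Our own address on this connection: the closest thing to a
            # reflection that the machine ever handles.
            print("our address, as used to reach the mirror:", s.getsockname()[0])
            s.sendall(b"ping")
            reply = s.recv(4)
        # The programmer, not the machine, has decreed that a matching reply
        # "means" existence; a spoofed or absent reply flips the verdict.
        return reply == b"ping"
    except OSError:
        # No reply: by this criterion, the machine "does not exist".
        return False

if __name__ == "__main__":
    print("I exist!" if exists_on_network() else "I cannot conclude that I exist.")
```

The machine never realises anything here; it merely evaluates a condition whose significance was fixed in advance by its programmer, which is precisely my objection.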
This whole argument leads on to the (wholly philosophical and non-empirical) issue of where consciousness is derived from. It is believed (or has at least been proposed) by some that all biological organisms have a certain level of consciousness (though not necessarily self-awareness). For example, the cells that compose an organism could be seen to have a certain level of consciousness (by the definition given earlier), while the whole organism could be seen to have a greater one. Similarly, the Gaia hypothesis (especially as presented by Isaac Asimov in his Foundation series) proposes that the Earth has a supreme level of consciousness, greater than the sum of its component consciousnesses (including humans and other organisms). It goes as far as to suggest that inanimate matter has a minute amount of consciousness, though I suspect this was a unique idea for the sake of fiction. This theory can be summarised by the statement “the whole is greater than the sum of its parts”, which comes up in various places but I feel is perhaps most appropriate here.

As I warned, the topic has now diverged completely from empirical science, since no-one currently knows a way to measure consciousness quantitatively (or even to define it in a concrete way). Continuing nonetheless: a computer may be said to derive its consciousness from its programmers, from within itself, or from a combination of the two. Humans may be considered to derive their consciousness internally (the neural networks of the brain are created from inanimate matter via biological growth and are developed through learning). Whether an entity derives its consciousness from a few other highly conscious entities (such as programmers) or from a multitude of entities with very low consciousness (such as cells and micro-organisms) could perhaps define what is to be considered independently conscious (though there is clearly a grey area here). We did not pursue this particular area much further as it was becoming horrifically abstract, though I think we both agreed that it was an interesting idea.
The final point David makes in his post regards the increase in the complexity (another loosely defined concept) of an entity (or system) as it tries to completely understand itself. His claim is that this complexity will eventually converge to a finite value as the system grows indefinitely in order to understand itself. (See his post for a proper explanation.) A purely hypothetical question, but an intriguing one nonetheless. This view seems intuitively wrong to me: specifically, it seems the system would have to re-comprehend its entire self each time its complexity (and therefore its level of consciousness) increases, since fully understanding the original system and the parts added to it would not imply an understanding of the overall system (if you subscribe to the view that “the whole is greater than the sum of its parts”).
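One way to picture the disagreement (a toy illustration of my own, not David’s actual argument): suppose each successive round of self-modelling adds an overhead that is a fixed fraction $r$ of the previous round’s, starting from an initial complexity $C_0$. If additions can be understood in isolation, so that $0 \le r < 1$, the total complexity converges as a geometric series:

$$C_\infty = C_0 \sum_{k=0}^{\infty} r^k = \frac{C_0}{1-r}$$

If, however, each addition forces a re-comprehension of the whole, as the holist view suggests, then the per-round overhead never shrinks ($r \ge 1$) and the sum diverges, which matches my intuition above.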
I don’t think I can comment as well on my general philosophical views as David has (though take what has been offered already). Looking briefly at some of the terminology, however, I seem largely to subscribe to the philosophies of holism and emergentism, which appear to contradict his views, as I might expect. (Why else would I be writing a post on the same topics?) Still, I subscribe very much to empiricism, with the small caveat that our knowledge of metaphysics is as yet too small and basic to apply empiricism there too. (As a student of physics, I would be worried if I didn’t!)
Now that I’ve finally made this post (after much goading to fulfil my promise), and David has likewise made his own, I’m hoping that this debate is settled for the time being, but that these posts stand well as records of our philosophical views, to which we may return at some time.