Introduction

The nature of experience is one of those deep philosophical questions on which philosophers and scientists alike have been unable to reach a consensus. In this article, I review a transhumanist variant of a basic question of subjectivity. In his classic article “What is it like to be a bat?”, Thomas Nagel investigates whether we can give a satisfactory answer to the question in his title and, owing to what he takes to be fundamental barriers, concludes that it is not something we humans can know [1]. Without wading knee-deep into an epistemological minefield, we can intuitively agree that although the bat’s brain must have many similarities to a human’s, both species being mammalian, it supports a sensory modality quite unlike any we possess. We can guess that the difference between sonar perception and our own senses could be as great as the difference between our visual and auditory perception. Yet in some sense sonar is both visual and auditory, and still it is neither visual nor auditory. It is more similar to vision, because it helps build a model of the surrounding scene; however, where stereoscopic vision is often said to yield only a “2½-D sketch” of the scene, bat sonar can build accurate 3-D models of the environment from a particular point of view. It is therefore unlike anything that humans experience, and perhaps our wildest imaginations of bat sonar experience are doomed to fall short of the real thing, precisely because it is difficult for us to conceive of a detailed and perhaps rapidly updated 3-D scene that contains no optical experience, there being no 2-D image data from eyes to be interpreted. This would likely require specialized neural circuitry. Yet despite what Nagel has in mind, it seems theoretically possible to “download” bat sonar circuitry into a human brain so that the human can experience the same sensory modality. This seems to be one of those things for which thinking alone is not sufficient. The only barrier to knowing what it is like to be a bat is thus a technological barrier, not a conceptual or fundamental one.

That being the case, we may also consider what an upload would experience, or whether it would experience anything at all, since brain uploading is a primary goal of transhumanism on which computational neuroscientists have already begun working. The question I pose is harder than Nagel’s, because the upload usually does not run on a biological nervous system, and it is easier, because the processing is the simulation of a human brain (and not something else). Answering this question is important because, presumably, the subjective experience, the raw sensations and feelings of a functional human brain, is very personal and valuable to human beings. We would like to know if there is a substantial loss or difference in the quality of experience for our minds’ digital progeny.

Brain prosthesis thought experiment

The question is also very similar to the brain prosthesis thought experiment, in which the biological neurons of a brain are gradually replaced by functionally equivalent (same I/O behavior) synthetic, electronic neurons. In that thought experiment, we ponder how the experience of the brain would change. As far as I can tell, Marvin Minsky and Hans Moravec think that nothing would change, while John R. Searle, in his book The Rediscovery of the Mind, maintains that the experience would gradually vanish. Minsky’s reasoning seems to be that it is sufficient for the entire neural computation to be equivalent at the level of electrical signaling (as the synthetic neurons are electronic), while he seems to disregard other brain states. For Searle, on the other hand, experience can only exist in “the right stuff”, which he seems to take to be a biological substrate (although one cannot be certain) [4]. We will revisit this division of views soon enough.
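
To make the “same I/O behavior” criterion concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the delay-by-one firing rule especially); it is not a biophysical model, only a picture of what functional equivalence means in this thought experiment:

    # Toy illustration of "same I/O behavior" in the brain prosthesis thought
    # experiment: if two neurons map every input spike pattern to the same
    # output spike pattern, they are functionally equivalent, whatever they
    # are made of. All names and rules here are hypothetical.

    from typing import Callable, Tuple

    SpikeTrain = Tuple[int, ...]  # e.g. (0, 1, 1, 0, 1) in discrete time bins

    def biological_neuron(inputs: SpikeTrain) -> SpikeTrain:
        """Stand-in for the original neuron's observed I/O behavior."""
        # Fires whenever it saw a spike in the previous bin (a made-up rule).
        return tuple(inputs[i - 1] if i > 0 else 0 for i in range(len(inputs)))

    def synthetic_neuron(inputs: SpikeTrain) -> SpikeTrain:
        """Electronic replacement: different mechanism, same mapping."""
        return (0,) + inputs[:-1]  # implements the same delay-by-one rule

    def functionally_equivalent(f: Callable, g: Callable, tests) -> bool:
        return all(f(x) == g(x) for x in tests)

    tests = [(0, 1, 1, 0, 1), (1, 0, 0, 1, 0), (0, 0, 0, 0, 0)]
    assert functionally_equivalent(biological_neuron, synthetic_neuron, tests)
    # Minsky's position: equivalence at this level preserves experience.
    # Searle's position: it does not, because the substrate is wrong.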

Naturalist theories of experience

In a recent interview on H+, Ben Goertzel gives an intriguing summary of his views on “consciousness”:

Consciousness is the basic ground of the universe. It’s everywhere and everywhen (and beyond time and space, in fact). It manifests differently in different sorts of systems, so human consciousness is different from rock consciousness or dog consciousness, and AI consciousness will be yet different. A human-like AI will have consciousness somewhat similar to that of a human being, whereas a radically superhumanly intelligent AI will surely have a very different sort of conscious experience.

While he does not explicitly state his views on this particular question, it seems that he would answer in a manner closer to Minsky than to Searle. Since an upload can be considered a very human-like AI, it seems that Goertzel anticipates that the experience of an upload will be somewhat similar to a human’s. He also mentions that the basic stuff of consciousness must be everywhere, since our brains are formed from natural matter.

Why is this point of view significant? The evidence from psychedelic drugs and anesthesia implies that changing the brain’s chemistry also modulates experience. If the experience changes, what can this be attributed to? Does the basic computation change, or are chemical interactions actually part of human experience? It seems that answering that sort of question is critical to answering the question posed in this article. However, it all starts with accepting that experience is natural, like a star or a waterfall. Only then can we begin to ask questions with more distinctive power.

Over the years, I have seen that neuroscientists were almost too shy to ask these questions, as if they were taboo. Although no neuroscientist would admit to such a thing, of course, it makes me wonder whether religious or superstitious presuppositions play a role in the apparent reluctance of neuroscientists to investigate this fundamental question in a rigorous way. One particular study, by Bialek and his super-star team of cognitive scientists [2], may shed light on the question. There, Bialek’s team makes the claim that the neural code forms the basis of experience; therefore changes in the neural code (i.e., the spike train, the sequence of signals that travel down an axon) change experience. That is a very particular claim, one that may one day be proven in experiment. At present, however, it is a hypothesis that we can work with, without necessarily accepting it.

That is to say, we are going to analyze this matter in the framework of naturalism, without ever resorting to skyhooks. We can consider a hypothesis like Bialek’s; however, we will try to distinguish finely between what we do know and what is hypothetical. Following this methodology, and a bit of common sense, I think we can derive some scientifically plausible speculations, to use Carl Sagan’s terminology.

The debate

Let’s rewind a little. On one side, AI researchers (like Minsky) seem to think that uploading a mind will just work, and the experience will be alright. On the other side, skeptics like Searle and Penrose try everything to deny “consciousness” to poor machinekind.

Meanwhile, Ray Kurzweil wittily suggested that when intelligent machines claim to have conscious experience, we will believe them (because they are so smart and convincing). That goes without saying, of course, as human beings are gullible enough to believe almost anything; but the question is rather whether a good engineer like Kurzweil himself would be convinced. In all likelihood, I think that the priests and conservatives of this world will say that uploads have no “souls” and therefore do not have the same rights as humans, and that none of what the uploads say matters. Therefore, you have to have very good scientific evidence to show that this is not the case. If we leave this matter to superstitious people, they will find a way to twist it beyond our imagination.

I hope I have convinced you that mere wordplay will not be sufficient. We need a good scientific theory of when and how experience occurs, and the best theory will have to be induced from experimental neuroscience and related facts. What is the most basic criterion for assessing whether a theory of experience is scientifically sound? Well, no doubt, it comes down to rejecting any kind of supernatural or superstitious explanation and seeing this matter the same way we investigate problems in molecular biology: experience is ultimately made up of physical resources and interactions, and there is nothing else to it! In philosophy, this approach to mind is called “physicalism”. A popular statement of physicalism is known as “token physicalism”: every mental state x is identical to some physical state y. That is something a neuroscientist can work with, because presumably, when the neuroscientist introduces a change to the brain, he would like to see a corresponding change in the mental state. One can think of cybernetic eye implants and transcranial magnetic stimulation and confirm that this holds in practice.
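
Stated slightly more formally, the slogan admits a minimal first-order rendering (glossing over the many refinements philosophers would insist on):

    \forall x\, \bigl( \mathrm{Mental}(x) \rightarrow \exists y\, \bigl( \mathrm{Physical}(y) \wedge x = y \bigr) \bigr)

Read: every mental state is identical to some physical state or other; which physical state may vary from instance to instance, which is what makes this the token rather than the type version of physicalism.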

Asking the question in the right way

We now have every basic concept needed to frame the question in a way amenable to analysis. Mental states are physical states. The brain states of a human constitute its subjective experience. The question is whether a particular whole brain simulation will have experience, and if it does, how similar this experience is to the experience of a human being. If Ben Goertzel and I are right, then this is nothing special; it is a basic capability of every physical resource. However, we may question which physical states are part of human experience. We do not usually think that, for instance, the mitochondrial functions inside neurons, or the DNA, are part of the experience of the nervous system. We think that way because they do not seem to participate directly in the main function of the nervous system: thinking. Likewise, we don’t really think that the power supply is part of the computation in a computer.

This analogy might seem out of place, but it isn’t. If Ben Goertzel and I are right, experience is one of the basic features of the universe. It’s all around us; however, most of it is not organized in an intelligent way, and therefore we don’t call it conscious. This is the simplest explanation of experience. It doesn’t require any special stuff, just “stuff” organized in the right way so as to yield an intelligent, functional mind. Think of it like this: if some evil alien came today and shuffled all the connections in your brain, would you still be intelligent? I think not. However, you should accept that even in that state you would have an experience, one that is probably meaningless and chaotic, but an experience nonetheless. So, perhaps that’s what a glob of plasma experiences.

Neural code vs. neural states

Let us now revisit Bialek’s hypothesis: experience is determined by particular electrical signals. If that is true, then even the experiences of two humans are very different, because it has been shown that neural codes evolve differently in different individuals [2]. You can’t just plug the code from one human into someone else; it would be random noise to the second human. And if Bialek is right, it would be another kind of experience. Which basically means that the blue I experience is different from the blue you experience, while we presently have no way of directly comparing them. Weird as that may sound, it is based on sound neuroscience research, and it is a point of view we must take seriously.
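
Here is a toy rendering of that point in Python. The encoder and its parameters are made up for illustration and have nothing to do with the actual analysis in Schneidman et al. [2]; the point is only that two individually developed codes for the same stimulus need not line up:

    # Two "individuals" encode the same stimulus with different spike codes;
    # a naive bin-by-bin match rate shows the codes barely overlap.

    import random

    random.seed(0)
    stimulus = [random.random() for _ in range(1000)]

    def encode(stim, threshold, jitter):
        """A made-up encoder: spike when stimulus exceeds a personal threshold."""
        return [1 if s + random.uniform(-jitter, jitter) > threshold else 0
                for s in stim]

    code_a = encode(stimulus, threshold=0.4, jitter=0.05)  # individual A's code
    code_b = encode(stimulus, threshold=0.7, jitter=0.05)  # individual B's code

    match = sum(a == b for a, b in zip(code_a, code_b)) / len(stimulus)
    print(f"bin-by-bin agreement: {match:.2f}")  # well below 1.0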

Yet even if the experiences of two humans can be very different, they must share some basic quality or property of experience. Where does that come from? If experience is this complicated time evolution of electro-chemical signals, then it is the shared nature of these electro-chemical signals (and their processing) that provides the shared computational platform. Remember that a change in the neural code (spike train) implies a lot of other changes; for one thing, the chemical transmission across synapses would change. Therefore, even a brain prosthesis device that simulates all the electrical signaling insanely accurately might still miss part of the experience, if the bio-chemical events that occur in the brain are part of experience.

In my opinion, to answer the question decisively, we must first encourage neuroscientists to attack the problem of human experience and find the necessary and sufficient conditions for human experience to occur, or to be transplanted from one person to another. They should also determine to what extent chemical reactions are important for experience. If, for instance, we find that the property of human experience crucially depends on quantum computations carried out at synapses and inside neurons, that might mean that to construct the same kind of experience you would need similar material and a similar method of computation.

On the other hand, we need to consider the possibility that electrical signals are a crucial part of experience, due to the power and information they represent; perhaps any electronic device harbors electron patterns like those that make up most of what you sense of the world around you. If that were true, present-day electronic devices would have to be assumed to contain human-like experience, for instance, and the precise geometry and connectivity of the electronic circuit could be significant. However, it seems to me that chemical states are just as important, and if, as some people think, quantum randomness plays a role in the brain, it may even be possible that the quantum description of the random number generator is relevant.

Simulation and transforming experience

At this point, you might be wondering whether the subject was not simulation. Is the question like asking whether the simulation of rain is wet? In some respects it is, because obviously the simulation of wetness on a digital computer is not wet in the ordinary sense. Yet a quantum-level simulation that affords all the subtleties of chemical and molecular interactions could be considered wet. I suppose we can invoke the concept of a “universal quantum computer” from theory and claim that a universal quantum computer would indeed reinstate wetness, in some sort of “miniature pocket universe”. Even that, of course, is very much subject to debate (as you can follow from the little digression on philosophy I provide at the end of the article).

With all the confusing things I have said, it might now appear that we know less than when we started. However, this is not the case. We have a human brain A, a joyous lump of meat, and its digitized form B, running on a digital computer. Will B’s experience be the same as A’s, or different, or non-existent?

If we accept the simplest theory of experience (that it requires no special conditions to exist at all!), then we conclude that B will have some experience, but since the physical material is different, that experience will have a different texture to it. Moreover, an accurate simulation, by definition, accurately preserves the same organization of cognitive constructs (perception, memory, prediction, reflexes, emotions, and so on), and if this dreaded panpsychism is correct, these will give rise to an experience “somewhat similar to that of a human being”, as Ben Goertzel said of human-like AIs. Yet the computer program B may be experiencing something else at the very lowest level. Simply because it runs on some future nanoprocessor instead of a brain, the physical states have become altogether different, yet their relative relationships, that is, the structure of experience, are preserved.

Let us try to present the idea more intuitively. As you know, the brain is some kind of analog/biological computer. A good analogy is the transfer of a 35mm film to a digital format. Many critics have held that the digital format would ultimately be inferior, and indeed the texture is different, but the (film-free) digital medium also has its own affordances, such as being easy to back up and copy. Or we can contrast an analog sound synthesizer with a digital one: it is difficult to simulate an analog synthesizer, though you can do it to some extent, yet the physical make-up of the two devices remains quite different. Likewise, B’s experience will have a different physical texture, but its organization can be similar, even though the code of B’s simulation program will necessarily introduce some physical difference (for instance, neural signals may be represented by a binary code rather than a temporal analog signal). So who knows, maybe the atoms and the fabric of B’s experience will be altogether different, since they are made up of the physical instances of computer code running on a universal computer. These people are made up of live computer code, so it would be naive to expect their nature to be the same as ours. In all likelihood, our experience would involve a degree of features unimaginable to them, as they are forced to simulate our physical make-up in their own computational architecture. This brings a degree of relative dissimilarity, as you can see, and other physical differences only amplify it.
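
A minimal sketch of the representational difference just described: a continuous “membrane potential” (a made-up signal) is sampled and quantized into a binary code. The waveform’s structure is preserved, but the physical carrier is now a string of bits rather than a continuous temporal signal:

    # Analog-to-digital toy example: same waveform, different physical texture.

    import math

    def analog_potential(t: float) -> float:
        """A made-up continuous signal standing in for a neural potential."""
        return 0.5 + 0.4 * math.sin(2 * math.pi * 5 * t)

    def quantize(x: float, bits: int = 8) -> int:
        """Map a value in [0, 1] to one of 2**bits discrete levels."""
        levels = (1 << bits) - 1
        return round(max(0.0, min(1.0, x)) * levels)

    sample_rate = 1000  # samples per second (an assumed figure)
    samples = [quantize(analog_potential(i / sample_rate)) for i in range(100)]
    bitstream = "".join(f"{s:08b}" for s in samples)
    print(bitstream[:64], "...")  # the signal's new, binary physical form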

Assuming the above explanation, when they view the same scene, both A and B will claim to be experiencing it as they always did, and they will additionally claim that no change has occurred since the non-destructive uploading operation completed successfully. This will be the case because the state of experience is more akin to the RAM of a computer: it is a complex electro-chemical state that is held in memory with some effort, by making the same synapses fire repeatedly and consistently, so that more or less the same physical state is maintained. This must be what happens when you remember something: a neural state somewhat similar to the one present when the event happened is re-created. Since the texture has changed in B, the memory will be re-enacted in a different texture, and therefore B will have no memory of what it used to feel like to be A.

Within the general framework of physicalism, we can comfortably claim that further significant changes will also influence B’s experience. For instance, it may be a different thing to work on hardware with less communication latency. Or, if the simulation runs on a very different kind of architecture, the physical relations (such as time and geometry) may change, and this may influence B’s state further. We can imagine this as asking what happens when we simulate a complex 3-D computer architecture on a 2-D chip.

Moreover, a precise answer seems to depend on a number of smaller questions about which we have little knowledge or certainty. These questions can be summarized as:

  1. What is the right level of simulation for B to be functionally equivalent to A? If certain bio-chemical interactions are essential to the functions of emotions and sensations (like pleasure), for instance, then not simulating them adequately would result in a definite loss of functional accuracy: B would not work the same way as A. This is true even if spike trains and changes in neural organization (plasticity) are simulated accurately. It is also unknown whether we can simulate at a higher level, for instance via Artificial Neural Networks, which abstract away the physiological characteristics altogether and just use numbers and arrows to represent A (see the sketch after this list). It is important to know these things so that B does not turn out to be an emotionless psychopath.
  2. How much does the biological medium contribute to experience? This is one question that most people avoid answering because it is very difficult to characterize. The most general characterizations may use algorithmic information theory or quantum information theory. However, in general, we may say that we need an appropriate physical and informational framework to answer this question in a satisfactory manner. In the most general setting, we can claim that ultimately low-level physical states must be part of experience, because there is no alternative.
  3. Does experience crucially depend on any funky physics like quantum coherence? Some opponents of AI, most notably Penrose [5], have held that “consciousness” is due to macro-level quantum phenomena, by which they try to explain the “unity of experience”. Many philosophers of AI, on the other hand, think that this unity is an illusion. Yet the illusion is itself something to explain, and it may well be that certain quantum interactions are necessary for experience to occur, much as they are for superconductivity. This again seems to be a scientific hypothesis that can be tested.
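
To make question 1 concrete, here is the sketch promised above: two hypothetical levels at which a neuron of A might be simulated, a leaky integrate-and-fire model that retains (highly simplified) electrical dynamics, and an abstract rate unit that discards them. Both models and all parameters are assumptions for illustration, not a proposal for how B should be built:

    # Level 1: leaky integrate-and-fire, keeps (simplified) electrical dynamics.
    def lif_neuron(input_current, dt=1.0, tau=10.0, threshold=1.0):
        """Returns spike times; membrane potential leaks and integrates input."""
        v, spikes = 0.0, []
        for t, i in enumerate(input_current):
            v += dt * (-v / tau + i)      # leaky integration
            if v >= threshold:            # fire and reset
                spikes.append(t * dt)
                v = 0.0
        return spikes

    # Level 2: an abstract rate unit, physiology abstracted away entirely.
    def rate_unit(input_current):
        """Summarizes the same input as a single average firing rate."""
        mean = sum(input_current) / len(input_current)
        return max(0.0, mean)             # rectified mean as a stand-in "rate"

    current = [0.15] * 100                # constant input current (made up)
    print(lif_neuron(current))            # detailed spike times
    print(rate_unit(current))             # one number; is that enough for B?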

I think that the right attitude for answering these finer questions is, again, strict adherence to naturalism. For instance, in question 3, it may seem easier to assume a semi-spiritualist interpretation of Quantum Mechanics and claim that the mind is a mystical soul. That kind of reasoning merely leads us away from scientific knowledge.

I hope you can see that the panpsychist approach is actually the simplest theory of experience: everything has experience. Then, when we ask a physicist to quantify that, she may want to measure the energy, the amount of computation or communication, the information content, or the heat: something that can be defined precisely and worked with. I suggest that we use such methods to clarify these finer questions. Thus, assuming the generalist theory of panpsychism, I can attempt to answer the finer questions above. Since we do not yet have conclusive scientific evidence, this is merely guesswork, and I am going to give conservative answers. My answer to question 1 could, for instance, be at the level of molecular interactions, which would at least cover the differences among various neurotransmitters, and which we can simulate on digital computers (perhaps imprecisely, though). The answer to question 2 is: at least as much as is required for correct functionality, and at most all the information present in the biological biochemistry (i.e., precise cellular simulations), which might be significant in addition to electrical signals. And to question 3: not necessarily. According to panpsychism, the claim may even be false, since it would constrain minds to funky physics (and contradict the main hypothesis). If, for instance, quantum coherence is indeed prevalent in the brain and provides much of the “virtual reality” of the brain, then the panpsychist could argue that quantum coherence is everywhere around us. Indeed, we may yet have a rather primitive understanding of coherence and decoherence, as that is itself one of the unsettled controversies in the philosophy of physics. For instance, one may ask what happens if the apparent wave function collapse is deterministic, as in the Many Worlds Interpretation.
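
As an example of the kind of precisely defined quantity a physicist might reach for, here is a Shannon entropy computation over a system’s observed states. This is only a stand-in for “something that can be defined precisely and worked with”; it is emphatically not a claim that entropy measures experience:

    # Toy example: quantifying a system's "informational richness" with
    # Shannon entropy over its empirical state distribution.

    import math
    from collections import Counter

    def shannon_entropy(states):
        """Entropy in bits of the empirical distribution over observed states."""
        counts = Counter(states)
        n = len(states)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    print(shannon_entropy("aaaaaaaa"))    # 0.0 bits: uniform, featureless
    print(shannon_entropy("abcdabcd"))    # 2.0 bits: more differentiated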

Other finer points of inquiry can no doubt be imagined, and I would be delighted to hear some samples from the readers. These finer questions illustrate the distinctions between specific positions, so the answers could be quite varied as well.

After these closing remarks comes a section reminiscing about the fiery philosophical background of this article.

Infinite philosophical regression

The philosophy behind this article has a long history of arguing, over and over again, about basic statements of cosmology, physics, computation, information, and psychology. It is not certain how fruitful that approach has been. Yet for the sake of completeness, I wish to give some further references to follow. For philosophy of mind in general, Jaegwon Kim’s excellent textbook on the subject will provide you with enough verbal ammunition to argue endlessly for several years to come. That is not to say that philosophical abstraction cannot be useful; it can guide the very way we conduct science. However, if we would like that useful outcome, we must pay close attention to the fallacies that have plagued philosophy with superstitious notions. For instance, we should not let religion or folk psychology much into our thoughts. Conducting thought experiments is very important, but they should be constructed with care, so that the thought experiment would actually be possible in the real world, even if it is very difficult or practically impossible to realize. For that reason, as far as ordinary philosophical theories of “mind” go, I go no further than the neuro-physiological identity theory, which is a way of saying that your mind is literally the events that happen in your brain, rather than something else like a soul, a spirit, or a ghost. The reader may also have noticed that I have not used the word “qualia”, because of its somewhat convoluted connotations. I did talk about the quality of experience, which is something you can think about. Of all the properties that can be distinguished in this fine experience of having a mind, some may even be luxurious; that is why I used the word “quality” rather than “qualia” or “quale”.

As for the sufficient and necessary physical conditions, I have naturally spent some time exploring the possibilities. I think it quite likely that quantum interactions may be required for an upload’s experience to have the same quality as a human’s, since biology seems more inventive in making use of quantum properties than we had thought; as you may remember, macro bio-molecules have been shown to exhibit quantum behavior. Maybe Penrose is right; that is possible. However, specific experiments would have to be conducted to demonstrate it. I can see why computational states would evolve, but not necessarily why they would have to depend on macro-scale quantum states, and I do not see what this says, precisely, about systems that do not have any quantum coherence. Beyond Penrose, I think that the particular texture of our experience may indeed depend on chemical states, whether quantum coherence is involved or not. If, of course, the brain turned out to be a quantum computer under our very noses, that would be fantastic, and we could then emulate the brain states very well on artificial quantum computers. In that case, assuming that the universal quantum computer itself has little overhead, the quantum states of the upload could very well closely resemble the original.

Other physical conditions can be imagined as well. For instance, digital physics provides a comfortable framework in which to discuss experience. Psychological patterns would be cell patterns in a universal cellular automaton, with a particular pattern describing a particular experience. Two patterns would then be similar to the extent that they are syntactically similar, which would mean that you still cannot say the upload’s experience will be the same; it will likely be quite different.
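
A toy rendering of such syntactic similarity, assuming (purely for illustration) that configurations are encoded as binary strings and compared cell by cell:

    # Syntactic similarity between two cellular automaton patterns, measured
    # as the fraction of cells on which two configurations agree. Both the
    # encoding and the metric are assumptions for illustration.

    def hamming_similarity(pattern_a: str, pattern_b: str) -> float:
        """Fraction of matching cells between equal-length CA configurations."""
        assert len(pattern_a) == len(pattern_b)
        matches = sum(a == b for a, b in zip(pattern_a, pattern_b))
        return matches / len(pattern_a)

    brain_pattern  = "0110100110010110"   # A's state, encoded as CA cells
    upload_pattern = "0110100110010001"   # B's state: similar, not identical
    print(hamming_similarity(brain_pattern, upload_pattern))  # 0.8125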

One of my nascent theories is the Relativistic Theory of Mind, discussed in an ai-philosophy mailing list thread, which tries to explain the subjectivity of experience with concepts from the theory of relativity. From that point of view, it makes sense that different energy distributions have different experiences, since measurements change from frame to frame.

I think that a general description of the difference between two systems can be captured by algorithmic information theory (among other tools, perhaps). I have previously applied it to the reductionism vs. non-reductionism debate in philosophy [3]; I think that debate stems mainly from disregarding the mathematics of complexity and randomness. As part of ongoing research, I am making some effort to apply it to problems in philosophy. Here, it might correspond to saying that the similarity between A’s and B’s states depends on the amount of mutual information between the physical make-up of A and the physical make-up of B. As a consequence, the dissimilarity between the two systems would be the informational difference in the low-level physical structures of A and B, together with the information of the simulation program (not present in A at all), which could be quite a bit if you compare nervous systems with electronic computer chips running a simulation. This difference may well be significant enough to make an important contribution to experience.
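
In symbols, writing K for Kolmogorov complexity, the textbook quantities this proposal would lean on are the algorithmic mutual information and the information distance of Bennett et al. (a sketch of standard definitions, holding up to additive logarithmic terms, not a worked-out theory of experience):

    % Algorithmic mutual information: what the physical descriptions share
    I(A:B) \;=\; K(A) + K(B) - K(A,B)

    % Information distance: a candidate dissimilarity measure
    E(A,B) \;=\; \max\{\, K(A \mid B),\; K(B \mid A) \,\}

On this reading, K(B | A) would include, among other things, the description of the simulation program itself, which is exactly the extra information mentioned above.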

Please also note that the view presented here is entirely different from Searle’s; he seemed to have a rather vitalist attitude towards the problem of mind. According to him, the experience vanishes because it is not made of “the right stuff”, which for him seems to be the specific biochemistry of the brain [4]. Regardless of the possibility that an artificial entity might have the same biochemistry, this is still quite restrictive. Some people call it carbon-chauvinism, but I actually think it is merely an idolization of earth biology, as if it were above everything else in the universe.

And lastly, you can participate in the discussion of this issue on the corresponding ai-philosophy thread.

References

1. Thomas Nagel, 1974, “What Is it Like to Be a Bat?”, The Philosophical Review 83(4), pp. 435–450.

2. E. Schneidman, N. Brenner, N. Tishby, R. R. de Ruyter van Steveninck & W. Bialek, 2001, “Universality and individuality in a neural code”, in Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich & V. Tresp, eds., pp. 159–165, MIT Press, Cambridge; arXiv:physics/0005043 (2000).

3. Eray Özkural, 2005, “A compromise between reductionism and non-reductionism”, in Worldviews, Science and Us: Philosophy and Complexity, University of Liverpool, UK, 11–14 September 2005; World Scientific, 2007.

4. John Searle, 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences 3, pp. 417–424.

5. S. R. Hameroff & R. Penrose, 1996, “Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness?”, in Toward a Science of Consciousness: The First Tucson Discussions and Debates, eds. S. R. Hameroff, A. W. Kaszniak & A. C. Scott, Cambridge, MA: MIT Press, pp. 507–540.

Eray Özkural

Eray Özkural has obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He has collaborated briefly with the founder of algorithmic information theory Ray Solomonoff, and in response to a challenge he posed, invented Heuristic Algorithmic Memory, which is a long-term memory design for general-purpose machine learning. Some other researchers have been inspired by HAM and call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture. He is the recipient of 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He has previously invented an FPGA virtualization scheme for Global Supercomputing, Inc. which was internationally patented. He has also proposed a cryptocurrency called Cypher, and an energy based currency which can drive green energy proliferation. You may find his blog at https://log.examachine.net and some of his free software projects at https://github.com/examachine/.
