
I am puzzled by a claim by Christof Koch, one of the authors of the Integrated Information Theory of Consciousness (IIT), that the internet may possess a “small” degree of consciousness.

According to IIT, a digital computer cannot possess consciousness. Therefore, if the internet is to possess consciousness, it must be as a result of the information content of the network and not simply the aggregate of the information content of the devices it connects.

If we are to impose a cause-and-effect structure on the internet, then we require that, unlike a digital computer, the internet must be backward feeding; that is to say, a current state of the internet (or some subsystem of the internet) must be able to affect a previous state of the internet (at least from the point of view of the internet as a conscious observer). I am at a loss to understand how this might occur. The internet, just like the various servers and clients that drive it, appears to be strictly forward feeding, moving from one state to the next, and therefore incapable of affecting its past states.

How might one explain Koch’s claim in the context of IIT?


TL;DR:

A long shot: Is it possible that this has something to do with the way information is transmitted over the network, by being broken up into packets which travel independently before being reassembled on the receiving device? This seems a bit fuzzy to me, and I don’t honestly see how this could add any Φ to the system.

(It should be obvious that I am new to IIT, having only become aware of it through a recent answer posted by Jo Wehler to this question.)

nwr
  • @JohnAm "Abusive" seems a rather strong term to use. IIT is intended to measure degrees of consciousness according to any excess information content present in a physical system, over and above the information content of the system parts. I agree that assigning consciousness to a computer network appears to be unusual, but how might one say it is abusive? Any non-zero IIT measure would indicate some degree of consciousness. It need not be human-like consciousness, just that there is "something it is like to be the internet". – nwr Oct 05 '15 at 21:15
  • I think the notion of consciousness is used in a gratuitous way in this context. – John Am Oct 05 '15 at 21:39
  • @JohnAm Oh, I see. Well that's fair comment. I should have realised that is what you intended. Certain features of the theory do appear to be somewhat arbitrary to me at this (early) stage. I'm guessing you are not a "physicalist". I don't really have a strong opinion one way or the other. – nwr Oct 05 '15 at 21:43
  • @JohnAm My understanding of IIT is very tentative at this early stage. I take it as saying that the internet may have near-brick-like consciousness, but some "small" positive measure of consciousness. The theory intends to extend the mathematical characterisation of information as entropy to complex systems. This then captures the common notion of consciousness as a phenomenon emerging from the complexity of the brain. Excess entropy = consciousness. High excess = highly conscious; low excess = marginally conscious; zero excess = brick = amoeba. A continuous scale of consciousness results. – nwr Oct 05 '15 at 22:45
  • @JohnAm Very good. The next time I prepare an omelette I must remember to subject it to a sound interrogation before I cook it - just in case I destroy a poet. – nwr Oct 05 '15 at 23:20
  • I am not confident that the backers of IIT actually claimed that a digital computer cannot be conscious, or if they did claim it, they did not mean it. I think the statement may hold for an "ideal digital computer," but there are enough probability distributions in real life computer systems that I think they would have to admit that there is some small consciousness to be had in a digital computer. Clock drift between parallel processes can yield surprising amounts of uncertainty in variables. – Cort Ammon Oct 07 '15 at 05:13
  • @CortAmmon I also thought of issues like clock drift as an adder of entropy, but I am not technically knowledgeable enough to think it through. I'll try to relocate the source where Koch states categorically that IIT predicts that digital computers can never be conscious, but you may need to be patient since I'm preoccupied with my other course work right now. Thanks. – nwr Oct 07 '15 at 17:55
  • @CortAmmon I've just had a lovely lunch and watched back-to-back episodes of Sponge Bob, so I thought I'd have a quick look through the articles I've read. This article http://rstb.royalsocietypublishing.org/content/370/1668/20140167 features the quote "IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing." That is not the same as zero, but I read it as saying intelligent computers would effectively be "zombies". So perhaps not zero then. – nwr Oct 07 '15 at 21:15
  • @NickR Thanks, that gave me the pieces I needed to craft an answer – Cort Ammon Oct 07 '15 at 23:19

3 Answers


To answer your interesting question, it is not enough to read chapter 9 of Koch's book alone. Koch describes Tononi's Integrated Information Theory (IIT) in popular terms and in a qualitative way.

You may consult the following primary sources for a quantitative formulation of IIT:

  • Tononi, Giulio: Consciousness as Integrated Information: a Provisional Manifesto. Biol. Bull. 215, 2008, pp. 216-242 (the paper from the answer of mine that you mention above)

  • Tononi, Giulio; Balduzzi, David: Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework. PLoS Computational Biology, Vol. 4, Issue 6, 2008, pp. 1-18

Both papers deal in a quantitative way with simple integrated systems, i.e. toy examples. But to obtain any quantitative results one needs a background in information theory.
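To give a flavour of the kind of computation these papers set up, here is a minimal sketch in the spirit of the discrete dynamical systems paper. The three-gate network and its update rules are invented for illustration, and the code computes only the whole-system effective information (the uncertainty about the prior state that observing the current state removes), not Φ itself, which additionally requires minimising over partitions of the system:

    import itertools
    import math

    # Toy system in the spirit of Tononi & Balduzzi (2008): three binary
    # nodes, each computing a logic function of the previous global state.
    # These particular gates are an arbitrary choice for illustration.
    def update(state):
        a, b, c = state
        return (b & c, a | c, a ^ b)  # AND, OR, XOR

    STATES = list(itertools.product((0, 1), repeat=3))

    def effective_information(x1):
        """Bits of uncertainty about the prior state removed by observing x1.

        With a maximum-entropy prior over the 8 possible prior states, this
        is log2(8) - log2(number of prior states that map onto x1).
        """
        preimages = [s for s in STATES if update(s) == x1]
        if not preimages:
            return None  # x1 is unreachable, so there is no repertoire for it
        return math.log2(len(STATES)) - math.log2(len(preimages))

    for x1 in STATES:
        ei = effective_information(x1)
        print(x1, "unreachable" if ei is None else f"ei = {ei:g} bits")

Φ proper would then be the information the whole system generates over and above its minimum-information partition, so even this three-gate toy requires examining every bipartition; that is the combinatorial explosion raised in the comments below.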

I do not know whether Tononi's ideas have in the meantime been applied to the internet. I consider it a fascinating exercise. E.g., consider just Wikipedia as an example of an integrated information system: take each separate article as a component, and all articles together, integrated via their links, as the whole information system Wikipedia. Then one can compute the degree of consciousness of Wikipedia, according to Tononi's definition.

Jo Wehler
  • Thanks for your answer. I've had a read through the first of the papers you reference and I shall have a look at the second. I have to consider my current course work, but it is very tempting to be sidetracked by this theory. In the example of Wikipedia, it appears to me, at this stage, that the links simply embed existing information content into other existing information content without adding anything over and above the aggregate information. If Tononi uses wiki as an example, it should be very revealing to read his thoughts. Thanks. – nwr Oct 05 '15 at 20:40
  • @Nick R To apply IIT to wiki is just my idea. One has to check the formulas for relative entropy to decide whether from the links any excess information results. That's homework which I have not done tonight :-) – Jo Wehler Oct 05 '15 at 20:44
  • The "combinatorial explosion" in dealing with anything remotely complex appears to be an insurmountable problem at this stage. Even the simplest "3-gate" example given in Tononi's paper gives rise to an 18 dimensional object in the qualia space. Heaven knows how many trillions of trillions of dimensions may arise in any remotely complex system. Also, the choice of a metric for the space seems to me (at this early stage) to be somewhat arbitrary. – nwr Oct 05 '15 at 20:49
  • One last comment (I promise). Perhaps dynamic versus static linkage may be relevant. A brain or a network is capable of dynamically linking its parts, while something like wikipedia employs only static linkage. As I say, I've only just scratched the surface of Tononi's beautiful ideas. Hopefully I shall obtain a stronger understanding with time. – nwr Oct 05 '15 at 21:18
  • @Nick R That not every page links to every other means that the presence of a link in an article carries more information than an arbitrary embedding of existing information: it establishes an association. Perhaps not so different from the association a connection between neurons provides. – otakucode Oct 08 '15 at 08:29
  • @otakucode I take your point about association adding information, but it is not clear to me that the link and its association are anything more than a "part". The link may add association information from the point of view of a human observer, but does it add information from the point of view of Wikipedia as a (potentially) conscious observer? Two linked articles and a link between them may form an irreducible complex, but the info content appears to be just the sum of the parts. Association is entailed by the existence of a link. I suspect I am missing something rather subtle here. – nwr Oct 08 '15 at 17:38

This interview was made just to generate some magazine articles.

http://www.slate.com/articles/technology/future_tense/2012/09/christof_koch_robert_sawyer_could_the_internet_ever_become_conscious_.2.html

“Even today it might ‘feel like something’ to be the Internet,” Koch says. Each computer feels nothing, of course, but the totality of the Internet may be more than the sum of its parts. “That’s true for my brain, too. One of my nerve cells feels nothing—but put it together with 100 billion other nerve cells, and suddenly it can feel pain and pleasure and experience the color blue.”


This publication, "Consciousness: here, there and everywhere?" (http://rstb.royalsocietypublishing.org/content/370/1668/20140167), is interesting.

BUT:

IIT was not developed with panpsychism in mind (sic). However, in line with the central intuitions of panpsychism, IIT treats consciousness as an intrinsic, fundamental property of reality. IIT also implies that consciousness is graded, that it is likely widespread among animals, and that it can be found in small amounts even in certain simple systems. Unlike panpsychism, however, IIT clearly implies that not everything is conscious. Moreover, IIT offers a solution to several of the conceptual obstacles that panpsychists never properly resolved, like the problem of aggregates (or combination problem [107,110]) and can account for its quality. It also explains why consciousness can be adaptive, suggesting a reason for its evolution.

They forget that consciousness is a manifestation of language carried by individuals in a society, and not any type of language but an advanced form of it.

So my opinion is that Koch’s claim is out of the question.

John Am
  • It's nice to see that you've stopped your anti-omelette hate-speak ;-) Seriously though, Tononi and Koch specifically argue against consciousness being tied to language. I do, however, sympathise with your view. It's a very difficult subject and I'm only just starting to read about consciousness. The RS Philosophical Transactions paper is a nice, gentle introduction to IIT, though lacking in the necessary rigour. Thanks for your answer. – nwr Oct 07 '15 at 17:45
  • I didn't think you were trolling at all. Your point of view seems perfectly reasonable to me. My crack about your omelette metaphor was just a bad joke on my part. Some of my best friends are omelettes. I'll try and locate the discussion of language and consciousness, but it may take some time since I'm a bit preoccupied with my course work right now. – nwr Oct 07 '15 at 17:51
  • Hi John, it is not a detailed argument, but in this video https://www.youtube.com/watch?v=1cO4R_H4Kww , starting at about 4:00, Koch gives a long list of things which he claims are not required for consciousness. This includes language, the argument being based on clinical evidence. One problem in understanding IIT may be that what Tononi and Koch call consciousness may not be what others call consciousness. They reject any human exceptionalism and rely more on behavioural correlates and neural correlates. – nwr Oct 07 '15 at 21:26

Looking at the article you linked, I think I can give a meaningful answer.

IIT argues that feed-forward systems are not conscious. This means any system which can be well modeled as an open loop, accepting inputs on one side and providing outputs on the other, must not be conscious, because all of the information it integrates is available just by processing its inputs. Systems with feedback function differently: they can observe the effects of their own actions and adapt accordingly.
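To make the distinction concrete, here is an invented toy contrast (nothing in IIT reduces to this simple picture, and the functions are placeholders): a feed-forward system's output is fixed by its input alone, while a system with feedback carries state shaped by its own past activity.

    # Invented toy contrast between the two architectures described above.

    def feed_forward(x):
        # Open loop: the output is a fixed function of the input alone, so
        # replaying an input always reproduces the output, and observing
        # the system tells us nothing the input did not already contain.
        return 2 * x + 1

    class FeedbackSystem:
        """Closed loop: each output feeds into the next step's state."""
        def __init__(self):
            self.state = 0

        def step(self, x):
            # The response depends on the accumulated effect of the
            # system's own previous outputs, not on the input alone.
            self.state += x
            return self.state

    fb = FeedbackSystem()
    print([feed_forward(1) for _ in range(4)])  # [3, 3, 3, 3]: history-free
    print([fb.step(1) for _ in range(4)])       # [1, 2, 3, 4]: history matters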

When they talk about a digital computer being minimally conscious, what they are most likely referring to is a traditional brain simulation, where you have a body of canned inputs and observe its outputs. Even if the system has feedback loops within itself (which a brain simulation would), the entire system has limited consciousness because it can only integrate the inputs that it receives. If the canned inputs contain very little data, it is hard to generate a large amount of integrated information about them (information cannot be "created" out of thin air).

In IIT, the concept of the environment is very important. When you get down to the actual mathematics describing the Phi function of a system (measuring its level of consciousness), it depends on the ability to exclude states which are theoretically possible in the model but cannot actually be arrived at by the entity under test. The fundamental measures used in IIT are literally information about a system. The classic example is Maxwell's demon. You have a system of two boxes connected by a door, with one gas particle bouncing around in it. If you have information that the particle is in one particular box, you have reduced the number of possible states of the system by half, corresponding to 1 bit of information about the system. In IIT, Phi ends up measuring how much information the conscious entity can generate (in bits) minus the amount of information that could have been gathered simply from the input (in bits). Clearly, for a feed-forward computer, virtually all of the information about the system is available in the inputs, so it is not very conscious. In the case of a brain simulation with canned inputs, the state of the system is fully knowable from the information at the onset (it is fully defined by the blob of canned input plus the code of the brain simulation).
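To spell out the demon arithmetic (a standard information-theory calculation, not anything specific to IIT): learning which of the two boxes holds the particle halves the state space, so

    I = log2(N_before / N_after) = log2(2 / 1) = 1 bit.

And on the rough gloss in the paragraph above, Phi ≈ (bits the system generates) − (bits recoverable from its inputs), which is why a strictly feed-forward computer, whose inputs account for essentially everything, scores near zero.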

Connect the brain simulation up to the environment, however, and the rules shift. Now there are unknown variables in the system, because it is interacting with the physical world. If these unknown variables are affected by the computer's actions, and observable by its sensors, we have the capacity for feedback. The computer can try something and observe the effect of its own actions. In doing so, it can begin providing information that was not known at the onset of the simulation: information that has grown over time as interactions occur.

Now, if we were to snarf all of the data for this simulated brain as it crosses the analog-to-digital barrier, and save it, we would generate a set of canned data again. This computer that was conscious is, by the rules of IIT, rendered virtually unconscious, because we have a body of data which can be replayed to recover all of the information that the simulated brain had. However, to declare it thus, we have to capture all of its inputs. In real-life situations, data capture like this is difficult, quickly consuming terabytes of storage! So, in practice, it is often not reasonable to capture every single bit of digital input to the simulated brain. Those bits now have to be treated as unknowns in IIT, and since they are unknown, we can give the computer credit for "knowing" information that is not available in any other pile of data besides "itself."

Now, consider the internet. The internet is massively parallel, and it is this parallelism that matters for why it might be conscious. Even given a body of canned input, the internet could still be conscious, because its internal state depends on a remarkable number of timings. If one part of the internet is fed data telling it to read a block of data from a server, while another part is fed data telling it to change that block at the same time, slight differences in how fast those parallel processes execute lead to changes in behavior. These changes in behavior were not knowable from the canned input: they are emergent, the result of slight analog effects such as clock skew. Thus, the internet contains the integrated information of a billion small transactions which are not captured in any database anywhere, yet the results of those transactions can be fed back into algorithms in other parts of the internet. This allows the internet to demonstrate consciousness, by IIT.
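Here is a minimal sketch of the kind of timing-dependence meant here: two threads racing on shared state. It is nothing internet-scale, and how often updates are lost varies by interpreter and scheduler, but the final value depends on timing that appears nowhere in the "canned" inputs.

    import threading

    # Two threads race to update shared state. The inputs are fully
    # "canned" (each thread just increments 100,000 times), yet the final
    # value depends on scheduler timing absent from those inputs.
    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            tmp = counter   # read
            tmp += 1        # modify
            counter = tmp   # write: the three steps are not atomic

    threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Typically prints less than 200000, and a different value each run:
    # the lost increments record timing information that cannot be
    # recovered from the canned inputs alone.
    print(counter)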

Cort Ammon
  • Thanks Cort. This is a very plausible and intuitively appealing explanation of how the internet could generate some Phi. It also explains well the problem of generating any significant Phi on a digital computer. I'll read it a few more times in case I have any questions before accepting it. Ta! – nwr Oct 08 '15 at 00:30
  • You say ' In real life situations, data capture like this is difficult, quickly consuming terabytes of data!' but I think the difficulty is actually far more than that. No amount of storage or degree of instrument sensitivity is sufficient to capture the information (Heisenberg Uncertainty Principle at the smallest scale for instance). That together with the fact that even simply constructed systems exhibit chaotic behavior (extreme sensitivity to initial conditions) mean you can't capture enough information to be effective. – otakucode Oct 08 '15 at 08:25
  • @otakucode By "real life situations" I mean a real digital simulation of a brain attached to real hardware. In that case, there is actually a stream of data (I work with hardware that actually does this.. it turns out a lot of data can go over a PCIe bus!). It doesn't actually have to capture the reality, merely the exact series of inputs, as digitized by the computer. (and I'm also handwaving away the imperfections in the implementation of a digital computer, but I think that's fair given the definitions being thrown around in the paper) – Cort Ammon Oct 08 '15 at 16:44
  • @CortAmmon I'm bogged down with course work right now, but I do have a number of questions here. They'll have to wait for now, but I hope to get back to you later. – nwr Oct 09 '15 at 16:59
  • @NickR I look forward to pretending to be an expert on the topic. After your questions demonstrate how much more I have to learn, hopefully we'll both get to learn something =) – Cort Ammon Oct 09 '15 at 20:00
  • @CortAmmon OK, if you are going to capture the digital representation of the data after it has been collected, then I think you need to consider doing the exact same thing for the human brain. I think that omitting the organs of perception is a critical flaw, but where could integrated information come from in an actual brain if you consider its electrochemical and connectome state to be the 'code' and use canned inputs? – otakucode Oct 12 '15 at 00:18
  • @otakucode The difference is that we have yet to identify the "code" for a human brain, from which to recreate a copy of the brain. With the digital system, it is theoretically possible to encode the state of the simulated brain, and its inputs, using nothing but a hard drive. With analog systems, such as the brain, it is much harder (potentially impossible). It would also be comparably hard or impossible to do for a digital brain whose implementation is intentionally irreproducible, such as one that allows for parallel computation with interactions based on clock timings, at least by IIT. – Cort Ammon Oct 12 '15 at 07:03