Hive minds, consciousness and bricks

I finish with a couple of miscellaneous observations. First, there is no reason for this functional gestalt to be confined to individual organisms. There is a gestalt, a “how”, to being part of a telepathically linked hive mind, to something more amorphously collective like a slime mould, to a planetary consciousness, and so on.

But not every functional gestalt has an “interior” aspect. There is a functional gestalt associated with being a flock of birds, or perhaps even a brick, but neither of these is likely to have any true inner experience, for the simple reason that inner experience is something over and above the ability to exercise many capacities successfully in tandem. Inner experience is, rather, a feedback loop over the system’s own internal representations, as described in Byzantine detail by the likes of Daniel Dennett and Douglas Hofstadter.

Whether or not it is only a functional phenomenon, the feedback loop of conscious experience has clear functional implications for the organism, since the organism can report on its own internal contents. A human operator might enter the murmuration tank in the heterophenomenological arcade, successfully wield a flight of starlings, and emerge with some “inner experience” of being that flock which the flock itself does not have! (Even sillier is the brick simulator.) A flight of starlings does not need to be conscious to accomplish its executive coordination and attend to its collective umwelt. Instead, the starlings presumably exchange a set of cues. In a sort of relativist double entendre, the “language” of these cues might in some sense determine the cognition of the flock!

Finally, we suggested above that, from a functionalist perspective, near enough is good enough. We need not surgically implant bat ears if we can “mock up” a functionally equivalent form of echolocation. But if consciousness has a functional characterization, a strange question arises: is it possible to mock up a “near enough is good enough” functional analogue of consciousness for a creature which does not possess it? This is, of course, the “old” problem of AI: getting a computer to succeed at the conscious-experience simulator. And designing such a simulator would answer Daniel Dennett’s mocking rejoinder to Nagel: “What is it like for there to be something it is like to be something?”

Next time