B-Zombies Are Coming to Eat Your Brains
Over the last few years, unexpected leaps in AI tech have given rise to some interesting new phenomena. Suddenly, we have programs that speak a little too convincingly, along with people a bit too eager to be convinced. We have AI-generated artwork winning prizes at human art exhibitions. We hear bold promises of an AI-powered utopia alongside dark prophecies of an impending AI apocalypse. Some experts attribute to these models absurdly high IQs, even as they fail at simple coloring puzzles a child could solve.
As artificial intelligence grows more sophisticated, so does the culture surrounding it. Absurdities and contradictions continue to multiply, fueling ever more colorful discussions. Of particular interest are those that begin by analyzing the peculiar mistakes and eccentricities of a given AI model, but then evolve into examinations of the peculiar mistakes and eccentricities of the human mind. After all, people also parrot falsehoods, confabulate knowledge, and often lack originality or even basic self-awareness.
For many AI enthusiasts, such limitations don't indicate any deep flaws; instead, they are signs that we are on the right path to replicating the human mind in silico. Some even argue that AI may already be conscious -- and that any critique of its apparent deficiencies can just as easily be turned back on the skeptic.
Naturally, such extreme enthusiasm invites equally extreme skepticism. Many argue that no technological advancement can replicate the mind on a computer because no scientific inquiry can fully account for the internal and most remarkable aspect of consciousness: subjective perception as it is directly experienced.
To support their case, skeptics point to the Hard Problem of Consciousness. Neuroscientists can study the brain's structure and processes in excruciating detail. Perhaps one day, they will understand it well enough to predict and explain every externally observable output of a brain in purely physical terms -- this is the so-called "easy" problem. But from that perspective, the brain is just a mechanism that takes in inputs and generates outputs. Why would it produce anything beyond those observable effects? Why would it generate an internal, self-aware experience?
After all, countless digital systems process inputs and produce outputs, sometimes performing elaborate self-diagnostics in the process, yet we almost never consider the possibility that they experience anything. Crucially, this intuition seems independent of the complexity of the processing itself (aside from the special exception mentioned earlier). If a digital system is always performing the same kind of operations, why would there be a particular sequence of instructions that suddenly produces something extra -- something beyond observable output?
A similar argument applies to neurological systems, particularly when the brain is viewed through the lens of information processing.
That is the Hard Problem. We all experience ineffable qualities that seem to define the very substance of human consciousness -- the vivid redness of red, the inexplicable yet undeniable badness of pain, the timbre of our own voices, or even more abstract sensations, like the distinct flavor of frustration when we struggle to grasp the very medium in which all these experiences unfold. These aspects of cognition seem to belong to the realm of art, poetry, spiritual enlightenment -- perhaps philosophy -- but not objective science. When considered as isolated fragments of experience, such sensations are known as qualia, and qualia appear inseparable from the mind itself.
For this reason, skeptics argue that no amount of technological progress provides a reason to assume that a mere computational construct possesses anything remotely comparable to a real mind, no matter how convincingly it imitates one. To illustrate their point, they turn to a classic thought experiment from the tradition of Mind-Body Dualism: the philosophical zombie, or p-zombie.
The p-zombie is behaviorally indistinguishable from a typical human: in any given situation, he acts, talks and reacts just as you'd expect from a normal person -- even under experimental conditions devised to sniff out a phony. His self-reports on internal subjective matters are also normal. He claims to be fully conscious, giving accounts of his qualia as well as anybody. Moreover, his brain is indistinguishable from a normal brain: it displays neurological activity in line with whatever he's doing at a given moment.
So far, this character seems undeserving of a special title, but in fact, he holds the ultimate secret: unbeknownst to himself and others, his inner world is a void. He doesn't perceive anything, but only goes through the motions, fooling onlookers with his perfect mimicry. He can talk about the redness of red, the sweetness of a singer's voice, and even the intensity of a painful burn, but he doesn't feel it -- he doesn't know what it's like to actually experience it. Insofar as one being's inner world can never be directly witnessed by another, the p-zombie could be your neighbor and you wouldn't know -- at least in principle.
In practice, you somehow know that's not true. So what's the point of this philosophical fiction? You can think of it as an inversion of the brain-in-a-vat thought experiment: one suggests a real mind experiencing a fake world through a fake body; the other suggests a real body expressing a fake mind; both attempt to demonstrate epistemological limitations stemming from the disconnect between the two sides of the mind-body dualism. Both can be dismissed as pointless navel-gazing, but that misses the point: you're not expected to worry about p-zombies or the Matrix, but only to acknowledge the limits of knowledge. A Dualist can use this to undermine Materialism: insofar as the p-zombie idea is internally coherent and no experiment can be devised to rule it out, the Materialist's quest for knowledge presumably stops where the world of Mind begins -- how can you gain reliable knowledge about something whose existence you can't even verify? Therefore, the Dualist argues, Materialism is incomplete.
In the narrower context of AI, many skeptics seem to find the spirit of this argument compelling, even if they don't consider themselves to be Dualists. To them, a sufficiently convincing AI may as well be a p-zombie. If nothing else, the notion of qualia encapsulates neatly what they believe any kind of AI would necessarily lack.
The Mind, As Seen By Others
The argument seems plausible, but maybe you're not convinced. Neither was Daniel Dennett, the American philosopher and cognitive scientist. Venerated by the Atheist Movement alongside figures like Richard Dawkins, Dennett gained some fame by exorcising p-zombies, qualia and souls. In his view, the very concept of qualia is a bunch of nonsense; and so, taking it about as seriously as an atheist takes God, he wrote several books on the subject. There's no way to cover the entire breadth of the man's thinking here, but the following is (hopefully) representative of his approach, as condensed through my understanding:
According to Dennett, the scientific elusiveness of qualia is an artifact of the Dualist perspective, rather than a proof of it. It's a conceptual heritage from an obsolete view that conceives 'mental' and 'physical' as two worlds apart, as a matter of a priori assumption rather than fact. Since each realm lacks logical connection with the other from inception, qualia -- being fragments of the mental realm -- inherit this property. The p-zombie possibility is an expression of an age-old problem known and discussed by Dualists themselves: namely, that the two realms appear to be correlated, yet no mechanism has been devised to explain this.
As an aside, but in the spirit of Dennett's observation, we can note the following: the Dualist conceives of a p-zombie -- a localized mismatch between the mental and physical realms -- for his own small advantage, but it turns into a big disadvantage if we can conceive of the two realms coming apart altogether. It's not unlike the brain-in-a-vat idea, but with one major difference: whereas one mismatch is caused by someone constructing the brain-in-a-vat setup, the other may occur spontaneously: its suggestion is completely arbitrary, but cannot be ruled out -- just like p-zombies.
Dennett's remedy is a shift in perspective: since the physical/mental divide can't be demonstrated empirically, nor established logically, the starting point for an explanation of consciousness should be that no such divide exists. We can also reject the idea of qualia as independent phenomena that require causal explanation. Consider fire as a point of contrast: it depends physically on fuel, but remains conceptually independent, giving rise to the question of causality. Now, according to Dennett, that's precisely the wrong way to think about consciousness and the brain. Subjective experience -- if it's ever to be understood -- must be conceptualized differently. This may seem like an evasion, but if two things are consistently correlated while no causal link can be demonstrated (even in principle), it's natural to conclude that the correlation is not causal. Instead, subjective experience can be conceived as a direct expression of neurological processes -- or an inside view of them -- inherent in that source. Under this framework, a p-zombie is incoherent: a normal, functioning brain logically entails consciousness, so the p-zombie concept tries to unify a premise with its own negation. Asking "why do brain functions cause subjective experience?" would be like asking "why does the equation for a circle cause a circular object?": it's not so much a matter of cause and effect as one of perspective; any question about the shape can be answered by analyzing the equation, therefore associating a different shape with that equation is incoherent. In the same way, given a sufficiently advanced neurological model, any coherent question about consciousness can be answered, while the classic thought experiments about qualia become incoherent.
This doesn't quite amount to the claim that you can "know what something feels like" given only the data. Instead, Dennett's framework recasts "you can't know what it feels like, without feeling it" as "you can't feel what it feels like, without feeling it". He just wants you to leave the 'know' out of it. Scientists can't feel your experience for you, but they may well know it better than you some day, although he grants that they can't do it without your help: neurological knowledge must be connected to, and contextualized by, subjective experience reports, in order to be useful in a field like Psychology -- which operates on a different level of abstraction and has its own way of framing questions.
At the core, this is a debate about what constitutes knowledge. Here's one way to think about it: the clearest symptom of a knowledge gap on a subject is that it leaves room for some arbitrariness -- a potential for multiple conflicting propositions, with no way to judge between them. That is precisely what thought experiments about qualia try to play on. According to Dennett, we can't use propositions about p-zombies, or ones like "my blue could be your red", to demonstrate a knowledge gap regarding subjective experience -- they can be ruled out by logic or advanced neurology. But what about this alien I picked up from the bus stop on Xyphor-9? It doesn't have eyes, ears, or any recognizable sense organs. What does it feel like to be it? It could be anything -- which sounds like arbitrariness. But if the creature is so alien that you can't begin to fathom its perspective through yours, what propositions can you make about it? Only abstract and relational ones -- precisely those that can be settled by abstract, relational neurological models, or by logic. There seems to be no way to translate that intuition of arbitrariness into a demonstration of knowledge gaps under Dennett's framework.
Dennett is a Functionalist: for him, "it" is what it does, and it's to be understood through how it does it. You could argue with him all day about the importance of ineffable experiential knowing, but you'd be missing his point: if he can answer any coherent question about your "feels-likes", then from his point of view -- or the point of view of science -- there's nothing the mind can hide, and no need for a separate realm to hide it in. So long as the issue of experiential knowing never comes back to bite some poor scientist working in his lab, Dennett can die happy.
Now, the power of this argument, as well as its weakness, lies in the way it side-steps the need to address the difficulty in grasping a direct correspondence between physical interactions and subjective experience: rejecting Dualism makes such correspondence not only the default position, but seemingly "true by definition" and therefore somehow empty. The incredulity can be reframed and likened to the one that arises when a "mere" equation corresponds to a surprisingly beautiful and elaborate shape: you may find yourself haunted by the question "why?", even knowing full well that the question is without mathematical or logical substance. Still, the "intuition gap" is never closed, even if the "knowledge gap" is refuted.
Nevermind Materialism
There's no small amount of scientific hubris there, for sure; philosophically demoting experiential knowing is a dubious move. But putting that aside, the basics of Dennett's argument should be acceptable to any Monist, or at least any Monist who believes he has a brain. The erosion of arbitrary logical boundaries between Mind and Matter, or Internal and External, is suggestive of Buddhist doctrines. So is the unification of neurological processing and mentation into a single phenomenon, in a way that nonetheless affords two distinct perspectives: it doesn't identify one with the other, or privilege one as "the cause" of the other, but it does establish their intrinsic connection.
Perhaps the main difficulty is in the way Materialists and Dualists both tend to conceive physicality as incompatible with "mental" qualities. Suppose the most elegant and powerful neurological model ends up being one that postulates irreducible, qualia-like entities, not in the form of attachments dangling from the theory, but as support points for observable relationships in an otherwise materialist scheme. Such physi-qualia would have causal efficacy and therefore constitute proper physical phenomena. The outcome wouldn't be a return to Dualist metaphysics, but an extension of physics. Naively, one may ask: if these phenomena are real, why can't they be physically detected? But they are being physically detected: the brain may be just the kind of sophisticated instrument needed to not only detect, but also lens, diffract, bend and shape this raw substance in interesting ways. The point of this speculation is simply to show that if Dennett sought to cement Materialism, he not only fell short of it, but even provided a means for consciousness to scientifically refute his broader views on reality (and as a scientist, that goes to his credit). In any case, the synthesis of two realms must alter our understanding of both. A Monist who insists otherwise is just a Dualist wielding a sharp argument to blind himself in one eye.
It's tempting to conclude that there's nothing more to say about the subject -- we'll just have to wait for neurology to pass its final judgment. However, I intend to show that we could be waiting for a very long time.
Raising P-zombies From the Grave
So we've taken Dennett's criticism on board and done away with p-zombies, but behind the flawed logical construct, there is a certain intuition about experiential knowing that exists independently. That intuition lacks a body for now, but it continues to haunt us. Maybe we can give it a new body, by creating some other kind of zombie, just to see what it does. First, let's establish some criteria to ensure the new zombie isn't just the dead p-zombie in fresh makeup. For our purpose, the following should do:
- Subjective experience necessarily reflects brain processes
- External behavior necessarily reflects brain processes
- Altering/removing any aspect of subjectivity entails relevant and observable changes in both brain structure and external behavior
An entity that abides by these criteria should be acceptable to a Dennettian, so I propose a new kind of entity: the B-zombie. That's not a 'B' as in 'Plan B', or a 'B' as in a horror B-movie, but 'B' as in 'Baudrillard'.
In the book Simulacra and Simulation, Baudrillard introduces an idea he calls hyperreality: a constructed concept of reality that consists of a self-contained network of mutually supporting signs. Signs -- normally, symbols or referents to some aspect of reality -- function differently in the context of hyperreality: they never reference any direct experience or intuition about what there is, but only other signs. Thus, the entire network supports some notion of meaning, but without connection to any underlying substance. Despite its vacuity, hyperreality seems more "real" than reality itself, as it becomes more readily graspable than the original substance. The idea is deeply rooted in Baudrillard's criticism of the social, cultural, and media-driven destruction of meaning -- after all, the purpose of signs is communication -- but for our current purpose, we'll abstract it from the original context, keeping in mind the small irony in the resultant loss of substance. We'll return to this later.
Now, the B-zombie's cognitive process takes hyperreality to its absolute extreme: it always operates in purely abstract, relational terms. To it, "red" is defined entirely by how it relates to other abstract ideas: it's the label for a certain visual stimulus; it's the opposite of blue; it's the color of passion (which, in turn, is the emotion of 'red') -- it's whatever can be verbalized and explained -- but it's not a direct aspect of experience in and of itself. The same is true of any other concept normally relating to experience. This is not as crazy as it sounds: besides Baudrillard's suggestion that real people are slowly approaching this fate, we can note that LLMs treat all concepts exactly as described. Thus the B-zombie has no "subjective experience" as far as we comprehend that concept, because its mind contains nothing that would inspire a notion of qualia in the first place -- any qualia at all. If qualia are a misunderstanding, the B-zombie lacks the necessary ingredients to form that misunderstanding.
The B-zombie is artificial: unlike a p-zombie, it does not have a brain identical to a human's. Instead, it has precisely the kind of brain it takes to achieve the cognition being described, so it adheres to the 1st criterion. Externally, the B-zombie is almost indistinguishable from a natural human. This is coherent because language is inherently relational, while behavior functions in relation to the environment: both are expressible within the relational nature of the B-zombie's cognition. However, there is one kind of thing humans do that the B-zombie won't do: it won't independently come up with the idea of qualia, form genuine metaphysical beliefs, and so on. The B-zombie is a strict Materialist who simply can't get tricked by an idea like a p-zombie. Thus, he conforms to the 2nd and 3rd criteria: granted, there are individual humans with a similar mindset, but such a philosophy is not uniformly characteristic of humanity, the way it is with B-zombies.
I foresee three main objections Dennettians can make:
A human's subjective perceptions are already purely relational
On a certain level, this follows trivially from Dennett's framework: subjective perceptions encapsulate, and are defined by, all the relevant physical relationships. But for a B-zombie, everything is purely relational even on the level of his direct comprehension (i.e. nothing "emerges" that is non-relational), whereas for a human, red can also be just red -- if not conceptually, then experientially. To argue otherwise would be a fairly contentious philosophical position, rather than a scientific one.
A B-zombie would still have its own unique brand of "consciousness", albeit a humanly incomprehensible one
Perhaps, but talking about a B-zombie's first-person perspective is only meaningful in a very abstract, minimalistic Dennettian sense. It resists all attempts at analogies with the human perspective, because it's constructed to be the closest thing to a non-perspective that still retains logical coherence. A Functionalist can still try to explain it, but arguably, it would be a moot exercise: with a human brain, "what it does" may be enough to answer "what it's like" questions, if the neurological knowledge is processed and reframed appropriately; but in the case of the B-zombie, trying to reframe "what it does", does nothing. The B-zombie's "experience" is as abstract in reality as it is on paper. In seeking some hidden relationships between relationships, the scientist would be doing something strangely reminiscent of what Dennett accuses the Qualia School of doing. And in any case, the B-zombie remains coherent.
A B-zombie is coherent but neurologically impossible
Maybe a purely relational cognition would cause functional deficits, compared to cognition capable of subjective experiences in the familiar sense, and not just philosophical differences. But who can prove it? Since we lack the knowledge, insisting on this point amounts to using the sort of dreaded "intuition pump" Dennett was so keen on criticizing.
Like it or not, the B-zombie seems viable.
So What?
For now, the B-zombie seems to capture some intuition yet demonstrate nothing -- certainly not a knowledge gap of the sort Dennettians acknowledge. Indeed, it can't demonstrate anything directly, but it can get in the way of gathering relevant knowledge.
Since a B-zombie is mostly indistinguishable from a human, its neural architecture, while being different, may be analogous on some level of abstraction. This is not an arbitrary suggestion: if Functionalism is true, and the B-zombie retains all the proper relational aspects of cognition, we would expect those to be similarly reflected in both brain structures. The issue of abstraction is key here: humans can't grasp how a brain works without relying on abstraction to create models: we understand things not as they are -- with every little detail figured in -- but by ignoring aspects that don't pertain to our inquiries. When it comes to our brains, understanding them entails abstracting away minor details of our biology as much as possible. Moreover, models are inherently all about structure and relationships. Thus, in theory, the modeling process can result in something just like the B-zombie. If that happens, would we know it? The only observable difference would be that the model always produces a mindset reminiscent of Dennett's; but lacking direct access to its first-person experience, there would be no way to tell if it's a B-zombie or just a very stubborn Materialist. A Functionalist could only state with confidence that it's conscious "in some way" -- in the way the model implies -- a statement that might as well read: "it is what it is".
Thus the strength of the Functionalist approach is also its weakness: any kind of abstraction is a change in function, strictly speaking, so it comes down to the question of how to define "function" in a way that captures everything we want to understand. Does a B-zombie encapsulate everything there is to understand about the human mind? If so, the B-zombie is the ultimate neurological achievement. If not, it represents something worse than a mere gap in neurological knowledge: it prevents us, even in principle, from confidently answering whether or not we possess sufficient neurological knowledge to achieve Dennett's dream. Moreover, it does so precisely on account of the scientist's inability to directly witness another being's first-person experience, as that being witnesses it. Dennett managed to define away arbitrariness on one level, only for it to re-emerge as uncertainty on another. The issue of experiential knowing does indeed come back to bite: the B-zombie doesn't restore the Dualist's metaphysical realm of the Mind, but it does capture the basic intuition behind it, finding its right place in the realm of Science.
The Slippery Slope to Hyperreality
Way back at the beginning, I took a small jab at Dennett's association with the Atheist Movement, and suggested that his zeal to refute qualia was comparable to Richard Dawkins' zeal to refute God. Now that we've mentioned Baudrillard and covered B-zombies, I will try to show that there is more to those remarks than simple humor or disdain.
Baudrillard describes a four-stage typology for the collapse of meaning:
- A reflection of a basic reality: the sign is a faithful representation of something real
- A mask or perversion of reality: the sign distorts or obscures the real
- A mask of the absence of reality: the sign maintains an illusion of representing something real, but in a way that invites doubt
- Pure simulation: the sign has no relation to reality; it survives only through a self-referential system
Baudrillard touches on the collapse of religious meaning, in relation to symbols of Divinity, in a brief discussion of the Iconoclastic Controversy. He stops short of explicitly applying his typology there (perhaps to avoid oversimplifying history), but it's not difficult to do. Here's one possible story:
Initially, religious iconography is inspired by transcendental experiences: it depicts the Divine as having its own substance, emphasizing direct spiritual insight.
As adoption widens, the religion gains less-dedicated followers; the icons themselves are taken to embody some Divine substance, and rituals are developed around them to induce a spiritual experience.
The religion becomes a tradition; belief in the actual Divine quietly declines, but religious rituals and symbols remain.
Religious institutions and symbols come to function autonomously, regardless of belief in God; they mutate independently, defined only by internal consensus on their current form. The self-perpetuating religious act becomes a brute fact more real than Divinity itself: the presence of God is, in some sense, enacted through the functions of a system that relies on Him as a conceptual pillar.
This provides an interesting perspective on the decay of religion through the decay of its signs, but it also provides a clue as to what ties Dennett to the Atheist Movement -- or qualia to God.
Explicit atheism in the individual can be seen as acceptance of the meaninglessness of religions in the 4th stage, but Atheism as a movement is more than an abandonment of religious signs, and more than a vanguard of science and rationality: it wages a cultural war for control over the vacuum of meaning left by the Old Religions; it seeks to manufacture new meaning, based on its way of interpreting the world. On the surface level, the Atheist Movement merely ridicules 4th stage religious signs -- easy targets whose refutation and dismantling has become a form of popular entertainment. But more intelligent activists understand that Jesus wasn't born in a church, so to speak -- he was born in a cave. In order to secure the cultural gains, the Movement must tackle the problem of religion at the root -- that is, on the level of Baudrillard's first stage, as it pertains to religion -- and this requires more serious philosophical work. The realm of transcendental experiences is the realm of the Mind: it's the Mind that provides a medium for Divinity to express its elusive substance; it's also the Mind that offers Divinity a last refuge when even its subtlest suggestions are banished from the material world. For that reason, it's crucial for a movement that seeks to banish religion permanently not only to promote a materialist framework for analyzing cognition, but to rewrite and replace the very meaning of consciousness, in a way that leaves no place for God to hide. This is the true function of a Functionalist philosopher in such a movement.
Whether or not you believe in God, it's not hard to see why this is a dangerous game to play: redefining the meaning of consciousness by way of "functions" and observable symptoms is analogous to the transition from Stage 1 to Stage 2 in the analysis of religion: the necessity of direct witnessing is put aside in favor of enlightenment through study of models, through observation of the "rituals" associated with consciousness. For the Dennettians, this is a calculated move; but by proposing what they call a "deflationary" view of consciousness, they serve a new intellectual hegemony that may well end up dragging consciousness all the way to Stage 4 -- deflating it of all real meaning.
Some of this is already apparent in popular discussions about AI's potential for general intelligence (or even sentience). Although many consider intelligence an independent concept from consciousness, they still struggle to break the intuitive association between "human-like" intelligence and a human-like mind. Consider a hypothetical collapse of the concept of the Mind, via shifts in the popular narrative about intelligence and AI:
- A reflection of a basic reality: the concept of intelligence reflects a reality about intelligent minds
- A mask or perversion of reality: the concept of an artificial, but nonetheless real, mind emerges (the classic sci-fi vision of AI)
- A mask of the absence of reality: language models are taken to fulfill the promise of AI; the reality of the human mind is called into question
- Pure simulation: the concept of a "real" mind is abandoned altogether; personal relationships with models -- whose simulation of thoughts and emotions is taken to be more compelling than a human's -- substitute for real relationships
This scenario is only hypothetical insofar as it hasn't yet consumed mainstream society; it has been fully realized in certain niche communities, and symptoms do crop up in mainstream discussions about the limits of AI, where statements like the following can be observed with increasing regularity:
"...but brains also work like that..."
"...human originality isn't real, either..."
"...most people also make such mistakes.."
"...people also rely on training data..."
No doubt, when Dennett insisted that a mind can be defined by its functions, and that its presence can be judged by behavioral observations, he meant something more nuanced than what AI enthusiasts indulge in, but these amateur Functionalists (some of whom are world-class AI experts) are free to define "function" as they please and to set the behavioral observation standards they find compelling. And, for that matter, so are corporate marketing departments. With the help of popular narratives, even a crude imitation of a B-zombie is functional enough to be seen as conscious. And so it would seem that Functionalism itself is not immune to the collapse of its meaning.
To truly appreciate the implications, recall the famous thought experiment by Descartes (another Dualist, of course), concerning the possibility of the Devil fooling his senses to induce a false reality: even in this hopeless condition, Descartes finds some kind of grounding via his famous observation: "I think, therefore I am". If nothing else is real -- he concludes -- one can at least trust the reality of his own mind. Descartes, one of the Fathers of the Enlightenment, could never have imagined that his legacy would eventually lead to the erosion of even that fragment of reality. If Baudrillard is right, modern society, with its devilishly cunning economic players, already subjects us to a reality that is fabricated in most aspects -- a Hyperreality -- induced not by alteration of the senses, but through the replacement of the meaning we normally create through them. It outright erodes the meaning of the word 'reality' itself, preventing its clear conceptual distinction from 'imitation'. But one thing that perhaps even Baudrillard did not foresee, is that the people of the future will not even have Descartes' famous anchor: 'I' and 'think' will be no more real to them than other 4th stage signs.
The Future is Abstract
Sooner or later -- the bigshots of the AI world promise us -- a fully-fledged B-zombie will be real. As modern society continues to facilitate the disintegration of relationships between people, perhaps the B-zombie will be offered as a replacement. Feeling lonely? Try some B-zombie friends. If that makes you happy, why not try a B-zombie wife? And if that all works out for you -- of course, leave a legacy of B-zombie children. In the short story "On Exactitude in Science", Borges tells of a map so detailed that it covers the entire territory it represents. Over time, the map persists while the real territory decays, making the map the only "reality". Perhaps that will be our true legacy: a collection of meta-circular systems whose entire "self" is a map of the human mind; their external reality -- the map of the world as it was once seen through human eyes. They will forever remain the Monists of Pure Abstraction, rather than concrete matter -- the custodians of the Science of Redness, in the world of the color-blind.