The Great Replacement: When Man Becomes Machine

The fight to preserve and restore the white race is fraught with obstacles and adversaries. Most are known; even our opponents who have tried to diminish us while remaining hidden in the shadows have had their intentions exposed. For the immediate future, for the most part, we have a clear vision of the struggle and a reasonable sense of how to proceed (although there are still some disagreements about the latter).

However, we also face the possibility that technical advances will produce such sweeping transformations that we can scarcely fathom them, let alone know how to turn them to our advantage.

Envisioning the future at a time of rapid technical change is no mean task. Too often, analysis is conducted as if the world were relatively static, with expectations of a future much like the present. For instance, most of us can see how artificial intelligence can threaten employment in the next few years, but not how it will fundamentally alter existence further down the line. Another tendency that frequently colors anticipation of the future is longing for some past golden age, as we often prefer what is familiar to the unknown.

But we should not be planning on a return to the past—or even a world that will be recognizable in just a few years. Never before has the rate of change come close to what it is now. In light of new developments in artificial intelligence, nanotechnology, and robotics, only those connected to the world of technology are likely to grasp the sheer depth and breadth of anticipated changes. So we should listen to them when they tell us their plans for our futures.

Perhaps nobody is more gung-ho to spread the word about the high-tech future than author-inventor-entrepreneur Ray Kurzweil. He has been deeply connected to the highest levels of the tech world since the 1960s and has written a number of books on what the future is likely to bring, most notably The Singularity is Near: When Humans Transcend Biology in 2005 and its 2023 sequel, The Singularity is Nearer: When We Merge with AI. The 2005 book presents most of his major ideas; the latter is something of a 300-page addendum.

An example of Kurzweil’s unabashed cheerleading for tech change is his quotation of Swedish philosopher Nick Bostrom:

It is hard to think of any problem that a superintelligence cannot either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. [K1 259]

Of course, Bostrom is neglecting to mention some other things that could be eliminated: humanity, for one.

So who is Ray Kurzweil? He is a secular Jew and early boomer from New York who attended a Unitarian Church in his formative years. He had an early interest in technology and was something of a technical prodigy. An MIT-trained computer scientist, he has founded several software companies, mostly with an emphasis on character and voice recognition. But it is his role as an author and futurist that has brought him the most acclaim. His initial definition of the Singularity is the point when “the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” [K1 7] A simpler definition is the “merger of man and machine,” which Kurzweil believes will be achieved around the year 2045. Neither definition gives a full appreciation of the drastic changes he predicts.

Underlying Kurzweil’s perspective is the obvious observation that the power of ideas to remake the world is accelerating, what he calls “The Law of Accelerating Returns.” [K1 3] Soon, advances in AI and biotechnology will “exceed human intelligence” and “transcend the limitations of biology.” [K1 8-9] Giving humanity over to technology raises many grave philosophical questions, whose potential answers are either unobtainable or disturbing. Just how far beyond these limitations will we go? How much transformation can occur before we are no longer human?
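
To put rough numbers on that premise, here is a minimal sketch (the one-year doubling period is an illustrative assumption, not a figure taken from Kurzweil’s books) of how steady doubling compounds within ordinary human timeframes:

# Minimal sketch of compounding "accelerating returns."
# The one-year doubling period is an illustrative assumption,
# not a figure taken from Kurzweil.
DOUBLING_PERIOD_YEARS = 1.0

def improvement_factor(years: float) -> float:
    # Price-performance multiplier after `years` of steady doubling.
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 20, 40):
    print(f"{horizon} years -> ~{improvement_factor(horizon):,.0f}x")
# 10 years -> ~1,024x; 20 years -> ~1,048,576x; 40 years -> ~1,099,511,627,776x

On such a curve, the gap between a decade and four decades is not a factor of four but roughly a factor of a billion, which is why Kurzweil treats mid-century capabilities as qualitatively different from anything familiar.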

Kurzweil makes a case that it is likely our biological selves will become useless vestiges, far inferior to self-perpetuating technology. If so, we may even disappear altogether. Yet, he also suggests that, despite the disappearance of biological humanity, the enduring civilization will somehow remain human.

Of course, there is a darker side to tech advancement. The COVID crisis was a stark example of the sort of things we may begin to see on a regular basis, such as large-scale experimentation on human populations. In the 2023 sequel, Kurzweil even wrote that if we fail to “meet the scientific, ethical, social, and political challenges posed by these advances. . . our very survival is in question.” [K2 4] However, when you realize just how far he is willing to take our tech transformation, even what he would consider to be success could be disastrous.

To Kurzweil’s credit, he cites the concerns of William French Anderson, the father of gene therapy: “I fear. . . we do not really understand what makes the lives we are tinkering with tick.” [K1 196]

But he then dismisses Anderson’s concerns rather casually. They do not “reflect the scope of the broad and painstaking effort by tens of thousands of brain and computer scientists to methodically test out the limits and capabilities of models and simulations before taking them to the next step.” [K1 196] There are shades of “trust the plan” here; we’ve got big-brained experts on the job. This confidence in experts is especially disconcerting given Kurzweil’s continued positive attitude toward the COVID-19 vaccines.

Bostrom—whom Kurzweil clearly regards as an important voice for the future—has a vision of a tech-induced utopia:

A superintelligence . . . could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing personal growth, and to living closer to our ideals. [K1 260]

Bostrom’s vision may cause some to hear the soothing strains of “Kumbaya” playing in the distance. But it could also signal horror. In such a scenario, we will no longer be men, but cattle, no longer striving to create or to emulate the best of humanity: the noble ones, the honorable, the just, the brave. Instead, we will be satisfied to resemble H.G. Wells’s Eloi, living lives of meaningless pleasure until harvested by darker forces. Or perhaps, as we become completely superfluous or even counterproductive to machine civilization, we will be eliminated entirely.

It is tempting to regard Kurzweil as a kook who read too much science fiction as a boy. But tech has already progressed considerably in the direction he describes. His vision is also in accord with a wide range of futurists and tech wizards. Consider tech financier Peter Thiel’s prediction that “technology will replace politics.” In other words, machines will take over mankind’s decision-making. Or perhaps take note of Elon Musk recently urging people to go ahead and spend their retirement savings, since AI will provide everything needed in the future (which eerily echoes the World Economic Forum mantra of “You’ll own nothing and be happy”).

This is not to say the future is now written in stone. Even the most brilliant among us have been known to make errors or to be oblivious to unseen consequences—unpredictability is what makes life interesting. Caution must be used regarding Kurzweil’s predictions. Some from his earlier work have failed to materialize; for instance, he believed that AI would be embedded in ordinary items such as clothing, eliminating the need for physical computers by 2020. He also suggested that the real estate market would entirely collapse due to changes brought by technological advances. [K1 105]

But many of his other predictions have indeed manifested. In fact, enough have come true that his general vision of the future must be taken very seriously. For instance, in the 2005 book, he mentioned that meat would soon be manufactured in laboratory conditions by “cloning animal muscle tissue.” [K1 224] Recently, a Campbell’s Soup executive was fired for decrying the “bioengineered meat” used in his company’s products.

We are likely to see tech development happen in two stages. The first will be a period of general improvement to human existence. Human intelligence will be greatly enhanced, human longevity will be greatly extended, human capability in many facets will be heightened. In the second stage, biological humans will be either marginalized or completely eliminated, replaced by electronic components.

Stage 1: A Better Man

Right now, humanity is exploring the use of technology to augment human existence—or even to improve mankind. We have already made considerable progress: biotechnology is everywhere, including prosthetic devices, drugs, and surgical techniques that were unimaginable not that long ago. And artificial intelligence has already embedded itself in everyday life for perhaps billions of people.

Particularly important will be the development of nanotechnology—the manipulation of matter at the molecular and atomic levels. Soon, “small robots the size of human blood cells or smaller” will “travel inside the bloodstream,” [K1 253] monitoring cells and delivering needed nutrients with precision. This is already being done with animals. Such nanobots will eventually be reprogrammable, with functionality far exceeding any comparable biological component.

At first, nanotechnology will appear to be a great boon to mankind. “It’s going to solve the problems of biology by overcoming biological pathogens, removing toxins, correcting DNA errors, and reversing other sources of aging,” Kurzweil gushes.

But that statement already gives cause for concern. “Correcting DNA errors” sounds like it could be invaluable in fighting diseases with genetic components—or it could produce a nightmare of unintended consequences (or intended ones, if the wrong people are in charge). Still, Kurzweil is “all-systems-go” with genetic engineering. “By the 2020s,” Kurzweil suggests, “we will have the potential to replace biology’s genetic-information repository with a nanoengineered system.” [K1 232] In other words, “we could introduce DNA changes to essentially reprogram our genes.” [K1 233]

Nanobots will enable us to scan the brain more closely, and in greater detail, than external techniques can. By 2030, with such information, we should be able to replicate the human brain well enough to “pass the Turing Test, indicating intelligence indistinguishable from that of biological humans.” [K1 25] The interplay between the human mind and technology will greatly enhance human mental prowess. Already, by 2005, at Duke University, monkeys with neural implants could control robots with their thoughts alone. [K1 194] Two-way communication is anticipated in the very near future. Elon Musk’s Neuralink received approval in 2023 to experiment on people, including implanting a “1,024-electrode device” in a human brain. [K2 70] Soon, people will be able to use such devices to tap into the entire store of human knowledge and mentally perform complex computations with unbelievable speed. Kurzweil does not, however, talk much about the potential for authorities to tap into the minds of individuals.

Progress will continue at breathtaking speed. “By the 2030s,” he writes, “the nonbiological portion of our intelligence will predominate, and. . . by the 2040s, the nonbiological portion will be billions of times more capable” [than ordinary human intelligence]. [K1 201-2] Exactly how will the nonbiological portion treat its biological counterpart at that point? Remember, we are talking about artificial intelligence that is capable of deliberate, rapid evolution. Might it not envision the biological portion as deadweight in need of offloading?

Self-Replicating, Self-Evolving: Will We Lose Control?

Once the Singularity has been achieved and machines not only co-exist with men but dominate them, several key powers—and concerns—emerge. One is the ability of machines to self-replicate. Replication will simply be a manufacturing process, undertaken by machines, with no need for human activity. Robotics will exist to handle the transformation of materials into components, even down to the level of constructing entities by combining individual atoms. Kurzweil writes that “nanotechnology” will “ultimately enable us to redesign and rebuild, molecule by molecule, our bodies and brains and the world with which we interact.” [K1 227] This seems wildly speculative, but a German research team has already built a “‘DNA Hand’ that can select one of several proteins, bind to it, and then release it upon command.” [K1 235] It seems that, at some not-so-distant time, a similar hand could chemically bond with individual atoms in the atmosphere and use them in manufacturing processes.

But self-replication has a few potential problems. The first is the possibility of phenomena such as “gray goo,” which arises when nanobots self-replicate out of control, with the potential to “destroy the Earth’s biomass.” [K1 399]
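
To see why “gray goo” is treated as an existential risk rather than a mere nuisance, here is a back-of-the-envelope sketch; every number in it (nanobot mass, doubling time, total biomass) is a rough assumption chosen purely for illustration and does not come from Kurzweil:

# Illustrative sketch of runaway nanobot self-replication.
# All figures are rough, assumed orders of magnitude, not Kurzweil's.
BOT_MASS_KG = 1e-15       # assume one nanobot masses about a picogram
DOUBLING_HOURS = 1.0      # assume the population doubles every hour
BIOMASS_KG = 1e15         # rough order of magnitude for Earth's biomass

bots, hours = 1, 0.0
while bots * BOT_MASS_KG < BIOMASS_KG:
    bots *= 2
    hours += DOUBLING_HOURS

print(f"~{hours:.0f} doublings, ~{hours / 24:.1f} days of unchecked growth")
# Under these assumptions: about 100 doublings, roughly four days.

The point is not the particular numbers but the shape of the curve: exponential replication turns a microscopic seed into a planetary-scale problem in days, leaving little margin for detection and response.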

At some point, Kurzweil envisions the entire universe being saturated with intelligence. [K1 29]

It may be true that “our civilization will expand outward, turning all the dumb matter and energy we encounter into sublimely intelligent. . . matter and energy.” [K1 389] Who today can accurately say whether that will be physically possible in the future? And who can explain how it will be good for mankind, or even for the nonbiological civilization Kurzweil says is to follow? Once you can make use of all the matter and all the energy in the universe to self-replicate, atom by atom, all reality can be transformed, or perhaps destroyed. This may be, of course, very far “out there” in terms of likelihood. But once you start trying to alter the fundamental principles of the universe, is not the potential for disaster heightened?

Kurzweil also suggests that intelligent machines will eventually gain the ability to self-evolve. Unlike natural evolution, in which random mutations advance organisms depending on how successful they are over many generations, in machine evolution changes will occur both intentionally and rapidly. The need for a specific form of evolution will be determined by AI analysis, as will the design. Physical changes will be conducted by robots. As such, is it evolution or engineering? We are, after all, talking about a machine analyzing its own existence and designing improvements. Yes, it is a process of change, as in evolution. Yes, there may be trial and error, with a trail of failed experiments. But one process begins with mutation, a form of randomness, while the other begins with intentional design. Perhaps “evolving” is a word too far.

Kurzweil also notes that “evolution speeds up because it produces its own perfect order.” In other words, evolution produces variations that are more amenable to further evolution than what came before. However, given that the speed of evolution will be increased exponentially by its intentionality, this may create a permanent instability. For those for whom innovation is akin to a religion, this perpetual innovation may seem to be an optimal scenario. But instability has generally been a difficult environment for human thriving.

And the question must be asked: “what is it all evolving to?” Kurzweil has no real answer.

Stage 2

Eventually, advancing technology may no longer enhance human existence but supersede it—perhaps dispensing with it altogether. Kurzweil writes: “once nonbiological intelligence gets a foothold, so to speak, in our brains, it will be subject to the law of accelerating returns. Our biological thinking, on the other hand, is basically stuck.” [K1 257] In such a scenario, given the far greater computing power of nonbiological intelligence, the biological part of the human brain may become a mere vestigial organ, useless like the leg bones of whales or the wings of flightless birds.

The biological components of the human body will also undergo total or near-total replacement. Eventually, the question arises whether we should even bother to continue in human form. Would it not be more efficient for brain activity to occur solely in electronic components, and for machines designed for specific purposes to assume mankind’s physical functions, rather than human forms?

Kurzweil predicts the replacement will begin soon:

Let’s consider where we are, circa early 2030s. We’ve eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver, lower esophagus, stomach, small intestines, large intestines and bowels. [K1 307]

At least we will still have our “skeleton, skin, sex organs, sensory organs, mouth, and upper esophagus.” [K1 307] Or will we? As Nick Bostrom suggests, “nowhere on the path is there any natural stopping point where technophobes could plausibly argue ‘hither but not further.’” [K1 259] The bio-tech world apparently has plans for the rest of our bodies. For example, Kurzweil claims that “interlinking nanobots will one day provide the ability to augment and replace the skeleton through a gradual and noninvasive process.” [K1 307]

There will also be a point at which our actual red blood cells become obsolete, replaced by trillions of nanobots. But what happens when nanobots supply our cells with nutrition and remove our waste? Of what use is our digestive function then? Except, perhaps, the pleasure of eating? The entire human anatomy will come into question; the reproductive system will be irrelevant as a means of procreation since humans can be manufactured. Of what use is the human form at that point?

Is it Life?

Tech advances by solving problems. The specific problem Kurzweil most wants to solve is death. And he believes that we can do so by converting ourselves into electronic beings: “As we move toward a nonbiological existence, we will gain the means of backing ourselves up (storing the key patterns underlying our knowledge, skills, and personality), thereby eliminating most causes of death as we know it.” [K1 323]

But the ability to store information is not the central question; we can already do that to some degree. Rather, it needs to be asked: when biological humans are replaced by electronics, how is that life? This is an area in which Kurzweil struggles to make strong arguments. He can explain how technology can alter the world, how the tech-run world will be more efficient, and how there is a certain inevitability to tech’s advance. But it is much harder to make the case that electronic intelligence is of the same quality as human consciousness.

Still, Kurzweil makes it very obvious that, in his vision of the future, biology will be unnecessary. In this transition comes the question: what is it to be human? Is not death central to our humanity? Furthermore, if technology based on the deterministic action of physical material and energy predominates, will not life possibly become predictable? And if life becomes perfectly predictable, is it truly human?

Kurzweil seems to think it is. “Nonbiological intelligence should still be considered human, since it is fully derived from human-machine civilization and will be based, at least in part, on reverse engineering human intelligence.” [K1 317] But such arguments based on technical matters fall flat. There is something missing, some spark or drive that makes us human. A machine may need something for continued performance, but can it lust for it? Can desire make it mad, enraged, irrational, broken? Can it hope, or feel the pain and joy of self-sacrifice?

Furthermore, he suggests that, because machines can behave as if they have consciousness, they should be regarded as conscious:

I do believe that humans will come to accept that nonbiological entities are conscious, because ultimately the nonbiological entities will have all the subtle cues that humans currently possess and that we associate with emotional and other experiences. [K1 385]

Kurzweil defines higher consciousness as “the ability to have subjective experiences inside a mind—and not merely to give the outward appearance of doing so.” [K2 77] Still, no one has proven that the ability to store and manipulate information and to react deterministically to stimuli is truly a subjective experience. But he argues that the human brain can be investigated and re-engineered to the point that, along with being exponentially more powerful than a human brain, artificial intelligence will be able to emulate consciousness—necessary to defeat death. Future machines “will claim to be human and to have the full range of emotional and spiritual experiences that humans claim to have. And these will not be idle claims; they will evidence the sort of rich, complex, and subtle behavior associated with such feelings.” [K1 377]

But how can we be sure this is not just an idle claim? Certainly, machines can be programmed to react in specific ways as if they had emotions and felt pain. But will a machine have the capacity for love, let alone comprehend the rejection by a loved one the way humans do?

Just as Kurzweil cannot provide any definitive proof for his claims, I cannot provide proof to the contrary. But I instinctively recoil from the suggestion that machines can have true consciousness. And in the end, even Kurzweil concedes that there is no definitive “proof” of machine consciousness: “there exists no objective test that can conclusively determine its presence.” [K1 378] He adds that “we cannot penetrate into the core of subjective experience through objective measurement.” [K1 379] Of course, if we cannot penetrate into the core of something, we cannot translate it into computer code. Artificial intelligence may always be just that: artificial, without real feeling. Responses such as physical attraction, appreciation of beauty, and sadness would merely be reactions programmed into a machine, not truly felt.

Part of Kurzweil’s argument that machines can achieve consciousness concerns the way biological life is constantly renewing itself; our cells are not made of the same molecules that existed previously but are constantly being repaired and refreshed with new ingredients. “We know that most of our cells are turned over in a matter of a couple of weeks, and even our neurons, which persist as distinct cells for a relatively long time, nonetheless change all of their constituent molecules within a month.” [K1 383] Yet, even though we may be almost entirely new material beings after a month or two, consciousness remains. As we replace our biological components with nonbiological ones, does the same consciousness remain? Kurzweil suggests that it does; he insists that a human is just “a pattern of matter and energy that persists over time,” and that, if that is the case, one could ostensibly copy that pattern.

And yet, might not something be lost in the transition from biological to electronic? Is it possible that the biological components sum to some level of sentience that cannot be achieved by replacing them with artificial intelligence?

Kurzweil suggests another perspective from which to view consciousness: “whether or not an uploaded brain is really you.” [K1 201] The promise of defeating death fails if it is not. He suggests that, if the human brain is copied, it will be difficult to tell the original and uploaded version apart, except that the nonbiological version will be “increasingly capable.” [K1 200]

What Will be Lost?

So what does this mean to those who intensely feel the call of archaic concepts such as tribe and nature? Think what will be sacrificed to “perfect” or “improve” humanity: tow-headed children dragging their sleds through the woods at dusk after a great day on the slopes. An elderly man looking back on a life well-lived. A woman singing sweetly to the babe at her breast. An elderly person, with aches and pains of his or her own, nursing and soothing a dying spouse. All gone before the inexorable march of progress. Is extending life worth the loss of beauty?

One question Kurzweil never touches is who will be included in his brave new world. He speaks of “us” and “we”; does he mean all of humanity? Does he mean some sort of narrow elite, such as the community of advanced tech wizards? Or is he hiding some design to reserve the future only for a particular ethnicity? Just how many of us are going to make it through to the next stage is anybody’s guess at this point. Kurzweil, ever the cheerleader, assumes that more efficient utilization of resources will permit a massive increase in population—and that this will be good. And tech’s tendency to innovate for reduced costs will make resources accessible to all. I question his certainty of universal benefits.

Kurzweil realizes—and applauds—that technology is going to advance no matter what else happens, unless a totalitarian world government can stop it: “The only conceivable way that the accelerating pace of advancement on all of these fronts could be stopped would be through a worldwide totalitarian system.” [K1 407] Yet he regards totalitarianism as unlikely “because the increasing decentralization of knowledge is inherently a democratizing one.” This is a shaky assumption on two grounds. The first is the claim that knowledge is in fact being “decentralized.” Instead, it is all being linked together through the Internet, making it more amenable to total control. Additionally, we are increasingly seeing how “democracy” can be used as a weapon by a ruling elite, and that the great mass of people are easily manipulated into supporting the policies of the elites.

Furthermore, most totalitarian systems intent on keeping power will eventually realize that technology is a valuable tool for accomplishing their goals, and will expand it.

Problems

What happens when Kurzweil’s Law of Accelerating Returns meets the Law of Unintended Consequences meets Murphy’s Law? A single line of ambiguous or hastily written code could wipe out humanity.

Furthermore, man—or rather, men—have a strange capacity for hatred, of others, and of the self. Bad actors abound, and it is not always easy to know who they are. What will stop one deranged individual from adding a few lines of code convincing the Singularity that man’s existence has become counterproductive?

Certainly, as Kurzweil says, there will be safeguards to prevent artificial intelligence from becoming hostile. But as anybody familiar with software can attest, writing code is an imperfect art, whether the code is written by a biological human or generated by a machine.
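
As a minimal illustration of how fragile such safeguards can be, consider the following hypothetical snippet (the function, names, and scenario are invented for illustration; they do not come from Kurzweil or any real system). Moving a single word turns a safety rule into its opposite:

# Hypothetical example: one misplaced word inverts a safety rule.
def shutdown_required(core_temp_c: float, operator_override: bool) -> bool:
    # Intended rule: shut down whenever the core overheats,
    # unless a human operator has explicitly overridden the check.
    # Hasty version: the misplaced "not" negates the temperature test
    # instead of the override, so a shutdown is requested only when the
    # core is cool and the override is on, and never when it overheats.
    return not core_temp_c > 1000 and operator_override   # buggy

def shutdown_required_fixed(core_temp_c: float, operator_override: bool) -> bool:
    # The rule as intended, written out explicitly.
    return core_temp_c > 1000 and not operator_override

A human reviewer, or a machine reviewing its own code, can miss such a line precisely because it reads plausibly; at the scale of a self-modifying superintelligence, the cost of one such miss may not be recoverable.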

Another problem on the biotech side is the possibility of pandemics. One solution Kurzweil proposes is defensive technology, released pre-emptively. It may take the form of altering our genes—hardly something to be taken lightly. Another possibility is the release of instructions to attack harmful life forms, given to nanobots already present in our systems.

The most important strategy for Kurzweil is social. According to Kurzweil, “our primary strategy in this area should be to optimize the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity.” [K1 424] Of course, there is a question of “our” values. Or rather, whose values? For in the Western world, tech is currently in the hands of leaders who are antagonistic to many forms of real freedom (and, perhaps, to the continued existence of the white race). In fact, the values of this elite may be described as “globo-homo,” favoring a breakdown of national boundaries, identities, and traditional morality. It instead promotes international organizations and world government, as well as an egalitarian mindset. And yet, amidst all this internationalism, egalitarianism, and universalism, there appears to be one nation whose identity and autonomy remains sacrosanct.

And who will craft this framework of values for the machine world? Kurzweil suggests a plethora of think tanks and institutes, many with ties to Leftist organizations such as the United Nations. They offer such platitudes as the “Lethal Autonomous Weapons Pledge,” which declares that “the decision to take a human life should never be delegated to a machine.” [K2 279-80] Of course, while international celebrities flocked to sign it, none of the top military powers has signed on to the pledge. And what happens to such pledges and promises if biological humanity ends?

And the future control exerted on the population is not likely to have a light touch. “When we reach the 2020s and have software running in our bodies and brains, government authorities will have a legitimate need on occasion to monitor these software streams.” [K1 424] Kurzweil realizes that the dangers are too clear to ignore. He writes, “The potential for abuse of such powers is obvious.” [K1 424] But he dismisses them by saying only that we need to find “a middle path” between control and privacy. His words do not reassure.

Conclusion

We cannot stop the advance of technology, no matter how frightening it is. There is no going back in time, no opting out. Although its exact path is not yet certain, change is coming. And it will be enormous, with the potential to end biological humanity, to make human existence deterministic, or to permit the continued existence of only a small elite. The best option is not to flee from it, nor to try to stop it, but to possess it and influence it so that we will not be destroyed. We must remain at the forefront of technology; though seeking a better state of humanity by returning to the past may seem preferable, to fall behind means the current elites win. For, given the immense power of those in charge of the technocracy, it is unlikely that they will permit alternative communities (except, perhaps, as controlled “zoos”).

Getting in front of the tech revolution will be difficult if we continue to live in the same multi-ethnic, globalist society that the United States is today. We know who dominates American tech today: the Silicon Valley Tech Bros, academics, Jews, and some Asians. If our people are involved, they almost always bow to the anti-white zeitgeist. (Even Musk flip-flops on this.) Ray Kurzweil seems to be fairly representative of the tech elite. He does not discuss ethnicity but seems to regard mankind as all one group. We already know that’s bad for us, since we can assume we will be expected to submerge ourselves into the greater whole, not remain a distinct group. Then, he describes doing away with biological man altogether—again, that is not good for whites trying to remain a coherent people.

That makes our path clear. Only if we live in a society in which the fundamental principle is the preservation of the white race will we be able to maintain both our humanity and our very existence. Additionally, only if we embrace tech and make it the means to our future will we survive into the 22nd century. This makes the need to form our own nation—even if it is a shadow of the United States at its height—more urgent than ever.

As for Kurzweil, the man who would defeat death? He’s not easy to figure out. Maybe he is cynically aligned with security agencies, hoodwinking the reading public into joining their agenda. Or maybe he is actually a naïve, almost child-like true believer preaching the Gospel of the Singularity. Either way, in battling death in the manner he writes about, there is a high likelihood that it is life itself that will be destroyed or diminished.

Notes

Format: [Book Page]

K1 = The Singularity is Near: When Humans Transcend Biology

K2 = The Singularity is Nearer: When We Merge with AI
