Slop World
After Russiagate, two years of state-mandated house arrest, a war in Europe, a war in the Middle East, QAnon, BlueAnon, the transgender-industrial complex, and another interminable American election cycle, consensus reality has finally collapsed. Existing online in the current year without being exposed to multiple competing variants of mass formation psychosis is impossible. Every discourse point is simultaneously flattened and hyperbolic. Truth has been replaced by the frenzied rhetorical escalation of automata clamoring for relevance in an increasingly unconscious world.
Like occluded ancient practices cloaked in a ‘twilight language’, ‘slopworld’ has a built-in defense mechanism that protects it from being directly observed. Whenever attempts are made to describe, quantify, or confront it, a swarm of bio-agents from the threatened node emerges to neutralize the challenge. This is particularly obvious in cult political control environments, where unwavering loyalty is demanded under threat of being cast out and disavowed.
Digital hives dedicated to communism, transgenderism, scientism, anti-Trumpism, and also Trumpism, shill content ceaselessly, flatten reality, and strip out or drown out criticality. Powerful stakeholders then leverage network effects to launder private agendas in a patina of grassroots authenticity, making it difficult for onlookers to raise concerns.
The Old Internet
I grew up on the internet. This statement used to evoke a specific cultural and aesthetic meaning: visions of Geocities blogs, spirited arguments on imageboards, crude messaging apps, primitive websites hovering spectrally between fantasy and reality, raucous gaming lobbies in the middle of the night. These were destinations, places one occupied, and an exclusive and self-conscious way of being. There was an intentional aspect to online interaction because ‘going online’ demanded manual navigation.
As the Internet of Things eroded the boundaries between virtual and physical space and electronic gadgets became ubiquitous in the organization of daily life, new generations ceased to claim they ‘grew up online’. Online had become inescapable and pervasive. The maze-like entanglement of strange portals and unexpected encounters that defined the ‘Old Internet’ had become like the ‘Old West’ – a nostalgic memory imprinted in the minds of those who knew it and an incomprehensible aberration to those born under the sign of the Patriot Act.
The Old Internet is best grasped from the perspective of the culture of pre-digital reality. Because it was largely decentralized and user-driven, it was closer to a 20th-century physical space than a 21st-century digital environment. Users had to invest considerable effort and time to navigate it since content discovery relied on manual searches and complex site directories. Encountering something unconsciously on the Old Internet was almost impossible.
The archipelago of small, niche communities that made up the Old Internet was autonomous and self-sustaining. Content selection and moderation were transparent and driven by user preferences, and social hierarchies formed organically around expertise and merit. Complex ideas with multiple layers of nuance could be discussed over time in good faith by community members, and although disputes could be animated, they could also be resolved through an open forum methodology.
Early search engines like Yahoo!, AltaVista, and later Google were the primary gatekeepers of information and worked like librarians in a vast digital library. Their algorithms were designed to help users find information rather than manipulate behaviour to maximize engagement. Early networking sites like Friendster and MySpace employed similar algorithms to facilitate connections and content sharing. These algorithms were used to suggest friends based on shared connections and interests rather than gamify the user experience. They displayed content in chronological order, without algorithmic curation or prioritization.
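The contrast can be made concrete with a toy sketch. Everything below is invented for illustration (hypothetical data, not any early platform's actual code); it shows only the general pattern described above: a feed sorted purely by timestamp, and friend suggestions ranked by mutual connections.

```python
from datetime import datetime

# Hypothetical posts and social graph -- invented for illustration,
# not the actual code of any early platform.
posts = [
    {"author": "ana", "text": "new guestbook is up", "posted": datetime(2003, 5, 1, 14, 0)},
    {"author": "ben", "text": "forum meetup friday", "posted": datetime(2003, 5, 2, 9, 30)},
]
friends = {
    "ana": {"ben", "cho"},
    "ben": {"ana", "dee"},
    "cho": {"ana", "dee"},
}

def chronological_feed(posts):
    """Newest first -- no engagement scoring, no personalization."""
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

def suggest_friends(user, friends):
    """Rank strangers by their number of mutual connections with the user."""
    mine = friends[user]
    counts = {}
    for friend in mine:
        for candidate in friends.get(friend, set()):
            if candidate != user and candidate not in mine:
                counts[candidate] = counts.get(candidate, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

print([p["author"] for p in chronological_feed(posts)])  # ['ben', 'ana']
print(suggest_friends("ana", friends))                   # ['dee']
```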
The system was also rudimentary, transparent, and user-focused. Tools were designed to facilitate discovery and organization. In this quaint digital village, users maintained conscious control of their experience, and algorithms behaved as administrative assistants rather than cybernetic tyrants.
Evolution
Mass surveillance has traditionally been resisted, sometimes violently, by the American people in defense of privacy and civil liberties. But the psychological shift generated by the spectacular atrocities on September 11th, 2001, changed everything. Suddenly, Americans were prepared to acquiesce to the accelerated development and imposition of a mass surveillance apparatus that parts of the intelligence community had been working to unleash for several decades.
To service the needs of this faction, the digital environment was reprogrammed through behavioural psychology. The Patriot Act mandated the development of methods to produce data-driven psychological profiles of individual users to identify potential terrorists. The result was the construction of a one-way mirror facilitating behavioural analysis and, subsequently, behavioural modification.
The military intelligence state and its corporate allies essentially colonized the internet and changed its principles of operation to facilitate consolidation of control. Strategies designed to maximize the use of behavioural data began to proliferate. The DoD allocated billions of dollars to DARPA projects, which then contracted partners from academia and the private sector to optimize data collection methodologies.
The maximization of user engagement became the foundation of all subsequent situational modelling: the objective was to keep users engaged and feeding data into various interfaces as consistently as possible. The administrative algorithms of the Old Internet were gradually replaced by more invasive cybernetic systems.
The campaign to normalize data harvesting switched back and forth between the narrative that it was a matter of national security, necessary to stop terrorists, and the assurance that it was a benign technique employed by retailers to provide helpful recommendations. Targeted data collection was initially introduced as a feature of e-commerce companies like Amazon and eBay to recommend interesting products to customers based on their past purchases. Although some people may have been irritated by having their personal information used for marketing, they did not perceive it as an existential threat to their privacy, agency, or cognitive security. Both Amazon and eBay, of course, have subsequently been revealed to have developed extensive ties to the intelligence community through their corporate leadership and major contracts.
Once the Big Data Trojan Horse was through the gates of public consciousness, engagement baiting and psychological profiling began to take over the Internet. Data replaced oil as the most coveted resource for extraction. The introduction of ‘ad revenue’ downstream of data aggregation normalized the presence of advertisements in digital environments and provided a financial incentive to maximize traffic as opposed to curating niche communities based on shared interests. The proliferation of ‘clickbait’ headlines in the early 2000s marked the earliest shift toward the exploitation of emotional triggers for ad revenue.
On February 4th, 2004, DARPA learned the limits of the public’s tolerance when widespread outrage forced the cancellation of a project called LifeLog, which claimed to be able to “trace the threads of an individual’s life in terms of events, states, and relationships” and take in all of a “subject’s experience, from phone numbers dialed and e-mail messages viewed to every breath taken, step made and place gone.” That very day, Facebook was launched.
Facebook rapidly achieved dominance as the world’s foremost social media platform. As the site grew, its ‘newsfeed’ algorithm gradually evolved from a chronological format into one that curated content based on engagement. The change marked a watershed moment in the shift from intentional user control to opaque algorithmic curation and the prioritization of engagement-driven content.
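The structural change can be illustrated with a toy ranking function. Nothing here is Facebook's actual algorithm; the engagement weights and decay constant are invented. The point is which variable dominates: under chronological ordering, recency; under engagement ranking, reaction volume.

```python
import math
from datetime import datetime, timedelta

now = datetime(2010, 1, 1)

# Toy posts with invented engagement counts -- illustrative only.
posts = [
    {"text": "long essay",   "posted": now - timedelta(hours=1),  "likes": 2,   "comments": 1},
    {"text": "outrage bait", "posted": now - timedelta(hours=20), "likes": 900, "comments": 400},
]

def engagement_rank(posts, decay_hours=24.0):
    """Score = weighted engagement, discounted by age. The hotter the
    reaction, the higher the post floats, regardless of when it appeared."""
    def score(p):
        age = (now - p["posted"]).total_seconds() / 3600
        return (p["likes"] + 2 * p["comments"]) * math.exp(-age / decay_hours)
    return sorted(posts, key=score, reverse=True)

# Chronological order would put 'long essay' first (it is newest);
# engagement ranking surfaces 'outrage bait' despite its age.
print([p["text"] for p in engagement_rank(posts)])
```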
By the mid-2000s, a collection of ‘content farms’ had been established around a controversy-based model incentivized by algorithms designed to emphasize engagement. Websites like BuzzFeed and, later, Upworthy emerged as pioneers of ‘viral’ content, using data analytics to identify ‘trending’ topics and emotional triggers, which were exploited to craft content optimized for provocation. Here was the beginning of ‘slop’: mass-appeal content explicitly geared to maximize traffic at the expense of fostering meaningful interaction.
With algorithmic amplification accelerating the proliferation of emotionally exploitative clickbait, homogenization began to set in across the digital landscape. By the end of the decade, recommendation algorithms based on user profiling were mandatory components of site infrastructure necessitated by the financial imperative to maximize ad revenue. YouTube, which was purchased by tech giant and intelligence industry contractor Google in 2006, emerged as the pioneer of the hybrid recommendation algorithm, combining the viewing history and engagement patterns of users to increase views, likes, and comments.
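In outline, a hybrid recommender of the kind described might look like the sketch below. The weights, data, and scoring terms are invented, and production systems are vastly more elaborate; the sketch only shows the two signals being combined: similarity to the user's viewing history and raw engagement.

```python
# Toy hybrid recommender: one term for affinity with the user's watch
# history, one for global engagement. All data and weights are invented.
watch_history = {"user1": {"chess", "history"}}

videos = [
    {"id": "chess_lecture", "tags": {"chess", "openings"}, "views": 1_000,     "likes": 50},
    {"id": "viral_prank",   "tags": {"prank", "fail"},     "views": 9_000_000, "likes": 300_000},
]

def hybrid_score(user, video, w_history=0.5, w_engagement=0.5):
    # Content affinity: Jaccard overlap between video tags and history.
    history = watch_history[user]
    affinity = len(video["tags"] & history) / len(video["tags"] | history)
    # Engagement: like-rate plus capped raw popularity.
    like_rate = video["likes"] / (video["views"] + 1)
    popularity = min(video["views"] / 1_000_000, 1.0)
    return w_history * affinity + w_engagement * (like_rate + popularity) / 2

ranked = sorted(videos, key=lambda v: hybrid_score("user1", v), reverse=True)
print([v["id"] for v in ranked])  # ['viral_prank', 'chess_lecture']
```

Even in this toy version, a sufficiently viral item outranks content matched to the user's stated interests, which is precisely the homogenizing pressure described here.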
By the mid-2010s, outlets like Breitbart, Daily Wire, and Vox had perfected a formula to leverage algorithmic amplification. By employing polarizing narratives, incendiary headlines, and emotional appeals based on profiled demographics, these outlets were able to exploit existing algorithms to maximize content-sharing across multiple platforms. Platform-based echo chambers were born, with users effectively experiencing alternate realities, and a feedback loop was created between the creators of sensationalized content and the centralized social media infrastructure into which it fed.
The Attention Economy
Meta, Twitter, and Google consolidated dominance over the social media economy thanks to support from intelligence agencies. As early as 2013, it was revealed that Facebook (now Meta) granted backdoor access to the NSA to facilitate its PRISM surveillance program, in which the agency collected massive amounts of data on the domestic population under the authority of the Patriot Act. To this day, Meta, Twitter, and Google-owned YouTube have advertising agreements with government agencies, including the U.S. State Department and DoD, to run targeted ad campaigns for the alleged purposes of “counterterrorism messaging, public health awareness, and recruitment.” These platforms also collaborated with DARPA and IARPA on projects related to artificial intelligence, data analysis, behavioural modification, and social media monitoring organized around “national security,” “counterterrorism,” and “combating of disinformation.”
Social media platforms leveraged their partnerships with military intelligence to develop immersive, gamified experiences that mimicked the mechanics of gambling, inducing addictive behavioural patterns in users. Immediate-feedback ‘push’ notifications were contrived to create dependency on platforms for social validation, and the algorithmic reward system conditioned users to adopt a relatively narrow bandwidth of expression, leveraging emotional triggers to capture massive amounts of superficial attention. As users grew increasingly desperate for digital validation, the ‘attention economy’ expanded. Flattened, emotionally provocative, and intellectually and morally incoherent content became synonymous with the social media environment.
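The gambling mechanic in question is the variable-ratio reward schedule, sketched below with an invented payoff probability: because rewards arrive unpredictably, checking persists far longer than it would under any fixed schedule.

```python
import random

random.seed(7)

# Variable-ratio reinforcement, the slot-machine schedule: each 'check'
# pays off with some probability. The payoff probability is invented.
def check_app(p_reward=0.3):
    return random.random() < p_reward  # True = a notification 'hit'

checks, rewards = 0, 0
while rewards < 5:         # the user keeps pulling the lever
    checks += 1
    rewards += check_app()

print(f"{checks} checks yielded {rewards} rewards")
```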
The evolution of the digital environment proceeded in parallel to the development of the infrastructure that facilitated it. The advent and mass adoption of smartphones precipitated a deluge of digital applications that employed technical, linguistic, and psychological strategies to sway public opinion around data sharing and egregious invasion of privacy. By framing data collection as a necessary component of accessing various ‘life upgrades’, the tech industry normalized the voluntary surrender of personal information on an unprecedented scale.
As society adapted to its new amenities, a subconscious shift in the perception of personal technology began to emerge. Where previously technological gadgets had been viewed as external tools one periodically employed to achieve desired outcomes, perpetual access and increasing dependency instantiated the ‘device’ as an extension of its user. Users began to think of devices as trustworthy guardians of deeply personal and often compromising information. Apps soon came to mediate almost every aspect of life. Everything from health and finance to romance and even daily prayer was increasingly outsourced to apps for optimization and tracking. Devices and their resident applications were given custody over critical cognitive functions, including memory, scheduling, and navigation.
As the need for conscious intention in performing mundane daily functions declined, users began to interact with devices on a subconscious level. This extended into social media environments. Algorithms feeding on a profusion of personal data now began supplying users with a spectral hall of mirrors. Data collection was no longer a one-way street. Flattened meme discourse could now slip into the psyche undetected, take up residence, and lay eggs that would later hatch and return to their digital home in the form of even more distorted mirror images.
The New Internet
The creation of a feedback loop between the machinic environment and the deepest, most suggestible recesses of the human mind had long been a goal of cyberneticists and behavioural psychologists. With the birth of the polymorphic New Internet, the world saw the start of a new, self-regulating phase of their agenda.
Whereas the Old Internet was an archipelago of chimerical islands, yet to be charted, the New Internet offered the seductive, mesmeric allure of an opium den. Once inside, patrons adopted a passive role in their experience as algorithmic entities paraded past them adorned in colourful regalia, offering rare samples from a full spectrum of intoxicating effects drawn from the recesses of their own unconscious. With every visit, users furnished these digital drug parlours with more information about their proclivities, constructing a cybernetic hall of mirrors reflecting embryonic, decontextualized versions of their hopes, fears, desires, and pathologies to help them build an empathic rapport.
This was the product of a decade-long obsessional study performed across multiple intelligence agencies into the foundations of trust. Beginning in the early 2010s, DARPA, IARPA, GCHQ, the WEF, and their miscellaneous affiliate sub-contractors, NGOs, front companies, and public-private partnerships undertook to determine the conditions that underpinned the formation of trust, the identifying attributes of ‘trustworthiness’, and the methodologies they could use to hijack psychological mechanisms to ‘install’ trust at will. As research progressed, a clear strategy began to emerge and efforts coalesced towards developing it: the application of alternate reality games (ARGs) to behavioural and psychological inquiry in ‘real-world scenarios’.
In 2015, IARPA published a request for information that included contractual opportunities extended to academic and private entities able to “engage players in psychologically meaningful interaction within a complex, near-real-world context.” Research goals included “the study of social and psychological phenomena […] with improved control over independent and confounding variables,” and “gathering of detailed psychological, behavioural, physiological, and even neural data during complex social interactions.” Exploiting the existing algorithmic framework of centralized social media to funnel players into game scenarios was a logical progression, one that appeared in the descriptions of multiple grants and projects at DARPA, IARPA, the DoD, and DHS over the next several years.
The integrity of the data gathered from the ARG studies relies on the assumption that players are unaware of the manufactured nature of the game. Centralized social media provides the ideal environment for this activity since the agreements in place between platforms and government agencies facilitate “improved control over confounding variables.” Information filtering, content moderation, and arrangements for agencies to run targeted ad campaigns are combined with the cybernetic mirroring effect of the platform’s algorithms to captivate and categorize users into synthetic environments that facilitate the self-assembly of the ARG.
Algorithms determine the memetic environment, and the behavioural response elicited by memes feeds user data back into the algorithm to further ‘optimize’ content that reinforces the flattened reality. Depth is eroded and replaced with hive-like characteristics conducive to the propagation of homogenous memetic content that will be favored by the game’s curation algorithm.
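The loop can be compressed into a few lines of simulation. Every parameter below is invented; the point is the dynamic itself: exposure is weighted by engagement, creators imitate whatever was boosted, and memetic diversity collapses toward the single most provocative category.

```python
# Toy model of the curation feedback loop. Topics and their engagement
# 'pull' are invented; only the dynamic matters.
TOPICS = ["essay", "tutorial", "outrage", "gossip"]
ENGAGEMENT = {"essay": 0.2, "tutorial": 0.3, "outrage": 0.9, "gossip": 0.7}

shares = {t: 0.25 for t in TOPICS}  # initial memetic diversity: uniform

for step in range(30):
    # Curation: exposure proportional to current share times engagement pull.
    exposure = {t: shares[t] * ENGAGEMENT[t] for t in TOPICS}
    total = sum(exposure.values())
    # Creators drift toward whatever the algorithm rewarded last round.
    shares = {t: 0.8 * (exposure[t] / total) + 0.2 * shares[t] for t in TOPICS}

for t in TOPICS:
    print(f"{t:8s} {shares[t]:.2f}")  # 'outrage' approaches 1.0; the rest decay
```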
This process involves the creation and dissemination of ‘camouflaged’ memetic content that appears organic or innocuous but is engineered to manipulate behaviour, beliefs, and emotions. The reflexive conditioning and subconscious interface between subjects and gamified digital environments ensures the almost automatic propagation of camouflaged memes by tying them to the dopaminergic reward centre. Users subsumed by ARG environments are transformed into biological agents of encrypted agendas: meat puppets unconsciously deployed by military intelligence.
The ‘drone swarm’ mechanism also owes its development to military intelligence. In a 2014 study titled Containment Control for a Social Network with State-Dependent Connectivity, researchers described “a decentralized influence method […] to maintain existing social influence between individuals […] and to influence the social group to a common desired state […] within a convex hull spanned by social leaders.” Commenting on what prompted the research, co-author and control theorist Warren E. Dixon explained: “I heard a presentation by a computer scientist about examining behaviors of people based on social data. The language that was being used to mathematically describe the interactions was the same language we use in controlling groups of autonomous vehicles.”
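The underlying mathematics can be sketched numerically. The toy below is not the paper's algorithm (which treats state-dependent connectivity); it is the textbook containment-control iteration the quote gestures at: ‘leaders’ hold fixed states, each ‘follower’ repeatedly averages its neighbours, and every follower is drawn inside the convex hull spanned by the leaders.

```python
# Minimal containment-control sketch (simplified from the paper's setting:
# connectivity here is fixed, not state-dependent). One-dimensional
# 'opinions'; leaders are anchored, followers average their neighbours.
leaders = {"L1": 0.0, "L2": 1.0}                # fixed leader states
followers = {"F1": 5.0, "F2": -3.0, "F3": 9.0}  # arbitrary starting opinions
neighbours = {                                  # who each follower listens to
    "F1": ["L1", "F2"],
    "F2": ["L2", "F1", "F3"],
    "F3": ["L2", "F2"],
}

def step(followers):
    state = {**leaders, **followers}
    return {
        f: sum(state[n] for n in neighbours[f]) / len(neighbours[f])
        for f in followers
    }

for _ in range(100):
    followers = step(followers)

# Every follower has been drawn inside [0.0, 1.0], the convex hull
# spanned by the two leaders, without any direct command being issued.
print(followers)  # approx. {'F1': 0.375, 'F2': 0.75, 'F3': 0.875}
```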
The erosion of humanity’s higher faculties and subsequent devolution into unconscious, remotely controllable drone swarms through the flattening and fragmentation of consensus reality is the Alchemical Great Work of the cybernetic-industrial complex. The centralized infrastructure of the New Internet serves as a vast digital alembic, putrefying human consciousness in a closed system of nigredo cycles that mineralize the attributes of free will and conscious intent to remake man as the homunculus of the Algorithmic Adam.
The Authenticity Crisis
The protracted study of the mechanics of trust undertaken by megalomaniacal bedlamites was, by design, a calculated assault on intuition and free will. The subsequent weaponization of meme discourse rapidly eroded the implicit boundaries between genuine and manufactured content that, until recently, had been an elementary distinction of critical thought.
Genuine, thoughtful content today is deboosted and filtered out of the digital ecosystem because it lacks the polarizing qualities necessary to be favored by the algorithm. In its place, a synthetic pseudo-sincerity has evolved. Marked by narcissism, opportunism, and lack of depth, this counterfeit memetic sentiment employs performative vulnerability, strategic relatability, and emotionally manipulative messaging to mimic the traits of organic interaction. Influencer culture, corporate social responsibility campaigns, and forms of political activism that play upon parasocial relationships to manipulate the subconscious of users are all examples.
Pseudo-sincerity campaigns stage appeals to users by offering them ‘fellowships’ in social or political causes that, while framed as revolutionary or rebellious, are ultimately controlled using digital echo chambers to establish forms of consensus through group behavioural psychology.
In late 2011, a cutout company financed by the DoD called Robotic Technology Inc. gave a presentation at the Social Media for Defense Summit in Alexandria, Virginia, concerning the potential applications of ‘Military Memetics’. The presentation highlighted ways in which memes could be engineered and weaponized against “enemy populations” to modify their behaviour and make them more “accepting” of an “otherwise adversarial situation.” It also made extensive reference to The True Believer by Eric Hoffer, a classic book on the mechanics of mass movements examined through the lens of crowd psychology.
In The True Believer, Hoffer argues that, despite surface ideological differences, the common psychological dynamics of mass movements make them functionally interchangeable. The “true believer” is an individual dissatisfied with their sense of “self,” seeking to escape the burden of individual identity through assimilation into a collective. As he writes, “the individual fully assimilated into the collective never feels alone; to be cast out from the group is the equivalent of being cut off from life.”
The final result of the post-truth environment produced by algorithmic selection is the “stripping of individual identity” and the “full assimilation” into a customized digital “collective” to partake in a “ritual, ceremonial, dramatic performance, or game.” The algorithm generates the “grandiose spectacle” of innumerable hollow mass movements to erode a user’s intuition and enthrall them. As the experience of authenticity fades from memory, psychosecurity degrades to the point that they no longer possess the ability to resist the conditioning.
For a true believer, the cybernetic control environment is the world, remaining ‘relevant’ and ‘influential’ is the meaning of life, and the algorithm is its animating force. The subject thus becomes a biological automaton connected to the network through a rudimentary ‘machine-brain interface’ with the subconscious mind. This new species of hominid – ‘homo-algorithmo’ – is the First Man of the Cyborg Theocracy made in the image of its creator, imago data: a schizoid amalgam of memetic content, gnashing and clawing in a state of potential until observed into existence by human attention.
Algorithmic Adam shares with Biblical Adam the burden of original sin, but in truth, he has chosen a fallen state. In exchange for relief from the burden of freedom, he has been granted the passcodes to the make-believe of the digitized collective. Like Biblical Adam, his punishment emphasizes his subjection to the laws of the lower realm he inhabits and the suffering it bestows upon him. His brain is fried with broken memetic fragments; he is cursed to wander the land, frantically assembling ‘takes’: chimeric creations that he offers up to the algorithm in exchange for one more day of relevance.
Retvrn
For almost a century, countless World’s Fairs, novels, magazines, artworks, and films have fancifully portrayed a world where ubiquitous access to technology empowered individuals, elevating creativity, freedom, and self-determination. This genre whimsically casts technology as an emancipating force that serves an intelligent, conscious human populace in a subordinate, benevolent role, assisting with various environmental challenges and ultimately facilitating increased leisure time for intellectual pursuits. Given the overwhelming evidence of the organized construction of cybernetic control systems, it is likely that these comforting narratives were crafted to incentivize future mass adoption of data extraction technology.
The fracturing of reality and the implantation of mass delusion are undoubtedly accelerated by algorithmic hives, but there is also historical precedent for similar accomplishments through sustained parafictional narratives. A century of predictive programming expounding the glistening majesty of technological futurism has planted ‘memories’ in the collective human consciousness of an alternate reality that no one has personally experienced. Breaking the thrall of the digital environment depends in large part on our ability to return to a consensus reality where the tangible effects of our relationship with technology can be examined in a sufficiently critical light.
Short of levelling the data centres, the most direct route to reasserting dominion over machines is to drastically scale back the involvement of machine learning and algorithmic moderation in the affairs of humanity. Social media platforms should be compelled to revert curation algorithms to the chronological format of the Old Internet, providing users with tools to restore their control over the content they see. Engagement metrics should be disregarded as measures of relevance. The warehousing and sharing of personal data should be outlawed or made transparent through mandated personalized reports detailing the precise uses of the data and the profits being derived from it. Compensation commensurate with that profit should then be distributed to individuals akin to dividends on shares of stock. We have been collectively pimped by tech companies and state actors, who have deliberately compromised our sanity and failed to distribute a single cent in profit share. It is time to put a stop to this.
While we may never see the promised land of Cyberpunk Utopia, reclaiming our faculties from machines and their cybernetic operators brings us several steps closer to the ‘good timeline’. Humanity can coexist with technology without succumbing to homuncular programming. The red thread guiding man through the cybernetic labyrinth has always been conscious intention, and this thread must be taken up once again. Social interaction, public policy, romance, electoral politics, warfare, religion and spirituality, the creation of art and music, the raising of children, and other human pursuits must be defended against the computation of the ‘mechanical brain’.
The power of creation should never be wielded without moral agency. The grotesque, homuncular Adam in his golemic fallen form is a cautionary tale of the abomination brought forth by the transgression of this rule. Rearmed with psychosecurity, humanity can begin to transcend the flattened realities and biological drone swarms of slopworld to extinguish the Schizo Engine.