Everything Will Be Synthetic

Imagine a future where nearly everything you see, hear, and experience digitally is artificially generated. We are not merely approaching that moment—we are entering it. AI technologies capable of producing hyper-realistic images, videos, audio, and immersive virtual realities have evolved rapidly. Soon, distinguishing reality from AI-generated content will become not only difficult but nearly impossible. In the near future, anyone could create a cinematic-quality film complete with music, visuals, and richly developed characters—with no human actors or directors required.

Synthetic media, powered by advanced AI models like GPT-4, Midjourney, DALL·E, and deepfake technologies, already produce strikingly convincing content. As Berry (2025) discusses, we have entered an “algorithmic condition”—a moment in which machine-generated cultural production not only becomes indistinguishable from human creation but begins to actively reshape our understanding of authenticity and experience.

A 90s suburban bike ride generated by Sora

A hummingbird animation generated by Sora

This transformation is not merely technical. It has deep cultural and philosophical implications. Berry (2025) outlines the concept of the “Inversion,” a critical shift where computational processes begin to define reality itself. In this framework, synthetic media do not simply mimic human expression—they generate culture autonomously, reorganizing our symbolic environment. Where Walter Benjamin once spoke of the mechanical reproduction of art diminishing the aura of authenticity, AI-generated content today creates works with no original human reference, shifting the very notion of origin and authenticity.

While this technology opens vast potential for education, entertainment, and accessibility, it also carries significant risks. According to Europol (2023), the rapid evolution of synthetic media technologies poses a threat to public trust, creating new challenges in misinformation detection and verification. The “verification crisis,” as Berry puts it, highlights the fragility of our current mechanisms for establishing shared social reality.

Additionally, a recent study by Gutiérrez-Caneda and Vázquez-Herrero (2024) investigates the transformative impact of AI on fact-checking practices. Their research reveals that AI is now embedded in every phase of the verification process—from identifying misleading claims to analyzing images and texts, and even interacting with users via chatbots. Yet, the same tools that enable efficiency and speed in countering misinformation also introduce new complexities, including algorithmic bias, hallucinations, and unequal access to technology across regions.

Moreover, Berry (2025) introduces the idea of “post-consciousness,” a condition in which the boundary between human cognition and algorithmic input becomes porous. Unlike traditional false consciousness, in which social conditions are misunderstood, post-consciousness implies a co-constitution of experience between human and machine. Our thoughts, decisions, and interactions are increasingly mediated—and even generated—by algorithmic systems.

We are also seeing the emergence of what Berry terms “automimetric production”—cultural value loops in which both content creation and consumption are automated. Examples include AI-generated music streamed by bots to create revenue without human engagement. This represents a profound shift in the cultural economy, one in which human participation becomes secondary to the computational feedback loops optimized for profit.

Even our memories and collective history may be reshaped by AI. As large language models increasingly serve as repositories of knowledge and cultural reference, they may begin to overwrite traditional forms of remembering, archiving, and verifying. What is remembered—and how—is increasingly subject to the logic of algorithmic filtering and generation.

Gutiérrez-Caneda and Vázquez-Herrero (2024) also highlight that AI tools used in fact-checking must be transparent and accountable. Journalists and technologists alike call for greater ethical oversight and the development of robust standards to ensure that the very systems used to fight misinformation do not themselves erode truth. As AI tools become more integrated, fact-checkers are evolving into hybrid teams composed not only of journalists but also software developers and data scientists.

While synthetic media are proliferating rapidly, we have been interacting with synthetic worlds for a long time. Decades of research in media psychology and communication studies show that humans are inclined to respond to synthetic agents as though they were real. As Reeves and Nass (1996) famously demonstrated, people treat computers, television, and other media as if they were social actors, responding with empathy, politeness, and even moral judgment. Who hasn't felt a pang of genuine sadness watching Mufasa's death in Disney's The Lion King? The synthetic, animated world elicited authentic emotional reactions, demonstrating that our brains are already wired to deeply connect with artificial experiences.

The Lion King

Navigating this shift requires not only regulation and transparency, but also what Berry calls “constellational analysis”: a method of examining the intersecting technical, social, and cultural forces that define the algorithmic age. Critical reflexivity—our capacity to examine our own assumptions and methods—becomes essential if we are to resist being subsumed by the very systems we create.

The world is becoming synthetic—not just in its media, but in its modes of thought, expression, and meaning-making. And yet, within this transformation lies the potential to shape these technologies toward human flourishing rather than instrumental rationality. The challenge ahead is profound, but not insurmountable. It is a call not only to innovate, but to critically imagine.

References

Berry, D. M. (2025). Synthetic media and computational capitalism: Towards a critical theory of artificial intelligence. AI & Society. https://doi.org/10.1007/s00146-025-02265-2

Europol. (2023). Facing reality? Law enforcement and the challenge of deepfakes. Europol Innovation Lab. https://www.europol.europa.eu/publications-events/publications/facing-reality-law-enforcement-and-challenge-of-deepfakes

Gutiérrez-Caneda, B., & Vázquez-Herrero, J. (2024). Redrawing the lines against disinformation: How AI is shaping the present and future of fact-checking. Tripodos, 55, 55–74. https://doi.org/10.51698/tripodos.2024.55.04

Kietzmann, J., Paschen, J., & Treen, E. (2020). Artificial intelligence in advertising: How marketers can leverage AI. Journal of Advertising Research, 60(1), 10–22. https://doi.org/10.2501/JAR-2020-001

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
