G. B. Rango

Lucid dreaming is an excellent model for understanding the future of video games. Consider a sample dream scenario: you’re running across a concrete, parapet-perimetered, rectangular rooftop of some nondescript multi-story building, the wind whistling over a sprawling New York City-like metropolis. Chasing you are dozens of (surprisingly athletic) street cops, perhaps with a parent or other Freudian cameo mixed in (no judgment), and they’re quickly gaining on you and your sloppy dream gait. Out of rooftop runway, you decide to leap, choosing the fall in lieu of interminable dream-jail for whatever nonsensical crimes you had committed pre-lucid-awareness.
Wincing, willing a miraculous turn of events, you open your eyes and see the impossible: instead of plummeting toward your logical end as a visceral sidewalk puddle, you’re rising upward with accelerating force into a lush and vibrant environment of waterfalled cliffs and oversized prehistoric flora. Some idyllic rehashing of Avatar’s Pandora comes into focus, and as you glance backward, your pursuers transfigure into a now-dispersing gaggle of multicolored parakeets. You yourself have sprouted a pair of pterodactyl-esque, iridescent wings, the powerful heaving of which rockets you deeper into the sky.
Of course, none of this makes the slightest bit of logical sense. It is experiential free association, one strange circumstance morphing into another, an endless chain of weird correlations and incoherent imagination. However — with more of a framework, greater input control, and some semblance of direction, this sounds like one hell of an infinitely open-world video game. The inevitable interweaving of AI into game development workflows, underlying game engines, and end-product player interfaces is going to do much more than just speed up release timelines. Artificial intelligence will enable us to invent fundamentally new game types by partnering with neural net systems to entirely redefine the concept of “video games.” Waking experiences reminiscent of lucid dreaming in their impossible openness will be available in endless variety — a cornucopic marketplace of ideas and frameworks for gamers to pick from. The boundaries we’ve assumed are inherent to gaming may soon vanish entirely, revealing a wholly new universe of adventures.
In AI, the term “hallucinations” is usually derogatory. The AI is inventing things — numbers, quotes, data, concepts, historical events — that don’t exist in whatever reality we have instructed the model to operate within. The psychological connotation of the term is not incidental: current AI video models are “dreamlike” in nature; they have the same fuzziness, indefinability, and ungraspable lack of rigid consistency as an acid trip.
In time, however, AI’s ‘ability’ to spontaneously imagine the unexpected can be harnessed and guardrailed — a critical step for the future of video games — effectively creating sandboxes (open-ended, virtual, and interactive spaces) of infinite variety. Game development will be more about proposing a general concept and then paring down possibilities, as opposed to constructing a set of one-way rules, mechanics, and physics from the ground up. Constraining boundless chaos into cogent and digestible concepts — carving intelligible shapes out of undefined neural net fever dreams.
We are already seeing evidence that gaming-relevant AI models can adhere to visual, physical, and logical frameworks. For example, Midjourney is currently working on developing improved “character consistency,” the ability to conjure the same character reliably and coherently across generations. Video-generation models like Kling 1.6, Runway’s Gen-3, and Google’s Veo 2 are carrying this coherence torch from static shots to the realm of video clips. Video games are a logical next step after images and video — in addition to stitching frames together in a coherent way, what if you took user inputs and actually modeled an understanding of the underlying world so that a series of cause and effect relationships emerged? Artificial intelligence will not only show gamers what they want to see, but understand what that thing is so that the impact of players’ decisions is properly modeled.
Developers today are already leveraging AI in modest but increasingly integral ways — like dutiful, somewhat useful grunts that need constant micromanagement. Many AI game development tools are surface-level and assistive in nature, working within traditional infrastructural environments as opposed to disrupting them. In one instance, coder Ian Curtis showed how he was able to leverage Claude to quickly piece together a 3D environment in Unity, generating capsule colliders and stitching together basic animations. Pieter Levels, a prolific solo project developer, went a step further by typing a simple prompt into Cursor (an AI code editor) and creating a rudimentary flight simulator MMO (Massively Multiplayer Online) in just 30 minutes, which is now generating $50k MRR. If these kinds of games are getting spun up in a matter of minutes, what happens if masters of the craft like Hideo Kojima and his team start creatively wielding AI? Death Stranding 3 shows up in 2027, slog-free, and more immersive than any game ever built.
Unity, one of the most popular gaming engines, already features early integrations of artificial intelligence. Unity’s Muse, for example, offers in-editor AI that can generate character sprites, behavior trees, and more, while its Sentis tool allows developers to integrate AI models locally into Unity projects. Right now, generative AI effectively acts like an invisible production assistant — a henchman of sorts — suggesting script fixes, proposing asset variations, blending animation frames, or updating scene geometry. As these tools improve, and before game development is reimagined entirely, the traditional processes will become faster, less tedious, and more broadly accessible. Soon, however, AI will break free from its role as a developer sidekick and begin leading major aspects of a game studio’s development path and finished product.
Graphics have been a major part of the evolution of video games over time and are often referenced as evidence of their “progress.” From 2D 8-bit platformers, to the frequently memed PlayStation Hagrid, to Uncharted 4’s 2016 stunner, to consoles like the PS5 Pro existing almost solely to accommodate increasingly complex graphics systems, the visual fidelity of a game has been a milestone marker and inherent piece of its identity.
But many gamers feel that graphics updates and other nominal changes are being increasingly used to conceal a lack of underlying innovation and effort. It seems like flippant remasters are replacing the development of new and interesting games in order to cut costs and shorten timelines. And despite graphics being a major focus for AAA studios, games from 2024 just don’t look that much better than games from 2018. Video games, non-indie titles in particular, are trapped in a rut of perennial rehashing, safe moves, and sloppy cash-grab clones.
What if AI could make charging $60 for a remaster of a 10-year-old game obsolete? What if every existing game, both new and old, could be visually reimagined in an endlessly modular way by non-technical operators? Remasters at the click of a button — and not just updated graphics, but complete thematic reinventions? AI offers an escape hatch into endless creativity for individual devs, indie teams, and AAA companies alike — all of the above will soon be possible (and, on some level, is already happening).
A YouTube channel by the name of “VaigueMan” has posted a collection of visual gameplay transformations created using Runway ML’s Video-to-Video feature. The difference this AI-powered re-skinning of game footage makes is, in many cases, astonishing. It highlights how much of the experienced variation between games is a result of their visual and emotional profiles. VaigueMan has created Fortnite both as a vibrant world of soft pastel yarn weapons and knitted buildings, and as a violent World War II-era suburban massacre that is morbidly unsettling. He’s also reimagined Super Mario 64, demonstrating how different degrees of realism, simulated visual aging, color palette variations, motifs, lighting, and more result in what feels like multiple completely new games.
[Video: Realistic Fortnite AI re-skin | VaigueMan]
Currently, all of this visual re-skinning is done offline using game footage, not in real time during gameplay. If this could be done with no (or minimal) latency, it would feel as if you’re playing all sorts of new games. There could be turbocharged visual-mod marketplaces just for this layer, where independent creators can develop their own AI lenses through which games are completely reinvented. Want Halo to look like puppies romping around in an ethereal cloud world? Just install the Puppies and Clouds real-time visual filter. AAA studios will also inevitably launch games that have built-in generative AI (e.g. creating new character and map skins via text prompt), which will set a new standard for all future releases — exciting opportunities for both creativity and revenue generation exist here.
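To make the latency constraint concrete, here is a minimal sketch of what such a real-time “lens” layer might look like. Everything in it is a placeholder assumption: the stylize function stands in for a fast video-to-video model, and the grab_frame/present_frame hooks stand in for whatever capture and display pipeline a real mod would use.

```python
import time

import numpy as np

def puppies_and_clouds(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for an AI 'lens'. A real filter would be a
    fast video-to-video model; here we just rotate the color channels."""
    return np.roll(frame, shift=1, axis=-1)

def run_filter_loop(grab_frame, present_frame, stylize, frames=90, budget_ms=33.0):
    """Intercept each rendered frame, restyle it, and present the result.

    The hard constraint is latency: to feel native at ~30 fps, capture +
    inference + presentation must all fit inside roughly 33 ms per frame.
    """
    for _ in range(frames):
        t0 = time.perf_counter()
        frame = grab_frame()        # raw frame out of the game's renderer
        styled = stylize(frame)     # the AI re-skin happens here
        present_frame(styled)
        spare = budget_ms / 1000 - (time.perf_counter() - t0)
        if spare > 0:
            time.sleep(spare)       # hold a steady frame cadence

# Dummy capture/present hooks so the sketch runs standalone.
run_filter_loop(
    grab_frame=lambda: np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8),
    present_frame=lambda f: None,
    stylize=puppies_and_clouds,
)
```

The whole business model of a visual-mod marketplace hangs on that per-frame budget: the moment inference fits inside it, a “filter” stops being a post-processed video and becomes a live layer of the game.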
Research teams are already creating fully AI-powered, real-time, frame-generated versions of existing video games. These games have no underlying engine apart from their AI image-generation systems. DOOM, Minecraft, and CS:GO have all been re-created in this form after researchers trained diffusion systems on datasets from their respective games. It is important to understand here that there are no underlying character assets, world maps, or anything else that typically constitutes a video game’s construction. The model simply generates 2D images in real time, predicting each next frame from the user’s keyboard inputs. One might say the AI is imagining what, based on its past experience watching game footage, is likely to happen — a surreal, generated imitation from a mechanical mind that knows no other reality.
Of the AI-based DOOM, Minecraft, and CS:GO games, the DOOM simulation is the most similar to its source material. In simple terms, researchers at Google first taught an AI agent how to play DOOM, capturing millions of frames and corresponding controller inputs. That data was then used to train a diffusion model — a specialized neural network originally designed for text-to-image generation (e.g. Midjourney) — to predict the next frame of the game based on the previous frames. The result is a fully neural “game loop” where GameNGen creates visual scenes on the fly (again, without any DOOM code or traditional game assets under the hood). While riddled with odd graphical artifacts and occasional inconsistencies, AI DOOM is an important step forward in exploring how artificial intelligence can be used to generate games.
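As a rough illustration of the architecture (not the actual GameNGen code; the layer sizes, action encoding, and the single conv stack below are invented for brevity), the core object is a denoiser that sees recent frames plus recent player inputs and predicts the noise to strip from a candidate next frame:

```python
import torch
import torch.nn as nn

class ActionConditionedDenoiser(nn.Module):
    """Toy stand-in for a GameNGen-style diffusion backbone: predicts the
    noise in a candidate next frame, conditioned on a short history of
    frames and the player's recent inputs. All sizes are illustrative."""

    def __init__(self, history=4, channels=3, n_actions=8, dim=64):
        super().__init__()
        # Past frames and the noisy candidate are stacked along channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels * (history + 1), dim, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(dim, channels, 3, padding=1),
        )
        # Each discrete input (keypress) gets a learned embedding.
        self.action_emb = nn.Embedding(n_actions, channels)

    def forward(self, noisy_next, past_frames, actions):
        b, h, c, hh, ww = past_frames.shape
        x = torch.cat([noisy_next, past_frames.reshape(b, h * c, hh, ww)], dim=1)
        # Broadcast the pooled action embedding over the image plane.
        a = self.action_emb(actions).mean(dim=1)[:, :, None, None]
        return self.net(x) + a  # predicted noise residual

# One step of the neural "game loop": start from noise, condition on the
# last four frames and the player's keypresses, and denoise frame t+1.
model = ActionConditionedDenoiser()
past = torch.randn(1, 4, 3, 240, 320)   # four previous frames at 320 x 240
keys = torch.tensor([[2, 2, 5]])        # recent input IDs (e.g. W, W, SPACE)
noisy = torch.randn(1, 3, 240, 320)     # pure noise to be denoised
pred_noise = model(noisy, past, keys)   # iterate this to render the frame
```

The key property is visible right in the forward pass: there is no game state anywhere, only pixels in and pixels out, with the player’s inputs steering what the model hallucinates next.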
Right now, technical limitations make these simulations far from perfect — to be honest, they’re barely even playable. Beyond visual elements like resolution (the AI DOOM, the “best” of the three AI-replicated games, runs at 320 x 240 and only 20 fps), there is the major problem of inconsistency. This is most readily apparent in Oasis’ AI version of Minecraft: the world is too dynamic. If you see a vast tundra before you, and make a 180-degree turn, you might find yourself instead in a desert, or underwater, or a thousand feet underground. There is no object permanence in the environment, your inventory changes constantly, and having any sort of goal-oriented gameplay is rendered completely untenable. But the problems of consistency can and will be solved as new computational approaches are iterated, leading us to untouched frontiers in gaming.
LEVEL 3.1 — STRUCTURAL CONSISTENCY
For all of this to start becoming a reality, AI systems driving gameplay need to develop structural consistency — they need to “understand” the rules of the worlds they’re simulating so that players have the agency to set and achieve goals within a cogent framework. Enter Genie 2 and SIMA (Google) and WHAM (Microsoft) — systems that bring authentic physics, object permanence, and agent behavior prediction to AI-generated worlds. A simple example is modeling the role of barrels in a first-person shooter — in most games, when you fire at them, they blow up. AI needs to understand this interaction and replicate it in the expected way (e.g. if a barrel is created in the environment, and the barrel is met with major physical force, the barrel is likely to combust).
Genie 2 is described as a “foundation world model” that can generate fully interactive 3D environments from a single prompt image. Capabilities like memory retention of structures that move out of view and multi-NPC tracking make its environments fully “playable,” though the worlds typically devolve after 10-20 seconds. SIMA, Google DeepMind’s “Scalable Instructable Multiworld Agent,” populates these worlds and is able to follow natural language instructions (e.g. “open the chest”). Genie 2 then continuously reinvents its virtual world to accommodate SIMA’s actions.
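Neither system exposes a public API, but the division of labor can be sketched conceptually: an agent turns language into actions, and a world model re-renders reality around those actions while keeping out-of-view state stable. Every class below is a toy stand-in, not real Google code.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    description: str                            # what is currently on screen
    memory: dict = field(default_factory=dict)  # out-of-view structures

class ToyWorldModel:
    """Stand-in for a Genie 2-style world model: re-renders the scene each
    step while keeping previously seen objects stable (object permanence)."""

    def step(self, state: WorldState, action: str) -> WorldState:
        if action == "open the chest":
            state.memory["chest"] = "open"
            return WorldState("an open chest spilling gold coins", state.memory)
        state.memory.setdefault("chest", "closed")
        return WorldState("a clearing with a closed chest", state.memory)

class ToyAgent:
    """Stand-in for a SIMA-style agent: maps a natural language instruction
    plus the current scene to a concrete in-world action."""

    def act(self, instruction: str, state: WorldState) -> str:
        return instruction.lower()  # a real agent emits keyboard/mouse inputs

world, agent = ToyWorldModel(), ToyAgent()
state = WorldState("a clearing with a closed chest")
state = world.step(state, agent.act("Open the chest", state))
print(state.description, state.memory)  # the world reinvents itself around the act
```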
LEVEL 3.2 — CRYSTAL BALLS AND MULTIVERSES
With Genie 2 and SIMA, we’re moving from top-level visual generation to nascent real-world modeling and agent-based gameplay. Microsoft’s take on this type of system, unveiled in February of this year, is called WHAM (World and Human Action Model). While Google’s Genie 2 was trained on gameplay footage, WHAM was trained on real gameplay data: seven years’ worth of it, from Ninja Theory’s game “Bleeding Edge.” WHAM is able to guess what a player might do next under certain conditions, allowing developers to iterate and test different scenarios.
The key here, and with Genie 2, is the idea of mapping different branching pathways — alternate universes, if you will — based on a particular starting point. A huge practical application for the nuts and bolts of game development is using these human-like simulations in order to quickly tune difficulty — without needing in-person playtests early on in the development cycle.
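A hedged sketch of that workflow: treat the action model as a generator of simulated players, roll out many branches from one encounter, and binary-search the difficulty knob until the simulated clear rate hits the design target. The player model below is a toy probability distribution, not WHAM.

```python
import random

def simulated_player_survives(skill: float, difficulty: float) -> bool:
    """Toy human-action model: one branch of the multiverse of plays."""
    return random.random() < max(0.0, min(1.0, skill - difficulty + 0.5))

def estimate_clear_rate(difficulty: float, rollouts: int = 5_000) -> float:
    """Sample many simulated players (skill ~ N(0.5, 0.15)) from one start."""
    wins = sum(
        simulated_player_survives(random.gauss(0.5, 0.15), difficulty)
        for _ in range(rollouts)
    )
    return wins / rollouts

def tune_difficulty(target_clear_rate: float = 0.6) -> float:
    """Binary-search the difficulty knob until simulated players clear the
    encounter at the rate the designers want, with no human playtest."""
    lo, hi = 0.0, 1.0
    for _ in range(15):
        mid = (lo + hi) / 2
        if estimate_clear_rate(mid) > target_clear_rate:
            lo = mid  # encounter is too easy: raise the difficulty
        else:
            hi = mid  # too hard: back off
    return (lo + hi) / 2

print(f"suggested difficulty setting: {tune_difficulty():.2f}")
```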
This progress is early evidence of the AI-driven game engines that are to come, platforms which will allow developers to simulate consistent worlds, invent characters and test mechanics on the fly, and create expansive gameplay experiences that were previously impossible. Foundation world models and branching agent-prediction frameworks are precursors to a new generation of what I will refer to as “infinity engines” — platforms that let developers, both technical and nontechnical, and eventually even players themselves, imagine limitless experiential sandboxes while keeping gameplay satisfying.
LEVEL 4.1 — THE PRESENT
There are already a number of early AI-integrated games. Retail Mage, an indie game in which you work as a shopkeeper in a magical store called MageMart, transforms the humdrum of minimum wage into an improvisational challenge. Use text inputs to invent new items on a whim, converse with (or insult) customers, enchant, carve, and paint existing items, or try anything else you can think of.
1001 Nights, another indie game, features a user-created narrative in which players collaboratively weave a story alongside the King (an NPC) with the goal of getting him to mention specific words that can then be brought to life as in-game weapons and equipment. (OpenAI handles the textual elements, and Stable Diffusion evolves the world visually as your tale progresses). AI Dungeon, an older but still-popular AI-native text-based adventure game, is essentially an interactive D&D narrative engine. There is also Infinite Craft, a simple (and viral) game powered by LLaMA 2 which allows players to “make anything” through endless combinations of concepts (e.g. water + fire = steam, steam + fire = engine, water + wine = holy water, Nintendo + village = Animal Crossing, and so on). When a player comes up with a completely new combination, the AI system invents a result and stores it so that all players will have consistent recipes.
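Infinite Craft’s consistency mechanic is simple enough to sketch: canonicalize the pair, ask the model only on a cache miss, and persist the invented result so it becomes canon for everyone. The ask_llm function below is a placeholder, not the game’s actual model call.

```python
RECIPES = {("fire", "water"): "steam"}  # seed recipes shipped with the game

def ask_llm(a: str, b: str) -> str:
    """Placeholder for the model call that invents a plausible result."""
    return f"essence of {a} and {b}"  # the real game would query LLaMA 2

def combine(a: str, b: str) -> str:
    key = tuple(sorted((a.lower(), b.lower())))  # fire+water == water+fire
    if key not in RECIPES:
        RECIPES[key] = ask_llm(*key)  # invent once, then it is canon forever
    return RECIPES[key]

print(combine("Water", "Fire"))   # -> steam (cached seed recipe)
print(combine("steam", "fire"))   # -> newly invented, now consistent for all
```

Sorting the pair makes combinations order-independent, and the shared cache is what turns a one-off hallucination into a stable, game-wide rule.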
Dead Meat, an upcoming murder-mystery interrogation game, is another early example. By harnessing LLMs and voice-to-text features, this game allows players to ask a cohort of suspects (whose minds you can also read) literally anything. Your interrogees squirm and sweat as they react on the fly to your accusations and probes, accidentally leaking information during open-ended conversations that helps you construct a dynamic web of clues and solve the murder. Suck Up!, which also lets players speak to AI-powered NPCs, is a role-playing game where you play a disguise-donning vampire and sweet-talk neighborhood residents into letting you enter their homes (at which point you kill them and assume their identities). InZOI, an upcoming life sim releasing on March 28th, 2025, is basically Sims on steroids (with a “Smart ZOI” system powered by NVIDIA’s ACE that makes NPCs behave with more complexity). MIR5, an installment in the Legend of Mir series releasing later in 2025, will feature a boss called Asterion that somehow uses an LLM-based system to adapt to the tactics and specific skills of players attempting to slay it.
LEVEL 4.2 — INFINITY ENGINES¹
While early AI-powered games are cool, they’re not nearly as flexible, dreamlike, and groundbreaking as they will be in the future. When generative and predictive AI systems underpin game development and are integral to the player experience, what might video games look like? What if the entire game engine were built around generative technology, able to respond to each scenario uniquely and spontaneously? These hypothetical “infinity engines,” akin to a digital version of Star Trek’s holodeck, represent the emergence of lucid-dream-like properties in games — creative agency within a dynamically responsive environment, limited only by the user’s imagination and the developer’s carefully placed framework of constraints.
The idea space for infinity engines seems unlimited. Puzzle-platformers where your base character develops emergent abilities based purely on your playstyle. Stealthy players are able to dissolve into shadows; brute-force users grow nine feet tall and sprout additional (very jacked) limbs. The problems you face in the environment then adapt to challenge each character uniquely and satisfyingly. Or how about a leader-sim game where you guide and defend a planet against all sorts of evolving intergalactic and domestic threats, building your cabinet of alien advisors, each with their own unique quirks and powers? Draw your own characters (which then get transformed professionally by the game engine), design new skins for existing characters with text prompts or rudimentary sketches, and invent your own weapons and powerups mid-game.
Games might even become malleable in ways that seem insane — have you ever felt, when playing a game, that it would be really cool if you could just do “X” or that you’re extremely frustrated by some nagging mechanic “Y”? Explain that to the game, and enjoy your new experience. Mad you can’t murder a shopkeeper NPC and raid his store instead of grinding gold to purchase that potion you want? Change the rules, and take the loot, but expect the game to adapt (guards, armed, and lots of them). Hampered by the characters in your PvP not being able to double-jump? Here, now they can fly. Mods created on a whim, within the base game’s framework.
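One plausible shape for such whim-mods is an LLM that translates the player’s complaint into a structured rule patch, which the engine applies only if it passes developer guardrails. The sketch below is purely illustrative; no such engine API exists yet.

```python
# Current rules, the developer-approved knobs, and their sanctioned values.
RULES = {"double_jump": False, "npc_killable": False, "guard_response": "none"}
MUTABLE = {"double_jump", "npc_killable", "guard_response"}
ALLOWED = {"guard_response": {"none", "armed_patrols"}}

def propose_patch(request: str) -> dict:
    """Stand-in LLM: translate a player complaint into a rule change."""
    if "double-jump" in request:
        return {"double_jump": True}
    if "shopkeeper" in request:
        return {"npc_killable": True, "guard_response": "armed_patrols"}
    return {}

def apply_patch(patch: dict) -> None:
    """Apply only the changes that fit the developer's framework."""
    for key, value in patch.items():
        if key not in MUTABLE:
            continue  # guardrail: only approved knobs can move
        if key in ALLOWED and value not in ALLOWED[key]:
            continue  # guardrail: only sanctioned values are accepted
        RULES[key] = value

apply_patch(propose_patch("Mad you can't murder a shopkeeper NPC?"))
print(RULES)  # loot unlocked, and the world adapts: armed guards, lots of them
```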
Another example: in Minecraft, even in expansive mods, there are a set number of materials players can mine. In a hypothetical AI version of Minecraft, there could be infinitely many. But we can think on an even broader scale than materials — biomes of endless variety — things that are not just permutative, but completely new. Guess what? You just found Area 51 as a biome. And now you have to break in, find some collection of alien-specific ore, and mine it. With this never-before-seen alien ore now in your inventory, the AI will have to imagine its possible crafting endpoints. Combining a torch with smelted alien ore might make a lightsaber. Diamonds plus these glowing rocks might create an egg that, after some number of days, hatches into a Hydra that you have to fight and kill. (Or, alternatively, train and command.) And if you kill that beast, if you are powerful enough to, the AI will imagine some kind of commensurate loot reward, perhaps NFT-able, the likes of which will send you on another completely new and untrodden adventure of your own making. None of this would be pre-defined or pre-built; the AI would be dreaming it from the ground up within some set framework of the game’s reality.
The effects of this kind of openness will cascade beyond the siloed experience of video gamers and into the larger media ecosystem. It’s hard to get exact numbers, but the scale of gaming content is bewildering: people watched at least 15 billion hours of gaming content on Twitch in 2024, and YouTube gaming content was clocking 6 billion views per month (and growing) as of mid-2024. When “Let’s Plays” are unique, fully dynamic, endlessly evolving, and full of unknowable twists and turns, how much more engaging and valuable will they become? Efforts to prompt the discovery and generation of new possibilities within a game will be heavily rewarded — easter eggs and hidden endings, but in unlimited supply. Borges’ infinite Library of Babel comes to life, housing every conceivable variation of narrative, rearranging itself into innumerable, player-driven permutations.
The ultimate gaming engine might not just simulate worlds — it could read our intentions and craft experiences drawn out from our subconscious desires, effectively blurring the lines between dream, simulation, and experienced reality. If games merge with neural interfaces, they might become indistinguishable from lucid dreaming — immersive worlds conjured directly from your gray matter, effortlessly adapting and responding to the human mind in real time.
In the nearer-term, imagine what might be possible with lighter, cheaper, and more powerful versions of Apple’s Vision Pro. Gesture-controlled games that put you in worlds that feel wonderfully, or eerily, alive and unexplored. AI-driven horror games that integrate with fitness-tracking wearables, learning over time how to generate the most heart-pumping, thrilling, terrifying experiences for each particular gamer. A/B testing every possible sort of monster, atmosphere, and scenario, learning things about you that you may not even know about yourself. Going on virtual dates with your virtual girlfriend, who virtually consoles, therapizes, manipulates, and does whatever else you might ask of her (a very weird and disconcerting application, but an absolute inevitability).
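The fitness-wearable scenario is, at bottom, a bandit problem, and a sketch makes that concrete: each scare scenario is an arm, the heart-rate spike is the reward, and an epsilon-greedy loop gradually learns this particular player’s personal nightmare. The biometric reading below is simulated, and the scenario names are invented.

```python
import random
from collections import defaultdict

SCARES = ["slow_dread_hallway", "sudden_jump_scare", "whispering_darkness"]
totals, counts = defaultdict(float), defaultdict(int)

def heart_rate_delta(scare: str) -> float:
    """Simulated wearable reading; this player secretly hates whispering."""
    base = {"slow_dread_hallway": 8, "sudden_jump_scare": 15, "whispering_darkness": 25}
    return random.gauss(base[scare], 5.0)

def pick_scare(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the scariest known arm, keep exploring."""
    if random.random() < epsilon or not counts:
        return random.choice(SCARES)
    return max(counts, key=lambda s: totals[s] / counts[s])

for _ in range(500):  # one play session of adaptive terror
    scare = pick_scare()
    totals[scare] += heart_rate_delta(scare)  # bpm spike is the reward signal
    counts[scare] += 1

print(max(counts, key=lambda s: totals[s] / counts[s]))  # learned worst fear
```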
At what point does this stop being “gaming” and transform into something else entirely? Extreme escapism with unimaginable pull, a legitimate recreation of “Infinite Jest” from the eponymous novel, putting us in a WALL-E world — an alternate set of realities that people start to prefer over the actual reality that we all share. Personalized, impossibly tailored experiences, characterized by the “perfect” level of challenge — a difficulty tuning that is absent here in base reality.
In the end, artificial intelligence and its eventual “infinity engines” will open doors to millions of potential universes, but we are the ones who must choose which to walk through, which to seal off, and what kind of realities we want to invent beyond each threshold. The real spark — the creative alchemy of constraints, ethics, and curious wonder — still lies in human hands. Neural nets can give us near-limitless combinatorial power, but ultimately, we decide what forms come into being — it’s up to us to ensure that whatever emerges on the other side of this opportunity is not just infinite for its own sake, but intentional and meaningful. And what could provide more fertile ground for finding genuine meaning in the digital realm than sprawling, ever-expanding universes? Buckle up.
— G. B. Rango
--
¹ The moniker “Infinity Engine” has a special resonance in gaming history, thanks to the legendary RPG engine of this name powering classics like Baldur’s Gate (1998), and I would be remiss if I repurposed the term without paying homage. Turbocharged by artificial intelligence, a reality truer to its name than ever before will emerge.