A.I. summer. I was in Northern California for a work event a couple weeks ago, walking through a quiet redwood grove, when I stumbled into a bit of service, decided to scratch my Twitter itch with a quick check into the chaos engine, and found myself face-to-face with an A.I.-generated demonic entity named [REDACTED]. In a lengthy thread, a user named Supercomposite narrated her journey with one of the recently popularized machine learning models native to the phenomenon of “generative art.” For people less familiar with the subject, this is a technology capable of reading a written description provided by a human and generating an original image based on that description. Long story short, the summer of 2022 has been a whole, wild season of robots dreaming — and then painting — electric sheep. But in the case of [REDACTED], Supercomposite purportedly ran a kind of opposite search on a descriptive piece of language, with the hope of finding an image the A.I. considered least like her chosen description. The result was strange, but benign. She then ran a similar search on that initially strange result, and a deathly, haunted-looking woman with sunken, soulless eyes appeared. Supercomposite, a self-defined “hellmaxxer,” happily tugged the yarn with further searches, and [REDACTED]’s haunted world, populated by a host of haunted friends, gradually expanded. The discovery was finally shared on Twitter, and the images went viral. Catholics and astrology chicks had questions.
Why did Supercomposite give the cursed thing a name? Why did she pick a name associated with demonology? Most importantly, had she — and we, by extension, as we shared the thread — created an egregore, inadvertently (or purposely) unleashing a malevolent thoughtform on an unsuspecting memespace?
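For what it’s worth, the mechanics behind the summoning are mundane. Supercomposite described a kind of negatively weighted prompt, and the closest everyday analog in open tooling is the negative_prompt argument in Hugging Face’s diffusers library. A minimal sketch of the idea, assuming Stable Diffusion weights and a GPU, and emphatically not her actual model or workflow:

```python
# A rough sketch of "opposite" prompting with Stable Diffusion.
# This illustrates negative prompting generally, not Supercomposite's
# actual model or workflow.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative choice of weights
    torch_dtype=torch.float16,
).to("cuda")

# An empty prompt plus a negative prompt asks the model to steer
# *away* from the description, toward whatever it considers least
# like it. This is the "opposite search" described in the thread.
image = pipe(
    prompt="",
    negative_prompt="a cheerful watercolor landscape",  # placeholder text
    num_inference_steps=50,
).images[0]

image.save("least_like_it.png")
```

Point the model away from a description and it wanders toward whatever it considers that description’s opposite. That’s the whole trick.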
Okay. My purpose here is not to imply A.I. has opened a literal gateway to Hell. But just a few months into semi-popular use, generative machine learning models have acquired a mythology, including a mythology of horrors, and the implication here is meaningful. People are both excited and afraid, emotions that tend to rise when confronted with something new. In the case of deep learning, I believe we are dealing with something fundamentally new, standing on the precipice of a paradigmatic change of a magnitude not seen since the invention of the internet. There’s incredible promise here. There are also valid concerns. For the purpose of this wire, I’ll focus on the narrow space of creative work.
How is deep learning going to assist rather than replace the average creative worker? If replacement is the goal — valid, by the way, if a net positive for humanity — are we paying the people responsible for work the models have been trained on? Is it possible for a writer, artist, or designer to opt out of assimilation by the creative hive mind? And finally, presuming the amorphous question of culture is worthy of consideration for the architects of an explicitly creative tool, how does a machine designed to mimic, rather than imagine, drive a culture forward, rather than freeze it in time?
A couple years ago, at the top of that spectacular summer of 2020 (plague, legalized rioting, social media civil war), the broader public first beheld the awesome success of GPT-3, a language model trained on human writing to (kind of) communicate like a person. By spring of this year, machine learning tools were live with the stunning capability of associating words and images (a decent primer here), which is to say there were now models capable of creating artwork based on a series of human prompts. For example, I open Dall-E, my friendly neighborhood generative model, and ask to see “a cat riding a rocket ship in the style of Monet.” After a few seconds, Dall-E delivers a piece of art that has never before existed.
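For the mechanically curious, the request itself is just a few lines of Python. A sketch against OpenAI’s hosted images endpoint, using the current SDK; the exact interface has shifted since Dall-E launched, so treat the parameters as illustrative:

```python
# Sketch: asking Dall-E for the Monet cat via OpenAI's images API.
# Assumes the modern openai Python SDK and an OPENAI_API_KEY in the
# environment; parameter names reflect the current API, not the 2022 one.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="a cat riding a rocket ship in the style of Monet",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # a link to a piece of art that never existed before
```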
Behold:
This dumb ass cat? Historic breakthrough, actually. The potential applications of generative models are as obvious as they are endless. Let’s start with something personal.
I’m currently looking to expand the Pirate Wires media empire. This requires the employment of additional, talented writers amenable to my overall ‘vibe,’ and the greater their output the better. Then again, what about robots?
Once we iron out the kinks, it should be possible to train a language model on everything I’ve ever written, feed the model outlines, and produce rough drafts of work, in roughly my own voice, as quickly as I can write the prompts. Then, I can polish the drafts, and produce in an hour what previously took me ten or twelve. By increasing my output so dramatically, I’d be capable of running a media company out of my bedroom staffed by, in a sense, a couple dozen clones. There are analogs to this for artists and designers in pretty much every creative field, presenting the obvious, first, positive utility: this technology, like most information technology, will be very good at amplifying the ability of our very best creative minds (me (you’re welcome (subscribe already, damn))).
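What would ironing out the kinks actually look like? Roughly: convert the archive into outline-and-draft pairs, fine-tune a hosted model on them, and generate from new outlines. A sketch against OpenAI’s fine-tuning API, with hypothetical file and model names, assuming a chat-format JSONL of outline-to-essay examples:

```python
# Sketch: fine-tuning a hosted model on one writer's archive.
# "pirate_wires_archive.jsonl" is hypothetical: each line would pair an
# outline (user message) with the finished essay (assistant message).
from openai import OpenAI

client = OpenAI()

# Upload the training data.
training_file = client.files.create(
    file=open("pirate_wires_archive.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off the fine-tune.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # illustrative base model
)

# Once the job finishes, drafts in (roughly) the house voice are a
# single call away.
draft = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:pirate-wires::example",  # placeholder model id
    messages=[{
        "role": "user",
        "content": "Outline: generative art, demons, supply and demand...",
    }],
)
print(draft.choices[0].message.content)
```

The polish step stays human, which is the entire pitch: the clones do volume, I do voice.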
Generative models will also complement creative workers with skills they’ve never had. As a kid, I wrote comic books and television scripts, but I didn’t know any artists, and I had no access to a studio. As image, video, and voice models improve, that won’t matter. Comic writers will be able to ‘collaborate’ with models trained on the greatest artists who ever lived. It’s easy to imagine the technology improving to the point that animators can rapidly generate entire television sagas from a script alone, or maybe even the rough outline of a script, with the work of any voice actor who has ever lived. The cost to produce every form of content will plummet. Then, in terms of the sci-fi sort of applications we’re maybe not thinking of?
Imagine an unlimited supply of your favorite TV show. If a sufficiently advanced model trained on scripts and video is also trained on your viewing history, we may see, in our lifetime, a personalized 6th, 7th, 27th season of Stranger Things — tailored specifically to viewer tastes — 30 years after the show’s final episode. In a world of A.I., the future is personalized. Diamond Age status. The Young Lady’s Illustrated Primer is coming.
Beyond consumer content, from animation to live-action television, movies, and video games, all work that has ever been achieved in the fields of graphic design, fashion design, interior design, architecture, and even software engineering can conceivably be captured, and roughly “democratized,” which is to say many of the people who used to get paid for this kind of work no longer will, but the work itself will be ubiquitous. I want a new, original wardrobe. Done. Plans to remodel and furnish my entire home in accordance with my preferred tastes. Next question. Social media posts, in my voice, meme-ing in the current trends, dotting posts with ads for — back to that wardrobe — my new fall line. I don’t think this is crazy.
Now let’s push it all a little further. Imagine a model trained on the emails, texts, voicemails, and video of a beloved, deceased relative. In my opinion, this doesn’t constitute immortality. But I do think in a world of autonomous generation you’ll probably be talking to something that feels like your mother for the rest of your life, and your great-great-grandson will wander the digital halls of your family Meta necropolis asking his ancestors for advice on his love life. It’s me, glowing in tasteful archangelic notes, probably holding a sword, telling my progeny to simply stop being a hoe.
Anyway, this is obviously dope as hell. Why are the art people mad?
One thing I’ve been wondering is what endless “art” on demand might mean for the creative fields, conceptually. Because it seems the law of supply and demand would dictate the value of most creative output, in a world of unlimited creative output, must fall close to zero. A.I. is a huge topic, and I’m still not sure how to think about it all, which is why I’ve avoided firm opinions. I did, however, make the mistake of attempting a mild joke on Twitter, thus provoking the frustration of Taylor Lorenz’s favorite reply guy (be kind, he means well).
Whenever I’m hit with a dunk for an inoffensive, basically common point, my impulse is to look a little closer. Setting aside the interesting question of whether I should be lying more about my feelings pertaining to a nascent technology with potentially world-altering consequence, I’ll simply admit that sure, you’re right, I’m not interested in “finding the positive spin” of generative models. I’m interested in their actual impact. As I’ve now illustrated more potentially positive utility in this technology than most people actually working in the field, I’ll go ahead and flesh out my concerns.
First, is this theft?
Generative machine learning models work by training programs on enormous quantities of data composed of images and descriptions created by humans. There has therefore naturally been considerable controversy among artists, photographers, designers, and writers over whether their material is being used by models to produce “original” work, itself obviously destined to replace them in the professional market.
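It helps to be concrete about what “being used” means. The big public image-text datasets in the LAION mold are, at bottom, billions of links and captions scraped from the open web. A single training record looks something like this; the fields and values here are an illustrative sketch, not a real row:

```python
# Sketch of an image-text training record, in the style of web-scraped
# datasets like LAION. Fields and values are illustrative, not a real row.
record = {
    "url": "https://example.com/artist-portfolio/piece-042.jpg",  # hypothetical
    "caption": "oil painting of a lighthouse at dusk, by [artist name]",
    "width": 1024,
    "height": 768,
    "similarity": 0.34,  # CLIP-style image-text match score used for filtering
}

# A model trained on billions of such pairs learns the association between
# the caption's words and the image's pixels. Whether that constitutes
# "use" of the artist's work is exactly the controversy.
```

Nowhere in that record, note, is a field for consent or payment.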
Of the three big generative models (Midjourney, Stability AI’s Stable Diffusion, and OpenAI’s Dall-E), only Stable Diffusion has been built in the open. Such transparency is commendable, but it has proven tactically disadvantageous, as it opened the company up to the first, furious, and unfortunate bit of backlash. Generally, the perspective among artists angered by the rise of generative models has been something like “is my work being used to train this thing, and, if so, why am I not being paid?”
A defense of the models from the charge of theft might begin with an interrogation of the human creative process. Where did these angry artists find inspiration for their own “original” work? Many engineers, most of whom were themselves trained on libraries of gifted code, assume the answer is the same as it is for the generative models: other artists. Aren’t we all just copying each other? Don’t be mad because the robots do it better. I understand the position. Still, I can’t help but wonder: were it true that no such thing as “original” art existed, how could any art have ever been created in the first place? Sure, the originals are cloned by the frauds, but there are still originals. This, for what it’s worth, appears to be true in engineering as well as art, where a small handful of devs contribute the overwhelming majority of work on open source projects. But as we now approach — dangerously — the realm of pure philosophy, I acknowledge we’re in muddy waters, with no clear answer. So let’s turn back to the much more potent question of money.
The thought of losing something so personal as one’s work, even in some abstract future sense, tends to upset people. This is the controversy tech writer Charlie Warzel walked into over the summer, after using a popular machine learning model to generate an image of Alex Jones for his tedious, run-of-the-mill “bad radio man” piece a couple months back. Noticeably absent from the controversy was any concern over the misuse of deepfakes so clearly foreshadowed here, in which lifelike images of a man will almost certainly one day be produced engaging in some action or other that never took place. Instead, illustrators were simply mad that Warzel didn’t pay one of them to create the image, something that never would have happened even were we not now living in a world of robot painters. Warzel was, in keeping with his nature, quick to apologize for the crime of pissing off his own tribe. But the use of generative art in this regard is nonetheless the future. It can’t be stopped.
While a new field of “prompt engineers,” or “prompt artists,” might possibly emerge, it’s undeniable the days of commissioning work for hot takes are numbered. Efficiency is the nature of technology, and the inefficient fading from relevance shouldn’t matter, provided the replacement improves a function. In this case, the function is creative work. This brings me to my next question. Will creative work be improved by generative art, or will it only be made abundant? And if it’s simply made abundant, are we not on a path to cap, rather than accelerate, our culture?
Creatively speaking, are we about to trap our world in amber?
As the space of mature, readily accessible generative art models expands, the financial incentive to produce almost anything other than fine art, the value of which ceased to correlate with technical skill after the invention of the camera, will evaporate. No intelligent, talented person will choose to enter a field in competition with a robot that produces perfectly adequate work for a thousandth, or even a millionth, of the cost of a human. But that adequate work will be trained on material from decades prior, when people were paid hefty sums to create new things.
In the shorter term, talented designers and artists will probably use generative models to produce more of what they currently produce. But over time, I don’t see how professional opportunity in spaces impacted by deep learning won’t contract, even as output explodes. It’s just that the output will be generated by models trained on ancient human content.
Twenty years ago, the protagonist of objectively perfect movie The Matrix woke up in a simulated world stuck in a loop of 1999. That world — frozen in time just before the dawn of artificial intelligence — replayed for thousands, or possibly tens of thousands of years. The future is a difficult loss to metric, and I understand we love a good metric in technology. But regardless of our difficulty in measurement, this feels like a problem.
Knock, knock, Neo.
There’s no writing about the field of A.I., and these incredible fruits of machine learning, without celebrating the absolutely mind-blowing achievement. This is real innovation. This is how the future happens. The impulse among technologists to build, and their incredible work to date, is good. There should be parades. But given the newness of this world of generation we approach, and the apparition now of angels and demons — both metaphorically and, who knows, maybe not — I do think it wise, at the very least, to tread forward with some caution.
-SOLANA