I suspect that in the decades to come, 2023 will be noted as the year that AI started to take over the world, and humanity both embraced the possibilities and recoiled in horror at them. It was no longer science fiction that a “computer” could answer more than rudimentary questions without extensive programming.
I use “computer” in quotes because we’re no longer talking about a singular device, such as a phone or laptop. Something else that’s often overlooked is that our devices are less about local processing than they are terminals for access to a global computer: capabilities exposed through a multitude of online services that require little more than a browser and a few clicks to use. That means we have access to the global computer wherever we are, so long as there’s a way for our device to see it.
And that level of ubiquity is just the start. Our “personal assistants” – Google, Siri, and Alexa – are only the beginning of a J.A.R.V.I.S.-like experience, at least once we figure out the inherent flaws in LLMs and their ability to deal with the unknown in reasonable ways.
And it’s that very situation – dealing with the unknown in a reasonable way – that is at the heart of my post. Right now, we’re dealing with a very unrealistic view of what AIs/LLMs can do. On the one side, we have the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) on strike because they fear that their work is in the process of being taken over by AI so it can be readily commoditized into Frankenstein productions for which they’ll never see a dime, or worse, that will bring ruin to their names and legacies. And they’re not wrong, because on the other side, we have the production companies who have proven, for decades, their willingness to throw out a cheap knock-off to make a few bucks rather than invest in quality.
For the record, I stand with the WGA on this. Yes, there will always be some hack who turns in a trash script that gets turned into a trash movie, and you don’t need an AI for that. But I stand with them because otherwise we’ll just see more trash, and our culture and our society will suffer as a result. Don’t believe me? How much of your life involves scenes or moments from your favourite TV show or series, or quotes from a movie? Those all came from somewhere. Our culture – worldwide, I should add; Hollywood is currently the loudest voice, but it’s just the first to deal with this – is dependent on the broadcast arts for entertainment. So, yes, I want a writer providing us with story, giving us the characters and plots we need to escape, even if just for a while.
Okay, that’s about the longest preamble I’ve ever had to a post, so let’s get to the point, where the studios are utterly missing the problem with trying to use AI to replace a writer: with our current understanding, it’s never going to work.
If you haven’t heard this before, let’s deal with the key word in all of this: Intelligence. If you go through Merriam-Webster, Oxford, or Collins, they all take the same basic approach: an ability to learn, to reason, to comprehend concepts, to understand truths, to establish relationships, to assimilate information, to solve problems. Oddly, all of that is an intake process – a one-way feed of data into a model that … does nothing outwardly visible. Which, honestly, exactly describes the basic model of an LLM – it acquires information, and builds and refines a model.
But a model is nothing more than a structure to contain that information. One notable part missing from the above – and it isn’t present in all the definitions – is producing knowledge, which is formed from the information stored in the model. (Wisdom is a further refinement, testing that knowledge against experience.) The catch is how we perceive that knowledge, which is what denotes intelligence.
Generally, we can experience another’s knowledge merely by interacting with them – we get a sense of what they know and how they interpret it through questions and actions. We can see that in animals as well, and not just trained ones; we can recognize the knowledge (and intelligence) in wild animals, as in Jane Goodall’s observations of chimpanzees working out how to get a termite snack.
Our LLMs, as they stand, could not solve a problem they had never seen before, let alone create a problem for someone else to solve. And that’s what a writer does, through plot lines and characterizations: they build a puzzle for us to solve.
If you’ve ever watched any arced episodic bit of media, be it traditional television, streamed shows, or even serial movies, you’ve come to experience and (presumably) enjoy characterizations and plotlines that can extend for multiple seasons, often going into some very complex and strange places. A good writer knows how to craft those sorts of things (some even build entire TV series around them, such as Babylon 5, which was crafted on a primary five-year arc). AI? You’ll be lucky to get something recognizable for a single episode, and it’ll be so disjointed that you’d have to treat it as a “clip” episode.
This is because AI, in its current form, is stateless – it doesn’t have a “memory” in which you build a profile, create characters, define plots, mark the passage of time, or set the places where action occurs. Any request to an AI requires extensive work just to get back to where you were, never mind where you’re trying to go.
This is because LLMs are best described as statistical models that capture language use at a point in time. Yes, they can accurately determine the next most likely word, but … do you really want that? Do you want such predictability in your stories? No plot twists, no sudden character reveals (forget Tootsie), and never mind trying to create the most complex of all sci-fi tropes: the time travel episode (Babylon 5’s single such episode took two and a half years to build up to).
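To make the “next most likely word” idea concrete, here’s a deliberately tiny sketch in Python – a toy bigram counter, not how a real LLM works (those use neural networks over tokens), and the corpus and function names are my own invention – that always picks the statistically most common successor:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a corpus,
# then always predict the single most frequent successor.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def most_likely_next(word):
    """Greedy prediction: return the statistically most common next word."""
    return successors[word].most_common(1)[0][0]

# Three of the four words following "the" are "cat", so the model
# will pick "cat" every single time.
print(most_likely_next("the"))  # → cat
```

Feed it any word from the corpus and it hands back the same “safest” continuation every time. Note that it’s also stateless in exactly the sense above: every call starts from the same frozen counts, and a model built to maximize likelihood is, by construction, built to avoid surprise.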
Even if you were living in the moment, could you imagine an AI producing the dialogue found in any of Quentin Tarantino’s films? Or, for that matter, any of the Zucker Brothers’ zany escapades? I wonder if AI could even reasonably reproduce anything resembling dry, absurd British humour. And that’s if you’re just dealing with English – multicultural situations are an entirely different matter, one LLMs have no method of navigating without risking cultural conflict.
Yes, there are huge (and valid) concerns that studios want to digitize performances and use those however they see fit, once the technology has evolved to the point where we could not tell the difference. Make no mistake, they’re painfully close: we’ve already seen several examples showing that the uncanny valley is closing up. Actors are wise to protect their likeness as much as possible, lest they lose control of it entirely.
We are on the edge of significant change, where our own brilliance and genius threatens to make us irrelevant as our creations become more desirable than ourselves, and (to our own detriment) less expensive to use. The human element is something we should cherish and protect as much as possible, not because we’re in danger of losing it to corporations’ profits, but because it is unique and flawed and beautiful and terribly damaged in ways that no amount of brilliant programming can ever truly mimic.