Generative AI is at the Jurassic Park Moment

A lot of people are concerned about Generative AI right now. We’re looking at a very wide swath of “intelligent” tools that are taking people’s jobs away. Or at least that’s what we’re told.

The truth is, the media is really good at inducing panic. That’s been their game for the better part of a decade (probably longer), stirring up fear because it either sells more newspapers or encourages people to share links on social media, driving more visits to questionable websites that have more ads than content.

The truth of Generative AI is this: we’re at a turning point in society. Yes, this is technology and yes, this is on the internet, but this isn’t about either of those things. This is us, the human species, starting to realize the decades-old trope of an unfeeling automaton that can address our every question.

I’ll dispense with the painfully obvious allegory to slavery. Because if you haven’t seen it yet, you’re not paying attention.

Anyhoo.

Science fiction has been selling us the idea of an all-knowing digital butler for decades, most brilliantly portrayed (I think) by J.A.R.V.I.S. in the Marvel Cinematic Universe. It’s captured most succinctly in Avengers: Age of Ultron, when Tony Stark notes:

“Started out, J.A.R.V.I.S. was just a natural language UI. Now he runs the Iron Legion. He runs more of the business than anyone besides Pepper.”

The key thing there is “natural language UI”: something that can take a normal conversation and derive the intent from the cruft. (If you think that’s not significant, try physically writing down the words you use to talk to a colleague, and then understand that if you had to communicate that to anyone else, it would require a lot of extra explanation and context.)

But it’s not just understanding what you’ve said once. Anyone can search for “recipes that use up the chicken in my fridge”. Throw that into Google, and you’ll get a list of websites with recipes that put that old poultry to good use. But there will be recipes in there that you don’t like, ones that include ingredients you detest. Where the “AI” portion shines is when you respond: “I don’t like olives.”

That follow-up is the “intelligent” part: understanding that a refinement needs to exclude certain options. We’re not yet at the point where AIs (such as Google, which, by the way, already has a very detailed profile of you) know your likes and dislikes and automatically include or exclude certain options, but that’s coming, make no mistake.
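If you want to see how small that leap looks in code, here’s a toy sketch. Every name in it is made up for illustration; nothing here comes from a real search or LLM API. A real assistant has to infer the exclusion from your sentence, whereas this toy version is simply handed it, and that inference is exactly the part that used to be the hard bit.

```python
# Hypothetical sketch of the refinement step described above: a first query
# produces candidate recipes, and a follow-up like "I don't like olives"
# becomes an exclusion applied to the same result set.
from dataclasses import dataclass, field


@dataclass
class Recipe:
    title: str
    ingredients: set[str]


@dataclass
class RecipeSession:
    candidates: list[Recipe]              # results of the original query
    excluded: set[str] = field(default_factory=set)

    def refine(self, disliked: str) -> list[Recipe]:
        """Carry the earlier query forward and drop anything using the ingredient."""
        self.excluded.add(disliked)
        return [r for r in self.candidates
                if not (r.ingredients & self.excluded)]


# The "I don't like olives" turn becomes a refine() call on the same session.
session = RecipeSession(candidates=[
    Recipe("Chicken puttanesca", {"chicken", "olives", "tomatoes"}),
    Recipe("Chicken fried rice", {"chicken", "rice", "egg"}),
])
print([r.title for r in session.refine("olives")])   # ['Chicken fried rice']
```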

Where we are in all of this, right now, is the beginning. The easiest way to frame this is to look at movies. Any given movie, these days, has elements of computer graphics (CG) in it. Sometimes it’s obvious, sometimes it’s not. If you’re watching anything from the aforementioned MCU, the CG can be exceedingly heavy, to the point where about the only real thing on screen is part of an actor. (And if we’re talking Avatar or its siblings, well, it’s only the actor’s data.)

Sherman, set the Wayback Machine for the early 1990s. Director Steven Spielberg had become attached to a certain script involving dinosaurs running rampant on a remote (fictional) Costa Rican island. He wanted dinosaurs, big and fearsome, on screen in a way that hadn’t been done before. The technology of the time was to do this with stop motion animation, and one of the best ever to accomplish it was Phil Tippett, an Industrial Light and Magic alumnus. You didn’t get any better.

At the time, ILM was known as the best effects shop in the world. They built models, they invented motion control, created Photoshop, gave us video morphing. Most of the special effects we know today came from the geniuses at ILM when they were pressed to do the impossible. But at the time of Jurassic Park, digital effects were sparse and limited to small amounts of screen time; the bulk of the work was still physical models and complicated camera rigs.

But deep within Industrial Light and Magic, there was a windowless room referred to as “The Pit”. In this room were two rogues, Mark Dippé and Steve “Spaz” Williams. They were not going to let an opportunity like this go by without a fight. They created a demo reel of a Tyrannosaurus rex. Nothing fancy, nothing too elaborate, but enough to demonstrate that computer graphics had reached the point where they could generate a new reality.

Upon seeing this demonstration, depending on whom you ask, Spielberg turned to Tippett and said “You’re out of a job”, to which Tippett replied: “Don’t you mean extinct?” (Other versions have Tippett declaring: “I think I’m extinct”.) No matter the exact wording, it was recognized, there and then, that cinematic special effects were about to change. But not a single person in that room at the time, regardless of their respective brilliance, could have imagined that one day we would have entirely computer-generated movies with such intense realism that they cause people to have significant psychological reactions.

This is where we are with Generative AI: the moment when we’re recognizing that the change is here, but we don’t yet know what it will be.

Okay, back to the movie: the world changed, and we now had computer-generated dinosaurs. But what of Phil Tippett and his team? They’re still credited in the movie, and it’s not a pity credit – they were still there, still valuable, still contributing to the final result. Phil Tippett was named on the Academy Award for Best Visual Effects.

This is how it changed for them: they were no longer making stop motion animated dinosaurs. But they were still providing the motion.

At the time, computer animation was still very primitive. You had models with skeletons and control points; an animator set key poses in a sequence, and the software blended them together into smooth motion. Great, wonderful … but unless you had someone who knew how to set that motion, you’d get something that was noticeably “wrong”. No human has ever seen a real dinosaur move, but there’s something about biological movement that we seem to instinctively understand.
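For the curious, here’s roughly what that blending looks like, stripped to the bone. The joint, the frame numbers, and the angles below are all invented; real tools use splines and full 3D rotations, but simple linear interpolation is enough to show why the key poses matter so much: the computer only fills in the in-betweens, and the life (or lack of it) comes from the poses a human sets.

```python
# Minimal sketch of keyframe blending: an animator sets a joint's rotation at a
# few key frames, and the software interpolates the frames in between.
def interpolate_joint(keyframes: dict[int, float], frame: int) -> float:
    """Return the joint angle (degrees) at `frame`, blending between key poses."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the two key poses that bracket the requested frame and blend linearly.
    for start, end in zip(frames, frames[1:]):
        if start <= frame <= end:
            t = (frame - start) / (end - start)
            return keyframes[start] + t * (keyframes[end] - keyframes[start])


# Made-up key poses for a neck joint: frame number -> rotation in degrees.
neck_keys = {0: 0.0, 12: 35.0, 24: 10.0}
print([round(interpolate_joint(neck_keys, f), 1) for f in range(0, 25, 6)])
# [0.0, 17.5, 35.0, 22.5, 10.0]
```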

This is where Phil Tippett and his team remained in the picture. While they didn’t produce the stop motion (known as go motion) for the movie, they understood movement in a way that the boys in The Pit couldn’t even begin to grasp. So instead of moving the dinosaur models directly, ILM created armatures that looked like metal dinosaur skeletons for the animators to move, generating the data that the animation software would turn into a 3D animated dinosaur.

There’s a reason that, 30 years after it was made, the T-rex’s escape from its pen, where it bows and roars, still sends chills up people’s spines. It’s not because it’s the best-ever render of a dinosaur; it’s because it ticks all the boxes of “real” that we need.

In essence, Phil Tippett’s career changed. He still made motion look amazing, but the method by which that motion was recorded and expressed changed. And to be fair, stop motion is still around and still a viable art form (look at anything Laika has produced; it’s all excellent). The knowledge of motion has been taught and retaught, the practice refined, and the input options dramatically improved through techniques such as motion capture. The business changed; careers were not destroyed.

That’s what we have to recognize here, now: yes, Generative AI has arrived and it will change the game. But we also need to recognize that it cannot be anything more than a tool.

Artificial Intelligence is not intelligent. It’s an algorithm: a fancy algorithm, programmatic logic. There is no creativity, no autonomous invention. Which is why, if you use any of these tools as they currently exist, it doesn’t take much to push them past their limits. Humans think very differently than Large Language Models (LLMs) that are “trained” off the content of the internet (honestly, it’s a wonder there’s not more cat-influenced output … or extensive porn). And that’s where we still have, and will retain, the advantage.

So why have these things? Speed of iteration, and the reduction or removal of trivial lookups and tasks. Personally, I would love nothing more than for an AI to understand my family’s eating habits, so I can prompt one for a “family menu plan for the week, different than the last three weeks”. A few years ago, that would have been pure science fiction, bordering on black magic. Today? We’re so painfully close that it’s almost “why doesn’t this work yet?!”
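To show how unmagical the plumbing for that could be, here’s a hypothetical sketch of assembling such a prompt. The helper function and the household data are my inventions, not any real assistant’s API; the point is that the “memory” the assistant needs is just context we carry forward with the request.

```python
# Hypothetical prompt assembly for the menu-plan request described above.
def build_menu_prompt(dislikes: list[str], recent_meals: list[str]) -> str:
    """Turn household preferences and recent history into a single prompt."""
    return (
        "Plan a family dinner menu for the next seven days.\n"
        f"Avoid these ingredients entirely: {', '.join(dislikes)}.\n"
        "Do not repeat any of these meals from the last three weeks:\n"
        + "\n".join(f"- {meal}" for meal in recent_meals)
    )


prompt = build_menu_prompt(
    dislikes=["olives", "cilantro"],
    recent_meals=["chicken fried rice", "spaghetti bolognese", "fish tacos"],
)
print(prompt)  # ready to hand to whatever model you use
```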

The world is about to change again. But it won’t be about a piece of hardware (like your smartphone). It’ll be about programs that are predictive enough, and informed enough, to hold something close to a real conversation, and to turn that conversation into valuable results.