From HAL 9000 to R2-D2: AI's Blockbuster Journey in the Movies
How artificial intelligence is shaping the future of filmmaking
What's your favorite Spielberg movie? Popular choices often include classics like Jaws, Indiana Jones and the Temple of Doom, E.T. the Extra-Terrestrial, Jurassic Park, Schindler's List, and Saving Private Ryan. But one Spielberg film often overlooked in such discussions is A.I. Artificial Intelligence. Stanley Kubrick initiated the movie’s lengthy development process in the late 1970s, basing it on English writer Brian Aldiss's 1969 short story “Supertoys Last All Summer Long.” Spielberg took over the project upon Kubrick's death in 1999 and released the finished film in 2001. It centers on a young android, played by child actor Haley Joel Osment (The Sixth Sense), whose experimental programming enables him to experience love. Initially criticized for the seeming clash between Kubrick's existential ruminations and Spielberg's blockbuster style, the film has grown more timely and more appreciated in recent years. The BBC's Mark Kermode publicly apologized to Spielberg in 2013, calling A.I. Artificial Intelligence the director's “enduring masterpiece.”
Spielberg, of course, was neither the first nor the last director to put artificial intelligence on the big screen, where it has been a recurring sci-fi theme for decades, explored through the thought-provoking intricacies of 2001: A Space Odyssey and Blade Runner, the heartwarming charm of Short Circuit and Wall-E, and the futuristic dystopias of The Terminator and The Matrix, among many others. Cinematic portrayals of AI, from the endearingly cute to the menacingly ruthless, have long fascinated audiences. Yet, until recently, the idea of artificial intelligence wasn't taken all that seriously in the real world. In 1950, English computer scientist Alan Turing devised his famous Turing Test, proposing that if a machine could carry on a conversation that was indistinguishable from a conversation with a human being, that machine could be said to be thinking. The following year, American AI pioneer Marvin Minsky built the first neural network learning machine. AI progressed gradually over the decades — and yet the notion of an artificial intelligence that could mimic human consciousness largely existed alongside mythical quests for the philosopher’s stone or the Holy Grail, seemingly unattainable and far removed from people’s daily lives.
Rapid advancements in technology and computing power have shifted the landscape dramatically. Artificial intelligence has not yet progressed to the levels depicted in sci-fi — although some people are getting in trouble for claiming otherwise, as when Google fired senior software engineer Blake Lemoine last summer after he claimed that the company’s artificially intelligent chatbot was displaying sentience comparable to that of an eight-year-old child. But there’s still no doubt that AI is becoming an integral part of our modern world. With smart assistants like Alexa and Siri in our homes and in our pockets, and self-driving cars on our roads, AI had already made ever-increasing inroads into our lives. Then, late last year, OpenAI launched the first public release of ChatGPT, a sophisticated chatbot powered by a large language model, whose ability to generate human-like responses turned AI into a tangible and transformative force with the potential to revolutionize business, science, healthcare, communication, and even the creative arts.
ChatGPT naturally raised concerns about unleashing such a powerful technology on an unprepared humankind. While some have embraced it as a remarkable technological leap forward, others have articulated a sense of threat and unease. Educators worry about students using ChatGPT to complete academic assignments. Others argue that AI will eliminate countless knowledge workers’ jobs over the coming years, replacing lawyers, accountants, financial advisors, computer programmers, and many others. Still others voice concern that AI will destroy democracy by flooding the world with disinformation, or even lead to the extinction of humanity itself. The European Parliament has recently claimed that mitigating the risk of extinction from artificial intelligence should be a global priority.
The film industry, like many others, has been grappling with the profound impact of artificial intelligence, which has swiftly become an intrinsic part of its creative processes. The influence of AI on the industry is not some futuristic prospect—it's already a reality in the synthesized voices used in films and animations, as well as cutting-edge visual effects such as deepfakes and de-ageing techniques. Artificial intelligence digitally rolled back the years for octogenarian Harrison Ford, seamlessly erasing decades for a scene in Indiana Jones and the Dial of Destiny.
AI has also raised numerous questions about the sanctity of intellectual property, the authenticity of performances, and the essence of storytelling itself. The fusion of human ingenuity and artificial intelligence has begun to redefine not just how stories are told, but also who tells them and what it means to be a creator in the age of intelligent machines. Contributing to the creative process through visual effects is one thing; taking over the creative process itself is quite another. ChatGPT’s potential to craft treatments, pen scripts, and transform source material into screenplays has already become an issue in a lengthy dispute between the Writers Guild of America and the studios, tensions that a new agreement aims to resolve by setting limits on studios’ use of AI in the creative process.
Understanding audience preferences is also crucial for studios striving to create more impactful, successful, and profitable movies. In this arena, AI-driven audience analysis can harness the vast amount of data generated by social media, streaming services, and other online sources. Machine-learning algorithms can process this data, uncovering intricate patterns, trends, and audience behaviors that human analysis might miss. By examining viewer engagement metrics in-depth, AI can provide deep insights into audience preferences and sentiments that can even predict future trends. AI algorithms can also perform demographic analysis, helping studios identify ever-more granular audience segments based on factors such as age, gender, location, and interests. This, in turn, enables the creation of targeted marketing campaigns tailored to different audience groups.
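For readers curious about what that kind of demographic segmentation looks like under the hood, here is a deliberately simple sketch. The viewer data, the feature choices, and the number of segments are all invented for illustration, and nothing here reflects any studio's actual pipeline.

```python
# A toy sketch of AI-driven audience segmentation on hypothetical viewer data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic viewers: [age, weekly viewing hours, sci-fi affinity, drama affinity]
viewers = np.column_stack([
    rng.integers(16, 70, size=500),     # age
    rng.exponential(6.0, size=500),     # hours watched per week
    rng.random(500),                    # sci-fi affinity (0-1)
    rng.random(500),                    # drama affinity (0-1)
])

# Standardize the features so no single scale dominates, then cluster into segments.
X = StandardScaler().fit_transform(viewers)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Each cluster becomes a candidate audience segment for targeted marketing.
for k in range(4):
    group = viewers[segments == k]
    print(f"Segment {k}: {len(group)} viewers, mean age {group[:, 0].mean():.0f}")
```

Real systems fold in far more signals (streaming behavior, social media engagement, sentiment), but the basic move is the same: turn viewers into feature vectors and let the algorithm find the groupings.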
But there’s more. Not only can AI analyze the reactions of audiences, it can become that audience. In collaboration with strategic consulting firm McKinsey, researchers at the MIT Media Lab’s Story Learning Machine project detailed how they used a deep neural network to “watch” brief video segments, including movies, television shows, and online shorts, analyzing their perceived emotional impact second by second. The neural network then endeavored to identify the visual elements that created especially emotional or impactful moments. The researchers also developed a method for classifying stories into families according to their arcs—in other words, finding videos that shared the same emotional trajectory. A neural network analysis of more than 500 Hollywood movies and 1,500 short films on Vimeo found that most stories could be classified into a relatively small number of such families — just as Joseph Campbell, Kurt Vonnegut, and other story theorists have long believed — and could also predict which story families would attract the most attention online.
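The core idea, reducing each video to an emotional trajectory and then clustering the trajectories into families, can be sketched in a few lines. The per-second emotion scores below are synthetic stand-ins, not output from the MIT/McKinsey model, and the three toy arc shapes are purely illustrative.

```python
# A toy sketch of grouping videos into "story families" by their emotional arcs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

def resample_arc(arc, length=50):
    """Stretch or compress a per-second emotion trace to a fixed length
    so arcs from videos of different durations can be compared."""
    xs = np.linspace(0, 1, len(arc))
    return np.interp(np.linspace(0, 1, length), xs, arc)

# Synthetic "videos": each one is a per-second emotional-impact trace.
arcs = []
for _ in range(300):
    duration = rng.integers(60, 600)          # 1 to 10 minutes
    t = np.linspace(0, 1, duration)
    shapes = [np.sin(np.pi * t),              # rise and fall
              t,                              # steady rise
              1 - t]                          # steady fall
    shape = shapes[rng.integers(len(shapes))]
    arcs.append(resample_arc(shape + rng.normal(0, 0.1, duration)))

# Cluster the normalized arcs: each cluster is one candidate story family.
families = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.array(arcs))
print(np.bincount(families))   # how many videos fall into each family
```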
Predictably, neuroscience and machine learning are beginning to converge in the field of neuroforecasting, each augmenting the other. Paul Zak, a professor at Claremont Graduate University and director of its Center for Neuroeconomics Studies, authored the 2022 book Immersion: The Science of the Extraordinary and the Source of Happiness, which identifies the neurological basis for high-impact experiences. More recently, Professor Zak and his team applied a comprehensive machine-learning technique to brain responses and were able to predict hit songs with an astonishing 97 percent accuracy. This study, recently published in Frontiers in Artificial Intelligence, shows how machine-learning techniques improve upon a linear statistical model, which identified hit songs at a success rate of 69 percent, as well as a 2011 fMRI study that identified hits just 30 percent of the time. We are moving closer and closer to machines that can simulate and predict our cultural responses.
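As a rough, purely hypothetical illustration of that comparison, here is what pitting a linear baseline against a nonlinear machine-learning model on neural-response features might look like. The data are randomly generated, and the accuracy numbers this prints have nothing to do with the 97, 69, and 30 percent figures from the actual studies.

```python
# A toy sketch of the neuroforecasting idea: predict "hit" vs. "non-hit"
# from neural-response features, comparing linear and nonlinear models.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

# Synthetic songs x neural-response features, plus a label for whether each became a hit.
X = rng.normal(size=(400, 12))
y = ((X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.3, 400)) > 0.4).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
nonlinear = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("linear baseline accuracy:", accuracy_score(y_test, linear.predict(X_test)))
print("nonlinear model accuracy:", accuracy_score(y_test, nonlinear.predict(X_test)))
```

The point of the comparison is simply that nonlinear models can pick up interactions in the brain data that a straight linear fit misses, which is the pattern the published study reports.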
Artificial intelligence is not sentient — at least not yet. Spielberg’s A.I. Artificial Intelligence still lies in the realm of science fiction. But such breakthroughs in applying machine learning to the analysis of story deeply inform our work at the Story Rules Project, which seeks to refine age-old storytelling techniques scientifically. This means we’re paying rapt attention to scientists’ efforts to study the versatile computational capabilities of machine-learning systems, and integrating their findings into our research and our ongoing work with filmmakers. We’ll address this more in future posts.