
M3GAN is a horror-comedy about an artificially intelligent killer doll. On its comedy side, it’s very, very funny. On its horror side, it has more thrills and chills than abject fear. It also has a fair number of gory murder scenes. Through all of this, the movie is clearly styled after ’80s B-movies, but it isn’t the kind of nostalgic splatterhouse homage that loses its place in time or becomes muddied by anachronisms. Its irreverence is aimed at our present moment. M3GAN is a very sharp movie. It is also exactly what it looks like it is. It is not, at all, cerebral; it makes no attempt to come off like it thinks it is, and it respects its audience enough not to try to stun them with technobabble or other forms of faux intellectualism. The result is one of the most coherent and accessible treatments of real-world issues in AI to be released in mass media.

I want to be exceedingly clear that M3GAN is still a creepy doll movie. You can (and possibly should) just watch it for that. I saw it in theatres not long after finishing a fairly painful series of computer science exams, so my experience was coloured by that close timing. I was expecting trash—the kind of film where keeping a running count of all of the failures and inconsistencies makes the experience worthwhile. I really like bad tech movies, but I didn’t find one in M3GAN: I was entirely taken aback by the coherence of the film’s technical dialogue. When Megan’s creators—roboticist toymaker Gemma and her colleagues—are talking with one another, it sounds believable. This isn’t to say that the specific outcomes are realistic, but rather that this thread of the plot isn’t a story about people who don’t know what they’re doing and haphazardly crash their way into disaster. It’s about competent, knowledgeable people who create a disaster because their actions have consequences they didn’t know to consider, and who carry on anyway past the point where those problems become apparent.

M3GAN’s strength in terms of how it represents AI comes not from technical accuracy but from its focus on the human characters in the film and how they interact with technology and each other. M3GAN asks whether, in the wake of technology that produces increasingly human-like facades, we are still treating people like people, and whether we are respecting our own humanity.

M3GAN builds its satirical commentary off of the scaffolding of a tragedy—the death of Gemma’s sister and her husband, which leaves behind Gemma’s ten-year-old niece, Cady. Gemma takes Cady in, but she isn’t prepared for it, and emotionally neglects her. In an attempt to help Cady that becomes more of a way to avoid spending time with her, Gemma builds the titular artificial companion, Megan. When the company Gemma works for finds out that she has been using company resources to build Megan, she nearly loses her job—but manages to sell Megan to the company’s management by exploiting Cady and her experience. Gemma remains firmly in denial until she can no longer maintain her self-delusion. Throughout all this, the child actress who plays Cady gives her performance the gravitas that might be expected of a serious drama. No matter the absurdity, what’s happening is real to Cady, and that grounds the film in an emotional believability that makes it all the more compelling.

The tropes that make up the “creepy doll” in horror are particularly apt for discussing anthropomorphic AI, and M3GAN’s use of them rather than more typical science fiction tropes allows it to sidestep the narrative trap of having to spend its time litigating Megan’s humanity or lack thereof. From the film’s early technical discussions to a jarring scene at its conclusion depicting Megan—face half torn off—trying to manipulate Cady’s emotions through contorted facial expressions, it’s clear that Megan isn’t a person. Megan is a system designed to mechanically process information, make inferences, then choose a course of action that maximises the likelihood of a desired outcome being met. Megan is presented as many things—a friend, a teacher, a counsellor—but must ultimately be understood as an object. Creepy dolls, too, appear to have agency, but it comes from a place that is fundamentally inhuman, perhaps one that could not meaningfully be considered a mind at all. You end up wondering—why does the owner of this doll keep it around? Why did anybody make it? What was it for? And the answer is somewhat apparent: dolls are for children, as vessels for imaginative play and a sense of companionship; they take a human form because it facilitates this kind of anthropomorphising projection. The horror of creepy dolls arises from an awareness of both the role our perceptions play in understanding them as people and the knowledge that they themselves have some form of agency. There’s a tangible and deeply uncomfortable misalignment between instinctively understanding them as objects to be projected onto and as people to be related to.

Because M3GAN avoids typical tropes and conventions surrounding anthropomorphic AI, the movie is able to comment on our current moment in an incisive and fresh way. AI systems designed to come across as at least a little human are being inserted into more and more facets of life. There are an increasing number of digital services that we approach as if they are people, in the full knowledge that they are inanimate—as well as tools which produce communicative materials in formats that approximate human communication while requiring a minimum of human input. M3GAN circles around one question—is it appropriate to leave activities that are fundamentally about connection and understanding to automata? The conclusion it seems to approach is that—whether the use of AI is apparent or not—the answer to this question is no, that the problem isn’t really one of any number of hypothetical risks but an inextricable and present loss. Through this, M3GAN offers a smart and surprising contribution to the conversation on anthropomorphic AI. It’s well worth watching.



Clark Seanor is an accessibility editor at Strange Horizons. They are a postgraduate student at Swansea University and their research interests include the role and use of machine learning technologies in society, limits in data representation, and the history and present of rationalisation and bureaucracy.