DIRECTROSPECTIVE: I, ROBOT (2004)
Notes on the films of Australian director Alex Proyas.
They mean well, I guess, but there’s a nasty problem at the center of police racism allegory films. Whether it’s kid-friendly fare like ZOOTOPIA, in which predator and prey animals have evolved to live in relative harmony, or high-concept blockbuster pablum like the Netflix film BRIGHT, in which legally-dissimilar-from-Tolkien orcs take the place of Black and brown people in an alternate Los Angeles, or even the space alien apartheid parable DISTRICT 9, they tend to make the same mistake. In illustrating the cruelty of police discrimination against and violence toward Black people by casting friendly jaguars or sweet-faced ogres or sympathetic aliens in the oppressed role, they end up reifying the concept of race that white supremacists hold: that people who are identified as white and people who are identified as Black are fundamentally different types of beings. ZOOTOPIA is pretty good about most of what it’s dealing with, frankly, except for the part where the mistrusted class is inherently, genetically violent toward the innocent, defenseless ruling class in the exact way that racist propaganda likes to paint Black people. You shouldn’t be suspicious of a perfectly civilized polar bear just because he’s descended from savage killers, that film tells its impressionable viewers. He is definitely descended from a different and more dangerous type of creature, though. Don’t forget that. In the other two films I mentioned, the distinction between the oppressor and the oppressed is less starkly one of hunted and (“reformed”) hunter, but still a literal difference in species that’s as real in the fictional world as race is arbitrary and culturally determined in ours. Well-meaning as it may be, this type of racism metaphor is much more dangerous than it thinks it is. As is often the case, Dr. Seuss got it right on the money with “The Sneetches,” but I don’t remember any Sneetch cops.
This sort of thing seems like it’s going to be the premise of Alex Proyas’s 2004 Isaac Asimov semi-adaptation I, ROBOT, at least for its first act or so. That’s the central trouble with I, ROBOT - it wants to be so many different movies at once that from scene to scene it feels disjointed and tonally incoherent. You’ve got a police racism allegory, a movie about what rights we owe to artificial intelligences, a movie about machine ethics, a movie about the dangers of government contracts with Big Tech, a movie about survivor’s guilt, a movie about assisted suicide, and the kind of thing you’d expect Philip K. Dick’s, like, third cousin to cynically churn out. It’s no great mystery how this came to be; the film was Frankensteined from Jeff Vintar’s original, Asimov-unaffiliated screenplay HARDWIRED, a talky crime-scene drama about a robophobic human detective investigating a murder that appears to have been committed by a machine. 20th Century Fox asked him to expand it into more of a big-budget action noir, eventually tacking on some allusions to science fiction writer Isaac Asimov’s work: his famous “Three Laws of Robotics” are a significant, if easily bent, part of the plot, and some existing characters and situations from HARDWIRED were renamed or rejiggered to more closely resemble parts of Asimov’s 1950 short story collection I, Robot. After that, Will Smith became attached to the project and Akiva Goldsman was brought on to rewrite the script further until it more closely resembled every other Will Smith movie.
The end result is a big, dumb sci-fi action noir that’s got twice the bombast of 2002’s MINORITY REPORT and a quarter of the attention span. Every potentially interesting question it raises - At what point do we have to treat robots like people? How can we trust networked devices that can be controlled remotely by their manufacturer? Is it still murder if someone asks you to do it? What is the nature of “the greater good”? - is waved away in the interest of a big flashy action sequence with swirly camera moves and robots cleaved in two. We feel for Sonny the robot (voiced by Alan Tudyk), suspected of murdering his creator Dr. Alfred Lanning, because we can tell Will Smith’s technophobe Detective Del Spooner isn’t giving him the benefit of the doubt. We’re on his boss Chi McBride’s side when he tells Spooner to lay off the conspiracy shit, but then when a bunch of robots turn bad and break into the police station and McBride starts to take them out with a shotgun, it’s clearly framed as some badass hero shit. The trauma at the root of Spooner’s hatred of robots - a car accident in which a robot chose to save him rather than a young girl because it calculated that her odds of survival were lower - offers a great opportunity for the film to dig into the gnarly moral thicket in which “self-driving” car programmers get caught: when there’s no avoiding harm to at least one human, by what criteria should the machine decide whom to protect? Class is central to this problem, because if the car is programmed to prioritize the safety of its own passengers, that necessarily privileges those who can afford an autonomous vehicle over those who cannot. This is one of the classic examples of how “objective,” “artificial intelligence”-based solutions are just as subjective and fallible as the humans who come up with the rules that govern them. But the film isn’t interested in human culpability.
The big reveal - a Skynet-lite plan by the artificial superintelligence VIKI to protect humanity from itself by killing a great many people, but fewer than it calculates would die if it did nothing - is certainly tired, as is the nonsensical action climax, in which the only way to stop VIKI requires Will Smith to jump off something high and stab a special robo-weapon directly into her computer brain. It also willfully ignores the origin of VIKI’s morality in Big Tech thinking, choosing instead some rosy old You Can’t Calculate The Value Of A Human Life shit. It’s not that I don’t agree with that; if there’s any single idea at the root of my personal politics it’s that it’s possible to prevent or ease all human suffering and that we must do all we can to do so. But that’s not something that humans think and robots don’t. That’s something that some humans think and some consider bleeding-heart crybaby bullshit, and the latter type are the people who run tech companies and get to tell the self-driving car what its ethics should be. For all his wake-up-sheeple paranoia, it’s a shame Proyas doesn’t do much with this idea; indeed, VIKI kills U.S. Robotics CEO Lawrence “Robots Own” Robertson just to make sure we know that he actually wasn’t the bad guy. It’s clear that the final product doesn’t represent the uncompromised vision of a single filmmaker, but it’s hard to find much of Proyas in here at all.