My Take:
In 1854, when photography was still young, a French journalist proclaimed, "We can hardly accuse the sun of having an imagination." He was echoing a prevailing sentiment of the time: photographs captured reality unvarnished, truth etched in silver.
Fifty years later, photographer Lewis Hine countered, "While photographs may not lie, liars may photograph."
Hine's prescience is all too real. We now know that many of history's most iconic photographs were doctored or staged. The notion of photography as an "unimpeachable witness" was debunked long before Photoshop.
Now we face a new hurdle in the evolution of media: AI-generated deepfakes -- and alarmists are losing their minds.
An interesting New Yorker article ran at the end of last year. Its premise: doomsayers have mostly gotten it wrong in predicting the havoc new media was supposed to wreak on democracy, civil order, and truth in general. According to this argument, we're not nearly as gullible as the alarmists think, and collectively we're pretty good at discerning truth from lies and liberties taken.
In the longer history of media manipulation, video deepfakes could be just another bump in the road: interesting -- entertaining, even -- but hardly seismic.
A guest essay in The New York Times this week argued that the real danger of AI deepfakes isn't so much a threat to our future as it is a threat to our past.
From the piece:
We don’t have to imagine a world where deepfakes can so believably imitate the voices of politicians that they can be used to gin up scandals that could sway elections. It’s already here. Fortunately, there are numerous reasons for optimism about society’s ability to identify fake media and maintain a shared understanding of current events.
While we have reason to believe the future may be safe, we worry that the past is not.
History can be a powerful tool for manipulation and malfeasance. The same generative A.I. that can fake current events can also fake past ones.
And this will create a treasure trove of opportunities for backstopping false claims with generated documents, from photos placing historical figures in compromising situations, to altering individual stories in historical newspapers, to changing names on deeds of title. While all of these techniques have been used before, countering them is much harder when the cost of creating near-perfect fakes has been radically reduced.
So where does that leave us? My bigger concern for the future is less about what happens when people believe what they can see with their own eyes and more about what happens when they don’t.
We've got plenty of scientific and visual evidence to prove climate change, for example, and yet only one in four Republicans believes it's a serious threat.
As AI proliferates, there are very real dangers that we can’t be glib about.
As for deepfakes, we're pretty good at detecting them. Everything unfolding now fits a larger historical pattern, one we've successfully navigated before.
My take? Breathe in, breathe out, stay calm.
What I’m Reading:
Our Rodent Selfies, Ourselves by Emily Anthes VIA The New York Times
Coming of Age at the Dawn of the Social Internet by Kyle Chayka VIA The New Yorker
Buzzings from the Hive:
Detecting Software Bugs with AI by Jon Suarez-Davis VIA Super{set}