How concerned should we be about deepfakes?


The term deepfake has only been around for a couple of years, but the practice, which uses machine learning to superimpose images or video onto source footage to create doctored media, has come under increasing scrutiny. Most recently, a deepfake video appeared to show Facebook CEO Mark Zuckerberg saying things he did not say. Another recent example showed House Speaker Nancy Pelosi appearing to slur her speech.

"It's definitely one of a variety of applications of deep learning that I find to be concerning," said Ben Mitchell, assistant professor of computer science. "In fact, I think this is one of the most clear-cut and easiest to grasp examples of a potential harm, though some of the others are more worrying to me in terms of their potential long-term consequences. What these algorithms do is mostly lower the cost and expertise required. As a result, we can't rely on professional ethics being taught along with the skills needed to perform the editing in the first place."


Concern has reached the point where the House Intelligence Committee held a hearing on the matter, citing national security implications. Mitchell, however, believes the larger issue comes down to educating the audience.


"We need to teach viewers to assume that all information might be fake unless it comes from a trustworthy source, and we need to make sure that those sources are doing their due diligence to verify their information through multiple channels. For instance, rather than try to verify that the video file is unmodified, reach out to primary sources to confirm that the underlying information is correct."


To speak with Mitchell, email mediaexperts@villanova.edu or call 610-519-5152.


