Siwei Lyu Warns of the Dangers of AI-Generated Media

Siwei Lyu, SUNY Empire Innovation Professor and co-director of the Center for Information Integrity, was interviewed for an article on how to tell if photos and videos are AI-generated.

“The world is so connected, yet we are not able to be physically present at every important event. We rely so much on audiovisual information from social media to tell us what happened. So anybody, if they have the ability to manipulate the images or videos, can influence our decision-making process.”
Dept. of Computer Science and Engineering | Center for Information Integrity

In the article, Lyu discussed how easy it is to be misled by manipulated content online and shared tips for spotting AI-generated images.

“We rely so much on audiovisual information from social media,” Lyu said, noting that because we’re not physically present at events, manipulated images or videos can easily influence our beliefs and decisions — from politics to product choices.

Lyu warned that AI-generated content is already causing confusion, such as during the Los Angeles wildfires, when fake images spread online. He advised approaching all media with healthy skepticism and offered practical tips to spot AI fakes — like distorted hands, unnatural eye shapes, odd lighting or shadows, and robotic-sounding voices that lack natural breaths or pauses.

Lyu cautioned, however, that as AI gets more advanced, these flaws may disappear, making source-checking and critical thinking even more important.