2 Matching Annotations
  1. Jul 2025
    1. Be clear about the consequences of using AI to generate pornographic images. Tell students that they may see apps to create nude pictures advertised on platforms like TikTok. Though they may be curious or think it's funny (because the pictures aren't "real"), using AI to generate nude pictures of someone is harassment and illegal. It doesn't just harm the victim—law enforcement could get involved. Victims should tell a trusted adult, report to authorities, and can also report the incident to CyberTipline.org.

      This part really stands out as a necessary and urgent conversation. With how normalized AI tools have become on platforms like TikTok, I can see how some students might not fully grasp the seriousness of using them to generate explicit content. It’s not a harmless joke—it’s a form of harassment with real legal and emotional consequences. As someone who spends a lot of time online, I think we all need to take more responsibility in calling out this kind of behavior and making sure people know where to get help.

    2. Make students aware that deepfakes exist online already, so it's good to be skeptical about audio and video that's especially shocking or shared widely to get an emotional response. People can still spot some imperfections, but they're getting more difficult to identify.

      This is such an important reminder, especially for students who often trust what they see online. I’ve seen deepfake videos shared on social media that looked incredibly real at first glance. It worries me how easily misinformation can spread, especially when it's designed to trigger strong emotions. As educators and students, we need to build digital literacy skills—not just for reading and writing, but also for recognizing manipulated media. Critical thinking is more essential than ever.