AI News

AI-Powered Student 'Slander Pages' Target Teachers on Social Media

Students are using AI tools like Viggle AI to create viral TikTok and Instagram content mocking teachers, raising concerns about harassment and platform moderation in educational settings.

LLMBase Editorial · Updated March 11, 2026 · 3 min read
ai llm industry social media content moderation education

The trend has gained particular traction in Texas school districts, where student-run accounts have accumulated hundreds of thousands of views by superimposing teachers' faces onto controversial figures or placing them in fabricated scenarios. The posts often incorporate extremist imagery and internet slang originating from fringe online communities.

AI Tools Enable Sophisticated Content Creation

Many of these accounts rely on Viggle AI, an image-to-video platform that allows users to insert any photographed person into reference videos or create lip-sync content from static images. The platform reports over 40 million users as of February 2025, though the company did not respond to requests for comment about its role in these harassment campaigns.

Viggle AI has drawn criticism from academic researchers. The Global Network on Extremism and Technology at King's College London described the platform as "a new frontier in the creation of spontaneous extremist propaganda" in a recent analysis. The tool's accessibility enables students to create sophisticated deepfake-style content without technical expertise.

The posts frequently incorporate controversial figures like Jeffrey Epstein and Benjamin Netanyahu alongside school faculty, apparently to boost engagement through association with trending topics. Students also use coded language from "looksmaxxing" communities and neo-Nazi symbolism, suggesting exposure to extremist online spaces.

Platform Response and Content Moderation

Both Meta and TikTok have acknowledged the problem but face challenges with consistent enforcement. Meta representatives told Wired that the company had reviewed and removed some content that violated its harassment policies. TikTok similarly stated that it had removed violating content and deployed automated detection systems.

However, the viral nature of these campaigns often outpaces moderation efforts. In one notable case at Crandall High School in Texas, the original account @crandall.kirkinator inspired copycat content from creators with no connection to the school, amplifying harassment beyond the local community. The account's administrator eventually deleted it after teachers reported receiving spam calls and emails from strangers.

Educational and Legal Implications

School districts are struggling to address these AI-powered harassment campaigns within existing disciplinary frameworks. The Wylie Independent School District confirmed awareness of accounts targeting its faculty and warned of "disciplinary action and possible legal consequences" for identified students.

The trend highlights broader challenges around AI literacy and digital citizenship in educational settings. While students demonstrate technical fluency with AI tools, their use often lacks consideration for ethical implications or potential real-world harm to targeted individuals.

From a European perspective, these incidents underscore the importance of comprehensive AI governance frameworks that address both technical capabilities and social responsibility. The EU's AI Act and Digital Services Act provide regulatory models for platform accountability that could inform responses to similar harassment campaigns.

Technical Teams and Content Safety

For AI companies and platform operators, these cases demonstrate the need for content safety measures that account for the intersection of synthetic media tools and harassment. Traditional moderation approaches may prove insufficient against AI-generated content that evolves rapidly to evade detection systems.

The viral success of these 'slander pages' also reveals how AI tools designed for creative expression can be weaponized for harassment when deployed without appropriate safeguards or user education. Technical teams must consider these misuse cases when developing consumer-facing AI applications.

The emergence of AI-powered student harassment campaigns represents a new challenge for educators, platforms, and policymakers navigating the intersection of synthetic media capabilities and online safety in educational contexts.

Original source: Wired reported on the trend of AI-fueled student 'slander pages' targeting teachers across multiple Texas school districts.
