Currently, protections against explicit deepfakes vary significantly across the United States. While some states have explicit laws addressing AI-generated explicit images, others have loose, unclear statutes that do not directly apply to such material. Without a cohesive legal picture, students lack clarity on justice, support, and accountability.
Safety should be an expectation, not a geographic chance.
Why Schools?
Students and youth are frequently the victims of such deepfakes, yet many schools lack the tools to respond. Without clear support, victims risk being overlooked or feeling unsupported. Implementing clear policies and guidelines for mitigating harm shows that explicit deepfakes will not be tolerated and that students have support.
Where Current Laws Fall Short
While various laws address non-consensual imagery, many lack consistency and clarity.
Some areas of concern include:
- A lack of standardized punishment and uneven enforcement
- Inconsistent definitions of AI-generated imagery and victim protections
- Limited options for seeking civil damages
- Insufficient and inconsistent protections for minors in school-related incidents
- Unclear obligations for educational institutions regarding explicit AI-generated content
Staying Silent is Not Staying Neutral
As technology progresses, schools must progress as well. By implementing clear policies and support systems, schools can take a stand and actively protect their students, sending an unequivocal message that explicit deepfake abuse will not be tolerated. All students deserve to feel safe everywhere, especially online.
Powered by the Center for Gender Equitable AI.

Please contact stephanie@centergeai.org with any questions.
