Poor Policies and Over-Reliance on AI Can Sabotage Your Security Technology’s Potential

Want your security technology to be effective? Provide robust policies, proper training, and human oversight to prevent misuse and ensure the tools fulfill their intended purpose.
Published: August 11, 2025

Security technology has the potential to significantly enhance campus safety, but only when supported by robust policies, proper adherence to those policies, and well-trained personnel. Without these safeguards, even the best technology can lead to unintended and damaging consequences.

Take, for instance, the AI-powered surveillance software that schools increasingly deploy to monitor student activity on school-issued devices. These programs track students’ online behavior, searching for signs of self-harm, suicide, bullying, or potential acts of violence. When used appropriately, such tools have been effective in identifying early warning signs and protecting students. When misused, however, the same technology can inflict significant harm on the very individuals it aims to protect.

13-Year-Old Arrested, Strip-Searched Over a Threat Later Deemed Not Credible

Consider a recent incident, reported by the Associated Press, involving an eighth grader from Tennessee. After surveillance software flagged inappropriate comments she made on a school-issued device, the student faced extreme discipline, including arrest, interrogation, and a strip search. The AP reports that her comments, when reviewed in context by humans rather than by AI alone, were offensive but did not constitute a credible threat. Yet the response by the school and local law enforcement escalated unnecessarily: according to a lawsuit filed by her family, the student was jailed overnight without access to her parents. Her court-ordered punishment included house arrest, mandated attendance at an alternative school, and a psychological evaluation.

Related Article: Study: Stricter School Discipline Policies Have Long-term Negative Effects on Students

This situation illustrates the critical need for proper protocols and discretion in handling such incidents. Jeff Patterson, CEO of Gaggle, the software provider in the Tennessee incident, stated that the school did not use the technology as intended. Surveillance tools like these are designed to identify early warning signs so schools can intervene before circumstances require law enforcement involvement.

Patterson emphasized that this case should have been treated as a “teachable moment” rather than a “law enforcement moment.” Such overreach risks not only traumatizing students but also undermining the credibility of both the technology and the school administrators who manage these programs.

Sam Boyd, an attorney with the Southern Poverty Law Center, reinforced this concern to the AP, noting that for many children, interventions such as involuntary examinations cause lasting psychological harm rather than providing meaningful support.

Other Security Technologies Must Also Be Supported by Good Policies

This scenario is not unique to digital surveillance software. Similar concerns surfaced years ago with the proliferation of security cameras on campuses. While the cameras provided valuable security benefits, misuse of the video surveillance systems — for example, racial profiling, improper monitoring of individuals, or invading personal privacy by peering into dorms or residences — resulted in justified backlash. This type of misuse undermines community trust in campus leadership and can threaten the very existence of the technology on campus… not to mention the careers of the staff members who initially championed the system.

Regardless of the technology used, the lesson is clear: strong and appropriate policies must be in place to guide technology’s use, personnel must be trained to follow these policies rigorously, and all AI-generated alerts must be verified by humans to ensure they are real and accurate. Blindly relying on AI without human evaluation increases the risk of false alarms, compromised judgment, and unwarranted, even harmful, actions.

Related Article: The Other AI: ‘Active Involvement’ by Humans Is Critical in School Security

When technology, human oversight, and appropriate policy work together, the opportunities to safeguard campuses are many. Without that oversight and sound policy, the same technology can create the very problems it was designed to solve.
