Is AI the Future of School Safety?

All upgrades must be considered in collaboration with existing personnel so school security can be supported, not replaced, by automated technology.

Gone are the days when student monitoring consisted of intercepting passed notes, signing out hall passes, and checking lanyard IDs at the door. Efforts to mitigate bullying, student victimization, and on-campus weapons use now rely on sophisticated metal detectors, digital hall passes, online activity monitoring programs, and AI video tracking systems.

AI and updated software can detect risks and respond with tangible efficiency, but their inaccuracies and imperfections relative to manual methods are just as real.

Districts need to understand the full implications of AI safety technology before making a long-term decision with financial and social consequences. Implementation costs and student responses to increased safety and monitoring efforts are the most significant factors to weigh when considering security upgrades.

What AI Can Do for Schools

In addition to firearm attacks, campus risks today include cyberbullying, nonfatal crimes against students, safety threats against teachers, and student conduct violations. According to a recent report, occurrence rates within these domains have generally decreased over the past decade, except for cyberbullying, which nearly doubled from 2010 to 2020.

As a result, campus security measures have evolved alongside these threats through technologies such as online safety monitoring, digital student location services, weapons screening, and gun detection, to name a few. These tools can oversee student activity with automated alerts and action steps, and because they work faster than manual tracking, they have the potential to prevent, reduce, or stop harm on school grounds more efficiently than current security methods. However, they are not infallible and carry notable shortcomings as well as privacy concerns.

What AI Can’t Do for Schools

In their current state, security programs and AI tools cannot fully replace human monitoring and safety oversight in schools. Beyond outright errors, some programs still rely on personnel input before action can be taken. One gun detection company, for example, uses AI to identify guns in live security footage, but each identification alert is manually reviewed by experienced internal professionals to determine its authenticity and the appropriate next steps.

Like traditional security approaches, AI programs may be under-sensitive or overreactive, producing false alarms or, worse, no alarm when it matters most. Schools must carefully train and monitor new interfaces, sensors, and programs to ensure accuracy and efficacy. Doing so may require more personnel than previous safety protocols, a cost that must be weighed against the anticipated gains in speed and precision from the new technology.

Student Response and Implications

It’s been established that an increased security presence negatively impacts students socially and academically due to the interpretation that greater security implies a greater threat. This can potentially be mitigated by the enhanced discretion and reduced visibility of newer security devices and programs, but the concerns regarding privacy and autonomy may grow in turn.

Online monitoring or physical activity tracking software, for example, may feel intrusive to students and deplete morale if they experience a loss of autonomy that makes school feel more like an institution than a learning space. Moreover, using online monitoring programs to protect students may set a precedent that they cannot be independently trusted, and it raises ethical data privacy challenges involving minors.

Another disconcerting aspect of using AI-powered security is the potential for bias. Some tools could reduce bias by exclusively focusing on objective data, such as weapon identifiers, but other tools like online activity trackers may rely more on subjective interpretation, especially when school personnel make the final evaluation before responsive action.

Accordingly, thorough research and surveying of all stakeholders are essential prior to full integration to better anticipate and address concerns.

Getting Support: Weighing Costs and Community Buy-in

Emerging AI security technology has a wide price range contingent on desired features, district size, and length of use.

In the lower range, a Texas school district paid $6,000 for a 90-day trial of online monitoring software, while a midrange addition of AI to camera feeds cost a New Jersey district $76,000. For larger service areas, the funding required may stretch into the millions, such as the $3.7 million investment made by Utica City Schools in New York in an AI-supported weapons detector.

Furthermore, because these programs and technologies are constantly developing, funding may inadvertently go toward a soon-to-be-obsolete product, or long-term costs may rise if the new technology, originally intended to automate tasks, still requires personnel oversight.

Security upgrades need to be weighed against each school's needs, as well as the community's likelihood of approving the school bond referendums that would provide funding. School leaders can examine the full risks, rewards, and public opinion surrounding new safety implementations by:

  1. Looking at relevant data to determine necessity
  2. Reading ethics and legal reports on intended software
  3. Communicating with students and teachers to gauge interest
  4. Gaining a full understanding of the technology’s impact to be fully prepared to navigate the politics of the new investment

AI for Security Support, Not Substitution

Some school leaders may oppose new technology, while others are eager to integrate new supports and replacements. The goal for schools now is to find a solution that strikes a balance while still providing vital security for everyone in the building.

AI tools undoubtedly have the potential to make schools safer than was previously feasible, but that safety shouldn't come at the cost of student well-being or a large financial outlay that may not be warranted.

All upgrades must be considered carefully and in collaboration with existing personnel so school security can be supported, not replaced, by automated technology.

Amairani Asmad is a writer who has researched areas ranging from education to neurodiversity across the entire age spectrum. Her work can be found in school-targeted publications and peer-focused resource blogs. Amairani is also a Penn State University alumna with a B.S. in Rehabilitation and Human Services.

This article was originally published by CS’ sister event, EDspaces.
