Securing Artificial Intelligence: Can the Industry Collaborate?

Artificial intelligence's advance has made it difficult for companies to stay on top of key issues, with security ranking as a top concern.
Published: July 4, 2025

Collaborative efforts can propel industries forward, especially when it comes to developing standards and best practices. The World Wide Web Consortium (W3C), for example, is an open forum that has created global standards on web-related issues ranging from the PNG image format to privacy principles. In most cases, collaboration creates an environment where issues plaguing the entire industry can be analyzed and addressed effectively.

Yet, collaboration is not always easy to facilitate, as those in the artificial intelligence industry are discovering. The rapid advance of artificial intelligence (AI) has made it difficult for companies in that industry to stay on top of key issues, with security currently ranking as a top concern. While collaboration could lead to solutions, it would also require companies to overcome some inherent operational challenges.

Security systems integrators seeking to optimize the use of artificial intelligence in their campus security applications stand to benefit from the improvements that collaboration can provide. AI plays an ever-increasing role in the solutions integrators deploy, contributing to advanced video analytics, access control, threat detection and more.

By understanding the issues that prevent collaboration and leveraging their influence to help overcome them, integrators can ensure they have reliable and effective AI components to integrate into campus security systems.

Key Artificial Intelligence-Related Security Concerns

Training AI models requires accumulating vast amounts of data. When that data includes sensitive or proprietary information, it becomes a key target for hackers.

Artificial intelligence-driven security systems can also generate valuable datasets of their own. Facial recognition systems, for example, attract cybercriminals because of the biometric data they gather. If an attack were to compromise that data, it could lead to faulty identifications, damaging both performance and client trust.

Data poisoning is a type of attack that introduces faulty or malicious data into an AI model's training dataset. The errors it introduces can lead to breakdowns in the system's operations.

When security systems are impacted by data poisoning, their reliability suffers. Threat detection becomes unreliable, potentially leading to disruptive false positives and dangerous false negatives. Data poisoning can also be used to create backdoors that hackers can exploit.
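
To make the mechanism concrete, the following sketch simulates a simple label-flipping attack, one common form of data poisoning. It is a minimal illustration built on synthetic data and a generic scikit-learn classifier; the dataset, model, and poisoning fractions are all hypothetical and not drawn from any system discussed in this article.

    # A minimal sketch of a label-flipping data poisoning attack.
    # Assumptions: synthetic data and a generic scikit-learn classifier;
    # illustrative only, not drawn from any product named in this article.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=0)

    # Synthetic stand-in for a binary "threat vs. benign" training set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    def flip_labels(labels, fraction):
        # Poison the training set by flipping a random fraction of labels.
        poisoned = labels.copy()
        idx = rng.choice(len(labels), size=int(fraction * len(labels)),
                         replace=False)
        poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
        return poisoned

    for fraction in (0.0, 0.1, 0.3):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train, flip_labels(y_train, fraction))
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")

Even this crude attack tends to drag test accuracy down as the poisoned fraction grows; in a deployed detection system, that same silent degradation shows up as the false positives and false negatives described above.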

Top Challenges to Security Collaboration

On their own, AI development companies can struggle to address the security issues mentioned above. Recent reports show that DeepSeek, a rising star in the artificial intelligence industry, failed to address an internal vulnerability that left its data dangerously exposed.

With collaboration, however, developers have more resources for identifying vulnerabilities and engineering the solutions that address them. Yet, a number of hurdles stand in the way of impactful cooperation. 

Competitive pressure is one of the top challenges to collaboration, with several factors contributing to the competitive environment. The rapid pace of AI development, for example, means any unique innovation gives companies an advantage over the rest of the field. The talent wars in the AI industry also make it essential for companies to maintain a competitive edge.

Because data is a core resource for AI development, companies that take a lead position in data security gain a considerable competitive advantage. Collaboration on security standards could level the playing field, potentially undermining the competitive edge that some companies gain by developing proprietary security protocols.

A lack of trust among AI developers also poses a challenge to effective collaboration. The AI industry is relatively young, with the most established companies having barely a decade’s worth of history to prove their trustworthiness. If trust can’t be established, companies will be reluctant to share the type of information needed to develop industry-wide solutions.

Security systems integrators can play a key role in overcoming these challenges because they bring a unique perspective to the AI development environment. Integrators are the bridge between innovation and deployment. They can show developers the real-world problems that weaknesses in their products can lead to, which in turn highlights the value of collaboration.

Influencing the Artificial Intelligence Discussion

Integrators have the opportunity to play both a direct and an indirect role in pushing AI development toward a more collaborative framework. As customers, they can engage directly with developers, using their purchasing power to push for better solutions and their client-facing experience to provide tangible examples of the problems that developer isolation causes.

As service providers, they can educate their customers on the value that collaboration could bring to the industry, encouraging them to share their concerns with developers.

Building relationships with developers is a key step in opening avenues for change. As integrators initiate conversations focused on real-world challenges and security gaps they are finding in developers’ products, they highlight the need for best practices that collaboration can provide.

Integrators can also help define an acceptable scope for collaboration, reassuring developers who see the process as a threat to their market position or to the long-term value of their products. By showing developers that industry-wide standards established with input from all of them can make their products more effective without requiring anyone to share proprietary solutions, integrators can bring more of them into the fold.

A push for industry-wide standards can also catalyze higher levels of collaboration. Integrators can advocate for industry bodies to establish assessment frameworks for AI-powered security tools, which in turn brings developers together to contribute to drafting those frameworks.

Collaboration on frameworks results in a broader perspective on needs, which leads to more robust solutions and, ultimately, more powerful and reliable security tools for integrators.

Artificial intelligence is becoming a core element of security systems, making the need for robust and reliable solutions increasingly urgent. Collaboration offers a pathway to ensure the most effective solutions are identified and shared, with the promise of better tools that empower stronger security outcomes.

Integrators can leverage their influence to communicate both the need for collaboration and the long-term benefits it can bring to both developers and the consumers they serve.


Yashin Manraj is CEO of Pvotal Technologies. This article was originally published in CS sister publication, Security Sales & Integration.

NOTE: The views expressed by guest bloggers and contributors are those of the authors and do not necessarily represent the views of, and should not be attributed to, Campus Safety.
