Defending Against Adversarial AI
With the ever-increasing capabilities and use of AI, we must take appropriate steps to guard against its nefarious use.
We’ve talked about robots and how they might take over the world. We’ve seen apocalyptic movies and novels that pit man against machine in epic fights of good versus evil. Meanwhile, real geopolitical and environmental conflict has matured to the point where cyber defense and cyber offense strategies are being deployed not only in military confrontations, but in corporate and campus boardrooms as well. Add the recent cryptocurrency meltdown and economic and supply chain instability, and we face a new dimension of threats that requires rethinking our defenses and developing new, out-of-the-box concepts to identify and defeat adversarial technologies.
Consider artificial intelligence (AI) and how far it has come in the past five years. As we move into the smart machine era, there appears to be no end to AI’s potential to diminish the need for humans to think. As we develop confidence in AI and integrate it into our daily lives, we must also consider the adversarial use of AI and its capability to disrupt and negatively impact our way of living.
A group called the Future of Life Institute has circulated a petition, supported by major AI tech companies, calling for a six-month moratorium on large-scale experiments with AI. While the signatories promise that their technologies will change the course of civilization, they concede that the industry has not yet taken the steps to ensure AI cannot be co-opted for nefarious use, nor established foundational rules and standards governing the use of AI and AI-based systems.
Let’s dig deeper into where we find ourselves today and what it portends for the electronic security industry and systems integrators.
What Is Adversarial Artificial Intelligence?
Machine learning and AI present a new cyber-attack surface, requiring new skill sets, technologies, and competencies to identify and mitigate cybersecurity risks to the public, industry, and the nation.
Cybersecurity efforts aim to protect computing systems from digital attacks, which are a rising threat in the Digital Age. Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in AI and machine learning.
Adversarial AI refers to AI-based technology that, when turned to malicious purposes, can endanger public safety, health, and national security. Before we go down the path leading to machines that decide to exterminate mankind, as seen in movies like “The Terminator,” let’s talk about real-world examples.
One of the best examples we see today is the deepfake. The term refers to the use of AI models and computer vision algorithms to simulate a person’s voice, expressions, and facial imagery in context so convincingly that discerning real from fake becomes impossible.
The AI natural language models that now flood the Internet as chatbots, or bots, have been successfully manipulated to alter data and introduce errors into their output before presenting it to other systems and users.
Without turning this into a dissertation on how AI is developed: there is a specific process known as training. Training takes a machine learning algorithm and develops its responses by showing it labeled examples, which determine how input is classified, processed, and output.
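To make the idea of training concrete, here is a minimal, purely illustrative sketch: a perceptron, one of the simplest machine learning algorithms, adjusts its weights to match a handful of hand-labeled points. The data and parameters are invented for this example and stand in for the far larger datasets real systems train on.

```python
# Minimal sketch of "training": a perceptron fits weights to labeled
# examples so its predictions match the labels. Toy data only.

def train_perceptron(samples, labels, epochs=500, lr=0.1):
    """Fit weights so that sign(w . x + b) matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                     # 0 when the prediction is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy task: label points by whether their coordinates sum to a "large" value
samples = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2), (2, 1)]
labels  = [0, 0, 0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
```

The key point for what follows: the model’s behavior is entirely a product of the examples and labels it was trained on, which is exactly what the attacks below exploit.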
Many cyber-attacks on AI use poisoning: the attacker contaminates the training data and labels so that the model underperforms once deployed. Think of this as contaminated data rendering everything collected and processed through the AI system useless.
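A small sketch of how label poisoning degrades a model, using an invented dataset and a deliberately simple nearest-centroid classifier; the specific points and the attacker’s flipped labels are illustrative only:

```python
# Sketch of data poisoning: the same classifier trained on clean labels
# vs. labels an attacker has partially flipped. Toy data only.

def train_centroids(samples, labels):
    """Compute the mean point (centroid) of each class."""
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append(x)
    return {y: tuple(sum(c) / len(pts) for c in zip(*pts))
            for y, pts in groups.items()}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: dist2(centroids[y], x))

samples = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
clean    = [0, 0, 0, 1, 1, 1]
poisoned = [0, 0, 0, 0, 0, 1]   # attacker flipped two class-1 labels to 0

clean_model = train_centroids(samples, clean)
bad_model = train_centroids(samples, poisoned)
```

The poisoned model’s class-0 centroid is dragged toward class 1’s territory, so points near the original boundary get the wrong label even though the attacker never touched the classifier code itself.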
Another type of AI cyber-attack is model extraction. Here the intent is to steal the AI model itself and reconstruct it, so that a copy can be made to respond differently than the original was intended to respond.
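A sketch of the extraction idea: an attacker who can only query a victim model’s predictions can still train a surrogate that mimics it. The “secret” victim rule, the query grid, and the nearest-neighbor surrogate here are all invented for illustration:

```python
# Sketch of model extraction: probe a black-box "victim" model with
# chosen queries, record its answers, and fit a look-alike surrogate.

def victim(x):
    """Secret model: a threshold rule the attacker cannot inspect."""
    return 1 if 2 * x[0] + x[1] > 3 else 0

# 1. The attacker probes the victim over a grid of chosen inputs...
queries = [(a * 0.5, b * 0.5) for a in range(7) for b in range(7)]
stolen_labels = [victim(q) for q in queries]

# 2. ...then answers new inputs by copying the stolen label of the
#    nearest probed point (a nearest-neighbor surrogate).
def surrogate(x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(range(len(queries)), key=lambda i: dist2(queries[i], x))
    return stolen_labels[nearest]

# How closely the surrogate matches the victim on the probed region:
agreement = sum(surrogate(q) == victim(q) for q in queries) / len(queries)
```

The attacker never sees the victim’s internals, yet ends up with a working copy whose behavior they can then study or alter at will.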
At the top of the list of dangerous potential targets for compromised AI are autonomous weapons systems, followed by brain-computer interfaces (BCI), self-driving vehicles, 3D printing, facial recognition, augmented reality, and swarm intelligence, in addition to the deepfakes and bots already mentioned.
Distinctions need to be made between systems referred to as automated versus autonomous. Automated systems perform repetitive processes that reduce human interaction yet are still governed by processes and relationships managed by humans. Autonomous systems (especially those that rely on AI) can adjust and modify outcomes without governance or human interaction.
Defending Against AI Exploits
The application of strong digital encryption and cybersecurity best practices must always be top of mind when defending against the formidable capabilities and power of AI. Formal governance of AI use by government, as well as by public and private organizations, is not far off.
On the military side, the approach is to establish international guidance and laws governing how, when, and under what conditions AI can be used in conflict, as with other weapons of war.
The U.S. Department of Defense’s “Unmanned Systems Integrated Roadmap” sets out a concrete plan to develop and deploy weapons with ever-increasing autonomy in the air, on land, and at sea over the next 20 years. A defining feature of these autonomous weapons systems (AWS) is precisely their ability to operate autonomously: “robotic weapons … once activated, can select and engage targets without further human intervention.”
As we seek to have greater nonmilitary privacy controls and protection of personally identifiable information (PII), there are specific methods that can be implemented to secure and train AI models. One such process is known as Privacy Preserving Machine Learning (PPML). Another involves code obfuscation techniques that incorporate face blurring with computer vision recognition models.
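As one concrete illustration of the PPML family of techniques, the sketch below applies the Laplace mechanism from differential privacy: calibrated random noise is added to an aggregate query so that no individual record can be inferred from the answer. The dataset, the epsilon value, and the function names are all invented for this example.

```python
# Sketch of a privacy-preserving technique: a differentially private
# count. Noise scale 1/epsilon suits a count query, whose value changes
# by at most 1 when any one person's record is added or removed.
import random

def dp_count(records, predicate, epsilon=1.0):
    """Return the count of matching records plus Laplace(1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Laplace(0, scale) sampled as the difference of two exponentials
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

ages = [23, 31, 44, 52, 29, 61, 38]           # illustrative PII-like data
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the analyst still gets a usable aggregate, but can no longer tell whether any single record is in the data.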
Making sure that AI is fully and completely aligned with human goals is surprisingly difficult and takes careful programming. An AI with ambiguous yet ambitious goals is worrisome, as we don’t know what path it might decide to take to reach its given goal.
One thing is certain … we saw it coming!
Darnell Washington is President and CEO of SecureXperts.