Discussions with DTRA: Episode 7


Discussions with DTRA Podcast: Around the Microphone

DTRA, the premier agency for meeting the challenges of WMD and emerging threats.

The DTRA Podcast series provides agency members with a platform to discuss mission-related, morale-boosting, and special-interest topics. The goal of the program is to deliver cross-talk that educates and informs audiences, supports employee engagement, and identifies potential outreach opportunities. Listeners can expect director-supported conversations that amplify the agency's core functions and convey mission intent, in segments ranging from 20 to 40 minutes.

Episode 7: The Impact of AI on Cybersecurity

Length: 28:19

DTRA host Dr. Michael Howard, a Senior Program Manager in the Acquisition, Contracts and Logistics Directorate, talks with Dr. James Bret Michael, Chair of Computer Science at the Naval Postgraduate School, about cybersecurity and AI: risks, rewards, and frameworks in today's rapidly evolving AI environment.


Host:
Dr. Michael Howard
Senior Program Manager, Acquisition, Contracts and Logistics Directorate, Defense Threat Reduction Agency

Interviewee:
Dr. James Bret Michael
Chair of Computer Science at the Naval Postgraduate School

Public Affairs Facilitator:
Darnell P Gardner
Public Affairs Specialist
Defense Threat Reduction Agency

Dr. James Bret Michael's article mentioned in this podcast:

Enhancing Cybersecurity via Artificial Intelligence: Risks, Rewards, and Frameworks.


Transcript

Announcer:

Greetings, and welcome to Discussions with DTRA, where the Defense Threat Reduction Agency brings together subject matter experts to discuss meeting today's challenges of WMD and emerging threats, increase awareness, and deliver morale-boosting information. And now, today's show.

Darnell Gardner:

I am Darnell Gardner, DTRA Public Affairs, and I'll be the facilitator for today's podcast. Dr. Howard will be moderating today's session, entitled The Impact of AI on Cybersecurity, and our guest speaker and subject matter expert is calling in from California. So, DTRA listeners, please welcome Dr. James Bret Michael, professor of Computer Science and Electrical Engineering at the Naval Postgraduate School in Monterey, California. Take it away.

Dr. Howard:

Good morning, DTRA audience. This is Dr. Howard coming to you live from our podcast today. I have a special guest from the Naval Postgraduate School. Dr. Bret Michael is here today to talk to us about the impact of AI on cybersecurity. Dr. Michael, would you please share a bit of your background with the DTRA audience?

Dr. Bret Michael:

Thank you for having me on this podcast. I've been working in the area of AI since the 1980s, specifically at the intersection of software engineering, cybersecurity, and artificial intelligence. Prior to joining the Naval Postgraduate School, I was with the UC Berkeley research team that demonstrated the technical feasibility of automating the driving function for passenger vehicles, trucks, and buses. So, I was applying AI and traditional control system and signal processing technology to show how we can improve the safety, efficiency, and other good properties of our ground transportation systems. Before that, I was living in the Washington, DC, area. I worked at the Institute for Defense Analyses for several years while I was in grad school, finished my PhD at George Mason University, and then moved west to Argonne National Laboratory, where I was a formal methods engineer, and then on to UC Berkeley and the Naval Postgraduate School. Here at the Naval Postgraduate School, I lead the interdisciplinary academic group that brings faculty and students from across our campus together to work on challenging problems for the Department of Defense: how to better protect our systems, but also other aspects of cyber systems and operations. And with that, I'll turn it back to you.

Dr. Howard:

Okay. A very impressive background, I'll tell you. A lot of knowledge to share, I'm sure. You and I met through DTRA's artificial intelligence, machine learning, and data science working group, and we continue to participate in it together; I believe you're an advisor to the group. Well, today I'd like to discuss the article you wrote with your two colleagues, Joshua Kroll and David Thaw, entitled Cybersecurity via Artificial Intelligence: Risks, Rewards, and Frameworks. You write that recent advances in artificial intelligence challenge classical models of productivity by increasing the scale, complexity, and range of tasks that can be meaningfully automated, including those associated with cybersecurity. This is what I'd like to delve into more deeply today. With that, what are some of the main challenges cybersecurity faces today?

Dr. Bret Michael:

It's really a broken record of sorts: we still have many of the same core challenges. One is that the core cyber threats and vulnerabilities still exist. Even when we transitioned to cloud computing in the early 2000s, when we had the computational resources, storage, communication, and processing, that really enabled cloud computing, the vulnerabilities and threats just migrated into the cloud. So, we're still dealing with denial-of-service attacks, ransomware, buffer overflows, DNS attacks, and things like that. In addition, cyber conflict remains asymmetric, and AI has not changed that. And what many people don't realize is that AI and security merged a long time ago. Artificial intelligence is really a spectrum of levels of automation. We have machines that implement fairly simple rule-based automation of tasks, all the way up to the aspirational AI you've probably heard of, which we're not at yet. That's Lieutenant Commander Data on Star Trek, who has emotions, can be creative, and so on, an artificial being. But we have many gradations of automation in between basic automation and aspirational AI. An attacker only needs to find one viable and usable means of exploitation, one vulnerability they can exploit to achieve their goal. This remains the same as it was when I started in this field back in the 1980s, because we have a hard time building systems that don't have any flaws. Theoretically, we can't show that a system, for example, doesn't have a covert channel, which is problematic from a security standpoint, especially if you're interested in protecting against exfiltration of data. The defender, by contrast, needs to defend against a large universe of possible attacks.

Dr. Howard:

So, then how does AI improve cybersecurity?

Dr. Bret Michael:

Well, it does, in the sense that people have been thinking about this since the introduction of AI concepts back in the 1950s, even when the concepts were very abstract and computers were quite difficult to program and maintain. Even back then, people were asking how AI could improve the human situation, but also how it might be used for nefarious purposes. Today there is a very low barrier to entry to using AI: I could take my laptop, go to GitHub, which is a software repository, download algorithms, and look on the web for advice on how to construct attacks, or whatever. There's not much of a barrier to entry for an attacker to use AI to try to obviate, bypass, or otherwise defeat cybersecurity measures.

But we as defenders also need to keep up with changes in technology and its uses. We need to employ artificial intelligence to maintain the level of security that our stakeholders expect us to provide in protecting our digital resources. It's a matter of taking a risk management perspective: we know there are risks to employing AI, and we know there are risks of others employing AI, and we need to manage that risk by becoming fluent in AI and understanding how to best integrate it to mitigate risks and to protect our digital resources and the stakeholders that rely on them.

Dr. Howard:

What I hear you saying is that artificial intelligence and machine learning can improve security while, at the same time, making it easier for cybercriminals to penetrate systems with no human intervention.

Dr. Bret Michael:

It's a cyber arms race, and it's a battle of the algorithms, as many people have coined the situation that we're in today.

Dr. Howard:

I think I'll skip my next question, which was going to ask about drawbacks and limitations; I think we've touched on that already. Now that we've addressed the topic on a broader basis, I'd like to refer to your article. You mention that AI deployments should be viewed as rich sociotechnical systems rather than mere technical tools. Would you expound upon that?

Dr. Bret Michael:

Absolutely. So, it's one thing to have a technological capability. It's another thing to be able to employ it in a manner in which you can be effective and efficient in achieving your goals. That's why I think you need to take a systemic approach and think in terms of that triad: you have to think not only about the system and the technology, but also about the humans, the human in the loop. Let's remember that artificial intelligence is based upon algorithms and data, and the algorithms and data don't really understand like we do. So, we don't want to anthropomorphize AI, making it seem like something human. It's not. It's algorithms and data that don't necessarily know the context of a situation. And within cybersecurity, all kinds of things can happen. Anomalies will pop up, and we can use machine learning algorithms to train a cybersecurity protection mechanism and policy enforcement mechanism to detect those anomalies.

But are the anomalies due to malicious behavior? A sensor being out of calibration? They could be due to a lot of different things. It really takes a human to step in, take the advice, recommendations, and situational awareness information from the AI system, which serves as an assistant, and then make a decision. So, it's not as if we're going to let AI take responsibility and accountability for protecting our system in a fully autonomous manner. We have to integrate the human in the loop.
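
As an illustration of the pattern Dr. Michael describes, here is a minimal sketch in Python: a machine learning model trained on baseline telemetry flags anomalies, and the decision is deliberately left to a human analyst. The scikit-learn model choice and the telemetry features are illustrative assumptions, not anything specific to DTRA or the article.

```python
# Minimal sketch (Python, scikit-learn): an ML model flags anomalies in
# baseline telemetry; a human analyst, not the model, decides the response.
# The telemetry features and values here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-host telemetry: [bytes/s, connections/s, failed logins/min]
baseline = rng.normal(loc=[500.0, 20.0, 1.0],
                      scale=[50.0, 5.0, 0.5],
                      size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)  # learn what "normal" looks like for this enterprise

new_events = np.array([
    [510.0, 21.0, 1.2],   # routine traffic
    [495.0, 19.0, 40.0],  # burst of failed logins: attack, or a bad sensor?
])

for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:  # IsolationForest returns -1 for outliers
        # The model only surfaces the anomaly; it does not know the context.
        # A human analyst reviews it and decides: malice, miscalibration, noise.
        print("ANOMALY, queued for analyst review:", event)
    else:
        print("routine:", event)
```

The final branch is the human-in-the-loop design point: the model provides situational awareness, and responsibility for the response stays with the analyst.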

Dr. Howard:

And you touched on this in your response; however, I want to go a little deeper and get to the level of practitioners. What are some principles for cyber practitioners as they learn to operate in the evolving world of AI?

Dr. Bret Michael:

Well, as I mentioned in my paper, there are many principles, more than I have time to go through today. One of the first is that automation takes tasks done by humans and embodies them in the technology itself. That work still happens, but in a different way and within a different process workflow, just as it did when we instituted cloud computing. We saw then that we don't have to do things the same way we've done them in the past: cloud computing, just like AI, can help us change our process workflows to be more effective and efficient. But these changes represent a delegation of responsibility, and humans must remain accountable for the operation of the system and its outcomes. I think that's really important to keep in mind as a guiding principle, along with establishing sufficient oversight.

The actions of any system must be sufficiently traceable that an oversight entity can determine what led to them and whether they might have been manipulated by, let's say, insiders or by outside adversaries. So, we want to design our systems to support this level of traceability, and that is a key challenge. This is a principle of AI transparency and supporting accountability, and we are still challenged in trying to figure out how to do it. For our DoD audience, this relates to what's known as responsible AI, where there's a lot of interest and work being done today by several working groups. Another guiding principle is that systems should act as their controllers intend them to. We want these systems, including AI-enabled cybersecurity systems, to fulfill the requirements set forward for them, capturing the needs of their controllers in those requirements. To establish that, these systems must be subjected to rigorous test and evaluation processes, including robust whole-system verification and validation.
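
As a concrete illustration of the traceability principle, here is a minimal sketch of a hash-chained audit log for AI-assisted decisions, so an oversight entity can reconstruct what led to an action and detect after-the-fact manipulation. The record fields, names, and chaining scheme are illustrative assumptions, not a scheme from the article or any DoD standard.

```python
# Minimal sketch (Python, standard library): a hash-chained audit log so an
# oversight entity can reconstruct what the system did and detect tampering.
# The record fields are illustrative, not a DoD or DTRA standard.
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, model_version, inputs, output, operator):
        entry = {
            "time": time.time(),
            "model_version": model_version,
            "inputs": inputs,          # what the AI saw
            "output": output,          # what it recommended
            "operator": operator,      # the accountable human
            "prev_hash": self._prev_hash,
        }
        # Chaining each entry to the previous one makes after-the-fact
        # manipulation (by insiders or outsiders) evident on audit.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

log = AuditLog()
log.record("ids-v1.3", {"alert": "dns-tunnel-suspect"}, "quarantine-host",
           operator="analyst.smith")
```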

One of the things that I'm working on, along with several of my colleagues and people in other organizations, is how we perform assurance on AI-based systems, and how we can use AI, such as machine learning, to actually improve the assurance of our systems, including assurance related to cybersecurity objectives and goals. I've been working in that area for critical infrastructure, including process control systems, where components don't even have authentication organic to them. How can we get around the fact that components lack cybersecurity capabilities as basic as authentication? Most industrial process control system sensors have no way of authenticating themselves to anything else. But we found a way of using machine learning algorithms to look at the analog signals, the voltages produced by sensors, look for anomalies there, and use that as a means of authenticating that the data is actually coming from the sensor and not from something else.
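
As a rough illustration of the sensor-fingerprinting idea just described, the sketch below trains a one-class model on simple statistical features of a sensor's analog voltage signal and rejects readings whose noise fingerprint does not match. The signal model, features, and choice of a one-class SVM are illustrative assumptions; the actual approach in Dr. Michael's work may differ.

```python
# Minimal sketch (Python, scikit-learn): fingerprint a sensor from simple
# statistics of its analog voltage signal, then reject readings whose
# fingerprint does not match. Signal model and features are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

def fingerprint(window):
    """Per-window features: signal level, noise amplitude, noise roughness."""
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

# Training windows captured from the genuine sensor's voltage output.
genuine = [fingerprint(2.5 + 0.01 * rng.standard_normal(256))
           for _ in range(200)]

verifier = OneClassSVM(nu=0.05).fit(genuine)

# An injected (spoofed) signal rarely reproduces the sensor's noise profile.
spoofed = fingerprint(2.5 + 0.10 * rng.standard_normal(256))
accepted = verifier.predict([spoofed])[0] == 1  # OneClassSVM: 1 = inlier
print("accepted as authentic" if accepted else "rejected: fingerprint mismatch")
```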

We also look for what might be the cause of such an anomaly. And that approach would even keep the process control system working if someone, for example, performed a cryptovirology type of attack, a ransomware attack, against the IT and OT networks of the system. A third guiding principle is to take a complete system-level perspective, which I already mentioned: looking systemically at all the different aspects, the technology, the policy, the law, and the actual people. How do we get the automated system, which includes the AI, to team with the human? We need to consider things like human factors, including educating AI system developers, human operators, and other stakeholders, and identifying stakeholders and their relationship to the AI system. That will help us a great deal in establishing and clarifying the system's goals and requirements and aiding smooth operation and adoption.

This principle is important because, if we don't go this route, automation can suffer failures as fewer humans are responsible for more output but are less aware of how the output is generated. Also, in systems enhanced by automation, humans become both more critical at the points where the technology hands off control, just like the work I did with dual-mode automated vehicles that could operate manually or in full automation mode, and less able to take on that control, whether in nominal or degraded modes of system operation. And an overriding principle that I think is important to mention during this podcast is that although a defensive AI capability may be dependable, efficient, and effective, we need to apply that capability in an ethical manner. Ethics plays a key role in enabling our cybersecurity and cyber operations.

So, what I'm saying is that we really need to apply AI responsibly. Even though we may have the best intentions, two wrongs don't make a right. Even though the attackers may not be playing fair, and it's not a level playing field, I already mentioned the asymmetry in this competition between the defenders of digital systems and the attackers of those systems, we still need to be responsible in our actions. I'd like to quote former Defense Secretary Mark Esper who, in a speech, was referring to our development of AI capabilities for many different aspects of how we run our enterprise.

But he said, and this is a quote, "The United States will, once again, lead the way in the responsible development and application of emerging technologies, reinforcing our role as a global security partner of choice." To me, that resonates very strongly. I want to be a responsible, ethical user of AI in performing the role of defending our enterprise networks and other resources. So, I want to be an example, and I think we all want to be that example. So, those are some of the principles. There are many others I'm sure you and others can think of, but I think those, at least, give us some things to think about.

Dr. Howard:

Excellent. Well, you've unpacked a lot during this session. You've touched on this already, but let's reiterate it for our audience as we conclude: what are the drivers for advantage in the future of cyberspace?

Dr. Bret Michael:

I think human-machine teaming is going to be it. It is critical, and we are already seeing that. We already employ AI in the ordinary devices we use, for example, our smartphones. Multifactor authentication involving things like a password and facial recognition relies on machine learning algorithms built into the phone. There's a chip or chips in the phone that map the points on your face and say, "Oh, yep. That's Michael Howard, or that's Bret Michael." In addition to that, the user supplies something they know, which is a password. So, it's something they are and something they know, and it could also be something they have: you might have another device that gives you random numbers to type in.

So, that would be three-way multifactor authentication, and that's already with us today. But that's at the individual level; it's not that complex, but it is AI-enabled. As we scale up to enterprise-level cybersecurity, we need to think about how the humans in an organization team with the AI to defend the enterprise's resources, and that's a very difficult problem in and of itself. When you add in the teaming piece, we really are still finding our way. The technology continues to evolve, but figuring out how to best employ it with human-machine teaming, I think, is where we're going to make great leaps and bounds. We're still working our way in that direction.
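
For readers who want to see the three factors side by side, here is a minimal sketch of the three-factor check Dr. Michael outlines: something you know, something you are, and something you have. The threshold, the stubbed face-match score, and the one-time-code handling are illustrative assumptions, not a production authentication scheme.

```python
# Minimal sketch (Python, standard library): a three-factor check combining
# something you know, something you are, and something you have. The stubbed
# face matcher, threshold, and code handling are illustrative only.
import hashlib
import hmac
import os

SALT = os.urandom(16)
STORED = hashlib.pbkdf2_hmac("sha256", b"correct horse", SALT, 100_000)

def check_knowledge(password: str) -> bool:
    """Something you know: salted, slow password-hash comparison."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)
    return hmac.compare_digest(candidate, STORED)

def check_biometric(face_match_score: float) -> bool:
    """Something you are: score from an on-device ML face matcher (stubbed)."""
    return face_match_score >= 0.95

def check_token(code_entered: str, code_expected: str) -> bool:
    """Something you have: one-time code from a separate token or app."""
    return hmac.compare_digest(code_entered, code_expected)

def authenticate(password, face_score, code, expected_code):
    # All three independent factors must pass for access to be granted.
    return (check_knowledge(password)
            and check_biometric(face_score)
            and check_token(code, expected_code))

print(authenticate("correct horse", 0.97, "492816", "492816"))  # -> True
```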

Dr. Howard:

Well, I'm sure it requires, yes, particularly at the nation-state level, quite a bit of investment that [inaudible 00:25:41]-

Dr. Bret Michael:

Yes, and the wonderful thing is that the Department of Defense has been a key investor in AI and cybersecurity, and now in the integration of the two. Although much of the innovation today is not internal to the Department of Defense, it and other government agencies, including agencies around the world, are trying to influence industry to fill capability gaps such as the human-machine teaming aspects of using AI in combination with cybersecurity. So, we are still influencers and, of course, heavy users of AI and cybersecurity. I think things will continually improve, but we have to remember it's a cat-and-mouse game: we, as the defenders, need to keep up with the attackers. So, it's not a matter of whether we'll use AI in combination with cybersecurity; we are using it, and we have to keep up with the attackers.

Dr. Howard:

Understood. Dr. Bret Michael, thank you very much for this insightful talk. I'm sure our audience gained quite a bit from it. There's lots to unpack here, and I would love to share your article, entitled Cybersecurity via Artificial Intelligence: Risks, Rewards, and Frameworks, if you're okay with that.

Dr. Bret Michael:

I'm okay with that.

Dr. Howard:

Excellent, excellent. See you on the other side. Talk to you later.

Dr. Bret Michael:

Okay.

Dr. Howard:

Bye-bye.

Dr. Bret Michael:

It was a pleasure. Thank you.

Announcer:

Thanks for listening. To hear more podcasts, don't forget to subscribe on Google Play or wherever you listen. You can find out more about DTRA at dtra.mil.

ABOUT DTRA

DTRA provides cross-cutting solutions to enable the Department of Defense, the United States Government, and international partners to deter strategic attack against the United States and its allies; prevent, reduce, and counter WMD and emerging threats; and prevail against WMD-armed adversaries in crisis and conflict.  
