Discussions with DTRA: Episode 4


Discussions with DTRA Podcast: Around the Microphone

DTRA, the premier agency for meeting the challenges of WMD and emerging threats.

The DTRA Podcast series provides agency members with a platform to discuss mission-related, morale-boosting, or special interest topics. The goal of the program is to deliver cross-talk that educates and informs audiences, supports employee engagement, and identifies potential outreach opportunities. Listeners can expect conversations that are supported by the agency director, amplify the agency's core functions, and convey mission intent in segments that run 20 to 40 minutes.

Episode 4: Responsible Artificial Intelligence Strategy and Implementation Pathway

Length: 20:13

In this episode, Michael Howard, DTRA's Chief of Acquisition Systems, Training and Support/PM/COR, Acquisition Management Department, sits down with Rhonda Maus, Professor of AI Software Engineering and Agile Coach Instructor at the Defense Acquisition University, to discuss the Responsible Artificial Intelligence Strategy and Implementation Pathway.

 
 
 

Interviewer:
Michael R. Howard
Chief, Acquisition Systems, Training and Support/PM/COR
Acquisition Management Department
Defense Threat Reduction Agency

Interviewee:
Rhonda Maus
Professor of AI Software Engineering and Agile Coach Instructor
Defense Acquisition University

Public Affairs Facilitator:
Darnell P. Gardner
Public Affairs Specialist
Defense Threat Reduction Agency

 

Acronym Terms
DAU – Defense Acquisition University
DTRA – Defense Threat Reduction Agency
DTRA PA – Defense Threat Reduction Agency Public Affairs
AI – Artificial Intelligence
IT – Information Technology
DoD – Department of Defense
Engineering V – An engineering approach that defines project requirements before technology choices are made and the system is implemented
DT and OT – Developmental and Operational Testing
RAI – Responsible Artificial Intelligence

Transcript

Announcer:

Welcome to Discussions with DTRA, where the Defense Threat Reduction Agency brings together subject matter experts to discuss meeting today's challenges of WMD and emerging threats and to increase awareness. And now, today's show.

Darnell Gardner:

Hello and welcome to Discussions with DTRA. I am Darnell Gardner with DTRA Public Affairs, and I will be your facilitator for today's podcast. At this time, I'd like to introduce our moderator, Dr. Michael Howard, a senior program manager with DTRA's Acquisition Management Department. During this podcast, we will be discussing the implementation of responsible artificial intelligence in the DoD and operationalizing DoD AI ethical principles. This effort aims to ensure that DTRA personnel and leaders understand that lawful and ethical behavior applies when designing, developing, testing, procuring, deploying, and using AI. Without further ado, Dr. Howard, if you will.

Dr. Michael Howard:

Welcome back to the DTRA podcast. I'm here today with Rhonda Maus from the Defense Acquisition University. She's here to discuss our topic of responsible artificial intelligence strategy and the implementation pathway. Rhonda, give us a little bit of your background.

Rhonda Maus:

Certainly, yeah, good morning, and thank you for having me. I'm currently a professor of software engineering for DAU, Defense Acquisition University, and I've been here for about three years. I retired early from the software industry after about 30 years of working in Silicon Valley and Wall Street in software engineering and software program management. So, I've come as part of a cadre of people who are helping DoD modernize our software.

Dr. Michael Howard:

Excellent. Well, this isn't your first time here at DTRA. We had you here a few months back. A lot has changed and we thought this topic was right down your alley. I'm aware that public concern led the DoD to codify their responsible artificial intelligence pillars in 2020. Besides the public concern, why is responsible AI important to DoD?

Rhonda Maus:

Well, that's a great question. Advancements in AI have demonstrated the ability to truly transform every aspect of modern society, and it's no different here within the Department. We have mandates to advance our technology and create a force fit for our time. So, certainly, we are taking advantage of AI, and we're finding it every day on our desktops; products we use like Microsoft Teams, Excel, and even Word have AI in them today. So, it's ubiquitous.

When it comes to the Department, aside from the fundamental questions of justice and fairness within our IT and data systems, there is the concern that AI may act more like a human: that these systems could make more decisions, or more human-like decisions, could take on more responsibility, and may exhibit more human behaviors. That concern is something an evolved society has to address, and this is not just in the Department; it's happening in corporate America as well.

We have a society based on ethical foundations. And now, because of the advancements in technology and because of this anthropomorphization, I always have trouble saying that word, but it's basically where we compare this technology to humans, it makes people nervous. And so it's time for us to apply a more philosophical and ethical lens toward IT and software. AI at the end of the day is software. I think I told you last time: better software running on better hardware with a lot of data. And so we're extending the ethical lens now to cover software engineering and AI simply because of what it can do. Now, obviously, within the Department with our missions, we have a responsibility to maintain the ethical standards that DoD has always had. And there's a sense that we need to make sure that this technology is safe, reliable, and trustworthy, and that our war fighters and the citizens of this country can trust it. So, I think the burden on us is bigger in some ways.

Dr. Michael Howard:

That leads me to a follow-up. Responsible artificial intelligence is a new concept, and I'm aware that DoD just issued the strategy a few months ago. What are the DoD's goals for RAI?

Rhonda Maus:

Well, certainly at the highest level, it's to ensure that our citizens, war fighters, and leaders can trust the outputs of AI in whatever way we use it in DoD. But because we're DoD, we must demonstrate our military's commitment to lawful and steadfast behavior. I pulled that line right out of the policy. And so I think that's the primary goal. The Department said in the May 2021 RAI memo, the very first policy document we received on Responsible AI, and I think the first time we heard the term RAI, that, "We will take a holistic, integrated, disciplined approach to RAI." The strategy that came out earlier in 2022, the one you're referring to, is really the next step in that commitment. This is a maturation of DoD's ethical framework that accounts for AI's unique characteristics and the potential for unintended consequences.

There's a phrase in the RAI strategy, the newer document, that says, "RAI is a journey to trust." So, at the highest level, RAI is a journey to trust. How are we going to plan and build and implement these systems in a way that they're trusted by the workforce, by the war fighter, and by American citizens?

Dr. Michael Howard:

That's a profound statement. Can you share some of the five key principles of Responsible AI?

Rhonda Maus:

Yes, absolutely. Let's start with responsible. We'll just define these a little bit here. Responsible means we're going to exercise appropriate levels of judgment in acquiring and using AI. So, acquisition of AI is at the forefront, and we're going to do that responsibly.

Number two, equitable. We take deliberate steps to minimize unintended bias. Bias in data and bias in systems is a major challenge, not just for DoD but for everybody. The main reason is that while we may be developing new algorithms, we're using old data in a lot of cases, and we don't necessarily know what biases were applied to that data five or 10 or 20 years ago when it was created. We tend not to get rid of data. I don't know if you've ever noticed that, but we collect demographic data, for instance, or we collect data on facial recognition, and we add to it, but we tend not to get rid of data that existed before. And so we have to question how that data was created. So, the topic of bias in data is a big topic. So, number one, responsible. Number two, equitable.
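To make the data-bias point a bit more concrete, the following is a minimal, hypothetical Python sketch, not anything drawn from DoD guidance, that screens a legacy dataset for large gaps in positive-outcome rates across groups before the data is reused for training. The dataset, column names, and threshold are illustrative assumptions.

    # Hypothetical sketch: screen a legacy dataset for group-level outcome
    # disparities before reusing it to train a model. Column names and the
    # threshold are assumptions, not prescribed values.
    import pandas as pd

    def outcome_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
        # Positive-outcome rate for each group represented in the data.
        return df.groupby(group_col)[outcome_col].mean()

    def flag_disparity(rates: pd.Series, max_gap: float = 0.10) -> bool:
        # Flag the dataset if any two groups differ by more than max_gap.
        return (rates.max() - rates.min()) > max_gap

    if __name__ == "__main__":
        # Toy stand-in for a decades-old dataset whose collection biases are unknown.
        legacy = pd.DataFrame({
            "group":   ["A", "A", "B", "B", "B", "A"],
            "outcome": [1, 1, 0, 0, 1, 1],
        })
        rates = outcome_rate_by_group(legacy, "group", "outcome")
        print(rates)
        if flag_disparity(rates):
            print("Disparity exceeds threshold; review data provenance before training.")

A check like this does not prove the data is fair, but it forces the question she raises: how was this data created, and what biases came with it?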

Number three, traceable. AI is developed so relevant personnel can understand the technology, meaning we can't just run a big AI routine and tell the war fighter, "Here's a decision, do something." We're going to have to build trust in these systems. We're going to have to explain them to the war fighter, and there are lots of ways to do that within software, but we need the war fighter to understand the system well enough. Where are the 10 data sources, let's say, coming from? As a pilot, you want to understand some of that, because when AI is augmenting your decision, giving you a suggestion, you want to know where that suggestion comes from. And so, traceable: we want to be able to trace the data that ends up in there.
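As a rough illustration of what "traceable" can mean in software, here is a hypothetical Python sketch, with invented field and source names, that attaches a provenance record to every AI-generated suggestion so an operator can see which data sources and which model version produced it.

    # Hypothetical sketch of traceability: attach a provenance record to every
    # AI-generated suggestion so the operator can see where it came from.
    # The dataclass fields and source names are illustrative assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class Provenance:
        sources: List[str]      # data sources that contributed to the output
        model_version: str      # which model produced it
        generated_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @dataclass
    class Suggestion:
        text: str
        confidence: float
        provenance: Provenance

    def explain(s: Suggestion) -> str:
        # Render an operator-readable trace of where a suggestion came from.
        return (f"{s.text} (confidence {s.confidence:.0%}); "
                f"model {s.provenance.model_version}; "
                f"sources: {', '.join(s.provenance.sources)}")

    if __name__ == "__main__":
        s = Suggestion(
            text="Recommend route B",
            confidence=0.87,
            provenance=Provenance(sources=["weather_feed", "terrain_db"],
                                  model_version="1.4.2"),
        )
        print(explain(s))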

Next one, governable. AI systems will be governed. We need to have confidence that we're going to design and engineer AI to detect and avoid unintended consequences. So we're going to explain it through traceability, but we are also going to govern these systems, making sure that our AI systems can disengage or deactivate as necessary. So, taking every step we can to avoid unintended consequences.
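A minimal, hypothetical sketch of the "disengage or deactivate" idea, with assumed thresholds and function names rather than any prescribed DoD mechanism: the model's recommendation is acted on only when a confidence floor and an input-validity check both hold; otherwise the system hands control back to the operator.

    # Hypothetical sketch of "governable": wrap model output in a guardrail that
    # defers to a human operator when confidence is too low or the input falls
    # outside the validated envelope. Thresholds and names are assumptions.
    from typing import Callable, Dict, Tuple

    def governed_decision(
        model: Callable[[Dict[str, float]], Tuple[str, float]],
        inputs: Dict[str, float],
        min_confidence: float = 0.75,
        out_of_envelope: Callable[[Dict[str, float]], bool] = lambda x: False,
    ) -> str:
        # Return the model's action only when the guardrail conditions hold.
        if out_of_envelope(inputs):
            return "DISENGAGE: input outside validated envelope; defer to operator"
        action, confidence = model(inputs)
        if confidence < min_confidence:
            return (f"DISENGAGE: confidence {confidence:.2f} below threshold; "
                    "defer to operator")
        return action

    if __name__ == "__main__":
        toy_model = lambda x: ("hold position", 0.62)  # stand-in for a real AI component
        print(governed_decision(toy_model, {"sensor": 3.1}))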

The last of the five key principles of RAI is reliable. DoD's AI will have explicit, well-defined use cases, and safety, security, and effectiveness will be thoroughly tested and assured. I want to spend a minute on the word "safety," because I'll tell you about a trend I've seen across the Department, and I don't think you and I have discussed it yet. I've noticed that our labs and a lot of our development areas in DoD are working on coming up with policy and process, not necessarily policy, but process suggestions, for AI, and a lot of the work is ending up in the bucket we call safety.

I got a report the other day from a collaboration between the Army Research Lab, the Air Force Research Lab, and a few others, and the title of the document was Safety Issues Based... I'll paraphrase, it was something like testing and evaluation of safety issues related to AI. I thought, "Oh, okay, this will be interesting." I read the document. It wasn't what we would have thought of before as pure safety issues. It was how do you do AI from soup to nuts? And I asked them, I said, "Wow, you covered off on everything." They even created a systems engineering V for AI. I said, "You know, you guys real..." And they said, "It's all unsafe to us. So, until we have processes to cover off on the fact that we all understand it and we've applied those five key principles to it, to us, it's all unsafe. So, we felt like we needed to deal with the whole topic."

Dr. Michael Howard:

Interesting.

Rhonda Maus:

So, when you're looking for AI guidance in the absence of a policy, let's say, go look at what the DoD safety people are doing around the Department, because they may be working on the total picture and doing a pretty good job of it as well.

Dr. Michael Howard:

That is interesting, and a lot to absorb in those five principles. How can organizations like DTRA implement Responsible AI through the AI lifecycle's three phases of planning, development, and deployment?

Rhonda Maus:

That's a great question. The first organization within DoD to come out with a framework to help us do this, I think, was DIU, the Defense Innovation Unit. They had been helping DoD customers try to implement AI. The Defense Innovation Unit is tasked with staying in touch with Silicon Valley and what's on the forefront of innovation, and with making those technologies and those companies more available to DoD programs. So, they were bringing in AI software for programs to look at and to use, and the ones that were adopting it in the early days needed a way to do this before the conversation even existed. And so they found themselves doing a sort of mission assistance to six or seven programs, I believe, for AI. And through the course of that, they developed the first framework we have: how do you plan, develop, and deploy AI in a responsible way?

What they came out with was, it's not something that we're going to check the box on. It's more of a holistic, cultural mindset, a different way of thinking about AI from the outset, and thinking about software engineering, to be honest, and hardware engineering from the outset, which is: we really need to account for those five key principles in every stage of the acquisition lifecycle. The systems engineering V I mentioned earlier was really interesting because one of its goals was to tell testers where to put those Responsible AI practices into play across the systems engineering V. And so an example might be that testers now are going to learn more about software algorithms. In the past, a software tester generally wouldn't learn about the algorithm so much as they would test the output, making sure nothing bad happened. Well, now they're saying, "Testers need to understand what that algorithm is doing in the first place and sign off on that as well." You see that go across the systems engineering V for every cycle.
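As a rough sketch of that shift, from checking a single output to checking what the algorithm is doing, here is a hypothetical property-style test in Python; the scoring function, field names, and the property itself are assumptions made for illustration.

    # Hypothetical sketch: test a behavioral property of the algorithm (it should
    # not depend on an irrelevant field) rather than just one expected output.
    def score(record: dict) -> float:
        # Toy stand-in for the algorithm under test.
        return 0.6 * record["sensor_quality"] + 0.4 * record["signal_strength"]

    def test_irrelevant_field_is_ignored():
        base = {"sensor_quality": 0.9, "signal_strength": 0.7, "operator_id": "A12"}
        perturbed = dict(base, operator_id="Z99")  # only the irrelevant field changes
        assert score(base) == score(perturbed), "score should not depend on operator_id"

    if __name__ == "__main__":
        test_irrelevant_field_is_ignored()
        print("behavioral property holds")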

So, that's a big question, getting into what we would do in all three of those phases. But I think the best way to answer it quickly is that we're baking it into everything we do, from human-centered or ethically aligned design up front, before we develop, all the way to testing aspects of a system we would never have tested before, so that we can better ensure that our algorithms and our data are free of bias and are doing what we're asking them to do.

Dr. Michael Howard:

I love the term, "baked in". That makes sense. Baked in.

Rhonda Maus:

Yeah, it's a good one.

I want to say, and we may have talked about this last time, it's not going to be easy for everybody to tell the difference between software that has AI in it and software that doesn't. My prediction, and this is Rhonda Maus's hypothesis, but there are a lot of people who share it in the Department, is that this is going to apply to all software engineering, to all types of software, not just AI software, because it's ubiquitous. We have systems that are mixed with AI, so it's going to be harder and harder to tell a program manager, for instance, "Only do responsible stuff on the AI part of your app, but not the other part that's not AI." Well, to them, it's going to look like one system.

So, if we're explaining our algorithms in one place, we might have to start doing it in another. If we're making sure our data doesn't have bias for the AI part of our system, oh, by the way, we're using that same data in the non-AI part of the system. So, I think what we'll see is a convergence, and we're going to figure out that this is what we needed to do to software engineering anyway. This industry of software, this capability, is really only about 40 years old in its active form, and it's about time we started applying more philosophical and ethical approaches as that industry and that world have matured.

Dr. Michael Howard:

Yeah, it sounds like we really need to ramp up the education and skills across the workforce in artificial intelligence and responsible AI.

Rhonda Maus:

Yes.

Dr. Michael Howard:

Which leads me to my final thought on resources. What type of organizational resources are required for the implementation of Responsible AI?

Rhonda Maus:

I think for that answer, it's probably not possible to separate RAI from AI. And so when you look at the RAI guidance and the strategy, what they're really telling us is the same thing we see in other strategies about the AI workforce. We need to look across the Department and identify everybody who has subject matter expertise or skills in AI. We're going to flag them, and we are then going to upskill. And so I agree with what you said before: it might be that we have to upskill across the board. These are modern software practices, and if software today is up to 90% of the features of a weapon system, which is what GAO tells us, that means everybody has software. Every program has some sort of software in it, and so we're kind of losing that distinction between software-intensive programs and other types of programs. Every program has software, so we need to raise our software and digital literacy across the board and make sure it's responsible.

That's where I think there's a special upskilling that'll happen for folks like testers, we'll just continue to use them as an example, and engineers. Today, we might have a systems engineer responsible for software at a program who isn't really that software-literate. Maybe there's a software support asset under that systems engineer running with it, and we can do it that way. It's no longer going to be possible for systems engineers, let's say, not to be conversant in software and not to understand that, from a design perspective, from a systems-of-systems perspective, they're going to need to look at all of these things we've talked about for RAI, and they're going to need to account for them with the program manager as they plan out the program.

I certainly think RAI, just the R part, is going to add staff, and again, this is my opinion, in the DT and/or OT area; it's going to add staff in the audit area, and those folks are going to have to be conversant in how you look at software algorithms and how you look at data. And so I don't think it's different from what we're already hearing we need for the AI workforce, except in those areas, the engineering, testing, and audit functions.

Dr. Michael Howard:

You've given us a lot to think about. As we conclude, I just want to thank you for coming back to DTRA and sharing your thoughtful insights on responsible artificial intelligence strategy and the pathway forward. Thank you again, Professor Maus.

Rhonda Maus:

Thank you. Thank you, Mike Howard, for having me.

Dr. Michael Howard:

Yes.

Announcer:

Thanks for listening. To hear more podcasts, don't forget to subscribe on Google Play and Spotify or wherever you listen. You can find out more about DTRA at DTRA.mil.

ABOUT DTRA

DTRA provides cross-cutting solutions to enable the Department of Defense, the United States Government, and international partners to deter strategic attack against the United States and its allies; prevent, reduce, and counter WMD and emerging threats; and prevail against WMD-armed adversaries in crisis and conflict.  

