
The Role of AI/ML in Intelligence and National Security


An interview with the Hon. Susan Gordon, former Principal Deputy Director of National Intelligence, was conducted by Balan Ayyar, CEO of percipient.ai, at the USGIF GEOINT Community Forum Online in recent weeks. Ayyar is the founder and CEO of percipient.ai, a Silicon Valley-based AI, machine learning and computer vision firm focused on intelligence and national security missions; the company is the title sponsor of the USGIF forum. Ayyar is also a retired U.S. Air Force general officer. His last role was as commanding general of Combined Joint Interagency Task Force 435 in Kabul, Afghanistan.

He served in four combatant commands, was military assistant to the Secretary of Defense, and then served at the White House as a White House Fellow. He is a member of the Council on Foreign Relations.

Ayyar introduced Director Gordon, who is well known in the national intelligence community, as a friend of USGIF and a pillar in the GEOINT community.

“Director Gordon has almost forty years in the service of our nation as an intelligence professional,” said Ayyar. “It’s a joy to discuss a few of the insights we’ve already learned from the conference with her, back to the importance of humility in this profession. I was happy to share a couple of precepts from our experience on AI, not just the importance of it, but also the challenges associated with it. I want to really focus the conversation this morning on insights from Director Gordon and a few others that are of interest to many of us in the NGA and other intelligence agencies.” Ayyar noted to Director Gordon: “You may be back in the service of the nation soon.”

Ayyar went on to state that we see rising adversaries all the time.

BA: I used to say the intelligence professionals were as important to the nation’s security and saving lives as the men and women who wear the uniform. None of what we could do in harm’s way was possible without brilliant intelligence professionals out there helping us predict, prevent and protect, those things that have been the cause of our intelligence profession for many years. I wonder if you can just comment on the environment we find ourselves in now, and whether there are precepts from your previous experience, from a different time when we had great powers that we were concerned about.

SG: Great question on putting this moment in historical context. I, like you, feel optimistic. I know this is a hard time to feel optimistic, and yet I surely am. Part of the optimism is because of how old I am and how many times I’ve seen us as a nation face challenges that we had no idea how we were going to meet. I grew up in the sixties; we were in elementary school and did duck and cover drills, because we thought global thermonuclear war was just around the corner. When I joined the intelligence community we were in the throes of the Cold War, and we had the capabilities that were allowing us to understand so much and have so much advantage in that knowledge. We were a huge part of developing the deterrents that allowed that not to manifest at that time.

And then 9/11 happened, and when we watched those planes fly into the twin towers we had no idea what was coming next. We didn’t know if life would return to any kind of normalcy, whether the uncertainty of that moment was going to dominate. Quite frankly, the intelligence community capabilities we had were insufficient for that adversary, and yet there is no way you can look at this moment and not say we again found our way to an advantage and new capabilities: to know where we needed to go for information, to gather it, and to build the partnerships. When we talk about using different information, at 9/11 we had no idea how to partner with people. We didn’t know how to trust, and yet the key to success in counterterrorism has been working with partners and using data that in 2001 we had no idea how to use. And now we have this other moment, and it really is the advent of the digital age. Our challenge in really embracing it is that we couldn’t see it. We were focusing so much on the physical world that we couldn’t see how much everything had changed. Quite frankly, I think it has taken Covid to weirdly put in perspective how much the world has changed, how interdependent it is, what the nature of threat is, what the nature of national security is. Now we have the opportunity to develop a new craft and new techniques, and we’re doing it right at the moment that some technologies are coming to maturity. We have always had to develop new capability, we have always had to figure out how to trust new information we had never known how to trust, and now we have the chance again.

BA: Great to hear your optimism. It’s great to see it’s not so different with AI as you were facing the crises in your career as you look back. 

And you had to adapt in your environment, and you had to open your eyes and look broadly in order to build the necessary trust. That type of terrorism was unheard of before the time you had to adapt.

Did your degree from Duke lead to your involvement with NGA or CIA?

SG: What a great question. I actually think that at Duke my choice of major did not anticipate the career I would have, but it was incredibly well suited to a craft discipline that demands curiosity and induction. When I talk about induction in intelligence, you dogmatically force yourself to deal with what is, and the craft is moving up into what you think it means, rather than deduction, which tries to break it down. So I think I lucked out on that one.

To be clear about NGA: if you’re a CIA officer you can’t imagine anything more. It’s the Central Intelligence Agency, massively capable, with ethos and domain. But my move to NGA changed me, and it changed me on two fronts. Number one, the kind of support we did put me in contact with users, people who wanted to do things tactically as much as strategically, and for me that was a usability shift. And second, I just lucked out that at NGA, under Robert Cardillo, we embraced the open; it was the dawn of commercially available data. That idea about participating openly, which isn’t a throwing away of the craft, actually shaped my thinking. CIA I was designed for; NGA grew me up.

BA: You had insight into innovation, as you founded In-Q-Tel with colleagues. About the time you came to NGA was when we knew there was much more data available than our analysts, with their tradecraft, could possibly understand in the time they had to be relevant to their missions. This isn’t a new challenge; we just looked at it differently. I just wanted to mention this unclassified data.

Now we see your vision coming to life. 

Are these agencies, under your leadership, ready for this change? The Google AI head said yesterday that the reason AI fails is often not the technology, but that the organization isn’t ready to embrace it. I feel like you’re ready to embrace it, that you think we had everything in place under the purview of the DNI to really take what we can bring to them and transform all the different processes that are affected by the use of AI in the prosecution of these missions. Do you think we have all the authorities in place, in the right way, for these agencies to really transform?

SG: The answer is probably no and yes, but I will start with yes. Were I in that role today, I would spend a lot of time reminding us who we are, reminding us what we’re trying to do. We’re trying to know more, see more, understand more with ever more precision, always a little sooner, so we can be waiting when the moment arrives, or even better, before moments that we can’t allow to occur. That has always been our mission; it is the role of intelligence. It isn’t about secrecy; intelligence is about national security, and so I start with: this is who we are. And the second is, we have the moment right now to more precisely understand the world, and to put that better understanding in the hands of free societies based on the rule of law, as the foundation for better decision making that allows good things to happen. I reject the notion that we have to be afraid of these technologies, or afraid of companies, or of partnering, because of the actions that might happen. This is fundamentally about having more clarity.

We have to take advantage of the abundant data and new technologies available and harness their use. And there are tons of things that we need to do to be more prepared. We need to work much harder on data integrity; we have to work a lot harder on integration and the ability to operate at scale. We have to develop craft to go along with it, because we have always known how to deal with uncertainty and we have always known how to put confidence in data. We just seem to be daunted by the scale and speed at which we’re asked to do it. But we can break it down into what we know and develop the craft out of it. I think we’re struggling in part because we tend to only see the risk with technology, but if you go back to the imperative, we have been here before and can see the advances we’ve made. Then we go all hands on deck, so I’m hopeful we get on with it.

BA: If I use the SOCOM example: in SOCOM it was common for them to brief their failures; the ability to take risks in these high-consequence tradecraft missions is very important and difficult. We agreed upon the importance of incorporating these capabilities into our work, but we didn’t discuss a lot of the challenges. One panelist said he wouldn’t do a proof of concept unless there was a legitimate plan to scale it to impact the organization. He feels, as many do with AI, that there is a bit of a “valley of death” after the proof of concept, because organizational leadership commitment and the will to take risk aren’t there. This is a very important difference from other types of business improvement software. You have to have the will to let the value creation process change a little bit, and have the leadership of the organization really embrace that.

That leads us to human and machine teaming a little bit. I wonder if you could share a few thoughts. Dr. Dixon, your successor, asked how we convince this workforce, which we have trained in an elegant tradecraft that has been a largely human endeavor up until now, to trust machine intelligence that helps them feel strengthened and not diminished, potentially able to do more, look at more and understand more.

All of us are depending on machines all the time now.

What do we think Google is?

Your intelligence profession is among the most difficult professions to crack because you train at such a high level of performance.

SG: Such a rich question. I’m going to try not to sound glib, because I don’t mean it that way. If the people who are responsible for purveying the wisdom that enables good decisions do not feel stressed by the questions being asked of them, no technology is going to be enough; it just won’t be. I will gently decline to go into discussion about the current administration and my role; I like talking about the things I can do something about.

I felt that this president was making decisions here (she extends her right hand and arm laterally) and the intelligence community was leaving him off here (she extends her left hand and arm in the opposite lateral direction). All that space in between he was navigating, and that is a big space. My imperative was: how do I fill that in? I wish I could inspire that. My gosh, if climate is the national security challenge of the day, if certainty of knowledge or the interaction of the ecosystem is something that provides advantage, then how is that not something our craft can be applied to? So when I say you have to see the moment clearly, understand the questions that are being asked and answered insufficiently, be prideful of the role of intelligence historically and, as I mentioned, of the gaps, then you have to figure out how to do this. But if that pull isn’t there, development of capability alone will never be enough, because you can always stretch out capability; you can always stretch out your budget. So scale is important because you’re trying to change big things. Integration is important because it’s not going to be one data set that provides the answer. And human interaction is going to be important because intelligence is never an equation: it always requires judgment, always runs the risk of bias, deals with anomaly. So you can use the technology and data abundance to tee things up, but the human judgment remains. If you ask me what I think a gap is, besides scale, and maybe why we haven’t seen that pull, it’s because the data work needs to be delivered at the decision points that humans make. And if you dump it off in the wrong place, they will either lack confidence or be unable to see what it participates in. So to me it’s a whole end-to-end understanding of how you go about this.

I’m going to pool my technologies and deliver them at final decisions and also at intermediate points; I think that’s the next step.

BA: One person’s intuition is another person’s bias, as their experience is different. With machine intelligence you can actually offset and strengthen that in a collaborative way, helping humans be confident that they’re looking at all the necessary data and not falling into the trap of confirmation bias when working on a tight timeline, potentially with a senior leader who wants to make decisions quickly without the benefit of finished intelligence to help understand the strategic picture. We rush against an artificial timeline because of a personal decision-making style, and we end up making decisions without the benefit of the knowledge and data available to us. But we clearly haven’t made the core case yet, the sense of crisis that helps us really integrate innovation. Can you share your views on the consequences of our intelligence community not responding as quickly as it should to these technologies? Take China: they are a very serious competitor, and they have a different view of the world, even though on a number of levels we can work together, in terms of AI and other compute capabilities.

SG: First, the number one risk is that the world is going to turn: things are going to happen, solutions will be offered, problems will arise, changes will happen. We can kick back, and there isn’t going to be any catastrophe that keeps the world from turning. No matter what our contribution, some bad things will happen and some good things will happen, but those will just happen. The biggest risk of just allowing it to happen is that we would like to beat the other guy to the spot. Second, we do know our adversaries have one of two attributes we don’t. One, we actually have a bigger installed base, in terms of how government functions and how we operate, than many adversaries do, so some adversaries can just implement modern approaches because they haven’t been trying to do this for so long. They have the advantage of not having to overcome organizational inertia. Two, there is a huge difference between our values and rule of law and those of some of our adversaries, and they are presented with this world of possibility of data and data use without the constraints we feel from a data and policy perspective. With an equivalent technology base they’re going to jump, and they’re going to be using it differently, but they’re going to be using it, and then their influence on the turn of the earth is going to supersede ours. That’s a risk you see in China and Russia, both of whom are pretty capable and have been at this a long time, and in newcomers who can make these jump shifts to affect us. Internally, the risk is that we will sound good because we are good. We will have the opportunity to present, but I’m so concerned that if we don’t up our craft to be as gritty as others’, we won’t have the influence.

The opportunity goes with all the downsides I’ve mentioned. One of the great strengths of America, the military and the national security community is that we have been operating globally for a long time and we know how to put things into use. We know how to do more than just have a technological capability. All we have to do is apply the new data capabilities to it.

We are at a fraught moment, and there is a decision to be made. At some point I think the massive advantage we have can be eroded, but I do think we have to recognize we have some tremendous advantages, one of which is represented by you and percipient.ai and all the people in America and the open world who are competing to find new ways to do this. That’s a great national advantage.

BA: Around the democratization of GEOINT, we have actors who have access to sensitive sensors. Don’t we have a decided advantage with our sensors?

SG: Yes, our capability is shocking. We had an amazing advantage that allowed us to operate at distance and project power. That part of our advantage has in a way been eroded, and you have to give up the notion. Fifteen years ago it was said that every technology is going to be available to everybody, and I think we kind of embrace that.

So what is the nature of our advantage? What we have to do from a policy perspective is quit retarding ourselves as though, if we stop, the world will stop. And the second thing is that our advantage has always been in use: strategic advantage isn’t in the existence of a capability but in how you put it into practice.

We have massive advantage on that. But if somehow we don’t introduce those technologies into that advantage of use, they will catch up.

There will always be those who tell you it’s broken, whatever it is. I reject that notion out of hand. I encourage young professionals: people coming into professional life have a view of rightness and know what needs to be done. It’s challenging in America, but we’re vibrating with the recognition that we need to be more. We can’t let the status quo prevail. National security and global security are about not letting the status quo dominate. Truth matters; intelligence is a particular variety of truth we’re grounded in, but we’re desperately trying to understand what that means. It allows you to have both the certainty and the curiosity to find context and meaning, and allows decisions to be made that improve our condition. I think this is a great time to be in service.

I think we have figured out that the national security, global security tent is much bigger than just government work. If you do nothing else do something of purpose. Do something that advances truth and allows clear decision making and you’ll have done good and are on your way.

BA: Do any elements of the culture need to be tweaked?

SG: The value I think is immutable is responsibility. The cultural aspect is this feeling that what we do is going to matter, and if it’s going to matter it can’t ever be casual; I think that’s what distinguishes the national security community from some others. That said, we are a bureaucracy that can be limited by how we have ensured that responsibility and that confidence, doing it in the way we have grown accustomed to rather than finding new ways with what’s available to us. How do you introduce what so obviously needs to be included? Everyone is inspired by what’s available, and everyone feels the responsibility of the moment; the question is which is going to win. We have to find a way to bring them together. I think that’s what our moment is.

While I would encourage my GEOINTers to remember that GEOINT isn’t the point of national security, I think it affords us the best opportunity to figure out how to effectively use open information. If this community can lead in terms of unclassified data, I think you’ll find the more textural things will find their way in too. Hopes and dreams of the free world are resting on the GEOINT community, but remember: you’re not the point.


The post The Role of AI/ML in Intelligence and National Security appeared first on GISCafe Voice.

