
TECH INTERRUPTION

Operators, Optimization, and Ogres

Two techies, four topics, one wildcard,
10 minutes of insights and laughs! 🤖🤣⏰

Tech Interruption is a high-energy video podcast series where tech enthusiasts engage in unscripted debates on trending tech topics spiced up with surprise wildcard subjects.

The Experts

Sydney Gillen

Director, Geospatial AI Development

Matthew McDonald

Senior Director, D&I Technology and Innovation

Episode: Operators, Optimization, and Ogres

In this episode, technology professionals Matthew McDonald and Sydney Gillen delve into warfighter AI adoption, agentic AI, and more, including a wildcard topic that pits an animated icon against the nature documentary GOAT! Watch as Matt and Sydney serve up 10 minutes of insights and humor, then subscribe to get updates about upcoming episodes of Tech Interruption!

Full Episode Transcript

Matt: Welcome to Tech Interruption, where we break up the monotony of your day’s routine.

Sydney: Join us as we unpack five industry disruptors in just ten minutes.

Matt: It’s the ultimate Tech Talk challenge.

Sydney: Tech Talk Challenge. Cute! Oh my God! Crushed it!

Matt: Nailed it!

[3, 2, 1, Here we go!]

Sydney: Hi! My name is Sydney Gillen, and I’m the Director of Geospatial AI development at ECS within our Data and AI service line.

Matt: And I’m Matt McDonald and I run our Tech and Innovation here at ECS across our Defense, Health, and Intel accounts.

Today, we’re at the Equinix data center. Since 2016, Equinix has been a key partner in delivering large-scale AI/ML programs for our Defense, Federal, and Mission Partner accounts, enabled by Equinix’s scalability and reliable services for the past nine years.

So Sydney, for our first topic, it’s Warfighter AI adoption. What are your thoughts?

Sydney: The first thing that comes to mind is really kind of where we’re at today in 2025. And I think…we’ve talked a lot, Matt, in some of our conversations about how it’s a lot more accessible now and understandable.

Matt: There’s a lot of education around it. ChatGPT. Generative AI.

Sydney: Yeah. 100%. Whereas I think where some of the work that we’ve done started in a time when that just was not the case.

Matt: Everybody thought AI was Terminator or something that’s going to take their job.

Sydney: Right, right. And I think in 2017, when we started to think about Warfighter AI adoption, really intentionally, I think that there was a misunderstanding of what is AI and what can it do to help, and I think there was this inherent fear…

Matt: And augment users and enable them.

Sydney: Right. And I think that the fear came from, “Well, I don’t want to be replaced.” Right? And I don’t think that that’s something that we need to really think about from a warfighter perspective, or even a DoD perspective at this point…it’s just not where the technology and capability is…

Matt: But it’s how we enable the users…

Sydney: Well, and your timelines are being crunched down, right? So…as you have tools at your disposal that can help really expedite your workflows, that’s going to be really meaningful. And I think as technology has adapted, that’s become really critical for adoption.

Matt: Do you think that adoption requires it to be better than a human?

Sydney: I think human level performance is a very subjective term.

Matt: Me at 5:00am before my coffee in the morning.

Sydney: Not even that. That’s not even human level performance.

Matt: There’s a big spectrum there, and performance and adoption don’t necessarily go hand in hand. If you’re saving the end user two hours a day, you don’t necessarily have to be as performant as a human. You’re maybe allowing them to process more information faster, in a more efficient manner.

Sydney: Well, and if you were to give me a metric of you’ll save me two hours a day, I would absolutely take that into consideration.

Matt: I think everybody would.

Sydney: So that type of metric is going to be really, really meaningful to increasing adoption.

Matt: And we have more data, more sensors, fewer people, and we need to be able to process that information as quickly as possible. But there are use cases where we have to have very performant, ethical models in those implementations that really matter. Right!

Sydney: I think that’s… that’s really where we’re at with warfighter adoption.

Matt: I would agree.

Sydney: All right, Matt. So, we’ve talked about Warfighter AI adoption, let’s talk about Generative AI.

Matt: I think generative AI is a great capability. Everybody can use it, and everybody can consume it. We’ve got kids writing their resumes with GenAI, and you’ve got full-on code refactoring or software development. I think there’s a lot of really cool use cases and enabling down to the individual user, really to augment efficiency [bell dings] and provide more capabilities, you know, at the tips of their fingers.

Sydney: Right! And I think…there’s a lot of efficiency [bell dings] to your point, that can be leveraged within different businesses, different industries, including what we do today.

Matt: Exactly. What is one of the coolest implementations that you’ve seen of GenAI?

Sydney: Yeah, I think the one that I found really compelling fairly recently, I think it was even as early as this week, is that there was a commercial, completely generated by AI, that aired on TV.

Matt: Three-arm humans??

Sydney: None of that. Thank goodness!! It would have been a little disconcerting, but no, I mean, it’s… it’s voices. It’s people. It’s motions. It’s pretty incredible to see how far the technology has come in leveraging all of these various components.

Matt: I think when we look at it from a department standpoint, we have a lot of legacy code that exists in our weapons systems and different information systems, written in COBOL and languages where we don’t have programmers or expertise. We really need to quickly modernize and refactor that code, and using GenAI to assist users in that process, I think, provides huge value.
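The modernization workflow Matt describes can be sketched in a few lines. This is a hypothetical illustration only: the COBOL snippet is invented, and the model call is deliberately left out, since no particular provider or API is named in the conversation — the point is simply wrapping legacy code in a behavior-preserving refactoring prompt for a GenAI model.

```python
# Hypothetical sketch of GenAI-assisted legacy modernization.
# The COBOL snippet below is an invented toy example.
LEGACY_COBOL = "ADD HOURS TO OVERTIME GIVING TOTAL-HOURS."

def refactor_prompt(snippet, target="Python"):
    """Build a prompt asking a model to refactor legacy code for a
    maintainer who knows the target language, preserving behavior."""
    return (
        f"Rewrite the following COBOL so a {target} maintainer can own it.\n"
        "Preserve behavior exactly and explain each change:\n"
        + snippet
    )

# A self-hosted or commercial model would receive this prompt;
# a human expert then reviews the suggested refactor.
prompt = refactor_prompt(LEGACY_COBOL)
```

In practice the model's output would be reviewed and tested by engineers, which is the "assist users in that process" part Matt emphasizes.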

Sydney: Huge efficiency! [bell dings] Even industry and companies can leverage some of these capabilities to really create efficiencies [bell dings] internally. I mean…as you said, everybody can use generative AI, and so as you think about being able to bring generative AI into your organization, leverage it on your information, your documents, and be able to create meaningful workflows that are much more expeditious, I think, it’s really incredible.

Matt: That’s a great point. I mean, the commercial versions of ChatGPT take your data and all of your information and monetize it. But being able to host your own databases, like we do at ECS, lets you secure your information and also leverage that information with the GenAI models to get focused results.
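The self-hosted pattern Matt describes is essentially grounding the model in your own documents before it answers. Below is a minimal toy sketch, with hypothetical documents and a deliberately simple word-overlap retriever standing in for a real search index; the assembled prompt is what a self-hosted model would receive, so the data never leaves your environment.

```python
# Toy sketch of grounding GenAI in internal data (documents and
# the word-overlap scoring here are hypothetical stand-ins).
def retrieve(query, documents, k=2):
    """Rank documents by simple word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt for a self-hosted model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this internal context:\n{context}\nQuestion: {query}"

docs = [
    "Shipping records are archived nightly to the internal data lake.",
    "The cafeteria menu rotates weekly.",
    "Access to financial records requires finance-group membership.",
]
prompt = build_prompt("Where are shipping records archived?", docs)
```

Because retrieval and the model both run in-house, the "focused results" come from your documents rather than from whatever the public model was trained on.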

Sydney: Yeah, that’s a great point.

Sydney: All right, Matt, I’ve been doing a lot of thinking about who I would have lunch with if I could have lunch with anybody in the world… at any point in time.

Matt: Any person alive, living or dead?

Sydney: Anyone… anyone.

Matt: Who would that be?

Sydney: Jeffrey Katzenberg.

Matt: I got no clue who that is.

Sydney: Okay, listen. Hear me out. He used to be part of Disney, where he helped create classics such as The Lion King, and then left to co-found DreamWorks and DreamWorks Animation.

Matt: Very nice.

Sydney: Which have given us cinematic masterpieces, like Shrek!

Matt: What is Shrek? o,O

Sydney: Don’t. Don’t do that. -_-

Matt: Donkae?!

Sydney: Donkae!!

Could you imagine how much we could talk about over lunch if we were to talk about Shrek with Jeffrey Katzenberg?

Matt: And you could talk to all of his characters at the same time.

Sydney: Oh, sure. There’s so many characters. If you look at… there’s Shrek, there’s Donkey, there’s Puss in Boots, there’s Princess Fiona.

Matt: Well, what about that little gingerbread guy?

Sydney: Gingy?! Oh my gosh!! That would be amazing!

Matt: Don’t call me that again. [laughter]

[airy bells]

Matt: Well, you know who’s cooler than Shrek?

Sydney: Nobody.

Matt: It’s David Attenborough.

Sydney: Okay, that’s… that’s pretty cool.

Matt: One of my hobbies is saltwater fish tanks, and David Attenborough got to see the natural world and bring it into our living rooms across all of the TVs in America.

Sydney: Shrek also did that. But continue. [laughter]

Matt: It would be amazing for David Attenborough to come to lunch and narrate my fish tank and the natural beauty that it provides.

Sydney: Better idea! Joint lunch.

Sydney: Matt McDonald. Sydney Gillen. Shrek. Jeffrey Katzenberg. David Attenborough!

Matt: David Attenborough and Shrek could narrate the whole thing.

Sydney: And, uh… invite me to that lunch! And then we’ll also see you in theaters, 2026. Shrek 5!

Matt: Hungry already!

Matt: Okay, Agent Smith.

Sydney: Okay… Chill, dawg.

Matt: …what am I saying? [laughter from engineer]

Matt: So, Sydney, we just talked about generative AI. The next topic is Agentic AI. Where do you see that…as we start to see the modernization of technology?

Sydney: I think Agentic AI is certainly where everybody wants to go right now, from government to various groups within industry. I think there’s a lot of opportunity in some of the work that we do, and in finance, healthcare… there’s a lot that can be done, especially in situations where it’s probably more ideal to send an Agentic system than a human, whether that’s because of ease or safety. So I think that’s going to be really critical as we think about Agentic systems and where they can provide value.

Matt: I think it’s really interesting as we look at…you know, as we talk Large Language Models and some of those foundational models, but now we’re focused on very specific agents that do very detailed tasks and functions, and it allows us to pair those functions in multiple different kind of paths or implementations to allow us to get different types of end results and outcomes.
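Matt's point about pairing small, specialized agents into different paths can be shown with a toy pipeline. Everything here is hypothetical: the "agents" are plain functions that each do one narrow task, and chaining them in different orders yields different outcomes from the same input.

```python
# Toy illustration of composing specialized agents (all hypothetical).
def extract_numbers(text):
    """Agent 1: pull numeric figures out of a report."""
    return [int(t) for t in text.split() if t.isdigit()]

def summarize_total(numbers):
    """Agent 2: reduce the figures to a single total."""
    return f"total={sum(numbers)}"

def flag_anomalies(numbers, limit=100):
    """Alternative agent: flag figures above a threshold."""
    return [n for n in numbers if n > limit]

def run_pipeline(data, steps):
    """Chain agents: each step consumes the previous step's output."""
    for step in steps:
        data = step(data)
    return data

report = "shipped 40 units then 70 units then 150 units"
summary = run_pipeline(report, [extract_numbers, summarize_total])   # "total=260"
outliers = run_pipeline(report, [extract_numbers, flag_anomalies])   # [150]
```

Swapping one agent in the chain changes the end result, which is the "different types of end results and outcomes" Matt describes — without retraining any foundational model.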

Sydney: You know, Matt… one of the things we think about, across different uses, is how you trace not only what the Agentic system is doing, but how it got to that point. Because I think that translation and transparency of the system is going to be really, really important as we think about some of the ethical concerns.

Matt: I think there are great questions about the different kinds of databases or systems that agents have access to, from business and financial functions to shipping records. How do we know that the Agentic model is not providing data from other data sets that the user shouldn’t have access to?
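One common answer to the question Matt raises is to check the requesting user's entitlements before the agent ever touches a data set, so the agent cannot leak data the user could not see directly. The sketch below is a hypothetical illustration: the roles, data sets, and entitlement table are invented, not a real system.

```python
# Hypothetical guardrail: enforce the *user's* entitlements on every
# data-set access an agent makes on that user's behalf.
ENTITLEMENTS = {
    "logistics_user": {"shipping_records"},
    "finance_user": {"shipping_records", "financial_ledger"},
}

def agent_query(user_role, dataset, store):
    """Refuse any data set outside the requesting user's entitlements."""
    if dataset not in ENTITLEMENTS.get(user_role, set()):
        raise PermissionError(f"{user_role} may not access {dataset}")
    return store[dataset]

store = {
    "shipping_records": ["crate 7 delivered"],
    "financial_ledger": ["Q3 revenue"],
}

shipping = agent_query("logistics_user", "shipping_records", store)
```

The key design choice is that the check uses the end user's identity, not the agent's — an agent with broad technical access still answers only from data the human is cleared to see.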

Sydney: Yeah, that’s a great point.

All right, Matt. Last topic for the day. I think that this might be a little contentious, but I want to talk to you about cybersecurity.

Matt: Oh, my favorite topic! -_-

Sydney: I know. Which is why I think it’s going to get a little contentious because I think we have various opinions.

Matt: All right, let’s hear it, Sydney.

Sydney: Well, I’m curious what you think is the risk assumption that folks are willing to make with some of these new and emerging systems.

Matt: Well, I’m glad you said the risk. I’m glad you didn’t say, “What is my critical, high, medium, low…” That provides me no analysis of risk to the end user. And I think as we look at risk, we should be allowing the end user to actually make the risk decision. The warfighter in the field is the person who owns the risk. We don’t give them a gun and tell them, “Here’s two bullets. No more are coming.” But we do give them software with no tail-end support, no long-term roadmap, no approach, and we just deliver it to them with no kind of long-term follow-up.

Sydney: Yeah, but I think I would push back on that and I don’t fully agree. I think putting the risk on the warfighter, you’re going to see decreased adoption, a lack of trust, and I think you have to create this environment where there is a safety net. Where an authorizing official has said, “Yes. I, as the AO, am going to assume this level of risk. And oh, by the way, here’s why I’m assuming this level of risk, and here’s how you leverage the system so that I am still assuming the risk.” Otherwise, then I would agree, I think it is on whoever is using it if they use it out of the bounds of that ATO.

Matt: I think that’s an interesting point. And, you know, I see organizations saying we need to build new cybersecurity processes for AI models. Why do we need to build something new? We have existing standards for everything from our business systems to weapon systems: how we assess, how we secure, and how we make sure that we’re doing those in an ethical manner. Why are we doing something different for AI?

Sydney: Well, I think there’s a push to try and leverage AI to then accredit AI, and I personally think that’s opening this Pandora’s box of how you’re going to daisy-chain one accreditation off another.

Matt: Writing the test and assessing the test all at the same time.

Sydney: Right! And I just don’t personally think that’s a sustainable way to go about it, because at some point, the buck has to stop somewhere.

Matt: Well, I think we can all agree that the end goal is to deliver secure systems for our warfighters, and that we can all get behind that.

Sydney: Yep. I totally agree.

Sydney: Whoo! With that… We are…. We’re outta time.

Matt: Well hey, come join ECS. Where Smart People are doing Cool Things. And having cool conversations.

Sydney: And if you too would like to be part of the conversation, you can follow us on LinkedIn or visit our website.

And with that, shout out to my fans.

Matt: Okay, byyyyyye.

[laughter]

[outtakes play]
