On Tuesday, the Cybersecurity Subcommittee of the Senate's Armed Services Committee held a hearing about harnessing Artificial Intelligence to advance the Department of Defense.
Transcript
00:00I'd like to thank our witnesses for appearing today to discuss how artificial intelligence
00:06can be utilized to enhance the Department of Defense's cyber capabilities.
00:11We have just heard from experts in our closed session from the U.S. Cyber Command, the Defense
00:17Advanced Research Projects Agency, or DARPA, and the DoD's Chief Digital and Artificial
00:22Intelligence Office.
00:24These organizations all play a crucial role in making sure the department is postured
00:28to carry out its national security mission in cyberspace.
00:33Recent cyber attacks against U.S. critical infrastructure are a stark reminder of the
00:38growing sophistication and persistence of cyber threat actors.
00:43To outpace our adversaries in the cyber domain, the department must rapidly harness the advances
00:48of AI technologies.
00:51This means that the Department of Defense needs capable partners outside of the Pentagon
00:56who are moving at breakneck speed to solve our national security challenges.
01:01This brings us to our hearing topic today, how the department can leverage AI-enabled
01:07capabilities to field exquisite offensive and defensive cyber tools, enhance our ability
01:14to detect cyber threats, and automate threat mitigation to gain an enduring advantage in
01:21cyberspace.
01:22I also look forward to hearing from the witnesses about how the department can be better equipped
01:28to counter enemy AI-enabled cyber capabilities and leverage AI to enhance our overall warfighting
01:34ability in the cyber domain.
01:37Our innovators and tech companies are one of our asymmetric advantages in the cyber
01:41fight, but the gap is steadily closing.
01:45At the tip of the spear is artificial intelligence.
01:48Unfortunately, the Chinese Communist Party understands this all too well.
01:53Xi Jinping has spoken about the importance of AI.
01:57With the release of DeepSeek earlier this year, it is clear, unless we act decisively
02:03and soon, China will not be playing catch-up.
02:07We will.
02:08U.S. advancements in this critical technology are impressive, and we are fortunate to have
02:13some of the best innovators in the world.
02:16As Silicon Valley and other leading technology developers continue their research and development
02:20of AI at the bleeding edge, our job must be to integrate those tools in a secure but rapid
02:29fashion into our cyber capabilities.
02:32I look forward to hearing from our witnesses, who all bring unique and first-hand experience
02:38about how the department can speed up its use of AI in the cyber domain.
02:44Again, thank you to our witnesses for coming here today, and before I introduce them, I
02:48will now recognize Ranking Member Senator Rosen.
02:51Well, thank you, Chairman Rounds, and I'd like to begin by welcoming our panel and thanking
02:57you all for joining us.
02:59This topic has profound implications for our national security, I would say for our personal
03:04security, for everything in our world to come.
03:07But this is actually my first hearing as Ranking Member of this subcommittee, and I am really
03:12honored to work alongside Chairman Rounds, our colleagues, and each of you on how we
03:17can responsibly integrate innovation and the increasing pace of technology, including artificial
03:23intelligence, into our national defense strategy and into the hands of our service members
03:29to enhance their speed, their capabilities, and their operating picture.
03:34Well, of course, all the while we have to balance the risks and rewards of AI and what
03:39it teaches us.
03:41So with great promise comes great responsibility.
03:44We know that our adversaries are developing new AI tools that have the potential to fundamentally
03:49shift the nature of warfare.
03:51We begin to see how new uses of AI can help our own service members counter such threats
03:58and take proactive offensive actions in the moment as well.
04:03However, the rapid pace of AI innovation also raises really important questions about its
04:08ethical implications, its governance, and the security risks it poses as well.
04:14We're operating in a new world without guardrails, and we need to tread carefully, balancing
04:19such caution with the need to create an environment that allows for innovation and agility.
04:27And there are also challenges we must overcome in order to both mitigate the risks of AI
04:31and make the most of the opportunities that I know it presents.
04:35In particular, we need to further invest in and expand the AI workforce, both at DoD
04:41and across the government, across the private sector.
04:44We have to increase it everywhere to harness our full potential.
04:48I truly believe this.
04:51As a former computer programmer and systems analyst myself, I can say from firsthand experience
04:56that AI has vastly changed the technology landscape since I began my career.
05:04Many of the coding and the programming skills that people like me brought to the table,
05:08which formed the backbone of what CYBERCOM personnel do every day in both offensive and
05:13defensive operations, can now be supplemented by AI.
05:19And I know it doesn't replace us, that's for sure.
05:22However, this does pose its own set of risks, and it creates a deep need for us to
05:27invest in that new kind of cyber workforce that is centered around understanding these
05:33AI skills.
05:34And we continue to have a cyber and AI skills gap.
05:37And until we meet that challenge of bridging it, understanding it, being able to see its
05:42potential, and at the same time understand how it improves our own potential as human
05:49beings, we're going to continue to be at the risk of our adversaries having the upper hand.
05:55So I look forward to discussing such challenges today and over the course of this Congress.
05:59I thank our panel once again for your expertise and contributions to that effort.
06:05I thank you again, Mr. Chairman.
06:09And it is a pleasure to have you here on the team with us.
06:11And this is one of those subcommittees in which it is very bipartisan, and we have focused
06:17on this since the creation of this by Senator McCain back in 2017, I believe, and the path
06:22forward, I think, has been made better because of the work that we've done in the past on
06:26a bipartisan basis to keep everything on the straight and narrow.
06:29I want to thank all of you once again for coming in and participating in this open session.
06:35And we have with us today, all three of you here, beginning with Mr. Jim Mitre, Vice
06:39President and Director of RAND Global and Emerging Risks.
06:43Mr. Mitre, welcome.
06:45Mr. David Farris, Global Head of Public Sector, Cohere, welcome.
06:50And Mr. Dan Tadros, Head of Public Sector, Scale AI.
06:55And I understand that the agreement has been made.
06:58Mr. Mitre, you will begin today.
07:00So we welcome you for your opening statement, sir.
07:02Terrific.
07:03Chairman Rounds, Ranking Member Rosen, thank you so much for the opportunity to testify
07:07today on the national security implications posed by the potential emergence of Advanced
07:12Artificial Intelligence, or Artificial General Intelligence, AGI.
07:17Leading AI companies in the United States, China, and the rest of the world are in hot
07:21pursuit of AGI, which would possess human-level or potentially even superhuman-level intelligence
07:27across a wide variety of cognitive tasks.
07:31The pace and potential progress of AGI's emergence, as well as the composition of a post-AGI future,
07:37are uncertain and hotly debated.
07:39Yet the emergence of AGI is plausible, and the consequences so profound that the U.S.
07:45national security community should take it seriously and plan for it.
07:50Consider the following.
07:51What would the U.S. government do if, in the next few years, a leading AI company announced
07:56that its forthcoming model had the ability to produce the equivalent of one million computer
08:01programmers as capable as the top 1% of human programmers at the touch of a button?
08:07The national security implications are substantial and could cause a significant disruption of
08:12the current cyber-offense-defense balance.
08:15At RAND, we are planning for it.
08:17Our work has revealed that AGI presents five hard national security problems.
08:22First, AGI might enable a significant first-mover advantage via the sudden emergence of a decisive
08:28wonder weapon, for example, a capability so proficient at identifying and exploiting vulnerabilities
08:34in enemy cyber-defenses that it provides what might be called a splendid first cyber-strike
08:40that completely disables a retaliatory cyber-strike.
08:44Such a first-mover advantage could disrupt the military balance of power in key theaters,
08:49create a host of proliferation risks, and accelerate technological race dynamics.
08:53Second, AGI might cause a systemic shift in the instruments of national power that alters
09:00the balance of global power.
09:02The history of military innovation suggests that being able to adopt a new technology
09:07is more consequential than being the first to achieve a specific scientific or technological
09:12breakthrough.
09:13As U.S., allied, and rival militaries establish access to AGI and adopt it at scale, it could
09:20upend military balances by affecting key building blocks of military competition, such as hiders
09:27versus finders, precision versus mass, or centralized versus decentralized command and
09:32control.
09:33States that are better postured to capitalize on and manage systemic shifts caused by AGI
09:38could have greatly expanded influence.
09:41Third, AGI might serve as a malicious mentor that explains and contextualizes the specific
09:47steps that non-experts can take to develop dangerous weapons, such as virulent cyber
09:52malware, widening the pool of people capable of creating such threats.
09:58Fourth, AGI might achieve enough autonomy and behave with enough agency to be considered
10:04an independent actor on the global stage.
10:07Consider an AGI with advanced computer programming abilities that is able to break out of the
10:12box and engage with the world across cyberspace.
10:16It could possess agency beyond human control, operate autonomously, and make decisions with
10:21far-reaching consequences.
10:23Fifth, the pursuit of AGI could foster a period of instability as nations and corporations
10:29race to achieve dominance in this transformative technology.
10:34This competition might lead to heightened tensions reminiscent of the nuclear arms race,
10:38such that the quest for superiority risks triggering rather than deterring conflict.
10:44Misinterpretations or miscalculations could precipitate preemptive strategies or arms
10:49buildups that destabilize global security.
10:53As the U.S. Department of Defense embarks on developing the National Defense Strategy,
10:57it will have to grapple with how advanced AI will affect cyber along with all other
11:01domains.
11:02The five hard problems that AGI presents to national security can serve as a rubric to
11:07evaluate how the strategy addresses the potential emergence of AGI.
11:12Thank you for the opportunity to testify.
11:14I welcome your questions.
11:17Mr. Tadros, unless you folks have agreed on a different – okay, Mr. Tadros.
11:26Chairman Rounds, Ranking Member Rosen, members of the subcommittee, thank you for the opportunity
11:29to be here today.
11:30My name is Dan Tadros.
11:31I lead Scale AI's public sector business.
11:33Every day, my team is singularly focused on how to bring best-in-class AI into the DOD
11:37and other agencies.
11:39Scale was founded in 2016, and since that time has powered nearly every AI innovation.
11:44Our role in this critical ecosystem provides us a unique opportunity to understand how
11:48to build high-quality AI systems powered by the world's best data.
11:53Our work is deeply personal to me, as I have worked nearly my entire career at the intersection
11:56of AI and the government.
11:58During my time as an active-duty Marine, I had the privilege of helping to stand up
12:01the Joint Artificial Intelligence Center, which enabled me to see firsthand the challenges
12:04and struggles associated with the DOD's implementation of AI.
12:08This hearing comes at a critical time for the future of AI leadership, and before we
12:11discuss what the United States must do to win, it's important to analyze where things
12:14stand today.
12:15AI is made up of three main pillars, compute, data, and algorithms.
12:19More than one year ago, the United States was clearly ahead on all three.
12:25However, today, that is no longer the case.
12:27Statistics from China have shown that they have closed the gap, and today China is leading
12:31on data.
12:32We are tied on algorithms, but the United States remains ahead on compute.
12:35It is clear that the race is neck and neck.
12:39In order to compete more aggressively, the CCP has implemented a whole-of-country approach
12:42to accelerating its pursuit of becoming a global standard for AI.
12:47From an investment standpoint, and for the first time in history, China is benchmarking
12:51AI investment off the leading tech companies and not the United States government.
12:55Last year, China spent at least $1.2 billion on data labeling alone, compared to under
13:00$100 million spent by the United States.
13:02And as part of China's AI Plus initiative, the government established seven data labeling
13:06centers around the country to mainly support public sector application.
13:11Beyond data, while the U.S. has been stuck in a research and pilot mindset, the CCP has
13:16rapidly increased their investment in fielding AI capabilities.
13:19In the first half of 2024 alone, the PLA issued 81 contracts with large language model companies
13:25to rapidly grow their capability.
13:27To win, the U.S. needs to unleash our technology to the warfighter at an unprecedented pace.
13:33When it comes to adopting and implementing AI, the DoD has not launched a new AI program
13:37in nearly a decade.
13:38For the past four years, DoD leadership spent countless hours developing potential use cases
13:43for AI, researching and piloting AI systems, and even putting out guidance to stop users
13:48from utilizing AI.
13:50We still have time, but the window is closing.
13:52If we want to win, we must not only buy into a vision, but also take three clear and decisive
13:57actions.
13:58Number one is put the right AI foundation in place.
14:01To start, the DoD lacks the foundational pieces necessary to build, scale, and implement widespread
14:08AI solutions.
14:09This needs to change.
14:10And we must put in place the elements necessary to expand the use of AI programs.
14:14And this starts with data.
14:16To truly prioritize and execute this strategy requires two main elements:
14:21AI-ready data requirements and enterprise-wide AI data infrastructure.
14:27The U.S. government is the world's leading producer of both quantity and diverseness
14:32of data, but nearly all that data is going unused.
14:34If the U.S. wants to turn our data into an advantage, this must change.
14:39In multiple NDAAs, this committee has directed, suggested, and tried to require the DoD to
14:44prioritize AI-ready data requirements.
14:47But it's clear that more must be done.
14:50In parallel to implementing the requirement, the department should also set up enterprise-wide
14:54AI data infrastructure.
14:56This commercial best practice ensures that AI programs are developed in the most efficient
15:00and cost-effective manner.
15:02And leading tech companies have long realized this requirement for effectiveness.
15:07And for that reason, China is mirroring this same approach.
15:10Number two is to shift our mindset to implementation-first.
15:13If the U.S. is going to win, we must shift into an implementation-first mindset.
15:18In order for this to occur, Scale believes that the DoD must first set a North Star related
15:24to robust AI implementation in no more than five years.
15:27This should focus on agentic applications, such as agentic warfare, and would provide
15:31an ambitious vision and enable a tangible, multi-year plan to reach it.
15:35Scale is actively working on deploying the first instance of this at INDOPACOM and
15:39EUCOM through DIU's ThunderForge effort.
15:42Number three is to ensure our acquisition system no longer slows us down.
15:46AI is unique in that it is software but needs to be maintained like hardware, which presents
15:51challenges for the DoD given that it doesn't neatly fit into a legacy acquisition system.
15:57Congress took a strong first step by requiring the DoD to break out AI elements of programs
16:01in the future budgets, and it is critical that Congress continues to provide oversight
16:05to push the DoD to do so as quickly as possible.
16:08In addition to proposals like the FORGE Act, Scale also believes that we need to continue
16:12to look at finding ways to break through the challenges of multi-year budgeting, which
16:15is clearly still holding back the DoD's implementation of AI.
16:19With these three decisive actions, the DoD will be better positioned to adopt and effectively
16:23implement AI solutions.
16:24Thank you again for the opportunity to be here, and I look forward to your questions.
16:28Thank you very much, sir.
16:29Mr. Farris.
16:32Chairman Rounds, Ranking Member Rosen, distinguished members of the subcommittee, thank you for
16:37the opportunity to testify today.
16:39My name is Dave Farris, and I am the Head of Global Public Sector at Cohere.
16:43I previously served nearly 17 years in the Canadian Armed Forces, including deployments
16:48to Afghanistan and Ukraine, and spent the last two years of my career on the U.S. Joint
16:53Staff in the Pentagon.
16:55Cohere is a leader in building AI systems designed exclusively for government and enterprise
17:00use, prioritizing privacy, security, multilingual capability, and verifiability.
17:07Our expertise spans from building foundational AI models to developing agentic systems.
17:13We focus on operationalizing AI, integrating it into real missions under real-world constraints.
17:19We partner with allied governments, agencies, and leading global companies.
17:24Our primary goal is seamless integration, deep customization, and accessible solutions
17:29that deliver immediate, practical value and confidence.
17:33We specialize in private deployments, even air-gapped environments where we do not see
17:37our customers' data.
17:40Today I would like to highlight four key topics of focus gleaned from having worked with high-security
17:45cyber-defense government organizations.
17:48The first key topic is how AI can significantly enhance the Department of Defense's mission,
17:54particularly in cyber security and intelligence.
17:56AI systems can dramatically improve pattern recognition and anomaly detection across vast
18:02datasets.
18:03They can be invaluable for sorting through and synthesizing huge volumes of multisource
18:08information.
18:10And they can help automate a number of crucial tasks to provide early warnings and free humans
18:15to focus on making strategic decisions.
18:18Similarly, effective AI adoption requires integrating technology thoughtfully with existing
18:24workflows.
18:25Human-AI teaming is crucial, and ensuring AI tools have user-friendly interfaces helps
18:31build trust and maximizes operational value.
18:35A second key topic is to consider how AI can help fight back against competitor nations
18:39and malicious actors that are already employing AI-enabled cyber capabilities.
18:45Reports have shown these countries are automating their intrusion attempts, using AI to generate
18:50deceptive deepfakes, develop more convincing phishing lures, and create information warfare.
18:56To stay ahead of these AI-augmented threats, DoD must likewise incorporate AI across its
19:01offensive and defensive cyber operations.
19:05Large language models provide language understanding and reasoning capabilities beyond
19:09traditional rule-based machine learning systems, allowing for dynamic identification,
19:16analysis, and generation of conclusions across a wide range of use cases.
19:21The third key topic is to understand how technical considerations are critical to successful AI
19:26deployments in defense.
19:29Models should be right-sized for their specific mission.
19:31Specialized, efficient AI models can often outperform larger general-purpose systems.
19:37This enables deployment even on limited hardware, such as edge devices like laptops or classified
19:42data centers.
19:43Flexible, secure deployment architecture is critical.
19:47AI systems must be deployable across multiple secure environments and ensure AI sovereignty.
19:53Similarly, ensuring models are hardware-agnostic and interoperable so there is no lock-in to
19:58one cloud or one chip provider is essential to ensuring supply chain and operational security.
20:05Collaborative development through public-private partnerships allows for rapid customization
20:09of or creation of new AI models to meet specific operational contexts while protecting sensitive
20:16information.
20:17The DoD does not need to undertake the costly, time-consuming task of developing every AI
20:22model from scratch.
20:23The final key point is to highlight that Congress can take immediate action to accelerate responsible
20:28AI adoption.
20:30Congress should modernize procurement processes to allow innovative AI startups easier entry.
20:36Procurement should reward innovation, agility, and performance, not just size or past contracts.
20:41New legislation should promote interoperability and open standards to prevent vendor lock-in
20:46and enable diverse AI solutions to seamlessly integrate into defense ecosystems.
20:51Finally, Congress should support robust internal benchmarking and testing specific to defense
20:57applications rather than the use of generic academic benchmarks.
21:02This would ensure AI reliability and trustworthiness in critical missions.
21:07In conclusion, Cohere is committed to partnering with DoD and Congress, ensuring AI tools are
21:12secure, effective, and mission-ready.
21:15Thank you and I look forward to your questions.
21:17First of all, thank you to all of you and I appreciated your opening comments.
21:22We'll pass this back and forth a little bit with regard to questions and so forth, but
21:27we'll try to get to as many as we can in a short period of time.
21:30I wanted to begin with you, Mr. Mitre. Artificial intelligence is here to stay; it's not going away.
21:39You gave us some warning signs out there, but I wanted to hear from you.
21:44We can't slow down on the development of AI or we know that our competitors will clearly
21:50outpace us.
21:52Give me your rendition of how we do this without losing sight of the fact that there can also
22:03be some dangers involved.
22:05You've identified a number of the possible dangers, but how are we going to do this and
22:09still keep that in mind?
22:13That's a great question and I welcome it and I wholeheartedly agree that it's in America's
22:17interest to stay at the forefront of the development of generative AI and AI technologies more
22:22broadly.
22:24The way in which we can address this issue is first, it's helpful for the US government
22:28to really understand what the current state of the technology is and make sure that folks
22:33within the government, particularly those that are working in the national security
22:35community, really understand what's happening with the technology.
22:39One of the challenges with this technology is that it's not being developed by government.
22:43It's being developed by the private sector.
22:44Just understanding what the current state is, is critical so there aren't technological
22:48surprises that come out that shock people in the national security community.
22:54The second thing that government should be doing here is really looking for applications
22:58in the national security context.
23:00What are the specific use cases that can be applied?
23:03What are potential pathways to a wonder weapon, or ways in which it could be highly advantageous
23:07in a military competition?
23:09That's critical to do.
23:11That means having the AI in an environment where you've got sufficient compute, where
23:14you've got the right networks, et cetera, where you can actively experiment with it
23:18and get the technology in the hands of the operators to play around with it.
23:22The third thing is preparing for contingencies.
23:25There's a wide range of possible things that could happen, a loss of control scenario,
23:28for example, areas where there is technological surprise and the Chinese get ahead.
23:33What would the US government do in such contingencies?
23:36We should think that through in advance and have plans ready to address it.
23:40Mr. Tadros, this works right into some of the comments that you had made.
23:45I want to just, number one, I think it would be a statement we would all agree on that
23:49continuing resolutions are absolutely not the long-term plan that we need if we're going
23:55to be able to move forward with the investment in AI that we need that may very well save
24:02a lot of lives on the battlefield.
24:02I would recognize that up front.
24:04I think you were rather suggesting that a little bit in terms of our failure to keep
24:09up with the demands of how quickly AI is developing elsewhere.
24:13You also said something else.
24:14I wanted to touch on two items.
24:16Number one, you talked about the fact that we have data which is unused.
24:21I want you to explain that a little bit.
24:23Second of all, feeding into what Mr. Mitre talked about, you talked about agentic warfare.
24:32Can you talk a little bit about what that really means for the ... We've got a lot of
24:36folks out here that this may be their first introduction to the coordination of different
24:41applications that are directly involved in warfare versus the application of AI in general.
24:47First of all, data, unused, and second of all, agentic warfare.
24:51Of course, Senator, and thank you for the question.
24:55In terms of data being unused, the point I was getting at is
25:00that right now an enormous amount of information is being collected day after day.
25:05To take a quote from one of the previous secretaries of the Air Force, we treat data like exhaust
25:10as opposed to something that's really critical to use.
25:14As a result, every time that we run a command post exercise, large
25:18amounts of chat data are being generated and passed back
25:22and forth, and what's happening is at the end of that exercise, all those hard drives are
25:27just being purged or neglected and go into storage.
25:31Those are instances where the interactions between participants of a staff, for example,
25:37should be getting captured, and we should be using that to help develop training data,
25:42using it to help develop benchmarks against how these algorithms should operate.
25:46And then by doing so, our eventual development of agentic solutions can be more in line with
25:52what is required by those end users, which I think then brings us into the idea of like
25:57agentic warfare and really what that means.
25:59My interpretation of this is that we're trying to move
26:05from a position where humans are in the loop to humans on the loop.
26:09So right now, if a staff at INDOPACOM or at EUCOM or any other combatant command needs
26:14to make a decision, the process at which they do that hasn't really changed since the advent
26:20of the Napoleonic staff structure.
26:22We take the problem, we divide it up, and then what's required is that the commander
26:25at the last minute has to synthesize all of those things together and then make an
26:30informed decision.
26:32The effort of agentic warfare is to move to the point where much of that low-level staff
26:38work can be done by these AI agents through automated methods, with human oversight and
26:44supervision of the process.
26:45It's important to maintain human oversight of the entire process
26:49to ensure that human context, judgment, and the competitive advantage of the U.S. military,
26:55which is the fact that we have the most well-trained, well-versed staff and NCOs on the globe.
27:03Mr. Farris, I've got some questions for you as well, but my first five minutes is up.
27:06We will do a second round, but at this point I'll come back to Senator Rosen.
27:14I want to talk a little about guardrails and benchmarks, both.
27:18I believe they go hand in hand.
27:20Over the last year, discussions between Congress, prior administrations, have always centered
27:26around trying to come up with guardrails to promote responsible AI.
27:30You all know what I'm talking about.
27:32Nobody wants it to become an unchecked technology.
27:35The current administration has raised concerns that guardrails might inhibit innovation.
27:40I believe we need both effective guardrails and benchmarks because the benchmarks, just
27:45as if your child goes to school, they're the test to show if they're learning and going
27:50in the direction that you're maybe expecting them to go.
27:55That's what's going to keep that circle in check.
27:58I'm going to have questions for all three of you.
28:00They're similar, but I'm going to start with you, Mr. Mitre.
28:05How should we develop guidelines or the guardrails and benchmarks in ways that mitigate risk
28:11without stifling innovation?
28:13I might also add, I'm actually going to ask all three of you this.
28:16How do we develop, for those of us sitting in this seat, with all of you, a common policy
28:25language that is both nimble but provides the availability for us to do effective oversight?
28:32I'll start with you.
28:35Thank you, Senator.
28:36I wholeheartedly agree that it's important for us to understand what these models are
28:40capable of doing.
28:42They're developed and they're released into the world with no user manual.
28:46It's not entirely clear what applications they'll be able to perform or how capable
28:50they'll be at doing that.
28:51Benchmarks are crucial, particularly in a national security context.
28:55It's helpful to understand what might the latest generation model be able to do in terms
28:59of offensive cyber, defensive cyber capabilities, in terms of potentially informing non-experts
29:05on how they go about designing a bioweapon that could be highly transmissible and lethal,
29:09et cetera.
29:10I think the real focus that is warranted is on developing benchmarks to really just
29:16evaluate and understand what the risks are.
29:19It's a separate question what government should do about those risks if they emerge,
29:25and whether regulations or something along those lines would be appropriate.
29:27In that regard, I defer to government for specific thoughts on that.
29:31What we're trying to do is just understand, at first pass, what are some of the risks
29:35here and make sure that people are well-informed on that point.
29:39I'll just go down.
29:40Mr. Tadros, the same thing.
29:42Developing the guardrails and the benchmarks: the benchmarks tell us one thing, the guardrails tell us
29:46another.
29:47I guess I'll make it all the same question.
29:49We are going to struggle.
29:50We have to put this down in some way on paper that allows us to be nimble and provide that
29:58ability to do the oversight we need to.
30:02If you have thoughts about how we develop this common language that we can all speak
30:09from or start from, I think it's really critical.
30:13Absolutely.
30:15The way that our company looks at this, at least as it relates to guardrails in the implementation
30:20of AI in the Department of Defense, is to really look at it from the perspective of
30:24people, process, and technology.
30:26While the technology needs to have guardrails by itself in terms of its responses when it
30:31will trigger a refusal or when it may not, there still needs to be the other two portions
30:36of this triangle.
30:38People need to be trained on how to best leverage the capability, and then the process needs
30:41to be adapted because if we just bolt AI onto an existing process, then the advantages are
30:46somewhat lost.
30:48The doctrine and training of the individuals needs to adapt at the same time as the technology
30:53is fielded.
30:54This goes back to my position about implementation.
30:56The only way to do this is to experiment in low-risk environments and to iterate very
31:01quickly.
31:02Short of that, I'm afraid that trying to write out the full answer
31:08at the beginning of the test is probably not realistic.
31:11You need to be able to learn from doing and be able to build off of that.
31:16As it relates to benchmarks, this is an area where our company has done quite a bit of
31:20interesting work.
31:22We have a paper that we've published showing that most of these large language models and
31:26AI systems will essentially cheat off of existing benchmarks.
31:31They've seen them.
31:32They understand the rules of the test.
31:33As a result, they will score abnormally high.
31:36The approach that we've taken in partnership with organizations like CSIS and the CDAO
31:41is to build custom benchmarks that are focused on the domain in which it actually matters
31:46to test.
31:47We've built these custom benchmarks.
31:49The algorithms have never seen them.
31:50They've never been incorporated in their training data.
31:52As a result, you can have a little bit more faith in the performance of those algorithms.
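The contamination problem described here, models scoring abnormally high because they have already seen the public test items, is why held-out custom benchmarks matter. A minimal sketch of the idea follows; the model stub, the eval items, and the exact-match scoring are hypothetical illustrations, not any vendor's actual pipeline:

```python
# Toy held-out benchmark evaluation. A real pipeline would call an
# actual model API and keep eval items out of every training corpus.

def stub_model(prompt: str) -> str:
    """Placeholder for a model under test; always answers 'unknown'."""
    return "unknown"

def evaluate(model, benchmark) -> float:
    """Score a model on a benchmark: fraction of exact-match answers."""
    correct = sum(
        1 for item in benchmark
        if model(item["prompt"]).strip() == item["answer"]
    )
    return correct / len(benchmark)

# Held-out items: never published and never in training data, so a
# high score reflects capability rather than memorization.
held_out_benchmark = [
    {"prompt": "Classify this SAR return: ...", "answer": "armored vehicle"},
    {"prompt": "Identify the emitter type: ...", "answer": "radar station"},
]

score = evaluate(stub_model, held_out_benchmark)
print(f"held-out accuracy: {score:.2f}")
```

The same `evaluate` function run against a public benchmark that leaked into training data would overstate capability, which is the "cheating" effect the testimony describes.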
31:58Mr. Pearce.
32:00Thank you, Senator.
32:01I echo the sentiment of my colleague on the panel here.
32:04I think public benchmarks can often be gamed.
32:07I'll start from the perspective of benchmarks because I think it's relevant to what my colleague
32:11was saying.
32:12They don't typically show the performance in real-world context.
32:17So we would-
32:17Is using the word audit then a better word than benchmark for our-
32:20Well, no, I think we would say creating custom benchmarks-
32:24It's like right-sizing your model.
32:27Yeah, exactly.
32:29To take that down one step further, we work very closely with our customers from beginning
32:34to end in order to ensure that we're right-sizing that model, developing the benchmarks.
32:39That also includes some human evaluations, because that human-AI interface is obviously
32:46imperative as we're moving down this path.
32:48With respect to guardrails, there's this healthy tension between accountability and agility,
32:56I would say, in this environment.
32:58And so right now, we obviously would suggest that we want to lean into the agility.
33:03We want to take an adoption mindset but can't sacrifice really the security, reliability,
33:13and verifiability.
33:14So ensuring that you have clear visibility into the data lineage, and ensuring that you have
33:21a good understanding of how those safety measures have been built into the model during
33:26development and deployment, I think is imperative.
33:28Well, I think because you say you want to lean into- Oops, I've gone over my time.
33:33I'm sorry.
33:34Can I finish the thought?
33:35Lean into the agility.
33:39But if you don't keep humans, if you don't keep someone else in the loop, people's lives
33:43are on the line and it's still a computer just analyzing data.
33:47And so at that execution point, you have to consider leaning into agility but at what
33:54execution points do we allow for a better decision?
34:00And I'll let it go to my...
34:01Maybe that's a philosophical question.
34:04Well, look here, and I'm going to lead into this a little bit too and I'm going to start
34:09with Mr. Ferris.
34:11We talked about right-sizing systems and kind of along the same line here.
34:17I'm going to compare that because I'm not sure if I'm thinking the same thing that you're
34:20proposing, but loitering munitions as an example, we have clear evidence that in the
34:27Nagorno-Karabakh war between Azerbaijan and Armenia, loitering munitions were used.
34:34They were able to, as basically unmanned aerial vehicles, they moved into a particular kill
34:41box, identified targets that were there, and then without a human in the loop, they were
34:49able to identify the types of systems that were there, whether it was a tank and an armored
34:56personnel carrier, a command center, a radar station, aircraft, and so forth.
35:03But because they had that capability, they could then choose which weapon system based
35:06upon which drone was there in the area and at an appropriate time, attack each of them.
35:16Is that the type of...
35:18Can you talk about, is that what you mean when you say right-sizing in terms of having
35:21the capability for that particular mission set?
35:25Or share with me what you mean by that.
35:27Yeah, thank you, Senator.
35:30In that context, I think when we talk about right-sizing the model, we're talking about
35:33making sure we're bringing the appropriate solution to the use case.
35:39So to use your example, we would be looking at how the models are used to analyze
35:46all of that multi-source information that's coming into
35:51the system from various sources, and potentially from different sensors and systems.
36:00I think what's important is that we would suggest that by analyzing, using artificial
36:06intelligence to analyze all of that data, it allows you to elevate the level at which
36:11a human can make that decision.
36:13We would still suggest that the human-AI interface is important, and that should be maintained
36:20during these types of operations, but really what AI allows you to do is to elevate that
36:25decision and make it closer to when it needs to be taken potentially.
36:33You're following right into what my next question was going to be, and that is with regard to...
36:37And I'm going to run this all the way down the line again, but I want to talk a little
36:40bit about humans in the loop, humans on the loop, and humans over the loop, and defining
36:46each of them, if you would, in terms of where we're at today and where we're going to be
36:50tomorrow.
36:51And I'm going to talk about it in both offensive and defensive capabilities.
36:56And the example that I would use, if you could build upon, is we have systems right now that
37:03for defensive capabilities, we arm them.
37:07But once they've been armed, they can automate to protect our platforms.
37:14And that means if you have incoming missiles, particularly if you're talking less than a
37:20minute to respond, to be able to identify a missile incoming, such as what we've seen
37:25in the Red Sea region with regard to Houthis attacking our systems, but to be able to identify
37:31it, identify the type of weapon system necessary to take it out, and then to be able to execute,
37:37and then to have backups along with it.
37:41How far along are we, and what will AI do with regard to having that, whether there's
37:47a human directly in the loop of making that decision, or on the loop having armed it,
37:54or over the top of the loop not engaged at all?
37:57I'd like your thoughts.
37:58And I'm going to ask our other two members here as well for their thoughts.
38:03Yeah, thank you, Senator.
38:05So obviously, I would say that currently we're seeing AI deployed in an environment with
38:14humans in the loop, as you described, and on the loop, where there's some oversight.
38:21But certainly, I don't think we're yet at that over the loop, where they're elevated
38:26outside of the analysis and execution of the mission set, if you will.
38:33But certainly, as agentic AI becomes more advanced, and the models improve and become
38:41more precise and relevant, which is happening at an incredible pace, I would say we'd be
38:46able to see some of that.
38:47But again, our position at Cohere would be that we want to work together.
38:51Because we deploy models with our customers in their environments,
38:57we would suggest that integration on the front end with the customer and with our
39:02partners, partnership in development and deployment, and ultimately in the decisions
39:08about how those guardrails are put in place, is important on the front end
39:13for really understanding where in that loop it's necessary to have the human placed.
39:19Mr. Tadros?
39:22So the way that I would look at this is that with a human in the loop, what you're sacrificing
39:29is speed in exchange for the oversight required to render the right decision.
39:37In those cases, I think in, on, or over the loop, it really comes down to the use case
39:41and the speed at which you have to make the decision.
39:44So if the use case is defensive in nature, similar to a CIWS or an Aegis
39:49cruiser, where if certain triggers are hit you default to the machine's knowledge, because
39:54the speed at which things are changing is so great that you can no longer support the
39:58decision-making process.
40:00What it comes down to is that that's a heuristic-based system with very
40:04clear triggers.
40:05To be able to implement that same type of approach with AI would require a certain amount
40:10of evaluation of those systems.
40:12So going back to the benchmarking question from earlier, it would also require having
40:16a data infrastructure layer in place to be able to retrain those models effectively when
40:22the environment changes significantly.
40:24And as a result of doing that, you can ensure that this rapid iteration of retraining and
40:28test and evaluation can occur that would still provide the commander the opportunity to make
40:33that informed decision about if the staff needs to be in, on, or over the loop.
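The trade-off described above, defaulting to machine speed only when the reaction window is too short for human approval, can be sketched as a simple policy function. The mode names follow the in/on/over-the-loop framing used in the hearing; the numeric thresholds and the rule that offensive actions always keep a human in the loop are hypothetical assumptions for illustration only:

```python
from enum import Enum

class OversightMode(Enum):
    IN_THE_LOOP = "human approves each action"
    ON_THE_LOOP = "human supervises and can intervene"
    OVER_THE_LOOP = "human reviews after the fact"

def select_mode(seconds_to_react: float, defensive: bool,
                approval_latency_s: float = 30.0) -> OversightMode:
    """Pick an oversight mode from the available reaction time.

    Hypothetical policy: if a human cannot approve in time (the
    reaction window is shorter than the approval latency) and the
    use case is defensive, default toward machine speed, as with
    CIWS-style heuristic-triggered systems.
    """
    if seconds_to_react >= approval_latency_s:
        return OversightMode.IN_THE_LOOP
    if defensive:
        # Under ~5 seconds, even supervision is too slow to matter.
        if seconds_to_react < 5:
            return OversightMode.OVER_THE_LOOP
        return OversightMode.ON_THE_LOOP
    # Assumed rule: offensive actions keep a human in the loop.
    return OversightMode.IN_THE_LOOP

print(select_mode(120, defensive=False))  # deliberate planning
print(select_mode(2, defensive=True))     # incoming-missile defense
```

The point of the sketch is that the oversight mode is itself a commander's decision made ahead of time, informed by testing, rather than something the system chooses on its own.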
40:39Mr. Meiter?
40:40I apologize.
40:41Am I saying your name correctly?
40:42Is it Meiter?
40:43Meitri.
40:44Meitri.
40:45Meitri's fine too, though.
40:46We get it all the time.
40:47Not a problem.
40:49Yeah, no worries, Senator.
40:51On this point, I think fundamentally what the Department of Defense is looking for are
40:55weapon systems and military systems more broadly that are effective.
40:58And so the question is, what is effective in a particular use case, in a particular
41:01context?
41:02Now certainly as the technology progresses, there are more opportunities to use it in
41:06different ways.
41:07And along with that can come greater dependence on the technology.
41:10And with greater dependence, you potentially open up new vulnerabilities and new risks
41:13associated with that.
41:15So it's incredibly important to understand what are ways in which it could go sideways?
41:19What are some of the vulnerabilities there when you're integrating in a broader weapon
41:22system where it might act in ways that are inconsistent with human intentions?
41:27And do you have the right safeguards put in place to guard against those cases?
41:31Are there kill switches that might be necessary?
41:33Are there ways in which you're dealing with a model that's breaking out of the box and
41:37engaging more with the cyber world?
41:40Are you able to cut it off from certain applications if you need to?
41:43I think it's helpful for the Department to think through the wide range of potential
41:47applications here and then make sure that it's thought through how you ensure effectiveness
41:53despite different ways in which the model could react in a particular context.
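The safeguards raised here, kill switches and the ability to cut a model off from certain applications, can be sketched as a capability gate that every tool call passes through. This is an illustrative pattern, not a description of any deployed system; the class and tool names are invented:

```python
# Hypothetical capability gate: an AI agent's tool calls pass through
# an allowlist plus an operator-controlled kill switch.

class CapabilityGate:
    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)
        self.killed = False

    def kill(self):
        """Operator kill switch: block all further tool calls."""
        self.killed = True

    def call(self, tool_name, tool_fn, *args):
        """Run a tool only if it is allowlisted and the gate is live."""
        if self.killed:
            raise PermissionError("kill switch engaged")
        if tool_name not in self.allowed:
            raise PermissionError(f"tool '{tool_name}' not on allowlist")
        return tool_fn(*args)

gate = CapabilityGate(allowed_tools={"search"})
print(gate.call("search", lambda q: f"results for {q}", "threat report"))
gate.kill()
# Any further gate.call(...) now raises PermissionError.
```

The design choice this illustrates is that the cutoff lives outside the model: even if the model behaves inconsistently with human intentions, its access to applications is revocable at a layer it cannot modify.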
41:58Senator Rosen.
41:59I want to talk about energy limitations, but I'm not going to ask this as a question.
42:03I'm just going to make this as a general statement philosophically.
42:07Because if we move to no humans in the loop, why not just create a grand video game and
42:11save lives?
42:12Because at the end of the day, if it's AI making the choice, there are still people
42:19on the ground, all of us, not just men and women in the military, but the rest of us
42:25that live in the world that the computer may or may not really care too much about.
42:31So it's a bigger philosophical question as we move forward, not expecting it to be answered
42:36here.
42:37But in a way, we have to be sure that we think about that because for every action that these
42:42computers might take to each other, theirs versus ours, the fallout happens to us living
42:49here on earth.
42:50That's all I'm going to say.
42:51But we've got to speak about living here on earth.
42:54We've got AI energy limitations.
42:56A lot of data centers in Nevada. Let me tell you, there's an increasing demand for energy;
43:01they just gobble it up.
43:03And it's a hardware problem, software problem, and it's largely based, of course, on the
43:09current architectures that we have.
43:11Like I said, with Nevada's dry weather and our vast open spaces, we have really become a
43:15national leader in data storage centers.
43:18Our companies are constantly innovating.
43:21But we know that the growing use of all this is going to create great energy burdens on
43:25our commercial, our government data centers.
43:28And so I guess we'll go this way.
43:30We'll start with Mr. Ferris.
43:32How do we address this challenge?
43:34Do you see it as a barrier to more widespread DOD and government adoption?
43:40And what research, what should we be investing in, to try to reduce that great energy
43:47suck, as it were?
43:49It's going to take everything it can, right?
43:52Thank you, Senator.
43:55So for Cohere, this is actually fundamental to our company.
44:00We build custom models designed to be efficient and deployable in the environments
44:09that our clients and customers are working in.
44:11So in pursuit of that efficiency, a couple of things, one, we're chip agnostic and cloud
44:17agnostic.
44:18So that also means we've had to focus on building our models in somewhat of a resource-
44:25constrained environment.
44:26So we've built-
44:27What if you put it on tanks?
44:28You've got heat.
44:29You can't, you have to be sure that they adapt in heat environments and they're going to
44:32generate energy, right?
44:34Absolutely, Senator.
44:35So, but we've built some of these models to be deployed on as small as two GPUs or even,
44:40you know, we're pushing towards edge deployments in laptops.
44:43So being able to bring down that energy cost, but also the infrastructure as a whole, and
44:51then even it has implications broadly speaking into the supply chain as well.
44:55Thermodynamics.
44:58What can we do about all the energy we need to do all of this and then make it portable?
45:04Yes, ma'am.
45:06So the way that I kind of look at this is as these technologies start to be fielded,
45:11there's always an interest in the Department of Defense in being able to operate
45:14in a disconnected environment.
45:16So what that requirement is going to come along with is fine-tuned
45:20smaller models that can interact together, which is similar to the approach that we're
45:23taking with INDOPACOM and EUCOM for agentic warfare.
45:27So what this really results in is a lower power requirement, because back at home
45:32station, while we've been doing the development and training, we're able to tune these models
45:37using very specific data sets.
45:38So the individual models are very good at a specific thing.
45:41They've been tested and evaluated.
45:43And then the interaction between those models is what can be fielded at the edge.
45:47So that minimizes the energy requirements as these things begin to get fielded and proliferated.
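The architecture sketched above, several small fine-tuned specialists interacting at the edge instead of one large model, might look like the following toy router. The specialist stubs and the keyword-based dispatch are hypothetical simplifications; a real system would use a learned or policy-driven router:

```python
# Hypothetical edge deployment: small domain-tuned models cooperating,
# with a simple keyword router standing in for a learned dispatcher.

def imagery_model(query: str) -> str:
    return "imagery-model: classified contact as surface vessel"

def logistics_model(query: str) -> str:
    return "logistics-model: resupply feasible within 48 hours"

SPECIALISTS = {
    "imagery": imagery_model,
    "logistics": logistics_model,
}

def route(query: str) -> str:
    """Send a query to the specialist whose domain keyword it mentions."""
    for domain, model in SPECIALISTS.items():
        if domain in query.lower():
            return model(query)
    return "no specialist available; escalate to operator"

print(route("Analyze this imagery frame"))
print(route("Plan logistics for the task group"))
```

Each specialist can be small enough to run on limited hardware, which is what drives the lower power requirement at the edge: the heavy training and tuning happen back at home station.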
45:53Mr. Mitri?
45:54The only thing I'll add is that it's important to think about the entire tech stack, to include
46:00power, not just the data layer and compute layer and then the models themselves and certain
46:05applications.
46:06So you're right to think holistically that power is a big part of that.
46:09And certainly there are ways to find smaller, more efficient models that you could deploy
46:14abroad along the lines of what the other panelists said.
46:17And it's worth the department looking at that aggressively.
46:23Same question for all of you now.
46:25You all approached your work with the Department of Defense probably in different ways.
46:31But my question is, what can the Department of Defense do with regard to their policy,
46:37acquisition policies, the way that they treat contractors?
46:42What can they do to enhance their ability to take advantage of the private sector's
46:47capabilities that they're not doing today?
48:51Mr. Ferris?
46:53Thank you, Senator.
46:54The first thing we'd say is we believe that the department needs to have an adoption mindset.
46:59We've seen a really good shift, the software acquisition pathway and the use of other transaction
47:06authorities from an acquisition perspective.
47:09There's some really great strides in acquisition.
47:13I would offer using existing mechanisms.
47:17I'm an advocate for the simplified acquisition threshold having a provision similar
47:25to what we have currently.
47:26The simplified acquisition threshold is $250,000; a contracting officer can buy anything
47:33under that without a competitive process.
47:35There's a provision for contingency operations, or cyber defense and CBRN defense, where that
47:41simplified acquisition threshold is raised because of urgent operational requirements.
47:46And I think, similarly, we could have an approach in procurement where, for urgent artificial
47:52intelligence operational requirements, the simplified acquisition threshold could be raised
47:58as well.
47:59What that would do is it would shift the burden away from the DIUs and DARPAs and organizations
48:05like that that are well-versed in using OTAs and allow contracting officers and project
48:10managers at much lower levels in the department to execute and acquire these types of capabilities.
48:18Mr. Tadros?
48:19Thank you, Senator.
48:22So when I think about making it easier to acquire this technology, I tend to actually
48:28go back to the AI infrastructure standpoint.
48:31The reason for that is it actually reduces the barrier of entry of companies to come
48:36in.
48:37If they're able to operate off of a central data repository, then that company's pathway
48:42to being able to create relevant technology for the Department of Defense is considerably
48:47easier than one of the legacies that have been in that space for a while and may have
48:51troves of data that they've saved over 20 years of conflict.
48:55Mr. Mitri?
48:57I agree with the panelists on everything that relates to narrow AI or AI that exists today.
49:03What I think is principally lacking from the department's approach to the issue is anticipating
49:07where AI might be in a couple of years' time and really working closely with the technologists
49:12that are at the forefront of developing generative AI and frontier AI models to get their head
49:17around what that world might look like.
49:19So there's a lot of attention rightfully put towards maintaining our lead in the development
49:24of technology itself, to better promote its development, to better protect our lead through
49:28export controls and AI security and things of that nature.
49:32But how well does the department really understand what capabilities it may unearth in the next
49:37two, three, four, five years?
49:39I don't know.
49:41And what that means for the future character of warfare, that's crucially important, especially
49:45as the department now embarks on developing a new defense strategy.
49:49One last question for all of you, and you don't have to spend a lot of time on this,
49:52but is there a place somewhere, a safe space, so to speak, where industry and DoD can actually
50:02interface and ask questions of one another, offer ideas, offer products, and so forth
50:08that is ongoing, or is it a case-by-case basis?
50:13In other words, if industry has a particular product that they think would be great in
50:20its application within DoD, do they know where to go to get it?
50:23And DoD, on the other hand, do they have a place where they can go and ask the questions
50:27about what do you have that can help us fix this problem?
50:31Does that exist today?
50:33Don't everybody speak at once.
50:37Not in a structured and systematic way.
50:39I think it happens in ad hoc cases here and there, but not in a coherent approach to really
50:44have a tight public-private partnership, if you will, to really understand where we are
50:49in the development of AI technologies relative to key competitors, like the Chinese in particular.
50:55What are things that we need to be doing to make sure that America maintains that lead?
50:59DeepSeek's a great example here, where surprises like that can come out, and people wonder
51:03what does that mean in terms of where we are.
51:05I don't think we have that kind of environment to enable that constant flow of communication,
51:09especially when in a cleared environment, where you can have more sensitive conversations
51:13with key experts in terms of what's happening with this technology and what the U.S. government
51:17needs to be doing in partnership with the private sector to maintain America's lead.
51:22Any other thoughts?
51:23Yes, Senator.
51:24So I think the closest thing I've seen to that existing is Project Maven, where the effort
51:32behind that was to bring technology into the Department of Defense in a very aggressive
51:36manner.
51:37And because they took that approach, and because you had a single program that was well-funded,
51:41well-organized, and manned by the right individuals, what you ended up with was a situation in which
51:48they were seeking to find as many technology experts as they could, bring them, and figure
51:53out ways to get them into the department to satisfy a mission requirement that was set
51:57forth.
51:59Mr. Ferris, anything?
52:00I'll just add, to echo that, that it is very ad hoc and unstructured.
52:06However, I think that's precisely why people like us end up staying in
52:11these types of companies and working in them for as long as we do, because it's
52:15important to know those pathways, to know those venues in which these conversations unfold,
52:21and how to get in front of the government customers as quickly
52:26as possible, especially when you do think you have something that can support
52:31the mission.
52:32So at this point, it's experience for some of us, knowing where we can
52:36find that opening and get in front of the department.
52:40Senator Rosen.
52:41I have one last question.
52:42I think, for those of you who don't know, MAVEN means know it all in Yiddish.
52:49I should say we should have the MAVEN marketplace.
52:51How about that?
52:52There you go.
52:54Maybe that solves what you need.
52:56And anyway, but what I want to talk about and just finish up with, we can't do any of
53:00this without building our AI workforce, and that is something that Congress can help invest
53:07and promote, and we can only go as far as we are willing to invest in all of that.
53:12And it's just so very important.
53:15So, for all of you, as we just finish up in our last few minutes, the workforce issues
53:21that you see in adoption of AI, what do we need to do to grow, well, coders, engineers,
53:28all of the things that we have to do to build out this robust workforce, because these are
53:32the kinds of things that Congress does work on and does fund, what advice would you give
53:37to us?
53:38No one starts in the center.
53:39We started on the ends.
53:40We'll start with you.
53:41And I think it's a good way.
53:43That's something that is in our wheelhouse.
53:45And work on that MAVEN marketplace, will you?
53:47There you go.
53:48I'm going to trademark that name.
53:49You heard it here first.
53:50Absolutely, Senator.
53:51So, I can say that I'm actually very, very proud of the work that we're doing in St.
53:56Louis.
53:57So, basically, what we're doing is taking individuals who would normally not participate
54:00in national defense and giving them an opportunity to support data development and AI development
54:05in the St. Louis community.
54:06So, in some cases, what we've done is taken individuals off the Five Guys fry line, trained
54:11them on how to look at electro-optical imagery, gotten to the point through training that
54:14they are then able to look at synthetic aperture radar, get them to the point where they have
54:18a clearance, and then even elevate them even further so that they're able to pass certain
54:21imagery tests.
54:22So, community college certificate programs to bring people just into the workforce, as
54:28we say, even things like that, right?
54:30Yes, ma'am.
54:31And give them an opportunity to kind of participate in that national defense.
54:34This is an area where scale believes very strongly in kind of elevating this workforce
54:38in order to support the needs of the national defense in this space.
54:42Yeah.
54:43Perfect.
54:44Mr. Ferris.
54:45Thank you, Senator.
54:46I agree.
54:47I mean, I think what we would say is that the public-private partnership we try to build
54:52is extremely important, and workforce development is critical as part of the body
54:56of work that the department and really the government needs to undertake to achieve the
55:01advancement in AI that we're hoping for.
55:03But within the company, we do partner with educational institutions and within the community,
55:09and we're searching for ways to continue to grow that workforce.
55:12I do think it's a collaborative process that we need to undertake with the government and work
55:18in concert on, because from a Cohere perspective, in terms of our
55:25deployment and how we work with our customers, it's really early on, so we want to make sure
55:28that we're contributing to the workforce development in a way that's meaningful for the department
55:33as time goes on.
55:34Right.
55:35Mr. Mitri.
55:36This is not exactly my area of expertise, but in my experience, there's no more compelling
55:41reason to go work in government than for the mission.
55:43So emphasizing that as the key ability to attract top technical talent I think is crucial,
55:49as is giving them opportunities to develop their skills.
55:52And that requires actually having the right compute infrastructure and networking analytic
55:56tools available so that they can grow and develop their skill set while in government.
56:01That's often a challenge to bring together.
56:04But there's a broader point than just the technical talent, the AI talent skill set
56:07here as well.
56:08Given advances in AI, it's going to impact all elements of the workforce.
56:13And so what we're seeing in the private sector right now, by way of analogy, is those companies
56:17that are better leveraging AI are out-competing companies that don't have it.
56:22And so I think that's likely what we could see in the military context too.
56:26Those militaries that are fully embracing and applying it across a range of applications
56:31are going to be at a significant advantage relative to those militaries that aren't.
56:36And so I would think a little bit more holistically on the workforce dynamics here.
56:40Appreciate it.
56:41Well, with that, let me take the opportunity to thank all three of our presenters here
56:46today.
56:46Mr. Jim Mitri, Vice President and Director, RAND Global and Emerging Risks.
56:51Mr. David Ferris, Global Head of Public Sector, Cohere.
56:55And Mr. Dan Tadros, Head of Public Sector, Scale AI.
56:59We thank you for participating in this open discussion today.
57:03It's been very, very helpful.
57:04And my thanks also to my Vice Chair, Senator Rosen, for participating today as well.
57:09We appreciate that.
57:10And unless you have any closing comments?
57:13Thank you for being here.
57:14Thank you for your work.
57:15And I look forward to continuing to work with you and the ideas you have.
57:20And with that, this subcommittee hearing of the Cyber Subcommittee is now closed.