Leaders such as Army Col. Jarrett Mathews, acquisition director of the SOCOM task force over the “hyper-enabled operator,” seek new tech to give even individuals the assets they need to see, sense, act and react to ever-changing conditions on the ground.
Mathews showed the audience here at the Global SOF Foundation’s SOF Week not a bearded, muscled operator kicking in doors and shooting but a business suit-clad “operator” navigating the streets of a foreign nation, deciphering spoken language, signage and even graffiti to suss out threats while running their mission.
Prompted by an audience question, Mathews outlined a no-limits picture of what he’d love to put in that operator’s hands.
“I would like a fully-capable, human-machine teaming with an information system that had access to the whole of the Internet,” Mathews said.
The colonel isn’t delusional; he knows that technology isn’t here yet. But Mathews and his team are looking to industry to make it a reality.
“We want these operators to be super users of their environment,” Mathews said.
The concept went public nearly three years ago, Defense News sister publication C4ISRNET previously reported. Since then, the team that Mathews now leads has advanced the language processing capabilities of its voice-to-voice program and started work on translating text via smartphone photo capture and eventually through other devices.
The voice-to-voice program is currently deployed in two undisclosed theaters of operation, Mathews told the crowd. And they’re working now to add languages to the software.
The team has also begun development on an augmented reality piece for viewing the environment with layers of data.
And he’s got some proof that they’re on the right path. As part of their program, the team set out to create a secure capability that can operate without Internet access called “Voice to Voice Language Translation.” The translation allows the user to speak into a smartphone and the software will translate that speech into the desired language and “speak” it aloud.
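SOCOM has not published how the software works, but the flow Mathews described — speak into a phone, translate on the device with no Internet connection, then speak the result aloud — can be sketched as a three-stage pipeline. Everything below is an illustrative assumption: the stub recognizer and synthesizer and the tiny phrase table stand in for the real on-device models.

```python
# Hypothetical sketch of an offline voice-to-voice translation pipeline.
# The actual SOCOM software is not public; these stages and the phrase
# table are illustrative assumptions only.

PHRASE_TABLE = {  # tiny on-device table standing in for a real translation model
    ("en", "es"): {
        "where is the market": "dónde está el mercado",
        "thank you": "gracias",
    },
}

def recognize_speech(audio: bytes) -> str:
    """Stub ASR: a real system would run an on-device acoustic model."""
    return audio.decode("utf-8")  # pretend the audio is already transcribed

def translate(text: str, src: str, dst: str) -> str:
    """Look up the utterance in the offline phrase table; pass through misses."""
    return PHRASE_TABLE[(src, dst)].get(text.lower(), text)

def synthesize(text: str) -> bytes:
    """Stub TTS: a real system would speak this aloud on the device."""
    return text.encode("utf-8")

def voice_to_voice(audio: bytes, src: str = "en", dst: str = "es") -> bytes:
    """Speech in, translated speech out, with no network round trip."""
    return synthesize(translate(recognize_speech(audio), src, dst))
```

The point of the structure is that every stage runs locally, which is what distinguishes the capability from cloud services like Google Translate.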
The team took a calculated risk in trying to demonstrate the software earlier in the day following the morning keynote address. It was clunky, not exactly translating word for word, and required some tech support – a signal that more work is needed.
But Mathews later noted that work with the Stanford Research Institute has allowed base-level translation on smart devices disconnected from the Internet with a higher quality than Google Translate.
The next phase, Visual Environment Translation, in which a camera deciphers text, even graffiti, is still in its nascent stages.
Using augmented reality technology, the team also looks to get past tourist-like smartphone photographing, which can draw attention. Instead, they would embed these features into something more inconspicuous, such as a Google Glass-like device that a user could wear, Mathews said.
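The visual piece amounts to detecting text regions in a camera frame, translating each one, and rendering the translation in place on a heads-up display. A minimal sketch of that idea, with a stub OCR step and a toy dictionary standing in for the real on-device models (all names here are assumptions, not the program's actual design):

```python
# Hypothetical sketch of visual environment translation: find text in a frame,
# translate it, and return overlay labels positioned where the original text
# appeared, ready for an augmented-reality display.

from typing import NamedTuple

class TextRegion(NamedTuple):
    x: int       # pixel position of the detected text in the frame
    y: int
    text: str

PHRASES = {"salida": "exit", "peligro": "danger"}  # toy offline dictionary

def ocr(frame) -> list[TextRegion]:
    """Stub OCR: a real system would detect signage, labels, even graffiti."""
    return frame  # pretend the frame is already a list of detected regions

def translate_overlay(frame) -> list[TextRegion]:
    """Swap each detected string for its translation at the same spot."""
    return [
        TextRegion(r.x, r.y, PHRASES.get(r.text.lower(), r.text))
        for r in ocr(frame)
    ]
```

Keeping the original coordinates is what lets a wearable display paint the translation over the sign itself rather than in a separate window.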
For the operator working in an austere area with little or unsecured access to cloud computing, Mathews and his staff are pushing the boundaries of “edge computing” through a combination of radio frequency sensors and secure video/imagery “pipelines” that piggyback on existing WiFi and Bluetooth networks.
Working as a kind of second brain for the user, Mathews’ office is developing an “automate the analyst” program. The program aims to give users not only those voice and text options but also feeds from mapping software, social media and other sources, for a clear picture of what’s happening around them.
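At its simplest, that kind of fusion means merging several time-stamped feeds into one chronological picture. A minimal sketch under stated assumptions — the feed names and event shape below are invented for illustration, not the program's actual design:

```python
# Hypothetical sketch of multi-feed fusion: merge events from mapping,
# social-media and other sources into one time-ordered common picture.

import heapq
from typing import Iterable

def fuse_feeds(*feeds: Iterable[tuple[float, str, str]]) -> list[tuple[float, str, str]]:
    """Merge (timestamp, source, event) streams into one chronological view.
    Each feed is assumed to be already sorted by timestamp."""
    return list(heapq.merge(*feeds))

mapping = [(1.0, "map", "checkpoint ahead"), (4.0, "map", "road closed")]
social = [(2.0, "social", "crowd forming near market")]
picture = fuse_feeds(mapping, social)
# picture interleaves the feeds by time: checkpoint, crowd report, road closure
```

A real analyst-automation system would add filtering and prioritization on top, but the merge step captures the core idea of one fused view instead of several separate screens.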
The goal is to have all of that without the ever-present hovering “eye-in-the-sky” drones that may not be an option on some of these very small footprint missions.
Todd South has written about crime, courts, government and the military for multiple publications since 2004 and was named a 2014 Pulitzer finalist for a co-written project on witness intimidation. Todd is a Marine veteran of the Iraq War.