
Air Force Uses Gaming Technology to Train Drone Analysts

"We’re always trying to get things as natural as possible as much as you can wearing a black suit."

How do you survive countless falls, hundreds of bullet wounds and being "blown up thousands of times"?

Let your avatar do the work for you.

The Air Force is using avatar programs to train drone analysts (Screenshot: Elizabeth Kreft)

Max Grattan does just that while operating in virtual training simulations for Air Force imagery analysts. His 3-D avatar role often requires the Air Force technician to play a terrorist or other combatant, in a program designed to mimic real-world scenarios that drone footage analysts will likely be asked to interpret.

The intelligence and reconnaissance imagery is typically gathered from multiple cameras, but sometimes a single unmanned aerial vehicle captures the footage, peering thousands of feet down at the operational environment. These analysts are then asked to determine whether a subject is dangerous and should be taken out, or is an innocent civilian.

Without realistic training, it might be impossible to make the right call.

But lifelike, video game-style technology -- with avatar role-players like Grattan -- gives the analysts more accurate movements to practice with, based on scenarios from the laboratory's Human Measurement and Signatures Intelligence program. The size, shape and movements of the human body, which researchers refer to as biofidelic data, are captured in stages to create the computer animation, according to the Dayton Daily News.

“We’re trying to blend a little bit of science and Hollywood for what they’ll see when they’re in the field,” said John Camp, a 711th Human Performance Wing biofidelic modeling researcher on the avatar project.

The avatar animation looks and sounds like a high-tech video game, but is specifically designed with more realistic motions and cultural cues than a mere commercial game that can be found online.

The Dayton Daily News reports:

"The realism is detailed enough to show people talking into a cellphone or smoking a cigarette in a crowd while an analyst looks for cues to recognize when someone might be holding a gun versus a cane, hiding a bomb under clothing, or to spot a soldier injured on the ground, researchers said.

The human models are from multiple regions to obtain the accuracy researchers say analysts need to distinguish someone through cues such as the way they walk.

“What we try to do is get a variety of people from a variety of places around the world and the United States to capture cultural cues,” Davenport said."

Matt Grattan acts like a combatant while wearing a body-scanning suit for the avatar development system (Screenshot: Elizabeth Kreft)

Creating an avatar is a multistep process: first, cameras scan the body measurements of a person wearing workout clothing; then, motion-capture cameras record the human body's movements 120 times each second.
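To make concrete what recording the body's movements 120 times each second looks like in data terms, here is a minimal Python sketch that models a capture session as a stream of timestamped joint positions sampled at 120 Hz. The joint list, frame layout and simulate_marker_positions helper are illustrative assumptions, not details of the Air Force system.

```python
import math
from dataclasses import dataclass, field

SAMPLE_RATE_HZ = 120  # the article cites 120 body-movement samples per second

# Illustrative joint set; a real capture suit tracks far more markers.
JOINTS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

@dataclass
class MocapFrame:
    timestamp: float                               # seconds since capture start
    positions: dict = field(default_factory=dict)  # joint name -> (x, y, z) in meters

def simulate_marker_positions(t: float) -> dict:
    """Hypothetical stand-in for suit/camera output: a gentle sway per joint."""
    return {
        joint: (0.1 * i, 1.0 + 0.05 * math.sin(2 * math.pi * t + i), 0.0)
        for i, joint in enumerate(JOINTS)
    }

def capture_session(duration_s: float) -> list[MocapFrame]:
    """Sample the performer's pose 120 times per second for duration_s seconds."""
    frames = []
    for n in range(int(duration_s * SAMPLE_RATE_HZ)):
        t = n / SAMPLE_RATE_HZ
        frames.append(MocapFrame(timestamp=t, positions=simulate_marker_positions(t)))
    return frames

if __name__ == "__main__":
    session = capture_session(duration_s=2.0)
    print(f"{len(session)} frames captured at {SAMPLE_RATE_HZ} Hz")  # 240 frames
```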

For some scenarios, the role-players who generate the avatars mimicked throwing a baseball or carrying an AK-47 rifle.

The scanned and recorded images are fused into an animation. The avatar's clothing, such as a military uniform, is added digitally to fit the scenario. The clothing the avatar wears moves naturally; a loose-fitting camouflage blouse, for example, billows when the wearer jumps over an obstacle, according to the Dayton Daily News.
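As a rough illustration of what fusing a static body scan with recorded motion might involve, the sketch below retargets captured frames onto the scanned body with a simple uniform scale and tags each frame with a scenario outfit. The BodyScan fields, the reference height and the fuse function are hypothetical assumptions; the actual pipeline relies on full skeletal retargeting and cloth simulation, which this toy example does not attempt.

```python
from dataclasses import dataclass

@dataclass
class BodyScan:
    height_m: float        # measured from the scan of the performer in workout clothing

@dataclass
class AnimationFrame:
    timestamp: float
    positions: dict        # joint name -> (x, y, z), scaled to the scanned body
    outfit: str            # clothing layer chosen for the scenario

REFERENCE_HEIGHT_M = 1.75  # hypothetical height of the rig the motion was recorded on

def fuse(scan: BodyScan, mocap_frames, outfit: str) -> list:
    """Retarget recorded motion onto the scanned body and attach scenario clothing."""
    scale = scan.height_m / REFERENCE_HEIGHT_M
    fused = []
    for frame in mocap_frames:
        scaled = {j: (x * scale, y * scale, z * scale)
                  for j, (x, y, z) in frame.positions.items()}
        fused.append(AnimationFrame(frame.timestamp, scaled, outfit))
    return fused

# Example: dress the captured motion in a camouflage uniform for a combatant scenario.
# animation = fuse(BodyScan(height_m=1.82), capture_session(2.0), outfit="camouflage_blouse")
```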

“We’re always trying to get things as natural as possible as much as you can wearing a black suit,” Camp said.

Check out the video here:

--

(H/T: AZ Star Net)

Follow Elizabeth Kreft (@elizabethakreft) on Twitter. 
