Why the Army Is Developing Mind-Reading Technology

"Recognizing something important."

A soldier sat with electrodes connected to his head and looked at a screen flashing images at him.

Without speaking and without clicking, he silently counted how many of the images flashing past fit into a category he had chosen to follow.

He never told anyone which of the available categories (boat, panda, strawberry, butterfly or chandelier) he had chosen to count, yet the computer tracking his brain waves was able to tell what he was following.

The boats.

This work is being done at the U.S. Army's MIND (Mission Impact Through Neurotechnology Design) Lab, led by neuroscientist Dr. Anthony Ries, who thinks such technology could someday be used by the intelligence community to identify targets faster than current methods allow.

Dr. Anthony Ries (left) works with Pfc. Kenneth Blandon (right) to develop a mind-reading program that will help intelligence analysts sift through a larger amount of information quickly. (Photo credit: U.S. Army)

In a news story posted by the Army, Ries explained that it can take an intelligence analyst a very long time to scan a large image for something specific.

"They may be looking for a specific vehicle, house or airstrip — that sort of thing," he said.

The technology that Ries and his colleagues are working on takes sections, or "chips," of this larger image and flashes them on the screen. As the analyst views them while hooked up to an electroencephalogram (EEG), like the soldier in the demonstration, the system picks up on the snippets that trigger a recognition response in his or her brain.

"Only those chips that contain a feature that is relevant to the Soldier at the time — a vehicle, or something out of the ordinary, somebody digging by the side of the road, those sorts of things — trigger this response of recognizing something important," Ries explained.

The program would then flag the images that elicited such a response in the analyst so they can be scrutinized later.

Here's more from the Army's news article on why such a system would be beneficial:

When Ries and his fellow researchers cut a larger intelligence image into smaller parts and display them in rapid succession to an analyst, the analyst still has to look at the entire image, the same number of square inches of image overall. But Ries said that by cutting it up into smaller chips, and displaying it rapidly, they are taking much of the work out of accomplishing the analysis.

Instead of sliding his fingers over the image, or marking on it, or writing something, or typing, the analyst has only to think "of interest" or "not of interest." And that kind of decision can be made almost instantly; a computer hooked to an EEG can detect when that decision has been made, what the decision is, tag the image with the result, and then present the next image in just a split second.
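
To make that loop concrete, here is a minimal Python sketch of the kind of triage the article describes. It is not the Army's actual software: synthetic EEG epochs time-locked to each flashed chip are scored by an off-the-shelf linear classifier, and chips that evoke a target-like response are tagged for later review. The chip names, epoch sizes, and threshold are illustrative assumptions.

```python
# Illustrative sketch only: flash image chips, score the EEG epoch time-locked
# to each flash, and tag chips whose response looks "of interest."
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
N_CHANNELS, N_SAMPLES = 8, 128          # assumed: 8 electrodes, 0.5 s at 256 Hz

def simulate_epoch(is_target: bool) -> np.ndarray:
    """Fake EEG epoch; targets get a bump ~300 ms after the flash."""
    epoch = rng.normal(0.0, 1.0, (N_CHANNELS, N_SAMPLES))
    if is_target:
        epoch[:, 70:90] += 2.0           # crude stand-in for the recognition response
    return epoch

# Calibrate a simple linear classifier on labeled practice epochs.
calib_labels = rng.integers(0, 2, 200).astype(bool)
calib_X = np.stack([simulate_epoch(t).ravel() for t in calib_labels])
clf = LinearDiscriminantAnalysis().fit(calib_X, calib_labels)

# RSVP triage: flash chips, score each time-locked epoch, tag the interesting ones.
chips = [f"chip_{i:03d}" for i in range(20)]
truth = rng.random(len(chips)) < 0.2     # a few chips actually contain a target
flagged = []
for chip_id, has_target in zip(chips, truth):
    epoch = simulate_epoch(has_target)   # in practice: EEG recorded after the flash
    p_interest = clf.predict_proba(epoch.ravel()[None, :])[0, 1]
    if p_interest > 0.5:
        flagged.append((chip_id, round(p_interest, 2)))

print("Chips tagged for closer review:", flagged)
```

In a real system the classifier would be calibrated on the analyst's own recorded EEG rather than simulated data, but the structure of the loop, score each flash and tag what stands out, is the point of the example.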

At this point, the team is working on figuring out what extraneous signals could create "artifacts" that might affect the interpretation of the neural signals.

"Maybe you have an analyst who is looking at an aerial image, but is also listening to auditory communications," Ries told the Army News Service. "How does multi-tasking affect the target recognition response? If we can characterize the way different task loads affect the response, we can update our classification algorithms to account for that."

The lab is also working on eye-movement tracking that can be coupled with the brain wave signals.

"One thing we have done is instead of having people view images at the center of the screen, we're leveraging eye-tracking to know whenever they fixate on a particular region of space," Ries explained. "We can extract the neural signal, time-locked to that fixation, and look for a similar target response signal. Then you don't have to constrain the image to the center of the screen. Instead, you can present an image and the analyst can manually scan through it and whenever they fixate on an item of interest, that particular region can be flagged."
