
The U.S. Army is building a giant VR battlefield to train soldiers virtually

Maj. Gen. Cedric T. Wins, commanding general, U.S. Army Research, Development and Engineering Command, tries his hand at One World Terrain. U.S. Army

In the 2014 sci-fi action movie Edge of Tomorrow (also known as Live. Die. Repeat), Tom Cruise plays William Cage, a public relations officer with no combat experience, who somehow gets stuck in a Groundhog Day-style time loop. Forced to participate in a battle against a seemingly unbeatable foe, the initially hopeless Cage becomes increasingly effective by reliving the day of the attack over and over. Each time he dies, Cage wakes up on the day before the attack takes place.

Being able to train in this way is a luxury that’s not afforded to today’s combat troops. However many drills you run and however much strategic briefing takes place, the reality is that nothing can fully prepare you for being in a real combat zone. Suddenly, things become a whole lot more unpredictable, and unpredictability is difficult to train for, especially when one mistake could lead to serious injury or worse.


The Synthetic Training Environment

The U.S. Army has an idea to help with this, however, and it’s one that could supercharge the way military training is carried out. Called the Synthetic Training Environment, the initiative aims to create a unified training environment for the infantry that lets soldiers practice combat scenarios dozens, potentially even hundreds, of times before setting foot in a battle zone.

Taking advantage of cloud-based computing and the latest virtual reality technology, the STE will allow soldiers to strap on a pair of VR or mixed reality goggles and immediately be transported to any country or terrain, along with their squad.


“As part of our work for the Army under contract for STE, we’re developing a cloud-enabled, massively multiplayer training and simulation environment that uses a common terrain for the entire planet,” Pete Morrison, chief commercial officer for the military simulation software developer Bohemia Interactive Simulations (BISim), told Digital Trends. “This would enable the Army to conduct virtual training and complex simulations anywhere on a virtual representation of the Earth. STE will leverage cloud technologies to deliver training to anywhere it’s needed, ensuring a common and high-fidelity whole-Earth terrain representation for a multitude of different simulation systems.”

While it’s not a replacement for live training, the idea of STE is that it will be available whenever and wherever it’s required. That means it can be used equally well in well-equipped combat training centers, at home station, or even during deployment. It can also be fine-tuned to a variety of different training scenarios: not just against different enemies, but simulating training environments for everything from the battalion level through mission command. And by gathering data points in real time during training, instructors can spot potential problems (and nip them in the bud) before they become a, well, problem.
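To make that real-time monitoring idea concrete, here is a minimal sketch, in Python, of how telemetry from a virtual exercise might be scanned for issues worth flagging to instructors. The event names, fields, and thresholds are invented for illustration; they are not drawn from the STE program.

# Hypothetical sketch: scanning real-time training telemetry for issues.
# Event names, fields, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class TrainingEvent:
    soldier_id: str
    kind: str              # e.g. "shot_fired", "reaction"
    value: float           # e.g. reaction time in seconds
    friendly_target: bool = False

def flag_issues(events):
    """Yield human-readable warnings as events stream in."""
    for event in events:
        if event.kind == "shot_fired" and event.friendly_target:
            yield f"{event.soldier_id}: possible friendly-fire incident"
        elif event.kind == "reaction" and event.value > 3.0:
            yield f"{event.soldier_id}: slow reaction ({event.value:.1f}s)"

# Example: feed a small batch of simulated events through the monitor.
stream = [
    TrainingEvent("alpha-2", "reaction", 4.2),
    TrainingEvent("alpha-3", "shot_fired", 0.0, friendly_target=True),
]
for warning in flag_issues(stream):
    print(warning)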

The military’s history with VR

The United States military is no stranger to virtual reality. As with artificial intelligence, the Department of Defense has been a big sponsor of VR throughout the technology’s long, and often tumultuous, history.

As far back as the 1970s, long before “virtual reality” had even been given its name by the computer scientist Jaron Lanier, a military engineer named Thomas Furness dreamed up a pilot training tool called the “Super Cockpit.” This ambitious (and expensive) flight simulator project involved a real aircraft cockpit into which computer-generated 3D maps, infrared and radar imagery, and assorted avionics data could be projected in three-dimensional space. It gave trainee pilots a whole new way of learning to fly planes without ever having to leave the hangar.

Natick’s virtual reality dome enables researchers to assess the impact of the environment on Soldier cognition, including decision-making, spatial memory, and wayfinding. David Kamm, NSRDEC

Since then, different branches of the military have frequently experimented with VR. Infantry training, however, poses a considerable challenge. As it turns out, as challenging as a pilot’s job is, simulating the experience of flying a plane is comparatively easy. It involves one immediate location and a limited number of friendly or enemy agents to interact with. The infantry is different.

In increasingly urban environments, today’s soldiers are dealing not just with friendly and enemy forces, but also with civilians, who bring with them their own complex population dynamics. Add to that the demands of “massively multiplayer” training and the technical demands of virtual reality, and you have a scenario that would make the developers of GTA Online quake in their boots. (Let’s not forget that the accuracy of this version of GTA Online could affect real men and women’s lives if it’s not up to the job!)

Creating a complex virtual world

This is where BISim’s training and simulation software, based on a rendering engine called VBS Blue, aims to help. “What’s exciting about what we’re doing is that the Army will be able to dramatically scale up the number of intelligent entities represented in simulation scenarios to the millions,” Morrison continued. “Previously, only tens or hundreds of thousands of entities would be represented, and those would be aggregated to reduce the complexity of simulating large forces.”
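For a rough sense of what that aggregation trade-off looks like, here is a hypothetical sketch contrasting the two approaches: folding a formation into a single averaged unit versus simulating every soldier as an individual entity. The class names and numbers are illustrative only and are not taken from VBS Blue.

# Hypothetical illustration of "aggregated" vs. per-entity simulation.
# Class names and numbers are invented; they are not drawn from VBS Blue.
import random

class Soldier:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self):
        # Each entity moves on its own, so behavior can vary individually.
        self.x += random.uniform(-1, 1)
        self.y += random.uniform(-1, 1)

class AggregatedCompany:
    """Roughly 150 soldiers folded into a single averaged unit."""
    def __init__(self, x, y, strength=150):
        self.x, self.y, self.strength = x, y, strength

    def step(self):
        # One update stands in for the whole formation: cheap, but coarse.
        self.x += random.uniform(-1, 1)
        self.y += random.uniform(-1, 1)

# Per-entity simulation: cost scales with the number of individuals.
individuals = [Soldier(0, 0) for _ in range(150)]
for soldier in individuals:
    soldier.step()

# Aggregated simulation: one object, one update, far less fidelity.
company = AggregatedCompany(0, 0)
company.step()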


Using a unique A.I. layer, the software also allows these millions of intelligent entities to act of their own accord. That means that no two training scenarios will be exactly alike. The software is additionally able to interact with the DoD’s existing simulation systems, meaning that the infantry will be able to practice in a shared virtual world with, say, a helicopter simulator. The importance of this cannot be overstated when it comes to preparing for a scenario in which hundreds or thousands of soldiers, with individual specialities, must work together under highly pressurized circumstances.
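As a loose illustration of what an autonomous behavior layer implies, the sketch below gives each simulated entity its own independent decision step, so the same scenario plays out differently on every run. The states and weights are made up for the example and do not describe BISim’s actual A.I.

# Hypothetical sketch of an "A.I. layer" driving entities of their own accord.
# States, weights, and names are invented for illustration only.
import random

STATES = ["patrol", "take_cover", "engage", "withdraw"]

class SimulatedEntity:
    def __init__(self, name):
        self.name = name
        self.state = "patrol"

    def decide(self):
        # Each entity picks its next behavior independently, so the same
        # scenario unfolds differently every time it is run.
        self.state = random.choices(STATES, weights=[5, 2, 2, 1])[0]
        return self.state

entities = [SimulatedEntity(f"entity-{i}") for i in range(5)]
for tick in range(3):
    decisions = {e.name: e.decide() for e in entities}
    print(f"tick {tick}: {decisions}")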

Last but not least, the models in BISim’s system can be easily updated, allowing the training environment to reflect how a particular location looks at that moment rather than how it was when the software was first developed.

“Scenarios are usually ‘reset’ at the end of training, so a persistent environment would allow users to examine how tactical actions could have a strategic effect on the broader simulated population,” said Morrison. “By using the cloud and a common global terrain it will allow soldiers in-theater to provide updates to the terrain where they’re deployed and let soldiers at home station train in the same virtual environment. [That will let them further] increase their readiness for deployment.”
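Conceptually, the persistent, cloud-shared terrain Morrison describes might work along the lines of the following sketch, in which a deployed unit publishes an observation for a map tile and a home-station trainer pulls the latest version before an exercise. The function names and tile format are hypothetical, not BISim’s actual API.

# Hypothetical sketch of persistent, versioned terrain tiles shared via a
# cloud store. Function names and the tile format are invented.
terrain_store = {}  # tile_id -> (version, description) held "in the cloud"

def publish_update(tile_id, description):
    """A deployed unit pushes what the ground looks like right now."""
    version = terrain_store.get(tile_id, (0, ""))[0] + 1
    terrain_store[tile_id] = (version, description)
    return version

def pull_latest(tile_id):
    """A home-station trainer pulls the newest version before an exercise."""
    return terrain_store.get(tile_id, (0, "no data yet"))

# In-theater observation updates the shared world...
publish_update("grid-38S-MB-12", "new roadblock at north intersection")
# ...and a squad training at home station sees the same change.
print(pull_latest("grid-38S-MB-12"))

However the production system actually implements it, the point is the same: the virtual Earth stays current, and soldiers in-theater and at home station train against the same version of it.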
