Driving simulations that look more life-like


Today’s driving simulators have a big problem: they don’t look realistic enough, particularly background objects such as trees and road markings. But researchers have developed a new way to create photorealistic images for simulators, paving the way for better testing of driverless cars.

Conventional computer graphics use detailed models, meshes and textures to render 2D images from 3D scenes, a labor-intensive process that often produces images falling short of realism, particularly in the background. By using a machine learning framework called a Generative Adversarial Network (GAN), however, the researchers were able to train their program to randomly generate life-like environments, improving its visual fidelity: the degree to which computer graphics resemble reality.
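The adversarial idea behind a GAN can be sketched with toy 1-D data (the actual system uses deep convolutional networks on images, and all names below are illustrative): a discriminator D scores samples as real (near 1) or generated (near 0), while a generator G maps random noise to samples meant to fool D. Training alternates gradient updates that minimize the two losses computed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def D(x, w=2.0, b=-1.0):
    # Toy linear discriminator: probability that x is a real sample.
    return sigmoid(w * x + b)

def G(z, a=0.5, c=1.0):
    # Toy linear generator: transforms noise z into a candidate sample.
    return a * z + c

real = rng.normal(loc=1.0, scale=0.1, size=256)   # stand-in "real-world" samples
fake = G(rng.normal(size=256))                    # generated samples

# Discriminator loss: penalize calling real samples fake and fake samples real.
d_loss = -np.mean(np.log(D(real))) - np.mean(np.log(1.0 - D(fake)))

# Generator loss: penalize fake samples that the discriminator rejects.
g_loss = -np.mean(np.log(D(fake)))

print(d_loss > 0, g_loss > 0)
```

In full training, each network's parameters are updated in turn against these losses until the generator's samples become hard to tell apart from the real data.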

This is especially important when testing how humans react when they are in driverless vehicles or, alternatively, on the road with them.

“When driving simulations look like computer games, most people don’t take them seriously,” said Ekim Yurtsever, lead author of the study and a research associate of electrical and computer engineering at The Ohio State University. “That’s why we want to make our simulations look as similar to the real world as possible.”

The study was published in the journal IEEE Transactions on Intelligent Transportation Systems.

The researchers started with CARLA, an open-source driving simulator, as their base. They then used a GAN-based image synthesizer to render the background elements like buildings, vegetation and even the sky, and combine them with more traditionally rendered objects.
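The compositing step described above can be sketched per pixel, with hypothetical array names: a GAN-synthesized background image is combined with conventionally rendered foreground objects using a semantic mask from the simulator that marks where the foreground objects are (RGB values here are floats in [0, 1]).

```python
import numpy as np

h, w = 4, 6
background = np.full((h, w, 3), 0.2)   # stand-in for the GAN-synthesized background
foreground = np.full((h, w, 3), 0.9)   # stand-in for conventionally rendered objects
mask = np.zeros((h, w, 1))             # 1 where a foreground object covers the pixel
mask[1:3, 2:5] = 1.0                   # a small rectangular "car"

# Per-pixel blend: keep rendered pixels where the mask is set, GAN pixels elsewhere.
composite = mask * foreground + (1.0 - mask) * background

print(composite[0, 0, 0], composite[1, 2, 0])  # a background pixel, then a foreground pixel
```

The mask-driven blend lets the labor-intensive rendering budget go to the objects that matter most, while the GAN fills in everything else.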

Yurtsever said driving simulations will continue to need conventional, labor-intensive graphics rendering techniques to display the primary objects of interest, such as nearby cars. But, using artificial intelligence, the GAN can be trained to generate realistic backgrounds from real-world data.

One of the challenges the researchers faced was teaching their program to recognize patterns in its environment, a skill needed to detect and create objects such as vehicles, trees and shadows, and to distinguish these objects from one another.

“The beauty of it is that these patterns and textures in our model are not designed by engineers,” said Yurtsever. “We have a template of feature recognition, but the neural network learns it by itself.”

Their findings showed that blending foreground objects differently from background scenery improved the photorealism of the entire image.
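One intuition for why treating the two regions differently helps (an illustration only; the study's actual blending scheme may differ): a hard binary mask leaves a harsh seam between foreground and background, while feathering (softening) the mask edge transitions smoothly between the two sources. The sketch below feathers one hypothetical mask row with a 3-tap moving average.

```python
import numpy as np

mask_row = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# Feather the mask with a 3-tap moving average (boundary values held at the edge).
padded = np.pad(mask_row, 1, mode="edge")
soft = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

fg, bg = 0.9, 0.2                                # foreground/background intensities
hard_blend = mask_row * fg + (1 - mask_row) * bg
soft_blend = soft * fg + (1 - soft) * bg

# The soft blend produces intermediate values at the object boundary
# instead of jumping straight from 0.2 to 0.9.
print(hard_blend)
print(soft_blend)
```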

One limitation: instead of modifying an entire simulation at once, the process had to be applied frame by frame. Since we don’t experience the world one frame at a time, the project’s next step will be to improve the program’s temporal consistency, so that each frame is consistent with those before and after it and users have a seamless, visually engaging experience, Yurtsever said.
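One common way to damp frame-to-frame flicker (an illustration of the temporal-consistency goal, not necessarily the approach the researchers will take) is to exponentially smooth each newly synthesized frame toward the previous output, so per-frame noise doesn't jump around. Here `frames` stands in for a sequence of independently enhanced frames.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = 0.5 + 0.1 * rng.standard_normal((5, 4, 4))  # 5 noisy 4x4 "frames"
alpha = 0.3                                          # weight given to the new frame

smoothed = [frames[0]]
for frame in frames[1:]:
    # Blend the new frame with the previous smoothed output.
    smoothed.append(alpha * frame + (1 - alpha) * smoothed[-1])

# Smoothed frames vary less from one to the next than the raw frames do.
raw_jump = np.mean(np.abs(frames[1:] - frames[:-1]))
smooth_jump = np.mean([np.abs(b - a).mean() for a, b in zip(smoothed, smoothed[1:])])
print(smooth_jump < raw_jump)
```

The trade-off is lag: heavy smoothing suppresses flicker but also blurs genuine motion, which is why temporal consistency is a research problem rather than a one-line fix.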

The development of photorealistic technologies could also assist scientists in studying the intricacies of driver distraction, and help improve experiments with real drivers, Yurtsever said. And with access to larger datasets of roadside scenes, more immersive driving simulations could change how humans and AI begin to share the road.

“Our research is an extremely important step in conceptualizing and testing new ideas,” Yurtsever said. “We can never actually replace real world testing, but if we can make simulations a little bit better, we can get better insight on how we can improve autonomous driving systems and how we interact with them.”

Co-authors were Ibrahim Mert Koc, Keith A. Redmill and Dongfang Yang, all in electrical and computer engineering at Ohio State. The study was supported by the United States Department of Transportation.

Story Source:

Materials provided by Ohio State University. Original written by Tatyana Woodall. Note: Content may be edited for style and length.