What we've been up to, February 2022
It’s been three months since our last update, so it’s time for our quarterly status update!
Open-source
I’ve finally taken the time to clean up my codebase and open-source my Unreal Engine OpenDRIVE Plugin! It’s based on esmini, which regular readers of this blog may already know. esmini has, in my opinion, the best OpenDRIVE library around, so my plugin is essentially a wrapper around it, with some nice in-engine features added, like landscape sculpting.
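To give an idea of what the wrapper does, here’s a minimal sketch of converting an OpenDRIVE lane coordinate into an Unreal world transform through esmini’s RoadManager. The esmini calls are written from memory and may not match every version exactly, and the unit/axis conversion shown is the usual OpenDRIVE-to-Unreal flip, not code lifted from the plugin.

```cpp
#include "CoreMinimal.h"
#include "RoadManager.hpp"  // esmini's RoadManager library

// Assumes roadmanager::Position::LoadOpenDrive("track.xodr") succeeded earlier.
FTransform LaneToWorld(int RoadId, int LaneId, double S)
{
	roadmanager::Position Pos;
	Pos.SetLanePos(RoadId, LaneId, S, /*offset=*/0.0);

	// OpenDRIVE is right-handed in meters; Unreal is left-handed in
	// centimeters, so Y and the heading are mirrored.
	const FVector Location(Pos.GetX() * 100.0, -Pos.GetY() * 100.0, Pos.GetZ() * 100.0);
	const FRotator Rotation(0.f, -FMath::RadiansToDegrees(Pos.GetH()), 0.f);
	return FTransform(Rotation, Location);
}
```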
It’s the first “real” project I’ve open-sourced, so we’ll see how it goes. Obviously, the plugin could have many more features (e.g., road mesh generation), but for now we don’t have any major development in the pipeline; we mostly plan on maintaining it and adding a few features here and there.
Virtual Driver
The Virtual Driver underwent a major refactor. It was previously an external library (built on top of esmini) so it could stay engine-agnostic, but since we only use it in Unreal Engine, that abstraction layer was more trouble than it was worth. So we moved the codebase out of esmini and into its own Unreal Engine plugin, which also meant splitting it from the OpenDRIVE plugin, since the two were entangled.
This means the Virtual Driver now has a tighter integration within the engine, and can interact more easily with it. We’ll have an example of that just below.
The Virtual Driver also learned to use the reverse gear. We plan on using it in an upcoming project, and I’d always wanted to implement it anyway. It turned out to be much easier than I imagined: the existing design accommodates such a change easily, so it really didn’t take much time.
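As a rough illustration of why it fits so naturally (this is not the actual Virtual Driver code): once the planner’s target speed is signed, gear selection can fall out of the existing longitudinal controller almost for free.

```cpp
// Illustrative only: with a signed target speed, reverse gear selection
// becomes a one-liner on top of an existing longitudinal controller.
enum class EGear { Forward, Reverse };

EGear SelectGear(double TargetSpeedMps)
{
	// A negative target speed means the planner wants the car to back up.
	return TargetSpeedMps < 0.0 ? EGear::Reverse : EGear::Forward;
}
```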
And as an example of what a tighter Unreal integration means for the Virtual Driver, you can see our debug display in the video above: the planned trajectory is plotted, and the pure pursuit point is shown along it. It’s really nice to have that on screen when you’re trying to understand why the car isn’t doing what you want it to.
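For the curious, this kind of overlay is only a few lines with Unreal’s debug draw helpers. A minimal sketch, assuming a trajectory polyline and a pure pursuit target computed elsewhere (all names are hypothetical, not our plugin’s API):

```cpp
#include "DrawDebugHelpers.h"

// Draws the planned trajectory as a green polyline and highlights the
// pure pursuit point the steering controller is currently chasing.
void DrawPlannerDebug(UWorld* World, const TArray<FVector>& Trajectory,
                      const FVector& PurePursuitPoint)
{
	for (int32 i = 1; i < Trajectory.Num(); ++i)
	{
		DrawDebugLine(World, Trajectory[i - 1], Trajectory[i], FColor::Green,
		              /*bPersistentLines=*/false, /*LifeTime=*/-1.f,
		              /*DepthPriority=*/0, /*Thickness=*/2.f);
	}
	DrawDebugSphere(World, PurePursuitPoint, /*Radius=*/25.f,
	                /*Segments=*/12, FColor::Red);
}
```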
Pedestrians
A researcher colleague has been hard at work on pedestrians. First, they re-rigged all of CARLA’s pedestrians to Unreal’s Mannequin skeleton, so that they can be used with the Advanced Locomotion System, which powers most of our pedestrians. This greatly extends the collection of characters available for our scenarios.
They also worked on more “advanced”, complex types of pedestrians, which interact with various props or other actors. These include an older person with a cane, a person pushing a stroller, pulling a shopping trolley or a travel case, and even a blind person with a guide dog and a white cane!
Here's the stroller. We used our @ManusMeta gloves to record hand grip position directly into #UE4. One more great addition to our #DrivingSimulation platform! pic.twitter.com/UztpuUfWpx
— Bertrand Richard (@brifsttar) January 14, 2022
There are more of those incoming, and they’ll surely be very useful for creating immersive worlds and unique situations.
Projects
SUaaVE
Our experiment for the SUaaVE project, the first one to use our Unreal Engine simulation tool, started this week. There isn’t much new to show since the last update, as most of the work was ironing out details.
Our project partners from ESI are still using the platform to create scenarios for the consortium, and I’m very impressed with their work. They made a wide range of scenes and situations, really showcasing the power of Unreal Engine and our platform. I hope they’ll share a reel of their work, but you’ll find a small sample below.
NEWMOB
NEWMOB is a new project in which we’ll study drivers’ interactions with Personal Light Electric Vehicles (PLEVs).
As a first step, I re-rigged and did a major overhaul of the electric scooter and unicycle I had created a few months back.
Worked a bit on improving our #UE4 #PLEV. First, the #EUC, which will be used in an upcoming project studying new urban mobilities, and their interactions with car drivers. So expect to see some of them in our #DrivingSimulation. pic.twitter.com/41xPRZDIuR
— Bertrand Richard (@brifsttar) December 12, 2021
A colleague, who will manage the simulation on this project, started working on proofs-of-concept for some basic situations we imagine we’ll be using later on. This allowed us to learn more about the current technical limitations, and we’ve already solved most of them.
Data collection
I’ve worked a bit on data collection, mainly on two fronts.
First, our LabStreamingLayer pipeline: mainly adding support for our video sources (Axis IP cameras, a screen grabber), but also integrating LabRecorder better into our workflow.
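Syncing video with LSL typically boils down to publishing one sample per captured frame, so the footage can later be aligned with the other streams LabRecorder captures. A minimal sketch using liblsl’s C++ API (the stream names and the capture loop are placeholders, not our actual pipeline):

```cpp
#include <lsl_cpp.h>
#include <cstdint>
#include <vector>

int main()
{
	// One int32 channel carrying the frame index; irregular rate, since each
	// frame is timestamped individually as it arrives.
	lsl::stream_info info("AxisCamera01_Frames", "VideoSync", 1,
	                      lsl::IRREGULAR_RATE, lsl::cf_int32, "axis-cam-01");
	lsl::stream_outlet outlet(info);

	for (int32_t frame = 0; /* while capturing */; ++frame)
	{
		// ... grab a frame from the camera and append it to the video file ...

		// Publish the frame index with an LSL timestamp; LabRecorder stores
		// it alongside the other streams for offline alignment.
		outlet.push_sample(std::vector<int32_t>{frame}, lsl::local_clock());
	}
}
```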
I also worked on automation: making sure everything runs fine, giving the experimenter feedback about potential issues, backing up and moving all recordings to their appropriate locations, and so on.
I could spend far more time than I do on that front. I think there’s a huge amount of work that could be done to make the whole experiment process (from early design to publication) more robust and reproducible, and easier to share, track, and collaborate on. The DevOps culture has a lot to bring to experimental research, and I wish I had more time to focus on it.
Misc
Some minor things that are interesting but don’t deserve their own sections.
I went over all of our Marketplace products to generate reusable Blueprints for all buildings or “roadside structures”. This will speed up environment creation significantly.
And finally, I added the option to manually start the car! Previously, the engine was started automatically and you could drive off by simply using the controls (e.g., the pedals). Now, you can actually turn the key to start the engine, and turn it off once you’re done.
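Under the hood, this sort of feature is essentially a small state machine gating the throttle. A sketch of the idea, with hypothetical names rather than our actual implementation:

```cpp
// Illustrative ignition model: the engine only runs after the key passes
// through the Start position, and torque is only produced while it runs.
enum class EKeyPosition { Off, Accessory, On, Start };

class FIgnition
{
public:
	void SetKeyPosition(EKeyPosition Key)
	{
		if (Key == EKeyPosition::Start)
		{
			bEngineRunning = true;   // Starter engaged: the engine fires up.
		}
		else if (Key == EKeyPosition::Off)
		{
			bEngineRunning = false;  // Key turned back: the engine stops.
		}
	}

	// The vehicle model multiplies the throttle input by this gate.
	float ThrottleGate() const { return bEngineRunning ? 1.f : 0.f; }

private:
	bool bEngineRunning = false;     // Engine is off until the key is turned.
};
```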