What we've been up to, November 2024
It’s actually been six months since our last update, so we have some new things to talk about. Let’s dive in!
How we do it, and things learned along the way
Previously I talked about our workflows and automation, but I never took the time to explain our overall production pipeline. How do we manage the production of multiple experiments per year, each running on different simulation platforms? How do we capitalize on each project’s new features? How do we ensure everyone is working with up-to-date tools? Let’s look under the hood and see how we get all of that done.
Things are pretty slow over here, so it’s going to be a short one.
Want to use a Buttkicker in an Unreal Engine project? There are many ways to go about it, so let’s cover a few.
In a previous update, I briefly introduced a new project we’ve been working on: Sérénité. In this post, we’ll dive deeper into the making of the scenario for this experiment, and how Unreal makes everything awesome.
It’s 2024 and I still don’t know how to write an introduction for our quarterly update. So let’s jump right in; it’s going to be a short one.
This post has nothing to do with Unreal, driving simulation, or anything we usually talk about around here. It’s going to be just me, complaining about the computer mice market and why I think we need better mice, for developers and beyond.
We’re nearing the completion of one of our largest driving simulation experiments ever, which was also the first Unreal project both for the colleagues involved and for our driving simulator (SIMAX). So let’s take some time to share what we learned from it, especially the non-trivial bits. This first post covers our use of the Level Blueprint.
I haven’t been very active on the blog lately, but I wouldn’t miss a quarterly update. Let’s dive straight into the last one for this year.
It’s the middle of summer and everything is going much slower than usual, so let’s take this time to look back at the past three months, shall we? I’m not going to lie, I wasn’t very productive… But still, some interesting things happened.
Explosions! Drama! AI-powered cryptos! This blog entry contains none of those; but if you read it, you’ll at least know what we’ve been doing since our last update.
A while ago, a researcher at my lab asked me: “Now that we’re using Unreal Engine, could we have things like mountain roads? Maybe even with snow?”. The answer is yes, and in this blog entry, I’ll explain how I implemented this short demo.
Driving simulation development is fun and all, right until you remember that actual cars have mirrors. So today I’ll rant about that, and maybe explore solutions.
I wanted ChatGPT to write the introduction to our usual quarterly report, because frankly I’m bad at it. But it’s “at capacity right now”, so I guess this will have to do.
A researcher colleague recently asked me: “Could we get the angular size of an actor in real time, even if it’s partially occluded?”. As with many things in Unreal Engine, the answer is “yes we can!”, so here’s a breakdown of how we answered this question.
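To give a rough idea of the geometry involved, here is a minimal sketch, not the occlusion-aware solution described in the post: the helper name and the bounding-sphere approximation are mine, and visibility/occlusion is not handled at all.

```cpp
#include "GameFramework/Actor.h"
#include "Math/UnrealMathUtility.h"

// Approximate an actor's angular size (in degrees) as seen from ViewLocation,
// using its bounding sphere. This ignores occlusion entirely.
float GetApproxAngularSizeDegrees(const AActor* Actor, const FVector& ViewLocation)
{
    FVector Origin, BoxExtent;
    Actor->GetActorBounds(/*bOnlyCollidingComponents=*/false, Origin, BoxExtent);

    const double Radius   = BoxExtent.Size();           // bounding-sphere radius
    const double Distance = FVector::Dist(Origin, ViewLocation);

    if (Distance <= Radius)
    {
        return 180.f;                                    // viewer is inside the bounds
    }

    // Angular diameter of a sphere of radius R seen from distance D: 2 * asin(R / D)
    return static_cast<float>(2.0 * FMath::RadiansToDegrees(FMath::Asin(Radius / Distance)));
}
```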
What did we do those past 3 months? I didn’t quite remember myself before deciding to write it up, so let’s all dive in and see what’s new.
I’m always on the lookout for new and often out-of-place tools that could find a role in our research workflows. One example is Veyon: built for classroom computer management, it also happens to be perfect for managing the nDisplay clusters at the core of our simulators. Stream Decks are a recent addition to our fleet, and today we’ll discuss all the use cases where they will greatly help.
Second entry in a series that takes a closer look at how we actually get our driving simulation experiments from paper to the simulator. This time, we focus on a project whose workflow ended up being very different from the one in our previous entry.
One year ago I started this quarterly report about the new things going on in our little driving simulation adventure. And deep in this (hot) summer, it’s once again time for an update!
What’s it like using Unreal Engine to actually implement a driving simulation experiment? What does that workflow look like? Well, we’re kind of learning as we go along, and this post is the first of a series that will look a bit closer at how things go from paper to simulator.
It’s time to look back again at the last three months, and see what happened around our driving simulation world.
Unreal Engine 5 is out, with quite a bunch of amazing features. Today we’ll explore what this means for our platform, and for driving simulation overall.
We’ve already talked about scenarios, which are at the very core of driving simulation. But that was a rather high-level overview, and today I want to go a bit deeper to discuss how we’re actually implementing our scenarios, and how we try to make the process as easy and efficient as possible.
It’s been three months since our last update, so it’s time for our quarterly status update!
Eye-tracking is widely used to study driver behavior in simulated environments. However, we’re used to either glasses (e.g. Pupil Core) or fixed setups (e.g. Smart Eye Pro); the newly released VR headsets with built-in eye-tracking (e.g. Vive Pro Eye) bring both new opportunities and challenges. Today, we’ll talk about live visualization of eye-tracking data, and why and how we managed to implement it in our platform.
For us, one of the major benefits of using Unreal Engine is its Marketplace. It offers so many products, at very reasonable prices, that work out of the box. It has saved us a huge amount of time and money compared to our previous workflows. However, finding the right products for your needs is not always easy, so I thought I’d share the list of products we own, our favorites, and some comments on them.
What have we been up to since the last update? It’s been three months, so it’s time to share some of what we’ve made, seen and heard!
Recently I shared a short scenario I made, involving an e-scooter, a crowd, a bus and an unfortunate ending. It’s definitely not a usual scenario from our “driving-oriented” perspective, so I thought it’d be interesting to explain how it came to life.
What have we been up to in the past couple of months? What’s new in the V-HCD world? This post is the first of a hopefully long series where we showcase the new things in our driving simulation platform.
In the past months, the language I’ve used the most in the V-HCD is PowerShell. Considering the V-HCD is powered by Unreal Engine, which supports Blueprint, C++ and Python, you might wonder why I write PowerShell scripts. The answer is: to automate the boring stuff. And there’s a lot of it.
Over the past ten years, the world of Virtual Reality headsets has been redefined, led by Facebook’s Oculus and HTC’s Vive. These relatively cheap devices allow for better immersion in various types of environments, and are now used by a wide range of industry and research organizations. But even though we’ve previously mentioned using CAVE simulators, we never talked about VR headsets. Why is that?
CARLA is an open-source simulator for autonomous driving research. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions.
The main point of a driving simulation experiment is to collect data relevant to our study. This includes not only simulation data (e.g. speed) but also physiological data, such as eye-tracking or heart rate, all of which need to be synchronized. Today we’ll discuss how we solve this in our platform.
The basis of a driving simulation experiment is the scenario. But quite often, an experiment requires not one but multiple scenarios. And most often, those scenarios can be described as variants of one another. How do we structure our experiments around this variant concept?
nDisplay is Unreal Engine’s tool for CAVE and other cluster rendering, allowing a simulation to be displayed on any number of screens from any number of computers. However, getting nDisplay to work for driving simulation can be challenging.
As a software engineer, version control is mandatory for any project I work on. A driving simulation platform, or experiment, is no exception. However, such projects have quite a few differences from traditional software. In this post, we’ll explain the challenges we faced regarding source control, and how we answered them (or failed to).
When you start implementing your driving simulator experiment, the first thing you’ll work on is probably the scene. You may want to reuse an existing one from a previous project, maybe modify it, or start from scratch to build something tailored to your needs.
In most driving simulation scenarios, there are two types of cars: the one driven by the participant (commonly referred to as ego), and the others, controlled from the scenario. With the rise of autonomous driving, it’s not uncommon to have scenarios where ego is also controlled from the scenario. How do we control all those cars, making sure that their external behavior appears realistic, while keeping them easy to configure for researchers building their experiment?
An important and critical stage in driving simulator experiments is scenario authoring. We want to offer as much control as possible to researchers, allowing them to build any experiment they can imagine. But we also want this process to be as easy and intuitive as possible, so that non-experts can start working on their scenario as early as possible in the experiment design phase, allowing for quick and iterative development.
OpenDRIVE defines a file format for the precise description of road networks.
When it comes to game engines, the market is dominated by two giants: Unity and Unreal. It’s worth mentioning that they’re not the only players in town: you can also find Godot, CRYENGINE, Amazon Lumberyard, UNIGINE, and others. But when it boils down to it, the question is: Unity or Unreal?
One legitimate question about professional driving simulators is how they differ from racing/driving video games. One of the most successful video games of the past decade is Grand Theft Auto 5, which features realistic environments, multiple vehicle types, pedestrians, realistic enough physics, etc. It has notoriously been used in multiple AI training projects (see Grand Theft Auto V: The Rise And Fall Of The DIY Self-Driving Car Lab).
In our previous post, we introduced our goal of developing new driving simulation software. But that goal is actually part of a wider, long-term project called Virtual Human Centred Design, which aims to create a platform bridging Human Centred Design and virtual prototyping in the automotive domain.
At Université Gustave Eiffel, more precisely at LESCOT, we have a wide range of driving simulators to study driver behaviour. Our setups range from simple desktop computers to a full-scale immersive CAVE.