Procedural generation: Aurora’s approach to scaling simulation
The downside of curated scenarios becomes clear when you begin scaling toward coverage of an operational environment. While an individual scenario set has a fast turnaround time, building scenarios by hand scales linearly with the number of scenarios we need. This presents a challenge because we need hundreds of thousands of scenarios to make assertions about the Aurora Driver’s performance, and keeping pace with that demand manually would require so much time and effort that simulation could become a blocker.
Fortunately, while some of the scenarios needed for each operational area require hand-tuning and nuanced validation, a large proportion don’t. We prefer to build the right tools to solve the right problems, so instead of asking team members to manually generate more scenarios, we’re building machines that can generate them automatically. More specifically, we’re building tools for procedural generation.
Procedural generation is used across industries in any application where data can be generated algorithmically. A human provides input up front to create a procedural recipe, and the tooling follows that recipe to generate a specified number of outputs from the provided parameters. Put simply, the human effort required to generate one output is the same as the effort required to generate one million. Additionally, updating or extending the recipe only needs to happen once, rather than updating a large set of data products one by one. By applying procedural generation to simulation, we can create scenarios at the massive scale needed to rapidly develop and deploy the Aurora Driver.
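To make the recipe idea concrete, here is a minimal sketch in Python: a hand-authored recipe defines parameter ranges for a hypothetical highway cut-in scenario, and the generator samples as many concrete scenarios from those ranges as requested. All of the names, parameters, and scenario fields here are invented for illustration; they are not Aurora’s actual tooling or scenario format.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration of a procedural recipe. A human authors the
# parameter ranges once; the tooling then expands them into any number of
# concrete scenarios.

@dataclass
class CutInScenario:
    ego_speed_mps: float    # ego vehicle's initial speed, in m/s
    actor_speed_mps: float  # cut-in vehicle's speed, in m/s
    cut_in_gap_m: float     # gap ahead of ego when the actor merges, in meters

@dataclass
class CutInRecipe:
    ego_speed_range: tuple[float, float]
    actor_speed_range: tuple[float, float]
    cut_in_gap_range: tuple[float, float]

    def generate(self, count: int, seed: int = 0) -> list[CutInScenario]:
        """Follow the recipe to produce `count` concrete scenarios."""
        rng = random.Random(seed)  # seeded so scenario sets are reproducible
        return [
            CutInScenario(
                ego_speed_mps=rng.uniform(*self.ego_speed_range),
                actor_speed_mps=rng.uniform(*self.actor_speed_range),
                cut_in_gap_m=rng.uniform(*self.cut_in_gap_range),
            )
            for _ in range(count)
        ]

# The same up-front authoring effort yields one scenario or one hundred thousand.
recipe = CutInRecipe(
    ego_speed_range=(20.0, 30.0),
    actor_speed_range=(15.0, 28.0),
    cut_in_gap_range=(5.0, 40.0),
)
scenarios = recipe.generate(count=100_000)
```

Note that in this sketch, extending coverage (say, widening the cut-in gap range) is a one-line change to the recipe rather than an edit to a hundred thousand individual scenario files, which is the scaling property described above.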