RiskBench: A Scenario-based Risk Identification Benchmark

Chi-Hsi Kung1, Chieh-Chi Yang1, Pang-Yuan Pao1, Shu-Wei Lu1, Pin-Lun Chen1, Hsin-Cheng Lu2, Yi-Ting Chen1
1National Yang Ming Chiao Tung University, 2National Taiwan University
2024 IEEE International Conference on Robotics and Automation (ICRA)

Abstract

This work focuses on risk identification, i.e., the ability to identify and analyze risks from dynamic traffic participants and unexpected events. While significant advances have been made in the community, the current evaluation of risk identification algorithms uses independent datasets, leading to difficulty in direct comparison and hindering collective progress toward safety performance enhancement.

To address this limitation, we introduce RiskBench, the largest scenario-based benchmark for risk identification. Our benchmark is created using a scenario-based approach, which is widely accepted in the automotive industry. We design a scenario taxonomy and augmentation pipeline to enable the systematic collection of ground-truth risks under different scenarios. We assess the ability of ten algorithms to (1) detect and locate risks, (2) anticipate risks, and (3) facilitate decision-making. We conduct extensive experiments and summarize future research directions on risk identification. Our aim is to encourage collaborative endeavors toward a society with zero collisions.

Interaction Types

Our benchmark covers four interaction types: Interactive (yielding to a dynamic risk), Collision (a crash scenario), Obstacle (interacting with static elements), and Non-interactive (normal driving). These types aim to cover the different definitions of risk discussed in the community.

[Example scenario clips for each interaction type: Interactive, Collision, Obstacle, Non-interactive]

Data Collection

A scenario taxonomy and data augmentation pipeline are developed to collect diverse scenarios in a procedural manner. The taxonomy includes attributes such as road topology, scenario type, ego-vehicle behavior, and traffic participants' behavior. Given a scenario script instantiated from the taxonomy, two human subjects act out the scenario accordingly. To form the final scenario dataset, we augment the collected scenarios by varying attributes including time of day, weather conditions, and traffic density.
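A minimal sketch of this attribute-level augmentation, assuming the CARLA Python API and a placeholder replay_scenario routine (the actual replay logic and attribute lists are not shown here), might look as follows:

import itertools

import carla  # CARLA Python API; RiskBench scenarios are collected in CARLA

# Weather/time-of-day presets and traffic densities used for augmentation (illustrative subset).
WEATHER_PRESETS = {
    "clear_noon": carla.WeatherParameters.ClearNoon,
    "wet_noon": carla.WeatherParameters.WetNoon,
    "hard_rain_sunset": carla.WeatherParameters.HardRainSunset,
    "clear_sunset": carla.WeatherParameters.ClearSunset,
}
TRAFFIC_DENSITIES = ["low", "mid", "high"]

def replay_scenario(world, scenario):
    """Placeholder for replaying a logged scenario in `world` (replay logic omitted)."""
    pass

def augment(base_scenario, client):
    """Replay one collected scenario under every weather / traffic-density combination."""
    world = client.load_world(base_scenario["map"])  # e.g., "Town03"
    for weather_name, density in itertools.product(WEATHER_PRESETS, TRAFFIC_DENSITIES):
        world.set_weather(WEATHER_PRESETS[weather_name])
        variant = dict(base_scenario, weather=weather_name, density=density)
        replay_scenario(world, variant)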


Experiment Setups

Risk Identification Baselines

The baselines take a sequence of historical data as input and output a risk score for each road user (e.g., vehicle or pedestrian) or unexpected event (e.g., collision or construction zone). A road user or unexpected event is considered a risk if its score exceeds a predefined threshold. We implement ten risk identification algorithms and categorize them into four types.
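As an illustration of this shared interface, the sketch below thresholds per-candidate risk scores; the score_fn callable and the 0.5 threshold are assumptions for exposition, not the benchmark's actual code.

from typing import Callable, Dict, List

RISK_THRESHOLD = 0.5  # assumed value; each baseline may tune its own threshold

def identify_risks(
    score_fn: Callable[[List[dict], str], float],
    history: List[dict],
    candidate_ids: List[str],
) -> Dict[str, bool]:
    """Flag each road user / event id whose risk score exceeds the threshold.

    `score_fn(history, candidate_id)` stands in for any of the ten baselines and
    returns a risk score in [0, 1] given the sequence of historical observations.
    """
    scores = {cid: score_fn(history, cid) for cid in candidate_ids}
    return {cid: score > RISK_THRESHOLD for cid, score in scores.items()}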

For training details, please refer to our GitHub repository.

Evaluation Metrics

We devise three metrics that evaluate the ability of a risk identification algorithm to (1) identify locations of risks, (2) anticipate risks, and (3) facilitate decision-making.
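As a concrete illustration of the first metric, the sketch below scores risk localization with a per-frame F1 between predicted and ground-truth risky object IDs; this formulation is an assumption for exposition rather than the benchmark's exact definition.

def localization_f1(pred_ids: set, gt_ids: set) -> float:
    """Per-frame F1 between predicted and ground-truth risky object IDs (illustrative)."""
    if not pred_ids and not gt_ids:
        return 1.0  # nothing to flag and nothing flagged
    tp = len(pred_ids & gt_ids)
    precision = tp / len(pred_ids) if pred_ids else 0.0
    recall = tp / len(gt_ids) if gt_ids else 0.0
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)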

Localization and Anticipation Demo

Qualitative Results for Risk Identification

Fine-grained Scenario-based Analysis


Planning-aware Demo

Temporal Consistency


We study the temporal consistency of the models' predictions, as summarized in the following table. To evaluate temporal consistency, we check whether a risk is predicted accurately and consistently within a specified time window (1, 2, or 3 seconds) leading up to the critical/collision point; a sketch of this check follows the table.
Method 1s 2s 3s
QCNet [4] 50.2% 26.9% 18.5%
DSA [5] 14.0% 4.6% 3.8%
RRL [6] 19.0% 8.5% 4.7%
BP [7] 4.2% 2.4% 1.9%
BCP [8] 6.9% 3.9% 3.4%
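The sketch below shows one way such a consistency rate could be computed, assuming per-frame sets of flagged object IDs, an annotated critical frame, and the dataset's 20 Hz frame rate; the exact formulation used in the benchmark may differ.

FPS = 20  # the dataset is captured at 20 Hz

def is_temporally_consistent(pred_ids_per_frame, gt_risk_id, critical_frame, horizon_s, fps=FPS):
    """True if `gt_risk_id` is flagged in every frame of the last `horizon_s` seconds
    before `critical_frame`; `pred_ids_per_frame[t]` is the set of ids flagged at frame t.
    """
    start = max(0, critical_frame - int(horizon_s * fps))
    return all(gt_risk_id in pred_ids_per_frame[t] for t in range(start, critical_frame))

def consistency_rate(scenarios, horizon_s):
    """Fraction of scenarios whose predictions stay consistent over the given horizon."""
    hits = sum(
        is_temporally_consistent(s["preds"], s["gt_id"], s["critical_frame"], horizon_s)
        for s in scenarios
    )
    return hits / len(scenarios)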

Dataset Details


Sensor Suite

We collect camera, depth, and instance segmentation images from all front-view sensors. Each perspective-view camera has a 120-degree field of view.

In addition, we collect object bounding boxes and precise lane markings, from which bird's-eye-view (BEV) images can be rendered (maximum resolution: 10 pixels per meter).

Alongside the camera data, we also collect LiDAR, GNSS, and IMU measurements. All data are captured at a frame rate of 20 Hz.
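For example, with bounding boxes and lane markings given in world coordinates, a BEV raster at this resolution can be obtained by a simple ego-centered projection. The sketch below is illustrative only; the raster size and function name are assumptions, not the benchmark's rendering code.

import numpy as np

PIXELS_PER_METER = 10  # maximum BEV resolution noted above
BEV_SIZE = 400         # assumed raster size in pixels (covers roughly a 40 m x 40 m area)

def world_to_bev(points_xy: np.ndarray, ego_xy: np.ndarray, ego_yaw: float) -> np.ndarray:
    """Map world-frame (x, y) points into pixel coordinates of an ego-centered BEV raster."""
    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    rotation = np.array([[c, -s], [s, c]])           # rotate the world into the ego frame
    local = (points_xy - ego_xy) @ rotation.T
    pixels = (local * PIXELS_PER_METER).astype(int)  # meters -> pixels at 10 px/m
    return pixels + BEV_SIZE // 2                    # place the ego at the raster center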


Scenario Attributes

The following table summarizes all possible values of each taxonomy attribute. The maximum number of basic scenarios that the proposed taxonomy can describe is 547, calculated without considering the Map and Area ID attributes. A minimal sketch of how these attributes compose a scenario follows the table.

Attribute Value
Map Town01, Town02, Town03, Town05, Town06, Town07, Town10HD, A0, A1, A6, B3, B7, B8
Road topology forward, left turn, right turn, u-turn, left lane change, right lane change
Interaction type interactive, collision, obstacle, non-interactive
Interacting agent type car, truck, bicyclist, motorcyclist, pedestrian
Interacting agent's behavior forward, left turn, right turn, u-turn, left lane change, right lane change, crossing at crosswalk, jaywalking, entering roundabout, going around roundabout, exiting roundabout
Obstacle type traffic cone, barrier, warning, illegally parked vehicle
Traffic violation running red light, ignoring stop sign, driving on sidewalk, jaywalking
Ego's reaction right-deviation, left-deviation
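For illustration, a basic scenario can be thought of as one value per attribute; the dataclass below is an assumed structure for exposition, not the benchmark's actual schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class BasicScenario:
    """One value per taxonomy attribute (Map and Area ID omitted, as in the count above)."""
    road_topology: str                # e.g., "left turn"
    interaction_type: str             # "interactive" | "collision" | "obstacle" | "non-interactive"
    agent_type: Optional[str]         # e.g., "pedestrian"; None for non-interactive scenarios
    agent_behavior: Optional[str]     # e.g., "jaywalking"
    obstacle_type: Optional[str]      # only set for obstacle scenarios
    traffic_violation: Optional[str]  # e.g., "running red light"
    ego_reaction: Optional[str]       # "left-deviation" or "right-deviation"

example = BasicScenario("forward", "collision", "car", "left turn", None, "running red light", None)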

Labeled Area for Data Collection

Maps from the CARLA simulator. The notations 'i', 't1', 't2', 't3', 's', and 'r' indicate 4-way intersection, T-intersection-A, T-intersection-B, T-intersection-C, straight road, and roundabout, respectively.

Additional maps from the real world. To overcome the limited number of roundabouts in CARLA, we incorporate real-world maps from CAROM Air [9] and reconstruct them in the CARLA simulator. Specifically, we select maps A0, A1, A6, B3, B7, and B8 from CAROM Air.

Dataset Splits

Split Interactive Collision Obstacle Non-interactive
Training 925 1044 850 1023
Validation 348 283 258 496
Testing 521 375 322 471

Citation

@article{kung2023riskbench,
  title={RiskBench: A Scenario-based Benchmark for Risk Identification},
  author={Kung, Chi-Hsi and Yang, Chieh-Chi and Pao, Pang-Yuan and Lu, Shu-Wei and Chen, Pin-Lun and Lu, Hsin-Cheng and Chen, Yi-Ting},
  journal={arXiv preprint arXiv:2312.01659},
  year={2023}
}

If you have any questions, please contact Yi-Ting Chen.

[1] Thrun, Sebastian. "Probabilistic robotics." Communications of the ACM 45.3 (2002): 52-57.

[2] Gupta, Agrim, et al. "Social GAN: Socially acceptable trajectories with generative adversarial networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.

[3] Marchetti, Francesco, et al. "MANTRA: Memory augmented networks for multiple trajectory prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[4] Zhou, Zikang, et al. "Query-Centric Trajectory Prediction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.

[5] Chan, Fu-Hsiang, et al. "Anticipating accidents in dashcam videos." Computer Vision–ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part IV 13. Springer International Publishing, 2017.

[6] Zeng, Kuo-Hao, et al. "Agent-centric risk assessment: Accident anticipation and risky region localization." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.

[7] Li, Chengxi, et al. "Learning 3d-aware egocentric spatial-temporal interaction via graph convolutional networks." 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020.

[8] Li, Chengxi, Stanley H. Chan, and Yi-Ting Chen. "Who make drivers stop? Towards driver-centric risk assessment: Risk object identification via causal inference." 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020.

[9] Lu et al. "CAROM Air -- Vehicle Localization and Traffic Scene Reconstruction from Aerial Videos." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.