Play with Simulators

I have been passionate about traffic simulation for a long time, exploring a range of simulators, from the classic VISSIM during my undergraduate studies to CARLA now as a Ph.D. student. This project aims to document and share insights into leveraging these tools for cutting-edge research.

First, here are some examples of simulation scenarios built in Simulink, showcasing vehicle detection and sensor coverage in a dynamic driving environment. The plots illustrate radar and vision sensor coverage areas, road boundaries, and detected objects. Key elements such as vehicle tracks, radar detections (red), and vision detections (blue) are highlighted, providing insight into the perception and tracking capabilities of the system in different scenarios.
Examples of Simulink simulations.
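The Simulink models themselves are not reproduced here, but the coverage test behind those plots is plain geometry: a sensor can only detect a target inside its maximum range and field of view. Below is a minimal Python sketch of that check; every sensor parameter and position is invented purely for illustration.

```python
import math

def in_coverage(sensor_xy, sensor_heading_deg, fov_deg, max_range, target_xy):
    """Return True if the target lies inside the sensor's coverage sector."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx)) - sensor_heading_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(bearing) <= fov_deg / 2.0

# Illustrative values only: a long-range forward radar (20 deg FOV, 160 m)
# and a vision sensor (52 deg FOV, 80 m), both facing along +x.
target = (90.0, 5.0)
print(in_coverage((0, 0), 0, 20, 160, target))  # True: within radar sector
print(in_coverage((0, 0), 0, 52, 80, target))   # False: beyond vision range
```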
I also did a real-world case study with VISSIM, simulating a road in Beijing, China.
The Road in CAD.
The VISSIM simulation of this road.
Let's take a closer look at the road section from HuFang Bridge to CaiShiKou.
Road Section from HuFang Bridge to CaiShiKou.
We can use these VISSIM simulations to study road capacity, congestion, and related measures, which can be a great help to traffic engineers!
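Studies like this can also be scripted rather than clicked through, since PTV Vissim exposes a COM interface on Windows. Here is a minimal sketch, assuming a local Vissim installation; the network path, measurement key, and attribute name are placeholders that depend on how the evaluation is configured in the network.

```python
import win32com.client

# Attach to the PTV Vissim COM server (requires Vissim installed on Windows).
vissim = win32com.client.Dispatch("Vissim.Vissim")
vissim.LoadNet(r"C:\studies\hufang_caishikou.inpx")  # placeholder path

# Run one hour of simulated time.
vissim.Simulation.SetAttValue("SimPeriod", 3600)
vissim.Simulation.RunContinuous()

# Read back an aggregate result, e.g. the average travel time on a
# travel-time measurement defined in the network. The key (1) is a
# placeholder, and the evaluation must be enabled for data to exist.
tt_meas = vissim.Net.VehicleTravelTimeMeasurements.ItemByKey(1)
print(tt_meas.AttValue("TravTm(Current,Last,All)"))
```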
I also trained a PPO (Proximal Policy Optimization) agent in the TORCS environment to control vehicle motion, and the results are quite promising. As shown in the figure below, the agent demonstrates effective control over the vehicle, successfully navigating the track with smooth, stable performance.

PPO agent navigating in TORCS.
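For a sense of what such a setup looks like, here is a minimal training sketch using stable-baselines3. The environment id "Torcs-v0" assumes a Gymnasium-compatible TORCS wrapper (open-source wrappers such as gym_torcs exist but may need a thin adapter), and the hyperparameters are illustrative rather than the ones actually used.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Assumes a Gymnasium-compatible TORCS wrapper registered as "Torcs-v0",
# exposing a low-dimensional state (speed, track angle, ...) and
# continuous steering/throttle actions.
env = gym.make("Torcs-v0")

model = PPO(
    "MlpPolicy",        # MLP policy over the low-dimensional state vector
    env,
    learning_rate=3e-4, # illustrative hyperparameters, not tuned values
    n_steps=2048,
    batch_size=64,
    gamma=0.99,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
model.save("ppo_torcs")

# Roll out the trained policy for one episode.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```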

I built an LLM-driven driving agent in CARLA. The agent receives both state information (e.g., speed, forward vector, and navigation command) and visual input (RGB and bird’s-eye-view images). These inputs are passed to a large language model (LLM) with a structured prompt that requests driving actions in JSON format. The model outputs continuous control signals (throttle, steering, and brake), which are applied to the vehicle in real time. As shown below, the agent demonstrates smooth and stable driving behavior while responding to language-based navigation commands.

LLM-driven agent navigating in CARLA.
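The core control loop of such an agent is small. Below is a state-only sketch against the CARLA Python API, with the camera inputs omitted for brevity; query_llm() is a placeholder for whichever LLM provider is wired in, and the prompt format is illustrative.

```python
import json
import carla

def query_llm(prompt: str) -> str:
    """Placeholder: call your LLM provider and return its raw text reply."""
    raise NotImplementedError("wire up an LLM client here")

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
vehicle = world.get_actors().filter("vehicle.*")[0]  # assumes one ego vehicle

# Illustrative prompt: ask for controls as strict JSON so they can be parsed.
PROMPT = """You are a driving agent. Reply ONLY with JSON:
{{"throttle": 0..1, "steer": -1..1, "brake": 0..1}}.
Speed (m/s): {speed:.1f}
Forward vector: ({fx:.2f}, {fy:.2f})
Navigation command: {command}"""

command = "follow the lane"  # placeholder navigation command
while True:  # one LLM query per control step
    v = vehicle.get_velocity()
    speed = (v.x**2 + v.y**2 + v.z**2) ** 0.5
    fwd = vehicle.get_transform().get_forward_vector()
    action = json.loads(query_llm(
        PROMPT.format(speed=speed, fx=fwd.x, fy=fwd.y, command=command)))
    vehicle.apply_control(carla.VehicleControl(
        throttle=float(action["throttle"]),
        steer=float(action["steer"]),
        brake=float(action["brake"]),
    ))
    world.wait_for_tick()
```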