Abstract
Autonomous vehicles (AVs) still face challenges in achieving intelligence, safety, and reliability in complex motorway scenarios. Recently, deep reinforcement learning (DRL) has demonstrated superior decision-making capabilities in dynamic environments compared with rule-based methods. However, it requires considerable training resources, partly because DRL components (e.g., the state space and reward) are seldom designed to link observations to actions accurately. Its opaque nature may also result in hazardous driving conditions. In this paper, we introduce a hybrid autopilot framework that integrates three modules: (i) DRL is employed to build a smart, learnable, and scalable driving policy across various motorway scenarios; (ii) a kinematic-based co-pilot strategy is devised to improve training efficiency and provide flexible decision-making guidance; and (iii) a rule-based system assesses, in real time, its own actions and those of the DRL policy and determines the final action output to further enhance safety. Extensive simulations are conducted under different complex motorway scenarios. The results indicate that the proposed framework surpasses the baseline DRL policy in terms of training efficiency, intelligence, safety, and reliability.
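The arbitration idea in module (iii) can be pictured with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: the class `RuleBasedSafetyLayer`, the time-headway threshold, and the `select_action` helper are all hypothetical stand-ins for the rule-based checks the abstract describes, since the paper's actual rules and co-pilot kinematics are not given here.

```python
# Hypothetical sketch of a rule-based safety layer arbitrating between a DRL action
# and a conservative fallback. Names and thresholds are illustrative assumptions only.

class RuleBasedSafetyLayer:
    """Kinematic safety check: reject longitudinal actions that would violate
    a minimum time-headway to the lead vehicle (illustrative threshold)."""

    def __init__(self, min_time_headway: float = 1.5):
        self.min_time_headway = min_time_headway

    def is_safe(self, ego_speed: float, gap_to_lead: float,
                proposed_accel: float, horizon: float = 1.0) -> bool:
        # Project ego speed one step ahead under the proposed acceleration.
        projected_speed = max(ego_speed + proposed_accel * horizon, 0.0)
        if projected_speed < 1e-3:
            return True  # effectively stopped; treat as safe
        return gap_to_lead / projected_speed >= self.min_time_headway

    def fallback_accel(self, ego_speed: float, gap_to_lead: float) -> float:
        # Conservative rule-based action: brake gently when the headway is short.
        headway = gap_to_lead / max(ego_speed, 1e-3)
        return -2.0 if headway < self.min_time_headway else 0.0


def select_action(drl_accel: float, ego_speed: float, gap_to_lead: float,
                  safety_layer: RuleBasedSafetyLayer) -> float:
    """Return the DRL action if the rule-based check accepts it,
    otherwise fall back to the rule-based action."""
    if safety_layer.is_safe(ego_speed, gap_to_lead, drl_accel):
        return drl_accel
    return safety_layer.fallback_accel(ego_speed, gap_to_lead)


if __name__ == "__main__":
    layer = RuleBasedSafetyLayer()
    # Ego at 30 m/s, 20 m behind the lead vehicle; DRL proposes mild acceleration.
    # The headway check fails, so the rule-based braking action (-2.0) is selected.
    print(select_action(drl_accel=0.5, ego_speed=30.0, gap_to_lead=20.0, safety_layer=layer))
```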
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Intelligent Transportation Systems |
| Early online date | 27 Jan 2025 |
| DOIs | |
| Publication status | Early online - 27 Jan 2025 |
Keywords
- Autonomous vehicle
- deep reinforcement learning
- kinematic model
- co-pilot strategy
- training efficiency
- hybrid autopilot framework