China Autonomous Driving Simulation Industry (2024)
Autonomous Driving Simulation Industry Report, 2024
Product code: 1482386
Research firm: ResearchInChina
Published: May 2024
Pages: English, 480 pages
License & Price (VAT excluded)
US $ 4,500 ₩ 6,458,000
Unprintable PDF (Single User License)
A license allowing one user to access the PDF report. Printing is not permitted, nor is copying and pasting of text.
US $ 6,800 ₩ 9,759,000
Printable & Editable PDF (Enterprise-wide License)
A license allowing everyone in the same company to use the PDF report. Printing is permitted, and the scope of use of printouts is the same as that of the PDF.


Korean Table of Contents

On November 17, 2023, three ministries including the Ministry of Industry and Information Technology issued the Notice on Piloting Access and On-road Passage of Intelligent Connected Vehicles. To date, many OEMs, including BYD, BMW, IM, Mercedes-Benz, Deepal, Avatr, ARCFOX, AITO, Jiyue and GAC Aion, have obtained highway or urban L3 autonomous driving test licenses. At present, the adoption of high-level intelligent driving functions represented by urban NOA is accelerating, and L3 and above autonomous driving systems must be safe and robust enough to handle the countless edge/long-tail cases found in urban areas.

Commercializing an L3 intelligent driving system requires more than one billion kilometers of test mileage, while actual road tests are costly and time-consuming and offer low use-case coverage. In Xpeng's case, in addition to the road data provided by car owners every day, Xpeng focuses on combining simulation to build extreme scenarios in virtual space for the intelligent driving system to learn and understand. By the end of 2023, Xpeng's simulation mileage had reached 122 million kilometers.

In the "three-pillar" test approach for intelligent driving, simulation tests use a virtual environment to simulate different traffic scenes, road conditions, weather, illumination and abnormalities, in order to evaluate the functions, responses and decision-making capabilities of autonomous driving systems in various circumstances.

In the above figure, the autonomous driving simulation platform should support traffic scene simulation (static scene restoration and dynamic scene simulation), environment-aware sensor simulation (modeling and simulation of sensors such as camera, LiDAR, radar and GPS/IMU), vehicle dynamics simulation, and perception and control simulation, so as to verify simulation tests ranging from perception to control. Depending on the tested object, the autonomous driving simulation platform enables in-the-loop tests such as model in the loop (MIL), software in the loop (SIL), hardware in the loop (HIL), driver in the loop (DIL) and vehicle in the loop (VIL). At present, simulation test companies vary in capabilities.

Trend 1: Autonomous driving simulation tests have entered the precise simulation stage with high fidelity and high restoration accuracy.

"지각-예측-판단-계획-제어"의 전체 연결에서 지각은 교통 흐름, 도로 상황, 날씨, 조도, 이상 등 차량의 외부 환경 정보를 수집하는 모든 유형의 센서에 해당합니다. 주로 카메라 시뮬레이션, LiDAR 시뮬레이션, 레이더 시뮬레이션, 측위 시뮬레이션(GPS, IMU)을 포함합니다.

At present, many companies faithfully simulate real road environments, dynamic traffic scenes and vehicle/pedestrian behavior, and accurately restore detailed physical phenomena and dynamic sensor performance, so as to quickly verify the performance of autonomous driving systems and provide comprehensive test and verification reports.

In PilotD Automotive's case, the full physical sensor model based on PilotD PlenRay physical ray technology can simulate detailed physical phenomena such as multi-path reflection, refraction and interference of electromagnetic waves, as well as dynamic sensor performance such as detection loss rate, target resolution, uncertainty detection and "ghost" physical phenomena, achieving the high fidelity required of the sensor model. To date, its simulation restoration rate is close to 95%.

This report investigates and analyzes China's autonomous driving simulation industry, providing information on simulation technologies, domestic and overseas solution providers, and simulation test trends.

Table of Contents

Chapter 1 Overview of Autonomous Driving Simulation

Chapter 2 Autonomous Driving Simulation Test Scene Libraries

Chapter 3 Simulation Technology

Chapter 4 Domestic Simulation Platform Solution Providers

Chapter 5 Overseas Simulation Platform Solution Providers

Chapter 6 Trends of Autonomous Driving Simulation Testing

English Table of Contents


Autonomous Driving Simulation Research: Three Trends of Simulation Favoring the Implementation of High-level Intelligent Driving

On November 17, 2023, the Ministry of Industry and Information Technology and other three ministries issued the Notice on Piloting Access and On-road Passage of Intelligent Connected Vehicles. Up to now, many OEMs including BYD, BMW, IM, Mercedes-Benz, Deepal, Avatr, ARCFOX, AITO, Jiyue and GAC Aion have obtained highway or urban L3 autonomous driving test licenses. At present, the application of high-level intelligent driving functions represented by urban NOA is being accelerated, and L3 and above autonomous driving systems should be safe and robust enough to deal with countless edge/long tail cases in urban areas.

The commercialization of L3 intelligent driving systems needs more than one billion kilometers of test mileage, and actual road tests are costly and time-consuming, with low use case coverage. However, simulation tests can solve this problem in a short time and at low cost. In Xpeng's case, in addition to the road data provided by car owners every day, Xpeng is working to build extreme scenarios in virtual space combining simulation for the intelligent driving system to learn and understand. By the end of 2023, the simulation mileage of Xpeng had reached 122 million kilometers.

In the "three-pillar" test approach for intelligent driving, in simulation tests, different traffic scenes, road conditions, weather illumination and abnormalities are simulated through the virtual environment to evaluate the functions, response and decision capabilities of the autonomous driving systems in various circumstances.

In the above figure, the autonomous driving simulation platform should support traffic scene simulation (static scene restoration and dynamic scene simulation), environment-aware sensor simulation (modeling and simulation of sensors such as camera, LiDAR, radar and GPS/IMU), vehicle dynamics simulation, etc., so as to verify simulation tests ranging from perception to control. Depending on the tested object, the autonomous driving simulation platform enables in-the-loop tests such as model in the loop (MIL), software in the loop (SIL), hardware in the loop (HIL), driver in the loop (DIL) and vehicle in the loop (VIL). At present, simulation test companies vary in capabilities, as shown in the table below.
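The in-the-loop variants differ mainly in what is swapped into the test loop. As an illustrative sketch only (not any vendor's API, and with toy sensor and planner stand-ins), a single software-in-the-loop iteration might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    # Simplified stand-ins for simulated sensor outputs.
    camera_objects: list   # (label, distance_m) pairs from the camera model
    ego_speed_mps: float   # ego speed from the GPS/IMU model

def software_under_test(frame: SensorFrame) -> float:
    """Toy planner: brake if any object is closer than 20 m, else hold speed."""
    nearest = min((d for _, d in frame.camera_objects), default=float("inf"))
    return -3.0 if nearest < 20.0 else 0.0  # commanded acceleration (m/s^2)

def sil_step(frame: SensorFrame) -> dict:
    """One software-in-the-loop iteration: simulated sensors in, control out."""
    accel_cmd = software_under_test(frame)
    return {"accel_cmd": accel_cmd, "ego_speed": frame.ego_speed_mps}

# A frame with a pedestrian 12 m ahead should trigger braking.
result = sil_step(SensorFrame(camera_objects=[("pedestrian", 12.0)], ego_speed_mps=10.0))
print(result["accel_cmd"])  # -3.0
```

In HIL the `software_under_test` call would instead dispatch the frame to a physical domain controller, and in VIL to a real vehicle on a test bench; the loop structure stays the same.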

Trend 1: Autonomous driving simulation tests have entered the precise simulation stage with high fidelity and high reduction.

In the whole link of "perception-prediction-decision-planning-control", perception corresponds to all kinds of sensors which collect the external environment information of the vehicle, such as traffic flow, road conditions, weather, illumination and abnormalities. It mainly involves camera simulation, LiDAR simulation, radar simulation, and positioning simulation (GPS, IMU).

At present, many companies are working on fine-grained simulation engineering practices, for example, high fidelity simulation of real road environment, dynamic traffic scene and vehicle/pedestrian behavior, and accurate restoration of detailed physical phenomena and dynamic sensor performance, so as to quickly verify the performance of autonomous driving systems and provide comprehensive test and verification reports.

In PilotD Automotive's case, the full physical sensor model based on PilotD PlenRay physical ray technology can simulate detailed physical phenomena such as multi-path reflection, refraction and interference of electromagnetic waves, as well as dynamic sensor performance such as detection loss rate, target resolution, uncertainty detection and "ghost" physical phenomena, so as to obtain the high fidelity required by the sensor model. Up to now, the simulation restoration rate is close to 95%.

NVIDIA DRIVE Sim(TM), powered by NVIDIA Omniverse, is an end-to-end simulation platform offering physically based, high-fidelity multi-sensor simulation. It can generate numerous real-world digital twin scenes. At present, the LiDAR models of HESAI and RoboSense have been integrated into NVIDIA DRIVE Sim to simulate the performance of LiDAR in aspects such as beam control, user-defined scanning modes and resolution, and to generate synthetic data sets. Users such as OEMs or autonomous driving solution providers can directly call the LiDAR models for R&D or testing through DRIVE Sim.

Similar to NVIDIA, the 3D scene and high-precision physical sensor simulation of dSPACE AURELION generates highly realistic raw data for radar, LiDAR and cameras in real time. AURELION models the impact of materials on radar echoes and applies multi-path ray tracing technology to ensure real-world-like measurement effects (e.g. ghost spot effects). For radar, dSPACE offers DARTS, which performs over-the-air simulation of radar echoes in real time.

In the vehicle's perception of environmental information, it is easy to overlook the simulation of interactions between vehicles and pedestrians. For example, the Qianxing simulation platform of RisenLighten has added rich and realistic pedestrian models, supporting user-defined micro-trajectories for pedestrians and batch generation of pedestrians. When editing scenes, users can simulate the crowded or sparse pedestrian distributions seen in reality as needed, and can also build complex long-tail scenes, such as pedestrians walking randomly, pedestrians suddenly appearing ahead, yielding between pedestrians and vehicles, and right-of-way disputes, to test the comprehensive performance of the autonomous driving system.

The platform also provides different pedestrian behavior style models, covering scenarios such as human-vehicle interaction, crossing the road and crossing the intersection obliquely, to simulate an intelligent pedestrian traffic flow. In addition, to diversify driver behaviors, the platform models three driving styles (conservative, conventional and radical drivers) and refines each parameter through a probability distribution, so that the driving behaviors of vehicles in the environment are diversified and randomized.
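The probability-distribution approach to driver styles described above can be sketched as follows. The three style names follow the text, but the specific parameter (desired time headway) and the Gaussian distributions are illustrative assumptions, not RisenLighten's actual model:

```python
import random

# Assumed per-style distributions: mean/std of desired time headway (seconds).
DRIVER_STYLES = {
    "conservative": {"headway_mean": 2.5, "headway_std": 0.3},
    "conventional": {"headway_mean": 1.8, "headway_std": 0.3},
    "radical":      {"headway_mean": 1.0, "headway_std": 0.2},
}

def sample_driver(style: str, rng: random.Random) -> dict:
    """Draw one randomized driver from a style's parameter distribution."""
    p = DRIVER_STYLES[style]
    headway = max(0.5, rng.gauss(p["headway_mean"], p["headway_std"]))
    return {"style": style, "desired_headway_s": round(headway, 2)}

# Populate background traffic with a randomized mix of styles.
rng = random.Random(42)
drivers = [sample_driver(rng.choice(list(DRIVER_STYLES)), rng) for _ in range(5)]
for d in drivers:
    print(d)
```

Sampling each parameter rather than using a fixed value per style is what makes two "radical" background vehicles behave measurably differently in the same scene.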

In addition, multi-sensor concurrent simulation tests greatly improve the R&D and testing efficiency of perception algorithms. In terms of engineering practice, in May 2023 51Sim and VCARSYSTEM cooperated to successfully fulfill the closed loop of domain controllers from SIL to HIL in China's autonomous driving tests, and realized the thorough localization of domain controller in-the-loop simulation tests. In this domestic domain controller in-the-loop solution based on Journey 5, the intelligent driving data reinjection system independently developed by VCARSYSTEM supports simultaneous injection of sensor data from multiple high-definition cameras, LiDAR, radar, ultrasonic radar, GNSS&IMU and so on, and easily reproduces specific scenes and environments via 51Sim-One, an autonomous driving simulation test platform, thereby greatly improving the R&D and testing efficiency of perception algorithms.

Trend 2: Automatic generation and scene generalization are essential.

At present, how to build a corner case scene is a big challenge for the industry. The significance of simulation tests lies in reproducing scenes such as high-risk working conditions, extreme weather, complex traffic environments and edge events, which are difficult to cover in actual road tests. Especially for large-scale tests of safety-critical scenes, AI-based automated simulation technology is needed to cover more scenarios.

Coverage-based testing is a more detailed and comprehensive autonomous driving safety test method. It focuses on the quality of test coverage, that is, whether the system has experienced the various possible situations and scenarios. By defining a range of test cases and test scenes, this method can ensure that autonomous driving systems are tested across various road conditions, traffic conditions and abnormalities. Coverage can involve changes in road conditions, traffic behaviors, special weather conditions, emergencies and more.
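One simple way to quantify this notion of coverage is as the fraction of a defined scenario parameter grid that executed test cases actually touch. The dimensions below (road type, weather, event) are illustrative assumptions; a real scene library would be far larger and finer-grained:

```python
from itertools import product

# Illustrative test dimensions for a toy scenario grid.
ROAD = ["highway", "urban", "intersection"]
WEATHER = ["clear", "rain", "fog"]
EVENT = ["none", "cut-in", "jaywalker"]

def coverage(executed: set) -> float:
    """Fraction of the (road, weather, event) grid covered by executed cases."""
    grid = set(product(ROAD, WEATHER, EVENT))
    return len(grid & executed) / len(grid)

executed_cases = {
    ("highway", "clear", "none"),
    ("highway", "rain", "cut-in"),
    ("urban", "clear", "jaywalker"),
}
print(f"{coverage(executed_cases):.1%}")  # 3 of 27 cells -> 11.1%
```

The uncovered cells of the grid are exactly the candidates for automatic scene generation and generalization discussed in this trend.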

Moreover, AI technology and large language models are gradually being integrated into simulation tests, playing an increasingly important role in automatic scene generation and automatic annotation, accelerating the construction of scene libraries, reducing the cost of simulation tests, lowering the threshold of simulation test technology and shortening the vehicle development cycle.

In the case of natural language interaction, 51WORLD's AIGC-Scenario Copilot supports fully natural language interaction. Without tedious manual editing or code, it needs only a scene description, for example, "add an action, first change the lane to the right, and then slow down to 0". By using an AI large language model, an autonomous driving simulation test scene conforming to the OpenSCENARIO standard can be generated, and unknown dangerous scenes can also be generated to expand the boundary of the simulation test.
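As a hedged sketch of the last step of such a pipeline, the snippet below turns a simplified intermediate representation (the kind an LLM might produce from the example description) into a minimal OpenSCENARIO-style XML fragment. The element names are abbreviated stand-ins for the real ASAM OpenSCENARIO schema, and this is not 51WORLD's actual output format:

```python
import xml.etree.ElementTree as ET

def build_scenario(actions: list) -> str:
    """Assemble a minimal OpenSCENARIO-style story for one ego maneuver."""
    root = ET.Element("OpenSCENARIO")
    story = ET.SubElement(root, "Storyboard")
    for act in actions:
        if act["type"] == "lane_change":
            node = ET.SubElement(story, "LaneChangeAction")
            node.set("targetLaneOffset", str(act["offset"]))
        elif act["type"] == "speed":
            node = ET.SubElement(story, "SpeedAction")
            node.set("targetSpeed", str(act["target_mps"]))
    return ET.tostring(root, encoding="unicode")

# Intermediate representation for "change lane to the right, then slow down to 0".
xml = build_scenario([
    {"type": "lane_change", "offset": -1},   # one lane to the right
    {"type": "speed", "target_mps": 0.0},    # then slow down to 0
])
print(xml)
```

Keeping the LLM's output in a small structured form and serializing it separately, as sketched here, makes the generated scene easy to validate against the standard's schema before it enters the test loop.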

In addition, Huawei Pangu Models feature the following in generation of autonomous driving scenes:

Autonomous driving scene understanding replaces manual annotation and classification, and 10,000 video clips are processed in minutes.

Autonomous driving scene generation, vehicle type change, lane change, scene combination rendering and other applications are realized through NeRF technology.

Autonomous driving pre-annotation replaces manual annotation, and supports 2D, 3D and 4D automatic annotation, with an accuracy rate of over 90%.

Autonomous driving multimodal retrieval supports multi-dimensional retrieval capabilities such as searching for pictures by text and searching for pictures by pictures, and realizes minute-level retrieval of millions of pictures.

In addition to simulation platforms and scene library generalization capabilities mentioned above, simulation evaluation systems for autonomous driving testing are also essential in autonomous driving technology commercialization. Simulation evaluation refers to the evaluation and optimization of all aspects of an autonomous driving system by means of simulation testing to ensure its safe, reliable and efficient operation on actual roads. Simulation evaluation mainly includes autonomous driving system evaluation and simulation test system evaluation, of which simulation test system evaluation includes the evaluation of scenario coverage, scene realness, scene effectiveness and simulation efficiency.

Trend 3: Capitalization and sharing of scene library data help drive down the costs and improve the efficiency of high-level autonomous driving training and testing.

In simulation tests, in addition to automatic scene generation based on road data (dSPACE Autera, NI data collection solution, VI-Grade AutoHawk, etc.), an all-scenario synthetic data simulation material library can help developers keep training, testing and verifying autonomous driving systems in massive driving scenes, especially safety-critical scenes, to improve algorithm iteration efficiency and closed-loop test efficiency and depth.

For example, the all-scenario synthetic data simulation library of OASIS DATA, Oasis' autonomous driving data platform, embraces common traffic participants, obstacles, road facilities and other traffic environment elements. Combined with physical sensor simulation models, it further generates multimodal, high-fidelity, and accurately annotated simulation materials on a large scale. In terms of synthetic data generation efficiency, it can generate up to 100,000 frames per day, saving more than 90% of data collection and annotation costs.

In addition, 51Sim DataOne has powerful data-driven capabilities, including Dataverse and Synthverse. Wherein, Dataverse, a data platform, is capable of data cleaning, data calculation, data management, data visualization, data statistics, etc. and enables a data-driven simulation closed loop; Synthverse, a synthetic data platform, can automatically generate 3D scenes based on HD maps, restore specific scenes with high fidelity through 3D reconstruction technology, and generalize dynamic and static elements of 3D reconstructed scenes via scene editing generalization tools.

In November 2023, 51Sim, together with Volcengine, TZTEK and MXNAVI, the ecosystem partners of Horizon Robotics, launched the industry's first full-chain data-driven closed-loop ecosystem solution to accelerate the mass production and application of autonomous vehicles. This solution provides chips, domain controllers, data acquisition, data processing, algorithm training, refeed testing, and simulation software and hardware integrated testing, aiming to solve common problems in the intelligent driving industry such as low data utilization and difficult data-driven closed loop construction, and promote the mass production and application of high-level autonomous driving. 51Sim provides a full set of virtual simulation test capabilities for the data-driven closed-loop ecosystem.

Amid surging demand for training and test data for autonomous driving systems, it is difficult to collect diverse, high-quality long-tail scenes on a large scale and to filter out the required scenes. In view of this, some simulation solution providers such as SYNKROTRON, IAE and 51Sim have begun to work on the data assetization of simulation scene libraries (including standard regulatory scenes, accident scenes, natural driving scenes, dangerous/extreme scenes, and reconstruction scenes). They have also responded positively to the "Three-Year Action Plan for 'Data Elements x'" (2024-2026) and assisted the intelligent connection industry with the assetization of data, which has been traded on the Shenzhen Data Exchange, Shanghai Data Exchange, Suzhou Big Data Exchange, and Northern Big Data Trading Center.

In the aspect of simulation data sharing, it is necessary to handle vehicle-cloud cooperation. In the conventional stand-alone R&D environment, data silos within a team have been a serious problem, and achieving an efficient working mode among XIL test engineers, tool chain R&D engineers, algorithm training engineers and algorithm test engineers has been a challenge. For example, in April 2024, 51SimOne officially released the cooperative version of its "cloud+terminal" integrated product. Through centralized storage and integrated design, it seamlessly connects the client and the cloud and supports multi-person cooperation, so that data is fully shared within a team. The client supports local integration and debugging of R&D tasks, while the cloud supports phased large-scale automated testing in algorithm development, so that one platform can meet various needs and greatly accelerate the iteration and optimization of autonomous driving algorithms.

Table of Contents

1 Overview of Autonomous Driving Simulation

2 Autonomous Driving Simulation Test Scene Libraries

3 Simulation Technology

4 Domestic Simulation Platform Solution Providers

5 Overseas Simulation Platform Solution Providers

6 Trends of Autonomous Driving Simulation Testing

Global Information, Inc. 02-2025-2992 kr-info@giikorea.co.kr
ⓒ Copyright Global Information, Inc. All rights reserved.