Automotive AI Box Research Report, 2026
Product code: 1930697
Research firm: ResearchInChina
Publication date: January 2026
Pages: 178 (English)

License & Pricing (VAT excluded)
US $4,200 / ₩6,076,000
Unprintable PDF (Single User License)
A license for a single user of the PDF report. Printing is not allowed, and text copy & paste is disabled.
US $6,300 / ₩9,115,000
Printable & Editable PDF (Enterprise-wide License)
A license for all members of the same company. Printing is allowed; printed copies may be used to the same extent as the PDF.



Automotive AI Box Research: A New Path for Edge AI Acceleration

This report studies the current application status of automotive AI Box from the aspects of scenario demand, product configuration, and industry chain collaboration, and explores the future trends of automotive AI Box.

AI Box is the "accelerator" for the implementation of edge AI

The "edge-cloud collaboration" solution has become the consensus for implementing automotive AI: edge AI handles high-frequency, real-time, privacy-sensitive tasks (such as local data processing, real-time perception, and rapid response), while cloud AI is responsible for complex reasoning, model optimization, and large-scale data storage and analysis. This clear division of labor between edge and cloud AI reduces deployment difficulty and improves AI operating efficiency.
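The division of labor described above can be sketched as a simple dispatch policy. The task fields, threshold, and function names below are illustrative assumptions for this sketch, not part of any real AI Box API:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_budget_ms: float  # how quickly a result is needed
    privacy_sensitive: bool   # e.g. raw cabin audio/video must stay local

# Assumed round-trip overhead of offloading to the cloud (illustrative).
CLOUD_ROUND_TRIP_MS = 100.0

def route(task: Task) -> str:
    """Toy edge/cloud router: real-time or privacy-sensitive tasks run on
    the edge; heavy reasoning and analytics are offloaded to the cloud."""
    if task.privacy_sensitive or task.latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"
    return "cloud"

print(route(Task("driver-monitoring", 30.0, True)))    # edge
print(route(Task("route-replanning", 5000.0, False)))  # cloud
```

In practice the routing decision also weighs network availability and per-task cost, but the two criteria above (latency budget and data locality) are the ones the edge-cloud consensus rests on.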

Compared with cloud AI, edge AI has natural advantages in real-time performance and privacy protection. However, as the iteration of AI functions accelerates, edge AI faces characteristic new problems:

The computing power of older vehicle models cannot support new AI functions: with the addition of complex functions such as AI Agents, the fixed computing power of the original vehicle's integrated chip is often unable to keep up with continuously growing algorithm demand.

The performance of existing models cannot cope with the continuous influx of new scenarios: the complexity and number of AI application scenarios keep increasing. The original vehicle's edge AI model has limited performance after pruning and quantization, and cannot produce accurate reasoning and predictions for newly added complex scenarios.

The automotive AI Box addresses both problems. On the one hand, it uses a high-performance chip to raise the ceiling of the original vehicle's computing power, providing sufficient headroom for new algorithms and new functions. On the other hand, it ships with a preset basic AI algorithm framework, which preserves the real-time nature of edge reasoning while supporting the delivery of cloud-optimized lightweight model update packages. This enables the continuous evolution of edge AI capabilities, and its own large computing power improves AI reasoning and decision-making in complex scenarios.

Taking supplemental computing power as an example: current edge AI models generally have 1-8 billion parameters, and the computing power requirements of foundation models with varying parameter counts show clear gradients.

As an edge computing product, the automotive AI Box's primary design purpose is to provide computing power. AI Boxes currently on the market offer 30-200 TOPS, which is enough to meet the computing power required by models with 1-8B parameters.
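As a rough sanity check on these figures, the compute and memory-bandwidth cost of decoding with an edge language model can be estimated from its parameter count. The 2-FLOPs-per-parameter-per-token rule of thumb below is a back-of-envelope assumption; real requirements also depend on hardware utilization, KV cache, and batch size:

```python
def decode_requirements(params_billion: float, tokens_per_sec: float,
                        bytes_per_param: float = 1.0):
    """Rule of thumb: each generated token costs ~2 FLOPs per parameter,
    and every parameter must be streamed from memory once per token."""
    flops_per_token = 2.0 * params_billion * 1e9
    tflops_needed = flops_per_token * tokens_per_sec / 1e12
    bandwidth_gbs = params_billion * 1e9 * bytes_per_param * tokens_per_sec / 1e9
    return tflops_needed, bandwidth_gbs

# A 7B-parameter model quantized to INT8, decoding 20 tokens/s:
tflops, gbs = decode_requirements(7, 20, bytes_per_param=1)
print(f"~{tflops:.2f} TFLOPs of arithmetic, ~{gbs:.0f} GB/s of bandwidth")
```

On these assumptions, a 7B INT8 model at 20 tokens/s needs on the order of 140 GB/s of memory bandwidth, which is why decode throughput on edge hardware is often bandwidth-bound rather than TOPS-bound.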

Among them, mainstream AI Boxes are built on NVIDIA modules (such as Jetson AGX Orin, Jetson Orin NX, and Jetson Orin Nano), with computing power of 200-275 TOPS, and mainly handle tasks such as agent scenario services and multi-modal data processing. For example, the AI Box launched by ThunderSoft, Geely, and NVIDIA is an OEM AI Box with 200 TOPS of computing power and 205 GB/s of bandwidth, enough to meet the computing power required by agent matrix applications in scenarios such as welcome interaction, active recommendation, enhanced sentry, HPA, and GUI interaction.

In addition, ThunderSoft's AI Box has not only Aqua Drive OS and NVIDIA DriveOS built in, but also an AI Agent (such as Sentinel Agent). It can quickly apply three major capabilities (OS-layer computing power allocation, model scheduling, and scenario adaptation) to agent scenarios, achieving millisecond-level response to multi-modal data.

The application of AI Box starts from "cockpits of mid-to-low-end vehicle models" + "AM"

As of the end of January 2026, AI Box applications were as follows:

From the installation perspective, it is mainly used in the cockpit (it also appears in Internet of Vehicles applications, but with fewer cases);

From the OEM/AM perspective, it is mainly seen in the cockpit aftermarket (AM) (it is also available in the OEM market, but with fewer cases), and in IVI systems of older or mid-to-low-end vehicle models.

AM AI Boxes have reached a certain scale on the market, and are mainly used to solve problems such as IVI lag, outdated function versions, and insufficient AI functions in mid-to-low-end vehicle models. Such a product connects through a USB cable to provide AI functions or supplement computing power, and enables IVI-phone interconnection through connection methods such as HUAWEI HiCar and CarPlay.

OEM AI Boxes also target the cockpit AI service issues of mid-to-low-end vehicle models. They superimpose the technical route of a high-performance AI Box onto a medium-computing-power cockpit platform to achieve rapid mass production of foundation models. Typical representatives include the AI Boxes of ADAYO and BICV.

For example, ADAYO's AI Box can support edge foundation models with 7 billion parameters. By providing standardized high-speed interfaces and supporting mainstream communication methods such as Gigabit Ethernet, it adapts to the current mainstream EEA and introduces foundation models without replacing the existing cockpit platform. While controlling vehicle cost and power consumption, it also reserves space for subsequent EEA upgrades.

1. Cockpit AM cases

As noted above, AM AI Boxes already have a certain scale on the market, mainly addressing IVI lag, outdated function versions, and insufficient AI functions in mid-to-low-end vehicle models, with IVI-phone interconnection via methods such as HUAWEI HiCar and CarPlay. Typical cases include:

Banma Zhixing AI Box

Banma Zhixing launched the Banma AI Box in June 2025. This product is deeply integrated with HUAWEI HiCar and supports IVI-phone interconnection. It also supports the iteration of Banma's latest system and can run the Yan AI system. The product is initially adapted to the Roewe RX5 (2016-2020), and will gradually be adapted to the older IVI systems of models such as the Roewe RX5, Roewe ERX5, Roewe eRX5, Roewe i6, and Roewe ei6.

Dongfeng Honda AI Box

Dongfeng Honda launched an AI-powered automotive cloud box, equipped with an 8-core automotive-grade chip that supports concealed installation and can directly run AI foundation models, supporting functions such as AI voice, smart owner's manuals, short-video entertainment, smart search, and image and text creation.

2. Cockpit OEM cases

The NIO ET9 is equipped with the N-Box, a scalable heterogeneous computing unit that is also a type of AI Box. It uses the MediaTek MT8628 and can be connected to the central computing platform.

Core configuration of AI Box: heterogeneous computing + AI framework

In summary, the automotive AI Box should meet core requirements such as automotive-grade reliability, flexible computing power supply, and an open AI ecosystem.

In terms of product configuration: it is necessary to build a complete technology stack of "heterogeneous computing platform + efficient AI tool chain + real-time middleware" to support complex edge AI tasks.

In terms of the industry chain structure: upstream and downstream vendors should be committed to promoting standardized physical interfaces, generalized data exchange protocols, normalized functional safety certification, and a shared software-framework ecosystem.

For example, AI Box features "heterogeneous computing + high computing power".

The computing power of mainstream automotive AI Boxes is 30-200 TOPS. Flagship vehicle models use heterogeneous computing chip platforms; for example, the cockpit domain can pair Arm Cortex-A cores (such as the A78AE) with a high-performance GPU (such as Qualcomm Adreno or a high-performance Arm Mali) to support multi-screen 4K rendering and AR-HUD.

In addition, for the entry-level market, Chinese chips such as the Rockchip RK3588M balance cockpit AI interaction and basic driving-parking integration by integrating a 6 TOPS NPU.

In terms of the software ecosystem, AI Box is deeply compatible with mainstream development frameworks such as PyTorch and TensorFlow, and uses ONNX as the core exchange format. Some chip vendors also provide mature low-level optimization tool chains:

NVIDIA: with the TensorRT tool chain, operator fusion and INT8/FP8 quantization can improve model inference performance by several times to dozens of times.

Horizon Robotics: relying on its OpenExplorer platform, it provides comprehensive quantization-aware training (QAT) tools to ensure that model size can be greatly compressed while keeping accuracy loss within an acceptable range.

This ecosystem compatibility greatly lowers the threshold for algorithm migration. After developers complete model training in the cloud, they can efficiently deploy the model to automotive hardware through the compilation and optimization of the vendor's tool chain, significantly shortening the cycle from development to production.
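The INT8 quantization these tool chains perform can be illustrated with a minimal symmetric per-tensor scheme in plain Python. This is a sketch of the arithmetic only; production tool chains such as TensorRT and OpenExplorer use far more sophisticated calibration and per-channel scales:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: q = round(w / scale),
    with scale chosen so the largest-magnitude weight maps to +/-127."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the INT8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
print(q)                          # INT8 codes, 4x smaller than FP32
print(dequantize_int8(q, scale))  # approximate reconstruction
```

Storing one byte per weight instead of four is what makes 1-8B-parameter models fit the memory and bandwidth budgets of edge hardware; the reconstruction error introduced by the rounding step is what QAT tools are designed to keep in check.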

For instance, the "MT200 Series" launched by MeiG Smart Technology can be connected to automotive terminals for multi-modal edge processing; it can also be deployed at the roadside for intelligent traffic monitoring and CVIS. As of mid-January 2026, the product had won OEM designations.

The software configuration of the MT200 series:

The middleware layer provides a model inference engine based on NPU hardware acceleration, supports the ONNX format (inference speed improved by >=30% after optimization), and supports automated installation and version control of applications;

Its system API encapsulates underlying capabilities such as OpenCV, OpenGL, and audio/video encoding and decoding. The standardized API provides unified device management, application deployment, and status query interfaces to simplify upper-layer application development;

The supporting tools integrate visual tool chains, cases and components to support rapid application construction.

Table of Contents

Definition

1 Status Quo and Trends of Automotive AI Box

2 Solutions of OEM AI Box Suppliers

3 AM AI Box

4 AI Box Parts Suppliers

Global Information, Inc. (Korea) 02-2025-2992 kr-info@giikorea.co.kr
ⓒ Copyright Global Information, Inc. All rights reserved.