At present, the development of fully autonomous vehicles is advancing more slowly than initially planned, and expectations for autonomous driving have been scaled down accordingly. Nevertheless, the digitalization of the driver's cabin is creating growing storage demand in the automotive industry, as cars increasingly become data centers on wheels. The automobile has thus become a key driver for storage technologies, prompting a profound change in the storage industry, and storage designs will increasingly be shaped by automotive requirements.
At the same time, established technologies like NOR flash will continue to play a role in supporting fast boot processes. Drivers, of course, expect the car to start immediately when they turn the key in the ignition.
Power consumption is another challenge, because deploying data centers on wheels requires high computing power. This applies not only to the processors but also to the memory. Power consumption becomes even more critical in all-electric vehicles, where it also poses a thermal management challenge for vehicle engineers.
The degree of vehicle autonomy also influences how much and what kind of data storage ends up in a vehicle. Current development is driven by infotainment and ADAS, which fall roughly into autonomy Levels 2 and 3.
There is currently considerable room for growth in the advanced driver assistance systems (ADAS) segment: adaptive cruise control, lane keeping, automatic braking, driver monitoring systems and, more generally, the digitalization of the cockpit.
When OEMs such as BMW, Audi and VW and their suppliers tackle the more advanced Level 4 and 5 implementations, even more powerful and larger data systems will have to be designed in. Level 4 and 5 data volumes are currently expected to reach the exabyte range.
Even in Level 2 and 3 applications, there are so many different subsystems that the information they generate has to be managed as part of a larger storage cluster.
Even without self-driving features, terabytes of data could potentially be distributed throughout the car. In addition, more and larger displays are being installed in vehicles, and advanced features are increasingly becoming standard in lower-end vehicles. This development automatically leads to requirements for more and faster storage, which could soon reach a terabyte per vehicle.
In the long term, of course, autonomous vehicles will become established, especially in commercial sectors where the transition pays off due to high personnel costs, such as taxi services or long-haul transport. Until that point is reached, there will be an enormous increase in the amount of vehicle data collected and in the need to store it. Furthermore, a centralization of vehicle data processing can also be expected.
The amount of data that needs to be recorded is also increasing due to concepts such as Transportation as a Service (TaaS). TaaS refers to buying miles and trips without the hassles of vehicle ownership: buying and financing vehicles, maintenance, gas, insurance, and sometimes even finding and paying for vehicle storage. Using TaaS means not having to deal with the burdens of current vehicle ownership while still having access to the transportation you need.
While some autonomous vehicles require real-time communication, there are also cases where you accumulate terabytes of data over several days. One example of this is fleet management. High-capacity removable storage is becoming increasingly important in these areas.
There is also an increasing need for a system architecture that combines computing power and memory for infotainment systems with ever higher-resolution road maps. As a result, flash memory capacity is growing continuously, as is the need for fast storage products such as eMMC and UFS.
Another option is NVMe, a host interface protocol for connecting SSDs via PCI Express without vendor-specific drivers. The interface is optimized for parallel access and is designed to reduce latency and overhead and increase speed.
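On a Linux development system, NVMe controllers registered by the kernel can be discovered through sysfs without any vendor tooling, which illustrates the driverless, standardized nature of the interface. The sketch below assumes a Linux host exposing the standard `/sys/class/nvme` hierarchy; the exact attribute set varies by kernel version.

```python
# Minimal sketch: enumerating NVMe controllers via Linux sysfs.
# Assumes a Linux host; on other systems the function simply returns [].
from pathlib import Path

def list_nvme_controllers(sysfs_root="/sys/class/nvme"):
    """Return (name, model, firmware) tuples for each NVMe controller found."""
    controllers = []
    root = Path(sysfs_root)
    if not root.exists():
        return controllers  # no NVMe class exposed (or not Linux)
    for ctrl in sorted(root.iterdir()):
        model_file = ctrl / "model"
        fw_file = ctrl / "firmware_rev"
        model = model_file.read_text().strip() if model_file.exists() else "?"
        firmware = fw_file.read_text().strip() if fw_file.exists() else "?"
        controllers.append((ctrl.name, model, firmware))
    return controllers

if __name__ == "__main__":
    for name, model, fw in list_nvme_controllers():
        print(f"{name}: {model} (firmware {fw})")
```

Because NVMe devices appear as ordinary block devices (`/dev/nvme0n1`, …), higher layers of an automotive software stack can treat them like any other fast storage target.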
And beyond infotainment systems, there is much to anticipate as vehicles evolve and connect with external service providers to integrate even more functionality into the car.
Like any robot that can navigate itself, the car of the future must be equipped with a whole range of sensors, algorithms and the necessary computing power. There are, for example, image and vision sensors that require computing power to process their data. These sensors provide information for navigation and other autonomous vehicle functions. Currently, most of the computing power in vehicles is geared toward combining external systems, such as GPS satellites for navigation, with the internal navigation system fed by onboard sensors. A fusion algorithm processes this data very quickly and responds very accurately. However, vehicles can be cut off from external guidance systems by severe weather and other causes. In that case, the cars must continue to move on their own using their internal systems. These systems therefore need to run independently, with their own computing power and memory.
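The fallback behavior described above can be sketched as a simple complementary filter: the vehicle predicts its position from internal sensors (dead reckoning) and corrects that prediction with an external fix when one is available. This is an illustrative one-dimensional toy, not any vendor's actual fusion algorithm; the function name and the blending weight are assumptions for the example.

```python
# Illustrative 1-D complementary filter: blend an external GPS fix with
# internal dead reckoning, and fall back to dead reckoning alone when the
# external guidance is lost. Toy example, not a production fusion algorithm.
def fuse_position(prev_pos, velocity, dt, gps_fix=None, gps_weight=0.2):
    """Predict position from internal sensors; correct with GPS when available."""
    predicted = prev_pos + velocity * dt          # internal odometry step
    if gps_fix is None:                           # external guidance lost
        return predicted                          # keep going on internal state
    # Weighted blend of internal prediction and external measurement.
    return (1 - gps_weight) * predicted + gps_weight * gps_fix

# Usage: one step with a GPS fix, then one step after GPS dropout.
pos = 0.0
pos = fuse_position(pos, velocity=10.0, dt=0.1, gps_fix=1.05)  # corrected step
pos = fuse_position(pos, velocity=10.0, dt=0.1, gps_fix=None)  # dead reckoning
```

The point of the sketch is the branch: the internal prediction path must work on its own, which is exactly why the onboard system needs its own dedicated compute and memory.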
While there is some need to reduce the number of separate data silos in the vehicle, some segmentation is still required. It would be risky to host mission-critical ADAS functions on the same data storage medium as onboard entertainment for the kids. Instead, specific subsystems access data from a shared pool within a cluster, balancing cost and efficiency against safety and reliability.
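One simple way to picture this segmentation is a routing table that maps each subsystem to a storage domain, so safety-critical data never lands on the shared entertainment pool. The domain names, mount points and subsystem labels below are illustrative assumptions, not taken from any real automotive platform.

```python
# Hedged sketch: routing subsystem data to segregated storage domains so that
# mission-critical ADAS data never shares a medium with infotainment.
# All names and paths are illustrative, not from a real platform.
STORAGE_DOMAINS = {
    "adas":         "/mnt/safety_ufs",   # dedicated high-reliability partition
    "infotainment": "/mnt/media_emmc",   # shared pool for non-critical data
    "telemetry":    "/mnt/media_emmc",
}

DEFAULT_DOMAIN = "/mnt/media_emmc"       # unknown subsystems go to the shared pool

def storage_path_for(subsystem):
    """Map a subsystem to its storage domain, defaulting to the shared pool."""
    return STORAGE_DOMAINS.get(subsystem, DEFAULT_DOMAIN)
```

The design choice here mirrors the trade-off in the text: a shared pool keeps cost and part count down, while an explicit mapping guarantees that critical writes are physically isolated.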
This segmentation of the various automotive systems means that their storage needs differ somewhat, often relying on architectures already in use in other markets, such as mobile or IoT.
There is a transition from completely isolated systems with their memory in the vehicle itself, such as flash memory for an infotainment system, to a more unified approach where all data is stored on a large UFS memory, for example.
Protocol analyzers, which can stream and decode the data traffic, can be used to design, test and debug memory designs.
The SD / SDIO / eMMC Protocol Analyzer PGY-SSM is a comprehensive protocol analyzer with several functions for recording and debugging the communication between the host and the memory under test. The PGY-SSM supports SD, SDIO and eMMC at clock rates of up to 200 MHz in DDR mode. It is the first eMMC protocol analyzer in the industry to support the eMMC specification versions 4.41, 4.51, 5.0 and 5.1.
Check in our IC selection assistant which programmer is the right one for your memory ICs.