In-vehicle imaging is an integral part of advanced driver assistance systems (ADAS), and the number of in-vehicle imaging systems is growing rapidly, driven by safety requirements and other needs. At the same time, automotive imaging design faces many challenges: in the vehicle environment, image sensors must meet far more stringent operating conditions and parameter requirements.
Tougher design challenges
In automotive imaging, beyond functional safety, the first challenge for autonomous-driving cameras is wide dynamic range. Real-world driving scenes commonly span 120-140dB, and sometimes more than 140dB, with strong contrast between bright and dark regions. For such high-dynamic-range scenes, it is important to capture every possible detail in order to provide ADAS algorithms with clear scene data.
(Dynamic range comparison, ON Semiconductor)
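As a quick sanity check on the dB figures above, dynamic range is conventionally expressed as 20·log10 of the ratio between the brightest and darkest resolvable luminance. A minimal sketch (the luminance ratios are illustrative assumptions, not from the article):

```python
import math

def dynamic_range_db(max_luminance: float, min_luminance: float) -> float:
    """Dynamic range in decibels: 20 * log10 of the luminance ratio."""
    return 20 * math.log10(max_luminance / min_luminance)

# A scene spanning a 1,000,000:1 luminance ratio corresponds to 120 dB;
# a 10,000,000:1 ratio corresponds to 140 dB.
print(round(dynamic_range_db(1_000_000, 1)))   # 120
print(round(dynamic_range_db(10_000_000, 1)))  # 140
```

So the 120-140dB scenes described above correspond to brightness ratios of six to seven orders of magnitude, far beyond what a single linear exposure can capture.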
The ultra-wide operating temperature range is also more demanding than for ordinary image sensors: the sensor must tolerate extreme ambient conditions, covering temperatures from -40°C to 105°C. Flicker and artifacts from traffic lights and LED street signs pose another major problem. LEDs are pulsed light sources whose frequency and duty cycle vary; automotive LED lighting generally runs above 90Hz with a duty cycle above 5%. For an image sensor, the lower the pulse frequency, the smaller the duty cycle, and the shorter the exposure time, the more severe the flicker problem: the sensor may fail to recognize, or may misjudge, scene information, seriously misleading the ADAS algorithm. Flicker problems often coexist with scenes that demand high dynamic range, posing a severe design challenge for image sensors.
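The geometry of the flicker problem can be sketched as follows: a short HDR exposure can fall entirely inside the LED's off-interval and miss the pulse, while an exposure longer than the off-interval is guaranteed to overlap the on-time. The function and numeric values are illustrative assumptions:

```python
def can_miss_led_pulse(freq_hz: float, duty_cycle: float, exposure_s: float) -> bool:
    """True if an exposure can land entirely in the LED's off-time, i.e.
    if the exposure is shorter than the off-interval of one pulse period."""
    period = 1.0 / freq_hz
    off_time = (1.0 - duty_cycle) * period
    return exposure_s < off_time

# A 90 Hz LED at 5% duty cycle is off for ~10.6 ms of each ~11.1 ms period.
print(can_miss_led_pulse(90, 0.05, 0.001))  # True: a 1 ms short exposure can miss it
print(can_miss_led_pulse(90, 0.05, 0.011))  # False: an 11 ms exposure always overlaps
```

This is exactly the tension noted above: HDR shortens exposures to avoid saturating bright regions, and shorter exposures are more likely to fall inside the off-interval.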
Dedicated Super Exposure Technology
At present, there are two mainstream technical routes to wide dynamic range (WDR): time-domain multi-exposure fusion, and fusion of large and small pixels. The essence of both is the same: take multiple captures at different sensitivities, collecting different brightness ranges, and merge them into one image to extend the dynamic range. The difference is that time-domain multi-exposure uses a single pixel type but exposes several times, with a time lag between exposures, while the large/small-pixel approach exposes only once but uses two sizes of pixel. The time lag of time-domain multi-exposure produces motion-ghosting artifacts, while large/small-pixel fusion has more problems of its own, since the two pixel types differ in size and sensitivity.
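The merging step in time-domain multi-exposure fusion can be sketched in a few lines: each scene point takes its value from the longest exposure that has not saturated, rescaled by the exposure-time ratio to a common radiance scale. The 12-bit saturation level and 16x exposure ratio are hypothetical values, not from the article:

```python
FULL_WELL = 4095  # saturation code of a 12-bit linear pixel (assumption)

def fuse(long_px: int, short_px: int, ratio: int = 16) -> float:
    """Fuse one long and one short exposure of the same scene point.
    `ratio` is the long/short exposure-time ratio (assumed 16x here)."""
    if long_px < FULL_WELL:          # long exposure still linear: most SNR, use it
        return float(long_px)
    return float(short_px) * ratio   # long exposure clipped: rescale the short one

# Dark scene point: long exposure reads 200, well below saturation.
print(fuse(200, 12))      # 200.0
# Bright point: long exposure clips at 4095; short reads 1000 -> 16000 on the fused scale.
print(fuse(4095, 1000))   # 16000.0
```

The sketch also makes the drawbacks visible: the two reads happen at different times (hence ghosting on moving objects), and a short exposure that misses an LED pulse feeds a wrong value into the fused output.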
In the super-exposure pixel technology launched by ON Semiconductor, a single exposure delivers the dynamic range that previously required two, addressing the shortcomings of both mainstream routes. Super-exposure pixels scale more easily to smaller sizes and avoid the serious pixel crosstalk of large/small-pixel structures. They can strike an appropriate balance between automotive operating temperature and exposure time, preserving image quality in noise, color, sharpness, and detail, and they also effectively suppress LED flicker. In ON Semiconductor's Hayabusa (Peregrine Falcon) image sensor platform, this proprietary super-exposure technology provides a common architecture and pixel performance that helps designers reduce cost and accelerate development. ON Semiconductor believes super-exposure pixels are the best choice for vision and ADAS functions.
Hayabusa Image Sensing
In a traditional sensor pixel, the photosensitive area also serves as the storage area. The innovation of Hayabusa's super-exposure technology is to separate the pixel's photosensing from its charge storage: the pixel handles light sensing, and when its relatively small capacity saturates, the accumulated charge overflows into an external storage capacitor. This removes the capacity limit and makes the design easy to scale. The current generation of Hayabusa super exposure extends the effective full-well capacity to more than 5 times that of a linear pixel, capturing pulsed light in long exposures without oversaturation. According to ON Semiconductor engineers, the next generation will extend it to dozens of times the linear pixel capacity, or even more.
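To put the "more than 5 times" figure in the article's dB terms: a 5x extension of full-well capacity adds about 14dB of dynamic range, since signal range scales as 20·log10 of the capacity ratio. The baseline full-well value below is a hypothetical number for illustration only:

```python
import math

LINEAR_FULL_WELL = 20_000   # electrons; hypothetical linear-pixel capacity
EXTENSION_FACTOR = 5        # "more than 5 times", per the current generation

extended_capacity = LINEAR_FULL_WELL * EXTENSION_FACTOR
gain_db = 20 * math.log10(EXTENSION_FACTOR)

print(extended_capacity)      # 100000 electrons
print(round(gain_db, 1))      # 14.0 dB of added dynamic range
```

By the same arithmetic, the "dozens of times" promised for the next generation would add roughly 26-34dB (20·log10 of 20x-50x) on top of linear pixel performance.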
Compared with the large/small-pixel approach, Hayabusa super exposure also has advantages in LED flicker mitigation (LFM). Whether in low-light noise or high-brightness noise, super exposure shows better HDR characteristics, especially at higher temperatures. Large/small-pixel designs require extensive calibration, multi-dimensional color correction, and additional denoising for any lens design, which inevitably costs detail and resolution; Hayabusa super exposure has no such worry.
Hayabusa super exposure is also optimized for both human vision and machine vision, supporting >95dB dynamic range without flicker or artifacts, with a full-well capacity of >100ke-. It runs at up to 60fps, and with extremely short super-exposure times it can achieve more than 120dB of dynamic range.
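The >95dB and >100ke- figures above are consistent with the usual single-exposure relationship: dynamic range is 20·log10 of full-well capacity over the read-noise floor. The read-noise value below is an assumption for illustration; the article does not state it:

```python
import math

def sensor_dr_db(full_well_e: float, read_noise_e: float) -> float:
    """Single-exposure dynamic range in dB: full well over noise floor."""
    return 20 * math.log10(full_well_e / read_noise_e)

# With a 100 ke- full well and an assumed ~2 e- read-noise floor,
# a single exposure reaches roughly 94 dB, in line with the >95 dB spec.
print(round(sensor_dr_db(100_000, 2)))  # 94
```

Reaching the quoted 120dB+ then comes from the super-exposure mechanism extending the effective full well beyond the linear pixel, rather than from lowering the noise floor alone.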
In the vehicle environment, image sensors face stringent operating conditions and parameter requirements. With an image sensor platform that addresses the major technical challenges of automotive imaging, Hayabusa is competitive in the industry. And since automotive applications typically require training on large sets of test-drive images, a scalable image sensor platform greatly reduces the image training datasets that manufacturers must maintain across multiple platforms, which also relieves cost.