Generic structure of mobile phone camera

In document Benchmarking of mobile phone cameras (pages 29-33)

In general, a mobile phone camera can be divided into three logical parts: the camera module, which contains the image sensor itself; the image processing pipeline, or image signal processor (ISP); and the flash system.

The quality and benchmarking of the flash system are not part of this work. When the flash system is used, it adds a whole new dimension to still imaging. A proper investigation of a camera system with flash would require several new measurements, such as the color temperature, uniformity, and magnitude of the flash, taken in several different environments. Even though the flash system is nowadays an essential part of mobile phone cameras, its evaluation would complicate benchmarking significantly and should be investigated in separate research.

2.2.1 Image sensor

An image sensor is an essential part of a camera system. It receives light through the lens system and converts it first into an analog signal and then into digital values. Since CMOS sensors dominate mobile phone cameras, this section concentrates on CMOS technology. Figure 3a shows the simplified inner structure of a CMOS sensor. The example is from Samsung's ISOCELL technology, in which the photodiodes are isolated from each other (Samsung ISOCELL).
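The conversion chain described above, from incident light to digital values, can be sketched in a few lines. This is an illustrative model only: the quantum efficiency, full-well capacity and ADC bit depth below are made-up example numbers, not values from any particular sensor.

```python
# Hypothetical sketch of a sensor's signal chain: photons -> electrons
# (quantum efficiency) -> charge, clipped at the full-well capacity ->
# digital number (DN) via a linear analog-to-digital conversion.
# All constants are illustrative, not taken from a real datasheet.

def pixel_response(photons, quantum_efficiency=0.5, full_well=4000, adc_bits=10):
    """Convert an incident photon count into a digital number (DN)."""
    electrons = min(photons * quantum_efficiency, full_well)  # photodiode saturates
    # The ADC maps the full-well range linearly onto 2**adc_bits levels
    return round(electrons / full_well * (2 ** adc_bits - 1))

print(pixel_response(1000))   # mid-range signal -> 128
print(pixel_response(20000))  # saturated pixel -> maximum DN (1023 for 10 bits)
```

The clipping at the full well is what produces blown-out highlights: any photon count above the saturation point maps to the same maximum digital number.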

The topmost element of the sensor is a micro lens, which collects light and bends it onto the pixel below. The use of micro lenses reduces optical crosstalk and allows a wider field of view in the camera system. A color filter array (CFA) below the micro lenses filters the light into different components. Usually a Bayer filter with red, green and blue filters is used (Peres, M. 2007). Without the CFA, the sensor would take monochromatic images.

Figure 3 CMOS sensor: a) side view, picture by Samsung, and b) Bayer filter, picture by Adimec

Figure 3b shows an example of the Bayer filter. The number of green pixels is twice that of the other colors, which corresponds to the color sensitivity of the human visual system (HVS). Naturally, each color filter absorbs part of the incoming photons and therefore decreases the quantum efficiency of the sensor.
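The repeating 2x2 structure of the Bayer mosaic is easy to express in code. The sketch below assumes an RGGB tile layout (one common variant; sensors also ship with GRBG or BGGR orderings) and shows the two-greens-per-tile ratio mentioned above.

```python
# Minimal sketch of an RGGB Bayer pattern: which color the pixel at
# (row, col) samples. Each 2x2 tile contains one red, two green and
# one blue filter, matching the eye's higher sensitivity to green.

def bayer_color(row, col):
    """Color sampled at (row, col) in an RGGB Bayer mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print one 4x4 patch of the mosaic
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```

Each pixel records only one color component; the missing two components are later interpolated from neighboring pixels by the demosaicing stage of the image processing pipeline.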

Several studies are ongoing to replace this technique, but currently the Bayer filter is the main method (Business Wire; Sony; Invisage; Foveon).

When a photon hits the silicon below the color filter array, it creates an electron-hole pair which can be detected electrically. To eliminate electron leakage between pixels, i.e. electronic crosstalk, Samsung, along with other sensor vendors, has introduced physical boundaries between pixels. Samsung calls this method the ISOCELL technique.

CMOS pixels are active pixels, i.e. each pixel has its own amplifier. Typically, the pixel voltages are read line by line, converted by an analog-to-digital converter and sent to the image processing pipeline. However, this rolling shutter method has weaknesses: because the rows are read at different times, fast-moving objects are distorted in the final image. Due to this, several global shutter CMOS sensors have recently been published (Sony IMX174LLJ, CMOSIS Global Shutter). The global shutter method requires more logic per pixel: while the first CMOS pixels contained three transistors, a global shutter pixel requires at least five.
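The rolling shutter distortion can be demonstrated with a toy simulation. In the sketch below a vertical bar moves horizontally while the rows are read out one at a time, so the bar comes out slanted; the sizes and speed are arbitrary illustration values.

```python
# Toy simulation of rolling shutter skew: a vertical bar moves right
# while rows are read out one after another, so by the time row r is
# sampled the bar has already moved, and the captured bar is slanted.

def capture_rolling(bar_x0, speed, height=6, width=12):
    """Row r is read at time r; the bar has moved speed*r pixels by then."""
    frame = []
    for r in range(height):
        x = bar_x0 + speed * r  # bar position at the moment row r is read
        frame.append("".join("#" if c == x else "." for c in range(width)))
    return frame

for line in capture_rolling(bar_x0=2, speed=1):
    print(line)
```

Setting `speed=0` (a static scene), or sampling every row at the same instant as a global shutter does, yields a perfectly vertical bar: the skew comes purely from the row-by-row readout timing.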

Finally, the bottom layer of the sensor contains the metal wiring which transfers the information from the pixels.

2.2.2 Camera module

A camera module packages the image sensor with the lens system and with the mechanical parts required for features like auto focus, optical image stabilization and aperture adjustment. It is also possible to integrate a digital signal processor into the camera module.

Figure 4a shows a simplified example of a camera module. Firstly, the package contains a lens system; nowadays mobile phone cameras with auto focus have 5-6 lens elements. Secondly, the moving lens elements have their own holders and controllers. The voice coil motor (VCM) is a widely used technique for adjusting the lenses, but new methods like micro-electro-mechanical systems (MEMS) are coming to the market. Thirdly, an infrared filter is mounted on top of the sensor to prevent saturation caused by infrared light.

Finally, the sensor is wired and mounted onto a circuit board and the whole system is protected by a package. The module offers a connector which enables control of the camera and transfer of the image data.

The camera module of the Lumia 1020, probably the most complicated mobile phone camera module, is shown in Figure 4b. Among other things, it includes a 41-megapixel sensor, VCM-based auto focus and an optical image stabilizer in which the whole lens system rests on ball bearings. The package measures 25 mm by 17 mm and contains over 130 individual components. (Microsoft)

Figure 4 Camera module of a modern mobile phone: a) simplified example and b) camera module of the Lumia 1020 by Microsoft

2.2.3 Image processing pipeline

An image processing pipeline has a significant role in modern mobile phone cameras. Unfortunately, the quality of an image without image processing (a RAW image) is quite poor due to the small lens system, small pixel size and sensor artefacts.

In practice, the image processing pipeline recreates the image using a large number of different algorithms.

The image processing pipeline can be implemented with a dedicated processor, a digital signal processor (DSP) or a graphics processing unit (GPU). Field-programmable gate arrays (FPGAs) are also used in some cases. Moreover, the pipeline can be implemented in software running on the application processor of the phone.

However, image processing tends to be so computationally heavy that the pipeline usually runs on a separate processor or chip.

Figure 5 gives an example of algorithms that the image processing pipeline may contain. The process can be divided into correction, conversion and controlling tasks, such as denoising, demosaicing and auto focus, respectively. The algorithms are highly interconnected, and the actions of one algorithm may reduce the quality of another feature. The parameterization of the algorithms is therefore a trade-off between different quality features.

Figure 5 Example of an image processing pipeline
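Structurally, a pipeline of this kind is a sequence of stages applied to the RAW data in order. The sketch below illustrates only that structure: the stage names follow the correction/conversion taxonomy above, but the bodies are simplified placeholders (an assumed black level of 64 and a 2.2 gamma), not real ISP algorithms.

```python
# Illustrative ISP skeleton: RAW pixel values flow through an ordered
# list of stages. Real pipelines operate on 2D mosaic data; here the
# "image" is a flat list of values and most stages are stubs.

def black_level_correction(img):                     # correction
    return [p - 64 for p in img]                     # assumed black level of 64

def denoise(img):                                    # correction (stub)
    return img

def demosaic(img):                                   # conversion (stub)
    return img

def gamma(img):                                      # conversion, gamma 2.2
    return [round(255 * (p / 255) ** (1 / 2.2)) for p in img]

PIPELINE = [black_level_correction, denoise, demosaic, gamma]

def run_pipeline(raw):
    for stage in PIPELINE:
        raw = stage(raw)
    return raw

print(run_pipeline([64, 128, 255 + 64]))
```

Because the stages run in a fixed order, retuning one of them (e.g. stronger denoising) changes the input seen by every later stage, which is exactly why the parameterization becomes a trade-off between quality features.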

Auto focus and auto exposure in particular have critical roles, because they control the camera functionality and are very time-critical processes. All in all, the quality of the image processing pipeline largely defines the quality of the whole camera system.
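The controlling role of these algorithms can be illustrated with a minimal auto exposure loop: measure the frame brightness, compare it to a target, and adjust the exposure time. Everything here is a made-up toy model (the linear scene response, the 118 target, the convergence threshold); real auto exposure uses metering regions, histograms and damped adjustments.

```python
# Toy auto exposure control loop: adjust exposure time until the mean
# frame brightness reaches a target value. The scene model is a linear
# response clipped at 255; all constants are illustrative.

def mean_brightness(exposure_ms, scene_lux=100):
    """Toy scene: brightness grows linearly with exposure, clipped at 255."""
    return min(exposure_ms * scene_lux / 10, 255)

def auto_expose(target=118, exposure_ms=1.0, steps=10):
    for _ in range(steps):
        b = mean_brightness(exposure_ms)
        if abs(target - b) < 1:
            break                                  # close enough to the target
        exposure_ms *= target / max(b, 1e-6)       # scale exposure toward it
    return exposure_ms

exposure = auto_expose()
print(round(mean_brightness(exposure)))  # converges to the 118 target
```

The time-critical nature mentioned above comes from the fact that each iteration of such a loop costs a full frame: the fewer frames the control algorithm needs to converge, the faster the camera responds to a changing scene.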