Why no component is an island in complete video systems

Camera-based functions and applications are becoming more and more prevalent in today's society. New products and applications are being launched at an ever-faster pace. At the same time, traditional non-camera products are being transformed into version 2.0 simply by adding camera and video functionality. Just take a look around and you will notice cameras in all conceivable places: products that previously carried no camera now sport at least one, if not several. Some obvious examples of where cameras are prevalent are cars, surveillance applications, drones and wearable cameras such as bodycams.

System-wide calibration and fine-tuning are crucial for perfect video quality.


The three main stages of a video pipeline

What many manufacturers of these new applications are discovering is that a complete video system is a complex and difficult domain to master. A video pipeline typically contains many different components, each complex in its own right. Explore the three main stages of a video pipeline to learn why no component should be treated as an island if you want to achieve optimal video quality.

Stage 1: Analog

The first stage is in the analog domain, namely the lens system. Different lenses have different properties. For instance, a wide-angle lens performs completely differently in many aspects (beyond the obvious difference in focal length) from a zoom lens. Lenses from different manufacturers also behave very differently with regard to aberrations, distortions and similar phenomena. One cause of complexity is that today's products normally carry not just one but multiple cameras for different purposes; it is not uncommon to find up to five cameras on a standard smartphone today. Image stabilization is sometimes also applied at this stage using optical image stabilization (OIS).

Stage 2: Digital image processing

The second stage is in the digital image processing domain, or the Image Signal Processing (ISP) domain. This is after light has passed through the lens system, hit the image sensor and been converted into the digital domain via an A/D converter. This stage contains many low-level functions, such as demosaicking, denoising and blur correction, 3A (auto exposure, auto focus and auto white balance), color enhancement and tone mapping stages, lens distortion correction, and much more. These functions contribute to creating, correcting and perfecting a single frame, but a video consists of many frames per second (in extreme cases, up to 960 FPS in today's smartphones). To perfect a video, more processing is needed, and much of that processing takes place in the third stage.
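To make the per-frame ISP stages above more concrete, here is a minimal sketch of three of them (demosaicking, white balance and tone mapping) chained on a tiny raw frame. The function names, the white-balance gains and the gamma value are illustrative assumptions, not a description of any real ISP implementation.

```python
# Minimal, illustrative sketch of three single-frame ISP stages.
# All gains and the gamma value are assumed example values.
import numpy as np

def demosaic_nearest(raw):
    """Naive demosaic of an RGGB Bayer mosaic.

    raw: (H, W) array with H, W even, values in [0, 1].
    Returns (H//2, W//2, 3) RGB, taking one sample per channel from
    each 2x2 Bayer cell (the two green samples are averaged).
    """
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def white_balance(rgb, gains=(2.0, 1.0, 1.5)):
    """Apply per-channel gains (assumed values for a daylight-ish scene)."""
    return np.clip(rgb * np.asarray(gains), 0.0, 1.0)

def tone_map(rgb, gamma=2.2):
    """Simple gamma tone mapping from linear to display space."""
    return np.power(rgb, 1.0 / gamma)

# Tiny 4x4 RGGB Bayer frame, values in [0, 1]
raw = np.array([
    [0.20, 0.40, 0.20, 0.40],
    [0.40, 0.10, 0.40, 0.10],
    [0.20, 0.40, 0.20, 0.40],
    [0.40, 0.10, 0.40, 0.10],
])
frame = tone_map(white_balance(demosaic_nearest(raw)))
print(frame.shape)  # (2, 2, 3)
```

A production ISP performs each of these steps with far more sophistication (edge-aware demosaicking, scene-adaptive gains, local tone mapping), but the chained structure — raw sensor data in, corrected RGB frame out — is the same.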

Stage 3: Compute cluster

The third stage is the compute cluster, which contains different processing units with various degrees of compute power, each optimized for its specific usage. Modern compute clusters usually offer one or more CPUs for general computation and GPUs for graphics processing, along with Digital Signal Processors (DSPs) and Neural Processing Units (NPUs) for machine learning and AI computations. This is the stage where video enhancement applications are taken to a higher level. Examples include video stabilization, object identification, smooth transitions between cameras in a multi-camera solution, facial recognition and object tracking.
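As a taste of one Stage-3 application, electronic video stabilization can be sketched as smoothing a measured camera trajectory and applying the difference as a per-frame correction. The window size, the synthetic gyro-like trajectory and the function name below are assumptions chosen purely for illustration.

```python
# Minimal sketch of the core idea behind electronic video stabilization:
# smooth the measured camera path, then correct each frame by the
# difference between the smoothed and the measured path.
import numpy as np

def smooth_trajectory(angles, window=5):
    """Moving-average smoothing of per-frame camera angles (e.g. from a gyro)."""
    kernel = np.ones(window) / window
    padded = np.pad(angles, (window // 2, window // 2), mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Synthetic per-frame yaw angles (degrees): a slow pan plus hand shake
rng = np.random.default_rng(0)
intended = np.linspace(0.0, 10.0, 50)              # smooth intended pan
measured = intended + rng.normal(0.0, 0.5, 50)     # pan plus jitter
smoothed = smooth_trajectory(measured)
correction = smoothed - measured                   # rotation to apply per frame
print(smoothed.shape)  # (50,)
```

Real stabilizers are far more involved (rotation models, rolling-shutter correction, crop management), but this captures why the gyroscope's characteristics matter to the end result: the correction is computed directly from its measurements.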

No component in a camera pipeline can be treated as an individual island.

Problems with relying solely on configuration of each component in isolation

Many manufacturers of camera products learn that, to optimize the entire camera pipeline, each stage must be considered both individually and in relation to the overall system, and that each stage may affect the overall quality of the solution. For instance, if you replace an apparently simple component such as a gyroscope, the characteristics of the new component may differ from those of the old one. As a result, the overall characteristics of the entire system and the overall video quality will likely degrade unless a new system-wide calibration and optimization is performed.

What is required for optimal video quality

In addition to selecting the best available components, an optimal system must also be calibrated and optimized as a whole, not solely as isolated components. This means that you cannot rely only on the performance of each component, regardless of how good its specifications are. Instead, you also need to consider tuning and calibrating the entire system.

Don’t get stuck on a component island – unlock your full video quality potential with video pipeline calibration, optimization and tuning on a system-wide basis. At Imint, we have long-standing experience working with a wide range of different video components and execution environments. Don’t hesitate to get in touch with us and discuss your challenges with our video quality experts. For inspiration, insights and best practices for the next generation of video enhancement, enter your email address below and subscribe to our newsletter.

