The crucial role of motion sensor data in video stabilization

A deeper understanding of the role of motion sensor data in video stabilization opens the door to better video quality. Diving into the different types of motion sensors, the underlying science, the common artifacts to account for, and how Optical Image Stabilization (OIS) and video stabilization software counteract these artifacts will empower you to get better results out of video stabilization. This way, you’ll have a better idea of what to look for in video stabilization software and how to use a solid SDK effectively.


Six degrees of freedom

Let’s start with this common phrase. Our world consists of three spatial dimensions: width, height, and depth. The word freedom refers to free movement in these three dimensions.
When we talk about movement, we mean both changing location by moving along an axis through space (known as translation) and rotating around an axis while remaining at the same coordinates. Consider how you would still say you are moving when you turn around, even though you remain on the same spot.
A camera in motion has six degrees of freedom. Moving forward/backward, up/down or left/right constitutes translation. Rotation around an axis is termed pitch, yaw or roll, depending on which axis the rotation is around. Although some movements are more prevalent and affect video quality more strongly than others, video stabilization software has to understand, process and potentially handle all six types of movement. But first it all has to be very precisely measured.
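To make this tangible, a stabilizer typically carries these six numbers around as one pose record per point in time. Here is a minimal Python sketch; the names are ours for illustration, not from any particular SDK:

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """A six-degrees-of-freedom camera pose: where the camera is and
    which way it points at one instant in time."""
    # Translation: position along the x, y and z axes (metres).
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    # Rotation: angles around those axes (radians).
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0

pose = CameraPose(z=1.5, yaw=0.2)  # camera 1.5 m up, turned slightly
```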

Two key motion sensors with crucial data

Two types of motion sensors are of interest when tracking movement: the accelerometer and the gyroscope. Both deliver crucial data that is updated frequently.

Accelerometer

An accelerometer detects the g-forces associated with the current movement. The sensor is a tiny chip with moving silicon parts around half a millimeter thin. Watch Engineerguy’s YouTube video for a fascinating explanation and demonstration of how an accelerometer works in a smartphone.
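When the device is held still, the accelerometer essentially measures the direction of gravity, which already pins down two of the three rotation angles. A rough Python sketch using the standard tilt formulas (the axis convention is an assumption; it varies between devices):

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Estimate pitch and roll (radians) from one raw accelerometer sample,
    assuming the device is roughly stationary so the reading is dominated
    by gravity. Axis convention assumed: x right, y up, z out of the screen."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Device lying flat: gravity (~9.81 m/s^2) along the z axis.
print(tilt_from_accelerometer(0.0, 0.0, 9.81))  # -> (0.0, 0.0)
```

Note that rotating around the gravity axis (yaw) leaves the reading unchanged, which is one reason the accelerometer alone is not enough.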

Gyroscope

The gyroscope is more of an orientation tool for your camera in motion: it measures how fast your device is rolling, pitching and yawing. A typical micro-electromechanical gyroscope inside small devices measures just a few millimeters.
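Strictly speaking, the gyroscope reports angular velocity rather than orientation, so the orientation is obtained by integrating its samples over time. A simplified sketch of one integration step (real stabilizers use quaternion updates, as discussed later in this post, and axis conventions vary between devices):

```python
def integrate_gyro(angles, gyro_sample, dt):
    """One Euler-integration step: advance (pitch, yaw, roll) in radians
    by the angular velocity (rad/s) reported by the gyroscope over dt
    seconds. Valid only for small steps."""
    pitch, yaw, roll = angles
    wx, wy, wz = gyro_sample
    return (pitch + wx * dt, yaw + wy * dt, roll + wz * dt)

# A 200 Hz gyro stream: dt = 1/200 s per sample.
angles = (0.0, 0.0, 0.0)
for sample in [(0.1, 0.0, 0.0)] * 200:  # 1 second of slow pitching
    angles = integrate_gyro(angles, sample, 1 / 200)
print(angles)  # ~ (0.1, 0.0, 0.0): the camera pitched about 0.1 rad
```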

Frequent motion sensor data updates

Information is pulled from these sensors a hundred or more times per second. Compare this to the typical 30 frames-per-second rate of video. So why do we need sensor updates more often than new frames arrive? The answer will be revealed later in this post.
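One practical consequence of the rate mismatch is worth sketching already: sensor samples rarely land exactly on the timestamp we care about, so the stabilizer interpolates between the two nearest samples. A minimal sketch:

```python
def sample_at(timestamps, values, t):
    """Linearly interpolate a sensor stream (parallel lists of timestamps
    in seconds and scalar readings) at an arbitrary query time t."""
    if t <= timestamps[0]:
        return values[0]
    for i in range(1, len(timestamps)):
        if t <= timestamps[i]:
            # Fraction of the way between sample i-1 and sample i.
            a = (t - timestamps[i - 1]) / (timestamps[i] - timestamps[i - 1])
            return values[i - 1] + a * (values[i] - values[i - 1])
    return values[-1]

# A 200 Hz gyro stream queried at a 30 fps frame time between samples.
ts = [i / 200 for i in range(200)]
vals = [0.01 * i for i in range(200)]
print(sample_at(ts, vals, 1 / 30))  # reading at t = 33.3 ms
```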

How video stabilizers use motion sensor data

Movement can also be estimated visually by analyzing motion in the image itself, using methods such as optical flow with tracking technology, as discussed in the object tracking post. Video stabilization software combines translation and rotation information to calculate the total movement. This enables the stabilizer to separate intended motion from unintended motion – keeping the former and canceling out the latter – and to modify each frame to create the scene you intended.
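As a simplified illustration of that separation, one classic approach is to low-pass filter the measured camera path: slow, deliberate movement such as a pan passes through, while fast jitter is treated as shake. A minimal sketch using an exponential moving average (production stabilizers use far more sophisticated path optimization):

```python
def smooth_path(raw_angles, alpha=0.1):
    """Low-pass filter a per-frame camera angle track. The smoothed track
    approximates the intended motion; (raw - smoothed) is the shake the
    stabilizer must cancel by warping each frame the opposite way."""
    smoothed = [raw_angles[0]]
    for angle in raw_angles[1:]:
        smoothed.append(smoothed[-1] + alpha * (angle - smoothed[-1]))
    return smoothed

raw = [0.00, 0.03, -0.02, 0.05, 0.01, 0.04]   # jittery yaw track (radians)
intended = smooth_path(raw)
corrections = [r - s for r, s in zip(raw, intended)]  # applied in reverse
```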

But how do we go from raw data to a motion model?

“I’m afraid we need to use… math!”

For all of this to work, the data reported by the sensors must be expressed and analyzed mathematically. Although images are two-dimensional, the world we and our cameras live in is not. We have six degrees of freedom in our 3D world. A translation is easily written as a 3D vector, and a rotation can be expressed with matrix algebra or, as it turns out, with four-dimensional numbers.

Yes. Just as complex numbers are a two-dimensional extension of real numbers, quaternions are four-dimensional numbers. They can be added, subtracted, multiplied and divided according to their own special rules, which extend the arithmetic rules of real numbers. It turns out that a rotation in 3D space can be represented with unit quaternions. They provide a more convenient mathematical notation for representing rotations of 3D objects.

Compared with rotation matrices, they are easier to compose and more numerically stable, and often more efficient, though not as easy for a human to grasp. Representing rotations using unit quaternions also avoids an infamous problem in mechanical engineering known as gimbal lock.
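To make the quaternion machinery concrete, here is a bare-bones implementation of the textbook formulas in Python (not taken from any particular stabilization library):

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians
    around the unit-length `axis` (x, y, z)."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_multiply(q, r):
    """Hamilton product: composing two rotations is one multiplication."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate_vector(q, v):
    """Rotate 3D vector v by unit quaternion q, via q * (0, v) * q^-1."""
    w, x, y, z = q
    p = quat_multiply(quat_multiply(q, (0.0, *v)), (w, -x, -y, -z))
    return p[1:]

def normalize(q):
    """Renormalize to unit length: after composing thousands of small
    updates, floating-point error drifts the norm away from 1. One square
    root fixes it; re-orthogonalizing a 3x3 matrix is costlier."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Rotate the x axis 90 degrees around z: it becomes the y axis.
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
print(rotate_vector(q, (1.0, 0.0, 0.0)))  # ~ (0.0, 1.0, 0.0)
```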

Even if this seems straightforward, the actual computations are not. Sensor data is “noisy”, i.e. it comes with a certain degree of inaccuracy, and can only be polled a limited number of times per second. Turning all this data into one meaningful mathematical expression, especially on the low-power hardware typical of cameras in motion such as drones, is a major challenge for video stabilizers. At the same time, hardware can compensate for some of this, as we will see in the next section.
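One classic, lightweight answer to the noise problem is a complementary filter: trust the gyroscope on short time scales and let the accelerometer’s gravity reference slowly correct the drift. A minimal sketch (the coefficient is an illustrative assumption; heavier machinery such as Kalman filtering is also common in practice):

```python
import math

def complementary_filter(pitch, gyro_rate, accel, dt, k=0.98):
    """One fusion step for the pitch angle (radians).
    gyro_rate: angular velocity around the pitch axis (rad/s); low noise,
               but drifts when integrated over time.
    accel:     (ax, ay, az) in m/s^2; noisy, but a drift-free gravity
               reference.
    k:         how much to trust the integrated gyro. 0.98 is a common
               starting point, tuned per device."""
    accel_pitch = math.atan2(-accel[0], math.sqrt(accel[1]**2 + accel[2]**2))
    return k * (pitch + gyro_rate * dt) + (1 - k) * accel_pitch
```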

Common motion artifacts

Common artifacts related to motion include rolling shutter artifacts and motion blur, which are interrelated in a certain sense. Once we understand what causes these artifacts, we can begin to correct them.

Rolling shutter artifacts

Rolling shutter artifacts arise because a complete video frame is not recorded in a single instant. Instead, the pixels are recorded line by line, either vertically or horizontally. With the camera in motion, the last lines may record a different scene than the first ones did, causing a skewed and distorted image.
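This is also where the high sensor rate from earlier pays off: every line has its own capture time, so correcting rolling shutter requires knowing the camera’s orientation per line, not just per frame. A simplified sketch of the timing (the readout duration is an illustrative value):

```python
def row_capture_time(frame_start, row, num_rows, readout_time):
    """Approximate capture timestamp of one pixel row, assuming rows are
    read out top to bottom at a constant rate over `readout_time` seconds."""
    return frame_start + (row / num_rows) * readout_time

# A 1080-row frame starting at t=0 with a 20 ms readout (illustrative):
t_top = row_capture_time(0.0, 0, 1080, 0.020)        # 0 ms
t_bottom = row_capture_time(0.0, 1079, 1080, 0.020)  # ~20 ms later
# The stabilizer looks up (or interpolates) the gyro orientation at each
# row's timestamp and un-warps that row accordingly.
```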

Motion blur

Somewhat related, motion blur is the apparent streaking of rapidly moving objects that occurs when the scene changes while a frame is being recorded. This can be used for great photographic effects (such as very long exposure times to show stars moving across the night sky), but large amounts of unintentional blur are rarely good.
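The expected amount of blur can be estimated straight from the sensor data: roughly, how far the image shifts during one exposure. A back-of-the-envelope sketch (the focal length is an illustrative assumption):

```python
def blur_extent_px(angular_velocity, exposure_time, focal_length_px):
    """Approximate motion blur in pixels for purely rotational shake:
    the angle swept during the exposure, projected through the lens.
    Valid for small angles, where tan(a) ~ a."""
    return angular_velocity * exposure_time * focal_length_px

# 0.5 rad/s of shake, 10 ms exposure, ~1500 px focal length (illustrative):
print(blur_extent_px(0.5, 0.010, 1500))  # ~7.5 px of streaking
```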

How OIS uses motion sensor data to tackle artifacts

Example showing stabilization with and without compensating for in-frame motion artifacts using our video enhancement software, Vidhance.

Optical Image Stabilization (OIS) is used to counteract some of these artifacts. It does this by physically moving the camera lens ever so slightly, according to high-speed information it receives from the device’s motion sensors. In many devices utilizing Optical Image Stabilization, including smartphones and drones, the lens is shifted (translated) small distances rather than rotated. This makes it useful for still imaging but problematic for video, as we shall see below. However, some of the latest OIS modules are now capable of rotation.

Because OIS compensates for rotations by translating the lens, its corrections actually introduce perspective distortions, which become very evident in video.

Different companies use different names for this technology, but Optical Image Stabilization is widely used for both photo and video in modern cameras in motion such as smartphones, action cameras and drones. It was invented to at least partially address in-frame issues such as motion blur and rolling shutter effects. In essence, OIS corrects problems that a bigger lens and a better sensor would not have, in devices where such lenses and sensors would be too expensive or complex to incorporate. The net result of OIS is improved sharpness in motion.
Because of the limitations in compensation angle, Optical Image Stabilization cannot compensate for the larger-amplitude motion between frames. In short, OIS can enhance the quality of each individual frame, but not the overall stability of the video feed when the camera is in motion, such as during a drone flight. It is therefore the job of video stabilization software to adapt to this motion and eliminate the distorted perspective, along with all the other disturbances mentioned above.

Key pieces of the video stabilization puzzle

To be effective, video stabilization software must take into account and master everything covered in this post. In other words, the software must frequently monitor critical motion sensor data and adapt to it. At the same time, it needs to properly leverage OIS to correct common motion-related artifacts such as rolling shutter and motion blur, while compensating for the perspective distortions OIS itself introduces.

If you’d like to learn more about other key considerations for video stabilization, see our guide, “Mastering video stabilization in product development” or learn more about how our very own Vidhance video stabilization technology performs both in-frame stabilization, such as rolling shutter elimination, and inter-frame stabilization, analyzing and removing unintended motion. For inspiration, insights and best practices for the next generation of video enhancement, enter your email address and subscribe to our newsletter. 
