Keeping Campus IP Video Costs Under Control

Learn techniques to reduce the burdens of higher demands on network bandwidth and storage capacity.
Published: March 9, 2015

The adoption of IP video systems is growing quickly, and within this segment of the security industry, high-definition (HD) and megapixel cameras are seeing greater growth rates than standard-definition IP cameras. In fact, Security Sales & Integration's 2015 Gold Book reported that 60% of installations involving IP cameras included megapixel models last year. However, networked HD surveillance solutions require certain optimizations in order to curtail potential spikes in system costs.

Manufacturers are working feverishly to improve and enhance higher resolution imaging technology. For example, in the recent past, low-light areas and projects with other challenging lighting conditions required the use of standard-definition IP cameras for the best image quality. However, new wide dynamic range (WDR) and low-light HD and megapixel cameras are now delivering superior image quality and detail for these applications, making them a viable and often sought-after option for nearly any surveillance application. Lower-cost imaging options are also making these cameras more accessible to small- and medium-sized organizations.

For any installation, HD or megapixel cameras provide more detailed images with more useful information, such as fine scene detail that includes facial characteristics and alphanumeric information, but this can come at a cost. The volume of data being transported and stored rises significantly. The bandwidth demands that this places on the network infrastructure and the increase in required storage capacity add significantly to total IP system costs. Disk space is one of the most expensive components of IP systems.
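To see how quickly storage costs scale with bit rate, a rough back-of-the-envelope calculation helps. The sketch below uses an illustrative 4 Mbit/s stream and 30-day retention window; these numbers are assumptions for demonstration only, as actual bit rates depend on resolution, scene activity, codec, and compression settings.

```python
# Rough per-camera storage estimate. The 4 Mbit/s figure is a hypothetical
# average for a continuous HD stream, chosen only to illustrate the math.

def storage_gb(bitrate_mbps: float, days: int) -> float:
    """Return storage in decimal gigabytes for a continuous stream."""
    seconds = days * 24 * 3600
    megabits = bitrate_mbps * seconds
    return megabits / 8 / 1000  # Mbit -> megabytes -> gigabytes

# 30 days of continuous recording at 4 Mbit/s:
print(round(storage_gb(4.0, 30), 1))  # about 1296 GB for a single camera
```

Multiply that by dozens or hundreds of cameras on a campus, and it becomes clear why even modest reductions in per-camera bit rate translate into meaningful savings on disk arrays.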




The best place to reduce these costs is at the source: in the camera, by lowering bit rates. One way to lower bit rates is to reduce noise, the random pattern of pixels visible in an image. Some degree of noise is always present in any electronic device that transmits or receives a signal; it is an undesirable byproduct of image capture.

Noise can be interpreted by the encoder as motion, which makes it one of the most detrimental factors in the encoding process: it leads directly to exaggerated bit rates for a given image. HD and megapixel cameras are more susceptible to noise because the pixels on their sensors are smaller and collect less light. More amplification is required, which introduces noise. Low-light scenes also contribute to an increase in noise levels.

Let’s take a closer look at how to lower storage requirements, and therefore costs, by reducing image noise as well as by changing the compression levels of specific scene regions to achieve lower bit rates without compromising video quality.

Quieting Down Image Noise

Classic noise reduction can take two forms. Spatial noise reduction averages the pixels within a frame to reduce noise, while temporal noise reduction involves averaging pixels over several frames to cancel out noise artifacts. Temporal filtering is very effective for static images but can cause problems when there is motion in the image: if it is applied to moving objects, they can become blurred or repeated in the frame, an artifact known as ghosting. Nearly everyone in the industry has seen video where a washed-out image of a person appears to walk a step behind the actual individual.
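The two techniques can be sketched in a few lines. This is a simplified illustration, not a camera vendor's implementation: spatial reduction is shown as a 3x3 box filter over one frame, and temporal reduction as a plain average of the same pixel across frames.

```python
import numpy as np

def spatial_denoise(frame: np.ndarray) -> np.ndarray:
    """Spatial noise reduction: average each pixel with its 3x3
    neighbourhood within a single frame (simple box filter)."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros(frame.shape, dtype=float)
    h, w = frame.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out / 9.0

def temporal_denoise(frames: list) -> np.ndarray:
    """Temporal noise reduction: average each pixel position over
    several frames. Very effective on static scenes, but moving
    objects smear across frames, producing the ghosting artifact."""
    return np.mean(frames, axis=0)
```

On a static scene, the random noise in each frame averages toward zero across frames, which is exactly why temporal filtering is so effective until something moves.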

Combining spatial and temporal noise reduction with the ability to dynamically adjust them based upon light levels and identification of moving objects produces images with the least amount of noise, greatest amount of detail and lowest bit rates (the number of bits that are transmitted per second). Bit rates can be optimized by tuning the degree of noise reduction based upon an analysis of important objects moving through the camera’s field of view. The camera uses background subtraction to detect moving objects and adapt temporal filtering. This means that the camera identifies frames in which there is movement and passes this information back to the digital signal processor. The temporal noise reduction for these frames is then adjusted to avoid ghosting.
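The adaptive behavior described above can be illustrated with a small sketch. The background model, motion threshold, and blend weights here are hypothetical stand-ins for the more sophisticated logic in a real camera's digital signal processor; the point is only to show temporal smoothing being relaxed where motion is detected.

```python
import numpy as np

# Hypothetical pixel-difference level above which a pixel counts as moving.
MOTION_THRESHOLD = 20.0

def adaptive_temporal_filter(frame: np.ndarray,
                             previous: np.ndarray,
                             background: np.ndarray) -> np.ndarray:
    """Blend the new frame with the previous filtered frame.

    Static pixels (close to the background model) are smoothed heavily
    to suppress noise; pixels flagged as moving mostly keep the new
    frame's value, which avoids ghosting on moving objects."""
    diff = np.abs(frame - background)  # simple background subtraction
    # alpha near 1.0 trusts the new frame (motion); alpha near 0.0
    # keeps the filtered history (static scene, strong noise reduction).
    alpha = np.where(diff > MOTION_THRESHOLD, 0.9, 0.2)
    return alpha * frame + (1.0 - alpha) * previous
```

Because static regions are averaged heavily, their residual noise drops and the encoder sees fewer spurious pixel changes, which is what keeps the bit rate down between events.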

When the scene is quiet or no motion is present, bit rates are minimized. When an important object is detected, bit rates increase to capture maximum detail. The overall result is that network bandwidth requirements remain low until something important happens in a scene; bandwidth is consumed at higher levels only when increased scene detail may be needed.

