As camera resolutions continue to climb higher, and with HD and megapixel video systems being deployed across a wider variety of customers and vertical markets, sending the larger video files generated by those cameras across networks has become a major issue for integrators and end users.
Three major compression algorithms — H.264, M-JPEG and MPEG-4 — have been used in the industry to date for reducing the size of video files. Of these, H.264 has become the de facto standard embraced by most, if not all, camera manufacturers, primarily because it is the most efficient at reducing file size for transmission and storage without also reducing image quality. File sizes can be as much as 80 percent smaller than comparable M-JPEG files. The difference lies in how frames are compressed. H.264 analyzes the video stream and, after sending a full reference frame, transmits only what has changed from one frame to the next, such as when there is motion in the scene. M-JPEG, by contrast, compresses each frame individually as a complete JPEG image, discarding visually redundant detail within every frame to shrink file sizes.
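The inter-frame versus intra-frame distinction can be sketched with a toy model (not a real codec; the frame data, the delta format and the cost accounting are all illustrative assumptions): an M-JPEG-style encoder stores every frame whole, while an H.264-style encoder stores one reference frame and then only the pixels that changed.

```python
# Toy illustration: contrast intra-frame coding (every frame stored
# whole, as M-JPEG does) with inter-frame coding (only pixels that
# changed since the previous frame, as H.264-style codecs do).
# Frames are flat lists of pixel values.

def intra_encode(frames):
    """Cost of storing every frame in full (one value per pixel per frame)."""
    return sum(len(f) for f in frames)

def inter_encode(frames):
    """Cost of storing the first frame in full, then (index, value) deltas."""
    cost = len(frames[0])                      # keyframe, stored whole
    for prev, curr in zip(frames, frames[1:]):
        changed = [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]
        cost += 2 * len(changed)               # each delta: index + value
    return cost

# A mostly static scene: 5 frames of 100 pixels, with 3 pixels "moving".
frames = [[0] * 100 for _ in range(5)]
for t in range(1, 5):
    for i in range(3):
        frames[t][t * 3 + i] = 255             # a small moving object

print(intra_encode(frames))   # 500: every frame stored whole
print(inter_encode(frames))   # 142: one keyframe plus small deltas
```

With a mostly static scene the inter-frame cost is a fraction of the intra-frame cost, which is exactly why H.264 wins on bandwidth for typical surveillance footage.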
Because H.264 (formally MPEG-4 Part 10) and the older standard MPEG-4 format belong to the same family of compression standards, MPEG-4 is headed towards extinction, says Steve Surfaro, business development manager and security industry liaison for Chelmsford, Mass.-based Axis Communications, who also chairs the Security Industry Association’s (SIA) Digital Video Subcommittee.
“Standard MPEG-4 use started declining rapidly when H.264 was deployed, so it’s become mainly a choice between H.264 and M-JPEG,” he says. The spread of H.264, which is formally MPEG-4 Part 10, has only hastened the older MPEG-4 Part 2’s demise.
While H.264 has a number of benefits, it is not without its downsides, the greatest of which is decoding compressed video for live viewing. The processing power and, by extension, the extra hardware needed to decompress H.264 streams for live viewing are far greater than for M-JPEG.
“H.264’s main purpose is to reduce bandwidth and storage requirements, but it requires a lot more horsepower to display video on a monitor. So there has to be a lot more hardware on the head end,” says Jumbi Edulbehram, director of business development for Ridgefield Park, N.J.-based Samsung Techwin America.
The good news, he adds, is that this isn’t even a consideration for the vast majority of end users. “The reason the industry doesn’t talk much about it is that a very small segment of the industry actually watches live video. The majority stores video and reviews incidents.”
While M-JPEG excels at delivering a steady stream of high-resolution images or capturing video triggered by an event, it has drawn one major criticism. “In reality, M-JPEG can’t touch fast-moving objects,” Surfaro says. This is an area where H.264 shines, he adds, noting that recent iterations of the codec have allowed cameras with H.264 encoding capability to be placed on moving objects such as trains and buses, as well as to monitor moving objects.
The next big thing on the resolution horizon — 4K — is also a major driver of the next big thing (or things) in compression. With four times the pixel count of 1080p, 4K video files are dramatically larger than HD video files.
“4K is the next standard in video quality. And as resolution goes up, you obviously have to compress video more or storage and bandwidth requirements will go through the roof,” Edulbehram says. “The greater the resolution, the more compression you need to transmit and store video at that resolution.”
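Edulbehram’s point about resolution driving bandwidth can be put in back-of-envelope numbers. A minimal sketch, assuming a flat average bit budget per pixel after compression (the 0.1 bits-per-pixel figure is purely illustrative, not a published codec number):

```python
# Back-of-envelope sketch of how resolution drives bandwidth.
# The per-pixel bit budget is an illustrative assumption.

def stream_mbps(width, height, fps, bits_per_pixel):
    """Approximate stream bandwidth in megabits per second."""
    return width * height * fps * bits_per_pixel / 1_000_000

BITS_PER_PIXEL = 0.1   # assumed average after compression

hd  = stream_mbps(1920, 1080, 30, BITS_PER_PIXEL)
uhd = stream_mbps(3840, 2160, 30, BITS_PER_PIXEL)

print(round(hd, 1))    # 6.2  (Mbps for 1080p at these assumptions)
print(round(uhd, 1))   # 24.9 (Mbps for 4K: four times the pixels,
                       #       four times the bandwidth at equal compression)
```

Whatever per-pixel figure a real codec achieves, the fourfold pixel count carries straight through to bandwidth and storage unless the compression itself improves.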
The next wave of video compression technologies will be led by High Efficiency Video Coding (HEVC), also known as H.265, which is said to double the compression ratio of H.264 while maintaining video quality. As with many things video-related, the broadcast industry has until now led the way towards widespread 4K adoption. So far, though, that adoption has been somewhat limited, Surfaro says.
“There is a very large amount of broadcast efficiency based upon 4K. Cable providers and video communication companies need to be able to send out their streams in a high-efficiency manner,” he says. “You’re going to see more 4K usage but right now there’s very little 4K content available. Once it passes a certain threshold, you’ll see some video management solution providers start adopting it, but not for now. It’s going to be very positive for the industry, so we have to be able to carry video more efficiently.”
So while HEVC lurks on the horizon, H.264 is not going anywhere. There have been a number of efficiency-boosting improvements to H.264, mostly enabled by advancements in network infrastructure and switching fabric, Surfaro says. A switching fabric, or network fabric, interconnects network nodes through one or more switches and spreads traffic across multiple physical links, yielding higher total throughput (bandwidth) than a traditional single-link Ethernet network.
“People are going to be surprised when we get into the newer efficiency of H.264. Network fabric and hardware are enabling greater scalability and reliability, and increasing the ability to have multiple streams. It’s bringing a tremendous change in performance, simplicity and reliability, which is crucial,” Surfaro says. “In broadcasting, if a video stream goes down, people complain to their cable or satellite provider; in security, if a stream is interrupted, it could mean loss of life.”
Even with its advanced compression ability, HEVC will still need help from the cameras themselves, in the form of video analytics, to make transmitting 4K video across networks as efficient and manageable as possible, Edulbehram says.
“There is no way 4K, even with increased compression, is going to be widely used without smarts in the camera. The majority of people are not interested in seeing everything, but instead are more interested in video based on motion or an event. The camera will decide what’s interesting or important and send that video to you at the best compression possible, giving you the best of both worlds.”
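The event-based idea Edulbehram describes can be sketched as a simple motion gate, a deliberately crude stand-in for real camera analytics (the frame data, the mean-absolute-difference metric and the threshold are all illustrative assumptions):

```python
# Minimal sketch of camera-side "smarts": transmit a frame only when
# motion (mean absolute difference against the previous frame) exceeds
# a threshold. Real analytics are far more sophisticated; this only
# illustrates the event-based gating idea.

def mean_abs_diff(a, b):
    """Average per-pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def motion_gate(frames, threshold):
    """Keep only frames whose change from the previous frame exceeds threshold."""
    sent = [frames[0]]                 # always send an initial reference frame
    for prev, curr in zip(frames, frames[1:]):
        if mean_abs_diff(prev, curr) > threshold:
            sent.append(curr)
    return sent

static = [10] * 64                     # an empty scene
moving = [10] * 32 + [200] * 32        # something enters the scene
frames = [static, static, moving, static, static]

sent = motion_gate(frames, threshold=5.0)
print(len(sent))   # 3 of 5 frames sent: the reference plus the two changes
```

The gate drops the static frames entirely; a real camera would additionally compress what it does send, combining both savings.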
Another technique on the horizon for improving the efficiency of transmission involves the number of sensors within cameras.
“We’re moving toward multi-sensor cameras, with the idea being that you’re not just gathering information from one sensor and compressing the heck out of it and sending it. With multiple sensors, the camera can figure out what information the different sensors are gathering and treat different parts of the image differently. Then it will stitch together an optimized image given what happened in the scene,” Edulbehram explains.
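A toy sketch of that per-region treatment, assuming two sensors that each cover half the scene (the activity score, the threshold and the 4:1 downsampling used as a stand-in for heavier compression are all illustrative assumptions, not any vendor's method):

```python
# Toy sketch of the multi-sensor idea: each sensor covers part of the
# scene, the camera rates each region's activity, compresses quiet
# regions harder, and stitches the results into one frame.

def activity(region):
    """Crude activity score: spread between brightest and darkest pixel."""
    return max(region) - min(region)

def encode_region(region, high_activity):
    # Pretend-encode: busy regions keep full detail; quiet regions are
    # downsampled 4:1 as a stand-in for heavier compression.
    return region if high_activity else region[::4]

def stitch(regions, threshold=50):
    """Encode each sensor's region by its activity, then combine them."""
    return [encode_region(r, activity(r) > threshold) for r in regions]

left  = [10] * 100                 # quiet half of the scene
right = [10, 240] * 50             # busy half with motion/contrast

stitched = stitch([left, right])
print(len(stitched[0]), len(stitched[1]))   # 25 100: quiet half sent smaller
```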
Regardless of how video is optimized for transmission, there is just one driving force behind efforts to improve compression algorithms and develop complementary tactics for streamlining video.
“Ultimately, it’s about stability of the throughput,” Surfaro says. “You want to keep the feed and have no packet loss without affecting the quality of the video. If you don’t get the video there or if it’s broken up, it’s as good as useless.”
3 More Ways to Reduce Bandwidth
Frame Rate
Simply reducing the frame rate from 30 fps to 20 fps cuts file sizes by one-third, and in most security applications the difference won’t be discernible. For some applications, such as traffic monitoring where cars move through the scene very quickly, this is not an option.
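The arithmetic behind that one-third figure, assuming file size scales linearly with frame count:

```python
# Dropping from 30 fps to 20 fps removes one third of the frames and,
# to a first approximation (per-frame size held constant), one third
# of the file size.

def size_reduction(old_fps, new_fps):
    """Fraction of file size saved, assuming size scales with frame count."""
    return (old_fps - new_fps) / old_fps

print(size_reduction(30, 20))   # 0.333... i.e. one-third smaller
```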
Compression Ratio
Most cameras ship with a preset compression ratio of 20 to 30 percent, which can be raised to 50 or 60 percent for greater efficiency without noticeably compromising video quality.
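Illustrative arithmetic only (the 2 MB raw figure is an assumed example, and real codecs do not scale this linearly): raising the compression percentage shrinks the output proportionally.

```python
# Proportional model of the compression-ratio setting: removing
# compression_pct percent of the data leaves the remainder.

def compressed_size_mb(raw_mb, compression_pct):
    """Output size after removing compression_pct percent of the data."""
    return raw_mb * (100 - compression_pct) / 100

print(compressed_size_mb(2.0, 30))   # 1.4 MB at a 30 percent setting
print(compressed_size_mb(2.0, 60))   # 0.8 MB at a 60 percent setting
```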
Bitrate
With M-JPEG, bitrate is constant and predictable. But with H.264, which sends only what has changed within a scene, bitrate jumps when there is a lot of motion in a scene. Any spike in motion results in cameras sending more information than usual, increasing the strain on a network while also generating lower-quality video. Different camera manufacturers deal with this in different ways, so it’s important to do your research when evaluating cameras for a particular application.
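The constant-versus-variable bitrate contrast can be sketched with toy numbers (the per-frame activity levels and bit costs are illustrative assumptions, not figures for any real codec):

```python
# Toy contrast between M-JPEG-like constant bitrate and H.264-like
# motion-driven bitrate. "Motion" per frame is an abstract 0-1
# activity level.

def mjpeg_bits(motion_levels, per_frame=1000):
    """Every frame costs the same, regardless of scene activity."""
    return [per_frame for _ in motion_levels]

def h264_bits(motion_levels, keyframe=1000, per_motion=3000):
    """A keyframe, then per-frame cost proportional to scene change."""
    return [keyframe] + [int(m * per_motion) for m in motion_levels[1:]]

motion = [0.0, 0.0, 0.1, 0.9, 0.9, 0.1, 0.0]   # a burst of activity

print(mjpeg_bits(motion))   # flat, predictable network load
print(h264_bits(motion))    # low at rest, spiking with the motion burst
```

The spike in the second stream is the strain the sidebar warns about: during the burst this model's H.264-style frames cost more than the flat M-JPEG frames, even though the stream is far cheaper overall when the scene is quiet.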