Specialized high-definition day/night cameras with infrared illumination near the U.S. border with Mexico capture digital still images of each container car passing underneath them at 70 miles per hour. The system can detect a person riding illegally because he or she does not fit the expected geometry and is flagged as a suspicious anomaly.


Video analytics can detect abandoned objects, the level of congestion, graffiti and people’s pathways at the same time in one view.

When a person or an industry is young, it feels like it can do anything. There’s a lot of boasting, a lot of claims, a lot of tearing down the other guy, and still not a lot of solid information on which to base informed decisions.

“The hardest thing in this business is determining fact from fiction,” admits Mike Sherwood, channels director for Vidient, Santa Clara, Calif.

Instead of putting video analytic software on a digital signal processor (DSP) chip from Texas Instruments, as many companies have, he says Vidient uses a different processor with 20 times the horsepower. “You can do what it takes eight to 10 DSPs to do,” he asserts. “We didn’t dumb down the analytics to fit it on a chip.”

But Carolyn Ramsey, director of program management for Honeywell Systems Group, Louisville, Ky., thinks DSP chips that provide the power of 10 or more Pentium computers are still on the bleeding edge of technology.

“It’s not quite ready for prime time,” she asserts about the technology. “That’s where the great scientists in the media processing chip world are spending their energy.”

Gadi Talmon, vice president of business development at Agent VI, Fort Myers, Fla., agrees that sifting through the various claims of video analytic companies can be complicated for security dealers and systems integrators.

“There is a lot of storytelling in this industry, and the main reason, I believe, is that the industry is very young and operational installations are very few,” Talmon concedes. “Therefore, most companies, including Agent VI, are in a situation where most of their installations are pilot trials or evaluations of some type.”

“Some analytics customers are still in the experimental stage,” Ramsey admits. “The early adopters have been using it for four years. The people who are waiting to make sure it is ready to handle their needs are just in the trying stage.”

Viewed across these different areas, video analytics can become a study in human behavior.

A myriad of relatively young companies offering video analytics is also complicating the industry. “There are a lot of newcomers to analytics in the last year, and I think that’s very exciting,” Ramsey stresses. “It generates interest and talk, which is very good for business. Every company that enters the space will look at it from a slightly different angle, and hopefully innovate and add breadth or depth.”

Eric Brotherhood, DVTel’s national accounts manager for ADT, Ridgefield Park, N.J., reports DVTel is staying flexible in its use of different companies’ video analytics because it expects an eventual shakeout.

“Each company has a specialty with a base package, such as counting,” Brotherhood says of different video analytics companies’ offerings. “Some are smarter than others.

“Analytics are not really being used that much, except for basic motion detection,” he asserts. “They are just beginning to be used.”

IQinVision, Lancaster, Pa., has basic video analytics such as motion detection, exposure windows and image cropping in all its cameras.

“At IQinVision, we are always looking at where video analytics offer a compelling value proposition,” emphasizes Paul Bodell, IQinVision’s chief marketing officer. “There are some obvious applications, but for many algorithms, it is not clear how we could market them.”

Less than 10 percent of the company’s IQeye cameras have third-party analytics embedded in them, he reports, but he expects that percentage to grow substantially over the next few years.

The quality of video analytics installations varies because it is not a plug-and-play solution, points out Gianni Arcaini, chairman and CEO, Duos Technologies, Jacksonville, Fla.

“Most companies that offer video analytics like to sell it in a box, but it’s not going to work that way,” Arcaini maintains. “That’s why there are so many companies that are unhappy with video analytics, because it’s not a box sale and it never will be.

“To make video analytics work properly requires much more than just a server and software,” he stresses.

Video content analytics (VCA) technology has been protecting railways like this in Israel for four years.

WHAT IS ANALYTICS?

Another reason for confusion over the capabilities of video analytics is that some companies have simply dressed up the older technology of video motion detection and called it analytics.

Talmon classifies video analytic events into three types: those triggered by motion; by non-motion, such as unattended luggage; and by the behavior of people, such as loitering, tailgating or slipping and falling.

The algorithms detect changes in the pixels of a digitized image, Talmon explains. “Everything begins with analyzing some change in pixels — this is what our software in the field is doing,” he relates. “It extracts some basic information from a change in the pixels, whether a motion-related or static change or a change in the background or foreground.

“Then this basic information is sent over the network to the server, and it analyzes and tracks these features and tries to understand exactly what it is — is it a human being or vehicle or animal or shadow or rain drops?” he continues about his company’s system. “It tries to understand what is going on, and then compares this activity or event with the rules that have been defined for this specific camera.”

Different rules can be defined for different cameras and even different times of day for the same camera. “The system is constantly looking for violations of any of these rules,” Talmon emphasizes. “A typical installation of ours will include hundreds of rules.”
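
In rough outline, the division of labor Talmon describes might look like the sketch below: lightweight pixel-change features extracted at the camera, then classification and rule checking on the server. The field names, the toy classifier thresholds and the rule format are illustrative assumptions, not Agent VI’s actual design.

```python
from dataclasses import dataclass
from datetime import datetime, time

# Illustrative sketch of the edge/server split Talmon describes.
# Field names, the toy classifier and the rule format are assumptions.

@dataclass
class PixelChangeFeature:          # extracted at the camera ("edge")
    camera_id: str
    timestamp: datetime
    bbox: tuple                    # (x, y, width, height) of the changed region
    area: int                      # number of pixels that changed
    velocity: float                # pixels per second; near zero for a static change

def classify(feature: PixelChangeFeature) -> str:
    """Server-side guess at what caused the pixel change (toy heuristic)."""
    if feature.velocity < 1 and feature.area > 500:
        return "static_object"     # e.g. unattended luggage
    if feature.area > 5000:
        return "vehicle"
    if feature.area > 800:
        return "person"
    return "noise"                 # shadows, rain drops, etc.

# Rules can differ per camera and per time of day, as Talmon notes.
RULES = [
    {"camera_id": "cam_lobby", "object": "person",
     "after": time(22, 0), "before": time(6, 0), "alert": "after-hours entry"},
    {"camera_id": "cam_platform", "object": "static_object",
     "after": time(0, 0), "before": time(23, 59), "alert": "unattended object"},
]

def check_rules(feature: PixelChangeFeature) -> list:
    """Return the alerts whose rule this classified event violates."""
    label = classify(feature)
    hits = []
    for rule in RULES:
        if rule["camera_id"] != feature.camera_id or rule["object"] != label:
            continue
        t = feature.timestamp.time()
        # A rule whose start time is later than its end time spans midnight.
        in_window = (t >= rule["after"] or t <= rule["before"]) \
            if rule["after"] > rule["before"] else (rule["after"] <= t <= rule["before"])
        if in_window:
            hits.append(rule["alert"])
    return hits
```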

Some companies insist that metadata — a stream of computer code that describes what is seen in the video — separates real video analytic systems from the pretenders.

“For a technology to be truly intelligent, it has to demonstrate three characteristics,” emphasizes Edward Troha, director of marketing for ObjectVideo Inc., Reston, Va. “One is that it needs to be based on the science of computer vision — it needs to be artificially intelligent in nature.

“Two, it needs to be able to separate the background from the moving object or foreground objects,” he lists. “The third characteristic, which in our opinion is vitally important for a system to be truly intelligent, is that the system must produce a stream of metadata that can then be queried at a later time or produce rule violations in real time. That metadata must be available to the user in some way so they can extract some business value from it.”
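
What such a metadata stream actually contains is vendor-specific. As a rough illustration only, each tracked object might be described by a small structured record traveling alongside (or instead of) the video; every field name below is a hypothetical example, not any particular product’s schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical example of one metadata record describing a tracked object.
# Real products define their own schemas; these fields are assumptions meant
# only to show the idea of "a stream of computer code that describes what is
# seen in the video."
record = {
    "camera_id": "lot_entrance_03",
    "timestamp": datetime(2008, 6, 12, 14, 3, 27, tzinfo=timezone.utc).isoformat(),
    "object_id": 4182,
    "object_class": "vehicle",       # vs. person, animal, unknown
    "color": "red",
    "bbox": [412, 220, 160, 90],     # x, y, width, height in pixels
    "speed_px_per_s": 35.2,
    "zone": "parking_lot_entry",
}

# The same record can trigger a real-time rule or be stored for later queries.
print(json.dumps(record, indent=2))
```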

“If it can tell you an object was entering the parking lot, that’s advanced motion detection, but if it can tell you the difference between a car, a deer, a newspaper blowing across the parking lot and a person, in my mind, that’s real analytics,” Ramsey asserts.

Metadata also can be used for forensic purposes, points out Mike Gardner, vice president of operations for Video IQ Inc., Bedford, Mass.

“If you process all your video, you build up this big library of metadata like a card catalog, and you can send queries,” Gardner points out, such as for all the red cars seen in a parking lot for the last month. “You can search 30 days of video in about 10 seconds.”
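
Gardner’s card-catalog analogy suggests a query along these lines over stored metadata records (reusing the hypothetical record format sketched earlier); the speed he cites comes from scanning compact metadata rather than decoding 30 days of video.

```python
from datetime import datetime, timedelta, timezone

def red_cars_last_month(records):
    """Hypothetical forensic query: every red vehicle seen in the parking lot
    during the last 30 days, pulled from stored metadata records rather than
    from the raw video itself."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    return [
        r for r in records
        if r["object_class"] == "vehicle"
        and r["color"] == "red"
        and r["zone"].startswith("parking_lot")
        and datetime.fromisoformat(r["timestamp"]) >= cutoff
    ]
```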

Dr. Ting Yu of GE Global Research, Niskayuna, N.Y., works on the retail analytics test bed at the GE Global Research Center. The monitor shows the analytics system detecting and tracking pedestrians entering and leaving a virtual zone in an outdoor courtyard at the research center.

ANALYTICS ON THE EDGE

Even if some people don’t know a lot about video analytics, they know the trend is to put it “on the edge” — in devices like cameras, encoders that convert analog video to digital, DVRs and NVRs.

“The advantage is potential savings in bandwidth, from not transmitting video all the time, and in storage,” points out Nik Gagvani, Ph.D., chief technology officer of Cernium Corp., Reston, Va. “The added advantage is that it’s right there in the camera — it’s an integrated device, so you’re not managing two different pieces of technology.”

More intelligent use of bandwidth is cited by Joe Krisciunas, business program manager for GE Enterprise Solutions, Bradenton, Fla. “You don’t have to select four, eight or 16 video streams that you process at a given time — you’ll have it spread across cameras and only use the bandwidth of piping the detailed video,” Krisciunas explains.

“As the edge devices get smarter, you can direct the attention and bandwidth to where there’s an event going on and get more coverage and more productivity in your allocation of network and human resources to focus on areas where there is an event,” he notes.
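
A minimal sketch of the bandwidth trade-off Gagvani and Krisciunas describe: the edge device analyzes and discards quiet video, and forwards the detailed stream only while an event is active. The bit rates, function names and numbers below are illustrative assumptions.

```python
# Illustrative sketch of edge-side bandwidth management: nothing is sent while
# the scene is quiet; full-rate video is forwarded only during an event.
# All names and figures here are hypothetical.

LOW_RES_KBPS = 0          # nothing sent while the scene is quiet
FULL_RES_KBPS = 4000      # detailed stream sent only during an event

def megabytes_per_hour(event_minutes: int) -> float:
    """Rough per-camera transfer for an hour with the given minutes of events."""
    quiet_minutes = 60 - event_minutes
    kbits = quiet_minutes * 60 * LOW_RES_KBPS + event_minutes * 60 * FULL_RES_KBPS
    return kbits / 8 / 1024

def on_frame(camera_id: str, event_detected: bool, frame: bytes, send):
    """Edge policy: push the frame upstream only when an event is active."""
    if event_detected:
        send(camera_id, frame)   # detailed video for this camera only
    # otherwise the frame is analyzed and discarded at the edge

# Example: 5 minutes of events per hour instead of continuous streaming.
print(f"{megabytes_per_hour(5):.0f} MB/hour with events only vs "
      f"{megabytes_per_hour(60):.0f} MB/hour for continuous video")
```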

Adds Marco Graziano, founder and CEO of Eptascape Inc., Sunnyvale, Calif., “One advantage of separating content extraction and having it on an edge device is that it is the only way to make the system affordable and scalable.

“It would make a lot of sense to put analytics in the camera, but I see the camera market as very cost-sensitive,” Graziano points out. “It’s possible video encoders may be a better place for content analysis until the cost of cameras goes down.”

For one thing, in encoders, the analytics can be used on several cameras. Ramsey notes that in applications where cameras are damaged or stolen, making them more expensive by adding analytics to them may not be cost-effective. In these cases, the analytics might be best in the encoder or server.

“Analytics at the edge is a very big buzzword — a lot of people are focusing on it,” she concedes. “In one sense, it’s easier to put in the camera than a DVR, because with the camera, you only have to operate one channel. In a DVR, you’ve got eight, 16, 32 or more.

“Although I think there’s a lot of excitement in the industry about putting [analytics] in the camera, and a lot of people are doing it, only a small percentage, less than 20 percent of the market, will use it,” she maintains.

“There are customers who really benefit from a decision to keep and store and forward video at the camera or to throw away video to manage network requirements,” Ramsey notes. “So that’s a big driving requirement for intelligence at the edge.” But depending on their industry, other customers may want to store all their video for up to 90 days.

“We really see a world in which intelligence is distributed, and some customers will have it at the edge, some in DVRs and NVRs, and others will have it in a command and control center monitoring station, or a hybrid of all three,” Ramsey suggests.

She cites one typical application — people counting — in which only the metadata with the count and not the video is transmitted. Homeland security applications typically store video because its significance may only be determined in retrospect, perhaps years later.
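
For the people-counting case Ramsey cites, the payload leaving the edge could be as small as a periodic count report; the JSON fields here are hypothetical.

```python
import json

# Hypothetical count-only report for the people-counting case: the video stays
# at the edge and only this small record crosses the network.
report = {
    "camera_id": "store_entrance_01",
    "interval_start": "2008-06-12T14:00:00Z",
    "interval_end": "2008-06-12T14:15:00Z",
    "people_in": 47,
    "people_out": 39,
}
print(len(json.dumps(report)), "bytes instead of 15 minutes of video")
```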

Rustom Kanga, Ph.D., CEO of iOmniscient Corp., New York, cites the Spanish railway, which places his company’s software in edge devices along its tracks.

“It’s very appropriate to use an edge device, because you don’t want to clog up the network with information from thousands of cameras to a central location,” Kanga points out. “You want to do your analysis at the edge and only send back information when there is an event.

“The same customer in a railway station will use a centralized system, because it is all in very close proximity, and they will use computers for the analytics, which are less expensive than edge devices,” he notes. “In that centralized environment, there’s room for computers, whereas in a distributed environment, it makes much more sense to use an edge device. I think there’s a place for both.”

Video analytics can be used to conserve bandwidth from megapixel cameras, Gardner points out. “You can use analytics to determine what part of a megapixel image you want to send back,” he notes.
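
Gardner’s point can be pictured as a simple crop: the analytics decides which region of the megapixel frame is worth sending. The array sizes and margin below are illustrative assumptions, not Video IQ’s implementation.

```python
import numpy as np

def crop_to_detection(frame: np.ndarray, bbox, margin: int = 32) -> np.ndarray:
    """Return only the part of a megapixel frame around a detected object, so
    that region (rather than the whole image) is sent over the network.
    bbox is (x, y, width, height) in pixels; margin adds context around it."""
    x, y, w, h = bbox
    top = max(0, y - margin)
    left = max(0, x - margin)
    bottom = min(frame.shape[0], y + h + margin)
    right = min(frame.shape[1], x + w + margin)
    return frame[top:bottom, left:right]

# Example: a 3-megapixel frame reduced to a small region of interest.
frame = np.zeros((1536, 2048, 3), dtype=np.uint8)   # 2048 x 1536 sensor
roi = crop_to_detection(frame, bbox=(900, 700, 120, 200))
print(frame.nbytes // 1024, "KB full frame ->", roi.nbytes // 1024, "KB region")
```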

Video analytics can detect this person placing an object on railroad tracks.

THE FUTURE

One of the most mentioned vertical markets for video analytics is retail. “Retail is the number one vertical for video surveillance equipment,” reports Krisciunas. “Eighty percent of retailers reported using video as part of their loss prevention. In 2010, retail is expected to be the largest market for video analytic devices.”

Other popular verticals for video analytics include government, large corporations, education, health care, transportation and mass transit facilities like airports and parking garages.

“I believe that within a few years, any video surveillance camera will have some type of intelligence or analytic capability,” predicts Talmon. “Video analytics will penetrate numerous vertical markets, not only security as it is today. We already see the beginning of its application in retail, traffic and some other verticals.

“I think we will see many companies or groups also offering particular types of analytics,” Talmon forecasts. “I believe we will see a few companies offering not only pure algorithms, but also platforms that enable customers and sales developers to develop more and more algorithms in the future.”

Kanga agrees that better algorithms are in the future. “Ninety-nine percent of the companies you see out there are doing very simple things, and the future is going to be where the systems start doing much more sophisticated things,” he says.

One example he mentions is not just recognizing unattended baggage but also being able to recognize the luggage cart that it is on.

“It might take them two or three years to get there, and the market might walk past them, which is why they tend to use an existing algorithm,” Kanga says of video analytics companies. “We are still in our infancy in terms of what we could do if we look out a few years.”

Sidebar: I’ve Got Algorithms

Gadi Talmon, vice president of business development at Agent VI, Fort Myers, Fla., explains the capabilities of algorithms. “Algorithms today can detect 20 different types of events, including events that are actually combinations of several sub-events in different cameras,” he maintains.

Writing algorithms can be a trial-and-error experience based on such variable factors as human behavior. “It takes years of deployment in the field to understand all the movement patterns, and analytics is analyzing movement patterns,” asserts Carolyn Ramsey, director of program management for Honeywell Systems Group, Louisville, Ky.

Algorithms to provide security at open-air auto lots after hours have to distinguish between differing human behaviors.

“The behavior of someone who wants to key a car looks very different from someone who’s admiring a car for potential purchase,” Ramsey notes. “A potential buyer stays longer and looks at it from different angles, and touches different things. You touch the window with the sticker. If you’re keying a car, you don’t stand there and read that. So there are very interesting things you can tell that take analytics deeper into behavioral analysis.”
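
As a toy illustration of the kind of behavioral cues Ramsey describes (and emphatically not Honeywell’s algorithm), dwell time, how many sides of the car a visitor walks around and whether he or she pauses at the window sticker could feed a simple rule:

```python
# Toy illustration only: the thresholds and features below are assumptions.
# It shows how the behavioral cues Ramsey mentions (dwell time, viewing
# angles, stopping at the window sticker) could be turned into a simple rule.

def classify_lot_visitor(dwell_seconds: float,
                         sides_of_car_visited: int,
                         paused_at_sticker: bool) -> str:
    if dwell_seconds > 90 and sides_of_car_visited >= 3 and paused_at_sticker:
        return "likely shopper"
    if dwell_seconds < 30 and sides_of_car_visited <= 1:
        return "possible vandal - alert operator"
    return "undetermined - keep tracking"

print(classify_lot_visitor(150, 4, True))   # likely shopper
print(classify_lot_visitor(20, 1, False))   # possible vandal - alert operator
```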

Some maintain too much emphasis is placed on the algorithms used in video analytics. “It’s much more about the architecture than the algorithm,” maintains Larry Barfield, vice president of government programs for SightLogix, Princeton, N.J. His company manufactures edge-based intelligent video analytics sensors that use several DSPs.

“A multi-DSP architecture tightly integrated with the optics core is required to achieve a reliable video analytics solution that gives the performance needed for the automation of outdoor perimeter surveillance,” Barfield insists.

In addition to running the analytics algorithm, his camera also processes raw video at full resolution and full frame rate, stabilizes the image, corrects for lighting variations and resolves the target’s GPS location.
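
The chain Barfield describes could be outlined as a sequence of stages; the stage names below are schematic, not SightLogix’s code, and the pixel-to-GPS step assumes a simple precomputed ground-plane mapping purely for illustration.

```python
# Schematic outline (not SightLogix's code) of the on-camera chain Barfield
# describes: full-resolution processing, image stabilization, lighting
# compensation, detection, then mapping each target to a GPS position.
# The pixel-to-GPS mapping assumes a precomputed affine georegistration of
# the ground plane, an illustrative shortcut.

def pixel_to_gps(px: float, py: float, georef: dict) -> tuple:
    """Convert an image coordinate to (lat, lon) using a precomputed affine
    ground-plane mapping (an assumption for illustration)."""
    lat = georef["lat0"] + georef["d_lat_dx"] * px + georef["d_lat_dy"] * py
    lon = georef["lon0"] + georef["d_lon_dx"] * px + georef["d_lon_dy"] * py
    return lat, lon

def process_frame(raw_frame, georef, stabilize, normalize_lighting, detect):
    """One pass through the stages Barfield lists, with each stage passed in
    as a function so the outline stays independent of any particular library."""
    frame = stabilize(raw_frame)              # image stabilization
    frame = normalize_lighting(frame)         # correct for lighting changes
    detections = detect(frame)                # full-resolution analytics
    return [
        {"bbox": d["bbox"],
         "gps": pixel_to_gps(d["bbox"][0], d["bbox"][1], georef)}
        for d in detections
    ]
```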

Gianni Arcaini, chairman and CEO, Duos Technologies, Jacksonville, Fla., agrees. “The entire industry is not working to detect anomalies — the industry’s main goal is to avoid false positives,” Arcaini asserts. “That is where most work goes in. Believe it or not, our code is 1.6 million lines, of which 1.4 million lines are just to avoid false positives.”