Many people love our analytics, but few truly understand them. Most misconceptions stem from a misunderstanding of how VideoIQ analytics function versus the numerous advanced motion detection alternatives. To truly comprehend the differences, we must first examine the basics.
Advanced motion detection systems rely mostly on pixel change analysis, sometimes complemented with simple algorithms. To put it simply, each camera has a field of view with (usually) a consistent background. When a certain number of pixels deviate from the norm, an alarm is triggered. On a calm, sunny day, such an approach can be quite effective; in challenging conditions, however, it quickly becomes troublesome. When a tree is shaking in the wind or snow is falling, many pixels are changing at once, often resulting in false alarms. Worse, the camera can no longer maintain a stable background model, and detection begins to break down as it struggles to distinguish what is deviating from the background from the background itself.
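The pixel-change approach described above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual algorithm; frames are grayscale images as 2-D lists of 0-255 values, and the threshold names and defaults are invented for the example.

```python
def motion_alarm(background, frame, pixel_thresh=30, count_thresh=4):
    """Trigger an alarm when enough pixels deviate from the learned background.

    background, frame: same-sized 2-D lists of grayscale values (0-255).
    pixel_thresh: how far a pixel must differ to count as "changed".
    count_thresh: how many changed pixels trigger the alarm.
    """
    changed = 0
    for bg_row, fr_row in zip(background, frame):
        for bg_px, fr_px in zip(bg_row, fr_row):
            if abs(fr_px - bg_px) > pixel_thresh:
                changed += 1
    return changed >= count_thresh
```

The weakness is visible in the code itself: a swaying tree or falling snow changes many pixels at once and crosses `count_thresh` just as readily as an intruder does, which is exactly where the false alarms come from.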
Similarly, our analytics use pixel change to detect the first sign of a threat. However, rather than immediately identifying the object as a threat, our cameras think, comparing the object's appearance and the way it moves to an immense database of images and video.
Red boxes indicate the object is a person, while blue boxes are placed around vehicles. If the camera cannot immediately classify the object, users will see a yellow bounding box appear, indicating a suspicious object – the camera’s way of saying, “I know it’s there, I just don’t know what it is yet”. The camera will then watch the object for a few more frames, gathering the necessary information to properly classify it or ignore it if it fails to match human or vehicle criteria.
While most of the time the camera immediately identifies the object, taking a few extra frames to decide when uncertain can drastically reduce false alarms with negligible impact on alert time.
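The classify-or-wait behaviour described above can be expressed as a small state machine. This is a hypothetical sketch of the idea, not VideoIQ's implementation: `frame_labels` stands in for the real per-frame appearance/motion matcher, and `max_frames` is an invented patience limit.

```python
# Colors follow the article: red for people, blue for vehicles,
# yellow while the object is still "suspicious" (unclassified).
BOX_COLORS = {"person": "red", "vehicle": "blue", "suspicious": "yellow"}


def track_object(frame_labels, max_frames=5):
    """Watch an object for a few frames and settle on a final label.

    frame_labels: per-frame classifier output, "person", "vehicle",
                  or None while the object is still ambiguous.
    Returns the confident label, or "ignored" if it never matches
    human or vehicle criteria within max_frames.
    """
    for label in frame_labels[:max_frames]:
        if label in ("person", "vehicle"):
            return label  # confident match: box turns red or blue
    return "ignored"      # never matched; drop it instead of alarming
```

Deferring the decision this way is what trades a few frames of latency for far fewer false alarms: an ambiguous blob stays yellow instead of immediately triggering an alert.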
The final differentiator is VideoIQ’s rules. While advanced motion detection will set off an alarm nearly every time an animal or car moves through the field of view, VideoIQ users can configure rules to only alert them in certain scenarios.
Perhaps the camera watches a railroad for people walking on the tracks. A motion-based system would be riddled with false alarms as each train passing would trigger an event. On the other hand, a VideoIQ camera or encoder could be told to ignore vehicles and only send alerts for people entering that region of interest. Such rules can be used alone or in conjunction with one another to create specific alerts based on things like dwell time or direction of travel.
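The railroad example above amounts to a simple rule filter: classified detections come in, and only people inside the region of interest raise an alert. This sketch assumes a rectangular region and point detections; the function and field names are invented for illustration.

```python
def should_alert(detections, roi):
    """Apply the 'people only, inside this region' rule.

    detections: list of (label, x, y) tuples from the classifier.
    roi: (x_min, y_min, x_max, y_max) region of interest, e.g. the tracks.
    Vehicles (such as passing trains) never trigger an alert.
    """
    x_min, y_min, x_max, y_max = roi
    return any(
        label == "person" and x_min <= x <= x_max and y_min <= y <= y_max
        for label, x, y in detections
    )
```

Rules like dwell time or direction of travel would compose the same way: additional predicates joined onto the per-detection check, so each train passes silently while a trespasser on the tracks still raises an alarm.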
While I’ve barely scratched the surface, the above is a simplified way of differentiating our technology from advanced motion detection. Watch the video to see how VideoIQ analytics can battle the harshest conditions and still deliver superior results: securitysa.com/*VideoIQ1
© Technews Publishing (Pty) Ltd | All Rights Reserved