The consequences of false alerts

May 2019 Editor's Choice, Surveillance, Integrated Solutions

Natural events like storms, wind, and even the shifting of the sun during the course of the day can lead to an excessive number of false alarms for electronic systems, as any alarm monitoring control room could tell you. Similarly, where cameras are viewing long fence lines, the movement of birds or waving of vegetation close to the camera can confuse video analytics into issuing a number of false alarms despite the best efforts of so-called ‘learning systems’.

I’ve been in control rooms where the alarm facility has been issuing several alarms per second, although even several alarms per minute, or per hour, can cause overload and frustration. Standalone systems are either switched off, or placed in a corner where they won’t disturb anybody.

Where video monitoring relies on blank screen or black screen technology, similar issues arise when false alarms are triggered too frequently. On one site I visited, the operator had to report and log every violation even though virtually all were false, and almost all of his time was spent on this, with no time left for actually viewing the cameras that his job was supposed to be based on. These circumstances are obviously the worst case for automated detection systems, and the industry has attempted to address many of these concerns, albeit with varying degrees of success. What comes through, though, is that false alarms are the potential killers of technology success, whether one terms it analytics, intelligent systems, or AI.

Intelligence killer

Without doubt, security technology systems have made huge progress over the past few years. The advent of big data and the interfacing of technologies has made things possible that few would have believed 20 years ago. Yet the notion that technology will solve all our problems has consistently fallen short. Indeed, even some of the most sophisticated integrated systems are still delivering a ‘condition notification’ to a human operator for verification.

Part of the reason technology systems have failed to be ‘the solution’ is that people often have a technology looking for a problem it can solve, rather than one developed in response to a specific problem, which implies a far more directed solution. Salespeople are at the forefront of pushing technologies that sadly get binned or underutilised because they are not really fit for operational purpose, or that sound great but fail under real-world conditions. These failures can be attributed to a variety of issues, but for me one of the most crucial, and most often ignored, factors is the false alarm rate.

When looking at the issue of false alarms with security technology systems, one needs to consider two issues. False alarms are typically associated with ‘false positives’, where the system identifies a problem which in fact is not an issue. Put simply, a facial recognition system might flag a person as a suspect when he or she is in fact another, perfectly innocent person. A similar issue could arise with number plate recognition or video motion detection. However, one can also have ‘false negatives’, where the system assumes it has correctly verified a condition but has in fact made an error: for example, where passengers on an airliner have been screened by facial recognition against known terror suspects, and the system has incorrectly accepted one of those suspects onto the flight as a normal passenger.
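The distinction comes down to the four possible outcomes of any detection decision. Below is a minimal sketch of that mapping; the labels and function are generic illustrations, not tied to any particular product.

```python
# The four possible outcomes of a detection decision, as described above.
# 'flagged' is what the system reports; 'genuine' is the ground truth.

def classify_outcome(flagged: bool, genuine: bool) -> str:
    if flagged and genuine:
        return "true positive (correct alert)"
    if flagged and not genuine:
        return "false positive (false alarm, e.g. innocent person flagged)"
    if not flagged and genuine:
        return "false negative (missed detection, e.g. suspect waved through)"
    return "true negative (correctly ignored)"

for flagged in (True, False):
    for genuine in (True, False):
        print(f"flagged={flagged}, genuine={genuine}: "
              f"{classify_outcome(flagged, genuine)}")
```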

To put this in context, a system with a 90% accuracy rate is still identifying 10% of people incorrectly. So, say you want to identify passengers on an Airbus A380, which seats from about 544 in a three-class configuration up to around 868 if you really cram people in. Ten percent of your passengers are going to be misidentified, a figure that works out at anything from 54 to 86, depending on the configuration.
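The arithmetic is simple enough to sketch out. In the snippet below, the A380 capacities and the 90% accuracy rate are the illustrative figures quoted above, not measurements of any real recognition system.

```python
# Illustrative arithmetic only: the A380 capacities (544 three-class,
# 868 high-density) and the 90% accuracy rate are the figures quoted
# above, not properties of any real system.

def misidentified(passengers: int, accuracy: float) -> int:
    """Expected number of passengers identified incorrectly (truncated)."""
    return int(passengers * (1 - accuracy))

for passengers in (544, 868):
    print(f"{passengers} passengers at 90% accuracy: "
          f"~{misidentified(passengers, 0.90)} misidentified")
```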

Operational disruptions

So what one needs to do is evaluate the impact of those false positives and negatives, which I will generalise as false alarms. I would say there are three main areas of impact. The first is the potential disruption of reviewing and dealing with the false alarms, which may require significant manpower that is diverted from primary tasks, or that has to be recruited to handle the added verification processes or investigations.

The second is the risk of people or conditions getting through without being detected, and the damage this could cause. The third is the damage that failure to perform does to reputation or business viability. The more critical the outcome, the greater the potential damage to the business in terms of financial losses or public relations. A side issue is how failures to detect affect management confidence in the systems, and possibly in the security function itself.

I find that the idea of how technology can be used for security often suffers from a poor conception of what needs to be addressed and how it should be applied from a management point of view. Often the providers of the technology have no security background or understanding of the issues involved, but are pushing it as a solution. If they don’t understand the conditions under which the technology would be successful, the system is almost automatically going to be prone to false alarms.

Under these conditions, security personnel bear the brunt of trying to make it work. In the case of surveillance, for example, even the people operating the systems aren’t always sure what they are supposed to be looking for, and yet a technology built by even less aware people is supposed to be providing it for them. Often surveillance operators don’t know the criminal dynamics, never mind the people programming the systems.

This isn’t restricted to the operational interface. Even in research institutions and universities, some of the researchers have very little grasp of the real issues they are supposed to be developing the technologies for. Faced with this broad-based approach and generalisation, the potential for multiple false alarms is built into the system from the start. Alternatively, providers ignore the implications of high levels of false alarms in the hope of a sale, or simply don’t care about the consequences such distractions have for operators.

Built-in learning

Video analytics or AI solution providers will often rationalise the potential problems away by saying that their system comes with ‘built-in learning’, which is presented as addressing all the concerns raised. However, there are a number of issues with that claim.

It presumes that incident conditions will occur often enough, and be similar enough to each other, for the system to learn from, and that they will be sufficiently different from ‘normal’ conditions. While more blatant differences, such as moving counter to the normal flow, stand out and are easily identifiable, that doesn’t necessarily mean they are incident conditions.

As I emphasise on my training courses, incident conditions will change the pattern, but not everything that changes a pattern is an incident condition. Ultimately, knowledgeable humans will be required to define or confirm what the ‘built-in learning’ has flagged, and with high numbers of false positives or negatives, this verification can impose a significant overhead on normal operations.

There are also questions about the generalisability of learnt actions by systems to other areas or conditions. There is often an assumption that the system can learn and then apply these rules to different companies or locations. Yet situational awareness can vary substantially from place to place even within the same industry, and there are often local characteristics that change the dynamics of recognition.

Another approach is to ‘tune’ or even ‘desensitise’ the sensitivity of the system to reduce false positives. Tolerances can be tightened, for example, to increase the certainty that a detected condition is real. However, in doing so you make the criteria so stringent that you start missing conditions which are close but don’t match exactly.
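The trade-off can be illustrated with a minimal sketch, assuming a detector that assigns each event a confidence score which is compared against a single tunable threshold; the scores and ground-truth labels below are invented purely for illustration. Raising the threshold eliminates the false positives but begins to miss genuine incidents.

```python
# Minimal sketch of the threshold trade-off described above. The scores
# and labels are invented; real detectors produce a confidence score per
# event, which is compared against a tunable alarm threshold.

events = [  # (detector confidence score, was it a genuine incident?)
    (0.95, True), (0.88, True), (0.72, True),    # incidents, one a near-match
    (0.81, False), (0.60, False), (0.40, False), # benign events
]

def alarm_counts(threshold: float):
    false_pos = sum(1 for score, incident in events
                    if score >= threshold and not incident)
    false_neg = sum(1 for score, incident in events
                    if score < threshold and incident)
    return false_pos, false_neg

for threshold in (0.5, 0.75, 0.9):
    fp, fn = alarm_counts(threshold)
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")
```

Running this shows the pattern the paragraph describes: at a lenient threshold of 0.5 there are two false positives and no false negatives, while at a stringent 0.9 the false positives disappear but two genuine incidents are missed.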

The danger arising from this is that the AI becomes desensitised to the extent that it generates an increasing number of false negatives. One of the major problems with false negatives is that they may never be known to the user. For example, if a group of people within a company have been stealing for some time without being detected, and the AI recognition system has come to regard the behaviour as ‘normal’ as part of its learning, relying on such a system will allow the group to carry on stealing indefinitely, unless other indicators of loss prompt a questioning of the status quo.

Quality of the input

The quality of video analytics, intelligent systems, or AI will vary from product to product. I recently viewed a number plate recognition system that performed exceptionally well at picking up number plates in less-than-ideal lighting conditions; other programmes may not do as well under the same conditions. Similar issues with less-than-ideal detection conditions can occur in the interpretation of behaviour, differentiation of colour, recognition of faces, picking up elements on a full-body X-ray, or coping with environmental conditions.

The original quality of content, such as the facial photographs used for comparison, may be a critical factor in the performance of camera-based facial recognition against the target, and hence in the level of false alarms. False positives or negatives may also arise because people try to get around the systems for criminal or even personal reasons, or because of the difficulty of getting people to adhere to the conditions that allow the system to work effectively, such as the posture or angle of the body or face presented to the recognition systems.

Cheats and ways of getting around the systems are going to be intrinsic to any system where human behaviour is being controlled or observed. Cloned or fake number plates to defeat number plate recognition, or people hiding their faces from facial recognition, will occur as a matter of course.

Video analytics, intelligent systems, ‘AI’, and the use of big data offer huge potential advantages in the collection of intelligence, the detection of adverse conditions, and decision making. However, the likelihood of such systems replacing people is still relatively remote, given their imperfect performance and the potential for false alarms and false negatives.

I’ve been through a number of full-body scanners at airports over the last few years, and the scanners are clearly generating a large number of false alarms that then have to be verified by a physical check from a security officer. Relying purely on automated recognition in this respect would cause chaos at some major international airports.

Caveat emptor

When making a decision on implementing an ‘intelligent’ system, consideration of false alarms may mean the decision rests not on how well the system works, but on how it fails. Further, you will need to consider the implications of those false alarms. If, for example, a single failure could cause severe damage to an organisation’s reputation, or result in the company being sued, then the costs clearly become an important part of any management decision. If the implications are minimal, then there isn’t such an issue.
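One hypothetical way of weighing these implications before purchase is a simple expected-cost estimate, sketched below. Every rate and cost figure in it is an invented placeholder; real values would have to come from vendor trials and the organisation’s own risk assessment.

```python
# Hypothetical expected-cost model for weighing up a detection system.
# All rates and costs are invented placeholders, not vendor data.

def annual_error_cost(events_per_year: int,
                      fp_rate: float, cost_per_fp: float,
                      fn_rate: float, cost_per_fn: float) -> float:
    """Expected yearly cost of false positives plus false negatives."""
    return (events_per_year * fp_rate * cost_per_fp
            + events_per_year * fn_rate * cost_per_fn)

# Example: 100 000 screened events a year; each false alarm costs 15 minutes
# of operator time (say 50 currency units), each missed incident an assumed
# 250 000 in damages.
cost = annual_error_cost(100_000, fp_rate=0.02, cost_per_fp=50,
                         fn_rate=0.0005, cost_per_fn=250_000)
print(f"Estimated annual cost of detection errors: {cost:,.0f}")
```

Even with these made-up numbers, the exercise makes the point: the rare false negative dominates the total cost, which is why the consequences of a miss deserve at least as much scrutiny as the volume of false alarms.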

The contribution of these technology systems to management decision making, or even to ongoing operational performance, is still going to rely on people who make the final decision, or who are required as part of a verification process. In the pursuit of efficiency, the systems may be ideal for shortlisting suspects, identifying conditions that need to be checked, or providing information to facilitate quicker decision making: all part of what we can call augmented surveillance. However, your confidence in purchasing a detection security technology is going to be highly dependent on the false alarm rate, the number of potential hits and misses over a particular period, the effort needed to address false positives, the consequences of false negatives going undetected, and the financial and organisational impact of such failures. ‘Caveat emptor’.


