Enhancing control room effectiveness

Issue 2 2021 Editor's Choice, Surveillance

I recently participated in an international online seminar discussing AI detection of incident conditions and smart operators and how well these two factors can come together in the control room (see https://lnkd.in/dqtxpVD). The seminar had people presenting from a number of perspectives including research, solution providers and operational and ethical perspectives.

In looking at the topic of smart operators and the need for effective analytics for detection when preparing for the presentation and observing the content of what people were discussing with respect to AI, there were a few factors that came through for me as reflecting the current status in the area. I’ve used some of the comments and perspectives from the seminar as well as the insights in my preparation to come up with some ideas about the state of AI in contributing to control room effectiveness.

The reality behind the hype

Without doubt, there is huge hype in the security industry around AI applications. The label sometimes stretches the definition of AI, but it provides a convenient umbrella for a range of analytics techniques from the simple to the sophisticated. Many of these are currently being developed or adapted to compensate for limitations in detection coverage and capability within a control room.

The limit to CCTV coverage has always been the camera-to-operator ratio, set against the demand to cover as much area as possible. Too many cameras means not seeing what is in front of you; too few means not getting enough coverage. There are ways of strategically prioritising viewing so that operators look at the most important cameras at the most critical times, but anything that enhances the viewing process is welcome, and video and other analytics open up opportunities to do that. This brings into focus whether the capabilities are as good as the hype suggests and whether they are actually useful in the real world of detection.
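Some rough arithmetic shows why this matters. The figures in the short sketch below are purely hypothetical assumptions, not benchmarks, but they illustrate how little of any one camera's feed an operator can realistically watch on a sequentially switched wall.

```python
# Rough illustration of the camera-to-operator trade-off.
# Every figure here is a hypothetical assumption, not a benchmark.
cameras = 120          # cameras assigned to one operator (assumed)
monitors = 8           # spot monitors the operator actually watches (assumed)
dwell_seconds = 20     # time spent on each camera before switching (assumed)

# With sequential switching, how long before a given camera comes around again?
cycle_seconds = (cameras / monitors) * dwell_seconds
watched_fraction = dwell_seconds / cycle_seconds

print(f"Each camera is revisited roughly every {cycle_seconds / 60:.0f} minutes")
print(f"Any single camera is actively viewed only ~{watched_fraction:.0%} of the time")
```

On those assumptions, more than 90% of what happens in front of any one camera is never seen live, which is exactly the gap that analytics-based alerting is asked to close.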

The use of analytics in CCTV video has been around for some time, but users are still serving as test beds for some of the concepts, and expensive test beds at that. However, there are also some highly positive features that are becoming more commonplace and, as I mentioned in my talk, are almost mandatory for some sites and applications.

The basis of video analytics is to look at shapes, shifts, positioning or movement of pixel groupings and to build representations of behaviour or objects from them. This allows the analytics to calculate a match against specified or expected behaviours or, in many cases, deviations from expectations. Of those available for CCTV systems, movement-based analytics are probably the most common and the most workable. Common movement analytics include movement in an area where there shouldn't be any, the direction of movement (for example, towards a potential target), virtual trip wires, movement hot spots where there may be unusual concentrations or overcrowding, heightened activity levels within a crowd, or flow changes.
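To make the mechanics concrete, the sketch below shows the simplest form of such an analytic: frame differencing with a virtual restricted zone, using the open-source OpenCV library. The video file name, the zone coordinates and the thresholds are illustrative assumptions only; commercial analytics are considerably more sophisticated than this.

```python
import cv2

# Minimal frame-differencing sketch: flag movement inside a restricted zone.
# "camera.mp4", the zone coordinates and the thresholds are assumptions.
ZONE = (100, 50, 400, 300)   # x1, y1, x2, y2 of the sterile area (pixels)
MIN_AREA = 500               # ignore tiny pixel changes (noise, rain, etc.)

cap = cv2.VideoCapture("camera.mp4")
ok, prev = cap.read()
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev_gray, gray)                  # what changed between frames?
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        # Alert only if the moving object overlaps the restricted zone.
        if x < ZONE[2] and x + w > ZONE[0] and y < ZONE[3] and y + h > ZONE[1]:
            print("Movement detected in restricted zone")
    prev_gray = gray

cap.release()
```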


All of these have common application and can potentially be useful, especially when linked to some kind of interpretation. For example, unexpected movement in an area can be supplemented by the analytics judging whether the shape of the object violating the area is consistent with a person, or whether it may be an animal or another cause such as a plastic bag blowing in the wind or moving foliage. Alternatively, the movement of a person or object relative to another person or object may show some kind of relationship or intent between the two.
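A crude way to picture that kind of interpretation is a simple shape filter applied to whatever the movement analytic has picked up, as in the sketch below. The size and aspect-ratio thresholds are invented for illustration; real products use trained classifiers rather than fixed rules, but the principle of discarding motion that is not person-shaped is the same.

```python
def looks_like_person(width_px: int, height_px: int, min_height_px: int = 60) -> bool:
    """Crude shape filter: a standing person is noticeably taller than wide.

    The thresholds are illustrative assumptions, not values from any product.
    """
    if height_px < min_height_px:        # too small: bird, bag, moving foliage
        return False
    aspect = height_px / max(width_px, 1)
    return 1.2 <= aspect <= 4.0          # roughly upright, human-like proportions

# A tall 40x120 px moving blob passes; a low, wide 90x30 px blob does not.
print(looks_like_person(40, 120))   # True
print(looks_like_person(90, 30))    # False
```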

The ability to track people automatically across different cameras appears to be useful, especially if it can be done automatically when reviewing incidents, although I haven't seen many applications of this. Video review software that can search large amounts of video for faces and colours of clothing to assist in identification can be efficient in linking and displaying potential suspects in an area. Software features that filter certain things out of the video, or include only certain things, also help with review.

Of all of these, movement analytics have probably had the biggest success in practical application. They are almost mandatory around the perimeters of residential and business estates in South Africa, and of large sites such as airports and military installations around the world.

Everyday versus specialised use

Because analytics are based on algorithms, they tend to be focused and specialised. There is current research into using an AI-based video analytics approach as part of customs clearance to detect normal and micro facial expressions in response to specific questions, which are seen as having the potential to indicate deceptive behaviour. However, you would never find the same technology in the camera in your local supermarket.

The supermarket camera view is simply too broad for reading such behaviour, the algorithms are unlikely to be able to process the data, and the purpose of doing so would in any case be questionable. Even movement detection algorithms tend to be very specialised. Many of these algorithms are also developed in labs or universities where there is little experience of actual crime conditions.

The requirement for success is a critical mass of sample behaviours that represent what is being looked for. Lacking this, the analytics are often based on theoretical conditions, simple behavioural examples or simulated conditions, which may or may not represent real-life behaviour. For example, firearm detection in a simple sense is relatively easy if one looks at a defined shape in outline. Incorporate criminal behaviour along with the use of a firearm, however, and things can change considerably.

I’ve seen one instance in a mall of a robber simply pulling up his jacket slightly while standing next to a security officer, to warn him that he had better not consider reacting because the robber had a pistol in his belt and was demonstrating that he had the capability to use it. All credit to the operator who observed the closeness of the person to the security officer, the posture he showed while standing there, and the subtle lifting of the jacket to show part of the firearm.

Similarly, criminals will often only pull their firearm out when acting on the robbery, rather than waving it around beforehand. By then, the reactions of people around the gunman tell you more clearly that he has a gun than the gun itself does. So rather than firearm detection, the ability to pick up that people are lying on the ground in a bank or supermarket is probably a better way of detecting armed robberies. One of the key factors of this algorithm-based development is that you need to see whether it works for you in your environment and crime dynamics, without excessive learning requirements for both the analytics software and the operator.
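The 'people lying on the ground' idea can be sketched very simply, and I emphasise that the sketch below illustrates the principle rather than describing any existing product. It assumes some person detector already supplies bounding boxes, and all the thresholds are invented for illustration.

```python
def possible_robbery_indicator(person_boxes, min_people=4, prone_ratio=0.6):
    """Flag a scene where most detected people appear to be lying down.

    person_boxes: list of (width, height) boxes from any person detector.
    The thresholds are illustrative assumptions, not tested values.
    """
    if len(person_boxes) < min_people:
        return False
    prone = sum(1 for w, h in person_boxes if w > h)   # wider than tall = prone
    return prone / len(person_boxes) >= prone_ratio

# Five people detected, four of them in low, wide boxes: flag for review.
boxes = [(120, 50), (110, 40), (100, 45), (130, 55), (45, 130)]
print(possible_robbery_indicator(boxes))   # True
```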

The ability to learn

The capacity for learning is one of the areas where analytics has the ability to improve. However, this implies that someone has to teach the system, and this may be a person who is not familiar with all the crime behaviours, or who is supposed to be doing a job essential to service delivery while the teaching is going on. Learning is therefore dependent on people, time availability, accuracy of learning and breadth of exposure. Learning also implies predictability: environments where things are variable or subject to changing conditions are not easy to teach, or for the AI to retain as a basis for differentiation.

In many cases, analytics reflect something that is different rather than something that is wrong and the ability to differentiate these is critical. In one case, I’ve seen a flock of small birds flying around a perimeter triggering intrusion protection that was supposed to identify a person climbing over the wall. Therefore, the ability to accurately detect and to avoid false alarms is one of the key success factors in acceptance of analytics.

In my experience, AI is best in defined, less complex environments where rules can be clearly defined and variability is limited. As environments become more cluttered, visually noisy and subject to changing dynamics, AI starts to have more difficulty. Accuracy, and the false alarms that arise from false positives and the missed events that arise from false negatives, is critical to working with a smart operator. Too many alarms or too much ambiguity creates overload and increased stress. We need accurate detection of behavioural events so that operators have confidence in AI notifications.
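The scale of the problem becomes obvious with some simple arithmetic. The figures below are purely hypothetical, but even a false positive rate that sounds impressive on paper can swamp an operator once it is multiplied across a large camera estate.

```python
# How a small false-positive rate becomes an operator load problem.
# All figures are hypothetical assumptions for illustration.
cameras = 200
triggers_per_camera_per_day = 500    # candidate events the analytic evaluates
false_positive_rate = 0.01           # 1% of evaluated events wrongly flagged

false_alarms_per_day = cameras * triggers_per_camera_per_day * false_positive_rate
print(f"{false_alarms_per_day:.0f} false alarms per day")
print(f"~{false_alarms_per_day / 24:.0f} per hour for the operator to dismiss")
```

On those assumptions, the operator faces roughly a thousand false alarms a day, around forty an hour, before a single genuine incident has occurred.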

Improving or reducing value

Operators of AI equipment will continually make decisions, based on what the AI presents to them, that influence outcomes. The biggest danger of AI detection is introducing additional clarification or verification activities into a control room that is already under pressure. The more an analytics technique requires human verification, the more its value is reduced.
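Again, a rough and entirely hypothetical calculation makes the point: the verification burden is measured in operator minutes, and those minutes come out of the same hour that surveillance is supposed to fill.

```python
# Rough cost of human verification of AI alerts (assumed figures only).
alerts_per_hour = 40      # alerts the analytic passes to the operator (assumed)
verify_seconds = 45       # time to call up video and confirm or dismiss (assumed)

minutes_verifying = alerts_per_hour * verify_seconds / 60
print(f"{minutes_verifying:.0f} minutes of every hour spent verifying alerts")
```

At those assumed rates, half of every hour goes on verification alone, which is exactly the kind of added pressure that erodes rather than adds value.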

If a person working for you makes a mistake, or even a couple, you are likely to spend time trying to work it out. Once you get to four or five mistakes, things become more fragile and the relationship becomes tenuous. For security service contractors, failure to live up to the service provisions leads to the company being fired.

So, how patient can you be with an AI-based platform? What is the accuracy rate you expect from a system, and can you agree this with the service provider? It is a real factor that users need to take into consideration. More importantly, a key determining factor will be the cost of false positives. If that cost – for both the observed individual and the observer – takes the form of expensive legal cases, wrongful detention and the like, public rejection of behaviour surveillance will likely grow. Analytics that are accurate, cost-effective and generate minimal false alarms would be ideal.

AI truly comes into its own when it can interface with data and big data. We start getting the best leverage of AI when we combine a computer-detected event with information from databases that helps to quantify the risk in the context of other information. This starts moving more into intelligence-driven surveillance. However, the data needs to be up to date and trustworthy.
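A simple sketch of what that enrichment might look like is shown below. The watchlist, the site data and the scoring rule are invented placeholders rather than any real system; the point is that the analytic event on its own is only one input, and context from trusted, current data is what turns it into intelligence.

```python
from datetime import datetime

# Sketch of intelligence-driven enrichment of an analytic event.
# The watchlist, site data and scoring rule are invented placeholders.
vehicle_watchlist = {"ND123456": "vehicle linked to a previous incident"}
site_context = {"gate_3": {"after_hours_start": 18}}   # 18:00 onwards

def enrich(event):
    score, notes = 1, []
    if event["plate"] in vehicle_watchlist:
        score += 2
        notes.append(vehicle_watchlist[event["plate"]])
    if event["time"].hour >= site_context[event["location"]]["after_hours_start"]:
        score += 1
        notes.append("after hours")
    return {"risk_score": score, "notes": notes, **event}

event = {"plate": "ND123456", "location": "gate_3",
         "time": datetime(2021, 3, 5, 22, 15)}
print(enrich(event))   # risk_score 4: watchlist match plus after-hours movement
```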

Any introduction of new technology into the control room also has a human factors impact, a social impact, and an operational and procedural impact. The interface is really important. Developers need to be careful not to superimpose their technology solution on the control room as an interference. We are also going to be in a world of competing analytics with very specific purposes, and being able to integrate these is going to be a challenge, given that they may reside in different technologies within the security system.

Ultimately, new solutions need to be integrated into the present needs of existing control rooms; the ideal is to augment the surveillance process. Changes can impact viewing strategies, procedures, data processing and alarm functions.

As I mentioned in the seminar, criminals work to circumvent crime protection measures, and AI is going to be one of these measures. We need to start thinking about how they are likely to do it and what we can do about it. We need to be one step ahead rather than catching up to criminal methods. I’m not sure whether AI can do this, and it is going to be interesting to see what lies ahead in this respect.

We need AI or analytics that works seamlessly, instantly, with minimum operator involvement to produce relevant information to augment human decision-making. It should effectively free operators up for command and control decision making and operational responses. We are not eliminating the need for people. We are increasing the responsibility and skills requirements of fewer, more highly paid people working with smart systems.

About Craig Donald


Dr Craig Donald is a human factors specialist in security and CCTV. He is a director of Leaderware which provides instruments for the selection of CCTV operators, X-ray screeners and other security personnel in major operations around the world. He also runs CCTV Surveillance Skills and Body Language, and Advanced Surveillance Body Language courses for CCTV operators, supervisors and managers internationally, and consults on CCTV management. He can be contacted on +27 11 787 7811 or [email protected]

