The impact of AI-enhanced video analytics on control room personnel

CCTV Handbook 2022 Editor's Choice, Surveillance

AI-capable detection systems are becoming increasingly common in control rooms and, in time, are almost automatically going to be incorporated into control room systems and equipment. So, what impact do they have on control room staff, and do they mean a change in the demands on operators and the type of people you have in the control room?

To understand the answers, we need to look at the type and extent of the AI-enabled equipment and systems involved. A camera enabled with motion detection is not likely to have any real impact within the control room, but a VMS upgrade with widespread configurable options and AI learning capabilities that need to be taught by staff could have major implications. I have seen an upgraded system and interface in a major company's security control room make all but one of the existing control room staff effectively redundant, because they were unable to operate the new systems.

Before you upgrade to any AI-based technology, you need to ask how relevant it is to you, how often you are likely to use it, whether it will solve your problems, how many false alarms will be produced that pull people off core functions, how much AI teaching it needs to operate effectively, and how easy it is to use. I have always found with technology that the best systems are those that pair a sophisticated backend with an interface that is easy to use and makes the best use of its features.

Something old and new

AI is supposed to be a quantum leap forward for CCTV and control rooms, often accompanied by sales claims about how AI can remove the need for people. Yet some of these ‘new’ capabilities have been around for a while. I still have a video from 1989 of a diamond mine using simple motion detection based on specific areas of the camera view. Capabilities to distinguish between people and animals have been built into many household passive infrared detectors for years. Zone alarms on video management systems have been configurable for many years as well, and even analytics like missing object detection have been around for 20 years or more.

However, more recent ‘AI’ capabilities, along with improved software, have created a ‘sea change’ in the way we can do security: pushing things further than they were, making detection capabilities much easier to use and configure, automating actions, and using information more extensively. These capabilities are arriving alongside vastly improved access to information, including ‘big data’ systems. These changes have allowed us to augment our control rooms with more information that is easier to access and can be used to enhance and complement operator decision making. Improved and more sophisticated access to data means we can use intelligence to identify and react to potential threats, evaluate conditions, monitor more effectively, respond to and resolve issues, and investigate the people and circumstances involved.

What are the benefits of AI?

• AI is good at doing a lot of routine things consistently that free up operators – things like movement detection can take care of extensive perimeter protection while operators can be engaged in other tasks.

• Coverage of areas that may only occasionally be an issue can be facilitated by using AI recognition.

• Anticipating threat detection using irregular movement or direction, for example towards a fence rather than along it, can highlight conditions that may turn into an incident for an operator to monitor.

• AI can be good at representing information and presenting it to operators to assist in decision making – although presentation to the user is probably an underdeveloped aspect of most AI systems.

• We can use information from one source to complement other sources – using linked datasets, we can quickly find out more about a person or vehicle and evaluate the actions we see in a different context.

• AI search capabilities on big data sets allow you to go through large amounts of information when seeking details of incident conditions – for example, face or number plate matching, clothing descriptions, or the movement of vehicles or people in certain areas.
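The anticipatory detection mentioned above – movement towards a fence rather than along it – comes down to comparing a track's heading with the perimeter line. The following is a minimal sketch of that idea; the fence direction vector, tracker output format and 60-degree threshold are illustrative assumptions, not any particular product's method.

```python
import math

# Hypothetical sketch: flag tracks heading towards a perimeter fence rather
# than along it, as a pre-incident cue. Assumes the fence runs along a known
# direction vector and that a tracker supplies (x, y) positions per frame.

def heading_angle_to_fence(track, fence_direction=(1.0, 0.0)):
    """Angle (degrees) between a track's net movement and the fence line.
    Near 0 or 180 means moving along the fence; near 90 means cutting across it."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    fx, fy = fence_direction
    norm = math.hypot(dx, dy) * math.hypot(fx, fy)
    if norm == 0:
        return 0.0  # no movement, or degenerate fence vector
    cos_a = max(-1.0, min(1.0, (dx * fx + dy * fy) / norm))
    return math.degrees(math.acos(cos_a))

def approaching_fence(track, threshold_deg=60.0):
    """Alert only when movement cuts across, rather than runs along, the fence."""
    angle = heading_angle_to_fence(track)
    return min(angle, 180.0 - angle) > threshold_deg

# A patrol walking along the fence should not alert; a run towards it should.
along = [(0.0, 0.0), (5.0, 0.2), (10.0, 0.1)]
towards = [(0.0, 0.0), (0.2, 5.0), (0.1, 10.0)]
```

A real deployment would smooth the track over several frames before computing the heading, since single-frame jitter would otherwise inflate false alarms.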

AI and the operator in the control room

Ultimately, however, the AI recognition still needs to be reviewed by people in the control room. There has been a consistent trend across technology enhancements over the past 100 years similar to the one we are experiencing in control rooms, where the changes demand an increase in the skill and capacity of the people who must use and deal with the technology. If the people are not right, the systems are going to be crippled, leaving the user exposed to accusations of non-performance, or pulling people away from core functions that are key to service delivery. We are not eliminating the need for people. We are typically increasing the responsibility and skills requirements of fewer, more highly paid people.

AI is going to impact control room operator tasks in the following ways:

• There are more information sources than ever before that need to be handled in the control room and managed on computer interfaces.

• Information comes in more quickly and needs to be tested for integrity, collated and rapidly integrated for effective decision making.

• Operators need an understanding of how the AI identifies the criteria it targets, so that things like causes of false alarms can be tracked and better understood.

• The operator needs to master a number of systems that are often not integrated and may even operate in parallel, requiring extensive division of attention in handling multiple demands.

• A number of skills requirements go up – computer system skills, conceptualisation of a broader framework, ability to deal with multiple input sources, etc.

• The display screens are often ill suited to managing the presentation of complex information.

• Operators need to be able to look at a scene or scenario presented by AI and make a decisive and accurate judgement about the behaviour or situation they see and the associated risk. This is essential to avoid getting bogged down and to ensure a timeous response, and to cleanly handle false alarms.

• If they are responsible for teaching and giving feedback to AI systems as part of the learning process, operators need to understand the broader context of what they see in order to teach the AI what is appropriate and what is not.

No matter what AI or video analytics system is being used in security, it is likely to need human overview and authorisation. I wrote an article in 2020 on why people are important in working with AI technology (see https://www.securitysa.com/11567r). The use of human specialists is partly to allay concerns over legal obligations and being sued. However, not just anyone can necessarily do it.

With face recognition technology, it is almost universal procedure for a human operator to have to confirm the facial identity. Yet we know that some humans are terrible at recognising faces, or even at facial matching. Can you really expect somebody like that to confirm the AI interpretation? That is why people like super recognisers can be so effective, not only in confirming what facial recognition systems pick up, but in extending it and filling in the limitations.

The importance of human overview was also highlighted after an active shooter in Buffalo in the US streamed himself opening fire on people in a supermarket on the Twitch platform, and the company pulled the video less than two minutes after the shooter opened fire. The response was seen as exceptionally quick, especially compared to other major platforms. Nathan Grayson for Reuters reported the Twitch executives stating, “We combine proactive detection and a robust user reporting system with urgent escalation flows led by skilled human specialists to address incidents swiftly and accurately”, and “While we use technology, like any other service, to help tell us proactively what’s going on in our service, we always keep a human in the loop of all our decisions.”

Types of AI analytics

The kinds of AI facilities in security surveillance technology range from tried and trusted to what one may call speculative. For most users, the traditional analytics that address things like motion detection, directional movement, distinguishing between human and animal, zone protection (including risk areas and virtual trip wires), and left objects are relatively easy to set up, integrate, and allow alerts or interrogation on the systems. These analytics are the most basic, but also address the bread-and-butter issues of security in many sites.
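The virtual trip wires mentioned above reduce to a simple geometric test: does an object's movement between two frames cross a configured line segment? Below is a minimal sketch of that check; the wire coordinates and track positions are made-up values for illustration, not drawn from any particular VMS.

```python
# Hypothetical sketch of a virtual trip wire: alert when a tracked object's
# step between two frames intersects a configured line segment. Uses the
# standard cross-product orientation test for segment intersection.

def _side(p, a, b):
    """Sign of the cross product: which side of the line a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crosses_wire(prev_pos, curr_pos, wire_a, wire_b):
    """True if the movement prev_pos -> curr_pos crosses the wire segment."""
    d1 = _side(prev_pos, wire_a, wire_b)
    d2 = _side(curr_pos, wire_a, wire_b)
    d3 = _side(wire_a, prev_pos, curr_pos)
    d4 = _side(wire_b, prev_pos, curr_pos)
    # Strict inequalities: endpoints exactly on the wire are not counted.
    return d1 * d2 < 0 and d3 * d4 < 0

# A wire drawn across a gateway, in image coordinates.
wire = ((0.0, 0.0), (0.0, 10.0))
```

In practice a zone alarm is the area version of the same idea (a point-in-polygon test on the object's position), and both are typically evaluated per tracked object on every frame.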

More advanced but relatively widespread and effective AI applications include facial recognition, licence plate recognition, unusual vehicle or person positioning, identification of groupings or concentrations of people, moved objects, and people counting. With COVID-19 being an issue, mask detection has become a more popular feature. Ironically, it was not a common feature before COVID, when a suspect wearing a facial covering in warmer weather would have been a good indicator of crime, and we may find that at some stage we can use it for that purpose again if we get COVID behind us in the longer term.

Movement patterns, including direction, speed, and irregular changes, are analytics that can be useful, although false alarm rates are potentially much higher. While some standalone packages are better at automated classification of vehicle type, and at using facial, physical or clothing colour descriptors for search purposes, I haven’t seen successful use of these in live CCTV VMS or camera capabilities.

Being able to capture the details of a vehicle on the software system is very different from the system being able to work these things out automatically by itself. Similarly, AI-based firearm detection is marketed, but its useful application is probably limited at present unless firearms are presented in a very obvious way. Camera quality plays a major part in many of these AI-based techniques, with poorer quality video making it much more difficult to work effectively. The key factor in the success of these analytics is control over false alarms, including false positives where people may be incorrectly accused of doing something, which leads to legal issues – see my article on this at https://www.securitysa.com/9445a. The methods of defining requirements and of recognising and displaying such information are getting much better, but I feel they still have some time to go before common usage.
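Control over false alarms can be made concrete with two simple numbers: what fraction of alerts are genuine (precision), and how many false alarms arrive per hour of operation. The sketch below uses made-up counts purely to illustrate the arithmetic a control room manager might run when evaluating an analytic.

```python
# Illustrative sketch: quantifying an analytic's false alarm burden.
# Precision tells you what fraction of alerts are real; false alarms per
# hour tells you how much operator time verification will consume.

def alert_quality(true_alerts, false_alerts, hours):
    """Return (precision, false alarms per hour) for an evaluation period."""
    precision = true_alerts / (true_alerts + false_alerts)
    false_per_hour = false_alerts / hours
    return precision, false_per_hour

# Hypothetical figures: 12 genuine detections and 108 false alarms in 24 hours.
precision, fph = alert_quality(12, 108, 24)
# 10% precision and 4.5 false alarms an hour would dominate an operator's shift.
```

Numbers like these make the trade-off visible: an analytic that detects everything but demands constant human verification may cost more attention than it saves.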

Implications of AI for operator performance

There is a danger in the expectation level of what the AI can deliver – usually this is greatly exaggerated both in the sales pitch and the user expectation. Operators usually have to pick up the pieces in real life scenarios and deal with the disappointment of those who have purchased the systems. With inflated expectations of the systems, there are also inflated expectations on operators. We need to manage this perception in real life applications.

One needs to look at the likelihood of distracting operators from core functions – this applies to extra duties, but also to the provision of superfluous information. For example, many AI or video analytics functions like to draw rectangles around objects moving around the screen, because it makes it look like the program is active. Putting rectangular outlines around people or vehicles does not provide any great detection benefit unless they are violating some rule defined within the AI parameters.

Operators are quite capable of seeing that there are people or vehicles moving on screen – what they want to know is who within those scenes is violating a condition that warrants a closer look. Flagging inappropriate targets with such rectangles is distracting and may reduce the quality of surveillance.

The biggest danger of AI detection is introducing additional clarification or verification activities into a control room that is already under pressure. The more an analytics technique requires human verification, the more its value is reduced. Any activity introduced into the control room should be part of a clear and meaningful strategy to assist with detection, incident handling, or the provision of evidence and follow-up investigation. Operators need to have the confidence and understanding to be able to question the AI if they don’t believe it is right, and to be able to state why. That is why the better their understanding of what the AI is looking for, the more empowered they are to make a judgement.

Operators need to be able to view a display of an apparent incident and have the observation skills and crime behavioural awareness of the risk factors in that area to make a decision and act on the information quickly. They are likely to be presented on an ongoing basis with camera scenes that they need to react to. If the operators do not have the observation skills, crime behaviour awareness, and situational risk appreciation, they are not in a position to make informed decisions about clearing or investigating the scene further.

Where AI provides simple alarms or alerts through the standard interface, there is not much of an increased demand on an operator, unless there are high rates of false alarms which take skilled viewing to process. The more interaction there has to be with the system, the more active interrogation of information takes place, and the more operators have to build a conceptual framework of how different systems and information come together, the higher the level of operator required.

Inevitably, it requires better people who can make more informed decisions based on the increased sophistication of information produced by the technology and the more complex software interfaces. Higher level decisions also have more major consequences and implications flowing from them. The pace of work and processing of information changes with the introduction of more AI sources and higher rates of input. A comment on LinkedIn from Walter Booysen captures this well: “The rapid development of new technologies enables us to do things faster and more effective than ever before. Unfortunately, this also translates to making mistakes faster and more effectively...”. The use of AI is not just about smart technology and computer detection; the human element of defining requirements, interpretation and decision making needs to be smarter as well.



Dr Craig Donald.

Dr Craig Donald is a human factors specialist in security and CCTV. He is a director of Leaderware which provides instruments for the selection of CCTV operators, X-ray screeners and other security personnel in major operations around the world. He also runs CCTV Surveillance Skills and Body Language, and Advanced Surveillance Body Language courses for CCTV operators, supervisors and managers internationally, and consults on CCTV management. He can be contacted on +27 11 787 7811 or [email protected]

