A security guard sits in a monitoring room. In front of him: a wall of 16 CCTV feeds. His job is to watch all of them, simultaneously, for eight hours.
This is not a job any human can do well. Not because the guard is lazy or poorly trained — but because the human visual system was never built for it.
Research consistently shows that people monitoring multiple video feeds miss the majority of incidents that occur on screen. One frequently cited study from the UK Home Office found that after just 20 minutes of watching CCTV, a guard's ability to detect events falls off dramatically. By the end of an hour, the detection rate for significant events is somewhere between 5% and 10%.
That means a typical CCTV monitoring setup — the kind that costs businesses thousands of pounds per month in staffing — catches roughly one incident in ten.
The other nine happen on camera, and no one notices until it is too late.
The Science of Attention Failure
The problem is not motivation or discipline. It is neuroscience.
Human attention is a limited resource. When you focus on one thing, your capacity to process other inputs falls sharply. Psychologists call this "inattentional blindness" — the well-documented phenomenon where people fail to see clearly visible events because their attention is directed elsewhere.
The classic demonstration of this is the "invisible gorilla" experiment: participants watching a video of people passing a basketball were so focused on counting passes that they failed to notice a person in a gorilla costume walking slowly through the scene. Roughly half of all participants missed it entirely.
Now apply that finding to a monitoring room. A guard watching 16 feeds is not really watching 16 feeds. They are scanning between them, catching fragments of each, and relying on peripheral movement to trigger closer attention. In practice, they are monitoring perhaps two or three feeds with any meaningful attention at any given moment.
The others are effectively unmonitored.
The Multi-Screen Problem
Each additional screen added to a monitoring setup does not add proportional monitoring capacity. Research suggests the opposite: as screen count increases, detection performance per camera declines.
Studies by the Security Industry Authority in the UK found that operators monitoring more than four screens simultaneously saw detection accuracy drop significantly compared to operators watching a single feed. The cognitive load of switching attention between multiple inputs degrades the quality of attention applied to each.
Most commercial security monitoring setups run 8, 16, or even 32 screens per operator. The maths is not encouraging.
Fatigue and Time-of-Day Effects
Human attention is also not constant across a shift. Performance peaks in the first 20 to 30 minutes of monitoring and deteriorates steadily from there. By the end of a standard two-hour monitoring rotation, event detection rates have fallen to a fraction of their initial levels.
Night shifts compound this. The circadian dip between 2:00 AM and 5:00 AM is associated with significantly degraded cognitive performance across almost every measurable dimension. This is precisely the window when many facilities are most vulnerable — fewer staff on site, lower ambient activity that reduces alertness cues, and guards who are physiologically fighting to stay awake.
The incidents that happen at 3:47 AM are the ones most likely to go undetected.
What AI Does Differently
AI camera analytics does not get tired. It does not get distracted. It does not experience inattentional blindness, and it does not have a circadian rhythm.
An AI system monitoring 16 cameras processes all 16 simultaneously, at full capacity, every second of every hour of every day. Its detection performance does not degrade over a shift. It does not perform worse at 3:00 AM than it does at 9:00 AM.
This is the fundamental difference — not that AI is cleverer than a human observer, but that it is consistent in a way no human can be.
How AI CCTV Monitoring Works
Modern AI camera analytics platforms like Horus use computer vision models trained on millions of images to detect specific events and conditions in live video feeds. The system analyses each camera frame by frame, looking for predefined triggers.
Those triggers might include:
- A person entering a restricted zone
- A vehicle remaining stationary for longer than a set period
- A queue exceeding a defined length
- A worker not wearing required PPE
- Unusual crowding in a specific area
- Motion in a zone that should be empty during certain hours
When the system detects a trigger, it generates an alert immediately — typically delivered via Telegram notification to a mobile device. The security team sees the alert within seconds of the event occurring, with a timestamped snapshot attached.
No human needs to be watching the feed at that moment for the alert to fire.
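As a rough sketch of how a frame-by-frame trigger like "vehicle stationary for longer than a set period" can work, here is an illustrative example. The class name, field names, and threshold are assumptions for illustration, not Horus's actual internals:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StationaryVehicleTrigger:
    """Fires one alert when a tracked vehicle stays still past a threshold."""
    threshold_s: float = 300.0  # alert after 5 minutes stationary (example)
    _first_seen: dict = field(default_factory=dict)  # track_id -> first stationary time
    _alerted: set = field(default_factory=set)       # tracks already alerted on

    def update(self, track_id: str, is_moving: bool, now_s: float):
        """Feed one per-frame observation; return an alert dict or None."""
        if is_moving:
            # Vehicle moved: reset its timer and allow future alerts.
            self._first_seen.pop(track_id, None)
            self._alerted.discard(track_id)
            return None
        start = self._first_seen.setdefault(track_id, now_s)
        if now_s - start >= self.threshold_s and track_id not in self._alerted:
            self._alerted.add(track_id)
            return {
                "event": "stationary_vehicle",
                "track_id": track_id,
                "stationary_for_s": round(now_s - start, 1),
                "timestamp": datetime.fromtimestamp(now_s, tz=timezone.utc).isoformat(),
            }
        return None
```

The returned dictionary is the kind of timestamped event record a system could attach to a snapshot and push out as a notification. The key design point is that the trigger is stateful per tracked object and fires exactly once per incident, rather than on every frame.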
The False Alarm Problem — and How Good AI Handles It
A common objection to AI monitoring is false alarms. Early AI systems were prone to triggering on shadows, lighting changes, and animals — generating so many alerts that operators began ignoring them, defeating the purpose.
Modern AI analytics platforms have addressed this substantially. Horus, for example, runs object detection and classification models that distinguish between people, vehicles, and background movement with high accuracy. Zone-based detection means the system only alerts when a relevant object is in a relevant area — not just because something moved anywhere in the frame.
The result is a meaningful reduction in false positives compared to older motion-detection-based systems, while maintaining high detection rates for genuine events.
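The combination of object classification and zone checks can be sketched in a few lines. This is an illustrative example, not Horus's actual code; the class names and the point-in-polygon helper (standard ray casting) are assumptions:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside polygon [(x1, y1), (x2, y2), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def should_alert(detection, zone_polygon,
                 relevant_classes=frozenset({"person", "vehicle"})):
    """Suppress shadows, animals, and background motion: alert only when a
    detection of a relevant class has its ground point inside the zone."""
    if detection["class"] not in relevant_classes:
        return False
    x, y = detection["ground_point"]
    return point_in_polygon(x, y, zone_polygon)
```

A moving shadow classified as background, or a person walking outside the drawn zone, produces no alert; only a relevant object in a relevant area does.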
What This Means for Your Security Operation
The practical implication is this: if your current security operation relies primarily on human operators watching live CCTV feeds, you are probably catching a small fraction of the events occurring on camera.
That is not a criticism of your security team. It is a statement about human cognitive limits that no amount of training or discipline can overcome.
AI camera analytics addresses the gap not by replacing security personnel, but by changing their role. Instead of watching feeds and hoping to catch events in real time, security staff receive targeted alerts when specific conditions are detected. They can then review the relevant footage, assess the situation, and respond.
This model is strictly better for everyone involved. Guards are no longer expected to do something cognitively impossible. Instead they are doing what humans are actually good at: making contextual judgements based on specific information and deciding how to respond.
The On-Premises Advantage for Security
One consideration that matters significantly in security applications is where the AI processing happens. Cloud-based AI analytics systems send video footage to external servers for processing. That raises data security, latency, and compliance questions that many organisations cannot or should not ignore.
Horus processes all video on-premises, on a Windows PC at your site. Video never leaves your network. Only detection metadata — timestamps, event types, zone identifiers — is transmitted to the cloud dashboard. The footage itself stays local.
For facilities handling sensitive operations, this matters. You do not need to add a data security risk to your security infrastructure.
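To make the metadata-only model concrete, here is a sketch of how an on-premises system might build its cloud payload: whitelist the event fields and drop everything else, so pixel data can never leave the network by accident. Field names are illustrative assumptions, not Horus's actual schema:

```python
import json

# Only these event facts are ever serialised for the cloud dashboard.
ALLOWED_FIELDS = {"timestamp", "event_type", "zone_id", "camera_id"}

def to_cloud_payload(event: dict) -> str:
    """Serialise whitelisted metadata only; frames and snapshots are stripped."""
    safe = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    return json.dumps(safe, sort_keys=True)
```

The whitelist approach (keep only known-safe fields) is deliberately stricter than a blacklist (drop known-unsafe fields): a new field added to the event record stays local by default.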
The Numbers Behind the Case
To frame this concretely: assume a facility with 20 cameras and a single monitoring operator. Under typical human monitoring conditions, that operator is effectively covering 2-3 cameras with real attention at any moment. The other 17-18 are largely unmonitored.
An AI system monitoring those same 20 cameras is covering all 20, simultaneously, with consistent detection performance.
The effective-coverage comparison is not 20 cameras versus 18, or even 20 versus 16, as intuition might suggest. It is 20 versus 2. That is the realistic comparison.
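The back-of-envelope arithmetic from the scenario above, using the low end of the human-attention estimate:

```python
# 20 cameras; one operator effectively attends to ~2 at any given moment.
total_cameras = 20
human_attended = 2

human_coverage = human_attended / total_cameras  # fraction genuinely watched
ai_coverage = total_cameras / total_cameras      # AI processes every feed

print(f"Human effective coverage: {human_coverage:.0%}")
print(f"AI effective coverage: {ai_coverage:.0%}")
```

At the low end that is 10% effective coverage for the human operator against 100% for the AI system, before accounting for fatigue over the course of a shift.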
For facilities where incidents in unmonitored zones carry significant cost — theft, injury, liability, regulatory penalties — the case for AI camera analytics is not primarily a cost argument. It is a coverage argument. Human monitoring at scale does not work the way most organisations assume it does.
AI monitoring does.
Getting Started
Horus works with any existing IP cameras — Hikvision, Dahua, Axis, and most other RTSP-compatible cameras. Installation runs on a standard Windows PC on your premises, takes under an hour, and requires no specialist hardware.
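For reference, RTSP stream URLs for common brands tend to follow predictable default patterns. The paths below are typical factory defaults and vary by model and firmware, so treat them as illustrative and check your camera's documentation:

```python
# Typical default RTSP paths for common camera brands (illustrative only;
# actual paths vary by model and firmware).
RTSP_PATHS = {
    "hikvision": "/Streaming/Channels/101",
    "dahua": "/cam/realmonitor?channel=1&subtype=0",
    "axis": "/axis-media/media.amp",
}

def rtsp_url(brand: str, host: str, user: str, password: str,
             port: int = 554) -> str:
    """Build an RTSP stream URL from a brand's typical default path."""
    return f"rtsp://{user}:{password}@{host}:{port}{RTSP_PATHS[brand]}"
```

A URL like this can be opened in any RTSP client (VLC, for example) to confirm the camera streams correctly before pointing monitoring software at it.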
You can configure detection zones, set alert thresholds, and connect Telegram notifications without needing an IT team. The 14-day free trial gives you enough time to see what your cameras have been missing.