Network anomaly detection has suddenly become a popular topic in the cyber protection market. The idea is to expect the unexpected and then manage it, i.e. to flag deviations from normal behavior.
At first glance it looks like an amazing technology: no more signature-based detection, no detection definitions to update, a deploy-and-forget solution.
But think a little deeper: the technology needs a learning period to build a baseline of your environment. Any deviation from that baseline is then treated as the “unexpected”.
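To make the idea concrete, here is a deliberately minimal sketch in Python, with made-up traffic numbers: learn a mean and standard deviation during the learning period, then flag anything more than a few standard deviations away. Real products use far richer models; this is only the skeleton of the approach, not any vendor's implementation.

```python
import statistics

def learn_baseline(samples):
    """Learn a trivial baseline (mean and standard deviation) from
    per-interval traffic volumes observed during the learning period."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a new observation whose z-score against the learned
    baseline exceeds the threshold."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Learning period: bytes/minute on some network segment (illustrative numbers).
learning_period = [980, 1020, 1005, 995, 1010, 990, 1000, 1015]
baseline = learn_baseline(learning_period)

print(is_anomalous(1008, baseline))   # False: within normal variation
print(is_anomalous(5000, baseline))   # True: deviates from the baseline
```

Even in this toy version, the two knobs that matter are the length of the learning period and the alert threshold, which is exactly where the challenges below come from.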
The challenges are:
- How do you know the traffic observed during the learning period is actually normal and not already compromised? A baseline learned from a compromised network will treat the attacker's traffic as normal.
- How much time is sufficient to establish the baseline so that false positives and false negatives fall to an acceptable, trusted level?
- What about traffic that disappears from the baseline (e.g. a backup job that silently stops running)? Can the technology report the absence of expected traffic, not just the presence of unexpected traffic?
- Seasonal network traffic (month-end batch jobs, holiday peaks) adds further complication; the sketch after this list shows one way to handle both this and the disappearing-traffic case.
- Does the technology actually generalize, or does it only handle the specific scenarios the vendor's claims were built on?
- Does the vendor need extensive time to learn your environment?
- Last but not least, what new cyber risks does deploying the technology itself introduce? It typically means aggregating network traffic from different network zones into one place. Do those collection connections break the network segmentation principle?
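The disappearing-traffic and seasonality points are worth making concrete. One simple trick, sketched below in Python, is to key the baseline by a seasonal index such as hour-of-week, and to treat near-zero volume in a normally busy bucket as its own alert rather than as “nothing happened”. The bucketing scheme, thresholds, and the `floor` parameter here are my own illustrative choices, not any vendor's design.

```python
from collections import defaultdict
import statistics

def learn_seasonal_baseline(history):
    """history: list of (hour_of_week, volume) tuples from the learning period.
    Keying by hour-of-week absorbs simple seasonality (weekday vs. weekend,
    business hours vs. night). Returns {hour_of_week: (mean, stdev)}."""
    buckets = defaultdict(list)
    for hour_of_week, volume in history:
        buckets[hour_of_week].append(volume)
    return {
        h: (statistics.mean(v), statistics.stdev(v) if len(v) > 1 else 0.0)
        for h, v in buckets.items()
    }

def check(hour_of_week, volume, baseline, threshold=3.0, floor=0.1):
    """Score an observation against the seasonal baseline. Also reports
    traffic that *disappears*: an expected busy hour with near-zero
    volume is flagged, not silently ignored."""
    if hour_of_week not in baseline:
        return "unknown period: no baseline learned yet"
    mean, stdev = baseline[hour_of_week]
    if mean > 0 and volume < floor * mean:
        return "expected traffic missing"        # the disappearing-traffic case
    if stdev > 0 and abs(volume - mean) / stdev > threshold:
        return "volume deviates from seasonal norm"
    return "normal"

# Toy learning data: a busy weekday hour (index 10) vs. a quiet night hour (index 3).
history = [(10, 900), (10, 1100), (10, 1000), (3, 50), (3, 60), (3, 55)]
baseline = learn_seasonal_baseline(history)

print(check(10, 1020, baseline))  # normal
print(check(10, 0, baseline))     # expected traffic missing
print(check(3, 5000, baseline))   # volume deviates from seasonal norm
```

Note that even this toy needs several weeks of history before every hour-of-week bucket has enough samples, which is precisely the baseline-duration question raised above.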
In any case, you need to validate the technology against solid success cases rather than hearsay from a sales pitch.
The ultimate value of deploying a technology like this is demonstrating due diligence in cyber protection. If something bad happens, the auditor or regulator will ask: do you have any APT protection? If no, you’re dead. If yes, you can argue it was an isolated case and that the control’s effectiveness is still being improved. Of course, any actual detection is the value-added portion.