Worth a read for the next time someone tells you that AI/ML is the answer…:
While many companies are turning to machine learning tools to fight hackers, those tools may not be as helpful as they seem, owing to a talent shortage and a lack of transparency.
Here are five ways machine learning may make things harder for cybersecurity pros.
1. Machine learning-equipped hackers
Machine learning can help defend against attackers, but it can be destructive in the wrong hands. “An arms race is occurring as each side tries to one-up the other to make a better AI,” said Ryan Ries, AI/machine learning expert at Onica.
Machine learning works faster than humans, a quality that is usually celebrated. It is far less welcome when that speed is put to work in cyberattacks.
“Human attackers will perform reconnaissance on a potential victim before launching a cyberattack, investigating things like what software they are running, the version of that software, any known vulnerabilities for said version, or any unpublished zero-day exploits shared among the hacker community that could improve their attack. This process can take many hours,” said Emil Hozan, security analyst at WatchGuard Technologies. “But with machine learning, this research process can be carried out much more quickly and efficiently. Machine learning/AI hacking can also learn from past experiences; what didn’t work on a similar previous hack attempt could be skipped over in favor of a new tactic.”
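To make the reconnaissance step concrete, here is a minimal sketch of the fingerprint-and-lookup part of that process: parse a service banner for a product name and version, then check it against a table of known-vulnerable versions. The banner strings, table entries, and advisory IDs are all made up for illustration; a real tool would query an actual vulnerability database rather than a hard-coded dictionary.

```python
import re

# Hypothetical lookup table mapping (product, version) to advisory IDs.
# A real scanner would consult a live vulnerability database instead.
KNOWN_VULNERABLE = {
    ("OpenSSH", "7.2"): ["EXAMPLE-ADVISORY-1"],
    ("nginx", "1.10.0"): ["EXAMPLE-ADVISORY-2"],
}

def parse_banner(banner):
    """Extract a (product, version) pair from a service banner string."""
    m = re.search(r"([A-Za-z][\w-]*)[/_ ]v?(\d+(?:\.\d+)*)", banner)
    return (m.group(1), m.group(2)) if m else None

def check_banner(banner):
    """Return any known advisories for the software named in the banner."""
    fingerprint = parse_banner(banner)
    return KNOWN_VULNERABLE.get(fingerprint, []) if fingerprint else []

print(check_banner("SSH-2.0-OpenSSH_7.2"))    # flagged: one advisory
print(check_banner("Server: nginx/1.12.2"))   # not in the table: empty list
```

A human performs this lookup one host and one service at a time; the point of the quote above is that an automated system can run the same match across thousands of banners in seconds, which is what makes machine-assisted reconnaissance so much faster.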