Modern Artificial Intelligence (AI) research began soon after the end of World War II, perhaps best marked by Alan Turing’s 1950 question, “Can machines think?” Since then, devoted researchers and academics have made (mostly) slow and steady progress in expanding the range of tasks that machines can accomplish as well as or better than humans, while waves of excitement about the promise of AI have occasionally washed over the general public in periodic hype cycles.
“I am very optimistic about the eventual outcome of the work on machine solution of intellectual problems. Within our lifetime machines may surpass us in general intelligence.” — Marvin Minsky, AI pioneer (1927–2016), stated in 1961
The hype is understandable: even among those who should probably know better, it’s not always easy to understand clearly what has and has not been accomplished. Just ask Jonathan Schaeffer. He spent the better part of two decades (1989–2007) solving checkers via slow and steady progress. However, in his book he recounts that when others working in AI heard he was creating a checkers program, he would often be asked, “Didn’t Arthur Samuel solve that back in the 1950s?”
Arthur Samuel didn’t solve checkers, but he did coin the term “machine learning” (ML), which is one of the key approaches making AI a practical reality in parts of cybersecurity in recent years. At a very high level, ML can be thought of as algorithms identifying patterns in large collections of data with the aim of making future predictions. Present as an AI technique in cybersecurity for over 25 years, ML’s recent rise in prominence was underwritten by years of exponentially faster processing (think Moore’s Law) and increasingly dense digital storage (think Kryder’s Law). More recently, ML has also benefited from the availability of large data sets and the creation of ever more clever algorithms.
While we currently appear to be in a new AI hype cycle, with over-enthusiastic vendors making outlandish marketing claims that “AI” can solve all sorts of cybersecurity problems and prevent attacks even before they happen, there has been some solid progress with ML techniques. This is partly because many cybersecurity problems can be generalized to the idea of looking for something anomalous: user behavior, network traffic, email content, etc. Even so, ML is not a panacea, as specific algorithms have fairly narrow applicability and the generators of the anomalies of interest (i.e., hackers) are ever-adaptive adversaries.
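To make the “looking for something anomalous” idea concrete, here is a minimal, hand-rolled sketch of the fit-then-score pattern that underlies many anomaly detectors: learn a statistical model of normal behavior from historical data, then flag new observations that fall far outside it. The data and the z-score approach are illustrative assumptions, not any vendor’s actual method; production systems use far richer features and algorithms.

```python
import statistics

def fit_baseline(history):
    """Learn a simple model of normal behavior: the mean and standard
    deviation of past observations."""
    return statistics.fmean(history), statistics.stdev(history)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the learned mean (a classic z-score test)."""
    return abs(value - mean) > threshold * stdev

# Hypothetical daily failed-login counts (made-up data for illustration).
history = [12, 9, 11, 10, 13, 8, 11, 12, 10, 9]
mean, stdev = fit_baseline(history)

print(is_anomalous(11, mean, stdev))  # a typical day -> False
print(is_anomalous(95, mean, stdev))  # a sudden spike -> True
```

Even this toy example hints at the adversary problem noted above: an attacker who keeps failed logins just under the learned threshold evades the detector, which is why real systems combine many signals rather than relying on one statistic.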
“If we work really hard, we’ll have an intelligent system in from four to four hundred years.” — John McCarthy, AI pioneer (1927–2011), stated ~1958
During the current AI hype cycle, in which some vendors appear unable to resist the temptation to overstate their AI/ML capabilities, it’s prudent to recall that accurate predictions for future AI timelines are notoriously difficult, and that real progress often comes incrementally, regardless of the field. For example, after the atom was first split in 1917 and scientists theorized about the awesome power that could be unleashed and harnessed from such an event, it took nearly 30 years, including a massive three-year sprint, before the first atomic bomb was viable, and another 10 years before we saw the first nuclear power plant. It’s best to be hopeful for future cybersecurity AI improvements while greeting any vendor’s cybersecurity AI claims with healthy skepticism.
CYR3CON provides cyber threat intelligence through advanced machine learning (ML) and data mining of deep- and dark-web information. CYR3CON’s “Priority” is a product that helps organizations prioritize their vulnerability mitigation efforts (patching) based on the real-world threat posed by the vulnerabilities. Using ML techniques to examine multiple indicators, such as hacker discussions, hacker reputation, and vendor, CYR3CON Priority determines, with a high level of precision, which vulnerabilities are most likely to be used in a real-world attack. Come check us out and exercise some healthy skepticism.