The single largest threat to the everyday security of enterprise systems is social engineering, in which cyberthieves rely on deceit and human emotion to trick people into revealing sensitive data such as passwords and personally identifiable information. But in an interesting twist, researchers leveraging artificial intelligence (AI) are now working on systems that can stand in for the human, using machine learning (ML) to predict when the other party is being deceitful. Software is also more likely to adhere to the rules and less likely to be swayed by a sob story.

What could make this even more interesting, in a science-fiction-becomes-science-reality way, is that the inevitable next step is AI bots on both ends of the conversation, with each side assuming the other entity is a human. Preparing for such interactions requires training software to predict truthfulness and deception in its interactions with other software. Scientists are doing just that as part of an area of AI research called theory of mind.

"By about the age of 4, human children understand that the beliefs of another person may diverge from reality, and that those beliefs can be used to predict the person’s future behavior," noted a recent story in Science, referring to new discoveries researchers reported in July at the International Conference on Machine Learning in Stockholm. "Some of today’s computers can label facial expressions such as “happy” or “angry”—a skill associated with theory of mind—but they have little understanding of human emotions or what motivates us."

"A final test revealed (the AI bots) could even understand when a character held a false belief, a crucial stage in developing theory of mind in humans and other animals. In this test, one type of character was programmed to be nearsighted; when the computer altered the landscape beyond its vision halfway through the game, (the AI bot) accurately predicted that it would stick to its original path more frequently than better-sighted characters, who were more likely to adapt," the story noted.

The traditional argument for companies staffing call centers with human associates instead of software chat sessions, or at the very least using human operators to supplement that software, is that humans can listen to the full context of the customer's situation and make a judgment about when it's appropriate to bend the rules. It's often referred to as the common-sense factor.

But it's that same judgment that makes those humans such security risks when they are confronted by a skilled and persistent cyberthief using social engineering. This raises the question: Can machine learning produce bots that are good at knowing when to make exceptions, yet not at risk of being conned?

The software-talking-to-software scenario is the inevitable next step once AI bots start successfully blocking social engineering attempts. In other words, the bad guys will start using their own ML bots to learn when the good guys' ML bots will bend the rules. How could AI systems do better than humans at detecting fraud, especially sneaky, well-crafted fraud? In quite a few ways.

Beyond the basics of this ML project, in which the software learns to recognize human deception (a machine learning version of a BS detector, in effect), there are things only AI could detect. In short, can it tell the difference between someone truly distraught and a good actor pretending to be distraught? Given enough examples, it should be able to, with a reasonable degree of certainty.
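To make "given enough examples" concrete, here is a minimal sketch of such a classifier, assuming call transcripts that were labeled genuine or fraudulent after the fact. The transcripts, labels, and model choice are hypothetical placeholders, not a description of any vendor's system; a real deployment would train on thousands of calls plus audio features and metadata, not text alone.

```python
# A minimal sketch of the "BS detector" idea, trained on hypothetical labeled transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled transcripts: 1 = later confirmed as social engineering, 0 = legitimate.
transcripts = [
    "my mother is on her deathbed and I need her inbox password right away",
    "I lost my badge and the CEO needs this report in ten minutes, just reset it",
    "hi, I'd like to update the mailing address on my account",
    "I'm calling to check the status of my warranty claim",
]
labels = [1, 1, 0, 0]

# TF-IDF captures telltale phrasing; logistic regression scores how "fraud-like" a call reads.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

new_call = ["my father is in the hospital and I must get into his account immediately"]
print(model.predict_proba(new_call)[0][1])  # estimated probability the call is social engineering
```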

ML works when it is exposed to vast amounts of data. Envision these systems installed in most Fortune 1000 customer call centers. Even if the system doesn't detect a social engineering attack as it happens, it will be alerted later, when a call turns out to have been engineered. It can then analyze that call, along with every other call that is later determined to have been a fraud, and learn everything it can.
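That after-the-fact labeling is essentially a feedback loop. The sketch below shows one way it could be wired up; the CallLog class, its method names, and the sample calls are illustrative assumptions, not an existing library API.

```python
# A sketch of the feedback loop: every call is logged, and when an incident is later
# confirmed as social engineering, the call is relabeled and the whole log becomes
# training data for the next model retrain.
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    call_id: str
    transcript: str
    confirmed_fraud: bool = False

@dataclass
class CallLog:
    records: dict = field(default_factory=dict)

    def log_call(self, call_id: str, transcript: str) -> None:
        self.records[call_id] = CallRecord(call_id, transcript)

    def mark_fraud(self, call_id: str) -> None:
        # Called days or weeks later, once the fraud team confirms the call was engineered.
        self.records[call_id].confirmed_fraud = True

    def training_data(self):
        # Every call ever logged becomes a labeled example for the next retrain.
        texts = [r.transcript for r in self.records.values()]
        labels = [int(r.confirmed_fraud) for r in self.records.values()]
        return texts, labels

log = CallLog()
log.log_call("c-101", "I need my mother's inbox password, she is on her deathbed")
log.log_call("c-102", "calling to check the status of my warranty claim")
log.mark_fraud("c-101")  # breach confirmed a week after the call
print(log.training_data())
```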

Most social engineering fraudsters are persistent, and they attack many companies. The system can use voice recognition to detect when that same voice calls back in. AI can look for telltale phrasing and specific stories ("my mother is on her deathbed, and I need to access her inbox right away to tell her something she needs to know"), as these thieves are likely to reuse social engineering stories that have worked. The system can also watch for phone numbers and IP addresses.

Even if those numbers and addresses are faked—which is almost a certainty—the thieves will likely fake the same or similar numbers/addresses. Again, ML could detect such patterns.
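Here is a minimal sketch of that cross-attempt pattern check, assuming a store of stories and caller IDs from previously confirmed fraud. All the data is made up, and SequenceMatcher is a crude stand-in for the voiceprint, phonetic, and carrier-metadata matching a production system would use.

```python
# Flag a new call when its story or caller ID closely resembles a past confirmed fraud.
from difflib import SequenceMatcher

known_fraud_stories = [
    "my mother is on her deathbed and I need to access her inbox right away",
    "I'm traveling abroad and my CEO needs this invoice paid before the bank closes",
]
known_fraud_numbers = ["+1-555-013-7742", "+1-555-013-7748"]

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the strings are identical.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_call(story: str, caller_id: str, threshold: float = 0.8) -> bool:
    # Flag the call if either the story or the caller ID closely matches a past fraud attempt.
    story_hit = any(similarity(story, s) >= threshold for s in known_fraud_stories)
    number_hit = any(similarity(caller_id, n) >= threshold for n in known_fraud_numbers)
    return story_hit or number_hit

# A slightly reworded version of a known script, from a nearly identical spoofed number.
print(flag_call(
    "my mom is on her deathbed and I need access to her inbox right away",
    "+1-555-013-7749",
))
```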

How long will the attacker wait on hold? Social engineering thieves have quotas to make, and they'll abandon a call faster than a true customer in dire need would. Then there are call verification methods, such as asking the caller to share live video. Even if the attacker fakes that and hopes the call center doesn't really know what the customer looks like, the background setting is likely to be similar across fraud attempts. Remember that one thief is likely attacking many companies.
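To make the hold-time signal concrete, here is a toy calculation with made-up numbers. In practice, hold tolerance would be one feature among many (voice match, phrasing, caller ID patterns) fed into a learned model; the averages and thresholds below are arbitrary assumptions, not measured values.

```python
# Toy illustration: if fraudsters abandon calls sooner, the gap shows up in the data.
from statistics import mean

# Hypothetical hold times, in seconds, before the caller hung up or was connected.
hold_times_confirmed_fraud = [35, 50, 20, 40, 60, 25]
hold_times_legitimate = [180, 240, 95, 300, 150, 210]

print(f"avg hold tolerance, confirmed fraud: {mean(hold_times_confirmed_fraud):.0f}s")
print(f"avg hold tolerance, legitimate:      {mean(hold_times_legitimate):.0f}s")

def hold_time_risk(seconds, fraud_avg=38.0, legit_avg=196.0):
    # Crude risk score: the closer a caller's patience is to the fraud average,
    # the closer the score is to 1.0. Clamped to [0, 1].
    span = legit_avg - fraud_avg
    return max(0.0, min(1.0, (legit_avg - seconds) / span))

print(hold_time_risk(45))   # impatient caller -> high risk
print(hold_time_risk(200))  # patient caller -> low risk
```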

Part of the problem here is how ML is typically perceived in IT circles. It's often seen as an abstract analytical tool for DNA exploration or for figuring out the best way for software to drive a car. But it's simply a way of examining massive datasets (the bigger, the better) and finding likely patterns and deviations from those patterns. Companies can use ML to address a wide range of mundane business issues. Yes, it can be used for call center helpdesks, but it can just as easily be applied to figuring out the best supply chain routes or how potential hires should be evaluated.
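That "patterns and deviations" framing is easy to demonstrate. The sketch below runs an off-the-shelf anomaly detector over a tiny, made-up table of call metadata; swap in supply chain or applicant columns and the same few lines apply. The column choices and contamination rate are assumptions for illustration only.

```python
# Generic pattern-and-deviation detection on a small, made-up tabular dataset.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: [call duration in seconds, hold time in seconds, number of account lookups]
calls = np.array([
    [300, 180, 1],
    [420, 200, 2],
    [350, 150, 1],
    [390, 220, 2],
    [90,  30,  6],   # short, impatient call with many account lookups
])

detector = IsolationForest(contamination=0.2, random_state=0)
flags = detector.fit_predict(calls)  # -1 marks a row as an outlier, 1 as normal
print(flags)
```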

The only limit to using AI is the limit of executives' imagination.

 
