Today's big concern continues to be phishing…
The most common security threat our customers currently face is email phishing. Phishing is a form of social engineering that uses deception to get the victim to reveal sensitive information. These attacks are ones we can all relate to, because we've all seen these emails. Usually, we can identify them because they sound suspicious or otherwise unusual. Phishing emails are very common; some statistics suggest that one to two percent of all email sent daily is a phishing attempt.
Education has helped
Over the past year, one of the ways we've been working with many of our customers is by teaching them how to identify a phishing email. We do that by looking for characteristics that make the email stand out as unusual rather than natural or authentic. The challenge we face in the next year or so is that natural language AI is going to help attackers write emails that sound more natural and authentic. These emails will likely be written in a way that matches an organization's style and culture. For example, a law firm may receive more formally written emails, whereas a creative house might see informal lingo and even jokes included to make the message feel more authentic.
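To make the idea of "characteristics that stand out" concrete, here is a minimal sketch in Python of the kind of rule-of-thumb checks we teach. The specific phrases, greetings, and the display-name/domain mismatch rule are illustrative assumptions, not a complete filter, and they are exactly the sort of cues that AI-written emails will get better at avoiding.

```python
# Illustrative red flags only; real phishing training covers many more signals.
URGENT_PHRASES = ["verify your account", "urgent action required", "password expires"]
GENERIC_GREETINGS = ["dear customer", "dear user"]

def phishing_red_flags(sender: str, reply_to: str, body: str) -> list[str]:
    """Return a list of human-readable warnings for a single email."""
    flags = []
    text = body.lower()

    # 1. Urgency or credential-harvesting language.
    for phrase in URGENT_PHRASES:
        if phrase in text:
            flags.append(f"urgent/credential language: '{phrase}'")

    # 2. Generic greeting instead of the recipient's name.
    if any(text.startswith(greeting) for greeting in GENERIC_GREETINGS):
        flags.append("generic greeting")

    # 3. Reply-To domain differs from the visible sender's domain.
    sender_domain = sender.split("@")[-1]
    reply_domain = reply_to.split("@")[-1]
    if sender_domain != reply_domain:
        flags.append(f"reply-to domain mismatch: {sender_domain} vs {reply_domain}")

    return flags

warnings = phishing_red_flags(
    sender="it-support@example.com",
    reply_to="helpdesk@examp1e-support.net",
    body="Dear customer, urgent action required: verify your account today.",
)
print(warnings)
```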
Gut instinct is losing ground
Phishing attacks work because of the sheer number of emails generated and the probability that some recipients will fall for them. The number of real victims we see is small but not insignificant. My strong concern is that natural language AI will increase the probability of a recipient falling for a phishing email.
Most customers don't need to worry about targeted AI attacks that impersonate a particular person making a particular, detailed request. What we do need to worry about are emails that appear to come from LinkedIn or the HR portal. ChatGPT, with its near-perfect English, will do a much better job of convincing us that an email was written by a human than the badly translated output of a foreign bot ever did.
We’re going to need better tools…and fast!
The big takeaway on the IT side is that we will not be able to keep relying on gut instinct and human perception to detect nefarious emails. This shift is happening very fast. In the next year, we can expect increased reliance on AI language detection tools for anti-phishing. Industry leaders will provide some of these tools, and, of course, they will charge more for the convenience.
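As a sketch of what a language-based anti-phishing tool might look like in practice, here is a minimal text classifier built with scikit-learn. The handful of labeled training emails and the 0.5 quarantine threshold are assumptions for illustration only; the commercial tools discussed above would be trained on far larger corpora with purpose-built models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = phishing, 0 = legitimate); a real tool
# would be trained on a large corpus, not a toy list like this.
emails = [
    "Urgent: verify your account or it will be suspended today",
    "Your password expires in 24 hours, click here to keep access",
    "HR portal notice: confirm your banking details for payroll",
    "Here are the meeting notes from Tuesday's project review",
    "Lunch is on me Friday if the release goes out on time",
    "The quarterly report draft is attached for your comments",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a simple, transparent baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = "Please verify your account details before the portal is suspended"
phish_probability = model.predict_proba([incoming])[0][1]
print(f"phishing probability: {phish_probability:.2f}")
if phish_probability > 0.5:  # illustrative threshold
    print("route to quarantine for human review")
```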
Scott Morabito is a technologist and founder of TechTonic. He was trained as a computer scientist and resides in Concord, MA.