This is the 23rd post in a series about security and how security comes down to how you THINK.
Before drafting this post, I was working on one about a totally different topic, but something that happened last night prompted this one instead. I received some very strange messages from my friends on Facebook. They had received “friend requests” from me – which was very strange, because they were already my friends. Some sent me text messages, some sent me email, some posted on Facebook, and I even got a phone call (thank you, Frank!).
What had happened was that someone had set up a parallel account, trying to impersonate me. My picture, my details, my information. My account hadn’t been compromised, but I changed my password just to make sure. If you searched for me on Facebook, you would have seen two accounts – one with all of my posts and friends, and one that had nothing.
A very weird occurrence. Why would someone want to be me – for all of the fame and fortune of being a blogger? I don’t know. After creating the account, that person had started texting some of my friends about very weird topics. I logged into Facebook and reported the problem, and within 30 minutes the parallel account had been deleted. (It had never been confirmed anyway.)
I wanted to blog about this specific incident because it deals with one of the major topics of security – what’s a security event and what isn’t? In simpler terms, what’s normal and what’s not? This is the “holy grail” of security – to instantly know what’s good and what’s not. There is an April Fools’ RFC, RFC 3514, which describes the “evil bit” in network traffic – if the bit is clear, you can trust the traffic as good; if the bit is set, the traffic is instantly “evil”. It would make life in the security world much simpler if differentiating good traffic from bad were really that easy.
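To make the joke concrete, here is a tongue-in-cheek sketch of what an RFC 3514 check would look like. The RFC assigns the evil bit to the normally reserved high-order bit of the IPv4 flags field (bytes 6–7 of the header); the function name and sample headers below are my own illustration, not a real API.

```python
import struct

EVIL_BIT_MASK = 0x8000  # high bit of the 16-bit flags/fragment-offset field

def is_evil(ipv4_header: bytes) -> bool:
    """Return True if the RFC 3514 'evil bit' is set in a raw IPv4 header."""
    # Bytes 6-7 of the IPv4 header hold the flags (3 bits, the first of
    # which is normally reserved) plus the 13-bit fragment offset.
    flags_frag = struct.unpack("!H", ipv4_header[6:8])[0]
    return bool(flags_frag & EVIL_BIT_MASK)

benign = bytes(20)        # minimal all-zero 20-byte header: bit clear
evil = bytearray(20)
evil[6] = 0x80            # set the reserved/evil bit

print(is_evil(benign))       # False
print(is_evil(bytes(evil)))  # True
```

If only real attackers were so considerate about flagging their own packets.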
We all examine the world around us and notice what’s not normal – the truck parked in the neighbor’s driveway, the new pop-up when we browse to a website – what is an everyday occurrence and what isn’t. When we spot a difference, we subconsciously assign a “risk” to it and then decide whether that risk is worth acting on. If I see a truck parked in the neighbor’s driveway, it correlates with the fact that they told me they’re getting their kitchen redone. If I see a new pop-up, it’s just a new advertisement. Maybe.
If the risk is too great, then I have to tell someone – maybe my neighbors, maybe my internet service provider. It comes down to how I (and my friends) THINK security. If I don’t think it’s a risk at all, I proceed as normal: I click the link to continue, I accept the friend request, I keep going. If I think it’s a low-priority alert, I note it as such, send a text or email, and keep looking for more. If it’s a high-priority alert, or if I have seen many of these, I realize the risk is greater, so I raise a severe alert, call someone, and shut things down. It comes down to how I THINK about the item and how I assign risk to what’s happening.
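The tiered response above can be sketched as a tiny decision function. The thresholds, scores, and action strings are purely illustrative assumptions – real systems tune these against their own environment:

```python
def respond(risk: int, recent_alerts: int) -> str:
    """Map an assigned risk score (0-10) and recent alert count to an action."""
    if risk == 0:
        return "proceed"                 # normal: keep going
    if risk < 5 and recent_alerts < 3:
        return "log and notify (low)"    # note it, send a text or email
    # High risk, or many low alerts piling up: escalate.
    return "severe alert: escalate and shut down"

print(respond(0, 0))   # proceed
print(respond(3, 1))   # log and notify (low)
print(respond(3, 5))   # severe alert: escalate and shut down
print(respond(8, 0))   # severe alert: escalate and shut down
```

Note how the same risk score (3) produces different actions depending on how many alerts came before it – seeing “many of these” changes the calculus.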
Were the people who accepted the friend request from the imposter wrong? No. They just assigned it a very low risk, or none at all. It could have been a perfectly normal occurrence – e.g., I could have lost my password and had to create a new account. If they had seen a weird text or a strange picture from me, that would have triggered an alert. Some rated the risk higher – and triggered an alert immediately. Does everyone have to assign the same level of risk to each event? No – because it depends on the environment and the event that happened.
So, it does come down to how my friends and I THINK security. It’s a constant evaluation of each event, the risk we assign to it, and – if it’s over a threshold – who we have to notify. As systems get more and more complex, the rules for assigning risk get more complicated (a message received on a port from an unknown IP, followed by someone trying to log in with a bad password), and the number of devices that are part of the system grows and grows. Enterprises are very complicated environments, and the number of events is staggering. How do we easily determine what’s normal and what’s not? It takes a lot of learning of the environment, establishment of baselines, adjustment of rules, and automation (and trust in that automation). But it really comes down to how we THINK about security.
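The parenthetical example – an unknown-IP connection followed by a bad-password login – is a correlation rule: events that are individually low-risk combine into something alarming. Here is a toy sketch of that idea; the event names, scores, and threshold are made-up assumptions for illustration:

```python
from collections import defaultdict

# Illustrative per-event risk scores and alert threshold (not from any real tool).
SCORES = {"unknown_ip_connection": 2, "failed_login": 3}
THRESHOLD = 4

def correlate(events):
    """Sum per-source risk scores and flag sources whose total crosses the threshold."""
    totals = defaultdict(int)
    for source, kind in events:
        totals[source] += SCORES.get(kind, 1)  # unknown event kinds score 1
    return [src for src, score in totals.items() if score >= THRESHOLD]

events = [
    ("10.0.0.5", "unknown_ip_connection"),
    ("10.0.0.5", "failed_login"),   # same source: 2 + 3 = 5, over threshold
    ("10.0.0.9", "failed_login"),   # alone: 3, under threshold
]
print(correlate(events))  # ['10.0.0.5']
```

Neither event alone would raise an alarm; together, from the same source, they do – which is exactly the kind of “what’s normal here?” judgment the baseline-and-rules work has to encode.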