Thoughts on Phishing Assessments

I’ve been in this line of work (security consulting) for quite a while now, and one of the more common requests I’ve had to handle has been to perform some variety of “phishing assessment”.

In this post, I’ll outline a few thoughts I have on the whole “simulated phishing assessment” line of work, and thoughts on how we can make this better.

To get it out of the way, this post is absolutely not about phishing as part of red team operations. I have somewhat mixed feelings about those, which I’d like to outline in a later blog post. This post is about the more normal “try to phish everyone in the organisation… to teach the users to not click shit or whatever” type of assessment.

To begin, how this kind of assessment tends to run (simplified) is as follows.

  1. Target org. either supplies target email list to consultant, or consultant acquires this target list via OSINT/other means.
  2. Consultant sets up something like GoPhish, does whatever magic is needed to try to ensure successful inbox delivery, and sends out phishing emails.
  3. Assuming they arrive, either the users click and land on an actual phishing page to harvest credentials, or they get bounced to some “learn how to spot phishing!” e-learning stuff. Or their clicks just get logged.
  4. Consultant prepares a report, client files it away somewhere and maybe uses it as an excuse to fire someone (yes, this does happen).
  5. Repeat quarterly, or whatever.

Anyone else see the problems here? What value does the client org. get from this whole process? Beyond learning that their users will click links in emails, and maybe firing some people…

This kind of assessment usually teaches the target org. absolutely nothing about where their security posture is failing, beyond the usual “we need more user education”. Which leads to the exercise being repeated ad nauseam (to the benefit of absolutely nobody… except the consultant).

I don’t think the above is really a controversial opinion in the Year of our Lord 2020. Despite the best efforts of the Infosec industry, people continue to be pwned by phishing on the regular.

Now, I’m usually known as some kind of infosec doomer, just shitting on the industry constantly and providing nothing of value. However, the rest of this post will be about how we can do better, and sharing with you the methodology I’ve been using with a couple of clients lately.

I call it “Atomic Phishing Testing”, as it is heavily based on the methodology from the MITRE ATT&CK based “Atomic Red Team” project from Red Canary.

So the first step here is to almost completely fucking shitcan the idea of “blind phishing tests” that involve email enumeration, etc. Those just take up more billable hours, and provide fuck all value to most orgs early on. Again, we are not talking about red teaming.

Let’s also put the “phishing real users” thing on the back burner. We won’t test that until the end, if at all.

Instead, we’ve got to think about exactly how our phishing email ends up in a user’s inbox, how the link gets clicked, and how the “bad things” happen, and work from there.

Effectively, we need to ask “what controls are in place, and why are they not working?”, and actually model the thing with our clients.

Let’s take a not-uncommon “flow” of how a phishing email gets delivered, by talking about the various defenses it gets through. This will become important shortly.

So firstly, the email needs to get by some kind of email filtering, which will either drop an inbound email or redirect it to the spam folder. We will presume this control fails, and the email ends up in the inbox.

So we continue to assume that the email passed all the checks, and landed in the employee’s inbox. At this point, we figure the employee will open it. There will be a link or an attachment inside. Let’s assume a link, and let’s assume the link gets clicked. After all, for a lot of employees, opening emails and looking at shit in them is like most of their job.

So: the email gets in, the link gets clicked or the attachment gets opened. Either your employee lands on a phishing page, or malware gets executed.

At this point, a number of controls should deny this: web filtering blocking phishing sites, web filtering blocking the malware download, AV/EDR/WTF blocking execution, or the user not entering credentials. We can safely assume at this point that credentials will be entered, and if all the rest fails, it’s game over.

So let’s enumerate the controls we care about, in order, for the phishing attack to succeed. This is not an exhaustive list.

  1. Email filtering needs to be bypassed.
  2. URL/Attachment filtering (AV, Proxy) needs to be bypassed.
  3. In the event of execution – execution needs to be prevented. In the event of credential theft – “user awareness”.

By the time you get to step 3, the step at which the host or user has to intervene, you have already failed. Also note, this is a vastly simplified model.
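One way to keep this simplified model straight during testing is to write the chain down as data you fill in as you go. A minimal sketch (the control names and outcomes here are purely illustrative, not any real product or client):

```python
from dataclasses import dataclass

@dataclass
class ControlTest:
    """One layer of the (simplified) phishing kill chain."""
    name: str        # e.g. "email filtering"
    bypassed: bool   # did our test payload get past this layer?
    notes: str = ""  # which variant got through, log references, etc.

def gaps(chain):
    """Return the names of the controls our test traffic got past,
    in kill-chain order."""
    return [test.name for test in chain if test.bypassed]

# Example results from one round of testing (illustrative values)
chain = [
    ControlTest("email filtering", bypassed=True,
                notes="lookalike-domain sender landed in inbox"),
    ControlTest("URL/attachment filtering", bypassed=False),
    ControlTest("AV/EDR execution prevention", bypassed=False),
]
```

The point of structuring it this way is that each “bypassed” entry maps directly to a finding in the report, rather than a vague “users clicked things”.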

So in order to create an “atomic” testing strategy, we should test the different controls in use: the email filtering, the web filtering, the firewalls/IDS, the AV – and only then can the user awareness component be considered valid for testing.

As a rule, in my personal opinion, if the security of your entire organisation comes down to “user awareness”, you have failed fucking miserably. Instead of the user being reprimanded, your IT security staff should be the ones taking the bullet in the neck.

Real talk: while “user awareness” and “security culture” are absolutely important, every phishing email that lands in the inbox is a failure of the blue team’s controls. Every phishing landing page that successfully loads in a user’s browser, or malicious attachment that touches disk, is a failure of the blue team’s controls.

So how do we test these? Well, we break down the steps, and use stuff like temporary whitelisting (another controversial topic in infosec, it seems) to allow us to actually test each part of the “kill chain”.

So first, you need to work out how to get visibility at each “point” in the chain, preferably with the help of the client’s IT staff. This will require questions such as the following:

  1. Where do blocked/dropped emails get logged?
  2. Can we deploy a test client machine with a test account?
  3. Can we monitor AV logs/alerts on the test machine?
  4. Can we access the web proxy/firewall logs?

There are probably others, but these will do for now.

So we set up a “test account” on a test machine, built just like a standard user account and workstation build. Then, we try sending it some phishing emails using all our various tricks.

We should monitor the logs on the mailserver as well as the test account’s inbox, to see which emails get blocked entirely, which end up in spam, and which land in the inbox.
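As a rough sketch of what “various tricks” looks like in practice, here’s one way you might generate a batch of tagged test messages with differing characteristics, so each variant can be matched against the mail logs later. Everything here is a placeholder – the sender domain, subjects, variants, and the custom `X-Phish-Test-Id` header are my own illustrative choices; actual delivery would go via `smtplib` against whatever relay you’ve agreed with the client:

```python
from email.message import EmailMessage

# Illustrative variants only -- a real test set would cover many more
# evasion techniques (encodings, redirectors, archive attachments, ...).
VARIANTS = [
    ("plain-link",      "Password expiry notice", "Reset here: http://test-phish.example/reset"),
    ("short-link",      "Invoice overdue",        "See invoice: http://sh.example/x7q"),
    ("html-attachment", "Updated policy doc",     "Policy attached, please review."),
]

def build_test_emails(sender, recipient):
    """Build one test message per variant, each tagged with a custom
    header so we can grep mailserver logs per variant afterwards."""
    messages = []
    for tag, subject, body in VARIANTS:
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = recipient
        msg["Subject"] = subject
        msg["X-Phish-Test-Id"] = tag  # correlate with delivery logs
        msg.set_content(body)
        if tag == "html-attachment":
            msg.add_attachment("<html><body>test</body></html>",
                               subtype="html", filename="policy.html")
        messages.append(msg)
    return messages

emails = build_test_emails("it-support@test-phish.example",
                           "testuser@client.example")
```

Which variants arrive, get junked, or get dropped outright tells you exactly what the “email filter” layer is and isn’t catching.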

Figure out what is allowing the dodgy emails into the inbox, as opposed to being blocked or junked. There you will be able to identify gaps in the “email filter” layer of defense – or, to put it another way, a way in which one of your controls is failing. Your job here is to identify exactly what is failing.
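For messages that did land, one concrete thing worth pulling apart is the Authentication-Results header on each delivered test email, which records how the receiving server scored SPF/DKIM/DMARC (the header format is defined in RFC 8601; real-world headers vary a lot more than this sketch, and the sample value below is made up):

```python
import re

def auth_results(header_value):
    """Pull spf/dkim/dmarc verdicts out of an Authentication-Results
    header value, e.g. {'spf': 'pass', 'dkim': 'none', 'dmarc': 'fail'}.
    Deliberately simplistic -- real headers can carry multiple instances
    of each mechanism."""
    verdicts = {}
    for mech, result in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header_value):
        verdicts.setdefault(mech, result)  # keep the first verdict seen
    return verdicts

# Made-up sample header value for illustration
hdr = ("mx.client.example; spf=pass smtp.mailfrom=test-phish.example; "
       "dkim=none; dmarc=fail header.from=test-phish.example")
```

A message that arrives in the inbox with `dmarc=fail` is a pretty direct pointer at *which* filtering decision let it through.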

Assuming none of your ninja magic gets you through the email filter, you will now want to whitelist a sender and send another phishing email containing a sketchy link or attachment. If your voodoo is working, however – use it here.

Same drill as before, except this time we actually click (as a real user would), and see what happens. Try this with various evasion methods in your links/attachments. What you are looking at now is the proxy/web filter logs (in the case of a link), or the EDR/AV logs etc. (in the case of a malicious attachment). Do these controls fail? If so, how? Why? Again, we are identifying gaps in the controls, in a granular fashion.

By this point, if you are successfully gaining execution or “phishing page loaded in browser”, you know that the post-mailfilter controls are failing. Either proxy/web filtering is not doing its job, or the AV/EDR/WTF is not doing its job. Run multiple tests here to discover exactly where the gaps in this are occurring.
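To make “exactly where the gaps are” concrete, you will end up doing something like the following against the proxy logs – a sketch assuming a Squid-ish access log layout (field positions and action names vary by product, so treat the parsing and the sample lines as illustrative):

```python
import re

# Assumed log shape: "<timestamp> <client> <action>/<status> <url> ..."
LINE = re.compile(r"^\S+\s+\S+\s+(?P<action>[A-Z_]+)/(?P<status>\d{3})\s+(?P<url>\S+)")

def tally_test_urls(log_lines, test_domain):
    """Count allowed vs denied proxy hits against our test phishing
    domain, ignoring all other traffic."""
    results = {"allowed": 0, "denied": 0}
    for line in log_lines:
        m = LINE.match(line)
        if not m or test_domain not in m.group("url"):
            continue
        if m.group("action").startswith("TCP_DENIED"):
            results["denied"] += 1
        else:
            results["allowed"] += 1
    return results

# Made-up sample lines for illustration
sample = [
    "1600000001 10.0.0.5 TCP_MISS/200 http://test-phish.example/reset",
    "1600000002 10.0.0.5 TCP_DENIED/403 http://test-phish.example/payload.exe",
    "1600000003 10.0.0.5 TCP_MISS/200 http://intranet.client.example/",
]
```

Run the same kind of tally per evasion technique and you get a per-control, per-technique picture of where the web filtering layer is leaking.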

If you identify any other compensating technical controls along the way (this varies from org to org – I’m using a simplified model here), test those too, in a granular fashion.

By the end of this exercise, you should be able to present the client with an actually useful report that tells them precisely which of their shiny boxes are failing miserably at their job, giving them a stick to beat vendors with – instead of having them take it out on their users.

If you really must, you can follow this up with a “live test” using actual, live users. However, during this test you should still request access to the mailserver and web proxy/filter logs, etc., to monitor progress. Only after you have made a best effort to ensure the safety of users should you proceed with actually testing them.

Ethical concerns about the impact of running phishing tests on live users are something I’d like to address in a followup post, if that is of interest. That post will probably also touch on the “user awareness” aspect of phishing tests. Phishing in the context of a red team engagement is another thing entirely, and will require yet another writeup to capture my thoughts on it.

As a final note, if you do want this kind of work done, feel free to get in touch. I’d also highly rate Richard at The Antisocial Engineer for test work that involves social engineering, and as for creating security culture/user awareness programs, maybe consider the fine folks at Cygenta? There we go, that is the blatant shilling done 😉
