How the Solorigate (Sunburst) Cyber Attack could have been prevented
Understanding the command and control approach, and ways to break it before it starts
Author, Todd Gifford – Chief Technology Officer (CISSP)
Context and audience
This blog post is aimed at security professionals, network security managers and anyone in a technical capacity responsible for, or working in, network security.
The compromise of the SolarWinds Orion monitoring platform is, no doubt, one of the most impactful cyber attacks ever – not because it caused any disruption or encrypted any data, but because it involved the compromise of software source code from a reputable company and went unnoticed for what is, in cyber terms, a very long time.
There are always times when you have no option but to run software that is past its sell-by date, and there are things you can do about that, which I have talked about in a previous post – What to do when you can’t patch.
In this instance, however, not patching would have been a good thing: if you hadn’t updated to the compromised software, there would have been no ‘attack’.
Recap – what is Solorigate (Sunburst)?
- The Orion network monitoring software agents installed on servers had malicious code implanted at source, which was compiled and distributed to the many customers who use the software
- That compromised code was not detected by the majority of security vendors as it’s ‘trusted’ software, and at the time, there were no defined signatures or behaviours loaded into SIEM platforms, IPS or endpoint protection systems to detect and block the activity
- The compromised monitoring agents connected to command and control infrastructure, where they took command actions from the threat actors who carried out the attack
- Once access was established, the attackers used the command and control capability to access other systems, capture credentials, move laterally and access cloud-based systems
- Circa 18,000 organisations had installed the compromised software, exposing them to infiltration
How to get to the Root Cause
I’m a big fan of a few key principles when it comes to Cyber Security:
- Always ask, why? Then keep going with the whys until you get a basic, simple answer
- Use the phrase ‘show me’ when it comes to evaluating security
- Keep your environment simple and compartmentalised
- Does it need to have access to that system/data/network/website? Least privilege works for systems and software, not just people
- Get good people to continually evaluate and improve
What was the Solorigate (Sunburst) root cause?
So far, I haven’t seen a definitive root cause for how the Orion code was compromised. The latest information from SolarWinds is available here.
Now I don’t like to assume anything, as assumptions are a dangerous thing. So, I won’t speculate on how a malicious attacker managed to compromise Orion Source Code. It will be good for us all to learn how that happened, so we can prevent it from happening in future. It’s not the first time such an attack has been used (see this article about how a finance platform software update was compromised).
Let’s look at the root cause from an Orion customer’s perspective: trusted software compromise – and it could be any software. That looks bad: if you can’t trust your software, and no-one else knew about the compromise (so it wasn’t detected for many months), what are you supposed to do?
We can’t fix the root cause – now what?
Hold the phone. Orion is commonly deployed on servers in mission-critical environments. The subsequent attacks rely on command and control (where software calls home to receive its instructions) – so what happens if it can’t call home? Likely not much. Sunburst didn’t encrypt files or overtly infect lots of systems – it was designed to be stealthy.
For me, if the compromised software couldn’t connect to the control server(s), it couldn’t receive any commands, and therefore, that is where the attack would end.
You let your core infrastructure connect to what?
Two things could have prevented the compromised software from being a problem:
- Explicit allow
- Default Deny
The two best firewall rules ever invented. I am stuck asking myself why so many (highly security-conscious) organisations would let connections initiated from core infrastructure reach undefined internet destinations. Sure – they may need to connect outbound to a cloud monitoring platform, or to download software updates – but to access a random website with no defined business purpose?
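To make the pairing concrete, here is a minimal sketch (in Python, with made-up destination names) of what ‘explicit allow, default deny’ means as a decision rule – anything not on the list fails, with no rule needed for the attacker’s infrastructure:

```python
# Minimal sketch of "explicit allow, default deny" for server egress.
# The destinations below are hypothetical examples, not real rules.
ALLOWED_EGRESS = {
    # (destination, port) pairs this server group has a business need for
    ("monitoring.vendor.example", 443),  # cloud monitoring platform
    ("updates.vendor.example", 443),     # software updates
}

def egress_allowed(destination: str, port: int) -> bool:
    """Explicit allow: only listed destinations pass.
    Default deny: everything else fails."""
    return (destination, port) in ALLOWED_EGRESS

# A previously unknown command and control host is denied by default:
print(egress_allowed("updates.vendor.example", 443))  # True
print(egress_allowed("unknown-c2.example", 443))      # False
```

The key point is the direction of the logic: the list names what *is* allowed, and silence means no.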
This may have been a risk-based approach: you can’t (easily) define which internet sites a user’s endpoint may or may not need to connect to, but it’s much easier to do for, say, a domain controller, a file server or a database server.
Even servers that serve web pages don’t need to initiate outbound connections to unknown destinations – that’s what modern firewalls (well, any firewall from at least the last, erm, 20 years) will manage by being stateful.
Key approaches to preventing command and control attacks
Not an exhaustive list and likely not everything would be possible in all organisations, but here are some of the things I would recommend:
- Map out your known (required) connections from your server infrastructure
- Segment your servers into logical groups from a network perspective
- Do implement firewall rules between segments and the internet – limit where traffic from servers can go and what it can do
- Only allow internet access to known, business-required destinations for your servers. Yes, that is some work, but better than the alternative
- Really do keep your environment as simple as possible – less is more in security terms
- Do mix some vendors for security – but keep the list short. Microsoft was affected by this breach as they use the SolarWinds product, but they are also one of the best security vendors out there.
- Keep your Endpoint Detection and Response systems up to date and pro-actively look for indicators of compromise
- Layer in tighter network and other security controls based on the value of the asset you’re trying to protect. Your source code, for example, is super valuable, so always put lots of protection around it.
- Last but not least – assume breach, and work from there.
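The first recommendation – mapping out your known connections – can start from something as simple as summarising firewall or flow logs. A rough sketch (the record format and host names here are invented for illustration):

```python
from collections import Counter

# Hypothetical flow records: (source server, destination, destination port).
# In practice these would come from firewall or NetFlow logs.
flows = [
    ("orion-server", "monitoring.vendor.example", 443),
    ("orion-server", "monitoring.vendor.example", 443),
    ("file-server", "updates.vendor.example", 443),
    ("orion-server", "unknown-host.example", 443),
]

# Count how often each (source, destination, port) tuple appears.
# Frequent, explainable tuples become candidate allow rules;
# rare or unexplainable ones get investigated before any rule is written.
summary = Counter(flows)
for (src, dst, port), count in summary.most_common():
    print(f"{src} -> {dst}:{port}  seen {count}x")
```

The output of an exercise like this is exactly the input your segmentation and firewall rules need.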
Other sources of information
There are many, but some good ones I have read are:
- Fortinet also have some great information on the analysis they have done
How do I fix my firewall rules?
This is best explained with an easy example, so let’s take a firewall rule I see all too often:
| Source IP | Source port | Destination IP | Destination port | Protocol | Action |
|---|---|---|---|---|---|
| Any | Any | Any | Any | Any | Allow |
In the above example, this rule would be applied outbound, meaning anything behind your firewall in your organisation could go through the firewall, to anywhere, using any protocol it likes. Hmmmm.
Let’s try this again.
Business requirement: Internet Browsing
Re-worked rules from the above:
| Source IP | Source port | Destination IP | Destination port | Protocol | Action |
|---|---|---|---|---|---|
| Any | Any | Any | 80, 443, 53 | IP | Allow |
Better – but now we can only use specific ports to go anywhere. It’s pretty easy to run a command and control service on port 53, for example (yes, I know, this really needs to be UDP, but let’s keep the example simple).
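To see why port-only filtering falls short, consider this toy check (in Python): it inspects only the destination port, so traffic to any host at all – including a command and control server – passes as long as it uses an allowed port.

```python
# Toy port-only filter: the destination is never inspected.
ALLOWED_PORTS = {80, 443, 53}

def port_only_allowed(destination: str, port: int) -> bool:
    """Allow if the port is on the list, regardless of where the
    traffic is actually going."""
    return port in ALLOWED_PORTS

# A hypothetical C2 server listening on port 53 sails straight through:
print(port_only_allowed("c2.attacker.example", 53))  # True
```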
Here is how I would like to see the same business requirement deployed as firewall rules using the above example:
| Position | Source IP | Source port | Destination IP | Destination port | Protocol | Action |
|---|---|---|---|---|---|---|
| 1 | DNS servers | Any | 22.214.171.124, 126.96.36.199 | 53 | TCP/UDP | Allow |
| 2 | Servers that need internet browsing (say, RDS farm) | Any | Any | 80, 443 | TCP | Allow |
| 3 | Any | Any | Any | Any | Any | Deny |
Clearly, this is still a challenge, as we haven’t specified which destination IPs we will allow. This is where some form of protective DNS and web filtering technology can inspect and filter requests higher up the TCP/IP stack – filtering by IP alone would mean a large list that is just too complex to manage. It doesn’t help if there is no defined information or signatures about malicious IP addresses and URLs, though. There is always some risk that has to be accepted if you connect systems to the internet, or hold any data – this is all about minimising that risk.
Now – what about those servers that don’t need to browse the internet? In the above example, we have split our servers into two groups and applied specific rules. Anything else is immediately denied internet access by our default deny rule. Why? Note that in the last table we added the position column: firewall rules are evaluated in order until a match is found. Anything that didn’t match the first two rules is immediately matched to the last rule and is denied. This is why the deny rule should always be at the bottom.
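That first-match behaviour can be sketched in a few lines of Python (the server groups and rule set are illustrative, mirroring the table above):

```python
# First-match firewall evaluation: rules are checked in order,
# and the first matching rule decides the outcome.
# "any" is a wildcard that matches everything in its field.
RULES = [
    # (source group, destination, ports, action)
    ("dns-servers", "any", {53}, "allow"),     # rule 1: DNS out
    ("rds-farm", "any", {80, 443}, "allow"),   # rule 2: browsing from the RDS farm
    ("any", "any", "any", "deny"),             # rule 3: default deny at the bottom
]

def evaluate(source_group: str, destination: str, port: int) -> str:
    """Return the action of the first rule that matches."""
    for src, dst, ports, action in RULES:
        if (src in ("any", source_group)
                and dst in ("any", destination)
                and (ports == "any" or port in ports)):
            return action
    return "deny"  # belt and braces: deny if nothing matched at all

print(evaluate("rds-farm", "site.example", 443))    # allow (rule 2)
print(evaluate("orion-server", "c2.example", 443))  # deny (falls to rule 3)
```

A compromised monitoring server in neither of the first two groups never reaches its control server – which is exactly where this attack would have ended.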
This isn’t complicated stuff – but it does take knowledge about your environment, a little planning and some patience. It also takes good change control to make sure no-one just puts in an Any-Any rule to get something working in a hurry. It is always worth reviewing firewall rules regularly. Oh, wait – that is a requirement in Cyber Essentials, isn’t it? It certainly is in ISO 27001.
Optimising IT – Cyber Security Services
Here at Optimising IT, we offer a full range of cyber security services including cyber consultancy, penetration testing, and Cyber Essentials Plus assessments and certifications. We also offer best-of-breed security solutions from Fortinet and Microsoft, including supply, installation and support.
Is your security approach aligned with your business risk?
If you are concerned that you may have been exposed by the recent SolarWinds attack, or you are looking for an independent review of your security, Penetration Testing, Cyber Essentials certification services or some help with your network security, get in touch with us today to see how we can help.
Contact us with any of your specific questions – and keep an eye out for my series of posts. I’ll provide some more detail on network security with some examples and possibly even a diagram or two.