Security professionals today are keen to learn about the latest threats facing their organizations. Be it a nasty piece of malware that’s making the rounds, a unique business email compromise (BEC) campaign, or even physical social engineering tactics that lead to a network compromise, there are a lot of attack tactics to consider and bases to cover from a security standpoint. But it’s important to note that digital crooks won’t start their attack campaigns by spending hours developing custom file payloads or trojans. They’re looking for the path of least resistance, and most of the time, that means exploiting a misconfiguration in their target’s network.
Misconfigurations have led to some significant breaches recently, so to find out more about how they come to be and what security leaders can do about them, I caught up with John Cartrett, director of Americas at Trustwave SpiderLabs.
What’s your definition of a misconfiguration?
John Cartrett: It’s essentially anything that is “overprivileged.” Being overprivileged can apply to both user account permissions, as well as network service accessibility. This situation occurs when organizations do not follow the principles of least privilege during software deployments. The principles of least privilege translate to: "if there is not a business need to access a resource, then access to that resource should not be allowed." I suggest that customers ask each of their vendors if their applications are securely developed and ask them to provide all post-installation hardening guides. When an organization invests in a new product or solution, they will want to make sure that their vendor is committed to protecting its software and that the purchased software will not be used against the organization by an attacker.
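As an illustration of what checking for overprivilege might look like in practice, here is a minimal sketch that compares each account’s observed group memberships against an approved baseline and flags anything extra. The account names, groups, and baseline are hypothetical placeholders; in a real environment this data would come from your directory service or identity platform.

```python
# Minimal least-privilege audit sketch (hypothetical data, illustrative only).
# Flags any account holding group memberships beyond its approved baseline.

# Approved baseline: the groups each account actually needs for its business function.
APPROVED = {
    "svc_backup": {"Backup Operators"},
    "svc_webapp": {"IIS_IUSRS"},
    "jdoe":       {"Domain Users", "Finance"},
}

# Observed memberships, e.g. exported from a directory service.
OBSERVED = {
    "svc_backup": {"Backup Operators", "Domain Admins"},    # overprivileged
    "svc_webapp": {"IIS_IUSRS"},
    "jdoe":       {"Domain Users", "Finance", "IT Admins"},  # overprivileged
}

def find_overprivileged(approved, observed):
    """Return {account: extra_groups} for accounts with more access than approved."""
    findings = {}
    for account, groups in observed.items():
        extra = groups - approved.get(account, set())
        if extra:
            findings[account] = extra
    return findings

if __name__ == "__main__":
    for account, extra in find_overprivileged(APPROVED, OBSERVED).items():
        print(f"{account}: remove or justify membership in {sorted(extra)}")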
Misconfigurations are leading to a tremendous amount of headline-grabbing breaches time and time again. What are security leaders missing here?
John Cartrett: One of the most significant issues with our industry is that we opt to buy a tool versus securing or hardening what we currently have. The Microsoft operating system, by itself, is an enterprise-class tool, and it has security features that, if wielded in the appropriate manner, can make it very difficult for an attacker to operate in the environment. The problem is that as an industry—and I’m talking specifically about internal IT folks—we tend to buy our way out of the problem versus trying to learn how to best wield the tools that we already have.
To take the lazy way out, I don’t have to spend the time and effort to understand and secure what I already have; I can just buy some blinking lights, put them in my environment, and I’m “secure.”
That’s a problem because it does two things:
Security vendors will not turn on any security features that require a tailored configuration because they don’t want to break anything in your environment. If a vendor has a new product and turns on all of the features by default, it’s guaranteed to break something in a customer’s unique environment. It’s going to cause that customer pain, which results in a bad experience and has the potential to cause a loss in revenue for the vendor.
Why wouldn’t vendors want to change those default settings?
John Cartrett: There is a two-fold answer here. The first is about general software applications, which are feature-rich and come out of the box with everything turned on and accessible to the network. The vendor leaves it up to you, as the owner of that software, to turn off the features that you don’t want. It goes back to the principles of least privilege, meaning if there’s a service or a protocol on that box that you’re not using, turn it off. That’s where the idea of misconfigurations comes in. Those legacy protocols are elementary to attack, and if they’re left on by default, that’s an easy win for an attacker. A lot of times, those are the things that we go after as a red team. The second is more specifically about security products, which typically come with any advanced features disabled. Both of these are so that their software universally works for everyone with the least amount of headaches.
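To make the “turn off what you’re not using” point concrete, below is a small sketch that probes a host for a handful of commonly abused legacy services by attempting TCP connections. The target host and port list are assumptions for the example, not a canonical list; anything that answers and has no business purpose is a candidate to disable, and this kind of probing should only be run against systems you’re authorized to test.

```python
# Sketch: probe a host for legacy/commonly-abused services that are often
# left enabled by default. The target host and port list are illustrative.
import socket

TARGET = "10.0.0.5"  # hypothetical internal host

LEGACY_SERVICES = {
    21:   "FTP",
    23:   "Telnet",
    139:  "NetBIOS Session Service",
    445:  "SMB",
    3389: "RDP",
}

def is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port, name in LEGACY_SERVICES.items():
        if is_open(TARGET, port):
            print(f"{TARGET}:{port} ({name}) is reachable -- disable it if there is no business need")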
If security leaders get in the habit of buying tools to address single problems and implementing them without making changes to the underlying operating system, they aren’t making their environment any more secure. First, they’re not configuring the operating systems that they currently have, and second, they’re also installing tools on top that are not hardened, leaving them in a default state. There are many times that I’ve found up-to-date operating systems, but with applications from a third-party vendor that were running an underlying web server that wasn’t hardened. If you don’t harden any new piece of software you introduce to your environment, it’s a misconfiguration that can be used against you.
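One way to spot the kind of unhardened, default-state web server described above is to look at what it volunteers about itself. The sketch below, which assumes a hypothetical target URL, sends a single request and reports a verbose Server banner along with a few missing hardening headers; the header list is a starting point for illustration, not a complete hardening checklist.

```python
# Sketch: flag signs that a bundled web server is still in its default state.
# The target URL is a placeholder; run only against systems you own or are
# authorized to test.
from urllib.request import urlopen

TARGET = "http://10.0.0.5:8080/"  # hypothetical third-party app endpoint

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def check(url):
    with urlopen(url, timeout=5) as resp:
        headers = resp.headers
        server = headers.get("Server", "")
        if server:
            print(f"Server banner exposed: {server!r} -- consider suppressing version info")
        for name in EXPECTED_HEADERS:
            if headers.get(name) is None:
                print(f"Missing hardening header: {name}")

if __name__ == "__main__":
    check(TARGET)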
So are misconfigurations a result of the vendor providing solutions in the default state, or the customer not making the necessary changes to the solution once it is in place?
John Cartrett: It’s on the customer. The vendors are always going to err on the side of universal use. That’s what they have to do because they don’t want their product to negatively impact a customer, because that will ultimately hurt their revenue. I don’t fault them for that, but one thing I do blame them for—not all of them—is that they don’t make it easy for the customer to understand that they need to harden their software. Some are very security-minded, but that’s a tiny portion of the sector.
Security leaders today have an arsenal of security solutions, but at the end of the day—like you’re suggesting—they aren’t hardening them. How did we get to this point?
John Cartrett: In the beginning with Microsoft, it was a really tough space to get into because there wasn’t a lot of training, and there was only a small echelon of people who knew how to configure the operating system. As it grew, Microsoft—and I’m only singling them out because it’s a very ubiquitous system and has become very complex over the years—has had challenges in teaching people how to configure their operating system (OS) because there’s a conflict of interest there. They have this OS that’s complex and that they sell, but they also sell training and certifications. They probably expect that if you want to administer their OS or its applications, then you need to get certified. A lot of people don’t subscribe to that, and they are not going to send their team to get trained. Their teams just end up getting on-the-job training for the OS.
This type of IT staff only knows the OS from tinkering with it while trying to fix a specific need, or from reading a forum somewhere; this results in very limited knowledge from that perspective. It’s a lack of knowledge on the client’s side that they’re trying to buy their way out of. They’re expecting that the security vendor—whose product ships in a default state—is going to be their silver bullet. To an extent, it’s also because the security vendor hypes their product and makes claims that they’ll never get breached. In reality, every product has a blind spot, and as an attacker, if you understand how the product operates, you can usually figure out a way to operate around it.
The industry’s facing a talent crisis that obviously plays into this misconfiguration debacle. What advice would you offer security leaders when it comes to overcoming this?
John Cartrett: Personally, from a management perspective, I would rather hire someone junior and moldable than someone senior and stuck in their ways. Additionally, I would rather train someone up and have them leave than not do so and have them stay. If you’re looking for a very seasoned person and can’t recruit that person yourself, then you’re going to have to augment with a vendor to bring in an expert. Those engagements can double as an engagement to fix a particular problem while also serving as a training experience for your staff. If you have your junior people sit with the senior consultant you’ve brought in from a third party to fix something, and you tell the consultant that you’ll also pay for their time to sit with your team and share as much knowledge with them as they can, it’s a win-win situation.
When it comes to misconfigurations, Cartrett advises security leaders to focus on two areas that will make a hacker’s path of least resistance much harder to travel:
Security testing is a critical component of any cybersecurity program aiming to continuously improve an organization's overall security posture. But your business's cyber risk tolerance will determine how deep you want to go. This Trustwave SpiderLabs infographic helps illustrate the depth at which you can test your security.
Marcos Colón is the content marketing manager at Trustwave and a former IT security reporter and editor.