Lenny Zeltser has an excellent presentation on his web site that discusses how to respond to the unexpected. This presentation is well worth the time it takes to read...and I mean that not only for C-suite executives, but for IT staff and responders as well. Take a few minutes to read through the presentation...it speaks a lot of truth.
When discussing topics like this in the past, I've thought it would be a good idea to split the presentation, discussing response (a) as a consultant and (b) as a full-time employee (FTE) staff member. My reasoning was that as a consultant, you're walking into a completely new environment, whereas as a member of FTE staff, you've been working in that environment for weeks, months, or even years, and would tend to be very familiar with it. However, as a consultant/responder, my experience has been that most response staff isn't familiar with the incident response aspect of their...well...incident response. For FTE staff, incident response is largely an ad hoc, under-trained activity...so while they may be very familiar with the day-to-day management of systems, most times they are not familiar with how to respond to incidents in a manner that's consistent with senior management's goals...if there are any. After all, if they were, folks like me wouldn't need to be there, right?
So, the reason I mention this at all is that when an incident occurs, the very first decisions made and actions taken have a profound effect on how the incident plays out. If (not when) an incident is detected, what happens? Many times, an admin pulls the system, wipes the drive, reloads the OS and data, and puts the system back into service. While this is the most direct route to recovery, it does absolutely nothing to determine the root cause, or to prevent the incident from happening again in the future.
The general attitude seems to be that the needs of infosec in general, and IR activities in particular, run counter to the needs of the business. IR is something of a "new" concept to most folks, and very often, the primary business goal is to keep systems running and functionality available, whereas IR generally wants to take systems offline. In short, security breaks stuff.
Well, this can be true, IF you go about your security and IR blindly. However, if you look at incident response specifically, and infosec in general, as a business process, and incorporate it along with your other business processes (e.g., marketing, sales, collections, payroll, etc.), then you can not only maintain your usability and productivity, but also save yourself a LOT of money and headaches. You're not only going to be in compliance (name your legislative or regulatory body/ies of choice with that one...) and avoid costly audits, re-audits and fines, but you're also likely going to save yourself when (notice the use of the word when, rather than if...) an incident happens.
I wanted to present a couple of scenarios drawn from my own experience performing incident response in various environments for over 10 years. I think these scenarios are important because, like other events in history, they can show us what we've done right and what we've done wrong.
Scenario 1: An incident, a malware infection, is detected, and the local IT staff reacts quickly and efficiently, determining that the malware was on 12 different systems in the infrastructure and eradicating each instance. During the course of response, someone found a string on one of the systems and, without any indication that it applied directly to the incident, Googled it and added an association with a keystroke logger to the incident notes. A week later, at a directors' meeting, the IT director described the incident and applauded his staff's reactions. Legal counsel, also responsible for compliance, took issue with the incident description, due to the possibility of data exfiltration and the complete lack of information about it. Because of where the "cleaned" systems sat within the infrastructure and what they were used for, regulatory and compliance issues were raised, due in part to the malware's reported association with a keystroke logger; those questions could not be answered, as the actual malware was never completely identified, nor was a sample saved. Per legislative and regulatory requirements, the organization must now assume that any sensitive data that could have been exfiltrated was, in fact, compromised.
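The painful part of Scenario 1 isn't the malware itself, it's that nothing was preserved before the systems were cleaned. Even hashing the suspect binary and copying it off the box before remediation would have let someone answer the "was this really a keystroke logger?" question later. Here's a minimal sketch of what that collection step might look like; the evidence directory, file names, and log format are hypothetical, and this is an illustration, not a substitute for a proper acquisition procedure:

```python
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

def preserve_sample(suspect_file, evidence_dir="evidence"):
    """Hash a suspect binary and copy it to an evidence store, so the
    malware can still be identified after the host is wiped and rebuilt."""
    src = Path(suspect_file)
    dest_root = Path(evidence_dir)
    dest_root.mkdir(parents=True, exist_ok=True)

    data = src.read_bytes()
    record = {
        "source_path": str(src),
        "collected_utc": datetime.now(timezone.utc).isoformat(),
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
    }

    # Name the copy by its SHA-256 so duplicate submissions collapse to one file.
    shutil.copy2(src, dest_root / (record["sha256"] + ".bin"))

    # Append a one-line note to a simple collection log.
    with open(dest_root / "collection_log.txt", "a") as log:
        log.write(repr(record) + "\n")

    return record
```

With a hash and a sample in hand, the keystroke logger question becomes something you can actually investigate, rather than something you have to assume the worst about.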
Scenario 2: An incident is detected involving several e-commerce servers. The local IT staff is not trained in IR, nor do they have any practical knowledge of it, and while their manager reports potential issues to his management, a couple of admins begin poking around on the servers, installing and running AV (nothing found), deleting some files, etc. Management decides to wait and see if the "problem" settles down. Two days later, one of the admins decides to connect a sniffer to the outbound portion of the network and sees several files being moved off of the systems. Locating those files on the systems, the admin determines that they contain PCI data; however, the servers themselves cannot be shut down. The admin reports this, but it takes 96 hours to locate IR consultants, get them on-site, and have the IT staff familiarize the consultants with the environment. It takes even longer because the one IT admin who knows how the systems interact, and where they're actually located in the data center, is on vacation.
Scenario 3: A company that provided remote shell-based access for its employees was in the process of transitioning to two-factor authentication when a regular log review detected that a particular user's credentials were being used to log in from a different location. IT immediately shut down all remote access and changed all admin-level passwords. Examination of the logs indicated that the intruder had accessed the infrastructure with one set of credentials and used those to transition to another set, but maintained shell-based access. The second account was immediately disabled, but not deleted. While IR consultants were on their way on-site, the local IT staff identified the systems the intruder had accessed. A definitive list of files known to contain 'sensitive data', compiled before the incident, was provided to the consultants, who determined through several means that there were no indications those files had been accessed by the intruder. The company was able to report this with confidence to regulatory oversight bodies, and while a small fine was imposed, a much larger fine was avoided, as were notification and disclosure costs and the costs that follow them (e.g., replacing credit/debit cards, paying for credit monitoring, civil suits, etc.).
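What made Scenario 3 work was nothing exotic: someone was actually reviewing the remote-access logs and noticed a known account logging in from an unfamiliar place. That kind of review doesn't require a SIEM; even a small script that compares each successful login against the source addresses previously seen for that account will surface the same anomaly. A rough sketch, assuming a hypothetical CSV export of the authentication logs (the column names and file name are made up for illustration; adapt the parsing to whatever your remote-access gateway or SSH daemon actually records):

```python
import csv
from collections import defaultdict

def flag_new_login_sources(log_path):
    """Flag successful logins where an account authenticates from a source
    address never before seen for that account in this log."""
    seen = defaultdict(set)   # username -> source addresses already observed
    alerts = []

    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["result"].lower() != "success":
                continue
            user, src = row["username"], row["src_ip"]
            if seen[user] and src not in seen[user]:
                alerts.append((row["timestamp"], user, src))
            seen[user].add(src)

    return alerts

if __name__ == "__main__":
    for ts, user, src in flag_new_login_sources("remote_access.csv"):
        print(f"{ts}  {user}: login from previously unseen address {src}")
```

The other thing Scenario 3 had going for it was the list of sensitive files compiled *before* the incident...knowing where your data lives is what lets you say, with confidence, what was and wasn't touched.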
Remember, we're not talking about a small, single-owner storefront here...we're talking about companies that store and process data about you and me...PII, PHI, PCI/credit card data, etc. Massive amounts of data that someone wants because it means massive amounts of money to them.
So, in your next visit from an auditor, when they ask "Got IR?" what are you going to say?