Saturday, 9 February 2013

NSM With Bro-IDS Part 4: Bro and ELSA, a Happy Couple


In part three of my Bro series I started pointing out how Bro can almost single-handedly transform the way you approach network security monitoring (and network monitoring in general). Gone are the days of wondering which of the 300 sites a potentially infected machine was trying to access on that Amazon virtual server, what jar file was requested by that piece of malware and whether Sys-Admin Jack has *really* enabled Remote Desktop connections from the world on one of the public-facing web servers under his care. To gather this data "out-of-the-box" requires at least a working knowledge of the Linux or Unix command line interface and a basic understanding of tools like grep, cut and awk. Bro can generate a mountain of data, kept in plaintext log files, in a very short amount of time, and searching through all of that text at the CLI can be a daunting task.

But wait a second...we have already solved this problem! Since we have a functional ELSA deployment already, and we have verified that we can send log data to it, it is trivial to configure a machine running bro to send its bro logs to an ELSA node via rsyslog.

rsyslog on Ubuntu uses the /etc/rsyslog.d/ directory for user-added and site-specific configurations. By default, the only items in that directory on the Ubuntu machine I've used for bro are for postfix, the "default" rsyslog configuration and a stub for ufw:


I added a file called "60-bro.conf" (the numbering is important; it controls the order in which the files are included) with the following contents:

###### Using local7 because that's where Martin put it
###### and it's a pretty standard usage
$ModLoad imfile
$InputFileName /usr/local/bro/logs/current/ssl.log
$InputFileTag bro_ssl:
$InputFileStateFile stat-bro_ssl
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/smtp.log
$InputFileTag bro_smtp:
$InputFileStateFile stat-bro_smtp
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/smtp_entities.log
$InputFileTag bro_smtp_entities:
$InputFileStateFile stat-bro_smtp_entities
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/notice.log
$InputFileTag bro_notice:
$InputFileStateFile stat-bro_notice
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/ssh.log
$InputFileTag bro_ssh:
$InputFileStateFile stat-bro_ssh
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/ftp.log
$InputFileTag bro_ftp:
$InputFileStateFile stat-bro_ftp
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/conn.log
$InputFileTag bro_conn:
$InputFileStateFile stat-bro_conn
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /usr/local/bro/logs/current/dns.log
$InputFileTag bro_dns:
$InputFileStateFile stat-bro_dns
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
# check for new lines every second
$InputFilePollingInterval 1
###### 10.10.10.121 is the IP of the ELSA node
local7.* @10.10.10.121

ELSA's author, Martin, has some pretty awesome documentation regarding setting up a bro instance, setting up a bro cluster and configuring bro to log to syslog. Yes, I lifted the above directly from there and added the section for bro_dns. Specifically, I recommend you take a look at:

http://ossectools.blogspot.com/2011/09/bro-quickstart-cluster-edition.html

The really big plus here is that this is not specific to bro - any process that writes a log file can have that log file sourced by rsyslog, which means any text log can be pulled into ELSA...but what practical application does that have for bro?

Why, I'm glad you asked! Let's contrive a situation based on my existing VM infrastructure. To review, I have the following in place:


o 10.10.10.115 -- bro
o 10.10.10.121 -- the ELSA node
o 10.10.10.122 -- the ELSA web front-end
o 10.10.10.150 -- a KUbuntu client (simulating <n> desktops)
o 10.10.10.254 -- the FreeBSD router


A Contrived Situation


It's no exaggeration to say that network and InfoSec folks have to wear a lot of hats, and it's not uncommon for the two groups to be comprised mostly of the same individuals. The same person who gets called to handle the investigation of a server compromise may also have to help find a stolen or "misplaced" computer or smartphone, analyze traffic to find the machine spewing funky traffic into a VLAN or, my personal favourite, find out why in the world the bandwidth usage spiked at a certain time.

To set the stage, I used my KUbuntu VM and downloaded the latest stable Linux kernel from kernel.org. It's a pretty hefty download, just under 100MB, so it's great for demonstrating this.

Here's the scenario. You're the tech for an organisation with a big event coming up in two days and you have "all hands on deck". Ordinarily your Internet connections are sufficient for your business use but for the next few days everyone is in the office, basically working around the clock, and your Internet resources are stretched pretty thin. Because you are a fairly small organisation you don't have a networking group and a security group, you have...you.

A little after 2330 you get a phone call saying the network has gone to pieces. Everything inside the company network works fine, and now everything seems okay, but for a few minutes "the Internet" slowed to a crawl and you have to find an explanation because "they can't work when the Internet is that slow" and they're afraid it's going to happen again.

You fire up your VPN client, connect back to the office and, sure enough, everything is okay now. Speed tests show good numbers, the few servers you have look okay, everything looks good. With all the system and firewall logs showing nothing odd, and no alerts firing on the IDS, it's time to go to the network logs.

Step 1: Identify the connections


Bro has a fantastic log called conn.log. With bro now being pulled into ELSA (or parts of it, anyway), it's fairly trivial to search for all of the connections Bro saw between 2300 and 2330. Just set the appropriate start and end times, change the search type from "Index" to "Archive" and use the following in the search bar:

+class:bro_conn

So if the call came in at 2330 on 8 February 2013, your initial search might look something like this:


Note: if you do NOT change the search type to Archive, you MUST supply something to search for instead of just setting a filter. If you search in Archive mode, though, you can supply just a filter and no search string. Be careful: on a busy node you may get back far too much information to be useful -- narrowing the time window or searching a specific node may really help.

In my case, the following was returned:


Unless you only get a few results back, this may not seem to hold much useful information, but look at the "Field Summary" - srcip, srcport, dstip, dstport and a host of others. In this case, I would want to know if I had any really "chatty" talkers or listeners, probably listeners since people were complaining about Internet speeds. It's possible that it's a talker but, in my experience, if speeds are going to crud then it's probably someone trying to archive the Internet and that means a large number of bytes in.
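The same "who pulled the most bytes in" question can also be answered at the CLI with the grep/cut/awk toolbox mentioned at the top, which is handy once the logs have rotated out of the current directory. A sketch, assuming an archived conn.log named with Bro's usual rotation pattern; the column positions are read from the log's own "#fields" header rather than hardcoded, so double-check that header matches what you expect on your version:

```shell
# Sum resp_bytes ("bytes in") per client IP from an archived conn.log.
# Column positions come from the log's own "#fields" header line, so
# this survives field-order differences between Bro versions.
zcat conn.23\:00\:00-00\:00\:00.log.gz | awk -F'\t' '
  /^#fields/ { for (i = 2; i <= NF; i++) col[$i] = i - 1; next }
  /^#/       { next }
  { sum[$col["id.orig_h"]] += $col["resp_bytes"] }
  END { for (h in sum) print sum[h], h }' | sort -rn | head
```

The top of that output is your candidate list of heavy downloaders, ready to be checked against the ELSA results in the next step.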

Step Two: Who's been downloading the big stuff?


To get a list of all of the bytes_in counters, just click on "bytes_in" in the field summary. ELSA will automatically present a chart giving the "bytes_in" values and how many times those values showed up in the given time period. For this scenario, I was presented with the following:


Here I get something hinting at an anomaly. Most of the values are in the tens or thousands of bytes but whoa, is that a value of over eighty-three million? That's certainly an outlier, let's take a look at it. Just click on the '83612841' and ELSA will use that as a required search item so your search gets restricted to items that a) are only in the bro_conn class and b) have a bytes_in value of 83612841:


That's fantastic! It looks like whoever is using the machine with IP 10.10.10.150 downloaded a pretty big file at 2325. That alone is probably enough to explain the anomaly - everyone else's downloads or activity could certainly be impacted by the one person who thought they needed to grab a 100 MB file. So, was it legitimate? Well, you can certainly find out who's using that machine and ask them. If you have bro's http.log being sourced by rsyslog then the download would actually have shown up in this search. I do not in this instance, but it is a trivial addition.
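For completeness, the http.log stanza follows the exact pattern of the others in 60-bro.conf - it just needs to sit above the $InputFilePollingInterval line, followed by an rsyslog restart (the tag and state-file names here simply follow the convention used above):

```
$InputFileName /usr/local/bro/logs/current/http.log
$InputFileTag bro_http:
$InputFileStateFile stat-bro_http
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor
```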

Step Three: Using the raw log archive to see what was downloaded


If you don't have it sourced by rsyslog and want to go back after the fact and check the log archive, you can do that pretty easily. Bro gives me the UID for that connection - it's the second value in the raw output of the log entry, in this case JRibhV6DJge - and, if you remember from part three, that UID stays consistent across all of the various log files. So, if I wanted to search for it in the archive, I could do the following:
cd /usr/local/bro/logs/2013-02-08
grep JRibhV6DJge http.23\:00\:00-00\:00\:00.log.gz
That gives me the following output:


And there's my answer - at 2325, someone downloaded the latest Linux kernel tarball from kernel.org. In this instance a single user caused a problem with one big download. Even with a 1Gbps connection, a single user who gets really lucky can saturate it with a large download. If they're downloading a lot of small files the impact may be more sporadic, but you can use some slightly different parameters to see which machines are downloading a lot of small files or which machines are making a lot of connections relative to the rest of the computers on the network. These are always pretty good places to start because they're basic steps, they don't seriously impact the network and they're easy to verify.
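Those "slightly different parameters" can be as simple as counting connections per client instead of summing bytes. A sketch over the same archived conn.log, using only the grep/cut toolbox from earlier; this assumes id.orig_h is the third column, which is worth verifying against the #fields header on your own logs:

```shell
# Count connections per client IP - a machine making far more
# connections than its peers is a good lead for the "lots of
# small files" pattern.
zcat conn.23\:00\:00-00\:00\:00.log.gz \
  | grep -v '^#' \
  | cut -f3 \
  | sort | uniq -c | sort -rn | head
```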

There are a lot of other things to do with ELSA and Bro; this is just scratching the surface. For more information I highly recommend the ELSA and Bro project pages at:

http://code.google.com/p/enterprise-log-search-and-archive/
http://www.bro-ids.org/

