In my last post I detailed installing BIND with DLZ support -- in this post I'll actually USE that option.
Step One: Database setup
PostgreSQL, by default, now creates databases in UTF-8 encoding, while the DLZ driver in BIND uses ASCII/LATIN1, which introduces some interesting peculiarities. Specifically, querying for 'zone="foo.com"' may not work. That's okay; you can still create LATIN1 databases using the template0 template. Since PostgreSQL replication is already configured, everything but the BIND configuration is done on the database master. First, add the new database user:
createuser -e -E -P dnsbh
Since this user won't have any extra permissions, just answer 'n' to the permissions questions.
Now add the new database, as latin1, with dnsbh as the owner:
createdb dnsbh -E 'latin1' -T template0 -O dnsbh
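If you want to double-check the encoding took, psql -l lists every database along with its encoding; the dnsbh row should read LATIN1:

psql -l | grep dnsbh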
The database schema is pretty flexible; there are no required column names or types, as long as queries return the correct type of data. I like to use a schema that reflects the type of data the column holds, so I'll use the following create statement:
create table dns_records (
    id serial,
    zone text,
    host text,
    ttl integer,
    type text,
    data text
);
create index zone_idx on dns_records(zone);
create index host_idx on dns_records(host);
create index type_idx on dns_records(type);
alter table dns_records owner to dnsbh;

This can be modified, of course. The data, type, host and ttl fields can have size restrictions put in place, and you can drop the id field altogether. The bind-dlz sourceforge page lists several more columns but those are only necessary for a full DLZ deployment, where the DNS server is authoritative for a domain, not for a purely recursive caching server.
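To sanity-check the result (and confirm the dnsbh user can reach the table), you can describe it from a dnsbh session, assuming local connections are allowed in pg_hba.conf:

psql -U dnsbh dnsbh -c '\d dns_records'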
Step Two: BIND setup
If you read the bind-dlz configuration page you'll find a set of database queries that get inserted into named.conf. You can modify these if you like, but it's much easier to use what's provided. There are, at a minimum, four lines that need to be included. The first indicates the database type and the number of concurrent connections to keep open (4). The next is the database connection string (host, username, password). The third returns whether the database knows about a given zone, and the fourth returns the actual answer data for any lookup that isn't for the SOA or NS types. Add the following to /etc/namedb/named.conf and restart BIND (I split the fourth "line" into multiple lines for readability - you can put it all on one line or leave it as below):
dlz "postgres" {At this point BIND should restart and it should continue to serve DNS as normal -- meaning it's time to test the blackhole. A quick and easy test is to dig for sf.net:
database "postgres 4
{host=localhost port=5432 dbname=dnsbh user=dnsbh password='password_you_used'}
{select zone from dns_records where zone = '$zone$'}
{select ttl, type,
case when lower(type)='txt' then '\"' || data || '\"' else data end
from dns_records
where zone = '$zone$' and
host = '$record$' and
not (type = 'SOA' or type = 'NS')}";
}
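Before restarting, it's worth running named-checkconf over the file to catch any typos in the new block (named-checkconf ships with BIND; the path assumes the FreeBSD-style layout used above):

named-checkconf /etc/namedb/named.conf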
At this point BIND should restart and it should continue to serve DNS as normal -- meaning it's time to test the blackhole. A quick and easy test is to dig for sf.net:

dig @127.0.0.1 sf.net

Then add sf.net to the dns_records table on the PostgreSQL master:
insert into dns_records(zone, type, host, data, ttl) values ('sf.net', 'A', '*', '127.0.0.1', '3600');

Dig for sf.net again:
dig @127.0.0.1 sf.net

Note the 3600 second TTL. If a downstream DNS server were querying our blackhole, the 127.0.0.1 answer would get cached for an hour. To remove the sf.net entry from the blackhole, go back to the PostgreSQL master:
delete from dns_records where zone = 'sf.net';

www.malwaredomains.com and projects like ZeusTracker maintain lists of domains seen in the wild serving malware. The malwaredomains.com site has a great tutorial on how to do DNS blackholes via BIND configuration files. Since they provide a list of domains, it's pretty trivial to script something to pull those domains out and jam them into the blackhole table; a rough sketch follows.
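As a sketch only: assume a hypothetical file blackhole.txt with one domain per line (the real feeds use their own formats with comments and extra columns, so strip those first, and be wary of stray quote characters in untrusted input). This loop turns it into a batch of inserts on the PostgreSQL master:

#!/bin/sh
# blackhole.txt: hypothetical input, one domain per line.
# Emits one insert per domain and pipes the whole batch into
# the dnsbh database in a single psql session.
while read zone; do
    echo "insert into dns_records(zone, type, host, data, ttl) values ('$zone', 'A', '*', '127.0.0.1', '3600');"
done < blackhole.txt | psql -U dnsbh dnsbh

Run it from cron after fetching a fresh list and the blackhole more or less maintains itself.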