Collecting Data

Adding Parsers

In order to add parsers, you need to add patterns to the patterndb.xml file. If you need to create new log classes and fields, it's not too hard, but right now there is no web interface for it (one is planned). You'll need to add classes to the "classes" table and fields to the "fields" table, then use the offsets listed under $Field_order in web/lib/Fields.pm to create the right entries in "fields_classes_map." Please note that the class name MUST be upper-case. Other than those few database entries, adding the pattern and restarting syslog-ng and Apache is all you have to do; the new fields will then show up in the web interface. If you can, try to create patterns which re-use existing classes and fields: then simply dropping them into the patterndb.xml file will instantly make them parse correctly, with no database work or restarts needed. I plan on making a blog post on how to do this soon, but let me know if you run into any trouble. Here's an example to get you started:
Example log:
program: test_prog
message: source_ip 1.1.1.1 sent 50 bytes to destination_ip 2.2.2.2 from user joe

Pick a class_id of 10000 or greater for your own custom classes. Let's say this is the first one, so your new class_id will be 10000. Insert the following into the syslog database on the log node:
INSERT INTO classes (id, class) VALUES (10000, "NEWCLASS");
Our fields will be conn_bytes, srcip, and dstip, which already exist in the "fields" table, as well as "myuser," which we will create here for demonstration purposes:
INSERT INTO fields (field, field_type, pattern_type) VALUES ("myuser", "string", "QSTRING");

INSERT INTO fields_classes_map (class_id, field_id, field_order)
VALUES ((SELECT id FROM classes WHERE class="NEWCLASS"),
        (SELECT id FROM fields WHERE field="srcip"), 5);
INSERT INTO fields_classes_map (class_id, field_id, field_order)
VALUES ((SELECT id FROM classes WHERE class="NEWCLASS"),
        (SELECT id FROM fields WHERE field="conn_bytes"), 6);
INSERT INTO fields_classes_map (class_id, field_id, field_order)
VALUES ((SELECT id FROM classes WHERE class="NEWCLASS"),
        (SELECT id FROM fields WHERE field="dstip"), 7);
Now map the string field "myuser" at field_order 11, which corresponds to the first string column, "s0":
INSERT INTO fields_classes_map (class_id, field_id, field_order)
VALUES ((SELECT id FROM classes WHERE class="NEWCLASS"),
        (SELECT id FROM fields WHERE field="myuser"), 11);
Field orders 5, 6, and 7 correspond to the first integer columns in the schema: i0, i1, and i2. In the pattern below, we're extracting the data and naming it i0-i2 so that it goes into the log database correctly. The SQL above maps the names of these fields, in the context of this class, to those columns in the raw database when performing searches.
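For reference, assuming the stock schema (see $Field_order in web/lib/Fields.pm for the authoritative list), field_order values 5-10 map to the integer columns i0-i5, and values 11-16 map to the string columns s0-s5.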
Example pattern:

<ruleset name="test_prog" id="10000">
 <pattern>test_prog</pattern>
 <rules>
  <rule provider="custom" class="10000" id="10000">
   <patterns>
    <pattern>source_ip @IPv4:i0:@ sent @ESTRING:i1: @bytes to destination_ip @IPv4:i2:@ from user @ANYSTRING:s0:@</pattern>
   </patterns>
   <examples>
    <example>
     <test_message program="test_prog">source_ip 1.1.1.1 sent 50 bytes to destination_ip 2.2.2.2 from user joe</test_message>
     <test_values>
      <test_value name="i0">1.1.1.1</test_value>
      <test_value name="i1">50</test_value>
      <test_value name="i2">2.2.2.2</test_value>
      <test_value name="s0">joe</test_value>
     </test_values>
    </example>
   </examples>
  </rule>
 </rules>
</ruleset>
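For reference on the parsers used above: @IPv4:i0:@ captures an IPv4 address into i0, @ESTRING:i1: @ captures everything up to the next space, and @ANYSTRING:s0:@ captures the rest of the message. These are standard syslog-ng patterndb parsers.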

Add this in the patterndb.xml between the <patterndb></patterndb> elements. (The ruleset and rule id values just need to be unique within the file.) You can test this on a log node using the /usr/local/syslog-ng/bin/pdbtool utility like so:
/usr/local/syslog-ng/bin/pdbtool test -p /usr/local/elsa/node/conf/patterndb.xml
This should print out all of the correct test values. You can also test it against example messages like this:
/usr/local/syslog-ng/bin/pdbtool match -p /usr/local/elsa/node/conf/patterndb.xml -P test_prog -M "source_ip 1.1.1.1 sent 50 bytes to destination_ip 2.2.2.2 from user joe"
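If the rule matches, pdbtool prints the message's name-value pairs. The exact output varies by syslog-ng version, but the extracted values should appear among them, something like this (illustrative only):
i0=1.1.1.1
i1=50
i2=2.2.2.2
s0=joe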
After the patterndb.xml file and the database are updated, you will need to restart syslog-ng (and Apache, if you added new classes or fields):
service syslog-ng restart
If you are already logged into ELSA, simply refreshing the page should make those new classes and fields available.

Configuring IDS to Forward Logs

Snort
There are two ways to configure Snort to send logs: configure either Barnyard or Snort itself to log to local syslog. In both cases, the configuration entry (in barnyard.conf or snort.conf, respectively) looks like this (LOG_LOCAL6 sets the facility to local6, matching the f_local6 filter in the syslog-ng example below):
output alert_syslog: LOG_LOCAL6 LOG_ALERT

Suricata
To log to local syslog from Suricata, edit the "outputs" stanza in suricata.yaml to contain:
outputs:
  - syslog:
      enabled: yes
      identity: "snort"
      facility: local6
Setting the identity to "snort" makes the alerts arrive with the program name "snort," so that they are parsed by the same patterns as Snort alerts.

Forwarding Local Logs to ELSA
You will then need to configure the local syslog on the box that is running Snort to forward logs to ELSA.
rsyslog/Syslogd
If the box is running a simple syslogd, add this to /etc/syslog.conf to forward all logs to ELSA (forwarding everything is usually a good idea):
*.* @ip.address.of.elsa
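The same line also works in rsyslog's legacy syntax. If you would rather forward over TCP (this assumes your ELSA node is listening on TCP), rsyslog uses a double @:
*.* @@ip.address.of.elsa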

syslog-ng
If it’s running syslog-ng, use this:
source src { unix-dgram("/dev/log"); };
filter f_local6 { facility(local6); };
destination d_elsa { udp("ip.address.of.elsa"); };
log { source(src); filter(f_local6); destination(d_elsa); };
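If you prefer TCP for more reliable delivery (this assumes the ELSA node is listening on TCP port 514, which a default install should be), only the destination changes:
destination d_elsa { tcp("ip.address.of.elsa" port(514)); };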

Eventlog-to-Syslog

Sending logs from Windows servers is best achieved with the free, open-source program Eventlog-to-Syslog. It’s incredibly easy to install:
1. Log in as an administrator or use runas
2. Copy evtsys.exe and evtsys.dll to the Windows machine's system directory (e.g., C:\Windows\System32).
3. Install with: evtsys.exe -i -h ip.of.elsa.node
4. Profit
The logs will be sent using the syslog protocol to your ELSA server where they will be parsed as the class “WINDOWS” and available for reporting, etc.

Datasources

ELSA can be configured to query external datasources with the same framework as native ELSA data. Datasources are defined by plugins. The only plugin currently available is for databases. Database datasources are added under the "datasources" configuration section, like this:
"datasources": {
  "database": {
    "hr_database": {
      "alias": "hr",
      "dsn": "dbi:Oracle:Oracle_HR_database",
      "username": "scott",
      "password": "tiger",
      "query_template": "SELECT %s FROM (SELECT person AS name, dept AS department, email_address AS email) derived WHERE %s %s ORDER BY %s LIMIT %d,%d",
      "fields": [
        { "name": "name" },
        { "name": "department" },
        { "name": "email" }
      ]
    }
  }
}
The configuration items for a database datasource are as follows:
alias: the name used to refer to the datasource when querying
dsn: Perl DBI connection string
username: database username
password: database password
query_template: sprintf-formatted query using the placeholders listed below
fields: a list of hashes, each containing name (required), type (optional; defaults to char), and alias (optional). An alias serves as an alternative name for the field; the special aliases "count" and "timestamp" designate the column used for summation and the column used in time-based charts, respectively.

query_template parameters (all are required, and are filled in this order; see the example after this list):
1. The columns for SELECT
2. The expression for WHERE
3. The column for GROUP BY
4. The column for ORDER BY
5. OFFSET
6. LIMIT
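As a hypothetical illustration (the column and value choices here are invented for the example), a search against the "hr" alias for people in the sales department could fill the template above via sprintf as:

SELECT name, department, email FROM (SELECT person AS name, dept AS department, email_address AS email) derived WHERE department='sales'  ORDER BY name LIMIT 0,100

Here the GROUP BY placeholder was filled with an empty string (hence the double space), OFFSET with 0, and LIMIT with 100.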

Sourcetypes

ELSA ships with several plugins:
● Windows logs from Eventlog-to-Syslog
● Snort/Suricata logs
● Bro logs
● URL logs from httpry_logger

List of classes supported out of the box

● BARRACUDA_RECV
● BARRACUDA_SCAN
● BARRACUDA_SEND
● BRO_CAPTURE_LOSS
● BRO_CONN
● BRO_DNS
● BRO_FILE
● BRO_FILES
● BRO_FTP
● BRO_HTTP
● BRO_IRC
● BRO_KNOWN_CERTS
● BRO_KNOWN_HOSTS
● BRO_KNOWN_SERVICES
● BRO_NOTICE
● BRO_SMTP
● BRO_SMTP_ENTITIES
● BRO_SOFTWARE
● BRO_SSH
● BRO_SSL
● BRO_SYSLOG
● BRO_TUNNEL
● BRO_WEIRD
● CEF
● CHECKPOINT
● CISCO_WARN
● DHCP
● ELSA_OPS
● EXCHANGE
● FIREEYE
● FIREWALL_ACCESS_DENY
● FIREWALL_CONNECTION_END
● FORTINET_TRAFFIC
● FORTINET_URL
● FTP
● LOG2TIMELINE
● NAT
● NETFLOW
● OSSEC_ALERTS
● PALO_ALTO_TRAFFIC
● PALO_ALTO_URL
● SNORT
● SSH_ACCESS_DENY
● SSH_LOGIN
● SSH_LOGOUT
● URL
● VPN
● WEB_CONTENT_FILTER
● WINDOWS

These plugins tell the web server what to do when a user clicks the "Info" link next to each log. A plugin can do anything, but it is designed to return useful information in a dialog panel in ELSA with an actions menu. As an example that ships with ELSA: if a StreamDB (or OpenFPC) URL is configured, any log containing an IP address will have a "getPcap" option that autofills pcap request parameters for one-click access to the traffic related to the log being viewed.
New plugins can be added easily by subclassing the “Info” Perl class and editing the elsa_web.conf file to include them. Contributions are welcome!
Suricata
For setting up logging with Suricata and ELSA running on the same or different boxes, see:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricata_with_ELSA_Enterprise_logging_set_up_guide

Livetail

Livetail is deprecated until further notice due to stability issues. ELSA has the ability to deliver a live feed of a given search to each user's browser window. Livetail lets you use full PCRE to search incoming logs without impacting logging performance: a separate process is forked on each node to read the text file being written by the main logging process, ensuring that no extra load is put on the main process and thereby avoiding log loss in high-volume situations.
Starting a Livetail
To start a livetail, simply choose the "Livetail" option from the "Index" button, which will open a new window. The search begins immediately and results are displayed from all nodes as they become available. The browser window polls the server every five seconds for new results and scrolls as they arrive; keeping your mouse pointer over the window pauses the scrolling.
Ending a Livetail
Livetails are automatically cancelled when you close the browser window. If the browser crashes and the livetail continues, it will be replaced by the next livetail you start, or it will time out after an hour. An administrator can cancel all livetails by choosing "Cancel Livetails" from the "Admin" menu.
Livetail results are temporary and cannot be saved. You can copy and paste data from the window, or run a normal ELSA search to save the data.