Installation is done by running the install.sh file, obtained either by downloading it from the online sources or from the install tarball featured on the ELSA Google Code home page. When install.sh runs, it checks for the existence of /etc/elsa_vars.sh to see if there are any local customizations, such as passwords, file locations, etc. to apply. The install.sh script will update itself if it finds a newer version online, so be sure to store any changes in /etc/elsa_vars.sh.
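As a sketch, /etc/elsa_vars.sh is a plain shell fragment that overrides the defaults install.sh would otherwise use. The variable names below are illustrative; check your version of install.sh for the names it actually honors:

```shell
# /etc/elsa_vars.sh -- local overrides read by install.sh.
# Variable names are illustrative; confirm them against install.sh.
BASE_DIR="/usr/local"   # where ELSA and Syslog-NG are installed
DATA_DIR="/data"        # where buffers, indexes, and MySQL data live
```

Because install.sh overwrites itself when it finds a newer version online, this file is the only safe place for local customizations.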
The install.sh script should be run separately for a node install and a web install. You can install both like this: sh install.sh node && sh install.sh web. Installation will attempt to download and install all prerequisites and initialize databases and folders. It does not require any interaction.
Currently, Linux and FreeBSD 8.x are supported, with Linux distros based on Debian (including Ubuntu), RedHat (including CentOS), and SuSE tested. install.sh should run and succeed on these distributions, assuming that the defaults are chosen and that no existing configurations will conflict.
Supported Operating Systems
- Ubuntu 12.04
- Ubuntu 14.04
- RedHat 6.6
- CentOS 6.6
Installation using Packages
Note: You may still use elsa_vars.sh under the /etc directory before installing the package to make any configuration changes, as is the case with the original ELSA installation.
On Ubuntu/Debian, download the package, then install it:
sudo dpkg -i ode_0.3-2_all.deb
sudo apt-get install -f (this command is not required for updates)
Note: On a fresh install, dpkg will complain about missing dependencies; ignore this, as the apt-get install -f step resolves them.
On RedHat/CentOS, download the package:
curl -L -o ode-0.3-3.noarch.rpm https://s3-us-west-1.amazonaws.com/ode0.3/ode-0.3-3.noarch.rpm
Install the package:
sudo yum -y install ode-0.3-3.noarch.rpm (for fresh install)
sudo yum -y update ode-0.3-3.noarch.rpm (for upgrade from ODE 0.1 to 0.3)
After upgrading from ODE 0.1 to 0.3, you may have to restart services due to a bug in the 0.1 package:
service syslog-ng restart
service searchd restart
service starman restart
Note: Restarting these services is only required when upgrading from 0.1 to 0.3, not for fresh installs.
Enable port 80 (the firewall blocks port 80 by default on CentOS):
sudo vi /etc/sysconfig/iptables
Copy the SSH accept line and change the port to 80, then restart the firewall:
sudo service iptables restart
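For example, on CentOS 6 the relevant lines in /etc/sysconfig/iptables might look like the following (the exact chain and rule options can vary with local policy):

```
# existing SSH accept line:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
# copy of it with the port changed to allow HTTP:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
```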
Check the log.
Run the web UI.
To remove the installed package, use the command for your distribution. Note: this will delete all ODE-related data and configuration on your machine.
apt-get --purge autoremove ode
yum remove ode
You may also use the pre-built ODE images (for medium and large systems) on AWS for quick installation or evaluation. Search for “opallios” in the Community AMIs to find these images:
- Ubuntu 14.04 Med – ODE-0.3-ubuntu-14.04-med-Opallios
- RedHat 6.6 Med – ODE-0.3-rhel-6.6-med-Opallios
The main ELSA configuration files are /etc/elsa_node.conf and /etc/elsa_web.conf. All configuration is controlled through these files, except for query permissions which are stored in the database and administered through the web interface. Nodes read in the elsa_node.conf file every batch load, so changes may be made to it without having to restart Syslog-NG.
Most Linux distributions do not ship recent versions of Syslog-NG. Therefore, the install compiles it from source and installs it to $BASE_DIR/syslog-ng with the configuration file in $BASE_DIR/syslog-ng/etc/, where it will be read by default. By default, $BASE_DIR is /usr/local and $DATA_DIR is /data. Syslog-NG writes raw files to $DATA_DIR/elsa/tmp/buffers/<random file name> and loads them into the index and archive tables at an interval configured in the elsa_node.conf file, which is 60 seconds by default. The files are deleted upon successful load. When the logs are bulk inserted into the database, Sphinx is called to index the new rows. When indexing is complete, the loader notes the new index in the database which will make it available to the next query. Indexes are stored in $DATA_DIR/sphinx and comprise about as much space as the raw data stored in MySQL.
Archive tables typically compress at a 10:1 ratio, and therefore use only about 5% of the total space allocated to logs compared with the index tables and indexes themselves. The index tables are necessary because Sphinx searches return only the IDs of the matching logs, not the logs themselves; a primary key lookup is therefore required to retrieve the raw log for display. For this reason, archive tables alone are insufficient because they do not contain a primary key.
If desired, MySQL database files can be stored in a specified directory by adding the “mysql_dir” directive to elsa_node.conf and pointing it to a folder that has been created with proper permissions and SELinux/AppArmor security settings.
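A minimal sketch of the directive, assuming the JSON syntax used by ELSA's configuration files (the path is hypothetical):

```
{
    "mysql_dir": "/data/elsa/mysql"
}
```

The directory itself must already exist, be writable by the MySQL user, and have SELinux/AppArmor rules that permit access.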
Hosting all files locally
Edit the elsa_web.conf file and set yui/local to be “inc” and comment out “version” and “modifier.”
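Assuming the JSON-with-comments syntax ELSA's configuration files use, the resulting yui section might look something like this (the version and modifier values shown are illustrative):

```
"yui": {
    "local": "inc"
    #"version": "2.8.0r4",
    #"modifier": "min"
}
```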
Caveats for Local File Hosting
If Internet access is not available, some plugins will not function correctly. In particular, the whois plugin uses an external web service to do lookups, and these will not be possible without Internet connectivity. In addition, dashboards will not work if the client’s browser cannot reach Google to pull down its graphing library.
The web frontend is typically served with Apache, but the Plack Perl module allows for any web server to be used, including a standalone server called Starman which can be downloaded from CPAN. Any implementation will still have all authentication features available because they are implemented in the underlying Perl.
The server is backed by the ELSA web database (elsa_web by default), which stores user information including permissions, the query log, stored results, and query schedules for alerting.
Admins are designated by configuration variables in the elsa_web.conf file, either by system group when using local auth, or by LDAP/AD group when using LDAP auth. To designate a group as an admin, add the group to the array in the configuration. Under the “none” auth mode, all users are admins because they are all logged in under a single pseudo-username.
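As a sketch, assuming the admin group list is a JSON array in elsa_web.conf (the exact key name depends on your ELSA version), designating local system groups as admins might look like:

```
"admin_groups": [ "root", "wheel" ]
```

Under LDAP/AD auth, the entries would instead be directory group names.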
The web server is required for both log collectors and log searchers (node and web) because searches query nodes (peers) using a web services API.