1. Installation

     1.1. RVM Installation (rpm based boxes only)

FluentD plug-ins are written in Ruby and packaged as gems, so RVM must be installed as a prerequisite on RHEL/CentOS boxes. Follow the steps below to install RVM.

 

  #Step 1: Upgrade Packages

a)      Log in as root
b)      $ yum update
c)      $ yum groupinstall "Development Tools"

 

  #Step 2: Installing Recommended Packages

a)      $ yum install gcc-c++ patch readline readline-devel zlib zlib-devel
b)      $ yum install libyaml-devel libffi-devel openssl-devel make
c)      $ yum install bzip2 autoconf automake libtool bison iconv-devel

 

  #Step 3: Install RVM (Ruby Version Manager)

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys D39DC0E3
$ \curl -sSL https://get.rvm.io | bash -s stable
$ source /etc/profile.d/rvm.sh
$ rvm autolibs disable
$ rvm requirements # manually install these
$ rvm reload
$ rvm install 1.9.3
$ ruby -v    # check ruby version

 

    1.2. FluentD Installation


Run fluentd_ode.sh to install FluentD and the related plug-ins. Change the file permissions to make the script executable before running it.
$ cd /usr/local/elsa/contrib/fluentd
$ chmod +x fluentd_ode.sh
$ ./fluentd_ode.sh
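
If the script completes without errors, a quick sanity check is to confirm that td-agent is installed and registered as a service (a minimal check, assuming the script installs the standard td-agent package and service):

$ td-agent --version
$ service td-agent status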

   

    1.3. Validate data directories in td-agent.conf file

Typically the FluentD data directories are created under $DATA_DIR/fluentd. If DATA_DIR is not set, you can find them under /data/fluentd. Validate them against the directories referenced in the /etc/td-agent/td-agent.conf file; they should be identical.
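
A quick way to compare the two (a minimal sketch, assuming the default /data/fluentd location):

$ ls -d /data/fluentd/*_log/*_files                                 # directories created on disk
$ grep -E 'path|pos_file|buffer_path' /etc/td-agent/td-agent.conf   # directories the config expects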

2. Configuring FluentD

FluentD needs to be configured for each data source type. As the input message format may vary from use case to use case, we provide sample messages and their configurations below as a reference for configuring your own FluentD data sources.

    2.1. Create Tags:

Users need to create a tag for each message type they want to process, in the file located at /etc/td-agent/plugin/log_tags.rb.

 

The existing file contains the following tags, which can be used as a reference:
$apache_tag = '%ode-5-10005:'
$json_tag = '%ode-6-10006:'
$custom_tag ='%ode-5-10001:'
$netflow_tag = 'netflow_syslog'
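
To support a new message type, add a global variable in the same style. For example (a hypothetical tag; the variable name and the '%ode-<severity>-<class id>:' value shown are placeholders to be adapted to your own class id from section 2.7):

$firewall_tag = '%ode-5-10002:'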

    2.2. Create formatter plug-ins

                  For each type of message, a formatter plugin needs to be created using the tags created above. Below is an example of the formatter plugin created for Json messages. Note that the name of the created file must start with formatter_. Refer to the file /etc/td-agent/plugin/formatter_json_ltsv.rb, and make the required changes as indicated by the inline comments in the example below.

 

require_relative 'log_tags'

module Fluent
  module TextFormatter
    class LTSVFormatter1 < Formatter                 # <-- change the class name for each message type
      Plugin.register_formatter('json_ltsv', self)   # <-- change the formatter name to match your message type
      # include Configurable # This enables the use of config_param
      include HandleTagAndTimeMixin # If you wish to use tag_key, time_key, etc.
      #   def configure(conf)
      #     super
      #   end
      config_param :delimiter, :string, :default => "\t"
      config_param :label_delimiter, :string, :default => ":"

      def format(tag, time, record)
        filter_record(tag, time, record)
        formatted = $json_tag + record.inject('') { |result, pair|   # <-- use the tag created in 2.1
          result << @delimiter if result.length.nonzero?
          result << "#{pair.first}#{@label_delimiter}#{pair.last}"
        }
        formatted << "\n"
        formatted
      end
    end
  end
end
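
To create a formatter for another message type, one simple approach is to copy this file and adjust the three marked lines (a sketch; "custom2" is a hypothetical name):

$ cd /etc/td-agent/plugin
$ cp formatter_json_ltsv.rb formatter_custom2_ltsv.rb
# then edit the class name, the registered formatter name ('custom2_ltsv'),
# and the tag variable ($custom2_tag) inside the new file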

 

    2.3. FluentD configuration for Json Messages

        i) For Source type File :

As per the example mentioned below, please note:

           For the <source> tag

path - directory path where the input files will be stored.

pos_file - file path where FluentD stores the read position of the different input files.

           For the <filter> tag

keep_keys - the fields of interest from the input message which the user would like to store in the database.

           For the <match flattened.json000.**> tag

path - output directory path where the flattened messages will be stored.

           The following needs to be created in the /etc/td-agent/td-agent.conf file:

 
# source for file input
<source>
type tail
format json
read_from_head true
path /data/fluentd/json_log/in_files/*
pos_file /data/fluentd/json_log/out_files/json.log.pos
tag json000
</source>
<filter json000>
type record_transformer
renew_record true
keep_keys startTime,endTime,srcMac,destMac,srcIp,destIp,srcPort,destPort,protocol,app,hlApp,security,packetsCaptured,bytesCaptured,terminationReason,empty,boxId,networks,srcLocation,destLocation
</filter>
<match json000>
type flatten_hash
add_tag_prefix flattened.
separator _
</match>
<match flattened.json000.**>
type file
format json_ltsv
append true
delimiter ,
label_delimiter =
path /data/fluentd/json_log/out_files/json
buffer_type file
buffer_path /data/fluentd/json_log/out_files/buffer
time_slice_format out
flush_interval 1s
</match>
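
With the settings above (delimiter "," and label_delimiter "="), each flattened record is written as one line prefixed with $json_tag. An output line would therefore look roughly like this (an illustrative, truncated example):

%ode-6-10006:startTime=141741690023357880,endTime=141741690023733424,srcIp=192.168.1.59,destIp=192.168.4.7,srcPort=54205,destPort=161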

        ii) For Source type Stream:

    As per the example mentioned below, please note:

    For the <source> tag

port - port number on which messages are received (this should be different from already-reserved ports such as 514).

The rest of the settings remain the same as mentioned above for the file source.

 
# source as stream
<source>
type tcp
format json
port 5170
bind 0.0.0.0
tag json000
</source>
<filter json000>
type record_transformer
renew_record true
keep_keys startTime,endTime,srcMac,destMac,srcIp,destIp,srcPort,destPort,protocol,app,hlApp,security,packetsCaptured,bytesCaptured,terminationReason,empty,boxId,networks,srcLocation,destLocation
</filter>
<match json000>
type flatten_hash
add_tag_prefix flattened.
separator _
</match>
<match flattened.json000.**>
type file
format json_ltsv
append true
delimiter ,
label_delimiter =
path /data/fluentd/json_log/out_files/json
buffer_type file
buffer_path /data/fluentd/json_log/out_files/buffer
time_slice_format out
flush_interval 1s
</match>

 2.4. FluentD configuration for Apache Messages

    1) For Source type File:

As per the example mentioned below, please note:

        For the <source> tag

path - directory path where the input files will be stored.

pos_file - file path where FluentD stores the read position of the different input files.

        For the <filter> tag

keep_keys - the fields of interest from the input message which the user would like to store in the database.

       For the <match flattened.apache000.**> tag

path - output directory path where the flattened messages will be stored.

        The following needs to be created in the /etc/td-agent/td-agent.conf file:

 
# source for file input
<source>
type tail
format apache
read_from_head true
path /data/fluentd/apache_log/in_files/*
pos_file /data/fluentd/apache_log/out_files/apache.log.pos
tag apache000
</source>
<filter apache000>
type record_transformer
renew_record true
keep_keys host,user,method,path,code,size,referer,agent
</filter>
<match apache000>
type flatten_hash
add_tag_prefix flattened.
separator _
</match>
<match flattened.apache000.**>
type file
format apache_ltsv
append true
delimiter ,
label_delimiter =
path /data/fluentd/apache_log/out_files/apache
buffer_type file
buffer_path /data/fluentd/apache_log/out_files/buffer
time_slice_format out
flush_interval 1s
</match>
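
For reference, format apache expects standard Apache access-log lines. A sample input line (illustrative, built from the same field values used in the pattern test in section 2.7) would look like:

127.0.0.1 - - [10/Oct/2015:13:55:36 -0700] "GET /API/local_info HTTP/1.1" 200 322000 "http://127.0.0.1/API/local_info" "Mozilla/5.0 (compatible; U; AnyEvent-HTTP/2.22; +http://software.schmorp.de/pkg/AnyEvent)"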
       

2) For Source type Stream:

    As per the example mentioned below, please note:

    For the <source> tag

port - port number on which messages are received (this should be different from already-reserved ports such as 514).

The rest of the settings remain the same as mentioned above for the file source.

 
# source as stream
<source>
type tcp
format apache2
port 5170
bind 0.0.0.0
tag apache000
</source>
<filter apache000>
type record_transformer
renew_record true
keep_keys host,user,method,path,code,size,referer,agent
</filter>
<match apache000>
type flatten_hash
add_tag_prefix flattened.
separator _
</match>
<match flattened.apache000.**>
type file
format apache_ltsv
append true
delimiter ,
label_delimiter =
path /data/fluentd/apache_log/out_files/apache
buffer_type file
buffer_path /data/fluentd/apache_log/out_files/buffer
time_slice_format out
flush_interval 1s
</match>

        2.5. FluentD configuration for Netflow Messages

            1) For Source type File:

As per the example mentioned below, please note:

                For the <source> tag

path - directory path where the input files will be stored.

pos_file - file path where FluentD stores the read position of the different input files.

                For the <match net000> tag

path - output directory path where the messages will be stored.

              The following needs to be created in the /etc/td-agent/td-agent.conf file:

 
# source for file input
<source>
type tail
format none
message_key
read_from_head true
path /data/fluentd/netflow_log/in_files/*
pos_file /data/fluentd/netflow_log/out_files/netflow.log.pos
tag net000
</source>
<match net000>
type file
format netflow_ltsv
path /data/fluentd/netflow_log/out_files/netflow
buffer_type file
buffer_path /data/fluentd/netflow_log/out_files/buffer
time_slice_format out
append true
flush_interval  1s
</match>
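
To verify this pipeline, a sample input line (taken from the pattern test message in section 2.7) can be appended to a file in the in_files directory:

$ echo 'tcp|192.85.128.47|35843|1.1.1.1|443|30486|2173|US|Palo Alto, CA|37.376202|-122.182602|HPES - Hewlett-Packard Company' >> /data/fluentd/netflow_log/in_files/netflow.log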

 

        2) For Source type Stream:

             As per the example mentioned below, please note:

             For the <source> tag

port - port number on which messages are received (this should be different from already-reserved ports such as 514).

The rest of the settings remain the same as mentioned above for the file source.

 
# source as stream
<source>
type tcp
message_key
format none
port 5170
bind 0.0.0.0
tag net000
</source>
<match net000>
type file
format netflow_ltsv
path /data/fluentd/netflow_log/out_files/netflow
buffer_type file
buffer_path /data/fluentd/netflow_log/out_files/buffer
time_slice_format out
append true
flush_interval  1s
</match>

     2.6. FluentD configuration for Customized Messages

         1) For Source type File:

As per the example mentioned below, please note:

             For the <source> tag

path - directory path where the input files will be stored.

pos_file - file path where FluentD stores the read position of the different input files.

            For the <match custom000> tag

path - output directory path where the messages will be stored.

            The following needs to be created in the /etc/td-agent/td-agent.conf file:

 
# source for file input
<source>
type tail
format none
message_key
read_from_head true
path /data/fluentd/custom_log/in_files/*
pos_file /data/fluentd/custom_log/out_files/custom.log.pos
tag custom000
</source>

<match custom000>
type file
format custom_ltsv
path /data/fluentd/custom_log/out_files/custom
buffer_type file
buffer_path /data/fluentd/custom_log/out_files/buffer
time_slice_format out
append true
flush_interval 1s
</match>
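
To verify this pipeline, a sample input line (matching the custom pattern in section 2.7) can be appended to the input file:

$ echo 'Server1 Warning 1 This is event 1' >> /data/fluentd/custom_log/in_files/custom.log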

         2) For Source type Stream:

    As per the example mentioned below, please note:

    For the <source> tag

port - port number on which messages are received (this should be different from already-reserved ports such as 514).

The rest of the settings remain the same as mentioned above for the file source.

 
# source as stream
<source>
type tcp
message_key
format none
port 5170
bind 0.0.0.0
tag custom000
</source>

<match custom000>
type file
format custom_ltsv
path /data/fluentd/custom_log/out_files/custom
buffer_type file
buffer_path /data/fluentd/custom_log/out_files/buffer
time_slice_format out
append true
flush_interval 1s
</match>

 

    2.7. Patterns configuration

Users need to create patterns for each message type as explained below and copy them into the patterndb.xml file located in the /usr/local/elsa/node/conf directory.

Check the program tag in the patterns below. It should be the same as the corresponding tag defined in the file /usr/local/elsa/contrib/plugin/log_tags.rb.

You can add your own tags for new messages and reference them in the patterns in the same way.

        1) Pattern for Json messages

#pattern for Json
<ruleset>
<rules>
<rule class='10006' id='10006'>
<patterns>
<pattern>startTime=@NUMBER::@,endTime=@NUMBER::@,srcMac=@ESTRING::,@destMac=@ESTRING::,@srcIp=@IPv4:i0:@,destIp=@IPv4:i2:@,
srcPort=@NUMBER:i1:@,destPort=@NUMBER:i3:@,protocol=@ESTRING::,@app=@ESTRING::,@hlApp=@ESTRING::,@security=@ESTRING::,
@packetsCaptured=@NUMBER::@,bytesCaptured=@NUMBER:i4:@,terminationReason=@ESTRING::,@empty=@ESTRING::,@boxId=@NUMBER::@,
networks_0=@ESTRING::,@networks_1=@ESTRING::,@srcLocation_countryName=@ESTRING:s0:,@srcLocation_countryCode=@ESTRING::,
@srcLocation_longitude=@ESTRING:s1:,@srcLocation_latitude=@ESTRING:s2:,@@ANYSTRING::@</pattern>
</patterns>
<examples>
<example>
<test_message program="%ode-6-10006">startTime=141741690023357880,endTime=141741690023733424,srcMac=00:14:22:18:DA:7A,
destMac=00:17:C5:15:AC:C4,srcIp=192.168.1.59,destIp=192.168.4.7,srcPort=54205,destPort=161,protocol=UDP,
app=SNMP,hlApp=SNMP,security=NONE,packetsCaptured=2,bytesCaptured=216,terminationReason=Timeout,
empty=false,boxId=1,networks_0=Network 1,networks_1=Network 4,srcLocation_countryName=Local,
srcLocation_countryCode=Local,srcLocation_longitude=0.0,srcLocation_latitude=0.0,
</test_message>
<test_value name="i0">192.168.1.59</test_value>
<test_value name="i2">192.168.4.7</test_value>
<test_value name="i1">54205</test_value>
<test_value name="i3">161</test_value>
<test_value name="i4">216</test_value>
<test_value name="s0">Local</test_value>
<test_value name="s1">0.0</test_value>
<test_value name="s2">0.0</test_value>
</example>
</examples>
</rule>
</rules>
</ruleset>

# Test if json parser is working
> /usr/local/syslog-ng/bin/pdbtool match -p /usr/local/elsa/node/conf/patterndb.xml -P %ode-6-10006 -M
"startTime=141741690021799452,endTime=141741690022239794,srcMac=00:14:22:18:DA:7A,destMac=00:17:C5:15:AC:C4,srcIp=192.168.1.59,
destIp=192.168.4.7,srcPort=54203,destPort=161,protocol=UDP,
app=SNMP,hlApp=SNMP,security=NONE,packetsCaptured=2,bytesCaptured=216,terminationReason=Timeout,empty=false,boxId=1,
networks_0=Network 1,networks_1=Network 4,srcLocation_countryName=Local,srcLocation_countryCode=Local,
srcLocation_longitude=0.0,srcLocation_latitude=0.0,
destLocation_countryName=Local,destLocation_countryCode=Local,destLocation_longitude=0.0,destLocation_latitude=0.0"

  1. This should parse srcIp, destIp, srcPort, and destPort.

# Push the changes to merged.xml

> sudo sh -c "sh /usr/local/elsa/contrib/install.sh node set_syslogng_conf"

 2) Pattern for Apache messages

#pattern for apache
<ruleset name='APACHE_LOG' id='10005'>
<rules>
<rule provider='APACHE_LOG' class='10005' id='10005'>
<patterns>
<pattern>host=@IPv4:i0:@,user=@ESTRING:s0:,@method=@ESTRING:s1:,@path=@ESTRING:s2:,@code=@NUMBER:i1:@,size=@NUMBER:i2:@,
referer=@ESTRING:s3:,@@ANYSTRING:s4:@</pattern>
</patterns>
<examples>
<example>
<test_message program="%ode-5-10005">host=127.0.0.1,user=-,method=GET,path=/API/local_info,code=200,size=322000,
referer=http://127.0.0.1/API/local_info,agent=Mozilla/5.0
(compatible; U; AnyEvent-HTTP/2.22; +http://software.schmorp.de/pkg/AnyEvent)</test_message>
<test_values>
<test_value name='i0'>127.0.0.1</test_value>
<test_value name='i1'>200</test_value>
<test_value name='i2'>322000</test_value>
<test_value name='s0'>-</test_value>
<test_value name='s1'>GET</test_value>
<test_value name='s2'>/API/local_info</test_value>
<test_value name='s3'>http://127.0.0.1/API/local_info</test_value>
<test_value name='s4'>Mozilla/5.0 (compatible; U; AnyEvent-HTTP/2.22; +http://software.schmorp.de/pkg/AnyEvent)</test_value>
</test_values>
</example>
</examples>
</rule>
</rules>
</ruleset>

             # Test if Apache parser is working
/usr/local/syslog-ng/bin/pdbtool match -p /usr/local/elsa/node/conf/patterndb.xml -P %ode-5-10005 -M "host=127.0.0.1,
user=-,method=GET,
path=/API/local_info,code=200,size=322000,referer=http://127.0.0.1/API/local_info,agent=Mozilla/5.0
(compatible; U; AnyEvent-HTTP/2.22; +http://software.schmorp.de/pkg/AnyEvent)%ode-5-10005:host=::1,
user=-,method=GET,path=/,code=200,size=137896,referer=-,agent=curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3
Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2"

 

  1. This should parse host, user, method, and path.
  2. Push the changes to merged.xml

> sudo sh -c "sh /usr/local/elsa/contrib/install.sh node set_syslogng_conf"

 

    3) Pattern for Netflow messages

#pattern for netflow
<ruleset>
<rules>
<rule class='netflow_syslog' id='34'>
<patterns>
<pattern>@ESTRING:i0:|@@ESTRING:i1:|@@ESTRING:i2:|@@ESTRING:i3:|
@@ESTRING:i4:|@@ESTRING:i5:|@@ESTRING:s0:|@@ESTRING:s1:|@@ESTRING:s2:|
@@ESTRING:s3:|@@ESTRING:s4:|@@ANYSTRING:s5:@</pattern>
<pattern>@ESTRING:i0:|@@ESTRING:i1:|@@ESTRING:i2:|@@ESTRING:i3:|
@@ESTRING:i4:|@@ESTRING:i5:|@@ESTRING:s0:|@@ESTRING:s1:|@@ESTRING:s2:|
@@ESTRING:s3:|@@ESTRING:s4:|@</pattern>
<pattern>@ESTRING:i0:|@@ESTRING:i1:|@@ESTRING:i2:|@@ESTRING:i3:|
@@ESTRING:i4:|@@ESTRING:i5:|@@ESTRING:s0:|@@ESTRING:s1:|@@ESTRING:s2:|
@</pattern>
</patterns>
<examples>
<example>
<test_message program="netflow_url">tcp|192.85.128.47|35843|1.1.1.1|443|30486|2173|US|Palo Alto, CA|37.376202|-122.182602|
HPES - Hewlett-Packard Company</test_message>
<test_values>
<test_value name="i0">tcp</test_value>
<test_value name="i1">192.85.128.47</test_value>
<test_value name="i2">35843</test_value>
<test_value name="i3">1.1.1.1</test_value>
<test_value name="i4">443</test_value>
<test_value name="i5">30486</test_value>
<test_value name="s0">2173</test_value>
<test_value name="s1">US</test_value>
<test_value name="s2">Palo Alto, CA</test_value>
<test_value name="s3">37.376202</test_value>
<test_value name="s4">-122.182602
</test_value>
<test_value name="s5">HPES - Hewlett-Packard Company</test_value>
</test_values>
</example>
</examples>
</rule>
</rules>
</ruleset>

             # Test if Netflow parser is working
/usr/local/syslog-ng/bin/pdbtool match -p /usr/local/elsa/node/conf/patterndb.xml -P netflow_syslog -M
"tcp|2.2.2.2|35843|1.1.1.1|222|30486|2173|US|Palo Alto, CA|37.376202|-122.182602|HPES - Hewlett-Packard Company"

  1. This should parse class, proto, srcip, srcport, dstip, dstport, conn_bytes, asn, etc.
  2. Push the changes to merged.xml

> sudo sh -c "sh /usr/local/elsa/contrib/install.sh node set_syslogng_conf"

 

    4) Pattern for customized messages

# Pattern for custom message
<ruleset>
<rules>
<rule class='10001' id='10001'>
<patterns>
<pattern>@ESTRING:s0: @@ESTRING:s1: @@ESTRING:i0: @This is event</pattern>
</patterns>
<examples>
<example>
<test_message program='%ode-5-10001'>Server1 Warning 1 This is event 1</test_message>
<test_value name='s0'>Server1</test_value>
<test_value name='s1'>Warning</test_value>
<test_value name='i0'>1</test_value>
</example>
</examples>
</rule>
</rules>
</ruleset>

             # Test if customized parser is working
/usr/local/syslog-ng/bin/pdbtool match -p /usr/local/elsa/node/conf/patterndb.xml -P %ode-5-10001 -M
"Server5 AllEvents 9900 This is event 9900 This is padding for all the events, except the specia2 cases listed above
qqqqqqqqqRRRRRRRRRRssssssssssTTTTTTTTTTuuuuuuuuuu"

  1. This should parse the event id, server name, etc.
  2. Push the changes to merged.xml

> sudo sh -c "sh /usr/local/elsa/contrib/install.sh node set_syslogng_conf"

 

2.8. Configuring syslog-ng

Copy the following lines into the syslog-ng configuration file (/usr/local/syslog-ng/etc/syslog-ng.conf).

Place these lines just above source s_network at the beginning of the file. Also check the directories mentioned; they must be exactly the same as those configured in td-agent.conf in the steps above.

source s_file {
    file("/data/fluentd/json_log/out_files/json.out.log" follow_freq(1));       # out file for Json messages
    file("/data/fluentd/apache_log/out_files/apache.out.log" follow_freq(1));   # out file for Apache messages
    file("/data/fluentd/netflow_log/out_files/netflow.out.log" follow_freq(1)); # out file for Netflow messages
    file("/data/fluentd/custom_log/out_files/custom.out.log" follow_freq(1));   # out file for customized messages
};

Make the changes noted below at the bottom of the file, in the existing log block:

log {
    source(s_file);                 # add this line
    source(s_import);
    rewrite(r_cisco_program);
    rewrite(r_snare);
#   rewrite(r_from_pipes);          # comment this line out
#   rewrite(r_pipes);               # comment this line out
    parser(p_db);
    rewrite(r_extracted_host);
    rewrite(r_extracted_timestamp);
    destination(d_elsa_import);
    #flags(flow-control)
    destination(d_elsa);            # add this line
};

 2.9. Create classes and rules in DB

Refer to the file /usr/local/elsa/contrib/fluentd/sample-classdb.sh.
All the rules should be executed against the syslog_data database in MySQL.
Change the file permissions to make it executable if you want to create the rules for the sample data.
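
For the sample data, that amounts to the following (a minimal sketch; the script is assumed to contain the MySQL statements for the syslog_data database):

$ chmod +x /usr/local/elsa/contrib/fluentd/sample-classdb.sh
$ /usr/local/elsa/contrib/fluentd/sample-classdb.sh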

 3. Log rotation

Over time, the files generated in the out_files directories of the various message types will grow. To rotate them, follow the steps below.

Create a config file under /etc/logrotate.d/; you can use the already available files as a template.
For Json messages it should look like this:
/data/fluentd/json_log/out_files/json.out.log {
rotate 5
missingok
size=10k
compress
olddir /data/fluentd/json_log/out_files/old
notifempty
create 640 td-agent td-agent
}
Check the file path mentioned in the config.
Rotation is currently scheduled on a daily basis via a cron job; it can be configured as a different cron job at the desired frequency.
To test, run:
 $ /etc/cron.daily/logrotate
Then check for the rotated file in the /data/fluentd/json_log/out_files/old directory.

4. Directory Setup & Structure  

    4.1. Data directories

As we expect a large volume of data to be processed, the input and output file directories are created under the DATA directory. The following is the structure for each type of message.

 

For Json Messages

$DATA_DIR/fluentd/json_log/in_files - for input files

$DATA_DIR/fluentd/json_log/out_files - for out files

$DATA_DIR/fluentd/json_log/out_files/old_files - for log-rotated files

For Apache Messages

$DATA_DIR/fluentd/apache_log/in_files - for input files

$DATA_DIR/fluentd/apache_log/out_files - for out files

$DATA_DIR/fluentd/apache_log/out_files/old_files - for log-rotated files

For Netflow Messages

$DATA_DIR/fluentd/netflow_log/in_files - for input files

$DATA_DIR/fluentd/netflow_log/out_files - for out files

$DATA_DIR/fluentd/netflow_log/out_files/old_files - for log-rotated files

For custom Messages

$DATA_DIR/fluentd/custom_log/in_files - for input files

$DATA_DIR/fluentd/custom_log/out_files - for out files

$DATA_DIR/fluentd/custom_log/out_files/old_files - for log-rotated files

    4.2. Log directory

You can check the td-agent logs at /var/log/td-agent/td-agent.log to troubleshoot any issue related to td-agent.

5. Message setup Verification

    5.1. To test the setup for each message type, create files with sample messages under the following directories (you can test with any or all of them):

$DATA_DIR/fluentd/json_log/in_files/json.log

$DATA_DIR/fluentd/apache_log/in_files/apache.log

$DATA_DIR/fluentd/netflow_log/in_files/netflow.log

$DATA_DIR/fluentd/custom_log/in_files/custom.log
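
For example, for Json messages you can append a record such as the following (a minimal illustrative record using a subset of the keep_keys fields; real records should carry all of them, as in the full sample in section 5.5):

$ echo '{"startTime":141741690024939218,"endTime":141741690025403185,"srcIp":"192.168.1.59","destIp":"192.168.4.7","srcPort":54207,"destPort":161}' >> /data/fluentd/json_log/in_files/json.log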

    5.2. Start Services
$ service td-agent start
$ service td-agent reload
$ service syslog-ng restart
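
If any service fails to start, the td-agent log described in section 4 is the first place to look:

$ tail -f /var/log/td-agent/td-agent.log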

    5.3. Verify output files

    The following output files should be created:

$DATA_DIR/fluentd/json_log/out_files/json.out.log

$DATA_DIR/fluentd/apache_log/out_files/apache.out.log

$DATA_DIR/fluentd/netflow_log/out_files/netflow.out.log

$DATA_DIR/fluentd/custom_log/out_files/custom.out.log
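
A quick check that records are flowing (assuming DATA_DIR defaults to /data):

$ tail /data/fluentd/json_log/out_files/json.out.log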

    5.4.  Query Web UI

Query the web UI for relevant messages

    5.5. Streaming option verification

Similar to the file setup explained above, if messages are to be sourced via a stream, follow these steps:

a) Open the /etc/td-agent/td-agent.conf file.

b) Uncomment the sources for streaming (port 5170 is currently configured; change it as per your requirements, and make sure the port you use is different from reserved ports such as 514).

c) Restart the td-agent service.

You can test with the netcat utility by executing the following at the $ prompt:

 

echo '{"startTime":141741690024939218,"endTime":141741690025403185,"srcMac":"00:14:22:18:DA:7A","destMac":"00:17:C5:15:AC:C4",
"srcIp":"192.168.1.59","destIp":"192.168.4.7","srcPort":54207,"destPort":161,"protocol":"UDP","app":"SNMP","hlApp":"SNMP",
"security":"NONE","packetsCaptured":2,"bytesCaptured":216,"terminationReason":"Timeout","empty":"false","boxId":"1",
"networks":["Network 1","Network 4"],"srcLocation":{"countryName":"Local","countryCode":"Local","longitude":0.0,
"latitude":0.0},"destLocation":{"countryName":"Local","countryCode":"Local","longitude":2.0,
"latitude":2.0}}' | nc 127.0.0.1 5170

d) Check the output in the web UI.