Splunk is a tool for log analysis. It provides a powerful interface for analyzing large volumes of data, such as the DNS traffic logs that Cisco Umbrella provides for your organization.
This article covers the basics of getting Splunk up and running so it is able to consume the logs from your Cisco-managed S3 bucket. You will:
- Set up your Cisco-managed S3 bucket in your dashboard.
- Ensure AWS CLI prerequisites are met.
- Create a cron job to retrieve files from the bucket and store them locally on your server.
- Configure Splunk to read from a local directory.
Note
Existing Umbrella Insights and Umbrella Platform customers can access Log Management with Amazon S3 through the dashboard. Log Management is not available in all packages. If you are interested in this feature, please contact your account manager.
Prerequisites
- Download and install the AWS CLI from https://aws.amazon.com/cli/
- Create your Cisco-managed bucket as described here
Create a cron job on your Splunk server
1. Create a shell script named "pull-umbrella-logs.sh" with the following contents. A cron job will run this script on a schedule.
#!/bin/sh
# Sync Umbrella logs from the Cisco-managed S3 bucket to a local directory.
cd <local data dir> || exit 1
AWS_ACCESS_KEY_ID=<accesskey> AWS_SECRET_ACCESS_KEY=<secretkey> aws s3 sync <data path> .
Make sure to replace <local data dir>, <accesskey>, <secretkey>, and <data path> with their corresponding values:
- local data dir—The directory on disk to use for the downloaded files.
- accesskey—Access key provided by the Umbrella dashboard.
- secretkey—Secret key provided by the Umbrella dashboard.
- data path—Data path shown in the log management UI (for example, s3://cisco-managed-<region>/1_2xxxxxxxxxxxxxxxxxa120c73a7c51fa6c61a4b6/dnslogs/).
2. Save the shell script and set the execute permission. The script should be owned by root.
$ chmod u+x pull-umbrella-logs.sh
3. Manually execute the "pull-umbrella-logs.sh" script once to confirm that syncing works. The sync does not have to complete; this is just a test to confirm that the keys are correct and that the script runs without issues.
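To validate the credentials without downloading anything, you can also use the AWS CLI's --dryrun flag, which lists what would be transferred without copying it (the angle-bracket placeholders are the same values as in the script above):

```shell
# List the objects that would be copied, without actually downloading them.
AWS_ACCESS_KEY_ID=<accesskey> AWS_SECRET_ACCESS_KEY=<secretkey> \
  aws s3 sync <data path> . --dryrun
```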
4. Add the following line to your Splunk server's crontab.
*/5 * * * * root /path/to/pull-umbrella-logs.sh >/var/log/pull-umbrella-logs.txt 2>&1
Make sure to edit the line to use the correct path to the script. This runs a sync every five minutes. Umbrella updates the S3 storage directory every 10 minutes and retains the data for 30 days, so syncing every five minutes keeps the local copy current.
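Note the order of the redirections in the crontab entry: `>file 2>&1` captures both stdout and stderr in the log file, while the reversed order would leave stderr uncaptured. A minimal sketch of the behavior (the file path here is just for illustration):

```shell
#!/bin/sh
# Emit one line on stdout and one on stderr, then capture both in a file.
emit() {
  echo "sync ok"
  echo "sync error" >&2
}
emit > /tmp/redirect-demo.txt 2>&1   # stdout is redirected first, then stderr joins it
cat /tmp/redirect-demo.txt           # shows both lines
```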
Configure Splunk to read from a local directory
1. In Splunk, add a new data source by navigating to Settings > Data Inputs > Files & Directories and clicking New.
2. In the File or Directory field, specify the local directory that S3 is syncing files to.
3. Click Next and complete the rest of the wizard using the default settings.
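If you prefer the command line, Splunk's `add monitor` command configures the same input. This is a sketch; the monitored path is an example, and the Splunk installation directory depends on your environment:

```shell
# Monitor the local sync directory (example path) using Splunk's CLI.
$SPLUNK_HOME/bin/splunk add monitor /opt/umbrella-logs
```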
Once there is data in the local directory and Splunk is configured, the data should be available to query and report on in Splunk.