Deploying Splunk Multi-Site using Google Cloud Platform

Use Case

We have been running Splunk as our enterprise monitoring tool for some time, deployed as a multisite cluster that gives us both redundancy and scalability. With the recent adoption of public cloud, I was keen to find out how this on-premise infrastructure would work if we were asked to move it to the cloud. This blog post documents my findings for a wider audience.
I used my personal Google account to set up the solution in Google Cloud. Google offers $300 of free credit or one year of usage, whichever runs out first, when you register for Google Cloud as an individual.

To replicate our architecture, I used us-central1 as site1, which houses the Cluster Master, Search Head and site1 Indexers, while us-east1 serves as site2, the secondary site. I used Ubuntu 19.10 as the base OS, with Splunk Enterprise installed in the /opt directory. To maintain consistency, I created a custom image that serves as the base image for all the Splunk servers.
To run this PoC, I created a separate Splunk VPC to house the whole architecture. The command to create the VPC is:

gcloud compute networks create splunk \
    --subnet-mode=auto \
    --bgp-routing-mode=global

The new network will then appear in the VPC network list in the console.
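The same details can also be checked from the command line, for example:

```shell
# Show the newly created VPC and its routing mode
gcloud compute networks describe splunk
```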

To manage the deployment through Splunk Web (port 8000), and to allow Universal Forwarders (port 9997) to send data, the firewall needs to be opened. The VMs will be managed over SSH (port 22). To be safe, I also opened the replication port (8080) and the management port used by the Cluster Master (8089).

The command to open these ports is:

gcloud compute firewall-rules create splunk-allow \
    --project splunk-261402 --network splunk \
    --allow tcp:22,tcp:8089,tcp:8080,tcp:9997,tcp:8000,icmp \
    --source-ranges 0.0.0.0/0 --target-tags=splunk

The rule will then appear in the console's firewall rules list.
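The rule can also be verified from the command line:

```shell
# Confirm the allowed ports and target tags on the new rule
gcloud compute firewall-rules describe splunk-allow
```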

Creating Cluster Master, Indexers and Search Head for Splunk on site1 and site2

With our VPC available, the next step is to create the VMs that will function as the Search Head, Cluster Master and Indexers across the two sites.

Create the Search Head in the us-central1 region, zone us-central1-b:

gcloud compute instances create sh-ctr \
    --image-project splunk-261402 --zone=us-central1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

Create the Cluster Master for the Indexers in the us-central1 region, zone us-central1-b:

gcloud compute instances create idx-mstr \
    --image-project splunk-261402 --zone=us-central1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

Create the three Indexers in the us-east1 region, zone us-east1-b:

gcloud compute instances create idx-east-1 \
    --image-project splunk-261402 --zone=us-east1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

gcloud compute instances create idx-east-2 \
    --image-project splunk-261402 --zone=us-east1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

gcloud compute instances create idx-east-3 \
    --image-project splunk-261402 --zone=us-east1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

Create the three Indexers in the us-central1 region, zone us-central1-b:

gcloud compute instances create idx-ctr-1 \
    --image-project splunk-261402 --zone=us-central1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

gcloud compute instances create idx-ctr-2 \
    --image-project splunk-261402 --zone=us-central1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

gcloud compute instances create idx-ctr-3 \
    --image-project splunk-261402 --zone=us-central1-b \
    --image=splunk-image --subnet=splunk \
    --boot-disk-size=30 --boot-disk-type=pd-standard

Google Doc Reference to create VMs

After running all of the above commands, we should have our Search Head, Cluster Master and Indexers for site1 and site2 created.
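A quick way to verify all eight instances, including the internal IP of the Cluster Master noted below, is:

```shell
# List all VMs in the project with their zones and IP addresses
gcloud compute instances list
```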

Make a note of the Cluster Master's internal IP address, as we will use it to configure the cluster. If we try to connect to these VMs via SSH right now, we will not be able to, because they are not yet tied to a firewall rule that allows them connectivity.

To fix that, we assign the ‘splunk’ tag to them, which brings them under the firewall rule we created earlier.

gcloud compute instances add-tags sh-ctr \
    --zone us-central1-b --tags splunk
gcloud compute instances add-tags idx-mstr \
    --zone us-central1-b --tags splunk
gcloud compute instances add-tags idx-ctr-1 \
    --zone us-central1-b --tags splunk
gcloud compute instances add-tags idx-ctr-2 \
    --zone us-central1-b --tags splunk
gcloud compute instances add-tags idx-ctr-3 \
    --zone us-central1-b --tags splunk
gcloud compute instances add-tags idx-east-1 \
    --zone us-east1-b --tags splunk
gcloud compute instances add-tags idx-east-2 \
    --zone us-east1-b --tags splunk
gcloud compute instances add-tags idx-east-3 \
    --zone us-east1-b --tags splunk

After these commands run successfully, each machine will have the ‘splunk’ tag associated with it and we should be able to connect to them via SSH.
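With the tag in place, a connection can be made with gcloud's SSH wrapper, for example:

```shell
# Open an SSH session to the Search Head
gcloud compute ssh sh-ctr --zone us-central1-b
```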

Putting it all together!

Now that we have all of our VMs created, we need to tie them together so that they work as a cluster. This involves changing the server.conf file in the /opt/splunk/etc/system/local directory on each instance so that it knows how to communicate with the Cluster Master and Search Head. The server.conf file is not created until the Splunk instance is started and the license agreement is accepted, so go ahead and start it and accept the agreement. Splunk will ask for a user and a password for the management console and administration. To keep it simple, I created the user admin with the password passw0rd on all the Splunk instances. The setup process is described in detail on Splunk's website here.

In the general section, add a line to indicate the site at which the Splunk machine is running. In this PoC we are running the Cluster Master, the Search Head and three Indexers (idx-ctr-1, idx-ctr-2 and idx-ctr-3) in site1 from us-central1-b, and three Indexers (idx-east-1, idx-east-2 and idx-east-3) in site2 from us-east1-b.

The Cluster Master's server.conf contains the major details of how the cluster is set up.
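The exact server.conf contents were shown as screenshots in the original post; a representative multisite Cluster Master configuration looks like the following (the replication and search factors and the shared secret are example values):

```ini
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = <your-cluster-secret>
```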

Since we are running the Search Head in site1, its general section will indicate site1.
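A minimal sketch of the Search Head's server.conf under these assumptions (the Cluster Master's internal IP and the shared secret are placeholders):

```ini
[general]
site = site1

[clustering]
mode = searchhead
multisite = true
master_uri = https://<cluster-master-internal-ip>:8089
pass4SymmKey = <your-cluster-secret>
```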

Since the Indexers run on two sites, those in us-central1-b will have site1 in their general section.

And the Indexers in us-east1-b will have site2 in their general section.
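A corresponding sketch for an Indexer (cluster peer); the site value is site1 on the us-central1-b Indexers and site2 on the us-east1-b ones, and again the IP and secret are placeholders:

```ini
[general]
site = site1

[replication_port://8080]

[clustering]
mode = slave
master_uri = https://<cluster-master-internal-ip>:8089
pass4SymmKey = <your-cluster-secret>
```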

Now that everything is configured, restart the instances and they will connect automatically. To check the status, open the monitoring console using the Cluster Master's external IP address on the Splunk Web port (8000), logging in as admin with the password passw0rd. The Indexers will list themselves under Peers, and all six should appear.
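The cluster state can also be checked from the Cluster Master's command line (assuming the same admin credentials as above):

```shell
# Show peers, sites and replication status for the cluster
/opt/splunk/bin/splunk show cluster-status --verbose -auth admin:passw0rd
```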
The indexes created in the cluster appear in the Indexes tab.

The Search Head and Cluster Master will be visible in the Search Heads tab.

There you have it: our Splunk infrastructure is now set up in the cloud and ready to ingest data!

Additional Items

While building this architecture, I had to start the splunkd instance multiple times because the machines would get rebooted. To make this easier, I created a script, startSplunk.sh, placed it in root's home directory and made it executable. Use this script as a startup script by adding it to the VM instance's metadata; splunkd will then start whenever the VM boots.
The content of the script:
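The original script body was shown as an image; a minimal sketch that matches the description (assuming Splunk Enterprise lives in /opt/splunk) would be:

```shell
#!/bin/sh
# Start Splunk at boot; the flags suppress the interactive
# license and first-run prompts so the script runs unattended
/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
```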

The command to assign the startup script to each VM:

gcloud compute instances add-metadata idx-mstr \
    --zone us-central1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata sh-ctr \
    --zone us-central1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata idx-ctr-1 \
    --zone us-central1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata idx-ctr-2 \
    --zone us-central1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata idx-ctr-3 \
    --zone us-central1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata idx-east-1 \
    --zone us-east1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata idx-east-2 \
    --zone us-east1-b --metadata startup-script=/root/startSplunk.sh
gcloud compute instances add-metadata idx-east-3 \
    --zone us-east1-b --metadata startup-script=/root/startSplunk.sh

Google Doc Reference to Startup Script

Summary

This PoC showcases how we can run a Splunk multisite environment in Google Cloud. Some things to keep in mind should you prepare to move such a deployment to Google Cloud:

To actually run this solution in Google Cloud, I'd suggest reading this white paper from Splunk before starting out. The paper suggests machine types according to the scale of the deployment.

  • In this scenario I used the boot disks attached to the VMs for storage. In a production environment, the VMs would have separate persistent disks storing the Splunk indexed data. This allows for redundancy, as a disk can be detached from one instance and attached to another.
  • I used a VPC as a substitute for the on-premise network. In a real-world scenario, one would create a VPN so that the current on-premise deployment can scale out to the cloud. Over time, the on-premise architecture would slowly diminish and most of the Splunk architecture would run in the cloud.
  • For brevity, I have not included the deployment server, which might be part of the architecture in some cases. If needed, assume a VM in either site that functions as a Deployment Server.
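As a sketch of the persistent-disk point above (the disk name, size and type here are hypothetical examples, not values from the original setup):

```shell
# Create a standalone persistent disk for Splunk indexed data
gcloud compute disks create idx-ctr-1-data \
    --zone=us-central1-b --size=200GB --type=pd-standard

# Attach it to an indexer; it can later be detached and
# re-attached to a replacement instance if the VM is lost
gcloud compute instances attach-disk idx-ctr-1 \
    --disk=idx-ctr-1-data --zone=us-central1-b
```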

Originally published at http://godfreym.blogspot.com.
