In this series of posts, I run through the process of aggregating logs with Wildfly, Filebeat, ElasticSearch and Kibana.
In Log Aggregation - ElasticSearch & Kibana, I went through creating the ElasticSearch domain, but now we need to create some logs to play with.
For this post, I set up a simple Wildfly instance with a basic Java application to produce some logs.
I have provisioned an EC2 instance with Wildfly and deployed a really simple application to it.
In a setting where this process needed to be more repeatable, I would typically build out a CloudFormation template with a solid AWS::CloudFormation::Init
section (in the LaunchConfiguration Metadata), but in this case, with a throwaway instance, a UserData block like the following will suffice.
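This is a minimal sketch rather than the exact script: it assumes a yum-based Amazon Linux AMI, WildFly 11, and placeholder example.com URLs for the custom configuration files and the application WAR; the init-script path inside the WildFly zip is also an assumption.

```bash
#!/bin/bash
yum remove -y java-1.7.0-openjdk     # drop the default Java 7
yum install -y java-1.8.0-openjdk    # install Java 8
wget -q -O /tmp/wildfly.zip https://download.jboss.org/wildfly/11.0.0.Final/wildfly-11.0.0.Final.zip
unzip -q /tmp/wildfly.zip -d /opt && mv /opt/wildfly-11.0.0.Final /opt/wildfly    # extract to /opt/wildfly
wget -q -O /opt/wildfly/standalone/configuration/standalone.xml https://example.com/config/standalone.xml    # custom server config
wget -q -O /etc/default/wildfly https://example.com/config/wildfly.conf    # service defaults (JBOSS_HOME, run-as user, etc.)
cp /opt/wildfly/docs/contrib/scripts/init.d/wildfly-init-redhat.sh /etc/init.d/wildfly    # template init script shipped with WildFly
chmod +x /etc/init.d/wildfly && chkconfig --add wildfly    # register the service
wget -q -O /opt/wildfly/standalone/deployments/log-agg.war https://example.com/artifacts/log-agg.war    # app WAR, auto-deployed on start
service wildfly start
```

UserData runs as root on first boot, so no sudo is needed in the script.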
That’s probably quite a lot to take in, but briefly:
- lines 2 & 3 remove Java 7 and install Java 8
- lines 4 & 5 download the Wildfly zip and extract it to the /opt/wildfly directory
- lines 6 & 7 download a couple of custom configuration files to tweak Wildfly
- lines 8 & 9 copy a template init script to /etc/init.d/wildfly to configure the Wildfly service
- line 10 grabs the simple Java application, which will be auto-deployed with the context path log-agg when Wildfly starts
- finally, line 11 starts the Wildfly service
The application simply logs to Wildfly's /var/log/server.log file whenever the /log-agg/healthcheck URL is hit, which produces lines like:
2018-01-31 13:32:38,747 DEBUG [default task-2] com.thecuriousdev.logaggregation.healthcheck.HealthCheckServlet Hit /healthcheck
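To see this in action, you can hit the endpoint and follow the log on the instance; the hostname below is a placeholder and 8080 is WildFly's default HTTP port:

```bash
# from anywhere that can reach the instance: hit the healthcheck endpoint
curl -s http://my-wildfly-host.example.com:8080/log-agg/healthcheck

# on the instance itself: follow the log and watch the DEBUG lines appear
tail -f /var/log/server.log | grep healthcheck
```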
If this were your only server then it might not be out of the question to simply SSH to the box and tail the logs, but in a microservices-style deployment this becomes impractical very quickly.
Of course, with the tailing approach, you will likely be logging into a new box every time you do a deploy, which gets old pretty quickly :)
In the next post, I'll go into utilising Filebeat to get our logs from the log file to our ElasticSearch Domain and Kibana.