Set up log monitoring with authentication for an ASP.NET Core 5 Web API with NLog, Fluentd, Elasticsearch and Kibana

Zysce
Jun 28, 2021 · 16 min read

When you deploy an application in the cloud, you usually have tools that let you consult the logs pretty easily (Application Insights on Azure, for instance).

Fluentd is a data collector; in our case, it will be our log collector.

Elasticsearch is a search engine, commonly used for log analytics.

Kibana is the UI associated with Elasticsearch, allowing you to browse the logs and do more cool stuff like building dashboards on top of your log data.

Here is one example of how to set up a log monitor on your own with Docker, deploying directly from your repository.

Prerequisites:

  • Knowledge of ASP.NET Core
  • Knowledge of Docker

In this tutorial, I'll separate the Web API from the log monitor, using this folder structure:

  • A log folder, which will contain the configuration for Fluentd, Elasticsearch and Kibana
  • An api folder, which will contain the Web API.

First, let's set up Fluentd (inspired by the Fluentd Container Deployment tutorial).

In the log folder, create a fluentd subfolder containing three files: fluent.conf.template, generate_config.sh and a Dockerfile.

For the file fluent.conf.template: this file contains a template for the Fluentd configuration, which will be generated at each build of the Dockerfile.

The source tells Fluentd to listen on port 24224.
All logs are matched by the rule *.**.

The logs are first copied to Elasticsearch, whose host is elasticsearch and port is 9200; we will come back to that later.

They are also copied to stdout, for debugging the container.
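Here is a minimal sketch of fluent.conf.template, reconstructed from the configuration Fluentd prints at startup later in this article (your exact file may differ slightly):

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>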

For the file generate_config.sh:

This file will generate the config inside the image.

NB: Later on, we will need to inject the user and password required to connect to Elasticsearch; because of that, it is better not to use a volume.

Furthermore, I prefer this method because it removes any potential link to my source code: all the configuration ends up in the built image.
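A minimal sketch of generate_config.sh, assuming the Dockerfile copies the template to /tmp (the paths are assumptions; for now the script simply copies the template, and we will extend it later to substitute the Elasticsearch credentials):

#!/bin/sh
# For now the template is copied as-is; credential substitution is added later in the article.
cp /tmp/fluent.conf.template /tmp/fluent.conf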

Lastly, the Dockerfile:

The configuration file is first generated on a Debian image, then copied into the fluentd image under /fluentd/etc/.
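A sketch of that multi-stage Dockerfile (the image tags are assumptions; the Elasticsearch plugin version comes from the container logs below):

# Stage 1: generate the configuration file on a Debian image
FROM debian:buster-slim AS config
COPY fluent.conf.template generate_config.sh /tmp/
RUN chmod +x /tmp/generate_config.sh && /tmp/generate_config.sh

# Stage 2: official fluentd image with the Elasticsearch plugin and our generated config
FROM fluent/fluentd:v1.12-debian-1
USER root
RUN gem install fluent-plugin-elasticsearch -v 4.3.3 --no-document
COPY --from=config /tmp/fluent.conf /fluentd/etc/fluent.conf
USER fluent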

Now, let’s test all of this by building and running the Dockerfile.

Zysce@ MINGW64 /c/medium/log/fluentd
$ docker build . -t medium_fluentd -q && docker run -d medium_fluentd
sha256:cdd2a1629127004a39116c3811fc7b61cbb3fad524283ec05461e6e9207362a9
f41401ad213ad4fdda5b420b8a9186a4ed6daa6d035cc7fb5e2922911e9897a0

When we take a look at the logs, there is a failure because Fluentd cannot find any Elasticsearch instance.

Zysce@ MINGW64 /c/medium/log/fluentd
$ docker logs $(docker ps -f ancestor=medium_fluentd -q)
2021-06-27 19:18:33 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2021-06-27 19:18:33 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.3.3'
2021-06-27 19:18:33 +0000 [info]: gem 'fluentd' version '1.12.0'
2021-06-27 19:18:34 +0000 [info]: 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
2021-06-27 19:18:34 +0000 [warn]: define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
2021-06-27 19:18:34 +0000 [info]: using configuration file: <ROOT>
<source>
@type forward
port 24224
bind "0.0.0.0"
</source>
<match *.**>
@type copy
<store>
@type "elasticsearch"
host "elasticsearch"
port 9200
logstash_format true
logstash_prefix "fluentd"
logstash_dateformat "%Y%m%d"
include_tag_key true
type_name "access_log"
tag_key "@log_name"
flush_interval 1s
<buffer>
flush_interval 1s
</buffer>
</store>
<store>
@type "stdout"
</store>
</match>
</ROOT>
2021-06-27 19:18:34 +0000 [info]: starting fluentd-1.12.0 pid=7 ruby="2.6.6"
2021-06-27 19:18:34 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2021-06-27 19:18:35 +0000 [info]: adding match pattern="*.**" type="copy"
2021-06-27 19:18:35 +0000 [info]: #0 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
2021-06-27 19:18:41 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. no address for elasticsearch (Resolv::ResolvError)
2021-06-27 19:18:41 +0000 [warn]: #0 Remaining retry: 14. Retry to communicate after 2 second(s).

Let’s stop the container for now

Zysce@ MINGW64 /c/medium/log/fluentd
$ docker container stop $(docker ps -f ancestor=medium_fluentd -q)
f41401ad213a

Now, let’s configure Elasticsearch:

In the log folder, create an elasticsearch subfolder containing two files: elasticsearch.yml and a Dockerfile.

For the file elasticsearch.yml:

This is the yaml configuration file for elasticsearch.

The discovery.type setting tells Elasticsearch not to look for other nodes and to run as a single node.
You can configure Elasticsearch with Docker Compose to deploy multiple nodes, but that is not the subject of this article.

We will add the configuration to enable authentication later.
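A minimal sketch of elasticsearch.yml (the cluster name is taken from the container logs below; the other settings are common single-node defaults):

cluster.name: my-elasticsearch-cluster
network.host: 0.0.0.0
discovery.type: single-node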

For the Dockerfile:

We copy the config file into /usr/share/elasticsearch/config.
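A sketch of the Dockerfile, assuming the official image (the version is taken from the container logs below):

FROM docker.elastic.co/elasticsearch/elasticsearch:7.13.2
COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml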

Let's build and run it to test:

Zysce@ MINGW64 /c/medium/log/elasticsearch
$ docker build . -t medium_elasticsearch -q && docker run -d medium_elasticsearch
sha256:4268fece086b97fc321d151a767ce639a171c5d82dea8910d9c522f7bc3a2313
278d3c44bae39efb2f4b4141fe17837cee0cac2b85cad446c6e07c4927acc7ff
Zysce@ MINGW64 /c/medium/log/elasticsearch
$ docker logs $(docker ps -f ancestor=medium_elasticsearch -q)
{"type": "server", "timestamp": "2021-06-27T19:40:41,912Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "6d6e76819d9c", "message": "version[7.13.2], pid[7], build[default/docker/4d960a0733be83dd2543ca018aa4ddc42e956800/2021-06-10T21:01:55.251515791Z], OS[Linux/5.4.72-microsoft-standard-WSL2/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/16/16+36]" }
{"type": "server", "timestamp": "2021-06-27T19:40:41,915Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "my-elasticsearch-cluster", "node.name": "6d6e76819d9c", "message": "JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
.....

The container is running as it should. Let's stop it for now, since we will use Docker Compose to bind everything together.

Finally, let’s configure Kibana:

For this, you can just run the official image as it is.
All configuration needed can be done through environment variables.

Let's create the file docker-compose.yml at the root of the log folder to bind everything together (a sketch of these files follows the service descriptions below):

To test on your local machine, let’s add a file docker-compose.override.yml:

Let’s inject the volume path with a .env file

  • For the service fluentd, we want to build the Dockerfile created earlier and expose port 24224.
    There are two networks associated with this service: one to access the elasticsearch service, and one for the service itself, which we will use to expose fluentd to other Docker Compose projects; we will come back to that later.
  • For the service elasticsearch, we want to build the Dockerfile and expose port 9200.
    There are two networks as well: one for the service, and one to allow Kibana to access Elasticsearch.
    The volume lets you persist Elasticsearch data even if the container is shut down and deleted.
    The docker-compose.override.yml file lets you test this on your local machine.
    The .env file sets the persistence path for your deployment environment.
    You can skip that and remove the volume if you don't want persistence.
  • For the service kibana, we want to expose port 5601 and give it access to the service elasticsearch.
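Here is a minimal sketch of that docker-compose.yml (the service layout is reconstructed from the networks and containers created below; the ELASTICSEARCH_DATA variable name is an assumption, and the docker-compose.override.yml used for local testing is omitted):

version: "3.8"

services:
  fluentd:
    build: ./fluentd
    ports:
      - "24224:24224"
    networks:
      - elasticsearch
      - fluentd

  elasticsearch:
    build: ./elasticsearch
    ports:
      - "9200:9200"
    volumes:
      # Persist the Elasticsearch data; the path comes from the .env file.
      - ${ELASTICSEARCH_DATA}:/usr/share/elasticsearch/data
    networks:
      - elasticsearch
      - kibana

  kibana:
    image: kibana:7.13.2
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    networks:
      - kibana

networks:
  elasticsearch:
  kibana:
  fluentd:
    # Named explicitly so other compose projects can attach to it later.
    name: medium-fluentd-ntw

And the .env file (the path is only an example):

ELASTICSEARCH_DATA=/c/medium/log/esdata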

Let’s build and start all services:

Zysce@ MINGW64 /c/medium/log
$ docker-compose build -q && docker-compose up -d
Creating network "log_elasticsearch" with the default driver
Creating network "log_kibana" with the default driver
Creating network "medium-fluentd-ntw" with the default driver
Pulling kibana (kibana:7.13.2)...

Creating log_elasticsearch_1 ... done
Creating log_kibana_1 ... done
Creating log_fluentd_1 ... done

You can now access Kibana at http://localhost:5601.

Let’s create a Web Api to have some logs in elasticsearch.

Zysce@ MINGW64 /c/medium/api
$ dotnet new webapi -n mediumfluentd.web
The template "ASP.NET Core Web API" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on mediumfluentd.web\mediumfluentd.web.csproj...
Determining projects to restore...
Restored C:\medium\api\mediumfluentd.web\mediumfluentd.web.csproj (in 272 ms).
Restore succeeded.
Zysce@ MINGW64 /c/medium/api
$ dotnet new sln -n mediumfluentd.web && dotnet sln add mediumfluentd.web/mediumfluentd.web.csproj
The template "Solution File" was created successfully.
Project `mediumfluentd.web\mediumfluentd.web.csproj` added to the solution.

Add these two NuGet packages to the web project (see the sketch below).

The second package sends the logs to Fluentd over a TCP connection.
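Assuming the usual NLog integration package (the first package name is an assumption; the second is named explicitly below), the install commands would look like this:

cd mediumfluentd.web
dotnet add package NLog.Web.AspNetCore
dotnet add package NLog.Targets.Fluentd.Net5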

Let's add an nlog.config file to the project.

The target Fluentd will use the package NLog.Targets.Fluentd.Net5 to send the log to Fluentd.

The host name needs to be the name of the docker compose service and the port is the port you exposed.
In our case, fluentd and 24224.

For more details about the configuration of the target, feel free to go to the NuGet package page.
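A sketch of nlog.config under those assumptions (the extension assembly and target attribute names are assumptions; check the package page for the exact ones):

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <!-- Load the Fluentd target from the NuGet package -->
  <extensions>
    <add assembly="NLog.Targets.Fluentd.Net5" />
  </extensions>

  <targets>
    <!-- host = docker compose service name, port = the port exposed by fluentd -->
    <target name="fluentd" type="Fluentd" host="fluentd" port="24224" tag="mediumfluentd.web" />
  </targets>

  <rules>
    <logger name="*" minlevel="Info" writeTo="fluentd" />
  </rules>
</nlog>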

Let's configure NLog in Program.cs.

NB: For the sake of this tutorial, you will either need to configure HTTPS yourself or disable HTTPS redirection.
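A sketch of Program.cs following the standard NLog.Web.AspNetCore setup (the author's exact code may differ):

using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using NLog.Web;

namespace mediumfluentd.web
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Load nlog.config early so startup errors are also logged.
            var logger = NLogBuilder.ConfigureNLog("nlog.config").GetCurrentClassLogger();
            try
            {
                CreateHostBuilder(args).Build().Run();
            }
            catch (Exception ex)
            {
                logger.Error(ex, "Stopped program because of exception");
                throw;
            }
            finally
            {
                NLog.LogManager.Shutdown();
            }
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>())
                .ConfigureLogging(logging =>
                {
                    // Let NLog handle everything instead of the default providers.
                    logging.ClearProviders();
                    logging.SetMinimumLevel(LogLevel.Trace);
                })
                .UseNLog();
    }
}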

In the WeatherForecastController, add a log statement in the Get action.
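For example, in the default template's Get action (the log message is only an example):

[HttpGet]
public IEnumerable<WeatherForecast> Get()
{
    // This log flows through NLog to Fluentd, then to Elasticsearch.
    _logger.LogInformation("Getting the weather forecast");

    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    })
    .ToArray();
}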

Now, let’s create a Dockerfile in the folder mediumfluentd.web
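A sketch of a standard multi-stage Dockerfile for a .NET 5 Web API (based on Microsoft's official images; the author's file may differ):

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY mediumfluentd.web.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "mediumfluentd.web.dll"]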

Then add a docker-compose.yml file at the root of the api folder.

NB: We are using Development for the sake of this tutorial to enable access to Swagger. You will need to change it to Production if you want to deploy to your production environment.

You need to give the network the same name as the one you created for the fluentd service, and mark it as external to tell Compose that this network is created elsewhere.

Reminder: this means you need to create and start all services from the log folder before starting the Web API, since the network must exist first.
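A minimal sketch of the api docker-compose.yml (the service name is taken from the container created below; the port mapping is assumed from the Swagger URL used later):

version: "3.8"

services:
  mediumfluentdweb:
    build: ./mediumfluentd.web
    ports:
      - "80:80"
    environment:
      # Development only to get Swagger; switch to Production for a real deployment.
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - fluentd

networks:
  fluentd:
    # Created by the log compose project, so it is external here.
    external: true
    name: medium-fluentd-ntw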

Let’s build and run the Web Api

Zysce@ MINGW64 /c/medium/api
$ docker-compose build -q && docker-compose up -d
Creating api_mediumfluentdweb_1 ... done

We can create some logs by calling the Get action of the WeatherForecastController through Swagger:

http://localhost/swagger/index.html

Now, let’s go to kibana to create an index pattern for fluentd

http://localhost:5601/app/management/kibana/indexPatterns

Select Create index pattern, then type fluentd-* as the index pattern name.

Click Next step, select @timestamp as the Time field, and finally click Create index pattern.

To see the logs, click the burger menu at the top left, then Discover.

Make sure the index fluentd-* is selected

The log monitor is now configured. However, anyone can access Kibana and Elasticsearch, so let's add some security.

I was inspired by this article to enable security with Elasticsearch.

To enable authentication, you first need to activate SSL on Elasticsearch.

You can use Elasticsearch's own tools to generate a self-signed certificate.

First, connect to the container using docker-compose exec elasticsearch bash.
Then, use bin/elasticsearch-certutil ca to generate a certificate authority.
Finally, use bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 to generate the certificate.
No password is added here, but you can add one; in that case you will need to inject the password in elasticsearch.yml.

Zysce@ MINGW64 /c/medium/log
$ docker-compose exec elasticsearch bash
[root@6ac9fde8a335 elasticsearch]# bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key
If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key
Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 :
[root@6ac9fde8a335 elasticsearch]# bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'cert' mode generates X.509 certificate and private keys.
* By default, this generates a single certificate and key for use
on a single instance.
* The '-multiple' option will prompt you to enter details for multiple
instances and will generate a certificate and key for each one
* The '-in' option allows for the certificate generation to be automated by describing
the details of each instance in a YAML file
* An instance is any piece of the Elastic Stack that requires an SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
* All certificates generated by this tool will be signed by a certificate authority (CA)
unless the --self-signed command line option is specified.
The tool can automatically generate a new CA for you, or you can provide your own with
the --ca or --ca-cert command line options.
By default the 'cert' mode produces a single PKCS#12 output file which holds:
* The instance certificate
* The private key for the instance certificate
* The CA certificate
If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files
Enter password for CA (elastic-stack-ca.p12) :
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 :
Certificates written to /usr/share/elasticsearch/elastic-certificates.p12

This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.

The certificate is generated; now, let's copy it into the elasticsearch folder.

Zysce@ MINGW64 /c/medium/log
$ docker cp "$(docker-compose ps -q elasticsearch)":/usr/share/elasticsearch/elastic-certificates.p12 elasticsearch/

Now, modify the Dockerfile to copy the certificate into the image.

Then, modify elasticsearch.yml to enable SSL and authentication.
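As a sketch, the Dockerfile gains one COPY line and elasticsearch.yml gains the usual X-Pack security settings (these are the standard settings documented by Elastic; adjust them if you protected the keystore with a password):

# Added to the elasticsearch Dockerfile
COPY elastic-certificates.p12 /usr/share/elasticsearch/config/elastic-certificates.p12

And in elasticsearch.yml:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12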

Rebuild and restart the containers through docker compose.

Now, you need to generate the users and passwords, since Kibana and Fluentd can no longer connect anonymously.

Connect to the elasticsearch container, then run the command bin/elasticsearch-setup-passwords auto.

NB: If you did not set up a volume and you kill the container, you will have to redo this operation every time you start the container, so it is better to have a volume.

Zysce@ MINGW64 /c/medium/log
$ docker-compose exec elasticsearch bash
[root@25bd74f7230c elasticsearch]# bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = IwaK0R811dSbDgHGnUK8
Changed password for user kibana_system
PASSWORD kibana_system = Lju3oNWRu3hJf469stMf
Changed password for user kibana
PASSWORD kibana = Lju3oNWRu3hJf469stMf
Changed password for user logstash_system
PASSWORD logstash_system = E5IzlLeonZ64EaFK84Cz
Changed password for user beats_system
PASSWORD beats_system = P9Nme9EnmcgqIldGDs88
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = avkPFKQkPUITADw3n3bl
Changed password for user elastic
PASSWORD elastic = 5SwYmrGYsDFiCmMw6AfM

Save these passwords somewhere safe.

Let’s start with Kibana.

You just need to modify the docker compose and pass the user kibana and password as environment variables.

You can inject the KIBANA_PWD environment variable as a pipeline variable.
For this tutorial, I am adding it to the .env file.

Reminder: be EXTRA careful not to push a .env file, or any file containing a password, into your repository.
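As a sketch, the kibana service in docker-compose.yml ends up looking like this (ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD are the environment variables understood by the official Kibana image):

  kibana:
    image: kibana:7.13.2
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana
      # The password generated above, injected from .env or a pipeline variable.
      - ELASTICSEARCH_PASSWORD=${KIBANA_PWD}
    networks:
      - kibana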

Let's restart the Kibana container and access the Kibana URL:

http://localhost:5601

Log in with the elastic user and the password generated previously.
We will now set up Fluentd.

Go to Stack Management > Roles > Create Role

Let’s create a role for fluentd.

Fluentd needs the monitor privilege on the cluster and the all privilege on the index fluentd-*.

Now, create the user fluentd with a secure password and assign the role you created above to this user.

Now, let's add this user to Fluentd.

First, you need to modify fluent.conf.template:

user and password are added to the elasticsearch store.

Now, let's modify generate_config.sh to replace the placeholders in the fluent.conf.template file.

Then the Dockerfile to inject FLUENTD_USERNAME and FLUENTD_PASSWORD as arguments

Now the docker-compose.yml: both variables are injected as build args.

You can inject the username and the password as pipeline variables.
For this tutorial, I modified the .env file. A sketch of all four changes follows.
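Here is a sketch of the four changes, assuming simple placeholder substitution with sed (the placeholder names are assumptions).

In the elasticsearch <store> of fluent.conf.template:

    user FLUENTD_USERNAME_PLACEHOLDER
    password FLUENTD_PASSWORD_PLACEHOLDER

In generate_config.sh:

#!/bin/sh
# Substitute the credentials passed as build arguments, then emit the final config.
sed -e "s/FLUENTD_USERNAME_PLACEHOLDER/${FLUENTD_USERNAME}/" \
    -e "s/FLUENTD_PASSWORD_PLACEHOLDER/${FLUENTD_PASSWORD}/" \
    /tmp/fluent.conf.template > /tmp/fluent.conf

In the Dockerfile, inside the config stage and before the script runs:

ARG FLUENTD_USERNAME
ARG FLUENTD_PASSWORD

And in docker-compose.yml, for the fluentd service:

  fluentd:
    build:
      context: ./fluentd
      args:
        - FLUENTD_USERNAME=${FLUENTD_USERNAME}
        - FLUENTD_PASSWORD=${FLUENTD_PASSWORD}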

Now, let's restart the containers one last time with docker-compose down then docker-compose build -q && docker-compose up -d.

Zysce@ MINGW64 /c/medium/log
$ docker-compose logs fluentd
Attaching to log_fluentd_1
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.3.3'
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: gem 'fluentd' version '1.12.0'
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
fluentd_1 | 2021-06-28 01:42:25 +0000 [warn]: define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: using configuration file: <ROOT>
fluentd_1 | <source>
fluentd_1 | @type forward
fluentd_1 | port 24224
fluentd_1 | bind "0.0.0.0"
fluentd_1 | </source>
fluentd_1 | <match *.**>
fluentd_1 | @type copy
fluentd_1 | <store>
fluentd_1 | @type "elasticsearch"
fluentd_1 | host "elasticsearch"
fluentd_1 | port 9200
fluentd_1 | user "fluentd"
fluentd_1 | password xxxxxx
fluentd_1 | logstash_format true
fluentd_1 | logstash_prefix "fluentd"
fluentd_1 | logstash_dateformat "%Y%m%d"
fluentd_1 | include_tag_key true
fluentd_1 | type_name "access_log"
fluentd_1 | tag_key "@log_name"
fluentd_1 | flush_interval 1s
fluentd_1 | <buffer>
fluentd_1 | flush_interval 1s
fluentd_1 | </buffer>
fluentd_1 | </store>
fluentd_1 | <store>
fluentd_1 | @type "stdout"
fluentd_1 | </store>
fluentd_1 | </match>
fluentd_1 | </ROOT>
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: starting fluentd-1.12.0 pid=8 ruby="2.6.6"
fluentd_1 | 2021-06-28 01:42:25 +0000 [info]: spawn command to main: cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
fluentd_1 | 2021-06-28 01:42:26 +0000 [info]: adding match pattern="*.**" type="copy"
fluentd_1 | 2021-06-28 01:42:26 +0000 [info]: #0 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
fluentd_1 | 2021-06-28 01:42:28 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 192.168.32.2:9200 (Errno::ECONNREFUSED)
fluentd_1 | 2021-06-28 01:42:28 +0000 [warn]: #0 Remaining retry: 14. Retry to communicate after 2 second(s).
fluentd_1 | 2021-06-28 01:42:32 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 192.168.32.2:9200 (Errno::ECONNREFUSED)
fluentd_1 | 2021-06-28 01:42:32 +0000 [warn]: #0 Remaining retry: 13. Retry to communicate after 4 second(s).
fluentd_1 | 2021-06-28 01:42:40 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 192.168.32.2:9200 (Errno::ECONNREFUSED)
fluentd_1 | 2021-06-28 01:42:40 +0000 [warn]: #0 Remaining retry: 12. Retry to communicate after 8 second(s).
fluentd_1 | 2021-06-28 01:42:56 +0000 [warn]: #0 Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 192.168.32.2:9200 (Errno::ECONNREFUSED)
fluentd_1 | 2021-06-28 01:42:56 +0000 [warn]: #0 Remaining retry: 11. Retry to communicate after 16 second(s).
fluentd_1 | 2021-06-28 01:42:56 +0000 [warn]: #0 Detected ES 7.x: `_doc` will be used as the document `_type`.
fluentd_1 | 2021-06-28 01:42:56 +0000 [info]: adding source type="forward"
fluentd_1 | 2021-06-28 01:42:56 +0000 [warn]: #0 define <match fluent.**> to capture fluentd logs in top level is deprecated. Use <label @FLUENT_LOG> instead
fluentd_1 | 2021-06-28 01:42:56 +0000 [info]: #0 starting fluentd worker pid=17 ppid=8 worker=0
fluentd_1 | 2021-06-28 01:42:56 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
fluentd_1 | 2021-06-28 01:42:56 +0000 [info]: #0 fluentd worker is now running worker=0
fluentd_1 | 2021-06-28 01:42:56.970389800 +0000 fluent.info: {"pid":17,"ppid":8,"worker":0,"message":"starting fluentd worker pid=17 ppid=8 worker=0"}
fluentd_1 | 2021-06-28 01:42:56.971255600 +0000 fluent.info: {"port":24224,"bind":"0.0.0.0","message":"listening port port=24224 bind=\"0.0.0.0\""}
fluentd_1 | 2021-06-28 01:42:56.972543300 +0000 fluent.info: {"worker":0,"message":"fluentd worker is now running worker=0"}
fluentd_1 | warning: 299 Elasticsearch-7.13.2-4d960a0733be83dd2543ca018aa4ddc42e956800 "[types removal] Specifying types in bulk requests is deprecated."

Let’s take a look at Kibana

Same page but with authentication enabled and fluentd working.

With this configuration, you can deploy multiple projects and they can all send their logs to this Fluentd instance.

You just need to modify the tag in nlog.config and set the external network in docker-compose.yml.

In the log explorer, you can then browse per project by filtering on @log_name.

Possible improvements:

  • Move the Elasticsearch certificate outside the repository
  • Add multiple nodes to the Elasticsearch cluster
  • Hide Kibana behind a reverse proxy; this also makes it easy to set up TLS

Conclusion

Now you have a Web API sending its logs to Fluentd, which stores them in an Elasticsearch cluster.

Those logs are accessible through Kibana, the UI used to monitor them, create dashboards, etc.

Here is the final project structure:

This is my first story, feel free to give me any constructive criticism. ;)
