Centralized Logging for Microservices in .NET Core

Manjit Singh
Nov 29, 2020

We all know the importance of logging in applications, but it is even more crucial in distributed systems. Logging in a microservices architecture brings its own challenges.

Effective Logging

  • Should provide consistent information
  • Should be easily consumable
  • Should enable understanding of application/service behavior — Do we have enough detail to do proper analysis?
  • Should help in identifying and investigating errors — Do we have enough information to diagnose a bug?

Security Concerns

  • Do not log sensitive information like passwords or any uniquely identifying information that should stay private
  • If you send logs to a cloud logging service, what information are you allowed to send? For example, can you send user names, machine names, IP addresses? This can vary by company policy and the type of data you deal with.
  • Even if logs are stored on premises, the access level of the users analyzing them may not allow access to sensitive customer or user information. In cases like these, you may store just an identifier that can be used to look up the data.

Logging in .NET Core
.NET Core provides logging through WebHost.CreateDefaultBuilder and the ILogger interface. Console, Debug and EventSource providers are enabled by default. ILogger<T> is accessed via dependency injection; the provided <T> becomes the category, is logged as SourceContext, and helps with filtering and grouping. LoggerFactory can be used to define custom categories.

public HomeController(ILogger<HomeController> logger)
public ImageRepository(ILogger<ImageRepository> logger)
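Where a category not tied to a class is needed, a logger can be created from an injected ILoggerFactory; the category name below is just an example:

// Custom category created via ILoggerFactory instead of ILogger<T>
var ordersLogger = loggerFactory.CreateLogger("CustomApp.Orders");
ordersLogger.LogInformation("Order processing started");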

We can control logging by using:

  • Levels — Trace/Debug/Information/Warning/Error/Critical
  • Categories
  • Filters
  • Scopes
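Scopes, for instance, attach contextual data to everything logged inside a block. A small hypothetical sketch:

// Every entry written inside the scope carries OrderId as additional context
using (_logger.BeginScope("Processing order {OrderId}", orderId))
{
    _logger.LogInformation("Validating order");
    _logger.LogInformation("Charging payment");
}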

Set up Log Levels and Inject Logger
We can set the default logging level in config:

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  }
}
Or set it based on namespaces:
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft": "Information"
    }
  }
}

Then inject logger in code:

private readonly ILogger<ImageController> _logger;

public ImageController(IImageRepository imageRepo, ILogger<ImageController> logger)
{
    _imageRepo = imageRepo;
    _logger = logger;
}

We can configure it to use a third-party logger like Serilog. Serilog is the most popular choice for .NET (as of this writing, the second most downloaded package on NuGet after Newtonsoft.Json). It provides a lot of sink options for storing logs.

If using Serilog:


{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning",
        "CustomApp.Database": "Information",
        "CustomApp.Api": "Debug"
      }
    }
  }
}

Or set it in code:
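A minimal sketch of the equivalent in-code setup (the Console sink here is just a placeholder for whichever sinks you use):

using Serilog;
using Serilog.Events;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .MinimumLevel.Override("System", LogEventLevel.Warning)
    .MinimumLevel.Override("CustomApp.Database", LogEventLevel.Information)
    .Enrich.FromLogContext()
    .WriteTo.Console()
    .CreateLogger();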

Structured Logging
To get the full benefit of filtering and querying, we need to use structured logging (message templates with named properties) instead of only formatted strings:

_logger.LogInformation("Calling API with ProductId {ProductId} and url {ProductApiUrl}", productId, url);

This lets us query and filter logs by the named fields.
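For contrast, string interpolation bakes the values into plain text, so sinks cannot index them as separate properties; a quick hypothetical example:

// Avoid: values are flattened into the message text
_logger.LogInformation($"Calling API with ProductId {productId} and url {url}");

// Prefer: named placeholders become queryable properties on the log event
_logger.LogInformation("Calling API with ProductId {ProductId} and url {ProductApiUrl}", productId, url);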

Adding File Support to Logs in .NET Core
The Serilog.Extensions.Logging.File package implements loggerFactory.AddFile() to quickly and easily set up file logging in ASP.NET Core apps.
Add the following packages:
Serilog.AspNetCore
Serilog.Settings.Configuration
Serilog.Sinks.File
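With those packages in place, a rough Program.cs sketch could look like this (the file path and rolling interval are illustrative):

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseSerilog((context, loggerConfiguration) => loggerConfiguration
                .ReadFrom.Configuration(context.Configuration)   // reads the "Serilog" section from appsettings.json
                .WriteTo.File("Logs/log-.txt", rollingInterval: RollingInterval.Day))  // daily rolling log file
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}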

No other code change is needed beyond this wiring; the framework keeps injecting ILogger<T> into controllers and other classes as before.

This method is only meant to add quick file-logging support to the default logging in a .NET Core app. In most cases Serilog with multiple sinks would be added to the project, but when only file logging is needed, this kind of lightweight solution works well. Other loggers like NLog provide similar functionality.

RequestId is added by the framework and keeps track of the call chain.
Let us say we have an action Products in HomeController (ideally it would be in its own controller) and it is called when the user clicks the Products link. This action method then calls ProductRepository to get the list of products, roughly like this:
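(IProductRepository, Product and the log messages below are hypothetical; the point is that each class gets its own ILogger<T>.)

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class HomeController : Controller
{
    private readonly IProductRepository _productRepo;
    private readonly ILogger<HomeController> _logger;

    public HomeController(IProductRepository productRepo, ILogger<HomeController> logger)
    {
        _productRepo = productRepo;
        _logger = logger;
    }

    public IActionResult Products()
    {
        _logger.LogInformation("Loading product list");
        var products = _productRepo.GetProducts();
        return View(products);
    }
}

public class ProductRepository : IProductRepository
{
    private readonly ILogger<ProductRepository> _logger;

    public ProductRepository(ILogger<ProductRepository> logger) => _logger = logger;

    public IReadOnlyList<Product> GetProducts()
    {
        _logger.LogInformation("Fetching products from the data store");
        return new List<Product>();
    }
}

When the Products link is clicked, both classes write entries within the same HTTP request, and the following log entries are logged: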

The RequestId (green in the screenshot) flows through the whole chain. The SourceContext (red) is based on ILogger<T>, so we know where the actual operation happened, but we still know which action (yellow) caused this chain of events. This is very valuable information for debugging and is easy to use when we send logs to a sink like Seq or Elasticsearch.

Centralized Logging in Distributed Systems
Logging to files is fine and very handy at the individual service level, but most requests these days end up in a chain of service calls. It is not very efficient to go through multiple log files and correlate a call chain manually. The solution is to implement centralized logging. There are a lot of options out there, but we will continue our Serilog example and send logs to various sinks. The list of sinks for Serilog can be found here:

https://github.com/serilog/serilog/wiki/Provided-Sinks

We will explore sending logs to Elasticsearch and Seq. I personally also like sending logs to Kafka, because with Kafka Streams you get far more flexibility to manage logs, but that is a bit more involved.

If we still want to maintain local log files, we can configure Serilog to do so. It can be configured to send logs to local sinks like Console or a rolling file and to a centralized sink like Seq, AWS CloudWatch or Elasticsearch.

We will now look at an example of setting up Serilog with multiple sinks. Normally you would not have this many sinks in one environment, but you can try a few options in your dev environment. Logging is just one part of it; you always have to look at how well a tool helps with debugging, querying/filtering logs, setting up alerts and so on. I tried all of the sinks in the attached example in a test project to see which ones I would like to keep. As mentioned earlier, what can be logged to cloud sinks depends on your company policy.

Using the following packages:
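For the sinks used here, that would likely be:

Serilog.AspNetCore
Serilog.Sinks.Console
Serilog.Sinks.File
Serilog.Sinks.Seq
Serilog.Sinks.Elasticsearch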

Serilog setup in code for multiple sinks:
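A sketch of what such a setup might look like (endpoints, index name and levels are illustrative):

using System;
using Serilog;
using Serilog.Events;
using Serilog.Sinks.Elasticsearch;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .WriteTo.Console()                                                       // local sink for dev
    .WriteTo.File("Logs/log-.txt", rollingInterval: RollingInterval.Day)     // local rolling file
    .WriteTo.Seq("http://localhost:5341")                                    // centralized: Seq
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,
        IndexFormat = "customapp-logs-{0:yyyy.MM}"
    })                                                                       // centralized: Elasticsearch
    .CreateLogger();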

Why try multiple sinks?
Once the logger is set up with multiple sinks, no further code change is required for the actual logging calls. So, if feasible, try multiple sinks in your dev environment to see which works best for you. You can enable or disable sinks by driving the setup code with on/off flags from the config file (a small sketch follows the list below). Sinks should be evaluated for:

  • Required performance — This would also depend on log retention period.
  • Out of box functionality
  • Cost
  • Cloud or on premises
  • Support for easy filtering based on categorization and/or log levels
  • Alerting based on filtering
  • Ability to have different retention periods based on application name and/or log levels. For example, we can keep errors for a longer duration and information logs for a much shorter one. This helps with storage cost and keeps queries faster.
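A minimal sketch of such config-driven sink toggles (the flag names, endpoints and paths are hypothetical; configuration is the app's IConfiguration):

using Microsoft.Extensions.Configuration;
using Serilog;

// Hypothetical flags in appsettings.json, e.g. "Sinks": { "UseSeq": true, "UseFile": false }
var loggerConfig = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Console();

if (configuration.GetValue<bool>("Sinks:UseSeq"))
    loggerConfig.WriteTo.Seq("http://localhost:5341");

if (configuration.GetValue<bool>("Sinks:UseFile"))
    loggerConfig.WriteTo.File("Logs/log-.txt", rollingInterval: RollingInterval.Day);

Log.Logger = loggerConfig.CreateLogger();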

Seq comes with a useful UI and a set of default signals out of the box.

Of course, we can define our own signals by combining the source application name and/or log levels. Once logs are in a centralized logging system, we can view them in multiple ways from a single place:

  • Just the chronological flow of logs across the system — all logs or only errors/warnings
  • Filtered by application and/or log level
  • Filtered by a given RequestId (across various microservices)
  • Filtered by user name

Elasticsearch/Kibana or AWS CloudWatch would provide similar functionality and a lot more, but may not be an option in all environments.

Summary

Logging in .NET Core is very easy to set up with the provided logger and with support for third-party loggers. Combined with centralized logging sinks, it is very powerful and makes developers' lives a bit easier in the distributed systems world.

Thanks!
