Distributed Tracing with Custom Spring Sleuth Library, Zipkin, Elasticsearch and Apache Kafka

Caner Kaya
4 min read · May 31, 2020

Distributed tracing is about debugging and monitoring distributed software architectures, and we especially need it in microservice projects. There are some important terms you must know when working with Sleuth, such as span and trace ID, but I am not going to cover them here. Before starting the article, you should read the official introduction here:

https://cloud.spring.io/spring-cloud-static/spring-cloud-sleuth/2.2.3.RELEASE/reference/html/#introduction

This is what our ecosystem will look like once the project is finished:

First, we will create our custom library. It will be a Maven project. I will use IntelliJ IDEA to create it, but you can choose whatever IDE you want.

You need to add these two dependencies to your pom.xml. I will use version 2.2.3.RELEASE of Spring Cloud Sleuth; see the documentation page linked above for details about this version.
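Something along these lines should work. As a sketch, I am assuming spring-cloud-starter-sleuth for the tracing instrumentation and spring-kafka for the producer; if the zipkin2.reporter types used below are not on your classpath, add io.zipkin.reporter2:zipkin-reporter as well.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
    <version>2.2.3.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!-- version managed by the Spring Boot parent or dependency management -->
</dependency>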

Create the configuration properties for the Kafka producer.
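A minimal sketch of what this class could look like; the prefix, package and field names are placeholders of my own, and the defaults match the Kafka setup used later in the article.

package com.example.sleuthkafka; // hypothetical package name

import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "custom.sleuth.kafka")
public class KafkaSenderProperties {

    // Comma-separated list of Kafka brokers the spans are published to.
    private String bootstrapServers = "localhost:9092";

    // Topic that the Zipkin server will consume spans from.
    private String topic = "zipkintest.t";

    public String getBootstrapServers() { return bootstrapServers; }

    public void setBootstrapServers(String bootstrapServers) { this.bootstrapServers = bootstrapServers; }

    public String getTopic() { return topic; }

    public void setTopic(String topic) { this.topic = topic; }
}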

Then create the custom sender. This class will be responsible for sending spans to the Kafka broker.
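Here is one way to sketch it: a class that extends zipkin2.reporter.Sender and publishes JSON-encoded span batches through a KafkaTemplate. Class and package names are again illustrative, not the ones from the original library.

package com.example.sleuthkafka;

import java.util.List;

import org.springframework.kafka.core.KafkaTemplate;

import zipkin2.Call;
import zipkin2.codec.BytesMessageEncoder;
import zipkin2.codec.Encoding;
import zipkin2.reporter.Sender;

public class KafkaSpanSender extends Sender {

    private final KafkaTemplate<String, byte[]> kafkaTemplate;
    private final String topic;

    public KafkaSpanSender(KafkaTemplate<String, byte[]> kafkaTemplate, String topic) {
        this.kafkaTemplate = kafkaTemplate;
        this.topic = topic;
    }

    @Override
    public Encoding encoding() {
        // Zipkin's Kafka collector can decode JSON v2 span lists.
        return Encoding.JSON;
    }

    @Override
    public int messageMaxBytes() {
        // Roughly the default maximum message size of a Kafka broker.
        return 1_000_000;
    }

    @Override
    public int messageSizeInBytes(List<byte[]> encodedSpans) {
        return encoding().listSizeInBytes(encodedSpans);
    }

    @Override
    public Call<Void> sendSpans(List<byte[]> encodedSpans) {
        // Join the individually encoded spans into one JSON array and publish it.
        byte[] message = BytesMessageEncoder.JSON.encode(encodedSpans);
        kafkaTemplate.send(topic, message);
        return Call.create(null);
    }
}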

Then create our auto-configuration class.
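A sketch of what it could look like, continuing with the illustrative names above: it builds a Kafka producer from the properties, wraps it in the custom sender, and exposes a Reporter<Span> bean, which Sleuth's Brave auto-configuration should pick up in place of its default no-op reporter.

package com.example.sleuthkafka;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

import zipkin2.Span;
import zipkin2.reporter.AsyncReporter;
import zipkin2.reporter.Reporter;

@Configuration
@EnableConfigurationProperties(KafkaSenderProperties.class)
public class KafkaSpanSenderAutoConfiguration {

    @Bean
    public KafkaTemplate<String, byte[]> spanKafkaTemplate(KafkaSenderProperties properties) {
        // Producer that sends the already-encoded span batches as raw bytes.
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, properties.getBootstrapServers());
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(config));
    }

    @Bean
    public Reporter<Span> kafkaSpanReporter(KafkaTemplate<String, byte[]> spanKafkaTemplate,
                                            KafkaSenderProperties properties) {
        // AsyncReporter batches spans in the background and hands them to our sender.
        return AsyncReporter.create(new KafkaSpanSender(spanKafkaTemplate, properties.getTopic()));
    }
}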

We will register the auto-configuration class in spring.factories, because Spring Boot checks META-INF/spring.factories for auto-configuration classes.
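Assuming the illustrative class name above, the entry in src/main/resources/META-INF/spring.factories looks like this:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
  com.example.sleuthkafka.KafkaSpanSenderAutoConfiguration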

We are done with the library implementation. Now we will install the library into our local Maven repository. Go to the library folder and run:

mvn install

Finally, we can use this library in our client applications.

Create the client app with Spring Initializr.

Set the configurations in application.properties.
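A possible application.properties; the application name and the custom Kafka keys are the placeholder names from the library sketch, and the port matches the URL we will call later.

spring.application.name=application-1
server.port=5050
spring.sleuth.sampler.probability=1.0
custom.sleuth.kafka.bootstrap-servers=localhost:9092
custom.sleuth.kafka.topic=zipkintest.t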

Create the controller.
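A minimal controller is enough here; this one simply returns a message, so every call to it produces a traced span (names are illustrative).

package com.example.application1;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @GetMapping("/")
    public String hello() {
        return "Hello from Application 1";
    }
}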

Add our library to the pom.xml.
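The coordinates below are placeholders; use whatever groupId, artifactId and version you gave the library when you installed it.

<dependency>
    <groupId>com.example</groupId>
    <artifactId>sleuth-kafka-library</artifactId>
    <version>1.0.0</version>
</dependency>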

Now we will create a second application to see the path of a request more clearly. Create the project with Spring Initializr.

Set the configurations in application.properties.
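Again a possible application.properties, this time on port 5051:

spring.application.name=application-2
server.port=5051
spring.sleuth.sampler.probability=1.0
custom.sleuth.kafka.bootstrap-servers=localhost:9092
custom.sleuth.kafka.topic=zipkintest.t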

Add a RestTemplate bean to the project.
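Sleuth only instruments RestTemplate instances that are Spring beans, so declaring one as a @Bean is what makes the trace headers propagate on outgoing calls. A minimal sketch:

package com.example.application2;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}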

Create the controller.
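A sketch of the controller, assuming the second application calls the first one through the injected RestTemplate, so a single request produces spans in both applications.

package com.example.application2;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class GatewayController {

    private final RestTemplate restTemplate;

    public GatewayController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/")
    public String callApplicationOne() {
        // The instrumented RestTemplate carries the trace context to Application 1.
        return restTemplate.getForObject("http://localhost:5050", String.class);
    }
}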

Add our library to the pom.xml of this project as well (the same dependency snippet as before).

Now we will set up the Zipkin environment.

For Apache Kafka, I will use version 2.2.1. You can download the binary from here: https://kafka.apache.org/downloads

Apache Kafka depends on ZooKeeper, so before starting the Kafka server we must run the ZooKeeper server.

Go to the \bin\windows folder and start the ZooKeeper server.
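With the configuration files that ship with the Kafka download, the command is:

zookeeper-server-start.bat ..\..\config\zookeeper.properties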

Then run the Kafka server.
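Again with the bundled default configuration:

kafka-server-start.bat ..\..\config\server.properties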

We are done with Kafka. Now we will start the Elasticsearch server. The Zipkin server is compatible with Elasticsearch versions 5.x to 7.x. I will use the latest version, 7.7.0. You can download it from here: https://www.elastic.co/downloads/elasticsearch

Then go to the \bin folder and start the server.
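On Windows that is simply (the default configuration listens on localhost:9200):

elasticsearch.bat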

It is time for the Zipkin server. Download the latest version of the Zipkin server:

curl -sSL https://zipkin.io/quickstart.sh | bash -s

One thing is left before running the Zipkin server: you should set some environment variables. The Zipkin server supports multiple ways to store spans; the details are at https://github.com/openzipkin/zipkin/tree/master/zipkin-server.

  • STORAGE_TYPE: elasticsearch
  • ES_HOSTS: http://localhost:9200
  • KAFKA_BOOTSTRAP_SERVERS: localhost:9092
  • KAFKA_TOPIC: zipkintest.t

Here is one example of setting the environment variables on Windows 10.
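In the same Command Prompt session that will later run Zipkin, you can set them like this:

set STORAGE_TYPE=elasticsearch
set ES_HOSTS=http://localhost:9200
set KAFKA_BOOTSTRAP_SERVERS=localhost:9092
set KAFKA_TOPIC=zipkintest.t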

Finally, we can start the Zipkin server.

java -jar zipkin.jar

Check the Zipkin UI at http://localhost:9411.

Now run Application 1, call http://localhost:5050, and visit the Zipkin UI. Here is our result.

Now run Application 2, call http://localhost:5051, and visit the Zipkin UI.

Click for details and you will see the path of the request, along with how much time was spent in each application to generate the response. Thus, if we run into an anomaly, this will help us debug it easily.

Thank you for reading. Here is the source code of the project:

https://github.com/canerky96/spring-cloud-sleuth-library
