Wednesday, August 15, 2018

Spring Kafka Producer Class diagram

Spring Kafka offers a Spring bean structure for producing Kafka messages. For someone familiar with the plain Kafka producer API, Spring Kafka can seem a bit different.

Here is a code snippet, along the lines of the example in the Spring Kafka documentation, showing the classes involved in the Spring Kafka producer.
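This is a minimal sketch, assuming a local broker at localhost:9092; the bean names, the Integer/String key and value types, and the serializer choices are illustrative, not requirements.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // The factory holds the producer configuration: broker address and serializers.
    @Bean
    public ProducerFactory<Integer, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    // The template wraps the factory; applications call its send() methods.
    @Bean
    public KafkaTemplate<Integer, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}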

We can see that there are two important classes involved: DefaultKafkaProducerFactory and KafkaTemplate. Here is a UML class diagram showing how they are related.


[Figure: Spring KafkaTemplate UML diagram]

The KafkaTemplate bean needs an implementation of the ProducerFactory interface, which holds the configuration needed to produce messages. There is only one implementation available: the DefaultKafkaProducerFactory class.

DefaultKafkaProducerFactory takes a Map of configuration properties, such as the Kafka bootstrap server URLs, the serializer classes, and so on.
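As a usage sketch, the template can then be injected wherever messages need to be sent; the service class and topic name below are hypothetical:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingSender {

    @Autowired
    private KafkaTemplate<Integer, String> kafkaTemplate;

    // Sends a message with key 1 to a hypothetical topic named "myTopic".
    public void send() {
        kafkaTemplate.send("myTopic", 1, "Hello, Kafka");
    }
}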


Reference - https://docs.spring.io/spring-kafka/reference/htmlsingle/#_overview

Thursday, May 10, 2018

Some common errors seen after upgrading an Elasticsearch, Logstash & Kibana stack from version 5 to version 6

I recently worked on upgrading an existing Elasticsearch, Logstash & Kibana (ELK) stack from version 5.2 to 6.2.4. There are several breaking changes in this upgrade.

I encountered the following errors related to the index mappings. Each error is shown below along with its fix; I hope this helps anyone who is searching for these messages.

These errors appear in the logstash-plain.log or logstash.log file.

[2018-05-09T03:45:12,204][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.05.09", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x673f5e81>], :response=>{"index"=>{"_index"=>"logstash-2018.05.09", "_type"=>"doc", "_id"=>"t92BRGMBQMgRhS0WQ5xE", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to find type parsed [string] for [uriParams]"}}}}

Solution - Change the type of the uriParams field from string to text or keyword; the string mapping type was removed for new indices in Elasticsearch 6.
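For example, a version 5 mapping such as the following (the field name comes from the log message above; the surrounding mapping is omitted):

"uriParams": {
  "type": "string"
}

becomes, in version 6:

"uriParams": {
  "type": "text"
}

Use "keyword" instead of "text" if the field is used for exact matches, sorting, or aggregations.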

[2018-05-09T13:17:45,930][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.05.09", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x238de49c>], :response=>{"index"=>{"_index"=>"logstash-2018.05.09", "_type"=>"doc", "_id"=>"n06NRmMBQMgRhS0WdPQs", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Could not convert [fielddata] to boolean", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Failed to parse value [{format=false}] as only [true] or [false] are allowed."}}}}}}
Solution - Change the value of the "fielddata" attribute in the index mapping. If you have been running ELK since version 2, your "fielddata" value will have been set as

"fielddata": {
   "format": "disabled" 


}                       


This is not supported in version 6. Change the value as follows:

"fielddata": true




Tuesday, January 23, 2018

Apache Kafka InvalidReplicationFactorException

Kafka may throw this exception when you create a topic.

Scenario

If we try to create a topic with a replication factor larger than the number of brokers in the cluster, we will see an error.
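For example, on the single-broker quickstart setup, a create command like this (the topic name is illustrative) will fail because only one broker is available:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-topic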


Exception

This is the exception thrown by Apache Kafka: org.apache.kafka.common.errors.InvalidReplicationFactorException. The message indicates that the requested replication factor is larger than the number of available brokers.




Resolution

Ensure that the replication factor is less than or equal to the number of brokers in the cluster.
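For example, the command above succeeds on a single-broker cluster once the replication factor is reduced to 1:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic my-topic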

Kafka Exception - kafka.common.InconsistentBrokerIdException

I came across this exception when I was working with Apache Kafka.

Scenario

Set up multiple Kafka brokers in a cluster by copying server.properties to create a new properties file for each additional broker. I was following the instructions here: https://kafka.apache.org/documentation/#quickstart_multibroker
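As a sketch following the quickstart, each copied file needs a unique broker id, listener port, and log directory; the values below mirror the quickstart and are illustrative:

# config/server-1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1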

Result

The first broker started without error. When I started the second broker, I saw this exception - kafka.common.InconsistentBrokerIdException.


Resolution

I found that the cause was that, when I copied the properties file, I had missed changing the "log.dirs" property. Kafka records each broker's id in a meta.properties file inside its log directory, so if two brokers share the same "log.dirs", the second broker finds a stored broker.id that does not match its own configured broker.id and fails with this exception. I restarted the second instance with a unique value for the "log.dirs" property, and the issue was resolved.