During my career I have made a lot of mistakes. And I have tried to fix the mistakes of others even more often.
This 7-part series talks about how I and my fellow engineers have wasted company resources by applying inappropriate technologies and patterns.
Today I will talk about Kafka.
If you missed the other parts of this series, find and discuss them here:
Part 1 – Trends
Part 2 – Microservices
Part 3 – Docker + Kubernetes
Part 5 – Low-abstraction technologies like Go & TypeScript
Part 6 – Using Slack for Monitoring
Part 7 – Cross-Team Collaboration
I still remember when, a decade ago, my then-CTO at Axel Springer came to me and asked about Kafka, with scepticism.
I had not really bothered thinking about it, as our use cases were more than adequately solved with messaging. We relied on RabbitMQ, which was a perfect fit:
a technology that provided us with everything we needed and nothing more.
He was confronted with a situation in which several other teams in his unit were pushing for Kafka adoption and he was unsure about it.
In hindsight, this was a smart concern.
I personally did not bother using Kafka. When I needed a streaming service later, I opted for Amazon Kinesis.
Honestly, I thought the Kafka train had stopped at some remote station, never to depart again.
I was wrong. Through hype, Kafka came back with a vengeance and became the de-facto standard integration pattern for startups.
Which is terrible. Because only seldom are we confronted with problems for which streaming qualifies as a solution.
Media streaming is among them, as are analytics data streams.
Messaging and event-driven architecture are not.
If you are confronted with a situation in which you want to fire messages, be it plain messages or events in event-driven-architecture scenarios, you are better off with messaging infrastructure.
You know why Kaka is called Kaka? It’s because it is write-optimised, not aimed at reads and delivery.
Did you ever experience how you understand the context of a text while reading even i you skip letters or even words?
This is how Kaka was intended to work.
These are the use-cases streaming is or.
Messaging can guarantee delivery. It has dead-letter-queues that you can monitor and use for redelivery.
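To make that delivery guarantee concrete, here is a toy in-memory sketch of messaging semantics with a dead-letter queue. This is illustrative only, not a real broker API; brokers like RabbitMQ provide dead-lettering and redelivery natively, and all names here (`MessageQueue`, `redeliver_dead`) are made up for the sketch.

```python
from collections import deque

class MessageQueue:
    """Toy sketch of messaging semantics with a dead-letter queue.
    Illustrative only -- real brokers (e.g. RabbitMQ) provide this natively."""

    def __init__(self, max_retries=3):
        self.main = deque()
        self.dead = deque()              # the dead-letter queue
        self.max_retries = max_retries

    def publish(self, body):
        self.main.append({"body": body, "attempts": 0})

    def consume(self, handler):
        # Deliver every message; failures are retried, then parked -- never lost.
        while self.main:
            msg = self.main.popleft()
            try:
                handler(msg["body"])
            except Exception:
                msg["attempts"] += 1
                if msg["attempts"] >= self.max_retries:
                    self.dead.append(msg)    # park for monitoring and inspection
                else:
                    self.main.append(msg)    # automatic redelivery

    def redeliver_dead(self):
        # After fixing the root cause, operators requeue dead-lettered messages.
        while self.dead:
            msg = self.dead.popleft()
            msg["attempts"] = 0
            self.main.append(msg)
```

The point of the sketch is the lifecycle: a failing message is retried, then parked where you can see it, and can be pushed back into the flow later. Nothing silently disappears.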
Streaming infrastructure is built in a way that assumes you are willing to skip events.
Making streaming behave appropriately in other scenarios is painful and expensive, hard to maintain and harder to understand.
You already read the sentence. How do I bring the ‘f’ to the stream once you already processed it?
Once the river has flowed past, the fish is gone if you did not catch it.
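To make the missing-'f' problem concrete, here is a toy sketch of the append-only log that underlies a Kafka partition. The names (`PartitionLog`, `read_from`) are invented for illustration and are not a real client API; the shape of the semantics is the point.

```python
class PartitionLog:
    """Toy append-only log, the shape of a Kafka partition (illustrative only).
    Consumers hold a single offset; the broker never redelivers individual records."""

    def __init__(self):
        self.entries = []

    def append(self, event):
        self.entries.append(event)
        return len(self.entries) - 1     # the record's offset

    def read_from(self, offset):
        # Reading is just "give me everything from this position onwards".
        return self.entries[offset:]

log = PartitionLog()
for event in ["Ka", "f", "ka"]:
    log.append(event)

# A consumer that committed offset 3 has already moved past the 'f'.
# There is no per-record redelivery: to reprocess it, you must seek back
# and replay the whole tail -- side effects and all.
replay = log.read_from(1)
```

Contrast this with the dead-letter queue above: a queue hands back the one message you missed, while a log only offers you the river from some point onwards.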
Asynchronous architecture should _generally_ not rely on message ordering, by the way.
Chances are the business process that relies on ordering is poorly designed, should be sliced differently, or should not be modelled asynchronously in the first place.
If you decided on Kafka or Kinesis in a scenario in which you should have used a message queue or a synchronous call instead, you have created a situation in which you suffer from a loss of control over your system.
You almost certainly lack visibility into the integrity of your processes.
This is clearly a situation in which you burn money and engineering capacity.
If you decide to implement Event Sourcing with Kafka, you increase the likelihood of overloading yourself and your fellow engineers with unnecessary complexity, to the point of total unreliability and unproductivity.
How do you feel about Kafka? Is it part of your religion, like other trends? Or have you also often had the feeling that something is wrong, that it should be easier?
I am looking forward to your opinions.
Stay tuned for part 5 of this series, in which I put on my boxing gloves to beat up low-abstraction technologies like Go and TypeScript in order to defend enterprise technologies such as Java.