Producer

  1. An application that sends data to Kafka is called a Producer.
  2. Kafka responds with an acknowledgment (ACK) to the producer if everything is good.
  3. Kafka responds with a negative acknowledgment (NACK) to the producer if it is not able to read or process the data.
  4. The producer will retry sending the data to Kafka in the following scenarios:
    • NACK (Not Acknowledged)
    • Request timeout
  5. Many producers can send data to Kafka concurrently.
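
A minimal sketch of such a producer in Java, assuming a broker at localhost:9092 and an illustrative topic named temperature_reading; acks=all asks the broker to acknowledge each write, and the callback reports whether an ACK arrived:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TemperatureProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            props.put("acks", "all");   // wait until the broker acknowledges the write
            props.put("retries", 3);    // resend on NACK or request timeout

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("temperature_reading", "sensor-1", "21.5"),
                    (metadata, exception) -> {
                        if (exception == null) {
                            System.out.printf("ACK: partition=%d, offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        } else {
                            System.err.println("No ACK: " + exception.getMessage());
                        }
                    });
            } // close() flushes any pending sends
        }
    }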

Broker

  1. Kafka is formally called a cluster of brokers.
  2. Brokers receive data from producers.
  3. Brokers save the data:
    • Temporarily in the page cache.
    • Permanently on disk, after the OS flushes the page cache.

Retention time

  1. How long data is stored in Kafka is determined by the retention time.
  2. The default retention time is 1 week.
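
As a sketch, retention can also be set per topic through Kafka's AdminClient API; the topic name and the one-week value below are illustrative:

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class SetRetention {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "temperature_reading");
                // retention.ms overrides the broker-wide default for this topic
                Collection<AlterConfigOp> ops = Collections.singletonList(
                    new AlterConfigOp(new ConfigEntry("retention.ms", "604800000"), // 7 days
                                      AlterConfigOp.OpType.SET));
                admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
            }
        }
    }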

Consumer

  1. An application that reads data from Kafka is called a Consumer.
  2. A consumer periodically polls data from Kafka.
  3. Many consumers can poll data from Kafka at the same time.
  4. Different consumers can poll the same data, each at their own pace.
  5. Parallelism is attained by organizing consumers into consumer groups that split up the work.
  6. A consumer pulls messages from one or more topics in the cluster.
  7. A consumer offset keeps track of the last message read; it is stored in a special Kafka topic (__consumer_offsets).
  8. If necessary, the offset can be changed to reread messages.
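
A minimal sketch of a polling consumer in Java; the group.id and topic name are illustrative, and the commented seekToBeginning line shows how the offset could be rewound to reread messages:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class TemperatureConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "temperature-readers"); // consumers sharing this id form a group
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("temperature_reading"));
                // To reread from the start: consumer.seekToBeginning(consumer.assignment());
                while (true) {
                    // Periodically poll; committed offsets track the last message read
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                    }
                }
            }
        }
    }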

Architecture of Kafka

  1. Producers and consumers are decoupled.
  2. Consumers can be added without affecting producers.
  3. Failure of a consumer does not affect the system.
  4. A slow consumer does not affect producers.
  5. Producers and Consumers simply need to agree on the data format of the records produced and consumed.

Topics, Partitions, Segments

  1. Topic: A topic comprises all messages of a given category. Example: temperature_reading
  2. Partitions: A topic can be split into many partitions to achieve parallelism and increase throughput.
    • The default algorithm for deciding which partition a message goes to uses a hash of the message key (see the sketch after this list).
    • A partition can be viewed as a log.
  3. Segment: A segment can be pictured as a file on disk.
    • Brokers use a rolling-file strategy for creating new segments.
    • The rolling-file strategy is controlled by two properties:
      • Maximum size of a segment; the default is 1 GB.
      • Maximum age of a segment; the default is 1 week.
  4. Kafka uses segments to manage data retention and remove old data.
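
A minimal sketch of key-based partition selection; Kafka's built-in partitioner actually uses a murmur2 hash rather than Java's hashCode(), but the principle is the same: equal keys always map to the same partition.

    public class PartitionSketch {
        // Illustrative only: mask the sign bit so the result is a valid index
        static int choosePartition(String key, int numPartitions) {
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }

        public static void main(String[] args) {
            // The same key always lands in the same partition
            System.out.println(choosePartition("sensor-1", 3));
            System.out.println(choosePartition("sensor-1", 3));
        }
    }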

Note
1. A Topic (category of message) consists of partitions.
2. Partitions of each topic are distributed among the brokers.
3. Each partition on a given broker is stored as one or more physical files called segments.

Log

  1. A log is a data structure that is like a queue of elements.
  2. Elements are written at the end of the log and, once written, are never changed. It is a write-once (append-only) data structure.
  3. Elements that are written to the log are strictly ordered in time.
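
A minimal sketch of such a log in Java, illustrating the append-only, strictly ordered behavior (the class and method names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    public class AppendOnlyLog<T> {
        private final List<T> entries = new ArrayList<>();

        // Append returns the element's offset: its fixed position in the log
        public long append(T element) {
            entries.add(element);
            return entries.size() - 1;
        }

        // Reading never removes anything: any element can be reread by offset
        public T read(long offset) {
            return entries.get((int) offset);
        }
    }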

Stream

  1. A stream is a sequence of events.
  2. A stream is open ended.
  3. A stream is immutable.
  4. A stream has a beginning: it started somewhere in the past, and the first event has offset 0.
  5. As the stream is open ended, we cannot know how much data is coming in the future, so we cannot wait until all the data has arrived to start processing.
  6. In stream processing one never modifies an existing stream but always generates a new output stream.
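
A minimal sketch of that last point using the Kafka Streams library (topic names are illustrative): the input stream is left untouched, and the transformed records form a new output stream.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class CelsiusToFahrenheit {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "temperature-converter");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> celsius = builder.stream("temperature_reading");
            // The input stream is never modified; results go to a new output stream
            celsius.mapValues(c -> String.valueOf(Double.parseDouble(c) * 9 / 5 + 32))
                   .to("temperature_reading_fahrenheit");

            new KafkaStreams(builder.build(), props).start();
        }
    }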

Data Element

  1. A data element in a log is called a record. It is also called a message or an event.
  2. A record consists of Metadata and Body
    • Metadata:
      • Offset
      • Compression
      • Magic byte
      • Timestamp
      • Optional Headers
    • Body:
      • Key: by default, determines which partition the message will be written to. Ordering is guaranteed within a partition, not across a topic.
      • Value: contains the business-relevant data.
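
A minimal sketch of building a record with these fields in Java; the topic name and header key are illustrative:

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class RecordSketch {
        public static void main(String[] args) {
            ProducerRecord<String, String> record = new ProducerRecord<>(
                "temperature_reading",       // topic
                null,                        // partition: null lets the key decide
                System.currentTimeMillis(),  // timestamp (metadata)
                "sensor-1",                  // key: determines the target partition
                "21.5");                     // value: the business-relevant data
            // Optional header (metadata)
            record.headers().add("source", "building-A".getBytes(StandardCharsets.UTF_8));
            System.out.println(record);
        }
    }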

Broker

  1. Producers send messages to brokers.
  2. Brokers receive and store the messages.
  3. A Kafka cluster will have many brokers.
  4. Each broker manages multiple partitions.
  5. A typical cluster has many brokers for high availability and scalability.
  6. Each broker handles many partitions, either from the same topic (if there are more partitions than brokers) or from different topics.

Broker Replication

  1. Kafka replicates each partition across brokers for fault tolerance.
  2. Each partition has a leader server and zero or more follower servers.
  3. The leader server handles all read and write requests for the partition.
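
As a sketch, a replicated topic can be created with the AdminClient API; the topic name, partition count, and replication factor below are illustrative:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateReplicatedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // 3 partitions, replication factor 3: each partition gets
                // one leader and two followers on different brokers
                NewTopic topic = new NewTopic("temperature_reading", 3, (short) 3);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }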

Distributed Consumption

  1. To increase throughput in downstream consumption of data flowing into a topic, Kafka introduced consumer groups.
  2. A consumer group consists of one or more consumer instances.
  3. All consumers in a consumer group are identical clones of each other.
  4. To add a consumer to the same group, an instance needs to use the same group.id.
  5. A consumer group can scale out and parallelize work until:
    No. of consumer instances == No. of topic partitions
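
As a sketch, starting several instances of the consumer sketched in the Consumer section above (all sharing the illustrative group.id "temperature-readers") makes Kafka split the topic's partitions among them:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ConsumerGroupDemo {
        public static void main(String[] args) {
            // Effective parallelism caps at the topic's partition count;
            // instances beyond that receive no partitions and sit idle
            int instances = 3;
            ExecutorService pool = Executors.newFixedThreadPool(instances);
            for (int i = 0; i < instances; i++) {
                // Each task runs the polling loop from the Consumer section;
                // same group.id, so Kafka assigns each a disjoint partition set
                pool.submit(() -> TemperatureConsumer.main(new String[0]));
            }
        }
    }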
    

ISR In-Sync Replicas
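
  1. In-sync replicas (ISR) are the replicas of a partition (the leader plus its followers) that are fully caught up with the leader.
  2. A follower that falls too far behind the leader is removed from the ISR until it catches up again.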


Published on 24 September 2022