Kafka

Kafka is an open-source event streaming platform designed to handle
massive streams of real-time data. It offers a distributed architecture,
high scalability, and fault tolerance, enabling efficient and reliable
processing and transmission of large volumes of data. In Kafka, data is
organized into topics, which are divided into partitions distributed
among Kafka brokers. Producers publish records to topics, and consumers
can subscribe to receive these records in real-time. This makes Kafka
ideal for use cases such as real-time event processing, batch data
ingestion, asynchronous messaging, system integration, and data
pipelines. With its high throughput and low latency, Kafka has been
widely adopted by companies to build scalable and distributed data
architectures, enabling real-time analysis, continuous stream
processing, and high-performance applications.

How to Monitor Kafka on the One Platform

To set up monitoring for Kafka on the platform, follow these steps:

  1. Go to the product application where you want to add Kafka as a dependency on the platform.

  2. Click on the “Products” menu and select the desired product card.

  3. Then, click on the name of the specific application where you want to configure Kafka monitoring.

  4. Look for the section called “External Dependencies,” usually located just below the latency graph of the application.

  5. To add an already registered dependency, type the name of the
    dependency in the search field and select it when it appears in the
    list.

  6. If Kafka is not yet registered as a dependency, click on the green button with a plus (+) symbol to add a new dependency.



When you click on “Add,” a modal will appear. In this modal, you will
name your queue and choose the Environment. In the “Check type” field,
select the option “Queue,” and in the “Method” field, choose “Kafka.”
After selecting the method, a field for “Healthcheck URL” will appear.



To complete this step, the person responsible needs to understand how
the Kafka cluster operates and which type of check they want to perform.
The check string formats are listed below.

There are four ways to check the Kafka cluster:

  1. Connection without authentication and a simple check of a specific topic for the platform: HOST:PORT/TOPIC

  2. Connection without authentication and a simple check of a specific topic for the platform, but with a list of brokers: [HOST1:PORT,HOST2:PORT]/TOPIC or HOST1:PORT,HOST2:PORT/TOPIC

  3. Connection without authentication and verification of the consumer
    lag (message consumption delay) for a topic, from the perspective of a
    consumer group: [HOST:PORT]/TOPIC/CONSUMER-GROUP/LAG-TOLERANCE

  4. Connection with SASL authentication, with either a simple topic check or a consumer-lag check: USER:PASSWORD:MECHANISM:TLS:SASL@[HOST1:PORT,HOST2:PORT]/TOPIC or USER:PASSWORD:MECHANISM:TLS:SASL@[HOST1:PORT,HOST2:PORT]/TOPIC/CONSUMER-GROUP/LAG-TOLERANCE
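As a sketch, the four formats above can be assembled programmatically, which helps avoid typos when building the string by hand. The helper below is illustrative only; the function and parameter names are my own, not part of the platform:

```python
def build_kafka_check(brokers, topic, consumer_group=None, lag_tolerance=None,
                      user=None, password=None, mechanism=None,
                      tls=False, sasl=False):
    """Assemble a Kafka healthcheck string in the formats described above.

    brokers         -- list of "host:port" strings (one or more)
    consumer_group  -- optional; together with lag_tolerance, produces the
                       consumer-lag variant (formats 3 and 4)
    user/password/mechanism -- optional; produce the SASL-authenticated
                       prefix of format 4
    """
    # A single broker can be written bare; a list goes inside brackets.
    hosts = brokers[0] if len(brokers) == 1 else "[" + ",".join(brokers) + "]"

    prefix = ""
    if user is not None:
        # USER:PASSWORD:MECHANISM:TLS:SASL@ prefix for authenticated checks.
        prefix = (f"{user}:{password}:{mechanism}:"
                  f"{str(tls).lower()}:{str(sasl).lower()}@")

    path = f"/{topic}"
    if consumer_group is not None:
        path += f"/{consumer_group}/{lag_tolerance}"

    return f"{prefix}{hosts}{path}"
```

For instance, build_kafka_check(["broker:9092"], "orders") yields "broker:9092/orders" (format 1), and passing consumer_group and lag_tolerance yields the format-3 variant.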

Example of a verification string for format 4:

kafka:{{.kafka_password}}:SCRAM-SHA-512:true:true@[b-2.kafka-production.amazonaws.com:9096,b-1.kafka-production.amazonaws.com:9096,b-3.kafka-production.amazonaws.com:9096]/eventos/consumidor-de-eventos/200
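To sanity-check a string like the one above before pasting it into the "Healthcheck URL" field, a small parser along these lines can decompose it into its parts. This is my own illustration, not a platform tool, and it assumes the password itself contains no ":" character:

```python
def parse_kafka_check(check):
    """Decompose a Kafka healthcheck string into its components."""
    result = {}
    if "@" in check:
        # SASL prefix: USER:PASSWORD:MECHANISM:TLS:SASL
        creds, check = check.split("@", 1)
        user, password, mechanism, tls, sasl = creds.split(":")
        result.update(user=user, password=password, mechanism=mechanism,
                      tls=tls == "true", sasl=sasl == "true")
    if check.startswith("["):
        # Bracketed broker list: [host1:port,host2:port]/...
        hosts, _, path = check[1:].partition("]/")
    else:
        hosts, _, path = check.partition("/")
    result["brokers"] = hosts.split(",")
    parts = path.split("/")
    result["topic"] = parts[0]
    if len(parts) == 3:
        # Lag-check variant: TOPIC/CONSUMER-GROUP/LAG-TOLERANCE
        result["consumer_group"] = parts[1]
        result["lag_tolerance"] = int(parts[2])
    return result
```

Running it on the example string above yields three brokers, topic "eventos", consumer group "consumidor-de-eventos", and a lag tolerance of 200.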


Note: For security reasons, entering an IP address directly in the
healthcheck field is not permitted. To monitor an IP, store it in a
secret and reference that secret in the healthcheck URL.
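The example string above already uses templating syntax for the password ({{.kafka_password}}), so a secret holding a broker address can presumably be referenced the same way. A hypothetical sketch (the secret name kafka_broker is my own):

```
{{.kafka_broker}}:9096/eventos
```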
