Apache Kafka
Step-by-step guide to send event data from RudderStack to Apache Kafka
Apache Kafka is a popular distributed streaming platform. It allows you to handle large-scale workloads with high throughput and low latency. Apache Kafka is highly available and is used across the world for building real-time data pipelines and streaming applications.
RudderStack allows you to configure Apache Kafka as a destination to which you can send your event data seamlessly.
Find the open-source transformer code for this destination in our GitHub repository.
To enable sending data to Kafka, you first need to add it as a destination to the source from which you are sending event data. Once the destination is enabled, events from RudderStack will start flowing to Kafka.
Before configuring your source and destination in the RudderStack dashboard, check whether the platform you are working on is supported by the Apache Kafka destination. Refer to the table below:
| Connection Mode | Web | Mobile | Server |
| :--- | :--- | :--- | :--- |
| **Device mode** | - | - | - |
| **Cloud mode** | Supported | Supported | Supported |
To know more about the difference between Cloud mode and Device mode in RudderStack, read the RudderStack Connection Modes guide.
Once you have confirmed that the platform supports sending events to Kafka, perform the steps below:
Choose a source to which you would like to add Kafka as a destination.
Select Kafka as a destination for your source. Give your destination a name and then click on Next.
Next, in the Connection Settings, fill in all the fields with the relevant information and click on Next.
Kafka Connection Settings
Host Name: Your Kafka server broker's host name goes here.
Port: The port to connect to the broker goes here.
Topic Name: Provide the name of the topic to which you want to send the data.
SSL Enabled: Enable this option if you have enabled SSL to connect to your broker.
CA Certificate: If you have enabled SSL, provide the CA certificate in this field.
Enable SASL with SSL: If you have enabled SSL, you can optionally use SASL for client authentication.
Username: Provide the username as configured in Kafka for authenticating clients with SASL.
Password: Provide the password as configured in Kafka for authenticating clients with SASL.
You need to enable SSL to use SASL authentication.
RudderStack currently supports the following SASL types:
PLAIN
SCRAM SHA-256
SCRAM SHA-512
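For reference, the Username and Password fields above correspond to a SASL credential configured on the Kafka side. A client-side equivalent of these settings (values are placeholders, shown only to illustrate the mapping) looks like:

```
# Illustrative Kafka client configuration matching the settings above
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="myuser" \
  password="mypassword";
```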
We use `userId` as the partition key of the message. If `userId` is not present in the payload, then `anonymousId` is used instead. So, if you have a multi-partitioned topic, the records with the same `userId` (or `anonymousId` in the absence of `userId`) will always go to the same partition.
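You can observe this behavior with the console consumer that ships with Kafka. This is a sketch, not from the original page: the broker address and topic name are placeholders, and `print.partition` requires a reasonably recent Kafka release.

```shell
# Consume from the topic, printing each record's key and partition
# (replace the bootstrap server and topic name with your own values)
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic rudder-events \
  --property print.key=true \
  --property print.partition=true \
  --from-beginning
```

Records sharing a key will consistently show the same partition number.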
If you have enabled 2-way SSL, that is, your server requires client authentication, then you need to get our CA certificate and put it in the truststore of your server.
Follow the steps below, which make use of Java's keytool utility.
1. Generate key and certificates:

```
keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -genkey
```

2. Create your own CA

Generate a CA, which is simply a public-private key pair and a certificate intended to sign other certificates. You need to put this certificate in the RudderStack web app as the CA certificate.

Add the generated CA to the broker's truststore so that the brokers can trust this CA:

```
keytool -keystore kafka.server.truststore.jks -alias CARoot -importcert -file ca-cert
```

3. Sign the certificates

Export the certificate from the keystore, like so:

```
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
```

Sign it with the CA:

```
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
```

Import both the certificate of the CA and the signed certificate into the broker keystore:

```
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
```
The steps described above create the CA, along with the broker and client truststores and keystores.
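As a sketch, the commands above can be combined into a single non-interactive script. The password, validity, and distinguished names below are placeholders you should change for your own setup:

```shell
#!/usr/bin/env bash
set -euo pipefail

VALIDITY=365
PASSWORD="changeit"   # placeholder keystore/CA password

# 1. Generate the broker key pair
keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA \
  -genkey -validity "$VALIDITY" -storepass "$PASSWORD" -keypass "$PASSWORD" \
  -dname "CN=kafka.example.com"   # placeholder broker hostname

# 2. Create your own CA (put ca-cert in the RudderStack web app)
openssl req -new -x509 -keyout ca-key -out ca-cert -days "$VALIDITY" \
  -subj "/CN=example-ca" -passout "pass:$PASSWORD"

# Add the CA to the broker truststore
keytool -keystore kafka.server.truststore.jks -alias CARoot -importcert \
  -file ca-cert -storepass "$PASSWORD" -noprompt

# 3. Export, sign, and re-import the broker certificate
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq \
  -file cert-file -storepass "$PASSWORD"
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed \
  -days "$VALIDITY" -CAcreateserial -passin "pass:$PASSWORD"
keytool -keystore kafka.server.keystore.jks -alias CARoot -import \
  -file ca-cert -storepass "$PASSWORD" -noprompt
keytool -keystore kafka.server.keystore.jks -alias localhost -import \
  -file cert-signed -storepass "$PASSWORD"
```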
Put the below parameters in your server.properties
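A typical set of SSL-related entries (the paths, port, and passwords are placeholders, not taken from the original page) looks like:

```
listeners=PLAINTEXT://:9092,SSL://:9093
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
ssl.client.auth=required
```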
You also need to add RudderStack's CA certificate to your truststore. Here is the CA certificate that you need to add:
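Assuming you have saved RudderStack's CA certificate to a local file (the filename, alias, and password below are placeholders), importing it follows the same keytool pattern as before:

```shell
# Import RudderStack's CA certificate into the broker truststore
keytool -keystore kafka.server.truststore.jks -alias RudderStackCA \
  -importcert -file rudderstack-ca-cert.pem -storepass changeit -noprompt
```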
How can you connect to RudderStack if your Kafka server is running in a Kubernetes cluster?
You will need to expose one public address to which RudderStack can connect, and we recommend using SSL for it. Note that you should allow only authenticated clients on this exposed address. If you use `PLAINTEXT` for the internal services within your cluster, you can keep doing so, and expose this additional address with SSL. For that, you need to update `advertised.listeners` in your `server.properties`.
A sample entry is as shown below:
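This is a sketch, not taken from the original page; the hostnames and ports are placeholders for your internal cluster address and your public SSL endpoint:

```
listeners=PLAINTEXT://:9092,SSL://:9093
advertised.listeners=PLAINTEXT://kafka.my-namespace.svc.cluster.local:9092,SSL://kafka.example.com:9093
```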
What if your Kafka server uses `SASL_PLAINTEXT` authentication? You should configure your server with `SASL_SSL` instead.
SASL Connection Settings
For more information on Apache Kafka SASL authentication, refer to the Apache Kafka documentation.
RudderStack does not support `SASL_PLAINTEXT` authentication. You can use `SASL_SSL` instead. The Apache Kafka documentation recommends using SASL with SSL in production.
If you come across any issues while configuring or using Kafka with RudderStack, feel free to contact us. You can also start a conversation in our community; we will be happy to talk to you!