In this tutorial I will show you how to use Kerberos/SSL with Kafka. I will use self-signed certificates for this example. Before you begin, ensure you have installed a Kerberos server and Kafka.
If you don’t want to use the built-in ZooKeeper you can set up your own; to do that, follow this tutorial.
This tutorial assumes your hostname is “hadoop”.
Create Kerberos Principals
- cd /etc/security/keytabs/
- sudo kadmin.local
- #You can list principals
- listprincs
- #Create the following principals
- addprinc -randkey kafka/hadoop@REALM.CA
- addprinc -randkey zookeeper/hadoop@REALM.CA
- #Create the keytab files.
- #You will need these for the Kafka and ZooKeeper services to be able to log in
- xst -k kafka.service.keytab kafka/hadoop@REALM.CA
- xst -k zookeeper.service.keytab zookeeper/hadoop@REALM.CA
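Before leaving kadmin, it is worth sanity-checking the keytabs. The steps below are an optional verification sketch, assuming the same paths and principals used above.
- quit
- #List the entries in each keytab
- klist -kt /etc/security/keytabs/kafka.service.keytab
- klist -kt /etc/security/keytabs/zookeeper.service.keytab
- #Confirm a keytab can actually authenticate, then destroy the ticket
- sudo kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/hadoop@REALM.CA
- klist
- kdestroy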
Set Keytab Permissions/Ownership
- sudo chown root:hadoopuser /etc/security/keytabs/*
- sudo chmod 750 /etc/security/keytabs/*
Hosts Update
- sudo nano /etc/hosts
- #Remove 127.0.1.1 line
- #Change 127.0.0.1 to the following
- 127.0.0.1 realm.ca hadoop localhost
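A quick check that the name now resolves the way Kafka and Kerberos expect (just a verification, not a required step):
- getent hosts hadoop
- hostname -f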
Ubuntu Firewall
- sudo ufw disable
SSL
Set up the SSL directories if you have not previously done so.
- sudo mkdir -p /etc/security/serverKeys
- sudo chown -R root:hadoopuser /etc/security/serverKeys/
- sudo chmod 755 /etc/security/serverKeys/
- cd /etc/security/serverKeys
Setup Keystore
- sudo keytool -genkey -alias NAMENODE -keyalg RSA -keysize 1024 -dname "CN=NAMENODE,OU=ORGANIZATION_UNIT,C=canada" -keypass PASSWORD -keystore /etc/security/serverKeys/keystore.jks -storepass PASSWORD
- sudo keytool -export -alias NAMENODE -keystore /etc/security/serverKeys/keystore.jks -rfc -file /etc/security/serverKeys/NAMENODE.csr -storepass PASSWORD
Setup Truststore
- sudo keytool -import -noprompt -alias NAMENODE -file /etc/security/serverKeys/NAMENODE.csr -keystore /etc/security/serverKeys/truststore.jks -storepass PASSWORD
Generate Self-Signed Certificate
- sudo openssl genrsa -out /etc/security/serverKeys/NAMENODE.key 2048
- sudo openssl req -x509 -new -key /etc/security/serverKeys/NAMENODE.key -days 300 -out /etc/security/serverKeys/NAMENODE.pem
- sudo keytool -keystore /etc/security/serverKeys/keystore.jks -alias NAMENODE -certreq -file /etc/security/serverKeys/NAMENODE.cert -storepass PASSWORD -keypass PASSWORD
- sudo openssl x509 -req -CA /etc/security/serverKeys/NAMENODE.pem -CAkey /etc/security/serverKeys/NAMENODE.key -in /etc/security/serverKeys/NAMENODE.cert -out /etc/security/serverKeys/NAMENODE.signed -days 300 -CAcreateserial
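At this point the keystore still holds the original self-signed key pair. If you want the broker to present the CA-signed certificate, the usual completion of this flow is to import the CA certificate and the signed certificate back into the keystore, and the CA certificate into the truststore. A sketch, assuming the files generated above; the CARoot alias is just an illustrative name:
- sudo keytool -keystore /etc/security/serverKeys/keystore.jks -alias CARoot -import -noprompt -file /etc/security/serverKeys/NAMENODE.pem -storepass PASSWORD
- sudo keytool -keystore /etc/security/serverKeys/keystore.jks -alias NAMENODE -import -file /etc/security/serverKeys/NAMENODE.signed -storepass PASSWORD -keypass PASSWORD
- sudo keytool -keystore /etc/security/serverKeys/truststore.jks -alias CARoot -import -noprompt -file /etc/security/serverKeys/NAMENODE.pem -storepass PASSWORD
- #Verify what the keystore now contains
- sudo keytool -list -v -keystore /etc/security/serverKeys/keystore.jks -storepass PASSWORD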
Setup File Permissions
- sudo chmod 440 /etc/security/serverKeys/*
- sudo chown root:hadoopuser /etc/security/serverKeys/*
Edit server.properties Config
- cd /usr/local/kafka/config
- sudo nano server.properties
- #Edit or Add the following properties.
- ssl.endpoint.identification.algorithm=HTTPS
- ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
- ssl.key.password=PASSWORD
- ssl.keystore.location=/etc/security/serverKeys/keystore.jks
- ssl.keystore.password=PASSWORD
- ssl.truststore.location=/etc/security/serverKeys/truststore.jks
- ssl.truststore.password=PASSWORD
- listeners=SASL_SSL://:9094
- security.inter.broker.protocol=SASL_SSL
- ssl.client.auth=required
- authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
- ssl.keystore.type=JKS
- ssl.truststore.type=JKS
- sasl.kerberos.service.name=kafka
- zookeeper.connect=hadoop:2181
- sasl.mechanism.inter.broker.protocol=GSSAPI
- sasl.enabled.mechanisms=GSSAPI
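Once the broker is running (startup is covered after the JAAS files below), you can verify the TLS side of the SASL_SSL listener with openssl; a quick check, assuming hostname hadoop and port 9094 as configured above:
- openssl s_client -connect hadoop:9094 -tls1_2 &lt;/dev/null
- #The certificate chain printed should match the keystore contents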
Edit zookeeper.properties Config
- sudo nano zookeeper.properties
- #Edit or Add the following properties.
- server.1=hadoop:2888:3888
- clientPort=2181
- authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
- requireClientAuthScheme=SASL
- jaasLoginRenew=3600000
Edit producer.properties Config
- sudo nano producer.properties
- bootstrap.servers=hadoop:9094
- security.protocol=SASL_SSL
- sasl.kerberos.service.name=kafka
- ssl.truststore.location=/etc/security/serverKeys/truststore.jks
- ssl.truststore.password=PASSWORD
- ssl.keystore.location=/etc/security/serverKeys/keystore.jks
- ssl.keystore.password=PASSWORD
- ssl.key.password=PASSWORD
- sasl.mechanism=GSSAPI
Edit consumer.properties Config
- sudo nano consumer.properties
- zookeeper.connect=hadoop:2181
- bootstrap.servers=hadoop:9094
- group.id=securing-kafka-group
- security.protocol=SASL_SSL
- sasl.kerberos.service.name=kafka
- ssl.truststore.location=/etc/security/serverKeys/truststore.jks
- ssl.truststore.password=PASSWORD
- sasl.mechanism=GSSAPI
Add zookeeper_jaas.conf Config
- sudo nano zookeeper_jaas.conf
- Server {
- com.sun.security.auth.module.Krb5LoginModule required
- debug=true
- useKeyTab=true
- keyTab="/etc/security/keytabs/zookeeper.service.keytab"
- storeKey=true
- useTicketCache=true
- refreshKrb5Config=true
- principal="zookeeper/hadoop@REALM.CA";
- };
Add kafkaserver_jaas.conf Config
- sudo nano kafkaserver_jaas.conf
- KafkaServer {
- com.sun.security.auth.module.Krb5LoginModule required
- debug=true
- useKeyTab=true
- storeKey=true
- refreshKrb5Config=true
- keyTab="/etc/security/keytabs/kafka.service.keytab"
- principal="kafka/hadoop@REALM.CA";
- };
- KafkaClient {
- com.sun.security.auth.module.Krb5LoginModule required
- useTicketCache=true
- refreshKrb5Config=true
- debug=true
- useKeyTab=true
- storeKey=true
- keyTab="/etc/security/keytabs/kafka.service.keytab"
- principal="kafka/hadoop@REALM.CA";
- };
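Note that when the broker itself authenticates to the Kerberized ZooKeeper, it looks up a JAAS context named Client by default (KafkaClient is what producers/consumers use). If broker-to-ZooKeeper SASL fails, you can add a Client section to the same file; a sketch reusing the kafka keytab:
- Client {
- com.sun.security.auth.module.Krb5LoginModule required
- useKeyTab=true
- storeKey=true
- keyTab="/etc/security/keytabs/kafka.service.keytab"
- principal="kafka/hadoop@REALM.CA";
- };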
Edit kafka-server-start.sh
- cd /usr/local/kafka/bin/
- sudo nano kafka-server-start.sh
- #Add the below before the last line
- jaas="$base_dir/../config/kafkaserver_jaas.conf"
- export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"
Edit zookeeper-server-start.sh
- sudo nano zookeeper-server-start.sh
- #Add the below before the last line
- jaas="$base_dir/../config/zookeeper_jaas.conf"
- export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"
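With both start scripts exporting KAFKA_OPTS, you can bring the services up, ZooKeeper first and then Kafka. A sketch, assuming the install lives in /usr/local/kafka:
- cd /usr/local/kafka
- sudo bin/zookeeper-server-start.sh config/zookeeper.properties
- #In a second terminal, once ZooKeeper is up
- sudo bin/kafka-server-start.sh config/server.properties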
Kafka-ACL
- cd /usr/local/kafka/bin/
- #Grant topic access and cluster access
- ./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --cluster
- ./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --topic TOPIC
- #Grant all groups for a specific topic
- ./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --topic TOPIC --group '*'
- #If you want to remove cluster access
- ./kafka-acls.sh --authorizer-properties zookeeper.connect=hadoop:2181 --remove --cluster
- #If you want to remove topic access
- ./kafka-acls.sh --authorizer-properties zookeeper.connect=hadoop:2181 --remove --topic TOPIC
- #List access for cluster
- ./kafka-acls.sh --list --authorizer-properties zookeeper.connect=hadoop:2181 --cluster
- #List access for topic
- ./kafka-acls.sh --list --authorizer-properties zookeeper.connect=hadoop:2181 --topic TOPIC
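The ACL commands above assume TOPIC already exists. If it does not, you can create and list topics with the ZooKeeper-based tooling that matches this Kafka version; a sketch:
- ./kafka-topics.sh --zookeeper hadoop:2181 --create --topic TOPIC --partitions 1 --replication-factor 1
- ./kafka-topics.sh --zookeeper hadoop:2181 --list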
kafka-console-producer.sh
If you want to test using the console producer you need to make these changes.
- cd /usr/local/kafka/bin/
- nano kafka-console-producer.sh
- #Add the below before the last line
- base_dir=$(dirname $0)
- jaas="$base_dir/../config/kafkaserver_jaas.conf"
- export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"
- #Now you can run the console producer
- ./kafka-console-producer.sh --broker-list hadoop:9094 --topic TOPIC --producer.config ../config/producer.properties
kafka-console-consumer.sh
If you want to test using the console consumer you need to make these changes.
- cd /usr/local/kafka/bin/
- nano kafka-console-consumer.sh
- #Add the below before the last line
- base_dir=$(dirname $0)
- jaas="$base_dir/../config/kafkaserver_jaas.conf"
- export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"
- #Now you can run the console consumer
- ./kafka-console-consumer.sh --bootstrap-server hadoop:9094 --topic TOPIC --consumer.config ../config/consumer.properties --from-beginning