HBase: Kerberize/SSL Installation

In this tutorial I will show you how to use Kerberos/SSL with HBase. I will use self-signed certs for this example. Before you begin, ensure you have installed the Kerberos server, Hadoop, and Zookeeper.

This tutorial assumes your hostname is “hadoop”.

We will install a Master, a RegionServer, and a REST server.

Create Kerberos Principals

cd /etc/security/keytabs/

sudo kadmin.local

#You can list principals
listprincs

#Create the following principals
addprinc -randkey hbase/hadoop@REALM.CA
addprinc -randkey hbaseHTTP/hadoop@REALM.CA

#Create the keytab files.
#You will need these for HBase to be able to log in
xst -k hbase.service.keytab hbase/hadoop@REALM.CA
xst -k hbaseHTTP.service.keytab hbaseHTTP/hadoop@REALM.CA

Set Keytab Permissions/Ownership

sudo chown root:hadoopuser /etc/security/keytabs/*
sudo chmod 750 /etc/security/keytabs/*
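
Before moving on, it is worth confirming the keytabs actually work. This is an optional sanity check: klist -kt lists the entries in a keytab, and kinit -kt performs a real login with it (the principal below is the one created above).

klist -kt /etc/security/keytabs/hbase.service.keytab
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hadoop@REALM.CA
klist
kdestroy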

Install HBase

wget http://apache.forsale.plus/hbase/2.1.0/hbase-2.1.0-bin.tar.gz
tar -zxvf hbase-2.1.0-bin.tar.gz
sudo mv hbase-2.1.0 /usr/local/hbase/
cd /usr/local/hbase/conf/

Setup .bashrc:

sudo nano ~/.bashrc

Add the following to the end of the file.

#HBASE VARIABLES START
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
export HBASE_CONF_DIR=$HBASE_HOME/conf
#HBASE VARIABLES END

source ~/.bashrc
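
You can quickly confirm the new variables took effect; hbase should now resolve from the updated PATH and print the version you extracted above:

echo $HBASE_HOME
hbase version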

hbase_client_jaas.conf

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true;
};

hbase_server_jaas.conf

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/hbase.service.keytab"
  principal="hbase/hadoop@REALM.CA";
};

regionservers

hadoop

hbase-env.sh

Add or modify the following settings.

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/usr/local/hbase/conf}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/usr/local/hadoop/etc/hadoop}
export HBASE_CLASSPATH="$CLASSPATH:$HADOOP_CONF_DIR"
export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_PID_DIR=/home/hadoopuser
export HBASE_MANAGES_ZK=false
export HBASE_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_client_jaas.conf"
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_server_jaas.conf"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_server_jaas.conf"

hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop:54310/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper/data</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hbase/_HOST@REALM.CA</value>
  </property>
  <property>
    <name>hbase.regionserver.keytab.file</name>
    <value>/etc/security/keytabs/hbase.service.keytab</value>
  </property>
  <property>
    <name>hbase.master.kerberos.principal</name>
    <value>hbase/_HOST@REALM.CA</value>
  </property>
  <property>
    <name>hbase.master.keytab.file</name>
    <value>/etc/security/keytabs/hbase.service.keytab</value>
  </property>
  <property>
    <name>hbase.security.authentication.spnego.kerberos.principal</name>
    <value>hbaseHTTP/_HOST@REALM.CA</value>
  </property>
  <property>
    <name>hbase.security.authentication.spnego.kerberos.keytab</name>
    <value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hbase.security.authorization</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rpc.protection</name>
    <value>integrity</value>
  </property>
  <property>
    <name>hbase.rpc.engine</name>
    <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
  </property>
  <property>
    <name>hbase.coprocessor.master.classes</name>
    <value>org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
  <property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
  </property>
  <property>
    <name>hbase.security.authentication.ui</name>
    <value>kerberos</value>
    <description>Controls what kind of authentication should be used for the HBase web UIs.</description>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.master.info.bindAddress</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
  </property>
  <property>
    <name>hbase.regionserver.hostname</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>hbase.regionserver.port</name>
    <value>16020</value>
  </property>
  <property>
    <name>hbase.regionserver.info.port</name>
    <value>16030</value>
  </property>
  <property>
    <name>hbase.regionserver.info.bindAddress</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>hbase.master.ipc.address</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>hbase.regionserver.ipc.address</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>hbase.ssl.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hadoop.ssl.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ssl.server.keystore.keypassword</name>
    <value>startrek</value>
  </property>
  <property>
    <name>ssl.server.keystore.password</name>
    <value>startrek</value>
  </property>
  <property>
    <name>ssl.server.keystore.location</name>
    <value>/etc/security/serverKeys/keystore.jks</value>
  </property>
  <property>
    <name>hbase.rest.ssl.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rest.ssl.keystore.store</name>
    <value>/etc/security/serverKeys/keystore.jks</value>
  </property>
  <property>
    <name>hbase.rest.ssl.keystore.password</name>
    <value>startrek</value>
  </property>
  <property>
    <name>hbase.rest.ssl.keystore.keypassword</name>
    <value>startrek</value>
  </property>
  <property>
    <name>hbase.superuser</name>
    <value>hduser</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/tmp/hbase-${user.name}</value>
  </property>
  <property>
    <name>hbase.local.dir</name>
    <value>${hbase.tmp.dir}/local</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase-secure</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.interface</name>
    <value>enp0s3</value>
  </property>
  <property>
    <name>hbase.rest.authentication.type</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.proxyuser.HTTP.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.HTTP.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hbase.rest.authentication.kerberos.keytab</name>
    <value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
  </property>
  <property>
    <name>hbase.rest.authentication.kerberos.principal</name>
    <value>hbaseHTTP/_HOST@REALM.CA</value>
  </property>
  <property>
    <name>hbase.rest.kerberos.principal</name>
    <value>hbase/_HOST@REALM.CA</value>
  </property>
  <property>
    <name>hbase.rest.keytab.file</name>
    <value>/etc/security/keytabs/hbase.service.keytab</value>
  </property>
</configuration>

Change Ownership of HBase files

sudo chown hadoopuser:hadoopuser -R /usr/local/hbase/*

Hadoop HDFS Config Changes

You will need to add the following proxy-user properties to Hadoop's core-site.xml file.

nano /usr/local/hadoop/etc/hadoop/core-site.xml

<property>
  <name>hadoop.proxyuser.hbase.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
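
Hadoop only reads proxyuser settings at startup, so restart HDFS for the change to take effect. A minimal sketch, assuming the standard Hadoop sbin scripts are on your PATH:

stop-dfs.sh
start-dfs.sh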

AutoStart

crontab -e

@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start master
@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start regionserver
@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start rest --infoport 17001 -p 17000
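
After a reboot (or after running those same hbase-daemon.sh commands by hand) you can confirm the daemons are up with jps, which lists running JVM processes. You should see HMaster and HRegionServer among them; the REST service typically shows up as RESTServer:

jps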

Validation

kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hadoop@REALM.CA
hbase shell
status 'detailed'
whoami
kdestroy
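
For a deeper check than status, you can run a quick smoke test from the same hbase shell session while the Kerberos ticket is still active. The table name smoke_test is just a placeholder:

create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:msg', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'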

References

https://hbase.apache.org/0.94/book/security.html
https://pivotalhd-210.docs.pivotal.io/doc/2100/webhelp/topics/ConfiguringSecureHBase.html
https://ambari.apache.org/1.2.5/installing-hadoop-using-ambari/content/ambari-kerb-2-3-2-1.html
https://hbase.apache.org/book.html#_using_secure_http_https_for_the_web_ui

Kafka: Kerberize/SSL

In this tutorial I will show you how to use Kerberos/SSL with Kafka. I will use self-signed certs for this example. Before you begin, ensure you have installed the Kerberos server and Kafka.

If you don’t want to use the built-in Zookeeper, you can set up your own. To do that, follow this tutorial.

This tutorial assumes your hostname is “hadoop”.

Create Kerberos Principals

cd /etc/security/keytabs/

sudo kadmin.local

#You can list principals
listprincs

#Create the following principals
addprinc -randkey kafka/hadoop@REALM.CA
addprinc -randkey zookeeper/hadoop@REALM.CA

#Create the keytab files.
#You will need these for Kafka and Zookeeper to be able to log in
xst -k kafka.service.keytab kafka/hadoop@REALM.CA
xst -k zookeeper.service.keytab zookeeper/hadoop@REALM.CA

Set Keytab Permissions/Ownership

sudo chown root:hadoopuser /etc/security/keytabs/*
sudo chmod 750 /etc/security/keytabs/*
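
As with the HBase tutorial above, you can optionally sanity check the new keytabs before continuing:

klist -kt /etc/security/keytabs/kafka.service.keytab
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/hadoop@REALM.CA
klist
kdestroy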

Hosts Update

sudo nano /etc/hosts

#Remove the 127.0.1.1 line

#Change 127.0.0.1 to the following
127.0.0.1 realm.ca hadoop localhost

Ubuntu Firewall

sudo ufw disable

SSL

Setup SSL Directories if you have not previously done so.

sudo mkdir -p /etc/security/serverKeys
sudo chown -R root:hadoopuser /etc/security/serverKeys/
sudo chmod 755 /etc/security/serverKeys/

cd /etc/security/serverKeys

Setup Keystore

sudo keytool -genkey -alias NAMENODE -keyalg RSA -keysize 1024 -dname "CN=NAMENODE,OU=ORGANIZATION_UNIT,C=canada" -keypass PASSWORD -keystore /etc/security/serverKeys/keystore.jks -storepass PASSWORD
sudo keytool -export -alias NAMENODE -keystore /etc/security/serverKeys/keystore.jks -rfc -file /etc/security/serverKeys/NAMENODE.csr -storepass PASSWORD

Setup Truststore

sudo keytool -import -noprompt -alias NAMENODE -file /etc/security/serverKeys/NAMENODE.csr -keystore /etc/security/serverKeys/truststore.jks -storepass PASSWORD

Generate Self-Signed Certificate

sudo openssl genrsa -out /etc/security/serverKeys/NAMENODE.key 2048

sudo openssl req -x509 -new -key /etc/security/serverKeys/NAMENODE.key -days 300 -out /etc/security/serverKeys/NAMENODE.pem

sudo keytool -keystore /etc/security/serverKeys/keystore.jks -alias NAMENODE -certreq -file /etc/security/serverKeys/NAMENODE.cert -storepass PASSWORD -keypass PASSWORD

sudo openssl x509 -req -CA /etc/security/serverKeys/NAMENODE.pem -CAkey /etc/security/serverKeys/NAMENODE.key -in /etc/security/serverKeys/NAMENODE.cert -out /etc/security/serverKeys/NAMENODE.signed -days 300 -CAcreateserial
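
If you want to confirm what ended up in the stores, keytool -list prints each entry's alias, type, and certificate fingerprint:

sudo keytool -list -v -keystore /etc/security/serverKeys/keystore.jks -storepass PASSWORD
sudo keytool -list -v -keystore /etc/security/serverKeys/truststore.jks -storepass PASSWORD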

Setup File Permissions

sudo chmod 440 /etc/security/serverKeys/*
sudo chown root:hadoopuser /etc/security/serverKeys/*

Edit server.properties Config

cd /usr/local/kafka/config

sudo nano server.properties

#Edit or Add the following properties.
ssl.endpoint.identification.algorithm=HTTPS
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.key.password=PASSWORD
ssl.keystore.location=/etc/security/serverKeys/keystore.jks
ssl.keystore.password=PASSWORD
ssl.truststore.location=/etc/security/serverKeys/truststore.jks
ssl.truststore.password=PASSWORD
listeners=SASL_SSL://:9094
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=required
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
ssl.keystore.type=JKS
ssl.truststore.type=JKS
sasl.kerberos.service.name=kafka
zookeeper.connect=hadoop:2181
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
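
Later, once the broker is running, you can verify the SSL side of the SASL_SSL listener with openssl's s_client; a successful handshake prints the broker's certificate chain. Port 9094 matches the listeners line above:

openssl s_client -connect hadoop:9094 -tls1_2 </dev/null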

Edit zookeeper.properties Config

sudo nano zookeeper.properties

#Edit or Add the following properties.

server.1=hadoop:2888:3888
clientPort=2181
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=SASL
jaasLoginRenew=3600000

Edit producer.properties Config

sudo nano producer.properties

bootstrap.servers=hadoop:9094
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/etc/security/serverKeys/truststore.jks
ssl.truststore.password=PASSWORD
ssl.keystore.location=/etc/security/serverKeys/keystore.jks
ssl.keystore.password=PASSWORD
ssl.key.password=PASSWORD
sasl.mechanism=GSSAPI

Edit consumer.properties Config

sudo nano consumer.properties

zookeeper.connect=hadoop:2181
bootstrap.servers=hadoop:9094
group.id=securing-kafka-group
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
ssl.truststore.location=/etc/security/serverKeys/truststore.jks
ssl.truststore.password=PASSWORD
sasl.mechanism=GSSAPI

Add zookeeper_jaas.conf Config

sudo nano zookeeper_jaas.conf

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  useKeyTab=true
  keyTab="/etc/security/keytabs/zookeeper.service.keytab"
  storeKey=true
  useTicketCache=true
  refreshKrb5Config=true
  principal="zookeeper/hadoop@REALM.CA";
};

Add kafkaserver_jaas.conf Config

sudo nano kafkaserver_jaas.conf

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  useKeyTab=true
  storeKey=true
  refreshKrb5Config=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  principal="kafka/hadoop@REALM.CA";
};

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  refreshKrb5Config=true
  debug=true
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  principal="kafka/hadoop@REALM.CA";
};

Edit kafka-server-start.sh

cd /usr/local/kafka/bin/

sudo nano kafka-server-start.sh

#Add the following above the final exec line
jaas="$base_dir/../config/kafkaserver_jaas.conf"

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"

Edit zookeeper-server-start.sh

sudo nano zookeeper-server-start.sh

#Add the following above the final exec line
jaas="$base_dir/../config/zookeeper_jaas.conf"

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"
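
With both start scripts patched, you can bring the services up (Zookeeper first). The -daemon flag runs them in the background:

cd /usr/local/kafka/bin/
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties
./kafka-server-start.sh -daemon ../config/server.properties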

Kafka-ACL

cd /usr/local/kafka/bin/

#Grant topic access and cluster access
./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --cluster
./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --topic TOPIC

#Grant all groups for a specific topic (quote the * so the shell does not expand it)
./kafka-acls.sh --operation All --allow-principal User:kafka --authorizer-properties zookeeper.connect=hadoop:2181 --add --topic TOPIC --group '*'

#If you want to remove cluster access
./kafka-acls.sh --authorizer-properties zookeeper.connect=hadoop:2181 --remove --cluster

#If you want to remove topic access
./kafka-acls.sh --authorizer-properties zookeeper.connect=hadoop:2181 --remove --topic TOPIC

#List access for cluster
./kafka-acls.sh --list --authorizer-properties zookeeper.connect=hadoop:2181 --cluster

#List access for topic
./kafka-acls.sh --list --authorizer-properties zookeeper.connect=hadoop:2181 --topic TOPIC
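
Note that TOPIC is a placeholder throughout; the topic must exist before you grant ACLs on it. A hypothetical creation command for a single-broker setup like this one:

./kafka-topics.sh --create --zookeeper hadoop:2181 --replication-factor 1 --partitions 1 --topic TOPIC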

kafka-console-producer.sh

If you want to test using the console producer you need to make these changes.

cd /usr/local/kafka/bin/
nano kafka-console-producer.sh

#Add the below before the last line

base_dir=$(dirname $0)
jaas="$base_dir/../config/kafkaserver_jaas.conf"
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"

#Now you can run the console producer
./kafka-console-producer.sh --broker-list hadoop:9094 --topic TOPIC --producer.config ../config/producer.properties

kafka-console-consumer.sh

If you want to test using the console consumer you need to make these changes.

cd /usr/local/kafka/bin/
nano kafka-console-consumer.sh

#Add the below before the last line

base_dir=$(dirname $0)
jaas="$base_dir/../config/kafkaserver_jaas.conf"
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=$jaas"

#Now you can run the console consumer
./kafka-console-consumer.sh --bootstrap-server hadoop:9094 --topic TOPIC --consumer.config ../config/consumer.properties --from-beginning

References

https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
https://github.com/confluentinc/securing-kafka-blog/blob/master/manifests/default.pp

HortonWorks: SSL Setup


If you want to use SSL with Ambari Server (note this does not yet cover Hadoop itself), follow the steps below. This does not cover creating an SSL cert, as there are many tutorials available on creating self-signed certs and the like.

Step 1: Stop the Ambari Server

sudo ambari-server stop

Step 2: Run Ambari Server Security Setup Command

sudo ambari-server setup-security

Select option 1 at the prompts. Note that you cannot use port 443 for HTTPS, as it is reserved in Ambari; the default is 8443 and that is what they recommend. Enter the path to your cert file (/etc/ssl/certs/hostname.cer), then the path to your encrypted key (/etc/ssl/private/hostname.key), and follow the rest of the prompts.

Step 3: Start Ambari Server

sudo ambari-server start

Step 4: Log in to Ambari Server, now available at https://hostname:8443
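
To confirm the server is answering over HTTPS from the command line, you can use curl; the -k flag skips certificate validation, which is necessary with a self-signed cert:

curl -k -I https://hostname:8443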