Phoenix & Java: Connecting Secure

In this tutorial I will show you how to connect to a secure Phoenix using Java. It's rather straightforward.

POM.xml

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-queryserver</artifactId>
    <version>5.0.0-HBase-2.0</version>
</dependency>

Imports:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.ResultSet;
import java.sql.Statement;

Initiate Kerberos Authentication

System.setProperty("java.security.krb5.conf", "C:\\Program Files\\Java\\jdk1.8.0_171\\jre\\lib\\security\\krb5.conf");
System.setProperty("java.security.krb5.realm", "REALM.CA");
System.setProperty("java.security.krb5.kdc", "REALM.CA");
System.setProperty("sun.security.krb5.debug", "true");
System.setProperty("javax.net.debug", "all");

Connect:

Now we create the connection.

Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
String url = "jdbc:phoenix:hadoop:2181:/hbase-secure:hbase/hadoop@REALM.CA:\\data\\hbase.service.keytab";
Connection connection = DriverManager.getConnection(url);

System.out.println("Connected");

Statement statement = connection.createStatement();

//Drop the table if it exists
String deleteTableSql = "DROP TABLE IF EXISTS employee";
System.out.println("Deleting Table: " + deleteTableSql);
statement.executeUpdate(deleteTableSql);
System.out.println("Deleted Table");

//Create the table
String createTableSql = "CREATE TABLE employee ( eid bigint primary key, name varchar)";
System.out.println("Creating Table: " + createTableSql);
statement.executeUpdate(createTableSql);
System.out.println("Created Table");

//Insert data
String insertTableSql = "UPSERT INTO employee VALUES(1, 'Oliver')";
System.out.println("Inserting Data: " + insertTableSql);
statement.executeUpdate(insertTableSql);
System.out.println("Inserted Data");

connection.commit();

//Select data
String selectTablesSql = "select * from employee";
System.out.println("Show records: " + selectTablesSql);
ResultSet res = statement.executeQuery(selectTablesSql);

while (res.next()) {
    System.out.println(String.format("id: %s name: %s", res.getInt("eid"), res.getString("name")));
}
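
When you are done, close the JDBC resources. A minimal cleanup sketch, continuing the example above:

//Release the result set, statement and connection once finished.
res.close();
statement.close();
connection.close();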

Phoenix: Kerberize Installation

In this tutorial I will show you how to use Kerberos with Phoenix. Before you begin, ensure you have installed Kerberos Server, Hadoop, HBase and Zookeeper.

This assumes your hostname is "hadoop".

Install Phoenix

wget http://apache.forsale.plus/phoenix/apache-phoenix-5.0.0-HBase-2.0/bin/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
tar -zxvf apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
sudo mv apache-phoenix-5.0.0-HBase-2.0-bin /usr/local/phoenix/
cd /usr/local/phoenix/

Setup .bashrc:

sudo nano ~/.bashrc

Add the following to the end of the file.

#PHOENIX VARIABLES START
export PHOENIX_HOME=/usr/local/phoenix
export PHOENIX_CLASSPATH=$PHOENIX_HOME/*
export PATH=$PATH:$PHOENIX_HOME/bin
#PHOENIX VARIABLES END

source ~/.bashrc

Link Files

ln -sf $HBASE_CONF_DIR/hbase-site.xml $PHOENIX_HOME/bin/hbase-site.xml
ln -sf $HADOOP_CONF_DIR/core-site.xml $PHOENIX_HOME/bin/core-site.xml
ln -sf $PHOENIX_HOME/phoenix-5.0.0-HBase-2.0-server.jar $HBASE_HOME/lib/phoenix-5.0.0-HBase-2.0-server.jar

hbase-env.sh

nano /usr/local/hbase/conf/hbase-env.sh

#Ensure the following env variables are set

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/usr/local/hadoop/etc/hadoop}
export PHOENIX_CLASSPATH=${PHOENIX_CLASSPATH:-/usr/local/phoenix}
export HBASE_CLASSPATH="$HBASE_CLASSPATH:$CLASSPATH:$HADOOP_CONF_DIR:$PHOENIX_CLASSPATH/phoenix-5.0.0-HBase-2.0-server.jar:$PHOENIX_CLASSPATH/phoenix-core-5.0.0-HBase-2.0.jar:$PHOENIX_CLASSPATH/phoenix-5.0.0-HBase-2.0-client.jar"

hbase-site.xml

nano /usr/local/hbase/conf/hbase-site.xml

#Add the following properties

<property>
    <name>phoenix.functions.allowUserDefinedFunctions</name>
    <value>true</value>
    <description>enable UDF functions</description>
</property>
<property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
    <name>hbase.region.server.rpc.scheduler.factory.class</name>
    <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>
<property>
    <name>hbase.rpc.controllerfactory.class</name>
    <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
    <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>
<property>
    <name>hbase.defaults.for.version.skip</name>
    <value>true</value>
</property>
<property>
    <name>phoenix.queryserver.http.port</name>
    <value>8765</value>
</property>
<property>
    <name>phoenix.queryserver.serialization</name>
    <value>PROTOBUF</value>
</property>
<property>
    <name>phoenix.queryserver.keytab.file</name>
    <value>/etc/security/keytabs/hbase.service.keytab</value>
</property>
<property>
    <name>phoenix.queryserver.kerberos.principal</name>
    <value>hbase/hadoop@REALM.CA</value>
</property>
<property>
    <name>phoenix.queryserver.http.keytab.file</name>
    <value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
</property>
<property>
    <name>phoenix.queryserver.http.kerberos.principal</name>
    <value>hbaseHTTP/hadoop@REALM.CA</value>
</property>
<property>
    <name>phoenix.queryserver.dns.nameserver</name>
    <value>hadoop</value>
</property>
<property>
    <name>phoenix.queryserver.dns.interface</name>
    <value>enp0s3</value>
</property>
<property>
    <name>phoenix.schema.mapSystemTablesToNamespace</name>
    <value>true</value>
</property>
<property>
    <name>phoenix.schema.isNamespaceMappingEnabled</name>
    <value>true</value>
</property>

sqlline.py

sqlline.py hadoop:2181:/hbase-secure:hbase/hadoop@REALM.CA:/etc/security/keytabs/hbase.service.keytab
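
Once sqlline connects you can run a few commands to verify the installation. A quick sanity check; the test table here is just an example:

!tables
CREATE TABLE IF NOT EXISTS test (id BIGINT PRIMARY KEY, name VARCHAR);
UPSERT INTO test VALUES (1, 'Oliver');
SELECT * FROM test;
DROP TABLE test;
!quit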

HBASE & Java: Connecting Secure

In this tutorial I will show you how to connect to a secure HBASE using Java. It's rather straightforward.

Import SSL Cert to Java:

Follow this tutorial on "Installing unlimited strength encryption Java libraries".

If on Windows, do the following:

#Import it
"C:\Program Files\Java\jdk1.8.0_171\bin\keytool" -import -file hadoop.csr -keystore "C:\Program Files\Java\jdk1.8.0_171\jre\lib\security\cacerts" -alias "hadoop"

#Check it
"C:\Program Files\Java\jdk1.8.0_171\bin\keytool" -list -v -keystore "C:\Program Files\Java\jdk1.8.0_171\jre\lib\security\cacerts"

#If you want to delete it
"C:\Program Files\Java\jdk1.8.0_171\bin\keytool" -delete -alias hadoop -keystore "C:\Program Files\Java\jdk1.8.0_171\jre\lib\security\cacerts"

POM.xml

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>2.1.0</version>
    <type>pom</type>
</dependency>

Imports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;

Initiate Kerberos Authentication

System.setProperty("java.security.auth.login.config", "C:\\data\\kafkaconnect\\kafka\\src\\main\\resources\\client_jaas.conf");
System.setProperty("https.protocols", "TLSv1,TLSv1.1,TLSv1.2");
System.setProperty("java.security.krb5.conf", "C:\\Program Files\\Java\\jdk1.8.0_171\\jre\\lib\\security\\krb5.conf");
System.setProperty("java.security.krb5.realm", "REALM.CA");
System.setProperty("java.security.krb5.kdc", "REALM.CA");
System.setProperty("sun.security.krb5.debug", "false");
System.setProperty("javax.net.debug", "false");
System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
System.setProperty("javax.net.ssl.keyStore", "C:\\Program Files\\Java\\jdk1.8.0_171\\jre\\lib\\security\\cacerts");
System.setProperty("javax.net.ssl.trustStore", "C:\\Program Files\\Java\\jdk1.8.0_171\\jre\\lib\\security\\cacerts");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");

Config:

We will use the basic configuration here. You should secure the cluster and use appropriate settings for that.

// Setup the configuration object.
final Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "hadoop");
config.set("hbase.zookeeper.property.clientPort", "2181");
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.cluster.distributed", "true");
config.set("hbase.rpc.protection", "integrity");
config.set("zookeeper.znode.parent", "/hbase-secure");
config.set("hbase.master.kerberos.principal", "hbase/hadoop@REALM.CA");
config.set("hbase.regionserver.kerberos.principal", "hbase/hadoop@REALM.CA");

Connect:

Now we create the connection.

UserGroupInformation.setConfiguration(config);
UserGroupInformation.setLoginUser(UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/hadoop@REALM.CA", "c:\\data\\hbase.service.keytab"));

System.out.println(UserGroupInformation.getLoginUser());
System.out.println(UserGroupInformation.getCurrentUser());

Connection conn = ConnectionFactory.createConnection(config);

//Later when we are done we will want to close the connection.
conn.close();

HBase Admin:

Retrieve an Admin implementation to administer the HBase cluster, if you need it.

Admin admin = conn.getAdmin();
//Later when we are done we will want to close the connection.
admin.close();
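
As an example of what you can do with it, here is a minimal sketch that lists the tables in the cluster, assuming the conn object from above:

import org.apache.hadoop.hbase.TableName;

//List every table name in the cluster.
Admin admin = conn.getAdmin();
for (final TableName tableName : admin.listTableNames()) {
    System.out.println(tableName.getNameAsString());
}
admin.close();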

HBase: Kerberize/SSL Installation

In this tutorial I will show you how to use Kerberos/SSL with HBase. I will use self-signed certs for this example. Before you begin, ensure you have installed Kerberos Server, Hadoop and Zookeeper.

This assumes your hostname is "hadoop".

We will install a Master, a RegionServer and a REST client.

Create Kerberos Principals

cd /etc/security/keytabs/

sudo kadmin.local

#You can list principals
listprincs

#Create the following principals
addprinc -randkey hbase/hadoop@REALM.CA
addprinc -randkey hbaseHTTP/hadoop@REALM.CA

#Create the keytab files.
#You will need these for HBase to be able to login
xst -k hbase.service.keytab hbase/hadoop@REALM.CA
xst -k hbaseHTTP.service.keytab hbaseHTTP/hadoop@REALM.CA

Set Keytab Permissions/Ownership

sudo chown root:hadoopuser /etc/security/keytabs/*
sudo chmod 750 /etc/security/keytabs/*

Install HBase

wget http://apache.forsale.plus/hbase/2.1.0/hbase-2.1.0-bin.tar.gz
tar -zxvf hbase-2.1.0-bin.tar.gz
sudo mv hbase-2.1.0 /usr/local/hbase/
cd /usr/local/hbase/conf/

Setup .bashrc:

sudo nano ~/.bashrc

Add the following to the end of the file.

#HBASE VARIABLES START
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
export HBASE_CONF_DIR=$HBASE_HOME/conf
#HBASE VARIABLES END

source ~/.bashrc

hbase_client_jaas.conf

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=false
    useTicketCache=true;
};

hbase_server_jaas.conf

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    useTicketCache=false
    keyTab="/etc/security/keytabs/hbase.service.keytab"
    principal="hbase/hadoop@REALM.CA";
};

regionservers

hadoop

hbase-env.sh

Add or modify the following settings.

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/usr/local/hbase/conf}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/usr/local/hadoop/etc/hadoop}
export HBASE_CLASSPATH="$CLASSPATH:$HADOOP_CONF_DIR"
export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_PID_DIR=/home/hadoopuser
export HBASE_MANAGES_ZK=false
export HBASE_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_client_jaas.conf"
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_server_jaas.conf"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_CONF_DIR/hbase_server_jaas.conf"

hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://hadoop:54310/hbase</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/usr/local/zookeeper/data</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.regionserver.kerberos.principal</name>
        <value>hbase/_HOST@REALM.CA</value>
    </property>
    <property>
        <name>hbase.regionserver.keytab.file</name>
        <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>
    <property>
        <name>hbase.master.kerberos.principal</name>
        <value>hbase/_HOST@REALM.CA</value>
    </property>
    <property>
        <name>hbase.master.keytab.file</name>
        <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>
    <property>
        <name>hbase.security.authentication.spnego.kerberos.principal</name>
        <value>hbaseHTTP/_HOST@REALM.CA</value>
    </property>
    <property>
        <name>hbase.security.authentication.spnego.kerberos.keytab</name>
        <value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
    </property>
    <property>
        <name>hbase.security.authentication</name>
        <value>kerberos</value>
    </property>
    <property>
        <name>hbase.security.authorization</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.rpc.protection</name>
        <value>integrity</value>
    </property>
    <property>
        <name>hbase.rpc.engine</name>
        <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
    </property>
    <property>
        <name>hbase.coprocessor.master.classes</name>
        <value>org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>
    <property>
        <name>hbase.coprocessor.region.classes</name>
        <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>
    <property>
        <name>hbase.security.authentication.ui</name>
        <value>kerberos</value>
        <description>Controls what kind of authentication should be used for the HBase web UIs.</description>
    </property>
    <property>
        <name>hbase.master.port</name>
        <value>16000</value>
    </property>
    <property>
        <name>hbase.master.info.bindAddress</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>16010</value>
    </property>
    <property>
        <name>hbase.regionserver.hostname</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>hbase.regionserver.port</name>
        <value>16020</value>
    </property>
    <property>
        <name>hbase.regionserver.info.port</name>
        <value>16030</value>
    </property>
    <property>
        <name>hbase.regionserver.info.bindAddress</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hbase.master.ipc.address</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hbase.regionserver.ipc.address</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hbase.ssl.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>hadoop.ssl.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ssl.server.keystore.keypassword</name>
        <value>startrek</value>
    </property>
    <property>
        <name>ssl.server.keystore.password</name>
        <value>startrek</value>
    </property>
    <property>
        <name>ssl.server.keystore.location</name>
        <value>/etc/security/serverKeys/keystore.jks</value>
    </property>
    <property>
        <name>hbase.rest.ssl.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.rest.ssl.keystore.store</name>
        <value>/etc/security/serverKeys/keystore.jks</value>
    </property>
    <property>
        <name>hbase.rest.ssl.keystore.password</name>
        <value>startrek</value>
    </property>
    <property>
        <name>hbase.rest.ssl.keystore.keypassword</name>
        <value>startrek</value>
    </property>
    <property>
        <name>hbase.superuser</name>
        <value>hduser</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>/tmp/hbase-${user.name}</value>
    </property>
    <property>
        <name>hbase.local.dir</name>
        <value>${hbase.tmp.dir}/local</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.unsafe.stream.capability.enforce</name>
        <value>false</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>zookeeper.znode.parent</name>
        <value>/hbase-secure</value>
    </property>
    <property>
        <name>hbase.regionserver.dns.interface</name>
        <value>enp0s3</value>
    </property>
    <property>
        <name>hbase.rest.authentication.type</name>
        <value>kerberos</value>
    </property>
    <property>
        <name>hadoop.proxyuser.HTTP.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.HTTP.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hbase.rest.authentication.kerberos.keytab</name>
        <value>/etc/security/keytabs/hbaseHTTP.service.keytab</value>
    </property>
    <property>
        <name>hbase.rest.authentication.kerberos.principal</name>
        <value>hbaseHTTP/_HOST@REALM.CA</value>
    </property>
    <property>
        <name>hbase.rest.kerberos.principal</name>
        <value>hbase/_HOST@REALM.CA</value>
    </property>
    <property>
        <name>hbase.rest.keytab.file</name>
        <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>
</configuration>

Change Ownership of HBase files

sudo chown hadoopuser:hadoopuser -R /usr/local/hbase/*

Hadoop HDFS Config Changes

You will need to add the following proxy user properties to Hadoop's core-site.xml file.

nano /usr/local/hadoop/etc/hadoop/core-site.xml

<property>
    <name>hadoop.proxyuser.hbase.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hbase.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.HTTP.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.HTTP.groups</name>
    <value>*</value>
</property>
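
These proxy user settings only take effect once HDFS has been restarted. Assuming the standard Hadoop scripts are on your path:

stop-dfs.sh
start-dfs.sh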

AutoStart

crontab -e

@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start master
@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start regionserver
@reboot /usr/local/hbase/bin/hbase-daemon.sh --config /usr/local/hbase/conf/ start rest --infoport 17001 -p 17000

Validation

kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hadoop@REALM.CA
hbase shell
status 'detailed'
whoami
kdestroy

References

https://hbase.apache.org/0.94/book/security.html
https://pivotalhd-210.docs.pivotal.io/doc/2100/webhelp/topics/ConfiguringSecureHBase.html
https://ambari.apache.org/1.2.5/installing-hadoop-using-ambari/content/ambari-kerb-2-3-2-1.html
https://hbase.apache.org/book.html#_using_secure_http_https_for_the_web_ui

HBASE & Java: Scan Filters

This tutorial will guide you through how to use filtering when scanning an HBASE table using Java 8. Make sure you first follow this tutorial on connecting to HBASE and this tutorial on scanning HBase.

Row Key Filter (PrefixFilter):

import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

final PrefixFilter prefixFilter = new PrefixFilter(Bytes.toBytes(myRowKey));
scan.setFilter(prefixFilter);

Column Value Filter:

import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;

final SingleColumnValueFilter columnValueFilter = new SingleColumnValueFilter(myColumnFamily, myColumnName, CompareOp.EQUAL, Bytes.toBytes(myValue));
scan.setFilter(columnValueFilter);

Regex Filter:

import org.apache.hadoop.hbase.filter.RegexStringComparator;

final RegexStringComparator regexStringComparator = new RegexStringComparator(".*");
final SingleColumnValueFilter singleColumnValueFilter = new SingleColumnValueFilter(myColumnFamily, myColumnName, CompareOp.EQUAL, regexStringComparator);
scan.setFilter(singleColumnValueFilter);
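
Note that a Scan holds a single filter, so if you want to apply more than one at a time you can combine them with a FilterList. A minimal sketch, assuming the filters created above:

import org.apache.hadoop.hbase.filter.FilterList;

//MUST_PASS_ALL means a row must match every filter (a logical AND).
final FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filterList.addFilter(prefixFilter);
filterList.addFilter(columnValueFilter);
scan.setFilter(filterList);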

HBASE & Java: Delete a Table

This tutorial will guide you through how to delete an HBASE table using Java 8. Make sure you first follow this tutorial on connecting to HBASE.

Import:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

Delete:

//You must first disable the table
conn.getAdmin().disableTable(TableName.valueOf("myTable"));

//Now you can delete the table
conn.getAdmin().deleteTable(TableName.valueOf("myTable"));
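
Disabling or deleting a table that does not exist will throw an exception, so you may want to guard the delete. A small sketch of that check:

final TableName tableName = TableName.valueOf("myTable");
if (conn.getAdmin().tableExists(tableName)) {
    //You must first disable the table before deleting it.
    conn.getAdmin().disableTable(tableName);
    conn.getAdmin().deleteTable(tableName);
}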

HBASE Phoenix & Java: Unsecure Connection

In this tutorial I will show you how to make a basic connection to a remote unsecure HBase Phoenix Query Server using Java. Phoenix allows you to run SQL commands on top of HBASE. You can find the commands listed here.

POM.xml:

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-server-client</artifactId>
    <version>4.7.0-HBase-1.1</version>
</dependency>

Imports:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

Connect:

Class.forName("org.apache.phoenix.queryserver.client.Driver");
Connection conn = DriverManager.getConnection("jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF");
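
From here it behaves like any other JDBC connection. A minimal sketch that runs a query over the thin client; the employee table is just an example, and java.sql.ResultSet and java.sql.Statement are assumed imported:

Statement statement = conn.createStatement();
ResultSet res = statement.executeQuery("SELECT * FROM employee");

while (res.next()) {
    System.out.println(res.getString(1));
}

//Release the resources once finished.
res.close();
statement.close();
conn.close();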

HBASE & Java: Search for Data

This tutorial will give you a quick overview of how to search for data using HBASE. If you have not done so yet, follow these two tutorials: HBASE: Connecting and HBASE: Create a Table.

Search for Data:

Basically we have to scan the table for data, so we must first set up a scan object and then search for the data.

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;

//Let's setup our scan object.
final Scan scan = new Scan();
//Search a particular column
scan.addColumn(Bytes.toBytes("columnFamily"), Bytes.toBytes("columnName"));
//Check the row key prefix
scan.setRowPrefixFilter(Bytes.toBytes("rowkey"));

final TableName tableName = TableName.valueOf(yourTableName);

//Get the table you want to work with, using the connection from the tutorial above.
final Table table = conn.getTable(tableName);
//Create our scanner based on the scan object above.
final ResultScanner scanner = table.getScanner(scan);

//Now we will loop through our results
for (Result result = scanner.next(); result != null; result = scanner.next()) {
    //Let's get our row key
    final String rowIdentifier = Bytes.toString(result.getRow());

    //Now based on each record found we will loop through the available cells for that record.
    for (final Cell cell : result.listCells()) {
        //Now we can do whatever we need to with the data.
        log.info("column {} value {}", Bytes.toString(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength()), Bytes.toString(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
    }
}
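
When you are done with the results, release the scanner and the table:

scanner.close();
table.close();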

HBASE & Java: Create a Table

This tutorial will guide you through how to create an HBASE table using Java 8. Make sure you first follow this tutorial on connecting to HBASE.

Table Exists:

This checks if the table already exists in HBASE.

import org.apache.hadoop.hbase.TableName;

final TableName table = TableName.valueOf(yourTableName);

//Use the connection object from the connection tutorial above to get an Admin.
conn.getAdmin().tableExists(table);

Create Table:

In the most basic example of creating an HBASE table you need to know the name and the column families. A column family is a group of columns whose data is related in some way and stored together on disk. Notice how we don't define columns in the table design; columns are added as we put data, which I will give an example of below.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

final TableName table = TableName.valueOf(yourTableName);

final HTableDescriptor hTableBuilder = new HTableDescriptor(table);
final HColumnDescriptor column = new HColumnDescriptor(family);
hTableBuilder.addFamily(column);

//Use the connection object from the connection tutorial above to get an Admin.
conn.getAdmin().createTable(hTableBuilder);
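
Note that HTableDescriptor and HColumnDescriptor are deprecated on the HBase 2.x client. If you are on 2.x, the builder API is the replacement; a minimal sketch assuming the same yourTableName and family values:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final TableDescriptor descriptor = TableDescriptorBuilder
    .newBuilder(TableName.valueOf(yourTableName))
    .setColumnFamily(ColumnFamilyDescriptorBuilder.of(family))
    .build();

conn.getAdmin().createTable(descriptor);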

Get a Table:

This will retrieve a table from HBASE so you can use it to put data, etc.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;

final TableName tableName = TableName.valueOf(yourTableName);

//Use the connection object from the connection tutorial above.
final Table table = conn.getTable(tableName);

Put Data:

Now we will put data into the table we have a reference to above. Notice how the columns are referenced.

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

final byte[] rowKey = Bytes.toBytes("some row identifier");
final byte[] columnFamily = Bytes.toBytes("myFamily");
final byte[] columnName = Bytes.toBytes("columnName");
final byte[] data = Bytes.toBytes(myData);

final Put put = new Put(rowKey);
put.addColumn(columnFamily, columnName, data);

//Insert the data.
table.put(put);
//Close the table.
table.close();

HBASE: Connecting Unsecure

In this tutorial I will show you how to connect to an unsecure HBASE using Java. It's rather straightforward. This tutorial assumes no security. There are many different options you can set; we will use just the bare minimum so you can connect.

POM:

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.4.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>1.4.1</version>
    <type>pom</type>
</dependency>

Imports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

Config:

We will use the basic configuration here. You should secure the cluster and use appropriate settings for that.

final Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "myurl.com"); //Can be comma separated if you have more than 1
config.set("hbase.zookeeper.property.clientPort", "2181");
config.set("zookeeper.znode.parent", "/hbase-unsecure");

Connect:

Now we create the connection.

Connection conn = ConnectionFactory.createConnection(config);

//Later when we are done we will want to close the connection.
conn.close();

HBase Admin:

Retrieve an Admin implementation to administer the HBase cluster, if you need it.

Admin admin = conn.getAdmin();
//Later when we are done we will want to close the connection.
admin.close();