[HD] Hadoop Open-Source Environment Setup (Cluster Mode): 2. Hive

Component versions in the commonly used Huawei FusionInsight C60U10, listed here as a compatibility reference:

HDFS:2.7.2
Hive:1.3.0
HBase:1.0.2
Spark:1.5.1
Solr:5.3.1
Flume:1.6.0
Kafka:2.10-0.10.0.0
Storm:0.10.0
Hue:3.9.0
Redis:3.0.5

Configuration used in this article: RedHat 6.5, JDK 1.7.0_79, Hadoop 2.7.3, apache-hive-2.1.1

Three-node configuration and hostnames (master 8C/30G, workers 6C/20G):

192.168.111.140	HMASTER  namenode,datanode(nodeagent)
192.168.111.141	HDATA01  datanode(nodeagent)
192.168.111.142	HDATA02  datanode(nodeagent)
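
For these hostnames to resolve on every node, each machine's /etc/hosts should carry the same mappings. A minimal sketch (the HMaster alias is an addition here, since later configs use that spelling):

192.168.111.140  HMASTER  HMaster
192.168.111.141  HDATA01
192.168.111.142  HDATA02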

The detailed steps are as follows.


I. Install MySQL

1. Download the MySQL installation package (as root):
https://cdn.mysql.com//Downloads/MySQL-5.6/MySQL-5.6.35-1.el6.x86_64.rpm-bundle.tar
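
The bundle is a plain tar archive holding the individual RPMs; a minimal sketch for fetching and unpacking it (assuming wget is available):

$ wget https://cdn.mysql.com//Downloads/MySQL-5.6/MySQL-5.6.35-1.el6.x86_64.rpm-bundle.tar
$ tar -xvf MySQL-5.6.35-1.el6.x86_64.rpm-bundle.tar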

2. Install via RPM, in order; the compatibility package "MySQL-shared-compat-5.6.35-1.el6.x86_64.rpm" must be installed first:


rpm -ivh MySQL-shared-compat-5.6.35-1.el6.x86_64.rpm
rpm -ivh MySQL-server-5.6.35-1.el6.x86_64.rpm  # on a mysql-libs conflict, find it with rpm -qa | grep mysql and remove it (see the sketch after this block)
rpm -ivh MySQL-client-5.6.35-1.el6.x86_64.rpm
rpm -ivh MySQL-devel-5.6.35-1.el6.x86_64.rpm
rpm -ivh MySQL-embedded-5.6.35-1.el6.x86_64.rpm
rpm -ivh MySQL-shared-5.6.35-1.el6.x86_64.rpm
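
If the server RPM reports a conflict with the stock mysql-libs package, one way to clear it (the version string below is hypothetical; use whatever rpm -qa actually reports):

$ rpm -qa | grep -i mysql                            # list installed MySQL packages
$ rpm -e --nodeps mysql-libs-5.1.73-3.el6_5.x86_64   # hypothetical version; remove the conflicting package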


3. Configure MySQL


Get the initial password (as root): cat /root/.mysql_secret

$ service mysql start
$ mysql -uroot -p
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('password');
mysql> quit
$ service mysql restart

II. Configure Hive
1. Create the hive user in MySQL (as the hadoop user):

$ mysql -u root -p
mysql> create user 'hive' identified by 'hive';
mysql> grant all privileges on *.* to 'hive' with grant option;
mysql> flush privileges;
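
Since the metastore will connect to MySQL over the network as this user, it's worth verifying the login works from the Hive host (a quick check; HMaster is the MySQL host in this layout):

$ mysql -h HMaster -u hive -phive -e "select 1;"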


2. Configure Hive. Extract apache-hive-2.1.1-bin.tar.gz to /home/hadoop/BigData (as the hadoop user).
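
A minimal sketch of the extraction, plus optional HIVE_HOME/PATH entries for convenience (the .bash_profile lines are an assumption, not part of the original steps):

$ cd /home/hadoop/BigData
$ tar -zxvf apache-hive-2.1.1-bin.tar.gz
$ echo 'export HIVE_HOME=/home/hadoop/BigData/apache-hive-2.1.1-bin' >> ~/.bash_profile
$ echo 'export PATH=$PATH:$HIVE_HOME/bin' >> ~/.bash_profile
$ source ~/.bash_profile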
Step 2.1: Create/modify /home/hadoop/BigData/apache-hive-2.1.1-bin/conf/hive-env.sh (the defaults are fine; change it if needed).

Step 2.2: Create/modify /home/hadoop/BigData/apache-hive-2.1.1-bin/conf/hive-default.xml (the defaults are fine).

Step 2.3: Create/modify /home/hadoop/BigData/apache-hive-2.1.1-bin/conf/hive-site.xml, for reference:

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://HMaster:3306/hive?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
        <description>password to use against metastore database</description>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
        <description>force metastore schema version consistency</description>
    </property>
</configuration>


3. Place mysql-connector-java-5.1.40-bin.jar into /home/hadoop/BigData/apache-hive-2.1.1-bin/lib/, minding file permissions (as the hadoop user).
Download URL: https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.40.zip
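
A short sketch of unpacking the connector zip and dropping the jar into Hive's lib directory (assuming the zip sits in the current directory):

$ unzip mysql-connector-java-5.1.40.zip
$ cp mysql-connector-java-5.1.40/mysql-connector-java-5.1.40-bin.jar /home/hadoop/BigData/apache-hive-2.1.1-bin/lib/
$ ls -l /home/hadoop/BigData/apache-hive-2.1.1-bin/lib/mysql-connector-java-5.1.40-bin.jar   # verify the owner is hadoop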


4. Start Hive
Note: the Hive metastore schema in MySQL must be initialized before the first run:


$ cd /home/hadoop/BigData/apache-hive-2.1.1-bin/bin/
$ ./schematool -dbType mysql -initSchema
$ ./hive
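
As a quick sanity check that the metastore is wired up (an illustrative session; the table name is made up):

hive> show databases;
hive> create table smoke_test (id int);
hive> show tables;
hive> drop table smoke_test;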


III. Configure HiveServer2
HiveServer cannot handle concurrent requests from more than one client; HiveServer2 supports multi-client concurrency and authentication. The configuration follows.

5. HiveServer2 configuration:
Step 1: Optionally change hive.server2.authentication in hive-default.xml; the default NONE means no authentication (left as-is here).
Step 2: Modify hive-site.xml, adding the following inside <configuration>:


<!-- Env__HiveServer2 -->
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>HMaster</value>
</property>


Step 3: Start the metastore and HiveServer2:

$ hive --service metastore &
$ hive --service hiveserver2 &
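
To confirm both services came up and HiveServer2 is listening on its Thrift port (a sanity check, not part of the original steps):

$ jps                                        # expect two RunJar processes (metastore and hiveserver2)
$ netstat -nltp 2>/dev/null | grep 10000     # the port set by hive.server2.thrift.port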

6. Connect to HiveServer2 with Beeline

Start Beeline: bin/beeline
Connect: !connect jdbc:hive2://HMaster:10000 hive hive

Or in one step:

$ beeline -u jdbc:hive2://HMaster:10000/default -n hive -p hive
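
Beeline can also run a one-shot statement with -e, which is handy for scripting (illustrative example):

$ beeline -u jdbc:hive2://HMaster:10000/default -n hive -p hive -e "show databases;"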


如果遇到错误:Error: Could not open client transport with JDBC Uri: jdbc:hive2://HMaster:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hive (state=08S01,code=0)
Fix it as follows: edit /home/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml and add the following:


<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>

Then restart Hadoop, as sketched below.
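
A minimal restart sequence, assuming the modified core-site.xml is synced to the other nodes first and the Hadoop sbin scripts are on PATH:

$ scp /home/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml HDATA01:/home/hadoop/hadoop-2.7.3/etc/hadoop/
$ scp /home/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml HDATA02:/home/hadoop/hadoop-2.7.3/etc/hadoop/
$ stop-yarn.sh && stop-dfs.sh
$ start-dfs.sh && start-yarn.sh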
