[HD] Building an Open-Source Hadoop Environment (Cluster Mode): 7. Hue-3.12.0

Component versions in the widely used Huawei FusionInsight C60U10, listed here as a compatibility reference:

HDFS:2.7.2
Hive:1.3.0
HBase:1.0.2
Spark:1.5.1
Solr:5.3.1
Flume:1.6.0
Kafka:2.10-0.10.0.0
Storm:0.10.0
Hue:3.9.0
Redis:3.0.5

Setup used in this article: Redhat 6.5, jdk1.7.0_79, hadoop-2.7.3, apache-hive-2.1.1, hbase-1.2.4, hue-3.12.0

• I. Installing Hue
• II. Configuring Hue
• III. Using Hue

The detailed steps are as follows.

I. Installing Hue

1. Download Hue. The official site can be hard to reach from behind the GFW, but the releases are available on GitHub: https://github.com/cloudera/hue/releases
Note: even so, my 100 Mbps China Telecom line took 3 hours to fetch it (a mobile connection then got it in 10 seconds), so for convenience I have shared the download as an attachment. To be clear, this is purely open-source enthusiasm; the files come from github.com and gethue.com, with respect to open source.
With that out of the way, my planned install directory is /home/hadoop/BigData/hue-3.12.0; extract hue-release-3.12.0.tar.gz to /home/hadoop/BigData/hue-release-3.12.0:

$ tar zxvf hue-release-3.12.0.tar.gz -C /home/hadoop/BigData/

 

2. Install the build dependencies and compile. The reference lists are below; a python-devel RPM can be found at http://www.rpmfind.net/linux/rpm2html/search.php?query=python-devel

RedHat:
* Oracle’s JDK [(read more here)](https://www.digitalocean.com/community/tutorials/how-to-install-java-on-centos-and-fedora)
* ant
* asciidoc
* cyrus-sasl-devel
* cyrus-sasl-gssapi
* cyrus-sasl-plain
* gcc
* gcc-c++
* krb5-devel
* libffi-devel
* libtidy (for unit tests only)
* libxml2-devel
* libxslt-devel
* make
* mvn (from [“apache-maven“](https://gist.github.com/sebsto/19b99f1fa1f32cae5d00) package or maven3 tarball)
* mysql
* mysql-devel
* openldap-devel
* python-devel
* sqlite-devel
* openssl-devel (for version 7+)
* gmp-devel

Ubuntu:
* Oracle’s JDK [(read more here)](https://help.ubuntu.com/community/Java)
* ant
* gcc
* g++
* libffi-dev
* libkrb5-dev
* libmysqlclient-dev
* libsasl2-dev
* libsasl2-modules-gssapi-mit
* libsqlite3-dev
* libssl-dev
* libtidy-0.99-0 (for unit tests only)
* libxml2-dev
* libxslt-dev
* make
* mvn (from “maven“ package or maven3 tarball)
* openldap-dev / libldap2-dev
* python-dev
* python-setuptools
* libgmp3-dev
* libz-dev

$ yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libtidy libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel openssl-devel gmp-devel
$ cd /home/hadoop/BigData/hue-release-3.12.0
$ PREFIX=/home/hadoop/BigData/hue-3.12.0 make install
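Once make install completes, it is worth confirming that the build actually produced Hue's virtualenv launcher before moving on. A minimal check (the helper name is mine; depending on the Hue version, make install may place the app directly under $PREFIX or under $PREFIX/hue, so try both):

```shell
#!/bin/bash
# hue_built <dir>: succeed if <dir> contains Hue's virtualenv launcher
# at build/env/bin/hue, the layout a Hue 3.x build produces.
hue_built() {
    [ -x "$1/build/env/bin/hue" ]
}

# Paths from this article -- check both candidate locations:
#   hue_built /home/hadoop/BigData/hue-3.12.0     && echo "installed under PREFIX"
#   hue_built /home/hadoop/BigData/hue-3.12.0/hue && echo "installed under PREFIX/hue"
```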

Note: building Hue is a fairly involved process; make sure every dependency above is installed, or the build will fail with a pile of errors. Everything except mvn can be installed from yum; for installing mvn see [Linux] Installing mvn-3.9.9 on redhat6.

 

3. Start Hue
Write a script Hue_start.sh with the content below, then run sh Hue_start.sh:

$ cd /home/hadoop/BigData/hue-release-3.12.0/build/env/bin/
$ ./supervisor &
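The two commands above can be wrapped into the Hue_start.sh script mentioned earlier. A minimal sketch, assuming the paths from this article (the function name and log file location are my own choices):

```shell
#!/bin/bash
# Hue_start.sh -- start Hue's supervisor process in the background.
# The default path is the source tree used in this article.
start_hue() {
    local hue_home="${1:-/home/hadoop/BigData/hue-release-3.12.0}"
    local bin_dir="$hue_home/build/env/bin"
    if [ ! -x "$bin_dir/supervisor" ]; then
        echo "supervisor not found under $bin_dir -- did the build finish?" >&2
        return 1
    fi
    # nohup keeps Hue running after the shell exits; output goes to hue.log
    (cd "$bin_dir" && nohup ./supervisor > "$hue_home/hue.log" 2>&1 &)
}

# On the cluster node:
#   start_hue              # or: start_hue /path/to/hue-release-3.12.0
```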

 

II. Configuring Hue

1. Configure Hive, HDFS, and HBase

Step 1. Edit core-site.xml to grant proxy (impersonation) rights to the hue user. Note that the "hue" segment of the property names should match the OS user the Hue server runs as:

<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
</property>

 

Step 2. Edit hdfs-site.xml to enable WebHDFS access from the web:

<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
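After restarting HDFS, you can confirm WebHDFS is reachable before wiring it into Hue. A small helper that builds the REST URL Hue will use (host/port from this article; the function name and the user.name value are my assumptions):

```shell
# webhdfs_url <namenode-host:port> <hdfs-path>
# Build a WebHDFS REST URL; op=LISTSTATUS lists a directory.
webhdfs_url() {
    echo "http://$1/webhdfs/v1$2?op=LISTSTATUS&user.name=hadoop"
}

# From any node, a JSON directory listing means WebHDFS is enabled:
#   curl -s "$(webhdfs_url 192.168.111.140:50070 /)"
```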

 

Step 3. Edit Hue's configuration file

$ vi /home/hadoop/BigData/hue-release-3.12.0/desktop/conf/pseudo-distributed.ini

Change the entries below:

[hadoop]

# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs

[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://localhost:9000

# Default port is 14000 for HttpFs.
webhdfs_url=http://192.168.111.140:50070/webhdfs/v1

# Directory of the Hadoop configuration
hadoop_conf_dir=/home/hadoop/hadoop-2.7.3/etc/hadoop

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=HMaster

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/home/hadoop/BigData/apache-hive-2.1.1-bin/conf
[hbase]
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|localhost:9090)

# HBase configuration directory, where hbase-site.xml is located.
hbase_conf_dir=/home/hadoop/BigData/hbase-1.2.4/conf
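The [beeswax] and [hbase] sections above only tell Hue where to connect; the services themselves must be running: HiveServer2 on port 10000 (start it with hiveserver2 & or hive --service hiveserver2 &) and the HBase Thrift server on 9090 (hbase-daemon.sh start thrift). A quick check that a port is accepting connections (the helper name is mine; relies on bash's /dev/tcp):

```shell
# port_open <host> <port>: succeed if a TCP connection can be opened.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# On the cluster, after starting the daemons:
#   port_open HMaster 10000   && echo "HiveServer2 is up"
#   port_open localhost 9090  && echo "HBase Thrift server is up"
```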

 

 

 

III. Using Hue

1. Use Hive, HDFS, and HBase. Hue is very easy to use and the feature set is broad; explore it yourself. Only a Hive example follows.

In the menu, go to [Query Editors] -> [Hive] and enter: select * from students;

 

 

Common problems and fixes:

1. Maven error while compiling Hue:

[ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
make[2]: *** [/home/hadoop/BigData/hue-release-3.12.0/desktop/libs/hadoop/java-lib/hue-plugins-3.12.0-SNAPSHOT.jar] Error 1
make[2]: Leaving directory `/home/hadoop/BigData/hue-release-3.12.0/desktop/libs/hadoop'
make[1]: *** [.recursive-install-bdist/libs/hadoop] Error 2
make[1]: Leaving directory `/home/hadoop/BigData/hue-release-3.12.0/desktop'

Fix: this is usually caused by an unstable network, or by processes left over from an earlier attempt; delete the install directory and recompile.

 

 
