[Kafka] Building an Open-Source Big Data Environment (Cluster): 31. Kafka-2.11

Environment used in this article: RedHat 6.5, JDK 1.7.0_79, ZooKeeper 3.5.1

Three-node configuration and hostnames (master 12C/30G, workers 8C/20G):

192.168.111.140 HMASTER zookeeper:2181 Kafka
192.168.111.141 HDATA01 zookeeper:2181 Kafka
192.168.111.142 HDATA02 zookeeper:2181 Kafka

The detailed steps are as follows:

1. Download the Kafka package: http://mirrors.hust.edu.cn/apache/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
My planned installation directory: /home/hadoop/BigData/kafka_2.11
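If you download directly on the server, wget works fine; here I assume the archive is saved under /home/hadoop so that the tar command below can find it:

$ cd /home/hadoop
$ wget http://mirrors.hust.edu.cn/apache/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz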

$ tar zxvf kafka_2.11-0.10.1.0.tgz -C /home/hadoop/BigData/
$ mv /home/hadoop/BigData/kafka_2.11-0.10.1.0 /home/hadoop/BigData/kafka_2.11

 

2. Configure Kafka
$ vi /home/hadoop/BigData/kafka_2.11/config/server.properties
Modify the following settings:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
host.name=192.168.111.140

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880

# root directory for all kafka znodes.
zookeeper.connect=192.168.111.140:2181,192.168.111.141:2181,192.168.111.142:2181
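
Before starting Kafka, it helps to confirm that each ZooKeeper instance listed in zookeeper.connect is actually reachable. A minimal check, assuming nc (netcat) is installed, uses ZooKeeper's ruok four-letter command:

$ echo ruok | nc 192.168.111.140 2181
imok
$ echo ruok | nc 192.168.111.141 2181
imok
$ echo ruok | nc 192.168.111.142 2181
imok

Each node should answer imok; if one does not, fix ZooKeeper before continuing.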

 

3. Configure the other nodes

$ scp -r /home/hadoop/BigData/kafka_2.11 hadoop@HDATA01:/home/hadoop/BigData/
$ scp -r /home/hadoop/BigData/kafka_2.11 hadoop@HDATA02:/home/hadoop/BigData/

Then, on each node, change host.name in server.properties to that machine's own IP address, and set broker.id to a unique integer (for example 1 on HDATA01 and 2 on HDATA02).
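
If you prefer not to edit the files by hand, a rough sketch with ssh and sed (the broker.id values 1 and 2, and passwordless ssh for the hadoop user, are assumptions):

$ ssh hadoop@HDATA01 "sed -i -e 's/^broker.id=.*/broker.id=1/' -e 's/^host.name=.*/host.name=192.168.111.141/' /home/hadoop/BigData/kafka_2.11/config/server.properties"
$ ssh hadoop@HDATA02 "sed -i -e 's/^broker.id=.*/broker.id=2/' -e 's/^host.name=.*/host.name=192.168.111.142/' /home/hadoop/BigData/kafka_2.11/config/server.properties"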

4. Start the Kafka cluster
It is a good idea to write an expect (or plain ssh) script to manage Kafka startup; a sketch of an ssh version follows the command below. Run the following on all three machines:

$ sh /home/hadoop/BigData/kafka_2.11/bin/kafka-server-start.sh -daemon /home/hadoop/BigData/kafka_2.11/config/server.properties
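
If you do not want to deal with expect, a plain bash-over-ssh loop is enough, assuming the hadoop user has passwordless ssh to all three hosts. A minimal sketch (the script name start-kafka-all.sh is just an example):

$ vi /home/hadoop/BigData/start-kafka-all.sh

#!/bin/bash
# Sketch: start the Kafka broker on every node of the cluster.
# Paths are identical on all nodes, so local expansion of KAFKA_HOME is fine.
KAFKA_HOME=/home/hadoop/BigData/kafka_2.11
for host in HMASTER HDATA01 HDATA02; do
    echo "Starting Kafka on ${host} ..."
    ssh hadoop@${host} "$KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties"
done

$ sh /home/hadoop/BigData/start-kafka-all.sh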

After startup, jps should show a Kafka process on each node.
With the cluster up, let's create a topic to test it.
First, create the topic:

$ ./kafka-topics.sh --create --zookeeper 192.168.111.140:2181,192.168.111.141:2181,192.168.111.142:2181 --replication-factor 2 --partitions 1 --topic topic-one
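
To confirm the topic was created with the expected partition count and replication factor:

$ ./kafka-topics.sh --describe --zookeeper 192.168.111.140:2181,192.168.111.141:2181,192.168.111.142:2181 --topic topic-one
$ ./kafka-topics.sh --list --zookeeper 192.168.111.140:2181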

Next, start a producer and a consumer to test, as follows:

$ cd /home/hadoop/BigData/kafka_2.11/bin
$ ./kafka-console-producer.sh --broker-list 192.168.111.140:9092 --topic topic-one
abc
sdf

Open a new terminal window:
$ ./kafka-console-consumer.sh --zookeeper 192.168.111.140:2181,192.168.111.141:2181,192.168.111.142:2181 --topic topic-one --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
abc
sdf
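
The deprecation warning above is harmless here. If you want to follow its advice and use the new consumer, point the console consumer at a broker instead of ZooKeeper (9092 is the broker's default listener port, the same one the producer used); it should print the same abc / sdf messages:

$ ./kafka-console-consumer.sh --bootstrap-server 192.168.111.140:9092 --topic topic-one --from-beginning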

 
