

Integrating Flume, Kafka, and Storm

2019-11-14 11:42:34

Flume collects the data.

Kafka serves as the message queue (buffer).

Storm does the stream processing.

Flume version: apache-flume-1.7.0-bin

Kafka version: kafka_2.11-0.10.1.0 (note that some Flume releases are incompatible with some Kafka releases; if the data Flume collects never shows up in the Kafka topic, check the version pairing first. I got burned by this.)

Storm version: apache-storm-0.9.2-incubating

1. Configuration (ZooKeeper must be installed first)

Flume configuration:

Create a new file named demoagent.conf under the conf directory.

(1) Netcat source: listen on a port

A simple example:

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
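To smoke-test this variant, save it under conf (the file name netcat-example.conf below is chosen here for illustration), start the agent, and send it a line; the events then show up on the console via the logger sink:

bin/flume-ng agent --conf conf --conf-file conf/netcat-example.conf --name a1 -Dflume.root.logger=INFO,console
telnet localhost 44444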

(2) Exec source: tail the output of a command

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source (exec instead of netcat: tail a log file)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/zzq/flumedemo/test.log
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(3) Flume and Kafka integration

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source (exec: tail a log file)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/zzq/flumedemo/test.log
a1.sources.r1.channels = c1

# Describe the sink (logger sink replaced by the Kafka sink)
#a1.sinks.k1.type = logger
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = testKJ1
a1.sinks.k1.kafka.bootstrap.servers = weekend114:9092,weekend115:9092,weekend116:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

我們現(xiàn)在要用的就是第3種flume 和 kafka整合,我們將這個內(nèi)容放到demoagent.conf文件

[zzq@weekend110 conf]$ cat demoagent.conf

# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source (exec: tail a log file)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/zzq/flumedemo/test.log
a1.sources.r1.channels = c1

# Describe the sink (logger sink replaced by the Kafka sink)
#a1.sinks.k1.type = logger
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = testKJ2
a1.sinks.k1.kafka.bootstrap.servers = weekend110:9092
a1.sinks.k1.kafka.flumeBatchSize = 200
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
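Note that unless the brokers keep Kafka's default of auto-creating topics (auto.create.topics.enable=true), the testKJ2 topic has to exist before the agent starts. It can be created the same way as in the Kafka commands further below (replication factor 1 is an assumption for the single-broker setup in this config):

bin/kafka-topics.sh --create --zookeeper weekend114:2181 --replication-factor 1 --partitions 1 --topic testKJ2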

Kafka configuration:

 

vim config/server.properties
broker.id=1
zookeeper.connect=weekend114:2181,weekend115:2181,weekend116:2181

Add the broker id (it must be unique on each node) and the ZooKeeper address (mine is a ZooKeeper cluster).

Storm configuration:

Edit the configuration file storm.yaml:

# The ZooKeeper cluster hosts to use
storm.zookeeper.servers:
    - "weekend114"
    - "weekend115"
    - "weekend116"
# The host nimbus runs on
nimbus.host: "weekend114"
# Worker slots on each supervisor
supervisor.slots.ports:
    - 6701
    - 6702
    - 6703
    - 6704
    - 6705

2. Startup

(1) Start Storm

On the nimbus host:
nohup ./storm nimbus 1>/dev/null 2>&1 &
nohup ./storm ui 1>/dev/null 2>&1 &
(the Storm web UI then listens on port 8080 of this host by default)

On each supervisor host:
nohup ./storm supervisor 1>/dev/null 2>&1 &

(2) Start Kafka

Start the broker on every node:
bin/kafka-server-start.sh config/server.properties

Other handy Kafka operations:

Create a topic in the Kafka cluster:
bin/kafka-topics.sh --create --zookeeper weekend114:2181 --replication-factor 3 --partitions 1 --topic order

Write messages to a topic with a producer:
bin/kafka-console-producer.sh --broker-list weekend110:9092 --topic order

Read messages from a topic with a consumer:
bin/kafka-console-consumer.sh --zookeeper weekend114:2181 --from-beginning --topic order

Show the partition and replica state of a topic:
bin/kafka-topics.sh --describe --zookeeper weekend114:2181 --topic order

List all topics:
./bin/kafka-topics.sh --list --zookeeper weekend114:2181

(3) Start Flume
bin/flume-ng agent --conf conf --conf-file conf/demoagent.conf --name a1 -Dflume.root.logger=INFO,console

我們現(xiàn)在向/home/zzq/flumedemo/test.log文件追加內(nèi)容

[zzq@weekend110 ~]$ echo '您好啊' >> /home/zzq/flumedemo/test.log

Now check the contents of the Kafka topic.
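For example, using the console consumer from the Kafka commands above, pointed at the topic configured in demoagent.conf:

bin/kafka-console-consumer.sh --zookeeper weekend114:2181 --from-beginning --topic testKJ2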

Kafka has received the line. Next we let Storm read from Kafka and do the stream processing.

Storm code download: http://download.csdn.net/detail/baidu_19473529/9746787
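If that download is unavailable, the sketch below shows what such a topology can look like on Storm 0.9.x with its bundled storm-kafka module. The ZooKeeper addresses and the testKJ2 topic follow the configs above; the class names (KafkaLogTopology, PrintBolt), the ZK offset root, the consumer id, and the print-only bolt logic are illustrative assumptions, not the code behind the link:

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class KafkaLogTopology {

    // Hypothetical terminal bolt: just prints every line read from the topic.
    public static class PrintBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            System.out.println("received: " + input.getString(0));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt, emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        // The ZooKeeper ensemble the Kafka brokers register with (from server.properties above).
        BrokerHosts hosts = new ZkHosts("weekend114:2181,weekend115:2181,weekend116:2181");
        // Topic written by the Flume Kafka sink; "/kafkaspout" is an arbitrary ZK root
        // for offset storage and "demo-consumer" an arbitrary consumer id.
        SpoutConfig spoutConf = new SpoutConfig(hosts, "testKJ2", "/kafkaspout", "demo-consumer");
        spoutConf.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConf), 1);
        builder.setBolt("print-bolt", new PrintBolt(), 1).shuffleGrouping("kafka-spout");

        // Local mode for testing; a real deployment would package a jar and use StormSubmitter.
        new LocalCluster().submitTopology("flume-kafka-storm-demo", new Config(), builder.createTopology());
    }
}

Packaged as a jar, the same topology would be submitted to the cluster with bin/storm jar and StormSubmitter instead of LocalCluster.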

That completes the integration.

