Storm HA Deployment
Chicken soup for the soul: there's no such thing as choice paralysis, you're just broke; there's no such thing as indecisiveness, you're just chicken.
1. Environment
| No. | IP Address | Hostname | Role | Installed Software |
|---|---|---|---|---|
| 1 | 192.168.186.10 | master | storm master | zookeeper, jdk1.8, storm1.2 |
| 2 | 192.168.186.11 | slave1 | storm slave1 | zookeeper, jdk1.8, storm1.2 |
| 3 | 192.168.186.12 | slave2 | storm slave2 | zookeeper, jdk1.8, storm1.2 |
Download: https://archive.apache.org/dist/storm/ (all releases) or http://storm.apache.org/downloads.html
2. Deployment
Storm has supported multiple Nimbus nodes, i.e. Nimbus HA, since the 1.x releases.
2.1 JDK Check
Check the JDK before installing:
```
[root@master ~]# java -version
java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)
```
The same check on the other two machines is omitted here; make sure the JDK is installed on all three.
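To avoid logging in to each machine, a quick loop can run the check on all three nodes at once. This is a minimal sketch, assuming passwordless SSH as root and the hostnames from /etc/hosts:

```bash
# Check the JDK version on every node in one pass (assumes passwordless SSH)
for h in master slave1 slave2; do
  echo "== $h =="
  ssh "$h" 'java -version' 2>&1 | head -1   # java -version prints to stderr
done
```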
2.2 ZooKeeper Check
Also check whether ZooKeeper is installed (Storm depends on ZooKeeper):
```
[root@master ~]# /usr/local/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@slave1 ~]# /usr/local/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@slave2 ~]# /usr/local/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
```
Make sure a ZooKeeper cluster is up.
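The cluster state can also be verified without SSH, using ZooKeeper's four-letter-word commands over the client port. A sketch, assuming nc is installed, the client port is the default 2181, and the stat command is enabled (it is by default on these ZooKeeper versions):

```bash
# Query each ZooKeeper node's mode over the client port; expect one leader and two followers
for h in master slave1 slave2; do
  echo -n "$h: "
  echo stat | nc "$h" 2181 | grep Mode
done
```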
2.3 Installation
Distributed deployment; official reference: http://storm.apache.org/releases/1.2.3/Setting-up-a-Storm-cluster.html
```
cd /usr/local/src/
wget https://archive.apache.org/dist/storm/apache-storm-1.2.3/apache-storm-1.2.3.tar.gz
tar zxf apache-storm-1.2.3.tar.gz -C /usr/local/
\ln -sf /usr/local/apache-storm-1.2.3 /usr/local/storm
mkdir -p /usr/local/storm/data
```
Detailed steps:
```
[root@master conf]# grep 192 /etc/hosts
192.168.186.10 master
192.168.186.11 slave1
192.168.186.12 slave2
[root@master ~]# cd /usr/local/src/
[root@master src]# wget https://archive.apache.org/dist/storm/apache-storm-1.2.3/apache-storm-1.2.3.tar.gz
--2019-09-04 10:31:01--  https://archive.apache.org/dist/storm/apache-storm-1.2.3/apache-storm-1.2.3.tar.gz
[root@master src]# tar zxf apache-storm-1.2.3.tar.gz -C /usr/local/
[root@master src]# \ln -sf /usr/local/apache-storm-1.2.3 /usr/local/storm
[root@master local]# mkdir -p /usr/local/storm/data    # data is where Storm stores its state
```
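Before unpacking, it is worth verifying the download. This is an optional sketch, assuming a .sha512 file is published alongside the tarball on archive.apache.org (Apache checksum files are not always in a format sha512sum -c accepts, so compare by eye):

```bash
# Fetch the published checksum and compare it against the local file
cd /usr/local/src/
wget https://archive.apache.org/dist/storm/apache-storm-1.2.3/apache-storm-1.2.3.tar.gz.sha512
cat apache-storm-1.2.3.tar.gz.sha512      # published checksum
sha512sum apache-storm-1.2.3.tar.gz       # locally computed checksum
```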
2.4 Environment Variables
```
[root@master storm]# tail -2 /etc/profile
export STORM_HOME=/usr/local/storm
export PATH=$PATH:$STORM_HOME/bin
[root@master storm]# source /etc/profile
[root@master conf]# egrep -v '#|^$' storm.yaml
storm.zookeeper.servers:
    - "master"
    - "slave1"
    - "slave2"
nimbus.seeds: ["master","slave1","slave2"]
storm.local.dir: "/usr/local/storm/data"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
```
Newer releases dropped the nimbus.host parameter and replaced it with nimbus.seeds, which is mainly used to discover the Nimbus leader.
Configuration file:
```yaml
# ZooKeeper hostnames
storm.zookeeper.servers:
    - "master"
    - "slave1"
    - "slave2"
# Candidate master (Nimbus) hostnames
nimbus.seeds: ["master","slave1","slave2"]
# Storm data directory
storm.local.dir: "/usr/local/storm/data"
# Worker slot ports
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
```
Note: the indentation before the 67* port entries must be made of spaces, not TABs. nimbus.seeds can list multiple hosts, e.g. nimbus.seeds: ["master","master2"], which together form the HA setup.
2.5 Distributing the Package
```
[root@master ~]# rsync -az /usr/local/{storm,apache-storm-1.2.3} slave1:/usr/local/
[root@master ~]# rsync -az /usr/local/{storm,apache-storm-1.2.3} slave2:/usr/local/
[root@slave1 ~]# tail -2 /etc/profile
export STORM_HOME=/usr/local/storm
export PATH=$PATH:$STORM_HOME/bin
[root@slave1 ~]# source /etc/profile
[root@slave2 ~]# tail -2 /etc/profile
export STORM_HOME=/usr/local/storm
export PATH=$PATH:$STORM_HOME/bin
[root@slave2 ~]# source /etc/profile
```
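The rsync and /etc/profile edits on each slave can be folded into one loop. A sketch, assuming root SSH access to both slaves:

```bash
# Push the Storm install (and its symlink) to each slave and append the env vars only once
for h in slave1 slave2; do
  rsync -az /usr/local/{storm,apache-storm-1.2.3} "$h":/usr/local/
  ssh "$h" 'grep -q STORM_HOME /etc/profile || cat >> /etc/profile <<\EOF
export STORM_HOME=/usr/local/storm
export PATH=$PATH:$STORM_HOME/bin
EOF'
done
```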
2.6 Starting the Cluster
- On the master node, start nimbus, supervisor, ui, and logviewer:
```
/usr/local/storm/bin/storm nimbus >/dev/null 2>&1 &
/usr/local/storm/bin/storm supervisor >/dev/null 2>&1 &
/usr/local/storm/bin/storm ui >/dev/null 2>&1 &
/usr/local/storm/bin/storm logviewer >/dev/null 2>&1 &
```
```
[root@master local]# /usr/local/storm/bin/storm nimbus >/dev/null 2>&1 &
[1] 66948
[root@master local]# /usr/local/storm/bin/storm supervisor >/dev/null 2>&1 &
[2] 67014
[root@master local]# /usr/local/storm/bin/storm ui >/dev/null 2>&1 &
[3] 67015
[root@master local]# /usr/local/storm/bin/storm logviewer >/dev/null 2>&1 &
[4] 67075
```

Check:

```
[root@master local]# jps|egrep -i 'nimbus|supervisor|logviewer|core'
67075 logviewer
66948 nimbus
67015 core
67014 Supervisor
```
core is the UI process.
- On the slave nodes, start nimbus, supervisor, and logviewer:
```
/usr/local/storm/bin/storm nimbus >/dev/null 2>&1 &
/usr/local/storm/bin/storm supervisor >/dev/null 2>&1 &
/usr/local/storm/bin/storm logviewer >/dev/null 2>&1 &
```
slave1:

```
[root@slave1 local]# /usr/local/storm/bin/storm nimbus >/dev/null 2>&1 &
[1] 19068
[root@slave1 local]# /usr/local/storm/bin/storm supervisor >/dev/null 2>&1 &
[2] 19069
[root@slave1 local]# /usr/local/storm/bin/storm logviewer >/dev/null 2>&1 &
[3] 19128
```

slave2:

```
[root@slave2 local]# /usr/local/storm/bin/storm nimbus >/dev/null 2>&1 &
[1] 29818
[root@slave2 local]# /usr/local/storm/bin/storm supervisor >/dev/null 2>&1 &
[2] 29819
[root@slave2 local]# /usr/local/storm/bin/storm logviewer >/dev/null 2>&1 &
[3] 29857
```

Check:

```
[root@slave1 local]# jps|egrep -i 'nimbus|super|logviewer'
19128 logviewer
19068 nimbus
19069 Supervisor
[root@slave2 local]# jps|egrep -i 'nimbus|supervisor|logviewer'
29857 logviewer
29818 nimbus
29819 Supervisor
```
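Starting several daemons by hand on every node gets tedious; the commands above can be wrapped in a small helper. This is a hypothetical start-storm.sh sketch (the script name and role argument are my own), run once per node:

```bash
#!/bin/bash
# start-storm.sh <master|slave> -- start the Storm daemons appropriate for this node's role
role=${1:?usage: start-storm.sh master|slave}
daemons="nimbus supervisor logviewer"
[ "$role" = "master" ] && daemons="$daemons ui"   # only the master also runs the web UI here
for d in $daemons; do
  /usr/local/storm/bin/storm "$d" >/dev/null 2>&1 &
  echo "started storm $d (pid $!)"
done
```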
logviewer lets you open a node's logs from the web page by clicking the corresponding port number (8000 by default); ui serves the web dashboard (port 8080 by default).
2.7 UI Login

Here you can see which Nimbus is the leader.
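The same information is exposed by the UI's REST API, which is handy for scripting. A sketch, assuming the UI runs on master at its default port 8080 (the /api/v1/nimbus/summary endpoint exists in Storm 1.x; check the REST API docs for your release):

```bash
# List all Nimbus nodes; the current leader reports "status": "Leader"
curl -s http://master:8080/api/v1/nimbus/summary | python -m json.tool
```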
2.8 Storm Command-Line Operations
1) nimbus: start the nimbus daemon: `storm nimbus`
2) supervisor: start the supervisor daemon: `storm supervisor`
3) ui: start the UI daemon: `storm ui`
4) list: list running topologies and their statuses: `storm list`
5) logviewer: serve a web interface for viewing Storm log files: `storm logviewer`
6) jar: submit a topology: `storm jar <jar-path> <topology-package.topology-class> <topology-name>`
7) kill: kill the topology named topology-name: `storm kill topology-name [-w wait-time-secs]` (-w: how long to wait before killing the topology)
8) activate: activate the specified topology's spouts: `storm activate topology-name`
9) deactivate: deactivate the specified topology's spouts: `storm deactivate topology-name`
10) help: print a help message or the list of available commands: `storm help` / `storm help <command>`
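To try the storm jar / list / kill commands end to end, the binary distribution ships with prebuilt storm-starter topologies (the jar path below assumes the 1.2.3 tarball layout; adjust it if your build differs):

```bash
# Submit the WordCount example, confirm it is running, then kill it after a 10s drain
storm jar /usr/local/storm/examples/storm-starter/storm-starter-topologies-1.2.3.jar \
    org.apache.storm.starter.WordCountTopology wordcount
storm list
storm kill wordcount -w 10
```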
3. HA Test
At this point the Nimbus leader is master, so I kill the Nimbus process on master.
Detailed steps:
```
[root@master ~]# jps
67075 logviewer
67762 Jps
66948 nimbus
67015 core
67014 Supervisor
8969 ThriftServer
7578 ResourceManager
7419 SecondaryNameNode
7230 NameNode
7982 QuorumPeerMain
[root@master ~]# kill -9 66948
[root@master ~]# jps
67777 Jps
67075 logviewer
67015 core
67014 Supervisor
8969 ThriftServer
7578 ResourceManager
7419 SecondaryNameNode
7230 NameNode
7982 QuorumPeerMain
```
Wait a moment, refresh the page, and check the state.

As the screenshot shows, failover happened: HA works.
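To restore full redundancy after the test, simply start Nimbus on master again; it re-registers with ZooKeeper and rejoins as a standby (non-leader) node:

```bash
# Restart the killed Nimbus and confirm the process is back
/usr/local/storm/bin/storm nimbus >/dev/null 2>&1 &
jps | grep -i nimbus
```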