Redis Cluster is Redis's production-grade solution for high availability and horizontal scalability; it scales out by partitioning the key space into hash slots.

Cluster Planning

  • Ideal layout

The diagram does not show the cluster's actual initial topology; it only illustrates the ideal structure of one master and one slave per machine.

(figure: ideal topology, one master and one slave per host)

HOSTNAME   IP              PORT   ROLE
redis01    13.13.13.5/16   6379   Master
                           6380   Slave
redis02    13.13.13.6/16   6379   Master
                           6380   Slave
redis03    13.13.13.7/16   6379   Master
                           6380   Slave
  • Configure /etc/hosts
[root@redis01 ~]# tail -3 /etc/hosts
13.13.13.5 redis01
13.13.13.6 redis02
13.13.13.7 redis03
[root@redis01 ~]# 

Cluster Setup

The firewall and SELinux are enabled on all hosts, so both must be configured for the cluster ports.

  • Configure the firewall and SELinux (all hosts)
  1. Firewall

By default, every cluster node opens a dedicated TCP channel for node-to-node communication; this cluster-bus port is the base port plus 10000.

add_remove_port.sh script template: https://blog.csdn.net/weixin_42480750/article/details/108975726

[root@redis01 ~]# sh add_remove_port.sh 

Quikly Open/Shudown Ports

e.g. 37001
e.g. 37001 37003
e.g. 37001-37003 37006

ports:6379 16379 6380 16380

e.g tcp/udp/all, default tcp.

protocol:tcp

e.g add/remove, default add.

action:

firewall-cmd --zone=public --permanent --add-port=6379/tcp
success
firewall-cmd --zone=public --permanent --add-port=16379/tcp
success
firewall-cmd --zone=public --permanent --add-port=6380/tcp
success
firewall-cmd --zone=public --permanent --add-port=16380/tcp
success

firewall-cmd --reload
success
firewall-cmd --list-port
6379/tcp 16379/tcp 6380/tcp 16380/tcp

[root@redis01 ~]# 
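The port list fed to the script above follows directly from the bus-port rule; a minimal sketch that generates the same firewall-cmd invocations (assuming firewalld; echoed as a dry run):

```shell
#!/bin/bash
# For each Redis instance port, also open its cluster-bus port (base + 10000).
open_ports() {
  local port bus
  for port in "$@"; do
    bus=$((port + 10000))
    # echoed as a dry run; pipe to sh (or drop the echo) to apply for real
    echo "firewall-cmd --zone=public --permanent --add-port=${port}/tcp"
    echo "firewall-cmd --zone=public --permanent --add-port=${bus}/tcp"
  done
}
open_ports 6379 6380
```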
  2. SELinux
[root@redis01 ~]# semanage port -l | grep redis
redis_port_t                   tcp      6379, 16379, 26379
[root@redis01 ~]# semanage port -a -t redis_port_t -p tcp 6380
[root@redis01 ~]# semanage port -a -t redis_port_t -p tcp 16380
  • Set up the Masters
  1. Configuration (all hosts)

Be sure to put the host's own IP before the loopback IP, i.e. bind 13.13.13.5 127.0.0.1; otherwise cluster creation fails (the reason is unclear).

[root@redis01 ~]# dnf install redis
[root@redis01 ~]# cp /etc/redis.conf{,.bak}
[root@redis01 ~]# vi /etc/redis.conf
[root@redis01 ~]# cat /etc/redis.conf
daemonize yes
port 6379
bind 13.13.13.5 127.0.0.1
dir     /var/lib/redis
pidfile /var/run/redis/redis.pid
logfile /var/log/redis/redis.log

# If yes, enables Redis Cluster support in a specific Redis instance. 
# Otherwise the instance starts as a stand alone instance as usual.
cluster-enabled yes

# This is not a user editable configuration file, but the file where
# a Redis Cluster node automatically persists the cluster configuration 
# every time there is a change, in order to be able to re-read it at startup.
cluster-config-file nodes.conf

# The maximum amount of time a Redis Cluster node can be unavailable,
#  without it being considered as failing.
cluster-node-timeout 5000

appendonly yes
[root@redis01 ~]# 
[root@redis02 ~]# cat /etc/redis.conf
....
bind 13.13.13.6 127.0.0.1
....
[root@redis02 ~]# 
[root@redis03 ~]# cat /etc/redis.conf
....
bind 13.13.13.7 127.0.0.1
....
[root@redis03 ~]# 
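Since the three configs differ only in the bind address, they can be templated out; a hedged sketch (render_conf is a hypothetical helper, not part of the scripts used above):

```shell
#!/bin/bash
# Hypothetical helper: emit a cluster-enabled redis.conf for a given bind IP
# and port, mirroring the settings shown above.
render_conf() {
  local ip=$1 port=$2
  cat <<EOF
daemonize yes
port ${port}
bind ${ip} 127.0.0.1
dir     /var/lib/redis
pidfile /var/run/redis/redis.pid
logfile /var/log/redis/redis.log
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
}
render_conf 13.13.13.6 6379   # config for redis02
```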
  2. Start

When started in cluster mode, the file named by cluster-config-file is generated automatically to record the current cluster state.

[root@redis01 ~]# systemctl start redis
[root@redis01 ~]# ls /var/lib/redis/
appendonly.aof  nodes.conf      
[root@redis01 ~]# cat /var/lib/redis/nodes.conf 
9de08b60a9ea7da488c21f086b124b7577d8016f :0@0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@redis01 ~]# redis-cli 
127.0.0.1:6379> CLUSTER INFO
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
127.0.0.1:6379> CLUSTER NODES
9de08b60a9ea7da488c21f086b124b7577d8016f :6379@16379 myself,master - 0 0 0 connected
127.0.0.1:6379> 
  • Set up the Slaves
  1. Create a second instance (all hosts)

setup_another_redis_instance.sh script reference: https://blog.csdn.net/weixin_42480750/article/details/109024446

[root@redis01 ~]# head -12 setup_another_redis_instance.sh 
#!/bin/bash

# Setup Another Redis Instance
#  -- all you need to do is defining two vars below
#
# suffix : distinguish from standard redis instance
#          e.g standard instance conf-file : /etc/redis.conf
#              new created instance conf-file : /etc/redis-6380.conf : which will add a suffix
# port : distinguish from the standard port 6379
#
suffix=6380
port=6380
[root@redis01 ~]# sh setup_another_redis_instance.sh 
Setup work seems done!
Now you can either check the setup log file "/tmp/setup_another_redis_instance_2020-10-12.log" to see if had something wrong.
Or just start the service directly with the command : "systemctl start redis-6380.service " 
[root@redis01 ~]# 
  2. Edit the configuration file (all hosts)
[root@redis01 ~]# cp /etc/redis-6380.conf{,.bak}
[root@redis01 ~]# vi /etc/redis-6380.conf
[root@redis01 ~]# cat /etc/redis-6380.conf
daemonize yes
port 6380
bind 13.13.13.5 127.0.0.1
dir     /var/lib/redis-6380
pidfile /var/run/redis-6380/redis.pid
logfile /var/log/redis-6380/redis.log

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
[root@redis01 ~]# 
[root@redis02 ~]# cat /etc/redis-6380.conf
....
bind 13.13.13.6 127.0.0.1
....
[root@redis02 ~]# 
[root@redis03 ~]# cat /etc/redis-6380.conf
....
bind 13.13.13.7 127.0.0.1
....
[root@redis03 ~]# 
  3. Start
[root@redis01 ~]# systemctl start redis-6380.service
[root@redis01 ~]# cat /var/lib/redis-6380/nodes.conf 
d116b945d1f7fcfb28189a416d89697696d126a7 :0@0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@redis01 ~]# redis-cli -p 6380
127.0.0.1:6380> CLUSTER NODES
d116b945d1f7fcfb28189a416d89697696d126a7 :6380@16380 myself,master - 0 0 0 connected
127.0.0.1:6380>
  • Create the cluster

Reference: https://redis.io/topics/cluster-tutorial

[root@redis01 ~]# redis-cli --cluster create 13.13.13.5:6379 13.13.13.5:6380 \
> 13.13.13.6:6379 13.13.13.6:6380 13.13.13.7:6379 13.13.13.7:6380 \
> --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 13.13.13.6:6380 to 13.13.13.5:6379
Adding replica 13.13.13.5:6380 to 13.13.13.6:6379
Adding replica 13.13.13.7:6380 to 13.13.13.7:6379
>>> Trying to optimize slaves allocation for anti-affinity
[OK] Perfect anti-affinity obtained!
M: feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379
   slots:[0-5460] (5461 slots) master
S: f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380
   replicates 425226a07870d852f0e554521e5f5b5008921bc5
M: a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379
   slots:[5461-10922] (5462 slots) master
S: d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380
   replicates feea699b984fea4aad8d88db5f08cb0ec843877f
M: 425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379
   slots:[10923-16383] (5461 slots) master
S: d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380
   replicates a20a81c4dea92bbb899372e5f7ce5cb6882367b4
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 13.13.13.5:6379)
M: feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380
   slots: (0 slots) slave
   replicates feea699b984fea4aad8d88db5f08cb0ec843877f
S: d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380
   slots: (0 slots) slave
   replicates a20a81c4dea92bbb899372e5f7ce5cb6882367b4
M: a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380
   slots: (0 slots) slave
   replicates 425226a07870d852f0e554521e5f5b5008921bc5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@redis01 ~]# 

The automatic assignment is quite reasonable: one master and one slave per host.


  • Using the cluster: requests are spread across the master nodes by key hash

-c: enable cluster mode (the client follows -MOVED and -ASK redirections)

[root@redis01 ~]# redis-cli -c
127.0.0.1:6379> set k1 v1
-> Redirected to slot [12706] located at 13.13.13.7:6379
OK
13.13.13.7:6379> set k2 v2
-> Redirected to slot [449] located at 13.13.13.5:6379
OK
13.13.13.5:6379> set k3 v3
OK
13.13.13.5:6379> 
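The redirections above come from HASH_SLOT = CRC16(key) mod 16384, where CRC16 is the CCITT/XMODEM variant named in the Redis Cluster specification; a bash sketch of the slot computation (no key hash tags handled):

```shell
#!/bin/bash
# CRC16-CCITT (XMODEM): poly 0x1021, init 0, no reflection -- the checksum
# Redis Cluster uses to map keys to hash slots.
crc16() {
  local s=$1 crc=0 i c j
  for ((i = 0; i < ${#s}; i++)); do
    printf -v c '%d' "'${s:i:1}"       # ASCII code of the current character
    crc=$((crc ^ (c << 8)))
    for ((j = 0; j < 8; j++)); do
      if ((crc & 0x8000)); then
        crc=$((((crc << 1) ^ 0x1021) & 0xFFFF))
      else
        crc=$(((crc << 1) & 0xFFFF))
      fi
    done
  done
  echo "$crc"
}
slot() { echo $(($(crc16 "$1") % 16384)); }
slot k1   # matches the redirection to slot 12706 shown above
slot k2   # matches slot 449
```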
  • Rebalancing cluster roles
  1. Reboot all servers to simulate a scrambled cluster.

After the restart, both instances on redis02 end up as slaves, while both instances on redis01 are masters, putting too much load on redis01.

[root@redis01 ~]# cat /var/lib/redis/nodes.conf 
a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379@16379 slave d017adb66e67e1b7761860d8a11edb628cea140b 0 1602484347000 8 connected
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 slave f73a319996400e4dbc651a4c97a5588a527531a4 0 1602484347549 7 connected
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 master - 0 1602484347000 7 connected 10923-16383
feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379@16379 myself,master - 0 1602484347000 1 connected 0-5460
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 master - 0 1602484347448 8 connected 5461-10922
d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380@16380 slave feea699b984fea4aad8d88db5f08cb0ec843877f 0 1602484347000 4 connected
vars currentEpoch 8 lastVoteEpoch 8
[root@redis01 ~]# 


  2. Adjust the cluster: promote redis02:6379 and redis03:6379 from slave back to master

CLUSTER FAILOVER takeover: run on a replica to force it to take over as master, without waiting for the rest of the cluster to agree.

[root@redis03 ~]# redis-cli
127.0.0.1:6379> CLUSTER FAILOVER takeover
OK
127.0.0.1:6379> CLUSTER NODES
....
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602486266569 9 connected
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 myself,master - 0 1602486266000 9 connected 10923-16383
....
127.0.0.1:6379> 
[root@redis02 ~]# redis-cli 
127.0.0.1:6379> CLUSTER FAILOVER takeover
OK
127.0.0.1:6379> CLUSTER NODES
....
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 slave a20a81c4dea92bbb899372e5f7ce5cb6882367b4 0 1602486495092 10 connected
a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379@16379 myself,master - 0 1602486493000 10 connected 5461-10922
....
127.0.0.1:6379> 

Scaling Out and In

Scaling Out

  1. Create a new VM and configure its firewall and SELinux
[root@redis04 ~]# sh add_remove_port.sh 

Quikly Open/Shudown Ports

e.g. 37001
e.g. 37001 37003
e.g. 37001-37003 37006

ports:6379 16379 6380 16380

e.g tcp/udp/all, default tcp.

protocol:tcp

e.g add/remove, default add.

action:

firewall-cmd --zone=public --permanent --add-port=6379/tcp
success
firewall-cmd --zone=public --permanent --add-port=16379/tcp
success
firewall-cmd --zone=public --permanent --add-port=6380/tcp
success
firewall-cmd --zone=public --permanent --add-port=16380/tcp
success

firewall-cmd --reload
success
firewall-cmd --list-port
6379/tcp 16379/tcp 6380/tcp 16380/tcp

[root@redis04 ~]# semanage port -a -t redis_port_t -p tcp 6380
[root@redis04 ~]# semanage port -a -t redis_port_t -p tcp 16380
[root@redis04 ~]# semanage port -l | grep redis
redis_port_t                   tcp      16380, 6380, 6379, 16379, 26379
[root@redis04 ~]# 
  2. Configure the new Master instance
[root@redis04 ~]# cat /etc/redis.conf 
daemonize yes
port 6379
bind 13.13.13.8 127.0.0.1
dir     /var/lib/redis
pidfile /var/run/redis/redis.pid
logfile /var/log/redis/redis.log

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
[root@redis04 ~]# 
  3. Add the new Master instance to the cluster
[root@redis04 ~]# systemctl start redis
[root@redis04 ~]# redis-cli --cluster add-node 13.13.13.8:6379 13.13.13.5:6379
>>> Adding node 13.13.13.8:6379 to cluster 13.13.13.5:6379
....
>>> Send CLUSTER MEET to node 13.13.13.8:6379 to make it join the cluster.
[OK] New node added correctly.
[root@redis04 ~]# redis-cli CLUSTER NODES | grep myself
83cf9b7471c71783a59f93811accdc0380f4f2c3 :6379@16379 myself,master - 0 0 0 connected
[root@redis04 ~]# 
  4. Reshard the slots

There are 16384 slots in total; rebalanced across four masters that is 4096 each, so this step moves 4096 slots to the new master.

The 4096 slots are deducted evenly from all of the existing masters.
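A quick sanity check of the slot arithmetic behind that reshard plan:

```shell
#!/bin/bash
# Growing from 3 masters to 4: each master's fair share of the 16384 slots.
total=16384
share=$((total / 4))        # 4096 slots for the new master
from_each=$((share / 3))    # ~1365 slots pulled from each existing master
echo "$share $from_each"
```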

[root@redis04 ~]# redis-cli --cluster reshard 13.13.13.8:6379
>>> Performing Cluster Check (using node 13.13.13.8:6379)
M: 83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379
   slots: (0 slots) master
S: f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380
   slots: (0 slots) slave
   replicates 425226a07870d852f0e554521e5f5b5008921bc5
S: d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380
   slots: (0 slots) slave
   replicates feea699b984fea4aad8d88db5f08cb0ec843877f
M: a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380
   slots: (0 slots) slave
   replicates a20a81c4dea92bbb899372e5f7ce5cb6882367b4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 83cf9b7471c71783a59f93811accdc0380f4f2c3
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
....
Do you want to proceed with the proposed reshard plan (yes/no)? yes
  5. Verify the cluster works (note that without -c, redis-cli reports MOVED instead of following the redirection)
[root@redis04 ~]# redis-cli  CLUSTER NODES
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602491321000 9 connected
d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380@16380 slave feea699b984fea4aad8d88db5f08cb0ec843877f 0 1602491322203 1 connected
a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379@16379 master - 0 1602491321701 10 connected 6827-10922
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602491320000 9 connected 12288-16383
83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379@16379 myself,master - 0 1602491321000 11 connected 0-1364 5461-6826 10923-12287
feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379@16379 master - 0 1602491320594 1 connected 1365-5460
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 slave a20a81c4dea92bbb899372e5f7ce5cb6882367b4 0 1602491321196 10 connected
[root@redis04 ~]# 


[root@redis04 ~]# redis-cli
127.0.0.1:6379> set n1 m1
(error) MOVED 3671 13.13.13.5:6379
127.0.0.1:6379> set n2 m2
(error) MOVED 15924 13.13.13.7:6379
127.0.0.1:6379> set n3 m3
OK
127.0.0.1:6379> set n4 m4
(error) MOVED 7922 13.13.13.6:6379
127.0.0.1:6379> 
  • Add a new Slave
  1. Create the new Slave instance
[root@redis04 ~]# head -12 setup_another_redis_instance.sh 
#!/bin/bash

# Setup Another Redis Instance
#  -- all you need to do is defining two vars below
#
# suffix : distinguish from standard redis instance
#          e.g standard redis instance conf-file : /etc/redis.conf
#              this new created instance conf-file : /etc/redis-new.conf : which will add suffix 'new'
# port : distinguish from the standard port 6379
#
suffix=6380
port=6380
[root@redis04 ~]# sh setup_another_redis_instance.sh 
Setup work seems done!
Now you can either check the setup log file "/tmp/setup_another_redis_instance_2020-10-12.log" to see if had something wrong.
Or just start the service directly with the command : "systemctl start redis-6380.service " 
[root@redis04 ~]# 
  2. Configure the new Slave instance
[root@redis04 ~]# cat /etc/redis-6380.conf 
daemonize yes
port 6380
bind 13.13.13.8 127.0.0.1
dir     /var/lib/redis-6380
pidfile /var/run/redis-6380/redis.pid
logfile /var/log/redis-6380/redis.log

cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
[root@redis04 ~]# 
  3. Add the new Slave instance to the cluster and reorganize the replication layout
[root@redis04 ~]# redis-cli  CLUSTER NODES
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602491321000 9 connected
d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380@16380 slave feea699b984fea4aad8d88db5f08cb0ec843877f 0 1602491322203 1 connected
a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379@16379 master - 0 1602491321701 10 connected 6827-10922
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602491320000 9 connected 12288-16383
83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379@16379 myself,master - 0 1602491321000 11 connected 0-1364 5461-6826 10923-12287
feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379@16379 master - 0 1602491320594 1 connected 1365-5460
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 slave a20a81c4dea92bbb899372e5f7ce5cb6882367b4 0 1602491321196 10 connected
[root@redis04 ~]#


[root@redis04 ~]# redis-cli cluster nodes
....
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602492936831 9 connected 12288-16383
....
[root@redis04 ~]# redis-cli --cluster add-node 13.13.13.8:6380 13.13.13.8:6379 --cluster-slave \
> --cluster-master-id 425226a07870d852f0e554521e5f5b5008921bc5
>>> Adding node 13.13.13.8:6380 to cluster 13.13.13.8:6379
....
>>> Configure node as replica of 13.13.13.7:6379.
[OK] New node added correctly.
[root@redis04 ~]# redis-cli cluster nodes
....
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602492978646 9 connected 12288-16383
c738886ecd3de428cfe1cd212f202d564a11eba0 13.13.13.8:6380@16380 slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602492977000 9 connected
....
[root@redis04 ~]# 
[root@redis04 ~]# redis-cli -h 13.13.13.5 -p 6380
13.13.13.5:6380> CLUSTER NODES
....
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 myself,slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602493274000 7 connected
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602493275576 9 connected 12288-16383
....
13.13.13.5:6380> CLUSTER REPLICATE 83cf9b7471c71783a59f93811accdc0380f4f2c3
OK
13.13.13.5:6380> CLUSTER NODES
....
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 myself,slave 83cf9b7471c71783a59f93811accdc0380f4f2c3 0 1602493397000 7 connected
83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379@16379 master - 0 1602493397580 11 connected 0-1364 5461-6826 10923-12287
....
13.13.13.5:6380> 

Scaling In

Decommission all Redis instances on the redis02 node.

  1. Distribute the slots owned by the redis02 master evenly among the other Master nodes

4096 split three ways gives 1365, 1365 and 1366.
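A minimal sketch of that three-way split:

```shell
#!/bin/bash
# Shrinking from 4 masters to 3: split the departing master's 4096 slots.
total=4096; parts=3
base=$((total / parts))               # 1365
last=$((total - base * (parts - 1)))  # 1366 -- the remainder goes to one receiver
echo "$base $base $last"
```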

[root@redis02 ~]# redis-cli --cluster reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: a20a81c4dea92bbb899372e5f7ce5cb6882367b4 127.0.0.1:6379
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
....
How many slots do you want to move (from 1 to 16384)? 1365
What is the receiving node ID? feea699b984fea4aad8d88db5f08cb0ec843877f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: a20a81c4dea92bbb899372e5f7ce5cb6882367b4
Source node #2: done
[root@redis02 ~]# redis-cli --cluster reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: a20a81c4dea92bbb899372e5f7ce5cb6882367b4 127.0.0.1:6379
   slots:[8192-10922] (2731 slots) master
   1 additional replica(s)
....
M: 425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
....
How many slots do you want to move (from 1 to 16384)? 1365
What is the receiving node ID? 425226a07870d852f0e554521e5f5b5008921bc5
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: a20a81c4dea92bbb899372e5f7ce5cb6882367b4
Source node #2: done
[root@redis02 ~]# redis-cli --cluster reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: a20a81c4dea92bbb899372e5f7ce5cb6882367b4 127.0.0.1:6379
   slots:[9557-10922] (1366 slots) master
   1 additional replica(s)
....
M: 83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
....
How many slots do you want to move (from 1 to 16384)? 1366
What is the receiving node ID? 83cf9b7471c71783a59f93811accdc0380f4f2c3
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: a20a81c4dea92bbb899372e5f7ce5cb6882367b4
Source node #2: done
  2. Remove all Redis instances on redis02 from the cluster
[root@redis02 ~]# redis-cli CLUSTER NODES
....
a20a81c4dea92bbb899372e5f7ce5cb6882367b4 13.13.13.6:6379@16379 myself,master - 0 1602494327000 10 connected
d383742388b9c43dea148d515eabd60297139835 13.13.13.6:6380@16380 slave feea699b984fea4aad8d88db5f08cb0ec843877f 0 1602494329552 12 connected
....
[root@redis02 ~]# redis-cli --cluster del-node 127.0.0.1:6379 a20a81c4dea92bbb899372e5f7ce5cb6882367b4
>>> Removing node a20a81c4dea92bbb899372e5f7ce5cb6882367b4 from cluster 127.0.0.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis02 ~]# redis-cli --cluster del-node 127.0.0.1:6380 d383742388b9c43dea148d515eabd60297139835
>>> Removing node d383742388b9c43dea148d515eabd60297139835 from cluster 127.0.0.1:6380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis02 ~]# ss -luntp | egrep '6379|6380'
[root@redis02 ~]# redis-cli -h 13.13.13.5 CLUSTER NODES
83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379@16379 master - 0 1602494448321 14 connected 0-1364 5461-6826 9557-12287
c738886ecd3de428cfe1cd212f202d564a11eba0 13.13.13.8:6380@16380 slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602494447314 13 connected
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602494448000 13 connected 8192-9556 12288-16383
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 slave 83cf9b7471c71783a59f93811accdc0380f4f2c3 0 1602494448000 14 connected
feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379@16379 myself,master - 0 1602494447000 12 connected 1365-5460 6827-8191
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 slave 83cf9b7471c71783a59f93811accdc0380f4f2c3 0 1602494447516 14 connected
[root@redis02 ~]# 
  3. Rebalance the cluster structure


[root@redis01 ~]# redis-cli -h 13.13.13.7 -p 6380
13.13.13.7:6380> CLUSTER REPLICATE feea699b984fea4aad8d88db5f08cb0ec843877f
OK
13.13.13.7:6380> CLUSTER NODES
83cf9b7471c71783a59f93811accdc0380f4f2c3 13.13.13.8:6379@16379 master - 0 1602494847353 14 connected 0-1364 5461-6826 9557-12287
425226a07870d852f0e554521e5f5b5008921bc5 13.13.13.7:6379@16379 master - 0 1602494848357 13 connected 8192-9556 12288-16383
c738886ecd3de428cfe1cd212f202d564a11eba0 13.13.13.8:6380@16380 slave 425226a07870d852f0e554521e5f5b5008921bc5 0 1602494847855 13 connected
f73a319996400e4dbc651a4c97a5588a527531a4 13.13.13.5:6380@16380 slave 83cf9b7471c71783a59f93811accdc0380f4f2c3 0 1602494848000 14 connected
feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379@16379 master - 0 1602494848560 12 connected 1365-5460 6827-8191
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 myself,slave feea699b984fea4aad8d88db5f08cb0ec843877f 0 1602494848000 8 connected
13.13.13.7:6380> 
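To double-check a final topology without reading raw CLUSTER NODES lines, the output can be condensed with a little awk; a sketch (sample input copied from the transcript above):

```shell
#!/bin/bash
# Reduce CLUSTER NODES output to "address role [of master-id]" lines.
summarize() {
  awk '{
    role = ($3 ~ /master/) ? "master" : "slave"
    split($2, a, "@")            # strip the cluster-bus suffix, e.g. @16380
    if (role == "master") print a[1], role
    else                  print a[1], role, "of", $4
  }'
}
# Sample: two lines from the final CLUSTER NODES output above.
summarize <<'EOF'
feea699b984fea4aad8d88db5f08cb0ec843877f 13.13.13.5:6379@16379 master - 0 1602494848560 12 connected 1365-5460 6827-8191
d017adb66e67e1b7761860d8a11edb628cea140b 13.13.13.7:6380@16380 myself,slave feea699b984fea4aad8d88db5f08cb0ec843877f 0 1602494848000 8 connected
EOF
```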