>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:16480 to 127.0.0.1:16379
Adding replica 127.0.0.1:16481 to 127.0.0.1:16380
Adding replica 127.0.0.1:16479 to 127.0.0.1:16381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 90141301b92594f876598d6ac99ee32360930c93 127.0.0.1:16379
slots:[0-5460] (5461 slots) master
M: 6249fa8bee783808b2a890f0fa9304b76d0292d0 127.0.0.1:16380
slots:[5461-10922] (5462 slots) master
M: bc77c4fc6f836b47e83d241d01aad2297177bb46 127.0.0.1:16381
slots:[10923-16383] (5461 slots) master
S: e4ccfa14595d949e6b52b1925822fa1c34c4fed8 127.0.0.1:16479
replicates bc77c4fc6f836b47e83d241d01aad2297177bb46
S: abc9089dc7602f0437f3e80c4b068d9942a18997 127.0.0.1:16480
replicates 90141301b92594f876598d6ac99ee32360930c93
S: 5b07bd288945ebb3b92f8c056787fad03bd9a597 127.0.0.1:16481
replicates 6249fa8bee783808b2a890f0fa9304b76d0292d0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 127.0.0.1:16379)
M: 90141301b92594f876598d6ac99ee32360930c93 127.0.0.1:16379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 6249fa8bee783808b2a890f0fa9304b76d0292d0 127.0.0.1:16380
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: bc77c4fc6f836b47e83d241d01aad2297177bb46 127.0.0.1:16381
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 5b07bd288945ebb3b92f8c056787fad03bd9a597 127.0.0.1:16481
slots: (0 slots) slave
replicates 6249fa8bee783808b2a890f0fa9304b76d0292d0
S: e4ccfa14595d949e6b52b1925822fa1c34c4fed8 127.0.0.1:16479
slots: (0 slots) slave
replicates bc77c4fc6f836b47e83d241d01aad2297177bb46
S: abc9089dc7602f0437f3e80c4b068d9942a18997 127.0.0.1:16480
slots: (0 slots) slave
replicates 90141301b92594f876598d6ac99ee32360930c93
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
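For reference, a transcript like the one above is produced by the cluster-creation step. A minimal sketch of that command, assuming redis-trib.rb sits alongside redis-cli in /home/redisCluster/src (on Redis 5 and later, redis-cli --cluster create accepts the same node list):

/home/redisCluster/src/redis-trib.rb create --replicas 1 \
  127.0.0.1:16379 127.0.0.1:16380 127.0.0.1:16381 \
  127.0.0.1:16479 127.0.0.1:16480 127.0.0.1:16481

--replicas 1 requests one replica per master, which yields the 3-master/3-replica layout shown. The [WARNING] about slaves sharing a host with their master is expected in this single-machine setup, since all six nodes run on the same host.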
Verify the cluster. When connecting to the cluster, the -c and -p (port) options must be added:
/home/redisCluster/src/redis-cli -c -p 16379
127.0.0.1:16379> set nccloud 1
-> Redirected to slot [7504] located at 127.0.0.1:16380
OK
127.0.0.1:16380> get nccloud
"1"
127.0.0.1:16380> exit
[root@rac157 redisCluster]# /home/redisCluster/src/redis-cli -c -p 16481
127.0.0.1:16481> get nccloud
-> Redirected to slot [7504] located at 127.0.0.1:16380
"1"
Appendix: common Redis cluster commands
Connection
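To connect in cluster mode (the same invocation as in the verification step; -h defaults to 127.0.0.1 when omitted):

/home/redisCluster/src/redis-cli -c -h 127.0.0.1 -p 16379

Once connected, CLUSTER INFO reports the overall cluster state and CLUSTER NODES lists every node with its role and slot ranges.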