InnoDB ClusterSet Deployment With MySQLRouter

Translated from: https://www.percona.com/blog/innodb-clusterset-deployment-with-mysqlrouter/

This blog post covers the basic setup of an InnoDB ClusterSet environment, which provides disaster tolerance for InnoDB Cluster deployments by associating a primary InnoDB Cluster with one or more replicas of itself in alternate locations/different data centers. InnoDB ClusterSet automatically manages replication from the primary cluster to the replica clusters via a dedicated ClusterSet asynchronous replication channel. If the primary cluster becomes inaccessible due to a loss of network connectivity or a data center issue, you can activate a replica cluster in its place.
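Once a replica cluster is in place (we create one further below), this managed channel can be inspected on the replica side with the standard replication tables. A minimal check, assuming the default channel name "clusterset_replication":

mysql> SELECT CHANNEL_NAME, SERVICE_STATE FROM performance_schema.replication_connection_status WHERE CHANNEL_NAME = 'clusterset_replication';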

Now, let's look in detail at exactly how to configure the topology.

InnoDB ClusterSet deployment

We used the sandbox environments provided by the MySQL Shell utility for this setup.

Environment
Cluster1:
127.0.0.1:3308
127.0.0.1:3309
127.0.0.1:3310

Cluster2:
127.0.0.1:3311
127.0.0.1:3312
127.0.0.1:3313

Router:
127.0.0.1:6446/6447
Let's set up the first cluster ("cluster1")
Deploy the sandboxes.
MySQL JS > dba.deploySandboxInstance(3308)
MySQL JS > dba.deploySandboxInstance(3309)
MySQL JS > dba.deploySandboxInstance(3310)
Then, we need to perform a few pre-checks before starting the cluster.
### Connecting to the concerned nodes one by one.

MySQL JS > shell.connect('root@localhost:3308')
MySQL localhost:3308 ssl JS > shell.connect('root@localhost:3309')
MySQL localhost:3309 ssl JS > shell.connect('root@localhost:3310')

### The commands below check whether each instance satisfies the InnoDB Cluster requirements (Group Replication settings) and automatically fix any that are missing. Here, we configure a new user "iroot" for the cluster deployment.

MySQL localhost:3308 ssl JS > dba.checkInstanceConfiguration('root@localhost:3308')
MySQL localhost:3308 ssl JS > dba.configureInstance('root@127.0.0.1:3308',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})

MySQL localhost:3309 ssl JS > dba.checkInstanceConfiguration('root@localhost:3309')
MySQL localhost:3309 ssl JS > dba.configureInstance('root@127.0.0.1:3309',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})

MySQL localhost:3310 ssl JS > dba.checkInstanceConfiguration('root@localhost:3310')
MySQL localhost:3310 ssl JS > dba.configureInstance('root@127.0.0.1:3310',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
Once all the instances are ready, we can create the cluster using a seed node. The "createCluster" command performs all the hidden steps of initializing Group Replication, and the other nodes subsequently join the group via distributed recovery/the clone plugin.

InnoDB Cluster is built on top of Group Replication and provides automatic membership management, fault tolerance, and automatic failover. It gives us a simple interface to deploy/manage complex topologies with disaster recovery support.
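Once the cluster exists, the group membership that createCluster()/addInstance() set up can also be verified from any member with plain SQL via the Performance Schema (a generic check, not specific to this walkthrough):

mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;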

We will bootstrap the cluster using the initial node ("localhost:3308").
MySQL localhost:3310 ssl JS > shell.connect('iroot@localhost:3308')
MySQL localhost:3308 ssl JS > cluster1 = dba.createCluster('Cluster1')
MySQL localhost:3308 ssl JS > cluster1 = dba.getCluster()
Output:

MySQL localhost:3308 ssl JS > cluster1.status()
{
    "clusterName": "Cluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3308",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3308": {
                "address": "127.0.0.1:3308",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3308"
}
Here, we have successfully bootstrapped the first node. Next, the other nodes join the cluster using the CLONE plugin.
MySQL localhost:3308 ssl JS > cluster1.addInstance("iroot@localhost:3309",{password:'Iroot@1234'})
Output:

  • Waiting for clone to finish…

NOTE: 127.0.0.1:3309 is being cloned from 127.0.0.1:3308

** Stage DROP DATA: Completed

** Clone Transfer
FILE COPY  ############################################################  100%  Completed
PAGE COPY  ############################################################  100%  Completed
REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3309 is shutting down…

  • Waiting for server restart… ready

  • 127.0.0.1:3309 has restarted, waiting for clone to finish…

** Stage RESTART: Completed

  • Clone process has finished: 73.66 MB transferred in about 1 second (~73.66 MB/s)

State recovery already finished for '127.0.0.1:3309'

The instance '127.0.0.1:3309' was successfully added to the cluster.

MySQL localhost:3308 ssl JS > cluster1.addInstance("iroot@localhost:3310",{password:'Iroot@1234'})
Output:

  • Waiting for clone to finish…

NOTE: 127.0.0.1:3310 is being cloned from 127.0.0.1:3309

** Stage DROP DATA: Completed

** Clone Transfer
FILE COPY  ############################################################  100%  Completed
PAGE COPY  ############################################################  100%  Completed
REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3310 is shutting down…

  • Waiting for server restart… ready

  • 127.0.0.1:3310 has restarted, waiting for clone to finish…

** Stage RESTART: Completed

  • Clone process has finished: 73.66 MB transferred in about 1 second (~73.66 MB/s)

State recovery already finished for '127.0.0.1:3310'

The instance '127.0.0.1:3310' was successfully added to the cluster.
At this stage, our first cluster is ready with all three nodes.
MySQL localhost:3308 ssl JS > cluster1.status()
Output:

{
    "clusterName": "Cluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3308",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3308": {
                "address": "127.0.0.1:3308",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            },
            "127.0.0.1:3309": {
                "address": "127.0.0.1:3309",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            },
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.31"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3308"
}
Now let's move on to the second cluster ("cluster2") setup
Deploy the sandboxes via MySQL Shell.
MySQL JS > dba.deploySandboxInstance(3311)
MySQL JS > dba.deploySandboxInstance(3312)
MySQL JS > dba.deploySandboxInstance(3313)
Similarly, perform the same pre-checks as we did for the "cluster1" nodes.

### Connecting to the concerned nodes.

MySQL JS > shell.connect('root@localhost:3311')
MySQL JS > shell.connect('root@localhost:3312')
MySQL JS > shell.connect('root@localhost:3313')

The commands below check whether each instance satisfies the InnoDB Cluster requirements (Group Replication settings) and automatically fix any that are missing. Here, we configure the same "iroot" user for the cluster deployment.

MySQL localhost:3308 ssl JS > dba.checkInstanceConfiguration('root@localhost:3311')
MySQL localhost:3308 ssl JS > dba.configureInstance('root@127.0.0.1:3311',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})

MySQL localhost:3308 ssl JS > dba.checkInstanceConfiguration('root@localhost:3312')
MySQL localhost:3308 ssl JS > dba.configureInstance('root@127.0.0.1:3312',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})

MySQL localhost:3308 ssl JS > dba.checkInstanceConfiguration('root@localhost:3313')
MySQL localhost:3308 ssl JS > dba.configureInstance('root@127.0.0.1:3313',{clusterAdmin: 'iroot', clusterAdminPassword: 'Iroot@1234'})
Next, we will create the ClusterSet topology by triggering a sync on node (127.0.0.1:3311) from the existing cluster1 nodes. Node (127.0.0.1:3311) will become the primary of cluster2, and the remaining nodes will join it via the clone/incremental process.

1) First, connect to a "cluster1" node.
MySQL localhost:3308 ssl JS > \c iroot@127.0.0.1:3308
MySQL 127.0.0.1:3308 ssl JS > cluster1 = dba.getCluster()
2) Here, "cluster1" joins the ClusterSet topology.
MySQL 127.0.0.1:3308 ssl JS > myclusterset = cluster1.createClusterSet('firstclusterset')
Output:

ClusterSet successfully created. Use ClusterSet.createReplicaCluster() to add Replica Clusters to it.
ClusterSet:firstclusterset
3) Verify the status.
MySQL 127.0.0.1:3308 ssl JS > myclusterset.status({extended: 1})
Output:

{
    "clusters": {
        "Cluster1": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3308",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-85,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
        }
    },
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3308",
    "metadataServer": "127.0.0.1:3308",
    "primaryCluster": "Cluster1",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
}
4) Now, node ("127.0.0.1:3311") will sync with the existing "cluster1" via an asynchronous process.
MySQL 127.0.0.1:3308 ssl JS > \c iroot@127.0.0.1:3311
MySQL 127.0.0.1:3311 ssl JS > cluster2 = myclusterset.createReplicaCluster("127.0.0.1:3311", "cluster2", {recoveryProgress: 1, timeout: 10})
Output:

… Replica Cluster 'cluster2' successfully created on ClusterSet 'firstclusterset'. …
5) Next, the other nodes join "cluster2" via the clone process.
MySQL 127.0.0.1:3311 ssl JS > cluster2.addInstance("iroot@127.0.0.1:3312",{password:'Iroot@1234'})
MySQL 127.0.0.1:3311 ssl JS > cluster2.addInstance("iroot@127.0.0.1:3313",{password:'Iroot@1234'})
6) Finally, check the status of the ClusterSet environment.
MySQL 127.0.0.1:3311 ssl JS > myclusterset = dba.getClusterSet()
MySQL 127.0.0.1:3311 ssl JS > myclusterset.status({extended: 1})
Output:

{
    "clusters": {
        "Cluster1": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3308",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-124,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
        },
        "cluster2": {
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3311",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:3308"
            },
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3311": {
                    "address": "127.0.0.1:3311",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3312": {
                    "address": "127.0.0.1:3312",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3313": {
                    "address": "127.0.0.1:3313",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-124,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5",
            "transactionSetConsistencyStatus": "OK",
            "transactionSetErrantGtidSet": "",
            "transactionSetMissingGtidSet": ""
        }
    },
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3308",
    "metadataServer": "127.0.0.1:3308",
    "primaryCluster": "Cluster1",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
}
Here, the ClusterSet topology is now ready with all six nodes.

In the next phase, we will bootstrap MySQLRouter with the newly created ClusterSet environment:

First, we will create a dedicated user for MySQLRouter monitoring/management.
MySQL 127.0.0.1:3311 ssl JS > \c iroot@localhost:3308
MySQL localhost:3308 ssl JS > cluster1 = dba.getCluster();
MySQL localhost:3308 ssl JS > cluster1.setupRouterAccount('router_usr')
Output:

Missing the password for new account router_usr@%. Please provide one.
Password for new account: **********
Confirm password: **********
Creating user router_usr@%.
Account router_usr@% was successfully created.
Bootstrap the router using the user ("router_usr") and the router name ("Router1").
[vagrant@localhost ~]$ sudo mysqlrouter --bootstrap iroot@127.0.0.1:3308 --account=router_usr --name='Router1' --user root --force
We use --force here because, without --force, mysqlrouter will not recognize the ClusterSet. This reconfigures the existing setup.

Here, we will see some useful information that is needed later for connecting to the database or managing the service.

MySQL Router 'Router1' configured for the ClusterSet 'firstclusterset'

After this MySQL Router has been started with the generated configuration

$ /etc/init.d/mysqlrouter restart

or

$ systemctl start mysqlrouter

or

$ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

ClusterSet 'firstclusterset' can be reached by connecting to:

MySQL Classic protocol

  • Read/Write Connections: localhost:6446

  • Read/Only Connections: localhost:6447

MySQL X protocol

  • Read/Write Connections: localhost:6448

  • Read/Only Connections: localhost:6449
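
To see how the bootstrap wired the router to the ClusterSet metadata, you can inspect the generated configuration before starting the service (a quick look for illustration; the exact section names can vary between router versions):

[vagrant@localhost ~]$ grep -A 5 'metadata_cache' /etc/mysqlrouter/mysqlrouter.conf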
Finally, start the mysqlrouter service:
sudo mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf &

Verifying the connection routing

Connect to the router port "6446" and create some demo table/data:
shell> mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6446 -e "create database sbtest;use sbtest;create table sbtest1 (id int(10) not null auto_increment primary key, user varchar(50));insert into sbtest1(user) values('test');"

Connect to the router port "6447" for reads. Here, by default, the connections are balanced among the nodes of the primary cluster (cluster1).
[vagrant@localhost ~]$ mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6447 -e "use sbtest;select * from sbtest1;select @@server_id;"
+----+------+
| id | user |
+----+------+
|  1 | test |
+----+------+
+-------------+
| @@server_id |
+-------------+
|   194452202 |
+-------------+

[vagrant@localhost ~]$ mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6447 -e "use sbtest;select * from sbtest1;select @@server_id;"
+----+------+
| id | user |
+----+------+
|  1 | test |
+----+------+
+-------------+
| @@server_id |
+-------------+
|  2376678236 |
+-------------+

[vagrant@localhost ~]$ mysql -h 127.0.0.1 -u root -pRoot@1234 -P 6447 -e "use sbtest;select * from sbtest1;select @@server_id;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----+------+
| id | user |
+----+------+
|  1 | test |
+----+------+
+-------------+
| @@server_id |
+-------------+
|   194452202 |
+-------------+
So, by default, all connections are routed to the default "primary" cluster, in our case "Cluster1"; however, we can change the primary as per the requirement.

Changing the ClusterSet topology
MySQL localhost:3308 ssl JS > myclusterset=dba.getClusterSet()
MySQL localhost:3308 ssl JS > myclusterset.status({extended:1})
Output:

ClusterSet:firstclusterset

{
    "clusters": {
        "Cluster1": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3308",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-143,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
        },
        "cluster2": {
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3311",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:3308"
            },
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3311": {
                    "address": "127.0.0.1:3311",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3312": {
                    "address": "127.0.0.1:3312",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3313": {
                    "address": "127.0.0.1:3313",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-143,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5",
            "transactionSetConsistencyStatus": "OK",
            "transactionSetErrantGtidSet": "",
            "transactionSetMissingGtidSet": ""
        }
    },
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3308",
    "metadataServer": "127.0.0.1:3308",
    "primaryCluster": "Cluster1",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
}
Changing the primary cluster from "cluster1" to "cluster2":
MySQL localhost:3308 ssl JS > myclusterset.setPrimaryCluster('cluster2')
Output:

Switching the primary cluster of the clusterset to 'cluster2'

  • Verifying clusterset status

** Checking cluster cluster2

Cluster 'cluster2' is available

** Checking cluster Cluster1

Cluster 'Cluster1' is available

  • Reconciling 5 internally generated GTIDs

  • Refreshing replication account of demoted cluster

  • Synchronizing transaction backlog at 127.0.0.1:3311

** Transactions replicated ############################################################ 100%

  • Updating metadata

  • Updating topology

** Changing replication source of 127.0.0.1:3309 to 127.0.0.1:3311

** Changing replication source of 127.0.0.1:3310 to 127.0.0.1:3311

** Changing replication source of 127.0.0.1:3308 to 127.0.0.1:3311

  • Acquiring locks in replicaset instances

** Pre-synchronizing SECONDARIES

** Acquiring global lock at PRIMARY

** Acquiring global lock at SECONDARIES

  • Synchronizing remaining transactions at promoted primary

** Transactions replicated ############################################################ 100%

  • Updating replica clusters

Cluster 'cluster2' was promoted to PRIMARY of the clusterset. The PRIMARY instance is '127.0.0.1:3311'
If we look at the output again, we can observe that "clusterRole: PRIMARY" has moved to "cluster2".
MySQL localhost:3308 ssl JS > myclusterset.status({extended:1})
Output:

{
    "clusters": {
        "Cluster1": {
            "clusterRole": "REPLICA",
            "clusterSetReplication": {
                "applierStatus": "APPLIED_ALL",
                "applierThreadState": "Waiting for an event from Coordinator",
                "applierWorkerThreads": 4,
                "receiver": "127.0.0.1:3308",
                "receiverStatus": "ON",
                "receiverThreadState": "Waiting for source to send event",
                "source": "127.0.0.1:3311"
            },
            "clusterSetReplicationStatus": "OK",
            "globalStatus": "OK",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3308": {
                    "address": "127.0.0.1:3308",
                    "memberRole": "PRIMARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3309": {
                    "address": "127.0.0.1:3309",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-145,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5",
            "transactionSetConsistencyStatus": "OK",
            "transactionSetErrantGtidSet": "",
            "transactionSetMissingGtidSet": ""
        },
        "cluster2": {
            "clusterRole": "PRIMARY",
            "globalStatus": "OK",
            "primary": "127.0.0.1:3311",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
            "topology": {
                "127.0.0.1:3311": {
                    "address": "127.0.0.1:3311",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3312": {
                    "address": "127.0.0.1:3312",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                },
                "127.0.0.1:3313": {
                    "address": "127.0.0.1:3313",
                    "memberRole": "SECONDARY",
                    "mode": "R/O",
                    "replicationLagFromImmediateSource": "",
                    "replicationLagFromOriginalSource": "",
                    "status": "ONLINE",
                    "version": "8.0.31"
                }
            },
            "transactionSet": "2e71122e-2862-11ee-b81c-5254004d77d3:1-5,39c28b63-285a-11ee-a411-5254004d77d3:1-4,59d8e60a-285d-11ee-bb44-5254004d77d3:1-145,59d8f3a6-285d-11ee-bb44-5254004d77d3:1-5"
        }
    },
    "domainName": "firstclusterset",
    "globalPrimaryInstance": "127.0.0.1:3311",
    "metadataServer": "127.0.0.1:3311",
    "primaryCluster": "cluster2",
    "status": "HEALTHY",
    "statusText": "All Clusters available."
}
So, we have changed the primary from cluster1 to cluster2, but the routing is still set for cluster1. To send the traffic to cluster2, we also have to change the routing option.

MySQL localhost:3308 ssl JS > myclusterset.listRouters()
Output:

{
    "domainName": "firstclusterset",
    "routers": {
        "localhost.localdomain::Router1": {
            "hostname": "localhost.localdomain",
            "lastCheckIn": "2023-07-22 02:47:42",
            "roPort": "6447",
            "roXPort": "6449",
            "rwPort": "6446",
            "rwXPort": "6448",
            "targetCluster": "primary",
            "version": "8.0.32"
        }
    }
}
Change the connection target from "cluster1" to "cluster2":
MySQL localhost:3308 ssl JS > myclusterset.setRoutingOption('localhost.localdomain::Router1', 'target_cluster', 'cluster2')
MySQL localhost:3308 ssl JS > myclusterset.listRouters()
Output:

{
    "domainName": "firstclusterset",
    "routers": {
        "localhost.localdomain::Router1": {
            "hostname": "localhost.localdomain",
            "lastCheckIn": "2023-07-22 02:47:42",
            "roPort": "6447",
            "roXPort": "6449",
            "rwPort": "6446",
            "rwXPort": "6448",
            "targetCluster": "cluster2",
            "version": "8.0.32"
        }
    }
}
Verifying the routing policy in the existing ClusterSet
MySQL localhost:3308 ssl JS > myclusterset.routingOptions()
Output:

{
    "domainName": "firstclusterset",
    "global": {
        "invalidated_cluster_policy": "drop_all",
        "stats_updates_frequency": 0,
        "target_cluster": "primary"
    },
    "routers": {
        "localhost.localdomain::Router1": {
            "target_cluster": "cluster2"
        }
    }
}
In some situations, the primary cluster becomes unavailable or unreachable, and the immediate remedy is to perform an emergency failover so that the application is not blocked.

An emergency failover is basically a switch from the primary InnoDB Cluster to a chosen replica cluster within the InnoDB ClusterSet deployment. During an emergency failover, data consistency is not assured because of networking factors such as the asynchronous replication channel, so for safety the original primary cluster is marked as invalidated during the failover.

So, if by any chance the original primary cluster is still online, it should be shut down. Later, the invalidated primary cluster can join the ClusterSet again through a rejoin/repair process.

Performing an emergency failover
myclusterset.forcePrimaryCluster("cluster2")
myclusterset.setRoutingOption('localhost.localdomain::Router1', 'target_cluster', 'cluster2')
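
After the failover, confirm that the router now targets the promoted cluster and, once the failed site is reachable again, bring the invalidated cluster back with a rejoin. A minimal sketch, assuming "Cluster1" is recoverable and holds no errant transactions:

myclusterset.routingOptions()
myclusterset.rejoinCluster('Cluster1')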
Summary
With the help of the ClusterSet implementation, deploying disaster recovery support across different regions is no longer a complex challenge. MySQL Shell and InnoDB Cluster handle all the configuration and synchronization processes behind the scenes. With the AdminAPI/MySQL Shell commands, disaster recovery and downtime can be minimized.

There is one caveat to be aware of with the ClusterSet functionality: it does not support high availability/automatic promotion of a new primary cluster if the existing primary fails. We have to handle that either with some manual intervention or through some internal automation process; a minimal check is sketched below.
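
A small health-check sketch in MySQL Shell JS (an illustration under assumptions, not from the original post: the file name, connection URI, and alerting are placeholders). It could be run periodically, e.g. via "mysqlsh --file health_check.js", while the actual failover decision stays with the operator:

// health_check.js -- warn when the ClusterSet is degraded
shell.connect('iroot@127.0.0.1:3311')   // any reachable member; prompts for the password unless one is stored
var cs = dba.getClusterSet()
var s = cs.status()                     // same data as myclusterset.status()
if (s["status"] != "HEALTHY") {
    println("ClusterSet degraded: " + s["statusText"])
    // an operator (or tooling) may then decide to run:
    //   cs.forcePrimaryCluster('cluster2')
    //   cs.setRoutingOption('localhost.localdomain::Router1', 'target_cluster', 'cluster2')
}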

