With only a single node, or with the old Master-Slave setup, the service has a single point of failure once the master host goes down. A MongoDB replica set improves fault tolerance and availability.
What is a replica set (Replica Set)?
A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high availability.
A three-node replica set looks like the figure below:
This post walks through building such a three-node MongoDB replica set in just five steps. Enough talk, let's dive in.
Step 1 - Prepare the environment
Prepare three virtual machines: one will act as the Primary and the other two as Secondaries, as shown in the figure above.
The virtual machines are:
Primary - 172.xx.xx.107
Secondary - 172.xx.xx.105 and 172.xx.xx.106
The virtual machines in this post run CentOS 6.7.
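One assumption throughout this post is that the three machines can reach each other on port 27017. If iptables is enabled on CentOS 6.7, a minimal sketch for opening that port on each machine (adjust to your own firewall policy) is:

iptables -I INPUT -p tcp --dport 27017 -j ACCEPT   # allow replica set traffic between the nodes
service iptables save                              # persist the rule across reboots (CentOS 6)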
Step 2 - Install MongoDB with yum
This post installs MongoDB with the yum install mongodb-org command.
If you hit the "No package mongodb-org available." error, like this:
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
No package mongodb-org available.
Error: Nothing to do
Create a mongodb.repo file in the /etc/yum.repos.d/ directory that points to the MongoDB repository.
Use vim /etc/yum.repos.d/mongodb.repo to create and open mongodb.repo, then add the following:
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
Then run yum install mongodb-org again and the installation will succeed.
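Putting Step 2 together, a minimal sketch of the install sequence once the repo file above is in place (the yum clean step is optional, it just clears stale metadata):

yum clean all                 # optional: refresh yum metadata
yum install -y mongodb-org    # installs the mongod server, mongo shell and tools
mongod --version              # confirm the installed version (3.4.x here)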
Step 3 - Configure the replica set
Edit the mongod configuration file on each of the three virtual machines with vim /etc/mongod.conf.
Add the oplogSizeMB and replSetName options under the replication section:
replication:
  oplogSizeMB: 1024
  replSetName: wang
The full mongod.conf looks like this:
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # listen on all interfaces so the other replica set members can connect

#security:

#operationProfiling:

replication:
  oplogSizeMB: 1024
  replSetName: wang

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:
Note:
The replSetName in mongod.conf must be identical on all three virtual machines.
In this example the replica set is named wang; the name itself is arbitrary.
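A quick way to confirm the three machines really do share the same replication block (a sketch; it assumes root SSH access to the hosts from Step 1):

for h in 172.xx.xx.107 172.xx.xx.106 172.xx.xx.105; do
  echo "== $h =="
  ssh root@$h "grep -A 2 '^replication' /etc/mongod.conf"
done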
Step 4 - Start the services
With the replica set configured, start the Mongo service on each of the three virtual machines with mongod --config /etc/mongod.conf, for example:
[root@dev04 mongodb]# mongod --config /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 30799
child process started successfully, parent exiting
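The mongodb-org RPM also ships an init script that reads the same /etc/mongod.conf, so on CentOS 6 the following should be an equivalent way to start the service and have it come back after a reboot (a sketch; the rest of this post sticks with the manual mongod --config command):

service mongod start    # uses the same configuration file, /etc/mongod.conf
chkconfig mongod on     # start mongod automatically at boot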
Since MongoDB on the .107 virtual machine will act as the Primary node, connect to it with the mongo command:
[root@dev04 mongodb]# mongo
MongoDB shell version v3.4.2
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.2
Server has startup warnings: 
2017-02-17T09:19:20.240+0800 I STORAGE [initandlisten] 
2017-02-17T09:19:20.240+0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2017-02-17T09:19:20.240+0800 I STORAGE [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2017-02-17T09:19:20.964+0800 I CONTROL [initandlisten] 
2017-02-17T09:19:20.964+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-02-17T09:19:20.964+0800 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
2017-02-17T09:19:20.964+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] 
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] 
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] 
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] **        We suggest setting it to 'never'
2017-02-17T09:19:20.965+0800 I CONTROL [initandlisten] 
Run use admin to switch to the admin database:
> use admin
switched to db admin
Then define the replica set members in a config document:
config={_id:"wang",members:[{_id:0,host:"172.xxx.xxx.107:27017"},{_id:1,host:"172.xxx.xxx.106:27017"},{_id:2,host:"172.xxx.xxx.105:27017"}]}
Notes:
_id: "wang" - wang is the name chosen for the replica set; it must match the replSetName set in mongod.conf.
members lists the _id and host of each mongod in the replica set.
After executing the statement above, you will see output like this:
> config={_id:"wang",members:[{_id:0,host:"172.xxx.xxx.107:27017"},{_id:1,host:"172.xxx.xxx.106:27017"},{_id:2,host:"172.xxx.xxx.105:27017"}]}
{
    "_id" : "wang",
    "members" : [
        {
            "_id" : 0,
            "host" : "172.xxx.xxx.107:27017"
        },
        {
            "_id" : 1,
            "host" : "172.xxx.xxx.106:27017"
        },
        {
            "_id" : 2,
            "host" : "172.xxx.xxx.105:27017"
        }
    ]
}
>
Then initialize the replica set with this configuration; seeing { "ok" : 1 } means the initialization succeeded:
> rs.initiate(config)
{ "ok" : 1 }
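As an aside, the same set can also be built incrementally instead of passing the full config document: initiate on the intended Primary and then add the other members. A sketch of the equivalent shell calls, using the hosts from this post:

// run in the mongo shell on 172.xxx.xxx.107
rs.initiate()                        // start a one-member set on this host (defaults to the machine's hostname)
rs.add("172.xxx.xxx.106:27017")      // add the first Secondary
rs.add("172.xxx.xxx.105:27017")      // add the second Secondary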
Use rs.status() to check the state of the replica set members:
wang:PRIMARY> rs.status()
{
    "set" : "wang",
    "date" : ISODate("2017-02-17T01:30:53.128Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1487295047, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1487295047, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1487295047, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "172.xxx.xxx.107:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 693,
            "optime" : {
                "ts" : Timestamp(1487295047, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-02-17T01:30:47Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1487294966, 1),
            "electionDate" : ISODate("2017-02-17T01:29:26Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "172.xxx.xxx.106:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 96,
            "optime" : {
                "ts" : Timestamp(1487295047, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1487295047, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-02-17T01:30:47Z"),
            "optimeDurableDate" : ISODate("2017-02-17T01:30:47Z"),
            "lastHeartbeat" : ISODate("2017-02-17T01:30:52.708Z"),
            "lastHeartbeatRecv" : ISODate("2017-02-17T01:30:51.674Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.xxx.xxx.107:27017",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "172.xxx.xxx.105:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 96,
            "optime" : {
                "ts" : Timestamp(1487295047, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1487295047, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2017-02-17T01:30:47Z"),
            "optimeDurableDate" : ISODate("2017-02-17T01:30:47Z"),
            "lastHeartbeat" : ISODate("2017-02-17T01:30:52.708Z"),
            "lastHeartbeatRecv" : ISODate("2017-02-17T01:30:51.745Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "172.xxx.xxx.106:27017",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
wang:PRIMARY>
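The full rs.status() document is verbose; if you only want the member names and states, a small sketch that prints the same information (run in the mongo shell on any member) is:

// which host is currently Primary, and all hosts in the set
rs.isMaster().primary
rs.isMaster().hosts

// one line per member: host -> PRIMARY / SECONDARY / ...
rs.status().members.forEach(function (m) {
    print(m.name + " -> " + m.stateStr);
});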
From the stateStr values in the rs.status() output, you can see that a replica set with one Primary and two Secondaries is now in place.
Step 5 - Verify
The last step is verification: check whether data written to the Primary is replicated to the Secondaries.
Write operations are performed on the Primary node.
On the .107 node, create a messages database and insert two documents into the message collection.
wang:PRIMARY> show dbs
admin     0.000GB
local     0.000GB
wang:PRIMARY> use messages
switched to db messages
wang:PRIMARY> db.message.insert({"name":"This is a test message"})
WriteResult({ "nInserted" : 1 })
wang:PRIMARY> show dbs
admin     0.000GB
local     0.000GB
messages  0.000GB
wang:PRIMARY> db.message.insert({"name":"This is a test message111"})
WriteResult({ "nInserted" : 1 })
wang:PRIMARY>
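Before reaching for a GUI, you can already confirm replication from the mongo shell on one of the Secondaries (a sketch; rs.slaveOk() is needed because reads on a Secondary are rejected by default):

// in the mongo shell on 172.xx.xx.105 or 172.xx.xx.106
rs.slaveOk()          // allow read operations on this Secondary
use messages
db.message.find()     // should return the two documents written on the Primary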
You can also use a GUI client to check whether the Secondary nodes 105 and 106 have synced the messages database from the Primary node 107.
The details are shown below:
As the screenshots from the GUI tool show, the two Secondary nodes, 105 and 106, hold the same data set as the Primary node 107. With that, the MongoDB replica set setup is complete.