Deploying a Ceph Cluster on CentOS 7 (version 10.2.2)


Step 3: Disable the firewall and security options on all nodes (run this on every node), along with a few related steps

[ceph@admin my-cluster]# ceph-deploy osd activate mon3:/home/ceph/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked : /usr/bin/ceph-deploy osd activate mon3:/home/ceph/osd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('mon3', '/home/ceph/osd', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks mon3:/home/ceph/osd:
[mon3][DEBUG ] connection detected need for sudo
[mon3][DEBUG ] connected to host: mon3
[mon3][DEBUG ] detect platform information from remote host
[mon3][DEBUG ] detect machine type
[mon3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host mon3 disk /home/ceph/osd
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[mon3][DEBUG ] find the location of an executable
[mon3][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/ceph/osd
[mon3][WARNIN] main_activate: path = /home/ceph/osd
[mon3][WARNIN] activate: Cluster uuid is 20fa28ad-98e6-4d89-bc2a-771e94e0de43
[mon3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[mon3][WARNIN] activate: Cluster name is ceph
[mon3][WARNIN] activate: OSD uuid is f2243a79-0e54-475a-ab83-11a2c4811ddb
[mon3][WARNIN] allocate_osd_id: Allocating OSD id...
[mon3][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise f2243a79-0e54-475a-ab83-11a2c4811ddb
[mon3][WARNIN] command: Running command: /sbin/restorecon -R /home/ceph/osd/whoami.4359.tmp
[mon3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/ceph/osd/whoami.4359.tmp
[mon3][WARNIN] activate: OSD id is 1
[mon3][WARNIN] activate: Initializing OSD...
[mon3][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /home/ceph/osd/activate.monmap
[mon3][WARNIN] got monmap epoch 1
[mon3][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ceph/osd/activate.monmap --osd-data /home/ceph/osd --osd-journal /home/ceph/osd/journal --osd-uuid f2243a79-0e54-475a-ab83-11a2c4811ddb --keyring /home/ceph/osd/keyring --setuser ceph --setgroup ceph
[mon3][WARNIN] activate: Marking with init system systemd
[mon3][WARNIN] command: Running command: /sbin/restorecon -R /home/ceph/osd/systemd
[mon3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/ceph/osd/systemd
[mon3][WARNIN] activate: Authorizing OSD key...
[mon3][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /home/ceph/osd/keyring osd allow * mon allow profile osd
[mon3][WARNIN] added key for osd.1
[mon3][WARNIN] command: Running command: /sbin/restorecon -R /home/ceph/osd/active.4359.tmp
[mon3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/ceph/osd/active.4359.tmp
[mon3][WARNIN] activate: ceph osd.1 data dir is ready at /home/ceph/osd
[mon3][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-1 -> /home/ceph/osd
[mon3][WARNIN] start_daemon: Starting ceph osd.1...
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@1
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@1 --runtime
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@1
[mon3][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[mon3][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@1
[mon3][WARNIN] Job for ceph-osd@1.service failed because the control process exited with error code. See "systemctl status ceph-osd@1.service" and "journalctl -xe" for details.
[mon3][WARNIN] Traceback (most recent call last):
[mon3][WARNIN] File "/usr/sbin/ceph-disk", line 9, in <module>
[mon3][WARNIN] load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5371, in run
[mon3][WARNIN] main(sys.argv[1:])
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5322, in main
[mon3][WARNIN] args.func
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3483, in main_activate
[mon3][WARNIN] osd_id=osd_id,
[mon3][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3060, in start_daemon
[mon3][WARNIN] raise Error('ceph osd start failed', e)
[mon3][WARNIN] ceph_disk.main.Error: Error: ceph osd start failed: Command '['/usr/bin/systemctl', 'start', 'ceph-osd@1']' returned non-zero exit status 1
[mon3][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/ceph/osd
However, if I remount the disk at /osd and then run ceph-deploy osd create hostname:/osd/home/ceph/osd, there is no problem. Why?
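
A likely explanation (an assumption, not something confirmed by the log above) is that the ceph-osd@.service unit shipped with Jewel sets ProtectHome=true, so systemd denies the daemon access to anything under /home even though ceph-disk prepared the directory successfully; a data directory outside /home is unaffected. A minimal sketch of a check and workaround on the affected node:

# check whether the unit restricts access to /home (path of the stock unit file assumed)
grep ProtectHome /usr/lib/systemd/system/ceph-osd@.service

# relax the restriction with a drop-in override, then retry the failed OSD
sudo mkdir -p /etc/systemd/system/ceph-osd@.service.d
printf '[Service]\nProtectHome=false\n' | sudo tee /etc/systemd/system/ceph-osd@.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl start ceph-osd@1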

[ceph@node0 cluster]$ sudo cat /etc/hosts 
[sudo] password for ceph: 
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 
192.168.92.100 node0 
192.168.92.101 node1 
192.168.92.102 node2 
192.168.92.103 node3 
[ceph@node0 cluster]$ 

Problem with ceph-deploy osd create under /home during deployment
When I deployed on real hardware today, most of the disk space from the earlier partitioning was under /home, so the OSDs had to be placed under /home. But when running ceph-deploy osd create osd1:/home/ceph/osd and then activating it, the error shown above appears. Any ideas how to fix this?

For example:

[ceph@node1 cluster]$ ceph osd tree 
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.05835 root default 
-2 0.01459 host node1 
1 0.01459 osd.1 up 1.00000 1.00000 
-3 0.01459 host node3 
3 0.01459 osd.3 up 1.00000 1.00000 
-4 0.01459 host node0 
0 0.01459 osd.0 up 1.00000 1.00000 
-5 0.01459 host node2 
2 0.01459 osd.2 up 1.00000 1.00000 
[ceph@node1 cluster]$ 

[root@node0 ~]# adduser -d /home/ceph -m ceph

Permanent link to this article: http://www.linuxidc.com/Linux/2017-02/140728.htm

sudo vim /etc/yum.repos.d/ceph.repo

[ceph@node1 cluster]$ cat rmosd.sh
#!/bin/bash
###############################################################################
# Author : younger_liucn@126.com
# File Name : rmosd.sh
# Description : remove an OSD from the cluster by its numeric id
###############################################################################
if [ $# != 1 ]; then
echo "Usage: $0 <osd-id>";
exit 1;
fi
ID=${1}
sudo systemctl stop ceph-osd@${ID}
ceph osd crush remove osd.${ID}
ceph osd down ${ID}
ceph auth del osd.${ID}
ceph osd rm ${ID}
[ceph@node1 cluster]$
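
Usage is a single numeric OSD id (a sketch; in practice you would normally mark the OSD out first and let the cluster rebalance before deleting it):

# e.g. remove osd.3 (the argument is the bare id, not "osd.3")
bash rmosd.sh 3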

Step 1: Run the following command on the admin node:

[ceph@node0 cluster]$ cat ceph.conf
[global]
fsid = 3c9892d0-398b-4808-aa20-4dc622356bd0
mon_initial_members = node1, node2, node3
mon_host = 192.168.92.111,192.168.92.112,192.168.92.113
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
[ceph@node0 my-cluster]$

For example:

ceph-deploy osd activate {ceph-node}:/path/to/directory

public_network = 192.168.92.0/24

6.3 Remove an OSD

then you need to add the public_network parameter to the [global] section of ceph.conf:

3.5 Disable the firewall

How to upgrade a Ceph release, and points to note  http://www.linuxidc.com/Linux/2017-02/140631.htm

Monitors: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph keeps a history (called an "epoch") of each state change in the Ceph Monitors, Ceph OSD Daemons, and PGs.
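
These maps and their epochs can be inspected from any node that holds the admin keyring; the deployment log later in this article queries the same monitor state through the admin socket. A few read-only commands, as a sketch:

ceph -s          # overall health, monitor quorum, OSD/PG summary
ceph mon dump    # current monitor map: addresses, ranks, epoch
ceph osd tree    # CRUSH view of hosts and OSDs (the same output shown elsewhere in this article)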

[root@localhost ceph]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

1 Introduction
A Ceph deployment mainly involves the following types of nodes:

4 Install the ceph-deploy tool on the admin node
Step 1: Add the yum repository file

3.6 Disable SELinux
Currently disabled:

[ceph@node0 cluster]$ sudo systemctl stop firewalld.service
[ceph@node0 cluster]$ sudo systemctl disable firewalld.service

3.4 Passwordless remote access from the admin node
Configure the admin node so that it can SSH to the other nodes without a password and with root (sudo) privileges.
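
A consolidated sketch of the individual commands used for this elsewhere in the article (generate a key as the ceph user, push it to every node, and lock down the ssh client config):

ssh-keygen                       # accept the defaults; leave the passphrase empty
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
ssh-copy-id ceph@node3
chmod 600 ~/.ssh/config          # the per-host config shown below must not be group/world writable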

[ceph@node0 cluster]$ ceph-deploy new node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy new node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x29f2b18>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2a15a70>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: node0
[node1][INFO ] Running command: ssh -CT -o BatchMode=yes node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: sudo /usr/sbin/ip link show
[node1][INFO ] Running command: sudo /usr/sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['192.168.92.101', '192.168.1.102', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.92.101
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node2][DEBUG ] connected to host: node0
[node2][INFO ] Running command: ssh -CT -o BatchMode=yes node2
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: sudo /usr/sbin/ip link show
[node2][INFO ] Running command: sudo /usr/sbin/ip addr show
[node2][DEBUG ] IP addresses found: ['192.168.1.103', '192.168.122.1', '192.168.92.102']
[ceph_deploy.new][DEBUG ] Resolving host node2
[ceph_deploy.new][DEBUG ] Monitor node2 at 192.168.92.102
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host: node0
[node3][INFO ] Running command: ssh -CT -o BatchMode=yes node3
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: sudo /usr/sbin/ip link show
[node3][INFO ] Running command: sudo /usr/sbin/ip addr show
[node3][DEBUG ] IP addresses found: ['192.168.122.1', '192.168.1.104', '192.168.92.103']
[ceph_deploy.new][DEBUG ] Resolving host node3
[ceph_deploy.new][DEBUG ] Monitor node3 at 192.168.92.103
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.92.101', '192.168.92.102', '192.168.92.103']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph@node0 cluster]$

5.1 How to wipe existing Ceph data
First clear out any previous Ceph data. For a fresh install this step is not needed; when redeploying, run the following commands:
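
A sketch of the full reset; the purgedata and forgetkeys runs appear verbatim later in the article, while ceph-deploy purge (which also uninstalls the Ceph packages) is optional:

ceph-deploy purge node0 node1 node2 node3       # optional: remove the ceph packages as well
ceph-deploy purgedata node0 node1 node2 node3   # wipe /var/lib/ceph and /etc/ceph on every node
ceph-deploy forgetkeys                          # discard the keyrings cached in the working directory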

ssh-keygen

[ceph@node0 cluster]$ sudo systemctl enable ceph-mon.target 
[ceph@node0 cluster]$ sudo systemctl enable ceph-osd.target 
[ceph@node0 cluster]$ sudo systemctl enable ceph.target 

$ sudo chmod 600 ~/.ssh/config

ssh-copy-id    ceph@node3

[root@node0 ~]# passwd ceph


[ceph@node0 cluster]$ ssh node0 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target" 
[ceph@node0 cluster]$ ssh node1 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target" 
[ceph@node0 cluster]$ ssh node2 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target" 
[ceph@node0 cluster]$ ssh node3 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target" 
[ceph@node0 cluster]$ 

[ceph@node0 cluster]$ cat ~/.ssh/config 
Host node0 
Hostname node0 
User ceph 
Host node1 
Hostname node1 
User ceph 
Host node2 
Hostname node2 
User ceph 
Host node3 
Hostname node3 
User ceph 
[ceph@node0 cluster]$ 

Deploying Ceph on CentOS 6.3  http://www.linuxidc.com/Linux/2013-05/85213.htm


Hostname      VmNet      Node IP           Description
node0         HostOnly   192.168.92.100    admin, osd (sdb)
node1         HostOnly   192.168.92.101    osd (sdb), mon
node2         HostOnly   192.168.92.102    osd (sdb), mon, mds
node3         HostOnly   192.168.92.103    osd (sdb), mon, mds
client-node   HostOnly   192.168.92.109    client node, used to mount the storage provided by the Ceph cluster for testing

[ceph@node0 cluster]$ ls 
ceph.conf ceph.log ceph.mon.keyring 

6.2 Activate the OSDs
Command:
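
The same activate template shown earlier applies here; a concrete run for this article's directory-backed OSDs might look like the following (hostnames and paths taken from the examples above):

ceph-deploy osd activate node1:/home/ceph/osd node2:/home/ceph/osd node3:/home/ceph/osd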

3.2 Edit hosts on the admin node
Edit /etc/hosts:

[ceph@node0 cluster]$ grep "osd_pool_default_size" ./ceph.conf 
osd_pool_default_size = 2 
[ceph@node0 cluster]$ 

Notes on deploying Ceph 9.2.1 in a test environment  http://www.linuxidc.com/Linux/2016-11/137094.htm

[ceph@node0 cluster]$ ceph-deploy mon create-initial 
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy mon create-initial 
[ceph_deploy.cli][INFO ] ceph-deploy options: 
[ceph_deploy.cli][INFO ] username : None 
[ceph_deploy.cli][INFO ] verbose : False 
[ceph_deploy.cli][INFO ] overwrite_conf : False 
[ceph_deploy.cli][INFO ] subcommand : create-initial 
[ceph_deploy.cli][INFO ] quiet : False 
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbe46804cb0> 
[ceph_deploy.cli][INFO ] cluster : ceph 
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fbe467f6aa0> 
[ceph_deploy.cli][INFO ] ceph_conf : None 
[ceph_deploy.cli][INFO ] default_release : False 
[ceph_deploy.cli][INFO ] keyrings : None 
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3 
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ... 
[node1][DEBUG ] connection detected need for sudo 
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host 
[node1][DEBUG ] detect machine type 
[node1][DEBUG ] find the location of an executable 
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core 
[node1][DEBUG ] determining if provided host has same hostname in remote 
[node1][DEBUG ] get remote short hostname 
[node1][DEBUG ] deploying mon to node1 
[node1][DEBUG ] get remote short hostname 
[node1][DEBUG ] remote hostname: node1 
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf 
[node1][DEBUG ] create the mon path if it does not exist 
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done 
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done 
[node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring 
[node1][DEBUG ] create the monitor keyring file 
[node1][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring --setuser 1001 --setgroup 1001 
[node1][DEBUG ] ceph-mon: mon.noname-a 192.168.92.101:6789/0 is local, renaming to mon.node1 
[node1][DEBUG ] ceph-mon: set fsid to 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c 
[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1 
[node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring 
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment 
[node1][DEBUG ] create the init path if it does not exist 
[node1][INFO ] Running command: sudo systemctl enable ceph.target 
[node1][INFO ] Running command: sudo systemctl enable ceph-mon@node1 
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node1.service to /usr/lib/systemd/system/ceph-mon@.service. 
[node1][INFO ] Running command: sudo systemctl start ceph-mon@node1 
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status 
[node1][DEBUG ] ******************************************************************************** 
[node1][DEBUG ] status for monitor: mon.node1 
[node1][DEBUG ] { 
[node1][DEBUG ] "election_epoch": 0, 
[node1][DEBUG ] "extra_probe_peers": [ 
[node1][DEBUG ] "192.168.92.102:6789/0", 
[node1][DEBUG ] "192.168.92.103:6789/0" 
[node1][DEBUG ] ], 
[node1][DEBUG ] "monmap": { 
[node1][DEBUG ] "created": "2016-06-24 14:43:29.944474", 
[node1][DEBUG ] "epoch": 0, 
[node1][DEBUG ] "fsid": "4f8f6c46-9f67-4475-9cb5-52cafecb3e4c", 
[node1][DEBUG ] "modified": "2016-06-24 14:43:29.944474", 
[node1][DEBUG ] "mons": [ 
[node1][DEBUG ] { 
[node1][DEBUG ] "addr": "192.168.92.101:6789/0", 
[node1][DEBUG ] "name": "node1", 
[node1][DEBUG ] "rank": 0 
[node1][DEBUG ] }, 
[node1][DEBUG ] { 
[node1][DEBUG ] "addr": "0.0.0.0:0/1", 
[node1][DEBUG ] "name": "node2", 
[node1][DEBUG ] "rank": 1 
[node1][DEBUG ] }, 
[node1][DEBUG ] { 
[node1][DEBUG ] "addr": "0.0.0.0:0/2", 
[node1][DEBUG ] "name": "node3", 
[node1][DEBUG ] "rank": 2 
[node1][DEBUG ] } 
[node1][DEBUG ] ] 
[node1][DEBUG ] }, 
[node1][DEBUG ] "name": "node1", 
[node1][DEBUG ] "outside_quorum": [ 
[node1][DEBUG ] "node1" 
[node1][DEBUG ] ], 
[node1][DEBUG ] "quorum": [], 
[node1][DEBUG ] "rank": 0, 
[node1][DEBUG ] "state": "probing", 
[node1][DEBUG ] "sync_provider": [] 
[node1][DEBUG ] } 
[node1][DEBUG ] ******************************************************************************** 
[node1][INFO ] monitor: mon.node1 is running 
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status 
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ... 
[node2][DEBUG ] connection detected need for sudo 
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host 
[node2][DEBUG ] detect machine type 
[node2][DEBUG ] find the location of an executable 
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core 
[node2][DEBUG ] determining if provided host has same hostname in remote 
[node2][DEBUG ] get remote short hostname 
[node2][DEBUG ] deploying mon to node2 
[node2][DEBUG ] get remote short hostname 
[node2][DEBUG ] remote hostname: node2 
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf 
[node2][DEBUG ] create the mon path if it does not exist 
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done 
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done 
[node2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring 
[node2][DEBUG ] create the monitor keyring file 
[node2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring --setuser 1001 --setgroup 1001 
[node2][DEBUG ] ceph-mon: mon.noname-b 192.168.92.102:6789/0 is local, renaming to mon.node2 
[node2][DEBUG ] ceph-mon: set fsid to 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c 
[node2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2 
[node2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring 
[node2][DEBUG ] create a done file to avoid re-doing the mon deployment 
[node2][DEBUG ] create the init path if it does not exist 
[node2][INFO ] Running command: sudo systemctl enable ceph.target 
[node2][INFO ] Running command: sudo systemctl enable ceph-mon@node2 
[node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node2.service to /usr/lib/systemd/system/ceph-mon@.service. 
[node2][INFO ] Running command: sudo systemctl start ceph-mon@node2 
[node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status 
[node2][DEBUG ] ******************************************************************************** 
[node2][DEBUG ] status for monitor: mon.node2 
[node2][DEBUG ] { 
[node2][DEBUG ] "election_epoch": 1, 
[node2][DEBUG ] "extra_probe_peers": [ 
[node2][DEBUG ] "192.168.92.101:6789/0", 
[node2][DEBUG ] "192.168.92.103:6789/0" 
[node2][DEBUG ] ], 
[node2][DEBUG ] "monmap": { 
[node2][DEBUG ] "created": "2016-06-24 14:43:34.865908", 
[node2][DEBUG ] "epoch": 0, 
[node2][DEBUG ] "fsid": "4f8f6c46-9f67-4475-9cb5-52cafecb3e4c", 
[node2][DEBUG ] "modified": "2016-06-24 14:43:34.865908", 
[node2][DEBUG ] "mons": [ 
[node2][DEBUG ] { 
[node2][DEBUG ] "addr": "192.168.92.101:6789/0", 
[node2][DEBUG ] "name": "node1", 
[node2][DEBUG ] "rank": 0 
[node2][DEBUG ] }, 
[node2][DEBUG ] { 
[node2][DEBUG ] "addr": "192.168.92.102:6789/0", 
[node2][DEBUG ] "name": "node2", 
[node2][DEBUG ] "rank": 1 
[node2][DEBUG ] }, 
[node2][DEBUG ] { 
[node2][DEBUG ] "addr": "0.0.0.0:0/2", 
[node2][DEBUG ] "name": "node3", 
[node2][DEBUG ] "rank": 2 
[node2][DEBUG ] } 
[node2][DEBUG ] ] 
[node2][DEBUG ] }, 
[node2][DEBUG ] "name": "node2", 
[node2][DEBUG ] "outside_quorum": [], 
[node2][DEBUG ] "quorum": [], 
[node2][DEBUG ] "rank": 1, 
[node2][DEBUG ] "state": "electing", 
[node2][DEBUG ] "sync_provider": [] 
[node2][DEBUG ] } 
[node2][DEBUG ] ******************************************************************************** 
[node2][INFO ] monitor: mon.node2 is running 
[node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status 
[ceph_deploy.mon][DEBUG ] detecting platform for host node3 ... 
[node3][DEBUG ] connection detected need for sudo 
[node3][DEBUG ] connected to host: node3 
[node3][DEBUG ] detect platform information from remote host 
[node3][DEBUG ] detect machine type 
[node3][DEBUG ] find the location of an executable 
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core 
[node3][DEBUG ] determining if provided host has same hostname in remote 
[node3][DEBUG ] get remote short hostname 
[node3][DEBUG ] deploying mon to node3 
[node3][DEBUG ] get remote short hostname 
[node3][DEBUG ] remote hostname: node3 
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf 
[node3][DEBUG ] create the mon path if it does not exist 
[node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done 
[node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node3/done 
[node3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node3.mon.keyring 
[node3][DEBUG ] create the monitor keyring file 
[node3][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node3 --keyring /var/lib/ceph/tmp/ceph-node3.mon.keyring --setuser 1001 --setgroup 1001 
[node3][DEBUG ] ceph-mon: mon.noname-c 192.168.92.103:6789/0 is local, renaming to mon.node3 
[node3][DEBUG ] ceph-mon: set fsid to 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c 
[node3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node3 for mon.node3 
[node3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node3.mon.keyring 
[node3][DEBUG ] create a done file to avoid re-doing the mon deployment 
[node3][DEBUG ] create the init path if it does not exist 
[node3][INFO ] Running command: sudo systemctl enable ceph.target 
[node3][INFO ] Running command: sudo systemctl enable ceph-mon@node3 
[node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node3.service to /usr/lib/systemd/system/ceph-mon@.service. 
[node3][INFO ] Running command: sudo systemctl start ceph-mon@node3 
[node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status 
[node3][DEBUG ] ******************************************************************************** 
[node3][DEBUG ] status for monitor: mon.node3 
[node3][DEBUG ] { 
[node3][DEBUG ] "election_epoch": 1, 
[node3][DEBUG ] "extra_probe_peers": [ 
[node3][DEBUG ] "192.168.92.101:6789/0", 
[node3][DEBUG ] "192.168.92.102:6789/0" 
[node3][DEBUG ] ], 
[node3][DEBUG ] "monmap": { 
[node3][DEBUG ] "created": "2016-06-24 14:43:39.800046", 
[node3][DEBUG ] "epoch": 0, 
[node3][DEBUG ] "fsid": "4f8f6c46-9f67-4475-9cb5-52cafecb3e4c", 
[node3][DEBUG ] "modified": "2016-06-24 14:43:39.800046", 
[node3][DEBUG ] "mons": [ 
[node3][DEBUG ] { 
[node3][DEBUG ] "addr": "192.168.92.101:6789/0", 
[node3][DEBUG ] "name": "node1", 
[node3][DEBUG ] "rank": 0 
[node3][DEBUG ] }, 
[node3][DEBUG ] { 
[node3][DEBUG ] "addr": "192.168.92.102:6789/0", 
[node3][DEBUG ] "name": "node2", 
[node3][DEBUG ] "rank": 1 
[node3][DEBUG ] }, 
[node3][DEBUG ] { 
[node3][DEBUG ] "addr": "192.168.92.103:6789/0", 
[node3][DEBUG ] "name": "node3", 
[node3][DEBUG ] "rank": 2 
[node3][DEBUG ] } 
[node3][DEBUG ] ] 
[node3][DEBUG ] }, 
[node3][DEBUG ] "name": "node3", 
[node3][DEBUG ] "outside_quorum": [], 
[node3][DEBUG ] "quorum": [], 
[node3][DEBUG ] "rank": 2, 
[node3][DEBUG ] "state": "electing", 
[node3][DEBUG ] "sync_provider": [] 
[node3][DEBUG ] } 
[node3][DEBUG ] ******************************************************************************** 
[node3][INFO ] monitor: mon.node3 is running 
[node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status 
[ceph_deploy.mon][INFO ] processing monitor mon.node1 
[node1][DEBUG ] connection detected need for sudo 
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host 
[node1][DEBUG ] detect machine type 
[node1][DEBUG ] find the location of an executable 
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status 
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 5 
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying 
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status 
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 4 
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying 
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status 
[ceph_deploy.mon][INFO ] mon.node1 monitor has reached quorum! 
[ceph_deploy.mon][INFO ] processing monitor mon.node2 
[node2][DEBUG ] connection detected need for sudo 
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host 
[node2][DEBUG ] detect machine type 
[node2][DEBUG ] find the location of an executable 
[node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status 
[ceph_deploy.mon][INFO ] mon.node2 monitor has reached quorum! 
[ceph_deploy.mon][INFO ] processing monitor mon.node3 
[node3][DEBUG ] connection detected need for sudo 
[node3][DEBUG ] connected to host: node3 
[node3][DEBUG ] detect platform information from remote host 
[node3][DEBUG ] detect machine type 
[node3][DEBUG ] find the location of an executable 
[node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status 
[ceph_deploy.mon][INFO ] mon.node3 monitor has reached quorum! 
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum 
[ceph_deploy.mon][INFO ] Running gatherkeys... 
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmp5_jcSr 
[node1][DEBUG ] connection detected need for sudo 
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host 
[node1][DEBUG ] detect machine type 
[node1][DEBUG ] get remote short hostname 
[node1][DEBUG ] fetch remote file 
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.node1.asok mon_status 
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * 
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds 
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd 
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring 
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring 
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring 
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmp5_jcSr 
[ceph@node0 cluster]$ 

Solution:

Or locally:


At the same time, edit the ~/.ssh/config file and add the following content:

Otherwise you will get: sudo: no tty present and no askpass program specified

[ceph@node0 cluster]$ ceph-deploy purgedata admin_node node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy purgedata node0 node1 node2 node3 
… 
[node3][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph 
[node3][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/ 
[ceph@node0 cluster]$ ceph-deploy forgetkeys
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf 
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy forgetkeys 
… 
[ceph_deploy.cli][INFO ] default_release : False 
[ceph@node0 my-cluster]$ 

[ceph@node0 cluster]$ sudo yum install yum-plugin-priorities

For example:

ceph-deploy purgedata {ceph-node} [{ceph-node}]

[root@localhost ceph]# systemctl stop firewalld.service 
[root@localhost ceph]# systemctl disable firewalld.service 
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service. 
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service. 
[root@localhost ceph]# 

[ceph@node0 ~]$ mkdir cluster
[ceph@node0 ~]$ cd cluster

Step 2: Copy the key from Step 1 to the other nodes

Set up account permissions
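
One common way to grant these privileges, following the standard ceph-deploy preflight steps (a sketch; run on every node):

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph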


ssh-copy-id    ceph@node1

Bad owner or permissions on /home/ceph/.ssh/config fatal: The remote end hung up unexpectedly

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
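
With the repository file in place, ceph-deploy itself can be installed on the admin node (a minimal sketch; the priority plugin install is shown separately above):

sudo yum makecache
sudo yum install -y ceph-deploy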

Note: to keep things simple, just press Enter to accept the defaults when the commands prompt for input.

For example:

Installing the Ceph distributed storage system on CentOS 7.1  http://www.linuxidc.com/Linux/2015-08/120990.htm

5.2.2 Handling non-unique networks
If the node IPs are not unique, that is, the hosts have other network addresses besides the network the Ceph cluster uses, then (as noted above) the public_network parameter must be set:
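
A minimal sketch of the fix, assuming the ~/cluster working directory used in this article, a ceph.conf that still contains only the [global] section written by ceph-deploy new, and this cluster's 192.168.92.0/24 network:

echo "public_network = 192.168.92.0/24" >> ~/cluster/ceph.conf
ceph-deploy --overwrite-conf config push node1 node2 node3   # redistribute the updated file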


[ceph@node0 cluster]$ ls
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
[ceph@node0 cluster]$

For an example, see section 1.2.3.

Change "Defaults requiretty" to "#Defaults requiretty", which means a controlling terminal is no longer required.
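
A sketch of how that edit might be made on each node; visudo is the safe interactive route, and the sed form keeps a backup in case the expression needs adjusting:

sudo visudo                                                       # comment out: Defaults requiretty
# or, non-interactively:
sudo sed -i.bak 's/^Defaults.*requiretty/#Defaults requiretty/' /etc/sudoers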

5 Create the Ceph cluster
As the ceph user created earlier, create a working directory on the admin node:
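
For example, the ~/cluster working directory used throughout this article (run as the ceph user on node0):

mkdir ~/cluster
cd ~/cluster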

5.4 Initialize the monitor nodes
Initialize the monitors and gather the keyrings:
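
Run from the working directory on the admin node (the full transcript of this command appears earlier in the article):

ceph-deploy mon create-initial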

2 Cluster planning
Before creating the cluster, plan it out first.

ceph-deploy osd prepare {ceph-node}:/path/to/directory

2.1 Network topology
The Ceph cluster is deployed on VMware virtual machines:

5.3 Install Ceph
From the admin node, use the ceph-deploy tool to install Ceph on each node:
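
A sketch of the usual form, pinning the Jewel (10.2.x) release that matches the repository configured earlier (the exact invocation here is an assumption, not taken from the original article):

ceph-deploy install --release jewel node0 node1 node2 node3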
