Deploying Ceph
Background: a Ceph deployment normally pulls packages from sources outside China, which tends to hang during download and install. Instead, we locate the matching rpm packages on a domestic mirror, download them, and install them directly.

Part I. Environment preparation

Four machines: one serves as both deploy node and client; the other three are Ceph nodes. Each Ceph node has a second disk, which will be used as the OSD data disk.

1. Set static name resolution on all nodes:
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.42.129 node1
192.168.42.130 node2
192.168.42.128 node3
192.168.42.131 ceph

2. Create a cent user on all nodes and grant it root privileges:
# useradd cent && echo "123" | passwd --stdin cent
# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
# chmod 440 /etc/sudoers.d/ceph

3. On the deploy node, set up passwordless SSH to every node, including the deploy node itself (run once as root and once as the cent user):
# ssh-keygen
# ssh-copy-id node1
# ssh-copy-id node2
# ssh-copy-id node3
# ssh-copy-id ceph
Note: switch to the cent user and run the commands above a second time.

4. On the deploy node, as the cent user, create a file defining all nodes and users:
# vim ~/.ssh/config
Host ceph
Hostname ceph
User cent
Host node1
Hostname node1
User cent
Host node2
Hostname node2
User cent
Host node3
Hostname node3
User cent
# chmod 600 ~/.ssh/config
(Note: ssh refuses a config file that is writable by group or others, so the mode must be 600, not 660.)

Part II. Configure a domestic Ceph repository on all nodes

1. Repository file:
# cat /etc/yum.repos.d/ceph-test.repo
[ceph-yunwei]
name=ceph-yunwei-install
baseurl=https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0

2. Download the following packages to all nodes. Only the deploy node needs ceph-deploy; the other nodes do not:
ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-deploy-1.5.39-0.noarch.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm

3. As the cent user on the deploy node, install ceph-deploy:
# sudo yum install ceph-deploy

4. As root, install the rpm packages downloaded above on all nodes.

5. If the installation reports errors (the original post showed a screenshot here), see the yum remove ceph-release fix under step 8 below.

6. As the cent user on the deploy node:
# mkdir ceph
# cd ceph
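The package list in Part II can be fetched in one pass rather than downloading each rpm by hand. A minimal sketch, assuming the Aliyun mirror path from the repo file and the 10.2.11 versions listed above (the loop only prints the curl commands; swap the echo for a real download once the list looks right — a subset of the packages is shown to keep it short):

```shell
# Build download commands for the jewel rpms listed in Part II.
# MIRROR and VER come from the repo file above; PKGS is a subset of the full list.
MIRROR="https://mirrors.aliyun.com/centos/7/storage/x86_64/ceph-jewel"
VER="10.2.11-0.el7.x86_64"
PKGS="ceph ceph-base ceph-common ceph-mds ceph-mon ceph-osd ceph-radosgw ceph-selinux librados2 librbd1 python-rados python-rbd"
urls=""
for p in $PKGS; do
  url="${MIRROR}/${p}-${VER}.rpm"
  urls="$urls $url"
  echo "curl -O $url"   # replace echo with the actual download when ready
done
```

Note that ceph-deploy-1.5.39-0.noarch.rpm does not follow the same version pattern, so it would need to be fetched separately.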
7. On the deploy node (as the cent user), configure a new cluster:
$ ceph-deploy new node1 node2 node3
$ vim ceph.conf
$ cat ceph.conf
[global]
fsid = 442ab1b1-13ab-4c92-ad05-1ffb09d0d24e
mon_initial_members = node1,node2,node3
mon_host = 192.168.42.129,192.168.42.130,192.168.42.128
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_pool_default_pgp_num = 128
osd_crush_chooseleaf_type = 1

8. From the deploy node, install the Ceph software on all nodes (as root):
# ceph-deploy install ceph node1 node2 node3
If this reports an error (the original showed a screenshot here), fix it by running:
# yum remove ceph-release

9. Initialize the cluster from the deploy node (as the cent user):
$ ceph-deploy mon create-initial

10. List a node's disks:
$ ceph-deploy disk list node1

11. Prepare the Object Storage Daemons:
$ ceph-deploy osd prepare node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb

12. Activate the Object Storage Daemons:
$ ceph-deploy osd activate node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb

13. From the deploy node, push the config files to the Ceph nodes:
$ ceph-deploy admin ceph node1 node2 node3
$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

14. Check from any node in the cluster:
# ceph -s

Part III. Client setup

1. Create the cent user on the client:
# useradd cent && echo "123" | passwd --stdin cent
# echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
# chmod 440 /etc/sudoers.d/ceph

2. From the deploy node, install and configure the Ceph client (the original misspells the host as "clinet"; in this setup the client host is ceph):
# ceph-deploy install ceph
# ceph-deploy admin ceph
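The pg_num/pgp_num value of 128 in the ceph.conf above follows a common rule of thumb that the post does not state: total placement groups per pool ≈ OSDs × 100 / replica size, rounded up to the next power of two. A quick check for this cluster's numbers:

```shell
# Rule-of-thumb PG sizing (an assumption, not from the original post).
osds=3    # one OSD per ceph node: node1..node3, /dev/sdb each
size=3    # osd_pool_default_size from ceph.conf
target=$(( osds * 100 / size ))        # 100
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "osd_pool_default_pg_num = $pg"   # prints 128, matching the config above
```

For a larger cluster the same arithmetic gives a larger value, which is why pg_num should be recalculated rather than copied when the OSD count changes.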
3. On the client:
# sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

4. On the client, configure an RBD block device:
# rbd create disk01 --size 5G --image-feature layering   # create the rbd image
# rbd ls -l                                              # list rbd images
# rbd map disk01                                         # map the image to a kernel device
# rbd showmapped                                         # show the mapping
# mkfs.xfs /dev/rbd0                                     # format disk01 with an xfs filesystem
# mount /dev/rbd0 /mnt                                   # mount the device
# df -hT                                                 # verify the mount succeeded

5. Filesystem (CephFS) configuration
a. On the deploy node (as the cent user), pick a node and create an MDS on it (the original omits the command; with ceph-deploy it would be something like ceph-deploy mds create node1), then create the pools:
# ceph osd pool create cephfs_data 128       # pool for file data
# ceph osd pool create cephfs_metadata 128   # pool for metadata
# ceph osd lspools                           # list the pools just created
Enable the filesystem on the pools:
# ceph fs new cephfs cephfs_data cephfs_metadata

Part IV. Tearing down the environment
(dlp and controller below appear to be host names left over from the tutorial this post was based on; substitute your own node names.)
# ceph-deploy purge dlp node1 node2 node3 controller
# ceph-deploy purgedata dlp node1 node2 node3 controller
# ceph-deploy forgetkeys
# rm -rf ceph*
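Appendix: the `df -hT` verification in Part III step 4 can be scripted. A hypothetical helper (the function name and sample line are illustrative, not from the post) that parses mount-table output for a device/mountpoint pair:

```shell
# is_mounted: exit 0 if the given device appears mounted at the given point.
# Reads `df -hT`-style lines on stdin; $1 = device, $2 = mountpoint.
is_mounted() {
  awk -v dev="$1" -v mnt="$2" '$1 == dev && $NF == mnt { found = 1 } END { exit !found }'
}

# Usage against live output would be: df -hT | is_mounted /dev/rbd0 /mnt
# Here we feed a sample df line to show the check:
df_sample='/dev/rbd0 xfs 5.0G 33M 5.0G 1% /mnt'
echo "$df_sample" | is_mounted /dev/rbd0 /mnt && echo "mounted"   # prints "mounted"
```

This makes the "is the rbd really mounted" check usable in a script rather than by eyeballing df output.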