Standalone Deployment of GlusterFS + Heketi for Kubernetes / OpenShift Shared Storage
1. Preparation

1.1 Hardware

Hostname        IP address
gfs1            192.168.160.131
gfs2            192.168.160.132
gfs3 / heketi   192.168.160.133

Each node has a 20 GB raw disk, /dev/sdb:
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

1.2 Environment preparation
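The three hosts are referred to by name throughout. A minimal sketch, assuming name resolution is handled via /etc/hosts on every node (skip this if DNS already resolves these names):

cat >> /etc/hosts <<'EOF'
192.168.160.131  gfs1
192.168.160.132  gfs2
192.168.160.133  gfs3
EOF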
Allow virtual machines and containers to use FUSE filesystems (SELinux booleans):

sudo setsebool -P virt_sandbox_use_fusefs on
sudo setsebool -P virt_use_fusefs on

1.3 Load the required kernel modules

modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool

2. Install GlusterFS

On all three nodes:

yum -y install glusterfs glusterfs-server glusterfs-fuse

2.1 Open the basic TCP ports the GlusterFS peers need in order to communicate with OpenShift and serve storage:

firewall-cmd --add-port=24007-24008/tcp --add-port=49152-49664/tcp --add-port=2222/tcp
firewall-cmd --runtime-to-permanent

2.2 Enable and start the GlusterFS daemon:

systemctl enable glusterd
systemctl start glusterd

3. Install Heketi on one of the GlusterFS nodes (gfs3 in this setup)

yum -y install heketi heketi-client

3.1 The heketi systemd unit file:
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.json
User=heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

3.2 Reload systemd and start heketi

systemctl daemon-reload
systemctl start heketi

3.3 Create an SSH key for heketi and distribute it to the GlusterFS nodes

ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
chown heketi:heketi /etc/heketi/heketi_key
for i in gfs1 gfs2 gfs3; do ssh-copy-id -i /etc/heketi/heketi_key.pub $i; done

3.4 Configure heketi to use SSH. Edit /etc/heketi/heketi.json:

"executor": "ssh",
"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
},

3.5 Heketi listens on port 8080; add a firewall rule for it:

firewall-cmd --add-port=8080/tcp
firewall-cmd --runtime-to-permanent

3.6 Enable and restart heketi:

systemctl enable heketi
systemctl restart heketi

3.7 Test that heketi is running:
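Heketi exposes a /hello endpoint for exactly this check; a minimal probe, assuming the server runs on gfs3 port 8080 as configured above:

curl http://gfs3:8080/hello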
Hello from Heketi

3.8 Configure the GlusterFS storage pool

Create the topology file /etc/heketi/topology.json, describing the three nodes and the raw disk on each:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "gfs1" ],
              "storage": [ "192.168.160.131" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "gfs2" ],
              "storage": [ "192.168.160.132" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "gfs3" ],
              "storage": [ "192.168.160.133" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        }
      ]
    }
  ]
}

3.9 Create the GlusterFS storage pool

export HEKETI_CLI_SERVER=http://gfs3:8080
heketi-cli --server=http://gfs3:8080 topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: d3a3f31dce28e06dbd1099268c4ebe84
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node infra.test.com ... ID: ebfc1e8e2e7668311dc4304bfc1377cb
        Adding device /dev/sdb ... OK
    Creating node node1.test.com ... ID: 0ce162c3b8a65342be1aac96010251ef
        Adding device /dev/sdb ... OK
    Creating node node2.test.com ... ID: 62952de313e71eb5a4bfe5b76224e575
        Adding device /dev/sdb ... OK

3.10 View the cluster information (run on gfs3, which hosts heketi)
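The listing that follows matches heketi's topology output; presumably it was produced with (the HEKETI_CLI_SERVER exported above is assumed to still be set):

heketi-cli topology info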
Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 0ce162c3b8a65342be1aac96010251ef
        State: online
        Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
        Zone: 1
        Management Hostnames: node1.test.com
        Storage Hostnames: 192.168.160.132
        Devices:
            Id:d6a5f0aba39a35d3d92f678dc9654eaa   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: 62952de313e71eb5a4bfe5b76224e575
        State: online
        Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
        Zone: 1
        Management Hostnames: node2.test.com
        Storage Hostnames: 192.168.160.133
        Devices:
            Id:dfd697f2215d2a304a44c5af44d352da   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:

        Node Id: ebfc1e8e2e7668311dc4304bfc1377cb
        State: online
        Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
        Zone: 1
        Management Hostnames: infra.test.com
        Storage Hostnames: 192.168.160.131
        Devices:
            Id:e06b794b0b9f20608158081fbb5b5102   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:
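The short ID listing below looks like heketi node-list output; presumably:

heketi-cli node list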
Id:0ce162c3b8a65342be1aac96010251ef     Cluster:d3a3f31dce28e06dbd1099268c4ebe84
Id:62952de313e71eb5a4bfe5b76224e575     Cluster:d3a3f31dce28e06dbd1099268c4ebe84
Id:ebfc1e8e2e7668311dc4304bfc1377cb     Cluster:d3a3f31dce28e06dbd1099268c4ebe84
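On the Gluster side, the peers should now see each other. The output below is in gluster peer status format, apparently run on gfs3 (gfs1 and gfs2 are listed as its peers):

gluster peer status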
Number of Peers: 2

Hostname: gfs2
Uuid: ae6e998a-92c2-4c63-a7c6-c51a3b7e8fcb
State: Peer in Cluster (Connected)
Other names:
gfs2

Hostname: gfs1
Uuid: c8c46558-a8f2-46db-940d-4b19947cf075
State: Peer in Cluster (Connected)

4. Testing

4.1 Test creating a volume
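The JSON response below describes a 3 GB replica-3 volume; a hedged sketch of the command that requests it (the --replica and --json flags are assumptions, not from the original article):

heketi-cli volume create --size=3 --replica=3 --json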
{
  "size": 3,
  "name": "vol_93060cd7698e9e48bd035f26bbfe57af",
  "durability": {
    "type": "replicate",
    "replicate": { "replica": 3 },
    "disperse": { "data": 4, "redundancy": 2 }
  },
  "glustervolumeoptions": [ "", "" ],
  "snapshot": { "enable": false, "factor": 1 },
  "id": "93060cd7698e9e48bd035f26bbfe57af",
  "cluster": "d3a3f31dce28e06dbd1099268c4ebe84",
  "mount": {
    "glusterfs": {
      "hosts": [ "192.168.160.132", "192.168.160.133", "192.168.160.131" ],
      "device": "192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af",
      "options": { "backup-volfile-servers": "192.168.160.133,192.168.160.131" }
    }
  },
  "blockinfo": {},
  "bricks": [
    {
      "id": "16b8ddb1f2b2d3aa588d4d4a52bb7f6b",
      "path": "/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick",
      "device": "e06b794b0b9f20608158081fbb5b5102",
      "node": "ebfc1e8e2e7668311dc4304bfc1377cb",
      "volume": "93060cd7698e9e48bd035f26bbfe57af",
      "size": 3145728
    },
    {
      "id": "9e60ac3b7259c4e8803d4e1f6a235021",
      "path": "/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick",
      "device": "d6a5f0aba39a35d3d92f678dc9654eaa",
      "node": "0ce162c3b8a65342be1aac96010251ef",
      "volume": "93060cd7698e9e48bd035f26bbfe57af",
      "size": 3145728
    },
    {
      "id": "e3f5ec732d5a8fe4b478af67c9caf85b",
      "path": "/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick",
      "device": "dfd697f2215d2a304a44c5af44d352da",
      "node": "62952de313e71eb5a4bfe5b76224e575",
      "volume": "93060cd7698e9e48bd035f26bbfe57af",
      "size": 3145728
    }
  ]
}
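The new volume then shows up in the volume list, presumably via:

heketi-cli volume list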
Id:93060cd7698e9e48bd035f26bbfe57af Cluster:d3a3f31dce28e06dbd1099268c4ebe84 Name:vol_93060cd7698e9e48bd035f26bbfe57af
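Detailed information about the volume; the listing below matches the output of:

heketi-cli volume info 93060cd7698e9e48bd035f26bbfe57af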
Name: vol_93060cd7698e9e48bd035f26bbfe57af
Size: 3
Volume Id: 93060cd7698e9e48bd035f26bbfe57af
Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
Mount: 192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af
Mount Options: backup-volfile-servers=192.168.160.133,192.168.160.131
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
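On a GlusterFS node the same volume is visible to Gluster itself; the single entry below looks like the output of:

gluster volume list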
vol_93060cd7698e9e48bd035f26bbfe57af
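The per-brick process status below is presumably from:

gluster volume status vol_93060cd7698e9e48bd035f26bbfe57af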
Status of volume: vol_93060cd7698e9e48bd035f26bbfe57af
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.160.132:/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick   49153   0   Y   30660
Brick 192.168.160.131:/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick   49153   0   Y   21979
Brick 192.168.160.133:/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick   49152   0   Y   61274
Self-heal Daemon on localhost                               N/A       N/A        Y       61295
Self-heal Daemon on apps.test.com                           N/A       N/A        Y       22000
Self-heal Daemon on 192.168.160.132                         N/A       N/A        Y       30681

Task Status of Volume vol_93060cd7698e9e48bd035f26bbfe57af
------------------------------------------------------------------------------
There are no active volume tasks
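And the full volume definition, presumably from:

gluster volume info vol_93060cd7698e9e48bd035f26bbfe57af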
Volume Name: vol_93060cd7698e9e48bd035f26bbfe57af
Type: Replicate
Volume ID: ca4a9854-a33c-40ab-86c7-0d0d34004454
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.160.132:/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick
Brick2: 192.168.160.131:/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick
Brick3: 192.168.160.133:/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

5. Using Gluster from OpenShift

5.1 Create a StorageClass in OpenShift (storage-class.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: null
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://gfs3:8080"
  restauthenabled: "true"
  volumetype: replicate:3
oc create -f storage-class.yaml
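Verify that the StorageClass exists; the listing below looks like the output of:

oc get storageclass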
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   55m

5.2 Create a PVC in OpenShift (saved here as test-pvc.yaml; the filename is assumed):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: gluster-heketi
oc create -f test-pvc.yaml
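Check the resulting PV and PVC; the listing below matches the output of:

oc get pv,pvc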
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS     REASON   AGE
persistentvolume/pvc-57362c7f-e6c2-11e9-8634-000c299365cc   1Gi        RWX            Delete           Bound    default/test1   gluster-heketi            57m

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/test-pvc   Bound    pvc-57362c7f-e6c2-11e9-8634-000c299365cc   1Gi        RWX            gluster-heketi   57m
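To consume the claim from a workload, reference the PVC as a volume in the pod spec. A minimal sketch (the pod name, image and mount path are hypothetical, not from the original article):

oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pvc-test        # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox            # hypothetical test image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data      # hypothetical mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc     # the PVC created above
EOF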
The underlying Gluster volume can also be mounted manually from a node to verify it:

mount -t glusterfs 192.168.160.132:vol_b96d0e18cef937dd56a161ae5fa5b9cb /mnt
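The mount can then be checked; the usage line below looks like df output for the mount point:

df -h /mnt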
192.168.160.132:vol_b96d0e18cef937dd56a161ae5fa5b9cb  1014M   43M  972M   5% /mnt

6. Common commands

List cluster nodes: gluster pool list
Show cluster status (the local host is not shown by default): gluster peer status
List the volumes in the cluster: gluster volume list
Show volume information: gluster volume info <VOLNAME>
Show volume status: gluster volume status <VOLNAME>
Force-start a volume: gluster volume start <VOLNAME> force
List files that need healing: gluster volume heal <VOLNAME> info
Trigger a full heal: gluster volume heal <VOLNAME> full
List files that were healed successfully: gluster volume heal <VOLNAME> info healed
List files that failed to heal: gluster volume heal <VOLNAME> info heal-failed
List files in split-brain: gluster volume heal <VOLNAME> info split-brain

6.1 Other common heketi client commands

heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv cluster list
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv cluster info <cluster-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv node info <node-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume list
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume create --size=1 --replica=2
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume info <volume-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume expand --volume=<volume-id> --expand-size=1
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume delete <volume-id>
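To avoid repeating --server/--user/--secret on every call, heketi-cli can read them from environment variables; a sketch using the example secret above (assuming the heketi-cli build in use honors these variables):

export HEKETI_CLI_SERVER=http://localhost:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=kLd834dadEsfwcv
heketi-cli cluster list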
6.2 Initialize a raw disk

pvcreate --metadatasize=128M --dataalignment=256K /dev/sdb

7. GlusterFS cluster troubleshooting

7.1 Volume bricks offline
Check which bricks are offline:

gluster volume status <VOLNAME>
Check whether the brick's filesystem is still mounted:

df -h | grep <BRICKNAME>
If it is not mounted, remount it using its /etc/fstab entry (mount by the mount-point column):

grep <BRICKNAME> /etc/fstab | awk '{print $2}' | xargs -i mount {}
Force-start the volume to bring the dead brick processes back online:

gluster volume start <VOLNAME> force

7.2 Repairing inconsistent files between bricks
List the files that need healing:

gluster volume heal <VOLNAME> info
Trigger a full heal:

gluster volume heal <VOLNAME> full

7.3 Repairing split-brain between bricks

First list the files in split-brain:

gluster volume heal <VOLNAME> info split-brain

Then choose one of the following resolution policies:
1) Use the bigger copy of the file as the healing source:
   gluster volume heal <VOLNAME> split-brain bigger-file <FILE>

2) Use the copy with the latest mtime as the source:
   gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>

3) Use one brick of the replica as the source for a specific file:
   gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>

4) Use one brick of the replica as the source for all files:
   gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>

7.4 Replacing a brick
Start the brick replacement:

gluster volume replace-brick <VOLNAME> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 start
Check the replacement status:

gluster volume replace-brick <VOLNAME> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 status
Commit the replacement once the data has been migrated:

gluster volume replace-brick <VOLNAME> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 commit

8. Heketi service troubleshooting
[heketi] ERROR 2018/07/02 09:08:19 /src/github.com/heketi/heketi/apps/glusterfs/app.go:172: Heketi was terminated while performing one or more operations. Server may refuse to start as long as pending operations are present in the db.

If the heketi service refuses to start with an error like the one above, recover its database as follows:

1) Export heketi's heketi.db file (its path is set in heketi.json):

   heketi db export --dbfile=/var/lib/heketi/heketi.db --jsonfile=/tmp/heketidb1.json

2) Open the exported file (/tmp/heketidb1.json above), search for the pendingoperations entries, and delete everything related to them.

3) Save the edited file with a .json suffix (here /tmp/succ.json) and import it back into the db:

   heketi db import --jsonfile=/tmp/succ.json --dbfile=/var/lib/heketi/heketi.db

4) Restart the heketi service:

   systemctl start heketi