Standalone Deployment of GlusterFS + Heketi for Kubernetes / OpenShift Shared Storage


Overview

1. Preparation

1.1 Hardware

Hostname      IP address
gfs1          192.168.160.131
gfs2          192.168.160.132
gfs3/heketi   192.168.160.133

Each node has a raw 20 GB disk at /dev/sdb:

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
1.2 Environment preparation

By default, SELinux does not allow pods to write to a remote gluster server. To allow writes to GlusterFS volumes, run on every node:

sudo setsebool -P virt_sandbox_use_fusefs on
sudo setsebool -P virt_use_fusefs on
1.3 Load the required kernel modules

modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
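The modprobe calls above last only until the next reboot. One way to make the modules persistent is a systemd modules-load.d drop-in, sketched here (the sketch writes to a temp directory so it runs unprivileged; the real target is /etc/modules-load.d):

```shell
# Persist the device-mapper modules across reboots via systemd-modules-load.
MODULES_DIR="$(mktemp -d)"   # in production: MODULES_DIR=/etc/modules-load.d
printf '%s\n' dm_snapshot dm_mirror dm_thin_pool > "$MODULES_DIR/glusterfs.conf"
# systemd-modules-load reads one module name per line from *.conf files here.
cat "$MODULES_DIR/glusterfs.conf"
```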
2. Install GlusterFS
yum -y install glusterfs glusterfs-server glusterfs-fuse
2.1 Open the TCP ports that the GlusterFS peers need to communicate with OpenShift and serve storage:

firewall-cmd --add-port=24007-24008/tcp --add-port=49152-49664/tcp --add-port=2222/tcp
firewall-cmd --runtime-to-permanent
2.2 Enable and start the glusterd daemon:

systemctl enable glusterd
systemctl start glusterd
3. Install Heketi on one of the GlusterFS hosts

yum -y install heketi heketi-client
3.1 Service unit file: /usr/lib/systemd/system/heketi.service

[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.json
User=heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
3.2 Reload systemd and start Heketi

systemctl daemon-reload
systemctl start heketi
3.3 Create an SSH key and distribute it to all nodes

ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
chown heketi:heketi /etc/heketi/heketi_key
for i in gfs1 gfs2 gfs3; do ssh-copy-id -i /etc/heketi/heketi_key.pub $i; done
3.4 Configure Heketi to use SSH. Edit /etc/heketi/heketi.json:

"executor": "ssh",
"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
    "keyfile": "/etc/heketi/heketi_key",
    "user": "root",
    "port": "22",
    "fstab": "/etc/fstab"
},
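For context, the sshexec block lives inside the glusterfs section of heketi.json. A minimal sketch of the surrounding file, modeled on Heketi's stock config (the user key is a placeholder; the admin secret is the one used in section 6.1):

```json
{
  "port": "8080",
  "use_auth": true,
  "jwt": {
    "admin": { "key": "kLd834dadEsfwcv" },
    "user": { "key": "<user-key>" }
  },
  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },
    "db": "/var/lib/heketi/heketi.db"
  }
}
```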
3.5 Heketi listens on port 8080; add a firewall rule:

firewall-cmd --add-port=8080/tcp
firewall-cmd --runtime-to-permanent
3.6 Enable and restart Heketi:

systemctl enable heketi
systemctl restart heketi
3.7 Check that Heketi is running: curl http://gfs3:8080/hello
Hello from Heketi
3.8 Define the GlusterFS storage pool: vim /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["gfs1"],
              "storage": ["192.168.160.131"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["gfs2"],
              "storage": ["192.168.160.132"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["gfs3"],
              "storage": ["192.168.160.133"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}
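The topology file above is regular enough to generate for any host list. A sketch that builds it for the three example nodes and validates the result (the output path and the python3 validation step are conveniences, not part of Heketi):

```shell
# Generate a heketi topology.json from "host:ip" pairs, one zone, one device.
NODES="gfs1:192.168.160.131 gfs2:192.168.160.132 gfs3:192.168.160.133"
DEVICE=/dev/sdb
OUT="$(mktemp)"   # in production: /etc/heketi/topology.json
{
  printf '{"clusters":[{"nodes":['
  sep=""
  for n in $NODES; do
    host="${n%%:*}"; ip="${n##*:}"
    printf '%s{"node":{"hostnames":{"manage":["%s"],"storage":["%s"]},"zone":1},"devices":["%s"]}' \
      "$sep" "$host" "$ip" "$DEVICE"
    sep=","
  done
  printf ']}]}\n'
} > "$OUT"
# Sanity-check that the generated file is valid JSON before loading it.
python3 -m json.tool "$OUT" > /dev/null && echo "topology is valid JSON"
```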
3.9 创建 glusterFS 存储池
export HEKETI_Cli_SERVER=http://gfs3:8080heketi-cli --server=http://gfs3:8080 topology load --Json=/etc/heketi/topology.Json
Output:
Creating cluster ... ID: d3a3f31dce28e06dbd1099268c4ebe84
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node infra.test.com ... ID: ebfc1e8e2e7668311dc4304bfc1377cb
        Adding device /dev/sdb ... OK
    Creating node node1.test.com ... ID: 0ce162c3b8a65342be1aac96010251ef
        Adding device /dev/sdb ... OK
    Creating node node2.test.com ... ID: 62952de313e71eb5a4bfe5b76224e575
        Adding device /dev/sdb ... OK
3.10 On gfs3, view the cluster topology: heketi-cli topology info
Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
    File:  true
    Block: true
    Volumes:
    Nodes:
        Node Id: 0ce162c3b8a65342be1aac96010251ef
        State: online
        Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
        Zone: 1
        Management Hostnames: node1.test.com
        Storage Hostnames: 192.168.160.132
        Devices:
            Id:d6a5f0aba39a35d3d92f678dc9654eaa   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:
        Node Id: 62952de313e71eb5a4bfe5b76224e575
        State: online
        Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
        Zone: 1
        Management Hostnames: node2.test.com
        Storage Hostnames: 192.168.160.133
        Devices:
            Id:dfd697f2215d2a304a44c5af44d352da   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:
        Node Id: ebfc1e8e2e7668311dc4304bfc1377cb
        State: online
        Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
        Zone: 1
        Management Hostnames: infra.test.com
        Storage Hostnames: 192.168.160.131
        Devices:
            Id:e06b794b0b9f20608158081fbb5b5102   Name:/dev/sdb   State:online   Size (GiB):19   Used (GiB):0   Free (GiB):19
                Bricks:
heketi-cli node list
Id:0ce162c3b8a65342be1aac96010251ef Cluster:d3a3f31dce28e06dbd1099268c4ebe84
Id:62952de313e71eb5a4bfe5b76224e575 Cluster:d3a3f31dce28e06dbd1099268c4ebe84
Id:ebfc1e8e2e7668311dc4304bfc1377cb Cluster:d3a3f31dce28e06dbd1099268c4ebe84
gluster peer status
Number of Peers: 2

Hostname: gfs2
Uuid: ae6e998a-92c2-4c63-a7c6-c51a3b7e8fcb
State: Peer in Cluster (Connected)
Other names:
gfs2

Hostname: gfs1
Uuid: c8c46558-a8f2-46db-940d-4b19947cf075
State: Peer in Cluster (Connected)
4. Testing

4.1 Create a test volume: heketi-cli --json volume create --size 3 --replica 3
{
  "size": 3,
  "name": "vol_93060cd7698e9e48bd035f26bbfe57af",
  "durability": {
    "type": "replicate",
    "replicate": { "replica": 3 },
    "disperse": { "data": 4, "redundancy": 2 }
  },
  "glustervolumeoptions": ["", ""],
  "snapshot": { "enable": false, "factor": 1 },
  "id": "93060cd7698e9e48bd035f26bbfe57af",
  "cluster": "d3a3f31dce28e06dbd1099268c4ebe84",
  "mount": {
    "glusterfs": {
      "hosts": ["192.168.160.132", "192.168.160.133", "192.168.160.131"],
      "device": "192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af",
      "options": { "backup-volfile-servers": "192.168.160.133,192.168.160.131" }
    }
  },
  "blockinfo": {},
  "bricks": [
    {
      "id": "16b8ddb1f2b2d3aa588d4d4a52bb7f6b",
      "path": "/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick",
      "device": "e06b794b0b9f20608158081fbb5b5102",
      "node": "ebfc1e8e2e7668311dc4304bfc1377cb",
      "volume": "93060cd7698e9e48bd035f26bbfe57af",
      "size": 3145728
    },
    {
      "id": "9e60ac3b7259c4e8803d4e1f6a235021",
      "path": "/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick",
      "device": "d6a5f0aba39a35d3d92f678dc9654eaa",
      "node": "0ce162c3b8a65342be1aac96010251ef",
      "volume": "93060cd7698e9e48bd035f26bbfe57af",
      "size": 3145728
    },
    {
      "id": "e3f5ec732d5a8fe4b478af67c9caf85b",
      "path": "/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick",
      "device": "dfd697f2215d2a304a44c5af44d352da",
      "node": "62952de313e71eb5a4bfe5b76224e575",
      "volume": "93060cd7698e9e48bd035f26bbfe57af",
      "size": 3145728
    }
  ]
}
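Clients mount the volume using the mount.glusterfs fields of this response. A sketch of extracting them with python3 (the sample below is trimmed to the relevant fields of the output above; in practice pipe the output of heketi-cli --json volume create):

```shell
# Pull the mount device and backup servers out of heketi's JSON response.
RESP='{"mount":{"glusterfs":{"device":"192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af","options":{"backup-volfile-servers":"192.168.160.133,192.168.160.131"}}}}'
DEVICE=$(printf '%s' "$RESP" | python3 -c 'import json,sys; print(json.load(sys.stdin)["mount"]["glusterfs"]["device"])')
BACKUPS=$(printf '%s' "$RESP" | python3 -c 'import json,sys; print(json.load(sys.stdin)["mount"]["glusterfs"]["options"]["backup-volfile-servers"])')
# The resulting mount command a client would run:
echo "mount -t glusterfs -o backup-volfile-servers=$BACKUPS $DEVICE /mnt"
```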
heketi-cli volume list
Id:93060cd7698e9e48bd035f26bbfe57af    Cluster:d3a3f31dce28e06dbd1099268c4ebe84    Name:vol_93060cd7698e9e48bd035f26bbfe57af
heketi-cli volume info 93060cd7698e9e48bd035f26bbfe57af
Name: vol_93060cd7698e9e48bd035f26bbfe57af
Size: 3
Volume Id: 93060cd7698e9e48bd035f26bbfe57af
Cluster Id: d3a3f31dce28e06dbd1099268c4ebe84
Mount: 192.168.160.132:vol_93060cd7698e9e48bd035f26bbfe57af
Mount Options: backup-volfile-servers=192.168.160.133,192.168.160.131
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
gluster volume list
vol_93060cd7698e9e48bd035f26bbfe57af
gluster volume status
Status of volume: vol_93060cd7698e9e48bd035f26bbfe57af
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.160.132:/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick     49153     0          Y       30660
Brick 192.168.160.131:/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick     49153     0          Y       21979
Brick 192.168.160.133:/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick     49152     0          Y       61274
Self-heal Daemon on localhost               N/A       N/A        Y       61295
Self-heal Daemon on apps.test.com           N/A       N/A        Y       22000
Self-heal Daemon on 192.168.160.132         N/A       N/A        Y       30681

Task Status of Volume vol_93060cd7698e9e48bd035f26bbfe57af
------------------------------------------------------------------------------
There are no active volume tasks
gluster volume info vol_93060cd7698e9e48bd035f26bbfe57af
Volume Name: vol_93060cd7698e9e48bd035f26bbfe57af
Type: Replicate
Volume ID: ca4a9854-a33c-40ab-86c7-0d0d34004454
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.160.132:/var/lib/heketi/mounts/vg_d6a5f0aba39a35d3d92f678dc9654eaa/brick_9e60ac3b7259c4e8803d4e1f6a235021/brick
Brick2: 192.168.160.131:/var/lib/heketi/mounts/vg_e06b794b0b9f20608158081fbb5b5102/brick_16b8ddb1f2b2d3aa588d4d4a52bb7f6b/brick
Brick3: 192.168.160.133:/var/lib/heketi/mounts/vg_dfd697f2215d2a304a44c5af44d352da/brick_e3f5ec732d5a8fe4b478af67c9caf85b/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
5. Using Gluster from OpenShift

5.1 Create a StorageClass. In storage-class.yaml, resturl is the Heketi URL and volumetype: replicate:3 sets the number of replica bricks per volume (3 is recommended).

cat storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: null
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://gfs3:8080"
  restauthenabled: "true"
  volumetype: replicate:3
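Note that the StorageClass enables REST authentication but names no credentials. When Heketi actually enforces auth, the kubernetes.io/glusterfs provisioner also needs a rest user and a Secret holding the key. A sketch of the extra parameters (the Secret name and namespace here are assumptions, not created earlier in this article):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-auth
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://gfs3:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"   # hypothetical Secret of type kubernetes.io/glusterfs holding the admin key
  volumetype: replicate:3
```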
Create the StorageClass
oc create -f storage-class.yaml
View the StorageClass: oc get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   55m
5.2 Create a PVC: cat pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: gluster-heketi
Create the PVC

oc create -f pvc.yml
View the PV and PVC: oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM           STORAGECLASS     REASON    AGE
persistentvolume/pvc-57362c7f-e6c2-11e9-8634-000c299365cc   1Gi        RWX            Delete           Bound     default/test1   gluster-heketi             57m

NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/test-pvc   Bound     pvc-57362c7f-e6c2-11e9-8634-000c299365cc   1Gi        RWX            gluster-heketi   57m
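With the claim bound, any pod can mount it read-write from multiple nodes. A minimal illustrative pod (the pod name, image, and mount path are arbitrary examples, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
```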
Mount the volume on a host
mount -t glusterfs 192.168.160.132:vol_b96d0e18cef937dd56a161ae5fa5b9cb /mnt
df -h | grep vol_b96d0e18cef937dd56a161ae5fa5b9cb
192.168.160.132:vol_b96d0e18cef937dd56a161ae5fa5b9cb                                   1014M   43M  972M   5% /mnt
6. Common commands
View cluster nodes:                          gluster pool list
View cluster status (local host not shown):  gluster peer status
List volumes:                                gluster volume list
View volume info:                            gluster volume info <Volname>
View volume status:                          gluster volume status <Volname>
Force-start a volume:                        gluster volume start <Volname> force
List files that need healing:                gluster volume heal <Volname> info
Start a full heal:                           gluster volume heal <Volname> full
List successfully healed files:              gluster volume heal <Volname> info healed
List files that failed to heal:              gluster volume heal <Volname> info heal-failed
List split-brain files:                      gluster volume heal <Volname> info split-brain
6.1 Other common heketi-cli commands
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv cluster list
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv cluster info <cluster-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv node info <node-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume list
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume create --size=1 --replica=2
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume info <volume-id>
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume expand --volume=<volume-id> --expand-size=1
heketi-cli --server=http://localhost:8080 --user=admin --secret=kLd834dadEsfwcv volume delete <volume-id>
6.2 Initialize a raw disk

Attach an unformatted disk of the same size to each GlusterFS machine. If a disk has already been formatted, reset it with (assuming the disk is /dev/sdb):

pvcreate --metadatasize=128M --dataalignment=256K /dev/sdb
7. GlusterFS cluster troubleshooting

7.1 Volume bricks offline

Check the volume status:
gluster volume status <volume_name>
Any brick showing N in the Online column is offline. Log in to the host that owns the brick and check its mount:
df -h |grep <BRICKname>
If it is not mounted, remount it from fstab:
grep <BRICKname> /etc/fstab | awk '{print $2}' | xargs -i mount {}
Restart the offline bricks:
gluster volume start <Volname> force
7.2 Repairing inconsistent files across bricks

Check whether any files are inconsistent between bricks:
gluster volume heal <Volname> info
Start an automatic heal:
gluster volume heal <Volname> full
7.3 Repairing brick split-brain
gluster volume heal <Volname> info
If the output contains "Is in split-brain", a split-brain has occurred. Resolution options:

1) Use the bigger file as the heal source:
   gluster volume heal <Volname> split-brain bigger-file <file>
2) Use the file with the latest mtime as the source:
   gluster volume heal <Volname> split-brain latest-mtime <file>
3) Use one brick of the replica as the source for a specific file:
   gluster volume heal <Volname> split-brain source-brick <HOSTname:BRICKname> <file>
4) Use one brick of the replica as the source for all files:
   gluster volume heal <Volname> split-brain source-brick <HOSTname:BRICKname>
7.4 Replacing a brick

Start syncing data from the source brick to the new brick path:
gluster volume replace-brick <Volname> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 start
While data is migrating, check whether the replacement task has finished:
gluster volume replace-brick <Volname> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 status
Once migration completes, run commit to finish the task and perform the brick replacement. gluster volume info will then show the replaced brick:
gluster volume replace-brick <Volname> Server1:/home/gfs/r2_0 Server1:/home/gfs/r2_5 commit
8. Heketi troubleshooting

Heketi refuses to start with this error:

[heketi] ERROR 2018/07/02 09:08:19 /src/github.com/heketi/heketi/apps/glusterfs/app.go:172: Heketi was terminated while performing one or more operations. Server may refuse to start as long as pending operations are present in the db.
1) Export heketi's heketi.db file (its path is configured in heketi.json):
   heketi db export --dbfile=/var/lib/heketi/heketi.db --jsonfile=/tmp/heketidb1.json
2) Open the exported file (/tmp/heketidb1.json above), search for the pendingoperations key, and delete its contents.
3) Save the file (keep the .json extension), then import it back:
   heketi db import --jsonfile=/tmp/succ.json --dbfile=/var/lib/heketi/heketi.db
4) Restart heketi:
   systemctl start heketi
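Step 2 can be done programmatically instead of hand-editing. A sketch that empties the pendingoperations map in an exported db JSON (the sample document below is minimal and illustrative, not a real heketi export):

```shell
# Stand-in for the exported /tmp/heketidb1.json file.
DB_JSON="$(mktemp)"
cat > "$DB_JSON" <<'EOF'
{"clusterentries":{},"volumeentries":{},"pendingoperations":{"op1":{"status":"stale"}}}
EOF
# Strip all pending operations, preserving the rest of the db document.
python3 - "$DB_JSON" <<'PY'
import json, sys
path = sys.argv[1]
db = json.load(open(path))
db["pendingoperations"] = {}   # drop the stale pending operations
json.dump(db, open(path, "w"), indent=2)
PY
echo "pending operations left: $(python3 -c "import json; print(len(json.load(open('$DB_JSON'))['pendingoperations']))")"
```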
9. References

https://docs.gluster.org/en/latest/
https://www.ctolib.com/docs/sfile/kubernetes-handbook/practice/storage-for-containers-using-glusterfs-with-openshift.html
