[Gluster-users] Where does Gluster capture the hostnames from?

TomK tomkcpr at mdevsys.com
Mon Sep 23 12:01:50 UTC 2019


Do I *really* need specific /etc/hosts entries when I have IPA?

[root@mdskvm-p01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@mdskvm-p01 ~]#

I really shouldn't need to.  ( Ref. below: everything resolves fine. )
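For what it's worth, which sources glibc (and therefore glusterd) consults is set by the `hosts:` line in /etc/nsswitch.conf, and `getent hosts` resolves through that same order, unlike dig, which always queries DNS directly. A quick sanity check, sketched with the names from this thread:

```shell
# Show the glibc resolution order; with IPA providing DNS the usual
# value is "files dns", so empty /etc/hosts entries are fine as long
# as DNS answers.
grep '^hosts:' /etc/nsswitch.conf

# getent goes through the same nsswitch order glusterd uses:
getent hosts localhost

# Site-specific name from this thread; won't resolve outside that
# network, hence the || true guard.
getent hosts mdskvm-p01.nix.mds.xyz || true
```

If `getent` returns the FQDN here, plain name resolution is not the problem.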

Cheers,
TK
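As to where the names come from: glusterd records, per peer, whatever string was passed to `gluster peer probe` at probe time (an IP on one node, a name on the other) in files under /var/lib/glusterd/peers/, and displays that stored string rather than re-resolving it. A sketch that parses a simulated peer file; the UUID and contents below are illustrative, not from the thread:

```shell
# glusterd keeps one file per peer, keyed by the peer's UUID.
# Simulated contents (illustrative):
peerfile=$(cat <<'EOF'
uuid=26ae19a6-0000-0000-0000-000000000000
state=3
hostname1=192.168.0.60
EOF
)

# hostname1 is the literal string given to `gluster peer probe`;
# this is what `gluster volume status` prints, independent of DNS:
stored=$(printf '%s\n' "$peerfile" | awk -F= '/^hostname1=/{print $2}')
echo "$stored"    # -> 192.168.0.60
```

On a live cluster, probing an already-connected peer by its FQDN (e.g. `gluster peer probe mdskvm-p01.nix.mds.xyz`) adds that name as an additional hostname entry, which is the usual way to make the status output show consistent FQDNs.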


On 9/23/2019 1:32 AM, Strahil wrote:
> Check your /etc/hosts for an entry like:
> 192.168.0.60 mdskvm-p01.nix.mds.xyz mdskvm-p01
> 
> Best Regards,
> Strahil Nikolov
>
> On Sep 23, 2019 06:58, TomK <tomkcpr at mdevsys.com> wrote:
>>
>> Hey All,
>>
>> Take the two hosts below as example.  One host shows NFS Server on
>> 192.168.0.60 (FQDN is mdskvm-p01.nix.mds.xyz).
>>
>> The other shows mdskvm-p02 (FQDN is mdskvm-p02.nix.mds.xyz).
>>
>> Why is there no consistency or correct hostname resolution?  Where does
>> gluster get the hostnames from?
>>
>>
>> [root@mdskvm-p02 glusterfs]# gluster volume status
>> Status of volume: mdsgv01
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
>> lusterv02                                   49153     0          Y       17503
>> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
>> lusterv01                                   49153     0          Y       15044
>> NFS Server on localhost                     N/A       N/A        N       N/A
>> Self-heal Daemon on localhost               N/A       N/A        Y       17531
>> NFS Server on 192.168.0.60                  N/A       N/A        N       N/A
>> Self-heal Daemon on 192.168.0.60            N/A       N/A        Y       15073
>>
>> Task Status of Volume mdsgv01
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> [root@mdskvm-p02 glusterfs]#
>>
>>
>>
>>
>> [root@mdskvm-p01 ~]# gluster volume status
>> Status of volume: mdsgv01
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/g
>> lusterv02                                   49153     0          Y       17503
>> Brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/g
>> lusterv01                                   49153     0          Y       15044
>> NFS Server on localhost                     N/A       N/A        N       N/A
>> Self-heal Daemon on localhost               N/A       N/A        Y       15073
>> NFS Server on mdskvm-p02                    N/A       N/A        N       N/A
>> Self-heal Daemon on mdskvm-p02              N/A       N/A        Y       17531
>>
>> Task Status of Volume mdsgv01
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> [root@mdskvm-p01 ~]#
>>
>>
>>
>> But when verifying everything all seems fine:
>>
>>
>> (1):
>> [root@mdskvm-p01 glusterfs]# dig -x 192.168.0.39
>> ;; QUESTION SECTION:
>> ;39.0.168.192.in-addr.arpa.     IN      PTR
>>
>> ;; ANSWER SECTION:
>> 39.0.168.192.in-addr.arpa. 1200 IN      PTR     mdskvm-p02.nix.mds.xyz.
>> [root@mdskvm-p01 glusterfs]# hostname -f
>> mdskvm-p01.nix.mds.xyz
>> [root@mdskvm-p01 glusterfs]# hostname -s
>> mdskvm-p01
>> [root@mdskvm-p01 glusterfs]# hostname
>> mdskvm-p01.nix.mds.xyz
>> [root@mdskvm-p01 glusterfs]#
>>
>>
>> (2):
>>
>> [root@mdskvm-p02 glusterfs]# dig -x 192.168.0.60
>> ;; QUESTION SECTION:
>> ;60.0.168.192.in-addr.arpa.     IN      PTR
>>
>> ;; ANSWER SECTION:
>> 60.0.168.192.in-addr.arpa. 1200 IN      PTR     mdskvm-p01.nix.mds.xyz.
>>
>> [root@mdskvm-p02 glusterfs]# hostname -s
>> mdskvm-p02
>> [root@mdskvm-p02 glusterfs]# hostname -f
>> mdskvm-p02.nix.mds.xyz
>> [root@mdskvm-p02 glusterfs]# hostname
>> mdskvm-p02.nix.mds.xyz
>> [root@mdskvm-p02 glusterfs]#
>>
>>
>> Gluster version used is:
>>
>> [root@mdskvm-p01 glusterfs]# rpm -aq|grep -Ei gluster
>> glusterfs-server-3.12.15-1.el7.x86_64
>> glusterfs-client-xlators-3.12.15-1.el7.x86_64
>> glusterfs-rdma-3.12.15-1.el7.x86_64
>> glusterfs-3.12.15-1.el7.x86_64
>> glusterfs-events-3.12.15-1.el7.x86_64
>> libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.12.x86_64
>> glusterfs-libs-3.12.15-1.el7.x86_64
>> glusterfs-fuse-3.12.15-1.el7.x86_64
>> glusterfs-geo-replication-3.12.15-1.el7.x86_64
>> python2-gluster-3.12.15-1.el7.x86_64
>> glusterfs-cli-3.12.15-1.el7.x86_64
>> vdsm-gluster-4.20.46-1.el7.x86_64
>> glusterfs-api-3.12.15-1.el7.x86_64
>> glusterfs-gnfs-3.12.15-1.el7.x86_64
>> [root@mdskvm-p01 glusterfs]#
>>
>>
>> -- 
>> Thx,
>> TK.
>> ________
>>
>> Community Meeting Calendar:
>>
>> APAC Schedule -
>> Every 2nd and 4th Tuesday at 11:30 AM IST
>> Bridge: https://bluejeans.com/118564314
>>
>> NA/EMEA Schedule -
>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>> Bridge: https://bluejeans.com/118564314
>>
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users


-- 
Thx,
TK.

