<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-text-flowed" style="font-family: -moz-fixed;
font-size: 12px;" lang="x-western">Hello,
<br>
<br>
I recently set up a SLURM cluster with a shared filesystem using
Gluster. The Gluster nodes are connected to the rest of the
cluster over a 56 Gb/s InfiniBand interconnect.
<br>
<br>
Some of our users are receiving the following error when they run
VASP jobs that access files on Gluster:
<br>
<br>
forrtl: severe (51): inconsistent file organization, unit 12
/path/to/file/WAVECAR
<br>
<br>
Is this an error in VASP or in Gluster? If it is a Gluster
error, how do I fix it? I do not know much about Gluster, so
any guidance would be appreciated.
<br>
<br>
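In case it is useful, this is the basic check I was planning to run to see whether the copy of the file served through the FUSE mount differs from the copy on the brick (paths below are just the example path from the error; since these are Distribute volumes, each file should live whole on a single brick):
<br>
<br>

```shell
# Ask the client which brick actually holds the file
getfattr -n trusted.glusterfs.pathinfo /path/to/file/WAVECAR

# Compare the checksum seen through the FUSE mount
# with the checksum of the copy on the brick it reported
md5sum /path/to/file/WAVECAR
md5sum /data/glusterfs/brick1/scratch/path/to/file/WAVECAR
```

<br>
If the two checksums disagree, I would assume the problem is on the Gluster side rather than in VASP, but I may be misreading how this is supposed to work.
<br>
<br>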
Here are some relevant specs:
<br>
[root@aci-storage-1 ~]# gluster --version
<br>
glusterfs 3.4.0beta2 built on May 24 2013 14:11:16
<br>
<br>
[root@aci-storage-1 ~]# gluster volume info
<br>
Volume Name: scratch
<br>
Type: Distribute
<br>
Volume ID: 2d30a015-0452-45a3-9a1d-42cee619d35f
<br>
Status: Started
<br>
Number of Bricks: 8
<br>
Transport-type: tcp
<br>
Bricks:
<br>
Brick1: 10.129.40.21:/data/glusterfs/brick1/scratch
<br>
Brick2: 10.129.40.21:/data/glusterfs/brick2/scratch
<br>
Brick3: 10.129.40.22:/data/glusterfs/brick1/scratch
<br>
Brick4: 10.129.40.22:/data/glusterfs/brick2/scratch
<br>
Brick5: 10.129.40.23:/data/glusterfs/brick1/scratch
<br>
Brick6: 10.129.40.23:/data/glusterfs/brick2/scratch
<br>
Brick7: 10.129.40.24:/data/glusterfs/brick1/scratch
<br>
Brick8: 10.129.40.24:/data/glusterfs/brick2/scratch
<br>
Options Reconfigured:
<br>
features.quota: on
<br>
features.limit-usage: /:80TB
<br>
<br>
Volume Name: home
<br>
Type: Distribute
<br>
Volume ID: 711465cf-db6c-4407-9b02-43e44ee4779b
<br>
Status: Started
<br>
Number of Bricks: 8
<br>
Transport-type: tcp
<br>
Bricks:
<br>
Brick1: 10.129.40.21:/data/glusterfs/brick1/home
<br>
Brick2: 10.129.40.21:/data/glusterfs/brick2/home
<br>
Brick3: 10.129.40.22:/data/glusterfs/brick1/home
<br>
Brick4: 10.129.40.22:/data/glusterfs/brick2/home
<br>
Brick5: 10.129.40.23:/data/glusterfs/brick1/home
<br>
Brick6: 10.129.40.23:/data/glusterfs/brick2/home
<br>
Brick7: 10.129.40.24:/data/glusterfs/brick1/home
<br>
Brick8: 10.129.40.24:/data/glusterfs/brick2/home
<br>
Options Reconfigured:
<br>
features.limit-usage: /:30TB
<br>
features.quota: on
<br>
<br>
There do not appear to be any significant errors in the log
files, but /var/log/glusterfs/scratch.log does contain many
messages like these:
<br>
[2013-06-27 21:57:21.399355] W [quota.c:2167:quota_fstat_cbk]
0-scratch-quota: quota context not set in inode
(gfid:0b855d43-2a51-42bc-8707-fbe010cfe5b9)
<br>
[2013-06-27 21:59:29.188686] E [io-cache.c:557:ioc_open_cbk]
0-scratch-io-cache: inode context is NULL
(5555d554-41ff-44be-be88-af3b0d570876)
<br>
[2013-06-27 21:59:29.189095] W [quota.c:2301:quota_readv_cbk]
0-scratch-quota: quota context not set in inode
(gfid:5555d554-41ff-44be-be88-af3b0d570876)
<br>
[2013-06-27 21:59:34.296190] E [io-cache.c:557:ioc_open_cbk]
0-scratch-io-cache: inode context is NULL
(5555d554-41ff-44be-be88-af3b0d570876)
<br>
[2013-06-27 21:59:34.296686] W [quota.c:2301:quota_readv_cbk]
0-scratch-quota: quota context not set in inode
(gfid:5555d554-41ff-44be-be88-af3b0d570876)
<br>
[2013-06-27 22:01:41.415542] E [io-cache.c:557:ioc_open_cbk]
0-scratch-io-cache: inode context is NULL
(bb9a4fba-3cc9-4d2a-a937-00752ec6c5d2)
<br>
[2013-06-27 22:01:41.416062] W [quota.c:2301:quota_readv_cbk]
0-scratch-quota: quota context not set in inode
(gfid:bb9a4fba-3cc9-4d2a-a937-00752ec6c5d2)
<br>
[2013-06-27 22:01:43.570357] W [quota.c:1253:quota_unlink_cbk]
0-scratch-quota: quota context not set in inode
(gfid:bb9a4fba-3cc9-4d2a-a937-00752ec6c5d2)
<br>
[2013-06-27 22:01:43.571182] W [quota.c:1253:quota_unlink_cbk]
0-scratch-quota: quota context not set in inode
(gfid:592ca6e8-31f9-4e97-9fe3-68ecaa806f22)
<br>
<br>
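Given the io-cache and quota messages above, would it be a reasonable test to temporarily disable those translators on the scratch volume and see whether the VASP errors go away? I was thinking of something like:
<br>
<br>

```shell
# Turn off the io-cache translator on the scratch volume
gluster volume set scratch performance.io-cache off

# If quota is suspected, disable it as a test
# (we would re-enable it afterwards)
gluster volume quota scratch disable
```

<br>
I have not tried this yet since I am not sure whether it is safe to toggle these on a volume that is in active use.
<br>
<br>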
Please let me know if you need anything else.
<br>
<br>
Thanks much,
<br>
<br>
Neil Van Lysel
<br>
<a class="moz-txt-link-abbreviated"
href="mailto:van-lyse@cs.wisc.edu">van-lyse@cs.wisc.edu</a>
<br>
UNIX Systems Administrator
<br>
Center for High Throughput Computing
<br>
University of Wisconsin - Madison
<br>
</div>
</body>
</html>