[Gluster-devel] Gluster 9.6 changes to fix gluster NFS bug

Jacobson, Erik erik.jacobson at hpe.com
Thu Mar 21 16:39:11 UTC 2024


Dear team. I made a new PR. (Sorry, my inexperience with github.com is showing: I created a new PR instead of updating the old one. It seemed easier to close the old one and start fresh than to fix the old one.)

In the new PR, I integrated feedback. Thank you so much.
https://github.com/gluster/glusterfs/pull/4322

I am attaching to this email my notes on reproducing this environment. I used virtual machines and a constrained test environment to duplicate the problem and test the fix. I hope these notes resolve all the outstanding questions.

If not, please let me know! Thanks again to all.

Erik



From: Jacobson, Erik <erik.jacobson at hpe.com>
Date: Monday, March 18, 2024 at 10:22 AM
To: Aravinda <aravinda at kadalu.tech>
Cc: Gluster Devel <gluster-devel at gluster.org>
Subject: Re: [Gluster-devel] Gluster 9.6 changes to fix gluster NFS bug
I will need to set up an isolated test case.

In the meantime, I forked the repo and opened a PR. I marked it as a draft while I try to find an easier test case.

https://github.com/gluster/glusterfs/pull/4319

From: Aravinda <aravinda at kadalu.tech>
Date: Saturday, March 16, 2024 at 9:37 AM
To: Jacobson, Erik <erik.jacobson at hpe.com>
Cc: Gluster Devel <gluster-devel at gluster.org>
Subject: Re: [Gluster-devel] Gluster 9.6 changes to fix gluster NFS bug
> We ran into some trouble in Gluster 9.3 with the Gluster NFS server. We updated to the supported Gluster 9.6 and reproduced the problem.

Please share the reproducer steps. We can include them in our tests if possible.

> We understand the Gluster team recommends Ganesha for NFS, but in our specific environment and use case, Ganesha isn’t fast enough. No disrespect intended; we never got the chance to work with the Ganesha team on it.

That is totally fine. I think gnfs is disabled in the later versions; you have to build from source to enable it. The only issue I see is that gnfs doesn't support NFSv4, and the NFS+Gluster team shifted its focus to NFS Ganesha.
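
For reference, enabling gnfs on a source build is roughly along these lines (a sketch only; VOLNAME is a placeholder, and the flag and option names should be checked against the release you build):

    ./configure --enable-gnfs                    # build the gNFS server xlator
    make && make install
    gluster volume set VOLNAME nfs.disable off   # enable gNFS for the volume
    gluster volume status VOLNAME                # the NFS Server process should now be listed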

> We tried to avoid Ganesha and Gluster NFS altogether by exporting fuse mounts with kernel NFS. That was faster, but failover didn’t work: we could make the mount point highly available but not open files (when the IP failover happened, the mount point would still function, but an open file – a squashfs in this example – would not fail over).

Was the Gluster backup volfile server option used for high availability, or some other method?
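
(To be concrete, I mean a fuse mount along these lines, where server2 and server3 stand in for the other volfile servers:

    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/VOLNAME /mnt/VOLNAME

Note this only covers fetching the volfile when server1 is unreachable; it is not by itself a failover mechanism for an NFS re-export.)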

> So we embarked on a mission to figure out what was going on with the NFS server. I am not an expert in network code or distributed filesystems, so someone with a careful eye would need to check these changes out. What I generally found is that the Gluster NFS server requires the layers of gluster to report back ‘errno’ so it can determine whether EINVAL is set (to determine is_eof). In some instances, errno was not being passed down the chain or was being reset to 0. This resulted in NFS traces showing multiple READs for a 1-byte file and the NFS client reporting an “I/O” error. Files above roughly 170 MB seemed to work OK; this is likely related to how the behavior of the gluster layers changes at certain file sizes, but we did not track that part down.

> We found that, in one case, disabling the NFS performance io-cache fixed the problem for a non-sharded volume, but the problem persisted on a sharded volume. Testing also showed that our environment takes the loss of the NFS performance io-cache quite hard, so disabling it wasn’t an option for us.

> We were curious why the fuse client wasn’t impacted, but a quick look suggested that fuse doesn’t use or need errno in the same way Gluster NFS does.

> So, the attached patch fixed the issue. Accessing small files in either case above now works properly. We also ran md5sum against large files over both NFS and fuse mounts, and everything seemed fine.

> In our environment, the NFS-exported directories tend to contain squashfs files representing read-only root filesystems for compute nodes, and those worked fine over NFS after the change as well.

> If you do not wish to include this patch because Gluster NFS is deprecated, I would still greatly appreciate it if someone could validate my work, as our solution will need Gluster NFS enabled for the time being. I am concerned I could have missed a nuance and caused a hard-to-detect problem.

We can surely include this patch in the Gluster repo, since many tests still use this feature and it remains available for interested users. Thanks for the patch. Please submit it as a PR to the GitHub repo; I will follow up with the maintainers and update you. Let me know if you need any help submitting the PR.

--
Thanks and Regards
Aravinda
Kadalu Technologies



---- On Thu, 14 Mar 2024 01:32:50 +0530 Jacobson, Erik <erik.jacobson at hpe.com> wrote ---

Hello team.

We ran into some trouble in Gluster 9.3 with the Gluster NFS server. We updated to the supported Gluster 9.6 and reproduced the problem.

We understand the Gluster team recommends Ganesha for NFS, but in our specific environment and use case, Ganesha isn’t fast enough. No disrespect intended; we never got the chance to work with the Ganesha team on it.

We tried to avoid Ganesha and Gluster NFS altogether by exporting fuse mounts with kernel NFS. That was faster, but failover didn’t work: we could make the mount point highly available but not open files (when the IP failover happened, the mount point would still function, but an open file – a squashfs in this example – would not fail over).
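
For anyone picturing that setup, it was roughly of this shape (paths, fsid, and export options here are illustrative rather than our exact configuration): a fuse mount of the volume re-exported by kernel NFS, with a floating IP moved between servers.

    # fuse-mount the volume, then re-export it with kernel NFS
    mount -t glusterfs server1:/VOLNAME /srv/share
    # /etc/exports entry; an explicit fsid is generally needed when exporting a FUSE filesystem
    /srv/share *(ro,fsid=1001,no_subtree_check)
    exportfs -ra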

So we embarked on a mission to figure out what was going on with the NFS server. I am not an expert in network code or distributed filesystems, so someone with a careful eye would need to check these changes out. What I generally found is that the Gluster NFS server requires the layers of gluster to report back ‘errno’ so it can determine whether EINVAL is set (to determine is_eof). In some instances, errno was not being passed down the chain or was being reset to 0. This resulted in NFS traces showing multiple READs for a 1-byte file and the NFS client reporting an “I/O” error. Files above roughly 170 MB seemed to work OK; this is likely related to how the behavior of the gluster layers changes at certain file sizes, but we did not track that part down.
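
To make the failure mode concrete, here is a minimal illustrative sketch (not the actual patch; the translator and names are made up) of the callback pattern involved. An xlator's readv callback has to pass op_errno through when it unwinds, because the gNFS read path uses that value to decide end-of-file; unwinding with a hard-coded 0 loses that information.

    /* Illustrative only -- not the actual patch. Assumes the usual glusterfs
     * readv callback signature; the header path may differ by version. */
    #include <glusterfs/xlator.h>

    int32_t
    example_readv_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                      int32_t op_ret, int32_t op_errno, struct iovec *vector,
                      int32_t count, struct iatt *stbuf, struct iobref *iobref,
                      dict_t *xdata)
    {
        /* Buggy pattern: unwinding with a literal 0 errno drops the hint the
         * NFS layer needs to set is_eof, so the client keeps issuing READs:
         *
         *   STACK_UNWIND_STRICT(readv, frame, op_ret, 0, vector, count,
         *                       stbuf, iobref, xdata);
         *
         * Correct pattern: forward op_errno exactly as received from the
         * layer below so gNFS can detect end-of-file. */
        STACK_UNWIND_STRICT(readv, frame, op_ret, op_errno, vector, count,
                            stbuf, iobref, xdata);
        return 0;
    }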

We found that, in one case, disabling the NFS performance io-cache fixed the problem for a non-sharded volume, but the problem persisted on a sharded volume. Testing also showed that our environment takes the loss of the NFS performance io-cache quite hard, so disabling it wasn’t an option for us.
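
(For reference, the knob in question was along these lines, with VOLNAME as a placeholder; the exact option name is worth confirming with 'gluster volume set help' on your build:

    gluster volume set VOLNAME performance.nfs.io-cache off

The fuse-side equivalent is performance.io-cache.)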

We were curious why the fuse client wasn’t impacted, but a quick look suggested that fuse doesn’t use or need errno in the same way Gluster NFS does.

So, the attached patch fixed the issue. Accessing small files in either case above now works properly. We also ran md5sum against large files over both NFS and fuse mounts, and everything seemed fine.
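
The large-file check itself was nothing fancy; roughly the following, with the two mount points as placeholders and identical checksums expected:

    md5sum /mnt/nfs/bigfile /mnt/fuse/bigfile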

In our environment, the NFS-exported directories tend to contain squashfs files representing read-only root filesystems for compute nodes, and those worked fine over NFS after the change as well.

If you do not wish to include this patch because Gluster NFS is deprecated, I would still greatly appreciate it if someone could validate my work, as our solution will need Gluster NFS enabled for the time being. I am concerned I could have missed a nuance and caused a hard-to-detect problem.

Thank you all!

patch.txt attached.


Attachment: notes.txt
URL: <http://lists.gluster.org/pipermail/gluster-devel/attachments/20240321/d07e877f/attachment-0001.txt>

