<div dir="ltr">As explained before, it is currently NFS, not iSCSI.<br><br>Here is a sample of my nfs.log. I have tons of this:<br><br>[2014-03-05 23:09:47.293822] D [nfs3-helpers.c:3514:nfs3_log_readdir_call] 0-nfs<br>-nfsv3: XID: 27dce0a, READDIRPLUS: args: FH: exportid 27566f19-3945-4fda-bbea-3d<br>
3b1b29a32f, gfid 00000000-0000-0000-0000-000000000001, dircount: 1008, maxcount:<br> 8064<br>[2014-03-05 23:09:47.294285] D [nfs3-helpers.c:3480:nfs3_log_readdirp_res] 0-nfs<br>-nfsv3: XID: 27dce0a, READDIRPLUS: NFS: 0(Call completed successfully.), POSIX:<br>
117(Structure needs cleaning), dircount: 1008, maxcount: 8064, cverf: 30240636,<br>is_eof: 0<br>[2014-03-05 23:09:47.294522] D [nfs3-helpers.c:3514:nfs3_log_readdir_call] 0-nfs<br>-nfsv3: XID: 27dce0b, READDIRPLUS: args: FH: exportid 27566f19-3945-4fda-bbea-3d<br>
3b1b29a32f, gfid 00000000-0000-0000-0000-000000000001, dircount: 1008, maxcount:<br> 8064<br><br><br><br>one of the bricks:<div><br></div><div><div>[2014-03-05 23:21:42.469118] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: READDIRP scheduled as fast fop</div>
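For reference, the POSIX error 117 in that READDIRPLUS reply is EUCLEAN ("Structure needs cleaning"). If you want to double-check the errno mapping on a Linux box (the header path may vary by distro):

grep EUCLEAN /usr/include/asm-generic/errno.h
# expected output: #define EUCLEAN  117  /* Structure needs cleaning */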
One of the bricks:

[2014-03-05 23:21:42.469118] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: READDIRP scheduled as fast fop
[2014-03-05 23:21:42.469403] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: FSTAT scheduled as fast fop
[2014-03-05 23:21:42.470167] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: READDIRP scheduled as fast fop
[2014-03-05 23:21:42.470757] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: FSTAT scheduled as fast fop

Volume definition:

Volume Name: stdata
Type: Stripe
Volume ID: 27566f19-3945-4fda-bbea-3d3b1b29a32f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.1.25:/stripe0
Brick2: 10.0.1.25:/stripe1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.brick-log-level: DEBUG
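In case it helps to reproduce, this is roughly how the volume and the debug log levels were set up; the command lines below are reconstructed from the volume info above, so treat them as an approximation rather than a transcript:

gluster volume create stdata stripe 2 transport tcp 10.0.1.25:/stripe0 10.0.1.25:/stripe1
gluster volume start stdata
# these two options are what switched the client and brick logs above to DEBUG
gluster volume set stdata diagnostics.client-log-level DEBUG
gluster volume set stdata diagnostics.brick-log-level DEBUG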
If there is anything else I can provide to help troubleshoot this volume on ESXi, just let me know.

KR,

Carlos.


On Wed, Mar 5, 2014 at 6:35 PM, Anand Avati <avati@gluster.org> wrote:
>
> Can you please post some logs (the logs of the client that is exporting iSCSI)? It is hard to diagnose issues without logs.
>
> thanks,
> Avati
>
>
> On Wed, Mar 5, 2014 at 9:28 AM, Carlos Capriotti <capriotti.carlos@gmail.com> wrote:
>>
>> Hi all. Again.
>>
>> I am still fighting that "VMware ESXi cannot use striped Gluster volumes" thing, and a couple of crazy ideas are coming to mind.
>>
>> One of them is using iSCSI WITH Gluster, and ESXi connecting via iSCSI.
>>
>> My experience with iSCSI is limited to a couple of FreeNAS test installs, and some tuning on FreeNAS and ESXi to implement multipathing, but nothing dead serious.
>>
>> I remember that after creating a volume and formatting it (zvol), THEN space was allocated to iSCSI. That makes some sense, since iSCSI is a block device, and once it is available, the operating system actually uses it. But my memory is a bit foggy.
>>
>> I am trying to work around the present limitation in Gluster, which refuses to talk to ESXi using a striped volume.
>>
>> So, here is the question: does anyone here use Gluster with iSCSI?
>>
>> Would anyone care to comment on the performance of this kind of solution, pros and cons?
>>
>> Thanks.
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
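P.S.: For the Gluster-plus-iSCSI idea quoted above, the pattern I have seen described is the file-backed LUN: mount the Gluster volume with the FUSE client on a Linux box, create a large file on it, export that file as an iSCSI LUN with tgt, and point the ESXi software iSCSI initiator at it. Rough, untested sketch; the target name, paths and size are made up for illustration, and it assumes tgtd (scsi-target-utils) is installed and running:

mount -t glusterfs 10.0.1.25:/stdata /mnt/stdata
truncate -s 500G /mnt/stdata/esxi-lun0.img
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2014-03.local.lab:stdata.lun0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /mnt/stdata/esxi-lun0.img
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL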