<div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial"><div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial">
        
        
        
        <style type="text/css">P { margin-bottom: 0.08in; }</style>
<p style="margin-bottom: 0in"><font size="4">Hi everyone, </font>
</p><div style="margin-bottom: 0in"><font size="4"><br> I am trying to run multiple fio benchmark jobs on a replicated volume with 3 bricks, but after some hours the warning "W [socket.c:195:__socket_rwv] 0-tcp.ida-server: readv failed (Connection timed out)" appears in the brick logs. I think this warning may be due to the high workload: under heavy load, glusterfs cannot respond on the socket in time. So I added code in rpc/rpc-transport/socket/src/socket.c to raise the socket timeout thresholds (SO_RCVTIMEO is now 180s, KEEP_ALIVE is 300s) and ran the workload again, but it did not help.</font>
</div><p style="margin-bottom: 0in"><br>
</p><div><font size="4">My test environment is as follows:<br><br> Three nodes form the gluster cluster. Each node has 16GB of memory, an 8-core 3.3GHz CPU, two 10000baseT/full network cards and one 1000baseT/full network card, and uses a 16 * 2TB RAID5 array as its brick. The glusterfs version is 3.3.1.<br><br> I created a 1*3 replicated volume across these three nodes. Every node mounts the volume with FUSE through a 10000baseT/full network card, and at the same time mounts the fuse_mount_point with CIFS through the other 10000baseT/full card.<br><br> Each node runs two fio scripts, a write job and a read job. Both scripts operate in the cifs_mount_point.</font><br></div><div style="margin-bottom: 0in"><font size="4">The scripts are as follows:</font>
</div><div style="margin-bottom: 0in"><font size="4"><br>write_jobs:<br>while true</font><font size="4"><br>do</font><font size="4"><br>mkdir -p
${DIR}_write_${i}</font><font size="4"><br>/usr/local/bin/fio
--ioengine=libaio --iodepth=256 --numjobs=100 --rw=write --bs=1k
--size=1000m --directory=${DIR}_write_${i} --name=job01_1k_write >>
${DIR}_write_${i}/job01_1k_write.log </font>
</div><div><font size="4">i=`expr $i + 1`</font><font size="4"><br>done</font>
</div><p style="margin-bottom: 0in"><br>
</p><div><font size="4">read jobs:</font><font size="4"><br>while true</font><font size="4"><br>do</font><font size="4"><br>mkdir -p ${DIR}_read_${i}</font><font size="4"><br>/usr/local/bin/fio
--ioengine=libaio --iodepth=256 --numjobs=100 --rw=read --bs=1k
--size=1000m --directory=${DIR}_read_${i} --name=job01_1k_read >>
${DIR}_read_${i}/job01_1k_read.log</font><font size="4"><br>i=`expr $i + 1`</font><font size="4"><br>done </font>
</div><div style="margin-bottom: 0in"><font size="4"> <br> I changed iodepth from 256 to 16 and numjobs from 100 to 25, but it still does not work. Has anybody else run into this problem?</font>
</div></div></div>