<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#ffffff" text="#000000">
    Vale,<br>
    <br>
    Were you running commands from the CLI on multiple machines
    simultaneously? Concurrent CLI operations can leave glusterd's
    cluster-wide lock held, which would explain "operation failed"
    errors like the ones you are seeing.<br>
    Could you attach the glusterd logs from all the machines in the
    cluster?<br>
    Depending on your mode of installation, the log will be either<br>
    /usr/local/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log<br>
    or<br>
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log<br>
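    <br>
    If it helps, here is one way to pull them all onto one machine (a
    sketch only; the hostnames are taken from your volume layout and
    the packaged-install log path is assumed, so adjust both to match
    your setup):<br>
    <pre wrap="">
# Hypothetical helper: copy each node's glusterd log to the local machine.
# Assumes the packaged-install path; source installs would use the
# /usr/local/var/log/glusterfs/... path instead.
for h in gluster01 gluster02 gluster03 gluster04 \
         gluster05 gluster06 gluster51 gluster52; do
    scp "$h:/var/log/glusterfs/etc-glusterfs-glusterd.vol.log" \
        "./glusterd-$h.log"
done
</pre>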
    <br>
    thanks,<br>
    kp<br>
    <br>
    On 11/03/2011 05:58 PM, M. Vale wrote:
    <blockquote
cite="mid:CAFX7kd2N=+QNzP-zL2dgfCNnADCgJ7VYwemrMXveh02Xvwd3cA@mail.gmail.com"
      type="cite">
      Hi, we are using Gluster in distributed-replicated mode, with the
      following configuration:<br>
      <br>
      Volume Name: volume01<br>
      Type: Distributed-Replicate<br>
      Status: Started<br>
      Number of Bricks: 4 x 2 = 8<br>
      Transport-type: tcp<br>
      Bricks:<br>
      Brick1: gluster01:/mnt<br>
      Brick2: gluster02:/mnt<br>
      Brick3: gluster03:/mnt<br>
      Brick4: gluster04:/mnt<br>
      Brick5: gluster05:/mnt<br>
      Brick6: gluster06:/mnt<br>
      Brick7: gluster51:/mnt<br>
      Brick8: gluster52:/mnt<br>
      Options Reconfigured:<br>
      cluster.data-self-heal-algorithm: full<br>
      performance.io-thread-count: 64<br>
      diagnostics.brick-log-level: INFO<br>
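      <br>
      (For context, a 4 x 2 distributed-replicate layout like this
      would have been created along the following lines; this is a
      sketch, not the exact commands we originally ran.)<br>
      <pre wrap="">
# Hypothetical reconstruction of how the volume was set up.
gluster volume create volume01 replica 2 transport tcp \
    gluster01:/mnt gluster02:/mnt gluster03:/mnt gluster04:/mnt \
    gluster05:/mnt gluster06:/mnt gluster51:/mnt gluster52:/mnt

# The "Options Reconfigured" values above would have been set with:
gluster volume set volume01 cluster.data-self-heal-algorithm full
gluster volume set volume01 performance.io-thread-count 64
gluster volume set volume01 diagnostics.brick-log-level INFO
</pre>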
      <br>
      <br>
      Then we ran:<br>
      <br>
      gluster volume stop volume01<br>
      <br>
      This took several minutes. After that, running gluster volume
      info gives:<br>
      <br>
      <br>
      Volume Name: volume01<br>
      Type: Distributed-Replicate<br>
      Status: Stopped<br>
      Number of Bricks: 4 x 2 = 8<br>
      Transport-type: tcp<br>
      Bricks:<br>
      Brick1: gluster01:/mnt<br>
      Brick2: gluster02:/mnt<br>
      Brick3: gluster03:/mnt<br>
      Brick4: gluster04:/mnt<br>
      Brick5: gluster05:/mnt<br>
      Brick6: gluster06:/mnt<br>
      Brick7: gluster51:/mnt<br>
      Brick8: gluster52:/mnt<br>
      Options Reconfigured:<br>
      cluster.data-self-heal-algorithm: full<br>
      performance.io-thread-count: 64<br>
      diagnostics.brick-log-level: INFO<br>
      <br>
      <br>
      But now, running gluster volume start volume01 gives the
      following error:<br>
      <br>
      operation failed<br>
      <br>
      If I run gluster volume reset, the same thing happens:<br>
      <br>
      gluster volume reset volume01<br>
      operation failed<br>
      <br>
      And if I try to stop it again:<br>
      <br>
      gluster volume stop volume01<br>
      Stopping volume will make its data inaccessible. Do you want to
      continue? (y/n) y<br>
      operation failed<br>
      <br>
      <br>
      This occurs with Gluster 3.2 on CentOS 6.0.<br>
      <br>
      <br>
      Where do I start looking so I can start the volume again?<br>
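      Would checking peer status and restarting glusterd on each node
      be a sane first step? Something like the following (assuming the
      CentOS 6 init script and the packaged-install log path):<br>
      <pre wrap="">
# Hypothetical first-pass checks, run on each node:
gluster peer status        # is cluster membership still intact?
service glusterd restart   # should release a stale cluster-wide lock
tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
</pre>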
      <br>
      Thanks<br>
      MV<br>
      <pre wrap="">
<fieldset class="mimeAttachmentHeader"></fieldset>
_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="http://gluster.org/cgi-bin/mailman/listinfo/gluster-users">http://gluster.org/cgi-bin/mailman/listinfo/gluster-users</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>