<div dir="ltr"><br><div class="gmail_extra"><br>> What sort of read/write patterns are these global transaction logs?</div><div class="gmail_extra"><br></div><div class="gmail_extra">Generally, not too bad. The files are relatively small and are used by the WLS Transaction recovery process to ascertain how far along a particular global transaction set it has gotten, to allow it to ensure it can rollback/commit everything correctly across many transactions (each transaction could be a local resource, remote DB or remote service/composite, so it's an ACID process to ensure data integrity across the resources WLS accesses).</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">We'll end up having a few clusters and hence differing performance requirements depending upon their use, but from our existing DEV/TEST setup and previous experience with this stuff, it's nothing too extreme. Each transaction log is usually a few MB in size. It grows fairly linearly and most of the time, most are over and done with in a short time. Long running processes or those that have a human task (manager authoriser button in a web app for instance), could be hanging around for a significant time.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">IOPS wise again, we'll have to test properly, but again from experience and what we've seen so far, nothing difficult. It doesn't touch the sides of our existing NAS head.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">Only one process (as in a single WLS instance == a single JVM == single process in Linux), will access each log at a time, so performance issues due to locking shouldn't be a real issue. The problem comes when disaster strikes and we need to recover any in-flight transactions. Another WLS instance can recover these, but they need access to the transaction logs, hence the usual use of NFS.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra"><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I've found GlusterFS excels at use cases were you need to read or<br>
write entire files at a time, and in great volume from many sources.<br>
We use it as storage for our production users who read and write a lot<br>
of large media files (mostly from our render farm, which can push a<br>
lot of data around very quickly), and it works brilliantly.<br>
<br>
However, we did attempt to run applications off it (as we were running<br>
applications off an NFS share previously), and host our Linux user<br>
home drives from it, and both of these cases didn't quite work out for<br>
us. GlusterFS appears to not like reading portions of a file at a<br>
time (say, when you load a binary or library, and Linux only requires<br>
a small part of that file read). We ended up going back to NFS for<br>
home and apps, but keeping GlusterFS where it excelled - for large<br>
volume file writes and reads of entire files.<br>

Hmm, nice to know. We won't be running any apps directly off it per se, just keeping some artefacts such as these logs there. However, I'm pretty sure WLS appends to each file as it goes until it hits a particular (user-definable) size and then creates another one, so the use case doesn't quite fit the 'write an entire file at a time' scenario. A rough test of that pattern is sketched below.
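
Something like the following is what I have in mind as a first smoke test on a Gluster mount; a minimal sketch of that suspected pattern, with the mount point, record size and rotation threshold all guesses rather than real WLS values:

    # Sketch of the suspected TLog write pattern: small synchronous appends
    # to a file that rotates once it reaches a size threshold.
    # Mount point, record size and threshold are assumptions, not WLS values.
    import os
    import time

    MOUNT = "/mnt/gluster/tlog-test"  # hypothetical Gluster mount
    RECORD = b"x" * 512               # small, roughly log-record-sized append
    ROTATE_AT = 4 * 1024 * 1024       # rotate at ~4 MB, per the sizes above
    RECORDS = 50000

    os.makedirs(MOUNT, exist_ok=True)
    seq, written = 0, 0
    fd = os.open(os.path.join(MOUNT, "tlog.0"),
                 os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    start = time.time()
    for _ in range(RECORDS):
        written += os.write(fd, RECORD)
        os.fsync(fd)              # force each record to disk, as a TLog would
        if written >= ROTATE_AT:  # emulate the user-definable size limit
            os.close(fd)
            seq += 1
            written = 0
            fd = os.open(os.path.join(MOUNT, "tlog.%d" % seq),
                         os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    os.close(fd)
    elapsed = time.time() - start
    print("%d fsync'd appends in %.1fs (%.0f ops/s)"
          % (RECORDS, elapsed, RECORDS / elapsed))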

Some serious testing is required :)

Thanks for your thoughts

Dan

> -Dan
>
> On 8 October 2013 00:27, Dan Hawker <danhawker@googlemail.com> wrote:
> >
> > Hi All,
> >
> > We have a requirement for a common replicated filesystem between our two
> > datacentres, mostly for DR and patching purposes when running WebLogic
> > clusters.
> >
> > For those who are not acquainted, WebLogic has a persistent store that it
> > uses for global transaction logs, amongst other things. This store can be
> > hosted on shared disk (usually NFS) or, in recent versions, within an
> > Oracle DB. Unfortunately, some of the products that we use have to use
> > the disk option.
> >
> > Ordinarily I'd just follow Oracle's guidelines and use an enterprise NAS
> > head and NFS. However, the NAS head we have at my present role has rather
> > lacklustre replication granularity (every 5 mins), which just won't cut
> > it, so we're looking at alternatives, including throwing more cash at the
> > storage vendor.
> >
> > The Linux team here use GlusterFS to host and replicate their Puppet
> > infrastructure between the datacentres. They like and understand it, and
> > say it's got good performance, so we were wondering if we could also
> > leverage Gluster for the persistent data stores that can't be DB-hosted.
> >
> > I wondered if anyone has tried this kind of thing with WebLogic or any
> > other JEE app server before, and whether it is feasible. We'll obviously
> > test this extensively, but before we spend time and resources, we're just
> > after some degree of confidence that it may work at all.
> >
> > Thanks
> >
> > Dan
> >
> > --
> > Dan Hawker
> > --
<span class="HOEnZb"><font color="#888888"><br>
<br>
<br>
--<br>
Dan Mons<br>
R&D SysAdmin<br>
Cutting Edge<br>
<a href="http://cuttingedge.com.au" target="_blank">http://cuttingedge.com.au</a><br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br>--<br>Dan Hawker<br><a href="mailto:danhawker@googlemail.com">danhawker@googlemail.com</a><br>07773 348975<br>--