Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi,
in "Re: [DRBD-user] DRBD on ramdisk"
<43021A1B.200 at arx.net>
at Tue, 16 Aug 2005 19:53:47 +0300,
tchatzi at arx.net wrote:
> >> Is there a possibility of serious problem on DRBD on /dev/ramX?
> >
> >I don't know. We never really considered such usage.
> >
> >I know that someone on this list used to have his apache session data
> >on a drbd on ram disk, and it seemed to work well for him.
> >
> >
> That would be me :)
> It does work quite reliably, but I suspect that it won't do what you
> expect, if I read your post right.
> Something like NFS mounted directories is probably what you need. But
> you sure are welcome to drbd those...
I want to NFS-export a DRBD-backed directory. Tomcat runs on each NFS
client; each Tomcat stores its serialized session data in the
NFS-mounted directory and shares that session data with the other
Tomcat instances.
I also want to use a memory-based device (ramdisk, tmpfs) as the
backing device for DRBD instead of a hard disk, because it is:
- faster: Tomcat has to handle a lot of requests.
- less failure-prone: a hard disk can break...
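(For context: a file-based session store like this can be set up with
Tomcat's PersistentManager and FileStore, roughly as below in each
instance's Context configuration; the directory path is only an
example, and our actual setup may differ in details.)

<Manager className="org.apache.catalina.session.PersistentManager"
         saveOnRestart="true">
    <Store className="org.apache.catalina.session.FileStore"
           directory="/mnt/nfs/sessions"/>
</Manager>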
//
I have run into a new problem. During a stress test, Use% of
/dev/drbd1 reached 100%, so I removed all files and directories, but
Use% is still 100%...
# df /mnt/drbd
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/drbd1              258768    258756        12 100% /mnt/drbd
# df -h /mnt/drbd
Filesystem            Size  Used Avail Use% Mounted on
/dev/drbd1            253M  253M   12K 100% /mnt/drbd
# find /mnt/drbd -type f
(no output)
# fuser -mv /mnt/drbd/
(no output)
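If it helps, I can also take the export offline briefly, unmount,
run a read-only filesystem check, and re-check df, roughly:

# umount /mnt/drbd
# xfs_repair -n /dev/drbd1    (check-only mode, makes no changes)
# mount /dev/drbd1 /mnt/drbd
# df /mnt/drbd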
Is there a potential problem with running DRBD on a ramdisk? I would
be very grateful for any kind of information.
Here is how /dev/drbd1 is set up:
boot with ramdisk_size=524288
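(i.e. ramdisk_size=524288 is passed on the kernel command line; in
grub.conf the kernel line looks roughly like this, with the kernel
image and root device as placeholders:)

kernel /boot/vmlinuz-2.6.x ro root=/dev/sda1 ramdisk_size=524288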
drbd.conf
  on drbd01 {
    device    /dev/drbd1;
    disk      /dev/ram3;
    address   XXX.XXX.XXX.XXX:7789;
    meta-disk /dev/ram1[1];
  }
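(Only this host's section is quoted above; the whole resource block is
roughly as below, with the protocol, peer host name, and peer address
shown as placeholders:)

resource r1 {
  protocol C;                          # placeholder protocol
  on drbd01 {
    device    /dev/drbd1;
    disk      /dev/ram3;
    address   XXX.XXX.XXX.XXX:7789;
    meta-disk /dev/ram1[1];
  }
  on drbd02 {                          # placeholder peer host name
    device    /dev/drbd1;
    disk      /dev/ram3;
    address   YYY.YYY.YYY.YYY:7789;    # placeholder peer address
    meta-disk /dev/ram1[1];
  }
}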
mkfs.xfs -f -s size=1024 -b size=1024 -i size=512,maxpct=0 -d size=268435456 /dev/drbd1
The NFS server exports it with the following configuration:
/mnt/drbd XXX.XXX.XXX.XXX/255.255.255.0(rw,no_root_squash,async)
The NFS clients mount it with the following options:
-o rw,soft,intr,rsize=8192,wsize=8192,noac
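(i.e. on each client, roughly the following; the server name and the
client-side mount point are placeholders:)

mount -t nfs -o rw,soft,intr,rsize=8192,wsize=8192,noac nfsserver:/mnt/drbd /mnt/nfs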
The stress test is as follows (a rough shell reconstruction is below):
- loop:
    fork 3 processes; each process does the following in the background:
      repeat 1000 times:
        cat 0 > 1
        rm -f 1
- loop:
    bonnie++ -s 32 -r 0 -n 20:4096:2048 -x 1
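Roughly, as a shell script (run on each client in the NFS-mounted
directory; "0" and "1" are the literal file names from above, and the
mount point is a placeholder):

#!/bin/sh
# first loop: forever, fork 3 background writers that each copy/remove 1000 times
cd /mnt/nfs
while :; do
    for p in 1 2 3; do
        (
            i=0
            while [ $i -lt 1000 ]; do
                cat 0 > 1
                rm -f 1
                i=`expr $i + 1`
            done
        ) &
    done
    wait
done

# second loop, run in parallel with the first:
# while :; do bonnie++ -s 32 -r 0 -n 20:4096:2048 -x 1; done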
I ran the stress test on three NFS clients concurrently.
The "Use% 100%" problem occurred about 9 hours later.
There is no problem when the NFS server exports /dev/ram3 directly
(stress tested for 24 hours).
--
HIROSE Masaaki