Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
I missed something important: we are using DRBD protocol C. The DRBD
configuration is now attached - sorry for forgetting that in the first
place.
On Tue, 2006-03-21 at 16:31 +0100, Werner Fischer wrote:
> Hi all,
>
> I did some performance tests for a talk at the Webhostingday next week
> and want to share my experiences.
>
> I used the following setup for the tests (two identical machines):
> Server: Thomas-Krenn Supermicro SMCI-5015 (dual power supply)
> Board: PDSMP-i
> BIOS Date: 12/14/05, BIOS Rev 1.1
> CPU: Intel Pentium D 840 3.2 GHz (Dual-Core)
> RAM: 2048 MB (DDR2-533 ECC)
> Raid Controller: 3Ware Escalade 9500S-4LP (SATA)
> BIOS: BE9X 2.03.01.051
> Firmware: FE9X 2.06.00.009
> Hard Disks: 2 Seagate Barracuda 7200.8 250GByte (Model:ST3250823AS)
> Raid Config: RAID1
> OS: CentOS 4.2
> Kernel: Virtuozzo Kernel 2.6.8-022stab067.1-smp (the OpenVZ
> kernel plus some special modules; the OpenVZ kernel is
> available from http://openvz.org/download/kernel/)
> Details about the server can be found at:
> http://www.thomas-krenn.com/shopx/index.php/action.view/entity.detail_products/category.1/key.1465
> http://www.supermicro.com/products/motherboard/DualCore/E7230/PDSMP-i.cfm
>
> Virtuozzo/OpenVZ is an OS-virtualization technology: the host and all
> virtual machines (so-called VPSs, Virtual Private Servers) run on the
> same kernel.
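>
> (As a rough illustration of how a VPS is driven from the host - the
> VPS ID 101 below is just a placeholder, not a real ID from this setup:)
> ----------------------------------------------------------------
> vzctl enter 101                 # interactive shell inside VPS 101
> vzctl exec 101 df -h /root      # or run a single command in it
> ----------------------------------------------------------------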
>
> I just did sequential writes using dd, writing a 200 gigabyte file
> (about 186 GiB). "/vz" on the host system is a separate filesystem
> for the VPSs (203 GiB in size). "/vz" on the host is mirrored by DRBD.
>
> I did four tests:
> Test 1: host system, without DRBD (writing to /vz on the host)
> Test 2: VPS, without DRBD (writing to /root in the VPS)
> Test 3: host system, with DRBD (writing to /vz on the host)
> Test 4: VPS, with DRBD (writing to /root in the VPS)
>
>
> Test | Description     | duration in sec. | MB/sec | slower than Test 1
>      |                 | (incl. sync)     |        | (in %)
> -----+-----------------+------------------+--------+--------------------
>   1  | host, w/o DRBD  |         4720.096 |  42.37 |   0.000 %
>   2  | VPS, w/o DRBD   |         5015.390 |  39.88 |   5.888 %
>   3  | host, with DRBD |         5187.163 |  38.56 |   9.004 %
>   4  | VPS, with DRBD  |         5481.256 |  36.49 |  13.887 %
>
> That means:
> -> Writing within a VPS was about 5-6% slower than on the host system
> itself (when comparing test 2 to test 1 or test 4 to test 3).
> -> Writing with DRBD was about 8-9% slower than without DRBD
> (when comparing test 3 to test 1 or test 4 to test 2).
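>
> (In case someone wants to re-derive the last two columns from the
> durations: dd writes 200,000 MB with 1 MB = 10^6 bytes, so a quick
> sketch like this should reproduce the MB/sec and percentage values:)
> ----------------------------------------------------------------
> #!/bin/bash
> # recompute MB/sec and "slower than Test 1" from the measured durations
> t1=4720.096
> for t in 4720.096 5015.390 5187.163 5481.256; do
>   awk -v t="$t" -v t1="$t1" 'BEGIN {
>     printf "%9.3f s  %6.2f MB/sec  %7.3f %% slower than Test 1\n",
>            t, 200000 / t, (1 - t1 / t) * 100 }'
> done
> ----------------------------------------------------------------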
>
> For writing the 200 GB file, I used the following script "writeperf.sh":
> ----------------------------------------------------------------
> #!/bin/bash
> echo "--- Doing sync"
> time sync
> echo "--- Doing dd test"
> time dd if=/dev/zero of=/vz/file00-test bs=1000000 count=200000
> echo "--- Doing sync"
> time sync
> ----------------------------------------------------------------
> (When using the script within the VPS, I changed the of parameter of dd
> to "of=/root/file00-test")
>
> I executed the script with "nohup ./writeperf.sh &". The nohup.out-files
> of the four tests are attached.
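>
> (If you only want the wall-clock times out of such a nohup.out - the
> initial sync, the dd run, and the trailing sync - something like this
> should do:)
> ----------------------------------------------------------------
> grep '^real' nohup.out
> ----------------------------------------------------------------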
>
> Just fyi: when doing the tests, I started with test 3, then did test
> 4. Then I switched off the second node, stopped heartbeat, mounted the
> underlying disk device directly to /vz, and then started Virtuozzo to do
> test 1 and test 2. YOU MUST NOT DO SUCH A THING IN A PRODUCTION
> ENVIRONMENT. Never mount an underlying disk device directly - only when
> you really know what you are doing.
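>
> (In commands, such a switch would look roughly like the sketch below.
> The init script names are assumptions based on a stock CentOS 4 /
> Virtuozzo install - treat it as a sketch, not a recipe:)
> ----------------------------------------------------------------
> # !!! test machines only - never do this on a production cluster !!!
> cat /proc/drbd                  # check the current DRBD state first
> /etc/init.d/heartbeat stop      # release the cluster resources
> /etc/init.d/drbd stop           # shut down DRBD itself
> mount /dev/sda4 /vz             # mount the backing device directly
> /etc/init.d/vz start            # bring Virtuozzo back up
> ----------------------------------------------------------------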
>
> greetings,
> Werner
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
-------------- next part --------------
resource r0 {
  protocol C;
  incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
  startup {
    degr-wfc-timeout 120;
  }
  net {
    on-disconnect reconnect;
  }
  disk {
    on-io-error detach;
  }
  syncer {
    rate 30M;
    group 1;
    al-extents 257;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.255.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda4;
    address   192.168.255.2:7788;
    meta-disk internal;
  }
}