Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Ben wrote:
>> 4MB/sec initial re-sync? Or 4MB/sec write performance in dom0 or in
>> domU? If it's write performance in domU, you should look into scheduling
>> configuration.

Because of the problems with drbd 8 (other thread) I have gone back to
0.7.17 for now, so as to get some numbers faster. I am using current
xen-unstable (~ 3.0.2) and Linux 2.6.16 (similar to the FC5 kernel but
with fewer patches). Drbd is set up only in dom0; the devices are then
exported to the domU VMs using the virtual device model of Xen.

For the initial drbd sync, I get ~50,000K/sec over Gigabit Ethernet. This
also seems to be the maximum sequential write performance of the SATA disk.

There is only one VM, scheduling configured as:

xm sched-sedf
Name      ID  Period(ms)  Slice(ms)  Lat(ms)  Extra  Weight
Domain-0   0        20.0        15.0      0.0      1       0
vm1        5       100.0         0.0      0.0      1       0

I have done some *very* quick performance tests writing and reading a 1GB
file in "vm1":

reboot ...
sync
time dd if=/dev/zero of=/testfile bs=1024 count=1048576
reboot ...
time dd if=/testfile of=/dev/null bs=1024 count=1048576

Time for writing was ~ 21 secs (real time), i.e. ~ 49MB/sec.
Time for reading was ~ 17 secs (real time), i.e. ~ 60MB/sec.

Using scp over a 100Mbit link I get ~ 11MB/sec (I have not configured the
Gigabit link for the VM). The results are very similar to what I get on
bare metal. So this is very encouraging. Still, I would really like to
get drbd v8 working.

>> See for example this thread:
>> http://lists.xensource.com/archives/html/xen-users/2006-04/msg00115.html
>> Doing something like the mentioned 'xm sched-sedf 0 0 0 0 1 1' could
>> help.

I don't know what version you are using, but at least with xen 3.0.2 the
syntax is different now. Look at the output of "xm help sched-sedf". To
get the scheduling as given in the table above, I used:

xm sched-sedf 0 -p 20 -s 15 -l 0 -e 1
xm sched-sedf vm1 -p 100 -s 0 -e 1

Best Regards,
Michael Paesold
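
P.S. In case it helps anyone reproducing this setup: exporting the dom0
drbd device to the guest only takes a disk line in the domU config file.
A minimal sketch, assuming the device is /dev/drbd0 and should appear as
sda1 inside the guest (kernel path, memory size and device names here
are examples, not my actual config):

# /etc/xen/vm1 -- only the relevant lines shown
kernel = "/boot/vmlinuz-2.6.16-xenU"
name   = "vm1"
memory = 512
# phy: hands a dom0 block device (here the drbd device) through to the guest
disk   = [ 'phy:/dev/drbd0,sda1,w' ]
root   = "/dev/sda1 ro"

With that in place the guest just sees an ordinary block device, and the
dd tests above run against it like against any local disk.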