Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On 10/25/2012 08:37 AM, Shaun Thomas wrote:

> So we've recently upgraded to DRBD 8.4.2, and have been noticing
> some... odd behavior. Here's a sar extract for a glitch we noticed
> last night:

So, to reply to my own message... this is actually a very subtle problem
in newer Linux kernels, and it definitely affects 3.2.0 and above.

Newer kernels include a cpuidle driver named "intel_idle" that overrides
any CPU sleep settings you might have made in the BIOS. You can check
this yourself:

cat /sys/devices/system/cpu/cpuidle/current_driver

If it says intel_idle, the Linux kernel will *aggressively* put your CPU
to sleep. As you can imagine, the secondary DRBD node doesn't get much
activity, so it spends most of its time sleeping. The CPU therefore
accumulates much more sleep time and pays wake-up latency every time it
has to copy data.

To fix this, you must actually disable the driver by picking your own
C-state, probably the one you wanted in the BIOS in the first place. We
did this by adding the following options to GRUB_CMDLINE_LINUX_DEFAULT
in /etc/default/grub, but your distro may differ:

intel_idle.max_cstate=0 processor.max_cstate=0 idle=mwait

Then reboot. Here are the benefits we got:

* The %util difference between the backing device and DRBD went down by
  30-40%.
* TCP RTT is almost 10x faster.

I'm totally not kidding about that last one. Because of the time needed
to wake a CPU to handle network traffic, latency was massively increased
under the intel_idle driver. Our RTT average was 0.375ms on a 10G link
before; with the settings above it's now 0.04ms.

Consider this a PSA: DRBD is being unfairly blamed for bad performance
that is actually caused by the intel_idle cpuidle driver in newer
kernels! If you have DRBD on a newer Intel system, I highly recommend
you make the above changes.

Thanks, everyone!

-- 
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-444-8534
sthomas at optionshouse.com

______________________________________________

See http://www.peak6.com/email_disclaimer/ for terms and conditions
related to this email
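A minimal sketch of what the GRUB change might look like on a
Debian/Ubuntu-style system, assuming the stock "quiet splash" options
are already on the line; other distros may keep the file elsewhere or
regenerate the configuration with grub2-mkconfig instead of update-grub:

# /etc/default/grub -- append the three options to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_idle.max_cstate=0 processor.max_cstate=0 idle=mwait"

# regenerate the boot configuration, then reboot
sudo update-grub
sudo reboot

# after the reboot, confirm the kernel no longer reports intel_idle
cat /sys/devices/system/cpu/cpuidle/current_driver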