[DRBD-user] Secondary server works harder than primary

Pascal BERTON pascal.berton3 at free.fr
Mon Jun 20 23:03:50 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Chris!

No more ideas than last time, except that, thinking about it, on the secondary
side the "writer" is most likely DRBD itself, and your disks do nothing but
what that writer asks of them. If that were not the case, the IO size would be
inherited from the writer on the primary side and would be identical on both
nodes, which is contrary to what you reported.
If this assertion is true, then the answer probably lies with Linbit, and
without their developers' opinion we're all stuck! Only they know what's going
on under the hood...

Linbit fellows, any comment?

Best regards,

Pascal.
 

-----Original Message-----
From: Gouveia, Chris [mailto:Chris.Gouveia at spirent.com]
Sent: Monday, June 20, 2011 20:54
To: Pascal BERTON; drbd-user at lists.linbit.com
Subject: RE: [DRBD-user] Secondary server works harder than primary

Does anyone have any ideas about what could cause the secondary node to use
smaller IOs, or any way to debug the problem?

Thanks,
Chris

-----Original Message-----
From: Pascal BERTON [mailto:pascal.berton3 at free.fr] 
Sent: Friday, June 10, 2011 10:16 PM
To: Gouveia, Chris; drbd-user at lists.linbit.com
Subject: RE: [DRBD-user] Secondary server works harder than primary

Hello Chris!

OK, these numbers make more sense; your platform is sane. However, they show
something very interesting about what's happening under the cover: the IO
profile is different on the two sides, with the destination issuing smaller
IOs. To get something optimized, I would rather expect the IO size seen on the
primary side to be transmitted to the secondary so that both nodes do "equal"
work; unfortunately I don't have enough DRBD knowledge to tell whether that's
something one can act upon. If it turned out to be impossible, the underlying
disk technology and cache would probably have a significant impact on latency,
especially with protocol C, and especially if one were to use DRBD for
virtualization, which is known to stress disks harder than traditional
workloads. I'll hand over to the DRBD experts now; let's wait and see what
they think about it... Maybe there's some magic parameter that would let us
make things behave better. In any case, that's an interesting point, good to
know!

Best regards,

Pascal.

-----Original Message-----
From: drbd-user-bounces at lists.linbit.com
[mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of
Chris.Gouveia at spirent.com
Sent: Saturday, June 11, 2011 01:50
To: drbd-user at lists.linbit.com
Subject: [DRBD-user] Secondary server works harder than primary

Thank you for your reply. Enclosed are more accurate and consistent results:
Primary machine
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.16    0.00    1.19   15.70    0.00   82.94

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              37.33        24.00       985.33         72       2956
sdb              34.67        21.33       954.67         64       2864
sdd              40.33        16.00       950.67         48       2852
md0             729.33        61.33      2890.67        184       8672

Secondary machine
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.04    0.00    0.28    0.00    0.00   99.68

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             194.00         0.00       978.67          0       2936
sdb             199.67         0.00       990.67          0       2972
sdc             191.33         0.00       948.00          0       2844
md0             734.00         0.00      2916.00          0       8748

Would it be accurate to interpret this as DRBD issuing the same number of
requests to both servers, but, for whatever reason, the software RAID on the
secondary server breaking each request up into smaller IO requests?
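As a rough sanity check of that interpretation, the average transfer size per
request can be read straight off the figures above as
(kB_read/s + kB_wrtn/s) / tps. The short Python snippet below is nothing more
than that arithmetic, with the numbers copied from the two tables:

# Back-of-envelope: average KB per request from the iostat samples above.
# avg KB/request = (kB_read/s + kB_wrtn/s) / tps

primary = {                 # device: (tps, kB_read/s, kB_wrtn/s)
    "sda": (37.33, 24.00, 985.33),
    "sdb": (34.67, 21.33, 954.67),
    "sdd": (40.33, 16.00, 950.67),
    "md0": (729.33, 61.33, 2890.67),
}
secondary = {
    "sda": (194.00, 0.00, 978.67),
    "sdb": (199.67, 0.00, 990.67),
    "sdc": (191.33, 0.00, 948.00),
    "md0": (734.00, 0.00, 2916.00),
}

for label, node in (("primary", primary), ("secondary", secondary)):
    for dev, (tps, rd, wr) in node.items():
        print(f"{label:9s} {dev}: {(rd + wr) / tps:5.1f} KB per request")

md0 comes out at roughly 4 KB per request on both nodes, while the member
disks average about 25-28 KB per request on the primary but only about 5 KB on
the secondary. That would be consistent with md0 receiving a similar stream of
small requests on both nodes, which then get merged into larger ones on the
primary but not on the secondary.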

If so, would anyone have any suggestions on how to debug the software RAID
portion?
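Not a definitive recipe, but one cheap first step would be to compare the
block-layer queue settings of md0 and its member disks on the two nodes; if
they differ, that is where requests are most likely being split or left
unmerged. The sketch below is plain Python reading standard sysfs attributes
(the device names are the ones from this thread, and the exact set of files
present depends on kernel version and RAID level, so missing entries are
normal). Extended iostat output (iostat -x, the avgrq-sz column) or blktrace
on the member disks would then show the actual request sizes directly.

# Hedged debugging sketch (not DRBD-specific): dump the block-layer queue
# attributes that usually bound request sizes, for md0 and its members.
# Run it on both nodes and diff the output.
from pathlib import Path

DEVICES = ("md0", "sda", "sdb", "sdc", "sdd")   # adjust to each node
ATTRS = ("max_sectors_kb", "max_hw_sectors_kb", "scheduler")

for dev in DEVICES:
    qdir = Path("/sys/block") / dev / "queue"
    if not qdir.is_dir():
        continue                                # device absent on this node
    for attr in ATTRS:
        f = qdir / attr
        if f.exists():                          # not every kernel exposes all
            print(f"{dev}/queue/{attr}: {f.read_text().strip()}")

chunk = Path("/sys/block/md0/md/chunk_size")    # striped arrays only
if chunk.exists():
    print("md0 chunk_size:", chunk.read_text().strip())

Any asymmetry between the two nodes (a different scheduler, a different
max_sectors_kb, a different chunk size) would be the first thing to look at.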

Thanks for everyone's time,
Chris Gouveia


<DIV><FONT size="1">

E-mail confidentiality.
--------------------------------
This e-mail contains confidential and / or privileged information belonging
to Spirent Communications plc, its affiliates and / or subsidiaries. If you
are not the intended recipient, you are hereby notified that any disclosure,
copying, distribution and / or the taking of any action based upon reliance
on the contents of this transmission is strictly forbidden. If you have
received this message in error please notify the sender by return e-mail and
delete it from your system. 

Spirent Communications plc,
Northwood Park, Gatwick Road, Crawley, West Sussex, RH10 9XN, United
Kingdom.
Tel No. +44 (0) 1293 767676
Fax No. +44 (0) 1293 767677

Registered in England Number 470893
Registered at Northwood Park, Gatwick Road, Crawley, West Sussex, RH10 9XN,
United Kingdom.

Or if within the US,

Spirent Communications,
26750 Agoura Road, Calabasas, CA, 91302, USA.
Tel No. 1-818-676- 2300 

</FONT></DIV>


<DIV><FONT size="1">

E-mail confidentiality.
--------------------------------
This e-mail contains confidential and / or privileged information belonging
to Spirent Communications plc, its affiliates and / or subsidiaries. If you
are not the intended recipient, you are hereby notified that any disclosure,
copying, distribution and / or the taking of any action based upon reliance
on the contents of this transmission is strictly forbidden. If you have
received this message in error please notify the sender by return e-mail and
delete it from your system.

Spirent Communications plc
Northwood Park, Gatwick Road, Crawley, West Sussex, RH10 9XN, United
Kingdom.
Tel No. +44 (0) 1293 767676
Fax No. +44 (0) 1293 767677

Registered in England Number 470893
Registered at Northwood Park, Gatwick Road, Crawley, West Sussex, RH10 9XN,
United Kingdom.

Or if within the US,

Spirent Communications,
26750 Agoura Road, Calabasas, CA, 91302, USA.
Tel No. 1-818-676- 2300 

</FONT></DIV>




More information about the drbd-user mailing list