[DRBD-user] Requested Feature

Ben Clewett ben at roadrunner.uk.com
Tue Mar 18 15:42:24 CET 2008


Hi Lars,

Thanks for answering my email.

Lars Ellenberg wrote:
 > On Tue, Mar 18, 2008 at 11:05:08AM +0000, Ben Clewett wrote:
 >> Dear DRBD,
 >> I would like to request a feature for DRBD.  I would be interested in
 >> showing the data rate sent over the network to the other host in
 >> /proc/drbd.  This would enable me to monitor bottlenecks and predict
 >> limitations easily.
 > how about
 > # dstat -D total,sdf -N bond0,ethx0

This does not give me a breakdown by DRBD device.  I have more than one 
database service on the same hardware; if one of them is maxing out the 
bandwidth, I have no easy way of knowing which.

 >> I am interested in knowing whether this is the sort of feature it
 >> would be possible for a user like myself, with a few years' C
 >> experience, to add myself?  If so, might any user give me a pointer
 >> or two?
 > how shall we measure the data transfer rate?  if currently there is
 > nothing written, then there is nothing to transfer, data rate is zero.
 > probably not an indicator of any bottleneck.

My file systems can write data faster than my DRBD Ethernet link, 
~200MB/sec and ~100MB/sec respectively.  The requirement is to monitor 
this rate so that I can understand what limits are present in my system, 
and which hardware is going to max out first.

Then, for instance, I could double up my NIC count, having one for each 
DRBD service.

My loading is not constant.  I have utilities which import large amounts 
of data in single hits, up to the limit of what the system can handle. 
It is important to know what is constraining that limit, so that I can 
know what to enhance, and predict when no further enhancement is 
possible and more powerful hardware is needed...

 >> For instance, keeping an array of the last 15 minutes, holding a
 >> total of data transferred in and out, so that 1-minute, 5-minute and
 >> 15-minute averages could be displayed, with the array rotated every
 >> new minute.
 > you can rrd graph the "dr/dw/nr/ns" counters.

If this gives me the solution, then that's all I require.

The advantage of an embedded solution is that I don't always know I have 
a problem until the system goes solid, at which time setting up an rrd 
graph of the counters may be too late.  I want a single snapshot of all 
possible bottlenecks which I can simply view. :)
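For what it's worth, a one-shot snapshot of per-device rates can be had 
without rrd by sampling the counters twice.  A minimal Python sketch, 
assuming the DRBD 8.x /proc/drbd layout (where ns/nr/dw/dr are cumulative 
KiB counters per device; other versions may format the file differently):

```python
import re

# Sketch only: assumes the DRBD 8.x layout, where each device line
# (" 0: cs:...") is followed by a line of cumulative KiB counters
# (ns:, nr:, dw:, dr:, ...).  Adjust the patterns for other versions.
COUNTER_RE = re.compile(r"\b(ns|nr|dw|dr):(\d+)")
DEVICE_RE = re.compile(r"^\s*(\d+):")

def read_counters(text):
    """Return {minor: {counter: KiB}} from /proc/drbd-style text."""
    devices, current = {}, None
    for line in text.splitlines():
        m = DEVICE_RE.match(line)
        if m:
            current = int(m.group(1))
            devices[current] = {}
        if current is not None:
            devices[current].update(
                (k, int(v)) for k, v in COUNTER_RE.findall(line))
    return devices

def rates_mb_per_sec(before, after, seconds):
    """Per-device MB/s for each counter, from two snapshots."""
    return {dev: {k: (v - before.get(dev, {}).get(k, 0)) / 1024.0 / seconds
                  for k, v in ctrs.items()}
            for dev, ctrs in after.items()}

# Against the live file this would be roughly:
#   t0 = read_counters(open("/proc/drbd").read()); time.sleep(10)
#   t1 = read_counters(open("/proc/drbd").read())
#   print(rates_mb_per_sec(t0, t1, 10))
```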

Of course I could set up an rrd graph permanently, but this is yet 
another service to install, monitor and administer.  I have enough of 
those already.

(This was also a personal challenge.  Could I program DRBD to count the 
number of blocks it's moving, and report these through /proc/drbd?  But 
madness may lie in this direction for all of us.)
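For what it's worth, the rotating-array bookkeeping itself is simple 
enough to keep out of the kernel.  A minimal userspace sketch, with 
nothing DRBD-specific in it: it just sums whatever byte deltas are fed 
to it and rotates a fixed array of per-minute buckets:

```python
from collections import deque

class MinuteWindow:
    """Rotating per-minute totals, as in the 1/5/15-minute idea above.

    Userspace sketch only: feed it byte deltas (e.g. from the ns
    counter) and call rotate() once per minute from a timer.
    """
    def __init__(self, minutes=15):
        self.buckets = deque([0] * minutes, maxlen=minutes)

    def add(self, nbytes):
        self.buckets[0] += nbytes       # accumulate into current minute

    def rotate(self):
        self.buckets.appendleft(0)      # call once per minute

    def total(self, minutes):
        """Total bytes over the most recent `minutes` buckets."""
        return sum(list(self.buckets)[:minutes])
```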

 >> Please also let me know if this is a really bad idea!
 > I think the idea is good, but probably best implemented outside DRBD,
 > using available tools.  in-kernel re-implementation of the round-robin
 > database concept is somewhat "interesting" :)

No worries.  If it's a bad idea, so be it.

 >> PS, another feature that would be very useful for me is another
 >> version of /proc/drbd showing output as XML, say /proc/drbd_xml.
 > I don't think it is a good idea to export kernel data as xml.
 >> Then my self or other users could program our own extensions with
 >> ease.
 > what extensions are you thinking of?
 > what prevents you from implementing them now?

If /proc/drbd were machine readable (maybe not XML; any format would 
do), then many applications could be written.

For instance, last year I was experiencing considerable problems with 
both my NICs and my RAID controllers.  It was only with the help of 
yourself, and an understanding of the numbers in /proc/drbd, that I was 
able to reach a solution.

If this was machine readable, simple utilities could be written to help 
users understand problems, tune /etc/drbd.conf with better values, or 
show in big clear letters (in my case) that my hardware was not writing 
data as fast as DRBD was sending it.

Or /proc/drbd could be parsed by a CGI program showing the state of DRBD 
in atomic detail, with big clear labels of what all the numbers mean.
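Even without kernel changes, a rough userspace parser can turn today's 
/proc/drbd into a machine-readable structure.  A hedged sketch, assuming 
the DRBD 8.x field layout ("cs:", "st:" or "ro:", "ds:", then the 
counters); the patterns would need adjusting for other versions:

```python
import json
import re

# Device lines start with the minor number; subsequent indented lines
# carry the counters.  Every "name:value" pair is captured generically.
DEV_RE = re.compile(r"^\s*(\d+):\s*(.*)$")
FIELD_RE = re.compile(r"([A-Za-z]+):([\w/-]+)")

def proc_drbd_to_dict(text):
    """Parse /proc/drbd-style text into {minor: {field: value}}."""
    devices, current = {}, None
    for line in text.splitlines():
        m = DEV_RE.match(line)
        if m:
            current = m.group(1)
            devices[current] = dict(FIELD_RE.findall(m.group(2)))
        elif current is not None:
            devices[current].update(FIELD_RE.findall(line))
    return devices

# json.dumps(proc_drbd_to_dict(open("/proc/drbd").read()), indent=2)
# would then give the machine-readable snapshot directly.
```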

Other things like Nagios and Linux-ha plugins might be easier to write.
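A Nagios check, for instance, reduces to a few lines once the fields are 
available as a dictionary.  A hedged sketch (the names and logic here 
are a hypothetical illustration, not an existing plugin; the exit codes 
follow the Nagios plugin convention):

```python
# Nagios plugin convention: exit 0 = OK, 1 = WARNING, 2 = CRITICAL,
# 3 = UNKNOWN, with a one-line status message on stdout.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_drbd(devices):
    """devices: {minor: {field: value}} as parsed from /proc/drbd.

    Returns (exit_code, message).  Hypothetical check: any device whose
    connection state is not "Connected" is treated as critical.
    """
    if not devices:
        return UNKNOWN, "DRBD UNKNOWN: no devices found"
    bad = sorted(m for m, d in devices.items() if d.get("cs") != "Connected")
    if bad:
        return CRITICAL, "DRBD CRITICAL: not connected: %s" % ", ".join(bad)
    return OK, "DRBD OK: %d device(s) connected" % len(devices)
```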

Distribution manufacturers like SUSE could also make use of a machine 
readable file to help support DRBD.  (I note the current SUSE DRBD 
implementation can't make head or tail of my setup.)

 >> A good DTD with comments could aid users tracking problems.
 > problems of which kind?
 > what sort of comments?

This is a trick from Linux-HA.  Their configuration and state are 
recorded in the cib.xml document, and they use the DTD as documentation. 
As the DTD reflects every state the XML may enter, it gives users a 
comprehensive manual of every state Linux-HA can be in, and what it all 
means.  Linux-HA were also kind enough to comment their DTD extensively.

 > if anything, we can provide some wrapper (python, perl, whatever),
 > that would do
 > # proc_drbd_to_xml < /proc/drbd | xml_parsing_thingy
 > to give you a standardized xml representation of /proc/drbd.
 > but honestly, I don't quite see the benefit from that.
 > what am I missing, what is it you are after, really?

Maybe the last point was just me thinking aloud, so thanks for going 
along with it.

I want to know of problems and bottlenecks on my systems.

DRBD is at the heart of the system.  It offers a powerful resource for 
identifying problems.  My thinking was that if DRBD offered a few more 
bits of information, and offered them in a machine readable form, then 
this might benefit a lot of users.  Certainly me :)

I have also casually noted that a significant number of emails on this 
list are from users experiencing sub-optimal performance.  I have also 
noted that the cause is always something other than DRBD, or something 
an edit to /etc/drbd.conf will fix.  Something along these lines could 
deal with these problems faster, taking the user from the symptom to the 
solution without even thinking of blaming DRBD!

I do hope these comments have some signal in the noise...


