[Drbd-dev] Running Protocol C with disk cache enabled
philipp.reisner at linbit.com
Fri Jun 22 20:41:34 CEST 2007
Lars and I discussed various implementation ideas about the issue
today. I was just about to write them down. -- But then this thought
came to my head:
* The message would be: You no longer need good disk IO subsystems
  that tell the operating system the truth. Go out and use the
  cheapest RAID5 controllers with enormous on-controller memory,
  without a battery unit...
  In case your secondary crashes, DRBD will take care and replay
  the data that was lost in your controller's RAM.
But how does this work on the primary? Our activity log
depends on a working disk subsystem. If you have an IO
subsystem with write-back caches on the primary, we will not
have a complete AL after the crash.
Does it make sense to work around broken hardware for a DRBD
node in the secondary role, when we depend on working hardware
when the same node is in the primary role? -- I do not think so.
The bottom line:
There is lots of working hardware around. SCSI drives do not have
write-back caches enabled. I guess SATA drives are okay as well,
but I do not know for sure. All serious RAID5 controllers have
battery units. People have to use those.
Just a comment on this:
> Well, I look at this slightly differently; use of the on-disk cache is
> really the only way to get decent (i.e. competitive) performance out of
> rotating rust, so what we have to do is find ways to allow this and
> still be correct.
A disk drive or a controller is perfectly capable of accepting
thousands of outstanding IO operations. -- And in fact Linux (2.6)
(and DRBD) take advantage of this. I have seen an HP RAID5 controller
that accepted up to 10000 write requests without blocking acceptance
of further write requests.
-- But when the controller signals IO completion to the operating
system, it is its task to ensure that the data either is on disk, or
safe by other means (battery-backed RAM).
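The same contract exists one layer up, between an application and the kernel: a successful write() only means the kernel holds the data, and fsync() is the request to push it to stable storage before returning. A minimal sketch (the function name and path are illustrative, not DRBD code; note that with a lying write-back cache underneath, even a successful fsync() cannot guarantee the data survived a power loss -- which is exactly the point above):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a buffer to a file and flush it to stable storage.
 * Returns 0 on success, -1 on any error.
 * Illustrative helper, not part of DRBD. */
int write_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    ssize_t n = write(fd, buf, len);
    if (n < 0 || (size_t)n != len) {
        close(fd);
        return -1;
    }

    /* Without this, the data may still sit in the page cache
     * (or in a write-back disk cache) when we return. */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }

    return close(fd);
}
```

If the disk subsystem honors cache-flush commands, fsync() returning success means the data is on the platter or in battery-backed RAM; if the controller acknowledges writes it has only cached in volatile memory, no amount of syncing from software can restore that guarantee.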