[DRBD-user] cronjob for verify

Brady, Mike mike.brady at devnull.net.nz
Tue Oct 10 03:18:53 CEST 2017


On 2017-10-10 13:11, Jan Bakuwel wrote:
> Hi Veit,
> On 10/10/17 08:47, Veit Wahlich wrote:
>> Hi Jan,
>> On Tuesday, 10.10.2017, at 06:56 +1300, Jan Bakuwel wrote:
>>> I've seen OOS blocks in the past where the storage stack appeared to
>>> be fine (hardware-wise). What possible causes could there be?
>>> Hardware issues, bugs in the storage stack (including DRBD itself),
>>> network issues. In most (all?) cases it seems prudent to me to keep
>>> the resources in sync as much as possible and, of course, to
>>> investigate once alerted by the monitoring system.
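Since the subject of this thread is a cronjob for verify, here is a minimal
sketch of such a job; the schedule and file location are assumptions, not
anything DRBD prescribes. `drbdadm verify all` starts DRBD's online
verification, which is what flags OOS blocks for the monitoring system to
pick up; it requires a verify-alg to be set in the resource configuration.

    # /etc/cron.d/drbd-verify (assumed path): start an online verify of
    # all DRBD resources every Sunday at 03:30.  Requires "verify-alg"
    # in the resources' net section; progress and any out-of-sync
    # blocks found are reported in the kernel log.
    30 3 * * 0  root  /sbin/drbdadm verify all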
>> A common configuration issue is the use of O_DIRECT for opening files
>> or block devices. O_DIRECT is used by userspace processes to bypass
>> parts of the kernel's I/O stack, with the goal of reducing the CPU
>> cycles required for I/O operations and of eliminating or minimizing
>> caching effects. Unfortunately it also allows the content of buffers
>> to be changed while they are still "in flight", i.e. (simplified)
>> while they are being read/mirrored by DRBD, software RAID, ...
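To make the failure mode concrete, here is a hypothetical, self-contained
sketch (Linux, libaio; build with `gcc demo.c -laio`) of the pattern Veit
describes: an O_DIRECT write is submitted asynchronously and the buffer is
modified before the write completes, so each replica below (DRBD peer, RAID
leg) may observe different data. Real applications hit this inadvertently,
e.g. across threads; here the modification is deliberate for illustration.

    /* Deliberately-broken sketch: modify a buffer while its O_DIRECT
     * write is still in flight. */
    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <file-or-blockdev>\n", argv[0]);
            return 1;
        }

        /* O_DIRECT bypasses the page cache: the kernel (and DRBD/MD
         * below it) reads the caller's pages directly. */
        int fd = open(argv[1], O_WRONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT requires aligned buffers (typically 512 B or 4 KiB). */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) return 1;
        memset(buf, 'A', 4096);

        io_context_t ctx = 0;
        if (io_setup(1, &ctx) != 0) return 1;

        struct iocb cb, *cbs[1] = { &cb };
        io_prep_pwrite(&cb, fd, buf, 4096, 0);
        if (io_submit(ctx, 1, cbs) != 1) return 1;

        /* The write is in flight but not complete -- touching the buffer
         * now means the mirrors may each see different content, which
         * DRBD's verify later reports as out-of-sync blocks. */
        memset(buf, 'B', 4096);

        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);   /* wait for completion */

        io_destroy(ctx);
        close(fd);
        free(buf);
        return 0;
    }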
>> O_DIRECT is generally used by applications that either want to bypass
>> caching, such as benchmarks, or that implement caching themselves,
>> which is the case for e.g. some DBMS. But qemu (as used by KVM and
>> Xen) also implements several kinds of caching and uses O_DIRECT for
>> VM disks depending on the configured caching mode.
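For reference, whether qemu opens a disk with O_DIRECT follows from its
cache mode: cache=none and cache=directsync use O_DIRECT, while
cache=writethrough and cache=writeback go through the host page cache. A
hypothetical invocation (the device path and machine options are
placeholders, not a recommendation):

    # writethrough keeps writes synchronous but avoids O_DIRECT;
    # with cache=none this same disk would be opened with O_DIRECT.
    qemu-system-x86_64 -m 1024 \
        -drive file=/dev/vg0/vm1,format=raw,if=virtio,cache=writethrough

Note that this only applies where qemu (the qdisk backend) is actually in
the data path; Xen PV disks served by the in-kernel blkback backend do not
go through qemu's caching layer at all.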
> Thanks for that. I must say that possibility had escaped my attention
> so far. I'm using DRBD in combination with Xen and LVM for VMs, so I
> assume O_DIRECT is in play here. Any suggestions on where to go from
> here? A search for DRBD, LVM, Xen and O_DIRECT doesn't seem to bring
> up any results discussing this issue.
> Kind regards,
> Jan

