[DRBD-user] DRBD performance with Fusion ioDrive and Postgres

Motoharu Kubo mkubo at 3ware.co.jp
Wed Dec 14 11:21:59 CET 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Alex,

I have compared the ioDrive against HDDs, using Infiniband as the
interconnect between the DRBD nodes.  pgbench's tps (transactions per
second) figure is used as the metric.
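
For reference, the kind of pgbench invocation behind such tps figures
looks roughly like this (the scale factor, client count, and run time
below are illustrative assumptions, not the exact parameters of our
tests):

  # create and populate the test database; -s 100 is an assumed scale factor
  createdb pgbench
  pgbench -i -s 100 pgbench
  # run the benchmark: 32 clients, 4 worker threads, 300 seconds
  pgbench -c 32 -j 4 -T 300 pgbench

pgbench reports tps both including and excluding connection
establishment; either can be used as long as the same figure is
compared across runs.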

The combination of DRBD, ioDrive, and Infiniband is amazing.

                            w/o DRBD   w/ DRBD
------------------------------------------------
 SAS HDD RAID0 (w/ BBWC)      725        719
 SATA SSD RAID0 (w/ BBWC)    2257       2182
 ioDrive                     4948       4449

Performance decreased somewhat with DRBD in all cases, but I think this
will not be a show-stopper for most use cases.

I am sorry to say that we have no experience with 10GbE, simply because
we could not prepare it.  However, because network latency strongly
influences DRBD performance, the latency of Infiniband is said to be
much lower than that of 10GbE, and Infiniband is not particularly
expensive, I would recommend considering Infiniband (IPoIB).
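
If it helps, a minimal DRBD resource definition using the IPoIB
addresses could look like the sketch below.  The hostnames, the
/dev/fioa device name, and the 192.168.x addresses are placeholders,
not our actual configuration:

  resource r0 {
    protocol C;                       # synchronous replication
    on node1 {
      device    /dev/drbd0;
      disk      /dev/fioa;            # ioDrive block device (placeholder)
      address   192.168.100.1:7788;   # IP bound to the IPoIB interface (ib0)
      meta-disk internal;
    }
    on node2 {
      device    /dev/drbd0;
      disk      /dev/fioa;
      address   192.168.100.2:7788;
      meta-disk internal;
    }
  }

DRBD itself only sees an IP address, so moving from 10GbE to IPoIB is
mainly a matter of pointing the address lines at the IPoIB interface.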

Best Regards,
Motoharu Kubo

> Hi,
> 
> we have a Postgres VM, currently running on a Xen host, and want to
> migrate it to DRBD.
> We are now thinking about installing the new VM on a Fusion ioDrive Duo,
> which promises about 1.5 GB/s of bandwidth for both read and write.
> The two DRBD nodes are connected with a 10GbE adapter.
> 
> 
> Does anyone have experience with such a configuration on DRBD?
> 
> I'm not sure if DRBD can sync this without losing the whole performance
> gain of the SSD.
> 
> 
> Best regards,
> Alex


