Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
The client volumes are not there because they have not been assigned.
Just like a normal resource (with local storage), a client resource
must be assigned to the node where it is supposed to become available.

Normally, client resources are assigned by running:

    assign --client <resource> <node> [ <node> [ ... ] ]

If the node is a storage-less node, then the '--client' option can be
skipped, because storage-less nodes always get client assignments by
default.

This is a sample from my test cluster:

criminy:~ # drbdmanage nodes
╭──────────────────────────────────────────────────╮
┊ Name      ┊ Pool Size ┊ Pool Free ┊ ┊ State      ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ criminy   ┊       508 ┊       500 ┊ ┊ ok         ┊
┊ dionysus  ┊         0 ┊         0 ┊ ┊ no storage ┊
┊ grimes    ┊         0 ┊         0 ┊ ┊ no storage ┊
┊ lilith    ┊       508 ┊       500 ┊ ┊ ok         ┊
╰──────────────────────────────────────────────────╯
criminy:~ # drbdmanage new-volume files 100m --deploy 2
Operation completed successfully
Operation completed successfully
criminy:~ # drbdmanage deploy files 2 --with-clients
Operation completed successfully
criminy:~ # drbdmanage new-volume archive 100m
Operation completed successfully
criminy:~ # drbdmanage assign archive criminy lilith dionysus grimes
Assigning to node 'criminy': Operation completed successfully
Assigning to node 'lilith': Operation completed successfully
Assigning to node 'dionysus': Operation completed successfully
Assigning to node 'grimes': Operation completed successfully
criminy:~ # drbdmanage assignments -R files
╭─────────────────────────────────────────╮
┊ Node     ┊ Resource ┊ Vol ID ┊ ┊ State  ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ criminy  ┊ files    ┊ *      ┊ ┊ ok     ┊
┊ dionysus ┊ files    ┊ *      ┊ ┊ client ┊
┊ grimes   ┊ files    ┊ *      ┊ ┊ client ┊
┊ lilith   ┊ files    ┊ *      ┊ ┊ ok     ┊
╰─────────────────────────────────────────╯
criminy:~ # drbdmanage assignments -R archive
╭─────────────────────────────────────────╮
┊ Node     ┊ Resource ┊ Vol ID ┊ ┊ State  ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ criminy  ┊ archive  ┊ *      ┊ ┊ ok     ┊
┊ dionysus ┊ archive  ┊ *      ┊ ┊ client ┊
┊ grimes   ┊ archive  ┊ *      ┊ ┊ client ┊
┊ lilith   ┊ archive  ┊ *      ┊ ┊ ok     ┊
╰─────────────────────────────────────────╯
criminy:~ #
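
Applied to your cluster, the commands below should create the missing
client assignments. This is just a sketch, untested on your setup: I
am assuming the client node was registered with drbdmanage under the
name 'san3' (check the output of 'drbdmanage nodes' for the exact
name), and I am taking the resource names from your listing.

    # Run on a node with access to the drbdmanage control volume.
    # 'san3' is the assumed registered name of the client node.
    drbdmanage assign --client filesrv san3
    drbdmanage assign --client oldNFS san3

If san3 was registered as a storage-less node, you can drop the
'--client' option, because such nodes get client assignments by
default anyway.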
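
Once those assignments are in place, the corresponding /dev/drbdX
devices should show up on san3, and you can mount them there just
like on any other node; the data then comes over the network from the
peers that have local storage. For example (the minor number is taken
from your san2 listing and may differ on san3):

    # Mount the client volume; only one node at a time should have it
    # mounted, as you planned, unless dual-primary is configured.
    mount /dev/drbd101 /mnt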
>> > This seems to be the problem then. I don't have any /dev/drbd* devices > to mount on the "Client" node, but I do have them on the control nodes > (only the ones with the volume located on that specific node: > This is the same on every node: > root at san3:~# drbdmanage list-assignments > +------------------------------------------------------------------------------------------------------------+ > > | Node | Resource | Vol ID | | State | > |------------------------------------------------------------------------------------------------------------| > > | castle | filesrv | * | | ok | > | castle | oldNFS | * | | ok | > | san2.websitemanagers.com.au | filesrv | * | | ok | > | san2.websitemanagers.com.au | oldNFS | * | | ok | > +------------------------------------------------------------------------------------------------------------+ > > > > However, on node san2: > san2:~# ls -l /dev/drbd* > brw-rw---- 1 root disk 147, 0 Apr 21 21:46 /dev/drbd0 > brw-rw---- 1 root disk 147, 1 Apr 21 21:46 /dev/drbd1 > brw-rw---- 1 root disk 147, 100 Apr 21 21:45 /dev/drbd100 > brw-rw---- 1 root disk 147, 101 Apr 21 21:45 /dev/drbd101 > > From your post, I should see that also on san3 (controller node, but > doesn't have storage for this specififc volime: > root at san3:~# ls -l /dev/drbd* > brw-rw---- 1 root disk 147, 0 Apr 21 21:41 /dev/drbd0 > brw-rw---- 1 root disk 147, 1 Apr 21 21:41 /dev/drbd1 > > Instead all I get is the drbd internal volumes... > > Can you advise why it might not be getting the clietn volumes > > Thanks again for your time and any hists you can provide! > _______________________________________________ > drbd-user mailing list > drbd-user at lists.linbit.com > http://lists.linbit.com/mailman/listinfo/drbd-user