Incorrect replica size after resource-definition auto-place #389
Comments
What DRBD versions are in use here? Also, could you please attach a dmesg output from both nodes?
DRBD version:
dmesg on Node1:
dmesg on Node2:
I see. Let me explain a bit of how DRBD works: DRBD has two "initial sync" modes. One is the "full sync", which simply sends all the data to the new peer; the other is the "partial sync", which is what you expect: only the changes are sent to the new peer. However, the point you are missing here is what counts as a change.
If you create the FS with default settings, the FS will discard the whole device during the mkfs, and DRBD tracks those discards as changes, so effectively the entire device is marked as changed.
That is a rough summary of why your partial sync behaves like a full sync. You can avoid this issue by creating the FS without the initial discard. Another thing you can check is whether the property …
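As an illustration (not from the original thread), a minimal sketch of creating the FS without the initial discard; the device path is a placeholder for your DRBD device, and the flags are the standard mkfs options that skip the discard pass:

# ext4: the "nodiscard" extended option skips the discard at mkfs time
mkfs.ext4 -E nodiscard /dev/drbd/by-res/testres1/0
# xfs: -K disables the discard at mkfs time
mkfs.xfs -K /dev/drbd/by-res/testres1/0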
Thank you for the explanation.
Also, for the volume-definition I see the option …
Yes, but only in combination with this property:
(I just wanted to ask you to confirm it, but you just did :) ) Can you also verify whether those values exist in the corresponding .res files?
Yes, these values are present in the .res file, but the command …
Did you check that on both nodes?
Yes, I executed the command on both nodes.
That was also my thought, so I tested this issue with the same versions of LINSTOR and DRBD as you have, but could not reproduce it, regardless of the settings I used. Can you compare your setup with my test? Or feel free to do it the other way around and show me the full reproducer on your side, including at the end the following:
...
(Feel free to filter the output of the commands to the relevant parts.) My test:
...
This is strange, but after doing a new deployment in a test environment, I was unable to reproduce the problem, even though all deployments were done in exactly the same way.
Hello
This is probably expected behavior, but I would like to understand whether it is normal and how it can be avoided. The situation is as follows: there is a two-node LINSTOR cluster, version 1.25.1, with a storage pool on top of lvm-thin.
linstor rg spawn MyRG testres1 10G
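(The formatting and mount steps are elided in the original report; presumably something like the following happened before the write, assuming ext4 and DRBD's default by-res device path:)

mkfs.ext4 /dev/drbd/by-res/testres1/0
mkdir -p /mnt/test
mount /dev/drbd/by-res/testres1/0 /mnt/test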
dd if=/dev/urandom of=/mnt/test/dd.file bs=1M count=5000 status=progress
So far so good:
Delete the resource from one of the nodes:
linstor r delete Node2 testres1
Do auto-place and wait for sync:
linstor rd ap testres1
...
linstor r lv -r testres1
As we can see, the sizes of the two replicas do not match: on the second node the allocated space equals the full volume size. This can be fixed by running fstrim (a sketch follows below), but that is not always possible, especially in a virtualization environment where the guest file system does not support the discard option or it is not issued for other reasons. So the question is: what can I do in such a situation, and how can I avoid it?
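For illustration, a minimal sketch of the fstrim workaround mentioned above, assuming the filesystem is still mounted at /mnt/test:

fstrim -v /mnt/test        # ask the FS to discard unused blocks; -v prints how much was trimmed
linstor r lv -r testres1   # re-check the allocated sizes on both nodes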