capacity: fix duplicate topology (attempt #2) #1450
base: master
Conversation
When the controller starts, two sync() calls can run simultaneously, one from HasSynced() and another from processNextWorkItem(). Each produces its own instance for the same topology segment and passes it to the callbacks. This results in duplicate entries in the capacities map, meaning either:

- two CSIStorageCapacity objects are created for the same topology, or
- the same CSIStorageCapacity object is assigned to two keys in the capacities map; when one of them is updated, the other key holds an outdated object, and all subsequent updates through it fail with a conflict.
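For illustration, here is a minimal, self-contained sketch of why two independently built instances of the same segment corrupt a pointer-keyed map. The types and values are hypothetical stand-ins; the real controller's structures differ:

```go
package main

import "fmt"

// Segment is a stand-in for the controller's topology segment type.
type Segment struct{ Zone string }

func main() {
	// The capacities map is keyed by segment pointer, so identity
	// matters, not value equality.
	capacities := map[*Segment]string{}

	// Two concurrent sync() calls each build their own instance for
	// the same topology segment.
	a := &Segment{Zone: "zone-1"}
	b := &Segment{Zone: "zone-1"}

	capacities[a] = "CSIStorageCapacity-1"
	capacities[b] = "CSIStorageCapacity-2"

	// Prints 2: one logical topology now has two entries, and updates
	// through one key leave the other holding a stale object.
	fmt.Println(len(capacities))
}
```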
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: huww98
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
```go
func TestHasSynced(t *testing.T) {
	synctest.Test(t, func(t *testing.T) {
		client := fakeclientset.NewSimpleClientset()
		informerFactory := informers.NewSharedInformerFactory(client, 1*time.Hour)
```
For why the resync period is not set to 0, see kubernetes/kubernetes#133500.
```go
go func() {
	<-ctx.Done()
	nt.queue.ShutDown()
}()
```
synctest checks for leaked goroutines, so I have to clean this goroutine up.
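As a minimal illustration of this constraint (the test name and package are assumed, not the PR's actual test): a goroutine started inside a synctest bubble must be unblocked and allowed to exit before the bubble ends, otherwise the test fails:

```go
package capacity

import (
	"context"
	"testing"
	"testing/synctest"
)

func TestBubbleCleanup(t *testing.T) {
	synctest.Test(t, func(t *testing.T) {
		ctx, cancel := context.WithCancel(context.Background())
		done := make(chan struct{})
		go func() {
			defer close(done)
			<-ctx.Done() // would block forever without the cancel below
		}()
		cancel() // analogous to shutting the queue down on ctx.Done()
		<-done   // let the goroutine exit so the bubble drains cleanly
	})
}
```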
@huww98: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
We need Go 1.25 to use the synctest package :(
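For context, a standard way to keep a synctest-based test from breaking builds on older toolchains is a build constraint. This is a general Go mechanism, not necessarily what this PR does, and the package name below is assumed:

```go
//go:build go1.25

// This file uses testing/synctest, which became a stable package in
// Go 1.25, so it is only compiled with Go 1.25 or newer.
package capacity
```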
```go
	factoryForNamespace.Start(ctx.Done())
}
if topologyInformer != nil {
	go topologyInformer.RunWorker(ctx)
```
Please add documentation which explains that RunWorker may only be called once per instance and why.
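A possible shape for that documentation, sketched here with assumed names (the nodeTopology receiver type, the queue field, and the processNextWorkItem body are mine, not from the PR):

```go
package capacity

import "context"

// nodeTopology is a stand-in for the informer type in this PR.
type nodeTopology struct {
	queue interface{ ShutDown() }
}

func (nt *nodeTopology) processNextWorkItem(ctx context.Context) bool {
	// The real implementation pops one item from nt.queue and runs sync().
	return false
}

// RunWorker processes queued work items until ctx is cancelled.
//
// RunWorker must be called at most once per instance: sync() is not safe
// to run concurrently with itself, and a second worker could produce a
// duplicate instance for the same topology segment and hand both to the
// registered callbacks.
func (nt *nodeTopology) RunWorker(ctx context.Context) {
	for nt.processNextWorkItem(ctx) {
	}
}
```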
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Please see also #1435
Does this PR introduce a user-facing change?:
/cc @pohly