
Conversation

@brendandburns (Contributor Author)

@deads2k @smarterclayton for good measure.

@davidopp (Contributor)

Hey @brendandburns can you recommend a reviewer for me to assign this to?

(Contributor)

Is this TODO still relevant?

(Contributor Author)

Seems like the answer is "no"

@j3ffml (Contributor) commented Apr 24, 2015

Generally lgtm. Should run the e2e kubectl test to make sure existing behavior still works. Also need a test for the generation path; unit would be fine, though e2e is probably easiest (see test/e2e/kubectl.go).

@brendandburns (Contributor Author)

Comments addressed (and the PR expanded somewhat, both to fix a bug and to add a --dry-run flag).

If this looks good enough to merge, I'll add some testing.

(Contributor)

Can you pull the static validation into a separate method?

(Contributor Author)

done.

@brendandburns (Contributor Author)

Comments addressed, please re-check.

Thanks!
--brendan

(Contributor)

static validation.

(Contributor Author)

done.

@deads2k (Contributor) commented Apr 24, 2015

Just a couple nits. lgtm.

@bgrant0607 (Member)

Sorry for the delay. Taking a look now.

@bgrant0607 (Member)

cc @ironcladlou

@brendandburns (Contributor Author)

Comments addressed, and initial unit test added. Please take another look.

Thanks!
--brendan

(Member)

Replicas will just be set back to oldRc.Spec.Replicas, below, so this seems pointless.

(Contributor Author)

done.

(Member)

Ideally, this would use the label update command, which we'd factor out into a library package, at least for the pod updates. We also need to update the selector on the RC, but there's not currently a generic way to do that (though I suspect we'll need one), so I'm more sympathetic to custom code for that.

(Contributor Author)

TODO'd

@brendandburns (Contributor Author)

Comment addressed. Added additional tests.

Please take another look.

--brendan

(Member)

This is racy. Really, the RC needs to be updated first to add the label to its template, then the pods need to be updated, then the selector needs to be changed. Or, delete the RC before updating pods and recreate it after.

(Contributor Author)

What you propose is actually worse: the minute you update the RC, it orphans the old pods; the RC then notices that all of its pods are missing, freaks out, and creates N new pods, so you end up with N orphans and N new pods. Even if you delete the orphans, it's going to cause a bunch of restart churn in your system.

Pro-actively labeling, updating the pods, and then deleting any orphans that might sneak in, is the best you can do, I think.

(Member)

No, adding a new label key to the template doesn't orphan pods. That's why the selector update should be a separate step.

(Contributor Author)

Ah, I see your point with respect to splitting the updates. Will send a PR to do that.

--brendan
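The three-step ordering discussed in this thread (add the new label to the RC's template, relabel the existing pods, then tighten the selector) can be checked with a toy simulation. This is plain Python standing in for the replication controller's control loop, not the real Kubernetes API; `reconcile`, the label names, and the data shapes are all illustrative:

```python
# Toy model: the RC reconciler creates pods whenever fewer pods match
# its selector than spec.replicas asks for.

def matches(pod_labels, selector):
    return all(pod_labels.get(k) == v for k, v in selector.items())

def reconcile(rc, pods):
    """Create replacement pods until rc['replicas'] match the selector."""
    missing = rc["replicas"] - sum(1 for p in pods if matches(p, rc["selector"]))
    for _ in range(max(missing, 0)):
        pods.append(dict(rc["template"]))  # new pod gets the template's labels
    return max(missing, 0)

# Start: the RC selects run=redis, and 4 pods carry that label.
pods = [{"run": "redis"} for _ in range(4)]

# Unsafe: tighten the selector before the pods are relabeled.
rc_bad = {"replicas": 4,
          "selector": {"run": "redis", "deployment": "v2"},
          "template": {"run": "redis", "deployment": "v2"}}
pods_bad = [dict(p) for p in pods]
created = reconcile(rc_bad, pods_bad)
assert created == 4        # all 4 old pods orphaned, 4 replacements created
assert len(pods_bad) == 8  # N orphans plus N new pods: the churn described above

# Safe: template first, then pods, then selector.
rc = {"replicas": 4, "selector": {"run": "redis"}, "template": {"run": "redis"}}
pods_safe = [dict(p) for p in pods]
rc["template"]["deployment"] = "v2"   # step 1: template only; selector unchanged
assert reconcile(rc, pods_safe) == 0  # nothing orphaned
for p in pods_safe:
    p["deployment"] = "v2"            # step 2: relabel the existing pods
rc["selector"]["deployment"] = "v2"   # step 3: now tighten the selector
assert reconcile(rc, pods_safe) == 0  # old pods still match; no churn
print(len(pods_safe))  # 4
```

The unsafe ordering churns because the selector change and the pod relabeling are not atomic, so for a moment zero pods match the new selector; the safe ordering keeps every existing pod matching the selector at each step.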

@bgrant0607 (Member)

This is more or less ok. Just some implementation details to iron out.

@brendandburns (Contributor Author)

Comments addressed (or at least responded to), PTAL.

Thanks
--brendan

@bgrant0607 (Member)

LGTM

bgrant0607 added a commit that referenced this pull request Apr 24, 2015
First part of improved rolling update, allow dynamic next replication controller generation.
bgrant0607 merged commit 7330ced into kubernetes:master Apr 24, 2015
@antoineco (Contributor)

Wow, these changes are very interesting; however, things do not work as expected for me:

$ kubectl run-container redis --image=redis:2.8.8 --replicas=4
CONTROLLER   CONTAINER(S)   IMAGE(S)      SELECTOR              REPLICAS
redis        redis          redis:2.8.8   run-container=redis   4
$ kubectl get pods -o template --template="{{range.items}}{{.metadata.name}} {{.status.phase}} - {{end}}"
redis-5k4tk Running - redis-6nh1m Running - redis-j3pjd Running - redis-j7w8q Running
$ kubectl rolling-update redis redis289 --image=redis:2.8.9 --v=4
I0501 20:16:53.627137    1746 helpers.go:52] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "replicationControllers \"redis\" cannot be updated: the resource was updated to 1925",
  "reason": "Conflict",
  "details": {
    "id": "redis",
    "kind": "replicationControllers"
  },
  "code": 409
}]
F0501 20:16:53.627725    1746 rollingupdate.go:73] Error from server: replicationControllers "redis" cannot be updated: the resource was updated to 1925

Is there anything I didn't grasp?

@bgrant0607 (Member)

Fix was merged: #7540
