fix IPVS low throughput issue #71114

Merged
1 commit merged into kubernetes:master on Nov 16, 2018

Conversation

Lion-Wei

What type of PR is this?
/kind bug

What this PR does / why we need it:
This PR makes the IPVS proxier set net/ipv4/vs/conn_reuse_mode to 0 by default, which fixes the IPVS low throughput issue.
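
For illustration only, setting this parameter by hand on a node amounts to writing 0 to the corresponding proc entry. The sketch below is not the proxier's actual code path; it assumes the ip_vs kernel module is loaded and root privileges:

package main

import (
	"fmt"
	"os"
)

// conn_reuse_mode is exposed under /proc/sys once the ip_vs module is loaded.
// Setting it to 0 makes IPVS keep using an existing connection entry when a
// source port is reused, instead of rescheduling the connection (which, on
// affected kernels, drops the first packet and hurts throughput).
const connReuseModePath = "/proc/sys/net/ipv4/vs/conn_reuse_mode"

func main() {
	// Writing to /proc/sys requires root.
	if err := os.WriteFile(connReuseModePath, []byte("0"), 0644); err != nil {
		fmt.Fprintf(os.Stderr, "failed to set conn_reuse_mode: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("net.ipv4.vs.conn_reuse_mode set to 0")
}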

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #70747

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

IPVS proxier now sets net/ipv4/vs/conn_reuse_mode to 0 by default, which significantly improves IPVS proxier performance.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Nov 16, 2018
@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 16, 2018
@Lion-Wei
Author

Lion-Wei commented Nov 16, 2018

Here are the ab test results:
Before:

Server Software:        nginx/1.15.5
Server Hostname:        10.0.0.147
Server Port:            8080

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      300
Time taken for tests:   120.017 seconds
Complete requests:      48983
Failed requests:        0
Total transferred:      41390635 bytes
HTML transferred:       29977596 bytes
Requests per second:    408.13 [#/sec] (mean)
Time per request:       735.051 [ms] (mean)
Time per request:       2.450 [ms] (mean, across all concurrent requests)
Transfer rate:          336.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  722 457.9    999    3009
Processing:     0    9  23.3      1     424
Waiting:        0    9  23.2      1     424
Total:          0  732 449.6   1000    3201

Percentage of the requests served within a certain time (ms)
  50%   1000
  66%   1000
  75%   1000
  80%   1001
  90%   1002
  95%   1004
  98%   1007
  99%   1198
 100%   3201 (longest request)

After:

Server Software:        nginx/1.15.5
Server Hostname:        10.0.0.147
Server Port:            8080

Document Path:          /
Document Length:        612 bytes

Concurrency Level:      300
Time taken for tests:   106.312 seconds
Complete requests:      580000
Failed requests:        0
Total transferred:      490100000 bytes
HTML transferred:       354960000 bytes
Requests per second:    5455.62 [#/sec] (mean)
Time per request:       54.989 [ms] (mean)
Time per request:       0.183 [ms] (mean, across all concurrent requests)
Transfer rate:          4501.95 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   28 392.7      0   63157
Processing:     5   25  88.6     23   25652
Waiting:        4   25  88.6     23   25652
Total:         15   54 409.7     24   63348

Percentage of the requests served within a certain time (ms)
  50%     24
  66%     25
  75%     26
  80%     27
  90%     30
  95%     38
  98%     56
  99%   1025
 100%  63348 (longest request)
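
Summarizing the two runs at the same concurrency level of 300: requests per second go from 408.13 to 5455.62 and mean time per request drops from about 735 ms to 55 ms, roughly a 13x improvement in both.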

@Lion-Wei
Author

/cc @m1093782566 @jsravn
/assign @m1093782566

@k8s-ci-robot
Contributor

@Lion-Wei: GitHub didn't allow me to request PR reviews from the following users: jsravn.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc @m1093782566 @jsravn
/assign @m1093782566

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@m1093782566
Contributor

m1093782566 commented Nov 16, 2018

Good improvement, THANKS!

/lgtm

/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 16, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Lion-Wei, m1093782566

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 16, 2018
@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

@m1093782566
Contributor

/milestone v1.13

@k8s-ci-robot k8s-ci-robot added this to the v1.13 milestone Nov 16, 2018
@nikopen
Contributor

nikopen commented Nov 16, 2018

/priority important-soon

@k8s-ci-robot k8s-ci-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Nov 16, 2018
@k8s-ci-robot k8s-ci-robot merged commit 4e9c2a7 into kubernetes:master Nov 16, 2018
k8s-ci-robot added a commit that referenced this pull request Dec 19, 2018
…4-upstream-release-1.12

Automated cherry pick of #71834 / #71114 upstream release 1.12
@yyx

yyx commented Jun 11, 2020

Hello everyone,
We are pleased to report that we have fixed this bug and verified that the fix works well. The patch ("ipvs: avoid drop first packet by reusing conntrack") is being submitted to the Linux kernel community. You can also apply the patch to your own kernel, after which it is enough to leave net.ipv4.vs.conn_reuse_mode=1 (default) and net.ipv4.vs.conn_reuse_old_conntrack=1 (default). Since the net.ipv4.vs.conn_reuse_old_conntrack sysctl switch is newly added, kube-proxy can be adapted by checking whether net.ipv4.vs.conn_reuse_old_conntrack exists; if it does, the running kernel includes the fix.
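
A minimal detection sketch along those lines (hypothetical: kube-proxy does not ship this exact check, and the proc path below simply mirrors the sysctl name):

package main

import (
	"fmt"
	"os"
)

// If the kernel carries the proposed patch, the new sysctl shows up under
// /proc/sys once the ip_vs module is loaded.
const connReuseOldConntrackPath = "/proc/sys/net/ipv4/vs/conn_reuse_old_conntrack"

// kernelHasConnReuseFix reports whether the running kernel appears to include
// the patch, using the presence of the new sysctl as the signal.
func kernelHasConnReuseFix() bool {
	_, err := os.Stat(connReuseOldConntrackPath)
	return err == nil
}

func main() {
	if kernelHasConnReuseFix() {
		fmt.Println("patched kernel detected: conn_reuse_mode=1 can stay at its default")
	} else {
		fmt.Println("unpatched kernel: keep the conn_reuse_mode=0 workaround")
	}
}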
Applying the patch can solve the following problems:

  1. Rolling update: IPVS keeps scheduling traffic to the destroyed Pod
  2. Unbalanced IPVS traffic scheduling after scale-up or rolling update
  3. fix IPVS low throughput issue #71114
  4. One second connection delay in masque: https://marc.info/?t=151683118100004&r=1&w=2
  5. IPVS low throughput #70747
  6. Apache Bench can fill up ipvs service proxy in seconds (cloudnativelabs/kube-router#544)
  7. Additional 1s latency in host -> service IP -> pod when upgrading from 1.15.3 -> 1.18.1 on RHEL 8.1 (#90854)
  8. kube-proxy ipvs conn_reuse_mode setting causes errors with high load from single client (#81775)

Thank you.
By Yang Yuxi (TencentCloudContainerTeam)

@andrewsykim
Member

Following up on @yyx's comment above, for posterity.

The patch mentioned above didn't make it into the kernel, but two recently merged patches are worth highlighting: one fixes the 1-second delay issue, and the other fixes dropped packets when stale connection entries in the IPVS table are used:

  1. http://patchwork.ozlabs.org/project/netfilter-devel/patch/[email protected]/
  2. http://patchwork.ozlabs.org/project/netfilter-devel/patch/[email protected]/
