A paper on the feasibility of a 100 Gbps software router
1. Building a Single-Box
100 Gbps Software Router
Sangjin Han, Keon Jang, KyoungSoo Park, Sue Moon
KAIST
In IEEE Workshop on Local and Metropolitan Area
Networks, 2010
id:y_uuki / @y_uuk1
2. PacketShader: a GPU-accelerated Software Router
Sangjin Han, Keon Jang, KyoungSoo Park and Sue Moon.
In proceedings of ACM SIGCOMM 2010, Delhi, India.
September 2010
SSLShader: Cheap SSL Acceleration
with Commodity Processors
Keon Jang, Sangjin Han, Seungyeop Han, Sue Moon, and
KyoungSoo Park.
In proceedings of USENIX NSDI 2011, Boston, MA, March 2011
Papers from the same research group
15. I/O Bandwidth - QuickPath Interconnect
$ The system has four QPI links, used in three roles:
$ ① CPU socket to CPU socket
$ ② IOH to IOH
$ ③ CPU to IOH (one link per socket)
$ Each QPI link is bidirectional at 102.4 Gbps
$ The worst-case scenario is that every packet is received on one IOH and then forwarded out through the other IOH
$ In that case, links ② and ③ can use only one direction (about 50 Gbps)
$ Link ① is not a problem as long as each packet is processed by a CPU on the same node and the NIC copies the packet into that node's memory
[Figure: dual-socket topology showing QPI links ① (CPU-CPU), ② (IOH-IOH), and the two ③ (CPU-IOH) links]
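The worst case above can be sketched as a quick back-of-the-envelope calculation. This is only an illustration of the slide's reasoning, assuming 102.4 Gbps per bidirectional QPI link (so roughly 51.2 Gbps per direction, which the slide rounds to 50 Gbps); the names below are made up for this sketch.

```python
# Back-of-the-envelope QPI bandwidth check for the worst-case
# forwarding pattern: all packets arrive at one IOH and are
# forwarded out through the other IOH.

QPI_BIDIRECTIONAL_GBPS = 102.4                  # per link, both directions combined
QPI_ONE_WAY_GBPS = QPI_BIDIRECTIONAL_GBPS / 2   # ~51.2 Gbps per direction

def worst_case_forwarding_gbps():
    """In this pattern every packet crosses the IOH-to-IOH link (②)
    exactly once, in a single direction, so that link's one-way
    bandwidth caps the aggregate forwarding rate."""
    return QPI_ONE_WAY_GBPS

if __name__ == "__main__":
    cap = worst_case_forwarding_gbps()
    print(f"Worst-case IOH-to-IOH forwarding cap: {cap:.1f} Gbps")
    print("100 Gbps feasible in this pattern:", cap >= 100.0)
```

This is why the slide stresses keeping packets on the node that received them: if the NIC writes each packet into local memory and a local CPU processes it, the cross-chipset link ② is never the bottleneck.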