Presentation material for TokyoRubyKaigi11.
Describes techniques used by H2O, including techniques to optimize TCP for responsiveness, server push, and cache digests.
How Happy They Became with H2O/mruby, and the Future of HTTP (Ichito Nagata)
The document summarizes the process of migrating the RoomClip image resizing service from Nginx to H2O. Key points include:
- The complex Nginx configuration was difficult to debug and posed security risks. H2O provided better debuggability through Ruby.
- The migration took 1-2 months and involved refactoring image processing out of the web server and into separate Converter processes.
- Benchmarks showed H2O had comparable or better performance than Nginx, with lower latency percentiles and reduced disk and S3 usage.
- Additional benefits included the ability to write unit tests in mruby and new libraries like mruby-rack for running Ruby code on H2O.
In this talk we will discuss how to build and run containers without root privileges. As part of the discussion, we will introduce new programs like fuse-overlayfs and slirp4netns and explain how this is possible using user namespaces. fuse-overlayfs makes it possible to use the same storage model as "root" containers, including layered images. slirp4netns emulates a TCP/IP stack in userspace, letting a container use a network namespace and still reach the outside world (with some limitations).
We will also introduce Usernetes, and show how to run Kubernetes in an unprivileged user namespace.
https://sched.co/Jcgg
Migrating your clusters and workloads from Hadoop 2 to Hadoop 3 (DataWorks Summit)
The Hadoop community announced Hadoop 3.0 GA in December 2017 and 3.1 around April 2018, loaded with a lot of features and improvements. One of the biggest challenges for any new major release of a software platform is compatibility, and the Apache Hadoop community has focused on ensuring wire and binary compatibility for Hadoop 2 clients and workloads.
There are many challenges to be addressed by admins while upgrading to a major release of Hadoop. Users running workloads on Hadoop 2 should be able to seamlessly run or migrate their workloads onto Hadoop 3. This session will dive deep into upgrade aspects and provide a detailed preview of migration strategies, with information on what works and what might not. The talk will focus on the motivation for upgrading to Hadoop 3 and provide a cluster upgrade guide for admins and a workload migration guide for users of Hadoop.
Speaker
Suma Shivaprasad, Hortonworks, Staff Engineer
Rohith Sharma, Hortonworks, Senior Software Engineer
NGINX ADC: Basics and Best Practices – EMEA (NGINX, Inc.)
In this webinar we help you get started with NGINX, the industry's most ubiquitous web server and API gateway. We cover best practices for installing, configuring, and troubleshooting both NGINX Open Source and the enterprise-grade NGINX Plus. We also provide insights into using NGINX Controller to manage your NGINX Plus instances.
Watch this webinar to learn:
- How to create NGINX configurations for web server, load balancer, etc.
- About improving performance using keepalives and other NGINX directives
- How the NGINX Controller Load Balancing Module can manage NGINX Plus instances at scale
- About augmenting your existing ADC with NGINX
https://www.nginx.com/resources/webinars/nginx-adc-basics-best-practices-emea/
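The keepalive tuning mentioned in the bullet list above comes down to a small amount of configuration. A minimal, illustrative sketch (the upstream name and addresses are made up, not taken from the webinar):

```nginx
# Reuse upstream connections instead of opening a new one per request.
upstream app_backend {                   # "app_backend" is an illustrative name
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    keepalive 32;                        # idle keepalive connections cached per worker
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;          # upstream keepalive needs HTTP/1.1...
        proxy_set_header Connection "";  # ...and a cleared Connection header
    }
}
```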
The document discusses Kubernetes networking. It describes how Kubernetes networking allows pods to have routable IPs and communicate without NAT, unlike Docker networking which uses NAT. It covers how services provide stable virtual IPs to access pods, and how kube-proxy implements services by configuring iptables on nodes. It also discusses the DNS integration using SkyDNS and Ingress for layer 7 routing of HTTP traffic. Finally, it briefly mentions network plugins and how Kubernetes is designed to be open and customizable.
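The service mechanism summarized above can be pictured as a NAT table: kube-proxy programs iptables rules that rewrite a Service's stable virtual IP to one of its pod endpoints. A toy Python model of that idea (the addresses are invented; real kube-proxy in iptables mode picks endpoints probabilistically rather than strict round-robin):

```python
import itertools

# Toy model of what kube-proxy programs into iptables: each Service gets a
# stable virtual IP, and connections to it are spread over pod endpoints.
services = {
    "10.96.0.10": ["172.17.0.4:8080", "172.17.0.5:8080", "172.17.0.6:8080"],
}
_round_robin = {vip: itertools.cycle(eps) for vip, eps in services.items()}

def dnat(virtual_ip):
    """Pick a real pod endpoint for a connection to the Service's virtual IP."""
    return next(_round_robin[virtual_ip])

print(dnat("10.96.0.10"))  # 172.17.0.4:8080
print(dnat("10.96.0.10"))  # 172.17.0.5:8080
```

Because the mapping happens on every node, a pod can reach any Service IP without the caller knowing which pod actually answers.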
The document discusses using Senlin, an OpenStack clustering service, to provide autoscaling capabilities for multicloud platforms. Senlin allows for managing clusters of nodes across different cloud providers and includes features like load balancing, auto-healing, and scaling policies. It describes how Senlin was implemented at a company to provide a centralized autoscaling solution across OpenStack and VMware cloud environments. Some drawbacks of Senlin are also outlined, along with potential future work like multi-region clusters and global load balancing.
Why Containers? An OpenShift Deployment Case Study and Considerations When Moving to Containers (rockplace)
[Webinar: Speed Up Your Business with Microsoft Azure and Red Hat OpenShift!]
Why Containers? An OpenShift Deployment Case Study and Considerations When Moving to Containers
Gu Cheon-mo, Executive Director, rockplace
Watch the recording: https://youtu.be/i3yKrHLHYJI
What CloudStackers Need To Know About LINSTOR/DRBD (ShapeBlue)
Philipp explains the best-performing Open Source software-defined storage available to Apache CloudStack today. It consists of two well-concerted components: LINSTOR and DRBD. Each also has independent use cases where it is deployed alone; in this presentation, the combination of the two is examined. They form the control plane and the data plane of the SDS. We will touch on: performance, scalability, hyper-convergence (data locality for high I/O performance), resiliency through data replication (synchronous within a site; 2-way, 3-way, or more), snapshots, backup (to S3), encryption at rest, deduplication, compression, placement policies (regarding failure domains), a management CLI and web GUI, a monitoring interface, self-healing (restoring redundancy after device/node failure), federation of multiple sites (async mirroring and repeated snapshot-difference shipping), QoS control (noisy-neighbor limitation), and of course complete integration with CloudStack for KVM guests. It is Open Source software following the Unix philosophy: each component solves one task and is made for maximal re-usability. The solution leverages the Linux kernel, LVM and/or ZFS, and many Open Source software libraries. Building on these giant Open Source foundations not only saves LINBIT from reinventing the wheel, it also empowers your day-2 operations teams, since they are already familiar with these technologies.
Philipp Reisner is one of the founders and CEO of LINBIT in Vienna/Austria. He holds a Dipl.-Ing. (comparable to MSc) degree in computer science from Technical University in Vienna. His professional career has been dominated by developing DRBD, a storage replication software for Linux. While in the early years (2001) this was writing kernel code, today he leads a company of 30 employees with locations in Austria and the USA. LINBIT is an Open Source company offering enterprise-level support subscriptions for its Open Source technologies.
-----------------------------------------
CloudStack Collaboration Conference 2022 took place on 14th-16th November in Sofia, Bulgaria, and virtually. The event saw a hybrid get-together of the global CloudStack community, hosting 370 attendees. It featured 43 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, including technical talks, user stories, and presentations of new features and integrations.
The document provides an overview of Red Hat OpenShift Container Platform, including:
- OpenShift provides a fully automated Kubernetes container platform for any infrastructure.
- It offers integrated services like monitoring, logging, routing, and a container registry out of the box.
- The architecture runs everything in pods on worker nodes, with masters managing the control plane using Kubernetes APIs and OpenShift services.
- Key concepts include pods, services, routes, projects, configs and secrets that enable application deployment and management.
The BPF, or Berkeley Packet Filter, mechanism was first introduced in Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15-3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, like tracing and networking, and about future plans.
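To make the "instruction set plus interpreter" idea concrete, here is a toy filter machine in Python. It mimics the flavor of classic BPF (an accumulator, loads, conditional jumps, and a return code), not the real instruction encoding; the opcodes are invented for illustration:

```python
def run_filter(prog, pkt):
    """Interpret a toy BPF-like program over packet bytes; returns 0 to drop."""
    acc, pc = 0, 0
    while pc < len(prog):
        op, arg = prog[pc]
        if op == "ld_byte":      # load pkt[arg] into the accumulator
            acc = pkt[arg]
        elif op == "jeq":        # execute the next instruction only if acc == arg
            if acc != arg:
                pc += 1          # otherwise skip it
        elif op == "ret":        # return code: 0 = drop, nonzero = accept
            return arg
        pc += 1
    return 0

# Accept only packets whose first byte is 0x45 (an IPv4 header with IHL 5).
prog = [("ld_byte", 0), ("jeq", 0x45), ("ret", 0xFFFF), ("ret", 0)]
print(run_filter(prog, bytes([0x45, 0x00])))  # 65535 (accept)
print(run_filter(prog, bytes([0x60, 0x00])))  # 0 (drop)
```

The kernel's verifier and JIT compilers are what turn this simple interpreter model into something safe and fast enough to run on every packet.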
OpenStack Best Practices and Considerations - TeraSky Tech Day (Arthur Berezin)
- Arthur Berezin presented on best practices for deploying enterprise-grade OpenStack implementations. The presentation covered OpenStack architecture, layout considerations including high availability, and best practices for compute, storage, and networking deployments. It provided guidance on choosing backend drivers, overcommitting resources, and networking designs.
Reactive app using actor model & Apache Spark (Rahul Kumar)
Developing applications with Big Data is really challenging work; scaling, fault tolerance, and responsiveness are some of the biggest challenges. A real-time big data application with self-healing features is a dream these days. Apache Spark is a fast in-memory data processing system that makes a good backend for real-time applications. In this talk I will show how to use a reactive platform, the actor model, and the Apache Spark stack to develop a system that is responsive, resilient, fault-tolerant, and message-driven.
An introduction to big data pipelining with Cassandra & Spark, West Mins... (Simon Ambridge)
This document provides an overview and outline of a 1-hour introduction to building a big data pipeline using Docker, Cassandra, Spark, Spark-Notebook and Akka. The introduction is presented as a half-day workshop at Devoxx November 2015. It uses a data pipeline environment from Data Fellas and demonstrates how to use scalable distributed technologies like Docker, Spark, Spark-Notebook and Cassandra to build a reactive, repeatable big data pipeline. The key takeaway is understanding how to construct such a pipeline.
Since 2014, Typesafe has been actively contributing to the Apache Spark project, and has become a certified development support partner of Databricks, the company started by the creators of Spark. Typesafe and Mesosphere have forged a partnership in which Typesafe is the official commercial support provider of Spark on Apache Mesos, along with Mesosphere’s Datacenter Operating Systems (DCOS).
In this webinar with Iulian Dragos, Spark team lead at Typesafe Inc., we reveal how Typesafe supports running Spark in various deployment modes, along with the improvements we made to Spark to help integrate backpressure signals into the underlying technologies, making it a better fit for Reactive Streams. He also shows the functionality at work, and how simple it is to deploy Spark on Mesos with Typesafe.
We will introduce:
Various deployment modes for Spark: Standalone, Spark on Mesos, and Spark with Mesosphere DCOS
Overview of Mesos and how it relates to Mesosphere DCOS
Deeper look at how Spark runs on Mesos
How to manage coarse-grained and fine-grained scheduling modes on Mesos
What to know about a client vs. cluster deployment
A demo running Spark on Mesos
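As a rough sketch, the deployment modes listed above map onto different --master URLs for spark-submit (host names, the ZooKeeper ensemble, and the jar path below are placeholders, not taken from the webinar):

```shell
# Standalone cluster
spark-submit --master spark://master:7077 app.jar

# Spark on Mesos (client mode by default); coarse- vs. fine-grained
# scheduling is toggled with the spark.mesos.coarse setting
spark-submit --master mesos://zk://zk1:2181/mesos \
             --conf spark.mesos.coarse=true app.jar

# Cluster deploy mode, submitting through a Mesos cluster dispatcher
spark-submit --master mesos://dispatcher:7077 --deploy-mode cluster app.jar
```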
Streaming Analytics with Spark, Kafka, Cassandra and Akka (Helena Edelson)
This document discusses a new approach to building scalable data processing systems using streaming analytics with Spark, Kafka, Cassandra, and Akka. It proposes moving away from architectures like Lambda and ETL that require duplicating data and logic. The new approach leverages Spark Streaming for a unified batch and stream processing runtime, Apache Kafka for scalable messaging, Apache Cassandra for distributed storage, and Akka for building fault tolerant distributed applications. This allows building real-time streaming applications that can join streaming and historical data with simplified architectures that remove the need for duplicating data extraction and loading.
Streaming Big Data with Spark, Kafka, Cassandra, Akka & Scala (from webinar) (Helena Edelson)
This document provides an overview of streaming big data with Spark, Kafka, Cassandra, Akka, and Scala. It discusses delivering meaning in near-real time at high velocity and an overview of Spark Streaming, Kafka and Akka. It also covers Cassandra and the Spark Cassandra Connector as well as integration in big data applications. The presentation is given by Helena Edelson, a Spark Cassandra Connector committer and Akka contributor who is a Scala and big data conference speaker working as a senior software engineer at DataStax.
This talk will address new architectures emerging for large-scale streaming analytics, some based on Spark, Mesos, Akka, Cassandra and Kafka (SMACK) and others on newer streaming analytics platforms and frameworks using Apache Flink or GearPump. Popular architectures like Lambda separate layers of computation and delivery and require many technologies with overlapping functionality. Some of this results in duplicated code, untyped processes, or high operational overhead, not to mention the cost (e.g. of ETL).
I will discuss the problem domain and what is needed in terms of strategies, architecture and application design and code to begin leveraging simpler data flows. We will cover how the particular set of technologies addresses common requirements and how collaboratively they work together to enrich and reinforce each other.
Real-Time Anomaly Detection with Spark MLlib, Akka and Cassandra (Natalino Busa)
We present a solution for streaming anomaly detection, named "Coral", based on Spark, Akka and Cassandra. In the system presented, Spark runs the data analytics pipeline for anomaly detection. By running Spark on the latest events and data, we make sure that the model is always up to date and that the number of false positives is kept low, even under changing trends and conditions. Our machine learning pipeline uses Spark decision tree ensembles and k-means clustering. Once the model is trained by Spark, the model's parameters are pushed to the streaming event-processing layer, implemented in Akka. The Akka layer then scores thousands of events per second according to the last model provided by Spark. Spark and Akka communicate with each other using Cassandra as a low-latency data store. By doing so, we make sure that every element of this solution is resilient and distributed. Spark performs micro-batches to keep the model up to date, while Akka detects new anomalies using the latest Spark-generated data model. The project is currently hosted on GitHub. Have a look at: http://coral-streaming.github.io
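The train-in-Spark, score-in-Akka split described above boils down to a simple contract: training produces centroids, and the scoring layer flags events that are far from every centroid. A self-contained Python sketch of that idea (no Spark or Akka involved; the data, k, and threshold are invented for illustration):

```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_centroids(points, k=2, iters=10):
    """A tiny Lloyd's-algorithm k-means, standing in for the Spark MLlib step."""
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: distance(p, centroids[j]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def is_anomaly(event, centroids, threshold=3.0):
    """Scoring step (the Akka layer's job): far from every centroid = anomaly."""
    return min(distance(event, c) for c in centroids) > threshold

history = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (9.0, 9.2), (9.1, 8.8)]
centroids = train_centroids(history)
print(is_anomaly((1.0, 1.0), centroids))    # False: near a learned cluster
print(is_anomaly((50.0, -3.0), centroids))  # True: far from both clusters
```

In the real system the centroids would be written to Cassandra by Spark and read by the Akka scorers, which is what keeps the two layers decoupled.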
Alpine Academy Apache Spark series #1: introduction to cluster computing wit... (Holden Karau)
Alpine Academy Apache Spark series #1: introduction to cluster computing with Python & a wee bit of Scala. This is the first in the series and is aimed at the intro level; the next one will cover MLlib & ML.
NOTE: This was converted to PowerPoint from Keynote. SlideShare does not play the embedded videos. You can download the PowerPoint from SlideShare and import it into Keynote; the videos should work in Keynote.
Abstract:
In this presentation, we will describe the "Spark Kernel" which enables applications, such as end-user facing and interactive applications, to interface with Spark clusters. It provides a gateway to define and run Spark tasks and to collect results from a cluster without the friction associated with shipping jars and reading results from peripheral systems. Using the Spark Kernel as a proxy, applications can be hosted remotely from Spark.
This presentation includes a comprehensive introduction to Apache Spark, from an explanation of its rapid ascent to its performance and developer advantages over MapReduce. We also explore its built-in functionality for application types involving streaming, machine learning, and Extract, Transform and Load (ETL).
Lambda Architecture with Spark Streaming, Kafka, Cassandra, Akka, Scala (Helena Edelson)
Scala Days, Amsterdam, 2015: Lambda Architecture - Batch and Streaming with Spark, Cassandra, Kafka, Akka and Scala; Fault Tolerance, Data Pipelines, Data Flows, Data Locality, Akka Actors, Spark, Spark Cassandra Connector, Big Data, Asynchronous data flows. Time series data, KillrWeather, Scalable Infrastructure, Partition For Scale, Replicate For Resiliency, Parallelism
Isolation, Data Locality, Location Transparency
Reactive dashboards using Apache Spark (Rahul Kumar)
An Apache Spark tutorial talk. In this talk I explain how to start working with Apache Spark, the features of Apache Spark, and how to compose a data platform with Spark. This talk also covers the reactive platform and tools and frameworks like Play and Akka.
Using Spark, Kafka, Cassandra and Akka on Mesos for Real-Time Personalization (Patrick Di Loreto)
The gambling industry has arguably been one of the most comprehensively affected by the internet revolution, and if an organization such as William Hill hadn't adapted successfully it would have disappeared. We call this, “Going Reactive.”
The company's latest innovations are very cutting edge platforms for personalization, recommendation, and big data, which are based on Akka, Scala, Play Framework, Kafka, Cassandra, Spark, and Mesos.
Data Science lifecycle with Apache Zeppelin and Spark by Moonsoo Lee (Spark Summit)
This document discusses Apache Zeppelin, an open-source notebook for interactive data analytics. It provides an overview of Zeppelin's features, including interactive notebooks, multiple backends, interpreters, and a display system. The document also covers Zeppelin's adoption timeline, from its origins as a commercial product in 2012 to becoming an Apache Incubator project in 2014. Future projects involving Zeppelin like Helium and Z-Manager are also briefly described.
Lambda Architecture with Spark, Spark Streaming, Kafka, Cassandra, Akka and S... (Helena Edelson)
Regardless of the meaning we are searching for over our vast amounts of data, whether we are in science, finance, technology, energy, health care…, we all share the same problems that must be solved: How do we achieve that? What technologies best support the requirements? This talk is about how to leverage fast access to historical data with real time streaming data for predictive modeling for lambda architecture with Spark Streaming, Kafka, Cassandra, Akka and Scala. Efficient Stream Computation, Composable Data Pipelines, Data Locality, Cassandra data model and low latency, Kafka producers and HTTP endpoints as akka actors...
Spark Streaming makes it easy to build scalable fault-tolerant streaming applications. In this webinar, developers will learn:
*How Spark Streaming works - a quick review.
*Features in Spark Streaming that help prevent potential data loss.
*Complementary tools in a streaming pipeline - Kafka and Akka.
*Design and tuning tips for Reactive Spark Streaming applications.
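One of the data-loss protections referred to above is a receiver write-ahead log (WAL): persist each event durably before it counts as received, and replay the log after a failure. A minimal Python sketch of that idea (not Spark Streaming's actual implementation; the file layout is invented):

```python
import json, os, tempfile

class WriteAheadLog:
    def __init__(self, path):
        self.path = path

    def append(self, event):
        """Durably record an event *before* acknowledging or processing it."""
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")
            f.flush()
            os.fsync(f.fileno())   # survive a process crash

    def replay(self):
        """Recover all logged events after a restart."""
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return [json.loads(line) for line in f]

wal = WriteAheadLog(os.path.join(tempfile.mkdtemp(), "events.wal"))
for event in [{"id": 1}, {"id": 2}]:
    wal.append(event)
print(wal.replay())  # [{'id': 1}, {'id': 2}] — both events survive a "crash"
```

Combined with replayable sources like Kafka, this is what turns a streaming pipeline from best-effort into at-least-once delivery.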
Data processing platforms architectures with Spark, Mesos, Akka, Cassandra an... (Anton Kirillov)
This talk is about architecture designs for data processing platforms based on SMACK stack which stands for Spark, Mesos, Akka, Cassandra and Kafka. The main topics of the talk are:
- SMACK stack overview
- storage layer layout
- fixing NoSQL limitations (joins and group by)
- cluster resource management and dynamic allocation
- reliable scheduling and execution at scale
- different options for getting the data into your system
- preparing for failures with proper backup and patching strategies
Everyone in the Scala world is using or looking into using Akka for low-latency, scalable, distributed or concurrent systems. I'd like to share my story of developing and productionizing multiple Akka apps, including low-latency ingestion and real-time processing systems, and Spark-based applications.
When does one use actors vs futures?
Can we use Akka with, or in place of, Storm?
How did we set up instrumentation and monitoring in production?
How does one use VisualVM to debug Akka apps in production?
What happens if the mailbox gets full?
What is our Akka stack like?
I will share best practices for building Akka and Scala apps, pitfalls and things we'd like to avoid, and a vision of where we would like to go for ideal Akka monitoring, instrumentation, and debugging facilities. Plus backpressure and at-least-once processing.
Video: https://youtu.be/C_u4_l84ED8
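One of the questions above, "What happens if the mailbox gets full?", has a concrete answer in a bounded mailbox: once the queue is at capacity, new messages are rejected rather than queued without limit, making backpressure visible. Akka offers several real strategies for this; the Python sketch below is only an illustration of the bounded-mailbox idea, with invented names:

```python
from collections import deque

class BoundedMailbox:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def offer(self, msg):
        """Enqueue a message; reject it when the mailbox is full."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # overflow: drop instead of growing unbounded
            return False
        self.queue.append(msg)
        return True

    def poll(self):
        """Dequeue the next message for the actor, or None if empty."""
        return self.queue.popleft() if self.queue else None

mbox = BoundedMailbox(capacity=2)
results = [mbox.offer(m) for m in ("a", "b", "c")]
print(results)       # [True, True, False] — the third message is rejected
print(mbox.dropped)  # 1
```

The design choice is between dropping, blocking the sender, or signaling upstream; what matters is that a slow consumer can no longer exhaust memory silently.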
Karl Isenberg reviews the history of distributed computing, clarifies terminology for layers in the container stack, and does a head-to-head comparison of several tools in the space, including Kubernetes, Marathon, and Docker Swarm. Learn which features and qualities are critical for container orchestration and how you can apply this knowledge when evaluating platforms.
Linux 4.x Tracing Tools: Using BPF Superpowers (Brendan Gregg)
Talk for USENIX LISA 2016 by Brendan Gregg.
"Linux 4.x Tracing Tools: Using BPF Superpowers
The Linux 4.x series heralds a new era of Linux performance analysis, with the long-awaited integration of a programmable tracer: Enhanced BPF (eBPF). Formerly the Berkeley Packet Filter, BPF has been enhanced in Linux to provide system tracing capabilities, and integrates with dynamic tracing (kprobes and uprobes) and static tracing (tracepoints and USDT). This has allowed dozens of new observability tools to be developed so far: for example, measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more. These lead to performance wins large and small, especially when instrumenting areas that previously had zero visibility. Tracing superpowers have finally arrived.
In this talk I'll show you how to use BPF in the Linux 4.x series, and I'll summarize the different tools and front ends available, with a focus on iovisor bcc. bcc is an open source project to provide a Python front end for BPF, and comes with dozens of new observability tools (many of which I developed). These tools include new BPF versions of old classics, and many new tools, including: execsnoop, opensnoop, funccount, trace, biosnoop, bitesize, ext4slower, ext4dist, tcpconnect, tcpretrans, runqlat, offcputime, offwaketime, and many more. I'll also summarize use cases and some long-standing issues that can now be solved, and how we are using these capabilities at Netflix."
State of Containers and the Convergence of HPC and BigData (inside-BigData.com)
In this deck from 2018 Swiss HPC Conference, Christian Kniep from Docker Inc. presents: State of Containers and the Convergence of HPC and BigData.
"This talk will recap the history of and what constitutes Linux Containers, before laying out how the technology is employed by various engines and what problems these engines have to solve. Afterward Christian will elaborate on why the advent of standards for images and runtimes moved the discussion from building and distributing containers to orchestrating containerized applications at scale. In conclusion attendees will get an update on how containers foster the convergence of Big Data and HPC workloads and the state of native HPC containers."
Learn more: http://docker.com
and
http://www.hpcadvisorycouncil.com/events/2018/swiss-workshop/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
FIWARE Global Summit - Real-time Media Stream Processing Using Kurento (FIWARE)
Kurento is an open source software that simplifies the creation of real-time communication applications involving audio and video streams. It provides a server that abstracts compatibility issues between senders and receivers and allows for manipulation or redistribution of streams. The server includes endpoints for stream input/output and filters for processing or transforming media as it flows through the pipeline. Example applications demonstrated by Kurento include an RTP receiver that redirects streams to a browser and a magic mirror that applies computer vision to detect and overlay images on a face in real-time video.
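The endpoint-and-filter pipeline described above is essentially stage composition over a stream of media frames: an input endpoint feeds one or more filters, which feed an output endpoint. A toy Python rendering of that model (the stage names and frame format are invented, not Kurento's API):

```python
def rtp_source(frames):
    """Input endpoint: yields incoming media frames."""
    yield from frames

def overlay_filter(frames, label):
    """Filter stage: transforms each frame, e.g. the 'magic mirror' overlay."""
    for frame in frames:
        yield {**frame, "overlay": label}

def webrtc_sink(frames):
    """Output endpoint: collects frames that would be sent to a browser."""
    return list(frames)

frames_in = [{"n": 1}, {"n": 2}]
out = webrtc_sink(overlay_filter(rtp_source(frames_in), "hat"))
print(out)  # [{'n': 1, 'overlay': 'hat'}, {'n': 2, 'overlay': 'hat'}]
```

Because each stage only consumes and produces frames, stages can be rearranged or swapped, which is the property that lets a media server redistribute or manipulate streams without sender/receiver changes.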
Distributed application use case on Docker (Hiroshi Miura)
1) The document discusses using Docker containers and the Hinemos monitoring system to automate operations for a distributed application running on container clusters.
2) Key benefits outlined include automated rebalancing of containers for performance and cost reduction, reduced downtime through automated fallback from failures, and consolidation of platforms through wrapping differences in container images.
3) Challenges addressed include complex data distribution management and inability to integrate environments for applications with different dependencies, which Docker containers help solve.
Ceph Day Shanghai - Hyper Converged PLCloud with Ceph (Ceph Community)
Hyper Converged PLCloud with CEPH
This document discusses PowerLeader Cloud (PLCloud), a cloud computing platform that uses a hyper-converged infrastructure with OpenStack, Docker, and Ceph. It provides an overview of PLCloud and how it has adopted OpenStack, Ceph, and other open source technologies. It then describes PLCloud's hyper-converged architecture and how it leverages OpenStack, Docker, and Ceph. Finally, it discusses a specific use case where Ceph RADOS Gateway is used for media storage and access in PLCloud.
FIWARE Global Summit - Real-time Media Stream Processing Using KurentoFIWARE
Kurento Media Server is an open source platform for processing audio and video streams. It allows input streams to be processed and output streams to be manipulated or redistributed. The server has endpoints to receive media and filters that can transform or process the media. Client applications connect to Kurento to build processing pipelines with these components and control the streaming applications.
The document discusses the need for web servers to provide various web services for a company. It provides an overview of the history and development of the World Wide Web and web servers. It then describes key features and functions of the Apache web server, including caching, logging, mapping URLs to files, access control, server-side includes, and virtual hosting.
Here I covered the cores of Apache and also discuss each and every core. Virtual host, resistance server process some protocols like HTTP, SMTP, DNS FTP, are also be highlighted.
Focus on some installing part of apache.
The document discusses OpenStack and Fibre Channel storage. It provides an overview of OpenStack, including its goals of being an open platform with broad support and empowering users. It describes core OpenStack technologies like Compute, Object Storage, and Block Storage. It outlines the history and current state of Fibre Channel support in OpenStack, including the Fibre Channel Zone Manager that automates zoning. It diagrams the high-level architecture and components involved in provisioning Fibre Channel volumes to virtual machines from OpenStack.
OpenNebulaConf 2016 - Measuring and tuning VM performance by Boyan Krosnov, S...OpenNebula Project
In this session we'll explore measuring VM performance and evaluating changes to settings or infrastructure which can affect performance positively. We'll also share the best current practice for architecture for high performance clouds from our experience.
EDW CENIPA is a opensource project designed to enable analysis of aeronautical incidentes that occured in the brazilian civil aviation. The project uses techniques and BI tools that explore innovative low-cost technologies. Historically, Business Intelligence platforms are expensive and impracticable for small projects. BI projects require specialized skills and high development costs. This work aims to break this barrier.
Presentation from DockerCon EU '17 about how Aurea achieved over 50% cost reduction using Docker and about two major technical obstacles we had when dockerizing legacy applications.
This document provides an overview of application deployment on cloud platforms. It begins with an introduction to cloud computing and comparisons of SAAS, PAAS and IAAS models. The document then discusses benefits and challenges of cloud deployment. It also covers business and architectural considerations for moving applications to the cloud. Finally, it demonstrates several popular platform as a service providers like Firebase, AWS, Heroku and Cloud Foundry and provides guidance on deploying applications on each.
Apache Ratis is an open source Java library for the Raft Consensus Protocol. Raft is being used successfully as an alternative to Paxos to implement a consistently replicated log. Raft is proven to be safe and is designed to be simpler to understand. Ratis is a high performance implementation of Raft. Apache Ozone, Apache IoTDB and Alluxio use Apache Ratis for providing high availability and replicating raw data.
Ratis implements all the standard Raft features, including leader election, log replication, membership change and log compaction. Moreover, it is designed with data intensive applications in mind and fully supports asynchronous event-driven applications. It is highly customizable – allows pluggable state machines, pluggable RPC, pluggable Raft log and pluggable Metrics. It has been implemented to provide low latency and high throughput on transactions.
In this talk, we first give a brief introduction of Raft. We will discuss the features and use cases of Ratis. Finally, we discuss the current development status and the future work.
CoC EU 2024
Ch 22: Web Hosting and Internet Serverswebhostingguy
Web hosting involves providing space on a server for websites. Linux is commonly used for hosting due to its maintainability and performance. A web server software like Apache is installed to handle HTTP requests from browsers. URLs identify resources on the web using protocols like HTTP and FTP. CGI scripts allow dynamic content generation but pose security risks. Load balancing distributes server load across multiple systems. Choosing a server depends on factors like robustness, performance, updates, and cost. Apache is widely used and configurable using configuration files that control server parameters, resources, and access restrictions. Virtual interfaces allow a single server to host multiple websites. Caching and proxies can improve performance and security. Anonymous FTP allows public file downloads.
The document discusses the performance of HTTP/2 compared to HTTP/1.1 across different network conditions. It summarizes results from testing 8 real websites under 16 bandwidth and latency combinations with varying packet loss rates. Overall, HTTP/2 performs better for document complete time and speed index, especially on slower connections, though results vary depending on the specific site and metrics measured.
Reorganizing Website Architecture for HTTP/2 and BeyondKazuho Oku
This document discusses reorganizing website architecture for HTTP/2 and beyond. It summarizes some issues with HTTP/2 including errors in prioritization where some browsers fail to specify resource priority properly. It also discusses the problem of TCP head-of-line blocking where pending data in TCP buffers can delay higher priority resources. The document proposes solutions to these issues such as prioritizing resources on the server-side and writing only what can be sent immediately to avoid buffer blocking. It also examines the mixed success of HTTP/2 push and argues the server should not push already cached resources.
This document discusses programming TCP for responsiveness when sending HTTP/2 responses. It describes how to reduce head-of-line blocking by filling the TCP congestion window before sending data. The key points are reading TCP states via getsockopt to determine how much data can be sent immediately, and optimizing this only for high latency connections or small congestion windows to avoid additional response delays. Benchmarks show this approach can reduce response times from multiple round-trip times to a single RTT.
The document discusses optimizations to TCP and HTTP/2 to improve responsiveness on the web. It describes how TCP slow start works and the delays introduced in standard HTTP/2 usage from TCP/TLS handshakes. The author proposes adjusting the TCP send buffer polling threshold to allow switching between responses more quickly based on TCP congestion window state. Benchmark results show this can reduce response times by eliminating an extra round-trip delay.
Cache aware-server-push in H2O version 1.5Kazuho Oku
This document discusses cache-aware server push in H2O version 1.5. It describes calculating a fingerprint of cached assets using a Golomb compressed set to identify what assets need to be pushed from the server. It also discusses implementing this fingerprint using a cookie or service worker. The hybrid approach stores responses in the service worker cache and updates the cookie fingerprint. H2O 1.5 implements cookie-based fingerprints to cancel push indications for cached assets, potentially improving page load speeds.
JSON SQL Injection and the Lessons LearnedKazuho Oku
This document discusses JSON SQL injection and lessons learned from vulnerabilities in SQL query builders. It describes how user-supplied JSON input containing operators instead of scalar values could manipulate queries by injecting conditions like id!='-1' instead of a specific id value. This allows accessing unintended data. The document examines how SQL::QueryMaker and a strict mode in SQL::Maker address this by restricting query parameters to special operator objects or raising errors on non-scalar values. While helpful, strict mode may break existing code, requiring changes to parameter handling. The vulnerability also applies to other languages' frameworks that similarly convert arrays to SQL IN clauses.
This document discusses using the prove command-line tool to run tests and other scripts. Prove is a test runner that uses the Test Anything Protocol (TAP) to aggregate results. It can run tests and scripts written in any language by specifying the interpreter with --exec. Extensions other than .t can be run by setting --ext. Prove searches for tests in the t/ directory by default but can run any kind of scripts or tasks placed in t/, such as service monitoring scripts. The .proverc file can save common prove options for a project.
JSX - developing a statically-typed programming language for the WebKazuho Oku
Kazuho Oku presents JSX, a statically-typed programming language that compiles to JavaScript. JSX aims to improve productivity over JavaScript by enabling errors to be caught at compile-time rather than runtime. It also aims to optimize code size and execution speed compared to JavaScript through type information and compiler optimizations. Oku discusses JSX language features like classes and types, benchmarks showing improved performance over JavaScript, and efforts to bind JSX to W3C standards through automatic translation of interface definition languages.
Hybridize Functions: A Tool for Automatically Refactoring Imperative Deep Lea...Raffi Khatchadourian
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code—supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, imperative DL frameworks encouraging eager execution have emerged but at the expense of run-time performance. Though hybrid approaches aim for the “best of both worlds,” using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution—avoiding performance bottlenecks and semantically inequivalent results. We discuss the engineering aspects of a refactoring tool that automatically determines when it is safe and potentially advantageous to migrate imperative DL code to graph execution and vice-versa.
Transcript: Canadian book publishing: Insights from the latest salary survey ...BookNet Canada
Join us for a presentation in partnership with the Association of Canadian Publishers (ACP) as they share results from the recently conducted Canadian Book Publishing Industry Salary Survey. This comprehensive survey provides key insights into average salaries across departments, roles, and demographic metrics. Members of ACP’s Diversity and Inclusion Committee will join us to unpack what the findings mean in the context of justice, equity, diversity, and inclusion in the industry.
Results of the 2024 Canadian Book Publishing Industry Salary Survey: https://publishers.ca/wp-content/uploads/2025/04/ACP_Salary_Survey_FINAL-2.pdf
Link to presentation slides and transcript: https://bnctechforum.ca/sessions/canadian-book-publishing-insights-from-the-latest-salary-survey/
Presented by BookNet Canada and the Association of Canadian Publishers on May 1, 2025 with support from the Department of Canadian Heritage.
TrsLabs - Leverage the Power of UPI PaymentsTrs Labs
Revolutionize your Fintech growth with UPI Payments
"Riding the UPI strategy" refers to leveraging the Unified Payments Interface (UPI) to drive digital payments in India and beyond. This involves understanding UPI's features, benefits, and potential, and developing strategies to maximize its usage and impact. Essentially, it's about strategically utilizing UPI to promote digital payments, financial inclusion, and economic growth.
TrsLabs - Fintech Product & Business ConsultingTrs Labs
Hybrid Growth Mandate Model with TrsLabs
Strategic Investments, Inorganic Growth, Business Model Pivoting are critical activities that business don't do/change everyday. In cases like this, it may benefit your business to choose a temporary external consultant.
An unbiased plan driven by clearcut deliverables, market dynamics and without the influence of your internal office equations empower business leaders to make right choices.
Getting things done within a budget within a timeframe is key to Growing Business - No matter whether you are a start-up or a big company
Talk to us & Unlock the competitive advantage
Procurement Insights Cost To Value Guide.pptxJon Hansen
Procurement Insights integrated Historic Procurement Industry Archives, serves as a powerful complement — not a competitor — to other procurement industry firms. It fills critical gaps in depth, agility, and contextual insight that most traditional analyst and association models overlook.
Learn more about this value- driven proprietary service offering here.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web wird als die nächste Generation des HCL Notes-Clients gefeiert und bietet zahlreiche Vorteile, wie die Beseitigung des Bedarfs an Paketierung, Verteilung und Installation. Nomad Web-Client-Updates werden “automatisch” im Hintergrund installiert, was den administrativen Aufwand im Vergleich zu traditionellen HCL Notes-Clients erheblich reduziert. Allerdings stellt die Fehlerbehebung in Nomad Web im Vergleich zum Notes-Client einzigartige Herausforderungen dar.
Begleiten Sie Christoph und Marc, während sie demonstrieren, wie der Fehlerbehebungsprozess in HCL Nomad Web vereinfacht werden kann, um eine reibungslose und effiziente Benutzererfahrung zu gewährleisten.
In diesem Webinar werden wir effektive Strategien zur Diagnose und Lösung häufiger Probleme in HCL Nomad Web untersuchen, einschließlich
- Zugriff auf die Konsole
- Auffinden und Interpretieren von Protokolldateien
- Zugriff auf den Datenordner im Cache des Browsers (unter Verwendung von OPFS)
- Verständnis der Unterschiede zwischen Einzel- und Mehrbenutzerszenarien
- Nutzung der Client Clocking-Funktion
Spark is a powerhouse for large datasets, but when it comes to smaller data workloads, its overhead can sometimes slow things down. What if you could achieve high performance and efficiency without the need for Spark?
At S&P Global Commodity Insights, having a complete view of global energy and commodities markets enables customers to make data-driven decisions with confidence and create long-term, sustainable value. 🌍
Explore delta-rs + CDC and how these open-source innovations power lightweight, high-performance data applications beyond Spark! 🚀
Dev Dives: Automate and orchestrate your processes with UiPath MaestroUiPathCommunity
This session is designed to equip developers with the skills needed to build mission-critical, end-to-end processes that seamlessly orchestrate agents, people, and robots.
📕 Here's what you can expect:
- Modeling: Build end-to-end processes using BPMN.
- Implementing: Integrate agentic tasks, RPA, APIs, and advanced decisioning into processes.
- Operating: Control process instances with rewind, replay, pause, and stop functions.
- Monitoring: Use dashboards and embedded analytics for real-time insights into process instances.
This webinar is a must-attend for developers looking to enhance their agentic automation skills and orchestrate robust, mission-critical processes.
👨🏫 Speaker:
Andrei Vintila, Principal Product Manager @UiPath
This session streamed live on April 29, 2025, 16:00 CET.
Check out all our upcoming Dev Dives sessions at https://community.uipath.com/dev-dives-automation-developer-2025/.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
Big Data Analytics Quick Research Guide by Arthur MorganArthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Vibe Coding_ Develop a web application using AI (1).pdfBaiju Muthukadan
H2O - the optimized HTTP server
1. H2O - the optimized HTTP server
DeNA Co., Ltd.
Kazuho Oku
Copyright (C) 2014 DeNA Co., Ltd. All Rights Reserved.
2. Who am I?
- long experience in network-related / high-performance programming
- works in the field:
  - Palmscape / Xiino
    - world's first web browser for Palm OS, bundled by Sony, IBM, NTT DoCoMo
  - MySQL extensions: Q4M, mycached, …
    - MySQL Conference Community Awards (as DeNA)
  - JSX
    - altJS with an optimizing compiler
3. Agenda
- Introduction of H2O
- The motives behind
- Writing a fast server
- Writing H2O modules
- Current status and the future
- Questions regarding HTTP/2
4. Introducing H2O
5. H2O – the umbrella project
- h2o – the standalone HTTP server
  - libh2o – can be used as a library as well
- picohttpparser – the HTTP/1 parser
- picotest – TAP-compatible testing library
- qrintf – C preprocessor for optimizing s(n)printf
- yoml – DOM-like wrapper for libyaml
github.com/h2o
6. h2o
- the standalone HTTP server
- protocols:
  - HTTP/1.x
  - HTTP/2
    - via Upgrade, NPN, ALPN, direct
  - WebSocket (uses wslay)
  - with SSL support (uses OpenSSL)
- modules:
  - file (static files), reverse-proxy, reproxy, deflate
- configuration using YAML
7. libh2o
- h2o is also available as a library
- event loop can be selected
  - libuv
  - h2o's embedded event loop
- configurable via API and/or YAML
  - the dependency on libyaml is optional
9. Testing
- two levels of testing for better quality
  - essential for keeping the protocol implementations and the module-level API apart
- unit-testing
  - every module has (can have) its own unit-test
  - tests run using the loopback protocol handler
    - module-level unit-tests do not depend on the protocol
- end-to-end testing
  - spawns the server and connects via the network
  - uses nghttp2
10. Internals
- uses h2o_buf_t (a pair of [char *, size_t]) to represent data
  - common header names are interned into tokens
    - those defined in the HPACK static_table, plus a few more
- mostly zero-copy
- incoming data allocated using: malloc, realloc, mmap
  - requires a 64-bit arch for heavy use
- uses writev for sending data
11. Fast
[bar chart: requests/second per core, nginx-1.7.7 vs h2o, serving 6-byte, 1,024-byte and 10,240-byte files over HTTP/1 (local; osx), HTTP/1 (local; linux), HTTP/1 (remote; linux) and HTTPS/1 (remote; linux); y-axis from 0 to 120,000]
Note: used MacBook Pro Early 2014 (Core [email protected]), Amazon EC2 cc2.8xlarge, no logging
12. Why is it fast? Why should it be fast?
13. It all started with PSGI/Plack
- PSGI/Plack is the WSGI/Rack for Perl
- on Sep 7th 2010:
  - first commit to github.com/plack/Plack
  - I asked: why ever use FastCGI?
    - at the time, HTTP was believed to be slow, and FastCGI was considered necessary
  - the other choice was to use Apache+mod_perl
  - I proposed:
    - write a fast HTTP parser in C, and use it from Perl
    - get rid of specialized protocols / tightly-coupled legacy servers
      - for ease of dev., deploy., admin.
14. So I wrote HTTP::Parser::XS and picohttpparser.
15. How fast is picohttpparser?
- 10x faster than http-parser according to a third-party benchmark
  - github.com/fukamachi/fast-http
[bar chart: HTTP parser performance comparison, requests/second: http-parser@5fd51fd 329,033 vs picohttpparser@56975cd 3,162,745]
16. HTTP::Parser::XS
- the de-facto HTTP parser used by PSGI/Plack
  - PSGI/Plack is the WSGI/Rack for Perl
- modern Perl-based services rarely use FastCGI or mod_perl
- the application servers in use (Starlet, Starman, etc.) speak HTTP using HTTP::Parser::XS
  - application servers can be, and in fact are, written in Perl, since the slow part is handled by HTTP::Parser::XS
- picohttpparser is the C-based backend of HTTP::Parser::XS
17. The lessons learned
- using one protocol (HTTP) everywhere reduces the TCO
  - easier to develop, debug, test, monitor, administer
  - popular protocols tend to be better designed and implemented, thanks to the competition
- a similar transition happens everywhere
  - WAP has been driven out by HTTP and HTML
  - we rarely use FTP these days
18. but HTTP is not yet used everywhere
- web browsers
  - HTTP/1 is used now, transitioning to HTTP/2
- SOA / microservices
  - HTTP/1 is used now
    - harder to transition to HTTP/2 since many programming languages use blocking I/O
  - other protocols coexist: RDBMS, memcached, …
    - are they the next target of HTTP (like FastCGI was)?
- IoT
  - MQTT is emerging
19. So I decided to write H2O
- in July 2014
- life becomes easier for developers if all the services use HTTP
- but for that purpose, it seems we need to raise the bar (of performance)
  - or other protocols may emerge / continue to be used
- now (at the time of transition to HTTP/2) might be a good moment to start a performance race between HTTP implementers
20. Writing a fast server
21. Two things to be aware of
- characteristics of a fast program:
  1. executes fewer instructions
     - speed is a result of simplicity, not complexity
  2. causes fewer pipeline hazards
     - minimum number of conditional branches / indirect calls
     - use branch-predictor-friendly logic
       - e.g. a conditional branch exists, but it is taken 95% of the time
22. H2O - design principles
- do it right
  - local bottlenecks can be fixed afterwards
  - large-scale design issues are hard to notice / fix
- do it simple
  - as explained
  - provide / use hooks only at a high level
    - hooks exist for: protocol, generator, filter, logger
23. The performance pitfalls
- many server implementations spend CPU cycles in the following areas:
  - memory allocation
  - parsing input
  - stringifying output and logs
  - timeout handling
24. Memory allocation
25. Memory allocation in H2O
- uses region-based memory management
  - like the memory pool of Apache
- strategy:
  - a memory block is assigned to the Request object
  - small allocations return portions of the block
  - memory is never returned to the block
  - the entire memory block gets freed when the Request object is destroyed
26. Memory allocation in H2O (cont'd)
- malloc (of small chunks)

    void *h2o_mempool_alloc(h2o_mempool_t *pool, size_t sz)
    {
        (snip)
        void *ret = pool->chunks->bytes + pool->chunks->offset;
        pool->chunks->offset += sz;
        return ret;
    }

- free
  - no code (as explained)
27. Parsing input
28. Parsing input
- an HTTP/1 request parser may or may not be a bottleneck, depending on its performance
  - if the parser is capable of handling 1M reqs/sec, then it will spend 10% of the time when the server handles 100K reqs/sec
[bar chart: HTTP/1 parser performance comparison, requests/second: http-parser@5fd51fd 329,033 vs picohttpparser@56975cd 3,162,745]
29. Parsing input (cont'd)
- it's good to know the logical upper bound
  - or we might try to optimize something that cannot get any faster
- Q. How fast could a text parser be?
30. Q. How fast could a text parser be?
- Answer: around 1GB/sec is a good target
  - since any parser needs to read every byte and execute a conditional branch depending on the value
    - # of instructions: 1 load + 1 inc + 1 test + 1 conditional branch
    - would likely take several CPU cycles (even if superscalar)
    - unless we use SIMD instructions
31. Parsing input
- What's wrong with this parser?

    for (; s != end; ++s) {
        int ch = *s;
        switch (ctx.state) {
        case AAA:
            if (ch == ' ')
                ctx.state = BBB;
            break;
        case BBB:
            ...
    }
32. Parsing input (cont'd)
- never write a character-level state machine if performance matters

    for (; s != end; ++s) {
        int ch = *s;
        switch (ctx.state) {   /* executed for every char! */
        case AAA:
            if (ch == ' ')
                ctx.state = BBB;
            break;
        case BBB:
            ...
    }
33. Parsing input fast
- each state should consume a sequence of bytes

    while (s != end) {
        switch (ctx.state) {
        case AAA:
            do {
                if (*s++ == ' ') {
                    ctx.state = BBB;
                    break;
                }
            } while (s != end);
            break;
        case BBB:
            ...
34. Stateless parsing
- stateless in the sense that no state variable exists
  - stateless parsers are generally faster than stateful parsers, since there is no state: a variable used for a conditional branch
- HTTP/1 parsing can be stateless since the request-line and the headers arrive in a single packet (in most cases)
  - and even if they do not, it is easy to check whether the end-of-headers has arrived (by looking for CR-LF-CR-LF) and then parse the input
    - this countermeasure is essential to handle the Slowloris attack
35. picohttpparser is stateless
- states are the execution contexts (instead of being a variable)

    const char *parse_request(const char *buf, const char *buf_end, …)
    {
        /* parse request line */
        ADVANCE_TOKEN(*method, *method_len);
        ++buf;
        ADVANCE_TOKEN(*path, *path_len);
        ++buf;
        if ((buf = parse_http_version(buf, buf_end, minor_version, ret)) == NULL)
            return NULL;
        EXPECT_CHAR('\015');
        EXPECT_CHAR('\012');
        return parse_headers(buf, buf_end, headers, num_headers, max_headers, …);
    }
36. loop exists within a function (≒ state)
- the code looks for the end of the header value

    #define IS_PRINTABLE(c) ((unsigned char)(c) - 040u < 0137u)

    static const char *get_token_to_eol(const char *buf, const char *buf_end, …)
    {
        while (likely(buf_end - buf >= 8)) {
    #define DOIT() if (unlikely(!IS_PRINTABLE(*buf))) goto NonPrintable; ++buf
            DOIT(); DOIT(); DOIT(); DOIT();
            DOIT(); DOIT(); DOIT(); DOIT();
    #undef DOIT
            continue;
        NonPrintable:
            if ((likely((unsigned char)*buf < '\040') && likely(*buf != '\011'))
                || unlikely(*buf == '\177'))
                goto FOUND_CTL;
        }
37. The hottest loop of picohttpparser (cont'd)
- after compilation, uses 4 instructions per char

    movzbl (%r9), %r11d
    movl %r11d, %eax
    addl $-32, %eax
    cmpl $94, %eax
    ja LBB5_5
    movzbl 1(%r9), %r11d   // load char
    leal -32(%r11), %eax   // subtract
    cmpl $94, %eax         // and check if it is printable
    ja LBB5_4              // if not, break
    movzbl 2(%r9), %r11d   // load next char
    leal -32(%r11), %eax   // subtract
    cmpl $94, %eax         // and check if it is printable
    ja LBB5_15             // if not, break
    movzbl 3(%r9), %r11d   // load next char
    …
38. strlen vs. picohttpparser
- not as fast as strlen, but close

    size_t strlen(const char *s) {
        const char *p = s;
        for (; *p != '\0'; ++p)
            ;
        return p - s;
    }

- not much room left for further optimization (without using SIMD insns.)
[bar chart: bytes/clock, strlen (simple) vs picohttpparser@56975cd, y-axis from 0.00 to 0.90]
39. picohttpparser is small and simple

    $ wc picohttpparser.?
         376  1376 10900 picohttpparser.c
          62   333  2225 picohttpparser.h
         438  1709 13125 total
    $

- a good example of the do-it-simple-for-speed approach
  - H2O (incl. the HTTP/2 parser) is designed using the same approach
40. Stringification
41. Copyright
(C)
2014
DeNA
Co.,Ltd.
All
Rights
Reserved.
Stringification
n HTTP/1 responses are in strings
sprintf(buf, HTTP/1.%d %d %srn, …)!
n s(n)printf is known to be slow
⁃ but the interface is great
⁃ it's tiresome to write like:
p = strappend_s(p, "HTTP/1.");
p = strappend_n(p, minor_version);
*p++ = ' ';
p = strappend_n(p, status);
*p++ = ' ';
p = strappend_s(p, reason);
p = strappend_s(p, "\r\n");
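To make the tiresome-but-fast style concrete, here is a minimal sketch of helpers with the names used on the slide (hypothetical implementations, not H2O's actual API):

```c
#include <string.h>

/* append a C string, return the new end-of-buffer pointer */
static char *strappend_s(char *p, const char *s)
{
    size_t l = strlen(s);
    memcpy(p, s, l);
    return p + l;
}

/* append a non-negative integer in decimal */
static char *strappend_n(char *p, unsigned v)
{
    char tmp[10];
    char *t = tmp + sizeof(tmp);
    do {
        *--t = '0' + v % 10;
        v /= 10;
    } while (v != 0);
    memcpy(p, t, (size_t)(tmp + sizeof(tmp) - t));
    return p + (tmp + sizeof(tmp) - t);
}

/* build a status line as in the slide; returns its length */
static size_t build_status_line(char *buf, unsigned minor, unsigned status,
                                const char *reason)
{
    char *p = buf;
    p = strappend_s(p, "HTTP/1.");
    p = strappend_n(p, minor);
    *p++ = ' ';
    p = strappend_n(p, status);
    *p++ = ' ';
    p = strappend_s(p, reason);
    p = strappend_s(p, "\r\n");
    *p = '\0';
    return (size_t)(p - buf);
}
```

Each helper does one fixed job with no format parsing, which is exactly the property that makes this style fast and tiresome at the same time.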
42. Stringification (cont'd)
n stringification is important for HTTP/2 servers too
⁃ many elements still need to be stringified
• headers (status, date, last-modified, etag, …)
• access log (IP address, date, # of bytes, …)
43. Why is s(n)printf slow?
n it's a state machine
⁃ interprets the format string (e.g. "hello: %s") at runtime
n it uses the locale
⁃ not for all types of variables, but…
n it uses varargs
n it's complicated
⁃ sprintf may parse a number when used for
stringifying a number
sprintf(buf, %11d, status)!
44. How should we optimize s(n)printf?
n by compiling the format string at compile-time
⁃ instead of interpreting it at runtime
⁃ possible, since the supplied format string is almost always a string literal
n and that's qrintf
45. qrintf
n qrintf is a preprocessor that rewrites s(n)printf invocations into a set of function calls specialized for each format string
n qrintf-gcc is a wrapper of GCC that
⁃ first applies the GCC preprocessor
⁃ then applies the qrintf preprocessor
⁃ then calls the GCC compiler
n a similar wrapper could be implemented for Clang
⁃ but it's a bit harder
⁃ help wanted!
46. Example
// original code (248 nanoseconds)
snprintf(buf, sizeof(buf), "%u.%u.%u.%u",
         (addr >> 24) & 0xff, (addr >> 16) & 0xff, (addr >> 8) & 0xff, addr & 0xff);

// after being preprocessed by qrintf (21.5 nanoseconds)
_qrintf_chk_finalize(
    _qrintf_chk_u(_qrintf_chk_c(
        _qrintf_chk_u(_qrintf_chk_c(
            _qrintf_chk_u(_qrintf_chk_c(
                _qrintf_chk_u(
                    _qrintf_chk_init(buf, sizeof(buf)), (addr >> 24) & 0xff),
                '.'), (addr >> 16) & 0xff),
            '.'), (addr >> 8) & 0xff),
        '.'), addr & 0xff));
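The win comes purely from specialization: once the format string is known, each conversion collapses to a few stores. A self-contained sketch of what such specialized helpers boil down to (illustrative; not qrintf's actual emitted functions, which also carry bounds-checking state):

```c
#include <stdint.h>
#include <stddef.h>

/* append a value in 0..255 in decimal, no format parsing, no locale */
static char *append_u8(char *p, unsigned v)
{
    if (v >= 100) {
        *p++ = '0' + v / 100;
        *p++ = '0' + v / 10 % 10;
    } else if (v >= 10) {
        *p++ = '0' + v / 10;
    }
    *p++ = '0' + v % 10;
    return p;
}

/* hand-specialized equivalent of snprintf(buf, n, "%u.%u.%u.%u", ...) */
static size_t ipv4_to_str(char *buf, uint32_t addr)
{
    char *p = buf;
    p = append_u8(p, addr >> 24 & 0xff);
    *p++ = '.';
    p = append_u8(p, addr >> 16 & 0xff);
    *p++ = '.';
    p = append_u8(p, addr >> 8 & 0xff);
    *p++ = '.';
    p = append_u8(p, addr & 0xff);
    *p = '\0';
    return (size_t)(p - buf);
}
```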
47. Performance impact on H2O
n 20% performance gain
⁃ gcc: 82,900 reqs/sec.
⁃ qrintf-gcc: 99,200 reqs/sec.
n benchmark condition:
⁃ 6-byte file GET over HTTP/1.1
⁃ access logging to /dev/null
48. Timeout handling
49. Timeout handling by the event loops
n most event loops use balanced trees to handle timeouts
⁃ so that timeout events can be triggered fast
⁃ the downside is that it takes time to set a timeout
n in the case of HTTP, a timeout should be set at least once per request
⁃ otherwise the server cannot close a stale connection
50. Timeout requirements of an HTTP server
n set much more often than triggered
⁃ a timeout is set more than once per request
⁃ most requests succeed before the timeout
n the timeout values are uniform
⁃ e.g. the request timeout would be the same for every connection (likewise the i/o timeout, etc.)
n a balanced tree does not seem like a good approach
⁃ any other choice?
51. Use pre-sorted linked lists
n H2O maintains a linked list for each timeout configuration
⁃ the request timeout has its own linked list, the i/o timeout has its own, …
n how to set a timeout:
⁃ the timeout entry is inserted at the end of the linked list
• thus the list is naturally sorted
n how timeouts get triggered:
⁃ H2O iterates from the start of each linked list, and triggers those that have timed out
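The scheme above can be sketched in a few lines of C (names and struct layout are illustrative, not H2O's actual types): because every entry in one list shares the same timeout value, appending at the tail keeps the list sorted by expiry, so set and clear are O(1) and trigger stops at the first unexpired entry.

```c
#include <stddef.h>

typedef struct timeout_entry {
    unsigned long registered_at;
    struct timeout_entry *prev, *next;
} timeout_entry;

typedef struct {
    unsigned long timeout_ms; /* same for every entry in this list */
    timeout_entry head;       /* sentinel node of a circular list */
} timeout_list;

static void timeout_list_init(timeout_list *l, unsigned long timeout_ms)
{
    l->timeout_ms = timeout_ms;
    l->head.prev = l->head.next = &l->head;
}

/* O(1): append at the tail; the list stays sorted by expiry */
static void timeout_set(timeout_list *l, timeout_entry *e, unsigned long now)
{
    e->registered_at = now;
    e->prev = l->head.prev;
    e->next = &l->head;
    e->prev->next = e;
    l->head.prev = e;
}

/* O(1): unlink when the request completes before the deadline */
static void timeout_clear(timeout_entry *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
}

/* scan from the head; stop at the first entry that has not expired */
static size_t timeout_trigger(timeout_list *l, unsigned long now)
{
    size_t fired = 0;
    while (l->head.next != &l->head &&
           now - l->head.next->registered_at >= l->timeout_ms) {
        timeout_clear(l->head.next); /* a real server would invoke a callback here */
        ++fired;
    }
    return fired;
}
```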
52. Comparison Chart
Operation (frequency in HTTPD)   Balanced tree   List of linked lists
set (high)                       O(log N)        O(1)
clear (high)                     O(log N)        O(1)
trigger (low)                    O(1)            O(M)
note: N: number of timeout entries, M: number of timeout configurations;
trigger performance of the list of linked lists can be reduced to O(1)
53. Miscellaneous
54. Miscellaneous
n the entire stack of H2O is carefully designed (for simplicity and for performance)
⁃ for example, the built-in event loop of H2O (the default) is faster than libuv
[bar chart "Benchmark: libuv vs. internal" — requests/sec./core by size of content (6 bytes, 4,096 bytes); series: libuv-network-and-file@7876f53, libuv-network-only@da85742, internal (master@a5d1105)]
55. Writing H2O modules
56. Module types of H2O
n handler
⁃ generates the contents
• e.g. file handler, proxy handler
n filter
⁃ modifies the content
• e.g. chunked encoder, deflate
⁃ can be chained
n logger
57. Writing a hello world handler
static int on_req(h2o_handler_t *self, h2o_req_t *req) {
    static h2o_generator_t generator = {};
    static h2o_buf_t body = H2O_STRLIT("hello world\n");
    if (! h2o_memis(req->method.base, req->method.len, H2O_STRLIT("GET")))
        return -1;
    req->res.status = 200;
    req->res.reason = "OK";
    h2o_add_header(&req->pool, &req->res.headers, H2O_TOKEN_CONTENT_TYPE,
                   H2O_STRLIT("text/plain"));
    h2o_start_response(req, &generator);
    h2o_send(req, &body, 1, 1);
    return 0;
}

h2o_handler_t *handler = h2o_create_handler(host_config, sizeof(*handler));
handler->on_req = on_req;
58. The handler API
/**
 * called by handlers to set the generator
 * @param req the request
 * @param generator the generator
 */
void h2o_start_response(h2o_req_t *req, h2o_generator_t *generator);
/**
 * called by the generators to send output
 * note: the generator should close the resources opened by itself after
 * sending the final chunk (i.e. calling the function with is_final set to true)
 * @param req the request
 * @param bufs an array of buffers
 * @param bufcnt length of the buffers array
 * @param is_final if the output is final
 */
void h2o_send(h2o_req_t *req, h2o_buf_t *bufs, size_t bufcnt, int is_final);
59. The handler API (cont'd)
/**
 * an object that generates a response.
 * The object is typically constructed by handlers that call h2o_start_response.
 */
typedef struct st_h2o_generator_t {
    /**
     * called by the core to request new data to be pushed via h2o_send
     */
    void (*proceed)(struct st_h2o_generator_t *self, h2o_req_t *req);
    /**
     * called by the core when there is a need to terminate the response
     */
    void (*stop)(struct st_h2o_generator_t *self, h2o_req_t *req);
} h2o_generator_t;
60. Module examples
n Simple examples exist in the examples/ dir
n lib/chunked.c is a good example of the filter API
61. Current Status and the Future
62. Development Status
n core
⁃ mostly feature complete
n protocol
⁃ http/1 – mostly feature complete
⁃ http/2 – interoperable
n modules
⁃ file – complete
⁃ proxy – interoperable
• name resolution is blocking
• does not support keep-alive
63. HTTP/2 status of H2O
n interoperable, but some parts are missing
⁃ HPACK table resize
⁃ priority handling
n priority handling is essential for HTTP/2
⁃ without it, HTTP/2 is slower than HTTP/1 :(
n need to tweak performance
⁃ SSL-related code is not yet optimized
• the first benchmark was taken last Saturday :)
64. HTTP/2 over TLS benchmark
n need to fix the performance drop, likely caused by:
⁃ H2O uses writev to gather data into a single socket op., but OpenSSL does not provide scatter-gather I/O
[bar chart "HTTPS/2 (remote; linux)" — requests/sec. by size of content (6, 1,024, 10,240 bytes); series: nghttpd, h2o]
⁃ in H2O, every file handler has its own buffer and pushes content to the protocol layer
• nghttpd pulls instead, which is more memory-efficient / no need
65. Goal of the project
n to become the best HTTP/2 server
⁃ with excellent performance in serving static files / as a reverse proxy
• note: picohttpparser and other libraries are also used in the reverse proxy implementation
n to become the favored HTTP server library
⁃ esp. for server products
⁃ to widen the acceptance of the HTTP protocol even more
66. Help wanted
n looking for contributors in all areas
⁃ adding modules might be the easiest, since it would not interfere with the development of the core / protocol layer
⁃ examples, docs, and tests are also welcome
n it's easy to start
⁃ since the code-base is young and simple
Subsystem                        wc -l (incl. unit-tests)
Core                             2,334
Library                          1,856
Socket / event loop              1,771
HTTP/1 (incl. picohttpparser)    886
HTTP/2                           2,507
Modules                          1,906
Server                           573
67. Questions regarding HTTP/2
68. Sorry, I do not have much to talk about
n since it is a well-designed protocol
n and in terms of performance, binary protocols are apparently easier to implement than a text protocol :)
⁃ there's an efficient algorithm for the static Huffman decoder
• @tatsuhiro-t implemented it, I copied
n OTOH I have some questions re HTTP/2
69. Q. would there be a max-open-files issue?
n according to the draft, the recommended value of MAX_CONCURRENT_STREAMS is >= 100
n if max-connections is 1024, it would mean that the max fd count would be above 10k
⁃ on linux, the default (NR_OPEN) is 1,048,576 and is adjustable
⁃ but on other OSes?
n H2O by default limits the number of in-flight requests internally to 16
⁃ the value is configurable
70. Q. good way to determine the window size?
n the initial window size (64k) might be too small to saturate the available bandwidth, depending on the latency
⁃ but for responsiveness we would not want the value to be too high
⁃ is there any recommendation on how we should tune the variable?
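The tension can be made concrete with a bandwidth-delay-product calculation (illustrative numbers, not from the talk): the BDP is the minimum amount of data in flight needed to keep the pipe full, and conversely a fixed window caps throughput at window / RTT.

```c
#include <stdint.h>

/* minimum window (bytes) needed to saturate a link: rate * RTT */
static uint64_t bdp_bytes(uint64_t bits_per_sec, uint64_t rtt_ms)
{
    return bits_per_sec / 8 * rtt_ms / 1000;
}

/* throughput ceiling (bits/sec) imposed by a fixed window: window / RTT */
static uint64_t max_throughput_bps(uint64_t window_bytes, uint64_t rtt_ms)
{
    return window_bytes * 8 * 1000 / rtt_ms;
}
```

For example, at an assumed 100 Mbps and 100 ms RTT the pipe needs about 1.25 MB in flight, while a 64 KB window caps throughput at roughly 5 Mbps.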
71. Q. should we continue to use CDN?
n HTTP/2 has priority control
⁃ a CDN and the primary website would use different TCP connections
• which means that priority control would not work bet. the CDN and the primary website
n would it be better to serve all the asset files from the primary website?
72. Never hide the Server header
n name and version info. is essential for interoperability
⁃ many (if not all) webapps use the User-Agent value to evade bugs
⁃ the same used to be done at the HTTP/1 layer in the early days
n there will be interoperability problems bet. HTTP/2 impls.
⁃ the Server header is essential for implementing workarounds
n some believe that hiding the header improves security
⁃ we should point out that they are wrong; security-by-obscurity does not work on the Net, and hiding the value harms interoperability and the adoption of HTTP/2
73. Summary
74. Summary
n H2O is an optimized HTTP server implementation
⁃ with a neat design that supports both HTTP/1 and HTTP/2
⁃ it is still very young
• lots of areas to work on!
• incl. improving the HTTP/2 support
n help wanted! Let's write the HTTPD of the future!