This document discusses exactly-once semantics in Apache Kafka 0.11. It provides an overview of how Kafka achieves exactly-once delivery between producers and consumers. Key points include:
- Kafka 0.11 introduced exactly-once semantics, with changes to support transactions and deduplication.
- Producers can write in a transactional fashion and receive acknowledgments of committed writes from brokers.
- Brokers store commit markers to track the progress of transactions and ensure no data loss during failures.
- Consumers can read from brokers in a transactional mode and receive data only from committed transactions, guaranteeing no duplication of records.
- This allows reliable, exactly-once message delivery between producers and consumers, with Kafka acting as the transaction coordinator between them.
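For concreteness, the sketch below (not part of the original deck) shows how these pieces fit together using the confluent_kafka Python client: a producer configured with a transactional.id commits its writes atomically, and a consumer with isolation.level=read_committed sees only data from committed transactions. The broker address, topic name, and IDs are placeholder values.

```python
# Minimal sketch of transactional produce/consume with confluent_kafka.
# Broker address, topic, and IDs below are placeholders.
from confluent_kafka import Producer, Consumer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "transactional.id": "demo-producer-1",   # enables transactions and idempotence
})
producer.init_transactions()                  # registers with the transaction coordinator

producer.begin_transaction()
producer.produce("output-topic", key="k1", value="v1")
producer.produce("output-topic", key="k2", value="v2")
producer.commit_transaction()                 # broker writes a commit marker

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "demo-group",
    "isolation.level": "read_committed",      # skip records from aborted transactions
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["output-topic"])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```

If the producer fails before commit_transaction(), the read_committed consumer never sees the partial writes, which is the core of the exactly-once guarantee described above.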
This material was presented on 2017/9/7 at db tech showcase Tokyo 2017 (JPOUG in 15 minutes).
Processing delays caused by issuing large numbers of SQL statements are a common performance problem in mission-critical systems.
Speedups are possible if the SQL statements can be issued in batches or the degree of concurrency can be increased. However...
Because these performance problems stem from application design, they are often hard to address in the late stages of development.
This session explains, using Oracle as an example, what options for improvement are available in such situations; a small illustration of the batching idea follows below.
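As a concrete illustration of batching SQL to cut round trips (this snippet is not from the slides), the sketch below uses the cx_Oracle driver; the connection details and the orders table are hypothetical.

```python
# Hypothetical sketch: reducing round trips by batching inserts with cx_Oracle.
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb")  # placeholder credentials/DSN
cur = conn.cursor()

rows = [(i, f"item-{i}") for i in range(10_000)]

# Slow: one network round trip (and one execute) per row.
# for r in rows:
#     cur.execute("INSERT INTO orders (id, name) VALUES (:1, :2)", r)

# Faster: a single executemany() sends the rows to the server in bulk.
cur.executemany("INSERT INTO orders (id, name) VALUES (:1, :2)", rows)
conn.commit()
```

The point of the talk is that this kind of change touches application design, which is exactly why it becomes hard to apply late in a project.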
Beyond EXPLAIN: Query Optimization From Theory To Code (Yuto Hayamizu)
EXPLAIN has been explained more than enough; let's go "beyond EXPLAIN".
This talk takes you on a backstage tour of the optimizer, from the theoretical background of state-of-the-art query optimization to a close look at the current implementation in PostgreSQL.
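As a point of reference before going "beyond" it (this snippet is not from the talk), EXPLAIN itself can be driven from any client. A minimal psycopg2 sketch, against a hypothetical database, that prints the plan PostgreSQL chose along with actual execution statistics:

```python
import psycopg2

conn = psycopg2.connect("dbname=testdb")   # hypothetical database
cur = conn.cursor()

# Ask the planner what it intends to do, and what it actually did, for a query.
cur.execute("EXPLAIN ANALYZE SELECT count(*) FROM pg_class WHERE relkind = 'r'")
for (line,) in cur.fetchall():
    print(line)

conn.close()
```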
Tensor Decomposition and its Applications (Keisuke OTAKI)
This document discusses tensor decompositions and their applications in data mining. It introduces tensors as multi-dimensional arrays and covers 2nd-order tensors (matrices) and 3rd-order tensors. It describes how decompositions such as the Tucker model and the CANDECOMP/PARAFAC (CP) model break a tensor into a small core and factor matrices (or a sum of rank-one components) whose structure can be interpreted. It also discusses singular value decomposition (SVD) as a way to decompose matrices and reduce dimensionality while approximating the original matrix.
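As a small illustration of the SVD-based dimension reduction mentioned above (not taken from the slides), the following NumPy sketch computes the best rank-k approximation of a random matrix; the matrix size and rank are arbitrary.

```python
# Sketch: low-rank approximation of a matrix (a 2nd-order tensor) via SVD.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 40))           # example matrix; dimensions are arbitrary

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 5                                     # keep only the k largest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A_k is the best rank-k approximation of A in the Frobenius norm; this is the
# dimension-reduction idea that Tucker/CP generalize to higher-order tensors.
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))
```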