Estimating Lead-Lag Effects in High-Frequency Trading Data Using Dynamic Time Warping
Katsuya Ito
This paper investigates lead-lag relationships in high-frequency data. We propose Multinomial Dynamic Time Warping (MDTW), which handles non-synchronous observations, vast amounts of data, and time-varying lead-lags. MDTW estimates lead-lags directly, without enumerating lag candidates: its computational complexity is linear in the number of observations and independent of the number of lag candidates. Experiments on artificial and market data illustrate the effectiveness of our method compared with existing methods.
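MDTW's construction is not spelled out in this summary, so the following is only a hedged illustration of the general idea: estimating a lead-lag from the warping path of classic dynamic time warping. Unlike MDTW, this sketch is quadratic in the number of observations and assumes two synchronously sampled series; the function name and sign convention are illustrative.

```python
import numpy as np

def dtw_lead_lag(x, y):
    """Classic O(n*m) DTW between two series; the mean horizontal
    offset along the optimal warping path serves as a crude
    lead-lag estimate (positive: x leads y)."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack the optimal alignment path from (n, m) to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(np.mean([jj - ii for ii, jj in path]))

# If y echoes x with a 5-tick delay, the estimate should be close to 5.
rng = np.random.default_rng(0)
x = rng.standard_normal(200).cumsum()
y = np.roll(x, 5)
print(dtw_lead_lag(x, y))
```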
This document summarizes recent developments in action recognition using deep learning techniques. It discusses early approaches using improved dense trajectories and two-stream convolutional neural networks. It then focuses on advances using 3D convolutional networks, enabled by large video datasets like Kinetics. State-of-the-art results are achieved using inflated 3D convolutional networks and temporal aggregation methods like temporal linear encoding. The document provides an overview of popular datasets and challenges and concludes with tips on training models at scale.
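As a side note on the inflation trick mentioned above: I3D initializes a 3D convolution by repeating pretrained 2D kernels along the new temporal axis and rescaling, so the network initially behaves like the 2D model on a static ("boring") video. A minimal NumPy sketch of that idea (not the summarized slides' code; shapes follow the (out, in, kH, kW) convention):

```python
import numpy as np

def inflate_conv_weight(w2d, t):
    """Inflate a 2D conv kernel (out_c, in_c, kh, kw) into a 3D kernel
    (out_c, in_c, t, kh, kw): repeat it t times along the new temporal
    axis and divide by t, so a static video yields the 2D activations."""
    w3d = np.repeat(w2d[:, :, None, :, :], t, axis=2)
    return w3d / t

w2d = np.random.randn(64, 3, 7, 7)
print(inflate_conv_weight(w2d, 7).shape)  # (64, 3, 7, 7, 7)
```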
The document summarizes a research paper that compares the performance of MLP-based models to Transformer-based models on various natural language processing and computer vision tasks. The key points are:
1. Gated MLP (gMLP) architectures can achieve performance comparable to Transformers on most tasks, demonstrating that attention mechanisms may not be strictly necessary.
2. However, attention still provides benefits for some NLP tasks, as models combining gMLP and attention outperformed pure gMLP models on certain benchmarks.
3. For computer vision, gMLP achieved results close to Vision Transformers and CNNs on image classification, indicating that gMLP can match their data efficiency; a sketch of the core spatial gating unit follows this list.
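For concreteness, here is a hedged PyTorch sketch of the gMLP block and its spatial gating unit, written from the paper's description ("Pay Attention to MLPs"); the dimension names and near-identity initialization follow the paper, but this is an illustrative reimplementation, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGatingUnit(nn.Module):
    """Split channels in two; gate one half with a learned linear
    projection of the other half across the sequence (token) axis."""
    def __init__(self, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        self.proj = nn.Linear(seq_len, seq_len)
        nn.init.zeros_(self.proj.weight)  # near-identity gate at initialization
        nn.init.ones_(self.proj.bias)

    def forward(self, x):                 # x: (batch, seq, d_ffn)
        u, v = x.chunk(2, dim=-1)
        v = self.proj(self.norm(v).transpose(1, 2)).transpose(1, 2)
        return u * v

class GMLPBlock(nn.Module):
    def __init__(self, d_model, d_ffn, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.fc_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.fc_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):                 # residual block, no attention
        return x + self.fc_out(self.sgu(F.gelu(self.fc_in(self.norm(x)))))

x = torch.randn(2, 16, 64)
print(GMLPBlock(64, 256, 16)(x).shape)    # torch.Size([2, 16, 64])
```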
20220617_You_Only_Look_Once_Series.pdf
You Only Look Once: Unified, Real-Time Object Detection
https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Redmon_You_Only_Look_CVPR_2016_paper.html
YOLO9000: Better, Faster, Stronger
https://openaccess.thecvf.com/content_cvpr_2017/html/Redmon_YOLO9000_Better_Faster_CVPR_2017_paper.html
YOLOv3: An Incremental Improvement
https://arxiv.org/abs/1804.02767
YOLOv4: Optimal Speed and Accuracy of Object Detection
https://arxiv.org/abs/2004.10934
YOLOv5
https://github.com/ultralytics/yolov5
YOLOX: Exceeding YOLO Series in 2021
https://arxiv.org/abs/2107.08430
You Only Look One-Level Feature
https://openaccess.thecvf.com/content/CVPR2021/html/Chen_You_Only_Look_One-Level_Feature_CVPR_2021_paper.html
Watch Only Once: An End-to-End Video Action Detection Framework
https://openaccess.thecvf.com/content/ICCV2021/html/Chen_Watch_Only_Once_An_End-to-End_Video_Action_Detection_Framework_ICCV_2021_paper.html
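To make the list concrete, a hedged quickstart for the YOLOv5 repository linked above, using the torch.hub entry point its README documents (the image path is a placeholder, and the API may change between releases):

```python
import torch

# Downloads the pretrained small model from ultralytics/yolov5 on first use.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('image.jpg')      # placeholder path; URLs and arrays also work
results.print()                   # per-detection class, confidence, and box
df = results.pandas().xyxy[0]     # detections for the first image as a DataFrame
print(df[['name', 'confidence']])
```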
This document summarizes several datasets for image captioning, video classification, action recognition, and temporal localization. It describes the purpose, collection process, annotation format, examples and references for datasets including MS COCO, Visual Genome, Flickr8K/30K, Kinetics, Charades, AVA, STAIR Captions and Actions. The datasets vary in scale from thousands to millions of images/videos and cover a wide range of tasks from image captioning to complex activity recognition.
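As one concrete example of the annotation formats mentioned: MS COCO's captions are released as a JSON file whose 'annotations' entries link each caption to an image via image_id. A hedged sketch (the file path is a placeholder):

```python
import json

with open('annotations/captions_val2017.json') as f:  # placeholder path
    coco = json.load(f)

print(coco['images'][0]['file_name'])   # image metadata record
ann = coco['annotations'][0]
print(ann['image_id'], ann['caption'])  # caption tied to an image id
```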
An introduction to Chainer, a framework for neural networks, as of v1.11. Slides used for a student seminar on July 20, 2016, at the Sugiyama-Sato lab at the University of Tokyo.
This document outlines Chainer's development plans, including past releases from versions 1.0 to 1.5, an apology for installation complications, and the new policies and release schedule from version 1.6 onward. Key points include making installation easier, maintaining backward compatibility, releasing minor versions every six weeks and revision versions every two weeks, and potential future features such as profiling, debugging tools, and isolating CuPy.
1) The document discusses the development history and planned features of Chainer, a deep learning framework.
2) It describes Chainer's transition to a new model structure using Links and Chains to define networks in a more modular and reusable way.
3) The new structure will allow for easier saving, loading, and composition of network definitions compared to the previous FunctionSet/Optimizer approach (see the sketch below).
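A hedged sketch of the Link/Chain style described above, in the v1-era API these slides cover (layer sizes and names are illustrative):

```python
import chainer
import chainer.functions as F
import chainer.links as L
from chainer import optimizers, serializers

# Parameters live in Links; Links compose into a Chain, which is the
# reusable, serializable unit (unlike the old FunctionSet/Optimizer split).
class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__(
            l1=L.Linear(784, 100),
            l2=L.Linear(100, 10),
        )

    def __call__(self, x):
        return self.l2(F.relu(self.l1(x)))

model = MLP()
optimizer = optimizers.SGD()
optimizer.setup(model)                  # the optimizer attaches to the whole Chain

serializers.save_npz('mlp.npz', model)  # one call saves the entire network
```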
IoT Devices Compliant with JC-STAR Using Linux as a Container OS
Tomohiro Saneyoshi
Security requirements for IoT devices are becoming more defined, as seen with the EU Cyber Resilience Act and Japan’s JC-STAR.
It's common for IoT devices to run Linux as their operating system. However, adopting general-purpose Linux distributions like Ubuntu or Debian, or Yocto-based Linux, presents certain difficulties. This article outlines those difficulties.
It also highlights the security benefits of using a Linux-based container OS and explains how to adopt it under JC-STAR, using the "Armadillo Base OS" as an example.
Feb. 25, 2025 @ JAWS-UG IoT