The Eclipse Deeplearning4J (DL4J) ecosystem is a set of projects intended to support all the needs of a JVM-based deep learning application. That means starting with raw data, loading and preprocessing it from wherever it lives and whatever format it is in, and then building and tuning a wide variety of simple and complex deep learning networks.
Because Deeplearning4J runs on the JVM you can use it with a wide variety of JVM based languages other than Java, like Scala, Kotlin, Clojure and many more.
The DL4J stack comprises:
- DL4J: High-level API to build MultiLayerNetworks and ComputationGraphs with a variety of layers, including custom ones. Supports importing Keras models from h5, including tf.keras models (as of 1.0.0-beta7), and also supports distributed training on Apache Spark.
- ND4J: General-purpose linear algebra library with over 500 mathematical, linear algebra and deep learning operations (a short usage sketch follows this list). ND4J is based on the highly optimized C++ codebase LibND4J, which provides CPU (AVX2/AVX-512) and GPU (CUDA) support and acceleration by libraries such as OpenBLAS, OneDNN (MKL-DNN), cuDNN, cuBLAS, etc.
- SameDiff: Part of the ND4J library, SameDiff is our automatic differentiation / deep learning framework. SameDiff uses a graph-based (define-then-run) approach, similar to TensorFlow graph mode. Eager execution (as in TensorFlow 2.x eager mode / PyTorch) is planned. SameDiff supports importing TensorFlow frozen model format .pb (protobuf) models; import for ONNX, TensorFlow SavedModel and Keras models is planned. Deeplearning4j also has full SameDiff support for easily writing custom layers and loss functions.
- DataVec: ETL for machine learning data in a wide variety of formats and files (HDFS, Spark, Images, Video, Audio, CSV, Excel etc)
- LibND4J: C++ library that underpins everything. For more information on how the JVM accesses native arrays and operations, refer to JavaCPP.
- Python4J: Bundled CPython execution for the JVM
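As a minimal, hedged sketch of the ND4J API mentioned in the list above: the snippet below creates two arrays and runs a few operations on them. The class name Nd4jSketch is just a placeholder; it assumes a CPU backend such as nd4j-native-platform (see the dependencies below) is on the classpath.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class Nd4jSketch {
    public static void main(String[] args) {
        // Create a 2x3 array of ones and a random 3x2 array
        INDArray a = Nd4j.ones(2, 3);
        INDArray b = Nd4j.rand(3, 2);

        // Matrix multiply, add a scalar, then apply an element-wise sigmoid
        INDArray c = a.mmul(b).add(1.0);
        INDArray d = Transforms.sigmoid(c);

        System.out.println(d);                       // 2x2 result
        System.out.println(d.shapeInfoToString());   // shape and stride information
    }
}
```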
All projects in the DL4J ecosystem support Windows, Linux and macOS. Hardware support includes CUDA GPUs (10.0, 10.1, 10.2 except OSX), x86 CPU (x86_64, avx2, avx512), ARM CPU (arm, arm64, armhf) and PowerPC (ppc64le).
For support for the project, please go over to https://community.konduit.ai/
Deeplearning4J has quite a few dependencies. For this reason we only support usage with a build tool.
<dependencies>
  <dependency>
    <groupId>org.deeplearning4j</groupId>
    <artifactId>deeplearning4j-core</artifactId>
    <version>1.0.0-M2.1</version>
  </dependency>
  <dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <version>1.0.0-M2.1</version>
  </dependency>
</dependencies>
Add these dependencies to your pom.xml file to use Deeplearning4J with the CPU backend. A full standalone project example is available in the examples repository if you want to start a new Maven project from scratch.
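As an illustrative (not normative) sketch of what a first program might look like once the dependencies above are in place, the following builds and initializes a small feed-forward network with the DL4J MultiLayerNetwork API. The layer sizes and hyperparameters here are arbitrary placeholders.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class QuickstartSketch {
    public static void main(String[] args) {
        // A small MLP: 784 inputs -> 100 hidden units -> 10 softmax outputs
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .updater(new Adam(1e-3))
                .list()
                .layer(new DenseLayer.Builder().nIn(784).nOut(100)
                        .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(100).nOut(10)
                        .activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());
        // net.fit(trainingIterator);  // supply a DataSetIterator with your own data
    }
}
```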
Because DL4J is a multi-faceted project with several modules in the monorepo, we recommend looking at the examples for a taste of how the different modules are used. Below we link to examples for each module.
- ND4J: https://github.com/deeplearning4j/deeplearning4j-examples/tree/master/nd4j-ndarray-examples
- DL4J: https://github.com/deeplearning4j/deeplearning4j-examples/tree/master/dl4j-examples
- Samediff: https://github.com/deeplearning4j/deeplearning4j-examples/tree/master/samediff-examples
- Datavec: https://github.com/deeplearning4j/deeplearning4j-examples/tree/master/data-pipeline-examples
- Python4j: https://deeplearning4j.konduit.ai/python4j/tutorials/quickstart
For users looking to run models from other frameworks, see the links below (a short import sketch follows the list):
- Onnx: https://github.com/deeplearning4j/deeplearning4j-examples/tree/master/onnx-import-examples
- Tensorflow/Keras: https://github.com/deeplearning4j/deeplearning4j-examples/tree/master/tensorflow-keras-import-examples
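As a hedged sketch of the Keras import path: the file name, input shape and class name below are placeholders, and Keras import lives in the deeplearning4j-modelimport module, which you may need to add as a dependency depending on your setup.

```java
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class KerasImportSketch {
    public static void main(String[] args) throws Exception {
        // Path to an HDF5 file saved from Keras/tf.keras with model.save("model.h5")
        String modelPath = "model.h5";  // placeholder path

        // For a Sequential Keras model; use importKerasModelAndWeights for functional models
        MultiLayerNetwork model =
                KerasModelImport.importKerasSequentialModelAndWeights(modelPath);

        // Run inference on a dummy input matching the model's expected input shape
        INDArray input = Nd4j.rand(1, 784);  // placeholder shape
        INDArray output = model.output(input);
        System.out.println(output);
    }
}
```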
You can find the official documentation for Deeplearning4J and the other libraries of its ecosystem at http://deeplearning4j.konduit.ai/.
We have a separate repository with various examples available: https://github.com/eclipse/deeplearning4j-examples
It is preferred to use the official pre-compiled releases (see above). But if you want to build from source, first take a look at the prerequisites for building from source here: https://deeplearning4j.konduit.ai/multi-project/how-to-guides/build-from-source. Various instructions for CPU and GPU builds can be found there. Please go to our forums for further help.
In order to run tests, please see the platform-tests module. This module only runs on JDK 11 (mostly due to Spark and bugs with older Scala versions + JDK 17).
platform-tests allows you to run DL4J tests against different backends. There are a few properties you can specify on the command line (an example invocation follows this list):
- backend.artifactId: defaults to nd4j-native and will run tests on CPU; you can specify other backends, like nd4j-cuda-11.6
- dl4j.version: You can change the dl4j version that the tests run against. This defaults to 1.0.0-SNAPSHOT.
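For example, a hedged sketch of a Maven invocation passing these properties (the exact goals and extra flags you need may differ depending on your environment):

```bash
mvn -pl platform-tests test -Dbackend.artifactId=nd4j-cuda-11.6 -Ddl4j.version=1.0.0-SNAPSHOT
```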
More parameters can be found in deeplearning4j/platform-tests/pom.xml (around line 47 at commit c1bf871).
- On JDK 9 or later, ensure you follow https://stackoverflow.com/questions/45370178/exporting-a-package-from-system-module-is-not-allowed-with-release
- Ignore all nd4j-shade submodules. Right click on each folder and click: Maven -> Ignore project
Deeplearning4J is actively developed by the team at Konduit K.K.
If you need any commercial support, feel free to reach out to us at [email protected]