Release 2.18.0
TensorFlow
Breaking Changes
- `tf.lite`
  - C API:
    - An optional, fourth parameter was added to `TfLiteOperatorCreate` as a step forward towards a cleaner API for `TfLiteOperator`. Function `TfLiteOperatorCreate` was added recently, in TensorFlow Lite version 2.17.0, released on 7/11/2024, and we do not expect there will be much code using this function yet. Any code breakages can be easily resolved by passing `nullptr` as the new, 4th parameter.
- TensorRT support is disabled in CUDA builds for code health improvement.
- Hermetic CUDA support is added.

  Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, CUDNN and NCCL distributions, and then use CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.
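As a sketch of how a hermetic build is pinned, the CUDA and cuDNN versions can be selected through Bazel repository environment variables; the variable names follow TensorFlow's hermetic CUDA documentation, and the version numbers below are purely illustrative:

```
# .bazelrc sketch (illustrative versions): Bazel downloads these
# distributions instead of using a locally installed CUDA toolkit.
build --config=cuda
build --repo_env=HERMETIC_CUDA_VERSION="12.3.1"
build --repo_env=HERMETIC_CUDNN_VERSION="9.1.1"
```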
Known Caveats
Major Features and Improvements
- TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the NumPy 2 release notes and the NumPy 2 migration guide.
- Note that NumPy's type promotion rules have been changed (see NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes to results.
- TensorFlow will continue to support NumPy 1.26 until 2025, aligning with the community standard deprecation timeline.
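A minimal illustration of the NEP 50 promotion change (the version-dependent behavior is noted in comments; this is a sketch, not an exhaustive description of the new rules):

```python
import numpy as np

x = np.array([1.0, 2.0], dtype=np.float32)

# Python scalars are "weak" under NEP 50: the array dtype wins.
print((x + 1.0).dtype)  # float32 on both NumPy 1.x and 2.x

# NumPy scalars are no longer inspected by value: under NumPy 2 a float64
# scalar promotes the result to float64, where NumPy 1.x often kept float32.
print((x + np.float64(1.0)).dtype)  # float64 on NumPy 2, float32 on NumPy 1.x
```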
- `tf.lite`:
  - The LiteRT repo is live (see announcement), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
  - `SignatureRunner` is now supported for models with no signatures.
Bug Fixes and Other Changes
- `tf.data`
  - Add optional `synchronous` argument to `map`, to specify that the `map` should run synchronously, as opposed to being parallelizable when `options.experimental_optimization.map_parallelization=True`. This saves memory compared to setting `num_parallel_calls=1`.
  - Add optional `use_unbounded_threadpool` argument to `map`, to specify that the `map` should use an unbounded threadpool instead of the default pool that is based on the number of cores on the machine. This can improve throughput for map functions which perform IO or otherwise release the CPU.
  - Add `tf.data.experimental.get_model_proto` to allow users to peek into the analytical model inside of a dataset iterator.
- `tf.lite`
  - `Dequantize` op supports `TensorType_INT4`.
    - This change includes per-channel dequantization.
  - Add support for `stablehlo.composite`.
  - `EmbeddingLookup` op supports per-channel quantization and `TensorType_INT4` values.
  - `FullyConnected` op supports `TensorType_INT16` activation and `TensorType_INT4` weight per-channel quantization.
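Per-channel dequantization applies one scale and zero point per channel rather than one for the whole tensor. A rough NumPy sketch of the arithmetic (the helper name is hypothetical and this is not the TFLite kernel):

```python
import numpy as np

def dequantize_per_channel(q, scales, zero_points, axis=0):
    """Sketch of per-channel dequantization: real = scale * (q - zero_point),
    with a distinct scale/zero_point per channel along `axis`."""
    shape = [1] * q.ndim
    shape[axis] = -1
    scales = np.asarray(scales, dtype=np.float32).reshape(shape)
    zero_points = np.asarray(zero_points, dtype=np.int32).reshape(shape)
    return scales * (q.astype(np.int32) - zero_points)

# Two channels (rows); int4 values lie in [-8, 7], stored here in an int8 array.
q = np.array([[-8, 0, 7],
              [-8, 0, 7]], dtype=np.int8)
# Channel 0 is scaled by 0.5, channel 1 by 2.0.
print(dequantize_per_channel(q, scales=[0.5, 2.0], zero_points=[0, 0]))
```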
- `tf.tensor_scatter_update`, `tf.tensor_scatter_add` and other reduce types:
  - Support `bad_indices_policy`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Akhil Goel, akhilgoe, Alexander Pivovarov, Amir Samani, Andrew Goodbody, Andrey Portnoy, Anthony Platanios, bernardoArcari, Brett Taylor, buptzyb, Chao, Christian Clauss, Cocoa, Daniil Kutz, Darya Parygina, dependabot[bot], Dimitris Vardoulakis, Dragan Mladjenovic, Elfie Guo, eukub, Faijul Amin, flyingcat, Frédéric Bastien, ganyu.08, Georg Stefan Schmid, Grigory Reznikov, Harsha H S, Harshit Monish, Heiner, Ilia Sergachev, Jan, Jane Liu, Jaroslav Sevcik, Kaixi Hou, Kanvi Khanna, Kristof Maar, Kristóf Maár, LakshmiKalaKadali, Lbertho-Gpsw, lingzhi98, MarcoFalke, Masahiro Hiramori, Mmakevic-Amd, mraunak, Nobuo Tsukamoto, Notheisz57, Olli Lupton, Pearu Peterson, pemeliya, Peyara Nando, Philipp Hack, Phuong Nguyen, Pol Dellaiera, Rahul Batra, Ruturaj Vaidya, sachinmuradi, Sergey Kozub, Shanbin Ke, Sheng Yang, shengyu, Shraiysh, Shu Wang, Surya, sushreebarsa, Swatheesh-Mcw, syzygial, Tai Ly, terryysun, tilakrayal, Tj Xu, Trevor Morris, Tzung-Han Juang, wenchenvincent, wondertx, Xuefei Jiang, Ye Huang, Yimei Sun, Yunlong Liu, Zahid Iqbal, Zhan Lu, Zoranjovanovic-Ns, Zuri Obozuwa