
Releases: onnx/onnx

v1.17.0

01 Oct 17:57
b8baa84

ONNX v1.17.0 is now available with exciting new features! We would like to thank everyone who contributed to this release!
Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

ai.onnx Opset 22

Python Changes

  • Support for numpy >= 2.0

Bug fixes and infrastructure improvements

  • Fix Check URLs errors #5972
  • Use CMAKE_PREFIX_PATH in finding libprotobuf #5975
  • Bump main VERSION_NUMBER to 1.17.0 #5968
  • Fix source and pip tar.gz builds on s390x systems #5984
  • Fix unique_name #5992
  • Fix SegFault bug in shape inference #5990
  • Fix onnx.compose when connecting subgraphs #5991
  • Fix conversion from split 11 to split 18 #6020
  • Update error messages for NegativeLogLikelihoodLoss inference function #6021
  • Generalize input/output number check in shape inference #6005
  • Replace rank inference with shape inference for Einsum op #6010
  • build from source instruction with latest cmake change #6038
  • Handle OneHot's depth value during shape inference #5963
  • Not to install cmake in pyproject.toml on Windows #6045
  • fix a skipped shape infer code #6049
  • Include the ".onnxtext" extension in supported serialization format #6051
  • Allow ReferenceEvaluator to return intermediate results #6066
  • Fix 1 typo in numpy_helper.py #6041
  • Remove benchmarking code #6076
  • Prevent crash on import after GCC 8 builds #6048
  • Check graph outputs are defined #6083
  • Enable additional ruff rules #6032
  • Add missing shape inference check for DequantizeLinear #6080
  • Add bfloat16 to all relevant ops #6099
  • fix(ci): install python dependencies with --only-binary :all: in manylinux #6120
  • fix: install google-re2 with --only-binary option #6129
  • Specify axis parameter for DequantizeLinear when input rank is 1 #6095
  • Pin onnxruntime to 1.17.3 for release CIs #6143
  • Fix INT4 TensorProto byte size is 5x larger than expected with negative values #6161
  • Mitigate tarball directory traversal risks #6164
  • Fix reference implementation for ScatterND with 4D tensors #6174
  • Addition of group > 1 in test and in backend for ConvTranspose #6175
  • Support for bfloat16 for binary, unary operators in reference implementation #6166
  • Refactor windows workflow to work on standard windows #6190
  • Fix a few crashes while running shape inference #6195
  • Update onnx to work with numpy>=2.0 #6196
  • Use sets to improve performance of dfs search #6213
  • Upgrade reuse to v4.0.0 #6216
  • Makes to_array, from_array support custom numpy dtype, support float16 type in parser #6170
  • Handle functions in external data helper #6233
  • Refactor safe extract method to fix issue 6215 #6222
  • move examples dir #6230
  • Use MACOSX_DEPLOYMENT_TARGET=12.0 for macOS wheels #6242
  • Handle the optional input in infer_node_outputs #6250
  • Add check on dimensions in Gemm opset 6 #6217
  • Update broken URLs #6255
  • The latest protobuf pkg 5.28.0 is failing on Windows. use the one pre… #6342
  • Remove unused variables #6303

Test improvements

  • Migrate CI to use Github Actions #6075
  • Add shape inference test for custom op #6068
  • chore(ci): build and test macOS universal2 wheels on macOS arm64 #6117
  • Fix input names for quantize/dequantize ONNX backend tests #6122
  • Verify model deletion after testing #6127
  • Better name for Github Action and fix Windows build on CI #6173
  • Fix CI on Windows 3.12 #6179
  • Rename test name with duplicated names, add logic to check it does not happen again #6194

Documentation updates

  • Fix typos in the comments and documentation #5944
  • Add more partner projects to be notified about new releases #6042
  • Update release process documentation #6043
  • Update CI pipeline README #6086
  • Add/Format License/Copyright headers [612...

v1.16.2

01 Aug 13:15
3bf92c0

ONNX v1.16.2 is a patch release based on v1.16.1.

Bug fixes

  • Mitigate tarball directory traversal risks #6164
  • Refactor safe extract method #6222
  • Add check on dimensions in Gemm opset 6 #6217
  • Update broken URLs #6255

Please visit onnx.ai to learn more about ONNX and associated projects.

v1.16.1

23 May 17:50
595228d

ONNX v1.16.1 is a patch release based on v1.16.0.

Bug fixes

  • Prevent crash on import after GCC 8 builds #6048
  • Add missing shape inference check for DequantizeLinear #6080
  • Fix input names for quantize/dequantize ONNX backend tests #6122
  • fix a skipped shape infer code #6049

Please visit onnx.ai to learn more about ONNX and associated projects.

v1.16.0

25 Mar 15:40
990217f

ONNX v1.16.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

ai.onnx Opset 21

ai.onnx.ml Opset 4

IR Version 10

  • Added support for UINT4, INT4 types
  • GraphProto, FunctionProto, NodeProto, TensorProto added metadata_props field
  • FunctionProto added value_info field
  • FunctionProto and NodeProto added overload field to support overloaded functions.

Python Changes

  • Support registering custom OpSchemas via Python interface
  • Support Python 3.12

Security Updates

  • Fix path sanitization bypass leading to arbitrary read (CVE-2024-27318)
  • Fix Out of bounds read due to lack of string termination in assert (CVE-2024-27319)

Deprecation notice

Bug fixes and infrastructure improvements

  • Enable empty list of values as attribute (#5559)
  • Add backward conversions from 18->17 for reduce ops (#5606)
  • DFT-20 version converter (#5613)
  • Fix version-converter to generate valid identifiers (#5628)
  • Reserve removed proto fields (#5643)
  • Cleanup shape inference implementation (#5596)
  • Do not use LFS64 on non-glibc linux (#5669)
  • Drop "one of" default attribute check in LabelEncoder (#5673)
  • TreeEnsemble base values for the reference implementation (#5665)
  • Parser/printer support external data format (#5688)
  • [cmake] Place export target file in the correct directory (#5677)
  • Bump CMAKE_CXX_STANDARD as 17 globally (#5612)
  • Fix shape inference for DequantizeLinear (#5709)
  • Fix swapped version numbers in version converter (#5734)
  • Expose LexicalScopeContext in checker.py (#5693)
  • Create in-memory large models without serializing large initializers through protobuf (#5685)
  • Define all in onnx.reference (#5749)
  • Add default for check_function & Use lexical_scope_ctx for readability (#5757)
  • Make ReferenceEvaluator support ModelContainer (#5754)
  • Fix reference implementation for loops with optional number of iterations (#5752)
  • Print the actual and expected attribute types in checker (#5762)
  • Resurrect check function context logic (#5778)
  • Fix conversion to zero for E4M3FNUZ and E5M2FNUZ (#5764)
  • Support Unicode file paths when loading an ONNX file (#5806)
  • Removed unused string_view include (#5813)
  • Use mac-release 10.15 (#5820)
  • Process subgraphs in inliner (#5841)
  • Enable unity(Jumbo) builds (#5768)
  • Print tensor dtypes as strings in shape inference (#5856)
  • Bump up IR_VERSION to 10 (#5860)
  • Support Python 3.12 (#5743)
  • Fix corner case where output size need to reduce by one in MaxPool (#5741)
  • Bump Numpy minimal version to 1.20 (#5902)
  • Fix endianness conversion in numpy_helper.to_array() (#5904)
  • Add valueinfos field to FunctionProto (#5903)
  • Remove deprecated properties from FormalParameter (#5921)
  • Add proto support for overloaded functions (#5011)
  • Add parser support for int4 types (#5934)
  • Update proto to add metadata props (#5938)
  • The latest Cmake 3.28.3 is failing with "Could NOT find Protobuf (missing: Protobuf_LIBRARIES)". Use Cmake 3.27.9 (#5951)
  • Fix ReferenceEvaluator when run from a subclass (#5936)

Documentation updates

  • Update top-k documentation (#5948)
  • Updated docs for DynamicQuantizeLinear to be consistent with reference implementation (#5603)
  • Clarify cond to If must contain a single element (#5617)
  • Update README.md (#5630)
  • Fix affineGrid doc error - output shape shall has no 'C' in it (#5648)
  • Use absolute link in README.md entirely (#5663)
  • [Doc clarification] Added unidirectional text for LayerNorm (#5686)
  • Add documentation for inliner (#5712)
  • update release doc for tag creation (#5721)
  • Doc: Add exception checks in check_model (#5736)
  • Add perm length constraint in Transpose doc (#5857)
  • Fix label encoder definition in schema (#5863)
  • Update batchnorm documentation (number of outputs for training mode) (#5932)
  • Q/DQ docs readability + 4bit info in onnx.proto (#5937)

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Contributors

Thanks to these individuals for their contributions in this release since the last 1.15.0 release:
Aditya Goel, Adrian Lizarraga, Andreas Fehlner, Charles Volzka, Daniel Richard G, Danni, G. Ramalingam, Gal Hubara-Agam, Ilya Lavrenov, Justin Chu, Tabari Alexander, Takeshi Watanabe, WORLD PEACE, Wouter Deconinck, Xavier Dupré, Yuan Yao, dependabot[bot], galagam, jslap-ubi, liqun Fu

v1.15.0

31 Oct 17:04
b86cc54

ONNX v1.15.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

ai.onnx opset version increased to 20 with the following changes:

  • New Operators (ai.onnx):

    • ImageDecoder: a new operator for decoding images, to be used in preprocessing models
    • RegexFullMatch: a new operator for regex matching, commonly used in feature preprocessing
    • StringConcat: takes two string tensors as input and returns the elementwise concatenation of the strings in each tensor
    • StringSplit: takes a string tensor as input and splits each element based on a delimiter attribute and a maxsplit attribute
    • AffineGrid: generates a 2D or 3D flow field (sampling grid), given a batch of affine matrices theta
    • Gelu: applies the Gaussian error linear unit function or its approximation to the input
  • Operator Updates (ai.onnx):

ai.onnx.ml opset version increased to 4 with the following changes:

  • Operator Updates (ai.onnx.ml):
    • LabelEncoder adds keys_as_tensor and values_as_tensor attributes

New functionality:

  • Enable empty list of values as attribute PR#5559
  • Update diff backend node tests for auto update doc PR#5604
  • Enable pylint checks with Ruff and remove pylint from lintrunner PR#5589
  • Getting onnx to treat inf/-inf as float literals. PR#5528
  • Create the onnxtxt serialization format PR#5524
  • Support JSON as a serialization target PR#5523
  • Support for parsing and printing empty list value as attribute PR#5516
  • Add auto update doc pipeline to help developers update docs PR#5450
  • Implement GELU as function op PR#5277
  • Integrate function-inlining with version-conversion PR#5211
  • Extend function type inference to handle missing optional parameters PR#5169
  • Create repr functions for OpSchema PR#5117
  • Utility to inline model-local functions PR#5105
  • Faster reference implementation for operator Conv based on im2col PR#5069
  • Support textproto as a serialization format PR#5112

ONNX now supports serializing to JSON and Text Proto, as well as the ONNX text representation

Users can serialize the model proto to a text format by using a supported file extension or by supplying the format= argument to save_model.

For example:

# model: onnx.ModelProto
onnx.save_model(model, "model.json")

will save the model as a JSON file.

Shape inference enhancements

  • [Spec] output_shape for ConvTranspose should not have batch and channels PR#5400
  • Infer rank where reshape shape is inferred PR#5327

Bug fixes and infrastructure improvements

  • Do not use LFS64 on non-glibc linux PR#5669
  • [Web] Use tensor_dtype_to_np_dtype instead of deprecated function PR#5593
  • Reject absolute path when saving external data PR#5566
  • Support Python editable builds PR#5558
  • Test onnxruntime 1.15 with opset 19/IR 9 and fix test source distribution PR#5376
  • Supports float 8 initializers in ReferenceEvaluator PR#5295
  • Fix check_tensor to work with large models on UNIX PR#5286
  • Fix check_tensor to work with large models on Windows PR#5227
  • Transpose scalar shape inference PR#5204
  • Enable RUFF as a formatter PR#5176
  • correct averagepool kernel shape in dilation test case PR#5158
  • Fix type constraints of Reshape(19) PR#5146
  • Add github action to check urls are valid PR#5434
  • Introduce optional cpplint in CI PR#5396
  • Test the serialization API with custom serializers PR#5315
  • [CI] Use ONNX Hub directly in test_model_zoo CI PR#5267
  • Clean up setup.py in favor of pyproject.toml PR#4879

Documentation updates

  • Merge the two contributing docs and create instructions for updating an op PR#5584
  • [Doc] Update README.md regarding Protobuf update and fix typo in Slice-13 spec PR#5435
  • Generate both onnx and onnx-ml operator docs when ONNX_ML=1 PR#5381
  • Publish md files under docs/ to the documentation site PR#5312
  • Update OpSchema docs to include new methods and classes PR#5297
  • Fix missing examples in documentation for ai.onnx.ml PR#5228
  • Modify OneHot operator explanation PR#5197
  • Update CIPipelines.md PR#5157
  • Extend python API documentation PR#5156
  • Update sphinx to create markdown pages for operators PR#5137

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

python setup.py develop deprecation

Direct invocation of setup.py is deprecated (see https://setuptools.pypa.io/en/latest/deprecated/commands.html). To build ONNX, users should instead use:

# Editable installation
# Before: python setup.py develop
# Now
pip install -e .

# Build wheel
# Before: python setup.py bdist_wheel
# Now
pip install --upgrade build
python -m build .

Contributors

Thanks to these individuals for their contributions in this release sinc...


v1.14.1

25 Aug 21:00
1014f41

ONNX v1.14.1 is a patch release based on v1.14.0.

Bug fixes

  • Fix shape data propagation function to handle missing optional parameters #5219
  • Fix a couple of shape inference issues #5223
  • Extend function type inference to handle missing optional parameters #5169
  • Fix check_tensor to work with large models on Windows #5227
  • Fix check_tensor to work with large models on UNIX #5286

v1.14.0

05 May 16:35

ONNX v1.14.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Opset 19 is released

New operators

DeformConv added in #4783

Operator extensions

  • Equal - Support for string data type added in #4828
  • AveragePool - New attribute dilations #4790
  • Pad - Added new wrap to the mode attribute to support circular padding #4793
  • Resize - Added half_pixel_symmetric to the coordinate_transformation_mode attribute #4862

IR updates (bump to 9)

  • Support attributes with default values: #4911
  • Added 4 new 8-bit floating point data types: #4805

Backend tests

Replaced real models with light models in backend tests. #4861 #4960

Support Protobuf v21

Now ONNX supports Protobuf v21: #4956

Deprecation notice

  • Python 3.7 support will be deprecated in the next release due to its EOL: #5191
  • The onnx-weekly package will be deprecated on TestPyPI. Please use it from PyPI instead: #4930
  • Properties in FormalParameter will be deprecated in a future release. Please use the newer property names: #5074
  • Variables from mapping.py will be deprecated and become private implementation details. Please use the public functions in helper.py to get the corresponding types instead: #4554

Installation notice

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Contributors

Thanks to these individuals for their contributions in this release since last 1.13.0 release: @jcwchen, @andife, @gramalingam, @xadupre, @justinchuby, @liqunfu, @yuanyao-nv, @jbachurski, @p-wysocki, @prasanthpul, @jantonguirao, @take-cheeze, @smk2007, @AlexandreEichenberger, @snnn, @daquexian, @linkerzhang.

v1.13.1

22 Feb 18:47
ad834eb

ONNX v1.13.1 is a patch release based on v1.13.0.

Bug fixes

  • Add missing f-string for DeprecatedWarningDict in mapping.py #4707
  • Fix types deprecated in numpy==1.24 #4721
  • Update URL for real models from ONNX Runtime #4865
  • Fix attribute substitution within subgraphs during function type/shape inference #4792
  • Handle variants of constant op in shape inference #4824
  • Fix parser bug in handling non-tensor types #4863
  • Fix function shape inference bug #4880

Announcement

  • Deprecate real model tests from onnx repo in next ONNX release #4885
  • Move onnx-weekly package from TestPyPI to PyPI and stop uploading onnx-weekly to TestPyPI after next ONNX release #4930

v1.13.0

12 Dec 14:38
1ba7856

ONNX v1.13.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

New operators

Operator extensions

Function updates

Reference Python runtime

A reference Python runtime that depends only on Python and numpy has been added. #4483

Python 3.11 support

ONNX 1.13.0 supports Python 3.11. #4490

Apple Silicon support

Support for M1/M2 ARM processors has been added. #4642

More

ONNX 1.13.0 also comes with numerous:

  • bugfixes
  • infrastructure improvements
  • CI improvements
  • documentation updates
  • security updates

For full details see Logistics for ONNX Release 1.13.0.

Deprecation notice

  • TENSOR_TYPE_TO_STORAGE_TENSOR_TYPE has been deprecated #4270
  • ONNXIFI: ONNX Interface for Framework Integration has been deprecated #4431

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Contributors

Thanks to these individuals for their contributions in this release since last 1.12.0 release: @AnandKri, @cbourjau, @jcwchen, @gramalingam, @garymm, @GaetanLepage, @ilya-lavrenov, @jnovikov, @JackBoosY, @jbachurski, @tjich, @jantonguirao, @justinchuby, @natke, @philass, @prasanthpul, @p-wysocki, @SpaceIm, @stephenneuendorffer, @take-cheeze, @sechkova, @thiagocrepaldi, @xadupre, @mszhanyi, @yuanyao-nv, @andife, @daquexian, @kylesayrs, @liqunfu, @longlee0622, @HSQ79815, @williamberman, @YanBC

The list has been acquired with a script written by Aaron Bockover.

v1.12.0

18 Jun 02:58
f7ee1ac

ONNX v1.12.0 is now available with exciting new features! We would like to thank everyone who contributed to this release! Please visit onnx.ai to learn more about ONNX and associated projects.

Key Updates

ai.onnx opset version increased to 17 with the following changes:

  • New operators (ai.onnx):
    - LayerNormalization (#4076)
    - SequenceMap (#3892)
    - Signal Operators: DFT, HannWindow, HammingWindow, BlackmanWindow, MelWeightMatrix, STFT (#3741)
  • Operator Updates (ai.onnx):
    - [Scan] Remove unused type constraint I for newer Scan (opset 9+) (#4012)

Shape inference enhancements

  • Extend InferShapes to expose result of data propagation (#3879)
  • Update shape inference for constant of shape (#4141)
  • Catch missing input type in function shape inference (#4123)
  • Add shape inference for Expand using symbolic shape input (#3789)
  • Fix Expand shape inference: stop rank inference if the shape is symbolic (#4019)

Bug fixes and infrastructure improvements

  • Fix a bug in _get_initializer_tensors() (#4118)
  • Fix bug of resizeShapeInference for Resize13 (#4140)
  • Fix bug in SCE function body (#4038)
  • Use correct pytest types in backend (#3990) (#3994)
  • Checker should validate the node's inputs/outputs have names when its formal parameter is Variadic (#3979)
  • Loose NumPy requirement to grant more flexibility (#4059)
  • Fix crash: Skip unused value_info for version_converter (#4079)
  • Use %d for integer in version_converter (#4182)
  • Extend parser to handle other types (#4136)

Documentation updates

  • Add documentation about functions to IR.md (#4180)
  • Clarify add new op documentation (#4150)
  • Clarify NonZero behavior for scalar input in spec (#4113)
  • Update shape inference documentation (#4163)
  • Fix a minor typo in operator Gather documentation (#4125)
  • Fix typo in CIPipelines.md (#4157)
  • Fix typo in slice doc (#4117)
  • Fix grammar in documents (#4094)
  • Clearer description of Slice (#3908)
  • Add OperatorSetId definition in docs (#4039)
  • Clean up protocol buffer definitions (#4201)
  • Change the wrong words of second layer input (#4044)
  • Clarify that op_type is case sensitive (#4096)

Installation

You can upgrade to the latest release using pip install onnx --upgrade or build from source following the README instructions.

Notes

  • Beware of the protobuf version gap issue (building onnx with protobuf>=3.12 is not compatible with older protobuf)

Contributors

Thanks to these individuals for their contributions in this release since last 1.11.0 release. (Contributor list obtained with: https://github.com/onnx/onnx/graphs/contributors?from=2022-02-08&to=2022-05-24&type=c): @jcwchen, @gramalingam, @xuzijian629, @garymm, @diyessi, @liqunfu, @jantonguirao, @daquexian, @fdwr, @andife, @wschin, @xadupre, @xkszltl, @snnn