HyPro - A C++ library for the representation of state sets for the reachability analysis of hybrid systems
The C++ library HyPro, a result of the project HyPro funded by the German research council, provides easy-to-use implementations of state set representations for the reachability analysis of hybrid systems via flowpipe construction. It provides implementations of the most commonly used state set representations, such as boxes, convex polytopes, support functions, and zonotopes. All implementations conform to the same general interface, which allows the user to easily switch between representations (even during the analysis).
Additionally, commonly used data structures and algorithms required during the analysis are provided. The tool features a parser for Flow*-syntax and implementations of basic reachability analysis methods based on flowpipe construction.
On the official project page you can find a collection of benchmarks as well.
The latest stable version can be obtained from the master branch. The alpha branch holds a development version - please check the CI status to see whether this version is stable. As an alternative to compiling HyPro yourself, a Docker image is available from DockerHub (for both branches).
To be able to compile HyPro, the following dependencies must be met:
- cmake
- a C++ compiler (g++ or clang++)
- CArL
- a Java runtime environment (e.g., openjdk-jre)
- uuid-dev
- iputils-ping
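On a Debian/Ubuntu system, most of these dependencies can typically be installed via the package manager; the exact package names below are an assumption and may differ between distributions. CArL is not assumed to be packaged and has to be built from its own sources.

```shell
# Hypothetical package names for Debian/Ubuntu; adjust for your distribution.
# CArL must be built separately from source.
sudo apt-get install cmake g++ default-jre uuid-dev iputils-ping
```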
After installing the dependencies and obtaining the source code from the git repository, HyPro can be configured via cmake and built afterwards. We recommend an out-of-source build:
$ mkdir build && cd build
$ cmake ..
$ make resources
$ make hypro
This minimal build will not compile the tests. To compile and run the tests as well, use the target allTests, e.g.:
$ make allTests
$ make test
HyPro registers itself to cmake, which means that a further cmake project depending on HyPro does not necessarily require a system-wide installation of HyPro. Instead, it suffices to call find_package(hypro) from cmake to locate the local build.
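A minimal consumer project could look like the following CMakeLists.txt sketch; note that the exact name of the exported link target is an assumption here - check the targets exported by your HyPro build if linking fails.

```cmake
cmake_minimum_required(VERSION 3.10)
project(myTool CXX)

# Locates a local HyPro build (via the cmake package registry)
# or a system installation.
find_package(hypro REQUIRED)

add_executable(myTool main.cpp)
# The target name "hypro" is assumed; consult your HyPro build otherwise.
target_link_libraries(myTool hypro)
```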
Currently, we provide an API documentation created with Doxygen as well as a PDF manual.
HyPro comes with some examples shipped, which may give you a hint on how to use the library. All examples are located in the examples folder; every source file in this folder prefixed with example_ is automatically assigned a corresponding target, i.e., the file example_box.cpp can be compiled via the target example_box.
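For instance, building and running the box example from the build folder could look as follows; the bin subfolder is an assumption based on the default build layout described further below.

```shell
# Build the example target and run the resulting binary
# (binary location assumed to be the bin subfolder of the build folder).
make example_box
./bin/example_box
```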
The library comes with a selection of case studies which can be used to reproduce experiments or as starting points for interested users. In the following we briefly describe their content and how to run specific case studies.
HyPro includes four benchmarks that interested users can use to test our reachability analysis algorithm for neural networks with piecewise linear activation functions. The four benchmarks are: ACAS Xu, drone hovering, thermostat controller, and sonar classifier. In the following section, we list some example commands for running verification on each of these benchmarks. We provide the neural networks in .nnet format and the safety properties in a custom text format. To execute the verification with the corresponding parameters, we offer a convenient script called nnBenchmarkVerification.sh, which is located in the root directory of HyPro. To run the script, we assume that the user's current directory is the build folder.
The famous ACAS Xu benchmark consists of 45 neural networks, each following the naming convention ACASXU_experimental_v2a_x_y.nnet, where x and y index the network (x ranges over 1-5 and y over 1-9).
To verify a specific property-network combination, you should first compile the corresponding binary file. Then, assuming that the current directory is the build folder, running the over-approximate method on network ACASXU_experimental_v2a_2_4 with safety property 4 can be done as follows:
$ make example_ACASbenchmark_verification
$ ../nnBenchmarkVerification.sh acasxu overapprox ACASXU_experimental_v2a_2_4.nnet poly_prop4_input.in poly_prop4_safe.in
First of all, we would like to express our deepest gratitude to Dario Guidotti, Stefano Demarchi, and Armando Tacchella for generously sharing their drone hovering benchmark with us. This benchmark comprises eight neural networks. The first four consist of two hidden layers, and the other four networks consist of three hidden layers, each followed by a ReLU activation function. For each network, two safety properties are provided.
As an example, to compile the binary and verify the network AC7.nnet using the second safety property, please follow these commands:
$ make example_drones_verification
$ ../nnBenchmarkVerification.sh drones exact AC7.nnet prop_AC7_02.in safe_AC7_02.in
This benchmark, which originates from a Master's thesis, includes a neural network controller that regulates room temperature within the range of 17°C to 23°C using a thermostat. It accomplishes this by activating (mode on) and deactivating (mode off) the heater based on the sensed temperature. The neural network representing the thermostat's controller is a feedforward neural network with three layers. The input comprises two neurons that represent the temperature and the current mode (on or off). Additionally, there are two hidden layers, each with ten neurons. Finally, using the unit step activation function, the output layer predicts whether the heater should turn on or off, generating the corresponding control input.
To compile and test this benchmark, please follow these commands:
$ make example_thermostat_verification
$ ../nnBenchmarkVerification.sh thermostat exact fc_thermostat.nnet poly_thermostat.in
Using this benchmark, we assess the robustness of a neural network designed for binary classification of a sonar dataset. This dataset characterizes sonar chirp returns reflecting off various objects and comprises 60 input variables representing the strength of the returned beams at different angles. The neural network under examination should demonstrate robust binary classification, distinguishing between rocks and metal cylinders. This neural network consists of two layers, with the first layer utilizing a ReLU activation function and the second layer employing a sigmoid activation function, which we switch to the hard sigmoid. Our goal is to verify the local robustness of this neural network.
The local-robustness properties were generated with the script examples/nn_benchmarks/properties/sonar/convert_sonar.py; re-running this script regenerates the input properties. For instance, to evaluate the robustness of input 39, where the ground truth label is "rock" (R), you can follow these steps:
$ make example_robustness
$ ../nnBenchmarkVerification.sh sonar overapprox sonar_detector.nnet sonar_39_R.in
The case study consists of three benchmarks which illustrate the benefits of subspace computations. We present results for the original approaches as well as for our earlier approach published at QAPL'17 and for a more recent approach (submitted, under review) that extends the previous results with better synchronization.
The benchmarks are a leaking tank system, a system with two tanks and a thermostat model. All of these systems are extended by controllers, which run periodically.
The results from the tables of our most recent publication (submitted, under review) can be reproduced by running the script run_decomposition_benchmarks.sh, which is provided in the subfolder test/case_studies. The script takes two parameters as input: the path to the HyDRA tool and the base path of the model files (examples/input/subspaceBenchmarks). To build the HyDRA tool, run
$ make hydra
Afterwards, a complete call could look like this (depending on your build folder):
$ sudo ./test/case_studies/run_decomposition_benchmarks.sh /home/me/hypro/build/bin/hydra examples/input/subspaceBenchmarks
(here we assume your username is "me" and that HyPro is located in your home folder, from which you run the script, with a build folder named "build").
Additionally, we have created a cmake target to reproduce the results from Table 3, which can be built by running
$ make cs_hypro_decomposition_benchmark
The resulting executable, located in the bin subfolder of your build folder, runs the respective experiments, in which running times for different numbers of clocks and subspaces are compared when using a box representation.