Hi and welcome to the main depthai-ros repository! Here you can find ROS-related code for OAK cameras from Luxonis. Don't have one? You can get them here!
Main features:
- You can use the cameras as classic RGBD sensors for your 3D vision needs.
- You can also load Neural Networks and get the inference results straight from the camera!
You can develop your ROS applications in the following ways:
- Use classes provided in `depthai_bridge` to construct your own driver (see the `stereo_inertial_node` example on how to do that)
- Use the `depthai_ros_driver` package (currently available on ROS2 Humble and ROS Noetic) to get the default experience (see details below on how)
Supported ROS versions:
- Noetic
- Humble
- Iron
For usage, check out the respective git branches.
Add USB rules to your system:

```bash
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"' | sudo tee /etc/udev/rules.d/80-movidius.rules
sudo udevadm control --reload-rules && sudo udevadm trigger
```
Install depthai-ros (available for Noetic, Foxy, Galactic and Humble):

```bash
sudo apt install ros-<distro>-depthai-ros
```
The following script installs depthai-core, updates USB rules, and installs dependencies for DepthAI devices:

```bash
sudo wget -qO- https://raw.githubusercontent.com/luxonis/depthai-ros/main/install_dependencies.sh | sudo bash
```
If you don't have OpenCV installed, try `sudo apt install libopencv-dev`.
If you don't have rosdep installed and initialized, please execute the following steps:

```bash
sudo apt install python-rosdep    # (Melodic)
# or
sudo apt install python3-rosdep
sudo rosdep init
rosdep update
```
The following setup procedure assumes you have cmake version >= 3.10.2 and OpenCV version >= 4.0.0. We selected `dai_ws` as the name for a new folder, as it will be our depthai ros workspace.
```bash
mkdir -p dai_ws/src
cd dai_ws/src
git clone --branch <ros-distro> https://github.com/luxonis/depthai-ros.git
cd ..
rosdep install --from-paths src --ignore-src -r -y
source /opt/ros/<ros-distro>/setup.bash
catkin_make_isolated              # for ROS1
MAKEFLAGS="-j1 -l1" colcon build  # for ROS2
source devel/setup.bash           # for ROS1
source install/setup.bash         # for ROS2
```
Note: if you are using a lower-end PC or an RPi, a standard build may take a lot of RAM and clog your PC. To avoid that, you can use the `build.sh` script from your workspace (it just wraps colcon commands):

```bash
./src/depthai-ros/build.sh
```
You can additionally build and run docker images on your local machine. To do that, add USB rules as in the step above.
Each tagged version has its own prebuilt docker image. To download and run it, first execute

```bash
xhost +local:docker
```

to enable GUI tools such as rviz or rqt. Then:

```bash
docker run -it -v /dev/:/dev/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix luxonis/depthai-ros:noetic-latest [CMD]
```
Where [CMD] is what's going to be executed when the container is initially run, for example:
- `bash` (allows you to browse files, modify the code, and rebuild it)
- `zsh` (another installed shell that has some advantages over bash)
- `roslaunch depthai_ros_driver camera.launch` (this is just an example, any launch file will work here)
A side note here: launch files in `depthai_ros_driver` have some default parameters set by .yaml files inside the driver. You can either edit them inside the container itself, or you can make a .yaml file on your host (let's say `/home/YOUR_USERNAME_HERE/params/example_config.yaml`) and pass it as an argument to the executable, as follows:

```bash
docker run -it -v /dev/:/dev/ -v /home/YOUR_USERNAME_HERE/params:/params --network host --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix luxonis/depthai-ros:noetic-latest roslaunch depthai_ros_driver camera.launch params_file:=/params/example_config.yaml
```
Note: to keep the images compact, only some external dependencies are installed. For example, if you want to try out the RTABMap example in docker, you have to either:
- Install it by running the container in bash/zsh mode
- Modify the Dockerfile so that it's installed during building (you'll have to rebuild the image after that)
- Run the base camera node in our container and RTABMap separately on the host / in a separate container (see the launch file for which parameters/topic names need to be changed when running separately)
To build an image yourself, clone the repository and inside it run (the branch you are on matters):

```bash
docker build --build-arg USE_RVIZ=1 -t depthai-ros .
```

If you find that you run out of RAM during building, you can also set `BUILD_SEQUENTIAL=1` to build packages one at a time; it will take longer but use less RAM.
The `USE_RVIZ` argument means rviz will be installed inside docker. If you want to run it, you also need to execute the following command (you'll have to do it again after restarting your PC):

```bash
xhost +local:docker
```
Then you can run your image in the following way:

```bash
docker run -it -v /dev/:/dev/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai-ros
```

will run an interactive docker session. You can also try:

```bash
docker run -it -v /dev/:/dev/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai-ros ros2 launch depthai_examples stereo_inertial_node.launch.py
```

to run a launch file of your choice.
NOTE: the ROS2 Humble docker image uses Cyclone DDS as its RMW implementation.
On ROS1:

```bash
docker run -it -v /dev/:/dev/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai_ros roslaunch depthai_examples stereo_inertial_node.launch
```

will only start the `stereo_inertial_node` launch file (you can try different commands). The ROS2 equivalent:

```bash
docker run -it -v /dev/:/dev/ --privileged -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix depthai_ros ros2 launch depthai_examples stereo_inertial_node.launch.py
```
Currently, the recommended way to launch cameras is to use the executables from the `depthai_ros_driver` package.
This runs your camera as a ROS2 Component / ROS Nodelet and gives you the ability to customize your camera using ROS parameters.
The main API difference between ROS2 and ROS1 is that parameter names use different conventions - parameters in ROS2 are namespaced using `.` and parameters in ROS1 are namespaced using `_`; for example, ROS2: `camera.i_pipeline_type`, ROS1: `camera_i_pipeline_type`. For the sake of simplicity, this Readme uses the ROS2 convention. A list of all available parameters with their default values can be found at the bottom of this Readme.
Parameters that begin with `r_` can be freely modified during runtime, for example with rqt.
Parameters that begin with `i_` are set when the camera is initializing; to change them, you have to call the `stop` and `start` services. This can be used to hot-swap NNs during runtime, change resolutions, etc. Below you can see some examples:
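A minimal sketch of hot-swapping an `i_` parameter (the service and parameter names below are assumed for a default node named `oak`; check `ros2 service list` / `ros2 param list` on your setup):

```bash
# Stop the camera - the pipeline is removed from the device.
ros2 service call /oak/stop_camera std_srvs/srv/Trigger
# Change an init-time (i_) parameter while the camera is stopped.
ros2 param set /oak camera.i_pipeline_type RGB
# Start it again - the pipeline is rebuilt with the new parameter.
ros2 service call /oak/start_camera std_srvs/srv/Trigger

# Runtime (r_) parameters can be changed while the camera is running:
ros2 param set /oak rgb.r_set_man_exposure true
ros2 param set /oak rgb.r_exposure 10000
```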
By default, camera transforms are published from default URDF descriptions based on CAD models. This can be overridden by using the `TFPublisher` class from `depthai_bridge`, which, based on the Device's camera calibration data, republishes the description with updated information. To enable this behavior in `depthai_ros_driver`, you can use the following parameters:
- `camera.i_publish_tf_from_calibration` - setting this to true launches `TFPublisher`

Then you can set the following arguments:
- `camera.i_tf_camera_name` - if not set, defaults to the node name
- `camera.i_tf_camera_model` - if not set, it will be automatically detected. If the node is unable to detect an STL file for the camera, it is set to `OAK-D`. To explicitly set it in `camera.launch.py`, set `override_cam_model:=true`
- `camera.i_tf_base_frame`
- `camera.i_tf_parent_frame`
- `camera.i_tf_cam_pos_x`
- `camera.i_tf_cam_pos_y`
- `camera.i_tf_cam_pos_z`
- `camera.i_tf_cam_roll`
- `camera.i_tf_cam_pitch`
- `camera.i_tf_cam_yaw`

When using `camera.launch.py`, you can set `pass_tf_args_as_params:=true` so that TF arguments are used to fill those parameters, for example `ros2 launch depthai_ros_driver camera.launch.py pass_tf_args_as_params:=true parent_frame:=map cam_pos_x:=1.0 imu_from_descr:=true`.
It is also possible to set a custom URDF path (for now only an absolute path works) and custom xacro arguments using `camera.i_tf_custom_urdf_path` and `camera.i_tf_custom_xacro_args`. Please note that `robot_state_publisher` must be running.
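For example, a params file enabling calibration-based TF publishing might look like this (a sketch; node name `oak` assumed, and note that the TF pose values are strings, matching the defaults listed at the bottom of this Readme):

```yaml
/oak:
  ros__parameters:
    camera:
      i_publish_tf_from_calibration: true   # launch TFPublisher with device calibration data
      i_tf_parent_frame: map                # parent frame of the published camera TF
      i_tf_cam_pos_x: '1.0'                 # camera x position relative to the parent frame
```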
NOTE ON IMU EXTRINSICS
If your camera has an uncalibrated IMU, a warning will be shown and the IMU will be published with zero rotation and translation. You can override this behavior by setting `camera.i_tf_imu_from_descr: true`. This will publish default IMU extrinsics from the URDF based on the camera model.
Supported sensors and their available resolutions:
- IMX378 - {"12MP", "4K", "1080P"}, default 1080P
- OV9282 - {"800P", "720P", "400P"}, default 800P
- OV9782 - {"800P", "720P", "400P"}, default 800P
- OV9281 - {"800P", "720P", "400P"}, default 800P
- IMX214 - {"13MP", "12MP", "4K", "1080P"}, default 1080P
- IMX412 - {"13MP", "12MP", "4K", "1080P"}, default 1080P
- OV7750 - {"480P", "400P"}, default 480P
- OV7251 - {"480P", "400P"}, default 480P
- IMX477 - {"12MP", "4K", "1080P"}, default 1080P
- IMX577 - {"12MP", "4K", "1080P"}, default 1080P
- AR0234 - {"1200P"}, default 1200P
- IMX582 - {"48MP", "12MP", "4K"}, default 4K
- LCM48 - {"48MP", "12MP", "4K"}, default 4K
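For example, to pick one of the resolutions listed above for the RGB sensor (a sketch; node name `oak` assumed, and the value must be supported by your particular sensor):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_resolution: '4K'   # one of the resolution names listed above
```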
By default, the RGB camera outputs the `ISP` frame. To set a custom width and height of the output image, you can set the `i_isp_num` and `i_isp_den` parameters, which scale the image dimensions (2 and 3 by default, so from 1920x1080 to 1280x720). Note that for RGBD alignment to work, the resulting width and height must be divisible by 16.
Additionally, you can set `i_output_isp: false` to use the `video` output and set a custom size using the `i_width` and `i_height` parameters.
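A short sketch of both options (node name `oak` assumed):

```yaml
/oak:
  ros__parameters:
    rgb:
      # scale the 1920x1080 ISP frame by 2/3, giving 1280x720
      i_isp_num: 2
      i_isp_den: 3
      # alternative: use the video output with a custom size instead
      # (for RGBD alignment, width and height must stay divisible by 16)
      # i_output_isp: false
      # i_width: 1920
      # i_height: 1080
```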
When setting `stereo.i_align_depth: true`, the stereo output is aligned to the board socket specified by the `stereo.i_board_socket_id` parameter (by default 0/CAM_A).
You can enable rectified stereo streams by setting, for example in the case of the right stream, `i_publish_right_rect: true`. You can also set `i_publish_synced_rect_pair: true` to get both images with the same timestamps.
By default, the right camera is treated as first when used for stereo computation, which is reflected in the CameraInfo messages. If you want to reverse that logic, set `stereo.i_reverse_stereo_socket_order: true` (this can also be set for individual sensors).
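For example (node name `oak` assumed):

```yaml
/oak:
  ros__parameters:
    stereo:
      i_publish_right_rect: true         # publish the rectified right stream
      i_publish_synced_rect_pair: true   # publish both rectified images with matching timestamps
```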
Configuration of which sensors are used for computing the stereo pair can be done either programmatically, by specifying them in the constructor of a Stereo node (for example when building a custom pipeline), or via the `stereo.i_left_socket_id` / `stereo.i_right_socket_id` parameters. Please note that currently, if you want to use the rgb/center socket instead of one of the given pairs, you will need to build a custom pipeline for that.
Each sensor node (and the rectified streams from the Stereo node) has the option to add a FeatureTracker node, which publishes `depthai_ros_msgs/msg/TrackedFeatures` messages.
To enable features on, for example, the rgb node, set `rgb.i_enable_feature_tracker: true`. To enable publishing on rectified streams, set, for example, `stereo.i_left_rect_enable_feature_tracker: true`.
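Put together in a params file (node name `oak` assumed):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_enable_feature_tracker: true             # features from the rgb sensor
    stereo:
      i_left_rect_enable_feature_tracker: true   # features from the rectified left stream
```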
Parameters:
- `i_acc_freq: 400` - Accelerometer sensor frequency
- `i_acc_cov: 0.0` - Accelerometer covariance
- `i_batch_report_threshold: 1` - Batch report size
- `i_enable_rotation: false` - Whether to enable rotation vector & magnetometer data (available when using the IMU_WITH_MAG / IMU_WITH_MAG_SPLIT message type)
- `i_gyro_cov: 0.0` - Gyroscope covariance
- `i_gyro_freq: 400` - Gyroscope frequency
- `i_mag_cov: 0.0` - Magnetometer covariance
- `i_mag_freq: 100` - Magnetometer frequency
- `i_max_batch_reports: 10` - Max reports per batch
- `i_message_type: IMU` - ROS publisher type:
  - `IMU` - sensor_msgs/Imu
  - `IMU_WITH_MAG` - depthai_ros_msgs/ImuWithMagneticField
  - `IMU_WITH_MAG_SPLIT` - two publishers - sensor_msgs/Imu & sensor_msgs/MagneticField
- `i_rot_cov: -1.0` - Rotation covariance
- `i_rot_freq: 400` - Rotation frequency
- `i_sync_method: LINEAR_INTERPOLATE_ACCEL` - sync method. Available options: `COPY`, `LINEAR_INTERPOLATE_GYRO`, `LINEAR_INTERPOLATE_ACCEL`
- `i_rotation_vector_type` - type of rotation vector; for more information, refer to this link. Available options: `ROTATION_VECTOR`, `GAME_ROTATION_VECTOR`, `GEOMAGNETIC_ROTATION_VECTOR`, `ARVR_STABILIZED_ROTATION_VECTOR`, `ARVR_STABILIZED_GAME_ROTATION_VECTOR`
Stopping the camera can also be used for power saving, as the pipeline is removed from the device. Topics are also removed when the camera is stopped.
As for the parameters themselves, there are a few crucial ones that decide how the camera behaves.
`camera.i_pipeline_type` can be either:
- `RGB` - only publishes the RGB stream, NN available
- `RGBD` - publishes RGB + Depth streams (set `i_publish_topic` for the left and right cameras to enable them), NN & Spatial NN available
- `Stereo` - publishes streams from the left and right sensors, NN not available
- `RGBStereo` - publishes RGB + Left + Right streams, only RGB NN available
- `Depth` - publishes only the depth stream, no NN available
- `CamArray` - publishes streams for all detected sensors, no NN available

This tells the camera whether it should load stereo components. Defaults to `RGBD`. It is also possible to create a custom pipeline since all types are defined as plugins.
To do that, you can create a custom package (let's say `test_plugins`) and create an executable in that package (`test_plugins.cpp`). Inside that file, define a custom plugin that inherits from `depthai_ros_driver::pipeline_gen::BasePipeline` and overrides the `createPipeline` method.
After that, export the plugin, for example:

```cpp
#include <pluginlib/class_list_macros.hpp>
PLUGINLIB_EXPORT_CLASS(test_plugins::Test, depthai_ros_driver::pipeline_gen::BasePipeline)
```
Add the plugin definition:

```xml
<library path="test_plugins">
  <class type="test_plugins::Test" base_class_type="depthai_ros_driver::pipeline_gen::BasePipeline">
    <description>Test Pipeline.</description>
  </class>
</library>
```
Now you can use the created plugin as the pipeline - just set `camera.i_pipeline_type` to `test_plugins::Test`.
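For example, via a params file (node name `oak` assumed):

```yaml
/oak:
  ros__parameters:
    camera:
      i_pipeline_type: test_plugins::Test   # fully qualified plugin name
```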
- `camera.i_nn_type` can be either `none`, `rgb` or `spatial`. This is responsible for whether the NN that we load should also take depth information (and, for example, provide detections in 3D format). Defaults to `spatial`.
- `camera.i_mx_id` / `camera.i_ip` / `camera.i_usb_port_id` are for connecting to a specific camera. If not set, it automatically connects to the next available device. You can get those parameters from the logs by running the default launch file.
- `nn.i_nn_config_path` represents the path to a JSON file that contains information on what type of NN to load and what parameters to use. Currently we provide options to load MobileNet, YOLO and Segmentation (not in spatial) models. To see their example configs, navigate to `depthai_ros_driver/config/nn`. Defaults to `mobilenet.json` from `depthai_ros_driver`.
To use the provided example NNs, you can set the path to one of:
- `depthai_ros_driver/segmentation`
- `depthai_ros_driver/mobilenet`
- `depthai_ros_driver/yolo`
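For example, to load the example segmentation network instead (a sketch; node name `oak` assumed - remember that `i_` parameters require a camera restart to take effect):

```yaml
/oak:
  ros__parameters:
    camera:
      i_nn_type: rgb   # the segmentation example is not available in spatial mode
    nn:
      i_nn_config_path: depthai_ros_driver/segmentation
```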
All available camera-specific parameters and their default values can be seen in `depthai_ros_driver/config/camera.yaml`.
Currently, we provide a few examples:

- `camera.launch.py` launches the camera in RGBD mode and an NN in spatial (MobileNet) mode.
- `rgbd_pcl.launch.py` launches the camera in a basic RGBD configuration and doesn't load any NNs. It also loads ROS depth processing nodes for an RGBD pointcloud.
- `example_multicam.launch.py` launches several cameras at once, each one in a different container. Edit the `multicam_example.yaml` config file in the `config` directory to change parameters.
- `example_segmentation.launch.py` launches the camera in RGBD + semantic segmentation (pipeline type=RGBD, nn_type=rgb).
- `pointcloud.launch.py` - similar to `rgbd_pcl.launch.py`, but doesn't use the RGB component for the pointcloud.
- `example_marker_publish.launch.py` launches `camera.launch.py` + a small Python node that publishes detected objects as markers/tfs.
- `rtabmap.launch.py` launches the camera and RTAB-Map RGBD SLAM (you need to install it first - `sudo apt install ros-$ROS_DISTRO-rtabmap-ros`). You might need to set manual focus via parameters here.
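Assuming your workspace is sourced, you can run one of them like so:

```bash
ros2 launch depthai_ros_driver example_segmentation.launch.py
```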
Since PoE cameras use a protocol that has lower throughput than USB, running the default camera launch can result in lags depending on the chosen resolution/fps. To combat this issue, you can use encoded frames, which let you keep the desired resolution/fps at the cost of image quality reduction due to compression. One additional difference is that `subpixel` depth filtering is disabled in this mode. To enable low_bandwidth, for example for the rgb camera, change the following parameters:
- `rgb.i_low_bandwidth` - `true` to enable
- `rgb.i_low_bandwidth_quality` - desired quality % (default 50)

See the `low_bandwidth.yaml` file for example parameters for all streams.
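For example (node name `oak` assumed):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_low_bandwidth: true         # publish encoded frames instead of raw ones
      i_low_bandwidth_quality: 50   # encoding quality in %
```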
To properly align with depth, you need to set the `rgb.i_resolution` parameter to `720` (see `config/oak_d_w_pro.yaml`).
If you want to use calibration values other than the ones provided by the device, you can do it in the following ways:

- Use the `set_camera_info` service available for each of the image streams.
- Use the `i_calibration_file` parameter to point to the calibration file. Note: the camera name must start with `/`, so for example `/rgb`. See `depthai_ros_driver/config/calibration` for example calibration files. A `calibration.launch` file is provided to start up a ROS camera calibrator node in both monocular and stereo configurations.

Calibration file syntax (from `camera_info_manager`):
- `file:///full/path/to/local/file.yaml`
- `file:///full/path/to/videre/file.ini`
- `package://camera_info_manager/tests/test_calibration.yaml`
- `package://ros_package_name/calibrations/camera3.yaml`
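For instance, pointing the rgb stream at a local calibration file (hypothetical path; as noted above, the camera name inside the file must start with `/`, e.g. `/rgb`):

```yaml
/oak:
  ros__parameters:
    rgb:
      i_calibration_file: file:///home/YOUR_USERNAME_HERE/calib/rgb.yaml
```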
`depthai_filters` contains small composable node examples that show how to work with data from multiple topics.
Available filters:
- Detection2DOverlay - subscribes to the `/nn/detections` and `rgb/preview/image_raw` topics. To see it in action, run `ros2 launch depthai_filters example_det2d_overlay.launch.py`. Note: if you see that detections misalign in the overlay, adjust the `rgb.i_preview_size` parameter.
- SegmentationOverlay - overlays semantic segmentation from `/nn/image_raw` on top of the image from `rgb/preview/image_raw`. To see it in action, run `ros2 launch depthai_filters example_seg_overlay.launch.py`.
- WLS filter - stereo depth filter that smooths out the overall depth image based on disparity data. It subscribes to the `stereo/image_raw` and `left/image_raw` topics. Parameters needed to enable it: `left.i_publish_topic`, `stereo.i_output_disparity`. An example can be seen by running `ros2 launch depthai_filters example_wls_filter.launch.py`.
- SpatialBB - publishes bounding boxes as 3D line Markers based on spatial detections coming from the driver node.
- FeatureTrackerOverlay - publishes a Tracked Features overlay based on features and images coming from the driver.
- Features3D - uses the depth image to republish features as a 3D pointcloud.
There is a possibility of using external image topics as input for NNs or depth calculation.
Example scenarios:

- We want to get segmentation or 2D detection output based on images published by a USB camera node (see the sketch after this list). This can be achieved by setting the `rgb.i_simulate_from_topic` parameter to `true`. This creates a `sensor_msgs/Image` subscriber listening by default on the `/<node_name>/rgb/input` topic that passes data into the pipeline on each callback. Topic names can be changed either by classic ROS topic remapping or by setting `rgb.i_simulated_topic_name` to a desired name. By default, the original sensor node still runs and publishes data; setting `rgb.i_disable_node` to true will prevent it from spawning. Check `det2d_usb_cam_overlay.launch.py` in `depthai_filters` to see it in action.
- Calculating depth - both `left` and `right` sensor nodes can be set up as in the example above to calculate depth/disparity from external topics. Note that for this to work properly, you need to specify:
  - `left.i_board_socket_id: 1`
  - `right.i_board_socket_id: 2`
  - Default stereo input size is set to 1280x720; in case of a different image size, set `stereo.i_set_input_size: true` and adjust `stereo.i_input_width` and `stereo.i_input_height` accordingly.
  - An external calibration file path using the `camera.i_external_calibration_path` parameter. To get calibration from the device, you can either set `camera.i_calibration_dump` to true or call the `save_calibration` service. Calibration will be saved to `/tmp/<mx_id>_calibration.json`.

An example can be seen in `stereo_from_rosbag.launch.py` in `depthai_ros_driver`.
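As a sketch, feeding a webcam stream into the driver's rgb input (assumes the external `usb_cam` package and a default node name of `oak`; adjust topic names to your setup):

```bash
# Publish frames from a webcam and remap them onto the driver's input topic.
ros2 run usb_cam usb_cam_node_exe --ros-args -r image_raw:=/oak/rgb/input
```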
Running examples in ROS1:

```bash
cd dai_ws    # our workspace
source devel/setup.bash
roslaunch depthai_examples stereo_inertial_node.launch    # example node
```

And in ROS2:

```bash
cd dai_ws    # our workspace
source install/setup.bash
ros2 launch depthai_examples stereo_inertial_node.launch.py    # example node
```

For more examples, please check the launch files.
For the mobile publisher example:

```bash
roslaunch depthai_examples mobile_publisher.launch camera_model:=OAK-D
roslaunch depthai_examples mobile_publisher.launch camera_model:=OAK-D-LITE
roslaunch depthai_examples mobile_publisher.launch | rqt_image_view -t /mobilenet_publisher/color/image
```

In ROS2:

```bash
ros2 launch depthai_examples mobile_publisher.launch.py camera_model:=OAK-D
ros2 launch depthai_examples mobile_publisher.launch.py camera_model:=OAK-D-LITE
```
- ImageConverter - tested using `roslaunch depthai_examples stereo_inertial_node.launch` && `roslaunch depthai_examples rgb_publisher.launch`
- ImgDetectionConverter - tested using `roslaunch depthai_examples mobile_publisher.launch`
- SpatialImgDetectionConverter - tested using `roslaunch depthai_examples stereo_inertial_node.launch`

If there is a standard message or use case for which we have not provided a ROS msg or converter, feel free to create an issue or reach out to us on our Discord community. We would be happy to add more.
For easier development inside an isolated workspace, you can use Visual Studio Code with the DevContainers plugin. To do that:
- Create a separate workspace
- Clone the repository into `src`
- Copy the `.devcontainer` directory into the main workspace directory
- Open the workspace directory in VSCode
```yaml
/oak:
  ros__parameters:
    camera:
      i_calibration_dump: false
      i_enable_imu: true
      i_enable_ir: true
      i_external_calibration_path: ''
      i_floodlight_brightness: 0
      i_ip: ''
      i_laser_dot_brightness: 800
      i_mx_id: ''
      i_nn_type: spatial
      i_pipeline_dump: false
      i_pipeline_type: RGBD
      i_publish_tf_from_calibration: false
      i_tf_base_frame: oak
      i_tf_cam_pitch: '0.0'
      i_tf_cam_pos_x: '0.0'
      i_tf_cam_pos_y: '0.0'
      i_tf_cam_pos_z: '0.0'
      i_tf_cam_roll: '0.0'
      i_tf_cam_yaw: '0.0'
      i_tf_camera_model: ''
      i_tf_camera_name: oak
      i_tf_custom_urdf_location: ''
      i_tf_custom_xacro_args: ''
      i_tf_imu_from_descr: 'false'
      i_tf_parent_frame: oak-d-base-frame
      i_usb_port_id: ''
      i_usb_speed: SUPER_PLUS
    diagnostic_updater:
      period: 1.0
    imu:
      i_acc_cov: 0.0
      i_acc_freq: 400
      i_batch_report_threshold: 5
      i_enable_rotation: false
      i_get_base_device_timestamp: false
      i_gyro_cov: 0.0
      i_gyro_freq: 400
      i_mag_cov: 0.0
      i_max_batch_reports: 10
      i_message_type: IMU
      i_rot_cov: -1.0
      i_rotation_vector_type: ROTATION_VECTOR
      i_sync_method: LINEAR_INTERPOLATE_ACCEL
      i_update_ros_base_time_on_ros_msg: false
    left:
      i_add_exposure_offset: false
      i_board_socket_id: 1
      i_calibration_file: ''
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_topic: false
      i_resolution: '720'
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: NORMAL
      i_set_isp3a_fps: false
      i_simulate_from_topic: false
      i_simulated_topic_name: ''
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    nn:
      i_disable_resize: false
      i_enable_passthrough: false
      i_enable_passthrough_depth: false
      i_get_base_device_timestamp: false
      i_num_inference_threads: 2
      i_num_pool_frames: 4
      i_update_ros_base_time_on_ros_msg: false
    rgb:
      i_add_exposure_offset: false
      i_board_socket_id: 0
      i_calibration_file: ''
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_enable_preview: false
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_interleaved: false
      i_isp_den: 3
      i_isp_num: 2
      i_keep_preview_aspect_ratio: true
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_output_isp: true
      i_preview_height: 300
      i_preview_size: 300
      i_preview_width: 300
      i_publish_topic: true
      i_resolution: 1080p
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: NORMAL
      i_set_isp3a_fps: false
      i_set_isp_scale: true
      i_simulate_from_topic: false
      i_simulated_topic_name: ''
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 20000
      r_focus: 1
      r_iso: 800
      r_set_man_exposure: false
      r_set_man_focus: false
      r_set_man_whitebalance: false
      r_whitebalance: 3300
    right:
      i_add_exposure_offset: false
      i_board_socket_id: 2
      i_calibration_file: ''
      i_disable_node: false
      i_enable_feature_tracker: false
      i_enable_lazy_publisher: true
      i_exposure_offset: 0
      i_fps: 30.0
      i_fsync_continuous: false
      i_fsync_trigger: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_max_q_size: 30
      i_publish_topic: false
      i_resolution: '720'
      i_reverse_stereo_socket_order: false
      i_sensor_img_orientation: NORMAL
      i_set_isp3a_fps: false
      i_simulate_from_topic: false
      i_simulated_topic_name: ''
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
      r_exposure: 1000
      r_iso: 800
      r_set_man_exposure: false
    stereo:
      i_add_exposure_offset: false
      i_align_depth: true
      i_bilateral_sigma: 0
      i_board_socket_id: 0
      i_depth_filter_size: 5
      i_depth_preset: HIGH_ACCURACY
      i_disparity_width: DISPARITY_96
      i_enable_alpha_scaling: false
      i_enable_brightness_filter: false
      i_enable_companding: false
      i_enable_decimation_filter: false
      i_enable_disparity_shift: false
      i_enable_distortion_correction: false
      i_enable_lazy_publisher: true
      i_enable_spatial_filter: false
      i_enable_speckle_filter: false
      i_enable_temporal_filter: false
      i_enable_threshold_filter: false
      i_exposure_offset: 0
      i_extended_disp: false
      i_get_base_device_timestamp: false
      i_height: 720
      i_left_rect_add_exposure_offset: false
      i_left_rect_enable_feature_tracker: false
      i_left_rect_exposure_offset: 0
      i_left_rect_low_bandwidth: false
      i_left_rect_low_bandwidth_quality: 50
      i_left_socket_id: 1
      i_low_bandwidth: false
      i_low_bandwidth_quality: 50
      i_lr_check: true
      i_lrc_threshold: 10
      i_max_q_size: 30
      i_output_disparity: false
      i_publish_left_rect: false
      i_publish_right_rect: false
      i_publish_synced_rect_pair: false
      i_publish_topic: true
      i_rectify_edge_fill_color: 0
      i_reverse_stereo_socket_order: false
      i_right_rect_add_exposure_offset: false
      i_right_rect_enable_feature_tracker: false
      i_right_rect_exposure_offset: 0
      i_right_rect_low_bandwidth: false
      i_right_rect_low_bandwidth_quality: 50
      i_right_socket_id: 2
      i_set_input_size: false
      i_socket_name: rgb
      i_stereo_conf_threshold: 240
      i_subpixel: false
      i_update_ros_base_time_on_ros_msg: false
      i_width: 1280
    use_sim_time: false
```