This project enables web-based teleoperation for different types of robots, such as MiniPupper. Video streaming is implemented via WebRTC, and teleoperation is done via ROS. Note that the backend expects a depth camera such as the OAK-D Lite to be already connected to your robot (only Luxonis products are supported at the moment). If you don't have one yet, you can still teleoperate the robot via keyboard without a video stream, or use a simulator (guide TBD).
(Demo video: `rp_demo_cut1.mp4`)
The most recent implementation consists of the following components:
- backend: a Python BE that streams camera video to the remote browser via WebRTC; it uses the DepthAI API on real hardware and Gazebo camera images in simulation mode;
- frontend: a ReactJS FE that uses roslibjs to communicate with the ROS bridge, and the WebRTC API for camera streaming;
- rosbridge: a WebSocket proxy between the FE/BE and ROS that handles the following messages:
| Source | Pub/Sub | Topic | Message Type | Example |
|---|---|---|---|---|
| FE | Pub | /key | std_msgs/String | i/I/1/, |
| FE | Pub | /robot_pose/change | std_msgs/String | sit/stand |
| FE | Sub | /teleop_status | std_msgs/Bool | true/false |
| FE | Sub | /robot_pose/is_standing | std_msgs/Bool | true/false |
| FE | Sub | /battery/state | sensor_msgs/BatteryState | |
| FE | Sub | /memory/state | std_msgs/Float32 | 0.0-100.0 |
| FE | Sub | /cpu/state | std_msgs/Float32 | 0.0-100.0 |
| BE | Sub | /camera/color/image_raw/compressed | sensor_msgs/CompressedImage | |
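As a quick illustration of this protocol, below is a minimal sketch that exercises two of the topics from a Python script through rosbridge. It uses the third-party roslibpy package (an assumption for illustration; the actual FE uses roslibjs), and the host address is a placeholder:

```python
import time
import roslibpy

# Connect to rosbridge (host and port are placeholders for your setup).
client = roslibpy.Ros(host='192.168.0.42', port=9090)
client.run()

# Publish a raw key press, as the FE does on /key.
key_topic = roslibpy.Topic(client, '/key', 'std_msgs/String')
key_topic.publish(roslibpy.Message({'data': 'i'}))

# Subscribe to the teleoperation status flag the FE listens to.
status_topic = roslibpy.Topic(client, '/teleop_status', 'std_msgs/Bool')
status_topic.subscribe(lambda msg: print('teleop enabled:', msg['data']))

time.sleep(5)  # give the subscription a moment to receive messages
client.terminate()
```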
A few notes on the protocol:
- The ROS bridge must be visible (network-accessible) to the client so that common operations can be performed via the Web UI.
- You should provide `CompressedImage` messages to see a virtual camera stream from Gazebo in simulation mode.
- Only the `BatteryState.percentage` property is used for rendering battery stats.
- Teleoperation is impossible while the `is_standing` or `teleop_status` flag is `false`.
- Keyboard keys are sent in raw format; you should implement a ROS subscriber that transforms a key into `cmd_vel` commands (see the sketch after this list).
- `/robot_pose/change` is very robot-specific: the FE just sends either a `sit` or a `stand` command.
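A minimal sketch of such a key-to-`cmd_vel` subscriber is shown below (ROS 1 / rospy; the key bindings and velocity values are illustrative assumptions, not part of this repo):

```python
#!/usr/bin/env python3
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

# (linear x, angular z) per key -- assumed bindings for illustration.
KEY_BINDINGS = {
    "i": (0.5, 0.0),   # forward
    ",": (-0.5, 0.0),  # backward
    "j": (0.0, 1.0),   # turn left
    "l": (0.0, -1.0),  # turn right
    "k": (0.0, 0.0),   # stop
}

def on_key(msg):
    # Unknown keys map to a stop command.
    linear, angular = KEY_BINDINGS.get(msg.data, (0.0, 0.0))
    twist = Twist()
    twist.linear.x = linear
    twist.angular.z = angular
    cmd_vel_pub.publish(twist)

if __name__ == "__main__":
    rospy.init_node("key_to_cmd_vel")
    cmd_vel_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/key", String, on_key)
    rospy.spin()
```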
Clone the source code:
```bash
git clone https://github.com/WaverleySoftware/robo-perception.git && cd robo-perception
```
Prepare the required env files:
```bash
./generate_configs.sh [ROBOT_IP_ADDRESS]
```
The robot's IP address is required so that the FE can reach the BE from a remote browser (e.g. `./generate_configs.sh 192.168.0.42` if your robot is at 192.168.0.42).
Set up the backend:
```bash
cd backend && python3 -m venv .venv
source .venv/bin/activate
pip3 install pip --upgrade
pip3 install -r requirements.txt
```
Set up the frontend:
```bash
cd ../frontend
npm install
```
Start the ROS bridge and all the custom ROS services that implement the message protocol described in the architecture section. For a standard ROS 1 setup, the bridge itself is typically started with `roslaunch rosbridge_server rosbridge_websocket.launch` from the rosbridge_suite package.
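For illustration, here is a minimal sketch of what one such custom service could look like: a node publishing CPU and memory usage on the `/cpu/state` and `/memory/state` topics from the table above (the use of psutil and the 1 Hz rate are assumptions, not necessarily the repo's actual implementation):

```python
#!/usr/bin/env python3
import psutil
import rospy
from std_msgs.msg import Float32

if __name__ == "__main__":
    rospy.init_node("system_stats")
    cpu_pub = rospy.Publisher("/cpu/state", Float32, queue_size=1)
    mem_pub = rospy.Publisher("/memory/state", Float32, queue_size=1)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        # psutil reports percentages in the 0.0-100.0 range the FE expects.
        cpu_pub.publish(Float32(data=psutil.cpu_percent()))
        mem_pub.publish(Float32(data=psutil.virtual_memory().percent))
        rate.sleep()
```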
Start the backend:
```bash
cd robo-perception/backend && source .venv/bin/activate
./run.sh
```
Start the frontend:
```bash
cd robo-perception/frontend && npm start
```
Open your web browser and go to `http://[ROBO_PERCEPTION_SERVICE_IP_ADDRESS]:3000`.
Run the following command to build the FE and BE images:
```bash
docker compose build
```
TODO:
- Polish FE and BE code
- Add local deployment instructions
- Add simulated environment instructions
- Add Docker instructions
- Migrate to ROS2
- Add interactive map for SLAM and navigation
- Add gamepad support