This project provides web-based teleoperation for different types of robots, such as the MiniPupper. Video streaming is implemented via WebRTC, and teleoperation is done via ROS. Note that the backend expects a depth camera like the OAK-D Lite to be already connected to your robot (only Luxonis products are supported at the moment). If you don't have one yet, you can still teleoperate the robot via keyboard without a video stream, or use a simulator (guide TBD).
*Demo video: `rp_demo_cut1.mp4`*
The most recent implementation consists of the following components:
- backend: a Python BE that streams camera video to the remote browser via WebRTC; it uses the DepthAI API on real hardware and Gazebo camera images in simulation mode;
- frontend: a ReactJS FE that uses roslibjs to communicate with the ROS bridge and the WebRTC API for camera streaming;
- rosbridge: a WebSocket proxy between the FE/BE and ROS that handles the following messages.
| Source | Pub/Sub | Topic | Message Type | Example |
|---|---|---|---|---|
| FE | Pub | /key | std_msgs/String | i/I/1/, |
| FE | Pub | /robot_pose/change | std_msgs/String | sit/stand |
| FE | Sub | /teleop_status | std_msgs/Bool | true/false |
| FE | Sub | /robot_pose/is_standing | std_msgs/Bool | true/false |
| FE | Sub | /battery/state | sensor_msgs/BatteryState | |
| FE | Sub | /memory/state | std_msgs/Float32 | 0.0-100.0 |
| FE | Sub | /cpu/state | std_msgs/Float32 | 0.0-100.0 |
| BE | Sub | /camera/color/image_raw/compressed | sensor_msgs/CompressedImage | |
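On the wire, the FE exchanges these topics with rosbridge as JSON frames over WebSocket, following the public rosbridge v2 protocol. A minimal sketch of the framing (topic names are taken from the table above; the helper names are illustrative, not part of this repo):

```python
import json


def publish_msg(topic: str, msg: dict) -> str:
    """Build a rosbridge v2 'publish' frame for the given topic."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})


def subscribe_msg(topic: str, msg_type: str) -> str:
    """Build a rosbridge v2 'subscribe' frame for the given topic."""
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})


# Frames the FE would send over the rosbridge WebSocket:
key_frame = publish_msg("/key", {"data": "i"})
pose_frame = publish_msg("/robot_pose/change", {"data": "stand"})
battery_sub = subscribe_msg("/battery/state", "sensor_msgs/BatteryState")
```

In the actual FE this framing is handled by roslibjs; the sketch only shows what crosses the WebSocket.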
- ROS Bridge should be visible to a client to perform common operations via Web UI.
- You should provide `CompressedImage` to see a virtual camera stream from Gazebo in simulation mode.
- Only the `BatteryState.percentage` property is used for rendering battery stats.
- Teleoperation is impossible while the `is_standing` or `teleop_status` flag is `false`.
- Keyboard keys are sent in a raw format. You should implement a ROS subscriber which transforms a key to `cmd_vel`.
- `/robot_pose/change` is very robot-specific: the FE just sends either a `sit` or a `stand` command.
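Since keys arrive on `/key` in raw form, the subscriber on the robot side has to translate them into velocities. A minimal sketch of such a mapping (the bindings and speed constants are illustrative, not part of this repo; the rospy wiring is only indicated in comments):

```python
# Illustrative key bindings in the spirit of teleop_twist_keyboard:
# each key maps to (linear.x, angular.z) multipliers.
KEY_BINDINGS = {
    "i": (1.0, 0.0),   # forward
    ",": (-1.0, 0.0),  # backward
    "j": (0.0, 1.0),   # turn left
    "l": (0.0, -1.0),  # turn right
    "k": (0.0, 0.0),   # stop
}

LINEAR_SPEED = 0.2   # m/s  -- illustrative default
ANGULAR_SPEED = 0.8  # rad/s -- illustrative default


def key_to_twist(key: str) -> tuple:
    """Translate a raw key from /key into (linear.x, angular.z)."""
    lin, ang = KEY_BINDINGS.get(key, (0.0, 0.0))
    return (lin * LINEAR_SPEED, ang * ANGULAR_SPEED)


# In a real node you would subscribe to /key and publish a
# geometry_msgs/Twist on /cmd_vel, roughly:
#   rospy.Subscriber("/key", String,
#                    lambda m: pub.publish(make_twist(*key_to_twist(m.data))))
```

Unknown keys fall through to a stop command, which is a safer default than keeping the last velocity.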
Clone the source code:

```shell
git clone https://github.com/WaverleySoftware/robo-perception.git && cd robo-perception
```

Prepare required env files:

```shell
./generate_configs.sh [ROBOT_IP_ADDRESS]
```

The robot's IP is required for the FE to be able to communicate with the BE via a remote browser.
Set up the backend:

```shell
cd backend && python3 -m venv .venv
source .venv/bin/activate
pip3 install pip --upgrade
pip3 install -r requirements.txt
```

Set up the frontend:

```shell
cd ../frontend
npm install
```

Start the ROS bridge and all the custom ROS services that implement the message protocol described in the architecture section.
Start the backend:

```shell
cd robo-perception/backend && source .venv/bin/activate
./run.sh
```

Start the frontend:

```shell
cd robo-perception/frontend && npm start
```

Open your web browser and go to: `http://[ROBO_PERCEPTION_SERVICE_IP_ADDRESS]:3000`
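If the page doesn't load, it can help to check that the services are actually listening before debugging further. A small stdlib sketch (3000 is the FE dev-server port from above; 9090 is the default rosbridge WebSocket port, which your setup may override):

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example usage (replace the IP with your robot's address):
# for name, port in [("frontend", 3000), ("rosbridge", 9090)]:
#     print(name, "up" if is_port_open("192.168.1.42", port) else "down")
```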
Run the following command to build the FE and BE images:

```shell
docker compose build
```

- Polish FE and BE code
- Add local deployment instructions
- Add simulated environment instructions
- Add Docker instructions
- Migrate to ROS2
- Add interactive map for SLAM and navigation
- Add gamepad support
