Here is an example:

left network:

network:
  - name: features_extractor
    input: permute(STATES["camera"], (0, 3, 1, 2))  # PyTorch NHWC -> NCHW
    layers:
      - conv2d: {out_channels: 32, kernel_size: 8, stride: 4, padding: 0}
      - conv2d: {out_channels: 64, kernel_size: 4, stride: 2, padding: 0}
      - conv2d: {out_channels: 64, kernel_size: 3, stride: 1, padding: 0}
      - flatten
    activations: relu
  - name: net_state
    input: STATES["robot-state"]
    layers: [128, 128, 64]
    activations: elu
  - name: net
    input: concatenate([features_extractor, net_state])
    layers: [512, 256, 128]
    activations: elu
output: ACTIONS

right network:

network:
  - name: features_extractor
    input: permute(STATES["camera"], (0, 3, 1, 2))  # PyTorch NHWC -> NCHW
    layers:
      - conv2d: {out_channels: 32, kernel_size: 8, stride: 4, padding: 0}
      - conv2d: {out_channels: 64, kernel_size: 4, stride: 2, padding: 0}
      - conv2d: {out_channels: 64, kernel_size: 3, stride: 1, padding: 0}
      - flatten
    activations: relu
  - name: net
    input: concatenate([features_extractor, STATES["robot-state"]])
    layers: [512, 256, 128]
    activations: elu
output: ACTIONS

Refer to https://skrl.readthedocs.io/en/develop/api/utils/model_instantiators.html#inputs for more details about model definitions in skrl using Isaac Lab .yaml configs.
Also, see skrl's skrl_camera_ppo_cfg.yaml file for the Isaac-Cartpole-RGB-Camera-Direct-v0 example (CNN).
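As a quick sanity check on the conv2d stack in the configs above, the size of the flattened feature vector can be computed by hand with the standard convolution output-size formula. A minimal sketch, assuming (hypothetically; the thread does not state the camera resolution) an 84x84 input image:

```python
def conv_out(size, kernel, stride, padding=0):
    """Output size of a conv2d along one spatial dimension."""
    return (size + 2 * padding - kernel) // stride + 1

# Assumed 84x84 camera image; the three conv2d layers from the config above
h = w = 84
for kernel, stride in [(8, 4), (4, 2), (3, 1)]:
    h = conv_out(h, kernel, stride)
    w = conv_out(w, kernel, stride)

# 64 output channels from the last conv2d, flattened
print(64 * h * w)  # -> 3136
```

With a different camera resolution the flattened size changes accordingly, which is worth checking before sizing the downstream `net` layers.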
-
|
Hi, apologies if this question has been asked before or if the answer is obvious. I'm still relatively new to reinforcement learning, robotics, and simulation, having started just a month ago.

I'm currently working on a reinforcement learning task for a drone that needs to navigate through a level to reach a desired position on the opposite side. To achieve this, I've added a camera to the drone for perception. My question is: how should I approach training for such an application? In your opinion, what would be the fastest and most effective way to implement this? Would it be better to:
Thanks in advance for your responses |
-
|
I would also like to know this. Currently the quadcopter example only has a direct version, not a manager-based one, and I am not sure a manager-based version is even possible, since it does not seem implementable through ActionsCfg's EffortActionCfg, etc. Therefore, I personally think a ready-made visual quadcopter may be difficult to implement. |
-
|
This is a great discussion. We will move it into our Discussions section for the team to follow up on. Thanks for posting this. |
-
|
@JulienHansen, in order to provide an appropriate solution, it is necessary to know in detail the expected observation and action spaces you want to use |
-
|
Hi @Toni-SM, thanks for mentioning a few key points for training with vision-based RL in Isaac Lab. Do you have any suggestions or ideas regarding this problem statement? |
Well, I asked for an initial network structure to get an idea of a possible skrl configuration definition.
Again, if you are going to use skrl you need to use version 1.4.0 (not released yet): the develop branch.