As I've discovered, each step runs in a fresh, non-login, non-interactive shell (explicitly instructed not to read from the `.bash_profile` and `.bashrc` files). I tried adding an explicit `shell: bash {0}` to each step, but this does not seem to have made any difference.
Below is an example workflow.
For nvm: the `--install` argument, passed when sourcing nvm into the shell, tells nvm to auto-install and switch to the version of Node found in the `.nvmrc` file.

For rvm: the `rvm_install_on_use_flag` environment variable tells rvm to auto-install and switch to the version of Ruby found in the `.ruby-version` file.
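For reference, both of those version files are plain one-line pins at the repository root. A sketch (the Ruby version shown is purely illustrative; the Node version is the one expected later in this question):

```shell
# Create the version pin files that nvm/rvm auto-detect.
# 12.1.0 is the Node version expected below; 2.6.3 is a placeholder Ruby version.
echo "12.1.0" > .nvmrc
echo "2.6.3" > .ruby-version

cat .nvmrc          # -> 12.1.0
cat .ruby-version   # -> 2.6.3
```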
```yaml
frontend:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout repository
      uses: actions/checkout@v1

    - name: Check versions (before)
      shell: bash {0}
      run: |
        node --version
        npm --version

    - name: Install nvm
      shell: bash {0}
      run: |
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
        export NVM_DIR="$HOME/.nvm"
        echo "source $NVM_DIR/nvm.sh --install" >> "$HOME/.bash_profile"

    - name: Check versions (after)
      shell: bash {0}
      run: |
        node --version
        npm --version

backend:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout repository
      uses: actions/checkout@v1

    - name: Install rvm
      shell: bash {0}
      run: |
        gpg --keyserver hkp://pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
        curl -sSL https://get.rvm.io | bash -s stable
        export rvm_install_on_use_flag=1
        echo "source $HOME/.rvm/scripts/rvm" >> "$HOME/.bash_profile"

    - name: Check versions (after)
      shell: bash {0}
      run: |
        rvm --version
        ruby --version
        gem --version
        bundler --version
```
For the frontend job, I had expected `node --version` to be v10.16.3 in the 'before' step and v12.1.0 in the 'after' step, but it is v10.16.3 in both. It seems that the changes to `.bash_profile` are not seen by other steps? (For what it's worth, I also tried `.bashrc` instead, to no avail.)
Is there a way that I can have the NVM/RVM scripts sourced into all subsequent shell processes in a job?
(Also, having to repeat `shell: bash {0}` in every step is pretty annoying.)
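To make the problem concrete, the shell behavior above can be reproduced outside of GitHub Actions. A sketch (the throwaway `HOME` and `DEMO_VAR` are illustrative, not part of the original workflow):

```shell
# A plain (non-login, non-interactive) bash never reads ~/.bash_profile,
# which is why `shell: bash {0}` on each step changes nothing.
export HOME="$(mktemp -d)"                          # throwaway HOME for the demo
echo 'export DEMO_VAR=from_profile' > "$HOME/.bash_profile"

bash -c  'echo "${DEMO_VAR:-unset}"'    # prints: unset (profile not sourced)
bash -lc 'echo "${DEMO_VAR:-unset}"'    # prints: from_profile (-l sources the profile)
```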
You were close: you can actually just add `shell: bash -l {0}` (the `-l` was missing from yours) for steps where you need to source the `.bash_profile`. See: https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html

I am also assuming that the install script primarily exports/updates the PATH to include the nvm install location, so you'd only need to add the shell option for steps that need to exec node.

The default was to invoke bash non-interactive and non-login, since semantically that's closest to what we are doing (just running scripts), and I wanted to avoid adding potential side effects, unexpected behavior, re-running the same startup files repeatedly for each step, etc. The `shell` option was intended to let you manage these types of scenarios as necessary. Sourcing a profile is not really the expected default behavior, since we have setup actions that download/install tools and modify the PATH so that the installed tools are available everywhere, not just from bash steps that pass down the environment; the PATH is tracked by the runner across steps.
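Putting the answer together, here is a sketch of the fixed frontend job. As an aside, newer versions of GitHub Actions support a job-level `defaults.run.shell`, which removes the need to repeat the shell option on every step (assuming every step should get a login shell):

```yaml
frontend:
  runs-on: ubuntu-latest
  defaults:
    run:
      shell: bash -l {0}   # -l: login shell, so ~/.bash_profile is sourced in each step
  steps:
    - uses: actions/checkout@v1

    - name: Install nvm
      run: |
        curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
        echo 'source "$HOME/.nvm/nvm.sh" --install' >> "$HOME/.bash_profile"

    - name: Check versions (after)
      run: |
        node --version   # should now report the version pinned in .nvmrc
        npm --version
```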
","upvoteCount":4,"url":"https://github.com/orgs/community/discussions/25061#discussioncomment-3246364"}}}How to share shell profile between steps? (or: how to use nvm/rvm in steps?) #25061
I have a workflow consisting of two jobs: frontend (Node.js) and backend (Ruby), that I'm attempting to migrate from TravisCI. The frontend code uses NVM with an `.nvmrc` file; the backend code uses RVM with a `.ruby-version` file. TravisCI build environments come with both NVM and RVM pre-installed, and the version of Ruby specified in `.ruby-version` is detected and installed automatically. In GitHub Actions virtual environments, default versions of both Ruby and Node.js are pre-installed; however, version managers such as NVM/RVM are not. The 'official' way to install a different Ruby/Node version in GitHub Actions appears to be via actions/setup-ruby and actions/setup-node. Unfortunately, those actions expect the version to be specified in the workflow file rather than read from `.ruby-version`/`.nvmrc`. To emulate TravisCI, I'm trying to install both NVM and RVM as the first steps in their respective jobs, so that the jobs can use the `.nvmrc` and `.ruby-version` files to detect and install the appropriate runtime versions. The challenge I have is that both NVM and RVM expect to be sourced into the shell after installation, e.g. `source "$HOME/.nvm/nvm.sh"`.
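For context, the official setup actions mentioned above take the version as a workflow input, along these lines (inputs shown as they were for the v1 actions and may differ in later versions):

```yaml
    - uses: actions/setup-node@v1
      with:
        node-version: '12.1.0'   # hardcoded in the workflow, not read from .nvmrc

    - uses: actions/setup-ruby@v1
      with:
        ruby-version: '2.6'      # likewise not read from .ruby-version
```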
Replies: 3 comments
For a working sample snippet: adding a local Composer bin directory to the PATH. The Drupal step would fail without `shell: bash -l {0}`.
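The original snippet was lost in extraction; a minimal sketch of what such a setup could look like (the Composer bin path, step names, and `drupal` command are illustrative assumptions):

```yaml
    - name: Add Composer bin to PATH
      run: echo 'export PATH="$HOME/.composer/vendor/bin:$PATH"' >> "$HOME/.bash_profile"

    - name: Run Drupal step        # hypothetical step; any tool from the Composer bin dir
      shell: bash -l {0}           # without -l the PATH change in .bash_profile is never seen
      run: drupal --version
```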
Putting this here as it's an issue I ran into. If you don't also set `-e`, the job status may not update correctly (this may only happen if the script is run in a composite action, though), i.e. use something like `shell: bash -le {0}`.
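A sketch of combining the flags (the step content is illustrative):

```yaml
    - name: Run tests
      shell: bash -le {0}    # -l: login shell; -e: exit on the first failing command
      run: |
        false                 # without -e, only the last command's status counts,
        echo "never reached"  # so the step could still report success
```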