I am getting the following error with the latest code from the main branch. Note that it was previously possible to run it without a GPU. Thanks,
WARNING: You are welcome to use the default MSA server, however keep in mind that it's a
limited shared resource only capable of processing a few thousand MSAs per day. Please
submit jobs only from a single IP address. We reserve the right to limit access to the
server case-by-case when usage exceeds fair use. If you require more MSAs: You can
precompute all MSAs with `colabfold_search` or host your own API and pass it to `--host-url`
Traceback (most recent call last):
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/colabfold/batch.py", line 1281, in run
jax.tools.colab_tpu.setup_tpu()
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/tools/colab_tpu.py", line 20, in setup_tpu
raise RuntimeError(
RuntimeError: jax.tools.colab_tpu.setup_tpu() was required for older JAX versions running on older generations of TPUs, and should no longer be used.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 874, in backends
backend = _init_backend(platform)
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 965, in _init_backend
backend = registration.factory()
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 663, in factory
return xla_client.make_c_api_client(plugin_name, updated_options, None)
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jaxlib/xla_client.py", line 199, in make_c_api_client
return _xla.get_c_api_client(plugin_name, options, distributed_client)
jaxlib.xla_extension.XlaRuntimeError: FAILED_PRECONDITION: No visible GPU devices.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/localcolabfold/colabfold-conda/bin/colabfold_batch", line 8, in <module>
sys.exit(main())
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/colabfold/batch.py", line 2046, in main
run(
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/colabfold/batch.py", line 1286, in run
if jax.local_devices()[0].platform == 'cpu':
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 1135, in local_devices
process_index = get_backend(backend).process_index()
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 1011, in get_backend
return _get_backend_uncached(platform)
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 990, in _get_backend_uncached
bs = backends()
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/jax/_src/xla_bridge.py", line 890, in backends
raise RuntimeError(err_msg)
RuntimeError: Unable to initialize backend 'cuda': FAILED_PRECONDITION: No visible GPU devices. (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)
Traceback (most recent call last):
File "/usr/local/localcolabfold/colabfold-conda/bin/rosie", line 4, in <module>
__import__('pkg_resources').run_script('rosie==0.0.1', 'rosie')
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/pkg_resources/__init__.py", line 752, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/pkg_resources/__init__.py", line 1729, in run_script
exec(script_code, namespace, namespace)
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/rosie-0.0.1-py3.10.egg/EGG-INFO/scripts/rosie", line 6, in <module>
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/rosie-0.0.1-py3.10.egg/rosie/__init__.py", line 79, in main
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/rosie-0.0.1-py3.10.egg/rosie/__init__.py", line 50, in execute_flag_file
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/site-packages/rosie-0.0.1-py3.10.egg/rosie/alpha-red.py", line 23, in colabfold
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/local/localcolabfold/colabfold-conda/lib/python3.10/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'colabfold_batch ./alpha-fold.input ./alpha-fold.output --num-models 1 --model-type alphafold2_multimer_v3' returned non-zero exit status 1.
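A possible workaround, hinted at by the error message itself ("set JAX_PLATFORMS=cpu to skip this backend"), is to force JAX onto the CPU backend before `colabfold_batch` starts. The sketch below is only an illustration: the helper name `run_colabfold_cpu` is hypothetical, the command-line flags are copied from the failing command in the traceback above, and whether CPU-only prediction still works with the current colabfold release is not confirmed.

```python
import os
import subprocess

def run_colabfold_cpu(input_path: str, output_path: str) -> bytes:
    # Copy the current environment and ask JAX to skip the CUDA backend,
    # following the "set JAX_PLATFORMS=cpu" suggestion in the error above.
    env = dict(os.environ, JAX_PLATFORMS="cpu")
    cmd = [
        "colabfold_batch", input_path, output_path,
        "--num-models", "1",
        "--model-type", "alphafold2_multimer_v3",
    ]
    # check_output raises CalledProcessError, as seen in the wrapper's
    # traceback, if colabfold_batch still exits with a non-zero status.
    return subprocess.check_output(cmd, env=env)

# Example invocation mirroring the failing command:
# run_colabfold_cpu("./alpha-fold.input", "./alpha-fold.output")
```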