
Even when using the model provided by the author, I still get the error "element 0 of tensors does not require grad and does not have a grad_fn" #7

Open
@xkai-boy

Description


Traceback (most recent call last):
File "/share_data/DEALRec-main/code/prune/prune.py", line 15, in
effort = get_effort_score(args)
File "/share_data/DEALRec-main/code/prune/effort_score.py", line 199, in get_effort_score
gradients = trainer.get_grad(resume_from_checkpoint=resume_from_checkpoint)
File "/share_data/DEALRec-main/code/prune/effort_util.py", line 361, in get_grad
return inner_training_loop(
File "/share_data/DEALRec-main/code/prune/effort_util.py", line 654, in _inner_training_loop
gradient = self.training_step(model, inputs)
File "/share_data/DEALRec-main/code/prune/effort_util.py", line 706, in training_step
self.accelerator.backward(torch.mean(loss), retain_graph=True)
File "/root/anaconda3/envs/DEALRec/lib/python3.9/site-packages/accelerate/accelerator.py", line 1921, in backward
self.scaler.scale(loss).backward(**kwargs)
File "/root/anaconda3/envs/DEALRec/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward
torch.autograd.backward(
File "/root/anaconda3/envs/DEALRec/lib/python3.9/site-packages/torch/autograd/__init__.py", line 200, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Dear author, what could be causing this? It seems that no computation graph was built or stored earlier in the forward pass, so at this point there is no tensor left for the backward computation to operate on.
