volatile argument is not supported anymore. Use chainer.using_config #94
Comments
Not sure what the function of them being there was.
Same issue. @ChoclateRain, how did you remove the volatile & test entries? Did you remove the whole line? It doesn't work for me, thanks a lot.
Also had this issue; fixed it by downgrading the Chainer version.
This problem was caused by a version update of Chainer. In Chainer v2, the concept of a training mode was added. It is represented by a thread-local flag, chainer.config.train, which is part of the unified configuration. When chainer.config.train is True, Chainer functions run in training mode; otherwise they run in test mode. For example, BatchNormalization and dropout() behave differently in each mode.

In Chainer v1, this behavior was configured by the train or test argument of each function. This train/test argument has been removed in Chainer v2, so if your code uses it, you have to update it. In most cases, all you have to do is remove the train / test argument from any function calls. You can also find examples in the Chainer upgrade guide.
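For illustration, here is a minimal sketch of the migration (the input array is made up for the example): in v1 the mode was passed per call, while in v2 you control it through chainer.config.train, typically via the chainer.using_config context manager.

```python
import numpy as np
import chainer
import chainer.functions as F

x = chainer.Variable(np.random.rand(2, 3).astype(np.float32))

# Chainer v1 (no longer works in v2):
#   y = F.dropout(x, train=False)

# Chainer v2: drop the train/test argument and control the mode
# through the thread-local configuration instead.
y = F.dropout(x)  # runs in training mode while chainer.config.train is True

with chainer.using_config('train', False):
    y_test = F.dropout(x)  # dropout is a no-op in test mode
```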
What is your version of CUDA and cuDNN? @sebastianandreasson
Removing "volatile" from .py file (I removed volatile as a parameter of a chainer function) did not help me to resolve the issue. Has anyone else fixed this issue? chainer.Variable(np.asarray(x_test[perm[j:j + batchsize]])) |
@dianaow did you find any solution?
The solution is written here, I guess: "volatile argument is not supported anymore since v2. Instead, use chainer.no_backprop_mode()."

```python
import numpy as np
import chainer

x = chainer.Variable(np.array([1], np.float32))
with chainer.no_backprop_mode():
    y = x + 1
y.backward()
x.grad is None  # True, since no graph was built inside the block
```

Hence, wrap the part of the computation whose variables you used to create with volatile in chainer.no_backprop_mode().
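Applied to the evaluation snippet above, a minimal sketch might look like this (x_test, perm, batchsize, and model are hypothetical names assumed to exist in the surrounding training script):

```python
import numpy as np
import chainer

# Hypothetical test-evaluation loop; x_test, perm, batchsize, and model
# are assumed to come from the surrounding script.
with chainer.no_backprop_mode(), chainer.using_config('train', False):
    for j in range(0, len(x_test), batchsize):
        x = chainer.Variable(np.asarray(x_test[perm[j:j + batchsize]]))
        y = model(x)  # no computational graph is built, so no backprop
```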
Could you please send some example code?
I found a solution: I just deleted the code.
I am running into this error when trying to train, and I have no idea why.