In GeoMAN.py, lines 262 to 273:

```python
# multiply the attention weights with the original input
local_x = local_attn * local_inp
global_x = global_attn * global_inp
# run the BasicLSTM with the new input
cell_output, state = cell(tf.concat([local_x, global_x], axis=1), state)
# run the attention mechanism
with tf.variable_scope('local_spatial_attn'):
    local_attn = local_attention(state)
with tf.variable_scope('global_spatial_attn'):
    global_attn = global_attention(state)
attn_weights.append((local_attn, global_attn))
```

Does this only consider the cell state when calculating attention? In Equation (1) of the paper, the cell state and the hidden state are concatenated.
The variable "state" actually contains both the cell state and the hidden state.