-
Notifications
You must be signed in to change notification settings - Fork 4
Fix DecomonMaxPooling2D #174
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Open
nhuet
wants to merge
8
commits into
airbus:refactor
Choose a base branch
from
nhuet:maxpooling
base: refactor
Could not load branches
Branch not found: {{ refName }}
Loading
Could not load tags
Nothing to show
Loading
Are you sure you want to change the base?
Some commits from the old base branch may be removed from the timeline,
and old review comments may become outdated.
Open
Conversation
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
It enables finding the afinne representation of the linear component of some layers (e.g. MaxPooling2D)
For forward affine propagation, we want to delay oracle bound computation after passing through linear block. We try here to do it with less code duplication - we stop copying call_forward and using a trick to pass perturbation_domain_inputs instead of input_constant_bounds - instead we define a flag for a DecomonLayer defaulting to False which can skip the oracle and make the perturbation_domain_inputs to be passed explicitely and officially to forward_affine_propagate() - we also thighten the oracle bounds by propagating also the ibp ones throudh the linear block in hybrid mode
In works on some machines and not on others.
Error msg:
.tox/py39-linux-tf/lib/python3.9/site-packages/jacobinet/layers/pooling/max_pooling2d.py:92: in __init__
backward_conv2d = BackwardDepthwiseConv2D(layer=self.conv_op)
.tox/py39-linux-tf/lib/python3.9/site-packages/jacobinet/layers/convolutional/depthwise_conv2d.py:101: in __init__
K.repeat(conv_t_i(K.zeros([1] + split_shape)), c_in, axis=self.axis)[0]
.tox/py39-linux-tf/lib/python3.9/site-packages/keras/src/utils/traceback_utils.py:122: in error_handler
raise e.with_traceback(filtered_tb) from None
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
op_name = b'Conv2DBackpropInput', num_outputs = 1
inputs = [<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 1, 6, 2], dtype=int32)>, <tf.Tensor: shape=(2, 2, 1, 4), dtype=f...[0.]],
[[0.],
[0.],
[0.]],
[[0.],
[0.],
[0.]]]], dtype=float32)>]
attrs = ('T', 1, 'strides', [1, 1, 2, 2], 'use_cudnn_on_gpu', True, ...)
ctx = <tensorflow.python.eager.context.Context object at 0x7f010e05dbb0>
name = None
def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
"""Execute a TensorFlow operation.
Args:
op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code) to
execute.
num_outputs: The number of outputs of the operation to fetch. (Explicitly
provided instead of being inferred for performance reasons).
inputs: A list of inputs to the operation. Each entry should be a Tensor, or
a value which can be passed to the Tensor constructor to create one.
attrs: A tuple with alternating string attr names and attr values for this
operation.
ctx: The value of context.context().
name: Customized name for the operation.
Returns:
List of output Tensor objects. The list is empty if there are no outputs
Raises:
An exception on error.
"""
device_name = ctx.device_name
# pylint: disable=protected-access
try:
ctx.ensure_initialized()
> tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
inputs, attrs, num_outputs)
E tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling Conv2DTranspose.call().
E
E {{function_node __wrapped__Conv2DBackpropInput_device_/job:localhost/replica:0/task:0/device:CPU:0}} Conv2DCustomBackpropInputOp only supports NHWC. [Op:Conv2DBackpropInput]
E
E Arguments received by Conv2DTranspose.call():
E • inputs=tf.Tensor(shape=(1, 4, 3, 1), dtype=float32)
.tox/py39-linux-tf/lib/python3.9/site-packages/tensorflow/python/eager/execute.py:53: InvalidArgumentError
----------------------------- Captured stderr call -----------------------------
2025-09-15 12:01:06.482705: I tensorflow/core/framework/local_rendezvous.cc:407] Local rendezvous is aborting with status: INVALID_ARGUMENT: Conv2DCustomBackpropInputOp only supports NHWC.
876e785 to
dc84f1e
Compare
Skip for padding==same + forward affine + channels_last in unitary_layers tests: wrong bounds Skip autorlirpa/tests with padding == "same" + macos runner: wrong forward affine bounds
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.
This suggestion is invalid because no changes were made to the code.
Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes.
Only one suggestion per line can be applied in a batch.
Add this suggestion to a batch that can be applied as a single commit.
Applying suggestions on deleted lines is not supported.
You must change the existing code in this line in order to create a valid suggestion.
Outdated suggestions cannot be applied.
This suggestion has been applied or marked resolved.
Suggestions cannot be applied from pending reviews.
Suggestions cannot be applied on multi-line comments.
Suggestions cannot be applied while the pull request is queued to merge.
Suggestion cannot be applied right now. Please check back later.
get_affine_representationmust be called onlinear_block=> we add an optional arglayerto it.call_forward(): instead of overwritting it (and duplicating most of the code fromDecomonLayer), we add a flagskip_forward_oracleto postpone the computation of constant bounds to after propagating through the linear block. And thus passperturbation_domain_inputstoforward_affine_propagate. (we also take the best bounds among ibp and affine ones and get tighter constant bounds for max block inputs)argmaxlimitation in tensorflow (max 7 axes) by merging axes before argmax inmax_primeand then splitting them backSome problems (wrong bounds) remain , we skip for now the corresponding tests: