In `ModernTCN-detection/models/ModernTCN.py`, function `forward_feature`:
```python
def forward_feature(self, x, te=None):
    B,M,L=x.shape
    x = x.unsqueeze(-2)
    for i in range(self.num_stage):
        B, M, D, N = x.shape
        x = x.reshape(B * M, D, N)
        if i==0:
            if self.patch_size != self.patch_stride:
                # stem layer padding
                pad_len = self.patch_size - self.patch_stride
                pad = x[:,:,-1:].repeat(1,1,pad_len)
                x = torch.cat([x,pad],dim=-1)
            x = x.reshape(B,M,1,-1).squeeze(-2)
            x = x.unfold(dimension=-1, size=self.patch_size, step=self.patch_stride)
            x = self.downsample_layers[i](x)
            x = x.permute(0,1,3,2)
        else:
            if N % self.downsample_ratio != 0:
                pad_len = self.downsample_ratio - (N % self.downsample_ratio)
                x = torch.cat([x, x[:, :, -pad_len:]],dim=-1)
                x = self.downsample_layers[i](x)
                _, D_, N_ = x.shape
                x = x.reshape(B, M, D_, N_)
        x = self.stages[i](x)
    return x
```

It seems that multi-scale downsampling is not actually applied here: when `N % self.downsample_ratio == 0`, the line `x = self.downsample_layers[i](x)` is never executed, because the condition `if N % self.downsample_ratio != 0:` guards the downsampling and reshaping as well as the padding. Is this an indentation error? That condition should only control the padding of `x`.
The corrected code might be as follows, with the downsampling moved out of the padding conditional:
```python
        else:
            if N % self.downsample_ratio != 0:
                pad_len = self.downsample_ratio - (N % self.downsample_ratio)
                x = torch.cat([x, x[:, :, -pad_len:]],dim=-1)
            x = self.downsample_layers[i](x)
            _, D_, N_ = x.shape
            x = x.reshape(B, M, D_, N_)
```
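To see the difference concretely, here is a hypothetical sketch of the control flow with plain integers standing in for tensor lengths (the function names are illustrative, not from the repository). In the buggy indentation, downsampling runs only on the padding path, so a length already divisible by the ratio passes through unchanged:

```python
def buggy_downsample(n, ratio):
    """Mimics the original indentation: downsample is inside the padding `if`."""
    downsampled = False
    if n % ratio != 0:
        pad_len = ratio - (n % ratio)
        n += pad_len      # padding
        n //= ratio       # downsample -- only reached when padding was needed
        downsampled = True
    return n, downsampled

def fixed_downsample(n, ratio):
    """Mimics the proposed fix: the `if` guards only the padding."""
    if n % ratio != 0:
        n += ratio - (n % ratio)   # padding only
    n //= ratio                    # downsample always runs
    return n

# With n = 8 and ratio = 2 (already divisible), the buggy version skips
# downsampling entirely, while the fixed version halves the length.
print(buggy_downsample(8, 2))   # (8, False) -- length unchanged
print(fixed_downsample(8, 2))   # 4
```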