
Projecting images using class-conditional GANs #148

Open
yulonglin opened this issue Jul 21, 2021 · 4 comments

Comments

@yulonglin

Is it possible to project images using a class-conditional GAN in this repo? Looking at the code, there doesn't seem to be support for this.

@hkhailee

hkhailee commented Jul 22, 2021

Personally, when attempting to project an image into a trained class-conditional GAN, I receive an AttributeError: 'NoneType' object has no attribute 'ndim'. This is touched on in #99, but that issue only explains how the error happens and doesn't offer a solution.

Additionally: it appears that even if you project an image into the latent space of a non-conditional GAN (using the same image data without labels) and then try to generate an image with the conditional GAN using --projected_w instead of a --seed, there is an assertion error, likely due to a size mismatch between the projected latent vectors and the network.
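
A rough way to check whether this is just a shape mismatch is to compare the saved latents against the network being loaded. This sketch assumes the projected_w.npz layout written by projector.py and the num_ws / w_dim attributes exposed by the generator in this repo:

```python
# Rough sketch: compare a saved projection against a loaded generator.
# Assumes projected_w.npz stores the latents under the key 'w'
# (as projector.py saves them) and that G exposes num_ws / w_dim.
import numpy as np
import torch

import dnnlib
import legacy

device = torch.device('cuda')
network_pkl = 'network.pkl'          # placeholder path to the conditional network
with dnnlib.util.open_url(network_pkl) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device)

ws = np.load('projected_w.npz')['w']  # expected shape [N, G.num_ws, G.w_dim]
print('saved w shape :', ws.shape)
print('expected shape:', (ws.shape[0], G.num_ws, G.w_dim))
# generate.py checks ws.shape[1:] == (G.num_ws, G.w_dim) before synthesis,
# so any difference printed here is the likely source of the assertion error.
```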

@Gass2109

Gass2109 commented Jul 23, 2021

To project target images using a class-conditional GAN, we need to pass the label information to the generator's mapping network. That is, we can modify this line in projector.py:

w_samples = G.mapping(torch.from_numpy(z_samples).to(device), None)

to, for example:

w_samples = G.mapping(torch.from_numpy(z_samples).to(device), torch.cat([label]*w_avg_samples, dim=0))

where label is the one-hot encoded vector of the class information (as used in generate.py).

Also, I found that the projection gives acceptable results (with slower optimization) even if we don't specify a real label and instead pass torch.zeros([w_avg_samples, G.c_dim]).
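
To make that concrete, here is a minimal sketch of the W-statistics part of projector.py with the label plumbed through. The class_idx argument and the helper name are my additions, not part of the original script:

```python
# Minimal sketch (not the original projector.py): compute the W statistics of a
# class-conditional generator by conditioning the mapping network on a one-hot
# label. `class_idx` and `compute_w_stats` are hypothetical additions.
import numpy as np
import torch

def compute_w_stats(G, class_idx, device, w_avg_samples=10000):
    # One-hot label of shape [1, G.c_dim] for the target class.
    label = torch.zeros([1, G.c_dim], device=device)
    label[0, class_idx] = 1

    # Sample z and map to W, conditioning on the repeated label
    # instead of passing None as the unconditional code path does.
    z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim)
    w_samples = G.mapping(
        torch.from_numpy(z_samples).to(device),
        torch.cat([label] * w_avg_samples, dim=0),   # [w_avg_samples, G.c_dim]
    )                                                # [w_avg_samples, num_ws, w_dim]
    w_samples = w_samples[:, :1, :].cpu().numpy().astype(np.float32)
    w_avg = np.mean(w_samples, axis=0, keepdims=True)                  # [1, 1, w_dim]
    w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5
    return w_avg, w_std
```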

@yulonglin
Author

@Gass2109 Thanks! I'll try that.

@hkhailee Yes, I think the error shows up because None is passed in place of the "conditioning" input of the conditional GAN. In the simplest case, it can be a one-hot encoded matrix, as shown by the code from @Gass2109.

This PR also uses the same general idea: #79
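
For reference, if the target class is available as an integer index, that one-hot conditioning matrix can also be built in a single call rather than by repeating one row. The names below are placeholders, not values from the repo:

```python
# Small sketch: build a one-hot conditioning matrix [w_avg_samples, c_dim]
# from an integer class index. `class_idx` is a hypothetical target class;
# use G.c_dim of the loaded generator for `c_dim`.
import torch
import torch.nn.functional as F

class_idx = 3          # hypothetical target class
c_dim = 10             # replace with G.c_dim
w_avg_samples = 10000

c = F.one_hot(
    torch.full((w_avg_samples,), class_idx, dtype=torch.long),
    num_classes=c_dim,
).float()              # shape [w_avg_samples, c_dim], one 1.0 per row
```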

@tengshaofeng

tengshaofeng commented Jun 14, 2022

@yulonglin @hkhailee @Gass2109 @nurpax
You have provided a lot of useful information. But when I load the weights from "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res512-mirror-stylegan2-noaug.pkl" and run projector.py, the synthesized image is different from the target image. Why? I need your help.
Target image:

[image: target]

Synthesized image:

[image: proj]

train loss:

step 994/1000: dist 0.22 loss 0.22
step 995/1000: dist 0.22 loss 0.22
step 996/1000: dist 0.22 loss 0.22
step 997/1000: dist 0.22 loss 0.22
step 998/1000: dist 0.22 loss 0.22
step 999/1000: dist 0.22 loss 0.22
step 1000/1000: dist 0.22 loss 0.22
Elapsed: 75.3 s
