Hello, as far as I understand, the feature obtained from the CLIP encoder has shape [1, 512]. How do you turn it into a (c, h, w) shape and incorporate it into the diffusion model? Thanks for your answer.
Hi, cross-attention is used here, the same as in Stable Diffusion. The image features are reshaped to (b, h*w, c); see attention.py for the concrete code.
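For reference, here is a minimal sketch of a Stable Diffusion-style cross-attention block that consumes a (1, 512) CLIP feature as context. The class name, projection layers, and head sizes below are illustrative assumptions, not necessarily what this repo's attention.py does:

```python
# Sketch of Stable Diffusion-style cross-attention with a CLIP embedding as context.
# Names and dimensions are assumptions for illustration, not the repo's exact code.
import torch
import torch.nn as nn
from einops import rearrange

class CrossAttention(nn.Module):
    def __init__(self, query_dim, context_dim, heads=8, dim_head=64):
        super().__init__()
        inner_dim = heads * dim_head
        self.heads = heads
        self.scale = dim_head ** -0.5
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)

    def forward(self, x, context):
        # x: image features (b, h*w, c); context: CLIP embedding (b, 1, 512)
        h = self.heads
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = rearrange(attn @ v, '(b h) n d -> b n (h d)', h=h)
        return self.to_out(out)

# The feature map (b, c, h, w) is flattened to (b, h*w, c) before attention,
# and the (1, 512) CLIP feature is unsqueezed to (b, 1, 512) to act as context.
b, c, hh, ww = 1, 320, 32, 32
feat = torch.randn(b, c, hh, ww)
x = rearrange(feat, 'b c h w -> b (h w) c')          # (1, 1024, 320)
clip_emb = torch.randn(b, 512).unsqueeze(1)          # (1, 1, 512)
out = CrossAttention(query_dim=c, context_dim=512)(x, clip_emb)
print(out.shape)                                     # torch.Size([1, 1024, 320])
```

The key point is that the CLIP feature never needs to become (c, h, w): it only supplies the keys and values, while the queries come from the flattened image features.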
Hello, following up on your reply: for the (1, 512) feature from the CLIP encoder, in the cross-attention line `q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))`, n == 1 for that feature, right? I'm not sure whether my understanding is correct.
Hi, I don't remember the details anymore; you can print the shapes before and after the transformation to check.
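For example, a quick check along those lines might look like this (the projection size and head count here are assumed for illustration, not taken from the repo):

```python
# Print shapes before and after the rearrange for a (1, 512) CLIP feature
# used as context. Projection dims and head count are assumptions.
import torch
from einops import rearrange

heads, dim_head = 8, 64
context = torch.randn(1, 512).unsqueeze(1)             # (b=1, n=1, 512)
to_k = torch.nn.Linear(512, heads * dim_head, bias=False)

k = to_k(context)
print(k.shape)                                         # torch.Size([1, 1, 512])
k = rearrange(k, 'b n (h d) -> (b h) n d', h=heads)
print(k.shape)                                         # torch.Size([8, 1, 64])
# n stays 1 for k and v (the CLIP context), while n == h*w for q (the image features).
```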