First of all, thank you for your valuable contribution! I am currently studying your code and have a question about a particular section. Could you please help me understand the following line of code:
embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1)
I noticed that prefix_projections and embedding_text are concatenated along dim=1, which I assume is the sequence dimension. What is the purpose of combining these two tensors? Is the prefix serving as a prompt?
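To check my understanding, here is a minimal sketch of what I believe this line does. The shapes are made up for illustration, and I am assuming the usual (batch, seq_len, embed_dim) layout:

```python
import torch

# Hypothetical shapes, assuming a (batch, seq_len, embed_dim) layout
batch, prefix_len, text_len, embed_dim = 2, 10, 20, 768

# prefix_projections: features projected into the embedding space
prefix_projections = torch.randn(batch, prefix_len, embed_dim)
# embedding_text: token embeddings of the text
embedding_text = torch.randn(batch, text_len, embed_dim)

# Concatenating along dim=1 prepends the prefix to the text sequence,
# producing one longer sequence of embeddings
embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1)
print(embedding_cat.shape)  # torch.Size([2, 30, 768])
```

If that reading is right, the prefix tokens sit in front of the text tokens so the model attends to them like a (soft) prompt. Please correct me if I have misunderstood.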
Once again, thank you for your patience and contribution!
Best regards