You mentioned in your paper Code2vec that one of the main challenges in your work is how to decompose a program into smaller building blocks that are:

1. large enough to be meaningful, and
2. small enough to repeat across programs.

You then described these two requirements as a bias-variance tradeoff. Could you please explain this idea in more detail?
Hi @Avv22,
Thank you for your interest in our work, and thank you for reading the paper carefully.
This is a great question!
The bias-variance trade-off is a general concept in machine learning that captures the central tension in designing features or representations for a model:
features that are too specific may describe the training data very well, but cause overfitting (high variance);
features that are too general or too simple may occur across many examples, but may be insufficiently expressive (high bias).
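As an aside, this trade-off can be sketched with a toy example that has nothing to do with code2vec itself: fitting polynomials of different degrees to noisy data. A low-degree model (analogous to overly simple features) underfits, while a high-degree model (analogous to overly specific features) hugs the training points but generalizes worse. The dataset and degrees below are arbitrary choices for illustration:

```python
# Toy illustration of the bias-variance trade-off:
# fit polynomials of increasing degree to noisy samples of a sine wave.
import numpy as np

rng = np.random.default_rng(0)

def noisy_sine(n):
    """Sample n noisy points from sin(2*pi*x) on [0, 1]."""
    x = rng.uniform(0, 1, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

x_train, y_train = noisy_sine(15)   # small training set
x_test, y_test = noisy_sine(200)    # held-out data

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Higher degree => training error can only go down (the bases are nested),
# but test error eventually rises again as the model starts fitting noise.
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree}: train={mse(coeffs, x_train, y_train):.4f} "
          f"test={mse(coeffs, x_test, y_test):.4f}")
```

The degree-1 fit is "too general" (it cannot express the sine shape at all), while the degree-9 fit is "too specific" to these 15 training points. Designing program representations faces the same dilemma: very specific building blocks describe individual programs well but rarely repeat, while very generic ones repeat everywhere but carry little information.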