Thank you for your interest in our work, and we really appreciate your effort to expand our template library. In our implementation, we use LLMs to extract templates from our self-curated math dataset, which is structured as problem summaries paired with related problems. Our specific prompts may therefore not transfer directly to your dataset.
However, we are happy to offer some general guidance and suggestions on how you can achieve this, tailored to different dataset types:
General Deductive Summarization (Varied Problem Sets): Use prompts like, "Analyze these problems, extract core logic and strategies, and create universal templates." This captures broad principles.
Task Decomposition (Detailed Solutions): Try, "Decompose solutions, identify key steps, analyze patterns, and synthesize techniques." This allows for granular analysis.
Type-Specific Comparison (Categorized Datasets): Consider, "Categorize problems, compare strategies, note similarities/differences, and develop type-specific templates, then a general framework." This highlights unique and shared strategies.
Heuristic/Dialectical Analysis (In-Depth Analysis): Use prompts like, "Act as a consultant, discuss insightful strategies, provide examples, limitations, and optimal approaches." This encourages deep exploration.
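Whichever strategy you pick, the prompt assembly itself is straightforward. Below is a minimal sketch for the first strategy (general deductive summarization); the function name and prompt wording are illustrative, not our actual implementation:

```python
# Hypothetical sketch: assembling a deductive-summarization prompt
# for a batch of problems. Adapt the wording to your own dataset.
def build_extraction_prompt(problems: list[str]) -> str:
    """Number the problems and wrap them in an extraction instruction."""
    numbered = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(problems))
    return (
        "Analyze these problems, extract the core logic and strategies, "
        "and create universal templates.\n\n"
        f"Problems:\n{numbered}\n\n"
        "Return the templates as a JSON list."
    )

prompt = build_extraction_prompt([
    "Solve x^2 - 5x + 6 = 0.",
    "Find all real roots of 2y^2 + y - 1 = 0.",
])
print(prompt)
```

The same skeleton works for the other strategies; only the instruction text (and, for the type-specific variant, a per-category grouping step) changes.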
Crucially, ensure the LLM's output aligns with our template structure, either by providing few-shot examples or by giving clear instructions to emit JSON.
The optimal prompt depends heavily on your specific dataset, so we recommend picking the most suitable approach above and tailoring it to your requirements. Please feel free to reach out with further questions.