What changes are you trying to make? (e.g. Adding or removing code, refactoring existing code, adding reports)
Completed Assignment 1.
What did you learn from the changes you have made?
I learned how to use an LLM to summarize an article and how to write well-formatted, detailed prompts. I also learned how to evaluate the LLM's responses against my requirements (for example, adjusting the tone of the response) using DeepEval metrics, and I came to understand the importance of the temperature setting.
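To make the temperature point concrete: temperature rescales the model's token logits before sampling, so low values make the output nearly deterministic and high values flatten the distribution. This is a minimal standalone sketch of the standard temperature-scaled softmax, not the internals of any particular API:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, sharpened or flattened by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)     # low temperature: near-deterministic
hot = softmax_with_temperature(logits, 2.0)      # high temperature: closer to uniform
print(max(cold) > max(hot))                      # the cold distribution is more peaked
```

In an API call, this is what the `temperature` parameter controls: a summary generated at a low temperature is more repeatable, which also makes evaluation scores more stable.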
Was there another approach you were thinking about making? If so, what approach(es) were you thinking of?
This material was new to me, so I focused on understanding and implementing the approach provided.
Were there any challenges? If so, what issue(s) did you face? How did you overcome it?
Most of the assignment was based on the labs we completed, and the links to the documentation really helped. I asked questions in the office hours sessions, used the examples from the documentation, and reviewed the labs.
For the assessment questions of SummarizationMetric, I was getting a score of 0.000 every time.
To identify the issue, I asked Copilot for help. It suggested that my assessment questions might be too strict, causing every answer to come back as 'No'. I rephrased my questions, and the SummarizationMetric then returned a meaningful score.
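The 0.000 score is consistent with how this kind of metric aggregates yes/no answers: if the judge answers 'No' to every assessment question, the resulting fraction is zero. This is a simplified illustration of that scoring idea (a plain fraction-of-'yes' calculation, not DeepEval's exact internals):

```python
def coverage_score(answers):
    """Fraction of assessment questions answered 'yes' (simplified scoring sketch)."""
    if not answers:
        return 0.0
    return sum(1 for a in answers if a.lower() == "yes") / len(answers)

# Overly strict questions: the judge answers "no" to all of them.
strict_answers = ["no", "no", "no"]
# Rephrased, answerable questions: mostly "yes".
rephrased_answers = ["yes", "yes", "no"]

print(coverage_score(strict_answers))     # 0.0 -- matches the 0.000 I kept seeing
print(round(coverage_score(rephrased_answers), 2))
```

Rephrasing the questions so the judge could plausibly answer 'Yes' is what moved the score off zero.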
How were these changes tested?
I ran the code with two versions of the prompt. The first version gave me the following response:

After I modified the system prompt to add more detail to the role, the overall Summarization Score and Tonality Score improved.
A reference to a related issue in your repository (if applicable)
Checklist