**What changes are you trying to make? (e.g. Adding or removing code, refactoring existing code, adding reports)**
Add my completed solution for Assignment 1. I implemented a notebook that loads the GenAI Divide report PDF, generates a structured summary using the OpenAI SDK and Pydantic, and evaluates the summary with multiple DeepEval metrics.
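A minimal sketch of the structured-summary step, assuming the `openai` Python SDK (v1.40+) and Pydantic v2; the model name, schema fields, and prompt text below are illustrative placeholders, not the exact ones from the notebook:

```python
from openai import OpenAI
from pydantic import BaseModel


class ReportSummary(BaseModel):
    # Illustrative fields; the notebook's real schema may differ.
    title: str
    key_findings: list[str]
    summary: str


def summarise(document_text: str) -> ReportSummary:
    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarise the report in a neutral, professional tone."},
            {"role": "user", "content": document_text},
        ],
        response_format=ReportSummary,  # SDK parses output against the Pydantic schema
    )
    return completion.choices[0].message.parsed
```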
**What did you learn from the changes you have made?**
I learned how to separate secrets from code with dotenv, how to build a structured output schema with Pydantic, and how to call the OpenAI client to produce summaries in a specific tone. I also learned how summarisation metrics like coverage, alignment, coherence, tone, and safety can be implemented and interpreted.
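For the secrets setup, a short sketch using the standard python-dotenv pattern (the `.env` file and variable name follow the library's usual conventions and are assumptions, not copied from the notebook):

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a local .env file into the environment
api_key = os.getenv("OPENAI_API_KEY")  # keeps the key out of the notebook source
```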
**Was there another approach you were considering? If so, what approach(es) were you thinking of?**
I considered using a web article instead of the PDF and using a simpler evaluation (only one summarisation score), but chose a longer PDF plus multiple metrics to better test model behaviour.
**Were there any challenges? If so, what issue(s) did you face? How did you overcome it?**
One challenge was handling the 26‑page PDF and ensuring all page content was concatenated correctly before sending it to the model. Another was avoiding metric reuse in DeepEval, which can cache scores between runs. I addressed these by joining page contents into a single `document_text` string and writing a factory function that returns fresh metric instances for each evaluation run, as sketched below.
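A sketch of both fixes, assuming `pypdf` for text extraction and DeepEval's `SummarizationMetric` plus a `GEval` criterion; the file path, threshold, and criteria text are illustrative:

```python
from deepeval.metrics import GEval, SummarizationMetric
from deepeval.test_case import LLMTestCaseParams
from pypdf import PdfReader


def load_document_text(path: str) -> str:
    """Concatenate the extracted text of every page into one string."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def make_metrics() -> list:
    """Factory: build fresh metric instances per run so no scores are cached."""
    return [
        SummarizationMetric(threshold=0.5),  # coverage/alignment-style scoring
        GEval(
            name="Tone",
            criteria="Does the summary keep a neutral, professional tone?",
            evaluation_params=[LLMTestCaseParams.ACTUAL_OUTPUT],
        ),
    ]
```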
**How were these changes tested?**
I ran the notebook end‑to‑end, verified that the summary and evaluation results were produced without errors, and confirmed that scores and reasons were updated when changing the summary generation prompt.
**A reference to a related issue in your repository (if applicable)**
N/A (course assignment).
**Checklist**