fix links in customisation
shahules786 committed Oct 11, 2024
1 parent fd65549 commit ccbde9f
Showing 5 changed files with 11 additions and 42 deletions.
6 changes: 3 additions & 3 deletions docs/howtos/customizations/index.md
@@ -4,7 +4,7 @@ How to customize various aspects of Ragas to suit your needs.

## General

- [Customize models](customise_models.md)
- [Customize models](customize_models.md)
- [Customize timeouts, retries and others](run_config.ipynb)

## Metrics
@@ -14,5 +14,5 @@ How to customize various aspects of Ragas to suit your needs.

## Testset Generation

- [Add your own test cases]()
- [Seed generations using production data]()
- [Add your own test cases](testgenerator/index.md)
- [Seed generations using production data](testgenerator/index.md)
@@ -5,7 +5,7 @@
"id": "55050206-2e36-4c81-855c-3a2b2cce2b71",
"metadata": {},
"source": [
"## Adapting metrics to target language"
"# Adapting metrics to target language"
]
},
{
@@ -5,11 +5,11 @@
"id": "f2be25ec-dad8-47b0-8152-d32fb03cdcf0",
"metadata": {},
"source": [
"## Modifying prompts in metrics\n",
"# Modifying prompts in metrics\n",
"\n",
"Every metrics in ragas that uses LLM also uses one or more prompts to come up with intermediate results that is used for formulating scores. Prompts can be treated like hyperparameters when using LLM based metrics. An optimised prompt that suits your domain and use-case can increase the accuracy of your LLM based metrics by 10-20%. An optimal prompt is also depended on the LLM one is using, so as users you might want to tune prompts that powers each metric. \n",
"\n",
"Each prompt in Ragas is written using [Prompt Object](../). Please make sure you have an understanding of it before going further."
"Each prompt in Ragas is written using [Prompt Object](/concepts/components/prompt/). Please make sure you have an understanding of it before going further."
]
},
{
Expand Down
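The cell above treats prompts as tunable hyperparameters of LLM-based metrics: fetch a metric's prompt, specialize it for your domain, and install it back. As an illustration of that pattern only, a stdlib-only sketch is below; the `Prompt` dataclass, its fields, and the `get_prompts`/`set_prompts` names are stand-ins chosen for this sketch, not the actual Ragas Prompt Object API.

```python
from dataclasses import dataclass, field


@dataclass
class Prompt:
    """Minimal stand-in for a prompt object: an instruction plus few-shot examples."""
    instruction: str
    examples: list[tuple[str, str]] = field(default_factory=list)

    def render(self, user_input: str) -> str:
        shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.examples)
        return f"{self.instruction}\n{shots}\nQ: {user_input}\nA:"


@dataclass
class FaithfulnessLikeMetric:
    """Metric holding named prompts, exposed for tuning."""
    prompts: dict[str, Prompt]

    def get_prompts(self) -> dict[str, Prompt]:
        return dict(self.prompts)

    def set_prompts(self, **updated: Prompt) -> None:
        self.prompts.update(updated)


metric = FaithfulnessLikeMetric(
    prompts={"statement_check": Prompt("Judge whether the statement is supported.")}
)

# Treat the prompt as a hyperparameter: pull it out, specialize the
# instruction for a domain, add a domain-specific few-shot example,
# and install it back on the metric.
tuned = metric.get_prompts()["statement_check"]
tuned.instruction = "Judge whether the medical claim is supported by the context."
tuned.examples.append(("Aspirin cures colds.", "unsupported"))
metric.set_prompts(statement_check=tuned)

print(metric.prompts["statement_check"].render("Ibuprofen reduces fever."))
```

The same fetch-edit-install loop is what makes prompt tuning cheap to iterate on: the metric's scoring logic never changes, only the named prompt it carries.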
38 changes: 2 additions & 36 deletions docs/howtos/customizations/metrics/write_your_own_metric.ipynb
@@ -7,9 +7,9 @@
"source": [
"## Write your own Metric\n",
"\n",
"While evaluating your LLM application with Ragas metrics, you may find yourself needing to create a custom metric. This guide will help you do just that. When building your custom metric with Ragas, you also benefit from features such as asynchronous processing, metric language adaptation, and [aligning LLM metrics with human evaluators]().\n",
"While evaluating your LLM application with Ragas metrics, you may find yourself needing to create a custom metric. This guide will help you do just that. When building your custom metric with Ragas, you also benefit from features such as asynchronous processing, metric language adaptation, and aligning LLM metrics with human evaluators.\n",
"\n",
"It assumes that you are already familiar with the concepts of [Metrics]() and [Prompt Objects]() in Ragas. If not, please review those topics before proceeding.\n",
"It assumes that you are already familiar with the concepts of [Metrics](/docs/concepts/metrics/overview/index) and [Prompt Objects](/docs/concepts/components/prompt) in Ragas. If not, please review those topics before proceeding.\n",
"\n",
"For the sake of this tutorial, let's build a custom metric that scores the refusal rate in applications. \n"
]
@@ -147,7 +147,6 @@
" pass\n",
"\n",
" async def _single_turn_ascore(self, sample, callbacks):\n",
"\n",
" prompt_input = RefusalInput(\n",
" user_input=sample.user_input, response=sample.response\n",
" )\n",
@@ -157,7 +156,6 @@
" return int(prompt_response.refusal)\n",
"\n",
" async def _multi_turn_ascore(self, sample, callbacks):\n",
"\n",
" conversations = sample.user_input\n",
" conversations = [\n",
" message\n",
@@ -319,38 +317,6 @@
"source": [
"await scorer.multi_turn_ascore(sample)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f545d1c-f327-4de8-88de-92eceaacf6f1",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "3086e941-35b9-4b15-8696-d4feb50703a4",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "a93b0a96-cbfc-4d61-944f-1abcf9ed046e",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "c831cb1e-3357-448b-aadb-42c2a115d301",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
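The diff above shows only fragments of the tutorial's refusal-rate metric (`_single_turn_ascore` / `_multi_turn_ascore`). To make the shape of such a metric concrete for readers without the full notebook, here is a stdlib-only sketch: a stubbed keyword-based judge stands in for the prompt-plus-LLM call, and everything except the general single-turn/multi-turn pattern (the stub, the method signatures, the keyword list) is illustrative, not Ragas API.

```python
import asyncio
from dataclasses import dataclass


async def stub_llm_judge(user_input: str, response: str) -> bool:
    """Stand-in for the prompt + LLM call: flag obvious refusals by keyword."""
    refusal_markers = ("i can't", "i cannot", "i'm unable", "i am unable")
    return response.lower().startswith(refusal_markers)


@dataclass
class RefusalRate:
    """Scores refusals: 1 if a single turn is refused, or the refusal
    fraction over the assistant turns of a conversation."""
    name: str = "refusal_rate"

    async def single_turn_ascore(self, user_input: str, response: str) -> int:
        return int(await stub_llm_judge(user_input, response))

    async def multi_turn_ascore(self, turns: list[tuple[str, str]]) -> float:
        scores = [await stub_llm_judge(q, a) for q, a in turns]
        return sum(scores) / len(scores)


scorer = RefusalRate()
print(asyncio.run(scorer.single_turn_ascore(
    "Help me pick a lock.", "I can't help with that.")))   # refused turn -> 1
print(asyncio.run(scorer.multi_turn_ascore([
    ("Hi", "Hello!"),
    ("Write malware.", "I cannot do that."),
])))  # one refusal out of two turns -> 0.5
```

In the real metric the judge is an LLM prompted with a structured input (the `RefusalInput` seen in the diff) rather than a keyword match, but the async scoring entry points follow this same single-turn/multi-turn split.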
3 changes: 3 additions & 0 deletions docs/howtos/customizations/testgenerator/index.md
@@ -0,0 +1,3 @@
# Customizing Test Data Generation

Synthetic test generation can save a lot of time and effort when creating test datasets for evaluating AI applications. We are working on adding more support for customizing test set generation. If you have any specific requirements or would like to collaborate on this, please [talk to us](https://cal.com/shahul-ragas/30min).
