
Commit 2ef0513

add docs
1 parent b7f92f9 commit 2ef0513

File tree

4 files changed (+13, -14 lines)


docs/getstarted/testset_generation.md

Lines changed: 13 additions & 0 deletions
@@ -25,6 +25,19 @@ query_space = "large language models"
documents = loader.load_data(query=query_space, limit=10)
```

:::{note}
Each `Document` object contains a metadata dictionary, which can be used to store additional information about the document and can be accessed via `Document.metadata`. Please ensure that the metadata dictionary contains a key called `file_name`, as this will be used in the generation process.

An example of how to do this for `SemanticScholarReader` is shown below.

```{code-block} python
for d in documents:
    d.metadata["file_name"] = d.metadata["title"]

documents[0].metadata
```
:::

At this point, we have a set of documents at our disposal, which will serve as the basis for creating synthetic Question/Context/Answer triplets.

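The snippet in the note above assumes the loader's real `Document` type. As a self-contained illustration of the same mapping, the sketch below uses a minimal stand-in `Document` class (not the actual loader class) with a `metadata` dictionary:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    # Minimal stand-in for a loader's Document type: metadata holds
    # arbitrary per-document fields such as "title".
    metadata: dict = field(default_factory=dict)


documents = [Document(metadata={"title": "Attention Is All You Need"})]

# Copy "title" into the required "file_name" key, as in the note above.
for d in documents:
    d.metadata["file_name"] = d.metadata["title"]

print(documents[0].metadata)
# → {'title': 'Attention Is All You Need', 'file_name': 'Attention Is All You Need'}
```

The same pattern applies to any loader whose documents expose a dictionary-like `metadata` attribute: pick an existing field that uniquely names the document and copy it under `file_name`.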
## Data Generation
Lines changed: 0 additions & 4 deletions
@@ -1,6 +1,2 @@
 Integrations
 ============
-
-.. toctree::
-   langchain.rst
-   llamaindex.rst

docs/references/integrations/langchain.rst

Lines changed: 0 additions & 5 deletions
This file was deleted.

docs/references/integrations/llamaindex.rst

Lines changed: 0 additions & 5 deletions
This file was deleted.
