
Commit d523ffb

tzolov authored and markpollack committed
Restructure documentation navigation
- Only the catalog is updated. No existing links are changed
- Move 'AI Concepts' under 'Overview'
- Reorganize API sections for better hierarchy
- Move 'Vector Databases' into its own top-level section
- Reposition 'Function Calling', 'Multimodality', 'Testing', and 'Structured Output' as top-level items
- Remove redundant nesting in some sections
- Comment out 'Generic Model' section
- Rename 'AI Model API' into 'AI Models' - get rid of the 'API' suffix
- Improve Multimodality doc
- Update feature listing
1 parent cf99e77 commit d523ffb

File tree: 3 files changed, 52 additions & 51 deletions

spring-ai-docs/src/main/antora/modules/ROOT/nav.adoc

Lines changed: 34 additions & 35 deletions
@@ -1,10 +1,10 @@
 * xref:index.adoc[Overview]
-* xref:concepts.adoc[AI Concepts]
+** xref:concepts.adoc[AI Concepts]
 * xref:getting-started.adoc[Getting Started]
-* xref:api/index.adoc[]
-** xref:api/chatclient.adoc[]
-*** xref:api/advisors.adoc[Advisors]
-** xref:api/chatmodel.adoc[]
+* xref:api/chatclient.adoc[]
+** xref:api/advisors.adoc[Advisors]
+* xref:api/index.adoc[AI Models]
+** xref:api/chatmodel.adoc[Chat Models]
 *** xref:api/bedrock-chat.adoc[Amazon Bedrock]
 **** xref:api/chat/bedrock/bedrock-anthropic3.adoc[Anthropic3]
 **** xref:api/chat/bedrock/bedrock-anthropic.adoc[Anthropic2]
@@ -37,7 +37,7 @@
 *** xref:api/chat/zhipuai-chat.adoc[ZhiPu AI]
 // **** xref:api/chat/functions/zhipuai-chat-functions.adoc[Function Calling]
 *** xref:api/chat/watsonx-ai-chat.adoc[watsonx.AI]
-** xref:api/embeddings.adoc[]
+** xref:api/embeddings.adoc[Embedding Models]
 *** xref:api/bedrock.adoc[Amazon Bedrock]
 **** xref:api/embeddings/bedrock-cohere-embedding.adoc[Cohere]
 **** xref:api/embeddings/bedrock-titan-embedding.adoc[Titan]
@@ -56,49 +56,48 @@
 **** xref:api/embeddings/vertexai-embeddings-palm2.adoc[PaLM2 Embedding]
 *** xref:api/embeddings/watsonx-ai-embeddings.adoc[watsonx.AI]
 *** xref:api/embeddings/zhipuai-embeddings.adoc[ZhiPu AI]
-** xref:api/imageclient.adoc[]
+** xref:api/imageclient.adoc[Image Models]
 *** xref:api/image/azure-openai-image.adoc[Azure OpenAI]
 *** xref:api/image/openai-image.adoc[OpenAI]
 *** xref:api/image/stabilityai-image.adoc[Stability]
 *** xref:api/image/zhipuai-image.adoc[ZhiPuAI]
 *** xref:api/image/qianfan-image.adoc[QianFan]
-** xref:api/audio[Audio Model API]
+** xref:api/audio[Audio Models]
 *** xref:api/audio/transcriptions.adoc[]
 **** xref:api/audio/transcriptions/azure-openai-transcriptions.adoc[Azure OpenAI]
 **** xref:api/audio/transcriptions/openai-transcriptions.adoc[OpenAI]
 *** xref:api/audio/speech.adoc[]
 **** xref:api/audio/speech/openai-speech.adoc[OpenAI]
-** xref:api/moderation[Moderation Model API]
+** xref:api/moderation[Moderation Models]
 *** xref:api/moderation/openai-moderation.adoc[OpenAI]
+// ** xref:api/generic-model.adoc[]

-** xref:api/vectordbs.adoc[]
-*** xref:api/vectordbs/azure.adoc[]
-*** xref:api/vectordbs/apache-cassandra.adoc[]
-*** xref:api/vectordbs/chroma.adoc[]
-*** xref:api/vectordbs/elasticsearch.adoc[]
-*** xref:api/vectordbs/gemfire.adoc[GemFire]
-*** xref:api/vectordbs/milvus.adoc[]
-*** xref:api/vectordbs/mongodb.adoc[]
-*** xref:api/vectordbs/neo4j.adoc[]
-*** xref:api/vectordbs/opensearch.adoc[]
-*** xref:api/vectordbs/oracle.adoc[Oracle]
-*** xref:api/vectordbs/pgvector.adoc[]
-*** xref:api/vectordbs/pinecone.adoc[]
-*** xref:api/vectordbs/qdrant.adoc[]
-*** xref:api/vectordbs/redis.adoc[]
-*** xref:api/vectordbs/hana.adoc[SAP Hana]
-*** xref:api/vectordbs/typesense.adoc[]
-*** xref:api/vectordbs/weaviate.adoc[]
-
-** xref:api/functions.adoc[Function Calling]
-** xref:api/multimodality.adoc[Multimodality]
-** xref:api/prompt.adoc[]
-** xref:api/structured-output-converter.adoc[Structured Output]
-** xref:api/etl-pipeline.adoc[]
-** xref:api/testing.adoc[]
-** xref:api/generic-model.adoc[]
+* xref:api/vectordbs.adoc[]
+** xref:api/vectordbs/azure.adoc[]
+** xref:api/vectordbs/apache-cassandra.adoc[]
+** xref:api/vectordbs/chroma.adoc[]
+** xref:api/vectordbs/elasticsearch.adoc[]
+** xref:api/vectordbs/gemfire.adoc[GemFire]
+** xref:api/vectordbs/milvus.adoc[]
+** xref:api/vectordbs/mongodb.adoc[]
+** xref:api/vectordbs/neo4j.adoc[]
+** xref:api/vectordbs/opensearch.adoc[]
+** xref:api/vectordbs/oracle.adoc[Oracle]
+** xref:api/vectordbs/pgvector.adoc[]
+** xref:api/vectordbs/pinecone.adoc[]
+** xref:api/vectordbs/qdrant.adoc[]
+** xref:api/vectordbs/redis.adoc[]
+** xref:api/vectordbs/hana.adoc[SAP Hana]
+** xref:api/vectordbs/typesense.adoc[]
+** xref:api/vectordbs/weaviate.adoc[]

 * xref:observabilty/index.adoc[]
+* xref:api/prompt.adoc[]
+* xref:api/structured-output-converter.adoc[Structured Output]
+* xref:api/functions.adoc[Function Calling]
+* xref:api/multimodality.adoc[Multimodality]
+* xref:api/etl-pipeline.adoc[]
+* xref:api/testing.adoc[AI Model Evaluation]

 * Service Connections
 ** xref:api/docker-compose.adoc[Docker Compose]

spring-ai-docs/src/main/antora/modules/ROOT/pages/api/multimodality.adoc

Lines changed: 8 additions & 10 deletions
@@ -1,23 +1,21 @@
 [[Multimodality]]
 = Multimodality API

+// image::orbis-sensualium-pictus2.jpg[Orbis Sensualium Pictus, align="center"]
+
+> "All things that are naturally connected ought to be taught in combination" - John Amos Comenius, "Orbis Sensualium Pictus", 1658
+
 Humans process knowledge, simultaneously across multiple modes of data inputs.
 The way we learn, our experiences are all multimodal.
 We don't have just vision, just audio and just text.

-These foundational principles of learning were articulated by the father of modern education link:https://en.wikipedia.org/wiki/John_Amos_Comenius[John Amos Comenius], in his work, "Orbis Sensualium Pictus", dating back to 1658.
-
-image::orbis-sensualium-pictus2.jpg[Orbis Sensualium Pictus, align="center"]
-
-> "All things that are naturally connected ought to be taught in combination"
-
-Contrary to those principles, in the past, our approach to Machine Learning was often focused on specialized models tailored to process a single modality.
+Contrary to those principles, the Machine Learning was often focused on specialized models tailored to process a single modality.
 For instance, we developed audio models for tasks like text-to-speech or speech-to-text, and computer vision models for tasks such as object detection and classification.

 However, a new wave of multimodal large language models starts to emerge.
-Examples include OpenAI's GPT-4 Vision, Google's Vertex AI Gemini Pro Vision, Anthropic's Claude3, and open source offerings LLaVA and balklava are able to accept multiple inputs, including text images, audio and video and generate text responses by integrating these inputs.
+Examples include OpenAI's GPT-4o, Google's Vertex AI Gemini 1.5, Anthropic's Claude3, and open source offerings Llama3.2, LLaVA and Balklava are able to accept multiple inputs, including text images, audio and video and generate text responses by integrating these inputs.

-The multimodal large language model (LLM) features enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.
+NOTE: The multimodal large language model (LLM) features enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.

 == Spring AI Multimodality

@@ -68,7 +66,7 @@ and produce a response like:
 Spring AI provides multimodal support for the following chat models:

 * xref:api/chat/openai-chat.adoc#_multimodal[OpenAI (e.g. GPT-4 and GPT-4o models)]
-* xref:api/chat/ollama-chat.adoc#_multimodal[Ollama (e.g. LlaVa and Baklava models)]
+* xref:api/chat/ollama-chat.adoc#_multimodal[Ollama (e.g. LlaVa, Baklava, Llama3.2 models)]
 * xref:api/chat/vertexai-gemini-chat.adoc#_multimodal[Vertex AI Gemini (e.g. gemini-1.5-pro-001, gemini-1.5-flash-001 models)]
 * xref:api/chat/anthropic-chat.adoc#_multimodal[Anthropic Claude 3]
 * xref:api/chat/bedrock/bedrock-anthropic3.adoc#_multimodal[AWS Bedrock Anthropic Claude 3]
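The multimodal support listed above is surfaced through Spring AI's message API, where a user message can carry media alongside text. The following is a minimal sketch of that idea, not part of this commit: it assumes an already configured multimodal ChatModel bean named chatModel and an illustrative classpath image, and exact package locations and constructor overloads can vary between Spring AI milestone releases.

import java.util.List;

import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.model.Media;
import org.springframework.core.io.ClassPathResource;
import org.springframework.util.MimeTypeUtils;

// Sketch: pair a text question with an image and send both to a multimodal chat model.
// 'chatModel' is assumed to be a configured multimodal ChatModel (e.g. GPT-4o or Ollama LLaVA);
// multimodal.test.png is a hypothetical image on the classpath.
var imageResource = new ClassPathResource("/multimodal.test.png");

var userMessage = new UserMessage(
        "Explain what do you see in this picture?",                    // text modality
        List.of(new Media(MimeTypeUtils.IMAGE_PNG, imageResource)));   // image modality

ChatResponse response = chatModel.call(new Prompt(List.of(userMessage)));
System.out.println(response.getResult().getOutput().getContent());

Because the Prompt abstraction is provider-neutral, the same call shape applies to any of the multimodal chat models listed above.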

spring-ai-docs/src/main/antora/modules/ROOT/pages/index.adoc

Lines changed: 10 additions & 6 deletions
@@ -15,15 +15,19 @@ These abstractions have multiple implementations, enabling easy component swappi

 Spring AI provides the following features:

-* Support for all major Model providers such as OpenAI, Microsoft, Amazon, Google, and Hugging Face.
-* Supported Model types are Chat, Text to Image, Audio Transcription, Text to Speech, Moderation, and more on the way.
+* Support for all major Model providers such as Anthropic, Azure OpenAI, Amazon Bedrock, Google, HuggingFace, Mistral, Oracle, Stability AI, Watson, Minimax, Moonshot, QianFan, ZhiPu AI, PostgresML, and ONXX Transformers.
+* Supported Model types are Chat, Embedding, Text to Image, Audio Transcription, Text to Speech, and Moderation. Multimodal models are also supported.
 * Portable API across AI providers for all models. Both synchronous and stream API options are supported. Dropping down to access model specific features is also supported.
-* Mapping of AI Model output to POJOs.
+* Spring Boot Auto Configuration for all models, simplifying setup and integration.
+* AOT (Ahead-Of-Time) native image support for improved performance and reduced startup times.
+* Enhanced observability leveraging Spring ecosystem features, providing insights into AI-related operations. Spring AI offers metrics and tracing capabilities for core components including ChatClient, ChatModel, EmbeddingModel, ImageModel, and VectorStore.
+* Structured Output to enable mapping of AI Model output to POJOs.
+* Function calling support.
 * Support for all major Vector Database providers such as Apache Cassandra, Azure Vector Search, Chroma, Milvus, MongoDB Atlas, Neo4j, Oracle, PostgreSQL/PGVector, PineCone, Qdrant, Redis, and Weaviate.
 * Portable API across Vector Store providers, including a novel SQL-like metadata filter API that is also portable.
-* Function calling.
-* Spring Boot Auto Configuration and Starters for AI Models and Vector Stores.
-* ETL framework for Data Engineering.
+* ETL framework for Data Engineering to load data into Vector Stores.
+* Evaluation Testing support for AI applications, allowing assessment of generated content to prevent hallucinated responses. This includes the ability to use AI models for self-evaluation, with the flexibility to choose the most suitable model for evaluation purposes.
+* Spring Boot autoconfiguration for establishing connections to model services or vector stores running via Testcontainers or Docker Compose.

 This feature set lets you implement common use cases such as "`Q&A over your documentation`" or "`Chat with your documentation.`"
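To make the "Structured Output" and portable-API bullets in the diff above concrete, here is a hedged sketch of mapping a chat response onto a POJO with the fluent ChatClient; the ActorFilms record, the chatModel variable, and the prompt text are made up for illustration and are not part of this commit.

import java.util.List;

import org.springframework.ai.chat.client.ChatClient;

// Hypothetical record used only to illustrate structured output.
record ActorFilms(String actor, List<String> movies) {}

// 'chatModel' is any configured ChatModel; the structured-output converter behind
// entity(...) appends format instructions to the prompt and maps the reply onto the record.
ActorFilms actorFilms = ChatClient.create(chatModel)
        .prompt()
        .user("Generate the filmography for a random actor.")
        .call()
        .entity(ActorFilms.class);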
