Creates a page summarizing all Elastic's AI-powered features #3768
Conversation
…/elastic/docs-content into internal-455-list-genai-features
florent-leborgne
left a comment
This is a super nice start. I'd like this page (or set of pages) to do the following:
- List the AI features themselves (AI Assistants, Agent Builder) and clarify the link between these and AI connectors (and with EIS for the Elastic Managed LLM)
- List other features that rely on these AI features, and specify whether the AI usage is optional, on by default, etc. This is important for users to make conscious choices about their config and about permissions for their own users, especially as they'll likely want to keep pricing/token usage under control.
- Link to pricing pages for our default Elastic Managed LLM (at least), or to a page that focuses on the pricing impact of AI features
So that this inventory also becomes a source of understanding of how all of these relate to each other. We can chat about this at our sync :)
mdbirnstiehl
left a comment
One comment, but otherwise looks great!
stack: preview 9.1, ga 9.2
You can [generate Grok patterns](/solutions/observability/streams/management/extract/grok.md#streams-grok-patterns) using AI instead of writing them by hand.
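For illustration (not part of this PR), a Grok pattern like the ones Streams can generate could be sanity-checked with the simulate pipeline API before saving it. The pattern, field names, and sample log line here are all hypothetical:

```console
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"]
        }
      }
    ]
  },
  "docs": [
    { "_source": { "message": "127.0.0.1 GET /index.html" } }
  ]
}
```

The response would show the extracted `client`, `method`, and `request` fields, which is a quick way to verify an AI-generated pattern against a real log line.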
I thought of one other AI component of Streams: you can also partition wired streams based on AI suggestions. More info here: https://www.elastic.co/docs/solutions/observability/streams/management/partitioning. If you want some assistance writing up the blurbs about these, I can help tomorrow.
Yeah that would be great! I am pretty much in the dark on this topic :D
Feel free to make whatever edits you see fit
I added the Partitioning description and expanded the other descriptions a little.
On the ES/platform side, might be good to mention:
This would align with the PMM page: https://www.elastic.co/generative-ai
Also, we definitely need to mention:
@szabosteve will probably have additions :)
I left two small nits and a suggestion to drop some technical details that I think are too low-level for this overview. I really like the clarity and structure of this page! It's great to see these features listed in one place. Everything on the page right now LGTM. I think it would still be nice to add some more pieces to the page — basically, everything Liam mentioned above. Some suggestions regarding the structure for the new content:
- I would suggest adding two or three new sections:
- Elastic Inference or something similar (before AI-powered search), which could link to this page with a description like: "Inference is the process of using a trained machine learning model to make predictions or perform operations - such as text embedding or reranking - on your data." This subsection could mention EIS and the Inference API as the two main ways to use Elastic Inference.
- NLP models: this section could contain the built-in NLP models with ELSER highlighted, and the trained models deployed in your cluster.
- I think Elastic Managed LLM could have its own section. If you think it's too much, I would add it to the Elastic Inference section.
@leemthompo WDYT?
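To make the proposed Elastic Inference section concrete, a minimal sketch of calling the Inference API for a text-embedding task could look like this; `my-embedding-endpoint` is a placeholder, and the example assumes an inference endpoint has already been created:

```console
POST _inference/text_embedding/my-embedding-endpoint
{
  "input": "What is semantic search?"
}
```

The same `_inference` API can front both EIS-hosted models and models deployed in your own cluster, which is part of why a single overview section covering both could work well.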
[AI-powered search](/solutions/search/ai-search/ai-search.md) helps you find data based on intent and contextual meaning using vector search technology, which uses machine learning models to capture meaning in content. These vector representations come in two forms: dense vectors that capture overall meaning, and sparse vectors that focus on key terms and their relationships.
Nit. Also, mentioning the two kinds of vectors here is too low-level a detail. We should focus on the semantic search workflow.
Edit: just saw Liam's suggestion; I agree to drop the tech details on vector types and add links to the two main paths users can choose from.
Suggested change (replace "machine learning models" with the {{ml}} attribute):
[AI-powered search](/solutions/search/ai-search/ai-search.md) helps you find data based on intent and contextual meaning using vector search technology, which uses {{ml}} models to capture meaning in content. These vector representations come in two forms: dense vectors that capture overall meaning, and sparse vectors that focus on key terms and their relationships.
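As a possible replacement for the vector-type details, the semantic search workflow could be illustrated with a minimal `semantic_text` sketch; the index name, field name, and query text are hypothetical:

```console
PUT my-index
{
  "mappings": {
    "properties": {
      "content": { "type": "semantic_text" }
    }
  }
}

POST my-index/_search
{
  "query": {
    "semantic": {
      "field": "content",
      "query": "How do I renew my subscription?"
    }
  }
}
```

With `semantic_text`, embedding generation and vector handling happen behind the scenes, which supports the suggestion to keep dense/sparse vector internals out of the overview.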
Thank you all for the edits and suggestions. Responding to your comments in reverse order:

@szabosteve I think I addressed your first two points. Added sections for Elastic Inference and NLP. I am open to adding a section about Elastic Managed LLM, but my opinion for now is that linking to the LLM connector information at the top of the page (in the requirements section, after my latest edits) might be enough — that page starts with a section about Elastic Managed LLM, and I think of it as a means of powering AI features, rather than a feature in itself.

@leemthompo I added all your "definitely need to mention" items in the new Elastic Inference and NLP sections. Would you be willing to add sections for the "might be good to mention" items? I think your expertise would probably lead to a better outcome than if I attempted it.

@mdbirnstiehl I made the changes you suggested, and gratefully accept your offer to write about the partitioning functionality.

@florent-leborgne I added a requirements section at the top of the page that states that many of the features require an LLM connector (not all, since my understanding is that some features, such as NLP, use ML models but not LLM connectors). I also linked to the pricing page. Unfortunately, I don't think we have a pricing resource specific to Elastic Managed LLM usage, since costs will vary depending on your subscription and deployment method, so I just linked to the general pricing page. Re: linking to the Elastic Managed LLM page, I addressed this in my response to @szabosteve above. Please let me know what you think! I'm not sure I addressed all your feedback but am happy to iterate some more.
eedugon
left a comment
Looks awesome! Great initiative to create the AI features doc.
The applies_to badges also look very good and are easy to understand.
I don't think there's anything to say from the admin-docs side, as we don't deal with AI features in general.
florent-leborgne
left a comment
So nice to see the progress on this. Great work so far @benironside. I see a few approvals already but it's not in a mergeable state yet so I'll block it for now while we continue improving it :)
I'd like this page to serve a clearer role in the narrative and answer the following questions for users:
- What does Elastic have to offer in terms of core AI capabilities?
- What features are augmented with AI?
- What do I need to know as a user to use these wisely, in terms of configuration/customization options and pricing? For example, if I plug the AI Assistant into OpenAI / ChatGPT-5, which of my features are now going to use this model, and which ones rely on a different config?
To slightly shift the narrative of the current page to answer these questions, can we:
- make a better distinction between the Elasticsearch platform's AI capabilities or architectural pieces (most notably the Elastic Inference Service and the Elastic Managed LLM, but also Gen AI connectors in general, or machine learning in general) and the AI-powered features that materialize in end-user flows in each solution. Said otherwise, some of the features here are not "search solution" features but rather platform capabilities. Think of this diagram (Elastic internal)
- list more succinctly certain items here to find the right balance/emphasis to put on certain features. For example, sub-sections under Streams could be a list of bullet points.
- In the description of each feature, I think that instead of describing too much what the feature does, the goal of this page is rather to summarize how these features leverage AI, whether that's automatic/by default or not (and if on by default, what it uses and what is customizable), and what kind of AI-related configuration they rely on. For example, in the Attack Discovery docs, we can read: "Attack Discovery uses the same LLM connectors as AI Assistant." Does this mean that Attack Discovery's AI capabilities rely on your AI Assistant's config?
- Link not only to features but also to relevant configuration documentation if necessary, and to pricing. We know that pricing depends on the connector/model used; that's on users to know if they configure their own. But by default we have the Elastic Managed LLM enabled, whose costs are controlled by Elastic and documented per solution on our pricing pages.
@benironside thank you for kicking off this PR. This is clearly a cross-team effort so if you can look after the Security piece of it on this page, that's great. In the meantime, @mdbirnstiehl @szabosteve @leemthompo can you help make these changes for your respective areas?
Co-authored-by: Liam Thompson <[email protected]>
Updated the AI features documentation to clarify the use of AI in suggesting queries based on data.
Fixes elastic/docs-content-internal/issues/455 by creating a new page that lists all our AI-powered features.
This PR also:
Also, reviewers, this is a minor point but what do you think of adding "AI" to our glossary and linking to this PR's new page?