From a4d5a37e77f22293c731d2250abdf8d378da2d2d Mon Sep 17 00:00:00 2001 From: Benjamin Ironside Goldstein Date: Thu, 6 Nov 2025 21:11:47 -0600 Subject: [PATCH 1/2] Docs-content component of consolidating AI connector guides --- redirects.yml | 8 +- .../security/ai/connect-to-amazon-bedrock.md | 125 ------------------ .../security/ai/connect-to-azure-openai.md | 94 ------------- .../security/ai/connect-to-google-vertex.md | 90 ------------- solutions/security/ai/connect-to-openai.md | 61 --------- ...onnectors-for-large-language-models-llm.md | 7 +- 6 files changed, 8 insertions(+), 377 deletions(-) delete mode 100644 solutions/security/ai/connect-to-amazon-bedrock.md delete mode 100644 solutions/security/ai/connect-to-azure-openai.md delete mode 100644 solutions/security/ai/connect-to-google-vertex.md delete mode 100644 solutions/security/ai/connect-to-openai.md diff --git a/redirects.yml b/redirects.yml index a5cc2b5ec7..5426702023 100644 --- a/redirects.yml +++ b/redirects.yml @@ -586,4 +586,10 @@ redirects: 'deploy-manage/monitor/autoops/cc-cloud-connect-autoops-faq.md': 'deploy-manage/monitor/autoops/ec-autoops-faq.md' # Related to https://github.com/elastic/docs-team/issues/104 - 'solutions/observability/get-started/what-is-elastic-observability': 'solutions/observability.md' \ No newline at end of file + 'solutions/observability/get-started/what-is-elastic-observability': 'solutions/observability.md' + +# Related to https://github.com/elastic/docs-content-internal/issues/487 + 'solutions/security/ai/connect-to-azure-openai.md': 'kibana://reference/connectors-kibana/openai-action-type.md' + 'solutions/security/ai/connect-to-amazon-bedrock.md': 'kibana://reference/connectors-kibana/bedrock-action-type.md' + 'solutions/security/ai/connect-to-openai.md': 'kibana://reference/connectors-kibana/openai-action-type.md' + 'solutions/security/ai/connect-to-google-vertex.md': 'kibana://reference/connectors-kibana/gemini-action-type.md' \ No newline at end of file diff 
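The new redirect entries above follow one pattern: each deleted docs-content path maps to a `kibana://` connectors reference page. As a quick sanity-check sketch (the dictionary mirrors the YAML above; the validation rules are illustrative assumptions, not part of the docs build):

```python
# Mirror of the four redirect entries added above, as a plain dict.
redirects = {
    "solutions/security/ai/connect-to-azure-openai.md": "kibana://reference/connectors-kibana/openai-action-type.md",
    "solutions/security/ai/connect-to-amazon-bedrock.md": "kibana://reference/connectors-kibana/bedrock-action-type.md",
    "solutions/security/ai/connect-to-openai.md": "kibana://reference/connectors-kibana/openai-action-type.md",
    "solutions/security/ai/connect-to-google-vertex.md": "kibana://reference/connectors-kibana/gemini-action-type.md",
}

# Illustrative checks: every source is a deleted solutions page and
# every target is a Kibana connectors reference page.
for src, dst in redirects.items():
    assert src.startswith("solutions/security/ai/") and src.endswith(".md")
    assert dst.startswith("kibana://reference/connectors-kibana/") and dst.endswith(".md")
```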
--git a/solutions/security/ai/connect-to-amazon-bedrock.md b/solutions/security/ai/connect-to-amazon-bedrock.md deleted file mode 100644 index ab2fcc30b8..0000000000 --- a/solutions/security/ai/connect-to-amazon-bedrock.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/security/current/assistant-connect-to-bedrock.html - - https://www.elastic.co/guide/en/serverless/current/security-connect-to-bedrock.html -applies_to: - stack: all - serverless: - security: all -products: - - id: security - - id: cloud-serverless ---- - -# Connect to Amazon Bedrock - -This page provides step-by-step instructions for setting up an Amazon Bedrock connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure AWS, then configure the connector in {{kib}}. - -::::{note} -All models in Amazon Bedrock's `Claude` model group are supported. -:::: - - - -## Configure AWS [_configure_aws] - - -### Configure an IAM policy [_configure_an_iam_policy] - -First, configure an IAM policy with the necessary permissions: - -1. Log into the AWS console and search for Identity and Access Management (IAM). -2. From the **IAM** menu, select **Policies** → **Create policy**. -3. To provide the necessary permissions, paste the following JSON into the **Specify permissions** menu. - - ```json - { - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "VisualEditor0", - "Effect": "Allow", - "Action": [ - "bedrock:InvokeModel", - "bedrock:InvokeModelWithResponseStream" - ], - "Resource": "*" - } - ] - } - ``` - - ::::{note} - These are the minimum required permissions. IAM policies with additional permissions are also supported. - :::: - -4. Click **Next**. Name your policy. - -The following video demonstrates these steps (click to watch). 
- -[![azure-openai-configure-model-video](https://play.vidyard.com/ek6NpHaj6u4keZyEjPWXcT.jpg)](https://videos.elastic.co/watch/ek6NpHaj6u4keZyEjPWXcT?) - - - -### Configure an IAM User [_configure_an_iam_user] - -Next, assign the policy you just created to a new user: - -1. Return to the **IAM** menu. Select **Users** from the navigation menu, then click **Create User**. -2. Name the user, then click **Next**. -3. Select **Attach policies directly**. -4. In the **Permissions policies** field, search for the policy you created earlier, select it, and click **Next**. -5. Review the configuration then click **Create user**. - -The following video demonstrates these steps (click to watch). - -[![bedrock-iam-video](https://play.vidyard.com/5BQb2P818SMddRo6gA79hd.jpg)](https://videos.elastic.co/watch/5BQb2P818SMddRo6gA79hd?) - - - -### Create an access key [_create_an_access_key] - -Create the access keys that will authenticate your Elastic connector: - -1. Return to the **IAM** menu. Select **Users** from the navigation menu. -2. Search for the user you just created, and click its name. -3. Go to the **Security credentials** tab. -4. Under **Access keys**, click **Create access key**. -5. Select **Third-party service**, check the box under **Confirmation**, click **Next**, then click **Create access key**. -6. Click **Download .csv file** to download the key. Store it securely. - -The following video demonstrates these steps (click to watch). - -[![bedrock-accesskey-video](https://play.vidyard.com/8oXgP1fbaQCqjWUgncF9at.jpg)](https://videos.elastic.co/watch/8oXgP1fbaQCqjWUgncF9at?) - - - -## Configure the Amazon Bedrock connector [_configure_the_amazon_bedrock_connector] - -Finally, configure the connector in {{kib}}: - -1. Log in to {{kib}}. -2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**. -3. 
Name your connector. -4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`. -5. (Optional) Add one of the following model IDs if you want to use a model other than the default. Note that these model IDs should have a prefix of `us.` or `eu.`, depending on your region, for example `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0`. - - * Sonnet 3.5: `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0` - * Sonnet 3.5 v2: `us.anthropic.claude-3-5-sonnet-20241022-v2:0` or `eu.anthropic.claude-3-5-sonnet-20241022-v2:0` - * Sonnet 3.7: `us.anthropic.claude-3-7-sonnet-20250219-v1:0` or `eu.anthropic.claude-3-7-sonnet-20250219-v1:0` - * Haiku 3.5: `us.anthropic.claude-3-5-haiku-20241022-v1:0` or `eu.anthropic.claude-3-5-haiku-20241022-v1:0` - * Opus: `us.anthropic.claude-3-opus-20240229-v1:0` or `eu.anthropic.claude-3-opus-20240229-v1:0` - -6. Enter the **Access Key** and **Secret** that you generated earlier, then click **Save**. - - Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md). - - -::::{important} -If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`. -:::: - - -The following video demonstrates these steps (click to watch). 
- -[![bedrock-configure-model-video](https://play.vidyard.com/QJe4RcTJbp6S6m9CkReEXs.jpg)](https://videos.elastic.co/watch/QJe4RcTJbp6S6m9CkReEXs?) diff --git a/solutions/security/ai/connect-to-azure-openai.md b/solutions/security/ai/connect-to-azure-openai.md deleted file mode 100644 index 9e8cd872bb..0000000000 --- a/solutions/security/ai/connect-to-azure-openai.md +++ /dev/null @@ -1,94 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/security/current/assistant-connect-to-azure-openai.html - - https://www.elastic.co/guide/en/serverless/current/security-connect-to-azure-openai.html -applies_to: - stack: all - serverless: - security: all -products: - - id: security - - id: cloud-serverless ---- - -# Connect to Azure OpenAI - -This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure Azure, then configure the connector in {{kib}}. - - -## Configure Azure [_configure_azure] - - -### Configure a deployment [_configure_a_deployment] - -First, set up an Azure OpenAI deployment: - -1. Log in to the Azure console and search for Azure OpenAI. -2. In **Azure AI services**, select **Create**. -3. For the **Project Details**, select your subscription and resource group. If you don’t have a resource group, select **Create new** to make one. -4. For **Instance Details**, select the desired region and specify a name, such as `example-deployment-openai`. -5. Select the **Standard** pricing tier, then click **Next**. -6. Configure your network settings, click **Next**, optionally add tags, then click **Next**. -7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**. - -The following video demonstrates these steps (click to watch). 
- -[![azure-openai-configure-deployment-video](https://play.vidyard.com/7NEa5VkVJ67RHWBuK8qMXA.jpg)](https://videos.elastic.co/watch/7NEa5VkVJ67RHWBuK8qMXA?) - -### Configure keys [_configure_keys] - -Next, create access keys for the deployment: - -1. From within your Azure OpenAI deployment, select **Click here to manage keys**. -2. Store your keys in a secure location. - -The following video demonstrates these steps (click to watch). - -[![azure-openai-configure-keys-video](https://play.vidyard.com/cQXw96XjaeF4RiB3V4EyTT.jpg)](https://videos.elastic.co/watch/cQXw96XjaeF4RiB3V4EyTT?) - - -### Configure a model [_configure_a_model] - -Now, set up the Azure OpenAI model: - -1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**. -2. On the **Deployments** page, select **Create new deployment**. -3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`. -4. Set the model version to "Auto-update to default". - - :::{important} - The models available to you depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md). - ::: - -5. Under **Deployment type**, select **Standard**. -6. Name your deployment. -7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits. -8. Click **Create**. - -The following video demonstrates these steps (click to watch). - -[![azure-openai-configure-model-video](https://play.vidyard.com/PdadFyV1p1DbWRyCr95whT.jpg)](https://videos.elastic.co/watch/PdadFyV1p1DbWRyCr95whT?) 
- - - -## Configure Elastic AI Assistant [_configure_elastic_ai_assistant] - -Finally, configure the connector in {{kib}}: - -1. Log in to {{kib}}. -2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **OpenAI**. -3. Give your connector a name to help you keep track of different models, such as `Azure OpenAI (GPT-4 Turbo v. 0125)`. -4. For **Select an OpenAI provider**, choose **Azure OpenAI**. -5. Update the **URL** field. We recommend doing the following: - - 1. Navigate to your deployment in Azure AI Studio and select **Open in Playground**. The **Chat playground** screen displays. - 2. Select **View code**, then from the drop-down, change the **Sample code** to `Curl`. - 3. Highlight and copy the URL without the quotes, then paste it into the **URL** field in {{kib}}. - 4. (Optional) Alternatively, refer to the [API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) to learn how to create the URL manually. - -6. Under **API key**, enter one of your API keys. -7. Click **Save & test**, then click **Run**. - -Your LLM connector is now configured. The following video demonstrates these steps (click to watch). - -[![azure-openai-configure-model-video](https://play.vidyard.com/RQZVcnXHokC3RcV6ZB2pmF.jpg)](https://videos.elastic.co/watch/RQZVcnXHokC3RcV6ZB2pmF?) 
\ No newline at end of file diff --git a/solutions/security/ai/connect-to-google-vertex.md b/solutions/security/ai/connect-to-google-vertex.md deleted file mode 100644 index a72513119b..0000000000 --- a/solutions/security/ai/connect-to-google-vertex.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/security/current/connect-to-vertex.html - - https://www.elastic.co/guide/en/serverless/current/security-connect-to-google-vertex.html -applies_to: - stack: all - serverless: - security: all -products: - - id: security - - id: cloud-serverless ---- - -# Connect to Google Vertex - -This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate a key, and finally configure the connector in your {{elastic-sec}} project. - -::::{important} -Before continuing, you should have an active project in one of Google Vertex AI’s [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability). -:::: - - - -## Enable the Vertex AI API [_enable_the_vertex_ai_api] - -1. Log in to the GCP console and navigate to **Vertex AI → Vertex AI Studio → Overview**. -2. If you’re new to Vertex AI, the **Get started with Vertex AI Studio** popup appears. Click **Vertex AI API**, then click **ENABLE**. - -The following video demonstrates these steps. - -[![connect-vertex-api-video](https://play.vidyard.com/vFhtbiCZiKhvdZGy2FjyeT.jpg)](https://videos.elastic.co/watch/vFhtbiCZiKhvdZGy2FjyeT?) 
- - -::::{note} -For more information about enabling the Vertex AI API, refer to [Google’s documentation](https://cloud.google.com/vertex-ai/docs/start/cloud-environment). -:::: - - - -## Create a Vertex AI service account [_create_a_vertex_ai_service_account] - -1. In the GCP console, navigate to **APIs & Services → Library**. -2. Search for **Vertex AI API**, select it, and click **MANAGE**. -3. In the left menu, navigate to **Credentials** then click **+ CREATE CREDENTIALS** and select **Service account**. -4. Name the new service account, then click **CREATE AND CONTINUE**. -5. Under **Select a role**, select **Vertex AI User**, then click **CONTINUE**. -6. Click **Done**. - -The following video demonstrates these steps. - -[![create-vertex-account-video](https://play.vidyard.com/tmresYYiags2w2nTv3Gac8.jpg)](https://videos.elastic.co/watch/tmresYYiags2w2nTv3Gac8?) - - -## Generate a key [_generate_an_api_key] - -1. Return to Vertex AI’s **Credentials** menu and click **Manage service accounts**. -2. Search for the service account you just created, select it, then click the link that appears under **Email**. -3. Go to the **KEYS** tab, click **ADD KEY**, then select **Create new key**. -4. Select **JSON**, then click **CREATE** to download the key. Keep it somewhere secure. - -The following video demonstrates these steps. - -[![create-vertex-key-video](https://play.vidyard.com/hrcy3F9AodwhJcV1i2yqbG.jpg)](https://videos.elastic.co/watch/hrcy3F9AodwhJcV1i2yqbG?) - - - -## Configure the Google Gemini connector [_configure_the_google_gemini_connector] - -Finally, configure the connector in your Elastic deployment: - -1. Log in to your Elastic deployment. -2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, select **Google Gemini**. -3. Name your connector to help keep track of the model version you are using. -4. 
Under **URL**, enter the URL for your region. -5. Enter your **GCP Region** and **GCP Project ID**. -6. Under **Default model**, specify either `gemini-1.5-pro` or `gemini-1.5-flash`. [Learn more about the models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models). -7. Under **Authentication**, enter your credentials JSON. -8. Click **Save**. - -The following video demonstrates these steps. - - -[![configure-gemini-connector-video](https://play.vidyard.com/8L2WPm2HKN1cH872Gs5uvL.jpg)](https://videos.elastic.co/watch/8L2WPm2HKN1cH872Gs5uvL?) diff --git a/solutions/security/ai/connect-to-openai.md b/solutions/security/ai/connect-to-openai.md deleted file mode 100644 index 30df8f3560..0000000000 --- a/solutions/security/ai/connect-to-openai.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/security/current/assistant-connect-to-openai.html - - https://www.elastic.co/guide/en/serverless/current/security-connect-to-openai.html -applies_to: - stack: all - serverless: - security: all -products: - - id: security - - id: cloud-serverless ---- - -# Connect to OpenAI - -This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI’s large language models (LLMs) within {{kib}}. You’ll first need to create an OpenAI API key, then configure the connector in {{kib}}. - - -## Configure OpenAI [_configure_openai] - - -### Select a model [_select_a_model] - -Before creating an API key, you must choose a model. Refer to the [OpenAI docs](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you’ll need it when configuring {{kib}}. - -::::{note} -`GPT-4o` offers increased performance over previous versions. 
For more information on how different models perform for different tasks, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md). -:::: - - - -### Create an API key [_create_an_api_key] - -To generate an API key: - -1. Log in to the OpenAI platform and navigate to **API keys**. -2. Select **Create new secret key**. -3. Name your key, select an OpenAI project, and set the desired permissions. -4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen. - -The following video demonstrates these steps (click to watch). - -[![openai-apikey-video](https://play.vidyard.com/vbD7fGBGgyxK4TRbipeacL.jpg)](https://videos.elastic.co/watch/vbD7fGBGgyxK4TRbipeacL?) - - -## Configure the OpenAI connector [_configure_the_openai_connector] - -To integrate with {{kib}}: - -1. Log in to {{kib}}. -2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **OpenAI**. -3. Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using. -4. Under **Select an OpenAI provider**, choose **OpenAI**. -5. The **URL** field can be left as default. -6. Under **Default model**, specify which [model](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) you want to use. -7. Paste the API key that you created into the corresponding field. -8. Click **Save**. - -The following video demonstrates these steps (click to watch). - -[![openai-configure-connector-video](https://play.vidyard.com/BGaQ73KBJCzeqWoxXkQvy9.jpg)](https://videos.elastic.co/watch/BGaQ73KBJCzeqWoxXkQvy9?) 
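Behind the connector, an OpenAI chat-completion request pairs the default model with your API key. A minimal sketch of such a request body (the model name matches the `gpt-4-turbo` example above; the API key is a fabricated placeholder, and is sent as a bearer token rather than in the body):

```python
import json

# Sketch of the request body an OpenAI chat-completions call uses.
# "gpt-4-turbo" matches the example model above; the Authorization
# value is a placeholder for the secret key you created earlier.
body = {
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello from Elastic AI Assistant"}],
}
headers = {
    "Authorization": "Bearer <your-api-key>",  # placeholder, never commit real keys
    "Content-Type": "application/json",
}
payload = json.dumps(body)
print(payload)
```

The connector assembles and sends this request for you; the sketch only shows where the **Default model** value and API key end up.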
\ No newline at end of file diff --git a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md index 7c28c156f4..742ee507dd 100644 --- a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md +++ b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md @@ -26,12 +26,7 @@ Different LLMs have varying performance when used to power different features an ## Connect to a third-party LLM -Follow these guides to connect to one or more third-party LLM providers: - -* [Azure OpenAI](/solutions/security/ai/connect-to-azure-openai.md) -* [Amazon Bedrock](/solutions/security/ai/connect-to-amazon-bedrock.md) -* [OpenAI](/solutions/security/ai/connect-to-openai.md) -* [Google Vertex](/solutions/security/ai/connect-to-google-vertex.md) +To access deployment guides for each available generative third-party LLM connector, refer to [GenAI Connectors](kibana://reference/connectors-kibana/gen-ai-connectors.md). ## Connect to a custom local LLM From b869b5dad932b202cd1ab4093899b7d04a245427 Mon Sep 17 00:00:00 2001 From: Benjamin Ironside Goldstein Date: Fri, 7 Nov 2025 11:33:19 -0600 Subject: [PATCH 2/2] removes deleted pages from ToC --- solutions/toc.yml | 4 ---- 1 file changed, 4 deletions(-) diff --git a/solutions/toc.yml b/solutions/toc.yml index 2a824dabd3..678118ec32 100644 --- a/solutions/toc.yml +++ b/solutions/toc.yml @@ -571,10 +571,6 @@ toc: - file: security/ai/set-up-connectors-for-large-language-models-llm.md children: - file: security/ai/large-language-model-performance-matrix.md - - file: security/ai/connect-to-azure-openai.md - - file: security/ai/connect-to-amazon-bedrock.md - - file: security/ai/connect-to-openai.md - - file: security/ai/connect-to-google-vertex.md - file: security/ai/connect-to-own-local-llm.md - file: security/ai/use-cases.md children: