**LLM Connector** is a plugin that provides out-of-the-box integrations with large language models (LLMs). The plugin ships with built-in support for three default LLM providers: [**OpenAI**](docs/providers/OpenAI.md), [**Gemini**](/docs/providers/Gemini.md) and [**WebLlm (in-browser)**](/docs/providers/WebLlm.md). Developers may also create their own providers to support niche or custom use cases. The plugin also provides generalized configurations for managing streaming behavior, chat history inclusion and audio output, greatly reducing the amount of custom logic required from developers.
For support, join the plugin community on [**Discord**](https://discord.gg/J6pA4v3AMW) to connect with other developers and get help.
The quickstart above shows how LLM integrations can be done; in addition, the plugin also supports the following (a configuration sketch follows this list):
- Configure size of message history to include
- Configure default error messages if responses fail
- Synchronized audio output (relies on core library audio configurations to read out LLM responses)
- Built-in common providers for easy integrations (OpenAI, Gemini & WebLlm)
- Ease of building your own providers for niche or custom use cases
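To make these concrete, here is a minimal sketch of wiring the plugin and a default provider into a flow block. The `llmConnector` attribute, `LlmConnectorBlock` type and `OpenaiProvider` options follow the plugin's quickstart conventions, while the `historyConfig`/`historySize` fields are assumptions used to illustrate the generalized configurations; verify exact names against the API documentation.

```tsx
import ChatBot from "react-chatbotify";
import LlmConnector, { LlmConnectorBlock, OpenaiProvider } from "@rcb-plugins/llm-connector";

// A flow block that hands the conversation over to an LLM provider.
// Option names below are illustrative - check the API documentation
// for the exact configuration fields.
const flow = {
  start: {
    llmConnector: {
      provider: new OpenaiProvider({
        mode: "direct",
        model: "gpt-4o-mini",
        apiKey: "your-api-key-here",
      }),
      historyConfig: {
        historySize: 5, // number of past messages to include (assumed field)
      },
    },
  } as LlmConnectorBlock,
};

const MyChatBot = () => <ChatBot plugins={[LlmConnector()]} flow={flow} />;
```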
### API Documentation
> Note that if your choice of provider falls outside the default ones but exposes API specifications aligned with a default provider (e.g. OpenAI), you may still use that default provider.
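For example, a self-hosted server that speaks the OpenAI chat completions format can typically be reached by pointing the default OpenAI provider at its endpoint. This is a sketch: the `baseUrl` option name is an assumption, so check the OpenAI provider's configuration guide for the exact field.

```ts
import { OpenaiProvider } from "@rcb-plugins/llm-connector";

// Reusing the default OpenAI provider against an OpenAI-compatible,
// self-hosted endpoint. The baseUrl field is an assumed option name.
const provider = new OpenaiProvider({
  mode: "direct",
  model: "llama-3-8b-instruct",
  apiKey: "not-needed-for-local-servers",
  baseUrl: "http://localhost:8000/v1",
});
```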
In addition, React ChatBotify's documentation website also contains live examples:
- [**OpenAI Provider Live Example**](https://react-chatbotify.com/docs/examples/openai_integration)
- [**Gemini Provider Live Example**](https://react-chatbotify.com/docs/examples/gemini_integration)
- [**WebLlm Live Example**](https://react-chatbotify.com/docs/examples/llm_conversation)
Developers may also write custom providers to integrate with their own solutions by importing and implementing the `Provider` interface. The only method enforced by the interface is `sendMessage`, which returns an `AsyncGenerator<string>` for the `LlmConnector` plugin to consume. A minimal example of a custom provider is shown below:
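In this sketch, only the `sendMessage` name and its `AsyncGenerator<string>` return type are fixed by the interface; the parameter shape is an assumption made to keep the example self-contained.

```ts
import { Provider } from "@rcb-plugins/llm-connector";

// A toy provider that streams the user's message back one word at a
// time. Only sendMessage and its AsyncGenerator<string> return type
// are required by the Provider interface; the parameter shape here
// is an assumption.
class EchoProvider implements Provider {
  public async *sendMessage(message: string): AsyncGenerator<string> {
    for (const word of message.split(" ")) {
      yield word + " ";
      // Simulate streaming latency between chunks.
      await new Promise((resolve) => setTimeout(resolve, 50));
    }
  }
}
```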
> [!WARNING]
> The WllamaProvider is **no longer shipped by default** with the plugin. If you wish, you may refer to the legacy WllamaProvider implementation [**here**](https://gist.github.com/tjtanjin/345fe484c6df26c8194381d2b177f66c) and copy it into your codebase, then reference the configuration guide below.
# WllamaProvider Configuration Guide
The `WllamaProvider` runs LLM models in the browser using the Wllama WebAssembly runtime. It exposes Wllama's [**AssetsPathConfig**](https://github.ngxson.com/wllama/docs/interfaces/AssetsPathConfig.html), [**WllamaConfig**](https://github.ngxson.com/wllama/docs/interfaces/WllamaConfig.html), [**LoadModelConfig**](https://github.ngxson.com/wllama/docs/interfaces/LoadModelConfig.html) and [**ChatCompletionOptions**](https://github.ngxson.com/wllama/docs/interfaces/ChatCompletionOptions.html) configuration interfaces.
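As a rough sketch of how those configuration objects might be threaded through the provider (the constructor shape, field grouping and model URL here are assumptions; the linked Wllama interfaces are authoritative for the individual fields):

```ts
// Assumes the legacy WllamaProvider from the gist has been copied
// into your codebase, e.g. under src/providers/WllamaProvider.ts.
import { WllamaProvider } from "./providers/WllamaProvider";

// Field grouping below mirrors the Wllama interfaces linked above;
// the exact constructor shape is an assumption.
const provider = new WllamaProvider({
  modelUrl: "https://huggingface.co/your-org/your-model/resolve/main/model.gguf", // assumed example URL
  loadModelConfig: { n_ctx: 2048 },         // LoadModelConfig
  chatCompletionOptions: { nPredict: 256 }, // ChatCompletionOptions
});
```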