
Commit cdeb077

chore: Remove wllama

1 parent 2ca0ca2

File tree

12 files changed: +14 −198 lines changed

README.md

Lines changed: 3 additions & 4 deletions

```diff
@@ -27,7 +27,7 @@
 <img height="400px" src="https://github.com/user-attachments/assets/c78f88f9-bbb7-4bad-91a8-13633ce35d4a" />
 </p>

-**LLM Connector** is a plugin that provides out-of-the-box integrations with large language models (LLMs). The plugin ships with built-in support for 4 default LLM providers which are [**OpenAI**](docs/providers/OpenAI.md), [**Gemini**](/docs/providers/Gemini.md), [**WebLlm (in-browser)**](/docs/providers/WebLlm.md) and [**Wllama (in-browser)**](docs/providers/Wllama.md). Developers may also create their own providers beyond those that are provided to support niche or custom use cases. The plugin also provides generalized configurations for managing streaming behavior, chat history inclusion and audio output, greatly simplifying the amount of custom logic required from developers.
+**LLM Connector** is a plugin that provides out-of-the-box integrations with large language models (LLMs). The plugin ships with built-in support for 3 default LLM providers which are [**OpenAI**](docs/providers/OpenAI.md), [**Gemini**](/docs/providers/Gemini.md) and [**WebLlm (in-browser)**](/docs/providers/WebLlm.md). Developers may also create their own providers beyond those that are provided to support niche or custom use cases. The plugin also provides generalized configurations for managing streaming behavior, chat history inclusion and audio output, greatly simplifying the amount of custom logic required from developers.

 For support, join the plugin community on [**Discord**](https://discord.gg/J6pA4v3AMW) to connect with other developers and get help.

@@ -95,7 +95,7 @@ The quickstart above shows how LLM integrations can be done within the `llm_exam
 - Configure size of message history to include
 - Configure default error messages if responses fail
 - Synchronized audio output (relies on core library audio configurations to read out LLM responses)
-- Built-in common providers for easy integrations (OpenAI, Gemini, WebLlm & Wllama)
+- Built-in common providers for easy integrations (OpenAI, Gemini & WebLlm)
 - Ease of building your own providers for niche or custom use cases

 ### API Documentation
@@ -155,7 +155,6 @@ As you may have seen from earlier examples, providers are passed into the `provi
 - [**OpenAIProvider Configurations**](/docs/providers/OpenAI.md)
 - [**GeminiProvider Configurations**](/docs/providers/Gemini.md)
 - [**WebLlmProvider Configurations**](/docs/providers/WebLlm.md)
-- [**WllamaProvider Configurations**](/docs/providers/Wllama.md)

 > [!TIP]
 > Note that if your choice of provider falls outside the default ones provided but has API specifications aligned to default providers (e.g. OpenAI), you may still use the default providers.
@@ -164,7 +163,7 @@ In addition, React ChatBotify's documentation website also contains live example

 - [**OpenAI Provider Live Example**](https://react-chatbotify.com/docs/examples/openai_integration)
 - [**Gemini Provider Live Example**](https://react-chatbotify.com/docs/examples/gemini_integration)
-- [**Browser Providers (WebLlm & Wllama) Live Example**](https://react-chatbotify.com/docs/examples/llm_conversation)
+- [**WebLlm Live Example**](https://react-chatbotify.com/docs/examples/llm_conversation)

 Developers may also write custom providers to integrate with their own solutions by importing and implementing the `Provider` interface. The only method enforced by the interface is `sendMessage`, which returns an `AsyncGenerator<string>` for the `LlmConnector` plugin to consume. A minimal example of a custom provider is shown below:
```
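The custom-provider contract the README describes can be sketched as follows. The interface shape here only reproduces what the README states (a single required `sendMessage` method yielding an `AsyncGenerator<string>`); the plugin's actual `Provider` type may carry additional optional members, and `EchoProvider` is a hypothetical example, not part of the plugin:

```typescript
// Minimal sketch of the Provider contract described in the README.
// Only `sendMessage` is assumed here; the plugin's real interface may
// include additional optional members.
interface Provider {
  sendMessage(message: string): AsyncGenerator<string>;
}

// Hypothetical provider that streams its reply back word by word,
// the way an LLM provider would stream tokens for the plugin to consume.
class EchoProvider implements Provider {
  public async *sendMessage(message: string): AsyncGenerator<string> {
    for (const word of `Echo: ${message}`.split(' ')) {
      yield `${word} `;
    }
  }
}
```

A consumer simply iterates the generator with `for await`, appending each chunk to the chat as it arrives.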

docs/providers/Wllama.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -1,3 +1,6 @@
+> [!WARNING]
+> The WllamaProvider is **no longer shipped by default** with the plugin. If you wish, you may refer to the legacy WllamaProvider implementation [**here**](https://gist.github.com/tjtanjin/345fe484c6df26c8194381d2b177f66c) and copy it into your codebase, then reference the configuration guide below.
+
 # WllamaProvider Configuration Guide

 The `WllamaProvider` runs LLM models in the browser using the Wllama WebAssembly runtime. It exposes the Wllama [**AssetsPathConfig**](https://github.ngxson.com/wllama/docs/interfaces/AssetsPathConfig.html), [**WllamaConfig**](https://github.ngxson.com/wllama/docs/interfaces/WllamaConfig.html), [**LoadModelConfig**](https://github.ngxson.com/wllama/docs/interfaces/LoadModelConfig.html) and [**ChatCompletionOptions**](https://github.ngxson.com/wllama/docs/interfaces/ChatCompletionOptions.html).
```
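For reference, the example removed from `src/App.tsx` in this commit constructed the provider roughly as below. Treat this as a configuration sketch that assumes you have copied the legacy `WllamaProvider` into your own codebase (per the warning in the Wllama.md doc); the import path is hypothetical and depends on where you place the file:

```typescript
// Configuration sketch based on the example deleted from src/App.tsx in
// this commit. Assumes the legacy WllamaProvider has been copied locally;
// adjust the import path to wherever you placed it.
import WllamaProvider from './providers/WllamaProvider';

const provider = new WllamaProvider({
  modelUrl:
    'https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct-GGUF/resolve/main/smollm2-360m-instruct-q8_0.gguf',
  loadModelConfig: {
    n_ctx: 8192, // context window size, passed through to Wllama's LoadModelConfig
  },
});
```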

package-lock.json

Lines changed: 2 additions & 10 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 3 additions & 5 deletions

```diff
@@ -12,14 +12,13 @@
     "gemini",
     "openai",
     "webllm",
-    "wllama",
     "react-chatbotify",
     "react-chatbotify-plugin"
   ],
   "files": [
     "./dist"
   ],
-  "version": "0.2.0",
+  "version": "0.3.0",
   "description": "A generic LLM connector for integrating Large Language Models (LLMs) in React ChatBotify!",
   "type": "module",
   "main": "./dist/index.cjs",
@@ -50,11 +49,10 @@
     "url": "https://github.com/React-ChatBotify-Plugins/llm-connector"
   },
   "peerDependencies": {
+    "@mlc-ai/web-llm": "^0.2.78",
     "react": ">=16.14.0",
-    "react-dom": ">=16.14.0",
     "react-chatbotify": "^2.0.0-beta.37",
-    "@mlc-ai/web-llm": "^0.2.78",
-    "@wllama/wllama": "^2.3.1"
+    "react-dom": ">=16.14.0"
   },
   "devDependencies": {
     "@eslint/js": "^9.20.0",
```

public/multi-thread/wllama.wasm

Deleted (−1.67 MB). Binary file not shown.

public/single-thread/wllama.wasm

Deleted (−1.64 MB). Binary file not shown.

src/App.tsx

Lines changed: 2 additions & 19 deletions

```diff
@@ -3,7 +3,6 @@ import ChatBot, { Flow, Message, Params } from 'react-chatbotify';
 import RcbPlugin from './factory/RcbPluginFactory';
 import { LlmConnectorBlock } from './types/LlmConnectorBlock';
 import GeminiProvider from './providers/GeminiProvider';
-import WllamaProvider from './providers/WllamaProvider';
 import OpenaiProvider from './providers/OpenaiProvider';
 import WebLlmProvider from './providers/WebLlmProvider';

@@ -41,11 +40,11 @@ const App = () => {
       }
       return 'Pick another model to try!';
     },
-    options: ['WebLlm', 'Wllama', 'Gemini', 'OpenAI'],
+    options: ['WebLlm', 'Gemini', 'OpenAI'],
     chatDisabled: true,
     path: async (params: Params) => {
       // if browser model chosen, give a gentle warning about performance
-      if (params.userInput === 'WebLlm' || params.userInput === 'Wllama') {
+      if (params.userInput === 'WebLlm') {
         await params.simulateStreamMessage(
           `You selected ${params.userInput}. This model runs in your browser, so responses may be slower and less accurate.`
         );
@@ -81,22 +80,6 @@ const App = () => {
       },
     },
   } as LlmConnectorBlock,
-  wllama: {
-    llmConnector: {
-      provider: new WllamaProvider({
-        modelUrl:
-          'https://huggingface.co/HuggingFaceTB/SmolLM2-360M-Instruct-GGUF/resolve/main/smollm2-360m-instruct-q8_0.gguf',
-        loadModelConfig: {
-          n_ctx: 8192,
-        },
-      }),
-      outputType: 'character',
-      stopConditions: {
-        onUserMessage: onUserMessageCheck,
-        onKeyDown: onKeyDownCheck,
-      },
-    },
-  } as LlmConnectorBlock,
   gemini: {
     llmConnector: {
       provider: new GeminiProvider({
```

src/index.tsx

Lines changed: 1 addition & 2 deletions

```diff
@@ -5,15 +5,14 @@ import LlmConnector from './factory/RcbPluginFactory';
 import GeminiProvider from './providers/GeminiProvider';
 import OpenaiProvider from './providers/OpenaiProvider';
 import WebLlmProvider from './providers/WebLlmProvider';
-import WllamaProvider from './providers/WllamaProvider';

 // type imports
 import { LlmConnectorBlock } from './types/LlmConnectorBlock';
 import { PluginConfig } from './types/PluginConfig';
 import { Provider } from './types/Provider';

 // default provider exports
-export { GeminiProvider, OpenaiProvider, WebLlmProvider, WllamaProvider };
+export { GeminiProvider, OpenaiProvider, WebLlmProvider };

 // type exports
 export type { LlmConnectorBlock, PluginConfig, Provider };
```

src/providers/WllamaProvider.ts

Lines changed: 0 additions & 126 deletions
This file was deleted.

src/types/provider-config/WllamaProviderConfig.ts

Lines changed: 0 additions & 19 deletions
This file was deleted.

0 commit comments