ESP32_AI_Connect is an Arduino library that enables ESP32 microcontrollers to interact seamlessly with popular AI APIs including:
- OpenAI
- Google Gemini
- Anthropic Claude
- DeepSeek
- OpenAI-Compatible (HuggingFace, Qwen, etc.)
- And more to be included...
| Anthropic Claude |`"claude"`| claude-3.7-sonnet, claude-3.5-haiku, etc. | Yes | Under Development |
| OpenAI Compatible |`"openai-compatible"`| HuggingFace, Qwen, etc. | See Note 1 below | Under Development |
**Note 1:** Tool call support differs by platform and model, so the availability of the `tool_calls` feature on the OpenAI Compatible platform depends on your chosen platform and model.
**Note 2:** We are actively working to add Grok and Ollama to the list of supported platforms.
ESP32_AI_Connect is a powerful, flexible library designed to connect ESP32 microcontrollers to various Large Language Model (LLM) platforms. This library simplifies the process of integrating AI capabilities into your ESP32 projects, allowing you to create intelligent IoT devices, smart assistants, and interactive applications with minimal effort.
## Key Features
- **Multi-Platform Support**: Connect to multiple AI platforms including OpenAI, Anthropic Claude, Google Gemini, and DeepSeek
- **Simple API**: Easy-to-use interface for sending prompts and receiving responses
- **Tool Calls Support**: Enable your ESP32 to use LLM function calling capabilities
- **Memory Efficient**: Optimized for the limited resources of ESP32 devices
The library follows a well-structured design pattern:
1. **ESP32_AI_Connect**: The main class that users interact with
2. **AI_API_Platform_Handler**: An abstract base class that defines a common interface for all AI platforms
3. **Platform-Specific Handlers**: Implementations for each supported AI platform (OpenAI, Gemini, DeepSeek, Anthropic, etc.)
This architecture allows you to switch between different AI platforms with minimal code changes, while also making it easy to extend the library with support for new platforms.
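The three-layer design above can be sketched in portable C++. The class and function names below (other than the two classes the text names) are hypothetical stand-ins for illustration, not the library's actual API:

```cpp
#include <memory>
#include <string>

// Sketch of the abstract base class role (cf. AI_API_Platform_Handler):
// every platform handler exposes the same interface.
struct APIHandlerSketch {
    virtual ~APIHandlerSketch() = default;
    virtual std::string platformName() const = 0;
};

// Platform-specific handlers implement that interface per provider.
struct OpenAIHandlerSketch : APIHandlerSketch {
    std::string platformName() const override { return "openai"; }
};

struct GeminiHandlerSketch : APIHandlerSketch {
    std::string platformName() const override { return "gemini"; }
};

// The main class would select a handler at runtime from the platform id,
// which is why switching platforms needs only a different identifier.
std::unique_ptr<APIHandlerSketch> makeHandler(const std::string& id) {
    if (id == "openai") return std::make_unique<OpenAIHandlerSketch>();
    if (id == "gemini") return std::make_unique<GeminiHandlerSketch>();
    return nullptr;  // unsupported platform
}
```

Adding a new platform in this scheme means adding one more derived handler, leaving caller code unchanged.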
To use ESP32_AI_Connect in your project, you'll need:
- An ESP32 development board
- Arduino IDE or PlatformIO
- An API key for your chosen AI platform (OpenAI, Google Gemini, Anthropic Claude, or DeepSeek)
- WiFi connectivity
The library can be installed by downloading the source code from the repository.
This line initializes the AI client with three parameters:
- The platform identifier (`"openai"` in this example, but you can also use `"gemini"`, `"claude"`, `"deepseek"`, or `"openai-compatible"`)
- Your API key
- The model name (`"gpt-3.5-turbo"` for OpenAI)
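As a self-contained illustration of the parameter order listed above (platform identifier, API key, model name), here is a sketch using a hypothetical stand-in class; check the real ESP32_AI_Connect header for the actual constructor signature:

```cpp
#include <string>
#include <utility>

// Hypothetical stand-in for the library's client class, shown only to
// illustrate the three constructor parameters -- NOT the actual API.
struct AIClientSketch {
    std::string platform, apiKey, model;
    AIClientSketch(std::string p, std::string k, std::string m)
        : platform(std::move(p)), apiKey(std::move(k)), model(std::move(m)) {}
};

// Platform identifier first, then API key, then model name.
AIClientSketch aiClient("openai", "YOUR_API_KEY", "gpt-3.5-turbo");
```

Swapping `"openai"` for another supported identifier (and a matching model name) is all that changes when you target a different platform.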
After establishing the WiFi connection, we configure the AI client with specific parameters:
```cpp:basic_example.ino
// Configure AI client parameters:
aiClient.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
aiClient.setChatMaxTokens(200); // Limit response length (in tokens)
aiClient.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior
```
These configuration options allow you to customize the behavior of the AI:
- `setChatTemperature(0.7)`: Controls how creative or deterministic responses are (0.0-2.0; higher values produce more varied output).
- `setChatMaxTokens(200)`: Limits the maximum length of the AI's response in tokens (roughly 4 characters per token). This helps control response size and API costs.
- `setChatSystemRole("You are a helpful assistant")`: Sets the system message that defines the AI's behavior and personality.
**Note: The parameters set by the above methods are optional. If you do not explicitly configure these parameters, the LLM will use its default values for temperature, max tokens, and system role.**
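The "roughly 4 characters per token" rule of thumb above can be turned into a quick budget check. `estimateTokens` and `fitsTokenBudget` below are hypothetical helpers for illustration, not part of the library:

```cpp
#include <cstddef>
#include <string>

// Very rough token estimate using the ~4-characters-per-token
// heuristic mentioned above; real tokenizers will count differently.
std::size_t estimateTokens(const std::string& text) {
    return (text.size() + 3) / 4;  // round up to whole tokens
}

// e.g. check whether text of this size would fit a 200-token cap
bool fitsTokenBudget(const std::string& text, std::size_t maxTokens) {
    return estimateTokens(text) <= maxTokens;
}
```

This kind of estimate is only useful for ballparking response size and API cost; exact counts depend on the model's tokenizer.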
## Step 6: Verifying Configuration with Getter Methods
You can verify your configuration settings using the corresponding getter methods:
```cpp:basic_example.ino
// Retrieve and display the current configuration
Serial.println("\nAI Configuration:");
Serial.print("System Role: ");
Serial.println(aiClient.getChatSystemRole());
Serial.print("Temperature: ");
Serial.println(aiClient.getChatTemperature());
Serial.print("Max Tokens: ");
Serial.println(aiClient.getChatMaxTokens());
```
These getter methods let you confirm your configuration before sending requests to the API.
Now we're ready to send a message to the AI and receive a response:
```cpp:basic_example.ino
// Send a test message to the AI and get response
Serial.println("\nSending message to AI...");
String response = aiClient.chat("Hello! Who are you?");
// Print the AI's response
Serial.println("\nAI Response:");
Serial.println(response);
// Check for errors (empty response indicates an error occurred)
if (response.length() == 0) {
  Serial.println("Error: no response received from the AI.");
}
```
This is useful for self-hosted models or alternative API providers that are compatible with the OpenAI API format.
For more detailed information on how to use the OpenAI-compatible API, please refer to the custom_llm_chat.ino example in the examples folder of the ESP32_AI_Connect library.
## Accessing Raw API Responses
For advanced usage, you might want to access the complete raw JSON response from the API. The library provides methods to retrieve these:
```cpp
// Get the raw JSON response from the last chat request
```