
Commit b11d17d
Commit message: "Revised to reflect updates"
1 parent 7f23621

File tree: 3 files changed (+78 / -56 lines)

README.md

Lines changed: 16 additions & 14 deletions
@@ -18,6 +18,7 @@ Visit us at [AvantMaker.com](https://www.avantmaker.com) where we've crafted a c
 ESP32_AI_Connect is an Arduino library that enables ESP32 microcontrollers to interact seamlessly with popular AI APIs including:
 - OpenAI
 - Google Gemini
+- Anthropic Claude
 - DeepSeek
 - OpenAI-Compatible (HuggingFace, Qwen, etc.)
 - And more to be included...
@@ -49,7 +50,7 @@ ESP32_AI_Connect is an Arduino library that enables ESP32 microcontrollers to in
 | Anthropic Claude | `"claude"` | claude-3.7-sonnet, claude-3.5-haiku, etc. | Yes | Under Development |
 | OpenAI Compatible | `"openai-compatible"` | HuggingFace, Qwen, etc. | See Note 1 below | Under Development |
 
-**Note 1:** Support for tool calls varies across platforms and models. Therefore, the availability of the `tool_calls` functionality of the OpenAI Compatible platform depends on the specific platform and model you select.
+**Note 1:** Tool call support differs by platform and model, so the availability of the `tool_calls` feature on the OpenAI Compatible platform depends on your chosen platform and model.
 
 **Note 2:** We are actively working to add Grok and Ollama to the list of supported platforms.
 
@@ -88,10 +89,10 @@ const char* password = "your_PASSWORD"; // Your WiFi password
 const char* apiKey = "your_API_KEY"; // Your OpenAI API key (keep this secure!)
 
 // Initialize AI client with:
-// 1. Platform identifier ("openai", "gemini", or "deepseek")
+// 1. Platform identifier ("openai", "gemini", "claude", or "deepseek")
 // 2. Your API key
 // 3. Model name ("gpt-3.5-turbo" for this example)
-ESP32_AI_Connect ai("openai", apiKey, "gpt-3.5-turbo");
+ESP32_AI_Connect aiClient("openai", apiKey, "gpt-3.5-turbo");
 
 void setup() {
   // Initialize serial communication for debugging
@@ -109,33 +110,34 @@ void setup() {
 
   // WiFi connected - print IP address
   Serial.println("\nWiFi connected!");
-  Serial.print("IP address: ");
-  Serial.println(WiFi.localIP());
 
-  // Configure AI client parameters:
-  ai.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
-  ai.setChatMaxTokens(200); // Limit response length (in tokens)
-  ai.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior
+  // Optional: use the following methods to configure AI client parameters such as
+  // system role, max tokens, etc.
+  // The LLM will use its default values if these parameters are not set.
+  aiClient.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
+  aiClient.setChatMaxTokens(200); // Limit response length (in tokens)
+  aiClient.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior
 
   // You can retrieve current settings using getter methods:
   Serial.print("Current temperature: ");
-  Serial.println(ai.getChatTemperature());
+  Serial.println(aiClient.getChatTemperature());
   Serial.print("Maximum tokens: ");
-  Serial.println(ai.getChatMaxTokens());
+  Serial.println(aiClient.getChatMaxTokens());
   Serial.print("System role: ");
-  Serial.println(ai.getChatSystemRole());
+  Serial.println(aiClient.getChatSystemRole());
 
   // Send a test message to the AI and get response
   Serial.println("\nSending message to AI...");
-  String response = ai.chat("Hello! Who are you?");
+  String response = aiClient.chat("Hello! Who are you?");
 
   // Print the AI's response
   Serial.println("\nAI Response:");
   Serial.println(response);
 
   // Check for errors (empty response indicates an error occurred)
   if (response.isEmpty()) {
-    Serial.println("Error: " + ai.getLastError());
+    Serial.println("Error: " + aiClient.getLastError());
   }
 }
 
doc/User Guides/1 Introduction.md

Lines changed: 25 additions & 16 deletions
@@ -1,12 +1,12 @@
 # ESP32_AI_Connect Library User Guide - 1 Introduction
-> **Version 0.0.2** • Revised: May 09, 2025 • Author: AvantMaker • [https://www.AvantMaker.com](https://www.AvantMaker.com)
+> **Version 0.0.3** • Revised: May 10, 2025 • Author: AvantMaker • [https://www.AvantMaker.com](https://www.AvantMaker.com)
 ## Overview
 
 ESP32_AI_Connect is a powerful, flexible library designed to connect ESP32 microcontrollers to various Large Language Model (LLM) platforms. This library simplifies the process of integrating AI capabilities into your ESP32 projects, allowing you to create intelligent IoT devices, smart assistants, and interactive applications with minimal effort.
 
 ## Key Features
 
-- **Multi-Platform Support**: Connect to multiple AI platforms including OpenAI, Google Gemini, and DeepSeek
+- **Multi-Platform Support**: Connect to multiple AI platforms including OpenAI, Anthropic Claude, Google Gemini, and DeepSeek
 - **Simple API**: Easy-to-use interface for sending prompts and receiving responses
 - **Tool Calls Support**: Enable your ESP32 to use LLM function calling capabilities
 - **Memory Efficient**: Optimized for the limited resources of ESP32 devices
@@ -31,7 +31,7 @@ The library follows a well-structured design pattern:
 
 1. **ESP32_AI_Connect**: The main class that users interact with
 2. **AI_API_Platform_Handler**: An abstract base class that defines a common interface for all AI platforms
-3. **Platform-Specific Handlers**: Implementations for each supported AI platform (OpenAI, Gemini, DeepSeek)
+3. **Platform-Specific Handlers**: Implementations for each supported AI platform (OpenAI, Gemini, DeepSeek, Anthropic, etc.)
 
 This architecture allows you to switch between different AI platforms with minimal code changes, while also making it easy to extend the library with support for new platforms.
 
@@ -41,10 +41,10 @@ To use ESP32_AI_Connect in your project, you'll need:
 
 - An ESP32 development board
 - Arduino IDE or PlatformIO
-- An API key for your chosen AI platform (OpenAI, Google Gemini, or DeepSeek)
+- An API key for your chosen AI platform (OpenAI, Google Gemini, Anthropic Claude, or DeepSeek)
 - WiFi connectivity
 
-The library can be installed through the Arduino Library Manager or by downloading the source code from the repository.
+The library can be installed by downloading the source code from the repository.
 
 ## Basic Usage Example
 
@@ -65,10 +65,10 @@ const char* password = "your_PASSWORD"; // Your WiFi password
 const char* apiKey = "your_API_KEY"; // Your OpenAI API key (keep this secure!)
 
 // Initialize AI client with:
-// 1. Platform identifier ("openai", "gemini", or "deepseek")
+// 1. Platform identifier ("openai", "gemini", "claude", or "deepseek")
 // 2. Your API key
 // 3. Model name ("gpt-3.5-turbo" for this example)
-ESP32_AI_Connect ai("openai", apiKey, "gpt-3.5-turbo");
+ESP32_AI_Connect aiClient("openai", apiKey, "gpt-3.5-turbo");
 
 void setup() {
   // Initialize serial communication for debugging
@@ -86,25 +86,34 @@ void setup() {
 
   // WiFi connected - print IP address
   Serial.println("\nWiFi connected!");
-  Serial.print("IP address: ");
-  Serial.println(WiFi.localIP());
 
-  // Configure AI client parameters:
-  ai.setTemperature(0.7); // Set response creativity (0.0-2.0)
-  ai.setMaxTokens(200); // Limit response length (in tokens)
-  ai.setSystemRole("You are a helpful assistant"); // Set assistant behavior
+  // Optional: use the following methods to configure AI client parameters such as
+  // system role, max tokens, etc.
+  // The LLM will use its default values if these parameters are not set.
+  aiClient.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
+  aiClient.setChatMaxTokens(200); // Limit response length (in tokens)
+  aiClient.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior
+
+  // You can retrieve current settings using getter methods:
+  Serial.print("Current temperature: ");
+  Serial.println(aiClient.getChatTemperature());
+  Serial.print("Maximum tokens: ");
+  Serial.println(aiClient.getChatMaxTokens());
+  Serial.print("System role: ");
+  Serial.println(aiClient.getChatSystemRole());
 
   // Send a test message to the AI and get response
   Serial.println("\nSending message to AI...");
-  String response = ai.chat("Hello! Who are you?");
+  String response = aiClient.chat("Hello! Who are you?");
 
   // Print the AI's response
   Serial.println("\nAI Response:");
   Serial.println(response);
 
   // Check for errors (empty response indicates an error occurred)
   if (response.isEmpty()) {
-    Serial.println("Error: " + ai.getLastError());
+    Serial.println("Error: " + aiClient.getLastError());
   }
 }
 
@@ -119,7 +128,7 @@ void loop() {
 This introduction provides a high-level overview of the ESP32_AI_Connect library. In the following guides, we'll explore:
 
 1. **Basic Chat with LLMs**: How to set up and conduct conversations with different AI models
-2. **Tool Calls**: How to enable your ESP32 to use LLM function calling capabilities
+2. **Tool Calls**: How to enable your ESP32 to use LLM tool calling capabilities
 3. And more as new features are added
 
 Each guide will include detailed explanations, code examples, and best practices to help you get the most out of the ESP32_AI_Connect library.

doc/User Guides/2 Basic LLM Chat Implementation.md

Lines changed: 37 additions & 26 deletions
@@ -1,5 +1,5 @@
 # ESP32_AI_Connect Library User Guide - 2 Basic LLM Chat Implementation
-> **Version 0.0.2** • Revised: May 09, 2025 • Author: AvantMaker • [https://www.AvantMaker.com](https://www.AvantMaker.com)
+> **Version 0.0.3** • Revised: May 10, 2025 • Author: AvantMaker • [https://www.AvantMaker.com](https://www.AvantMaker.com)
 
 ## Overview
 
@@ -13,7 +13,7 @@ Before you begin, make sure you have:
 - Arduino IDE installed with ESP32 board support
 - ESP32_AI_Connect library installed
 - WiFi connectivity
-- An API key for your chosen AI platform (OpenAI, Google Gemini, or DeepSeek)
+- An API key for your chosen AI platform (OpenAI, Google Gemini, Anthropic Claude, or DeepSeek)
 
 ## Step 1: Include Required Libraries
 
@@ -48,11 +48,11 @@ Now we create an instance of the `ESP32_AI_Connect` class:
 // 1. Platform identifier ("openai", "gemini", or "deepseek")
 // 2. Your API key
 // 3. Model name ("gpt-3.5-turbo" for this example)
-ESP32_AI_Connect ai("openai", apiKey, "gpt-3.5-turbo");
+ESP32_AI_Connect aiClient("openai", apiKey, "gpt-3.5-turbo");
 ```
 
 This line initializes the AI client with three parameters:
-- The platform identifier (`"openai"` in this example, but you can also use `"gemini"` or `"deepseek"`)
+- The platform identifier (`"openai"` in this example, but you can also use `"gemini"`, `"claude"`, `"deepseek"`, or `"openai-compatible"`)
 - Your API key
 - The model name (`"gpt-3.5-turbo"` for OpenAI)
 
@@ -93,9 +93,9 @@ After establishing the WiFi connection, we configure the AI client with specific
 
 ```cpp:basic_example.ino
 // Configure AI client parameters:
-ai.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
-ai.setChatMaxTokens(200); // Limit response length (in tokens)
-ai.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior
+aiClient.setChatTemperature(0.7); // Set response creativity (0.0-2.0)
+aiClient.setChatMaxTokens(200); // Limit response length (in tokens)
+aiClient.setChatSystemRole("You are a helpful assistant"); // Set assistant behavior
 ```
 
 These configuration options allow you to customize the behavior of the AI:
@@ -104,6 +104,9 @@ These configuration options allow you to customize the behavior of the AI:
 - `setChatMaxTokens(200)`: Limits the maximum length of the AI's response in tokens (roughly 4 characters per token). This helps control response size and API costs.
 - `setChatSystemRole("You are a helpful assistant")`: Sets the system message that defines the AI's behavior and personality.
 
+**Note:** The parameters set by the above methods are optional. If you do not explicitly configure them, the LLM will use its default values for temperature, max tokens, and system role.
+
 ## Step 6: Verifying Configuration with Getter Methods
 
 You can verify your configuration settings using the corresponding getter methods:
@@ -112,11 +115,11 @@ You can verify your configuration settings using the corresponding getter method
 // Retrieve and display the current configuration
 Serial.println("\nAI Configuration:");
 Serial.print("System Role: ");
-Serial.println(ai.getChatSystemRole());
+Serial.println(aiClient.getChatSystemRole());
 Serial.print("Temperature: ");
-Serial.println(ai.getChatTemperature());
+Serial.println(aiClient.getChatTemperature());
 Serial.print("Max Tokens: ");
-Serial.println(ai.getChatMaxTokens());
+Serial.println(aiClient.getChatMaxTokens());
 ```
 
 These getter methods allow you to:
@@ -131,21 +134,21 @@ Now we're ready to send a message to the AI and receive a response:
 ```cpp:basic_example.ino
 // Send a test message to the AI and get response
 Serial.println("\nSending message to AI...");
-String response = ai.chat("Hello! Who are you?");
+String response = aiClient.chat("Hello! Who are you?");
 
 // Print the AI's response
 Serial.println("\nAI Response:");
 Serial.println(response);
 
 // Check for errors (empty response indicates an error occurred)
 if (response.isEmpty()) {
-  Serial.println("Error: " + ai.getLastError());
+  Serial.println("Error: " + aiClient.getLastError());
 }
 }
 ```
 
-The key function here is `ai.chat()`, which:
-1. Takes a string parameter containing your message to the AI
+The key function here is `aiClient.chat()`, which:
+1. Takes a string parameter containing your prompt message to the AI
 2. Sends the request to the AI platform
 3. Returns the AI's response as a string
 
@@ -182,7 +185,7 @@ void loop() {
 
   // Send the message to the AI
   Serial.println("Sending to AI...");
-  String response = ai.chat(userMessage);
+  String response = aiClient.chat(userMessage);
 
   // Print the AI's response
   Serial.println("AI: " + response);
@@ -212,16 +215,16 @@ If you need to reset the chat configuration to default values, you can use the `
 
 ```cpp
 // Reset chat configuration to defaults
-ai.chatReset();
+aiClient.chatReset();
 
 // Verify reset was successful
 Serial.println("After reset:");
 Serial.print("System Role: ");
-Serial.println(ai.getChatSystemRole()); // Should be empty
+Serial.println(aiClient.getChatSystemRole()); // Should be empty
 Serial.print("Temperature: ");
-Serial.println(ai.getChatTemperature()); // Should be -1.0 (default)
+Serial.println(aiClient.getChatTemperature()); // Should be -1.0 (default)
 Serial.print("Max Tokens: ");
-Serial.println(ai.getChatMaxTokens()); // Should be -1 (default)
+Serial.println(aiClient.getChatMaxTokens()); // Should be -1 (default)
 ```
 
 This is useful when you want to start a fresh conversation with different settings or return to the default configuration.
@@ -232,12 +235,17 @@ One of the key features of the ESP32_AI_Connect library is its ability to work w
 
 ### For Google Gemini:
 ```cpp
-ESP32_AI_Connect ai("gemini", apiKey, "gemini-2.0-flash");
+ESP32_AI_Connect aiClient("gemini", apiKey, "gemini-2.0-flash");
+```
+
+### For Anthropic Claude:
+```cpp
+ESP32_AI_Connect aiClient("claude", apiKey, "claude-3.7-sonnet");
 ```
 
 ### For DeepSeek:
 ```cpp
-ESP32_AI_Connect ai("deepseek", apiKey, "deepseek-chat");
+ESP32_AI_Connect aiClient("deepseek", apiKey, "deepseek-chat");
 ```
 
 Make sure the corresponding platform is enabled in the `ESP32_AI_Connect_config.h` file:
@@ -247,31 +255,34 @@ Make sure the corresponding platform is enabled in the `ESP32_AI_Connect_config.
 #define USE_AI_API_OPENAI   // Enable OpenAI and OpenAI-compatible APIs
 #define USE_AI_API_GEMINI   // Enable Google Gemini API
 #define USE_AI_API_DEEPSEEK // Enable DeepSeek API
+#define USE_AI_API_CLAUDE   // Enable Anthropic Claude API
 ```
 
 ## Using a Custom Endpoint
 
 If you're using an OpenAI-compatible API that requires a custom endpoint URL, you can use the alternative constructor:
 
 ```cpp
 const char* customEndpoint = "https://your-custom-endpoint.com/v1/chat/completions";
-ESP32_AI_Connect ai("openai-compatible", apiKey, "model-name", customEndpoint);
+ESP32_AI_Connect aiClient("openai-compatible", apiKey, "model-name", customEndpoint);
 ```
 
 This is useful for self-hosted models or alternative API providers that are compatible with the OpenAI API format.
 
+For more detailed information on how to use an OpenAI-compatible API, please refer to the custom_llm_chat.ino example in the examples folder of the ESP32_AI_Connect library.
+
 ## Accessing Raw API Responses
 
 For advanced usage, you might want to access the complete raw JSON response from the API. The library provides methods to retrieve these:
 
 ```cpp
 // Get the raw JSON response from the last chat request
-String rawResponse = ai.getChatRawResponse();
+String rawResponse = aiClient.getChatRawResponse();
 Serial.println("Raw API Response:");
 Serial.println(rawResponse);
 
 // For tool calling, you can also get the raw response
-String rawToolResponse = ai.getTCRawResponse();
+String rawToolResponse = aiClient.getTCRawResponse();
 ```
 
 These methods allow you to access the full API response data for custom processing or debugging.
@@ -280,7 +291,7 @@ These methods allow you to access the full API response data for custom processi
 
 If you encounter issues with your AI chat application, here are some common problems and solutions:
 
-1. **Empty Response**: If `ai.chat()` returns an empty string, check `ai.getLastError()` for details about what went wrong.
+1. **Empty Response**: If `aiClient.chat()` returns an empty string, check `aiClient.getLastError()` for details about what went wrong.
 
 2. **WiFi Connection Issues**: Make sure your WiFi credentials are correct and that your ESP32 is within range of your WiFi network.
