
Commit ba34096

Linting/Formatting of main app files

Parent: 1aa5878

File tree: 9 files changed, +215 −88 lines

Makefile

Lines changed: 1 addition & 1 deletion

@@ -7,4 +7,4 @@ run:
 	--queue-size 100

 install:
-	pip install -e .
+	pip install -e .

README.md

Lines changed: 40 additions & 3 deletions

@@ -108,7 +108,9 @@ The server supports six types of MLX models:

 ### Flux-Series Image Models

-The server supports multiple Flux and Qwen model configurations for advanced image generation and editing:
+> **⚠️ Note:** Image generation and editing capabilities require installation of `mflux`: `pip install mlx-openai-server[image-generation]` or `pip install git+https://github.com/cubist38/mflux.git`
+
+The server supports multiple Flux model configurations for advanced image generation and editing:

 #### Image Generation Models
 - **`flux-schnell`** - Fast generation with 4 default steps, no guidance (best for quick iterations)

@@ -202,6 +204,9 @@ Follow these steps to set up the MLX-powered server:
 git clone https://github.com/cubist38/mlx-openai-server.git
 cd mlx-openai-server
 pip install -e .
+
+# Optional: For image generation/editing support
+pip install -e .[image-generation]
 ```

 ### Using Conda (Recommended)

@@ -236,6 +241,9 @@ For better environment management and to avoid architecture issues, we recommend
 git clone https://github.com/cubist38/mlx-openai-server.git
 cd mlx-openai-server
 pip install -e .
+
+# Optional: For image generation/editing support
+pip install -e .[image-generation]
 ```

 ### Optional Dependencies

@@ -253,15 +261,44 @@ pip install mlx-openai-server
 - All core API endpoints and functionality

 #### Image Generation & Editing Support
-The server includes support for image generation and editing capabilities:
+For image generation and editing capabilities, install with the image-generation extra:
+
+```bash
+# Install with image generation support
+pip install mlx-openai-server[image-generation]
+```
+
+Or install manually:
+```bash
+# First install the base server
+pip install mlx-openai-server
+
+# Then install mflux for image generation/editing support
+pip install git+https://github.com/cubist38/mflux.git
+```

-**Additional features:**
+**Additional features with mflux:**
 - Image generation models (`--model-type image-generation`)
 - Image editing models (`--model-type image-edit`)
 - MLX Flux-series model support
 - Qwen Image model support
 - LoRA adapter support for fine-tuned generation and editing

+#### Enhanced Caching Support
+For enhanced caching and performance when working with complex ML models and objects, install with the enhanced-caching extra:
+
+```bash
+# Install with enhanced caching support
+pip install mlx-openai-server[enhanced-caching]
+```
+
+This enables better serialization and caching of objects from:
+- spaCy (NLP processing)
+- regex (regular expressions)
+- tiktoken (tokenization)
+- torch (PyTorch tensors and models)
+- transformers (Hugging Face models)
+
 #### Whisper Models Support
 For whisper models to work properly, you need to install ffmpeg:
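Since `mflux` is shipped as an optional extra, the server has to detect at runtime whether image support is available. A minimal sketch of such a check (a hypothetical helper for illustration, not code from this commit):

```python
import importlib.util


def has_image_support() -> bool:
    """Return True when the optional mflux dependency is importable.

    Hypothetical sketch of how the image-generation / image-edit model
    types could be gated on the [image-generation] extra described above.
    """
    return importlib.util.find_spec("mflux") is not None
```

`find_spec` probes the import machinery without actually importing the package, so the check stays cheap even when `mflux` is installed.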

app/__init__.py

Lines changed: 1 addition & 7 deletions

@@ -1,7 +1 @@
-import os
-from .version import __version__
-
-# Suppress transformers warnings
-os.environ['TRANSFORMERS_VERBOSITY'] = 'error'
-
-__all__ = ["__version__"]
+"""MLX OpenAI Server package."""

app/cli.py

Lines changed: 71 additions & 28 deletions

@@ -5,6 +5,8 @@
 the ASGI server.
 """

+from __future__ import annotations
+
 import asyncio
 import sys

@@ -17,7 +19,7 @@
 from .version import __version__


-class UpperChoice(click.Choice):
+class UpperChoice(click.Choice[str]):
     """Case-insensitive choice type that returns uppercase values.

     This small convenience subclass normalizes user input in a

@@ -26,7 +28,7 @@ class UpperChoice(click.Choice):
     where the internal representation is uppercased.
     """

-    def normalize_choice(self, choice, ctx):
+    def normalize_choice(self, choice: str | None, ctx: click.Context | None) -> str | None:  # type: ignore[override]
         """Return the canonical uppercase choice or raise BadParameter.

         Parameters

@@ -75,20 +77,19 @@ def normalize_choice(self, choice, ctx):
     🚀 Version: %(version)s
     """,
 )
-def cli():
+def cli() -> None:
     """Top-level Click command group for the MLX server CLI.

     Subcommands (such as ``launch``) are registered on this group and
     invoked by the console entry point.
     """
-    pass


-@cli.command()
+@cli.command(help="Start the MLX OpenAI Server with the supplied flags")
 @click.option(
     "--model-path",
     required=True,
-    help="Path to the model (required for lm, multimodal, embeddings, image-generation, image-edit, whisper model types). With `image-generation` or `image-edit` model types, it should be the local path to the model.",
+    help="Path to the model (required for lm, multimodal, embeddings, image-generation, image-edit, whisper model types). Can be a local path or Hugging Face repository ID (e.g., 'blackforestlabs/FLUX.1-dev').",
 )
 @click.option(
     "--model-type",

@@ -186,35 +187,77 @@ def cli():
     help="Path to a custom chat template file. Only works with language models (lm) and multimodal models.",
 )
 def launch(
-    model_path,
-    model_type,
-    context_length,
-    port,
-    host,
-    max_concurrency,
-    queue_timeout,
-    queue_size,
-    quantize,
-    config_name,
-    lora_paths,
-    lora_scales,
-    disable_auto_resize,
-    log_file,
-    no_log_file,
-    log_level,
-    enable_auto_tool_choice,
-    tool_call_parser,
-    reasoning_parser,
-    trust_remote_code,
-    chat_template_file,
+    model_path: str,
+    model_type: str,
+    context_length: int,
+    port: int,
+    host: str,
+    max_concurrency: int,
+    queue_timeout: int,
+    queue_size: int,
+    quantize: int,
+    config_name: str | None,
+    lora_paths: str | None,
+    lora_scales: str | None,
+    disable_auto_resize: bool,
+    log_file: str | None,
+    no_log_file: bool,
+    log_level: str,
+    enable_auto_tool_choice: bool,
+    tool_call_parser: str | None,
+    reasoning_parser: str | None,
+    trust_remote_code: bool,
+    chat_template_file: str | None,
 ) -> None:
     """Start the FastAPI/Uvicorn server with the supplied flags.

     The command builds a server configuration object using
     ``MLXServerConfig`` and then calls the async ``start`` routine
     which handles the event loop and server lifecycle.
-    """

+    Parameters
+    ----------
+    model_path : str
+        Path to the model (required for lm, multimodal, embeddings, image-generation, image-edit, whisper model types).
+    model_type : str
+        Type of model to run (lm, multimodal, image-generation, image-edit, embeddings, whisper).
+    context_length : int
+        Context length for language models.
+    port : int
+        Port to run the server on.
+    host : str
+        Host to run the server on.
+    max_concurrency : int
+        Maximum number of concurrent requests.
+    queue_timeout : int
+        Request timeout in seconds.
+    queue_size : int
+        Maximum queue size for pending requests.
+    quantize : int
+        Quantization level for the model.
+    config_name : str or None
+        Config name of the model.
+    lora_paths : str or None
+        Path to the LoRA file(s).
+    lora_scales : str or None
+        Scale factor for the LoRA file(s).
+    disable_auto_resize : bool
+        Disable automatic model resizing.
+    log_file : str or None
+        Path to log file.
+    no_log_file : bool
+        Disable file logging entirely.
+    log_level : str
+        Set the logging level.
+    enable_auto_tool_choice : bool
+        Enable automatic tool choice.
+    tool_call_parser : str or None
+        Specify tool call parser to use.
+    reasoning_parser : str or None
+        Specify reasoning parser to use.
+    trust_remote_code : bool
+        Enable trust_remote_code when loading models.
+    """
     args = MLXServerConfig(
         model_path=model_path,
         model_type=model_type,
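The `UpperChoice.normalize_choice` override in the diff above can be sketched without Click; this hypothetical stand-alone helper mirrors the documented behavior (case-insensitive match, uppercase canonical value), substituting `ValueError` for Click's `BadParameter` so the demo is self-contained:

```python
from __future__ import annotations


def normalize_choice(choice: str | None, allowed: tuple[str, ...]) -> str | None:
    """Return the canonical uppercase form of choice, or raise ValueError.

    Input is matched case-insensitively against the allowed values,
    which are assumed to be stored uppercase (as in UpperChoice).
    """
    if choice is None:
        return None
    canonical = choice.strip().upper()
    if canonical not in allowed:
        raise ValueError(f"{choice!r} is not one of {', '.join(allowed)}")
    return canonical
```

Returning the uppercase form (rather than the raw input) keeps the internal representation uniform, which is the whole point of the subclass.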

app/config.py

Lines changed: 3 additions & 6 deletions

@@ -47,15 +47,14 @@ class MLXServerConfig:
     lora_paths_str: str | None = None
     lora_scales_str: str | None = None

-    def __post_init__(self):
+    def __post_init__(self) -> None:
         """Normalize certain CLI fields after instantiation.

         - Convert comma-separated ``lora_paths`` and ``lora_scales`` into
           lists when provided.
         - Apply small model-type-specific defaults for ``config_name``
           and emit warnings when values appear inconsistent.
         """
-
         # Process comma-separated LoRA paths and scales into lists (or None)
         if self.lora_paths_str:
             self.lora_paths = [p.strip() for p in self.lora_paths_str.split(",") if p.strip()]

@@ -74,11 +73,9 @@ def __post_init__(self):
         # image-edit model types. If missing for those types, set defaults.
         if self.config_name and self.model_type not in ["image-generation", "image-edit"]:
             logger.warning(
-                "Config name parameter '%s' provided but model type is '%s'. "
+                f"Config name parameter '{self.config_name}' provided but model type is '{self.model_type}'. "
                 "Config name is only used with image-generation "
-                "and image-edit models.",
-                self.config_name,
-                self.model_type,
+                "and image-edit models."
             )
         elif self.model_type == "image-generation" and not self.config_name:
             logger.warning(
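The comma-splitting done in `__post_init__` can be exercised in isolation; this hypothetical helper reproduces the split/strip/filter behavior shown in the hunk above:

```python
from __future__ import annotations


def split_csv(value: str | None) -> list[str] | None:
    """Turn a comma-separated CLI string into a clean list, or None.

    Mirrors the lora_paths_str / lora_scales_str handling: items are
    whitespace-trimmed and empty entries are dropped.
    """
    if not value:
        return None
    items = [part.strip() for part in value.split(",") if part.strip()]
    return items or None
```

Collapsing empty or all-whitespace input to `None` keeps downstream code on a single "no LoRA configured" path instead of special-casing empty lists.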

app/main.py

Lines changed: 20 additions & 7 deletions

@@ -27,13 +27,19 @@
 from .version import __version__


-def print_startup_banner(config_args):
-    """Log a compact startup banner describing the selected config.
+def print_startup_banner(config_args: MLXServerConfig) -> None:
+    """
+    Log a compact startup banner describing the selected config.

     The function emits human-friendly log messages that summarize the
     runtime configuration (model path/type, host/port, concurrency,
     LoRA settings, and logging options). Intended for the user-facing
     startup output only.
+
+    Parameters
+    ----------
+    config_args : MLXServerConfig
+        Configuration object containing runtime settings to display.
     """
     logger.info("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━")
     logger.info(f"✨ MLX Server v{__version__} Starting ✨")

@@ -78,12 +84,18 @@ def print_startup_banner(config_args):


 async def start(config: MLXServerConfig) -> None:
-    """Run the ASGI server using the provided configuration.
+    """
+    Run the ASGI server using the provided configuration.

     This coroutine wires the configuration into the server setup
     routine, logs progress, and starts the Uvicorn server. It handles
     KeyboardInterrupt and logs any startup failures before exiting the
     process with a non-zero code.
+
+    Parameters
+    ----------
+    config : MLXServerConfig
+        Configuration object for server setup.
     """
     try:
         # Display startup information

@@ -98,19 +110,20 @@ async def start(config: MLXServerConfig) -> None:
     except KeyboardInterrupt:
         logger.info("Server shutdown requested by user. Exiting...")
     except Exception as e:
-        logger.error(f"Server startup failed: {str(e)}")
+        logger.error(f"Server startup failed. {type(e).__name__}: {e}")
         sys.exit(1)


-def main():
-    """Normalize process args and dispatch to the Click CLI.
+def main() -> None:
+    """
+    Normalize process args and dispatch to the Click CLI.

     This helper gathers command-line arguments, inserts the "launch"
     subcommand when a subcommand is omitted for backwards compatibility,
     and delegates execution to :func:`app.cli.cli` through
     ``cli.main``.
     """
-    from .cli import cli
+    from .cli import cli  # noqa: PLC0415

     args = [str(x) for x in sys.argv[1:]]
     # Keep backwards compatibility: Add 'launch' subcommand if none is provided
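The backwards-compatibility shim referenced in that comment can be sketched as follows; this is a hypothetical reconstruction that assumes an option-like first argument (or an empty argv) signals a missing subcommand — the committed code may use a different test:

```python
from __future__ import annotations


def normalize_argv(argv: list[str], default: str = "launch") -> list[str]:
    """Prepend the default subcommand when none was supplied.

    Hypothetical sketch: if the first argument looks like an option
    (starts with '-'), or no arguments were given, insert 'launch'
    so legacy invocations such as `mlx-openai-server --model-path ...`
    keep working.
    """
    args = [str(x) for x in argv]
    if not args or args[0].startswith("-"):
        args.insert(0, default)
    return args
```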
