Commit 5979e0d

[Docs] Remove LoRA column for not supported use cases, remove duplicated documents & assets (#2981)

## Description

This PR removes outdated documentation files and assets, consolidates documentation to a central location, and removes unsupported LoRA-related fields from model configuration files.

Preview: https://yatarkan.github.io/openvino.genai/docs/supported-models/#visual-language-models-vlms

## Checklist:

- [x] Tests have been updated or added to cover the new code - N/A, docs update only
- [x] This patch fully addresses the ticket
- [x] I have made corresponding changes to the documentation

1 parent 91dc71e commit 5979e0d

File tree

19 files changed (+58, -363 lines)


.github/CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
-1. See [pull_request_template.md](./pull_request_template.md) for pull request (PR) requirements.
-2. See [BUILD.md](../src/docs/BUILD.md) for instructions on how to build `OpenVINO™ GenAI`.
+1. See [pull_request_template.md](/.github/pull_request_template.md) for pull request (PR) requirements.
+2. See [BUILD.md](/src/docs/BUILD.md) for instructions on how to build `OpenVINO™ GenAI`.
 3. Code style is determined by the file the change is made in. If ambiguous, look into the neighboring files of the same type. In case of contradiction, pick any of the options but stay consistent in your choice.
 4. Don't push branches directly to the upstream repository. Once a branch is pushed to upstream, non-admins lose push access to it, preventing you from updating your changes. Instead, push to your fork and open PRs from there.
 5. Your PR will be tested after one of the developers approves the tests run.

README.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@
 ![Python](https://img.shields.io/badge/python-3.10+-green)
 ![OS](https://img.shields.io/badge/OS-Linux_|_Windows_|_MacOS-blue)

-![](src/docs/openvino_genai.svg)
+![](site/static/img/openvino-genai-workflow.svg)

 </div>

site/docs/guides/debug-logging.mdx

Lines changed: 10 additions & 0 deletions
@@ -76,3 +76,13 @@ Accepted token rate, %: 51
 ===============================
 Request_id: 0 ||| 40 0 40 20 0 0 40 40 0 20 20 20 0 40 0 0 20 80 0 80 20 0 0 0 40 80 0 40 60 40 80 0 0 0 0 40 20 20 0 40 20 40 0 20 0 0 0
 ```
+
+When a GGUF model is passed to the pipeline, the detailed debug info will also be printed.
+
+```sh title="Output:"
+[GGUF Reader]: Loading and unpacking model from: gguf_models/qwen2.5-0.5b-instruct-q4_0.gguf
+[GGUF Reader]: Loading and unpacking model done. Time: 196ms
+[GGUF Reader]: Start generating OpenVINO model...
+[GGUF Reader]: Save generated OpenVINO model to: gguf_models/openvino_model.xml done. Time: 466 ms
+[GGUF Reader]: Model generation done. Time: 757ms
+```
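The timing lines in this output share a fixed `Time: <n>ms` suffix. As a hypothetical helper (not part of the repo), the timings could be pulled out of captured log text like this:

```typescript
// Hypothetical helper (not part of the repo): extract millisecond timings
// from "[GGUF Reader]" log lines in the format shown above.
const logLines: string[] = [
  "[GGUF Reader]: Loading and unpacking model done. Time: 196ms",
  "[GGUF Reader]: Save generated OpenVINO model to: gguf_models/openvino_model.xml done. Time: 466 ms",
  "[GGUF Reader]: Model generation done. Time: 757ms",
];

function extractTimingsMs(lines: string[]): number[] {
  const timings: number[] = [];
  for (const line of lines) {
    // Matches both "Time: 196ms" and "Time: 466 ms" (optional space before "ms").
    const m = line.match(/Time:\s*(\d+)\s*ms/);
    if (m) timings.push(parseInt(m[1], 10));
  }
  return timings;
}

console.log(extractTimingsMs(logLines)); // → [ 196, 466, 757 ]
```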

site/docs/supported-models/_components/speech-generation-models-table/index.tsx

Lines changed: 3 additions & 5 deletions
@@ -1,9 +1,9 @@
 import React from 'react';
-import { BaseModelsTable, LinksCell, StatusCell } from '../base-models-table';
+import { BaseModelsTable, LinksCell } from '../base-models-table';
 import { SPEECH_GENERATION_MODELS } from './models';

 export default function SpeechGenerationModelsTable(): React.JSX.Element {
-  const headers = ['Architecture', 'Models', 'LoRA Support', 'Example HuggingFace Models'];
+  const headers = ['Architecture', 'Models', 'Example HuggingFace Models'];

   const rows = SPEECH_GENERATION_MODELS.map(({ architecture, models }) => (
     <>
@@ -12,13 +12,11 @@ export default function SpeechGenerationModelsTable(): React.JSX.Element {
         <code>{architecture}</code>
       </td>
       <td>{models[0].name}</td>
-      <StatusCell value={models[0].loraSupport} />
       <LinksCell links={models[0].links} />
     </tr>
-    {models.slice(1).map(({ name, loraSupport, links }) => (
+    {models.slice(1).map(({ name, links }) => (
       <tr key={name}>
         <td>{name}</td>
-        <StatusCell value={loraSupport} />
         <LinksCell links={links} />
       </tr>
     ))}

site/docs/supported-models/_components/speech-generation-models-table/models.ts

Lines changed: 0 additions & 2 deletions
@@ -2,7 +2,6 @@ type SpeechGenerationModelType = {
   architecture: string;
   models: Array<{
     name: string;
-    loraSupport: boolean;
     links: string[];
   }>;
 };
@@ -13,7 +12,6 @@ export const SPEECH_GENERATION_MODELS: SpeechGenerationModelType[] = [
     models: [
       {
         name: 'SpeechT5 TTS',
-        loraSupport: false,
         links: ['https://huggingface.co/microsoft/speecht5_tts'],
       },
     ],

site/docs/supported-models/_components/vlm-models-table/index.tsx

Lines changed: 3 additions & 5 deletions
@@ -1,10 +1,10 @@
 import Link from '@docusaurus/Link';
 import React from 'react';
-import { BaseModelsTable, LinksCell, StatusCell } from '../base-models-table';
+import { BaseModelsTable, LinksCell } from '../base-models-table';
 import { VLM_MODELS } from './models';

 export default function VLMModelsTable(): React.JSX.Element {
-  const headers = ['Architecture', 'Models', 'LoRA Support', 'Example HuggingFace Models'];
+  const headers = ['Architecture', 'Models', 'Example HuggingFace Models'];

   const rows = VLM_MODELS.map(({ architecture, models }) => (
     <>
@@ -20,13 +20,11 @@ export default function VLMModelsTable(): React.JSX.Element {
         </>
       )}
       </td>
-      <StatusCell value={models[0].loraSupport} />
       <LinksCell links={models[0].links} />
     </tr>
-    {models.slice(1).map(({ name, loraSupport, links }) => (
+    {models.slice(1).map(({ name, links }) => (
       <tr key={name}>
         <td>{name}</td>
-        <StatusCell value={loraSupport} />
         <LinksCell links={links} />
       </tr>
     ))}

site/docs/supported-models/_components/vlm-models-table/models.ts

Lines changed: 0 additions & 14 deletions
@@ -2,7 +2,6 @@ type VLMModelType = {
   architecture: string;
   models: Array<{
     name: string;
-    loraSupport: boolean;
     links: string[];
     notesLink?: string;
   }>;
@@ -14,7 +13,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'InternVLChatModel',
-        loraSupport: false,
         links: [
           'https://huggingface.co/OpenGVLab/InternVL2-1B',
           'https://huggingface.co/OpenGVLab/InternVL2-2B',
@@ -39,7 +37,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'LLaVA-v1.5',
-        loraSupport: false,
         links: ['https://huggingface.co/llava-hf/llava-1.5-7b-hf'],
       },
     ],
@@ -49,12 +46,10 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'nanoLLaVA',
-        loraSupport: false,
         links: ['https://huggingface.co/qnguyen3/nanoLLaVA'],
       },
       {
         name: 'nanoLLaVA-1.5',
-        loraSupport: false,
         links: ['https://huggingface.co/qnguyen3/nanoLLaVA-1.5'],
       },
     ],
@@ -64,7 +59,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'LLaVA-v1.6',
-        loraSupport: false,
         links: [
           'https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf',
           'https://huggingface.co/llava-hf/llava-v1.6-vicuna-7b-hf',
@@ -78,7 +72,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'LLaVA-Next-Video',
-        loraSupport: false,
         links: [
           'https://huggingface.co/llava-hf/LLaVA-NeXT-Video-7B-hf',
         ],
@@ -90,7 +83,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'MiniCPM-o-2_6',
-        loraSupport: false,
         links: ['https://huggingface.co/openbmb/MiniCPM-o-2_6'],
         notesLink: '#minicpm-o-notes',
       },
@@ -101,7 +93,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'MiniCPM-V-2_6',
-        loraSupport: false,
         links: ['https://huggingface.co/openbmb/MiniCPM-V-2_6'],
       },
     ],
@@ -111,7 +102,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'phi3_v',
-        loraSupport: false,
         links: [
           'https://huggingface.co/microsoft/Phi-3-vision-128k-instruct',
           'https://huggingface.co/microsoft/Phi-3.5-vision-instruct',
@@ -125,7 +115,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'phi4mm',
-        loraSupport: false,
         links: [
           'https://huggingface.co/microsoft/Phi-4-multimodal-instruct',
         ],
@@ -138,7 +127,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'Qwen2-VL',
-        loraSupport: false,
         links: [
           'https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct',
           'https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct',
@@ -153,7 +141,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
       {
         name: 'Qwen2.5-VL',
-        loraSupport: false,
         links: [
           'https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct',
           'https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct',
@@ -166,7 +153,6 @@ export const VLM_MODELS: VLMModelType[] = [
     models: [
      {
         name: 'gemma3',
-        loraSupport: false,
         links: [
           'https://huggingface.co/google/gemma-3-4b-it',
           'https://huggingface.co/google/gemma-3-12b-it',
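After this change, each model entry no longer carries a per-model LoRA flag. A minimal sketch of the slimmed type with one illustrative entry (field names mirror the diff above; the entry itself is trimmed for illustration, the real file lists many more architectures):

```typescript
// Slimmed model entry type after the loraSupport field removal (per the diff above).
type VLMModelType = {
  architecture: string;
  models: Array<{
    name: string;
    links: string[];
    notesLink?: string; // optional anchor to a notes section on the page
  }>;
};

// One illustrative entry built against the slimmed type.
const llava: VLMModelType = {
  architecture: "LLaVA",
  models: [
    {
      name: "LLaVA-v1.5",
      links: ["https://huggingface.co/llava-hf/llava-1.5-7b-hf"],
    },
  ],
};

console.log(llava.models[0].name); // → LLaVA-v1.5
```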

site/docs/supported-models/_components/whisper-models-table/index.tsx

Lines changed: 3 additions & 5 deletions
@@ -1,9 +1,9 @@
 import React from 'react';
-import { BaseModelsTable, LinksCell, StatusCell } from '../base-models-table';
+import { BaseModelsTable, LinksCell } from '../base-models-table';
 import { WHISPER_MODELS } from './models';

 export default function WhisperModelsTable(): React.JSX.Element {
-  const headers = ['Architecture', 'Models', 'LoRA Support', 'Example HuggingFace Models'];
+  const headers = ['Architecture', 'Models', 'Example HuggingFace Models'];

   const rows = WHISPER_MODELS.map(({ architecture, models }) => (
     <>
@@ -12,13 +12,11 @@ export default function WhisperModelsTable(): React.JSX.Element {
         <code>{architecture}</code>
       </td>
       <td>{models[0].name}</td>
-      <StatusCell value={models[0].loraSupport} />
       <LinksCell links={models[0].links} />
     </tr>
-    {models.slice(1).map(({ name, loraSupport, links }) => (
+    {models.slice(1).map(({ name, links }) => (
       <tr key={name}>
         <td>{name}</td>
-        <StatusCell value={loraSupport} />
         <LinksCell links={links} />
       </tr>
     ))}

site/docs/supported-models/_components/whisper-models-table/models.ts

Lines changed: 0 additions & 3 deletions
@@ -2,7 +2,6 @@ type WhisperModelType = {
   architecture: string;
   models: Array<{
     name: string;
-    loraSupport: boolean;
     links: string[];
   }>;
 };
@@ -13,7 +12,6 @@ export const WHISPER_MODELS: WhisperModelType[] = [
     models: [
       {
         name: 'Whisper',
-        loraSupport: false,
         links: [
           'https://huggingface.co/openai/whisper-tiny',
           'https://huggingface.co/openai/whisper-tiny.en',
@@ -28,7 +26,6 @@ export const WHISPER_MODELS: WhisperModelType[] = [
       },
       {
         name: 'Distil-Whisper',
-        loraSupport: false,
         links: [
           'https://huggingface.co/distil-whisper/distil-small.en',
           'https://huggingface.co/distil-whisper/distil-medium.en',

site/docs/supported-models/index.mdx

Lines changed: 33 additions & 23 deletions
@@ -9,26 +9,22 @@ import TextRerankModelsTable from './_components/text-rerank-models-table';

 # Supported Models

-:::info
-
+:::info Models Compatibility
 Other models with similar architectures may also work successfully even if not explicitly validated.
 Consider testing any unlisted models to verify compatibility with your specific use case.
-
 :::

 ## Large Language Models (LLMs)

-<LLMModelsTable />
-
-:::info
-
-LoRA adapters are supported.
-
+:::tip LoRA Support
+LLM pipeline supports LoRA adapters.
 :::

+<LLMModelsTable />
+
 ::::info

-The pipeline can work with other similar topologies produced by `optimum-intel` with the same model signature.
+The LLM pipeline can work with other similar topologies produced by `optimum-intel` with the same model signature.
 The model is required to have the following inputs after the conversion:

 1. `input_ids` contains the tokens.
@@ -50,6 +46,10 @@ Models should belong to the same family and have the same tokenizers.

 ## Visual Language Models (VLMs)

+:::info LoRA Support
+VLM pipeline does **not** support LoRA adapters.
+:::
+
 <VLMModelsTable />

 :::warning VLM Models Notes
@@ -62,7 +62,7 @@ pip install timm einops
 ```
 #### MiniCPMO {#minicpm-o-notes}

-1. `openbmb/MiniCPM-o-2_6` doesn't support transformers>=4.52 which is required for `optimum-cli` export.
+1. `openbmb/MiniCPM-o-2_6` doesn't support `transformers>=4.52` which is required for `optimum-cli` export.
 2. `--task image-text-to-text` is required for `optimum-cli export openvino --trust-remote-code` because `image-text-to-text` isn't `MiniCPM-o-2_6`'s native task.

 #### phi3_v {#phi3_v-notes}
@@ -73,42 +73,52 @@ generation_config.set_eos_token_id(pipe.get_tokenizer().get_eos_token_id())
 ```
 #### phi4mm {#phi4mm-notes}

-Apply https://huggingface.co/microsoft/Phi-4-multimodal-instruct/discussions/78/files to fix the model export for transformers>=4.50
+Apply https://huggingface.co/microsoft/Phi-4-multimodal-instruct/discussions/78/files to fix the model export for `transformers>=4.50`
 :::

 ## Speech Recognition Models (Whisper-based)

+:::info LoRA Support
+Speech recognition pipeline does **not** support LoRA adapters.
+:::
+
 <WhisperModelsTable />

 ## Speech Generation Models

+:::info LoRA Support
+Speech generation pipeline does **not** support LoRA adapters.
+:::
+
 <SpeechGenerationModelsTable />

 ## Text Embeddings Models

-<TextEmbeddingsModelsTable />
-
-:::info
-LoRA adapters are not supported.
+:::info LoRA Support
+Text embeddings pipeline does **not** support LoRA adapters.
 :::

-:::info
+<TextEmbeddingsModelsTable />
+
+:::warning Text Embeddings Models Notes
 Qwen3 Embedding models require `--task feature-extraction` during the conversion with `optimum-cli`.
 :::

 ## Text Rerank Models

-<TextRerankModelsTable />
-
-:::info
-LoRA adapters are not supported.
+:::info LoRA Support
+Text rerank pipeline does **not** support LoRA adapters.
 :::

-:::info
+<TextRerankModelsTable />
+
+:::warning Text Rerank Models Notes
 Text Rerank models require appropriate `--task` provided during the conversion with `optimum-cli`. Task can be found in the table above.
 :::

-:::info
+___
+
+:::info Hugging Face Notes
 Some models may require access request submission on the Hugging Face page to be downloaded.

 If https://huggingface.co/ is down, the conversion step won't be able to download the models.
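The per-pipeline LoRA admonitions introduced in this file can be summarized as a small lookup. A sketch (pipeline names here are descriptive labels, not identifiers from the repo):

```typescript
// LoRA adapter support per pipeline, as stated by the updated supported-models page.
const loraSupport: Record<string, boolean> = {
  "LLM": true,
  "VLM": false,
  "Speech recognition (Whisper)": false,
  "Speech generation": false,
  "Text embeddings": false,
  "Text rerank": false,
};

// Per the page, only the LLM pipeline supports LoRA adapters.
const withLora = Object.entries(loraSupport)
  .filter(([, supported]) => supported)
  .map(([pipeline]) => pipeline);

console.log(withLora); // → [ 'LLM' ]
```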

0 commit comments
