
Commit 4f92c49

update doc for release 2.6.10 (#5352)
* update llm readme for release 2.6
* update doc torch compile
* explain intel compiler compatible error case while installing ipex
1 parent 8ede969 commit 4f92c49

File tree

3 files changed, +69 -29 lines changed


docs/tutorials/features/torch_compile_gpu.md

Lines changed: 3 additions & 22 deletions
@@ -9,20 +9,10 @@ Intel® Extension for PyTorch\* now empowers users to seamlessly harness graph c
 # Required Dependencies
 
 **Verified version**:
-- `torch` : v2.5
-- `intel_extension_for_pytorch` : v2.5
-- `triton` : v3.1.0+91b14bf559
+- `torch` : v2.6
+- `intel_extension_for_pytorch` : v2.6
 
-
-Install [Intel® oneAPI DPC++/C++ Compiler 2025.0.4](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compiler-download.html).
-
-Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) to install `torch` and `intel_extension_for_pytorch` firstly.
-
-Triton could be directly installed using the following command:
-
-```Bash
-pip install --pre pytorch-triton-xpu==3.1.0+91b14bf559 --index-url https://download.pytorch.org/whl/nightly/xpu
-```
+Follow [Intel® Extension for PyTorch\* Installation](https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.6.10%2Bxpu) to install `torch` and `intel_extension_for_pytorch`. Triton is installed along with `torch`.
 
 The cached files generated by running `torch.compile` with a previous version of Triton generally conflict with the new version.
 So, if the folder `~/.triton` exists before the first run of your `torch.compile` script in the current environment, please delete it.
@@ -32,17 +22,8 @@ So, if the folder `~/.triton` exists before your first running of the `torch.com
 rm -rf ~/.triton
 ```
 
-Remember to activate the oneAPI DPC++/C++ Compiler by following commands.
-
-```bash
-# {dpcpproot} is the location for dpcpp ROOT path and it is where you installed oneAPI DPCPP, usually it is /opt/intel/oneapi/compiler/latest or ~/intel/oneapi/compiler/latest
-source {dpcpproot}/env/vars.sh
-```
-
-
 # Example Usage
 
-
 ## Inference with torch.compile
 
 ```python

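As the torch_compile_gpu.md hunk above notes, a stale `~/.triton` cache left behind by an older Triton conflicts with the new version and should be deleted before the first `torch.compile` run. A minimal sketch of the same cleanup from Python using only the standard library — the helper name is hypothetical and not part of the patched docs:

```python
import shutil
from pathlib import Path

def clear_triton_cache(cache_dir: Path = Path.home() / ".triton") -> bool:
    """Remove a stale Triton kernel cache, mirroring `rm -rf ~/.triton`.

    Returns True if a cache directory existed and was removed,
    False if there was nothing to clean up.
    """
    if cache_dir.exists():
        shutil.rmtree(cache_dir, ignore_errors=True)
        return True
    return False
```

Calling `clear_triton_cache()` before running a `torch.compile` script has the same effect as the `rm -rf ~/.triton` command above, and is a no-op when no cache exists.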
docs/tutorials/known_issues.md

Lines changed: 9 additions & 0 deletions
@@ -117,6 +117,15 @@ Troubleshooting
 pip install --pre pytorch-triton-xpu==3.1.0+91b14bf559 --index-url https://download.pytorch.org/whl/nightly/xpu
 ```
 
+- **Problem**: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
+  torch 2.6.0+xpu requires intel-cmplr-lib-rt==2025.0.2, but you have intel-cmplr-lib-rt 2025.0.4 which is incompatible.
+  torch 2.6.0+xpu requires intel-cmplr-lib-ur==2025.0.2, but you have intel-cmplr-lib-ur 2025.0.4 which is incompatible.
+  torch 2.6.0+xpu requires intel-cmplr-lic-rt==2025.0.2, but you have intel-cmplr-lic-rt 2025.0.4 which is incompatible.
+  torch 2.6.0+xpu requires intel-sycl-rt==2025.0.2, but you have intel-sycl-rt 2025.0.4 which is incompatible.
+- **Cause**: intel-extension-for-pytorch v2.6.10+xpu uses Intel Compiler 2025.0.4 for a distributed-feature fix, while torch v2.6.0+xpu is pinned to 2025.0.2.
+- **Solution**: Ignore this error; torch v2.6.0+xpu is in fact compatible with Intel Compiler 2025.0.4.
+
+
 ## Performance Issue
 
 - **Problem**: Extended durations for data transfers from the host system to the device (H2D) and from the device back to the host system (D2H).

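The known-issues hunk above says the pip resolver complaint — `intel-cmplr-*`/`intel-sycl-rt` 2025.0.4 installed against torch 2.6.0+xpu's 2025.0.2 pin — can be safely ignored. A hedged sketch of that version comparison; the helper names are illustrative, and the "same major.minor, newer patch" rule below encodes only the specific compatibility claim made in the doc, not a general pip rule:

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '2025.0.4' into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def is_documented_benign_mismatch(pinned: str, installed: str) -> bool:
    """True when the installed runtime differs from torch's pin only by a
    newer patch level -- the case the known issue says can be ignored."""
    p, i = parse_version(pinned), parse_version(installed)
    return i[:2] == p[:2] and i[2:] >= p[2:]
```

For the exact conflict pip reports here, `is_documented_benign_mismatch("2025.0.2", "2025.0.4")` is true, matching the doc's advice to ignore the error.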
examples/gpu/llm/README.md

Lines changed: 57 additions & 7 deletions
@@ -8,21 +8,70 @@ Here you can find benchmarking scripts for large language models (LLM) text gene
 
 ## Environment Setup
 
-### [Recommended] Docker-based environment setup with compilation from source
+### [Recommended] Docker-based environment setup with prebuilt wheel files
 
 ```bash
 # Get the Intel® Extension for PyTorch* source code
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
-git checkout xpu-main
+git checkout release/xpu/2.6.10
+git submodule sync
+git submodule update --init --recursive
+
+# Build an image with the provided Dockerfile by installing Intel® Extension for PyTorch* with prebuilt wheels
+docker build -f examples/gpu/llm/Dockerfile -t ipex-llm:26010 .
+
+# Run the container with command below
+docker run -it --rm --privileged -v /dev/dri/by-path:/dev/dri/by-path ipex-llm:26010 bash
+
+# When the command prompt shows inside the docker container, enter llm examples directory
+cd llm
+
+# Activate environment variables
+source ./tools/env_activate.sh [inference|fine-tuning]
+```
+
+### Conda-based environment setup with prebuilt wheel files
+
+Make sure the driver packages are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=pip).
+
+```bash
+
+# Get the Intel® Extension for PyTorch* source code
+git clone https://github.com/intel/intel-extension-for-pytorch.git
+cd intel-extension-for-pytorch
+git checkout release/xpu/2.6.10
+git submodule sync
+git submodule update --init --recursive
+
+# Make sure GCC >= 11 is installed on your system.
+# Create a conda environment
+conda create -n llm python=3.10 -y
+conda activate llm
+# Setup the environment with the provided script
+cd examples/gpu/llm
+# If you want to install Intel® Extension for PyTorch* with prebuilt wheels, use the commands below:
+bash ./tools/env_setup.sh 0x07
+conda deactivate
+conda activate llm
+source ./tools/env_activate.sh [inference|fine-tuning]
+```
+
+### Docker-based environment setup with compilation from source
+
+```bash
+# Get the Intel® Extension for PyTorch* source code
+git clone https://github.com/intel/intel-extension-for-pytorch.git
+cd intel-extension-for-pytorch
+git checkout release/xpu/2.6.10
 git submodule sync
 git submodule update --init --recursive
 
 # Build an image with the provided Dockerfile by compiling Intel® Extension for PyTorch* from source
-docker build -f examples/gpu/llm/Dockerfile --build-arg COMPILE=ON -t ipex-llm:xpu-main .
+docker build -f examples/gpu/llm/Dockerfile --build-arg COMPILE=ON -t ipex-llm:26010 .
 
 # Run the container with command below
-docker run -it --rm --privileged -v /dev/dri/by-path:/dev/dri/by-path ipex-llm:xpu-main bash
+docker run -it --rm --privileged -v /dev/dri/by-path:/dev/dri/by-path ipex-llm:26010 bash
 
 # When the command prompt shows inside the docker container, enter llm examples directory
 cd llm
@@ -33,14 +82,14 @@ source ./tools/env_activate.sh [inference|fine-tuning]
 
 ### Conda-based environment setup with compilation from source
 
-Make sure the driver and Base Toolkit are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.3.110%2Bxpu&os=linux%2Fwsl2&package=source).
+Make sure the driver and Base Toolkit are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=source).
 
 ```bash
 
 # Get the Intel® Extension for PyTorch* source code
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
-git checkout xpu-main
+git checkout release/xpu/2.6.10
 git submodule sync
 git submodule update --init --recursive
 
@@ -51,7 +100,8 @@ conda activate llm
 # Setup the environment with the provided script
 cd examples/gpu/llm
 # If you want to install Intel® Extension for PyTorch* from source, use the commands below:
-# e.g. bash ./tools/env_setup.sh 3 /opt/intel/oneapi pvc
+
+# e.g. bash ./tools/env_setup.sh 0x03 /opt/intel/oneapi/ pvc
 bash ./tools/env_setup.sh 3 <ONEAPI_ROOT_DIR> <AOT>
 
 conda deactivate

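The setup variants above invoke `./tools/env_setup.sh` with a mode flag: 0x07 for prebuilt wheels, and 3/0x03 plus a oneAPI root directory and an AOT device target (e.g. `pvc`) for source builds. A purely illustrative helper that assembles those command lines — the function is hypothetical, and the flag meanings are taken only from the README comments shown in this diff:

```python
from typing import Optional

def env_setup_cmd(mode: int, oneapi_root: Optional[str] = None,
                  aot: Optional[str] = None) -> str:
    """Assemble an env_setup.sh invocation as shown in the README.

    mode 0x07: install prebuilt wheels.
    mode 0x03: compile from source, which additionally needs the
    oneAPI root directory (<ONEAPI_ROOT_DIR>) and an AOT target.
    """
    parts = ["bash", "./tools/env_setup.sh", f"0x{mode:02x}"]
    # Append the optional positional arguments only when provided.
    for extra in (oneapi_root, aot):
        if extra is not None:
            parts.append(extra)
    return " ".join(parts)
```

For example, `env_setup_cmd(0x03, "/opt/intel/oneapi", "pvc")` reproduces the source-build invocation from the README.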