
Commit e8317aa — Document and LLM dockerfile update for v2.1.30+xpu (#4181)

Authored by ZhaoqiongZ; co-authored by zhuyuhua-v and Ye Ting. Parent: 8cd6038.

Commit message:

* update woq int4 to llm readme and llm.rst
* add doc for weight only quantization
* update version to 2.1.30
* fix doc missing files and typo
* remove gpt-j woq and add woq script
* add ipex_log to toctree and mark Prototype
* remove idex compilation and idex_dependency related
* update woq link and llm related link
* add torch ccl internal link for test
* add public torch-ccl 2.1.300
* remove llama2-34b and falcon
* fix compile issue in LLM dockerfile
* fix typo of run_bc_woq file and remove useless configuration
* add extra required packages for woq and remove code snippet
* Correct Llama 2 usage and replace PVC and ARC to official name.
* Update docker/README.md
* add ipex log link in features page
* separate docker build for compile source and prebuilt
* sync woq int4 instruction
* fix installation command
* woq int4 link update for release branch

Signed-off-by: ZhaoqiongZ <zhaoqiong.zheng@intel.com>
Co-authored-by: zhuyuhua-v <yuhua.zhu@intel.com>
Co-authored-by: Ye Ting <ting.ye@intel.com>

21 files changed (+400 −137 lines)

README.md — 3 additions, 2 deletions

@@ -50,9 +50,9 @@ Compilation instruction of the latest CPU code base `main` branch can be found i
 You can install Intel® Extension for PyTorch\* for GPU via command below.
 
 ```bash
-python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+python -m pip install torch==2.1.0.post2 torchvision==0.16.0.post2 torchaudio==2.1.0.post2 intel-extension-for-pytorch==2.1.30+xpu oneccl_bind_pt==2.1.300+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 # for PRC user, you can check with the following link
-python -m pip install torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+python -m pip install torch==2.1.0.post2 torchvision==0.16.0.post2 torchaudio==2.1.0.post2 intel-extension-for-pytorch==2.1.30+xpu oneccl_bind_pt==2.1.300+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
 
 ```

@@ -115,3 +115,4 @@ for information on how to report a potential security issue or vulnerability.
 
 See also: [Security Policy](SECURITY.md)
+
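The updated install command pins matching builds: `torch==2.1.0.post2` alongside `intel-extension-for-pytorch==2.1.30+xpu` and `oneccl_bind_pt==2.1.300+xpu`. The part after the `+` is a PEP 440 local version label, and a mismatched label is a common install pitfall. As a hedged illustration (the helper name is hypothetical, not part of any Intel tooling), splitting that label looks like:

```python
# Hypothetical helper: split a PEP 440 version such as "2.1.30+xpu"
# into its release part and local build label ("xpu" here).
def split_local_version(version: str):
    release, sep, local = version.partition("+")
    return release, (local if sep else None)

# All wheels pinned by the updated command carry the "xpu" label.
for v in ("2.1.30+xpu", "2.1.300+xpu"):
    _, local = split_local_version(v)
    assert local == "xpu"

print(split_local_version("2.1.30+xpu"))  # -> ('2.1.30', 'xpu')
```

A plain release such as `2.1.0.post2` has no label, so the helper returns `None` for the second element.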

dependency_version.yml — 3 additions, 1 deletion

@@ -14,10 +14,12 @@ torchvision:
   commit: v0.16.0
 torch-ccl:
   repo: https://github.com/intel/torch-ccl.git
-  commit: a2164779c09bc421ad171968d5b7a9b26dcd8f0b
+  commit: 1053f1354f6293abc11e93af085524fe3664219f
   version: 2.1.300+xpu
 deepspeed:
   version: 0.14.0
+intel-extension-for-deepspeed:
+  version: 2.1.30
 transformers:
   version: 4.31.0
   commit: v4.31.0
docker/README.md — 3 additions, 2 deletions

@@ -19,10 +19,10 @@ Run the following commands to build a docker image by compiling from source.
 ```
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
-git checkout xpu-main
+git checkout release/xpu/2.1.30
 git submodule sync
 git submodule update --init --recursive
-docker build -f docker/Dockerfile.compile --build-arg GID_RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,') -t intel-extension-for-pytorch:xpu .
+docker build -f docker/Dockerfile.compile --build-arg GID_RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,') -t intel/intel-extension-for-pytorch:2.1.30-xpu .
 ```
 
 Alternatively, the `./build.sh` script contains a docker build command that installs prebuilt wheel files; update the relevant build arguments and run the script from the current directory.

@@ -98,3 +98,4 @@ Sample output looks like below:
 
 Now you are inside the container with Python 3.10, PyTorch, and Intel® Extension for PyTorch\* preinstalled. You can run your own script
 to run on Intel GPU.
+
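The build command above passes `GID_RENDER` so the container user joins the host's `render` group and can open the GPU device nodes; the `sed` expression extracts the GID (third field) from the `getent group render` output. As a hedged Python equivalent of that one `sed` step (the sample input line is hypothetical):

```python
import re

# Python equivalent of:
#   getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,'
# getent prints "name:password:gid:member-list"; capture the third field.
def render_gid(getent_line: str) -> str:
    match = re.match(r"^render:[^:]*:([^:]*):.*$", getent_line)
    if match is None:
        raise ValueError("no 'render' group entry")
    return match.group(1)

print(render_gid("render:x:110:alice,bob"))  # -> 110
```

The member list may be empty, which the trailing `.*$` also matches.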
(Three image files changed — 92.5 KB, 126 KB, 119 KB — previews omitted.)

docs/tutorials/contribution.md — 2 additions, 1 deletion

@@ -16,7 +16,7 @@ Once you implement and test your feature or bug-fix, submit a Pull Request to ht
 
 ## Developing Intel® Extension for PyTorch\* on XPU
 
-A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](../../../index.html#installation?platform=gpu&version=v2.1.10%2Bxpu).
+A full set of instructions on installing Intel® Extension for PyTorch\* from source is in the [Installation document](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
 
 To develop on your machine, here are some tips:
 
@@ -126,3 +126,4 @@ To build the documentation:
 #### Tips
 
 The `.rst` source files live in `docs/tutorials` folder. Some of the `.rst` files pull in docstrings from Intel® Extension for PyTorch\* Python code (for example, via the `autofunction` or `autoclass` directives). To shorten doc build times, it is helpful to remove the files you are not working on, only keeping the base `index.rst` file and the files you are editing. The Sphinx build will produce missing file warnings but will still complete.
+

docs/tutorials/features.rst — 17 additions, 1 deletion

@@ -178,7 +178,7 @@ For more detailed information, check `Profiler Kineto <features/profiler_kineto.
 
 Compute Engine (Prototype feature for debug)
------------------------------------------------
+--------------------------------------------
 
 Compute engine is a prototype feature which provides the capacity to choose specific backend for operators with multiple implementations.
 
@@ -191,3 +191,19 @@ For more detailed information, check `Compute Engine <features/compute_engine.md
    features/compute_engine
 
 
+
+IPEX LOG (Prototype)
+--------------------
+
+IPEX_LOGGING provides the capacity to log Intel® Extension for PyTorch\* internal information. For torch-style logging (the log/verbose facilities introduced by PyTorch, mirroring the CUDA code), keep using the PyTorch macros such as TORCH_CHECK and TORCH_ERROR. Use IPEX_LOGGING only for logs that are specific to Intel® Extension for PyTorch\* or that trace its execution. Parts of the usage are still under discussion with the Habana side; this section will be updated if the feature changes.
+
+For more detailed information, check `IPEX LOG <features/ipex_log.md>`_.
+
+.. toctree::
+   :hidden:
+   :maxdepth: 1
+
+   features/ipex_log
+
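The new IPEX LOG section separates extension-specific log lines from PyTorch's own check macros. As an illustration only (the formatting and names here are hypothetical, not the actual IPEX_LOGGING API, which is a C++ macro-based facility), the idea of component-tagged log lines can be sketched in Python:

```python
# Illustrative only: mimics the idea of component-scoped log lines;
# the real IPEX_LOGGING is a C++ macro facility, not this function.
def format_log(level: str, component: str, message: str) -> str:
    return f"[{level}] [{component}] {message}"

# Hypothetical component tag and message.
line = format_log("INFO", "OPS", "convolution dispatched to oneDNN")
print(line)  # -> [INFO] [OPS] convolution dispatched to oneDNN
```

The component tag is what lets a trace of extension execution be filtered apart from generic framework output.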

docs/tutorials/features/torch_compile_gpu.md — 3 additions, 2 deletions

@@ -10,9 +10,9 @@ Intel® Extension for PyTorch\* now empowers users to seamlessly harness graph c
 ## Required Dependencies
 
 **Verified version**:
-- `torch` : v2.1.0
+- `torch` : > v2.1.0
 - `intel_extension_for_pytorch` : > v2.1.10
-- `triton` : [v2.1.0](https://github.com/intel/intel-xpu-backend-for-triton/releases/tag/v2.1.0) with Intel® XPU Backend for Triton* backend enabled.
+- `triton` : > [v2.1.0](https://github.com/intel/intel-xpu-backend-for-triton/releases/tag/v2.1.0) with Intel® XPU Backend for Triton* backend enabled.
 
 Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/xpu/2.1.30+xpu/tutorials/installation.html) to install `torch` and `intel_extension_for_pytorch` firstly.

@@ -71,3 +71,4 @@ optimizer.zero_grad()
 loss.backward()
 optimizer.step()
 ```
+

docs/tutorials/getting_started.md — 2 additions, 1 deletion

@@ -1,6 +1,6 @@
 # Quick Start
 
-The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=gpu&version=v2.1.10%2Bxpu).
+The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=gpu&version=v2.1.30%2Bxpu).
 
 To start using the Intel® Extension for PyTorch\* in your code, you need to make the following changes:
 
@@ -58,3 +58,4 @@ source /opt/intel/oneapi/compiler/latest/env/vars.sh
 source /opt/intel/oneapi/mkl/latest/env/vars.sh
 python <script>
 ```
+
