Follow the [Installation Guide](https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.6.10%2Bxpu) to install `torch` and `intel_extension_for_pytorch`. Triton is installed along with `torch`.
Cached files may have been generated if you ran `torch.compile` with a previous version of Triton, and they generally conflict with the new version.
So if the folder `~/.triton` exists before the first run of your `torch.compile` script in the current environment, please delete it:
```bash
rm -rf ~/.triton
```
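If you prefer to guard the deletion, the same cleanup can be written defensively (a minimal sketch; `~/.triton` is the cache folder named above):

```bash
# Remove the Triton cache only if it exists, and report what happened
if [ -d "$HOME/.triton" ]; then
    rm -rf "$HOME/.triton"
    echo "Removed stale Triton cache"
else
    echo "No Triton cache found"
fi
```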
Remember to activate the oneAPI DPC++/C++ Compiler with the following commands.

```bash
# {dpcpproot} is the location of the dpcpp ROOT path, i.e. where you installed the oneAPI DPC++/C++ Compiler, usually /opt/intel/oneapi/compiler/latest or ~/intel/oneapi/compiler/latest
source {dpcpproot}/env/vars.sh
```
- **Problem**: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

  ```
  torch 2.6.0+xpu requires intel-cmplr-lib-rt==2025.0.2, but you have intel-cmplr-lib-rt 2025.0.4 which is incompatible.
  torch 2.6.0+xpu requires intel-cmplr-lib-ur==2025.0.2, but you have intel-cmplr-lib-ur 2025.0.4 which is incompatible.
  torch 2.6.0+xpu requires intel-cmplr-lic-rt==2025.0.2, but you have intel-cmplr-lic-rt 2025.0.4 which is incompatible.
  torch 2.6.0+xpu requires intel-sycl-rt==2025.0.2, but you have intel-sycl-rt 2025.0.4 which is incompatible.
  ```

- **Cause**: intel-extension-for-pytorch v2.6.10+xpu uses Intel Compiler 2025.0.4 for a distributed-feature fix, while torch v2.6.0+xpu is pinned to 2025.0.2.
- **Solution**: Ignore the error, since torch v2.6.0+xpu is in fact compatible with Intel Compiler 2025.0.4.
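If you want to confirm which runtime versions are actually present before deciding to ignore the warning, the packages from the messages above can be listed with a short script (a sketch using Python's standard `importlib.metadata`; the package names are copied from the error output):

```python
from importlib import metadata

# Distribution names copied from the pip conflict messages above
runtime_pkgs = [
    "intel-cmplr-lib-rt",
    "intel-cmplr-lib-ur",
    "intel-cmplr-lic-rt",
    "intel-sycl-rt",
]

for pkg in runtime_pkgs:
    try:
        version = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        version = "not installed"
    print(f"{pkg}: {version}")
```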
## Performance Issue
- **Problem**: Extended durations for data transfers from the host system to the device (H2D) and from the device back to the host system (D2H).
### Conda-based environment setup with prebuilt wheel files
Make sure the driver packages are installed. Refer to the [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=pip).

```bash
# Get the Intel® Extension for PyTorch* source code
```
### Conda-based environment setup with compilation from source
Make sure the driver and Base Toolkit are installed. Refer to the [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=source).

```bash
# Get the Intel® Extension for PyTorch* source code
```