Commit b47faf5

Authored by ZailiWang and jingxu10
fix doc link (#2344)
* .rst link fix
* bug fix in docstring

Co-authored-by: Jing Xu <jing.xu@intel.com>
1 parent dcbfe91 commit b47faf5

File tree

2 files changed: 3 additions, 4 deletions


docs/index.rst

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel®
 Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* ``xpu`` device.
 
 In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, specific optimizations for certain
-LLM models are introduced in the Intel® Extension for PyTorch*. For more information on LLM optimizations, refer to the `Large Language Models (LLM) <llm.html>`_ section.
+LLM models are introduced in the Intel® Extension for PyTorch*. For more information on LLM optimizations, refer to the `Large Language Models (LLM) <tutorials/llm.rst>`_ section.
 
 The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, users can enable it dynamically by importing ``intel_extension_for_pytorch``.
 
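For reference, the two reStructuredText link forms involved in this fix, shown as a sketch (how each target resolves depends on the project's Sphinx build configuration):

```rst
.. Old target: points at a built HTML file, which breaks when
   index.rst and the LLM page do not end up as siblings in the output.

`Large Language Models (LLM) <llm.html>`_

.. Fixed target: a source-relative path to the .rst file, matching
   the repository layout.

`Large Language Models (LLM) <tutorials/llm.rst>`_
```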

intel_extension_for_pytorch/quantization/_qconfig.py

Lines changed: 2 additions & 3 deletions
@@ -72,9 +72,8 @@ def get_smooth_quant_qconfig_mapping(
         For nn.Linear with SmoothQuant enabled, it calculates q-params
         after applying scaling factors. PerChannelMinMaxObserver by
         default.
-        Example: ``torch.ao.quantization.PerChannelMinMaxObserver.with_args(
-        dtype=torch.qint8, qscheme=torch.per_channel_symmetric
-        )``
+        Example: ``torch.ao.quantization.PerChannelMinMaxObserver.with_args(\
+        dtype=torch.qint8, qscheme=torch.per_channel_symmetric)``
         wei_ic_observer: Per-input-channel Observer for weight.
         For nn.Linear with SmoothQuant enabled only.
         PerChannelMinMaxObserver by default.
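The docstring above relies on the ``with_args`` pattern: constructor arguments are pre-bound into a zero-argument observer factory, so a fresh observer can be instantiated later for each layer. A minimal pure-Python sketch of that pattern follows; the class here is a hypothetical stand-in, not the real ``torch.ao.quantization.PerChannelMinMaxObserver``:

```python
import functools


class _PartialWrapper:
    """Zero-argument factory with constructor kwargs pre-bound,
    mirroring the object returned by ``with_args``."""

    def __init__(self, ctor):
        self._ctor = ctor

    def __call__(self):
        # Each call builds a fresh observer instance.
        return self._ctor()


class PerChannelMinMaxObserver:
    """Stand-in for illustration only; the real observer lives in
    torch.ao.quantization and tracks per-channel min/max statistics."""

    def __init__(self, dtype="qint8", qscheme="per_channel_symmetric"):
        self.dtype = dtype
        self.qscheme = qscheme

    @classmethod
    def with_args(cls, **kwargs):
        # Bind kwargs now, defer instantiation until the factory is called.
        return _PartialWrapper(functools.partial(cls, **kwargs))


observer_ctr = PerChannelMinMaxObserver.with_args(
    dtype="qint8", qscheme="per_channel_symmetric"
)
obs = observer_ctr()  # instantiated later, e.g. once per quantized layer
print(obs.dtype, obs.qscheme)  # qint8 per_channel_symmetric
```

Deferring instantiation this way is why qconfig mappings store observer *factories* rather than observer instances: every layer gets its own statistics.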
