
Commit cb620cf

Commit message: up
1 parent 0c45ef9 commit cb620cf

File tree

1 file changed: +8 −8 lines


recipes_source/regional_aot.py

Lines changed: 8 additions & 8 deletions
@@ -10,14 +10,14 @@
 just-in-time (JiT) compilation.
 
 This recipe shows how to apply similar principles when compiling a model ahead-of-time (AoT). If you
-are not familiar with AOTInductor and `torch.export`, we recommend you to check out [this tutorial](https://docs.pytorch.org/tutorials/recipes/torch_export_aoti_python.html).
+are not familiar with AOTInductor and ``torch.export``, we recommend you to check out [this tutorial](https://docs.pytorch.org/tutorials/recipes/torch_export_aoti_python.html).
 
 Prerequisites
 ----------------
 
 * Pytorch 2.6 or later
 * Familiarity with regional compilation
-* Familiarity with AOTInductor and `torch.export`
+* Familiarity with AOTInductor and ``torch.export``
 
 Setup
 -----
@@ -85,7 +85,7 @@ def __init__(self):
         self.layers = torch.nn.ModuleList([Layer() for _ in range(64)])
 
     def forward(self, x):
-        # In regional compilation, the self.linear is outside of the scope of `torch.compile`.
+        # In regional compilation, the self.linear is outside of the scope of ``torch.compile``.
         x = self.linear(x)
         for layer in self.layers:
             x = layer(x)
@@ -96,7 +96,7 @@ def forward(self, x):
 # Since we're compiling the model ahead-of-time, we need to prepare representative
 # input examples, that we expect the model to see during actual deployments.
 #
-# Let's create an instance of `Model` and pass it some sample input data.
+# Let's create an instance of ``Model`` and pass it some sample input data.
 #
 
 model = Model().cuda()
@@ -105,8 +105,8 @@ def forward(self, x):
 print(f"{output.shape=}")
 
 ####################################################
-# Now, let's compile our model ahead-of-time. We will use `input` created above to pass
-# to `torch.export`. This will yield a `torch.export.ExportedProgram` which we can compile.
+# Now, let's compile our model ahead-of-time. We will use ``input`` created above to pass
+# to ``torch.export``. This will yield a ``torch.export.ExportedProgram`` which we can compile.
 
 path = torch._inductor.aoti_compile_and_package(
     torch.export.export(model, args=(input,))
@@ -136,7 +136,7 @@ def forward(self, x):
 )
 
 ###################################################
-# An exported program (`torch.export.ExportedProgram`) contains the Tensor computation,
+# An exported program (``torch.export.ExportedProgram``) contains the Tensor computation,
 # a state_dict containing tensor values of all lifted parameters and buffer alongside
 # other metadata. We specify the ``aot_inductor.package_constants_in_so`` to be ``False`` to
 # not serialize the model parameters in the generated artifact.
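The ``aot_inductor.package_constants_in_so`` option is passed through the ``inductor_configs`` argument of ``aoti_compile_and_package``. A hedged sketch of the configuration dictionary (the key is the one named in the diff; the surrounding comments are illustrative):

```python
# Config fragment: keep model weights out of the generated shared object.
# With this set to False, parameters must be supplied when the package is
# loaded instead of being baked into the artifact.
inductor_configs = {"aot_inductor.package_constants_in_so": False}
```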
@@ -168,7 +168,7 @@ def forward(self, x):
 ###################################################
 # Next, let's measure the compilation time of the full model and the regional compilation.
 #
-# `torch.compile` is a JIT compiler, which means that it compiles on the first invocation.
+# ``torch.compile`` is a JIT compiler, which means that it compiles on the first invocation.
 # In the code below, we measure the total time spent in the first invocation. While this method is not
 # precise, it provides a good estimate since the majority of the time is spent in
 # compilation.
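The first-invocation measurement described above can be sketched with a small ``time.perf_counter`` wrapper (illustrative only; the helper name is invented, not part of the recipe):

```python
import time

def time_first_invocation(fn, *args):
    # Wall-clock time of a single call. For a JIT compiler such as
    # torch.compile, the first call includes compilation time, so this
    # gives a rough estimate of compile cost.
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start
```

Calling it once on the fully compiled model and once on the regionally compiled one lets you compare the two first-call times directly.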

0 commit comments
