@@ -113,7 +113,7 @@ def forward(self, x):
 )
 
 ####################################################
-# We can load from this `path` and use it to perform inference.
+# We can load from this ``path`` and use it to perform inference.
 
 compiled_binary = torch._inductor.aoti_load_package(path)
 output_compiled = compiled_binary(input)
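For context, here is a minimal sketch of how a packaged artifact like ``path`` might be produced before this load step. The ``Model`` class, shapes, and input here are hypothetical stand-ins, not taken from this PR:

```python
import torch

class Model(torch.nn.Module):
    # Hypothetical stand-in for the tutorial's model.
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 10)

    def forward(self, x):
        return self.linear(x)

model = Model().eval()
input = torch.randn(1, 10)

# Export to an ExportedProgram, then AOT-compile and package it;
# aoti_compile_and_package returns the path of the generated artifact.
exported = torch.export.export(model, (input,))
path = torch._inductor.aoti_compile_and_package(exported)

# Load the package and run inference, as in the hunk above.
compiled_binary = torch._inductor.aoti_load_package(path)
output_compiled = compiled_binary(input)
```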
@@ -136,7 +136,7 @@ def forward(self, x):
 )
 
 ###################################################
-# An exported program (```torch.export.ExportedProgram```) contains the Tensor computation,
+# An exported program (``torch.export.ExportedProgram``) contains the Tensor computation,
 # a state_dict containing tensor values of all lifted parameters and buffers alongside
 # other metadata. We specify the ``aot_inductor.package_constants_in_so`` option to be ``False`` to
 # not serialize the model parameters in the generated artifact.
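A hedged sketch of how that option might be passed, assuming ``exported`` is an ``ExportedProgram`` from a prior ``torch.export.export`` call; the config key is the one named in the comment above:

```python
# Compile and package without serializing the model weights into the
# generated artifact, via the "aot_inductor.package_constants_in_so"
# inductor config mentioned in the comment above.
path = torch._inductor.aoti_compile_and_package(
    exported,  # an ExportedProgram from torch.export.export
    inductor_configs={"aot_inductor.package_constants_in_so": False},
)
```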
@@ -168,7 +168,7 @@ def forward(self, x):
 ###################################################
 # Next, let's measure the compilation time of the full model and the regional compilation.
 #
-# ```torch.compile``` is a JIT compiler, which means that it compiles on the first invocation.
+# ``torch.compile`` is a JIT compiler, which means that it compiles on the first invocation.
 # In the code below, we measure the total time spent in the first invocation. While this method is not
 # precise, it provides a good estimate since the majority of the time is spent in
 # compilation.
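A rough sketch of that first-invocation measurement. The model and input are the hypothetical ones from the earlier sketch, and on GPU you would additionally need ``torch.cuda.synchronize()`` for accurate wall-clock timing:

```python
import time

import torch

def time_first_call(fn, *args):
    # The first invocation of a torch.compile-d module triggers
    # compilation, so its wall time approximates compile time.
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

model = Model()  # hypothetical model from the sketch above
x = torch.randn(1, 10)

# Full-model compilation: wrap the whole module.
full_compiled = torch.compile(model)
print(f"full-model first call: {time_first_call(full_compiled, x):.2f}s")

# Reset dynamo state so the next measurement compiles from scratch.
torch._dynamo.reset()

# Regional compilation: compile only a submodule (here, the linear layer),
# leaving the rest of the model in eager mode.
regional = Model()
regional.linear = torch.compile(regional.linear)
print(f"regional first call: {time_first_call(regional, x):.2f}s")
```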