Commit 8ca200a (parent 277fa98): Update

15 files changed, +1125 −351 lines

beginner_source/intro.rst

This file was deleted (−40 lines).

compilers.rst

New file (+268 lines):
Compilers
=========

Explore PyTorch compilers to optimize and deploy models efficiently.
Learn about APIs like ``torch.compile`` and ``torch.export``
that let you enhance model performance and streamline deployment
processes.
Explore advanced topics such as compiled autograd, dynamic compilation
control, and third-party backend solutions.

.. warning::

   TorchScript is no longer in active development.
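As a minimal sketch of the ``torch.compile`` entry point the cards below cover: wrapping a model requires no changes to the model code itself. This example uses the ``eager`` debugging backend (an assumption made here for portability; the default backend is ``inductor``, which needs a compiler toolchain).

```python
import torch

# A toy module; torch.compile wraps it without changing the model code.
# backend="eager" is a debugging backend that skips code generation, so
# this sketch runs even where the default "inductor" toolchain is absent.
model = torch.nn.Linear(4, 2)
compiled_model = torch.compile(model, backend="eager")

x = torch.randn(8, 4)
out = compiled_model(x)
print(out.shape)  # torch.Size([8, 2])
```

The compiled module is a drop-in replacement: it produces the same outputs as the original, only (with a real backend) faster.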

.. raw:: html

   <div id="tutorial-cards-container">

   <nav class="navbar navbar-expand-lg navbar-light tutorials-nav col-12">
     <div class="tutorial-tags-container">
       <div id="dropdown-filter-tags">
         <div class="tutorial-filter-menu">
           <div class="tutorial-filter filter-btn all-tag-selected" data-tag="all">All</div>
         </div>
       </div>
     </div>
   </nav>

   <hr class="tutorials-hr">

   <div class="row">

   <div id="tutorial-cards">
   <div class="list">

.. customcarditem::
   :header: torch.compile Tutorial
   :card_description: Speed up your models with minimal code changes using torch.compile, the latest PyTorch compiler solution.
   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: intermediate/torch_compile_tutorial.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Compiled Autograd: Capturing a larger backward graph for ``torch.compile``
   :card_description: Learn how to use compiled autograd to capture a larger backward graph.
   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: intermediate/compiled_autograd_tutorial
   :tags: Model-Optimization,CUDA,torch.compile

.. customcarditem::
   :header: Inductor CPU Backend Debugging and Profiling
   :card_description: Learn usage, debugging, and performance profiling for ``torch.compile`` with the Inductor CPU backend.
   :image: _static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: intermediate/inductor_debug_cpu.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Dynamic Compilation Control with ``torch.compiler.set_stance``
   :card_description: Learn how to use torch.compiler.set_stance to control compilation behavior.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/torch_compiler_set_stance_tutorial.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Demonstration of torch.export flow, common challenges and the solutions to address them
   :card_description: Learn how to export models for popular use cases.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/torch_export_challenges_solutions.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: (beta) Compiling the Optimizer with torch.compile
   :card_description: Speed up the optimizer using torch.compile.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/compiling_optimizer.html
   :tags: Model-Optimization,torch.compile
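One way the optimizer-compilation idea can be sketched: wrap the optimizer step in ``torch.compile`` (the ``eager`` backend is an assumption here so the sketch runs without a compiler toolchain):

```python
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.1)

@torch.compile(backend="eager")  # a real run would use the default backend
def step():
    # The optimizer update itself is what gets captured and sped up.
    opt.step()

loss = model(torch.randn(2, 4)).sum()
loss.backward()
before = model.weight.detach().clone()
step()
print(bool((model.weight != before).any()))  # True: parameters were updated
```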

.. customcarditem::
   :header: (beta) Running the compiled optimizer with an LR Scheduler
   :card_description: Speed up training with an LRScheduler and a compiled optimizer.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/compiling_optimizer_lr_scheduler.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Using User-Defined Triton Kernels with ``torch.compile``
   :card_description: Learn how to use user-defined kernels with ``torch.compile``.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/torch_compile_user_defined_triton_kernel_tutorial.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Compile Time Caching in ``torch.compile``
   :card_description: Learn how to use compile time caching in ``torch.compile``.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/torch_compile_caching_tutorial.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Compile Time Caching Configurations
   :card_description: Learn how to configure compile time caching in ``torch.compile``.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/torch_compile_caching_configuration_tutorial.html
   :tags: Model-Optimization,torch.compile

.. customcarditem::
   :header: Reducing torch.compile cold start compilation time with regional compilation
   :card_description: Learn how to use regional compilation to control cold start compile time.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/regional_compilation.html
   :tags: Model-Optimization,torch.compile
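The idea behind regional compilation, roughly: compile a repeated block instead of the whole model, so the compiled code for the region is reused and cold-start compile time does not grow with model depth. A sketch, again assuming the ``eager`` backend for portability:

```python
import torch

class Block(torch.nn.Module):
    """A repeated region of the model."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Compile each repeated region individually; because every Block has the
# same structure, the compiler cache lets later blocks reuse the work done
# for the first one, instead of recompiling one monolithic graph.
blocks = [torch.compile(Block(), backend="eager") for _ in range(4)]
model = torch.nn.Sequential(*blocks)

out = model(torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 8])
```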

.. Export

.. customcarditem::
   :header: torch.export AOTInductor Tutorial for Python runtime
   :card_description: Walk through an end-to-end example of using AOTInductor with the Python runtime.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: ../recipes/torch_export_aoti_python.html
   :tags: Basics,torch.export

.. customcarditem::
   :header: Deep dive into torch.export
   :card_description: Learn how to use torch.export to export PyTorch models into standardized model representations.
   :image: ../_static/img/thumbnails/cropped/generic-pytorch-logo.png
   :link: torch_export_tutorial.html
   :tags: Basics,torch.export

.. ONNX

.. customcarditem::
   :header: (optional) Exporting a PyTorch model to ONNX using TorchDynamo backend and Running it using ONNX Runtime
   :card_description: Build an image classifier model in PyTorch and convert it to ONNX before deploying it with ONNX Runtime.
   :image: _static/img/thumbnails/cropped/Exporting-PyTorch-Models-to-ONNX-Graphs.png
   :link: beginner/onnx/export_simple_model_to_onnx_tutorial.html
   :tags: Production,ONNX,Backends

.. customcarditem::
   :header: Extending the ONNX exporter operator support
   :card_description: Demonstrate end-to-end how to address unsupported operators in ONNX.
   :image: _static/img/thumbnails/cropped/Exporting-PyTorch-Models-to-ONNX-Graphs.png
   :link: beginner/onnx/onnx_registry_tutorial.html
   :tags: Production,ONNX,Backends

.. customcarditem::
   :header: Exporting a model with control flow to ONNX
   :card_description: Demonstrate how to handle control flow logic while exporting a PyTorch model to ONNX.
   :image: _static/img/thumbnails/cropped/Exporting-PyTorch-Models-to-ONNX-Graphs.png
   :link: beginner/onnx/export_control_flow_model_to_onnx_tutorial.html
   :tags: Production,ONNX,Backends

.. Code Transformations with FX

.. customcarditem::
   :header: Building a Convolution/Batch Norm fuser in FX
   :card_description: Build a simple FX pass that fuses batch norm into convolution to improve performance during inference.
   :image: _static/img/thumbnails/cropped/Deploying-PyTorch-in-Python-via-a-REST-API-with-Flask.png
   :link: intermediate/fx_conv_bn_fuser.html
   :tags: FX

.. customcarditem::
   :header: Building a Simple Performance Profiler with FX
   :card_description: Build a simple FX interpreter to record the runtime of op, module, and function calls and report statistics.
   :image: _static/img/thumbnails/cropped/Deploying-PyTorch-in-Python-via-a-REST-API-with-Flask.png
   :link: intermediate/fx_profiling_tutorial.html
   :tags: FX
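The FX passes above all start from the same primitive: symbolic tracing, which records each operation as a node in a graph that a pass can inspect or rewrite. A small sketch:

```python
import torch
import torch.fx

def f(x):
    return torch.relu(x) + x

# symbolic_trace records every operation as a node in a Graph; FX passes
# (like the conv/batch-norm fuser above) work by rewriting these nodes.
gm = torch.fx.symbolic_trace(f)
print([node.op for node in gm.graph.nodes])
# e.g. ['placeholder', 'call_function', 'call_function', 'output']

# The traced GraphModule is still callable and behaves like the original.
out = gm(torch.tensor([-1.0, 2.0]))
print(out)  # tensor([-1., 4.]): relu(-1) + (-1) = -1, relu(2) + 2 = 4
```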

.. TorchScript

.. customcarditem::
   :header: Introduction to TorchScript
   :card_description: Introduction to TorchScript, an intermediate representation of a PyTorch model (subclass of nn.Module) that can then be run in a high-performance environment such as C++.
   :image: _static/img/thumbnails/cropped/Introduction-to-TorchScript.png
   :link: beginner/Intro_to_TorchScript_tutorial.html
   :tags: Production,TorchScript
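For orientation, a minimal TorchScript sketch (keeping in mind the warning at the top of this page: TorchScript is no longer in active development, and ``torch.compile``/``torch.export`` are the current direction):

```python
import torch

# torch.jit.script compiles the function into a TorchScript graph that can
# be serialized and executed without a Python interpreter.
@torch.jit.script
def scale_and_shift(x: torch.Tensor) -> torch.Tensor:
    return x * 2.0 + 1.0

y = scale_and_shift(torch.zeros(3))
print(y)  # tensor([1., 1., 1.])

# .code shows the TorchScript IR as Python-like source.
print(scale_and_shift.code)
```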

.. customcarditem::
   :header: Loading a TorchScript Model in C++
   :card_description: Learn how to go from an existing Python model to a serialized representation that can be loaded and executed purely from C++, with no dependency on Python.
   :image: _static/img/thumbnails/cropped/Loading-a-TorchScript-Model-in-Cpp.png
   :link: advanced/cpp_export.html
   :tags: Production,TorchScript

.. customcarditem::
   :header: Distributed Optimizer with TorchScript support
   :card_description: Learn how to enable TorchScript support for the Distributed Optimizer.
   :image: ../_static/img/thumbnails/cropped/profiler.png
   :link: ../recipes/distributed_optim_torchscript.html
   :tags: Distributed-Training,TorchScript

.. customcarditem::
   :header: TorchScript for Deployment
   :card_description: Learn how to export your trained model in TorchScript format, load it in C++, and run inference.
   :image: ../_static/img/thumbnails/cropped/torchscript_overview.png
   :link: ../recipes/torchscript_inference.html
   :tags: TorchScript

.. customcarditem::
   :header: Deploying with Flask
   :card_description: Learn how to use Flask, a lightweight web server, to quickly set up a web API from your trained PyTorch model.
   :image: ../_static/img/thumbnails/cropped/using-flask-create-restful-api.png
   :link: ../recipes/deployment_with_flask.html
   :tags: Production,TorchScript

.. raw:: html

   </div>
   </div>

.. End of tutorial cards section

.. -----------------------------------------
.. Page TOC
.. -----------------------------------------

.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: torch.compile

   intermediate/torch_compile_tutorial
   intermediate/compiled_autograd_tutorial
   intermediate/inductor_debug_cpu
   recipes/torch_compiler_set_stance_tutorial
   recipes/torch_export_challenges_solutions
   recipes/compiling_optimizer
   recipes/compiling_optimizer_lr_scheduler
   recipes/torch_compile_user_defined_triton_kernel_tutorial
   recipes/torch_compile_caching_tutorial
   recipes/regional_compilation

.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: torch.export

   intermediate/torch_export_tutorial
   recipes/torch_export_aoti_python
   recipes/torch_export_challenges_solutions

.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: ONNX

   beginner/onnx/intro_onnx
   beginner/onnx/export_simple_model_to_onnx_tutorial
   beginner/onnx/onnx_registry_tutorial
   beginner/onnx/export_control_flow_model_to_onnx_tutorial

.. toctree::
   :maxdepth: 2
   :includehidden:
   :hidden:
   :caption: Code Transforms with FX

   intermediate/fx_conv_bn_fuser
   intermediate/fx_profiling_tutorial

.. toctree::
   :maxdepth: 2
   :hidden:
   :caption: TorchScript

   beginner/Intro_to_TorchScript_tutorial
   recipes/torchscript_inference
   recipes/distributed_optim_torchscript
   advanced/cpp_export

conf.py (+8, −5)

@@ -88,7 +88,7 @@
     "sphinx_design",
     "sphinx_sitemap",
     "sphinxcontrib.mermaid",
-    "pytorch_sphinx_theme2"
+    "pytorch_sphinx_theme2",
 ]

 myst_enable_extensions = [
@@ -181,10 +181,13 @@ def reset_seeds(gallery_conf, fname):
         },
     ],
     "use_edit_page_button": True,
-    "logo": {
-        "text": "Home",
+    #"logo": {
+    #    "text": "Home",
+    #},
     "header_links_before_dropdown": 9,
-    },
+    "navbar_start": ["pytorch_version"],
+    "navbar_center": "navbar-nav",
+    "display_version": True,
 }

 theme_variables = pytorch_sphinx_theme2.get_theme_variables()
@@ -253,7 +256,7 @@ def reset_seeds(gallery_conf, fname):
 # built documents.

 # The short X.Y version.
-version = str(torch.__version__)
+version = "v" + str(torch.__version__)
 # The full version, including alpha/beta/rc tags.
 release = str(torch.__version__)
