<span class="target" id="module-torch.sparse"></span><div class="section" id="torch-sparse">
<span id="sparse-docs"></span><h1>torch.sparse<a class="headerlink" href="#torch-sparse" title="Permalink to this heading">¶</a></h1>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>The PyTorch API of sparse tensors is in beta and may change in the near future.
We highly welcome feature requests, bug reports and general suggestions as GitHub issues.</p>
</div>
<div class="section" id="why-and-when-to-use-sparsity">
<h2>Why and when to use sparsity<a class="headerlink" href="#why-and-when-to-use-sparsity" title="Permalink to this heading">¶</a></h2>
By default, PyTorch stores torch.Tensor elements contiguously in physical memory. This leads to efficient implementations of various array processing algorithms that require fast access to elements.
Now, some users might decide to represent data such as graph adjacency matrices, pruned weights or point clouds by Tensors whose elements are mostly zero valued. We recognize these are important applications and aim to provide performance optimizations for these use cases via sparse storage formats.
Various sparse storage formats such as COO, CSR/CSC, semi-structured, LIL, etc. have been developed over the years. While they differ in exact layouts, they all compress data through efficient representation of zero valued elements. We call the uncompressed values specified, in contrast to unspecified, compressed elements.
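"Specified" refers to storage, not value: an explicitly stored zero still counts as a specified element. A minimal sketch using the COO layout, constructed directly with torch.sparse_coo_tensor:

>>> i = torch.tensor([[0, 1], [1, 0]])      # coordinates of two specified elements
>>> v = torch.tensor([2., 0.])              # the second specified element is an explicit zero
>>> torch.sparse_coo_tensor(i, v, (2, 2))   # nnz counts specified elements, including the zero
tensor(indices=tensor([[0, 1],
                       [1, 0]]),
       values=tensor([2., 0.]),
       size=(2, 2), nnz=2, layout=torch.sparse_coo)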
By compressing repeated zeros, sparse storage formats aim to save memory and computational resources on various CPUs and GPUs. Especially for high degrees of sparsity or highly structured sparsity this can have significant performance implications. As such, sparse storage formats can be seen as a performance optimization.
Like many other performance optimizations, sparse storage formats are not always advantageous. When trying sparse formats for your use case you might find your execution time increases rather than decreases.
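For example, a minimal timing sketch comparing a dense and a CSR matrix-vector product (the shapes, sparsity pattern, and use of torch.utils.benchmark here are illustrative; results are highly hardware and layout dependent):

>>> import torch
>>> import torch.utils.benchmark as benchmark
>>> x = torch.zeros(4096, 4096)
>>> x[::64, ::64] = 1.0                      # roughly 0.02% of elements specified
>>> v = torch.randn(4096)
>>> xs = x.to_sparse_csr()
>>> benchmark.Timer(stmt="x @ v", globals={"x": x, "v": v}).timeit(100)
>>> benchmark.Timer(stmt="xs @ v", globals={"xs": xs, "v": v}).timeit(100)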
Please feel encouraged to open a GitHub issue if you analytically expected to see a stark increase in performance but measured a degradation instead. This helps us prioritize the implementation of efficient kernels and wider performance optimizations.
We make it easy to try different sparsity layouts, and convert between them, without being opinionated on what's best for your particular application.
<div class="section" id="functionality-overview">
<h2>Functionality overview<a class="headerlink" href="#functionality-overview" title="Permalink to this heading">¶</a></h2>
We want it to be straightforward to construct a sparse Tensor from a given dense Tensor by providing conversion routines for each layout.

In the next example we convert a 2D Tensor with default dense (strided) layout to a 2D Tensor backed by the COO memory layout. Only values and indices of non-zero elements are stored in this case.
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">a</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">([[</span><span class="mi">0</span><span class="p">,</span> <span class="mf">2.</span><span class="p">],</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">0</span><span class="p">]])</span>
<span class="gp">>>> </span><span class="n">a</span><span class="o">.</span><span class="n">to_sparse</span><span class="p">()</span>
<span class="go">tensor(indices=tensor([[0, 1],</span>
<span class="go"> [1, 0]]),</span>
<span class="go"> values=tensor([2., 3.]),</span>
<span class="go"> size=(2, 2), nnz=2, layout=torch.sparse_coo)</span>
</pre></div>
</div>
PyTorch currently supports COO, CSR, CSC, BSR, and BSC. We also have a prototype implementation to support semi-structured sparsity; see the Sparse Semi-Structured Tensors section below for more details.
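For illustration, a sketch of the per-layout conversion routines on a small dense Tensor (the blocksize for BSR/BSC is an illustrative choice and must evenly divide the sparse dimensions):

>>> d = torch.tensor([[0, 0, 1., 2.], [3., 0, 0, 0], [0, 0, 0, 0], [4., 0, 0, 0]])
>>> d.to_sparse()                        # COO
>>> d.to_sparse_csr()                    # CSR
>>> d.to_sparse_csc()                    # CSC
>>> d.to_sparse_bsr(blocksize=(2, 2))    # BSR
>>> d.to_sparse_bsc(blocksize=(2, 2))    # BSC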
Note that we provide slight generalizations of these formats.
Batching: Devices such as GPUs require batching for optimal performance and thus we support batch dimensions.

We currently offer a very simple version of batching where each component of a sparse format itself is batched. This also requires the same number of specified elements per batch entry. In this example we construct a 3D (batched) CSR Tensor from a 3D dense Tensor.
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">t</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">([[[</span><span class="mf">1.</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mf">2.</span><span class="p">,</span> <span class="mf">3.</span><span class="p">]],</span> <span class="p">[[</span><span class="mf">4.</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mf">5.</span><span class="p">,</span> <span class="mf">6.</span><span class="p">]]])</span>
<span class="gp">>>> </span><span class="n">t</span><span class="o">.</span><span class="n">dim</span><span class="p">()</span>
<span class="go">3</span>
<span class="gp">>>> </span><span class="n">t</span><span class="o">.</span><span class="n">to_sparse_csr</span><span class="p">()</span>
<span class="go">tensor(crow_indices=tensor([[0, 1, 3],</span>
<span class="go"> [0, 1, 3]]),</span>
<span class="go"> col_indices=tensor([[0, 0, 1],</span>
<span class="go"> [0, 0, 1]]),</span>
<span class="go"> values=tensor([[1., 2., 3.],</span>
<span class="go"> [4., 5., 6.]]), size=(2, 2, 2), nnz=3,</span>
<span class="go"> layout=torch.sparse_csr)</span>
</pre></div>
</div>
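Because each batch entry must have the same number of specified elements, converting a batch whose entries have different sparsity patterns is expected to fail; a sketch (the exact error message may vary across versions):

>>> u = torch.tensor([[[1., 0.], [0., 0.]], [[2., 3.], [4., 5.]]])  # 1 vs. 4 specified elements
>>> u.to_sparse_csr()
Traceback (most recent call last):
  ...
RuntimeError: ...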
Dense dimensions: On the other hand, some data such as graph embeddings might be better viewed as sparse collections of vectors instead of scalars.

In this example we create a 3D hybrid COO Tensor with 2 sparse and 1 dense dimension from a 3D strided Tensor. If an entire row in the 3D strided Tensor is zero, it is not stored. If however any of the values in the row are non-zero, the row is stored entirely. This reduces the number of indices, since we need one index per row instead of one per element. But it also increases the amount of storage for the values: only rows that are entirely zero can be omitted, and the presence of any non-zero valued element causes the entire row to be stored.
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">t</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">([[[</span><span class="mf">0.</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mf">1.</span><span class="p">,</span> <span class="mf">2.</span><span class="p">]],</span> <span class="p">[[</span><span class="mf">0.</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mf">3.</span><span class="p">,</span> <span class="mf">4.</span><span class="p">]]])</span>
<span class="gp">>>> </span><span class="n">t</span><span class="o">.</span><span class="n">to_sparse</span><span class="p">(</span><span class="n">sparse_dim</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="go">tensor(indices=tensor([[0, 1],</span>
<span class="go"> [1, 1]]),</span>
<span class="go"> values=tensor([[1., 2.],</span>
<span class="go"> [3., 4.]]),</span>
<span class="go"> size=(2, 2, 2), nnz=2, layout=torch.sparse_coo)</span>
</pre></div>
</div>
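The split between sparse and dense dimensions can be inspected with sparse_dim() and dense_dim(); continuing the example above:

>>> s = t.to_sparse(sparse_dim=2)
>>> s.sparse_dim()
2
>>> s.dense_dim()
1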
<div class="section" id="operator-overview">
<h2>Operator overview<a class="headerlink" href="#operator-overview" title="Permalink to this heading">¶</a></h2>
Fundamentally, operations on Tensors with sparse storage formats behave the same as operations on Tensors with strided (or other) storage formats. The particularities of storage, that is the physical layout of the data, influence the performance of an operation but should not influence the semantics.

We are actively increasing operator coverage for sparse tensors. Users should not yet expect the same level of support as for dense Tensors. See our operator documentation for a list.
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">b</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">([[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">]])</span>
<span class="gp">>>> </span><span class="n">b_s</span> <span class="o">=</span> <span class="n">b</span><span class="o">.</span><span class="n">to_sparse_csr</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">b_s</span><span class="o">.</span><span class="n">cos</span><span class="p">()</span>
<span class="gt">Traceback (most recent call last):</span>
File <span class="nb">"<stdin>"</span>, line <span class="m">1</span>, in <span class="n"><module></span>
<span class="gr">RuntimeError</span>: <span class="n">unsupported tensor layout: SparseCsr</span>
<span class="gp">>>> </span><span class="n">b_s</span><span class="o">.</span><span class="n">sin</span><span class="p">()</span>
<span class="go">tensor(crow_indices=tensor([0, 3, 6]),</span>
<span class="go"> col_indices=tensor([2, 3, 4, 0, 1, 3]),</span>
<span class="go"> values=tensor([ 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794]),</span>
<span class="go"> size=(2, 6), nnz=6, layout=torch.sparse_csr)</span>
</pre></div>
</div>
As shown in the example above, we don't support non-zero-preserving unary operators such as cos. The output of a non-zero-preserving unary operation will not be able to take advantage of sparse storage formats to the same extent as the input, and can potentially result in a catastrophic increase in memory. We instead rely on the user to explicitly convert to a dense Tensor first and then run the operation.
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">b_s</span><span class="o">.</span><span class="n">to_dense</span><span class="p">()</span><span class="o">.</span><span class="n">cos</span><span class="p">()</span>
<span class="go">tensor([[ 1.0000, -0.4161],</span>
<span class="go"> [-0.9900, 1.0000]])</span>
</pre></div>
</div>
We are aware that some users want to ignore compressed zeros for operations such as cos instead of preserving the exact semantics of the operation. For this we can point to torch.masked and its MaskedTensor, which is in turn also backed and powered by sparse storage formats and kernels.
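A minimal sketch of that approach (torch.masked is itself a prototype; the masked_tensor factory takes the data and a boolean mask of the same shape):

>>> from torch.masked import masked_tensor
>>> data = torch.tensor([[0., 2.], [3., 0.]])
>>> mt = masked_tensor(data, data != 0)
>>> mt.cos()   # cos is computed only for the masked-in (specified) elements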
Also note that, for now, the user doesn't have a choice of the output layout. For example, adding a sparse Tensor to a regular strided Tensor results in a strided Tensor. Some users might prefer for this to stay a sparse layout, because they know the result will still be sufficiently sparse.
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="o">.</span><span class="n">to_sparse</span><span class="p">()</span>
<span class="go">tensor([[0., 3.],</span>
<span class="go"> [3., 0.]])</span>
</pre></div>
</div>
We acknowledge that access to kernels that can efficiently produce different output layouts can be very useful. A subsequent operation might significantly benefit from receiving a particular layout. We are working on an API to control the result layout, and recognize that it is an important feature for planning a more optimal path of execution for any given model.
<div class="section" id="sparse-semi-structured-tensors">
<span id="sparse-semi-structured-docs"></span><h2>Sparse Semi-Structured Tensors<a class="headerlink" href="#sparse-semi-structured-tensors" title="Permalink to this heading">¶</a></h2>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Sparse semi-structured tensors are currently a prototype feature and subject to change. Please feel free to open an issue to report a bug or if you have feedback to share.</p>
</div>
Semi-structured sparsity is a sparse data layout that was first introduced in NVIDIA's Ampere architecture. It is also referred to as fine-grained structured sparsity or 2:4 structured sparsity.

This sparse layout stores n elements out of every 2n elements, with n being determined by the width of the Tensor's data type (dtype). The most frequently used dtype is float16, where n = 2, thus the term "2:4 structured sparsity".
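Concretely, for a float16 tensor this means every group of four contiguous elements along a row contains at most two non-zeros; an illustrative check:

>>> w = torch.tensor([[1., 2., 0., 0., 0., 5., 0., 6.]])
>>> (w.view(-1, 4) != 0).sum(dim=1)   # at most 2 specified elements per group of 4
tensor([2, 2])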
Semi-structured sparsity is explained in greater detail in this NVIDIA blog post: https://developer.nvidia.com/blog/exploiting-ampere-structured-sparsity-with-cusparselt
In PyTorch, semi-structured sparsity is implemented via a Tensor subclass. By subclassing, we can override __torch_dispatch__, allowing us to use faster sparse kernels when performing matrix multiplication. We can also store the tensor in its compressed form inside the subclass to reduce memory overhead.
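A minimal construction sketch (assumes a CUDA device with hardware support for semi-structured sparsity, e.g. an Ampere-class GPU; the shape and 2:4 pattern below are illustrative):

>>> import torch
>>> from torch.sparse import to_sparse_semi_structured
>>> A = torch.Tensor([0, 0, 1, 1]).tile((128, 32)).half().cuda()  # 128x128, 2:4 pattern in every row
>>> A_sparse = to_sparse_semi_structured(A)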
In this compressed form, the sparse tensor is stored by retaining only the specified elements and some metadata, which encodes the mask.
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The specified elements and metadata mask of a semi-structured sparse tensor are stored together in a single
flat compressed tensor. They are appended to each other to form a contiguous chunk of memory.</p>
<p>compressed tensor = [ specified elements of original tensor | metadata_mask ]</p>
<p>For an original tensor of size <cite>(r, c)</cite> we expect the first <cite>m * k // 2</cite> elements to be the kept elements
and the rest of the tensor is metadata.</p>
<p>In order to make it easier for the user to view the specified elements
and mask, one can use <code class="docutils literal notranslate"><span class="pre">.indices()</span></code> and <code class="docutils literal notranslate"><span class="pre">.values()</span></code> to access the mask and specified elements respectively.</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">.values()</span></code> returns the specified elements in a tensor of size <cite>(r, c//2)</cite> and with the same dtype as the dense matrix.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">.indices()</span></code> returns the metadata_mask in a tensor of size <cite>(r, c//2 )</cite> and with element type <code class="docutils literal notranslate"><span class="pre">torch.int16</span></code> if dtype is torch.float16 or torch.bfloat16, and element type <code class="docutils literal notranslate"><span class="pre">torch.int32</span></code> if dtype is torch.int8.</p></li>
</ul>
</div>
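Continuing the construction sketch above, the two halves of the compressed tensor can be inspected like this (the shapes in the comments follow the note; the exact metadata layout is an implementation detail):

>>> A_sparse.values().shape    # specified elements: (r, c//2), i.e. torch.Size([128, 64])
>>> A_sparse.indices().shape   # metadata mask, also (r, c//2) per the note above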
For 2:4 sparse tensors, the metadata overhead is minor: just 2 bits per specified element.
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>It’s important to note that <code class="docutils literal notranslate"><span class="pre">torch.float32</span></code> is only supported for 1:2 sparsity. Therefore, it does not follow the same formula as above.</p>
</div>
Here, we break down how to calculate the compression ratio (size sparse / size dense) of a 2:4 sparse tensor.

Let (r, c) = tensor.shape and e = bitwidth(tensor.dtype), so e = 16 for torch.float16 and torch.bfloat16, and e = 8 for torch.int8.
<div class="math">
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><msub><mi>M</mi><mrow><mi>d</mi><mi>e</mi><mi>n</mi><mi>s</mi><mi>e</mi></mrow></msub><mo>=</mo><mi>r</mi><mo>×</mo><mi>c</mi><mo>×</mo><mi>e</mi><mspace linebreak="newline"></mspace><msub><mi>M</mi><mrow><mi>s</mi><mi>p</mi><mi>a</mi><mi>r</mi><mi>s</mi><mi>e</mi></mrow></msub><mo>=</mo><msub><mi>M</mi><mrow><mi>s</mi><mi>p</mi><mi>e</mi><mi>c</mi><mi>i</mi><mi>f</mi><mi>i</mi><mi>e</mi><mi>d</mi></mrow></msub><mo>+</mo><msub><mi>M</mi><mrow><mi>m</mi><mi>e</mi><mi>t</mi><mi>a</mi><mi>d</mi><mi>a</mi><mi>t</mi><mi>a</mi></mrow></msub><mo>=</mo><mi>r</mi><mo>×</mo><mfrac><mi>c</mi><mn>2</mn></mfrac><mo>×</mo><mi>e</mi><mo>+</mo><mi>r</mi><mo>×</mo><mfrac><mi>c</mi><mn>2</mn></mfrac><mo>×</mo><mn>2</mn><mo>=</mo><mfrac><mrow><mi>r</mi><mi>c</mi><mi>e</mi></mrow><mn>2</mn></mfrac><mo>+</mo><mi>r</mi><mi>c</mi><mo>=</mo><mi>r</mi><mi>c</mi><mi>e</mi><mo stretchy="false">(</mo><mfrac><mn>1</mn><mn>2</mn></mfrac><mo>+</mo><mfrac><mn>1</mn><mi>e</mi></mfrac><mo stretchy="false">)</mo></mrow><annotation encoding="application/x-tex">M_{dense} = r \times c \times e \\
M_{sparse} = M_{specified} + M_{metadata} = r \times \frac{c}{2} \times e + r \times \frac{c}{2} \times 2 = \frac{rce}{2} + rc =rce(\frac{1}{2} +\frac{1}{e})
</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:-0.109em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight">d</span><span class="mord mathnormal mtight">e</span><span class="mord mathnormal mtight">n</span><span class="mord mathnormal mtight">se</span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6667em;vertical-align:-0.0833em;"></span><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6667em;vertical-align:-0.0833em;"></span><span class="mord mathnormal">c</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal">e</span></span><span class="mspace newline"></span><span class="base"><span class="strut" style="height:0.9694em;vertical-align:-0.2861em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:-0.109em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight">s</span><span class="mord mathnormal mtight">p</span><span class="mord mathnormal mtight">a</span><span class="mord mathnormal mtight">rse</span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.2861em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.9694em;vertical-align:-0.2861em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:-0.109em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight">s</span><span class="mord mathnormal mtight">p</span><span class="mord mathnormal mtight">ec</span><span class="mord mathnormal mtight">i</span><span class="mord mathnormal 
mtight" style="margin-right:0.10764em;">f</span><span class="mord mathnormal mtight">i</span><span class="mord mathnormal mtight">e</span><span class="mord mathnormal mtight">d</span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.2861em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:-0.109em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight">m</span><span class="mord mathnormal mtight">e</span><span class="mord mathnormal mtight">t</span><span class="mord mathnormal mtight">a</span><span class="mord mathnormal mtight">d</span><span class="mord mathnormal mtight">a</span><span class="mord mathnormal mtight">t</span><span class="mord mathnormal mtight">a</span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6667em;vertical-align:-0.0833em;"></span><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1.7936em;vertical-align:-0.686em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.1076em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">c</span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6667em;vertical-align:-0.0833em;"></span><span class="mord mathnormal">e</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6667em;vertical-align:-0.0833em;"></span><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="mspace" 
style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1.7936em;vertical-align:-0.686em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.1076em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">c</span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">2</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.7936em;vertical-align:-0.686em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.1076em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">rce</span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal">rc</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.0074em;vertical-align:-0.686em;"></span><span class="mord mathnormal">rce</span><span class="mopen">(</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3214em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span 
class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:2.0074em;vertical-align:-0.686em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3214em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">e</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose">)</span></span></span></span></span></div><p>Using these calculations, we can determine the total memory footprint for both the original dense and the new sparse representation.</p>
<p>This gives us a simple formula for the compression ratio, which depends only on the bitwidth of the tensor datatype.</p>
<div class="math">
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>C</mi><mo>=</mo><mfrac><msub><mi>M</mi><mrow><mi>s</mi><mi>p</mi><mi>a</mi><mi>r</mi><mi>s</mi><mi>e</mi></mrow></msub><msub><mi>M</mi><mrow><mi>d</mi><mi>e</mi><mi>n</mi><mi>s</mi><mi>e</mi></mrow></msub></mfrac><mo>=</mo><mfrac><mn>1</mn><mn>2</mn></mfrac><mo>+</mo><mfrac><mn>1</mn><mi>e</mi></mfrac></mrow><annotation encoding="application/x-tex">C = \frac{M_{sparse}}{M_{dense}} = \frac{1}{2} + \frac{1}{e}
</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.1963em;vertical-align:-0.836em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:-0.109em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight">d</span><span class="mord mathnormal mtight">e</span><span class="mord mathnormal mtight">n</span><span class="mord mathnormal mtight">se</span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:-0.109em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight">s</span><span class="mord mathnormal mtight">p</span><span class="mord mathnormal mtight">a</span><span class="mord mathnormal mtight">rse</span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.2861em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.836em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.0074em;vertical-align:-0.686em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3214em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span 
class="mord">1</span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:2.0074em;vertical-align:-0.686em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3214em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">e</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s"></span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></span></div><p>By using this formula, we find that the compression ratio is 56.25% for <code class="docutils literal notranslate"><span class="pre">torch.float16</span></code> or <code class="docutils literal notranslate"><span class="pre">torch.bfloat16</span></code>, and 62.5% for <code class="docutils literal notranslate"><span class="pre">torch.int8</span></code>.</p>
<div class="section" id="constructing-sparse-semi-structured-tensors">
<h3>Constructing Sparse Semi-Structured Tensors<a class="headerlink" href="#constructing-sparse-semi-structured-tensors" title="Permalink to this heading">¶</a></h3>
<p>You can transform a dense tensor into a sparse semi-structured tensor with the <code>torch.to_sparse_semi_structured</code> function.</p>
<p>Note that only CUDA tensors are supported, since hardware support for semi-structured sparsity is currently limited to NVIDIA GPUs.</p>
<p>The following datatypes are supported for semi-structured sparsity. Note that each datatype has its own shape constraints and compression factor.</p>
<table class="colwidths-given docutils colwidths-auto align-default">
<colgroup>
<col style="width: 19%" />
<col style="width: 56%" />
<col style="width: 13%" />
<col style="width: 13%" />
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>PyTorch dtype</p></th>
<th class="head"><p>Shape Constraints</p></th>
<th class="head"><p>Compression Factor</p></th>
<th class="head"><p>Sparsity Pattern</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">torch.float16</span></code></p></td>
<td><p>Tensor must be 2D and (r, c) must both be a positive multiple of 64</p></td>
<td><p>9/16</p></td>
<td><p>2:4</p></td>
</tr>
<tr class="row-odd"><td><p><code class="docutils literal notranslate"><span class="pre">torch.bfloat16</span></code></p></td>
<td><p>Tensor must be 2D and (r, c) must both be a positive multiple of 64</p></td>
<td><p>9/16</p></td>
<td><p>2:4</p></td>
</tr>
<tr class="row-even"><td><p><code class="docutils literal notranslate"><span class="pre">torch.int8</span></code></p></td>
<td><p>Tensor must be 2D and (r, c) must both be a positive multiple of 128</p></td>
<td><p>10/16</p></td>
<td><p>2:4</p></td>
</tr>
</tbody>
</table>
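<p>As an illustration of these constraints, here is a small hypothetical helper (not part of the PyTorch API) that mirrors the checks in the table:</p>
<div class="highlight"><pre>
import torch

# Minimum row/column multiple per supported dtype, per the table above.
_MIN_MULTIPLE = {torch.float16: 64, torch.bfloat16: 64, torch.int8: 128}

def meets_semi_structured_constraints(t: torch.Tensor) -> bool:
    # Must be a 2D CUDA tensor of a supported dtype...
    if t.dim() != 2 or not t.is_cuda or t.dtype not in _MIN_MULTIPLE:
        return False
    # ...whose rows and columns are positive multiples of the dtype minimum.
    r, c = t.shape
    m = _MIN_MULTIPLE[t.dtype]
    return r > 0 and c > 0 and r % m == 0 and c % m == 0
</pre></div>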
<p>To construct a semi-structured sparse tensor, start by creating a regular dense tensor that adheres to a 2:4 (or semi-structured) sparse format.
To do this, we tile a small 1x4 strip to create a 128x128 dense float16 tensor.
Afterwards, we can call the <code>to_sparse_semi_structured</code> function to compress it for accelerated inference.</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="kn">from</span> <span class="nn">torch.sparse</span> <span class="kn">import</span> <span class="n">to_sparse_semi_structured</span>
<span class="gp">>>> </span><span class="n">A</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span><span class="o">.</span><span class="n">tile</span><span class="p">((</span><span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">))</span><span class="o">.</span><span class="n">half</span><span class="p">()</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
<span class="go">tensor([[0., 0., 1., ..., 0., 1., 1.],</span>
<span class="go"> [0., 0., 1., ..., 0., 1., 1.],</span>
<span class="go"> [0., 0., 1., ..., 0., 1., 1.],</span>
<span class="go"> ...,</span>
<span class="go"> [0., 0., 1., ..., 0., 1., 1.],</span>
<span class="go"> [0., 0., 1., ..., 0., 1., 1.],</span>
<span class="go"> [0., 0., 1., ..., 0., 1., 1.]], device='cuda:0', dtype=torch.float16)</span>
<span class="gp">>>> </span><span class="n">A_sparse</span> <span class="o">=</span> <span class="n">to_sparse_semi_structured</span><span class="p">(</span><span class="n">A</span><span class="p">)</span>
<span class="go">SparseSemiStructuredTensor(shape=torch.Size([128, 128]), transposed=False, values=tensor([[1., 1., 1., ..., 1., 1., 1.],</span>
<span class="go"> [1., 1., 1., ..., 1., 1., 1.],</span>
<span class="go"> [1., 1., 1., ..., 1., 1., 1.],</span>
<span class="go"> ...,</span>
<span class="go"> [1., 1., 1., ..., 1., 1., 1.],</span>
<span class="go"> [1., 1., 1., ..., 1., 1., 1.],</span>
<span class="go"> [1., 1., 1., ..., 1., 1., 1.]], device='cuda:0', dtype=torch.float16), metadata=tensor([[-4370, -4370, -4370, ..., -4370, -4370, -4370],</span>
<span class="go"> [-4370, -4370, -4370, ..., -4370, -4370, -4370],</span>
<span class="go"> [-4370, -4370, -4370, ..., -4370, -4370, -4370],</span>
<span class="go"> ...,</span>
<span class="go"> [-4370, -4370, -4370, ..., -4370, -4370, -4370],</span>
<span class="go"> [-4370, -4370, -4370, ..., -4370, -4370, -4370],</span>
<span class="go"> [-4370, -4370, -4370, ..., -4370, -4370, -4370]], device='cuda:0',</span>
<span class="go">dtype=torch.int16))</span>
</pre></div>
</div>
</div>
<div class="section" id="sparse-semi-structured-tensor-operations">
<h3>Sparse Semi-Structured Tensor Operations<a class="headerlink" href="#sparse-semi-structured-tensor-operations" title="Permalink to this heading">¶</a></h3>
<p>Currently, the following operations are supported for semi-structured sparse tensors:</p>
<ul class="simple">
<li><p>torch.addmm(bias, dense, sparse.t())</p></li>
<li><p>torch.mm(dense, sparse)</p></li>
<li><p>torch.mm(sparse, dense)</p></li>
<li><p>aten.linear.default(dense, sparse, bias)</p></li>
<li><p>aten.t.default(sparse)</p></li>
<li><p>aten.t.detach(sparse)</p></li>
</ul>
<p>To use these ops, pass the output of <code>to_sparse_semi_structured(tensor)</code> in place of <code>tensor</code> once the tensor's zeros follow a semi-structured sparse pattern, like this:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">a</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span><span class="o">.</span><span class="n">tile</span><span class="p">((</span><span class="mi">64</span><span class="p">,</span> <span class="mi">16</span><span class="p">))</span><span class="o">.</span><span class="n">half</span><span class="p">()</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">b</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span><span class="o">.</span><span class="n">half</span><span class="p">()</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">c</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">mm</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">a_sparse</span> <span class="o">=</span> <span class="n">to_sparse_semi_structured</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">torch</span><span class="o">.</span><span class="n">allclose</span><span class="p">(</span><span class="n">c</span><span class="p">,</span> <span class="n">torch</span><span class="o">.</span><span class="n">mm</span><span class="p">(</span><span class="n">a_sparse</span><span class="p">,</span> <span class="n">b</span><span class="p">))</span>
<span class="go">True</span>
</pre></div>
</div>
</div>
<div class="section" id="accelerating-nn-linear-with-semi-structured-sparsity">
<h3>Accelerating nn.Linear with semi-structured sparsity<a class="headerlink" href="#accelerating-nn-linear-with-semi-structured-sparsity" title="Permalink to this heading">¶</a></h3>
<p>With just a few lines of code, you can accelerate the linear layers in your model if their weights are already semi-structured sparse:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="nb">input</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span><span class="o">.</span><span class="n">half</span><span class="p">()</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">mask</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span><span class="o">.</span><span class="n">tile</span><span class="p">((</span><span class="mi">64</span><span class="p">,</span> <span class="mi">16</span><span class="p">))</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span><span class="o">.</span><span class="n">bool</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">linear</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span><span class="o">.</span><span class="n">half</span><span class="p">()</span><span class="o">.</span><span class="n">cuda</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">linear</span><span class="o">.</span><span class="n">weight</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Parameter</span><span class="p">(</span><span class="n">to_sparse_semi_structured</span><span class="p">(</span><span class="n">linear</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">masked_fill</span><span class="p">(</span><span class="o">~</span><span class="n">mask</span><span class="p">,</span> <span class="mi">0</span><span class="p">)))</span>
</pre></div>
</div>
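<p>To sanity-check such a swap, one approach is to compute a reference output with the masked dense weight first, then compare it against the sparse layer's output. A minimal sketch, assuming a fresh <code>linear</code> plus the <code>input</code> and <code>mask</code> from above:</p>
<div class="highlight"><pre>
# Sketch only: assumes `input` and `mask` from the example above and a
# freshly created dense layer.
linear = nn.Linear(64, 64).half().cuda()
dense_weight = linear.weight.detach().masked_fill(~mask, 0)
ref = torch.nn.functional.linear(input, dense_weight, linear.bias)
# Swap in the compressed weight and compare.
linear.weight = nn.Parameter(to_sparse_semi_structured(dense_weight))
out = linear(input)
assert torch.allclose(ref, out, atol=1e-2)  # loose tolerance for fp16
</pre></div>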
</div>
</div>
<div class="section" id="sparse-coo-tensors">
<span id="sparse-coo-docs"></span><h2>Sparse COO tensors<a class="headerlink" href="#sparse-coo-tensors" title="Permalink to this heading">¶</a></h2>
<p>PyTorch implements the so-called Coordinate format, or COO
format, as one of the storage formats for implementing sparse
tensors. In COO format, the specified elements are stored as tuples
of element indices and the corresponding values. In particular,</p>
<blockquote>
<div><ul class="simple">
<li><p>the indices of specified elements are collected in an <code>indices</code>
tensor of size <code>(ndim, nse)</code> with element type
<code>torch.int64</code>,</p></li>
<li><p>the corresponding values are collected in a <code>values</code> tensor of
size <code>(nse,)</code> with an arbitrary integer or floating-point
element type,</p></li>
</ul>
</div></blockquote>
<p>where <code class="docutils literal notranslate"><span class="pre">ndim</span></code> is the dimensionality of the tensor and <code class="docutils literal notranslate"><span class="pre">nse</span></code> is the
number of specified elements.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The memory consumption of a sparse COO tensor is at least <code class="docutils literal notranslate"><span class="pre">(ndim</span> <span class="pre">*</span>
<span class="pre">8</span> <span class="pre">+</span> <span class="pre"><size</span> <span class="pre">of</span> <span class="pre">element</span> <span class="pre">type</span> <span class="pre">in</span> <span class="pre">bytes>)</span> <span class="pre">*</span> <span class="pre">nse</span></code> bytes (plus a constant
overhead from storing other tensor data).</p>
<p>The memory consumption of a strided tensor is at least
<code class="docutils literal notranslate"><span class="pre">product(<tensor</span> <span class="pre">shape>)</span> <span class="pre">*</span> <span class="pre"><size</span> <span class="pre">of</span> <span class="pre">element</span> <span class="pre">type</span> <span class="pre">in</span> <span class="pre">bytes></span></code>.</p>
<p>For example, the memory consumption of a 10 000 x 10 000 tensor
with 100 000 non-zero 32-bit floating point numbers is at least
<code class="docutils literal notranslate"><span class="pre">(2</span> <span class="pre">*</span> <span class="pre">8</span> <span class="pre">+</span> <span class="pre">4)</span> <span class="pre">*</span> <span class="pre">100</span> <span class="pre">000</span> <span class="pre">=</span> <span class="pre">2</span> <span class="pre">000</span> <span class="pre">000</span></code> bytes when using COO tensor
layout and <code class="docutils literal notranslate"><span class="pre">10</span> <span class="pre">000</span> <span class="pre">*</span> <span class="pre">10</span> <span class="pre">000</span> <span class="pre">*</span> <span class="pre">4</span> <span class="pre">=</span> <span class="pre">400</span> <span class="pre">000</span> <span class="pre">000</span></code> bytes when using
the default strided tensor layout. Notice the 200 fold memory
saving from using the COO storage format.</p>
</div>
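<p>The arithmetic in this note is easy to reproduce; a minimal sketch:</p>
<div class="highlight"><pre>
# Memory for the 10 000 x 10 000 example with 100 000 specified fp32 values.
ndim, nse, elem_bytes = 2, 100_000, 4
coo_bytes = (ndim * 8 + elem_bytes) * nse      # 2_000_000
strided_bytes = 10_000 * 10_000 * elem_bytes   # 400_000_000
print(strided_bytes // coo_bytes)              # 200
</pre></div>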
<div class="section" id="construction">
<h3>Construction<a class="headerlink" href="#construction" title="Permalink to this heading">¶</a></h3>
<p>A sparse COO tensor can be constructed by providing the two tensors of
indices and values, as well as the size of the sparse tensor (when it
cannot be inferred from the indices and values tensors) to a function
<a class="reference internal" href="generated/torch.sparse_coo_tensor.html#torch.sparse_coo_tensor" title="torch.sparse_coo_tensor"><code class="xref py py-func docutils literal notranslate"><span class="pre">torch.sparse_coo_tensor()</span></code></a>.</p>
<p>Suppose we want to define a sparse tensor with the entry 3 at location
(0, 2), entry 4 at location (1, 0), and entry 5 at location (1, 2).
Unspecified elements are assumed to have the same value, the fill value,
which is zero by default. We would then write:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">i</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="go"> [2, 0, 2]]</span>
<span class="gp">>>> </span><span class="n">v</span> <span class="o">=</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">]</span>
<span class="gp">>>> </span><span class="n">s</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">v</span><span class="p">,</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">s</span>
<span class="go">tensor(indices=tensor([[0, 1, 1],</span>
<span class="go"> [2, 0, 2]]),</span>
<span class="go"> values=tensor([3, 4, 5]),</span>
<span class="go"> size=(2, 3), nnz=3, layout=torch.sparse_coo)</span>
<span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">to_dense</span><span class="p">()</span>
<span class="go">tensor([[0, 0, 3],</span>
<span class="go"> [4, 0, 5]])</span>
</pre></div>
</div>
<p>Note that the input <code class="docutils literal notranslate"><span class="pre">i</span></code> is NOT a list of index tuples. If you want
to write your indices this way, you should transpose before passing them to
the sparse constructor:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">i</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">],</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">]]</span>
<span class="gp">>>> </span><span class="n">v</span> <span class="o">=</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span> <span class="p">]</span>
<span class="gp">>>> </span><span class="n">s</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="nb">zip</span><span class="p">(</span><span class="o">*</span><span class="n">i</span><span class="p">)),</span> <span class="n">v</span><span class="p">,</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="c1"># Or another equivalent formulation to get s</span>
<span class="gp">>>> </span><span class="n">s</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">(</span><span class="n">i</span><span class="p">)</span><span class="o">.</span><span class="n">t</span><span class="p">(),</span> <span class="n">v</span><span class="p">,</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">i</span><span class="o">.</span><span class="n">t</span><span class="p">(),</span> <span class="n">v</span><span class="p">,</span> <span class="n">torch</span><span class="o">.</span><span class="n">Size</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">]))</span><span class="o">.</span><span class="n">to_dense</span><span class="p">()</span>
<span class="go">tensor([[0, 0, 3],</span>
<span class="go"> [4, 0, 5]])</span>
</pre></div>
</div>
<p>An empty sparse COO tensor can be constructed by specifying its size
only:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span>
<span class="go">tensor(indices=tensor([], size=(2, 0)),</span>
<span class="go"> values=tensor([], size=(0,)),</span>
<span class="go"> size=(2, 3), nnz=0, layout=torch.sparse_coo)</span>
</pre></div>
</div>
</div>
<div class="section" id="sparse-hybrid-coo-tensors">
<span id="sparse-hybrid-coo-docs"></span><h3>Sparse hybrid COO tensors<a class="headerlink" href="#sparse-hybrid-coo-tensors" title="Permalink to this heading">¶</a></h3>
<p>PyTorch implements an extension of sparse tensors with scalar values
to sparse tensors with (contiguous) tensor values. Such tensors are
called hybrid tensors.</p>
<p>The PyTorch hybrid COO tensor extends the sparse COO tensor by allowing
the <code>values</code> tensor to be a multi-dimensional tensor, so that we
have:</p>
<blockquote>
<div><ul class="simple">
<li><p>the indices of specified elements are collected in an <code>indices</code>
tensor of size <code>(sparse_dims, nse)</code> with element type
<code>torch.int64</code>,</p></li>
<li><p>the corresponding (tensor) values are collected in a <code>values</code>
tensor of size <code>(nse, dense_dims)</code> with an arbitrary integer
or floating-point element type.</p></li>
</ul>
</div></blockquote>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>We use an (M + K)-dimensional tensor to denote an N-dimensional sparse
hybrid tensor, where M and K are the numbers of sparse and dense
dimensions, respectively, such that M + K == N holds.</p>
</div>
<p>Suppose we want to create a (2 + 1)-dimensional tensor with the entry
[3, 4] at location (0, 2), entry [5, 6] at location (1, 0), and entry
[7, 8] at location (1, 2). We would write</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">i</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="go"> [2, 0, 2]]</span>
<span class="gp">>>> </span><span class="n">v</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">],</span> <span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">6</span><span class="p">],</span> <span class="p">[</span><span class="mi">7</span><span class="p">,</span> <span class="mi">8</span><span class="p">]]</span>
<span class="gp">>>> </span><span class="n">s</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">v</span><span class="p">,</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">s</span>
<span class="go">tensor(indices=tensor([[0, 1, 1],</span>
<span class="go"> [2, 0, 2]]),</span>
<span class="go"> values=tensor([[3, 4],</span>
<span class="go"> [5, 6],</span>
<span class="go"> [7, 8]]),</span>
<span class="go"> size=(2, 3, 2), nnz=3, layout=torch.sparse_coo)</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">to_dense</span><span class="p">()</span>
<span class="go">tensor([[[0, 0],</span>
<span class="go"> [0, 0],</span>
<span class="go"> [3, 4]],</span>
<span class="go"> [[5, 6],</span>
<span class="go"> [0, 0],</span>
<span class="go"> [7, 8]]])</span>
</pre></div>
</div>
<p>In general, if <code class="docutils literal notranslate"><span class="pre">s</span></code> is a sparse COO tensor and <code class="docutils literal notranslate"><span class="pre">M</span> <span class="pre">=</span>
<span class="pre">s.sparse_dim()</span></code>, <code class="docutils literal notranslate"><span class="pre">K</span> <span class="pre">=</span> <span class="pre">s.dense_dim()</span></code>, then we have the following
invariants:</p>
<blockquote>
<div><ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">M</span> <span class="pre">+</span> <span class="pre">K</span> <span class="pre">==</span> <span class="pre">len(s.shape)</span> <span class="pre">==</span> <span class="pre">s.ndim</span></code> - dimensionality of a tensor
is the sum of the number of sparse and dense dimensions,</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">s.indices().shape</span> <span class="pre">==</span> <span class="pre">(M,</span> <span class="pre">nse)</span></code> - sparse indices are stored
explicitly,</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">s.values().shape</span> <span class="pre">==</span> <span class="pre">(nse,)</span> <span class="pre">+</span> <span class="pre">s.shape[M</span> <span class="pre">:</span> <span class="pre">M</span> <span class="pre">+</span> <span class="pre">K]</span></code> - the values
of a hybrid tensor are K-dimensional tensors,</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">s.values().layout</span> <span class="pre">==</span> <span class="pre">torch.strided</span></code> - values are stored as
strided tensors.</p></li>
</ul>
</div></blockquote>
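<p>These invariants can be verified directly on the (2 + 1)-dimensional tensor <code>s</code> constructed above; a minimal sketch (the tensor is coalesced first, because <code>indices()</code> and <code>values()</code> require a coalesced tensor, as discussed below):</p>
<div class="highlight"><pre>
# Verify the invariants on the hybrid tensor `s` from the example above.
sc = s.coalesce()
M, K = sc.sparse_dim(), sc.dense_dim()   # M=2, K=1
nse = sc.values().shape[0]               # 3
assert M + K == sc.ndim == len(sc.shape)
assert tuple(sc.indices().shape) == (M, nse)
assert tuple(sc.values().shape) == (nse,) + tuple(sc.shape[M:M + K])
assert sc.values().layout == torch.strided
</pre></div>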
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Dense dimensions always follow sparse dimensions, that is, mixing
of dense and sparse dimensions is not supported.</p>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>To be sure that a constructed sparse tensor has consistent indices,
values, and size, the invariant checks can be enabled per tensor
creation via the <code>check_invariants=True</code> keyword argument, or
globally using the <a class="reference internal" href="generated/torch.sparse.check_sparse_tensor_invariants.html#torch.sparse.check_sparse_tensor_invariants" title="torch.sparse.check_sparse_tensor_invariants"><code class="xref py py-class docutils literal notranslate"><span class="pre">torch.sparse.check_sparse_tensor_invariants</span></code></a>
context manager. By default, the sparse tensor invariant
checks are disabled.</p>
</div>
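<p>For example, the checks can be enabled for a single construction call, or for a whole region of code:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre>
>>> # per-call opt-in
>>> s = torch.sparse_coo_tensor(i, v, (2, 3, 2), check_invariants=True)
>>> # or enabled for everything inside the block
>>> with torch.sparse.check_sparse_tensor_invariants():
...     s = torch.sparse_coo_tensor(i, v, (2, 3, 2))
</pre></div>
</div>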
</div>
<div class="section" id="uncoalesced-sparse-coo-tensors">
<span id="sparse-uncoalesced-coo-docs"></span><h3>Uncoalesced sparse COO tensors<a class="headerlink" href="#uncoalesced-sparse-coo-tensors" title="Permalink to this heading">¶</a></h3>
<p>The PyTorch sparse COO tensor format permits <em>uncoalesced</em> sparse tensors,
where there may be duplicate coordinates in the indices; in this case,
the interpretation is that the value at that index is the sum of all
duplicate value entries. For example, one can specify multiple values,
<code>3</code> and <code>4</code>, for the same index <code>1</code>, which leads to a 1-D
uncoalesced tensor:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">i</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">]]</span>
<span class="gp">>>> </span><span class="n">v</span> <span class="o">=</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">]</span>
<span class="gp">>>> </span><span class="n">s</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">v</span><span class="p">,</span> <span class="p">(</span><span class="mi">3</span><span class="p">,))</span>
<span class="gp">>>> </span><span class="n">s</span>
<span class="go">tensor(indices=tensor([[1, 1]]),</span>
<span class="go"> values=tensor( [3, 4]),</span>
<span class="go"> size=(3,), nnz=2, layout=torch.sparse_coo)</span>
</pre></div>
</div>
<p>while the coalescing process will accumulate the multi-valued elements
into a single value using summation:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">coalesce</span><span class="p">()</span>
<span class="go">tensor(indices=tensor([[1]]),</span>
<span class="go"> values=tensor([7]),</span>
<span class="go"> size=(3,), nnz=1, layout=torch.sparse_coo)</span>
</pre></div>
</div>
<p>In general, the output of <a class="reference internal" href="generated/torch.Tensor.coalesce.html#torch.Tensor.coalesce" title="torch.Tensor.coalesce"><code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.Tensor.coalesce()</span></code></a> method is a
sparse tensor with the following properties:</p>
<ul class="simple">
<li><p>the indices of specified tensor elements are unique,</p></li>
<li><p>the indices are sorted in lexicographical order,</p></li>
<li><p><a class="reference internal" href="generated/torch.Tensor.is_coalesced.html#torch.Tensor.is_coalesced" title="torch.Tensor.is_coalesced"><code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.Tensor.is_coalesced()</span></code></a> returns <code class="docutils literal notranslate"><span class="pre">True</span></code>.</p></li>
</ul>
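<p>A quick check of these properties on the tensor from the previous example:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre>
>>> sc = torch.sparse_coo_tensor([[1, 1]], [3, 4], (3,)).coalesce()
>>> sc.is_coalesced()
True
>>> sc.indices()   # unique and sorted
tensor([[1]])
>>> sc.values()
tensor([7])
</pre></div>
</div>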
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>For the most part, you shouldn’t have to care whether a sparse
tensor is coalesced or not, as most operations work identically
on coalesced and uncoalesced sparse tensors.</p>
<p>However, some operations can be implemented more efficiently on
uncoalesced tensors, and some on coalesced tensors.</p>
<p>For instance, addition of sparse COO tensors is implemented by
simply concatenating the indices and values tensors:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">a</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">([[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">]],</span> <span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">6</span><span class="p">],</span> <span class="p">(</span><span class="mi">2</span><span class="p">,))</span>
<span class="gp">>>> </span><span class="n">b</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">([[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">]],</span> <span class="p">[</span><span class="mi">7</span><span class="p">,</span> <span class="mi">8</span><span class="p">],</span> <span class="p">(</span><span class="mi">2</span><span class="p">,))</span>
<span class="gp">>>> </span><span class="n">a</span> <span class="o">+</span> <span class="n">b</span>
<span class="go">tensor(indices=tensor([[0, 0, 1, 1]]),</span>
<span class="go"> values=tensor([7, 8, 5, 6]),</span>
<span class="go"> size=(2,), nnz=4, layout=torch.sparse_coo)</span>
</pre></div>
</div>
<p>If you repeatedly perform an operation that can produce duplicate
entries (e.g., <a class="reference internal" href="generated/torch.Tensor.add.html#torch.Tensor.add" title="torch.Tensor.add"><code class="xref py py-func docutils literal notranslate"><span class="pre">torch.Tensor.add()</span></code></a>), you should occasionally
coalesce your sparse tensors to prevent them from growing too large.</p>
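<p>For instance, a minimal sketch of such a loop (here <code>updates</code> stands for any stream of same-shaped sparse COO tensors; it is an assumption, not a PyTorch API):</p>
<div class="highlight"><pre>
# Accumulate sparse updates, coalescing periodically so the duplicate
# entries produced by addition do not grow without bound.
total = torch.sparse_coo_tensor(size=(2,))
for step, update in enumerate(updates):
    total = total + update
    if step % 100 == 99:
        total = total.coalesce()
total = total.coalesce()
</pre></div>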
<p>On the other hand, the lexicographical ordering of indices can be
advantageous for implementing algorithms that involve many element
selection operations, such as slicing or matrix products.</p>
</div>
</div>
<div class="section" id="working-with-sparse-coo-tensors">
<h3>Working with sparse COO tensors<a class="headerlink" href="#working-with-sparse-coo-tensors" title="Permalink to this heading">¶</a></h3>
<p>Let’s consider the following example:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">i</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span>
<span class="go"> [2, 0, 2]]</span>
<span class="gp">>>> </span><span class="n">v</span> <span class="o">=</span> <span class="p">[[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">],</span> <span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">6</span><span class="p">],</span> <span class="p">[</span><span class="mi">7</span><span class="p">,</span> <span class="mi">8</span><span class="p">]]</span>
<span class="gp">>>> </span><span class="n">s</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo_tensor</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">v</span><span class="p">,</span> <span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
</pre></div>
</div>
<p>As mentioned above, a sparse COO tensor is a <a class="reference internal" href="tensors.html#torch.Tensor" title="torch.Tensor"><code class="xref py py-class docutils literal notranslate"><span class="pre">torch.Tensor</span></code></a>
instance; to distinguish it from <cite>Tensor</cite> instances that use
some other layout, one can use the <a class="reference internal" href="generated/torch.Tensor.is_sparse.html#torch.Tensor.is_sparse" title="torch.Tensor.is_sparse"><code class="xref py py-attr docutils literal notranslate"><span class="pre">torch.Tensor.is_sparse</span></code></a> or
<code class="xref py py-attr docutils literal notranslate"><span class="pre">torch.Tensor.layout</span></code> properties:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="nb">isinstance</span><span class="p">(</span><span class="n">s</span><span class="p">,</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">)</span>
<span class="go">True</span>
<span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">is_sparse</span>
<span class="go">True</span>
<span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">layout</span> <span class="o">==</span> <span class="n">torch</span><span class="o">.</span><span class="n">sparse_coo</span>
<span class="go">True</span>
</pre></div>
</div>
<p>The number of sparse and dense dimensions can be acquired using
methods <a class="reference internal" href="generated/torch.Tensor.sparse_dim.html#torch.Tensor.sparse_dim" title="torch.Tensor.sparse_dim"><code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.Tensor.sparse_dim()</span></code></a> and
<a class="reference internal" href="generated/torch.Tensor.dense_dim.html#torch.Tensor.dense_dim" title="torch.Tensor.dense_dim"><code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.Tensor.dense_dim()</span></code></a>, respectively. For instance:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">sparse_dim</span><span class="p">(),</span> <span class="n">s</span><span class="o">.</span><span class="n">dense_dim</span><span class="p">()</span>
<span class="go">(2, 1)</span>
</pre></div>
</div>
<p>If <code class="docutils literal notranslate"><span class="pre">s</span></code> is a sparse COO tensor then its COO format data can be
acquired using methods <a class="reference internal" href="generated/torch.Tensor.indices.html#torch.Tensor.indices" title="torch.Tensor.indices"><code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.Tensor.indices()</span></code></a> and
<a class="reference internal" href="generated/torch.Tensor.values.html#torch.Tensor.values" title="torch.Tensor.values"><code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.Tensor.values()</span></code></a>.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Currently, one can acquire the COO format data only when the tensor
instance is coalesced:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="n">s</span><span class="o">.</span><span class="n">indices</span><span class="p">()</span>