torch.mtia

Created On: Jul 11, 2023 | Last Updated: Dec 21, 2024

The MTIA backend is implemented out of tree; only the interfaces are defined here.

This package provides an interface for accessing the MTIA backend in Python.
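For example, device-agnostic code can guard MTIA usage as in this minimal sketch (the explicit init() call and the tensor shapes are illustrative; most APIs initialize lazily):

```python
import torch

# Use MTIA only when the backend reports a device.
if torch.mtia.is_available():
    torch.mtia.init()                 # optional explicit initialization
    print(f"{torch.mtia.device_count()} MTIA device(s) available")
    x = torch.ones(4, device="mtia")  # allocate on the current MTIA device
else:
    x = torch.ones(4)                 # fall back to CPU
```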

StreamContext

Context-manager that selects a given stream.

current_device

Return the index of the currently selected device.

current_stream

Return the currently selected Stream for a given device.

default_stream

Return the default Stream for a given device.
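For example, outside any stream context the current stream is the device's default stream; a minimal sketch (assumes an available MTIA device):

```python
import torch

dev = torch.mtia.current_device()
# With no stream context active, both calls refer to the same stream.
print(torch.mtia.current_stream(dev) == torch.mtia.default_stream(dev))
```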

device_count

Return the number of MTIA devices available.

init

Initialize PyTorch's MTIA state.

is_available

Return true if the MTIA device is available.

is_initialized

Return whether PyTorch's MTIA state has been initialized.

memory_stats

Return a dictionary of MTIA memory allocator statistics for a given device.
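A short sketch of inspecting the statistics; the exact keys in the returned dictionary are defined by the out-of-tree backend:

```python
import torch

# Dump the allocator statistics for the current device.
for key, value in torch.mtia.memory_stats(torch.mtia.current_device()).items():
    print(f"{key}: {value}")
```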

get_device_capability

Return capability of a given device as a tuple of (major version, minor version).
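For instance, to log the device generation (a sketch; the semantics of the version numbers are backend-defined):

```python
import torch

major, minor = torch.mtia.get_device_capability(torch.mtia.current_device())
print(f"MTIA device capability: {major}.{minor}")
```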

empty_cache

Empty the MTIA device cache.

record_memory_history

Enable or disable the memory profiler on the MTIA allocator.

snapshot

Return a dictionary of the MTIA memory allocator history.
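A hedged sketch of capturing allocator history; the arguments accepted by record_memory_history and the layout of the snapshot dictionary are backend-defined, so treat the calls below as illustrative:

```python
import torch

torch.mtia.record_memory_history()    # assumed: enables profiling with default settings
y = torch.randn(1024, device="mtia")  # perform some allocations worth recording
history = torch.mtia.snapshot()       # dictionary describing the allocator history
print(len(history))
```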

set_device

Set the current device.

set_stream

Set the current stream. This is a wrapper API to set the stream.

stream

Wrap around the Context-manager StreamContext that selects a given stream.
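Putting the stream APIs together, a minimal sketch of enqueuing work on a side stream (assumes an available MTIA device; the Stream() defaults are assumed to mirror the torch.cuda conventions):

```python
import torch

s = torch.mtia.Stream()              # new stream on the current device
a = torch.randn(128, device="mtia")

with torch.mtia.stream(s):           # ops in this block are enqueued on `s`
    b = a * 2

torch.mtia.synchronize()             # wait for all streams on the device to finish
```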

synchronize

Wait for all jobs in all streams on an MTIA device to complete.

device

Context-manager that changes the selected device.
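A short sketch of temporarily switching devices (assumes at least two MTIA devices):

```python
import torch

with torch.mtia.device(1):           # ops inside this block target device 1
    t = torch.zeros(8, device="mtia")
print(torch.mtia.current_device())   # previous device is restored on exit
```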

set_rng_state

Set the random number generator state.

get_rng_state

Return the random number generator state as a ByteTensor.
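For reproducibility, the generator state can be saved and restored, mirroring the torch.cuda conventions (a sketch):

```python
import torch

state = torch.mtia.get_rng_state()      # snapshot the generator state
first = torch.randn(3, device="mtia")
torch.mtia.set_rng_state(state)         # rewind the generator
second = torch.randn(3, device="mtia")
print(torch.equal(first, second))       # True: identical draws after restoring
```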

DeferredMtiaCallError

Streams and events

Event

Query and record Stream status to identify or control dependencies across Streams and to measure timing.

Stream

An in-order queue that executes its tasks asynchronously in first-in, first-out (FIFO) order.
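As an illustration, events can bracket work on the current stream to measure elapsed time (a sketch; assumes torch.mtia.Event supports timing the way torch.cuda.Event does):

```python
import torch

start = torch.mtia.Event(enable_timing=True)
end = torch.mtia.Event(enable_timing=True)

start.record()                             # mark the start on the current stream
x = torch.randn(1024, device="mtia").sum()
end.record()                               # mark the end

torch.mtia.synchronize()                   # ensure both events have completed
print(f"elapsed: {start.elapsed_time(end):.3f} ms")
```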