torch.from_file
- torch.from_file(filename, shared=None, size=0, *, dtype=None, layout=None, device=None, pin_memory=False)
Creates a CPU tensor with a storage backed by a memory-mapped file.
If `shared` is `True`, then memory is shared between processes and all changes are written to the file. If `shared` is `False`, then changes to the tensor do not affect the file. `size` is the number of elements in the tensor. If `shared` is `False`, then the file must contain at least `size * sizeof(dtype)` bytes. If `shared` is `True`, the file will be created if needed.
Note
Only CPU tensors can be mapped to files.
Note
For now, tensors with storages backed by a memory-mapped file cannot be created in pinned memory.
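Because `shared=True` maps the file with `MAP_SHARED`, in-place writes propagate to disk, and the backing file is created and sized as needed. A minimal sketch of that behavior (the temporary file path here is illustrative, not part of the API):

```python
import os
import tempfile

import torch

# Hypothetical scratch path for illustration; with shared=True,
# torch.from_file creates the file if it does not already exist.
path = os.path.join(tempfile.mkdtemp(), "shared.bin")

t = torch.from_file(path, shared=True, size=6, dtype=torch.float32)
t.fill_(1.0)  # in-place changes are written through to the mapped file

# The file now holds at least size * sizeof(dtype) = 6 * 4 bytes.
assert os.path.getsize(path) >= 6 * 4
```

Reloading the same file with another `torch.from_file` call would observe the written values, which is what makes this useful for sharing tensors between processes.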
- Parameters
filename (str) – file name to map
shared (bool) – whether to share memory (whether `MAP_SHARED` or `MAP_PRIVATE` is passed to the underlying mmap(2) call)
size (int) – number of elements in the tensor
- Keyword Arguments
dtype (`torch.dtype`, optional) – the desired data type of the returned tensor. Default: if `None`, uses a global default (see `torch.set_default_dtype()`).
layout (`torch.layout`, optional) – the desired layout of the returned tensor. Default: `torch.strided`.
device (`torch.device`, optional) – the desired device of the returned tensor. Default: if `None`, uses the current device for the default tensor type (see `torch.set_default_device()`). `device` will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
pin_memory (bool, optional) – if set, the returned tensor is allocated in pinned memory. Works only for CPU tensors. Default: `False`.
Example:
>>> t = torch.randn(2, 5, dtype=torch.float64)
>>> t.numpy().tofile('storage.pt')
>>> t_mapped = torch.from_file('storage.pt', shared=False, size=10, dtype=torch.float64)
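A sanity check of the semantics described above (a sketch, not part of the official example): the mapped tensor is one-dimensional with `size` elements, and with `shared=False` (`MAP_PRIVATE`) in-place edits are not written back to the file.

```python
import torch

# Write 10 float64 values to a raw binary file.
t = torch.randn(2, 5, dtype=torch.float64)
t.numpy().tofile('storage.pt')

t_mapped = torch.from_file('storage.pt', shared=False, size=10, dtype=torch.float64)
assert t_mapped.shape == (10,)             # the mapped tensor is flat
assert torch.equal(t_mapped, t.flatten())  # contents match the file

# shared=False maps with MAP_PRIVATE: edits stay private to this mapping.
t_mapped.zero_()
reloaded = torch.from_file('storage.pt', shared=False, size=10, dtype=torch.float64)
assert torch.equal(reloaded, t.flatten())  # file on disk is unchanged
```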