PyTorch Learning Collection

Nov. 4, 2019, 8:33 p.m.

  • contiguous()

    It is best to call contiguous() before view(), i.e. x.contiguous().view(...),
    because view() requires the tensor's memory to be a single contiguous block.
    view() only works on a contiguous tensor: if transpose(), permute(), etc. were called before view(), call contiguous() first to obtain a contiguous copy (see the sketch below).
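
A minimal sketch of the failure mode and the fix (the shapes are only illustrative):

import torch

x = torch.randn(3, 4)
y = x.transpose(0, 1)        # shape (4, 3); shares storage, no longer contiguous

# y.view(12) would raise a RuntimeError because the memory is not one block
z = y.contiguous().view(12)  # contiguous() copies the data into a fresh block
print(z.is_contiguous())     # True

In newer PyTorch versions, reshape() performs this copy automatically when needed.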

  • index_select()

    index_select(x, 1, indices)
    1: the dimension to index along, here the columns;
    indices: a LongTensor of the index positions to select

It performs an index lookup. A few examples below should make it clear:

import torch


x = torch.linspace(1, 12, steps=12).view(3,4)
print(x)
'''
tensor([[  1.,   2.,   3.,   4.],
        [  5.,   6.,   7.,   8.],
        [  9.,  10.,  11.,  12.]])
'''

indices = torch.LongTensor([0, 2])
y = torch.index_select(x, 0, indices)
print(y)
'''
tensor([[  1.,   2.,   3.,   4.],
        [  9.,  10.,  11.,  12.]])
'''

z = torch.index_select(x, 1, indices)
print(z)
'''
tensor([[  1.,   3.],
        [  5.,   7.],
        [  9.,  11.]])
'''

z = torch.index_select(y, 1, indices)
print(z)
'''
tensor([[  1.,   3.],
        [  9.,  11.]])
'''
  • Converting labels to one-hot

>>> v = torch.Tensor([[1],[2],[3]])
>>> v
tensor([[1.],
        [2.],
        [3.]])
>>> v.size(0)
3
>>> n=v.size(0)
>>> one_hot = torch.zeros(n,10).long()
>>> one_hot.scatter_(dim=1, index=v.long(), src=torch.ones(n, 10).long())
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]])
>>> one_hot
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]])
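
As a side note, recent PyTorch versions also ship a built-in helper, torch.nn.functional.one_hot, which does the same thing in one call:

>>> import torch.nn.functional as F
>>> F.one_hot(torch.tensor([1, 2, 3]), num_classes=10)
tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]])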
  • torch.device()

    torch.device('cpu')
    torch.device('cuda')
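
A minimal usage sketch that picks the GPU when one is available (the tensor here is just a placeholder):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.randn(2, 3).to(device)  # move a tensor to the chosen device
# model.to(device) moves a module's parameters the same way
print(x.device)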

  • tensor vs. numpy

    tensor to numpy: b = a.numpy()
    numpy to tensor: b = torch.from_numpy(a)
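
Both conversions share the underlying memory on the CPU, so an in-place change to one is visible in the other; a CUDA tensor must be brought back with .cpu() first. A quick sketch:

import numpy as np
import torch

a = torch.ones(3)
b = a.numpy()            # b shares memory with a
a.add_(1)
print(b)                 # [2. 2. 2.] -- b changed as well

c = np.zeros(3)
d = torch.from_numpy(c)  # d also shares memory with c
# For a tensor on the GPU: a.cpu().numpy()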

  • Updating the learning rate

    torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1)

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 60
>>> # lr = 0.0005   if 60 <= epoch < 90
>>> # ...
>>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()

torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=-1)

>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.05     if epoch < 30
>>> # lr = 0.005    if 30 <= epoch < 80
>>> # lr = 0.0005   if epoch >= 80
>>> scheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()

torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1)
torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1)
torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, verbose=False, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)
torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=-1)
torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, last_epoch=-1)
torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1)
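
These are stepped per epoch (or per batch, for the cyclic ones) like the examples above; ReduceLROnPlateau is the exception, since its step() expects the monitored metric. A minimal sketch (val_loss stands in for your validation metric):

>>> scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)
>>> for epoch in range(100):
>>>     train(...)
>>>     val_loss = validate(...)
>>>     scheduler.step(val_loss)  # pass the metric being monitored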

  • Transposing

    In newer PyTorch versions a 2D tensor can be transposed directly with .T
    (this depends on the torch version, not on Python 3.6 vs. 3.7). Otherwise, the two common functions are:
    Tensor.permute(a, b, c, d): reorders all dimensions of a tensor of any rank at once
    torch.transpose(tensor, a, b): swaps exactly two dimensions; it works on tensors of any rank, not just 2D matrices (see the sketch below)
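
A short sketch of the three forms on illustrative shapes:

import torch

m = torch.randn(2, 3)
print(m.T.shape)                       # torch.Size([3, 2])  (newer PyTorch)
print(torch.transpose(m, 0, 1).shape)  # torch.Size([3, 2])  swap two dims

t = torch.randn(2, 3, 4, 5)
print(t.permute(3, 2, 1, 0).shape)     # torch.Size([5, 4, 3, 2])  reorder all dims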

  • Copying

    There are two ways to copy a tensor: clone and detach.
    clone: returns an identical tensor backed by newly allocated memory; the copy stays in the computation graph, so gradients still flow back through it.
    detach: returns an identical tensor that shares memory with the original but is detached from the computation graph, so it takes no part in gradient computation (see the sketch below).
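
A quick sketch of the difference, on a small leaf tensor:

import torch

a = torch.ones(3, requires_grad=True)

b = a.clone()    # new memory, still in the computation graph
c = a.detach()   # shared memory, cut off from the graph

b.sum().backward()
print(a.grad)    # tensor([1., 1., 1.]) -- the gradient flows back through clone

c[0] = 5.0       # writing through c also changes a (shared storage)
print(a)         # tensor([5., 1., 1.], requires_grad=True)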
