Transformer Notes

These notes are mainly for my own review, so they only record the key points. Reference: http://nlp.seas.harvard.edu/2018/04/03/attention.html#prelims

Model Architecture

Most neural sequence models have an encoder-decoder structure. The encoder maps an input sequence of symbol representations $(x_1, x_2, \dots, x_n)$ to a sequence of continuous representations $z = (z_1, z_2, \dots, z_n)$. Given $z$, the decoder then generates the output sequence $(y_1, y_2, \dots, y_m)$ one element at a time. At each step the model is auto-regressive: when generating the next symbol, it consumes the previously generated symbols as additional input.

# Imports used by all of the code in these notes.
import copy
import math

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


class EncoderDecoder(nn.Module):
    """
    A standard Encoder-Decoder architecture. Base for this and many
    other models.
    """
    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    def forward(self, src, tgt, src_mask, tgt_mask):
        "Take in and process masked src and target sequences."
        return self.decode(self.encode(src, src_mask), src_mask,
                           tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)

class Generator(nn.Module):
    "Define standard linear + softmax generation step."
    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return F.log_softmax(self.proj(x), dim=-1)

The overall Transformer architecture is shown below:

[Figure: the Transformer architecture, with the encoder stack on the left and the decoder stack on the right]

Encoder and Decoder Stack

Encoder

The Transformer encoder is a stack of $N=6$ identical layers (blocks); in the architecture figure it is the left half.

def clones(module, N):
    "Produce N identical layers."
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])

class Encoder(nn.Module):
    "Core encoder is a stack of N layers"
    # The encoder is a stack of N (=6 in the paper) identical layers.
    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        "Pass the input (and mask) through each layer in turn."
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)

Each layer (block) consists of two sub-layers: the first is a multi-head self-attention mechanism, and the second is a simple position-wise fully connected feed-forward network.

class EncoderLayer(nn.Module):
    "Encoder is made up of self-attn and feed forward (defined below)"
    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        "Follow Figure 1 (left) for connections."
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)

As the architecture figure shows, a residual connection is applied around each sub-layer, followed by layer normalization. (Note that in the code below, for simplicity, the normalization is applied first, before the sub-layer, rather than last.)

class SublayerConnection(nn.Module):
    """
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    """
    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        "Apply residual connection to any sublayer with the same size."
        return x + self.dropout(sublayer(self.norm(x)))

class LayerNorm(nn.Module):
    "Construct a layernorm module (See citation for details)."
    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2

Decoder

The decoder is likewise composed of a stack of $N=6$ identical layers.

class Decoder(nn.Module):
    "Generic N layer decoder with masking."
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)

In addition to the two sub-layers found in each encoder layer, each decoder layer inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. As in the encoder, residual connections are used around each sub-layer, followed by layer normalization.

class DecoderLayer(nn.Module):
    "Decoder is made of self-attn, src-attn, and feed forward (defined below)"
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        "Follow Figure 1 (right) for connections."
        m = memory
        # Two masks: tgt_mask for the target side, src_mask for the source side.
        # Self-attention over the target sequence: tgt_mask hides padding and
        # future positions, so each step only sees the words predicted so far.
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        # Attention over the encoder output (memory): src_mask is the same
        # padding mask that was used on the encoder side.
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        return self.sublayer[2](x, self.feed_forward)

The self-attention sub-layer in the decoder also has to be modified to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the prediction for position $i$ can depend only on the known outputs at positions less than $i$.

def subsequent_mask(size):
    "Mask out subsequent positions."
    attn_shape = (1, size, size)
    subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
    return torch.from_numpy(subsequent_mask) == 0

The effect of this mask can be visualized as follows (each tgt word (row) is shown together with the positions (columns) it is allowed to look at):

[Figure: the subsequent mask; each tgt word (row) may attend only to the columns at or before its own position]
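
As a quick sanity check of my own (not from the reference), printing the mask for a small size shows the pattern directly:

# Sanity check: position i may attend to positions 0..i (lower-triangular True).
print(subsequent_mask(5))
# On a recent PyTorch this prints a (1, 5, 5) boolean tensor:
# [[[ True, False, False, False, False],
#   [ True,  True, False, False, False],
#   [ True,  True,  True, False, False],
#   [ True,  True,  True,  True, False],
#   [ True,  True,  True,  True,  True]]]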

Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

Scaled Dot-Product Attention

[Figure: Scaled Dot-Product Attention]

This attention mechanism is called "Scaled Dot-Product Attention". The corresponding formula is:

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$

The implementation is as follows:

def attention(query, key, value, mask=None, dropout=None):
    "Compute 'Scaled Dot Product Attention'"
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) \
             / math.sqrt(d_k)
    if mask is not None:
        # Masked positions get a large negative score, so softmax gives them
        # (almost) zero weight -- the same trick used for padding masks.
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = F.softmax(scores, dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn

As of the time the reference blog was published, the two most commonly used attention functions were additive attention (which computes the compatibility function with a feed-forward network with a single hidden layer) and dot-product (multiplicative) attention (which is identical to scaled dot-product attention except for the $\frac{1}{\sqrt{d_k}}$ scaling factor). Comparing the two, the latter is faster and more space-efficient in practice.

The reason for dividing by $\sqrt{d_k}$: for large $d_k$ the dot products can grow large in magnitude, pushing the softmax into regions where its gradients are extremely small. Assuming the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$, the dot product $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$ has mean $0$ and variance $d_k$, so scaling by $\frac{1}{\sqrt{d_k}}$ brings the variance back to $1$.

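A small numerical sketch of my own to illustrate the point: with unit-variance random queries and keys, the standard deviation of the raw dot products grows like $\sqrt{d_k}$, while the scaled scores stay near 1.

import math
import torch

torch.manual_seed(0)
for d_k in (16, 64, 256, 1024):
    q = torch.randn(10000, d_k)      # components ~ N(0, 1)
    k = torch.randn(10000, d_k)
    dots = (q * k).sum(-1)           # raw dot products q . k
    # std of the raw dot products is roughly sqrt(d_k);
    # after dividing by sqrt(d_k) it is roughly 1.
    print(d_k, dots.std().item(), (dots / math.sqrt(d_k)).std().item())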

Multi-head Attention

[Figure: Multi-Head Attention]

$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)W^O, \quad \mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$

Here the projection matrices are $W_i^Q \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{model} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{model} \times d_v}$, and $W^O \in \mathbb{R}^{hd_v \times d_{model}}$. The Transformer uses $h=8$ heads, with $d_k = d_v = d_{model}/h = 64$. Because the dimensionality of each head is reduced, the total computational cost is similar to that of single-head attention with full dimensionality.

class MultiHeadedAttention(nn.Module):
    def __init__(self, h, d_model, dropout=0.1):
        "Take in model size and number of heads."
        super(MultiHeadedAttention, self).__init__()
        assert d_model % h == 0
        # We assume d_v always equals d_k
        self.d_k = d_model // h
        self.h = h
        self.linears = clones(nn.Linear(d_model, d_model), 4)
        self.attn = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, query, key, value, mask=None):
        "Implements Figure 2"
        if mask is not None:
            # Same mask applied to all h heads.
            mask = mask.unsqueeze(1)
        nbatches = query.size(0)

        # 1) Do all the linear projections in batch from d_model => h x d_k
        query, key, value = \
            [l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
             for l, x in zip(self.linears, (query, key, value))]

        # 2) Apply attention on all the projected vectors in batch.
        x, self.attn = attention(query, key, value, mask=mask,
                                 dropout=self.dropout)

        # 3) "Concat" using a view and apply a final linear.
        x = x.transpose(1, 2).contiguous() \
             .view(nbatches, -1, self.h * self.d_k)
        return self.linears[-1](x)
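
A quick shape check (a usage sketch of my own, not from the reference blog): with $d_{model}=512$ and $h=8$ each head works in $d_k=64$ dimensions, and the output has the same shape as the query input.

# Shape check for MultiHeadedAttention (illustrative usage).
mha = MultiHeadedAttention(h=8, d_model=512)
x = torch.randn(2, 10, 512)      # (batch, seq_len, d_model)
mask = torch.ones(2, 1, 10)      # no padding: attend everywhere
out = mha(x, x, x, mask=mask)    # self-attention: query = key = value
print(out.shape)                 # torch.Size([2, 10, 512])
print(mha.attn.shape)            # torch.Size([2, 8, 10, 10]) -- per-head attention weights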

How Attention Is Used in the Model

  1. In the encoder-decoder attention layers, the queries come from the previous decoder layer, while the keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence, mimicking the typical encoder-decoder attention mechanism of sequence-to-sequence models.

  2. The encoder contains self-attention layers, in which the queries, keys, and values all come from the same place, namely the output of the previous encoder layer. Each position in the encoder can therefore attend to all positions in the previous encoder layer. The practical meaning of self-attention is that attention is computed within a sequence, to find the dependencies inside the sequence itself.

  3. Self-attention is also used in the decoder, allowing each position in the decoder to attend to all positions in the decoder up to and including that one. To preserve the decoder's auto-regressive property, leftward information flow must be prevented (training is done in parallel, so the model must not see future tokens); this is done by masking out (setting to $-\infty$) all values in the softmax input that correspond to illegal connections. See the figure below; it seems to make sense, and I will add Su Jianlin's (苏剑林) explanation later.

    [Figure: illustration of the mask used in the decoder self-attention]

Benefits of the Attention Mechanism

The advantage of an attention layer is that it captures global dependencies in one step, because it directly compares every pair of positions in the sequence (the price is that the computation becomes $O(n^2)$, although since it is pure matrix arithmetic this is usually not too serious). By contrast, an RNN has to recurse step by step to capture such dependencies, and a CNN has to stack layers to enlarge its receptive field; this is a clear advantage of the attention layer.

Position-wise Feed-Forward Network

Each layer also contains a fully connected feed-forward network (FFN), applied to each position separately and identically. Its effect is similar to a convolution with kernel size 1. The input and output dimensionality is $d_{model}=512$, and the inner hidden layer has dimensionality $d_{ff}=2048$.

class PositionwiseFeedForward(nn.Module):
    "Implements FFN equation."
    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.w_1 = nn.Linear(d_model, d_ff)
        self.w_2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w_2(self.dropout(F.relu(self.w_1(x))))
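
To make the "kernel size 1" analogy concrete, here is a small sketch of my own: two Conv1d layers with kernel_size=1, given the same weights, compute the same function as the two linear layers.

# Sketch: a position-wise FFN is equivalent to two 1x1 convolutions.
d_model, d_ff = 512, 2048
ffn = PositionwiseFeedForward(d_model, d_ff, dropout=0.0)

conv1 = nn.Conv1d(d_model, d_ff, kernel_size=1)
conv2 = nn.Conv1d(d_ff, d_model, kernel_size=1)
# Copy the linear weights into the conv kernels (Conv1d weight shape: out, in, 1).
conv1.weight.data = ffn.w_1.weight.data.unsqueeze(-1)
conv1.bias.data = ffn.w_1.bias.data
conv2.weight.data = ffn.w_2.weight.data.unsqueeze(-1)
conv2.bias.data = ffn.w_2.bias.data

x = torch.randn(2, 10, d_model)                     # (batch, seq_len, d_model)
y_linear = ffn(x)
y_conv = conv2(F.relu(conv1(x.transpose(1, 2)))).transpose(1, 2)
print(torch.allclose(y_linear, y_conv, atol=1e-4))  # True (up to float error)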

Embeddings and Softmax

class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        super(Embeddings, self).__init__()
        self.lut = nn.Embedding(vocab, d_model)
        self.d_model = d_model

    def forward(self, x):
        return self.lut(x) * math.sqrt(self.d_model)
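
The factor math.sqrt(self.d_model) comes from the paper: in the embedding layers, the weights are multiplied by $\sqrt{d_{model}}$. A minimal usage sketch of my own (the vocabulary size and batch shape are made up):

# Illustrative usage of Embeddings.
emb = Embeddings(d_model=512, vocab=1000)
tokens = torch.randint(0, 1000, (2, 10))   # (batch, seq_len) of token ids
print(emb(tokens).shape)                   # torch.Size([2, 10, 512])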

Positional Encoding

Since the model contains no recurrence and no convolution, in order to make use of the word order in a sentence we have to inject the position information somehow. The model therefore adds a positional encoding to the inputs of the encoder and the decoder.

The corresponding formulas are:

$PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model}})$, $PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model}})$

where $pos$ is the position and $i$ is the dimension.

class PositionalEncoding(nn.Module):
    "Implement the PE function."
    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        # Compute the positional encodings once in log space.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) *
                             -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer('pe', pe)

    def forward(self, x):
        x = x + Variable(self.pe[:, :x.size(1)],
                         requires_grad=False)
        return self.dropout(x)
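
The encodings can be visualized by adding them to a zero input and plotting a few dimensions (a sketch in the spirit of the reference notebook; each dimension is a sinusoid with a different wavelength):

# Plot a few dimensions of the positional encoding (dropout disabled).
pe = PositionalEncoding(d_model=20, dropout=0.0)
y = pe(torch.zeros(1, 100, 20))     # the encodings added to a zero input
plt.figure(figsize=(15, 5))
plt.plot(np.arange(100), y[0, :, 4:8].detach().numpy())
plt.legend(["dim %d" % p for p in [4, 5, 6, 7]])
plt.show()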

The Full Model

def make_model(src_vocab, tgt_vocab, N=6,
               d_model=512, d_ff=2048, h=8, dropout=0.1):
    "Helper: Construct a model from hyperparameters."
    c = copy.deepcopy
    attn = MultiHeadedAttention(h, d_model)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn),
                             c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
        Generator(d_model, tgt_vocab))

    # This was important from their code.
    # Initialize parameters with Glorot / fan_avg.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return model

tmp_model = make_model(10, 10, 2)
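
To illustrate the auto-regressive generation described at the beginning (the decoder feeds its own previous outputs back in as input), here is a greedy decoding sketch adapted from the reference notebook; with the untrained tmp_model above the output is of course meaningless.

def greedy_decode(model, src, src_mask, max_len, start_symbol):
    "Greedily pick the most probable next token, one position at a time."
    memory = model.encode(src, src_mask)
    ys = torch.ones(1, 1).fill_(start_symbol).type_as(src.data)
    for i in range(max_len - 1):
        out = model.decode(memory, src_mask,
                           Variable(ys),
                           Variable(subsequent_mask(ys.size(1)).type_as(src.data)))
        prob = model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.data[0]
        ys = torch.cat([ys,
                        torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=1)
    return ys

src = Variable(torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]))
src_mask = Variable(torch.ones(1, 1, 10))
print(greedy_decode(tmp_model, src, src_mask, max_len=10, start_symbol=1))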

Next comes the training part of the model, which is not covered here.

Optimizer

This wrapper pattern is worth learning. The learning-rate schedule from the paper is:

$lrate = d_{model}^{-0.5} \cdot \min(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5})$

i.e. the learning rate increases linearly for the first $warmup\_steps$ training steps and then decreases proportionally to the inverse square root of the step number.

class NoamOpt:
    "Optim wrapper that implements rate."
    def __init__(self, model_size, factor, warmup, optimizer):
        self.optimizer = optimizer
        self._step = 0
        self.warmup = warmup
        self.factor = factor
        self.model_size = model_size
        self._rate = 0

    def step(self):
        "Update parameters and rate"
        self._step += 1
        rate = self.rate()
        for p in self.optimizer.param_groups:
            p['lr'] = rate
        self._rate = rate
        self.optimizer.step()

    def rate(self, step=None):
        "Implement `lrate` above"
        if step is None:
            step = self._step
        return self.factor * \
            (self.model_size ** (-0.5) *
             min(step ** (-0.5), step * self.warmup ** (-1.5)))

def get_std_opt(model):
    return NoamOpt(model.src_embed[0].d_model, 2, 4000,
                   torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))

opts = [NoamOpt(512, 1, 4000, None),
        NoamOpt(512, 1, 8000, None),
        NoamOpt(256, 1, 4000, None)]
plt.plot(np.arange(1, 20000), [[opt.rate(i) for opt in opts] for i in range(1, 20000)])
plt.legend(["512:4000", "512:8000", "256:4000"])

[Figure: learning rate curves for the three (model_size, warmup) settings above]
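
A quick numerical check of my own: the schedule peaks exactly at step = warmup, where the two terms inside the min are equal.

# The rate grows until step == warmup, then decays like step ** (-0.5).
opt = NoamOpt(model_size=512, factor=1, warmup=4000, optimizer=None)
print(opt.rate(1000), opt.rate(4000), opt.rate(16000))
# ~1.75e-4  ~6.99e-4  ~3.49e-4  -- the maximum is at step == warmup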

Regularization

Label Smoothing

Label smoothing is used during training. This hurts perplexity, as the model learns to be more unsure, but it improves accuracy and BLEU score. The benefit of label smoothing is that it keeps the model from chasing exact (over-confident) probabilities without hurting its ability to learn the correct classification.

class LabelSmoothing(nn.Module):
    "Implement label smoothing."
    def __init__(self, size, padding_idx, smoothing=0.0):
        super(LabelSmoothing, self).__init__()
        self.criterion = nn.KLDivLoss(size_average=False)
        self.padding_idx = padding_idx
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing
        self.size = size
        self.true_dist = None

    def forward(self, x, target):
        assert x.size(1) == self.size
        true_dist = x.data.clone()
        # Fill the whole tensor with the smoothing mass, spread uniformly over
        # the (size - 2) classes that are neither the target nor the padding index.
        true_dist.fill_(self.smoothing / (self.size - 2))
        # Put the remaining confidence mass on the target class.
        true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
        true_dist[:, self.padding_idx] = 0
        mask = torch.nonzero(target.data == self.padding_idx)
        if mask.dim() > 0:
            # Zero out rows whose target is the padding token.
            true_dist.index_fill_(0, mask.squeeze(), 0.0)
        self.true_dist = true_dist
        return self.criterion(x, Variable(true_dist, requires_grad=False))
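
A small usage sketch with numbers of my own: with size=5, padding_idx=0 and smoothing=0.4, the smoothed target puts 0.6 on the true class, spreads 0.4/3 over the other non-padding classes, and zeroes the padding column.

# Illustrate the smoothed target distribution built by LabelSmoothing.
crit = LabelSmoothing(size=5, padding_idx=0, smoothing=0.4)
predict = torch.full((2, 5), 0.2)          # a flat predicted distribution
log_probs = predict.log()                  # KLDivLoss expects log-probabilities
target = torch.LongTensor([2, 1])          # true classes of the two rows
loss = crit(Variable(log_probs), Variable(target))
print(crit.true_dist)
# tensor([[0.0000, 0.1333, 0.6000, 0.1333, 0.1333],
#         [0.0000, 0.6000, 0.1333, 0.1333, 0.1333]])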
  • An example to help understand the scatter_ function

    x = torch.rand(2, 5)
    # 0.4319 0.6500 0.4080 0.8760 0.2355
    # 0.2609 0.4711 0.8486 0.8573 0.1029
    # The index LongTensor has the same shape as x: each index entry specifies the
    # destination of the corresponding element of x.
    # dim=0 means the index chooses the destination *row*.
    torch.zeros(3, 5).scatter_(0, torch.LongTensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
    # For example, index[0][2] == 2 means x[0][2] is written to row 2 (0-based),
    # i.e. position (2, 2) of the zeros(3, 5) tensor. The index tensor must therefore
    # have the same number of columns as x, and its largest value must be a valid
    # row index of zeros(3, 5). Result:
    # 0.4319 0.4711 0.8486 0.8760 0.2355
    # 0.0000 0.6500 0.0000 0.8573 0.0000
    # 0.2609 0.0000 0.4080 0.0000 0.1029
    Reference: https://www.cnblogs.com/dogecheng/p/11938009.html (includes the formula).

    scatter_() is often used to one-hot encode labels, for example:

    class_num = 10
    batch_size = 4
    label = torch.LongTensor(batch_size, 1).random_() % class_num
    #tensor([[6],
    # [0],
    # [3],
    # [2]])
    torch.zeros(batch_size, class_num).scatter_(1, label, 1)
    #tensor([[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
    # [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
    # [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
    # [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.]])

The plain (non-class) way to write label smoothing:

new_labels = (1.0 - label_smoothing) * one_hot_labels + label_smoothing / num_classes
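
A self-contained sketch of this one-liner (the tensor names are my own):

# Plain label smoothing applied to one-hot labels (illustrative).
num_classes, label_smoothing = 5, 0.1
labels = torch.LongTensor([2, 0, 4])
one_hot_labels = F.one_hot(labels, num_classes).float()
new_labels = (1.0 - label_smoothing) * one_hot_labels + label_smoothing / num_classes
print(new_labels)
# tensor([[0.0200, 0.0200, 0.9200, 0.0200, 0.0200],
#         [0.9200, 0.0200, 0.0200, 0.0200, 0.0200],
#         [0.0200, 0.0200, 0.0200, 0.0200, 0.9200]])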