Using the transformers tokenizer

Tokenizing and encoding a single sentence

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

sequence = 'A Titan RTX has 24GB of VRAM'
print('Original sequence: ', sequence)

# Split the sentence into WordPiece tokens
tokenized_sequence = tokenizer.tokenize(sequence)
print('Tokenized sequence: ', tokenized_sequence)

# Calling the tokenizer directly adds [CLS]/[SEP] and maps tokens to vocabulary ids
encodings = tokenizer(sequence)
encoded_sequence = encodings['input_ids']
print('Encoded sequence: ', encoded_sequence)

# Decoding maps the ids back to text, including the special tokens
decoded_encodings = tokenizer.decode(encoded_sequence)
print('Decoded sequence: ', decoded_encodings)

Result

Original sequence:  A Titan RTX has 24GB of VRAM
Tokenized sequence: ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
Encoded sequence: [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
Decoded sequence: [CLS] A Titan RTX has 24GB of VRAM [SEP]
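
The encoded ids are simply the WordPiece tokens mapped through the vocabulary, wrapped in the special [CLS] (id 101) and [SEP] (id 102) tokens. A minimal sketch to confirm this, reusing the same bert-base-cased tokenizer loaded above:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
sequence = 'A Titan RTX has 24GB of VRAM'

# Map the WordPiece tokens to vocabulary ids manually
token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sequence))

# tokenizer(sequence) should equal the manual ids wrapped in [CLS] ... [SEP]
full_ids = tokenizer(sequence)['input_ids']
assert full_ids == [tokenizer.cls_token_id] + token_ids + [tokenizer.sep_token_id]
print(tokenizer.cls_token_id, tokenizer.sep_token_id)  # 101 102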

Tokenizing and encoding multiple sentences requires padding

# Transformer's tokenizer - attention_mask
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."
print("Sequence a: ", sequence_a)
print("Sequence b: ", sequence_b)

encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
print("A's encoding length={}. \nB's encoding length={}".format(len(encoded_sequence_a), len(encoded_sequence_b)))

# padding=True pads the shorter sequence to the length of the longest one in the batch;
# the attention_mask marks real tokens with 1 and padding tokens with 0
padded_sequence_ab = tokenizer([sequence_a, sequence_b], padding=True)
print("Padded sequence(A,B):", padded_sequence_ab["input_ids"])
print("Attention mask(A,B):", padded_sequence_ab["attention_mask"])

Output

Sequence a:  This is a short sequence.
Sequence b: This is a rather long sequence. It is at least longer than the sequence A.
A's encoding length=8.
B's encoding length=19
Padded sequence(A,B): [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
Attention mask(A,B): [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
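
In practice the padded batch is usually converted straight into tensors for the model. A minimal sketch, assuming PyTorch is installed; the max_length value here is an arbitrary cap chosen only for illustration:

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

# padding pads to the longest sequence in the batch, truncation caps overly long ones,
# and return_tensors='pt' yields PyTorch tensors ready to feed into a BERT model
batch = tokenizer([sequence_a, sequence_b],
                  padding=True,
                  truncation=True,
                  max_length=32,   # arbitrary cap for illustration
                  return_tensors='pt')
print(batch["input_ids"].shape)       # torch.Size([2, 19]) for these two sentences
print(batch["attention_mask"].shape)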

Combining two sentences into one input: token_type_ids indicate which segment each token belongs to

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."
print("Sequence a: ", sequence_a)
print("Sequence b: ", sequence_b)

# Transformer's tokenizer - token type id
print(tokenizer.tokenize(sequence_a + sequence_b))

# Passing two sentences produces [CLS] A [SEP] B [SEP];
# token_type_ids is 0 for the first segment and 1 for the second
encodings_ab = tokenizer(sequence_a, sequence_b)
print("Encoded sequence(AB):", encodings_ab["input_ids"])
decoded_ab = tokenizer.decode(encodings_ab["input_ids"])
print("Decoded sequence(AB):", decoded_ab)
print("Token type ids(AB):", encodings_ab["token_type_ids"])

Result

Sequence a:  This is a short sequence.
Sequence b: This is a rather long sequence. It is at least longer than the sequence A.
['This', 'is', 'a', 'short', 'sequence', '.', 'This', 'is', 'a', 'rather', 'long', 'sequence', '.', 'It', 'is', 'at', 'least', 'longer', 'than', 'the', 'sequence', 'A', '.']
Encoded sequence(AB): [101, 1188, 1110, 170, 1603, 4954, 119, 102, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]
Decoded sequence(AB): [CLS] This is a short sequence. [SEP] This is a rather long sequence. It is at least longer than the sequence A. [SEP]
Token type ids(AB): [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

We can see that "[CLS] This is a short sequence. [SEP]" belongs to the first segment, and the remaining tokens belong to the second segment.
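
As a sanity check, the token_type_ids can be used to split the pair encoding back into its two segments. A minimal sketch, reusing the encodings_ab produced above:

# Group ids by their token_type_id (0 = first segment, 1 = second segment)
first_ids = [i for i, t in zip(encodings_ab["input_ids"], encodings_ab["token_type_ids"]) if t == 0]
second_ids = [i for i, t in zip(encodings_ab["input_ids"], encodings_ab["token_type_ids"]) if t == 1]

print(tokenizer.decode(first_ids))   # [CLS] This is a short sequence. [SEP]
print(tokenizer.decode(second_ids))  # This is a rather long sequence. It is at least longer than the sequence A. [SEP]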

https://blog.csdn.net/yosemite1998/article/details/122306758 Pytorch Transformer Tokenizer: a practical guide to common inputs and outputs

202311221847