Refer to the code below to see how a word is split into tokens.
The code prints the token list for each word in a sentence.
..
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
example = "This is a tokenization example"
print('input sentence: ', example)
print('---')
print('tokens :')
print(tokenizer.encode(example, add_special_tokens=False))
print('---')
print('word and tokens :')
print({x: tokenizer.encode(x, add_special_tokens=False) for x in example.split()})
print('---')
# number the tokens consecutively, grouped by the word they came from
idx = 1
enc = [tokenizer.encode(x, add_special_tokens=False) for x in example.split()]
desired_output = []
for token_ids in enc:
    token_output = []
    for _ in token_ids:
        token_output.append(idx)
        idx += 1
    desired_output.append(token_output)
print('tokens in grouped list')
print(desired_output)
print('---')
..
input sentence:  This is a tokenization example
---
tokens :
[713, 16, 10, 19233, 1938, 1246]
---
word and tokens :
{'This': [713], 'is': [354], 'a': [102], 'tokenization': [46657, 1938], 'example': [46781]}
---
tokens in grouped list
[[1], [2], [3], [4, 5], [6]]
---
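The grouping loop above can be sketched as a small standalone helper that works on any list of per-word token ids, with no tokenizer needed (`group_token_positions` is a hypothetical name for this post, not a transformers API):

```python
def group_token_positions(per_word_tokens):
    """Assign consecutive 1-based positions to tokens, grouped by word."""
    positions = []
    idx = 1
    for token_ids in per_word_tokens:
        # one sublist per word, covering len(token_ids) consecutive positions
        positions.append(list(range(idx, idx + len(token_ids))))
        idx += len(token_ids)
    return positions

# per-word token ids from the run above
enc = [[713], [354], [102], [46657, 1938], [46781]]
print(group_token_positions(enc))  # → [[1], [2], [3], [4, 5], [6]]
```

With a fast tokenizer (the default for roberta-base), `tokenizer(example).word_ids()` also maps each token back to its word index, which gives the same word/token grouping directly.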
Thank you.
www.marearts.com