Named Entity Recognition using Transformers

Author: Varun Singh
Date created: 2021/06/23
Last modified: 2024/04/05
Description: NER using the Transformers and data from CoNLL 2003 shared task.

ⓘ This example uses Keras 3

View in Colab • GitHub source
Introduction
Named Entity Recognition (NER) is the process of identifying named entities in text. Examples of named entities are: "Person", "Location", "Organization", "Dates" etc. NER is essentially a token classification task where every token is classified into one or more predetermined categories.

In this exercise, we will train a simple Transformer-based model to perform NER. We will be using the data from the CoNLL 2003 shared task. For more information about the dataset, please visit the dataset website. However, since obtaining this data requires an additional step of getting a free license, we will be using HuggingFace's datasets library, which contains a processed version of this dataset.
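To make the token-classification framing concrete, here is a tiny illustration (not part of the original example) using the first sentence of the CoNLL 2003 training data: each token receives a label.

# Illustration only: NER framed as per-token classification.
tokens = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."]
labels = ["B-ORG", "O", "B-MISC", "O", "O", "O", "B-MISC", "O", "O"]
for token, label in zip(tokens, labels):
    print(f"{token:>10} -> {label}")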
Install the open source datasets library from HuggingFace
We also download the script used to evaluate NER models.
!pip3 install datasets
!wget https://raw.githubusercontent.com/sighsmile/conlleval/master/conlleval.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133,
185.199.110.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com
(raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7502 (7.3K) [text/plain]
Saving to: ‘conlleval.py’
conlleval.py 100%[===================>] 7.33K --.-KB/s in 0s
2023-11-10 16:58:25 (217 MB/s) - ‘conlleval.py’ saved [7502/7502]
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import ops
import numpy as np
import tensorflow as tf
from keras import layers
from datasets import load_dataset
from collections import Counter
from conlleval import evaluate

We will be using the transformer implementation from this fantastic example.

Let's start by defining a TransformerBlock layer:

class TransformerBlock(layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
        super().__init__()
        self.att = keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=embed_dim
        )
        self.ffn = keras.Sequential(
            [
                keras.layers.Dense(ff_dim, activation="relu"),
                keras.layers.Dense(embed_dim),
            ]
        )
        self.layernorm1 = keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = keras.layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = keras.layers.Dropout(rate)
        self.dropout2 = keras.layers.Dropout(rate)

    def call(self, inputs, training=False):
        attn_output = self.att(inputs, inputs)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(inputs + attn_output)
        ffn_output = self.ffn(out1)
        ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)
Next, let's define a TokenAndPositionEmbedding layer:
class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim):
        super().__init__()
        self.token_emb = keras.layers.Embedding(
            input_dim=vocab_size, output_dim=embed_dim
        )
        self.pos_emb = keras.layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, inputs):
        maxlen = ops.shape(inputs)[-1]
        positions = ops.arange(start=0, stop=maxlen, step=1)
        position_embeddings = self.pos_emb(positions)
        token_embeddings = self.token_emb(inputs)
        return token_embeddings + position_embeddings
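As a quick shape sanity check (an illustrative sketch, not part of the original example), the embedding layer turns integer token IDs of shape (batch, seq_len) into vectors of shape (batch, seq_len, embed_dim), and the TransformerBlock keeps that shape:

# Illustrative shape check: chain the two layers on dummy token IDs.
demo_embed = TokenAndPositionEmbedding(maxlen=128, vocab_size=20000, embed_dim=32)
demo_block = TransformerBlock(embed_dim=32, num_heads=2, ff_dim=32)
demo_ids = ops.ones((4, 10), dtype="int32")  # (batch, seq_len)
demo_x = demo_embed(demo_ids)                # (4, 10, 32)
print(ops.shape(demo_block(demo_x)))         # still (4, 10, 32)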
Build the NER model class as a keras.Model subclass
class NERModel(keras.Model):
    def __init__(
        self, num_tags, vocab_size, maxlen=128, embed_dim=32, num_heads=2, ff_dim=32
    ):
        super().__init__()
        self.embedding_layer = TokenAndPositionEmbedding(maxlen, vocab_size, embed_dim)
        self.transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
        self.dropout1 = layers.Dropout(0.1)
        self.ff = layers.Dense(ff_dim, activation="relu")
        self.dropout2 = layers.Dropout(0.1)
        self.ff_final = layers.Dense(num_tags, activation="softmax")

    def call(self, inputs, training=False):
        x = self.embedding_layer(inputs)
        x = self.transformer_block(x)
        x = self.dropout1(x, training=training)
        x = self.ff(x)
        x = self.dropout2(x, training=training)
        x = self.ff_final(x)
        return x
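As a minimal smoke test (illustrative only, with made-up hyperparameters), the model maps a batch of token IDs to one softmax distribution over the tags per token, i.e. an output of shape (batch, seq_len, num_tags):

# Illustrative smoke test: one probability distribution over tags per token.
demo_model = NERModel(num_tags=10, vocab_size=20000)
demo_out = demo_model(ops.ones((2, 12), dtype="int32"))
print(ops.shape(demo_out))  # (2, 12, 10)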
Load the CoNLL 2003 dataset from the datasets library and process it
conll_data = load_dataset("conll2003")
We will export this data to a tab-separated file format which will be easy to read as a tf.data.Dataset object.
def export_to_file(export_file_path, data):
    with open(export_file_path, "w") as f:
        for record in data:
            ner_tags = record["ner_tags"]
            tokens = record["tokens"]
            if len(tokens) > 0:
                f.write(
                    str(len(tokens))
                    + "\t"
                    + "\t".join(tokens)
                    + "\t"
                    + "\t".join(map(str, ner_tags))
                    + "\n"
                )
os.mkdir("data")
export_to_file("./data/conll_train.txt", conll_data["train"])
export_to_file("./data/conll_val.txt", conll_data["validation"])
Make the NER label lookup table
NER labels are usually provided in IOB, IOB2 or IOBES formats. Check out this link for more information: Wikipedia
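To make the IOB2 scheme concrete, here is a small illustrative helper (not part of the original example) that groups a B- tag and the I- tags that follow it into a single entity span:

# Illustration of IOB2 (not from the original example): consecutive B-X / I-X
# tags are decoded into a single entity span.
def iob2_to_spans(tokens, tags):
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], [token]]
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(words)) for label, words in spans]


print(iob2_to_spans(["EU", "rejects", "German", "call"], ["B-ORG", "O", "B-MISC", "O"]))
# [('ORG', 'EU'), ('MISC', 'German')]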
Note that we start our label numbering from 1 since 0 will be reserved for padding. We have a total
of 10 labels: 9 from the NER dataset and one for padding.
def make_tag_lookup_table():
    iob_labels = ["B", "I"]
    ner_labels = ["PER", "ORG", "LOC", "MISC"]
    all_labels = [(label1, label2) for label2 in ner_labels for label1 in iob_labels]
    all_labels = ["-".join([a, b]) for a, b in all_labels]
    all_labels = ["[PAD]", "O"] + all_labels
    return dict(zip(range(0, len(all_labels) + 1), all_labels))


mapping = make_tag_lookup_table()
print(mapping)

{0: '[PAD]', 1: 'O', 2: 'B-PER', 3: 'I-PER', 4: 'B-ORG', 5: 'I-ORG', 6: 'B-LOC', 7: 'I-LOC', 8: 'B-MISC', 9: 'I-MISC'}

Get a list of all tokens in the training dataset. This will be used to create the vocabulary.
all_tokens = sum(conll_data["train"]["tokens"], [])
all_tokens_array = np.array(list(map(str.lower, all_tokens)))
counter = Counter(all_tokens_array)
print(len(counter))
num_tags = len(mapping)
vocab_size = 20000
# We only take (vocab_size - 2) most common words from the training data since
# the `StringLookup` class uses 2 additional tokens - one denoting an unknown
# token and another one denoting a masking token
vocabulary = [token for token, count in counter.most_common(vocab_size - 2)]

# The StringLookup class will convert tokens to token IDs
lookup_layer = keras.layers.StringLookup(vocabulary=vocabulary)
21009
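As a quick, illustrative check (not in the original example), the lookup layer maps lowercased tokens to integer IDs; tokens outside the vocabulary are mapped to the layer's reserved out-of-vocabulary index:

# Illustrative: in-vocabulary tokens get their own IDs, unseen tokens map to
# the StringLookup layer's out-of-vocabulary index.
print(lookup_layer(["eu", "german", "notarealword123"]))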
Create 2 new Dataset objects from the training and validation data
train_data = tf.data.TextLineDataset("./data/conll_train.txt")
val_data = tf.data.TextLineDataset("./data/conll_val.txt")
Print out one line to make sure it looks good. The first record in the line is the number of tokens. After that we will have all the tokens followed by all the NER tags.
print(list(train_data.take(1).as_numpy_iterator()))
[b'9\tEU\trejects\tGerman\tcall\tto\tboycott\tBritish\tlamb\t.\t3\t0\t7\t0\t0\t0\t7\t0\t
We will be using the following map function to transform the data in the dataset:
def map_record_to_training_data(record):
    record = tf.strings.split(record, sep="\t")
    length = tf.strings.to_number(record[0], out_type=tf.int32)
    tokens = record[1 : length + 1]
    tags = record[length + 1 :]
    tags = tf.strings.to_number(tags, out_type=tf.int64)
    tags += 1
    return tokens, tags


def lowercase_and_convert_to_ids(tokens):
    tokens = tf.strings.lower(tokens)
    return lookup_layer(tokens)


# We use `padded_batch` here because each record in the dataset has a
# different length.
batch_size = 32
train_dataset = (
    train_data.map(map_record_to_training_data)
    .map(lambda x, y: (lowercase_and_convert_to_ids(x), y))
    .padded_batch(batch_size)
)
val_dataset = (
    val_data.map(map_record_to_training_data)
    .map(lambda x, y: (lowercase_and_convert_to_ids(x), y))
    .padded_batch(batch_size)
)
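As a quick sanity check (illustrative, not in the original example), peeking at one batch shows that padded_batch pads tokens and tags to the longest sequence in that batch, using 0 as the padding value:

# Illustrative: every record in a batch is padded to the batch's longest sequence.
for demo_tokens, demo_tags in train_dataset.take(1):
    print(demo_tokens.shape, demo_tags.shape)  # e.g. (32, max_len_in_batch) for both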
ner_model = NERModel(num_tags, vocab_size, embed_dim=32, num_heads=4, ff_dim=64)
We will be using a custom loss function that will ignore the loss from padded tokens.
class CustomNonPaddingTokenLoss(keras.losses.Loss):
    def __init__(self, name="custom_ner_loss"):
        super().__init__(name=name)

    def call(self, y_true, y_pred):
        loss_fn = keras.losses.SparseCategoricalCrossentropy(
            from_logits=False, reduction=None
        )
        loss = loss_fn(y_true, y_pred)
        mask = ops.cast((y_true > 0), dtype="float32")
        loss = loss * mask
        return ops.sum(loss) / ops.sum(mask)
loss = CustomNonPaddingTokenLoss()
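Here is a tiny illustrative check (not part of the original example): with label 0 marking padding, padded positions contribute nothing, and the loss is averaged only over the real tokens.

# Illustrative: label 0 (padding) is masked out, so only the two real tokens count.
demo_y_true = np.array([[2, 1, 0, 0]])                   # last two positions are padding
demo_y_pred = np.full((1, 4, 10), 0.1, dtype="float32")  # uniform predictions over 10 tags
print(loss(demo_y_true, demo_y_pred))                    # ≈ -log(0.1) ≈ 2.30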
Compile and fit the model
tf.config.run_functions_eagerly(True)
ner_model.compile(optimizer="adam", loss=loss)
ner_model.fit(train_dataset, epochs=10)
def tokenize_and_convert_to_ids(text):
    tokens = text.split()
    return lowercase_and_convert_to_ids(tokens)


# Sample inference using the trained model
sample_input = tokenize_and_convert_to_ids(
    "eu rejects german call to boycott british lamb"
)
sample_input = ops.reshape(sample_input, shape=[1, -1])
print(sample_input)

output = ner_model.predict(sample_input)
prediction = np.argmax(output, axis=-1)[0]
prediction = [mapping[i] for i in prediction]

# eu -> B-ORG, german -> B-MISC, british -> B-MISC
print(prediction)
Epoch 1/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 300s 671ms/step - loss: 0.9260
Epoch 2/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.2909
Epoch 3/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.1589
Epoch 4/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.1176
Epoch 5/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0941
Epoch 6/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0747
Epoch 7/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0597
Epoch 8/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0534
Epoch 9/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0459
Epoch 10/10
439/439 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 0.0408
tf.Tensor([[ 988 10950 204 628 6 3938 215 5773]], shape=(1, 8),
dtype=int64)
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 600ms/step
['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O']
Metrics calculation
Here is a function to calculate the metrics. The function calculates F1 score for the overall NER
dataset as well as individual scores for each NER tag.
def calculate_metrics(dataset):
    all_true_tag_ids, all_predicted_tag_ids = [], []

    for x, y in dataset:
        output = ner_model.predict(x, verbose=0)
        predictions = ops.argmax(output, axis=-1)
        predictions = ops.reshape(predictions, [-1])

        true_tag_ids = ops.reshape(y, [-1])

        mask = (true_tag_ids > 0) & (predictions > 0)
        true_tag_ids = true_tag_ids[mask]
        predicted_tag_ids = predictions[mask]

        all_true_tag_ids.append(true_tag_ids)
        all_predicted_tag_ids.append(predicted_tag_ids)

    all_true_tag_ids = np.concatenate(all_true_tag_ids)
    all_predicted_tag_ids = np.concatenate(all_predicted_tag_ids)

    predicted_tags = [mapping[tag] for tag in all_predicted_tag_ids]
    real_tags = [mapping[tag] for tag in all_true_tag_ids]

    evaluate(real_tags, predicted_tags)
calculate_metrics(val_dataset)
processed 51362 tokens with 5942 phrases; found: 5659 phrases; correct: 3941.
accuracy: 64.49%; (non-O)
accuracy: 93.23%; precision: 69.64%; recall: 66.32%; FB1: 67.94
LOC: precision: 82.77%; recall: 79.26%; FB1: 80.98 1759
MISC: precision: 74.94%; recall: 68.11%; FB1: 71.36 838
ORG: precision: 55.94%; recall: 65.32%; FB1: 60.27 1566
PER: precision: 65.57%; recall: 53.26%; FB1: 58.78 1496
Conclusions
In this exercise, we created a simple transformer-based named entity recognition model. We trained it on the CoNLL 2003 shared task data and got an overall F1 score of around 70%. State-of-the-art NER models fine-tuned on pretrained models such as BERT or ELECTRA can easily get much higher F1 scores (between 90% and 95%) on this dataset, owing to the inherent knowledge of words gained during pretraining and the use of subword tokenization.
You can use the trained model hosted on Hugging Face Hub and try the demo on Hugging Face Spaces.