Tutorial: Development Set Baseline - TalentCLEF 2025 - Task A


In this notebook, we provide the development set baseline for TalentCLEF Task A. It covers downloading the Task A dataset, applying a
multilingual embedding model as a baseline to generate .trec run files, and compressing and uploading those files to the Codabench
platform.

TalentCLEF is an initiative to advance Natural Language Processing (NLP) in Human Capital Management (HCM). It aims to create a public
benchmark for model evaluation and promote collaboration to develop fair, multilingual, and flexible systems that improve Human Resources
(HR) practices across different industries.

The shared task's inaugural edition is part of the Conference and Labs of the Evaluation Forum (CLEF), which will be held in Madrid in
2025. If you are interested in participating, you can find the registration form here.

Imports
!pip install pandas numpy sentence-transformers codecarbon


import json
import pandas as pd
import numpy as np
from sentence_transformers import SentenceTransformer, util
from codecarbon import EmissionsTracker

Download Task A files


First, let's download the Task A dataset from Zenodo.

# Download
!wget https://zenodo.org/records/14879510/files/TaskA.zip


!unzip TaskA.zip -d taskA

Archive: TaskA.zip
creating: taskA/test/
creating: taskA/training/
creating: taskA/training/english/
inflating: taskA/training/english/taskA_training_en.tsv
creating: taskA/training/spanish/
inflating: taskA/training/spanish/taskA_training_es.tsv
creating: taskA/training/german/
inflating: taskA/training/german/taskA_training_de.tsv
creating: taskA/validation/
creating: taskA/validation/chinese/
inflating: taskA/validation/chinese/corpus_elements
inflating: taskA/validation/chinese/queries
inflating: taskA/validation/chinese/qrels.tsv
creating: taskA/validation/english/
inflating: taskA/validation/english/corpus_elements
inflating: taskA/validation/english/queries
inflating: taskA/validation/english/qrels.tsv
creating: taskA/validation/german/
inflating: taskA/validation/german/corpus_elements
inflating: taskA/validation/german/queries
inflating: taskA/validation/german/qrels.tsv
creating: taskA/validation/spanish/
inflating: taskA/validation/spanish/corpus_elements
inflating: taskA/validation/spanish/queries
inflating: taskA/validation/spanish/qrels.tsv
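
Before running the baseline, it can help to take a quick look at the validation files for one language. The sketch below is illustrative and assumes the column names used later in this notebook (q_id and jobtitle for queries, c_id and jobtitle for corpus elements); the exact layout of qrels.tsv may differ, so adjust accordingly.

# Peek at the English validation files (illustrative sketch; column layout assumed)
import pandas as pd

dev_queries = pd.read_csv("/content/taskA/validation/english/queries", sep="\t")
dev_corpus = pd.read_csv("/content/taskA/validation/english/corpus_elements", sep="\t")
dev_qrels = pd.read_csv("/content/taskA/validation/english/qrels.tsv", sep="\t")

print(dev_queries.head())   # query job titles with their q_id
print(dev_corpus.head())    # corpus job titles with their c_id
print(dev_qrels.head())     # relevance judgements linking queries to corpus elements
print(len(dev_queries), len(dev_corpus), len(dev_qrels))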

Baseline
Define language directionalities (queries-documents):

language_pairs = ["de-de","es-es","en-en","zh-zh"]
languages = ["german","spanish","english","chinese"]

The baseline model is sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

models = ["sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"]
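
As a quick sanity check (not part of the baseline run itself), you can load the model and compare a few job titles directly; the titles below are made-up examples.

# Illustrative sanity check: similar job titles should get high cosine similarity
demo_model = SentenceTransformer(models[0])
demo_titles = ["software engineer", "software developer", "nurse"]
demo_embeddings = demo_model.encode(demo_titles, convert_to_tensor=True)
print(util.cos_sim(demo_embeddings, demo_embeddings))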

Apply the model and save results:

import os

# Remove a stale CodeCarbon lock file if a previous run left one behind
if os.path.exists('/tmp/.codecarbon.lock'):
    os.remove('/tmp/.codecarbon.lock')

emissions = {}

for model_name in models:

    tracker = EmissionsTracker()
    tracker.start_task(model_name)
    # Download and load embedding model
    model = SentenceTransformer(model_name)
    for language_pair, language in zip(language_pairs, languages):
        # Read queries and corpus elements for the specific language
        queries = f"/content/taskA/validation/{language}/queries"
        corpus_elements = f"/content/taskA/validation/{language}/corpus_elements"
        queries = pd.read_csv(queries, sep="\t")
        corpus_elements = pd.read_csv(corpus_elements, sep="\t")
        # Get ids, strings and generate a mapping dictionary for queries
        queries_ids = queries.q_id.to_list()
        queries_texts = queries.jobtitle.to_list()
        map_queries = dict(zip(queries_ids, queries_texts))
        # Get ids, strings and generate a mapping dictionary for corpus elements
        corpus_ids = corpus_elements.c_id.to_list()
        corpus_texts = corpus_elements.jobtitle.to_list()
        map_corpus = dict(zip(corpus_ids, corpus_texts))
        # Encode queries and corpus elements with the baseline model
        query_embeddings = model.encode(queries_texts, convert_to_tensor=True)
        corpus_embeddings = model.encode(corpus_texts, convert_to_tensor=True)

        # Compute similarities between query and corpus element embeddings
        similarities = util.cos_sim(query_embeddings, corpus_embeddings).cpu().numpy()

        # Process results and prepare output file
        results = []
        for q_idx, q_id in enumerate(queries_ids):
            sorted_indices = np.argsort(-similarities[q_idx])  # Decreasing order
            for rank, c_idx in enumerate(sorted_indices):
                doc_id = corpus_ids[c_idx]
                score = similarities[q_idx, c_idx]
                results.append(f"{q_id} Q0 {doc_id} {rank+1} {score:.4f} baseline_model")

        # Save the predictions in a TREC run file. Follow the naming guidelines
        with open(f"run_{language_pair}_devbaseline-{model_name.split('/')[1]}.trec", "w", encoding="utf-8") as f:
            f.write("\n".join(results))

    emissions[model_name] = dict(tracker.stop_task(model_name).values)

json.dump(emissions, open("./emissions.json", "w"), ensure_ascii=False, indent=4)
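
Each line of the generated run files follows the standard TREC run format: query id, the literal Q0, document id, rank (starting at 1), similarity score, and a run tag. An illustrative line (the identifiers are made up) looks like this:

q_101 Q0 c_2045 1 0.9731 baseline_model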

Zip the results that will be uploaded to the Task A Codabench page.

!zip taskA_baseline.zip run_* emissions.json

updating: run_de-de_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec (deflated 80%)


updating: run_en-en_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec (deflated 80%)
updating: run_es-es_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec (deflated 81%)
updating: run_zh-zh_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec (deflated 80%)
updating: emissions.json (deflated 52%)

Evaluation
!git clone https://github.com/TalentCLEF/talentclef25_evaluation_script.git
!pip install -r /content/talentclef25_evaluation_script/requirements.txt


Then, select the Qrels file and the Run file to perform the evaluation.
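
Alternatively, the same metrics can be computed directly with the ranx library, which the evaluation script builds on. The sketch below assumes the qrels and run files are in standard TREC format; if the provided qrels.tsv uses a different layout, prefer the official script.

# Minimal sketch using ranx directly (assumes TREC-formatted qrels and run files)
from ranx import Qrels, Run, evaluate

qrels = Qrels.from_file("/content/taskA/validation/english/qrels.tsv", kind="trec")
run = Run.from_file("/content/run_en-en_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec", kind="trec")
print(evaluate(qrels, run, ["map", "mrr", "ndcg", "precision@5", "precision@10", "precision@100"]))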

For English:

qrels_file = "/content/taskA/validation/english/qrels.tsv"
run_file = "/content/run_en-en_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec"

import subprocess
command = ["python", "/content/talentclef25_evaluation_script/talentclef_evaluate.py", "--qrels", qrels_file, "--run", run_file]
result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)

Received parameters:
qrels: /content/taskA/validation/english/qrels.tsv
run: /content/run_en-en_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec
Loading qrels...
Loading run...
Running evaluation...

=== Evaluation Results ===


map: 0.4992
mrr: 0.7625
ndcg: 0.7692
precision@5: 0.6533
precision@10: 0.6010
precision@100: 0.1678

For Spanish:

qrels_file = "/content/taskA/validation/spanish/qrels.tsv"
run_file = "/content/run_es-es_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec"

import subprocess
command = ["python", "/content/talentclef25_evaluation_script/talentclef_evaluate.py", "--qrels", qrels_file, "--run", run_file]
result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)

Received parameters:
qrels: /content/taskA/validation/spanish/qrels.tsv
run: /content/run_es-es_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec
Loading qrels...
Loading run...
Running evaluation...

=== Evaluation Results ===


map: 0.3767

mrr: 0.5446
ndcg: 0.6980
precision@5: 0.6076
precision@10: 0.5730
precision@100: 0.2118

For German:

qrels_file = "/content/taskA/validation/german/qrels.tsv"
run_file = "/content/run_de-de_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec"

import subprocess
command = ["python", "/content/talentclef25_evaluation_script/talentclef_evaluate.py", "--qrels", qrels_file, "--run", run_file]
result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)

Received parameters:
qrels: /content/taskA/validation/german/qrels.tsv
run: /content/run_de-de_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec
Loading qrels...
Loading run...
Running evaluation...

=== Evaluation Results ===


map: 0.2840
mrr: 0.4797
ndcg: 0.6261
precision@5: 0.4670
precision@10: 0.4709
precision@100: 0.1794

For Chinese:

qrels_file = "/content/taskA/validation/chinese/qrels.tsv"
run_file = "/content/run_zh-zh_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec"

import subprocess
command = ["python", "/content/talentclef25_evaluation_script/talentclef_evaluate.py", "--qrels", qrels_file, "--run", run_file]
result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)

Received parameters:
qrels: /content/taskA/validation/chinese/qrels.tsv
run: /content/run_zh-zh_devbaseline-paraphrase-multilingual-MiniLM-L12-v2.trec
Loading qrels...
Loading run...
Running evaluation...

=== Evaluation Results ===


map: 0.4371
mrr: 0.7720
ndcg: 0.7308
precision@5: 0.5903
precision@10: 0.5126
precision@100: 0.1449
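
For reference, the baseline development-set results across the four monolingual settings are:

Language   MAP     MRR     nDCG    P@5     P@10    P@100
English    0.4992  0.7625  0.7692  0.6533  0.6010  0.1678
Spanish    0.3767  0.5446  0.6980  0.6076  0.5730  0.2118
German     0.2840  0.4797  0.6261  0.4670  0.4709  0.1794
Chinese    0.4371  0.7720  0.7308  0.5903  0.5126  0.1449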
