Labeled data

From Wikipedia, the free encyclopedia

Labeled data is a group of samples that have been tagged with one or more labels. Labeling typically takes a set of unlabeled data and augments each piece of it with informative tags. For example, a data label might indicate whether a photo contains a horse or a cow, which words were uttered in an audio recording, what type of action is being performed in a video, what the topic of a news article is, what the overall sentiment of a tweet is, or whether a dot in an X-ray is a tumor.
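As a minimal sketch (the file names and labels below are illustrative, not drawn from any cited dataset), labeled data can be represented in Python as samples paired with their tags, while unlabeled data consists of the raw samples alone:

    # Unlabeled data: raw samples with no tags attached.
    unlabeled_photos = ["photo_001.jpg", "photo_002.jpg", "photo_003.jpg"]

    # Labeled data: each sample is paired with an informative tag.
    labeled_photos = [
        ("photo_001.jpg", "horse"),
        ("photo_002.jpg", "cow"),
        ("photo_003.jpg", "horse"),
    ]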

Labels can be obtained by asking humans to make judgments about a given piece of unlabeled data.[1] Labeled data is significantly more expensive to obtain than the raw unlabeled data.

The quality of labeled data directly influences the performance of supervised machine learning models in operation, as these models learn from the provided labels.[2]

Crowdsourced labeled data

In 2006, Fei-Fei Li, the co-director of the Stanford Human-Centered AI Institute, initiated research to improve artificial intelligence models and algorithms for image recognition by significantly enlarging the training data. The researchers downloaded millions of images from the World Wide Web, and a team of undergraduates began applying object labels to each image. In 2007, Li outsourced the data labeling work to Amazon Mechanical Turk, an online marketplace for digital piece work. The 3.2 million images that were labeled by more than 49,000 workers formed the basis for ImageNet, one of the largest hand-labeled databases for object recognition.[3]

Automated data labeling

Once a labeled dataset has been obtained, a machine learning model can be trained on it; new unlabeled data can then be presented to the model, which predicts a likely label for each piece of unlabeled data.[4]
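A minimal sketch of this idea, assuming the scikit-learn library and made-up two-dimensional feature vectors (neither is specified by the cited source):

    # Train a classifier on hand-labeled data, then predict labels
    # for new unlabeled samples.
    from sklearn.linear_model import LogisticRegression

    # Hand-labeled training set: feature vectors and their human-assigned labels.
    X_labeled = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
    y_labeled = ["horse", "horse", "cow", "cow"]

    model = LogisticRegression()
    model.fit(X_labeled, y_labeled)

    # New unlabeled samples receive predicted ("guessed") labels from the model.
    X_unlabeled = [[0.85, 0.2], [0.15, 0.8]]
    print(model.predict(X_unlabeled))  # expected: ['horse' 'cow']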

Challenges with labeled data

Data-driven bias

Algorithmic decision-making is subject to programmer-driven bias as well as data-driven bias. Training on biased labeled data results in prejudices and omissions in a predictive model, even if the machine learning algorithm itself is sound. The labeled data used to train a machine learning algorithm needs to be a statistically representative sample in order not to bias the results.[5] For example, in facial recognition systems, underrepresented groups are often misclassified when the labeled data available for training is not representative of the population. In 2018, a study by Joy Buolamwini and Timnit Gebru demonstrated that two facial analysis datasets that have been used to train facial recognition algorithms, IJB-A and Adience, are composed of 79.6% and 86.2% lighter-skinned subjects respectively.[6]
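One simple, hedged illustration of checking representativeness before training (the group tags below are hypothetical and do not reproduce the IJB-A or Adience data):

    # Count how each group is represented among the labeled samples.
    from collections import Counter

    sample_groups = ["lighter", "lighter", "lighter", "lighter", "darker"]

    counts = Counter(sample_groups)
    total = len(sample_groups)
    for group, count in counts.items():
        # A large imbalance here suggests the trained model may perform
        # worse on the underrepresented group.
        print(f"{group}: {count / total:.0%} of labeled samples")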

Human error and inconsistency

Human annotators are prone to errors and biases when labeling data, which can lead to inconsistent labels and lower the quality of the data set. Such inconsistency can impair the machine learning model's ability to generalize well.[7]
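As an illustrative sketch (the annotations are hypothetical, and percent agreement is one common measure rather than one named by the cited source), label inconsistency between two annotators can be quantified as the fraction of samples on which they agree:

    # Simple percent agreement between two human annotators.
    labels_annotator_a = ["cat", "dog", "dog", "cat", "dog"]
    labels_annotator_b = ["cat", "dog", "cat", "cat", "dog"]

    matches = sum(a == b for a, b in zip(labels_annotator_a, labels_annotator_b))
    agreement = matches / len(labels_annotator_a)
    print(f"Inter-annotator agreement: {agreement:.0%}")  # 80%
    # Low agreement signals noisy or ambiguous labels that can hurt
    # a model's ability to generalize.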

Domain expertise

Certain fields, such as legal document analysis or medical imaging, require annotators with specialized domain knowledge. Without this expertise, the annotations or labeled data may be inaccurate, negatively impacting the machine learning model's performance in real-world scenarios.[8]


References

  1. ^ "What is Data Labeling? - Data Labeling Explained - AWS". Amazon Web Services, Inc. Retrieved 2024-07-16.
  2. ^ Fredriksson, Teodor; Mattos, David Issa; Bosch, Jan; Olsson, Helena Holmström (2020), Morisio, Maurizio; Torchiano, Marco; Jedlitschka, Andreas (eds.), "Data Labeling: An Empirical Investigation into Industrial Challenges and Mitigation Strategies", Product-Focused Software Process Improvement, vol. 12562, Cham: Springer International Publishing, pp. 202–216, doi:10.1007/978-3-030-64148-1_13, ISBN 978-3-030-64147-4, retrieved 2024-07-13
  3. ^ Mary L. Gray; Siddharth Suri (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt. p. 7. ISBN 978-1-328-56628-7.
  4. ^ Johnson, Leif. "What is the difference between labeled and unlabeled data?", Stack Overflow, 4 October 2013. Retrieved on 13 May 2017.  This article incorporates text by lmjohns3 available under the CC BY-SA 3.0 license.
  5. ^ Xianhong Hu; Bhanu Neupane; Lucia Flores Echaiz; Prateek Sibal; Macarena Rivera Lam (2019). Steering AI and advanced ICTs for knowledge societies: a Rights, Openness, Access, and Multi-stakeholder Perspective. UNESCO Publishing. p. 64. ISBN 978-92-3-100363-9.
  6. ^ Xianhong Hu; Bhanu Neupane; Lucia Flores Echaiz; Prateek Sibal; Macarena Rivera Lam (2019). Steering AI and advanced ICTs for knowledge societies: a Rights, Openness, Access, and Multi-stakeholder Perspective. UNESCO Publishing. p. 66. ISBN 978-92-3-100363-9.
  7. ^ Geiger, R. Stuart; Cope, Dominique; Ip, Jamie; Lotosh, Marsha; Shah, Aayush; Weng, Jenny; Tang, Rebekah (2021-11-05). ""Garbage in, garbage out" revisited: What do machine learning application papers report about human-labeled training data?". Quantitative Science Studies. 2 (3): 795–827. arXiv:2107.02278. doi:10.1162/qss_a_00144. ISSN 2641-3337.
  8. ^ Alzubaidi, Laith; Bai, Jinshuai; Al-Sabaawi, Aiman; Santamaría, Jose; Albahri, A. S.; Al-dabbagh, Bashar Sami Nayyef; Fadhel, Mohammed A.; Manoufali, Mohamed; Zhang, Jinglan; Al-Timemy, Ali H.; Duan, Ye; Abdullah, Amjed; Farhan, Laith; Lu, Yi; Gupta, Ashish (2023-04-14). "A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications". Journal of Big Data. 10 (1): 46. doi:10.1186/s40537-023-00727-2. ISSN 2196-1115.