
BERT-based ensemble learning approach for
sentiment analysis

Hasna Chouikhi1[0000−0002−3733−2063] and Fethi Jarray2[0000−0002−5110−1173]


1
LIMTIC Laboratory, UTM University, Tunisia
[email protected]
2
Higher institute of computer science of Medenine, Tunisia
[email protected]

Abstract. Sentiment analysis is a fundamental problem in social media and aims to determine the attitude of a writer. Recently, transformer-based models have shown great success in sentiment analysis and are considered the state of the art for many NLP tasks. However, the accuracy of sentiment analysis for low-resource languages still needs improvement. In this paper, we are concerned with sentiment analysis for Arabic documents. We first applied data augmentation techniques to publicly available datasets to improve the robustness of supervised sentiment analysis models. Then we proposed an ensemble architecture for Arabic sentiment analysis by combining different BERT models. We validated these methods on three available datasets. Our results show that the BERT-based ensemble method achieves an accuracy score of 96%.

Keywords: Arabic sentiment analysis · BERT models · Ensemble learning · large-scale dataset.

1 Introduction

Sentiment Analysis (SA) is a Natural Language Processing (NLP) research field that deals with people's opinions, sentiments, and emotions. SA techniques are categorized into symbolic and sub-symbolic approaches. The former uses lexicons and ontologies [2] to encode the polarity associated with words and multi-word expressions. The latter consists of supervised, semi-supervised, and unsupervised machine learning techniques that perform sentiment classification based on word co-occurrence frequencies. Among all these techniques, the most popular are based on deep neural networks. Some hybrid frameworks leverage both symbolic and sub-symbolic approaches. SA can be seen as a multistep process including data retrieval, data extraction, data pre-processing, and feature extraction. The final subtask, sentiment classification, comes in three types: polarity classification, intensity classification, and emotion identification. The first classifies a text as positive, negative, or neutral; the second identifies the degree of polarity as very positive, positive, negative, or very negative; the third identifies emotions such as sadness, anger, or happiness. The Arabic language has a complex nature due to its ambiguity and rich morphological system. This complexity, together with the variety of dialects and the lack of resources, is a challenge to the progress of Arabic sentiment analysis research. The major contributions of our present work are as follows:

– Create a large-scale Arabic sentiment dataset (LargeASA) for the Arabic sentiment analysis task.
– Use an adjusted model, ASA-medium BERT.
– Design a new stacking approach (also known as stacked generalization) for Arabic sentiment analysis by combining three BERT-based models (Arabic-BERT [3], AraBERT [4], and mBERT [5]).
– Perform data augmentation using a back-translation method that exploits a pretrained English–Arabic translation model: data are translated into English and back into Arabic to generate new augmented samples.

2 Related work

Learning-based approaches to ASA can be classified into two categories: classical machine learning approaches and deep learning approaches.

Machine learning (ML) methods have been widely used for sentiment analysis. ML addresses sentiment analysis as a text classification problem. Many approaches, such as support vector machines (SVM), maximum entropy (ME), the naive Bayes (NB) algorithm, and artificial neural networks (ANNs), have been proposed to handle ASA. NB and SVM are the most widely exploited machine learning algorithms for solving the sentiment classification problem [6]. Al-Rubaiee et al. [7] performed polarity classification and rating classification using SVM, MNB, and BNB. They achieved 90% accuracy for polarity classification and 50% for rating classification.

The use of DL is less common in Arabic SA than in English SA. [8] proposed an approach based on a recurrent neural network (RNN), trained on a constructed sentiment treebank, that improved sentence-level sentiment analysis on English datasets. [9] used a CNN model for SA tasks and a Stanford segmenter to perform tweet tokenization and normalization, with Word2vec word embeddings on the ASTD dataset. [10] used an LSTM-CNN model with only two unbalanced classes (positive and negative) among the four classes (objective, subjective positive, subjective negative, and subjective mixed) of ASTD.

Since its release in 2018, many pretrained versions of BERT [18] have been proposed for sequence learning tasks such as ASA. The recent trend in sentiment analysis is based on the BERT representation. Let us briefly recall BERT and the different versions that handle Arabic texts. BERT (Bidirectional Encoder Representations from Transformers) is pre-trained by conditioning on both left and right context in all layers, unlike previous language representation models. Applying BERT to an NLP task requires only fine-tuning one additional output layer for the downstream task (see Figure 1; a minimal sketch follows the figure).
Fig. 1. BERT-based architecture for ASA.
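As a rough illustration of this fine-tuning recipe (Figure 1), the following minimal sketch loads a pretrained Arabic BERT checkpoint and adds a binary classification head. It assumes the Hugging Face transformers and PyTorch libraries; the checkpoint name, sample data, and hyperparameters are illustrative rather than the exact setup of any of the cited works.

```python
# Minimal BERT fine-tuning sketch for binary sentiment classification.
# Checkpoint name and training data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "asafaya/bert-base-arabic"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2)  # the single added output layer

texts = ["خدمة ممتازة", "تجربة سيئة"]        # toy training sentences
labels = torch.tensor([1, 0])                # 1 = positive, 0 = negative
batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss    # cross-entropy over the head
loss.backward()
optimizer.step()
```

Only the thin classification head is new; the pretrained encoder weights are updated jointly with it during fine-tuning.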

The multilingual BERT (mBERT) [5] model is trained on many languages, including Arabic, and serves as a universal language model. ElJundi et al. [12] developed an Arabic-specific universal language model (ULM), hULMonA, and fine-tuned the mBERT ULM for ASA; they also collected a benchmark dataset for ULM evaluation on sentiment analysis. Safaya et al. [3] proposed Arabic-BERT, a set of pre-trained transformer language models for the Arabic language, and used the base version (bert-base-arabic). Antoun et al. [4] created AraBERTv02 based on the BERT model; it was trained on Arabic corpora consisting of internet text and news articles (8.6B tokens). [13] introduced GigaBERTv3, a bilingual BERT for English and Arabic, pre-trained on large corpora (Gigaword, Oscar, and Wikipedia). [14] designed MARBERT and ArBERT, both built on the BERT-base architecture. ArBERT was trained on a collection of Arabic datasets, mostly books and articles written in Modern Standard Arabic (MSA), while MARBERT was trained on both Dialectal Arabic (DA) and MSA tweets; MARBERT drops the next sentence prediction (NSP) objective because it is trained on short tweets. Additionally, MARBERT and ArBERT were evaluated on the ArSarcasm dataset [15]. Finally, [16] trained QARiB (QCRI Arabic and Dialectal BERT) on a collection of Arabic tweets and sentences written in MSA.

3 Proposed approach

In this paper, we design an ensemble learning model by stacking BERT models to address the Arabic sentiment analysis problem. More specifically, we combine the predictions of three BERT models dedicated to the Arabic language (Arabic-BERT, mBERT, and AraBERT).
We follow three steps to design a sentiment analysis system (a minimal sketch follows the list):

– First, we split the input texts into tokens by tokenization. Figure 2 presents the result of the Arabic-BERT and mBERT tokenizers applied to an example sentence (S). We observe that the Arabic-BERT tokenizer is more appropriate for Arabic because it takes the characteristics of Arabic morphology into account.

Fig. 2. Comparison between the Arabic BERT (a) and mBERT (b) tokenizers.

– Second, we convert each text to BERT's input format by adding the special [CLS] token at the beginning of the text and the [SEP] token between sentences and at the end. Then we run BERT to get the vector representation of each token.
– Finally, we add a classification layer on top of the [CLS] token representation to predict the text's sentiment polarity.
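A minimal sketch of these three steps with the Hugging Face transformers API is given below; the checkpoint name and example sentence are assumptions, and any Arabic BERT checkpoint could be substituted.

```python
# Sketch of the three steps: tokenize, wrap with [CLS]/[SEP], and
# classify from the [CLS] representation. Checkpoint name assumed.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")
bert = AutoModel.from_pretrained("asafaya/bert-base-arabic")

enc = tokenizer("النص هنا", return_tensors="pt")   # adds [CLS] ... [SEP]
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]))

hidden = bert(**enc).last_hidden_state             # (1, seq_len, hidden)
cls_vec = hidden[:, 0, :]                          # [CLS] token vector
classifier = torch.nn.Linear(bert.config.hidden_size, 2)
logits = classifier(cls_vec)                       # positive/negative scores
```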

3.1 Data augmentation for ASA

Data augmentation (DA) consists of artificially increasing the size of the training dataset by generating new data points from existing data. It is used for low-resource languages, such as Arabic, to avoid overfitting and create more diversity in the dataset. Data augmentation techniques can be applied at the character, word, and sentence levels. There are various data augmentation methods, including Easy Data Augmentation (EDA) methods, text generation, and back translation. This work uses a back-translation strategy [32] of translating Arabic sentences into English and back into Arabic. We ran back translation on all the available ASA datasets, including the AJGT, LABR, HARD, and LargeASA datasets.
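The sketch below illustrates back-translation with publicly available MarianMT checkpoints; the paper does not name its translation model, so the Helsinki-NLP checkpoints here are assumptions.

```python
# Back-translation sketch (Arabic -> English -> Arabic) for data
# augmentation. The MarianMT checkpoint names are assumptions.
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch)
    return [tok.decode(t, skip_special_tokens=True) for t in out]

arabic = ["الخدمة في هذا الفندق ممتازة"]
english = translate(arabic, "Helsinki-NLP/opus-mt-ar-en")
augmented = translate(english, "Helsinki-NLP/opus-mt-en-ar")
# `augmented` is a paraphrase of the original and keeps the same label.
```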

3.2 Fine-tuning the Arabic BERT model (ASA-medium BERT)

In this section, we propose an ASA model based on the Arabic BERT model. As mentioned in the original paper, Arabic BERT is available in four versions: bert-mini-arabic, bert-medium-arabic, bert-base-arabic, and bert-large-arabic. We applied a grid search strategy to find the best Arabic BERT version with the best hyperparameters [31]. Table 1 lists the hyperparameters of Arabic BERT for ASA obtained after our fine-tuning. We used the AJGT dataset [21] as the testing dataset.
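A grid search of this kind can be sketched as follows; train_and_eval is a hypothetical stub standing in for fine-tuning on the AJGT split, and the candidate values are illustrative rather than the paper's full grid.

```python
# Grid-search sketch over Arabic BERT versions and hyperparameters.
# `train_and_eval` is a hypothetical placeholder.
from itertools import product

def train_and_eval(model_name, lr, batch_size):
    # Placeholder: fine-tune `model_name` on the AJGT training split
    # and return validation accuracy. Returns 0.0 in this stub.
    return 0.0

versions = ["asafaya/bert-mini-arabic", "asafaya/bert-medium-arabic",
            "asafaya/bert-base-arabic", "asafaya/bert-large-arabic"]
learning_rates = [2e-5, 3e-5, 5e-5]
batch_sizes = [16, 32]

best = max(product(versions, learning_rates, batch_sizes),
           key=lambda cfg: train_and_eval(*cfg))
```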
Among all the works cited, the approach of Safaya et al. [3] is the closest to ours. Figure 3 depicts the proposed architecture for Arabic SA. Our architecture consists of three blocks. The first block is the text preprocessing step, where we use an Arabic BERT tokenizer to split words into tokens. The second block is the training model: the Arabic BERT model is used with only 8 encoder layers (the medium version [3]), and the outputs of its last four hidden layers are concatenated to get a fixed-size representation. The third block is the classifier, where we use a dropout layer for regularization and a fully connected layer for the output.

Fig. 3. ASA-medium BERT architecture.
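The second and third blocks can be sketched as below, assuming the asafaya/bert-medium-arabic checkpoint (8 encoder layers, hidden size 512); this is a minimal sketch, not the authors' exact implementation.

```python
# Sketch of the ASA-medium BERT head: concatenate the [CLS] vectors of
# the last four hidden layers, apply dropout, then a fully connected
# layer. Checkpoint name is an assumption.
import torch
from transformers import AutoModel

bert = AutoModel.from_pretrained("asafaya/bert-medium-arabic",
                                 output_hidden_states=True)
dropout = torch.nn.Dropout(0.1)                       # Table 1 value
fc = torch.nn.Linear(4 * bert.config.hidden_size, 2)  # 4 x 512 -> 2 classes

def classify(input_ids, attention_mask):
    hidden_states = bert(input_ids=input_ids,
                         attention_mask=attention_mask).hidden_states
    cls_last4 = torch.cat([h[:, 0, :] for h in hidden_states[-4:]], dim=-1)
    return fc(dropout(cls_last4))                     # sentiment logits
```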

Table 1. Hyper-parameters of ASA-medium BERT.

Hyper-parameter  Batch size  Dropout  Max length  Hidden size  Learning rate
Value            16          0.1      128         512          2e-5

The overall model is trained with the AdamW optimizer. We note that, with hyperparameter optimization by grid search, we outperform the approach of [3].
Table 2 shows the architectural differences between the ASA-medium BERT model [1], Arabic BERT [3], and AraBERT [4]. It shows that, with an Arabic tokenizer, the number of encoder layers in the Arabic BERT model influences the accuracy.

Table 2. Architectural characteristics of the ASA-medium BERT, AraBERT, and Arabic BERT models.

Model            Batch size  Epochs  Layers  Activation function  Tokenizer
ASA-medium BERT  16          5       8       Softmax              Arabic-BERT
Arabic BERT [3]  16/32       10      12      ReLU                 Arabic-BERT
AraBERT [4]      512/128     27      12      Softmax              AraBERTv02-base

3.3 Stacking BERT-based models for ASA

A stacking model is a hierarchical ensemble framework in which the predictions generated by various machine learning base models are used as inputs to a meta-model. The objective of the meta-model is to optimally combine the base model predictions to form a new classifier. In this work, we use medium Arabic BERT [3], AraBERT [4], and mBERT [5] as base models and a fully connected layer as the meta-model (Figure 4).

Fig. 4. Stacking of BERT-based models for ASA.

In the experimental section, we consider three stacking scenarios: auto-stacking of each BERT model with itself, pairwise stacking, and full stacking of the three BERT models (a minimal sketch of the meta-model follows).
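The meta-model can be sketched as a single fully connected layer over the concatenated base-model probabilities; the tensor shapes below are illustrative.

```python
# Stacking sketch: concatenate the softmax outputs of the three base
# models (ASA-medium BERT, AraBERT, mBERT) and feed them to a fully
# connected meta-classifier.
import torch

class StackingMeta(torch.nn.Module):
    def __init__(self, n_base=3, n_classes=2):
        super().__init__()
        self.fc = torch.nn.Linear(n_base * n_classes, n_classes)

    def forward(self, base_probs):
        # base_probs: (batch, n_base * n_classes), the base models'
        # predicted class probabilities placed side by side.
        return self.fc(base_probs)

meta = StackingMeta()
base_probs = torch.rand(4, 6)   # stand-in for three base models' outputs
logits = meta(base_probs)       # final positive/negative scores
```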

4 Experiments and results


4.1 Datasets
In this paper, we perform experiments on three available datasets, HARD, LABR, and AJGT (Table 3), and on LargeASA, constructed by merging the other datasets (a construction sketch follows the list). All were split into two subsets: 80% for training and 20% for testing.
– The Hotel Arabic Reviews Dataset (HARD) [19] contains 93,700 reviews, each with two parts: positive comments and negative comments. It covers 1,858 hotels reviewed by 30,889 users.

– Large-scale Arabic Book Reviews (LABR) [20] contains more than 63,000 book reviews in Arabic.
– Arabic Jordanian General Tweets (AJGT) [21] contains 1,800 tweets annotated as positive or negative.
– Large-scale Arabic Sentiment Analysis (LargeASA): we aggregate the HARD, LABR, and AJGT datasets into one large corpus for ASA. This dataset is publicly available upon request.
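As a construction sketch (file and column names are assumptions), LargeASA can be assembled and split as follows:

```python
# Sketch of building LargeASA by concatenating the three datasets and
# splitting 80/20. File and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

frames = [pd.read_csv(f) for f in ("hard.csv", "labr.csv", "ajgt.csv")]
large_asa = pd.concat(frames, ignore_index=True)

train_df, test_df = train_test_split(
    large_asa, test_size=0.2, random_state=42,
    stratify=large_asa["label"])   # keep the P/N ratio in both splits
```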

Table 3. Statistics of the datasets used.

Dataset   Samples  Labels  Dialect
LABR      63,000   P/N     MSA/DA
HARD      93,700   P/N     MSA/DA
AJGT      1,800    P/N     DA (Jordanian)
LargeASA  158,500  P/N     MSA/DA

4.2 Fine-tuning results

Table 4 reports the accuracy of each method on each dataset. It shows that our model (ASA-medium BERT) and AraBERT [4] achieve very similar results. Our model gives the best results on the LABR, AJGT, and ArsenTD-Lev [23] datasets, while [4] gives the best results on the ASTD and HARD datasets. The difference in accuracy between the two works is slight (92.6% versus 91% on the ASTD dataset [24], and 86.7% versus 87% on LABR). However, our model gives a very good result on the ArsenTD-Lev dataset (75%, compared to accuracies that do not exceed 60% for the other models).

Table 4. ASA-medium BERT accuracy vs. previous ASA approaches.

Model \ Dataset   AJGT    LABR   HARD   ASTD   ArsenTD-Lev
Arabic-BERT base  -       -      -      71.4%  55.2%
hULMonA [12]      -       -      95.7%  69.9%  52.4%
AraBERT           93.8%   86.7%  96.2%  92.6%  59.4%
mBERT             83.6%   83%    95.7%  -      -
ASA-medium BERT   96.11%  87%    95%    91%    75%

The first row block of Table 5 compares the three base models used in the stacking approach. It shows that medium Arabic BERT is the most performant and mBERT the least. We investigate different stacking strategies to strengthen these base models.

4.3 Stacking results

The second row block of Table 5 shows the performance of stacking each model with itself. By cross-comparing the first and second row blocks of Table 5, we conclude that auto-stacking a model with itself does not improve its performance. This may be because we have only two classes, positive and negative; it would be interesting to check the efficiency of auto-stacking for a larger number of classes, as in sentiment analysis with intensity. The third row block of Table 5 details the results obtained by pairwise stacking of different models. It shows that the BERT-based models can be strengthened by stacking them with ASA-medium BERT. The last row block of Table 5 shows the results of fully stacking the three base models: Arabic BERT, AraBERT, and mBERT. From all stacking scenarios, we conclude that the best option is to auto-stack ASA-medium BERT with itself.

Table 5. Comparison between accuracies of BERT stacking strategies.

Stacking           Model \ Dataset                          AJGT    LargeASA  LABR    HARD
Base models        ASA-medium BERT                          96.11%  90%       87%     95%
                   mBERT                                    83.6%   83%       85%     95.7%
                   AraBERT                                  93.8%   85%       86.7%   96.2%
Auto stacking      ASA-medium BERT x2                       94%     90%       90%     96%
                   mBERT x2                                 83%     86%       87%     95%
                   AraBERT x2                               77%     87%       86%     96%
Pairwise stacking  ASA-medium BERT + mBERT                  94%     90%       90%     96%
                   ASA-medium BERT + AraBERT                90%     90%       88%     96%
                   mBERT + AraBERT                          78%     88%       88%     95%
Full stacking      ASA-medium BERT + mBERT + AraBERT        93%     91%       88%     95%
                   ASA-medium BERT + mBERT + AraBERT + DA   93.03%  91.09%    88.02%  95.07%

4.4 Effect of data augmentation

Table 6 shows the impact of data augmentation on the accuracy measures. As can be seen from the table, with data augmentation the ensemble learning models perform better, since the number of training samples increases.

5 Conclusion
In this paper, we proposed a BERT-based ensemble learning approach for Arabic sentiment analysis. We used medium Arabic BERT, AraBERT, and mBERT as base models. First, we showed that by fine-tuning the Arabic BERT model we outperform the state of the art for ASA. Second, the experimental results showed that the stacking strategy improves accuracy. As a continuation of this contribution, we plan to generalize our results to sentiment analysis with intensities and to investigate more data augmentation techniques.

Table 6. Effect of data augmentation on accuracy values.

Method      Model                       AJGT  LargeASA  LABR  HARD
Without DA  ASA-medium BERT x2          94%   90%       90%   96%
            AraBERT x2                  77%   87%       86%   96%
            mBERT x2                    83%   86%       87%   95%
            ASA-medium BERT + mBERT     94%   90%       90%   96%
            ASA-medium BERT + AraBERT   90%   90%       88%   96%
            mBERT + AraBERT             78%   88%       88%   95%
With DA     ASA-medium BERT x2          96%   93%       93%   97%
            AraBERT x2                  96%   88%       89%   96%
            mBERT x2                    96%   92%       90%   97%
            ASA-medium BERT + mBERT     96%   95%       94%   97%
            ASA-medium BERT + AraBERT   96%   94%       93%   97%
            mBERT + AraBERT             95%   94%       92%   96%

References
1. Chouikhi, H., Chniter, H., Jarray, F.: Stacking BERT based models for Arabic sentiment analysis. In Proceedings of the 13th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - KEOD, ISBN 978-989-758-533-3, ISSN 2184-3228, 144-150. (2021). DOI: 10.5220/0010648400003064.
2. Dragoni, M., Poria, S., Cambria, E.: OntoSenticNet: A commonsense ontology for
sentiment analysis. IEEE Intelligent Systems, 33(3), 77-85.(2018).
3. Safaya, A., Abdullatif, M., Yuret, D.: KUISAIL at SemEval-2020 Task 12: BERT-CNN for offensive speech identification in social media. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2054-2059. (2020, December).
4. Antoun, W., Baly, F., Hajj, H.: AraBERT: Transformer-based model for Arabic language understanding. arXiv preprint arXiv:2003.00104. (2020).
5. Devlin, J., Chang, M. W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. (2018).
6. Imran, A., Faiyaz, M., Akhtar, F.: An enhanced approach for quantitative prediction
of personality in facebook posts. International Journal of Education and Manage-
ment Engineering (IJEME), 8(2), 8-19.(2018).
7. Al-Rubaiee, H., Qiu, R., Li, D.: Identifying Mubasher software products through
sentiment analysis of Arabic tweets. In 2016 International Conference on Industrial
Informatics and Computer Systems (CIICS) (pp. 1-6). IEEE.(2016, March).
8. Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A. Y., Potts,
C.: Recursive deep models for semantic compositionality over a sentiment treebank.
In Proceedings of the 2013 conference on empirical methods in natural language
processing,1631-1642.(2013, October).
9. Rangel, F., Rosso, P., Charfi, A., Zaghouani, W., Ghanem, B., Sánchez-Junquera,
J.: Overview of the track on author profiling and deception detection in arabic.
Working Notes of FIRE 2019. CEUR-WS. org, vol. 2517, 70-83. (2019).
10. Alhumoud, S., Albuhairi, T., Alohaideb, W.: Hybrid sentiment analyser for Arabic
tweets using R. In 2015 7th International Joint Conference on Knowledge Discovery,
Knowledge Engineering and Knowledge Management (IC3K), Vol. 1, 417-424. IEEE.
(2015, November).

11. Zahran, M. A., Magooda, A., Mahgoub, A. Y., Raafat, H., Rashwan, M., Atyia,
A.: Word representations in vector space and their applications for arabic. In Inter-
national Conference on Intelligent Text Processing and Computational Linguistics,
430-443. Springer, Cham. (2015, April).
12. ElJundi, O., Antoun, W., El Droubi, N., Hajj, H., El-Hajj, W., Shaban, K.: hULMonA: The universal language model in Arabic. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, 68-77. (2019, August).
13. Lan, W., Chen, Y., Xu, W., Ritter, A.: An empirical study of pre-trained trans-
formers for Arabic information extraction. arXiv preprint arXiv:2004.14519.(2020).
14. Abdul-Mageed, M., Elmadany, A., Nagoudi, E. M. B.: ARBERT MARBERT: deep
bidirectional transformers for Arabic. arXiv preprint arXiv:2101.01785. (2020).
15. Farha, I. A., Magdy, W.: From Arabic sentiment analysis to sarcasm detection: The ArSarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, 32-39. (2020, May).
16. Abdelali, A., Hassan, S., Mubarak, H., Darwish, K., Samih, Y.: Pre-training bert
on arabic tweets: Practical considerations. arXiv preprint arXiv:2102.10684. (2021).
17. Grave, E., Bojanowski, P., Gupta, P., Joulin, A., Mikolov, T.: Learning word vec-
tors for 157 languages. arXiv preprint arXiv:1802.06893. (2018).
18. Devlin, J., Chang, M. W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. (2018).
19. Elnagar, A., Khalifa, Y. S., Einea, A.: Hotel Arabic-reviews dataset construction for
sentiment analysis applications. In Intelligent natural language processing: Trends
and applications, 35-52. Springer, Cham. (2018).
20. Aly, M., Atiya, A.: LABR: A large scale Arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 494-498. (2013, August).
21. Alomari, K. M., ElSherif, H. M., Shaalan, K.: Arabic tweets sentimental analysis
using machine learning. In International Conference on Industrial, Engineering and
Other Applications of Applied Intelligent Systems, 602-610. Springer, Cham. (2017,
June).
22. Joulin, A., Grave, E., Bojanowski, P., Mikolov, T.: Bag of tricks for efficient text
classification. arXiv preprint arXiv:1607.01759. (2016).
23. Baly, R., Khaddaj, A., Hajj, H., El-Hajj, W., Shaban, K. B.: ArSenTD-Lev: A multi-topic corpus for target-based sentiment analysis in Arabic Levantine tweets. arXiv preprint arXiv:1906.01830. (2019).
24. Nabil, M., Aly, M., Atiya, A.: ASTD: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2515-2519. (2015, September).
25. Ghanem, B., Karoui, J., Benamara, F., Moriceau, V., Rosso, P.: Idat at fire2019:
Overview of the track on irony detection in Arabic tweets. In Proceedings of the
11th Forum for Information Retrieval Evaluation, 10-13. (2019, December).
26. Shoukry, A., Rafea, A.: Sentence-level Arabic sentiment analysis. In 2012 interna-
tional conference on collaboration technologies and systems (CTS), 546-550. IEEE.
(2012, May).
27. Alhumoud, S., Albuhairi, T., Alohaideb, W.: Hybrid sentiment analyzer for Arabic
tweets using R. In 2015 7th International Joint Conference on Knowledge Discovery,
Knowledge Engineering and Knowledge Management (IC3K), Vol. 1, 417-424. IEEE.
(2015, November).

28. Eskander, R., Rambow, O.: SLSA: A sentiment lexicon for Standard Arabic. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2545-2550. (2015, September).
29. Dahou, A., Elaziz, M. A., Zhou, J., Xiong, S.: Arabic sentiment classification using
convolutional neural network and differential evolution algorithm. Computational
intelligence and neuroscience. (2019).
30. Harrat, S., Meftouh, K., Smaili, K.: Machine translation for Arabic dialects (sur-
vey). Information Processing Management, 56(2), 262-273. (2019).
31. Chouikhi, H., Chniter, H., Jarray, F.: Arabic sentiment analysis using BERT
model. In International Conference on Computational Collective Intelligence, 621-
632. Springer, Cham. (2021, September).
32. Ma, J., and Li, L.: Data Augmentation For Chinese Text Classification Using Back-
Translation. In Journal of Physics: Conference Series (Vol. 1651, No. 1, p. 012039).
IOP Publishing. (2020, November).
