
Plan-then-Generate: Controlled Data-to-Text Generation via Planning

Yixuan Su♠,∗ David Vandyke♥ Sihui Wang♥ Yimai Fang♥ Nigel Collier♠

♠Language Technology Lab, University of Cambridge
♥Apple
{ys484,nhc30}@cam.ac.uk
{dvandyke,sihui_wang,yimai_fang}@apple.com

Abstract
Recent developments in neural networks have led to the advance in data-to-text generation. However, the lack of ability of neural models to control the structure of generated output can be limiting in certain real-world applications. In this study, we propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of neural data-to-text models. Extensive experiments and analyses are conducted on two benchmark datasets, ToTTo and WebNLG. The results show that our model is able to control both the intra-sentence and inter-sentence structure of the generated output. Furthermore, empirical comparisons against previous state-of-the-art methods show that our model improves the generation quality as well as the output diversity as judged by human and automatic evaluations.

1 Introduction

Generating natural language from structured data (Gatt and Krahmer, 2018), i.e. data-to-text generation, is a research problem that is crucial to many downstream NLP applications. Some examples are dialogue systems (Wen et al., 2016), restaurant assistant (Novikova et al., 2017), and open domain question answering (Chen et al., 2021).

[Table 1: An Example of Knowledge Table]

To address this task, many researchers have designed sophisticated neural models based on various methods, such as soft-template (Wiseman et al., 2018), copy mechanism (Gehrmann et al., 2018), and pre-trained language models (Kale and Rastogi, 2020; Ribeiro et al., 2020). While achieving impressive results, most existing studies only focused on producing results that are close to the references. On the other hand, the controllability of such models is still under-explored, i.e. what to generate and in what order (the output structure) cannot be explicitly controlled by the users.

We argue that the model's ability to control the structure of its output is highly desirable for at least two reasons. (1) Arranging the structure of the output in a certain form enables it to have greater naturalness, as the structure of the sentence often reflects the salience of the entities it contains (Poesio et al., 2004). Suppose we have a digital assistant which replies to user queries based on knowledge tables like Table 1. Then, for a user query "Who played Evelyn in Kids in Love?", a natural response is "Evelyn in Kids in Love was played by Alma Jodorowsky.". In contrast, to a different query "What role did Alma Jodorowsky play in Kids in Love?", a natural response would be "Alma Jodorowsky played Evelyn in Kids in Love.". While both answers are semantically equivalent, producing the answer with the most appropriate structure allows the system to sound less robotic and be easily understood. (2) It allows the model to generate outputs with diverse structures by simply changing the input planning information (i.e. a content plan), which could potentially benefit other applications such as paraphrasing and data augmentation.

To control the output structure, we need an intermediate "planning" signal (i.e. a content plan) which informs the model what to generate and in what order. To this end, we propose a Plan-then-Generate (PlanGen) framework which consists of two components: a content planner and a sequence generator. Given the input data, the content planner first predicts the most plausible content plan that the output should follow. Then, the sequence generator takes the data and the content plan as input to generate the result. To further ensure the controllability of our model, we propose a structure-aware reinforcement learning (RL) objective that encourages the generated output to adhere to the given content plan.

∗ Work done while the author was an intern at Apple.
895
Findings of the Association for Computational Linguistics: EMNLP 2021, pages 895–909
November 7–11, 2021. ©2021 Association for Computational Linguistics
Figure 1: Plot illustrating the relationship between the structured data, the content plan, and the reference text for
examples from (a) ToTTo dataset (tabular data) and (b) WebNLG dataset (graphical data with RDF structure).
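As an illustration of the two input formats in Figure 1, the examples from the paper can be written out as plain Python structures (the container types here are for exposition only, not the authors' implementation):

```python
# Tabular data (ToTTo): a list of (slot key, slot value) pairs.
totto_table = [("Name", "Alma Jodorowsky"), ("Role", "Evelyn"),
               ("Title", "Kids in Love")]

# Graphical data (WebNLG): a list of (subject, predicate, object) RDF triples.
webnlg_triples = [("Alan Bean", "status", "Retired"),
                  ("Alan Bean", "selectedByNASA", "1963")]

# A content plan is an ordered token list: slot keys for tables,
# predicates for RDF triples.
totto_plan = [key for key, _ in totto_table]
webnlg_plan = [pred for _, pred, _ in webnlg_triples]
print(totto_plan)   # ['Name', 'Role', 'Title']
print(webnlg_plan)  # ['status', 'selectedByNASA']
```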

In this work, we formulate the intermediate content plan as an ordered list of tokens for its simplicity and wide applicability to data with different structures. For tabular data, each token in the content plan is a slot key from the table. As for graphical data with RDF structure, each token represents the predicate from an RDF triple. In Figure 1, we provide examples for both cases.

To fully evaluate our approach, we test the proposed model on two benchmarks with different data structures: (i) the ToTTo dataset (Parikh et al., 2020) with tabular data, and (ii) the WebNLG dataset (Colin et al., 2016; Gardent et al., 2017) with graphical data. Compared with previous state-of-the-art approaches, our model achieves better performance in terms of generation quality as judged by both human and automatic evaluations. In particular, the results also show that the outputs of our model are highly controllable and contain diverse structures.

In summary, our contributions are: (1) A novel Plan-then-Generate (PlanGen) framework that consists of a content planner and a sequence generator for data-to-text generation. (2) Extensive automatic and human evaluations reporting state-of-the-art results on two benchmark datasets. (3) In-depth analysis revealing the merits of the proposed approach in terms of controllability and diversity.

2 Related Work

Data-to-text generation is a long-standing problem (Reiter and Dale, 1997) that aims at producing natural language descriptions of structured data. Traditional systems are primarily built on template-based algorithms (Oh and Rudnicky, 2000; Stent et al., 2004; Kondadadi et al., 2013). With recent advances in deep learning, researchers have shifted their attention to neural generation models that can be summarized into two categories.

End-to-End Models. Many existing studies are dedicated to building end-to-end neural models with different strategies like soft-templates (Wiseman et al., 2018; Ye et al., 2020), attention awareness (Liu et al., 2018; Colin and Gardent, 2019), and retrieved prototypes (Li et al., 2020; Su et al., 2021b). Gehrmann et al. (2018), Puduppully et al. (2019a,b), and Chen et al. (2020b) adopted copy mechanism for content selection to improve the information coverage of the outputs. With recent advances in pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Lewis et al., 2020), several researchers (Chen et al., 2020a,b; Kale and Rastogi, 2020; Ribeiro et al., 2020) have studied ways to adapt PLMs to the data-to-text generation task.

Pipeline Models. Another line of research investigates ways to tackle the generation problem in a pipeline framework. Ma et al. (2019) proposed to first use a classifier to select the key contents. The planning and surface realisation of the selected contents are then addressed by a subsequent Seq2seq model. More related to our work, some researchers studied how neural models can benefit from traditional NLG steps (Kukich, 1983; McKeown, 1992), that is, (i) content planning and (ii) surface realisation. To simultaneously select the key contents and arrange their orderings (i.e. content planning), different strategies are proposed, such as the most probable traversal of graph trees (Moryossef et al., 2019), the ordering of graph nodes (Zhao et al., 2020), and the multi-step pipeline that includes discourse ordering, lexicalization, and regular expression generation (Ferreira et al., 2019). While achieving satisfactory results, these approaches can only be applied to data with graphical structure. Compared with previous studies, we show that our content planning approach is more accurate and less dependent on the data structure. In addition, by providing the desired content plan, our model can control the output structure on both the intra-sentence and inter-sentence levels (§7.3).
Figure 2: PlanGen Framework: Given the structured data (T ), a content plan (C) is first predicted by the content
planner (left). The sequence generator (right) then takes the structured data and the predicted content plan as input
to generate the output (S). Note that the content plan can also be specified by the user for controlled generation.
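The two-stage pipeline in Figure 2 can be summarized in code. The two models below are trivial stand-ins (the real planner is a BERT+CRF model and the real generator a fine-tuned BART); only the control flow, including user-specified plans, mirrors the framework:

```python
def content_planner(data):
    # Stand-in: keep slot keys in table order (real planner: BERT + CRF, §4.1).
    return [key for key, _ in data]

def sequence_generator(data, plan):
    # Stand-in: template realisation (real generator: fine-tuned BART, §4.2).
    values = dict(data)
    return " ".join(str(values[key]) for key in plan) + "."

def plan_then_generate(data, user_plan=None):
    # The content plan either comes from the planner or is supplied by the user.
    plan = user_plan if user_plan is not None else content_planner(data)
    return sequence_generator(data, plan)

data = [("Name", "Alma Jodorowsky"), ("Role", "Evelyn"), ("Title", "Kids in Love")]
print(plan_then_generate(data))                              # planner-chosen order
print(plan_then_generate(data, user_plan=["Role", "Name"]))  # user-controlled order
```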

3 Preliminaries

Dataset. In this study, our training dataset is defined as D = {(T, C, S)_i}_{i=1}^{|D|}. (1) T is the linearized structured data, defined as T = {t_1, ..., t_{|T|}}. For data with tabular structure, each item t_i = {k_i, v_i} is a pair of slot key k_i and slot value v_i (e.g., (Date, 1956) in Figure 1(a)). As for graphical data with RDF structure, each item t_i = {s_i, p_i, o_i} represents an RDF triple, where s_i, p_i, and o_i are the subject, predicate, and object, respectively. For instance, in Figure 1(b), ("Alan Bean", "status", "Retired") is an RDF triple. (2) The reference content plan C is defined as C = {c_1, ..., c_{|C|}}, where each token c_i either denotes a slot key (for tabular data) or a predicate (for graphical data). The content plan is thus a selection of the content from the structured data that should appear in the output, in a particular order. (3) S = {s_1, ..., s_{|S|}} denotes the reference text.

Content Plan Construction. Note that the original ToTTo and WebNLG datasets only consist of pairs of structured data and reference text. Thus, we use a heuristic delexicalizer F to construct the reference content plan. For tabular data T, given the reference text S, the content plan C = F(T, S) is built by replacing the parts of the reference text that come from the table slot values with the corresponding slot keys. For instance, suppose we have the text "Alma Jodorowsky played Evelyn in Kids in Love." and Table 1; then the resulting content plan is {"Name" → "Role" → "Title"}. For graphical data with RDF structure, we apply a similar procedure to build the reference content plan by replacing the parts of the reference text that come from the objects of the RDF triples with the corresponding predicates. In Figure 1, we show examples of the reference content plan for both cases.

4 Methodology

Figure 2 depicts the proposed Plan-then-Generate (PlanGen) framework. Given the input data, the content planner (§4.1) first predicts the most probable content plan. The sequence generator (§4.2) then takes the structured data and the predicted content plan to generate the output. In the following, we elaborate the details of the proposed framework.

4.1 Content Planner

Our content planner consists of two components. The first part is a content encoder which takes the data T as input and produces its representation H_T ∈ R^{|T|×n}, where n is the output size. We construct our content encoder with a pre-trained BERT-base model (Devlin et al., 2019).

After getting the data representation, we select the hidden states from H_T that correspond to the tokens¹ that might appear in the content plan. Here, we denote the selected hidden states H_C ∈ R^{|C|×n} as H_C = {h_{c_1}, ..., h_{c_{|C|}}}, where |C| is the number of selected tokens from the input data. Next, H_C is fed into the ordering predictor, which predicts the ordering of the selected tokens in the predicted content plan.

¹ For tabular data, the selected tokens correspond to all slot keys from the table. Similarly, for graphical data, the selected tokens correspond to the predicates of all input RDF triples.
Inspired by Su et al. (2021a), we model the ordering predictor as a linear-chain conditional random field (CRF) (Lafferty et al., 2001) for its ability to compute the globally optimal ordering sequence. When predicting the ordering, the ordering predictor is allowed to emit an empty label ∅, which indicates the omission of the corresponding token in the content plan.

During training, the likelihood of the ordering sequence Y defined by the content plan is

  P_{CRF}(Y|H_C) = \frac{e^{f(Y, H_C)}}{\sum_{Y'} e^{f(Y', H_C)}}
                 = \frac{1}{Z} \exp\Big( \sum_{i=1}^{|C|} \Phi_{y_i}(h_{c_i}) + \sum_{i=2}^{|C|} M_{y_{i-1}, y_i} \Big).   (1)

Here, Φ_{y_i}(h_{c_i}) is the label score of y_i at step i, where label y_i indicates the position of the token in the final content plan. Taking Figure 2 as an example, the position of the "Name" key is 1, meaning that "Name" should appear at the front of the content plan. By predicting positions instead of the actual slot keys, at test time our model can handle tables with out-of-vocabulary slot keys that did not appear in the training set. In practice, Φ is parameterized by a feed-forward layer. M_{y_{i-1}, y_i} denotes the transition score from position y_{i-1} to position y_i, and M is a learnable transition matrix.

During inference, the ordering sequence is predicted as Ỹ = arg max_{Y'} P_{CRF}(Y'|H_C). As shown in the example of Figure 2, given all the slot keys {"Year", "Name", "Role", "Notes", "Title"} from the table, the predicted ordering sequence is {3, 1, 2, ∅, 4}. The content plan {"Name" → "Role" → "Year" → "Title"} can then be predicted by omitting the "Notes" key and re-arranging the other keys following the predicted ordering sequence.

4.2 Sequence Generator

Our sequence generator is built on a BART-base model (Lewis et al., 2020), which consists of a transformer-based encoder-decoder architecture. Given the structured data T, the reference content plan C, and the reference text S, the learning objective of the sequence generator is defined as

  L_{LM} = -\sum_{i=1}^{|S|} \log P_G(S_i | S_{<i}; E([T : C])),   (2)

where E and G are the encoder and decoder, and [· : ·] denotes the concatenation operation.

4.3 Structure-Aware RL Training

We note that the structure of the generated sequence can only be accurately measured on the sequence level, which is not directly optimized by the token-level objective (Eq. (2)). Therefore, to encourage the generator to follow the sequence-level structure defined by the content plan, we incorporate reinforcement learning into our training process. Formally, in training, given the structured data T and the reference content plan C, the generator first samples an output sequence S' = (S'_1, ..., S'_{|S'|}), where S'_t is the token sampled at time step t. The generator parameters θ are then updated using the REINFORCE algorithm (Williams, 1992) as

  L_{RL} = -E_{S' \sim P_θ(T, C)}[R(S, S', T, C)]
         = -\sum_{i=1}^{|S'|} R(S, S', T, C) \log P_G(S'_i | S'_{<i}; E([T : C])).   (3)

The reward function R(S, S', T, C) measures the structure of the sampled sequence S' against the input content plan C, and its surface form against the reference text S, as

  R(S, S', T, C) = B(S, S') + B(C, C'),   (4)

where B(·, ·) is the BLEU score (Papineni et al., 2002), C' = F(T, S'), and F is described in §3. By optimizing Eq. (3), the structure of the output is encouraged to follow the content plan.

4.4 Learning

The learning objective of the content planner is L_{CRF} = −log P_{CRF}, where P_{CRF} is defined in Eq. (1). For the sequence generator, for the first 10k steps we train it with L_{LM} as described in Eq. (2). Then, we incorporate the structure-aware RL objective (Eq. (3)) and further train the sequence generator with L_{LM} + L_{RL} for 5k more steps.

5 Experiment Setup

5.1 Datasets and Evaluation Metrics

ToTTo Dataset (Parikh et al., 2020) consists of Wikipedia tables paired with human-written descriptions. Each input is a full table with highlighted cells, and the model is required to generate the text that describes the highlighted cells. Similar to previous studies (Parikh et al., 2020; Kale and Rastogi, 2020), we only use the highlighted cells as the model input.
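The post-processing that turns a predicted ordering sequence into a content plan (§4.1) can be sketched directly from the Figure 2 example; the CRF decoding itself is elided, and `None` stands in for the empty label ∅:

```python
EMPTY = None  # stands in for the empty label ∅ (token omitted from the plan)

def ordering_to_plan(tokens, positions):
    """tokens: candidate slot keys / predicates; positions: the predicted
    position of each token in the plan, or EMPTY if the token is dropped."""
    kept = [(pos, tok) for tok, pos in zip(tokens, positions) if pos is not EMPTY]
    return [tok for _, tok in sorted(kept)]

# The worked example from §4.1: ordering {3, 1, 2, ∅, 4} over the table's keys.
keys = ["Year", "Name", "Role", "Notes", "Title"]
order = [3, 1, 2, EMPTY, 4]
print(ordering_to_plan(keys, order))  # ['Name', 'Role', 'Year', 'Title']
```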
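The structure-aware reward of Eq. (4) can be illustrated with a tiny BLEU stand-in (n-gram precision up to bigrams, without the smoothing and brevity penalty of the standard metric the paper uses):

```python
import math
from collections import Counter

def bleu(ref, hyp, max_n=2):
    """Toy BLEU: geometric mean of n-gram precisions, n = 1..max_n.
    A simplified stand-in for the BLEU of Papineni et al. (2002)."""
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        overlap = sum(min(cnt, ref_ngrams[g]) for g, cnt in hyp_ngrams.items())
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    return math.exp(sum(math.log(p) for p in precisions) / max_n)

def reward(ref_text, sampled_text, ref_plan, sampled_plan):
    # R(S, S', T, C) = B(S, S') + B(C, C') (Eq. (4)): surface-form fidelity
    # plus adherence of the sampled text's plan C' = F(T, S') to the plan C.
    return bleu(ref_text, sampled_text) + bleu(ref_plan, sampled_plan)

s = "in 1956 george washington won the sun bowl".split()
c = ["Date", "Title", "Game"]
print(round(reward(s, s, c, c), 2))  # 2.0 for a sample identical to the reference
```

A sample that matches the reference surface form but scrambles the plan order earns less than 2.0, which is exactly the pressure Eq. (3) applies.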
Model              | Overall              | Overlap              | Non-Overlap
                   | BLEU  PARENT BLEURT  | BLEU  PARENT BLEURT  | BLEU  PARENT BLEURT
NCP                | 19.2  29.2   -0.576  | 24.5  32.5   -0.491  | 13.9  25.8   -0.662
Pointer-Generator  | 41.6  51.6    0.076  | 50.6  58.0    0.244  | 32.2  45.2   -0.092
BERT-to-BERT       | 44.0  52.6    0.121  | 52.7  58.4    0.259  | 35.1  46.8   -0.017
T5-3B              | 49.5  58.4    0.230  | 57.5  62.6    0.351  | 41.4  54.2    0.108
Ours               | 49.2  58.7    0.249  | 56.9  62.8    0.371  | 41.5  54.6    0.126

Table 2: ToTTo test set results. All reported results, including ours, can be found in the official leaderboard.²

² https://github.com/google-research-datasets/ToTTo

We report the automatic results of BLEU-4, PARENT³ (Dhingra et al., 2019), and a learnt metric, BLEURT (Sellam et al., 2020). Note that ToTTo features a hidden test set with two splits: Overlap and Non-Overlap. The Non-Overlap set contains out-of-domain examples. To get the test set result, a submission must be made to the leaderboard.

³ PARENT is a word-overlap based metric that reflects the factual accuracy of the generated text in relation to both the input table and the reference sentence.

WebNLG Dataset is used in the WebNLG challenge (Gardent et al., 2017). For each data instance, the input is a set of RDF triples from DBPedia and the output is their textual description. The test set of WebNLG features a Seen and an Unseen subset. The Unseen subset contains out-of-domain instances. Following previous studies, we report the automatic results of BLEU and METEOR (Banerjee and Lavie, 2005).

5.2 Implementation Details

Our implementation is based on the Huggingface library (Wolf et al., 2019). We optimize the model using Adam (Kingma and Ba, 2015) with a learning rate of 2e−5 and a batch size of 64.

6 Results

In this section, we report the experimental results.

6.1 ToTTo Results

We compare our model with the latest models on the ToTTo dataset, including NCP (Puduppully et al., 2019a), Pointer-Generator (See et al., 2017), BERT-to-BERT (Rothe et al., 2020), and T5-3B (Kale and Rastogi, 2020). Similar to our model, the latter two are also based on pre-trained language models.

Table 2 lists the results on the ToTTo test set. For most of the metrics, our model with 140M parameters outperforms the current state-of-the-art T5-3B model, which has over 2.8B parameters. The results on the PARENT metric suggest that our model can generate more factually accurate text. Moreover, on the Non-Overlap subset, our model achieves the best result on all metrics, showing its robustness to out-of-domain examples.

6.2 WebNLG Results

We compare our approach with two types of models on the WebNLG dataset. The first type does not use pre-trained language models (PLMs), including GTR-LSTM (Trisedya et al., 2018), Transformer (Ferreira et al., 2019), Step-by-Step (Moryossef et al., 2019), and PLANENC (Zhao et al., 2020). Similar to ours, the latter three are pipeline models that utilize different methods to decide the output planning before generating the result. The second line of research utilizes PLMs, including Switch-GPT (Chen et al., 2020b), T5 (Kale and Rastogi, 2020), and T5+Prefix (Ribeiro et al., 2020). The Switch-GPT model applies a copy mechanism to copy content from the source to the output. We also include the top systems of the WebNLG challenge: ADAPT, TILB-SMT, and MELBOURNE.

Evaluation on Text Generation. Table 3 lists the results of different methods in terms of text generation. We see that our approach outperforms all prior works. Compared with previous models that utilize PLMs, our performance improvements suggest that the incorporation of an explicit content plan can provide an effective guiding signal for the model to achieve better generation results.

Evaluation on Content Planning. Next, we compare our content planner with other pipeline models in terms of content planning performance. Following Zhao et al. (2020), we report the results on planning accuracy (P-A) and planning BLEU-2 score (B-2) against the human-generated plans⁴.

⁴ The human-generated plans are provided in the enriched WebNLG dataset (Ferreira et al., 2018).
Model         | Seen         | Unseen       | Overall
              | B.     M.    | B.     M.    | B.     M.
ADAPT†        | 60.59  0.44  | 10.53  0.19  | 31.06  0.31
TILB-SMT†     | 54.29  0.42  | 29.88  0.33  | 44.28  0.38
MELBOURNE†    | 54.52  0.41  | 33.27  0.33  | 45.13  0.37
GTR-LSTM†     | 54.00  0.37  | 29.20  0.28  | 37.10  0.31
Transformer†  | 56.28  0.42  | 23.04  0.21  | 47.24  0.39
Step-by-Step† | 53.30  0.44  | 38.23  0.34  | 47.24  0.39
PLANENC†      | 64.42  0.45  | 38.23  0.37  | 52.78  0.41
Based on PLMs:
Switch-GPT    | 60.98  0.43  | 40.67  0.34  | 52.17  0.40
T5‡           | 63.90  0.46  | 52.80  0.41  | 57.10  0.44
T5+Prefix‡    | 64.71  0.45  | 53.67  0.42  | 59.70  0.44
Ours          | 65.42  0.48  | 54.52  0.44  | 60.51  0.46

Table 3: Text generation results on the WebNLG dataset, where B. and M. represent the BLEU and METEOR metrics. † and ‡ results are cited from Zhao et al. (2020) and Ribeiro et al. (2020), respectively.

In addition, we examine two ablated variants of our content planner, obtained by either removing the CRF layer (w/o CRF) or using randomly initialized parameters instead of the pre-trained BERT (w/o PLMs). Table 4 lists the results. We see that our content planner outperforms all the baselines on both measures. Moreover, the results show that both the CRF layer and the pre-trained parameters positively contribute to the overall performance, which further justifies our design of the content planner.

Model         | Seen         | Unseen       | Overall
              | Acc.   B-2   | Acc.   B-2   | Acc.   B-2
Transformer†  | 0.56   74.30 | 0.09   20.90 | 0.34   49.30
GRU†          | 0.56   75.80 | 0.10   25.40 | 0.35   52.20
Step-by-Step† | 0.49   73.20 | 0.44   68.00 | 0.47   70.80
PLANENC†      | 0.63   80.80 | 0.61   79.30 | 0.62   80.10
Ours          | 0.74   86.01 | 0.70   83.79 | 0.72   84.97
  w/o CRF     | 0.67   82.92 | 0.63   80.65 | 0.65   81.73
  w/o PLMs    | 0.70   84.05 | 0.65   81.98 | 0.68   83.02

Table 4: Evaluation results on content planning. † results are copied from Zhao et al. (2020).

6.3 Human Evaluation

We also conduct a human evaluation to assess our model, using graders proficient in English from an internal grading platform. We randomly selected 200 samples from the ToTTo validation set. For each sample, we first use our sequence generator to produce the result with the content plan (CP) predicted by the content planner. Next, we randomly shuffle the predicted content plan and generate five different results (Shuffled CP). For comparison, we also include results of BERT-to-BERT and T5-3B using greedy decoding. All generated results, plus the reference sentence, are evaluated by three graders on a 3-point Likert scale (0, 1, or 2) for each of the following features⁵:

• Faithfulness: Whether the sentence is factually consistent with the input data.
• Fluency: Whether the sentence is fluent and easy to understand.
• Accuracy: How accurately the sentence follows the input content plan⁶.

                    Faithfulness  Fluency  Accuracy
Agreement           0.663         0.617    0.518
Reference           1.819         1.762    1.753
BERT-to-BERT        1.589         1.593    -
T5-3B               1.701         1.696    -
Ours (CP)           1.794         1.753    1.742
Ours (Shuffled CP)  1.778         1.746    1.552

Table 5: Human evaluation results.

Table 5 lists the results, with the first row showing strong inter-annotator agreement as measured by Fleiss' kappa coefficient (Fleiss et al., 1971). Compared with BERT-to-BERT and T5-3B, our model achieves the best results on both measures. Furthermore, on the faithfulness and fluency metrics, our model with both CP and Shuffled CP performs comparably with the reference sentence (Sign Test with p-value > 0.4). On the accuracy metric, our CP model also performs comparably with the reference as judged by the Sign Test. However, with a randomly shuffled content plan, our model (Shuffled CP) fails to match the accuracy of the reference (p-value < 0.05). Our analysis is that the random content plans could contain patterns that are rare or unseen during training. In such cases, our model might fail to produce results that precisely follow the content plan, resulting in a lower accuracy score. Nonetheless, the human results suggest that, while being able to produce fluent and correct sentences, our model is also highly controllable. Finally, we note that on the accuracy metric, even the reference sentence does not score a perfect 2.0. This suggests that our simple heuristic delexicalizer F, introduced in §3, still lags behind human performance. We leave the design of a better F to future work.

⁵ More evaluation details are provided in Appendix A.
⁶ As BERT-to-BERT and T5-3B do not take the content plan as input, we do not report their accuracy scores.
7 Further Analysis

In this section, we present and discuss more empirical analyses of the proposed model.

7.1 Evaluation on Generation Diversity

Setup. We first evaluate the ability of different models to generate diverse results on the overall ToTTo validation set. We compare our model with two strong baselines, BERT-to-BERT (B2B) and T5-3B. Given the input data, the baseline models generate results with different decoding strategies⁷, including greedy search, beam search (beam size of 10), top-k sampling (k = 50) (Fan et al., 2018), and Nucleus sampling (p = 0.9) (Holtzman et al., 2020). For our model, to generate diverse results, we simply vary the input content plan and use greedy decoding. We use two variants of the input content plan: (1) the content plan predicted by the content planner (Predict), or (2) the reference content plan (Oracle). For each variant, five results are generated by either using the input content plan (CP), or using five randomly shuffled forms of the content plan (Shuffled CP). The outputs are expected to vary in the latter case only.

Metric. To measure the output quality, BLEU and PARENT scores are reported. To evaluate the generation diversity, we use the Self-BLEU (Zhu et al., 2018) and iBLEU (Sun and Zhou, 2012) metrics⁸.

Results. Table 6 lists the results, in which our model ranks best on all metrics. On the quality metrics, we observe notable performance improvements from our model by using the reference content plan (Oracle), suggesting that the choice of content plan has a significant impact on the outputs. By shuffling the content plan, our model shows the largest decrease in BLEU and PARENT, showing that the variation of the content plan encourages our model to produce diverse results that have different structures than the reference.

Furthermore, we see that, even with different decoding strategies, the baseline models still generate results that are very similar to the ones acquired from greedy search, with their BLEU and PARENT scores relatively unchanged. The results on the diversity metrics also verify the superiority of our model, which outperforms the strong T5-3B model by over 57 and 8 points on Self-BLEU and iBLEU⁹. The performance gains suggest that the controllable property of our model is beneficial in producing high-quality as well as diverse results.

Model   Decoding               BLEU   PARENT  Self-BLEU↓  iBLEU
B2B     Greedy                 44.15  53.08   100.00      15.32
        Beam                   41.58  49.87    75.04      18.26
        Top-k                  42.47  50.43    82.20      17.54
        Nucleus                42.92  50.91    84.26      17.48
T5-3B   Greedy                 48.43  57.80   100.00      18.74
        Beam                   45.12  55.20    83.68      19.36
        Top-k                  46.31  55.90    88.86      19.28
        Nucleus                46.53  56.30    90.11      19.20
Ours    Predict, CP            49.10  58.27   100.00      19.28
        Predict, Shuffled CP   40.75  51.96    25.91      27.42
        Oracle, CP             54.43  62.75   100.00      23.54
        Oracle, Shuffled CP    42.99  56.17    26.90      29.01

Table 6: Experimental results on the overall ToTTo validation set, where ↓ means lower is better.

7.2 Ablation Study

In this part, we evaluate the importance of each component of our model on the overall ToTTo validation set. Specifically, we study the effect of the content plan (CP) and the RL training by removing them iteratively. In addition to BLEU and PARENT, we measure the structure of the model output against the reference content plan with an S-BLEU metric. Given the data T, the reference content plan C, and the model output S', S-BLEU is defined as B(C, C'), where B(·, ·) measures the BLEU score, C' = F(T, S'), and F is the heuristic delexicalizer described in §3. The results are listed in Table 7, with the first row showing the baseline results of the BART model.

Model  CP  RL  Type     BLEU   PARENT  S-BLEU
1      ✗   ✗   -        47.50  56.92   43.87
2      ✗   ✓   -        48.10  57.34   48.93
3      ✓   ✗   Predict  48.53  57.87   57.92
               Oracle   53.82  61.99   75.59
Ours   ✓   ✓   Predict  49.10  58.27   62.27
               Oracle   54.43  62.75   80.32

Table 7: Ablation studies on the overall ToTTo validation set. Model 1 gives a baseline for the BART model.

Necessity of Content Plan. By comparing models with and without the content plan (model 1 vs. 3 and model 2 vs. ours), we observe that the content plan is an effective guiding signal that leads to better results. Moreover, we see that the Oracle results outperform the Predict results by a large margin, showing that the quality of the content plan is an important factor in the model performance, and future research can focus more on this aspect.

⁷ For each decoding strategy, five results are generated.
⁸ For all evaluation metrics, we use the same hyper-parameters as in the original works that proposed the metrics.
⁹ By definition, models using greedy search get 100 Self-BLEU, as the generated results are always the same.
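Self-BLEU in Table 6 scores how similar a model's five outputs are to one another (100 means identical, as with greedy decoding; lower means more diverse). A simplified sketch, using unigram precision in place of full BLEU and scoring each output against each other output separately (iBLEU, which additionally balances quality against novelty, is omitted here):

```python
def bleu(ref, hyp):
    """Clipped unigram precision: a toy stand-in for sentence-level BLEU."""
    ref_counts = {}
    for tok in ref:
        ref_counts[tok] = ref_counts.get(tok, 0) + 1
    hits = 0
    for tok in hyp:
        if ref_counts.get(tok, 0) > 0:
            ref_counts[tok] -= 1
            hits += 1
    return hits / max(len(hyp), 1)

def self_bleu(outputs):
    """Average each output's best score against the remaining outputs."""
    scores = []
    for i, hyp in enumerate(outputs):
        others = [o for j, o in enumerate(outputs) if j != i]
        scores.append(max(bleu(ref, hyp) for ref in others))
    return sum(scores) / len(outputs)

identical = [["a", "b", "c"]] * 3
disjoint = [["a", "b"], ["c", "d"], ["e", "f"]]
print(self_bleu(identical))  # 1.0: no diversity at all
print(self_bleu(disjoint))   # 0.0: fully diverse outputs
```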
Table: Title[George Washington Colonials football] Date[1956] Game[Sun Bowl] Result[W 13-0] Opponent[Texas Western] Notes[Bowl Games]
Reference: In 1956, George Washington Colonials scored 13–0 against Texas Western at the Sun Bowl.

T5-3B: Greedy Search
George Washington Colonials football won the Sun Bowl (1956) over Texas Western.

T5-3B: Beam Search
1: George Washington Colonials football won the 1956 Sun Bowl against Texas Western.
2: George Washington Colonials won the 1956 Sun Bowl against Texas Western.
3: In 1956, George Washington Colonials won the Sun Bowl against Texas Western.
4: George Washington Colonials won the Sun Bowl against Texas Western in 1956.
5: George Washington Colonials football won the 1956 Sun Bowl over Texas Western.

Ours: CP
ICP: Date → Title → Result → Opponent → Game
In 1956, George Washington Colonials football team scored 13–0 against Texas Western in the Sun Bowl.

Ours: Shuffled CP
ICP: Date → Result → Opponent → Title → Game
In 1956, with a 13–0 victory over Texas Western, the Colonials football team won the Sun Bowl.
ICP: Title → Game → Date → Result → Opponent
George Washington Colonials football won the Sun Bowl in 1956 with a 13–0 victory over Texas Western.
ICP: Game → Result → Title → Opponent → Date
In the Sun Bowl, a 13–0 victory for George Washington Colonials over Texas Western in 1956.
ICP: Title → Opponent → Game → Result → Date
George Washington Colonials football team defeated Texas Western in the Sun Bowl, with 13–0, in 1956.
ICP: Opponent → Game → Date → Result → Title
The Colonials defeated Texas Western in the Sun Bowl 1956, with a 13–0 score, by George Washington Colonials.

Table 8: Case study on ToTTo dataset. Given the input data, we present the generated results from various models using different decoding strategies. ICP denotes the "input content plan". (Best viewed in color)
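The "Shuffled CP" variants above can be produced by permuting the slot order of the predicted content plan. A minimal sketch, assuming plans are represented as lists of slot keys (the helper name is ours, not from the released code):

```python
import random

def shuffle_plan(plan, num_variants=5, seed=0):
    """Return num_variants distinct permutations of a content plan,
    excluding the original order. plan is an ordered list of slot keys,
    e.g. "Date → Title → Result → Opponent → Game" split on the arrows."""
    rng = random.Random(seed)          # seeded for reproducibility
    variants, seen = [], {tuple(plan)}
    for _ in range(1000):              # cap attempts for very short plans
        if len(variants) == num_variants:
            break
        permuted = plan[:]
        rng.shuffle(permuted)
        if tuple(permuted) not in seen:  # skip duplicates and the original order
            seen.add(tuple(permuted))
            variants.append(permuted)
    return variants
```

Each variant can then be joined back into the arrow notation (e.g. `" → ".join(variant)`) and fed to the sequence generator as the input content plan.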
Tripleset: (Alan Bean | nationality | United States), (Alan Bean | occupation | Test pilot), (Alan Bean | birthPlace | Wheeler, Texas), (Alan Bean | selectedByNASA | 1963), (Alan Bean | status | "Retired")
Reference: Alan Bean is a US national born in Wheeler, Texas. He is a retired test pilot who joined NASA in 1963.

Ours (Shuffled CP)
ICP: nationality → birthPlace → selectedByNASA → status → occupation
Alan Bean is a US national who was born in Wheeler, Texas. He was selected by NASA in 1963 and is now retired. He was a test pilot.
ICP: nationality → occupation → selectedByNASA → birthPlace → status
Alan Bean is a US national who served as a test pilot and was selected by NASA in 1963. He was born in Wheeler, Texas and is now retired.
ICP: selectedByNASA → occupation → status → birthPlace → nationality
Alan Bean was selected by NASA in 1963 as a test pilot. He is now retired. He was born in Wheeler, Texas and is a United States national.

Table 9: Case study of our model's results on WebNLG dataset. (Best viewed in color)
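Randomly shuffled plans such as those above can contain slot transitions rarely seen in training, which is the failure mode discussed in the error analysis of Section 7.3. One possible screen uses bigram statistics over the training content plans; the following is our own sketch of that idea, not part of the released code:

```python
from collections import Counter

def plan_bigrams(plan):
    """Adjacent slot-key pairs of a content plan, e.g. ("Date", "Title")."""
    return list(zip(plan, plan[1:]))

def build_bigram_counts(training_plans):
    """Count every observed slot transition across the training plans."""
    counts = Counter()
    for plan in training_plans:
        counts.update(plan_bigrams(plan))
    return counts

def is_plausible(plan, counts, min_count=1):
    # Keep a shuffled plan only if every slot transition was observed
    # at least min_count times in the training plans.
    return all(counts[bg] >= min_count for bg in plan_bigrams(plan))
```

Shuffled plans rejected by `is_plausible` would simply be re-sampled before being passed to the generator.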
Effect of RL. By comparing the models trained with and without RL (model 1 vs. 2 and model 3 vs. ours), we see that training with our proposed RL objective consistently improves the model performance. The most notable improvement is observed in S-BLEU, which means that the generated outputs better follow the input content plan. This is in line with our hypothesis that the reward function in Eq. (4) helps to improve the model's adherence to the output structure defined by the content plan.

7.3 Case Study

To gain more insights into our model, we present generated examples from the ToTTo and WebNLG datasets10 in Table 8 and Table 9, respectively.

Quality. In Table 8, we compare our model with the predicted content plan against T5-3B. We see that T5-3B fails to produce the key game result (i.e. 13-0) in its outputs. In contrast, by following the content plan, our model is able to maintain all key information in its generated results.

Diversity and Controllability. Next, we examine output diversity and controllability. For the T5-3B model, when using beam search, only the position of the term "1956" varies, showing its reduced ability to generate diverse outputs. For our model, varying the content plan leads to outputs with diverse structures. Furthermore, the results show that our model is able to control not only the intra-sentence output structure, as shown in Table 8, but also the inter-sentence output structure, as shown in Table 9.

Error Analysis. We show one failure case in the bottom-right cell of Table 8, in which the model repeats the Title key twice in the output. Our explanation for this error is that a randomly shuffled content plan might contain patterns that are rarely seen in training. One possible solution is to filter out rare content plan patterns via statistical approaches such as bigram statistics.

8 Conclusion
In this study, we propose a new Plan-then-Generate (PlanGen) framework for data-to-text generation which can be easily applied to data with different structures. Extensive experiments and analyses are conducted on two benchmark datasets. Both automatic and human evaluation results demonstrate that our model is highly controllable. Furthermore, compared with previous studies, our model achieves better results both in terms of generation quality and output diversity. Our code, models, and other related resources can be found at https://github.com/yxuansu/PlanGen/.

10 More examples are shown in Appendix B.
Acknowledgments

The authors wish to thank Ehsan Shareghi, Zaiqiao Meng, Piji Li, and Benjamin Muller for their insightful discussions and support. Many thanks to our anonymous reviewers for their suggestions and comments.
A Details of Human Evaluation Setup

To perform human evaluation, we randomly sample 200 samples from the ToTTo validation set. For each sampled data point, we use each baseline model (BERT-to-BERT and T5-3B) to produce one result. As for our model, we produce 6 different results (one with the predicted content plan, and five with randomly shuffled versions of the predicted content plan). Therefore, for each case, we have 9 different results (1 from BERT-to-BERT, 1 from T5-3B, 6 from our model, and 1 reference). To reduce human bias, we randomly shuffle these 1800 data points before presenting them to three annotators. Each annotator is asked to assess all 1800 data points. Because BERT-to-BERT and T5-3B do not take the content plan as input, we only measure the accuracy score for the results generated by our model and for the reference sentence. Note that the accuracy score of the reference sentence is measured against the reference content plan. In Figure 3, we show an example of the human evaluation interface.
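The construction described above (9 outputs per sample, 200 samples, shuffled into 1800 annotation items) can be sketched as follows; the dictionary field names are hypothetical and not the authors' actual data format:

```python
import random

def build_annotation_items(samples, seed=0):
    """samples: list of dicts with one output per system, e.g.
    {"bert2bert": "...", "t5_3b": "...", "ours": [six outputs], "reference": "..."}.
    Returns a flat, shuffled list of annotation items."""
    items = []
    for idx, s in enumerate(samples):
        items.append({"sample": idx, "system": "bert2bert", "text": s["bert2bert"]})
        items.append({"sample": idx, "system": "t5_3b", "text": s["t5_3b"]})
        for j, text in enumerate(s["ours"]):   # 1 predicted CP + 5 shuffled CPs
            items.append({"sample": idx, "system": f"ours_{j}", "text": text})
        items.append({"sample": idx, "system": "reference", "text": s["reference"]})
    random.Random(seed).shuffle(items)         # hide any system-order cues
    return items
```

Shuffling the flat list (rather than presenting the 9 outputs of a sample together) is what prevents annotators from ranking systems against each other directly.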

Figure 3: Example of Human Evaluation Interface

B More Examples of Generated Results

In this part, we provide more generated examples from our model. The generated results on samples from the WebNLG and ToTTo datasets are shown in Table 10 and Table 11, respectively. From the results, we can see that our model is able to generate fluent and diverse sentences while maintaining the structure defined by the desired content plan. In particular, our model is able to control the output structure both on the inter-sentence level (i.e. the structure across multiple sentences), as shown in Table 9, and on the intra-sentence level (i.e. the structure within a single sentence), as shown in Table 11. These results further demonstrate the applicability and generalization ability of our model.
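Whether a generated sentence actually follows its ICP can be spot-checked by recovering the order in which slot values surface in the text and comparing it against the plan. Below is a rough heuristic sketch based on exact substring matching (the function names are ours; the paper's automatic evaluation instead relies on metrics such as S-BLEU over content plans):

```python
def realized_order(text, slot_values):
    """slot_values maps slot key to a surface string expected in the output,
    e.g. {"Date": "1956", "Game": "Sun Bowl", ...}. Returns the slot keys
    ordered by where their value first appears in the text."""
    positions = {k: text.find(v) for k, v in slot_values.items()}
    found = {k: p for k, p in positions.items() if p >= 0}
    return sorted(found, key=found.get)

def follows_plan(text, slot_values, plan):
    # The output follows the plan if the realized slot order equals the
    # plan restricted to the slots actually found in the text.
    order = realized_order(text, slot_values)
    return order == [k for k in plan if k in set(order)]
```

This check is only a heuristic: slot values are often paraphrased in the output (e.g. "400m freestyle" realized as "400 metre freestyle"), so a production version would need fuzzy matching.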
Tripleset: (Allama Iqbal International Airport | location | Pakistan), (Allama Iqbal International Airport | runwayLength | 2900.0), (Allama Iqbal International Airport | cityServed | Lahore), (Allama Iqbal International Airport | operatingOrganisation | Pakistan Civil Aviation Authority)
Reference: Allama Iqbal International Airport is located in Lahore at Pakistan. It has a runway length of 2900 and is operated by the Pakistan Civil Aviation Authority.

Ours (Shuffled CP)
ICP: cityServed → location → runwayLength → operatingOrganisation
Allama Iqbal International Airport serves the city of Lahore and is located in Pakistan. The runway length is 2900.0 and the airport is operated by the Pakistan Civil Aviation Authority.
ICP: operatingOrganisation → cityServed → location → runwayLength
The Pakistan Civil Aviation Authority is the operating organisation of the Allama Iqbal International Airport, which serves the city of Lahore in Pakistan. The airport has a runway length of 2900.0.
ICP: runwayLength → cityServed → location → operatingOrganisation
Allama Iqbal International Airport has a runway length of 2900.0 and serves the city of Lahore in Pakistan. The airport is operated by the Pakistan Civil Aviation Authority.
ICP: location → operatingOrganisation → cityServed → runwayLength
Allama Iqbal International Airport is located in Pakistan and is operated by the Pakistan Civil Aviation Authority. The airport serves the city of Lahore and has a runway length of 2900.0.
ICP: runwayLength → operatingOrganisation → cityServed → location
Allama Iqbal International Airport has a runway length of 2900.0 and is operated by the Pakistan Civil Aviation Authority. The airport serves the city of Lahore and is located in Pakistan.
Tripleset: (Turkey | leaderTitle | president of Turkey), (Turkey | leader | Ahmet Davutoglu), (Turkey | capital | Ankara), (Turkey | largestCity | Istanbul), (Atatürk Monument (Izmir) | location | Turkey)
Reference: The Ataturk Monument (Izmir) is located in Turkey, whose largest city is Istanbul and whose capital is Ankara. President of Turkey is the official title of the Turkish leader and his name is Ahmet Davutoglu.

Ours (Shuffled CP)
ICP: location → leaderTitle → capital → leader → largestCity
The Atatürk Monument (Izmir) is located in Turkey, where the leader is known as the president and the capital city is Ankara. Ahmet Davutoglu is the leader of Turkey and the largest city is Istanbul.
ICP: capital → leaderTitle → leader → largestCity → location
Ankara is the capital of Turkey, which is lead by president Ahmet Davutoglu. The largest city is Istanbul and the Atatürk Monument is located in Turkey.
ICP: largestCity → location → capital → leaderTitle → leader
Istanbul is the largest city in Turkey, where the Ataturk Monument is located and the capital is Ankara. The leader of Turkey is known as the president and Ahmet Davutoglu is the leader.
ICP: location → capital → leader → largestCity → leaderTitle
The Atatürk Monument (Izmir) is located in Turkey, where the capital is Ankara, the leader is Ahmet Davutoglu and the largest city is Istanbul. The leader of Turkey is known as the president of Turkey.
ICP: location → largestCity → capital → leaderTitle → leader
The Atatürk Monument (Izmir) is located in Turkey, where the largest city is Istanbul and the capital is Ankara. The leader of Turkey is known as the president and Ahmet Davutoglu is the leader.

Table 10: Examples of generated results from the WebNLG dataset, where ICP denotes "input content plan". The expressions corresponding to different contents are highlighted with different colors. (Best viewed in color)

Input Table
Page_Title[List of New Zealand records in swimming] Event[400m freestyle] Time[4:03.63] Name[Lauren Boyle]
Club[New Zealand] Date[29 July 2012] Meet[Olympic Games] Location[London, United Kingdom] Section_Title[Women]
Reference Sentence
At the 2012 Olympics in London, Boyle was fourth fastest in the heats of the 400m freestyle in a New Zealand record 4:03.63.
Controlled Data-to-Text Generation
ICP: Date → Name → Event → Time → Location → Meet → Page_Title
On 29 July 2012, New Zealand’s Lauren Boyle finished the 400 metre freestyle in 4:03.63 at London Olympics, which was a
New Zealand record.
ICP: Page_Title → Location → Meet → Name → Event → Time → Date
The New Zealand swimming record was set in London at the 2012 London Olympics, where Lauren Boyle finished the 400 metre
freestyle in 4:03.63, in July 2012.
ICP: Meet → Location → Date → Name → Time → Event → Page_Title
At the Olympic Games in London in July 2012, Lauren Boyle finished in 4:03.63 in the 400 metre freestyle, setting a New Zealand
record.
ICP: Name → Event → Date → Meet → Location → Page_Title → Time
Lauren Boyle competed in the 400 metre freestyle at the 2012 Olympic Games in London, finishing with a New Zealand record
time of 4:03.63.
ICP: Location → Event → Name → Time → Page_Title → Date → Meet
In London, in the 400m freestyle, Lauren Boyle finished in 4:03.63, setting a New Zealand record at the 2012 Olympic Games.
Input Table
Page_Title[Kazunari Murakami] Result[Loss] Opponent[Akihiro Gono] Method[KO (head kick)]
Date[October 13, 1995] Event[Lumax Cup: Tournament of J ’95] Section_Title[Mixed martial arts record]
Reference Sentence
Murakami began his career fighting in the Lumax Cup: Tournament of J ’95 on October 13, 1995, where he lost his first fight to
Akihiro Gono by knockout.
Controlled Data-to-Text Generation
ICP: Page_Title → Date → Event → Opponent
Kazunari Murakami made his debut on October 13, 1995 at Lumax Cup: Tournament of J ’95, losing to Akihiro Gono by KO.
ICP: Event → Date → Page_Title → Opponent
At Lumax Cup: Tournament of J ’95 on October 13, 1995, Kazunari Murakami lost to Akihiro Gono by KO.
ICP: Opponent → Page_Title → Event → Date
Akihiro Gono defeated Kazunari Murakami at Lumax Cup: Tournament of J ’95 on October 13, 1995.
ICP: Date → Opponent → Page_Title → Event
On October 13, 1995, Akihiro Gono defeated Kazunari Murakami at Lumax Cup: Tournament of J ’95.
ICP: Event → Page_Title → Opponent → Date
At Lumax Cup: Tournament of J ’95, Kazunari Murakami lost to Akihiro Gono by KO on October 13, 1995.
Input Table
Page_Title[Reform Party of the United States of America] Year[2008]
Name[Frank McEnulty] Home_state[California] Section_Title[Presidential tickets]
Reference Sentence
Frank McEnulty of California, was nominated to be the Reform Party’s 2008 presidential candidate.
Controlled Data-to-Text Generation
ICP: Year → Page_Title → Name → Home_state → Section_Title
In 2008, the Reform Party of the United States of America nominated Frank McEnulty of California as its presidential candidate.
ICP: Page_Title → Section_Title → Home_state → Year → Name
Reform Party of the United States of America nominated its first presidential nominee from California in 2008, Frank McEnulty.
ICP: Home_state → Name → Section_Title → Year → Page_Title
California’s Frank McEnulty was nominated as presidential candidate in 2008 by the Reform Party of the United States of America.
ICP: Page_Title → Name → Home_state → Section_Title → Year
Reform Party of the United States of America nominated Frank McEnulty of California as its presidential candidate in 2008.
ICP: Year → Name → Page_Title → Home_state → Section_Title
In 2008, Frank McEnulty of Reform Party of the United States of America from California ran for the presidential election.

Table 11: Examples of generated results from the ToTTo dataset, where ICP denotes "input content plan". The expressions corresponding to different contents are highlighted with different colors. (Best viewed in color)

