Using Genetic Algorithms To Create Meaningful Poetic Text
To cite this article: Ruli Manurung, Graeme Ritchie & Henry Thompson (2012), ‘Using genetic algorithms to create meaningful poetic text’, Journal of Experimental & Theoretical Artificial Intelligence, 24:1, 43–64, DOI: 10.1080/0952813X.2010.539029
Journal of Experimental & Theoretical Artificial Intelligence
Vol. 24, No. 1, March 2012, 43–64
1. Introduction
The task of automatically generating texts that would be regarded as poetry is challenging,
as it involves many aspects of language. Moreover, there is no well-defined route by which
poems are created by human poets. However, it is possible to set down some properties
that a poetic text should manifest, which suggests that the generation might be achieved by
an algorithm that couples a suitably effective search process with a relatively separate
evaluation of the end-product; stochastic search, and in particular evolutionary
algorithms, operate in this way.
We start here with a brief survey of work on automatic poetry generation (Section 2),
before setting out a restricted definition of poetry (Section 3) as a text that embodies
meaningfulness, grammaticality and poeticness, and a model of its generation as a
stochastic search process (Section 4). We then describe McGONAGALL (Section 5), our
implemented system that adopts this model and uses genetic algorithms (Mitchell 1996) to
generate texts that are syntactically well-formed, meet certain pre-specified patterns of
metre (Attridge 1995) and broadly convey some given meaning. Finally, we present results
of some experiments conducted with McGONAGALL (Section 6) and conclude this article
with some discussion about future avenues of research (Section 7).
2.3. COLIBRI
COLIBRI (Diaz-Agudo, Gervás, and González-Calero 2002) uses case-based reasoning
(Aamodt and Plaza 1994) to generate formal Spanish poetry that satisfies the constraints
of strophic forms such as romances, cuartetos and tercetos encadenados. Its input is a list of
keywords representing the meaning to be conveyed, and a specification of the particular
strophic form required.
Figure 3. COLIBRI: (a) input, (b) verse pattern, (c) substituted and (d) revised.
Figure 3 shows how COLIBRI works. In (a) we see keywords representing the required
meaning for the poem. In (b) a case is retrieved from a corpus of human-authored poems
with an appropriate strophic form. In (c) the keywords from (a), marked in boldface, have
been placed in the text, while maintaining syntactic well-formedness. Finally, in (d) we see
the result of a revision, where the words marked with * have been substituted in to ensure
the metre and rhyme specified by the strophic form.
COLIBRI differs from the systems in the previous sections because, as well as aiming
for syntactic well-formedness (cf. Section 2.1) and appropriate rhythm (like RKCP), its
goal is a text that is ‘about’ a given message, even if this is approximated rather trivially.
However, when COLIBRI revises a poem to maintain metre and rhyme it pays no heed to
meaning. Modifications to the text at this stage may destroy whatever meaning has already
been set up (cf. Section 4.1).
Figure 4. Sample (a) input semantics and (b) output limerick from our chart generator.
expensive, even in our tests on a tiny ‘toy’ grammar and lexicon. It is also not very flexible,
as it can generate only immaculate results – if the linguistic resources are insufficient to
produce the required output, it fails completely. We believe that a poetry generator should
be robust, in the sense that it produces the best possible output with the resources at its
disposal.
3. Defining poetry
There is no universally accepted definition of what counts as poetry, but we will argue for
the use of a particular restricted definition in our investigations.
‘Poetry is a literary form in which language is used in a concentrated blend of sound and
imagery to create an emotional response.’ (Levin 1962)
This points to a strong interaction, or unity, between the form and content of a poem.
The diction and grammatical construction of a poem affect the message that it conveys to
the reader over and above its obvious denotative meaning. This contrasts with the
commonly-held linguistic notion that a text serves as a medium for conveying its semantic
content.
program that is claimed to generate poetry, to what extent can we allow deviations
(e.g. semantic incoherence) in the text, by invoking ‘poetic license’?
We believe the opposite of Boden’s claim to be the case: automatic generation of poetry
is harder than that of prose. Poetry is further constrained by rules of form that prose need
not adhere to. Using true poetic license to turn a phrase in an effective way requires even
greater mastery of the language than that for producing prose. Deviations from the rules
Table 1. Poetry generators (Section 2) and how they address our constraints.

                   Grammaticality   Poeticness   Meaningfulness
ALFRED             Yes              No           No
RKCP               Yes              Yes          No
COLIBRI            Yes              Yes          Yes
Chart generation   Yes              Yes          Yes
4. Evolving poetry
We now turn to the question of how to generate texts automatically that satisfy the three
constraints of grammaticality, meaningfulness and poeticness.
space, through the design of representation and genetic operators. In doing this, we can
call on the immense amount of linguistic work on mechanisms for ensuring
grammaticality.
The remaining constraints of meaningfulness and poeticness will be implemented as
preferences via the evaluation functions of the genetic algorithm.
[Figure omitted: LTAG elementary trees for the sentence ‘John read a book yesterday’, the derived tree, and the derivation tree with Gorn addresses; panel (d) illustrates the composition of the elementary trees.]
For example, book substitutes into read at address 2.2, the object NP node. The process of
composing the elementary trees is illustrated in (d).
The derivation tree can therefore be seen as the basic formal object that is constructed
during sentence generation from a semantic representation (Joshi 1987), and is the
appropriate data structure on which to perform nonmonotonic operations in our
stochastic generation framework (Section 5.2). Essentially, the LTAG derivation tree
forms the genotypic representation of our candidate solutions, from which we can compute
the phenotypic information of semantic (Section 5.4) and prosodic (Section 5.3) features via
the derived tree.
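To make the genotype concrete, the sketch below (an illustration only, not McGONAGALL's actual data structures; the class and field names are hypothetical) encodes a derivation tree as nodes that name elementary trees and record the Gorn address at which each child substitutes or adjoins into its parent. The derived tree, and hence the semantic and prosodic features, would be computed by walking such a structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DerivationNode:
    """One node of an LTAG derivation tree: the genotype of a candidate.

    `tree_name` identifies an elementary tree in the grammar; `address` is the
    Gorn address (a list of child indices) in the parent's elementary tree at
    which this node substitutes or adjoins.
    """
    tree_name: str
    operation: str = "substitution"       # or "adjunction"
    address: Optional[List[int]] = None   # Gorn address within the parent tree
    children: List["DerivationNode"] = field(default_factory=list)

# Derivation tree for 'John read a book yesterday' (cf. the figure above):
# 'book' substitutes into 'read' at Gorn address 2.2, the object NP node,
# and the determiner tree substitutes into 'book'.
book = DerivationNode("alpha_book", "substitution", [2, 2],
                      children=[DerivationNode("alpha_a", "substitution", [1])])
read = DerivationNode("alpha_read", children=[
    DerivationNode("alpha_John", "substitution", [1]),
    book,
    DerivationNode("beta_yesterday", "adjunction", []),  # adjoins at the root (empty Gorn address)
])
```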
We adopt a simple ‘flat’ semantic representation (Hobbs 1985) that is often used in
NLG (Koller and Striegnitz 2002). A semantic expression is a set of first order logic literals,
which is logically interpreted as a conjunction of all its members. The arguments of these
literals represent concepts in the domain such as objects and events, while the functors
state relations between these concepts. For example, the representation of the semantics of
the sentence ‘John loves Mary’ is {john(j), mary(m), love(l, j, m)}, where l is the event of j,
who has the property of ‘being John’, loving m, who has the property of ‘being Mary’.
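For illustration, such a flat expression can be encoded directly as a set of literal tuples; the following minimal sketch (our encoding here is illustrative, not the system's internal representation) expresses ‘John loves Mary’ and forms the conjunction of two expressions by set union.

```python
# A flat semantic expression is a set of literals; here a literal is a tuple
# (functor, arg1, arg2, ...), and variables are plain strings.
john_loves_mary = {
    ("john", "j"),            # j has the property of being John
    ("mary", "m"),            # m has the property of being Mary
    ("love", "l", "j", "m"),  # l is the event of j loving m
}

# The expression denotes the conjunction of its members, so combining two
# expressions (after any variable bindings) is simply set union:
yesterday = {("yesterday", "l")}   # hypothetical extra literal qualifying event l
combined = john_loves_mary | yesterday
```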
The semantic expression of a tree is the union of the semantic expressions of its
constituent elementary trees, with appropriate binding of variables during substitution and
adjunction through the use of semantic signatures to control predicate-argument structure;
cf. Stone and Doran (1997) and Kallmeyer (2002). Ultimately semantic information
[Figure 7 omitted: substitution of the ‘John’ and ‘Mary’ NP trees into the subject and object positions of a transitive tree, with the semantic signatures (Y with A, Z with B) unified during substitution.]
originates from the lexicon, where words are annotated with the meanings they convey.
Figure 7 shows an example of the substitution of two noun phrases as the subject and
object of a transitive tree. During the substitution of John at the subject NP position, the
signatures Y and A are unified. Likewise, for the substitution of Mary at the object NP
position, the signatures Z and B are unified. From this, we can recover the semantics of
the derivation S = {love(X, Y, Z), john(_, Y), mary(_, Z)}.
Each word is associated with its phonetic spelling, taken from the CMU pronouncing
dictionary (Weide 1996). Vowels are marked for lexical stress, with 0 for no stress, 1 for
primary stress and 2 for secondary stress. For example, the spelling of ‘dictionary’ is
[D, IH1, K, SH, AH0, N, EH2, R, IY0]. For monosyllables, closed class words (e.g. the)
receive no stress, and open class words (e.g. cat) receive primary stress.
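As a rough illustration of how a lexical stress pattern can be read off a CMU-style spelling (the helper below is a hypothetical sketch, not part of the system), the stress digits on vowel phones are collected and then overridden for monosyllables according to the open/closed-class rule above.

```python
def stress_pattern(phones, closed_class=False):
    """Extract lexical stress values (0, 1, 2) from a CMU-style phonetic spelling.

    Vowel phones end in a stress digit, e.g. 'IH1'.  For monosyllables the digit
    is overridden: closed-class words receive 0, open-class words receive 1.
    """
    stresses = [int(p[-1]) for p in phones if p[-1].isdigit()]
    if len(stresses) == 1:                  # monosyllable
        stresses = [0] if closed_class else [1]
    return stresses

# 'dictionary' -> [1, 0, 2, 0]
print(stress_pattern(["D", "IH1", "K", "SH", "AH0", "N", "EH2", "R", "IY0"]))
# 'the' (closed-class monosyllable) -> [0]
print(stress_pattern(["DH", "AH0"], closed_class=True))
```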
Figure 9. Target forms for (a) a limerick, (b) a haiku, and (c) ‘The Lion’.
candidate solution. Conversely, when deleting content, operators try to remove extraneous
items, i.e. semantics realised by the candidate solution but not present in the target
semantics.
[Figure omitted: an example candidate solution (a derived tree fragment containing ‘big ... head’) and the candidate form read off it, [01, 1n, 0n, 11, 01, 11, 11, b].]
The minimum edit distance between the candidate and target forms is the minimal sum of costs of operations (symbol insertion, deletion and substitution) that
transform one string into another. The minimum edit distance can be efficiently computed
(Jurafsky and Martin 2000) in a way that produces a pairwise syllable alignment between
candidate and target, thus indicating the operations that yield the minimum cost. The costs
are shown in Table 2(a)–(c), which reflect our intuitions, as follows:
(i) There should be no penalty for aligning candidate unstressed, stressed or
linebreak syllables with target syllables of that same class, nor where
non-linebreak candidate syllables align with a wildcard in the target.
(ii) Linebreaks cannot align with non-linebreaks, in either direction.
(iii) Having a linebreak in the candidate where none exists in the target (enjambment,
the running on of a sentence from one line of a poem to the next) should be
relatively expensive. Allocating no penalty for deletion of a candidate linebreak
lets a line consist of more than one clause.
(iv) All other operations should incur some penalty. Insertion and deletion should be
costlier for stressed syllables than for unstressed syllables.
Our candidate forms indicate lexical stress patterns as if the words were pronounced in
isolation. Within poetic text, context can affect stress. To compensate for this, the system
iterates over the minimum edit distance alignment, detecting certain patterns and adjusting
the similarity value. Two types of pattern are implemented: two consecutive deletions, or
two consecutive insertions, of non-linebreaks increase the cost by 1; the destressing of a
stressed candidate syllable adjacent to a stressed target syllable, or the stressing of an
unstressed candidate syllable adjacent to an unstressed target syllable, decreases the
cost by 1.
Our metre evaluation function, F_metre, takes the value computed by the minimum edit
distance algorithm, adjusts it using our context-sensitive compensation scheme and
normalises it to the interval [0, 1].
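The underlying dynamic program is the standard weighted minimum edit distance. The sketch below is illustrative code only, with the Table 2 costs supplied as mappings; it omits the backtracking needed to recover the syllable alignment and the context-sensitive compensation pass described above.

```python
def min_edit_distance(candidate, target, sub_cost, ins_cost, del_cost):
    """Weighted minimum edit distance between a candidate syllable string and a
    target form, with operation costs passed in as mappings (cf. Table 2).

    candidate: list of candidate syllable symbols, e.g. ['01', '1n', 'b', ...]
    target:    list of target form symbols, e.g. ['w', 's', 'x', 'b', ...]
    """
    n, m = len(candidate), len(target)
    # d[i][j] = cost of transforming candidate[:i] into target[:j]
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + del_cost[candidate[i - 1]]
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + ins_cost[target[j - 1]]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost[candidate[i - 1]],                     # delete candidate syllable
                d[i][j - 1] + ins_cost[target[j - 1]],                        # insert target syllable
                d[i - 1][j - 1] + sub_cost[candidate[i - 1]][target[j - 1]],  # align / substitute
            )
    return d[n][m]

# Example cost entries, per Table 2: sub_cost['01']['w'] == 0, ins_cost['b'] == 10, del_cost['b'] == 0.
```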
Table 2. Operation costs for (a) substitution, (b) insertion, and (c) deletion.

(a) Substitution (candidate syllable vs. target syllable)
Cost   w   s   x   b
01     0   2   0   1
0n     0   2   0   1
11     3   0   0   1
1n     3   0   0   1
2n     1   1   0   1
b      1   1   1   0

(b) Insertion (target syllable)
Cost
w    1
s    3
x    1
b    10

(c) Deletion (candidate syllable)
Cost
01   1
0n   1
11   3
1n   3
2n   2
b    0
Table 3. F_metre scores for candidate texts, evaluated against the target form in Figure 9(c).

Text: The Lion, the Lion, he dwells in the waste. He has a big head and a very small waist. But his shoulders are stark, and his jaws they are grim, and a good little child will not play with him.
Candidate form: [01, 1n, 0n, b, 01, 1n, 0n, b, 01, 11, 01, 01, 11, b, 01, 11, 01, 11, 11, 01, 01, 1n, 0n, 11, 11, b, 01, 01, 1n, 0n, 01, 11, b, 01, 01, 11, 01, 01, 11, b, 01, 01, 11, 1n, 0n, 11, 01, 01, 11, 01, 01, b]
F_metre: 0.787

Text: There was an old man with a beard, who said, ‘it is just as i feared! two owls and a hen, four larks and a wren, have all built their nests in my beard!’
Candidate form: [01, 01, 01, 11, 11, 01, 01, 11, b, 01, 11, b, 01, 01, 11, 01, 01, 11, b, 11, 11, 01, 01, 11, b, 11, 11, 01, 01, 11, b, 11, 11, 11, 01, 11, 01, 01, 11, b]
F_metre: 0.686

Text: Poetry is a unique artifact of the human language faculty, with its defining feature being a strong unity between content and form.
Candidate form: [1n, 0n, 0n, 01, 01, 0n, 1n, 1n, 0n, 2n, 01, 01, 1n, 0n, 1n, 0n, 1n, 0n, 0n, b, 01, 01, 0n, 1n, 0n, 1n, 0n, 1n, 0n, 01, 11, 1n, 0n, 0n, 0n, 1n, 1n, 0n, 01, 11, b]
F_metre: 0.539

Text: John loves Mary.
Candidate form: [11, 11, 1n, 0n, b]
F_metre: 0.264
Table 3 shows F_metre values for various candidate texts, against the target form in
Figure 9(c). The first is Belloc’s actual poem, which itself contains some metrical
imperfections; the second is a limerick by Edward Lear; the third is an extract from an
academic text, containing roughly the correct number of syllables; the last is chosen for its
inappropriateness. The F_metre scores do not conflict radically with our intuitions of poetic
metre.
The first two texts convey a subset of the target; the third text conveys an altogether
different fact about the lion; the fourth text is purposely inappropriate; and the last text,
interestingly, conveys the semantics of the first text as the object of the verb ‘love’. As with
our metre similarity function, we believe that the F_sem scores roughly approximate human
intuitions.
• NLG test. This test measured the ability of McGONAGALL to generate solutions
that optimise meaningfulness, without concern for metre. The evaluation function
was F_sem (Section 5.4). This effectively causes McGONAGALL to behave as a
conventional NLG system, i.e. it attempts to produce a text that conveys a given
input semantics. The target semantics used is a representation of the first two lines
of ‘The Lion’, with a slight alteration where we have replaced the original opening
noun phrase ‘the lion, the lion’ with ‘the african lion’, i.e. ‘The african lion, he dwells
in the waste. He has a big head and a very small waist’. The target semantic
expression is as follows:
S_target = {lion(_, l), african(_, l), dwell(d, l), inside(_, d, was), waste(_, was),
            own(_, l, h), head(_, h), big(_, h), own(_, l, wai), waist(_, wai),
            small(s, wai), very(_, s)}
We conducted two variants of this test, one with the syntactically-aware ‘blind’
operators and one with the syntactically and semantically-aware ‘smart’ operators
(Section 5.2).
• Poetry generation. In this test we measured the ability of McGONAGALL to perform
the task it was designed for, namely the generation of texts that simultaneously
satisfy grammaticality, meaningfulness and poeticness. We took a very simple
approach to combining the metre similarity and semantic similarity functions –
the arithmetic mean of their scores, i.e. F_poetry = (F_metre + F_sem)/2. We used the
same target form and target semantics as in the previous tests. As in the previous
test, two variants were conducted: with the ‘blind’ operators and with the ‘smart’
operators.
6.1. GA parameters
Throughout our testing, we employed one of the simplest selection algorithms,
proportionate selection, which accords each parent a probability of reproducing that is
proportional to its fitness (Bäck, Fogel, and Michalewicz 1997).
Individuals are sampled from this distribution using stochastic universal sampling, which
minimises chance fluctuations in sampling (Baker 1987). To reduce the chances of
premature convergence or stagnation, we used an elitist population of 20% of the entire
population, which was found to yield the best results during an initial experiment with
different ratios. The population size was set to 40, inspired by Cheng (2002), who in turn
points to empirical studies by Goldberg (1989). Each test was run five times, and each run
lasted for 500 iterations.

[Table: minimum, maximum, mean, and standard deviation of the best fitness scores over the runs of each test.]

Stage 1:
  Metrical generation                         0.86  1.00  0.93  0.05
Stage 2:
  NLG test (blind operators)                  0.74  0.93  0.85  0.06
  NLG test (smart operators)                  0.93  0.99  0.97  0.03
Stage 3:
  Poetry generation test (blind operators)    0.60  0.81  0.69  0.06
  Poetry generation test (smart operators)    0.75  0.83  0.79  0.02
The three mutation operators used, along with their probabilities, were creation (0.5),
adjunction (0.3), and deletion (0.2). The choice of these probabilities is intended to account
for the fact that the substitution operation (used by the creation operator) typically
introduces the essential linguistic structures, and that adjunction adds auxiliary
constituents. For crossover, the subtree swapping operator was used. The probabilities of
applying the genetic operators were p_mutation = 0.6 and p_crossover = 0.4, for both the
blind and smart variants.
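The selection and operator scheme described in this section can be sketched as follows. This is illustrative code only, not McGONAGALL's implementation: the mutate and subtree_swap calls are hypothetical stand-ins for the operators of Section 5.2, while the sampling scheme, elitist fraction, and probabilities follow the values quoted above.

```python
import random

P_MUTATION, P_CROSSOVER = 0.6, 0.4           # operator application probabilities
MUTATION_OPS = [("creation", 0.5), ("adjunction", 0.3), ("deletion", 0.2)]
ELITE_FRACTION, POP_SIZE = 0.2, 40

def stochastic_universal_sampling(population, fitnesses, n):
    """Draw n parents with probability proportional to fitness, using a single
    spin with n equally spaced pointers (reducing chance fluctuations)."""
    total = sum(fitnesses)
    step = total / n
    start = random.uniform(0, step)
    pointers = [start + i * step for i in range(n)]
    parents, cumulative, idx = [], 0.0, 0
    for p in pointers:
        while cumulative + fitnesses[idx] < p:
            cumulative += fitnesses[idx]
            idx += 1
        parents.append(population[idx])
    return parents

def next_generation(population, evaluate):
    """One iteration: evaluate, keep a 20% elite, then fill the rest of the
    population with (possibly mutated or crossed-over) sampled parents."""
    fitnesses = [evaluate(ind) for ind in population]
    ranked = sorted(zip(fitnesses, range(len(population))), reverse=True)
    elite = [population[i] for _, i in ranked[: int(ELITE_FRACTION * len(population))]]
    parents = stochastic_universal_sampling(population, fitnesses,
                                            len(population) - len(elite))
    offspring = []
    for ind in parents:
        if random.random() < P_MUTATION:
            op = random.choices([o for o, _ in MUTATION_OPS],
                                weights=[w for _, w in MUTATION_OPS])[0]
            ind = mutate(ind, op)                               # hypothetical operator dispatch
        elif random.random() < P_CROSSOVER:
            ind = subtree_swap(ind, random.choice(population))  # hypothetical subtree-swapping crossover
        offspring.append(ind)
    return elite + offspring
```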
A handcrafted grammar and lexicon were used, with 33 elementary trees and 134 lexical
items, 28 of which were closed class words. Most of the content words were taken from
‘The Bad Child’s Book of Beasts’ (Belloc 1991).
[Plots omitted: each panel shows fitness score (0–1) against iteration (0–500), with maximum and average curves.]
Figure 11. Maximum and average of best fitness scores for (a) metrical generation test, NLG test
with (b) blind and (c) smart operators, poetry generation test with (d) blind and (e) smart operators.
Table 7. Best found solution for NLG test with blind operators.
Table 8. Best found solution for NLG test with smart operators.
Table 9. Best found solution for poetry generation test with blind operators.
In contrast to the previous tests, no optimal solution was found to this much more difficult
multi-objective optimisation task.
The text is metrically perfect. However, in terms of conveying S_target, it performs
suboptimally. The unmatched target literals show that the text is failing to convey three
concepts from S_target, i.e. that the lion is african, that its head is big, and that the waist is
very small. The text contains three instances of the phrase ‘will be rare’, which is unrelated
to the semantics of S_target. While this phrase contributes positively to achieving the target
metre, it leads to unmatched candidate literals. Also, the second line is a repeat of the first.
Although it contributes positively to the metre, our mapping algorithm is unable to match
the semantics it conveys, resulting in unmatched candidate literals that penalise the
semantic similarity score. This does not seem to reflect the intuition that most humans
would have when reading the text, i.e. that it is simply a repetition of the first line.
Table 10. Best found solution for poetry generation test with smart operators.
Unmatched target: [...]
Unmatched candidate: {very(_137, _135), big(_141, _136), waist(_145, _143), very(_153, _152), african(_154, _140)}
The original text that conveys S_target has 22 syllables, whereas a limerick has
34 syllables. This may explain why the generator had to ‘pad out’ the remaining syllables
by adding unrelated content such as ‘will be rare’ and repeating the first line, to the
detriment of semantic similarity.
7. Summary
We have presented a model of poetry generation as stochastic search, where a goal state is
a text that satisfies the constraints of meaningfulness, grammaticality and poeticness.
Acknowledgements
The work described was carried out while the authors were at the School of Informatics, University
of Edinburgh. We would also like to thank members of the University of Aberdeen NLG group and
the anonymous reviewers for their comments in preparing this article.
Notes
1. According to the Oxford English Dictionary, poetic license is the ‘deviation from recognised
form or rule, indulged in by a writer or artist for the sake of effect’.
2. A Gorn address is a list of integers that indicates the position of a node within a tree. The Gorn
address of the i-th child of node n is the Gorn address of n with i appended to the end of the list.
The Gorn address of a root node is the empty list, or sometimes 0 for convenience.
References
Aamodt, A., and Plaza, E. (1994), ‘Case-based Reasoning: Foundational Issues, Methodological
Variations, and System Approaches’, AI Communications, 7, 39–59.
Angeline, P.J. (1996), ‘Genetic Programming’s Continued Evolution’, in Advances in Genetic
Programming (Vol 2), eds. P.J. Angeline and K.E. Kinnear, Cambridge, USA: MIT Press,
pp. 89–110.
Attridge, D. (1995), Poetic Rhythm: An Introduction, Cambridge, UK: Cambridge University Press.
Bäck, T., Fogel, D., and Michalewicz, Z. (Eds.) (1997), Handbook of Evolutionary Computation, New
York: Oxford University Press and Bristol: Institute of Physics Publishing.
Bailey, R.W. (1974), ‘Computer-assisted Poetry: the Writing Machine Is for Everybody’,
in Computers in the Humanities, ed. J.L. Mitchell, Edinburgh, UK: Edinburgh University
Press, pp. 283–295.
Baker, J.E. (1987), ‘Reducing Bias and Inefficiency in the Selection Algorithm’, in Proceedings of the
Second International Conference on Genetic Algorithms, ed. J.J. Grefenstette, Cambridge,
USA: Lawrence Erlbaum Associates, pp. 14–21.
Belloc, J.H.P. (1991), The Bad Child’s Book of Beasts, London, UK: Jonathan Cape.
Boden, M.A. (1990), The Creative Mind: Myths and Mechanisms, London, UK: Weidenfeld and
Nicolson.
Cheng, H. (2002), ‘Modelling Aggregation Motivated Interactions in Descriptive Text Generation’,
Ph.D. thesis, University of Edinburgh, Division of Informatics.
Davis, L., and Steenstrup, M. (1987), ‘Genetic Algorithms and Simulated Annealing: An Overview’,
in Genetic Algorithms and Simulated Annealing, ed. L. Davis, London, UK: Pitman, pp. 1–11.
Diaz-Agudo, B., Gervás, P., and González-Calero, P. (2002), ‘Poetry generation in COLIBRI’, in
Proceedings of the 6th European Conference on Case Based Reasoning (ECCBR 2002),
Aberdeen, UK, pp. 73–102.
Donald, B.R. (n.d.), ‘ALFRED the Mail Agent’, http://www.cs.cornell.edu/home/brd/poem.html.
Falkenhainer, B., Forbus, K.D., and Gentner, D. (1989), ‘The Structure-mapping Engine: Algorithm
and Examples’, Artificial Intelligence, 41, 1–63.
Gervás, P. (2000), ‘WaSP: Evaluation of Different Strategies for the Automatic Generation of
Spanish Verse’, in Proceedings of the AISB’00 Symposium on Creative and Cultural Aspects and
Applications of AI and Cognitive Science, Birmingham, UK: AISB, pp. 93–100.
Gervás, P. (2002), ‘Exploring Quantitative Evaluations of the Creativity of Automatic Poets’,
in Proceedings of the 2nd. Workshop on Creative Systems, Approaches to Creativity in Artificial
Intelligence and Cognitive Science, 15th European Conference on Artificial Intelligence (ECAI
2002), Lyon, France, pp. 39–46.
Goldberg, D.E. (1989), Genetic Algorithms in Search, Optimisation, and Machine Learning, Reading,
USA: Addison-Wesley.
Hartman, C.O. (1996), Virtual Muse: Experiments in Computer Poetry, Middletown, CT: Wesleyan
University Press.
Hobbs, J. (1985), ‘Ontological Promiscuity’, in Proceedings of the 23rd Annual Meeting of the
Association for Computational Linguistics, Chicago, USA: The Association for Computational
Linguistics, pp. 61–69.
Joshi, A.K. (1987), ‘The Relevance of Tree Adjoining Grammars to Generation’, in Natural
Language Generation: New Results in Artificial Intelligence, ed. G. Kempen, Dordrecht,
The Netherlands: Martinus Nijhoff Press, pp. 233–252.
Jurafsky, D.S., and Martin, J.H. (2000), Speech and Language Processing: an Introduction to
Natural Language Processing, Computational Linguistics, and Speech Recognition, Upper
Saddle River, NJ: Prentice-Hall.
Kallmeyer, L. (2002), ‘Using an Enriched TAG Derivation Structure As Basis for Semantics’, in
Proceedings of the Sixth International Workshop on Tree Adjoining Grammar and Related
Frameworks (TAG+6), Università di Venezia, Venezia, pp. 101–110.
Kay, M. (1996), ‘Chart Generation’, in Proceedings of the 34th Annual Meeting of the Association for
Computational Linguistics, Santa Cruz, USA: ACL, pp. 200–204.
Koller, A., and Striegnitz, K. (2002), ‘Generation As Dependency Parsing’, in Proceedings of the
40th Anniversary Meeting of the Association for Computational Linguistics, Philadelphia, USA,
pp. 17–24.
Kurzweil, R. (2001), ‘Ray Kurzweil’s Cybernetic Poet’, http://www.kurzweilcyberart.com/poetry.
Levin, S.R. (1962), Linguistic Structures in Poetry (Janua Linguarum No. 23), Gravenhage: Mouton.
Love, B.C. (2000), ‘A Computational Level Theory of Similarity’, in Proceedings of the 22nd Annual
Meeting of the Cognitive Science Society, Philadelphia, USA, pp. 316–321.
Manurung, H.M. (1999), ‘A Chart Generator for Rhythm Patterned Text’, in Proceedings of the
First International Workshop on Literature in Cognition and Computer, Tokyo, Japan,
pp. 15–19.
Manurung, H. M. (2003), ‘An Evolutionary Algorithm Approach to Poetry Generation’, PhD thesis,
University of Edinburgh, School of Informatics.
Meteer, M. (1991), ‘Bridging the Generation Gap Between Text Planning and Linguistic
Realisation’, Computational Intelligence, 7, 296–304.
Mitchell, M. (1996), An Introduction to Genetic Algorithms, Cambridge, USA: MIT Press.
Nicolov, N., Mellish, C., and Ritchie, G. (1995), ‘Sentence Generation from Conceptual Graphs’, in
Proceedings of the International Conference on Conceptual Structures (Lecture Notes in Artificial
Intelligence No. 954), Santa Cruz, USA: Springer-Verlag.
Quinn, A. (1982), Figures of Speech: 60 Ways to Turn a Phrase, Davis, CA: Hermagoras Press.
Reiter, E., and Dale, R. (2000), Building Natural Language Generation Systems (1st ed.), Cambridge,
UK: Cambridge University Press.