refer to objects by name; use the section hierarchy as the name path
(cf. file system paths).

automatic numbering of figures, referring to other sections, auto table
of contents.

scheme->html, for the thesis.  scheme hypertext system.

needs document level abstraction (not just file level)!

make sections enclose their bodies {section tag {title} body}

make documents be the same way, ie instead of using a FILES field,
just use an INSERT command.  this requires something new to switch to
the next html file.
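a sketch of what that nesting might look like in the markup (the
section names and file name here are made up):

```
{section intro {Introduction}
  opening text of the document ...
  {insert related-work.text}
  {section motivation {Motivation}
    body of the nested subsection ...}}
```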

hypertext link types identification.  types: citation, note, glossary
definition, digression, cross reference (?), figure, code, example,
etc.

idea: should use netscape multi-column tables and whatnot to flow text
around figures.  is real multi-column (paper layout) possible?
desirable?  page size (think hypercard).

{index key} (doesn't appear)
{cite key}
{note key}
{ref key} -> {link key.html} for now
{figure foo.ps}
{defn key word body?}
{section key appearance} needs to support an arbitrary graph; latex
produces a depth-first traversal by default
{bib key appearance}? includes semantic fields.  need a converter from
bibtex format.  needs to sort its entries
{defundesc ???} for lisp documentation.  includes several semantic
fields (eg `type', `see also', `source code', `superclass').
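as a sketch of how one of these might be implemented, {ref key} could
expand roughly like this (the alist and its entries are made up; a
real version would consult the document structure):

```scheme
;; hypothetical: resolve {ref key} to an html anchor, assuming a
;; table mapping keys to their files (an alist here for simplicity).
(define ref-table '((intro . "intro.html") (design . "design.html")))

(define (ref key)
  (let ((entry (assq key ref-table)))
    (string-append "<a href=\"" (cdr entry) "\">"
                   (symbol->string key) "</a>")))
```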

[] means citation.  use abbrevs: "fig" "eg" "note".  perhaps use <> {}
?  small icons.

header:  title (include version (hold on this)),
   simple nav tools (up prev next)
footer:  full navigation tools, annotations (hold on this),

nav
1) search (returns gloss entries (definitions in the text, not in
separate list) first, then index, then fullbody hits)
2) "next", other links from this page (local map)
3) prev version, next version.

use dot to provide a clickable map (ps + imagemap). this functions as
a table of contents.  these can appear for sections as well, providing
a map of the subsections.

also support parameterization (eg personal/local/public editions)

support annotation (http://union.ncsa.uiuc.edu/hypernews/about.shtml)

stretch text/outlining (encode in URL?) hold on this.  (<- -> icon)

support from scheme structures to .dot (done)
support from .dot to .ps (done)

{figure foo.ps} -> {ps foo.ps} more primitive
render to a large .ppm, then filter it down and convert to gif (done).
but include a link to the original ps. (done)
provide scaling option
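the render-filter-convert pipeline could be glued together scsh-style
(a sketch; the gs flags, resolution, and scale factor are guesses):

```scheme
;; render foo.ps to ppm at high resolution, shrink it, convert to gif.
;; the particular gs/pnmscale/ppmtogif invocations are assumptions.
(run (| (gs -q -sDEVICE=ppmraw -r300 -sOutputFile=- -dNOPAUSE -dBATCH foo.ps)
        (pnmscale 0.25)
        (ppmtogif))
     (> foo.gif))
```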

extensions to markup:

{def name {any amount of value text}} {name} expands to "any amount of
value text"

{define var (+ 1 2)} ; set a scheme variable.  different name?
{if expr {ttext} {ftext}} (done)
{(substring "foo" 0 1)}
{define bigstring  (random-lisp {begin a
long string that can extend over multiple lines.
this string is processed recursively, so eg
{insert foo.text} you can define a lisp variable
to be the value of a file as a string.  this is
supposed to be like backquote})}
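a sketch of how {def ...} might work underneath: a binding in a
markup-time environment, looked up when {name} appears later (names
here are made up; a real version would live in the markup evaluator):

```scheme
;; hypothetical markup-time environment, an alist of name -> body.
(define markup-env '())

(define (markup-def! name body)           ; {def name {...}}
  (set! markup-env (cons (cons name body) markup-env)))

(define (markup-expand name)              ; {name}
  (cond ((assq name markup-env) => cdr)
        (else (error "undefined abbrev" name))))
```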

ok, i've done some further speculative design work. this might be too
far out/ambitious, but it feels good:

the idea is that we should primarily think about the interactive
system: a big lisp structure (OO prob good here) that takes several
messages: follow a link, list links, format to html and format to
latex (next: ascii, ps).  links are directed (but bidirectional) and
typed.  then production of the latex/paper version amounts to a
particular traversal, ie depth first.
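a sketch of the node structure (srfi-9 style records; the field names
and link representation are made up):

```scheme
;; one node per section.  links are directed and typed: an alist of
;; link-type -> target node.
(define-record-type node
  (make-node title body links)
  node?
  (title node-title)
  (body node-body)
  (links node-links set-node-links!))

;; follow the first link of a given type, or return #f.
(define (follow node type)
  (cond ((assq type (node-links node)) => cdr)
        (else #f)))
```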

the primary navigation control is the function `build a map'.  in
normal operation, this shows all major links (ie section links)
to/from the current section, including the links between those
sections.  this makes a graph which we can format with dot, highlight
the current and (default) next/prev nodes (in latex order).  if in a
summary node, then it will have links to all the subsections, they
will show up like a table of contents.  if we are in a normal
subsection, then there will be four nodes in this graph: up, prev,
current, and next.  this perfectly generalizes the typical controls
and tables of contents.  at the top of the doc you can do a traversal
to depth more than 1 to build the global table of contents.  dot
graphs don't come out how i want with default settings for common
patterns.  tweak or hack (recognize those patterns) or bail (back to
hardwired topology (hierarchy+refs+?))
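a sketch of `build a map' for the normal-subsection case, emitting
dot to stdout.  it assumes a node-name procedure (returning a string
usable as a dot identifier) and a follow procedure (returning the
linked node or #f for a given link type); both are made up here:

```scheme
;; emit a dot graph of the local map around NODE: edges from the
;; up/prev/next neighbors, with the current node highlighted.
(define (emit-map node)
  (display "digraph map {\n")
  (for-each
   (lambda (type)
     (let ((n (follow node type)))
       (if n
           (begin (display "  ") (display (node-name n))
                  (display " -> ") (display (node-name node))
                  (display ";\n")))))
   '(up prev next))
  (display "  ") (display (node-name node))
  (display " [style=filled];\n}\n"))
```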

text files read by lisp reader (including macro execution).  becomes
heap structure, names replaced with pointers.

evalsto macro only has to give the "source", it evals as scheme, and
displays the source and its value.  also consider a macro-eval macro,
which displays some metalanguage code, and then the effect of
formatting it, good for self-documentation.

four compiler targets/traversal methods: words (for full-text
indexing), latex, html, links-only.  -> direct interactive lisp
browser (use that scheme ps glue from olin's student).

16-bit characters? kanji (jis? euc?), unicode.

for quotations, need a env that's normal except linebreaks are
respected (but a line that runs over is broken and indented).  use
<dl> <dd> line <dd> line </dl> to force linebreaks in HTML.  how to
impl in tex?

prefix tex images with filenames. (done)

memoize tex images

tex images are coming out too small.

option: scale all images so background matches netscape's.


there's a problem with the antialiasing, it doesn't look quite right.
related to 0-width lines, the
`ppmtogif: maxval is not 255 - automatically rescaling colors'
message, and how the scale factor is applied?

layer on top of tex?? ie instead of {m \frac{1}{2}} you could have
{frac 1 2}, may be useful for super/subscripts since they may
eventually appear in HTML (check those proposals).  also, layer on top
of things like BNF/type definitions.  interesting.

right now calls out to latex rather than just tex.  this is generally
excessive.  any way to get good part of latex macros without all the
overhead?  perhaps an alternate form that loaded in less garbage (ie
straight tex)?  autoloading possible?

BNF/types:
non-terminals
syntax-production
alternation

the latex version should display links with a little arrow icon or
something.  underline?

----------------

levels of computation:

read macros
argumentless `eval twice' macros = abbrevs!
syntax rules with arguments
procedures.
<stop right there>

read macro character: #

thus #{ } could be a completely literal string constant (quote).  what
we want is to be able to load in a couple of text files and recreate a
pointer-rich heap.  this is a human writable/editable heap format for
literate hyperdocuments.  should it be reversible?  best not try for
now.

but it's better if it's a backquote: #{} is a literal string except
that {} within it returns to scheme (the next level up; imagine it
could be returning from pure quotation to abbrevs-enabled, or from
`quoted but scanned lisp' to evaluated lisp.  the metalanguage.  the
abbrevs connection reminds me of the meta key and teco).
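a sketch of the pure-quotation reader, ignoring the backquote escape
back into scheme for now (the hook for installing a # dispatch macro
is dialect-dependent, so only the body is shown):

```scheme
;; call after the reader has consumed "#{".  reads characters
;; literally, tracking brace depth, until the matching }.
(define (read-hash-brace port)
  (let loop ((chars '()) (depth 1))
    (let ((c (read-char port)))
      (cond ((eof-object? c) (error "unterminated #{"))
            ((char=? c #\{) (loop (cons c chars) (+ depth 1)))
            ((char=? c #\}) (if (= depth 1)
                                (list->string (reverse chars))
                                (loop (cons c chars) (- depth 1))))
            (else (loop (cons c chars) depth))))))
```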

`intermediate code' is objects of document-logical-semantic
types-classes.

primal question: how far down to go?  paragraphs? sentences? words?
letters?  ``double quotes (and parens?)''?  kanji?  bit?  it's
interesting that both english and scheme are very similar at this
level.  though punctuation is different, both are white-space
insensitive, and *mostly* case insensitive (generally case sensitive
only in quotations and code fragments, ie other sub-lexical
environments).

so the near question is: what is a good structured-literary-lexical
syntax?  a good place to start is probably words.  letters is
certainly feasible if you don't have any interactive aspirations, but
it feels too early for this project.  so:

sentence = list of words and punctuation-marks
word = symbol = string
punctuation-marks = symbol
document = graph of sections
section = list of paragraphs
paragraph = list of sentences | enumerate | itemize | definitions
enumerate = list of paragraphs
itemize = unordered list of paragraphs
definitions = list of bold/indented pairs
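the grammar above, sketched as srfi-9 records (field names are made
up; the document is then just a graph of section records):

```scheme
(define-record-type section
  (make-section title paragraphs links)
  section?
  (title section-title)
  (paragraphs section-paragraphs)  ; list of paragraph records
  (links section-links))           ; edges to other sections

(define-record-type paragraph
  (make-paragraph sentences)
  paragraph?
  (sentences paragraph-sentences)) ; each sentence: list of words/punct
```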

(the next level: hierarchy with 2D layout rules, eg v/h stack, grid,
justify)

now {bold text} and {quote double quotes {parens and parens?}} are
environments, so that sounds like a fluid variable, or statically
executed code.  it's like you've changed the prototype word object,
for the code that gets executed within.  the result is still a list of
words.  crossing?!  similarly, a list of words becomes a list of
sentences, broken at some punctuation.

think about itemize using o.  its code produces that list.  it takes
words and produces a list of paragraphs.

think about stretch text.

abbrevs: ibm etc eg tex...

if you're in the symbol table, you're an abbrev, otherwise you're a
string.  then again, why not just hash them all?

what about words with mixed typeface eg {c cdr}s?

how exactly are paragraph breaks generated?  i like the double-newline
notation.  how does this work?  similar to breaking of paragraphs into
sentences and item lists into paragraphs (shit, what if you want more
than one paragraph in a list item?)

URLs that appear should be in smaller font, esp when in latex.

---------------------------------------------

generalize:

display-foo
foo->dot-file
foo->ps

make the pretty printer driven by a table of predicates

break into modules (in order)

0) match

1) more utilities

2) ps->ppm

3) tex->ppm

4) graph printer

5) the rest
or
5) markup, latex, and html? (using the same interface for latex and
   html)

--------

layout organization
one interface includes functions for defining and installing new
layout functions (predicate/handlers) into the table of drivers

the simple version only exports the ui code


the driver handles the hash table, checking the table of layout functions.

each layout fn takes a value,
returns the name of its node, and
spits out dot to stdout (?)
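a sketch of the driver table as a list of predicate/handler pairs
(the names here are made up):

```scheme
(define layout-fns '())

;; install a new layout function: PRED selects the values it handles.
(define (install-layout! pred handler)
  (set! layout-fns (cons (cons pred handler) layout-fns)))

;; the driver: find the first matching layout fn and apply it.  the
;; handler writes dot to stdout and returns the node name.
(define (layout value)
  (let loop ((fns layout-fns))
    (cond ((null? fns) (error "no layout function for" value))
          (((caar fns) value) ((cdar fns) value))
          (else (loop (cdr fns))))))
```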

ui glue:
1) slap headers on the dot output, convert to ps (page options, file or stdout)
2) call the above on /tmp/foo.ps, immediately display with gv

--------

how to use identity unparsing?

ask olin, he understands.

--------
