MSM-SQL Data Dictionary Guide v3.4 (Micronetics) 1997
© 1995, 1997 by Micronetics Design Corporation. All rights reserved.
Micronetics Design Corporation, Rockville, Maryland, USA.
Printed in the United States of America.
First Printing, 1997
Preface ................................................. vii
Purpose ................................................. vii
Audience ................................................ vii
Syntax Conventions ..................................... viii
Style Conventions ......................................... x
The Organization of this Manual ......................... xii
Chapter 4: Table Filers .................................. 87
Overview ................................................. 88
Preliminaries ............................................ 89
Terminology .............................................. 90
Read & Write Locks ....................................... 92
SQL Statements ........................................... 93
Automatic Table Filers ................................... 96
Statement Action ......................................... 98
Filer Action ............................................. 98
Manual Table Filers ..................................... 101
SQL Statements .......................................... 101
Database Integrity ...................................... 101
Development Steps ....................................... 103
Sample SQL_TEST.EMPLOYEES Table Report .................. 111
Sample Table Filer Routine .............................. 113
Options for Creating Table Filers ....................... 115
Chapter 5: The DDL Interface ............................ 117
The Import DDL Interface ................................ 118
Overview ................................................ 118
Order of Statements ..................................... 119
Operation ............................................... 122
Using a Global DDL Script ............................... 123
Using a Host DDL Script ................................. 132
DDL Commands ............................................ 135
Syntactical Components .................................. 140
The Export DDL Interface ................................ 144
Export DDL Interface Examples ........................... 147
The purpose of the MSM-SQL Data Dictionary Guide is to explain
relational tables and the process of mapping M globals to a data
dictionary. The data dictionary is needed by MSM-SQL to retrieve data
from your M database. This manual also provides information about a
new technology used to update M globals as well as an alternative
mapping process for these globals.
This manual is written for the technical resource who is responsible for
the overall management of the MSM-SQL system.
This manual uses the following syntax conventions when explaining
the syntax of an MSM-SQL statement.
Feature              Example               Meaning

[]                   table [AS alias]      The item(s) within the brackets form
                                           an optional composite item. Including
                                           this item may change the meaning of
                                           the clause. Do not enter the brackets.

...                  column [,column]...   An ellipsis indicates that the item
                                           which precedes the ellipsis may be
                                           repeated one or more times.

a, b, or c           ASCII(c)              Character literals composed of one or
                                           more characters enclosed within
                                           quotes (e.g., 'abc').

m or n               CHAR(n)               Numeric literals (e.g., 123 or 1.23).
To help you locate and identify material easily, Micronetics uses the
following style conventions throughout this manual.
[key]
Key names appear enclosed in square brackets.
Example: To save the information you entered, type Y and
press [enter].
{compile-time variables}
References to compile-time replacement variables are enclosed in curly
braces. The names are case sensitive.
Example: {BASE}
italics
Italics are used to reference prompt names (entry fields) and terms that
may be new to you. All notes are placed in italics.
Example: The primary key of the table is defined as the set of columns
that is required to retrieve a single row from the table.
Windows
The manual includes many illustrations of windows. Window names
are highlighted by a double underline.
Prompt: data type (length) [key]
The manual includes information about all of the system prompts.
Each prompt will include the data type, length, and any special keys
allowed.
If the prompt is followed by a colon (Prompt:), you may enter a value
for the prompt.
If a prompt is followed by an equal sign (Prompt=), it is for display
purposes only.
If the prompt is followed by a question mark (Prompt?), you can enter
a value of YES or NO.
^GLOBAL
All M global names will be prefixed by the '^' character.
Tag^Routine
All M routine references appear as tag^routines.
The MSM-SQL Data Dictionary Guide first discusses the internal
design of the MSM-SQL data dictionary and then points out some
strategies for mapping your M globals to a data dictionary. Next, the
manual walks you through each of the menu options available for the
mapping process. An important new concept, table filers, is
introduced along with a look at the process by which this technology
updates globals. Lastly, the MSM-SQL Data Definition Language
interface is presented as an alternative for mapping existing globals.
Chapter 1: The MSM-SQL Data Dictionary

[Diagram: your existing M applications will be mapped into MSM-SQL data
dictionary components: SCHEMA, TABLE, COLUMN, INDEX, Primary Key, and
Foreign Key.]
The DATA_DICTIONARY schema contains the fundamental tables
for MSM-SQL. For details about individual tables, use the TABLE
PRINT procedure (DATA DICTIONARY/REPORTS/TABLE
PRINT).
The schema provides a high-level organization to groups of related
tables. You may choose to define a schema for each of your
applications (as shown by the diagram below), and then define
additional schemas to provide a more detailed separation of tables.
[Diagram: tables mapped to schemas. The Guarantors, Visits, and Patients
tables belong to the Patient Registration schema; the Items and Orders
tables belong to the Order Control schema; the Staffing and Acuity tables
belong to the Nursing schema.]
A single table may be linked to only one schema. Some tables may be
accessed by multiple applications. This situation can be managed by
defining a general, or central, schema to include those tables that are
referenced by several applications.
A relational database consists of a collection of tables. All data in the
database is stored in one of these tables.
[Diagram: a schema contains many table definitions.]
Conceptually, each table is a simple two-dimensional structure, made
up of some number of rows and columns. Each column in a table is
assigned a unique name and contains a particular type of data such as
characters or numbers. Each row contains a value for each of the
columns in the table. The intersection of a row and a column identifies
a single value.
[Diagram: a table grid of columns and rows; each row and column
intersection holds a value.]
MSM-SQL supports several data types. These data type definitions
cannot be modified, nor can additional data types be added to the
system. A list of the data types is provided below.
If data is stored in your system in a way that cannot be expressed using
one of the default data types, you must define a domain to describe the
stored format. A domain can also be useful when several columns
have the same data type and output format. Instead of linking each
column to both a data type and an output format, the column can be
linked to a single domain definition. This has an additional benefit: if
a change must be made to either the domain or to the output format, all
columns are changed at the same time.
[Diagram: each domain is linked to a data type ({BASE}); the domain
defines the internal stored format ({INT}). Each data type has a
default output format.]
The primary key of a table is the set of one or more columns that is
unique for each row. In some cases, the key is a computer generated
internal number. In others, some unique external value can be used as
the key. In any case, no part of the primary key can ever be empty or
NULL.
When the complete primary key value from one table is stored as
columns in another table, these columns can be used as a foreign key.
Many queries can take advantage of foreign key to primary key joins.
Notice how a column in the PATIENTS table is explicitly joined with the
primary key of the DOCTORS table in the following query.
-- list patients and their primary physicians
select PATIENTS.NAME, DOCTORS.NAME
from PATIENTS, DOCTORS
where PATIENTS.PRIMARY_DOCTOR = DOCTORS.DOCTOR_ID
On the following page, note the special syntax used to indicate the use
of a foreign key.
-- list patients and their primary physicians
select NAME, PRIMARY_DOCTOR_LINK@NAME
from PATIENTS
[Diagram: an index table linked to its base table.]
For example, an index table on patient names would include the name
column from the base table. The index, PATIENT_BY_NAME,
would be linked to the base table, PATIENTS. The index column,
NAME, would be linked to the NAME column in the base table.
There are three different formats shown for date values. The internal
date value is always the first part of the $H value. One format uses a
YYYYMMDD format to facilitate searches by year, month, and day.
The descending format indexes dates in reverse order, with more
recent dates at the top of the index. The YYYY format facilitates
searches by year. The YYYY format is an example of a many-to-one
transformation, since a value cannot be transformed back to the base
format without losing accuracy. The name format, another example of
a many-to-one transformation, strips all punctuation from a base value.
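As a sketch of these transformations in M (the variable names and sample
data here are hypothetical, not MSM-SQL code):

    ; internal date is the first piece of the $H value
    S DATE=+$H
    ; descending format: negation makes more recent dates collate first
    S DESC=-DATE
    ; name format: two-argument $TRANSLATE strips punctuation
    S NAME="O'BRIEN-SMITH"
    S CLEAN=$TR(NAME,"'-,. ")     ; yields "OBRIENSMITH"

The YYYYMMDD and YYYY formats would additionally require date
arithmetic (or an implementation-specific $ZDATE call) to decompose the
$H value into year, month, and day.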
The MSM-SQL data dictionary is structured to manage information on
all of the tables and columns of your database. After studying the
relational data dictionary model as implemented in MSM-SQL, you are
ready to begin the process of defining your M globals to the data
dictionary.
Chapter 2: Global Mapping Strategies
In order to furnish users with relational tables, the DBA must map the
existing globals into tables in the relational data dictionary. We
recommend the following strategy:
It may not be necessary to map all globals in the system, but rather just
those that will be most beneficial to your users. As in any significant
task, the planning phase should be as complete as possible. If all
schemas, tables, domains, output formats, and key formats have been
defined, the global mapping process can focus on the definition of the
columns.
One major function of MSM-SQL is to reference existing M globals
using SQL statements. In order to accomplish this, you must translate
the global definitions into corresponding relational table definitions.
In many cases, this process can be partially automated, translating the
information in your on-line data dictionary into the MSM-SQL model.
In other cases, where documentation exists only on paper, or not at all,
you have more investigation to do in the analysis stage. In either case,
reaching our goal requires a fundamental understanding of how
MSM-SQL sees globals as the internal representation of relational
tables.
PATIENTS
^PAT(10123,1)=DAVE;M;43918
^PAT(10395,1)=ROB;M;43405
^PAT(10444,1)=JAMES;M;50134
^PAT(10456,1)=RICK;M;42380
^PAT(11209,1)=POLLY;F;45185
^PAT(12110,1)=SHANNON;F;54520

VISITS
^VIS("11-11-11",1)=10395;54600
              ,2)=Pneumonia
^VIS("22-22-22",1)=10444;50134
              ,2)=Birth

ORDERS
^ORD("0233",1)=22-22-22
           ,2)=A391
^ORD("0390",1)=11-11-11
           ,2)=A230

DOCTORS
^DOC(A230)=WELBY
^DOC(A391)=THOMPSON
In this simple example, the ^VIS, ^PAT, ^ORD, and ^DOC globals
map into the VISITS, PATIENTS, ORDERS, and DOCTORS tables.
This example is considered simple because the global subscripts are
easily identified as the primary keys of the tables. If your system uses
this type of global design, the mapping process will be very direct.
VISITS           PATIENTS
ACCT_NO *        MRUN *
MRUN             NAME
VISIT_DATE       SEX
REASON           BIRTHDATE

ORDERS           DOCTORS
ORDER_NO *       DOCTOR_NO *
ACCT_NO          NAME
DOCTOR_NO

* = Primary Key
Before any data relationships can be defined, the DBA must provide
the physical definition of the columns in the tables. The physical
definition is comprised of a global reference and a piece reference. A
column can have either or both parts. Primary keys often have just a
global reference. Data columns often have just a piece reference.
^TEST(A,B,C)=D^E^F^G,H
Column  Parent  Global Reference  Piece Reference  Sample Reference
A               ^TEST(                             ^TEST(A
B       A       ,                                  ^TEST(A,B
C       B       ,                                  ^TEST(A,B,C
DATA    C       )                                  ^TEST(A,B,C)
D       DATA                      "^",1)           $P(DATA,"^",1)
E       DATA                      "^",2)           $P(DATA,"^",2)
F       DATA                      "^",3)           $P(DATA,"^",3)
GH      DATA                      "^",4)           $P(DATA,"^",4)
G       GH                        ",",1)           $P(GH,",",1)
H       GH                        ",",2)           $P(GH,",",2)
The subscripts A, B, and C become columns with vertical prefixes. A
data column is defined for the string of data stored at the global
reference defined by the columns A, B, and C. The data columns D, E,
F, G, and H are defined with horizontal suffixes for a $PIECE
reference. (MSM-SQL also supports a $EXTRACT reference.) Note
how the column GH is used as an “artificial” column in order to
simplify the definition of columns G and H. This style is often used
for columns that store dates in $H format, where G is the date and H is
the time component.
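The definitions above can be read back with ordinary $PIECE references.
A minimal sketch, using hypothetical subscript values and sample data:

    S ^TEST(1,2,3)="d^e^f^g,h"
    S DATA=^TEST(1,2,3)          ; the data column: the entire node value
    S D=$P(DATA,"^",1)           ; "d"
    S GH=$P(DATA,"^",4)          ; the artificial column: "g,h"
    S G=$P(GH,",",1)             ; "g"
    S H=$P(GH,",",2)             ; "h"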
The primary key of the table is the set of columns that is required to
uniquely identify a single row from the table. The system can generate
code to traverse each key using a standard row traversal operation.
Although this situation is dominant, more complex data structures
exist.
^TEST(A,B,C)=D^E^F^G1,G2,G3,...,GN
Pkey #  Column  Parent  Global Reference  Sample Looping Code
1       A               ^TEST(            S A=$O(^TEST(A))
2       B       A       ,                 S B=$O(^TEST(A,B))
3       C       B       ,                 S C=$O(^TEST(A,B,C))
4       G                                 S X1=$P(^TEST(A,B,C),"^",4)
                                          S X2=$L(X1,","),X3=0
                                          S X3=X3+1
                                          S G=$P(X1,",",X3)
The primary keys 1, 2, and 3 are simple subscripts. The system uses
$ORDER to loop through these values. Primary key 4 is complex. It
uses custom logic to traverse the entries in the list. Refer to the
“PRIMARY KEYS Option” section in Chapter 3 for a complete
description of the custom primary key logic.
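Assembled into a complete traversal, the generated logic would look
roughly like the following sketch (the ^TEST layout is as shown above;
X1 through X3 are scratch variables, and the WRITE is for illustration
only):

    S A="" F  S A=$O(^TEST(A)) Q:A=""  D
    . S B="" F  S B=$O(^TEST(A,B)) Q:B=""  D
    . . S C="" F  S C=$O(^TEST(A,B,C)) Q:C=""  D
    . . . S X1=$P(^TEST(A,B,C),"^",4),X2=$L(X1,",")
    . . . F X3=1:1:X2 S G=$P(X1,",",X3) W !,A,?10,B,?20,C,?30,G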
Chapter 3: Creating Your Data Dictionary
If you enter the name of a new domain, the Add Domain window
appears. Select YES to add a domain. Otherwise, if domains exist, you
can press [list] to view the names in the selection window. From the
selection window, you can press [insert] to add a new domain, or select
the domain you wish to edit. If you want to delete a domain, highlight
it and press [delete].
Domain definitions should be added in the beginning of the global
mapping process. Once a domain definition has been added, you may
link columns to that domain.
Edits to domains affect all columns that reference that domain. Be sure
to determine if any columns are using the domain before editing the
domain definition.
You may not delete a domain that is referenced by a column.
Otherwise, the domain can be deleted.
Output format: character (30) [list]
Each domain may be linked to an output format. If no output format is
specified, all values are formatted using the default output format for
the data type.
Override collating sequence? YES/NO
The default is NO. Answer YES if the value is numeric, but the
internal format collates as a string.
Skip search optimization? YES/NO
If the domain cannot be optimized, then enter YES; otherwise, enter
NO. This overrides the Skip collation optimization prompt.
Note: If MSM-SQL tries to optimize a key that should not be optimized,
the query returns the wrong results.
Skip collation optimization? YES/NO
If the domain cannot be optimized by applying the greater than, less
than, or BETWEEN operators, then enter YES; otherwise, enter NO.
Regardless of the answer to this prompt, the query is optimized for =
and IN operators.
Different from base? YES/NO
Answer YES if the internal storage format of the domain is different
from the internal format of the base data type. Otherwise, the system
assumes that the stored format is the same as that of the data type.
When you answer YES, the Reverse > and < comparisons and the
Perform conversion on null values prompts are enabled.
Reverse > and < comparisons? YES/NO
Answer YES if the internal format is stored in a format that requires
comparison operators to be reversed. (For example, a date which is
stored internally as a negative number must have the reverse
comparisons flag set.) After you answer YES, the Domain Logic
window appears which is discussed on the next page.
Note: If the stored format is different from the data type format, you
must enter conversion code. If a numeric or integer value is stored
with either leading or trailing zeros, that value must be defined using a
custom domain that converts it to a numeric or integer value without
leading or trailing zeros. A discussion of this process begins on the
next page.
If you are adding a domain with a DATE, TIME, or MOMENT data
type, you can use MSM-SQL’s date and time conversion routines
explained in Chapter 10 of the MSM-SQL Database Administrator’s
Guide.
If you answered YES to the Different from base prompt, the Domain
Logic window appears. It contains a menu of conversion options. If
the stored format is different from the data type format, you must enter
conversion code. You may supply either an expression (selecting the
FROM and TO EXPRESSION options) or a line of code (selecting the
FROM and TO EXECUTE options).
The examples below illustrate how the expression and execute code
can be used to accomplish the conversions. (The code is for illustrative
purposes only.) Although expressions are preferred, you should use
the form that is most effective for your situation. (You may want to use
execute code if you can’t accomplish what you need to in one
expression.)
This code converts the internal to base format for use in comparisons
against other date values.
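For example, for a date stored internally as a negative number (the
reverse-comparison case described earlier), the conversion could be as
simple as a pair of negation expressions. This is a hypothetical
sketch, assuming the FROM expression converts the stored {INT} value to
the {BASE} format and the TO expression converts it back:

    ; FROM EXPRESSION -- internal to base:  -{INT}
    ; TO EXPRESSION   -- base to internal:  -{BASE}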
The Override data type logic? prompt in the Domain Information
window is useful when you want to override a default conversion and
data type for a particular domain. It allows you to customize your data
types.
Override data type logic? YES/NO
Answer YES if you plan to allow special input values normally not
acceptable for this domain OR if you wish to use a more restrictive
data type validation.
The Override Data Type window appears with two menu options. Each
option’s corresponding drop-down window, beginning with
EXTERNAL TO BASE EXECUTE, is shown in sequence on the next
page.
This code uses a site-specific routine to convert date values to $H
format.
This code checks for non-negative numeric values.
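Hedged sketches of what such execute lines might look like. The entry
point $$DATE^ZSITE and the error flag ER are illustrative assumptions,
not MSM-SQL names:

    ; external-to-base execute calling a site-specific date routine
    S {BASE}=$$DATE^ZSITE({EXT})
    ; validation execute: flag values that are negative or not in
    ; canonical numeric form (+{EXT} coerces the value to a number)
    I {EXT}<0!({EXT}'=+{EXT}) S ER=1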
This procedure can be used by the DBA to add, edit, and delete key
format definitions. Because you can store data in a data column one
way and store it in the primary key a different way, you can use key
format definitions to indicate how data is to be stored in the primary
keys of index tables.
If you enter the name of a new key format, the Add Key Format
window appears. Select YES to add a key format. Otherwise, if key
formats exist, you can press [list] to view them in the selection
window. From the selection window, you can press [insert] to add a
new key format, or select the key format you wish to edit. If you want
to delete a key format, highlight it and press [delete].
Key format definitions should be added in the beginning of the global
mapping process. Once a key format is defined, it can be referenced by
the primary keys of index tables.
Edits to key format definitions can affect the query access planning
process. Be sure to identify the index primary keys that may be
affected before making any changes.
A key format may not be deleted if it is referenced by an index table
primary key. Otherwise, the key format may be deleted.
The key format conversion is a one-way transformation of a data type
{BASE} format to an internal {INT} index format. Unlike the domain
conversions, it is not necessary to be able to transfer from the {INT}
index format back to the {BASE} format. As with other conversion logic, you
may use either an expression or execute code. Use an expression when
possible. However, you may want to use execute code if you can’t
accomplish what you need to in one expression.
The OUTPUT FORMAT EDIT option may be used to add, edit, and
delete output format definitions. Output format definitions provide
information about how data is displayed. Micronetics provides you
with seven different data types (as shown in the Select Data Type
window) and output formats for each. You may edit these formats or
add additional ones.
The following selection window lists the output formats for the
CHARACTER data type.
You may add an output format for every type of display value that
exists in your system. Once defined, the output format can be
referenced by domains.
Changes to an output format can affect all queries that reference that
output format. Be especially careful when changing the display length
of the column. This can have adverse effects on column alignment in
reports.
You may not delete an output format that is referenced by either a data
type or a domain. Otherwise, the output format can be deleted.
The output format conversion is a one-way transformation of a data
type internal {BASE} value to an external {EXT} display format.
Unlike the data type and domain conversions, it is not necessary to be
able to transfer from the external back to the internal format. As with
other conversion logic, you may use either an expression or execute
code. (You may want to use execute code if you can’t accomplish what
you need to in one expression.) The logic can include the following
parameters.
Compile-Time Parameter   Description
{BASE}                   Data type format
{LENGTH}                 External length of value
{SCALE}                  Number of digits to the right of the decimal point
{EXT}                    External value
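For instance, a hypothetical output format expression for a ten-digit
phone number stored as digits only (the sample value is illustrative):

    ; {BASE}="3015551212"  ->  {EXT}="(301) 555-1212"
    ; expression: "("_$E({BASE},1,3)_") "_$E({BASE},4,6)_"-"_$E({BASE},7,10)

Here {LENGTH} would be 14, the external length of the formatted value.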
This procedure can be used by the DBA to add, edit, and delete schema
definitions. The schema definition is fundamental to the relational data
dictionary. The schema can be considered a logical group of tables,
related by owner or function. Great care should be taken before
modifying any schema definition.
If you have not defined any schemas, the Add Schema window
appears. Select YES to add a schema. Otherwise, if schemas exist, the
Select, Insert, Delete selection window appears. You can press [insert]
to add a new schema, or select the schema you wish to edit. If you
want to delete a schema, highlight it and press [delete].
Schema definitions should be added in the beginning of the global
mapping process. Once a schema definition has been defined, you may
create tables within that schema.
Schema definitions may be edited, but with caution. Any references to the
former schema name are flagged as errors. If the global name is
changed, be sure to verify the status of any data stored under the
former name. There is no automatic transfer of data to the new global
name.
You may not delete a schema that is referenced by tables in your
system. The delete function should be used with caution in those
circumstances where a schema was entered by mistake.
Filer base routine: character (5) [list]
These are base routines used for table filers and are similar to the base
routines used for queries. We recommend you use base routines for
separation of object types. The data characteristics and length are
inherited from BASE ROUTINE EDIT. If you do not specify a filer
base routine, the system uses the default base routine for the site.
Note: Our preferred tool for mapping globals is the DDL interface
(explained in Chapter 5), which lets you translate a foreign M data
dictionary into an MSM-SQL data dictionary using a script file. This
section explains MSM-SQL's initial method for mapping M globals.
In order to refer to your data using SQL, you must represent your M
globals as a relational data dictionary. Having done any necessary
preliminary work of defining schemas, domains, output formats, and
key formats, you may begin to define the tables and columns of your
database.
[Diagram: your existing M applications will be mapped into MSM-SQL data
dictionary components: SCHEMA, TABLE, COLUMN, INDEX, Primary Key, and
Foreign Key.]
When mapping globals, we suggest the following procedure:
A table is a set of related rows where each row has a value of one or
more columns of the table. Rows are distinguished from one another
by having a unique primary key value. To add, edit, or delete tables,
select MAP EXISTING GLOBALS from the Select DATA
DICTIONARY window.
To add a table, enter a name at the Table prompt in the Schema and
Table Name window. After you have supplied all the information
pertaining to the table (creating columns and specifying the primary
key columns), the Commit? prompt appears. If you want to save all the
information you entered, type Y and press [enter]. Users can then write
queries and retrieve information from the table you added.
You can edit a table at any time. Existing queries must be recompiled
if you delete columns, change primary keys, or otherwise alter the
current definition of the table.
You can delete a table but queries that reference the table will have to
be altered to reference another table, and then the queries must be
recompiled. To delete a table, highlight the table’s name from the
selection window and press [delete].
If the table exists, you may skip the schema prompt and enter the table
name directly. If you enter a name of a table that does not exist, the
Add TABLE window appears after you specify a schema. Select YES
to add the new table.
The MAP EXISTING GLOBALS procedure is organized with an
internal menu system to streamline the input process. The COLUMNS
option is highlighted as the default option. You may use either [skip]
or the EXIT option to exit the mapping procedure.
The TABLE INFORMATION option allows you to edit the name,
schema, density, and description of the table.
Schema: character (30) [list]
You can change the schema that the table is associated with by
selecting a different schema name at this prompt. All table and index
information is associated with the new schema.
Default delimiter: integer (3)
The table default delimiter is the ASCII value for the default character
used to separate data values. For example, the default delimiter for a
semicolon is 59. You can assign a default delimiter for each site (refer
to the discussion on the SITE EDIT option in the MSM-SQL Database
Administrator’s Guide) and for each table within each site. If you
specify a table default delimiter, it overrides the site default delimiter.
If you do not specify a table default delimiter, the system refers to the
site default delimiter.
Density: character (20) [list]
To obtain the cost of a query, the optimizer takes the cost of traversing
a table and divides it by the table’s density value. We recommend that
instead of using this prompt to assign a density value for a particular
table, you assign density values at the site level (using the
CONFIGURATION/SITE EDIT/ACCESS PLAN INFO option) which
will apply to all tables.
Last assigned sequence= integer (4)
This value represents the highest sequence number that has been
assigned to a column in this table. If you want a column to be
updatable, you assign it a sequence number which is referenced by the
table filer program. You can assign a column’s sequence number by
using the Real Column Information window. Refer to the section on
Writing the Table Filer in Chapter 4: Table Filers to see how column
sequence numbers are used by the table filer program to update a table.
Allow updates? YES/NO
Answer YES if you intend to allow updates to the data in this table via
the INSERT, UPDATE, and DELETE commands.
IMPORTANT:
You must write a table filer routine in order to execute updates. Refer
to Chapter 4 in this guide for more information on the subject.
The Update Table Logic window appears if you answer YES to the
Allow updates prompt.
The Read Lock Logic window appears if you answer NO to the Allow
updates prompt. It lets you specify M execute code to read lock a row
and read unlock a row.
In an effort to ensure that statistics are run for each table, the Compile
Table Statistics window appears when you exit from the Table Options
menu. The MSM-SQL query access planner uses a table’s statistics to
determine the best path for a query. For more information on
calculating statistics or to calculate statistics for all the tables in a
schema, refer to the MSM-SQL Database Administrator’s Guide.
If needed
Compile statistics only if they do not exist and the table is
referenced by a query.
Now
Compile statistics for this table now (add process to background
queue).
No
Don’t compile statistics, table cannot be used by queries.
Use the COLUMNS option to add, edit, or delete the basic information
about the columns in the table.
Add a column by selecting the COLUMNS option, assigning a name to
the column, and supplying the necessary information in the windows
that follow. After you add that column, it can be referenced by queries.
If you change a column definition, any query that referenced the table
that contains the column must be recompiled.
If you delete a column definition, any query that referenced the table
that contains the column must be recompiled. Also any reference in the
query to the deleted column must be changed to reference an existing
column. To delete a column, highlight the column’s name in the
selection window and press [delete].
Default header: character (30)
By default, MSM-SQL displays the column name as the heading over a
column of data in a query result. You can override this default by
specifying a default header text. You can include the vertical bar ( | ) to
force a multi-line heading.
For programmers only? NO/YES
Certain columns are defined for use by programmers only. Answer
YES to restrict the view of this column to programmers. If you answer
YES, the column is accessible only through the SQL Editor. For
example, you may want to restrict access to DATA_1, which is the
string of all columns stored on the first node of a global.
Conceal on SELECT *? NO/YES
For tables with ten or more columns, the display produced by a
SELECT * command can be difficult to interpret. Answer YES to
conceal extraneous or programmer columns from the display.
Change sequence: integer (4)
This value is a unique identification number that you assign to the
column. The SQL INSERT and UPDATE statements use this number
to indicate which column values have been changed. Therefore, a
value must be assigned if you wish to allow updates to this column. If
you do not wish to allow updates to the column, set this value to zero.
If you do not enter a value, and you use the TABLE^SQL0S utility to
generate a filer routine, the utility will assign the next available
sequence number.
Note: If a table is not modifiable, this prompt has no effect.
Last assigned sequence= integer (4)
This value represents the highest change sequence number that has
been assigned to a column in this table. If you want a column to be
updatable, you assign it a change sequence number which is referenced
by the table filer program. Refer to the section on Writing the Table
Filer in Chapter 4:Table Filers to see how column sequence numbers
are used.
Virtual column? NO/YES
Since most columns have a physical storage requirement, the default
answer is NO. A NO answer will result in the appearance of the Real
Column Information window. Answer YES if this column is a virtual
column having no physical storage requirement in this table. A YES
answer will bring up the Virtual Column Definition window.
Extract from: ____ To: ____
If your data values are not delimited, you can use the extract prompt to
specify a starting (from) and ending (to) point for this data value. This
is an alternative to using the EXTRACT() function in a virtual column
definition, and it allows more efficient code to be generated.
IMPORTANT: You cannot use both the piece reference and the
extract.
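The difference between the two addressing styles can be sketched in
Python. This is an illustration only, not MSM-SQL code; piece and
extract are rough stand-ins for M's $PIECE and $EXTRACT functions,
and the node values and positions are hypothetical.

```python
def piece(value, delim, n):
    """Rough analogue of M's $PIECE(value,delim,n) (1-based)."""
    parts = value.split(delim)
    return parts[n - 1] if n <= len(parts) else ""

def extract(value, frm, to):
    """Rough analogue of M's $EXTRACT(value,from,to) (1-based, inclusive)."""
    return value[frm - 1:to]

delimited = "123;WIDGET;9.95"        # delimited node: use a piece reference
print(piece(delimited, ";", 2))      # second semicolon piece

fixed = "123WIDGET   9.95"           # fixed-width node: use extract from/to
print(extract(fixed, 4, 9))          # characters 4 through 9
```

Both calls return WIDGET; the choice between them depends only on how
the value is physically stored.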
The PRIMARY KEYS option allows you to specify those columns that
are required to reference a row from the table. If you have not
previously created any primary keys for a table, MSM-SQL attempts to
automatically create them based on the columns you have created. The
message, “Default primary key created,” appears at the bottom of the
screen. If MSM-SQL cannot automatically create them, the message
“Unable to create default primary keys” appears.
If MSM-SQL was unable to create default primary keys and you have
not defined any primary keys for this table/schema, the Add Primary
Key window appears. Select YES to add a primary key. Otherwise, if
primary keys do exist, the Select, Insert, Delete selection window
appears. MSM-SQL displays the primary keys in sequence number
order. You can press [insert] to add a new primary key, or select the
key you wish to edit. If you want to delete a key, highlight it and press
[delete].
For MSM-SQL to build default primary keys for a table, the primary
key columns must have a vertical prefix and must not have a horizontal
suffix.
They must have a global reference and must not have a piece or extract
reference. Custom primary keys do not have a vertical or a horizontal
address. The definition comes from the primary key custom logic.
Each primary key must be assigned a sequence number and be linked
to a column from the table. The primary key with sequence number 1
is considered to be the most significant key.
Note: Unless you completely understand optimization, we recommend
you run the statistics compiler (UTILITIES/STATISTICS/COMPILE
STATISTICS) and have the system supply a value for Avg subscripts.
Skip search optimization? NO/YES
If the primary key cannot be optimized, then enter YES; otherwise,
enter NO. Two reasons for not optimizing a primary key are: 1) if the
key collates as a string instead of as a numeric, and 2) if you have
provided custom primary key logic that alters the standard logic to an
extent that optimization of the key would produce erroneous results. If
you have altered the primary key logic and are not sure whether or not
you should skip optimization, consult your MSM-SQL technical
support representative.
You could create your own domain based on MOMENT that does
collate correctly. For example, if your MOMENT values collate like
numbers, search optimization applies. For an individual primary key,
you can still override the domain setting and skip search optimization
by answering YES at this Skip search optimization prompt.
Note: Optimization applies only to the equal to, greater than, less
than, or in operators. If MSM-SQL tries to optimize a key that
shouldn’t be optimized, the query returns the wrong results.
Allow NULL values? NO/YES
If your system allows null values as subscripts and if this key may be
null, enter YES; otherwise, enter NO. For more information on null
values, refer to Appendix B in the MSM-SQL Database
Administrator’s Guide.
The majority of M globals will not require any primary key logic.
Usually, the primary key columns are the subscripts of a global. Row
traversal is accomplished by the $ORDER function. Sometimes, M
globals include repeating groups of data in a field separated by
delimiters. Since the relational model does not allow a column to
contain repeating groups, the group must be defined as a table. Each
member of the group is treated as a separate row. The primary key
logic provides a way for you to customize the default logic for row
traversal.
The primary key logic consists of M executables and expressions that
allow you to write your own row traversal logic. The following
parameters can be referenced by the primary key logic.
Compile-Time Parameter Description
{KEY(#)} Key value for another primary key of this table
{KEY} Current primary key value
{VAR(#)} Temporary variable for this table
The following frame shows an example of standard primary key logic.
Note how the looping starts and ends with the empty string or NULL
value.
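The standard traversal can be sketched in Python as follows. This is
an illustration only, not MSM-SQL code; the order function mimics M's
$ORDER, and the global contents are hypothetical.

```python
rows = {"ADAMS": "", "BAKER": "", "CLARK": ""}   # stands in for ^EMP(name)

def order(subscripts, key):
    """Rough analogue of $ORDER: next subscript after key, or ''."""
    following = sorted(s for s in subscripts if s > key)
    return following[0] if following else ""

key = ""                     # the loop starts at the NULL (empty) key
visited = []
while True:
    key = order(rows, key)   # get the next primary key value
    if key == "":            # ... and ends when '' comes back
        break
    visited.append(key)
print(visited)               # each key visited in collating order
```

Note that, exactly as in the frame above, the empty string marks both
the start and the end of the loop.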
The following frame contains an example of custom primary key logic.
Note the placement of the custom logic executes. You may traverse
complex primary keys using these executes and conditions.
The code shown in the Pre-select Execute window below is executed
prior to the beginning of the traversal loop. As an example, consider a
programmer column of codes separated by semicolons, where each
code is a primary key value. The pre-select execute code could save the
number of codes in a scratch variable for reference in the end
condition.
The default method for traversal of primary keys is the M $ORDER
function. For complex primary keys, you may specify custom logic for
getting the next key. In our example, we increment a counter for
stepping through the pieces of a data column.
This execute can be used to determine if a selected key value is a valid
primary key value. Continuing with our example, consider if certain
code pieces are equal to the asterisk (*) character. Enter an M
condition on which the key would be skipped if the test fails.
In some cases, even a valid key may need some manipulation. This
execute does not affect the looping logic since the code is applied after
the key has been selected. This example strips all occurrences of the
asterisk (*) character from the key value.
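The four custom-logic hooks of this running example can be tied
together in one Python sketch. This is an illustration only, not
MSM-SQL code, and the node contents are hypothetical: a column of
codes separated by semicolons, where each code is a primary key value.

```python
node = "A1;*;B2*;C3"

# Pre-select execute: save the number of codes in a scratch variable.
count = len(node.split(";"))

keys = []
i = 0                                # counter used by the next-key logic
while True:
    i += 1                           # next-key execute: step the counter
    if i > count:                    # end condition uses the scratch variable
        break
    key = node.split(";")[i - 1]
    if key == "*":                   # validation condition: skip this key
        continue
    key = key.replace("*", "")       # post-select execute: strip '*' chars
    keys.append(key)
print(keys)
```

The code piece that is only an asterisk is skipped entirely, while the
embedded asterisk in the third piece is stripped after selection.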
If you have not defined any foreign keys for this table/schema, the Add
Foreign Key window appears. Select YES to add a foreign key.
Otherwise, if foreign keys do exist, the Select, Insert, Delete selection
window appears. You can press [insert] to add a new foreign key, or
select the key you wish to edit. If you want to delete a key, highlight it
and press [delete].
An index is an M global.
An index cannot include data that is not in the base table.
An index must include all of the primary keys of the base table as
columns.
Once defined, an index can be used implicitly by the access planner to
satisfy requests for information from the base table.
Once defined, an index can be used explicitly in a FROM clause (or as
the value for the Use table prompt in EZQ) to satisfy requests for
information from the index table.
We recommend that users write their queries accessing the base table
and trust the MSM-SQL optimizer to select the correct index table. To
eliminate any possible confusion, the DBA could give users access to
the base table but not access to any corresponding indices.
The table:
PART (base table)
columns (p_no, name, cost)
global ^P(p_no) = name ^ cost
The index:
PART_BY_NAME (index table)
columns (name, p_no)
global ^P(“A”,name,p_no) = “”
The query:
SELECT p_no, name, cost
FROM part
WHERE name BETWEEN ‘A’ and ‘KZZ’
The plan:
Get table PART
Using Index PART_BY_NAME
Optimize primary key (name)
The result:
The access planner chooses to use the PART_BY_NAME index to
satisfy the query.
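Why the planner prefers the index can be sketched in Python. This is
an illustration only, not MSM-SQL code: the base global is keyed by
p_no, so a name-range query would have to scan every row, while the
index global is keyed by name and supports a direct range walk. The
part data is hypothetical.

```python
base = {10: "BOLT^0.05", 20: "AXLE^3.10", 30: "GEAR^1.75"}      # ^P(p_no)
index = {("AXLE", 20): "", ("BOLT", 10): "", ("GEAR", 30): ""}  # ^P("A",name,p_no)

# WHERE name BETWEEN 'A' AND 'KZZ', answered by walking the index:
hits = [(name, p_no) for (name, p_no) in sorted(index)
        if "A" <= name <= "KZZ"]

# Each index hit carries the full primary key, so the data node can be
# fetched directly; cost is the second '^' piece of the base node.
result = [(p_no, name, base[p_no].split("^")[1]) for name, p_no in hits]
print(result)
```

Only the qualifying slice of the index is visited; no full scan of the
base global is needed.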
If you have not defined any indices for a table/schema, the Add Index
window appears. Select YES to add an index. Otherwise, if indices do
exist, the Select, Insert, Delete selection window appears. You can
press [insert] to add a new index, or select the index you wish to edit.
If you want to delete an index, highlight it and press [delete].
If you are adding a new index, the Table Index window appears;
otherwise, if you are editing an index, the Index Options window
appears.
Since MSM-SQL allows indices to be used like any other table, the
definition of an index is very similar to that of a base table. By
defining an index for a table, you provide more information to be used
by the data access planner during compilation of queries.
The INDEX INFORMATION option allows you to modify the name,
density, and description of the index.
If you have not defined any index columns, the Add Index Column
window appears. Select YES to add an index column. Otherwise, if
index columns exist, the Select, Insert, Delete selection window
appears. You can press [insert] to add a new index column, or select
the index column you wish to edit. If you want to delete an index
column, highlight it and press [delete].
Global reference: character (20)
The global reference is the string of characters in the M global address
that come after any previously defined columns and before the current
column. For example, the first primary key often has the global prefix
as the vertical prefix. Each successive key specifies only the comma (,)
that separates the M subscripts.
Piece reference: character (20)
The piece reference is the string of characters in the M global address
that complete the $PIECE function reference for this column. For
example, the string “;”,2) specifies the second semi-colon piece.
Extract from: ___ To: ___
If your data values are not delimited, you can use the extract option to
specify a starting (from) and ending (to) point for this value. This is an
alternative to using the EXTRACT() function in a virtual column
definition, and it allows more efficient code to be generated.
IMPORTANT: You cannot use both the piece reference and the
extract.
If MSM-SQL was unable to create default primary keys for this index,
and if you have not defined any primary keys for the index, the Add
Primary Key Column window appears. Select YES to add a primary
key. Otherwise, if index primary keys exist, the Select, Insert, Delete
selection window appears. You can press [insert] to add a new primary
key, or select the primary key you wish to edit. If you want to delete a
primary key, highlight it and press [delete].
Avg subscripts: character (20) [list]
You may specify the average number of unique values for this primary
key using numbers or fuzzy size values. If left blank, the system uses
the default average number of distinct entries as specified for your site.
Skip search optimization: NO/YES
If the primary key cannot be optimized by applying the greater than,
less than, or contains operators, then enter YES; otherwise, enter NO.
Allow NULL values: NO/YES
If your system allows null values as subscripts and if this key may be
null, enter YES; otherwise, enter NO.
Primary key logic for index tables is similar to the logic for base
tables. Refer to the discussion on Primary Key Logic earlier in this
chapter.
Generally, an index contains a foreign key that points back to the base
table. This allows the index to be viewed as logically equivalent to the
base table.
If you have not defined any foreign keys for the index, the Add Foreign
Key window appears. Select YES to add a foreign key. Otherwise, if
index foreign keys exist, the Select, Insert, Delete selection window
appears. You can press [insert] to add a new foreign key, or select the
foreign key you wish to edit. If you want to delete a foreign key,
highlight it and press [delete].
The windows that you see during this process are similar to those used
to add/edit a foreign key. Refer to the section “FOREIGN KEYS
Option” for more detailed instructions.
The data dictionary reports are valuable tools to be used by the DBA
and others during the global mapping process. Each report can be used
to validate the structures defined at certain checkpoints in the process.
The reports include any expressions or executes that are used for data
transformations in your system. A hard copy of each report should be
available to the DBA at all times.
This procedure can be used to print a report of all data storage methods
defined for your system. The report includes all domain parameters
and all transform expressions or executes for a range of domain names.
This procedure can be used to print a report of all key formats defined
for your system. The report includes all key format parameters and all
transform expressions or executes for a range of key formats.
This procedure can be used to print a report of all output formats
defined for your system. The report includes the name, length, and
justification parameters for a range of output format names.
This procedure can be used to print a list of all application and user
group schemas defined for your system.
This procedure can be used to print the logical table definitions for a
selected schema. The report includes the definitions for all columns,
primary keys, foreign keys, and indices for a range of table names. If
desired, the report prints the physical definitions, including vertical
and horizontal addresses.
This report can also be used to view the table and column definitions
for the MSM-SQL Data Dictionary (the DATA_DICTIONARY
schema).
From table name: character (30) [list]
Thru table name: character (30) [list]
You can define a range of tables to be included in the report. You may
either enter beginning and ending values, or you can press [list] at each
prompt to select from the list of tables.
Print globals? YES/NO
Answer YES if you want the report to include the physical definitions.
Break at table? YES/NO
Answer YES if you want a page break to occur at the end of each
table’s definition.
This procedure can be used to print a list of all view names and the
SQL text needed to build each view.
Chapter 4: Table Filers
MSM-SQL separates SQL statement and column validation from the
table row validation and filing operations. The M routine generated by
the INSERT, UPDATE, or DELETE statement creates a value array
global and performs row locking and column level validation,
including required columns and data type checks. A table filer routine
performs row level validation, including referential integrity and
unique indices, and updates the M globals with the changes specified
in the value array.
These topics are examined more fully later. Before we begin, let’s
establish the basis for that discussion.
[Figure: Application / SQL Statement / Black Box]
In this chapter we refer to your code as the application. Any SQL
code you reference using ESQL or the API/M is referred to as an SQL
statement.
Note: Once the relevant information exists, you may produce printouts
of table definition reports and table filer routines similar to those
shown in this chapter using the DATA DICTIONARY/
REPORTS/TABLE PRINT option.
Term Definition
Business rules A type of row validation that checks for
relationships between columns in one or more table
rows. Business rules are enforced by the table filer.
While it is possible to have a business rule that only
references a single column value, checks of that type
are typically performed as a column validation step
in the SQL statement.
Column validation Validation tests that can be applied to the discrete
column value, without referencing any other column
values. These include the NOT NULL (required)
attribute, and either domain or data type validation.
Domain and data type validation include correct
format, maximum length, and comparisons to
constants.
Concurrency The system’s ability to manage more than one
concurrent transaction without database corruption.
Within SQL, concurrent transactions each have an
isolation level that determines the allowable
interaction between transactions.
Database integrity Ensures that the database contains valid data, and
that the correct relationships between different rows
and tables are maintained.
Referential integrity A type of row validation used on row delete to
ensure that the delete does not leave behind foreign
keys (pointers) to the deleted row.
†
The concepts of connections, transactions, and cursors are covered in more detail in the
MSM-SQL Programmer’s Reference Guide.
Row locks A semaphore that provides concurrency protection
for a particular row in a particular table. Row locks
may be either exclusive (WRITE lock) or
non-exclusive (READ lock). WRITE locks prevent
two concurrent transactions from updating the same
row. READ locks prevent transactions from
modifying a row that has been read by a different
transaction.
Row validation Any validation test that references two or more
column values. This includes unique indices,
business rules, and referential integrity.
Table filer An M routine that applies the changes from
INSERT, UPDATE, and DELETE statements to the
M global database.
Transaction A group of one or more SQL statements that
reference or modify the database.
Unique indices A type of row validation that ensures a computed
value, composed of the first N keys of an index, is
unique within the table.
Before an SQL statement can reference or update rows in the database,
it must first perform the appropriate row locks.
Created tables use a default locking scheme. You must enter the row
lock code for mapped tables using the MAP EXISTING GLOBALS
option. Any row lock code you enter should be consistent with the
locking strategies that are used by your existing applications.
READ locks ensure that retrieved rows are not in the process of being
updated. It is possible your M applications may not use READ locks or
checks. If this is the case, then you should not enter READ lock code
for those tables. READ locks are used if the transaction’s isolation
level is other than READ UNCOMMITTED.
The SELECT statement reads data. If the isolation level is anything
other than READ UNCOMMITTED, a READ lock is performed on each
SELECTed row to prevent dirty reads and other concurrency violations.
Any persistent components of a READ lock are removed only after a
COMMIT or ROLLBACK.
This statement adds new rows to a table. The M routine that performs
the INSERT automatically checks that all required columns have non-
null values, and performs any domain or data type validation logic to
ensure acceptable values.
The UPDATE statement does not allow changes to the primary key
columns, and performs a WRITE lock prior to invoking the update
table filer.
This statement removes rows from a table. The M routine that
performs the DELETE performs a WRITE lock prior to invoking the
delete table filer.
Since the table filers use a file-as-you-go approach, the COMMIT
statement simply performs any necessary row unlocks. However, the
ROLLBACK statement must undo all table row changes. The
ROLLBACK operation inverts the previously processed statements,
effectively using the table filers in reverse.
Table filer routines are automatically generated for tables produced
using the CREATE TABLE statement (after the appropriate statement
reference). The CREATE TABLE statement typically occurs in a
sequence such as the following:
Site: Micronetics Design Corp. Schema: SQL_TEST
Table definition, printed on 05/24/95 at 4:32 PM
CREATED_EMPLOYEES - 771
LOGICAL
PHYSICAL
INSERT filer execute: D I^XX78
UPDATE filer execute: D U^XX78
DELETE filer execute: D D^XX78
INDICES
CREATED_EMPLOYEES_BY_NAME - 772
^SQLT(772,NAME,EMP_SSN)
The compiled SQL statements use the table filer to save any database
changes. The SQL statements also use tags in the utility routine
SQL0E to perform row locks and unlocks. The WRITE^SQL0E tag
always performs an exclusive WRITE lock. The READ^SQL0E tag
performs an action appropriate to the transaction’s isolation level. If
the level is READ UNCOMMITTED, the READ^SQL0E simply
quits. If the transaction is READ COMMITTED, the READ^SQL0E
tag checks to ensure that no other transaction has a WRITE lock on the
specified row.
The table filer communicates with the SQL statement using the
^SQLJ(SQL(1),99,SQLTCTR) global array. The first subscript of this
global is the connection handle, which uniquely identifies the
connection. The second subscript, 99, isolates the table filer
information from other connection-related information. The third
subscript, SQLTCTR, is a counter that identifies a particular row.
If an SQL statement modifies more than one row, the table filer is
called once for each row, and each time it has a different SQLTCTR
value. A complete description of the ^SQLJ global array and related
variables is provided later in this chapter.
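The shape of this communication array can be sketched with a Python
dictionary. This is an illustration only, not MSM-SQL code; the
connection handle, row IDs, and column values are hypothetical, and
table number 771 is taken from the sample listing that follows.

```python
conn = 17                      # connection handle, SQL(1)
sqlj = {}

# An UPDATE touching two rows produces two SQLTCTR entries:
for ctr, (rowid, new_name) in enumerate([("111223333", "SMITH"),
                                         ("444556666", "JONES")], start=1):
    sqlj[(conn, 99, ctr)] = "U" + "771" + "~" + rowid   # statement header node
    sqlj[(conn, 99, ctr, 2)] = new_name                 # new value, column cs=2

# The filer is called once per row, each call seeing a different SQLTCTR:
for ctr in (1, 2):
    header = sqlj[(conn, 99, ctr)]
    rowid = header.split("~", 1)[1]   # like $P(...,"~",2,999) in the filer
    print(ctr, rowid, sqlj[(conn, 99, ctr, 2)])
```

Each filer invocation works on exactly one SQLTCTR value, so a
multi-row statement is reduced to a series of single-row actions.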
The table filer for the created table begins on the next page.
XX78 ;Table filer for 771 [V3.0];05/24/95@4:25 PM
; filer for SQL_TEST.CREATED_EMPLOYEES
D ; delete
N C,D,K,O,X
; check pkeys
S SQLROWID=$P(^SQLJ(SQL(1),99,SQLTCTR),"~",2,999),K(1)=SQLROWID
I '$D(SQLTLEV) S SQLTLEV=0
S (X,SQLTLEV)=SQLTLEV+1 K X(X) S (X,SQLTLEV)=SQLTLEV-1
;kill data
S (C(0),^SQLJ(SQL(1),99,SQLTCTR,0,0))="",D=$G(^SQLT(771,K(1),2))
I D'="" S $E(C(0),2)=1,O(2)=D,^SQLJ(SQL(1),99,SQLTCTR,-2)=O(2)
S D=$G(^SQLT(771,K(1),3))
I D'="" S $E(C(0),3)=1,^SQLJ(SQL(1),99,SQLTCTR,-3)=D
S D=$G(^SQLT(771,K(1),4))
I D'="" S $E(C(0),4)=1,^SQLJ(SQL(1),99,SQLTCTR,-4)=D
S ^SQLJ(SQL(1),99,SQLTCTR,0,0)=C(0) K ^SQLT(771,K(1))
; kill indices
I $E(C(0),2) K ^SQLT(772,O(2),K(1))
Q
I ; insert
N C,D,F,K,N S SQLTBL=771
; check pkeys
S SQLROWID=$P(^SQLJ(SQL(1),99,SQLTCTR),"~",2,999),K(1)=SQLROWID
I K(1)="" S SQLERR=583 D ER^SQLV3 Q
I '$D(^SQLT(771,K(1))) G 1
K ^SQLJ(SQL(1),99,SQLTCTR) S SQLERR=43 D ER^SQLV3 G 4
1 D WRITE^SQL0E
I SQLCODE<0 K ^SQLJ(SQL(1),99,SQLTCTR) Q
S D=""
; load change array
S C(0)=^SQLJ(SQL(1),99,SQLTCTR,0,0)
; insert data
S F=0
I $E(C(0),2) S N(2)=^SQLJ(SQL(1),99,SQLTCTR,2),^SQLT(771,K(1),2)=N(2),F=1
I $E(C(0),3) S ^SQLT(771,K(1),3)=^SQLJ(SQL(1),99,SQLTCTR,3),F=1
I $E(C(0),4) S ^SQLT(771,K(1),4)=^SQLJ(SQL(1),99,SQLTCTR,4),F=1
I 'F S ^SQLT(771,K(1))=""
; set indices
I $E(C(0),2) S ^SQLT(772,N(2),K(1))=""
Q
U ; update
N C,D,K,N,O
; check pkeys
S SQLROWID=$P(^SQLJ(SQL(1),99,SQLTCTR),"~",2,999),K(1)=SQLROWID
; load change array
4 D ARBACK^SQL0E
Q
The SQL grammar supports six statements that are relevant to the
process: SELECT, INSERT, UPDATE, DELETE, COMMIT, and
ROLLBACK. Any number of rows may be processed by each of the
first four statements. The SELECT statement reads data; the INSERT,
UPDATE, and DELETE statements change rows in the database; and
the COMMIT and ROLLBACK statements finalize or discard changes
already made by the INSERT, UPDATE, and DELETE statements. In
addition, COMMIT and ROLLBACK free any READ or WRITE locks.
There are two major components to database integrity: data validation
and concurrency. Data validation includes both column level
constraints and table row level constraints. Column level constraints
include data type or domain validation logic and required column
checks. The column level constraints are enforced on specific columns
by the INSERT and UPDATE statements prior to calling the table
filer. Table row level constraints, which include unique indices,
referential integrity, and business rules, must be enforced by the table
filer.
Most of the concurrency checks are performed by the SQL statements
prior to executing the table filer logic. However, the INSERT filer
logic is responsible for performing a WRITE lock on the new row, and
READ locks may be performed if additional rows are referenced by the
table filer.
[Figure: Application issues an SQL statement (multiple rows); the
Table Filer is called one row at a time.]
Integral to the design process is the provision for error handling.
When viewed from the SQL perspective, each statement may process
many rows, and either completes successfully for all rows or fails and
has no effect on any row in the database. However, from the
perspective of the table filer routine, each statement is viewed as a
sequence of single row actions, rather than a single multi-row action.
If any row action fails, the filer must ROLLBACK any changes made to
the current row and quit with SQLCODE=-1. Control is returned to the
statement level. At that time, the compiled SQL statement manages the
ROLLBACK of any rows that were processed prior to the failure.
Three steps are necessary to provide SQL modification of a mapped
table:
add additional domain logic;
enter additional table information in the MAP EXISTING
GLOBALS option;
write the table filer routine.
Each column in a mapped table is linked to a domain. Each domain is
linked to a base data type (CHARACTER, DATE, FLAG, INTEGER,
MOMENT, NUMERIC or TIME). Each data type provides basic
EXTERNAL TO BASE conversion and VALIDATION logic.
If this default logic is sufficient, then you can skip this step. However,
if you want the SQL statements to support additional, or different,
conversion or VALIDATION logic for a particular domain, you must
enter that logic using the DOMAIN EDIT option (DATA
DICTIONARY/DOMAIN EDIT). The prompt Override data type
logic? in DOMAIN EDIT is used to access the EXTERNAL TO
BASE and VALIDATION execute logic. Both of these executes use
the variable X for input and output.
The VALIDATION execute expects X to be in base format. The
execute then makes any additional checks that are necessary to ensure
the value is appropriate. For example, the validation might check that a
date value is not in the future by using the code below:
I X>+$H K X
The MAP EXISTING GLOBALS option contains several prompts that
are related to row locking and table filers.
This section focuses on how values supplied in the prompts are applied
by a table filer. For information regarding the location and format of
the prompts within the MAP EXISTING GLOBALS option, refer to
Chapter 3: Creating Your Data Dictionary.
To preface our coverage of this option, note that READ and WRITE
lock functions provide the following inputs:
Variable Description
SQLTBL Table number (from ^SQL(4) table).
SQLROWID Table row primary key (delimited).
Both the READ and WRITE locks use the variable SQLROWID to
represent the composite of all primary key columns. If the primary key
is composed of more than one column, you may need to enter a
primary key delimiter value at the prompt in the Table Features
window. This value should be an ASCII code representing a character
that does not occur within any of the primary key component columns.
The primary key delimiter separates the various primary key columns
in the SQLROWID variable.
If you do not enter a primary key delimiter, the tab character (ASCII
code = 9) is used by default. The Allow updates? prompt determines if
this table may be updated. If you enter NO, you are still allowed to
enter READ lock logic. If you enter YES, you must enter WRITE
lock/unlock and INSERT, UPDATE, and DELETE FILER logic in the
sequential Update Table Logic window.
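The composition of SQLROWID can be sketched in Python. This is an
illustration only, not MSM-SQL code; make_rowid and split_rowid are
hypothetical helper names, the key values are invented, and the tab
character is the default delimiter noted above.

```python
DELIM = chr(9)                        # default primary key delimiter (tab)

def make_rowid(*keys):
    """Compose SQLROWID from the primary key columns, in sequence order."""
    return DELIM.join(keys)

def split_rowid(rowid):
    """Recover the individual key columns from SQLROWID."""
    return rowid.split(DELIM)

rowid = make_rowid("ACME", "1042")    # e.g. a two-column primary key
print(split_rowid(rowid))
```

The delimiter must be a character that never occurs in any component
key; otherwise the split would produce the wrong columns.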
The WRITE lock code sets an exclusive, persistent lock on the table
SQLTBL and row SQLROWID. The WRITE unlock code clears a
previously established WRITE lock.
The use of the READ lock depends on the transaction’s isolation level
and your current application’s code. If the transaction’s isolation level
is READ UNCOMMITTED, the READ lock code is not executed. If
the level is READ COMMITTED, the code should check to ensure
that no other transaction has a WRITE lock on the specified row. The
READ unlock resets any persistent READ locks.
If your existing applications do not use READ locks, you may choose
to skip the READ lock logic. If the READ lock only performs a check,
and does not leave a persistent lock, the READ unlock may be
skipped.
The format of these executes depends on how your table filers work.
Typically, these executes contain a DO to a tag in the table filer
routine.
Each real column that may be modified using SQL must have a change
sequence number. The change sequence is an integer value that
uniquely identifies the column within the table. It is an alternative
identifier that is shorter than the column name. These values should be
assigned as consecutive positive integers starting with one (1). Gaps
between change sequence values should be avoided if possible.
The Compute Key on Insert code under the INSERT KEY option
should only be used for primary key columns that are automatically
generated. The ideal code to enter for this execute is a DO to your
existing application’s code.
The Number of unique keys prompt is asked for each index. Since the
primary key of the base table is always unique, and since each index
contains the complete primary key of the base table, each index row is
always guaranteed to be unique. However, certain indices are used to
enforce a unique constraint when based on only some of the index
keys. If the index is used to support a unique constraint, you should
enter the number of keys that comprise the unique portion of the index.
The next three statements can be entered by accessing their
corresponding options in the Update Table Logic window.
The INSERT FILER statement in the Update Table Logic window
adds new rows to a table. The INSERT FILER statement establishes
the following entries in the ^SQLJ global.
^SQLJ(SQL(1),99,SQLTCTR)="I"_SQLTBL
^SQLJ(SQL(1),99,SQLTCTR,0,change_node)=change_flag_string
^SQLJ(SQL(1),99,SQLTCTR,column_sequence)=new_value (if not null)
This statement changes column values in table rows. The UPDATE
FILER statement establishes the following entries in the ^SQLJ global:
^SQLJ(SQL(1),99,SQLTCTR)="U"_SQLTBL_"~"_SQLROWID
^SQLJ(SQL(1),99,SQLTCTR,0,change_node)=change_flag_string
^SQLJ(SQL(1),99,SQLTCTR,-column_sequence)=old_value (if not null)
^SQLJ(SQL(1),99,SQLTCTR,column_sequence)=new_value (if not null)
The DELETE FILER statement removes rows from a table and
establishes the following entries in the ^SQLJ global:
^SQLJ(SQL(1),99,SQLTCTR)="D"_SQLTBL_"~"_SQLROWID
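How a filer reads the change flag string can be sketched in Python.
This is an illustration only, not MSM-SQL code: character position n
of the string is 1 when the column with change sequence n carries a
new value, mirroring the $E(C(0),n) tests in the filer listings. The
flag string and column values are hypothetical.

```python
def changed(flags, cs):
    """True if the column with change sequence cs was modified."""
    return len(flags) >= cs and flags[cs - 1] == "1"

row_node = {2: "SMITH", 4: "12.50"}   # ^SQLJ(...,column_sequence) entries
flags = "0101"                        # columns cs=2 and cs=4 changed

for cs in (1, 2, 3, 4):
    if changed(flags, cs):
        print(cs, row_node[cs])       # only the modified columns are filed
```

The filer therefore touches only the globals and index nodes whose
columns actually changed.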
Logic has been added using the MAP EXISTING GLOBALS option to
both table filer and WRITE lock prompts. Each of the three data values
has a change sequence value indicated by the (cs=N) displayed after
the column name in the PHYSICAL section of the report. The DATA
column is a composite value that cannot be directly edited, and
therefore has no change sequence value.
There is no method for calculating the primary key for this table, but if
such code existed, it would be listed along with the other primary key
information.
EMPLOYEES - 10002
This is a table of all employees.
LOGICAL
MANAGER_LINK EMPLOYEES_ID
This is a link to the employees' table.
Foreign key (MANAGER) to EMPLOYEES
NAME CHARACTER(15) NOT NULL
This is the employee's name.
SALARY NUMERIC(5,2)
This is the employee's hourly salary.
PHYSICAL
INSERT filer execute: D I^TF10002
UPDATE filer execute: D U^TF10002
DELETE filer execute: D D^TF10002
READ lock: L +^SQLEMP(SQLROWID):0 S:'$T SQLERR="Unable to
access",SQLCODE=-1 L -^SQLEMP(SQLROWID)
WRITE lock: L +^SQLEMP(SQLROWID):0 E S SQLERR="Unable to
lock",SQLCODE=-1
WRITE unlock: L -^SQLEMP(SQLROWID)
^SQLEMP(EMP_SSN) = DATA
";",1) NAME (cs=2)
";",2) SALARY (cs=4)
";",3) MANAGER (cs=5)
INDICES
EMP_BY_NAME - 609
employees by name index
^SQLEMPN(NAME,EMP_SSN)
The DELETE code loads the primary key into the variable K(1), saves
the old values and the change flag string in the ^SQLJ global, and then
DELETEs the row and index entries.
The INSERT code loads the primary key into the variable K(1), checks
for both a null primary key and duplicate entries, locks the table, loads
the change flag string, sets up the data node D, saves the row, and
creates the index entry.
The UPDATE code loads the primary key and change flag string, saves
any changed values, and resets the index if necessary.
; check pkeys
S K(1)=$P(^SQLJ(SQL(1),99,SQLTCTR),"~",2,999)
I K(1)="" S SQLERR="Missing primary key" G ERROR
I '$D(^SQLEMP(K(1))) G 2
K ^SQLJ(SQL(1),99,SQLTCTR)
S SQLERR="Duplicate primary key entry exists" G ERROR
2 L +^SQLEMP(K(1)):0 E S SQLERR="Unable to lock" G ERROR
; load change array
S C(0)=^SQLJ(SQL(1),99,SQLTCTR,0,0)
; insert data
S D=""
I $E(C(0),2) S $P(D,";",1)=^SQLJ(SQL(1),99,SQLTCTR,2)
I $E(C(0),4) S $P(D,";",2)=^SQLJ(SQL(1),99,SQLTCTR,4)
I $E(C(0),5) S $P(D,";",3)=^SQLJ(SQL(1),99,SQLTCTR,5)
I $TR(D,";")="" S ^SQLEMP(K(1))="" Q
S ^SQLEMP(K(1))=D
; set indices
I $E(C(0),2) S ^SQLEMPN($P(D,";",1),K(1))=""
Q
;
; update
U N C,D,K,N,O
; load pkeys
S K(1)=$P(^SQLJ(SQL(1),99,SQLTCTR),"~",2,999)
; load change array
S C(0)=^SQLJ(SQL(1),99,SQLTCTR,0,0)
; insert data
S D=^SQLEMP(K(1))
I $E(C(0),2) S O(2)=$G(^SQLJ(SQL(1),99,SQLTCTR,-2)),N(2)=$G(^SQLJ(SQL(1),99,SQLTCTR,2)),$P(D,";",1)=N(2)
I $E(C(0),4) S $P(D,";",2)=$G(^SQLJ(SQL(1),99,SQLTCTR,4))
I $E(C(0),5) S $P(D,";",3)=$G(^SQLJ(SQL(1),99,SQLTCTR,5))
I $TR(D,";")="" S ^SQLEMP(K(1))="" Q
S ^SQLEMP(K(1))=D
; update indices
I '$E(C(0),2) Q
I O(2)'="" K ^SQLEMPN(O(2),K(1))
I N(2)'="" S ^SQLEMPN(N(2),K(1))=""
6 Q
;
ERROR S SQLCODE=-1 K ^SQLJ(SQL(1),99,SQLTCTR)
Q
The TABLE^SQL0S utility requires the following inputs:

Variable   Description
SQLSRC     Table number from ^SQL(4) or 'schema_name.table_name'.
SQLRTN     M routine for the table filer.
SQLDTYPE   Valid device type name.
SQLUSER    Valid password.
The following code samples generate table filers for the PROJECTS
and EMPLOYEES tables:
; compile PROJECTS (K4=10000)
K
S SQLSRC=10000,SQLRTN="XXPROJ"
S SQLDTYPE="DEFAULT",SQLUSER="SHARK"
D TABLE^SQL0S I SQLCODE<0 W !,"Unable to compile!"
; compile EMPLOYEES
K
S SQLSRC="SQL_TABLE.EMPLOYEES",SQLRTN="XXEMP"
S SQLDTYPE="DEFAULT",SQLUSER="SHARK"
D TABLE^SQL0S I SQLCODE<0 W !,"Unable to compile!"
The Import DDL interface provides an alternative to the interactive
DBA option MAP EXISTING GLOBALS procedure. The Import DDL
interface processes a script file that may be either a global or a host
file. Host files are particularly convenient since they may be edited
and printed with a word processor. The script file is then input to the
Import DDL interface, where the DDL commands are processed into
the MSM-SQL Data Dictionary. The script file technique gives you a
text-based, portable file that insulates your definitions from changes
to the internal data dictionary structure. The script file is also easier
to update, more readable, and reusable.
The DDL enhancements group objects and functions in the following
manner:
The DDL syntax is based on ANSI standard SQL DDL, with significant
extensions to support M global structures. The CREATE command
creates new database objects and modifies existing ones. Use ALTER
only to change a database object's name. The DROP command continues
to remove objects from the data dictionary; however, DROP SCHEMA
now supports the cascade of component tables.
While DDL statements may occur in any order, that order affects the
resulting action, because the MSM-SQL engine processes each
statement sequentially.
Micronetics recommends the following statement sequence:
Your DDL script file should start with all DROP statements to delete
any old objects. Then follow with all ALTER statements to change the
names of any objects. Finally, all CREATE statements should be
listed to create any undefined objects or alter any existing objects.
The ALTER statement should be used only to change the name of
an existing object. For example, to change an old table name and then
load its current definition, use ALTER followed by CREATE. This
is preferable to a DROP and CREATE combination because
DROP and CREATE do not preserve privileges and links.
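A skeleton of that recommended ordering (the object names here are invented for illustration):

```
---- 1) drop obsolete objects
DROP TABLE PAYROLL.RETIRED_TABLE
---- 2) rename objects that keep their definitions
ALTER TABLE PAYROLL.EMPLOYEE RENAME EMPLOYEES
---- 3) create undefined objects and redefine existing ones
CREATE SCHEMA PAYROLL
CREATE TABLE PAYROLL.EMPLOYEES
---- (column specifications omitted from this sketch)
```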
If a schema is dropped, all tables, indices, and foreign keys associated
with the schema are automatically deleted.
If a table is dropped, all indices and foreign keys associated with the
table are deleted.
To use the DDL interface, you must create a DDL script file that is
either a global or host file. Regardless of the approach, the DDL must
conform to the syntax we describe in this chapter. See the
alphabetized reference section that follows. In addition, the DDL
must strictly adhere to the following rules:
IMPORTANT:
^SQLIN(SQL(1),sequence)=DDL text
The product of this file is the Employees table used throughout the MSM-SQL Data
Dictionary Guide.
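The script lines themselves are not reproduced here. A sketch of how such a global script might be loaded, one DDL line per sequence subscript (the statements shown are invented):

```
 ; Sketch: load a DDL script into the import global, one line per
 ; sequence subscript; SQL(1)=1 matches the invocation that follows
 S SQL(1)=1
 S ^SQLIN(SQL(1),1)="---- EMPLOYEES example ----"
 S ^SQLIN(SQL(1),2)="CREATE SCHEMA SQL_TABLE"
 S ^SQLIN(SQL(1),3)="CREATE TABLE SQL_TABLE.EMPLOYEES"
 ; ...remaining lines of the script...
```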
S SQLUSER="USER",SQL(1)=1
D DDLI^SQL
I SQLCODE<0 W !,SQLERR
The DDLI interface requires that you supply the following variables:
SQLUSER=
This variable is the user’s password.
SQLCODE=
This variable returns status information.
Depending on your situation, you may wish to set some of the optional
variables described below, which are available during DDL execution.
IMPORTANT: If the SQLDEBUG, SQLLOG, or SQLTOTAL
variables are set, the last 10 lines of the script file will be echoed back
to the user in the event of an import error.
SQLDEBUG=
To turn on debug mode, set this variable to 1 (on).
In debug mode, the DDLI performs extra actions during the parse step
to track progress. Each time the parser encounters an ALTER,
CREATE, DROP, or SET statement, it places a bookmark at the DDL
script line. For this feature to work, each of these statements should be
placed at the beginning of the DDL script line. Each new table-rowid
value that is parsed and added to the ^SQLIX import global is
also tracked under the bookmark. If an error occurs, the DDLI parser
deletes all table-rowid entries added since the last bookmark, leaving
the ^SQLIX import global in a valid, although incomplete, state. In
addition to the usual error information, the variable SQLLINE is also
returned. The SQLLINE variable is composed of the line number
where the bookmark was set and the actual text of the line, separated
by a space.
If an error occurs in debug mode, the operator has two options other
than abandoning the effort.
1) By setting the variable SQLDEBUG="IMPORT" and repeating the
DO DDLI^SQL tag, the operator may import the incomplete ^SQLIX
file. The operator could then edit the DDL script file, deleting the
portion prior to the line SQLLINE and perform another DDLI pass to
import the remainder of the script. This may cause problems,
however, if the tables in the first import contain foreign keys to tables
in the second part, since the foreign keys cannot be resolved.
2) The operator may fix the DDLI script file and set
SQLDEBUG="RESTART" and repeat the DO DDLI^SQL tag. This
causes the parser to skip to the line SQLLINE and resume the parse. If
additional errors occur, the operator may repeat the process. When
restarting the parser, the operator must ensure that the SQL(1) variable
has not changed, since the import global and bookmark information
are indexed by the original SQL(1) value.
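A sketch combining the two recovery paths (variable names as documented above; the flow itself is illustrative):

```
 ; Sketch: first pass with debug mode on
 S SQLUSER="USER",SQL(1)=1,SQLDEBUG=1
 D DDLI^SQL
 I SQLCODE<0 W !,SQLERR,!,"Bookmark: ",SQLLINE
 ; ...fix the script file, leaving SQL(1) unchanged...
 ; then resume the parse from the bookmarked line
 S SQLDEBUG="RESTART"
 D DDLI^SQL
 I SQLCODE<0 W !,SQLERR
```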
SQLDEV=
You may specify a device identifier that you wish to use as a log
device. The device identifier must be a valid argument for the M
OPEN, USE, and CLOSE commands.
SQLLOG=
Specify a value of 1 to print the entire output of DDL parsing.
If you use SQLDEV with SQLLOG=1, you may produce a
considerable amount of output.
SQLTOTAL=
This value is the total number of lines in the DDL host file or global.
Setting this value displays the number of the line currently being
processed. This display is suppressed if the SQLDEV option has been set.
The following examples show what you may expect to see when using
some of these options during successful and unsuccessful parses.
First, we illustrate a successful parse using two of the optional
variables.
The command line for the example below is:
S SQLUSER="USER",SQL(1)=1,SQLTOTAL=82
D DDLI^SQL
I SQLCODE<0 W !,SQLERR
82 / 82 lines
Parse complete!
Import complete!
For the next examples, we have purposely inserted an error in line 11
of the global DDL script. The example below demonstrates the use of
SQLTOTAL. The command line reads:
S SQLUSER="USER",SQL(1)=1,SQLTOTAL=82
D DDLI^SQL
I SQLCODE<0 W !,SQLERR
11 / 82 lines
The message indicates the line on which the error occurred and the
token identifies the component of the DDL statement that produced the
error. The state value and type are values that should be provided to
your vendor if you are unable to resolve the error on your own.
A variation on this theme sends the results to a printer (SQLDEV=3).
The command line is:
S SQLUSER="USER",SQL(1)=1,SQLDEV=3
D DDLI^SQL
I SQLCODE<0 W !,SQLERR
The DDL interface procedure we have just described for using a global
DDL script remains the same here. The only difference is the method
in which you create your host DDL script and the manner in which it is
referenced on the command line for DDL execution.
---------------------------------------------------------
--------------------- MSM-SQL V3.4 ----------------------
---- Data Dictionary Language Interface (DDLI) Example ---
---------------------------------------------------------
---- (Note: lines that start with two or more dashes -----
---- are ignored) ----------------------------------------
---------------------------------------------------------
----------------- CREATE DOMAIN Examples -----------------
----------------- CREATE TABLE Examples ------------------
CREATE INDEX FM_TABLE_LEVEL_2_X1 FOR FMTEST.FM_TABLE_LEVEL_2
( FM_TABLE_ID GLOBAL ^MICRO(
,LEVEL_2 PARENT FM_TABLE_ID GLOBAL ,"L1","B",
,FM_TABLE_LEVEL_2_ID PARENT LEVEL_2 GLOBAL ,
,PRIMARY KEY(FM_TABLE_ID,LEVEL_2,FM_TABLE_LEVEL_2_ID)
)
Once you create your host DDL script file, it may be loaded to the
DDL interface using the following command lines:
S SQLUSER="USER"
S SQLFILE="\DDL_FILE.TXT"
D DDLI^SQL
I SQLCODE<0 W !,SQLERR
ALTER INDEX
ALTER INDEX [schema_name.]index_name RENAME name
ALTER DOMAIN
ALTER DOMAIN domain_name RENAME name
ALTER SCHEMA
ALTER SCHEMA schema_name RENAME name
ALTER TABLE
ALTER TABLE [schema_name.]table_name RENAME name
CREATE DOMAIN
CREATE DOMAIN domain_name [AS] data_type_name
[COMMENT literal]
[LENGTH integer [SCALE integer]]
[OUTPUT FORMAT output_format_name]
[FROM BASE {EXECUTE(m_execute) or
EXPRESSION(m_expression)}]
[TO BASE {EXECUTE(m_execute) or
EXPRESSION(m_expression)}]
[NULLS EXIST]
[REVERSE COMPARISONS]
[OVERRIDE COLLATION]
[NO SEARCH OPTIMIZATION]
[NO COLLATION OPTIMIZATION]
CREATE INDEX
CREATE [UNIQUE] INDEX [schema_name.]index_name
FOR [schema_name.]table_name [COMMENT literal]
(
column_name[address_specification]
[, column_name[address_specification]]...
)
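As an illustration, the EMP_BY_NAME index from the EMPLOYEES report earlier in this chapter, stored as ^SQLEMPN(NAME,EMP_SSN), might be declared as follows (a sketch; the GLOBAL fragments are assumptions patterned on the host script example above):

```
CREATE INDEX SQL_TABLE.EMP_BY_NAME FOR SQL_TABLE.EMPLOYEES
COMMENT 'employees by name index'
( NAME GLOBAL ^SQLEMPN(
,EMP_SSN PARENT NAME GLOBAL ,
)
```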
CREATE KEY FORMAT
CREATE KEY FORMAT key_format_name
FROM BASE {EXECUTE(m_execute) or
EXPRESSION(m_expression)}
[COMMENT literal]
[EQUIVALENT]
[NON NULLS EXIST]
[NULLS EXIST]
[REVERSE COMPARISONS]
CREATE SCHEMA
CREATE SCHEMA schema_name
[COMMENT literal]
[GLOBAL m_fragment]
CREATE TABLE
CREATE TABLE [schema_name.]table_name
[COMMENT literal]
[DELIMITER integer]
[external_file_specification]
(
table_column_specification [, table_column_specification]...
)
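As an illustration, the EMPLOYEES table from the report earlier in this chapter might be declared as follows (a sketch; the EMP_SSN domain, change sequences, and GLOBAL/PIECE fragments are assumptions based on the physical map shown there):

```
CREATE TABLE SQL_TABLE.EMPLOYEES
COMMENT "This is a table of all employees."
( EMP_SSN CHARACTER(9) GLOBAL ^SQLEMP( PRIMARY NOT NULL
,NAME CHARACTER(15) CHANGE SEQUENCE 2 PIECE ";",1
  COMMENT "This is the employee's name." NOT NULL
,SALARY NUMERIC(5,2) CHANGE SEQUENCE 4 PIECE ";",2
,MANAGER CHARACTER(9) CHANGE SEQUENCE 5 PIECE ";",3
)
```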
DROP DOMAIN
DROP DOMAIN domain_name
DROP INDEX
DROP INDEX [schema_name.]index_name
DROP SCHEMA
DROP SCHEMA schema_name
DROP TABLE
DROP TABLE [schema_name.]table_name
address_specification
[PARENT column_name]
[CHANGE SEQUENCE integer]
[GLOBAL m_fragment]
[PIECE m_fragment]
[EXTRACT FROM integer TO integer]
Rules
The CHANGE SEQUENCE is ignored for index columns.
If m_fragments are used after the GLOBAL or PIECE key words, they
must either be the last token on the line or followed by a space
character.
condition
A valid SQL condition that returns true, false or unknown.
domain_specification
{domain_name [(length [,scale])]} or table_reference
external_field_specification
{EXTERNAL FILE literal FIELD literal} or
fileman_field_specification
external_file_specification
{EXTERNAL FILE literal} or
{FILEMAN FILE numeric}
fileman_field_specification
FILEMAN FILE numeric FIELD numeric
[CODES literal]
[COMPUTED]
literal
quote_character [any_non_quote or embedded_quote]...
quote_character
Rule
Any_non_quote is any printable character other than the single
quote character (') and the double quote character (").
m_execute
One or more M commands that may be executed.
m_expression
An M expression that evaluates to a value.
m_fragment
A partial M expression that does not contain embedded spaces other
than within literals.
Rule
M_fragments must either be the last token on the line or followed by a
space character.
primary_key_specification
[AVG DISTINCT numeric]
[KEY FORMAT key_format_name]
[INSERT KEY (m_execute)]
[START AT literal]
[END AT literal]
[SKIP SEARCH OPTIMIZATION]
[ALLOW NULL VALUES]
[PRESELECT(m_execute)]
[CALCULATE KEY (m_execute)]
[VALID KEY (m_expression)]
[END IF (m_expression)]
[POSTSELECT (m_execute)]
Rule
KEY FORMAT and ALLOW NULL VALUES may only be used with
indices. Similarly, INSERT KEY may only be used with tables.
table_column_specification
column_name
domain_specification
{address_specification or {VIRTUAL(sql_expression)}}
[COMMENT literal]
[HEADING literal]
[OUTPUT FORMAT output_format_name]
[PROGRAMMER ONLY]
[CONCEAL]
[PRIMARY primary_key_specification]
[NOT NULL]
[UNIQUE]
[external_field_specification]
table_reference
REFERENCES [schema_name.]table_name
[ON DELETE {NO ACTION or CASCADE or SET DEFAULT or
SET NULL}]
Existing MSM-SQL data dictionary definitions can be exported as a
DDLI script. This can be helpful in the mapping process to illustrate
how current tables were (or could have been) defined using DDLI
statements.
SQLUSER=
This variable is the user's password.
SQLDTYPE=
This variable is the device type to be used for the export.
DDLI("SCHEMA")=
This is the SCHEMA (or '*' for all) to be exported.
DDLI("TABLE")=
This is the TABLE (or Internal ID) to be exported. If table name is not
unique, specify the schema name also.
SchemaName.TableName
SchemaName.*
TableName
DDLI("OUTPUT FORMAT")=
The output format name to be exported.
OutputFormatName (or '*' for all)
DDLI("KEY FORMAT")=
The key format name to be exported.
KeyFormatName (or '*' for all)
DDLI("TAB")=
The tab character string. The defaults are as follows:
Output to file: $C(9)
Global: 4 spaces
DDLI("TO_GLOBAL")=
The default for the global reference is ^SQLX($JOB,"DDLI").
DDLI("TO_FILE")=
Enter the filename to receive the output.
DDLI("SILENT")=
If this variable is set, output messages will not be displayed while
working.
DDLI("SINGLE")=
This variable will export only a single definition. When extracting
table definitions, the default behavior is to extract all related objects.
DDLI("MSG")=
If this variable is set, an informational message will be displayed after
completion of the export.
SQLERR=
This variable displays any error text.
This example shows how to use the Export DDLI for all tables in the
SQL_TEST schema.
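A sketch of that invocation, using the EXDDLI^SQL entry point and the variables documented above (the output file name is invented):

```
 ; Sketch: export every table in the SQL_TEST schema to a host file
 K DDLI
 S SQLUSER="USER",SQLDTYPE="DEFAULT"
 S DDLI("TABLE")="SQL_TEST.*"
 S DDLI("TO_FILE")="\SQLTEST.DDL"
 D EXDDLI^SQL
 I SQLCODE<0 W !,SQLERR
```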
This example shows how to use the Export DDLI to export just the
definition for the EMPLOYEES table.
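A sketch of that invocation, using DDLI("SINGLE") to suppress the export of related objects (the output file name is invented):

```
 ; Sketch: export only the EMPLOYEES table definition
 K DDLI
 S SQLUSER="USER",SQLDTYPE="DEFAULT"
 S DDLI("TABLE")="SQL_TABLE.EMPLOYEES",DDLI("SINGLE")=1
 S DDLI("TO_FILE")="\EMP.DDL"
 D EXDDLI^SQL
 I SQLCODE<0 W !,SQLERR
```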