Client/Server Applications
with
Visual FoxPro and SQL Server
Chuck Urwiler
Gary DeWitt
Mike Levy
Leslie Koorhan
Hentzenwerke Publishing
Published by:
Hentzenwerke Publishing
980 East Circle Drive
Whitefish Bay WI 53217 USA
Hentzenwerke Publishing books are available through booksellers and directly from the
publisher. Contact Hentzenwerke Publishing at:
414.332.9876
414.332.9463 (fax)
www.hentzenwerke.com
[email protected]
Copyright 2000 by Chuck Urwiler, Gary DeWitt, Mike Levy and Leslie Koorhan
All other products and services identified throughout this book are trademarks or registered
trademarks of their respective companies. They are used throughout this book in editorial
fashion only and for the benefit of such companies. No such uses, or the use of any trade
name, is intended to convey endorsement or other affiliation with this book.
All rights reserved. No part of this book, or the .CHM Help files available by download from
Hentzenwerke Publishing, may be reproduced or transmitted in any form or by any means,
electronic, mechanical photocopying, recording, or otherwise, without the prior written
permission of the publisher, except that program listings and sample code files may be entered,
stored and executed in a computer system.
The information and material contained in this book are provided "as is," without warranty of
any kind, express or implied, including without limitation any warranty concerning the
accuracy, adequacy, or completeness of such information or material or the results to be
obtained from using such information or material. Neither Hentzenwerke Publishing nor the
authors or editors shall be responsible for any claims attributable to errors, omissions, or other
inaccuracies in the information or material contained in this book. In no event shall
Hentzenwerke Publishing or the authors or editors be liable for direct, indirect, special,
incidental, or consequential damages arising out of the use of such information or material.
ISBN: 0-930919-01-8
To my wife, Heather, and children, Jacob and Megan, for having patience.
Michael Levy
This is for my wife Sybille, who has put up with the most awful mess that
a home office can be, with the excuse that once I finish this, then I'll clean it up. And I did.
Leslie Koorhan
Our Contract with You, The Reader
Hi there!
I've been writing professionally (in other words, eventually getting a paycheck for my
scribbles) since 1974, and writing about software development since 1992. As an author, I've
worked with a half-dozen different publishers, and corresponded with thousands of readers
over the years. As a software developer and all-around geek, I've also acquired a library of
more than 100 computer and software-related books.
Thus, when I donned the publisher's cap four years ago to produce the 1997 Developer's
Guide, I had some pretty good ideas of what I liked (and didn't like) from publishers, what
readers liked and didn't like, and what I, as a reader, liked and didn't like.
Now, with our new titles for the spring and summer of 2000, we're entering our third
season. (For those keeping track, the '97 DevGuide was our first, albeit abbreviated, season,
and the batch of six Essentials for Visual FoxPro 6.0 in 1999 was our second.)
John Wooden, the famed UCLA basketball coach, posited that teams aren't consistent;
they're always getting better, or worse. We'd like to get better. One of my goals
for this season is to build a closer relationship with you, the reader.
In order to do this, you've got to know what you should expect from us.
•	You have the right to expect that your order will be processed quickly and correctly,
	and that your book will be delivered to you in new condition.
•	You have the right to expect that the content of your book is technically accurate and
	up to date, that the explanations are clear, and that the layout is easy to read and
	follow, without a lot of fluff or nonsense.
•	You have the right to expect access to source code, errata, FAQs, and other
	information that's relevant to the book via our Web site.
•	You have the right to expect an electronic version of your printed book (in compiled
	HTML Help format) to be available via our Web site.
•	You have the right to expect that, if you report errors to us, your report will be
	responded to promptly, and that the appropriate notice will be included in the errata
	and/or FAQs for the book.
Naturally, there are some limits that we bump up against. There are humans involved, and
they make mistakes. A book of 500 pages contains, on average, 150,000 words and several
megabytes of source code. It's not possible to edit and re-edit multiple times to catch every last
misspelling and typo, nor is it possible to test the source code on every permutation of
development environment and operating system, and still price the book affordably.
Once printed, bindings break, ink gets smeared, signatures get missed during binding. On
the delivery side, Web sites go down, packages get lost in the mail.
Nonetheless, we'll make our best effort to correct these problems, once you let us know
about them.
And, thus, in return, when you have a question or run into a problem, we ask that you first
consult the errata and/or FAQs for your book on our Web site. If you don't find the answer
there, please e-mail us at [email protected] with as much information and detail as
possible, including (1) the steps to reproduce the problem, (2) what happened, and (3) what you
expected to happen, together with (4) any other relevant information.
Id like to stress that we need you to communicate questions and problems clearly.
For example:
•	"Your downloads don't work" isn't enough information for us to help you.
	"I get a 404 error when I click on the Download Source Code link on
	www.hentzenwerke.com/book/downloads.html" is something we can help you with.
•	"The code in Chapter 10 caused an error" again isn't enough information. "I
	performed the following steps to run the source code program DisplayTest.PRG in
	Chapter 10, and received an error that said 'Variable m.liCounter not found'" is
	something we can help you with.
We'll do our best to get back to you within a couple of days either with an answer, or at
least an acknowledgment that we've received your inquiry and that we're working on it.
On behalf of the authors, technical editors, copy editors, layout artists, graphical artists,
indexers, and all the other folks who have worked to put this book in your hands, I'd like to
thank you for purchasing this book, and hope that it will prove to be a valuable addition to your
technical library. Please let us know what you think about this book; we're looking forward to
hearing from you.
As Groucho Marx once observed, "Outside of a dog, a book is a man's best friend. Inside
of a dog, it's too dark to read."
Whil Hentzen
Hentzenwerke Publishing
August, 2000
Acknowledgements
First of all, I'd like to thank my wonderful wife Michelle for dealing with my extended working
hours while I finished this book. As you all probably know, working a full-time job and then
working on a book on top of it doesn't leave room for a lot of quality time together. She's
been very supportive of me from the day I met her, and especially during the time I worked on
this book. There were no complaints, only support. Thank you, Michelle... I love you.
Next, I'd like to thank Whil for getting me involved with this book. We had talked in the
past about doing some kind of book, so this was a great way for me to whet my appetite, since I
didn't have to write the whole thing <g>.
Which brings me to my third group of thank-yous: Without Gary, Mike and Leslie, I would
have had a lot more late nights of writing. I thank them for their contributions to the book. And
Chaim did a great job of making sure my writing was technically complete as well as clear. I'd
like to thank everyone at Micro Endeavors, both past and present. Without you, I would not
have been able to learn so much about VFP and SQL Server. Thanks, everyone! The drinks
are on me!
I must also thank my family and friends, who are all very excited that I'm finally going to
have my name on the cover of a book. Thanks for your support and encouragement over the
years as I tried to find my place in the world.
And finally, I'd like to thank all of my students, as well as those of you who support me
at the DevCons and user groups where I've spoken. Without your questions, curiosity, feedback
and willingness to share what you know with me, I wouldn't be able to write this stuff.
Chuck
I would like to thank Bonnie Berent and Kim Cameron for personal support and
encouragement; Tamar Granor and Whil Hentzen for supporting and encouraging my writing;
Chaim Caron for his insights and ideas; The Academy for...
Gary
I don't think I shall soon forget my experiences while participating in the development of this
book. It has given me a better appreciation for those who give up family and free time to share
their knowledge and experiences with the rest of us.
To Heather, thanks for your support, patience and willingness to proofread the early drafts,
even though you did not understand the topics being discussed.
To Whil Hentzen, thanks for inviting me to participate in this project; to Gary DeWitt,
thanks for heading up the project and taking on the bulk of the work; to my good friend Matt
Tiemeyer, thanks for taking the time to discuss ideas and preview drafts; and to another good
friend, Leslie Koorhan (before you came to participate in the project), my thanks for providing
ideas and the many long discussions.
Mike
First of all, I thank Whil Hentzen for giving me the opportunity to make my contribution to this
book, and for pushing. There's someone else whom I'd also like to thank for this chance, but
Whil never told me who brought my name up in the first place. Second, there's Mike Levy,
who offered me a peek at his contributions long before I wrote one word. Mike has been a real
friend as well, before and during this process. Third, theres Gary DeWitt (I know these are
other authors, but hey, they deserve all the praise that I can muster), who helped me shape my
chapters. He gave me a lot of direction when I was just trying to formulate my thoughts. And
after all, it was his outline of the book that started me. Fourth is Yair Alan Griver, who started
me years ago with his client/server articles, and started me writing with his encouragement and
inspiration. Finally, I also thank Dan Freeman, Paul Bienick, Ken Levy and Markus Egger.
Leslie
About the Authors
Gary DeWitt
Gary DeWitt has been a frequent contributor to the FoxTalk and FoxPro Advisor journals for
several years. Gary is the author of Client/Server Applications with Visual FoxPro and the
technical editor of Internet Applications With Visual FoxPro 6.0, from Hentzenwerke
Publishing, and has spoken at regional and national FoxPro conferences including Visual
FoxPro DevCon. He is a Microsoft Certified Professional and a past Microsoft Most Valuable
Professional. In addition to Visual FoxPro, Gary also works with C++, Java and Visual Basic,
and has been in the computing industry in one way or another since 1976. Gary is currently
senior software magician at Sunpro, Inc., the leader in fire service software, where he leads a
team responsible for a large client/server COM application compatible with the National Fire
Incident Reporting System.
Gary can be reached at [email protected].
Michael Levy
Michael Levy is a consultant with G.A. Sullivan, where he specializes in SQL Server and
database technologies. He is a Microsoft Certified Solution Developer (MCSD), Database
Administrator (MCDBA) and Trainer (MCT). As an MCT, he has taught more than 400 students
in 60 classes, who have benefited from his 10 years of FoxPro and four years of SQL Server experience.
Mike is a well-known member of the Visual FoxPro community and donates his time helping
others on various Internet newsgroups. In addition, he has spoken at multiple conferences and
user groups and is a frequent contributor to various technical journals. Mike is a University of
Cincinnati graduate and lives with his wife, two kids, eight fish and an old dog in a house that's
still waiting for Mike to discover landscaping.
Mike can be reached at [email protected].
Leslie Koorhan
Leslie Koorhan is an independent consultant and trainer who specializes in database
applications. He has worked with nearly every version of FoxPro and Visual FoxPro as well as
Microsoft SQL Server since the mid-1990s. He is also a Visual Basic developer. He has written
numerous articles for several publications, most recently a series on Microsoft SQL Server
OLAP Services for FoxTalk. Leslie has also written several training manuals.
Leslie can be reached at [email protected].
Chaim Caron
Chaim Caron is the President of Access Computer Systems, Inc., in New York City, which,
despite its name, specializes in software development using primarily Visual FoxPro. The firm
also provides data integrity testing, data cleansing services, and various other services relating
to data design and software development and support.
Chaim has specialized in FoxPro and Visual FoxPro since 1990. His articles have appeared
in FoxTalk and FoxPro Advisor. He has spoken to technical and industry groups on technical
and business issues for many years.
He spends his free time with his wife and daughter. His third love (after his wife and
daughter) is the mandolin, which he plays with the New York Mandolin Quartet. Major
mandolin influences were provided by Barry Mitterhoff, Carlo Aonzo, Ricky Skaggs, John
Monteleone and, of course, Bill Monroe.
Chaim can be reached at [email protected].
How to Download the Files
Both the source code and the CHM file are available for download from the Hentzenwerke
Web site. In order to obtain them, follow these instructions:
1. Point your Web browser to www.hentzenwerke.com.
2. Look for the link that says "Download Source Code & .CHM Files." (The text
for this link may change over time; if it does, look for a link that references "Books"
or "Downloads.")
3. A page describing the download process will appear. This page has two sections:
Section 1: If you were issued a username/password from Hentzenwerke
Publishing, you can enter them into this page.
Section 2: If you did not receive a username/password from Hentzenwerke
Publishing, don't worry! Just enter your e-mail alias and look for the question
about your book. Note that you'll need your book when you answer the question.
4. A page that lists the hyperlinks for the appropriate downloads will appear.
Note that the .CHM file is covered by the same copyright laws as the printed book.
Reproduction and/or distribution of the .CHM file is against the law.
If you have questions or problems, the fastest way to get a response is to e-mail us at
[email protected].
List of Chapters
Chapter 1: Introduction to Client/Server
Chapter 2: Visual FoxPro for Client/Server Development
Chapter 3: Introduction to SQL Server 7.0
Chapter 4: Remote Views
Chapter 5: Upsizing: Moving from File-Server to Client/Server
Chapter 6: Extending Remote Views with SQL Pass Through
Chapter 7: Downsizing
Chapter 8: Errors and Debugging
Chapter 9: Some Design Issues for C/S Systems
Chapter 10: Application Distribution and Managing Updates
Chapter 11: Transactions
Chapter 12: ActiveX Data Objects
Appendix A: New Features of SQL Server 2000
Table of Contents
Our Contract with You, The Reader
Acknowledgements
About the Authors
How to Download the Files
Chapter 1
Introduction to Client/Server
Client/server applications differ from file-server applications in many ways, but the
key difference is that client/server applications divide the processing between two or
more applications, a client and a server, which typically run on separate computers.
In this chapter, you will learn a little of the history of client/server computing, as well
as the features of client/server databases in general and Microsoft SQL Server in
particular. You will also learn some of the advantages of client/server databases
over file-server databases.
In the beginning, there were mainframes and minicomputers. All data resided on and was
processed by these often room-filling machines. All bowed down before the mighty MIS
department, as all information was in their hands.
Okay, so that might be an exaggeration, but not by much. In the late 1970s, there were
plenty of data processing centers with raised floors, a sea of disk drives, a wall of tape drives
and an army of operators. In this host-based model, all the processing was done by the
mainframe, while data was entered on dumb terminals. Consider the example of Gary's first
programming project, in 1979: The project involved trying to get data from our Data General
minicomputer for a decision support system for our product managers. We wanted to give
our incredibly powerful Apple IIs, with two floppy disk drives, access to the corporate sales
data so the product managers could make decisions on our product lines. We were completely
at the mercy of the MIS department, who controlled all the data. No way were they going to
let us have online access to it. They could be coerced into giving us a monthly dump of sales
data, but that was the best we could do. Our project turned into a primitive data warehouse;
modeling and reporting on the data turned out to be easy compared to getting access to it in
the first place.
Lest you think that this is merely a history lesson, there are many of these systems still in
place. The most popular American hospital billing system uses an IBM AS/400 minicomputer-
based DB2 database. Many systems like this are unlikely to be replaced in the near future, but
to developers they are mostly a curiosity, as new development on such systems is rare, and the
growth of the mainframe market is pretty flat.
The PC revolution
Then came the personal computer. PCs allowed departments and often entire corporations to
dispense with their expensive, centralized host-based systems and replace them with networks
of PCs sharing files on file servers. The pendulum had swung the opposite way. Rather than
doing all of the processing on the central scrutinizer, it was done on the workstation. Dumb
terminals had been replaced by dumb file servers.
This is where most of us come in. FoxPro, along with other systems like dBase, Paradox
and Access, has a local data engine. All processing is done on the workstation, and the
network is used for file storage only. Throughout this book, this model is referred to as a file-
server database.
Because all the processing was performed locally and because the workstation could be a
powerful computer in its own right, we developers were able to give users very sophisticated
user interfaces. But we could not provide them with a secure, fault-tolerant database, and we
used up a tremendous amount of network bandwidth. In fact, application server software such
as Citrix WinFrame or Windows Terminal Server, which reduces network bandwidth by running
applications on a server machine, became popular primarily because of file-server databases
and their need for a big network pipeline. This is because all processing is performed on the
local workstation, while only the files reside on a file server. To perform a query, all
information necessary for finding the result set, such as index keys, must be downloaded in
addition to the result set itself. Rushmore is very efficient about what it brings down, but
whatever it needs still has to come to the local workstation.
Furthermore, improvements in database performance often require upgrades to each
workstation running the application, a potentially expensive proposition when many users
are involved.
A client/server database works differently: to perform a query, the workstation sends only
the request, typically a SQL SELECT statement, across the wire.
The server responds by sending back only the records that match. Not only has the quantity
of transmitted data been reduced, but the number of network round trips has, too.
The problem of improving file-server performance is also partially resolved by
client/server applications because database performance can be improved by upgrading a single
machine, the server, rather than upgrading all the workstations. It is considerably less expensive
to upgrade or replace a single, powerful application server than many lower-level workstations!
There are many client/server databases on the market today. Originally many of them, such
as Oracle, Informix and Sybase, ran only on Unix. Several years ago, Microsoft and Sybase
entered into an agreement whereby Microsoft would develop a version of Sybase SQL Server
for the Windows NT platform, and Microsoft SQL Server was the result. Now many
client/server database vendors, including the leader, Oracle, support Windows NT and/or
Windows 9x.
Client/server databases are frequently referred to as SQL databases because they
commonly support Structured Query Language, or SQL.
Data access
The key difference between client/server and file-server databases is in the way data is
accessed. A client/server application always consists of two or more applications: a client
and a server. The database server erects a wall around the physical data, which can be
accessed only by sending requests to the server application; the server processes the requests
and returns the results.
With a Visual FoxPro database, any machine that has VFP or the VFP ODBC driver and
access to the data directory can process that data on the local workstation. All processing is
actually performed on the local workstation, and all information required to perform that
processing must be transmitted from the server to the workstation. After the server data is
copied to memory on the workstation, the user can change the data and the changes are written
directly to the database on the file server.
With a SQL Server database, the client workstation runs one or more applications that
make requests of the database server and accept the results of those requests. The client can
make changes to the data locally, but those changes are not made directly to the database.
Instead, they are packaged as requests, typically a SQL INSERT, UPDATE or DELETE
statement, and sent back to the server. Just as with a request for data, these change requests
are handled by the server, which has the ultimate authority and control over how such requests
are processed.
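For example, a change made in the client application might reach the server as a single
statement along these lines (a sketch; the table, column and values are illustrative):

UPDATE customers SET city = 'Pittsburgh' WHERE customerid = 'ALFKI'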
SQL Server includes a utility called Profiler that provides an excellent demonstration of
just how this works. In Figure 1, you can see a trace in the Profiler. This trace was run while
opening a VFP form that opens a couple dozen views. Each line in the trace shows the actual
SQL statement sent to the server along with details on the number of disk reads and writes,
duration of the processing, and so forth.
Security
A Visual FoxPro database has no security. A developer can write procedural code to enforce
security, but this type of security can be circumvented.
By contrast, SQL Server databases are totally secure. All access to the database must be
through the database server application. By default, no user has access to anything in SQL
Server until the administrator has added the user to the system. Even then, the user has no
access until the administrator specifically grants it. This system is called declarative security.
Any attempt to access the data causes the server to check the user's login ID and password.
Figure 2 illustrates an attempt to access the Northwind database from Microsoft Visual
InterDev. Note the login dialog.
Code you write in your application to access a SQL Server database will also require
authentication by the server. Attempting to open a remote view of the Northwind employee
table from the VFP Command Window, as shown in Figure 3, will also prompt the user with a
login dialog.
Figure 2. An attempt to log in to the SQL Server Northwind database causes the user
to be prompted for a login ID and password.
Figure 3. Attempting to open a remote view of SQL Server data also causes the user
to be prompted for a login ID and password.
The preceding illustrations show the SQL Server login dialog, but there are actually many
ways to handle logging in. For example, you can configure your ODBC connections to supply a
login ID and password when connecting so that the login dialog doesn't appear at all when the
application runs.
SQL Server also offers a feature called Windows NT Integrated Security that can be
used instead of the normal SQL Server authentication. With NT Integrated Security, SQL
Server checks the name of the user logged in to NT rather than requiring a SQL Server user
ID and password.
In addition to authenticating users for access to the database, SQL Server allows
administrators to assign rights to any individual object in the database. For example, some users
might have access to all columns in the employees table, while others might not be allowed to
see addresses or salaries. See Chapter 3, "Introduction to SQL Server 7.0," for more
information on security in SQL Server.
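Column-level rights boil down to statements like this sketch (the user name and column
list are illustrative):

GRANT SELECT ON employees (employeeid, lastname, firstname) TO mary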
Database backup
A friend recently described a client's nightmare with a VFP database. They performed an
automatic tape backup of their network every night. One day, the inevitable happened and the
network went down. No problem, they simply went about restoring from backup. Well, not all
the tables were backed up, as some were open when the backup was performed. So they went
back to the previous nights backup, but no dice. On and on they went, but no complete backup
had been performed because every night somebody had some files open because they forgot to
shut down their system or a developer was working late. They were in big trouble.
SQL Server eliminates this problem by allowing live backup of a database while it is in
use. An administrator can schedule backups, or an application can periodically send a T-SQL
BACKUP command to the server. The database is dumped to a backup file, which is closed as
soon as the backup is completed, and this backup file is copied to the backup tape. If the server
goes down, the client's nightmare isn't a problem. This backup capability permits both 24/7
operation and reliable backup.
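A scheduled or programmatic backup comes down to a single T-SQL statement; a minimal
sketch (the database name and file path are illustrative):

BACKUP DATABASE northwind TO DISK = 'c:\mssql7\backup\northwind.bak'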
Point-in-time recovery
SQL Server records every transaction in a transaction log in memory. Each time a transaction is
completed, it is copied from the log in memory to the log on the disk. At various intervals, the
transactions in the log are written to the physical database on disk. In the case of a crash, the
data can be recovered as long as the transaction log is recoverable. Of course, any updates that
had not yet been written to the physical transaction log would be lost.
The transaction log itself can also be backed up. Normally, the transaction log is not
emptied when the database is backed up. However, when the transaction log itself is backed up,
committed transactions are removed from it to keep the log size to a minimum. So if the
database is backed up on Tuesdays and the transaction log is backed up every day, then the
worst-case scenario even when the transaction log is destroyed is to restore the weekly backup
and then each daily transaction log. Only part of a day's transactions is lost, which is a
substantial improvement over the aforementioned client's nightmare.
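Backing up the log uses a similar statement (again, the path is illustrative):

BACKUP LOG northwind TO DISK = 'c:\mssql7\backup\northwind_log.bak'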
Backups can be performed more often; however, backups affect performance of
the system. This is one of the trade-off decisions you will have to make when designing a
client/server system.
Triggers
Visual FoxPro databases support triggers. A trigger is a stored procedure that is triggered by an
INSERT, UPDATE or DELETE of a record in a table. In VFP databases, triggers are used to
enforce referential integrity and may be used for other purposes as well. One difficulty with
VFP triggers is that the VFP ODBC driver only supports a limited subset of VFP syntax. So
code in a trigger that works fine when running in a Visual FoxPro application may not work
when accessing data via ODBC.
Although SQL Server can use triggers to enforce referential integrity, declarative
referential integrity is the preferred method, simply because declarative integrity performs
substantially better than trigger-based integrity.
Triggers are also frequently used to support business rules and are an excellent way to
provide an audit trail. For example, a trigger might insert a record into an audit table containing
the datetime of the change, the user making the change, and the old and new values.
Here is an example of a very simple auditing trigger. Suppose a fire department wants to
keep track of any changes made to the alarm time, arrival time or cleared time for a fire
incident. Although it is entirely possible that such a change is being made legitimately to reflect
correct times, it is also possible that someone might change these times to make them look
better or to cover up mistakes. Here's the schema for a simple time logging table:
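A sketch of such a table; the column names and widths are assumptions based on the
description that follows:

CREATE TABLE timelog (
   timelogkey int IDENTITY(1,1) PRIMARY KEY,
   incidentkey int NOT NULL,
   changedate datetime NOT NULL,
   username varchar(30) NOT NULL,
   columnname varchar(30) NOT NULL,
   oldvalue datetime NULL,
   newvalue datetime NULL)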
The timelogkey column is an identity column and will automatically enter unique
integers, beginning with one and incrementing by one. Now an update trigger is created for
the incident table:
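A sketch of the trigger, shown here for the alarmtime column only; the table and column
names follow the timelog sketch above and are assumptions:

CREATE TRIGGER tr_incident_update ON incident FOR UPDATE
AS
IF UPDATE(alarmtime)
   INSERT INTO timelog (incidentkey, changedate, username,
      columnname, oldvalue, newvalue)
   SELECT d.incidentkey, GETDATE(), SUSER_SNAME(), 'alarmtime',
      d.alarmtime, i.alarmtime
   FROM deleted d JOIN inserted i ON d.incidentkey = i.incidentkey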
This trigger requires a bit of explaining. SQL Server stored procedures use temporary
cursors that are visible only within the stored procedure. In the case of triggers, which are a
special type of stored procedure, there are two default cursors: deleted and inserted. In delete
triggers, the deleted cursor holds the values of the row being deleted, while in update triggers it
holds the values of the row prior to the update. In insert triggers, the inserted cursor holds the
values of the row being inserted, while in update triggers it contains the new values of the row
being updated.
The update trigger in the preceding code checks to see whether one of the three critical
times, alarmtime, has been updated. This is done with the UPDATE() function. If so, a row is
inserted into the timelog table. The row includes the current datetime (returned by the SQL
Server GETDATE() function), the user making the change, and the name of the column being
changed. It gets the old and new values from the deleted and inserted cursors, respectively, and
inserts them as well.
By extending this technique, you can see that it is possible to create a complete audit trail
of every change made in the database.
Referential integrity
Visual FoxPro databases support trigger-based referential integrity. When an application or
user attempts to delete, modify or insert a record, the appropriate trigger is fired. The trigger
determines whether the attempted delete, modification or insert can proceed. A deletion trigger
may cause cascaded deletes of child records. Similar processing occurs when an attempt is
made to change a primary key value. The change may be prevented by the trigger, or the
change may be cascaded through the child tables. Although such trigger-based referential
integrity is adequate for some purposes, it becomes less reliable as the schema becomes more
complicated, as thousands of triggers could be firing for a single deletion.
While SQL databases also support the use of triggers for the purposes described in the
previous paragraph, the preferred method is declarative referential integrity. Declarative
referential integrity, supported by SQL Server since version 6.0, enforces referential integrity at
the engine level. Deleting a record when children exist is simply prohibited. Instead of using
triggers to cascade deletes, a stored procedure is typically written to delete records from the
bottom up based on a given primary key for the top-level parent record. This technique is not
only more reliable, but it typically provides better performance, too.
Declarative referential integrity is implemented through the use of foreign key constraints.
Here is an example of how to create a foreign key constraint:
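A sketch using the Northwind customers and orders tables (the constraint name is arbitrary):

ALTER TABLE orders
   ADD CONSTRAINT fk_orders_customers
   FOREIGN KEY (customerid) REFERENCES customers (customerid)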
Indexes
Indexes are used in Visual FoxPro databases to display data in a particular order, to improve
query performance through Rushmore optimization, to enforce unique values, and to identify
unique primary keys. SQL Server essentially uses indexes for the same purposes, but SQL
Server does not use Rushmore. Instead, it uses its own optimization techniques designed
specifically for the SQL Server query engine.
Clustered indexes
When a new record is added to a VFP table, it is typically appended to the end of the file, as
this is much more efficient than writing a record in the middle of a file. If no index order is set,
then browsing a table will show the records in this native order. Sometimes it makes sense for
performance reasons to occasionally sort a table based on the value of some field, such as a
primary key.
In SQL Server, the physical order of records can be controlled with a clustered index. Each
table may have one clustered index, and a new record will be inserted into the table in the order
determined by the clustered index. Clustered indexes can improve query performance when
queries need to return a range of consecutive records. However, they tend to decrease insert or
update performance, since these operations could force a reorganization of the table.
A clustered index on the customerid column of the Northwind customers table is created
like this:
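A sketch (the index name is arbitrary, and this assumes the table does not already have a
clustered index):

CREATE CLUSTERED INDEX customerid ON customers (customerid)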
Unique indexes
In a VFP table, a candidate index is used to enforce the uniqueness of a value in a table. They
are called candidate indexes because the unique value is a likely candidate for a primary key. In
SQL Server, the same thing is accomplished with a unique index. Don't confuse this with a
unique index in VFP (i.e., INDEX ON <expression> TAG tagname UNIQUE), which is simply an index
containing only a single key even when the table contains multiple records, each of which has a
key of the same value. A unique index in SQL Server, like a candidate index in VFP, prevents
duplication of the value in the table. A unique index on the employeeid column of the
Northwind employees table is created like this:
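A sketch (the index name is arbitrary):

CREATE UNIQUE INDEX employeeid ON employees (employeeid)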
Primary keys
In a VFP database, you can specify one primary index per table like this:
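A sketch of the VFP command (the table, key expression and tag name are illustrative):

ALTER TABLE employees ADD PRIMARY KEY employeeid TAG employeeid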
Behind the scenes, VFP actually creates a candidate tag in the index file and then adds a
special entry in the DBC to indicate that it is the primary key.
Primary keys in SQL Server are very similar, using primary key constraints. This code
creates a primary key constraint and a clustered index on the employeeid column of the
employees table:
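A sketch (the constraint name is arbitrary):

ALTER TABLE employees
   ADD CONSTRAINT pk_employees PRIMARY KEY CLUSTERED (employeeid)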
Non-clustered indexes
Rushmore optimization in Visual FoxPro is effected with the use of index tags. Query
performance is improved under most circumstances by having an index that matches the filter
expression of a SELECT. Query optimization in SQL Server works in much the same way.
Fields that are likely to be used in filter expressions should have non-clustered indexes. A non-
clustered index on the lastname column of the Northwind employees table is created like this:
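A sketch (the index name is arbitrary):

CREATE NONCLUSTERED INDEX lastname ON employees (lastname)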
Defaults
Both Visual FoxPro and SQL Server databases support defaults. Defaults allow you to specify
a default value for a field. For example, a merchant in the state of Washington might want to
assume that its customers are residents of Washington and automatically insert WA in the
state column.
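In SQL Server, that can be declared as a column default; a sketch assuming a customers
table with a state column (the constraint name is arbitrary):

ALTER TABLE customers
   ADD CONSTRAINT df_customers_state DEFAULT 'WA' FOR state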
Rules
Both Visual FoxPro and SQL Server databases support rules. Rules allow you to specify field-
level validation in the database. Once specified, rules are enforced by the database engine.
A SQL Server rule is created using a variable, rather than a column name. Here's a rule
that requires Social Security numbers to consist of nine numeric values. Note the use of the
@social variable (the @ always denotes a local variable in SQL Server):
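A sketch (the rule name is arbitrary):

CREATE RULE ssn_rule AS
   @social LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'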
By using a variable, rather than the name of a column, the same rule can be applied to any
number of columns:
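Binding is done with sp_bindrule; a sketch, with the table and column names assumed:

EXEC sp_bindrule 'ssn_rule', 'employee.ssn'
EXEC sp_bindrule 'ssn_rule', 'applicant.ssn'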
Identity columns
SQL Server automates primary key generation by using identity columns. An identity
column may not have a value inserted into it. Instead, it will automatically be set to the next
available value for that column. The initial (or seed) value can be set, as well as the amount by
which the value is incremented. The identity attribute can also be turned off for a column if you
need manual control over the values inserted into the column. The attribute can be turned back
on and set to increment from the highest existing value. This is all handled automatically by the
engine and requires no code on the part of the developer.
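That on/off switch corresponds to the SET IDENTITY_INSERT option; a sketch reusing
the timelog table assumed earlier (the values are illustrative):

SET IDENTITY_INSERT timelog ON
INSERT INTO timelog (timelogkey, incidentkey, changedate, username,
   columnname, oldvalue, newvalue)
VALUES (1000, 1, GETDATE(), 'gary', 'alarmtime', NULL, NULL)
SET IDENTITY_INSERT timelog OFF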
However, identity columns are no panacea. Retrieving the last value is normally handled
by checking the value of the @@IDENTITY global variable. But it may be difficult to get the
correct value of this variable, as insert triggers might have caused other identity columns to be
incremented and thus would return the wrong value for @@IDENTITY. Identity columns can
also cause a problem when databases are replicated. If you have the choice, you should avoid
identity columns when you design a database. Learn to use them, though, because you may find
yourself working on databases that use them.
Stored procedures
Visual FoxPro databases support stored procedures. However, stored procedures that run in
Visual FoxPro may not run via the VFP ODBC driver, so it is very difficult to create stored
procedures that are of any value outside of a VFP application. Because SQL Server stored
procedures are run by SQL Server, you never have this incompatibility issue.
One use for stored procedures in SQL Server is to provide parameterized record sets of
data. VFP views support parameters that can be used to filter the rows returned by the view.
For example, the customerorders view can be defined with a parameter to return only those
records matching a particular customer ID:
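A sketch of such a view definition (the column list is simplified to *):

CREATE SQL VIEW customerorders AS ;
   SELECT * FROM customers ;
   JOIN orders ON customers.customerid = orders.customerid ;
   WHERE customers.customerid = ?cCustomerID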
The parameterized view is opened by setting the value of the parameter and opening the
view with USE:
cCustomerID = 'ALFKI'
USE customerorders
Parameterized views are not supported by SQL Server, but they can be simulated with a
stored procedure like this:
CREATE PROCEDURE customerorders @cCustomerID varchar(5) AS
SELECT * FROM customers
JOIN orders ON customers.customerid = orders.customerid
JOIN orderdetails
ON orders.orderid = orderdetails.orderid
WHERE customers.customerid LIKE @cCustomerID
In T-SQL code, one would access this stored procedure like this:
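Presumably something along these lines, reusing the ALFKI customer ID from the view
example above:

EXEC customerorders 'ALFKI'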
And from a VFP application, one would access this stored procedure like this:
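A sketch using SQL pass through (lnHandle is an assumed, already-open connection
handle; csrOrders is the cursor that receives the result set):

lnResult = SQLEXEC(lnHandle, "EXEC customerorders 'ALFKI'", "csrOrders")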
Stored procedures in SQL Server are written using Transact-SQL, also known as T-SQL,
SQL Server's programming language. Although not as rich a language as Visual FoxPro and
lacking record-based data navigation (T-SQL is primarily set-based), it is nonetheless a
powerful procedural language and can be used for many purposes. It contains syntactical
equivalents to such VFP constructs as IF..ELSE, DO WHILE, RETURN, PARAMETERS
and so on.
One powerful feature of stored procedures in SQL Server is that they can be assigned
security rights just like any other object in a database. One very good use of this feature is to
require stored procedures for database updates rather than allowing direct access to the tables. The
administrator takes away INSERT, UPDATE and DELETE rights for tables and/or specific
columns, and the only way for a change to be made is to call the appropriate stored procedure
and pass it the values necessary for the change.
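A sketch of that arrangement (the procedure and user names are illustrative):

REVOKE INSERT, UPDATE, DELETE ON employees FROM public
GRANT EXECUTE ON updateemployee TO public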
Calling SQL Server stored procedures from Visual FoxPro applications will be examined
in greater detail in Chapter 6, "Extending Remote Views with SQL Pass Through."
Views
Visual FoxPro databases support views. A view is nothing more than a predefined SQL
SELECT. Views can be used to determine which columns to include in a record set or to
perform multi-table joins. A useful view limiting the number of rows and columns returned in a
record set might look like this:
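A sketch of such a view, with the five columns assumed from the Northwind tables used
elsewhere in this chapter:

CREATE SQL VIEW customerorderinfo AS ;
   SELECT customers.companyname, orders.orderid, orders.orderdate, ;
      orderdetails.productid, orderdetails.quantity ;
   FROM customers ;
   JOIN orders ON customers.customerid = orders.customerid ;
   JOIN orderdetails ON orders.orderid = orderdetails.orderid ;
   WHERE orders.orderdate >= {^2000-01-01}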
This view performs a three-way join and returns only five columns. This type of view may
be ideal for reporting and can also be used for data entry, as views can be made updatable.
Predefining the three-way join simplifies things for users who may otherwise have to write such
a query themselves. However, the VFP ODBC driver supports calling only those VFP views
that are not parameterized, so when accessing VFP data through SQL pass through, many of
your views are available only within a VFP application.
SQL Server also supports views. Here is a T-SQL definition for the same view
defined previously:
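A sketch mirroring the VFP view above:

CREATE VIEW customerorderinfo AS
SELECT customers.companyname, orders.orderid, orders.orderdate,
   orderdetails.productid, orderdetails.quantity
FROM customers
   JOIN orders ON customers.customerid = orders.customerid
   JOIN orderdetails ON orders.orderid = orderdetails.orderid
WHERE orders.orderdate >= '20000101'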
As with VFP views, SQL Server views can be used to simplify access to the data and can
be made updatable. However, SQL Server views are available to any application that can
access SQL Server tables, making them more flexible than VFP views. Furthermore, security
rights can be assigned to a view in SQL Server. Many developers and DBAs enforce security
by withholding rights to table objects and allowing access to views instead.
User-defined data types
SQL Server also lets you define your own data types as named aliases for the base types.
Suppose a table stores the US Fire Administration's four-character 901 Codes. The column
holding the code can be declared with a user-defined type:
code901 udtCode901
or with the equivalent base type:
code901 char(4)
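Creating the type itself is a one-line call; a sketch, assuming char(4) as the base type:

EXEC sp_addtype udtCode901, 'char(4)'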
What's the difference? What have you gained by using the user-defined type? When you
look at the schema for this table, you now know not only the structure of the column, but also
something about the nature, or business domain, of the data contained in it because you know it
is designed to hold 901 Codes. Many developers spend a lot more time trying to figure out
existing code than they do writing new code, and anything you can do to document your design
will make somebody else's (or your own future) work easier.
An important restriction on user-defined data types is that they do not provide inheritance.
In other words, if the US Fire Administration changed the codes from four characters to five,
you could not simply modify the udtCode901 data type and expect the tables to pick up the
change. Instead, you must first unbind or remove the data type from any columns where it is
used, and then make the modification to the data type. After the change is made, you can
re-bind the data type to the appropriate columns.
Replication
Replication is the process of synchronizing multiple copies of a database. A company may
have a database in the headquarters and a copy in each regional office. Each night these
databases can be replicated so the headquarters and each regional office will have a copy of
the latest data.
Visual FoxPro databases have no native support for replication. If you want to replicate a
VFP database, you must write the code to do it yourself. While messages occasionally appear
in online forums declaring how easy it is to do this, consider a group of conference attendees
who were asked whether they've ever attempted this. Only a small percentage said yes, and of
those, almost all gave up before completing the task. Those who completed it usually said they
wouldn't want to do it again.
SQL Server has built-in replication, which is handled as an administrative function.
However, just because the replication is built in doesn't mean it's easy to get it to work. You
still have to ensure that primary keys uniquely identify records, even across multiple copies of
the database.
Transactions
Visual FoxPro supports limited transaction protection with BEGIN TRANSACTION, END
TRANSACTION and ROLLBACK. SQL Server's transaction protection is far more robust, as
explained in the "Point-in-time recovery" section earlier in this chapter. In addition, by
exposing its transaction process to Microsoft Distributed Transaction Coordinator (MS DTC),
SQL Server can participate in transactions across databases, servers and even database systems.
With MS DTC, multiple databases on multiple servers running SQL Server, Oracle and/or
MSDE can all participate in the same transaction. Transactions are covered in greater detail in
Chapter 11, "Transactions."
Scalability
The term scalability is in vogue right now. Microsoft uses the word a lot because it's hell-bent
on overtaking Sun and Oracle in the enterprise market. Scalability is what the press has often
claimed Windows NT and SQL Server lack in comparison to Oracle running on a Sun server.
When an application is described as "scaling well," what is typically meant is that it can handle
very high usage.
The term scalability can also be applied to applications written for high-usage
environments that can be used in smaller systems as well. Chapter 7, "Downsizing," addresses
this downward scalability.
Visual FoxPro is capable of handling very large amounts of data, with the engine
supporting tables up to 2GB. However, because the processing is handled by the workstation,
really large tables usually cannot be handled efficiently.
SQL Server can handle terabytes of data. (A terabyte is a trillion bytes, or 1000GB.) To
get an idea of just how big a terabyte of data really is, consider this: The entire 100+ year
history of every transaction ever performed on the New York Stock Exchange is approximately
500GB, or one-half terabyte!
Versions of SQL Server prior to 7.0 required Windows NT, but starting with version 7.0,
SQL Server is compatible with Windows 95/98, too. This means that the same database that
can service a terabyte of data on a multi-processor Windows 2000 server can also run fine on a
Windows 98 laptop.
Reliability
Visual FoxPro databases are processed on the local workstation. If 100 users are working on a
table simultaneously, then portions of a table and its index exist in the memory of 100 different
computers. The phrase "index corruption" causes a knowing nod of the head from almost every
VFP developer you'll ever meet.
Such corruption issues are not a problem with SQL Server. For one thing, the data is only
open in one place, not in multiple copies all over a network. Also, the demands of the
enterprise market are such that client/server databases must be absolutely reliable in high-
volume, mission-critical 24/7 applications.
Advantages of client/server
The advantages of client/server systems over file-server systems follow from the main
differences between the two types of systems. This is a book about client/server development,
so this section will primarily deal with advantages of client/server over file-server. But it
should be pointed out that client/server has disadvantages, too, the primary ones being cost
and complexity.
SQL Server is licensed on a per-user basis, with licenses costing roughly $150 to $200
per user. The license fee may seem quite a leap to a VFP developer who's used to a freely
distributable database engine (but note that there are many SQL databases that cost
considerably more). In addition, there are administrative costs of a client/server solution. Large
client/server databases require almost constant tuning to optimize performance. A changing
user base requires that security be continually updated. For this and other reasons, SQL Server
systems require a database administrator (DBA). For some systems, a part-time DBA is
sufficient, but other systems require one or more full-time DBAs.
There is no way to get around the fact that client/server development is more complex and
more expensive than developing file-server systems. If it weren't, you wouldn't need this book.
For any given system, you can expect it to take longer and cost more to implement as a
client/server system. But consider the advantages of client/server systems...
Performance
While it is true that Visual FoxPro has a blazingly fast database engine, its performance can
degrade quickly when size and number of users increase and/or network bandwidth decreases.
SQL Server is also blazingly fast. In fact, with identical moderate-sized databases on
identical computers, SQL Server query performance tends to be slightly better than VFP's
in most situations.
The real performance difference appears when you reduce the size of the network pipe.
Over a slow network, you'll almost always get significantly better performance from SQL
Server. And with a really low-bandwidth connection, like a modem, VFP can't even compete.
This is because SQL Server only needs to send requests and results over the wire, while VFP
requires the transfer of everything necessary to process the query.
This performance enhancement has a cost. You must carefully tune your queries with the
size of the result set in mind. With SQL Server, reducing the size of the result set provides the
lion's share of the performance improvements, particularly over low-bandwidth connections,
because only the result set comes down over the wire. But the result set itself may be only a
small part of what VFP needs to perform a query, so carefully tuning a query for a small result
set may gain you little in a file-server application.
Cost
We mentioned that client/server solutions typically cost more than file-server systems, but
under some circumstances, the reverse may be true. A good example of the cost savings
provided by client/server is in large, widely spread fire departments. Most public agencies
simply cannot afford the infrastructure necessary to support high-speed connections between
widely dispersed fire stations and the database server. A modem and a connection to a local ISP
may be the best they can do. Not only are high-speed connections beyond the budgets of many
departments, but those alternatives simply aren't available outside of metropolitan areas. And
phone service in rural areas is often of poor enough quality that modem connection speeds are
pretty slow compared to most metropolitan areas. So a high-speed solution isn't affordable, and
a file-server system with low-speed connections is unworkable. That leaves client/server,
which, while typically more expensive than the file-server solution, ends up being cheaper than
a file-server solution of adequate performance.
Another cost factor is that a great deal of performance benefit can be gained by souping up
the server. It may cost a lot less to get one really high-powered server than to have hundreds of
top-of-the-line workstations. One can tune such a system to put a greater burden on the server
and perform less processing on the workstations. With a file-server system, all processing is
performed on the workstation.
Security
A properly managed client/server database can be almost totally secure, no matter how you
access it. File-server databases, on the other hand, have no security at all other than that
provided by the network. Anybody with Visual FoxPro and network access rights can do
anything they want to a Visual FoxPro database, no matter how much effort is put into an
application's security model.
Scalability
Occasionally one hears about Visual FoxPro systems with VFP databases that handle hundreds
of users and millions of records. But these systems are very unusual and are extremely difficult
to implement. SQL Server handles such loads with ease, supporting thousands of users and
terabytes of data. A client/server architecture is indicated for any system that must support a
very large number of users.
Summary
In this chapter, you learned about the history of database systems, the features of client/server
databases in general and SQL Server in particular, and the benefits of doing client/server
development. In the next chapter, we'll take a look at Visual FoxPro as a client/server
applications development tool.
Chapter 2
Visual FoxPro for
Client/Server Development
After reading Chapter 1, you should have a good understanding of what makes
client/server databases different and why you might want to use them. But the $64,000
question is: Why would you use Visual FoxPro to develop your client/server
applications? This chapter will answer that question for you, the developer, and provide
answers that you can give if your clients ask this question.
More than half of all software developers today use some variation of Beginner's All-
purpose Symbolic Instruction Code (BASIC), according to Microsoft. Why? Partly because
of familiarity, partly because of its versatility, and also partly because popularity begets
popularity. But you might recall your parents telling you, "Just because everyone else does it
doesn't mean you should do it, too." This chapter discusses six features that make Visual
FoxPro the best client/server rapid application development tool available today: object-
oriented programming, COM support, built-in client/server support, built-in local data
engine, support for other data-access technologies such as ADO, and Rapid Application
Development (RAD). Many of these features could be, or are, the topics of one or more
books of their own. While each feature is discussed separately, keep in mind that it is the
combination of these features that makes VFP such a great client/server development tool.
Not many development tools can offer the combination of features found in FoxPro. Visual
Basic, for example, offers great COM support and support for ADO and is an excellent RAD
environment, but it isn't object-oriented, while C++ is a great object-oriented programming
language but lacks built-in client/server or local database support and has never been accused
of being good for RAD.
Object-oriented programming
Most popular, modern programming languages, such as Visual FoxPro, Visual Basic, C++ and
Java, support object-based programming. Object-based programming, through its use of abstract
objects to represent real-world objects, is a step in the right direction for improving code reuse.
There are two features common to all object-based programming languages: encapsulation
and polymorphism.
Encapsulation is the combination of data and code into a single entity. The data, in the
form of memory variables belonging to the object, are called properties. The code, called
methods, gives the object its ability to manipulate its data and to perform other actions. To
represent the first name of a person in a procedural language might require a global memory
variable called gcFirstName. The value of the variable would be set like this:
gcFirstName = "Kim"
An object would encapsulate the first name in a property as part of the object:
oPerson.FirstName = "Kim"
Rather than trying to keep track of numerous memvars for each of numerous persons,
objects require that the programmer maintain memvars only for each object.
The person object can also contain code such as the ability to print out its properties. This
code is known as a method and might be called Print(). To print the characteristics of a person
in a procedural program, you might have a function called PrintPerson() to which you would
pass a parameter for each of that person's characteristics:
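A sketch of such a call, with an illustrative parameter list:

PrintPerson(gcFirstName, gcLastName, gcAddress, gcCity, gcPhone)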
While such a function is certainly reusable, it isn't reused easily. An object, on the other
hand, could have a Print() method that contains all the code necessary to print its properties. It
could be called like this:
oPerson.Print()
Which call would you rather make over and over again? More importantly, which call,
when made over and over again, is likely to contain fewer errors and require less debugging?
Polymorphism is the ability of multiple objects to share the same interface. Having the
same interface means that the properties have the same names and data types and the methods
have the same names, parameters and return types. In the real world, it is clear to everyone
that programmers and salesmen are not the same. But despite that, they have the same interface,
as both programmers and salesmen have names, addresses, height, weight and so on.
Furthermore, all programmers and most salesmen can write down their characteristics, though
they might do it differently. A salesman, for instance, would undoubtedly write a much longer
description than a programmer. So the code in a salesman object's Print() method would be
different from the code in a programmer's Print() method, but when it comes to using the
objects, they are both manipulated in the same way.
Object-oriented programming goes one step further. Just as certain real-world entities,
such as children, can inherit the characteristics and abilities of their parents, so too can
objects inherit the properties and methods of the classes on which they are based.
While object-based programming certainly enhances code reuse by simplifying the way the
code is used, object-oriented programming allows a quantum leap in code reuse because of
inheritance. Of the four languages in Microsoft's Visual Studio suite (Visual FoxPro, Visual
C++, Visual J++ and Visual Basic), all but Visual Basic are object-oriented. Visual Basic is
object-based because it does not support inheritance.
There are two different types of inheritance: single inheritance and multiple inheritance.
Single inheritance means that a child object can inherit the properties and methods of a single
parent, while multiple inheritance means that a child can inherit from multiple parents. With
multiple inheritance, an object normally inherits all the properties and all the methods from a
parent. If an object were created that inherited from both a programmer and a salesman, it
would have a concise Print() method and a long-winded one. While multiple inheritance offers
versatility, it is also more difficult to manage than single inheritance. C++ fully supports
multiple inheritance. To simplify the management of multiple inheritance, some languages,
such as Java, support single class inheritance and multiple interface inheritance. Visual FoxPro
supports single inheritance.
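A minimal sketch of single inheritance in VFP; the class names are illustrative:

DEFINE CLASS Person AS Custom
   FirstName = ""
   PROCEDURE Print
      ? This.FirstName
   ENDPROC
ENDDEFINE

DEFINE CLASS Salesman AS Person
   * Inherits FirstName and Print() from Person
   PROCEDURE Print
      DODEFAULT()   && reuse the parent's behavior...
      ? "...plus a much longer description"
   ENDPROC
ENDDEFINE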
Suppose that two places within your application needed to use this component, and other applications needed
access to it, too. You could create a COM component to encapsulate that functionality. Since it
was built as a COM component, any application, not just your VFP application, could access
the methods that perform the import and export.
The final piece in the COM puzzle is the ability to distribute Automation components on
multiple computers. The first iteration of this was called Remote Automation, but it has mostly
been supplanted by Distributed COM, or DCOM. Why distribute components on different
computers? For the same reasons you separate the application from the database in client/server
computing. In fact, we consider distributed applications to be merely another step beyond
client/server. You separate client applications from server databases for performance,
scalability, security and cost-effectiveness. All the same reasons apply to distributing
components among different network resources.
Regardless of where Automation servers reside (on the local computer or another
computer on the network), their use in a client/server application turns at least part of the
application from a two-tier design into a three-tier one. You might create a three-tier application
using stand-alone components running remotely, or you might create components that can run
in some other Automation host environment such as Microsoft Internet Information Server (IIS)
or Microsoft Transaction Server (MTS). Both IIS and MTS are multi-threaded hosts for
improved scalability. Visual FoxPro allows you to create multi-threaded COM components that
scale well in either host, as well as any others that support apartment-model threading.
Some data never or rarely changes. For example, the states in the United States haven't
changed since 1959. If data is static and does not require security, there is no particular reason
to store it in a server database and no need to send it back and forth across the wire every time
a user needs it. So why not keep some of this static, frequently used data on the local
workstation? Visual FoxPro's local data engine allows this data to be stored locally, where it
can be accessed quickly and frequently with no drain on the network or the server. Just in case
this data does change, you can keep a master copy on the server and simply check to see
whether it has changed whenever the application starts up. If it has changed, download it from
the server and refresh the local copy; otherwise, just use the local copy. This topic is covered in
greater detail in Chapter 9, "Some Design Issues for C/S Systems."
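A minimal sketch of that startup check, using SQL pass through (covered in Chapter 6); the
connection handle hConn and all table and column names are assumptions:

SQLEXEC(hConn, "SELECT version FROM versions WHERE table_name = 'states'", "crsSrvVer")
USE localversions IN 0      && one-row local table holding the last version downloaded
IF crsSrvVer.version > localversions.version
   * The master copy changed, so download it and update the local stamp
   SQLEXEC(hConn, "SELECT * FROM states", "crsStates")
   SELECT crsStates
   COPY TO states
   REPLACE localversions.version WITH crsSrvVer.version IN localversions
ENDIF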
Metadata is data that describes other data. Metadata is usually used by the application,
rather than by the user. Using metadata in combination with data-driven (rather than code-
driven) techniques allows you to create more flexible applications more quickly. If the same or
a similar action must be performed on many different items, you can either hard-code the
particulars of each item, or you can write a generic routine and then create a table with a record
for each item. Adding and deleting items is as simple as adding and deleting records in a table,
and reordering items simply requires changing physical or logical record order. Sometimes this
metadata should be available to users, but other times it's handy for it to be unavailable.
The VFP local data engine also allows metadata to be joined in queries with client/server
data or other local data, even if the metadata is compiled into the EXE. Consider the example
of an application that uses metadata to represent rules the federal government has imposed on
completion of data entry. Users are also allowed to create their own rules. Since the users rules
mustnt clash with the governments rules, the user is only allowed to apply rules to columns in
the database for which there are no existing government rules. The SQL Server database is
queried for a list of fields and exclude columns with rules in the metadata table.
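For example, the query that builds the list of columns still available for user rules might look
something like this (the remote view v_TableColumns and the compiled-in GovRules metadata
table are assumptions):

SELECT cols.field_name ;
   FROM v_TableColumns cols ;
   WHERE cols.field_name NOT IN ;
      (SELECT field_name FROM GovRules) ;
   INTO CURSOR crsAvailable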
A final benefit of VFP's local data engine for client/server development is for disconnected
record sets, such as data on laptop computers that are taken on the road and are not always
connected to the server. A copy of some or all of the server's data is stored locally. The system
can work on this data even while the laptop is disconnected from the server. With Visual
FoxPro, you can create disconnected record sets either using the offline view feature or by
copying record sets to tables. If local data weren't supported, then another data engine, such as
MSDE, would have to be installed and used.
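A sketch of the offline view approach, assuming a remote view named v_customers already
exists in the current database:

CREATEOFFLINE("v_customers")   && store the view's result set locally
USE v_customers                && work with the data while disconnected
* Later, reconnect and synchronize with USE v_customers ONLINE and TABLEUPDATE()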
Use whichever approach works best in the specific situation. ADO is covered in more detail in
Chapter 12, "ActiveX Data Objects."
Summary
We believe that Visual FoxPro is the finest client/server rapid application development
tool available today. And considering that Visual Basic isn't object-oriented (well, not yet;
VB 7 promises some level of object-oriented programming), anyone using VFP for
client/server development has an automatic advantage over more than half of all developers.
Hopefully this chapter has given you some ammunition you might need to support your
choice of development tool.
In the next chapter, you'll learn the basics of Microsoft SQL Server.
Chapter 3
Introduction to SQL Server 7.0
The purpose of this chapter is to explore the fundamentals of SQL Server 7.0. We'll start
by providing an overview of the installation process. We'll follow that with a discussion
of databases, the transaction log and how SQL Server organizes storage. The remaining
portion of the chapter is devoted to indexes, locking, stored procedures, enforcing data
integrity and the other features of SQL Server that are specific to the implementation of a
database application.
In November 1998, Microsoft announced SQL Server 7.0, a significant new release of SQL
Server that included important improvements in the areas of ease of use, scalability, reliability
and data warehousing.
Microsoft saw a need for a database management system that eliminated the more common
administrative functions and provided a simple programming model for developers. They
wanted to produce a product that would protect the investments made by their customers. They
also wanted a product that had the capability to grow with the customer: a single product that
would offer great performance in the workgroup or enterprise setting and improve reliability.
Finally, Microsoft wanted to provide its customers with a powerful, yet cost-effective data-
warehousing platform.
Capacity
Microsoft Visual FoxPro has a maximum capacity of 2GB per table. Though it happens
infrequently, developers facing this limitation have several choices, such as moving older data
to a separate historical table or partitioning data into separate tables by year, region or other
criteria. Such compromise designs generally result in systems that are expensive and difficult
to develop and maintain.
Microsoft SQL Server has an almost unlimited capacity. In fact, if you were to stretch SQL
Server to its theoretical limit, you would have roughly one million TB of storage.
Concurrency
Concurrency is the ability of multiple users to access data at the same time. The database
engine must be able to serve those users in a timely manner. SQL Server is capable of handling
hundreds (even thousands, depending on hardware) of simultaneous users.
Robustness
SQL Server has many mechanisms that provide a more robust environment than that provided
by Visual FoxPro:
A new storage structure replaces the frail double-linked chains used by previous
versions of SQL Server.
SQL Server's online backup allows the database to be backed up while users
are actively manipulating the data. SQL Server provides a variety of backup
types, allowing the database administrator to create a backup strategy suited for
any environment.
The transaction log and Autorecovery process ensure that the database will be
restored to a stable state in the event that the server is shut down unexpectedly, such
as by a power failure. Well cover the Autorecovery process later when we discuss
the transaction log.
Security
Visual FoxPro does not support security within the database engine. Developers have
implemented application-based security, but this type of security cannot prevent someone from
using Visual FoxPro, Access or even Excel to access Visual FoxPro data directly.
SQL Server provides multiple layers of security that a user must cross before obtaining
access to the data. The first layer controls access to the server itself. Before a user can access
the server, the user must be authenticated. During the authentication process, SQL Server
determines the identity of the user who is attempting to gain access. SQL Server provides two
authentication methods: SQL Server Authentication and NT Authentication.
SQL Server Authentication: Using this method, SQL Server requires that the user
provide a piece of information that only the user would know: a password. When the
user logs in to the server, he or she provides a login name and password. SQL Server
searches an internal table, and if it finds the login name and password, it permits the
user to access the server.
NT Authentication: Using this method, SQL Server relies on the Windows NT
Domain to verify the user. In other words, NT is vouching for the user. When a user
tries to connect using an NT Domain account, SQL Server verifies that the user's
account or the group that he or she is a member of has been granted or denied
permission to access the server.
Which is better? As usual, the answer is not clear-cut. The advantage of NT Authentication
is that users don't need to remember another password. The downside is that you must have a
Windows NT network and domain in place at the site. The advantage of SQL Server
Authentication is that a user from a non-Windows NT network can access the server. The
downside is that users must remember yet another password.
Gaining access to the server is only the first step. In order to access data in a database, the
user must be mapped to a database user in that database. This type of security allows the
database administrator to grant the user access to specific parts of the data stored on the server,
as opposed to an all-or-nothing arrangement.
The third and final layer of security is within the database itself. Permissions to access and
manipulate database objects (tables, views, stored procedures and so forth) are granted to
database users. When a user submits a query to SQL Server, SQL Server verifies that the user
has been granted permission to execute the query. If the user does not have the proper
permissions, SQL Server returns an error.
Installation
SQL Server is one of the easiest Microsoft BackOffice products to install. Once you have the
hardware set up and an operating system installed, installing SQL Server is nothing more than
inserting the CD and answering a half-dozen questions. Since SQL Server is self-configuring,
there's very little, if any, post-installation configuration.
Table 1. Comparing the capabilities of the Standard, Enterprise and Small Business
Server editions.
There is one more edition of SQL Server not listed in Table 1. If you are covered by a Per-
Seat licensing agreement for any server edition listed in the table, you may choose to install the
Desktop SQL Server edition on any client computer. It is not sold as a separate product; it's
included on the CD. The Desktop edition was designed for the "Road Warrior" user (the user
who will be disconnected from the main server but will occasionally need to connect and
synchronize). The Desktop edition can be installed on Microsoft Windows NT Server,
Microsoft Windows NT Workstation and Windows 95/98, but it does not provide support for
the following features:
Parallel queries
Fiber-mode scheduling
Read-ahead scans
Hash and merge joins
Failover clusters
Extended memory addressing
Full-text catalogs and indexes
Microsoft SQL Server OLAP Services
For more information regarding installation of SQL Server on Windows 95/98, see the
topic "SQL Server 7.0 on Windows 95/98" in the SQL Server Books Online.
Licensing
During the installation process, you'll be asked to choose between two licensing modes:
Per-Server and Per-Seat.
With Per-Server licensing, the administrator will specify the maximum number of
concurrent users that can connect to the SQL Server at any one time. Concurrent users should
not be confused with connections. A specific workstation can have multiple connections to the
server, but all of those connections still count as only one user. Per-Server licensing is best if
your organization has a single SQL Server or if you have a large number of users but only a few
of them are connected at any one time.
A Per-Seat license allows a specific workstation to connect to an unlimited number of SQL
Servers. If subsequent SQL Servers are installed, the existing user license will cover the new
servers. The only additional licenses necessary are for the new servers.
Your installation can begin with Per-Server licensing. Then, as your organization grows
and more SQL Servers are required, you can take a one-time, one-way upgrade from Per-Server
licensing to Per-Seat.
You will not need a Client Access License (CAL) for the installation of NT
Server that is hosting SQL Server unless you are using file and/or print
services of the NT Server.
Character sets
A character set (or code page) is the list of 256 characters that make up the legal values for
SQL Server character data types (char, varchar, text). The first 128 printable characters are the
same for all character sets.
During installation, you must specify the character set that SQL Server will use to
represent characters within the server. Your choice of a character set is very important. There is
only one character set for the entire server, and it affects all databases on the server. Changing
the character set requires rebuilding the master database (something like a mini-reinstall),
re-creating all user databases, and reloading the data.
It is also important that the client workstations use a code page that is consistent with the
character set that was installed on the server. If not, two workstations may have different
representations for the same bit pattern that is stored within SQL Server.
Code page 1252 is the default and is compatible with the ANSI character set used by the
Microsoft Windows operating systems.
Sort order
The sort order determines how two characters compare to each other during sorting or logical
comparison operations. During installation, you will specify two sort orders. The first is
specific to the selected character set and will be for non-Unicode character data. The second
sort order will be for Unicode character data.
Sort orders fall into three categories: binary, dictionary order (case-sensitive) and
dictionary order (case-insensitive). With binary sorting, each character is sorted and compared
according to its binary representation. If two characters have the same binary representation,
they're the same. If not, the character with the lower numerical value sorts first. A binary sort
order is the fastest sort order because SQL Server does a simple byte-by-byte comparison.
Also, binary sort orders are always case-sensitive because each character has a unique
binary representation.
With the dictionary sort orders, all the letters are sorted without regard to case: an "a" will sort
into the same position as the character "A". However, for string comparisons, the case sensitivity of
the sort order determines whether the characters are the same. If you install a dictionary-order,
case-insensitive sort order (the default), an "A" will be treated identically to an "a" (A = a), so the
character strings "age", "Age" and "AGE" are considered identical. If a case-sensitive sort order is
installed, an "A" is considered different from an "a" (A <> a).
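A quick comparison shows which behavior is installed:

-- Returns 'same' on a case-insensitive sort order, 'different' on a case-sensitive one
SELECT CASE WHEN 'age' = 'AGE' THEN 'same' ELSE 'different' END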
Network libraries
Network libraries identify the method that clients will use to communicate with SQL Server.
Each network library represents a different type of Interprocess Communication (IPC)
mechanism that SQL Server will recognize. Network libraries work in pairs; both the client and
server must have the same library. To make communications more flexible, SQL Server is
capable of listening on multiple IPCs simultaneously.
Some of the network libraries support only one type of physical network protocol, while
others are capable of using multiple protocols. For example, TCP/IP sockets requires that the
TCP/IP protocol be installed, whereas the Named Pipes and Multiprotocol network libraries
will support multiple physical network protocols.
Across the IPC, SQL Server and the client exchange queries, result sets, error messages,
and status information (see Figure 1).
During the setup, you will be asked for the network libraries to install. The setup will
default to installing Named Pipes, Multiprotocol and TCP/IP sockets. The client will default to
Named Pipes unless configured otherwise.
Types of databases
Microsoft SQL Server supports two types of databases: user and system. User databases are the
ones you'll create for your applications. Although SQL Server allows a maximum of 32,767
databases on any one server, the typical server contains only one or two. The other type of
database is the system database, which contains the metadata that controls the operation of the
server. Descriptions follow of the four SQL Server system databases.
master
The master database contains the System Catalog, a collection of tables that stores information
about databases, logins, system configurations, locks and processes. It is the most important of
all the system databases.
model
model is both the name and the function of this system database. It is used as a template
whenever a new user database is created. When a new database is created, SQL Server makes a
copy of the model database and then expands it to the size specified by the user.
tempdb
tempdb is SQL Server's work space, similar to VFP's work files. When SQL Server needs a
temporary table for solving a query, sorting or implementing cursors, it creates one in tempdb.
In addition, temporary objects created by a user exist in tempdb. Unlike other databases,
tempdb is reinitialized every time SQL Server is started. Operations within tempdb are logged,
but only to support transaction rollbacks.
msdb
msdb contains the metadata that drives SQL Server Agent. SQL Server Agent is the service
that supports scheduling of periodic activities such as backups, and responds to events that
are posted into NT's Event log. The information for Jobs, Alerts, Operators, and backup
and restore history is held here. You'll probably have little use for directly accessing the
msdb database.
Database files
A database is physically stored in a collection of database files. A database file is an operating
system file, created and maintained by SQL Server. When you create a database, you specify a
list of files. You can specify three types of files: primary, secondary and log.
Primary data files: Every database must have one primary database file. In
addition to storing data, this file contains the database catalog as well as references
to the other files that comprise the database. By convention, the primary file has an
.MDF extension.
Secondary data files: A database may have additional files, called secondary database
files. You might create secondary files if you were running out of space in the primary
file or you wanted to distribute disk activity across multiple physical drives. By
convention, secondary files have an .NDF extension. Note that secondary files require
special consideration, as they complicate the backup and restore process.
Log files: Every database must have at least one log file. Log files contain the
transaction log. By convention, log files have an .LDF extension.
When you create a database file, you'll specify several properties, including the physical
file name, the path, the initial size, a growth increment, the maximum file size and the logical
name of the file. You'll use the logical file name whenever you manipulate the file properties
using the SQL Server Enterprise Manager or Transact-SQL.
Creating a database
There are many ways to create a database. The easiest way is to use either the Create Database
Wizard (see Figure 2) or the Database Properties dialog (see Figure 3) from within the SQL
Server Enterprise Manager (SEM). Both are graphical wrappers for the CREATE DATABASE
command that does the actual work.
Figure 3. The General page of the Database Properties dialog when creating
a new database. The key symbol to the left of the first file indicates that the file
is the primary file.
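The equivalent Transact-SQL might look something like this (the database name, logical file
names, paths and sizes are all illustrative):

CREATE DATABASE sales
ON PRIMARY
   (NAME = sales_data,
    FILENAME = 'c:\mssql7\data\sales_data.mdf',
    SIZE = 10MB,
    FILEGROWTH = 10%,
    MAXSIZE = 100MB)
LOG ON
   (NAME = sales_log,
    FILENAME = 'c:\mssql7\data\sales_log.ldf',
    SIZE = 5MB,
    FILEGROWTH = 5MB)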
The buffer cache is a place in memory where SQL Server caches data
pages that have been read from disk. It will also contain the execution plan
for stored procedures.
Each time SQL Server starts, every database goes through a recovery phase. During the
recovery phase, SQL Server examines the transaction log looking for transactions that were
committed but not written to disk. SQL Server will reprocess or roll forward those transactions.
In addition, while scanning the transaction log, SQL Server looks for incomplete transactions
that were written to disk. These transactions will be reversed or rolled back.
SQL Servers row size is limited to roughly 8060 bytes because a row
cannot span multiple pages. The rest of the space on the page is taken up
by a 96-byte page header and some overhead for each row.
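The classic example is transferring money between two bank accounts with two separate
UPDATE statements (the account numbers are illustrative):

UPDATE account SET balance = balance - 100 WHERE ac_num = 14356
UPDATE account SET balance = balance + 100 WHERE ac_num = 45249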
If the first statement succeeds but the second fails, there is no mechanism to reverse the
first statement. To correct this problem, both statements need to be treated as a single unit. The
following example uses an explicit transaction to do just that:
BEGIN TRANSACTION
UPDATE account SET balance = balance - 100 WHERE ac_num = 14356
UPDATE account SET balance = balance + 100 WHERE ac_num = 45249
COMMIT TRANSACTION
The BEGIN TRANSACTION statement starts the explicit transaction. In the event of an
error, it is now possible to undo the work done by either statement by issuing the ROLLBACK
TRANSACTION statement. If no error occurs, the transaction must be completed with the
COMMIT TRANSACTION statement.
Locking
All database management systems employ some type of concurrency control to prevent users
from interfering with each other's updates. SQL Server, like most, uses locks for this purpose.
The query optimizer will determine the best type of lock for a given situation, and the Lock
Manager will handle acquiring and releasing the locks, managing lock compatibilities, and
detecting and resolving deadlocks.
There are three types of locks: shared locks, exclusive locks and update locks.
Shared locks
The optimizer acquires shared locks when reading data in order to prevent one process from
changing data that another process is reading. SQL Server normally releases a shared lock once
it is finished reading the data.
Exclusive locks
The optimizer acquires exclusive locks prior to modifying data. The exclusive lock prevents
two processes from attempting to change the same data simultaneously. It also prevents one
process from reading data that is being changed by another process. Unlike shared locks,
exclusive locks are held until the end of the transaction.
Update locks
An update lock contains aspects of both a shared lock and an exclusive lock and is required
to prevent a special kind of deadlock. To understand the reason for update locks, consider
that most data modification operations actually consist of two phases. In the first phase, SQL
Server finds the data to modify. In the second phase, exclusive locks are acquired and the
data is modified.
SQL Server uses an update lock as it's searching for the data to change. An update lock is
compatible with existing shared locks but not with other update or exclusive locks. After the
update lock has been applied, no other process may acquire a shared, update or exclusive lock
on the same resource. As soon as all the other locks have been released, SQL Server will
promote (that is, change) the update lock to an exclusive lock, make the change, and then
release the lock when the transaction terminates.
Resources
The optimizer determines which resources to lock based on the query that it is trying to solve.
For example, if the optimizer decides that the best way to solve a query is to do a table scan, it
may acquire a lock on the entire table. SQL Server usually prefers to acquire row locks.
The following is a list of the resources that can be locked:
Database
Table
Extent
Page
Index Key
Row
Deadlocks
If two processes have acquired locks on separate resources but also require a lock on the
resource held by the other process, and neither process will continue until it achieves the lock, a
deadlock condition has occurred. Without intervention, both processes will wait forever.
SQL Server detects deadlock conditions automatically and corrects the problem by
choosing one of the processes as the deadlock victim. The victim is the process whose
rollback breaks the deadlock with the least amount of work for SQL Server to
undo. Deadlocks are covered in detail in Chapter 11, "Transactions."
Database objects
Each SQL Server database consists of a collection of objects such as tables, indexes and
stored procedures. Well begin our discovery of database objects with a discussion of
object names.
Server.database.owner.name
The server name, database name and owner name are called qualifiers. When all four
components have been supplied, the name is considered fully qualified. You dont always
have to specify a fully qualified name when referencing an objectall of the qualifiers are
optional. If the server name is omitted, SQL Server defaults to the name of the current server.
If the database name is omitted, SQL Server defaults to the current database. If the owner
name is omitted, SQL Server attempts to access the object using the user's username. If that
fails, SQL Server will look for an object with the same name but that is owned by dbo. dbo
(database owner) is a special database user that is automatically mapped to the creator of
the database.
The following are examples of valid object references:
nts1.northwind.dbo.products
northwind.dbo.products
northwind..products
dbo.products
mlevy.products
Products
The first example is a fully qualified name. The second example omits the server name.
The third omits the owner name but retains the dot delimiters, as they are required. This
notation tells SQL Server that the owner of the object could be either the current user or dbo.
The fourth example omits both the server name and the database name. The fifth uses a specific
database user. The last example shows the most common way to refer to a database object:
just the name. In this case, SQL Server looks for an object owned by the user making the
connection; if one is not found, SQL Server refers to the object of the same name owned by the
database owner.
A legal object name must follow the Rules for Regular Identifiers as follows (see also the
SQL Server Books Online):
1. The first character must be a letter (a through z or A through Z) or the underscore (_),
at sign (@) or number sign (#) symbol.
2. Subsequent characters can be letters, decimal digits, or the @, $, # or _ symbols.
Note that @ and # have special meaning when they are used as the first character
of the identifier. The @ symbol denotes a local variable, while a # symbol denotes
a temporary object.
3. The identifier must not be a Transact-SQL reserved word. SQL Server reserves both
the uppercase and lowercase versions of reserved words.
4. Embedded spaces or special characters are not allowed.
If you require an object name that does not conform to these rules, it's okay. As long as the
identifier is delimited by square brackets, SQL Server will accept it.
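For example, the Northwind sample database's Order Details table contains an embedded
space, so queries must bracket the name:

SELECT * FROM [Order Details]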
Tables
A table is a collection of rows where each row describes a unique entity (for example,
customers, employees or sales orders). A row is a collection of columns, each of which
represents one attribute of the entity (such as name, address and quantity). In SQL Server, a
table is often referred to as a base table. You will see this term used often, especially during
discussions about views.
Theoretically, a database can have a maximum of 2,147,483,647 tables.
As with most database objects, there are two ways to create tables in SQL Server. You can
use the SQL Server Enterprise Manager or the Transact-SQL CREATE TABLE command.
To create a table using the SQL Server Enterprise Manager, follow these steps:
1. From within the SQL Server Enterprise Manager, expand a server group and then
expand the server.
2. Expand Databases and then expand the database that will contain the new table.
3. Right-click Tables, choose New Table, and define the table's columns in the designer.
To create a table using Transact-SQL, use the CREATE TABLE command. This
is a simplified example of the CREATE TABLE statement that would create the
northwind..employees table:
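A simplified sketch, trimmed to a few of the table's columns:

CREATE TABLE employees (
   employeeid int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
   lastname nvarchar (20) NOT NULL,
   firstname nvarchar (10) NOT NULL,
   title nvarchar (30) NULL,
   birthdate datetime NULL)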
Note that your CREATE TABLE statements will probably be more complex.
Referential integrity ensures that relationships between parent and child tables are maintained
by storing matching key values in the child and parent tables. The value in the unique identifier
(or Key) column of the parent table appears in the Foreign Key field of the child table. For
example, the Northwind database contains two tables: Orders and Order Details. Each row in
the Order Details table contains the unique identifier of one of the rows in the Orders table.
You do not want to allow the application to add an order item without specifying a specific
order because every order item must belong to exactly one order.
There are two data integrity enforcement types: procedural and declarative.
Procedural data integrity enforces rules using procedural code stored in triggers and stored
procedures. Procedural data integrity is often used when the database engine has no other
functionality available (not the case for Microsoft SQL Server) or if the rules are too complex
to be handled by declarative integrity.
Declarative data integrity enforces data integrity by checking the rules that are defined
when the tables are created. Declarative data integrity is enforced before any changes are
actually made, and therefore enjoys a performance advantage over the procedural methods.
Table 2 summarizes the constraints and other options that SQL Server provides to enforce
data integrity. Although not listed here, triggers and stored procedures (procedural code) can be
used to enforce all types of data integrity.
Data types
The most basic tool a database implementer has for enforcing domain integrity is the data type.
The data type of a column specifies more than what type of data the column can contain. When
you assign a data type to a column, you are controlling:
The nature of the data, such as character, numeric or binary.
The amount of space reserved for the column. For instance, a char(9) will reserve nine
bytes in the row. An int column has a fixed length of four bytes. A varchar(9) column
is a variable length column. In this case, SQL Server will allow a maximum of nine
bytes for the column, but the actual amount used will be determined by the value
stored in the column.
For numeric data types only, the precision of a numeric column specifies the
maximum number of digits that a column can contain, not including the decimal point.
For instance, a decimal(7,2) column can contain a maximum of seven digits. A tinyint
has a domain of 0 to 255, so the precision is three (but the amount of space reserved
for storage in the row is one byte).
Also for numeric data types, you can specify the scale. The scale determines the
maximum number of positions to the right of the decimal point. The scale must be
greater than or equal to zero and less than or equal to the precision (0 <= s <= p).
For a column defined as decimal(7,2), SQL Server reserves two places to the right of
the decimal point.
IDENTITY property
Each table may have one column that is an Identity column, and it must be defined using one
of the numeric data types. When a row is inserted into the table, SQL Server automatically
generates a unique sequential numeric value for the column.
As with many column properties, the IDENTITY property can be specified when the table
is initially created, or it can be applied to an existing table using the Transact-SQL ALTER
TABLE command. When you specify the IDENTITY property, you have the option of
specifying a starting value and an increment value. The starting value is called the seed value,
and it will become the value placed into the first row added to the table. From that point
forward, the values will be incremented by the increment value.
You can use the Transact-SQL @@IDENTITY system function to return the last
IDENTITY value assigned. You have to be careful with this system function: It is scoped to the
connection, and it contains the last IDENTITY value assigned regardless of the table.
You cannot specify an explicit value for the IDENTITY column unless you
enable the IDENTITY_INSERT connection option.
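A quick sketch, using the employees table from the earlier CREATE TABLE example:

INSERT INTO employees (lastname, firstname) VALUES ('Fuller', 'Andrew')
SELECT @@IDENTITY   -- the IDENTITY value just assigned on this connection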
Nullability
The nullability property specifies whether or not the column can accept a NULL value.
It is best to specify the nullability property explicitly for each column. If you don't, SQL
Server makes the decision for you, based on connection and database settings. (See "ANSI
null defaults" and "SET ANSI_NULL_DFLT_ON" in the SQL Server Books Online for
more information.)
Constraints
SQL Server provides constraints as a mechanism to specify data integrity rules. Designers
prefer constraints to procedural mechanisms (triggers and stored procedures) because
constraints are simpler and therefore less vulnerable to designer error. Constraints also enjoy a
performance advantage over procedural mechanisms because SQL Server checks constraints
before updating the data. Procedural mechanisms (i.e., trigger-based integrity solutions) check
the data later in the process, after the data has been updated.
Constraints can be specified when the table is initially defined or added to an existing
table. If a constraint is added to an existing table, SQL Server checks the constraint against the
existing data. If the constraint fails, SQL Server rejects the constraint. To prevent SQL Server
from checking existing data, you can include the WITH NOCHECK option. However, WITH
NOCHECK only affects CHECK and FOREIGN KEY constraints.
You can add a PRIMARY KEY constraint to an existing table using the ALTER
TABLE command:
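A sketch, using the employees table again (the constraint name is illustrative):

ALTER TABLE employees
   ADD CONSTRAINT PK_employees PRIMARY KEY (employeeid)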
To create a PRIMARY KEY constraint using the SQL Server Enterprise Manager,
see the topic "Creating and Modifying PRIMARY KEY Constraints" in the SQL Server
Books Online.
UNIQUE constraints
A table may have multiple unique identifiers (although it can have only one primary key). For
example, suppose that we have a patient table that contains both patient ID and patient Social
Security number. Both columns are unique. If the patient ID is the primary key, we can still
instruct SQL Server to enforce uniqueness of the Social Security number by declaring a
UNIQUE constraint on the Social Security number column.
Just like the PRIMARY KEY constraint, SQL Server will not allow any two rows to
contain the same value in a column marked with a UNIQUE constraint. However, unlike a
PRIMARY KEY constraint, a UNIQUE constraint can be placed on a nullable column.
Creating a UNIQUE constraint using Transact-SQL is very similar to creating a
PRIMARY KEY constraint. The following example shows how you would add a UNIQUE
constraint to an existing employee table:
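A sketch; the ssn column and the constraint name are assumptions:

ALTER TABLE employee
   ADD CONSTRAINT UQ_employee_ssn UNIQUE (ssn)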
To create a UNIQUE constraint using the SQL Server Enterprise Manager, see the topic
"Creating and Modifying UNIQUE Constraints" in the SQL Server Books Online.
CHECK constraints
CHECK constraints enforce domain integrity and are similar to Visual FoxPros Field
and Row rules. To create a CHECK constraint, you specify a logical expression involving
the column you wish to check. If the expression evaluates to False for an attempted
modification, SQL Server does not permit the modification to occur.
Unlike Visual FoxPro, SQL Server does not allow user-defined functions inside of
CHECK constraints.
You can create a CHECK constraint when you initially define the table or afterwards when
the table already exists.
The following example creates a CHECK constraint on the Gender column that allows
only the character values M and F:
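A sketch; the table and constraint names are assumptions:

ALTER TABLE employee
   ADD CONSTRAINT CK_employee_gender
   CHECK (gender IN ('M', 'F'))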
To create a CHECK constraint using the SQL Server Enterprise Manager, see the topic
"Creating and Modifying CHECK Constraints" in the SQL Server Books Online.
DEFAULT constraints
A DEFAULT constraint specifies a value to place in a column during an insert if a value was
not supplied explicitly. The value specified in the DEFAULT constraint must be compatible
with the data type for the column. Unlike Visual FoxPro, SQL Server DEFAULT constraints
cannot contain user-defined functions.
Heres the example from the CHECK constraint, but this time a new column has been
added to capture the date and time that the row was created:
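A sketch that adds the new column and its DEFAULT in one step (names are assumptions):

ALTER TABLE employee
   ADD creat_date datetime NOT NULL
   CONSTRAINT DF_employee_creat_date DEFAULT (GETDATE())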
In this example, if a specific value is not supplied for the creat_date column, SQL Server
will execute the Transact-SQL GETDATE() function and automatically insert the current date
and time into the column.
To create a DEFAULT constraint using the SQL Server Enterprise Manager, see the topic
"Creating and Modifying DEFAULT Constraints" in the SQL Server Books Online.
FOREIGN KEY constraints
A FOREIGN KEY constraint enforces referential integrity by restricting the values in the
foreign key column or columns to valid primary keys from the parent table.
A FOREIGN KEY constraint usually references the parent's primary key, but it can also
reference any of the parent's other unique keys (the column or columns that comprise
UNIQUE constraints).
The following ALTER TABLE command defines a FOREIGN KEY constraint on the
Order Details table that references the Orders table:
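A sketch, using the Northwind tables (the constraint name is illustrative):

ALTER TABLE [Order Details]
   ADD CONSTRAINT FK_OrderDetails_Orders
   FOREIGN KEY (orderid) REFERENCES orders (orderid)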
Indexes
Correctly designed indexes are critically important because of their effect on database
performance. (This is true of both SQL Server and VFP databases.)
When SQL Server searches for a specific row or groups of rows, it can check every row of
the table or it can find an appropriate index and use the information in the index to go directly
to the desired rows. The optimizer will decide which method is less expensive (in terms of page
I/O) and choose it.
In addition to speeding up searches, indexes are used to enforce uniqueness. (See the
earlier discussion of PRIMARY KEY and UNIQUE constraints.)
It is generally a good idea to index the following items:
Columns within a primary key
Columns within a foreign key
Columns that frequently appear in WHERE clauses of queries
Columns that the application uses frequently as the basis for a sort
Creating indexes
Indexes can be created by using the Transact-SQL CREATE INDEX command or
the SQL Server Enterprise Manager. The partial syntax for the CREATE INDEX
command is:
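CREATE [UNIQUE] [CLUSTERED | NONCLUSTERED] INDEX index_name
   ON table (column [,...n])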
Here's an example:
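(The index name is illustrative.)

CREATE INDEX ix_orders_employeeid ON orders (employeeid)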
This statement creates an index on the employeeid column of the orders table in the
Northwind database.
You can create an index on more than one column. Such an index is called a
composite index.
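For example (again, the index name is illustrative):

CREATE INDEX ix_employees_name ON employees (lastname, firstname)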
In contrast to Visual FoxPro, the columns of a composite index need not be of the same
data type. In addition, SQL Server will probably not use a composite index to solve a query
unless the high-order column (in this case, lastname) appears in the WHERE clause of the
query. SQL Server keeps some statistical information about the distribution of the data
within the index. The statistics are used by the optimizer to estimate how useful the index
would be in solving the query. For a composite index, SQL Server keeps statistics only on
the high-order column.
Indexes are stored internally as a Balanced Tree (or B-Tree for short). In keeping
with the tree metaphor, different parts of the B-Tree are described using terminology similar to
that of a real tree, except upside down (see Figure 4). The Root provides the starting point for
all index searches. Below the root (remember, this tree is upside-down) are the Intermediate
(also known as non-leaf-level) nodes. Large indexes will probably have multiple levels of
intermediate nodes.
At the very bottom of the index are the Leaf nodes. All the keys at the leaf level of the
index are sorted in ascending order based on the key values. The type of index determines the
content of the Leaf nodes.
Types of indexes
SQL Server supports two types of indexes: clustered and non-clustered.
Non-clustered indexes are very similar to Visual FoxPro indexes. The leaf level of a non-
clustered index contains one key for every row in the table. In addition, each key has a pointer
back to the row in the table. This pointer is called a bookmark and has two possible forms
depending on whether or not the table has a clustered index (discussed later). If the table does
not have a clustered index, the bookmark is a Row Identifier (RID), which is the actual row
location in the form of file#:page#:slot#. If the table does have a clustered index, the bookmark
contains the key from the clustered index for that row.
You may have up to 249 non-clustered indexes per table, although it is common to have
far fewer.
The leaf level of a clustered index is the table itself. The clustered index sits on top of the
table. As a result, the table is physically sorted according to the clustered key. For this reason, a
table can have only one clustered index.
SQL Server forces all clustered keys to be unique. If the index was not explicitly
created as UNIQUE, SQL Server adds a four-byte value to the key to make it unique. All
non-clustered indexes on a clustered table (a table with a clustered index) will use the
clustered key as their bookmark.
Views
A view is a virtual table that has no persistent storage or physical presence. It is actually a
definition of a query. Its contents are defined by the results of the query when the query is
executed against base tables (that is, physical or real tables). The view is dynamically produced
whenever it is referenced. To the application, a view looks and behaves just like a base table.
If views look, smell and act like real tables, why bother to use them instead of their base
tables? A view can be used to limit a user's access to data in a table. Using a view, we can
make only certain columns or rows available. For example, we may want everyone in the
organization to have access to the name, address and phone number information in the
employee table, but only Human Resources personnel should have access to the salary details.
To support this requirement, we would create a view that exposes only the name, address and
phone number. Everyone in the organization would access the employee data through this view
except, of course, Human Resources personnel.
Another use for views is to simplify a complex join situation within the database. The pubs
sample database contains a table of authors and a table of titles. Since there is a many-to-many
relationship between the two tables, a third table, titleauthor, exists that maps authors to titles.
A view could be created that joins the authors, titles and titleauthor tables so that users are
presented with a simpler data structure to use as the basis for queries and reports.
You create (that is, define) a view using the Transact-SQL CREATE VIEW statement. The
CREATE VIEW statement to create the view discussed previously would look like this:
USE pubs
GO
CREATE VIEW titlesandauthors AS
SELECT
Titles.title_id,
Titles.title,
Authors.au_id,
Authors.au_lname,
Authors.au_fname,
Titleauthor.royaltyper AS RoyaltyPercentage
FROM titles
   INNER JOIN titleauthor ON titles.title_id = titleauthor.title_id
   INNER JOIN authors ON authors.au_id = titleauthor.au_id
Using the view is just a matter of referring to it as you would any real table:
SELECT *
FROM titlesandauthors
ORDER BY title
Stored procedures
A stored procedure is a collection of Transact-SQL statements that is stored in the database.
Stored procedures are similar to procedures in other languages. They can accept parameters,
call other stored procedures (including recursive calls), and return values and status codes
back to the caller. Unlike procedures in other languages, stored procedures cannot be used
in expressions.
Stored procedures are not permanently compiled and stored in the database. The only
thing stored about a stored procedure is the source code, which is physically stored in the
SYSCOMMENTS system table. When SQL Server needs to execute a stored procedure, it
looks in the cache to see whether there is a compiled version there. If so, SQL Server reuses
the cached version. If not, SQL Server gets the definition from the SYSCOMMENTS table,
parses it, optimizes it, compiles it and places the resulting execution plan in the cache. The
execution plan remains there until it's paged out (using a least recently used algorithm) or
the server is restarted.
Stored procedures are a powerful tool in the database implementer's toolbox. Stored
procedures can be used to encapsulate logic and share it across applications. They can provide
a performance advantage by allowing SQL Server to reuse execution plans and skip the parse,
optimize and compile steps.
Like views, stored procedures can also be used to limit or control access to data.
Stored procedures are created with the Transact-SQL CREATE PROCEDURE command:
USE pubs
GO
CREATE PROCEDURE getauthors AS
SELECT * FROM authors
The previous example was relatively simple. It simply returns the entire authors table to the
caller. The next example adds the use of a parameter that specifies a filter condition:
USE pubs
GO
CREATE PROCEDURE getauthor
@author_id varchar(11)
AS
SELECT * FROM authors WHERE au_id = @author_id
IF @@ROWCOUNT > 0 RETURN 0
ELSE RETURN -1
This example takes a parameter, the ID of an author, and returns the row from the authors
table that matches it. There's also some additional logic to check the number of affected rows
using the @@ROWCOUNT system function (similar to Visual FoxPro's _TALLY system
variable) and return a status code of zero (0) for success or -1 for no matches.
To execute this stored procedure, you would use the EXECUTE statement:
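(A sketch; the author ID is one of the rows in the pubs sample data.)

EXECUTE getauthor '172-32-1176'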
or
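EXECUTE getauthor @author_id = '172-32-1176'   -- same call, passing the parameter by name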
Note that the RETURN statement can only return an integer value; therefore, it cannot be
used to return character strings or other data types. Fortunately, returning a result set and the
RETURN statement are not the only ways to get data back from a stored procedure. You can
declare specific parameters as OUTPUT parameters. OUTPUT parameters allow a value to be
returned to the calling routine, similar to passing a parameter by reference in Visual FoxPro.
The following example counts the number of books written by the specified author and returns
the count through an OUTPUT parameter:
USE pubs
GO
CREATE PROCEDURE BookCount
@author_id varchar(11),
@count int OUTPUT      -- parameter name assumed; returns the book count
AS
SELECT @count = COUNT(*)
FROM titleauthor
WHERE au_id = @author_id
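Calling the procedure might look like this (the variable name is illustrative):

DECLARE @nBooks int
EXECUTE BookCount '172-32-1176', @nBooks OUTPUT
SELECT @nBooks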
The OUTPUT keyword is required in the stored procedure and when the procedure is
called. If the keyword is omitted in either place, SQL Server returns an error.
Here's a more complex example of a stored procedure that handles errors and manages
a transaction:
-- The Funds table is the only table that's needed. We're only going to
-- create the columns required by the TransferFunds stored procedure.
USE bank
CREATE TABLE Funds (
Fund_id int IDENTITY(10000,1) PRIMARY KEY,
Amount money)
GO
CREATE PROCEDURE TransferFunds
@SourceFund int = NULL,
@TargetFund int = NULL,
@amount money = NULL
AS
----------------------
-- Parameter checking
----------------------
IF @SourceFund IS NULL
BEGIN
RAISERROR ('You must supply a source fund', 11, 1)
RETURN 1
END
IF NOT EXISTS (SELECT * FROM funds WHERE fund_id = @SourceFund)
BEGIN
RAISERROR ('Source fund not found', 11, 1)
RETURN 1
END
IF @TargetFund IS NULL
BEGIN
RAISERROR ('You must supply a Target fund', 11, 1)
RETURN 1
END
IF NOT EXISTS (SELECT * FROM funds WHERE fund_id = @TargetFund)
BEGIN
RAISERROR ('Target fund not found', 11, 1)
RETURN 1
END
---------------------
-- Make the transfer
---------------------
BEGIN TRANSACTION Fund_Transfer
UPDATE funds SET amount = amount - @amount WHERE fund_id = @SourceFund
IF @@ERROR <> 0 GOTO AbortTransfer
UPDATE funds SET amount = amount + @amount WHERE fund_id = @TargetFund
IF @@ERROR <> 0 GOTO AbortTransfer
COMMIT TRANSACTION Fund_Transfer
RETURN 0
AbortTransfer:
ROLLBACK TRANSACTION Fund_Transfer
RETURN 1
Triggers
A trigger is a special type of stored procedure. It is tightly coupled to a table and is executed by
SQL Server in response to specific operations against the table. The most common use of
triggers is to enforce rules that are specified procedurally (that is, in procedural code). Triggers
are also used to cascade deletes and updates to child tables and to maintain denormalized data.
When you create a trigger, you specify which operation or operations (INSERT, UPDATE
and/or DELETE) cause the trigger to fire. New in SQL Server 7.0 is the ability to have multiple
triggers for the same operation. For example, you can have multiple update triggers, where each
trigger essentially watches for changes in a specific column.
Microsoft has declared that if multiple triggers are defined for the same
operations, their order of operation is unknown.
Unlike a Visual FoxPro trigger, which fires once for each affected row, a SQL Server
trigger fires once no matter how many rows were affected by the query. The trigger always fires
once, even if the query affected no rows. When you write a trigger, you must consider whether
you need additional code to detect and handle the situation where no rows were affected.
Triggers fire after the data has been modified but before the transaction is committed (in
the case of an implicit transaction). Therefore, a trigger can cause a transaction to be aborted by
issuing a ROLLBACK TRANSACTION from within the trigger. Because the trigger fires after
SQL Server modifies the data, the trigger can view the before and after results of the query.
This is accomplished by using two special tables called Inserted and Deleted. The Inserted
and Deleted tables exist in memory and only for the life of the trigger. These tables are not
visible outside the trigger. (For more information on the Inserted and Deleted tables, see the
following sections in this chapter: "The INSERT operation," "The DELETE operation" and
"The UPDATE operation.")
You create a trigger using the Transact-SQL CREATE TRIGGER statement. The partial
syntax is shown here:
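CREATE TRIGGER trigger_name
ON table
FOR { [INSERT] [,] [UPDATE] [,] [DELETE] }
AS
   sql_statements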
Here's a simple example that maintains two audit columns, upd_datetime and upd_user.
First we'll add the two columns to the products table and then create the trigger:
USE northwind
GO
ALTER TABLE Products ADD
upd_datetime datetime NULL,
upd_user varchar(10) NULL
GO
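A sketch of the trigger itself; the trigger name is an assumption, and SUSER_SNAME()
supplies the current login name:

CREATE TRIGGER Products_Update ON Products FOR UPDATE AS
UPDATE Products
   SET upd_datetime = GETDATE(),
       upd_user = LEFT(SUSER_SNAME(), 10)
   FROM Products INNER JOIN inserted
      ON Products.productid = inserted.productid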
The previous example referred to the Inserted table that was mentioned earlier. Let's
look at the operation of triggers, and their effects on the Inserted and Deleted tables, in a little
more detail.
This example has a flaw: It will work correctly only if rows are inserted into the Order
Details table one at a time. If one INSERT operation manages to produce two Order Details
rows for the same product, the trigger will generate an error since this specific use of a
subquery allows only one row to be returned. Fortunately, this problem is easy to remedy by
replacing the quantity with the SUM aggregate function. The corrected version follows:
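A sketch of the corrected trigger (the name is an assumption); SUM() totals the quantity per
product no matter how many Order Details rows one INSERT produces:

CREATE TRIGGER OrderDetails_Insert ON [Order Details] FOR INSERT AS
UPDATE Products
   SET unitsinstock = unitsinstock -
      (SELECT SUM(quantity) FROM inserted
       WHERE inserted.productid = Products.productid)
   WHERE Products.productid IN (SELECT productid FROM inserted)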
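Triggers can also cascade a delete from a parent table to its children. A sketch, again with
assumed names:

CREATE TRIGGER Orders_Delete ON Orders FOR DELETE AS
DELETE [Order Details]
   FROM [Order Details] INNER JOIN deleted
      ON [Order Details].orderid = deleted.orderid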
This trigger will never fire if you have a FOREIGN KEY constraint defined between the
Order Details and Orders tables. Remember, constraints are checked before any work is
actually done, and triggers fire after changes are made. Before SQL Server executes the
DELETE on Order, the FOREIGN KEY constraint will force it to check for references in the
Order Details table. Finding any foreign keys referencing the row that would be deleted will
cause SQL Server to return a constraint violation error and cancel the statement. In order to
implement cascading deletes, you will not be able to use FOREIGN KEY constraints between
the participating tables.
The following UPDATE trigger prevents any product price from being increased by more
than 25 percent (the trigger name and the CREATE TRIGGER header are assumptions; note
that both the RAISERROR and the ROLLBACK must sit inside the IF EXISTS block):

CREATE TRIGGER Products_Price_Update ON products FOR UPDATE AS
IF UPDATE(unitprice)
BEGIN
   IF EXISTS (
      SELECT *
      FROM inserted INNER JOIN deleted
         ON inserted.productid = deleted.productid
      WHERE inserted.unitprice/deleted.unitprice > 1.25)
   BEGIN
      RAISERROR(
         'No product price may be increased by more than 25 percent',
         10, 1)
      ROLLBACK TRANSACTION
   END
END
Summary
Our goal for this chapter was to give you some basic information about SQL Server and
introduce some fundamental concepts and the various database objects that are used to
implement a database design.
In the next chapter, we'll look at one way to use Visual FoxPro to access SQL Server.
Chapter 4
Remote Views
Visual FoxPro provides two built-in mechanisms for working with client/server data:
remote views and SQL pass through. Other data access methods, such as ADO, can
also be used in Visual FoxPro client/server applications. Each technique has its
advantages and disadvantages. Remote views have the advantages of being extremely
easy to use and being bindable to FoxPro controls. A remote view is a SQL SELECT
statement stored in a Visual FoxPro database container (DBC). Remote views use
Open Database Connectivity (ODBC), a widely accepted data-access API, to access
any ODBC-compliant data.
Although the examples in this book use Microsoft SQL Server on the back end,
remote views can also be used with many other back ends such as Oracle, IBM DB2,
Informix, Sybase, Microsoft Access or Excel, or even Visual FoxPro. With a remote
view, you can work with client/server data almost as if it were local Visual FoxPro
data. In this chapter you will learn how to use this terrific tool as the foundation for a
client/server application. In addition, by learning the fundamentals of remote views,
you will be ready to learn about SQL pass through in Chapter 6, "Extending Remote
Views with SQL Pass Through."
Connections
Before you can create a remote view, you must specify how the view will connect to the back
end. There are several ways to do this, all of which use ODBC. Therefore, both ODBC itself
and the back-end-specific ODBC driver must be installed and configured on the client machine.
For SQL Server development, ODBC installation is done when installing Visual Studio and/or
SQL Server. For an application you distribute, ODBC installation can be done through the
Visual FoxPro Setup Wizard.
Here is a very simple remote view that returns all rows and columns in the Northwind
database's Customers table:
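(A sketch of the view definition; it assumes a connection or DSN named Northwind,
discussed next.)

CREATE SQL VIEW VCustomers ;
   REMOTE CONNECTION Northwind ;
   AS SELECT * FROM Customers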
The second line specifies which connection VFP will use to execute the SELECT; in this
case, one called Northwind. VFP will look for a connection called Northwind in two places:
first in the list of named connections in the current DBC, and then in the client machine's list of
ODBC Data Source Names, or DSNs.
Named connections, which are stored in the DBC along with the view definitions, offer
greater flexibility than DSNs. Named connections can use a string that defines the server,
database, login name and password for connecting to the back end. A connect string allows you
to define your connection at run time, rather than requiring a DSN, which is especially useful
for applications that connect to multiple servers. Alternately, named connections can use an
existing DSN to define the connection.
The quickest way to get rolling with the VCustomers view is to create a DSN to connect to
the SQL Server Northwind database. To create a DSN, start the ODBC Data Sources Control
Panel applet, which, depending on the version of ODBC installed on your machine, looks
something like Figure 1.
There are three types of DSNs: user, system and file. A user DSN can be used only by one
particular user, while a system DSN can be used by any user on the machine. User and system
DSNs are stored in the registry of the client machine, while file DSNs are stored in text files
and can be located anywhere. We typically use system DSNs because we only have to set up
one DSN per machine rather than one per user. Each type of DSN is set up with its own tab in
the dialog.
To create the Northwind system DSN, click on the System DSN tab in the ODBC Data
Source Administrator dialog, click the Add button, select the SQL Server driver, and then
click the Finish button. Now you will see the Create a New Data Source to SQL Server dialog.
Fill in the fields as shown in Figure 2. The DSN name is what you will use when you create
connections, while the description is optional. If SQL Server is running on the local machine,
be sure to put (local) in the Server field rather than the machine name. Using the machine
name, particularly on Windows 95 or 98 machines, will frequently cause the connection to fail,
at least with the driver versions available at the time of this writing.
Figure 2. The Create a New Data Source to SQL Server dialog filled in to create a
connection to the Northwind database on the local machine.
When you click the Next button, ODBC will attempt to locate the specified server; if
successful, youll be asked to configure the connection as shown in Figure 3 and Figure 4. If
unsuccessful, you may have a problem with a network connection, or you may not have
permission to access the server. Neither of these situations can be rectified here, but require
checking your network or your SQL Server.
Figure 3. Configuring the Northwind connection to use SQL Server security with the
default sa login.
Once you've created the connection to the Northwind database on SQL Server, create a Visual FoxPro database by typing the following in the Command Window:
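*-- The DBC name follows the text, which later opens "the Northwind DBC"
CREATE DATABASE Northwind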
The next step is to use a named connection in the VFP database. A named connection is an
object in the VFP database that contains connection information. Why use a named connection
rather than just a DSN? One major reason is that once you have created one, you can share the
connection among multiple views. Each ODBC connection uses resources on both the server
and the client. They take time to establish, they use memory (about 24K per connection on SQL
Server), and having too many of them can seriously degrade performance in some systems.
Although ODBC has a connection pooling feature that allows unused connections to be reused,
you as a developer cannot control this feature from your application.
If the VCustomers view defined previously and another view are both opened, two ODBC connections will be established. To demonstrate this, define a view of the Orders table, then open it and the VCustomers view:
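*-- A sketch; VOrders is an illustrative name
CREATE SQL VIEW VOrders ;
   REMOTE CONNECTION Northwind ;
   AS SELECT * FROM orders
USE VCustomers IN 0
USE VOrders IN 0
? CURSORGETPROP("ConnectHandle", "VCustomers")
? CURSORGETPROP("ConnectHandle", "VOrders")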
First of all, note that you will be asked for the user ID and password twice. Also, the last
two lines will display two different numbers. By using a named connection, the same ODBC
connection can be used by both views. To create a named connection and a view that can share
it, open the Northwind DBC and type the following in the Command Window:
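*-- A sketch, assuming the Northwind DSN created earlier
CREATE CONNECTION Northwind DATASOURCE "Northwind"
CREATE SQL VIEW VCustomers ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT * FROM customers
CREATE SQL VIEW VOrders ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT * FROM orders
DBSETPROP("VCustomers", "View", "ShareConnection", .T.)
DBSETPROP("VOrders", "View", "ShareConnection", .T.)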
Note the addition of the SHARE keyword to the view definition and the use of the DBSETPROP() call to set the ShareConnection property. You must do both of these in order to share the connection. Now when you attempt to open the two views, you will only be asked to log in once, and the connect handle will be the same for both cursors. Note that the SHARE keyword (in the CREATE SQL VIEW statement) and the ShareConnection property (in the DBSETPROP() statement) have no effect with views using a DSN rather than a named connection, because a DSN connection cannot be shared by multiple views.
In Visual FoxPro, most environment settings are local to a data session. However, it is important to note that a named connection can be shared by multiple data sessions.
The Northwind named connection we just created uses the Northwind DSN, but you can
also create named connections that use connect strings. A connect string is a string that
contains the server name, login name, password and database:
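*-- A sketch; the login and password are placeholders, and some drivers
*-- also expect a Driver= or DSN= entry in the string
CREATE CONNECTION NorthwindCS ;
   CONNSTRING "server=(local);uid=sa;pwd=;database=Northwind"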
Each of the four parts of the connect string is delimited by a semicolon. No quote
marks are used for the individual parameters, though you can optionally surround the entire
connect string in quotes. You will need the quotes, however, if you require spaces within
the string.
Named connections can also be created with VFP's Connection Designer. Right-click in an
open Database Designer and select Connections to open a list of named connections. Click
New and you will see the Connection Designer, which looks like Figure 5.
The Connection Designer has controls for setting additional properties for named
connections. Each of these properties can also be set using the DBSETPROP() function. The
Visual FoxPro documentation provides a complete listing of properties under the
DBGETPROP() topic. Well cover a few of them here:
Asynchronous. When set to .F., the default, the connection executes commands synchronously; that is, the next line of code doesn't execute until the previous
command on the connection has completed. Asynchronous execution allows
commands on the connection to execute in the background while your code continues
to execute. While asynchronous processing may be useful for certain tasks, generally
you want synchronous execution.
ConnectTimeout. When set to any value other than 0 (the default), VFP will attempt to
acquire the connection for the number of seconds specified. If Visual FoxPro is unable
to connect within this time period, an error occurs.
IdleTimeout. This is similar to ConnectTimeout, but it doesn't actually disconnect
when it times out. It merely deactivates the connection. If the connection is used
for a view, VFP will reactivate the connection when you attempt to use it again.
Use with care, as this can cause unclear errors to occur in your application (e.g., "Connectivity error: unable to retrieve specific error information. Driver is probably out of resources.").
DispLogin. This property determines whether and how the user is prompted for login
information. The default setting is 1 (DB_PROMPTCOMPLETE from Foxpro.h),
which will only prompt the user if some required login information is missing.
DB_PROMPTALWAYS, or 2, will cause the user to be prompted each time a
connection is made to the server. DB_PROMPTNEVER, or 3, will not prompt the
user, even if no login information is supplied, allowing the connection to fail. This last
setting is required for using remote views or SQL pass through with Microsoft
Transaction Server (MTS).
DispWarnings. If this property is set to .T. (the default), then non-trappable ODBC
errors will be displayed to the user in a message box. In an application, you'll typically set this to .F. and deal with errors yourself. For more about error handling, see Chapter
8, Errors and Debugging.
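For example, here is how a couple of these properties might be set in code; the values shown are illustrative:
DBSETPROP("Northwind", "Connection", "ConnectTimeout", 15)
DBSETPROP("Northwind", "Connection", "DispWarnings", .F.)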
Remote views
In the Connections section of this chapter, you learned how to create a basic remote view
of the Northwind Customers table. This view is nothing more than a SQL SELECT that gets
all rows and all columns of the Customers table. If you run this on your development machine
with SQL Server running on the same machine, the query will execute quickly, as there are
only 91 records. But on a network with many users (particularly one with a low-bandwidth connection, and with thousands of customers in the table), this would be a terribly inefficient
query. A more efficient view can be created by adding a WHERE clause to reduce the number
of rows returned. The following view will only return rows where the customerid column
contains ALFKI:
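*-- A sketch; the view name is illustrative
CREATE SQL VIEW VCustomersALFKI ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT * FROM customers ;
   WHERE customerid = 'ALFKI'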
Now the view will only return a single row, but it can only be used for a single customer.
Visual FoxPro allows you to create parameterized views so that you can define the WHERE
clause when the view is executed.
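*-- A sketch of the parameterized view described in the text
CREATE SQL VIEW VCustomers ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT * FROM customers ;
   WHERE customerid = ?cCustomerID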
Here, cCustomerID is a view parameter, which represents variable data that can be
filled in at run time. By preceding cCustomerID with the question mark operator, you tell VFP
to create the structure of the view in the DBC but to evaluate a memvar called cCustomerID at
run time. If cCustomerID exists when the view is opened or REQUERY() or REFRESH() is
issued, its value will be substituted into the WHERE clause. If the variable cCustomerID does
not exist, the user will be prompted to supply it, as shown in Figure 6. In an application, you
will usually want to specify the values of parameters yourself rather than allowing VFP to
prompt the user like this.
Figure 6. When opening a parameterized view where the parameter does not already
exist, the user is prompted to provide a parameter.
When creating a client/server application, we usually create one SELECT * view per table
in the database and set the parameter to the primary key. We use these views for data entry and
give the view the same name as the table, preceded by the letter V. Sometimes it makes sense
to parameterize these views on some foreign key, but generally using the primary key assures
you of views that bring down only a single record.
When views are used to return a range of records for reporting or lookups, it often makes
sense to use parameters other than the primary key. For example, you might want to find all
customers in London:
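*-- A sketch; the view name follows the text
CREATE SQL VIEW VCustomersByCity ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT * FROM customers ;
   WHERE city = ?cCity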
When you set the variable cCity to the value London and open the VCustomersByCity
view, the result set will be only those customers in London.
You can use wildcards in view parameters, too. To find all customers in any city beginning
with the letter L, set cCity to a value of L% prior to executing the query.
The syntax for wildcards is not the same in SQL Server as it is in FoxPro.
While the % wildcard is the same in both, you cannot use the * wildcard in
SQL Server. Setting cCity to a value of L* would not return customers in
cities beginning with L, but rather in cities beginning with L*. There probably
arent any cities in your data with such a name. Instead, use L%.
As with any other SQL SELECT statement, you can specify a field list. SELECT * may be
useful in some situations, but it is often more efficient to specify the field list explicitly in order
to bring down only the columns you need. For example, if you only need the customer ID,
company name, city and country for each customer, a more efficient and equally useful view of
customers would look like this:
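*-- A sketch; the view name is illustrative
CREATE SQL VIEW VCustomerList ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT customerid, companyname, city, country ;
   FROM customers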
Remote views, like other SQL SELECTs, can also join multiple tables. For example, this
view returns all sales territories and the employees responsible for them:
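*-- A sketch; the view name and exact column list are assumptions
CREATE SQL VIEW VTerritories ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT t.territorydescription, e.employeeid, e.lastname, e.firstname ;
   FROM territories t ;
   INNER JOIN employeeterritories et ON t.territoryid = et.territoryid ;
   INNER JOIN employees e ON et.employeeid = e.employeeid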
You must be certain to use join syntax that is supported by the back end. VFP and SQL
Server 7.0 are pretty similar, but you may encounter back ends that are different.
When creating views, avoid using * for the field list. The view will work
without errors until a field is added to the base table(s) on the SQL
Server. Since the view was defined with the previous version of the
table, Visual FoxPro does not know about the new field(s), and produces the error "Base table fields have been changed and no longer match view fields" when executed.
Updatable views
Remote views can be used to update the underlying data. You can append records in views,
delete existing records and update fields. When you are ready to update the data on the back
end, simply issue a call to TABLEUPDATE(), and VFP takes care of sending the changes to
SQL Server.
Remote views can be made updatable in the View Designer, as shown in Figure 7. At a
minimum, you must select one or more primary key columns, determine which columns to
update, and check the Send SQL updates check box. Even if you mark every column as
updatable, updates will not be sent unless you also check this check box. In Figure 7, note that
the primary key column has also been marked as updatable. This is because this column's value
is set by the user, not by the database. If this were an identity column or if its value were set by
an insert trigger, you would not make this column updatable.
Figure 7. The Update Criteria page of the Visual FoxPro View Designer can be used
to make remote views updatable.
The Visual FoxPro View Designer is very limited in its ability to modify
remote views. If you have remote views with joins, it's likely that you won't be able to edit them with the View Designer once you have saved them.
Use the View Designer to create your view and mark the updatable fields if you
wish. But when you need to edit the view again, be prepared to do so in code.
(See Chapter 5, Upsizing: Moving from File-Server to Client/Server.)
The Update Criteria tab of the View Designer simply provides a convenient user interface
for setting view and field properties in the DBC. The same properties can be set with
DBSETPROP(). The following two lines of code make the CustomerID field updatable and
mark it as a primary key:
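DBSETPROP("VCustomers.customerid", "Field", "Updatable", .T.)
DBSETPROP("VCustomers.customerid", "Field", "KeyField", .T.)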
Setting the KeyField and Updatable properties of fields and the SendUpdates property of
the view is critical to updating data. Many a developer has spent a frustrating session trying to figure out why data isn't being saved, when it's simply because the view isn't configured to do so.
Figure 7 shows two other properties that are important for updatable views. The first one,
SQL WHERE clause includes, sets the view's WhereType property, which determines how
collisions are detected. The four option buttons in the View Designer correspond to the four
numeric values that can be set for the WhereType property. Here's the code that duplicates the setting shown in Figure 7:
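*-- Assuming Figure 7 shows the default, "Key and modified fields"
DBSETPROP("VCustomers", "View", "WhereType", 3)  && DB_KEYANDMODIFIED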
One of the choices, Key and timestamp, relies on a timestamp column in the table. SQL Server will automatically assign a timestamp data type to a column named timestamp.
You can also have timestamp columns with other names, in which case you must explicitly
define the data type.
The final option group on the Update Criteria page of the View Designer, Update using,
sets the view's UpdateType property. The default is 1, or DB_UPDATE (Update in Place), and
is what you will want to use most of the time. To let SQL Server choose the most appropriate
action, leave this setting on DB_UPDATE; otherwise, you will force SQL Server to always
delete and then insert records, causing extra work and slowing performance. The UpdateType
property is set in code like this:
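DBSETPROP("VCustomers", "View", "UpdateType", 1)  && DB_UPDATE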
Buffering
Because you can't work directly with a table in a client/server database, the data is
automatically buffered, unlike with VFP tables, where your changes immediately affect the
tables unless you use buffering. As with VFP, there are two ways to buffer a view: row
buffering and table buffering. Row buffering commits changes for one row at a time, while
table buffering commits multiple rows.
There is a popular misconception among VFP developers that row buffering should be used to buffer one row at a time and table buffering should be used to buffer multiple rows. While that is true, there's one additional difference that may override any other considerations when trying to decide which scheme to use: Row buffering causes changes to be committed automatically whenever the record pointer moves, while table buffering requires an explicit call to TABLEUPDATE() to commit changes. "Okay," you say, "don't move the record pointer until you're ready to commit changes." Sometimes it isn't that easy, as some VFP commands will move the record pointer unintentionally, thus causing the changes to be committed unintentionally. Also, you may want to wrap changes to multiple tables in a transaction. But if these changes are happening automatically, you won't be able to combine them into a transaction. Therefore, we never use row buffering, even if we're working with only one row at a time.
When you open a view, it is row buffered by default. You should change it, either by
setting cursor properties in a form's data environment or by explicitly setting the buffer mode
with CURSORSETPROP(). The following code will change a views buffer mode from row
to table:
CURSORSETPROP("Buffering", 5, "myalias")
It's preferable to open all of your views and then set table buffering for all open work areas
in a method of your form class. Listing 1 shows the code for this method.
Listing 1. This method loops through all open work areas and sets the buffer mode.
PROCEDURE SetBufferMode(tnBufferMode)
IF PCOUNT() = 0
   *-- Default to table buffering
   tnBufferMode = 5
ENDIF
LOCAL i, lnCount
LOCAL ARRAY laUsed[1]
*-- The rest of the listing is a sketch: table buffering requires
*-- SET MULTILOCKS ON, and AUSED() fills laUsed with the open aliases
lnCount = AUSED(laUsed)
FOR i = 1 TO lnCount
   CURSORSETPROP("Buffering", tnBufferMode, laUsed[i, 1])
ENDFOR
ENDPROC
The following line commits the changes to the current row only:
TABLEUPDATE(.F.)
And this line discards pending changes to all rows in the current work area:
TABLEREVERT(.T.)
The following line updates all rows in the current work area, but stops when the first
collision is detected:
TABLEUPDATE(.T.)
The following line updates all rows in the current work area, but continues after a collision
is detected and attempts to update all the following rows:
TABLEUPDATE(2)
Note that TABLEUPDATE() can take a logical or a numeric first parameter. Numeric 0 is
equivalent to .F., numeric 1 is equivalent to .T.
Collisions occur when two users are attempting to make changes to the same record. The
WhereType property of a view or cursor, as described previously, determines how collisions
are detected. When SQL Server detects a collision, it generates a non-trappable error. If an
automatic commit is made by moving the record pointer, you are not informed of the change. If
you commit changes manually with the TABLEUPDATE() function, then the return value of
the function will inform you whether the update was successful. Collisions will only be detected
if the second parameter to the TABLEUPDATE() function is FALSE, like this:
TABLEUPDATE(.T., .F.)
If a collision occurred, the TABLEUPDATE() function will return FALSE. If you choose
to do so, you can attempt to resolve the collision and then commit the records again, this time
using TRUE for the second parameter:
TABLEUPDATE(.T., .T.)
This will force the changes to be committed. Collision handling is covered in greater detail
in Chapter 8, Errors and Debugging.
FetchSize and FetchAsNeeded
The FetchSize property controls how many rows VFP fetches from the server at a time; the default is 100. If the view brings down 500 records, control is returned to the program as soon as the first
100 are returned. That means either the next line of code will be executed or control of the user
interface will be returned to the user after 100 records. If the FetchAsNeeded property is set to
.T., then no more records will be fetched until the user attempts to scroll to record 101, at
which time the next 100 rows are retrieved:
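*-- A sketch; the view name is illustrative
DBSETPROP("VCustomers", "View", "FetchSize", 100)
DBSETPROP("VCustomers", "View", "FetchAsNeeded", .T.)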
But if FetchAsNeeded is set to .F., then the remaining 400 rows will be fetched in the
background. In some cases this works great, as a user can be looking at data right away while
more is being fetched in the background. By the time the user gets through all of the first batch
of data, there ought to be at least another batch waiting. However, if there is more code to
execute, you must be cautious of how these properties are set. In the preceding example, if a following line of code queries another view on the same connection or attempts a SQL pass through command on the same connection, you will get a "connection is busy" error. To prevent such errors, you must set the FetchSize property to -1:
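CURSORSETPROP("FetchSize", -1, "VCustomers")  && fetch the whole result set at once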
However, if you set it to -1, then all records must be returned before program execution
can continue. This is another good reason to refine your queries so they produce small
result sets.
MaxRecords
The MaxRecords property determines the maximum number of rows in a result set. The main
reason this property exists is to help prevent a non-specific query from sending a large amount
of data to the local workstation. By setting this property to a reasonable value, you prevent the
users from accidentally filling their hard drives with useless data.
Another good example of using the MaxRecords property would be attempting a TOP n query on a back end that doesn't support it, such as SQL Server 6.5. This query generates a syntax error on SQL Server 6.5:
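*-- A sketch; legal on SQL Server 7.0, a syntax error on 6.5
CREATE SQL VIEW VTopProducts ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT TOP 10 * FROM products ORDER BY unitprice DESC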
But the same thing could be achieved by limiting the number of records returned by
the view:
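*-- The same result without TOP; MaxRecords caps the rows fetched
CREATE SQL VIEW VTopProducts ;
   REMOTE CONNECTION Northwind SHARE ;
   AS SELECT * FROM products ORDER BY unitprice DESC
DBSETPROP("VTopProducts", "View", "MaxRecords", 10)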
FetchMemo
The FetchMemo property determines whether the contents of memo fields are brought down
with every record or just when they are needed. When set to .T., like this:
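*-- The view name is illustrative
DBSETPROP("VCustomers", "View", "FetchMemo", .T.)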
the memo field contents will always be retrieved. This could mean a lot of unnecessary network
traffic. When this property is set to .F., the memo contents will only be retrieved when you or
the user perform some action to cause a MODIFY MEMO to be issued. These include
explicitly issuing MODIFY MEMO or implicitly doing so in a grid, or by navigating to a
record when the memo is bound to an edit field.
Tables
The Tables property contains a list of tables included in the view. This property must be set
correctly for TABLEUPDATE() to succeed. Most of the time it works just fine, but
occasionally there are problems with it. Sometimes it works fine when you set the property in
the DBC, like this:
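DBSETPROP("VCustomers", "View", "Tables", "customers")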
Other times, the Tables property doesn't make its way to the cursor when the view is opened, even though the property exists in the DBC. In this case, TABLEUPDATE() returns an error because it can't find the Tables property. The following line of code fixes the problem reliably:
CURSORSETPROP("Tables", "customers")
Other times, the Tables property makes its way to the cursor and yet VFP still gives an
error that no Tables property can be found. Why this happens we don't know, but we have
occasionally even done this to make it work:
CURSORSETPROP("Tables", CURSORGETPROP("Tables"))
For what it's worth, we've encountered this with both local and remote views. Setting the
property at run time has always fixed it.
Field properties
Earlier in this chapter, you learned about the KeyField and Updatable properties for view
fields. There are a few other important field properties, too. Unlike with connection and view
properties, these can only be set persistently in the database. There is no field-level equivalent
to SQLSETPROP() or CURSORSETPROP().
DefaultValue
The DefaultValue property allows you to set the default value of a field when a record is added.
Some developers believe that default values and rules should exist on the back end so they are
under the control of the database. Others believe that default values should be done on the front
end to provide immediate feedback to users rather than waiting for a round trip to the server.
Still others believe in doing them in both places. If a default value exists in the database, it
should exist in the view, too. That way, your user (and your code) can see it right away.
When you set up a DefaultValue property, it must be delimited in quotes, like this:
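*-- A numeric default needs only one set of quotes; the field is illustrative
DBSETPROP("VOrders.freight", "Field", "DefaultValue", "0")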
If your default value is a string, it must be delimited with two sets of quotes:
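*-- A string default; note the two sets of quotes
DBSETPROP("VCustomers.country", "Field", "DefaultValue", "'USA'")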
RuleExpression
The RuleExpression property, like the DefaultValue property, can be used to help validate data
up front, rather than waiting for a failed update. Rule expressions work like rules for fields in
VFP tables, and the entire expression is delimited with quotes. This line will prohibit the
postalcode field of the VCustomers view from accepting the value of 123:
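DBSETPROP("VCustomers.postalcode", "Field", "RuleExpression", ;
   "postalcode # '123'")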
UpdateName
The UpdateName property is very important for multi-table joins. If a column of the
same name exists in more than one table in a join, its critical that the right field in the
view get to the right column in the table. Include both the table and column name in the
UpdateName property:
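*-- A sketch, using the employeeid column from the join view shown earlier
DBSETPROP("VTerritories.employeeid", "Field", "UpdateName", ;
   "employees.employeeid")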
DataType
The DataType property is one you may find yourself working with a lot because there isn't
exact correspondence between data types in VFP and SQL Server. For example, VFP supports
both date and datetime data types. SQL Server doesnt support a date type, but it has two
datetime types: datetime and smalldatetime, which differ by storage size and precision.
When you create a remote view, VFP will automatically convert SQL Server data types to
FoxPro data types as shown in Table 1.
Table 1. The default data types Visual FoxPro uses for SQL Server data types.
You may use DBSETPROP() to change the data type for any field in a view. For example,
if you would rather work with a date type than a datetime type for a birthdate field, you can
change it like this:
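*-- A sketch; the view and field names are illustrative
DBSETPROP("VEmployees.birthdate", "Field", "DataType", "D")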
You can specify any valid VFP data type just as you would in a VFP CREATE TABLE
statement, including length and precision, as long as the type you specify makes sense. You can't convert a datetime to an integer, for example.
Summary
In this chapter, you learned the basics of creating Visual FoxPro remote views of SQL Server
data. You learned about ODBC DSNs, VFP named connections, connection and view
properties, and data type conversions. In the next chapter, you'll learn about making the
transition from file-server to client/server applications and how to upsize data from VFP to
SQL Server.
Chapter 5
Upsizing: Moving from
File-Server to Client/Server
Surely you've heard the question before. In fact, maybe it is why you are reading this
book. If not, the question is inevitable. What will it take to go client/server? Long ago
one of the authors was asked this question so many times that finally he sat down with
VFP 3, upsized a VFP database to SQL Server 6.5 and started to get his hands dirty
doing client/server work. This is a great way to learn client/server development. So roll
up your sleeves, make a copy of a project you are familiar with and get ready to move
from file-server development to client/server. In this chapter you will learn how to upsize
a VFP database to SQL Server, how to use the upsized data, and some tips on more
easily transitioning from file-server development to client/server.
We didn't invent the term "upsizing" and are not really sure we like it. There is no rule that says a SQL Server database or application is bigger than a FoxPro one. But the term has become so widely used (it even appears in the names of the Upsizing Wizards) that we'll stick with it here. When we discuss upsizing, we're referring to converting a Visual FoxPro
database to a client/server database.
Visual FoxPro ships with two Upsizing Wizards: one for SQL Server and one for Oracle.
As elsewhere in this book, all the examples in this chapter will use Microsoft SQL Server.
Why upsize?
If you have an existing file-server application and database that you wish to convert to
client/server, then upsizing the database may be a good way to start the process. If you have
designed your existing application to use local views, then it is possible that the Upsizing
Wizard will do most of the work necessary to make the conversion. If your application accesses
the tables directly rather than using views, then you have a lot more work to do. Even so,
upsizing is still a good place to start, as it gets you an instant copy of the database in SQL
Server so that you can begin working with it quickly.
On the other hand, if you are developing a new client/server application, it is better to use
the tools designed for SQL Server (such as Enterprise Manager, Visual InterDev or Access
2000) to create a new database directly in SQL Server, rather than develop the database first in
VFP and then upsize. This two-step process, called "prototyping locally and deploying remotely," was a more reasonable approach with SQL Server 6.x because the 6.x versions were
very difficult to deploy on laptops or small installations for demos or prototypes. By
comparison, the newer versions of SQL Server (7.x and 2000) can easily be installed and run
on laptops and other small machines. Additionally, with MSDE, a prototype can be deployed
royalty-free without the need for a complete SQL Server installation. (For more on MSDE, see
Chapter 7, Downsizing).
The best reason for upsizing a VFP database is to learn to use SQL Server with a database
with which you are already familiar. After upsizing a database, you will have a VFP version
and a SQL Server version containing the same data. You can easily work with both in order to
get the feel for SQL Server. Look at the data types in the two databases, learn about SQL
Server indexes, see how referential integrity is handled, and compare database tools. It is often
much easier to work with data you know rather than with simplified, sample databases, such as
Pubs or Northwind, which are included with SQL Server.
Despite the preference for working with familiar data rather than sample databases, we will
use the VFP Tastrade database for the examples in this chapter.
Figure 1. The ODBC Data Source Administrator dialog showing a System DSN for
upsizing the Tastrade database.
The Upsizing Wizard must be able to open the database and all of its tables exclusively, so
before running the wizard, close them and ensure they are not in use by anyone else.
To run the SQL Server Upsizing Wizard, select Tools | Wizards | Upsizing on the Visual
FoxPro menu and then, in the resulting dialog, choose the SQL Server Upsizing Wizard. The
first step of the wizard will ask you which database to upsize. Figure 2 shows the first step with
the Tastrade database selected.
Figure 2. Step 1 of the SQL Server Upsizing Wizard showing the Tastrade database
selected for upsizing.
Note here that if the database is already open, you will be warned that the Upsizing Wizard
requires exclusive access to the database. Also note that the VFP database is referred to
throughout the wizard as the local database. Clicking the Next button here will take you to
Step 2, shown in Figure 3.
Figure 3. Step 2 of the SQL Server Upsizing Wizardselecting an ODBC data source
to use for upsizing and for remote views created by the Upsizing Wizard.
Step 3 of the Upsizing Wizard allows you to select which tables to upsize. By default, none
are selected, but you may choose any or all of the tables. In Figure 4, you can see that we have
selected to upsize all tables.
Figure 4. Step 3 of the SQL Server Upsizing Wizard. All the tables in the
Tastrade database have been selected for upsizing by moving them to the right-
hand list box.
Simple stuff so far, right? Well, here's where it starts to get more complicated. Step 4,
illustrated in Figure 5, allows you to map the data type of each column in each VFP table
to a SQL Server data type, to set up timestamp columns, and to use SQL Servers identity
feature for columns. You select the table in the Table drop-down list, and all the columns
will be shown in the grid below. Note that any table that has a memo field will be marked for
a timestamp column. The timestamp column will not appear in the grid but will be named
timestamp_column when upsized. You have the option at this time to add timestamp columns
to other tables, or to remove the timestamp columns from those that already have them.
However, if you plan to replicate the database, you must remove the timestamp columns, as
they are not supported in replicated databases.
You can choose to use an identity column for the table so that SQL Server can
automatically create unique integer values suitable for use as primary keys by checking
the Identity column check box. The IDENTITY property is described in greater detail
in Chapter 3, Introduction to SQL Server 7.0. Identity columns will be named
identity_column, and the seed value and increment will both be set to one. Before using
identity columns, read Chapter 9, Some Design Issues for C/S Systems, which describes
some gotchas.
Figure 5. Step 4 of the SQL Server Upsizing Wizard showing the data type mapping
from the VFP table to the SQL Server table. The Timestamp column check box is
checked, specifying that a timestamp column will be created for this table, though it
does not appear in the list of column names.
The final task in Step 4 is to set the data types for each column in each table. In most cases,
the default data types will be adequate, but there may be times when you wish to change them.
For example, the VFP numeric data type is mapped by the Upsizing Wizard to the SQL Server
float data type, rather than numeric, and the VFP character type is always mapped to the SQL
Server char, while there may be times when it is preferable to use varchar. Table 1 displays the
default data type conversions used by the Upsizing Wizard.
Table 1. How the Upsizing Wizard maps Visual FoxPro data types to SQL Server
data types.
You should be careful when using varchar data types because they can cause performance
problems that can compromise the scalability of your database. Visual FoxPro uses fixed-length
columns, which allow the database engine to make assumptions about where the columns are
stored, thus permitting fast access. In a table that has variable length columns, the database
engine must store additional bytes to describe the length of each variable length column, which
forces the database engine to work harder to retrieve and write data.
Retrieving data does not cause a major performance problem unless there are many
variable length columns in each row and there are a significant number of rows.
However, performance problems are more likely when writing data. When you insert a
row, SQL Server places it in an existing page that has room for the data. (If there is no such
page, SQL Server creates a new page and inserts the row there.) If you update an existing row
and add data to a variable length column, the row is now longer. If the row no longer fits in the
same page, SQL Server moves the row to a page that has enough space, or creates a new page if
necessary. This activity creates a considerable amount of disk I/O. In a high-transaction
environment, this overhead can cause performance problems.
One particular data type to pay attention to is VFP's date type. Although there are multiple datetime types, there is no date type in SQL Server! If you use the FoxPro date type, then whether you like it or not, the SQL Server type will always be a datetime. If you use VFP remote views, then the best way to deal with this without changing your code is to use DBSETPROP() to change the data type of the view field to date, as in the following line of code that changes the order_date field's data type to date:
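*-- Assuming a remote view named VOrders over the upsized orders table
DBSETPROP("VOrders.order_date", "Field", "DataType", "D")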
In Step 5 of the Upsizing Wizard, you select the database in which the upsized tables will
reside. For some reason, the default is to dump everything into the master database. Be very
careful not to click past this step without changing this. You definitely do not want to put your
data in the master database. In Figure 6, the New option has been selected, and the new
database will be named Tastrade.
Steps 6 and 7 are skipped when you run the wizard against SQL Server 7 databases, as
they only apply to version 6.5. Step 6 enables you to select the database device for a new
database, but devices have disappeared in SQL Server 7. Similarly, Step 7 is used to specify the
device for the transaction log of the new database.
Step 8 is a big one, with lots of important decisions to make. Here you will specify which
table attributes to upsize, whether to upsize data or just structure, how to deal with referential
integrity, and what changes to make to the local DBC. The Upsizing Wizard upsizes indexes,
defaults, relationships, relational integrity and validation rules.
Note that the Upsizing Wizard cannot upsize triggers and stored procedures, which contain
procedural code, because SQL Server does not support VFP procedural code.
Also, because of differences in the way expressions are handled in VFP and SQL Server,
some of the features of your VFP database will not be upsized. The effects of these differences
are described in the following sections.
Figure 6. Step 5 of the SQL Server Upsizing Wizard. A new database named Tastrade
will be created rather than the default, which is to dump all the new tables into the
master database. Don't overlook this step!
Indexes
Although there are many similarities between indexes in VFP and SQL Server (in both
products, indexes can be used for primary keys, to enforce non-primary-key uniqueness
and to optimize queries), there are numerous differences between indexes in the two
products. Table 2 shows how the Upsizing Wizard maps VFP index types to SQL Server
index types.
Table 2. Visual FoxPro index types and how the Upsizing Wizard maps them to SQL
Server index types.
SQL Server indexes are covered in greater detail in Chapter 3, Introduction to SQL
Server 7.0, but here are a few things about them to keep in mind regarding upsizing.
First, SQL Server indexes are on columns only, never on expressions or UDFs, and
indexes are always ascending. Therefore, any indexes that contain expressions such as NOT or
UDFs or are descending will not be correctly upsized. Only the column names of the indexes
will be upsized, not the expressions, which is probably not what you expected.
Second, the physical order of a table is determined by the clustered index, only one of
which, for obvious reasons, is allowed per table. This is different from a VFP primary index in
that the physical order of a VFP table is not changed by the value of the column in the key for
that index, but it is changed in a SQL Server table. If a clustered index exists on a table, then a
SELECT with no ORDER BY clause will return records in the clustered index order; if no
clustered index exists, then results are returned in an unpredictable order.
Finally, there are no SQL Server indexes similar to VFP's so-called UNIQUE indexes.
Though these will be upsized to non-clustered indexes in SQL Server, there is no uniqueness
to them.
By default, the VFP tag names will be retained for the index names when upsized.
However, if a tag name is a reserved word in SQL Server, then an underscore will be appended
to the end of the name. For example, a tag named "level" would become an index named "level_" after upsizing.
Defaults
Defaults aren't handled quite the same way in SQL Server and Visual FoxPro. In a VFP
database, a default expression is assigned individually to a field. In SQL Server, defaults are
handled either with constraints or with expressions that are created and then bound to a field. In
this way, fewer expressions need to be created, as it is likely that multiple fields will share a
default expression.
The Upsizing Wizard will create a SQL Server default for every field with a default
expression unless the default expression is zero. If one or more fields have a zero default, then
the Upsizing Wizard will create a default called UW_ZeroDefault and will bind it to each field
that needs it. This default is also used for all VFP logical fields, which are upsized to SQL
Servers bit data type and bound to the UW_ZeroDefault default unless the logical field in the
local database has a default setting the value to .T., in which case a default is created that sets
the value to 1.
The Upsizing Wizard names defaults by using the prefix Dflt_ plus the table name
and field name separated by an underscore. Therefore, a default for detail.order_date would
be named Dflt_detail_order_date. Names longer than SQL Server's limit of 30 characters are truncated.
Expression mapping between VFP and SQL Server is illustrated in Table 3. The following
expressions are the same in both VFP and SQL Server and require no conversion by the
Upsizing Wizard:
CEILING( )
LOG( )
LOWER( )
LTRIM( )
RIGHT( )
RTRIM( )
SOUNDEX( )
SPACE( )
STR( )
STUFF( )
UPPER( )
Relationships
SQL Server 7 has two different ways of handling relationships and referential integrity:
triggers and declarative referential integrity constraints. The Upsizing Wizard can upsize the
referential integrity constraints from a VFP database using either triggers or declarative
referential integrity.
Figure 7 shows the default settings for upsizing, which is to not use declarative referential
integrity. If you choose this option, then the Upsizing Wizard will write triggers that duplicate
the functionality of referential integrity in Visual FoxPro. Table 4 shows how VFP referential
integrity is upsized when you choose this option.
Table 4. Mapping by the SQL Server Upsizing Wizard of Visual FoxPro referential
integrity to SQL Server triggers.
When the Upsizing Wizard creates triggers for referential integrity, it names them by using
the prefix Trig, followed by the letter D for DELETE triggers, I for INSERT triggers or U for
UPDATE triggers, followed by an underscore. The table name follows the underscore. So a
DELETE trigger on the employee table would be named TrigD_Employee.
If you check the Use declarative RI check box in Step 8, then no triggers will be created.
Instead, the Upsizing Wizard will use declarative referential integrity. Declarative referential
integrity, discussed in Chapters 1 and 3, prevents any changes from occurring that would
break the reference and is equivalent to Restrict constraints in VFP. Declarative referential
integrity is a part of the schema rather than a trigger. Without the option of declarative
referential integrity, most SQL Server 7 DBAs would prefer creating a stored procedure for
deleting child records rather than relying on triggers for cascading deletes because the triggers
can create performance issues.
Validation rules
The Upsizing Wizard treats rules much like defaults: a rule object is created and then bound to
a column or data type. This reduces the number of rules if the same rule is required for multiple
columns or types. An example might be the following rule, which prevents entry of values less
than 1,000 or greater than 100,000:
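-- A sketch; the rule name is illustrative
CREATE RULE range_rule
AS @value >= 1000 AND @value <= 100000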
Then the rule can be bound to a column by using a system stored procedure called
sp_bindrule:
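-- A sketch; the table and column names are illustrative
EXEC sp_bindrule 'range_rule', 'orders.order_amount'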
The SQL Server Upsizing Wizard does not upsize VFP rules into SQL Server rules,
though. Instead, it writes a trigger for the column with the rule, and the trigger calls a stored
procedure that enforces the rule. For example, in the Tastrade database, the rule for the
order.deliver_by column is converted to the following stored procedure:
IF @status='Failed'
RETURN
The Upsizing Wizard also creates update and insert triggers TrigI_Orders and TrigU_Orders for the orders table that, in turn, call the vrf_orders_deliver_by stored procedure
and pass the appropriate parameter. Although this is a rather unusual way to implement rules, it
certainly works.
The naming convention for the triggers is the same as that defined previously for triggers
created for referential integrity. For the stored procedures for field rules, the prefix vrf_
(validation rule field) is concatenated with the table name and column name, separated by
underscores. Table validation rules begin with vrt_ (validation rule table), followed by the
table name.
We recommend that you not use the Upsize-only option, because this
option does not provide any opportunity to tweak the upsizing process.
Further, the upsize can take a considerable amount of time to execute if
you have not chosen the structure only, no data option.
Figure 9. The VFP Project Manager showing the project created by the SQL Server
Upsizing Wizard and the tables and reports associated with it.
Report Contents
rpterrs1.frx Errors
rptfiel1.frx Fields
rptinde1.frx Indexes
rptrels1.frx Relations
rpttabl1.frx Tables
rptview1.frx Views
Open the errors report and print it immediately; we guarantee that you will need it. The fields report can be huge, as it details every field in the database, along with pre- and post-conversion data types, defaults, rules and so forth. Even with the miniature Tastrade database,
this report runs 24 pages. If some field-level object did not correctly upsize, then the report will
note the errors associated with it. We believe that it is better to start with the errors report in the
first place. If something in the errors report needs further explanation, then open the fields
report for preview and work your way down to the field in question.
It is interesting to read the errors report, as it gives you a good feel for what kinds of things don't upsize well. That, in turn, will help you learn more about SQL Server. A good example of
this in the Tastrade database is the failure of many field validation rules to upsize. The most
common reason in this case is that many rules use the VFP EMPTY() function, which cannot
be upsized to SQL Server. Another good example is the orders.order_number default, which
calls a UDF called newid(). The Upsizing Wizard is unaware that the newid() function already
exists in SQL Server 7, and it attempts to upsize an illegal call to that function.
You also may find many errors in views. Typically, these errors are caused by differences
in SQL syntax between SQL Server and VFP and are relatively easy to fix.
The tables in the project are used for creating the reports, with the exception of
sql_uw.dbf. This table contains one row with one column, a memo field containing the T-SQL
script generated by the Upsizing Wizard. This table will exist only if you chose to save the
generated SQL in the last page of the wizard. This script can be quite useful in helping you
learn SQL Server. The script can even be used for deploying a system. See Chapter 10,
Application Distribution and Managing Updates, for more information on using scripts to
deploy databases.
If you have never used GenDBC, then this is a good time to become familiar with it.
GenDBC is a program that is distributed with VFP and can be found in the HOME() + "tools\gendbc" directory. It creates a PRG that can recreate the structure of a VFP database.
To run it, simply execute the following code in the Command Window:
DO (HOME() + "tools\gendbc\gendbc.prg")
We use GenDBC a lot, and not just because we would rather work with code for views
than the visual tools. Many times, you will have to maintain your views through code, as the
VFP View Designer simply will not allow you to edit many types of complex remote views.
You can visually create the views that you want (and we certainly recommend doing so where
possible), but it is possible that when you try to edit it, you will receive an error.
GenDBC to the rescue! When you run GenDBC, every database object is recreated in
code. It will create a function for each view, table and relation, and another function to generate
the local referential integrity code. Table 6 lists the functions created and their use. If, for
example, your DBC has a view named Category, then a function will be generated named
MakeView_Category.
Table 6. The functions created by GenDBC.
Function Purpose
MakeView_ Recreate local and remote view.
MakeTable_ Recreate VFP table.
MakeRelation_ Recreate a VFP relation.
MakeRI_ Recreate the relational-integrity code.
At the top of the generated program is a set of calls to each of the functions generated.
Figure 10 shows the VFP Procedures and Functions dialog for the PRG generated by
GenDBC.prg for the VFP database after upsizing Tastrade. Take a close look at the first three.
Table 7 presents descriptions of each of these three functions.
Table 7. The three views GenDBC generated for the category table after upsizing.
Function Purpose
MakeView_CATEGORY Remote view of the SQL Server table, created by
the Create remote views on tables option in the
Upsizing Wizard.
MakeView_CATEGORY_LISTING Existing local view, redirected to the SQL Server tables,
rather than the VFP tables.
MakeView_CATEGORY_LISTING_LOCAL Existing local view, renamed by appending _LOCAL to
the end.
Figure 10. The Procedures and Functions dialog for the VFP editor showing some of
the procedures created by GenDBC for the upsized Tastrade.DBC. Note the grouping
of views in threes: category was created by the Upsizing Wizard from the category
table; category_listing was created by the Upsizing Wizard by converting the local
view; and category_listing_local is the renamed local view.
What to do with these views? Look at the last view in Table 7; though it is helpful to have the old local view available to compare the results to the new remote view, you won't be deploying the local database. Therefore, you'll be deleting this one eventually. Regardless, here is the code for it:
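*-- A sketch; the actual column list in Tastrade will differ
CREATE SQL VIEW CATEGORY_LISTING_LOCAL ;
   AS SELECT * FROM tastrade!category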
The first view in the list certainly isn't needed either. However, if you will be reworking the system, you might want to modify this view a bit and keep it around. You will need to modify this view because, in its current form, it is not parameterized and will return all records in the table. Here is the code for the non-parameterized view created by the Upsizing Wizard:
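*-- A sketch; the wizard actually enumerates every column rather than using *
CREATE SQL VIEW CATEGORY ;
   REMOTE CONNECTION Tastrade ;
   AS SELECT * FROM dbo.category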
This view can be quite useful if it is parameterized to return only a single record:
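*-- The same view, parameterized on the primary key; the parameter
*-- and key names are assumptions
CREATE SQL VIEW CATEGORY ;
   REMOTE CONNECTION Tastrade ;
   AS SELECT * FROM dbo.category ;
   WHERE category.category_id = ?cCategoryID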
The second view, CATEGORY LISTING, is the one that most closely matches the original
local view. Here is the original local view:
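*-- A sketch of the original local view; details will differ in Tastrade
CREATE SQL VIEW CATEGORY_LISTING ;
   AS SELECT * FROM tastrade!category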
Except for the name, you can see that it is identical to the original local view. Here is the
new view that has been redirected to the remote tables:
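*-- The redirected version; the same SELECT, now against the SQL Server table
CREATE SQL VIEW CATEGORY_LISTING ;
   REMOTE CONNECTION Tastrade ;
   AS SELECT * FROM dbo.category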
In Figure 10, you can see this pattern of three views per table repeated over and over again.
In Figure 11 you can see that all the local tables are still there, too, but that they've been renamed by appending _LOCAL to the table name.
Figure 11. The Procedures and Functions dialog for the VFP editor showing local
table generation procedures created by GenDBC for the upsized Tastrade.DBC.
So, what is the best way to deal with all this? We suggest opening the PRG and commenting out the function calls you don't need. First get rid of all MakeTable, MakeRelation and MakeRI calls. You aren't going to be using local data, so why bother keeping those around? Also, comment out all the MakeView_..._LOCAL calls because, again, there is no local data.
That leaves the MakeView calls that create the remote views. Here we recommend being
selective. Go through each one to decide whether you can use the view. If you're not sure, keep
the view because it is easier to delete a view later than to recreate it.
Once you've commented out all the unnecessary calls, move the PRG to a new, clean directory, set that directory as your default directory with SET DEFAULT and execute the modified PRG. Why? Because now it will create a new DBC with only those objects you didn't comment out. Then run GenDBC again. Now your generated PRG will be much smaller, as it no longer contains any of the functions you didn't call.
For the remainder of your development on the project, this generated file will be your
master for the DBC. You'll check it, and not the DBC, into your source control program. You'll modify it, not the DBC, when you make changes. If you need to modify a view, work on
the code in the PRG and then simply call the function you worked on. Sometimes, when
wholesale changes have been made, you might want to simply delete the DBC from the disk
and run the entire generated PRG to recreate the DBC from scratch.
When you create new views, you should also do so in code in the originally generated
PRG. You might look at the PRG and think that this could be a daunting task. After all, there
are four DBSETPROP() calls for every field in every view! However, not all of those calls are
necessary. Table 8 shows the calls to DBSETPROP() for each view and a brief description of
when it is required.
The only property that typically needs to be changed for most fields is Updatable, which,
by default, is set to .F. Instead of setting this property for each field, you can simply let a
procedure set all of them to .T. for you, and then you can set individual fields to .F. if
necessary. Here's some code that will do that for you:
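PROCEDURE SetFieldsUpdatable(tcViewName)
   *-- A sketch; assumes the DBC containing the view is open and current
   LOCAL lnI, lnFieldCount
   LOCAL ARRAY laFields[1]
   *-- Open the view without fetching data, just to read its field list
   USE (tcViewName) IN 0 ALIAS __vtemp NODATA
   SELECT __vtemp
   lnFieldCount = AFIELDS(laFields)
   USE IN __vtemp
   FOR lnI = 1 TO lnFieldCount
      DBSETPROP(tcViewName + "." + laFields[lnI, 1], ;
         "Field", "Updatable", .T.)
   ENDFOR
ENDPROC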
You will need to set the KeyField property to .T. for at least one column per table involved
in the join. Typically there will be one or two fields in a view that need to be set to .T. As the
default is .F., it is simple to write the one or two lines necessary to do this.
You may find it worthwhile to replace all the property calls that were generated by
GenDBC. Why? If your company's developers access a source-control database via the Internet and must sometimes use slow dial-up connections, you'll find that reducing the size of the file really helps speed up this process. Also, by using a PRG instead of a DBC, you'll dramatically
reduce the amount that must be transferred over a slow connection.
Summary
In this chapter you learned about the SQL Server Upsizing Wizard in Visual FoxPro, how it
works, what it does and how to deal with its results. Hopefully, even if you don't use the Upsizing Wizard, you've picked up some tips that will help you get going in client/server
development. In Chapter 7, you will learn about downsizing.
Chapter 6
Extending Remote Views
with SQL Pass Through
As you learned in Chapter 4, remote views make it very easy to access remote data.
Remote views are actually wrappers for SQL pass through, handling the tasks of
managing connections, detecting changes, formatting UPDATE commands and
submitting them to the server. In this chapter, we're going to take an in-depth look at SQL pass through. We'll see how to connect to the server, submit queries, manage
transactions and more.
There is no doubt about it, remote views offer the easiest way to access remote data. However,
with ease of use comes less flexibility. Visual FoxPro contains another powerful mechanism for
manipulating remote data called SQL pass through (SPT). SQL pass through provides all the
functionality of remote views and more.
With SQL pass through you can:
Execute queries other than SELECT.
Access back-end-specific functionality.
Fetch multiple result sets in a single call.
Execute stored procedures.
However:
There is no graphical user interface.
You must manually manage connections.
Result sets are read-only by default and must be configured to be updatable.
The flexibility that SQL pass through allows makes it a powerful tool. It is important for
client/server developers to understand it thoroughly.
Connecting
Before you can submit commands via SQL pass through, you must establish a connection with either SQLConnect() or SQLStringConnect(); both return a connection handle. The first way to use SQLConnect() is to supply the name of an ODBC data source, along with a login name and password:
LOCAL hConn
hConn = SQLConnect("ODBCPubs", "sa", "")
The second way to use SQLConnect() is to supply the name of a Visual FoxPro
connection that was created using the CREATE CONNECTION command. As you saw in
Chapter 4, Remote Views, the CREATE CONNECTION command stores the metadata that
Visual FoxPro needs to connect to a remote data source. The following example creates a
Visual FoxPro connection named VFPPUBS and then connects to the database described by
the connection:
LOCAL hConn
CREATE DATABASE cstemp
CREATE CONNECTION vfppubs ;
DATASOURCE "ODBCPubs" ;
USERID "sa" ;
PASSWORD ""
hConn = SQLConnect("vfppubs")
The second connection function, SQLStringConnect(), takes an ODBC connect string instead of a data source or connection name. Table 1 lists some common options used in SQL Server connect strings.
Table 1. Common options used in SQL Server ODBC connect strings.
Option Description
DSN References an ODBC DSN.
Driver Specifies the name of the ODBC driver to use.
Server Specifies the name of the SQL Server to connect to.
UID Specifies the login ID or username.
PWD Specifies the password for the given login ID or username.
Database Specifies the initial database to connect to.
APP Specifies the name of the application making the connection.
WSID The name of the workstation making the connection.
Trusted_Connection Specifies whether the login is being validated by the Windows NT Domain.
Not all of the options listed in Table 1 have to be used for each connection. For
instance, if you specify the Trusted_Connection option and connect to SQL Server using NT
Authentication, there's no reason to use the UID and PWD options since SQL Server would
invariably ignore them.
The following code demonstrates some examples of using SQLStringConnect().
From this point forward, substitute the name of your server for the string
<MyServer> in code examples.
LOCAL hConn
hConn = SQLStringConnect("Driver=SQL Server;Server=<MyServer>;" + ;
   "UID=sa;PWD=;Database=pubs")
hConn = SQLStringConnect("DSN=ODBCPubs;UID=sa;PWD=;Database=pubs")
hConn = SQLStringConnect("DSN=ODBCPubs;Database=pubs;Trusted_Connection=Yes")
Visual FoxPro returns error 1526 for all errors against a remote data
source. The fifth element of the array returned by AERROR() contains the
remote data source-specific error.
The following example attempts to connect with an invalid login ID and uses AERROR() to
report the failure:
#define MB_OKBUTTON 0
#define MB_STOPSIGNICON 16
LOCAL hConn
hConn = SQLConnect("ODBCPubs", "bad_user", "")
IF (hConn < 0)
	LOCAL ARRAY laError[1]
	AERROR(laError)
	MESSAGEBOX( ;
		laError[2], ;
		MB_OKBUTTON + MB_STOPSIGNICON, ;
		"Error " + TRANSFORM(laError[5]))
ENDIF
Figure 1. The error message returned from SQL Server 7.0 when trying to establish a
connection using an unknown login.
Disconnecting
As mentioned previously, the developer is responsible for connection management when using
SQL pass through. It is very important that a connection be released when it is no longer
needed by the application because connections consume valuable resources on the server, and
the number of connections may be limited by licensing constraints.
You break the connection to the remote data source using the SQLDisconnect() function.
SQLDisconnect() takes one parameter, the connection handle created by a call to either
SQLConnect() or SQLStringConnect(). SQLDisconnect() returns a 1 if the connection was
correctly terminated and a negative value if an error occurred.
The following example establishes a connection to SQL Server 7.0 and then drops
the connection:
LOCAL hConn,lnResult
*hConn = SQLStringConnect("Driver=SQL Server;Server=<MyServer>;" + ;
*	"UID=sa;PWD=;Database=pubs")
hConn = SQLConnect("ODBCPubs", "sa", "")
IF (hConn > 0)
MESSAGEBOX("Connection established")
lnResult = SQLDisconnect(hConn)
IF lnResult < 0
MESSAGEBOX("Disconnect failed")
ENDIF && lnResult < 0
ENDIF && hConn > 0
To disconnect all SQL pass through connections, you can pass a value of
zero to SQLDisconnect().
Accessing metadata
VFP has two SQL pass through functions that return information about the database youve
connected to. The first, SQLTables(), returns a result set containing information about the
tables and views in the database. The second, SQLColumns(), returns a result set containing
information about a specific table or view.
Table 2. The columns of the result set returned by SQLTables().
Column	Description
Table_cat	Object qualifier. In SQL Server 7.0, Table_cat contains the name of the database.
Table_schema	Object owner.
Table_name	Object name.
Table_type	Object type (TABLE, VIEW, SYSTEM TABLE or another data-store-specific identifier).
Remarks	A description of the object. However, SQL Server 7.0 does not return a value for Remarks.
Table 3. A description of the columns returned by SQLColumns() with the default
FOXPRO option.
Column	Description
Field_name	Column name
Field_type	Visual FoxPro data type
Field_len	Column length
Field_dec	Number of decimal places
Table 4. A description of the columns returned from SQL Server using SQLColumns()
and the NATIVE option.
Column Description
Table_cat SQL Server database name
Table_schema Object owner
Table_name Object name
Column_name Column name
Data_type Integer code for the ODBC data type
Type_name SQL Server data type name
Column_size Display requirements (character positions)
Buffer_length Storage requirements (bytes)
Decimal_digits Number of digits to the right of the decimal point
Num_prec_radix Base for numeric data types
Nullable Integer flag for nullability
Remarks SQL Server always returns NULL
Column_def Default value expression
SQL_data_type Same as Data_type column
SQL_datetime_sub Subtype for datetime data types
Character_octet_length Maximum length of a character or integer data type
Ordinal_position Ordinal position of the column (starting at 1)
Is_nullable Nullability indicator as a string (YES | NO)
SS_data_type SQL Server data type code
Figure 4. A subset of the columns returned by SQLColumns() with the NATIVE option.
(See Table 4 for a complete list of columns.)
Submitting queries
Most interactions with the remote server will be through the SQLExec() function. SQLExec() is
the workhorse of the SQL pass through functions. You'll use it to submit SELECT, INSERT,
UPDATE and DELETE queries, as well as calls to stored procedures. If the statement is
successfully executed, SQLExec() returns a value greater than zero that represents the
number of result sets returned by the server (more on multiple result sets later). A negative
return value indicates an error. As discussed previously, you can use AERROR() to retrieve
information about the error. It's also possible for SQLExec() to return a value of zero (0), but
only if queries are being submitted asynchronously. We'll look at asynchronous queries in a
later section.
SQLExec() returns its results in Visual FoxPro cursors. As with the SQLTables()
and SQLColumns() functions, the name of the result set will be SQLRESULT unless another
name is specified in the call to SQLExec().
For example, the following call to SQLExec() runs a SELECT query against the authors
table in the pubs database.
From this point forward, examples may not include the code that
establishes the connection.
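A minimal call looks something like this (assuming hConn holds a valid connection handle):
lnResult = SQLExec(hConn, "SELECT * FROM authors")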
To specify the name of the result set rather than accept the default, SQLRESULT,
specify the name in the third parameter of the SQLExec statement. The following example uses
the same query but specifies that the resultant cursor should be called authors:
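lnResult = SQLExec(hConn, "SELECT * FROM authors", "authors")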
Figure 5 shows the Data Session window with the single authors cursor open.
Figure 5. The Data Session window showing the single authors cursor.
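Retrieving multiple result sets
A single call to SQLExec() can submit more than one query at a time. A minimal sketch (the
two queries here are illustrative):
LOCAL lnResults
lnResults = SQLExec(hConn, ;
	"SELECT * FROM authors;" + ;
	"SELECT * FROM titles")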
SQLExec() returns a value (stored in lnResults in this example) containing the number of
cursors returned.
Now you might be wondering what names Visual FoxPro assigns to each cursor. In the
preceding example, the results of the first query (SELECT * FROM authors) will be placed into
a cursor named Sqlresult. The results from the second query will be placed into a cursor named
Sqlresult1. Figure 6 shows the Data Session window with the two cursors open.
Figure 6. The Data Session window showing the two cursors Sqlresult and Sqlresult1.
Visual FoxPro's default behavior is to wait until SQL Server has returned all the result sets
and then return them to the application in a single action. Alternatively, you can tell
Visual FoxPro to return the result sets one at a time, as each one becomes available. This
behavior is controlled by a connection property, BatchMode. If BatchMode is True (the
default), Visual FoxPro returns all result sets at once; if False, Visual FoxPro returns the
result sets one at a time.
Use the SQLSetProp() function to manipulate connection settings. The following example
changes the BatchMode property to False, causing Visual FoxPro to return result sets one at
a time:
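lnResult = SQLSetProp(hConn, "BatchMode", .F.)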
When BatchMode is False, Visual FoxPro automatically returns only the first result set.
The developer must request that Visual FoxPro return additional result sets by calling the
SQLMoreResults() function. SQLMoreResults() returns zero (0) if the next result set is not
ready, one (1) if it is ready, two (2) if there are no more result sets to retrieve, or a negative
number if an error has occurred.
The following example demonstrates the SQLMoreResults() function. In this example,
we're going to retrieve information about a specific book by submitting queries against the
titles, authors, titleauthor and sales tables.
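A sketch of the submission (BatchMode is assumed to be .F. on this connection, and the title
ID is illustrative):
lnResult = SQLExec(hConn, ;
	"SELECT * FROM titles WHERE title_id = 'BU1032';" + ;
	"SELECT * FROM titleauthor WHERE title_id = 'BU1032';" + ;
	"SELECT * FROM authors;" + ;
	"SELECT * FROM sales WHERE title_id = 'BU1032'")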
DO WHILE .T.
lnResult = SQLMoreResults(hConn)
DO CASE
CASE lnResult < 0
*-- Error condition
CASE lnResult = 0
*-- No result sets are ready
CASE lnResult = 2
*-- All result sets have been retrieved
EXIT
OTHERWISE
*-- Process retrieved result set
ENDCASE
ENDDO
It is important to realize that SQLMoreResults() must be called repeatedly until it returns
a two (2), meaning there are no more result sets. If any other SQL pass through function is
issued before SQLMoreResults() returns 2, Visual FoxPro will return the error shown in
Figure 7.
The preceding statement is not entirely true. You can issue the
SQLCancel() function to terminate any waiting result sets, but we haven't
introduced it yet.
Figure 7. The results of trying to issue another SQL pass through function while
processing result sets in non-batch mode.
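SQLExec() is not limited to SELECT statements; it submits data modification queries just as
readily. Consider an UPDATE such as the following (the values are illustrative):
lnResult = SQLExec(hConn, ;
	"UPDATE authors SET au_lname = 'White' WHERE au_id = '172-32-1176'")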
In this example, SQLExec() executes a data modification query rather than a SQL
SELECT statement. Therefore, it returns a success indicator (1 for successful execution or a
negative number in the event of an error), rather than the number of result sets. If the query
successfully updates zero, one, or one million rows, SQLExec() will return a value of one. (A
query is considered successful if the server can parse and execute it.)
To determine the number of rows updated, use the SQL Server global variable
@@ROWCOUNT, which performs the same function as Visual FoxPro's _TALLY
variable. After executing a query, @@ROWCOUNT contains the number of rows affected by
that query. The value of @@ROWCOUNT can be retrieved by issuing a SELECT query:
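lnResult = SQLExec(hConn, "SELECT @@ROWCOUNT AS nRows", "curRows")
? curRows.nRows		&& number of rows affected by the previous query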
Parameterized queries
You previously read about using parameters in views to filter the query. You can use this same
mechanism with SQL pass through.
Using a parameterized query might seem unnecessary at first. After all, since you pass the
query as a string, you have complete control over its creation. Consider the following example:
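LOCAL lcLastName
lcLastName = "White"
lnResult = SQLExec(hConn, ;
	"SELECT * FROM authors WHERE au_lname = '" + lcLastName + "'")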
Creating the query using the technique from the previous example works in most
situations. However, there are situations where using a parameterized query makes more sense.
For example, when different back ends impose different requirements for specifying literal
values, it is easier to let Visual FoxPro handle the conversion. Consider dates. Visual
FoxPro requires date literals to be specified in the form {^1999-12-31}. SQL Server, on the
other hand, does not recognize {^1999-12-31} as a date literal. Instead, you would have to use
a literal such as '12/31/1999' or '19991231' (the latter being preferred).
The following code shows how the same query would be formatted for Visual FoxPro and
SQL Server back ends:
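*-- Visual FoxPro back end (table and column names are illustrative)
SELECT * FROM sales WHERE ord_date = {^1999-12-31}
*-- SQL Server back end
SELECT * FROM sales WHERE ord_date = '19991231'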
In this situation, Visual FoxPro converts the search arguments to the proper format
automatically. The following example demonstrates this:
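LOCAL ldOrdDate
ldOrdDate = {^1999-12-31}
lnResult = SQLExec(hConn, "SELECT * FROM sales WHERE ord_date = ?ldOrdDate")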
The preceding query would work correctly against both Visual FoxPro and SQL Server.
There are other data types that also benefit from the use of parameterization. Visual
FoxPro's Logical type vs. SQL Server's bit type is another example: a literal true is
represented in Visual FoxPro as .T., while in Transact-SQL it is 1.
Figure 8 shows the output from the SQL Server Profiler for the first of the following
queries, and Figure 9 shows the output for the second:
LOCAL llTrue,lnResult
lnResult = SQLExec(hConn, "SELECT * FROM authors WHERE contract = 1")
llTrue = .T.
lnResult = SQLExec(hConn, "SELECT * FROM authors WHERE contract = ?llTrue")
There is an important difference between how these two queries were submitted to SQL
Server. As expected, the first query was passed straight through. SQL Server had to parse,
optimize, compile and execute the query. The next time the same query is submitted, SQL
Server will have to parse, optimize and compile the query again before executing it.
The second query was handled quite differently. When Visual FoxPro (actually, ODBC)
submitted the query, the sp_executesql stored procedure was used to identify the search
arguments to SQL Server. The following is an excerpt from the SQL Server Books Online:
SQL Server will take advantage of this knowledge (the search arguments) by caching the
execution plan (the result of parsing, optimizing and compiling a query) instead of discarding
the execution plan, which is the normal behavior. The next time the query is submitted, SQL
Server can reuse the existing execution plan, but with a new parameter value.
There is a tradeoff: calling a stored procedure has a cost, so you should not blindly write
all your queries using parameters. However, the cost is worth incurring if the query is executed
repeatedly. There are no magic criteria to base your decision on, but some of the things to
consider are the number of times the query is called and the length of time between calls.
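Result sets returned by SQL pass through are read-only until a handful of cursor properties
are set with CURSORSETPROP(). A minimal sketch (the column lists are abbreviated):
lnResult = SQLExec(hConn, "SELECT * FROM authors")
= CURSORSETPROP("Tables", "authors", "Sqlresult")
= CURSORSETPROP("KeyFieldList", "au_id", "Sqlresult")
= CURSORSETPROP("UpdatableFieldList", "au_lname, au_fname, contract", "Sqlresult")
= CURSORSETPROP("UpdateNameList", ;
	"au_id authors.au_id, au_lname authors.au_lname, " + ;
	"au_fname authors.au_fname, contract authors.contract", "Sqlresult")
= CURSORSETPROP("SendUpdates", .T., "Sqlresult")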
Each property plays an important role in the creation of the commands sent to the server.
Visual FoxPro will create an INSERT, UPDATE or DELETE query based on the operations
performed on the cursor. The UpdatableFieldList property tells Visual FoxPro which columns
it needs to track changes to. The Tables property supplies the name of the remote table, and the
UpdateNameList property maps each column in the cursor to the corresponding column in the
remote table. KeyFieldList contains a comma-delimited list of the columns that make up the
primary key; Visual FoxPro uses this information to construct the WHERE clause of the query.
The last property, SendUpdates, provides a safety mechanism: unless SendUpdates is set to
true, Visual FoxPro will not send updates to the server.
There are two other properties that you may want to include when making a cursor
updatable. The first, BatchUpdateCount, controls the number of update queries that are
submitted to the server at once. The default value is one (1), but increasing this property can
improve performance. SQL Server will parse, optimize, compile and execute the entire batch of
queries at the same time. The second property, WhereType, controls how Visual FoxPro
constructs the WHERE clause used by the update queries. This also affects how conflicts are
detected. Consult the Visual FoxPro online Help for more information on the WhereType
cursor property.
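SQL pass through can also invoke stored procedures directly. For example (byroyalty is a
stored procedure in the pubs database; the parameter value is illustrative):
lnResult = SQLExec(hConn, "EXECUTE byroyalty 40", "royalty")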
The preceding example uses the Transact-SQL EXECUTE command to call the stored
procedure. You can also call stored procedures using the ODBC escape syntax, as
demonstrated in the following example:
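lnResult = SQLExec(hConn, "{CALL byroyalty (40)}", "royalty")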
Using the ODBC escape syntax offers two small advantages. First, ODBC will
automatically convert the statement to the format required by the back end you are working
with, as long as that back end supports directly calling stored procedures. Second, it is an
alternate way to work with OUTPUT parameters.
Using the ODBC calling convention is slightly different from calling a Visual FoxPro
function, as shown here:
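*-- Calling a native Visual FoxPro function: the result is returned directly
*-- (GetStatus() and p_GetStatus are hypothetical names)
lnStatus = GetStatus()
*-- ODBC escape syntax: the return value binds to a marker before CALL
lnStatus = 0
lnResult = SQLExec(hConn, "{?@lnStatus = CALL p_GetStatus}")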
Output parameters
There is another way to get information from a stored procedure: output parameters. An output
parameter works the same way as a parameter passed to a function by reference in Visual
FoxPro: The stored procedure alters the contents of the parameter, and the new value will be
available to the calling program.
The following Transact-SQL creates a stored procedure in the Northwind database that
counts the quantity sold of a particular product within a specified date range:
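A sketch of such a procedure (the parameter names and the exact SELECT are assumptions
based on the Northwind schema):
CREATE PROCEDURE p_ProductCount
	@ProductID int,
	@StartDate datetime,
	@EndDate datetime,
	@TotalQty int OUTPUT
AS
SELECT @TotalQty = SUM(od.Quantity)
	FROM [Order Details] od
		JOIN Orders o ON od.OrderID = o.OrderID
	WHERE od.ProductID = @ProductID
		AND o.OrderDate BETWEEN @StartDate AND @EndDate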
The stored procedure accepts four parameters: the ID of the product, the start and end
points for a date range, and an output parameter to return the total quantity sold.
The following example shows how to call the stored procedure and pass the parameters:
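A sketch of the call (the parameter values are illustrative; note the ?@ marker and the
OUTPUT keyword for the output parameter):
LOCAL lnProductID, ldStart, ldEnd, lnTotalQty, lnResult
lnProductID = 17
ldStart = {^1996-01-01}
ldEnd = {^1996-12-31}
lnTotalQty = 0
lnResult = SQLExec(hConn, ;
	"EXECUTE p_ProductCount ?lnProductID, ?ldStart, ?ldEnd, " + ;
	"?@lnTotalQty OUTPUT")
? lnTotalQty	&& the total quantity sold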
You can also call the p_ProductCount procedure using ODBC escape syntax, as in the
following code:
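lnResult = SQLExec(hConn, ;
	"{CALL p_ProductCount (?lnProductID, ?ldStart, ?ldEnd, ?@lnTotalQty)}")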
Because SQL Server returns result codes and output parameters in the
last packet sent from the server, output parameters are not guaranteed to
be available until after the last result set is returned from the server; that
is, until SQLExec() returns a one (1) in batch mode or SQLMoreResults()
returns a two (2) in non-batch mode.
Transaction management
A transaction groups a collection of operations into a single unit of work. If any operation
within the transaction fails, the application can cause the data store to undo (that is, reverse) all
the operations that have already been completed, thus keeping the integrity of the data intact.
Transaction management is a powerful tool, and the Visual FoxPro community was
pleased to see its introduction into Visual FoxPro.
In Chapter 3, Introduction to SQL Server 7.0, we looked at transactions within SQL
Server and identified two types: implicit (or Autocommit) and explicit. To review, implicit
transactions are individual statements that commit independently of other statements in the
batch. In other words, the changes made by one statement are not affected by the success or
failure of a statement that executes later. The following example demonstrates transferring
funds from a savings account to a checking account:
lnResult = SQLExec(hConn, ;
	"UPDATE account " + ;
	"SET balance = balance - 100 " + ;
	"WHERE ac_num = 14356")
lnResult = SQLExec(hConn, ;
	"UPDATE account " + ;
	"SET balance = balance + 100 " + ;
	"WHERE ac_num = 45249")
Even if the two queries are submitted in the same SQLExec() call, as in the following
example, the two queries commit independently of each other:
lnResult = SQLExec(hConn, ;
	"UPDATE account " + ;
	"SET balance = balance - 100 " + ;
	"WHERE ac_num = 14356" + ;
	";" + ;
	"UPDATE account " + ;
	"SET balance = balance + 100 " + ;
	"WHERE ac_num = 45249")
Each query is independent of the other. If the second fails, nothing can be done to undo the
changes made by the first except to submit a correcting query.
On the other hand, an explicit transaction groups multiple operations and allows
the developer to undo all changes made by all operations in the transaction if any one
operation fails.
In this section, we're going to look at the SQL pass through functions that manage
transactions: SQLSetProp(), SQLCommit() and SQLRollback().
SQL pass through doesn't have a function to start an explicit transaction. Instead,
explicit transactions are started by setting the connection's Transactions property to a two
(2), or DB_TRANSMANUAL (from FOXPRO.H). The following example shows how to use
the SQLSetProp() function to start a manual (the Visual FoxPro term) or explicit (the SQL
Server term) transaction:
#INCLUDE FOXPRO.H
lnResult = SQLSetProp(hConn, "TRANSACTIONS", DB_TRANSMANUAL)
Enabling manual transactions does not actually start a transaction. The transaction starts
only when the first query is submitted. After that, all queries submitted on the connection
participate in the transaction until it is terminated. You will see exactly how this works in
Chapter 11, "Transactions." Regardless, if everything goes well and no errors
occur, you can commit the transaction with the SQLCommit() function:
lnResult = SQLCommit(hConn)
If something did go wrong, the transaction can be rolled back and all operations reversed
with the SQLRollback() function:
lnResult = SQLRollback(hConn)
Manual transactions can only be disabled by calling SQLSetProp() to set the Transactions
property back to 1 (DB_TRANSAUTO). If you do not reset the Transactions property, the next
query submitted on the connection automatically starts another explicit transaction.
Taking all that into account, the original example can be rewritten as follows:
#INCLUDE FOXPRO.H
LOCAL lnResult
SQLSetProp(hConn, "TRANSACTIONS", DB_TRANSMANUAL)
lnResult = SQLExec(hConn, ;
	"UPDATE account SET balance = balance - 100 WHERE ac_num = 14356")
IF (lnResult = 1)
	lnResult = SQLExec(hConn, ;
		"UPDATE account SET balance = balance + 100 WHERE ac_num = 45249")
	IF (lnResult = 1)
		SQLCommit(hConn)
	ENDIF
ENDIF
IF (lnResult != 1)
	SQLRollback(hConn)
ENDIF
SQLSetProp(hConn, "TRANSACTIONS", 1)
RETURN (lnResult = 1)
The code in the preceding example wraps the UPDATE queries within the explicit
transaction and handles an error by rolling back any changes that may have occurred.
Binding connections
Sometimes it's necessary for two or more connections to participate in the same transaction.
This scenario can occur when dealing with components in a non-MTS environment. To
accommodate this need, SQL Server provides the ability to bind two or more connections
together. Once bound, the connections participate in the same transaction.
If multiple connections participate in one transaction, any of the participating connections
can begin the transaction, and any participating connection can end the transaction.
Connection binding is accomplished by using two stored procedures: sp_getbindtoken and
sp_bindsession. First, execute sp_getbindtoken against the first connection to obtain a unique
identifier (the bind token, as a string) for the connection. Next, pass the bind token to
sp_bindsession, executed against another connection; this binds the two connections. The
following example demonstrates the entire process:
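LOCAL hConn1, hConn2, hConn3, lcToken, lnResult
*-- A sketch: the DSN and login are the same ones used earlier
hConn1 = SQLConnect("ODBCPubs", "sa", "")
hConn2 = SQLConnect("ODBCPubs", "sa", "")
hConn3 = SQLConnect("ODBCPubs", "sa", "")
lcToken = SPACE(255)
lnResult = SQLExec(hConn1, "EXECUTE sp_getbindtoken ?@lcToken OUTPUT")
lnResult = SQLExec(hConn2, "EXECUTE sp_bindsession ?lcToken")
lnResult = SQLExec(hConn3, "EXECUTE sp_bindsession ?lcToken")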
SQLDisconnect(hConn1)
SQLDisconnect(hConn2)
SQLDisconnect(hConn3)
In the example, three connections are established to the server. The first call, to
sp_getbindtoken, obtains the bind token. You must use the ? and @ symbols with the lcToken
variable because the bind token is returned through an OUTPUT parameter. You then pass the
bind token to the second and third connections by calling sp_bindsession.
Asynchronous processing
So far, every query that we've sent to the server was sent synchronously: Visual FoxPro paused
until the server finished processing the query and returned the result set to Visual FoxPro.
There are times, however, when you may not want Visual FoxPro to pause while the query is
running. For example, you may want to provide some feedback to the user to indicate that the
application is running and has not locked up, or you may want to provide the ability to cancel a
query mid-stream. To prevent Visual FoxPro from pausing, submit the query asynchronously.
Just remember that this approach makes the developer responsible for determining when the
query processing is finished.
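In asynchronous mode, you call SQLExec() repeatedly with the same parameters until it
returns a nonzero value. A minimal sketch (assuming hConn is an open connection and
lcQuery holds a long-running query):
lnResult = SQLSetProp(hConn, "Asynchronous", .T.)
lnResult = 0
DO WHILE lnResult = 0
	lnResult = SQLExec(hConn, lcQuery)
ENDDO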
The loop stays engaged as long as SQLExec() returns a zero (0), identifying the query
as still being processed.
In the preceding example, the stored procedure being called does not actually run any
queries. The stored procedure LongQuery simply uses the Transact-SQL WAITFOR command
with the DELAY option to pause for a specific period of time (two minutes in this case) before
proceeding. The code that creates LongQuery is shown here:
lcQuery = ;
[CREATE PROCEDURE longquery AS ] + ;
[WAITFOR DELAY '00:02:00']
*-- hConn should be a connection to the pubs database
lnResult = SQLExec(hConn, lcQuery)
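*-- Put the connection into asynchronous mode and call the procedure;
*-- the WAIT WINDOW message is illustrative
lnResult = SQLSetProp(hConn, "Asynchronous", .T.)
lcQuery = "EXECUTE longquery"
WAIT WINDOW "Running query. Press Esc to cancel..." NOWAIT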
SET ESCAPE ON
ON ESCAPE llCancel = .T.
llCancel = .F.
lnResult = 0
DO WHILE (!llCancel AND lnResult = 0)
lnResult = SQLExec(hConn, lcQuery)
DOEVENTS
ENDDO
WAIT CLEAR
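*-- If the user canceled, terminate the query on the server
IF llCancel
	= SQLCancel(hConn)
ENDIF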
SQLDisconnect(hConn)
If the user presses the Escape key, the ON ESCAPE routine sets the local variable
llCancel to .T., terminating the DO WHILE loop. The next IF statement tests whether the query
was canceled before the results were returned to Visual FoxPro. If so, the SQLCancel()
function is used to terminate the query on the server.
Asynchronous queries are a bit more complicated to code, but they permit the user to
cancel a query, which adds polish to your applications.
The following example will not work correctly if your ODBC DSN is
configured for NT Authentication.
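For instance, try connecting to a DSN without supplying a login ID or password (a sketch; it
assumes the ODBCPubs DSN stores no login information):
hConn = SQLConnect("ODBCPubs")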
You should be prompted with the ODBC connection dialog similar to Figure 11.
Figure 11. Visual FoxPro prompting with the ODBC connection dialog due to missing
login information.
If you execute the following code, you'll get a different result. Visual FoxPro will not
prompt with the ODBC connection dialog, and SQLConnect() will return a -1.
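A sketch (DB_PROMPTNEVER is defined in FOXPRO.H; passing a handle of zero to
SQLSetProp() applies the setting to subsequent connections):
#INCLUDE FOXPRO.H
= SQLSetProp(0, "DispLogin", DB_PROMPTNEVER)
hConn = SQLConnect("ODBCPubs")	&& returns -1 instead of prompting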
It is highly recommended that you always use the never-prompt setting (DB_PROMPTNEVER)
for the DispLogin property, as you should handle all logins to the server through your own
code instead of this dialog.
Refer to the topic Monitoring with SQL Server Profiler in the SQL Server
Books Online for more information on using the SQL Server Profiler.
lnResult = SQLExec(hConn, ;
"UPDATE authors " + ;
"SET au_lname = 'White' " + ;
"WHERE au_id = '172-32-1176'")
Figure 12. The commands captured by SQL Profiler for a simple UPDATE query.
Notice that nothing special was sent to the server: the query was truly passed through
without any intervention by Visual FoxPro. When SQL Server received the query, it proceeded
with its normal process: parse, name resolution, optimize, compile and execute.
Figure 13 shows the commands sent to SQL Server for a query that uses parameters. This
is the same query as before, except this time we're using parameters in place of the literals
'White' and '172-32-1176'.
Figure 13. The commands captured by SQL Profiler for the UPDATE query
with parameters.
This time we see that Visual FoxPro (and ODBC) has chosen to pass the query to SQL
Server using the sp_executesql stored procedure.
There is a benefit to using sp_executesql only if the query will be submitted multiple
times and the only variation is the values of the parameters/search arguments (refer
back to the section "The advantage of parameterization" for a review of the sp_executesql
stored procedure).
Figure 14 shows the commands sent to SQL Server when multiple queries are submitted in
a single call to SQLExec().
lnResult = SQLExec(hConn, ;
"SELECT * FROM authors; " + ;
"SELECT * FROM titles")
Figure 14. The commands captured by SQL Profiler when multiple queries are
submitted in a single call to SQLExec().
As we hoped, Visual FoxPro (and ODBC) submitted both queries to SQL Server in a
single submission (batch). For the price of one trip to the server, we've submitted two queries,
and the only drawback is the funny name that will be given to the results of the second query
(you may want to review the section "Retrieving multiple result sets" for a refresher).
Remote views
Now that we've examined SQL pass through, let's perform the same exercise using remote
views. The code to create the remote view was generated by GENDBC.PRG and is shown here:
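*-- A sketch of GENDBC.PRG output; the property list is abbreviated
CREATE SQL VIEW "v_authors" ;
	REMOTE CONNECT "vfppubs" ;
	AS SELECT * FROM dbo.authors Authors
= DBSETPROP('v_authors', 'View', 'SendUpdates', .T.)
= DBSETPROP('v_authors', 'View', 'Tables', 'dbo.authors')
= DBSETPROP('v_authors.au_id', 'Field', 'KeyField', .T.)
= DBSETPROP('v_authors.au_id', 'Field', 'UpdateName', 'dbo.authors.au_id')
= DBSETPROP('v_authors.au_lname', 'Field', 'Updatable', .T.)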
Figure 15 shows the commands captured by SQL Profiler when the remote view
was opened.
As with the simple SQL pass through query, nothing special is sent to the server.
Figure 16 shows the results of changing a single row and issuing a TABLEUPDATE().
Just like the parameterized SQL pass through, Visual FoxPro (and ODBC) uses
sp_executesql to make the updates. In fact, modifying multiple rows and issuing a
TABLEUPDATE() results in multiple calls to sp_executesql (see Figure 17).
Figure 17. The commands sent to SQL Server when multiple rows of a remote view
are modified and sent to the server with a single call to TABLEUPDATE(.T.).
This is exactly the situation for which sp_executesql was created. The au_lname column of
the first two rows was modified. SQL Server will be able to reuse the execution plan from the
first query when making the changes for the next, eliminating the work that would have been
done to prepare the execution plan (parse, resolve, optimize and compile) for the second query.
What have we learned? Overall, remote views and SQL pass through cause the same
commands to be sent to the server for roughly the same situations, so the performance should
be similar. Given these facts, the decision to use one over the other must be made based on
other criteria.
Remote views are a wrapper for SQL pass through and, hence, a hand-holding mechanism
that handles the detection of changes and the generation of the commands to write those
changes back to the data store. Anything that can be done with a remote view can also be done
using SQL pass through, although it may require more work on the part of the developer.
However, the converse is not true. There are commands that can only be submitted using SQL
pass through; returning multiple result sets is the most obvious example.
Remote views require the presence of a Visual FoxPro database, which might be a piece
of baggage not wanted in a middle-tier component. On the other hand, the simplicity of
remote views makes them a very powerful tool, especially when the query is static or has
consistent parameters.
You can retrieve the connection handle a remote view is using with the CURSORGETPROP()
function and then submit SQL pass through calls on that same connection:
hConn = CURSORGETPROP("ConnectHandle")
In the following example, the previously described v_authors view is opened, and then its
connection is used to query the titles table:
USE v_authors
hConn = CURSORGETPROP("ConnectHandle", "v_authors")
lnResult = SQLExec(hConn, "SELECT * FROM titles")
If your application uses remote views with a shared connection, then by using this
technique you can use a single ODBC connection throughout the application for views and
SQL pass through. The following sections give some brief examples of how combining remote
views with SQL pass through can enhance your applications.
Transactions
Even if remote views suffice for all your data entry and reporting needs, you will need
SQL pass through for transactions. Transactions are covered in greater detail in Chapter
11, Transactions.
Stored procedures
Consider the example of a form that firefighters use to report their activity on fire incidents.
It uses 45 different views, all of which share a single connection, for data entry. However,
determining which firefighters are on duty when the alarm sounds is too complicated for a
view. A stored procedure is executed with SQLExec() to return the primary keys of the
firefighters who are on duty for a particular unit at a specific date and time. The result set is
scanned and the keys are used with a parameterized view that returns necessary data about
each firefighter.
Filter conditions
Suppose you give a user the ability to filter the data being presented in a grid or report. You can
either bring down all the data and then filter the result set, or let the server filter the data by
sending it a WHERE clause specifying the results the user wants. The latter is more efficient at
run time, but how do you implement it? Do you write different parameterized views for each
possible filter condition? Perhaps, if there are only a few. But what if there are 10, 20 or 100
possibilities? Your view DBC would quickly become unmanageable.
We solved this problem by creating a single view that defines the columns in the result set,
but does not include a WHERE clause. The user enters all of his or her filter conditions in a
form, and when the OK button is clicked, all the filter conditions are concatenated into a single,
giant WHERE clause. This WHERE clause is tacked onto the end of the view's SQL SELECT,
and the resulting query is sent to the back end with SQLExec(). Here's an example with a
simple WHERE clause looking for a specific datetime:
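*-- A sketch: the view and column names are illustrative
LOCAL lcSQL, ltWhen, lnResult
ltWhen = {^2000-01-01 08:00:00}
lcSQL = DBGETPROP("v_activity", "View", "SQL") + ;
	" WHERE alarm_dt = ?ltWhen"
lnResult = SQLExec(hConn, lcSQL, "v_filtered")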
You can even make the new result set updatable by simply copying some of the properties
from the updatable view used as a template:
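*-- A sketch: copy the update-related properties from the template view
LOCAL laProps[5], lnI
laProps[1] = "Tables"
laProps[2] = "KeyFieldList"
laProps[3] = "UpdatableFieldList"
laProps[4] = "UpdateNameList"
laProps[5] = "SendUpdates"
USE v_activity IN 0 NODATA ALIAS template
FOR lnI = 1 TO 5
	= CURSORSETPROP(laProps[lnI], ;
		CURSORGETPROP(laProps[lnI], "template"), "v_filtered")
ENDFOR
USE IN template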
Summary
In this chapter, we explored the capabilities of SQL pass through and how to use it effectively.
The next chapter takes a look at building client/server applications that can scale down as well
as up, allowing you to use a single code base for all your customers.
Chapter 7
Downsizing
Scalability is a popular term these days and is commonly used to mean that something
small can be made big. But what about the opposite situation? What about taking
something big and making it small? What about taking client/server systems and
downsizing them so they work where SQL Server might not be appropriate or
cost-effective? This chapter addresses how developers can maximize their efficiency by
creating systems that can use either client/server or file-server back ends with a single
code base. In this chapter, you'll learn how to write applications for either VFP or SQL
Server back ends, and how to downsize them by using either remote SQL Server views
and local VFP views, or remote SQL Server and remote VFP views. You'll also learn how
the Microsoft Data Engine (MSDE) lets you deploy true client/server applications on any
computer, for free.
The examples in this chapter use views for data access, but other abstract mechanisms, such as
ADO, will work as well. For details on using ADO, see Chapter 12, "ActiveX Data Objects."
A client/server-style application using Visual FoxPro data can be designed using two types
of views: local or remote.
Later in this chapter, you will learn details of how to create components that keep back-
end-dependent branching code in a limited number of places. If your application uses remote
views of VFP data, almost no branching logic is necessary.
Another disadvantage is unrelated to the ODBC driver per se, but rather has to do with the
normal differences between any two back ends: SQL syntax differs between VFP and SQL
Server, just as it differs between any other pair of back ends. A SQL pass through command
that works for one back end may not work for the other. However, as long as you pass through
simple SQL syntax that is compatible with both databases, this isn't a problem.
Figure 1. The ODBC Visual FoxPro Setup dialog, expanded to show all options.
If you are connecting to a VFP database, you set the path to be the full path and file
name of the DBC. When connecting to free tables, you specify the directory only. Note that
UNC paths are supported. The lower portion of the dialog, below the path, is only visible
after the Options button is clicked. The settings there correspond to SET COLLATE, SET
EXCLUSIVE, SET NULL, SET DELETED and FetchAsNeeded.
As long as the VFP database is in the search path, this syntax works fine. Put your local
views in a separate DBC from your VFP database and give a copy to each user, just as you
would with remote views. In other words, your application uses two DBCs: one with views and
one with tables. This makes your application more modular and reliable and makes it easier to
use the data environment of forms.
One nice thing about local views is that the VFP View Designer doesn't hiccup nearly as
often as it does with remote views. Many remote views with joins cannot be edited in the View
Designer and require all work to be done in code, but this is much less often the case with
local views.
When you open a local view, VFP actually uses at least two work areas. If the view is
based on a single table and no view of that table has been opened yet in the current data
session, then after you USE the view you will see one work area for the view, and one for the
table itself. When the view joins multiple tables, one work area will be used for the view and
one for each table in the join. Figure 2 shows the three work areas opened for the following
local view of data in the VFP TasTrade sample:
Figure 2. The VFP Data Session window showing three work areas opened for a
single local view.
Another nice feature of local views is that if you create a multi-table join on tables for
which relations are defined, the View Designer will automatically detect those relations and
create join conditions to match, as shown in Figure 3.
Figure 3. The VFP View Designer will automatically detect persistent relations
between tables and create join conditions that match.
Yet your application must be able to provide both types of functionality. The way to
prevent unmanageable spaghetti is to pull the branching code out into a few abstract
components that are then used by various parts of the application when working with data.
There are three main areas where you should perform this abstraction:
•	Application-level data-handling class(es)
•	Form-level data-handling class(es)
•	Views DBC
You also might want to let the application-level data handler provide other database-
specific services to the remainder of the application. Although we prefer to put non-startup
functionality in the form-level data handler, we do use a method in the application-level data
handler that returns which type of back end is being used.
Table 1. Five form-level data handler methods and the VFP functions they replace.
Method	Replaces
BeginTransaction()	BEGIN TRANSACTION
CommitTransaction()	END TRANSACTION
RollbackTransaction()	ROLLBACK
UpdateTable()	TABLEUPDATE()
RevertTable()	TABLEREVERT()
Listing 1. A simple snippet that begins a transaction and attempts to update two
views. If either update fails, the transaction is rolled back; otherwise, it is committed.
BEGIN TRANSACTION
DO CASE
CASE ! TABLEUPDATE("view1")
ROLLBACK
CASE ! TABLEUPDATE("view2")
ROLLBACK
OTHERWISE
END TRANSACTION
ENDCASE
Listing 2. A snippet that does the same thing as the code in Listing 1, but calls the
form-level data handler instead of making the calls directly.
WITH THISFORM.oDataHandler
.BeginTransaction()
DO CASE
CASE ! .UpdateTable("view1")
.RollbackTransaction()
CASE ! .UpdateTable("view2")
.RollbackTransaction()
OTHERWISE
.CommitTransaction()
ENDCASE
ENDWITH
In Chapter 6, "Extending Remote Views with SQL Pass Through," you learned about
transaction handling with remote data, which explains why the transaction-handling
methods must differ between the two data handlers. The VFP data handler's
BeginTransaction() method simply needs to pass through BEGIN TRANSACTION,
something like this:
BEGIN TRANSACTION
while the SQL Server handler sets the connection's Transactions property to manual:
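*-- nConnHandle is an assumed property holding the shared connection handle
lnResult = SQLSetProp(THIS.nConnHandle, "Transactions", DB_TRANSMANUAL)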
Naturally, each method needs to check existing settings and so forth, but the preceding
code shows the primary functionality.
The meat of the CommitTransaction() method for VFP looks like this:
END TRANSACTION
while the SQL Server version looks like this:
SQLCOMMIT(lnHandle)
SQLSETPROP(lnHandle, "Transactions", DB_TRANSAUTO)
Note that the VFP command END TRANSACTION both commits and ends a
transaction, but that the SQL Server version must set the Transactions property back
to automatic. The RollbackTransaction() methods are essentially the same as the
CommitTransaction() methods, with ROLLBACK and SQLROLLBACK() substituted for the
commit commands.
Views DBC
If you've worked with lots of local data in VFP applications, you may be in the habit of calling
DBCs "databases." If you do, wash your mouth out with soap right now and don't do it again.
A DBC is not truly a database. A database is a collection of tables, while a DBC is nothing
more than a metadata table containing information about tables, views and/or connections. You
wouldn't give every user his or her own copy of a database, but you can and should give every
user his or her own copy of the views DBC. The DBC contains nothing more than a bunch of
code and properties defining the views and connections. If each user has a copy, you don't have
to worry about pathing, and you can temporarily store all kinds of useful, user-specific data in
the DBC. This technique is covered in more detail in Chapter 4, "Remote Views."
Since a view is nothing more than a collection of code and properties, it can be used to
abstract data access functionality. A view of the same name in two different DBCs can be
defined differently for different back ends. Stored procedures with the same interface can do
different things. Properties for objects in the DBCs can be set differently. In fact, these are the
main things to do differently in your views DBCs.
Each view definition must be written using the SQL syntax supported by the appropriate
back end, as back-end requirements and capabilities vary. For example, VFP and SQL Server
7.0 both support TOP n queries, but don't try this with SQL Server 6.5. You'll have to leave
that clause out of your SELECT and use the view's MaxRecords property instead.
Different back ends also support different functions. For example, to produce a query
returning rows for a particular year in VFP, you would use the YEAR() function, but in SQL
Server, you would use the DATEPART() function. Different back ends also have different
keywords, so a view that works fine in VFP might fail in SQL Server because you attempted to
use a SQL Server keyword.
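A sketch of two definitions of the same view, one per views DBC (the table, column and
connection names are illustrative):
*-- In the VFP views DBC
CREATE SQL VIEW v_sales99 AS ;
	SELECT * FROM sales WHERE YEAR(ord_date) = 1999
*-- In the SQL Server views DBC
CREATE SQL VIEW v_sales99 ;
	REMOTE CONNECT "myconn" ;
	AS SELECT * FROM sales WHERE DATEPART(year, ord_date) = 1999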
Just be sure that you create views that look the same to the front end. This may take
some trial and error, and you should work with both back ends at the same time. Consider
one author's experience: after working on a module and testing it with a VFP back end, he
came back some time later to discover that a SQL Server keyword had been used in a
view definition, making it necessary to go back and change a bunch of code where the view
was used.
Some functions just seem to fit best as stored procedures in the views DBC. As long as
you're only opening one DBC, you can call the stored procedure at any time with a simple
function or procedure call. One excellent use for DBC stored procedures is to generate primary
keys. With SQL Server, you may call a SQL stored procedure or use an identity column, while
with VFP you might get a primary key directly from the VFP database. Either way, if you
create a stored procedure in the views DBC, it can contain the logic that is appropriate for its
back end; all you do is call the procedure. If you put this sort of function in your
application-level data handler, you might find yourself writing lightweight COM components
where you need this functionality and don't want the overhead of a data-handling class. You
can simply move the code to the views DBCs and rewrite the data handler to pass the call
through to the stored procedure.
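A sketch of such a stored procedure for the SQL Server views DBC (p_NextKey and its
output parameter are assumptions):
FUNCTION NewKey(tcTable, thConn)
	LOCAL lnKey, lnResult
	lnKey = 0
	lnResult = SQLExec(thConn, ;
		"EXECUTE p_NextKey ?tcTable, ?@lnKey OUTPUT")
	RETURN lnKey
ENDFUNC
The VFP views DBC would contain a NewKey() with the same interface that reads the next
key directly from the VFP database.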
Finally, you can set properties for each object in the DBC. One good example of the need
for this is to make back-end-specific data type conversions, as with date fields in VFP tables
and datetime fields in SQL Server tables. As SQL Server has no date data type, you must use
datetimes. This is no problem if you also use datetimes in the VFP schema, but if you used
dates in VFP, then you simply change the DataType property of the view's field. A SELECT of
a date field in a VFP table will automatically set the data type to date. You can easily change
the remote view's field like this:
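*-- The view and field names are illustrative
= DBSETPROP("v_employees.hire_date", "Field", "DataType", "D")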
With date fields, testing for an empty or null value is simple:
IF EMPTY(myfield) OR ISNULL(myfield)
	*-- Do something
ENDIF
But datetimes are different. EMPTY() will return FALSE in a remote view containing an
empty datetime field. If the empty datetime field is in SQL Server, then in a remote view it will
appear as 01/01/1900 12:00:00 AM (with SET(DATE) = MDY). To complicate matters
further, an empty datetime field in a remote view of VFP data will be 12/30/1899 12:00:00
AM. So every time you test for an empty or null datetime, you also have to test for it being
equal to one of these datetime values.
This is an excellent argument for writing your own function to test for EMPTY() or
ISNULL(). If the VARTYPE() of the value being tested is "T", be sure to test for the empty
datetime values.
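A sketch of such a function (the phantom values are the two empty datetimes discussed
above):
FUNCTION IsBlank(tuValue)
	DO CASE
	CASE ISNULL(tuValue)
		RETURN .T.
	CASE VARTYPE(tuValue) = "T"
		*-- Catch both the SQL Server and VFP phantom datetimes
		RETURN EMPTY(tuValue) ;
			OR tuValue = {^1900-01-01 00:00:00} ;
			OR tuValue = {^1899-12-30 00:00:00}
	OTHERWISE
		RETURN EMPTY(tuValue)
	ENDCASE
ENDFUNC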
To make matters worse, you may have to deal with actual dates that are the same as the
empty ones. There are people alive today who were born on those days, though admittedly
not many. Fortunately, you most likely don't have to deal with the time for those dates. So to
ensure that you are dealing with an actual datetime, rather than a phantom one, set the time
component to something other than 12:00:00 AM. We use 12:00:01 AM instead.
By the way, you can't ensure against empty dates in your user interface, because somebody
can always get to a field some other way. Remember, your application isn't the only way
people can get to data.
What is MSDE?
MSDE is a client/server database that's 100 percent compatible with SQL Server 7.0. It
is included with Microsoft Office 2000 Premium and Developer editions, and a royalty-free
run-time distribution version is available for licensed users of any of the following
Microsoft products:
•	Visual FoxPro 6.0, Professional edition
•	Visual Basic 6.0, Professional and Enterprise editions
•	Visual C++, Professional and Enterprise editions
•	Visual InterDev 6.0, Professional edition
•	Visual J++ 6.0, Professional edition
•	Visual Studio 6.0, Professional and Enterprise editions
User limitations
Microsoft says MSDE is tuned for five or fewer concurrent users. What does this really mean?
Theoretically it means the server is likely to be actively handling requests from five users at the
same time. But it doesn't mean an MSDE system is limited to five users.
In order to explore the limits of this feature, we did some tests to attempt to determine how
many users could be logged into MSDE at the same time and how performance was affected as
the number of users went up. We were able to connect more than 100 users without a problem.
However, there was a severe performance penalty as the number of users increased. With 15 or
fewer connections, there seemed to be no difference in performance between MSDE and SQL
Server 7.0 on similarly configured systems. But as soon as a sixteenth user was connected,
everything slowed down dramatically. When 16 users were connected, everything was slower
than with 15, even when only one user was actually doing anything.
Capacity limitations
Each MSDE database is limited to 2GB. Bigger databases require SQL Server.
No user interface
SQL Server ships with Enterprise Manager, Performance Monitor, Profiler, Query Analyzer
and other great tools for administering SQL Server and designing/modifying databases. None
of these tools is included in MSDE. However, if those tools exist on a network, they can be
used to manage an MSDE server. We'll discuss three other possible tools here: Microsoft
Access 2000, Microsoft Visual InterDev 6.0 and in-house tools.
Access 2000
Microsoft Access 2000 can be used to manage most aspects of an MSDE server, including
database schema, triggers, stored procedures, views, security, backup/restore and
replication. Some things that aren't particularly easy to work with from Access are
user-defined data types, defaults and rules. Many of the individual administrative tasks are
performed almost identically in Access and Enterprise Manager. For example, creating
views uses tools that differ in the two products only in their toolbars. Figure 4 shows the
design surface for views in Access, and Figure 5 shows the Enterprise Manager version,
which offers a toolbar.
Visual InterDev 6.0
As with Access 2000, you'll find many of the design surfaces in Visual InterDev to be
similar to those in Enterprise Manager. An example is the table design view, shown in Figure
7, which is identical to the one in Enterprise Manager.
However, as with Access, you'll find that Visual InterDev isn't a complete replacement for
the Enterprise Manager. For example, Visual InterDev doesn't offer an easy way to manage
users or security.
Figure 7. The table design view in Visual InterDev 6.0 is identical to the one in
Enterprise Manager for SQL Server 6.0.
In-house tools
Chapter 10, "Application Distribution and Managing Updates," discusses creating tools that
allow users to perform various management functions for SQL Server databases. After all, you
may need to make schema changes during the life of a project and, though you could require
users to run scripts in the Query Analyzer, providing the user with an application specifically
for managing your database may give you better control. MSDE makes such a tool
even more important. Perhaps your users don't have Access or Visual InterDev. Maybe that's
good, as users can do a fair amount of damage with such tools. But they may need to perform
simple tasks such as changing the administrator's password or adding new users. These tasks
are fairly simple to perform and are discussed in greater detail in Chapter 10.
c:\temp\msdex86.exe -a -f1 c:\temp\unattend.iss
The path in the command line must be the fully qualified path to the two files. You may
rename the .iss file, but you must pass the fully qualified path, not a relative path, or installation
will fail. If you use InstallShield or another commercial installer for your install program, you
can specify the path in an installation script with a memvar for the location the user selected
for installation.
Microsoft says in its documentation to surround the .iss file name and path with double
quotes, but we've found that doing so causes frequent installation failures. You must also use
the -a and -f1 switches or your installation will fail. Installation will also fail if there are any
spaces in either path on the command line.
The documentation also says to use a -s switch to cause a silent-mode install, and that
omitting the switch will provide a user interface during the install. In our tests, the switch does
nothing and you get a silent install whether you want it or not. Because you're stuck with a
silent install that takes several minutes with no feedback to the user, be sure to warn users in
some way prior to performing the MSDE install.
The .iss file contains numerous installation settings, including the installation directory.
This path is hard-coded into the file, and you cannot use relative paths. This means the user has
no choice of destination directory for MSDE. You could programmatically change the file at
install time to substitute a user-defined path, but this would be quite a bit of work with most
installation programs.
The MSDE installation will also fail when certain registry keys exist on the target machine.
MSDE cannot be installed on a computer that has had SQL Server on it unless
SQL Server has been completely uninstalled. We found this out the hard way by attempting
to put MSDE on a machine that had once had SQL Server 6.5 on it. The 6.5 installation had
been upgraded to 7.0; 6.5 was later uninstalled. Unfortunately, numerous registry keys
remained behind.
The MSDE installation program writes a file called setup.log in your Windows directory.
This file looks just like an INI file, and there are four lines to look for to help debug the
installation. If everything went fine, it will look like this:
[Status]
Completed=1
[ResponseResult]
ResultCode=0
If the Completed value is anything other than 1, the installation failed. If the ResultCode
value is anything other than 0 or -1, the installation also failed. Even though a ResultCode
value of -1 is technically an error, if Completed is 1 and ResultCode is -1, then the installation
simply requires a reboot. Other ResultCode values are shown in Table 2.
Value Meaning
0 Success.
-1 General error, or requires reboot.
-2 Invalid mode.
-3 Required data not found in the .iss file.
-4 Not enough memory available.
-5 File does not exist.
-6 Cannot write to the response file.
-7 Unable to write to the log file (don't know how you'd find this one out).
-8 Invalid path to the InstallShield silent response file.
-9 Not a valid list type (string or number).
-10 Data type is invalid.
-11 Unknown error occurred during setup.
-12 Dialog boxes are out of order.
-51 Cannot create the specified folder.
-52 Cannot access the specified file or folder.
-53 Invalid option selected.
After a successful installation, the Startup folder of the Start menu will contain a shortcut
called Service Manager. This is the only user interface for MSDE. When the system is
booted, the Service Manager will start. However, by default, the MSDE service itself will not
be started. Start the Service Manager to start MSDE, and also check the "Auto-start service
when OS starts" check box so that MSDE will automatically start when the computer is booted.
If the user has a license for Microsoft Office 2000 Premium or Developer edition, then
MSDE can also be installed from the Office 2000 CD. Run \Sql\X86\Setup\Sqlsetup.exe from
the Office 2000 CD. This version will install Microsoft DTS (Data Transformation Service) in
addition to MSDE. This version of MSDE can only be installed on machines where Access
2000 is already installed.
After copying all the appropriate MDF and LDF files to the new server, you attach
them like this:
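*-- A sketch: the database name and file paths are illustrative
lnResult = SQLExec(hConn, ;
	"EXECUTE sp_attach_db 'mydata', " + ;
	"'c:\mssql7\data\mydata.mdf', " + ;
	"'c:\mssql7\data\mydata_log.ldf'")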
There's one minor catch. Although a list of users is stored in the sysusers table of each
database, the Security IDs (SIDs) required by SQL Server are actually stored in the
master..sysxlogins table. A user won't be able to get into the database after the move
because the SIDs don't match. This is easily corrected by running the SQL Server
sp_change_users_login stored procedure for every user. Listing 3 shows a VFP procedure that
calls sp_change_users_login for every user and application role in the database. If the ODBC
DSN you use to connect already specifies the database, then you don't need to call this
procedure with the name of the database. This works only for normal SQL Server users and
application roles; it does not work for NT-integrated users. You might encounter situations
where this step isn't required, such as when no logins are defined in a database because all
access is through the administrative login.
Listing 3. This code will reset the internal Security ID (SID) for a SQL database after it
has been moved from one server to another or from MSDE to SQL Server.
LPARAMETERS tcDatabase
LOCAL lnHandle, lcSQL, lnResult
*-- By connecting without parms VFP asks for DSN, then login and password
lnHandle = SQLCONNECT()
IF NOT EMPTY(tcDatabase)
	SQLEXEC(lnHandle, "USE " + tcDatabase)
ENDIF
*-- Get the SQL Server users and application roles (a sketch; dbo is skipped)
lcSQL = "SELECT name FROM sysusers " + ;
	"WHERE (issqluser = 1 OR isapprole = 1) AND name <> 'dbo'"
SQLEXEC(lnHandle, lcSQL, "sqlusers")
SELECT sqlusers
SCAN
	*-- Relink each user to the login of the same name
	lnResult = SQLEXEC(lnHandle, ;
		"EXECUTE sp_change_users_login 'Update_One', " + ;
		"'" + ALLTRIM(sqlusers.name) + "', " + ;
		"'" + ALLTRIM(sqlusers.name) + "'")
ENDSCAN
USE IN sqlusers
SQLDISCONNECT(lnHandle)
RETURN
You aren't limited to migrating only from MSDE to SQL Server. You
can migrate the other way, too. However, be aware that MSDE has
capacity limitations that might prevent a large database from being
migrated to MSDE.
Summary
In this chapter, you learned a couple of approaches to using the same application code with
either a VFP or SQL Server back end: using remote views with SQL Server and local views
with VFP, and using remote views with both SQL Server and VFP. You also learned about the
Microsoft Data Engine, a SQL Server-compatible client/server database that just might
eliminate the need to code for more than one back end, as it allows you to deploy your SQL
Server application from laptops to the enterprise. In the next chapter, you'll learn about error
handling in client/server applications.
Chapter 8
Errors and Debugging
Error handling and debugging in traditional Visual FoxPro file server applications is
relatively straightforward because traditional Visual FoxPro applications use a single
technology that controls the user interface, data access, and the actual storage of data.
Client/server applications use three separate technologies (the Visual FoxPro user
interface, ODBC and a SQL Server database), which must communicate with each other.
Because this architecture is more complex, the process of handling errors in the
application and debugging is also more complex. In this chapter, you will learn some of
the secrets of handling client/server errors. You are also introduced to some debugging
tools that will make debugging easier for you.
Handling errors
After reading this far into the book, you may have gathered that handling data errors in a
client/server application is not very different from a traditional Visual FoxPro application. This
is simply because most data updates are handled through a TABLEUPDATE() or SQLExec()
call, and you can use the AERROR() function to determine the cause of any failures. Any other
type of failure (such as an application error) is trapped with either the Error event of the object,
or through your global ON ERROR handler.
Unfortunately, it is not that simple: Handling client/server errors from Visual FoxPro
can get tricky, particularly when SQL Server is used in a way that is not friendly to the
client application.
Trapping errors
The first lesson to learn is how to trap the errors you receive during a TABLEUPDATE() call.
You probably know that this function does not report errors through the ON ERROR handler.
Instead, you must use the AERROR() function to capture the reasons for TABLEUPDATE()
failures. In file-server applications, this array has a single row containing the details of the
failure. For any Visual FoxPro error, the array contains the data shown in Table 1.
However, for errors that occur through an ODBC update, such as when updating data on a
SQL Server with a remote view, the array will always have the same value, 1526, in the first
column. This is because all ODBC errors trigger the same Visual FoxPro error (1526). The
remaining elements of the array contain data that differs from a traditional Visual FoxPro error,
since ODBC errors are reported differently. The contents of the array from an ODBC error are
shown in Table 2.
Table 2. The contents of the array created by AERROR() for an ODBC error.
Element	Contents
1	Numeric. Contains 1526, the Visual FoxPro connectivity error number.
2	Character. The text of the Visual FoxPro error message.
3	Character. The text of the ODBC error message.
4	Character. The ODBC SQLSTATE.
5	Numeric. The error number from the remote data source.
6	Numeric. The ODBC connection handle.
7	NULL.
ODBC errors can create multiple rows in the array created by AERROR().
Normally, you are primarily interested in the data of the first row, but the
other rows could contain important information that is related to the error
reported in the first row.
Reporting errors
By analyzing the array, you can quickly determine the cause of any update errors by reading the
fifth column and comparing the value there with error numbers from SQL Server. For example,
imagine that you are working with the sample pubs database on SQL Server. In this database,
there is an authors table, which contains information about the authors of the books in the
database. The sample CREATE TABLE statement shown here defines the first column of the
authors table, named au_id:
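A sketch, abbreviated from the pubs database script:
CREATE TABLE authors
(
	au_id id NOT NULL
		CHECK (au_id LIKE '[0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]'),
	...
)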
From this statement, you can see that the au_id column has a CHECK constraint
(equivalent to a field validation rule in Visual FoxPro), restricting the data to the format
999-99-9999, where 9 represents any valid digit. Therefore, if you attempt to place any data
into this column that does not meet this CHECK constraint, the operation fails.
The CHECK constraint created here does not specify a name for the
constraint. This causes SQL Server to generate a name for the constraint
in the format CK__tablename__fieldname__xxxxxxxx, where xxxxxxxx is a
unique character string. To avoid this, you should always provide a name for all
constraints, for reasons that should become clear in the next few paragraphs.
Now imagine that you have created a view that retrieves data from the authors table and
allows the user to update that data. Suppose that after your user has pulled down a record to
edit, he or she decides to change the au_id column of an author. In doing so, the CHECK
constraint on the column is violated, which causes TABLEUPDATE() to fail. After invoking
the AERROR() function, the resultant array will contain the following information:
LAERROR Pub A
( 1, 1) N 1526 ( 1526.00000000)
( 1, 2) C "Connectivity error: [Microsoft][ODBC SQL Server
Driver][SQL Server]UPDATE statement conflicted with
COLUMN CHECK constraint 'CK__authors__au_id__08EA5793'.
The conflict occurred in database 'pubs', table
'authors', column 'au_id'."
( 1, 3) C "[Microsoft][ODBC SQL Server Driver][SQL Server]UPDATE
statement conflicted with COLUMN CHECK constraint
'CK__authors__au_id__08EA5793'. The conflict occurred in
database 'pubs', table 'authors', column 'au_id'."
( 1, 4) C "23000"
( 1, 5) N 547 ( 547.00000000)
( 1, 6) N 1 ( 1.00000000)
( 1, 7) C .NULL.
( 2, 1) N 1526 ( 1526.00000000)
( 2, 2) C "Connectivity error: [Microsoft][ODBC SQL Server
Driver][SQL Server]UPDATE statement conflicted with
COLUMN CHECK constraint 'CK__authors__au_id__08EA5793'.
The conflict occurred in database 'pubs', table
'authors', column 'au_id'."
( 2, 3) C "[Microsoft][ODBC SQL Server Driver][SQL Server]The
statement has been terminated."
( 2, 4) C "01000"
( 2, 5) N 3621 ( 3621.00000000)
( 2, 6) N 1 ( 1.00000000)
( 2, 7) C .NULL.
As you can see from this output, the array contains two rows. The first reports the
violation of the CHECK constraint, and the second tells you that SQL Server has terminated
the statement.
While you are developing your application, you can show the contents of the second or
third columns of this array, since only you (or other developers) would ever see the message.
However, if your end users are anything like the folks that we've encountered, they will surely
react with panic to a message like that one! Therefore, you will eventually want to capture an
error like this and translate it into something more friendly before putting the application in
front of them.
This is where the trouble begins, as there is no easy way to uniquely identify the errors
you receive back from SQL Server. For example, notice that the fifth column of the first row
of the array contains the SQL Server error number 547. If you check this error number in
the Help system of SQL Server, you will find that this error is reported for any kind of
constraint violation, not only CHECK constraints. This means that the only other way to
determine the exact cause of the error is to parse the character string in the second or third
column of the array.
In deciding which column to use, notice that the first and second rows of the array have the
same exact error message in the second column, but the third column reports a different error
for each row.
Regardless, using the contents of the error array to create a user-friendly message poses a
small problem that can really only be solved in one of two ways. The first is to search for the
constraint name (in this case, CK__authors__au_id__08EA5793) and substitute a friendly
message for each constraint you recognize. While this seems easy enough, it requires that you always have an up-to-date
list of constraints that are on the server and details about their meaning. If the constraint is
modified at any time, you will have to update your client-side code to match. Also, refer to the
earlier note about how this constraint name came to exist. If someone simply regenerates the
schema of the database or the table, the name change could break all of your code.
A second approach to error handling is a radical departure from the first: Do not
update the data on the server through views, but use stored procedures instead. The first step to
making this approach work is to define a stored procedure for handling the updates of author
records, as in the following T-SQL code:
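-- A sketch of the approach; the procedure name and parameters are
-- illustrative. First, add a user-defined message to SQL Server:
EXEC sp_addmessage 50001, 16,
   'Invalid author ID specified. Use the format 999-99-9999 for author IDs.'
GO

-- Then validate in the stored procedure and raise that message on failure:
CREATE PROCEDURE procUpdateAuthorID
   @au_id_old varchar(11),
   @au_id_new varchar(11)
AS
IF @au_id_new NOT LIKE '[0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9][0-9][0-9]'
BEGIN
   RAISERROR (50001, 16, 1)
   RETURN
END
UPDATE authors SET au_id = @au_id_new WHERE au_id = @au_id_old
GO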
This code starts by creating a system message that is stored in the master database of SQL
Server. This makes the message available to all databases, and it can be easily invoked by the
RAISERROR function. After the message is created, the stored procedure code can reference
the message through its unique error number (in this case, 50001). Note that this code merely
creates the system message and the stored procedure; it does not handle any kind of update of
an author record, nor does it actually execute the stored procedure.
With this procedure in place in the pubs database, instead of posting the update directly
through a Visual FoxPro view, you can now use a SQL pass through call:
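* Illustrative call; the connection handle comes from the view's cursor
* or from SQLCONNECT(), and '172-32-1176' is an existing pubs author ID
lnOK = SQLExec( lhConnection, ;
   "EXECUTE procUpdateAuthorID '172-32-1176', '123456789'")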
The SQL pass through statement produces an error that can be captured from within Visual
FoxPro with the AERROR array. In this case, the error array looks like this:
LA Pub A
( 1, 1) N 1526 ( 1526.00000000)
( 1, 2) C "Connectivity error: [Microsoft][ODBC SQL Server
Driver][SQL Server] 'Invalid author ID specified. Use
the format 999-99-9999 for author IDs."
( 1, 3) C "[Microsoft][ODBC SQL Server Driver][SQL
Server] 'Invalid author ID specified. Use the format
999-99-9999 for author IDs."
( 1, 4) C "37000"
( 1, 5) N 50001 ( 50001.00000000)
( 1, 6) N 1 ( 1.00000000)
( 1, 7) C .NULL.
Therefore, if the stored procedure is programmed with these friendly error messages, you
can simply display these messages directly from the AERROR array. Furthermore, you can
trap for each specific error number (in this case, 50001) and translate the error message if
desired. However, to get this kind of information, you will need to use stored procedures and
forego the convenience of using views.
A third alternative is to not use any data validations on the server at all, and instead handle
all of them from within the client application. This approach performs well, but only if you are
writing the sole application that will ever touch the data on the server. Leaving the database
open with no validation whatsoever is typically not a good idea, as it makes it very easy for bad
data to enter the system.
Decisions, decisions. The choice is up to you. One major factor in your decision may
be how much access you have to the server. If you are the entire development
department, responsible for both the SQL Server and the client-side application development,
then you can choose to either design your own custom stored procedures for the updates, or
monitor the naming of constraints and other rules so you can capture them when using views.
However, you may work in an environment where you do not own all of the pieces of
the application. For example, the SQL Server may already be in use by another department.
This department claims ownership of the server and the associated database that you need to
access in order to write your Visual FoxPro application. Since the database belongs to the
other department, you may have political problems in acquiring the necessary access to the
database(s) on that server. Without proper access, you will have a tough time determining the
current status of any rules, stored procedures and so on. This can definitely complicate matters,
and may force you to resort to error-handling techniques that otherwise would not be your
first choice.
Conflict resolution
No, this is not a section on how to deal with difficult employees or your significant other. It
is, however, meant to introduce another big difference between Visual FoxPro and SQL Server:
how they handle update conflict resolution.
When using native Visual FoxPro data, you have some choices when deciding how to deal
with update conflicts. Recall that this error occurs when two users have pending changes on the
same record and then attempt to commit their changes. Only one user can update at a time, so
when the first user finishes his or her update, the second user is left with a problem because the
data on disk is not the same as it was when the edit began. You can control Visual FoxPro's
behavior through the second parameter of TABLEUPDATE(), which allows you to modify
how the second user experiences this problem.
If you set the second parameter to False, the second user's update fails with error 1585,
"Update conflict," as Visual FoxPro will detect that the first user updated the disk values while
the second user was waiting with pending changes. On the other hand, if you set the second
parameter to True, the second user is permitted to overwrite the first user's changes without
questions or errors. Informally known as "last one in, wins," this avoids the update conflict
problem entirely. This is a great choice if it is not common for two users to edit the same
records concurrently, and it reduces the amount of error-handling code that you have to write.
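In code, the two behaviors look something like this (the alias v_authors stands in for your own table or view):

* Detect conflicts: fail if another user changed the record first
IF !TABLEUPDATE(.T., .F., "v_authors")
   = AERROR(laErr)
   IF laErr[1,1] = 1585    && update conflict
      * Resolve the conflict or let the user retry
   ENDIF
ENDIF

* "Last one in, wins": overwrite the other user's changes silently
= TABLEUPDATE(.T., .T., "v_authors")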
When working with remote data, the same options are available and operate similarly.
What changes is how you handle an update conflict error (i.e., when you use False for the second
parameter). When a conflict occurs against Visual FoxPro data, you can use the OLDVAL()
and CURVAL() functions to view the state of a field before it was edited and the actual value
on disk, respectively. This allows you to go so far as to show the user exactly what changes were
made by the other user and let him or her decide how to proceed.
However, when dealing with remote data, CURVAL() is worthless, as it always returns
the same value as OLDVAL(). Therefore, you have to use a completely different technique to
resolve conflicts when working with remote data. Since CURVAL() does not work, you have
to find a different way to get at the actual data on the server. You may first think that
REQUERY() is the answer, but this cannot be done on a view that has pending changes.
The only technique that seems to work is to open a separate copy of the data in its own
work area, either with SQL pass through or with the view. You still have to use the
REQUERY() function after opening this second copy to ensure that you're looking at the latest
values. This is due to the way that Visual FoxPro caches the view's contents. But once you
have opened the view and executed REQUERY(), you can retrieve the current values and use
them as part of your conflict resolution.
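A sketch of this technique, assuming an updatable remote view named v_authors:

* Open a second copy of the view in its own work area...
USE v_authors AGAIN ALIAS crServer IN 0
* ...and requery it so the cursor reflects the latest server values
= REQUERY("crServer")
* Now compare crServer's fields with OLDVAL() in the edited view
* to see what the other user changed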
Before we continue, we should point out another subtle difference that exists between
updates against native data and updates against remote data. Recall that TABLEUPDATE() can
detect update conflicts with its second parameter set to False. When applied to native data, an
update conflict occurs even if the two users edited different fields. This is due to the way Visual
FoxPro implements data buffering: a comparison of the entire row takes place, instead of only
checking the modified fields. To check whether there is truly an update conflict, you must write
a bit of code that uses the OLDVAL() and CURVAL() functions. However,
when using views (either local or remote, actually), you can choose a WhereType that ensures
only the modified fields are checked against the back end for update conflicts.
For example, when your WhereType is set to the default of 3 (key and modified fields),
Visual FoxPro will submit an UPDATE statement that only includes the fields that were
modified by the current user. As long as the other user did not edit any of these same fields, no
conflict exists. However, if you use the WhereType of 2 (key and updatable fields), you are
bound to hit update conflicts more readily, as this will include any fields marked as updatable.
Note that choosing a WhereType of 4 (key and timestamp) is going to catch any update
conflict in the entire record, as the timestamp will be updated regardless of which field or fields
were changed. However, if you need to detect changes in any field (particularly memo fields),
this proves to be the most efficient option.
Finally, if you wish to avoid update conflicts entirely, you can choose a WhereType
of 1 (key fields only), so that Visual FoxPro only cares about matching the key fields before
posting the update. This has the same effect as specifying True for the second parameter of
TABLEUPDATE().
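Either of the following calls sets the WhereType; the view name is again a placeholder:

* Persistently, in the view definition...
= DBSETPROP("v_authors", "View", "WhereType", 3)   && key and modified fields
* ...or at run time, on the open view cursor
= CURSORSETPROP("WhereType", 1, "v_authors")       && key fields only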
View errors
There are many possible errors that ODBC can return when a view update fails. There are
three main categories of errors: Something is wrong with your view, something is wrong
with the data, or something is wrong with the server. You've already seen how data errors
happen, and with server errors, a Visual FoxPro error occurs that you can trap with traditional
error handling.
During development, you are most likely to run into errors in your views. Most of these
errors will be among those listed in Table 3.
One frustrating issue with some of these is the use of the ownership qualifier "dbo" for
tables. Sometimes you'll get an error message such as the first one in Table 3: "No update
table(s) specified. Use the Tables cursor property." So you check the value of the property,
expecting it to be empty, but it says something like this:
dbo.category
This is exactly what it should be for the category table. We don't know why, but
sometimes VFP chokes when "dbo" is present, and sometimes when it is missing. So, if you get
this message and you used "dbo", then change the property to remove it. If the property doesn't
contain "dbo", then change the property to add it. This error doesn't happen very often, but you
can't imagine how many hours it will cost you to find it the first time! We have had the error
occur with both SQL Server 6.5 and 7.0, but not yet with SQL Server 2000.
Debugging tools
Now that you have the general idea of how to handle and report errors in your client/server
application, you need to learn how to debug errors that originate in any part of the application.
On the client side, you can continue to use your familiar Visual FoxPro debugging tools:
the Debugger, Event Tracker, Coverage Profiler and, of course, your collection of past
experience. But the client side is only one of three places where problems can crop up. The
other two are in the ODBC driver and on the SQL Server. These two pieces have their own
ways of debugging: ODBC logs (or trace) and some SQL Server tools.
The SQL Server Profiler
By using the Profiler, you can examine the commands that VFP sends to SQL Server,
evaluate the performance of individual commands or stored procedures, and even debug T-SQL
stored procedures one line at a time.
The Profiler can be found in the Microsoft SQL Server program group. To create a trace,
open it and select File | New | Trace. Figure 1 shows the resulting Trace Properties dialog.
All you need to do here is name the trace. In the example shown in Figure 1, the trace is
called MyTrace.
Figure 2 shows a simple trace created by opening a view of the Tastrade category table
created by the SQL Server Upsizing Wizard in Chapter 5, Upsizing: Moving from File-Server
to Client/Server. This view shows some of the interesting information that can be supplied by
the Profiler. The Event Class column describes the server event. When VFP sends a query, the
class is SQL:BatchCompleted, and the Text column next to it will show the actual SQL
statement that was sent in the batch. Note that the SQL User Name column displays the default
administrative login. Since this system is using NT security, the NT User Name appears as
well. The CPU column will show which CPU in a multi-processor server handled the request.
SQL Server is multi-threaded, so each thread can be sent to a different CPU. In this case,
however, it is running on a single-processor system, so only CPU 0 is used. The next three
columns are particularly useful, as they can be used to troubleshoot performance problems.
These show the number of reads, the number of writes, and the duration in milliseconds of the
event. Partially hidden is the Connection column, and further to the right, off the screen, is the
datetime of the event. The datetime column isn't particularly useful, as we would rather use the
duration column to troubleshoot performance.
The Profiler is a great tool for tracking down all kinds of issues. Consider the following
example of using the Profiler to track down a particularly nasty performance problem in the
way the SQL Server 6.5 product handled certain messages from the ODBC driver. In this
situation, Visual FoxPro was used as the client application for a SQL 6.5 box that was quite
powerful: multiple processors, high-throughput RAID array, and all the other toys.
Nevertheless, for some reason, when an insert occurred to a SQL table with a Text field, the
application appeared to freeze.
The client tried letting it run its course to determine if the insert would eventually complete
or if it was a lock-up situation. After waiting two hours with no response, they asked for help.
Once it had been determined that the code was implemented correctly and that the database
schema was valid and efficient, the Profiler provided the answer: it supplied the pertinent
information and explained why performance was so awful.
The table was rather large, with more than 200,000 records, and each record had a
Text field. SQL Server 6.5 was quite wasteful with Text fields, as it allocated a single 2K
page for each Text field, regardless of whether the field contained data or not (SQL Server
7 fixed this problem by allowing its 8K pages to be shared among multiple Text fields).
Therefore, searches through Text fields were to be avoided at all costs, since each search
would require moving and searching through 400MB of data. Even on this heavy-duty machine,
searching through that much data would be slow, particularly since there are no indexes on
Text fields.
What was happening was that SQL 6.5 received the update request and converted it into
three statements. The first inserted all of the data, except the contents of the Text field. Instead
of using the actual Text data, the statement provided a unique value for the Text field.
Then, the next statement was the death knell: it was performing a SELECT statement with a
WHERE clause to find the unique value in the Text field! (Wouldn't you at least think it
would try to find the record with the primary key of the table?) It did this because it would
then use another function called UPDATETEXT to place the data into the Text field, an
approach intended to avoid other kinds of problems with sending large amounts of Text data
across an ODBC connection.
Once this performance problem was discovered, it was easy to rewrite their update routine
to solve the problem. Without the Profiler, there would have been no clue as to why the server
would choke on something as apparently simple as an INSERT statement.
Another great use for the Profiler is as a learning tool. You can find out all sorts of neat
things by trying a function in the Enterprise Manager or from Visual FoxPro, and then looking
at the Profiler to see what happened. For example, imagine that you are not sure how to build a
CREATE DATABASE statement in T-SQL. You can use the Enterprise Manager to create a
new database, and then switch over to the Profiler to see what command(s) the Enterprise
Manager submitted to the server. Here's the output from trying this little test:
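-- Typical of SQL Server 7.0; the database name and file paths
-- reflect whatever was entered in the Enterprise Manager dialog
CREATE DATABASE [Test] ON (NAME = N'Test_Data',
   FILENAME = N'C:\MSSQL7\data\Test_Data.MDF', SIZE = 1,
   FILEGROWTH = 10%) LOG ON (NAME = N'Test_Log',
   FILENAME = N'C:\MSSQL7\data\Test_Log.LDF', SIZE = 1,
   FILEGROWTH = 10%)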
You may also uncover some undocumented features or discover how to do something
programmatically that you thought could only be done through the Enterprise Manager. For
example, we once used the Profiler to help troubleshoot a problem with the Upsizing Wizard
before the source code for the wizard was available.
ODBC logs
Although not our favorite tool for debugging a client/server application, ODBC trace is
sometimes the only way to determine whether the problem lies between Visual FoxPro and
SQL Server. For example, perhaps you believe that SQL Server is receiving an ill-formed
statement that is causing an error that you just cannot seem to track down within either Visual
FoxPro or SQL Server.
We don't jump at the chance to use ODBC logs, because they are tremendously large due
to the detailed information stored therein, and it is quite tedious to wade through them. Because
ODBC drivers are developed by C++ programmers, not Visual FoxPro developers, the logs they
produce are full of hex values, but you can also view function calls and their success codes.
This permits you to view the exact steps that ODBC takes to get data from the server and to put
data back onto it. Armed with this information, you may be able to determine the cause of the
problem you are having and perhaps come up with a workaround for it.
Depending on how you are accessing the server, there are two places where you can
request creation of an ODBC log. First, if you are using a DSN, there is a check box where you
can enable tracing (see Figure 4). Even if you are not using a DSN, you can also use the
Tracing tab to enable an ODBC trace (see Figure 5).
Once you turn on ODBC logging, it will log every statement that occurs across the ODBC
connection until it is disabled. Since this generates a large amount of information, you will want
to turn it off as soon as you have logged the desired functions.
One big problem with using an ODBC trace is the amount of time it takes to create the file.
For example, what if you were to trace the opening of a view that retrieves data from the
authors table in the pubs database? This table has a whopping 23 records, and contains
records that are no more than 151 bytes wide. The table uses mostly varchar fields, so the
record width is variable and is usually much smaller than 151 bytes. Worst-case scenario means
that the table is 3473 bytes (or 3.4K) at its largest possible size.
When this view is opened under normal circumstances, it opens almost instantaneously.
However, when the ODBC trace is enabled, opening the view will take 28 seconds. Clearly, the
additional time is required to report the wealth of information produced by the ODBC trace to
the log; the resulting log file is 178K!
This log file contains several useful bits of information. The first part of the file shows how
the ODBC driver connects to the server, changes the database context, and sets a variety of
options, including the timeout values set by the Visual FoxPro connection definition used by
the view. Once the connection is complete, the log shows that the SQL statement SELECT *
FROM dbo.Authors Authors was executed. The rest of the log contains the steps used to
retrieve the column data types, and their values for each record.
Therefore, you can use the ODBC log to produce a high level of detail on how Visual
FoxPro is communicating through the ODBC driver to talk to SQL Server. However, in our
experience, the ODBC log has been a last-resort debugging tool. In most cases, if the
problem is not something easily traced within the Visual FoxPro debugger, the next step is to
use the SQL Server Profiler.
Summary
In this chapter, you have seen the details of handling errors and how to go about debugging
problems. Errors are handled differently in a client/server application, and regardless of the
development product you choose, you must decide where to actually handle errors. In Visual
FoxPro, you can perform all error handling in the client application and only send clean data to
the server, you can use views and capture TABLEUPDATE() errors, or you can discard views
and use stored procedures on the server and SQLExec() statements.
Of course, you can still use the Visual FoxPro debugging tools to track down any Visual
FoxPro errors. However, when the problems seem to be outside of Visual FoxPro, SQL Server
and ODBC both provide some good tools for watching all of the activity generated by your
Visual FoxPro application.
In the next chapter, you will see more information on how to design a client/server
application in order to keep it flexible, scalable and maintainable.
Chapter 9
Some Design Issues
for C/S Systems
In the first part of this book, you learned about the capabilities of Visual FoxPro as a
client/server tool and the capabilities and features of Microsoft SQL Server. We also
demonstrated why client/server applications can be better than file-server applications.
But before you begin building a client/server application, you'll want to know about the
choices in design that have to be made. This chapter covers the issues raised by moving
an application from file-server to client/server and the options for design, performance
and security.
Microsoft Visual FoxPro is a full-featured product with a fully developed language, visual tools
for handling forms and reports, object orientation features, a great database engine, and many
other tools that make the development process run smoothly. Microsoft SQL Server is a
database and query engine with quite a few administration tools to make things like security
and backups a snap to perform. But SQL Server lacks a front end, and although it has a
language (Transact-SQL), it is not designed to handle everything that a product like VFP can.
The lack of a front end is not a detriment; it's simply a design decision to allow developers to
use products with which they are already familiar.
But as the saying goes, "Familiarity breeds contempt." In this case, the contempt is not for
the devil you know, but rather for the devil you don't know: SQL Server. In this situation,
familiarity with a known product leads you to feel uncomfortable with the capabilities of the
new product. That's not necessarily a bad thing, but with client/server, you may have to rethink
your programming habits. You have to remember that client/server performance beats
file-server performance only when the pipeline carries less traffic. And that is only true when you treat the pipeline
gingerly, restricting access across the network to messages between client and server rather
than massive data transfers.
The question that arises is, just how should you design a client/server application? Where
should the messages be restricted and how? Should the server do everything that has to do with
the data? When should you take advantage of Visual FoxPro's local data speed and language
capabilities? How do you reconcile VFP's database container with SQL Server's database?
Data types
No one would dispute that a database needs a data type defined for each field in a table.
The decision is whether the front-end program should also know about the data type. When
using a remote view in Visual FoxPro, the type becomes known when the view is used, so there
is no question that the proper data type restriction will be handled, except when certain SQL
Server types are used that do not have an exact mapping to the types in Visual FoxPro. These
are the types that allow designers to avoid wasting database space because of the defined
ranges of allowable values. Smallmoney is only four bytes, as opposed to Money's eight bytes,
but both types are converted to Currency (eight bytes) in VFP. The same thing goes for
Smalldatetime. Besides Int, SQL Server also supports Smallint and Tinyint, which are two
bytes and one byte, respectively; both get converted into Integer in VFP. Binary and Varbinary
are converted into Memo (binary) types in VFP and are virtually uneditable directly (though
code can easily be written to convert back and forth), and Uniqueidentifier (which is a 16-byte
binary value presented as 32 hexadecimal characters) is treated as Character data in VFP.
Because of these conversions, it is very easy to forget and allow values that pass Visual
FoxPro's looser restrictions but fail when the view sends the data back to SQL Server.
The issue is: Where should the problem be handled? Since the remote view changes some
data types in conversion, you might choose to use Visual FoxPro code to enforce the tighter
restrictions of the SQL Server data types. This can be done at the view level via the View Field
Properties dialog by setting a Field Validation Rule where needed (as in Figure 1), or in the
forms used for data entry, or even in classes that you create for the fields in the forms. When
using forms or classes, you would put the test into the LostFocus (or perhaps Valid) event of
the control or class.
Figure 1. The View Field Properties dialog showing a Validation Rule entered for the
UnitPrice field.
The complete expression cannot be seen, but a comment explains the validation.
Otherwise, without any validation code on the client side, you would have to
process any errors returned by the server. This means that you would have to learn the
various error codes that might be returned from the server and then write the code in VFP
to handle those errors.
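For example, a minimal view-level guard for a Currency field that maps to smallmoney on the server might look like this (the view name v_products is hypothetical; UnitPrice is the field shown in Figure 1):

* Keep entries within the range of the server's smallmoney type
= DBSETPROP("v_products.UnitPrice", "Field", "RuleExpression", ;
   "UnitPrice >= -214748.3648 AND UnitPrice <= 214748.3647")
= DBSETPROP("v_products.UnitPrice", "Field", "RuleText", ;
   "'UnitPrice is outside the range of the server smallmoney type'")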
The downside of coding data type validations on the client side is that since a data type
is defined in the database, any changes in the data type on the server would require changes
in the code on the client. The issue here would be one of deployment. Although the server
change only necessitates a change in one place, the client changes mean recompiling the
software and then distributing it on every computer where it's used. If you instead process the
server's errors on the client side, a change in structure would not require a rewrite, because the
error handling would still be valid.
Another less favorable way to handle this problem is to restrict the database design on the
server to using only those data types that are directly transferable to the VFP data types, thus
avoiding the range errors that might result.
Nulls
Both SQL Server and Visual FoxPro recognize Null values. A problem only surfaces when a
field that accepts Null values on the server is filled with spaces (for Character fields; for other
data types, there are other values to watch for) on the client. Instead of getting the Null value,
which is what may have been intended, the server will receive the field with the spaces. When
binding a form's controls directly to the remote view, that would normally not be a problem. If
the user does not fill in a value for the field, then the view would place a Null value in the field
on the server. But if the user actually types in a blank of any kind, then the spaces will be sent
to the server.
Handling this on the front end may require special coding so that if a field is left blank,
a Null value is inserted into the field (see the sketch below). The problem here is that a bound control is
more difficult to handle. The other problem is once again how to know the structure of the
database on the client side. In order for you to use VFP to process the situation, you have to
know the database structure of the server, and if there are any changes, then those changes must
migrate to the client application.
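A minimal sketch of such a save routine, with hypothetical view and field names:

* If the user left the field blank, send a genuine Null instead of spaces
IF EMPTY(v_authors.phone) AND !ISNULL(v_authors.phone)
   REPLACE phone WITH .NULL. IN v_authors
ENDIF
= TABLEUPDATE(.T., .F., "v_authors")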
The flip side of Null is Not Null, where a Null value is not allowed. This means that a
value must be returned to the server, or errors occur. When doing an insert using a remote view,
if a field is left blank, then VFP tries to insert a Null value into that field, causing the insert to
fail. For any field that does not allow Null values, either validation has to be done on the client
side, or the error code must be processed from the server.
There is no easy way of handling Nulls on the client side. In any case where you will be
using remote views, you will need to know the Null property of the field on the server. Then it
is up to you as to whether to put the requirements of the field in the view, the form or a class.
Defaults
Defaults on the server can be very handy. They provide the necessary value for a field when no
value is given for that field when a record is inserted from the client. Defaults override the
blank on the client side. (Actually, when a record is added through a remote view, Visual
FoxPro generates an INSERT statement that will only include values for the fields that have
been modified in the new record. That means any fields that are left blank are omitted from the
INSERT statement.) If a field on the server allows Null values and has a default, the default is
used whenever the field is left blank in a new record. In this way a default covers the situation
better than the Null property.
The question is whether or not to put the default value on the client side of the application
as well. The rationale for this is to let the data entry people see the value that would be used.
On the other hand, this requires that the client application be kept informed of any changes on
the server.
You can mirror the server's rules and constraints on the client side of a view by
using the DBSETPROP() function to set a Row Rule Expression. You can also process these
rules via events in forms or classes, or by using VFP code when saving the changes to a record.
Primary keys
Primary keys are used to enforce entity integrity. By definition, the primary key is a value that
cannot be duplicated throughout an entire table. No two records can have the same primary key
value. Therefore, no two records can be exactly alike.
For all practical purposes, primary keys are created and behave the same in both Visual
FoxPro and Microsoft SQL Server. When you designate a primary key, both products create an
index (called candidate in VFP, unique in SQL Server) that enforces the primary key rule.
Primary keys are especially important to have on the server, because a remote view needs at a
minimum the primary key value in order to update existing records.
The source values for primary keys can either come from the data itself (natural) or can be
artificial, generated by your application or the system. Some designers choose to use natural
data as a primary key so that no extra data need be created for that purpose. This may come out
of a natural way of uniquely identifying individual entities within a table, such as name (by
using a combination of the name parts), Social Security number, badge number, part number,
invoice number, or any of several ways that are natural to the data itself.
Other designers prefer to create new fields for a primary key because of compactness or to
prevent changes to the key itself. An artificial or surrogate key is usually a single field that is
either an integer value or a short string. Since primary keys are generally used as foreign keys
when establishing relationships between tables, keeping them short is important as both a disk
space saver and a performance enhancement for quick joins between tables.
Generating keys
There is no intent here to say that one way of generating primary keys is better than another,
but rather simply to explore the issues when generating them. If the key is being created by the
application, or derived from the way the application is used, then there is not really a problem.
The code for creating that key will be in the VFP program, so that when a new record is created, the
new value for the primary key will be inserted along with the rest of the data. The only
important note is to remember that SQL Server may also be set to enforce uniqueness, so if
there is a possibility of error, it will have to be handled by the application code as well. You
will also have to be sure to set the primary key as updatable in the view definition.
On the other hand, if you use some mechanism on the server, then there are definite
repercussions. The first is that you will not know the value of the new primary key until after
the insert has completed. In fact, using a remote view may necessitate a REQUERY() of the
view after the insert has completed. This can cause quite a problem in terms of performance,
and there is no shortcut around it. The performance issue is that the insert is handled in the
background by VFP, and since the new primary key is only on the server, it cannot be seen on
the client. Normally a refresh of the current record would show any changes to a record in a
view, but since the REFRESH() function requires the primary key on the client, it will fail to
show the new record.
There are ways of handling this situation, but they all require multiple trips between the
client and server. If you are using the IDENTITY property (described in Chapter 3,
Introduction to SQL Server 7.0) for creating primary keys in a table, you could use the
following method. When you add a record to a table with the IDENTITY property, the
@@IDENTITY function will return the last value generated by an insert for that connection.
(This was also described in Chapter 3.)
Regardless of how you implement this technique, you must first find out the connection
handle being used by the cursor where the results of the view are stored. Next, generate an
insert command and use SQL pass through to send the command to the server. Finally, use SQL
pass through to send a command that will return the @@IDENTITY value in another cursor.
The following code demonstrates this technique:
* Get the connection handle used by the view's result cursor
lhConnection = CURSORGETPROP("ConnectHandle")
* Send the INSERT to the server over that same connection
lnOK = SQLExec( lhConnection, "INSERT INTO TableName (col1, col2, col3) " + ;
   "VALUES (12, 'First', 'Last')")
IF lnOK = 1
   * Ask the same connection for the key the insert just generated
   lnOK = SQLExec( lhConnection, "SELECT @@IDENTITY AS PriKey", "IdentTable")
   IF lnOK = 1
      SELECT IdentTable
      lnNewRec = PriKey
      USE
   ENDIF
ENDIF
After this, lnNewRec will have the new primary key value generated by the IDENTITY
property on the server. You can place the value into the primary key field of the view, but you
will not be able to edit that record until after a REQUERY() of the view has been done.
Note that you can also use the TABLEUPDATE() function after you INSERT data into a
view to provide the same INSERT INTO statement as described previously. However, you
must still determine the connection handle and use a SQLExec() call to grab the
@@IDENTITY value, and you will still need to REQUERY() before being able to edit the
record in the VFP cursor.
This technique works even if you request the IDENTITY value before
committing the changes to a transaction. Therefore, if you insert a
parent record after beginning a transaction, you can still use SELECT
@@IDENTITY to get the foreign key value that you need for any child
records. However, if you ROLLBACK the transaction, the IDENTITY value
is essentially lost.
Referential integrity
There are various ways to enforce referential integrity on the server and a couple of ways in
Visual FoxPro. The difference is that although SQL Server supports what is known as
Declarative Referential Integrity (DRI) where the integrity is built into the database structure,
VFP does not. Both server and client can use triggers for referential integrity.
Regardless of which method is used on the server, there is virtually nothing that can be
done on the client to prevent a referential integrity violation. Since these problems occur
because of referencing another table in the database, the data modification must pass through to
the server in order to get the error. The only thing that you can do is program for the error, and
handle it after the server returns the result.
DRI/foreign keys
Just like Visual FoxPro, Microsoft SQL Server supports the creation of relationships via the
CREATE TABLE or ALTER TABLE commands. The difference is that in VFP, the options
only create a defined relationship that is a precursor to using the Referential Integrity (RI)
Builder, whereas in SQL Server, the relationship becomes the referential integrity.
You can establish a relationship in both products by creating a foreign key in a child table
that references the primary key in a parent table. In SQL Server, this is called DRI, and it
establishes a restrictive relationship. DRI will cause an error when an attempt is made to delete
a record in the parent table that has records in a child table, or when the primary key of the
record in the parent table is modified and that record has related records in a child table. This
means that there is no way to institute cascading deletes or updates when using DRI.
One way that you could do cascading deletes is by deleting the child records first via some
client code. This is not easily performed when using remote views, but it is one way of handling
it from VFP.
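Here is one possible shape for such code, using SQL pass through and a manual transaction (the pubs tables stand in for your own):

* Delete the child rows, then the parent, inside one transaction
= SQLSETPROP(lhConn, "Transactions", 2)    && manual transaction mode
IF SQLExec(lhConn, "DELETE FROM titleauthor WHERE au_id = ?lcId") > 0 ;
      AND SQLExec(lhConn, "DELETE FROM authors WHERE au_id = ?lcId") > 0
   = SQLCOMMIT(lhConn)
ELSE
   = SQLROLLBACK(lhConn)
ENDIF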
Triggers
Both client and server support the use of triggers for handling referential integrity. In Visual
FoxPro it is possible to use the RI Builder to make the job of code generation easier, but there
is no such mechanism in SQL Server. There you would have to write Transact-SQL code to
handle the situation. That means you will write code for the Update and Insert triggers on the
child table, and Update and Delete triggers on the parent table. Note that the Upsizing Wizard
(covered in Chapter 5, Upsizing: Moving from File-Server to Client/Server) will write basic
RI code into the triggers for you, but only if you have selected the correct options and
employed the RI Builder in your VFP database.
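A rough sketch of a cascading Delete trigger on a parent table might read as follows (the table and column names are illustrative):

-- Cascade a delete from the parent (category) to its children (products)
CREATE TRIGGER trd_category ON category FOR DELETE
AS
DELETE products
FROM products INNER JOIN deleted
   ON products.category_id = deleted.category_id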
When a server-side constraint is violated, the update fails with SQL Server error 547,
which is a constraint violation. By examining the message in element 2 of the AERROR()
array, you will find out what statement caused the error, which constraint, by name, was
violated, and the database, table and column names that are involved.
Even though you may prefer duplicating the rules on the client, transferring the
rules cannot cover every situation. Sometimes the modification has to go to the server anyway
in order to discover the error, as with referential integrity. Sometimes the problem is simply that
the tools discussed just don't do the job by themselves. Something else is going to be needed.
That something should also handle errors at the place they happen: the server.
Stored procedures
You can write code on the server that can handle all of the operations that you want to do for
the client/server application, or you can write code just to handle some of the operations, such
as data modification. But when you start to use code on the server, you will lose the ability to
use all aspects of certain tools on the client, specifically remote views.
Remote views are SELECT statements that return data from the server to the client, and
when they are used for updating as well, they will also generate the data modification
statements automatically. These statements are the INSERT, UPDATE and DELETE
commands that are common to most SQL implementations. The problem is that you have very
little flexibility in dealing with the data (it's automatic) and a lot of headaches wrapped around
error handling. One technique that you might have already thought of is that the remote view
does not have to be updatable. Instead, you could use it for returning the data from the server,
and then manually create the data modification statements, which are sent to the server via SQL
pass through. But that does not eliminate the error handling or the explicit knowledge needed of
the servers data integrity mechanisms.
This is simply a choice. The alternative is to use code on the server in the form of stored
procedures. Just as in Visual FoxPro, stored procedures are stored in the database. That way,
they are available for use by any client application. Stored procedures on the server can handle
all aspects of data retrieval and data modification.
Within a stored procedure can be all of the error handling that you would need when using
SQL Server. This way, the nature of the errors is known at the place where the errors occur
instead of at the client. The stored procedure can then either handle the error or return an error
code to the client application, a code defined well enough that you know exactly how to handle
the error. Although this might seem to be the same thing as programming the validation
code in the client, the difference is that changes can sometimes be isolated at the server without
having to rewrite any code on the client. The documentation for the stored procedure would
indicate what inputs were required and what the return codes would mean.
Stored procedures also create a solution to the insert problem that was previously
described when allowing the server to generate a primary key. By using stored procedures,
all of the issues are handled at the server, and the procedure returns the new key to the client
as part of the way it works. This avoids the extra trip to the server to find out what key
was generated.
Basically, there are two ways that you can use stored procedures: either through SQL pass
through commands or via ActiveX Data Objects (ADO). With SQL pass through, you are only
slightly limited in what the stored procedures can do for you. ADO, on the other hand, provides
more flexibility with stored procedures, but forces you to add some code to handle the relative
lack of integration between ADO and VFP.
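Suppose, for instance, that each data modification procedure finishes by reporting its outcome as a result set, something like this final statement (the column names are arbitrary):

-- Last statement of a data modification procedure: report the outcome
SELECT 0 AS ErrCode, 'Update completed successfully' AS ErrMsg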
This would create a one-record cursor with two fields, one with the code indicating
success, failure or whatever, and the other with an explanatory message. There might be other
fields for special cases, such as stored procedures that would handle INSERT operations, and
would return a generated primary key value for the new record.
By definition, the SQLExec() function in Visual FoxPro creates a cursor with the alias of
Sqlresult, but the third parameter allows you to specify the name of the alias to use for any
cursors generated by the command. If the stored procedure generates multiple cursors, then the
aliases will be the selected name followed by numbers, where the suffix 1 marks the second set
of data returned from the server.
When using SQL pass through, you must create the entire stored procedure command, as in
the following code:
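* The procedure name and parameter values here are illustrative
lcCommand = "EXECUTE procSalesByCategory 'Beverages', 1998"
lnOK = SQLExec( lhConnection, lcCommand, "SalesCat")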
In this example, the command to run the stored procedure, the name of the procedure and
the parameters all became one string variable that was then passed to the server. As you can
see, any parameters must be a part of the string itself. When passing character values, you will
need to enclose them with single quotes within the command. SQL Server supports optional
parameters. That means that not all parameters have to be defined when executing the stored
procedure. But if you are passing the parameters by position, as in the preceding code, you
cannot skip any parameters. In that case, or in any situation, you can use named parameters so
that order does not matter:
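* With named parameters, the order no longer matters
lcCommand = "EXECUTE procSalesByCategory @CategoryName='Beverages'"
lnOK = SQLExec( lhConnection, lcCommand, "SalesCat")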
This example assumes that there is a parameter with the name of @CategoryName. All
parameters in SQL Server start with an @.
Just as views can be parameterized in Visual FoxPro, so too can SQL pass through
commands. This is done simply by placing a ? in front of a variable name. If the variable name
does not exist when the SQLExec() function is executed, an input dialog box will appear
prompting the user to enter the value. Normally, you would create the variable just before
execution based on some other form of input.
lcCatName = THISFORM.txtCategory.Value
lcCommand = "EXECUTE procSalesByCategory @CategoryName=?lcCatName"
SQLExec( lhConnection, lcCommand, "SalesCat")
ADO
A little more flexibility results when ADO is used instead of pass through queries. The problem
with SQL pass through is that return codes must be handled through a cursor or result set. SQL
pass through does allow OUTPUT parameters, but does not allow for return codes from
procedures. (A procedure can return a single integer value, but SQL pass through gives you no
way to retrieve it.) This is somewhat limiting, but by using ADO, that problem is overcome.
Without getting into an entire discussion of ADO (as it is covered more completely in
Chapter 12, ActiveX Data Objects), you should understand how the ADO command object
handles stored procedures. In ADO, the command object has the ability to handle stored
procedures and their parameters in a more object-oriented way. Command objects have a
property for storing the name of the stored procedure (CommandText) and another property to
indicate that the command is a stored procedure (CommandType). The command object also
has a parameters collection, used in place of passing variables from Visual FoxPro.
The parameters collection contains a parameter object for every parameter passed into a
stored procedure, and even one for the return value. The advantage of this is that if a parameter
of a stored procedure is defined as an OUTPUT parameter, then after the command object has
been executed, that parameter object will have the output value. This way, stored procedures
can be designed to return information in a more natural way than a cursor. It makes sense that a
procedure that modifies data should not return data, and by using ADO, you can avoid that
sticky situation.
The downside to ActiveX Data Objects is that they cannot be used in the same fashion as
remote views. Visual FoxPro does not have an ADO tool to make development with this latest
technology easy. As a result, you would have to do a lot more coding than with remote views.
ADO does have a recordset object that is a little like the cursors in Visual FoxPro, and they can
even be updatable. Unfortunately, VFP cannot use recordset cursors (they're called cursors
as well) in the same way as the temporary data returned via ODBC. Instead, a recordset cursor
exists in the memory of the client workstation, and extra code would be needed to populate a
local cursor. Also, although recordset cursors can be updatable, they work differently from
remote views, usually causing a higher resource drain on the server.
ADO will be covered in more detail in Chapter 12, ActiveX Data Objects.
The administrative tools of SQL Server, especially the Enterprise Manager, have made the
day-to-day tasks easier and easier to handle. As a result, a DBA may feel that SQL Server is
strong enough and has enough built-in features to make data integrity the province of the
server; in other words, that all data integrity needs are met at the server.
This is the biggest issue in a client/server design, balancing the experience of the
different parties to achieve the goal of a well-designed, robust and secure database. One
of the things that you may encounter when moving to a client/server design is that the
server needs more administering than the typical desktop database solution. Therefore it is
important to understand the needs of other parties who will be or are already involved in a
client/server database.
The answer to how integrity should be handled is not a black and white decision. It is not
all or nothing, but rather a balance where the strengths of the client are balanced with the
strengths of the server. By now you should realize that it is impractical to try to design the
client side without some knowledge of the server design. So even though you would like to
have all data integrity handled at the server so that the client application can be designed once,
there will be modifications on the server that impact the client.
In the rest of this chapter, we will examine some other issues regarding performance and
security that are also part of the database design, and then in the next chapter, we will present
choices and recommendations that will help you decide.
Choosing indexes
The biggest impact on the server in terms of performance is indexing. If there are no indexes,
the server must read all records of a table to search for data. The same goes for data
modifications, which must first find the record to modify. Although you're happy to have the
server handle data retrieval, without indexes, any server will be brought to its knees in no time, and all
the presumed benefits of client/server will be lost.
Choosing indexes in SQL Server is a complex issue because there will be many different ways
of querying data. The two different types of indexes, clustered and non-clustered, were
explained in Chapter 3, Introduction to SQL Server 7.0. What you need to know now is how
to choose your indexes.
The brute-force way to find out which indexes are useful is through trial and error. First
you create one or more indexes, and then you find out whether your queries will use any of
them. If there are any unused indexes, then you should drop them because they will just add
overhead to maintenance without giving you any benefit.
Rather than the trial-and-error approach, Microsoft SQL Server provides some tools that
can help you find good indexes. The tools all reside within the administrative applications
installed with SQL Server. (You can also install these tools on any client system.)
In the Query Analyzer, there are two tools that can help you. The first one is the Graphical
Estimated Execution Plan. This can be selected from the Query menu or from the toolbar. You
can also specify that the execution plan be displayed along with the execution of a statement.
By examining the graphical plan, you will discover which indexes, if any, are being used for a
statement or group of statements (a batch). Figure 2 shows a sample execution plan.
This plan shows the steps that the query engine of SQL Server will perform in order to
carry out the command. When you position the mouse pointer over any of the icons, a box will
appear explaining the nature of the step, various statistics on that step and the argument used to
perform it. Through this tool, you will begin to understand the thinking behind SQL Servers
optimizer and then be able to choose good indexes.
Furthermore, the estimated execution plan will actually flag, in red, any steps where you have
no statistics, and offer a suggestion for creating them. Statistics are what the SQL
Server optimizer uses in determining whether an index is useful to optimize a statement. When
you right-click on any icon, the context menu that appears will allow you to build and examine
indexes on the server.
Another tool in the Query Analyzer is found in the Query menu: Perform Index Analysis.
When you use this option, SQL Server will offer suggestions for indexes that would improve
the performance of the statement being analyzed. After suggestions are offered, there will be an
opportunity for you to create those indexes by accepting the commands that the Index Analysis
shows (see Figure 3).
The problem with the previous two ways of analyzing queries is that they generally do not
take into account the overall usage of the data. Determining which index, if any, should be the
clustered index, and how you should build your non-clustered indexes (composite,
unique and so forth) is very difficult because there is no way to predict exactly all of the ways
that the data will be queried. To that end, the Enterprise Manager has a tool called the Index
Tuning Wizard, which can be a great help.
Before you use this tool, you should either create a script (text file with a .SQL extension)
with all of the queries that are being used, or a workload file from the SQL Server Profiler. The
first method is almost the same as using the Query Analyzer Index Analysis tool, so the real
benefit comes from the second method.
The SQL Server Profiler is a tool that can be used to trace the commands that come into
SQL Server. When creating a trace, you can specify what to include or exclude from the trace.
By specifying a particular table in a database, the trace will only include statements that
reference that table. By saving the results into a workload file (which can be either a table in the
database or a file on the disk), you will have a picture of how that table is being used. This
workload table or file can then be used as the input for the Index Tuning Wizard. This way, you
will get recommendations based on the actual usage of your data.
Entity and referential integrity should be on the server, but the business rules (the user-defined
integrity) could be on either, or both. It is better to try to centralize the business rules, so that
they can be changed more easily, but there are other performance issues to consider.
Bandwidth
Along with managing the server and client, you also have to be concerned with the network as
well. Bandwidth refers to the capacity of a network, which can be impacted by many factors.
These factors include the physical layout, the type of cabling used, the distance from clients to
servers, and the number of users on the network. But one of the more important factors is the
design of the client/server layout.
Whatever the physical characteristics, the one thing that is certain is that
bandwidth is a finite resource and should not be abused. You want to keep the amount of
information passing across the network to a minimum; that way, any network concerns that
remain are purely physical ones. In order to keep network use down, try not to download data
unnecessarily, and try to keep trips between client and server as few as possible.
To help limit the amount of downloaded data, only specify the fields that are absolutely
needed on the client. Furthermore, limit the number of records returned through the use of the
WHERE clause, and make sure users are required to specify what it is they are trying to find.
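For example, a parameterized SQL pass through query like this sketch (using the sample Pubs database; lhConn is assumed to be an open connection handle) downloads only the columns and rows the client needs:

lcCity = "Oakland"   && value supplied by the user's search criteria
SQLExec(lhConn, "SELECT au_id, au_lname, phone FROM Authors " + ;
        "WHERE city = ?lcCity", "crAuthors")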
Reducing the number of trips between the client and server is a bit trickier. As you've
already seen in the data integrity section, sometimes you have to go to the server to validate
data entry, which means that if an error occurs, you end up with two extra trips. One way of
helping with that is to keep the number of error trips to a minimum. Rather than reporting one
error at a time, it is more efficient to gather up all the errors committed by the data entry person
and report them back together, eliminating multiple discovery trips. Stored procedures can do
this, as can the multiple rows returned by AERROR(). And as mentioned earlier, stored
procedures also minimize the amount of information that needs to be sent to the server in the
first place, so they can be an even bigger help in reducing trips as well.
Scalability
Scalability refers to the ability of an application to handle larger and larger numbers of users.
This goes hand-in-hand with bandwidth reduction: the more you design to protect
bandwidth, the more users the system will be able to handle. But scalability is also a matter of
design in its own right.
In the past, using only Visual FoxPro, it was easy to create forms with controls that were
bound directly to the data. Given the size of those applications and the way that VFP pulls the
information into each client system's memory, this works well. But when all of the data is being
shared among many different client systems and the data stays in the memory of the server, then
binding forms directly to the data hurts scalability.
Admittedly, remote views mean that client forms are rarely bound directly to server data,
but the potential for abuse is there, and it should be avoided at all costs. One way to
look at it is to ask yourself whether the design you're using will work well for two users, 10 users,
100 users, 500 users, 1,000 users and more. By keeping the number of potential users in mind,
your designs will be more efficient.
Data location
This last area of performance is the tricky one, and you'll soon see why it is important. There
are times when the data should be stored on the client instead of the server. That's right: even
though the server is where the data is kept, there are times when you want to store data on the
client. This is done not as a permanent solution, but rather to reduce network traffic. For
example, suppose the server had a table of state and province postal codes. These are not likely
to change; therefore, it's a waste of network bandwidth to download this table more often than
it's modified.
The same is true to some degree for any table that is relatively stable. We don't mean that
it has to be completely stable, just that it has to be data that is modified infrequently. This way,
the data can be stored on the client and only downloaded when necessary. This enables you to
move a little bit more of the validation to the client system, but this time, rather than being
hard-coded, it is data-driven validation, based on the data stored on the client.
The only questions, then, are when the data should be downloaded, and how you will know
when that data has been modified. There are several options for the first question: it can be
done every time the application is launched, the first time the application is run during a
calendar period, or when the data is modified. There are many ways that modification can be
detected, such as a special smaller table that holds datetime values recording when the data was
changed, or sending the changes automatically through replication.
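A minimal sketch of such a check, assuming a server table VersionStamp(TableName, LastChanged) and a one-record local table appinfo.dbf that remembers the datetime of the last download (all names are illustrative):

SQLExec(lhConn, "SELECT LastChanged FROM VersionStamp " + ;
        "WHERE TableName = 'States'", "crStamp")
USE appinfo IN 0
IF crStamp.LastChanged > appinfo.tStatesStamp
   SQLExec(lhConn, "SELECT * FROM States", "crStates")
   SELECT crStates
   COPY TO states           && overwrite the local lookup table
   REPLACE appinfo.tStatesStamp WITH crStamp.LastChanged IN appinfo
ENDIF
USE IN appinfo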
Security
The last design issue is security: making sure that only authorized users have access to the
data. Although you may have used application-level security in the past, Microsoft SQL
Server is so good at handling this that you'll definitely want to use the built-in tools to
administer security.
Client application
Just as in the days of writing VFP-only applications, the client program may still require a user
to supply a login ID and password. This time, however, the client side will merely pass that
information on to the server, where the user will be authenticated.
If you are using SQL Server in a Windows NT network, then by using NT authentication,
the user will not even have to worry about a login procedure. NT authentication means that the
user was already authenticated when they logged onto the network. So if their network user ID
is registered with SQL Server, or if any Windows NT group they are a member of is
registered with SQL Server, they'll have access.
If you cannot use NT authentication, you can handle logins by storing the necessary
information in memory and using that to connect to the server in one of several ways. The first
is by creating a data source through the ODBC Data Source Administrator. Then, when
creating the connection, specify the data source name. The data source can actually specify the
login information, so if everyone using the application has been validated through some other
means, then perhaps the user need not specify any login information. But this would be very
risky, as anyone who can access the client computer would be able to gain access to the server.
Another way is directly through the ODBC driver, using what is known as a DSN-less
connection. Login information must be specified using this technique. Finally, if you are using
ADO, then you could use the OLE DB provider for SQL Server. Using this technique, you'll
need the login information as well.
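Here are sketches of the DSN, DSN-less and NT authentication connections just discussed; the server, database, login and DSN names are placeholders:

* ODBC data source created in the ODBC Data Source Administrator
lhConn = SQLConnect("MyAppDSN", "AppUser", "secret")

* DSN-less connection directly through the ODBC driver
lhConn = SQLStringConnect("DRIVER={SQL Server};SERVER=Production1;" + ;
         "DATABASE=MyAppDB;UID=AppUser;PWD=secret")

* NT authentication: no login information is stored on the client
lhConn = SQLStringConnect("DRIVER={SQL Server};SERVER=Production1;" + ;
         "DATABASE=MyAppDB;Trusted_Connection=Yes")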
Application roles
There's a new way to manage security in Microsoft SQL Server 7 through a feature called
application roles. Unlike server and database roles, application roles do not have members.
Instead, an application role has a password. Since almost all database activity, other than
administration, will be handled through your client/server application, there is no need to
manage standard roles and their memberships. Instead, have the client application log in to the
server and then set the application role. Once the application role is set, the user will be unable
to do anything that is not allowed to the application role.
Even if a user has permissions that the role does not have, those permissions will be unavailable
while the application is running. The user's membership in any standard roles has no impact on
the application role, because the application role overrides the connection's permissions.
The Transact-SQL statement that sets the application role is the following:
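* The role name and password are placeholders for your own
lnResult = SQLExec(lhConn, "EXEC sp_setapprole 'MyAppRole', 'secret'")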
sp_setapprole is a system stored procedure that activates the application role; here it
is submitted through a SQLExec() call. There is an option to encrypt the password as it is
sent to the server. The users will still need to be able to authenticate or log on to the server,
but they will need absolutely no database permissions. All the permission setting will be
with the application role, and those should be set to match up with the activity of the
client/server application.
Summary
In this chapter, you learned about the issues around client/server database design. You learned
what the various options are when planning out an application such as this, with special
attention being paid to those areas where there are conflicts between client design and server
design. When planning out the data integrity issues, keep in mind the pros and cons of where
validation is done. When validation is performed on the client, the issue is deployment and
recompiling when changes are made. When validation is done on the server, the issue is
network traffic and server overload.
You learned that stored procedures aid in handling validation, security and error
processing, as well as cutting down on network traffic. You also saw the use of stored
procedures through ADO and the advantages that ADO brings.
Client/server design is not just choosing between client and server; it's also making
database decisions that will impact performance and security.
In this chapter, the options were presented so that you can make informed decisions. In the
next chapter, you will learn about the care and feeding of a client/server database.
Chapter 10
Application Distribution
and Managing Updates
In Chapter 9, you learned about the options available in designing a client/server
application. In this chapter, youll learn how to plan development so as to make the
deployment as easy as possible, the options for deploying, and the management of
changing the application, whether changing the server side, the client side or both.
Planning for distribution and modification is no less important than any other aspect of the
development process. You have to provide your users with the ability to install your finished
project anywhere they choose, and you must provide a usable mechanism for the distribution
of all parts of the application. Then you have to make sure that you have plans in place to
handle changes to any or all parts of the product. Finally, during this process you must also
devise a method of version control, which encompasses not only the code but also the server
side of things. So planning for change is clearly a much bigger job in client/server than in
one-tier applications.
In this chapter we will look at the planning stages of development, and then examine the
various ways of deploying the client side of the application. We'll explore the actual
distribution of a database design for the server side, and wrap up with a discussion of updates
and version control.
Client/server development
What creates the challenge in planning for distribution and change is that you are working with
a client/server application. This may sound simple, but it means that one part of the application
resides on the client and the other part resides on the server. (Just think what happens when you
start designing n-tier applications, with three or more parts!)
Again, this might seem obvious, but there are a lot of issues that can make the development
process a true headache if the obvious is overlooked. Among other things, if you develop the
entire application on one computer, remember that eventually the parts will be separated; no
part of your code should assume anything regarding locations. In the following sections,
well examine the challenges of planning for relocation in both Visual FoxPro and Microsoft
SQL Server.
Development environment
Before beginning the development process, it's a good idea to examine the environment in
which the development will be done. Everyone who is a part of the development process will
be working on his or her own computer, so it's important that the code and the database be
shared in some fashion during this time. For this reason, it is important to have some kind of
source control software. Even if you are the only developer involved, source control software
can be a wise investment if it also supports version control.
You should also never make the mistake of assuming that the locations being used for
development will remain the same or even have the same relationship once the application is
set up for production.
During development, connections to the server can be made through Visual FoxPro Connection objects or through ODBC data sources (DSNs). Furthermore, you can use either Connection objects or DSNs when using the
SQLConnect() VFP function.
The problem with using ODBC DSNs is that they are defined on the computer that is doing
the connecting. There are three types of ODBC DSNs: user, system and file. Both user and
system DSNs reside on the client computer itself, and even though a file DSN can be relocated
to other computers, the server name is still definitely a part of it. This means that the DSN used
during development has to be reset at installation in order to point to the production server.
If you use a file DSN, then you can modify it at installation with the server name. That's
because a file DSN is a text file containing a line that reads "SERVER=<name of server>". By
using the low-level file I/O functions in Visual FoxPro or the Windows Scripting Host object
library, you can create or duplicate a file DSN during the installation process. There are other
issues that might have to be addressed in a file DSN; these are covered in the next section.
However, the biggest problem in using file DSNs is that, by default, Visual FoxPro does not
see them in the database container tools.
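For example, a sketch that writes a minimal file DSN during installation; the server, database and path would be captured from the user, and the values shown are placeholders:

lcDSN = "[ODBC]" + CHR(13) + CHR(10) + ;
        "DRIVER=SQL Server" + CHR(13) + CHR(10) + ;
        "SERVER=Production1" + CHR(13) + CHR(10) + ;
        "DATABASE=MyAppDB"
STRTOFILE(lcDSN, "C:\Program Files\MyApp\MyApp.dsn")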
Another issue to consider in the development process is the aforementioned possibility that
the database might have a different name. This is also addressed by using a file DSN or SQL
pass through functions. But it's not always that simple. There are applications where there are
multiple identical databases; that is, they have the exact same set of tables and stored
procedures and everything else. This type of application is used in cases where a business
might have to handle the exact same type of data for many clients, such as accountants.
You can avoid using DSNs by instead providing a complete connect string. To create
remote views that use DSN-less connections, you must create a Connection object in a Visual
FoxPro database container, and then provide the connect string in the properties of the object.
With SQL pass through, you can specify a connect string with the SQLStringConnect()
function, allowing you to avoid the need for a VFP DBC. Of course, all of your server
communication would then have to occur through SQL pass through.
Hopefully, you now appreciate the necessity of avoiding hard-coded references to servers
and databases, and see the wisdom in devising alternatives for creating such references. One of
the simplest techniques is to make the whole connection process data-driven. For example, you
could create and use a local table that stores server and/or database names. When the
application is launched, the table values are read and stored in variable names, or properties,
that are used throughout the code. During installation, a procedure captures the server name,
the database names and their locations, and stores them in the local table. The fact that this
table may be duplicated is not a problem, as long as it is accessible to the user who did the
installation, or other users if the app is run from a network.
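A sketch of this data-driven approach, with all names illustrative:

* appconfig.dbf is a one-record local table populated by the installation routine
USE appconfig IN 0
lhConn = SQLStringConnect("DRIVER={SQL Server}" + ;
         ";SERVER=" + ALLTRIM(appconfig.server_name) + ;
         ";DATABASE=" + ALLTRIM(appconfig.db_name) + ;
         ";Trusted_Connection=Yes")
USE IN appconfig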
Deployment models
Once you've written and tested the application, how do you deliver it? Before exploring the
various options available for distributing the server side of the application, lets look at the
various methods of sending the applications out to the users.
Traditional
In the old days (before Windows, before components, before multi-tier and, most of all, before
the Internet), applications were deployed very simply: you copied them. You'd develop the
program using whatever development platform you wrote in, copy the files to a floppy disk and
send or carry the disk to the target system, and then copy the files from the floppy to the user's hard disk
or network drive.
The only problem was that in order for this to work, the program had to be compiled into a
stand-alone executable. With products like FoxPro, you had to make sure that the user had a
copy of the same package that was used for programming. It did get simpler, in a way, once you
could distribute a run-time version of your development package, with no extra cost! But
whether it was a run-time or a development version of the language, you started down a
slippery slope. Soon, with Visual FoxPro, it wasn't enough just to install the code and a run-time
package; other supporting files were also needed. That requires you to be very careful about
matching up the development platform with the users' platforms.
Components
Once programming no longer involved just one development product, it became necessary to
find a way to break the application apart, and a way to make sure all the parts would find their
way into the delivered application.
Components came to the rescue. Components are just a way to break a program into parts.
Each part performs its own tasks so that no part depends on any particular set of other parts (i.e., they have
low dependency on each other). Of course, after years of trying to write reusable code, Visual
FoxPro came along and introduced you to classes, and suddenly, reusable code became the
rule. So in using VFP, you started to break down the parts of an application into components,
such as user interface, business rules and data handling.
Having done all that, VFP also was brought into the Microsoft component fold, allowing
you to take advantage of third-party components, even components not written with Visual
FoxPro. How this all came about is a part of the never-ending saga of finding better ways to
write and maintain programs, while making them match up with the users rather than forcing
users to match the code.
Microsoft's answer to components is the Component Object Model (COM). COM works
by registering the details of every component that's installed in your Windows operating
system. This is handled through the Windows registry automatically whenever any component
software is installed on a computer. Then, when another product needs to use that component,
Windows looks up a code known as the class ID (or CLSID) in the registry to determine where
the actual software is for that component.
When you create an application, you can take advantage of these components and even
create your own. Then, when the project is turned into an application, you must collect all
of the components youve used and make sure that they get installed and registered on the
target computer.
The advantage of using components is that they can be modified and reinstalled without
your having to recompile the entire program. In this way, minor modifications to these parts
can be made independent of all other parts. Using the old way, everything was written into one
big program, requiring you to redistribute that big program every time a change was made.
Server
The new element in todays environment is that the server side also has to be deployed. This
includes the database design, the seed data for the database, script files that are used with the
application, and a plan for importing data from other sources to start the database. Unlike the
client side of the application, there is no automatic way of packaging the parts of the server
side. The server can consist of many separate files that have no real connection to each other.
This means that you have to use a technique similar to the old way of doing things. That is,
some files may have to be transported (copied) from the development platform to the servers
production platform.
In the next section, we will explore the various challenges of deploying the server, and the
different ways of actually accomplishing this.
First installation
If there has never been a SQL Server installation, then you may be responsible for setting it up.
That means obtaining the Microsoft SQL Server software and the correct number and type of
licenses. Yes, unlike the client side of the application, which in many cases can be distributed
royalty-free, Microsoft SQL Server requires licenses. For information on licensing, see Chapter
3, Introduction to SQL Server 7.0.
There are a number of other issues that you must consider when installing the server, such
as the character set, the sort order, the Unicode collation sequence, the accounts to use for the
MSSQLServer and SQLServerAgent services, and which components need to be installed.
These are the same sorts of things that you should have encountered when you installed
Microsoft SQL Server in the development environment.
The recommended method of handling a new installation for a client system is to use a
batch file and an initialization file. The initialization file will contain all the instructions needed
by SQL Server's setup program so that the install can be done unattended. There are several
sample initialization files that ship with SQL Server, but if you need to create one of your own,
you can launch the setup program with an option that will save your choices into an
initialization file. You can even cancel the install at the last step and the file will still be
created. The last step will be to include two extra sections needed for the install process. These
are the [SdStartCopy-0] and [SdFinish-0] sections. You can find examples of these sections at
the beginning and end of the sample files.
The following command creates the initialization file:
setupsql.exe k=Rc
This will create a file called setup.iss in the \Windows or \WinNT directory. To use the
initialization file that was just created, use the following code (the paths shown are examples):
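start /wait D:\x86\setup\setupsql.exe -f1 C:\WinNT\setup.iss -SMS -s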
The start /wait command together with the -SMS switch prevents control from returning to
the command prompt until the setup is finished. The -f1 switch must specify the
full path and name of the initialization file that you created. The -s switch causes setup to run in
silent mode, never presenting any screen to the user. You must include a full path to the
setupsql program.
If you are controlling the installation through a batch file, then you'll also need to allow the
user to choose the location of the SQL Server software and databases. That choice will be
for both the computer where SQL Server will reside as well as the folders where the program
and databases will be placed.
Once the Microsoft SQL Server installation is done, your server-side database can
be installed.
Prior existence
If SQL Server is already in place at the location of your client/server application, then the entire
process can be as simple as having the user point to the server, and then choosing the location
of your database. The consideration here is whether or not there is already an active SQL
Server installation at the target site.
If there is an active server for other databases, then security is an issue. In order for your
installation to be successful, the person or job doing the installation must have the rights to
create a database, or be a system administrator. If there is already a database administrator at
that site, then it's possible that the installation would have to be done in conjunction with that person
or department. There are SQL Server facilities where security is maintained very tightly and
permissions to act as a system administrator are severely limited. This means that you will have
to coordinate the software installation with the database administration staff.
SQL pass through
One way to create the database is through the SQL pass through functions, connecting with
the SQLConnect() function, the SQLStringConnect() function, or an existing remote view.
With the first method, there are two options. You can use a predefined Connection object
from a database container, or you can supply the ODBC data source name, followed by a login
ID and password. The second method uses a string that contains all of the information needed
by the server in order to connect. Both of these connect functions will return a connection
handle that will be used in future SQL pass through functions.
The last method first connects via a remote view; then, by querying the resulting cursor's
ConnectHandle property through the CursorGetProp() function, the same result is achieved.
Here are samples of these three methods:
hSQL = SQLConnect("MyServer","Installer","password")
hSQL = SQLStringConnect("DSN=MyServer;UID=Installer;PWD=password")
USE MyRemoteView
hSQL = CURSORGETPROP("ConnectHandle")
The connection handle is then used in subsequent calls to the server as the first argument in
all of the SQL pass through functions. The only function that can be used with data definition
language is the SQLExec() function. This function takes two required arguments and one
optional argument. The first argument is the handle; the second is the SQL statement to be
executed on the server. This statement can be a parameterized query, similar to parameterized
views, so that values for the statement can be supplied through other variables. The third
optional argument is the alias name for the result set(s) returned from the command, if any. By
default the alias used is "SQLResult", but you can specify any name.
After adding some tables, you could use the SQLTABLES() function to download the
names of the tables and/or views that you've created in the database. Your program might do
this to check that all of the objects were created as desired.
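For example, a sketch of such a check, with an illustrative table name:

* Create a table on the server, then confirm that it arrived
lnOK = SQLExec(hSQL, "CREATE TABLE Customers " + ;
       "(cust_id int PRIMARY KEY, company varchar(40) NOT NULL)")
lnOK = SQLTables(hSQL, "'TABLE'", "crTables")
SELECT crTables
LOCATE FOR UPPER(ALLTRIM(table_name)) == "CUSTOMERS"
IF !FOUND()
   * The object was not created -- report the problem
ENDIF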
There are other SQL pass through functions that can also be used to look at columns, start
and complete transactions, and set properties for the connection itself. One thing to remember
is that the SQLExec() function allows multiple statements to be sent to the server.
The advantage of using SQL pass through is that all of the setup code is done through a
routine written in Visual FoxPro. The disadvantage is the flip side of the same coin: although
the database you built for development and testing already gives you the format needed for the
installation, to create a VFP pass through program you'll also have to write a program
to break the database down into its component parts and objects.
SQL scripts
SQL scripts allow the entire SQL Server database to be created through text files containing
Transact-SQL commands. Using this technique, you will need to have the script files (usually
text files with the extension of .SQL) sent to the target system during installation, and then
launch them either by loading them into the Query Analyzer tool or through the use of the
command-line utility osql.
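For example, a command like this sketch runs a script file; the server, login and path names are placeholders:

osql -S Production1 -U Installer -P secret -i C:\Install\CreateDB.sql -o C:\Install\CreateDB.log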
The advantage of script files is that they can be generated automatically through a menu
option in the Enterprise Manager. This option takes you to a dialog where you choose which
objects to script, as well as other options such as scripting the permissions, making sure the
appropriate logins are in the script, and ensuring that indexes, triggers and constraints are part
of the script as well.
The disadvantage of this option is that it does not automatically script the addition of any
data that may have to be in the database before the application begins. This requires additional
scripts to run in order for the data to be inserted.
SQL-DMO
SQL Server's Distributed Management Objects (SQL-DMO) is the framework upon which the
Enterprise Manager is built. By using the same object model, you can write Visual FoxPro
programs that can do the same things as the graphical user tool that ships with SQL Server. In
fact, you could design your own graphical tool for the installation of your application.
As with SQL pass through, SQL-DMO affords you the client-side programming option of
installation. SQL-DMO is more specific than pass through. In this method, you instantiate
objects to create, set their properties, and then execute the methods that will do the job. The
following example creates a new database:
oSQLServer = CREATEOBJECT("SQLDMO.SQLServer")
oSQLServer.Connect("MyServer","sa","")
oDatabase = CREATEOBJECT("SQLDMO.Database")
oDataFile = CREATEOBJECT("SQLDMO.DBFile")
oLogFile = CREATEOBJECT("SQLDMO.LogFile")
oDatabase.Name = "CSExample"
* Each file needs a logical name and a physical location before it is added
oDataFile.Name = "CSExample_Data"
oDataFile.PhysicalName = "C:\MSSQL7\Data\CSExample_Data.mdf"
oLogFile.Name = "CSExample_Log"
oLogFile.PhysicalName = "C:\MSSQL7\Data\CSExample_Log.ldf"
oDatabase.FileGroups("PRIMARY").DBFiles.Add(oDataFile)
oDatabase.TransactionLog.LogFiles.Add(oLogFile)
oSQLServer.Databases.Add(oDatabase)
Within SQL-DMO, there are objects defined for everything in SQL Server, including
databases, files, file groups, tables, columns, triggers, stored procedures, views, users and
logins. Everything that has to do with administration of a SQL Server database is covered in
this object model.
The advantage of using SQL-DMO is that, like SQL pass through, everything can be
handled from a Visual FoxPro program. And the control is much tighter and more exact than
with pass through. Just as with the other method, the entire setup of the database can be data-
driven, with all of the object definitions stored in a VFP table, or database. Another advantage
is that SQL-DMO has events that your program could be set up to handle.
The disadvantage of SQL-DMO is that the database used for development and testing has
to be broken down into its component parts; this can be handled with a SQL-DMO program.
This method also requires extensive knowledge of the SQL-DMO object model. Just remember
that once you have written routines to use SQL-DMO, they can be used again and again in
future installations as well as for database maintenance.
You might think that since SQL-DMO is designed for administration, data modifications
would require another object library, such as ADO. But actually, there are several Execute
methods available with the Database and SQLServer objects, allowing you to actually pass
through any command, even those commands that return results.
Transferring objects
Another option is the Data Transformation Services (DTS) transfer objects task, which copies
database objects from one SQL Server to another. The screen shown in Figure 1 is where you
set up which objects to transfer, whether to create the objects on the destination server, whether
to drop any existing objects, and whether to transfer the data. By clearing the check box for
Transfer all objects, you can choose the Select Objects button and decide exactly what goes
and what doesn't. By clearing the Use default options check box, you
can select the Options button to set such things as transferring users, logins, indexes, triggers,
keys and permissions.
In many ways, transferring objects is very similar to generating scripts. In fact, note that in
the screen shown in Figure 1, there's also a place to specify where the scripts are stored. That's
because the transfer objects task does the transfer via scripts that are just about the same as the
scripts that would be generated by the SQL scripts option discussed earlier.
The advantage of this method is that the transfer of objects is automated to a high degree,
and once the DTS package has been created, it can be run over and over again until everything
is set up just the way it should be. The other big advantage of object transfer is that any data
already set up before production can also be transferred in the same process.
The disadvantage of this method is that it only works when the source and destination are
connected. Another problem is that different techniques will be required when doing server
upgrades because the transfers are of whole objects, and this method does not allow for
modification of objects (at least, not without losing any existing data).
This method is also attractive if you have data in non-relational sources that will need to be
transferred into the client/server system. This is because DTS is designed to pick up data from
anywhere through OLE DB, and transform it and copy it to any destination. Since you would
already be using DTS for the object transfer, then it would require just a little more work to
incorporate the transfer of the data within the same package. All it would need is the addition of
some other tasks to the object transfer package.
Backup/restore
Another method that is also simple and straightforward is to create a backup of your
development database and restore it on your production server. If the setups for the
development and production systems are identical, then you can use the SQL Server backup.
Simply back up the database to tape, disk or across the network, and then use the SQL Server
restore process to put the database into the target system. In order for this to work, both SQL
Server installations must have used the same installation options, including character set, sort
order and Unicode collation sequence.
Since the files that make up the database must be the same number and size, the relative
capacities of both systems must be the same. That is, if you have a 20GB data file when you
back up the database, then you'll have to restore to a 20GB file. You cannot split it into
separate files when you do the restore. On the other hand, you can put the files that make up a
database in different relative locations when you do the restore.
Probably the most common reason for using this method over others is that the backup
files can be transferred easily. The only reason not to use this technique is that your
development version of the database may have objects that are only for development and
should not be moved to the production system.
Detach/attach
A related method is to detach the database files from the development server, copy them to the
target system, and then attach them there with the sp_Attach_DB system stored procedure. The
command must specify the logical and physical names (locations) of all of the files. There is
also a limit of 16 files per database in one command.
There are a couple of disadvantages to this method. First, if you detach the database, then
you'll have to run the sp_Attach_DB stored procedure to reattach the database to the
originating server. However, this could also be an advantage, since you'll be testing your
command for the production system. Another disadvantage is that, like the backup/restore
method, the installation options for both source and destination must be the same, and you will
want to make sure that there are no development objects in the database.
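For example, a sketch of the attach call, with placeholder file names, submitted through SQL pass through:

lnOK = SQLExec(lhConn, "EXEC sp_attach_db @dbname = 'CSExample', " + ;
       "@filename1 = 'C:\MSSQL7\Data\CSExample_Data.mdf', " + ;
       "@filename2 = 'C:\MSSQL7\Data\CSExample_Log.ldf'")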
Table 1 summarizes the various installation methods.
In choosing a method for installation, you must also consider how you will be handling
updates or modifications to the database. The next section covers this topic.
Managing updates
Updates are very difficult to manage, especially in a client/server application. That's because
you might be updating the client-side programming, or the server-side database, or both. This
requires you to devise the update strategy before you even begin the installation.
In this section we will look at the various elements of the update process, and conclude
with the challenge of coordinating between client and server.
Application changes
On the client side, you have the program or application to consider. The issues here are that no
project is ever done, and you have to make sure that the application can handle change easily
and that the system for managing change keeps the program consistent. Changes can take the
form of minor bug fixes, major changes to the way the program works, and upgrades to the
system that result in the product being vastly different than it was before.
Managing code changes is not new to client/server, and many of the issues surrounding
this task are the same as with other application architectures. The difference with client/server
is that a program can be split up into different parts that do not have to be modified at the
same time.
Version control
The first step in handling updates is to have source control software that will help you manage
the process. It's also helpful if you can do version control, either in the source control software
or in the application itself.
A common technique today is to add a version number to your executables, where the
numbers represent major, minor and revision changes. The major number is reserved for big
changes in the application, such as new features that were not in the original program. Also, a
change in the user interface screens may be regarded as a major modification. Minor numbers
are used for changes that are less dramatic but demonstrate some sort of visible change. For
example, a new option could have been added to a menu or a new screen incorporated
seamlessly into the application. Revision numbers typically represent a new compilation of the
code that corrects application bugs.
Figure 2 shows the screen in Visual FoxPro where version numbers can be set. This screen
is accessed by choosing the Build button on the Project screen, and then choosing the Version
button in the Build dialog. The Auto Increment check box causes the Revision number to be
increased by one with every build. The Version screen is available only when you build a COM
server or Win32 executable.
When you are examining a copy of an existing program and trying to determine its version
number, you can use the AGETFILEVERSION() function to build an array of the information
from this dialog. Element number 11 will hold the product version value.
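For example, assuming an installed executable named MyApp.exe:

lnItems = AGETFILEVERSION(laVersion, "MyApp.exe")
IF lnItems >= 11
   ? "Installed product version: " + laVersion[11]
ENDIF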
By using version numbers, it's easy to track down bugs that have already been fixed. First
you determine the version of the software that the reporting party is using, and then see whether
that bug was fixed in subsequent versions.
The most important aspect of version control is managing the distributed versions. If there
are multiple copies of the client code, then every user has to be tracked and updated. You may
need to maintain a table of this information to track exactly who has which version. You must
also have methods in place to make sure that you do not overwrite a newer version with an
older update.
Traditional
With a monolithic application, a new version is the entire application. This means that the
entire project has to be rebuilt and then distributed in some manner to the customers or users.
No matter how small an adjustment was made to the code, you must distribute the entire
program file.
In using this method, you will need to create an update program that reads the version
information from the target site, so that the version being installed is assured of being
more recent.
Component-based
The big advantage of components is that the entire application does not have to be distributed
for every change. Rather, components are separate parts of the whole that have an interface
that's unlikely to change. Here the word "interface" refers to the way one piece of code talks to
another piece of code. For example, you might have a form that gathers information from the
user and then passes the data on to another piece of code that validates that data. Since the
validation routines are not in the form, they can be updated separately from the form itself. The
only thing to make sure of is that the names of the methods and properties of the validation
component are not changed.
The major disadvantage of component-based applications is that each component will
have its own version number. This is where the minor numbers can make a big difference. A
minor number change can indicate a change in the interface of a component, so that you could
have a system where components always match up as long as the major and minor numbers are
the same.
In spite of any versioning problems, components are currently the preferred mechanism
for creating applications. In this way, each component of an application can be modified and
improved with little impact on other components. In fact, it's possible for applications to be
developed using different software products as long as they are component-enabled.
Database updates
Database changes are a little more difficult than simply modifying code. For one thing, some
updates are changes to the database schema (i.e., the structure of tables), which impacts data
already in the database. Other changes could involve improving performance by modifying,
adding or dropping indexes. New stored procedures could be created in conjunction with added
features of the client-side application. Existing stored procedure code could be modified to
handle changes to the schema, or to take advantage of changes that improve performance.
Version control
Version control is much harder to manage on the server side because there are no built-in
version numbers in a SQL Server database. In addition to the aforementioned problems with
source control, you will also have to manage your own version numbers.
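One common approach, sketched here with an illustrative table name, is to store the version number in the database itself and update it from every change script:

* Run once when the database is created
SQLExec(lhConn, "CREATE TABLE DBVersion " + ;
        "(Major int, Minor int, Revision int, Applied datetime)")
SQLExec(lhConn, "INSERT INTO DBVersion VALUES (1, 0, 0, GETDATE())")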
There are fewer options available for updating SQL Server than there are for installation.
For example, it's not possible to back up the development database, or even a sample
production database, because the restore process would overwrite any data that the user of the
application had added to the database. Transferring objects would only work for new objects,
or non-data-containing objects such as views and stored procedures. And there's no way to use
the sp_Attach_DB system stored procedure without encountering the same problems associated
with the backup/restore method.
SQL scripts
For changes to a SQL Server database, SQL scripts are probably the best method. That's
because as you make changes to the test database, you can save the changes as scripts. Nearly
every tool in the Enterprise Manager that can make schema changes has a Save change script
button that allows you to save the commands the Enterprise Manager generates. This way, you
will be able to know exactly what was done and when. It also helps with source control because
source control systems can monitor new files just as easily as old files.
If you are not using the Enterprise Manager, whatever commands you create to modify the
database can be saved in script files and then used to update the production systems. The
disadvantage that was stated earlier for this method does not apply for updating the schema. So
for all practical purposes, this is the preferred method, but it still does not solve all of the
version control issues.
SQL-DMO
This method is similar to SQL pass through, but instead of just using Transact-SQL commands,
you would use the objects of the management object model to set properties and run methods.
The advantages of using this technique are that the code is much more precise than using the
SQLExec() function, and that it can also be used for querying the current structure of
the database.
Since a VFP program would be used to handle the update, you could do the same thing
as with the pass through option and set versioning information in the update code. You would
do this by using COM servers or executable programs and storing the version information for
the database in the program. This method would still not solve the problem of creating version
number information within the database itself.
The disadvantage of this method is that you would have to know exactly what the changes
are, and document as well as program them. Otherwise, there would be no native way of
knowing exactly what changes are made.
Local lookup tables
Why
Sometimes a client/server application has data that rarely changes. Every development project
will define "rarely" differently, but this could be data that is changed monthly, yearly, daily,
once per session or never. But however you define what constitutes a rare change, once
youve made that decision, you can improve the performance of your application by
downloading rarely changed data to local tables on the client machine. Once the data is local,
then lookups into that data will go much faster, improving overall speed. How much data can
be handled in this way depends on the capacity of the client hard disk and the amount of time
needed to download the data.
Of course, you will also have to determine when to download the data for periodic
refreshes. As stated earlier, it might be done just on the basis of the date, or upon startup of the
client application. In any case, the client application must check for the data locally so that if
its not there, it can be downloaded. The most important thing on a day-to-day basis is to
balance the latency factor against the performance factor. That is, how important is it to have
the absolute latest information vs. the fastest performing application.
Managing updates
Local lookup tables present a special concern when updating, because the updating program
itself cannot be aware of all the client locations of the lookup tables. Therefore, when you
create updates that impact those local tables, something in your application should be made
aware of the change and then download the latest table (which will include all of the changes).
The challenge is that local lookup tables are part of the client side of the application, but
the master data they mirror is modified on the server. In order to handle this special
situation, updates to locally stored information will require modifications to both sides of the
application: the database and the Visual FoxPro code.
Summary
In this chapter, you have learned about the special challenges of installing and updating a
client/server application. Along with the challenges, you have seen several ways of handling
these processes. You have also witnessed the special planning that's required to ensure a
successful client/server setup.
You should now understand why it is so important to use some sort of version control, so
that when updates are performed, you'll know exactly how the pieces of the overall puzzle are
put together. And that's just what this is: a kind of jigsaw puzzle, where the individual pieces
have to be cut just right to fit. The ever-present problem is that the shapes of the pieces keep
changing, making the puzzle that much tougher to solve.
Chapter 11
Transactions
One of the main benefits of using SQL Server is its ability to handle transactions. This
benefit comes with a learning curve, as SQL Server handles transactions differently than
VFP does. SQL Server provides a much greater level of control over how transactions
occur. As usual, having more control means there is more to know. This chapter covers
the details of SQL Server's transaction handling as well as how to design and write
Visual FoxPro code to manage transactions properly.
Transaction basics
A transaction is a sequence of data modifications that is performed as a single logical unit of
work. This logical unit of work is typically specified by an explicit set of commands, such as
the BEGIN TRANSACTION and END TRANSACTION commands from Visual FoxPro.
Through the operation of a transaction, the database is essentially transformed from one state
to another, ideally uninterrupted by any other activity that might be running concurrently.
The classic example of a transaction is the bank customer who uses an ATM to transfer
some money from a checking account to a savings account. This requires a logical unit of work
that consists of two distinct steps: Deduct the amount from the checking account, and then
credit the savings account by the same amount.
If the logical unit of work is broken, either the bank is unhappy (i.e., the money was
applied to the savings account but not removed from checking) or the customer is unhappy (i.e.,
the money was removed but never deposited). This could happen for a variety of reasons, many
of which have been quantified in a set of properties known by the acronym ACID.
ACID properties
The ACID acronym was coined sometime in the early 1980s. It first appeared in a paper
presented to the ACM. Since then, databases have the ACID test applied to them in order to
discover whether they have real transactions. The ACID properties are Atomicity,
Consistency, Isolation and Durability.
Atomicity: A transaction must support a unit of work, which is either committed in
whole or discarded in whole. For example, in the ATM example, either both accounts
must be updated or neither account can be updated.
Consistency: This property means that a transaction must leave the data in a consistent
state at the completion of the transaction; that is, the transaction can only commit
legal results to the database. Updates must conform to any rules programmed into
the system.
Isolation: Every transaction must be isolated from all other transactions. That is, one
transaction cannot read or use uncommitted data from another transaction. Any
SQL-92 compliant database (like SQL Server) supports a choice of four different
levels of isolation. These levels are discussed later in this chapter.
Durability: This property requires that a complete transaction be stored permanently,
regardless of any type of system failure. That is, a transaction must maintain the unit
of work either by permanently storing the changes that it says are committed to the
database, or by rolling back the incomplete transaction, even if the computer loses
power at any time.
SQL Server fully supports all four properties, but Visual FoxPro supports only the first
three: it falls short on the Durability test. The following sections document how both Visual
FoxPro and SQL Server fare against the ACID test.
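Consider this sketch of the ATM transfer, written against two buffered local tables (the table and field names are illustrative):

BEGIN TRANSACTION
REPLACE Balance WITH Balance - 100 IN Checking
REPLACE Balance WITH Balance + 100 IN Savings
IF TABLEUPDATE(.T., .F., "Checking") AND TABLEUPDATE(.T., .F., "Savings")
   END TRANSACTION
ELSE
   ROLLBACK
ENDIF

Either both updates are committed by END TRANSACTION, or the ROLLBACK discards them both, which satisfies the Atomicity requirement.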
Through this code, you can also see how the Visual FoxPro transaction qualifies as
Consistent, as the TABLEUPDATE() functions fail if any data integrity rules fail, such as field
validation rules or referential integrity.
What may not be so obvious is how Visual FoxPro handles the Isolation property of
the transaction. As in other database systems, locks are used to maintain the isolation
between Visual FoxPro transactions, in the form of record locks, header locks and perhaps
even file locks.
During each of the TABLEUPDATE() calls in the previous code, Visual FoxPro will
implicitly attempt to lock the records that were modified by the REPLACE commands. In this
case, only two record locks are required to ensure the complete isolation of this transaction,
which will ensure that someone else doesn't write to either record while this transaction is in
progress. In addition, if another process tries to read the data before this transaction executes its
END TRANSACTION statement, the other process will read the original, unchanged data.
Inside of a transaction, Visual FoxPro will hold these row locks until either the
ROLLBACK or END TRANSACTION statements are issued. This means that the transaction
overrides the native TABLEUPDATE() behavior where the locks would normally be released
as soon as the modifications were written to disk.
Visual FoxPro's transaction isolation has performance consequences that are not obvious
with this example. Imagine instead that the modifications made during the transaction included
not only some record updates, but also the addition of new records. In these kinds of updates, it
is likely that either a header lock or file lock will be required, which is held throughout the
transaction. Since there is only one header or file lock, this can quickly cause lock contention
between active transactions, as each must battle to acquire the single header or file lock.
You should also understand that Visual FoxPro implicitly acquires the locks necessary to
update an index or memo file, which are stored in a table's associated CDX and FPT files,
respectively. Therefore, if your transaction performs a modification on a field that is part of an
index key, or inserts a new record, that table's CDX file must be locked. There are no record
locks in a CDX file; Visual FoxPro locks the entire CDX file. This is also true for FPT files,
which means that only one person can lock either of these files at any given time.
This is one of the main reasons that you must keep your transactions as short as possible.
If one transaction performs an update that requires an index or memo file lock, all other
transactions will be unable to perform similar operations until the locks are released.
If a failure occurs before the END TRANSACTION statement executes, the uncommitted
changes are simply discarded, so the tables are left unharmed. However, if such a failure occurs
during the execution of the END TRANSACTION statement itself, there is no mechanism to
recover from it. Granted, this should be a small window of opportunity, but it does exist, and it
grows as more changes are made during the transaction. If this type of failure occurs, you may
end up (at best) with a partially committed transaction where the checking account is debited
but the savings account is not credited. In the worst-case scenario, the two tables in the
transaction may end up corrupted beyond repair.
Autocommit transactions
By default, SQL Server operates in Autocommit mode, in which every individual statement is
treated as its own transaction. Consider the following statement:

USE Pubs
UPDATE Authors SET Phone = '987 '+Right(Phone,8) WHERE Phone like '408 %'
This statement has the potential to affect multiple records, since it's unlikely that the
WHERE clause will match only one record. If any of the records cannot be updated for any
reason, it is important that the entire UPDATE statement fail. This is what Autocommit
transactions provide: they automatically wrap any SQL statement into its own implied
transaction, as if the following code were written:
USE Pubs
BEGIN TRANSACTION
UPDATE Authors SET Phone = '987 '+Right(Phone,8) WHERE Phone like '408 %'
IF @@Error <> 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
Implicit transactions
There is a third type of transaction in SQL Server, different from both Autocommit and
explicit transactions, known as the implicit transaction. Visual FoxPro uses implicit transactions
behind the scenes when talking to SQL Server.
Implicit transactions are activated with the SET IMPLICIT_TRANSACTIONS ON
T-SQL statement. The SQL Server ODBC driver issues this statement when you activate the
Manual Transactions property setting of a connection. The following code performs this
activation, which was first demonstrated in Chapter 6, "Extending Remote Views with SQL
Pass Through":
#INCLUDE FoxPro.h
lnResult = SQLSetProp(lhConn, "TRANSACTIONS", DB_TRANSMANUAL)
You could also set this permanently through the Connection Designer by clearing the
Automatic Transactions option, but this is not recommended. In either case, once activated,
any of the following statements will implicitly begin a transaction on SQL Server. Note that if a
transaction is already in effect, these commands do not start a new transaction, as nested
transactions are not possible while implicit transactions are in effect.
ALTER TABLE
CREATE
DELETE
DROP
FETCH
GRANT
INSERT
OPEN
REVOKE
SELECT
TRUNCATE TABLE
UPDATE
When using implicit transactions, you must explicitly issue either the ROLLBACK
TRANSACTION or COMMIT TRANSACTION command to complete the transaction,
even though you did not issue a BEGIN TRANSACTION command. Otherwise, the
transaction does not complete, potentially blocking other users from accessing the data
needed by their transactions. From Visual FoxPro, you can complete the transaction by
using the SQLRollback() or SQLCommit() SQL pass through functions. The following
example demonstrates how these functions are employed:
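* Sketch: lhConn has its Transactions property set to manual (see above)
lnResult = SQLExec(lhConn, "UPDATE Authors SET Phone = '987 ' + Right(Phone,8) " + ;
           "WHERE Phone LIKE '408 %'")
IF lnResult > 0
   SQLCommit(lhConn)      && make the implicit transaction permanent
ELSE
   SQLRollback(lhConn)    && discard the implicit transaction
ENDIF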
Isolation levels
Isolation levels exist to prevent three well-known concurrency anomalies: dirty reads (reading
data that another transaction has modified but not yet committed), non-repeatable reads
(re-reading data and finding that another transaction has changed it in the meantime) and
phantom reads (re-running a query and finding rows that another transaction has inserted since
the first read). Note that all of these anomalies can be avoided if locks are employed at the right
resource (i.e., row, page, table or index) and of the right type (i.e., shared, exclusive and so
forth). Having described these anomalies, it is now easier to define how each isolation level
prevents one or more of these problems.
Read uncommitted is the lowest level of isolation, and it provides the greatest amount
of throughput, as it virtually eliminates locking between sessions. By enabling this
level, you allow transactions to read dirty, or uncommitted, data. This occurs because
a session that is set to read uncommitted will read through any exclusive locks
held by other sessions. Furthermore, shared locks are not used when reading data at
this level. All of the previously specified anomalies can occur at this level.
Read committed is the default isolation level of SQL Server transactions. It ensures
that the session respects any exclusive locks held by other sessions, preventing the
dirty read problem described earlier. However, this isolation level releases any shared
locks immediately after the data is read. Therefore, non-repeatable reads and phantom
reads can occur at this level.
Repeatable read is the next highest isolation level available. SQL Server enforces
repeatable reads by allowing a transaction to hold the shared locks used to read data
until the transaction completes. This prevents other sessions from modifying this data,
as data modifications require an exclusive lock, which is incompatible with shared
locks. This isolation level still allows phantom reads to occur.
Serializable is the highest isolation level available, and therefore it also has the
potential for the highest amount of lock contention (read: slowest performance). In
serializable transactions, dirty reads, non-repeatable reads and phantoms are
eliminated. However, instead of using the expected table locks, SQL Server uses
something known as a key-range lock. For example, in a query that asks for all
customers in a certain ZIP code, the pages of the index that contain the keys that point
to the matching records are locked. This prevents other sessions from inserting
customer records into the table that would become part of the selected ZIP code.
It is important to understand this level of detail, as it is the only way that you can control
how locking is handled by SQL Server. There are no equivalents to Visual FoxPro's RLOCK()
or FLOCK() functions in T-SQL.
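To set the isolation level for a connection, submit the T-SQL SET TRANSACTION
ISOLATION LEVEL command with SQL pass through. For example, the following call (which
assumes that lhConn is an open connection handle) sets the connection to the serializable level:
lnResult = SQLEXEC(lhConn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")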
Any statements issued across this connection (the one specified by the connection handle
lhConn) will now work under the serializable isolation level. This isolation level remains in
effect until the connection is dropped, another SET TRANSACTION ISOLATION LEVEL
statement is issued, or a specific SQL statement uses a locking hint to override the
locking behavior.
Note that this setting is only for the current connection; other connections are unchanged
by this setting, and will operate at their own isolation levels. This means that you can have
separate connections that work at different isolation levels, but this is typically not a good idea,
particularly from a debugging standpoint.
The alternative method of setting the isolation level is to use the table hints portion of the
FROM clause of a SELECT, UPDATE or DELETE T-SQL statement. With the right table
hint, you can override the default connection-level setting of the isolation level for an
individual statement. For example, the following query will override the default connection
setting and resort to a level of uncommitted read for the duration of the query:
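* The NOLOCK hint (a synonym for READUNCOMMITTED) applies to this query
* only; the table, column and filter are illustrative
lnResult = SQLEXEC(lhConn, ;
  "SELECT * FROM authors WITH (NOLOCK) WHERE state = 'CA'", "cAuthors")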
Another possible hint is the READPAST hint. When using this locking hint, SQL Server
will skip any locked rows. This hint could prevent you from seeing a complete result set,
but the benefit is that your transactions will not wait until locks are released before reading the
data. This is a useful tool for determining whether blocking is a problem in specific queries.
If you have a specific query that should operate as serializable, but the session is set at
the read committed level (the default), then use the SERIALIZABLE or HOLDLOCK hints
(they are interchangeable hints). This will force only the current statement to hold locks;
when used inside of a transaction, this could help prevent unnecessary lock contention, as only
the specific statement will use serializable isolation, instead of every table that participates in
the transaction.
Durable transactions
The last ACID property to test against SQL Server is Durability, which specifies that a
complete transaction must be stored permanently, regardless of any type of system failure. As
mentioned earlier, this is the only ACID property where Visual FoxPro is lacking. However,
SQL Server transactions qualify as durable. SQL Server implements durability via its
transaction log.
As was briefly discussed in Chapter 3, SQL Server uses a technique known as write-ahead
logging to ensure the durability of its transactions. This means that any data modifications are
first written synchronously to the transaction log (i.e., your application will wait until it
completes this operation before control returns) before they are committed to disk. In fact,
there may be a long period of time between the moment when SQL Server receives your
modification and the moment SQL Server commits the data to disk.
When you submit a data modification statement to SQL Server, the statement is logged.
Actually, either the statement itself is logged, or the before and after versions of the data are
logged. In either case, enough information is placed into the log so that SQL Server can
reconstruct the data modification at any time. Logging occurs for any SQL statement that
modifies data (e.g., UPDATE, DELETE or INSERT) as well as statements that create indexes
and modify structures (yes, you can roll back a CREATE INDEX statement in SQL Server!).
Once this logging is complete, SQL Server loads the proper extent(s) into a part of
memory known as the buffer cache, but only if the data does not already reside there. This is
where the data modification is made, allowing any other processes to see the change. In other
words, SQL Server, much like Visual FoxPro, tries to read data from memory or write
modifications to memory, not directly from disk. Therefore, what resides on disk may not
actually be what's currently held in memory. This is usually referred to as a dirty buffer, and
it must eventually get committed to disk.
On an as-needed basis, SQL Server will write these dirty buffers to the actual database
files with something known as the Lazy Writer process. When this occurs, SQL Server marks
the transaction log so that it understands that logged changes have been committed to disk. This
mark is known as a checkpoint, and any logged transactions that occur before the checkpoint
are known to be on disk, while those appearing after the checkpoint are not.
The checkpoint process occurs only after a certain amount of activity has occurred in the
database. If it is a slow day in the database, checkpoints will happen less frequently, while on a
particularly busy day, checkpoints may occur much more often. This is controlled by something
known as the recovery interval, a system-level setting that controls just how much dirty data is
allowed to exist at any given time. In advanced situations, you can manipulate this setting to
limit the occurrence of checkpoints, but for nearly all situations, the default setting is
appropriate (read: don't mess with this setting!).
The durability of a SQL Server database becomes apparent when the server fails for
some reason (hardware failure, operating system crash and so on). Since SQL Server has
logged all data modifications, it can use the log to determine how to recover the database after
a failure. For example, imagine that several sessions were updating tables when the power
failed on a server. If the workstations received notification that their updates were successful,
then their updates have been written to the transaction log successfully. If a workstation did
not complete an update successfully, then the transaction log may not contain the COMMIT
TRANSACTION statement for that workstation's transaction.
When the server is restarted, SQL Server starts the recovery process. This process reads
the transaction log, starting with the last checkpoint in the log, as this marks the last complete
transaction that is known to be written to disk. Any complete transactions that appear after the
checkpoint are now committed through a roll forward process, while incomplete transactions
are automatically rolled back, as some of their changes may have been written to disk.
Note that SQL Server may write the pending changes to disk at any time to make room in
the buffer cache for extents that contain pages for other operations. This is why the recovery
operation must roll back incomplete transactions, as some or all of the data could have been
committed to disk, even though a checkpoint has not occurred.
Hopefully you now understand why SQL Server's transactions are superior to those of
Visual FoxPro, as SQL Server passes the ACID test with excellence.
Locking
An important consideration in using transactions is how users are affected by the locking
that occurs during transactions. First, let's review the basic locking strategy employed by
SQL Server.
There are several resources that can be locked, such as a row, an index key (i.e., a row in
an index), a page (either a data or index page), an extent, a table, or the database. In normal use
of a database, data modifications will only require locks at the row, page or table level. An
extent is only locked briefly when SQL Server allocates or drops an extent, while the database
lock is used by SQL Server to determine whether someone is in a database.
SQL Server favors row locks over page or table locks in most situations.
The actual lock that is acquired is based upon the determination of the query optimizer, which
uses a cost-based algorithm to determine how locks are placed. Therefore, it is possible that
SQL Server may use a series of page locks instead of row locks, as page locks will consume
fewer resources than a large number of row locks.
Regardless, for each of these resources, SQL Server can use one of several different lock
modes. These include shared, exclusive and update locks, as well as a set of intent locks.
Shared locks (S) are used when reading data, while exclusive locks (X) are used in data
modifications. Update locks (U), as discussed in Chapter 3, are used when SQL Server must
read the data before modifying it. Update locks are put in place while it reads the data and
then promotes these locks to exclusive locks when finished reading but prior to making
the modifications.
Intent locks are used to prevent possible performance problems with locks at the row or
page levels. When a process intends to lock a row or a page, the higher-level resource in the
object hierarchy is first locked with an intent lock. For example, if an S lock is required on a
specific row of a table, SQL Server first attempts to get an intent shared (IS) lock on the table
and, upon success, attempts an IS lock on the page for that row. If both succeed, then the
shared row lock is acquired. By employing intent locks, SQL Server avoids having to scan
every row in a page or every page in a table before determining whether the page or table can
be locked.
These intent locks include intent shared (IS), intent exclusive (IX) and shared with intent
exclusive (SIX) locks. These acronyms (IS, IX, SIX and so forth) are visible when viewing
lock activity on the server.
Lock compatibility
In Visual FoxPro, locks are simple, as there is only one lock mode for all of the available
resources (record, header or file). That one lock mode, which falls somewhere between a
shared lock and an exclusive lock in SQL Server, can be acquired on any of these resources. If
one user holds a lock, other users cannot modify the locked resource until the lock is
released, but they can still read it.
In SQL Server, the complexity of the various lock modes raises the question of which
locks can be acquired concurrently with other locks. The answer to this question can be found
in the SQL Server Books Online, in a Help page titled "Lock Compatibility." This page
contains a compatibility matrix, which is reproduced in Table 1.
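Table 1. The lock compatibility matrix. Each row shows whether the requested mode can
be granted while another session holds the existing mode (columns) on the same resource.

Requested mode   IS    S     U     IX    SIX   X
IS               Yes   Yes   Yes   Yes   Yes   No
S                Yes   Yes   Yes   No    No    No
U                Yes   Yes   No    No    No    No
IX               Yes   No    No    Yes   No    No
SIX              Yes   No    No    No    No    No
X                No    No    No    No    No    No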
As you can see from Table 1, shared locks are compatible with other shared locks,
allowing more than one person to read data concurrently. However, exclusive locks are not
compatible with any other lock, preventing any kind of concurrent update or reading of data
while it is being modified.
Blocking
The result of any lock incompatibility is called blocking in the SQL Server documentation. To
relate this to your knowledge of Visual FoxPro, blocking occurs when one user is holding a
record lock that another user requires. Since the second user cannot acquire a lock until the first
is released, the first user blocks the second user. By default, Visual FoxPro will force the
blocked user to wait indefinitely for the blocking user to release the lock. However, also by
default, the blocked user can press Esc when he or she grows tired of waiting. This behavior
can be changed in Visual FoxPro through the SET REPROCESS command; however, this
command has no effect on how SQL Server locks data.
In SQL Server, when blocking occurs, the session being blocked will wait indefinitely for
the blocking session to release the conflicting lock. If you wish to modify this behavior, you
can use the SET LOCK_TIMEOUT command, which changes the amount of time the session
will wait before automatically timing out a lock attempt. When a lock timeout occurs, your
TABLEUPDATE or SQLExec call will fail, so use AERROR() to determine the cause of the
error. In the returned array, the fifth element will contain error number 1222, which
corresponds to a SQL Server Timeout error.
If you change the default lock timeout period, you will need to add code
that tests for error 1222. If this error occurs, you must roll back and restart
any transactions that are in progress, as a lock timeout error does not
cause SQL Server to abort any transactions.
The lock timeout setting is specified in milliseconds; therefore, to set the timeout to three
seconds, issue the following Visual FoxPro command:
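lnResult = SQLEXEC(lhConn, "SET LOCK_TIMEOUT 3000")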
You can use @@LOCK_TIMEOUT to query the session's current lock timeout setting.
The default setting is -1, which conveniently corresponds to SET REPROCESS TO -1 in
Visual FoxPro:
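lnResult = SQLEXEC(lhConn, "SELECT @@LOCK_TIMEOUT AS LockTimeout", "cTimeout")
? cTimeout.LockTimeout  && -1 unless SET LOCK_TIMEOUT has been issued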
If you would like to test blocking, you can do this easily from Visual FoxPro with two
views, each of which uses a separate connection. If you share connections between them, you
will not be able to test the effects of blocking unless you load two separate instances of Visual
FoxPro. To test blocking, start a transaction, and then issue a statement that changes data on the
back end. Do not finish the transaction, so that any locks will stay in effect while you use the
second connection to attempt access to the same data that is locked in the first connection.
The following code snippet demonstrates how this is accomplished:
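* A minimal sketch: v_customer1 and v_customer2 are remote views of the same
* table, and each view uses its own (non-shared) connection
lhConn1 = CURSORGETPROP("ConnectHandle", "v_customer1")
lnResult = SQLEXEC(lhConn1, "BEGIN TRANSACTION")
lnResult = SQLEXEC(lhConn1, ;
  "UPDATE customer SET phone = '555-1212' WHERE cust_id = 'ALFKI'")
* Do not commit; the exclusive lock is still held. This query on the second
* connection now blocks until the first transaction commits or rolls back:
lhConn2 = CURSORGETPROP("ConnectHandle", "v_customer2")
lnResult = SQLEXEC(lhConn2, ;
  "SELECT * FROM customer WHERE cust_id = 'ALFKI'", "cBlocked")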
Obviously, it would be quite beneficial to determine the reason for blocking at any given
time. This is accomplished by viewing the current lock activity, either through SQL Server
Enterprise Manager or by executing one of SQL Servers stored procedures from within
Visual FoxPro.
To view general information about all processes and the possibility of blocking, click on
the Process Info node. This will display the screen pictured in Figure 1, which shows one row
for each connection to the server. The first six SPIDs are system processes that have nothing to
do with Visual FoxPro connections to the server. However, SPIDs 9 and 10 are two Visual
FoxPro connections, both of which have open transactions.
From this view, you can see that SPID 10 is waiting for a locked resource from another
process. To see which process is doing the blocking, you would have to scroll and view the last
two columns, titled Blocked By and Blocking, respectively. Figure 2 shows these last two
columns, and shows clearly that SPID 9 is blocking SPID 10. Note that you have to scroll back
to the left to see which rows correspond to which SPID.
Of course, this dialog only gives you a single clue as to what is causing the blocking
between these two processes. The Wait Resource column shows that a KEY lock is preventing
SPID 10 from progressing. To get more detail, you can expand the Locks/Process node, and
then select the desired SPID from the list. This will display a list of the locks that the process
has acquired, as well as any locks that it is waiting to acquire. Take a peek at Figure 3 for the
output that is seen for the current situation where SPID 9 blocks SPID 10.
In this figure, you can clearly see that the process is waiting for a KEY lock in the authors
table, while it has already acquired an intent shared (IS) lock on the page and the table.
While you are in the process list, you can double-click any SPID's icon to get a
dialog that displays the Process Details, which includes a very handy display of the last T-SQL
command batch issued by the process. Since SPID 10 is blocked by another process, this allows
you to see what commands the blocked process issued, which could help you determine the
cause of the blockage.
Of course, all of this information is great, but what if you do not have access to SQL EM?
Fortunately, you can access most of this information from Visual FoxPro, but it must be from
an unblocked process! The system stored procedures sp_who, sp_who2 and sp_lock return
cursors of data about current activity on the SQL Server. These stored procedures can be
executed with Visual FoxPro's SQLExec SQL pass through function.
Note that sp_who2 is an undocumented procedure that returns additional information over
that of sp_who. Both of these procedures return information about each SPID, including the
information viewed in the current activity window (in fact, it seems sp_who2 is called by SQL
EM for the current activity window). The sp_lock procedure returns locking information
about all processes, and returns the same information as the Locks windows under the current
activity tree.
All of these procedures accept a parameter for the number of the desired SPID. For
example, the following Visual FoxPro code calls the sp_who2 procedure to retrieve
information about the SPID for the current connection, accessed by using the @@spid system
function, and places the result into a cursor called cWho2:
You may invoke these procedures without specifying any parameter in order to retrieve
information about all processes.
Deadlocks
Deadlocks are a different concept than blocking and should be treated as a completely different
problem. While blocking is usually only temporary, a deadlock will last indefinitely if not
explicitly handled. To understand a deadlock, imagine the following scenario: Two users are
accessing the same database. Within a transaction, the first user accesses the customer table
and attempts to change information in the record for Micro Endeavors, Inc. in that table.
Meanwhile, also within a transaction, the second user accesses the contact table and attempts to
change the phone number for Mr. Hentzen. For some reason, the first user now attempts, within
the same transaction, to change the same record in the contact table, and the second user
attempts to change the same record in the customer table.
Since the first user holds an exclusive lock on the customer record, the second user is
waiting for that lock to be released in order to continue. However, the second user is holding a
lock on the contact record, forcing the first user to wait for that lock to be released before
continuing. There you have it: a deadlock, also known as a circular chain of lock requests.
If this situation were to happen in Visual FoxPro, with any luck you would have changed
SET REPROCESS from its default so that at least one of the two processes would
automatically fail in its attempt to get the second lock. When a user's lock attempt fails, he or
she would be given the chance to try the transaction again, and most likely would succeed.
In SQL Server, this situation is automatically handled by an internal scheme for detecting
deadlocks. When SQL Server detects a deadlock, one process is chosen as the deadlock victim
and that transaction is automatically rolled back. SQL Server chooses the process that has the
least amount of activity as the victim, and when the transaction is canceled, an error is returned
to the application. This means that your application must always detect error number 1205
when issuing a TABLEUPDATE or SQLExec call. This error can occur before a transaction
has completed. When error 1205 is detected, you must restart your transaction, since the server
has already rolled back the transaction for the deadlock victim.
Deadlocks can be avoided by ensuring that all of your stored procedures and Visual
FoxPro code access resources in the same order at all times. In other words, if both of the
aforementioned users attempted to access the customer table first, and then the contact table,
the deadlock would not have occurred. However, since this requirement cannot always be met
in the real world, you will need to add code to detect when the user is the victim of a deadlock
and handle it accordingly.
Occasionally, it is necessary to set one process at a lower priority level than
another for the purpose of resolving deadlocks. If this is the case, you can use the SET
DEADLOCK_PRIORITY command to establish the sessions that should be the preferred
victims of deadlocks.
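For example, submitting the following command over a connection marks that session as the
preferred deadlock victim:
lnResult = SQLEXEC(lhConn, "SET DEADLOCK_PRIORITY LOW")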
Transaction gotcha!
After reading all of this information about how SQL Server and Visual FoxPro handle
transactions, you still may not be aware of the fact that Visual FoxPro transactions do nothing
to start, end or commit transactions on the back-end database. Consider the following Visual
FoxPro code that attempts to update two remote views:
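* The view names are illustrative; both are remote views
BEGIN TRANSACTION
llOk1 = TABLEUPDATE(.T., .F., "v_customer")
llOk2 = TABLEUPDATE(.T., .F., "v_contact")
IF llOk1 AND llOk2
  END TRANSACTION
ELSE
  ROLLBACK  && has no effect on the remote updates!
ENDIF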
In this example, a Visual FoxPro transaction is started and wraps the updates of two
different views. Unfortunately, in client/server applications, these updates are applied to
remote tables, not Visual FoxPro tables. Therefore, the TABLEUPDATE() statements are not
affected by the Visual FoxPro transaction, thereby writing their changes immediately to the
source tables.
In other words, if the first TABLEUPDATE() succeeds but the second one fails, the
ROLLBACK command has no effect whatsoever. The solution? Look earlier in this chapter for
the code that starts a transaction by setting the Transactions property of the connection to
manual and submits the SQLRollback() or SQLCommit() SQL pass through functions.
No matter what, do not use Visual FoxPro transactions against remote views.
Summary
In this chapter, you have seen a comparison of Visual FoxPro and SQL Server transactions.
The ACID properties are used to test the quality of transactions by a database system. Visual
FoxPro falls a bit short, but SQL Server transactions are fully compliant with the ACID
standard. You also learned how blocking and deadlocks occur and how to retrieve the
information that SQL Server provides on locks that each process is holding.
In the next chapter, you will switch gears entirely and see how to use the basics of ADO in
a Visual FoxPro application.
Chapter 12
ActiveX Data Objects
ActiveX Data Objects (ADO) has been mentioned in previous chapters as a possible
alternative to communicating with SQL Server via ODBC. This chapter introduces you to
ADO, presents the pros and cons of using ADO, and explains the mechanics of using
ADO in Visual FoxPro applications.
Why ADO?
What purpose does ADO serve in Visual FoxPro development? Why use ADO when views and
SQL pass through seem to provide all the necessary functionality?
ADO benefits
ADO provides several benefits over native Visual FoxPro data access:
ADO is the best technology for passing data between tiers of an n-tier application.
ADO can access non-relational data sources such as text files and CSV files.
According to Microsoft's Web site, OLE DB provides high-performance access to
any data source, including relational and non-relational databases, email and file
systems, text and graphics, custom business objects, and more.
ADO permits return values to be retrieved from stored procedures.
ADO can be used as a workaround for several bugs or deficiencies in VFP.
ADO can be used from any Visual Studio product as well as within Active Server Pages. Other non-Microsoft
products, such as Java, can also use ADO with varying levels of compatibility.
Alternatives to ADO are not satisfactory. These include:
Accept the limitation that all front ends will be constructed in the same technology
as the middle tier. However, this limitation eliminates one of the main benefits of
n-tier architecture.
Pass an array to the front end. This seems like a good idea until you realize that
different products handle arrays differently, forcing you to write custom array-
handling code for each client. While Visual FoxPro does provide some help in this
arena, there are issues with handling the different data types, validation of the data,
and ensuring that the clients can return any necessary information as a compatible
array. In addition, passing data back from the front end to the middle tier is more
complicated and requires extensive custom coding.
We agree with Microsoft's suggestion that ADO is the best choice for passing data
between the front and middle tiers of a multi-tiered application.
Stored procedures
In Chapter 6, Extending Remote Views with SQL Pass Through, you learned how to call
SQL Server stored procedures with the SQLExec() function. Through SQLExec(), you can pass
parameters as inputs and accept return values through OUTPUT parameters. However, there is
no mechanism for capturing the return value from a SQL Server stored procedure (i.e., when
the T-SQL RETURN command is used to return an integer value).
ADO provides the ability to invoke stored procedures, and to capture any type of
returned value.
Retrieving data from SQL Server nText columns through a remote view or SQL pass through
can result in truncation. (See article Q234070 in the Microsoft Knowledge Base for more
details on this topic.) Since this is a bug in Visual FoxPro, you need a workaround. One
approach is to use ADO instead of a view or SQL pass through. ADO properly retrieves data
from nText columns.
Note: The new data types that support Unicode are nChar, nVarchar and nText. These
work similarly to their non-Unicode counterparts, except that they consume two bytes per
displayed character. These data types are important when creating a database that must store
characters from other languages, since a language like Japanese has well over 255 distinct
characters. With Unicode, more than 65,000 distinct characters are available, allowing the full
Japanese character set to be stored in a Unicode field.
ADO disadvantages
There are some disadvantages to using ADO with Visual FoxPro:
ADO data cannot be handled in the same way as data retrieved through Visual FoxPro
remote views or SQL pass through. Instead, you must access the data through the
properties and methods of the ADO object.
Native Visual FoxPro form controls cannot be bound to ADO data. (However,
ActiveX controls exist that can be bound to ADO data sources.)
ADO data cannot be manipulated directly with Visual FoxPro's powerful data-manipulation commands.
(However, ADO data can be converted to cursors, which can be manipulated directly
by native Visual FoxPro.)
As you can see, there are advantages and disadvantages to using ADO within a Visual
FoxPro application. Many of the disadvantages could be reduced or eliminated by changes to
Visual FoxPro. It is widely hoped that future versions of Visual FoxPro will provide better
support for ADO.
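The Connection object
The first step in using ADO is to create a Connection object, which is instantiated like any
other COM object: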
loConn = CREATEOBJECT("ADODB.Connection")
Note that creating a Connection object does not connect to any data source. To connect
to a data source, you must specify values for one or more properties of the Connection object,
and then invoke a method of the object to initialize the connection. The following code
shows how to connect to a SQL Server called MySQLSvr by invoking the Open method of a
Connection object:
loConn = CREATEOBJECT("ADODB.Connection")
IF VARTYPE(loConn) = "O"
lcConnStr = "Driver=SQL Server;Server=MySQLSvr;Database=pubs;" + ;
"uid=User;pwd=Password"
loConn.Open(lcConnStr)
ENDIF
Alternatively, you can populate the ConnectionString property before invoking the
Open method, like this:
loConn = CREATEOBJECT("ADODB.Connection")
IF VARTYPE(loConn) = "O"
loConn.ConnectionString = "Driver=SQL Server;Server=MySQLSvr;" + ;
"Database=pubs;uid=User;pwd=Password"
loConn.Open()
ENDIF
To test whether the connection was successful, query the value of the State property on the
Connection object. If the State property is one, the connection is open; otherwise, if it is zero,
the connection failed and is closed. An unsuccessful connection attempt triggers a Visual
FoxPro error, which can be trapped by an object's Error event or any ON ERROR routine.
The preceding examples used an ODBC driver to connect to SQL Server. However, an
OLE DB provider also exists for SQL Server. Using the OLE DB provider improves
performance, since ODBC is bypassed.
To use the SQL Server OLE DB provider, use a different connection string as follows:
loConn = CREATEOBJECT("ADODB.Connection")
IF VARTYPE(loConn) = "O"
lcConnStr = "Provider=SQLOLEDB;User ID=User;Password=Password;" + ;
"Initial Catalog=Pubs;Data Source=MySQLSvr"
loConn.Open(lcConnStr)
ENDIF
This code connects to the pubs database on a SQL Server called MySQLSvr using the SQL
Server OLE DB provider called SQLOLEDB. Since a connection string is rather cryptic, you
might prefer to create this connection string with a Microsoft Data Link file. To start, simply
create a new, empty file with a UDL extension. Then, through Windows Explorer, double-click
the file, which will open the dialog shown in Figure 1.
Once the dialog is open, you can use the Provider and Connection pages to provide the
details of the desired connection (such as the provider, server name, login credentials, and the
initial database to select). Use the Test Connection button on the Connection page to verify
your selections, and then press the OK button.
Next, open the UDL file with Notepad. The UDL file should appear like the example
shown in Figure 2. It contains the full connection string that corresponds to the options you
selected in the UDL dialog. Simply copy the connection string into your Visual FoxPro code
for use with an ADO Connection object.
After you connect to a data source, you will probably want to retrieve data from that
source. Retrieving data requires another ADO object called the RecordSet object.
loRS = CREATEOBJECT("ADODB.RecordSet")
As with the Connection object, creating the object does not populate the object with data.
Retrieving data requires a call to the Open method of the RecordSet object. The following code
example retrieves all of the records from the authors table in the pubs database on SQL Server
and places the records into a RecordSet object:
loRS = CREATEOBJECT("ADODB.RecordSet")
IF VARTYPE(loRS) = "O"
lcSQL = "SELECT * FROM Authors"
lcConnStr = "Provider=SQLOLEDB;User ID=User;Password=Password;" + ;
"Initial Catalog=Pubs;Data Source=MySQLSvr"
loRS.Open(lcSQL, lcConnStr)
ENDIF
Note that behind the scenes, the Open method created its own Connection object with
attributes specified in the connection string, which was passed as the second parameter. You
can confirm that the RecordSet object stored the Connection object specification with the
following code:
loCn = loRS.ActiveConnection
ACTIVATE SCREEN
?loCn.ConnectionString
?loCn.State
The ActiveConnection property is an object reference to the Connection object that the
RecordSet object uses to connect to a data source. By checking the connection's
ConnectionString property, you can see which data source has been opened by the RecordSet.
It is not common to allow the Connection object to be created implicitly, since it is harder
to share an implicitly created connection with other ADO objects. A better practice is to create
a Connection object explicitly, and then use that connection for one or more RecordSet objects
as follows:
loCn = CREATEOBJECT("ADODB.Connection")
loRS = CREATEOBJECT("ADODB.RecordSet")
IF VARTYPE(loCn)="O" AND VARTYPE(loRS) = "O"
loCn.ConnectionString = "Provider=SQLOLEDB;User ID=sa;Password=;" + ;
"Initial Catalog=Pubs;Data Source=MySQLSvr"
loCn.Open()
IF loCn.State = 1
WITH loRS
.ActiveConnection = loCn
.Open("SELECT * FROM Authors")
ENDWITH
ENDIF
ENDIF
In this example, the Connection and RecordSet objects are created with the
CREATEOBJECT() function. The Connection object's ConnectionString property is specified,
and then the Open method is invoked. If the connection is successfully opened, an object
reference to the Connection object is stored in the ActiveConnection property of the RecordSet
object. This property tells the Open method where to send the SQL SELECT statement (which
is specified as a parameter to the Open method). The Open method then executes the SQL
SELECT statement that, in this example, retrieves all records from the authors table. The
following loop then walks through the resulting RecordSet and displays each field of every record:
ACTIVATE SCREEN
CLEAR
DO WHILE NOT loRS.EOF
FOR EACH loField IN loRs.Fields
??TRANSFORM(loField.Value)+CHR(9)
ENDFOR
? && Move to next line
loRS.MoveNext()
ENDDO
The RecordSet's EOF property will be True if you have moved past the last record of
the RecordSet, similar to the way a Visual FoxPro cursor works. In addition, only one record
can be seen at any time; the RecordSet will initially position the record pointer on the
first record after retrieving the data. Inside each record, you can access each field with the
Fields collection.
Each field has numerous properties, such as the Value property, which was referenced in
the preceding code. You can also get each field's Name, DefinedSize, NumericScale or
Precision through properties of the same names.
The MoveNext method works just like the SKIP command in Visual FoxPro, moving the
record pointer to the next record. You can also MoveFirst, MovePrevious or MoveLast,
corresponding to the GO TOP, SKIP -1 and GO BOTTOM Visual FoxPro commands.
It is interesting to note that the similarity between a RecordSet and a Visual FoxPro
cursor is not an accident: The RecordSet is based on the Visual FoxPro cursor engine. This
similarity will become more apparent as you explore other methods and properties of the
RecordSet object.
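Displaying RecordSets with ActiveX controls
To display a RecordSet on a Visual FoxPro form, start by creating the Connection and
RecordSet objects and storing the references in properties of the form: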
WITH ThisForm
.oCn = CREATEOBJECT("ADODB.Connection")
.oRS = CREATEOBJECT("ADODB.RecordSet")
*--Other code goes here to open record set
ENDWITH
The default cursor type for an ADO RecordSet is known as a forward-only static cursor.
This means that you can only use the MoveNext method of the RecordSet (i.e., forward-only),
and that any changes on the data source are not reflected in the cursor (i.e., static). Before you
can display a RecordSet in an ActiveX control on a Visual FoxPro form, you must change the
cursor type of the RecordSet to allow movement in any direction. However, a static cursor is
preferred for performance reasons, as it will not maintain communication with the server to
detect changes made by other users. The CursorType property is used to specify the type of
cursor used by the RecordSet, and must be specified before opening it. To create the static
cursor required by the ActiveX grid control, use 3 for the CursorType, as in the following code:
.oRS.ActiveConnection = .oCn
.oRS.CursorType = 3 && adOpenStatic
.oRS.Open("SELECT * FROM Authors")
The next step is to place an instance of the Microsoft ADO Data control onto your form
and give it a name like oleDataCtrl. This control is needed to provide the proper interface so
the ActiveX grid can bind to the RecordSet. You can place this control anywhere within the
form, as the control is invisible at run time.
Now place a Microsoft DataGrid control on your form and set its Name property to
oleGrid. Once these controls are on your form, your form will look similar to Figure 3. To
make it all work, write the following code in the Init method of the form; this will cause the
DataGrid to display the contents of the RecordSet you created earlier:
WITH ThisForm
.oleDataCtrl.Object.RecordSet = .oRS
.oleDataCtrl.Object.Refresh()
.oleGrid.Object.DataSource = .oleDataCtrl.Object
.oleGrid.Object.Refresh()
ENDWITH
When you execute the form, the data will appear in the grid control as shown in Figure 4.
However, the data will be read-only, as the RecordSet also defaults to a read-only cursor
type. To modify this, change the LockType property of the RecordSet object so it will use
optimistic locking:
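.oRS.LockType = 3 && adLockOptimistic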
Figure 4. Viewing an ADO RecordSet at run time in a Visual FoxPro form with
ActiveX controls.
ADO constants
If you check the Help system on the ADO RecordSet and Connection objects, you will see
many references to constants that begin with the letters ad. You saw some of these constants
referenced in the previous code snippets, such as adLockOptimistic and adOpenStatic. While a
Visual Basic program intrinsically recognizes these constants, Visual FoxPro does not;
therefore, you must either explicitly reference their values or create constants with the
#DEFINE preprocessor directive:
#DEFINE adLockOptimistic 3
#DEFINE adOpenStatic 3
...
.oRS.CursorType = adOpenStatic
.oRS.LockType = adLockOptimistic
To make this easier, Microsoft now distributes an include file, adovfp.h, that you can use
in your applications. This file contains all the constants recognized by ADO, including those
mentioned in the previous code snippets. To get this file, visit the Visual FoxPro home page at
http://msdn.microsoft.com/vfoxpro and search for a utility called VFPCOM. This file is self-
extracting and expands into several files, including the adovfp.h file.
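The VFPCOM utility can also copy the contents of an open RecordSet into a Visual FoxPro
cursor with its RSToCursor method. A minimal sketch (the connection string and query are
illustrative):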
loCn = CREATEOBJECT("ADODB.Connection")
loRS = CREATEOBJECT("ADODB.RecordSet")
loVFPCOM = CREATEOBJECT("VFPCOM.COMUtil")
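loCn.Open("Provider=SQLOLEDB;User ID=sa;Password=;" + ;
  "Initial Catalog=Pubs;Data Source=MySQLSvr")
loRS.ActiveConnection = loCn
loRS.Open("SELECT * FROM Authors")
loVFPCOM.RSToCursor(loRS, "cAuthors")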
Unfortunately, the cAuthors cursor created in the preceding code is never updatable, so
you must create code that writes any changes back to the data source, either through the
RecordSet or with native Visual FoxPro techniques.
More on VFPCOM
One exciting feature of the VFPCOM utility is the ability to bind Visual FoxPro code to the
events of any COM server, including any object from ADO. For example, the RecordSet
object has events that occur when data is changed in the current record. These events are
WillChangeRecord, WillChangeField, FieldChangeComplete and RecordChangeComplete.
Native Visual FoxPro cannot handle these events, as it does not contain functionality to receive
and process events from a COM server like ADO. Armed with the VFPCOM utility, you can
have Visual FoxPro code respond to these events as they happen.
To handle the events triggered from an ADO RecordSet, you first create a Visual FoxPro
class that hooks to the events of the RecordSet. This is easy to do using VFPCOMs
ExportEvents method as follows:
loVFPCOM = CREATEOBJECT("VFPCOM.COMUtil")
loRS = CREATEOBJECT("ADODB.RecordSet")
lnStat = loVFPCOM.ExportEvents(loRS,"sample.prg")
IF lnStat = 0
MESSAGEBOX("program created successfully")
ENDIF
The previous code creates a program called sample.prg, which contains the definition of a
custom class with methods for each possible event of the RecordSet:
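DEFINE CLASS RecordSetEvents AS Custom  && generated class header; the class name is assumed here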
PROCEDURE FetchComplete(pError,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE FetchProgress(Progress,MaxProgress,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE FieldChangeComplete(cFields,Fields,pError,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE MoveComplete(adReason,pError,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE RecordChangeComplete(adReason,cRecords,pError,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE RecordsetChangeComplete(adReason,pError,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE WillChangeField(cFields,Fields,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE WillChangeRecord(adReason,cRecords,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE WillChangeRecordset(adReason,adStatus,pRecordset)
* Add user code here
ENDPROC
PROCEDURE WillMove(adReason,adStatus,pRecordset)
* Add user code here
ENDPROC
ENDDEFINE
Of course, you will want to add user code in these methods to make them respond
appropriately. Once you make your modifications, you can bind a RecordSetEvents object to a
RecordSet with the BindEvents method of the VFPCOM utility, as follows:
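* Load the generated class and bind it to the RecordSet
SET PROCEDURE TO sample.prg ADDITIVE
loHandler = CREATEOBJECT("RecordSetEvents")
lnStat = loVFPCOM.BindEvents(loRS, loHandler)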
Now you can detect an event that occurs when a user modifies data in a RecordSet, moves
a record pointer, or performs other operations that trigger events.
Note that the Connection object and the yet-to-be-introduced Command object also have
events that may be of interest. Be sure to check the ADO documentation in the MSDN library
for more details.
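The Command object
The Command object executes a query or stored procedure over a connection. The following
example (which assumes the pubs sample database and its byroyalty stored procedure) calls a
procedure and displays the result set: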
#INCLUDE adovfp.h
LOCAL loCn, loCmd, loRS, loFld
loCn = CREATEOBJECT("ADODB.Connection")
loCn.ConnectionString="Driver=SQL Server;"+ ;
"Server=MySQLSvr;uid=sa;pwd=;Database=Pubs"
loCn.Open()
IF loCn.State = 1
loCmd = CREATEOBJECT("ADODB.Command")
loRS = CREATEOBJECT("ADODB.RecordSet")
loCmd.ActiveConnection = loCn
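loCmd.CommandText = "EXECUTE byroyalty 40"  && illustrative; includes EXECUTE and a parameter
loCmd.CommandType = adCmdText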
loRS = loCmd.Execute()
ACTIVATE SCREEN
DO WHILE NOT loRS.EOF
FOR EACH loFld IN loRS.Fields
??TRANSFORM(loFld.Value)+" "
ENDFOR
loRS.MoveNext
?
ENDDO
ENDIF
You can see that the Command object has an ActiveConnection property, just like the
RecordSet object. The actual command is specified by the CommandText property, and the
CommandType property is used to signify that the contents of CommandText are literal text for
the server. In this example, since the command includes the EXECUTE keyword as well as a
parameter, it was necessary to use adCmdText instead of the expected adCmdStoredProc.
Finally, since the byroyalty procedure returns a result set, a RecordSet object is used to capture
the result.
You will not be able to call a stored procedure with output parameters or a return value
with the preceding code. Instead, you must take advantage of the Command objects
Parameters collection. This collection is designed to handle the variations in the number and
type of parameters used by the range of available stored procedures.
To use the Parameters collection, you must add Parameter objects to the collection. When
adding parameters, you also specify the attributes of each parameter, such as whether it is an
input or output parameter, the parameter's data type and, for input parameters, the input value.
The following code modifies the previous example to use the Parameters collection:
loCmd.ActiveConnection = loCn
loCmd.CommandText = "byroyalty"
loCmd.CommandType = adCmdStoredProc
loParam = loCmd.CreateParameter("Percentage",adInteger,adParamInput,0,40)
loCmd.Parameters.Append(loParam)
loRS = loCmd.Execute()
As you can see, the CreateParameter method builds the actual Parameter object separately
from the Command object. Once the Parameter object is created, you use the Append method
to add it to the Parameters collection of the Command object. This parameter object is then
automatically passed to SQL Server when the stored procedure is invoked.
As you read in Chapter 6, Extending Remote Views with SQL Pass Through, SQL pass
through can receive output parameters from SQL Server but cannot handle return values from
stored procedures. Fortunately, the ADO Command object can handle return values by adding
the appropriate Parameter object to the Parameters collection. To demonstrate, first consider
the following sample SQL Server stored procedure that accepts an input parameter and returns
a RecordSet, an output parameter and a return value:
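-- An illustrative definition; the parameter names and body are assumptions
CREATE PROCEDURE myproc
  @InValue int,
  @OutValue int OUTPUT
AS
SELECT * FROM authors
SELECT @OutValue = @InValue * 2
RETURN 1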
To invoke this procedure properly, it must be called with an input parameter as well as an
output parameter. Further, it returns a RecordSet (from the SELECT statement) and an integer
value (from the RETURN statement). The following code illustrates how to invoke this
procedure properly with a Command object so you can display the contents of the returned
RecordSet, return value and output parameter:
loCmd.CommandText = "myproc"
loCmd.CommandType = adCmdStoredProc
* Set up parameters
loParam = loCmd.CreateParameter("Return", adInteger, adParamReturnValue,0, 0)
loCmd.Parameters.Append(loParam)
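* The parameter names and values below are illustrative
loParam = loCmd.CreateParameter("InValue", adInteger, adParamInput, 0, 40)
loCmd.Parameters.Append(loParam)
loParam = loCmd.CreateParameter("OutValue", adInteger, adParamOutput, 0, 0)
loCmd.Parameters.Append(loParam)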
loRS = loCmd.Execute()
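* Close the RecordSet before reading the output parameter and return value
loRS.Close()
? loCmd.Parameters("Return").Value
? loCmd.Parameters("OutValue").Value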
Notice how the parameters are created and appended to the Command object. The order of
the parameters is significant: You must declare the return value first, followed by each
parameter in the order required by the stored procedure. Rearranging the parameters causes the
Execute method to fail.
Also note that you must close the returned RecordSet object before you can query the
returned values; otherwise, the parameters will contain empty values. This requirement
exists because of the way that ADO retrieves the results from a data source: first the
RecordSet is passed, then the output parameters and return values. Therefore, you must
always close any RecordSets before you are able to retrieve the actual values returned from a
stored procedure call.
Summary
This chapter has shown you the basics of using ADO within a Visual FoxPro application. The
advantages and disadvantages of ADO were described and compared to using the native Visual
FoxPro tools for accessing data. You then saw examples of the Connection, RecordSet and
Command objects, which showed the purpose of each type of object. Hopefully, you now have
enough information to incorporate ADO technology into your client/server applications.
Appendix A
New Features of
SQL Server 2000
This book was written about SQL Server 7 and its set of features. However, at press
time, Microsoft had just finalized SQL Server 2000, with a scheduled launch date of late
September 2000. Obviously, you might have questions about how the new version
affects your Visual FoxPro client/server applications and, in particular, if any of SQL
Server 2000's new features are worth exploring. This appendix serves as a short
comparison of the two products, describing some of what Microsoft did to make SQL
Server 2000 superior to SQL Server 7.
For a greater level of detail on any features of SQL Server 2000, you can check Microsoft's
Web site at http://www.microsoft.com/sql/, or install and read SQL Server 2000 Books Online,
particularly the topic "What's New in Microsoft SQL Server 2000."
Feature list
Table 1 shows an abridged high-level feature list for SQL Server 2000. The complete version
of this table is available from Microsoft at www.microsoft.com/sql/productinfo/sql2ktec.htm.
In the Editions column, E stands for the Enterprise edition, S for Standard and P for
Personal. Each edition has its own hardware and operating system requirements, which are
detailed on Microsofts Web site as well as in Books Online.
As you can see from Table 1, there are plenty of new features, and this table is not a
complete list! To save space, the features dealing specifically with data warehousing and XML
were left out of the table; however, if you are working with either of these technologies, be sure
to check out SQL Server 2000. It contains numerous features over those provided by SQL
Server 7 for data warehousing, while providing support for XML, support that did not exist at
all in SQL Server 7.
Since there are so many new features, this appendix will only cover those that are related
to the topics covered in this book.
Installation issues
Before you attempt to install SQL Server 2000, you should take the time to read the
setup/upgrade Help provided on the installation CD. It provides valuable information for
upgrading an existing SQL Server 7 to 2000, as well as the new installation options available
to you.
One installation issue with SQL Server 2000 is that it now allows multiple instances of the
server on the same computer. This is useful if you need to run databases for different clients or
applications, but cannot afford the additional expense of multiple servers. Under SQL Server 7,
if you needed a different sort order or code page for two different databases, you were forced to
install the product on two different machines. This is because these features can only be set at
the server level.
Once you have installed SQL Server 2000, you will find that you have the same tools that
were available under SQL Server 7, such as Enterprise Manager, Service Manager, Books
Online, and the Query Analyzer. However, all of these tools have been enhanced for use in
SQL Server 2000.
Query Analyzer
The SQL Server 2000 Query Analyzer has significant enhancements over the version included
with SQL Server 7. One very nice feature is the Object Browser, displayed on the left side of
the Query Analyzer window (shown in Figure 1). It contains a list of all the available objects,
as well as a hierarchical list of available functions and variables. Any member of the object
browser can be dragged to the query window, where text is automatically entered based upon
the dragged object.
For example, you can grab a table from the Object Browser and right-mouse-drag it to the
query window. When you release the right mouse button, a shortcut menu appears, allowing
you to insert code for any of several commands, such as SELECT, INSERT, UPDATE,
DELETE and CREATE TABLE.
This is extremely powerful, as it frees the developer from having to remember all of a
table's column names or the syntax of these commands. The CREATE TABLE command even
includes any constraints on the selected table.
The Object Browser also contains templates for many T-SQL commands. These can also
be dragged and dropped into the query window, allowing you to quickly build scripts. The
templates insert any necessary parameters enclosed in less than/greater than symbols (i.e.,
<parameter>) so that you can easily find and replace the parameters for your particular needs.
Figure 1. The SQL Server 2000 Query Analyzer showing the new Object Browser.
User-defined functions
Another feature that Visual FoxPro developers greatly missed in SQL Server 7 was the ability
to create and incorporate user-defined functions nearly anywhere in code. SQL Server 2000
changes things by allowing you to write user-defined functions, store them in a database, and
use them inside commands or even as column definitions.
The following example demonstrates how to create a simple user-defined function, and
then shows its use within a CREATE TABLE statement:
-- A reconstructed sketch; the function body, types and table name are assumptions
CREATE FUNCTION MyFraction (@numerator int, @denominator int)
RETURNS decimal(10, 4)
AS
BEGIN
   RETURN CONVERT(decimal(10, 4), @numerator) / @denominator
END
GO
CREATE TABLE FractionTable
   (numerator int,
    denominator int,
    fraction AS ( dbo.MyFraction(numerator,denominator) ) )
GO
To test this functionality, insert some values into the first two fields of the table. When you
query the table afterwards, you can see that SQL Server 2000 has automatically populated the
third column in each inserted record with the result of the user-defined function:
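-- Assumes the FractionTable sketched above
INSERT INTO FractionTable (numerator, denominator) VALUES (1, 2)
INSERT INTO FractionTable (numerator, denominator) VALUES (1, 3)
INSERT INTO FractionTable (numerator, denominator) VALUES (2, 3)
INSERT INTO FractionTable (numerator, denominator) VALUES (3, 4)
SELECT * FROM FractionTable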
(4 row(s) affected)
Other than this special feature, user-defined functions in SQL Server 2000 can be used
in the same fashion as in Visual FoxPro. For example, you can use this same function within
a query:
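-- Assumes the MyFraction function and FractionTable sketched above
SELECT numerator, denominator,
       dbo.MyFraction(numerator, denominator) AS fraction
FROM FractionTable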
SQL Server 2000 user-defined functions can return any data type except text, ntext, image,
cursor or timestamp. This means that you can use the new data types (detailed in the New Data
Types section later in this appendix) as return values from user-defined functions, providing
plenty of flexibility for your database implementation needs.
Referential integrity
One of the biggest gotchas in SQL Server 7 is that declarative referential integrity only
supports restrictive relationships. This is because of the way the FOREIGN KEY and
REFERENCES constraints were designed in SQL Server 7; they can only handle the restrict
rules. If you need to cascade a Delete or Update, you must work with stored procedures, or
remove the constraints and implement the cascade with T-SQL code in the appropriate trigger.
SQL Server 2000 now allows cascading referential integrity constraints. This means that
you no longer have to write code to implement a cascading Delete or Update. Furthermore,
since constraints are defined at the table level without code, this new feature will perform more
efficiently than the trigger- or stored-procedure-based techniques necessary under version 7.
To implement cascading RI, you can use the Table Properties dialog (shown in Figure 3),
which is part of the table designer in Enterprise Manager. By clicking the Cascade check boxes,
you will set the appropriate cascading constraint in the table.
Alternatively, you can use SQL Server 2000s CREATE TABLE or ALTER TABLE
T-SQL commands, which now support the ON DELETE, ON UPDATE and CASCADE
keywords. The following example creates a relationship between an existing State code table
and a new Customer table, basing the relationship upon the state code field in both tables:
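-- Table and column names are illustrative
CREATE TABLE Customer
  (CustomerID int NOT NULL PRIMARY KEY,
   CompanyName varchar(40) NOT NULL,
   StateCode char(2) NOT NULL
      REFERENCES StateCodes (StateCode)
      ON UPDATE CASCADE
      ON DELETE CASCADE)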
Trigger enhancements
In SQL Server 7, constraints are fired before the data is modified in a table. Therefore, if a
constraint fails, SQL Server fails the modification, leaving the data untouched and preventing
the firing of triggers. Triggers can only fire after a data modification has taken place, which can
only happen after all constraints have passed successfully. However, since all triggers finish
with an implied COMMIT TRANSACTION, triggers that need to discard changes must do so
by issuing a ROLLBACK TRANSACTION statement. This reverts changes that were already
made to the data.
SQL Server 2000 still supports these types of triggers; they are now called AFTER
triggers. In addition, a new type of trigger exists in SQL Server 2000 called an INSTEAD
OF trigger. These triggers fire instead of the triggering action (i.e., INSERT, UPDATE or
DELETE), execute before any constraints, and can be used on tables or views. Therefore, when
a data modification is made in SQL Server 2000, any INSTEAD OF triggers fire first, then the
constraints and, finally, any AFTER triggers.
The best place to use INSTEAD OF triggers is on views, particularly when the view
contains more than one base table. This allows any insertion of records into a view to work
properly and permits the view to be fully updatable. Without INSTEAD OF triggers, views can
only modify data in one table at a time.
Another trigger feature that was new for SQL Server 7 has been enhanced in SQL Server
2000. In version 7, it became possible to define multiple triggers for a single operation. For
example, you can create multiple UPDATE triggers, where each trigger essentially watches
for changes in a particular column. The only problem with multiple triggers is that SQL Server
7 did not provide any mechanism for specifying the order in which these multiple triggers
would fire.
This shortcoming of SQL Server 7 forced developers to write a single trigger that
encapsulated the functionality of the desired multiple triggers. With a single trigger, calls could
be made in the desired sequence.
In SQL Server 2000, you can now specify which trigger fires first and which fires last with
the sp_SetTriggerOrder system stored procedure. For example, if you have three update
triggers named Upd_Trig1, Upd_Trig2 and Upd_Trig3, you can force them to fire in numerical
order with the following T-SQL code:
EXECUTE sp_SetTriggerOrder
@TriggerName='Upd_Trig1',
@Order='first',
@stmttype='UPDATE'
EXECUTE sp_SetTriggerOrder
@TriggerName='Upd_Trig3',
@Order='last',
@stmttype='UPDATE'
All triggers have an order of None by default, which means that their order has not
been specified. If you need to determine whether a trigger is first or last, you must use the
OBJECTPROPERTY() function with one of the following properties: ExecIsFirstInsertTrigger,
ExecIsFirstUpdateTrigger, ExecIsFirstDeleteTrigger, ExecIsLastInsertTrigger,
ExecIsLastUpdateTrigger or ExecIsLastDeleteTrigger.
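-- A minimal sketch, assuming the Upd_Trig1 trigger shown above
DECLARE @IsFirst int
SELECT @IsFirst = OBJECTPROPERTY(OBJECT_ID('Upd_Trig1'),
                                 'ExecIsFirstUpdateTrigger')
SELECT @IsFirst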
In the preceding code, if the @IsFirst variable contains zero, the trigger is not the first
update trigger. If the variable contains one, then the trigger has been specified as the first
update trigger for the table.
Indexes on computed columns
Computed columns can now be indexed, allowing queries to retrieve their values
through an index instead of a table scan. The restriction here is that the computed column's
function must be deterministic. This means that the function must always return the same
result when provided with the same set of input values.
Another restriction can be demonstrated with the table definition created in the preceding
User-defined functions section, where a function was used as a column definition. While this
column could be indexed, SQL Server 2000 will not allow it until the function is created with
the SCHEMABINDING option. Schema binding prevents the objects that the function references
from being altered or dropped, ensuring that the function's dependencies do not accidentally
disappear and break it.
Big integers
The bigint data type is an eight-byte (64-bit) integer value with a range of
-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. This can be used in IDENTITY
columns where the number of records will exceed the limited range of the int data type
(a four-byte integer with a range of -2,147,483,648 to 2,147,483,647). However, as this
new data type is incompatible with the current
integer functions, SQL Server 2000 also added a COUNT_BIG() function and the
ROWCOUNT_BIG() function. These are functionally equivalent to the COUNT() function
and @@ROWCOUNT variable, but the returned data type is a bigint instead of an int.
Variants
The sql_variant data type is very similar to what we're used to in Visual FoxPro: it's a variant
data type, and it can hold any data type at any time. It cannot store BLOB data (text, ntext or
image data) or timestamp data, but it can be used as the data type for any column! Therefore, it
is possible now in SQL Server 2000 to have a column that stores different types of data in each
row. For example, here is a test SQL script to verify that this really works:
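-- The table name and inserted values are illustrative
CREATE TABLE VariantTest
  (AnyValue sql_variant,
   RowID int IDENTITY)
GO
INSERT INTO VariantTest (AnyValue) VALUES (42)
INSERT INTO VariantTest (AnyValue) VALUES ('Hello, world')
INSERT INTO VariantTest (AnyValue) VALUES (GETDATE())
INSERT INTO VariantTest (AnyValue) VALUES ($19.99)
SELECT * FROM VariantTest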
Note how the INSERT INTO statements put a different data type into each row of the
table. None of these statements fail because the first field is defined with the sql_variant data
type. Of course, once the data has been stored, you will want to retrieve the data, and the
SELECT statement handles this with no problems. However, if you desire to know the data
type of the actual data in the column, you can use the SQL_VARIANT_PROPERTY() function
to get the data type, similar to how the TYPE() function works in Visual FoxPro.
For example, if you wanted to add a column that displays the data type of the first field,
a SELECT statement along these lines (again using the illustrative VariantTest table) produces
the following output:
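-- SQL_VARIANT_PROPERTY() reports the base data type of each stored value
SELECT AnyValue,
   SQL_VARIANT_PROPERTY(AnyValue, 'BaseType') AS BaseType
FROM VariantTest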
Each of the four rows is listed alongside the base type of its value, and the output ends with:
(4 row(s) affected)
Tables as variables
The new table data type permits result sets from queries to be stored in a variable on the
server. The table data type cannot be used for a column definition, but it can be used within
server-side code. This data type is a clear advantage over using temporary tables, since
temporary tables always consume some amount of space in the tempdb database. In contrast,
data stored in a table variable exists entirely in memory, eliminating the performance problems
and storage requirements of temporary tables.
Defining a table data type requires use of the DECLARE command in T-SQL, with
additional text that specifies the structure of the table. Once the table has been declared, you
can work with it as you would any other table: insert data, delete records or modify data that
you've placed into it. The following sketch (the @MyTable name matches the discussion
below; the column definitions are illustrative) shows how to do this:
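-- Declare a table variable, then treat it like an ordinary table
DECLARE @MyTable TABLE (
   CustID   int,
   CustName varchar(40)
)

INSERT INTO @MyTable (CustID, CustName) VALUES (1, 'Alpha Industries')
INSERT INTO @MyTable (CustID, CustName) VALUES (2, 'Beta Manufacturing')

UPDATE @MyTable SET CustName = 'Alpha, Inc.' WHERE CustID = 1

SELECT * FROM @MyTable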
While this is not a tremendously useful example, it at least shows how a table variable
works like a temporary cursor in Visual FoxPro. It is important to note here that the @MyTable
variable goes out of scope when the procedure ends, so any data stored in the @MyTable
table will be released at that point. This is true for any variable declared within a SQL Server
stored procedure.
Summary
With only the features covered in this appendix, it's easy to see how SQL Server 2000 offers a
tremendous amount of benefit over SQL Server 7. However, this is only a small part of what
has changed for the newest version of SQL Server. The features mentioned here relate to
databases that are built for use in transactional, non-Internet-based applications. For Internet
applications, SQL Server 2000 provides numerous XML features to sweeten the pot over SQL
Server 7. Additionally, for OLAP applications, there are plenty of enhancements to further
improve on the performance of retrieving data from your data warehouse.
In any case, upgrading from SQL Server 7 to SQL Server 2000 seems like a win-win
situation, no matter what type of application you are planning to build with it.
Index
@, 10
Abstracting data access functionality, 130
Access 2000, 138
Accessing metadata, 98
ACID properties, 193
ActiveX controls, Displaying RecordSets with, 217
ActiveX Data Objects, 209
ADO, 168
ADO benefits, 209
ADO constants, 219
ADO disadvantages, 211
Advantages of client/server, 15
Advantages of remote VFP data, 126
AERROR(), 97
Agent, SQL Server, 33
An uncommitted dependency, 199
Application changes, 188
Application Distribution, 177
Application roles, 175
Application-level data handler, 131
Asynchronous processing, 113
Atomicity, 193
Audit trail, 7
Authentication, SQL Server, 28
Autocommit transaction, 37
B-Tree, 47
Backup/restore, 186
Balanced Tree, 47
Bandwidth, 172
Base table, 40
Big integers, 234
binary, 73
Binding connections, 113
bit, 73
Blocking, 203
Bookmark, 48
Buffer cache, 36
Buffering, 68
Built-in client/server support, 23
Built-in local data engine, 23
cache, buffer, 36
CAL (Client Access License), 30
Calling stored procedures, 109
Candidate index, 9
Capacity, 27
Capacity limitations, 138
Changes made locally, 86
char, 73
Character sets, 31
CHECK constraints, 45
Checkpointing, 36
Choosing indexes, 169
Client Access License, 30
Client application, 173
Client/server database, 2
Client/server development, 177
Client/server division of work, 171
Client/server performance issues, 169
Client/server to the rescue, 2
Clustered index, 9
Code page, 31
COM, 21
Command object, 222
Committed read, 199
Committing buffers, 69
Compatibility, SQL Server, 136
Component-based, 189
Components, 180
Composite index, 47
Concurrency, 28
Concurrency control, 37
Conflict resolution, 150
Connect strings, 61
Connecting to the server, 95
Connection Designer, 61
connection errors, Handling, 97
Connection object, 212
Connection properties revisited, 115
connection properties, other, 116
Connections, 57
Connections, Binding, 113
ConnectionTimeOut property, 117
ConnectTimeout, 62
Consistency, 193
Constraints, 43, 84
constraints, DEFAULT, 45
Constraints, PRIMARY KEY, 44
Cost, 16
Create Database Wizard, 33
Creating a database, 33
Creating indexes, 47
Data access, 3
data access functionality, 130
Data integrity mechanisms, 160
data integrity, enforcing, 41
Data location, 173
Data Source Names, 57
Data types, 42, 160
Database backup, 6
Database files, 33
Database objects, 39
Database Properties dialog, 33
Database updates, 190
DataType, 73
Datatype, binary, 73
Datatype, bit, 73
Datatype, char, 73
Datatype, decimal, 73
Datatype, float, 73
Datatype, image, 73
Datatype, smalldatetime, 73
Datatype, smallint, 73
Datatype, smallmoney, 73
Datatype, sysname, 73
Datatype, text, 73
Datatype, tinyint, 73
Datatype, varbinary, 73
Datatype, varchar, 73
Deadlocks, 38, 208
Debugging, 145
Debugging stored procedures, 228
Debugging tools, 152
decimal, 73
Declarative data integrity, 42
Declarative security, 4
DEFAULT constraints, 45
Defaults, 10, 82, 160, 162
DefaultValue, 72
DELETE operation, 54
Deployment models, 179
Design Issues, 156
Development environment, 177
Disadvantages of remote VFP data, 127
Disconnecting, 98
Displaying RecordSets with ActiveX controls, 217
Displaying RecordSets with code, 215
Displaying RecordSets with the VFPCOM utility, 220
DispLogin, 63, 116
DispWarnings, 63
Distributing databases (creating), 181
Distributing MSDE applications, 141
Domain integrity, 41
Downsizing, 125
DRI/foreign keys, 165
DSN, file, 58
DSN, system, 58
DSN, user, 58
DSNs, 57
DTS, 185
Durability, 194
Durable transactions, 201
Editions, SQL Server, 29
Enforcing data integrity, 41
Entity integrity, 41
Errors, 145
Exclusive locks, 38
Execution plan, 49
Existence of SQL Server, 181
explicit transactions, 37
Expression mapping, 82
Extent, 36
Feature list, 225
Features of client/server databases, 3
FetchAsNeeded, 70
FetchMemo, 71
FetchSize, 70
Field properties, 72
File server, 1
File-server database, 1
Filter conditions, 123