Pro Dynamic .NET 4.0 Applications: Data-Driven Programming for the .NET Framework, 1st Edition, by Carl Ganz, Jr.
BOOKS FOR PROFESSIONALS BY PROFESSIONALS®

THE EXPERT’S VOICE® IN .NET

Companion eBook Available

Pro Dynamic .NET 4.0 Applications: Data-Driven Programming for the .NET Framework

Dear Reader,

Applications should be as flexible and dynamic as possible; the more dynamic an application is, the less work will be required to maintain it, because it stores the user interface definitions and business rules themselves in a database. Then, when the application executes, the user interface is dynamically generated, and the data validated by business rules compiled at runtime.

Pro Dynamic .NET 4.0 Applications is an essential guide to building cutting-edge dynamic .NET 4.0 applications that provide such flexibility and make end-users as independent of the application’s developer as possible. This approach enables users to have more control over the way that their application functions whilst simultaneously reducing the frequency with which you need to issue service patches and support updates, saving time and money for both you and the user.

Some applications, by their very nature, require flexibility, whereby most, if not all, of their functionality is generated at runtime. Constantly changing survey applications, CRM systems, and accounting systems, whose every data element could never be anticipated at development time, are just a few of the applications that could benefit from the data-driven techniques I cover in this book.

I provide examples based on real-world experiences of developing dynamic software, covering the basics of data-driven programming; dynamic user interfaces for WinForms, ASP.NET, and WPF; dynamic reporting; and database design for data-driven applications. Though the examples have been created in C# 4.0, the underlying concepts can be applied to almost any programming language on almost any platform. This book will save you and your users countless hours by helping you to create applications that can easily be modified without major redevelopment effort.

Sincerely,

Carl Ganz, Jr.

Also by Carl Ganz, Jr.: Pro Crystal Enterprise / BusinessObjects XI Programming (2006); Real World Enterprise Reports using VB6 and VB.NET (2003); Visual Basic 5 Web Database Development (1997); CA-Visual Objects Developer’s Guide (1995).

THE APRESS ROADMAP: Beginning C#; Illustrated C#; Beginning ASP.NET 4.0 in C# 2010; Pro ASP.NET 4.0 in C# 2010, Fourth Edition; Pro ASP.NET MVC 2 Framework; Pro C# 2010 and the .NET 4.0 Platform, Fifth Edition; Pro WPF in C# 2010; WPF Recipes in C# 2010: A Problem-Solution Approach; Pro Dynamic .NET 4.0 Applications.

ISBN 978-1-4302-2519-5 | US $49.99 | User level: Intermediate–Advanced | Shelve in: .NET

Source code online at www.apress.com. See last page for details on the $10 eBook version.

Pro Dynamic .NET 4.0

Applications

Data-Driven Programming for the .NET

Framework

■■■

Carl Ganz, Jr.


Pro Dynamic .NET 4.0 Applications: Data-Driven Programming for the .NET Framework

Copyright © 2010 by Carl Ganz, Jr.

All rights reserved. No part of this work may be reproduced or


transmitted in any form or by any means, electronic or mechanical,
including photocopying, recording, or by any information storage or
retrieval system, without the prior written permission of the
copyright owner and the publisher.
ISBN-13 (pbk): 978-1-4302-2519-5

ISBN-13 (electronic): 978-1-4302-2520-1

Printed and bound in the United States of America 9 8 7 6 5 4 3 2 1

Trademarked names may appear in this book. Rather than use a


trademark symbol with every occurrence of a trademarked name, we
use the names only in an editorial fashion and to the benefit of the
trademark owner, with no intention of infringement of the
trademark.

President and Publisher: Paul Manning

Lead Editor: Matthew Moodie

Technical Reviewer: Ryan Follmer

Editorial Board: Clay Andres, Steve Anglin, Mark Beckner, Ewan Buckingham, Gary Cornell, Jonathan Gennick, Jonathan Hassell, Michelle Lowman, Matthew Moodie, Duncan Parkes, Jeffrey Pepper, Frank Pohlmann, Douglas Pundick, Ben Renow-Clarke, Dominic Shakeshaft, Matt Wade, Tom Welsh

Project Manager: Anita Castro

Copy Editor: Tiffany Taylor

Compositor: Bronkella Publishing LLC

Indexer: John Collin

Artist: April Milne

Cover Designer: Anna Ishchenko

Distributed to the book trade worldwide by Springer-Verlag New York, Inc., 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax 201-348-4505, e-mail orders-[email protected], or visit http://www.springeronline.com.

For information on translations, please e-mail [email protected], or


visit http://www.apress.com.

Apress and friends of ED books may be purchased in bulk for


academic, corporate, or promotional use. eBook versions and
licenses are also available for most titles. For more information,
reference our Special Bulk Sales–eBook Licensing web page at
http://www.apress.com/info/bulksales.

The information in this book is distributed on an “as is” basis,


without warranty. Although every precaution has been taken in the
preparation of this work, neither the author(s) nor Apress shall have
any liability to any person or entity with respect to any loss or
damage caused or alleged to be caused directly or indirectly by the
information contained in this work.

The source code for this book is available to readers at


http://www.apress.com. You will need to answer questions
pertaining to this book in order to successfully download the code.


With all paternal love, to Carl John III, Rose Veronica, and our
unborn baby, either Paul Christian or Emily Anne, whichever one you
turn out to be.

Contents at a Glance

Contents at a Glance .......................................................................... iv
Contents ............................................................................................. v
About the Author ................................................................................ ix
About the Technical Reviewer ............................................................... x
Acknowledgments ............................................................................... xi
Introduction ...................................................................................... xii
■Chapter 1: Introducing Data-Driven Programming ................................. 1
■Chapter 2: Reflection ....................................................................... 29
■Chapter 3: Runtime Code Compilation ................................................ 59
■Chapter 4: Dynamic WinForms .......................................................... 77
■Chapter 5: Dynamic ASP.NET ........................................................... 123
■Chapter 6: Dynamic WPF ................................................................. 155
■Chapter 7: Reporting ...................................................................... 183
■Chapter 8: Database Design ............................................................ 217
■Index ............................................................................................ 237

Contents

Contents at a Glance .......................................................................... iv
Contents ............................................................................................. v
About the Author ................................................................................ ix
About the Technical Reviewer ............................................................... x
Acknowledgments ............................................................................... xi
Introduction ...................................................................................... xii
■Chapter 1: Introducing Data-Driven Programming ................................. 1
    Database Metadata ......................................................................... 2
        SQL Server ................................................................................ 2
        Oracle ...................................................................................... 4
    Practical Applications ..................................................................... 6
    Code Generation ............................................................................. 9
        Custom Code Generators ........................................................... 10
        Using the CodeDOM .................................................................. 13
    Summary ..................................................................................... 28
■Chapter 2: Reflection ....................................................................... 29
    Instantiating Classes ..................................................................... 29
        Loading Shared Assemblies ....................................................... 31
        Examining Classes ................................................................... 32
    Drilling Down into Assembly Objects ............................................... 41
        Building an Object Hierarchy ..................................................... 42
        Importing Control Definitions .................................................... 45
        Decompiling Source Code ......................................................... 52
    Summary ..................................................................................... 57
■Chapter 3: Runtime Code Compilation ................................................ 59
    System.CodeDom.Compiler Namespace ........................................... 59
        Compiling the Code .................................................................. 61
        Error Handling ......................................................................... 63
        Executing the Code .................................................................. 66
    Referencing Controls on Forms ....................................................... 68
    Adding References ........................................................................ 70
    Testing ........................................................................................ 75
    Summary ..................................................................................... 75
■Chapter 4: Dynamic WinForms .......................................................... 77
    Instantiating Forms ....................................................................... 77
    Wiring Events ............................................................................... 81
figures, an accuracy no analog computer can match. The significant
point is that the analog can never hope to compete with digital types
for accuracy.
A third, perhaps less important, advantage the digital machine
has is its compactness. We are speaking now of later computers,
and not the pioneer electromechanical giants, of course. The
transistor and other small semiconductor devices supplanted the
larger tubes, and magnetic cores took the place of cruder storage
components. Now even more exotic devices are quietly ousting
these, as magnetic films and cryotrons begin to be used in
computers.
Science Materials Center

BRAINIAC, another do-it-yourself computer. This digital machine is here being programmed to solve a logic problem involving a will.
This drastic shrinking of size by thinking small on the part of
computer designers increases the capacity of the digital computer at
no sacrifice in accuracy or reliability. The analog, unfortunately,
cannot make use of many of these solid-state devices. Again, the
bugaboo of accuracy is the reason; let’s look further into the
problem.
The most accurate and reliable analog computers are mechanical
in nature. We can cut gears and turn shafts and wheels to great
accuracy and operate them in controlled temperature and humidity.
Paradoxically, this is because mechanical components are nearer to
digital presentations than are electrical switches, magnets, and
electronic components. A gear can have a finite number of teeth;
when we deal with electrons flowing through a wire we leave the
discrete and enter the continuous world. A tiny change in voltage or
current, or magnetic flux, compounded several hundred times in a
complex computer, can change the final result appreciably if the
errors are cumulative, that is, if they are allowed to pile up. This is
what happens in the analog computer using electrical and electronic
components instead of precisely machined cams and gears.
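How tiny errors compound into an appreciable one can be sketched numerically. Here is a minimal illustration, assuming a worst case in which every stage of a long chain errs by just 0.1 percent in the same direction (the figures are illustrative, not measurements of any real machine):

```python
# Pass a value through 300 chained "analog" stages, each applying a
# gain that is high by just 0.1% -- tiny per-stage errors that are
# allowed to pile up, as the text describes.
stages = 300
value = 1.0
for _ in range(stages):
    value *= 1.001  # each stage multiplies in its own small error

error = value - 1.0  # an ideal chain would return exactly 1.0
print(round(error, 2))  # roughly 0.35: a 35% error from 0.1% parts
```

A 0.1 percent error per stage has grown into an error of about 35 percent by the end of the chain, which is exactly the cumulative penalty described above.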
The digital device, on the other hand, is not so penalized. Though
it uses electronic switches, these can be so set that even an
appreciable variation in current or voltage or resistance will not
affect the proper operation of the switch. We can design a transistor
switch, for example, to close when the current applied exceeds a
certain threshold. We do not have to concern ourselves if this excess
current is large or small; the switch will be on, no more and no less.
Or it will be completely off. Just as there is no such thing as being a
little bit dead, there is no such thing as a partly off digital switch. So
our digital computer can make use of the more advanced electronic
components to become more complex, or smaller, or both. The
analog must sacrifice its already marginal accuracy if it uses more
electronics. The argument here is simplified, of course; there are
electronic analog machines in operation. However, the problem of
the “drift” of electronic devices is inherent and a limiting factor on
the performance of the analog.
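That all-or-nothing threshold behavior is easy to sketch. The numbers below are arbitrary illustrations, not real transistor parameters:

```python
def transistor_switch(current, threshold=1.0):
    """A digital switch in miniature: on when the applied current
    exceeds the threshold, off otherwise. Whether the excess is
    large or small makes no difference to the output."""
    return current > threshold

print(transistor_switch(1.05), transistor_switch(5.0))  # both on
print(transistor_switch(0.95), transistor_switch(0.2))  # both off
```

Wide variation in the input current yields exactly the same clean on-or-off answer, which is why imperfect components can still produce a perfect digital result.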
These, then, are some of the advantages the digital computer has
over its analog relative. It is more flexible in general—though there
are some digital machines that are more specialized than some
analog types; it is more accurate and apparently will remain so; and
it is more amenable to miniaturization and further complexity
because its designer can use less than perfect parts and produce a
perfect result.
In the disadvantage department the digital machine’s only
drawback seems to be its childish way of solving problems. About all
it knows how to do is to add 1 and 1 and come up with 2. To
multiply, it performs repetitive additions, and solving a difficult
equation becomes a fantastically complex problem when compared
with the instantaneous solution possible in the analog machine. The
digital computer redeems itself by performing its multitudinous
additions at fabulous speeds.
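A toy sketch of that "childish" method, multiplying by nothing but repeated addition:

```python
def multiply(a, b):
    """Multiply non-negative integers the way the text describes the
    digital machine working: repetitive additions only."""
    total = 0
    for _ in range(b):
        total += a  # one addition per step; sheer speed makes this viable
    return total

print(multiply(7, 6))  # 42, reached by adding 7 to itself six times
```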
Because it must be fed digits in its input, the digital machine is not
economically feasible in many applications that will probably be
reserved for the analog. A digital clock or thermometer for
household use would be an interesting gimmick, but hardly worth
the extra trouble and expense necessary to produce. Even here,
though, first glances may be wrong and in some cases it may prove
worth while to convert analog inputs to digital with the reverse
conversion at the output end. One example of this is the airborne
digital computer which has taken over many jobs earlier done by
analog devices.
There is another reason for the digital machine’s ubiquitousness, a
reason it does not seem proper to list as merely a relative advantage
over the analog. We have described the analog computer used as an
aid to psychological testing procedures, and its ability to handle a
multiplicity of problems at once. This perhaps tends to obscure the
fact that the digital machine by its very on-off, yes-no nature is
ideally suited to the solving of problems in logic. If it achieves
superiority in mathematics in spite of its seemingly moronic handling
of numbers, it succeeds in logic because of this very feature.
While it might seem more appropriate that music be composed by
analogy, or that a chess-playing machine would likely be an analog
computer, we find the digital machine in these roles. The reason may
be explained by our own brains, composed of billions of neurons,
each capable only of being on or off. While many philosophers build
a strong case for the yes-no-maybe approach with its large areas of
gray, the discipline of formal logic admits to only two states, those
that can so conveniently be represented in the digital computer’s
flip-flop or magnetic cores.
The digital computer, then, is not merely a counting machine, but
a decision-maker as well. It can decide whether something should
be added, subtracted, or ignored. Its logical manipulations can by
clever circuitry be extended from AND to OR, NOT, and NOR. It thus
can solve not only arithmetic, but also the problems of logic
concerning foxes, goats, and cabbages, or cannibals and
missionaries that give us human beings so much trouble when we
encounter them.
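The logical connectives named above can be written out as functions of two on/off (True/False) switch states, and the "clever circuitry" remark can be made concrete: a single switch type, NOR, is enough to rebuild the others.

```python
# The basic connectives, each a function of on/off switch states.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a
def NOR(a, b): return NOT(OR(a, b))

# NOR alone rebuilds NOT, OR, and AND -- clever wiring of one
# switch type extends to all of two-valued logic.
def NOT2(a):    return NOR(a, a)
def OR2(a, b):  return NOT2(NOR(a, b))
def AND2(a, b): return NOR(NOT2(a), NOT2(b))

# Checking all four input combinations confirms the equivalences.
for a in (False, True):
    for b in (False, True):
        assert AND(a, b) == AND2(a, b)
        assert OR(a, b) == OR2(a, b)
```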
The fact that the digital computer is just such a rigorously logical
and unbending machine poses problems for it in certain of its
dealings with its human masters. Language ideally should be logical
in its structure. In general it probably is, but man is so perverse that
he has warped and twisted his communications to the point that a
computer sticking strictly to book logic will hit snags almost as soon
as it starts to translate human talk into other human talk, or into a
logical machine command or answer.
For instance, we have many words with multiple meanings which
give rise to confusion unless we are schooled in subtleties. There are
stories, some of them apocryphal but nonetheless pointing up the
problem, of terms like “water goat” cropping up in an English-to-
Russian translation. Investigation proved that the more meaningful
term would have been “hydraulic ram.” In another interesting
experiment, the expression, “the spirit is willing but the flesh is
weak” was machine translated into Russian, and then that result in
turn re-translated back into English much in the manner of the party
game of “Telephone” in which an original message is whispered from
one person to another and finally back to the originator. In this
instance, the final version was, “The vodka is strong, but the meat is
rotten.”
It is a fine distinction here as to who is wrong, the computer or
man and his irrational languages. Chances are that in the long run
true logic will prevail, and instead of us confusing the computer it
will manage instead to organize our grammar into the more efficient
tool it should be. With proper programming, the computer may even
be able to retain sufficient humor and nuance to make talk
interesting and colorful as well as utilitarian.
We can see that the digital machine with its flexibility, accuracy,
and powerful logical capability is the fair-haired one of the computer
family. Starting with a for abacus, digital computer applications run
through practically the entire alphabet. Its take-over in the banking
field was practically overnight; it excels as a tool for design and
engineering, including the design and engineering of other
computers. Aviation relies heavily on digital computers already, from
the sale of tickets to the control of air traffic.
Gaming theory is important not only to the Saturday night poker-
player and the Las Vegas casino operator, but to military men and
industrialists as well. Manufacturing plants rely more and more on
digital techniques for controls. Language translation, mentioned
lightly above, is a prime need at least until we all begin speaking
Esperanto, Io, or Computerese. Taxation, always with us, may at
least be more smoothly handled when the computers take over.
Insurance, the arrangement of music, spaceflight guidance, and
education are random fields already dependent more or less on the
digital computer. We will not take the time here to go thoroughly
into all the jobs for which the computer has applied for work and
been hired; that will be taken up in later chapters. But from even a
quick glance the scope of the digital machine already should be
obvious. This is why it is usually a safe assumption that the word
computer today refers to the digital type.
Hybrid Computers
We have talked of the analog and the digital; there remains a
further classification that should be covered. It is the result of a
marriage of our two basic types, a result naturally hybrid. The
analog-digital computer is third in order of importance, but
important nonetheless.
Minneapolis-Honeywell

Nerve center of Philadelphia Electric Company’s digital computer-directed automatic economic dispatch system is this console from which power directors operate and supervise loading of generating units at minimum incremental cost.

Necessity, as always, mothered the invention of the analog-digital


machine. We have talked of the relative merits of the two types; the
analog is much faster on a complex problem such as solving
simultaneous equations. The digital machine is far more accurate. As
an example, the Psychological Matrix Rotator described earlier could
solve its twelve equations practically instantaneously. A digital
machine might take seconds—a terribly long time by computer
standards. If we want an accurate high-speed differential analyzer,
we must combine an analog with a digital computer.
Because the two are hardly of the same species, this breeding is
not an easy thing. But by careful study, designers effected the
desired mating. The hybrid is not actually a new type of computer,
but two different types tied together and made compatible by
suitable converters.
The composite consists of a high-speed general-purpose digital
computer, an electronic analog computer, an analog-to-digital
converter, a digital-to-analog and a suitable control for these two
converters. The converters are called “transducers” and have the
ability of changing the continuous analog signal into discrete pulses
of energy, or vice versa.
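The transducer's job amounts to quantization, and can be sketched in a few lines. The 8-bit resolution and 10-volt range here are illustrative assumptions, not the specifications of any particular converter:

```python
def analog_to_digital(voltage, full_scale=10.0, bits=8):
    """Quantize a continuous voltage (0 to full_scale) into one of
    2**bits discrete levels -- the analog-to-digital transducer."""
    levels = 2 ** bits
    code = int(voltage / full_scale * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid range

def digital_to_analog(code, full_scale=10.0, bits=8):
    """The reverse conversion: a discrete code back to a voltage."""
    return code / (2 ** bits - 1) * full_scale

code = analog_to_digital(7.37)
# The round trip is close but not exact: the difference is the
# quantization error inherent in going from continuous to discrete.
print(code, round(digital_to_analog(code), 3))
```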
Sometimes called digital differential analyzers, the hybrid
computers feature the ease of programming of the analog, plus its
speed, and the accuracy and much broader range of the digital
machine. Bendix among others produced such machines several
years ago. The National Bureau of Standards recently began
development of what it calls an analog-digital differential analyzer
which it expects to be from ten to a hundred times more accurate
than earlier hybrid computers. The NBS analyzer will be useful in
missile and aircraft design work.
Despite its apparent usefulness as a compromise and happy
medium between the two types, the hybrid would seem to have as
limited a future as any hybrid does. Pure digital techniques may be
developed that will be more efficient than the stopgap combination,
and the analog-digital will fall by the wayside along the computer
trail.
Summary
Historically, the digital computer was first on the scene. The
analog came along, and for a time was the more popular for a
variety of reasons. One of these was the naïve, cumbersome mode
of operation the digital computer is bound to; another its early lack
of speed. Both these drawbacks have been largely eliminated by
advances in electronics, and apparently this is only the beginning. In
a few years the technology has progressed from standard-size
vacuum tubes through miniature tubes and the shrinking of other
components, to semiconductors and other tinier devices, and now
we have something called integrated circuitry, with molecular
electronics on the horizon. These new methods promise computer
elements approaching the size of the neurons in our own brains, yet
with far faster speed of operation.
Such advances help the digital computer more than the analog,
barring some unexpected breakthrough in the accuracy problem of
the latter. Digital building blocks become ever smaller, faster,
cheaper, and more reliable. Computers that fit in the palm of the
hand are on the market, and are already bulky by comparison with
those in the laboratory. The analog-digital hybrid most likely will not
be new life for the analog, but an assimilating of its better qualities
by the digital.
“‘What’s one and one and one and one and one and one and one
and one and one and one?’
‘I don’t know,’ said Alice. ‘I lost count.’
‘She can’t do Addition,’ the Red Queen interrupted.”
—Lewis Carroll
5: The Binary Boolean Bit

In this world full of “bigness,” in which astronomical numbers


apply not only to the speed of light and the distance to stars but to
our national debt as well, it is refreshing to recall that some lucky
tribes have a mathematical system that goes, “One—two—plenty!”
Such an uncluttered life at times seems highly desirable, and we can
only envy those who lump all numbers from three to billions as
simply “plenty.”
Instead we are faced today with about as many different number
systems as there are numbers, having come a long way from the
dawn of counting when an even simpler method than “one—two—
plenty” prevailed. Man being basically self-centered, he first thought
in terms of “me,” or one. Two was not a concept, but two “ones”;
likewise, three “ones” and so on. Pebbles were handy, and to
represent the ten animals slain during the winter, a cave man could
make ten scratches on the wall or string out that many stones.
It is said that the ancient cabbies in Rome had a taximeter that
dropped pebbles one by one onto a plate as the wheels turned the
requisite number of revolutions. This plate of stones was presented
to the passenger at the end of his ride—perhaps where we get the
word “fare”! Prices have risen so much that it would take quite a bag
of pebbles in the taximeter today.
Using units in this manner to express a sum is called the unitary
system. It is the concept that gives rise to the “if all the dollars spent
in this country since such and such a time were laid end to end—”
analogies. Put to practice, this might indeed have a salutary effect,
but long ago man learned that it was not practical to stick to a one-
for-one representation.
How long it was before we stumbled onto the fact that we had a
“handy” counting system attached to our wrists is not positively
known, but we eventually adopted the decimal system. In some
places the jump from one to ten was not made completely. The
Pueblo Indians, for instance, double up one fist each time a sum of
five is reached. Thus the doubled fist and two fingers on the other
hand signifies seven. In the mathematician’s language, this is a
modulo-5 system. The decimal system is modulo-10; in other words
we start over each time after reaching 10.
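The Pueblo count is ordinary modulo-5 arithmetic, as a short sketch shows:

```python
def pueblo_count(n):
    """Split n into doubled fists (groups of five) and leftover
    fingers, as in the modulo-5 Pueblo count described above."""
    fists, fingers = divmod(n, 5)
    return fists, fingers

print(pueblo_count(7))  # (1, 2): one doubled fist plus two fingers
```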
Besides leaving us the word digit to tie fingers and numbers together, finger counting shows in the Roman numerals V and X, graphic representations of one hand with thumb widespread and two hands crossed, respectively. A point worth remembering is that the decimal system
was chosen arbitrarily because we happen to have ten digits. There
is no divine arithmetical significance in the number 10; in fact
mathematicians would prefer 12, since it can be divided more ways.
The ancient Mayans felt that if 10 were ten times as good as 1, then surely 20 would be twice the improvement of the decimal system. So they pulled off their boots and added toes to fingers for a modulo-20 number system. Their word for 20, then, is the same as that for "the whole man" for very good reason. Other races adopted
even larger base systems, the base of 60 being an example.
If we look to natural reasons for the development of number
systems, we might decide that the binary, or two-valued system, did
not attain much prominence in naïve civilizations because there are
so few one-legged, two-toed animals! Only when man built himself a
machine uniquely suited to two-valued mathematics did the binary
system come into its own.
Numbers are merely conventions, rigorous conventions to be sure
with no semantic vagueness. God did not ordain that we use the
decimal system, as evidenced in the large number of other systems
that work just fine. Some abacuses use the biquinary system, and
there are septal, octal, and sexagesimal systems. We can even
express numbers in an ABC or XYZ notation. So a broad choice was
available for the computer designer when he began to look about for
the most efficient system for his new machine.
Considering only the question of a radix, or base, which will permit
the fewest elements to represent the desired numbers,
mathematicians can show us that a base of not 10, or 12, or any
other whole number is most efficient, but the number 2.71828. The
ideal model is not found in man, then, since man does not seem to
have 2.71828 of anything. However, the strange-looking number
does happen to be the base of the system of natural logarithms.
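The mathematicians' claim can be checked with a little modern arithmetic. The "cost" of a base is roughly the number of symbols per digit times the number of digits needed, which works out to b/ln(b) times a constant; the sketch below (in Python, purely as an illustration) compares a few candidate bases:

```python
import math

def radix_economy(base, value=10**6):
    """Cost of representing `value`: symbols per digit (base)
    times digits needed (log of value in that base)."""
    return base * math.log(value) / math.log(base)

# e = 2.71828... beats every whole-number base
for b in (2, math.e, 3, 10, 12):
    print(f"base {b:6.3f}: cost {radix_economy(b):8.1f}")
```

Running it shows the cost is lowest at the base e, with 3 a close second and 2 not far behind, just as the text claims.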
Now a system of mathematics based on 2.71828 might make the
most efficient use of the components of the computer, but it would
play hob with other factors, including the men who must work with
such a weird set of numbers. As is often done, a compromise was
made between ideal and practical choices. Since the computer with
the most potential seems to be the electronic computer, and since its
operation hinges on the opening and closing of simple or
sophisticated switches, a two-valued mathematical system, the
binary system, was chosen. It wasn’t far from the ideal 2.71828, and
there was another even more powerful reason for the choice. Logic
is based on a yes-no, true-false system. Here, then, was the best of
all possible number systems: the lowly, apparently far-from-
sophisticated binary notation. As one writer exclaimed sadly, a
concept which had been hailed as a monument to monotheism
ended up in the bowels of a robot!
The Binary System
It is believed from ancient writings that the Chinese were aware of
the binary or two-valued system of numbers as early as 3000 B.C.
However, this fact was lost as the years progressed, and Leibniz
thought that he had discovered binary himself almost 5,000 years
later. In an odd twist, Leibniz apprised his friend Grimaldi, the Jesuit
president of the Tribunal of Mathematics in China, of the religious
significance of binary 1 and 0 as an argument against Buddhism!
A legend in India also contains indications of the power of the
binary system. The inventor of the game of chess was promised any
award he wanted for this service to the king. The inventor asked
simply that the king place a grain of wheat on the first square of the
board, two on the second, and then four, eight, and so on in
ascending powers of two until the sixty-four squares of the board
were covered. Although the king thought his subject a fool, this
amount of wheat would have covered the entire earth to a depth of
about an inch!
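The king's debt is easy to total up today. Doubling from one grain on each of the sixty-four squares gives 2⁶⁴ − 1 grains, a number a short sketch can print out:

```python
# Total grains on a 64-square chessboard:
# 1 + 2 + 4 + ... + 2**63 = 2**64 - 1
grains = sum(2**square for square in range(64))
print(grains)   # 18446744073709551615
```

That is more than eighteen quintillion grains of wheat.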
We are perhaps more familiar with the binary system than we
realize. Morse code, with its dots and dashes, for example, is a two-
valued system. And the power of a system with a base of two is
evident when we realize that given a single one-pound weight and
sufficient two-pound weights we can weigh any whole-numbered
amounts.
At first glance, however, binary numbers seem a hopeless
conglomeration of ones and zeros. This is so only because we have
become conditioned to the decimal system, which was even more
hopeless to us as youngsters. We may have forgotten, with the
contempt of familiarity, that our number system is built on the idea
of powers. In grade school we learned that starting at the right we
had units, tens, hundreds, thousands, and so on. In the decimal
number 111, for example, we mean 1 times 10², plus 1 times 10¹,
plus 1. We have handled so many numbers so many times we have
usually forgotten just what we are doing, and how.
The binary system uses only two numbers: 1 and 0. So it is five
times as simple as the decimal system. It uses powers of two rather
than ten, again far simpler. Let’s take the binary number 111 and
break it down just as we do a decimal number. Starting at the left,
we have 1 times 2², plus 1 times 2¹, plus 1. This adds up to 7, and
there is our answer.
The decimal system is positional; this is what made it so much
more effective in the simple expression of large numbers than the
Roman numeral system. Binary is positional too, and for larger
numbers we continue moving toward the left, increasing our power
of two each time. Thus 1111 is 2³ plus 2² plus 2¹ plus 1.
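This positional expansion is exactly how a small routine would read a binary numeral today; the Python sketch below is offered only as a modern illustration of the rule just described:

```python
def binary_to_decimal(bits):
    """Expand a binary numeral by powers of two,
    just as 111 means 1*4 + 1*2 + 1."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)  # each step moves up one power of two
    return total

print(binary_to_decimal("111"))    # 7
print(binary_to_decimal("1111"))   # 15
```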
[Photo: System Development Corp. A computer teaching machine answering a question about the binary system.]

We are familiar with decimal numbers like 101. This means 1
hundred, no tens, and 1 unit. Likewise in binary notation 101 means
one 4, no 2’s, and one 1. For all its seeming complexity, then, the
binary system is actually simpler than the “easy” decimal one we are
more familiar with. But despite its simplicity, the binary system is far
from being inferior to the decimal system. You can prove this by
doing some counting on your fingers.
Normally we count, or tally, by bending down a finger for each
new unit we want to record. With both hands, then, we can add up
only ten units, a quite limited range. We can add a bit of
sophistication, and assign a different number to each finger; thus 1,
2, 3, 4, 5, 6, 7, 8, 9, 10. Now, believe it or not, we can tally up to 55
with our hands! As each unit is counted, we raise and lower the
correct finger in turn. On reaching 10, we leave that finger—thumb,
actually—depressed, and start over with 1. On reaching 9, we leave
it depressed, and so on. We have increased the capacity of our
counting machine by 5-1/2 times without even taking off our shoes.
The mathematician, by the way, would say we have a capability of
not 55 but 56 numbers, since all fingers up would signify 0, which
can be called a number. Thus our two hands represent to the
mathematician a modulo-56 counter.
This would seem to vanquish the lowly binary system for good,
but let’s do a bit more counting. This time we will assign each finger
a number corresponding to the powers of 2 we use in reading our
binary numbers. Thus we assign the numbers 1, 2, 4, 8, 16, 32, 64,
128, 256, and 512. How many units can we count now? Not 10, or
55, but a good bit better than that. Using binary notation, our ten
digits can now record a total of 1,023 units. True, it will take a bit of
dexterity, but by bending and straightening fingers to make the
proper sums, when you finally have all fingers down you will have
counted 1,023, or 1,024 if you are a mathematical purist.
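The arithmetic behind the trick is simply the sum of the first ten powers of two, which a one-line modern sketch confirms:

```python
# Ten fingers labeled 1, 2, 4, ..., 512: all fingers down
# tallies the sum of the first ten powers of two.
finger_values = [2**n for n in range(10)]
print(sum(finger_values))   # 1023
```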
Once convinced that the binary method does have its merits, it
may be a little easier to pursue a mastery of representing numbers
in binary notation, difficult as it may seem at the outset. The usual
way to convert is to remember, or list, the powers of 2, and start at
the left side with the largest power that can be divided into the
decimal number we want to convert. Suppose we want to change
the number 500 into binary. First we make a chart of the positions:
Power of 2:       8    7    6    5    4    3    2    1    0
Decimal number: 256  128   64   32   16    8    4    2    1
Binary number:    1    1    1    1    1    0    1    0    0
Since 256 is the largest number that will go into 500, we start
there, knowing that there will be nine binary digits, or “bits” in our
answer. We place a 1 in that space to indicate that there is indeed
an eighth power of 2 included in 500. Since 128 will go into the
remainder, we put a 1 in that space also. Continuing in this manner,
we find that we need 1’s until we reach the “8” space which we must
skip since our remainder does not contain an 8. We mark a 1 in the
4 space, but skip the 2 and the 1. Our answer, then, in binary
notation is 111110100. This number is called “pure binary.” It can
also lead to pure torture for human programmers whose eyes begin
to bug with this “bit chasing,” as it has come to be called. Everything
is of course relative, and the ancient Roman might gladly have
changed DCCCLXXXVIII to binary 1101111000, which is two digits
shorter.
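The chart method just described, subtracting descending powers of two and marking a 1 wherever a power fits, can be sketched in a modern language (Python, for illustration only):

```python
def decimal_to_binary(n):
    """Mark a 1 for each descending power of two that fits into n."""
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:   # find the largest power of two not exceeding n
        power *= 2
    bits = ""
    while power >= 1:
        if n >= power:      # this power of two is included
            bits += "1"
            n -= power
        else:
            bits += "0"
        power //= 2
    return bits

print(decimal_to_binary(500))   # 111110100
```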
There is a simpler way of converting that might be interesting to
try out. We’ll start with our same 500. Since it is an even number,
we put a 0 beneath it. Moving to the left, we divide by two and get
250. This also is an even number, so we mark down a 0 in our binary
equivalent. The next division gives 125, an odd number, so we put
down a 1. We continue to divide successively, marking a 0 for each
even number and a 1 for each odd one. Although it may not be
obvious right away, we are merely arriving at powers of two by a
process called mediation, or halving.
Decimal:  1    3    7   15   31   62  125  250  500
Binary:   1    1    1    1    1    0    1    0    0
Obviously we can reverse this procedure to convert binary
numbers to their decimal equivalents.
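Mediation is even shorter to write down as a modern routine; the halving and the odd-or-even test do all the work (Python, as a sketch):

```python
def halving_to_binary(n):
    """Repeatedly halve, writing 1 for each odd number and 0 for
    each even one; the digits come out in reverse order."""
    bits = []
    while n >= 1:
        bits.append(str(n % 2))  # odd gives 1, even gives 0
        n //= 2                  # mediation: halve, dropping any fraction
    return "".join(reversed(bits))

print(halving_to_binary(500))   # 111110100
```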
There is an interesting extension of this process called duplication
by which multiplication can be done quite simply. Let us multiply 95
times 36. We will halve our 95 as we did in the earlier example,
while doubling the 36. This time when we have an even number in
the left column, we will simply cancel out the corresponding number
in the right column.
 95        36
 47        72
 23       144
 11       288
  5       576
  2      (1152, cancelled since 2 is even)
  1      2304
         ————
         3420
This clever bit of mathematics is called Russian peasant
multiplication, although it was also known to early Egyptians and
many others. It permits unschooled people, with only the ability to
add and divide, to do fairly complex multiplication problems. Actually
it is based on our old stand-by, the binary system. What we have
done is to represent the 95 “dyadically,” or by twos, and to multiply
36 successively by each of these powers as applicable. We will not
digress further, but leave this as an example of the tricks possible
with the seemingly simple binary system.
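The peasant's procedure translates directly into a few lines of a modern language; the sketch below (Python, purely for illustration) halves one column, doubles the other, and keeps only the rows where the halving column is odd:

```python
def peasant_multiply(a, b):
    """Russian peasant multiplication: halve a, double b,
    and add b whenever a is odd (the uncancelled rows)."""
    total = 0
    while a >= 1:
        if a % 2 == 1:   # odd row survives the cancellation
            total += b
        a //= 2          # halving column
        b *= 2           # doubling column
    return total

print(peasant_multiply(95, 36))   # 3420
```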
Even after we have learned to convert from the decimal numbers
we are familiar with into binary notation almost by inspection, the
results are admittedly unwieldy for human handling. An employee
who is used to getting $105 a week would be understandably
confused if the computer printed out a check for him reading
$1101001. For this reason the computer programmer has reached a
compromise with the machine. He speaks decimal, it speaks binary;
they meet each other halfway with something called binary-coded
decimal. Here’s the way it works.
A little thought will show that the decimal numbers from 0 through
9 can be presented in binary using four bits. Thus:
Decimal    Binary
   0            0
   1            1
   2           10
   3           11
   4          100
   5          101
   6          110
   7          111
   8         1000
   9         1001
In the interest of uniformity we fill in the blanks with 0’s, so that
each decimal number is represented by a four-digit block, or word,
of binary code. Now when the computer programmer wants to feed
the number 560 into the computer in binary he breaks it into
separate words of 5, 6, and 0; or 0101, 0110, and 0000. In effect,
we have changed $5 words into four-bit words! The computer
couldn’t care less, since it handles binary digits at the rate of millions
a second; and the human is better able to keep his marbles while he
works with the computer. Of course, there are some computers that
are classed as pure binary machines. These work on mathematical
problems, with none of the restrictions imposed by human frailty. For
the computer the pure binary system is more efficient than the
binary decimal compromise.
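Binary-coded decimal is simple enough to sketch in a modern language; each decimal digit becomes its own four-bit word, just as in the 560 example above (Python, as an illustration):

```python
def to_bcd(number):
    """Encode each decimal digit as its own four-bit binary word."""
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(to_bcd(560))   # 0101 0110 0000
```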
The four-digit words can be made to represent not only numbers,
but letters as well. When this is done it is called an alpha-numeric or
alphameric code. Incidentally, it is conceivable that language could
be made up of only 1’s and 0’s, or perhaps a’s and b’s would be
better. All it would take would be the stringing together of enough
letters to cover all the words there are. The result would be rather
dull, with words like aabbababaabbaaba, bbaabbaabababaaabab,
and aaaaaaaaabaaa; it is doubtful that the computer will make much
headway with a binary alphabet for its human masters.
In the early days of binary computer work, the direct conversion
to binary code we have discussed was satisfactory, but soon the
designers of newer machines and calculating methods began to
juggle the digits around for various reasons. For one thing, a decimal
0 was represented by four binary 0’s. Electrically, this represents no
signal at all in the computer’s inner workings. If trouble happened,
say a loose connection, or a power failure for a split second, the
word 0000 might be printed out and accepted as a valid zero when it
actually meant a malfunction. So the designers got busy trying other
codes than the basic binary.
One clever result is the “excess-3” code. In this variation 3 is
added to each decimal number before conversion. A decimal 0 is
then represented by the word 0011 instead of 0000. There is, in
fact, no such computer word as 0000 in excess-3 code. This
eliminates the possibility of an error being taken for a 0. Excess-3
does something else too. If each digit is changed, that is, if 1’s
become 0’s and 0’s become 1’s, the new word is the “9’s
complement” of the original. For example, the binary code for 4 in
excess-3 is 0111. Changing all the digits, we get 1000, which is
decimal 5. This is not just an interesting curiosity, but the 9’s
complement of 4 (9 minus 4). Anyone familiar with an adding
machine is used to performing subtraction by using complements of
numbers. The computer cannot do anything but add; by using the
excess-3 code it can subtract by adding. Thus, while the computer
cannot subtract 0110 from 1000, it can quite handily add 1001 to
1000 to get the same result.
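The excess-3 trick, that flipping every bit yields the 9's complement, holds for all ten digits, as a brief modern sketch (Python, for illustration) shows:

```python
def excess3(digit):
    """Four-bit excess-3 word for one decimal digit: encode digit + 3."""
    return format(digit + 3, "04b")

def nines_complement(word):
    """Flip every bit; in excess-3 this gives the 9's complement."""
    return "".join("1" if bit == "0" else "0" for bit in word)

print(excess3(4))                     # 0111
print(nines_complement(excess3(4)))   # 1000, the excess-3 word for 5
```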
There are many other reasons for codes, among them being the
important one of checking for errors. “Casting out nines” is a well-
known technique of the bookkeeper for locating mistakes in work.
Certain binary codes, containing what is called a “parity bit,” have
the property of self-checking, in a manner similar to casting out
nines. A story is told of some pioneer computer designers who hit on
the idea of another means of error checking not as effective as the
code method.
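A parity bit is easy to demonstrate. In the even-parity scheme sketched below (Python, as an illustration), an extra bit is appended so every valid word carries an even number of 1's; any single flipped bit then betrays itself:

```python
def add_parity(word):
    """Append an even-parity bit: total count of 1's becomes even."""
    return word + ("1" if word.count("1") % 2 else "0")

def check_parity(word):
    """A valid even-parity word has an even number of 1's."""
    return word.count("1") % 2 == 0

coded = add_parity("0101")
print(coded)                   # 01010
print(check_parity(coded))     # True
print(check_parity("11010"))   # False: one flipped bit is caught
```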
The idea was clever enough, it being that identical computers
would do each problem and compare answers, much like the pairs of
abacus-wielders in Japan’s banks. In case both computers did not
come up with the same answer, a correction would be made. With
high hopes, the designers fed a problem into the machines and sat
back to watch. Soon enough a warning light blinked on one machine
as it caught an error. But simultaneously a light blinked on the other.
After that, chaos reigned until the power plugs were finally pulled.
Although made of metal and wires, the computers demonstrated a
remarkably human trait; each thought the other was wrong and was
doing its best to change its partner’s answer! The solution, of
course, was to add a third computer.
Binary decimal, as we have pointed out, is a wasteful code. The
decimal number 100 in binary decimal coding is 0001 0000 0000, or
12 digits. Pure binary is 1100100, or only 7 digits. By going to a
binary-octal code, using eight numbers instead of ten, the words can
be 3-bit instead of 4-bit. This is called an “economy” code, and finds
some application. There are also “Gray” codes, reflected binary
codes, and many more, each serving a particular purpose.
Fortunately for the designer, he can be prodigal with his use of
codes. With 4-bit words, 29 billion codes are available, so a number
of them are still unused.
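As one small taste of these special-purpose codes, the reflected or "Gray" code can be produced from pure binary with a single exclusive-or, so that successive values differ in only one bit; the sketch below (Python, for illustration) is one common construction:

```python
def binary_to_gray(n):
    """Reflected (Gray) code: consecutive values differ in one bit."""
    return n ^ (n >> 1)

for n in range(8):
    print(format(n, "03b"), "->", format(binary_to_gray(n), "03b"))
```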
Having translated our decimal numbers into code intelligible to our
computer, we still have the mathematical operations to perform on
it. With a little practice we can add, subtract, multiply, and divide our
binary numbers quite easily, as in the examples that follow.
Addition:           1100   (12)
                  + 0111   ( 7)
                  ——————   ————
                   10011   (19)

Subtraction:        1010   (10)
                  - 0010   ( 2)
                  ——————   ————
                    1000   ( 8)

Multiplication:     0110   (6)
                  × 0011   (3)
                  ——————   ———
                    0110
                   0110
                  0000
                 0000
                 ———————
                   10010   (18)

Division:    1010 ÷ 10 = 0101   (10 ÷ 2 = 5)
The rules should be obvious from these examples. Just as we add
5 and 5 to get 0 with 1 to carry, we add 1 and 1 and get 0 with 1 to
carry in binary. Adding 1 and 0 gives 1, 0 and 0 gives 0. Multiplying
1 times 1 gives 1, 1 times 0 gives 0, and 0 times 0 gives 0. One
divides into 1 once, and into 0 no times. Thus we can manipulate in
just the manner we are accustomed to.
The computer does not even need to know this much. All it is
concerned with is addition: 1 plus 1 gives 0 and 1 to carry; 1 plus 0
gives 1; and 0 plus 0 gives 0. This is all it knows, and all it needs to
know. We have described how it subtracts by adding complements.
It can multiply by repetitive additions, or more simply, by shifting the
binary number to the left. Thus, 0001 becomes 0010 in one shift,
and 0100 in two shifts, doubling each time. This is of course just the
way we do it in the decimal system. Shifting to the right divides by
two in the binary system.
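The shifting rule is easy to watch in action with a modern sketch (Python, for illustration), where << and >> shift a binary pattern left and right:

```python
x = 0b0001
print(format(x << 1, "04b"))   # 0010: one shift left doubles
print(format(x << 2, "04b"))   # 0100: two shifts multiply by four
print(0b1010 >> 1)             # 5: shifting right halves (1010 is 10)
```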
The simplest computer circuitry performs additions in a serial
manner, that is, one operation at a time. This is obviously a slow
way to do business, and by adding components so that there are
enough to handle the digits in each row simultaneously the
arithmetic operation is greatly speeded. This is called parallel
addition. Both operations are done by parts understandably called
adders, which are further broken down into half-adders.
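The half-adder and full adder can be sketched in logic form (Python, as an illustration): a half-adder produces a sum bit and a carry bit from two inputs, and two half-adders plus an OR make a full adder that also accepts a carry coming in:

```python
def half_adder(a, b):
    """Sum bit is exclusive-or; carry bit is AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half-adders and an OR combine three input bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

print(half_adder(1, 1))      # (0, 1): 1 plus 1 is 0, carry 1
print(full_adder(1, 1, 1))   # (1, 1): 1 plus 1 plus 1 is 1, carry 1
```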
There are refinements to basic binary computation, of course. By
using a decimal point, or perhaps a binary point, fractions can be
expressed in binary code. If the position to the left of the point is
taken as 2 to the zero power, then the position just to the right of
the point is logically 2 to the minus one, which if you remember your
mathematics you’ll recognize as one-half. Two to the minus two is
then one-fourth, and so on. While we are on the subject of the
decimal point, sophisticated computers do what is called “floating-
point arithmetic,” in which the point can be moved back and forth at
will for much more rapid arithmetical operations.
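The weights to the right of the binary point can be tallied with a small modern sketch (Python, for illustration), confirming that .1 is one-half, .11 is three-quarters, and so on:

```python
def binary_fraction(bits):
    """Digits right of the binary point weigh 2**-1, 2**-2, ..."""
    return sum(int(bit) * 2**-(i + 1) for i, bit in enumerate(bits))

print(binary_fraction("1"))     # 0.5
print(binary_fraction("11"))    # 0.75 (one-half plus one-fourth)
print(binary_fraction("101"))   # 0.625
```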
No matter how many adders we put together and how big the
computer eventually gets, it is still operating in what seems an
awkward fashion. It is counting its fingers, of which it has two. The
trick is in the speed of this counting, so fast that one million
additions a second is now a commonplace. Try that for size in your
own decimally trained head and you will appreciate the computer a
little more.
The Logical Algebra
We come now to another most important reason for the
effectiveness of the digital computer; the reason that makes it the
“logical” choice for not only mathematics but thinking as well. For
the digital computer and logic go hand in hand.
Logic, says Webster, is “the science that deals with canons and
criteria of validity in thought and demonstration.” He admits to the
ironic perversion of this basic definition; for example, “artillery has
been called the ‘logic of kings,’” a kind of logic to make “argument
useless.” Omar Khayyám had a similar thought in mind when he
wrote in The Rubáiyát,
The grape that can with logic absolute,
The Two-and-Seventy Sects confute.

Other poets and writers have had much to say on the subject of
logic through the years, words of tribute and words of warning.
Some, like Lord Dunsany, counsel moderation even in our logic.
“Logic, like whiskey,” he says, “loses its beneficial effect when taken
in too large quantities.” And Oliver Wendell Holmes asks,
Have you heard of the wonderful one-hoss shay
That was built in such a logical way
It ran a hundred years to the day?

The words logic and logical are much used and abused in our
language, and there are all sorts of logic, including that of women,
which seems to be a special case. For our purposes here it is best to
stick to the primary definition in the dictionary, that of validity in
thought and demonstration.
Symbolic logic, a term that still has an esoteric and almost
mystical connotation, is perhaps mysterious because of the strange
symbology used. We are used to reasoning in words and phrases,
and the notion that truth can be spelled out in algebraic or other
notation is hard to accept unless we are mathematicians to begin
with.
We must go far back in history for the beginnings of logic.
Aristotelian logic is well known and of importance even though the
old syllogisms have been found not as powerful as their inventors
thought. Modern logicians have reduced the 256 possible
permutations to a valid 15 and these are not as useful as the newer
kind of logic that has since come into being.
Leibniz is conceded to be the father of modern symbolic logic,
though he probably neither recognized what he had done nor used it
effectively. He did come up with the idea of two-valued logic, and
the cosmological notion of 1 and 0, or substance and nothingness.
In his Characteristica Universalis he was groping for a universal
language for science; a second work, Calculus Ratiocinator, was an
attempt to implement this language. Incidentally, Leibniz was not
yet twenty years old when he formulated his logic system.
Unfortunately it was two centuries later before the importance of
his findings was recognized and an explanation of their potential
begun. In England, Sir William Hamilton began to refine the old
syllogisms, and is known for his “quantification of the predicate.”
Augustus De Morgan, also an Englishman, moved from the
quantification of the predicate to the formation of thirty-two rules or
propositions that result. The stage was set now for the man who has
come to be known as the father of symbolic logic. His name was
George Boole, inventor of Boolean algebra.
In 1854, Boole published “An Investigation of the Laws of Thought
on which are Founded the Mathematical Theories of Logic and
Probabilities.” In an earlier pamphlet, Boole had said, “The few who
think that there is that in analysis which renders it deserving of
attention for its own sake, may find it worth while to study it under a
form in which every equation can be solved and every solution
interpreted.” He was a mild, quiet man, though nonconformist
religiously and socially, and his “Investigation” might as well have
been dropped down a well for all the immediate splash it made in
the scientific world. It was considered only academically interesting,
and copies of it gathered dust for more than fifty years.
Only in 1910 was the true importance given to Boole’s logical
calculus, or “algebra” as it came to be known. Then Alfred North
Whitehead and Bertrand Russell made the belated acknowledgment
in their Principia Mathematica, and Russell has said, “Pure
mathematics was discovered by Boole, in a work he called ‘The Laws
of Thought.’” While his praise is undoubtedly exaggerated, it is
interesting to note the way in which mathematics and thought are
considered inseparable. In 1928, the first text on the new algebra
was published. The work of Hilbert and Ackermann, Mathematical
Logic, was printed first in German and then in English.
What was the nature of this new tool for better thinking that Boole
had created? Its purpose was to make possible not merely precise,
but exact analytical thought. Historically we think in words, and
these words have become fraught with semantic ditches, walls, and
traps. Boole was thinking of thought and not mathematics or science
principally when he developed his logic algebra, and it is indicative
that symbolic logic today is often taught by the philosophy
department in the university.
Russell had hinted at the direction in which symbolic logic would
go, and it was not long before the scientist as well as the
mathematician and logician did begin to make use of the new tool.
One pioneer was Shannon, mentioned in the chapter on history. In
1938, Claude Shannon was a student at M.I.T. He would later make
scientific history with his treatise on and establishment of a new field
called information theory; his early work was titled “A Symbolic
Analysis of Relay and Switching Circuits.” In it he showed that
electrical and electronic circuitry could best be described by means
of Boolean logic. Shannon’s work led to great strides in improving
telephone switching circuits and it also was of much importance to
the designer of digital computers. To see why this is so, we must
now look into Boolean algebra itself. As we might guess, it is based
on a two-valued logic, a true-false system that exactly parallels the
on-off computer switches we are familiar with.
The Biblical promise “Ye shall know the truth, and the truth shall
make you free” applies to our present situation. The best way to get
our feet wet in the Boolean stream is to learn its so-called “truth
tables.”
Conjunctive Boolean Operation

A and B equal C          A  B  C
(A · B = C)              ———————
                         0  0  0
                         1  0  0
                         0  1  0
                         1  1  1

Disjunctive Boolean Operation

A or B equals C          A  B  C
(A ∨ B = C)              ———————
                         0  0  0
                         1  0  1
                         0  1  1
                         1  1  1
In the truth tables, 1 symbolizes true, 0 is false. In the conjunctive
AND operation, we see that only if both A and B are true is C true.
In the disjunctive OR operation, if either A or B is true, then C is also
true. From this seemingly naïve and obvious base, the entire
Boolean system is built, and digital computers can perform not only
complex mathematical operations, but logical ones as well, including
the making of decisions on a purely logical basis.
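These same truth tables fall out directly of the AND and OR operations built into any modern language; the Python sketch below, offered only as an illustration, prints both columns at once with 1 for true and 0 for false:

```python
# AND (conjunctive) and OR (disjunctive) truth tables; 1 = true, 0 = false.
print("A B | A.B  A v B")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", a & b, "   ", a | b)
```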