White Paper: Has OOP Failed?
By Richard Mansfield
September 2005

Computer programming today is in serious difficulty. It is controlled by what amounts to a quasi-religious cult--Object Oriented Programming (OOP). As a result, productivity is too often the last consideration when programmers are hired to help a business computerize its operations.
There’s no evidence that the OOP approach is efficient for most programming jobs.
Indeed, I know of no serious study comparing traditional, procedure-oriented
programming with OOP. But there's plenty of anecdotal evidence that OOP retards
programming efforts. Guarantee confidentiality, and programmers will usually tell you that
OOP often just makes their job harder.
Excuses for OOP failures abound in the workplace: we are “still working on it”;
“our databases haven’t yet been reorganized to conform to OOP structural
requirements”; “our best OOP guy left a couple of years ago”; “you can’t just read
about OOP in a book, you need to work with it for quite a while before you can
wrap your mind around it”; and so on and so on. If you question the wisdom of
OOP, the response is some version of “you just don’t get it.” But what they’re
really saying is: “You just don’t believe in it.”
All too often a company hires OOP consultants to solve IT problems, but then that
company’s real problems begin. OOP gurus frequently insist on rewriting a company’s
existing software according to OOP principles. And once the OOP takeover starts, it
can become difficult, sometimes impossible, to replace those OOP people. The
company’s programming and even its databases can become so distorted by OOP
technology that switching to more efficient alternative programming approaches can be prohibitively expensive.
When the Java language was first designed, a choice had to be made. Should its designers mimic the complicated, counter-intuitive punctuation, diction, and syntax used in C++, or should they create an understandable, clear language--as much as possible like English, a "natural language"?
I’m not saying we don’t need professors. Society often benefits from--and some
kinds of progress depend on--people who flit around testing every new intellectual
craze, taking ideas to extremes. Indeed, Peter Pan would be much the poorer without
Tinkerbelle. But if I’m faced with a mission-critical project, I wouldn’t seek advice
from a pixie.
Like many theories, OOP includes some attractive concepts. OOP’s major
principles--encapsulation, inheritance, and polymorphism--are of value in specialized
programming contexts. However, these concepts often prove difficult to apply to most
real-world programming tasks. Yet most programming shops today are relentless in
their cult-like insistence on employing C/OOP for every job they tackle.
OOP usually isn't the best solution, despite what many claim. Of course, in the current
workplace programmers and developers cannot freely raise questions about OOP’s
efficacy. OOP is the dominant theory. Questioning it can imperil one’s job. What’s
more, few alternatives to C/OOP are taught in the schools anymore.
OOP does have some useful features for group-programming jobs.
Encapsulation seals off tested code from prying eyes and from modification that might
introduce bugs. But hiding code isn't unique to OOP. Most languages permit code to be tucked into separate modules or marked private, out of reach of other parts of the program.
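For instance, here is a sketch of how an ordinary, pre-OOP Basic module hides a routine without any objects at all (the procedure names are invented for illustration):

' A plain .bas module: the helper below is invisible outside this module.
Private Sub ValidateInput(ByVal s As String)
    If Len(s) = 0 Then Err.Raise 5   ' reject bad data
End Sub

Public Sub SaveRecord(ByVal s As String)
    ValidateInput s   ' callable here, but sealed off from every other module
End Sub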
Some even argue that OOP is best even for programmers working solo. This claim
often rests on OOP’s code reusability features--the idea that with OOP you can
easily reuse objects you’ve previously written for other programs, or more easily
update programs if they need modification later on.
This, too, is a largely bogus claim. From the simplest copying and pasting on the
low end, all the way up to classic code libraries--every computer language offers
code reuse and code maintenance features in some form or another. And the
considerable effort required to superimpose OOP on more natural programming
languages and techniques often costs far more programmer time than it ever saves
during future code maintenance. What's more, it seems obvious that when you
go back to modify existing code, the easier that code is to read and understand, the
more quickly you can edit it. OOP and C are not known for readability. Quite the
opposite.
* Key terms such as object, method, and property have lost their descriptive value almost entirely because they've become largely indistinguishable in meaning. And the term object itself is applied to almost every element in OOP code libraries.
* The vocabulary that defines OOP features has become highly redundant.
* The diction in OOP languages grows enormous: the Basic language in 1990 consisted of fewer than 400 terms, and of these only 50 were usually employed for most common programming tasks; Basic .NET has many thousands of terms.
* OOP code libraries become huge and difficult to use: programmers now often find themselves spending less time writing actual source code than they spend trying to find, and correctly address, classes in the .NET framework.
Let me elaborate on these points. First, the OOP classification system used to
organize the elements of the programming language breaks down quickly into
essentially meaningless, sometimes conflicting, categories.
Another unpleasant side effect of OOP is its elite patois--a highly redundant jargon
only professionals understand. OOP computer language features now have multiple
names, names that often have no distinctions in meaning--not even subtle
distinctions. For example, consider the many synonyms for what is probably best
described simply as a code library (i.e., a collection of procedures): Assembly,
control library, control type library, class library, object library, namespace, project
model, object model, host object model, proxylib, type library, plug-in, plugin type
library, services, services library, development environment, core type library,
extensibility, runtime library, runtime execution library, runtime execution engine, kernel,
helper, and dynamic link library.
Some will argue that there are distinctions here. Indeed, in some cases, adjectives
such as core, control, or plug-in do shade the meaning a bit. But nearly all the
terms in the previous paragraph are merely synonyms--without even the slightest
distinction between them. And some of the terms are simply inept. One popular
usage, "object library," makes no sense on closer examination. OOP asserts that
"objects" only exist during runtime, during execution of a program. Objects must be
instantiated, they cannot be collected in a library--only classes can exist in a library.
The proper term is "class library," for the same reason that you don't confuse a
cookbook with a restaurant.
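To put the point in code (a tiny VB.NET sketch with invented names):

Public Class Recipe          ' a class: this is what can sit in a library
End Class

Dim dinner As New Recipe()   ' an object: it exists only while the program runs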
Perhaps the worst form of inflation resulting from OOP is the redundancy in the
programming code now required to do even common, previously simple, tasks.
Consider the difference between the non-OOP instructions to print a document, and
the OOP version. Here’s how you traditionally print some text in Visual Basic Version
6 (pre-OOP):
Printer.Print Text1
In the VB.NET OOP version, you must replace these three words with 80 lines of
source code (277 words total). You must use a group of members, import several
libraries, and muster a fair amount of information (such as the “brush” color, line
height, running word count, and so on). You must supervise several other aspects of
the printing process as well. For example, if you don’t correctly specify and keep
track of how several elements of the printing are carried out, every few pages only
the top half of the last line of text appears on the paper, or characters are chopped
off at the right ends of lines. If you’re interested in the gory details, the OOP
version can be found in Book 6, Chapter 3 of my book Visual Basic .NET All-In-
One Desk Reference For Dummies (Wiley).
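To give a flavor of what those 80 lines involve, here is a heavily abridged sketch of the .NET approach (not the book's listing; textToPrint and PrintPageHandler are placeholder names, and everything is assumed to sit inside a Form class):

Imports System.Drawing
Imports System.Drawing.Printing

Private textToPrint As String = "Hello, printer"
Private pd As New PrintDocument()

Private Sub PrintIt()
    AddHandler pd.PrintPage, AddressOf PrintPageHandler
    pd.Print()
End Sub

Private Sub PrintPageHandler(ByVal sender As Object, ByVal e As PrintPageEventArgs)
    ' You supply the font, the brush, the line height, and the pagination logic yourself.
    Dim printFont As New Font("Arial", 11)
    Dim lineHeight As Single = printFont.GetHeight(e.Graphics)   ' needed once you print line by line
    e.Graphics.DrawString(textToPrint, printFont, Brushes.Black, _
        e.MarginBounds.Left, e.MarginBounds.Top)
    e.HasMorePages = False   ' real code must compute this correctly for every page
End Sub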
True, all these extra lines of code do give you more flexibility than the simple
pre-OOP version. But that flexibility carries a terrible price. And this “advantage” is
counterfeit. Non-OOP languages permit you the same flexibility if you want it: You
can employ code libraries (APIs) to manipulate any printer features you need to
control. The difference is, with OOP you must write those 80 extra lines of code
for every printing job.
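For example, one Declare statement in classic Visual Basic wires a routine from a Windows code library straight into your program (an illustrative sketch; GetDeviceCaps is a standard GDI call for querying a device such as the printer):

Private Declare Function GetDeviceCaps Lib "gdi32" _
    (ByVal hdc As Long, ByVal nIndex As Long) As Long
Private Const LOGPIXELSX As Long = 88   ' horizontal dots per inch

Private Sub ShowPrinterResolution()
    MsgBox "Printer horizontal DPI: " & GetDeviceCaps(Printer.hDC, LOGPIXELSX)
End Sub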
Even highly experienced OOP programmers continually struggle with even simple tasks. You'd think that years of experience writing in OOP or C-languages would result in greater facility and productivity. Unfortunately, for all but the brilliantly talented few--where OOP and C seem harmonic with their brain patterns--experience seems to have relatively little impact on overall productivity. The main reason for this is that OOP libraries force you to learn new, unique taxonomic "addresses," interrelationships, and coding approaches for each programming task. There's little regularity, so there are few rules you can learn and apply across tasks. It's as if a card catalog had exploded in the library, and you had to look through the pile of cards, only by chance finding the address of a book quickly.

OOP libraries (and the resulting way that you invoke or "address" the functions contained therein) are organized, but the organization is inconsistent, haphazard, often illogical. For example, in Visual Basic .NET (the OOP version of Basic), one committee of Microsoft programmers decided that to change the size of text you must construct an entirely new Font object, while a different committee decided that to change the color of that same text you must use this quite different syntax:

Dim c As Color
c = System.Drawing.Color.FromName("blue")
TextBox1.ForeColor = c

Mind you, you're doing exactly the same thing in both situations, namely changing a property of text. But you change these properties using vastly different source code. In pre-OOP Visual Basic 6, either change was a single readable line, for example:

TextBox1.FontSize = 11

Which version seems more efficient, easier to remember and use, more readable, and ultimately more sensible? OOP claims to simplify and to improve efficiency. It rarely does.
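For the record, the size change mentioned above looks roughly like this (a sketch; a .NET Font's Size property is read-only, so you must build and assign a whole new Font object):

TextBox1.Font = New System.Drawing.Font(TextBox1.Font.FontFamily, 11.0F)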
OOP programming also requires that quite a bit of time and effort be spent
wrestling with scoping rules and other essentially clerical issues. For example, among
the OOP scoping commands are: Protected, Friend, and Shared. You even find
combination scoping, using two scope declarations at the same time, such as
Protected Friend, ReadOnly Public, and WriteOnly Friend. This kind of inflation, and
the resulting ambiguity, should be a clue that theory is triumphing over practicality.
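A sketch of what those combinations look like in practice (the member names are invented, and the declarations are assumed to sit inside a class):

Protected Friend Sub RecalcTotals()   ' visible to subclasses and to the whole assembly
End Sub

Private mNote As String
Friend WriteOnly Property AuditNote() As String   ' settable, but only from within the assembly
    Set(ByVal Value As String)
        mNote = Value
    End Set
End Property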
Hiding code can of course be useful, particularly with large, complicated programs
where you want to ensure that nobody tampers with other people’s tested, finished
procedures. However, OOP doesn’t limit itself to this useful clerical task. It’s a much
larger, more ambitious system. Its most astonishingly messy feature is often claimed
as its greatest strength: the attempt to automate the process of code reuse.
OOP also encourages code reuse via inheritance: expecting the programmer to
modify invisible (encapsulated) code. The contribution of inheritance to code
unreadability, slow program execution, and bugs in general has--even among some
OOP apologists--been widely acknowledged.
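A minimal sketch of that style of reuse (all names invented; in practice the base class usually arrives as a compiled library whose source you never read):

Public Class VendorPrinter
    Public Overridable Sub FormatPage()
        ' ...the sealed-off, tested code you inherit...
    End Sub
End Class

Public Class ReportPrinter
    Inherits VendorPrinter
    Public Overrides Sub FormatPage()
        MyBase.FormatPage()   ' your change now depends on behavior you cannot see
    End Sub
End Class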
Alternatives
Nobody should be ashamed to admit that they have problems with OOP or the C
language and its derivatives. Everyone has problems with them. Even OOP experts
bicker among themselves about the "proper" way to manage the various convoluted
aspects of OOP theory.
Some OOP theorists claim that the only alternative to OOP is “spaghetti code,” but
this is a straw man argument. Long ago structured programming ended the abuse of
the GOTO command. Nobody seriously suggests going back to the early days before
subroutines were in common use.
In sum: like countless other intellectual fads over the years ("relevance,"
communism, “modernism,” and so on--history is littered with them) OOP will be
with us until eventually reality asserts itself. But considering how OOP currently
pervades both universities and workplaces, OOP may well prove to be a durable
delusion. Entire generations of indoctrinated programmers continue to march out of the
academy, committed to OOP and nothing but OOP for the rest of their lives.
Richard Mansfield has been a prominent figure in the computer field for over two
decades. From 1981 through 1987, he was editor of Compute! Magazine. He has
written hundreds of magazine articles and two columns, and he began writing books
full-time in 1991. He’s written 38 computer books altogether and several became
bestsellers, including Machine Language for Beginners (Compute! Books), The Visual
Guide to Visual Basic (Ventana), and The Visual Basic Power Toolkit (Ventana,
with Evangelos Petroutsos).
His recent titles include The Savvy Guide to Digital Music (Sams), CSS Web Design
for Dummies (Wiley), Office 2003 Application Development All-in-One Desk
Reference For Dummies (Wiley), Visual Basic .NET All-In-One Desk Reference for
Dummies (Wiley), XML All-In-One Desk Reference for Dummies (Wiley, with
Richard Wagner), Visual Basic .NET Database Programming for Dummies, Visual
Basic 6 Database Programming for Dummies (Hungry Minds), and The Wi-Fi
Experience: Everyone's Guide to 802.11b Wireless Networking (Que).
Overall, his books have sold more than 500,000 copies worldwide, and have been
translated into 12 languages.