Design and Code Review Checklist: Instructors
A. Quality Considerations

Did you know? Software vendors lose an average of 0.63% of their market value on the day security-related flaws are disclosed.

B. Structural: Does the code satisfy these principles?

1. Open Closed Principle: open for extension, closed for modification.
2. DRY Principle: Don't Repeat Yourself.
3. Single Responsibility Principle: only one responsibility for each object. Consider delegation over inheritance, unless you need to change base class behavior.
4. Liskov Substitution Principle: subtypes must be substitutable for their base types.

C. Maintainability

When in doubt, leave it out!
www.Intertech.com
Instructors Who Consult. Consultants Who Teach.
Contents

• Visual Checklist
• Comprehensive Checklist
• Is the class / procedure / variable scope correct?
This Intertech checklist provides a comprehensive compilation of design and code review principles for consideration during projects. There are items on the checklist that are outlined in detail further on in the document, and a few where we've provided links from this document to quality design and review resources. Underlined text links to further detail.
• Review unit tests.
• Is the code in the right place?
• Does the namespace make sense?
• Is the class / procedure / variable scope correct?
• Are the classes, methods, and variables named correctly?
• Are the methods and variables typed correctly?
• Look at the code.
• Review for OCP (Open Closed Principle: open for extension, closed for modification).
• Review for the DRY Principle (Don't Repeat Yourself: abstract common things and put them in a single place).
• Review for SRP (Single Responsibility Principle: every object has a single responsibility, and all the object's services should be focused on that responsibility).
• Review for LSP (Liskov Substitution Principle: subtypes must be substitutable for their base types).
• Consider delegation over inheritance. If you don't need to change base class behavior, consider delegating (handing over responsibility for a task) rather than inheriting.
• Consider composition over inheritance. Similar to delegation, except the owner class uses a set of behaviors and chooses which one to use at runtime. When the delegating class is destroyed, so are all the child classes.
• Consider aggregation. Similar to composition, except that when the delegating class is destroyed, the child classes are not.
• Consider polymorphism. Make a group of heterogeneous classes look homogeneous.
• Consider generics.
• Testability considerations?
• YAGNI (You Ain't Gonna Need It): when in doubt, leave it out!
• Does the object wake up in a known good state (constructor)?
• Consider security.
• Is our code unnecessarily complex? I always favor simplicity until forced to do otherwise.
• Will our code perform? I tend to assume it will until proven otherwise.
```csharp
public enum RuleType
{
    Condition,
    Action
}
```
Here is a very crude implementation of a rule engine:
```csharp
{
    rules.Read();
    if ((int)rules[0] == (int)RuleType.Action)
    {
        executing = ExecuteAction((int)rules[1]);
    }
    else
    {
        if (ExecuteCondition((int)rules[1]))
        {
            executing = true;
        }
        else
        {
            rules.Read();
            executing = true;
        }
    }
    //Implement some exit strategy here
}

public bool ExecuteCondition(int theCondition)
{
    switch (theCondition)
    {
        case (int)Conditions.IsComplete:
            //Is the order complete?
            return true; //or false
        case (int)Conditions.CanShip:
            //Can we ship the order?
            return true; //or false
        case (int)Conditions.IsInStock:
            //Is this item in stock?
            return true; //or false
        default:
            throw new Exception("Unsupported Condition");
    }
}

public bool ExecuteAction(int theAction)
{
    switch (theAction)
    {
        case (int)Actions.CreateOrder:
            //Execute order create logic
            return true; //or false
        case (int)Actions.CreateBackorder:
            //Execute backorder logic
            return true; //or false
        case (int)Actions.ShipOrder:
            //Execute shipping logic
            return true; //or false
        case (int)Actions.StoreOrder:
            //send to warehouse
            return true; //or false
        case (int)Actions.CloseOrder:
            //Execute order close logic
            return true; //or false
        case (int)Actions.ReduceInventory:
            //Remove item from inventory
            return true; //or false
        default:
            throw new Exception("Unsupported Action");
    }
}
```
Never mind the problems with error handling, transactions, and the lack of support for nested conditions; the point of the blog is OCP, not creating a rule engine. What we have will work, but does it satisfy the OCP? No. There are a couple of problems with this type of implementation that will make it difficult to maintain.
The first problem is that the logic for the Rule Engine, Conditions, and Actions is all in one class and therefore one assembly. Any system that wants to use any of this logic will be tied to all of this logic. The second problem is that any time you want the rule engine to do something new, you have to modify this assembly, which is a violation of the Open/Closed Principle.

Let's take a look at a more robust design. We will still use an enum to distinguish between Actions and Conditions:
Now let's declare an interface:
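The interface listing itself did not survive in this copy. Based on how ISupportRules is used later in the text (an Execute method and a TypeOfRule method), it presumably looks something like this sketch; treat the member shapes as an assumption:

```csharp
// Assumed reconstruction: member names come from how ISupportRules is used
// later in the article. RuleType is the enum defined above.
public interface ISupportRules
{
    // Executes the condition or action; returns true to keep the engine running.
    bool Execute();

    // Reports whether this rule is a Condition or an Action.
    RuleType TypeOfRule();
}
```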
We will put both the enum and the interface in an assembly called Rule.Types. Now let's add a few classes:
```csharp
    }

    public RuleType TypeOfRule()
    {
        return RuleType.Action;
    }

    #endregion
}
```
We will put this in an assembly called Rule.Actions.CreateOrder. We will put this class in Rules.Conditions.CanShip. In fact, we will create a separate assembly for each condition and action we defined in the enums. Here is our new rules engine, which goes in the Rule.Engine assembly:
Notice that the ExecuteRules method takes a generic list of type ISupportRules as a parameter but has no reference to any of the conditions or actions. Also notice that the condition and action classes have no reference to each other or to the rules engine. This is key to both code reuse and extensibility. The refactored rule engine, condition, and action classes are completely independent of each other. All they share is a reference to Rule.Types. Some other system may use any of these assemblies independently of the others, with the only caveat being that it will need to reference the Rule.Types assembly.

The other thing we gained with this approach is that we can now extend the rule engine (make it execute new conditions and actions) by simply adding a new Condition or Action that implements ISupportRules and passing it into the ExecuteRules method as part of the generic list. We can do all of this without recompiling the RefactoredRulesEngine, which is the goal of the OCP. By the way, this design approach is called the Strategy Pattern.
If you haven't noticed yet, I'm leaving out one major piece of the puzzle: how does the generic list of rules get generated? I'm going to wave my hands here a little bit and save the details for another blog. We would use a creational pattern (one of the factory patterns).
If we assume we are consuming the table outlined in our first solution, this factory would accept a Ruleset ID and magically return the generic List&lt;ISupportRules&gt; of rules. The implementation of the factory pattern could be written in such a way that each time you add a Condition or an Action the factory would need to be recompiled, or we could use a provider pattern and a configuration file to allow us to create these new Conditions and Actions without a recompile.
To summarize things a bit: conceptually we have this RulesEngine that is relatively complex (much more complex than I have written) and we want to write it, test it, and leave it alone. At the same time, though, we have this need to enhance the system by adding more rules. By using the strategy pattern we now have a stable rule execution engine that can execute any condition or action that implements the ISupportRules interface. Because we inject a list of conditions and rules into the ExecuteRules method, we can do all of this without recompiling the refactored rules engine.

Another approach we might have taken to satisfy the OCP is the Template Method pattern.
In the template method pattern we would make use of an abstract class to define the skeleton of an algorithm, then allow the concrete classes to implement subclass-specific operations.
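The text does not include a Template Method listing. Here is a minimal sketch of the idea; the class names (ReportTemplate, SalesReport) are mine, chosen only for illustration:

```csharp
using System;

// The abstract base class defines the skeleton of the algorithm.
public abstract class ReportTemplate
{
    public string Build()
    {
        // The ordering of the steps is fixed here in the base class.
        return Header() + Body() + "-- end of report --";
    }

    // Concrete classes implement the subclass-specific operations.
    protected abstract string Header();
    protected abstract string Body();
}

public class SalesReport : ReportTemplate
{
    protected override string Header() { return "Sales Report\n"; }
    protected override string Body() { return "...sales figures...\n"; }
}
```

The base class stays closed for modification while each new report type extends it with its own Header and Body.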
```csharp
[System.Diagnostics.DebuggerBrowsable(System.Diagnostics.DebuggerBrowsableState.Never)]
int aComplexInteger;
public int AComplexInteger
{
    get { return aComplexInteger; }
    set
    {
        if (value == 0)
            throw new ArgumentOutOfRangeException("AComplexInteger");
        if (value != aComplexInteger)
        {
            aComplexInteger = value;
            //Maybe raise a value changed event
        }
    }
}
```
16. \
One
of
my
favorite
techniques
is
constructor
chaining.
If
we
have
multiple
constructors
that
perform
similar
logic,
we
should
use
constructor
chaining
to
avoid
duplicating
code:
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CicMaster
{
    class BetterConstructorLogic
    {
        #region WhyItIsBetter
        //No duplicate code, this is called constructor chaining
        //step through this
        #endregion
        string someString = string.Empty;
        int someInteger = 0;
        List<int> myIntegers = new List<int>();

        public BetterConstructorLogic() : this("A Default Value")
        {
            //someString = "A Default Value";
            System.Diagnostics.Debug.WriteLine("In Default Constructor");
        }

        public BetterConstructorLogic(string aString) : this(aString, 123)
        {
            //someInteger = 123;
            System.Diagnostics.Debug.WriteLine("In one param constructor");
        }

        public BetterConstructorLogic(string aString, int anInteger)
        {
            someString = aString;
            someInteger = anInteger;
            System.Diagnostics.Debug.WriteLine("In two param constructor");
        }
    }
}
```
The final technique I would like to mention is to use a factory to create all but the simplest objects. The following (admittedly nonsensical) code needs to execute maybe a half dozen lines of code to construct an Order object.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace BusinessLayer
{
    class ObjectContext
    {
        public ObjectContext(String username)
        {
            //Look up the user permissions
        }
        public bool IsInTranaction { get; set; }
        public bool BeginTransaction()
        {
            //do some transaction logic
            return true;
        }
    }
}
```
Duplicating these few lines of code in a couple of places is not that difficult. Now say the application is enhanced and grows for a few years, and suddenly we see this code duplicated dozens or hundreds of times. At some point it is likely that we will want to change the construction logic; finding and changing all the code we use to create the order is difficult, time consuming, and a QA burden.
A better approach would be to encapsulate the logic required to build a new order. Here is an implementation using a simple factory. It is much easier to find, change, and test this code:
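The factory listing is missing from this copy, so here is a sketch of the shape such a simple factory might take. The Order class and its construction steps are assumptions for illustration; only ObjectContext appears in the original code:

```csharp
// Hypothetical stand-ins for the types in the original example.
public class ObjectContext
{
    public ObjectContext(string username) { /* look up the user permissions */ }
    public bool BeginTransaction() { return true; }
}

public class Order
{
    public Order(ObjectContext context) { /* hypothetical construction steps */ }
}

// The simple factory: every line of construction logic lives in one
// findable, changeable, testable place.
public static class OrderFactory
{
    public static Order CreateOrder(string username)
    {
        var context = new ObjectContext(username);
        context.BeginTransaction();
        return new Order(context);
    }
}
```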
If we recognize that we are duplicating even a few lines of code over and over, we need to take a serious look at the code and figure out a way to encapsulate the logic.
For the sake of this blog I'm only going to look at the file handling logic. I do think the FileHandler logic defined above is a defensible breakdown of tasks; the FileHandler object takes care of everything having to do with the files we want to process. What I don't like about the breakdown is that we have put the file monitoring logic and the file transferring logic in the same class. If we change the requirements down the road so that we only look for files with certain extensions, we would have to change the FileHandler implementation. If we change the requirements and need to support a different secure storage location (such as a third party document management system that doesn't support streaming), we would again have to change the FileHandler. With the high level design I defined above we would have two reasons to change the class:
1. Because the file monitoring logic changes
2. Because the file transfer logic changes
This is a violation of the SRP. Still not convinced? Let's look at another change that, in my opinion, tips the scales in favor of separating the file monitoring and transferring logic. What if we want to allow documents to be imported into the system via a web service? We don't want to duplicate the file transferring logic, so we would want to employ the services of the FileHandler object. We certainly don't want or need the file monitoring logic in our web service; therefore I favor putting the file monitoring logic and the file transferring logic in different classes.
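A sketch of the separation argued for above. The original FileHandler code is not shown in this copy, so the class and member names below are assumptions:

```csharp
using System.Collections.Generic;
using System.IO;

// Only reason to change: the monitoring rules (folder, extensions, timing).
public class FileMonitor
{
    public IEnumerable<string> FindFilesToProcess(string folder, string pattern)
    {
        return Directory.EnumerateFiles(folder, pattern);
    }
}

// Only reason to change: how files are transferred to secure storage.
public class FileTransferrer
{
    public void Transfer(string path)
    {
        //stream the file to the secure storage location
    }
}
```

A web service that imports documents can now take a dependency on FileTransferrer alone, without dragging the monitoring logic along.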
```csharp
namespace OpenClosePrinciple
{
    public enum Actions
    {
        CreateOrder,
        CreateBackorder,
        CloseOrder,
        ShipOrder,
        StoreOrder,
        ReduceInventory
    }
    public enum Conditions
    {
        IsComplete,
        IsInStock,
        CanShip
    }
    public enum RuleType
    {
        Condition,
        Action
    }
}
```
An Action derived from RuleBase:
```csharp
namespace OpenClosePrinciple
{
    public class CreateOrderAction : RuleBase
    {
        #region ISupportRules Members

        public override bool Execute()
        {
            return true;
        }

        public override RuleType TypeOfRule()
        {
            return RuleType.Action;
        }

        #endregion
    }
}
```
```csharp
namespace OpenClosePrinciple
{
    public class CanShipCondition : RuleBase
    {
        #region ISupportRules Members

        public override bool Execute()
        {
            return true;
        }

        public override RuleType TypeOfRule()
        {
            return RuleType.Condition;
        }

        #endregion
    }
}
```
RuleBase:
```csharp
namespace OpenClosePrinciple
{
    public abstract class RuleBase
    {
        public virtual bool Execute()
        {
            return true;
        }

        public abstract RuleType TypeOfRule();
    }
}
```
The rule engine:
```csharp
using System;
using System.Data.SqlClient;
using System.Collections.Generic;
namespace OpenClosePrinciple
{
    public class RefactoredRuleEngine
    {
        public void ExecuteRules(List<RuleBase> rules)
        {
            bool executing = true;
            int ruleIndex = 0;
            while (executing)
            {
                if (rules[ruleIndex].TypeOfRule() == RuleType.Action)
                {
                    executing = rules[ruleIndex].Execute();
                    ruleIndex++;
                }
                else
                {
                    if (rules[ruleIndex].Execute())
                    {
                        ruleIndex++;
                        executing = true;
                    }
                    else
                    {
                        ruleIndex += 2;
                        executing = true;
                    }
                }
                //Implement some exit strategy here
            }
        }
    }
}
```
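To make the extension point concrete, here is a hypothetical new rule. The name CancelOrderAction is mine, not from the text; RuleType and RuleBase are repeated from the listings above so the sketch stands alone:

```csharp
// RuleType and RuleBase as defined in the article:
public enum RuleType { Condition, Action }

public abstract class RuleBase
{
    public virtual bool Execute() { return true; }
    public abstract RuleType TypeOfRule();
}

// Hypothetical new rule: because it derives from RuleBase, the engine can
// execute it without being recompiled, which is the heart of the OCP.
public class CancelOrderAction : RuleBase
{
    public override bool Execute()
    {
        //Execute order cancel logic
        return true;
    }

    public override RuleType TypeOfRule()
    {
        return RuleType.Action;
    }
}
```

Adding this class to the list passed to ExecuteRules extends the engine with no change to the Rule.Engine assembly.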
Our base class simply supports an Execute and a TypeOfRule method. Both our Condition and Action classes derive from RuleBase and may be "passed around" as a RuleBase, and therefore we have satisfied the structural idea of being able to substitute a derived class for its base class. The second angle from which I want to look at this principle is the behavioral perspective.
Most examples I read demonstrating an LSP violation use the Rectangle base class and the Square subclass. Proof of the LSP violation is based on setting the length and width of the Square to unique values, then discovering that an assert in a unit test that calculates a rectangle's area returns an incorrect result.
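The classic example just described can be sketched as follows (a minimal version; property and method names match the discussion below):

```csharp
public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int CalculateArea() { return Width * Height; }
}

public class Square : Rectangle
{
    // A square must keep its sides equal, so each setter writes both.
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }
    public override int Height
    {
        get { return base.Height; }
        set { base.Width = value; base.Height = value; }
    }
}
```

A unit test written against Rectangle fails when handed a Square: set Width to 2 and Height to 3, and CalculateArea returns 9 rather than the 6 the test expects, because the Height setter silently overwrote the Width.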
I agree that this is a violation. I believe the problem lies in the claim that a square "is-a" rectangle. Now don't go running to a dictionary to grab the definition of a rectangle and tell me that a square fits the definition of a rectangle; I'm not talking about the English language.
The classic implementation of the rectangle when discussing the LSP is to expose a Width and a Height property and a CalculateArea method which returns Width x Height. This makes sense. The problem in claiming that the Square "is-a" Rectangle is that with a Square there is no notion of a Width and a Height that are different; the Width and Height must be the same.
It doesn't make sense to expose two properties; it is misleading, and a consumer can logically assume that they would be independent properties. But in the implementation of the square this is not the case. Our Square is not (logically) a Rectangle because it does not have a Length and a Width that are independent of each other.
To consider whether our design above has violated LSP from a behavioral standpoint, we have to take a look at the RuleEngine. At a glance it looks like we have a violation; notice that we have some logic in the engine that concerns itself with the RuleType:

if (rules[ruleIndex].TypeOfRule() == RuleType.Action) ...
This does have the "smell" of bad code that we need to constantly look for, but I maintain that if we take a closer look this is NOT a violation.
When we cooked up the notion of the RuleEngine we decided we would support two types of rules: Conditions and Actions. It is reasonable to do this; we would never get our code out the door if we didn't put some constraints on the types of rules we could support. The line of code in question is simply executing code based on whether the rule is a condition or an action. An example of code that would violate LSP or OCP would be as follows:

if (TypeOf(rules[ruleIndex]) == CanShipCondition)
Here we are trying to execute logic based on the derived type, which is a violation of OCP and LSP. The consumer of the base class would not have to worry about the derived type if the derived type were substitutable for its base class, and by worrying about derived types, the rules engine is no longer open for extension.
The problem with breaking encapsulation is that making a change to one class (our base class) can cause unintended changes in other consuming classes (our derived classes). I can't begin to tell you how many times I have seen a seemingly small change to a base class break huge pieces of an application.
Now let's consider using delegation rather than inheritance. We will change our class diagram to look like the following: we can say a teacher "has-a" person, and similarly a student "has-a" person.
The downside of this design is that we have to write more code to create and manage the Person class, and thus the code will not perform as well (though in most cases I suspect the performance hit is negligible). We also cannot use polymorphism: we cannot treat a student as a person, and we cannot treat a teacher as a person.
The upside of this design is that we can decrease coupling by defining a person interface and using the interface as the return type for our Person property rather than the concrete Person class, and our design is more robust. When we use delegation we can have a single instance of a person act as both a student and a teacher. Try that in C# using the inheritance design!
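The class diagram itself is not reproduced here, so this is a minimal sketch of the delegation design just described; the member shapes are assumptions based on the text:

```csharp
public class Person
{
    public string Name { get; set; }
}

// Teacher "has-a" Person rather than "is-a" Person.
public class Teacher
{
    public Person Person { get; private set; }
    public Teacher(Person person) { Person = person; }
}

// Student likewise delegates its person-ness to a Person instance.
public class Student
{
    public Person Person { get; private set; }
    public Student(Person person) { Person = person; }
}
```

One Person instance can now act as both a student and a teacher, something the inheritance design cannot express:

```csharp
var pat = new Person { Name = "Pat" };
var student = new Student(pat);
var teacher = new Teacher(pat);
```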
If we take a little different approach and use composition instead, we might have a class diagram similar to the following:
The CompositionDatabaseWriter does not need to implement IDatabaseWriter, but I did that because I think it makes the example easier to understand. The application will use the CompositionDatabaseWriter to do database work, and CompositionDatabaseWriter will determine whether to use SqlServerDatabaseWriter or OracleDatabaseWriter at runtime, perhaps by using a configuration file entry. When one of the CompositionDatabaseWriter methods is called, CompositionDatabaseWriter simply calls the corresponding method on the Sql Server or Oracle object.
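The class diagram referenced above is missing from this copy, so here is a sketch of the composition design it describes. IDatabaseWriter and the concrete writer names come from the text; the Write method and the configuration mechanism are assumptions:

```csharp
public interface IDatabaseWriter
{
    void Write(string data);
}

public class SqlServerDatabaseWriter : IDatabaseWriter
{
    public void Write(string data) { /* Sql Server specific logic */ }
}

public class OracleDatabaseWriter : IDatabaseWriter
{
    public void Write(string data) { /* Oracle specific logic */ }
}

public class CompositionDatabaseWriter : IDatabaseWriter
{
    private readonly IDatabaseWriter writer;

    public CompositionDatabaseWriter(string configuredProvider)
    {
        // Choose the behavior at runtime, e.g. from a configuration file entry.
        writer = configuredProvider == "Oracle"
            ? (IDatabaseWriter)new OracleDatabaseWriter()
            : new SqlServerDatabaseWriter();
    }

    public void Write(string data)
    {
        // Simply forward to the writer chosen at runtime.
        writer.Write(data);
    }
}
```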
Both designs allow us to interact with an Oracle or Sql Server database, which was our goal. Now here is where the flexibility of composition comes in. Suppose we now need to support a new database; all we have to do to use it is deploy a single assembly and change a config file. Very powerful: because we used composition we can change the behavior at runtime via configuration.
The fact of the matter is that our sample is so simple we didn't even use composition at all, so imagine the considerably more complex scenario where what we are doing is creating a persistence object that must transactionally work with the file system and a database.
We define the persistence object, and within the persistence object we delegate the responsibility of dealing with the file system to one class and dealing with the database to another. The persistence object is composed of a FileSystemWriter and a DatabaseWriter.
Our Persistence object has only one concern: coordinating database and file persistence. Our FileSystemWriter has only one concern: managing file system interactions. And finally, our DatabaseWriter is only concerned with interacting with a database of one type. We have decomposed a fairly complex problem into a few smaller, more manageable problems; we came up with a nice, robust design thanks to composition; and we have managed to do it in such a way that there are no SRP violations either!
…spending time and money documenting and testing a feature that we are guessing we will need, and, by the way, we are also guessing how to best implement it.
If all of those (mostly financial) arguments aren't enough to convince you, let's take a look at some downsides from an architect or developer's point of view.
If code is in a release, you have to assume it is being used. When it comes time to change the code you think you needed, you are going to spend time figuring out how to change it without breaking the (imagined?) user base. This might involve migrating data, configurations, backup procedures, reports, integration work flows, etc.
When it comes time to change the code you really do need, you must consider all of the code. This includes the code you don't realize that you don't need: if it is in production, how can you be confident in saying "oh, we really don't need that code"? This really stifles our ability to modernize our code.
Even if you do find a way to refactor the bloated code base, not only will you have to change the existing code, but you will also have to change the tests, documentation, and training materials.
Let the business analysts, user base, and marketing folks decide what features your product needs and when it needs them. I would rather architect and design a system based on things that are known. I don't ever want to tell somebody that the system is the way it is because I thought we needed some functionality that turns out to be useless.
When we do a good job of designing and reviewing our system, we can be confident that we can refactor our code to implement major functional changes when they are understood, needed, and the highest priority.
Contact Information
Intertech, Inc.
1020 Discovery Road, Suite 145
Eagan, MN 55121
+800.866.9884
+651.288.7000
Ryan McCabe, Vice President of Sales
+651.288.7001
[email protected]
Intertech Background
Tom Salonek founded Intertech in 1991. Intertech is a leading Twin Cities-based software development firm and the largest software developer training company in Minnesota. Intertech designs and develops software solutions for state government and mid-sized corporations. Intertech has created prepackaged software, software that powers Fortune 500 businesses, as well as systems for state government. Intertech works with NASA, Wells Fargo, Lockheed Martin, Microsoft, Intel, and other major companies around the United States, teaching and helping them use technology. Intertech's technical team frequently publishes books in the Intertech Instructor Series through a partnership with Apress in Berkeley, California. The Intertech Instructor Series includes best-selling technology training books on Amazon.com.
Growth
Intertech is frequently listed in "fast growth", "top", and "best" lists, including:
• 2013 Consulting Magazine, 8th Best IT Consulting Firms to Work For in North America
• 2013 Consulting Magazine, 1st Employee Morale
• 2013 Ernst & Young, CEO named Entrepreneur of the Year Finalist
• 2013 The Business Journal, Great Places to Work (nine time winner)
• 2013 Inc. 5000, One of the Fastest Growing Firms in America (six time winner)
• 2012 Minnesota Business Magazine, 100 Top Employers, #1 Mid-Sized Company Winner
• 2012 The Business Journal, Fast 50 growth firm (three time winner)
• 2012 The Business Journal, Great Places to Work (eight time winner)
• 2012 Inc. 5000, One of the Fastest Growing Firms in America (five time winner)
• 2012 Star Tribune, Top 100 Workplace (eighth place in category)
• 2011 The Business Journal, Great Places to Work (seven time winner)
• 2011 Inc. 5000, One of the Fastest Growing Firms in America (four time winner)
• 2010 "Healthiest Employer" by the Minneapolis/St. Paul Business Journal and OptumHealth
• 2010 PCI Entrex, 4th Quarter "Entrex Growth Awards"
• 2010 PCI Entrex, 3rd Quarter "Entrex Growth Awards"
• 2010 The Business Journal, Great Places to Work (six time winner)
• 2010 Minneapolis/St. Paul Business Journal and OptumHealth, Healthiest Employer
• 2009 The Business Journal, Great Places to Work (five time winner)
• 2009 Inc. 5000, One of the Fastest Growing Firms in America
• 2009 The Wall Street Journal, Winning Workplaces finalist (one of 35 in America)
• 2008 UpSize Magazine Business Builder Awards, Communications Finalist
• 2008 PCI Entrex, Fastest Growing Privately Held Firm in US (Q4)
• 2008 PCI Entrex, Fastest Growing Privately Held Firm in US (Q3)
• 2008 The Business Journal, Great Places to Work (four time winner)
• 2008 Minnesota Work Life Champion, Awarded for Promoting Healthy Work and Life Balance
• 2008 Inc. 5000, One of the Fastest Growing Firms in America
• 2007 UpSize Magazine Business Builder Awards, Community Impact Finalist
• 2007 The Business Journal, Great Places to Work (three time winner)
• 2007 Inc. 5000, One of the Fastest Growing Firms in America
• 2006 The Business Journal, Great Places to Work, 10th in Minnesota
Recognition
In six of the last seven years, Intertech was chosen from a field of over 200 as a winner in The Business Journal's Best Places to Work in Minnesota. In addition, Intertech has been featured in Fortune Small Business, Forbes, The Business Journal, Twin Cities Business Monthly, Upsize, Ventures, The Star Tribune, The Pioneer Press, and Inc. magazine.