Management Information System (MIS) : Sheena Koshy Silver
System implementation covers a broad spectrum of activities, from a detailed workflow analysis to the formal go-live of the new system. During system implementation, organizations may refine the initial workflow analysis that was completed as part of the requirements analysis phase. With the aid of the vendor, they may also start mapping out the proposed new workflow.
The vendor plays a very prominent role in the system implementation phase. In addition to the workflow analysis, it is during this phase that full system testing is completed. Other key activities during this phase include piloting of the new system, the formal go-live, and the immediate post-implementation period during which any application issues are resolved.
Suppose you are the Chief Manager in a distribution firm. How will you implement MIS in the organization?
If I were the Chief Manager in a distribution firm, I would implement MIS in the organization in the following way.
Implementation of MIS
The choice of the system or sub-system depends on its position in the total MIS plan, the size of the system, the user's understanding of the system, its complexity, and its interface with other systems. The designer first develops systems independently and then starts integrating them with other systems, enlarging the system scope and meeting the varying information needs.
Determining the position of the system in the MIS is easy. The real problem lies in the degree of structure and formalization in the system and procedures, which determines the timing and duration of development of the system. The higher the degree of structure and formalization, the greater the stabilization of the rules, the procedures, decision making, and the understanding of the overall business activity. Here, it is observed that the interaction between the user and the designer is smooth, each other's needs being clearly understood and mutually respected. Development becomes a methodical approach, with certainty in inputs, processes, and outputs.
MIS is generally used by medium- and large-scale organizations; small organizations are yet to understand its application. There is a dire need to build a computer culture by properly disseminating information about computer applications and their benefits.
Implementation of MIS can be achieved by using any of the following methods: direct, parallel, modular, or phase-in.
• Direct Approach
Direct installation of the new system, with immediate discontinuance of the old existing system, is referred to as the "cold turkey" approach. This approach is useful only when the relevant factors have been carefully considered.
• Parallel Approach
The selected new system is installed and operated in parallel with the current system. This method is expensive because of the duplication of facilities and personnel required to maintain both systems. In this approach, a target date must be fixed on which operation of the old system ceases and the new one operates on its own.
• Modular Approach
This is generally recognized as the "pilot approach", meaning the implementation of the system in the organization on a piecemeal basis.
• Phase-in-Implementation
This approach is similar to the modular method, but it differs in that the system, rather than the organization, is segmented. Its advantages are that the rate of change in a given organization can be kept to a minimum and that the data processing resources can be acquired gradually over a period of time. It also exhibits certain disadvantages, such as limited applicability, the additional cost of developing interfaces with the old system, and a feeling in the organization that the system is never completed.
Implementation Procedures
• Planning the Implementation
After designing the MIS, it is essential that the organization plan carefully for implementation. The planning stage should invariably include the following:
• Establishing the MIS
An efficient information system should be developed for monitoring the progress of implementation and for proper control of activities.
• Acquisition of Facilities
To install the new system or replace the current system, the manager should prepare a proposal for management approval, considering space requirements, movement of personnel, and locations for utility outlets and controls.
• Procedure Development
This is an important step in the implementation of the system, covering activities such as the evaluation and selection of hardware, the purchase or development of software, testing, and implementation strategies.
The MIS manager should generate the files and formats for storing actual data. This requires checklists, data formats, data storage forms, and other remarks in the database.
Tests should be performed in accordance with the specifications at the implementation stage, consisting of component tests, sub-system tests, and a total system test.
• Evaluation and Maintenance of the System
The performance should be evaluated in order to find out the cost effectiveness and efficacy of the system, with minimum errors due to design, environmental changes, or services.
Software Maintenance
Proper maintenance is the enigma of system development; it holds the software industry captive, tying up programming resources. Maintenance suffers from several problems, such as being regarded as non-rewarding, the non-availability of technicians and tools, users' lack of cognizance of maintenance problems and costs, and the lack of standard procedures and guidelines. Most programmers regard maintenance as low-level drudgery. If proper attention is paid over a period of time, eventually less maintenance is required.
Types of Maintenance
Several organizations having MIS generally seek to reduce maintenance costs; maintenance itself consists of three major phases.
Evaluation Methods
Evaluation of the MIS in an organization is an integral part of the control process. There are several evaluation approaches, such as quality assurance review, compliance audits, budget performance review, computer personnel productivity assessment, computer performance evaluation, service-level monitoring, user audit survey, post-installation review, and cost-benefit analysis.
Evaluation performance measurement can be classified into two classes: effectiveness and efficiency. The relationship between effectiveness and efficiency is that the former is a measure of the goodness of the output, while the latter is a measure of the resources required to achieve that output.
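As a minimal illustration of this distinction, the sketch below uses entirely hypothetical figures (the text does not define specific metrics): effectiveness is read as how well the delivered output matches what was required, while efficiency relates that output to the resources consumed.

```python
# Hypothetical figures used only to illustrate the two measures.
reports_required = 100            # output the business needs each month
reports_delivered_correctly = 92  # output actually delivered to an acceptable standard
staff_hours_spent = 460           # resources consumed to achieve that output

effectiveness = reports_delivered_correctly / reports_required   # goodness of the output
efficiency = reports_delivered_correctly / staff_hours_spent     # output per unit of resource

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency: {efficiency:.2f} correct reports per staff hour")
```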
The systems development life cycle is a project management technique that divides complex projects into
smaller, more easily managed segments or phases. Segmenting projects allows managers to verify the
successful completion of project phases before allocating resources to subsequent phases.
Software development projects typically include initiation, planning, design, development, testing,
implementation, and maintenance phases. However, the phases may be divided differently depending on
the organization involved. For example, initial project activities might be designated as request,
requirements-definition, and planning phases, or initiation, concept-development, and planning phases.
End users of the system under development should be involved in reviewing the output of each phase to
ensure the system is being built to deliver the needed functionality.
Note: Examiners should focus their assessments of development, acquisition, and maintenance activities
on the effectiveness of an organization’s project management techniques. Reviews should be centered
on ensuring the depth, quality, and sophistication of a project management technique are commensurate
with the characteristics and risks of the project under review.
INITIATION PHASE
Careful oversight is required to ensure projects support strategic business objectives and resources are
effectively implemented into an organization's enterprise architecture. The initiation phase begins when
an opportunity to add, improve, or correct a system is identified and formally requested through the
presentation of a business case. The business case should, at a minimum, describe a proposal’s
purpose, identify expected benefits, and explain how the proposed system supports one of the
organization’s business strategies. The business case should also identify alternative solutions and detail
as many informational, functional, and network requirements as possible.
The presentation of a business case provides a point for managers to reject a proposal before they
allocate resources to a formal feasibility study. When evaluating software development requests (and
during subsequent feasibility and design analysis), management should consider input from all affected
parties. Management should also closely evaluate the necessity of each requested functional
requirement. A single software feature approved during the initiation phase can require several design
documents and hundreds of lines of code. It can also increase testing, documentation, and support
requirements. Therefore, the initial rejection of unnecessary features can significantly reduce the
resources required to complete a project.
If provisional approval to initiate a project is obtained, the request documentation serves as a starting
point to conduct a more thorough feasibility study. Completing a feasibility study requires management to
verify the accuracy of the preliminary assumptions and identify resource requirements in greater detail.
Primary issues organizations should consider when compiling feasibility study support documentation
include:
Business Considerations:
Strategic business and technology goals and objectives;
Expected benefits measured against the value of current technology;
Potential organizational changes regarding facilities or the addition/reduction of end users,
technicians, or managers;
Budget, scheduling, or personnel constraints; and
Potential business, regulatory, or legal issues that could impact the feasibility of the project.
Functional Requirements:
End-user functional requirements;
Internal control and information security requirements;
Operating, database, and backup system requirements (type, capacity, performance);
Connectivity requirements (stand-alone, Local Area Network, Wide Area Network, external);
Network support requirements (number of potential users; type, volume, and frequency of data
transfers); and
Interface requirements (internal or external applications).
Project Factors:
Project management methodology;
Risk management methodology;
Estimated completion dates of projects and major project phases; and
Estimated costs of projects and major project phases.
Cost/Benefit Analysis:
Expected useful life of the proposed product;
Alternative solutions (buy vs. build);
Nonrecurring project costs (personnel, hardware, software, and overhead);
Recurring operational costs (personnel, maintenance, telecommunications, and overhead);
Tangible benefits (increased revenues, decreased costs, return-on-investments); and
Intangible benefits (improved public opinions or more useful information).
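A simple worked sketch of the cost/benefit comparison described above might look like the following. All names and figures are hypothetical; they only illustrate how nonrecurring costs, recurring costs, and tangible benefits over the expected useful life can be combined to compare the buy-versus-build alternatives.

```python
# Hypothetical buy-versus-build comparison over the product's expected useful life.
USEFUL_LIFE_YEARS = 5
TANGIBLE_BENEFIT_PER_YEAR = 180_000   # e.g. increased revenues plus decreased costs

options = {
    "build": {"nonrecurring": 400_000, "recurring_per_year": 60_000},
    "buy":   {"nonrecurring": 250_000, "recurring_per_year": 90_000},
}

for name, costs in options.items():
    total_cost = costs["nonrecurring"] + costs["recurring_per_year"] * USEFUL_LIFE_YEARS
    net_benefit = TANGIBLE_BENEFIT_PER_YEAR * USEFUL_LIFE_YEARS - total_cost
    print(f"{name}: total cost {total_cost:,}, net tangible benefit {net_benefit:,}")
```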
The feasibility support documentation should be compiled and submitted for senior management or board
study. The feasibility study document should provide an overview of the proposed project and identify
expected costs and benefits in terms of economic, technical, and operational feasibility. The document
should also describe alternative solutions and include a recommendation for approval or rejection. The
document should be reviewed and signed off on by all affected parties. If approved, management should
use the feasibility study and support documentation to begin the planning phase.
PLANNING PHASE
The planning phase is the most critical step in completing development, acquisition, and maintenance
projects. Careful planning, particularly in the early stages of a project, is necessary to coordinate activities
and manage project risks effectively. The depth and formality of project plans should be commensurate
with the characteristics and risks of a given project.
Project plans refine the information gathered during the initiation phase by further identifying the specific
activities and resources required to complete a project. A critical part of a project manager’s job is to
coordinate discussions between user, audit, security, design, development, and network personnel to
identify and document as many functional, security, and network requirements as possible.
DESIGN PHASE
The design phase involves converting the informational, functional, and network requirements identified
during the initiation and planning phases into unified design specifications that developers use to script
programs during the development phase. Program designs are constructed in various ways. Using a top-
down approach, designers first identify and link major program components and interfaces, then expand
design layouts as they identify and link smaller subsystems and connections. Using a bottom-up
approach, designers first identify and link minor program components and interfaces, then expand design
layouts as they identify and link larger systems and connections.
Contemporary design techniques often use prototyping tools that build mock-up designs of items such as
application screens, database layouts, and system architectures. End users, designers, developers,
database managers, and network administrators should review and refine the prototyped designs in an
iterative process until they agree on an acceptable design. Audit, security, and quality assurance
personnel should be involved in the review and approval process.
Management should be particularly diligent when using prototyping tools to develop automated controls.
Prototyping can enhance an organization’s ability to design, test, and establish controls. However,
employees may be inclined to resist adding additional controls, even though they are needed, after the
initial designs are established.
Designers should carefully document completed designs. Detailed documentation enhances a
programmer’s ability to develop programs and modify them after they are placed in production. The
documentation also helps management ensure final programs are consistent with original goals and
specifications.
Organizations should create initial testing, conversion, implementation, and training plans during the
design phase. Additionally, they should draft user, operator, and maintenance manuals.
Designing appropriate security, audit, and automated controls into applications is a challenging task.
Often, because of the complexity of data flows, program logic, client/server connections, and network
interfaces, organizations cannot identify the exact type and placement of the features until interrelated
functions are identified in the design and development phases. However, the security, integrity, and
reliability of an application is enhanced if management considers security, audit, and automated control
features at the onset of a project and includes them as soon as possible in application and system
designs. Adding controls late in the development process or when applications are in production is more
expensive, time consuming, and usually results in less effective controls.
Standards should be in place to ensure end users, network administrators, auditors, and security
personnel are appropriately involved during initial project phases. Their involvement enhances a project
manager's ability to define and incorporate security, audit, and control requirements. The same groups
should be involved throughout a project’s life cycle to assist in refining and testing the features as projects
progress.
Application control standards enhance the security, integrity, and reliability of automated systems by
ensuring input, processed, and output information is authorized, accurate, complete, and secure. Controls
are usually categorized as preventative, detective, or corrective. Preventative controls are designed to
prevent unauthorized or invalid data entries. Detective controls help identify unauthorized or invalid
entries. Corrective controls assist in recovering from unwanted occurrences.
Input Controls
Automated input controls help ensure employees accurately input information, systems properly record
input, and systems either reject, or accept and record, input errors for later review and correction.
Examples of automated input controls include:
Check Digits – Check digits are numbers produced by mathematical calculations performed on input
data such as account numbers. The calculation confirms the accuracy of input by verifying the
calculated number against other data in the input data, typically the final digit.
Completeness Checks – Completeness checks confirm that blank fields are not input and that
cumulative input matches control totals.
Duplication Checks – Duplication checks confirm that duplicate information is not input.
Limit Checks – Limit checks confirm that a value does not exceed predefined limits.
Range Checks – Range checks confirm that a value is within a predefined range of parameters.
Reasonableness Checks – Reasonableness checks confirm that a value meets predefined criteria.
Sequence Checks – Sequence checks confirm that a value is sequentially input or processed.
Validity Checks – Validity checks confirm that a value conforms to valid input criteria.
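As an illustration of how a few of these input controls might be automated, the sketch below implements simplified check digit, completeness, limit, and range checks. The field names, limits, and the weighted-sum check digit scheme are hypothetical, chosen only to show the pattern of rejecting or flagging invalid input.

```python
# Hypothetical sketches of a few of the automated input controls listed above.
# Field names, limits, and the weighted-sum check digit scheme are illustrative only.

def check_digit_ok(account_number: str) -> bool:
    """Check digit: recompute the final digit from the preceding digits and
    compare it with the digit that was actually entered."""
    digits = [int(d) for d in account_number]
    body, entered = digits[:-1], digits[-1]
    calculated = sum((i + 1) * d for i, d in enumerate(body)) % 10
    return calculated == entered

def completeness_ok(record: dict, required_fields: list) -> bool:
    """Completeness check: confirm that no required field was left blank."""
    return all(str(record.get(f, "")).strip() != "" for f in required_fields)

def limit_ok(amount: float, limit: float = 10_000.00) -> bool:
    """Limit check: confirm that the value does not exceed a predefined limit."""
    return amount <= limit

def range_ok(value: int, low: int = 1, high: int = 31) -> bool:
    """Range check: confirm that the value falls within predefined parameters."""
    return low <= value <= high

record = {"account": "1234561", "amount": 2500.00, "day_of_month": 15}
errors = []
if not check_digit_ok(record["account"]):
    errors.append("account number failed check digit")
if not completeness_ok(record, ["account", "amount", "day_of_month"]):
    errors.append("required field left blank")
if not limit_ok(record["amount"]):
    errors.append("amount exceeds predefined limit")
if not range_ok(record["day_of_month"]):
    errors.append("day of month outside predefined range")

# Reject, or accept and record, erroneous input for later review and correction.
print(errors if errors else "record accepted")
```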
Processing Controls
Automated processing controls help ensure systems accurately process and record information and either
reject, or process and record, errors for later review and correction. Processing includes merging files,
modifying data, updating master files, and performing file maintenance. Examples of automated
processing controls include:
Batch Controls – Batch controls verify processed run totals against input control totals. Batches are
verified against various items such as total dollars, items, or documents processed.
Error Reporting – Error reports identify items or batches that include errors. Items or batches with
errors are withheld from processing, posted to a suspense account until corrected, or processed and
flagged for later correction.
Transaction Logs – Users verify logged transactions against source documents. Administrators use
transaction logs to track errors, user actions, resource usage, and unauthorized access.
Run-to-Run Totals – Run-to-run totals compiled during input, processing, and output stages are
verified against each other.
Sequence Checks – Sequence checks identify or reject missing or duplicate entries.
Interim Files – Operators revert to automatically created interim files to validate the accuracy, validity,
and completeness of processed data.
Backup Files – Operators revert to automatically created master-file backups if transaction processing
corrupts the master file.
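The following sketch, using hypothetical batch data, shows how batch control totals and run-to-run totals can be verified against the input control totals, with mismatched batches withheld or posted to a suspense account for correction.

```python
# Hypothetical batch used to illustrate batch control and run-to-run totals.
input_batch = [125.00, 310.50, 89.99, 45.25]        # amounts keyed with the batch
control_totals = {"items": 4, "dollars": 570.74}    # totals prepared when the batch was assembled

def batch_totals(amounts):
    return {"items": len(amounts), "dollars": round(sum(amounts), 2)}

# Input stage: verify the keyed batch against the prepared control totals.
if batch_totals(input_batch) != control_totals:
    raise ValueError("batch rejected: keyed totals do not match control totals")

# Processing stage: run-to-run totals compiled after the run are verified against
# the input totals; mismatched batches are withheld or posted to suspense.
processed_batch = [round(amount, 2) for amount in input_batch]   # stand-in for the actual run
if batch_totals(processed_batch) != control_totals:
    print("run-to-run mismatch: post batch to a suspense account for correction")
else:
    print("run-to-run totals agree: batch accepted for posting")
```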
Output Controls
Automated output controls help ensure systems securely maintain and properly distribute processed
information. Examples of automated output controls include:
Batch Logs – Batch logs record batch totals. Recipients of distributed output verify the output against
processed batch log totals.
Distribution Controls – Distribution controls help ensure output is only distributed to authorized
individuals. Automated distribution lists and access restrictions on information stored electronically or
spooled to printers are examples of distribution controls.
Destruction Controls – Destruction controls help ensure electronically distributed and stored
information is destroyed appropriately by overwriting outdated information or demagnetizing
(degaussing) disks and tapes. Refer to the IT Handbook’s “Information Security Booklet” for more
information on disposal of media.
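A small sketch of how batch log and distribution controls might interact is shown below; the batch identifiers, recipients, and totals are hypothetical.

```python
# Hypothetical batch log and distribution list used to illustrate output controls.
batch_log = {"batch_id": "B-1042", "records": 250, "total_dollars": 98413.55}
distribution_list = {"B-1042": {"ops_manager", "branch_accounting"}}

def release_output(batch_id, recipient, records, total_dollars):
    # Distribution control: output is released only to authorized recipients.
    if recipient not in distribution_list.get(batch_id, set()):
        print(f"blocked: {recipient} is not authorized to receive {batch_id}")
        return False
    # Batch log control: the recipient verifies received output against logged totals.
    if (records, total_dollars) != (batch_log["records"], batch_log["total_dollars"]):
        print(f"mismatch: output for {batch_id} does not agree with the batch log")
        return False
    print(f"released: {batch_id} delivered to {recipient}")
    return True

release_output("B-1042", "ops_manager", 250, 98413.55)   # released
release_output("B-1042", "teller_desk", 250, 98413.55)   # blocked: not on the distribution list
```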
DEVELOPMENT PHASE
The development phase involves converting design specifications into executable programs. Effective
development standards include requirements that programmers and other project participants discuss
design specifications before programming begins. The procedures help ensure programmers clearly
understand program designs and functional requirements.
Programmers use various techniques to develop computer programs. The large transaction-oriented
programs associated with financial institutions have traditionally been developed using procedural
programming techniques. Procedural programming involves the line-by-line scripting of logical instructions
that are combined to form a program.
Primary procedural programming activities include the creation and testing of source code and the
refinement and finalization of test plans. Typically, individual programmers write and review (desk test)
program modules or components, which are small routines that perform a particular task within an
application. Completed components are integrated with other components and reviewed, often by a group
of programmers, to ensure the components properly interact. The process continues as component
groups are progressively integrated and as interfaces between component groups and other systems are
tested.
Organizations should complete testing plans during the development phase. Additionally, they should
update conversion, implementation, and training plans and user, operator, and maintenance manuals.
Development Standards
Development standards should be in place to address the responsibilities of application and system
programmers. Application programmers are responsible for developing and maintaining end-user
applications. System programmers are responsible for developing and maintaining internal and open-
source operating system programs that link application programs to system software and subsequently
to hardware. Managers should thoroughly understand development and production environments to
ensure they appropriately assign programmer responsibilities.
Development standards should prohibit a programmer's access to data, programs, utilities, and systems
outside their individual responsibilities. Library controls can be used to manage access to, and the
movement of programs between, development, testing, and production environments. Management
should also establish standards requiring programmers to document completed programs and test results
thoroughly. Appropriate documentation enhances a programmer's ability to correct programming errors
and modify production programs.
Coding standards, which address issues such as the selection of programming languages and tools, the
layout or format of scripted code, and the naming conventions of code routines and program libraries, are
outside the scope of this document. However, standardized, yet flexible, coding standards enhance an
organization’s ability to decrease coding defects and increase the security, reliability, and maintainability
of application programs. Examiners should evaluate an organization’s coding standards and related code
review procedures.
Library Controls
Libraries are collections of stored documentation, programs, and data. Program libraries include reusable
program routines or modules stored in source or object code formats. Program libraries allow
programmers to access frequently used routines and add them to programs without having to rewrite the
code. Dynamic link libraries include executable code programs can automatically run as part of larger
applications.
Library controls should include:
Automated Password Controls – Management should establish logical access controls for all libraries
or objects within libraries. Establishing controls on individual objects within libraries can create
security administration burdens. However, if similar objects (executable and non-executable routines,
test and production data, etc.) are grouped into separate libraries, access can be granted at library
levels.
Automated Library Applications – When feasible, management should implement automated library
programs, which are available from equipment manufacturers and software vendors. The programs
can restrict access at library or object levels and produce reports that identify who accessed a library
and what, if any, changes were made.
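To illustrate library-level access control, the sketch below groups hypothetical objects into separate libraries and grants access per library rather than per object, while logging each access attempt.

```python
# Hypothetical libraries grouping similar objects so access can be granted at
# the library level rather than object by object.
libraries = {
    "PROD.SOURCE": ["postings.cbl", "interest.cbl"],
    "TEST.DATA":   ["accounts_test.dat"],
    "PROD.DATA":   ["accounts.dat"],
}

# Access is granted per library, not per individual object.
grants = {
    "app_programmer": {"TEST.DATA"},
    "librarian":      {"PROD.SOURCE", "TEST.DATA", "PROD.DATA"},
}

def log_access(user, library, action):
    allowed = library in grants.get(user, set())
    # An automated library application would also report what, if any, changes were made.
    print(f"{user} {action} {library}: {'allowed' if allowed else 'denied'}")
    return allowed

log_access("app_programmer", "TEST.DATA", "read")    # allowed
log_access("app_programmer", "PROD.DATA", "update")  # denied
```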
Version Controls
Library controls facilitate software version controls. Version controls provide a means to systematically
retain chronological copies of revised programs and program documentation.
Development version control systems, sometimes referred to as concurrent version systems, assist
organizations in tracking different versions of source code during development. The systems do not
simply identify and store multiple versions of source code files. They maintain one file and identify and
store only changed code. When a user requests a particular version, the system recreates that version.
Concurrent version systems facilitate the quick identification of programming errors. For example, if
programmers install a revised program on a test server and discover programming errors, they only have
to review the changed code to identify the error.
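The sketch below illustrates the underlying idea of keeping one base copy, holding each revision as a delta, and recreating any requested version on demand. It leans on Python's difflib purely to keep the example short and correct; real concurrent version systems store only the changed code and are far more sophisticated.

```python
# Conceptual sketch only: one base copy plus one delta per committed revision.
import difflib

class DeltaStore:
    def __init__(self, base_lines):
        self.base = list(base_lines)
        self.deltas = []                              # one delta per committed revision

    def commit(self, new_lines):
        previous = self.checkout(len(self.deltas))
        self.deltas.append(list(difflib.ndiff(previous, new_lines)))

    def checkout(self, version):
        lines = list(self.base)
        for delta in self.deltas[:version]:
            lines = list(difflib.restore(delta, 2))   # recreate the "after" side of the delta
        return lines

store = DeltaStore(["total = a + b", "print(total)"])
store.commit(["total = a + b", "total += fee", "print(total)"])   # revision 1
store.commit(["total = a + b", "total += fee", "log(total)"])     # revision 2
print(store.checkout(1))   # the system recreates revision 1 when it is requested
```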
Software Documentation
Organizations should maintain detailed documentation for each application and application system in
production. Thorough documentation enhances an organization’s ability to understand functional,
security, and control features and improves its ability to use and maintain the software. The
documentation should contain detailed application descriptions, programming documentation, and
operating instructions. Standards should be in place that identify the type and format of required
documentation such as system narratives, flowcharts, and any special system coding, internal controls, or
file layouts not identified within individual application documentation.
Management should maintain documentation for internally developed programs and externally acquired
products. In the case of acquired software, management should ensure (either through an internal review
or third-party certification) prior to purchase, that an acquired product’s documentation meets their
organization's minimum documentation standards. For additional information regarding acquired software
distinctions (open/closed code) refer to the "Escrowed Documentation" discussion in the "Acquisition"
section.
Examiners should consider access and change controls when assessing documentation activities.
Change controls help ensure organizations appropriately approve, test, and record software
modifications. Access controls help ensure individuals only have access to sections of documentation
directly related to their job functions.
TESTING PHASE
The testing phase requires organizations to complete various tests to ensure the accuracy of
programmed code, the inclusion of expected functionality, and the interoperability of applications and
other network components. Thorough testing is critical to ensuring systems meet organizational and end-
user requirements.
If organizations use effective project management techniques, they will complete test plans while
developing applications, prior to entering the testing phase. Weak project management techniques or
demands to complete projects quickly may pressure organizations to develop test plans at the start of the
testing phase. Test plans created during initial project phases enhance an organization’s ability to create
detailed tests. The use of detailed test plans significantly increases the likelihood that testers will identify
weaknesses before products are implemented.
Testing groups are comprised of technicians and end users who are responsible for assembling and
loading representative test data into a testing environment. The groups typically perform tests in stages,
either from a top-down or bottom-up approach. A bottom-up approach tests smaller components first and
progressively adds and tests additional components and systems. A top-down approach first tests major
components and connections and progressively tests smaller components and connections. The
progression and definitions of completed tests vary between organizations.
Bottom-up tests often begin with functional (requirements based) testing. Functional tests should ensure
that expected functional, security, and internal control features are present and operating properly.
Testers then complete integration and end-to-end testing to ensure application and system components
interact properly. Users then conduct acceptance tests to ensure systems meet defined acceptance
criteria.
Testers often identify program defects or weaknesses during the testing process. Procedures should be in
place to ensure programmers correct defects quickly and document all corrections or modifications.
Correcting problems quickly increases testing efficiencies by decreasing testers’ downtime. It also
ensures a programmer does not waste time trying to debug a portion of a program without defects that is
not working because another programmer has not debugged a defective linked routine. Documenting
corrections and modifications is necessary to maintain the integrity of the overall program documentation.
Organizations should review and complete user, operator, and maintenance manuals during the testing
phase. Additionally, they should finalize conversion, implementation, and training plans.
Acceptance Testing – End users perform acceptance tests to assess the overall functionality and
interoperability of an application.
End-to-End Testing – End users and system technicians perform end-to-end tests to assess the
interoperability of an application and other system components such as databases, hardware,
software, or communication devices.
Functional Testing – End users perform functional tests to assess the operability of a program against
predefined requirements. Functional tests include black box tests, which assess the operational
functionality of a feature against predefined expectations, or white box tests, which assess the
functionality of a feature’s code.
Integration Testing – End users and system technicians perform integration tests to assess the
interfaces of integrated software components.
Parallel Testing – End users perform parallel tests to compare the output of a new application against
a similar, often the original, application.
Regression Testing – End users retest applications to assess functionality after programmers make
code changes to previously tested applications.
Stress Testing – Technicians perform stress tests to assess the maximum limits of an application.
String Testing – Programmers perform string tests to assess the functionality of related code modules.
System Testing – Technicians perform system tests to assess the functionality of an entire system.
Unit Testing – Programmers perform unit tests to assess the functionality of small modules of code.
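As a small illustration of the lowest level in this hierarchy, the unit test below checks a single code module against predefined expectations in a black box style. The interest-calculation function and its expected values are assumptions introduced only for the example.

```python
# Hypothetical module under test and its unit test.
import unittest

def monthly_interest(balance, annual_rate):
    """Small module of code under test: simple monthly interest, rounded to cents."""
    return round(balance * annual_rate / 12, 2)

class MonthlyInterestTest(unittest.TestCase):
    def test_typical_balance(self):
        # Black box check: compare output against a predefined expectation.
        self.assertEqual(monthly_interest(1200.00, 0.06), 6.00)

    def test_zero_balance(self):
        self.assertEqual(monthly_interest(0.00, 0.06), 0.00)

if __name__ == "__main__":
    unittest.main()
```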
IMPLEMENTATION PHASE
The implementation phase involves installing approved applications into production environments.
Primary tasks include announcing the implementation schedule, training end users, and installing the
product. Additionally, organizations should input and verify data, configure and test system and security
parameters, and conduct post-implementation reviews. Management should circulate implementation
schedules to all affected parties and should notify users of any implementation responsibilities.
After organizations install a product, pre-existing data is manually input or electronically transferred to a
new system. Verifying the accuracy of the input data and security configurations is a critical part of the
implementation process. Organizations often run a new system in parallel with an old system until they
verify the accuracy and reliability of the new system. Employees should document any programming,
procedural, or configuration changes made during the verification process.
PROJECT EVALUATION
Management should conduct post-implementation reviews at the end of a project to validate the
completion of project objectives and assess project management activities. Management should interview
all personnel actively involved in the operational use of a product and document and address any
identified problems.
Management should analyze the effectiveness of project management activities by comparing, among
other things, planned and actual costs, benefits, and development times. They should document the
results and present them to senior management. Senior management should be informed of any
operational or project management deficiencies.
MAINTENANCE PHASE
The maintenance phase involves making changes to hardware, software, and documentation to support
its operational effectiveness. It includes making changes to improve a system’s performance, correct
problems, enhance security, or address user requirements. To ensure modifications do not disrupt
operations or degrade a system’s performance or security, organizations should establish appropriate
change management standards and procedures.
Routine changes are not as complex as major modifications and can usually be implemented in the
normal course of business. Routine change controls should include procedures for requesting, evaluating,
approving, testing, installing, and documenting software modifications.
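One way to picture routine change controls is as a record that must pass through each of those steps in order. The sketch below is a hypothetical illustration of that workflow, not a prescribed format; the identifiers and step names are assumptions.

```python
# Hypothetical change record that must pass through each routine change control
# step, in order, before the modification is considered complete.
from dataclasses import dataclass, field

STEPS = ["requested", "evaluated", "approved", "tested", "installed", "documented"]

@dataclass
class ChangeRecord:
    change_id: str
    description: str
    completed: list = field(default_factory=list)

    def advance(self, step):
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"{self.change_id}: '{expected}' must be completed before '{step}'")
        self.completed.append(step)

change = ChangeRecord("CHG-2041", "Add field edit to loan entry screen")
for step in STEPS:
    change.advance(step)
print(change.completed)   # all six steps recorded for the modification
```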
Emergency changes may address an issue that would normally be considered routine; however, because of security concerns or processing problems, the changes must be made quickly. Emergency change
controls should include the same procedures as routine change controls. Management should establish
abbreviated request, evaluation, and approval procedures to ensure they can implement changes quickly.
Detailed evaluations and documentation of emergency changes should be completed as soon as possible
after changes are implemented. Management should test routine and, whenever possible, emergency
changes prior to implementation and quickly notify affected parties of all changes. If management is
unable to thoroughly test emergency modifications before installation, it is critical that they appropriately
backup files and programs and have established back-out procedures in place.
Software patches are similar in complexity to routine modifications. This document uses the term "patch"
to describe program modifications involving externally developed software packages. However,
organizations with in-house programming may also refer to routine software modifications as patches.
Patch management programs should address procedures for evaluating, approving, testing, installing,
and documenting software modifications. However, a critical part of the patch management process
involves maintaining an awareness of external vulnerabilities and available patches.
Maintaining accurate, up-to-date hardware and software inventories is a critical part of all change
management processes. Management should carefully document all modifications to ensure accurate
system inventories. (If material software patches are identified but not implemented, management should
document the reason why the patch was not installed.)
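A hypothetical sketch of such an inventory is shown below: each entry records the system, its version, and available patches, and any material patch that was deliberately not installed carries a documented reason, so missing justifications can be flagged. The system names, patch identifiers, and reasons are assumptions for illustration.

```python
# Hypothetical software inventory used for patch management.
inventory = [
    {"system": "loan-origination", "version": "4.2",
     "patches": [{"id": "P-117", "material": True, "installed": True,
                  "reason_not_installed": None}]},
    {"system": "report-server", "version": "2.8",
     "patches": [{"id": "P-204", "material": True, "installed": False,
                  "reason_not_installed": "vendor fix conflicts with custom interface; "
                                          "mitigated by firewall rule, revisit next release"}]},
    {"system": "teller-platform", "version": "7.0",
     "patches": [{"id": "P-330", "material": True, "installed": False,
                  "reason_not_installed": None}]},
]

# Flag material patches that are missing and have no documented justification.
for entry in inventory:
    for patch in entry["patches"]:
        if patch["material"] and not patch["installed"] and not patch["reason_not_installed"]:
            print(f"{entry['system']}: material patch {patch['id']} not installed and no reason documented")
```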
Management should coordinate all technology related changes through an oversight committee and
assign an appropriate party responsibility for administering software patch management programs.
Quality assurance, security, audit, regulatory compliance, network, and end-user personnel should be
appropriately included in change management processes. Risk and security review should be done
whenever a system modification is implemented to ensure controls remain in place.
Refer to the "Maintenance" section of this booklet and the IT Handbook’s "Information Security Booklet"
for additional details regarding change controls.
DISPOSAL PHASE
The disposal phase involves the orderly removal of surplus or obsolete hardware, software, or data.
Primary tasks include the transfer, archiving, or destruction of data records. Management should transfer
data from production systems in a planned and controlled manner that includes appropriate backup and
testing procedures. Organizations should maintain archived data in accordance with applicable record
retention requirements. They should also archive system documentation in case it becomes necessary to
reinstall a system into production. Management should destroy data by overwriting old information or
degaussing (demagnetizing) disks and tapes. Refer to the IT Handbook’s “Information Security Booklet”
for more information on disposal of media.