Types of Computer Systems
1. Mainframe Computer
2. Desktop Computer
3. Laptop Computer
For output, the laptop has an LCD or TFT screen and a set of small speakers.
Many businesses are replacing desktop PCs with special plug-in workstations designed around laptop
computers because of the flexibility they offer.
4. Palmtop Computer
Palmtops have small keyboards and most let you open menus and select icons by using a special pen
or stylus. Most let you enter data by writing with the stylus. They are powered by batteries and store
their data on removable memory units called flash cards.
You can run a wide range of software on palmtops, for example simple word processing, database
and spreadsheet software, as well as useful applications such as electronic diaries. Many modern
palmtops:
are converging with mobile phones to let you access the internet
have wireless communications to let you access your local area network.
Types of processing systems
In this article, I'm going to explain five different types of data processing. The first two, scientific
and commercial data processing, are application-specific types of data processing; the remaining
three are method-specific types of data processing.
First, a quick summary of data processing: data processing is defined as the process of converting
raw data into meaningful information.
There are different types of data processing techniques, depending on what the data is needed for.
Types of data processing at a bench level may include statistical, algebraic, mapping and plotting,
forest and tree methods, machine learning, linear models, non-linear models, and relational and non-
relational processing. These are methodologies and techniques that can be applied within the key
types of data processing.
What we're going to discuss in this article are the five main hierarchical types of data processing,
that is, the overarching types of systems in data analytics.
1. Scientific Data Processing
When used in scientific study or research and development work, data sets can require quite different
methods than commercial data processing.
Scientific data processing is a special type of data processing that is used in academic and research fields.
It is vitally important for scientific data that there are no significant errors that could contribute to
wrongful conclusions. Because of this, the cleaning and validating steps can take considerably more
time than for commercial data processing.
Scientific data processing needs to draw conclusions, so the steps of sorting and summarization often
need to be performed very carefully, using a wide variety of processing tools to ensure no selection
biases or wrong relationships are produced.
Scientific data processing often needs a topic expert in addition to a data expert to work with the
quantities involved.
2. Commercial Data Processing
Commercial data processing usually applies standard relational databases and uses batch processing;
however, some applications, in particular technology applications, may use non-relational databases.
There are still many applications within commercial data processing that lean towards a scientific
approach, such as predictive market research. These may be considered a hybrid of the two methods.
Within the main areas of scientific and commercial processing, different methods are used for
applying the processing steps to data. The three main types of data processing we’re going to discuss
are automatic/manual, batch, and real-time data processing.
3. Manual and Automatic Data Processing
It may not seem possible, but even today people still use manual data processing. Bookkeeping data
processing functions can be performed from a ledger, customer surveys may be manually collected
and processed, and even spreadsheet-based data processing is now considered somewhat manual. In
some of the more difficult parts of data processing, a manual component may be needed for intuitive
reasoning.
The first technology that led to the development of automated systems in data processing was the
punch card, used in census counting. Punch cards were also used in the early days of payroll data
processing. Computers started being used by corporations in the 1970s, when electronic data
processing began to develop. Some of the first applications for automated data processing, in the
way of specialized databases, were developed for customer relationship management (CRM) to drive
better sales.
Electronic data management became widespread with the introduction of the personal computer in
the 1980s. Spreadsheets provided simple electronic assistance for even everyday data management
functions such as personal budgeting and expense allocations.
Database management provided more automation of data processing functions, which is why I refer
to spreadsheets as a now rather manual tool in data management. The user is required to manipulate
all the data in a spreadsheet, almost as in a manual system; only the calculations are aided. In a
database, by contrast, users can extract data relationships and reports relatively easily, provided the
setup and entries are correctly managed.
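A minimal sketch of this difference using Python's built-in sqlite3 module (the table and figures here are hypothetical): once entries are correctly managed, one query extracts a report that a spreadsheet user would have to assemble by hand.

```python
import sqlite3

# Hypothetical expense data; in a spreadsheet the user would filter and
# total these rows manually, aided only by formulas.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expenses (category TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO expenses VALUES (?, ?)",
    [("travel", 120.00), ("meals", 45.50), ("travel", 80.00)],
)

# In a database, one statement extracts the relationship: total per category.
for category, total in conn.execute(
    "SELECT category, SUM(amount) FROM expenses GROUP BY category"
):
    print(category, total)
```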
Autonomous databases now look to be a data processing method of the future, especially in
commercial data processing. Oracle and Peloton are poised to offer users more automation with what
is termed a “self-driving” database. This development in the field of automatic data processing,
combined with machine learning tools for optimizing and improving service, aims to make accessing
and managing data easier for end users, without the need for highly specialized data professionals in-
house.
4. Batch Processing
Before the widespread use of distributed systems architecture, and even after it, stand-alone
computer systems have applied batch processing techniques to save computational time. This is
particularly useful in financial applications, or where data must be kept secure, such as medical records.
Batch processing completes a range of data processes as a batch, using single commands to apply
actions to multiple data sets. This is a little like the comparison of a computer spreadsheet to a
calculator: a calculation can be applied with one function, that is one step, to a whole column or
series of columns, giving multiple results from one action. The same concept is achieved in batch
processing for data: a series of actions or results can be achieved by applying a function to a whole
series of data. In this way, the computer processing time is far less.
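As a rough illustration of the spreadsheet analogy, here is a minimal Python sketch (the function and figures are hypothetical) of one function applied to a whole series of data in a single step:

```python
# One function, one step, applied to a whole batch of records,
# like a spreadsheet formula filled down an entire column.
def apply_interest(balances, rate=0.05):
    """Apply the same calculation to every record in the batch."""
    return [round(b * (1 + rate), 2) for b in balances]

batch = [100.00, 250.00, 980.00]   # a whole series of data
print(apply_interest(batch))       # multiple results from one action
```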
Batch processing can complete a queue of tasks without human intervention, and data systems may
assign priorities to certain functions or set times when batch processing can be completed.
Banks typically use this process to execute transactions after the close of business, when computers
are no longer involved in data capture and can be dedicated to processing functions.
5. Real-Time Processing
For commercial uses, many large data processing applications require real-time processing. That is,
they need to get results from data exactly as it happens. One application of this that most of us can
identify with is tracking stock market and currency trends. The data needs to be updated immediately
since investors buy in real time and prices update by the minute. Data on airline schedules and
ticketing, and GPS tracking applications in transport services have similar needs for real-time
updates.
The most common technology used in real-time processing is stream processing. The data analytics
are drawn directly from the stream, that is, at the source. Where data is used to draw conclusions
without being uploaded and transformed first, the process is much quicker.
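A toy Python sketch of the idea, assuming a hypothetical price feed: the analytic (a running maximum) is computed per event, directly from the stream, with nothing uploaded or transformed first.

```python
import random

def price_stream(n):
    """Stand-in for a live market data feed (hypothetical source)."""
    price = 100.0
    for _ in range(n):
        price += random.uniform(-1, 1)
        yield price

# The conclusion is drawn at the source, one event at a time.
session_high = float("-inf")
for tick in price_stream(1_000):
    session_high = max(session_high, tick)
print("session high:", round(session_high, 2))
```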
Data virtualization techniques are another important development in real-time data processing: the
data remains in its source form, and only the required information is pulled for data processing. The
beauty of data virtualization is that, where transformation is not necessary, error is reduced.
Data virtualization and stream processing mean that data analytics can be drawn in real time much
more quickly, benefiting many technical and financial applications by reducing processing times and errors.
Beyond these popular data processing techniques, there are three more processing techniques,
described below.
6. ONLINE PROCESSING
This data processing technique is derived from automatic data processing. It is also known as
immediate or random-access processing. Under this technique, transactions are processed by the
system at the time they occur, which can be seen most easily in the continuous processing of data
sets. This processing method emphasizes fast input of transaction data and connects directly with
the databases.
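A minimal sketch of online processing using Python's built-in sqlite3 (the account and amounts are hypothetical): each transaction is applied and committed the moment it occurs, rather than being queued for a later batch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 500.0)")

def handle_transaction(account_id, amount):
    """Process the event at the time of operation, directly in the database."""
    conn.execute(
        "UPDATE accounts SET balance = balance + ? WHERE id = ?",
        (amount, account_id),
    )
    conn.commit()   # the database reflects the transaction immediately

handle_transaction(1, -75.0)    # e.g., a withdrawal as it happens
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone())
```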
7. MULTI PROCESSING
This is the most commonly used data processing technique, found all over the globe wherever there
are computer-based setups for data capture and processing. As the name suggests, multiprocessing is
not bound to one single CPU but uses a collection of several CPUs. Because several processing
devices work in parallel, throughput is much higher. Jobs are broken into frames and then sent to the
multiple processors for processing; the result is obtained in less time and the output is increased. An
additional benefit is that every processing unit is independent, so the failure of any one will not
impact the working of the other processing units.
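A minimal sketch using Python's standard multiprocessing module: a hypothetical job is broken into frames, mapped across several CPUs, and the per-frame results are combined.

```python
from multiprocessing import Pool

def process_frame(frame):
    """Stand-in for the real per-frame work (hypothetical)."""
    return sum(x * x for x in frame)

if __name__ == "__main__":
    job = list(range(1_000_000))
    # Break the job into frames for the processors.
    frames = [job[i:i + 100_000] for i in range(0, len(job), 100_000)]
    with Pool() as pool:                      # one worker per CPU by default
        results = pool.map(process_frame, frames)
    print(sum(results))
```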
8. TIME SHARING
This kind of data processing is entirely based on time. One processing unit is used by several users,
and each user is allocated a set time slot in which to work on the same CPU/processing unit.
Processor time is divided into segments and allocated to users so that there is no clash of timings,
which makes it a multi-access system. This processing technique is also widely used, and is popular
in startups.
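A toy round-robin simulation of the idea (the users, work amounts, and quantum are hypothetical): one processing unit serves several users in fixed time slices, so no one monopolizes the CPU.

```python
from collections import deque

QUANTUM = 2   # time slice allocated to each user, in work units
jobs = deque([("alice", 5), ("bob", 3), ("carol", 4)])   # (user, work left)

while jobs:
    user, remaining = jobs.popleft()
    done = min(QUANTUM, remaining)
    print(f"{user} runs for {done} unit(s)")
    if remaining > done:
        jobs.append((user, remaining - done))   # back of the queue
```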
1. Understand your requirements before choosing the best processing technique for your project.
2. Filter your data precisely so you can apply the processing techniques to it.
3. Recheck the filtered data to confirm it still represents the original requirement and that you
have not missed any important fields.
4. Think about the output you would like to have, so you can follow your idea through.
5. Now that you have the filtered data and the desired output, choose the best and most reliable
processing technique.
6. Once you choose your technique as per your requirements, it will be easy to follow through to
the end result.
7. The chosen technique must be checked continually so that there are no loopholes, in order to
avoid mistakes.
8. Always apply ETL functions to recheck your datasets (a minimal sketch follows this list).
9. Remember to apply a timeline to your requirement; without a specific timeline it is wasteful to
commit the effort.
10. Test your output again against the initial requirement for a better delivery.
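Here is the minimal extract-transform-load recheck sketch promised in step 8, assuming a simple list-of-dicts dataset; all names here are hypothetical.

```python
raw_rows = [
    {"id": "1", "amount": "19.99"},
    {"id": "2", "amount": "bad"},   # will fail validation below
]

def extract(rows):
    return list(rows)                    # pull from the source

def transform(rows):
    clean = []
    for row in rows:
        try:
            clean.append({"id": int(row["id"]),
                          "amount": float(row["amount"])})
        except ValueError:
            print("rejected:", row)      # recheck: flag bad records
    return clean

def load(rows, target):
    target.extend(rows)                  # write to the destination

warehouse = []
load(transform(extract(raw_rows)), warehouse)
print(warehouse)
```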
Summary
This has been a little bit of an introduction to some of the different types of data processing. If you
like what you’ve read here and want to learn more, take a look around on our blog for more about
data processing systems.
a reliable system is one that is capable of operating without material error, fault, or failure during a
specified time in a specified environment
AICPA Trust Services: provides assurance on information systems, using a framework with five
principles for evaluating the reliability of a system; when a principle is not met, a risk exists:
o Security: the system is protected against unauthorized physical and logical access. Risks:
malicious (or accidental) alteration or damage to files and/or system; computer-based fraud;
unauthorized access to confidential data
o Availability: the system is available for operation and use as agreed and in conformity with
policies. Risks: interruption of business operations; loss of data
o Processing Integrity: system processing is complete, accurate, timely, and authorized. Risks:
invalid, incomplete, or inaccurate input data, processing, updating of master files, and creation of
output
o Online Privacy: personal information obtained as a result of e-commerce is collected, used,
disclosed, and retained as agreed. Risks: disclosure of customer information, such as SSNs,
credit card numbers, and credit ratings
o Confidentiality: information designated as confidential is protected as agreed. Risks: disclosure
of confidential data, such as transaction details, business plans, and legal documents
2. Control Environment
a. Segregation of Controls
at a minimum, segregate programming, operations, and the library function within the information
systems department
a more complete segregation of key functions within the IS department would be to separate:
o Systems analysis- analyzes the present user environment and requirements and may (1)
recommend changes to the present system, (2) recommend the purchase of a new system, or (3)
design a new information system
o Applications programming- responsible for writing, testing, and debugging the application
programs from specifications provided by the systems analyst
o Database administration- responsible for maintaining the database and restricting access to the
database to authorized personnel
o Data preparation- data may be prepared by user departments and input by key to magnetic tape
or disk
o Operations- responsible for daily computer operations of both hardware and software; mounts
the tape drives, supervises operations on the operator's console, accepts any required input, and
distributes any generated output; also responsible for help desks
o Data library- responsible for custody of the removable media and for the maintenance of
program and system documentation
o Data control- acts as liaison between users and the processing center; records input data in a
control log, follows the progress of processing, distributes output, and ensures compliance with
control totals
o Web administrator- responsible for overseeing the development, planning, and implementation
of the Web site; usually managerial
o Web master- responsible for providing expertise and leadership in the development of the Web
site, including its design, analysis, security, maintenance, content development, and updates
o Web designer- responsible for creating the visual content of the Web site
o Intranet/Extranet developer- responsible for writing programs based on the needs of the company
3. Risk Assessment
changes in computerized information systems and in operations may increase the risk of improper
financial reporting; the risk is affected by whether the company uses small computers and/or a
complex mainframe system
5. Monitoring
requires adequate computer skills to evaluate the propriety of processing of computerized
applications
IT can evaluate data and transactions based on established criteria and highlight items that appear
unusual
o Segregation Controls
Management, users, and information systems personnel approve new systems before they are
placed into operation.
All master and transaction file conversions should be controlled to prevent unauthorized changes
and to verify the accuracy of the results.
o Computer Hardware
Parity check- a special bit added to each character that can detect if the hardware loses a bit
during the internal movement of a character (see the sketch after this list)
Echo check- during the sending and receiving of characters, the receiving hardware repeats back
to the sending hardware what it received, and the sending hardware automatically resends any
characters that were received incorrectly
Boundary protection- ensures that simultaneous jobs running on the CPU cannot destroy or
change the allocated memory of another job
Periodic maintenance
A code comparison program may be used to compare source and/or object codes of a controlled
copy of a program with the program currently being used
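A toy Python sketch of the parity-check logic referenced above (hardware implements this in circuitry, not code; the flipped bit is simulated):

```python
def parity_bit(byte):
    """Return 1 if the byte has an odd number of 1-bits."""
    return bin(byte).count("1") % 2

char = ord("A")                  # 0b01000001
stored_parity = parity_bit(char)

received = char ^ 0b00000100     # simulate hardware flipping a bit in transit
if parity_bit(received) != stored_parity:
    print("parity error detected")
```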
o Segregation Controls
Access to data files and programs should be limited to those individuals authorized to access them
Call back- specialized form of user identification in which the user dials the system, identifies
him- or herself, and is disconnected from the system; then either an individual manually finds the
authorized telephone number or the system automatically finds the authorized telephone number,
and calls back
Encryption- protects data, since to use the data unauthorized users must not only obtain access
but also translate the coded data; encryption performed by physically secure hardware is
ordinarily more secure but more costly than that performed by software (a software sketch follows)
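A minimal sketch of software encryption using the third-party cryptography package (an assumption here; installed with pip install cryptography): an unauthorized user would need both the data and the key to translate it.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the key itself must be kept secure
cipher = Fernet(key)

token = cipher.encrypt(b"confidential transaction details")
print(token)                     # unreadable without the key
print(cipher.decrypt(token))     # only the key holder can translate it
```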
o Segregation Controls
operators should have an operations manual that contains instructions for processing programs
and solving routine operational problems, but not detailed program documentation
a control group should monitor the operator's activities, and jobs should be scheduled efficiently
o Other Controls
Contingency processing
File protection ring- processing control to ensure that an operator does not use a magnetic tape as
a tape to write on when it actually has critical information on it; if the ring is removed from the
tape, it cannot be written on
Input Controls
o Overall: inputs should be authorized and approved; the system should verify all significant data
fields used to record information (edit checks); conversion of data into machine-readable form
should be controlled and verified for accuracy
o Preprinted form
o Check digit- extra digit added to an identification number to detect data transmission errors
(sketched in code after this list)
o Control, batch, or proof total- a total of one numerical field for all the records of a batch that
would normally be added, such as total sales dollars
o Hash total- a control total where the total is meaningless for financial purposes, such as the sum
of all customer account numbers
o Limit (reasonableness) test- a test of the reasonableness of a field of data, given a predetermined
upper and/or lower limit
o Field check- control that limits the types of characters accepted into a specific data field
o Validity check- control that allows only "valid" transactions or data to be entered into the system
o Missing data check- control that searches for blanks inappropriately existing in input data
o Field size check- control of an exact number of characters to be input
o Logic check- ensures that illogical combinations of input are not accepted
o Redundant data check- uses two identifiers in each transaction record to confirm that the correct
master file record is being updated
o Closed-loop verification- control that allows data entry personnel to check the accuracy of input
data; may be used instead of a redundant data check
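Minimal Python sketches of three of the edit checks above; the field names and bounds are hypothetical, and the Luhn algorithm is used as one common check-digit scheme.

```python
def limit_test(value, lower, upper):
    """Limit (reasonableness) test against predetermined bounds."""
    return lower <= value <= upper

def missing_data_check(record, required):
    """Return required fields that are inappropriately blank."""
    return [f for f in required if not record.get(f)]

def luhn_valid(number):
    """Check-digit validation (Luhn scheme) catches transcription errors."""
    digits = [int(d) for d in str(number)][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

print(limit_test(45, 0, 168))                       # weekly hours worked
print(missing_data_check({"name": ""}, ["name"]))   # -> ['name']
print(luhn_valid(79927398713))                      # classic valid test number
```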
Processing controls: should include limit tests, record counts, and control totals; their effectiveness
depends on the effectiveness of both the programmed control activities that produce exception
reports and the manual follow-up activities
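A minimal sketch of the record-count and control-total comparison (the figures are hypothetical); disagreements go on an exception report for manual follow-up.

```python
input_records = [100.00, 250.50, 75.25]   # batch as submitted
expected_count = 3                         # record count from the batch header
expected_total = 425.75                    # control (proof) total

processed = [round(x, 2) for x in input_records]   # stand-in for real work

if (len(processed) != expected_count
        or round(sum(processed), 2) != expected_total):
    print("exception report: batch totals do not agree")   # manual follow-up
else:
    print("batch accepted")
```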
10. User Control Activities to Test Completeness and Accuracy of Computer-Processed Controls
o Checks of computer output against source documents, control totals, or other input to provide
assurance that programmed aspects of the financial reporting system and control activities have
operated effectively
o Reviewing computer processing logs to determine that all of the correct computer jobs executed
properly
o Priorities
o Backup approaches
    Computer operations
    Installing software
o Documentation of plan