Computer science
Computer science deals with the theoretical foundations of computation and practical techniques for their
application.
Computer science is the study of processes that interact with data and that can be represented
as data in the form of programs. It enables the use of algorithms to manipulate, store,
and communicate digital information. A computer scientist studies the theory of computation and
the design of software systems.[1]
Its fields can be divided into theoretical and practical disciplines. Computational complexity
theory is highly abstract, while computer graphics emphasizes real-world
applications. Programming language theory considers approaches to the description of
computational processes, while software engineering involves the use of programming
languages and complex systems. Human–computer interaction considers the challenges in
making computers useful, usable, and accessible.
History
Main article: History of computer science
Charles Babbage, sometimes referred to as the "father of computing".[2]
Ada Lovelace is often credited with publishing the first algorithm intended for processing on a computer.[3]
The earliest foundations of what would become computer science predate the invention of the
modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have
existed since antiquity, aiding in computations such as multiplication and division. Algorithms for
performing computations have existed since antiquity, even before the development of
sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623.[4] In
1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped
Reckoner.[5] He may be considered the first computer scientist and information theorist, for,
among other reasons, documenting the binary number system. In 1820, Thomas de
Colmar launched the mechanical calculator industry[note 1] when he invented his
simplified arithmometer, which was the first calculating machine strong enough and reliable
enough to be used daily in an office environment. Charles Babbage started the design of the
first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him
the idea of the first programmable mechanical calculator, his Analytical Engine.[6] He started
developing this machine in 1834, and "in less than two years, he had sketched out many of
the salient features of the modern computer".[7] "A crucial step was the adoption of a punched card system derived from the Jacquard loom"[7] making it infinitely programmable.[note 2] In 1843,
during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of
the many notes she included, an algorithm to compute the Bernoulli numbers, which is
considered to be the first published algorithm ever specifically tailored for implementation on a
computer.[8] Around 1885, Herman Hollerith invented the tabulator, which used punched cards to
process statistical information; eventually his company became part of IBM. Following Babbage,
although unaware of his earlier work, Percy Ludgate in 1909 published[9] the second of the only two designs for mechanical analytical engines in history. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business,[10] to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used
cards and a central computing unit. When the machine was finished, some hailed it as
"Babbage's dream come true".[11]
During the 1940s, as new and more powerful computing machines such as the Atanasoff–Berry
computer and ENIAC were developed, the term computer came to refer to the machines rather
than their human predecessors.[12] As it became clear that computers could be used for more than
just mathematical calculations, the field of computer science broadened to study computation in
general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia
University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's
first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division,
which today operates research facilities around the world.[13] Ultimately, the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946.[14]
Computer science began to be established as a distinct academic discipline in the 1950s and
early 1960s.[15][16] The world's first computer science degree program, the Cambridge Diploma in
Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first
computer science department in the United States was formed at Purdue University in 1962.[17]
Since practical computers became available, many applications of computing have become
distinct areas of study in their own rights.
Although many initially believed it was impossible that computers themselves could actually be a
scientific field of study, in the late fifties it gradually became accepted among the greater
academic population.[18][19] It is the now well-known IBM brand that formed part of the computer
science revolution during this time. IBM (short for International Business Machines) released the
IBM 704[20] and later the IBM 709[21] computers, which were widely used during the exploration
period of such devices. "Still, working with the IBM [computer] was frustrating […] if you had
misplaced as much as one letter in one instruction, the program would crash, and you would
have to start the whole process over again".[18] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.[19]
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John
Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first
working transistor, the point-contact transistor, in 1947.[22][23] In 1953, the University of
Manchester built the first transistorized computer, called the Transistor Computer.[24] However,
early junction transistors were relatively bulky devices that were difficult to manufacture on a
mass-production basis, which limited them to a number of specialised applications.[25] The metal–
oxide–silicon field-effect transistor (MOSFET, or MOS transistor) was invented by Mohamed
Atalla and Dawon Kahng at Bell Labs in 1959.[26][27] It was the first truly compact transistor that
could be miniaturised and mass-produced for a wide range of uses.[25] The MOSFET made it
possible to build high-density integrated circuit chips,[28][29] leading to what is known as
the computer revolution[30] or microcomputer revolution.[31]
Time has seen significant improvements in the usability and effectiveness of computing
technology.[32] Modern society has seen a significant shift in the users of computer technology,
from usage only by experts and professionals to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human assistance was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.
See also: History of computing and History of informatics
Etymology
See also: Informatics § Etymology
Although first proposed in 1956,[19] the term "computer science" appears in a 1959 article
in Communications of the ACM,[33] in which Louis Fein argues for the creation of a Graduate
School in Computer Sciences analogous to the creation of Harvard Business School in 1921,[34] justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.[33] His
efforts, and those of others such as numerical analyst George Forsythe, were rewarded:
universities went on to create such departments, starting with Purdue in 1962.[35] Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed.[36] Certain
departments of major universities prefer the term computing science, to emphasize precisely that
difference. Danish scientist Peter Naur suggested the term datalogy,[37] to reflect the fact that the
scientific discipline revolves around data and data treatment, while not necessarily involving
computers. The first scientific institution to use the term was the Department of Datalogy at the
University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy.
The term is used mainly in the Scandinavian countries. An alternative term, also proposed by
Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including
statistics and databases.
Also, in the early days of computing, a number of terms for the practitioners of the field of
computing were suggested in the Communications of the ACM—turingineer, turologist, flow-
charts-man, applied meta-mathematician, and applied epistemologist.[38] Three months later in
the same journal, comptologist was suggested, followed next year by hypologist.[39] The
term computics has also been suggested.[40] In Europe, terms derived from contracted
translations of the expression "automatic information" (e.g. "informazione automatica" in Italian)
or "information and mathematics" are often used,
e.g. informatique (French), Informatik (German), informatica (Italian,
Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian)
or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been
adopted in the UK (as in the School of Informatics of the University of Edinburgh).[41] "In the U.S.,
however, informatics is linked with applied computing, or computing in the context of another
domain."[42]
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger
Dijkstra, states that "computer science is no more about computers than astronomy is about
telescopes."[note 3] The design and deployment of computers and computer systems is generally
considered the province of disciplines other than computer science. For example, the study of
computer hardware is usually considered part of computer engineering, while the study of
commercial computer systems and their deployment is often called information technology
or information systems. However, there has been much cross-fertilization of ideas between the
various computer-related disciplines. Computer science research also often intersects other
disciplines, such as philosophy, cognitive
science, linguistics, mathematics, physics, biology, statistics, and logic.
Computer science is considered by some to have a much closer relationship with mathematics
than many scientific disciplines, with some observers saying that computing is a mathematical
science.[15] Early computer science was strongly influenced by the work of mathematicians such
as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there
continues to be a useful interchange of ideas between the two fields in areas such
as mathematical logic, category theory, domain theory, and algebra.[19]
The relationship between Computer Science and Software Engineering is a contentious issue,
which is further muddied by disputes over what the term "Software Engineering" means, and how
computer science is defined.[43] David Parnas, taking a cue from the relationship between other
engineering and science disciplines, has claimed that the principal focus of computer science is
studying the properties of computation in general, while the principal focus of software
engineering is the design of specific computations to achieve practical goals, making the two
separate but complementary disciplines.[44]
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer
science departments with a mathematics emphasis and with a numerical orientation consider
alignment with computational science. Both types of departments tend to make efforts to bridge
the field educationally if not across all research.
Philosophy
Main article: Philosophy of computer science
A number of computer scientists have argued for the distinction of three separate paradigms in
computer science. Peter Wegner argued that those paradigms are science, technology, and
mathematics.[45] Peter Denning's working group argued that they are theory, abstraction
(modeling), and design.[46] Amnon H. Eden described them as the "rationalist paradigm" (which
treats computer science as a branch of mathematics, which is prevalent in theoretical computer
science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be
found in engineering approaches, most prominently in software engineering), and the "scientific
paradigm" (which approaches computer-related artifacts from the empirical perspective of natural
sciences, identifiable in some branches of artificial intelligence).[47] Computer science focuses on
methods involved in design, specification, programming, verification, implementation and testing
of human-made computing systems.[48]
Fields
Computer science is no more about computers than astronomy is about telescopes.
— Michael Fellows
Further information: Outline of computer science
As a discipline, computer science spans a range of topics from theoretical studies of algorithms
and the limits of computation to the practical issues of implementing computing systems in
hardware and software.[49][50] CSAB, formerly called Computing Sciences Accreditation Board—
which is made up of representatives of the Association for Computing Machinery (ACM), and
the IEEE Computer Society (IEEE CS)[51]—identifies four areas that it considers crucial to the
discipline of computer science: theory of computation, algorithms and data
structures, programming methodology and languages, and computer elements and architecture.
In addition to these four areas, CSAB also identifies fields such as software engineering, artificial
intelligence, computer networking and communication, database systems, parallel computation,
distributed computation, human–computer interaction, computer graphics, operating systems,
and numerical and symbolic computation as being important areas of computer science.[49]
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, to provide more efficient methodologies. All studies related to mathematical, logical, and formal concepts and methods could be considered theoretical computer science, provided that the motivation is clearly drawn from
the field of computing.
Data structures and algorithms
Main articles: Data structure and Algorithm
Data structures and algorithms are the studies of commonly used computational methods and
their computational efficiency.
O(n²) · Analysis of algorithms · Algorithms · Data structures · Combinatorial optimization · Computational geometry
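As a rough illustration (my own sketch, not part of the article), the Python example below solves the same task, detecting duplicates in a list, in two ways: a nested-loop version that runs in O(n²) time and a hash-set version that runs in expected O(n) time. The function names and test data are invented for the example.

# Sketch: the same problem with two different data-structure choices.

def has_duplicates_quadratic(items):
    """Compare every pair of elements: O(n^2) time, O(1) extra space."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """Remember elements already seen in a set: O(n) expected time, O(n) space."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    data = [3, 1, 4, 1, 5]
    print(has_duplicates_quadratic(data), has_duplicates_linear(data))  # True True

Both functions give the same answer; the difference the field studies is how their cost grows as the input grows.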
Theory of computation
Main article: Theory of computation
According to Peter Denning, the fundamental question underlying computer science is, "What
can be (efficiently) automated?"[15] Theory of computation is focused on answering fundamental
questions about what can be computed and what amount of resources are required to perform
those computations. In an effort to answer the first question, computability theory examines
which computational problems are solvable on various theoretical models of computation. The
second question is addressed by computational complexity theory, which studies the time and
space costs associated with different approaches to solving a multitude of computational
problems.
The famous P = NP? problem, one of the Millennium Prize Problems,[52] is an open problem in the
theory of computation.
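To make the verification-versus-search asymmetry behind P = NP concrete, here is a minimal sketch (an invented example, not from the article) using the NP-complete subset-sum problem: checking a proposed certificate takes polynomial time, while the obvious exhaustive search examines up to 2^n subsets. Whether every problem whose solutions can be verified quickly can also be solved quickly is exactly the open question.

from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check that a claimed subset really sums to the target."""
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve_by_search(numbers, target):
    """Brute-force search over all 2^n subsets: exponential time in general."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

if __name__ == "__main__":
    nums, target = [3, 34, 4, 12, 5, 2], 9
    cert = solve_by_search(nums, target)        # e.g. [4, 5]
    print(cert, verify(nums, target, cert))     # [4, 5] True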
Information theory is related to the quantification of information. This was developed by Claude
Shannon to find fundamental limits on signal processing operations such as compressing data
and on reliably storing and communicating data.[53] Coding theory is the study of the properties
of codes (systems for converting information from one form to another) and their fitness for a
specific application. Codes are used for data compression, cryptography, error detection and
correction, and more recently also for network coding. Codes are studied for the purpose of
designing efficient and reliable data transmission methods.[54]
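As a small worked example (assumed, not from the article), the snippet below computes the Shannon entropy of a string in bits per symbol, the quantity that lower-bounds how far a lossless code can compress that source on average.

import math
from collections import Counter

def shannon_entropy(message):
    """Entropy H = -sum(p * log2 p) over the symbol frequencies of the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    print(round(shannon_entropy("aaaa"), 3))        # 0.0, a fully predictable source
    print(round(shannon_entropy("abca" * 8), 3))    # 1.5 bits per symbol
    print(round(shannon_entropy("abcdefgh"), 3))    # 3.0 bits per symbol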
Programming language theory
Main article: Programming language theory
Programming language theory is a branch of computer science that deals with the design,
implementation, analysis, characterization, and classification of programming languages and
their individual features. It falls within the discipline of computer science, both depending on and
affecting mathematics, software engineering, and linguistics. It is an active research area, with
numerous dedicated academic journals.
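A toy sketch of the kind of object programming language theory studies (an invented example, not from the article): a miniature language whose syntax is a nested tuple and whose meaning is given by a recursive evaluation function, a very small instance of defining and analyzing a language's semantics.

def evaluate(expr):
    """Evaluate ('+', a, b), ('*', a, b) or a plain number."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    if op == '+':
        return a + b
    if op == '*':
        return a * b
    raise ValueError(f"unknown operator: {op}")

if __name__ == "__main__":
    # (2 + 3) * 4
    print(evaluate(('*', ('+', 2, 3), 4)))  # 20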
Computer systems
Computer architecture and computer engineering
Main articles: Computer architecture and Computer engineering
Computer performance analysis is the study of work flowing through computers with the general
goals of improving throughput, controlling response time, using resources efficiently,
eliminating bottlenecks, and predicting performance under anticipated peak loads.[57]
Benchmarks are used to compare the performance of systems carrying different chips and/or
system architectures.[58]
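As an illustrative sketch of the measurement side of performance analysis (an assumed example, not from the article), the micro-benchmark below uses Python's standard timeit module to compare the same membership test against a list and a set, the sort of comparison used to locate a bottleneck before predicting behaviour under load.

import timeit

setup = """
data_list = list(range(100_000))
data_set = set(data_list)
"""

# Time 1000 probes for the worst-case element with each data structure.
list_time = timeit.timeit("99_999 in data_list", setup=setup, number=1_000)
set_time = timeit.timeit("99_999 in data_set", setup=setup, number=1_000)

print(f"list lookup: {list_time:.4f} s for 1000 probes")
print(f"set lookup:  {set_time:.4f} s for 1000 probes")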
Concurrent, parallel and distributed systems
Main articles: Concurrency (computer science) and Distributed computing
Computer networks
Main article: Computer network
This branch of computer science aims to manage networks between computers worldwide.
Computer security and cryptography
Main articles: Computer security and Cryptography
Databases
Main article: Database
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
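A minimal sketch of these ideas (the schema and data are invented for illustration): SQLite, a database management system available through Python's standard sqlite3 module, storing records and retrieving them through the SQL query language.

import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE papers (title TEXT, year INTEGER)")
cur.executemany(
    "INSERT INTO papers VALUES (?, ?)",
    [("On Computable Numbers", 1936),
     ("A Mathematical Theory of Communication", 1948)],
)
conn.commit()

# Declarative query: ask for the data, let the system decide how to fetch it.
for title, year in cur.execute("SELECT title, year FROM papers WHERE year < 1940"):
    print(title, year)                       # On Computable Numbers 1936
conn.close()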
Computer applications
Computer graphics and visualization
Main article: Computer graphics (computer science)
Computer graphics is the study of digital visual contents and involves the synthesis and
manipulation of image data. The study is connected to many other fields in computer science,
including computer vision, image processing, and computational geometry, and is heavily applied
in the fields of special effects and video games.
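As a small illustration of image synthesis (an assumed example, not from the article), the sketch below computes a pixel grid procedurally and writes it out as a plain-text PGM grayscale image, with no external libraries.

# Synthesize a 64x64 radial gradient and save it in the simple PGM (P2) format.
WIDTH, HEIGHT = 64, 64

def pixel(x, y):
    """Brightness of a radial gradient centred in the image, in the range 0-255."""
    dx, dy = x - WIDTH / 2, y - HEIGHT / 2
    dist = (dx * dx + dy * dy) ** 0.5
    return max(0, 255 - int(dist * 6))

rows = [[pixel(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]

with open("gradient.pgm", "w") as f:
    f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
    for row in rows:
        f.write(" ".join(str(v) for v in row) + "\n")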
Human–computer interaction
Main article: Human–computer interaction
Research in this field develops theories, principles, and guidelines for user interface designers, so that they can create satisfactory user experiences with desktop, laptop, and mobile devices.
Scientific computing and simulation
Scientific computing (or computational science) is the field of study concerned with
constructing mathematical models and quantitative analysis techniques and using computers to
analyze and solve scientific problems. A major usage of scientific computing is simulation of
various processes, including computational fluid dynamics, physical, electrical, and electronic
systems and circuits, as well as societies and social situations (notably war games) along with
their habitats, among many others. Modern computers enable optimization of such designs as
complete aircraft. Notable in electrical and electronic circuit design are SPICE,[62] as well as
software for physical realization of new (or modified) designs. The latter includes essential design
software for integrated circuits.[citation needed]
Numerical analysis · Computational physics · Computational chemistry · Bioinformatics
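As a minimal sketch of this kind of simulation (the circuit values and step size are assumed for illustration), the code below integrates the voltage of a discharging RC circuit, dV/dt = -V/(RC), with the explicit Euler method, a simple instance of the numerical analysis listed above.

# Simulate a capacitor discharging through a resistor with explicit Euler steps.
R = 1_000.0      # resistance in ohms (assumed value)
C = 1e-6         # capacitance in farads (assumed value)
V = 5.0          # initial capacitor voltage in volts
dt = 1e-5        # time step in seconds
tau = R * C      # time constant, here 1 ms

for step in range(1, 301):
    V += dt * (-V / tau)             # explicit Euler update
    if step % 100 == 0:
        t = step * dt
        print(f"t = {t * 1000:.1f} ms  V = {V:.3f} V")
# The computed values track the analytic solution V0 * exp(-t / tau).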
Artificial intelligence
Main article: Artificial intelligence