
IMPORTANCE OF INFORMATION TECHNOLOGY IN THE SOCIETY

While technology is playing a larger role in society overall, developing a holistic, up-to-date information technology system is particularly critical in higher education because it offers new avenues to explore academically, socially, and recreationally. It is becoming increasingly important in at least three distinct areas:

Pedagogy
Volume of data. The amount of information and resources available online through Internet search engines and portals enables users to search through large amounts of material from library databases around the world. Time can therefore be spent more productively analyzing and synthesizing data rather than simply digging for and retrieving it.

Immediacy and collaboration. Technology-enabled pedagogy allows professors and students to interact in real time with rapidly changing information. For example, a Constitutional Law class can use online news resources to discuss the outcome of a current Supreme Court case that is too recent to be included in a textbook.

Interactive multimedia. Students and faculty can access a vast array of online resources, past and present, which can be studied in dynamic multimedia form through DVDs or online content.

Career preparation
Computer skills are necessary for all careers, from technology-based positions to medicine to the fine arts.

Administrative
Student information (financial data, names, addresses, grades, class schedules, etc.) is readily available in a centralized system. The ACI program provides students with around-the-clock, secure access to their personal information. This becomes increasingly important as the need for student data tracking and reporting in such areas as financial aid and federal compliance grows.

HISTORY OF THE COMPUTER
What is a computer? In its most basic form, a computer is any device that aids humans in performing various kinds of computations or calculations. In that respect the earliest computer was the abacus, used to perform basic arithmetic operations.

Every computer supports some form of input, processing, and output. This is less obvious on a primitive device such as the abacus, where input, output, and processing are simply the acts of moving the pebbles into new positions, seeing the changed positions, and counting. Regardless, this is what computing is all about, in a nutshell: we input information, the computer processes it according to its basic logic or the program currently running, and it outputs the results. Modern computers do this electronically, which enables them to perform a vastly greater number of calculations in far less time.

Although we now use computers to process images, sound, text, and other non-numerical forms of data, all of it ultimately depends on nothing more than basic numerical calculations. Graphics, sound, and so on are merely abstractions of the numbers being crunched within the machine; in digital computers these are the ones and zeros, representing electrical on and off states, and endless combinations of those. In other words, every image, every sound, and every word has a corresponding binary code (the short sketch at the end of this section illustrates this for a single word).

While the abacus may technically have been the first computer, most people today associate the word with the electronic computers invented in the last century, which have evolved into the modern computers we know today.
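To make the idea of binary codes concrete, here is a minimal Python sketch (an illustration added for clarity, not part of the original material) showing how each character of a word maps to a number and, in turn, to a pattern of ones and zeros:

# Each character has a numeric code (ASCII/Unicode), and that number
# is ultimately stored in the machine as a pattern of bits.
word = "abacus"
for char in word:
    code = ord(char)            # numeric code of the character, e.g. 'a' -> 97
    bits = format(code, "08b")  # the same number written as 8 binary digits
    print(char, code, bits)     # e.g. a 97 01100001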

First Generation Computers (1940s – 1950s)


The first electronic computers used vacuum tubes, and they were huge and complex. The first general-purpose electronic computer was the ENIAC (Electronic Numerical Integrator And Computer). It was digital, although it didn't operate with binary code, and it was reprogrammable to solve a complete range of computing problems. It was programmed using plug boards and switches, supported input from an IBM card reader, and produced output on an IBM card punch. It took up 167 square meters, weighed 27 tons, and consumed 150 kilowatts of power. It used thousands of vacuum tubes, crystal diodes, relays, resistors, and capacitors. The first non-general-purpose electronic computer was the ABC (Atanasoff–Berry Computer); other computers of this era included the German Z3, the ten British Colossus computers, the LEO, the Harvard Mark I, and UNIVAC.

Second Generation Computers (1955 – 1960)


The second generation of computers came about thanks to the invention of the transistor, which began replacing vacuum tubes in computer designs. Transistor computers consumed far less power, produced far less heat, and were much smaller than the first generation, albeit still big by today's standards.

The first transistor computer was created at the University of Manchester in 1953. The most popular transistor computer was the IBM 1401. IBM also created the first disk drive in 1956, the IBM 350 RAMAC.

Third Generation Computers (1960s)


The invention of the integrated circuit (IC), also known as the microchip, paved the way for computers as we know them today. Making circuits out of single pieces of silicon, a semiconductor, allowed them to be much smaller and more practical to produce. This also started the ongoing process of integrating ever larger numbers of transistors onto a single microchip. During the sixties microchips started making their way into computers, but the process was gradual and second-generation computers still held on. Minicomputers appeared first; the earliest of these were still based on non-microchip transistors, while later versions were hybrids based on both transistors and microchips, such as IBM's System/360. They were much smaller and cheaper than the first- and second-generation computers, also known as mainframes. Minicomputers can be seen as a bridge between mainframes and microcomputers, which came later as the proliferation of microchips in computers grew.

Fourth Generation Computers (1971 – present)


The first microchip-based central processing units consisted of multiple microchips for the different CPU components. The drive for ever greater integration and miniaturization led toward single-chip CPUs, where all of the necessary CPU components were put onto a single microchip, called a microprocessor. The first single-chip CPU, or microprocessor, was the Intel 4004. The advent of the microprocessor spawned the evolution of microcomputers, the kind that would eventually become the personal computers we are familiar with today.

First Generation of Microcomputers (1971 – 1976)


The first microcomputers were a weird bunch. They often came in kits, and many were essentially just boxes with lights and switches, usable only by engineers and hobbyists who could understand binary code. Some, however, did come with a keyboard and/or a monitor, bearing somewhat more resemblance to modern computers. It is arguable which of the early microcomputers should be called the first. The CTC Datapoint 2200 is one candidate, although it didn't actually contain a microprocessor (being based on a multi-chip CPU design instead) and wasn't meant to be a standalone computer, but merely a terminal for mainframes. The reason some might consider it the first microcomputer is that it could be used as a de facto standalone computer, it was small enough, and its multi-chip CPU architecture actually became a basis for the x86 architecture later used in the IBM PC and its descendants. Plus, it even came with a keyboard and a monitor, an exception in those days. However, if we are looking for the first microcomputer that came with a proper microprocessor, was meant to be a standalone computer, and didn't come as a kit, then it would be the Micral N, which used the Intel 8008 microprocessor.

Popular early microcomputers that did come in kits include the MOS Technology KIM-1, the Altair 8800, and the Apple I. The Altair 8800 in particular spawned a large following among hobbyists and is considered the spark that started the microcomputer revolution, as these hobbyists went on to found companies centered around personal computing, such as Microsoft and Apple.

Second Generation Microcomputers (1977 – present)


As microcomputers continued to evolve, they became easier to operate, making them accessible to a larger audience. They typically came with a keyboard and a monitor, or could easily be connected to a TV, and they supported visual representation of text and numbers on the screen. In other words, lights and switches were replaced by screens and keyboards, and the need to understand binary code diminished as they increasingly came with programs that could be used by issuing more easily understandable commands. Famous early examples of such computers include the Commodore PET, the Apple II, and, in the 80s, the IBM PC.

The nature of the underlying electronic components didn't change between these computers and the modern computers we know today, but what did change was the number of circuits that could be put onto a single microchip. Intel's co-founder Gordon Moore predicted the doubling of the number of transistors on a single chip every two years, which became known as Moore's Law, and this trend has roughly held for over 30 years thanks to advancing manufacturing processes and microprocessor designs. The consequence was a predictable exponential increase in processing power that could be put into an ever smaller package, which had a direct effect on the possible form factors as well as the applications of modern computers, and which is what most of the subsequent paradigm-shifting innovations in computing were about.
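As a rough, hypothetical illustration of that exponential trend, the Python sketch below projects transistor counts under a strict two-year doubling, starting from the roughly 2,300 transistors of the Intel 4004 in 1971 (real chips only loosely follow this idealized curve):

# Idealized Moore's Law projection: transistor count doubles every two years.
# Starting figures (Intel 4004, 1971, ~2,300 transistors) are approximate.
start_year, start_transistors = 1971, 2300

for year in range(start_year, 2002, 10):
    doublings = (year - start_year) // 2
    transistors = start_transistors * 2 ** doublings
    print(f"{year}: ~{transistors:,} transistors per chip")

# Prints roughly: 1971: ~2,300   1981: ~73,600   1991: ~2,355,200   2001: ~75,366,400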
