
CIT 3253: NETWORK ADMINISTRATION AND

MANAGEMENT
STUDENT NAME: ONYANGO STEPHEN
OMONDI
REGISTRATION NUMBER:
CT201/106102/21
COURSE: BACHELOR OF SCIENCE
(COMPUTER SCIENCE)
DATE OF SUBMISSION: 6TH FEBRUARY 2024
INTRODUCTION
A computer network is a telecommunication channel that links a group of
computers, allowing communication and data transfer between the systems,
the software applications and their users. Machines are said to be
networked when a process in one machine can communicate with a process in
another machine. Networks are characterized by the transmission media used
to carry signals, the communication protocols used to organize network
traffic, the nature and size of the network, its topology, and its
organizational scope. The Internet is not only the best-known computer
network but also the most powerful.

Networks are often interconnected to form larger networks, with the Internet
being the best-known example of a network of networks. The literature often
blurs the distinction between a computer network and a distributed
system. The main difference is that in a distributed system a collection of
separate computers appears to its users as a single, coherent system: it
presents one model or paradigm to the users, usually implemented by a layer
of software on top of the operating system called middleware. The World Wide
Web is a well-established example of a distributed system. It is built on top
of the Internet and is based on a model in which the document (web page) is
everything.

In network administration, we are primarily dealing with the operational
management of human-computer systems. It addresses both the device
called the computer system and the people who use that technology, in a
balanced way. It is about linking together a network of computers (work
stations, PCs and other computers) and keeping them working even when
malicious users attempt to prevent them from working.

Network management is a service that uses distinct protocols, tools,
applications, and devices, operated by human network managers to monitor
and control software and hardware resources according to the needs of the
service.

MODULATION AND ENCODING TECHNIQUES


BACKGROUND OF STUDY
1.1 Modulation
Modulation is the process of changing one parameter of a signal according
to the variations of another signal. The original information-bearing
signal is called the modulating (message) signal, and the high-frequency
signal whose parameters are altered is called the carrier; the result is
known as the modulated signal. For instance, in amplitude modulation the
amplitude of the carrier wave is varied in proportion to the amplitude of
the message signal, whereas in angle (frequency or phase) modulation the
phase angle of the carrier is varied with the message signal.
LITERATURE REVIEW
1.1.1 Benefits of Modulation
1. Modulation can shift the frequency spectrum of a message signal to a
band that is more favorable to the channel. An antenna can only radiate
and receive signals efficiently when their wavelength is comparable to the
physical aperture of the antenna. Therefore, to transmit and receive, say,
voice by radio, we need to translate the voice signal to a much higher
frequency band.

2. Modulation allows multiplexing, that is, the combined transmission of
several users' signals over a shared medium. For instance, the radio
frequency spectrum must physically be shared, and modulation provides a way
to separate users into distinct frequency bands.
3. Modulation can provide some control over noise and interference. For
example, the effect of noise can be controlled to a large extent by
frequency modulation.
1.1.2 Classifications of modulation
The simplest kind of modulation is analog modulation, in which a parameter
of the carrier is varied continuously in proportion to the amplitude of the
analog information signal. There are two prominent analog modulation
techniques: amplitude modulation and angle modulation.
1. Amplitude Modulation
Amplitude modulation (AM) is one of the techniques applied in electronic
communication, mainly for transmitting information with the help of a high-
frequency carrier wave. AM works by changing the strength of the
transmitted signal in relation to the data being transmitted. For instance,
the signal-strength variation can represent the sounds to be reproduced by
a loudspeaker or the light intensity of television pixels. The first
practical implementations of amplitude modulation were referred to as
"undulatory currents".

Advantages of Amplitude Modulation
 Can easily be demodulated using a simple diode detector
 The coverage area of an AM receiver is wide
 AM signals propagate over long distances because of their high power
 Amplitude modulation is the cheapest and least complex
Disadvantages of Amplitude Modulation
 It is very sensitive to noise, so its performance is poor
 An AM signal is weaker than an FM signal when it propagates through a
channel
 It requires a bandwidth of twice the highest modulating frequency to
carry the signal on a carrier
2. Frequency Modulation
Frequency modulation (FM) is a popular form of modulation used for many
radio communications applications. FM broadcasts on the VHF bands still
offer exceptionally clear audio, and FM is also widely used for two-way
radio communication, particularly mobile radio in taxis and various other
kinds of vehicles. Despite the wide availability of digital transmission
techniques, frequency modulation remains one of the most prominent and
essential forms of modulation.
Advantages of using Frequency Modulation
 Does not require a linear amplifier in the transmitter
 Enables greater efficiency than many other modes
 Resilient to noise
 Resilient to other signal variations
Disadvantages of using Frequency Modulation
 It requires a more complicated demodulator
 Cannot cover large areas
 Transmitting and receiving equipment is very expensive
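The two analog schemes above can be sketched numerically. The snippet below is an illustrative simulation (the frequencies and modulation indices are arbitrary choices, not values from the text): AM varies the carrier's amplitude with the message, while FM varies its phase angle and leaves the amplitude constant.

```python
import numpy as np

fs = 100_000                      # sampling rate for the simulation (Hz)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal
f_msg, f_carrier = 1_000, 10_000  # message and carrier frequencies (Hz)

message = np.cos(2 * np.pi * f_msg * t)

# AM: the carrier's amplitude tracks the message (modulation index m)
m = 0.5
am = (1 + m * message) * np.cos(2 * np.pi * f_carrier * t)

# FM: the carrier's phase angle tracks the message (modulation index beta);
# the envelope stays constant, which is why FM resists amplitude noise
beta = 5.0
fm = np.cos(2 * np.pi * f_carrier * t + beta * np.sin(2 * np.pi * f_msg * t))
```

The constant envelope of `fm` illustrates why FM tolerates amplitude noise: a receiver can clip amplitude disturbances without losing information, whereas in AM the envelope itself is the information.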
1.2 Encoding
Encoding is the method employed to convert information from one signal
form to another in communication systems, for example converting analog
signals into their digital counterparts.
Digital to Analog converters

The digital-to-analog converter (DAC) is an electronic circuit that takes
digital input and turns it into an analog waveform. Digital-to-analog
converters are commonly used to create audio signals from digital data in
music players. Likewise, digital video signals are converted into analog
form to produce the shades and colors shown on televisions and cell phones.

Analog to Digital Converters

An analog-to-digital converter (ADC) is an element that transforms a
continuous physical quantity into a digital numerical value representing
the quantity's amplitude. Rather than performing a single conversion, an
ADC usually performs periodic conversions by sampling the input. This
produces a series of digital values: a continuous-time, continuous-
amplitude analog signal is digitized into a discrete-time, discrete-
amplitude digital signal. One widely used A/D converter is the ramp
circuit, which employs a comparator to compare voltage levels.
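The sampling-and-quantization behavior described above can be sketched in a few lines. This is a simplified numerical model, not a circuit-level ramp converter: it samples a signal at a fixed rate and maps each sample to one of 2^n integer codes (the function and parameter names are illustrative).

```python
import numpy as np

def adc_convert(signal_fn, fs, duration, n_bits=8, v_ref=1.0):
    """Sample a continuous-time signal at rate fs, then quantize each
    sample into an n_bits integer code, as an ADC does."""
    t = np.arange(0, duration, 1 / fs)      # discrete time instants
    samples = signal_fn(t)                  # sampling step
    levels = 2 ** n_bits
    # clip to the converter's input range, then map to integer codes
    clipped = np.clip(samples, -v_ref, v_ref * (1 - 1e-12))
    codes = np.floor((clipped + v_ref) / (2 * v_ref) * levels).astype(int)
    return codes

# Digitize a 50 Hz sine at 1 kHz with 8 bits: 100 codes in [0, 255]
codes = adc_convert(lambda t: 0.8 * np.sin(2 * np.pi * 50 * t),
                    fs=1_000, duration=0.1)
```

Raising `n_bits` shrinks the quantization step and hence the quantization error, at the cost of more bits per sample.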

CONCLUSION

Encoding and modulation are both required for the proper transmission and
storage of information.

The technique that should be applied is therefore influenced by the nature
of the data and the mode of transmission, together with other desirable
characteristics such as noise resistance and spectral efficiency.

Comparing and contrasting these techniques, and understanding their use in
communication systems, signal processing, and data processing, is vital to
many fields.

TRANSMISSION MEDIA
BACKGROUND OF STUDY
Historically, the story begins with the telegraph, which relied on wires to
transport electrical waveforms over distance; this was the birth of guided
media, where data follows a defined path. From the 1900s to the 1990s,
telephone networks thrived, and twisted-pair and coaxial cables took center
stage. These provided much greater bandwidths and paved the way for data
communication. The wireless evolution then began with the introduction of
radio waves and microwaves, which brought the age of unguided media, i.e.
data transmission without physical cables.

As technology continues to develop, we can expect further innovations in
transmission media: higher bandwidths, stronger security measures, and even
more sophisticated ways of transmitting data, such as lasers or undersea
cables.

LITERATURE REVIEW

Transmission media physically form the foundation of networks, enabling
data to travel from one device to another. This synthesis considers the
various types of media, their classifications, features, and uses, as well
as trends likely to develop over the next few years.

Guided Media:
 Twisted-pair cable: Studies emphasize its cost-effectiveness,
simplicity, and common use in LANs (e.g., Onwuka, 2020), though
shortcomings such as interference and low bandwidth are also observed
(Stallings, 2016).
 Coaxial cable: Its higher bandwidth is the primary topic of research,
making it a good fit for cable television and early internet networks
(Hafner, 2019). But due to its greater size and price, this medium has
slowly faded from use (Held, 2021).
 Fiber optic cable: A large body of literature emphasizes its advantages
in speed, bandwidth, and security, making it one of the leading
technologies for high-performance networks and long-distance
communication (Agrawal, 2023; Ouzounova, 2020). Multi-core fibers,
another significant step in fiber-optic technology, have been studied as
well (Li, 2022).
Unguided Media:
 Radio waves: Studies highlight their use for broadcasting, mobile
communication, and long-range networks, pointing out their
adaptability and large coverage areas (Rappaport, 2019). But
limitations such as interference, security issues, and differences
in available bandwidth by frequency are also recognized (Geier, 2010).
 Microwaves: Microwave communication is discussed as point-to-point
links with a more limited propagation distance than radio waves. In
particular, studies focus on advancements in millimeter-wave
technologies for high-speed applications in the future (Pi, 2020).
 Infrared light: Its limited range and line-of-sight requirement are
highlighted by research, which is why it is used for short-range data
transmission and remote control systems (Szymanski, 2017). Other
studies focus on the possibilities of Li-Fi, which uses visible light
for indoor high-data-rate communication (March, 2014).
Emerging Trends:
 New materials and technologies: Research explores materials and
technologies that promise faster transmission, such as graphene and
metamaterials, which are believed to form the basis of advanced, faster
telecommunication media.

Conclusions
Factors like distance, bandwidth, security and cost affect the
choice of transmission media. Understanding these characteristics
is important for designing and deploying reliable networks. Future
trends lean towards hybrid networks, new materials, and technologies
that will extend the limits of data transmission speed.

OSI REFERENCE MODEL


BACKGROUND OF STUDY
The Open Systems Interconnection (OSI) reference model is a
conceptual framework that outlines the different layers involved in
network communication. It was developed by the International
Organization for Standardization (ISO) in the 1980s, and aimed to
provide a standardized approach to network design and
interoperability, regardless of the specific technologies used. The
OSI model is a valuable tool for understanding the fundamental
principles of network communication.

LITERATURE REVIEW
The OSI model has seven key layers:
1. Physical Layer
This layer deals with the physical transmission of data through
cables, wireless signals or other media.
2. Data Link Layer
Ensures reliable data transmission between network devices
over a single physical link.
3. Network Layer
This layer is responsible for routing data packets across
networks, determining the best path for delivery from the
source to the destination.
4. Transport Layer
Provides reliable end-to-end data transfer between
applications on different devices.
5. Session Layer
This layer establishes, manages, and terminates
communication sessions between applications.
6. Presentation Layer
This layer deals with data format conversions and
encryption/decryption for different applications.
7. Application Layer
Provides network services to applications such as
file transfer, email and web browsing.
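A compact way to visualize the layered model is encapsulation: on the sending side each layer wraps the data from the layer above with its own header, and the receiving side unwraps them in reverse. The sketch below is purely illustrative (the header labels are invented abbreviations, not real protocol headers):

```python
# Abbreviations for the seven layers, application layer first
LAYERS = ["APP", "PRES", "SESS", "TRANS", "NET", "LINK", "PHY"]

def encapsulate(payload: str) -> str:
    """Wrap the payload with one header per layer, application layer
    first, so the physical layer's header ends up outermost."""
    for layer in LAYERS:
        payload = f"[{layer}|{payload}]"
    return payload

def decapsulate(frame: str) -> str:
    """Strip the headers in reverse order, recovering the payload."""
    while frame.startswith("["):
        frame = frame[frame.index("|") + 1:-1]
    return frame
```

`encapsulate("hello")` yields `[PHY|[LINK|[NET|[TRANS|[SESS|[PRES|[APP|hello]]]]]]]`, mirroring how a frame on the wire carries every upper layer's data inside it.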

Benefits of the OSI Model


1. It provides a clear, structured way to understand the
complex processes involved in networking
2. It makes it easier to troubleshoot network problems
3. It provides historical context for understanding the
evolution of network technologies and protocols

Conclusions
Despite not being the underlying structure of the internet, the OSI reference
model continues to hold significant value. Its clear layered approach provides
a fundamental understanding of network communication principles, aids in
education and troubleshooting, and serves as a reference point for analyzing
and designing new network technologies.

References
 Comer, D. E. (2011). Internetworking with TCP/IP: Principles,
Protocols, and Architectures. Addison-Wesley.
 Heijden, R. G. van der. (2012). The Internet Architecture. Elsevier.
 Forouzan, B. A. (2007). TCP/IP Protocol Suite. McGraw-Hill
Education.
 Lamberton, B. (2000). Networking Simplified. Addison-Wesley.
INTERNETWORKING
BRIDGES
BACKGROUND OF STUDY
Bridges are network devices that serve as connectors between two or
more Local Area Networks (LANs). Emerging alongside Ethernet technology
in the early 1970s, bridges addressed the need to expand LANs. In the
modern era, with the rise of faster networks, bridges support higher
bandwidths and advanced features such as VLANs.
CONCLUSION

Bridges remain significant network components. Understanding their historical
context, types, and applications offers valuable insights for network design,
troubleshooting, and appreciating the evolution of data communication. In
specific scenarios, bridges can still be cost-effective solutions, particularly in
extending simple networks or connecting specialized systems.
REFERENCES
Huitema, C. (2014). IPv6: The New Internet Protocol. John Wiley & Sons.
Tanenbaum, A. S. (2003). Computer Networks (4th ed.). Prentice Hall.
Heijden, R. G. van der. (2012). The Internet Architecture. Elsevier.
LITERATURE REVIEW

Bridges primarily connect Local Area Networks (LANs) that share the
same communication protocol at layer 2, such as Ethernet.
However, some advanced bridges can translate between different
protocols, such as Ethernet and Token Ring.
Functions of Bridges:
 Decide whether to forward each message
 Expand networks and minimize local traffic
 Modify message headers slightly
 Connect two or more LANs
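The forwarding decision listed above can be sketched as a learning bridge: it records the port on which each source address appears, forwards known destinations out of a single port, filters frames whose destination is on the same segment, and floods unknown destinations. The class and method names here are illustrative assumptions, not a real implementation:

```python
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}            # MAC address -> port it was seen on

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports this frame is forwarded out of."""
        self.table[frame_src] = in_port          # learn the source
        if frame_dst in self.table:
            out = self.table[frame_dst]
            # destination on the same segment: filter (minimize traffic)
            return [] if out == in_port else [out]
        # unknown destination: flood to every other port
        return [p for p in self.ports if p != in_port]
```

The filtering case is what lets a bridge "expand networks while minimizing local traffic": frames between hosts on the same segment never cross the bridge.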

ROUTERS
BACKGROUND OF STUDY
A router is a network device responsible for directing data packets
across diverse paths to their intended destinations. In the early days,
routers forwarded messages based on manually maintained network maps.

LITERATURE REVIEW
Routers play a central role in the internet by directing data packets across
networks to their intended destinations. This review explores the existing
literature on routers, examining their history, types, functionalities, and future
trends.
Types of routers
1. Wired and wireless routers
2. Core and edge routers
Significance of routers
1. Efficient data delivery
2. Network Scalability
3. Security and network management

Emerging technologies in routers

1. High speed and capacity
2. Software-defined networking
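The core of "efficient data delivery" is the longest-prefix-match lookup a router performs on every packet: among all routing-table entries that contain the destination address, the most specific (longest) prefix wins. Below is a minimal sketch using Python's standard `ipaddress` module; the route names and prefixes are invented for illustration:

```python
import ipaddress

# Toy routing table: prefix -> next hop (names are assumptions)
ROUTES = {
    "10.0.0.0/8": "core-link",
    "10.1.0.0/16": "edge-link",
    "0.0.0.0/0": "default-gw",
}

def next_hop(dst: str) -> str:
    """Pick the matching route with the longest prefix."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (net for net in ROUTES if addr in ipaddress.ip_network(net)),
        key=lambda net: ipaddress.ip_network(net).prefixlen,
    )
    return ROUTES[best]
```

A destination like 10.1.2.3 matches all three entries, but the /16 entry wins because it is the most specific; addresses outside 10.0.0.0/8 fall through to the default route.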

CONCLUSION

Routers represent a fundamental technology underpinning the internet's


functionality. Understanding their history, types, functionalities, and future
trends provides valuable insights into the complex mechanisms that
enable global data communication. As technology evolves, routers will
continue to adapt and play a crucial role in managing the ever-growing
demands of our interconnected world.

REFERENCES

 Forouzan, B. A. (2007). TCP/IP Protocol Suite. McGraw-Hill
Education.
 Kurose, J. F., & Ross, K. W. (2013). Computer Networking: A Top-Down
Approach. Pearson Education.

GATEWAYS
BACKGROUND OF STUDY
Gateways act as translators and mediators, facilitating communication
between networks with different protocols, media or security policies.
The earliest gateways, in the 1970s and 1980s, were custom-built,
performing basic protocol conversion and routing traffic between
incompatible networks.

Types of gateways
 Packet-level gateways – enable communication between networks
with dissimilar structures
 Application-level gateways – enforce security and manage access
to resources

Functions of Gateways
1. Routing
2. Security
3. Access Control
4. Network address translation
5. Protocol conversion
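Network address translation, listed above, can be sketched as a small mapping table: outbound packets have their private source address rewritten to the gateway's public address and a fresh port, and the table reverses the mapping for replies. This is a simplified illustration (the class name, port range, and addresses are assumptions, not a real NAT implementation):

```python
class NatGateway:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}      # public port -> (private ip, private port)

    def outbound(self, src_ip, src_port):
        """Rewrite a private source to the gateway's public address."""
        pub_port = self.next_port
        self.next_port += 1
        self.table[pub_port] = (src_ip, src_port)
        return self.public_ip, pub_port

    def inbound(self, dst_port):
        """Translate a reply back to the private host behind the gateway."""
        return self.table[dst_port]
```

Because every flow passes through this table, the same mechanism is also a natural place to enforce the access-control and security functions listed above.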

CONCLUSION
Gateways represent a dynamic technology adapting to evolving needs,
ensuring data traverses various networks securely and efficiently.
REFERENCES
 Huitema, C. (2014). IPv6: The New Internet Protocol. John Wiley & Sons.
 Forouzan, B. A. (2007). TCP/IP Protocol Suite. McGraw-Hill Education.

CARRIER SENSE MULTIPLE ACCESS WITH COLLISION DETECTION
(CSMA/CD)
BACKGROUND OF STUDY
CSMA/CD played a crucial role in the development of networking,
particularly in shaping the foundation of Ethernet.

Throughout the 1980s and 1990s, CSMA/CD remained the dominant
protocol for Ethernet, enabling the widespread adoption of LANs.
CONCLUSIONS
Even though CSMA/CD is less common now, its historical influence on
networking cannot be overstated. Understanding its foundations and
historical development teaches us a great deal about network
architecture, media access control, and the continuous development of
communication technology.

CARRIER SENSE MULTIPLE ACCESS WITH COLLISION AVOIDANCE
(CSMA/CA)
BACKGROUND OF STUDY
A media access control (MAC) protocol called CSMA/CA is utilized in
wireless networks such as Bluetooth and Wi-Fi. By reducing data packet
collisions, it makes it possible for numerous devices to share a single
wireless channel effectively. It is essential to understand CSMA/CA in
order to understand wireless network operation and performance.
LITERATURE REVIEW
Collision Avoidance techniques
Preamble: A brief signal broadcast prior to the data itself, allowing
other devices to detect the transmission and set their own backoff timers.
RTS/CTS handshake: When transferring larger amounts of data, this
technique is used to reserve the channel before transmission.
Virtual carrier sensing: Techniques such as Carrier Sense Multiple Access
with Collision Avoidance and Acknowledgment (CSMA/CA-A) are used to
handle the hidden and exposed node problems in multi-hop networks.
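The random-backoff idea underlying these techniques can be illustrated with a toy contention round (a deliberate simplification of the real 802.11 procedure; the function and parameter names are invented): each station draws a random slot from the contention window, and a collision occurs only when the earliest slot is drawn by more than one station.

```python
import random

def contend(n_stations, cw=15, rng=random):
    """One contention round: every station picks a backoff slot in
    [0, cw]; a unique earliest picker transmits, ties collide."""
    backoffs = [rng.randrange(cw + 1) for _ in range(n_stations)]
    earliest = min(backoffs)
    winners = [i for i, b in enumerate(backoffs) if b == earliest]
    return winners[0] if len(winners) == 1 else None  # None = collision
```

Enlarging `cw` after each collision (as real CSMA/CA does) makes ties, and hence repeated collisions, progressively rarer.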
CONCLUSION
CSMA/CA has made an important contribution to the creation of dependable
and effective wireless networks. With ongoing improvements to its core
mechanisms, and when paired with other protocols, it allows smooth
communication in increasingly crowded wireless environments. As technology
develops, future versions of CSMA/CA or other access techniques may appear
to meet the ever-expanding needs of wireless communication.

TOKEN BUS AND TOKEN RING

BACKGROUND OF STUDY
Token Bus and Token Ring were two well-known protocols that
dominated the early days of networking in controlling access to
shared media. Both used a token-passing technique to avoid
collisions and guarantee orderly data transfer. This paper
explores their functions, distinctions, historical background, and
eventual demise, providing important insights into the development of
networking protocols.
LITERATURE REVIEW
Token Bus
 Virtual Ring on Bus Topology: Token Bus constructed a logical ring
structure on a bus or tree topology in place of a real ring.
 Complexity: Maintaining the virtual ring and managing station
membership added complexity.
 Scalability: It could accommodate larger networks, being more
scalable than Token Ring.
 Applications: Because it could provide real-time performance, it was
mostly employed in factory automation and process control.
Token Ring
 Physical Ring Topology: Devices connected in a physical ring passed
the token sequentially from one to the next.
 Simplicity: The physical ring structure made operation and
troubleshooting more straightforward.
 Limited Scalability: Mostly used in smaller office settings, since
ring size hindered scalability.
 Popularity: Promoted by IBM, it initially gained popularity
because of its consistent and dependable performance.
Similarities and differences
 Complexity: Virtual ring management made Token Bus more
difficult.
 Scalability: Token Ring offered simplicity, but Token Bus was more
scalable.
 Performance: Token Bus could offer more bandwidth, while Token Ring
offered steadier performance.
 Cost: Because of cabling and transceivers, Token Bus was more
expensive.
 Applications: Their differing attributes motivated a variety of uses.
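The token-passing discipline both protocols share can be sketched in a few lines: only the station currently holding the token transmits, then hands the token to its (physical or logical) successor, so collisions cannot occur by construction. A toy model with invented names:

```python
from collections import deque

def token_rounds(stations, frames_each, rounds):
    """Circulate the token `rounds` times around the ring; each holder
    sends its queued frames before passing the token on. Returns the
    global transmission order, which is collision-free by design."""
    ring = deque(stations)
    order = []
    for _ in range(rounds):
        for _ in range(len(ring)):
            holder = ring[0]
            order.extend(f"{holder}:{i}" for i in range(frames_each))
            ring.rotate(-1)      # pass the token to the next station
    return order
```

The same code models both protocols: in Token Ring the `deque` mirrors the physical cabling order, while in Token Bus it represents the logical ring imposed on a bus topology.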
CONCLUSION
Although Token Bus and Token Ring represented notable innovations at
the time, their constraints ultimately caused them to fade away.

Complexity: Compared with more straightforward protocols like Ethernet,
both were difficult to implement and manage.
Scalability: Token Bus's virtual ring grew more complex as networks grew,
while Token Ring's physical ring restricted scalability.
The rise of Ethernet: Once Ethernet matured, with improved CSMA/CD and
later enhancements that solved collision issues and offered higher
performance, it was widely adopted.
