Chapter 2 – The Technology

Cavite State University ITEC 110 – System Administration and Maintenance

UNIX AND Z/OS

z/OS is a 64-bit operating system for IBM z/Architecture mainframes, introduced by IBM in
October 2000. It derives from and is the successor to OS/390, which in turn followed a string of
MVS versions. Like OS/390, z/OS combines a number of formerly separate, related products,
some of which are still optional.

The development of Unix started in 1969 at AT&T Bell Labs on a DEC PDP-7, initially by Ken
Thompson and Dennis Ritchie. It was designed to be a portable operating system. The C
programming language was developed in order to rewrite most of the system in a high-level
language, and this contributed greatly to its portability. A myriad of Unix dialects exist today,
e.g. HP-UX, AIX, Solaris, *BSD, GNU/Linux and IRIX.

User interfaces
z/OS generally requires less user intervention than Unix systems, although there is currently an
interest in ‘autonomic’ computing, which aims to make Unix and Windows operating systems less
human-dependent.
In Unix, users have traditionally interacted with the system through a command line interpreter
called a shell. This is still the most powerful interface to many system services, though today
Graphical User Interfaces (GUIs) are growing in popularity for many tasks. A shell is command
driven; the classic example is the Bourne shell ‘sh’, from which most modern shells descend.
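
To make the idea of a command-driven interface concrete, here is a minimal sketch in Python of
the read-run loop at the heart of every shell. It is only an illustration of the concept, not how sh
is actually implemented.

    import shlex
    import subprocess

    while True:
        try:
            line = input("$ ")              # print a prompt and read a command line
        except EOFError:                    # Ctrl-D ends the session
            break
        args = shlex.split(line)            # split the line into command and arguments
        if not args:
            continue                        # empty line: prompt again
        if args[0] == "exit":               # a tiny built-in, as real shells have
            break
        try:
            subprocess.run(args)            # run the command and wait for it to finish
        except FileNotFoundError:
            print(f"{args[0]}: command not found")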

I/O operations
The way I/O operations are managed differs considerably between z/OS and Unix systems. The
z/Architecture is tailor-made for high throughput and performs I/O operations in a very efficient way
[6]. I/O management is offloaded to dedicated processors, called System Assist Processors (SAPs),
and other specialized hardware and software components, while general processors can
concentrate on user-related work in parallel. Large level-two caches, fast data buses and many I/O
interconnects ensure that a large number of I/O operations can be handled simultaneously in
a controlled and efficient manner.

Platform applicability and typical workloads


z/OS is typically used for:
• Handling large numbers of users simultaneously
• Batch processing
• Online transaction processing
• Hosting (multiple) bigger databases
• Enterprise application servers


Unix systems are typically used as:


• Application servers
• Hosting single databases
• Powerful workstations
• Internet and infrastructure services like mail, web servers, DNS and firewalls
• Processing intensive workloads

Linux on the mainframe


Linux is a popular operating system in server environments because of its lightweight kernel and
flexible, modular design. The fact that GNU/Linux is open-source software makes it possible
to customize the kernel for almost any purpose, so it is very popular in academic and
research environments. Linux is widely used in commercial environments as well. The fact that
Linux is cost-effective, stable and portable makes it popular as a server operating system,
including on System z servers.

Linux makes the System z hardware platform an alternative to small distributed servers, and
combines the strengths of Linux with the high availability and scalability characteristics of the
z/Architecture hardware. Linux complements z/OS in supporting diverse workloads on the same
physical box, giving more choices for the z platform. Some of the benefits of running Linux on
System z are better hardware utilization and infrastructure simplification. A System z server is
capable of running hundreds of Linux images simultaneously. z/VM makes it possible to set up
virtual LANs and virtual network switches between the guest operating systems, and data transfer
across these virtual network interconnects is as fast as moving data from one place in memory to
another.


EMAIL

Mail architecture
Mail addresses
The oldest mail address format was the one used in ARPANET, and later also in BITNET:
user@host.

Mail hierarchy and DNS


The solution to the problem of the large namespace came with the invention of DNS, the Domain
Name System.
MX records provide an abstraction layer for email addresses. For instance, mail to
user@some.domain could be directed towards the next hop by way of an MX record, for
example one pointing to relay.some.domain. This relaying machine could accept and store the mail
in order to forward it on to its possibly final destination.
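
As a concrete illustration of this MX indirection, the following minimal sketch looks up the MX
records for a domain. It assumes the third-party dnspython package (not part of the Python
standard library), and some.domain is the hypothetical domain from the example above.

    import dns.resolver   # third-party: pip install dnspython

    # Query the MX records for the (hypothetical) domain from the example
    answers = dns.resolver.resolve("some.domain", "MX")
    for record in sorted(answers, key=lambda r: r.preference):
        # sending MTAs try lower preference values first
        print(record.preference, record.exchange)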

Mail protocols
Now that addressing has been taken care of, we can start looking at the protocols used for mail
transport and pickup. As already mentioned, email became a first-class citizen only after the
specification of SMTP in August 1982 in RFC 821 [37]. This protocol is used for mail transport.
Later on, protocols like POP and IMAP were introduced to facilitate network-based interaction
with mail readers.

(Extended) Simple Mail Transfer Protocol


SMTP was indeed a very simple protocol intended for mail transport. It lacked some of the more
advanced features of the X.400 message service. In the original specification only a few
commands were defined. The most important commands, illustrated in the sketch after this list, are:
• The Hello command (HELO). It is the first SMTP command: it starts the conversation,
identifying the sending server, and is generally followed by its domain name.
• EHLO. An alternative command to start the conversation, indicating that the client is
using the Extended SMTP protocol.
• The MAIL command (MAIL FROM:). With this SMTP command the operations begin: the
sender states the source email address in the “From” field and actually starts the email
transfer.
• One or more RCPT commands (RCPT TO:). This identifies the recipient of the email; if there
is more than one, the command is simply repeated, address by address.
• With the DATA command the email content begins to be transferred; it is generally followed
by a 354 reply code from the server, giving permission to start the actual
transmission.
• The QUIT command is used for graceful termination of the SMTP session.
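
The command sequence above can be driven from Python's standard smtplib module, which
speaks (E)SMTP on the caller's behalf. The following minimal sketch mirrors the list above;
the host names and addresses are illustrative assumptions.

    import smtplib

    with smtplib.SMTP("relay.some.domain", 25) as smtp:   # hypothetical relay host
        smtp.ehlo("client.some.domain")                   # EHLO: start an extended session
        smtp.mail("sender@some.domain")                   # MAIL FROM: envelope sender
        smtp.rcpt("recipient@some.domain")                # RCPT TO: repeat per recipient
        smtp.data("Subject: Test\r\n\r\nHello via SMTP.\r\n")  # DATA: headers + body
        # leaving the 'with' block sends QUIT, ending the session gracefully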

POP and IMAP


POP stands for Post Office Protocol, and was designed as a simple way to access a remote
email server. The most recent version is POP3, which is supported by virtually all email clients
and servers.

POP works by downloading your emails from your provider's mail server, and then marking them
for deletion there. This means you can only ever read those email messages in that email client,
and on that computer. You won't be able to access any previously downloaded emails from any
other device, with any other email client, or through webmail.

Introduction to Networking *Department of Information Technology


Page 3 of 5
Cavite State University ITEC 110 – System Administration and Maintenance-

IMAP stands for Internet Message Access Protocol, and was designed specifically to eliminate
the limitations of POP.

IMAP allows you to access your emails from any client, on any device, and sign in to webmail at
any time; messages stay on the server until you delete them. You'll always see the same emails,
no matter how you access your provider's server.

Since your email is stored on the provider's server and not locally, you may run into email storage
limits when using IMAP.
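
The behavioural difference between the two protocols shows up directly in Python's standard
poplib and imaplib modules, as in this minimal sketch; the server names and credentials are
illustrative assumptions.

    import poplib
    import imaplib

    # POP3: download messages; dele() would mark them for deletion on the server
    pop = poplib.POP3_SSL("pop.some.provider")      # hypothetical server
    pop.user("user@some.provider")
    pop.pass_("secret")
    count, size = pop.stat()                        # number of messages, mailbox size
    for i in range(1, count + 1):
        response, lines, octets = pop.retr(i)       # fetch message i for local storage
        # pop.dele(i)                               # the classic POP delete-after-download
    pop.quit()

    # IMAP: messages stay on the server; every client sees the same mailbox
    imap = imaplib.IMAP4_SSL("imap.some.provider")  # hypothetical server
    imap.login("user@some.provider", "secret")
    imap.select("INBOX")                            # work inside a server-side folder
    status, data = imap.search(None, "UNSEEN")      # find unread messages on the server
    imap.logout()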

Mail Format
An e-mail consists of three parts:
1. Envelope
2. Header
3. Body

1. Envelope: The envelope encapsulates the message. It contains all the information that is
required for sending an e-mail, such as the destination address, priority and security level. The
envelope is used by message transfer agents (MTAs) for routing the message.
2. Header: The header consists of a series of lines. Each header field consists of a single line
of ASCII text specifying a field name, a colon and a value. The main header fields related to message
transport are:
1. To: It specifies the DNS address of the primary recipient(s).
2. Cc: It refers to carbon copy. It specifies the address of secondary recipient(s).
3. Bcc: It refers to blind carbon copy. It is very similar to Cc. The only difference
between Cc and Bcc is that Bcc allows the user to send a copy to a third party without
the primary and secondary recipients knowing about it.
4. From: It specifies the name of the person who wrote the message.
5. Sender: It specifies the e-mail address of the person who sent the message.
6. Received: It records the identity of the sending agent, and the date and time the message
was received. It also contains information which is used to find bugs in the routing system.
7. Return-Path: It is added by the message transfer agent. This field specifies
how to get back to the sender.
3. Body: The body of a message contains the text that is the actual content that needs
to be sent, such as “Employees who are eligible for the new health care program should contact
their supervisors by next Friday if they want to switch.” The message body may also include
signatures or automatically generated text inserted by the sender’s email system.
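
A minimal sketch with Python's standard email package shows how the header fields above are
attached to a message; the names and addresses are illustrative assumptions.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Alice <alice@some.domain>"      # who wrote the message
    msg["To"] = "bob@some.domain"                  # primary recipient
    msg["Cc"] = "carol@some.domain"                # secondary recipient
    # Bcc is deliberately NOT placed in the header the recipients see; it is
    # used only as an envelope recipient when the mail is submitted.
    msg["Subject"] = "Health care program"
    msg.set_content("Employees who want to switch should contact "
                    "their supervisors by next Friday.")

    print(msg)   # shows the header lines (Field: value) followed by the body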



SSL and TLS


SSL stands for Secure Sockets Layer and, in short, it is the standard technology for keeping an
internet connection secure and safeguarding any sensitive data that is being sent between two
systems, preventing criminals from reading or modifying any information transferred, including
potential personal details. The two systems can be a server and a client (for example, a shopping
website and a browser) or server to server (for example, an application with personally identifiable
information or with payroll information).
TLS (Transport Layer Security) is simply an updated, more secure version of SSL. Security
certificates are still commonly called SSL certificates because the term is more widely used, but
when you buy SSL from a vendor such as DigiCert you are actually buying the most up-to-date TLS
certificates, with the option of ECC, RSA or DSA keys.
HTTPS (Hypertext Transfer Protocol Secure) appears in the URL when a website is secured by
an SSL certificate. The details of the certificate, including the issuing authority and the corporate
name of the website owner, can be viewed by clicking on the lock symbol in the browser bar.
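
The following minimal sketch, using Python's standard ssl module, opens a TLS-protected
connection and reads the certificate details mentioned above; www.example.org stands in for
any HTTPS site.

    import socket
    import ssl

    context = ssl.create_default_context()          # verifies certificates by default
    with socket.create_connection(("www.example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.org") as tls:
            print(tls.version())                    # negotiated protocol, e.g. 'TLSv1.3'
            cert = tls.getpeercert()                # the server certificate details
            print(cert["subject"], cert["notAfter"])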


XML-BASED NETWORK MANAGEMENT

XML stands for Extensible Markup Language. A markup language is a set of codes, or tags, that
describes the text in a digital document. The most famous markup language is the Hypertext Markup
Language (HTML), which is used to format Web pages.

Applicability of XML technologies to management tasks


Network management involves the following essential tasks: modeling management information,
instrumenting management information in managed resources, communicating between manager
and agent, analyzing the collected data and presenting the analysis results to users.
Network management systems consist of manager and agent systems, which perform various
management tasks to process management information. An agent’s tasks involve accessing
attributes of managed objects, event reporting and processing management requests. Under
normal conditions an agent’s tasks are simple enough that its management overhead is
negligible. Manager systems, on the other hand, perform complex management tasks to satisfy
management objectives, and they depend on management applications to do so.

Management protocol
A management protocol must deal with the specification of management operations and of the
transport protocol for the exchange of management information. It should also define the syntax
and semantics of the protocol data unit. With respect to management protocols, XML-based
network management follows the model of transferring data over HTTP, and it uses XML as the
management-information encoding syntax. This means management data is carried in the
HTTP payload in the form of an XML document, distributed through HTTP over TCP.
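
As a rough illustration of XML-encoded management data carried in an HTTP payload, here is a
minimal sketch using only Python's standard library. The endpoint URL and element names are
invented for illustration and do not follow any standard management schema.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Build the management request as an XML document
    root = ET.Element("get-request")
    ET.SubElement(root, "object").text = "ifInOctets"      # attribute to retrieve
    body = ET.tostring(root, encoding="utf-8")             # bytes, with XML declaration

    # POST it as the HTTP payload to the (hypothetical) agent endpoint
    req = urllib.request.Request(
        "http://agent.some.domain/management",
        data=body,
        headers={"Content-Type": "application/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = ET.fromstring(resp.read())                 # parse the XML reply
        print(reply.tag)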

Architecture of XML-based manager


The XML-based manager must define management information in addition to the management
information of the XML-based agent, retrieve the management information from the agent, then
analyze the retrieved information and present it to the user. In this section, we explain the
management tasks of the XML-based manager in terms of information modeling, management
protocol, analysis and presentation. Figure 5 illustrates the architecture of an XML-based manager.
The manager includes an HTTP Server and Client, SOAP Server and Client, Management Script,
Management Functions Module, DOM Interface Module, XSLT Processor, XML DB and XSLT
Template Repository.


OPEN TECHNOLOGY

What is open source? Software is open source if its source code is available to the general public
without restrictions that limit studying it, changing it, or improving upon it. In legal terms, open
software is published under an Open-Source license. Several such licenses, and licensing
strategies, exist. The Open-Source Initiative (OSI) is a non-profit organization ‘dedicated to
managing and promoting the Open-Source Definition for the good of the community’ [14]. It
registers, classifies (and certifies) software licenses, and seeks to explicitly define criteria and
metacriteria for Open Source. To do so, OSI publishes a document called ‘The Open-Source
Definition’, based upon work by Bruce Perens.
OSI’s definition demands, paraphrased here:
• No restrictions on redistribution of the software;
• Source code must be included, or at least be easily obtainable, without charge and
without obfuscation;
• Modification and creation of derived work is explicitly permitted;
• Distribution of modified source code may be restricted only if orthogonal modifications
(‘patch files’) are explicitly allowed, and if distribution of software built from modified
source code is explicitly allowed. For such modified software, the license may require
that modifications are clearly reflected in name or version number;
• The license must not discriminate against persons or groups in any way;
• The license may pose no restrictions on the way the software may be used;
• When the software is redistributed, the license still applies and travels along;
• The license is not specific to a particular product; if it is extracted from a particular
distribution and redistributed, the license still applies;
• The license must be technology-neutral, imposing no conditions on the way it is
accepted.
In the case that the design of a particular system is unspecified or undisclosed, an open
implementation, accessible to anyone who wants it, may even play the role of an implicit
specification. The behaviour of the system can be inspected, and its design may, in principle, be
extracted from that particular incarnation.

Examples of open-source licenses


The two most popular licenses, or license schemes, are:
• The BSD License is actually a boilerplate text to serve as a licensing scheme. It is
probably the most senior Open-Source license, and huge numbers of software
distributions, including widely used BSD Unix family members, use it. The BSD license
typically poses no restriction whatsoever on use and modification of the distributed source
code, although it typically does require proper attribution.
• The GNU General Public License (GPL) is a license that ‘is intended to guarantee your
freedom to share and change free software’ [5]. The Linux kernel, as well as a significant
amount of the ‘userland’ software constituting most Linux distributions (or GNU/Linux, as
some would have us say for that reason), is distributed under version 2 of the GPL, and is
likely to stay that way.


SYSTEM BACKUP: METHODOLOGIES, ALGORITHMS AND EFFICIENCY MODELS

Causes of data loss


Data loss can be caused by many different factors, and each poses a unique problem for data
recovery. Hard drive crashes account for the highest percentage of data loss, but human errors
and issues with software follow closely behind. According to data from Kroll Ontrack:

• 67 percent of data loss is caused by hard drive crashes or system failure;
• 14 percent of data loss is caused by human error;
• 10 percent of data loss is a result of software failure.

Awareness of the types of data loss and the risks associated with losing data is essential for
preventing losses that can be a major cost to your business.

1. Human Error
2. Viruses & Malware
3. Hard Drive Damage
4. Power Outages
5. Computer Theft
6. Liquid Damage
7. Disasters
8. Software Corruption
9. Hard Drive Formatting
10. Hackers and Insiders

Critical Factors in Developing Backup Strategies


When you start thinking about your backup strategy, keep these considerations in mind. You’ll
need to balance these factors to come up with a strategy that truly protects your business.

1. Cost. Like everything else, backups cost money. You may have to buy hardware and
software, pay for a maintenance agreement, and train your staff.
2. Backup location. Today, many organizations default to backing up to the cloud. However, you
should still consider keeping a copy of your data in another location as well. Cloud
outages are rare but do happen.
3. Backup method. You can choose from different kinds of backups. Each backup method
requires a different amount of storage, impacting costs, and a different amount of time,
impacting both the length of the backup procedure and the length of the recovery
procedure.
4. Backup (and recovery) flexibility. When creating backups, you generally want to back up
everything, but that’s not true for recovery. Recovery needs to be able to scale from
restoring a single file to restoring an entire server.
5. Backup schedule. Your backups should be automated and run on a schedule, not rely on
someone remembering to execute them manually. They should be scheduled to run
frequently enough that you’ll capture data that changes often as well as data that changes
rarely. They should be scheduled around production workflow needs. Your recovery point
objective and recovery time objective come into play here; note those targets shouldn’t be
global but should be tailored to the needs of each system. Your backup schedule may be
unique to each system as well.


6. Scalability. You can expect your data to grow and your backup needs to grow along with it.
Your backup process should be able to handle expected volumes of new data. You should
have a process that ensures new servers, applications, and data stores are added to your
backups.
7. Backup security. Backups need to be accessible when needed, but they shouldn’t be
accessible by just anyone. Making sure backups are safe from tampering is vital to protect
your business.

What is a Backup Retention Policy? How is it implemented?


A retention policy is a protocol that defines the lifecycle of data in an organization.
This lifecycle describes the following things:
1. For how long the organization will retain a piece of information;
2. How this information will be stored;
3. What data should be stored and why;
4. When to dispose of the particular data.
A retention policy is crucial for businesses of every size. It helps you manage your data and
backups, allowing you to control your records’ growth. Not having this policy will, at the very least,
result in you spending lots of money on storage for unnecessary files. In the worst-case
scenario, not having a retention policy may lead you to break the law by not keeping some data
long enough or keeping it for no good reason.
Also, a thorough backup retention policy helps you to quickly find the information you need so you
can restore it or present it as evidence in a legal case.

To create a data retention policy, you need to know two things:


1. The business needs the retention policy must address for your organization;
2. The compliance regulations regarding data that apply to your organization.
How do you find out these things? By seeking assistance from the legal department and C-level
management of your company.

Company business needs


To function properly, most companies rely on operational day-to-day data such as emails,
spreadsheets, text documents, etc. By backing up this business-critical information, you secure it
against data loss and reduce the potential downtime from disruptions, which usually cost
businesses a fortune.
How long you have to store this data depends solely on your business goals. The mistake
many companies make is keeping this type of data for as long as possible. It feels like a safe
choice, but it only takes up space and computing resources, piling up your storage with
useless information.

To avoid this mistake, answer the following questions:


• What data to keep?
• Why do we need to keep it?
• For how long do we need to keep it?

Backup Retention Policy: Best Practices to Follow


1. Classify data by type and needs
Here is how you need to classify data:


• What data is valuable from the point of view of compliance regulations;
• What data is valuable from the point of view of your business needs;
• What data constitutes public, proprietary, or confidential information.
2. Categorize data by lifecycle
Here is how you can categorize data by their lifecycle:
• Records that should be retained for up to six months;
• Records that should be retained for one year;
• Records that should be retained for up to three years, and so on.
3. Decide what and when to delete
Here are the possible implications you may experience for not deleting data in time:
• Putting your company at risk of legal proceedings and penalties for non-compliance;
• Risking your client’s data security and your reputation;
• Cluttering and overburdening your hardware/software with unnecessary data;
• Spending money on extra storage occupied with data that has no value for the company;
• Making data navigation too complicated.
4. Define the number and type of versions to store
Use these parameters for versioned backups:
By the number of versions to store:
• Additional (inactive) versions
• Last versions of files that have been deleted
By the amount of time to store:
• Existing data
• Deleted data
5. Decide about the types of backups and their frequency
There are three types of backups (see the sketch after this list):
1. Full backup – a full copy of all existing files;
2. Differential backup – a copy of all changes made since the last full backup;
3. Incremental backup – a copy of all changes since the last backup of any kind (full,
differential, or incremental).
6. Choose a compliant and cost-effective data backup service
Here are the key things to consider when choosing a perfect backup provider:
1. Cloud-to-cloud or on-premises. There are many pros and cons to both options, but if you
keep your data in cloud services like Google or Microsoft, cloud-to-cloud backup is generally
the best option for your business.
2. Scalability and flexibility. By this we mean the ability to start with a minimum
number of licenses and scale up as your business grows. Some backup services have
a fixed minimum number of licenses you have to buy to start using the service, which may
be a waste of money for small businesses.
3. Type of backup and restore. For small-to-medium businesses that keep data in the cloud,
a backup service with an incremental-based backup model, granular restore, and version
control will be the best fit.
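
To make the three backup types from point 5 concrete, here is a minimal sketch in Python that
selects files by modification time. Real backup tools track state far more carefully; the paths and
timestamps are illustrative assumptions.

    import os
    import time

    def files_to_copy(root, last_full, last_any, method):
        """Return files under 'root' that the given backup method would copy.

        last_full -- timestamp of the last full backup
        last_any  -- timestamp of the last backup of any kind
        """
        selected = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                mtime = os.path.getmtime(path)
                if method == "full":                 # everything, every time
                    selected.append(path)
                elif method == "differential" and mtime > last_full:
                    selected.append(path)            # changed since the last FULL backup
                elif method == "incremental" and mtime > last_any:
                    selected.append(path)            # changed since the last backup of ANY kind
        return selected

    # Example: what an incremental run tonight would pick up (hypothetical times)
    now = time.time()
    print(files_to_copy(".", last_full=now - 7 * 86400, last_any=now - 86400,
                        method="incremental"))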


INTERNET MANAGEMENT PROTOCOLS

Simple Network Management Protocol (SNMP)


SNMP is the foremost standard management protocol. It queries relevant objects to extract data
from devices attached to a network, such as switches, WLAN controllers, servers, printers, routers
and modems. The collected data is used to develop information for monitoring the performance of
the network, based on interface status, CPU utilization, bandwidth utilization, network latency, etc.
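
A typical SNMP query can be issued in a few lines with the third-party pysnmp package (an
assumption; it is not in the Python standard library). The sketch below reads sysDescr, the
standard device description object; the agent address and community string are illustrative.

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),            # SNMPv2c community string
        UdpTransportTarget(("192.0.2.1", 161)),        # hypothetical agent address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),  # device description
    ))

    if errorIndication:
        print(errorIndication)                         # e.g. timeout: agent unreachable
    else:
        for name, value in varBinds:
            print(f"{name} = {value}")                 # the device's description string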


Internet Control Message Protocol (ICMP)


ICMP is a network monitoring protocol designed specifically for error reporting. Network devices
such as routers use ICMP to send error messages in situations where, for example, a
host/client can’t be reached or requested information is not available. Unlike SNMP, ICMP does
not get involved in the exchange of data within or between systems.
Common error messages that ICMP reports include, but are not limited to, the following (see the
sketch after this list):
• Time to live (TTL) exhaustion message, generated when a packet’s TTL hits 0.
• Source quench message, generated when a recipient notices an unusual increase in
the rate of packet transmission.
• Parameter error message, generated when there is a packet mismatch in the
traffic, halting the reception of unapproved packets.
• Unreachable destination message, sent by a router or destination host when a
destination cannot be reached due to port, link, hardware or any other failure.
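
The sketch below, in Python, maps the messages above to their standard ICMP type numbers and
uses the system ping utility (assumed here to be a Unix-style ping accepting -c) to trigger an
ICMP echo exchange.

    import subprocess

    ICMP_TYPES = {
        0: "Echo reply",
        3: "Destination unreachable",   # port, link or hardware failure
        4: "Source quench",             # sender transmitting too fast (now deprecated)
        8: "Echo request",
        11: "Time exceeded",            # a packet's TTL hit 0 in transit
        12: "Parameter problem",        # problem in a packet's header fields
    }

    def is_reachable(host: str) -> bool:
        """Send one ICMP echo request via the system ping and report success."""
        result = subprocess.run(["ping", "-c", "1", host],
                                capture_output=True, text=True)
        return result.returncode == 0

    print(is_reachable("192.0.2.1"))   # hypothetical address, likely unreachable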
