Chapter 2 - The Technology
z/OS is a 64-bit operating system for IBM z/Architecture mainframes, introduced by IBM in
October 2000. It derives from and is the successor to OS/390, which in turn followed a string of
MVS versions. Like OS/390, z/OS combines a number of formerly separate, related products,
some of which are still optional.
The development of Unix started in 1969 at AT&T Bell Labs on a DEC PDP-7, initially
by Ken Thompson and Dennis Ritchie. It was designed to be a portable operating system. The C
programming language was developed so that most of the system could be rewritten at a high
level, which contributed greatly to its portability. A myriad of Unix dialects exists today, e.g. HP-UX,
AIX, Solaris, *BSD, GNU/Linux and IRIX.
User interfaces
z/OS generally requires less user intervention than Unix systems, although there is currently an
interest in ‘autonomic’ computing which aims to make Unix and Windows operating systems less
human-dependent.
In Unix, users have traditionally interacted with the systems through a command line interpreter
called a shell. This is still the most powerful interface to many system services, though today
Graphical User Interfaces (GUIs) are growing in popularity for many tasks. A shell is
command-driven; the original shell was the Bourne shell, ‘sh’.
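At its core, a shell is a loop that reads a command line, executes it, and shows the result. The following sketch (in Python, purely for illustration; a real shell such as sh adds pipes, variables, globbing and job control) shows that command-driven idea:

```python
# Minimal sketch of the command-driven loop at the heart of a shell.
# Illustrative only: real shells add pipes, globbing, variables, job control.
import subprocess

def run_command(line: str) -> str:
    """Execute one command line and return its standard output."""
    args = line.split()  # naive word splitting; real shells do much more
    result = subprocess.run(args, capture_output=True, text=True)
    return result.stdout

# One iteration of the read-execute-print loop:
print(run_command("echo hello"), end="")
```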
I/O operations
The way I/O operations are managed differs a lot between z/OS and Unix systems. The
z/Architecture is tailor-built for high-throughput and performs I/O operations in a very efficient way
[6]. I/O management is offloaded to dedicated processors, called System Assist Processors (SAP)
and other specialized hardware and software components, while general processors can
concentrate on user-related work in parallel. Large level-two caches, fast data buses and many I/O
interconnects ensure that a large number of I/O operations can be handled simultaneously in
a controlled and efficient manner.
Linux makes the System z hardware platform an alternative to small distributed servers, and
combines the strengths of Linux with the high availability and scalability characteristics of the
z/Architecture hardware. Linux complements z/OS by supporting diverse workloads on the same
physical box, giving more choices for the z platform. Some of the benefits of running Linux on
System z are better hardware utilization and infrastructure simplification. A System z server is
capable of running hundreds of Linux images simultaneously. z/VM makes it possible to set up
virtual LANs and virtual network switches between the guest operating systems, and data transfer
across these virtual network interconnects is as fast as moving data from one place in memory to
another.
Mail architecture
Mail addresses
The oldest mail address format was the one used on ARPANET, and later also on
BITNET: user@host.
Mail protocols
Now that addressing has been taken care of, we can start looking at the protocols used for mail
transport and pickup. As already mentioned, email became a first-class citizen only after the
specification of SMTP in August 1982 in RFC 821 [37]. This protocol is used for mail transport.
Later on, protocols like POP and IMAP were introduced to facilitate network-based interaction
with mail readers.
POP works by downloading your emails from your provider's mail server, and then marking them
for deletion there. This means you can only ever read those email messages in that email client,
and on that computer. You won't be able to access any previously downloaded emails from any
other device, with any other email client, or through webmail.
IMAP stands for Internet Message Access Protocol, and was designed specifically to eliminate
the limitations of POP.
IMAP allows you to access your emails from any client, on any device, and sign in to webmail at
any time, until you delete them. You'll always see the same emails, no matter how you access
your provider's server.
Since your email is stored on the provider's server and not locally, you may run into email storage
limits when using IMAP.
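The difference between the two access models can be sketched with Python's standard poplib and imaplib modules. The host, user and password below are placeholders, and the functions are not run against a live server here; only the library calls themselves are real:

```python
# Hedged sketch of the client side of POP and IMAP, using Python's
# standard poplib and imaplib modules. Credentials are placeholders.
import poplib
import imaplib

def fetch_and_delete_via_pop(host, user, password):
    """POP model: download every message, then mark it for deletion."""
    box = poplib.POP3_SSL(host)
    box.user(user)
    box.pass_(password)
    count, _size = box.stat()
    messages = []
    for i in range(1, count + 1):
        _resp, lines, _octets = box.retr(i)  # download message i
        messages.append(b"\r\n".join(lines))
        box.dele(i)                          # mark for deletion on server
    box.quit()                               # deletions take effect here
    return messages

def read_via_imap(host, user, password):
    """IMAP model: messages stay on the server; any client can view them."""
    box = imaplib.IMAP4_SSL(host)
    box.login(user, password)
    box.select("INBOX", readonly=True)       # nothing is changed on the server
    _status, data = box.search(None, "ALL")
    messages = []
    for num in data[0].split():
        _status, parts = box.fetch(num, "(RFC822)")
        messages.append(parts[0][1])
    box.logout()
    return messages
```

Note how deletion is part of the normal POP flow, while the IMAP function can even open the mailbox read-only: the mail stays on the provider's server.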
Mail Format
An e-mail consists of three parts:
1. Envelope
2. Header
3. Body
1. Envelope: The envelope encapsulates the message. It contains all the information
required for delivering the e-mail, such as the destination address, priority and security level. The
envelope is used by MTAs for routing the message.
2. Header: The header consists of a series of lines. Each header field is a single line
of ASCII text specifying a field name, a colon and a value. The main header fields related to message
transport are:
1. To: The address(es) of the primary recipient(s).
2. Cc: Carbon copy; the address(es) of the secondary recipient(s).
3. Bcc: Blind carbon copy. It is very similar to Cc; the only difference
is that Bcc allows the user to send a copy to a third party without the
primary and secondary recipients knowing about it.
4. From: The name of the person who wrote the message.
5. Sender: The e-mail address of the person who sent the message.
6. Received: Records the identity of the sending host and the date and time the message
was received. It also contains information that is used to find problems in the routing system.
7. Return-Path: Added by the message transfer agent; it specifies
how to get back to the sender.
3. Body: The body of a message contains the actual content that needs
to be sent, such as “Employees who are eligible for the new health care program should contact
their supervisors by next Friday if they want to switch.” The message body also may include
signatures or automatically generated text that is inserted by the sender’s email system.
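The header/body split described above can be illustrated with Python's standard email library; the addresses below are made up:

```python
# Build a message with the header fields discussed above, then serialize it.
# Addresses are fictitious examples.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "hr@example.com"
msg["To"] = "staff@example.com"
msg["Cc"] = "supervisors@example.com"
msg["Subject"] = "Health care program"
msg.set_content(
    "Employees who are eligible for the new health care program should\n"
    "contact their supervisors by next Friday if they want to switch.\n"
)

# as_string() yields the on-the-wire format: header lines, a blank
# separator line, then the body.
raw = msg.as_string()
print(raw)
```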
XML
XML stands for Extensible Markup Language. A markup language is a set of codes, or tags, that
describes the text in a digital document. The most famous markup language is Hypertext Markup
Language (HTML), which is used to format Web pages.
Management protocol
A management protocol must deal with the specifications of management operations and
transport protocol for the exchange of management information. It should also define the syntax
and semantics for the protocol data unit. With respect to management protocols, XML-based
network management follows the model of transferring data over HTTP. Further, it uses XML as
management information encoding syntax. This means management data is transferred over the
HTTP payload in the form of an XML document. The management information in XML document
format is distributed through HTTP over TCP.
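As a sketch, assuming made-up element names and a hypothetical endpoint (there is no single standard schema implied here), a manager could encode a retrieval request as an XML document and carry it in the body of an HTTP POST:

```python
# Encode management data as an XML document, as an XML-based manager
# would place it in an HTTP payload. Element names and the endpoint
# are illustrative assumptions, not a real standard.
import xml.etree.ElementTree as ET

request = ET.Element("get-request")
ET.SubElement(request, "object", name="ifInOctets", instance="eth0")
payload = ET.tostring(request, encoding="unicode")
print(payload)

# Transport side (not executed here): the document travels in the body
# of an HTTP POST over TCP, e.g. with http.client:
#
#   conn = http.client.HTTPConnection("agent.example.com", 8080)
#   conn.request("POST", "/mgmt", body=payload,
#                headers={"Content-Type": "text/xml"})
```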
An XML-based manager retrieves management information from agents, analyses
the retrieved information, and presents it to the user. In this section, we explain the management tasks of
the XML-based manager in terms of information modeling, management protocol, analysis and
presentation. Figure 5 illustrates the architecture of an XML-based manager. The manager includes an HTTP
Server and Client, a SOAP Server and Client, a Management Script, a Management Functions Module, a DOM
Interface Module, an XSLT Processor, an XMLDB and an XSLT Template Repository.
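On the manager side, a retrieved XML response can be walked through the DOM interface before analysis and presentation; the response text below is a made-up sample matching no particular standard:

```python
# Walk a (fictitious) XML management response through the DOM interface,
# extracting attribute and value pairs for presentation.
from xml.dom.minidom import parseString

response_text = (
    '<get-response>'
    '<object name="ifInOctets" instance="eth0">1048576</object>'
    '</get-response>'
)
doc = parseString(response_text)
for node in doc.getElementsByTagName("object"):
    name = node.getAttribute("name")
    value = node.firstChild.data
    print(name, "=", value)
```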
OPEN TECHNOLOGY
What is open source? Software is open source if its source code is available to the general public
without restrictions that limit studying it, changing it, or improving upon it. In legal terms, open
software is published under an Open-Source license. Several such licenses, and licensing
strategies, exist. The Open Source Initiative (OSI) is a non-profit organization ‘dedicated to
managing and promoting the Open-Source Definition for the good of the community’ [14]. It
registers, classifies (and certifies) software licenses, and seeks to explicitly define criteria and
metacriteria for Open Source. To do so, OSI publishes a document called ‘The Open-Source
Definition’, based upon work by Bruce Perens.
OSI’s definition demands, paraphrased here:
• No restrictions on redistribution of the software;
• Source code must be included, or at least be easily obtainable, without charge and
without obfuscation;
• Modification and creation of derived work is explicitly permitted;
• Distribution of modified source code may be restricted only if orthogonal modifications
(‘patch files’) are explicitly allowed, and if distribution of software built from modified
source code is explicitly allowed. For such modified software, the license may require
that modifications are clearly reflected in name or version number;
• The license must not discriminate against persons or groups in any way;
• The license must place no restrictions on how the software may be used;
• When the software is redistributed, the license still applies and travels along;
• The license is not specific to a particular product; if it is extracted from a particular
distribution and redistributed, the license still applies;
• The license must be technology-neutral, imposing no conditions on the way it is
accepted.
When the design of a particular system is unspecified or undisclosed, an open
implementation, accessible to anyone who wants it, may even play the role of an implicit
specification. The behaviour of the system can be inspected, and its design may, in principle, be
extracted from that particular incarnation.
Awareness of the types of data loss and their associated risks is essential for prevention;
data loss can be a major cost to your business. Common causes include:
1. Human Error
2. Viruses & Malware
3. Hard Drive Damage
4. Power Outages
5. Computer Theft
6. Liquid Damage
7. Disasters
8. Software Corruption
9. Hard Drive Formatting
10. Hackers and Insiders
When planning backups, consider the following:
1. Cost. Like everything else, backups cost money. You may have to buy hardware and
software, pay for a maintenance agreement, and train your staff.
2. Backup location. Today, many default their backups to the cloud. However, you should
still consider potentially keeping a copy of your data in another location as well. Cloud
outages are rare but do happen.
3. Backup method. You can choose from different kinds of backups. Each backup method
requires a different amount of storage, impacting costs, and a different amount of time,
impacting both the length of the backup procedure and the length of the recovery
procedure.
4. Backup (and recovery) flexibility. When creating backups, you generally want to backup
everything, but that’s not true for recovery. Recovery needs to be able to scale from
restoring a single file to restoring an entire server.
5. Backup schedule. Your backups should be automated and run on a schedule, not rely on
someone remembering to execute them manually. They should be scheduled to run
frequently enough that you’ll capture data that changes often as well as data that changes
rarely. They should be scheduled around production workflow needs. Your recovery point
objective and recovery time objective come into play here; note those targets shouldn’t be
global but should be tailored to the needs of each system. Your backup schedule may be
unique to each system as well.
6. Scalability. You can expect your data to grow and your backup needs to grow along with it.
Your backup process should be able to handle expected volumes of new data. You should
have a process that ensures new servers, applications, and data stores are added to your
backups.
7. Backup security. Backups need to be accessible when needed, but they shouldn’t be
accessible by just anyone. Making sure backups are safe from tampering is vital to protect
your business.
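Several of the points above (automation, verification against tampering, restoring anything from one file to a whole tree) can be sketched in a small Python backup routine. The paths are throwaway placeholders, and a real job would be triggered by cron or a scheduler rather than run by hand:

```python
# Sketch of a full backup: copy a source tree into a timestamped folder,
# then verify every copied file with a checksum (corruption/tamper check).
import hashlib
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup(source: Path, dest_root: Path) -> Path:
    """Copy `source` into a timestamped folder under `dest_root`,
    verifying each file after the copy."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = dest_root / stamp
    shutil.copytree(source, target)
    for original in source.rglob("*"):
        if original.is_file():
            copy = target / original.relative_to(source)
            assert sha256(copy) == sha256(original), f"mismatch: {copy}"
    return target

# Demo against throwaway directories:
work = Path(tempfile.mkdtemp())
src = work / "data"
src.mkdir()
(src / "payroll.txt").write_text("records\n")
snapshot = backup(src, work / "backups")
print(snapshot / "payroll.txt")
```

Restoring a single file is then just a copy in the other direction, while a whole-server restore copies the entire snapshot folder back.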