
MAHENDRA INSTITUTE OF TECHNOLOGY (AUTONOMOUS)

Mahendhirapuri, Mallasamudram, Namakkal - 637 503

DEPARTMENT OF B.TECH INFORMATION TECHNOLOGY

QUESTION BANK

Academic Year: 2024-2025 (ODD Semester)

Year/Sem.: I/I
Course Code & Title: IT24111 & PROBLEM SOLVING TECHNIQUES USING C
Regulation: R2022

Prepared By: M.R.NITHYAA, AP/AI&ML

Approved By: HoD

UNIT-I

PROBLEM SOLVING ASPECTS

Computers: Hardware - Software - Processor - Memory - I/O Devices - Interface - Programming Languages. Problem Solving Aspects: Algorithms, Pseudo code, Flowchart, Steps in Problem Solving - Simple strategies for developing algorithms (iteration, recursion) - Steps for Creating and Running Programs. Introduction to C Programming - Header Files - Structure of a C Program - Compilation and Linking Processes - Constants, Variables - Data Types.

Part-A (Five Questions)

S.No Questions BTL CO

1 Write a short note on Hardware and Software in Computer. K1 CO1

• Computer hardware includes the physical parts of a computer, such as the case, central processing unit (CPU), random access memory (RAM), monitor, and mouse. The hardware processes the input according to the set of instructions provided by the user and gives the desired output.

• Computer software refers to the collection of instructions, data, or programs that are used to operate computers and execute specific tasks. Software is essential for the functioning of modern computing systems.
2 Define compiler. K1 CO1

• A compiler is software that converts source code into object code.
• It converts a high-level language into machine/binary language.
• Some compilers convert the high-level language to assembly language as an intermediate step, whereas others convert it directly to machine code. This process of converting source code into machine code is called compilation.
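As a brief, hedged illustration of the compilation and linking processes (assuming the GNU toolchain and a hypothetical source file hello.c), the stages can be run separately:

    gcc -E hello.c -o hello.i   # preprocessing: expand header files and macros
    gcc -S hello.i -o hello.s   # compilation: translate C to assembly language
    gcc -c hello.s -o hello.o   # assembly: produce object code
    gcc hello.o -o hello        # linking: combine object code into an executable

A single command, gcc hello.c -o hello, performs all of these stages at once.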
3 Give the structure of a C program. K1 CO1
 Documentation (Documentation Section)
 Preprocessor Statements (Link Section)
 Global Declarations (Definition Section)
 The main() function
o Local Declarations
o Program Statements and Expressions
 User-Defined Functions
A minimal skeleton illustrating these sections is sketched below.
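This sketch is illustrative only; the names PI, count, and square are assumptions, not part of the prescribed answer:

    /* Documentation Section: describes the program's purpose */
    /* Link Section: preprocessor statements */
    #include <stdio.h>

    /* Definition Section: symbolic constants and global declarations */
    #define PI 3.14159
    int count = 0;

    /* Prototype for a user-defined function */
    int square(int n);

    int main(void)
    {
        /* Local declarations */
        int x = 4;
        /* Program statements and expressions */
        printf("Square of %d is %d\n", x, square(x));
        return 0;
    }

    /* User-defined function */
    int square(int n)
    {
        return n * n;
    }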

4 How do you declare and initialize a variable in C? K1 CO1

VARIABLE DECLARATION:
Variable declaration in C tells the compiler about the existence of the
variable with the given name and data type.
data_type variable_name = value;
or
data_type variable_name1, variable_name2;
INITIALIZATION OF VARIABLE:
Initialization of a variable is the process where the user assigns some
meaningful value to the variable when creating the variable.
int var = 10;
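A short hedged example combining both forms (the variable names are illustrative):

    int count;        /* declaration only; value is indeterminate until assigned */
    count = 5;        /* later assignment */

    int var = 10;     /* declaration with initialization */
    float rate = 2.5f;
    char grade = 'A';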

5 Identify the data types of the C programming language. K2 CO1


The data type specifies the size and type of information the variable will
store.
Data Type   Size           Description                                     Example
int         2 or 4 bytes   Stores whole numbers, without decimals          1
float       4 bytes        Stores fractional numbers, containing one or    1.99
                           more decimals. Sufficient for storing 6-7
                           decimal digits
double      8 bytes        Stores fractional numbers, containing one or    1.99
                           more decimals. Sufficient for storing 15
                           decimal digits
char        1 byte         Stores a single character/letter/number, or     'A'
                           ASCII values
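Because the size of int is platform-dependent (2 or 4 bytes, as the table notes), the sizes can be verified with sizeof. A minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* sizeof reports each type's size on the current platform */
        printf("int:    %zu bytes\n", sizeof(int));
        printf("float:  %zu bytes\n", sizeof(float));
        printf("double: %zu bytes\n", sizeof(double));
        printf("char:   %zu bytes\n", sizeof(char));
        return 0;
    }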

Part-B (Three Questions) (13 Marks)

S.No Questions BTL CO

1 Explain in detail about the various types of programming languages. K1 CO1

A programming language is a set of instructions and syntax used to create software programs. It is a formal language that specifies a set of instructions for a computer to perform specific tasks.

1. Low-Level Programming Languages
   1. Machine Language
   2. Assembly Language
2. High-Level Programming Languages
   1. Procedural-Oriented Programming Language
   2. Object-Oriented Programming Language
   3. Functional Programming Language
   4. Problem-Oriented Programming Language
   5. Scripting Programming Language
   6. Artificial Intelligence Programming Language

1. LOW-LEVEL PROGRAMMING LANGUAGES:
 Low-Level Programming Languages are very close to the machine and are also known as Computer-Friendly Languages.
 These are programming languages with little or no abstraction at all.
 Low-Level Programming Languages are the hardest languages for programmers to understand and need a really good knowledge of Computer Architecture and its working.
1.1 MACHINE LANGUAGE:
 Machine Language is also known as the First Generation Programming Language (1GL).
 A computer's circuitry has two states (ON and OFF), which can be defined as 1 and 0; this is called Binary Code.
 A computer just understands the language of 0s and 1s (Binary Code).
 Machine Language doesn't need a compiler, interpreter, or any type of program to convert its code. So, it is the fastest Programming Language.
 However, working with Machine Language is not an easy task, as you need a good understanding of the architecture of the CPU and its working.

1.2 ASSEMBLY LANGUAGE:
 Assembly Language is also known as the Second Generation Programming Language (2GL).
 It is another Low-Level Programming Language and the second closest language to the Computer.
 Assembly Language is slower compared to Machine Language. However, it is very fast when compared to High-Level Programming Languages (like C, C++, Java).
 Unlike Machine Language, Assembly Language needs a program (called an Assembler) to convert its assembly code to machine code.
 Programming in Assembly Language is comparatively much easier than working with Machine Language.

2. HIGH-LEVEL PROGRAMMING LANGUAGES:

 High-Level Programming Languages are also known as human- or programmer-friendly languages.
 In order to run a program written in a high-level language, we need a compiler or interpreter, which converts the code written in the High-Level Language to the Low-Level Language (Assembly Code > Machine Code).
 Since High-Level Programming Languages are very easy to understand and work with, almost all programmers use them for writing code or creating a program.

PROCEDURAL-ORIENTED PROGRAMMING LANGUAGE:

 Procedural-Oriented Programming is also known as Third Generation Programming Language (3GL).
 In Procedural-Oriented Programming, instead of focusing on data, we mainly focus on the procedure of the program.
 The main goal of Procedural-Oriented Programming is to solve a problem; we create a number of statements in order to solve it.
 It uses a Top-Down approach in order to solve any problem.
 It is very important to maintain the order of every step or statement. Therefore, we make use of functions in Procedural-Oriented Programming.

2 Elaborate on the Evolution of storage technology and architecture in detail. K1 CO1

Originally, organizations had centralized computers (mainframes) and information storage devices (tape reels and disk packs) in their data centers.

• The evolution of open systems and the affordability and ease of


deployment that they offer made it possible for business
units/departments to have their own servers and storage.
• In earlier implementations of open systems, the storage was typically
internal to the server.
• The proliferation of departmental servers in an enterprise resulted in
unprotected, unmanaged, fragmented islands of information and
increased operating costs.
• Originally, there were very limited policies and processes for
managing these servers and the data created.
To overcome these challenges, storage technology evolved from non-intelligent internal storage to intelligent networked storage.

Redundant Array of Independent Disks (RAID):


This technology was developed to address the cost, performance, and
availability requirements of data. It continues to evolve today and is used in all
storage architectures such as DAS, SAN, and so on.
Direct-attached storage (DAS):
This type of storage connects directly to a server (host) or a group of servers in a

cluster. Storage can be either internal or external to the server. External DAS
alleviated the challenges of limited internal storage capacity.

Storage area network (SAN):


This is a dedicated, high-performance Fibre Channel (FC) network to facilitate
block-level communication between servers and storage.
Storage is partitioned and assigned to a server for accessing its data.
SAN offers scalability, availability, performance, and cost benefits compared
to DAS.
Network-attached storage (NAS): This is dedicated storage for file-serving
applications. Unlike a SAN, it connects to an existing communication network
(LAN) and provides file access to heterogeneous clients. Because it is
purposely built to provide storage to file server applications, it offers higher
scalability, availability, performance, and cost benefits compared to general-
purpose file servers.
Internet Protocol SAN (IP-SAN): One of the latest evolutions in storage
architecture, IP-SAN is a convergence of technologies used in SAN and NAS.
IP-SAN provides block-level communication across a local or wide area
network (LAN or WAN), resulting in greater consolidation and availability of
data.

3 Describe in detail about the Information Lifecycle Management system. K2 CO1

INFORMATION LIFECYCLE MANAGEMENT:
Information lifecycle management (ILM) is a proactive strategy that enables an
IT organization to effectively manage the data throughout its lifecycle, based
on predefined business policies.

An ILM strategy should include the following characteristics:

Business-centric: It should be integrated with key processes, applications, and


initiatives of the business to meet both current and future growth in
information.
Centrally managed: All the information assets of a business should be under
the purview of the ILM strategy.
Policy-based: The implementation of ILM should not be restricted to a few
departments. ILM should be implemented as a policy and encompass all
business applications, processes, and resources.
Heterogeneous: An ILM strategy should take into account all types of storage
platforms and operating systems.
Optimized: Because the value of information varies, an ILM strategy should
consider the different storage requirements and allocate storage resources based on
the information’s value to the business.
ILM IMPLEMENTATION:
The process of developing an ILM strategy includes four activities, namely classifying, implementing, managing, and organizing:

■ Classifying data and applications on the basis of business rules and policies to enable differentiated treatment of information
■ Implementing policies by using information management tools, starting from the creation of data and ending with its disposal
■ Managing the environment by using integrated tools to reduce operational complexity
■ Organizing storage resources in tiers to align the resources with data classes, and storing information in the right type of infrastructure based on the information's current value

Step 1: The goal is to implement a storage networking environment. Storage architectures offer varying levels of protection and performance, and this acts as a foundation for future policy-based information management in Steps 2 and 3.

Step 2: Takes ILM to the next level, with detailed application or data classification and linkage of the storage infrastructure to business policies. This classification and the resultant policies can be automatically executed using tools for one or more applications, resulting in better management and optimal allocation of storage resources.

Step 3: The implementation is to automate more of the applications or data classification and policy management activities in order to scale to a wider set of enterprise applications.
Part-C (One Question) (15 Marks)
S.No Questions BTL CO

1 i) Summarize the idea of a “Data Centre Environment”. (8) K2 CO1
  ii) Discuss the benefits and the key components of the software-defined data center. (7) K2 CO1
A data center is a facility that provides shared access to applications and data using
a complex network, compute, and storage infrastructure.
EVOLUTION OF THE DATA CENTER TO THE CLOUD
 The fact that virtual cloud DC can be provisioned or scaled down with only

a few clicks is a major reason for shifting to the cloud. In modern data
centers, software-defined networking (SDN) manages the traffic flows via
software.
 Infrastructure as a Service (IaaS) offerings, hosted on private and public
clouds, spin up whole systems on-demand.

TYPES OF DATA CENTERS:


Enterprise data centers are typically constructed and used by a single organization for
their own internal purposes. These are common among tech giants.

Colocation data centers function as a kind of rental property where the space and
resources of a data center are made available to the people willing to rent it.

Managed service data centers offer aspects such as data storage, computing, and other
services as a third party, serving customers directly.

Cloud data centers are distributed and are sometimes offered to customers with the help
of a third-party managed service provider.
BUILDING BLOCKS OF A DATA CENTER :
Data centers are made up of three primary types of components:

Compute, storage, and network.

Apart from these components, support infrastructure is essential to meet the service-level agreements of an enterprise data center.

Data Center Computing


• Servers are the engines of the data center. On servers, the processing and
memory used to run applications may be physical, virtualized, distributed
across containers, or distributed among remote nodes in an edge computing
model.
• Data centers must use processors that are best suited for the task; e.g., general-purpose CPUs may not be the best choice to solve artificial intelligence (AI) and machine learning (ML) problems.
Data Center Storage
 Data centers host large quantities of sensitive information, both for their own purposes and the needs of their customers. Decreasing costs of storage media increase the amount of storage available for backing up the data either locally, remotely, or both.
 Advancements in non-volatile storage media lower data access times.
 In addition, as with anything software-defined, software-defined storage technologies increase staff efficiency in managing a storage system.
Data Center Networks
 Datacenter network equipment includes cabling, switches, routers, and firewalls that connect servers to each other and to the outside world. Properly configured
and structured, they can manage high volumes of traffic without
compromising performance.
 A typical three-tier network topology is made up of core switches at the edge
connecting the data center to the Internet and a middle aggregate layer that
connects the core layer to the access layer where the servers reside.
 Advancements, such as hyper-scale network security and software-defined
networking, bring cloud-level agility and scalability to on-premises networks.

ii) SOFTWARE-DEFINED DATA CENTER (SDDC)


 A traditional data center is a facility where organizational data, applications,
networks, and infrastructure are centrally housed and accessed.
 It is the hub for IT operations and physical infrastructure equipment, including
servers, storage devices, network equipment, and security devices.
Traditional data centers can be hosted:
• On-premise
• With a managed service provider (MSP)

• In the cloud
A software-defined data center is an IT-as-a-Service (ITaaS) platform that services
an organization’s software, infrastructure, or platform needs.
An SDDC can be housed on-premise, at an MSP, and in private, public, or hosted
clouds.
Like traditional data centers, SDDCs also host servers, storage devices, network
equipment, and security devices. You can manage SDDCs from any location, using
remote APIs and Web browser interfaces. SDDCs also make extensive use of
automation capabilities to:
• Reduce IT resource usage
• Provide automated deployment and management for many core
functions

KEY COMPONENTS OF SDDC

• Compute virtualization, where virtual machines (VMs)—including their


operating systems, CPUs, memory, and software—reside on cloud servers.
Compute virtualization allows users to create software implementations of
computers that can be spun up or spun down as needed, decreasing
provisioning time.
• Network virtualization, where the network infrastructure servicing your
VMs can be provisioned without worrying about the underlying hardware.
Network infrastructure needs—telecommunications,
firewalls, subnets, routing, administration, DNS, etc.—are configured
inside your cloud SDDC on the vendor’s abstracted hardware. No network
hardware assembly is required.
• Storage virtualization, where disk storage is provisioned from the SDDC
vendor’s storage pool. You get to choose your storage types, based on
your needs and costs. You can quickly add storage to a VM when needed.
• Management and automation software. SDDCs use management and
automation software to keep business-critical functions working around
the clock, reducing the need for IT manpower. Remote management and
automation is delivered via a software platform accessible from any
suitable location, via APIs or Web browser access.

Benefits of SDDCs
Business agility
An SDDC offers several benefits that improve business agility with a focus on three key
areas:
• Balance
• Flexibility
• Adaptability

Reduced cost
• In general, it costs less to operate an SDDC than housing data in brick-and-mortar
data centers.
• Cloud SDDCs operate similarly to SaaS platforms that charge a recurring monthly
cost.
• This is usually an affordable rate, making an SDDC accessible to all types of
businesses, even those who may not have a big budget for technology
spending.

Increased scalability
By design, cloud SDDCs can easily expand along with your business. Increasing
your storage space or adding functions is usually as easy as contacting the data
facility to get a revised monthly service quote.

UNIT-II

Components of an intelligent storage system; components, addressing, and performance of hard disk drives and solid-state drives; RAID; types of intelligent storage systems; scale-up and scale-out storage architecture.

Part-A (Five Questions)

S.No Questions BTL CO

1 What is an intelligent storage system? K1 CO2

Intelligent storage is a storage system or service that uses AI to


continuously learn and adapt to its hybrid cloud environment to better
manage and serve data. It can be deployed as hardware on-premises, as a
virtual appliance, or as a cloud service. It also features RAID arrays that
provide highly optimized I/O processing capabilities.

2 Define command queuing. K1 CO2


Command queuing is a technique implemented on front-end controllers. It
determines the execution order of received commands and can reduce
unnecessary drive head movements and improve disk performance. When a

command is received for execution, the command queuing algorithms
assign a tag that defines a sequence in which the commands can be
executed.

3 List out the RAID levels. K1 CO2

 RAID 0 - Striped array with no fault tolerance
 RAID 1 - Disk mirroring
 RAID 3 - Parallel access array with dedicated parity disk
 RAID 4 - Striped array with independent disks and a dedicated parity disk
 RAID 5 - Striped array with independent disks and distributed parity
 RAID 6 - Striped array with independent disks and dual distributed parity
 Nested - Combinations of RAID levels. Example: RAID 1 + RAID 0

4 Write about cache mirroring and cache vaulting. K1 CO2


Cache mirroring:
Each write to cache is held in two different memory locations on two independent memory cards. In the event of a cache failure, the write data will still be safe in the mirrored location and can be committed to the disk.
Cache vaulting:
The cache is exposed to the risk of uncommitted data loss due to power failure. This problem can be addressed in various ways: powering the memory with a battery until AC power is restored, or using battery power to write the cache content to the disk.
5 Define scale-up and scale-out storage architecture. K1 CO2
In a scale-up storage architecture, storage drives are added to increase storage capacity and performance.
A scale-out storage architecture uses software-defined storage (SDS) to separate the storage hardware from the storage software, letting the software act as the controller.
Part-B (Three Questions) (13 Marks)

S.No Questions BTL CO

1 Explain key components of an intelligent storage system. K2 CO2

An intelligent storage system consists of four key components: front end,


cache, back end, and physical disks.

[Figure: An intelligent storage system. The host connects over a storage network to the front-end ports; I/O passes through the cache and back-end ports to the physical disks.]

An I/O request from the host at the front-end port is processed through a cache
and the back end, to enable storage and retrieval of data from the physical
disk. A read request can be serviced directly from the cache if the requested
data is found in the cache.

FRONT END
• The front end provides the interface between the storage system and the
host. It consists of two components: front-end ports and front-end
controllers.
• The front-end ports enable hosts to connect to the intelligent storage system. Each front-end port has processing logic that executes the appropriate transport protocol, such as SCSI, Fibre Channel, or iSCSI, for storage connections.
 Front-end controllers route data to and from the cache via the internal data bus.
 When the cache receives write data, the controller sends an acknowledgment message
back to the host. Controllers optimize I/O processing by using command queuing
algorithms.

Front-End Command Queuing

• Command queuing is a technique implemented on front-end controllers. It


determines the execution order of received commands and can reduce
unnecessary drive head movements and improve disk performance.
• When a command is received for execution, the command queuing
algorithms assign a tag that defines a sequence in which commands
should be executed.
• With command queuing, multiple commands can be executed
concurrently based on the organization of data on the disk, regardless of
the order in which the commands were received.
The most commonly used command queuing algorithms are as follows:
First In First Out (FIFO): This is the default algorithm where commands are
executed in the order in which they are received. There is no reordering of
requests for optimization; therefore, it is inefficient in terms of performance.

Seek Time Optimization: Commands are executed based on optimizing
read/write head movements, which may result in a reordering of commands.
Access Time Optimization: Commands are executed based on the
combination of seek time optimization and an analysis of rotational latency
for optimal performance.
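The seek-time-optimization idea can be sketched in C as picking, from the queue, the command whose target track is nearest the current head position. This is a minimal illustration under stated assumptions (the Command structure and next_command function are hypothetical, not a vendor implementation):

    #include <stdlib.h>

    typedef struct {
        int tag;    /* tag assigned when the command was queued */
        int track;  /* target track of the queued command */
    } Command;

    /* Return the index of the queued command closest to the current
       head position, or -1 if the queue is empty. */
    int next_command(const Command *queue, int count, int head_pos)
    {
        int best = -1, best_dist = 0;
        for (int i = 0; i < count; i++) {
            int dist = abs(queue[i].track - head_pos);
            if (best == -1 || dist < best_dist) {
                best = i;
                best_dist = dist;
            }
        }
        return best;
    }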

CACHE
• The cache is semiconductor memory where data is placed temporarily to reduce
the time required to service I/O requests from the host.
• Accessing data from the cache takes less than a millisecond. Write data is placed
in the cache and then written to disk. After the data is securely placed in the cache,
the host is acknowledged immediately.

Structure of Cache:
✓ The cache is organized into pages or slots, which are the smallest units of cache allocation.
✓ The size of a cache page is configured according to the application I/O size. The cache consists of the data store and tag RAM.
✓ The data store holds the data, while tag RAM tracks the location of the data in the data store and on disk.
✓ Entries in tag RAM indicate where data is found in the cache and where the data belongs on the disk. Tag RAM includes a dirty bit flag, which indicates whether the data in cache has been committed to the disk or not.
✓ It also contains time-based information, such as the time of last access, which is used to identify cached information that has not been accessed for a long period and may be freed up.

Cache Implementation

The cache can be implemented as either a dedicated cache or a global cache. With a
dedicated cache, separate sets of memory locations are reserved for reads and
writes. In the global cache, both reads and writes can use any of the available
memory addresses. Cache management is more efficient in a global cache
implementation, as only one global set of addresses has to be managed.

BACK END:
• The back end provides an interface between the cache and the physical disks. It consists of two components: back-end ports and back-end controllers.
• The back end controls data transfers between cache and the physical disks.
From cache,
data is sent to the back end and then routed to the destination disk. Physical
disks are connected to ports on the back end.
• The back-end controller communicates with the disks when performing
reads and writes
and also provides additional, but limited, temporary data storage.

PHYSICAL DISK:

 A physical disk stores data persistently.


 Disks are connected to the back end with either SCSI or a Fibre
Channel interface.
 An intelligent storage system enables the use of a mixture of SCSI or Fibre
Channel drives and IDE/ATA drives.
Logical Unit Number

 Physical drives or groups of RAID-protected drives can be logically split into


volumes known as logical volumes, commonly referred to as Logical Unit
Numbers (LUNs).

 The use of LUNs improves disk utilization.

 For example, without the use of LUNs, a host requiring only 200 GB could be
allocated an entire 1TB physical disk. Using LUNs, only the required 200 GB
would be allocated to the host, allowing the remaining 800 GB to be allocated to
other hosts.

 LUNs 0 and 1 are presented to hosts 1 and 2, respectively, as physical volumes


for storing and retrieving data. The usable capacity of the physical volumes is
determined by the RAID type of the RAID set.

 The capacity of a LUN can be expanded by aggregating other LUNs with it.
The result of this aggregation is a larger capacity LUN, known as a meta-LUN. The mapping of LUNs to their physical location on the drives is managed by the operating environment of an intelligent storage system.

2 Discuss in detail about the Disk Drive Components. K2 CO2


 A disk drive uses a rapidly moving arm to read and write data across a flat
platter coated with magnetic particles. Data is transferred from the magnetic
platter through the R/W head to the computer.

 Several platters are assembled together with the R/W head and controller, most commonly referred to as a hard disk drive (HDD).

Key components of a disk drive are platter, spindle, read/write head, actuator arm
assembly, and controller

PLATTER:

 A typical HDD consists of one or more flat circular disks called platters (Figure
2-3). The data is recorded on these platters in binary codes (0s and 1s).

 The set of rotating platters is sealed in a case, called a Head Disk Assembly
(HDA). A platter is a rigid, round disk coated with magnetic material on both
surfaces (top and bottom).

 The data is encoded by polarizing the magnetic area, or domains, of the disk
surface. Data can be written to or read from both surfaces of the platter.

 The number of platters and the storage capacity of each platter determine the
total capacity of the drive.

SPINDLE
✓ A spindle connects all the platters, as shown in Figure 2-3, and is
connected to a motor. The motor of the spindle rotates with a
constant speed.
✓ The disk platter spins at a speed of several thousand revolutions per minute (rpm). Disk drives have spindle speeds of 7,200 rpm, 10,000 rpm, or 15,000 rpm. Disks used on current storage systems have a platter diameter of 3.5” (90 mm).
✓ When the platter spins at 15,000 rpm, the outer edge is moving at around 25 percent of the speed of sound.

READ/WRITE HEAD
✓ Read/Write (R/W) heads, shown in Figure 2-4, read and write
data from or to a platter.
✓ Drives have two R/W heads per platter, one for each surface of the platter.
✓ The R/W head changes the magnetic polarization on the surface of the platter when writing data. While reading data, this head detects the magnetic polarization on the surface of the platter.
✓ During reads and writes, the R/W head senses the magnetic
polarization and never touches the surface of the platter. When the
spindle is rotating, there is a microscopic air gap between the R/W
heads and the platters, known as the head flying height.
✓ This air gap is removed when the spindle stops rotating and the R/W
head rests on a special area on the platter near the spindle. This area is
called the landing zone. The landing zone is coated with a lubricant

to reduce friction between the head and the platter.
✓ The logic on the disk drive ensures that heads are moved to the landing
zone before they touch the surface. If the drive malfunctions and the
R/W head accidentally touches the surface of the platter outside the
landing zone, a head crash occurs.

ACTUATOR ARM ASSEMBLY:


The R/W heads are mounted on the actuator arm assembly, which positions the
R/W head at the location on the platter where the data needs to be written or read.
The R/W heads for all platters on a drive are attached to one actuator arm assembly
and move across the platters simultaneously.
CONTROLLER:
The controller is a printed circuit board, mounted at the bottom of a disk drive. It consists of a microprocessor, internal memory, circuitry, and firmware.
 The firmware controls power to the spindle motor and the speed
of the motor. It also manages communication between the drive and
the host.
 In addition, it controls the R/W operations by moving the actuator
arm and switching between different R/W heads and performing the
optimization of data access.

3 Describe the two types of RAID implementation and Array Components in detail. K2 CO2

RAID is a way of storing the same data in different places on multiple


hard disks or solid-state drives (SSDs) to protect data in the case of a
drive failure.

There are two types of RAID implementation, hardware and software.

Software RAID

✓ Software RAID uses host-based software to provide RAID functions.

✓ It is implemented at the operating-system level and does not use a


dedicated hardware controller to manage the RAID array.

✓ Software RAID implementations offer cost and simplicity benefits when compared with hardware RAID. However, they have the following limitations:

✓ Performance: Software RAID affects overall system performance.


This is due to the additional CPU cycles required to perform RAID
calculations.

✓ Supported features: Software RAID does not support all RAID levels.

✓ Operating system compatibility: Software RAID is tied to the host operating system; hence, upgrades to software RAID or to the operating system should be validated for compatibility. This leads to inflexibility in the data processing environment.

Hardware RAID

✓ In hardware RAID implementations, a specialized hardware controller is implemented either on the host or on the array. These implementations vary in the way the storage array interacts with the host.

✓ Controller card RAID is host-based hardware RAID implementation in


which a specialized RAID controller is installed in the host and HDDs are
connected to it.

✓ The RAID controller interacts with the hard disks using a PCI bus. Manufacturers also integrate RAID controllers on motherboards. This integration reduces the overall cost of the system, but does not provide the flexibility required for high-end storage systems.

✓ The external RAID controller is an array-based hardware RAID. It acts as


an interface between the host and disks. It presents storage volumes to the
host, which manage the drives using the supported protocol. Key functions
of RAID controllers are:

■ Management and control of disk aggregations
■ Translation of I/O requests between logical disks and physical disks
■ Data regeneration in the event of disk failures

RAID Array Components


✓ A RAID array is an enclosure that contains a number of HDDs and the supporting hardware and software to implement RAID. HDDs inside a RAID array are usually contained in smaller sub-enclosures.

✓ These sub-enclosures, or physical arrays, hold a fixed number of HDDs,


and may also include other supporting hardware, such as power supplies.
A subset of disks within a RAID array can be grouped to form logical
associations called logical arrays, also known as a RAID set or a RAID
group (see Figure 3-1).

✓ Logical arrays consist of logical volumes (LVs). The operating system recognizes the LVs as if they are physical HDDs managed by the RAID controller.

✓ The number of HDDs in a logical array depends on the RAID level used.
Configurations could have a logical array with multiple physical arrays or
a physical array with multiple logical arrays.

Part-C (One Question) (15 Marks)

S.No Questions BTL CO

1 i) Discuss the steps involved in the various RAID level models. (10) K2 CO2
  ii) Explain the read and write operations performed in cache memory. (5) K2 CO2
i) RAID levels are defined based on striping, mirroring, and parity techniques. These techniques
determine the data availability and performance characteristics of an array.

RAID 0: Striping
• RAID 0, also known as a striped set or a striped volume, requires a minimum of two
disks. The disks are merged into a single large volume where data is stored evenly
across the number of disks in the array.
• This process is called disk striping and involves splitting data into blocks and
writing it simultaneously/sequentially on multiple disks. Therefore, RAID 0 is

generally implemented to improve speed and efficiency.

Advantages of RAID 0
• Cost-efficient and straightforward to implement.
• Increased read and write performance.
• No overhead (total capacity use).
Disadvantages of RAID 0
• Doesn't provide fault tolerance or redundancy.
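The striping arithmetic can be sketched in C: each logical block rotates across the member disks. This is a hedged illustration of the idea only; real controllers stripe in larger units:

    typedef struct {
        int disk;    /* which member disk holds the block */
        int offset;  /* block offset within that disk */
    } Location;

    /* Map a logical block number to its disk and per-disk offset in RAID 0. */
    Location raid0_map(int logical_block, int num_disks)
    {
        Location loc;
        loc.disk   = logical_block % num_disks;  /* blocks rotate across disks */
        loc.offset = logical_block / num_disks;  /* stripe row on each disk */
        return loc;
    }

For example, with four disks, logical block 9 lands on disk 1 at offset 2.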

RAID 1: Mirroring
✓ RAID 1 is an array consisting of at least two disks where the same data is stored on
each to ensure redundancy. The most common use of RAID 1 is setting up a mirrored
pair consisting of two disks in which the contents of the first disk is mirrored in the
second. This is why such a configuration is also called mirroring.

Advantages of RAID 1
• Increased read performance.
• Provides redundancy and fault tolerance.
• Simple to configure and easy to use.

Disadvantages of RAID 1
• Uses only half of the storage capacity.
• More expensive (needs twice as many drivers).
• Requires powering down your computer to replace the failed drive.

RAID 2: Bit-Level Striping with Dedicated Hamming-Code Parity

It combines bit-level striping with error checking and information correction. This RAID
implementation requires two groups of disks – one for writing the data and another for writing
error correction codes. RAID 2 also requires a special controller for the synchronized
spinning of all disks.

Advantages of RAID 2
• Reliability.
• The ability to correct stored information.
Disadvantages of RAID 2
• Expensive.
• Difficult to implement.
• Requires entire disks for ECC.

RAID 3: Bit-Level Striping with Dedicated Parity


✓ This RAID implementation utilizes bit-level striping and a dedicated parity
disk. Because of this, it requires at least three drives, where two are used for
storing data strips, and one is used for parity.
✓ To allow synchronized spinning, RAID 3 also needs a special controller. Due to
its configuration and synchronized disk spinning, it achieves better performance
rates with sequential operations than random read/write operations.

Advantages of RAID 3
• Good throughput when transferring large amounts of data.
• High efficiency with sequential operations.
• Disk failure resiliency.
Disadvantages of RAID 3
• Not suitable for transferring small files.
• Complex to implement.
• Difficult to set up as software RAID.

RAID 4: Block-Level Striping with Dedicated Parity


RAID 4 is another unpopular standard RAID level. It consists of block-level data striping across two or more independent disks and a dedicated parity disk.

Advantages of RAID 4

• Fast read operations.

• Low storage overhead.

• Simultaneous I/O requests.

Disadvantages of RAID 4

• Bottlenecks that have a big effect on overall performance.


• Slow write operations.
• Redundancy is lost if the parity disk fails.

RAID 5: Striping with Parity


RAID 5 is considered the most secure and most common RAID implementation. It combines
striping and parity to provide a fast and reliable setup. Such a configuration gives the user
storage usability as with RAID 1 and the performance efficiency of RAID 0.

Parity bits are distributed evenly on all disks after each sequence of data has been saved.
Advantages of RAID 5
• High performance and capacity.
• Fast and reliable read speed.
• Tolerates single drive failure.
Disadvantages of RAID 5
• Longer rebuild time.
• One disk's worth of capacity is used for parity.
• If more than one disk fails, data is lost.
• More complex to implement.
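The parity idea behind RAID 5 can be sketched with XOR: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XORing the survivors. A minimal sketch (the block size is an illustrative assumption):

    #define BLOCK_SIZE 512

    /* Compute the parity block for one stripe of n data blocks. Rebuilding
       a failed block uses the same routine: XOR the parity with the
       surviving data blocks. */
    void compute_parity(unsigned char blocks[][BLOCK_SIZE], int n,
                        unsigned char parity[BLOCK_SIZE])
    {
        for (int b = 0; b < BLOCK_SIZE; b++) {
            parity[b] = 0;
            for (int i = 0; i < n; i++)
                parity[b] ^= blocks[i][b];
        }
    }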

RAID 6: Striping with Double Parity


• RAID 6 is an array similar to RAID 5 with an addition of its double parity feature.

For this reason, it is also referred to as the double-parity RAID.
• Block-level striping with two parity blocks allows two disk failures before any data
is lost. This means that in an event where two disks fail, RAID can still reconstruct
the required data.

Advantages of RAID 6
• High fault and drive-failure tolerance.
• Storage efficiency (when more than four drives are used).
• Fast read operations.
Disadvantages of RAID 6
• Rebuild time can take up to 24 hours.
• Slow write performance.
• Complex to implement.
• More expensive.

RAID 10: Mirroring with Striping


• RAID 10 is part of a group called nested or hybrid RAID, which means it is a
combination of two different RAID levels. In the case of RAID 10, the array
combines level 1 mirroring and level 0 striping. This RAID array is also known as
RAID 1+0.
RAID 10 uses logical mirroring to write the same data on two or more drives to provide redundancy. If one disk fails, there is a mirrored image of the data stored on another disk.

Advantages of RAID 10
• High performance.
• High fault-tolerance.
• Fast read and write operations.
• Fast rebuild time.
Disadvantages of RAID 10
• Limited scalability.
• Costly (compared to other RAID levels).
• Uses half of the disk space capacity.
• More complicated to set up.
ii) Read Operation with Cache

✓ When a host issues a read request, the front-end controller accesses the tag RAM to determine whether the required data is available in the cache.
✓ If the requested data is found in the cache, it is called a read cache hit or
read hit and data is sent directly to the host, without any disk operation.
This provides a fast response time to the host (about a millisecond).
✓ If the requested data is not found in the cache, it is called a cache miss and
the data must be read from the disk.
✓ The back-end controller accesses the appropriate disk and retrieves the
requested data. Data is then placed in the cache and is finally sent to the
host through the front-end controller. Cache misses increase I/O response
time.
✓ A pre-fetch, or read-ahead, algorithm is used when read requests are
sequential. In a sequential read request, a contiguous set of associated
blocks is retrieved. Several other blocks that have not yet been requested
by the host can be read from the disk and placed into the cache in advance.
✓ The intelligent storage system offers fixed and variable pre-fetch sizes.
✓ In fixed pre-fetch, the intelligent storage system pre-fetches a fixed amount
of data. It is most suitable when I/O sizes are uniform.
✓ In variable pre-fetch, the storage system pre-fetches an amount of data in multiples of the size of the host request.
✓ Read performance is measured in terms of the read hit ratio, or the hit
rate, usually expressed as a percentage.
This ratio is the number of read hits with respect to the total number of read requests. A
higher read-hit ratio improves the read performance.
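As a hedged illustration, the ratio is straightforward to compute:

    /* Read hit ratio as a percentage of total read requests. */
    double read_hit_ratio(unsigned long read_hits, unsigned long total_reads)
    {
        return total_reads ? (100.0 * read_hits) / total_reads : 0.0;
    }
    /* e.g., 850 hits out of 1,000 read requests is an 85% hit ratio */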

Write Operation with Cache:

Write operations with cache provide performance advantages over writing directly to
disks. When an I/O is written to the cache and acknowledged, it is completed in far
less time (from the host’s perspective) than it would take to write directly to disk
• Write-back cache: Data is placed in the cache and an acknowledgment is sent to the
host immediately. Later, data from several writes are committed (de-staged) to the
disk. Write response times are much faster, as the write operations are isolated from
the mechanical delays of the disk. However, uncommitted data is at risk of loss in the
event of cache failures.

• Write-through cache: Data is placed in the cache and immediately written to the
disk, and an acknowledgment is sent to the host. Because data is committed to disk as
it arrives, the risks of data loss are low but write response time is longer because of the
disk operations.
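The two policies differ only in when the disk write and the host acknowledgment happen. A minimal C sketch of the control flow (cache_put, disk_write, and ack_host are illustrative stubs, not a real storage API):

    #include <stdio.h>

    static void cache_put(const char *d)  { printf("cache  <- %s\n", d); }
    static void disk_write(const char *d) { printf("disk   <- %s\n", d); }
    static void ack_host(void)            { printf("ack host\n"); }

    /* Write-through: acknowledge only after the disk write completes. */
    void write_through(const char *data)
    {
        cache_put(data);
        disk_write(data);
        ack_host();
    }

    /* Write-back: acknowledge as soon as the data is in cache;
       the cached data is de-staged to disk later, in batches. */
    void write_back(const char *data)
    {
        cache_put(data);
        ack_host();
    }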

The cache can be bypassed under certain conditions, such as very large size write I/O.
In this implementation, if the size of an I/O request exceeds the pre-defined size, called
write aside size, writes are sent to the disk directly to reduce the impact of large writes
consuming a large cache area.

UNIT-III

STORAGE NETWORKING TECHNOLOGIES AND VIRTUALIZATION

Block-Based Storage System, File-Based Storage System, Object-Based and Unified Storage. Fibre Channel
SAN: Software-defined networking, FC SAN components and architecture, FC SAN topologies, link

aggregation, and zoning, Virtualization in FC SAN environment. Internet Protocol SAN: iSCSI protocol,
network components, and connectivity, Link aggregation, switch aggregation, and VLAN, FCIP protocol,
connectivity, and configuration.

Part-A (Five Questions)

S.No Questions BTL CO

1 List the types of storage systems. K1 CO3


Different types of storage systems are as follows:
 Block-Based Storage System – Examples: SAN (Storage Area Network), iSCSI, and local disks.
 File-Based Storage System – Examples: NTFS (New Technology File System), FAT (File Allocation Table), EXT (Extended File System).
 Object-Based Storage System – Examples: Google Cloud Storage, Amazon Simple Storage Service.
 Unified Storage System – Examples: Dell EMC Unity XT All-Flash Unified Storage and Dell EMC Unity XT Hybrid Unified Storage.
2 State the connectivity of iSCSI protocol. K1 CO3
Native iSCSI connectivity - Native topologies do not have any FC components; they perform all communication over IP. The initiators may be either directly attached to targets or connected using standard IP routers and switches.

Bridged iSCSI connectivity - Bridged topologies enable the co-existence of FC with IP by providing iSCSI-to-FC bridging functionality. For example, the initiators can exist in an IP environment while the storage remains in an FC SAN.
3 What is meant by a file-based storage system? K1 CO3

 File storage, also called file-level or file-based storage, stores data in a hierarchical structure. The data is saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format.
 Data can be accessed using the Network File System (NFS) protocol for Unix or Linux, or the Server Message Block (SMB) protocol for Microsoft Windows.

4 Differentiate between multimode fiber (MMF) cable and single-mode fiber (SMF) cable. K1 CO3

Multimode fiber (MMF) cable                      Single-mode fiber (SMF) cable

Carries multiple beams of light projected at     Carries a single ray of light projected at
different angles simultaneously onto the core    the center of the core. The small core and
of the cable.                                    the single light wave help to limit modal
                                                 dispersion.

In an MMF transmission, multiple light beams     Provides minimum signal attenuation over a
traveling inside the cable tend to disperse      maximum distance (up to 10 km).
and collide. This collision weakens the signal
strength after it travels a certain distance,
a process known as modal dispersion.

5 Define link aggregation. K1 CO3

Link aggregation allows combining multiple Ethernet links into a single logical link between two networked devices. Link aggregation is sometimes called by other names: Ethernet bonding or Ethernet teaming.
Link aggregation provides greater bandwidth between the devices at each
end of the aggregated link.

Part-B (Three Questions) (13 Marks)

S.No Questions BTL CO

1 Explain in detail about the Block-based Storage System. K1 CO3


• Block storage is for flexible, fast access

• Block storage is a form of cloud storage that is used to store data,


often on storage area networks (SANs).

• Data is stored in blocks, with each block stored separately wherever it is most efficient.

• Each block is assigned a unique address, which is then used by a


management application controlled by the server's operating system to
retrieve and compile data into files upon request.

• Block storage offers efficiency due to the way blocks can be


distributed across multiple systems and even configured to work with
different operating systems.

• This makes using block storage quite similar to storing data on a hard
drive within a server, except the data is stored in a remote location
rather than on local hardware.

Working of Block Storage:

• A block is a fixed-size amount of memory within storage media that’s


capable of storing a piece of data. The size of each block is determined
by the management system.

• The block size is generally too small to fit an entire piece of data, and
so the data for any particular file is broken up into numerous blocks for
storage.

• Each block is given a unique identifier without any higher-level


metadata; details such as data format, type, and ownership are not noted.

• The operating system allocates and distributes blocks across the


storage network to balance efficiency and functionality.

• When a file is requested, the management application uses addresses to
identify the necessary blocks and then compiles them into the complete
file for use.

• By enabling storage across multiple environments, block storage


separates data from the limitations of individual user environments. As a
result, data can be retrieved through any number of paths to maximize
efficiency, with high input/output operations per second (IOPS).
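A minimal sketch of the idea in C: a file's data is split into fixed-size blocks, each carrying only a unique address, and a per-file map records the ordered addresses used to reassemble it. All names and sizes here are illustrative assumptions:

    #include <string.h>

    #define BLOCK_SIZE 4096
    #define MAX_BLOCKS 1024

    typedef struct {
        unsigned long id;         /* unique block address; no file metadata */
        char data[BLOCK_SIZE];
    } Block;

    typedef struct {
        unsigned long block_ids[MAX_BLOCKS];  /* ordered block addresses */
        int count;
    } FileMap;

    /* Split a buffer into blocks and record their addresses in the map. */
    int split_into_blocks(const char *data, size_t len, Block *pool, FileMap *map)
    {
        int n = 0;
        for (size_t off = 0; off < len && n < MAX_BLOCKS; off += BLOCK_SIZE, n++) {
            size_t chunk = (len - off < BLOCK_SIZE) ? len - off : BLOCK_SIZE;
            pool[n].id = (unsigned long)n;          /* assign a unique address */
            memcpy(pool[n].data, data + off, chunk);
            map->block_ids[n] = pool[n].id;
        }
        map->count = n;
        return n;
    }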

Benefits of block storage

• High efficiency: Block storage’s high IOPS and low latency make it
ideal for applications that demand high performance.

• Compatibility: Block storage works across different operating systems


and file systems, making it compatible for enterprises whatever their
configuration and environment.

• Flexibility: With block storage, horizontal scaling is extremely flexible.


Cluster nodes can be added as needed, allowing for greater overall
storage capability.

• Large file efficiency: For large files, such as archives and video files, data must be completely overwritten when using file or object storage, whereas block storage can update only the blocks that change.

Limitations of block storage

• Greater cost: While block storage is easily scalable, it can also be


expensive due to the cost of SANs. In addition, managing block storage
requires more-specialized training for management and maintenance,
increasing the overall expense.

• Performance limitations: With block storage, metadata is built in and


hierarchical, and it is defined by the file system. Because data is broken
up into blocks, searching for a complete file requires the proper
identification of all its pieces. This can create performance issues for
operations accessing the metadata, particularly with folders featuring a
large number of files.

Block storage use cases:

• Containers: Block storage supports the use of container platforms


such as Kubernetes, creating a block volume that enables persistent
storage for the entire container. This allows for the clean management
and migration of containers as needed.

• Email servers: Email servers can take advantage of block storage’s
flexibility and scalability. In fact, in the case of Microsoft Exchange,
block storage is required due to the lack of support for network-attached
storage.

• Databases: Block storage is fast, efficient, flexible, and scalable, with


support for redundant volumes. This allows it to support databases,
particularly those that handle a heavy volume of queries and where
latency must be minimized.

• Disaster recovery: Block storage can be a redundant backup solution


for nearline storage and quick restoration, with data swiftly moved from
backup to production through easy access.

Need for block storage :

• Block storage continues to be an efficient and flexible cloud storage option for enterprises that require high-performance workloads or need to manage large files.

2 Discuss the various FC topologies in detail. K1 CO3


• Fabric design follows standard topologies to connect devices.
• Core-edge fabric is one of the popular topology designs.

Core-Edge Fabric
In the core-edge fabric topology, there are two types of switch tiers in this
fabric.

• The edge tier usually comprises switches and offers an inexpensive


approach to adding more hosts in a fabric.

• The tier at the edge fans out from the tier at the core. The nodes on
the edge can communicate with each other.

• The core tier usually comprises enterprise directors that ensure high
fabric availability.

• All traffic has to either traverse through or terminate at this tier. In a two-tier configuration, all storage devices are connected to the core tier, facilitating fan-out.

The host-to-storage traffic has to traverse one and two ISLs in a two-tier and
three-tier configuration, respectively.

Hosts used for mission-critical applications can be connected directly to the


core tier and consequently avoid traveling through the ISLs to process I/O
requests from these hosts.

The core-edge fabric topology increases connectivity within the SAN


while conserving overall port utilization. If expansion is required, an additional
edge switch can be connected to the core.

This topology can have different variations. In a single-core topology, all


hosts are connected to the edge tier and all storage is connected to the core tier.

A dual-core topology can be expanded to include more core switches.

However, to maintain the topology, it is essential that new ISLs are created to
connect each edge switch to the new core switch that is added.

Benefits and Limitations of Core-Edge Fabric
 The core-edge fabric provides one-hop storage access to all storage in the system. Because traffic travels in a deterministic pattern, a core-edge fabric provides easier calculation of ISL loading and traffic patterns.

 Because each tier's switches are used for either storage or hosts, one can easily identify which resources are approaching their capacity, making it easier to develop a set of rules for scaling and apportioning.

 Core-edge fabrics can be scaled to larger environments by linking core


switches, adding more core switches, or adding more edge switches.

 This method can be used to extend the existing simple core-edge


model or to expand the fabric into a compound or complex core-edge
model.

 The core-edge fabric may lead to some performance-related problems


because scaling a core-edge topology involves increasing the number of
ISLs in the fabric.

 The domain count in the fabric increases. A common best practice is to


keep the number of host-to-storage hops unchanged, at one hop, in a
core-edge.

 Hop count represents the total number of devices a given piece of data
(packet) passes through.

 A larger hop count means a greater transmission delay as data traverses from its source to its destination.

As the number of cores increases, it may be prohibitive to continue to


maintain ISLs from each core to each edge switch. When this happens, the
fabric design can be changed to a compound or complex core-edge design.

Mesh Topology
 In a mesh topology, each switch is directly connected to other switches
by using ISLs. This topology promotes enhanced connectivity within the
SAN.

 When the number of ports on a network increases, the number of


nodes that can participate and communicate also increases.

 A mesh topology may be one of two types: full mesh or partial mesh. In a full mesh, every switch is connected to every other switch in the topology. A full mesh topology may be appropriate when the number of switches involved is small.

 A typical deployment would involve up to four switches or directors, with each of them servicing highly localized host-to-storage traffic.

 In a full mesh topology, a maximum of one ISL or hop is required for host-to-storage traffic. In a partial mesh topology, several hops or ISLs may be required for the traffic to reach its destination.

 Hosts and storage can be located anywhere in the fabric, and storage
can be localized to a director or a switch in both mesh topologies.

 A full mesh topology with a symmetric design results in an even number of switches, whereas a partial mesh has an asymmetric design and may result in an odd number of switches (see the ISL-count sketch below).
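The ISL count behind this observation is standard graph arithmetic (not a figure from the text): a full mesh of n switches requires n(n − 1)/2 ISLs, so the cost of connecting every switch to every other switch grows quadratically. A quick C sketch:

#include <stdio.h>

int main(void) {
    /* Number of ISLs in a full mesh of n switches: n*(n-1)/2. */
    for (int n = 2; n <= 8; n++)
        printf("%d switches -> %2d ISLs in a full mesh\n", n, n * (n - 1) / 2);
    return 0;
}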

3 Explain in detail the components and architecture of FCoE. K1 CO3


FCoE SAN COMPONENTS
The key FCoE SAN components are:

 Network adapters such as Converged Network Adapter (CNA) and software FCoE adapter

 Cables such as copper cables and fiber optical cables

 FCoE switch

Converged Network Adapter (CNA)


 The CNA is a physical adapter that provides the functionality of both a
standard NIC and an FC HBA in a single device.

 It consolidates both FC traffic and regular Ethernet traffic on a common Ethernet infrastructure.

 It encapsulates FC traffic into Ethernet frames and forwards them to FCoE switches over CEE links.

 They eliminate the need to deploy separate adapters and cables for FC
and Ethernet communications, thereby reducing the required number
of network adapters and switch ports.

 A CNA offloads the FCoE protocol processing task from the compute
system, thereby freeing the CPU resources of the compute system for
application processing.

 It contains separate modules for 10 Gigabit Ethernet (GE), FC, and FCoE Application Specific Integrated Circuits (ASICs).

Software FCoE Adapter


 Instead of a CNA, a software FCoE adapter may also be used. A
software FCoE adapter is OS or hypervisor kernel-resident software that
performs FCoE processing.

 The FCoE processing consumes host CPU cycles.

 With software FCoE adapters, the OS or hypervisor implements the FC protocol in software, which handles SCSI-to-FC processing.

 The software FCoE adapter performs FC-to-Ethernet encapsulation. Both FCoE traffic (Ethernet traffic that carries FC data) and regular Ethernet traffic are transferred through supported NICs on the hosts.

FCoE Switch
 An FCoE switch has both Ethernet switch and FC switch functionalities.
It has a Fibre Channel Forwarder (FCF), an Ethernet Bridge, and a set of
ports that can be used for FC and Ethernet connectivity.

 FCF handles FCoE login requests, applies zoning, and provides the
fabric services typically associated with an FC switch.

 It also encapsulates the FC frames received from the FC port into Ethernet frames and decapsulates the Ethernet frames received from the Ethernet Bridge into FC frames.

 Upon receiving the incoming Ethernet traffic, the FCoE switch inspects
the Ethertype of the incoming frames and uses that to determine their
destination.

 If the Ethertype of the frame is FCoE, the switch recognizes that the
frame contains an FC payload and then forwards it to the FCF.

 From there, the FC frame is extracted from the Ethernet frame and
transmitted to the FC SAN over the FC ports.

 If the Ethertype is not FCoE, the switch handles the frame as regular Ethernet traffic and forwards it over the Ethernet ports (a minimal dispatch sketch follows this list).
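A minimal C sketch of this Ethertype-based dispatch. The value 0x8906 is the registered FCoE Ethertype; the sample frames and the function name are illustrative assumptions, not details from the text:

#include <stdint.h>
#include <stdio.h>

#define ETHERTYPE_FCOE 0x8906  /* registered Ethertype for FCoE frames */

/* Returns 1 when the frame carries an FC payload and belongs to the FCF. */
static int route_to_fcf(uint16_t ethertype) {
    return ethertype == ETHERTYPE_FCOE;
}

int main(void) {
    uint16_t frames[] = { 0x8906, 0x0800, 0x8906 };  /* FCoE, IPv4, FCoE */
    for (size_t i = 0; i < sizeof frames / sizeof frames[0]; i++)
        printf("frame %zu (Ethertype 0x%04X) -> %s\n", i, frames[i],
               route_to_fcf(frames[i]) ? "FCF (FC payload)" : "Ethernet ports");
    return 0;
}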

FCoE ARCHITECTURE
 Fibre Channel over Ethernet (FCoE) is a method of supporting
converged Fibre Channel (FC) and Ethernet traffic on a data center
bridging (DCB) network.

 FCoE encapsulates unmodified FC frames in Ethernet to transport the FC frames over a physical Ethernet network.

 An FCoE frame is the same as any other Ethernet frame because the
Ethernet encapsulation provides the header information needed to
forward the frames. However, to achieve the lossless behavior that FC
transport requires, the Ethernet network must conform to DCB
standards.

 DCB standards create an environment over which FCoE can transport native FC traffic encapsulated in Ethernet while preserving the mandatory class of service (CoS) and other characteristics that FC traffic requires.

 Supporting FCoE in a DCB network requires that the FCoE devices in the Ethernet network and the FC switches at the edge of the SAN network handle both Ethernet and native FC traffic. To handle Ethernet traffic, an FC switch does one of two things:

 Incorporates FCoE interfaces.

 Uses an FCoE-FC gateway such as a QFX3500 switch to de-encapsulate FCoE traffic from FCoE devices into native FC, and to encapsulate native FC traffic from the FC switch into FCoE and forward it to FCoE devices through the Ethernet network.
Part-C ( One Question) ( 15 Marks)
S.No Questions BTL CO

1 Illustrate the components and architecture of FC SAN. K2 CO3

A SAN consists of three basic components: servers, network infrastructure, and storage.

These components can be further broken down into the following key
elements: node ports, cabling, interconnecting devices (such as FC switches
or hubs), storage arrays, and SAN management software

Node Ports

In Fibre Channel, devices such as hosts, storage, and tape libraries are all referred to as nodes. Each node is a source or destination of information for one or more other nodes.

Each node requires one or more ports to provide a physical interface for communicating with other nodes. These ports are integral components of an HBA and the storage front-end adapters.

A port operates in full-duplex data transmission mode with a transmit (Tx) link and a receive (Rx) link.

Cabling:

 SAN implementations use optical fiber cabling. Copper can be used for shorter distances for back-end connectivity, as it provides a better signal-to-noise ratio for distances up to 30 meters.

 Optical fiber cables carry data in the form of light.

 There are two types of optical cables, multi-mode and single-mode.

 Multi-mode fiber (MMF) cable carries multiple beams of light projected at different angles simultaneously onto the core of the cable.

Based on the bandwidth, multi-mode fibers are classified as:
 OM1 (62.5µm)
 OM2 (50µm)
 laser-optimized OM3 (50µm)
 laser optimized OM3 (50µm).
In an MMF transmission, multiple light beams traveling inside the cable
tend to disperse and collide.
 This collision weakens the signal strength after it travels a
certain distance — a process known as modal dispersion.

 An MMF cable is usually used for distances of up to 500 meters because of signal degradation (attenuation) due to modal dispersion.

 Single-mode fiber (SMF) carries a single ray of light projected at the center of the core.

 These cables are available in diameters of 7–11 microns; the most common size is 9 microns.

 In an SMF transmission, a single light beam travels in a straight line through the core of the fiber.

 The small core and the single light wave limit modal dispersion. Among all types of fibre cables, single-mode provides the minimum signal attenuation over the maximum distance (up to 10 km).

 A single-mode cable is used for long-distance cable runs, limited only by the power of the laser at the transmitter and the sensitivity of the receiver.

MMFs are generally used within data centers for shorter distance runs, while
SMFs are used for longer distances. MMF transceivers are less expensive as
compared to SMF transceivers.

 A Standard connector (SC) and a Lucent connector (LC) are two commonly used connectors for fiber optic cables.

 An SC is used for data transmission speeds up to 1 Gb/s, whereas an LC is used for speeds up to 4 Gb/s.

 A Straight Tip (ST) is a fiber optic connector with a plug and a socket that is locked with a half-twisted bayonet lock.

 In the early days of FC deployment, fiber optic cabling predominantly used ST connectors. This connector is often used with Fibre Channel patch panels.

The Small Form-factor Pluggable (SFP) is an optical transceiver used in optical communication. The standard SFP+ transceivers support data rates up to 10 Gb/s.

Interconnect Devices
 Hubs, switches, and directors are the interconnect devices commonly
used in SAN.

 Hubs are used as communication devices in FC-AL implementations.

 Hubs physically connect nodes in a logical loop or a physical star topology.

 All the nodes must share the bandwidth because data travels through
all the connection points. Because of the availability of low-cost and
high-performance switches, hubs are no longer used in SANs.

Storage Arrays
 The fundamental purpose of a SAN is to provide host access to
storage resources.

 The large storage capacities offered by modern storage arrays have been exploited in SAN environments for storage consolidation and centralization.

 SAN implementations complement the standard features of storage arrays by providing high availability and redundancy, improved performance, business continuity, and multiple host connectivity.

 SAN management software manages the interfaces between hosts,
interconnect devices, and storage arrays.

 The software provides a view of the SAN environment and enables the management of various resources from one central console.

 It provides key management functions, including mapping of storage devices, switches, and servers, monitoring and generating alerts for discovered devices, and logical partitioning of the SAN, called zoning.

FC ARCHITECTURE

 The FC architecture represents true channel/network integration with standard interconnecting devices. Connections in a SAN are accomplished using FC.

 Transmissions from host to storage devices are carried out over channel connections such as a parallel bus. Channel technologies provide high levels of performance with low protocol overheads.

 Such performance is due to the static nature of channels and the high
level of hardware and software integration provided by the channel
technologies.

The key advantages of FCP are as follows:

 Sustained transmission bandwidth over long distances.

 Support for a larger number of addressable devices over a network. FC can support over 15 million device addresses on a network.

 Exhibits the characteristics of channel transport and provides speeds up to 8.5 Gb/s (8 GFC).

The FC standard enables mapping several existing Upper Layer Protocols (ULPs) to FC frames for transmission, including SCSI, IP, High Performance Parallel Interface (HIPPI), Enterprise System Connection (ESCON), and Asynchronous Transfer Mode (ATM).

UNIT-IV

TQM TOOLS AND TECHNIQUES II

Quality Circles - Cost of Quality - Quality Function Deployment (QFD) - Taguchi quality loss function -
TPM - Concepts, improvement needs - Performance measures.

Part-A ( Five Questions)

S.No Questions BTL CO

1 State “Taguchi quality loss function”. K1 CO4


The Taguchi quality loss function is a way to assess the economic loss from a deviation in quality without having to develop a unique loss function for each quality characteristic. Expressed as a function of the traditionally used process capability index, it also converts this unitless value into monetary units.
2 Write short notes on QFD. K2 CO4
Quality Function Deployment is a planning tool used to fulfill customer expectations. It is a disciplined approach to product design, engineering, and production, and it provides an in-depth evaluation of a product.
3 What is Quality circle? K1 CO4


A Quality Circle is a small group of employees within an organization who
voluntarily come together to identify, analyze, and solve work-related
problems and improve processes within their area of responsibility. Quality
Circles are typically part of a broader Total Quality Management (TQM) or
continuous improvement initiative within an organization.
4 Define Quality Cost and its Elements. K1 CO4
Quality costs are defined as those costs associated with the non-achievement of product/service quality as defined by the requirements established by the organization and its contracts with customers and society.
 Cost of prevention
 Cost of appraisal
 Cost of internal failures
 Cost of external failures
5 List out the benefits of TPM. K1 CO4
 Increased equipment productivity
 Increased equipment reliability
 Reduced equipment downtime
 Increased plant capacity
 Extended machine life
Part-B( Three Questions) ( 13 Marks)

S.No Questions BTL CO

1 Briefly discuss the benefits, Objectives and process of QFD. K2 CO4


QFD is used for developing new products and services and for improving existing products and services. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”
QFD helps transform customer needs into engineering characteristics for a product or service, prioritizing each product or service characteristic while simultaneously setting development targets for the product or service.
Objectives of QFD
The objectives of QFD are:
1. To identify the true voice of the customer and to use this knowledge to
develop products which satisfy customers.
2. To help in the organization and analysis of all the pertinent information
associated with the project.

Benefits of using QFD
 Products meet customer expectations better
 Improved design traceability
 Reduced lead times
 Reduced product cost
 Improved communication within the organization and with customers
 Reduction in design changes
 Improved quality
 Increased customer satisfaction
 Reduced rework
 Enables concurrent engineering
 Improved performance of the products
 Useful for gathering consumer requirements
 Reduced time to market
 Decreased design and manufacturing costs
 Reduces product development time by up to 50%
 Creates a list of actions that will move the project forward
The QFD Team
When an organization decides to implement QFD, the project manager and team members need to be able to commit a significant amount of time to it, especially in the early stages. The priorities of the projects need to be defined and communicated to all departments within the organization so team members can budget their time accordingly. Also, the scope of the project must be clearly defined so questions about why the team was formed do not arise. One of the most important tools in the QFD process is communication.
There are two types of teams: one for a new product and one for improving an existing product. Teams are composed of members from
 Marketing
 Design
 Quality
 Testing
 Purchase
 Quality assurance
 Finance
 Production.
The existing-product team usually has fewer members, because the QFD process only needs to be modified. Time and inter-team communication are two resources that each team must use to their fullest potential. Using time effectively is essential to getting the project done on schedule. Using inter-team communication to its fullest extent will alleviate unforeseen problems and make the project run smoothly.
Team meetings are very important in the QFD process. The team leader needs
to ensure that the meetings are run in the most efficient manner and that the
members are kept informed. There are advantages to shorter meetings, and
sometimes a lot more can be accomplished in a shorter meeting. Shorter meetings allow information to be collected between sessions, which helps ensure that the right information is entered into the QFD matrix. Also, they help keep the team focused on a quality improvement goal.
Four phases of QFD process:
Quality Function Development (QFD) may be defined as a system for
translating consumer requirements into appropriate requirements at every stage,
from research through product design and development, to manufacture,
distribution, installation and marketing, sales and services.

The first phase of the QFD process is the product-planning phase. For each of the customer requirements, a set of design requirements is determined which, if satisfied, will result in achieving the customer requirements.
The second phase is part development. The term “part quality characteristics” is applied to any elements that can aid in measuring the evolution of quality. This chart translates the design requirements into specific part details.
Key process operations are identified in the third phase. Production requirements are determined from the key process operations in the fourth phase.
Iterative QFD
The process of QFD can be further extended. In the first iteration, the WHATs and HOWs are found; the HOWs are technical requirements. In the second iteration of QFD, these HOWs can be treated as the WHATs, and more detailed technical requirements can be found. These are the new HOWs, which will be much closer to actual implementation.
[Figure: Iterative QFD – in each of the first, second, and third iterations, the HOWs of one iteration become the new WHATs of the next, yielding new HOWs.]
Tips for success of QFD
 A consultant is needed to guide through at least the first few projects.
 The activity should be a formal activity and every member should take
part, fully prepared.
 The meetings should be planned at regular intervals for shorter durations so as to get the best out of this exercise by maintaining focus.
 Elicitation and recording customer requirements is key to success.

2 Summarize the various concepts of TPM. K2 CO4

Total Productive Maintenance (TPM) is a maintenance program which involves a newly defined concept for maintaining plants and equipment. TPM seeks to maximize equipment effectiveness throughout the lifetime of that equipment. It strives to maintain optimum equipment conditions in order to prevent unexpected breakdowns, speed losses, and quality defects arising from process activities.
Total = All-encompassing, by maintenance and production individuals working together.
Productive = Production of goods and services that meet or exceed customers’ expectations.
Maintenance = Keeping equipment and plant in as good as or better than original condition at all times.
Objective Of TPM
• Avoid wastage in a quickly changing economic environment.
• Producing goods without reducing product quality.
• Reduce cost.
• Produce target quantity at the earliest possible time.
• Goods sent to the customers must be non-defective.
Six core principles of TPM
1. Obtain a minimum of 90% OEE (Overall Equipment Effectiveness); run the machines even during lunch (an OEE calculation sketch follows this list).
2. Operate the machines in such a manner that there are no customer complaints.
3. Reduce the manufacturing cost by 30%.
4. Achieve 100% success in delivering the goods as required by the customer.
5. Maintain an accident-free environment.
6. Increase the suggestions by 3 times.
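OEE in principle 1 is conventionally computed as Availability × Performance × Quality. A minimal C sketch with assumed shift figures (illustrative values, not from the text), compared against the 90% target above:

#include <stdio.h>

/* Conventional OEE formula: Availability x Performance x Quality. */
static double oee(double availability, double performance, double quality) {
    return availability * performance * quality;
}

int main(void) {
    /* Assumed shift figures for illustration only. */
    double availability = 460.0 / 480.0;  /* run time / planned production time */
    double performance  = 0.95;           /* actual output / theoretical output */
    double quality      = 0.98;           /* good units / total units produced */
    double result = oee(availability, performance, quality);
    printf("OEE = %.1f%% (target: >= 90%%)\n", result * 100.0);
    return 0;
}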

PILLAR -1 - JISHU HOZEN ( Autonomous maintenance )


This pillar is geared towards developing operators to be able to take care of small maintenance tasks, thus freeing up the skilled maintenance people to spend time on more value-added activities and technical repairs. The operators are responsible for the upkeep of their equipment to prevent it from deteriorating.
STEPS IN JISHU HOZEN
1. Preparation of employees.
2. Initial cleanup of machines.
3. Take counter measures
4. Fix tentative JH standards
5. General inspection
6. Autonomous inspection
7. Standardization
8. Autonomous management
PILLAR -2 – KOBETSU KAIZEN
"Kai" means change, and "Zen" means good ( for the better ). Basically kaizen is
for small improvements, but carried out on a continual basis and involve all people
in the organization. Kaizen is opposite to big spectacular innovations. Kaizen
requires no or little investment. The principle behind is that "a very large number
of small improvements are move effective in an organizational environment than a
few improvements of large value. This pillar is aimed at reducing losses in the
workplace that affect our efficiencies.
Kaizen Target
Achieve and sustain zero losses with respect to minor stops, measurement and adjustments, defects, and unavoidable downtimes. It also aims to achieve a 30% reduction in manufacturing cost.
Tools used in Kaizen
1. Problem analysis
2. Root cause (Why-Why) analysis
3. Summary of losses
4. Kaizen register
5. Kaizen summary sheet.
PILLAR -3 - PLANNED MAINTENANCE
It is aimed at having trouble-free machines and equipment producing defect-free products for total customer satisfaction. This breaks maintenance down into four "families" or groups, which were defined earlier.
1. Preventive Maintenance
2. Breakdown Maintenance
3. Corrective Maintenance
4. Maintenance Prevention
PILLAR -4 – QUALITY MAINTENANCE
It is aimed at customer delight through the highest quality via defect-free manufacturing. The focus is on eliminating nonconformances in a systematic manner, much like Focused Improvement. We gain an understanding of what parts of the equipment affect product quality and begin to eliminate current quality concerns, then move to potential quality concerns. The transition is from reactive to proactive. QM activities set equipment conditions that preclude quality defects, based on the basic concept of maintaining perfect equipment to maintain perfect quality of products. The conditions are checked and measured in a time series to verify that measured values are within standard values, preventing defects.
PILLAR – 5 DEVELOPMENT MANAGEMENT / EARLY MANAGEMENT
Early management or development management helps in drastically reducing the time taken to receive, install, and set up newly purchased equipment. Early management can also be used for reducing the time to manufacture a new product in the factory.
PILLAR 6 – TRAINING and EDUCATION
Education is given to operators to upgrade their skill. It is not sufficient to know only "Know-How"; they should also learn "Know-Why". Through experience, operators gain the "Know-How" to overcome a problem, i.e., what is to be done. However, they do this without knowing the root cause of the problem or why they are doing it. Hence it becomes necessary to train them on "Know-Why". The employees should be trained to achieve the four phases of skill. The goal is to create a factory full of experts.
The different phases of skill are:
Phase 1 – Do not know.
Phase 2 – Know the theory but cannot do.
Phase 3 – Can do but cannot teach.
Phase 4 – Can do and also teach.
PILLAR -7 SAFETY, HEALTH AND ENVIRONMENT
Target:
1. Zero accidents
2. Zero health damage
3. Zero fires
In this area, the focus is on creating a safe workplace and a surrounding area that is not damaged by our processes or procedures. This pillar plays an active role in each of the other pillars on a regular basis.
PILLAR -8 OFFICE TPM
Office TPM should be started after activating the four other pillars of TPM. Office TPM must be followed to improve productivity and efficiency in the administrative functions and to identify and eliminate losses. This includes analyzing processes and procedures with a view towards increased office automation. Office TPM addresses twelve major losses. They are:
1. Processing loss
2. Cost loss including in areas such as procurement, accounts, marketing, sales
leading to high inventories
3. Communication loss
4. Idle loss
5. Set-up loss
6. Accuracy loss
7. Office equipment breakdown
8. Communication channel breakdown, telephone and fax lines
9. Time spent on retrieval of information

3 Explain Taguchi’s quality loss function. How does it differ from the traditional approach to quality loss cost? K1 CO4
 Taguchi methods are statistical methods developed by Genichi Taguchi
to improve the quality of manufactured goods.
 Taguchi defines quality as “the loss imparted by the product to society
from the time the product is shipped.
 This loss includes costs to operate, failure to functions, maintenance and
repair costs, customer dissatisfaction, injuries caused by poor design
and similar costs.
 Defective products or parts that are detected, repaired and reworked
before shipment are not considered part of this loss.
 The essence of the loss function concept is that whenever a product deviates from its target performance, it generates a loss to society. This loss is minimum when performance is right on target, but it grows gradually as one deviates from the target.
 Therefore the loss function philosophy says that for a manufacturer, the
best strategy is to produce products as close to the target as possible,
rather than aiming at being within specifications.
Taguchi’s approach Vs Traditional approach:
Consider two products: one is within the specified limits and the other is just outside the specified limits. In the traditional approach, the product within the limits is considered a good product while the one outside is considered a bad product.
Taguchi disagrees with this traditional approach. He believes that any deviation of a product from its target value causes a loss, whether the deviation falls inside or outside the specified limits.

Taguchi uses a quadratic equation to determine this curve:

L(x) = K (x − N)²   ... (1)

Where,
L(x) = Loss function
K = Constant of proportionality
x = Quality characteristic of the selected product
N = Nominal (target) value of the chosen product
(x − N) = Deviation of the quality characteristic from the target

To estimate the loss, the value of K in equation (1) should be determined first:

K = C / d²

Where,
C = Loss associated with the specification limit and
d = deviation of the specification from the target value.
The value of K determines the steepness of the quality loss curve.
The loss function philosophy says that for a manufacturer, the best strategy is to produce products as close to the target as possible, rather than merely aiming at being within specifications.
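A minimal C sketch of the loss function derived above. The cost C at the specification limit, the deviation d, and the nominal value N are assumed example values, not figures from the text:

#include <stdio.h>

/* Taguchi loss: L(x) = K * (x - N)^2, with K = C / d^2 as above. */
static double taguchi_loss(double x, double nominal, double k) {
    double dev = x - nominal;
    return k * dev * dev;
}

int main(void) {
    double C = 4.0;         /* assumed loss (currency units) at the spec limit */
    double d = 2.0;         /* assumed deviation of the spec limit from target */
    double k = C / (d * d); /* constant of proportionality */
    double N = 10.0;        /* assumed nominal (target) value */
    for (double x = 8.0; x <= 12.0; x += 1.0)
        printf("x = %4.1f  loss = %.2f\n", x, taguchi_loss(x, N, k));
    return 0;
}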
Part-C ( One Question) ( 15 Marks)
S.No Questions BTL CO

1 With a suitable example, explain the various stages of building a House of Quality matrix. K1 CO4
House of quality:
The primary planning tool used in QFD is the House of Quality (HOQ). The house of
quality converts the voice of the customer into product design characteristics. QFD
uses a series of matrix diagrams, also called ‘quality tables’ that resemble connected
houses.
House of quality is a graphic tool for defining the relationship between customer
desires and the firm/product capabilities. It is a part of the Quality function
Deployment (QFD) and it utilizes a planning matrix to relate what the customer wants
to how a firm (that produces the products) is going to meet these wants.
It looks like a house, with the correlation matrix as its roof, customer wants versus product features as the main part, and competitor evaluation as the porch. It is based on the belief that products should be designed to reflect customers’ desires and tastes. It is also reported to increase cross-functional integration within organizations using it, especially between marketing, engineering and manufacturing.
Parts of house of quality(HOQ)
 Customer requirements
 Prioritized customer requirements
 Technical descriptors
 Prioritized technical descriptors
 Relationship between requirements and descriptors
 Interrelationship between technical descriptors
Construction of house of quality(HOQ):
 List customer requirements
 List technical descriptors
 Develop a relationship matrix between WHATs and HOWs
 Develop an interrelationship matrix between HOWs
 Competitive assessments
 Develop prioritized customer requirements
 Develop prioritized technical descriptors
The steps in building a house of quality are:

1. List Customer Requirements (WHAT’s)
2. List Technical Descriptors (HOW’s)
3. Develop a Relationship Matrix between WHAT’s and HOW’s
4. Develop an Inter-relationship Matrix between HOW’s
5. Competitive Assessments
a. Customer Competitive Assessments
b. Technical Competitive Assessments
6. Develop Prioritized Customer Requirements
7. Develop Prioritized Technical Descriptors

Constructing the House of Quality: The steps required for building the house of
quality are listed below.
1. List Customer Requirements (WHAT’s)
 Define the customer and establish full identification of customer wants and
dislikes.
 Measure the priority of these wants and dislikes using weighting scores.
 Summarize these customer wants into a small number of major wants,
supported by a number of secondary and tertiary wants.
2. List Technical Descriptors (HOW’s)
 Translate the identified customer wants into corresponding HOWs or design characteristics.
 Express them in terms of quantifiable technical parameters or product
specifications.
3. Develop a Relationship Matrix between WHAT’s and HOW’s
 Investigate the relationships between the customer’s expectations (WHATs) and
the descriptors (HOWs)

 If a relationship exists, categorize it as strong, medium or weak (or by assigning scores).
4. Develop an Inter-relationship Matrix between HOW’s
 Identify any interrelationships between each of the technical descriptors.
 These relationships are marked in the correlation matrix by either positive or
negative.
 Here a positive correlation represents a strong relationship and a negative correlation represents a weak relationship.
5. Competitive Assessments
 Compare the performance of the product with that of competitive products.
 Evaluate the product and note the strong and weak points of the product
against its competitor’s product according to the customer.
 This competitive assessment tables include two categories.
a. Customer Competitive Assessments
b. Technical Competitive Assessments
6. Develop Prioritized Customer Requirements
 Develop the prioritized customer requirements corresponding to each customer
requirement in the house of quality on the right side of the customer
competitive assessment.
These prioritized customer requirements contain columns for importance to customer, target value, scale-up factor, sales point, and an absolute weight.
7. Develop Prioritized Technical Descriptors
 Develop the prioritized technical descriptors corresponding to each technical
descriptor in the house of quality below the technical competitive assessment.
 These prioritized technical descriptors include degree of technical difficulty,
target value and absolute and relative weights.
At the end of the HOQ analysis, the completed matrix contains much information about which customer requirements are most important, how they relate to proposed new product features, and how competitive products compare with respect to these input and output requirements (a small weighted-sum sketch of the prioritization arithmetic follows).
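The absolute weight of each technical descriptor is conventionally computed as the sum, over all customer requirements, of importance multiplied by relationship score. A minimal C sketch with an assumed 9/3/1 scoring scale and illustrative ratings (not values from the text):

#include <stdio.h>

#define N_WHATS 3  /* customer requirements */
#define N_HOWS  4  /* technical descriptors */

int main(void) {
    /* Assumed customer importance ratings on a 1-5 scale. */
    double importance[N_WHATS] = { 5, 3, 4 };
    /* Assumed relationship matrix: 9 = strong, 3 = medium, 1 = weak, 0 = none. */
    double rel[N_WHATS][N_HOWS] = {
        { 9, 3, 0, 1 },
        { 3, 9, 1, 0 },
        { 0, 1, 9, 3 },
    };
    /* Absolute weight of each HOW = sum of importance x relationship score. */
    for (int h = 0; h < N_HOWS; h++) {
        double weight = 0.0;
        for (int w = 0; w < N_WHATS; w++)
            weight += importance[w] * rel[w][h];
        printf("Technical descriptor %d: absolute weight = %.0f\n", h + 1, weight);
    }
    return 0;
}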

UNIT-V

QUALITY MANAGEMENT SYSTEM

Introduction—Benefits of ISO Registration—ISO 9000 Series of Standards—Sector-Specific Standards—


AS 9100, TS16949 and TL 9000- ISO 9001 Requirements—Implementation— Documentation—Internal
Audits—Registration--ENVIRONMENTAL MANAGEMENT SYSTEM: Introduction—ISO 14000 Series
Standards—Concepts of ISO 14001—Requirements of ISO 14001— Benefits of EMS

Part-A ( Five Questions)


S.No Questions BTL CO

1 Write the significance of quality audit and list out the types of audits. K2 CO5
A quality audit examines the elements of a quality management system in order to evaluate how well these elements comply with quality system requirements.
Types:
 Internal audit
 External audit
2 What is meant by control charts and what are their uses? K1 CO5
A control chart is defined as a display of data in the order in which they occur, with statistically determined upper and lower limits of expected common-cause variation. It is used to indicate special causes of process variation and to monitor a process for maintenance.
It is used to keep a continuing record of a particular quality characteristic. It is a picture of the process over time (a minimal limit-calculation sketch follows).
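One common form is an individuals chart with three-sigma limits. A minimal C sketch using assumed sample data; practice often estimates sigma from moving ranges, but the sample standard deviation is used here for simplicity:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Assumed measurements of one quality characteristic. */
    double x[] = { 10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1 };
    int n = sizeof x / sizeof x[0];

    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += x[i];
    double mean = sum / n;

    double sumsq = 0.0;
    for (int i = 0; i < n; i++) sumsq += (x[i] - mean) * (x[i] - mean);
    double sd = sqrt(sumsq / (n - 1));  /* sample standard deviation */

    /* Center line and three-sigma control limits. */
    printf("CL  = %.3f\nUCL = %.3f\nLCL = %.3f\n", mean, mean + 3 * sd, mean - 3 * sd);
    return 0;
}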
3 Mention the elements of ISO 14000. K1 CO5
a. Global
i. Facilitate trade and remove trade barriers
ii. Improve the environmental performance of planet earth
iii. Build consensus that there is a need for environmental management and a common terminology for EMS
b. Organizational
4 Write the need for ISO 9000. K2 CO5
The QS 9000 standard defines the fundamental quality expectations from
the suppliers of production and service parts. The QS 9000 standard uses
ISO 9000 as its base with much broader requirements.
ISO 9000 is needed to unify the quality terms and definitions used by industrialized nations and to use those terms to demonstrate a supplier’s capability of controlling its processes.

5 Define Environmental policy. K1 CO5

The environmental policy should address the following issues:


 Management commitment to continue improvement
 Prevention of pollution
 Compliance with environmental laws and regulations, and cooperation with public authorities
Part-B( Three Questions) ( 13 Marks)

S.No Questions BTL CO

1 Discuss the concept and benefits ISO 14000 standard. K1 CO5

ISO 14000 is a series of international standards developed by the International Organization for Standardization (ISO) that focus on
environmental management. The standards provide organizations with a
framework for establishing, implementing, maintaining, and improving
environmental management systems (EMS). The goal is to help
organizations minimize their negative impact on the environment, comply
with environmental regulations, and continually improve their
environmental performance.
Key Components
Environmental Policy: Organizations must establish and maintain an
environmental policy that outlines their commitment to environmental
protection, compliance with relevant regulations, and continuous
improvement.
Planning: This involves identifying environmental aspects of the
organization's activities, products, and services that can interact with the
environment (such as emissions, waste generation, resource use) and
assessing their potential environmental impacts. Based on this
assessment, organizations must set environmental objectives and targets
aligned with their environmental policy and legal requirements.
Implementation and Operation: Organizations must establish and
implement the necessary processes and procedures to achieve their
environmental objectives and targets. This includes defining roles,
responsibilities, and authorities, providing training and awareness
programs for employees, and establishing communication mechanisms
both internally and externally.
Monitoring and Measurement:
Organizations must establish procedures to monitor and measure their
environmental performance regularly. This involves tracking key
environmental indicators, such as energy consumption, greenhouse gas
emissions, waste generation, and water usage, to evaluate progress
towards objectives and targets.
Evaluation of Compliance:
Organizations must periodically evaluate their compliance with relevant
environmental regulations and other requirements to ensure that they are
meeting their legal obligations.
Management Review:
Top management must review the organization's EMS periodically to
ensure its continued suitability, adequacy, effectiveness, and alignment
with the organization's strategic direction.
Continual Improvement:
Organizations must continually seek opportunities to improve their
environmental performance by taking corrective and preventive actions,
implementing best practices, and incorporating lessons learned into their
EMS.
Benefits of ISO 14000
• Proactive environmental management.
• Improved employee environmental awareness and performance.
• Increased operating efficiency and cost-effectiveness.
• Enhanced relationships and communication with employees, regulators, and stakeholders.
• Reduced environmental risk.
2 Explain the need for documentation and the documents to be prepared for QMS. K1 CO5
Need For Documentation
 Standardization of Processes
 Quality Planning and Control
 Training and Development
 Problem Identification and Resolution.
 Continuous Improvement
 Customer Satisfaction
 Regulatory Compliance.
 Knowledge Management
QMS documentation structure is a hierarchical organization of documents
within the QMS. The documentation hierarchy makes it easy to
understand, communicate, and visualize the documentation structure.
Each documentation level in the structure builds upon the previous one
and contributes to the overall effectiveness of the QMS.
The four levels of documents in the QMS pyramid include:
 Quality policy
 Procedures
 Work instructions
 Records

Quality Policy
The quality policy is a statement that defines the company’s commitment
to quality. It is a high-level document outlining the values and principles
regarding quality and providing a framework for setting quality
objectives.
At SimplerQMS, we advocate for creating quality policies that clearly
state the company’s desire to support high and uniform quality in
products and processes. It is important for people reading the Quality
Policy to see your company’s identity reflected in your quality policy.
Procedures
A procedure describes the step-by-step activities of processes within the
company. It includes elements such as the responsible departments or
functions and the frequency of the action.
These procedures provide clear guidelines, helping achieve efficiency,
quality output, and consistent performance while reducing
miscommunication and noncompliance with relevant requirements.
Work Instructions
Work instructions are the most detailed documents in the QMS structure.
They are typically written by the people who perform the work or people
who are responsible for leading those who perform the work. Work
instruction can be developed in the company or provided by customers.
The instructions help ensure tasks are carried out consistently and
effectively and meet the applicable requirements of the quality
management system.
Records
Records provide evidence that activities and events were conducted,
providing a historical record of actions.
By performing internal audits and reviewing the records, companies can
support evidence-based decision-making and demonstrate compliance
with requirements
3 i)Discuss the various benefits of EMS.(6) K2 CO5

ii) List out the various requirements of ISO 14001.(7) K2 CO5
i)Benefits Of EMS.
Global Benefits
 Facilitate trade & remove trade barriers
 Improve environmental performance of planet earth
 Build consensus that there is a need for environmental
management and a common terminology for EMS
Organizational Benefits
 Assuring customers of a commitment to environmental
management
 Meeting customer requirement
 Improve public relation
 Increase investor satisfaction
 Market share increase
 Conserving input material & energy
 Better industry/government relation
 Low cost insurance, easy attainment of permits & authorization
ii)Requirements Of ISO 14001
Planning
 Environmental Aspects
 Legal & other Requirements
 Objectives & Targets
 Environmental Management Programs
Implementation & Operation
 Structure & Responsibility
 Training, Awareness & Competency
 Communication
 EMS Documentation
 Document Control
 Operational Control
 Emergency Preparedness & Response
Checking & Corrective Action
 Monitoring & Measuring
 Nonconformance & Corrective & Preventive action
 Records
 EMS Audit
Management Review
 Review of objectives & targets
 Review of Environmental performance against legal & other
requirement
 Effectiveness of EMS elements
 Evaluation of the continuation of the policy
Part-C ( One Question) ( 15 Marks)

S.No Questions BTL CO

1 Illustrate the various elements of ISO 9000:2000 quality system. K2 CO5


Elements/Clauses in ISO 9001:2000
ISO 9001 defines 20 elements necessary for a quality management
system, as listed below:
Management Responsibility (Element 1)
The company has to define its commitment to a quality policy, which is
understood, implemented and maintained at all levels of the organization,
and to define its quality goals. Responsibilities and authorities have to be
defined and documented. The company must provide adequate resources
and appoint a member of the management as a representative for quality
management. At least once a year, a management review must be held
and recorded to evaluate the quality system.
Quality System (Element 2)
A quality manual, covering all elements of the ISO standard, has to be
prepared to document the quality system. Procedures must be documented
and controlled. The company has to prepare a quality plan to ensure that
quality requirements are understood and fulfilled.
Contract Review (Element 3)
The company has to establish and maintain documented procedures for
contract review, to document the customers' requirements and ensure the
capability to fulfill the contract or order requirements. Records of contract
review shall be maintained.
Design Control (Element 4)
The company has to establish and maintain documented procedures to
control and verify the design of a new product or service to fulfill
customers' requirements. The requirements must be identified and there
must be design reviews, design verification and design validation.
Design changes shall be documented, reviewed and authorized.
Document Control (Element 5)
All documents relevant for quality have to be controlled to ensure that the
pertinent issues of appropriate documents are available at all locations.
When necessary, they are to be replaced by updated versions. Changes
shall be reviewed and approved by the same organization/person that
performed the original review or approval.
Purchasing (Element 6)
The company must monitor the flow of purchasing and evaluate the
subcontractor's ability to fulfill specified requirements.

Purchaser Supplied Product (Element 7)
Goods supplied by the customer have to be recorded. It must be ensured
that they are separately controlled and stored to prevent loss or damage.
Product Identification And Traceability (Element 8)
Where appropriate, purchased and delivered products or services must be
made traceable through documentation or batches.
Process Control (Element 9)
All processes of production or service that directly affect quality must be
documented and planned and carried out under controlled conditions to
add consistency to the process. Control of process parameters and product
characteristics must ensure that the specified requirements are met.
Inspection And Testing (Element 10)
The company must ensure receiving inspection and testing, in-process
inspection and testing, and final inspection and testing. These inspections
and tests must be recorded.
Test Equipment (Element 11)
The items of equipment used for inspection, measuring and testing must
be identified and recorded. They must be controlled, calibrated and
checked at prescribed intervals.
Inspection And Test Status (Element 12)
The status of the product or service must be identified at all stages as
conforming or nonconforming. This is to ensure that only conforming
products or services are dispatched or used.
Control Of Nonconforming Product (Element 13)
The company must establish procedures to ensure that nonconforming
products or services are prevented from unintended use. The disposal of
nonconforming products must be determined and recorded.
Corrective And Preventive Action (Element 14)
Procedures must be established to ensure effective handling of customer
complaints and corrective actions after identifying nonconformities. The
cause of nonconformities is to be investigated in order to prevent
recurrence. The corrective action shall be monitored to ensure its long-
term effectiveness. Preventive actions are to be initiated to eliminate
potential causes of nonconformance.
Delivery (Element 15)
Documented procedures must be established to ensure that products are
not damaged and reach the customer in the required condition.
Control Of Quality Records (Element 16)
All records related to the quality system must be identified, collected and
stored together. The quality records demonstrate conformity with
specified requirements and verify effective operation of the quality

system.
Internal Quality Audits (Element 17)
The company must establish and maintain documented procedures for
planning and implementing internal quality audits to determine the
effectiveness of the quality system. The comments made by internal
auditors must be recorded and brought to the attention of the personnel
having responsibility in the area audited. Follow-up audit activities shall
verify and record the implementation and effectiveness of the corrective
action taken.
Training (Element 18)
The company shall establish and maintain documented procedures for
identifying training needs and must have a training record for each
employee.
Servicing (Element 19)
Where servicing is a specific requirement, the company must establish
and maintain documented procedures for performing, verifying and
reporting that the servicing meets the specified requirements.
Statistical Techniques (Element 20)
The company must establish and maintain documented procedures to
implement and control the application of statistical techniques which have
been identified as necessary for performance information.

