Question Bank Problem Solving in C
QUESTION BANK
Year/Sem.: I/I
Course Code &Title: IT24111 & PROBLEM SOLVING TECHNIQUES USING C
Regulation: R2022
Prepared By Approved By
Mahendra Institute of Technology- Dept.of IT-IT24111 & Problem Solving Techniques Using C Page 1
UNIT-I
Computers: Hardware - Software - Processor - Memory - I/O devices - Interface - Programming
Languages. Problem Solving Aspects: Algorithms, Pseudocode, Flowchart, Steps in Problem Solving - simple
strategies for developing algorithms (iteration, recursion) - Steps for Creating and Running Programs -
Introduction to C programming - Header files - Structure of a C program - Compilation and linking processes -
Constants, Variables - Data Types.
computing systems.
2 Define compiler. K1 CO1
Preprocessor Statements (Link Section)
Global Declarations (Definition Section)
The main() function
o Local Declarations
VARIABLE DECLARATION:
Variable declaration in C tells the compiler about the existence of the
variable with the given name and data type.
data_type variable_name = value;
or
data_type variable_name1, variable_name2;
INITIALIZATION OF VARIABLE:
Initialization of a variable is the process where the user assigns some
meaningful value to the variable when creating the variable.
int var = 10;
Part-B(Three Questions) ( 13 Marks)
These two states (ON and OFF) can also be represented as 1 and 0, which is called
Binary Code.
However, working with Machine Language is not an easy task, as it requires a
good understanding of the architecture of the CPU and how it works.
1.2ASSEMBLY LANGUAGE:
Assembly Language is also known as Second Generation Programming
Language (2GL).
It is another Low-Level Programming Language and the second closest
language to the Computer.
Assembly Language is slower than Machine Language.
However, it is very fast compared to High-Level Programming
Languages (such as C, C++, and Java).
Unlike Machine Language, Assembly Language needs a program (called an
Assembler) to convert its Assembly Code to Machine Code.
Programming in Assembly Language is much easier than working with
Machine Language.
We create a number of statements in order to solve any problem.
cluster. Storage can be either internal or external to the server. External DAS
alleviated the challenges of limited internal storage capacity.
3 Describe in detail the Information Lifecycle Management system. K2 CO1
INFORMATION LIFECYCLE MANAGEMENT:
Information lifecycle management (ILM) is a proactive strategy that enables an
IT organization to effectively manage the data throughout its lifecycle, based
on predefined business policies.
Classifying data and applications on the basis of business rules and policies to
enable differentiated treatment of information
■ Implementing policies by using information management tools, starting from
complexity
■ Organizing storage resources in tiers to align the resources with data classes,
Step 1 :
The goal is to implement a storage networking environment. Storage architectures
offer varying levels of protection and performance and this acts as a foundation for
future policy-based information management in Steps 2 and 3.
Step 2:
Takes ILM to the next level, with detailed application or data classification and
linkage of the storage infrastructure to business policies.
This classification and the resultant policies can be automatically executed using
tools for one or more applications, resulting in better management and optimal
allocation of storage resources.
Step 3 : The implementation is to automate more of the applications or
data classification and policy management activities in order to scale
to a wider set of enterprise applications.
Part-C ( One Question) ( 15 Marks)
S.No Questions BTL CO
a few clicks is a major reason for shifting to the cloud. In modern data
centers, software-defined networking (SDN) manages the traffic flows via
software.
Infrastructure as a Service (IaaS) offerings, hosted on private and public
clouds, spin up whole systems on-demand.
Colocation data centers function as a kind of rental property where the space and
resources of a data center are made available to the people willing to rent it.
Managed service data centers offer aspects such as data storage, computing, and other
services as a third party, serving customers directly.
Cloud data centers are distributed and are sometimes offered to customers with the help
of a third-party managed service provider.
BUILDING BLOCKS OF A DATA CENTER :
Data centers are made up of three primary types of components:
Apart from the Data Centers, support infrastructure is essential to meeting the
service level agreements of an enterprise data center.
either locally, remotely, or both.
Advancements in non-volatile storage media lower data access times.
In addition, as in any other thing that is software-defined, software-defined
storage technologies increase staff efficiency for managing a storage
system.
Data Center Networks
Data center network equipment includes cabling, switches, routers, and
firewalls that connect servers to each other and to the outside world. Properly configured
and structured, they can manage high volumes of traffic without
compromising performance.
A typical three-tier network topology is made up of core switches at the edge
connecting the data center to the Internet and a middle aggregate layer that
connects the core layer to the access layer where the servers reside.
Advancements, such as hyper-scale network security and software-defined
networking, bring cloud-level agility and scalability to on-premises networks.
• In the cloud
A software-defined data center (SDDC) is an IT-as-a-Service (ITaaS) platform that serves
an organization’s software, infrastructure, or platform needs.
An SDDC can be housed on-premises, at an MSP, or in private, public, or hosted
clouds.
Like traditional data centers, SDDCs also host servers, storage devices, network
equipment, and security devices. You can manage SDDCs from any location, using
remote APIs and Web browser interfaces. SDDCs also make extensive use of
automation capabilities to:
• Reduce IT resource usage
• Provide automated deployment and management for many core
functions
KEY COMPONENTS OF SDDC
Benefits of SDDCs
Business agility
An SDDC offers several benefits that improve business agility with a focus on three key
areas:
• Balance
• Flexibility
• Adaptability
Reduced cost
• In general, it costs less to operate an SDDC than housing data in brick-and-mortar
data centers.
• Cloud SDDCs operate similarly to SaaS platforms that charge a recurring monthly
cost.
• This is usually an affordable rate, making an SDDC accessible to all types of
businesses, even those who may not have a big budget for technology
spending.
Increased scalability
By design, cloud SDDCs can easily expand along with your business. Increasing
your storage space or adding functions is usually as easy as contacting the data
facility to get a revised monthly service quote.
UNIT-II
Components of an intelligent storage system,Componenets, addressing and performance of hard disk drives
and solid state drives, RAID, Types of intelligent storage systems, Scale-up, and scaleout storage
Architecture.
command is received for execution, the command queuing algorithms
assign a tag that defines a sequence in which the commands can be
executed.
[Figure: Components of an intelligent storage system - the host connects through the storage network to front-end ports, then to the cache, the back-end ports, and finally the physical disks.]
An I/O request from the host at the front-end port is processed through a cache
and the back end, to enable storage and retrieval of data from the physical
disk. A read request can be serviced directly from the cache if the requested
data is found in the cache.
FRONT END
• The front end provides the interface between the storage system and the
host. It consists of two components: front-end ports and front-end
controllers.
• The front-end ports enable hosts to connect to the intelligent storage system.
Each
front-end port has processing logic that executes the appropriate transport
protocol, such as SCSI, Fibre Channel, or iSCSI, for storage connections.
Front-end controllers route data to and from the cache via the internal data bus.
When the cache receives write data, the controller sends an acknowledgment message
back to the host. Controllers optimize I/O processing by using command queuing
algorithms.
Seek Time Optimization: Commands are executed based on optimizing
read/write head movements, which may result in a reordering of commands.
Access Time Optimization: Commands are executed based on the
combination of seek time optimization and an analysis of rotational latency
for optimal performance.
CACHE
• The cache is semiconductor memory where data is placed temporarily to reduce
the time required to service I/O requests from the host.
• Accessing data from the cache takes less than a millisecond. Write data is placed
in the cache and then written to disk. After the data is securely placed in the cache,
the host is acknowledged immediately.
Structure of Cache:
✓ The cache is organized into pages or slots, the smallest units of cache
allocation.
The size of a cache page is configured according to the application I/O size.
The cache consists of the data store and tag RAM.
The data store holds the data while tag RAM tracks the location of the data in
the data store and disk.
Entries in tag RAM indicate where data is found in cache and where the data
belongs on the disk. Tag RAM includes a dirty bit flag, which indicates whether
the data in cache has been committed to the disk or not.
It also contains time-based information, such as the time of last access, which
is used to identify cached information that has not been accessed for a long
period and may be freed up.
Cache Implementation
The cache can be implemented as either a dedicated cache or a global cache. With a
dedicated cache, separate sets of memory locations are reserved for reads and
writes. In the global cache, both reads and writes can use any of the available
memory addresses. Cache management is more efficient in a global cache
implementation, as only one global set of addresses has to be managed.
BACK END:
• The back end provides an interface between the cache and the physical
disks. It consists of two components: back-end ports and back-end
controllers.
• The back end controls data transfers between cache and the physical disks.
From cache,
data is sent to the back end and then routed to the destination disk. Physical
disks are connected to ports on the back end.
• The back-end controller communicates with the disks when performing
reads and writes
and also provides additional, but limited, temporary data storage.
PHYSICAL DISK:
For example, without the use of LUNs, a host requiring only 200 GB could be
allocated an entire 1TB physical disk. Using LUNs, only the required 200 GB
would be allocated to the host, allowing the remaining 800 GB to be allocated to
other hosts.
The capacity of a LUN can be expanded by aggregating other LUNs with it.
The result of this aggregation is a larger capacity LUN, known as a meta-LUN.
The mapping of LUNs to their physical location on the drives is
managed by the operating environment of an intelligent storage system.
Several platters are assembled together with the R/W head and controller.
The key components of a disk drive are the platter, spindle, read/write head,
actuator arm assembly, and controller.
PLATTER:
A typical HDD consists of one or more flat circular disks called platters (Figure
2-3). The data is recorded on these platters in binary codes (0s and 1s).
The set of rotating platters is sealed in a case, called a Head Disk Assembly
(HDA). A platter is a rigid, round disk coated with magnetic material on both
surfaces (top and bottom).
The data is encoded by polarizing the magnetic area, or domains, of the disk
surface. Data can be written to or read from both surfaces of the platter.
The number of platters and the storage capacity of each platter determine the
total capacity of the drive.
SPINDLE
✓ A spindle connects all the platters, as shown in Figure 2-3, and is
connected to a motor. The motor of the spindle rotates with a
constant speed.
✓ The disk platter spins at a speed of several thousands of
revolutions per minute (rpm). Disk drives have spindle speeds of
7,200 rpm, 10,000 rpm, or 15,000 rpm. Disks used on current
storage systems have a platter diameter of 3.5” (90 mm).
✓ When the platter spins at 15,000 rpm, the outer edge is moving at
around 25 percent of the speed of sound.
READ/WRITE HEAD
✓ Read/Write (R/W) heads, shown in Figure 2-4, read and write
data from or to a platter.
✓ Drives have two R/W heads per platter, one for each surface of the platter.
✓ The R/W head changes the magnetic polarization on the surface of the
platter when writing data. While reading data, this head detects
magnetic polarization on the surface of the platter.
✓ During reads and writes, the R/W head senses the magnetic
polarization and never touches the surface of the platter. When the
spindle is rotating, there is a microscopic air gap between the R/W
heads and the platters, known as the head flying height.
✓ This air gap is removed when the spindle stops rotating and the R/W
head rests on a special area on the platter near the spindle. This area is
called the landing zone. The landing zone is coated with a lubricant
to reduce friction between the head and the platter.
✓ The logic on the disk drive ensures that heads are moved to the landing
zone before they touch the surface. If the drive malfunctions and the
R/W head accidentally touches the surface of the platter outside the
landing zone, a head crash occurs.
3 Describe the two types of RAID implementation and Array Components in detail. K2 CO2
There are two types of RAID implementation, hardware and software.
Software RAID
✓ Supported features: Software RAID does not support all RAID levels.
Hardware RAID
✓ The RAID Controller interacts with the hard disks using a PCI bus.
Manufacturers also integrate RAID controllers on motherboards. This
integration reduces the overall cost of the system, but does not provide
the flexibility required for high-end storage systems.
Management and control of disk aggregations
✓ The number of HDDs in a logical array depends on the RAID level used.
Configurations could have a logical array with multiple physical arrays or
a physical array with multiple logical arrays.
1 i) Discuss the steps involved in the various RAID level models. (10) K2 CO2
ii) Explain the read and write operations performed in cache memory. (5) K2 CO2
i) RAID levels are defined based on striping, mirroring, and parity techniques. These techniques
determine the data availability and performance characteristics of an array.
RAID 0: Striping
• RAID 0, also known as a striped set or a striped volume, requires a minimum of two
disks. The disks are merged into a single large volume where data is stored evenly
across the number of disks in the array.
• This process is called disk striping and involves splitting data into blocks and
writing it simultaneously/sequentially on multiple disks. Therefore, RAID 0 is
generally implemented to improve speed and efficiency.
Advantages of RAID 0
• Cost-efficient and straightforward to implement.
• Increased read and write performance.
• No overhead (total capacity use).
Disadvantages of RAID 0
• Doesn't provide fault tolerance or redundancy.
RAID 1: Mirroring
✓ RAID 1 is an array consisting of at least two disks where the same data is stored on
each to ensure redundancy. The most common use of RAID 1 is setting up a mirrored
pair consisting of two disks in which the contents of the first disk is mirrored in the
second. This is why such a configuration is also called mirroring.
Advantages of RAID 1
• Increased read performance.
• Provides redundancy and fault tolerance.
• Simple to configure and easy to use.
Disadvantages of RAID 1
• Uses only half of the storage capacity.
• More expensive (needs twice as many drives).
• Requires powering down your computer to replace the failed drive.
RAID 2: Bit-Level Striping
It combines bit-level striping with error checking and information correction. This RAID
implementation requires two groups of disks - one for writing the data and another for writing
error correction codes. RAID 2 also requires a special controller for the synchronized
spinning of all disks.
Advantages of RAID 2
• Reliability.
• The ability to correct stored information.
Disadvantages of RAID 2
• Expensive.
• Difficult to implement.
• Requires entire disks for ECC.
Advantages of RAID 3
• Good throughput when transferring large amounts of data.
• High efficiency with sequential operations.
• Disk failure resiliency.
Disadvantages of RAID 3
• Not suitable for transferring small files.
• Complex to implement.
• Difficult to set up as software RAID.
• Fast read operations.
Disadvantages of RAID 4
RAID 5: Striping with Parity
Parity bits are distributed evenly across all disks after each sequence of data has been saved.
Advantages of RAID 5
• High performance and capacity.
• Fast and reliable read speed.
• Tolerates single drive failure.
Disadvantages of RAID 5
• Longer rebuild time.
• The capacity of one drive is used for parity.
• If more than one disk fails, data is lost.
• More complex to implement.
For this reason, it is also referred to as the double-parity RAID.
• Block-level striping with two parity blocks allows two disk failures before any data
is lost. This means that in an event where two disks fail, RAID can still reconstruct
the required data.
Advantages of RAID 6
• High fault and drive-failure tolerance.
• Storage efficiency (when more than four drives are used).
• Fast read operations.
Disadvantages of RAID 6
• Rebuild time can take up to 24 hours.
• Slow write performance.
• Complex to implement.
• More expensive.
Advantages of RAID 10
• High performance.
• High fault-tolerance.
• Fast read and write operations.
• Fast rebuild time.
Disadvantages of RAID 10
• Limited scalability.
• Costly (compared to other RAID levels).
• Uses half of the disk space capacity.
• More complicated to set up.
ii) Read Operation with Cache
✓ When a host issues a read request, the front-end controller accesses the tag
RAM to determine whether the required data is available in the cache.
✓ If the requested data is found in the cache, it is called a read cache hit or
read hit and data is sent directly to the host, without any disk operation.
This provides a fast response time to the host (about a millisecond).
✓ If the requested data is not found in the cache, it is called a cache miss and
the data must be read from the disk.
✓ The back-end controller accesses the appropriate disk and retrieves the
requested data. Data is then placed in the cache and is finally sent to the
host through the front-end controller. Cache misses increase I/O response
time.
✓ A pre-fetch, or read-ahead, algorithm is used when read requests are
sequential. In a sequential read request, a contiguous set of associated
blocks is retrieved. Several other blocks that have not yet been requested
by the host can be read from the disk and placed into the cache in advance.
✓ The intelligent storage system offers fixed and variable pre-fetch sizes.
✓ In fixed pre-fetch, the intelligent storage system pre-fetches a fixed amount
of data. It is most suitable when I/O sizes are uniform.
In variable pre-fetch, the storage system pre-fetches an amount of data in
multiples of the size of the host request.
✓ Read performance is measured in terms of the read hit ratio, or the hit
rate, usually expressed as a percentage.
This ratio is the number of read hits with respect to the total number of read requests. A
higher read-hit ratio improves the read performance.
Write Operation with Cache:
Write operations with cache provide performance advantages over writing directly to
disks. When an I/O is written to the cache and acknowledged, it is completed in far
less time (from the host’s perspective) than it would take to write directly to disk
• Write-back cache: Data is placed in the cache and an acknowledgment is sent to the
host immediately. Later, data from several writes are committed (de-staged) to the
disk. Write response times are much faster, as the write operations are isolated from
the mechanical delays of the disk. However, uncommitted data is at risk of loss in the
event of cache failures.
• Write-through cache: Data is placed in the cache and immediately written to the
disk, and an acknowledgment is sent to the host. Because data is committed to disk as
it arrives, the risks of data loss are low but write response time is longer because of the
disk operations.
The cache can be bypassed under certain conditions, such as very large size write I/O.
In this implementation, if the size of an I/O request exceeds the pre-defined size, called
write aside size, writes are sent to the disk directly to reduce the impact of large writes
consuming a large cache area.
UNIT-III
Block-Based Storage System, File-Based Storage System, Object-Based and Unified Storage. Fibre Channel
SAN: Software-defined networking, FC SAN components and architecture, FC SAN topologies, link
aggregation, and zoning, Virtualization in FC SAN environment. Internet Protocol SAN: iSCSI protocol,
network components, and connectivity, Link aggregation, switch aggregation, and VLAN, FCIP protocol,
connectivity, and configuration.
3 What is meant by a file-based storage system? K1 CO3
4 Difference between Multimode fiber (MMF) cable and Single-mode fiber (SMF) cable. K1 CO3
Multimode fiber cable | Single-mode fiber cable
• This makes using block storage quite similar to storing data on a hard
drive within a server, except the data is stored in a remote location
rather than on local hardware.
• The block size is generally too small to fit an entire piece of data, and
so the data for any particular file is broken up into numerous blocks for
storage.
• When a file is requested, the management application uses addresses to
identify the necessary blocks and then compiles them into the complete
file for use.
• High efficiency: Block storage’s high IOPS and low latency make it
ideal for applications that demand high performance.
• Large file efficiency: For large files, such as archives and video files,
data must be completely overwritten when using file or object storage.
• Email servers: Email servers can take advantage of block storage’s
flexibility and scalability. In fact, in the case of Microsoft Exchange,
block storage is required due to the lack of support for network-attached
storage.
Core-Edge Fabric
In the core-edge fabric topology, there are two types of switch tiers in this
fabric.
• The tier at the edge fans out from the tier at the core. The nodes on
the edge can communicate with each other.
• The core tier usually comprises enterprise directors that ensure high
fabric availability.
The host-to-storage traffic has to traverse one and two ISLs in a two-tier and
three-tier configuration, respectively.
However, to maintain the topology, it is essential that new ISLs are created to
connect each edge switch to the new core switch that is added.
Benefits and Limitations of Core-Edge Fabric
The core-edge fabric provides one-hop storage access to all storage in
the system. Because traffic travels in a deterministic pattern, a core-
edge provides easier calculation of ISL loading and traffic patterns.
Because each tier’s switches are used for either storage or hosts, one can easily
identify which resources are approaching their capacity, making it easier
to develop a set of rules for scaling and apportioning.
Hop count represents the total number of devices a given piece of data
(packet) passes through.
A larger hop count means greater transmission delay as data
traverses from its source to its destination.
Mesh Topology
In a mesh topology, each switch is directly connected to other switches
by using ISLs. This topology promotes enhanced connectivity within the
SAN.
A mesh topology may be one of the two types: full mesh or partial
mesh. In a full mesh, every switch is connected to every other switch in
the topology. Full mesh topology may be appropriate when the number
of switches involved is small.
Hosts and storage can be located anywhere in the fabric, and storage
can be localized to a director or a switch in both mesh topologies.
Converged Network Adapters (CNAs)
They eliminate the need to deploy separate adapters and cables for FC
and Ethernet communications, thereby reducing the required number
of network adapters and switch ports.
A CNA offloads the FCoE protocol processing task from the compute
system, thereby freeing the CPU resources of the compute system for
application processing.
Both FCoE traffic (Ethernet traffic that carries FC data) and regular Ethernet traffic are
transferred through supported NICs on the hosts.
FCoE Switch
An FCoE switch has both Ethernet switch and FC switch functionalities.
It has a Fibre Channel Forwarder (FCF), an Ethernet Bridge, and a set of
ports that can be used for FC and Ethernet connectivity.
FCF handles FCoE login requests, applies zoning, and provides the
fabric services typically associated with an FC switch.
It also encapsulates the FC frames received from the FC port into the
Ethernet frames and decapsulates the Ethernet frames received from the
Ethernet Bridge to the FC frames.
Upon receiving the incoming Ethernet traffic, the FCoE switch inspects
the Ethertype of the incoming frames and uses that to determine their
destination.
If the Ethertype of the frame is FCoE, the switch recognizes that the
frame contains an FC payload and then forwards it to the FCF.
From there, the FC frame is extracted from the Ethernet frame and
transmitted to the FC SAN over the FC ports.
If the Ethertype is not FCoE, the switch handles the traffic as usual
Ethernet traffic and forwards it over the Ethernet ports.
FCoE ARCHITECTURE
Fibre Channel over Ethernet (FCoE) is a method of supporting
converged Fibre Channel (FC) and Ethernet traffic on a data center
bridging (DCB) network.
An FCoE frame is the same as any other Ethernet frame because the
Ethernet encapsulation provides the header information needed to
forward the frames. However, to achieve the lossless behavior that FC
transport requires, the Ethernet network must conform to DCB
standards.
These components can be further broken down into the following key
elements: node ports, cabling, interconnecting devices (such as FC switches
or hubs), storage arrays, and SAN management software
Node Ports
In Fibre Channel, devices such as hosts, storage, and tape libraries are all
referred to as nodes. Each node is a source or destination of information for
one or more nodes.
Cabling:
Multimode fiber cables are graded as OM1 (62.5 µm), OM2 (50 µm), and
laser-optimized OM3 (50 µm).
In an MMF transmission, multiple light beams traveling inside the cable
tend to disperse and collide.
This collision weakens the signal strength after it travels a
certain distance — a process known as modal dispersion.
In SMF, the small core and the single light wave limit modal dispersion.
Among all types of fibre cables, single-mode provides minimum
signal attenuation over a maximum distance (up to 10 km).
MMFs are generally used within data centers for shorter distance runs, while
SMFs are used for longer distances. MMF transceivers are less expensive as
compared to SMF transceivers.
Interconnect Devices
Hubs, switches, and directors are the interconnect devices commonly
used in SAN.
All the nodes must share the bandwidth because data travels through
all the connection points. Because of the availability of low-cost and
high-performance switches, hubs are no longer used in SANs.
Storage Arrays
The fundamental purpose of a SAN is to provide host access to
storage resources.
SAN management software manages the interfaces between hosts,
interconnect devices, and storage arrays.
FC ARCHITECTURE
Such performance is due to the static nature of channels and the high
level of hardware and software integration provided by the channel
technologies.
UNIT-IV
Quality Circles - Cost of Quality - Quality Function Deployment (QFD) - Taguchi quality loss function -
TPM - Concepts, improvement needs - Performance measures.
Benefits of using QFD
Products meet customer expectations better
Improved design traceability
Reduced lead times
Reduced product cost
Improved communication within the organization and with the customer
Reduction in design changes.
Improved quality.
Increased customer satisfaction.
Reduced rework.
Enables concurrent engineering.
Improved performance of the products.
Useful for gathering consumer requirements.
Reduced time to market.
Decreased design and manufacturing costs.
Reduces product development time by up to 50%
Creating a list of actions that will move the project forward
The QFD Team
When an organization decides to implement QFD, the project manager and team
members need to be able to commit a significant amount of time to it, especially in
the early stages. The priorities of the projects need to be defined and communicated
to all departments within the organization so team members can budget their time
accordingly. The scope of the project must also be clearly defined so
questions about why the team was formed do not arise. One of the most important
tools in the QFD process is communication.
There are two types of teams: new product teams and teams improving an existing product.
Teams are composed of members from
Marketing
Design
Quality
Testing
Purchase
Quality assurance
Finance
Production.
The existing product team usually has fewer members, because the QFD process
will only need to be modified. Time and inter-team communication are two very
important things that each team must utilize to their fullest potential. Using time
effectively is essential to getting the project done on schedule. Using
inter-team communication to its fullest extent will alleviate unforeseen problems
and make the project run smoothly.
Team meetings are very important in the QFD process. The team leader needs
to ensure that the meetings are run in the most efficient manner and that the
members are kept informed. There are advantages to shorter meetings, and
sometimes a lot more can be accomplished in a shorter meeting. Shorter meetings
allow information to be collected between sessions, which helps ensure that the right
information is entered into the QFD matrix. They also help keep the team
focused on a quality improvement goal.
Four phases of QFD process:
Quality Function Deployment (QFD) may be defined as a system for
translating consumer requirements into appropriate requirements at every stage,
from research through product design and development, to manufacture,
distribution, installation and marketing, sales and services.
The first phase of the QFD process is the product planning phase. For each of the
customer requirements, a set of design requirements is determined which, if
satisfied, will result in achieving the customer requirements.
The second phase is part development. The term part quality characteristics is
applied to any elements that can aid in measuring the evolution of quality. This
chart translates the design requirements into specific part details.
Key process operations are identified in the third phase. Production
requirements are determined from the key process operations.
Iterative QFD
The QFD process can be further extended. In the first iteration, WHATs and
HOWs were found. The HOWs are technical requirements. In the second iteration
of QFD, the HOWs can be treated as the WHATs, and more detailed technical
requirements can be found. These are the new HOWs, which will be very close to
the target for actual implementation.
[Figure: Iterative QFD. In the first iteration, WHATs are translated into HOWs; in the second iteration, those HOWs become the WHATs and yield new HOWs; in the third iteration, the process repeats to give still more detailed HOWs.]
Tips for success of QFD
A consultant is needed to guide the team through at least the first few projects.
The activity should be a formal activity and every member should take
part, fully prepared.
The meetings should be planned at regular intervals for shorter duration so
as to get the best out of this exercise through maintaining focus.
Eliciting and recording customer requirements is the key to success.
5. Maintain an accident-free environment.
6. Increase the suggestions by 3 times.
adjustments, defects and unavoidable downtimes. It also aims to achieve 30%
manufacturing cost reduction.
Tools used in Kaizen
1. Problem analysis
2. (Root cause ) Why - Why analysis
3. Summary of losses
4. Kaizen register
5. Kaizen summary sheet.
PILLAR -3 - PLANNED MAINTENANCE
It is aimed at having trouble-free machines and equipment producing defect-free
products for total customer satisfaction. This pillar breaks maintenance down into
four "families" or groups, which were defined earlier.
1. Preventive Maintenance
2. Breakdown Maintenance
3. Corrective Maintenance
4. Maintenance Prevention
PILLAR -4 – QUALITY MAINTENANCE
It is aimed at customer delight through the highest quality, achieved via defect-free
manufacturing. Focus is on eliminating nonconformances in a systematic manner,
much like Focused Improvement. We gain understanding of what parts of the
equipment affect product quality and begin to eliminate current quality concerns,
then move to potential quality concerns. The transition is from reactive to proactive.
The aim of QM activities is to set equipment conditions that preclude quality defects,
based on the basic concept of maintaining perfect equipment to maintain perfect
quality of products. The conditions are checked and measured in a time series to
verify that measured values are within standard values, in order to prevent defects.
PILLAR – 5 DEVELOPMENT MANAGEMENT / EARLY MANAGEMENT
Early management or development management helps in drastically reducing the
time taken to receive, install, and set up newly purchased equipment. Early
management can also be used for reducing the time to manufacture a new product
in the factory.
PILLAR 6 – TRAINING and EDUCATION
Education is given to operators to upgrade their skills. It is not sufficient to know only
"Know-How"; they should also learn "Know-Why". Through experience, operators gain
the "Know-How" to overcome a problem, that is, what is to be done. They do this
without knowing the root cause of the problem and why they are doing so. Hence it
becomes necessary to train them on "Know-Why". The employees should be trained
to achieve the four phases of skill. The goal is to create a factory full of experts.
The different phases of skill are
Phase – 1 Do not know
Phase – 2 Know the theory but cannot do.
Phase – 3 Can do but cannot teach
Phase – 4 Can do and also teach
PILLAR – 7 SAFETY, HEALTH AND ENVIRONMENT
Target
1. Zero accidents,
2. Zero health damage
3. Zero fires.
In this area, the focus is on creating a safe workplace and a surrounding area that is
not damaged by our processes or procedures. This pillar will play an active role in
each of the other pillars on a regular basis.
PILLAR -8 OFFICE TPM
Office TPM should be started after activating the four other pillars of TPM. Office
TPM must be followed to improve productivity and efficiency in the administrative
functions and to identify and eliminate losses. This includes analyzing processes and
procedures towards increased office automation. Office TPM addresses twelve
major losses. They are
1. Processing loss
2. Cost loss, including in areas such as procurement, accounts, marketing and sales,
leading to high inventories
3. Communication loss
4. Idle loss
5. Set-up loss
6. Accuracy loss
7. Office equipment breakdown
8. Communication channel breakdown, telephone and fax lines
9. Time spent on retrieval of information
3 Explain Taguchi’s quality loss function. How does it differ from the traditional approach of quality loss cost? K1 CO4
Taguchi methods are statistical methods developed by Genichi Taguchi
to improve the quality of manufactured goods.
Taguchi defines quality as “the loss imparted by the product to society
from the time the product is shipped”.
This loss includes costs to operate, failure to function, maintenance and
repair costs, customer dissatisfaction, injuries caused by poor design
and similar costs.
Defective products or parts that are detected, repaired and reworked
before shipment are not considered part of this loss.
The essence of the loss function concept is that whenever a product
deviates from its target performance, it generates a loss to society. This
loss is minimum when performance is right on target, but it grows
gradually as one deviates from the target.
Therefore the loss function philosophy says that for a manufacturer, the
best strategy is to produce products as close to the target as possible,
rather than aiming at being within specifications.
Taguchi’s approach Vs Traditional approach:
Consider two products and one is within the specified limits and the other is
just outside the specified limits. In the traditional approach, the product within the
limits is considered as a good product while the outside one is considered as bad
product.
Taguchi disagrees with this traditional approach. He believes that when a
product moves from its target value, that move causes a loss no matter if the move
falls inside or outside the specified limits.
Taguchi quantifies this loss by the quadratic loss function

L(x) = K (x - N)^2 ... (1)

Where,
L(x) = Loss function
K = Constant of proportionality
x = Quality characteristic of the selected product
N = Nominal (target) value of the chosen product
(x - N) = Deviation from the nominal value
To estimate the loss, the value of K in equation (1) should be determined first.

K = C / d^2 ... (2)

Where,
C = Loss associated with the specification limit and
d = Deviation of the specification limit from the target value.
The value of K determines the steepness of the quality loss function curve.
The loss function philosophy says that for a manufacturer, the best strategy is to
produce products as close to the target as possible, rather than aiming at being
within specifications.
Part-C (One Question) (15 Marks)
S.No Questions BTL CO
1 With a suitable example, explain the various stages of building a House of Quality matrix. K1 CO4
House of quality:
The primary planning tool used in QFD is the House of Quality (HOQ). The house of
quality converts the voice of the customer into product design characteristics. QFD
uses a series of matrix diagrams, also called ‘quality tables’ that resemble connected
houses.
House of quality is a graphic tool for defining the relationship between customer
desires and the firm/product capabilities. It is a part of the Quality function
Deployment (QFD) and it utilizes a planning matrix to relate what the customer wants
to how a firm (that produces the products) is going to meet these wants.
It looks like a house with the correlation matrix as its roof, customer wants versus
product features as the main part, and competitor evaluation as the porch. It is also
reported to increase cross-functional integration within organizations using it,
especially between marketing, engineering and manufacturing.
Parts of the House of Quality (HOQ)
Customer requirements
Prioritized customer requirements
Technical descriptors
Prioritized technical descriptors
Relationship between requirements and descriptors
Interrelationship between technical descriptors
The steps in building a house of quality are:
1. List Customer Requirements (WHAT’s)
2. List Technical Descriptors (HOW’s)
3. Develop a Relationship Matrix between WHAT’s and HOW’s
4. Develop an Inter-relationship Matrix between HOW’s
5. Competitive Assessments
a. Customer Competitive Assessments
b. Technical Competitive Assessments
6. Develop Prioritized Customer Requirements
7. Develop Prioritized Technical Descriptors
Constructing the House of Quality: The steps required for building the house of
quality are listed below.
1. List Customer Requirements (WHAT’s)
Define the customer and establish full identification of customer wants and
dislikes.
Measure the priority of these wants and dislikes using weighting scores.
Summarize these customer wants into a small number of major wants,
supported by a number of secondary and tertiary wants.
2. List Technical Descriptors (HOW’s)
Translate the identified customer wants into corresponding HOWs or design
characteristics.
Express them in terms of quantifiable technical parameters or product
specifications.
3. Develop a Relationship Matrix between WHAT’s and HOW’s
Investigate the relationships between the customer’s expectations (WHATs) and
the descriptors (HOWs)
If a relationship exists, categorize it as strong, medium or weak (or by assigning
scores).
4. Develop an Inter-relationship Matrix between HOW’s
Identify any inter-relationships between each of the technical descriptors.
These relationships are marked in the correlation matrix by either positive or
negative.
Here a positive correlation indicates that two technical descriptors support each
other, while a negative correlation indicates a trade-off between them.
5. Competitive Assessments
Compare the performance of the product with that of competitive products.
Evaluate the product and note the strong and weak points of the product
against its competitor’s product according to the customer.
These competitive assessment tables include two categories.
a. Customer Competitive Assessments
b. Technical Competitive Assessments
6. Develop Prioritized Customer Requirements
Develop the prioritized customer requirements corresponding to each customer
requirement in the house of quality on the right side of the customer
competitive assessment.
These prioritized customer requirements contain columns for importance to
customer, target value, scale-up factor, sales point and absolute weight.
7. Develop Prioritized Technical Descriptors
Develop the prioritized technical descriptors corresponding to each technical
descriptor in the house of quality below the technical competitive assessment.
These prioritized technical descriptors include degree of technical difficulty,
target value and absolute and relative weights.
At the end of HOQ analysis, the completed matrix contains much information about
which customer requirements are most important, how they relate to proposed new
product features and how competitive products compare with respect to these input and
output requirements.
UNIT-V
1 Write the significance of quality audit and list out the types of audits. K2 CO5
A quality audit examines the elements of a quality management system in
order to evaluate how well these elements comply with quality system
requirements.
Types:
Internal audit
External audit
2 What is meant by control charts and what are their uses? K1 CO5
A control chart is defined as a display of data in the order that they occur,
with statistically determined upper and lower limits of expected
common-cause variation. It is used to indicate special causes of process
variation and to monitor a process for maintenance.
It is used to keep a continuing record of a particular quality characteristic.
It is a picture of process over time.
3 Mention the elements of ISO 14000. K1 CO5
a. Global
i. Facilitate trade and remove trade barriers
ii. Improve environmental performance of planet earth
iii. Build consensus that there is a need for environmental
management and a common terminology for EMS.
b. Organizational
4 Write the need for ISO 9000. K2 CO5
The QS 9000 standard defines the fundamental quality expectations from
the suppliers of production and service parts. The QS 9000 standard uses
ISO 9000 as its base with much broader requirements.
ISO 9000 is needed to unify the quality terms and definitions used by
industrialized nations and use terms to demonstrate a supplier’s capability
of controlling its processes.
S.No Questions BTL CO
Quality Policy
The quality policy is a statement that defines the company’s commitment
to quality. It is a high-level document outlining the values and principles
regarding quality and providing a framework for setting quality
objectives.
A quality policy should clearly state the company’s desire to support high and
uniform quality in products and processes. It is important for people reading
the quality policy to see the company’s identity reflected in it.
Procedures
A procedure describes the step-by-step activities of processes within the
company. It includes elements such as the responsible departments or
functions and the frequency of the action.
These procedures provide clear guidelines, helping achieve efficiency,
quality output, and consistent performance while reducing
miscommunication and noncompliance with relevant requirements.
Work Instructions
Work instructions are the most detailed documents in the QMS structure.
They are typically written by the people who perform the work or people
who are responsible for leading those who perform the work. Work
instructions can be developed within the company or provided by customers.
The instructions help ensure tasks are carried out consistently and
effectively and meet the applicable requirements of the quality
management system.
Records
Records provide evidence that activities and events were conducted,
providing a historical record of actions.
By performing internal audits and reviewing the records, companies can
support evidence-based decision-making and demonstrate compliance
with requirements.
3 i)Discuss the various benefits of EMS.(6) K2 CO5
ii) List out the various requirements of ISO 14001.(7) K2 CO5
i)Benefits Of EMS.
Global Benefits
Facilitate trade & remove trade barriers
Improve environmental performance of planet earth
Build consensus that there is a need for environmental
management and a common terminology for EMS
Organizational Benefits
Assuring customers of a commitment to environmental
management
Meeting customer requirements
Improve public relations
Increase investor satisfaction
Increase market share
Conserving input materials & energy
Better industry/government relations
Low-cost insurance, easy attainment of permits & authorizations
ii)Requirements Of ISO 14001
Planning
Environmental Aspects
Legal & other Requirements
Objectives & Targets
Environmental Management Programs
Implementation & Operation
Structure & Responsibility
Training, Awareness & Competency
Communication
EMS Documentation
Document Control
Operational Control
Emergency Preparedness & Response
Checking & Corrective Action
Monitoring & Measuring
Nonconformance & Corrective & Preventive action
Records
EMS Audit
Management Review
Review of objectives & targets
Review of Environmental performance against legal & other
requirement
Effectiveness of EMS elements
Evaluation of the continuation of the policy
Part-C (One Question) (15 Marks)
Purchaser Supplied Product (Element 7)
Goods supplied by the customer have to be recorded. It must be ensured
that they are separately controlled and stored to prevent loss or damage.
Product Identification And Traceability (Element 8)
Where appropriate, purchased and delivered products or services must be
made traceable through documentation or batches.
Process Control (Element 9)
All processes of production or service that directly affect quality must be
documented and planned and carried out under controlled conditions to
add consistency to the process. Control of process parameters and product
characteristics must ensure that the specified requirements are met.
Inspection And Testing (Element 10)
The company must ensure receiving inspection and testing, in-process
inspection and testing, and final inspection and testing. These inspections
and tests must be recorded.
Test Equipment (Element 11)
The items of equipment used for inspection, measuring and testing must
be identified and recorded. They must be controlled, calibrated and
checked at prescribed intervals.
Inspection And Test Status (Element 12)
The status of the product or service must be identified at all stages as
conforming or nonconforming. This is to ensure that only conforming
products or services are dispatched or used.
Control Of Nonconforming Product (Element 13)
The company must establish procedures to ensure that nonconforming
products or services are prevented from unintended use. The disposal of
nonconforming products must be determined and recorded.
Corrective And Preventive Action (Element 14)
Procedures must be established to ensure effective handling of customer
complaints and corrective actions after identifying nonconformities. The
cause of nonconformities is to be investigated in order to prevent
recurrence. The corrective action shall be monitored to ensure its long-
term effectiveness. Preventive actions are to be initiated to eliminate
potential causes of nonconformance.
Handling, Storage, Packaging And Delivery (Element 15)
Documented procedures must be established to ensure that products are
not damaged and reach the customer in the required condition.
Control Of Quality Records (Element 16)
All records related to the quality system must be identified, collected and
stored together. The quality records demonstrate conformity with
specified requirements and verify effective operation of the quality
system.
Internal Quality Audits (Element 17)
The company must establish and maintain documented procedures for
planning and implementing internal quality audits to determine the
effectiveness of the quality system. The comments made by internal
auditors must be recorded and brought to the attention of the personnel
having responsibility in the area audited. Follow-up audit activities shall
verify and record the implementation and effectiveness of the corrective
action taken.
Training (Element 18)
The company shall establish and maintain documented procedures for
identifying training needs and must have a training record for each
employee.
Servicing (Element 19)
Where servicing is a specific requirement, the company must establish
and maintain documented procedures for performing, verifying and
reporting that the servicing meets the specified requirements.
Statistical Techniques (Element 20)
The company must establish and maintain documented procedures to
implement and control the application of statistical techniques which have
been identified as necessary for verifying process capability and product
characteristics.