Performance Evaluation Techniques


KARNATAKA STATE OPEN UNIVERSITY
Mukthagangotri, Mysore-570006
Department of Studies and Research in Management

Project Report on
PERFORMANCE EVALUATION TECHNIQUES

A STUDY WITH REFERENCE TO HINDUSTAN SPRING MANUFACTURING
COMPANY LIMITED

A project report submitted in partial fulfilment of the requirements for the award of the
Master of Business Administration degree, KSOU – 2024

Submitted by:
Yashaswini K
08P221013101302

Under the guidance of:

INTERNAL GUIDE
Samanth
Assistant Professor
Department of Studies and Research in Management
Karnataka State Open University
Mukthagangotri, Mysore-570006

EXTERNAL GUIDE
Thara K V
Human Resources (HR)
Hindustan Spring Manufacturing Company
Mysore-570002


Date: 30/10/2024

PROJECT CERTIFICATE

This is to certify that Ms. Yashaswini K, bearing the Roll Number
08P221013101302, has successfully completed the project work on "PERFORMANCE
EVALUATION TECHNIQUES – A Study with Reference to Hindustan Spring Manufacturing
Company, Mysore-570002" under the guidance of Mr. Samanth, Assistant Professor,
Karnataka State Open University, Mysore-570006 (internal guide), and Thara K V, HR
Manager, Training and Development, Hindustan Spring Manufacturing Company, Mysore-570002
(external guide). The project report is submitted to the Karnataka State Open
University in partial fulfilment of the requirements for the award of the Master of
Business Administration (MBA) degree, KSOU.

Chairman
Department of Studies and Research in Management
Mukthagangotri, Mysore-570006


DECLARATION

I, Yashaswini K, a student of MBA bearing the Roll Number
08P221013101302, have completed the project work on "PERFORMANCE
EVALUATION TECHNIQUES – A Study with Reference to Hindustan Spring Manufacturing
Company, Mysore-570002" under the guidance of Mr. Samanth, Assistant Professor,
Karnataka State Open University, Mysore-570006 (internal guide), and Thara K V, HR
Manager, Training and Development, Hindustan Spring Manufacturing Company, Mysore-570002
(external guide).

I also declare that this project work is towards the partial fulfilment of the
university regulations for the award of the Master of Business Administration
degree, KSOU.

Sincerely,

Date: 30/10/2024
Place: Mysore
Yashaswini K
08P221013101120


ACKNOWLEDGEMENT

I am honoured to submit my MBA project report titled "PERFORMANCE
EVALUATION TECHNIQUES – A Study with Reference to Hindustan Spring Manufacturing
Company, Mysore-570002" as a part of my Master of Business Administration
programme. I would like to express my heartfelt gratitude to the Chairman of the
Department of Studies and Research in Management, Karnataka State Open
University, Mysore, for supporting me in the successful completion of this project.

I extend my sincere thanks to Mr. Samanth, Assistant Professor, Karnataka
State Open University, Mysore (internal guide), and Thara K V, HR Manager, Training and
Development, Hindustan Spring Manufacturing Company, Mysore (external guide),
for their invaluable guidance, insightful feedback and continuous encouragement; their
expertise and advice have been pivotal in shaping this project.

I am also grateful to Hindustan Spring Manufacturing Company for their support
and for providing a conducive learning environment.

Thank you all for your contributions to my academic journey.

Sincerely,

Date: 30/10/2024
Place: Mysore
Yashaswini K
08P221013101120

INTERNAL GUIDE CERTIFICATE

This is to certify that the project work entitled "PERFORMANCE
EVALUATION TECHNIQUES – A Study with Reference to Hindustan Spring Manufacturing
Company, Mysore" is based on an original study conducted by Ms. Yashaswini K,
bearing the Roll Number 08P221013101302, under my guidance and
supervision. The work has been satisfactory and is recommended for
consideration towards partial fulfilment of the requirements for the MBA
degree, KSOU.

Date: 30/10/2024
Place: Mysore

Samanth
Assistant Professor
Karnataka State Open University
Mysore-570006


PROJECT COMPLETION CERTIFICATE

This is to certify that the project work entitled "PERFORMANCE
EVALUATION TECHNIQUES – A Study with Reference to Hindustan Spring
Manufacturing Company, Mysore" is based on an original study conducted by
Ms. Yashaswini K, bearing the Roll Number 08P221013101302, in our
company. The work has been satisfactory and is recommended for
consideration towards partial fulfilment of the requirements for the MBA
degree, KSOU.

Date: 30/10/2024
Place: Mysore

Chairman
Department of Studies and Research in Management
KSOU, Mysore


TABLE OF CONTENTS

Chapter 1: Introduction; Executive Summary
Chapter 2: Performance Measurement Techniques
Chapter 3: Performance Modelling Techniques
Chapter 4: Workloads and Benchmarks
Chapter 5: Impact of Succession Planning on Organisational Performance Evaluation
Chapter 6: Performance Evaluation – Methods and Techniques Survey
Chapter 7: Theoretical Background of the Study
Chapter 8: Summary of Findings, Suggestions, and Conclusion
Bibliography
EXECUTIVE SUMMARY

Performance evaluation (PE) is a key factor in improving the quality of work; it inspires
staff and makes them more engaged. PE also provides a foundation for promotions and increments in the
development of an organization and in employee succession plans. Performance appraisal systems vary
according to the nature of the work and the designation within an organization. This paper presents a
comprehensive survey of classical performance appraisal methods, such as the ranking method and the graphic rating scale, as
well as modern methods such as the 360-degree appraisal and Management by Objectives (MBO). The survey
also provides a comprehensive review of various fuzzy hybrid Multi-Criteria Decision Making (MCDM)
techniques, such as the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS and
FTOPSIS), the Fuzzy Analytic Hierarchy Process (AHP and FAHP), multistage and cascade fuzzy techniques,
the hybrid Neuro-Fuzzy (NF) technique and the Type-2 fuzzy technique. Furthermore, this paper introduces a new
proposal for the performance evaluation of Sudanese universities and academic staff using fuzzy logic.
Succession planning also improves decision-making processes, since successors are
typically developed over time, enabling them to acquire a thorough understanding of the company’s culture,
values, and long-term goals. This promotes continuity and alignment with organizational objectives.
Additionally, companies with a strong succession plan are generally more agile, capable of adapting to shifts
in the competitive landscape with leaders prepared to take on critical roles.

An appraisal typically considers knowledge in a particular field, skills to achieve a goal, and a target-achieving attitude in order
to decide on the employee’s performance level. Since these factors are mostly uncertain and vague in nature, a fuzzy
performance appraisal method is more appropriate. Several appraisal methods are used for employee
performance appraisal, such as the graphic rating scale method, the forced choice distribution method, and the behavioural
checklist method. Some methods that were utilized in the past are not currently used, such as ranking,
critical incidents, and narrative essays. New methods have been suggested for performance appraisal,
such as MBO and assessment centres. The survey also reviews and classifies some evaluation
techniques used in a multi-criteria environment.
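As a minimal sketch of such a fuzzy appraisal (an illustration of the general idea, not of any specific method surveyed here), vague factor ratings can be passed through triangular membership functions and defuzzified into an overall score. The factor names, weights and scores below are hypothetical:

```python
# Illustrative fuzzy appraisal sketch. All factor names, weights and ratings
# are hypothetical; ratings are on a 0-10 scale.

LEVELS = {"Low": 0.0, "Medium": 5.0, "High": 10.0}  # peak score of each fuzzy set

def tri(x, a, b, c):
    """Triangular membership: rises from a to peak b, then falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(score):
    """Degree to which a crisp rating belongs to each fuzzy level."""
    return {
        "Low":    tri(score, -5.0, 0.0, 5.0),
        "Medium": tri(score, 0.0, 5.0, 10.0),
        "High":   tri(score, 5.0, 10.0, 15.0),
    }

def defuzzify(mu):
    """Weighted average of level peaks (centroid-style defuzzification)."""
    den = sum(mu.values())
    return sum(m * LEVELS[lvl] for lvl, m in mu.items()) / den if den else 0.0

def appraise(factor_scores, weights):
    """Overall score: weight-averaged defuzzified factor scores."""
    total_w = sum(weights.values())
    return sum(defuzzify(memberships(s)) * weights[f]
               for f, s in factor_scores.items()) / total_w

scores = {"knowledge": 8.0, "skills": 6.0, "attitude": 9.0}
weights = {"knowledge": 0.4, "skills": 0.4, "attitude": 0.2}
print(round(appraise(scores, weights), 2))  # 7.4
```

A real fuzzy appraisal system would also fuzzify the weights and use rule bases; this sketch only shows the membership/defuzzification core.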

Employee performance relates to the job duties expected of a worker and how well those
duties are accomplished. Many managers assess employee performance on an annual or quarterly basis
to help identify suggested areas for improvement. A performance appraisal (PA) system depends on the
type of business of an organization. PA mostly relates to the product output of a company or to the end
users of an organization.
CHAPTER 1
INTRODUCTION
State-of-the-art high-performance microprocessors contain tens of millions of transistors and
operate at frequencies close to 2 GHz. These processors perform several tasks in overlap, employ significant
amounts of speculation, out-of-order execution, and other microarchitectural techniques, and are true
marvels of engineering. Designing and evaluating these microprocessors is a major challenge, especially
because one second of program execution on these processors involves several billion instructions, and
analysing one second of execution may involve dealing with tens of billions of pieces of information.
In general, the design of microprocessors and computer systems involves several steps: (i) understanding the
applications and workloads that the systems will be running, (ii) innovating potential designs, (iii) evaluating the
performance of the candidate designs, and (iv) selecting the best design. The large number of potential
designs and the constantly evolving nature of workloads have resulted in designs being largely ad hoc. In
this article, we investigate the major techniques used in the performance evaluation process.
It should be noted that performance evaluation is needed at several stages of the design. In early stages,
when the design is being conceived, performance evaluation is used to make early design trade-offs.
Usually, this is accomplished by simulation models, because building prototypes of state-of-the-art
microprocessors is expensive and time consuming. Several design decisions are made before any
prototyping is done. Once the design is finalized and is being implemented, simulation is used to evaluate
functionality and performance of subsystems. Later, performance measurement is done after the product is
available to understand the performance of the actual system to various real-world workloads and to
identify modifications to incorporate in future designs.
Performance evaluation can be classified into performance modelling and performance measurement, as
illustrated in Table 1. Performance measurement is possible only if the system of interest is available for
measurement and only if one has access to the parameters of interest. Performance measurement may
further be classified into on-chip hardware monitoring, off-chip hardware monitoring, software monitoring
and microcode instrumentation. Performance modelling is typically used when actual systems are not
available for measurement or if the actual systems do not have test points to measure every detail of
interest. Performance modelling may further be classified into simulation modelling and analytical
modelling. Simulation models may further be classified into numerous categories depending on the
mode/level of detail of simulation. Analytical models use probabilistic models, queueing theory, Markov
models or Petri nets.

Table 1. A Classification of Performance Evaluation Techniques

Performance Measurement
    Microprocessor On-chip Performance Monitoring Counters
    Off-chip Hardware Monitoring
    Software Monitoring
    Micro-coded Instrumentation

Performance Modelling
    Simulation
        Trace Driven Simulation
        Execution Driven Simulation
        Complete System Simulation
        Event Driven Simulation
        Software Profiling
    Analytical Modelling
        Probabilistic Models
        Queueing Models
        Markov Models
        Petri Net Models
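As an illustrative sketch of the analytical-modelling branch in Table 1, a basic M/M/1 queueing model (Poisson arrivals, exponentially distributed service times, one server) can predict utilization and response time in a few lines. The arrival and service rates below are assumed for illustration:

```python
# Minimal M/M/1 analytical model (assumed example, not from this report):
# lam = arrival rate (jobs/s), mu = service rate (jobs/s), lam < mu required.

def mm1(lam, mu):
    """Return (utilization, mean jobs in system, mean response time)."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu            # server utilization
    n = rho / (1.0 - rho)     # mean number of jobs in the system
    t = 1.0 / (mu - lam)      # mean response time per job
    return rho, n, t

# e.g. 80 requests/s arriving at a server that completes 100 requests/s:
rho, n, t = mm1(80.0, 100.0)
print(round(rho, 2), round(n, 2), round(t, 3))  # 0.8 4.0 0.05
```

This is exactly the trade-off analytical models make: closed-form answers in microseconds, at the cost of strong simplifying assumptions (here, Poisson arrivals and memoryless service).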

There are several desirable features that performance modelling/measurement techniques and tools should
possess.
 They must be accurate. It is easy to build models that are heavily sanitized; however, such
models will not be accurate.
 They must be non-invasive. The measurement process must not alter the system or degrade the
system's performance.
 They must not be expensive. Building the performance measurement facility should not cost
significant amount of time or money.
 They must be easy to change or extend. Microprocessors and computer systems constantly undergo
changes, and it must be easy to extend the modelling/measurement facility to include the upgraded
system.
 They must not need source code of applications. If tools and techniques necessitate source code, it
will not be possible to evaluate commercial applications where source is not often available.
 They should measure all activity including kernel and user activity. Often it is easy to build tools that
measure only user activity. This was acceptable in traditional scientific and engineering workloads,
however in database, web server, and Java workloads, there is significant operating system activity,
and it is important to build tools that measure operating system activity as well.
 They should be capable of measuring a wide variety of applications including those that use signals,
exceptions and DLLs (Dynamically Linked Libraries).
 They should be user-friendly. Hard-to-use tools are often under-utilized. Hard-to-use tools also result
in more user error.
 They should be fast. If a performance model is very slow, long-running workloads which take hours
to run may take days or weeks to run on the model. If an instrumentation tool is slow, it can be
invasive.
 Models should provide control over aspects that are measured. It should be possible to selectively
measure what is required.
 Models and tools should handle multiprocessor systems and multithreaded applications. Dual and
quad-processor systems are very common nowadays. Applications are becoming increasingly
multithreaded especially with the advent of Java, and it is important that the tool handles these.
 It will be desirable for a performance evaluation technique to be able to evaluate the performance of
systems that are not yet built.

Many of these requirements are often conflicting. For instance, it is difficult for a mechanism to be fast and
accurate. Consider mathematical models. They are fast; however, several simplifying assumptions go into
their creation, and often they are not accurate. Similarly, it is difficult for a tool to be non-invasive
and user-friendly. Many users like graphical user interfaces (GUIs); however, most instrumentation and
simulation tools with GUIs are slow and invasive.

Benchmarks and metrics to be used for performance evaluation have always been interesting and
controversial issues. There has been a lot of improvement in benchmark suites since 1988. Before that,
computer performance evaluation was largely done with small benchmarks such as kernels extracted from
applications (e.g., the Lawrence Livermore Loops), the Dhrystone and Whetstone benchmarks, LINPACK, Sorting, the
Sieve of Eratosthenes, the 8-queens problem, the Tower of Hanoi, etc. [1]. The Standard Performance Evaluation
Cooperative (SPEC) consortium and the Transaction Processing Performance Council (TPC), formed in 1988, have made
available several benchmark suites and benchmarking guidelines to improve the quality of benchmarking.
Several state-of-the-art benchmark suites are described in Section 4.

Another important issue in performance evaluation is the choice of performance metric. For a system-level
designer, execution time and throughput are two important performance metrics. Execution time is generally
the most important measure of performance. Execution time is the product of the number of instructions,
the cycles per instruction (CPI) and the clock period. The throughput of an application is a more important metric,
especially in server systems. In servers that serve the banking industry, the airline industry, or other similar
businesses, what is important is the number of transactions that can be completed in unit time. Such servers,
typically called transaction processing systems, use transactions per minute (tpm) as a performance metric.
MIPS (Millions of Instructions Per Second) and MFLOPS (Millions of Floating-Point Operations Per
Second) have been very popular measures of performance in the past. Both are very simple and
straightforward to understand and hence have been used often; however, they do not contain all three
components of program execution time and hence are incomplete measures of performance. There are also
several low-level metrics of interest to microprocessor designers, to help them identify performance
bottlenecks and tune their designs. Cache hit ratios, branch misprediction ratios, the number of off-chip memory
accesses, etc., are examples of such measures.
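To make the arithmetic concrete, here is a small sketch of the relationship above. The workload numbers (2 billion instructions, a CPI of 1.5, a 2 GHz clock) are hypothetical:

```python
# Execution time = instructions x CPI x clock period, where the clock period
# is 1 / clock frequency. MIPS is then derived from the execution time.

def execution_time(instructions, cpi, clock_hz):
    """Seconds to run the program."""
    return instructions * cpi / clock_hz

def mips(instructions, exec_time_s):
    """Millions of instructions executed per second."""
    return instructions / exec_time_s / 1e6

# Hypothetical workload: 2 billion instructions, CPI 1.5, 2 GHz processor.
t = execution_time(2e9, 1.5, 2e9)
print(t)                         # 1.5 seconds
print(round(mips(2e9, t), 1))    # 1333.3 MIPS
```

Note how MIPS hides the CPI and clock components: a lower-MIPS machine with a richer instruction set can still finish the same program sooner, which is why execution time remains the primary metric.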

Another major problem is the issue of reporting performance with a single number. A single number is easy
to understand and easy for the trade press to use. The use of several benchmarks also makes it necessary to
find a mean. The arithmetic mean, geometric mean and harmonic mean are three ways of finding the central
tendency of a group of numbers; however, it should be noted that each of these means should be used in
appropriate conditions, depending on the nature of the numbers to be averaged. The simple
arithmetic mean can be used to find the average execution time from a set of execution times. The geometric mean
can be used to find the central tendency of metrics that are in the form of ratios (e.g., speedup), and the harmonic
mean can be used to find the central tendency of measures that are in the form of a rate (e.g., throughput).
Cragon [2] and Smith [3] discuss the use of the appropriate mean for a given set of data. Cragon [2] and
Patterson and Hennessy [4] illustrate several mistakes one could possibly make while finding a single
performance number.
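As a sketch, the three means applied to the kinds of data each suits; the values below are made up for illustration, and `statistics.geometric_mean` requires Python 3.8+:

```python
# Each mean matched to its data: arithmetic for times, geometric for ratios,
# harmonic for rates. Sample values are hypothetical.
from statistics import mean, geometric_mean, harmonic_mean

exec_times  = [2.0, 4.0, 6.0]   # seconds (absolute quantities)
speedups    = [2.0, 8.0]        # ratios relative to a baseline machine
throughputs = [30.0, 60.0]      # transactions per second (rates)

print(round(mean(exec_times), 3))           # 4.0
print(round(geometric_mean(speedups), 3))   # 4.0
print(round(harmonic_mean(throughputs), 3)) # 40.0
```

Using the wrong mean misleads: the arithmetic mean of the two throughputs above would be 45.0, overstating the rate a machine sustains over equal amounts of work on each workload.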

The rest of this article is organized as follows. Section 2 describes performance measurement techniques
including hardware on-chip performance monitoring counters on microprocessors. Section 3 describes
simulation and analytical modelling of microprocessors and computer systems. Section 4 presents several
state-of-the-art benchmark suites for a variety of workloads. Due to limitations of space in this article, we
describe some typical examples of tools and techniques and provide the reader with pointers for more
information.

Performance evaluation process:

 Establish a performance management timeline.


 Determine who should evaluate employee performance.
 Choose performance review questions.
 Set performance management goals.
 Consider an employee feedback process.
 Introduce employee and manager training.
 Tie it together with performance management software.
1. Establish a performance management timeline:
How often you should conduct formal reviews will depend on your organization’s strategic
objectives, business model, sales cycles and other criteria that will vary from one company to another. We
believe that your performance management process should be as unique as your company is. While reviews
have traditionally been done on an annual basis, many people believe that this is far too infrequent—
including employees who prefer to have more frequent development-related discussions with their
managers.

2. Determine who should evaluate employee performance:


Who should conduct performance reviews and evaluate employee performance? The answer to
this question can be both simple and complex. Clearly, those who evaluate employee performance should be
those most familiar with the work the employee is doing. But, while it may seem that the manager is the
obvious choice, the truth is that others may be more aware of employee performance—peers, mentors, even
customers. This is the reason that 360-degree reviews have become common in many organizations; it’s a
process that involves gathering feedback from a wide range of people who can offer insights into
employees’ performance.

3. Choose performance review questions:


Asking the right questions as part of the performance review process is critical to ensuring that the
feedback will be relevant and aligned with organizational and individual goals. Start with a focus on your
purpose for the review. Once you’re clear about your intent, it’s important to frame questions so they are
clear and non-biased. The intent of each question should align with the intent of your performance
management strategy. Another aspect of performance review questions is ratings. Having a scale with an
odd number of choices will result in a neutral option. If you want to “force” a positive/negative choice, use
an even number of options—for instance, a 4-point versus a 5-point scale.
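The odd-versus-even point can be sketched in a few lines; the scale labels below are hypothetical:

```python
# An even-length scale has no exact middle choice, so every rating leans
# positive or negative; an odd-length scale offers a neutral midpoint.

scale_4 = ["Poor", "Fair", "Good", "Excellent"]             # even: no neutral option
scale_5 = ["Poor", "Fair", "Neutral", "Good", "Excellent"]  # odd: neutral midpoint

def has_neutral_midpoint(scale):
    """True when the scale has an exact middle choice."""
    return len(scale) % 2 == 1

print(has_neutral_midpoint(scale_4), has_neutral_midpoint(scale_5))  # False True
```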

4. Set performance management goals:


Establish your goals or approach to goal setting. Will you include some form of goal setting
and goal check-ins to help translate performance discussions into action? This is an important consideration
because performance reviews should be more forward-looking than backward-looking. In practice, though,
too often reviews focus more on past behaviour. We believe that reviews should be more developmental—
your employees do too. Managers should work with employees to develop performance management goals
that are both aligned with organizational goals and reflect employees’ own personal and professional
desires. As you set goals, keeping the SMART acronym in mind can help ensure that they’re focused and
specific.

5. Consider an employee feedback process:


While your formal performance management system may occur on a semi-annual, monthly,
or some other timeline, continuous feedback is important. It’s important to think about your employee
feedback process and whether it offers feedback, guidance and both positive and constructive feedback
regularly enough to ensure employees are getting the coaching and counselling they need. Today’s
employees, more than ever, crave that kind of input from their managers and others. Your employee
feedback process may occur through 1-on-1s or check-ins, as part of monthly dashboard reviews, etc.
Cultivating a culture of continuous feedback can help ensure employees are focused on the right goals and
objectives and have the resources and support to be successful.

6. Introduce employee and manager training:


Performance management systems are only as good as the interactions they drive between
managers and employees. Training everyone, but especially managers, to deliver quality and effective
feedback is important to ensure your performance management process is working the way you want it to.
Don’t assume that managers—even seasoned managers—have the knowledge and competencies they need
to conduct performance evaluations effectively, especially if their experience comes from working in other
organizations. Again, each organization is unique and each organization’s performance review process will
be different. Make sure you’re taking the time to train your managers, and employees, to participate in
reviews that will drive results.

7. Tie it together with performance management software:


Successful performance management is about more than just forms and meetings. It’s dependent
on a number of steps and processes all coming together to create an aligned and smoothly flowing system.
How will you alert employees, and managers, about what they need to do next? How will you follow up
with managers who are falling behind? How will you stage reviews when you have more than one source of
feedback? How will you ensure anonymous feedback doesn't get released? Will you have HR or managers
sign-off on reviews? How will you store and control access to the data? How will you analyse the data? And
on and on. Keeping track of, and staying on top of the many moving parts of an effective, and continuous,
performance management system can be aided significantly by tying it all together with performance
management software. PerformYard supplies the flexibility and personalization you need.

Evaluating Employee Performance:


Companies need a standard evaluation framework in place to evaluate the performance
of an employee effectively and review each employee against those standard performance
metrics. Here’s a step-by-step guide to effectively evaluating employees:
1. Set Performance Standards
2. Set Specific Goals
3. Take Notes Throughout the Year
4. Be Prepared
5. Be Honest and Specific with Criticism
6. Don’t Compare Employees
7. Evaluate the Performance, Not the Personality
8. Have a Conversation
9. Ask Specific Questions
10. Give Ongoing Feedback

1. Set Performance Standards:


It’s important to set clear performance standards that outline what an employee in a
specific role is expected to accomplish and how the work should be done. The same
standards must apply to every employee in the same position. All standards should be
achievable and relate directly to the person’s job description.
2. Set Specific Goals:

You should also set specific goals for each employee. Unlike performance
standards, which can apply to multiple workers, goals are particular to the strengths and
weaknesses of the individual employee and can help them improve their skills or learn new
ones.
Working to achieve career goals and overcome challenges will help workers to feel more
engaged with their job while providing higher job satisfaction and better productivity. Work
with each employee to set goals that are reasonable and relevant to their position to set them
up for success.
3. Take Notes Throughout the Year:
Track the performance of your employees and create a performance file for each
worker. Keep records of notable accomplishments or incidents, whether positive or negative.
Remember that you can give immediate feedback to employees when something stands out
as well; you don’t have to wait until the year-end performance review process to give praise
or constructive criticism.

4. Be Prepared
When it comes time to give an employee evaluation, it’s best to prepare for the
meeting beforehand. Review your documentation for the employee before the meeting and
note what you want to discuss with the employee.
The performance review should be mostly about the positive elements of the employee’s
performance, with helpful advice on future improvement. After all, if the worker’s last
performance review was mostly negative, they probably wouldn’t still be working for you.

5. Be Honest and Specific with Criticism


When you do need to give criticism in an evaluation, be honest and straightforward
when giving feedback. Don’t try to sugarcoat or downplay the situation, which can confuse
the employee. Give clear examples and then provide helpful, specific advice on how the
employee can grow and improve employee performance in the future.

6. Don’t Compare Employees:


The purpose of an employee evaluation is to review the performance of each staff
member against a set of standard performance metrics. It’s not helpful to compare one
employee’s performance to another employee’s performance, and doing so can lead to
unhealthy competition and resentment. Always circle back to your evaluation framework to
evaluate one employee’s performance, not the performance of other workers.
Utilizing employee performance evaluations and review templates can keep things on equal
footing, as you will ask the same questions and analyze the same metrics between both
employees.

7. Evaluate the Performance, Not the Personality:

Your evaluation should focus on how well the employee performs their job rather than their personality traits.
When you judge the employee’s personality, they can feel attacked, and the conversation can
turn hostile.
For example, rather than providing feedback about an employee being immature or
emotional, it’s more productive to give specific examples of the employee’s actions in the
workplace that demonstrate those characteristics. Don’t make criticism personal; always tie
it back to the work.

8. Have a Conversation:
An employee evaluation shouldn’t be a one-way street where the manager gives
constructive feedback and the employee listens without responding. Instead, a productive
employee evaluation should be a conversation between the two of you. Listen to your
employees’ concerns and how they’d like their careers to grow. Find out how you and the
larger team can help employees meet their career goals.
You may also ask employees to self-evaluate how they think they performed at their job for
the year. A performance review should allow employees to review the workplace, their
managers, and themselves and reflect on their career growth.

9. Ask Specific Questions:


To foster productive conversations with employees during the evaluation period, it
can help to enter the room with specific questions you’d like to discuss with the worker.
Here are some questions you can ask workers to spark conversation and receive valuable
information:
 What do you hope to achieve within the company this year?
 What resources or support do you need from the department to reach your goals?
 What will your biggest challenges be in meeting your business goals this year?
 How often would you like to receive feedback?
 How can I be a better manager to you?
 What do you enjoy about your work?
 What work or personal goals have you recently achieved?
 Is there an experience or action you are most proud of since our last review?
 What are your long-term career goals, and how can the organization help you achieve
them?
 What new skills would you like to develop this year? Is there training we can provide
to help develop those skills?
 What brings you joy in the work you do?
 What project or goals are you interested in working on in the future?
Asking questions will allow your staff to express their feelings, concerns, and opinions
without fear and make them feel heard.

10. Give Ongoing Feedback:


Ideally, employee evaluation is an ongoing process, not a one-time task. Offer constructive
feedback regularly and touch base with a worker to see how they’re working toward their
yearly goals to help improve worker morale and keep employees on track while improving
work quality. Opening up communication between yourself and your team will improve the
company’s culture and make staff more willing to come to you if any issues arise before they
become big problems.

Benefits of Tracking Employee Performance:


Benefits of tracking employee performance include:
 Staying up-to-date and on the same page with your team members.
 You have the opportunity to identify future leaders and reward good work.
 You will gain important information that allows you to set reasonable targets, which
can improve morale and reduce turnover.
 You will find out about processes that may hinder your business’s productivity.
 You may find opportunities to provide further training and support to team members.
 Knowing they are tracked may help employees stay focused, improving their
efficiency.
 Employee reviews let you speak one-on-one with each team member to receive
feedback on their role and expectations.
 Reviews allow you to ensure each employee understands your expectations of them.
 Effective reviews can foster a collaborative work environment.

What are the 5 areas of improvement?

The top five areas for improvement in job performance are:


 Time management – using time effectively and productively
 Delegation – knowing how to prioritize tasks and how to distribute work
appropriately
 Organization – the ability to keep things on track without missing or forgetting
anything
 Communication – being able to define goals, express concerns, and give instructions
clearly
 Engagement – enthusiasm and involvement in work projects

Management by Objectives:
Management by objectives (MBO) is a strategic management model that aims to
improve the performance of an organization by clearly defining objectives that are agreed to
by both management and employees. According to the theory, having a say in goal setting
and action plans encourages participation and commitment among employees, and aligns
objectives across the organization.

Key Takeaways:

 Management by objectives (MBO) is a process in which a manager and an employee
agree on specific performance goals and then develop a plan to reach them.
 It is designed to align objectives throughout an organization and boost employee
participation and commitment.

 There are five steps: define objectives, share them with employees, encourage
employees to participate, monitor progress, and finally, evaluate performance and
reward achievements.

 Critics of MBO argue that it incentivizes employees to achieve these goals by any
means necessary, often at the cost of the company.

Understanding Management by Objectives (MBO)

Management by objectives (also known as management by planning) is the
establishment of a management information system (MIS) to compare actual performance
and achievements with the defined objectives. Practitioners claim the major benefits of MBO
are that it improves employee motivation and commitment and allows for better
communication between management and employees.
However, a cited weakness of MBO is that it unduly emphasizes the setting of goals to
attain objectives, rather than working on a systematic plan to do so. Critics of MBO, such
as W. Edwards Deming, argue that setting particular goals like production targets leads
workers to meet those targets by any means necessary, including shortcuts that result in poor
quality.
In his book that coined the term, Peter Drucker set forth several principles for MBO.
Objectives are laid out with the help of employees and are meant to be challenging but
achievable. Employees receive daily feedback, and the focus is on rewards rather than
punishment. Personal growth and development are emphasized, rather than negative
feedback for failing to reach objectives.

Steps of MBO:
MBO outlines five steps that organizations should use to put the management technique into
practice

1. Either determine or revise organizational objectives for the entire company. This
broad overview should be derived from the firm’s mission and vision.
2. Translate the organizational objectives to employees. In 1981, George T. Doran used
the acronym SMART (specific, measurable, acceptable, realistic, time-bound) to
express the concept.
3. Stimulate the participation of employees in setting individual objectives. After the
organization’s objectives are shared with employees from the top to the bottom,
employees should be encouraged to help set their own objectives to achieve these
larger organizational objectives. This gives employees greater motivation since they
have greater empowerment.
4. Monitor the progress of employees. In this way, managers can measure and track the
goals set by employees. As step two states, a key component of the objectives is that
they are measurable for employees and managers to determine how well they are met
across a given timeframe.
5. Evaluate and reward employee progress. This step includes honest feedback on what
was achieved and not achieved for each employee.
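The five steps above can be sketched as a simple workflow. This is a minimal illustration only; the class names, fields, and sample goals below are assumptions for the sketch, not part of any MBO standard or tool.

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    """One objective agreed between manager and employee (steps 1-3)."""
    description: str
    target: float          # step 2: objectives must be measurable
    actual: float = 0.0    # progress recorded while monitoring (step 4)

@dataclass
class Employee:
    name: str
    objectives: list = field(default_factory=list)

def set_objectives(employee, company_goals):
    # Steps 1-3: cascade company goals down and let the employee
    # take ownership of objectives that support them.
    for goal, target in company_goals:
        employee.objectives.append(Objective(goal, target))

def evaluate(employee):
    # Step 5: honest feedback on what was and was not achieved.
    return {o.description: o.actual >= o.target for o in employee.objectives}

emp = Employee("A. Jones")
set_objectives(emp, [("close support tickets", 120),
                     ("write knowledge-base articles", 10)])
emp.objectives[0].actual = 130   # step 4: monitor progress
emp.objectives[1].actual = 7
print(evaluate(emp))
# {'close support tickets': True, 'write knowledge-base articles': False}
```

The point of the sketch is step 2: because each objective carries a numeric target, step 5 reduces to a simple, arguable-free comparison.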

Advantages and Disadvantages of MBO:

Advantages:

 Employees take pride in their work and are assigned goals they know they can
achieve that match their strengths, skills, and educational experiences.
 Assigning tailored goals brings a sense of importance to employees, boosting
their output and loyalty to the company.
 Communication between management and employees is increased.
 Management can create goals that lead to the success of the company.

Disadvantages:
 As MBO is focused on goals and targets, it often ignores other parts of a
company, such as the culture of conduct, a healthy work ethos, and areas for
involvement and contribution.
 Strain is increased on employees to meet the goals in a specified time frame.
 Employees are encouraged to meet targets by any means necessary, meaning
that shortcuts could be taken and the quality of work compromised.
 If management solely relies on MBO for all management responsibilities, it
can be problematic for areas that don’t fit under MBO.
CHAPTER 2
Performance Measurement

2.Performance measurement:

Performance measurement is used for understanding systems that are already built or prototyped. There are
two major purposes performance measurement can serve: (i) tune this system or systems to be built, and (ii)
tune the application if source code and algorithms can still be changed. Essentially, the process involves (i)
understanding the bottlenecks in the system that has been built, (ii) understanding the applications that are
running on the system and the match between the features of the system and the characteristics of the
workload, and (iii) innovating design features that will exploit the workload features. Performance
measurement can be done via the following means:
 Microprocessor on-chip performance monitoring counters
 Off-chip hardware monitoring
 Software monitoring
 Microcoded instrumentation

2.1 On-chip Performance Monitoring Counters

All state-of-the-art high performance microprocessors, including Intel's Pentium III and Pentium 4, IBM's
POWER3 and POWER4 processors, AMD's Athlon, Compaq's Alpha, and Sun's UltraSPARC processors,
incorporate on-chip performance monitoring counters that can be used to understand the
performance of these microprocessors while they run complex, real-world workloads. This ability
overcomes a serious limitation of simulators: they often cannot execute complex workloads. Now,
complex run-time systems involving multiple software applications can be evaluated and monitored very
closely. All microprocessor vendors nowadays release information on their performance monitoring
counters, although the counters are not part of the architecture.

For illustration of on-chip performance monitoring, we use the Intel Pentium processors. The
microprocessors in the Intel Pentium family contain two performance monitoring counters. These counters
can be read with special instructions (e.g., RDPMC) on the processor. The counters can be made to measure
user and kernel activity in combination or in isolation. A variety of performance events can be measured
using the counters [50]. To illustrate the nature of the events that can be measured, Table 2 lists a small
subset of the events that can be measured on the Pentium III. While more than 200 distinct events can be
measured on the Pentium III, only 2 events can be measured simultaneously. For design simplicity, most
microprocessors limit the number of events that can be simultaneously measured to 4 or 5. At times, certain
events are restricted to be accessible only through a particular counter. These restrictions reduce the
overhead associated with on-chip performance monitoring. Performance counters do consume on-chip
real estate. Unless carefully implemented, they can also impact the processor cycle time.
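On a modern Linux system, the same style of event counting is exposed through the kernel's perf subsystem. The following is a hedged sketch, assuming the standard `perf` command-line tool is installed and counter access is permitted; the event names follow `perf list`, not the Pentium III event numbers discussed here.

```python
import subprocess

def parse_perf_csv(stderr_text):
    """Parse `perf stat -x ,` CSV output into {event: count}.

    Each CSV line begins <value>,<unit>,<event>,...; lines whose value
    is non-numeric (e.g. '<not supported>') are skipped.
    """
    counts = {}
    for line in stderr_text.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].isdigit():
            counts[fields[2]] = int(fields[0])
    return counts

def count_events(cmd, events=("instructions", "cycles")):
    """Run `cmd` under Linux `perf stat` and return event counts."""
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(events)] + list(cmd),
        capture_output=True, text=True,
    )
    # perf prints its counter report on stderr.
    return parse_perf_csv(result.stderr)

# Example (requires perf): count_events(["/bin/true"])
```

As with the Pentium counters, everything the processor executes is counted, so measurements should be taken with as few other processes running as possible.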

Table 2. Examples of events that can be measured using performance monitoring counters
on an Intel Pentium III processor

EVENT                   Description of Event                                         Event Number (Hex)
DATA_MEM_REFS           All loads and stores from/to memory                          43H
DCU_LINES_IN            Total lines allocated in the data cache unit                 45H
IFU_IFETCH              Number of instruction fetches (cacheable and uncacheable)    80H
IFU_IFETCH_MISS         Number of instruction fetch misses                           81H
ITLB_MISS               Number of instruction TLB misses                             85H
IFU_MEM_STALL           Number of cycles instruction fetch is stalled, for any reason 86H
L2_IFETCH               Number of L2 instruction fetches                             28H
L2_LD                   Number of L2 data loads                                      29H
L2_ST                   Number of L2 data stores                                     2AH
L2_LINES_IN             Number of lines allocated in the L2                          24H
L2_RQSTS                Total number of L2 requests                                  2EH
INST_RETIRED            Number of instructions retired                               C0H
UOPS_RETIRED            Number of micro-operations retired                           C2H
INST_DECODED            Number of instructions decoded                               D0H
RESOURCE_STALLS         Number of cycles in which there is a resource-related stall  A2H
MMX_INSTR_EXEC          Number of MMX instructions executed                          B0H
BR_INST_RETIRED         Number of branch instructions retired                        C4H
BR_MISS_PRED_RETIRED    Number of mispredicted branches retired                      C5H
BR_TAKEN_RETIRED        Number of taken branches retired                             C9H
BR_INST_DECODED         Number of branch instructions decoded                        E0H
BTB_MISSES              Number of branches for which BTB did not predict             E2H

There are several tools available to measure performance using performance monitoring counters. Table 3
lists some of the available tools. Intel's VTune software may be used to perform measurements using the
Intel processor performance counters [5]. The P6Perf utility is a plug-in for Windows NT performance
monitoring [6]. The Compaq DIGITAL Continuous Profiling Infrastructure (DCPI) is a very powerful tool
to profile programs on the Alpha processors [7,8]. The performance monitor perf-mon is a small hack that
uses the on-chip counters on UltraSPARC-I/II processors to gather statistics [9]. Packages like VTune
perform extensive post-processing and present data in graphical forms. However, extensive post-processing
can sometimes result in tools that are somewhat invasive. PMON [10] is counter-reading software written
by Juan Rubio of the Laboratory for Computer Architecture at the University of Texas. It provides a
mechanism to read specified counters with minimal or no perceivable overhead. All these tools measure
both user and operating system activity. Since everything on a processor is counted, effort should be made
to have minimal or no undesired processes running during experimentation. This type of performance
measurement can be done on binaries, and no source code is needed.

Table 3. Software packages for performance counter measurement

Tool       Platform     Reference
VTune      IA-32        http://developer.intel.com/software/products/vtune/vtune_oview.htm
P6Perf     IA-32        http://developer.intel.com/vtune/p6perf/index.htm
PMON       IA-32        http://www.ece.utexas.edu/projects/ece/lca/pmon
DCPI       Alpha        http://www.research.digital.com/SRC/dcpi/
                        http://www.research.compaq.com/SRC/dcpi/
Perf-mon   UltraSPARC   see [9]

2.2 Off-chip hardware measurement

Instrumentation using hardware means can also be done by attaching off-chip hardware, two examples of
which are described in this section.

Speed Tracer from AMD: AMD developed this hardware tracing platform to aid in the design of their x86
microprocessors. When an application is being traced, the tracer interrupts the processor on each instruction
boundary. The state of the CPU is captured on each interrupt and then transferred to a separate control
machine where the trace is stored. The trace contains virtually all valuable pieces of information for each
instruction that executes on the processor. Operating system activity can also be traced. However, tracing in
this manner can be invasive, and may slow down the processor. Although the processor is running slower,
external events such as disk and memory accesses still happen in real time, thus looking very fast to the
slowed-down processor. Usually, this issue is addressed by adjusting the timer interrupt frequency. Use of
this performance monitoring facility can be seen in Merten [11] and Bhargava [12].

Logic Analysers: Poursepanj and Christie [13] use a Tektronix TLA 700 logic analyser to
analyse 3D graphics workloads on AMD-K6-2 based systems. Detailed logic analyser traces
are limited by restrictions on size and are typically used for the most important sections of
the program under analysis. Preliminary coarse-level analysis can be done with performance
monitoring counters and software instrumentation. Poursepanj and Christie used logic
analyser traces for a few tens of frames, which covered a second or two of smooth motion
[13].

2.3 Software Monitoring:


Software monitoring is often performed by utilizing architectural features such as a trap
instruction or a breakpoint instruction on an actual system, or on a prototype. The VAX
processor from Digital (now Compaq) had a T-bit that caused an exception after every
instruction. Software monitoring used to be an important mode of performance evaluation
before the advent of on-chip performance monitoring counters. The primary advantage of
software monitoring is that it is easy to do. The major disadvantage is that the
instrumentation can slow down the application: the overhead of servicing the exception,
switching to a data collection process, and performing the necessary tracing can slow down a
program by more than 1000 times. Another disadvantage is that software monitoring systems
typically handle only user activity.

2.4 Microcoded Instrumentation:

Digital (now Compaq) used microcoded instrumentation to obtain traces of VAX and Alpha
architectures. The ATUM tool [14], used extensively by Digital in the late 1980s and early
1990s, uses microcoded instrumentation. This technique lies between trapping information
on each instruction using hardware interrupts (traps) and using software traps. The
tracing system essentially modified the VAX microcode to record all instruction and data
references in a reserved portion of memory. Unlike software monitoring, ATUM could trace
all processes, including the operating system. However, this kind of tracing is invasive, and
can slow down the system by a factor of 10, without including the time to write the trace to
disk.

Factors to Consider During a Performance Evaluation:

1. Job knowledge and skills


How well does your employee know their position? Do they demonstrate all necessary
skills at a level that meets your expectations?

2. Quality of work
Check over the work done in a specific period of time and evaluate the overall quality
by checking for mistakes, ensuring it was thoroughly thought through, and considering
feedback from clients and other team members.

3. Quantity of work
Are they keeping up with the expected work pace for their position? You can
compare their workload and production to others in similar roles for their evaluation.

4. Communication skills
Is your employee effective at sharing knowledge, asking questions, and taking
direction? Do they convey their thoughts clearly while speaking and in writing? Do they
display a professional and positive attitude and work well in a team setting?

5. Initiative and problem-solving skills


Does the employee identify and solve issues as they arise, or do they need to be
told? How do they handle problems that occur in their work? Are they comfortable
delegating or asking for help when needed?

6. Attendance and punctuality


Are they on time and ready to work when expected? Do they call out often, and if so,
do they have appropriate reasons for doing so?

7. Performance against goals

Do they meet the goals set out for them by their supervisors or managers? Do they set
and meet their own professional goals?
If you are unsure where to start with an evaluation, consider implementing software that
tracks time and project management so that you can keep easy tabs on employee
productivity. FreshBooks, for example, is a cloud-based accounting package that offers
time-tracking tools, project management services, and more, increasing the efficiency
and accuracy of tracking employee achievements.

The Five WPIs:

1. Mix:
How much of your work, or workforce, is dedicated to running the business versus
changing the business? This differs across companies. In marketing, you refer to this as new
versus existing work. And in IT, it’s keep the lights on (KTLO) versus new development.
The key is knowing your allocation proportions—whether by count, percentage, or total
hours.
To measure mix, consider the following:
1. Add a metadata tag or drop-down field (e.g., run versus change) to all of your projects.
(If you’re using Workfront, you can use a Custom Form.)
2. Once projects have been tagged, create a report that shows total project count, or total
hours allocated across the organization.
3. Assess how many projects or how many hours are dedicated to this type of work.
Imagine if Apple hadn’t changed their approach to work since the 1980s. We would not have
iTunes, or iPods, or iPhones. If Amazon had focused only on selling books, they wouldn’t be
the conglomerate they are today. Each organization is different, but as you can see, it pays to
know where time and effort is being spent.
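The three measurement steps above can be sketched in a few lines, assuming each project record carries a run-versus-change tag. The field names here are illustrative assumptions for the sketch, not Workfront's data model.

```python
def mix_report(projects):
    """Summarize run-vs-change mix by project count and total hours.

    `projects` is a list of dicts with illustrative fields:
    {"name": ..., "type": "run" | "change", "hours": ...}
    """
    report = {}
    for p in projects:
        # Step 1: the "type" field plays the role of the metadata tag.
        entry = report.setdefault(p["type"], {"count": 0, "hours": 0})
        entry["count"] += 1
        entry["hours"] += p["hours"]
    # Steps 2-3: report counts, hours, and the allocation proportions.
    total_hours = sum(e["hours"] for e in report.values())
    for e in report.values():
        e["pct_hours"] = round(100 * e["hours"] / total_hours, 1)
    return report

projects = [
    {"name": "Maintain CRM", "type": "run", "hours": 300},
    {"name": "Patch servers", "type": "run", "hours": 200},
    {"name": "New mobile app", "type": "change", "hours": 500},
]
print(mix_report(projects))
# {'run': {'count': 2, 'hours': 500, 'pct_hours': 50.0},
#  'change': {'count': 1, 'hours': 500, 'pct_hours': 50.0}}
```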

2. Capacity:
Can it get done? For the past 50 years, capacity utilization has averaged 80
percent. Separate studies by Workfront and McKinsey found that modern worker utilization
is less than 40 percent. That represents more than $3 trillion of wasted human capital
investment each year. We're talking pallet-size stacks of cash.
In short, it's no longer a luxury to know total available capacity or utilization; it's a
necessity. If you ran a manufacturing company at 40 percent capacity, you would be fired.
A change management expert I know shared that the number one question keeping CEOs up
at night in 2019 is: does our organization have the capacity for the change it needs?

3. Velocity:
How fast are you working? This WPI is defined by total work cycle time and work-to-commit
ratio. Total work cycle time is how long it takes to complete a piece of work. Work-to-commit
is how frequently work is completed in the time originally committed. Both can be
tracked easily inside Workfront.
We live in an "I want it when I want it" culture, so being able to look at total work cycle
time and work-to-commit is critical. Ultimately, velocity tells us how long it takes to get
things done in your organization.
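Both velocity measures can be computed from simple task records. The dates and record layout below are illustrative assumptions, not a real tool's schema.

```python
from datetime import date

# Illustrative task records: (started, committed_by, finished)
tasks = [
    (date(2019, 1, 2), date(2019, 1, 10), date(2019, 1, 9)),
    (date(2019, 1, 5), date(2019, 1, 12), date(2019, 1, 15)),
    (date(2019, 1, 8), date(2019, 1, 20), date(2019, 1, 18)),
]

# Total work cycle time: average days from start to finish.
cycle_days = [(done - start).days for start, _, done in tasks]
avg_cycle = sum(cycle_days) / len(cycle_days)

# Work-to-commit ratio: fraction finished by the committed date.
on_time = sum(1 for _, commit, done in tasks if done <= commit)
work_to_commit = on_time / len(tasks)

print(avg_cycle, work_to_commit)   # prints: 9.0 0.6666666666666666
```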

4. Quality:
What is the perception of work quality within the organization?
Now you might say, “We already measure quality and don’t need any
help here, thanks!” However, traditional quality measurements is not
what we’re referring to. This WPI is about making sure the work that you
and your teams are producing meets the needs of the stakeholders they
serve.

For example, as the manager of learning programs at Workfront, I know our customer-facing
learning content is more on point if we are crystal clear about audience, performance goals,
and objectives. For you and your organization, defining quality will be different, but that
doesn’t mean you can ignore it. For example, companies like TripAdvisor and Amazon have
influenced millions of consumers based on quality ratings and would be nowhere near as
successful as they are today if they ignored these ratings.

5. Engagement:
Do people take pride in their work? Are they committed to
the organization’s goals?
According to a new meta-analysis of 1.4 million employees conducted by the Gallup
Organization, organizations with a high level of engagement report 22% higher productivity.
Highly engaged organizations have double the rate of success. This WPI focuses your team
members on three simple questions:
1. Did you understand what was expected of you?
2. Did the work you were assigned make a difference to the organization?
3. Did you do great work?
Work aside, another recent article showed how engaged students came out on top. Now think
what truly engaged employees could do for your business.

Integrating All Five WPIs:


The five WPIs are most valuable when combined. One Workfront
customer tracks "touches" on collateral to home in on the correlation between velocity and
quality. She uses these measurements to master modern work. Another Workfront customer
secured an additional $1 million in funding by exposing the capacity issues within their
organization. A third customer used mix to figure out how much of their time
was being spent on non-revenue-generating work; they turned this around from 77% non-
revenue-generating to 66% revenue-generating.
These five WPIs give you a clearer view of what's happening inside your organization.
You'll learn what you need to fix and where you're winning already. You may have to adjust
course, stay on path, or fan the embers of motivation. Pick two WPIs to start with and
pave your path to greater success.
CHAPTER 3
Performance Modelling
3. Performance Modelling:

Performance measurement as described in the previous section can be done only if the actual
system or a prototype exists. It is expensive to build prototypes for early-stage evaluation.
Hence one needs to resort to some kind of modelling to study systems yet to be built.
Performance modelling can be done using simulation models or analytical models.

3.1 Simulation

Simulation has become the de facto performance modelling method in the evaluation of
microprocessor architectures. There are several reasons for this. The accuracy of analytical
models in the past has been insufficient for the type of design decisions computer architects
wish to make (for instance, what kind of caches or branch predictors are needed). Hence
cycle accurate simulation has been used extensively by architects. Simulators model existing
or future machines or microprocessors. They are essentially a model of the system being
simulated, written in a high-level computer language such as C or Java, and running on some
existing machine. The machine on which the simulator runs is called the host machine and
the machine being modelled is called the target machine. Such simulators can be constructed
in many ways.

Simulators can be functional simulators or timing simulators. They can be trace driven or
execution driven simulators. They can be simulators of components or that of the complete
system. Functional simulators simulate functionality of the target processor, and in essence
provide a component like the one being modelled. The register values of the simulated
machine are available in the equivalent registers of the simulator. In addition to the values,
the simulators also provide performance information in terms of cycles of execution, cache
hit ratios, branch prediction rates, etc. Thus, the simulator is a virtual component
representing the microprocessor or subsystem being modelled plus a variety of performance
information.
If performance evaluation is the only objective, one does not need to model the functionality.
For instance, a cache performance simulator does not need to store values in the cache; it
only needs to store information related to the address of the value being cached. That
information is sufficient to determine a future hit or miss. While it is nice to have the values
as well, a simulator that models functionality in addition to performance is bound to be
slower than a pure performance simulator. Register Transfer Language (RTL) models used
for functional verification may also be used for performance simulations; however, these
models are very slow for performance estimation with real-world workloads, and hence are
not discussed in this article.

3.1.1 Trace Driven Simulation:

Trace-driven simulation consists of a simulator model whose input is modelled as a trace or
sequence of information representing the instruction sequence that would have executed on
the target machine. A simple trace-driven cache simulator needs a trace consisting of address
values. Depending on whether the simulator is modelling a unified or a split instruction/data
cache, the address trace should contain the addresses of instruction and data references.
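As a minimal sketch of the idea (a direct-mapped cache with illustrative parameters, not cachesim5 or Dinero IV), the simulator stores only tags, never data:

```python
def simulate_cache(trace, num_lines=256, line_size=32):
    """Count hits/misses for an address trace in a direct-mapped cache.

    Only tags are stored; no data moves through the model, which is all
    a trace-driven cache simulator needs to decide hit or miss.
    """
    tags = [None] * num_lines           # one tag per cache line
    hits = misses = 0
    for addr in trace:
        block = addr // line_size       # which memory block
        index = block % num_lines       # which cache line it maps to
        tag = block // num_lines        # remaining address bits
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag           # fill the line on a miss
    return hits, misses

# Accesses 0, 4, 8 share a line (one miss, two hits); 8192 maps to the
# same line with a different tag, so it evicts and the final 0 misses.
print(simulate_cache([0, 4, 8, 8192, 0]))   # (2, 3)
```

Changing `num_lines`, `line_size`, or the mapping function lets the same trace answer "what if" questions about cache configurations, which is the main use of such simulators.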

Cachesim5 and Dinero IV are examples of cache simulators for memory reference traces.
Cachesim5 comes from Sun Microsystems along with their Shade package [15]. Dinero IV
[16] is available from the University of Wisconsin, Madison. These simulators are not timing
simulators. There is no notion of simulated time or cycles, only references. They are not
functional simulators. Data and instructions do not move in and out of the caches. The
primary result of simulation is hit and miss information. The basic idea is to simulate a
memory hierarchy consisting of various caches. The various parameters of each cache can be
set separately (architecture, mapping policies, replacement policies, write policy, statistics).
During initialization, the configuration to be simulated is built up, one cache at a time,
starting with main memory as a special case. After initialization, each reference is fed to the
appropriate top-level cache by a single simple function call. Lower levels of the hierarchy
are handled automatically. One does not need to store a trace while using cachesim5,
because Shade can directly feed the trace into cachesim5.

Trace driven simulation is simple and easy to understand. The simulators are easy to debug.
Experiments are repeatable because the input information is not changing from run to run.
However, trace driven simulation has two major problems:
1. Traces can be prohibitively long if entire executions of some real-world applications are
considered. The storage needed by the traces may be prohibitively large. Trace size is
proportional to the dynamic instruction count of the benchmark.
2. The traces do not represent the actual instruction stream of speculative processors with
branch prediction. Most trace generators produce traces of only completed or retired
instructions. Hence, the traces do not contain instructions from the mispredicted path.

The first problem is typically solved using trace sampling and trace reduction techniques.
Trace sampling is a method to achieve reduced traces. However, the sampling should be
performed in such a way that the resulting trace is representative of the original trace. It may
not be sufficient to periodically sample a program execution; the locality properties of the
resulting sequence may be widely different from those of the original sequence. Another
technique is to skip tracing for a certain interval, then collect for a fixed interval, and then
skip again. It may also be necessary to leave a warmup period after the skip interval, to let
the caches and other such structures warm up [17]. Several trace sampling techniques are
discussed by Crowley and Baer [18]. The QPT trace collection system [19] solves the trace
size issue by splitting the tracing process into a trace record generation step and a trace
regeneration process. The trace record has a size comparable to the static code size, and the
trace regeneration expands it to the actual full trace upon demand.
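The skip/warmup/collect scheme can be sketched as follows; the interval lengths are illustrative assumptions, and real studies choose them to keep the sample representative:

```python
def sample_trace(trace, skip=1000, warmup=100, collect=400):
    """Yield (is_warmup, ref) pairs under a skip/warmup/collect schedule.

    Skipped references are dropped entirely. Warmup references should
    update simulator state (e.g., fill caches) but be excluded from the
    reported statistics; only 'collect' references are measured.
    """
    period = skip + warmup + collect
    for i, ref in enumerate(trace):
        pos = i % period
        if pos < skip:
            continue                       # skipped: not simulated at all
        yield (pos < skip + warmup), ref   # True while still warming up

# Tiny illustration with short intervals:
refs = list(range(20))
sampled = list(sample_trace(refs, skip=5, warmup=2, collect=3))
# refs 5,6 warm up; 7,8,9 are measured; 15,16 warm up; 17,18,19 measured
```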

The second problem can be solved by reconstructing the mispredicted path [20]. An image
of the instruction memory space of the application is created by one pass through the trace,
and instructions are thereafter fetched from this image rather than from the trace. While
100% of the mispredicted branch targets may not be in the recreated image, studies show
that more than 95% of the targets can be located.

3.1.2 Execution Driven Simulation:

There are two meanings in which this term is used by researchers and practitioners. Some
refer to simulators that take program executables as input as execution driven simulators.
These simulators utilize the actual input executable and not a trace. Hence the size of the
input is proportional to the static instruction count and not the dynamic instruction count.
Mispredicted branches can be accurately simulated as well. Thus, these simulators solve the
two major problems faced by trace-driven simulators. The widely used SimpleScalar
simulator [21] is an example of such an execution driven simulator. With this tool set, the
user can simulate real programs on a range of modern processors and systems, using fast
execution driven simulation. There is a fast functional simulator and a detailed, out-of-order
issue processor that supports non-blocking caches, speculative execution, and state-of-the-art
branch prediction.

Some others consider execution driven simulators to be simulators that rely on actual
execution of parts of code on the host machine (hardware acceleration by the host instead of
simulation) [22]. These execution driven simulators do not simulate every individual
instruction in the application. Only the instructions that are of interest are simulated. The
remaining instructions are directly executed by the host computer. This can be done when
the instruction set of the host is the same as that of the machine being simulated. Such
simulation involves two stages. In the first stage or preprocessing, the application program is
modified by inserting calls to the simulator routines at events of interest. For instance, for a
memory system simulator, only memory access instructions need to be instrumented. For
other instructions, the only important thing is to make sure that they get performed and that
their execution time is properly accounted for. The advantage of execution driven simulation
is speed. By directly executing most instructions at the machine's execution rate, the
simulator can operate orders of magnitude faster than cycle by cycle simulators that emulate
each individual instruction. Tango, Proteus and FAST are examples of such simulators.

3.1.3 Complete system simulation:

Many execution and trace driven simulators only simulate the processor and memory
subsystem. Neither I/O activity nor operating system activity is handled in simulators like
SimpleScalar. But in many workloads, it is extremely important to consider I/O and
operating system activity. Complete system simulators are complete simulation
environments that model hardware components with enough detail to boot and run a full-
blown commercial operating system. The functionality of the processors, memory
subsystem, disks, buses, SCSI/IDE/FC controllers, network controllers, graphics controllers,
CD-ROM, serial devices, timers, etc. is modelled accurately in order to achieve this. While
functionality stays the same, different microarchitectures in the processing component can
lead to different performance. Most of the complete system simulators use microarchitectural
models that can be plugged in and out. For instance, SimOS [23], a popular complete system
simulator, provides a simple pipelined processor model and an aggressive superscalar
processor model. SimOS and SIMICS [24,25] can simulate uniprocessor and multiprocessor
systems. Table 4 lists popular complete system simulators.

Table 4. Examples of complete system simulators

Simulator   Information Site                       Instruction Set           Operating System
SimOS       Stanford University,                   MIPS                      SGI IRIX
            http://simos.stanford.edu/
SIMICS      Virtutech, http://www.simics.com,      PC, SPARC V9, and Alpha   Solaris 7 and 8, Red Hat Linux 6.2
            http://www.virtutech.com                                         (x86, SPARC, and Alpha versions),
                                                                             Tru64 (Digital Unix 4.0F), and
                                                                             Windows NT 4.0
Bochs       http://bochs.sourceforge.net           x86                       Windows 95, Windows NT, Linux,
                                                                             FreeBSD

3.1.4 Stochastic Discrete Event Driven Simulation:

It is possible to simulate systems in such a way that the input is derived stochastically rather
than as a trace/executable from an actual execution. For instance, one can construct a
memory system simulator in which the inputs are assumed to arrive according to a Gaussian
distribution. Such models can be written in general-purpose languages such as C or in
special simulation languages such as SIMSCRIPT. Languages such as SIMSCRIPT have
several built-in primitives to allow quick simulation of most kinds of common systems.
There are built-in input profiles, resource templates, process templates, queue structures, etc.
to facilitate easy simulation of common systems. An example of the use of event-driven
simulators using SIMSCRIPT may be seen in the performance evaluation of multiple-bus
multiprocessor systems in Kurian et al.
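A minimal sketch of the same idea in a general-purpose language rather than SIMSCRIPT: a single served resource (say, a memory port) fed by stochastically generated arrivals. The distribution and its parameters are illustrative assumptions; a real model would be calibrated against measured workloads.

```python
import heapq
import random

def simulate(n_requests=10_000, mean_gap=1.0, service=0.8, seed=42):
    """Event-driven simulation of a single server with random arrivals.

    Inter-arrival gaps are drawn from an exponential distribution (an
    illustrative stand-in for the stochastic input the text describes);
    returns the mean time a request waits before being served.
    """
    rng = random.Random(seed)
    events = []                        # (time, kind) min-heap event queue
    t = 0.0
    for _ in range(n_requests):        # pre-generate the arrival events
        t += rng.expovariate(1.0 / mean_gap)
        heapq.heappush(events, (t, "arrival"))
    free_at = 0.0                      # when the server next becomes idle
    total_wait = 0.0
    while events:
        now, _ = heapq.heappop(events)
        start = max(now, free_at)      # queue if the server is busy
        total_wait += start - now
        free_at = start + service      # server occupied for service time
    return total_wait / n_requests

print(simulate())   # average queueing delay per request
```

The event queue, random arrival profile, and server resource here correspond directly to the built-in input profiles, resource templates, and queue structures that SIMSCRIPT provides as primitives.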

3.1.5 Program Profilers:

There is a class of tools called software profiling tools, which are akin to both simulators and
performance measurement tools. These tools are used to generate traces, to obtain the
instruction mix, and to gather a variety of instruction statistics. They can be thought of as
software monitoring on a simulator. They take an executable as input and decode and
analyse each instruction in the executable. These program profilers can be used as the front
end of simulators. A popular program profiling tool is Shade for the UltraSPARC.

Shade

Shade is a fast instruction-set simulator for execution profiling. It is a simulation and
tracing tool that provides the features of simulators and tracers in one package. Shade
analyses the original program instructions and cross-compiles them to sequences of
instructions that simulate or trace the original code. Static cross-compilation can produce
fast code, but purely static translators cannot simulate and trace all details of dynamically
linked code. One can develop a variety of 'analysers' to process the information generated by
Shade and compute the performance metrics of interest. For instance, one can use Shade to
generate address traces to feed into a cache analyser that computes hit rates and miss rates
of cache configurations. The Shade analyser cachesim5 does exactly this.
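The kind of analysis an analyser like cachesim5 performs on an address trace can be illustrated with a minimal direct-mapped cache model (this is a simplified sketch, not Shade's actual analyser; the cache geometry is an arbitrary assumption).

```python
def cache_stats(trace, num_lines=4, line_size=16):
    """Feed an address trace through a direct-mapped cache model and
    return (hits, misses)."""
    tags = [None] * num_lines
    hits = misses = 0
    for addr in trace:
        block = addr // line_size        # memory block number
        index = block % num_lines        # direct-mapped line index
        tag = block // num_lines
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1                  # miss: fill the line
            tags[index] = tag
    return hits, misses

# Tiny hand-made trace: 0x00 and 0x40 conflict in line 0 of this cache.
trace = [0x00, 0x04, 0x10, 0x00, 0x40, 0x00]
h, m = cache_stats(trace)
print("hits:", h, "misses:", m)          # 2 hits, 4 misses
```

Sweeping `num_lines` and `line_size` over a long trace is exactly how hit and miss rates of candidate cache configurations are compared.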

Jaba: Jaba is a Java Bytecode Analyzer developed at the University of Texas for
tracing Java programs. While Java programs can be traced using Shade to obtain profiles of
native execution, Jaba can yield profiles at the bytecode level. It uses JVM specification 1.1.
It allows the user to gather information about the dynamic execution of a Java application at
the Java bytecode level, providing information on bytecodes executed, load operations,
branches executed, branch outcomes, etc.

3.2 Analytical Modelling:

Analytical performance models, while not popular for microprocessors, are suitable for the
evaluation of large computer systems. In large systems whose details cannot be modelled
accurately for cycle-accurate simulation, analytical modelling is an appropriate way to obtain
approximate performance metrics. Computer systems can generally be considered as a set of
hardware and software resources and a set of tasks or jobs competing to use the resources.
Multicomputer systems and multiprogrammed systems are examples.

Analytical models rely on probabilistic methods, queuing theory, Markov models, or Petri
nets to create a model of the computer system. A large body of literature on analytical
models of computers exists from the 1970s and early 1980s. Heidelberger and Lavenberg [28]
published an article summarizing research on computer performance evaluation models. This
article contains 205 references, which cover all important work on performance evaluation
until 1984. Readers interested in analytical modelling should read this article.

Analytical models are cost-effective because they are based on efficient solutions to
mathematical equations. However, in order to have tractable solutions, simplifying
assumptions are often made regarding the structure of the model. As a result,
analytical models do not capture all the detail typically built into simulation models. It is
generally thought that carefully constructed analytical models can provide estimates of
average job throughput and device utilization to within 10% accuracy, and average
response times to within 30% accuracy. This level of accuracy, while insufficient for
microarchitectural enhancement studies, is sufficient for capacity planning in multicomputer
systems, I/O subsystem performance evaluation in large server farms, and early design
evaluations of multiprocessor systems.

There has not been much work on analytical modelling of microprocessors. The level of
accuracy needed in trade-off analyses of microprocessor structures is higher than what
typical analytical models can provide. However, some effort in this arena came from
Noonburg and Shen [29] and Sorin et al. [30]. Those interested in modelling superscalar
processors using analytical models should read Noonburg et al.'s work [29] and Sorin et
al.'s work [30]. Noonburg et al. used a Markov model to model a pipelined processor. Sorin
et al. used probabilistic techniques to model a multiprocessor composed of superscalar
processors. Queuing theory is also applicable to superscalar processor modelling, since
modern superscalar processors contain instruction queues in which instructions wait to be
issued to one among a group of functional units.
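As a first-order illustration of the queuing-theory view, an instruction queue feeding a functional unit can be approximated by an M/M/1 queue. The rates below (in instructions per cycle) are purely illustrative; real issue queues violate the M/M/1 assumptions in many ways.

```python
def mm1(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: utilization, mean number in system,
    and mean time in system (Little's law: L = arrival_rate * W)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (utilization >= 1)")
    rho = arrival_rate / service_rate          # utilization
    W = 1.0 / (service_rate - arrival_rate)    # mean time in system
    L = rho / (1.0 - rho)                      # mean number in system
    return rho, L, W

rho, L, W = mm1(0.8, 1.0)
print("utilization %.1f, %.0f instructions in system, %.0f cycles in system"
      % (rho, L, W))
```

Even this toy model shows the qualitative behaviour that matters for issue queues: as utilization approaches 1, queue occupancy and waiting time grow without bound.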
Chapter 4
Workloads and Benchmarks
4. Workloads and Benchmarks:
Benchmarks used for performance evaluation of computers should be representative of
applications that are run on actual systems. Contemporary computing spans a variety of
application domains, and different benchmarks are appropriate for systems targeted at
different purposes. Table 5 lists several popular benchmarks for different classes of
workloads.

Table 5. Popular benchmarks for different categories of workloads

Workload Category                                Example Benchmark Suite
CPU Benchmarks       Uniprocessor                SPEC CPU 2000 [31]
                                                 Java Grande Forum Benchmarks [32]
                                                 SciMark [33]
                                                 ASCI [34]
                     Parallel Processor          SPLASH [35]
                                                 NAS Parallel Benchmarks [36]
Multimedia                                       MediaBench [37]
Embedded                                         EEMBC benchmarks [38]
Digital Signal Processing                        BDTI benchmarks [39]
Java                 Client side                 SPECjvm98 [31]
                                                 CaffeineMark [40]
                     Server side                 SPECjBB2000 [31]
                                                 VolanoMark [41]
                     Scientific                  Java Grande Forum Benchmarks [32]
                                                 SciMark [33]
Transaction          OLTP (On-Line               TPC-C [42]
Processing           Transaction Processing)     TPC-W [42]
                     DSS (Decision               TPC-H [42]
                     Support System)             TPC-R [42]
Web Server                                       SPECweb99 [31]
                                                 TPC-W [42]
                                                 VolanoMark [41]
Electronic commerce  With commercial database    TPC-W [42]
                     Without commercial database SPECjBB2000 [31]
Mail-server                                      SPECmail2000 [31]
Network File System                              SPEC SFS 2.0 [31]
Personal Computer                                SYSMARK [43]
                                                 Ziff Davis WinBench [44]
                                                 3DMarkMAX99 [45]
4.1 CPU Benchmarks:
SPEC CPU2000 is the industry-standardized CPU-intensive benchmark suite. The System
Performance Evaluation Cooperative (SPEC) was founded in 1988 by a small number of
workstation vendors who realized that the marketplace was in desperate need of realistic,
standardized performance tests. The basic SPEC methodology is to provide the benchmarker
with a standardized suite of source code, based upon existing applications, that has already
been ported to a wide variety of platforms by the SPEC membership. The benchmarker then
takes this source code and compiles it for the system in question. The use of already accepted
and ported source code greatly reduces the problem of making apples-to-oranges
comparisons. SPEC designed CPU2000 to provide a comparative measure of compute-
intensive performance across the widest practical range of hardware. The implementation
resulted in source code benchmarks developed from real user applications. These
benchmarks measure the performance of the processor, memory, and compiler on the tested
system. The suite contains 14 floating-point programs written in C/Fortran and 12 integer
programs (11 written in C and 1 in C++). The SPEC CPU2000 benchmarks replace the
SPEC89, SPEC92, and SPEC95 benchmarks.

The Java Grande Forum Benchmark suite consists of three groups of benchmarks:
microbenchmarks that test individual low-level operations (e.g., arithmetic, cast, create),
kernel benchmarks that form the heart of the algorithms of commonly used applications
(e.g., heapsort, encryption/decryption, FFT, sparse matrix multiplication), and
applications (e.g., raytracer, Monte Carlo simulation, Euler equation solution, molecular
dynamics) [48]. These are compute-intensive benchmarks available in Java.
SciMark is a composite Java benchmark measuring the performance of numerical codes
occurring in scientific and engineering applications. It consists of five computational kernels:
FFT, Gauss-Seidel relaxation, sparse matrix multiply, Monte Carlo integration, and dense
LU factorization. These kernels are chosen to provide an indication of how well the
underlying Java Virtual Machine performs on applications utilizing these types of
algorithms. The problem sizes are purposely chosen to be small in order to isolate the effects
of the memory hierarchy and to focus on internal JVM/JIT and CPU issues. A larger version
of the benchmark (SciMark 2.0 LARGE) addresses the performance of the memory subsystem
with out-of-cache problem sizes.
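In the spirit of SciMark's Monte Carlo integration kernel, the following sketch estimates pi by integrating over the unit quarter circle. This is not SciMark's actual code (SciMark is written in Java); it is a minimal illustration of what that kernel measures, with an arbitrary seed and sample count.

```python
import random

def monte_carlo_pi(samples, seed=113):
    """Estimate pi as 4 * (fraction of random points inside the unit
    quarter circle) -- the style of numerical kernel SciMark exercises."""
    random.seed(seed)
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

est = monte_carlo_pi(100_000)
print("pi estimate: %.4f" % est)
```

The kernel stresses the random number generator, floating-point arithmetic, and branch behaviour rather than the memory hierarchy, matching SciMark's intent of isolating JVM/JIT and CPU issues.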

ASCI: The Accelerated Strategic Computing Initiative (ASCI) of the Lawrence Livermore
laboratories contains several numeric codes suitable for the evaluation of compute-intensive
systems. The programs are available from [34].

SPLASH: The SPLASH suite was created by Stanford researchers [35]. The suite contains
six scientific and engineering applications, all of which are parallel applications.

The NAS Parallel Benchmarks (NPB) are a set of 8 programs designed to help evaluate the
performance of parallel supercomputers. The benchmarks, which are derived from
computational fluid dynamics (CFD) applications, consist of five kernels and three pseudo-
applications.

4.2 Embedded and Media Benchmarks:

EEMBC Benchmarks

The EDN Embedded Microprocessor Benchmark Consortium (EEMBC, pronounced
"embassy") was formed in April 1997 to develop meaningful performance benchmarks for
processors in embedded applications. EEMBC is backed by the majority of the processor
industry and has therefore established itself as the industry-standard embedded processor
benchmarking forum. EEMBC establishes benchmark standards and provides certified
benchmarking results through the EEMBC Certification Labs (ECL) in Texas and California.
The EEMBC benchmarks comprise a suite of benchmarks designed to reflect real-world
applications, along with some synthetic benchmarks. These benchmarks target the
automotive/industrial, consumer, networking, office automation, and telecommunications
markets. More specifically, they target applications such as engine control, digital cameras,
printers, cellular phones, modems, and similar devices with embedded microprocessors. The
EEMBC consortium dissected these applications and derived 37 individual algorithms that
constitute EEMBC's Version 1.0 suite of benchmarks.

BDTI Benchmarks:

Berkeley Design Technology, Inc. (BDTI) is a technical services company that has focused
exclusively on digital signal processing since 1991. BDTI provides the industry-standard
BDTI Benchmarks(TM), a proprietary suite of DSP benchmarks. BDTI also develops custom
benchmarks to determine performance on specific applications. The benchmarks contain DSP
routines such as FIR filter, IIR filter, FFT, dot product, and Viterbi decoder.

Media Bench:

The MediaBench benchmark suite consists of several applications belonging to the image
processing, communications, and DSP domains. Examples of included applications are
JPEG, MPEG, GSM, G.721 voice compression, Ghostscript, and ADPCM.
JPEG is a compression program for images, MPEG involves encoding/decoding for video
transmission, Ghostscript is an interpreter for the PostScript language, and ADPCM is
adaptive differential pulse code modulation. MediaBench is an academic effort to
assemble several media-processing benchmarks. An example of the use of these
benchmarks may be found in [49].

4.3 Java Benchmarks:


SPECjvm98: The SPECjvm98 suite consists of a set of programs intended to evaluate
performance of the combined hardware (CPU, cache, memory, and other platform-specific
performance) and software aspects (efficiency of the JVM, the JIT compiler, and OS
implementations) of the JVM client platform [31]. SPECjvm98 uses common
computing features such as integer and floating-point operations, library calls, and I/O, but
does not include AWT (window), networking, or graphics. Each benchmark can be run
with three different input sizes, referred to as S1, S10, and S100. The 7 programs are
compression/decompression (compress), an expert system (jess), a database (db), the Java
compiler (javac), an mpeg3 decoder (mpegaudio), a raytracer (mtrt), and a parser (jack).

SPECjbb2000 (Java Business Benchmark) is SPEC's first benchmark for evaluating the
performance of server-side Java. The benchmark emulates an electronic commerce workload
in a 3-tier system. The benchmark contains business logic and object manipulation, primarily
representing the activities of the middle tier in an actual business server. It models a
wholesale company with warehouses serving several districts. Customers initiate a set of
operations such as placing new orders and checking the status of existing orders. It is written
in Java, adapting a portable business-oriented benchmark called pBOB written by IBM.
Although it is a benchmark that emulates business transactions, it is very different from the
Transaction Processing Performance Council (TPC) benchmarks. There are no actual
clients; they are replaced by driver threads. Similarly, there is no actual database access.
Data is stored as binary trees of objects.

The CaffeineMark 2.5 is the latest in the series of CaffeineMark benchmarks. The
benchmark suite analyses Java system performance in eleven different areas, nine of which
can be run directly over the internet. It is almost the industry-standard Java benchmark. The
CaffeineMark can be used for comparing applet viewers, interpreters, and JIT compilers from
different vendors. The CaffeineMark benchmarks can also be used as a measure of Java
applet/application performance across platforms.
VolanoMark is a pure Java server benchmark with long-lasting network connections and high
thread counts. It can be divided into two parts, server and client, although they are provided
in one package. It is based on a commercial chat server application, VolanoChat, which is
used in several countries worldwide. The server accepts connections from the chat client.
The chat client simulates many chat rooms and many users in each chat room. The client
continuously sends messages to the server and waits for the server to broadcast the messages
to the users in the same chat room. VolanoMark creates two threads for each client connection.
VolanoMark can be used to test both the speed and the scalability of a system. In the speed
test, it is run in an iterative fashion on a single machine. In the scalability test, the server and
client are run on separate machines with a high-speed network connection.
SciMark: see CPU Benchmarks, Section 4.1.
Java Grande Forum Benchmarks: see CPU Benchmarks, Section 4.1.
4.4 Transaction Processing Benchmarks
The Transaction Processing Performance Council (TPC) is a non-profit corporation founded
in 1988 to define transaction processing and database benchmarks and to disseminate
objective, verifiable TPC performance data to the industry. The term transaction is often
applied to a wide variety of business and computer functions. Viewed as a computer function,
a transaction could refer to a set of operations including disk reads/writes, operating system
calls, or some form of data transfer from one subsystem to another. The TPC regards a
transaction as it is commonly understood in the business world: a commercial exchange of
goods, services, or money. A typical transaction, as defined by the TPC, would include
updating a database system for such things as inventory control (goods), airline
reservations (services), or banking (money). In these environments, a number of customers
or service representatives input and manage their transactions via a terminal or desktop
computer connected to a database. Typically, the TPC produces benchmarks that measure
transaction processing (TP) and database (DB) performance in terms of how many
transactions a given system and database can perform per unit of time, e.g., transactions per
second or transactions per minute. The TPC benchmarks can be classified into two categories,
Online Transaction Processing (OLTP) and Decision Support Systems (DSS). OLTP systems
are used in day-to-day business operations (airline reservations, banks) and are characterized
by large numbers of clients who continually access and update small portions of the database
through short-running transactions. Decision support systems are primarily used for business
analysis purposes, to understand business trends, and for guiding future business directions.
Information from the OLTP side of the business is periodically fed into the DSS database
and analysed. DSS workloads are characterized by long-running queries that are primarily
read-only and may span a large fraction of the database. There are four benchmarks that are
active: TPC-C, TPC-W, TPC-R, and TPC-H. These benchmarks can be run with different
data sizes, or scale factors. In the smallest case (scale factor = 1), the data size is
approximately 1 GB. The earlier TPC benchmarks, namely TPC-A, TPC-B, and TPC-D,
have been retired.

TPC-C
TPC-C is an OLTP benchmark. It simulates a complete computing environment where a
population of users executes transactions against a database. The benchmark is centred
around the principal activities (transactions) of a business like that of a worldwide
wholesale supplier. The transactions include entering and delivering orders, recording
payments, checking the status of orders, and monitoring the level of stock at the warehouses.
While the benchmark portrays the activity of a wholesale supplier, TPC-C is not limited to
the activity of any particular business segment, but rather represents any industry that must
manage, sell, or distribute a product or service. TPC-C involves a mix of five concurrent
transactions of different types and complexity, either executed on-line or queued for deferred
execution. There are multiple on-line terminal sessions. The benchmark can be configured to
use any commercial database system, such as Oracle, DB2 (IBM), or Informix. Significant
disk input and output are involved. The databases consist of many tables with a wide variety
of sizes, attributes, and relationships. The queries result in contention on data accesses and
updates. TPC-C performance is measured in new-order transactions per minute. The primary
metrics are the transaction rate (tpmC) and the price per transaction ($/tpmC).
CHAPTER 5
Performance Evaluation – Methods and
Techniques Survey
5.0 Introduction

Employee performance relates to the job duties expected of a worker and how well those
duties are accomplished. Many managers assess employee performance on an annual or
quarterly basis in order to help identify suggested areas for improvement. The performance
appraisal (PA) system used depends on the type of business of an organization. PA mostly
relates to the product output of a company or to the end users of an organization. Generally,
performance appraisal aims to recognize the current skill status of the workforce. Any
standard appraisal system consists of a collection of data from which information is extracted
and then converted into a real number called a performance rating. An employee's
contribution to the organization depends on the evaluation of his or her rating. It is essential
to have an accurate, unbiased appraisal assessment in order to measure employees'
contributions to organizational objectives. Employers and managers use characteristics such
as knowledge in a particular field, skills to achieve a goal, and a target-achieving attitude in
order to decide on an employee's performance level. Since these factors are mostly uncertain
and vague in nature, a fuzzy performance appraisal method is more appropriate.

Several appraisal methods are used for employee performance appraisal, such as the graphic
rating scale method, the forced choice distribution method, and the behavioural checklist
method. Some methods that were utilized in the past are not currently used, like ranking,
critical incident, and narrative essays. New methods have been suggested for performance
appraisal, like MBO and assessment centres. The survey also reviews and classifies some
evaluation techniques used in multi-criteria environments.

The rest of this paper is organized as follows: Section II reviews performance appraisal
methods, both traditional and modern. Section III explains and classifies the fuzzy-related
performance appraisal techniques, including the MCDM techniques. A new proposal for
performance evaluation of Sudanese universities and academic staff using fuzzy logic is
introduced in Section IV. Other performance evaluation methods and the conclusion are
provided in Sections V and VI.

II. Performance Appraisal Methods

Performance appraisal methods can be generally categorized into two groups: traditional
(past-oriented) methods and modern (future-oriented) methods [1]. Other researchers [4]
have classified the existing methods into three groups: absolute standards, relative standards,
and objectives. The performance appraisal methods are:

A. Traditional Methods:

Traditional methods are comparatively older methods of performance appraisal. These
methods are past-oriented approaches that concentrate only on past performance.
The following are the typical traditional methods that were used in the past:

a) Ranking Method: The superior ranks his or her employees based on merit, from best
to worst [2]. However, how and why one employee is better than another is not
elaborated in this method.
b) Graphic Rating Scales: In 1931 a behaviourism enhancement was introduced to the
graphic rating scale [3]. According to [2], a graphic rating scale is a scale that lists several
traits and a range of performance for each. The employee is then graded by finding the
score that best defines his or her level of performance for each trait.

c) Critical Incident Method: This method concentrates on certain critical
behaviours of an employee that make a significant difference in performance.
According to [2], the critical incident method keeps a record of unusual examples of an
employee's work-related behaviour and revisits them with the employee at prearranged
times.

d) Narrative Essay: In this method the administrator writes an explanation of the
employee's strengths and weaknesses, with points for improvement, at the end of the
evaluation period. This method primarily attempts to concentrate on behaviours [4].
Some of the evaluation criteria are as follows: overall impression of performance,
existing capabilities & qualifications, previous performance, and suggestions by others.
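The graphic rating scale above reduces an appraisal to a score per trait, typically combined with equal weight per trait. A minimal sketch of that scoring arithmetic follows; the traits and the 1-5 ratings are invented for illustration, not taken from the survey.

```python
# Hypothetical graphic-rating-scale appraisal: one 1-5 rating per trait.
RATINGS = {"dependability": 4, "initiative": 3, "output": 5,
           "attendance": 4, "attitude": 3}

# Equal weight per trait -- the scale's simplicity, and also a commonly
# cited limitation, since all criteria count the same.
overall = sum(RATINGS.values()) / len(RATINGS)
print("overall rating: %.1f" % overall)
```

Replacing the plain average with per-trait weights is the usual first refinement when some traits matter more than others.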

B. Modern Methods:

Modern methods were formulated to enhance the conventional methods, attempting to
remedy the shortcomings of the old methods, such as bias and subjectivity. The
following presents the typical modern methods:

e) Management by Objectives (MBO): The performance is graded
against the achievement of the objectives specified by the management. MBO includes three
main processes: objective formulation, the execution process, and performance feedback [5].
Weihrich [6] proposed the system approach to management by objectives. It consists of
seven components: strategic planning and hierarchy of objectives, setting objectives,
planning for action, implementation of MBO, control and appraisal, subsystems, and
organizational and management development.

f) Behaviourally Anchored Rating Scales (BARS): BARS contrast an
individual's performance against specific examples of behaviour that are anchored to
numerical ratings. For example, a level-three rating for a doctor may require them to show
sympathy to patients, while a level-five rating may require them to show higher levels of
empathy. BARS utilize behavioural statements or concrete examples to explain various
levels of performance for each element of performance [7].

g) Human Resource Accounting (HRA): In this method, performance is
judged in terms of the cost and contribution of the employees. Johnson [8] incorporated both
HRA models and utility analysis (UA) models to form the concept of human resource
costing and accounting (HRCA).

h) Assessment Centre: An assessment centre is a central location
where managers come together to have their participation in job-related exercises
evaluated by trained observers. It is more focused on observation of behaviours across a
series of selected exercises or work samples. Appraisees are requested to participate in
in-basket exercises, work groups, computer simulations, fact-finding exercises,
analysis/decision-making problems, role playing, and oral presentation exercises [9].

i) 360 Degree: This is a popular performance appraisal technique that includes evaluation
inputs from several stakeholders, such as immediate supervisors, team members, customers,
peers, and the employee himself or herself [4]. 360-degree feedback provides people with
information about the effect of their actions on others.

j) 720 Degree: The 720-degree method concentrates on what matters most, which is the
customer's or investor's view of the employee's work [10]. In a 720-degree appraisal,
feedback is also taken from external sources such as stakeholders, family, suppliers, and
communities. The 720-degree method provides individuals with a markedly different view
of themselves as leaders and growing individuals. It is essentially the 360-degree appraisal
method practiced twice.
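The BARS idea described in item f) is essentially a lookup structure: each numeric level on the scale is anchored to a behavioural statement, and an observed behaviour is mapped to the nearest anchored level. The anchors below for a doctor's "patient empathy" dimension are invented purely to illustrate the data structure.

```python
# Hypothetical BARS scale for one performance dimension: numeric level -> anchor.
BARS_EMPATHY = {
    1: "Dismisses patient concerns",
    3: "Shows sympathy to patients",
    5: "Shows consistently high levels of empathy",
}

def nearest_anchor(observed_level, scale):
    """Map an observed behaviour level to the closest anchored rating.
    Ties resolve to the anchor listed first (here, the lower level)."""
    return min(scale, key=lambda lvl: abs(lvl - observed_level))

print(nearest_anchor(5, BARS_EMPATHY), "->", BARS_EMPATHY[5])
```

Note that a full BARS instrument needs one such anchored scale per performance element and per job, which is exactly why the method is costly to construct.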

Table 1: Appraisal Performance Methods Summary

a) Ranking Method
   Key concept: Rank employees from best to worst on a particular trait.
   Pros: Simple and easy to use; fast and transparent.
   Cons: Less objective; not suitable for a large workforce; difficult to
   determine workers' strengths and weaknesses.

b) Graphic Rating Scales
   Key concept: Rating scales consist of several numerical scales representing
   job-related performance criteria such as dependability, initiative, output,
   attendance, attitude, etc. The employee is rated by identifying the score
   that best defines his or her performance for each trait.
   Pros: Adaptable; easy to use and easily constructed; low cost; every type
   of job can be evaluated; covers a large number of employees.
   Cons: Rater's bias (subjectivity); equal weight for all criteria.

c) Critical Incident
   Key concept: Concentrates on certain critical behaviours of the employee
   that make all the difference in performance.
   Pros: Feedback is easy; assessment is based on actual job behaviours;
   chances of subordinate improvement are high.
   Cons: Analysing and summarizing data is time consuming; difficult to gather
   information about critical incidents via a survey.

d) Narrative Essay
   Key concept: The rater describes the employee in detail within a number of
   general categories such as overall impression of performance and existing
   capabilities and qualifications for performing the job.
   Pros: Fills information gaps about the employees; addresses all factors;
   provides comprehensive feedback.
   Cons: Time consuming; prone to rater bias; requires effective writers.

e) Management by Objectives (MBO)
   Key concept: Performance is rated against the achievement of the objectives
   stated by the management.
   Pros: Easy to execute and measure; employees have a clear understanding of
   the roles and responsibilities expected of them; assists employee
   counselling and direction.
   Cons: Differences in goal interpretation; possibility of missing integrity,
   quality, etc.; difficult for appraiser and appraisee to agree on
   objectives; not applicable to all jobs.

f) Behaviourally Anchored Rating Scale (BARS)
   Key concept: BARS links aspects of the critical incident and graphic rating
   scale methods. The manager grades employees according to items on a
   numerical scale.
   Pros: Employee performance is defined by job behaviours in an expert
   approach; involvement of appraiser and appraisee leads to more acceptance;
   helps overcome rating errors.
   Cons: Scale independence may not be valid/reliable; behaviours are activity
   oriented rather than result oriented; time consuming; each job requires a
   separate BARS scale.

g) Human Resource Accounting (HRA)
   Key concept: People are valuable resources of an organization. Performance
   is assessed from the monetary returns the employee yields to his or her
   organization; it relies on cost and benefit analysis.
   Pros: Improvement of human resources; development and implementation of
   personnel policies; return on investment on human resources; enhances the
   proficiencies of employees.
   Cons: No clear-cut guidelines for finding the cost and value of human
   resources; measures only the cost to the organization and ignores the
   employee's value to it; unrealistic to measure employees under uncertainty.

h) Assessment Centres
   Key concept: Employees are appraised by monitoring their behaviours across
   a series of selected exercises.
   Pros: Better forecasts of future performance and progress; simple concepts;
   flexible methodology; assists in promotion decisions and in diagnosing
   employee development needs; allows measurement of multiple traits.
   Cons: Costly and difficult to manage; needs a large staff and a great deal
   of time; only a limited number of people can be processed at a time.

i) 360 Degree
   Key concept: Depends on the input of an employee's superiors, peers,
   subordinates, and sometimes suppliers and customers.
   Pros: Allows employees to gain a better understanding of their impact on
   the people they interact with every day; excellent employee development
   tool; precise and dependable system; legally more justifiable.
   Cons: Time consuming and very costly; difficult to interpret the findings
   when they differ from group to group; difficult to execute in
   cross-functional teams; difficult to maintain confidentiality.

C. Comparison of Performance Appraisal Methods: As shown in Table 1, each method has
pros and cons. In order to determine the best appraisal method, one needs to answer the
question: "best with respect to what?" The organization's goals and the type of performance
being measured are the key factors in deciding on the best method. Jafari [60] proposed a
framework for the selection of appraisal methods and compared some performance
evaluation methods to facilitate the selection process. The framework is based on six criteria
rated by an expert, as shown in Table 2 (a: ranking method, b: graphic rating scales
method, etc.).
Table 2: Performance appraisal methods' comparison

Criteria                      a  b  c  d  e  f  g
Training needs evaluation     C  B  A  B  A  A  A
Coincidence with institutes   C  A  A  B  A  A  B
Excite staff to be better     C  C  B  C  B  B  A
Ability to compare            A  B  C  C  A  B  A
Cost of method                A  A  B  A  C  C  B
Free of error                 A  C  C  C  B  B  A
The matrix below is extracted from Table 2, with A replaced by 3, B by 2, and C by 1:

      a  b  c  d  e  f  g
X1    1  2  3  2  3  3  3
X2    1  3  3  2  3  3  2
X3    1  1  2  1  2  2  3
X4    3  2  1  1  3  2  3
X5    3  3  2  3  1  1  2
X6    3  1  1  1  2  2  3
 The scores are normalized per criterion by a linear scale, using one of the following
formulas: benefit criteria: rij = xij / max(xi); cost criteria: rij = min(xi) / xij. Here only
X5 (cost of method) is a cost criterion.

The matrix after normalization (X5 normalized with the cost formula) looks as follows:

      a     b     c     d     e     f     g
X1  0.33  0.67  1.00  0.67  1.00  1.00  1.00
X2  0.33  1.00  1.00  0.67  1.00  1.00  0.67
X3  0.33  0.33  0.67  0.33  0.67  0.67  1.00
X4  1.00  0.67  0.33  0.33  1.00  0.67  1.00
X5  0.33  0.33  0.50  0.33  1.00  1.00  0.50
X6  1.00  0.33  0.33  0.33  0.67  0.67  1.00


 Then define a normalized weight for each criterion from the straight rank of each
criterion, using the following formula:

              n
Wj = (n-rj+1)/∑ (n-rk+1)
             k=1

 where Wj is the normalized weight of the j-th criterion, n is the number of criteria
under consideration, and rj is the rank position of criterion j.

Table 3: Rank, weight and Wj of each criterion

Criteria                      Rank (rj)   Weight (n-rj+1)   Wj
Training needs evaluation         4             3           0.14
Coincidence with institutes       6             1           0.05
Excite staff to be better         5             2           0.10
Ability to compare                1             6           0.29
Cost of method                    2             5           0.24
Free of error                     3             4           0.19
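Table 3's weights can be reproduced directly from the ranks with the formula above (a small sketch; the ranks are those listed in Table 3):

```python
# Wj = (n - rj + 1) / sum_k (n - rk + 1), applied to Table 3's ranks.
ranks = [4, 6, 5, 1, 2, 3]                       # r_j for the six criteria
n = len(ranks)
total = sum(n - r + 1 for r in ranks)            # = 21
weights = [round((n - r + 1) / total, 2) for r in ranks]
print(weights)   # [0.14, 0.05, 0.1, 0.29, 0.24, 0.19]
```

The weight is largest for the best-ranked (rank 1) criterion and shrinks linearly with rank, which is exactly the "rank sum" weighting scheme.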
 Then use each criterion weight from Table 3 with the normalized matrix above to rank
the appraisal methods, as shown in Table 4. In this example MBO is at the top of the
list, followed by 360 Degree, etc.

Table 4: Methods Ranking

Method                         Method's grade
e. MBO                              0.91
i. 360 Degree Feedback              0.87
f. BARS                             0.82
a. Ranking                          0.66
c. The critical incident            0.54
b. The graphic rating scale         0.51
d. The essay                        0.40
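The whole Table 2 to Table 4 computation is a simple additive weighting scheme, and can be reproduced in a short script (a sketch; the matrix values and rounded weights are those of Tables 2 and 3):

```python
# Simple additive weighting over Table 2's scores (A=3, B=2, C=1).
raw = [                                  # rows X1..X6, columns methods a..g
    [1, 2, 3, 2, 3, 3, 3],               # X1 training needs evaluation
    [1, 3, 3, 2, 3, 3, 2],               # X2 coincidence with institutes
    [1, 1, 2, 1, 2, 2, 3],               # X3 excite staff to be better
    [3, 2, 1, 1, 3, 2, 3],               # X4 ability to compare
    [3, 3, 2, 3, 1, 1, 2],               # X5 cost of method (cost criterion)
    [3, 1, 1, 1, 2, 2, 3],               # X6 free of error
]
weights = [0.14, 0.05, 0.10, 0.29, 0.24, 0.19]   # Wj from Table 3
COST = {4}                                # X5 is the only cost criterion

norm = []
for i, row in enumerate(raw):
    if i in COST:
        norm.append([min(row) / x for x in row])   # rij = min(xi) / xij
    else:
        norm.append([x / max(row) for x in row])   # rij = xij / max(xi)

grades = [round(sum(w * norm[i][j] for i, w in enumerate(weights)), 2)
          for j in range(len(raw[0]))]
for name, g in sorted(zip(["a", "b", "c", "d", "e", "f", "g"], grades),
                      key=lambda p: -p[1]):
    print(name, g)
```

Running it reproduces the grades of Table 4 (MBO 0.91 on top, the essay 0.40 at the bottom), confirming that the tables are mutually consistent.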
III. Fuzzy Related Appraisal Techniques

There are many fuzzy-related appraisal techniques in the literature. In this section we
present them.

A. AHP & FAHP

a. Analytic Hierarchy Process (AHP) Technique: The analytic
hierarchy process (AHP) is a quantitative technique for ranking decision
alternatives using multiple criteria [11]. AHP resolves complicated
decisions by structuring the alternatives into a hierarchical framework.
The hierarchy is formed through pair-wise comparisons of individual
judgments rather than attempting to rank the entire list of decisions and
criteria at the same time. The process normally includes six steps [23]:
defining the unstructured problem, specifying criteria and alternatives,
constructing pair-wise comparisons among decision elements, using the
eigenvalue method to estimate the relative weights of the decision
elements, calculating the consistency properties of the matrix, and
aggregating the weighted decision elements. Deciding and selecting the
essential factors is the most inventive part of decision-making. In AHP,
the selected factors are arranged in a hierarchic structure descending
from a global goal through criteria to sub-criteria at their appropriate
successive levels [12, 16].

Saaty [12] introduced AHP. Its principles are reviewed to give overall
background information on the type of measurement utilized, its
properties and its application. Saaty [12] also showed how to structure a
decision problem, how to derive relative scales using judgment or data
from a standard scale, and how to execute the subsequent arithmetic
operations on such scales while avoiding useless number crunching. The
decision is given in the form of paired comparisons [13, 14, 15]. AHP is
used with two types of measurement, relative and absolute [12]. In both,
the paired comparisons are performed to derive priorities for criteria
with respect to the goal. Figure 1 shows an example of relative
measurement for "Choosing the best house to buy", where the paired
comparisons are performed throughout the hierarchy. In this example, the
problem was to determine which of three houses to select. The first step
is to structure the problem as a hierarchy (as shown in Figure 1). The
top level is the overall objective, "Satisfaction with house". The 2nd
level contains the eight criteria that contribute to the objective, and
the bottom level contains the three candidate houses that are to be
assessed against the criteria in the 2nd level.
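As a sketch of the AHP weighting step, the priority vector of a pairwise comparison matrix can be approximated by normalizing the row geometric means (a standard approximation to the principal eigenvector; the 3x3 judgment matrix below is a hypothetical example, not the data of Figure 1):

```python
import math

# AHP priority weights from a pairwise comparison matrix, using the
# geometric-mean approximation to the principal eigenvector.
# Hypothetical judgments: criterion 1 is 3x as important as criterion 2
# and 5x as important as criterion 3; A[j][i] = 1 / A[i][j].
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Geometric mean of each row, then normalize so the weights sum to 1.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]

print([round(w, 3) for w in weights])
```

A full AHP implementation would also compute the consistency ratio of the judgment matrix before accepting the weights.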
CHAPTER-06

IMPACT OF SUCCESSION PLANNING ON
ORGANISATIONAL PERFORMANCE EVALUATION

1.1 INTRODUCTION

In the ever-changing business landscape of today, firms must overcome many obstacles to remain
competitive and sustainable.
One of the most critical factors contributing to an organization's long-term success is its ability to manage
talent effectively. Succession planning, which involves identifying and cultivating future leaders from the
organization's internal personnel to take on essential leadership positions, is an essential strategy in this
regard.
There are more benefits to succession planning than just covering open positions. By putting the right people
in place to advance an organization's strategic vision and objectives, it plays a crucial part in determining
how the organization will develop down the line. By encouraging a culture of ongoing development and
leadership preparation, strategic succession planning not only helps businesses prepare for the unplanned
departure of important employees but also improves overall organizational performance.
There are several ways in which succession planning affects the performance of a company: it improves
employee retention, lowers recruitment expenses, strengthens leadership continuity, and aligns leadership
competencies with the strategic goals of the firm. Furthermore, when employees perceive prospects for growth
and progress inside the company, a well-executed succession plan can boost employee engagement and morale.
The objective of this research is to investigate how organizational performance and succession planning are
related. It examines the succession planning techniques used by firms and evaluates the results using a
diverse set of performance indicators. By understanding the significance of succession planning, organizations
can enhance their future readiness and maintain competitiveness in a constantly changing business environment.

IMPORTANCE:
Succession planning is increasingly recognized as a critical component of organizational strategy,
particularly in an era characterized by rapid change, intense competition, and demographic shifts. As
businesses strive for longevity and success, the value of a well-structured succession plan cannot be
overstated. The ability to identify and nurture future leaders from within the organization is not merely a
tool for ensuring leadership continuity but also a strategic advantage that can significantly impact overall
performance. The relevance of succession planning lies in its potential to influence several key areas of
organizational performance:

1. Leadership Continuity: Succession planning ensures that leadership transitions occur smoothly,
minimizing disruptions and maintaining strategic momentum. This continuity is crucial for sustaining
operational efficiency and achieving long-term objectives.

2. Employee Morale and Retention: Workers are more likely to stay engaged and
committed if they perceive a clear path for professional growth. Succession planning
shows that a company values its workers and is committed to their development, which
can boost morale and increase retention rates.
3. Risk Management: Organizations without a succession plan are vulnerable to the risks associated with
unexpected departures of key leaders. Succession planning mitigates these risks by preparing qualified
individuals to step into leadership roles, reducing the potential for crises.

4. Strategic Alignment: By coordinating succession planning with the organization's
strategic vision, businesses can ensure that upcoming executives have the abilities
and perspective needed to propel the firm ahead. This alignment improves the
company's capacity to seize fresh opportunities and adjust to shifting market
conditions.
5. Cost Efficiency: Robust succession planning reduces the need for external recruitment, which can be
costly and time-consuming. Developing internal talent is often more cost-effective and ensures that new
leaders are already familiar with the organization's culture and operations.

Defining Succession Planning:


Succession planning is the proactive process of identifying, grooming, and preparing
potential candidates to take up important leadership roles in a company when vacancies
arise. To create a pipeline of prepared and skilled leaders who can step into key
roles as needed, this approach includes evaluating the skills and competences required
for future responsibilities and offering targeted development opportunities.
Succession planning is a proactive strategy that aligns personnel management with the
organization's long-term objectives, in contrast to replacement planning, which
concentrates on filling positions as they become available. It is about ensuring that
the company is ready for the inevitable turnover in leadership and capable of
maintaining and improving performance through the strategic development of internal
talent.
1.2 INDUSTRY PROFILE
In the IT industry, where rapid technical breakthroughs, substantial staff turnover, and
fierce competition for qualified talent are commonplace, succession planning
represents an essential strategic endeavour. In this sector, the role of succession
planning cannot be emphasized enough, since it has a direct bearing on an
organization's capacity for innovation, leadership continuity, and employee retention.
An analysis by Deloitte indicates that IT organizations with strong succession
planning strategies have a 1.7-fold higher chance of being regarded as leaders in
innovation and flexibility. These organizations also have 20% higher employee
retention rates than those without formal succession plans, evidence of the value of
succession planning in lowering employee attrition, which is especially high in the IT
sector, where the average yearly turnover rate is about 13.2% (LinkedIn Workforce
Report, 2023).
Financial performance is significantly impacted by effective succession planning
as well. IT organizations with well-established succession planning procedures report
a 1.5 times higher return on investment (ROI) in leadership development than their
peers. The rationale behind succession planning is to guarantee that senior positions
are occupied by qualified persons who are aligned with the company's strategic goals
and culture.
Furthermore, succession planning has come to place a greater emphasis on
diversity. IT businesses with diverse leadership teams have a 35% higher chance of
outperforming their peers financially, according to a McKinsey analysis. This is crucial
in the IT industry, where inclusiveness and diversity are viewed as catalysts for
creativity that enable businesses to access a wider variety of viewpoints and ideas.
For instance, Microsoft's competitive advantage in the worldwide market has been
attributed to its succession planning approach. By emphasizing cross-functional
training and leadership development, the company has reduced the time it takes to fill
essential leadership positions by 25%, assuring stability and continuity during
changes.
Succession planning is not merely a risk management tool in the IT sector but
a driver of competitive advantage. Companies that prioritize and invest in succession
planning are better positioned to navigate the challenges of rapid technological
change, retain top talent, and achieve sustained financial success.
Global IT Sector Industry Profile
 Market Size and Growth:
o Value: ~$5.2 trillion (2023)
o Growth Rate: ~5.6% CAGR from 2024 to 2028
Source: IDC, 2023
 Major Segments:
o Software: ~$720 billion by 2025
o Hardware: ~$1.1 trillion by 2024
o IT Services: ~$1.3 trillion by 2024
o Telecommunications: ~$1.8 trillion by 2025
Source: Gartner, 2023
 Key Trends:
o Cloud Computing: Market to reach ~$832 billion by 2025
o AI: Market projected to grow from ~$136 billion in 2023 to ~$1.8 trillion by 2030
o Cybersecurity: Expected to reach ~$250 billion by 2025
o 5G: Global connections to exceed 3 billion by 2025
Source: Forrester Research, 2023
 Regional Insights:
o North America: Largest market, leading in cloud, AI, and cybersecurity
o Europe: Growth in IT services and digital transformation
o Asia-Pacific: Fastest-growing, driven by digitalization
o Latin America & MEA: Gradual growth, increasing IT investments
Source: McKinsey & Company, 2023
 Employment and Talent Trends:
o Employment: Over 50 million globally
o Skills Demand: High demand for cloud, AI, data analytics, and cybersecurity skills
Source: LinkedIn Workforce Report, 2023
GLOBAL PLAYERS
A few major international businesses dominate the IT industry, leading in terms of
innovation, market share, and impact. Through constant innovation and technological
adaptation, these businesses have been recognized as leaders in their respective fields.

1. Microsoft Corporation
2. Apple Inc.
3. Google LLC
4. Amazon Web Services
5. IBM Corporation
6. Intel Corporation
7. SAP SE
8. Dell Technologies
9. Oracle Corporation
10. Cisco Systems Inc.

COMPANY PROFILE

1. CAPGEMINI
PROMOTERS:
Capgemini, founded by Serge Kampf in France in 1967, has evolved into a global leader in consulting,
technology services, and digital transformation.

VISION:
To be the foremost provider of consulting, technology, professional, and outsourcing services, recognized by
clients for delivering excellence.

MISSION:
To leverage technology and innovation to empower businesses and drive sustainable growth for clients
worldwide.

QUALITY POLICY:
Capgemini is committed to delivering high-quality solutions and services that meet or exceed client
expectations, while continually improving processes and performance.

PRODUCT OR SERVICE PROFILE:


 Consulting services: Strategy, transformation, and innovation consulting.
 Technology services: Application development, infrastructure management, and cybersecurity.
 Digital transformation: Cloud, analytics, mobility, and digital customer experience solutions.
 Outsourcing services: Business process outsourcing (BPO), IT outsourcing, and managed services.

COMPETITOR'S INFORMATION:
 Accenture
 IBM
 Deloitte
 TCS
 Infosys

SWOT ANALYSIS
STRENGTHS:
 Global presence with a diverse client base.
 Strong expertise in consulting, technology, and digital transformation.
 Forming strategic alliances with top technology providers.
 Emphasis on innovation and research and development.
 Robust financial performance and sustainable growth.

WEAKNESSES:
 Reliance on external vendors for technology components.
 Competition from established players and niche providers.
 Need for continuous investment in talent development and training.
 Integration challenges from mergers and acquisitions.

OPPORTUNITIES:
 Escalating need for cloud-based services and digital evolution.
 Expansion into emerging markets and industries.
 Strategic acquisitions to enhance capabilities and market reach.
 Focus on sustainability and environmental initiatives.
 Leveraging data analytics and AI for personalized solutions.
THREATS:
 Intense competition from global and regional players.
 Economic uncertainties and geopolitical risks.
 Rapid technological advancements and disruptive innovations.
 Data privacy and cybersecurity concerns.
 Regulatory changes impacting business operations.

2. DELOITTE
PROMOTERS:

Deloitte was founded in 1845 by William Welch Deloitte in the United Kingdom. It has
grown to become one of the largest professional services firms in the world.

VISION:

To set the benchmark for excellence in professional services by delivering outstanding
value to clients and creating a meaningful impact.

MISSION:

To assist clients in addressing their most difficult challenges and generating value
through innovation, collaboration, and a commitment to integrity.

QUALITY POLICY:

Deloitte provides top-tier professional services that consistently meet or surpass client
expectations, uphold ethical principles, and support the success of both clients and
society.

PRODUCT OR SERVICE PROFILE:

 Audit and assurance

 Consulting

 Tax advisory

 Risk consulting
 Financial consulting

 Legal services and regulatory services

COMPETITOR'S INFORMATION:

 PricewaterhouseCoopers (PwC)

 Ernst & Young (EY)

 KPMG

 Accenture

 Capgemini

SWOT ANALYSIS

STRENGTHS:

 Established brand reputation and global presence.

 Extensive service offerings across industries and sectors.

 Strong consulting, advisory, and professional services capabilities.

 Collaborative culture and emphasis on employee development.

 Thought leadership and industry expertise.

WEAKNESSES:

 Dependency on client industries and economic cycles.

 Complex organizational structure and potential conflicts of interest.

 Limited market share in certain regions or service segments.

 Compliance and regulatory challenges in diverse geographies.

 Attrition and talent retention issues in a competitive market.

OPPORTUNITIES:
 Digital transformation and technology adoption across industries.

 Expansion into emerging markets and sectors.

 Strategic partnerships and alliances to enhance service offerings.

 Emphasis on sustainability and Environmental, Social, and Governance (ESG) efforts.

 Integration of AI, analytics, and automation into service delivery.


THREATS:

 Intense competition from traditional and new entrants.

 Economic downturns impacting client spending and demand.

 Regulatory changes and compliance requirements.

 Cybersecurity threats and data privacy concerns.

 Talent shortages and skill gaps in emerging technologies.

3. TATA CONSULTANCY SERVICES (TCS)


PROMOTERS:

Founded in 1968 by Tata Sons in India, Tata Consultancy Services (TCS) is a member
of the Tata Group, one of India's largest and most esteemed conglomerates.

VISION:

To assist customers in reaching their business goals through cutting-edge IT solutions
and services.

MISSION:

To be a global leader in IT services, consulting, and business solutions, delivering
excellence and value to customers worldwide.

QUALITY POLICY:
TCS is committed to delivering high-quality IT solutions and services that meet or
exceed customer requirements, while continuously improving processes and
performance.

PRODUCT OR SERVICE PROFILE:

 Application development and maintenance

 IT infrastructure services

 Consulting and business solutions

 Enterprise solutions (ERP, CRM, SCM)

 Digital transformation services

 Business process outsourcing (BPO)

COMPETITOR'S INFORMATION:

 Infosys

 Wipro

 Accenture

 Cognizant

 IBM

SWOT ANALYSIS

STRENGTHS:

 Largest IT services company in India with a global footprint.

 Extensive portfolio of IT services and solutions.

 Strong focus on innovation and research and development.

 Extensive domain knowledge across various industries and sectors.

 Robust financial performance and stable growth.

WEAKNESSES:
 Dependency on a few key markets and clients.

 Competition from global and regional players.

 Limited brand recognition compared to competitors.

 Need for continuous investment in talent development.

 Margin pressures due to pricing and cost competition.

OPPORTUNITIES:

 Digital transformation and cloud adoption trends.

 Expansion into new geographies and industry verticals.

 Strategic acquisitions to enhance capabilities and market reach.

 Focus on sustainability and green initiatives.

 Leveraging AI, analytics, and automation for efficiency and innovation.


THREATS:

 Intense competition from traditional and new competitors.

 Economic uncertainties and geopolitical risks.

 Data privacy and cybersecurity concerns.

 Regulatory changes impacting business operations.

 Talent shortages and skill gaps in emerging technologies.

4. GENPACT
PROMOTERS:

Genpact was founded in 1997 as a business process outsourcing (BPO) division of
General Electric (GE) in the United States. It became an independent company in 2005.

VISION:

To be a trusted partner for clients, delivering innovative solutions and services that
drive business transformation and growth.
MISSION:

To leverage technology, analytics, and process expertise to assist clients in attaining
operational excellence and gaining a competitive edge.

QUALITY POLICY:

Genpact is committed to delivering high-quality business process services and
solutions that meet or exceed client expectations, while adhering to the highest
ethical and professional standards.

PRODUCT OR SERVICE PROFILE:

 Finance and accounting

 Procurement and supply chain

 Customer experience management

 Risk and compliance

 Analytics and insights

 Digital transformation services


COMPETITOR'S INFORMATION:

 Accenture

 IBM

 Capgemini

 Wipro

 Infosys

SWOT ANALYSIS

STRENGTHS:

 Global leader in business process management and outsourcing.

 Extensive industry expertise and domain knowledge.


 Strong focus on analytics, digital, and automation solutions.

 Robust delivery capabilities and scalable operations.

 Strategic partnerships and alliances with technology providers.

WEAKNESSES:

 Dependency on a few key clients and industries.

 Margin pressures due to pricing and cost competition.

 Limited brand recognition compared to larger competitors.

 Need for continuous investment in technology and innovation.

 Attrition and talent retention challenges in a competitive market.

OPPORTUNITIES:

 Digital transformation and automation trends.

 Expansion into new geographies and industry verticals.

 Strategic acquisitions to enhance capabilities and market reach.

 Focus on sustainability and ESG initiatives.

 Leveraging AI, analytics, and cloud for business optimization.


THREATS:

 Intense competition from traditional and new entrants.

 Economic downturns impacting client spending and demand.

 Regulatory changes and compliance requirements.

 Data privacy and cybersecurity risks.

 Talent shortages and skill gaps in emerging technologies.

5. IBM:
PROMOTERS:
IBM (International Business Machines Corporation) was founded in 1911 as the
Computing-Tabulating-Recording Company (CTR) by Charles Ranlett Flint. It later
became IBM in 1924.

VISION:

To be the global leader in delivering innovative technology solutions and services that
propel business transformation and advance societal progress.

MISSION:

To assist clients in addressing their most challenging problems through technology
and innovation, delivering value and driving sustainable growth.

QUALITY POLICY:

IBM is committed to delivering high-quality technology offerings and support that
meet or exceed client requirements, while maintaining the highest standards of
integrity and professionalism.

PRODUCT OR SERVICE PROFILE:

 Hardware: Mainframe systems, servers, storage, and networking equipment.

 Software: Operating systems, middleware, analytics, and AI solutions.

 Cloud computing: Public, private, and hybrid cloud platforms and services.

 Consulting: Strategy, transformation, and implementation services.

 Business process outsourcing (BPO): Finance, HR, procurement, and customer service.

COMPETITOR'S INFORMATION:

 Microsoft

 Google
 Amazon Web Services (AWS)

 Oracle

 SAP

SWOT ANALYSIS

STRENGTHS:

 Established Brand prestige and global presence.

 Diverse portfolio of IT solutions and services.

 Strong expertise in cloud computing, AI, and analytics.

 Focus on research and innovation with IBM Research labs.

 Long-standing relationships with enterprise clients.

WEAKNESSES:

 Declining revenue from legacy hardware and software businesses.

 Competition from agile and innovative tech companies.

 Integration challenges from mergers and acquisitions.

 Dependency on a few key clients and industries.

 Need for continuous investment in talent and technology.

OPPORTUNITIES:

 Growth opportunities in cloud computing and AI.

 Expansion into emerging markets and industries.

 Strategic partnerships and alliances to enhance offerings.

 Focus on sustainability and environmental initiatives.

 Leveraging data analytics and AI for personalized solutions.


THREATS:

 Intense competition from global and regional players.

 Economic uncertainties and geopolitical risks.


 Rapid technological advancements and disruptive innovations.

 Data privacy and cybersecurity concerns.


 Regulatory changes impacting business operations.

3.1 STATEMENT OF THE PROBLEM

In today's fast-evolving business landscape, companies confront the challenge of
maintaining consistent performance amidst leadership transitions. Succession
planning, a strategic process designed to identify and develop future leaders, is
critical in ensuring organizational stability and continuity. However, numerous
organizations find it challenging to implement successful succession planning
practices, leading to potential disruptions in performance, reduced employee morale,
and an absence of strategic direction.

As organizations navigate an increasingly complex and volatile business environment,
the effectiveness of succession planning has emerged as a pivotal factor in sustaining
organizational performance and competitiveness. Despite the critical role of
succession planning in preparing for leadership transitions and ensuring continuity,
many organizations face significant challenges in developing and implementing robust
succession plans that are in line with contemporary demands and trends.

The problem at hand is that insufficient or ineffective succession planning can lead to
several adverse outcomes, including leadership vacuums, strategic disarray, and
decreased employee morale. With the rise of digital transformation, global
competition, and evolving workforce expectations, there is a pressing need to
understand how modern succession planning practices impact key performance
metrics such as company-specific resilience, innovation and general productivity.

This study seeks to explore the connection between succession planning practices
and organizational performance, focusing on how current trends and challenges
influence this dynamic. By analysing the ways in which effective succession planning
can mitigate risks and enhance performance, the research aims to provide essential
insights for companies to optimize their succession strategies and maintain a
competitive edge in a rapidly changing business landscape.

3.2 NEED FOR THE STUDY


In a time characterized by rapid technological advancements, global competition, and
evolving workforce expectations, organizations must be agile and strategically
prepared to navigate leadership transitions effectively. Succession planning, when
executed proficiently, serves as a critical mechanism for ensuring organizational
continuity, stability, and long-term success. However, many organizations struggle to
develop and implement succession plans that address contemporary challenges and
align with emerging trends.

The need for this study is underscored by several factors:

1. Increasing Complexity of Leadership Roles: As organizations adapt to digital
transformation and changing market conditions, leadership roles are becoming more
complex. Effective succession planning is essential to prepare leaders who can manage
these complexities and drive organizational success.
2. Risk Mitigation: Leadership transitions can pose significant risks to organizational
performance. Understanding how effective succession planning mitigates these
risks can help organizations maintain performance levels and avoid disruptions
during transitions.

3. Alignment with Modern Trends: Traditional succession planning practices may
not fully address current trends such as the integration of technology, diversity and
inclusion, and remote work. The objective of this study is to analyse how contemporary
practices can be integrated to enhance organizational performance.

4. Evidence-Based Insights: There is a need for empirical research on the direct
impact of succession planning on performance metrics. This study seeks to bridge that
gap by providing data-driven insights to assist organizations in refining their
succession strategies.

5. Strategic Advantage: Organizations that successfully implement succession
planning secure a competitive edge by ensuring a pipeline of capable leaders
ready to meet future challenges. This study aims to offer practical
recommendations for leveraging succession planning as a strategic advantage.
3.3 OBJECTIVES

1. To Analyse the Current State of Succession Planning Practices:

 Assess how organizations are currently implementing succession planning strategies.

 Identify common practices, challenges, and gaps in succession planning.

2. To Identify Best Practices and Trends in Succession Planning:

 Explore contemporary trends in succession planning, including digital tools,
diversity and inclusion considerations, and adaptation to remote work environments.

 Determine recommended practices that match modern organizational needs and
performance objectives.

3. To Review the Impact of Succession Planning on Leadership Transitions and Continuity:

 Analyse how succession planning affects leadership transitions and the continuity
of strategic initiatives.

 Assess the role of succession planning in mitigating challenges related to gaps in
leadership and ensuring smooth transitions.

4. To Provide Recommendations for Enhancing Succession Planning Strategies:

 Develop actionable recommendations for organizations to improve their succession
planning practices.

 Offer insights on how to align succession planning with present and forthcoming
organizational goals and trends.
3.4 SCOPE OF THE STUDY

1. Organizational Focus:

 The research will examine succession planning approaches in diverse organizations,
including large corporations, mid-sized companies, and small enterprises.

 It will focus on organizations within specific industries, such as technology,
manufacturing, and services, to understand industry-specific nuances.

2. Succession Planning Practices:

 The study will analyse current succession planning practices, including methods
for identifying and developing future leaders, assessment tools, and training
programs.

 It will analyse how these practices align with modern trends such as digital
transformation, diversity and inclusion, and remote work.

3. Performance Metrics:

 The research will investigate key performance metrics influenced by succession
planning, including productivity, financial performance, employee engagement, and
organizational resilience.

 It will assess how effective succession planning impacts leadership transitions and
the continuity of strategic initiatives.

4. Geographical Context:

 The study will focus on organizations operating in specific geographical regions,
such as North America, Europe, and Asia, to understand regional differences and
similarities in succession planning practices.

5. Time Frame:

 The research will review succession planning practices and organizational
performance data from the past five years to ensure relevance to current trends and
challenges.

6. Data Sources:

 The study will utilize a combination of primary data, including surveys and
interviews with HR professionals and organizational leaders, and secondary data
from industry reports, academic literature, and case studies.
7. Limitations:

 The study will acknowledge limitations such as potential biases in self-reported
data, the availability of comprehensive performance metrics, and the challenge of
generalizing findings across different industries and regions.

3.5 Research Methodology

The analysis of the influence of succession planning on organizational performance
will utilize a mixed-methods strategy integrating quantitative and qualitative research
methodologies to gain a complete insight into the subject.

 Type of Study: The research will be a descriptive and analytical study. It will
aim to describe current approaches to succession planning and analyse their
effects on organizational performance metrics. The study will examine the links
between succession planning strategies and various performance outcomes to
provide actionable insights.

 Sources of Data: Data for the study will be obtained from both primary and
secondary sources. Primary data will be acquired through surveys and interviews with
people directly engaged in or knowledgeable about succession planning within
organizations. Secondary data will be gathered from existing literature, industry
reports, and organizational performance data.

 Primary Data Sources:


o Surveys: Structured questionnaires will be implemented for HR
professionals, senior leaders, and managers to collect numerical data on
succession planning practices, challenges, and their perceived impact on
performance metrics.
o Interviews: Semi-structured interviews involving key figures like HR
managers, executives, and succession planning professionals will furnish
qualitative insights into the implementation and effectiveness of
succession planning strategies.
 Secondary Data Sources:
o Literature Review: Review of academic journals, industry reports, and
case studies to understand existing theories, trends, and gaps in
succession planning research.
o Organizational Performance Data: Analysis of performance metrics
such as productivity, financial performance, and employee engagement
from company reports and industry databases.

 Sampling Plan: The research will use stratified random sampling to ensure
diverse representation across various industries and organizational sizes. The
sampling plan includes:
 Target Population: Organizations across different sectors, including
technology, manufacturing, and services, as well as organizations of various
sizes (small, medium, and large).
 Sample Size: A sufficient number of organizations will be sampled to ensure
statistically significant results and to capture a wide range of perspectives. The
sample size will be finalized based on preliminary research and available resources.
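The stratified plan above can be illustrated with a proportional-allocation sketch. This is a hypothetical illustration only: the strata and population counts below are invented for the example, not drawn from the study.

```python
# Hypothetical illustration only: strata and population counts are invented
# for this sketch, not taken from the study.
population = {"technology": 500, "manufacturing": 300, "services": 200}
total_sample = 100  # desired overall sample size

total_population = sum(population.values())  # 1000
allocation = {
    stratum: round(total_sample * size / total_population)
    for stratum, size in population.items()
}
print(allocation)  # each stratum is sampled in proportion to its share
```

Proportional allocation keeps each industry represented according to its share of the target population, which is the usual default when no stratum is of special interest.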

 Tools and Techniques for Data Collection:


 Surveys:
o Design: Structured questionnaires with a mix of closed-ended and open-ended
questions will be developed to capture quantitative data and initial
qualitative insights.
o Administration: Surveys will be distributed electronically through
platforms such as SurveyMonkey or Google Forms. Responses will be
acquired and kept securely for analysis.
o Analysis: Quantitative data will be analysed using statistical techniques
such as regression analysis and correlation analysis, with software tools
such as SPSS or Excel.

 Interviews:
o Design: Semi-structured interview guides will be created to ensure
uniformity while offering flexibility for in-depth exploration of key topics.
o Conduct: Interviews will be held in person, over the phone, or via
video conferencing tools such as Zoom or Microsoft Teams. Sessions will be
recorded (with permission) and transcribed for analysis.
o Analysis: Qualitative findings from interviews will be analysed using
thematic analysis to identify common themes and patterns. Qualitative
analysis software like NVivo may be used to assist with coding and
organizing the data.
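The quantitative side of the analysis described above can be sketched briefly. The sketch below computes a Pearson correlation between two survey items with pure-Python arithmetic; the scores are hypothetical placeholders, not study data, and in practice SPSS or Excel would produce the same statistic.

```python
import math

# Hypothetical Likert-scale responses (1-5) for two survey items;
# placeholder data, not results from the study.
x = [1, 2, 2, 3, 4, 4, 5, 5]  # e.g. maturity of succession planning
y = [2, 2, 3, 3, 4, 5, 4, 5]  # e.g. perceived performance impact

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
r = cov / (sd_x * sd_y)  # Pearson correlation coefficient
print(round(r, 3))       # strong positive correlation for this toy data
```

A correlation near +1 would indicate that respondents who rate succession planning highly also tend to rate performance highly; regression then quantifies the slope of that relationship.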

3.6 HYPOTHESES

Hypothesis 1:

Null Hypothesis (H₀): Transfer of critical knowledge has no impact on succession planning.

Alternative Hypothesis (H₁): Transfer of critical knowledge has an impact on succession planning.

This alternative hypothesis directly challenges the null by proposing that there is an effect or relationship between the transfer of critical knowledge and the efficacy of succession planning.

ANOVA

Model            Sum of Squares   df   Mean Square   F         Sig.
1  Regression    76.861           1    76.861        236.209   .000b
   Residual      31.889           98   0.325
   Total         108.750          99

a. Dependent Variable: x27
b. Predictors: (Constant), x26

INTERPRETATION:

 Model Significance: The significance (Sig.) value of .000 indicates that the
model is statistically significant at the p < 0.001 level. In simpler terms, there
is strong evidence that the independent variable significantly affects the
dependent variable.

 F-statistic: The F-value is 236.209, which is quite large. This value compares the
variance explained by the regression to the unexplained (residual) variance.
A large F-value suggests that the model explains a significant amount of the
variability in the data.

 Degrees of Freedom (df):


 Regression df = 1: This indicates that there is one independent variable in the
model.
 Residual df = 98: This equals the number of observations (100) minus the number
of parameters estimated (2: one for the intercept and one for the independent
variable).
 Total df = 99: This confirms you have 100 total observations.

 Sum of Squares:
 Regression Sum of Squares = 76.861
 Residual Sum of Squares = 31.889
 Total Sum of Squares = 108.75. The regression explains about 70.7% (76.861 /
108.75) of the total variance in the dependent variable, which is quite good.

 Mean Square:
 Regression Mean Square = 76.861
 Residual Mean Square = 0.325. Each Mean Square is the corresponding Sum of
Squares divided by its degrees of freedom.
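As a quick arithmetic cross-check (a sketch in Python, using only the values reported in the table), the F-statistic and the share of explained variance follow directly from the sums of squares and degrees of freedom:

```python
# Values copied from the ANOVA table above (Hypothesis 1).
ss_regression, df_regression = 76.861, 1
ss_residual, df_residual = 31.889, 98
ss_total = ss_regression + ss_residual          # 108.75

ms_regression = ss_regression / df_regression   # 76.861
ms_residual = ss_residual / df_residual         # ~0.325
f_value = ms_regression / ms_residual           # ~236.2, matching the table
r_squared = ss_regression / ss_total            # ~0.707, i.e. 70.7% explained
print(round(f_value, 1), round(r_squared, 3))
```

Recomputing F from the unrounded mean squares reproduces the reported 236.209 to within rounding, and the R-squared of about 0.707 anticipates the Model Summary that follows.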
Model Summary

Model   R      R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change   Durbin-Watson
1       .841a  .707       .704                .570                         .707              236.209    1     98    .000            1.991

a. Predictors: (Constant), x26
b. Dependent Variable: x27

INTERPRETATION:

 Model Fit: The R value of 0.841 indicates a strong positive correlation between the
predictor (x26) and the dependent variable (x27).

 Coefficient of Determination (R Square): The R Square value of 0.707 means that


70.7% of the variance in the dependent variable (x27) can be explained by the
independent variable (x26) in the model. This suggests a good fit, as the model
accounts for a substantial portion of the variability in the outcome.

 Adjusted R Square: The Adjusted R Square of 0.704 is very close to the R Square,
which is a good sign. The small difference suggests that the model is not overfitted,
especially considering there is only one predictor.

 Standard Error of the Estimate: The value of 0.570 represents the average
deviation of predicted values from the observed values in the original scale of the
dependent variable.

 R Square Change and F Change: The R Square Change of 0.707 is significant (Sig.
F Change = 0.000), which means that the addition of this predictor (x26) to the
model led to a statistically significant improvement in its explanatory power
compared to a model without any predictors.

 Durbin-Watson Statistic: The Durbin-Watson value of 1.991 is very close to 2,


indicating that there is likely no significant autocorrelation in the residuals. This
suggests that the assumption of independence of errors is met.

 Degrees of Freedom: The model has 1 predictor (df1 = 1) and 98 residual degrees
of freedom (df2 = 98), suggesting a total sample size of 100.
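The reported Adjusted R Square can be reproduced from the R Square, the sample size, and the number of predictors using the standard adjustment formula; the sketch below is purely a cross-check of the table's values:

```python
# Cross-check of the Adjusted R Square using the standard formula:
# adj R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
r_squared = 76.861 / 108.75  # ~0.707, from the ANOVA sums of squares
n, k = 100, 1                # 100 observations, 1 predictor

adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)
print(round(adj_r_squared, 3))  # ~0.704, matching the Model Summary
```

With a single predictor the penalty term (n - 1) / (n - k - 1) is small, which is why the adjusted value barely differs from the raw R Square.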

Coefficients

Model          Unstandardized B   Std. Error   Standardized Beta   t        Sig.
1  (Constant)  .397               .151                             2.624    .010
   x26         .835               .054         .841                15.369   .000

a. Dependent Variable: x27
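From the unstandardized coefficients above, the fitted regression equation is x27 = 0.397 + 0.835 · x26. The sketch below applies it to a hypothetical input value (the input is illustrative, not a respondent from the study):

```python
# Unstandardized coefficients from the table above.
intercept, slope = 0.397, 0.835

def predict_x27(x26: float) -> float:
    """Predicted x27 for a given x26, per the fitted regression line."""
    return intercept + slope * x26

# Hypothetical respondent score on x26 (a value on the survey scale).
print(round(predict_x27(4.0), 3))  # 0.397 + 0.835 * 4 = 3.737
```

Dividing B by its standard error (0.835 / 0.054 ≈ 15.5) also approximately reproduces the reported t-value of 15.369; the small gap reflects rounding in the table.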

Residuals Statistics

                       Minimum   Maximum   Mean   Std. Deviation   N
Predicted Value        .40       3.74      2.55   .881             100
Residual               -3.735    1.099     .000   .568             100
Std. Predicted Value   -2.444    1.345     .000   1.000            100
Std. Residual          -6.548    1.927     .000   .995             100

a. Dependent Variable: x27


Hypothesis 2:

Null Hypothesis (H₀): Organizational values and culture have no influence on succession planning.

Alternative Hypothesis (H₁): Organizational values and culture have an influence on succession planning.

This alternative hypothesis indicates that organizational values and culture do play a role in succession planning.

ANOVA

Model            Sum of Squares   df   Mean Square   F        Sig.
1  Regression    71.498           4    17.874        61.055   .000b
   Residual      27.812           95   0.293
   Total         99.310           99

a. Dependent Variable: x30
b. Predictors: (Constant), x16, x21, x17, x22

INTERPRETATION:
 Model Significance: The significance (Sig.) value of .000 indicates that the overall
model is statistically significant at p < 0.001. This means the model is highly
unlikely to have occurred by chance.
 Predictors: The regression degrees of freedom (df) is 4, showing that the model
includes four independent variables (predictors).

 F-statistic: The F-value of 61.055 is large and statistically significant. This suggests
that the independent variables, collectively, have a significant effect on the
dependent variable.

 Model Fit:
 Total Sum of Squares: 99.31
 Regression Sum of Squares: 71.498
 R-squared can be calculated as 71.498 / 99.31 = 0.72, or 72%. This means the
model explains about 72% of the variability in the dependent variable, which is a
good fit.

 Residuals: The residual sum of squares (27.812) represents the unexplained


variance. The mean square residual (0.293) gives an estimate of the variance of the
error term.

 Sample Size: Total df + 1 = 100, so this analysis is based on a sample size of 100.
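As with Hypothesis 1, the table's figures can be cross-checked from the reported sums of squares and degrees of freedom; this sketch recomputes the F-statistic and R-squared for the four-predictor model:

```python
# Values copied from the Hypothesis 2 ANOVA table.
ss_regression, df_regression = 71.498, 4
ss_residual, df_residual = 27.812, 95
ss_total = ss_regression + ss_residual          # 99.31

ms_regression = ss_regression / df_regression   # ~17.874
ms_residual = ss_residual / df_residual         # ~0.293
f_value = ms_regression / ms_residual           # ~61.06, matching 61.055
r_squared = ss_regression / ss_total            # ~0.72
print(round(f_value, 2), round(r_squared, 2))
```

The recomputed values agree with the reported F of 61.055 and the 72% explained-variance figure up to rounding in the table.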

Model Summary

Model   R      R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change   Durbin-Watson
1       .848a  .720       .708                .541                         .720              61.055     4     95    .000            2.683

a. Predictors: (Constant), x16, x21, x17, x22
b. Dependent Variable: x30

INTERPRETATION:

 Model Fit: The R value of 0.848 indicates a strong positive correlation between the
predictors (x16, x21, x17, x22) and the dependent variable (x30).
 Coefficient of Determination (R Square): The R Square value of 0.720 means that
72% of the variance in the dependent variable (x30) can be explained by the
independent variables in the model. This suggests a good fit, as the model accounts
for a substantial portion of the variability in the outcome.

 Adjusted R Square: The Adjusted R Square of 0.708 is slightly lower than the R
Square, which is normal. It adjusts for the number of predictors in the model and
provides a more conservative estimate of the model's explanatory power. The small
difference between R Square and Adjusted R Square suggests that the model is not
overfitted.

 Standard Error of the Estimate: The value of 0.541 represents the average deviation
of predicted values from the observed values in the original scale of the dependent
variable. A lower value indicates better prediction accuracy.

 R Square Change and F Change: The R Square Change of 0.720 is significant (Sig. F
Change = 0.000), which means that the addition of these predictors to the model led
to a statistically significant improvement in its explanatory power.

 Durbin-Watson Statistic: The Durbin-Watson value of 2.683 is used to detect the


presence of autocorrelation in the residuals. Values around 2 indicate no
autocorrelation, while values substantially less than or greater than 2 may indicate
positive or negative autocorrelation, respectively. The value here (2.683) suggests
that there might be a slight negative autocorrelation, but it is not severe.

Coefficients

Model          Unstandardized B   Std. Error   Standardized Beta   t       Sig.
1  (Constant)  .411               .154                             2.665   .009
   x21         .078               .156         .085                .501    .618
   x22         .448               .188         .460                2.379   .019
   x17         .270               .111         .272                2.426   .017
   x16         .061               .058         .084                1.065   .290

a. Dependent Variable: x30
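Reading the coefficient table programmatically makes the individual significance pattern explicit; the values below are copied from the table, and the conventional α = 0.05 threshold is an assumption of this sketch:

```python
# Coefficients and p-values copied from the table above (Hypothesis 2).
predictors = {
    "x21": {"b": 0.078, "sig": 0.618},
    "x22": {"b": 0.448, "sig": 0.019},
    "x17": {"b": 0.270, "sig": 0.017},
    "x16": {"b": 0.061, "sig": 0.290},
}

alpha = 0.05  # conventional significance threshold (an assumption here)
significant = sorted(
    name for name, coef in predictors.items() if coef["sig"] < alpha
)
print(significant)  # only x17 and x22 are individually significant
```

Although the overall model is highly significant, only x22 and x17 carry individually significant coefficients; x21 and x16 add little explanatory power once the other predictors are included.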


Residuals Statistics

                       Minimum   Maximum   Mean   Std. Deviation   N
Predicted Value        .41       3.84      2.63   .850             100
Residual               -2.203    1.940     .000   .530             100
Std. Predicted Value   -2.611    1.423     .000   1.000            100
Std. Residual          -4.072    3.586     .000   .980             100

a. Dependent Variable: x30

3.7 LIMITATIONS OF THE STUDY

1. Self-Reported Bias: Data from surveys and interviews may be influenced by


participants’ desire to provide socially desirable responses.
2. Generalizability: Findings may not be applicable to all industries or organizational
sizes, limiting their broader relevance.
3. Performance Data Access: Challenges in obtaining detailed and accurate
performance data may affect the analysis.
4. Variability in Practices: Differences in succession planning practices across
organizations may lead to inconsistent data.
5. Time Constraints: Limited time may impact the depth of data collection and
analysis.
6. Changing Trends: Evolving practices and trends may affect the long-term
relevance of the findings.
7. Response Rate: Low response rates may impact the representativeness of the
data.
8. Regional Differences: Results may not include regional variations in practices and
performance.
9. Confidentiality Issues: Protecting sensitive organizational data can be challenging.
10. Subjective Interpretation: Qualitative analysis might involve subjective
interpretation, affecting consistency.
CHAPTER 07
THEORETICAL BACKGROUND OF THE STUDY

7.1 LITERATURE REVIEW

1."Succession Planning and Leadership Development: A Review of Current


Research"

Journal of Management Development, 2006.

Research Basis: This review synthesizes empirical studies and theoretical

frameworks on the interplay between succession planning and leadership
development. It assesses the effect of leadership development programs on
organizational stability and growth.

Source: Emerald Insight

2."The Role of Succession Planning in Organizational Performance: A


Literature Review"

Global Journal of Human Resource Management, published in 2008.


Research Basis: This study reviews empirical evidence and case studies focusing on
how succession planning practices influence organizational performance. It includes
analysis of performance metrics and organizational outcomes.

Source: Taylor & Francis Online

3."Impact of Succession Planning on Organizational Effectiveness: An


Empirical Study"

Academy of Management Perspectives, 2010.

Research Basis: Empirical data from a range of organizations are analyzed to


determine the effect of succession planning on organizational effectiveness. The study
uses surveys and performance data to draw conclusions.

Source: Academy of Management

4."Succession Planning and its Effect on Organizational Performance: A


Meta-Analysis" Journal of Corporate Research, 2012.

Research Basis: This meta-analysis gathers information from multiple studies to


evaluate the overall effect of succession planning on organizational performance.
Statistical techniques are used to quantify the impact.

Source: Elsevier

5."A Review of Leadership Succession Planning and its Impact on


Organizational Success"

Leadership & Organization Development Journal, 2013.

Research Basis: Reviews existing literature and theoretical models of leadership


succession planning. It evaluates how different succession planning approaches
contribute to organizational success.

Source: Emerald Insight

6."The Importance of Succession Planning in IT Firms: A Literature Review"

Information Systems Management, 2014.

Research Basis: Focuses on case studies and industry reports related to succession
planning in the IT sector. It highlights sector-specific challenges and strategies.

Source: Taylor & Francis Online


7."Succession Planning and Organizational Performance: Evidence from
Emerging Markets"

Journal of Management, 2015.

Research Basis: Uses case studies and empirical data from firms in emerging
markets to examine the effect of succession planning on organizational performance.

Source: Sage Journals

8."Effectiveness of Succession Planning in the Technology Sector: A Review"

Journal of Technology Management & Innovation, 2016.

Research Basis: Analyzes case studies and industry reports specific to the
technology sector to determine the effectiveness of various succession planning
strategies.

Source: Scielo

9."Linking Succession Planning in Organizational Performance: A Theoretical


Framework"

Journal of Leadership & Organizational Studies, 2017.

Research Basis: Develops a theoretical framework based on existing literature to


link succession planning practices with organizational performance. It uses conceptual
analysis and literature synthesis.

Source: Sage Journals

10."The Impact of Succession Planning on Employee Retention in the IT


Sector"

Human Resource Management Review, 2018.

Research Basis: Examines empirical studies and surveys from IT firms to assess the
impact of succession planning on employee retention and satisfaction.

Source: Elsevier

11. "A Systematic Review of Succession Planning Practices and Their


Outcomes"

Journal of Strategic and International Studies, 2019.


Research Basis: Conducts a systematic review of research articles and case studies
to summarize best practices and outcomes associated with succession planning.

Source: Sage Journals

12."Succession Planning and Its Impact on Firm Performance: A


Comprehensive Review"

International Journal of Management Reviews, 2020.

Research Basis: Provides a comprehensive review of empirical research, case


studies, and theoretical papers to assess how succession planning impacts firm
performance.

Source: Wiley Online Library

13."The Role of Succession Planning in Enhancing Organizational Agility: A


Literature Review"

Organizational Dynamics, 2021.

Research Basis: Reviews literature and theoretical models on organizational agility


and succession planning, focusing on how effective planning enhances a firm's ability
to adapt.

Source: Elsevier

14."Review of Succession Planning Models and Their Effectiveness in High-


Tech Industries"

Technology Analysis & Strategic Management, 2022.

Research Basis: Reviews various succession planning models used in high-tech


industries, assessing their effectiveness through industry reports and case studies.

Source: Taylor & Francis Online

15."Understanding the Impact of Succession Planning on Organizational


Culture and Performance"
Journal of Organizational Behavior, 2022.
Research Basis: Analyzes empirical studies and case studies to explore how
succession planning influences organizational culture and performance metrics.
Source: Wiley Online Library
16."Succession Planning and Organizational Performance in the Digital Age"

Journal of Business Strategy, 2023.

Research Basis: Examines how digital transformation affects succession planning


and its subsequent impact on organizational performance, using recent case studies
and industry data.

Source: Emerald Insight

17."Critical Success Factors in Succession Planning: A Review and Research


Agenda"
Journal of Management & Organization, 2023.
Research Basis: Identifies and evaluates fundamental factors for successful
succession planning, proposing a research agenda based on literature analysis and
expert interviews.
Source: Cambridge University Press

18."The Influence of Succession Planning on Innovation and Competitive


Advantage"

Strategic Management Journal, 2024.

Research Basis: Investigates how succession planning affects innovation and


competitive advantage using case studies and empirical data from various industries.

Source: Wiley Online Library

19." The Implications of Succession Planning on Organizational Adaptability


and Performance"
International Journal of Management Reviews, 2024.
Research Basis: Examines how succession planning impacts organizational
adaptability and performance, drawing on empirical data and case studies to provide
insights.
Source: Wiley Online Library

20."Succession Planning and Its effect on Leadership Development and


Organizational Success"
Leadership Journal & Organizational Studies, 2024.
Research Basis: Explores how succession planning affects leadership development
and organizational success using empirical studies and theoretical models.
Source: Sage Journals
7.2 RESEARCH GAP

1.Quantitative Impact Measurement:

Gap: Limited empirical studies providing quantitative data on how succession planning
directly influences organizational performance metrics.

Opportunity: Conduct research to quantify the connection between succession


planning practices and specific performance outcomes, such as financial performance,
employee productivity, and turnover rates.

2.Sector-Specific Analysis:

Gap: Research often aggregates data across various sectors without focusing on
industry-specific dynamics.

Opportunity: Investigate how succession planning impacts organizational performance


within specific sectors, such as IT, healthcare, or manufacturing.

3.Long-Term vs. Short-Term Effects:

Gap: Existing studies may focus primarily on short-term effects of succession


planning, with less emphasis on long-term organizational performance and
sustainability.

Opportunity: Explore how effective succession planning influences long-term


organizational health, including leadership continuity and strategic success.

4.Impact of Succession Planning on Employee Engagement:

Gap: Insufficient research on how succession planning affects employee engagement,


morale, and career development.

Opportunity: Study the relationship between succession planning and employee


engagement to understand how planning impacts workforce motivation and retention.

5.Role of Succession Planning in Organizational Culture:

Gap: Limited exploration of how succession planning shapes or is shaped by


organizational culture and values.

Opportunity: Examine how succession planning practices align with and influence
organizational culture and employee values.
6.Effectiveness of Different Succession Planning Models:

Gap: Lack of comparative studies evaluating the effectiveness of various succession


planning models and approaches.

Opportunity: Compare different succession planning models to discover which are the
most effective in enhancing organizational performance.

7.Influence of External Factors:

Gap: Insufficient analysis of how external factors, such as economic conditions or

regulatory changes, affect the efficacy of succession planning.

Opportunity: Investigate how external variables influence the outcomes of succession


planning initiatives and their influence on organizational performance.

8.Integration with Strategic Planning:

Gap: Limited research on how succession planning integrates with overall strategic
planning and its impact on organizational strategy execution.

Opportunity: Study how integrating succession planning with strategic

planning processes affects organizational performance and goal achievement.
CHAPTER 08
Summary of Findings, suggestions, and
Conclusion
FINDINGS:

1. Age Group: Majority (75%) are in the 25-54 age range, key for leadership roles. Younger
employees (13%) offer a future pipeline for succession.

2. Gender: Balanced gender representation (53% male, 47% female), supporting diversity in
leadership development.

3. Qualification: Most employees (61%) hold a master's degree, indicating a highly educated
workforce ready for succession into senior roles.

4. Current Position: A mix of entry (23%), mid-level (21%), and senior management (27%)
shows a healthy spread of potential leaders across levels.

5. Years of Experience: Majority (33%) have over 10 years of experience, critical for
succession into top leadership roles.

6. Number of Employees: Most respondents (62%) work in large organizations (over 1000
employees), where structured succession planning is often more prevalent.

7. Department: Broad representation across departments, with a focus on Human Resources


(26%) and a high percentage in "Other" (33%), indicating varied roles in succession
planning.

8. Industry: A significant portion (40%) comes from diverse industries, which shows
succession planning practices vary widely.

9. Involvement Level: 42% are highly involved in succession planning, indicating strong
engagement in leadership transition processes.

10.Employment Status: Most respondents (74%) are full-time, which supports their active
participation in succession planning and leadership development.

11.Effectiveness: 54% rate succession planning as very or extremely effective, indicating


strong performance benefits from well-executed planning.

12.Leadership Development: 53% feel succession planning significantly or completely


supports leadership development, showing it is vital for cultivating future leaders.

13.Long-term Goals: 61% agree or firmly believe that succession planning aligns with long-
term organizational goals, ensuring future stability.
14.Transparency: 58% rate it as very or completely transparent, which is crucial for trust in
the succession process.

15.Leadership Roles: 55% believe succession planning prepares employees for leadership
roles very or extremely well, fostering leadership readiness.

16.Employee Morale and Engagement: 68% say succession planning positively impacts
morale, demonstrating its role in enhancing employee engagement.

17.Employee Turnover: 54% agree or strongly support the notion that succession planning
reduces turnover, indicating its effectiveness in retention.

18.Leadership Transitions: 58% feel very or extremely confident in leadership transitions,


showing that succession planning enhances leadership stability.

19.Competitive Advantage: 56% feel it significantly or completely provides a competitive


advantage, helping organizations stay ahead.

20.Satisfaction Level: 59% are satisfied or very satisfied with the succession planning
process, reflecting overall positive outcomes.

21.Learning Culture: 58% believe succession planning significantly or completely fosters a


learning culture, enhancing organizational development.

22.Leadership Roles (Well-prepared): 56% rate their preparation for leadership
roles as very or extremely good, indicating readiness for leadership transitions.

23.Leadership Roles (Preparedness): 53% feel very or extremely prepared for leadership
roles, reflecting the advantages of succession planning.

24.Impact on Success: 55% think succession planning significantly or completely impacts


success, demonstrating its importance for organizational performance.

25.Future Leaders: 60% agree or emphatically agree that succession planning secures the
growth of future leaders, ensuring continued success.

26.Communication Level: 59% rate communication during succession planning as very or
extremely good, which is vital for transparency and trust.

27.Knowledge Transfer: 58% believe knowledge is significantly or completely transferred


during succession planning, preserving institutional knowledge.

28.Values and Culture: 60% feel succession planning aligns very or extremely well with
company values and culture, ensuring continuity in leadership ethos.
29.Importance of Succession Planning: 59% consider it very or extremely important,
showing widespread recognition of its value for organizational success.

30.Fairness: 62% agree or strongly believe that the process is fair, which boosts employee
trust and confidence in leadership development

SUGGESTIONS:

1. Enhance Leadership Development Programs: Offer targeted development opportunities,


such as mentoring, coaching, and job rotations, to enhance employees' readiness for
future leadership roles.

2. Increase Transparency: Continue to improve communication regarding the succession


planning process to ensure employees understand the criteria, opportunities, and
pathways for future leadership positions

3. Promote a Strong Learning Culture: Foster continuous education and upskilling


throughout the organization, embedding it as a fundamental aspect of succession
planning.

4. Focus on Knowledge Transfer: Implement structured knowledge-sharing initiatives, such


as mentorship programs, to ensure critical expertise is passed on during transitions.

5. Ensure Fairness and Diversity: Regularly review the succession planning process to
ensure it is unbiased and inclusive, offering fair opportunities to all employees regardless
of gender, age, or background.

6. Align with Long-term Goals: Regularly revisit succession plans to ensure they align with
evolving business objectives and strategic goals, promoting long-term success.

7. Monitor and Adjust the Process: Use feedback from employees to continuously refine the
succession planning process, making it more effective and responsive to organizational
needs.

8. Boost Employee Engagement: Actively involve employees in their career development


discussions and clearly outline how succession planning impacts their career growth,
which will improve morale and retention.
CONCLUSION:

Succession planning is crucial for achieving organizational performance, leadership


continuity, and long-term success. The findings demonstrate that effective succession
planning fosters leadership development, promotes a strong learning culture, and aligns with
the organization’s long-term goals. It enhances employee morale, engagement, and retention
by creating clear pathways for growth. While transparency and fairness are key strengths,
continuous improvement in these areas can further boost trust and confidence. Ultimately, a
well-executed succession plan is essential for mitigating risks, sustaining competitive
advantage, and securing a future-ready leadership pipeline.

Succession planning is not just a strategic HR tool but a vital component of overall
organizational sustainability and success. The research highlights that effective succession
planning positively impacts multiple facets of organizational performance, including
leadership readiness, employee engagement, knowledge transfer, and long-term
competitiveness.

Organizations that actively engage in structured and transparent succession planning are
better equipped to handle leadership transitions without significant disruption. A majority of
employees feel that succession planning significantly improves leadership development,
aligning the organization's growth with its long-term strategic goals. By preparing employees
well for leadership roles, companies can ensure a continuous supply of competent leaders
who understand the company’s values, culture, and direction. This fosters both
organizational stability and agility in adapting to industry changes.

Moreover, the role of succession planning in creating a learning culture is evident.


Employees who perceive opportunities for growth and leadership roles within the company
are more apt to engage in continuous learning and development. This contributes to overall
performance improvements and a higher retention rate, as employees feel valued and see
clear career advancement opportunities. The positive effect on employee morale
is also evident, as a majority report that succession planning positively influences engagement
and reduces turnover.

However, transparency and fairness in the process are crucial. The findings indicate that
although many organizations aim for transparency in their succession planning, there is still
room for enhancement. Ensuring open communication about the selection process and
leadership criteria helps build trust among employees and promotes a sense of inclusion.
Organizations that do not uphold transparency risk alienating talent and diminishing morale,
which can undermine the overall effectiveness of their succession plans.

Succession planning offers a competitive advantage by ensuring a steady pipeline of future


leaders who are prepared to drive innovation and performance. It addresses the risks
associated with unexpected leadership gaps, reducing the time and costs related to external
recruitment and onboarding.

In conclusion, succession planning is a vital strategy for achieving organizational success,


both in the present and in the future. By continuously improving the process to maintain
fairness, transparency, and alignment with organizational objectives, companies can ensure
leadership continuity while boosting overall performance, employee satisfaction, and
competitive advantage. Organizations that prioritize succession planning are better equipped
to be resilient, agile, and ready to tackle future challenges.
REFERENCES:
[1] Reinhold P. Weicker, "An Overview of Common Benchmarks", IEEE Computer, December 1990, pp. 65-75.
[2] H. Cragon, Computer Architecture and Implementation, Cambridge University Press, 2000.
[3] J. E. Smith, "Characterizing Computer Performance with a Single Number", Communications of the ACM, October 1988.
[4] D. Patterson and J. Hennessy, Computer Organization and Design: The Hardware/Software Interface, Morgan Kaufmann Publishers, 2nd edition, 1998, ISBN 1558604286.
[5] VTune profiling software, http://developer.intel.com/software/products/vtune/vtune_oview.htm
[6] P6perf utility, http://developer.intel.com/vtune/p6perf/index.htm
[7] DCPI Tool home page, http://www.research.digital.com/SRC/dcpi/ and http://www.research.compaq.com/SRC/dcpi/
[8] J. Dean, J. E. Hicks, C. A. Waldspurger, W. E. Weihl, and G. Chrysos, "ProfileMe: Hardware Support for Instruction-Level Profiling on Out-of-Order Processors", MICRO-30 Proceedings, 1997, pp. 292-302.
[9] Perf-monitor for UltraSPARC, http://www.sics.se/~mch/perf-monitor/index.html
[10] PMON, http://www.ece.utexas.edu/projects/ece/lca/pmon
[11] M. C. Merten, A. R. Trick, E. M. Nystrom, R. D. Barnes, and W. W. Hwu, "A Hardware-Driven Profiling Scheme for Identifying Hot Spots to Support Runtime Optimization", Proceedings of the 26th International Symposium on Computer Architecture, pp. 136-147, May 1999.
[12] R. Bhargava, J. Rubio, S. Kannan, L. K. John, D. Christie, and L. Klaes, "Understanding the Impact of x86/NT Computing on Microarchitecture", book chapter in Characterization of Contemporary Workloads, pages 203-228, Kluwer Academic Publishers, 2001, ISBN 0-7923-7315-4.
[13] Ali Poursepanj and David Christie, "Generation of 3D Graphics Workload for System Performance Analysis", presented at the First Workshop on Workload Characterization; also in Workload Characterization: Methodology and Case Studies, edited by John and Maynard, IEEE CS Press, 1999.
[14] A. Agarwal, R. L. Sites, and M. Horowitz, "ATUM: A New Technique for Capturing Address Traces Using Microcode", Proceedings of the 13th International Symposium on Computer Architecture, June 1986, pp. 119-127.
[15] R. Cmelik and D. Keppel, "Shade: A Fast Instruction-Set Simulator for Execution Profiling", Chapter 2 in Fast Simulation of Computer Architectures, T. M. Conte and C. E. Gimarc (eds.), Kluwer Academic Publishers, 1995.
[16] DineroIV cache simulator, www.cs.wisc.edu/~markhill/DineroIV
[17] P. Bose and T. M. Conte, "Performance Analysis and Its Impact on Design", IEEE Computer, May 1998, pp. 41-49.
[18] P. Crowley and J-L. Baer, "On the Use of Trace Sampling for Architectural Studies of Desktop Applications", presented at the First Workshop on Workload Characterization; also in Workload Characterization: Methodology and Case Studies, edited by John and Maynard, IEEE CS Press, 1999, ISBN 0-7695-0450-7, pp. 15-24.
[19] J. R. Larus, "Efficient Program Tracing", IEEE Computer, May 1993, pp. 52-61.
[20] Ravi Bhargava, Lizy K. John, and Francisco Matus, "Accurately Modelling Speculative Instruction Fetching in Trace-Driven Simulation", Proceedings of the IEEE Performance, Computers and Communications Conference (IPCCC), Feb. 1999, pp. 65-71.
[21] The SimpleScalar simulator suite, http://www.simplescalar.org or http://www.cs.wisc.edu/~mscalar/simplescalar.html
[22] B. Boothe, "Execution Driven Simulation of Shared Memory Multiprocessors", Chapter 6 in Fast Simulation of Computer Architectures, T. M. Conte and C. E. Gimarc (eds.), Kluwer Academic Publishers, 1995.
[23] The SimOS complete system simulator, http://simos.stanford.edu/
[24] SIMICS, www.simics.com
[25] SIMICS, Virtutech, http://www.virtutech.com
[26] L. Kurian, Performance Evaluation of Prioritized Multiple-Bus Multiprocessor Systems, M.S. Thesis, University of Texas at El Paso, Dec 1989.
[27] L. K. John and Yu-Cheng Liu, "A Performance Model for Prioritized Multiple-Bus Multiprocessor Systems", IEEE Transactions on Computers, Vol. 45, No. 5, pp. 580-588, May 1996.
[28] P. Heidelberger and S. S. Lavenberg, "Computer Performance Evaluation Methodology", IEEE Transactions on Computers, Dec 1984, pp. 1195-1220.
[29] D. B. Noonburg and J. P. Shen, "A Framework for Statistical Modelling of Superscalar Processor Performance", Proceedings of the 3rd International Symposium on High Performance Computer Architecture (HPCA), 1997, pp. 298-309.
[30] D. J. Sorin, V. S. Pai, S. V. Adve, M. K. Vernon, and D. A. Wood, "Analytic Evaluation of Shared-Memory Systems with ILP Processors", Proceedings of the International Symposium on Computer Architecture, 1998, pp. 380-391.
[31] SPEC Benchmarks, www.spec.org
[32] Java Grande Benchmarks, http://www.epcc.ed.ac.uk/javagrande/
[33] SciMark, http://math.nist.gov/scimark2
[34] ASCI Benchmarks, http://www.llnl.gov/asci_benchmarks/asci/asci_code_list.html
[35] S. C. Woo, M. Ohara, E. Torrie, J. P. Singh, and A. Gupta, "The SPLASH-2 Programs: Characterization and Methodological Considerations", Proceedings of the 22nd International Symposium on Computer Architecture, pages 24-36, June 1995.
[36] NAS Parallel Benchmarks, http://www.nas.nasa.gov/Software/NPB/
[37] MediaBench benchmarks, http://www.cs.ucla.edu/~leec/mediabench/
[38] EEMBC, www.eembc.org
[39] BDTI, http://www.bdti.com/
[40] The CaffeineMark benchmarks, http://www.pendragon-software.com/pendragon/cm3
[41] VolanoMark, http://www.volano.com/benchmarks.html
[42] Transaction Processing Council, www.tpc.org
[43] SYSMARK, http://www.bapco.com/
[44] Ziff Davis Benchmarks, www.zdbop.com or www.zdnet.com/etestinglabs/filters/benchmarks
[45] PC Benchmarks, www.pcbenchmarks.com
[46] The Jaba profiling tool, http://www.ece.utexas.edu/projects/ece/lca/jaba.html
[47] R. Radhakrishnan, J. Rubio, and L. K. John, "Characterization of Java Applications at Bytecode and UltraSPARC Machine Code Levels", Proceedings of the IEEE International Conference on Computer Design, pp. 281-284.
[48] J. A. Mathew, P. D. Coddington, and K. A. Hawick, "Analysis and Development of the Java Grande Benchmarks", Proceedings of the ACM 1999 Java Grande Conference, June 1999.
[49] C. Lee, M. Potkonjak, and W. H. Mangione-Smith, "MediaBench: A Tool for Evaluating and Synthesizing Multimedia and Communication Systems", Proceedings of the 30th International Symposium on Microarchitecture, pp. 330-335.
[50] D. Bhandarkar and J. Ding, "Performance Characterization of the Pentium Pro Processor", Proceedings of the 3rd High Performance Computer Architecture Symposium, 1997, pp. 288-297.
[51] Ted Romer, Geoff Voelker, Dennis Lee, Alec Wolman, Wayne Wong, Hank Levy, Brian Bershad, and Brad Chen, "Instrumentation and Optimization of Win32/Intel Executables Using Etch", USENIX, 1997.
[52] T. M. Conte and C. E. Gimarc, Fast Simulation of Computer Architectures, Kluwer Academic Publishers, 1995.

BIBLIOGRAPHY:
Books and Journal Articles:

1. Rothwell, W. J. (2015). Effective Succession Planning: Ensuring Leadership Continuity and Building Talent from Within (5th ed.). AMACOM.
2. Charan, R., Drotter, S., & Noel, J. (2011). The Leadership Pipeline: How to Build the Leadership-Powered Company (2nd ed.). Jossey-Bass.
3. Conger, J. A., & Fulmer, R. M. (2003). Succession Management: How to Build Leadership Depth at Every Level. Harvard Business Review Press.
4. Kesner, I. F., & Sebora, T. C. (1994). Executive Succession: Past, Present & Future. Journal of Management, 20(2), 327-372.
5. Groves, K. S. (2007). Integrating Leadership Development and Succession Planning Best Practices. Journal of Management Development, 26(3), 239-260.