
A cognitive analytics management framework for the transformation of electronic government services from users' perspective to create sustainable shared values
Ibrahim H. Osman a,∗, Abdel Latef Anouze b, Zahir Irani c, Habin Lee d, Tunç D. Medeni e, Vishanth Weerakkody c

a Olayan School of Business, American University of Beirut, Lebanon
b College of Business and Economics, Qatar University, Qatar
c Faculty of Business and Law, University of Bradford, UK
d Brunel Business School, Brunel University, UK
e Yıldırım Beyazıt University & Türksat, Turkey

∗ Corresponding author. E-mail addresses: [email protected], [email protected] (I.H. Osman).

a r t i c l e   i n f o

Article history: Received 25 September 2017; Accepted 6 February 2019; Available online xxx

Keywords: Analytics; Behavioral OR; Data Envelopment Analysis; Multiple criteria analysis; OR in government

a b s t r a c t

Electronic government services (e-services) involve the delivery of information and services to stakeholders via the Internet, Internet of Things and other traditional modes. Despite their beneficial values, the overall level of usage (take-up) remains relatively low compared to traditional modes. They are also challenging to evaluate due to behavioral, economic, political, and technical aspects. The literature lacks a methodology framework to guide the government transformation application to improve both internal processes of e-services and institutional transformation to advance relationships with stakeholders. This paper proposes a cognitive analytics management (CAM) framework to implement such transformations. The ambition is to increase users' take-up rate and satisfaction, and create sustainable shared values through the provision of improved e-services. The CAM framework uses cognition to understand and frame the transformation challenge into analytics terms. Analytics insights for improvements are generated using Data Envelopment Analysis (DEA). A classification and regression tree is then applied to the DEA results to identify characteristics of satisfaction to advance relationships. The importance of senior management is highlighted for setting strategic goals and providing various executive supports. The CAM application for transforming Turkish e-services is validated on a large data sample collected with an online survey. The results are discussed; the outcomes and impacts are reported in terms of estimated savings of more than fifteen billion dollars over a ten-year period and increased usage of improved new e-services. We conclude with future research.
© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license.
(http://creativecommons.org/licenses/by-nc-nd/4.0/)

1. Introduction

The Internet has been having a transformational effect on our society, and governments worldwide have been undertaking various initiatives to improve the efficiency and effectiveness of internal operations, communications with citizens and transactions with organizations, with the aim of encouraging the adoption of electronic government (e-government) initiatives. E-government involves the online delivery of government information and services to various stakeholders using the Internet, Internet of Things, and traditional modes to cut costs in government, Welsh (2014). Stakeholders include citizens; non-citizen and business users; government employees; information technology developers; government policy makers; public administrators and politicians; and organizations, Rowley (2011). E-government services (e-services) have been developed to achieve various environmental, financial, political and social beneficial goals, Chircu (2008). Despite such benefits, they have several challenges including governance; policy development; information management; technological change; societal trends; and human factors, Dawes (2009). E-services require high capital investments and have a limited take-up rate from users, Lee et al. (2008).



The take-up rate is measured by the percentage of individuals aged between 16 and 74 who have used the internet for interacting with public authorities, (UN-Survey, 2012). It was reported that the expenditures on information and communication technology were 2.4% and 0.9% of the Gross Domestic Products of the 27 European Union states (EU27) and Turkey, whereas the take-up rates of e-services were 28% and 9% in EU27 and Turkey, respectively, EU-Archive (2010). The low usage limits the impact of e-government services, and more research needs to be done if governments are to successfully leverage e-government services and realize other benefits, (UN-Survey, 2012). In response, the EU i2010 initiative on inclusive government was launched to achieve the government goal of "close the digital divide - no citizen to be left behind" through the provision of improved e-services. Given the low take-up rate in Turkey, the Turkish agency in charge of the provision of e-services has similarly endorsed the EU i2010 inclusive initiative and participated in this research project as an industrial partner. It should be noted that the digital divide includes access; affordability; age; education; bandwidth; content; gender; mobile coverage; internet speed; and useful usage, among others, (UN-Survey 2012). It was envisaged that the inclusive government initiative can be achieved using two ways of government transformation, EU-Archive (2010): one transformation of internal processes to improve e-services; and another institutional transformation to improve relationships between governments and stakeholders for the creation of sustainable shared values. Shared values are often created using innovative ideas based on the interface of measurement and management to balance the various tradeoffs when making long- and short-term transformative decisions, Osman and Anouze (2014a). Shared value is measured by the total sum of business value to shareholders and the other economic, environmental and social values to other stakeholders, Porter and Kramer (2011). Luna-Reyes and Gil-Garcia (2014) reported that there was little or no evidence of government transformation applications, and that such applications may occur in the future. Therefore, our research goal is to propose a methodology framework to guide the government transformation application to improve e-services, and to improve the relationships between governments and their stakeholders, thus leading to an increase in the take-up rate of e-services and the creation of sustainable shared values to all stakeholders. The associated research questions include: (a) How can an e-service be improved? Can the efficiency and effectiveness of an e-service be measured? (b) How can the relationship between government and citizens be improved? Can the characteristics of satisfaction of users be identified to increase the usage of e-services? (c) What are the estimated shared values to the stakeholders of e-services in Turkey? (d) What is the methodology framework to guide the government transformation application to achieve the various objectives?

Reviews of the literature on addressing the low take-up challenge identified the following. Millard (2008) called for improving the performance of e-services with a special focus on measuring the efficiency utilization of input resources and the effectiveness of generated outputs and impacts from users' perspective. Lee et al. (2008) and Petter, DeLone, and McLean (2008) suggested improving users' satisfaction with e-services to increase the take-up rate. Reviews on methodologies for evaluating e-services listed several challenging aspects, including: identification of measures on users' attitude and behavior (knowledge of technology; personal traits and behavioral change during online interactions with e-services), Magoutas and Mentzas (2010); implementation of ICT processes (breaks of Internet connectivity and communication systems, internal disintegration and operability, electronic system's security, and service development), (Weerakkody, Irani, Lee, Osman, & Hindi, 2015; Yildiz, 2007); inappropriate usage of methodologies to identify the best-practice benchmark (lack of tools for improving inefficient e-services), (Irani et al., 2012; Osman et al., 2014a); and inability to analyze qualitative and quantitative data on environmental, financial, political, and social dimensions, (Bertot, Jaeger, & Grimes, 2010; Chircu, 2008). Further reviews on evaluation methodologies can be found in Irani et al. (2012), Osman et al. (2014a), Weerakkody et al. (2015) and Petter et al. (2008). These reviews showed that the majority of methods build conceptual frameworks or apply statistical and descriptive analytics using fragmented measures for different reasons and from mixed perspectives rather than using holistic measures to perform prescriptive and predictive analytics.

Although the existing studies were useful in providing a good understanding of the complexity of evaluating e-services, and in identifying factors of satisfaction of users of e-services, they have a number of limitations, (Irani et al., 2012; Weerakkody et al., 2015). First, subjective data obtained from off-line distributed surveys may contain transformation errors and bias, whereas experiential data obtained immediately from online surveys after a completion of interactions with an e-service can be of better quality, and free from the subjective bias and errors found in traditional offline surveys (Chen, 2010). Second, statistical methods are useful in establishing relationships among variables and capable of predicting trends, Petter et al. (2008). Third, they may consider the set of most efficient (and inefficient) e-services as outliers to drop from the statistical analysis for the sake of generating average trends, (Lee & Kim, 2014). Hence, they may not be the most appropriate methods for conducting benchmarking analysis to identify the set of efficient e-services (best-practice benchmark) to suggest improvement targets for the set of inefficient e-services, where frontier analytics are more appropriate for benchmarking the quality of e-services. Last, they have other limitations on the multi-collinearity assumption, normality of data, large sample sizes, and internal validity of measures, Norris and Lloyd (2006).

Reviews of the literature on emerging prescriptive methodologies for evaluating the performance efficiency of operations showed that Data Envelopment Analysis (DEA) is one of the most popular methods, with over 485,000 hits on Google. DEA was introduced by Charnes, Cooper, and Rhodes (1978) to evaluate the relative performance efficiency of operating units (often called Decision Making Units - DMUs). DEA aggregates multiple-input and multiple-output measures using data-driven variable weights to generate a relative efficiency score for each DMU. The efficiency score measures the quality of transformation of inputs into outputs. DEA can also identify the best-practice benchmark (or the set of efficient DMUs) to suggest improvement targets for the set of inefficient units. In the literature, there are a large number of relevant DEA applications. Cook and Zhu (2006) developed a DEA model for treating qualitative data with ordinal Likert scale values. They showed that qualitative rank-ordered data can be treated in a conventional DEA methodology. De Witte and Geys (2013) argued that the citizens' coproduction of public services requires a careful measurement of productive efficiency. They presented a DEA application to measure the technical efficiency of citizens' co-production in the delivery of library services. Osman, Berbary, Sidani, Al-Ayoubi, and Emrouznejad (2011) used DEA to assess the performance efficiency of nurses at an intensive care unit using Likert scale values. An appraisal and performance evaluation system was developed for a better motivation of nurses. It corrected the evaluation bias which was found in a traditional approach based on fixed weights to combine measures. Esmaeili and Horri (2014) used a fuzzy DEA approach to evaluate the satisfaction of customers with online banking services using variables with qualitative values.


For more details on the DEA theory and applications in the public and private domains, we refer to the handbook on strategic performance measurement and management using DEA in Osman, Anouze and Emrouznejad (2014); and to the special issue on DEA in the public sector by Emrouznejad, Banker, Lopes, and de Almeida (2014).

Reviews of the literature on predictive methodologies for the prediction of satisfaction classes and characteristics of satisfied users showed that the Classification and Regression Tree (CART) is one of the most popular methods; it has received more than 291,000 hits on Google. CART was first developed by Breiman, Friedman, Olshen, and Stone (1984) to identify classes of common characteristics from the construction process of a tree of predictors. It has attracted a number of applications. Oña, Eboli, and Mazzulla (2014) used CART to determine the most important variables which affect changes in the quality of transit services. However, DEA and CART methodologies are often combined in a two-stage approach, which has attracted more than 6070 hits on Google. Applications include the work of Emrouznejad and Anouze (2010). They used DEA in the first stage to generate relative efficiency scores for banks, whereas CART was used in the second stage to identify the characteristics of profitable customers in the banking sector. Chuang, Chang, and Lin (2011) used DEA to measure the operational efficiency and cost-effectiveness of medical institutions, while CART was used in the second stage to extract rules for a better resource allocation. De Nicola, Gito, and Mancuso (2012) applied a Bootstrapping approach to the DEA efficiency scores to increase the confidence in the quality of DEA results, before conducting CART analysis to determine the environmental variables which impact the performance of the Italian healthcare system. Biener, Eling, and Wirfs (2016) also used DEA and Bootstrapping models to evaluate the production efficiency of Swiss insurance companies. Li, Crook, and Andreeva (2014) used DEA to measure the technical and scale efficiencies of some Chinese companies. The two efficiency measures were introduced into a logistic regression model to predict the distress probability of a particular company. Horta and Camanho (2014) used DEA and CART models to characterize the competitive positioning of Portuguese construction companies. They argued that the DEA–CART approach can bring new insights and identify classes of competitive companies. They generated DEA efficiency scores from financial data and identified by CART the non-financial characteristics of efficient companies.

The previous studies demonstrated the importance of Operational Research/Management Science (OR/MS) in providing the academic rigor in modeling and solving problems. However, Business Analytics (BA) is growing faster and becoming more popular than OR/MS. Searching for BA on Google returned 7,440,000 hits, compared to 2,750,000 hits for OR and 11,500,000 hits for MS. BA presents a genuine challenge to the OR/MS community, Mortenson, Doherty, and Robinson (2015). Rayard, Fildes and Hu (2015) argued that if OR is to prosper, it must reflect more closely the needs of organizations and practitioners. Further, Vidgen, Shaw, and Grant (2017) highlighted that challenges of data and management are often marginalized and even ignored in the OR context, but they are essential for organizational managers who seek to become more data-driven creators of shared values.

The above review of the literature shows a lack of a methodology framework to guide the government transformation application to improve both the internal processes of e-services and the institutional transformation to advance relationships with stakeholders, and to increase the usage of e-services for creating sustainable shared values to all stakeholders. The proposed innovative methodology framework brings together the OR/MS academic rigor and the BA practical relevance in the context of electronic government. It consists of three interconnected closed-loop strategic processes. First, the cognitive process is to understand the evaluation (business) challenge and frame it into analytics terms. The framing process identifies the questions to answer, the goals to achieve, and the data guideline and processing requirements to be understood by technical teams. For instance, it provides the necessary understanding of the human–machine interactions with e-services and the underlying factors that affect the satisfaction of users. It designs the data strategy; identifies the performance variables, data types, and collection sources; and defines data governance and ethics. It further builds the data model to align and map the identified variables to the defined organizational goals and objectives. Second, the analytics process employs the right mix of advanced modeling and solving methodologies to address the complex challenge in evaluating the performance of e-services. The analytics process combines both the Data Envelopment Analysis (DEA) and the classification and regression tree (CART) methodologies in a two-stage approach. In the first stage, DEA generates the input-efficiency and output-effectiveness scores for each e-service, computes an individual satisfaction score for each user and identifies the set of efficient e-services (the best-practice benchmark) to improve inefficient e-services. In the second stage, the individual DEA scores and the qualitative characteristics of users are analyzed using CART to predict the characteristics of satisfaction classes and prescribe corrective policy recommendations for managerial actions. Last, the management process defines the engagement and coordination among senior managers, providers and the research team during the execution of the analytics project. The senior management further sets the strategic goals and objectives to be achieved. Senior managers have an important role in fueling the organization's digital culture for the assurance of a successful implementation of the government transformation application. These senior managers have to be convinced before they can support any digital technology innovation to address challenges. Therefore, the management process is needed to provide the closed-loop linkage between the cognitive and analytics processes.

In summary, the main contribution of the paper is to propose a cognitive analytics management (CAM) methodology framework to guide the transformation of Turkish e-services to achieve the Turkish Government goal of implementing its digital inclusive initiative to close the digital divide through improved performance of e-services, increased satisfaction of users, and creation of sustainable shared values to all stakeholders. To the best of our knowledge, the CAM framework is the first OR/MS contribution to the modeling of human–machine online interactions in the electronic government domain. It builds on the academic OR modeling rigor and practical BA relevance to advance electronic government research. The specific research objectives include: (i) introducing a cognitive process to understand and frame the human–machine interactions with an e-service in analytics terms; (ii) introducing analytics models and solving processes to evaluate the efficiency of an e-service and measure the satisfaction of users during their online human–machine interactions, hence providing a new research avenue to understand the human–machine online interactions in the electronic government context; (iii) using the DEA methodology to develop performance indices for the input-efficiency and output-effectiveness of e-services, and to measure satisfaction of users with e-services; (iv) proposing a tool to identify the set of efficient e-services (best-practice benchmark) to guide the improvement process of the inefficient e-services and determine the variables and associated target improvement levels with reference to the best-practice benchmark; (v) identifying characteristics of the different satisfaction classes of users using CART to develop social inclusion policies for managerial actions.

The remaining part of the paper is organized as follows. Section 2 presents a literature review on the existing methodologies for the evaluation of e-services. Section 3 introduces each process of the proposed CAM framework. Section 4 presents and discusses our experience and results.


The final section concludes with managerial implications, practical impacts, and limitations for further research directions.

2. Review of evaluation studies on e-government services

Many of the models and frameworks for the evaluation of e-services are adapted from the e-commerce literature; they investigate users' attitudes and satisfaction with e-government services. Appendix 1 presents a taxonomy of the developed methodologies to evaluate e-services with a special focus on the measured objectives, evaluation methodologies, analytical models and associated variables. The following observations can be made. The various methodologies developed over time have one of the following objectives to evaluate: (a) system success, (b) service quality, (c) government value, and (d) users' satisfaction index. First, the e-service success model was built on the well-known information system success model introduced by DeLone and McLean (1992). It consists of six measurement factors: system quality; information quality; service quality; system use; user satisfaction and net benefits. Second, the e-service quality model was built on the SERVQUAL model introduced by Parasuraman, Zeithaml, and Berry (1998). It consists of five factors: tangibles; reliability; responsiveness; assurance and empathy. Third, the e-service value model (VM) was introduced by Harvard University to assess the value and usage of e-government websites and e-government projects, Mechling (2002). The VM model is conceptual and based on five value factors: direct-user value; social/public value; government-financial value; government operational/foundational value and strategic/political value. In the value category, a commissioned evaluation study for the Australian government was presented by Alston (2003). The study provided estimated measures on e-government costs and benefits to both users and government agencies. The estimated measures were midpoint values of the financial benefits and costs over a period of 5 years. The estimates were solicited from e-government agencies. The estimated results provided useful information, but given the difficulty of getting those estimates, it was advised to interpret them with caution. Fourth, the cost-benefit and risk-opportunity analysis (COBRA) model was developed to measure the satisfaction values of users with e-services based on 8 factors, Osman et al. (2014a). Last, other satisfaction models were adopted from traditional customer satisfaction indices in different countries to identify key drivers of satisfaction with products and services, (Kim, Im, & Park, 2005). They were extended to evaluate satisfaction with e-services using surveys. The importance of using surveys to measure satisfaction of citizens was discussed in Van Ryzin and Immerwahr (2007). Further, Grigoroudis, Litos, Moustakis, Politis, and Tsironis (2008) used survey data and multi-criteria regression models to assess users' perceived web quality and to measure users' satisfaction with the service provisions by three major cellular phone companies in Greece. More details on other satisfaction methodologies in the electronic government research can be found in Irani et al. (2012).

Finally, there is a lot of literature on evaluating e-government services from providers' perspective. For instance, Chircu (2008) reviewed the electronic government research published in the period 2001-2007 and found that most of the research contributions on e-services were focused on the US (45%), the UK (11%), Singapore (7%) and the rest of the world (37%). The evaluated objectives were measured in percentages of coverage in the published papers: 63% addressed financial objectives to achieve reduction in cost, time and labor savings for maintaining the current service levels, and avoiding cost increases for the provision of better service levels; 65% focused on social objectives to provide effective e-service deliveries, information dissemination, sustainable shared value creation and better resource allocation; and finally 44% discussed political objectives to enable democracy, transparency, accountability, social justice and liberty. As a result, a conceptual framework was proposed using the identified financial, social and political dimensions from multiple stakeholders' perspective without any quantitative validation and analysis, Chircu (2008). Recently, a systematic review on electronic government research was conducted by Weerakkody et al. (2015). The overall derived view indicated that although a large number of papers discussed issues related to costs, opportunities, benefits and risks, the treatment of these issues tended to be superficial. They reported a lack of empirical studies to validate the relationships of the performance measures to various e-government systems and government transformation goals. Such analysis would help in the pre-adoption of digital government transformation and implementation by senior management.

Reviewing non-statistical studies for the assessment of customer satisfaction in other non-government domains shows the following. First, the k-means algorithm was used by Afolabi and Adegoke (2014) to identify the factors that contribute to customer satisfaction. It requires researchers to pre-specify the number of clusters (k), which is very difficult to estimate in reality. Another drawback of k-means is that it does not record the quality of generated clusters for benchmarking analysis. Second, a multi-objective genetic algorithm (GA) was implemented by Liébana-Cabanillas, Nogueras, Herrera, and Guillén (2013) to predict the levels of trust among e-banking users using socio-demographic, economic, financial and behavioral variables. It was found that the GA requires a long running time to find optimal solutions, converges to a limited region on the Pareto efficient frontier, and might ignore interesting solutions. An advanced technique based on Artificial Neural Networks (ANN) was used by Shen and Li (2014) to evaluate the service quality of public transport. It was found that the ANN technique lacks interpretability at the level of individual predictors, and that it is difficult to construct the network layers and associated learning and momentum measures. Finally, a DEA model based on the SERVQUAL five dimensions using nominal and ordinal data was useful for conducting a benchmarking analysis to improve the quality of services by Lee and Kim (2014).

In summary, the above brief review highlights the existence of fragmented measures with a lack of unified methodologies for evaluating both satisfaction and efficiency of e-services from users' perspective. This lack requires conducting more research using experiential real-time data with a proper validation of measures, and developing a comprehensive analytical model which can integrate various fragmented measures into a holistic framework to evaluate simultaneously both satisfaction of users as well as efficiency and effectiveness of e-services to identify managerial actions for improvement. Users of e-services, unlike customers, cannot switch to different government providers; if not satisfied, they can only opt not to re-use an e-service and stick to traditional modes. Therefore, this paper follows a bottom-up (user-centric) approach to determine satisfaction measures from the real-time experience of users while interacting online with e-services. The identified measures can then be validated and analyzed using the proposed CAM framework to meet the desired goal of Turksat. A bottom-up or user-centric approach was followed due to the following reasons. It was reported that the usage rate was limited and had not kept up with the fast-growing availability of e-services, (UN-Survey, 2012). The percentage availability of basic e-government services grew by 91% compared to only 39% for the percentage usage of e-government services in the EU27 between 2005 and 2010. Therefore, users must be placed at the center of development and delivery of e-services; identifying the factors that affect users' motivations, attitudes, needs, and satisfactions underlying intentions to use e-government services would have a decisive influence on the wide adoption and use of e-services, Verdegem and Verleye (2009).


A number of recent studies stressed the importance of studying the factors that influence citizens' behavioral intention to adopt and use e-government services. Citizens' attitude toward using e-government services was found to be the most significant determinant factor among other socio-technological, political and cultural factors by Al-Hujran, Al-Debei, Chatield, and Migdadi (2015). The significance of modeling the behavior of citizens to influence the adoption of e-government services was stressed in Rana, Dwivedi, Lat, Williams, and Clement (2017). They found that attitude has a direct effect on the adoption of e-services and that it is influenced by the effort expectancy of the user, the performance of e-services and the social influence. Similarly, the advantage of assessing e-services from users' perspective through measuring the users' satisfaction to increase e-participation was stressed by Kipens and Askounis (2016). For comprehensive reviews of the literature on e-services' measures and the relationships between satisfaction, impact, cost-benefit and risk-opportunity, and the adoption and intention to use e-services by individuals and organizations, we refer to Osman et al. (2014a), Weerakkody et al. (2015) and Petter et al. (2008).

3. The cognitive analytics management methodologies

Fig. 1. The cognitive analytics management framework: processes and concepts.

The cognitive analytics management is a mission-driven (or goal-driven) approach for the transformation of organizations, people and systems to generate insights and create shared values. It generates data-driven insights to make informed decisions, and suggests policy innovations for managing the performance and transformation of e-services to achieve desired goals. It expands and integrates emerging scientific fields including: Social cognitive theory, Bandura (1986); Analytics, (Robinson, Levis, & Bennett, 2014); Business analytics (Pape, 2016); Big data (structured, unstructured, and semi-structured data), Gupta, Kumar, Baabdullah, and Al-Khowaiter (2018); Cognitive analytics, Donaki (2014); and Cognitive computing (human–machine interaction, machine-to-machine interaction), Gupta et al. (2018). We shall present a brief discussion to provide the necessary support and background for the development of the CAM framework and associated methodologies.

The Social Cognitive Theory (SCT) has not been fully used to examine the individual's behavior for adopting e-services despite being considered as one of the important theories in human behavior, Bandura (1986). Some of the SCT constructs such as anxiety, attitude, self-efficacy, social influence, outcome expectancy, and their links to behavioral intention toward using technology have been investigated in a number of e-government adoption studies, Rana and Dwivedi (2015). Further, the Cognitive Mapping Theory (CMT) was used to better understand the decision-making interactions that took place across the management and implementation of e-government projects, Sharif, Irani, and Weerakkody (2010). CMT seeks to graphically represent the state of variables (organizational complexity, governance, methodological constraints, practitioner concerns, financial concerns, and fear of failure) within an e-government project by links with fuzzy weights to signify cause-and-effect relationships. The authors concluded that understanding of the broader social, political, user satisfaction and stakeholder contexts is imperative for effective decision making and e-government implementations; organizations must also define who is responsible for e-government projects, and senior management must engage with e-government investment decision processes to improve decision making. Robinson et al. (2014) of the OR/MS community defined Analytics as "analytics facilitates the realization of business objectives through reporting of data to analyze trends, creating predictive models for forecasting and optimizing business processes for enhanced performance". This definition stresses modeling and processing to conduct descriptive, predictive and prescriptive analytics. Big data analytics describes software tools to handle big-data attributes which are characterized by the 4-V data models: Volume, Velocity, Variety and Value. Donaki (2014) of Deloitte consulting defined "Cognitive Analytics" as a new emerging term to describe how organizations apply analytics to make smart decisions; the new term attracts more than 63,900 hits on Google and is defined as a "field of analytics that tries to mimic the human brain in drawing inferences from existing data and patterns, it draws conclusions based on existing knowledge bases and then inserts this back into the knowledge base for future inferences – a self-learning feedback loop". The cognitive computing term attracts more than 481,000 hits on Google; it is defined as the simulation of human thought processes in a computerized model, (TechTarget, 2017). It involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works. Davenport (2006) stressed the need for organizations to compete on analytics. An analytics initiative requires the following to assure success: an enterprise-wide system to ensure easy access to critical data; a widespread use of a set of modeling and optimization tools beyond basic statistics for a comprehensive understanding of customers; and an advocate team of senior analytics leaders who have passion for analytics. Those analytics leaders set the right goals, objectives and analytics culture. They also provide support to acquire the right analytics tools, hire the right analytics talent and act on the data-driven insights and recommendations. Pape (2016) reported that the extract-transform-load process of cleansing data items from legacy systems and external sources to transfer them into an analytics system typically accounts for more than 50 per cent of the time and cost of any analytics project.

Given the above brief discussion on fragmented but useful approaches for the success of organizations in the public and private sectors, we aim to propose our Cognitive Analytics Management framework to bring together the three fragmented cognitive, analytics and management concepts into a unifying methodology framework to guide the implementation of digital transformation to address an organization's challenges. It brings together the best of all worlds in academic rigor and practical relevance with a special focus on the beginning and ending processes. Fig. 1 illustrates the interconnection and integration of the three CAM processes. The implementation of CAM starts with the cognition process to understand a challenge and frame it into analytics terms by building data models, identifying variables and relationships to align with the desired goals/objectives, which are set in coordination with senior management.


Fig. 2. A pictorial representation of the human–machine online interactions.

A data and governance strategy must be formulated for getting clean data using appropriate research processes, protection, and technology enablers in compliance with the General Data Protection Regulation (GDPR) guideline, Tikkinen-Piri, Rohunen, and Markkula (2018). The cognition process typically accounts for more than 50 per cent of the time and cost of any analytics project, Pape (2016). The analytics process identifies the most appropriate analytics models and solving methodologies for addressing the identified challenge and achieving the desired objectives; it also validates the results and communicates the analytics insights to senior management. Finally, the management process is the most difficult one. It is necessary for the assurance of successful implementation based on the derived managerial insights; it involves communications of results to other stakeholders to influence transformational actions. It requires crafting a message to stakeholders and fueling suggestions on this process. More discussions on each process are presented in the sub-sections below, whereas supporting details on CAM applications and the creation of sustainable shared values can be found in Osman and Anouze (2014b).

3.1. The cognitive process for understanding and framing the evaluation challenge

The cognitive process is designed to understand the human–machine online interaction and to frame it into analytics terms. It analyzes previous findings to find out what has happened, to determine the data management and compliance of research processes required to deliver quality data for processing at the analytics process. To be more specific, it is to understand the factors affecting the performance and satisfaction of humans; determine appropriate data models and metrics; validate relationships and alignment to the desired goals; and propose information technology processes to collect, store, retrieve and prepare data for analysis.

Fig. 2 provides an illustration of the human–machine online interactions with an e-service for a better understanding of the evaluation challenge. It is inspired by the social cognitive theory, Bandura (1986); and the cognitive computing systems, (TechTarget, 2017). It aims to understand the challenge in human–machine online interactions, Hollan, Hutchins, and Kirsh (2000). In cognitive systems, the psychological and physiological behaviors of a human during online interactions with a machine are captured using digital sensors/cameras to estimate human intent to guide the movement of robotics, Kulic and Croft (2007). However, in our e-government online interactions, instead of using sensors/cameras, online surveys are used to capture the real-time behavior, attitude, intention and reaction, and economic values of users immediately after completing online sessions with an e-service. Fig. 2 illustrates the human–machine interaction with an e-service. The left and right parts of the figure show the internal process (black box: analytics engine) and the external process via a user interface (white box: computer/mobile screen) while interacting with an e-government system. The user interactions are then translated into a series of internal/external processes involving human-to-machine (external), machine-to-machine (internal) and machine-to-human (external) instructions to complete an e-service request. The final interaction delivers outputs on screen or on other external delivery modes such as traditional mail, emails, mobile messages, etc. For more discussions on the human–machine interaction cycles for the creation of shared values, refer to Osman and Anouze (2014c).

To evaluate the human–machine online interactions, a careful design is required for the identification of metrics to capture both human and machine characteristics. Critical reviews for the identification of the most critical factors and variables affecting users' satisfaction were published in Irani et al. (2012) and Weerakkody et al. (2015). The reviews led to the development of the COBRA model for satisfaction, which consisted of a set of satisfaction metrics. This set contains 49 SMART (Specific, Measurable, Achievable, Relevant, Timely) metrics, which were statistically validated in Osman et al. (2014a). Table 1 provides a list of questions for each of the 49 COBRA metrics divided into four COBRA factors and 8 sub-factors: Cost (tangible, tC; and intangible, iC); Benefit (tangible, tB; and intangible, iB); Risk (personal, pR; and financial, fR); Opportunity (service, sO; and technology, tO). The SMART metrics were derived using several focus-group meetings conducted in the UK, Turkey, Lebanon and Qatar involving users, professionals, academics and government experts to include new practical and relevant metrics not available in the literature. It turned out that the COBRA model was the quantitative equivalent of the SWOT qualitative strategic model, (Jackson, Joshi, & Erhardt, 2003).


Table 1
COBRA validated measurable variables and associated labels (online survey questions).

No Item/question Label

1 The e-service is easy to find tB


2 The e-service is easy to navigate tB
3 The description of each link is provided tB
4 The e-service information is easy to read (font size, color) tB
5 The e-service is accomplished quickly tB
6 The e-service requires no technical knowledge tB
7 The instructions are easy to understand tB
8 The e-service information is well organized iB
9 The drop-down menu facilitates completion of the e-service iB
10 New updates on the e-service are highlighted iB
11 The requested information is uploaded quickly iB
12 The information is relevant to my service iB
13 The e-service information covers a wide range of topics iB
14 The e-service information is accurate iB
15 The e-service operations are well integrated iB
16 The e-service information is up-to-date iB
17 The instructions on performing e-service are helpful iB
18 The referral links provided are useful iB
19 The Frequently Asked Questions are relevant sO
20 Using the e-service saved me time tC
21 Using the e-service saved me money tC
22 The provided multimedia services (SMS, email) facilitate contact with e-service staff sO
23 I can share my experiences with other e-service users sO
24 The e-service can be accessed anytime sO
25 The e-service can be reached from anywhere sO
26 The information needed for using the e-service is accessible sO
27 The e-service points me to the place of filled errors, if any, during a transaction tO
28 The e-service allows me to update my records online tO
29 The e-service can be completed incrementally (at different times) tO
30 The e-service removes any potential under table cost to get the service from E-government agency (tips) tC
31 The e-service reduces the bureaucratic process tC
32 The e-service offers tools for users with special needs (touch screen, Dictaphone) tO
33 The information are provided in different languages (Arabic, English, Turkish) tO
34 The e-service provides a summary report on completion with date, time, checkup list tO
35 There is a strong incentive for using e-service (such as paperless, extended deadline, less cost) tO
36 I am afraid my personal data may be used for other purposes pR
37 The e-service obliges me to keep record of documents in case of future audit fR
38 The e-service may lead to a wrong payment that needs further correction fR
39 I worry about conducting transactions online requiring personal financial information such visa, account number fR
40 Using e-service leads to fewer interactions with people pR
41 The password and renewal costs of e-service are reasonable tC
42 The Internet subscription costs is reasonable tC
43 The e-service reduces my travel cost to get the service from E-government agency tC
44 It takes a long-time to arrange an access to the e-service (the time includes: arrange for password; renew password; and Internet subscription) iC
45 It takes a long-time to upload of e-service homepage iC
46 It takes a long-time to find my needed information on the e-service homepage. iC
47 It takes a long-time to download/ fill the e-service application iC
48 It takes several attempts to complete the e-service due to system break-downs iC
49 It takes a long-time to acknowledge the completion of e-service. iC

COBRA: intangible and tangible Benefit (iB and tB); intangible and tangible Cost (iC and tC); service and technology Opportunity (sO and tO); financial and personal
Risk (fR and pR) Analysis.

SWOT evaluates a company, service or product by generating improvement initiatives with reference to internal processes and external competitors, without prioritization. In the SWOT-COBRA analogy, the Cost, Opportunity, Benefit and Risk factors are equivalent to Weakness, Opportunity, Strength and Threat, respectively. With the appropriate use of an analytics model to balance the tradeoffs between the various COBRA measures, satisfaction and prioritization insights with recommendations for improvement and resource allocations can be generated, as discussed later.

Further, the theoretical support for the COBRA selection of metrics for measuring satisfaction and performance values is provided below. First, the social exchange theory for interaction indicates that people invest in a social exchange interaction if and only if the cost and risk inputs they put into the interaction are less than the benefit and opportunity outputs they get, (Blau, 1964). The e-service values to users have been largely ignored in empirical research; they play an important role in determining patterns of development, adoption, use and outcomes, Leidner and Kayworth (2006). Second, the expectation confirmation theory for satisfaction indicates that consumers are satisfied if the actual experience matches prior expectation, (Oliver, 1980). If users are satisfied with the e-services' website and application design, they are likely to share information about their successful experience through the use of social media networks, UN-Survey (2012). Such satisfaction would further encourage the citizen usage of e-services, Zheng and Schachter (2017). Third, according to the equity theory for predicting an individual's motivation and satisfaction behaviors, equity is measured by comparing the ratio of rewards (outputs) to efforts (inputs). Pritchard (1969) suggested that employees try to maintain a balance between the efforts they give to an organization (inputs) and what they receive (output rewards) to base their satisfaction. As a consequence, individuals seek to maximize their net outcomes (rewards minus costs) or at least maintain equity between them. When individuals find themselves participating in inequitable relationships, they become distressed and dissatisfied. Using rational thinking, it is believed that when the benefits and opportunities (used as outputs) are greater than the costs and risks (used as inputs), the users will be more satisfied.

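To make this input-output framing concrete before moving to the analytics process, the sketch below shows how one user's 49 responses (Table 1) could be grouped into the eight COBRA sub-factors and split into DEA inputs (costs, risks) and outputs (benefits, opportunities). The item-to-label mapping follows Table 1, but the simple averaging of Likert responses and the function names are our own illustrative assumptions rather than the paper's implementation.

```python
# Illustrative aggregation of the 49 COBRA items of Table 1 into the eight
# sub-factors; averaging Likert responses is an assumption for illustration.
from statistics import mean

ITEM_GROUPS = {
    "tB": list(range(1, 8)),  "iB": list(range(8, 19)),
    "sO": [19, 22, 23, 24, 25, 26], "tO": [27, 28, 29, 32, 33, 34, 35],
    "tC": [20, 21, 30, 31, 41, 42, 43], "iC": list(range(44, 50)),
    "pR": [36, 40], "fR": [37, 38, 39],
}

def cobra_profile(responses):
    """responses: {item number: Likert score} for one completed e-service session."""
    scores = {label: mean(responses[i] for i in items) for label, items in ITEM_GROUPS.items()}
    inputs = [scores[k] for k in ("tC", "iC", "pR", "fR")]    # costs and risks, to be minimized
    outputs = [scores[k] for k in ("tB", "iB", "sO", "tO")]   # benefits and opportunities, to be maximized
    return inputs, outputs

# Example: a neutral respondent answering 3 on every item
inputs, outputs = cobra_profile({i: 3 for i in range(1, 50)})
```

Each online interaction then contributes one such input-output vector, which is exactly the multiple-input, multiple-output data that the DEA models below consume.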

Table 2
DEA output-oriented and input-oriented constant return to scale (primal) models.

DEA-CRS-O: output-oriented primal model (for DMU p)

Maximize $\theta_p = \sum_{k=1}^{s} v_k y_{kp}$
Subject to:
$\sum_{j=1}^{m} u_j x_{jp} = 1$
$\sum_{k=1}^{s} v_k y_{ki} - \sum_{j=1}^{m} u_j x_{ji} \le 0, \quad i = 1, \ldots, n$
$v_k, u_j \ge 0 \quad \forall k, j$

DEA-CRS-I: input-oriented primal model (for DMU p)

Minimize $\varphi_p = \sum_{j=1}^{m} u_j x_{jp}$
Subject to:
$\sum_{k=1}^{s} v_k y_{kp} = 1$
$\sum_{k=1}^{s} v_k y_{ki} - \sum_{j=1}^{m} u_j x_{ji} \le 0, \quad i = 1, \ldots, n$
$v_k, u_j \ge 0 \quad \forall k, j$

3.2. The analytics process for evaluation Table 3


DEA input-oriented and output variable return to scale envelopment (dual)
models).
The analytics process to analyze the online experiential data is
based on prescriptive analytics using Data Envelopment Analysis DEA-VRS-I: DEA input-oriented dual DEA-VRS-O: output oriented dual
and prediction analytics based on a classification and regression Minmize θ p Maximize ϕ p
tree (CART). Data Envelopment Analysis (DEA) models the human– Subject to: Subject to:
n n
machine interactions to find out what to do. DEA generates λ j xi j ≤ θ p xip ; ∀i λ j xi j ≤ xip ; ∀i
j=1 j=1
analytics results on the performance of e-services and the satisfac- n n

tion of users. It establishes the best-practice internal benchmark λ j yr j ≥ yr p ; ∀r λ j yr j ≥ ϕ p yr p ; ∀r


j=1 j=1
to set target for improving inefficient e-services with reference to n n
λj = 1 λj = 1
the established benchmark. A classification and regression trees j=1 j=1

(CART) model is used to analyze both DEA satisfaction score of λ j ≥ 0; ∀ j = 1, . . . , n; θ p free λ j ≥ 0 ∀ j = 1, . . . , n; ϕ p free
each individual with associate characteristics to identify satis-
fied/dissatisfied classes for recommending social inclusion policies ber of output variables; x j p is the utilized amount of input j, ykp is
and prioritizing managerial actions to increase take up. The two the generated amount of output k; and vk , u j are the weights as-
analytics models are described next. signed to output k and input j, respectively. The optimized weights
3.2.1. Data Envelopment Analysis (DEA) for satisfaction and performance evaluation

DEA is a nonparametric linear programming approach. It is proposed to evaluate the relative performance efficiencies of a set of e-services. These e-services are called decision-making units (DMUs) in DEA terminology. Each DMU utilizes multiple input resources (costs, risks) and transforms them into multiple output returns (benefits, opportunities) to measure satisfaction and performance. DEA generates a relative score for each DMU indicating the quality of the transformation by comparing the DMU to its peers. The DEA score is an aggregated value, defined as the ratio of the total sum of weighted outputs over the total sum of weighted inputs. The weights for the inputs and outputs are not fixed values, as in statistics, but take variable values. These variable weights are optimized in the best interest of the DMU being evaluated, subject to relativity performance constraints requiring that the performance ratio of each DMU does not exceed 1. A DMU with a DEA score of 1 is called efficient, whereas a DMU with a score less than 1 is called inefficient.

In the literature, there are two basic DEA models. The first assumes that all DMUs operate under the homogeneity assumption of Constant Return to Scale (DEA-CRS), Charnes et al. (1978). The DEA-CRS model can be sub-divided into input-oriented and output-oriented models. The output-oriented model maximizes the total sum of weighted outputs to generate an output-effectiveness performance value for each DMU and suggests recommendations for increasing outputs at a fixed level of the multiple inputs. The input-oriented model minimizes the total sum of weighted inputs to generate an input-efficiency performance value and suggests recommendations for reducing the utilization of inputs at a fixed level of the multiple outputs.

Table 3 provides the linear programming formulations of the input-oriented (DEA-CRS-I) model and the output-oriented (DEA-CRS-O) model, in columns one and two respectively, under the constant return to scale assumption. For each DMU (p = 1,...,n) in a set of homogeneous DMUs under evaluation, (m) represents the number of input variables and (s) represents the number of output variables; the optimal weights ($v_k$ for the outputs and $u_j$ for the inputs) can be seen as the optimal values for the returns and the costs of the resources that would make the associated DMU as efficient as possible in the relative evaluation process. The quality value of the transformation for a DMU (p) is obtained by maximizing a non-linear objective performance function expressed by the ratio $\left(\sum_{k=1}^{s} v_k y_{kp} \big/ \sum_{j=1}^{m} u_j x_{jp}\right)$, subject to the condition that no performance ratio exceeds one. This objective ratio is linearized in two different ways to generate the input- and output-oriented linear models: the DEA-CRS output-oriented linear model is generated by setting the denominator of the non-linear objective to 1, whereas the DEA-CRS input-oriented linear model is obtained by setting the numerator to 1. The set of n constraints in both models imposes the relativity concept that no ratio should exceed one. Further, since one of the objectives of the DEA evaluation is to identify the set of efficient DMUs, in order to establish a benchmark and to set targets for improving the inefficient units, each of the DEA-CRS models needs to be executed as many times as the number of available DMUs to derive the DEA efficiency scores of all DMUs.
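For reference, the two linearizations just described take the standard Charnes-Cooper form shown below (a sketch in the notation used here; the exact layout of Table 3 may differ). Fixing the denominator to one gives

$$\max_{u,v}\ \sum_{k=1}^{s} v_k y_{kp}\quad \text{s.t.}\quad \sum_{j=1}^{m} u_j x_{jp}=1,\qquad \sum_{k=1}^{s} v_k y_{ki}-\sum_{j=1}^{m} u_j x_{ji}\le 0\ \ (i=1,\dots,n),\qquad u_j, v_k\ge 0,$$

while fixing the numerator to one and minimizing $\sum_{j=1}^{m} u_j x_{jp}$ under the same relativity constraints gives the companion model.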
The DEA-CRS models assume homogeneity of DMUs. According to Dyson, Allen, Camanho, Podinovski, Sarrico, and Shale (2001), the source of non-homogeneity (heterogeneity) of DMUs comes from different environments or economies of scale. In the context of e-government services, the human-machine interactions with an e-service (the DMUs) are heterogeneous because interactions originate from locations with more or less attractive Internet speeds (environment) or involve people with different knowledge and technical skills; users may interact with an e-service using different input resources to receive the same output level, or interact with different e-service types (informational, transactional, and interactional) using the same input level to receive different output levels. Such differences lead to heterogeneous human-machine interactions operating under variable (increasing or decreasing) returns to scale. To address this non-homogeneity (heterogeneity) issue, a variable returns to scale (DEA-VRS) model has been developed specifically to accommodate scale effects in the analysis, Banker, Charnes, and Cooper (1984). The DEA-VRS models are based on the duality theory of linear programming, in which the DEA-CRS primal model is augmented by adding a new constraint ($\sum_{i=1}^{n} \lambda_i = 1$) to allow for variable returns to scale in the envelopment dual models, where the ($\lambda_i$)s represent the dual variables of the constraints of the linear programming primal model associated with DMU (i). Table 3 provides the linear programming formulations of the input-oriented (DEA-VRS-I) and the output-oriented (DEA-VRS-O) envelopment dual models under variable returns to scale.
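As an illustration of the envelopment form just described, the following minimal Python sketch solves the input-oriented DEA-VRS model for a single DMU with scipy; the data, function name and values are illustrative only (they are not the study's data or software), and removing the convexity constraint would give the corresponding DEA-CRS score:

    import numpy as np
    from scipy.optimize import linprog

    def vrs_input_efficiency(X, Y, p):
        """Input-oriented DEA-VRS (BCC) envelopment score for DMU p.
        X: (m, n) inputs, Y: (s, n) outputs, columns are DMUs.
        theta = 1 indicates DMU p lies on the VRS frontier (slacks ignored here)."""
        m, n = X.shape
        s, _ = Y.shape
        # Decision variables: z = [theta, lambda_1, ..., lambda_n]; minimise theta
        c = np.r_[1.0, np.zeros(n)]
        # Input rows:  sum_i lambda_i * x_ji - theta * x_jp <= 0
        A_in = np.hstack([-X[:, [p]], X])
        # Output rows: -sum_i lambda_i * y_ki <= -y_kp (outputs at least y_kp)
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, p]]
        # Convexity constraint sum_i lambda_i = 1 gives variable returns to scale
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
        b_eq = np.array([1.0])
        bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        return res.fun

    # Toy example: 4 interactions (DMUs), 2 inputs (cost, risk), 2 outputs (benefit, opportunity)
    X = np.array([[20., 25., 30., 18.], [8., 10., 6., 9.]])
    Y = np.array([[22., 30., 28., 20.], [18., 24., 25., 16.]])
    scores = [vrs_input_efficiency(X, Y, p) for p in range(X.shape[1])]

A full implementation, such as the dedicated DEA software used later in this paper, would also treat slacks and the output-oriented variant.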
It is known that one of the most important steps in the DEA analytics modeling process is the definition of the DMUs and the identification of the associated multiple-input and multiple-output variables needed to achieve the organization's desired goal. The set of human-machine online interactions is used to define the set of decision-making units (DMUs), and the COBRA metrics are then used to define the sets of multiple-input and multiple-output variables, respectively. The set of online interactions (DMUs) is then evaluated using the DEA models to determine the relative satisfaction of each user based on the individual's experiential interaction with a particular e-service. The DEA scores derived for all users from the DEA-VRS input-oriented model are averaged to provide an input-efficiency score on the performance of a particular e-service. Similarly, the DEA scores derived from the DEA-VRS output-oriented model provide an output-effectiveness score on the performance of an e-service.
The major advantage of the DEA model lies in the simplicity of solving it after careful modeling of the underlying problem. It does not require an explicit objective function to express the transformation of inputs into outputs, but it does require the ratio of the total number of DMUs to the total number of inputs and outputs to be sufficiently large for a proper discrimination among DEA units, Osman et al. (2011). The major DEA disadvantage is that it does not provide statistical inference on the derived DEA scores in the presence of random errors in the input data or correlations between the metrics and users' traits. As a result, a potential efficiency bias in the DEA scores may arise. To address this disadvantage, Simar and Wilson (2007) suggested implementing DEA models with a bootstrapping technique to produce estimates of the lower and upper confidence bounds for the averages and medians of the original DEA scores. They suggested using a simple sampling procedure that draws with random replacement from the same sample of DEA scores, thus mimicking the underlying original data generation process.
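A naive version of this resampling step can be sketched in Python as follows; this is illustrative only, since the full Simar-Wilson procedure uses a smoothed bootstrap of the estimated frontier rather than a plain resampling of scores, and the example scores are made up:

    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_bounds(dea_scores, n_boot=2000, alpha=0.05):
        """Resample a sample of DEA scores with replacement and return
        percentile confidence bounds for their mean and median."""
        scores = np.asarray(dea_scores, dtype=float)
        means, medians = [], []
        for _ in range(n_boot):
            sample = rng.choice(scores, size=scores.size, replace=True)
            means.append(sample.mean())
            medians.append(np.median(sample))
        lo, hi = 100 * alpha / 2, 100 * (1 - alpha / 2)
        return np.percentile(means, [lo, hi]), np.percentile(medians, [lo, hi])

    # Hypothetical scores for the users of one e-service
    mean_ci, median_ci = bootstrap_bounds([0.62, 0.71, 0.83, 1.00, 0.77, 0.69])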
3.2.2. Classification and regression trees (CART) for classification

The classification and regression trees (CART) model is a non-parametric methodology. It is proposed here to construct a tree structure that identifies classes of users based on their DEA satisfaction scores (the dependent variable) and their characteristics (the independent predictors). The CART model recursively partitions the dataset of users (the population) into smaller meaningful subclasses so as to improve the fit within each subclass as much as possible. The partitioning process uses a simple decision-tree structure to visualize the identified classes. The major components of a CART model are the partition and stopping rules. The partition rule determines which node (among the predictor variables) to select for splitting the population at each stratification stage. The stopping rule determines the final constructed strata (branches of the tree with different characteristics). Once the strata have been created, the node impurity of each stratum is assessed using heterogeneity measures. There are three heterogeneity criteria to assess node impurity: misclassification error, the Gini index, and cross entropy. The Gini index of node impurity is the most commonly used measure for classification. The impurity measure reaches its minimum value of zero when all observations included in a node belong to the same class, and its maximum value when the different classes at a node are of equal sizes. In regression trees, however, the measure of impurity is based on the least squared differences between the observed and predicted values.

CART models are not as popular as traditional statistical methods due to their shorter span of existence. However, they have several implementation advantages: (i) the analytical results are easy to explain and interpret, and the segmentation of the population into meaningful classes is obtained directly from the visualization of the constructed tree structure; (ii) CART models are non-parametric and non-linear in nature, and they do not make any statistical assumptions on normality, non-collinearity, and other requirements of conventional methods (such as discriminant analysis and ordinary least squares); (iii) CART models can handle all data types, whether numerical or categorical. As a result, they have become one of the most powerful visualization analytics tools, Hastie, Tibshirani, and Friedman (2009). However, they are not without disadvantages. A researcher cannot force a particular predictor variable into a CART model; it may be omitted from the list of predictors in the final tree if it is found not to be sufficiently significant. In such a case, traditional regression models might be more appropriate, Gordon (2013). Finally, for more details, we refer to the book on statistical learning by Hastie et al. (2009) and the review of CART developments and applications by Loh (2014).
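A minimal sketch of the Gini criterion mentioned above, assuming the usual definition $1-\sum_k p_k^2$ over the class proportions of a node (the labels in the example are illustrative):

    import numpy as np

    def gini_impurity(labels):
        """Gini index of a node: 0 when the node is pure, largest when the
        classes present are equally mixed."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    gini_impurity(["S", "S", "S", "S"])   # 0.0, a pure node
    gini_impurity(["H", "S", "D"])        # about 0.667, three equally mixed classes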
3.3. The management process for setting goals and managerial actions

The management process is an essential component of the CAM framework for the initiation and successful completion of any analytics project. Senior management initially sets the strategic goals and objectives, such as closing the digital divide through enhancing users' satisfaction and providing more improved e-services. They approve access to systems and enforce adherence to data governance policies. Senior management also synchronizes and coordinates the interactions among stakeholders (policy makers, internal technical support staff and the research team, users and providers). If convinced of the desired values, senior managers commit the necessary resources to execute the analytics project and implement the analytics-based policy recommendations.

Successful organizations like eBay, Google, and Amazon have adopted superior management processes by hiring analytics experts. They provide them with quality data and support them with the best analytics tools to make the best-informed decisions, whether big or small, every day, over and over (Davenport, 2006; Davenport & Patil, 2012). For a recent survey of analytics models, applications, and tools we refer to Osman et al. (2014a). Further discussion of the importance of the management process for the success of analytics initiatives for measuring productivity performance at organizations, and for the development of evidence-based policies, can be found in Dyson (2000).

4. CAM implementation and discussion of results

In this section, the CAM implementation processes, their associated components, results and impacts are discussed. Each CAM process took almost one year to complete, and the project was executed over a 4-year period from 2010 to 2014. Fig. 3 depicts the data-flow chart for implementing the cognitive, analytics and management processes. The PIM (Performance Improvement Management) software was used to generate the DEA results (Emrouznejad & Thanassoulis, 2014), while the CART analysis and visualization were generated with CART v6.6 (Salford Systems, San Diego, USA).

4.1. The implementation of the cognitive process

Turksat is a private agency entrusted to provide e-services to Turkish users. It provided excellent support staff to mount
Fig. 3. The flow-chart of processes for the CAM implementation: the data collection step of the cognitive process (COBRA metrics, with inputs cost and risk, outputs benefit and opportunity, and data on users' characteristics) feeds the DEA analytics (DEA models and scores), whose results feed the CART analytics (satisfaction target and groups: highly satisfied, satisfied, dissatisfied); the cognitive and analytics processes are embedded within the overall management process.

Table 4
List of the 13 e-services and their characteristics (code and name, with the number of responses per group).

Informational-G1 (1 e-service; 2258 responses):
  9001  Content Pages for Citizen Information

Interactive/Transactional-G2 (10 e-services; 636 responses):
  860   Military Services - Application for receiving information
  867   Online Inquiry for Consumer Complaint
  868   Parliament Reservation for Meeting
  871   Military Services - Deployment Place for Training
  872   Military Services Inquiry
  2000  Consumer Portal - Application of Consumer Complaint
  2002  Juridical System Portal - Citizen Entry
  2003  Juridical System Portal - Lawyer Entry
  2004  Juridical System Portal - Institutional Entry
  2005  Juridical System Portal - Body of Lawyers Entry

Personalized-G3 (2 e-services; 284 responses):
  870   Education Service - Student Information System
  9000  My Personal Page

the online survey inside the internal operating systems of the e-services to collect real experiential data from users. The online survey parts were first approved by our universities' internal review research boards before the data collection process took place.

4.1.1. The list of e-services

A list of 13 e-government services was selected from Turksat's e-government portal. The details of each e-service and the responses collected from users are provided in Table 4. The e-services are divided into three groups with different characteristics: (1) informational; (2) interactive/transactional; and (3) personalized. The informational e-services provide public content and do not require authentication (username and password) for access by users. Interactive/transactional e-services, however, do require authentication for downloading application forms, contacting agency officials and requesting appointments. Finally, personalized e-services also require authentication and allow users to customize the content of e-services, conduct financial transactions and pay online to receive e-services.

4.1.2. The online survey for data collection

The online survey is composed of three parts: (1) part I collects qualitative bio-data on users (education, age, gender, frequency of using e-government services and annual income); (2) part II accumulates quantitative data from responses to the 49 COBRA questions; and (3) part III gathers free-text comments in response to an open-ended question for cross-validation and content analysis. The online survey was not promoted to Turksat's users, and the data were collected using a simple random sampling process to avoid bias. After completing an e-service interactive session, a user is invited to complete the online survey. At the start, the respondent is informed that personal data will be kept confidential, that the collected data will only be used for academic research to improve the provision of e-services, and that the user is not obliged to complete the survey and can stop at any time.

The data collection process ran over two phases. A pilot phase collected data over a three-month period to refine the measurement scale model. The field phase was run for a further period of nine months, and one dataset was gathered every three months. Since it is an unsolicited online survey, it was

Table 5
Descriptive analysis of the DEA input and output variables.

Variable (total number of questions; the question numbers "NO" refer to Column 1 of Table 2)    Mean    Median    Std. deviation    Minimum    Maximum

DEA input variables


tangible Cost (tC) (7): Q20–Q21; Q30–Q31; and Q41–Q43. 22.84 25.00 8.38 7.00 35.00
intangible Cost (iC) (6): Q44–Q49 17.19 17.00 6.93 6.00 30.00
personal Risk (pR) (5): Q36–Q40 6.34 7.00 2.41 5.00 25.00
financial Risk (fR) (3): Q37–Q39 9.39 10.00 3.48 3.00 15.00
DEA output variables
tangible Benefit (tB) (7): Q1–Q7 22.54 25.00 8.73 7.00 35.00
intangible Benefit (iB) (11): Q8–Q18 34.84 38.00 13.22 11.00 55.00
Service support Opportunity (sO) (5): Q19–Q22 and Q26 19.05 21.00 7.30 5.00 25.00
Technology support Opportunity (tO) (7): Q27–Q29 and Q32–Q35 22.52 24.00 8.03 7.00 35.00

not possible to determine in advance when enough responses would have accumulated. At the end of the collection period, a total of 3506 responses had been collected. A data cleaning process was conducted to remove incomplete responses; a total of 3178 responses (96.64%) was found valid, and these were distributed over the list of e-services as follows: 2258 responses for the informational e-services, 636 responses for the interactive/transactional e-services, and 284 responses for the personalized e-government services, as shown in Table 4.

Despite the fact that our analytics models are non-parametric and do not require statistical assumptions on sample size, a sufficiency test was conducted to make sure that we had an acceptable representative sample. Given an estimate of 80 million for the Turkish population, of which 9% were ICT users, an estimated 7.2 million ICT users are potential users of e-services. According to Saunders, Lewis, and Thornhill (2007), a population of 10 million needs a sample of 2400 responses at the 95% confidence level and a 2% margin of error. Thus our collected sample of 3178 clean responses exceeds the minimum threshold needed to conduct valid statistical analysis. The respondent users were asked to rate the COBRA questions in part II on their online human-machine interactions using 5-point Likert scale values (1 = strongly disagree, ..., 5 = strongly agree).
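The quoted 2400-response threshold is consistent with the usual large-population approximation for a proportion, shown here for reference only (the source tables may round differently):

$$ n \;\approx\; \frac{z^{2}\,p(1-p)}{e^{2}} \;=\; \frac{1.96^{2}\times 0.5\times 0.5}{0.02^{2}} \;\approx\; 2401. $$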
4.1.3. Validation of COBRA measures

Further to the theoretical support for the COBRA metrics provided in Section 3.1, an empirical validation was also conducted to establish the relationships between the COBRA metrics and the satisfaction of users, and their alignment with Turksat's goal. An advanced statistical structural equation model was developed in Osman et al. (2014a). The statistical results showed that the cost and risk factors have negative relationships with users' satisfaction, whereas the benefit and

Table 6
Descriptive analysis of users' bio-data. (Source of the EU27 comparison figures cited in the text: http://ec.europa.eu/eurostat/statistics-explained/index.php/File:Obtaining_information_online_from_public_authorities_by_socioeconomic_breakdown_2009.png.)

opportunity factors have positive relationships with users' satisfaction. The coefficients in brackets show the values of the structural links: cost (−0.36), risk (−0.11), benefit (0.59) and opportunity (0.68), indicating that the lower the cost and risk values and the higher the benefit and opportunity values are, the higher the satisfaction is. The COBRA model explained 76% of the variation in users' satisfaction at the 90% confidence level.

Table 5 provides a descriptive summary of the data collected on the COBRA metrics in part II. For instance, the maximum value for an aggregated set of 7 questions is 35 (7 × 5) and the minimum value is 7 (7 × 1); see the values for the tangible cost (tC) category. Similarly, a descriptive summary of the respondents' bio-data from part I is provided in Table 6. The bio-data show that the e-services are attracting highly educated citizens, with a cumulative 51.6% of graduate and postgraduate degree holders; this percentage is very close to the EU27 average of 53% for the same category. It is interesting to note that the percentage of low-educated users (33.32%) in Turkey is much higher than the 12% of the corresponding EU27 group. Moreover, the low age group in the EU27 has a higher percentage of users than in Turkey, whereas the Turkish middle and old age groups have higher values than those of the EU27. Finally, Eurostat 2010 figures show that 28% of EU27 citizens had obtained information online from government authorities' websites in 2009, compared to only 9% in Turkey. No other EU statistics can be compared to the additional statistics generated from our survey. For instance, the frequency of e-service usage in our sample is 23% daily, 44% weekly, 27% monthly and around 6% yearly; these figures indicate a high frequency-usage level. The percentages of female and male users were 20% and 80%, respectively.

4.2. DEA prescriptive results for benchmarking analysis

Two DEA models are implemented, with results reported to evaluate the satisfaction of users and the performance of e-services. First, a local-frontier analysis is conducted to report satisfaction and performance results for a single e-service. Second, a meta-frontier analysis is conducted to report the overall performance and satisfaction results obtained by including all e-services in a single DEA run, in order to identify the best-practice national benchmark, which consists of the set of most efficient and effective e-services, to guide the management process in the design of improved e-services.

In practice, there are operational managers and policy makers in charge of e-services. The operational managers are often interested in monitoring and controlling the operations of an e-service at the micro (local) level; hence special interest is focused on the input-efficiency and output-effectiveness performance values of an e-service at the local-frontier level. The input-efficiency objective minimizes the resource (cost and risk) utilization at a fixed amount of outputs. The output-effectiveness objective maximizes the (benefit and opportunity) returns at a fixed amount of inputs. The policy makers, however, have a strategic interest in improving the performance of all e-services at the macro (meta) level; hence the meta-frontier analysis is of interest to obtain input-efficiency and output-effectiveness values for the whole e-services sector, and the identification of the national best-practice benchmark is of great importance for learning and for the development of evidence-based policies. Therefore, input-oriented and output-oriented DEA models are used to obtain efficiency and effectiveness values at the individual e-service level and at the overall sector level.

Table 7
Local and meta-frontier analyses of satisfaction and success. (For each of the 13 e-services, grouped into G1, G2 and G3, the table reports the output-orientation and input-orientation results at both the local-frontier and meta-frontier levels, each with three columns: DEA score (1), Mean (2) and Median (2). Notes: 1: the average of DEA scores without bootstrapping; 2: the averages of the DEA bootstrapping confidence intervals (mean and median) of the DEA scores. Individual cell values are not reproduced here.)

Table 7 reports the averages of the DEA results without bootstrapping under the DEA score (1) columns, while the bootstrapping DEA results are reported under the Mean (2) and Median (2) columns, at both the local-frontier and meta-frontier levels. Since each DEA run provides a single DEA satisfaction score for each individual user, the average of the DEA

Fig. 4. The national best-practice of efficiency-effectiveness benchmarking groups.

satisfaction scores of the users is considered as the satisfaction measure for the particular e-service. The satisfaction measures for a specific e-service differ between the local-frontier and meta-frontier levels due to the pooling of the data on all e-services in a single DEA analytics run.

From the meta-frontier analysis in Table 7, it can be seen that the best-practice national benchmark consists of the two non-dominated frontier e-services (867: Online Inquiry for Consumer Complaint; and 2005: Juridical System Portal). They have either input-efficiency or output-effectiveness scores equal to 100. They envelop all other e-services under them, hence the name Data Envelopment Analysis. They provide policy makers with an identification of the best-practice e-services from which to recommend improvement targets to the other, less efficient e-services. Additionally, plotting the pair of average efficiency and effectiveness scores for each e-service provides a visual representation of the performances of all e-services, Fig. 4. Four different groups of e-services can easily be identified. Two e-services form the frontier best practice (benchmark): 2005 is input-efficient (most efficient) and 867 is output-effective (most effective); 860 has above-average values of both efficiency and effectiveness scores, and together 860 and 867 form the super-star group of e-services to mimic (top-right). Five e-services (868, 870, 2004, 9000 and 9001) have output-effectiveness scores above average but are not input-efficient; they form a potential star group for input redesign (top-left). Three e-services (871, 2002 and 2005) have input-efficiency scores above average but are not output-effective; they form a potential star group for output redesign (bottom-right). Three e-services (872, 2000 and 2003) are neither input-efficient nor output-effective; they form a potential group for a complete redesign (bottom-left). The super-star group would help the policy makers to document their best-practice experiences in order to guide the learning process for improving all the other inefficient e-services. It is interesting to note that the worst e-service is 2000 (Application of Consumer Complaint) while the best e-service is 867 (Online Inquiry for Consumer Complaint), indicating that it is easier to make an online inquiry about a consumer complaint than to fill in an application for a consumer complaint. The potential star groups include the informational e-service 9001, which attracts 71.82% of the responding users. It has an above-average output-effectiveness score, i.e., it provides reasonable output values to users, but it has a below-average input-efficiency score, i.e., the analysis provides transformational signals to redesign its internal process to improve the input-efficiency of this e-service by reducing its associated cost and risk factors, such as Internet costs and password access costs. It is also interesting to notice that all personalized e-services (870 and 9000) belong to the output-effectiveness groups (such personalization is appreciated by users), but they have below-average input-efficiency scores, i.e., such e-services should be redesigned to operate more efficiently from users' perspectives. The potential star and inefficient groups together contain 83% of the total number of e-services, and they cut across the informational, transactional and personalized operating groups; they indicate the need to revamp the majority of e-services to improve both their efficiency and effectiveness performance levels.

To provide more analytics insights from the benchmarking analysis, consider the four Juridical System Portal e-services: Citizen Entry (2000); Lawyer Entry (2003); Institutional Entry (2004); and Body of Lawyers Entry (2005). They are designed to meet the goals and objectives of the Ministry of Justice in serving its users: the general public, professional lawyers, and institutional officers. From the provider's perspective, they are designed to deliver the same satisfaction level. However, the DEA benchmarking analysis places these e-services in different quadrants of Fig. 4 due to their different performance evaluation scores from the users' perspective: e-services 2000 and 2003 fall in the bottom-left quadrant, 2004 in the top-left, and 2005 in the bottom-right quadrant.
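The four benchmarking groups of Fig. 4 can be reproduced mechanically once the per-service averages are available. A small Python sketch follows, with illustrative scores and averages rather than the actual Table 7 values:

    def quadrant(eff, effect, eff_avg, effect_avg):
        """Assign an e-service to a Fig. 4 group by comparing its average
        input-efficiency (eff) and output-effectiveness (effect) to the grand means."""
        if eff >= eff_avg and effect >= effect_avg:
            return "super-star (top-right): document and mimic best practice"
        if effect >= effect_avg:
            return "potential star (top-left): redesign inputs (cost, risk)"
        if eff >= eff_avg:
            return "potential star (bottom-right): redesign outputs (benefit, opportunity)"
        return "complete redesign (bottom-left)"

    # Illustrative use for three hypothetical services
    services = {"A": (0.86, 0.71), "B": (0.65, 0.84), "C": (0.62, 0.66)}
    eff_avg, effect_avg = 0.74, 0.74
    for name, (eff, effect) in services.items():
        print(name, quadrant(eff, effect, eff_avg, effect_avg))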

Table 8
The DEA policy recommendations to improve overall performance of e-services.

Policies for COBRA metrics Maximum Observed Projection % change Weight Weighted change

P1: Increase benefits and opportunities (effectiveness of e-services) from users’ perspective
B Tangible benefit 35 17.57 23.00 30.9 0.42 12.97
Intangible benefit 55 27.85 37.09 33.2 0.15 4.98
O Service support 30 15.13 20.15 33.2 0.14 6.48
Technology support 35 18.63 23.61 26.7 0.28 7.47
P2: Reduce costs and risks (efficiency of e-services) from users’ perspective
C Tangible cost 35 18.29 13.90 −24.0 0.62 −14.88
Intangible cost 30 16.42 10.06 −38.7 0.18 −6.96
R Personal risk 10 5.52 2.80 −49.3 0.11 −5.52
Financial risk 15 8.51 5.08 −40.3 0.09 −3.62

Further empirical insights can be observed from the results in Table 7. First, it appears that the bootstrapping DEA corrected the averages of the DEA scores downwards by up to 3% at the local-frontier level and by up to 1% at the meta-frontier level. At the local-frontier level, the frequency of changes (12 times) and the magnitudes of the corrections in the averages of the output-effectiveness scores over all e-services were: 0% (once), 1% (three times), 2% (eight times) and 3% (once); the corresponding changes (12 times) in the averages of the input-efficiency scores were: 1% (eight times), 2% (three times) and 4% (once). The bootstrapping DEA results thus changed almost equally in both models, but the largest change of 4% occurred in one of the input-oriented results; this is probably due to the existence of more correlation between the input variables and the users' characteristics than with the output variables, which are more related to the e-services' characteristics. Second, at the meta-frontier level, the averages of the output-effectiveness values were reduced by much less, namely 0% (five times) and 1% (eight times), and the input-efficiency values were not reduced at all. The bootstrapping analysis indicates that the results at the local-frontier level are more affected by the categorical data on users' characteristics. It is also interesting to observe that pooling all the data into a single meta-frontier DEA analysis produced no change in the bootstrapping results, i.e., the pooling removed the impact of any correlations and errors. Although the corrections from the bootstrapping DEA are not large in general, they may be needed for the analysis of individual e-services with a small number of responses.
Third, Table 8 suggests improvement targets from which to develop managerial actions to improve the e-services sector. The targets are expressed in terms of expected percentage changes in the COBRA factors. At least two general policy recommendations can be developed at the strategic level: policy one (P1) relates to the desire to increase the benefit and opportunity values, while policy two (P2) relates to the desire to decrease the cost and risk values. The recommended changes are computed from the percentage differences between the desired projections and the observed actual values. Table 8 also provides the averages of the optimal weights (weight column) for each COBRA factor. Although these may not be unique, due to the potential existence of multiple optimal solutions to the DEA linear programming models, they can still provide an expected weighted value for the magnitude of the desired changes in order to prioritize managerial actions. The weights are multiplied by the percentages of the recommended changes (% change column) to generate the data-driven expected magnitudes (weighted change column). The most important recommendations are: first, to reduce the tangible cost factor; second, to increase the tangible benefit factor; and third, to increase both the service and technology opportunity factors. Ignoring the order of magnitudes and using the raw percentage differences would instead favor reducing the risk factors first, followed by reducing the intangible cost factor second, and increasing the intangible benefit and service opportunity factors third, i.e., giving more priority to improving the input-efficiency performance, hence addressing the concern of the users of the five e-services in the top-left quadrant of Fig. 4. The input-efficiency recommendations should be given the highest priority since they would impact the majority of users, 2542 out of 3178 (or 79.93%) of the respondents.
umn) for each COBRA factor are also provided. Although, they may
not be unique due to the potential existence of multiple-optimality 4.3. CART analytics results
solutions for the DEA linear programming models, they can still
provide an expected weighted value on the magnitude of desired CART analysis is implemented to identify the characteristics of
changes to prioritize managerial actions. The weights are multi- users who are found highly satisfied, satisfied and dissatisfied from
plied by the percentage of the recommended changes (% change the DEA results. The classes of users with common characteristics
column) to generate the data-driven expected magnitudes. The are generated by the CART visualization tree. The DEA analysis pro-
most important recommendations are: to reduce first the tangible vides relative DEA satisfaction scores, but they do not link an indi-
cost factor; and second to increase the tangible benefit factor; and vidual satisfaction score to the characteristics of users. The charac-
third to increase both the service and technology opportunity fac- teristics can be identified from the set of predictors or categorical
tors. However ignoring the order of magnitudes and using the per- variables such as gender (male/female), or ordinal variables such
centage differences would favor reducing the risk factor first, fol- education, technology experiences, frequency of use in Table 6. It
lowed by reducing the intangible cost factor second, and increasing is of prime importance to identify the characteristics of each social
the intangible benefit and service opportunity third; i.e., giving group in order to understand the underlying reasons behind the
more priorities to improving the input-efficiency performance, low usage. Such understanding would help in developing policy

recommendations to reduce the digital divide among social classes. Therefore, CART is implemented using the DEA scores and individual characteristics to determine the common characteristics of the different satisfaction classes.

Table 9
CART classes based on the satisfaction of users and their characteristics. (For each of the 13 classes, the table reports the class number, the number of users, the terminal leaf, the percentages and counts of dissatisfied (%D, red), satisfied (%S, blue) and highly satisfied (%HS, green) users, sub-total (C-Total) rows, and the users' characteristics over the set of predictors: frequency, age, annual income, education, ease of use, gender. Notes: C: class; HS: highly satisfied; S: satisfied; D: dissatisfied; L: low; M: medium; H: high (and O: old, in the age column); bold highlights the dominant group; (x): number of users in the class. Individual cell values are not reproduced here.)

To implement CART, the original DEA satisfaction scores are divided into three different groups. The DEA scores are numeric values between zero and 1, and a good classification requires only a finite set of ordinal values; therefore, the DEA scores are transformed into three ordinal values, with one value given to each group. If they were not transformed, CART would produce a complex classification tree with a large number of classes, leading to a large and unrealistic number of policy recommendations. Three classification classes were therefore created based on the overall average of the DEA scores and their standard deviation (μ, σ). The first group of "Satisfied, S" users includes users with DEA satisfaction scores in the interval (μ ± σ); it has 1084 satisfied users (34.10%). The second group of "Dissatisfied, D" users includes users whose DEA scores are below the (μ − σ) value; it has 1072 dissatisfied users (33.75%). The third group of "Highly satisfied, H" users includes users whose DEA scores are above the (μ + σ) value; it has 1022 users (32.15%). The users in these groups are assigned the ordinal values 1, 2 and 3, respectively. Looking at the distribution of the DEA scores, it is interesting to notice that the underlying distribution is not normal despite the large sample of users; this is expected from the non-parametric DEA results, but from a practical point of view a successful e-service would be expected to show a normal distribution of satisfaction values. In fact, the distribution over the different satisfaction groups is almost uniform, and this might explain the reason for the low take-up rate.
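This discretization step can be sketched in Python as follows (a minimal illustration; the coding of 1, 2 and 3 follows the grouping described above, and the function name is not from the paper's software):

    import numpy as np

    def to_satisfaction_groups(dea_scores):
        """Map individual DEA scores to the three ordinal CART targets:
        1 = Satisfied (within mu +/- sigma), 2 = Dissatisfied (below mu - sigma),
        3 = Highly satisfied (above mu + sigma)."""
        s = np.asarray(dea_scores, dtype=float)
        mu, sigma = s.mean(), s.std()
        groups = np.full(s.shape, 1)      # Satisfied by default
        groups[s < mu - sigma] = 2        # Dissatisfied
        groups[s > mu + sigma] = 3        # Highly satisfied
        return groups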
To predict the characteristics of the 3178 responding users, the set of independent variables (predictors) in Table 6 takes categorical (male/female) or ordinal (low, medium, high) values, whereas the dependent variable (target) takes the values 1, 2 or 3 from the individually transformed DEA scores. The CART analysis is conducted using the following settings: the Gini criterion for node splitting and a 10-fold cross-validation for the learning and testing samples. In the 10-fold cross-validation, the data on the 3178 users are divided into approximately 10 equal subsets generated by a random stratified sampling process over the set of predictors, and the tree-growing process is repeated 10 times from scratch. In each repetition, nine subsets of the data are used as the learning sample and one subset is used as the testing sample. The predictive accuracy of the results was verified using two standard quality measures: the Receiver Operating Characteristic (ROC) function and the Area Under the Curve (AUC). The ROC illustrates the trade-off between the sensitivity and specificity values that could be achieved by the classification tree when varying the thresholds on the dependent variable, while the AUC measures the overall discrimination ability among the generated classes. The ROC and AUC values were equal to 0.86 and 0.93, respectively, indicating a high predictive accuracy of the constructed CART tree.
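The paper's results were produced with the Salford Systems CART software; an open-source approximation of the same settings (Gini splitting, 10-fold cross-validation, multi-class AUC) can be sketched with scikit-learn, where the stopping parameter min_samples_leaf is an assumed value and not taken from the study:

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    def fit_cart(X, y, random_state=0):
        """X: encoded user characteristics (frequency, age, income, education,
        ease of use, gender); y: satisfaction group (1, 2, 3)."""
        tree = DecisionTreeClassifier(criterion="gini", min_samples_leaf=50,
                                      random_state=random_state)
        # 10-fold CV; scikit-learn stratifies the folds for classifiers
        proba = cross_val_predict(tree, X, y, cv=10, method="predict_proba")
        auc = roc_auc_score(y, proba, multi_class="ovr")
        fitted = tree.fit(X, y)
        # fitted.feature_importances_ gives a relative predictor ranking,
        # roughly analogous to the importances reported in Fig. 5
        return fitted, auc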


28

36

20
24

35
27
12
13
19
18

11

7
9

tion. First, CART evaluates the splitting capability of each predic-


tor; the most important one (frequency of use) is then assigned
the highest score of 100%; the remaining predictors are relatively
Number of users

ranked for it, and the least predictor is found to be gender (9%) as
shown in Fig. 5. Second, CART produces a detailed output, or se-
quences of predictors from the top of the tree to various thirteen
1030

1429
436

823
369

926
340

235
199
147

terminal nodes (leafs). Such nodes are listed from left to right as
59

92
89
59

72
51

follows: 7, 27, 18, 9, 19, 20, 11, 12, 13, 35, 36 and 24 as shown
CART classes

in the constructed CART of Appendix 2. Each terminal node repre-


sents one class which contains one dominant sub-group of users
C-Total

C-Total

C-Total
Table 9

Class

belonging to one of three original satisfaction groups. The domi-


12
13
10
11
1
2
3

4
5
6
7
8

nating satisfaction sub-groups in each class are marked in bold in


Table 9. Table 9 provides a summary of rich information on the
classification characteristics of each class. The following points can

Fig. 5. Importance of the predictors (categorical input variables): frequency of use 100%; age 44.3%; annual income 32.9%; education 26.1%; ease of use 25.6%; gender 9.9%.

be observed and explained. Each satisfaction group has a unique color assigned to it automatically by the CART software (Appendix 2): the "Highly Satisfied, H" group is colored green, "Satisfied, S" is colored blue, and "Dissatisfied, D" is colored red. The thirteen identified classes (1–13), with their satisfaction characteristics, are displayed in column 1 and listed in decreasing order of size (the number of users) in column 2.

First, the set of classes 1–3 explains the characteristics of the majority of the satisfied group of users. For instance, class 1 (node 19 in the tree) is the largest of classes 1–3, with a majority of satisfied users; it contains 1030 users: 430 (41.7%) are satisfied (S), 307 (29.8%) are dissatisfied, and 293 (28.4%) are highly satisfied. Further, the users of class 1 are characterized as low (L) to medium (M) frequency users and young (L) to middle-aged users, and the class is dominated by male users. Adding the characteristics of classes 2 and 3 gives the overall characteristics of a total of 1429 users, of whom 604 (42.2% of the three classes, or 19% of the total population of 3178 users) are satisfied; the majority of them are weekly to monthly average users (3M), the majority are male, aged between 45 and 54 years (3M) and over (2O), of middle to high income (MH), with lower than high school (2L) and secondary (2M) education.

Second, the next set of five classes (4–8) determines the characteristics of the highly satisfied users; the set contains 823 users, of whom 410 (49.81% of the set, or 12.9% of the total population of 3178 users) are highly satisfied. The largest class, 4, contains 436 users (53%) who are classified as daily (H) and few-times-a-year (L) frequency users, less than 54 years old (L–M) and of low income (L). The general characteristics of classes 4–8 are: a majority of low (3L) and high (3H) frequency users; a majority of less than 54-year-olds (4L–4M); high to middle income (2M and 2H) with lower than high school education (L); and they include one class with a majority of female users. It is interesting to note that the last two classes (7 and 8) demonstrate the fulfillment of a social e-inclusion initiative across social classes: young (4L), middle (4M) and old (O) aged users who are average frequency users, female, of low education, but with medium to high ease of use.

Last, the set of five classes (9–13) determines the characteristics of the dissatisfied users. The characteristics of the largest dissatisfied class (9) are: monthly users, above 54 years old and highly educated. The overall characteristics of these five classes include: daily (2H) to monthly (4M) users; low (L) and highly (2H) educated; of all ages (2L, 4M and 3O); low income (L); and low to medium ease of use. In general, these users are either highly educated, mature and with medium ease of use, or of low education, low income and low ease of use. The first group, of high education, expects more in terms of output-effectiveness, whereas the second group, of low education, expects more improvements in terms of input-efficiency.

It is interesting to note that the CART results have highlighted a number of characteristics for each class of users; such characteristics would not have been possible to obtain from the DEA scores alone, without the integration with CART. Finally, it is worth noting that the set of e-services has attracted all social classes (all ages, young to old, low to high income, men and women). Hence, the Turkish e-services are fulfilling one of the EU i2010 initiatives on inclusive e-government services by means of the Internet, but at a low percentage of 9% of the population. Further, the above analysis provides empirical evidence indicating that the current Turkish users have additional characteristics beyond those of US adopters; in the US, users were classified as being only young, better educated and high-income citizens, Morgeson, Van Amburg, and Mithas (2011).

4.4. Management process and practical impacts of policy recommendations

One of the main objectives of policy makers in Turkey was to meet the EU initiative for closing the digital divide through the "improved" provision of electronic government services. The policy makers and senior managers at Turksat played a very instrumental role in assuring the success of our analytics project and the implementation of its evidence-based recommendations. First, they approved the initiation of the project by identifying its associated challenge and desired goals, facilitated the organization of focus-group meetings, and provided the technical staff support to mount the online survey on the government portal of e-services. Hence, they provided us with an excellent opportunity to identify real-time measures from the users' perspective and to collect real-time, quality data on the human-machine online interactions with e-services, thus avoiding the data quality issues found in traditional surveys. Second, the senior management also provided the executive support needed to implement the analytics policy recommendations in order to achieve the desired goals and create sustainable shared values. Further support for the recommendations was found in the responses to the open-ended survey question "please mention other challenges and desires to improve the provision of e-services", such as: "the cost of registration of 10 Turkish Liras (TL) to the e-government portal and the need to pay the same cost again for replacing a lost password are high"; "add more public electronic services"; "I would like to create my personalized page on my own"; and "I wish to have the option to add the links I want to my personal page".

Combining the various analytics recommendations from the DEA input-efficiency and output-effectiveness indices with the open-ended statements provided additional insights for managerial actions. The DEA and CART analyses showed that more than one third of the users are dissatisfied, including users with distinct characteristics from all male and female groups: low income; low and high education; low and high ease of use; medium to high frequency of use. To meet their concerns, the senior management at Turksat, after communicating the recommendations to the top policy makers at the Ministry of Transport in charge of providing Internet services, decided to take the following set of corrective actions: (i) increased the Internet speed to reduce the time of the human-machine interactions, improving both the cost and benefit factors; (ii) reduced the registration cost from 10 TL to 4 TL, with more free options (email and SMS) to retrieve lost passwords, hence reducing the cost factor and contributing more to the benefit

and opportunity factors; (iii) used a new architecture design to 5. Conclusions, limitation and future research
revamp all e-services in order to provide 24/7 availabilities, elim-
inated internal machine to machine broken links to improve in- The authors have introduced a cognitive analysis management
teroperability among e-services; (iv) obtained ISO 9241-151 certifi- (CAM) framework to evaluate the performance of e-government
cates for the usability and accessibility of e-services to improve services to achieve the government goal of closing the digital di-
opportunity factors; (v) added new eservices to provide more op- vide through increasing citizens’ take-up rate. The CAM processes
portunities to users; (vi) moved the informational group (9001) to were carefully designed to model the human–machine online in-
authenticated group G2 and introduced a small registration cost teractions with e-services. It advances studies on the identification
to the e-government portal in order to provide better opportu- of characteristics and measurable variables from users’ perspective
nity in information quality; (vii) offered users more personalized through an empirical survey. The cognitive process employed a de-
e-services via social media (Blog, Twitter and Facebook accounts) signed online survey to capture data from users while interactions
using smart mobile phones with proactive reminders on the com- with an e-service to ensure highest level of data quality, triangu-
ing due date of payments, receipts and tracking options of submit- lation and veracity. Much care was also employed to avoid data
ted requests. The implementation of these policies led to a reduc- errors and bias found in distributed surveys. Advanced statistical
tion in the cost-risk and improvement in the benefit-opportunity tools were used to validate measures and established relationships
factors; consequently fulfilling the main goal of closing digital di- to the organizational goal. The analytics process used DEA frontiers
vide through the provision of more improved e-services to increase analytics to measure users’ satisfaction as well as generating input-
in the take-up rate with significant practical impacts. efficiency and out-effectiveness indices for each e-service. It es-
Alston (2003) reported that the benefits, outcomes, impacts and tablished benchmarking analysis that cannot be obtained through
shared values of e-government services to people, agencies and so- other methods. The analytics process used the classification and
ciety have been scarce and they are often underestimated by agen- regression trees for visualization and identification of characteris-
cies. It was noted that not all the benefits of e-government are to tics of satisfied, dissatisfied and highly satisfied users. These char-
users, but to agencies and society as well. The study indicated that acteristics cannot be found from the DEA results alone. The man-
45% of e-government users stated that they saved money by using agement analytics process has facilitated the synchronization and
e-government. An average savings of $14.62 per single transaction coordination among stakeholders to generate a sustainable shared
was estimated across all users; while businesses and intermedi- value impact. Our CAM framework contributes to the OR rigor in
aries had higher estimates of cost savings and benefits than cit- modeling and solving complex problems while expanding the OR
izens, for instance 11% of respondents reported savings less than relevance to the excellence in practice for addressing a new chal-
$10; 11% reported savings between $10 and $24; and 8% reported savings between $25 and $50 per single interaction. The total benefits to users were estimated to be at least $1.1 billion for a population of 19.5 million citizens using a total of 169 e-government programs in 2002. The government's aggregate financial benefit/cost ratio across e-government programs was estimated at 92.5%, with an estimated average annual saving of $450 million.

Due to the lack of similar benefit estimates for Turkey, the above values are used to provide rough, approximate figures for the Turkish benefits from the provision of the new, improved e-government services. Looking at the statistics available at the government portal (https://www.turkiye.gov.tr) on January 21st, 2016 (where our online survey remains in continuous use at https://www.turkiye.gov.tr/anket-eu-fp7-cees), the reader can also find the following information: a total of 26,094,739 registered users out of a population of roughly 76.7 million citizens, and the development of 1394 e-services at 211 public institutions. These data indicate that 34.03% of the population uses e-services, a significant improvement over the initial take-up of 9% in Turkey (compared with 28% in the EU27) before the start of the government transformation project in 2009. Given that the ratio of the number of registered users (26,094,739) to the whole Australian population (19,413,000, including users and non-users in 2002) has a value of 1.34, the benefits to users in Turkey can then be estimated at $1.573 billion in 2016. The benefits to government can be estimated similarly; they come mostly from improved business processes and service delivery, increased multi-agency cooperation, reduced service costs (advertising, printed materials, staff costs, etc.) as well as increased revenue. Finally, the wider economic and environmental benefits to society from engagement in the digital economy include reduced complexity when dealing with government; significantly easier access to information for all stakeholders; more transparent government and less corruption; better information for decision making; increased community skills and knowledge; more new businesses and job opportunities; more efficient supply chain management; and better opportunities to initiate partnerships between government and the private sector to jointly deliver better e-services, among others.
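A minimal sketch of this population-ratio extrapolation, assuming only the figures quoted above, is given below. The constant names and the helper function are illustrative choices rather than part of the original study, and the simple ratio reproduces the order of magnitude of the $1.573 billion estimate rather than the exact reported figure.

```python
# Illustrative back-of-the-envelope extrapolation of the Australian e-government
# benefit study (Alston, 2003) to the Turkish context, as discussed in the text.
# The constant names are assumptions made for this sketch, not the paper's code.

AU_USER_BENEFIT_2002 = 1.1e9           # estimated annual benefit to Australian users (USD)
AU_POPULATION_2002 = 19_413_000        # Australian population, users and non-users, 2002
TR_REGISTERED_USERS_2016 = 26_094_739  # registered users of turkiye.gov.tr, January 2016


def scale_benefit(base_benefit: float, base_population: int, target_users: int) -> float:
    """Scale a benefit estimate by the ratio of target users to the base population."""
    return (target_users / base_population) * base_benefit


if __name__ == "__main__":
    ratio = TR_REGISTERED_USERS_2016 / AU_POPULATION_2002
    estimate = scale_benefit(AU_USER_BENEFIT_2002, AU_POPULATION_2002, TR_REGISTERED_USERS_2016)
    # The ratio is approximately 1.34 and the scaled estimate is on the order of
    # $1.5 billion, consistent in magnitude with the $1.573 billion reported in the text.
    print(f"user ratio: {ratio:.2f}")
    print(f"scaled annual benefit to Turkish users: ${estimate / 1e9:.2f} billion")
```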

The main contributions of the study address the evaluation challenge in a non-traditional domain of e-government, as follows:

1- Identifying a set of new holistic performance measures from the perspective of actual users. The authors believe that the presented research contributes to the normative OR literature through the identification of primary measures of the performance of e-services. This is achieved using online data captured immediately after interactions; good-quality data can therefore be assured for a proper implementation of cost–benefit and risk–opportunity analysis.
2- Defining the human–machine online interactions as a set of decision making units (DMUs), so that the performance of the e-service (machine) and the satisfaction of its users (human) can be modeled and framed in Data Envelopment Analysis terms, is an innovative contribution of OR modeling in government. The study has contributed to the generation of input-efficiency and output-effectiveness indices to support benchmarking analysis of e-services as well as the measurement of users' satisfaction levels; a minimal illustrative sketch of this type of analysis follows this list.
3- Developing a two-dimensional visualization plot based on the input-efficiency and output-effectiveness indices of e-services has facilitated the communication of analytics insights to non-technical managers. For instance, the set of juridical e-services was designed using the same standards, yet the DEA benchmarking analysis placed the associated four e-services in three different quadrants. Such benchmarking analysis and analytics insights were not expected from the providers' perspective.
4- Showing empirically that the DEA bootstrapping approach may not necessarily be needed for a large sample size.
5- Combining the DEA and CART methodologies has highlighted useful information on users' characteristics. It helped to convince policy makers to execute the CAM recommendations by revamping all existing e-services and obtaining ISO quality certification.
6- Engaging senior managers at the various stages of the CAM analytics process played a significant role in assuring the success of the project. Although this engagement was not part of a structured Delphi process, it nevertheless contributed to the evaluation and fitness of the CAM framework.
7- Measuring the outcome and impact of our CAM implementation in terms of the number of users (adopters), the number of new e-services, and the estimated financial savings before and after the implementation of the recommendations: the percentage of users increased from 9% to 34%; the number of newly designed e-services increased from a few tens to over one thousand; and the financial savings were roughly estimated at over one and a half billion dollars annually to users, in addition to further savings to government agencies and society at large. The estimated savings would exceed fifteen billion dollars by 2026.
8- Recommending the CAM framework for the establishment of national benchmarking indices to continuously assess the development of e-services and the impact of new policies over time.
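To make the DEA terminology in points 2 and 3 of the list above concrete, the sketch below computes standard input-oriented CCR efficiency scores (Charnes, Cooper, & Rhodes, 1978) for a small set of hypothetical e-services treated as DMUs and then assigns each service to a quadrant of an efficiency/effectiveness plot. It is an illustrative sketch only: the toy data, the constant-returns-to-scale model, the use of a rescaled mean satisfaction score as the effectiveness axis, and the 0.5 quadrant thresholds are assumptions of this example, not the paper's formulation, which is built on ordinal Likert-scale survey responses.

```python
# A minimal DEA sketch, assuming toy data: it computes input-oriented CCR
# efficiency scores for e-services treated as DMUs, then places each service in
# one of four quadrants of an efficiency/effectiveness plot. The data, the
# variable choices and the 0.5 thresholds are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog


def ccr_input_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Input-oriented CCR envelopment model.

    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Returns one theta per DMU.
    """
    m, n = X.shape
    s, _ = Y.shape
    scores = np.empty(n)
    for o in range(n):
        # decision variables: z = [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]          # minimize theta
        # input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.c_[-X[:, [o]], X]
        b_in = np.zeros(m)
        # output constraints: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.c_[np.zeros((s, 1)), -Y]
        b_out = -Y[:, o]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1),
                      method="highs")
        scores[o] = res.x[0]
    return scores


# Toy example: 6 e-services, inputs = (time spent, number of visits),
# output = (transactions completed); effectiveness = mean satisfaction rescaled to [0, 1].
X = np.array([[12.0, 8.0, 15.0, 6.0, 9.0, 11.0],
              [3.0, 2.0, 4.0, 1.0, 2.0, 3.0]])
Y = np.array([[40.0, 35.0, 30.0, 28.0, 33.0, 25.0]])
satisfaction = np.array([3.1, 4.2, 2.5, 4.6, 3.8, 2.9]) / 5.0

efficiency = ccr_input_efficiency(X, Y)
for name, eff, sat in zip("ABCDEF", efficiency, satisfaction):
    quadrant = ("high" if eff >= 0.5 else "low") + " efficiency / " + \
               ("high" if sat >= 0.5 else "low") + " effectiveness"
    print(f"e-service {name}: efficiency={eff:.2f}, effectiveness={sat:.2f} -> {quadrant}")
```

Replacing the toy matrices with real per-service survey aggregates, or the CCR model with a variable-returns-to-scale (BCC) variant by adding the convexity constraint on the lambdas, would be straightforward within the same structure.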
Like all studies, this one has limitations that invite further research. The proposed framework was tested and validated using Likert-scale data and rough estimates of the benefits and costs; the CAM recommendations were implemented in the underlying spirit of those estimates, and obtaining real data would contribute to measuring the real impact of the CAM implementation. Further studies on the evaluation of services should be conducted with dedicated measures from the perspective of providers and other stakeholders in order to develop a 360-degree evaluation (Osman, Anouze, Hindi, Irani, Lee, & Weerakkody, 2014b). Such studies would help to measure satisfaction and performance from all perspectives and to capture the different expectations among stakeholders in different countries and businesses. Last, research on blockchain technology is needed for more efficient and secure end-to-end processing capabilities, and for preventing fraud and increasing transparency and trust among stakeholders in the e-government ecosystem (Buchanan & Naqvi, 2018).
Cook, W., & Zhu, J. (2006). Rank order data in DEA: A general framework. European
Our CAM framework is recommended to support the Informa- Journal of Operational Research, 174, 1021–1038.
tion Society Strategy of the Ministry of Development in Turkey ini- Chuang, C-L., Chang, P-C., & Lin, R-H. (2011). An Efficiency Data Envelopment Anal-
tiative as well as other in countries to develop government per- ysis Model Reinforced by Classification and Regression Tree for Hospital Perfor-
mance Evaluation. Journal of Medical of Medical System, 35, 1075–1083.
formance indices, generate data-driven policies to close the digi-
Davenport, T. H. (2006). Competing on analytics. Harvard Business Review, 84, 98.
tal divide and implement successfully other government transfor- Davenport, T. H., & Patil, D. J. (2012). Data Scientist. Harvard Business Review, 90,
mation applications. Extension of the set of predictors to include 70–76.
Dawes, S. S. (2009). Governance in the digital age: a research and action
users’ physical conditions and GPS location on residential informa-
framework for an uncertain future. Government Information Quarterly, 26(2),
tion would be advisable to provide further services to the need- 257–264.
iest users. CAM is currently being used to assess e-services from De Witte, K., & Geys, B. (2013). Citizen coproduction and efficient public good pro-
users and providers’ perspectives in UK, Qatar and Lebanon in or- vision: theory and evidence from local public libraries. European Journal of Op-
erational Research, 224, 592–602.
der to modernize government services, increase e-participation, re- De Nicola, A., Gito, S., & Mancuso, p (2012). Uncover the predictive structure of
duce corruptions, and increase transparency to achieve a sustain- healthcare efficiency applying a bootstrapped data envelopment analysis. Expert
able growth of shared values for a smarter world. Systems with Applications, 39, 10495–10499.
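As a companion to the DEA sketch above, the following fragment illustrates the DEA-plus-CART idea summarized in point 5 of the contributions list: a shallow classification tree is grown on user characteristics to explain which users fall into a "satisfied" group of the kind identified in the DEA stage. The synthetic data, the feature names and the satisfaction labels are assumptions of this illustration; the paper's actual predictors come from its survey instrument, and extensions such as the residential location information suggested above would simply enter as additional columns.

```python
# Minimal sketch of the second (CART) stage, assuming synthetic survey-style data:
# a classification tree relates user characteristics to a binary "satisfied" label
# that, in the paper, would come from the DEA satisfaction/efficiency analysis.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n_users = 500

# Hypothetical user characteristics (not the paper's survey variables):
# age in years, education level (0-3), daily internet use in hours.
age = rng.integers(18, 75, n_users)
education = rng.integers(0, 4, n_users)
internet_hours = rng.uniform(0, 8, n_users)
X = np.column_stack([age, education, internet_hours])

# Synthetic satisfaction label standing in for the DEA-derived outcome:
# younger, more educated, more connected users are more often satisfied here.
p = 1 / (1 + np.exp(-(-0.03 * (age - 40) + 0.5 * education + 0.3 * internet_hours - 1.5)))
satisfied = rng.random(n_users) < p

# CART with a shallow depth so the resulting rules stay readable for policy makers.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25, random_state=0)
tree.fit(X, satisfied)
print(export_text(tree, feature_names=["age", "education", "internet_hours"]))
```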
Acknowledgments

The authors would like to acknowledge the funding support for the CEES project (CEES: Citizen Oriented Evaluation of E-government Services: a reference model) funded by the "FP7-PEOPLE-IAAP-2008 – Marie Curie Action: Industry-Academia Partnerships and Pathways" programme under grant agreement 230658. We would also like to thank the senior management of Türksat, namely President Ensar Gül and VP Halil Yeşilçimen, for their advocacy and commitment to supporting the implementation of the various recommendations; without them, the project would not have achieved its e-government goals. The Husni Sawwaf Chair at the American University of Beirut also supported the project. Special thanks go to the EURO Excellence in Practice Award committee for selecting this work to be among the finalists at the EURO 2015 conference, Glasgow, UK. Finally, sincere appreciation goes to the guest editors and the anonymous referees for their critical comments and support, which immensely improved the clarity of the presentation of the paper.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.ejor.2019.02.018.

References

Afolabi, I., & Adegoke, F. (2014). Analysis of customer satisfaction for competitive advantage using clustering and association rules. International Journal of Computer Science and Engineering, 3, 141–150.
Al-Hujran, O., Al-Debei, M. M., Chatfield, A., & Migdadi, M. (2015). The imperative of influencing citizen attitude toward e-government adoption and use. Computers in Human Behavior, 53, 189–203.
Alston, R. (2003). An overview of the e-government benefits study. Australia: Minister for Communications, Information Technology and the Arts. Retrieved January 1st, 2016, from http://www.finance.gov.au/agimo-archive/__data/assets/file/0012/16032/benefits.pdf.
Bandura, A. (1986). Social foundations of thought and action: A social cognition theory. Englewood Cliffs, NJ: Prentice-Hall.
Banker, R., Charnes, A., & Cooper, W. (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science, 30, 1078–1092.
Bertot, J. C., Jaeger, P. T., & Grimes, J. M. (2010). Using ICTs to create a culture of transparency: E-government and social media as openness and anti-corruption tools for societies. Government Information Quarterly, 27, 264–271.
Biener, C., Eling, M., & Wirfs, J. H. (2016). The determinants of efficiency and productivity in the Swiss insurance industry. European Journal of Operational Research, 248, 703–714.
Blau, P. (1964). Exchange and power in social life. New York: John Wiley & Sons.
Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and regression trees. Pacific Grove: Wadsworth.
Buchanan, B., & Naqvi, N. (2018). Building the future of EU: Moving forward with international collaboration on blockchain. The Journal of British Blockchain Association, 1, 1–4.
Charnes, A., Cooper, W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429–444.
Chen, Y.-C. (2010). Citizen-centric e-government services: Understanding integrated citizen service information systems. Social Science Computer Review, 28, 427–442.
Chircu, A. M. (2008). E-government evaluation: Towards a multidimensional framework. Electronic Government, an International Journal, 5, 345–363.
Cook, W., & Zhu, J. (2006). Rank order data in DEA: A general framework. European Journal of Operational Research, 174, 1021–1038.
Chuang, C.-L., Chang, P.-C., & Lin, R.-H. (2011). An efficiency data envelopment analysis model reinforced by classification and regression tree for hospital performance evaluation. Journal of Medical Systems, 35, 1075–1083.
Davenport, T. H. (2006). Competing on analytics. Harvard Business Review, 84, 98.
Davenport, T. H., & Patil, D. J. (2012). Data scientist. Harvard Business Review, 90, 70–76.
Dawes, S. S. (2009). Governance in the digital age: A research and action framework for an uncertain future. Government Information Quarterly, 26(2), 257–264.
De Witte, K., & Geys, B. (2013). Citizen coproduction and efficient public good provision: Theory and evidence from local public libraries. European Journal of Operational Research, 224, 592–602.
De Nicola, A., Gitto, S., & Mancuso, P. (2012). Uncover the predictive structure of healthcare efficiency applying a bootstrapped data envelopment analysis. Expert Systems with Applications, 39, 10495–10499.
DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3, 60–95.
Donaki, R. (2014). Cognitive analytics – the three-minute guide. Deloitte. https://public.deloitte.com/media/analytics/pdfs/us_da_3min_guide_cognitive_analytics.pdf
Dyson, R. G. (2000). Strategy, performance and operational research. Journal of the Operational Research Society, 51, 5–11.
Dyson, R. G., Allen, R., Camanho, A. S., Podinovski, V. V., Sarrico, C. S., & Shale, E. A. (2001). Pitfalls and protocols in DEA. European Journal of Operational Research, 132, 245–259.
Emrouznejad, A., & Anouze, A. (2010). Data envelopment analysis with classification and regression tree: A case of banking efficiency. Expert Systems, 27, 231–246.
Emrouznejad, A., & Thanassoulis, E. (2014). Introduction to performance improvement management software (PIM-DEA). In Handbook of research on strategic performance management and measurement using data envelopment analysis. Hershey, PA: IGI Global.
Emrouznejad, A., Banker, R., Lopes, A. L. M., & de Almeida, M. R. (2014). Data envelopment analysis in the public sector. Socio-Economic Planning Sciences, 48, 2–3.
Esmaeili, A., & Horri, M. (2014). Efficiency evaluation of customer satisfaction index in e-banking using the fuzzy data envelopment analysis. Management Science Letters, 4, 71–86.
EU-Archive (2010). Archive: E-government statistics. http://ec.europa.eu/eurostat/statistics-explained/index.php/Archive:E-government_statistics.
Gupta, S., Kumar, A. K., Baabdullah, A., & Al-Khowaiter, W. A. A. (2018). Big data with cognitive computing: A review for the future. International Journal of Information Management, 42, 78–89.

Grigoroudis, E., Litos, C., Moustakis, V. A., Politis, Y., & Tsironis, L. (2008). The assessment of user-perceived web quality: Application of a satisfaction benchmarking approach. European Journal of Operational Research, 187, 1346–1357.
Gordon, L. (2013). Using classification and regression trees (CART) in SAS Enterprise Miner for applications in public health. Lexington, KY: University of Kentucky.
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning: Data mining, inference and prediction (2nd ed.). Springer-Verlag.
Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human–computer interaction research. ACM Transactions on Computer–Human Interaction, 7, 174–196.
Horta, I., & Camanho, A. (2014). Competitive positioning and performance assessment in the construction industry. Expert Systems with Applications, 41, 974–983.
Irani, Z., Weerakkody, V., Kamal, M., Hindi, N. M., Osman, I. H., Anouze, A. L., et al. (2012). An analysis of methodologies utilised in e-government research: A user satisfaction perspective. Journal of Enterprise Information Management, 25, 298–313.
Jackson, E. S., Joshi, A., & Erhardt, N. L. (2003). Recent research on team and organizational diversity: SWOT analysis and implications. Journal of Management, 29, 801–830.
Kim, T., Im, S., & Park, S. (2005). Intelligent measuring and improving model for customer satisfaction level in e-government. In Proceedings of the electronic government: Fourth international conference, EGOV 2005, August 22–26, 2005.
Kipens, L., & Askounis, D. (2016). Assessing e-participation via user's satisfaction measurement: The case of OurSpace platform. Annals of Operations Research, 247, 599–615.
Kulic, D., & Croft, E. A. (2007). Affective state estimation for human–robot interaction. IEEE Transactions on Robotics, 23(5), 991–1000.
Liébana-Cabanillas, F., Nogueras, R., Herrera, L., & Guillén, A. (2013). Analyzing user trust in electronic banking using data mining methods. Expert Systems with Applications, 40, 5439–5447.
Lee, H., Irani, Z., Osman, I. H., Balci, A., Ozkan, S., & Medeni, T. (2008). Research note: Toward a reference process model for citizen oriented evaluation of e-government services. Transforming Government: People, Process and Policy, 2, 297–310.
Lee, H., & Kim, C. (2014). Benchmarking of service quality with data envelopment analysis. Expert Systems with Applications, 41, 3761–3768.
Leidner, D. E., & Kayworth, T. (2006). A review of culture in information systems research: Toward a theory of information technology culture conflict. MIS Quarterly, 30(2), 357–399.
Li, Z., Crook, J., & Andreeva, G. (2014). Chinese companies distress prediction: An application of data envelopment analysis. Journal of the Operational Research Society, 65, 466–479.
Loh, W.-Y. (2014). Fifty years of classification and regression trees. International Statistical Review, 82, 329–348.
Luna-Reyes, L. F., & Gil-Garcia, J. R. (2014). Digital government transformation and internet portals: The co-evolution of technology, organizations and institutions. Government Information Quarterly, 31, 545–555.
Magoutas, B., & Mentzas, G. (2010). SALT: A semantic adaptive framework for monitoring citizen satisfaction from e-government services. Expert Systems with Applications, 37, 4292–4300.
Mechling, J. (2002). Building a methodology for measuring the value of e-government services (pp. 1–52). Washington, DC: Booz Allen Hamilton.
Millard, J. (2008). eGovernment measurement for policy makers. European Journal of ePractice, 4, 19–32.
Morgeson, F., Van Amburg, D., & Mithas, S. (2011). Misplaced trust? Exploring the structure of the e-government–citizen trust relationship. Journal of Public Administration Research and Theory, 21, 257–283.
Mortenson, M. J., Doherty, N. F., & Robinson, S. (2015). Operational research from Taylorism to terabytes: A research agenda. European Journal of Operational Research, 241, 583–595.
Norris, D. F., & Lloyd, B. A. (2006). The scholarly literature on e-government: Characterizing a nascent field. International Journal of Electronic Government Research, 2, 40–56.
Oliver, R. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17, 460–469.
Oña, R., Eboli, L., & Mazzulla, G. (2014). Key factors affecting rail service quality in the Northern Italy: A decision tree approach. Transport, 29, 75–83.
Osman, I. H., Anouze, A. L., Irani, Z., Lee, H., Balci, A., Medeni, T., et al. (2014a). COBRA framework to evaluate e-government services: A citizen-centric perspective. Government Information Quarterly, 31, 243–256.
Osman, I. H., & Anouze, A. L. (2014a). A cognitive analytics management framework (CAM-Part 1): SAMAS components, leadership, frontier performance growth, and sustainable shared value. In I. H. Osman, A. Anouze, & A. Emrouznejad (Eds.), Handbook of research on strategic performance management and measurement using data envelopment analysis (pp. 1–79). Hershey, PA: IGI Global.
Osman, I. H., Anouze, A. L., Hindi, N. M., Irani, Z., Lee, H., & Weerakkody, V. (2014b). I-MEET framework for the evaluation of e-government services from engaging stakeholders' perspectives (pp. 1–13). European Scientific Journal, Special Edition. doi:10.19044/esj.2014.v10n10p25.
Osman, I. H., & Anouze, A. L. (2014b). A cognitive analytics management framework (CAM-Part 2): Societal needs, shared-value models, performance indicators, big data, business analytics models and tools. In I. H. Osman, A. Anouze, & A. Emrouznejad (Eds.), Handbook of research on strategic performance management and measurement using data envelopment analysis (pp. 80–189). Hershey, PA: IGI Global.
Osman, I. H., & Anouze, A. L. (2014c). A cognitive analytics management framework (CAM-Part 3): Critical skills shortage, higher education trends, education value chain framework, government strategy, tools. In I. H. Osman, A. Anouze, & A. Emrouznejad (Eds.), Handbook of research on strategic performance management and measurement using data envelopment analysis (pp. 190–234). Hershey, PA: IGI Global.
Osman, I. H., Berbary, L. N., Sidani, Y., Al-Ayoubi, B., & Emrouznejad, A. (2011). Data envelopment analysis model for the appraisal and relative performance evaluation of nurses at an intensive care unit. Journal of Medical Systems, 35, 1039–1062.
Pape, T. (2016). Prioritizing data items for business analytics: Framework and application to human resources. European Journal of Operational Research, 252, 687–698.
Parasuraman, A., Zeithaml, V. A., & Berry, L. (1998). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64, 12–40.
Park, K. S. (2010). Duality, efficiency computations and interpretations in imprecise DEA. European Journal of Operational Research, 200(1), 289–296.
Petter, S., DeLone, W., & McLean, E. (2008). Measuring information systems success: Models, dimensions, measures, and interrelationships. European Journal of Information Systems, 17, 236–263.
Porter, M. E., & Kramer, M. R. (2011). Creating shared value. Harvard Business Review, 89, 62–77.
Pritchard, R. D. (1969). Equity theory: A review and critique. Organizational Behavior and Human Performance, 4, 176–211.
Rana, N. P., Dwivedi, Y. K., Lal, B., Williams, M. D., & Clement, M. (2017). Citizens' adoption of an electronic government system: Toward a unified view. Information Systems Frontiers, 19, 549–568.
Rana, N. P., & Dwivedi, Y. K. (2015). Citizen's adoption of an e-government system: Validating extended social cognitive theory. Government Information Quarterly, 32, 172–181.
Ranyard, J. C., Fildes, R., & Hu, T.-I. (2015). Reassessing the scope of OR practice: The influence of problem structuring methods and the analytics movement. European Journal of Operational Research, 245, 1–13.
Rowley, J. (2011). E-government stakeholders—Who are they and what do they want? International Journal of Information Management, 31(1), 53–62.
Robinson, A., Levis, J., & Bennett, G. INFORMS to officially join analytics movement. OR/MS Today. https://www.informs.org/ORMS-Today/Public-Articles/October-Volume-37-Number-5/INFORMS-News-INFORMS-to-Officially-Join-Analytics-Movement.
Saunders, M., Lewis, P., & Thornhill, A. (2007). Research methods for business students (4th ed.). Harlow, England: Pearson Education.
Sharif, A., Irani, Z., & Weerakkody, V. (2010). Evaluating and modelling constructs for e-government decision making. Journal of the Operational Research Society, 61(6), 929–952.
Shen, J., & Li, W. (2014). Discrete Hopfield neural networks for evaluating service quality of public transit. International Journal of Multimedia and Ubiquitous Engineering, 9, 331–340.
Simar, L., & Wilson, P. W. (2007). Estimation and inference in two-stage, semi-parametric models of production processes. Journal of Econometrics, 136, 31–64.
TechTarget (2017). Cognitive computing. http://whatis.techtarget.com/definition/cognitive-computing.
Tikkinen-Piri, C., Rohunen, A., & Markkula, J. (2018). EU General Data Protection Regulation: Changes and implications for personal data collecting companies. Computer Law & Security Review, 34(1), 134–153.
UN-Survey (2012). Expanding usage to realize the full benefits of e-government. Available at https://publicadministration.un.org/egovkb/Portals/egovkb/Documents/un/2012-Survey/Chapter-6-Expanding-usage-to-realize-the-full-benefits-of-e-government.pdf.
Van Ryzin, G., & Immerwahr, S. (2007). Importance-performance analysis of citizen satisfaction surveys. Public Administration, 85, 215–226.
Verdegem, P., & Verleye, G. (2009). User-centered e-government in practice: A comprehensive model for measuring user satisfaction. Government Information Quarterly, 26, 487–497.
Vidgen, R., Shaw, S., & Grant, D. B. (2017). Management challenges in creating value from business analytics. European Journal of Operational Research, 261, 626–639.
Weerakkody, V., Irani, Z., Lee, H., Osman, I. H., & Hindi, N. (2015). E-government implementation: A bird's eye view of issues relating to costs, opportunities, benefits and risks. Information Systems Frontiers, 17, 889–915.
Welsh, W. (2014). Internet of things: 8 cost-cutting ideas for government. Available at http://www.informationweek.com/government/leadership/internet-of-things-8-cost-cutting-ideas-for-government/d/d-id/1113459?image_number=7.
Yildiz, M. (2007). E-government research: Reviewing the literature, limitations, and ways forward. Government Information Quarterly, 24, 646–665.
Zheng, Y., & Schachter, H. L. (2017). Explaining citizens' e-participation use: The role of perceived advantages. Public Organization Review, 17(3), 409–428.
