ENHANCING E-PARTICIPATION THROUGH A CITIZEN-CONTENT ENGAGEMENT FRAMEWORK: THE PERSPECTIVE OF NIGERIANS
N.M. Aham-Anyanwu
PhD
November 2016
Abstract
Governments around the world are increasingly investing in the publication of data and information on the internet in a bid to promote transparency and public engagement. However, studies have found that audience and citizen engagement with online contents in general, and with governments' digital data and information in particular, is poor. Studies have also shown that governments seeking to engage citizens in the State's decision-making process should first engage them with their informative online contents. The challenge, however, is that e-public engagement research has been predominantly techno-centric. Therefore, with an exploratory research design and a sequential mixed-methods approach, this study investigated the factors influencing citizens' engagement with governments' online contents based on the views of Nigerians. From the qualitative phase of the study, a citizen-content engagement (C-CE) model was developed. This model was then tested in the quantitative phase, and the findings indicate that citizens' engagement with governments' online contents (CE) is directly influenced by the quality of the contents and their ability to meet citizens' information needs (INPCQ), and by the citizens' affinity for governments' platforms (IVP). IVP is in turn influenced by trust in the government (TGA), the ability to participate actively in information creation on governments' platforms (CC), and the ability to interact and deliberate with other citizens and government officials on those platforms (IDelib). Governments' platform type and citizens' level of political awareness also played moderating roles. Governments' use of social media was found to be more important than their use of websites in the influence of TGA, CC, and IDelib on IVP. A poor level of political awareness was more important than an optimal level in the influence of IVP on CE, which indicates that the more aware citizens are of the government, the less affinity they have for its platforms. This research is important because its outcome may help governments interested in e-participation to shape their contents in ways that encourage citizen-content engagement and citizen participation.
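Stated compactly, and purely as a reader's aid (the path coefficients and error terms below are generic structural-equation notation added for illustration, not symbols taken from the thesis), the hypothesised structure can be summarised as:

\begin{align}
\mathrm{CE}  &= \beta_1\,\mathrm{INPCQ} + \beta_2\,\mathrm{IVP} + \varepsilon_1 \\
\mathrm{IVP} &= \beta_3\,\mathrm{TGA} + \beta_4\,\mathrm{CC} + \beta_5\,\mathrm{IDelib} + \varepsilon_2
\end{align}

with platform type moderating the paths into IVP, and political awareness moderating the path from IVP to CE.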
Table of Contents
3.4.2 Sampling Method
3.4.3 Development of Questions and Materials Used
3.5 Part Four: Quantitative Phase
3.5.1 Research Participants, Sampling and Sample Size
3.6 Conclusion
Chapter 4: Qualitative Analysis and Hypothesis Development
4.1 Introduction
4.2 Qualitative Analysis Method
4.3 Findings
4.3.1 Content Engagement
4.3.2 Information Need
4.3.3 Content Attributes
4.3.4 Perception about Writer
4.3.5 Affinity for Government's Online Platforms
4.3.6 Moderating Factors
4.4 Summary of Findings and Hypotheses
Chapter 5: Quantitative Data Analysis
5.1 Introduction
5.2 Part One: Scale Development and Sampling
5.2.1 Item Generation
5.2.2 Content Adequacy Assessment: Scale and Content Validity
5.2.3 Questionnaire Development
5.2.4 Sampling, Sample Size and Data Collection
5.2.5 Pilot Study
5.3 Part Two: Data Preparation and Descriptive Statistics
5.3.1 Respondents' Profile
5.3.2 Descriptive Statistics
5.3.3 Data Preparation for Structural Equation Modelling (SEM)
5.4 Part Three: Factor Analysis and Reliability Test
5.4.1 Exploratory Factor Analysis
5.4.2 Reliability Test
5.5 Part Four: Analysis of the Citizen-Content Engagement (C-CE) Model Using SEM
5.5.1 Measure of Fit for the Measurement Model
5.5.2 Unidimensionality
5.5.3 Reliability Analysis
5.5.4 Construct Validity Analysis
5.5.5 Measure of Fit for the Structural Model
5.5.6 Evaluating the Hypothesised Model
5.6 Conclusion
Chapter 6: Discussion
6.1 Introduction
6.2 Citizens' Engagement with Government's Online Platforms
6.3 Summary of Findings: Qualitative and Quantitative
6.4 Predictors of Content Engagement (CE)
6.4.1 The Effect of INPCQ on CE
6.4.2 The Effect of VAC on CE
6.4.3 The Effect of IVP on CE
6.4.4 The Effect of TGA on CE
6.5 Antecedents of Affinity for Governments' Platforms (IVP)
6.6 Platform Type as a Moderating Factor
6.7 Political Awareness as a Moderating Factor
6.8 Implications and Contributions
6.8.1 Theoretical Implications
6.8.2 Practical Implication: Proposing the Citizen-Content Engagement (C-CE) Framework
6.9 Limitations and Suggestions for Future Studies
Chapter 7: Conclusion
7.1 The Study
7.2 Study's Contributions to Knowledge
7.3 Plans for Future Work
7.4 Reflections on the Researcher's Experience: Lessons Learnt and Knowledge Acquired
Appendices
Appendix A: Interview Participants' Information Sheet
Appendix B1: Consent Form for Interview
Appendix B2: Consent Page for Content Adequacy Assessment Survey
Appendix B3: Consent Page for Survey
Appendix C: Interview Questions
Appendix D: Constructs and Items
Appendix E: Questionnaire
Appendix F: Missing
Appendix G: Scatter Plots for CE Factors (Original Data, Iterations 1 and 5)
Appendix H: Scatter Plots for IVP Factors with Outliers (Original Data, Iterations 1 and 5)
Appendix I: Scatter Plots for IVP Factors with Outliers (Original Data, Iterations 1 and 5)
Appendix J: Respondents' Data (Original Data, Iterations 1 to 5)
Appendix K: Descriptive Statistics of Likert Variables (Original Data, Iterations 1 to 5)
Appendix L: Descriptive Statistics of Dichotomous and Multi-Response Variables (Original Data, Iterations 1 to 5)
Appendix M: Communalities (Original Data, Iterations 1 to 5)
Appendix N: R², β and p (Iterations 1 to 5)
Appendix O: Factor Analysis' Pattern Matrix (Original Data, Iterations 1 to 5)
References
List of Tables
Table 3.1: Respondents' Demographic Details
Table 4.1: Phases of Thematic Analysis (Braun & Clarke, 2006, p. 35)
Table 4.2: Table of Findings and Hypotheses
Table 5.1: Items for CE
Table 5.2: Items for IN
Table 5.3: Items for VAC
Table 5.4: Items for PCQ
Table 5.5: Items for PWC
Table 5.6: Items for IVP
Table 5.7: Items for TGA
Table 5.8: Items for FA
Table 5.9: Items for CC
Table 5.10: Items for IDelib
Table 5.11: Items for HF
Table 5.12: ICVI Scores
Table 5.13: ICVI for IN
Table 5.14: SCVI/AVE Scores
Table 5.15: ICC Scores
Table 5.16: Reliability Test
Table 5.17: Question Types
Table 5.18: Second Round of Reliability Test
Table 5.19: Respondents' Profile (Pooled Iteration)
Table 5.20: Descriptive Statistics of Likert Variables
Table 5.21: Descriptive Statistics of Dichotomous and Multi-Response Variables
Table 5.22: Durbin-Watson's Statistics for CE
Table 5.23: Durbin-Watson's Statistics for IVP
Table 5.24: Initial KMO and Bartlett's Test of Sphericity
Table 5.25: Communalities
Table 5.26: Pattern Matrix
Table 5.27: Cronbach's Alpha
Table 5.28: Fit Indices for the Measurement Model
Table 5.29: Factor Loadings
Table 5.30: Reliability
Table 5.31: Average Variance Extracted
Table 5.32: Discriminant Validity with AVE
Table 5.33: Fit Indices of the Structural Model
Table 5.34: Pooled Data Analysis Result
Table 5.35: Effects of the Variables on CE
Table 5.36: Platform Moderation Effects
Table 5.37: Political Awareness Moderation Effects
Table 6.1: List of Hypotheses
List of Figures
Figure 1.1: E-participation Research Model (Sæbø et al., 2010)
Figure 3.1: Conceptual Framework
Figure 4.1: Conceptual Model of the Findings and Hypotheses (C-CE Model)
Figure 5.1: Refined Conceptual Model/Hypotheses
Figure 5.2: Data Analysis Results
Figure 5.3: Citizens' Choice of Platforms
Figure 5.4: Type of Platform Used by the Government
Figure 5.5: Platform Moderation Effects
Figure 5.6: Political Awareness Moderation Effects
Figure 5.7: Types of Information Citizens Want from the Government
Interface with the Wider Research Community
During this study, the Researcher shared and exchanged ideas with the wider research community through a workshop, three intra-University conferences and seminars; four conference papers, a journal article and an edited book proposal. Feedback from these channels was ploughed back into the study at various stages. At the early stages of this study, the Researcher made a Pecha Kucha presentation at the 2014 Northumbria Research Conference. A presentation was also made at an iSchool Research group seminar, and a research-in-progress paper was written and submitted to the 14th International Federation for Information Processing (IFIP) Electronic Government (EGOV) and 7th Electronic Participation (ePart) Conference. The feedback from these helped shape this study in terms of focus and methodology. As the study progressed, the Researcher gave a presentation at a workshop hosted by the British Academy of Management, wrote a research-in-progress paper that was accepted for presentation at the International Conference on Information Systems (ICIS) 2015, and wrote an article that was accepted for publication in the International Journal of Public Administration in the Digital Age (IJPADA). Based on the findings of this study, an edited book proposal was submitted to IGI Global and accepted for production; two conference papers were also submitted to the International Conference on Information Systems (ICIS) and the Hawaii International Conference on System Sciences (HICSS).
Published Papers
Honglei Li, Cemal Tevrizci, Nnanyelugo Aham-Anyanwu, and Robert Xin Luo. (2015). The Interplay between Value and Service Quality Experience: E-Loyalty Development Process through the EtailQ Scale and Value Perception. Electronic Commerce Research, 15(4), 585-615.
Nnanyelugo Aham-Anyanwu & Honglei Li. (2015). Toward E-Public Engagement: A Review of Public Participation for Government/Governance. In International Conference on Information Systems, Fort Worth, Texas, USA, 13-16 December 2015.
Honglei Li, Cemal Tevrizci, and Nnanyelugo Aham-Anyanwu. (2014). An Empirical Study of E-Loyalty Development Process through E-Satisfaction and E-Trust. In Pacific-Asia Conference on Information Systems, Chengdu, China, 24-28 June 2014.
Submitted Conference Papers
Nnanyelugo Aham-Anyanwu & Honglei Li. (2016) Citizens’ Engagement with Governments’
Content: A Meta-synthesis and Empirical Research. Submitted to the 2016 International
Conference on Information Systems (ICIS).
Acknowledgement
If the completion of my doctoral research and thesis is a forest, I am only a tree and, as the
popular proverb goes, a tree does not make a forest. I want to use this opportunity to
acknowledge the numerous trees that contributed to the growth of this doctoral forest.
Here I am, proud to have come to the end of a three-year long journey. I am also full of gratitude
for the various roles numerous people have played in the course of this journey. I will mention
as many as I can remember, though I am certain that there are others whose names may not
appear here.
My gratitude goes to my colleagues Huyen Ngo, Srisukkham Worawut, Faraz Khan, Phoebe
Barraclough, Suzannah Ogwu, Opeyemi Dele-Ajayi, Wafa El-Tarhouni, Elhasanin Salem,
Francesco Giglio, and fellow "occupants" of Lab F7 (2013-2017). Not only did they have input in this research; they were there through challenging times, introduced me to their cultures, and provided social avenues for relief from the rigours of research. I am also
grateful to the iSchool research group for their immediate and remote input in this research in
terms of ideas on possibilities and methodological approaches. My gratitude also goes to the Graduate School, especially Stuart Hotchkin and Allan Osborne, who provided valuable help and advice when I needed them.
I am grateful to the kind hearts that took time out to participate in this research, the individuals who reviewed and commented on outputs of my work, the YouTube channel owners I relied on as I learnt quantitative analysis, and the many other people who played one role or another as I journeyed through this doctoral research.
Finally, I give all glory and honour to God Almighty, who gave life to and nurtured me, my family, colleagues and friends into the trees that have made this lush doctoral forest.
Declaration
I declare that the work contained in this thesis has not been submitted for any other award and that it is all my own work. I also confirm that this work fully acknowledges the opinions and ideas cited from the work of others.
The ethical clearance and approval needed for this study were obtained; they were granted by the Northumbria University Ethics Committee.
I declare that the word count of this thesis is 68,950 words.
Signature:
Chapter 1: Introduction
This research was a product of chance, or what the Researcher refers to as serendipity. The
Researcher had initially intended to investigate the persuasiveness of governments’ information
on the internet. After reviewing the persuasion literature, the Researcher visited some government-owned online platforms set up either for informational purposes, e.g. the British www.blog.gov.uk and the Australian www.awm.gov.au/blog, or for web-mediated activities/transactions, e.g. the British online petition platform at http://epetitions.direct.gov.uk/petitions and the Australian government's http://www.parliament.qld.gov.au/work-of-assembly/petitions. The aim was to take a cursory look at these platforms and deduce the type of persuasion on them, if any. However, the Researcher noticed that although both types of platforms were designed to elicit responses from the public, the informational platforms appeared not to be performing as well as the transactional sites. For example, www.blog.gov.uk published its first post on 14 February 2014; amongst its first 10 posts, one had 22 comments, one had three, three had one each, and the rest had none. As of 17 February 2015, the last 10 posts had no comments at all. A similar pattern was observed on www.awm.gov.au/blog.
A preliminary review of literature indicates that governments tend to focus on publishing and
making government data and information available on the internet (Coursey & Norris, 2008;
Janssen, Charalabidis, & Zuiderwijk, 2012). For instance, the United Kingdom was praised for
releasing a “tsunami of data” on the internet (in particular data.gov.uk) in the 2012 Open
Government Meeting held in Brasilia (Rogers, 2012). However, the UK National Audit Office (2012) observed that traffic figures did not show members of the public engaging with the contents of data.gov.uk. These poor traffic figures persisted despite UK government departments spending between £53,000 and £500,000, and the Cabinet Office spending £2 million annually, to publish information and run data.gov.uk. As a result, Rogers (2012) claimed that the British government was spending exorbitant amounts of money publishing huge amounts of online data and information that no one looks at.
The Researcher therefore asked: why this apparent lack of engagement with governments' online information and contents? What could facilitate more engagement? The zeal to answer these questions became more significant than investigating governments' persuasiveness on their online platforms, and this was the birth of the research.
1.2 The Position of this Study in E-Participation Research
This study sits within the e-government research field, with a bias towards e-public engagement/e-participation. E-public engagement, also referred to as e-participation, is the use of information
and communication technology (ICT) to enhance political participation and citizen engagement
(Panopoulou, Tambouris, & Tarabanis, 2014). Previous studies have reported the benefit of e-
public engagement to governments and citizens (Chadwick, 2008; Kardan & Sadeghiani, 2011;
Näkki et al., 2011; Novak, 2005; Panagiotopoulos, Bigdeli, & Sams, 2014; Warren, Sulaiman,
& Jaafar, 2014; Zheng & Zheng, 2014). It improves governments’ transparency (Alvarez, Katz,
Llamosa, & Martinez, 2009; Astrom, Karlsson, Linde, & Pirannejad, 2012; Bonsón, Torres,
Royo, & Flores, 2012) and restores public trust in government (Parent, Vandebeek, & Gemino,
2005; Tolbert & Mossberger, 2006; Welch, Hinnant, & Moon, 2005).
According to Sæbø, Rose, and Flak (2008), there are five key focal areas in e-public engagement research (Figure 1.1): e-participation actors, e-participation activities, contextual factors, e-participation effects, and e-participation evaluation. The first category, e-participation actors, focuses on the key players in e-public engagement, who include citizens, politicians, government institutions and voluntary organisations (Medaglia, 2012). The e-participation activities category contains research focusing on technology-enabled social activities and practices (Sæbø, Rose, & Molka-Danielsen, 2010), including e-voting, online political discourse, online decision-making and e-petitioning (Medaglia, 2012). The contextual factors category includes research focusing on issues that are not part of e-participation activities but affect them by being part of the context in which they take place (Medaglia, 2012); examples include information availability and its effect on political discourse and e-participation activities, internet access, technology literacy, and other structural external environmental factors (Sæbø et al., 2010). The e-participation effects category looks at both desirable and undesirable outcomes of e-participation, which may include improved public engagement, better quality of political deliberation, improved citizen inclusion in public discourse, and the alienation of some citizens from public participation (Sæbø et al., 2010). Finally, the e-participation evaluation category contains studies that aim to measure or evaluate the effects of e-participation activities.
Figure 1.1: E-participation Research Model (Sæbø et al., 2010)
This study can be placed within the contextual factors category and specifically investigates the
factors that influence citizens’ engagement with governments’ online contents as a precursor to
optimal e-public engagement.
Furthermore, the United Nations (2014) discussed three types of e-public engagement, which
include e-decision-making, e-consultation, and e-information. E-decision-making facilitates
citizen empowerment and contribution to the design of policies, the production of service components and the delivery modes of these components. E-consultation affords governments the opportunity to involve citizens in contributing to and deliberating on states' policies and services. E-information is typically a one-way flow of information from governments to citizens, which facilitates participation by making public information available and accessible to citizens, whether unprompted or on demand. This study focuses on e-information, which is the foundation of e-public engagement (Harrison & Sayogo, 2014; Norris, 2001).
Generally, engagement on the internet, especially on social media, has been of particular importance in the field of marketing as businesses seek ways of attracting customers, improving their online experience, getting them engaged with advertisements, and making sales and thus profit (Calder, Malthouse, & Schaedel, 2009; Gummerus, Liljander, Weman, & Pihlström, 2012; Heath, 2007; Mollen & Wilson, 2010; Sashi, 2012). This interest in online engagement has also spread to the field of politics as politicians try to gain followers using social media (Baumgartner & Morris, 2009; Crawford, 2009; Gueorguieva, 2008). Individuals and firms have also become interested in knowing how well their websites and contents are engaging their customers and followers. However, gaining audience engagement online is difficult, as studies have shown a high rate of audience disengagement, especially with articles and other written contents (Haile, 2014; Manjoo, 2013; Mintz, 2014). It appears that the more the audience read, the more they tune out (Manjoo, 2013). While this phenomenon may be well known, there has yet to be an empirical study investigating what influences engagement on the internet.
Citizens need to engage with governments’ contents/information on the internet before they can
participate (give meaningful feedback to the government), which would then generate
collaboration (interaction between citizens and government). The Researcher, therefore, argues
that establishing audience-content engagement is important to the discourse that takes place
in the e-public deliberative spheres and should be the first step towards the affordances of e-
public engagement. If a government or agency is to properly execute its task of engaging the public online by communicating government policies and obtaining public opinion, it should not only publish contents/information; the published contents must be able to engage the public before the government or agency can. Therefore, there is the need to ask: "what are the factors that can
facilitate citizen-content engagement on the internet?”.
Researchers and practitioners pay significant attention to the techno-centric and top-down
aspects of e-public engagement research while neglecting the info-centric (Bonson et al., 2015;
Leston-Bandeira & Bender, 2013; Roman & Miller, 2013) and bottom-up aspects (Carter &
Bélanger, 2005; Olphert & Damodaran, 2007). For instance, there is abundant research on
governments’ efforts at using technology to improve citizens’ participation in governance
(United Nations, 2014), the type of technologies adopted for this purpose (Aichholzer &
Westholm, 2009), the factors that affect governments’ implementation of e-public engagement
initiatives (Zheng, Schachter, & Holzer, 2014), and how to adopt and use these initiatives
(Alvarez et al., 2009; Bonson et al., 2015; Carter & Belanger, 2012).
For instance, in terms of likes, comments and shares, Bonson et al. (2015) investigated citizens' engagement with the contents their local
governments posted on Facebook. Similarly, Alvarez et al. (2009) investigated citizens’
perception of the e-voting initiatives developed by their government. The Researcher argues
that citizens can play a role in the development of e-public participation initiatives instead of
just being mere users (whether passive or active). This argument agrees with Medaglia's (2012) call for a shift of e-participation research focus from governments to citizens. It is also in line with the argument of Bertot, Jaeger, and McClure (2008, p. 137) that the purpose of e-government and all its by-products, like e-democracy and e-public engagement, is to engage the citizenry in governance in a citizen-centred manner.
Although the info-centric aspect of e-public engagement has received little research attention, Arnstein (1969), in her widely cited article, argued that information is essential for genuine public participation. Information is the foundation of democracies (Harrison & Sayogo, 2014) and determines the possible implementation, use, and success of e-public engagement initiatives (Norris, 2001). At the core of active e-public engagement is the information provided by the government, or what Mergel (2013) refers to as a government's attempt at transparency. According to Zuiderwijk, Janssen, Choenni, Meijer, and Alibaks (2012), the process of e-public engagement starts with the publication of information by the government, which the citizens use and subsequently provide feedback on. This information has also been referred to as Open Government Data (OGD), which is "data produced or commissioned by government or government controlled entities" that "can be freely used, reused and redistributed by anyone" (Open Government Data, 2015; Susha, Grönlund, & Janssen, 2015). OGD not only facilitates better transparency and trust in the government (Susha et al., 2015), it also encourages participatory governance and creates a "read/write" society that follows and contributes to what the government does (Open Government Data, 2015). Janssen et al. (2012) and Ubaldi (2013) argue that the true value of governments' information lies in its use by the citizens, public or audience to make better decisions about their lives and to contribute and participate meaningfully in public affairs. This argument is contrary to the popular belief that the spread and publication of government information determine its value (Janssen et al., 2012). Having a presence online and providing information on the internet for citizens to access does not necessarily mean e-public engagement (Coursey & Norris, 2008); the citizens must be able to engage with such information.
The Researcher, therefore, argues that e-public engagement research should include a focus
on the information provided by governments and how it influences e-public engagement. Such
info-centric research should not be reactionary; it should investigate what citizens expect from their governments concerning information provision. Therefore, instead of just focusing on citizens' social reactions to existing online government contents, as in Bonson et al.'s (2015) research, there should be attempts at understanding the factors that may affect citizens' engagement with governments' contents on the internet.
The aim of this research is to develop a framework for optimal citizen engagement with
governments’ contents on the internet. The research questions include:
1. RQ1: What are the factors that influence citizens' engagement with governments' contents on the internet? This question is addressed through the following objectives:
a. R-OBJ1: To identify the factors that influence citizens' engagement with governments' contents on the internet.
b. R-OBJ2: To develop a model and propose hypotheses from the above investigation.
2. RQ2: How well do these factors explain citizens' engagement with governments' contents on the internet? This question is addressed through the following objectives:
a. R-OBJ3: To statistically test the hypotheses developed in R-OBJ2.
b. R-OBJ4: To propose a framework for optimal citizens' engagement with governments' online contents, based on the results of R-OBJ3.
This study adopts a two-phase approach with each phase dedicated to answering one research
question, i.e., phase one of this study answers RQ1 and meets R-OBJ1 and R-OBJ2, while phase
two answers RQ2 and meets R-OBJ3 and R-OBJ4.
In building its conceptual framework, this study considers governments' online contents as artefacts, as well as the activities involving the stakeholders that could influence their engagement with government's contents.
There are two types of gratifications on the web: content and process gratifications (Kayahara & Wellman, 2007). With government information as artefacts, citizens seek content gratification; as it concerns the process aspects of government information, they seek process gratification. To investigate these gratifications, a conceptual framework is built around the uses and gratifications theory (UGT), which is used to ascertain the why and the how of media use (Urista, Dong, & Day, 2009). However, as will be discussed later, this study does not aim to test or validate a theory; it starts with an in-depth, qualitative investigation of the factors that influence citizens' engagement with governments' online contents. It is therefore necessary to point out that the conceptual framework only serves as an initial guide that enables the Researcher to identify the key questions to ask the research participants, while allowing for emergent ideas and questions as data collection progresses.
1.7 Methodology
This study adopted an exploratory research design using a sequential mixed-methods approach across two main phases: a qualitative first phase and a quantitative second phase. The study was based on the taxonomy development model of mixed-methods research, and as such more emphasis was given to the qualitative phase. This approach was chosen because there was no existing theory with which to explicitly investigate citizens' engagement with governments' online information, and because the Researcher intended to generate and then test a quantitative hypothesis from an initial exploratory qualitative study (Creswell & Clark, 2011).
Data were collected from Nigerians in both phases. In the qualitative phase, data were collected through interviews and analysed thematically. In the quantitative phase, data were collected using questionnaires and then analysed using Structural Equation Modelling (SEM).
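As an illustration of what the SEM step involves, the sketch below shows how a structural model with the constructs developed in this thesis (CE, INPCQ, IVP, TGA, CC, IDelib) could be specified and fitted in Python with the semopy package. This is a minimal sketch under stated assumptions, not the analysis procedure actually used in this study: the item column names (ce1, inpcq1, etc.), the number of items per construct, and the data file name are all hypothetical.

```python
# Minimal SEM sketch (illustrative only; not this thesis's analysis script).
# Assumes survey responses in a CSV whose columns are the questionnaire
# items; all item names (ce1 ... idelib3) and the file name are hypothetical.
import pandas as pd
from semopy import Model

C_CE_MODEL = """
# Measurement model: '=~' defines each latent construct from its items.
CE     =~ ce1 + ce2 + ce3
INPCQ  =~ inpcq1 + inpcq2 + inpcq3
IVP    =~ ivp1 + ivp2 + ivp3
TGA    =~ tga1 + tga2 + tga3
CC     =~ cc1 + cc2 + cc3
IDelib =~ idelib1 + idelib2 + idelib3

# Structural model: CE is predicted by content quality/information need
# (INPCQ) and affinity for governments' platforms (IVP); IVP is predicted
# by trust (TGA), content creation (CC) and interaction/deliberation (IDelib).
CE  ~ INPCQ + IVP
IVP ~ TGA + CC + IDelib
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical data file
model = Model(C_CE_MODEL)
model.fit(data)
print(model.inspect())  # path estimates, standard errors and p-values
```

Moderation analyses of the kind reported later (platform type and political awareness) could similarly be approximated by fitting the same model to subgroups of the data and comparing the resulting path estimates.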
The remainder of this thesis is organised as follows:
• Chapter 1 (Introduction): The current chapter; it presents a general overview of the research and the thesis.
• Chapter 2 (Literature Review): This chapter presents the current state of knowledge in the e-public engagement research field and in the audience-content engagement research area.
• Chapter 3 (Theoretical Framework and Research Methodology): This chapter presents the guideline, as developed from the literature, for this empirical study. It also presents and justifies the Researcher's choice of methodologies, methods and techniques for this study.
• Chapter 4 (Qualitative Analysis and Hypothesis Development): This chapter presents the analysis of and findings from the qualitative data, and the development of hypotheses and key variables.
• Chapter 5 (Quantitative Data Analysis): This chapter presents the development of items operationalising the variables identified in the previous chapter and the questionnaire used in the quantitative phase of the study. It also presents the analysis of and findings from the quantitative data.
• Chapter 6 (Discussion): This chapter discusses the understandings gained and findings made throughout the research, their implications, the study's limitations and recommendations for future studies.
• Chapter 7 (Conclusion): This chapter presents an overarching conclusion to the research
as a whole.
Chapter 2: Literature Review
2.1 Introduction
This chapter reviews and presents the literature in the areas of public engagement and e-public
engagement. It also reviews the literature on citizens’ engagement with government’s content
and on the nature of the phenomenon called engagement. They are presented in the following
sections.
2.2 The Public Sphere and Public Engagement
To adequately conceive public engagement in this study, it is important that the Researcher discusses the concept of the public sphere. While the public sphere facilitates citizen discussions and information sharing outside of the ruling sphere, public engagement is a means by which the ruling sphere delivers information to, receives and uses information from, and involves the public in the State's decision-making process. The following sections discuss both concepts.
2.2.1 The Public Sphere
Aristotle conceived a two-tiered society made up of the oikos and the polis. The oikos represents the private setting or household, made up of "master and slave, husband and wife, father and child", and is the basic social unit of the polis (Roy, 1999, p. 1). The polis, on the other hand, represents the public setting, the state or city, and is made up of a collection of households and citizens, where the citizens are office holders and administrators of justice (Koçan, 2008).
However, Habermas (1997), a famous German sociologist who originated the concept of the public sphere, suggests the existence of a three-tiered society: a sphere of private autonomy similar to Aristotle's oikos; a public power sphere with the right to governance; and a domain of private individuals who come together to form a public sphere that mediates between the public power sphere and the private sphere. The concept of the public sphere has been widely cited since (Fraser, 1992; Graham, 2012; Grbesa, 2003; Kellner, 2000). Habermas defined a public sphere as "a realm of our social life in which something approaching public opinion can be formed" (Habermas, 1964, p. 49), where public opinion refers to a collection of different individual views and beliefs (Herbst, 1993). He further suggested that a public sphere comes into existence when private citizens assemble to converse in an unrestricted manner, and points out that there are two types of public sphere:
1. The political public sphere, where discussions that "deal with objects connected to the activity of the state" (Habermas, 1964, p. 49) are held and where public opinion is directed towards politics.
2. The literary public sphere, where general issues that are not necessarily political are discussed; the nature of discussions within this sphere depends on its members (Fraser, 1992; Hauser, 1999).
Both types of public sphere remain open for anyone to partake in; but while the literary public sphere can be said to be as old as man, the political public sphere emerged later.
Graham (2012, p. 29), in his work titled 'Public opinion and the public sphere', clearly explains the emergence of Habermas' three-tiered society from Aristotle's two. He narrates that, historically, political societies were made up of two distinct but porous sets: the rulers and the ruled. In this arrangement, the ruled owed their rights and entitlements to the authority of the rulers. However, certain historical events affected this structure, amongst which were "the spread of Christianity into Northern Europe, the invention of printing, the Reformation, the emergence of industrial production". With the new structure, the authority of the government became dependent on the citizens. To institute a central authority that would effectively protect their rights to life, liberty, and property, the citizens transferred their rights to self-defence and retributive punishment to the Magistrates through social contracts. These contracts were usually for life and were only terminated when the Magistrate abused his authority or was corrupt. Eventually, concerns arose about when the social contracts were made and how past agreements could bind the present. As a result, a new kind of contract was introduced which allowed the ruled the chance to renew a contract periodically or terminate it through democratic elections. With the introduction of democracy into the system, a group of citizens emerged who analysed decisions made by rulers and also played essential roles in forming and disseminating public opinion, which the rulers had to take note of if they were to be re-elected by the ruled. This group of citizens constituted the "Public Sphere" and included everyone outside the ruling class whose interests and activities were focused on political affairs, such as researchers, journalists, broadcasters and writers (Graham, 2012, p. 30); Habermas refers to this sphere as the political public sphere. Today's political public spheres emerge as a result of citizens' dissatisfaction with governance or economic issues in society (Shirky, 2011), and the public opinions formed therein are geared towards criticising and controlling the elite, opponents or the ruling class (Pusey, 1987a). The political public sphere is seen as essential, especially within a democratic state (Grbeša, 2004), as it "mediates between society and state, in which the public organises itself as the bearer of public opinion" (Habermas, 1964, p. 50). It is where activists and journalists fit in today's political societies, although it remains open to everyone, even those in political positions, as long as they contribute to matters of general interest without coercion by the state.
Jürgen Habermas’ public sphere is the most popular study and model of public discourse, and
as observed by Dahlberg (2001a), it is the most systematic critical theory of democratic
communication available. According to Habermas (1989), Pusey (1987b) and Hauser (1998), a normative public sphere is characterised by independence from the state; unrestricted assembly and expression of opinion; freedom of access to the sphere; freedom to put forward individual views and opinions; and freedom to contest the views and opinions of other citizens in the discourse of issues of general interest. Based on these normative characteristics, Dahlberg (2000), cited in Dahlberg (2001a), developed a public sphere model with the following characteristics: autonomy from state and economic power; exchange and critique of criticisable moral-practical claims; reflexivity; ideal role-taking; sincerity; and discursive inclusion and equality.
The normative public sphere must be autonomous from state and economic power; it must be restriction-free and independent from the state, and should allow free speech. A normative public sphere must also allow the exchange and critique of criticisable moral-practical claims; it should be devoid of dogmas, but should instead contain reasoned and criticisable opinions and involve the reciprocal critiquing of these opinions (Dahlberg, 2001a; Habermas, 1989). Reflexivity refers to the consideration and acceptance of opposing views and opinions in the light of better judgement (Dahlberg, 2001a); it is the core of rational critical discourse, which Wilhelm (2000) described as being the same concept as deliberation. In a public sphere, ideal role-taking demands that interlocutors with conflicting opinions understand the diverse perspectives by putting themselves in the position of the other (Dahlberg, 2001a); this allows participants to listen to each other despite their differences and to dialogue respectfully. The normative public sphere demands that interlocutors be sincere, striving to declare all relevant information and to make known their true intentions, interests, needs and desires, all of which are necessary for rational discourse and critique to be possible (Dahlberg, 2001a). A normative public sphere must also be characterised by discursive equality and inclusion; it must be devoid of status and class distinctions and open to every citizen (Habermas, 1989). All interlocutors in this sphere are listened to and treated as equals.
2.2.2 Public Engagement
Public engagement, on the other hand, was defined by the Economic and Social Research Council (2008) (cited in Maile & Griffiths, 2014, p. 15) as the "involvement of specialists in listening to, developing their understanding of, and interacting with non-specialists". The concept of public engagement has been adopted mainly by medical researchers (Carlsson, Nilbert, & Nilsson, 2006; Lorenc & Robinson, 2015; Pizzo, Doyle, Matthews, & Barlow, 2014; Rissi et al., 2015) in what is called Patient and Public Engagement/Involvement (PPE/PPI). Lorenc and Robinson (2015) defined this as the process of involving, consulting and listening to patients and the public with the aim of creating and delivering services that are responsive to patients' needs and that will improve clinical outcomes and patient experience. Public engagement is also referred to as citizen science (Jackson, Gergel, & Martin, 2015; Shirk, 2015; Supp et al., 2015; Zhao, Fautz, Hennen, Srinivas, & Li, 2015), which affords scientists the opportunity to involve the public in their projects.
However, for the purpose of this study, public engagement is discussed in the context of government public relations and States' policymaking process, which Phillips (2013) described as being rooted in democracy and as the process of involving the public in the governing system. In correspondence with the Economic and Social Research Council's definition and the context of this study, 'specialists' may refer to the State and policy-makers, while 'non-specialists' refers to members of the public. It therefore follows that public engagement is the inclusion and involvement of members of the public in the policy-forming process of the State.
Arnstein (1969) introduced a widely cited and accepted conceptualisation and gradation of public engagement termed the ladder of citizen participation. According to Arnstein, there are eight levels or rungs on the ladder, which progress from a state of nonparticipation through tokenism to citizen power. These levels are: manipulation, therapy, informing, consultation, placation, partnership, delegated power and citizen control. Within the nonparticipation state, power holders educate and cure the citizens through manipulation and therapy. Manipulation refers to phony forms of participation contrived by power holders, which aim at making citizens accept a predetermined course of action. At this level, gullible citizens are made to believe that they contribute to decision-making while in fact they do not. Therapy, on the other hand, refers to the process by which power holders assemble citizens in the guise of including them in decision-making, but with the sole motive of admonishing and preaching to them about their shortcomings. Therapy is used by the government to cure citizens of unfavourable attitudes and behaviour. In the tokenism state, citizens are both informed and given a voice through informing, consultation and placation. At the informing level, governments provide citizens with information about governmental programs, citizens' rights, responsibilities and options. Information flow at this level is typically one-way, although it can be scaled to go both ways. At the level of consultation, governments make efforts to get citizens' opinions on issues through various means like citizen polls and surveys. It is pertinent to highlight that consultation can easily be misused for the purpose of manipulation; citizens' inputs can be used as a smokescreen to mask the establishment of a pre-determined order or event by the government. With placation, selected citizens are allowed to advise the government, which may result in the adoption of some demands, requests or suggestions; however, the right to decide still rests solely with power holders. In the citizen power state, citizens have increasing degrees of influence in the State's decision-making process via partnership, delegated power and citizen control. At the rung of partnership, there is a redistribution of power as a result of negotiation between citizens and power holders. Citizens' views and opinions become more relevant in decision-making when the government, private corporations, and non-profit community-based organisations collaborate to form joint planning and decision-making structures (LeGates & Stout, 2011). Partnership, and the associated negotiation between citizens and power holders, may result in the level called delegated power, where citizens achieve dominant decision-making power over a plan or program. At this level, differences are resolved with power holders initiating the bargaining process with the citizens, instead of the other way around. At the rung of citizen control, citizens move from negotiating with power holders to fully governing and managing a program or an institution.
With a focus on information flow, and with concepts not unlike those in Arnstein's ladder of citizen participation, Rowe and Frewer (2005) discussed three levels of public engagement:
1. Passive public engagement via public communication. Here, information flow is one-way, from the State as provider to the public as consumers; examples include newsletters, leaflets and non-interactive TV.
2. Quasi-active public engagement via public consultation. Here, information flow is also one-way, but goes from members of the public to the State via a process determined by the State; examples include balloting, referenda, petition signing and surveys.
3. Active public engagement via public participation. Here, information flows both ways, i.e. between members of the public and the State, in a deliberative manner as each tries to transform the opinions of the other; examples include deliberative opinion polls, focus groups, public hearings and citizens' panels.
It is pertinent to point out at this stage that this study focuses more on information flow, as discussed by Rowe and Frewer (2005), as against policy settings, as discussed by Arnstein (1969) and IAP2 (2007).
Although the public sphere exists outside the power sphere, in a democratic setting citizens have played diverse roles in the State's decision-making process through public communication, participation, consultation, deliberation and citizen empowerment (IAP2, 2007; Rowe & Frewer, 2005; United Nations, 2014). These means by which citizens play a role in states' decision-making fall under two democratic traditions: participatory democracy and deliberative democracy (Cini, 2011). The participatory democratic tradition focuses on two main goals: (1) that every citizen takes part in all the decisions that would affect the quality and conduct of his or her life; and (2) that the state provides the means by which the public can participate in such decisions independently (Lynd, 1965, cited in Cini, 2011). Participatory democracy typically involves balloting, referenda, petition signing, surveys and the like (Rowe & Frewer, 2005), and aims to address the quantitative dimension of mass democracy by finding out how many people were involved in arriving at a certain decision in the state (Cini, 2011). The deliberative democratic tradition, on the other hand, focuses on discourse and argumentation between members of the public and the state as the means by which decisions are made (Fung, 2003). Citizens become part of a process in which mutually acceptable and accessible reasons are given for any opinion, stance or decision taken (Gutmann & Thompson, 2003). It may involve deliberative opinion polls, focus groups, public hearings and citizens' panels as mechanisms (Rowe & Frewer, 2005), and is therefore based on the quality of the argument and discourse. Public engagement facilitates both participatory and deliberative democracies.
2.3 E-public Engagement
Public engagement efforts were originally conducted through newsletters, leaflets, non-interactive TV, balloting, referenda, petition signing, surveys, opinion polls, focus groups, public hearings and citizens' panels (Dahl, 1998; Phillips, 2013; Rowe & Frewer, 2005). However, with the advent of and improvements in technology, e-public engagement was born, allowing citizens to participate in online political debates and paving the way for citizens' contribution to the decision-making process on the internet. E-public engagement, more commonly known as e-participation, refers to government-led initiatives which use technology, especially the internet, to encourage and support active citizenship with the intent of promoting fair and efficient governance and society (Sæbø, Rose, & Skiftenes Flak, 2008), particularly in policy-making (Ahmed, 2006). E-public engagement is the interaction between citizens and governments as supported by ICT. According to the United Nations (2014), it is of three types: e-decision-making, e-information, and e-consultation.
Passive e-consultation involves a one-way flow of information from citizens to the government
(Hands, 2005; Mergel, 2013; Rowe & Frewer, 2005), e.g. online petitions and online surveys.
Active e-consultation, by contrast, is deliberative and involves a two-way flow of information
amongst and between citizens and the government. Here, governments "use computer mediated
communication to foster strong democracy amongst citizens and between citizens and
representatives" (Hands, 2005, p. 13). Active e-consultation involves real-time conversations, is
facilitated by social media (Hartmann, Mainka, & Peters, 2013), and should be collaborative,
open, social, communicative, interactive and user-centred (Mainka et al., 2015; Mergel, 2013).
Wright and Street (2007) observed three main approaches by which governments provide active
e-consultation: (1) policy forums, which are typically highly structured and focused, and through
which policy documents are made available for citizens to read and then comment on or query;
(2) 'have your say' sections, which consist of unstructured, open discourse in which citizens
typically initiate discussions on topics they find important but which may or may not be
important to the government; and (3) the mixed model, which has separate policy forum and
'have your say' areas. Flew (2005), while highlighting the benefits of active e-consultation,
argued that e-government cannot be just about electronic service delivery, the provision of
information, or limited consultation through e-voting and e-petitions; it is about providing
citizens with tangible channels through which to make reasoned input into policy. With
e-deliberation, citizens become part of a process where they must give mutually acceptable and
generally accessible reasons for any opinion, stance or decision taken (Gutmann & Thompson,
2003). It enhances a collaborative approach to generating solutions within the state, involves
both the people and the public officials affected by a problem (Fung & Wright, 2001), and allows
the e-public sphere the opportunity to form, refine and revise preferences through public
discourse, towards mutual understanding and common action (Sirianni & Friedland, 2003).
Active e-consultation platforms provide citizens with an avenue for public deliberation and
afford governments the opportunity to host, coordinate and appropriate these deliberations. The
need for such platforms is growing because of the increasing volume of political deliberation
constantly going on in the public sphere, which, when appropriated by activists or opponents of
the state, can be used to stir up civil unrest. Furthermore, a study by Jensen (2003, p. 349)
showed that government-sponsored online political debate platforms are more "successful in
achieving democratic ideals of openness, respect, argumentation, enlightenment and deliberation
than private ones".
In 1993 the Federal Government of Nigeria established the National Orientation Agency (NOA).
NOA was formed through the merger of the Directorate for Social Mobilization, Self-Reliance
and Economic Recovery (MAMSER) with three divisions of the then Federal Ministry of
Information and Culture, namely the Public Enlightenment (PE), War Against Indiscipline
(WAI) and National Orientation Movement (NOM) divisions (Iredia, 2012). Alongside
communicating government policies to the public and ensuring that the Nigerian Government
stays abreast of public opinion, NOA was also tasked with promoting patriotism, national unity
and the development of Nigerian society (National Orientation Agency, 2014).
According to National Orientation Agency (2011, pp. 3-4), NOA’s mission statement is:
“To consistently raise awareness, provide timely and credible feedback; positively
change attitudes, values and behaviours; accurately and adequately inform; and
sufficiently mobilize citizens to act in ways that promote peace, harmony and national
development.”
NOA pursues this mission through five key mandates:
1. Public Enlightenment and Social Mobilisation: With this, the NOA aims to facilitate
citizens' participation in the political process and to empower them to demand their
rights and hold their leaders accountable.
2. Value re-orientation and Promotion of Core National Values: With this, the NOA aims
to discourage attitudes and behaviour that bring about segregation and disunity, whilst
promoting values that bind Nigerians together.
3. Political and Civic Education: With this, the NOA aims to orientate and produce
Nigerians whose “passion for Nigeria cannot be quenched by any sectional interest”
(National Orientation Agency, 2011, p. 17). The NOA aims to educate citizens about
their “rights, duties and obligations, patriotism and nationalism, loyalty to the state,
respect for constituted authorities, respect for national symbols and promoting the good
image of Nigeria among others”.
4. Peace Education and Social Justice: With this, the NOA aims to promote peace, ensure
that there are efficient conflict management systems in place, and that citizens have
access to institutions where they can seek justice.
5. Feedback: With this, the NOA aims to collate citizens' reactions concerning
government programmes and policies and their lives as citizens of Nigeria, and to
channel these to the Government.
NOA’s key mandates can be broken down into three main activities: Information provision to
the public, collation of feedback from the public and execution of social functions to enlighten
18
the citizens. Therefore, the NOA is the key avenue through which the Nigerian government seeks
to engender public engagement and participation.
According to Internet Live Stats (2015), between 2000 and 2014 the number of internet users in
Nigeria grew from 78,740 to 67,101,452; as of 22 May 2015, it stood at 76,688,600. A survey by
Pew Research Centre (2014a) shows that internet access and use in Nigeria is highest amongst
those aged between 18 and 29 (45%), followed by those aged 30-49 (31%) and 50 and above
(4%). A different Pew Research Centre study posits that in emerging and developing nations,
older people (50+) are significantly less likely than their younger counterparts (18-49) to
participate politically, especially when such participation is online (Pew Research Centre,
2014b). According to this study, 45% and 49% of Nigerians are convinced that sharing online
information and participating in online political dialogue, respectively, are effective ways of
getting heard by and influencing the government. These findings point to the fact that Nigerian
netizens are increasing rapidly and that a majority of them fall within the age bracket expected
to be ready to engage with government and participate politically. It is therefore necessary that
the Nigerian Government considers ways by which it can digitally inform, interact with and meet
the needs of her netizens. This is even more relevant as politicians, individuals and firms have
used the internet in recent times to distort public opinion (Nwaubani, 2014).
The National Orientation Agency, which is tasked with facilitating public engagement in
Nigeria, is mainly active offline. It performs its activities by publishing books and booklets
which are then circulated for citizens to read, e.g. the "Political Education Manual", published
to educate citizens about participation in the Nigerian political process and to inform them of
their rights. It also organises social functions, e.g. the "Heir Apparent", a reality programme
aimed at creating a new set of vibrant and visionary leaders, and conducts surveys to gauge the
opinions of citizens. On the internet and via its website (www.noa.gov.ng), NOA publishes more
information about itself than anything else. It has a Facebook page
(https://www.facebook.com/nationalorientationagency) and a Twitter handle
(https://twitter.com/noa_nigeria) which, like its official website, report mainly on the activities
of the agency. For an agency tasked with public engagement in a nation whose citizens are
rapidly going online, it is clearly not living up to expectations and should do more to engage the
citizens on the internet.
The increasing use of the internet as a platform where citizens engage in political debates is not
peculiar to Nigerians. Münchner Kreis (2013), cited in Mainka et al. (2015), conducted a survey
in 2012 and 2013 which shows that more than 40 percent of internet users in Brazil, China and
India are interested in participating in online political debates. The internet, especially via social
media, is known to enhance citizen participation and lends citizens a voice to freely discuss and
criticise states' decisions and policies online (Näkki et al., 2011). As an environmental tool,
social media acts as a space where citizens deliberate (the e-public sphere) and also as a means
for citizens to campaign for or against a cause (digital activism). The use of social media as an
environmental tool, for instance in digital activism, has sometimes resulted in its use as an
instrumental tool for organising and coordinating mass protests aimed at bringing about
immediate changes in a state, and such protests have toppled governments in recent times
(Shirky, 2011). An example is President Joseph Estrada of the Philippines, who was forced from
office on 20 January 2001 by social media-coordinated mass protests demanding his removal.
Hosni Mubarak of Egypt was ousted as a result of an 18-day revolution which was started by a
single Facebook page that quickly spread amongst the citizens (Smith, 2011). Furthermore, the
all-inclusive nature of social media can "give too much voice to citizens who misunderstand,
oversimplify or distort issues" (Ferree, Gamson, Gerhards, & Rucht, 2002, p. 292), whether out
of ignorance or in a bid to serve their own personal agendas. This presents a rather paradoxical
situation in which the public sphere, as supported by social media, needs to be all-inclusive yet
stands to lose quality if it is. These observations highlight the importance of governments and
governmental agencies joining these online, social media-based political discussions and
arguments. However, in response to dissidence emanating from and spreading through social
media, governments have been known to ban and censor its use, thereby controlling the e-public
sphere in the states concerned and causing further tension (Shirky, 2011).
Researchers have also looked at how social media can foster citizen-government collaboration
and civic engagement (Panagiotopoulos et al., 2014; Warren et al., 2014). In a recent study by
Zheng and Zheng (2014, p. 1), it was discovered that governments' efforts to inform the public
tend to be self-promoting, "monotonous, rigid and formal", and that interaction or communication
between governments and the public tends to be "insufficient and preliminary". Another study
found that governments most commonly tweet or write about special events rather than about
policies (Graham & Avery, 2013); this mirrors perfectly the way the NOA uses its online
platforms.
A review of the literature and a search of online journal databases for previous studies on
e-government, especially those focused on e-public engagement/e-participation in Nigeria,
indicate that there is a need for more research in that area and context. Whilst a handful of
studies have discussed the challenges and prospects of e-government in Nigeria (Ayo, 2005;
Mohammed, Abubakar, & Bashir, 2010; Mudhai, 2009), there is yet to be a study dedicated to
e-public engagement/e-participation in Nigeria. Alongside Croatia, the Dominican Republic,
Guyana, Honduras, Mozambique, Namibia, Pakistan, South Africa and Tonga, Nigeria ranked
97th on the United Nations' e-participation (or e-public engagement) index, having scored 0.333
out of a possible 1 (United Nations, 2014). The United Nations' findings suggest that, as it
concerns e-public engagement, Nigeria performs best at e-information with a score of 48.15%,
followed by e-consultation with 18.18% and e-decision-making with 11.11%. This dearth of
studies in the area of e-government and related concepts is not peculiar to Nigeria; Sandoval-
Almazan, Leyva, and Gil-Garcia (2013) observed that it is common across developing countries.
By contrast, e-participation studies have mainly focused on developed countries, especially in
the Americas and Europe (Alvarez et al., 2009; Bonson et al., 2015; Carter & Belanger, 2012;
Fan, Zhang, & Ieee, 2007; Freire, Fortes, & Barbosa, 2014; Mahrer & Krimmer, 2005; Oktem
et al., 2014; Panopoulou et al., 2014; Saebo et al., 2011; Zheng et al., 2014). Sandoval-Almazan
et al. (2013) argued that the construction, deployment and delivery of internet citizen portals in
developing countries would not necessarily follow the same process as in developed countries;
this highlights the need for more research focused on less developed countries.
In the face of little or no previous research to guide a study, there are two main suggestions:
(1) to consider changing the topic, as it will be difficult to get support or help (Blaxter, Hughes,
& Tight, 2001), or (2) to treat it as a missing element in the existing research literature, commonly
known as a research gap, which has to be filled with reports from similar research studies
(Bachman & Schutt, 2008). Contrary to Blaxter et al.'s (2001) advice, this study shall embrace
the challenge posed by the scarcity of e-public engagement research in the Nigerian context and
will treat it as a research gap which needs to be bridged. Therefore, the focus of this research
shall be on Nigeria.
2.6 Engagement
A web interface that is boring, a multimedia presentation that does not captivate users’
attention or an online community that fails to engender a sense of community are [Sic]
quickly dismissed with a simple mouse click. Failing to engage users equates with no
sale on an electronic commerce site and no transmission of information from a website,
people go elsewhere to perform their tasks and communicate with colleagues and
friends.
Defining involvement, Thomson, MacInnis, and Whan Park (2005) opined that it is an individual's state
of mental readiness to deploy his/her cognitive resources to a consumable object, decision or
action. Heath (2007) defined attention as a conscious, rational construct that determines the
amount of thought given to an advertisement, or in a general sense -a consumable object,
decision or action. Involvement and attention are similar concepts since they involve a conscious
attempt by an individual to expend his/her mental or cognitive resources – including
thinking/thoughts- on a physical or abstract element. On the other hand, experience is an
individual’s internal and subjective response to a direct or indirect contact with an element
(Novak, Hoffman, & Yung, 2000). It is an individual’s belief about how an element fits into
his/her life; this belief may be utilitarian or intrinsically enjoyable in nature (Calder et al., 2009).
Attention/involvement is an important dimension of engagement (Mollen & Wilson, 2010) while
experiences aggregate to form engagement (Calder et al., 2009). An element's engagement-
ability is its power to hold the attention of an individual; this is different from its persuasiveness,
i.e. its ability to deliberately change an individual's behaviour or attitude in a desired direction
(IJsselsteijn, De Kort, Midden, Eggen, & van Den Hoven, 2006; Rashotte, 2007; Seiter & Gass,
2004; Simons, 1976).
As earlier observed, there is no single established theory that pertains to engagement, and this
makes it hard to adopt a theoretical framework in engagement research. However, O'Brien and
Toms (2008) discussed four established theories that are related to engagement and which are
especially helpful in defining user engagement with technology. These include aesthetic theory,
play theory, flow theory, and information interaction theory. The aesthetic and play theories,
neither of which has yet been extensively researched, shall be discussed briefly.
According to Jennings (2000), there are two main views of aesthetics - the broad and narrow
views. The broad view of aesthetics focuses on those perceptual, cognitive and affective factors
that support the creation of engaging and immersive environments. It is concerned with aesthetic
experience which occurs when a person is deeply engaged and immersed in an activity just for
intrinsic reasons to the point where outside distractions do not interfere ((Beardsley, 1982) cited
in (Jennings, 2000)). Beardsley's idea of aesthetic experience is similar to Csikszentmihalyi's
flow theory, which is discussed below. The narrow view of aesthetics focuses on visual
appearance or beauty as related to the principles of design: balance, emphasis, harmony,
proportion, rhythm, and unity. Pleasing and attractive visuals are important as they create the
urge to explore further, thereby resulting in engagement. This view of aesthetics is just one
important aspect of engagement and does not wholly define it (O'Brien & Toms, 2008).
On the other hand, play is defined as an activity that is voluntary and intrinsically motivating,
involves some level of active, often physical, engagement, and has a make-believe quality
(Rieber, 1996). Although play shares some characteristics with flow, it is different because of
this make-believe attribute. According to Rieber (1996), the opposite of work is leisure, not play,
as work can become so intrinsically satisfying that getting paid to do it becomes secondary.
Play theories, or rhetorics, typically fall into four themes: play as progress, when play is used
for something useful; play as power, when play is associated with competition; play as fantasy,
when play is used for creativity; and play as self, when play is used for personal satisfaction
(Milne, 2012; Pellegrini, 1995). Play is seen as intrinsic to engagement because it facilitates the
satisfaction of system users and increases motivation, challenge and affect ((Woszczynski et al.,
2002) cited in (O'Brien & Toms, 2008)).
Flow is the experience of complete absorption and involvement in the present moment
(Nakamura & Csikszentmihalyi, 2009) and, as discussed above, is an essential part of both
aesthetic and play theories. It is a condition wherein people are so deeply involved in an activity
that nothing else seems to matter; because the experience is so enjoyable, people will pursue it
even at great cost and purely for the sake of the activity itself (Csikszentmihalyi, 1991). Flow
theory and research are concerned with understanding the phenomenon behind activities which
are rewarding in and of themselves, irrespective of any extrinsic rewards that may come from
them. Getzels and Csikszentmihalyi (1976) (cited in (Nakamura & Csikszentmihalyi, 2009, p.
195)) narrated how Csikszentmihalyi observed an artist who single-mindedly carried on painting
while neglecting hunger, fatigue and discomfort, yet soon after completing the painting lost
interest in the picture. For flow to occur there must be: (1) a perceived challenge or opportunity
for action that is within the person's skills or capabilities, and (2) clear, proximal goals with
immediate feedback about the progress made. With these conditions in place, a flow experience
is likely. Such an experience is characterised by (1) intense and focused concentration on the
present moment/activity, (2) the merging of action and awareness, (3) the loss of reflective self-
consciousness, (4) a sense of being in control of one's actions, (5) loss of awareness of temporal
existence, and (6) the feeling that the activity is intrinsically rewarding, even more than the end
goal. While recognising that flow and engagement share characteristics such as focused
attention, feedback, control, activity orientation and intrinsic motivation, O'Brien and Toms
(2008) argue that the two differ: while flow involves motivation, engagement may arise
involuntarily; and while flow demands undivided long-term focus, engagement is possible in a
dynamic environment. Flow theory has been used
in studies that investigated why people play games (Ghani, Supnick, & Rooney, 1991; Hsu &
Lu, 2004), in studies that investigated the experience employees have while using computers in
the workplace (Ghani & Deshpande, 1994; Trevino & Webster, 1992; Webster, Trevino, &
Ryan, 1994), and in studies that investigated online consumer behaviour (Chan, Cheung, Kwong,
Limayem, & Zhu, 2003; Koufaris, 2002; Lu, Zhou, & Wang, 2009; Novak, Hoffman, &
Duhachek, 2003). Though flow theory is related to the concept of engagement, it is more in tune
with HCI and non-text content, and it has been used mainly in game development and virtual
reality studies (Chen, 2007; Hsu & Lu, 2004; Lauteren, 2002; Mathwick & Rigdon, 2004;
Montola, Stenros, & Waern, 2009; Reid, 2004; Rieber, 1996) rather than in traditional
information-rich environments that are textually, visually or aurally based.
Information architecture is focused on solving the basic problems involved in accessing and
using information (Gullikson et al., 1999; Resmini & Rosati, 2012). It supports Nielsen's (1999)
argument that people come to the web to seek information, not experience. Information
architecture focuses solely on information organisation/categorisation, presentation/aesthetics,
navigation and access by web users (Gullikson et al., 1999; Rosenfeld & Morville, 2002) and
includes a system of classification, labelling of concepts, navigation and search/access systems
for a defined body of information (Toms, 2002). In short, information architecture is concerned
with organising and presenting data so that it is better transformed into valuable and meaningful
information.
Information design, on the other hand, is the art and science of preparing information so that it
can be used by human beings with efficiency and effectiveness. Its objectives extend beyond the
development of documents that are comprehensible and easily retrievable, to the design of
interactions that are easy, natural and as pleasant as possible (Horn, 2000). It is a multi-faceted
practice that not only provides a blueprint for information organisation and accessibility on
websites, but also facilitates media immersion, engagement, participation, and the experience of
users (Nardi & O'Day, 1999; O'Brien & Toms, 2008; Shedroff, 1999). Information design
focuses on creating a meaningful experience for the audience, which is essential in transforming
information into knowledge. It is rooted in HCI (Horn, 2000) and entails that there should be
feedback from the engagement between audience and content and that the audience should have
control over the outcome of the engagement. The audience should have productive, creative,
adaptive and communicative experiences. Interaction design opposes the passivity present in
simple navigation and playback-only content (Shedroff, 1999).
Shedroff (1999), in addition to the two information interaction design structures, introduced a
new structure which he termed sensorial design. Sensorial design focuses on the creation and
presentation of information using the medium or media that best support the information goal
and the desired audience experience. It is the technique of stimulating and utilising the five
human senses to create a "more compelling, engaging and appropriate experience" (O'Brien &
Toms, 2008) as well as more successful communication and interaction (Shedroff, 1999). In
Shedroff's view, a complete information interaction design would involve the marriage of three
structures: information architecture (which he referred to as information design), information
design (which he referred to as interaction design) and sensorial design. O'Brien and Toms
(2008) argue that while a computer system may be aesthetically appealing and may have design
elements that promote play, which could in turn facilitate a flow experience, it is the interaction
between users and the content or system that facilitates an engaging experience.
The concept of engagement has been widely discussed in the context of reading (Ahola, 2015;
Baker & Wigfield, 1999; Guthrie et al., 2004; Jones & Brown, 2011; Nguyen, van Landingham,
Massof, Rubin, & Ramulu, 2014; Wigfield & Guthrie, 1997) and is seen as the integration of
cognitive, motivational and social aspects of reading. Engagement in reading is evidenced by
four factors: time invested in reading, affect, the cognitive qualities of the reader, and indulgence
in reading activities (Guthrie, 2004). For a reader to invest time in reading, there should be
sufficient attention to the text, concentration on the meaning of the text, and sustained cognitive
effort; this agrees with Mollen and Wilson's (2010) definition of engagement as a mental state
accompanied by active, sustained and complex cognitive processing. As it concerns affect, the
interaction with texts may result in feelings of enthusiasm, liking and enjoyment; this, according
to Mollen and Wilson, is engagement through emotional impact, pleasure and satisfaction. The
reader's cognitive qualities are signified by his/her depth of processing while reading, which
typically results in learning; this is also in agreement with Mollen and Wilson's view of
engagement as a mental state with sustained cognitive processing. Participation in diverse
reading practices signifies indulgence in reading activities. While this last factor cannot be
mapped to any of Mollen and Wilson's (2010) concepts of engagement, it is also evident that
Guthrie's (2004) four factors of reading engagement lack the 'need factor' identified by Mollen
and Wilson.
The reading engagement theory was conceptualised by Wigfield and Guthrie (1997), who argued
that motivated readers engage more in reading. According to Wigfield, Cambria, and
Ho (2012, p. 53), motivation is seen as the “beliefs, values, and goals individuals have for
different activities”, and - in the context of reading - motivation is an individual’s personal
beliefs, values, and goals as it concerns the topics, processes and outcomes of reading (Guthrie
& Wigfield, 2000). Wigfield and Guthrie (1997) developed a framework of reading engagement
to assess children’s engagement with reading. This framework describes three categories of
factors that impact on motivation for reading and reading engagement which include:
competence and efficacy beliefs, goals for reading, and social purposes of reading (Wigfield &
Guthrie, 1997). The competence and efficacy beliefs category is concerned with the belief that
one can be successful at reading (self-efficacy), the willingness to read difficult content
(challenge), and the conscious desire and effort to avoid reading activities (work avoidance).
goals for reading category is concerned with the desire to read about a particular topic of interest
(curiosity), the favourable experience or enjoyment derived from reading the content
(involvement), the personal value ascribed to reading a content (importance), the
acknowledgement received from significant others as a result of reading (recognition), the desire
to get favourable evaluation from teachers as a result of reading (grades), and the desire to
outperform others in reading (competition). The social purposes of reading category include the
process of constructing and sharing the meanings gained from reading with the immediate social
circle (social), and the need to meet the expectations of others (compliance). A similar framework
was developed by the OECD (2010) to measure reading engagement in the Programme for
International Student Assessment (PISA). This framework focuses on the enjoyment of reading,
time spent reading for enjoyment, the diversity of print materials read, the diversity of online
materials read, and reading for school. Whilst these frameworks are focused on children and
students, Wigfield et al. (2012) developed a more generic framework called the Motivations for
Reading Information Books - Nonschool Questionnaire (MRIB-N). The MRIB-N is not
fundamentally different from the other frameworks and does, in fact, share similar factors and
concepts. It covers common factors such as enjoyment of reading, avoidance, importance,
efficacy, and recognition/peer value. The MRIB-N is additionally concerned with the lack of
value ascribed to reading (devalue), the lack of recognition or acknowledgement from peers
about reading (peer devalue), and the notion that reading a given text is difficult (perceived
difficulty).
2.6.1.5 The 4-stage model of engagement
With the aim of developing a tool for evaluating national newspapers in terms of readership
engagement, McGarrigle and Sanderson (2010) identified five key readership-engagement
factors: the informative/inspirational factor, the loyalty/emotional attachment factor, the
entertainment factor, the time factor, and the frequency factor. Across these five factors there
are twelve associated input variables. The informative/inspirational factor includes the
willingness to recommend the newspaper to a friend, the reader's belief that the paper reinforces
his/her outlook on the world, and the beliefs that the paper yields a stimulating read, that it is a
source of reference, that it is inspirational, that it is thought-provoking, that it challenges the
reader's views on the world, and that it is an absorbing read. The loyalty and emotional
attachment factor includes the difficulty of substituting the newspaper and the disappointment
associated with not getting hold of a copy. The entertainment factor includes the entertainment
and relaxation derived from reading the newspaper. The time factor includes the time spent
reading the paper and the number of times the paper was picked up. Finally, the frequency factor
includes the recency and frequency of reading. These twelve variables constitute a single
engagement index, or what has been referred to as the Endex (Gibbs, 2012). However, Calder et
al. (2009, p. 322) argued that all these variables are consequences of engagement and do not
describe engagement itself; according to them, "it is engagement with a website that causes
someone to want to visit it, download its pages, be attentive to it, recommend it to a friend, or
be disappointed if it were no longer available".
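To make the arithmetic of such an index concrete, the sketch below (in Python) collapses twelve
normalised input scores into a single figure. This is a minimal illustration rather than McGarrigle
and Sanderson's (2010) actual formula: their item scoring and weighting are not reported here,
so the 0-1 normalisation and the equal default weights are assumptions.

# A minimal sketch, not McGarrigle and Sanderson's (2010) published formula:
# each of the twelve input variables is assumed to be normalised to 0-1, and
# the (hypothetical) default weighting treats all twelve variables equally.

def endex(item_scores, weights=None):
    """Collapse twelve normalised readership-engagement inputs into one index."""
    if len(item_scores) != 12:
        raise ValueError("the Endex is defined over twelve input variables")
    weights = weights or [1.0] * 12
    total = sum(w * s for w, s in zip(weights, item_scores))
    return total / sum(weights)  # the result stays within the 0-1 range

# e.g. a reader who scores high on the informative items but low on loyalty:
scores = [0.9, 0.8, 0.8, 0.7, 0.6, 0.7, 0.5, 0.8, 0.3, 0.4, 0.6, 0.5]
print("Endex = %.2f" % endex(scores))

On Calder et al.'s (2009) reading, of course, such an index aggregates consequences of
engagement rather than engagement itself.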
Like this study, the uses and gratification theory is an audience-focused approach to
understanding the use of media: it seeks to understand not how media consumption affects the
audience, but how and why the audience consumes media (Urista et al., 2009). It is based on the
assumption that the audience is not passive but has wants and needs which dictate its deliberate
choice and consumption of media (Rubin, 2002). The theory originated in 1944 with the
psychologist Herta Herzog's study of satisfaction amongst radio audiences, but has since been
extended to the study of audience gratification across several media of communication, including
print (Finn, 1997), television (Palmgreen & Rayburn, 1979; Wenner, 1982), the internet (Ko,
Cho, & Roberts, 2005; Stafford, Stafford, & Schkade, 2004), video games (Sherry, Lucas,
Greenberg, & Lachlan, 2006), and mobile phones (Leung & Wei, 2000; O'Keefe & Sulanowski,
1995). It is also increasingly popular in social media studies (Park, Kee, & Valenzuela, 2009;
Raacke & Bonds-Raacke, 2008; Urista et al., 2009). The uses and gratification theory holds that
there are social and psychological needs which give rise to an individual's expectations of the
media s/he consumes, and which then impact on his/her engagement with that media in the
pursuit of gratification (Katz, Blumler, & Gurevitch, 1973). It is widely held that people use
media for five main reasons: (1) to be informed and educated, i.e. to satisfy cognitive needs; (2)
to be entertained, i.e. to satisfy affective needs; (3) to develop a personal identity by mimicking
characters in the media context, i.e. to satisfy personal integrative needs; (4) to get socially
integrated and enhance social interaction, i.e. to satisfy social integrative needs; and (5) for
escapism, i.e. to attain a tension-free state (Rodman, 2009). However, studies that employ the
uses and gratification theory identify different sets of gratifications for the item under study
(Leung & Wei, 2000; Stafford et al., 2004; Urista et al., 2009). Kayahara and Wellman (2007)
posit that the two major categories of gratification on the internet are process gratification,
which concerns the experience associated with navigating or using internet functionalities, and
content gratification, which deals with the acquisition of required information. While Stafford
et al. (2004) agree with Kayahara and Wellman, they suggested a third category: socialisation.
The uses and gratification theory has been used several times in the study of engagement (Calder
et al., 2009; Dimmick, McCain, & Bolton, 1979; Leung, 2009; Sherry et al., 2006).
The UGT shows that the intent to consume information is an important factor affecting audience
engagement with media content. On the internet, these gratifications include process
gratification, which pertains to the ease of getting information and can be enhanced by
information architecture; content gratification, which is an outcome of acquiring information
and can be enhanced by information design; and social gratification. While all three
gratifications sought by internet users are important, the major focus of this study is content
gratification, which is related to human-information interaction; this is because the study aims
to improve audience engagement with information provided online by governments as an
antecedent to e-public deliberation. This being the case, it is pertinent to ask: what information
will the public need from governments? This leads us to 'information need' as conceptualised
by Taylor (1962). Information need was defined by Ormandy (2011) as the recognition that one's
knowledge is inadequate to satisfy a goal within the context or situation in which one finds
oneself at a given point in time. Belkin (1980) referred to this as the Anomalous State of
Knowledge, and it is the reason an individual gets involved in the process of asking questions
that will help satisfy a conscious or unconscious need (Taylor, 1962). This process of asking
questions was described as information-seeking behaviour by Wilson (2006) and may involve
making demands on formal systems, on other information sources, or on other people through
interaction/information exchange. A more popular, related term to Wilson's information-seeking
behaviour is information retrieval (IR), which refers to the process of obtaining, from a bank of
information resources, those particular resources that will meet the individual's information
need (Bian, Liu, Agichtein, & Zha, 2008; Broder, 2002; Craswell & Hawking, 2009; Frakes,
1992). Belkin (1993, p. 1) outlined how important it is to understand
the information need of the audience with the intent of creating good audience-content
engagement and interaction. He opined that:
People are not just passive recipients of messages, but rather active seekers of texts, and
active constructors of meaning from these texts. They look for texts of potential interest;
they make judgements about the usefulness or interest of texts by engaging with them.
Thus, our engagement with texts and our interpretation of them are central to our being
able to use them for our goals, whatever they may be.
On the web, there are three types of information need: (1) navigational need, with the immediate
intent to reach a particular page or site, e.g. by visiting www.gov.uk/browse/tax; (2)
informational need, with the intent to search for information relevant to one's needs or interests,
e.g. by searching on Google for UK universities that offer postgraduate courses, which is closely
related to traditional information retrieval (Broder, 2002); and (3) transactional need, with the
intent to reach a site where certain transactions or web-mediated activities will take place, e.g.
shopping, chatting/socialising, gaming or downloading (Broder, 2002; Craswell & Hawking,
2009). These needs are related to the gratifications sought by users, as discussed earlier, namely
process, content and socialising gratifications (Kayahara & Wellman, 2007; Stafford et al.,
2004). A study by ((Rose and Levinson, 2004) cited in (Craswell & Hawking, 2009)) shows that
60% of web queries were informational, 25% transactional, and 15% navigational. Furthermore,
community question-answering (CQA) and web search using search engines are the two main
ways of stating informational needs on the internet (Bian et al., 2008). With CQA, information
needs are specified as natural-language questions, and the desired results are direct, self-
contained answers from the community; queries to search engines, on the other hand, return a
list of links or documents. However, a review of the literature shows that information seekers
can also directly visit an informative platform with the intent to consume information (Broder,
2002; Craswell & Hawking, 2009; Kayahara & Wellman, 2007; Stafford et al., 2004).
With individuals and firms aiming to measure and understand how their online content engages
their audience, web analytics was born. Web analytics refers to the analysis of websites with the
intent of understanding their performance (Ferrini & Mohr, 2009), understanding the behaviour
of the audience, improving the websites, and enhancing the audience's experience (Waisberg &
Kaushik, 2009) and thus engagement (Gerlitz & Helmond, 2011). A review of the literature
shows that web analytics is carried out in two main ways: by measuring the audience's implicit
relationship with the online content or media vehicle, and by measuring their explicit
relationship with the same. Implicit web analytics is also referred to as on-site web analytics and
can be carried out only by the owners of the target website or anyone else with access to the
site's backend.
On-site web analytics rely on the audience's interaction with the unit of content, commonly
known as a page. What qualifies as a page depends on the analytics tool(s) used and could be
"Flash, AJAX, media files, downloads, documents, PDFs" (Burby & Brown, 2007, p. 6) as well
as the usual web pages. In a study funded by the Web Analytics Association, Burby and Brown
(2007) discussed a number of on-site web analytics metrics, which include page views, hits
(Ferrini & Mohr, 2009), visits/sessions, page views per visit, unique visitors, entry pages, landing
pages, exit pages, visit duration, referrers, click-throughs, click-through ratios, page exit ratios,
single-page visits, bounces/single-page-view visits, bounce rates, conversions, engagement time
(Haile, 2014; Mintz, 2014), eye tracking (Drusch, Bastien, & Paris, 2014; Granka, Joachims, &
Gay, 2004; Jacob & Karn, 2003; Michailidou, Christoforou, & Zaphiris, 2014), and mouse
tracking (Hehman, Stolier, & Freeman, 2014; Mueller & Lockerd, 2001; Smucker, Guo, &
Toulis, 2014).
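To illustrate how a few of the metrics above are derived, the following minimal Python sketch
computes visits, page views per visit, bounce rate and mean visit duration from a session log. It
assumes backend access, and the record layout (visit_id, pages, duration_secs) is hypothetical
rather than that of any particular analytics tool.

from statistics import mean

# Hypothetical session log: one record per visit, as a backend tool might store it.
sessions = [
    {"visit_id": 1, "pages": ["home", "policy", "contact"], "duration_secs": 310},
    {"visit_id": 2, "pages": ["home"], "duration_secs": 12},   # single-page visit
    {"visit_id": 3, "pages": ["policy", "policy/faq"], "duration_secs": 145},
]

visits = len(sessions)
page_views = sum(len(s["pages"]) for s in sessions)
bounces = sum(1 for s in sessions if len(s["pages"]) == 1)  # single-page visits

print("visits:", visits)
print("page views per visit: %.2f" % (page_views / visits))
print("bounce rate: %.0f%%" % (100 * bounces / visits))
print("mean visit duration: %.0fs" % mean(s["duration_secs"] for s in sessions))

As argued later in this section, such counts establish that pages were opened, not that their
contents were actually read.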
Explicit web analytics, also known as off-site web analytics, can be performed by anyone who
can see the frontend of a website, whether members of the audience or owners of the site. It
relies on easily observable metrics such as the numbers of shares, comments and likes. Previous
studies have suggested that an individual's online influence is evident in the level of engagement
the audience has with his/her online content. These studies have mainly relied on off-site web
analytics, with a focus on network contagion and information diffusion (Cha, Haddadi,
Benevenuto, & Gummadi, 2010; Lerman & Hogg, 2010; Onnela & Reed-Tsochas, 2010; Ye &
Wu, 2010), to detect and measure engagement and thus influence. Before the current methods
of measuring influence, the focus was on the number of clicks a piece of content received; it
then shifted to measuring reach and frequency when it was realised that online robots were being
used to imitate human click-throughs (Chen & Wells, 1999). As rightly observed by Toder-Alon,
Brunel, and Fournier (2014), message frequency and dispersion or valence have taken the bulk
of research attention as it concerns understanding influence in the context of social media. For
example, Ye and Wu (2010) focused on message propagation, the number of followers and
re-tweets in their study of social influence on Twitter. Similarly, Goggins and Petakovic (2014)
mentioned the numbers of shares, comments and likes as evidence of influence on Facebook,
while direct tweets, replies, mentions and retweets explain influence on Twitter.
These studies measure the influence of individuals on social networking sites by investigating
the spread of their content. However, though a popular or influential person (the source) can
have widespread or viral content on social networking sites, content about a popular figure can
also propagate widely regardless of its source, e.g. news of the death of Michael Jackson (Ye &
Wu, 2010). It can therefore be argued that the content, and not just the source, can account for
information propagation on social networking sites.
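Off-site metrics of this kind are often summarised as a per-post engagement rate. The sketch
below shows one such computation in Python; the formula (interactions divided by audience
size) is a common industry convention rather than a measure proposed by the studies cited
above, and all figures are invented.

# Hypothetical public posts by a government account; all counts are made up.
posts = [
    {"id": "budget-2015", "likes": 320, "comments": 45, "shares": 110},
    {"id": "agency-event", "likes": 40, "comments": 3, "shares": 6},
]
followers = 12000  # audience size of the account

for post in posts:
    interactions = post["likes"] + post["comments"] + post["shares"]
    rate = interactions / followers  # conventional per-post engagement rate
    print("%s: %d interactions, engagement rate %.2f%%"
          % (post["id"], interactions, 100 * rate))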
As it concerns on-site web analytical approaches, it can be argued that the metrics listed above,
beyond providing a calculated guess, cannot reliably detect audience-content engagement. For
instance, the visit duration metric cannot show that the visitor was actually reading the content
on the website for the duration of the session, only that s/he may have been. The exceptions are
the engagement time metric, eye tracking and mouse tracking, which have been hailed as the
most accurate means of measuring engagement because they take into consideration eye
movements, cursor movements, clicks, hovers, scrolls and time spent in determining a visitor's
engagement with an online content (Haile, 2014; Jacob & Karn, 2003; Mintz, 2014). However,
these metrics can only be ascertained by those who have access to a site's backend, typically
using expensive analytics tools. Neither option is available in studies of online audience-content
engagement where the researchers have no access to the websites' backend or are financially
constrained.
Similarly, current research shows that off-site web analytical approaches are not completely
indicative of engagement either. Chartbeat observed that people currently mistake content
propagation for content engagement: according to its CEO, Tony Haile, there is no correlation
between social shares and the audience actually reading the content (Haile, 2014). This finding
was corroborated by another company, Upworthy, whose 'attention minutes' metric measures
the amount of time the audience spends on an online article. Data gathered and analysed by
Upworthy show that people who spent 25% of the average attention minutes on an article shared
it more than those who spent 100% of the attention minutes on it (Mintz, 2014). In view of this,
it can be said that online social activities such as likes, shares and comments are not necessarily
indicative of audience-content engagement. The strongest indicator of engagement with content
is the feedback or comments left by the audience (Albrecht, 2006; Dahlberg, 2001a; Sample,
2014), but even these have to be analysed in the context of the content (Herring et al., 2005)
before engagement can be ascertained. This is because real-life instances show that an online
content may attract an enormous number of comments, which on the surface signifies
engagement, yet on closer inspection a significant proportion of those comments are out of
context and therefore cannot signify engagement with the content.
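One way to probe claims like Haile's (2014) on one's own data is to correlate per-article share
counts with measured reading time. The sketch below does so in Python with invented figures,
purely to show the computation.

from statistics import correlation  # Pearson's r; available from Python 3.10

# Invented figures: share counts and mean attention minutes for five articles.
shares = [510, 120, 890, 60, 340]
read_minutes = [0.6, 2.4, 0.4, 3.1, 1.0]

r = correlation(shares, read_minutes)
print("Pearson r between shares and reading time: %.2f" % r)
# A value near zero (or negative, as with these figures) would be consistent
# with the claim that propagation metrics do not indicate engagement.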
2.7 Summary
The review of the literature showed that there is no overarching theory that can be adopted for
the study of audience engagement with online information/contents. However, nine models,
concepts and theories related to engagement were discussed. The Researcher categorised these
theories and concepts into four main groups:
The next chapter will focus on designing a conceptual framework using one or more of the
concepts and theories discussed in this chapter. This framework will guide the Researcher
towards providing answers to the research questions of this study. The next chapter will also
discuss the research methodology for this study.
Chapter 3 : Conceptual Framework and Research Methodology
3.1 Introduction
This chapter comprises four parts, which discuss the conceptual framework, research
methodology, and the methods and techniques adopted in this study. The first part presents the
conceptual framework; the second provides an overview of, and justification for, the
Researcher's choice of a mixed-methods approach; and the third and fourth parts present the
methodology and methods used in the first and second phases of this study respectively.
The conceptual framework of this study rests on the following premises:
1. Citizens would visit governments’ platforms for information and/or transactions (Wang
et al., 2005). This study is focused on the information side of why citizens visit
government platforms.
2. Citizens engage with government information from two perspectives: as artefacts and/or
as processes ((Davies and Bawa, 2012) cited in (Susha et al., 2015)). According to Susha
et al. (2015), as artefacts, government information should be user-friendly, meeting
citizens' information needs and being designed and presented appropriately. As
processes, every relevant stakeholder must be part of the development and
implementation of policies for the use of government information (Maruyama et al.,
2013) and collaborate in developing such information (Davies, 2010), and users should
be able to interact with the providers and give feedback on the use of the information
(Zuiderwijk et al., 2012). Where artefacts refer to the information types/topics and
features of governments' contents that can improve citizens' engagement with those
contents, processes refer to those activities involving the stakeholders that could
influence their engagement with governments' contents.
3. There are two types of gratification on the web: content and process gratifications
(Kayahara & Wellman, 2007). With government information as artefacts, citizens seek
content gratification; as it concerns the process aspect of government information, they
seek process gratification. This reflects the uses and gratification theory (UGT), which
is used to ascertain the why and the how of media use (Urista et al., 2009). The UGT
therefore provides the lens through which this study investigates citizens' engagement
with governments' online contents.
Figure 3.1: Conceptual Framework. [Diagram: the question 'What facilitates engagement with
governments' online information?' is examined through two lenses: artefacts/contents and
processes on the web (Kayahara & Wellman, 2007), and the artefact and process perspectives
of citizens' engagement with governments' information (Susha et al., 2015); research questions
Q1-Q8 are distributed across the artefacts/contents and processes strands, with Q8 spanning
both.]
The purpose of this part of the chapter is to explain the rationale behind the Researcher's choice
of research approach (methodology and methods) and to justify the choices made. It discusses
the research philosophy and paradigm, the nature of the study and the research approaches it
requires, the other factors that influenced the Researcher's choice of approach, the chosen
approach itself, and ethics.
3.3.1 Research philosophy and paradigm
The Researcher believes that there is a single reality and accepts that human fallibility inhibits
the chances of fully detecting the nature of this reality, but still strives towards it. This belief
places the Researcher as a post-positivist (Trochim, 2006) and entails the use of methodologies
that allow for the generation of hypotheses through in-depth investigation of a given
phenomenon within its complex and dynamic social context (qualitative), and methodologies
that test those hypotheses (quantitative).
3.3.2 Research Methodology: Need for the 'Taxonomy Development Model' of Mixed-
Methods Approach
Regardless of the researcher’s philosophical bias, more important determinants of the choice of
research approach are the research nature as determined by the questions and objectives
(Benbasat, Goldstein, & Mead, 1987; Dawson, 2002; Patton, 1990; Wellington, Bathmaker,
Hunt, McCulloch, & Sikes, 2005) as no approach can be said to be more appropriate than others
across every context. To adequately provide the answers a researcher seeks, it is pertinent that
fit-for-purpose methodologies and methods are applied. The aim of this research is to develop a
framework of factors that governments should consider in order to improve their citizens’
engagement with government’s contents on the internet
This study adopted a multi-method approach based on the taxonomy development model of
Creswell and Clark (2011). A multi-method approach entails the application of two or more
research methods to the investigation of a research question, so as to limit incorrect inferences
and conclusions due to measurement error. A multi-method research approach can be mono-
strategic, i.e. involving a single methodology (qualitative or quantitative), or multi-strategic, i.e.
involving a mixture of methodologies (both qualitative and quantitative) (Venkatesh, Brown, &
Bala, 2013). Multi-strategic multi-method research is also called mixed-methods research. This
study shall adopt a sequential mixed-methods approach across two main phases: a qualitative
first phase and a quantitative second phase. This approach has been termed the exploratory
design (Creswell & Clark, 2011), the qualitative-quantitative sequential exploratory strategy
(Terrell, 2012), and the developmental mixed-methods approach (Venkatesh et al., 2013). In
their widely cited book, Creswell and Clark (2011) observed that there are two variants of the
exploratory design type of mixed-methods study: the instrument development model and the
taxonomy development model. Although both models start with a qualitative phase and end with
a quantitative one, they differ in the way the researcher connects the two phases. In the more
popular instrument development model, the researcher explores a research topic qualitatively
with a few participants, then uses the findings to develop items and scales for a quantitative
survey. More
emphasis is given to the quantitative phase in this variant. In the taxonomy development model,
the qualitative phase is conducted with the aim of identifying important variables, a
classification system or an emergent theory (hypotheses), while the quantitative phase tests the
findings of the first phase in more detail.
This study shall be based on the taxonomy development model of mixed-methods research, and
as such more emphasis will be given to the qualitative phase. This is because there is no existing
theory with which to explicitly investigate citizens' engagement with governments' information
online, and because the study intends to generate, and then test, quantitative hypotheses from an
initial exploratory qualitative study (Creswell & Clark, 2011).
Both phases of this study require sampling the opinions of study participants: the Researcher
questions an entire population or a representative portion of it, gathers the responses and
analyses them. The Researcher therefore chose interviews for the qualitative first phase of the
study, and a survey using questionnaires for the quantitative second phase. Interviews and
surveys allow for the gathering of information from a research population by questioning the
participants (Pickard, 2013) and are popular in information systems/science research (Box,
Hepworth, & Harrison, 2002; Jankowska, 2004; Kuruppu & Gruber, 2006).
This study collected data from Nigerians for both the qualitative and quantitative phases for the
following reasons:
1. Theoretical Relevance: European countries and the United States dominate the
contextualisation of e-public engagement research; this prompted Moatshe and
Mahmood's (2012) call for similar studies in developing African, Asian and Middle-
Eastern countries.
2. Methodological Relevance: As this study is exploratory, contextualising it to a single
cultural background would allow for more in-depth investigation that could inform
future studies (Zainal, 2007).
3. Practical Relevance: As discussed in Chapter 2, internet users in Nigeria grew from
78,740 in 2000 to 67,101,452 in 2014, and stood at 76,688,600 as of 22 May 2015
(Internet Live Stats, 2015). Internet access and use is highest amongst those aged 18-29
(Pew Research Centre, 2014a), the age bracket most likely to participate politically
online (Pew Research Centre, 2014b), and 45% and 49% of Nigerians are convinced
that sharing online information and participating in online political dialogue,
respectively, are effective ways of getting heard by and influencing the government. It
is therefore necessary that the Nigerian Government considers ways by which it can
digitally inform, interact with and meet the needs of her netizens, especially as
politicians, individuals and firms have used the internet in recent times to distort public
opinion (Nwaubani, 2014).
4. As discussed in Chapter 2, the National Orientation Agency (NOA), established in 1993
and tasked with facilitating public engagement in Nigeria (Iredia, 2012; National
Orientation Agency, 2014), is mainly active offline, working through publications such
as the "Political Education Manual", social functions such as the "Heir Apparent" reality
programme, and opinion surveys. On the internet, its website (www.noa.gov.ng),
Facebook page (https://www.facebook.com/nationalorientationagency) and Twitter
handle (https://twitter.com/noa_nigeria) report mainly on the activities of the agency
itself. For an agency tasked with public engagement in a nation whose citizens are
rapidly going online, it is clearly not living up to expectation and should do more to
engage the citizens on the internet.
5. Convenience: Although it was possible to collect data from other developing
countries, the Researcher is Nigerian and found it more convenient to collect data from
Nigerians.
This part of the chapter discusses the methodology, methods and techniques that were adopted
in the first phase of this study. R-OBJ1 and R-OBJ2 were achieved by the completion of this
phase and the findings were discussed in Chapter 5. This phase aimed to develop a hypothetical
model of citizens’ engagement with governments’ online contents, as there are presently no
existing models or theories for such a study. For this phase, data was collected using
interviews, a popular qualitative research technique in information systems research (Schultze &
Avital, 2011). Interviews allow for the retrospective investigation of ‘what is’ and also for
the prospective investigation of ‘what might be’ through direct conversations between
participants and researchers. With interviews, researchers gain insight into the opinions and
lives of the participants, resulting in rich data, which is the hallmark of qualitative research
(Brekhus, Galliher, & Gubrium, 2005).
The Researcher had a set of questions and had intended to collect data solely from interviews
conducted on Facebook. A pilot study was conducted on Facebook with six participants to test
the interview questions and ensure that they would elicit the required data; this lasted for six
weeks. The pilot study not only helped improve the questions; it also helped the Researcher note
the challenges of conducting interviews on Facebook. Through the pilot study, the Researcher
observed that interviews over chat/messaging platforms could be time-consuming, as they
typically entail multiple asynchronous chat sessions for each respondent. The Researcher also
observed that some participants lost the zeal to continue with the interview, especially after
the first two sessions. With this in mind, the Researcher decided that a better approach would
be to ask each participant to choose between textual and oral interviews.
Participants were asked to choose between Facebook/Skype chats, Skype/Telephone calls and
face-to-face interviews where possible. Of the 16 respondents, six had their interviews
conducted over Facebook chat, three over Skype chat, two over Skype calls, three over telephone
calls, and two in person. The data collection process lasted for about four months.
According to Crouch and McKenzie (2006), qualitative research is concerned with gaining in-depth
understanding of and meaning about a given phenomenon, not with making generalisable hypotheses;
therefore, frequencies and statistics are rarely important. The guiding principle in qualitative
research as it concerns sample size is the concept of saturation (Mason, 2010), which refers to
the point when no new data emerges from the data collection process (Francis et al., 2010) or
the point where the emerging data becomes counter-productive and adds nothing to the overall
study (Dey, 1999). However, this concept of saturation has been contentious. Some researchers
rightly point out that most qualitative researchers do not realistically have the resources
required to keep collecting data until the point of saturation (Green & Thorogood, 2013), while
others argue that some studies claim to have reached saturation without proof of what it means
and how it was achieved (Mason, 2010), as there is no framework or set of principles to guide
and report saturation in qualitative studies (Francis et al., 2010).
For this study, the Researcher adopted Francis et al.’s (2010, p. 1234) principles for
specifying data saturation, which state that:
1. The researcher should specify an initial sample size from which to collect data: For this
study, the Researcher shall take 20 as the defined sample size. This is in agreement with common
practice in qualitative PhD research (Mason, 2010) and with established qualitative researchers
like Green and Thorogood (2013). This sample size may well increase if new data keep emerging.
2. The researcher should specify an additional number of interviews to conduct following the
point when saturation appears to have been reached: If no new data emerges at the 20th
interview, the Researcher shall interview five more people to confirm saturation. Adapting this
principle, if no new data emerges by the 15th interview, interviews 16 to 20 serve as the
confirmation set and data collection stops at the 20th. (The resulting stopping rule is sketched
below.)
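Purely as an illustration, the adapted stopping rule above can be expressed procedurally. The
following is a hypothetical Python sketch; the function, its parameters and the per-interview
theme counts are assumptions made for clarity and are not part of the study’s instruments.

def saturation_point(new_themes, initial_n=20, window=5):
    """Return the 1-based interview index at which data collection can stop,
    or None if more interviews are still needed.

    new_themes[i] is the number of new themes that emerged from interview i+1.
    Rule (adapted from Francis et al., 2010): conduct at least `initial_n`
    interviews, and stop once `window` consecutive interviews have produced
    no new themes.
    """
    for k in range(max(initial_n, window), len(new_themes) + 1):
        if sum(new_themes[k - window:k]) == 0:
            return k
    return None

# Hypothetical counts mirroring this study: no new themes emerged after the
# 11th interview, so interviews 12-16 act as the confirmation set.
counts = [5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(saturation_point(counts, initial_n=16, window=5))  # -> 16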
Since this study is interested in investigating factors that affect citizens’ engagement with
government’s online contents, the possible participants are all Nigerian citizens who have access
to the internet and are interested in government-related information. Access to the internet is
determined by demographics like economic background, education, age, gender, value
orientation (Albrecht, 2006) and location (Prieger, 2003). Similarly, interest in governments’
information, activities and politics is dependent on age, economic background, education, gender
and location (Albrecht, 2006; Haerpfer, Wallace, & Spannring, 2002; Isaksson, Kotsadam, &
Nerman, 2014; Melo & Stockemer, 2014; Pew Research Centre, 2014b). A survey by Pew
Research Centre (2014a) shows that age is the strongest indicator of Internet usage in Nigeria.
The survey shows that Internet access and use in Nigeria is highest amongst those aged between
18 and 29 (45%), followed by those aged 30-49 (31%) and 50 and above (4%). Based on this data,
the Researcher was aware that selecting participants aged 50 and above for this study would not
generate the needed data. Furthermore, a survey by Pew Research Centre (2014b) observed that in
emerging and developing countries, of which Nigeria is one, the level of education had the
strongest positive influence on interest in and engagement with politics and governance.
Therefore, the likelihood of recruiting suitable participants for this study is increased if
they are selected from educated Nigerians aged 18 to 49 years.
Where it is impossible to include the entire population of interest in a research study, sampling
is used to select representatives of the population (Pickard, 2013). Welman, Kruger, and Mitchell
(2005) discussed two classes of sampling methods: the probability samples and the non-
probability samples. Probability sampling is concerned with affording a researcher the statistical
basis to generalise his/her study to a wider population by ensuring that participants (sample) are
selected such that they represent the wider unselected population (Pickard, 2013). It is
predominantly the preserve of positivists and the quantitative research methodology and includes
simple random samples, stratified random samples, quota samples, systematic samples and
cluster samples (Kumar, 2005; Pickard, 2013; Welman et al., 2005). On the other hand, non-
probability sampling disregards the probability of selecting participants or constituting a sample
that is representative of the wider population. It is useful “where the elements in a population
are unknown or cannot be individually identified” (Kumar, 2005, pp. 177-178) and where the
purpose of the research is not to generalise findings to the wider population but to learn from the
recruited participants (Brikci & Green, 2007; Pickard, 2013). Non-probability sampling,
therefore, is predominantly the preserve of the interpretivists and the qualitative research
methodology. It is pertinent to state that the purpose of the first phase of this study is not
to generalise findings but to access and use information about the phenomenon as provided by the
participants. Qualitative research that aims to generalise its findings to the wider population
should be questioned (Pickard, 2013).
The Researcher ensured that this study recruited only participants who can provide information
about the target issue (Krueger & Casey, 2000), and can articulate their thoughts in speech
and/or in writing (Strickland et al., 2003). Since this research is in the Nigerian context, the
Researcher recruited participants who are between the ages of 18 and 49 and who hold university
degrees. This is because studies have shown that internet use is high amongst people in that age
bracket and that the level of education has the strongest positive influence on interest in
politics and government-related issues (Pew Research Centre, 2014a, 2014b). To ensure that
quality data was gathered from the participants to be interviewed, the Researcher focused on
observable characteristics that could improve the level of critical thinking and contribution in
this phase; participants’ level of educational qualification was used as a yardstick for
selection.
To recruit the participants, this study adopted different non-probability sampling techniques.
These were:
Accidental sampling: An online survey was developed on Survey Monkey. The link to this survey
was sent to people in the Researcher’s immediate social circle between the ages of 18 and 49;
they, in turn, forwarded it to other people. Everyone who completed the survey was a potential
participant for the interview study, and the data collated from this survey helped the
Researcher recruit the best possible participants. Fifty-one people completed the survey, which
asked for names, age, gender, level of education, and interest in being interviewed.
Self-selection sampling: The online survey informed the respondents about the interview,
requested their contact details and asked them to indicate their interest in being interviewed
by selecting yes, no or maybe. A ‘yes’ selection made the respondent a definite participant for
the interview as long as the other selection indices were satisfactory; a ‘no’ selection ruled
the respondent out; and a ‘maybe’ selection required the Researcher to persuade the respondent,
again provided the other selection indices were satisfactory. Of the 51 respondents, 38 were
willing to be interviewed, 10 were undecided, and three declined. The number of people willing
to be interviewed exceeded the projected sample size for this study.
Snowball sampling: To ensure an even greater chance of the survey reaching credible prospective
participants for the interview, the Researcher requested recipients of the survey to forward it
to people whom they believed would provide valuable data for the study. Of the 51 respondents,
26 were from the Researcher’s immediate social circle while 25 were external.
Purposive sampling: To ensure that the best possible selection of participants was interviewed
from the entire population of survey respondents, the survey asked for their highest academic
qualifications; the higher a respondent’s qualification, the more likely s/he would possess
effective communication and critical thinking skills. Of the 38 respondents who were willing to
be interviewed, 16 had Master’s degrees while 22 had Bachelor’s degrees. (A minimal sketch of
this multi-stage screening logic is given below.)
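Purely for illustration, the self-selection and purposive screening described above can be
expressed as a simple filter. The following is a hypothetical Python sketch; the record fields
and the qualification ranking are assumptions made for clarity and were not part of the study’s
instruments.

from dataclasses import dataclass

@dataclass
class SurveyRespondent:
    name: str
    age: int
    qualification: str   # e.g. "Bachelor", "Master", "Doctorate"
    willingness: str     # "yes", "no" or "maybe"

# Rough ordering used to prioritise invitees by highest qualification.
QUALIFICATION_RANK = {"Bachelor": 1, "Master": 2, "Doctorate": 3}

def shortlist(respondents):
    """Apply the stated selection indices: aged 18-49, degree-holding, and
    willing (or at least persuadable) to be interviewed; order candidates
    by qualification, highest first."""
    eligible = [
        r for r in respondents
        if 18 <= r.age <= 49
        and r.qualification in QUALIFICATION_RANK
        and r.willingness in ("yes", "maybe")
    ]
    return sorted(eligible, key=lambda r: QUALIFICATION_RANK[r.qualification],
                  reverse=True)

# Toy usage:
pool = [
    SurveyRespondent("A", 34, "Master", "yes"),
    SurveyRespondent("B", 52, "Doctorate", "yes"),   # excluded: outside age bracket
    SurveyRespondent("C", 27, "Bachelor", "maybe"),  # retained: persuadable
    SurveyRespondent("D", 41, "Master", "no"),       # excluded: declined
]
for candidate in shortlist(pool):
    print(candidate.name, candidate.qualification)   # A Master, then C Bachelor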
Amongst the 38 respondents who were willing to be interviewed, there were 18 males and 20
females. There were 16 females with undergraduate degrees and four with postgraduate degrees;
six of the males had undergraduate degrees while 12 had postgraduate degrees. Acknowledging that
the intention to take part in research may not translate into actual participation, the
Researcher decided to invite all 38 respondents for the interview; only 14 accepted the
invitation, and five of these were used to pilot the study. The Researcher was therefore
compelled to recruit further participants from outside the survey respondents, and of the 12 who
agreed to be interviewed, only seven eventually participated.
As shown in Table 3.1, 16 people were interviewed: 15 were male; 12 had postgraduate degrees,
and four had undergraduate degrees. It was difficult to recruit female respondents for the
interview, and this may be because males have been seen to be more interested in e-participation
than females (Pew Research Centre, 2014b). All respondents in this study were Nigerians, but not
all were resident in Nigeria.
Table 3.1: Respondents’ Demographic Details

Alias         | Gender | Age | Location       | Profession            | Highest Qualification | Interview Medium
Respondent 10 | Male   | 38  | United Kingdom | Lecturer              | Doctorate degree      | Face-to-Face
Respondent 11 | Male   | 34  | Nigeria        | Engineer              | Master’s Degree       | Facebook Chat
Respondent 12 | Male   | 42  | United Kingdom | Banker                | Master’s Degree       | Face-to-Face
Respondent 13 | Male   | 38  | United Kingdom | PhD student/Lecturer  | Master’s Degree       | Skype Call
Respondent 14 | Male   | 38  | Nigeria        | PhD student/Lecturer  | Master’s Degree       | Skype Chat
Respondent 15 | Male   | 31  | Nigeria        | Job seeker            | Master’s Degree       | Facebook Chat
Respondent 16 | Male   | 37  | Nigeria        | IT Specialist         | Master’s Degree       | Skype Call
Although the Researcher had intended to interview at least 20 participants as discussed in section
3.4.1, getting those who agreed to be interviewed to participate became a serious challenge.
Efforts to recruit more participants continued concurrently with data collection and analysis and
by the 11th participant to be interviewed, no new themes were emerging. The Researcher,
therefore, decided to stop the data collection after the 16th participant had been interviewed with
no new themes emerging.
The participants were all emailed an information sheet (Appendix A) which explained the
purpose of the research, why they had been chosen to participate, what was expected of them,
and whom to contact if they had a complaint about the Researcher. They were also emailed
Consent forms (Appendix B) to read, sign and return.
An initial set of questions (Appendix C) was also drafted. In designing the questions for the
interview, the Researcher considered:
1. The types of interviews: Turner III (2010) discussed three types of interviews: the
informal conversational interview, the general interview guide approach and the standardised
open-ended interview. The informal conversational interview refers to the spontaneous generation
of questions in the course of a natural interaction; here, questions are not pre-planned but
emerge from ongoing participant observation. This method is best suited to studies adopting
observation as a research technique. The general interview guide approach is more structured and
refers to the pre-planned tailoring and presentation of the same question in different ways to
each participant; here the researcher words the questions differently to suit each participant.
This method is best suited to studies adopting individual interviews as a research technique.
The standardised open-ended interview allows the researcher to structure and standardise his/her
interview questions such that every individual participant gets asked the same question using
the same wording; it, however, allows follow-up questions to be asked depending on the
participants’ initial answers to the standardised questions. This approach can be used where the
research technique is either an individual interview or a group interview like the focus group.
This study adopted the standardised open-ended interview approach.
2. The ‘science’ behind the questions asked: In the Researcher’s first Annual Progression
Report panel, the Chairman asked what the science behind the proposed interview questions was.
Coming from an engineering background, the Chairman wanted to be sure that the questions the
Researcher would ask were not purely subjective but were based on existing, tested and trusted
knowledge. The Researcher explained that in the field of information systems and sciences – just
as in the social sciences – qualitative interview development is subjective and is framed around
the information that a researcher is interested in. Furthermore, although the choice of
questions was subjective, the Researcher ensured that they were consistent with the research
framework discussed in Part One of this chapter. Where necessary, questions asked in previous
related studies were borrowed. For instance, Beer, Marcella, and Baxter (1998), Jankowska (2004)
and Kuruppu and Gruber (2006) guided the development of the questions that focused on
participants’ information needs.
As the interview progressed, new themes/areas of interest emerged from the data; these were
subsequently added to the interview questions. The data collection started with 10 questions, but
there were 15 questions altogether by the end of the process.
3.5 Part Four: Quantitative Phase
This part of the chapter discusses the methodology, methods and techniques that were adopted in
the second phase of this study. R-OBJ3 and R-OBJ4 were achieved by the completion of this phase.
This phase of the study is quantitative and tests the hypothetical model developed in the
previous phase. The quantitative methodology allows for the use of statistical, mathematical,
numerical and computational data and techniques in the systematic empirical investigation of
observable phenomena (Given, 2008). In this phase, the Researcher wants to investigate –
statistically and otherwise – what a wider population finds salient amongst the factors
identified in Phase One. Therefore, a survey using quantitative questionnaires presents the best
means of data collection (Kumar, 2005).
The process started with the development of items and questionnaire using the findings from the
qualitative analysis and the literature. The items and questionnaire development procedure
included item generation, content adequacy assessment, and questionnaire development. The
quantitative methodology process also included sampling and pilot study. Details of this process
are found in the first part of Chapter 5.
The study population of focus in this phase is made up of Nigerians aged 18 and above. Although
a probability sampling method would have yielded a high degree of representativeness of the
study population, it requires the identification of each member of the population and the
quantification of that population. The Researcher can neither identify nor individually quantify
the Nigerians with the predetermined characteristics. As a result, this phase of the study shall
rely on non-probability sampling, adopting the snowball sampling technique: the Researcher shall
start from the Researcher’s immediate social circle and spread the recruitment of participants
from there. Since the study population size is unknown, there is no way to statistically justify
a sample size for this phase. Therefore, the Researcher shall assume the second principle of
sampling as discussed by Kumar (2005, p. 168), which says that “the greater the sample size, the
more accurate will be the estimate of the true population mean.” With this principle in mind,
the Researcher shall endeavour to reach as many participants as possible, although the eventual
sample size will still not be representative of the study population.
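Kumar’s principle quoted above reflects the standard statistical result that the standard error
of a sample mean shrinks as the sample grows (SE = σ/√n). Purely as an illustrative aside, the
following hypothetical Python sketch demonstrates this with simulated data; the population
parameters are invented for the demonstration.

import random
import statistics

random.seed(42)

# Hypothetical population: a survey response scale with true mean 3.5, sd 1.0.
TRUE_MEAN, TRUE_SD = 3.5, 1.0

def sample_mean(n):
    """Mean of a simple random sample of size n from the hypothetical population."""
    return statistics.fmean(random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n))

# Larger samples give estimates that cluster more tightly around the true mean:
for n in (30, 120, 480):
    estimates = [sample_mean(n) for _ in range(200)]
    empirical = statistics.stdev(estimates)   # spread of the 200 estimates
    theoretical = TRUE_SD / (n ** 0.5)        # SE = sigma / sqrt(n)
    print(f"n={n:4d}  empirical SE={empirical:.3f}  theoretical SE={theoretical:.3f}")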
3.6 Conclusion
This chapter, in four parts, has presented a conceptual framework built around the UGT, from
which three key questions were identified. It also presented the background of the methodology
adopted in this study across its two phases. The taxonomy-development model of the mixed-methods
approach was adopted, with a qualitative first phase using interviews and a quantitative second
phase using a survey/questionnaire.
Chapter 4 : Qualitative Analysis and Hypothesis Development
4.1 Introduction
This chapter presents the analysis and results of the first phase of this study conducted through
interviews. The procedures involved in the analysis are discussed, and the findings presented
together with illustrative data extracts. The findings are presented as hypotheses which will be
tested in the second phase of this study.
4.2 Qualitative Analysis Method
The thematic analysis method was adopted to analyse the qualitative data collected. According to
Braun and Clarke (2006), thematic analysis is a method through which themes within a qualitative
data corpus are identified, analysed and reported. Themes related to the research questions are
identified to capture important aspects of the data. Thematic analysis is predominant in
qualitative research (Guest, 2012), and its fundamental underlying principles are found in other
qualitative data analysis methods like content analysis, discourse analysis and grounded theory
analysis; but there are nuances. Like thematic analysis, these other methods are used to
identify patterns across qualitative data; but unlike thematic analysis, content analysis is
used for the quantitative analysis of qualitative data by focusing on the frequency of themes
(Ryan & Bernard, 2000; Wilkinson, 2000). Discourse analysis, Interpretative Phenomenological
Analysis (IPA) and grounded theory are all theoretically bound (Braun & Clarke, 2006; Jørgensen
& Phillips, 2002). Discourse analysis is specifically used to identify the underlying meanings
of texts and languages and how texts and languages are used in social contexts (Hodges, Kuper, &
Reeves, 2008). The IPA focuses on aspects of the texts or language that depict people’s real-
life experiences. Grounded theory approaches texts with the sole intent of developing theories
from them (Braun & Clarke, 2006; Charmaz & Belgrave, 2002; Guest, MacQueen, & Namey, 2011;
Smith, Flowers, & Larkin, 2009). Thematic analysis is different because it is qualitative,
flexible and not theoretically bound.
Although thematic analysis is criticised for not having set guidelines (Antaki, Billig, Edwards,
& Potter, 2003), Braun and Clarke (2006) developed a six-phase guide to doing thematic analysis.
These phases are familiarisation with the data, generation of initial codes, searching for
themes, reviewing of themes, definition and naming of themes, and producing the report; they are
shown in Table 4.1. For this study, Braun and Clarke’s six-phase guide to thematic analysis was
adopted and adapted where necessary.
Table 4.1: Phases of thematic analysis (Braun & Clarke, 2006, p. 35)

Phase 1 | Familiarisation with the data
Phase 2 | Generation of initial codes
Phase 3 | Searching for themes
Phase 4 | Reviewing of themes
Phase 5 | Definition and naming of themes
Phase 6 | Producing the report
First Phase:
The data collected from this study were textual (as in Facebook and Skype messaging) and verbal
(as in face-to-face interviews and voice calls over Skype and telephone). Each textual data item
was imported into the NVivo software, which the Researcher used for the data analysis. NVivo is
a Computer Assisted Qualitative Data Analysis Software (CAQDAS) package developed by QSR. Like
other CAQDAS packages, NVivo helps researchers present an accurate and transparent picture of
collated qualitative data while also providing an audit trail of the data analysis process
(Welsh, 2002). Other popular CAQDAS packages include ATLAS.ti, QDA Miner and MaxQDA; however,
the Researcher chose NVivo solely because it is the only CAQDAS package licensed for use at
Northumbria University, which also provides special training for its usage.
Verbal data were first transcribed verbatim and then also imported into NVivo. The Researcher
read through the data immediately after collection, transcription (where necessary) and import
into NVivo. Important and interesting segments of the data were highlighted and noted. Each of
the 16 data items was read thrice in this phase.
Second and Third Phases:
Srnka and Koeszegi (2007) referred to these phases as unitisation (phase 2) and categorisation
(phase 3); respectively, they involve the preliminary tasks of dividing the material into units
of analysis (coding) and developing a category scheme. At the fourth reading of each data item,
the Researcher focused on coding important and interesting segments of the data; the note-taking
in the previous phase made this easier. Interesting segments of the data corpus were collated
into appropriate codes. These codes were determined deductively by the framework designed in
Chapter 3 and inductively by their ability to capture the essence of citizens’ engagement with
governments’ online contents (Srnka & Koeszegi, 2007).
Beginning with the three pre-determined categories (information need, content features and
activities) shown in the theoretical framework, the Researcher conducted several rounds of
preliminary coding on the data corpus. Other relevant categories that would provide theoretical
insight into the phenomenon under investigation emerged from the data corpus. The Researcher
also coded interesting features of the data corpus which fell outside the theoretical framework
and which did not capture the essence of the phenomenon under investigation; according to Braun
and Clarke (2006), this is advisable as they may be of potential importance.
At the end of the preliminary coding and categorisation, initial data sets under the three
pre-determined categories (information need, content features and activities) were identified.
Afterwards, an iterative process of changing, eliminating, adding and re-categorising the data
set commenced, in order to capture the essence of the phenomenon under investigation. This
process continued, even as findings from the interview data corpus were documented, until a
perfect fit for all categories/themes, sub-themes/sub-categories and codes was ascertained
(Braun & Clarke, 2006). After this refinement, five themes were identified: information needs,
the attributes of the contents, the perception of writers’ credibility, citizens’ affinity for
governments’ online platforms, and trust in government/agency. These five themes make up the
variables that directly impact on citizen-content engagement. (A toy illustration of the
deductive-plus-inductive coding behind these themes is sketched below.)
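Purely as an illustration of the mechanics just described, deductive codes (from the
pre-determined categories) and inductive codes (emerging from the data) can be pictured as a
nested mapping from themes to coded data extracts. The following is a hypothetical Python
sketch; the code names are invented, and the extracts are abridged from interview quotes
reproduced later in this chapter.

from collections import defaultdict

# Pre-determined (deductive) categories from the Chapter 3 framework.
DEDUCTIVE_CATEGORIES = {"information need", "content features", "activities"}

# theme -> code -> list of coded data extracts
codebook = defaultdict(lambda: defaultdict(list))

def code_segment(theme, code, extract):
    """File a data extract under a code; the theme may be deductive or inductive."""
    codebook[theme][code].append(extract)

# Deductive coding against a pre-determined category:
code_segment("information need", "economy",
             "Anything that borders on Nigerian economy interests me...")
# Inductive coding, where the category itself emerged from the data:
code_segment("trust in government/agency", "cynicism about propaganda",
             "...information there is always censored...")

for theme, codes in codebook.items():
    origin = "deductive" if theme in DEDUCTIVE_CATEGORIES else "inductive"
    extracts = sum(len(v) for v in codes.values())
    print(f"{theme} ({origin}): {extracts} extract(s)")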
4.3 Findings
This section constitutes the sixth phase of thematic analysis according to Braun and Clarke
(2006) and entails the presentation and description of the results and how they depict the
factors that impact on citizens’ engagement with governments’ information, especially in the
Nigerian context. In this section, each theme is discussed individually, and some data extracts
are presented to further highlight the findings.
4.3.1 Content-engagement
Although debated, previous studies have predominantly indicated that online social activities
such as liking/favouriting, sharing, commenting on and/or spreading online contents – including
government contents – are indicators of audience engagement with those contents (Janssen et al.,
2012; Toder-Alon et al., 2014). For example, studies by Ye and Wu (2010) and Goggins and
Petakovic (2014) reported that message propagation/spread, re-tweets, shares and comments are
evidence of audience content engagement on the internet.
Interestingly, all the respondents in this study reported that the indicator of engagement with
governments’ contents on the internet is reading the contents completely (without abandoning
them before the end). For example:
No matter how lengthy it is; it depends on how engaging it is. If it engages me, I will
read it completely.
Respondent 12
Respondent 13
This finding contrasts with Bonson et al.’s (2015) focus on the number of shares, likes and
comments as proof of citizens’ engagement with governments’ contents. Indeed, a significant
number of studies have relied on the spread of online contents (Cha et al., 2010; Goggins &
Petakovic, 2014; Lerman & Hogg, 2010; Onnela & Reed-Tsochas, 2010; Ye & Wu, 2010) and the
discourse that follows them (De Cindio et al., 2007; Jensen, 2003; Jones & Rafaeli, 2000;
Preece, 2001; Sack, 2005; Wilhelm, 2000; Wright & Street, 2007) as adequate proof of
audience-content engagement.
Although this study was not set up to investigate the validity of the predominantly held opinion
about the indicators of audience-content engagement on the internet, this finding agrees with
the opinions of researchers and practitioners who have observed that social activities on an
online content are not necessarily good indicators of audience-content engagement. They argue
that there is no correlation between the spread of online contents and audience engagement with
such contents, and that comments left on online contents can sometimes be outside the context of
the information provided (Haile, 2014; Manjoo, 2013; Mintz, 2014). This finding also agrees with
reading engagement theory, which suggests that engagement in reading is evidenced by sufficient
attention being given to the text, with the reader concentrating on its meaning (Guthrie, 2004).
(An illustrative contrast between the two families of indicators is sketched below.)
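Purely as a hypothetical illustration of the distinction the respondents drew, the following
Python sketch contrasts social-activity metrics with a read-completion metric for a content
item; the record fields, figures and functions are invented for the example and do not come from
the study or from any named analytics tool.

from dataclasses import dataclass

@dataclass
class ContentStats:
    """Invented analytics record for one piece of government online content."""
    likes: int
    shares: int
    comments: int
    views: int
    completed_reads: int  # views in which the reader reached the end of the text

def social_activity_score(s):
    """Indicator family relied on by e.g. Bonson et al. (2015): count social actions."""
    return s.likes + s.shares + s.comments

def completion_rate(s):
    """Indicator family the respondents described: share of views read to the end."""
    return s.completed_reads / s.views if s.views else 0.0

# A widely shared item can still be rarely read to the end, and vice versa:
viral = ContentStats(likes=900, shares=400, comments=250, views=10_000, completed_reads=700)
quiet = ContentStats(likes=12, shares=3, comments=5, views=800, completed_reads=560)
print(social_activity_score(viral), f"{completion_rate(viral):.0%}")  # 1550 7%
print(social_activity_score(quiet), f"{completion_rate(quiet):.0%}")  # 20 70%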
From the qualitative data analysis, five key themes/variables were identified as being directly
important in facilitating citizen-content engagement: information needs, the attributes of the
contents, the perception of writers’ credibility, citizens’ affinity for governments’ online
platforms, and trust in government/agency.
4.3.2 Information Needs
This theme describes the information that citizens need from the government. As discussed
earlier in Chapter 3 (conceptual framework), providing the needed information to the citizens is
expected to enhance their engagement with governments’ online contents and facilitate e-public
engagement (Davies, 2012; Susha et al., 2015). This assertion was supported by the respondents,
who discussed how their information needs and interests influence their engagement with
government contents on the internet. For instance:
I also have to say this, even as individuals, there are areas of interest. For instance, if
you open a web page, and there is a kind of story, if it is an area that you are interested
in…for instance, I am more into government, politics, economics, sports. So as much as
possible I do not miss those stories, especially if they are interesting stories.
Respondent 10
It will only put me off if the information contained therein is not of interest to me. It all
depends on the topic of interest. For instance, national issues that deal with youth
empowerment, jobs and economy are issues of interest to me. These I read from
beginning to the end.
Respondent 15
According to Respondent 13, “it must be an article that deals with the area that I am interested
in”. The findings therefore suggest that information need directly influences citizens’
engagement with the Nigerian government’s online contents:
Finding 1 (Hypothesis 1): Online government contents that meet the information needs (IN) of
citizens will be positively associated with their engagement with such contents (CE).
As was expected, there were diverse opinions from the respondents as concerns the focus of
information they want from the Nigerian Government. A study by Bonson et al. (2015) found that
citizens in Local Governments within Western Europe are more engaged with information that
directly affects their lives and/or is related to local issues. In this current research, there
were 47 information needs in total; these were categorised into three: (1) information for
political participation, e.g. government’s financial income and expenditure, policies, plans and
activities; (2) information for individual choices/personal use, e.g. for research, employment,
welfare, etcetera; and (3) information on trending socio-political events. These categories
accounted for three of the 10 types of citizen information needs identified by Johannessen,
Flak, and Sæbø (2012, p. 30).
As regards information needs for political participation, citizens are interested in the
policies of government/parties and the performance of politicians and governmental departments
and agencies. For example, the respondents stressed the need for information that updates them
on government’s activities and achievements. This includes information about policies and plans
and how they may affect citizens, information on projects being planned and/or executed by
various government agencies, information on funds accrued to and spent by the government, and
information on the economy (Davies, 2010). They said:
Mainly, I will want the government of the day to publish information about their
strategic decisions and plans of how to move the country forward. I mean, every citizen
wants to know what's going on? What are planned? What's the long-term plan for
Nigeria? I mean the tenure of the government is usually four years, so four years is a
medium term, it is not a long term. So, we want to know what you plan to achieve, how
you are steering the ship of the country for that four years?
Respondent 12
Detailed information about the policies and programmes (sic) of the government
towards achieving developmental goals which include reduction of poverty, illiteracy,
unemployment, infrastructural development and provision of security, etc.
Respondent 6
I also want to see information about how laws and policies by government affect all
citizens, as a means of properly dissemination such information to the layman's
understanding.
Respondent 4
Respondent 1
Daily update of government activities, the current status of all on-going projects, prompt
upload of financial expenditure of government and IGR (internally generated revenue)
statement.
Respondent 9
I need Information about Nigerian state government's monthly allocations. I need to use
it to reconcile and ascertain the level of infrastructural development that is on ground in
the various states.
Respondent 11
…decisions they are making such that they can move the country up economically. Such
decisions include fixing of interest rates that will obviously affect borrowing and lending
from a banking perspective. So those are the kind of information I will like to have.
Respondent 12
Anything that borders on Nigerian economy interests me because I want to know why
certain things are done the way they are done.
Respondent 13
They need to tell the citizenry what is happening to the economy. Talking about foreign
reserves, how many Nigerians know about it? When you talk of per capita income how
many people know about that? The government needs to break down issues of the
economy in a way that every person will need to understand what is happening to our
economy.
Respondent 5
On information for individual choices, the respondents discussed their interest in information
for their personal use and benefit especially as it concerns employment and empowerment of
citizens, access to government’s interventions and citizens’ rights. Furthermore, respondents
mentioned the importance of providing information to individual citizens who may have a
specific need for such information, usually to enrich knowledge in their profession and studies.
This finding supports Faibisoff and Ely (1974), who observed that each individual has his/her
own subjects of interest, which may also depend on the type of activity in which s/he is engaged
at a given moment. This makes it very difficult to determine the information an individual may
need for personal use. In the context of governance, the difficulty of understanding citizens’
personal information needs is compounded by the current era of individualised access to the
government, where citizens deal with the government as individual customers instead of as part
of an organised public (Crenson & Ginsberg, 2003). A possible implication is the need for
governments to create an avenue for information provision on demand.
According to the respondents:
I need information on how to go about a lot of things. There are a lot of opportunities
and provisions by the government which people cannot even access ordinarily.
Respondent 11
If the NOA can focus on information aimed at reducing youth unemployment and
government policies that empower youths, then young Nigerians will definitely keep a
date with them on a daily basis.
Respondent 15
Respondent 9
Uhm, there are a number of reasons why I may go for government information, one is
for professional reasons because I am an academic whose area of specialisation requires
me to get myself acquainted with what is going on in government because I am in the
political science and international relations. I am interested in the Nigerian politics and
African politics as part of my research. So, that could be one reason why I look for
government information because it helps in my teaching and research.
Respondent 13
Finally, the respondents discussed their need for information focusing on current socio-
political issues in the country. This type of information was referred to as ‘local information’
by Johannessen et al. (2012) and Bonson et al. (2015) and covers trending information from the
political scene, local events, projects, etcetera. The respondents highlighted the need for
information on the diverse trending socio-political issues in Nigeria, with examples including
how the government is dealing with corruption and with ethnic and religious conflicts. For
example:
I also want to see on the NOA website consistent update of events, a viable website with
the up to the last-minute information about trending national issues and its effects on the
nation.
Respondent 9
Information about the current state of affairs in the nation. This is to make sure NOA
remains the trusted way to get Government information. It will help avoid rumour
mongering too.
Respondent 1
… One that captures the mood of the nation, it is one that is contemporary in the sense,
I mean you are a Nigerian, and if I ask you what are the issues in Nigeria, there are
things that come to your mind because those are issues of the day. So, if I open up a web
page, I would want to read about those things. For instance, imagine what is happening
in the Senate in Nigeria right now, if I find any news as far as the Senate president is
concerned I want to read it. Especially for those of us who are doing research that is
related to Nigeria, you just want to be on top of things. So, as far as I am concerned, that
sort of news would always capture my attention.
Respondent 10
I will like to be updated on every political issue in Nigeria. If we take a kind of leverage
from the reigning thing that has to do with the slogan of the present government which
is war against corruption. Now, this is one avenue that orientation can help, not only
helping to facilitate government policies and views and aspirations; it will also help to
educate the people more on what corruption is all about. Religious issues, both between
Christians and Muslims. These are some of the issues that NOA can investigate and
bring into the social media and these are issues that are currently dealing with the
Nigerian society.
Respondent 5
4.3.3 Content Attributes
Content attributes describe the features of governments’ online information or contents that may
impact on content engagement. The respondents identified both visual and perceived content
attributes, in agreement with Susha et al. (2015).
Visual attributes of the content
The visual attributes refer to the presentation of governments’ contents on the internet. They
describe the visible and discernible features of governments’ online contents that impact on
audience-content engagement; they include the length of the contents and the use of pictures and
videos. The respondents discussed the influence of the length of a content on their engagement
with it. They described the length of an article in terms of its word-count and/or compared it
with a typical Microsoft Word document of 500 words a page. Most of the respondents were of the
opinion that the longer the content, the less likely they are to remain engaged with it. This
phenomenon has been observed by Haile (2014), Manjoo (2013) and Mintz (2014), who suggest that
the more people read contents online, the more they tune out or disengage. This may be because
the audience does not have enough time to delve into the details of the information in the
content (Zuiderwijk et al., 2012). Similarly, Morkes and Nielsen (1997) recommended that online
contents should have concise texts, as the majority of the audience would want the content to
fit on a single screen. Following a study of online readers, Nielsen (2008) suggested that, by
default, online contents should be restricted to around 500 words unless they are meant for a
targeted elite readership. According to the Respondents:
It should be straight to the point and not too long; I mean (an) article is not a textbook.
There are some articles you read and you have to scroll down for ages. I will think an
online article should not be more than 1500 words; in fact, maybe between 1000 to one
1500 words. Use Facebook posts as an example, how many times have you read a post
or a comment that seems endless? I do not, I just scan through and post mine which is
always short.
Respondent 2
I will say if an average word document is 500 words, that - to me- is just about two
(web) pages. To be honest with you, I think I will consider an article to be long if it is
more than three pages. If it is more than three pages I will consider it too long, that is
about 750 words; you know, less than 1000. If it is more than 1000, at least I know
that...I begin to decide how best to read it.
Respondent 10
It depends on how long, I mean, if it is so large I cannot finish it. I do not have time. I
look at the topic, read the first paragraph, read the closing paragraph then go to the
comments, and read what people commented. I do not want it to be more than 500 words.
Respondent 16
Respondent 1 said “sometimes I scroll down through the article b4 (sic) reading it. If it is too
long, I feel discouraged”. When asked the maximum word-count he could tolerate, he said
“1200”. Similarly, Respondent 3 said he could read more than 1000 words only if he was
“forced to read at gunpoint”.
However, there was also a warning against very short online articles. Respondent 2 said: “I also
hate shallow articles. I clicked on one, and I felt like slapping the person that wrote it. It was
just about five sentences”. This agrees with Henry (2009), whose study shows that online
contents with more words tend to attract more links from external sources on the web. However,
Henry’s study concerns links to online contents rather than engagement as such, and the effect
may be due to the perception that the more the words, the greater the information contained – a
point Respondent 14 alluded to when he said that “Serious issues cannot be discussed in few
lines”. Again, Henry’s focus on links is exactly what Mintz (2014) and Haile (2014) described as
online social activities which have no correlation with audience-content engagement.
In a slightly different opinion, Respondent 4 suggested that although there is usually a limit
to the length of online content he tolerates, his engagement with a content that exceeds this
limit will be sustained as long as the content meets his information needs. According to him:
Anything more than 500 words will ordinarily affect my interest. However, if the content
centres on a current issue and I have an interest in the said issue I will read it no matter
the size.
Respondent 4
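Taken together, the respondents’ stated tolerances (roughly 500 to 1500 words, relaxed when the
topic matches the reader’s interests) amount to a simple screening heuristic. The following
hypothetical Python sketch distils that heuristic for illustration only; the function and its
defaults are invented, with the thresholds taken from the quotes above.

def expected_behaviour(word_count, matches_interest,
                       soft_limit=500, hard_limit=1500):
    """Invented three-way heuristic distilled from the respondents' comments:
    returns 'read', 'skim' or 'abandon'.

    Up to `soft_limit` words, length is no barrier (cf. Respondent 16's 500).
    Between the limits, engagement depends on topical interest (cf. Respondents
    1, 2 and 10). Beyond `hard_limit`, Respondent 4's "no matter the size"
    suggests interest still sustains reading, while uninterested readers abandon.
    """
    if word_count <= soft_limit:
        return "read"
    if word_count <= hard_limit:
        return "read" if matches_interest else "skim"
    return "read" if matches_interest else "abandon"

print(expected_behaviour(450, matches_interest=False))   # read: short enough for anyone
print(expected_behaviour(1200, matches_interest=False))  # skim: long and off-topic
print(expected_behaviour(2000, matches_interest=True))   # read: long but on-topic
print(expected_behaviour(2000, matches_interest=False))  # abandon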
Furthermore, the respondents discussed the role that pictures and videos can play in improving
citizens’ engagement with governments’ online contents. This agrees with the study by Bonson et
al. (2015), who found that pictures improve citizens’ reactions to governments’ posts on
Facebook. Renowned web-usability researcher Jakob Nielsen also suggests that graphics and text
should complement each other (Morkes & Nielsen, 1997). To describe the importance of pictures in
improving audience-content engagement, Respondent 2 said: “I love articles that are full of
pictures… pictures that are relevant to the subject matter.” Similarly, Respondent 6 suggested
that “diagram attached have a role to play (in enhancing content engagement) …it makes it
catchy”. Respondent 14 advised “prepare it (the content) in an attractive format with pictures”;
and Respondent 7 said, “it should have pics (sic), graphs and tables”. Focusing on videos,
Respondent 5 said:
It should not just be about writing articles, people will not care to read much. As you
are writing that article, try to put up some video clips because what people watch visually
attract them a lot. As they are now watching that, they will now try to read a bit of what
you have written. In that way, you have maximised that particular medium of reaching
them. That is my own thinking…there is need to combine print with visual especially
for a society that looks like ours. It is in a developed society that people can easily read
and write, and they are attracted to reading. But in a place where the reading culture is
not very prominent amongst the very many people that are concerned, you now have to
combine that print and visual for them to understand properly what you are doing.
Respondent 5
Finding 2 (Hypothesis 2-1): Online visually appealing government contents (IVP) will be
positively associated with citizens’ engagement with such contents (CE).
Perceived attributes of the content
These refer to the perceived information quality of the contents. According to Iivari and
Koskela (1987), the quality of a content, or its informativeness, is not just about relevance
and comprehensiveness; it is also about recency/timeliness. There should always be the right
amount and quality of information for citizens to access in order to improve e-participation
(Medaglia, 2012). The respondents suggested that perceived content quality is a factor of
timeliness, relevance to the audience, accuracy, simplicity and, where possible, story-like
presentation. These factors have been discussed in the literature (Chen, Clifford, & Wells,
2002; Iivari & Koskela, 1987; Nardi & O'Day, 1999; O'Brien & Toms, 2008; Peng, Fan, & Hsu, 2004;
Shedroff, 1999). As previous researchers have observed, citizens’ engagement with governments’
content is negatively impacted when the information is obsolete (Janssen et al., 2012; Lee &
Kwak, 2012). Respondent 2 said: “Most times you have outdated articles on (government
platforms). Something you read some time ago and you visit months after, it is still there. No
update.”
Respondent 4 suggested that “if it is not on a current issue or an issue on the front burner for
example if it is a stale issue I will not read it”. According to Respondent 9:
“I also want to see on the NOA website consistent update of events, a viable website
with the up to the last-minute information about trending national issues and its effects
on the nation.
Respondent 9
Davies (2012) and Susha et al. (2015) also posit that citizens require information that is
relevant to them from their governments to encourage e-participation; therefore, governments’
contents must meet citizens’ information needs (as discussed earlier). According to the
respondents:
I also have to say this, even as individuals, there are areas of interest. For instance, if
you open a web page, and there is a kind of story, if it is an area that you are interested
in…for instance, I am more into government, politics, economics, sports. So as much as
possible I do not miss those stories, especially if they are interesting stories.
Respondent 10
It will only put me off if the information contained therein is not of interest to me. It all
depends on the topic of interest. For instance, national issues that deal with youth
empowerment, jobs and economy are issues of interest to me. These I read from
beginning to the end.
Respondent 15
Similarly, Janssen et al. (2012) and O'Riain, Curry, and Harth (2012) suggest that lack of
authenticity, inaccuracy of government information and concerns over the trustworthiness of the
source mitigate citizens’ engagement with the content. Respondent 1 opined that he would abandon
a content if he thinks “it is full of lies and unrealistic information”, and according to
Respondent 3 the content will be abandoned if he is “convinced that it is a mere propaganda and
has elements of lies meant to deceive the people.” Respondent 16 says “You know there is (sic)
so much fake news out there... I check to see exactly where the information is from”.
Some of the respondents highlighted their cynicism towards the authenticity of government’s
information. This cynicism about government information was echoed by Lee (2005), who suggested
that advances in technology have increased governments’ ability to engage in pseudonymous and
anonymous communication with the citizens and to proliferate propaganda (Baldino & Goold, 2014).
For example:
Governments in general, everywhere in the world -but it has to do with degrees now-
tries to promote itself in what they are doing and play less on the areas that they are not
doing well. So, there are some elements of emotions and sentiments that go on in that
projection for whatever they are writing and whatever they are giving to us. In areas,
which they are not achieving they play less on it, and begin to highlight more on the
areas they are doing well. So, when you take it back to most of the 3rd world countries
like in Nigeria, the level of corruption makes it impossible for the government to be very
sincere in giving information pertaining to her daily activities.
Respondent 5
There are several e-media and government's registered websites, but information there
is always censored if they are meant to damage the government's image
Respondent 15
I will not want to see information that seems to cover up facts. You can defend
government policies without telling lies. Also, outright wrong information, maybe I start
to read an article I have some bit of information and the writer goes all out to dish out
incomplete or wrong information. Articles that are full of lies…you will always know
an article written to please one patron or another or make him appear good, there are so
many in government circles.
Respondent 2
When you do not trust the people, who are in governance, whatever comes forth from
them you might not be interested in going through. I'm talking about my own personal
perspective.
Respondent 16
I want articles based on facts and figures. I mean, correct figures. For example, you are
quoting the population of Nigeria as, if you start quoting the population of Nigeria as 50
million, I will definitely stop and trash it. So, what also gets my attention is the quality
of information, or data that is in the article.
Respondent 12
As it concerns simplicity, Morkes and Nielsen (1997) suggest that internet users prefer simple
and informal writing. Janssen et al. (2012) observed that governments make the mistake of
assuming that citizens have the capabilities and knowledge levels required to use government
information. They noted that governments normally apply statistical techniques in collecting,
analysing, interpreting and presenting data even where statistical knowledge is scarce. This
results in a situation where the content is not understandable to the general public, and where
citizens and users of the content find it difficult to use the information because they are
unfamiliar with the definitions and categories used to present the data (Zuiderwijk et al.,
2012). Respondent 2 said “the article must be in simple easy to understand English…I do not want
to read an article with a dictionary by my side”. Similarly, Respondent 7 suggested that “it
should not be overly scientific, overly technical, or difficult to understand. It should be very
pictorial and broken down”. Further instances include:
Sometimes too you find an article that is very technical; technical in terms of the usage
of words, and you ask yourself, is this meant for a layman? You know, I better use my
time somewhere else.
Respondent 10
When an article has a lot of bombastic words, it will not really help for one to flow in
reading that article. Not every minute you are opening dictionaries to find out the
meaning of words, whereas the essence of such write-ups is to communicate. And for a
communication breakthrough to take place, it has to do with you internalising everything
you are reading as the whole thing is flowing and you are grabbing it.
Respondent 5
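The simplicity concern the respondents raise is the kind of property that can be roughly
approximated with a crude readability check. The following hypothetical Python sketch uses
average sentence length and average word length as coarse proxies; the thresholds are invented
for illustration and are not derived from the study (established measures such as Flesch Reading
Ease are more involved).

import re

def readability_flags(text, max_words_per_sentence=25.0, max_chars_per_word=6.0):
    """Crude simplicity check: flag long sentences and long ('bombastic') words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    flags = []
    if sentences and len(words) / len(sentences) > max_words_per_sentence:
        flags.append("sentences too long")
    if words and sum(len(w) for w in words) / len(words) > max_chars_per_word:
        flags.append("vocabulary too heavy")
    return flags

sample = ("The government hereby promulgates multitudinous administrative "
          "stipulations necessitating comprehensive citizen acquiescence.")
print(readability_flags(sample))  # ['vocabulary too heavy']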
The story-like presentation of the content – where possible – facilitates the media immersion,
engagement, participation and experience of users (Nardi & O'Day, 1999; O'Brien & Toms, 2008;
Shedroff, 1999). Achieving these is the aim of information design, which is the art and science
of preparing information so that it can be used by human beings with efficiency and
effectiveness; it amounts to designing interactions that are easy, natural and as pleasant as
possible (Horn, 2000). According to the respondents:
If it has lots of grammatical mistakes, so disjointed, not flowing as I read it. It will be
uninteresting to continue reading it. It is a matter of people writing articles and knowing
how to write articles that can really captivate the interest of the audience. The moment
the article is not well written, I do not think I’d waste my time reading such article.
Respondent 5
Unstructured kind of publication may not be easy to read. The structure of the
publication, maybe the lexis and structures of the publication are not well defined, and
it might be a turn off that it is not written by a professional or a learned person.
Respondent 9
It must be catchy and should be written in a story kind of way. It has to be arranged well,
edited well and checked for errors both grammatical or typographical errors. I do not
want to be correcting the grammar and tenses as I read, in fact, it one of the things that
put me off.
Respondent 2
Finding 3 (Hypothesis 2-2): The perceived quality of online government contents (PCQ) will be
positively associated with citizens’ engagement with such contents (CE).
4.3.4 Perception about writer
The respondents described the influence of their judgement about a content’s writer on their
engagement with the said content. This phenomenon is not new in research and is referred to as
evaluative feedback, through which an audience judges a message sender as it concerns his/her
ethos or credibility. The readers judge the “appropriateness, effectiveness or correctness” of
the message source’s opinions, thoughts, feelings or behaviour (Capps, 2001, p. 59). As it
concerns textual communication, the audience judges the writer’s language for professionalism,
grammatical correctness and spelling errors or lack thereof (McLean, 2014). Respondent 14 said:
“I do look for reliable writers/editors. I do not read everything. The credibility, sincerity
and writer’s unbiased (sic) approach to issues matter”; according to Respondent 3, “I lookout
for the author’s credibility, if the author is popular and wrote well in the past, I am likely
to read”. According to Respondent 9:
When I am reading a government article, and I begin to read in between the line that the
writer or the publisher is partisan, i.e., not really telling the truth -it is easy to tell when
one is partisan- it is discouraging. At that point, I will say that the guy is out there to
confuse people not to convince them, and it will make me not consume the article. I
would not read it with an open mind and wouldn’t comment.
Respondent 9
What I do is look at the author of the article and some of them put their details, positions
e.g. editor in chief. Some do write and not provide details. When I look at the author and
the credibility of the author, that determines if I'm going to read it or not. I look at the
author, the person that wrote the article. If his/her title is credible, I would...for example,
if an article is written by the Vice President, Professor Osibanjo, it sparks interest to
(sic) me because I know him personally, and I know how credible he is. Essentially, I
look at the credibility of the writer.
Respondent 12
Finding 4 (Hypothesis 3): The credibility of the writers of government’s online contents (PWC)
will be associated with citizens’ engagement with such contents (CE).
4.3.5 Affinity for Government’s Online Platforms
There are two main motivations for the use of online platforms: extrinsic and intrinsic
(Castañeda, Muñoz-Leiva, & Luque, 2007). Users who are extrinsically motivated to visit an
online platform do so as a means to an end, while the use of the platform is an end in itself
for intrinsically motivated users. As observed by Wang et al. (2005), citizens mainly visit
governments’ platforms for information and/or transactions. Visiting governments’ or any other
online platforms for transactions depicts extrinsic motivation; on the other hand, visiting an
online platform for entertainment is intrinsic. However, where information is needed from the
platform, there is a mixture of both extrinsic and intrinsic motivation (Castañeda et al., 2007;
Wolfinbarger & Gilly, 2001). Reddick and Turner (2012), Sandoval-Almazan and Gil-Garcia (2012)
and Oktem et al. (2014) suggest that citizens visit governments’ platforms for information more
than for transactions. This claim agrees with the interview data, as there was a consensus that
information is the main reason for visiting government’s online platforms in Nigeria, the other
reason being to lodge complaints. For example, Respondent 2 said “As regards government
platforms, it is either to see the policy direction of the government or her agencies…I also
lodge complaints if I have any”; Respondent 7 said, “I visit them to get the official statement
or reports from the government pertaining to certain issues of interest”. According to
Respondent 5:
Okay, Uhm, each time I visit the website of my government, what I will like to know is
what is happening in Nigeria. I go there for the reason of knowing what is happening in
Nigeria.
Respondent 5
The respondents highlighted the impact which governments’ online platforms can have on
citizens’ engagement with the hosted contents. There is abundant literature especially in the field
of e-marketing which shows the impact of media vehicles/platforms on customers’ engagement
with adverts placed on the platforms (Calder et al., 2009; Chen & Wells, 1999; Gibbs, 2012;
Mollen & Wilson, 2010; Peng et al., 2004). Findings from these studies basically suggest that it
is more likely that customers would engage with adverts placed on their platform of choice than
on others (Paek, Hove, Jung, & Cole, 2013). According to Matuszak (2007), businesses should
reach their audience on the online platform they visit most. Succinctly put, if the citizens do not
visit government platforms, then they would not see the contents, and therefore citizen-content
engagement would never take place.
Finding 5 (Hypothesis 4): Citizens’ affinity for government’s online platform (IVP) will be
positively associated with citizens’ engagement with the contents on it (CE)
The respondents discussed some factors that could influence their affinity for and intent to visit governments’ platforms; these include trust in the government/agency and the platform attributes.
Findings from a study by Carter and Bélanger (2005) showed that trustworthiness influences
citizens’ intention to adopt and use e-government initiatives. Trustworthiness refers to users’
perception of confidence in an electronic marketer’s reliability and integrity (Belanger, Hiller,
& Smith, 2002). Citizens must have trust and confidence in both the government and the
technologies used for service or information delivery. In Carter and Bélanger (2005)‘s study,
there were two dimensions of trust: internet-trust and government-trust. However, the
respondents in this current study discussed trust in the incumbent government and trust in agency
leadership as having an impact on their engagement with government’s online contents and their
affinity for government’s online platforms.
Cynicism towards government’s information impacts citizen-content engagement. This assertion has been made by previous studies that pointed out how advances in technology have eased the proliferation of propaganda by governments, and the negative impact this has on citizens’ engagement with government information (Baldino & Goold, 2014; Janssen et al., 2012; Lee, 2005). According to the respondents:
Governments in general, everywhere in the world -but it has to do with degrees now-
tries to promote itself in what they are doing and play less on the areas that they are not
doing well. So, there are some elements of emotions and sentiments that go on in that
projection for whatever they are writing and whatever they are giving to us. In areas
which they are not achieving they play less on it, and begin to highlight more on the
areas they are doing well. So, when you take it back to most of the 3rd world countries
like in Nigeria, the level of corruption makes it impossible for the government to be very
sincere in giving information pertaining to her daily activities.
Respondent 5
There are several e-media and government's registered websites, but information there
is always censored if they are meant to damage the government's image
Respondent 15
When you do not trust the people, who are in governance, whatever comes forth from
them you might not be interested in going through. I'm talking about my own personal
perspective.
Respondent 16
Finding 6 (Hypothesis 4-1a): Citizens’ trust in government/agency (TGA) will be associated with their engagement with government’s contents (CE)

The respondents also discussed how trust in government influences citizens’ affinity for governments’ online platforms. This was observed by Bélanger and Carter (2008), Warkentin, Gefen, Pavlou, and Rose (2002), Carter and Bélanger (2005) and Welch et al. (2005), who discussed how citizens’ perception of confidence and trust in governments impacts their adoption of e-government. According to the Respondents:
There are many people who are against the same government that has set up this agency
and their policies. So, they are not only against the government but also against policies
of the government and such institution like NOA which the government has set up. So,
since this organisation has started for a very long time and so many people look at it to
be one of these avenues that government wants to use to eat money (sic). You know, are
they reorienting us? Let them go and reorientate themselves first before they come to
us. So, there are some people that dismiss issues like that.
Respondent 5
You might also want to think about government’s interference. I know that the NOA is
a government agency, but I expect them to have some level of independence to be able
to do their work, but what you find is sometimes, there is too much intervention. They
are simply not able to do their job. If I have that feeling that this organisation is just
another waste of government funds I am not going to go looking at their websites.
Respondent 10
What may discourage citizens from visiting NOA website is if there is a failure of
governance because NOA is a sensitisation outfit of whatever government that is in
place in Nigeria. When there is failure of governance in such a way that citizens are not
happy the way government is going about things, there is massive unemployment, there
is poverty all over the land, things are not going on well, workers are not being paid
salaries, roads are not fixed, people now get angry with government so anything that
concerns government people develop apathy for it. They do not want to know, they do
not want to hear about it, essentially, when such a situation arises, it will discourage the
citizens from going to NOA website.
It is not going to be essentially about the thing done by NOA because NOA’s
responsibility is to carry out sensitisation on what government is doing, but the moment
government fails in essential sectors, people seeing NOA as a government platform will
develop that hatred about whatever that is going on there. They do not want to know.
Not necessarily because NOA did anything, but because it is a government platform and
they are unhappy with the Government.
Respondent 13
According to Lee and Turban (2001), cited in Carter and Bélanger (2005, pp. 9-10), “the
decision to engage in e-government transactions requires citizen trust in the state government
agency providing the service”. The respondents identified the impact of citizens’ perception of
NOA’s director/leadership on their affinity for its online platforms. This is an interesting finding
as the respondents are not only concerned about the credibility of the content’s writer, but also
about the credibility of the head of the agency which makes the content public. For example:
Also, the turn off for people not visiting NOA also has to do with the personality of the
Director General. You need to show integrity and visibility; you need to get into the
subconscious of people and your followers that information coming from you is for the
interest of everybody and not partisan. When you can do that, you win the trust of the
people. They must have trust that whatever comes from the organisation is for the
people.
Respondent 9
Then again, you also want to look at the people in the organisation especially the
leadership. Who is the chairman, or the DG of NOA? Is he one of those that have been
accused of corruption at one time or the other? Of course, you just ask yourself, what
good can come out of that? There are names that if you bandy them around, people
would say no. You need to put people that would bring legitimacy to that organisation.
If I do not find such people, I will never be interested in NOA affairs.
Respondent 10
The bosses must be part of it. No be to siddon dey waka with police and escort (The
bosses should not just be lazy or moving around with police and escort). That kind of
job is a field/grass root job; let them come down to earth. The moment we see a change
in orientation in the political class, people like me will take them more serious.
Respondent 11
Finding 7 (Hypothesis 4-1b): Citizens’ trust in government/agency (TGA) will be associated
with their affinity for government’s online platforms (IVP)
Platform Attributes
The respondents described attributes of governments’ platforms that would influence their affinity for the platforms; these fall under two headings: the platforms’ similarity with the public sphere (in terms of access, content creation, and interaction/deliberation) and their hedonic/persuasive features.
Habermas (1964, p. 49) defined the public sphere as a realm of our social life in which something approaching public opinion can be formed, while public opinion refers to a collection of different individual views and beliefs (Herbst, 1993). A public sphere must be independent of the state and place no restriction on assembly or the expression of opinions. Every citizen should be allowed access, be free to put forward individual views and opinions, and be free to contest the views and opinions of other citizens in the discourse of issues of general interest (Hauser, 1998; Pusey, 1987a). Habermas went further to suggest that a public sphere
exists when private citizens assemble to converse in an unrestricted manner. The respondents
were of the opinion that governments’ platforms should allow citizens free and unrestricted
access, allow them to post their contents on the platforms, and to interact and deliberate with
other citizens and government officials. These reflect a classical public sphere with the
significant difference being that a public sphere should be without interference from the
government.
The very obvious reason right now is the internet provision in Nigerian. Everyone has
internet access over here (Britain), so it is a lot easy to get on the internet. But in Nigeria,
how many people can afford 1 gigabyte at 2000 Nigerian Naira? So, the cost of getting
on the internet is a barrier, so that also has to be dealt with. The Nigerian government
needs to work with the providers, get this cost down and make it easier for the common
man to have access to the internet because that is the first thing. If they do not have
access to the internet they obviously cannot read this information we are talking about.
That is the very first barrier, and that has to be dealt with.
Respondent 12
Similarly, Respondent 5 wanted the government to ensure that citizens do not pay for access to its online platforms.
On the other hand, Lin and Lu (2000)’s study showed that the ease or difficulty in accessing a
website affects users’ belief in it. According to Respondent 2, “Government’s platforms must be readily accessible.” Similarly, Respondent 16 said:
For example, I visit Vanguard (a newspaper outfit) three to four times a day, and it is
because of their mobile app which makes it is easily accessible. At least through that, I
can have an overview of what is going on…the government needs to copy that.
Respondent 16
If every agency lives up to the expectation, I do not need to beg them to access data. For
the fact that Nigeria has the Freedom of Information bill in place, that means that these
agencies are not prohibited from making information available to the citizens and
members of the public who might need them. They are expected to have the
information/pieces of information ready and structured on their websites for easy access.
Respondent 13
Nigerians do not want to read the information provided by the government, putting a
further barrier before getting the information makes the matter worse. Let there be no
requirement to register before accessing the info.
Respondent 12
Finding 8 (Hypothesis 4-2): Accessibility (FA) will be positively associated with citizens’
affinity for government’s online platforms (IVP)
Bonson et al. (2015) found that there were greater signs of engagement on governments’ Facebook pages when citizens were allowed to post contents on the wall. Having such freedom means that citizens are not mere recipients of government services and information but collaborate amongst themselves and with the government to provide the needed services and information (Bason, 2010; Sørensen & Torfing, 2011), in what is called co-production. The respondents showed interest in being able to create and publish information on government’s online platforms, in agreement with Zuiderwijk et al. (2012), who observed that limiting information provision on governments’ platforms to a minority of researchers affects their use by citizens. For example:
If you look at the responsibility of NOA, it has a lot to do with members of the public,
so I think the website should be open to allow members of the public to post information.
I think if I know that I can make a report, if I know that I can critique the activity of the
NOA, I will be happy to visit the website.
Respondent 10
Being able to post articles on governments’ platforms would definitely help drive more
Nigerians onto the platform. If for example we are friends, and I see your article on the
platform, I will say ‘oh, that is good.' That will also motivate me also to want to put an
article on there.
Respondent 12
However, the respondents suggested that the information posted by citizens on governments’
platforms should be vetted and monitored to avoid misuse. For example:
There's no problem with other Nigerians providing information; it's just that as a
government agency, you want to be seen to provide credible information not just take
information from every tom dick and harry and put it on the internet. You want to vet
that information, check the credibility before putting it on the internet. So, it is good; it
would be good for Nigerians to be able to put information on there but that information
has to be somehow vetted before being allowed to stay on the platform.
Respondent 12
When I say this again, it is with a bit of caution. I would want NOA's website to be for
NOA, but I would also expect NOA to say "look, you are free to post maybe if you
identify concerns with our operation or something is happening somewhere that you feel
we should know about, yes you can post it" but of course when you make a post, I also
expect NOA to have an officer that will be looking through all those posts because you
want to be careful as to what comes on the website.
Respondent 10
Finding 9 (Hypothesis 4-3): The ability for citizens to post contents on government’s platforms
(CC) is positively associated with their affinity for these platforms (IVP)
Closely related to the need for co-production is the need for interaction on government’s
platforms. Lilleker et al. (2011, p. 199) defined a platform’s interactive features as “those which
allow visitors to interact in some way with the host or other visitors”. The respondents discussed
the need for government’s platforms to allow interaction and deliberation amongst citizens and
between citizens and government officials. According to Mahrer and Krimmer (2005) and
Oktem et al. (2014), such capabilities will encourage dialogue between citizens and governments
on governments’ platforms. For example:
There should be a feature that enables interaction among readers. That is where opinions
are formed or quashed. There must be an interactive platform; they could create an app
and allow people download and get engaged in discussions.
Respondent 11
Part of the things most organisations are doing now is moving away from just having a
website and having blogs, Twitter handles, Facebook pages with dedicated people who
do interactions there, update it, respond to chats and enquiries. That will make it a lot
more interesting and challenging to the citizens, and people will now at will always want
to visit. With these, when an issue comes up, they can set up a tweet, and someone
responds and chats "oh why did this happen?". There should always be a feedback.
Feedback encourages continuous usage. The moment there is no feedback mechanism,
it discourages people
Respondent 13
NOA is national and by that we are looking at 150-180 million Nigerians and about 10
million foreigners (who are) resident in Nigeria. So we are looking at about 200 million
people to inform. Already they have a website, but they to make the website interactive
and functional. By interactive, feedbacks can be given; you create a comment area, and
somebody out there would respond to those queries and comments.
Respondent 9
I will prefer a platform with live interactivity where people can chat and call in for
solutions. There must be an interactive forum which will feature both live calls and chats
platforms
Respondent 15
Group chat helps visitors to ask themselves questions and get clarifications. Have you
visited Nairaland.com before? Somebody posts a question or an article and people
contribute. Do you know nairaland.com help people to interact as well as get relevant
information they want? It may take time but by the time you have gone through all the
comments and submissions from people, you would have known almost all that you
wanted.
Respondent 2
Finding 10 (Hypothesis 4-4): The ability for citizens and government officials to interact on government’s platforms (IDelib) is positively associated with their affinity for these platforms (IVP)
The respondents also highlighted the need for governments’ platforms to host challenges and
activities that can attract the youth; these should be interesting and fun. The use of interesting
activities on an online platform as a way of attracting visitors and developing loyalty to the
platform is not new in the literature (Chen et al., 2002; Chen & Wells, 1999; Peng et al., 2004);
these studies suggested that online platforms should be entertaining, fun and imaginative.
Weiksner, Fogg, and Liu (2008) observed that an online platform’s hedonic and persuasive
features include activities that can cause provocation and retaliation, instigate revelation and
comparison, cause competition, and encourage self-expression and group exchange.
Respondent 14 said that, “NOA should think towards using their platform to run promos (sic),
competitions and challenges that are capable of attracting the youths.” According to
Respondent 12:
Another way is to create incentives and try to lure people to whatever information you
are putting on the internet. There are so many ways of doing that; you could start doing
some sort of lottery. You may say you are looking for first 100 readers, and the 100th
person wins something. Try and throw something in the air, something that will motivate
people to go online.
Respondent 12
The respondents also highlighted the importance of getting notification about new contents and
activities on government’s platforms. Andrew, Borriello, and Fogarty (2007, p. 262) referred to
this as suggestion technology and defined it as “one that incorporates active notifications that
contain information that allows someone to do something he or she might not otherwise have
done”. The persuasive capability of the suggestion technology has been studied in online
platforms, especially the social media (Andrew et al., 2007; Fogg & Iizawa, 2008; Weiksner et
al., 2008). For example:
At least every morning you wake up, Facebook reminds you of notifications, Twitter
reminds you of trending news, and all the rest. These people (platforms) remind you of
these things, there are notifications and this is what made them popular. I have three or
five areas of interest, and government should be able to have a mechanism on their
website that I can subscribe to for daily or weekly newsletter to read. Without logging
into the website, I am informed with popups on my smartphone. If it is a catchy
information I can just click and go to the website and read about the publication in detail
and from there, I can make decisions.
Respondent 9
Finding 11 (Hypothesis 4-5): Hedonic and persuasive features of government’s platforms (HF)
are positively associated with citizens’ affinity for these platforms (IVP)
Based on the literature and the opinions of the respondents, there are moderating factors that may influence some of the findings discussed earlier; these include the type of platform and citizens’ level of political awareness. These possible moderating factors are discussed below.
Type of platform
In a study by Johannessen et al. (2012), government websites were found to be the most preferred platforms through which politicians, government administrators and civil society interacted with the government. This was followed closely by a preference for email, whereas social media and contact over the telephone were less popular. In contrast to the finding by Johannessen et al. (2012), findings from this present study indicate that social media, especially Facebook, is the most preferred medium, followed closely by websites. However, the respondents advised that a single medium should not be used, as was also observed by Johannessen et al. (2012). The focus in this section is the influence of the type of medium used by the government on trust in government and on similarity to the public sphere (access, content creation, interactivity, and deliberation) as determinants of affinity for the medium/platform. This is based on the respondents’ opinions and on the literature.
A study by Moy and Scheufele (2000) found that the media type used by governments had an effect on political and social trust. In recent times, governments’ use of social media has been identified as facilitating transparency and trust (Bertot, Jaeger, & Grimes, 2010; Bertot, Jaeger, & Hansen, 2012; Bonsón et al., 2012). In agreement with these studies, the respondents discussed the influence of using social media on their trust in the government and their intent to visit government’s platforms. For instance, according to Respondent 13, the Nigerian government needs to build trust by becoming more active on social media. In his words:
The first step is building trust in the brand. How do you build trust in the brand? You
have to use social platforms that have gained the confidence of Nigerian citizens. This
will help lead people towards the website and over time you can now be independent
because people are now aware that you have started doing the right thing.
Respondent 13
If you create your own website from the start, the Nigerian society would say "na them-
them (it is the same set of untrustworthy people), forget it. Is it today that we have been
seeing this? Is this not an avenue by which the government wants to spend our money?"
And you find out that they will go with such language, and none of them will be
interested in getting in there (visit the platforms). So, that is why I said that it has to go
through Facebook first.
Respondent 5
Finding 12 (Hypothesis 5-1): Social media use by governments would have a more positive
effect than websites on the influence of trust in government/agency (TGA) on citizens’ affinity for government’s platforms (IVP)
Most of the respondents also discussed the impact of platform type on citizens’ access to government’s online platforms. They indicated that the predominant and ubiquitous access to and use of social media by the citizens entails that the government should also be active on such platforms. This is in agreement with Matuszak (2007)’s call for corporations to use social media to reach their audience or a prospective audience where they like to hang out. Moreover, according to Vollmer and Precourt (2008), cited in Mangold and Faulds (2009, p. 359), with social media, consumers/citizens are in control as they have greater access to information and greater command over the consumption of information than ever before. The respondents said:
I would be more attracted if NOA can improve her social media presence as youth are
more likely to search for trending national topics on social media than listen to the radio
(thanks to smartphones and handhelds)
Respondent 3
I really want to them to utilise the social media platform because of the number of people
who use them
Respondent 2
Social media is best because that is the easiest means of getting to the information to the
whole population especially the younger generation.
Respondent 6
One viable platform that I would always recommend is the social media. You
know...even though there are challenges as far as Internet usage in Nigeria is concerned
but, there have been lots of development and improvement in that area so if they can
effectively engage social media to communicate with Nigerians I think that would go a
long way in helping them achieve their objectives.
Respondent 10
Social media could be more effective. A lot of young folks who even lack formal or
tertiary education are alive on social media. So it is faster to spread information there
because these guys will not go about logging into websites.
Respondent 11
Therefore, this suggests that:
Finding 13 (Hypothesis 5-2): Social media use by governments would have a more positive
effect than websites on the influence of accessibility (FA) on citizens’ affinity for government’s platforms (IVP)
The major social media platforms mentioned were Facebook and Twitter, mainly because of the number of Nigerians on them (Facebook) and the brevity of words (Twitter). For example, Respondent 4 said, “contents on Facebook are more likely to be read than contents on NOA website.” According to Respondent 5:
If you look at Nigeria of today, if you want to reach out to the youth very many of them
are on Facebook, if you want to reach out to the youth, Facebook is the best channel to
use.
Respondent 5
Clearly, a lot of Nigerians are on Facebook, so that is a very good platform for NOA to
try and delve into. And Facebook is first on the list.
Respondent 12
Social (especially Twitter) is usually very precise in its reportage (thanks to 140-
character limit). Hence it makes it easier for me follow government updates.
Respondent 3
However, the respondents also mentioned the need for a mixture of different social media
platforms and traditional websites. For example:
If NOA wants to create its own website, it would be beautiful. But in my own opinion,
there must be a way to attract someone from one website to the other. Start from the on
go of getting into that Facebook I am talking about. Not with the aim of staying there
forever. Now when you get there, you now begin to introduce people to your own
website. You could say that anything they see on Facebook, for them to see it in details,
they should go to the website. In that way, you are drawing them from the Facebook to
the website.
Respondent 5
I think a website is a powerful tool, all you need to do is to make it visible. Also, social
media is a powerful tool at the moment; the website should be linked to major social
media platforms so that citizens skip directly to government’s web page in order to get
information.
Respondent 9
Bertot et al. (2010) observed that social media has four key strengths: collaboration,
participation, empowerment, and time. It provides the opportunity for remote users to connect,
socialise, form communities, share information and work towards achieving a common goal. In
a study by Bonson et al. (2015), it was found that citizens were more active on government’s
Facebook accounts which allowed the posting of contents on their wall. Social media allows the
creation and exchange of user-generated contents (Berthon, Pitt, Plangger, & Shapiro, 2012);
this is not possible with traditional websites which are characterised by unidirectional
communication (Cormode & Krishnamurthy, 2008).
Finding 14 (Hypothesis 5-3): Social media use by governments would have a more positive
effect than websites on the influence of collaborative content creation (CC) on citizens’ affinity for government’s platforms (IVP)
Another difference between social media and traditional websites is the possibility for
interactions. Platforms with interactive features allow interaction amongst users and between
users and hosts (Lilleker et al., 2011). There is limited evidence of interactivity on traditional
websites (Lilleker et al., 2011; Lusoli & Ward, 2005; Schweitzer, 2008). On the other hand,
social media is known to be based on interactivity and facilitates communication between
citizens and governments (Hofmann, Beverungen, Räckers, & Becker, 2013; Linders, 2012;
Mossberger, Wu, & Crawford, 2013).
Finding 15 (Hypothesis 5-4): Social media use by governments would have a more positive
effect than websites on the influence of interactivity and deliberation (IDelib) on citizens’ affinity for government’s platforms (IVP)
Political Awareness
Political awareness refers to a citizen’s sensitivity to and interest in government and public
policies and “affects virtually every aspect of citizens’ political attitudes” (Zaller, 1990, p. 1).
The respondents discussed this phenomenon in two respects: awareness of and interest in the government/agency, and awareness of its online platforms. According to the respondents, the level of political awareness is determined by the citizens’ political efficacy and by the government/agency’s effort to be visible or prominent to the public. For the former, a respondent said:
Unlike me, you know there are people who are naturally not cut out for things
concerning the government and all that. Such persons would not like to visit
government’s websites or even read government information. There may be a lot of
people like that in Nigeria, I cannot say how many.
Respondent 13
The latter is referred to as observability by Rogers (2003) in his Diffusion of Innovation theory, while Moore and Benbasat (1991) called it visibility. According to Rogers, it is the degree to which product usage and impact are visible to people. Users’ intent to use a system increases with the awareness that others are using it (Carter & Bélanger, 2005; Moore & Benbasat, 1991; Rogers, 2003). The respondents highlighted the need for governments and government agencies to create awareness about what they do and about their online platforms. This points towards the principles of marketing and advertisement, which entail promoting the concerned agencies and their online platforms (Grow & Altstiel, 2005; Panopoulou et al., 2014). According to Respondent 14, “their platform is not properly advertised”. Similarly:
(There is) lack of awareness; if you do not know that NOA exists, why do you want to
visit them? NOA should start by letting Nigerians know that (they) exist. I’m sure I’m
saying this because I’m aware that there is something like the NOA. If you go to Nigeria
and ask a lot of people, you will be shocked that they do not know that the organisation
exists, so the first thing they should do is let Nigerians know of their presence.
Respondent 10
It is down to what the agency is doing, what are their roles? They need to be recognised
by their roles in society and offline before people can take them serious on the internet.
Respondent 8
Therefore, this suggests that:
Finding 16 (Hypothesis 6): Optimal political awareness would have a more positive effect than poor political awareness on the influence of citizens’ affinity for government’s platforms (IVP) on content engagement (CE)
Following the qualitative analysis, the Researcher hypothesised that six factors (IN, VAC, PCQ, PWC, IVP and TGA) directly influence citizens’ engagement with governments’ contents on the internet (CE). TGA and four other factors (FA, CC, IDelib, and HF) were also hypothesised to indirectly influence CE through IVP. These 11 factors (CE, IN, VAC, PCQ, PWC, IVP, TGA, FA, CC, IDelib and HF) constitute the main constructs that will be further investigated in the next phase of this study. The Researcher also hypothesised that governments’ choice of platforms has a moderating effect on the influence of TGA, FA, CC and IDelib on IVP, with the use of social media likely to have a more positive effect than websites (this is described by PC). Finally, the study hypothesises that citizens’ political awareness would have a moderating effect on the influence of IVP on CE, with an optimal level of awareness likely to have a more positive effect than a poor level of awareness (this is described by PA). These findings and hypotheses are shown in Table 4.2, and the conceptual/hypothesised citizen-content engagement (C-CE) model is shown in Figure 4.1.
82
Table 4.2: Table of Findings and Hypotheses

F6 (H4-1a): Citizens’ trust in government/agency (TGA) will be associated with their engagement with government’s contents (CE). [TGA → CE]
F7 (H4-1b): Citizens’ trust in government/agency (TGA) will be associated with their affinity for government’s online platforms (IVP). [TGA → IVP]
F8 (H4-2): Accessibility (FA) will be positively associated with citizens’ affinity for government’s online platforms (IVP). [FA → IVP]
F9 (H4-3): The ability for citizens to post contents on government’s platforms (CC) is positively associated with their affinity for these platforms (IVP). [CC → IVP]
F10 (H4-4): The ability for citizens and government officials to interact on government’s platforms (IDelib) is positively associated with their affinity for these platforms (IVP). [IDelib → IVP]
F11 (H4-5): Hedonic and persuasive features of government’s platforms (HF) are positively associated with citizens’ affinity for these platforms (IVP). [HF → IVP]
F12 (H5-1): Social media use by governments would have a more positive effect than websites on the influence of trust in government/agency (TGA) on citizens’ affinity for government’s platforms (IVP). [TGA → IVP, moderated by platform type (PC)]
F13 (H5-2): Social media use by governments would have a more positive effect than websites on the influence of accessibility (FA) on citizens’ affinity for government’s platforms (IVP). [FA → IVP, moderated by platform type (PC)]
F14 (H5-3): Social media use by governments would have a more positive effect than websites on the influence of collaborative content creation (CC) on citizens’ affinity for government’s platforms (IVP). [CC → IVP, moderated by platform type (PC)]
F15 (H5-4): Social media use by governments would have a more positive effect than websites on the influence of interactivity and deliberation (IDelib) on citizens’ affinity for government’s platforms (IVP). [IDelib → IVP, moderated by platform type (PC)]
F16 (H6): Optimal political awareness would have a more positive effect than poor political awareness on the influence of citizens’ affinity for government’s platforms (IVP) on content engagement (CE). [IVP → CE, moderated by political awareness (PA)]
Figure 4.1: Conceptual Model of the Findings and Hypotheses (C-CE Model)

[Figure: the hypothesised C-CE model. Information Needs (IN), the content attributes (Visual Attributes (VAC), Perceived Content Quality (PCQ), Perception of Writer’s Credibility (PWC)), Trust in Government/Agency (TGA) and Affinity for Platform (IVP) feed into Content Engagement (CE) via H1, H2-1, H3, H4-1a and H4; Content Creation (CC), Interactivity and Deliberation (IDelib) and Hedonic Features (HF) feed into IVP via H4-3, H4-4 and H4-5; Type of Platform (PC) moderates the paths into IVP (H5-1, H5-2) and Political Awareness (PA) moderates the IVP–CE path (H6).]
Chapter 5 : Quantitative Data Analysis
5.1 Introduction
This chapter presents the quantitative analysis phase of this study. The chapter consists of
four parts. The first part of this chapter presents the generation of items and the development of
the questionnaire. The second part of this chapter presents data cleaning and preparation process,
and the descriptive statistics of collated data as well as the respondents’ profile. The third part
presents the exploratory factor analysis (EFA) and reliability tests of the hypothesised C-CE
model. The fourth part of this chapter presents the data analysis results of the quantitative phase
of this study through the structural equation modelling (SEM) method.
5.2 Part One: Questionnaire Development

This part presents the process through which the quantitative questionnaire was developed. It
covers item generation from the qualitative data, the adequacy assessment, and the questionnaire
development. It also presents the sampling process and pilot study results.
With the qualitative data analysis done, findings presented, and a thematic model developed, the
next step is to devise a scale for measuring citizens’ engagement with government’s online
contents. This scale was developed because there was no adequate or appropriate existing scale
for this study. This study adapted Hinkin, Tracey, and Enz (1997)’s systematic seven-step
process of scale development. The process for this study includes item generation, content
adequacy assessment, pilot survey, questionnaire administration, exploratory factor analysis
(EFA), and structural equation modelling (SEM) through confirmatory factor analysis (CFA).
5.2.1 Item Generation

This is the first step of the scale development and involves the generation of items that would be
used to assess the construct under examination (Hinkin et al., 1997). StatSoft (2013) described
this as a creative process where the researcher develops as many items as possible to
operationalise a construct. Item generation can be done either deductively or inductively. The
deductive approach is based on theoretical definitions of the construct under investigation as
ascertained from the literature. The inductive approach is best when an unfamiliar phenomenon
is being investigated and entails the sampling of participant opinions, analysis of the responses,
categorization based on keywords/themes, and finally the identification of themes. This study
adopted both inductive and deductive approaches in item identification. From the qualitative
data and the literature, 47 items were developed for the 11 constructs identified in Chapter 4.
Sentences and/or phrases that best highlight each construct were identified and selected from the
qualitative data, and wherever possible, definitions or scales related to the constructs were adopted from the literature. The constructs, items and sources are presented in Table 5.1 – Table 5.11.

Table 5.2: Items for IN

Table 5.5: Items for PWC

Table 5.8: Items for FA
Accessibility (FA)
Definition: gauging citizens’ perceived level of access to governments’ platforms
Sources: Interview Data; (Habermas, 1989; Hauser, 1998; Pusey, 1987b)
FA1: I have free access to government’s platforms on the internet
FA2: I do not have to register on government’s platforms to gain access
FA3: I have unrestricted access to government’s platform on the internet

Items for IDelib (excerpt)
IDelib5: I can interact with other citizens on government’s platform
5.2.2 Content Adequacy Assessment

This is an important stage in scale development, which allows researchers to pre-test the generated items and ensure that they are adequate for the intended measurement (Hinkin et al., 1997). The literature indicates that content adequacy assessments are mainly done by either sorting or rating the items. This sorting and rating can be based on either: (1) face validity, which entails that respondents subjectively sort items into the categorical definitions that fit best, or rate them according to how well they operationalise a categorical definition (Baldus, Voorhees, & Calantone, 2015; Germain, 2006); or (2) content validity, which is a statistical approach to the sorting and/or rating of measurement items as it concerns their relevance to the construct being measured (Anderson & Gerbing, 1991; Hinkin & Tracey, 1999; Schriesheim, Powers, Scandura, Gardiner, & Lankau, 1993). Criterion and construct validity are also usually mentioned in the literature on scale construction (Hinkin et al., 1997; Rubio, Berg-Weger, Tebb, Lee, & Rauch, 2003); however, they focus more on the construct and the measurement as a whole rather than on the operationalising items.
Face validity is criticised for its reliance on the qualitative face value of items. Content validity,
on the other hand, allows for a more rigorous process (Rubio et al., 2003). However, Hinkin et
al. (1997) point out that none of the techniques would guarantee scales with validated contents,
but they will provide evidence that the items reasonably operationalise the construct under
examination, and will also reduce the need for subsequent modification of the scale. This study
adopts the content validity technique.
A questionnaire was developed to check the content validity of the 47 items for each of the 11
constructs (Hinkin & Tracey, 1999; Hinkin et al., 1997). Google Forms, an online survey
software developed by Google, was chosen for data collection because it is one of only two
online survey tools recognised by the Northumbria University, the other being Bristol Online
Survey (BOS). The questionnaire consists of 13 sections. The first section contained explicit
instructions and an example of what the respondents had to do. The second section obtained the
respondents’ details apart from their names. Each of the remaining 11 sections contained one of
the 11 construct definitions, followed by the 47 items. The definition of each construct was
printed on the top of each section/page of the questionnaire followed by a randomised list of the
items. The respondents were asked to rate each item according to how well it fits the definition at the top of the page/section. Response choices ranged from 1 (strongly unfit) to 5 (strongly fit). The question-shuffle feature of the survey software was enabled; this ensured that the questions were randomised so as to control response bias that may be due to order effects (Hinkin & Tracey, 1999). In total, the questionnaire had 517 items, i.e. 47 items × 11 definitions.
Some researchers have advised that for content validity check, the sample should be a panel of
experts who know about the construct being measured (Davis, 1992; Rubio et al., 2003). They
argue that selecting experts in an area that deals with the construct under investigation would
help in determining if the scale is well constructed and suitable for purpose. However, the sample
for the content validity in this study consists of postgraduate students and lecturers who are not
e-public participation/engagement experts. This is in agreement with the opinion of Hinkin and
Tracey (1999) and Schriesheim et al. (1993) who argue that the sample should consist of neutral
individuals (without pertinent bias) who have sufficient intellectual ability to rate the symmetry between items and definitions of various theoretical constructs.
Due to the size of the content validity survey (517 items and 11 definitions), and the risk of
response bias by boredom and fatigue, it was important for participants to stay motivated. To
facilitate motivation, the Researcher offered to organise a seminar on scale items development
and content validity for the Ph.D. students in the Faculty of Business Administration, Imo State
University, Nigeria. 13 students took part in the seminar, after which they were asked to voluntarily complete the survey as a formative test. They were asked to do this at their own convenience but within three days. Two lecturers also participated. 13 (of 15) content validity
surveys were returned for a response rate of 86%. Three returned surveys were not usable due
to missing data and the remaining 10 were valid for analysis. Of the 10 participants, four were
female. The participants had an average age of 34 years and an average of seven years’ work
experience.
The sample size for the content validity of this study was 10 (eight Ph.D. students and two
University Lecturers). Although there are different views on the optimal sample size for content
validity studies (Hinkin & Tracey, 1999; Rubio et al., 2003; Schmitt, Klimoski, Ferris, &
Rowland, 1991), this study followed Gable and Wolf (2012)’s argument that an adequate sample
size for content validity should be between two and 20. The use of a sample size of 10 in this
study was also supported by Lynn (1986) who advised that there should be a minimum of three
participants in a content validity study, and having more than 10 participants would be
unnecessary.
The data was analysed for validity using the Content Validity Index for Items (I-CVI), the
Average Content Validity Index for Scales (S-CVI/Ave) and the Universal Agreement Content
Validity Index for Scales (S-CVI/UA). As the name implies, the I-CVI checks for content
validity of each item and is computed as the number of participants giving an item a relevant
rating of either 4 or 5 (on a 5-point scale), divided by the total number of experts (Polit & Beck,
2006). Lynn (1986) recommends a minimum I-CVI of 0.78 where there are six or more
participants. The S-CVI, on the other hand, checks for content validity at the scale level; in its basic, two-participant form it is the proportion of items which both participants rated as relevant or highly relevant (Polit & Beck, 2006; Polit, Beck, & Owen, 2007; Waltz & Bausell, 1981). Since
the content validity survey of this study had 10 participants, the S-CVI can be ascertained by
computing the average I-CVI across the items (S-CVI/Ave). Polit and Beck (2006) suggest a minimum S-CVI/Ave of 0.90, although a lower limit of 0.80 is commonly used by scale developers (Davis, 1992; Polit et al., 2007; Squires, Estabrooks, Newburn-Cook, & Gierl, 2011).
Alternatively, the S-CVI can be calculated by checking the proportion of the items that received
a rating of 4 or 5 by all the participants – this is called universal agreement (S-CVI/UA). When S-CVI/UA is used, the likelihood of achieving total agreement tends to decrease with an increasing number of participants, regardless of the I-CVI values. As a result, there are no agreed acceptable values for S-CVI/UA, but it is good practice to report it.
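To make these indices concrete, the short sketch below computes the I-CVI, S-CVI/Ave and S-CVI/UA as defined above. The ratings matrix is hypothetical, not the study’s data; only the 4-or-5 relevance threshold and the formulas follow the definitions given in this section.

    import numpy as np

    # Hypothetical item-by-rater relevance ratings on a 5-point scale
    # (rows = items, columns = 10 raters); illustrative data only.
    ratings = np.array([
        [5, 5, 5, 5, 5, 5, 5, 5, 4, 4],
        [5, 5, 5, 4, 5, 5, 4, 4, 5, 4],
        [5, 4, 3, 5, 4, 2, 5, 4, 4, 3],
    ])

    relevant = ratings >= 4                  # a rating of 4 or 5 counts as relevant
    i_cvi = relevant.mean(axis=1)            # I-CVI: share of raters judging each item relevant
    s_cvi_ave = i_cvi.mean()                 # S-CVI/Ave: mean I-CVI across the scale's items
    s_cvi_ua = relevant.all(axis=1).mean()   # S-CVI/UA: share of items rated relevant by ALL raters

    print(i_cvi)                                    # [1.  1.  0.7]
    print(round(s_cvi_ave, 3), round(s_cvi_ua, 3))  # 0.9 0.667

As the discussion above suggests, adding raters makes the all-raters condition in S-CVI/UA harder to satisfy even when every I-CVI stays high, which is why no fixed benchmark exists for it.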
It is pertinent to point out that there are other ways of analysing content validity surveys. Tojib
and Sugianto (2006b) discussed five: Content Validity Ratio (CVR), Index of Objective
Congruence, Content Validity Index (CVI), Weighted Mean Score and Inter-Observer
Agreement. Though the weighted mean score is the most used approach across disciplines
(Hinkin et al., 1997; Tojib & Sugianto, 2006a), like the CVI, it was developed in the nursing
discipline by Fehring (1987). The CVI was adopted because it not only considers each item for
validity (I-CVI); it also considers the scale as a whole (S-CVI/Ave), both of which must reach
pre-determined validity scores. The CVI is, therefore, a more stringent validity method than the
weighted mean score method, which validates any item that returns a score of 0.05 and requires
subjectivity in selecting the items to use.
As is shown in Table 5.12, 44 of the 47 items scored over 0.78 in I-CVI; the items that did not
reach the benchmark were removed, and they include CE3, PWC1, and PCQ4. An additional
item (IN5) was removed for theoretical parsimony (Hinkin & Tracey, 1999). A maximum of six
items per construct were allowed in line with common practice (Burton-Jones & Hubona, 2006;
Carter & Bélanger, 2005; Castañeda et al., 2007; Hinkin et al., 1997; Lin & Lu, 2000). However,
all seven items for the IN construct scored 1.0 in I-CVI. To maintain parsimony and reduce the
IN items from seven to six, the total score each item got from the respondents was compared.
IN5 scored the least at 46 and was, therefore, eliminated (Table 5.13). In total, 43 items were
retained after content validation. The 11 scales for the 11 constructs scored over .80 in S-
CVI/Ave (Table 5.14).
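The retention rule just described (drop items with an I-CVI below 0.78, then cap each construct at six items by dropping the lowest total scores) can be sketched as a small helper. The function name and tuple layout are illustrative; the thesis applied these criteria manually rather than in code.

    # Hypothetical helper mirroring the item-retention rule described above.
    def select_items(items, i_cvi_min=0.78, max_items=6):
        """items: list of (name, i_cvi, total_score) tuples for one construct."""
        kept = [it for it in items if it[1] >= i_cvi_min]
        kept.sort(key=lambda it: it[2], reverse=True)  # highest total scores first
        return kept[:max_items]

    # The seven IN items, all with I-CVI 1.0 and the totals from Table 5.13.
    in_items = [("IN{}".format(i), 1.0, t)
                for i, t in enumerate([48, 49, 49, 49, 46, 48, 49], start=1)]
    print([name for name, _, _ in select_items(in_items)])
    # IN5 (total score 46) is dropped, leaving six items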
The reliability of the scales was assessed using the Intraclass Correlation Coefficient (ICC). The
ICC describes the strength of resemblance between units in the same group. It is the assessment
of consistency between quantitative measurements made by different individuals who
observed/measured the same behaviour or phenomenon (Squires et al., 2011). Each of the 11
constructs was analysed for reliability. According to Raat, Botterweck, Landgraf, Hoogeveen,
and Essink-Bot (2005), an acceptable ICC should be above 0.70; it is optimal above 0.80 and
excellent above 0.90. All the scales for the 11 constructs were above the acceptable level of 0.70
(Table 5.15).
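The alpha and ICC values in Table 5.15 were produced in SPSS. As a minimal sketch of what Cronbach’s alpha measures, it can be computed directly from a respondents-by-items matrix as below; the data here is made up, and note that under a consistency definition the average-measures ICC coincides numerically with alpha.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) ratings matrix."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Made-up ratings from five respondents on a three-item construct.
    demo = np.array([[5, 4, 5],
                     [3, 3, 4],
                     [4, 4, 4],
                     [2, 3, 2],
                     [5, 5, 5]])
    print(round(cronbach_alpha(demo), 3))  # 0.932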
Table 5.12: I-CVI Scores

CE: CE1 1.00, CE2 0.90, CE3 0.70, CE4 0.80
IDelib: IDelib1 1.00, IDelib2 1.00, IDelib3 0.90, IDelib4 0.90, IDelib5 0.80
HF: HF1 0.80, HF2 1.00, HF3 0.90
FA: FA1 1.00, FA2 0.80, FA3 1.00
CC: CC1 1.00, CC2 0.90, CC3 1.00
PWC: PWC1 0.70, PWC2 0.90, PWC3 1.00, PWC4 1.00
IVP: IVP1 0.90, IVP2 1.00, IVP3 1.00, IVP4 1.00
TGA: TGA1 1.00, TGA2 1.00, TGA3 1.00, TGA4 1.00
IN: IN1 1.00, IN2 1.00, IN3 1.00, IN4 1.00, IN5 1.00, IN6 1.00, IN7 1.00
VAC: VAC1 1.00, VAC2 1.00, VAC3 1.00
PCQ: PCQ1 0.90, PCQ2 0.90, PCQ3 1.00, PCQ4 0.30, PCQ5 1.00, PCQ6 0.90, PCQ7 1.00

Table 5.13: Total Scores and I-CVI for the IN Items

Item | Ratings (Raters 1–10) | Total score | Number of agreements | I-CVI
IN1 | 5 5 5 5 5 5 5 5 4 4 | 48 | 10 | 1.00
IN2 | 5 5 5 5 5 5 5 5 5 4 | 49 | 10 | 1.00
IN3 | 5 5 5 5 5 5 5 5 5 4 | 49 | 10 | 1.00
IN4 | 5 5 5 5 5 5 5 5 5 4 | 49 | 10 | 1.00
IN5 | 5 5 5 4 5 5 4 4 5 4 | 46 | 10 | 1.00
IN6 | 5 5 5 5 5 4 5 5 5 4 | 48 | 10 | 1.00
IN7 | 5 5 5 5 5 5 5 5 5 4 | 49 | 10 | 1.00
Table 5.14: S-CVI/Ave Scores

Construct Title | Abbreviation | S-CVI/Ave | S-CVI/UA
Content Engagement | CE | 0.85 | 0.25
Interactivity and Deliberation | IDelib | 0.92 | 0.40
Hedonic Features | HF | 0.90 | 0.33
Accessibility | FA | 0.95 | 0.75
Collaborative Content Creation | CC | 0.97 | 0.66
Perception about Writer’s Credibility | PWC | 0.90 | 0.50
Affinity for Government’s Platforms | IVP | 0.98 | 0.75
Trust in Government and Agency | TGA | 1.0 | 1.0
Information Need | IN | 1.0 | 1.0
Visual Attributes of the Contents | VAC | 1.0 | 1.0
Perceived Content Quality | PCQ | 0.86 | 0.42
Table 5.15: ICC Scores

Measure | CE | IDelib | HF | FA | CC | PWC | IVP | TGA | IN | VAC | PCQ
Cronbach’s Alpha | .945 | .964 | .922 | .965 | .974 | .972 | .906 | .965 | .982 | .969 | .969
Intraclass Correlation (average measures) | .940 | .963 | .908 | .962 | .973 | .971 | .891 | .961 | .981 | .967 | .966
95% Confidence Interval, lower bound | .911 | .945 | .861 | .944 | .960 | .958 | .836 | .942 | .971 | .952 | .949
95% Confidence Interval, upper bound | .963 | .977 | .943 | .976 | .983 | .982 | .932 | .976 | .988 | .980 | .979
5.2.3 Questionnaire Development
Having validated the items, the next phase was to develop an attitudinal scale from these
items. Attitudinal Scales are used to collect quantitative data about the opinions, attitude and
beliefs of a population (Ross, 2005). A seven-point Likert scale was developed using the 43
retained items. The seven-point scale was adopted because five- or seven-point Likert scales
create variance that helps examine relationships among items and scales. They also create
adequate internal consistency reliability estimates (Hinkin et al., 1997). The questionnaire had 43 Likert items for the 11 constructs, two sets of multiple-choice (multi-answer) questions which investigated citizens’ choice of information from the government and their choice of online platforms, and one set of binary-type questions which checked citizens’ level of political awareness (see Appendix E).
Hinkin et al. (1997) observed that in the literature, recommendations for item-to-response ratio
suggest that there should be about five to eight participants per item. However, Hinkin et al.
(1997) advised the use of a conservative approach because an increase in sample size is likely
to increase the chances of attaining statistical significance and distorting the practical meaning
of the results. A minimum of 215 respondents was projected for the main survey, i.e. five participants per item for the 43 items.
5.2.4 Research Participants and Sampling

While the qualitative phase of this research collated data from Nigerians between the ages of 18 and 49 who had gained university degrees, this quantitative phase was more encompassing, and the participants included Nigerians above 18 years of age whether or not they had gained university degrees. This ensured that the study could reach diverse Nigerians and therefore sample diverse opinions as to the factors that influence citizens’ engagement with governments’ online contents. Furthermore, the qualitative phase was aimed at gathering in-depth information on the factors that influence citizens’ engagement with governments’ online contents, and there was the need to speak with people who are knowledgeable enough, who can express their thoughts clearly, and who can provide the required information, hence the need to recruit participants who had university degrees. On the other hand, the quantitative phase was aimed at testing the information gathered during the qualitative phase across different demographics, hence the need to recruit both university graduates and non-graduates.
Participants were recruited both offline (paper-based) and online (Google Forms) using the
snowball sampling method. The paper-based version was emailed to contacts in Nigeria, who
printed, distributed, collated and couriered the completed questionnaires back to the Researcher.
5.2.5 Pilot Study
Two rounds of pilot studies were conducted to test the scale for reliability and validity. An
adequate sample size for a pilot study is debated in the literature. Some researchers suggest that
it should be at least 10% of the sample projected for the main study (Connelly, 2008; Treece &
Treece Jr, 1977), others suggested 12 as an adequate sample size (Julious, 2005), and some said
it should be a minimum of 10 and maximum of 30 (Hill, 1998; Isaac & Michael, 1995). Using
the ‘10% rule’ as a guideline, this study arrived at an adequate sample size of approximately 22
participants (10% of the projected 215 respondents for the main survey). For the first round of
the pilot study, questionnaires were developed using Google Forms and were distributed on
different Facebook groups. After four days, 25 questionnaires were completed.
The data collected from the first round of the pilot study was tested for reliability using SPSS. Every construct returned a Cronbach’s Alpha value ≥ 0.70 (Loewenthal, 2001; Saunders, Aasland,
Babor, De la Fuente, & Grant, 1993) apart from the VAC construct which returned a value of -
0.030. However, the Cronbach’s alpha if item (VAC1) deleted was 0.769 as suggested by SPSS
(Table 5.16).
Table 5.16: Reliability of the First Pilot Study (Cronbach’s Alpha)

Measure | CE | IDelib | HF | FA | CC | PWC | IVP | TGA | IN | VAC | PCQ
Cronbach’s Alpha (with all original items) | .76 | .88 | .70 | .75 | .84 | .76 | .75 | .84 | .81 | -.03 | .79
Cronbach’s Alpha if item deleted (VAC1, as suggested by SPSS) | – | – | – | – | – | – | – | – | – | .76 | –
Following the pilot study, some issues were raised by respondents, which resulted in both minor and major changes to the questionnaire. The minor changes included:
2. The shortening of item wording wherever possible; for example, “I believe that writers of government contents are usually reliable” was changed to “writers of government contents are usually reliable.”
3. The rewording of the item (VAC1) which affected the Cronbach’s Alpha of the VAC construct, from “in my opinion, government’s online contents are usually too long” to “government’s contents are usually of an appropriate length (not too long or too short)”.
The major changes were:
1. The Likert items measuring citizens’ affinity for government’s platforms (IVP) were changed following advice from the research supervisor. The purpose was for the items to match items that have been used to operationalise intent to visit online platforms in the literature. This reduced the IVP Likert items from four to three.
2. The Likert items measuring citizens’ information need (IN) were also changed. This was
because the initial version generated data specifying particular topics the citizens want
from the government (which has already been ascertained during the first phase of the
study). The new version contains Likert statements which check if government
platforms contain the needed information. This reduced IN Likert items from six to
three.
With the changes effected, the total number of Likert items reduced from 43 to 39.
Constructs affected by the major changes (IVP and IN) were put through a face validation
process (Baldus et al., 2015; Germain, 2006; Nevo, 1985) by nine Ph.D. students. The
students were given the definitions of the two constructs and asked to sort a list of six items
into the matching definitions. All the items were matched to their intended constructs. At
the end of this process, there were 11 constructs with 39 Likert items, three multiple-choice
(multi-answer) questions and three multiple-choice (single-answer) questions (Table 5.17).
Table 5.17: Question Types
Using the same method as in the first pilot study, a second round of pilot study was conducted
with a different set of respondents (15 persons). The reliability of all the instruments was re-
assessed using the Cronbach’s alpha reliability coefficient; all were above the conventional score of .70 (Table 5.18).
5.3 Part Two: Data Preparation and Descriptive Statistics
Within four weeks, 276 questionnaires were returned: 106 online (38%) and 170 on paper (62%). The paper-based responses were manually entered into the spreadsheet that Google Forms had automatically generated for the online responses, resulting in a single spreadsheet containing all the data collected offline and online.
To prepare the data for analysis, all nominal data were translated into numerical forms. Options to the multiple-response questions (IN and PC, as indicated in Table 5.17) were treated as individual variables and translated into binary forms. Therefore, answers to the IN4 question resulted in seven variables, and answers to PC1 and PC2 resulted in five variables apiece. SPSS was later used to combine these pseudo-variables into their original variables.
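As an illustration of this treatment of multiple-response questions, the sketch below uses pandas; the raw column format (options separated by semicolons) is an assumption made for the example.

```python
import pandas as pd

# Hypothetical raw PC1 column: each cell lists the options a respondent ticked.
responses = pd.DataFrame({"PC1": ["Facebook;Websites", "Twitter", "Facebook;Blog"]})

# Expand into one binary pseudo-variable per option (1 = selected, 0 = not),
# mirroring how the IN4, PC1 and PC2 answers were prepared in this study.
binary = responses["PC1"].str.get_dummies(sep=";").add_prefix("PC1")
print(binary)
```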
The data was screened to identify cases with missing data and/or unengaged responses. Using Microsoft Excel, each case was screened for blank columns (missing data). Cases with 10% or more missing data were removed (Bennett, 2001; Dong & Peng, 2013); this resulted in the removal of 14 cases. The remaining 262 cases were further screened for unengaged responses, and two cases with a standard deviation of less than 0.5 were removed, resulting in a total of 260 cases.
The variables were then screened for missing data, and 25 variables were identified as having at least one missing value (Appendix F). In reality, however, there were 11 variables with missing data, because the PC2x, IN4x and PC1x pseudo-variables were derived from three original variables for easier analysis. There are two main approaches to handling missing data: the conventional and the advanced approaches (Soley-Bori, 2013). The conventional approach includes listwise deletion, which removes all the cases with missing data, and the mean-imputation method, which replaces the missing data with the mean of the non-missing values. The listwise deletion method could exclude a large portion of the original sample, while mean imputation could result in “biased estimates of variances and covariance and should be avoided” (Soley-Bori, 2013, p. 7).
The advanced approaches include maximum likelihood and multiple imputation. Maximum likelihood generates a variance-covariance matrix for the variables based on all available data points; however, it requires special software packages and advanced analytical skills. Multiple imputation, on the other hand, runs simulations on the missing data relative to the available data in an attempt to replace the missing data with data that is most likely to be similar to the available data: it looks at patterns in the available data and makes a probability judgement as to what the missing data would be (Carpenter & Kenward, 2012; Rubin, 2004). Multiple imputation replaces each missing item with two or more values which represent a distribution of possibilities (Allison, 2002; Soley-Bori, 2013). It shares the same optimal properties as maximum likelihood and also removes some of its limitations, as it can be used with any conventional software package and provides “consistent, asymptotically efficient and asymptotically normal estimates” (Soley-Bori, 2013, p. 8).
The Multiple Imputation function of SPSS was used to replace the missing data with the aim of maintaining the sample size. Five iterations of this imputation were conducted, and the missing data were replaced. The implication is that for every analysis done on the data, SPSS provides six different results: one for the original data and five for the five iterations of the imputation process. Depending on the nature of the analysis, there may also be an additional result, namely the pooled result of the five iterations as generated by SPSS. However, because there are certain analyses for which pooled results cannot be generated, this study follows Wayman’s (2003, p. 5) advice by running the statistical analysis on each of the five multiple-imputation datasets and averaging the individual results to produce a single set of results (the pooled result); this approach is also supported by Sinharay, Stern, and Russell (2001). In this chapter, wherever possible, only pooled results from the multiple imputation are tabulated and presented; detailed tables with results from the original data and the five iterations are presented in the Appendices.
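The imputation and pooling were done in SPSS; the sketch below illustrates the same run-then-average logic with scikit-learn's IterativeImputer. The per-imputation analysis shown (column means) is only a stand-in for whatever statistic is actually being pooled.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pooled_result(data: pd.DataFrame, n_imputations: int = 5) -> pd.Series:
    """Impute the dataset several times, analyse each completed dataset,
    and average the per-imputation results into a single pooled result."""
    results = []
    for seed in range(n_imputations):
        imputer = IterativeImputer(random_state=seed, sample_posterior=True)
        completed = pd.DataFrame(imputer.fit_transform(data), columns=data.columns)
        results.append(completed.mean())  # stand-in for the real analysis
    return pd.concat(results, axis=1).mean(axis=1)
```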
Of the 260 valid responses, 58% were male, and 51% were in the age range of 29-35. 60% of the respondents were single, and a majority were university graduates (56%), with 28% having completed postgraduate degrees. Slightly over a quarter of the respondents (28%) earn between 50,000 and 99,000 Naira (£126.76 - £253.51) monthly; this is closely followed by those who earn between 100,000 and 199,999 Naira (£253.51 - £507.02) (26%). The largest occupational group was civil servants (23%), closely followed by professionals (20%). The pooled data of the respondents’ profile is shown in Table 5.19; Appendix J shows this data across the original data and the five iterations.
Table 5.19: Respondents' Profile (Pooled Iteration)
Table 5.20 and Table 5.21 present the descriptive statistics for the endogenous and exogenous constructs used in this study, as well as for the dichotomous and multiple-response data. In Table 5.20, only ID5 had missing data and required multiple imputation; therefore its pooled value is presented. Details of the data across the original data and the five multiple-imputation iterations are presented in Appendix K. Similarly, Table 5.21 contains the pooled iteration of the dichotomous and multiple-response data; it is presented in more detail, with the original data and the five multiple-imputation iterations, in Appendix L.
Table 5.20: Descriptive Statistics of Likert Variables
Variable Mean S.D. (original data; ID5 pooled)
ID2 3.52 1.80
ID3 3.99 1.75
ID4 2.99 1.78
ID5 (Pooled Data) 4.30 1.79
HF1 3.38 1.76
HF2 2.55 1.56
HF3 3.00 1.74
FA1 4.17 1.78
FA2 4.16 1.75
FA3 4.02 1.76
CC1 3.98 1.77
CC2 4.53 1.70
CC3 3.78 1.77
PWC1 3.87 1.61
PWC2 3.10 1.45
PWC3 4.04 1.49
IVP1 4.19 1.43
IVP2 4.37 1.52
IVP3 4.12 1.42
TGA1 2.99 1.68
TGA2 3.00 1.54
TGA3 2.70 1.40
TGA4 3.44 1.38
IN1 3.96 1.60
IN2 3.67 1.69
IN3 3.55 1.65
VAC1 3.78 1.31
VAC2 3.76 1.39
VAC3 3.37 1.36
PCQ1 4.20 1.46
PCQ2 4.02 1.47
PCQ3 3.28 1.48
PCQ4 3.85 1.51
PCQ5 3.48 1.55
PCQ6 4.15 1.58
Table 5.21: Descriptive Statistics of Dichotomous and Multi-Response Variables
To prepare the data for SEM, it was analysed for the assumption of independent errors by computing the standardised residuals. With CE as the dependent variable and TGA, PWC, VAC, IN, IVP and PCQ as independent variables, the Durbin-Watson statistic was 1.95 across the five multiple-imputation iterations, as indicated in Table 5.22. This value is very close to the ideal value of 2.0 and well above the minimum threshold of 1.0, indicating that the residuals are uncorrelated (Durbin & Watson, 1950). Scatterplots were also visually inspected for outliers across the original data and the five iterations. The standardised residual points were all between +3 and -3 on both the Y axis (regression standardised residual) and the X axis, indicating that there were no outliers (PSU, 2016). Because they are largely the same, scatterplots for the original data and two (the first and the fifth) of the five iterations are shown in Appendix G.
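Outside SPSS, the same independence-of-errors check can be reproduced with statsmodels; a minimal sketch follows, assuming a DataFrame `survey` holding composite scores for each construct (an assumption made for the example).

```python
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Regress CE on its hypothesised predictors and inspect the Durbin-Watson
# statistic of the residuals (values near 2.0 indicate uncorrelated errors).
X = sm.add_constant(survey[["TGA", "PWC", "VAC", "IN", "IVP", "PCQ"]])
residuals = sm.OLS(survey["CE"], X).fit().resid
print(durbin_watson(residuals))  # this study obtained approximately 1.95
```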
Table 5.22: Durbin-Watson's Statistics for CE
Standardised residuals were also computed with IVP as the dependent variable and HF, FA, PA, TGA, CC and IDelib as independent variables. This time, the Durbin-Watson statistic was 2.1, which was still indicative of uncorrelated residuals, as indicated in Table 5.23. However, five outliers were identified on the scatterplot and removed (Appendix H). With the outliers removed, the Durbin-Watson statistic remained stable at 2.1, and the scatter points fell within +3 and -3 on both axes, as shown in Appendix I.
In summary, to make the data fit for SEM, five cases (outliers) were removed from the data sample, resulting in a total sample size of 255 after data preparation.
5.4 Part Three: Factor Analysis and Reliability Test
This part of the chapter presents the factor analysis and reliability test of the quantitative data. This is important because the scale used in this study was developed from qualitative data, and there is a need to identify the underlying relationships between the measured variables and to refine the hypothesised model if necessary (Thompson, 2004; Williams, Onsman, & Brown, 2010). Cronbach’s alpha was also used to check the scale for reliability (Santos, 1999).
Exploratory factor analysis (EFA) is a data reduction technique which takes a large set of variables and looks for a way in which the data can be reduced or summarised using a smaller set of variables or components (Thompson, 2004). It does this by looking for clumps or groups of variables with very strong inter-correlations. Factor analysis can thus help reduce a large number of related variables to a more manageable and efficient number of variables that measure a construct (Loehlin, 1998). EFAs are essential in scale development and should be conducted before a confirmatory factor analysis (CFA) (Fabrigar, Wegener, MacCallum, & Strahan, 1999; Hinkin et al., 1997). In this study, EFA was conducted following the guidelines of Williams et al. (2010), which involved the following steps:
Checking for the suitability of the dataset for factor analysis: The suitability of a dataset for factor analysis is determined by the data type, the sample size, and the strength of the relationships or inter-correlations among the variables or items within the measurement tool (Osborne & Costello, 2009; Williams et al., 2010). Of the 45 items in this study, only 39 were selected for factor analysis, and these were all Likert items (see Table 5.17). The six unselected items were nominal data in binary/dichotomous form and were not appropriate for factor analysis (Bartholomew, Steele, Galbraith, & Moustaki, 2008; Knol & Berger, 1991). Although some researchers routinely treat binary data as continuous, doing so is prone to the “appearance of 'difficulty' factors, i.e. factors based on items with similar distributions rather than similar content or skill similarities” (IBM, 2014). The 39 Likert items were spread across 11 constructs. The number of cases used in this study (255) was also adequate, i.e., greater than 195, which is the minimum expected sample size for 39 items at a ratio of 5 respondents to 1 item (Hinkin et al., 1997; Lynn, 1986). Furthermore, preliminary analysis indicated that the 39 items were suitable for factor analysis: with a Pearson’s r correlation value greater than or equal to 0.3, all 39 items in the pooled result and the five multiple-imputation iterations correlated with at least one other item. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.878 in the pooled result, which is well above the recommended value of 0.5 (Tabachnick & Fidell, 2007; Williams et al., 2010), and Bartlett’s Test of Sphericity was statistically significant at p < 0.001, as shown in Table 5.24.
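These suitability checks can be reproduced with the Python factor_analyzer package, as sketched below; `likert_items` is assumed to be a DataFrame holding the 39 Likert variables.

```python
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

chi_square, p_value = calculate_bartlett_sphericity(likert_items)
kmo_per_item, kmo_overall = calculate_kmo(likert_items)
print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.4f}")
print(f"KMO (overall) = {kmo_overall:.3f}")  # this study obtained 0.878
```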
Factor Extraction: This involves determining, from a set of items, the smallest number of factors that best represent the interrelationships amongst the given set of items. This study used the principal component analysis method, which is the most popular extraction method (Osborne & Costello, 2009). Based on the Kaiser criterion, 10 factors were identified as having eigenvalues greater than 1.0. These 10 factors cumulatively explained an average of 67.8% of the variance across the five multiple-imputation iterations. The communalities in the pooled result of the multiple-imputation iterations were mostly above the recommended 0.5 threshold, indicating that a substantive amount of the variance in each variable is accounted for (Field, 2005). The only exception was PWC2, which had a communality value of 0.439 and was therefore eliminated from subsequent analysis, as indicated in Table 5.25 (see Appendix M for a complete table).
Table 5.25: Communalities
Variable Initial Extraction (Pooled Results)
CC2 1.000 0.731
CC3 1.000 0.656
PWC1 1.000 0.709
PWC2 1.000 0.439
PWC3 1.000 0.691
IVP1 1.000 0.699
IVP2 1.000 0.772
IVP3 1.000 0.753
TGA1 1.000 0.694
TGA2 1.000 0.813
TGA3 1.000 0.770
TGA4 1.000 0.606
IN1 1.000 0.629
IN2 1.000 0.645
IN3 1.000 0.639
VAC1 1.000 0.614
VAC2 1.000 0.790
VAC3 1.000 0.800
PCQ1 1.000 0.636
PCQ2 1.000 0.614
PCQ3 1.000 0.609
PCQ4 1.000 0.616
PCQ5 1.000 0.542
PCQ6 1.000 0.564
Factor Rotation: This helps clarify, simplify and interpret the results of factor extraction by presenting a pattern of loadings which highlights the variables that clump together; it can be done with either an orthogonal or an oblique approach (Williams et al., 2010). The orthogonal approach assumes that factors are uncorrelated and therefore produces outputs that are easier to interpret. The oblique approach, on the other hand, allows items to correlate but does not force them to; however, the interpretation of its output is slightly more complex (Osborne, 2015). This study adopted the oblique rotation approach, which allows for both correlated and uncorrelated factors. The analysis was run for the five multiple-imputation iterations, and the values were very similar, as shown in Appendix N. With a correlation coefficient cut-off score of 0.40 (which Hair, Black, Babin, Anderson, and Tatham (2006b) described as important), 36 variables/items cleanly loaded on 10 factors in the five multiple-imputation iterations; two variables consistently failed to load (PCQ3 and HF1). To increase the parsimony of the factors (Hair et al., 2006b; Kieffer, 1999), the threshold score was increased to 0.5 (which Hair et al. (2006b) suggested as being significant). This increment resulted in the removal of three additional variables (PCQ4, IDelib4, and PCQ6), leaving a total of 33 loaded items across 10 factors/constructs.
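A sketch of the extraction and oblique rotation with factor_analyzer follows. Note that the study itself used SPSS's principal component extraction, so results under factor_analyzer's default extraction method would differ slightly; `likert_items` is again an assumed DataFrame.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Kaiser criterion: retain as many factors as there are eigenvalues of the
# correlation matrix above 1.0 (10 in this study).
eigenvalues = np.linalg.eigvalsh(likert_items.corr().values)
n_factors = int((eigenvalues > 1.0).sum())

# Oblique (oblimin) rotation, allowing factors to correlate.
fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin")
fa.fit(likert_items)

# Pattern matrix with small loadings suppressed at the 0.50 cut-off.
loadings = pd.DataFrame(fa.loadings_, index=likert_items.columns)
print(loadings.where(loadings.abs() >= 0.50))
```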
The pattern matrix values across the five iterations were then pooled to obtain an overarching pattern matrix with ten factors and 33 variables/items, as shown in Table 5.6. The first factor had the most variables and was a merger of all the information needs items (IN1, IN2, IN3) with three of the six perceived content quality items (PCQ1, PCQ2, and PCQ5); this factor appears to represent citizens’ desire for quality information that meets their information needs. The second factor contained four of the five interaction and deliberation items (IDelib1, IDelib2, IDelib3, IDelib5). The third factor contained all the accessibility items (FA1, FA2, FA3). The fourth factor contained two of the three hedonic features items (HF2, HF3). The fifth factor contained all the trust in government items (TGA1, TGA2, TGA3, TGA4). The sixth factor had all the collaborative content creation items (CC1, CC2, CC3). The seventh factor had two of the three perceived writer’s credibility items (PWC1, PWC3). The eighth factor contained all the content engagement items (CE1, CE2, CE3). The ninth factor contained all the affinity for government’s platforms items (IVP1, IVP2, IVP3). The tenth factor had all the visual attributes of content items (VAC1, VAC2, VAC3).
Pattern Matrix (pooled across the five imputation iterations; loadings below the 0.50 cut-off are shown in parentheses and were dropped)
Factor 1 (6 items): IN3 .72, IN2 .62, PCQ1 .57, IN1 .57, PCQ2 .54, PCQ5 .54; (PCQ4 .48; PCQ3 did not load)
Factor 2 (4 items): IDelib2 .84, IDelib3 .81, IDelib1 .78, IDelib5 .76; (IDelib4 .47)
Factor 3 (3 items): FA2 .84, FA1 .80, FA3 .75
Factor 4 (2 items): HF2 .87, HF3 .76; (HF1 did not load)
Factor 5 (4 items): TGA2 -.82, TGA4 -.72, TGA3 -.72, TGA1 -.64
Factor 6 (3 items): CC1 -.80, CC2 -.72, CC3 -.72
Factor 7 (2 items): PWC1 .82, PWC3 .65
Factor 8 (3 items): CE1 .79, CE2 .66, CE3 .61
Factor 9 (3 items): IVP3 .78, IVP2 .74, IVP1 .69
Factor 10 (3 items): VAC3 .88, VAC2 .83, VAC1 .55; (PCQ6 .45)
Each of the factors/scales was also checked for reliability/internal consistency using Cronbach’s alpha. All exceeded the recommended threshold value of 0.70 (Loewenthal, 2001) apart from PWC, which had a Cronbach’s alpha value of 0.56. Because PWC had only two items (PWC1 and PWC3), removing an item to improve the score was not a valid option; the factor was therefore removed entirely, resulting in a total of 9 factors and 31 items, as indicated in Table 5.26.
Table 5.26: Cronbach's Alpha
In summary, the factor analysis and reliability analysis reduced the hypothesised C-CE model by two constructs/factors and six variables. The pattern matrix largely agreed with the hypothesised model because all the variables clumped together as designed; the only difference was the merger of the IN and PCQ items. The refined C-CE model is shown in Figure 5.1.
Figure 5.1: Refined Conceptual Model/Hypotheses (path diagram linking Trust in Government/Agency (TGA), Content Creation (CC), Interactivity and Deliberation (IDelib), Hedonic Features (HF), and the content attributes (Visual Attributes, VAC) to Affinity for Platform (IVP) and Content Engagement (CE), with Type of Platform (PC) and Political Awareness (PA) as moderators; hypothesis labels H1, H2-1, H4, H4-1a, H4-3, H4-4, H5-1, H5-2, and H6 are shown on the paths)
5.5 Part Four: Analysis of the Citizen-Content Engagement (C-CE) Model Using SEM
The IBM SPSS AMOS Version 22 and IBM SPSS Statistics Version 22 applications were used to analyse the hypothesised C-CE model following a structural equation modelling (SEM) approach. SEM refers to a diverse set of statistical methods that link networks of constructs to collected data (Kaplan, 2009). It has two components or approaches: the measurement model and the structural model (Anderson & Gerbing, 1988). The measurement model specifies the relationship between latent variables and their indicators and is typically assessed using exploratory or confirmatory factor analysis (Kline, 2010). The structural model, on the other hand, specifies the relationships and dependencies between endogenous and exogenous variables and is typically assessed using path analysis (Kline, 2010).
To assess the measurement model in this study, a confirmatory factor analysis (CFA) was first conducted. CFA is a procedure used to test for the unidimensionality, validity and reliability of latent constructs (Atkinson et al., 2011; Fischer, 2012; Fornell & Larcker, 1981; Hair, Black, Babin, Anderson, & Tatham, 2006a; Prudon, 2014). There are different approaches to CFA, but this study adapts the steps suggested by Awang (2016): (1) assessment of measurement model fit; (2) test for unidimensionality; (3) test for reliability; (4) test for validity; (5) assessment of structural model fit; and (6) evaluation of the hypothesised model.
5.5.1 Assessment of Measurement Model Fit
Fit analysis helps assess how well the observed data matches the values expected by theory (Hooper, Coughlan, & Mullen, 2008; Prudon, 2014). There are three classes of indices that assess model fit: (1) the absolute fit indices, which include the Chi-square (χ²), Goodness of Fit Index (GFI), Adjusted Goodness of Fit Index (AGFI), Root Mean Square Residual (RMSR), and Standardised Root Mean Square Residual (SRMR); (2) the relative fit indices, which include the Incremental Fit Index (IFI), Tucker-Lewis Index (TLI), and Normed Fit Index (NFI); and (3) the noncentrality-based fit indices, which include the Root Mean Square Error of Approximation (RMSEA) and Comparative Fit Index (CFI) (Maruyama, 1998; Tanaka, 1993, cited in Li, 2006, p. 97).
The Chi-square value is central to the calculation of these three classes of fit indices. The relative fit indices are calculated by comparing the model’s Chi-square value against that of the null model, which assumes that all the observed variables are uncorrelated and therefore fits very poorly. The noncentrality-based indices, on the other hand, are functions of the Chi-square, the degrees of freedom (df), and the sample size (N). Theoretically and desirably, a good model should have a χ² p-value greater than 0.05, meaning there is no significant difference between the tested model and the saturated/perfect model expected by theory. However, χ² is susceptible to the influence of sample and model size, which in turn affects the significance of the difference between the tested and saturated models (Kenny, 2015). The Chi-square is therefore no longer considered a reliable basis for the acceptance or rejection of model fit (Schermelleh-Engel, Moosbrugger, & Müller, 2003; Vandenberg, 2006), and reporting a combination of fit results across the three classes of fit indices has become acceptable (Cheung & Rensvold, 2002; Jackson, Gillaspy Jr, & Purc-Stephenson, 2009). This study chose the SRMR as an indicator of absolute fit, the TLI and NFI as indicators of relative fit, and the RMSEA and CFI as indicators of noncentrality-based fit, while noting that these indices are still not perfect (Steiger, 2007).
Going by the values of the indices in the pooled results, the measurement model has a good fit. As shown in Table 5.27, the NFI is above the recommended 0.80 threshold, the CFI and TLI are above the recommended value of 0.90, the RMSEA is below the recommended value of 0.10, and the SRMR is below the recommended value of 0.11 (Bentler & Bonett, 1980; Hooper et al., 2008).
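AMOS was used for the CFA in this study; for illustration, a fragment of an equivalent measurement model in the Python semopy package is sketched below. The construct and item names follow this study's scales, and `survey_data` is an assumed DataFrame of the prepared cases.

```python
import semopy

# Three of the latent constructs, in lavaan-style measurement syntax.
description = """
IVP =~ IVP1 + IVP2 + IVP3
TGA =~ TGA1 + TGA2 + TGA3 + TGA4
CE  =~ CE1 + CE2 + CE3
"""
model = semopy.Model(description)
model.fit(survey_data)

# calc_stats reports the fit indices used in this chapter, among others.
stats = semopy.calc_stats(model)
print(stats[["CFI", "TLI", "NFI", "RMSEA"]])
```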
5.5.2 Unidimensionality
Unidimensionality of a scale refers to the scale’s ability to measure a given construct or attribute and nothing else. It is achieved when the items/indicators in a construct have acceptable factor loadings (Awang, 2016). Doll, Raghunathan, Lim, and Gupta (1995) point out that although factor loadings above 0.70 are considered good measures of their latent constructs, there is no universally accepted cut-off value. Hair et al. (2006a) suggest that factor loadings of 0.50 or higher are acceptable, while Hutcheson and Sofroniou (1999) argue that values between 0.5 and 0.7 are mediocre, though acceptable for new measures and exploratory studies. Because this is an exploratory study and the measures are not well established, this study used 0.5 as the factor loading threshold. As a result, four items were eliminated. As indicated in Table 5.28, the factor loadings for all the remaining items were above 0.50, and it can be said that the measurement model achieved unidimensionality.
5.5.3 Reliability Analysis
A reliability test was conducted, and Cronbach’s alpha exceeded the recommended value of 0.7 for every construct (Loewenthal, 2001). However, a common approach to reliability testing in CFA is composite reliability (CR), which measures the overall reliability of a collection of heterogeneous but similar items (Fischer, 2012) and is calculated as (square of the summation of factor loadings) / (square of the summation of factor loadings + summation of the error variances). It is considered acceptable above the value of 0.7 (Fornell & Larcker, 1981). The composite reliability values for the model were all above 0.7, as indicated in Table 5.29.
Table 5.29: Reliability
5.5.4 Validity Analysis
Convergent validity is ascertained by considering the average variance extracted (AVE), which measures the variance captured by the items of a construct relative to the total variance, including the variance due to measurement error. To pass the convergent validity criterion, constructs must have AVE values of 0.50 and above (Fischer, 2012; Fornell & Larcker, 1981). As indicated in Table 5.30, the AVE values were all ≥ 0.50 apart from the INPCQ construct, which had a value of 0.38. This means that, on average, the items of the INPCQ construct contain less than 50% explained or common variance (Fornell & Larcker, 1981) and, therefore, more error than explained variance. Measurement error has been attributed to items measuring other factors besides the hypothesised construct (Kline, 2005; Schumacker & Lomax, 2004). This may be the reason for the poor AVE, because INPCQ was formed during factor rotation in the exploratory factor analysis, when items from two different constructs (IN and PCQ) loaded under one factor.
In the literature, convergent validity is also said to be proven if the latent variable is reliable (Ping, 2009), or if the factor loadings are ≥ 0.50 (Johari, Yahya, & Omar, 2011; Said, Badru, & Shahid, 2011). Going by reliability and factor loadings, it can be said that the measurement model in this study has achieved convergent validity. However, presenting these as proof of convergent validity is not as widely accepted as the AVE; hence, the claim of validity is made with some caution. Ultimately, the INPCQ construct was retained, as is, in the model, because the model and measures are new and the study is exploratory (Ping, 2009).
Table 5.30: Average Variance Extracted (AVE)
Construct AVE
INPCQ 0.38
IDelib 0.54
HF 0.55
VAC 0.68
FA 0.50
TGA 0.52
CE 0.56
IVP 0.50
CC 0.51
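The CR and AVE formulas given above reduce to a few lines of code; the sketch below computes both from a vector of standardised factor loadings (the example loadings are hypothetical).

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardised loadings."""
    error_variances = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variances.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardised loadings."""
    return float((loadings ** 2).mean())

lam = np.array([0.72, 0.68, 0.75])  # hypothetical three-item construct
print(composite_reliability(lam))        # ~0.76
print(average_variance_extracted(lam))   # ~0.51
```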
Discriminant validity is ascertained by comparing the shared variance among constructs with the AVE, where the shared variance is calculated by squaring the correlation coefficients. If the AVE values for the latent variables in a model are greater than the shared variance between them, discriminant validity is confirmed (Fornell & Larcker, 1981). As shown in Table 5.31, the AVE for the latent variables was consistently greater than the shared variance between them, suggesting that the measures are distinct and free from redundant items.
Table 5.31: Discriminant Validity With AVE
5.5.5 Assessment of Structural Model Fit
Another round of fit measurement was conducted on the structural model, with a focus on the relationships between the latent variables. The same fit indices used in the fit analysis of the measurement model were applied: the NFI, CFI, TLI, RMSEA and SRMR. As shown in Table 5.32, all the fit indices reached the recommended thresholds except the TLI and RMSEA. In the pooled results, the TLI had a score of 0.80 (0.10 below the recommended threshold of 0.90), while the RMSEA had a score of 0.12 (0.02 above the recommended ceiling of 0.10). Similar values have been regarded as satisfactory (Boukamcha, 2015; Li, 2006; Strohmeier, Yanagida, & Toda, 2016). Moreover, fit indices of the same classes as the TLI and RMSEA indicated a satisfactory fit: the IFI and NFI, which are relative fit indices like the TLI, were above their recommended thresholds of 0.90 and 0.80 respectively, and the CFI, which is a noncentrality-based fit index like the RMSEA, was above its recommended threshold of 0.90.
Table 5.32: Fit Indices of the Structural Model
5.5.6 Evaluation of the Hypothesised Model
The multiple-imputation dataset was split into five, one for each of the five iterations, in preparation for the analysis of the overall explanatory power of the hypothesised C-CE model. With the IBM SPSS AMOS 22 application, the predictive power of the exogenous variables in the hypothesised model was examined for each of the five datasets using squared multiple correlations (R²), standardised path coefficients (β), and the significance of the coefficients (p). The individual results were pooled, and the mean is presented as the overarching result (Sinharay et al., 2001; Wayman, 2003). The results of the analysis across the five datasets were very similar, returning approximately the same R², β, and p values, as shown in Appendix O.
The average R², β, and p values are shown in Figure 5.2 and Table 5.33. CE and IVP (the two endogenous variables) had R² values of 0.13 and 0.33 respectively, indicating that the factors in the C-CE model predict 13% of the total variability in citizens’ engagement with government’s online contents (CE) and 33% of their affinity for government’s online platforms (IVP). Of the nine hypotheses, five were significant: H1, H4, H4-1B, and H4-3 were statistically significant at p < 0.001, while H4-4 was significant at p = 0.01. H2-1, H4-1A, H4-2, and H4-5 were not significant.
Figure 5.2: Data Analysis Results (path diagram of the refined C-CE model showing the pooled standardised path coefficients from Trust in Government/Agency (TGA), the platform attributes, Information Needs and Quality (INPCQ), Hedonic Features (HF), and Visual Attributes (VAC) into IVP and CE)
Table 5.33: Hypothesis Test Results
Hypothesis | Path | β | P
H4-2 | IVP ← FA | 0.08 | 0.13
H4-3 | IVP ← CC | 0.23 | ***
H4-4 | IVP ← IDelib | 0.13 | 0.01
H4-5 | IVP ← HF | -0.012 | 0.81
H4-1B | IVP ← TGA | 0.51 | ***
H2-1 | CE ← VAC | 0.04 | 0.51
H1 | CE ← INPCQ | 0.25 | ***
H4 | CE ← IVP | 0.24 | ***
H4-1A | CE ← TGA | 0.04 | 0.53
Notes: *** p-value < 0.001
The direct, indirect, and total effects of the factors on content engagement (CE) were also analysed and are presented in Table 5.34. The results show that the quality and ability of the content to meet citizens’ information needs (INPCQ) has the largest effect on citizens’ engagement with the content, closely followed by citizens’ affinity for government’s platforms (IVP). Trust in government/agency (TGA) had the next highest effect on CE, most of which was accounted for by its indirect effect through IVP.
5.5.6.1 Moderating Effect of Platform Type
Multiple-response data was used to investigate the influence of government’s choice of online platforms on citizens’ affinity for those platforms and on their engagement with government’s online contents. Respondents were asked to identify from a list the type(s) of platforms they would prefer their government to use in communicating with and providing information for them (PC1), and the type(s) currently in use (PC2). The options were “Facebook”, “Twitter”, “Blog”, “Websites”, and “Others”. To facilitate analysis of the data using SPSS, each option was treated as a separate variable (PC1Fbook, PC1Twit, PC1Blog, PC1Web, PC1Others, PC2Fbook, PC2Twit, PC2Blog, PC2Web, PC2Others), and the data was presented in binary categorical form, where ‘0’ meant that the option was not selected and ‘1’ meant that it was.
As indicated in Figure 5.3, for PC1, Facebook was identified as the most preferred platform (73.7%), followed closely by websites (64.5%). Respondents had an almost equal preference for Twitter (35.2%) and blogs (35.3%). Other identified platforms included messengers, question-and-answer sites, Instagram, YouTube, and email (9.4%).
Figure 5.3: Citizens' Choice of Platforms (bar chart of the pooled percentages reported above)
As indicated in Figure 5.4, for PC2, websites were identified as the platform most used by the government (79.1%), followed by Facebook and Twitter (33.8%), and blogs (21.1%). Other media identified by respondents were offline and included national dailies, television, and radio (21.1%). It is pertinent to state that the individual percentages do not sum to 100 because the question that generated this data was a multiple-response type, where respondents were allowed to select one or more options.
Figure 5.4: Type of Platform Used by the Government (bar chart of the pooled percentages reported above)
Based on the PC2 variable, the hypothesised C-CE model was re-analysed using the multigroup moderation analysis approach to check for moderating effects of platform type on trust in government/agency (TGA), platform accessibility (FA), collaborative content creation (CC), interaction and deliberation (IDelib), and hedonic/persuasive features (HF) as determinants of citizens’ affinity for government’s platforms (IVP). The intent was to test the hypotheses that social media use by the government would have a more positive effect than websites on the influence of TGA (H5-1), FA (H5-2), CC (H5-3), and IDelib (H5-4) on IVP. To facilitate the analysis, the PC2Fbook and PC2Twit variables were merged into a new variable representing social media use, while the PC2Blog and PC2Web variables were merged into a new variable representing mainstream website/blog use. PC2Others was not considered, as its data referred to offline media outside the context of this study. The two new variables were then merged and dummy-coded into a single variable representing government’s platform types (PlatformUse), with two groups of moderating values: SM, coded as ‘1’, representing government’s use of social media, either alone or together with mainstream websites/blogs; and WEB, coded as ‘0’, representing government’s use of mainstream websites/blogs alone. A sketch of this coding is given below.
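A pandas sketch of this dummy coding follows; the binary PC2 pseudo-variables are assumed to already exist in a DataFrame `survey`. The political awareness moderator described later in this chapter was coded analogously.

```python
# Merge the PC2 pseudo-variables into the PlatformUse moderator:
# SM (coded 1) = social media reported, alone or together with websites/blogs;
# WEB (coded 0) = mainstream websites/blogs reported alone.
uses_social_media = (survey["PC2Fbook"] == 1) | (survey["PC2Twit"] == 1)
survey["PlatformUse"] = uses_social_media.astype(int)
```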
Using standardised R² values, standardised path coefficients, and significance levels, the results of this multigroup moderation analysis are presented in Figure 5.5 and Table 5.35 for the effect of the moderation on the entire C-CE model, with particular interest in the relationships between TGA, FA, CC, IDelib and IVP. In Figure 5.5, the path coefficients are written in red for the SM group and in black for the WEB group. The results show that only hypotheses H5-1, H5-3, and H5-4 had empirical support for the expectation that social media would have a more positive effect than websites on the influence of TGA, CC, and IDelib respectively on IVP. To check the significance of the differences between the coefficients, the Stats Tool Package by Gaskin (2012) was used to calculate the path differences between the two groups. Based on the critical ratios approach, this tool assesses the significance of the difference in parameter estimates between groups by computing a z-score from the groups’ estimated regression weights (Kruse, Williams, & Seng, 2014). The comparison shows that only the difference in H5-4 was significant.
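The critical-ratio comparison reduces to a simple z-test on the difference between the groups' unstandardised estimates; a minimal sketch follows. The standard errors used in the example are illustrative assumptions, chosen so that the H5-4 case reproduces a z of about -2.45, in line with Table 5.35.

```python
import math

def path_difference_z(b1: float, se1: float, b2: float, se2: float) -> float:
    """Critical ratio for the difference between two groups' unstandardised
    path estimates: z = (b2 - b1) / sqrt(se1^2 + se2^2)."""
    return (b2 - b1) / math.sqrt(se1 ** 2 + se2 ** 2)

# H5-4 (IVP <- IDelib): SM group B = 0.178, WEB group B = -0.018.
# The standard errors below are illustrative assumptions.
print(path_difference_z(0.178, 0.055, -0.018, 0.058))  # ~ -2.45
```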
Figure 5.5: Platform Moderation Effects (path diagram of the C-CE model showing, in red for the SM group and in black for the WEB group, the paths from the e-public sphere factors (FA, IDelib), TGA, HF, INPCQ, and VAC into IVP and CE; the key coefficients are tabulated in Table 5.35)
Table 5.35: Platform Moderation Effects
Hypothesis | Social Media (β, P, B) | Websites/Blogs (β, P, B) | Diff in B | Z-Score
H5-4 IVP ← IDelib | .23, ***, 0.178 | -.02, .782, -0.018 | -.196 | -2.451**
H5-1 IVP ← TGA | .54, ***, 0.550 | .51, ***, 0.443 | -.107 | -1.121
H5-3 IVP ← CC | .25, ***, 0.204 | .17, .035, 0.138 | -.066 | -0.795
H5-2 IVP ← FA | .04, .545, 0.031 | .06, .486, 0.045 | .014 | 0.171
Notes: *** p-value < 0.01; ** p-value < 0.05; * p-value < 0.10
5.5.6.2 Moderating Effect of Political Awareness
Three questions, each scored 1 point, were used to test the citizens’ level of political awareness; together these made up the PA variable (PA1, PA2, PA3). To prepare the variable for multigroup moderation analysis, the mean PA score (1.2) was obtained and the data transformed into binary form: scores below 1.2 were coded as ‘0’ (poor awareness) and scores of 1.2 or above were coded as ‘1’ (optimal awareness). 62% of the respondents fell into the poor-awareness group, while 38% were optimally aware. The hypothesised C-CE model was re-analysed using the multigroup moderation analysis approach to check for moderating effects on the model as shaped through hypothesis H6. The intent was to evaluate the moderating effect of political awareness on citizens’ affinity for government’s platforms (IVP) as a determinant of their engagement with government’s contents (CE), and to test the hypothesis that an optimal level of political awareness would have a more positive effect than a poor level on the influence of IVP on CE (H6).
Using standardised R² values, standardised path coefficients, and significance levels, the results of this multigroup moderation analysis are presented in Figure 5.6 and Table 5.36 for the effect of the moderation on the entire C-CE model, with particular interest in the relationship between IVP and CE. In Figure 5.6, the path coefficients are written in red for the optimal-awareness group and in black for the poor-awareness group. The results show that there was a moderation effect of awareness on the relationship between IVP and CE: with optimal awareness, the β value was -0.072, while with poor awareness it was 0.30. However, there was no empirical support for hypothesis H6, as optimal awareness had a negative effect on the influence of IVP on CE, contrary to expectation. The difference in β values between the optimal and poor levels of awareness was also checked for significance using the critical ratio approach, and it was significant at p < 0.01.
Figure 5.6: Political Awareness Moderation Effects (path diagram showing the paths into IVP and CE for the optimal-awareness and poor-awareness groups, with R² values of 0.15/0.23 for IVP and 0.13/0.11 for CE across the two groups)
Table 5.36: Political Awareness Moderation Effects
Hypothesis | Aware (β, P, B) | Not Aware (β, P, B) | Diff in B | Z-Score
H6 CE ← IVP | -.072, .453, -0.121 | .298, ***, 0.406 | .527 | 2.664***
Notes: *** p-value < 0.01; ** p-value < 0.05; * p-value < 0.10
5.5.6.3 Other Analysis
Multiple-response data was used to investigate citizens’ choice of information from the government. Respondents were asked to identify from a list the type(s) of information they would want from their government on the internet (IN4). The options were drawn from the qualitative data and included: information on trending socio-political events; information on government policies; information on government’s income and expenditure; information on government’s projects and activities; information on the economy; information for personal use; and others, which included information on international/diplomatic relations, information on opportunities for citizens to play some role in policy development, and information on the direct contact details of government officials. To facilitate analysis of the data using SPSS, each option was treated as a separate variable (IN4Trend, IN4GovtPol, IN4GovtExp, IN4GovtProj, IN4Econ, IN4PersUse, IN4Others), and the data was presented in binary categorical form, where ‘0’ meant that the option was not selected and ‘1’ meant that it was. Using the pooled data as a frame of reference, IN4PersUse was the most needed type of information at about 68.6%, with the least being IN4Others at 7.2% (Figure 5.7). As IN4GovtPol, IN4GovtExp, IN4GovtProj and IN4Econ are all part of information for political participation (Johannessen, Flak, & Sæbø, 2012), their individual percentages were averaged to give 62.05%. It is pertinent to state that the individual percentages do not sum to 100 because the question that generated this data was a multiple-response type, where respondents were allowed to select one or more options. The result of this analysis can only be taken at face value, as a test of significance between the options was impossible given that the data came from a multiple-response question.
Figure 5.7: Types of Information Citizens Want from the Government (bar chart of the pooled percentages reported above)
5.6 Conclusion
This chapter presented the development of the hypotheses and questionnaire for the quantitative phase of this study from the qualitative phase, together with the justification of the sampling approach and sample size and the data collection process.
As regards the analysis, in summary, eight of the 14 proposed hypotheses were accepted: citizens’ engagement with governments’ online contents (CE) is significantly influenced by the contents’ quality and ability to meet citizens’ information needs (INPCQ) and by citizens’ affinity for governments’ online platforms (IVP); citizens’ affinity for governments’ online platforms (IVP) is significantly influenced by trust in government/agency (TGA), the platforms’ support for collaborative content creation (CC), and interactivity and deliberation (IDelib); and social media use by governments was found to have more positive effects than websites on the influence of trust in government/agency (TGA), collaborative content creation (CC), and interactivity and deliberation (IDelib) on IVP. The next chapter discusses and interprets the results in their entirety.
Chapter 6 : Discussion
6.1 Introduction
This chapter discusses the data analysis results as they concern the factors that influence citizens’ engagement with government’s online contents (CE), and the effects of political awareness and of the platform type used by governments on the research model. A summary of the findings is first presented, followed by individual sections addressing each hypothesis. Subsequently, implications for theory and practice are presented, followed by the limitations of the study and recommendations for future studies.
Although governments around the world have increasingly used ICT, especially over the internet, to provide services for, transact with, inform, communicate with, and interact with citizens (Astrom et al., 2012; Belanger & Carter, 2006), there is a dearth of research on the contents or information provided by governments, their value to the public, and their effects on e-public engagement (Janssen et al., 2012). Citizens’ engagement with and use of governments’ information remains an unexplored niche topic that needs more research attention (Zuiderwijk et al., 2014). This research gap can be attributed to several factors: (1) the predominant focus of e-public engagement/participation research on techno-centric aspects and e-participation activities, such as the adoption and usage of e-voting, e-petitioning, e-surveys, and e-deliberation (Medaglia, 2012; Sæbø, Rose, & Flak, 2008); (2) the significant focus on governments in e-participation research: for instance, there is abundant research on governments’ efforts at using technology to improve citizens’ participation in governance (United Nations, 2014), the types of technologies adopted for this purpose (Aichholzer & Westholm, 2009), the factors that affect governments’ implementation of e-public engagement initiatives (Zheng et al., 2014), and how to adopt and use these initiatives (Alvarez et al., 2009; Bonson et al., 2015; Carter & Belanger, 2012), while studies that have considered citizens or other stakeholders outside the ruling sphere tend to be reactionary (Alvarez et al., 2009; Bonson et al., 2015); and (3) the superficial approach to studying engagement on the internet (Haile, 2014; Manjoo, 2013; Mintz, 2014), especially on social media, by counting the number of likes, comments, shares, and so on (Bonson et al., 2015; Bonsón et al., 2014; Goggins & Petakovic, 2014; Ye & Wu, 2010). Besides the sparse focus on info-centric aspects of e-public engagement/participation research, previous studies have not taken up the invitation of Medaglia (2012) and Bertot et al. (2008, p. 137), who called for a shift of e-government and e-participation research focus from governments to citizens. There is also no detailed and comprehensive framework that presents the factors which influence CE.
6.3 Summary of Findings: Qualitative and Quantitative
This study aims to investigate, from citizens’ perspective, the factors that influence CE, and to develop a framework for government information provision. To provide an initial guideline for the investigation, this study adopted a conceptual framework developed around the uses and gratifications theory (UGT), which suggests that CE would be based on citizens’ information needs, on the contents’ features/quality, and on activities that facilitate engagement (Davies, 2010; Maruyama et al., 2013; Susha et al., 2015; Wang et al., 2005; Zuiderwijk et al., 2012). Based on the qualitative empirical investigation, the first phase of this study built on the conceptual model and hypothesised that CE is directly influenced by (1) citizens’ information needs (IN, H1); (2) the contents’ attributes, which could be visual (VAC, H2-1) and/or perceived (PCQ, H2-2); (3) the perception of the writer (PWC, H3); (4) trust in government/agency (TGA, H4-1a); and (5) citizens’ affinity for governments’ platforms (IVP, H4). Qualitative findings of this study also suggest that IVP is dependent on TGA (H4-1b), on the platforms’ similarity to the public sphere (accessibility (FA, H4-2), content creation (CC, H4-3), and interactivity and deliberation (IDelib, H4-4)), and on their hedonic/persuasive features (HF, H4-5). The first phase also hypothesised that social media use by governments would have a more positive effect than websites on the influence of TGA on IVP (H5-1); similar hypotheses were developed for the influence of FA (H5-2), CC (H5-3), and IDelib (H5-4) on IVP. Finally, the first phase hypothesised that citizens’ level of political awareness has a positive moderating effect on the influence of IVP on CE (H6). From these, an initial citizen-content engagement (C-CE) model was developed for further testing.
Based on quantitative empirical research, the second phase of this study tested the assumptions and claims of the first phase. The C-CE model was first refined using factor analysis: IN and PCQ were merged into one construct representing the contents’ quality and ability to meet citizens’ information needs (INPCQ, H1), while PWC (H3) was removed entirely. This reduced the factors that may influence CE to four (INPCQ, VAC, TGA, and IVP) in the refined C-CE model; every other aspect remained largely unchanged. The refined C-CE model was then tested for significant relationships between the exogenous and endogenous variables. INPCQ and IVP were found to have a significant influence on CE, while TGA, CC, and IDelib significantly influence IVP. This means that H1, H4, H4-1b, H4-3 and H4-4 are accepted. Furthermore, H5-1, H5-3 and H5-4 are supported, although only the group difference for H5-4 is statistically significant. Finally, H6 is rejected.
Quantitative empirical findings also show that Facebook is the platform of choice for citizens as concerns getting information from, and interacting and communicating with, the government; it is followed by websites, and then blogs and Twitter. The top three information types of choice are information for the personal use of the citizens, information on the economy, and information on government’s income and expenses. The subsequent sections interpret the findings based on the endogenous variables (CE and IVP).
6.4 Factors Influencing Citizens’ Engagement with Government’s Online Contents (CE)
Four factors were hypothesised to predict citizens’ engagement with government’s online contents (CE) directly, and five factors indirectly. However, only five of the nine hypothesised relationships were supported empirically (Table 6.1). Each factor is discussed in the following sections.
6.4.1 The Effect of INPCQ on CE
The findings of this study indicate, expectedly, that the quality of governments’ contents and the contents’ ability to meet citizens’ information needs (INPCQ) strongly influence engagement with the contents; INPCQ has the highest total effect on CE. This finding agrees with previous studies which suggest that citizens’ engagement with governments’ online contents, and e-participation generally, are enhanced when government provides information that meets the citizens’ needs (Davies, 2012; Susha et al., 2015) in the right amount and quality (Lin, Fofanah, & Liang, 2011; Medaglia, 2012).
Findings from this study suggest that citizens mainly want information that is for their individual use and benefit; this includes information that may lead to employment, information about social interventions and welfare packages, information for academic and professional purposes, and so on. This type of information need was also observed by Bonson et al. (2015), whose study found that citizens in local governments in Western Europe are more engaged with information that directly affects their lives. Following closely is the need for information for political participation, which includes information about the economy, governments’ income and expenditure, government’s projects, and government’s policies. This type of information supports citizens’ scrutiny of the government, enlightens them as voters, enlightens them on specific issues in the state, and supports campaigning and lobbying (Davies, 2010). Interest in political participation is said to be influenced by citizens’ access to adequate finance, education, and civic skills (Krawczyk & Sweet-Cushman, 2016; Verba, Schlozman, Brady, & Brady, 1995). In this study, the high interest in information for political participation may be because over half of the respondents had at least a Bachelor’s degree and were thus educated. However, access to adequate finance and possession of adequate civic skills may not be the factors behind the need for information that would aid political participation: over half of the respondents earned between £126.76 and £507.02 a month, with 27% earning less and 19% earning more, and Nigeria faces a shortage of adequate civic skills, as has been recognised by researchers who have suggested various interventions, especially through education, to help equip Nigerians with the needed skills and avoid uncivil behaviour (Aroge, 2012; Enu & Effiom, 2012; Falade, 2008). A more plausible explanation is the current state of economic hardship and uncertainty in the country, which has slipped into recession (Doya, Wallace, & Ibukun, 2016); this may have contributed to the heightened interest in the activities of the government and the state of the economy. Previous studies have made similar findings, showing that in many developing countries the marginalised and poor tend to show more interest in governments’ activities and to participate at higher levels than those with more resources (Holzner, 2010; Inman & Andrews, 2009; Krawczyk & Sweet-Cushman, 2016).
Content/information quality has been well discussed in previous studies and refers to the relevance of the information to its users, its timeliness, accuracy, and simplicity (Chen et al., 2002; Iivari & Koskela, 1987; Nardi & O'Day, 1999; Peng et al., 2004; Shedroff, 1999), and to a captivating presentation, which may be story-like (O'Brien & Toms, 2008). The perceived quality of government’s contents is particularly important as it concerns simplicity, timeliness, and accuracy/honesty. Governments tend to assume that citizens have the capabilities and knowledge required to use government information. Janssen et al. (2012) noted that governments normally apply statistical techniques in collecting, analysing, interpreting and presenting data even when statistical knowledge is scarce among the public. This results in content that is not understandable to the general public, and in citizens finding the information difficult to use because they are unfamiliar with the definitions and categories used to present the data (Zuiderwijk et al., 2012). Furthermore, as previous researchers have observed, citizens’ engagement with governments’ content is negatively impacted when the information is obsolete (Janssen et al., 2012; Lee & Kwak, 2012); this is more so in Nigeria, where government’s digital contents are routinely noncurrent (Madukoma & Opemipo, 2016). Another important aspect of the perceived quality of governments’ contents is accuracy, or the lack thereof, which may impact trust in the government and bring about cynicism towards government information (Janssen et al., 2012; O'Riain et al., 2012). This is even more important as advancements in technology afford governments the ability to engage in pseudonymous and anonymous communication with citizens and to proliferate propaganda (Baldino & Goold, 2014; Lee, 2005).
6.4.2 The Effect of VAC on CE
The findings of this study suggest that the visual attributes of governments’ online contents (VAC), which include the length of the contents and the use of relevant pictures and/or videos, have no significant influence on citizens’ engagement with the contents. This finding runs contrary to the opinions of researchers and practitioners that audiences of online content tend to tune out or disengage the more they read (Haile, 2014; Manjoo, 2013; Mintz, 2014; Zuiderwijk et al., 2012). The renowned web-usability researcher Jakob Nielsen recommended that online contents should have concise text, as the majority of the audience would want the content to fit on a single screen (Morkes & Nielsen, 1997); following a study of online readers, Nielsen (2008) suggested that, by default, online contents should be restricted to around 500 words unless they are meant for a targeted elite readership. Furthermore, a study by Bonson et al. (2015) found that pictures improve citizens’ reactions to governments’ posts on Facebook, and Morkes and Nielsen (1997) similarly suggest that graphics and text should complement each other for a more engaging experience.
This finding can be explained by reference to the earlier finding that citizens are more interested in contents which they perceive to be of good quality and which meet their information needs; the length of the contents and the use of complementary pictures and/or videos are therefore less important. Furthermore, citizens typically visit government’s platforms for information and/or to complete transactions (Wang et al., 2005), which are utilitarian rather than hedonic aims. It is therefore understandable that a citizen would remain engaged with government online content as long as it meets his/her information needs.
6.4.3 The Effect of IVP on CE
Citizens’ affinity for governments’ platforms (IVP) was confirmed to have a significant influence on their engagement with the contents on those platforms; IVP had the second highest total effect on CE. Users visit online platforms for extrinsic or intrinsic reasons (Castañeda et al., 2007), to achieve utilitarian or hedonic aims. According to Wang et al. (2005), citizens mainly visit governments’ platforms for information and/or transactions, which is mainly utilitarian, and studies have found that governments’ platforms attract more citizens in search of information than citizens who want to complete specific transactions (Oktem et al., 2014; Reddick & Turner, 2012; Sandoval-Almazan et al., 2013). At the time of this discussion, the Researcher was not aware of any past study that discussed the relationship between citizens’ affinity for governments’ platforms and their engagement with the contents on those platforms. However, in the field of e-marketing, studies have shown that customers’ engagement with adverts on a platform is influenced by their affinity for, and intent to use, that platform (Calder et al., 2009; Chen & Wells, 1999; Gibbs, 2012; Mollen & Wilson, 2010; Peng et al., 2004). It is more likely that customers will engage with adverts placed on their platform of choice than on others (Paek et al., 2013), and this has prompted a call for businesses to reach their audience on the online platform they visit most (Matuszak, 2007). An explanation for this finding can therefore be inferred from the e-marketing research field: just as customers engage with adverts on their preferred online platforms, citizens’ affinity for governments’ platforms influences their engagement with the contents on those platforms.
6.4.4 The Effect of TGA on CE
Unexpectedly, the findings of this study indicate that the trust which citizens have in the government/agency (TGA) has no significant direct impact on CE, although it had the third-highest total effect on CE. Previous studies suggest that advancements in technology make it easy for governments to proliferate propaganda and thus bring about mistrust of, and cynicism towards, government information (Baldino & Goold, 2014; Janssen et al., 2012; Lee, 2005); these studies have, however, focused on mistrust of government information due to perceived misinformation or propaganda, not on mistrust due to performance or failure in governance. The issue of citizens’ trust in government is salient in e-government research: previous studies have discussed the impact of citizens’ trust in government on their adoption of, and satisfaction with, e-government (Bélanger & Carter, 2008; Colesca, 2015; Warkentin et al., 2002; Welch et al., 2005), and the impact of e-government on citizens’ trust in the government (Parent et al., 2005; Tolbert & Mossberger, 2006; Welch & Hinnant, 2003). However, the Researcher is not aware of studies that have empirically researched the influence of citizens’ trust in governments on their engagement with governments’ online contents. One explanation for this finding could be that citizens will engage with government contents that may be of benefit to them whether they trust the government or not: the possibility of benefiting from the content and satisfying their need becomes paramount and overshadows any repulsion that mistrust in government may cause. Another explanation could lie in the information-seeking behaviour of citizens, which refers to the way they search for and utilise information to satisfy their current information needs (Osiobe, 1988). According to Kuhlthau (1991), in her widely cited model of information-seeking behaviour, the Information Search Process (ISP), a searcher (information-seeker) passes through six stages, two of which are pre-focus exploration and information collection. In both stages, a searcher tries to locate relevant information from different sources and can tolerate inconsistencies and incompatibilities in the information encountered during the search. This could explain the finding, as citizens would engage with contents from government platforms as well as from other sources as they compare, contrast and make sense of the information they encounter.
This study found that trust in the government/agency (TGA), collaborative content creation
(CC), and interactivity and deliberation (IDelib) all have important effects on citizens’ affinity
for governments’ online platforms (IVP). However, the effects of accessibility (FA) and the
hedonic/persuasive features of the platform (HF) were found not to be important.
The findings suggest that citizens' affinity for governments' platforms would increase when
they trust the government, when they can collaborate with one another and with the government
to provide needed information on the platform, and when the platform allows interaction and
deliberation amongst citizens and between citizens and government officials. Trust in
government had the largest effect size (0.51). These findings were expected and are consistent
with previous studies, which reason that trust in government increases citizens' intent to use
e-government platforms and services
(Bélanger & Carter, 2008; Colesca, 2015; Warkentin et al., 2002; Welch et al., 2005). Citizens
would also be attracted to governments’ platforms when they know that they could provide and
get information for and from the government and other citizens. This was also observed by
Bonson et al. (2015) who found that there was more sign of engagement on governments’
Facebook pages that allow citizens to post on the wall. Furthermore, in this present era and
ubiquity of social media, users are largely allowed to react to contents on platforms by
commenting on the contents, liking them, disliking them, sharing them, etcetera. Therefore,
citizens and netizens of today would want such interactivity on governments’ platforms. Both
collaborative content creation and interactivity and deliberation promote a multi-way
information flow which increases participation (Lilleker et al., 2011; Mahrer & Krimmer, 2005;
Oktem et al., 2014).
This study further suggests that accessibility is not important in influencing citizens’ affinity for
government’s platforms. However, previous studies have found that accessibility does impact
citizens' use of and belief in e-government platforms and services (Belanger & Carter, 2006;
Bélanger & Carter, 2009; Sipior & Ward, 2005). In fact, governments in developed countries
are beginning to take the issue of accessibility seriously, for example, the United Kingdom
(Duggin, 2016) and the United States (ODEP, 2014), especially as it concerns access by
physically challenged persons. In developing countries, accessibility issues are usually due to
digital divide (Dada, 2006; Fuchs & Horak, 2008; Ndou, 2004) which typically materialises as
inadequate access to the internet and/or poor computer literacy (Belanger & Carter, 2006). A
study by Belanger and Carter (2006) found that access to e-government platforms and services
is influenced by income, age and education. With a focus on Nigeria, a survey by Pew Research
Centre (2014a) shows that age is the strongest indicator of internet usage: internet access and
use in Nigeria is highest amongst those aged 18 to 29 (45%), followed by those aged 30-49
(31%) and those aged 50 and above (4%). Furthermore, in its last ICT survey, the National
Bureau of Statistics found that the ratio of urban to rural internet access in Nigeria is
11:1 (National Bureau of Statistics, 2011). Therefore, there are indications that this finding
may be a result of the demographics of the survey participants. Of the participants,
94% were aged between 18 and 42; 84% were educated to at least undergraduate level;
38% of the respondents completed the survey online, thus indicating access to the internet, while
the remaining 62% who completed the paper version of the survey were urban dwellers. This
demographic data shows that a majority of the participants had a demographic advantage in
terms of access to and use of the internet. This may explain why accessibility was not found to
be important in this study. This finding further indicates that, of the three aspects of the public
sphere investigated (accessibility; collaborative content creation; and interactivity and
deliberation) (Habermas, 1964; Hauser, 1998; Pusey, 1987a), only collaborative content
creation, and interactivity and deliberation, were found to be important.
6.6 Platform Type as a Moderating Factor
This study found that the influence of trust in government/agency (TGA), collaborative content
creation (CC), and interactivity and deliberation (IDelib) on citizens' affinity for governments'
online platforms (IVP) was greater for citizens who visit governments' social media platforms
than for those who visit the traditional websites/blogs.
The reason for this result may be the uni-directional flow of information that characterises
traditional websites (Cormode & Krishnamurthy, 2008), as against the multi-directional
communication allowed by social media (Berthon et al., 2012). This creates a perception of
formality and alienation for citizens visiting governments' websites, as they are merely passive
recipients of contents who cannot provide feedback on the contents, cannot contribute
information on the platform, and cannot interact with the owner and other readers of the content.
These factors may impact trust in and affinity for governments' platforms. On the other hand, social
media creates a perception of informality (Mosquera & Moreda, 2012), where the citizens and
government assemble to create and share information, ideas, and opinions as peers. The use of
social media by governments has been identified to facilitate transparency and trust in previous
studies (Bertot et al., 2010; Bertot et al., 2012; Bonsón et al., 2012; Kim, Park, & Rho, 2015).
However, contrary to expectation, the influence of accessibility (FA) on IVP was greater for
citizens who visit government’s traditional websites than for those who visit their social media
platforms. This may be explained by the fact that social media platforms, especially the
widely preferred Facebook and Twitter, are only open to registered users; in contrast,
websites/blogs typically do not require registration before access.
Despite the differences between the moderating strengths of social media and traditional website
use by the government, only the difference in their impact on the influence of interactivity and
deliberation on affinity for governments' platforms was significant. This can be ascribed to the
fact that the ability to allow interaction and deliberation amongst platform users, and between
platform users and the host, is the main difference between social media and traditional websites
(Berthon et al., 2012; Cormode & Krishnamurthy, 2008; Lilleker et al., 2011; Lusoli & Ward,
2005; Schweitzer, 2008).
This study found that the influence of citizens' affinity for governments' platforms (IVP) on their
engagement with governments' contents (CE) was significantly less for citizens who claim to be
interested in government activities and to have adequate knowledge of the government/agency
and their platforms than for those who have inadequate knowledge of the government. This
finding was unexpected as previous studies have highlighted the importance of awareness in
enhancing citizens’ adoption and use of e-government initiatives (Bwalya & Healy, 2010; Carter
& Weerakkody, 2008; Kolsaker & Lee-Kelley, 2008), and towards the principles of marketing
and advertising, which entail promoting the concerned agencies and/or their online platforms
(Grow & Altstiel, 2005; Panopoulou et al., 2014). This finding suggests, then, that the
more the citizens are aware of their government/agencies and their online platforms, the less
their intent to visit those platforms and to engage with the contents therein. This may be due to
the initial information-seeking behaviour of citizens, where those with a low level of awareness
may be more willing to explore and visit governments' platforms in search of information
(Kuhlthau, 1991). With time, however, these information-seekers may develop either or both of
(1) an informed negative perception/opinion of the government/agency and (2) an informed
negative perception/opinion of governments' platforms, which impacts their affinity for the
platforms. This could be a result of coming to know so much about the government/agency that
the perception of trust drops, or of not being able to find quality information on governments'
platforms, such that there is no incentive to return. This phenomenon is related to the
concept of e-loyalty in the e-commerce research field, defined as a customer's favourable
attitude towards a retailer that results in repeat buying behaviour and is typically dependent
on satisfaction and trust (Li, Aham-Anyanwu, Tevrizci, & Luo, 2015; Luarn & Lin,
2003; Reichheld & Schefter, 2000; Smith, 2000).
The aim of this study was to contribute to the e-government research area, literature and practice
-with a bias to e-public engagement/participation- by developing a framework for optimal
citizen engagement with governments’ contents on the internet. Two key research questions
were asked in Chapter One: (1) What are the factors that influence citizens’ engagement with
governments’ contents on the internet? (2) How well do these factors explain citizens’
engagement with governments’ contents on the internet? To answer these questions, four
objectives were set: (1) to identify factors that influence citizens’ engagement with governments’
online contents. (2) To propose a hypothesis with the identified factors. (3) To statistically test
the hypothesis. (4) To propose a framework for optimal citizens’ engagement with governments’
contents on the internet based on results of the statistical test.
To meet these objectives, the study was divided into two phases (one phase for each question).
The findings of this study and the process by which the study was executed show that the
research questions have been answered, and the objectives met; this was summarised in section
6.3. However, what are the implications of these findings for theory and practice?
This study addresses the need for extensive, in-depth info-centric and citizen-focused e-
participation research in a field dominated by technocentric and top-down (government-facing)
studies. It extends e-participation research by showing that it was possible to operationalise
citizen-content engagement and generate an initial explanatory/thematic (C-CE) model of
factors that influence citizens' engagement with governments' contents on the internet; the
Researcher is not aware of any previous study that has done this. The C-CE model suggests that
citizens' information needs, visual and perceived attributes of the contents, the perception of
writer's credibility, affinity for governments' platform, trust in government/agency, platforms'
public sphere attributes and its hedonic features all play direct and indirect roles in facilitating
citizens’ engagement with governments’ contents.
The model was subjected to and refined through a content adequacy test, a pilot test, factor
analysis and a goodness-of-fit test to ensure that it meets all relevant viability indices. However,
findings based on the C-CE model cannot be easily generalised, as it was developed ab initio
through an exploratory/qualitative study involving a small sample size and contextualised to a
particular country. Furthermore, the refined C-CE model was statistically tested with a sample
size that was not representative of the entire population; this also impedes generalisation.
Nonetheless, one major theoretical implication of this study is that the C-CE model can serve as
a framework or a foundation on which to build future research interested in investigating citizens'
engagement with governments' online contents.
Out of necessity, this study developed a quantitative scale from qualitative findings using the
content adequacy assessment approach typically popular in the medical field. This was
necessitated by the need to build the study around an in-depth investigation of citizens’
engagement with governments' content, which has not been studied previously. The Researcher
is not aware of any study that has adopted this approach in the information systems/sciences
research field where studies typically rely on existing models and theories. This study, therefore,
shows that it is possible to build a hypothetical model from scratch in the IS research field,
and serves as an invitation for future studies to attempt the same where necessary.
Having adopted a conceptual framework built around the Uses and Gratification Theory (UGT),
this study extends it to the e-government research area as it concerns governments' online
contents. The UGT was developed by a psychologist named Herta Herzog in 1944 as she studied
satisfaction amongst radio audiences, but has since been extended to the study of audience
gratification across several media of communication, such as print (Finn, 1997), television
(Palmgreen & Rayburn, 1979; Wenner, 1982), the internet (Ko et al., 2005; Stafford et al., 2004),
video games (Sherry et al., 2006), and mobile phones (Leung & Wei, 2000; O'Keefe &
Sulanowski, 1995). It is also becoming increasingly popular in social media studies (Park et al.,
2009; Raacke & Bonds-Raacke, 2008; Urista et al., 2009). This study, particularly its
qualitative phase, suggests that the gratifications citizens derive from governments' contents are
the information they contain, their visual and perceived attributes, and the desire to read from
certain writers. Furthermore, the gratifications of governments' platforms include their public
sphere characteristics and hedonic features.
Although the public sphere concept has been studied in the era and context of the Internet, it
remains largely alien to the e-participation research field. This may be ascribed to Habermas'
conceptualisation of the public sphere as being free from the interference and control of the
State/government (Habermas, 1997). However, findings in this study highlight the importance
of considering the concept of the public sphere in the discourse of e-participation. Two
(collaborative content creation, and interactivity and deliberation) of the three public sphere
factors/characteristics investigated in this study were found to significantly influence citizens'
affinity for governments' platforms. Although accessibility was not found to be significant,
this may have been due to the homogeneity of the respondents regarding access to the internet
as explained in an earlier section; and thus, may be significant in a different study with diverse
respondents. Therefore, this study serves as an invitation for researchers to consider ways
through which e-participation can be enhanced with the concept of e-public sphere. In essence,
this entails the need for studies that investigate ways through which governments, using the
internet, can be part of the e-public sphere. Would this be possible while maintaining the
principles/characteristics of the public sphere as conceptualised by Habermas? Or will a
re-conceptualisation of the public sphere be required for governments to play a role in it via the internet?
Previous studies have indicated the impact of trust on citizens' use of e-government services
and on customers' purchase of online products. However, this study indicates that while trust may
influence affinity for governments' platforms, it is not important in engagement with the contents
on those platforms. As explained earlier, this may be due to the info-centric nature of this study
and citizens' information-seeking behaviour. This indicates, however, that the influence of trust
on platform users' behaviour on a host's platform will depend on the nature of their interest
in the platform.
Although Nigerians were the case of this study, some findings can be extended beyond the case
and the e-participation research field. For instance, information need and content quality would
be expected to influence diverse readers' engagement with different content types, whether in
e-learning, e-government, e-journalism, etcetera. A similar case is the importance of citizens'
affinity for governments' platforms in their engagement with the contents on those platforms,
which is related to the concept of e-loyalty in e-commerce. Therefore, the greater the intent of
users to visit an online platform, the more likely they are to engage with the contents on the platform.
However, there are also findings that may be peculiar to the case, for instance, the need for
information for political participation which may be due to instability and uncertainty in the
country. In more developed countries, citizens’ information needs may be more for individual
interests than for political participation.
The overall result of this research shows that citizens' engagement with governments' online
contents is dependent on the perceived attributes of the contents in terms of quality and ability
to meet the citizens’ information needs, and on citizens' affinity for governments' online
platforms. Governments, agencies, and stakeholders must therefore meet the challenge of
providing the right information on, and attracting citizens to, their online platforms.
However, it is difficult, if not impossible, to provide bespoke contents that meet the information
needs of every citizen on governments’ online platforms. A more practical solution would be
finding ways to understand the predominant information needs of citizens at any given point in
time; for example, current socio-economic events in a country may result in an increased demand
for certain information as citizens try to understand how the events may affect them.
Governments should also be ready to provide tailored information to individual citizens on
demand and in the shortest possible time; prima facie, this may appear resource-intensive.
However, governments can approach this by (1) providing a single hub where citizens can
request government-related information. This is important because government is a huge
enterprise with enormous bureaucracies running through ministries, departments and agencies,
and it is easy, and indeed common, for citizens to get entangled and confused in their search
for information from the government. With such a hub, citizens can request information and
have it channelled to the appropriate ministries, departments or agencies. (2) Publishing
frequently-requested information so that subsequent requests can be met by directing individual
citizens to the published content. (3) Providing an avenue for citizens to recycle information,
for example, a community question-and-answer platform where citizens can request information
and get it from a community of users who may have had earlier access to it. A simple
illustration of how the first two mechanisms could work together is sketched below.
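To make the hub idea concrete, the following is a minimal sketch, in Python, of mechanisms (1) and (2) working together; every name in it (InformationHub, the routing table, the demand threshold of three requests) is a hypothetical illustration, not a prescription from this study.

# Hypothetical sketch: route citizens' information requests to the right
# agency and publish frequently-requested topics for reuse.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class InformationHub:
    routing: dict                               # topic -> responsible agency
    faq: dict = field(default_factory=dict)     # published frequent answers
    demand: Counter = field(default_factory=Counter)

    def request(self, topic: str) -> str:
        self.demand[topic] += 1
        if topic in self.faq:                   # (2) reuse published content
            return f"See published content: {self.faq[topic]}"
        agency = self.routing.get(topic, "Central Information Office")
        if self.demand[topic] >= 3:             # arbitrary publication threshold
            self.faq[topic] = f"{agency} guidance on {topic}"
        return f"Request forwarded to {agency}"  # (1) channelled to the agency

hub = InformationHub(routing={"healthcare": "Ministry of Health",
                              "jobs": "Ministry of Labour"})
for _ in range(4):
    print(hub.request("healthcare"))
# The first three requests are forwarded; the fourth is answered from the
# newly published frequently-requested content.

The core loop (route the request, count demand, publish once demand is high) is deliberately simple; a real hub would add identity, request tracking, and the community question-and-answer avenue described in (3).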
Governments, especially in the developing world, should also realise that a majority of the
information that citizens would want from them would be for the purpose of political
participation as they try to judge the performance of the government. The natural reaction to this
by most governments would be propaganda or doctored information, but this creates mistrust.
Therefore, governments may need to rise to the challenge of self-reporting, which promotes the
perception of transparency and improves trust.
Governments should also embrace the concept of e-loyalty as used in e-commerce. This study
provides evidence that citizens with a low level of political awareness tend to show more
affinity for governments' platforms and are more willing to engage with governments' contents.
On the flip side, citizens who are optimally aware of the government tend to show disaffection
for governments' platforms, which may be due to previous poor experiences on such platforms.
Therefore, governments should endeavour to ensure that citizens have an optimal experience and
that their information needs are met on their platforms.
Around the world, governments and researchers tend to focus mainly on e-service provision/e-
government, as the literature shows. For example, in the United Kingdom, the government is
working on a policy called Digital by Default, which aims to digitise all transactional
government services. However, conducting such digital transactions with governments can be
perceived as riskier than creating, requesting or demanding digital information from
governments; this creates a situation whereby citizens may not trust the digital system and
would prefer offline human-to-human transactions. Nonetheless, governments can increase this
trust by ensuring appropriate engagement of citizens with the low-risk information level; and
this is where the findings of this study can be of help. Governments can start by facilitating
citizens’ engagement with governments’ online contents and their affinity for governments’
digital platforms; this then provides a pedestal on which transactional functions can be
introduced. Moreover, engagement with governments’ online contents may positively influence
the adoption of governments' digital transactions by citizens who lack digital skills, as they
would be confident that the information needed to complete such transactions would be accessible.
This study is no different from other empirical investigations with their inherent methodological
weaknesses. One major weakness of this research is the collection of data exclusively from
‘ordinary’ citizens and not from other stakeholders like business and civil society organisations
and even other governments who also use government information. Therefore, the findings of
this study may not be realistically extended to all groups of users of government information.
Perhaps the outcome of this study would have been different if the data had been collected from
stakeholders across the citizenry, other governments, and business and civil society organisations. In
this study, the decision to focus on citizens and no other group of stakeholders was intentional
because they are the most important actors in e-participation (Medaglia, 2012) and because
citizens’ engagement with and use of governments’ information is an unexplored niche topic
that needs research attention (Janssen et al., 2012; Zuiderwijk et al., 2014). In the future, similar
studies can be carried out on separate groups of stakeholders or even across different groups of
stakeholders. The initial C-CE model developed in this research can also be tested across groups
of stakeholders.
The data collection was also cross-sectional and did not capture possible differences or
changes in opinions that may occur over a period of time. This is an even more important
limitation considering that the opinions of citizens tend to change with changes in socio-economic
conditions in their country. There is every possibility that the opinions captured in this study
may change a few months from now, such that the findings would not be the same. Perhaps, if
a longitudinal approach had been adopted, more robust findings would have emerged. This study,
being Ph.D. research, had to be concluded within three years. Due to the limited research time,
a cross-sectional approach to data collection was more feasible than a longitudinal one. Future studies
could adopt a longitudinal approach to improve the findings of this study.
Although a majority of the values returned by the CFA met the requirements for validity,
reliability and fit, some did not meet desirable values. For instance, some of the factor
loadings (between 0.5 and 0.7) were acceptable but not ideal. Similarly, one Average Variance
Extracted (AVE) value was below the recommended 0.5. There were also two fit indices that were
not met. All of these raise questions about the fit, validity and reliability of the model. During
the exploratory factor analysis, a new construct (INPCQ) was formed when the factor rotation
clumped items from two different constructs (IN and PCQ) together; it is this construct that is
largely responsible for the poor values. However, these shortcomings were tolerated because the
study was exploratory and the measure was new. Future studies could resolve this by conducting
a more conservative round of CFA.
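For context, the AVE of a construct is the mean of its squared standardised loadings, so loadings in the "acceptable but not ideal" range reported above can pull the index below the threshold. With three hypothetical loadings of 0.55, 0.62 and 0.68:

\[
\mathrm{AVE} = \frac{1}{n}\sum_{i=1}^{n}\lambda_i^{2}
             = \frac{0.55^{2} + 0.62^{2} + 0.68^{2}}{3}
             = \frac{1.1493}{3} \approx 0.38 < 0.5
\]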
There is also a limitation in terms of generalisability of the findings as the study was
contextualised in Nigeria. The findings of the first phase of this research suffer from the usual
limitations of qualitative studies as it concerns generalisation; obviously, sampling the opinions
of 16 citizens of a single country is not enough to produce a definitive generalisation about
factors that influence citizens’ engagement with governments’ online contents around the world.
Similarly, the second phase of this study, though quantitative, relied on a non-probability
sampling approach, and therefore not everyone in the population had an equal chance of being
selected. Furthermore, with only 260 participants (of which 255 cases were used for the SEM),
the sample size was not representative of the entire population. In other
words, if this study had included more countries and used samples representative of the entire
population, the findings might have been closer to reality than they currently are. The decision
to contextualise this study stems from the argument that a case study allows for a holistic, in-
depth investigation of a phenomenon (Zainal, 2007). The decision to use Nigerians as the case
was made for the sake of both convenience and relevance. In terms of convenience, the
Researcher, a Nigerian studying in the United Kingdom, had two options from which to select a
case: Nigerians or the British. However, European countries and the United States dominate the
contextualisation of e-public engagement research (Bonson et al., 2015; Carter & Belanger,
2012; Freire et al., 2014; Mahrer & Krimmer, 2005; Panopoulou et al., 2014; Saebo et al., 2011;
Zheng et al., 2014). This prompted the invitation by Moatshe and Mahmood (2012) for similar
studies in developing countries in Africa, Asia, and the Middle-east. Therefore, it was relevant
for the extension of e-participation research that this study is contextualised in Nigeria.
Furthermore, in the first phase of the study, the Researcher could not realistically have
interviewed every individual in the target population, and the decision to stop at the 16th
interviewee was due to the principle of data saturation in qualitative studies (Francis et al., 2010).
Similarly, in the second phase, the Researcher adopted non-probability sampling as it was not
realistic, within the research time frame, to conduct a survey of a sample that would optimally
represent the entire population. The Researcher made efforts to survey as many people as
possible using both online and paper-based questionnaires. However, only 260 valid responses
came through, and this was further reduced to 255 for the SEM due to outliers. As discussed in
Chapter Five, the sample size was appropriate for the data analysis used in this study. Future
studies could consider different contexts or use a multiple case study approach. Where possible,
future studies could try to involve a sample size that is representative of the entire population of
interest by using probability sampling methods.
Finally, another weakness is the loose theoretical base of this research, which is mainly because
the Researcher aimed to develop a framework in a research area with few prior studies.
Although the empirical investigation of this study was based on a conceptual framework
developed around the Uses and Gratification Theory (UGT), the approach was mainly grounded,
as the Researcher allowed new findings and concepts to emerge. Therefore, the UGT did not
feature heavily in this study as a theory. Hence, it is appropriate to consider the findings of this
study with some scepticism. Nevertheless, this study has succeeded in proffering a framework of
factors that influence citizens' engagement with governments' contents (the C-CE model), and
in initiating a discourse in that regard for future studies to participate in. Future studies can
test and/or improve on this framework.
Chapter 7 : Conclusion
This chapter wraps up this seven-chapter thesis. In Chapter One, the arguments for this research
were made, the research questions were asked, and the objectives were set. In Chapter Two, a
literature review was conducted to ascertain the current state of knowledge in the e-participation
research field as it concerns citizens' engagement with governments' information. In Chapter
Three, a conceptual framework based on the uses and gratification theory (UGT) was developed,
and the research methodology for the study was discussed. In Chapter Four, the findings from
the qualitative phase were presented, and a hypothetical model (the C-CE model) was developed.
In Chapter Five, items for the questionnaire leading to the quantitative phase of the study were
developed, and the quantitative data was analysed and presented. In Chapter Six, the research
findings, implications, and limitations were discussed. In this chapter, the thesis is concluded.
The aim of this study was to contribute to the e-government research area, literature and practice
-with a bias to e-public engagement/participation- by developing a framework for optimal
citizen engagement with governments’ contents on the internet. Two key research questions
were asked in Chapter One: (RQ1) What are the factors that influence citizens’ engagement with
governments’ contents on the internet? And (RQ2) How well do these factors explain citizens’
engagement with governments' contents on the internet? For the sake of clarity, this study was
divided into two phases, each phase dedicated to answering a research question.
In phase one and to answer RQ1, two objectives were set: (R-OBJ1) To identify factors that
influence citizens’ engagement with governments’ online contents. (R-OBJ2) To propose a set
of hypotheses with the identified factors. To meet R-OBJ1, the literature was reviewed, and the
conclusion was that little or no research exists in the area of citizens’ engagement with
governments’ contents; some researchers claimed it was a niche research area. To provide an
initial guideline to the investigation, this study adopted a conceptual framework based on the
uses and gratification theory (UGT), which suggests that citizens’ engagement with
governments’ online contents would be based on their information needs, on the contents’
features/quality, and on activities that facilitate engagement. A qualitative empirical
investigation, built on this conceptual framework, found that citizens' engagement with
governments’ online contents (CE) is directly influenced by citizens’ information needs, the
contents’ attributes which could be visual (VAC) and/or perceived (PCQ), perception of the
writer (PWC), trust in government/agency (TGA), and citizens’ affinity for governments’
platforms (IVP). Qualitative findings of this study also suggest that IVP is dependent on TGA,
the platforms’ similarity to the public sphere (accessibility (FA), content creation (CC), and
interactivity and deliberation (IDelib)) and their hedonic/persuasive features (HF). Having
arrived at these findings, R-OBJ1 was fully met. To meet R-OBJ2, a set of hypotheses was
proposed for each of the findings. It was also hypothesised that social media use by governments
would have more positive effect than websites on the influence of TGA on IVP, FA on IVP, CC
on IVP, and IDelib on IVP. A final hypothesis was that citizens’ level of political awareness has
a positive moderating effect on the influence of IVP on CE. A hypothetical/thematic model of
these findings was also developed and named the citizen-content engagement (C-CE) model, and
R-OBJ2 was met. Having met both objectives, RQ1 was answered, and the first phase of this
study was successfully concluded.
In phase two, and to answer RQ2, two objectives were set: (R-OBJ3) To statistically test the
hypotheses. (R-OBJ4) To propose a framework for optimal citizens’ engagement with
governments’ contents on the internet based on results of the statistical test. To meet R-OBJ3,
a quantitative empirical study was conducted to test the hypotheses proposed under R-OBJ2.
Using factor analysis, the C-CE model was further refined. IN and PCQ were merged into one
to represent contents’ quality and ability to meet citizens’ information needs (INPCQ), while
PWC was removed entirely. This reduced the factors that may influence CE to four (INPCQ,
VAC, TGA, and IVP) in the refined C-CE model, while everything else remained largely
unchanged. The refined C-CE model had 14 hypotheses, which were tested for significant
relationships between the exogenous and endogenous variables. Eight of these hypotheses were
accepted: citizens’ engagement with governments’ online contents (CE) is significantly
influenced by the contents’ quality and ability to meet citizens’ information needs (INPCQ), and
their affinity for governments’ online platforms (IVP). Citizens’ affinity for governments’ online
platforms (IVP) is significantly influenced by their trust in government/agency (TGA), the
platforms’ ability to allow collaborative content creation (CC), and interactivity and deliberation
(IDelib). Social media use by governments was found to have more positive effects than website
use on the influence of trust in government/agency (TGA), collaborative content creation (CC),
and interactivity and deliberation (IDelib) on IVP. To meet R-OBJ4, the results and implications of the
regression analysis were discussed. Having met both objectives, RQ2 was answered, and the
second phase of this study was successfully concluded.
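As an illustration of how the accepted structural paths of the refined C-CE model could be re-estimated in future work, the following is a minimal sketch using the Python package semopy; it assumes each construct is summarised as a composite score per respondent, and the generated data and path strengths are random placeholders, not the study's estimates.

import numpy as np
import pandas as pd
import semopy

# Toy stand-in for the 255 survey cases; in practice the DataFrame would
# hold respondents' composite construct scores.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(255, 4)),
                  columns=["TGA", "CC", "IDelib", "INPCQ"])
df["IVP"] = (0.5 * df["TGA"] + 0.2 * df["CC"] + 0.2 * df["IDelib"]
             + rng.normal(size=255))
df["CE"] = 0.4 * df["INPCQ"] + 0.3 * df["IVP"] + rng.normal(size=255)

# The accepted paths of the refined C-CE model in lavaan-style syntax:
# IVP is predicted by TGA, CC and IDelib; CE by INPCQ and IVP.
desc = """
IVP ~ TGA + CC + IDelib
CE ~ INPCQ + IVP
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())           # path coefficients and p-values
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA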
The main contribution to knowledge of this study is that the findings, especially those of the
qualitative phase, provide a holistic, info-centric view of factors that could influence citizens'
engagement with governments' digital contents. From the qualitative phase, the citizen-content
engagement (C-CE) model was developed. The C-CE model provides a framework/foundation on
which future research into citizens' engagement with governments' information can be built.
Furthermore, this study introduced the use of the content validity index for items (I-CVI) and
the average content validity index for scales (S-CVI/Ave) as an appropriate way of developing
scales in the IS research field. This approach was adopted from the healthcare research field,
where it is widely used to develop and validate scales from qualitative data.
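As a rough sketch of how these indices are computed, assuming, in line with this study's 5-point fitness scale, that ratings of 4 ("fit") or 5 ("strongly fit") count as endorsing an item; the expert ratings below are hypothetical:

def i_cvi(ratings):
    """Item-level CVI: proportion of raters scoring the item 4 or 5."""
    return sum(r >= 4 for r in ratings) / len(ratings)

def s_cvi_ave(ratings_per_item):
    """Scale-level CVI (averaging method): mean of the items' I-CVIs."""
    return sum(i_cvi(r) for r in ratings_per_item) / len(ratings_per_item)

# Five hypothetical experts rating three items on the 1-5 fitness scale:
items = [[5, 4, 4, 5, 3], [4, 4, 5, 5, 5], [2, 3, 4, 3, 2]]
print([round(i_cvi(r), 2) for r in items])  # [0.8, 1.0, 0.2]
print(round(s_cvi_ave(items), 2))           # 0.67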
This study also adds to the sparse e-participation research and literature contextualised in
developing countries and with a proactive focus on citizens. It also discusses the public sphere
as part of e-participation and, therefore, serves as an invitation for researchers to investigate
ways through which governments' platforms can foster a public sphere.
According to Oscar De La Hoya, “there is always room for improvement, no matter how long
you have been in the business". Although this research took three years, it is far from perfect,
as highlighted earlier in the section on limitations and recommendations.
Therefore, to circumvent those limitations, the Researcher’s immediate focus in extending this
study would include:
1. Collecting and comparing data from more countries (both developed and developing).
2. Collecting data from a wider range of stakeholders including ‘ordinary citizens’,
businesses, civil societies/organisations, etc.
3. Conducting a longitudinal study to capture possible changes in time and context.
4. Adopting a partial least squares (PLS)-SEM approach, in the first instance, to test the
C-CE model. A covariance-based-SEM approach can be used when the model is
optimally established.
7.4 Reflections on the Researcher’s Experience: Lessons Learnt and Knowledge Acquired
Before this study, the Researcher had only done qualitative and conceptual (desktop) studies for
his Bachelor's and Master's degrees. The Researcher had no experience of quantitative research
and was also unable to understand the outcomes of quantitative studies based on any form
of regression analysis. However, during the course of this research, the Researcher performed
systematic literature reviews, performed qualitative analysis with NVIVO at a level he had never
done before, performed a content adequacy assessment which prepared the qualitative data for
use in a quantitative survey, and performed various statistical analyses (multiple imputation,
exploratory factor analysis, structural equation modelling) using SPSS and Amos. The
Researcher learnt how to interpret various statistical indices used by researchers in quantitative
studies. These lessons and acquired knowledge were also fortified with the Researcher teaching
his colleagues how to use NVIVO and SPSS for qualitative and quantitative analysis
respectively. In the course of this study, the Researcher also strengthened his abilities in critical
thinking, problem-solving and independent research. These are expected of all Ph.D. researchers
as there are times when certain bottlenecks can only be overcome by the researcher’s resilience,
just as there are times when certain tough decisions need to be made by the researcher as long
as there are valid justifications for such decisions. The Researcher has, indeed, emerged from
the Doctoral journey as a much-improved researcher.
APPENDICES
Appendix A: Interview Participants’ Information Sheet
Research Topic:
Researcher:
Nnanyelugo Aham-Anyanwu.
Invitation:
It is my honour to request that you take part in this research project. Before you decide, it is
necessary that you understand the reasons for this research and what it shall involve. Please take
time to carefully read through the following information. Feel free to discuss this with your
friends and colleagues, and please do ask me questions where you need clarification or more
information. Think about it and let me know if you wish to participate in this research or not.
Thank you for reading this.
What happens to the transcribed/textual data during and after the study?
During the study, the transcribed/textual data shall be stored securely on the
University's intranet. It shall also be analysed in order to achieve the objectives of the study. Your
details shall not be included in the analysis, only your opinions count. You shall be referred to
as “Respondent X” where “X” is a number; this would help link opinions to the individuals
who gave them while they still remain anonymous.
After the study, all transcribed/textual data shall be deleted from storage.
Please keep a copy of this information sheet and do sign a copy of the consent form should you
decide to take part in this study.
Thanks a lot for reading.
Appendix B1: Consent Form for Interview
____________________________________ _______________________________
Participant’s name Signature Date
____________________________________ ______________________________
Researcher’s name Signature Date
Below is the text used to gain consent from participants in the content validity phase of this
study. This was done online using the Google forms survey software. It appeared on the first
page of the survey.
___________________________________________________________________________
Hello there!
You are requested to help in the development of a scale to test citizens’ engagement with the
Nigerian government’s contents on the internet. You have been selected because the
Researcher believes that you have the required intellectual ability.
The process is straightforward. There are 11 pages, 11 definitions and 48 items. On each page,
a definition is written at the top followed by the 48 items.
All you need to do is read the definition, look at each item and rate how much you think it fits
the definition. These items can be rated from 1-5 where:
1 is strongly unfit
2 is unfit
3 is neutral
4 is fit
5 is strongly fit
You are not obliged to participate. Clicking "Next" to go to the next page indicates that you have
given your consent to participate; however, you are free to exit at any time before submitting the
form, and any input made would not be used in this study.
Before you start, please input some of your details on the next page, but do not put down your
name. This is just for statistical purposes, and the details cannot be linked to you.
Thank you so much, I am very grateful
___________________________________________________________________________
Below is the text used to gain consent from participants in the final phase of this study. This was
done online using the Google forms survey software. It appeared on the first page of the
survey.
___________________________________________________________________________
Hello there!
You are requested to complete a questionnaire which investigates citizens' engagement with
the Nigerian government's contents on the internet.
You have been selected simply because you are Nigerian or because you live in Nigeria.
You are not obliged to participate. Clicking "Next" to go to the next page indicates that you have
given your consent to participate; however, you are free to exit at any time before submitting the
form, and any input made would not be used in this study.
Before you start, please input some of your details on the next page, but do not put down your
name. This is just for statistical purposes, and the details cannot be linked to you.
Appendix C: Interview Questions
Note: Questions 11-15 (in italics) were not part of the original questions; they were added as
data was collected.
Appendix D: Constructs and Items
Table columns: Constructs | Definition | Second-level constructs | Definition | Third-level constructs | Definition | Source/Reference | Items | Measures. The recoverable entries are listed below by construct; ellipses mark cells lost in extraction.

Content Attributes
Definition: attributes of a content that influence citizens' engagement with them.

Information Need (IN)
IN3: I am interested in information that focuses on trending socio-political issues in the country.
IN4: I am interested in information that focuses on government's activities/projects.
IN5: I am interested in government information that focuses on government's financial income and expenditure.
IN6: I am interested in government information that is of direct/personal benefit to me (jobs, education, healthcare, welfare packages, etcetera).

Visual Attributes of Content (VAC)
Definition: ... (visible features) of governments' online contents/articles.
VAC1: ... contents are usually long.
VAC2: In my opinion, government's online contents usually have relevant pictures.
VAC3: In my opinion, government's contents usually have relevant videos.

4. Perceived Content Quality (PCQ)
Definition: Gauging the quality of government's contents on the internet.
Source/Reference: Interview Data (Chen, Clifford, & Wells, 2002; Iivari & Koskela, 1987; Peng, Fan, & Hsu, 2004).
PCQ1: Government online contents are usually informative.
PCQ3: In my opinion, government's online contents are usually accurate.

Perception of Writer's Credibility (PWC)
PWC4: I believe that writers of government's contents are usually transparent.

6. Affinity for Governments' Platforms (IVP)
Definition: Gauging citizens' reasons for visiting government's online platforms.
Source/Reference: Interview (Carter & Bélanger, 2005; Gardner & Amoroso, 2004; Peng et al., 2004).
IVP1: I visit government's online platform as an important source of information.
IVP2: I visit government's online platform to express my opinions.
IVP3: I visit government's online platform to interact with other citizens.
IVP4: I visit government's platforms to interact with government officials.

Trust in Government/Agency (TGA)
TGA3: In my opinion, heads of government agencies can be trustworthy.
TGA4: The National Orientation Agency (NAO) is a trustworthy agency.

Platform Attributes
Definition: Attributes of government's online platforms that encourage citizen-content engagement.
Second-level construct, Similarity to the public sphere: attributes of governments' contents that are similar to Habermas' concept of the public sphere as it concerns access, initiation of discourse, and exchange of ideas.

8. Accessibility (FA)
Definition: Gauging citizens' perceived level of access to governments' platforms.
Source/Reference: Interview Data (Habermas, 1989; Hauser, 1998; Pusey, 1987b).
FA1: I have free access to government's platforms on the internet.
FA2: I do not have to register on government's platforms to gain access.
FA3: I have unrestricted access to government's platform on the internet.

9. Collaborative Content Creation (CC)
Definition: Gauging citizens' ability to create and post contents on governments' online platforms.
Source/Reference: Interview Data (Habermas, 1989; Hauser, 1998; Pusey, 1987b).
CC1: Everyone has equal opportunity to post contents on governments' platforms.
CC2: I see contents written by other citizens on governments' platforms.

Interactivity and Deliberation (IDelib)
IDelib3: I believe I am free to challenge the opinions of government officials on government's platforms.

Hedonic/Persuasive Features (HF)
HF2: Government's online platforms have interesting gamified activities.
HF3: There are entertaining activities on government's platforms.
Appendix E: Questionnaire
Appendix F: Missing Data
Variable | Missing N | Missing % | Valid N
PC2Others 103 39.6% 157
PC2Web 103 39.6% 157
PC2Blog 103 39.6% 157
PC2Twit 103 39.6% 157
PC2Fbook 103 39.6% 157
IN4Others 21 8.1% 239
IN4PersUse 20 7.7% 240
IN4Econ 20 7.7% 240
IN4GovtProj 20 7.7% 240
IN4GovtExp 20 7.7% 240
IN4GovtPol 20 7.7% 240
IN4Trend 20 7.7% 240
MthIncome 16 6.2% 244
Occupation 15 5.8% 245
PC1Others 5 1.9% 255
PC1Web 5 1.9% 255
PC1Blog 5 1.9% 255
PC1Twit 5 1.9% 255
PC1Fbook 5 1.9% 255
ID5 3 1.2% 257
LastQual 2 0.8% 258
Age 2 0.8% 258
Gender 2 0.8% 258
VP2 1 0.4% 259
MarStat 1 0.4% 259
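The gaps tabulated above were handled through multiple imputation. The following is a minimal sketch, using scikit-learn rather than the SPSS procedure actually employed in this study, of how five completed datasets (as pooled in Appendices J to L) can be generated; the toy DataFrame and its column names are illustrative assumptions.

import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy responses with a few missing values, standing in for variables
# such as ID5, VP2, Age and Gender listed in this appendix.
rng = np.random.default_rng(42)
data = pd.DataFrame(rng.integers(1, 8, size=(260, 4)).astype(float),
                    columns=["ID5", "VP2", "Age", "Gender"])
data.loc[rng.choice(260, size=3, replace=False), "ID5"] = np.nan

# One completed dataset per iteration; posterior sampling makes the five
# imputations differ, and pooled statistics average across them.
completed = [IterativeImputer(sample_posterior=True, random_state=i)
             .fit_transform(data) for i in range(1, 6)]
pooled_mean = np.mean([d[:, 0].mean() for d in completed])
print(f"Pooled mean of ID5 across 5 imputations: {pooled_mean:.2f}")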
Appendix G: Scatter plots for CE factors
(Original data, Iteration 1 and 5)
Appendix H: Scatter plots for IVP Factors with outliers
(Original data, Iteration 1 and 5)
Appendix I: Scatter plots for IVP factors with outliers
(Original data, Iteration 1 and 5)
Appendix J: Respondents’ Data
(Original data, Iteration 1 to 5)
Details Number of Cases and Percentages
Original 1st Iteration 2nd Iteration 3rd Iteration 4th Iteration 5th Iteration Pooled
Data Iterations
Gender Male 149 (57.3%) 149 (57.3%) 150 (57.7%) 150 (57.7%) 150 (57.7%) 150 (57.7%) 149.8 (58%)
Female 109 (41.9%) 111 (42.7%) 110 (42.3%) 110 (42.3%) 110 (42.3%) 110 (42.3%) 110.2 (42%)
Age 18-28 85 (32.7%) 85 (32.7%) 86 (33.1%) 86 (33.1%) 86 (33.1%) 85 (32.7%) 85.6 (33%)
29-35 131 (50.4%) 131 (50.4%) 132 (50.8%) 131 (50.4%) 131 (50.4%) 132 (50.8%) 131.4 (51%)
36-42 27 (10.4%) 28 (10.8%) 27 (10.4%) 27 (10.4%) 27 (10.4%) 28 (10.8%) 27.4 (10%)
42 - Above 15 (5.8%) 16 (6.2%) 15 (5.8%) 16 (6.2%) 16 (6.2%) 15 (5.8%) 15.6 (6%)
Marital Single 156 (60%) 156 (60%) 157 (60.4%) 157 (60.4%) 157 (60.4%) 156 (60%) 156.6 (60%)
Status Married 103 (39.6%) 104 (40%) 103 (39.6%) 103 (39.6%) 103 (39.6%) 104 (40%) 103.4 (40%)
Education SSCE 20 (7.7%) 20 (7.7%) 20 (7.7%) 20 (7.7%) 20 (7.7%) 20 (7.7%) 20 (8%)
Diploma 19 (7.3%) 19 (7.3%) 21 (8.1%) 20 (7.7%) 20 (7.7%) 21 (8.1%) 20.2 (8%)
Bachelors 146 (56.2%) 147 (56.5%) 146 (56.2%) 147 (56.5%) 147 (56.5%) 146 (56.2%) 146.6 (56%)
Postgraduate 73 (28.1%) 74 (28.5%) 73 (28.1%) 73 (28.1%) 73 (28.1%) 73 (28.1%) 73.2 (28%)
Monthly Income:
Less than N20,000 (less than £50.70): 29 (11%) | 31 (11.9%) | 31 (11.9%) | 34 (13.1%) | 33 (12.7%) | 33 (12.7%) | 32.4 (12%)
N20,000 - N49,999 (£50.70 - £126.75): 37 (14.2%) | 41 (15.8%) | 40 (15.4%) | 38 (14.6%) | 41 (15.8%) | 39 (15.0%) | 39.8 (15%)
N50,000 - N99,999 (£126.76 - £253.51): 72 (27.7%) | 73 (28.1%) | 75 (28.8%) | 73 (28.1%) | 74 (28.5%) | 74 (28.5%) | 73.8 (28%)
N100,000 - N199,999 (£253.51 - £507.02): 66 (25.5%) | 68 (26.2%) | 67 (25.8%) | 67 (25.8%) | 66 (25.4%) | 66 (25.4%) | 66.8 (26%)
N200,000 - N299,999 (£507.03 - £760.54): 28 (10.8%) | 31 (11.9%) | 32 (12.3%) | 28 (10.8%) | 31 (11.9%) | 29 (11.2%) | 30.2 (12%)
N300,000 and above (£760.55 and above): 12 (4.6%) | 16 (6.2%) | 15 (5.8%) | 20 (7.7%) | 15 (5.8%) | 19 (7.3%) | 17 (7%)
Appendix K: Descriptive Statistics of Likert Variables
(Original data, Iteration 1 to 5)
Variables Original Data 1st Iteration 2nd Iteration 3rd Iteration 4th Iteration 5th Iteration Pooled Iterations
Mean S.D Mean S.D Mean S.D Mean S.D Mean S.D Mean S.D Mean S.D
CE1 4.16 1.48
CE2 3.10 1.62
CE3 3.29 1.72
ID1 4.10 1.82
ID2 3.52 1.80
ID3 3.99 1.75
ID4 2.99 1.78
ID5* 4.29 1.79 4.30 1.80 4.30 1.78 4.29 1.78 4.30 1.79 4.31 1.80 4.30 1.79
HF1 3.38 1.76
HF2 2.55 1.56
HF3 3.00 1.74
FA1 4.17 1.78
FA2 4.16 1.75
FA3 4.02 1.76
CC1 3.98 1.77
CC2 4.53 1.70
CC3 3.78 1.77
PWC1 3.87 1.61
PWC2 3.10 1.45
PWC3 4.04 1.49
IVP1 4.19 1.43
IVP2 4.37 1.52
IVP3 4.12 1.42
TGA1 2.99 1.68
TGA2 3.00 1.54
TGA3 2.70 1.40
TGA4 3.44 1.38
IN1 3.96 1.60
IN2 3.67 1.69
IN3 3.55 1.65
VAC1 3.78 1.311
VAC2 3.76 1.39
VAC3 3.37 1.36
PCQ1 4.20 1.46
PCQ2 4.02 1.47
PCQ3 3.28 1.48
PCQ4 3.85 1.51
PCQ5 3.48 1.55
PCQ6 4.15 1.58
Appendix L: Descriptive Statistics of Dichotomous and Multi-Response Variables
(Original data, Iteration 1 to 5)
Variables Original Data 1st Iteration 2nd Iteration 3rd Iteration 4th Iteration 5th Iteration Pooled Iterations
No Yes No Yes No Yes No Yes No Yes No Yes No Yes
IN4Trend 111 129 120 140 128 132 124 136 122 138 122 138 123.2 136.8
(42.7%) (49.6%) (46.2%) (53.8%) (49.2%) (50.8%) (47.7%) (52.3%) (46.9%) (53.1%) (46.9%) (53.1%) (47.4%) (52.6%)
IN4GovtPol 99 141 104 156 109 151 111 149 108 152 103 157 107 153
(38.1%) (54.2%) (40%) (60%) (41.9%) (58.1%) (42.7%) (57.3%) (41.5%) (58.5%) (39.6%) (60.4%) (41.1%) (58.7%)
IN4GovtExp 86 154 97 163 93 167 95 165 96 164 96 164 95.4 164.6
(33.1%) (59.2%) (37.3%) (62.7%) (35.8%) (64.2%) (36.5%) (63.5%) (36.9%) (63.1%) (36.9%) (63.1%) (36.7%) (63.3%)
IN4GovtProj 90 150 103 157 96 164 98 162 99 161 96 164 98.4 161.6
(34.6%) (57.7%) (39.6%) (60.4%) (36.9%) (63.1%) (37.7%) (62.3%) (38.1%) (61.9%) (36.9%) (63.1%) (37.8%) (62.2%)
IN4Econ 83 157 94 166 97 163 90 170 88 172 99 161 93.6 166.4
(31.9%) (60.4%) (36.2%) (63.8%) (37.3%) (62.7%) (34.6%) (65.4%) (33.8%) (66.2%) (38.1%) (61.9%) (36.0%) (64.0%)
IN4PersUSe 72 168 84 176 83 177 76 184 84 176 82 178 81.8 178.2
(27.7%) (64.6%) (32.3%) (67.7%) (31.9%) (68.1%) (29.2%) (70.8%) (32.3%) (67.7%) (31.5%) (68.5%) (31.4%) (68.6%)
IN4Others 228 11 243 17 241 19 240 20 242 18 240 20 241.2 18.8
(87.7%) (4.2%) (93.5%) (6.5%) (92.7%) (7.3%) (92.3%) (7.7%) (93.1%) (6.9%) (92.3%) (7.7%) (92.8%) (7.2%)
PC1Fbook 66 189 68 192 67 193 68 192 70 190 68 192 68.2 191.8
(25.4%) (72.7%) (26.2%) (73.8%) (25.8%) (74.2%) (26.2%) (73.8%) (26.9%) (73.1%) (26.2%) (73.8%) (26.3%) (73.7%)
PC1Twit 167 88 169 91 168 92 168 92 169 91 169 91 168.6 91.4
(64.2%) (33.8%) (65.0%) (35.0%) (64.6%) (35.4%) (64.6%) (35.4%) (65.0%) (35.0%) (65.0%) (35.0%) (64.8%) (35.2%)
PC1Blog 166 89 167 93 168 92 171 89 168 92 167 93 168.2 91.8
(63.8%) (34.2%) (64.2%) (35.8%) (64.6%) (35.4%) (65.8%) (34.2%) (64.6%) (35.4%) (64.2%) (35.8%) (64.7%) (35.3%)
PC1Web 90 165 93 167 93 167 93 167 91 169 91 169 92.2 167.8
(34.6%) (63.5%) (35.8%) (64.2%) (35.8%) (64.2%) (35.8%) (64.2%) (35.0%) (65.0%) (35.0%) (65.0%) (35.5%) (64.5%)
PC1Others 233 22 235 25 236 24 236 24 235 25 235 25 235.4 24.6
(89.6%) (8.5%) (90.4%) (9.6%) (90.8%) (9.2%) (90.8%) (9.2%) (90.4%) (9.6%) (90.4%) (9.6%) (90.6%) (9.4%)
PC2Fbook 111 46 183 77 171 89 162 98 163 97 182 78 172.2 87.8
(42.7%) (17.7%) (70.4%) (29.6%) (65.8%) (34.2%) (62.3%) (37.7%) (62.7%) (37.3%) (70.0%) (30.0%) (66.2%) (33.8%)
PC2Twit 124 33 188 72 159 101 172 87 174 86 167 93 172.2 87.8
(47.7%) (12.7%) (72.3%) (27.7%) (61.2%) (28.8%) (66.5%) (33.5%) (66.9%) (33.1%) (64.2%) (35.8%) (66.2%) (33.8%)
PC2Blog 144 13 210 50 208 52 194 66 206 54 208 52 205.2 54.8
(55.4%) (5.0%) (80.8%) (19.2%) (80.0%) (20.0%) (74.6%) (25.4%) (79.2%) (20.8%) (80.0%) (20.0%) (78.9%) (21.1%)
PC2Web 7 150 52 208 46 214 59 201 61 199 54 206 54.4 205.6
(2.7%) (57.7%) (20.0%) (80.0%) (17.7%) (82.3%) (22.7%) (77.3%) (23.5%) (76.5%) (20.8%) (79.2%) (20.9%) (79.1%)
PC2Others 133 24 170 90 195 65 193 67 197 63 212 48 193.4 66.6
(51.2%) (9.2%) (65.4%) (24.6%) (75.0%) (25.0%) (74.2%) (25.8%) (75.8%) (24.2%) (81.5%) (18.5%) (74.4%) (23.6%)
PA1 107 153 107 153 107 153 107 153 107 153 107 153 107 153
(41.2%) (58.8%) (41.2%) (58.8%) (41.2%) (58.8%) (41.2%) (58.8%) (41.2%) (58.8%) (41.2%) (58.8%) (41.2%) (58.8%)
PA2 179 80 179 81 180 80 179 81 179 81 180 80 179.4 80.6
(68.8%) (30.8%) (68.8%) (31.2%) (69.2%) (30.8%) (68.8%) (31.2%) (68.8%) (31.2%) (69.2%) (30.8%) (68.9%) (31.0%)
PA3 185 75 185 75 185 75 185 75 185 75 185 75 185 75
(71.2%) (28.8%) (71.2%) (28.8%) (71.2%) (28.8%) (71.2%) (28.8%) (71.2%) (28.8%) (71.2%) (28.8%) (71.2%) (28.8%)
Appendix M: Communalities
(Original data, Iteration 1 to 5)
Appendix N: R², β and p values
(Iterations 1 to 5)
Individual R² Values
Appendix O: Factor Analysis’ Pattern Matrix
(Original data, Iteration 1 to 5)
Pattern Matrix (Original Data)
Factors
1 2 3 4 5 6 7 8 9 10
Variables
IN3 .727
IN2 .622
PCQ1 .590
IN1 .566
PCQ2 .546
PCQ5 .533
PCQ4 .480
PCQ3
IDelib2 .833
IDelib3 .804
IDelib1 .779
IDelib5 .764
IDelib4 .480
FA2 .838
FA1 .796
FA3 .755
HF2 .869
HF3 .761
TGA2 -.811
TGA4 -.718
TGA3 -.711
TGA1 -.623
CC1 -.800
CC2 -.725
CC3 -.722
HF1
PWC1 .814
PWC3 .651
CE1 .791
CE2 .655
CE3 .612
IVP3 .789
IVP2 .748
IVP1 .691
VAC3 .878
VAC2 .837
VAC1 .551
PCQ6 .451
Keys: Grey = failed to load; Orange = removed due to low correlation coefficient score.
Variables
IN3 .723
IN2 .621
PCQ1 .587
IN1 .567
PCQ2 .544
PCQ5 .536
PCQ4 .483
PCQ3
IDelib2 .832
IDelib3 .803
IDelib1 .781
IDelib5 .760
IDelib4 .474
FA2 .839
FA1 .797
FA3 .748
HF2 .869
HF3 .760
HF1
TGA2 -.819
TGA4 -.721
TGA3 -.718
TGA1 -.639
CC1 -.804
CC2 -.722
CC3 -.720
PWC1 .816
PWC3 .652
CE1 .794
CE2 .656
CE3 .608
IVP3 .785
IVP2 .741
IVP1 .689
VAC3 .878
VAC2 .833
VAC1 .552
PCQ6 .449
Keys: Grey = failed to load; Orange = removed due to low correlation coefficient score.
Pattern Matrix (3rd Iteration)
Factors
1 2 3 4 5 6 7 8 9 10
Variables
IN3 .723
IN2 .620
PCQ1 .587
IN1 .567
PCQ2 .543
PCQ5 .536
PCQ4 .483
PCQ3
IDelib2 .831
IDelib3 .804
IDelib1 .781
IDelib5 .763
IDelib4 .473
FA2 .840
FA1 .797
FA3 .748
HF2 .869
HF3 .760
HF1
TGA2 -.819
TGA4 -.721
TGA3 -.718
TGA1 -.639
CC1 -.804
CC2 -.722
CC3 -.720
PWC1 .816
PWC3 .652
CE1 .794
CE2 .657
CE3 .610
IVP3 .785
IVP2 .741
IVP1 .689
VAC3 .878
VAC2 .833
VAC1 .552
PCQ6 .449
Keys: Grey = failed to load; Orange = removed due to low correlation coefficient score.
Pattern Matrix (4th Iteration)
Factors
1 2 3 4 5 6 7 8 9 10
Variables
IN3 .724
IN2 .621
PCQ1 .586
IN1 .566
PCQ2 .543
PCQ5 .536
PCQ4 .483
PCQ3
IDelib2 .831
IDelib3 .806
IDelib1 .781
IDelib5 .763
IDelib4 .473
FA2 .840
FA1 .797
FA3 .747
HF2 .869
HF3 .760
TGA2 -.819
TGA4 -.721
TGA3 -.718
TGA1 -.639
CC1 -.804
CC2 -.723
CC3 -.721
HF1
PWC1 .815
PWC3 .652
CE1 .793
CE2 .655
CE3 .608
IVP3 .784
IVP2 .740
IVP1 .691
VAC3 .878
VAC2 .833
VAC1 .552
PCQ6 .449
Keys: Grey = failed to load; Orange = removed due to low correlation coefficient score.
Pattern Matrix (5th Iteration)
Variables
IN3 .724
IN2 .621
PCQ1 .586
IN1 .566
PCQ2 .543
PCQ5 .536
PCQ4 .482
PCQ3
IDelib2 .832
IDelib3 .806
IDelib1 .781
IDelib5 .760
IDelib4 .473
FA2 .840
FA1 .798
FA3 .747
HF2 .868
HF3 .760
TGA2 -.819
TGA4 -.721
TGA3 -.718
TGA1 -.637
CC1 -.804
CC2 -.723
CC3 -.721
HF1
PWC1 .815
PWC3 .652
CE1 .793
CE2 .655
CE3 .607
IVP3 .784
IVP2 .739
IVP1 .691
VAC3 .878
VAC2 .833
VAC1 .552
PCQ6 .449
Keys Grey Failed to load
Orange Removed due to low correlation coefficient score
References
Bachman, R., & Schutt, R. K. (2008). Fundamentals of research in criminology and
criminal justice. London: Sage.
Baker, L., & Wigfield, A. (1999). Dimensions of children's motivation for reading and
their relations to reading activity and reading achievement. Reading research
quarterly, 34(4), 452-477.
Baldino, D., & Goold, J. (2014). Iran and the emergence of information and
communications technology: the evolution of revolution? Australian Journal of
International Affairs, 68(1), 17-35.
Baldus, B. J., Voorhees, C., & Calantone, R. (2015). Online brand community
engagement: Scale development and validation. Journal of business research,
68(5), 978-985.
Barber, B. (1999). The discourse of civility. In S. L. Elkin & K. E. Sołtan (Eds.), Citizen
competence and democratic institutions (pp. 39-47). University Park:
Pennsylvania State University Press.
Bartholomew, D. J., Steele, F., Galbraith, J., & Moustaki, I. (2008). Analysis of
multivariate social science data: CRC press.
Bason, C. (2010). Leading public sector innovation: Co-creating for a better society:
Policy Press.
Baumgartner, J. C., & Morris, J. S. (2009). MyFaceTube politics: Social networking web
sites and political engagement of young adults. Social Science Computer
Review.
Beer, S. F., Marcella, R., & Baxter, G. (1998). Rural citizens’ information needs: a
survey undertaken on behalf of the Shetland Islands Citizens Advice Bureau.
Journal of Librarianship and Information Science, 30(4), 223-240.
Belanger, F., & Carter, L. (2006). The Effects of the Digital Divide on E-Government: An
Empirical Evaluation. Paper presented at the Proceedings of the 39th Annual
Hawaii International Conference on System Sciences (HICSS'06).
Bélanger, F., & Carter, L. (2008). Trust and risk in e-government adoption. The journal
of strategic Information Systems, 17(2), 165-176.
Bélanger, F., & Carter, L. (2009). The impact of the digital divide on e-government use.
Communications of the ACM, 52(4), 132-135.
Belanger, F., Hiller, J. S., & Smith, W. J. (2002). Trustworthiness in electronic
commerce: the role of privacy, security, and site attributes. The journal of
strategic Information Systems, 11(3), 245-270.
Belkin, N. J. (1980). Anomalous states of knowledge as a basis for information
retrieval. Canadian Journal of Information Science, 5(May), 133-143.
Belkin, N. J. (1993). Interaction with texts: Information retrieval as information
seeking behavior. Information retrieval, 93, 55-66.
Benbasat, I., Goldstein, D. K., & Mead, M. (1987). The case research strategy in
studies of information systems. MIS quarterly, 369-386.
Bennett, D. A. (2001). How can I deal with missing data in my study? Australian and
New Zealand Journal of Public Health, 25(5), 464-469.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the
analysis of covariance structures. Psychological bulletin, 88(3), 588.
Berthon, P. R., Pitt, L. F., Plangger, K., & Shapiro, D. (2012). Marketing meets Web 2.0,
social media, and creative consumers: Implications for international marketing
strategy. Business horizons, 55(3), 261-271.
Bertot, J. C., Jaeger, P. T., & Grimes, J. M. (2010). Using ICTs to create a culture of
transparency: E-government and social media as openness and anti-
corruption tools for societies. Government Information Quarterly, 27(3), 264-
271.
Bertot, J. C., Jaeger, P. T., & Hansen, D. (2012). The impact of polices on government
social media usage: Issues, challenges, and recommendations. Government
Information Quarterly, 29(1), 30-40.
Bertot, J. C., Jaeger, P. T., & McClure, C. R. (2008). Citizen-centered e-government
services: benefits, costs, and research needs. Paper presented at the
Proceedings of the 2008 international conference on Digital government
research.
Bian, J., Liu, Y., Agichtein, E., & Zha, H. (2008). Finding the right facts in the crowd:
factoid question answering over social media. Paper presented at the
Proceedings of the 17th international conference on World Wide Web.
Blaxter, L., Hughes, C., & Tight, M. (2001). How to research. (2nd ed.). Buckingham:
Open University Press.
Bonsón, E., & Ratkai, M. (2013). A set of metrics to assess stakeholder engagement
and social legitimacy on a corporate Facebook page. Online information
review, 37(5), 787-803.
Bonsón, E., Royo, S., & Ratkai, M. (2015). Citizens' engagement on local governments'
Facebook sites. An empirical analysis: The impact of different media and
content types in Western Europe. Government Information Quarterly, 32(1),
52-62. doi:10.1016/j.giq.2014.11.001
Bonsón, E., Royo, S., & Ratkai, M. (2014). Facebook Practices in Western European
Municipalities: An Empirical Analysis of Activity and Citizens’ Engagement.
Administration & Society, 0095399714544945.
Bonsón, E., Torres, L., Royo, S., & Flores, F. (2012). Local e-government 2.0: Social
media and corporate transparency in municipalities. Government Information
Quarterly, 29(2), 123-132.
Boukamcha, F. (2015). Double Estimation Methods to Assess Scales' Psychometric
Quality in Marketing Research: ML Versus PLS approaches. Paper presented at
the ECRM2015-Proceedings of the 14th European Conference on Research
Methods 2015: ECRM 2015.
Box, V., Hepworth, M., & Harrison, J. (2002). Identifying information needs of people
with multiple sclerosis. Nursing times, 99(49), 32-36.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative
research in psychology, 3(2), 77-101.
Brekhus, W. H., Galliher, J. F., & Gubrium, J. F. (2005). The need for thin description.
Qualitative Inquiry, 11(6), 861-879.
Brians, C. L., Willnat, L., Manheim, J. B., & Rich, R. C. (2011). Empirical political
analysis: Quantitative and qualitative research methods. New York: Longman.
Brikci, N., & Green, J. (2007). A guide to using qualitative research methodology.
Retrieved from
http://evaluation.msf.at/fileadmin/evaluation/files/documents/resources_MSF/
MSF_Qualitative_Methods.pdf (Accessed: 12 July 2014)
Broder, A. (2002). A taxonomy of web search. Paper presented at the ACM Sigir
forum.
Burby, J., & Brown, A. (2007). Web analytics definitions. Washington DC: Web
Analytics Association.
Burton-Jones, A., & Hubona, G. S. (2006). The mediation of external variables in the
technology acceptance model. Information & Management, 43(6), 706-717.
Bwalya, K. J., & Healy, M. (2010). Harnessing e-government adoption in the SADC
region: a conceptual underpinning. Electronic Journal of e-Government, 8(1),
23-32.
Calder, B. J., Malthouse, E. C., & Schaedel, U. (2009). An experimental study of the
relationship between online engagement and advertising effectiveness.
Journal of interactive marketing, 23(4), 321-331.
Capps, D. (2001). Giving counsel: A minister's guidebook: Chalice Press.
Carlsson, C., Nilbert, M., & Nilsson, K. (2006). Patients’ involvement in improving
cancer care: experiences in three years of collaboration between members of
patient associations and health care professionals. Patient education and
counseling, 61(1), 65-71.
Carpenter, J., & Kenward, M. (2012). Multiple imputation and its application: John
Wiley & Sons.
Carter, L., & Belanger, F. (2012). Internet Voting and Political Participation: An
Empirical Comparison of Technological and Political Factors. Data Base for
Advances in Information Systems, 43(3), 26-46.
Carter, L., & Bélanger, F. (2005). The utilization of e-government services: citizen
trust, innovation and acceptance factors. Information systems journal, 15(1),
5-25.
Carter, L., & Weerakkody, V. (2008). E-government adoption: A cultural comparison.
Information Systems Frontiers, 10(4), 473-482.
Castañeda, J. A., Muñoz-Leiva, F., & Luque, T. (2007). Web Acceptance Model (WAM):
Moderating effects of user experience. Information & Management, 44(4),
384-396.
Cha, M., Haddadi, H., Benevenuto, F., & Gummadi, P. K. (2010). Measuring User
Influence in Twitter: The Million Follower Fallacy. ICWSM, 10, 10-17.
Chadwick, A. (2008). Web 2.0: New challenges for the study of e-democracy in an era
of informational exuberance. ISJLP, 5, 9.
Chan, G., Cheung, C., Kwong, T., Limayem, M., & Zhu, L. (2003). Online consumer
behavior: a review and agenda for future research. BLED 2003 Proceedings,
43.
Charalabidis, Y., & Loukis, E. (2012). Participative public policy making through
multiple social media platforms utilization. International Journal of Electronic
Government Research (IJEGR), 8(3), 78-97.
Charmaz, K., & Belgrave, L. (2002). Qualitative interviewing and grounded theory
analysis. The SAGE handbook of interview research: The complexity of the
craft, 2, 2002.
Chen, J. (2007). Flow in games (and everything else). Communications of the ACM,
50(4), 31-34.
Chen, Q., Clifford, S. J., & Wells, W. D. (2002). Attitude toward the site II: new
information. Journal of Advertising Research, 42(2), 33-46.
Chen, Q., & Wells, W. D. (1999). Attitude toward the site. Journal of Advertising
Research, 39(5), 27-37.
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing
measurement invariance. Structural equation modeling, 9(2), 233-255.
Cini, L. (2011). Between participation and deliberation: Toward a new standard for
assessing democracy. Paper presented at the 9th Pavia Graduate Conference
in Political Philosophy.
Codagnone, C., & Undheim, T. A. (2008). Benchmarking eGovernment: tools, theory,
and practice. European Journal of ePractice, 4, 4-18.
Colesca, S. E. (2015). Understanding trust in e-government. Engineering Economics,
63(4).
Connelly, L. M. (2008). Pilot studies. Medsurg Nursing, 17(6), 411-413.
Corey, E. C., & Garand, J. C. (2002). Are government employees more likely to vote?:
An analysis of turnout in the 1996 US national election. Public Choice, 111(3-
4), 259-283.
Cormode, G., & Krishnamurthy, B. (2008). Key differences between Web 1.0 and Web
2.0. First Monday, 13(6).
Coursey, D., & Norris, D. F. (2008). Models of e-government: Are they correct? An
empirical assessment. Public administration review, 68(3), 523-536.
Craswell, N., & Hawking, D. (2009). Web information retrieval. Information Retrieval:
Searching in the 21st Century, 85-101.
Crawford, K. (2009). Following you: Disciplines of listening in social media. Continuum:
Journal of Media & Cultural Studies, 23(4), 525-535.
Crenson, M. A., & Ginsberg, B. (2003). From popular to personal democracy. National
Civic Review, 92(2), 173-189.
Creswell, J. W. (1994). Research design: Qualitative & quantitative approaches.
Thousand Oaks, CA: Sage.
Creswell, J. W., & Clark, V. L. P. (2011). Designing and conducting mixed methods
research (2nd ed.). Thousand Oaks: Sage.
Crouch, M., & McKenzie, H. (2006). The logic of small samples in interview-based
qualitative research. Social science information, 45(4), 483-499.
Csikszentmihalyi, M. (1991). Flow: The psychology of optimal experience (Vol. 41):
HarperPerennial New York.
Dada, D. (2006). The failure of e-government in developing countries: A literature
review. The electronic journal of information systems in developing countries,
26.
Dahl, R. (1998). On Democracy. London: Yale University Press.
Dahlberg, L. (2001a). Computer-mediated communication and the public sphere: A
critical analysis. Journal of Computer-Mediated Communication, 7(1), 0-0.
Dahlberg, L. (2001b). The Internet and democratic discourse: Exploring the prospects
of online deliberative forums extending the public sphere. Information,
Communication & Society, 4(4), 615-633.
Davies, T. (2010). Open data, democracy and public sector reform. A look at open
government data use from data.gov.uk.
Davies, T. (2012). Supporting open data use through active engagement. Using Open
Data: policy modeling, citizen empowerment, data journalism (PMOD 2012), 1-
5.
Davis, L. L. (1992). Instrument review: Getting the most from a panel of experts.
Applied nursing research, 5(4), 194-197.
Davis, R. (1999). Web of Politics: The Internet's Impact on the American Political
System. Oxford: Oxford University Press.
Dawson, C. (2002). Practical research methods: a user-friendly guide to mastering
research techniques and projects. Oxford: How to books.
De Cindio, F., De Marco, A., & Grew, P. (2007). Deliberative community networks for
local governance. International Journal of Technology, Policy and
Management, 7(2), 108-121.
Dey, I. (1999). Grounding grounded theory: Guidelines for qualitative inquiry:
Academic Press.
Dimmick, J. W., McCain, T. A., & Bolton, W. T. (1979). Media Use and the Life Span
Notes on Theory and Method. American Behavioral Scientist, 23(1), 7-31.
Doll, W. J., Raghunathan, T., Lim, J.-S., & Gupta, Y. P. (1995). Research report-A
confirmatory factor analysis of the user information satisfaction instrument.
Information systems research, 6(2), 177-188.
Dong, Y., & Peng, C.-Y. J. (2013). Principled missing data methods for researchers.
SpringerPlus, 2(1), 1-17.
Douglas, Y., & Hargadon, A. (2000). The pleasure principle: immersion, engagement,
flow. Paper presented at the Proceedings of the eleventh ACM on Hypertext
and hypermedia.
Doya, D. M., Wallace, P., & Ibukun, Y. (2016). Nigeria Faces Recession Risk as GDP
Shrinks in First Quarter. Retrieved from
http://www.bloomberg.com/news/articles/2016-05-20/nigeria-s-economy-
contracts-for-the-first-time-since-2004 (Accessed: 16 July 2016)
Drusch, G., Bastien, J. C., & Paris, S. (2014). Analysing Eye-Tracking Data: From
Scanpaths and Heatmaps to the Dynamic Visualisation of Areas of Interest.
Advances in Science, Technology, Higher Education and Society in the
Conceptual Age: STHESCA, 20, 205.
Duggin, A. (2016). Doing the hard work to make accessibility simple. Retrieved from
https://gds.blog.gov.uk/2016/05/19/doing-the-hard-work-to-make-accessibility-
simple/. (Accessed: 20 July 2016)
Durbin, J., & Watson, G. S. (1950). Testing for serial correlation in least squares
regression: I. Biometrika, 37(3/4), 409-428.
Eggers, W. D. (2005). Government 2.0: Using technology to improve education, cut
red tape, reduce gridlock, and enhance democracy: Rowman & Littlefield.
Enu, D. B., & Effiom, V. N. (2012). Producing responsible citizenship in Nigeria for
national development through social studies education. American Journal of
Social Issues and Humanities, 2(5).
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating
the use of exploratory factor analysis in psychological research. Psychological
methods, 4(3), 272.
Faibisoff, S. G., & Ely, D. P. (1974). Information and Information Needs.
Falade, D. (2008). Civic education as a tool for nation building in Nigeria. Nigerian
Journal of Social Studies, 11(1), 15-27.
Fan, J., & Zhang, P. (2007). What factors influence the information sharing
across government agencies?
Fehring, R. J. (1987). Methods to validate nursing diagnoses. Nursing Faculty Research
and Publications, 27.
Ferree, M. M., Gamson, W. A., Gerhards, J., & Rucht, D. (2002). Four models of the
public sphere in modern democracies. Theory and society, 31(3), 289-324.
Ferrini, A., & Mohr, J. J. (2009). Uses, limitations, and trends in web analytics.
Handbook of research on Web log analysis, 122-140.
Field, A. P. (2005). Discovering statistics using SPSS (2nd ed.). London: Sage.
Finn, S. (1997). Origins of media exposure linking personality traits to TV, radio, print,
and film use. Communication Research, 24(5), 507-529.
Fischer, K. E. (2012). Decision-making in healthcare: a practical application of partial
least square path modelling to coverage of newborn screening programmes.
Bmc Medical Informatics and Decision Making, 12(1), 1.
Flew, T. (2005). From e-government to online deliberative democracy.
Fogg, B., & Iizawa, D. (2008). Online persuasion in Facebook and Mixi: a cross-cultural
comparison. In Persuasive technology (pp. 35-46). Springer.
Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable
variables and measurement error: Algebra and statistics. Journal of marketing
research, 382-388.
Frakes, W. (1992). Introduction to information storage and retrieval systems. Space,
14, 10.
Francis, J. J., Johnston, M., Robertson, C., Glidewell, L., Entwistle, V., Eccles, M. P., &
Grimshaw, J. M. (2010). What is an adequate sample size? Operationalising
data saturation for theory-based interview studies. Psychology and Health,
25(10), 1229-1245.
Fraser, N. (1992). Rethinking the public sphere: a contribution to the critique of
actually existing democracy. In C. Calhoun (Ed.), Habermas and the public
sphere (pp. 109-142). Cambridge: MIT Press.
Freire, M., Fortes, N., & Barbosa, J. (2014). Decisive Factors for the Adoption of
Technology in E-Government Platforms. In A. Rocha, D. Fonseca, E. Redondo,
L. P. Reis, & M. P. Cota (Eds.), Proceedings of the 2014 9th Iberian Conference
on Information Systems and Technologies.
Fuchs, C., & Horak, E. (2008). Africa and the digital divide. Telematics and informatics,
25(2), 99-116.
Fung, A. (2003). Survey article: recipes for public spheres: eight institutional design
choices and their consequences. Journal of Political Philosophy, 11(3), 338-
367.
Fung, A., & Wright, E. O. (2001). Deepening democracy: innovations in empowered
participatory governance. Politics and society, 29(1), 5-42.
Gable, R. K., & Wolf, M. B. (2012). Instrument development in the affective domain:
Measuring attitudes and values in corporate and school settings (Vol. 36):
Springer Science & Business Media.
Gardner, C., & Amoroso, D. L. (2004). Development of an instrument to measure the
acceptance of internet technology by consumers. Paper presented at the
System Sciences, 2004. Proceedings of the 37th Annual Hawaii International
Conference on.
Gaskin, J. (2012). Plugins and estimands. Stats Tools Package. Retrieved from
http://statwiki.kolobkreations.com/ (Accessed: 10 July 2016)
Gaver, B., & Martin, H. (2000). Alternatives: exploring information appliances through
conceptual design proposals. Paper presented at the Proceedings of the
SIGCHI conference on Human factors in computing systems.
Gerlitz, C., & Helmond, A. (2011). Hit, link, like and share. Organising the social and
the fabric of the web. Paper presented at the Digital Methods Winter
Conference Proceedings.
Germain, M.-L. (2006). Stages of Psychometric Measure Development: The Example
of the Generalized Expertise Measure (GEM). Online Submission.
Ghani, J. A., & Deshpande, S. P. (1994). Task characteristics and the experience of
optimal flow in human—computer interaction. The Journal of psychology,
128(4), 381-391.
Ghani, J. A., Supnick, R., & Rooney, P. (1991). The Experience Of Flow In Computer-
Mediated And In Face-To-Face Groups. Paper presented at the Proceedings of
the International Conference on Information Systems, ICIS 1991, December
16-18, 1991, New York, NY, USA.
Gibbs, I. (2012). Online Engagement Research. The Guardian. Retrieved from
http://www.theguardian.com/advertising/online-engagement (Accessed: 5
September 2014)
Given, L. M. (2008). The Sage encyclopedia of qualitative research methods: Sage
Publications.
Glencross, A. (2009). E-participation in the legislative process: procedural and
technological lessons from Estonia. Paper published on the web site of the
International Regulatory Reform Network. Retrieved on, 29.
Goggins, S., & Petakovic, E. (2014). Connecting Theory to Social Technology Platforms:
A Framework for Measuring Influence in Context. American Behavioral
Scientist. doi:10.1177/0002764214527093
Graham, G. (2012). Public opinion and the public sphere. Beyond Habermas:
Democracy, Knowledge, and the Public Sphere, 29.
Graham, M., & Avery, E. J. (2013). Government Public Relations and Social Media: An
Analysis of the Perceptions and Trends of Social Media Use at the Local
Government Level. Public Relations Journal, 7(4).
Granka, L. A., Joachims, T., & Gay, G. (2004). Eye-tracking analysis of user behavior in
WWW search. Paper presented at the Proceedings of the 27th annual
international ACM SIGIR conference on Research and development in
information retrieval.
Grbesa, M. (2003). Why If at All Is the Public Sphere a Useful Concept? Politička
misao, 40(5), 110-121.
Green, J., & Thorogood, N. (2013). Qualitative methods for health research. Thousand
Oaks: Sage.
Grow, J., & Altstiel, T. (2005). Advertising strategy: Creative tactics from the
outside/in. Thousand Oaks, California: Sage Publications, Inc.
Gueorguieva, V. (2008). Voters, Myspace, and Youtube the impact of alternative
communication channels on the 2006 election cycle and beyond. Social
Science Computer Review, 26(3), 288-300.
Guest, G. (2012). Applied thematic analysis. Thousand Oaks, California: Sage.
Guest, G., MacQueen, K. M., & Namey, E. E. (2011). Applied thematic analysis: Sage.
Gullikson, S., Blades, R., Bragdon, M., McKibbon, S., Sparling, M., & Toms, E. G.
(1999). The impact of information architecture on academic web site usability.
Electronic Library, The, 17(5), 293-304.
Gummerus, J., Liljander, V., Weman, E., & Pihlström, M. (2012). Customer
engagement in a Facebook brand community. Management Research Review,
35(9), 857-877.
Guthrie, J. T. (2004). Teaching for literacy engagement. Journal of Literacy Research,
36(1), 1-30.
Guthrie, J. T., & Wigfield, A. (2000). Engagement and motivation in reading. In M. L.
Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading
research (3rd ed., pp. 403-422). New York: Longman.
Guthrie, J. T., Wigfield, A., Barbosa, P., Perencevich, K. C., Taboada, A., Davis, M. H., . .
. Tonks, S. (2004). Increasing Reading Comprehension and Engagement
Through Concept-Oriented Reading Instruction. Journal of Educational
Psychology, 96(3), 403.
Gutmann, A., & Thompson, D. (2003). Deliberative democracy beyond process.
Debating deliberative democracy, 31-52.
Habermas, J. (1964). The Public Sphere: An Encyclopedia Article. Retrieved from
http://www.jstor.org/stable/pdfplus/487737.pdf?acceptTC=true&acceptTC=tru
e&jpdConfirm=true (Accessed: 18 Feb 2014)
Habermas, J. (1989). The structural transformation of the public sphere. Boston: MIT
Press.
Habermas, J. (1997). The public sphere. In R. E. Goodin & P. Pettit (Eds.),
Contemporary political philosophy (2nd ed., pp. 103-106).
Haerpfer, C. W., Wallace, C., & Spannring, R. (2002). Young people and politics in
eastern and western Europe: Inst. für Höhere Studien (IHS).
Haile, T. (2014, March 9). What you think you know about the web is wrong. Time. Retrieved from
http://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/
(Accessed: 3 July 2014)
Hair, J., Black, W., Babin, B., Anderson, R., & Tatham, R. (2006a). Multivariate data
analysis (6th ed.). Upper Saddle River, NJ: Pearson-Prentice Hall.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006b).
Multivariate data analysis (Vol. 6): Pearson Prentice Hall Upper Saddle River,
NJ.
Hands, J. (2005). E–deliberation and local governance: The role of computer mediated
communication in local democratic participation in the United Kingdom. First
Monday, 10(7).
Harrison, T. M., & Sayogo, D. S. (2014). Transparency, participation, and
accountability practices in open government: A comparative study.
Government Information Quarterly, 31(4), 513-525.
doi:10.1016/j.giq.2014.08.002
Hartmann, S., Mainka, A., & Peters, I. (2013). Government activities in social media.
Paper presented at the Conference for E-Democracy and Open Government.
Hassenzahl, M., & Tractinsky, N. (2006). User experience-a research agenda.
Behaviour & Information Technology, 25(2), 91-97.
Hauser, G. A. (1998). Vernacular dialogue and the rhetoricality of public opinion.
Communication Monographs, 65(2), 83-107.
doi:10.1080/03637759809376439
Hauser, G. A. (1999). Vernacular voices: The rhetoric of publics and public spheres.
Columbia: University of South Carolina Press.
Heald, D. (2012). Why is transparency about public expenditure so elusive?
International Review of Administrative Sciences, 78(1), 30-49.
Heath, R. (2007). How do we predict advertising attention and engagement?
Hehman, E., Stolier, R. M., & Freeman, J. B. (2014). Advanced mouse-tracking analytic
techniques for enhancing psychological science. Group Processes & Intergroup
Relations, 1368430214538325.
Henry, C. (2009). What makes a link-worthy post - Part 1. Retrieved from
https://moz.com/blog/what-makes-a-link-worthy-post-part-1. (Accessed: 16
January 2016)
Herbst, S. (1993). Numbered voices. Chicago: University of Chicago Press.
Herring, S. C., Kouper, I., Paolillo, J. C., Scheidt, L. A., Tyworth, M., Welsch, P., . . . Yu,
N. (2005). Conversations in the blogosphere: an analysis "from the bottom up".
Paper presented at the System Sciences, 2005. HICSS'05. Proceedings of the
38th Annual Hawaii International Conference on.
Herrington, J., Oliver, R., & Reeves, T. C. (2003). Patterns of engagement in authentic
online learning environments. Australian journal of educational technology,
19(1), 59-71.
Hill, R. (1998). What sample size is “enough” in internet survey research.
Interpersonal Computing and Technology: An electronic journal for the 21st
century, 6(3-4), 1-12.
Hinkin, T. R., & Tracey, J. B. (1999). An analysis of variance approach to content
validation. Organizational Research Methods, 2(2), 175-186.
Hinkin, T. R., Tracey, J. B., & Enz, C. A. (1997). Scale construction: Developing reliable
and valid measurement instruments. Journal of Hospitality & Tourism
Research, 21(1), 100-120.
Hodges, B. D., Kuper, A., & Reeves, S. (2008). Discourse analysis. BMJ, 337, a879.
Hofmann, S., Beverungen, D., Räckers, M., & Becker, J. (2013). What makes local
governments' online communications successful? Insights from a multi-
method analysis of Facebook. Government Information Quarterly, 30(4), 387-
396.
Holzner, C. A. (2010). Poverty of Democracy: The institutional roots of political
participation in Mexico: University of Pittsburgh Pre.
Hooper, D., Coughlan, J., & Mullen, M. (2008). Structural equation modelling:
Guidelines for determining model fit. Articles, 2.
Horn, R. E. (2000). Information design: Emergence of a new profession. In R. Jacobson
(Ed.), Information design (pp. 15-33). Massachusetts: MIT Press.
Hsu, C.-L., & Lu, H.-P. (2004). Why do people play on-line games? An extended TAM
with social influences and flow experience. Information & Management, 41(7),
853-868.
Hutcheson, G. D., & Sofroniou, N. (1999). The multivariate social scientist:
Introductory statistics using generalized linear models. London: Sage.
IAP2. (2007). IAP2 spectrum of public participation. Retrieved from
http://www.iap2.org.au/documents/item/84 (Accessed: 3 September 2014)
IBM. (2014). Exploratory Factor Analysis with categorical variables. Retrieved from
http://www-01.ibm.com/support/docview.wss?uid=swg21477550 (Accessed:
01 July 2016)
Iivari, J., & Koskela, E. (1987). The PIOCO model for information systems design. MIS
quarterly, 401-419.
IJsselsteijn, W., De Kort, Y., Midden, C., Eggen, B., & van Den Hoven, E. (2006).
Persuasive technology for human well-being: setting the scene. In Persuasive
technology (pp. 1-5). Springer.
Inman, K., & Andrews, J. T. (2009). Corruption and political participation in Africa:
Evidence from survey and experimental research. Midwest Political Science
Association, April, 3-6.
Internet Live Stats. (2015). Nigeria Internet Users. Retrieved from
http://www.internetlivestats.com/internet-users/nigeria/ (Accessed: 20 May
2015)
Iredia, T. (2012). Revamp the National Orientation Agency now. Vanguard. Retrieved
from http://www.vanguardngr.com/2012/01/revamp-the-national-orientation-
agency-now/ (Accessed: 21 May 2015)
Isaac, S., & Michael, W. B. (1995). Handbook in research and evaluation (3rd ed.). San
Diego: Educational and Industrial Testing Services.
Isaksson, A.-S., Kotsadam, A., & Nerman, M. (2014). The gender gap in African
political participation: testing theories of individual and contextual
determinants. Journal of Development Studies, 50(2), 302-318.
Jackson, D. L., Gillaspy Jr, J. A., & Purc-Stephenson, R. (2009). Reporting practices in
confirmatory factor analysis: an overview and some recommendations.
Psychological methods, 14(1), 6.
Jackson, M. M., Gergel, S. E., & Martin, K. (2015). Citizen science and field survey
observations provide comparable results for mapping Vancouver Island
White-tailed Ptarmigan (Lagopus leucura saxatilis) distributions. Biological
Conservation, 181, 162-172.
Jacob, R. J., & Karn, K. S. (2003). Eye tracking in human-computer interaction and
usability research: Ready to deliver the promises. Mind, 2(3), 4.
Jankowska, M. A. (2004). Identifying university professors' information needs in the
challenging environment of information and communication technologies. The
Journal of Academic Librarianship, 30(1), 51-66.
Janssen, M., Charalabidis, Y., & Zuiderwijk, A. (2012). Benefits, adoption barriers and
myths of open data and open government. Information Systems
Management, 29(4), 258-268.
Jennings, M. (2000). Theory and models for creating engaging and immersive
ecommerce websites. Paper presented at the Proceedings of the 2000 ACM
SIGCPR conference on Computer personnel research.
Jensen, J. L. (2003). Public Spheres on the Internet: Anarchic or Government-
Sponsored–A Comparison. Scandinavian Political Studies, 26(4), 349-374.
Johannessen, M. R., Flak, L. S., & Sæbø, Ø. (2012). Choosing the right medium for
municipal eParticipation based on stakeholder expectations. In Electronic
participation (pp. 25-36). Springer.
Johari, J., Yahya, K. K., & Omar, A. (2011). The Construct Validity of Organizational
Structure Scale: Evidence from Malaysia. World, 3(2), 131-152.
Jones, M. G. (1998). Creating Electronic Learning Environments: Games, Flow, and the
User Interface.
Jones, Q., & Rafaeli, S. (2000). Time to split, virtually: 'Discourse architecture' and
'community building' create vibrant virtual publics. Electronic
markets, 10(4), 214-223.
Jones, T., & Brown, C. (2011). Reading Engagement: A Comparison between E-Books
and Traditional Print Books in an Elementary Classroom. Online Submission,
4(2), 5-22.
Jørgensen, M., & Phillips, L. (2002). Discourse analysis as theory and method. London:
Sage Publications.
Julious, S. A. (2005). Sample size of 12 per group rule of thumb for a pilot study.
Pharmaceutical Statistics, 4(4), 287-291.
Kang, M. (2010). Measuring social media credibility: A study on a Measure of Blog
Credibility. Institute for Public Relations, 59-68.
Kaplan, D. (2009). Structural equation modelling: Foundations and extensions (2nd
ed.). Thousand Oaks: Sage Publications.
Kardan, A. A., & Sadeghiani, A. (2011). Is e-government a way to e-democracy?: A
longitudinal study of the Iranian situation. Government Information Quarterly,
28(4), 466-473.
Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. Public
opinion quarterly, 509-523.
Kavanaugh, A. L., Fox, E. A., Sheetz, S. D., Yang, S., Li, L. T., Shoemaker, D. J., . . . Xie, L.
(2012). Social media use by government: From the routine to the critical.
Government Information Quarterly, 29(4), 480-491.
Kayahara, J., & Wellman, B. (2007). Searching for culture—high and low. Journal of
Computer-Mediated Communication, 12(3), 824-845.
Kearsley, G., & Shneiderman, B. (1998). Engagement theory: A framework for
technology-based teaching and learning. Educational technology, 38(5), 20-23.
Kellner, D. (2000). Habermas, the public sphere, and democracy: A critical
intervention. Perspectives on Habermas, 259-288.
Kenny, D. A. (2015). Measuring model fit. Retrieved from
http://davidakenny.net/cm/fit.htm (Accessed: 12 July 2016)
Kieffer, K. M. (1999). An Introductory Primer on the Appropriate Use of Exploratory
and Confirmatory Factor Analysis. Research in the Schools, 6(2), 75-92.
Kim, S. K., Park, M. J., & Rho, J. J. (2015). Effect of the Government’s Use of Social
Media on the Reliability of the Government: Focus on Twitter. Public
Management Review, 17(3), 328-355.
Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.).
New York: The Guilford Press.
Kline, R. B. (2010). Principles and practice of structural equation modeling (3rd ed.).
New York: Guilford Press.
Knol, D. L., & Berger, M. P. (1991). Empirical comparison between factor analysis and
multidimensional item response models. Multivariate Behavioral Research,
26(3), 457-477.
Ko, H., Cho, C.-H., & Roberts, M. S. (2005). Internet uses and gratifications: a
structural equation model of interactive advertising. Journal of advertising,
34(2), 57-70.
Koçan, G. (2008). Models of public sphere in political philosophy. EUROSPHERE
Online Working Papers, (02).
Kolsaker, A., & Lee-Kelley, L. (2008). Citizens' attitudes towards e-government and e-
governance: a UK study. International Journal of Public Sector Management,
21(7), 723-738.
Koufaris, M. (2002). Applying the technology acceptance model and flow theory to
online consumer behavior. Information systems research, 13(2), 205-223.
Krawczyk, K. A., & Sweet-Cushman, J. (2016). Understanding political participation in
West Africa: the relationship between good governance and local citizen
engagement. International Review of Administrative Sciences,
0020852315619024.
Krueger, R. A., & Casey, M. A. (2000). Focus groups. A practical guide for applied
research, 3.
Kruse, J. A., Williams, R. A., & Seng, J. S. (2014). Considering a relational model for
depression in women with postpartum depression. International journal of
childbirth, 4(3), 151-168.
Kuhlthau, C. C. (1991). Inside the search process: Information seeking from the user's
perspective. Journal of the American society for information science, 42(5),
361.
Kumar, R. (2005). Research methodology: A step-by-step guide for beginners (2nd
ed.). London: Sage Publishers.
Kuruppu, P. U., & Gruber, A. M. (2006). Understanding the information needs of
academic scholars in agricultural and biological sciences. The Journal of
Academic Librarianship, 32(6), 609-623.
Lauteren, G. (2002). The Pleasure of the Playable Text: Towards an Aesthetic Theory
of Computer Games. Paper presented at the CGDC Conf.
Lee, G. (2005). Persuasion, Transparency and Government Speech. Hastings Law
Journal, 56(5), 983.
Lee, G., & Kwak, Y. H. (2012). An open government maturity model for social media-
based public engagement. Government Information Quarterly, 29(4), 492-503.
LeGates, R. T., & Stout, F. (2011). The city reader: Routledge.
Lerman, K., & Hogg, T. (2010). Using a model of social dynamics to predict popularity
of news. Paper presented at the Proceedings of the 19th international
conference on World wide web.
Leston-Bandeira, C., & Bender, D. (2013). How deeply are parliaments engaging on
social media? Information Polity, 18(4), 281-297.
Leung, L. (2009). User-generated content on the internet: an examination of
gratifications, civic engagement and psychological empowerment. New Media
& Society, 11(8), 1327-1347.
Leung, L., & Wei, R. (2000). More than just talk on the move: Uses and gratifications
of the cellular phone. Journalism & Mass Communication Quarterly, 77(2),
308-320.
Li, H. (2006). An empirical exploration of virtual community participation: the
interpersonal relationship perspective. (Doctor of Philosophy), The Chinese
University of Hong Kong.
Li, H., Aham-Anyanwu, N., Tevrizci, C., & Luo, X. (2015). The interplay between value
and service quality experience: e-loyalty development process through the
eTailQ scale and value perception. Electronic Commerce Research, 15(4), 585-
615.
Lilleker, D. G., Koc-Michalska, K., Schweitzer, E. J., Jacunski, M., Jackson, N., & Vedel,
T. (2011). Informing, engaging, mobilizing or interacting: Searching for a
European model of web campaigning. European Journal of Communication,
26(3), 195-213.
Lin, F., Fofanah, S. S., & Liang, D. (2011). Assessing citizen adoption of e-Government
initiatives in Gambia: A validation of the technology acceptance model in
information systems success. Government Information Quarterly, 28(2), 271-
279.
Lin, J. C.-C., & Lu, H. (2000). Towards an understanding of the behavioural intention to
use a web site. International Journal of Information Management, 20(3), 197-
208.
Linders, D. (2012). From e-government to we-government: Defining a typology for
citizen coproduction in the age of social media. Government Information
Quarterly, 29(4), 446-454.
Loehlin, J. C. (1998). Latent variable models: An introduction to factor, path, and
structural analysis: Lawrence Erlbaum Associates Publishers.
Loewenthal, K. M. (2001). An introduction to psychological tests and scales:
Psychology Press.
Lorenc, A., & Robinson, N. (2015). A tool to improve patient and public engagement in
commissioning sexual and reproductive health and HIV services. Journal of
Family Planning and Reproductive Health Care, 41(1), 8-12.
Lu, Y., Zhou, T., & Wang, B. (2009). Exploring Chinese users’ acceptance of instant
messaging using the theory of planned behavior, the technology acceptance
model, and the flow theory. Computers in human behavior, 25(1), 29-39.
Luarn, P., & Lin, H.-H. (2003). A Customer Loyalty Model for E-Service Context. J.
Electron. Commerce Res., 4(4), 156-167.
Lusoli, W., & Ward, J. (2005). “Politics Makes Strange Bedfellows” The Internet and
the 2004 European Parliament Election in Britain. The Harvard International
Journal of Press/Politics, 10(4), 71-97.
Lynn, M. R. (1986). Determination and quantification of content validity. Nursing
research, 35(6), 382-386.
Madukoma, E., & Opemipo, M. F. (2016). Information Behaviour of Revenue
Collectors in Lagos State, Nigeria. Library Philosophy and Practice, 0_1.
Mahrer, H., & Krimmer, R. (2005). Towards the enhancement of e-democracy:
identifying the notion of the 'middleman paradox'. Information systems
journal, 15(1), 27-42. doi:10.1111/j.1365-2575.2005.00184.x
Maile, S., & Griffiths, D. (2014). Cafe scientifique and the art of engaging publics. In S.
Maile & D. Griffiths (Eds.), Public Engagement and Social Science (pp. 7-28).
Bristol: Policy Press.
Mainka, A., Hartmann, S., Stock, W. G., & Peters, I. (2015). Looking for friends and
followers: a global investigation of governmental social media use.
Transforming Government: People, Process and Policy, 9(2), 237-254.
Mangold, W. G., & Faulds, D. J. (2009). Social media: The new hybrid element of the
promotion mix. Business horizons, 52(4), 357-365.
Manjoo, F. (2013). You won't finish this article: Why people online don't read to the
end. Slate.
Marchionini, G. (2008). Human–information interaction research and development.
Library & Information Science Research, 30(3), 165-174.
Marshall, S. (2007). Engagement Theory, WebCT, and academic writing in Australia.
International Journal of Education and Development using ICT, 3(2).
Maruyama, M., Douglas, S., & Robertson, S. (2013). Design teams as change agents:
Diplomatic design in the open data movement. Paper presented at the System
Sciences (HICSS), 2013 46th Hawaii International Conference on.
Mason, M. (2010). Sample size and saturation in PhD studies using qualitative
interviews. Paper presented at the Forum Qualitative Sozialforschung/Forum:
Qualitative Social Research.
Mathwick, C., & Rigdon, E. (2004). Play, flow, and the online search experience.
Journal of Consumer Research, 31(2), 324-332.
Matuszak, G. (2007). Enterprise 2.0: Fad or Future? The Business Role for Social
Software Platforms.
Maxwell, J. A. (2013). Qualitative research design: An Interactive Approach (3rd ed.).
California: SAGE Publication.
McGarrigle, A., & Sanderson, S. (2010). Engagement metrics in the media planning
process. Paper presented at the Audience Measurement 5.0, Millennium
Broadway Hotel, New York City.
McLean, S. (2014). Business communication for success. Retrieved from
http://catalog.flatworldknowledge.com/bookhub/15?e=mclean-ch08_s03
(Accessed: 7 October 2014)
Medaglia, R. (2012). eParticipation research: Moving characterization forward (2006-
2011). Government Information Quarterly, 29(3), 346-360.
doi:10.1016/j.giq.2012.02.010
Melo, D. F., & Stockemer, D. (2014). Age and political participation in Germany,
France and the UK: A comparative analysis. Comparative European Politics,
12(1), 33-53.
Mergel, I. (2013). A framework for interpreting social media interactions in the public
sector. Government Information Quarterly, 30(4), 327-334.
Michailidou, E., Christoforou, C., & Zaphiris, P. (2014). Towards predicting ad
effectiveness via an eye tracking study. In HCI in Business (pp. 670-680). Springer.
Michener, G., & Bersch, K. (2013). Identifying transparency. Information Polity, 18(3),
233-242.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded
sourcebook (2nd ed.). Beverly Hills: Sage.
Milne, C. (2012). Deconstructing games as play: progress, power, fantasy, and self.
Cultural Studies of Science Education, 7(4), 761-765.
Mintz, D. (2014). 3 interesting things attention minutes have already taught us.
Retrieved from http://blog.upworthy.com/post/76538569963/3-interesting-
things-attention-minutes-have-already. (Accessed: 3 July 2014)
Moatshe, R. M., & Mahmood, Z. (2012). Implementing eGovernment projects:
Challenges facing developing countries. Paper presented at the Proceedings of
the European Conference on e-Government, Institute of Public Governance
and Management, Barcelona.
Mohammed, S., Abubakar, M. K., & Bashir, A. (2010). eGovernment in Nigeria: A
Catalyst for National Development. Paper presented at the 4th International
Conference on Development Studies, Nigeria.
Mollen, A., & Wilson, H. (2010). Engagement, telepresence and interactivity in online
consumer experience: Reconciling scholastic and managerial perspectives.
Journal of business research, 63(9), 919-925.
Montola, M., Stenros, J., & Waern, A. (2009). Pervasive games: theory and design:
Morgan Kaufmann Publishers Inc.
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the
perceptions of adopting an information technology innovation. Information
systems research, 2(3), 192-222.
Morkes, J., & Nielsen, J. (1997). Concise, scannable, and objective: How to write for
the Web. Useit.com.
Mosquera, A., & Moreda, P. (2012). Smile: An informality classification tool for
helping to assess quality and credibility in web 2.0 texts. Paper presented at
the Proceedings of the ICWSM workshop: Real-Time Analysis and Mining of
Social Streams (RAMSS).
Mossberger, K., Wu, Y., & Crawford, J. (2013). Connecting citizens and local
governments? Social media and interactivity in major US cities. Government
Information Quarterly, 30(4), 351-358.
Moy, P., & Scheufele, D. A. (2000). Media effects on political and social trust.
Journalism & Mass Communication Quarterly, 77(4), 744-759.
Mudhai, O. F. (2009). Implications for Africa of E-Gov Challenges for Giants South
Africa and Nigeria. African Media and the Digital Public Sphere, 21-40.
Mueller, F., & Lockerd, A. (2001). Cheese: tracking mouse movement activity on
websites, a tool for user modeling. Paper presented at the CHI'01 extended
abstracts on Human factors in computing systems.
Nakamura, J., & Csikszentmihalyi, M. (2009). Flow theory and research. In C. R.
Snyder & S. J. Lopez (Eds.), Oxford handbook of positive psychology (pp. 195-
206). New York: Oxford University Press.
Näkki, P., Bäck, A., Ropponen, T., Kronqvist, J., Hintikka, K. A., Harju, A., . . . Kola, P.
(2011). Social media for citizen participation. Report on the Somus Project, VTT
Publications, 755.
Nardi, B. A., & O'Day, V. (1999). Information ecologies: Using technology with heart.
Cambridge: MIT Press.
National Audit Office. (2012). Cross-government review: Implementing transparency.
Retrieved from https://www.scribd.com/document/89884318/Implementing-
Transparency-Report-from-the-NAO (Accessed: 13 October 2014)
National Bureau of Statistics. (2011). 2011 annual socio-economic report: Access to ICT.
Retrieved from http://www.nigerianstat.gov.ng/pdfuploads/Social Economic
survey - ICT.pdf (Accessed: 20 July 2016)
National Orientation Agency. (2011). The role of National Orientation Agency in
managing Nigeria's pluralism in ethno-religious crises. Retrieved from
http://www.noa.gov.ng/attachments/article/63/THE ROLE OF NATIONAL
ORIENTATION AGENCY IN MANAGING NIGERIA.pdf. (Accessed: 16
June 2015)
National Orientation Agency. (2014). About us. Retrieved from
http://www.noa.gov.ng/index.php/about-us (Accessed: 20 May 2015)
Ndou, V. (2004). E-government for developing countries: opportunities and
challenges. The electronic journal of information systems in developing
countries, 18.
Nevo, B. (1985). Face validity revisited. Journal of Educational Measurement, 22(4),
287-293.
Nguyen, A. M., van Landingham, S. W., Massof, R. W., Rubin, G. S., & Ramulu, P. Y.
(2014). Reading ability and reading engagement in older adults with
glaucoma. Investigative ophthalmology & visual science, 55(8), 5284-5290.
Nielsen, J. (1999). User interface directions for the web. Communications of the ACM,
42(1), 65-72.
Nielsen, J. (2008). How little do users read? Nielsen Norman Group.
Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the
Internet worldwide: Cambridge University Press.
Novak, T. (2005). Toward a Digitized Public Sphere? A Critical Reevaluation of the
Internet’s Democratizing Potential.
Novak, T. P., Hoffman, D. L., & Duhachek, A. (2003). The influence of goal-directed
and experiential activities on online flow experiences. Journal of consumer
psychology, 13(1), 3-16.
Novak, T. P., Hoffman, D. L., & Yung, Y.-F. (2000). Measuring the customer experience
in online environments: A structural modeling approach. Marketing science,
19(1), 22-42.
Nwaubani, A. T. (2014, 23 September). Letter from Africa: Nigeria's internet
warriors. BBC. Retrieved from http://www.bbc.co.uk/news/world-africa-
29237507 (Accessed: 12 March 2015)
O'Brien, H. L., & Toms, E. G. (2008). What is user engagement? A conceptual
framework for defining user engagement with technology. Journal of the
American Society for Information Science and Technology, 59(6), 938-955.
O'Keefe, G. J., & Sulanowski, B. K. (1995). More than just talk: Uses, gratifications, and
the telephone. Journalism & Mass Communication Quarterly, 72(4), 922-933.
O'Riain, S., Curry, E., & Harth, A. (2012). XBRL and open data for global financial
ecosystems: A linked data approach. International Journal of Accounting
Information Systems, 13(2), 141-162.
ODEP. (2014). Improving the Accessibility of Social Media in Government. Retrieved
from https://www.digitalgov.gov/resources/improving-the-accessibility-of-
social-media-in-government/. (Accessed: 20 July 2016)
OECD. (2010). Learning to learn: Student engagement, strategies and practices.
Retrieved from Paris: https://www.oecd.org/pisa/pisaproducts/48852630.pdf
(Accessed: 5 April 2016)
Oktem, M. K., Demirhan, K., & Demirhan, H. (2014). The Usage of E-Governance
Applications by Higher Education Students. Kuram Ve Uygulamada Egitim
Bilimleri, 14(5), 1925-1943.
Olphert, W., & Damodaran, L. (2007). Citizen participation and engagement in the
design of e-government services: The missing link in effective ICT design and
delivery. Journal of the Association for Information Systems, 8(9), 491.
Onnela, J.-P., & Reed-Tsochas, F. (2010). Spontaneous emergence of social influence
in online systems. Proceedings of the National Academy of Sciences, 107(43),
18375-18380.
Open Government Data. (2015). What is open government Data. Retrieved from
http://opengovernmentdata.org/ (Accessed: 17 June 2015)
Ormandy, P. (2011). Defining information need in health–assimilating complex
theories derived from information science. Health expectations, 14(1), 92-104.
Osborne, J. W. (2015). What is Rotating in Exploratory Factor Analysis? Practical
Assessment, Research & Evaluation, 20(2), 2.
Osborne, J. W., & Costello, A. B. (2009). Best practices in exploratory factor analysis:
Four recommendations for getting the most from your analysis. Pan-Pacific
Management Review, 12(2), 131-146.
Osiobe, S. A. (1988). Information seeking behaviour. International Library Review,
20(3), 337-346.
Paek, H.-J., Hove, T., Jung, Y., & Cole, R. T. (2013). Engagement across three social
media platforms: An exploratory study of a cause-related PR campaign. Public
Relations Review, 39(5), 526-533.
Palmgreen, P., & Rayburn, J. D. (1979). Uses and Gratifications and Exposure To Public
Television A Discrepancy Approach. Communication Research, 6(2), 155-179.
Panagiotopoulos, P., Bigdeli, A. Z., & Sams, S. (2014). Citizen–government
collaboration on social media: The case of Twitter in the 2011 riots in England.
Government Information Quarterly, 31(3), 349-357.
Panopoulou, E., Tambouris, E., & Tarabanis, K. (2014). Success factors in designing
eParticipation initiatives. Information and Organization, 24(4), 195-213.
doi:10.1016/j.infoandorg.2014.08.001
Parent, M., Vandebeek, C. A., & Gemino, A. C. (2005). Building citizen trust through e-
government. Government Information Quarterly, 22(4), 720-736.
Park, N., Kee, K. F., & Valenzuela, S. (2009). Being immersed in social networking
environment: Facebook groups, uses and gratifications, and social outcomes.
CyberPsychology & Behavior, 12(6), 729-733.
Patton, M. Q. (1990). Qualitative evaluation and research methods: SAGE
Publications, inc.
Pellegrini, A. D. (1995). Introduction. In A. D. Pellegrini (Ed.), The future of play theory:
A multidisciplinary inquiry into the contributions of Brian Sutton-Smith (pp. vii–
x). Albany: State University of New York Press.
Peng, K.-F., Fan, Y.-W., & Hsu, T.-A. (2004). Proposing the content perception theory
for the online content industry-a structural equation modeling. Industrial
Management & Data Systems, 104(6), 469-489.
Pew Research Centre. (2014a). Emerging nations embrace internet, mobile
technology. Retrieved from http://www.pewglobal.org/2014/02/13/emerging-
nations-embrace-internet-mobile-technology/ (Accessed: 10 February 2015)
Pew Research Centre. (2014b). Many in emerging and developing nations
disconnected from politics. Retrieved from
http://www.pewglobal.org/2014/12/18/many-in-emerging-and-developing-
nations-disconnected-from-politics/ (Accessed: 10 February 2015)
Phillips, P. W. B. (2013). Democracy, governance, and public engagement: A critical
assessment. In K. O'Doherty & E. Einsiedel (Eds.), Public Engagement and
Emerging Technologies (pp. 45-65). Vancouver: UBC Press.
Pickard, A. J. (2013). Research methods in information (2nd ed.). London: Facet
Publishing.
Ping, R. (2009). Is there any way to improve average variance extracted (AVE) in a
latent variable (LV) X? Retrieved from
http://home.att.net/~rpingjr/ImprovAVE1.doc (Accessed: 15 July 2016)
Pizzo, E., Doyle, C., Matthews, R., & Barlow, J. (2014). Patient and public involvement:
how much do we spend and what are the benefits? Health expectations.
Polit, D. F., & Beck, C. T. (2006). The content validity index: are you sure you know
what's being reported? Critique and recommendations. Research in nursing &
health, 29(5), 489-497.
Polit, D. F., Beck, C. T., & Owen, S. V. (2007). Is the CVI an acceptable indicator of
content validity? Appraisal and recommendations. Research in nursing &
health, 30(4), 459-467.
Postrel, V. (2009). The substance of style. New York: Harper Collins.
Preece, J. (2001). Sociability and usability in online communities: Determining and
measuring success. Behaviour & Information Technology, 20(5), 347-356.
Prieger, J. E. (2003). The supply side of the digital divide: is there equal availability in
the broadband Internet access market? Economic Inquiry, 41(2), 346-363.
Prudon, P. (2014). Confirmatory factor analysis: a brief introduction and critique.
Available at: http://home.kpn.nl/p.prudon/CFA-critique.pdf
PSU. (2016). Regression methods. Retrieved from
https://onlinecourses.science.psu.edu/stat501/node/279 (Accessed: 15 July
2016)
Pusey, M. (1987a). Jürgen Habermas. Chichester: Ellis Horwood.
Pusey, M. (1987b). Jürgen Habermas: key sociologists. London: Routledge.
Raacke, J., & Bonds-Raacke, J. (2008). MySpace and Facebook: Applying the uses and
gratifications theory to exploring friend-networking sites. CyberPsychology &
Behavior, 11(2), 169-174.
Raat, H., Botterweck, A. M., Landgraf, J. M., Hoogeveen, W. C., & Essink-Bot, M.-L.
(2005). Reliability and validity of the short form of the child health
questionnaire for parents (CHQ-PF28) in large random school based and
general population samples. Journal of epidemiology and community health,
59(1), 75-82.
Rashotte, L. (2007). Social influence. The blackwell encyclopedia of social psychology,
9, 562-563.
Reddick, C. G., & Turner, M. (2012). Channel choice and public service delivery in
Canada: Comparing e-government to traditional service delivery. Government
Information Quarterly, 29(1), 1-11.
Reichheld, F. F., & Schefter, P. (2000). E-loyalty: your secret weapon on the web.
Harvard business review, 78(4), 105-113.
Reid, D. (2004). A model of playfulness and flow in virtual reality interactions.
Presence, 13(4), 451-462.
Resmini, A., & Rosati, L. (2012). A brief history of information architecture. Journal of
Information Architecture, 3(2).
Rieber, L. P. (1996). Seriously considering play: Designing interactive learning
environments based on the blending of microworlds, simulations, and games.
Educational technology research and development, 44(2), 43-58.
Rissi, J. J., Gelmon, S., Saulino, E., Merrithew, N., Baker, R., & Hatcher, P. (2015).
Building the foundation for health system transformation: Oregon's patient-
centered primary care home program. Journal of Public Health Management
and Practice, 21(1), 34-41.
Rodman, G. R. (2009). Mass media in a changing world: History, industry, controversy:
McGraw Hill Boston.
Rogers, E. M. (2003). Diffusion of innovations. New York: The Free Press.
Rogers, S. (2012). UK open government data: the results of the official audit. The
Guardian. Retrieved from
https://www.theguardian.com/news/datablog/2012/apr/18/uk-open-
government-data-national-audit-office (Accessed: 13 October 2014)
Roman, A. V., & Miller, H. T. (2013). New questions for e-government: Efficiency but
not (yet?) democracy. International Journal of Electronic Government
Research (IJEGR), 9(1), 65-81.
Rosenfeld, L., & Morville, P. (2002). Information architecture for the world wide web:
O'Reilly Media, Inc.
Ross, K. N. (2005). Quantitative research methods in educational planning: UNESCO
International Institute for Educational Planning.
Rowe, G., & Frewer, L. J. (2005). A typology of public engagement mechanisms.
Science, technology & human values, 30(2), 251-290.
Roy, J. (1999). Polis and oikos in classical Athens. Greece and Rome (Second Series),
46(01), 1-18.
Rubin, A. (2002). The uses-and-gratifications perspective of media effects. In J. Bryant
& D. Zillmann (Eds.), Media effects: Advances in theory and research (2nd ed.,
pp. 525-548). New Jersey: Lawrence Erlbaum Associates, Inc.
Rubin, D. B. (2004). Multiple imputation for nonresponse in surveys (Vol. 81). New
York: John Wiley & Sons.
Rubio, D. M., Berg-Weger, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying
content validity: Conducting a content validity study in social work research.
Social work research, 27(2), 94-104.
Ryan, G. W., & Bernard, H. R. (2000). Data management and analysis methods. In N.
K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp.
769-802). Thousand Oaks, CA: Sage.
Sack, W. (2005). Discourse architecture and very large-scale conversation. In R.
Latham & S. Sassen (Eds.), Digital formations: IT and new architectures in the
global realm (pp. 242-282). Princeton: Princeton University Press.
Saebo, O., Flak, L. S., & Sein, M. K. (2011). Understanding the dynamics in e-
Participation initiatives: Looking through the genre and stakeholder lenses.
Government Information Quarterly, 28(3), 416-425.
doi:10.1016/j.giq.2010.10.005
Sæbø, Ø., Rose, J., & Flak, L. S. (2008). The shape of eParticipation: Characterizing an
emerging research area. Government Information Quarterly, 25(3), 400-428.
Sæbø, Ø., Rose, J., & Molka-Danielsen, J. (2010). eParticipation: Designing and
managing political discussion forums. Social Science Computer Review, 28(4),
403-426.
Said, H., Badru, B. B., & Shahid, M. (2011). Confirmatory factor analysis (CFA) for
testing validity and reliability instrument in the study of education. Australian
Journal of Basic and Applied Sciences, 5(12), 1098-1103.
Sample, J. (2014). The communication styles and abilities inventory for leaders (C-SAIL):
Feedback results. Retrieved from http://www.linkageinc.com/leadership-
Feedback results Retrieved from http://www.linkageinc.com/leadership-
development-
documents/files/assessments/Linkage_C_SAIL_Sample_Report.pdf (Accessed:
7 October 2014)
Sandoval-Almazan, R., & Gil-Garcia, J. R. (2012). Are government internet portals
evolving towards more interaction, participation, and collaboration? Revisiting
the rhetoric of e-government among municipalities. Government Information
Quarterly, 29, S72-S81.
Sandoval-Almazan, R., Leyva, N. K. S., & Gil-Garcia, J. R. (2013). Maturity and evolution
of e-government portals in Central America: A three-year assessment 2011-
2013. Paper presented at the Proceedings of the 7th International Conference
on Theory and Practice of Electronic Governance.
Santos, J. R. A. (1999). Cronbach’s alpha: A tool for assessing the reliability of scales.
Journal of extension, 37(2), 1-5.
Sashi, C. (2012). Customer engagement, buyer-seller relationships, and social media.
Management Decision, 50(2), 253-272.
Saunders, J. B., Aasland, O. G., Babor, T. F., De la Fuente, J. R., & Grant, M. (1993).
Development of the alcohol use disorders identification test (AUDIT): WHO
collaborative project on early detection of persons with harmful alcohol
consumption-II. Addiction, 88(6), 791-804.
Schermelleh-Engel, K., Moosbrugger, H., & Müller, H. (2003). Evaluating the fit of
structural equation models: Tests of significance and descriptive goodness-of-
fit measures. Methods of psychological research online, 8(2), 23-74.
Schmitt, N., Klimoski, R. J., Ferris, G. R., & Rowland, K. M. (1991). Research methods in
human resources management: South-Western Pub.
Schriesheim, C. A., Powers, K. J., Scandura, T. A., Gardiner, C. C., & Lankau, M. J.
(1993). Improving construct measurement in management research:
Comments and a quantitative approach for assessing the theoretical content
adequacy of paper-and-pencil survey-type instruments. Journal of
Management, 19(2), 385-417.
Schultze, U., & Avital, M. (2011). Designing interviews to generate rich data for
information systems research. Information and Organization, 21(1), 1-16.
Schumacker, R. E., & Lomax, R. G. (2004). A beginner's guide to structural equation
modeling (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Schweitzer, E. J. (2008). Innovation or normalization in e-campaigning? A longitudinal
content and structural analysis of German party websites in the 2002 and
2005 national elections. European Journal of Communication, 23(4), 449-470.
Seiter, J. S., & Gass, R. H. (2004). Perspectives on persuasion, social influence, and
compliance gaining. Boston, MA: Allyn and Bacon.
Shedroff, N. (1999). Information interaction design: A unified field theory of design.
Information design, 267-292.
Sherry, J. L., Lucas, K., Greenberg, B. S., & Lachlan, K. (2006). Video game uses and
gratifications as predictors of use and game preference. Playing video games:
Motives, responses, and consequences, 213-224.
Shirk, J. (2015). “I Try to Work With These People.” Scientists, Citizen Science, and
Public Engagement. Paper presented at the 2015 AAAS Annual Meeting (12-16
February 2015).
Shirky, C. (2011). The political power of social media: Technology, the public sphere,
and political change. Foreign Affairs, 90(1), 28-41.
Simons, H. W. (1976). Persuasion. Reading, MA: Addison-Wesley.
Sinharay, S., Stern, H. S., & Russell, D. (2001). The use of multiple imputation for the
analysis of missing data. Psychological methods, 6(4), 317.
Sipior, J. C., & Ward, B. T. (2005). Bridging the digital divide for e-government
inclusion: A United States case study. The electronic Journal of e-Government,
3(3), 137-146.
Sirianni, C., & Friedland, L. (2003). Deliberative democracy. Civic Dictionary, The Civic
Practices Network. Retrieved from http://www.cpn.org/tools/dictionary/deliberate.html
Smith, E. R. (2000). E-loyalty: How to keep customers coming back to your website:
HarperInformation.
Smith, G., John, P., & Sturgis, P. (2013). Taking political engagement online: An
experimental analysis of asynchronous discussion forums. Political Studies,
61(4), 709-730.
Smith, J. A., Flowers, P., & Larkin, M. (2009). Interpretative phenomenological
analysis: Theory, method and research. London: Sage.
Smith, K. N. (2011). Social media and political campaigns.
Smucker, M. D., Guo, X. S., & Toulis, A. (2014). Mouse movement during relevance
judging: implications for determining user attention. Paper presented at the
Proceedings of the 37th international ACM SIGIR conference on Research &
development in information retrieval.
Soley-Bori, M. (2013). Dealing with missing data: Key assumptions and methods for
applied analysis. Retrieved from http://www.bu.edu/sph/files/2014/05/Marina-
tech-report.pdf (Accessed: 20 July 2016)
Sørensen, E., & Torfing, J. (2011). Enhancing collaborative innovation in the public
sector. Administration & Society. Advance online publication.
doi:10.1177/0095399711418768
Squires, J. E., Estabrooks, C. A., Newburn-Cook, C. V., & Gierl, M. (2011). Validation of
the conceptual research utilization scale: an application of the standards for
educational and psychological testing in healthcare. BMC health services
research, 11(1), 1.
Srnka, K. J., & Koeszegi, S. T. (2007). From words to numbers: how to transform
qualitative data into meaningful quantitative results. Schmalenbach Business
Review, 59(1), 29-57.
Stafford, T. F., Stafford, M. R., & Schkade, L. L. (2004). Determining uses and
gratifications for the Internet. Decision Sciences, 35(2), 259-288.
StatSoft. (2013). Electronic statistics textbook. Tulsa, OK: StatSoft. Retrieved from
http://www.statsoft.com/Textbook (Accessed: 15 March 2016)
Steiger, J. H. (2007). Understanding the limitations of global fit assessment in
structural equation modeling. Personality and Individual differences, 42(5),
893-898.
Strickland, O. L., Moloney, M. F., Dietrich, A. S., Myerburg, S., Cotsonis, G. A., &
Johnson, R. V. (2003). Measurement issues related to data collection on the
World Wide Web. Advances in Nursing Science, 26(4), 246-256.
Strohmeier, D., Yanagida, T., & Toda, Y. (2016). Individualism/collectivism as
predictors of relational and physical victimization in Japan and Austria. School
bullying in different cultures: Eastern and Western perspectives, 259.
Supp, S., La Sorte, F., Cormier, T., Lim, M., Powers, D., Wethington, S., . . . Graham, C.
(2015). Citizen-science data provides new insight into annual and seasonal
variation in migration patterns. Ecosphere, 6(1), 15.
Susha, I., Grönlund, Å., & Janssen, M. (2015). Organizational measures to stimulate
user engagement with open data. Transforming Government: People, Process
and Policy, 9(2), 181-206.
Sutcliffe, A. (2009). Designing for user engagement: Aesthetic and attractive user
interfaces. Synthesis lectures on human-centered informatics, 2(1), 1-55.
Tabachnick, B. G., & Fidell, L. S. (2007). Using Multivariate Statistics. Boston: Pearson
Education Inc.
Taylor, R. S. (1962). The process of asking questions. American documentation, 13(4),
391-396.
Terrell, S. R. (2012). Mixed-methods research methodologies. The Qualitative Report,
17(1), 254-280.
Thompson, B. (2004). Exploratory and confirmatory factor analysis: Understanding
concepts and applications: American Psychological Association.
Thomson, M., MacInnis, D. J., & Whan Park, C. (2005). The ties that bind: Measuring
the strength of consumers’ emotional attachments to brands. Journal of
consumer psychology, 15(1), 77-91.
Toder-Alon, A., Brunel, F. F., & Fournier, S. (2014). Word-of-mouth rhetorics in social
media talk. Journal of Marketing Communications, 20(1-2), 42-64.
Tojib, D. R., & Sugianto, L.-F. (2006a). Content validating the B2E portal user
satisfaction instrument. Paper presented at the 5th IEEE/ACIS International
Conference on Computer and Information Science and 1st IEEE/ACIS
International Workshop on Component-Based Software Engineering, Software
Architecture and Reuse (ICIS-COMSAR 2006).
Tojib, D. R., & Sugianto, L.-F. (2006b). Content validity of instruments in IS research.
Journal of Information Technology Theory and Application (JITTA), 8(3), 5.
Tolbert, C. J., & Mossberger, K. (2006). The effects of e-government on trust and
confidence in government. Public administration review, 66(3), 354-369.
Toms, E. G. (2002). Information interaction: Providing a framework for information
architecture. Journal of the American Society for Information Science and
Technology, 53(10), 855-862.
Treece, E. W., & Treece Jr, J. W. (1977). Elements of research in nursing. Nursing,
7(6), 12-13.
Trevino, L. K., & Webster, J. (1992). Flow in computer-mediated communication:
Electronic mail and voice mail evaluation and impacts. Communication
Research, 19(5), 539-573.
Trochim, W. M. K. (2006). Research methods knowledge base. Retrieved from
http://www.socialresearchmethods.net/kb/index.php (Accessed: 22 June 2014)
Turner III, D. W. (2010). Qualitative interview design: A practical guide for novice
investigators. The Qualitative Report, 15(3), 754-760.
Ubaldi, B. (2013). Open government data: Towards empirical analysis of open
government data initiatives (OECD Working Papers on Public Governance No.
22). Paris: OECD Publishing.
United Nations. (2014). E-government survey 2014: E-government for the future we
want. New York: United Nations. Retrieved from
http://unpan3.un.org/egovkb/Portals/egovkb/Documents/un/2014-Survey/E-
Gov_Complete_Survey-2014.pdf (Accessed: 21 August 2014)
Urista, M. A., Dong, Q., & Day, K. D. (2009). Explaining why young adults use MySpace
and Facebook through uses and gratifications theory. Human Communication,
12(2), 215-229.
Vandenberg, R. J. (2006). Statistical and methodological myths and urban legends:
Where, pray tell, did they get this idea? Organizational Research Methods,
9(2), 194.
Venkatesh, V., Brown, S. A., & Bala, H. (2013). Bridging the qualitative-quantitative
divide: Guidelines for conducting mixed methods research in information
systems. MIS quarterly, 37(1), 21-54.
Verba, S., Schlozman, K. L., & Brady, H. E. (1995). Voice and equality: Civic
voluntarism in American politics. Cambridge, MA: Harvard University Press.
Vetere, F., Gibbs, M. R., Kjeldskov, J., Howard, S., Mueller, F. F., Pedell, S., . . . Bunyan,
M. (2005). Mediating intimacy: designing technologies to support strong-tie
relationships. Paper presented at the Proceedings of the SIGCHI conference on
Human factors in computing systems.
Waisberg, D., & Kaushik, A. (2009). Web Analytics 2.0: Empowering customer
centricity. Search Engine Marketing Journal, 2(1), 5-11.
Waltz, C. F., & Bausell, B. R. (1981). Nursing research: Design, statistics and computer
analysis. Philadelphia: F.A. Davis.
Wang, A. (2006). Advertising engagement: A driver of message involvement on
message effects. Journal of Advertising Research, 46(4), 355.
Wang, L., Bretschneider, S., & Gant, J. (2005). Evaluating web-based e-government
services with a citizen-centric approach. Paper presented at the 38th Annual
Hawaii International Conference on System Sciences (HICSS'05).
Warkentin, M., Gefen, D., Pavlou, P. A., & Rose, G. M. (2002). Encouraging citizen
adoption of e-government by building trust. Electronic markets, 12(3), 157-
162.
Warren, A. M., Sulaiman, A., & Jaafar, N. I. (2014). Social media effects on fostering
online civic engagement and building citizen trust and trust in institutions.
Government Information Quarterly, 31(2), 291-301.
Wayman, J. C. (2003). Multiple imputation for missing data: What is it and how can I
use it. Paper presented at the Annual Meeting of the American Educational
Research Association, Chicago, IL.
Webster, J., Trevino, L. K., & Ryan, L. (1994). The dimensionality and correlates of flow
in human-computer interactions. Computers in human behavior, 9(4), 411-
426.
Weiksner, G. M., Fogg, B., & Liu, X. (2008). Six patterns for persuasion in online social
networks. In Persuasive technology (pp. 151-163). Berlin: Springer.
Welch, E. W., & Hinnant, C. C. (2003). Internet use, transparency, and interactivity
effects on trust in government. Paper presented at the 36th Annual Hawaii
International Conference on System Sciences (HICSS'03).
Welch, E. W., Hinnant, C. C., & Moon, M. J. (2005). Linking citizen satisfaction with e-
government and trust in government. Journal of public administration
research and theory, 15(3), 371-391.
Wellington, J., Bathmaker, A.-M., Hunt, C., McCulloch, G., & Sikes, P. (2005).
Succeeding with your doctorate. London: Sage.
Welman, C., Kruger, F., & Mitchell, B. (2005). Research methodology (3rd ed.). Oxford:
Oxford University Press.
Welsh, E. (2002). Dealing with data: Using NVivo in the qualitative data analysis
process. Paper presented at the Forum Qualitative Sozialforschung/Forum:
Qualitative Social Research.
Wenner, L. A. (1982). Gratifications sought and obtained in program dependency: A
study of network evening news programs and 60 Minutes. Communication
Research, 9(4), 539-560.
Wigfield, A., Cambria, J., & Ho, A. N. (2012). Motivation for Reading Information
Texts. In J. T. Guthrie, A. Wigfield, & S. L. Klauda (Eds.), Adolescents'
engagement in academic literacy (pp. 52-102).
Wigfield, A., & Guthrie, J. T. (1997). Relations of children's motivation for reading to
the amount and breadth of their reading. Journal of Educational Psychology,
89(3), 420.
Wilhelm, A. G. (2000). Democracy in the digital age: Challenges to political life in
cyberspace. New York: Routledge.
Wilkinson, S. (2000). Women with breast cancer talking causes: Comparing content,
biographical and discursive analyses. Feminism & Psychology, 10(4), 431-460.
Williams, B., Onsman, A., & Brown, T. (2010). Exploratory factor analysis: A five-step
guide for novices. Australasian Journal of Paramedicine, 8(3).
Wilson, T. D. (2006). On user studies and information needs. Journal of
Documentation, 62(6), 658-670.
Wolfinbarger, M., & Gilly, M. C. (2001). Shopping online for freedom, control, and
fun. California Management Review, 43(2), 34-55.
Wright, S., & Street, J. (2007). Democracy, deliberation and design: the case of online
discussion forums. New Media & Society, 9(5), 849-869.
Ye, S., & Wu, S. F. (2010). Measuring message propagation and social influence on
Twitter.com. In Social informatics (pp. 216-231). Berlin: Springer.
Zainal, Z. (2007). Case study as a research method. Jurnal Kemanusiaan, 9, 1-6.
Zaller, J. (1990). Political awareness, elite opinion leadership, and the mass survey
response. Social Cognition, 8(1), 125.
Zhao, Y., Fautz, C., Hennen, L., Srinivas, K. R., & Li, Q. (2015). Public engagement in
the governance of science and technology. In Science and technology
governance and ethics (pp. 39-51). Cham: Springer.
Zheng, L., & Zheng, T. (2014). Innovation through social media in the public sector:
Information and interactions. Government Information Quarterly, 31(Suppl. 1),
S106-S117.
Zheng, Y., Schachter, H. L., & Holzer, M. (2014). The impact of government form on e-
participation: A study of New Jersey municipalities. Government Information
Quarterly, 31(4), 653-659. doi:10.1016/j.giq.2014.06.004
Zuiderwijk, A., Janssen, M., Choenni, S., Meijer, R., & Alibaks, R. S. (2012). Socio-
technical impediments of open data. Electronic Journal of e-Government,
10(2), 156-172.
Zuiderwijk, A., Janssen, M., Gil-García, J., & Helbig, N. (2014). Introduction. Innovation
through open data: a review of the state-of-the-art and an emerging research
agenda. Journal of Theoretical and Applied Electronic Commerce Research,
9(2), 258-268.