Design and Analysis of A Multi-Agent E-Learning System Using Prometheus Design Tool
Corresponding Author:
Kennedy E. Ehimwenma
Department of Computer Science
Wenzhou-Kean University
88 Daxue Rd, Ouhai District, Wenzhou, Zhejiang, China
Email: [email protected]
1. INTRODUCTION
An agent software methodology is a set of guidelines that covers the entire life-cycle of a multi-
agent development process. A multi-agent system (MAS) is a system of interactive agents or autonomous
program modules. In general, the unified modelling language (UML) assists software developers to specify, design, visualize and document software engineering processes that meet application requirements [1]. UML allows models to be created, considered, developed, and processed in a standard way from the initial phase of analysis through design and implementation [2]. Systems implementation is focused on users' needs as well as system functionality, with the requirements specification as the driver. From start to finish, effective and efficient systems evolve from user interaction and the incremental principle of development. Software development stages share a common abstraction in both the object-oriented programming (OOP) methodology and agent-oriented software engineering (AOSE). In the OOP paradigm, these stages are: requirements gathering, analysis, design, implementation, testing and maintenance. Whilst the AOSE process subsumes the steps in OOP methodologies, the concepts for developing objects (in OOP) are different from those in agent-based systems. For instance, object-oriented methodologies cover concepts such as objects, classes and inheritance, whereas in AOSE the design concepts view agents as autonomous, situated, reactive, and social. This paper presents the application of the Prometheus [3, 4] agent-oriented methodology to the static and dynamic design of an e-learning MAS. Though there are several AOSE methodologies for designing agent-based systems, the choice of Prometheus was predicated on its structured and detailed step-by-step procedure that supports how requirement statements can be acquired. The purpose of the system is to pre-assess students' prior learning, classify their skills, and recommend appropriate material suitable to their needs. Thus, the contributions of this paper are: i) to demonstrate requirements analysis and design specifications for the development of an e-learning pre-assessment system using MAS; ii) to analyse the descriptive functions and roles of multi-agents within an e-learning pre-assessment system; iii) to show a detailed model of software engineering with an agent UML (AUML) tool for teaching and learning; iv) to demonstrate inter-agent communication for the assessment and classification of students' prior skill-set; and v) to analyse the data collated from the system using regression models of prediction. The paper continues with the background logic of knowledge engineering for the system, in which an abstract model of an ontology tree traversal is discussed as applicable to the MAS implementation. Section 2 presents AUML tools and the agent software development life cycle (ASDLC). Section 3 presents models of analysis and design from the use of the Prometheus design tool (PDT). Section 4 looks into implementation, issues at experimentation, data collection and analysis; and section 5 concludes the paper.
$p_{i,j}(x_{i,j}, x_{i+1,j\pm 1}) \wedge p_{i+1,j\pm 1}(x_{i+1,j\pm 1}, z_{j\pm 1,k}) \rightarrow p_{i+1,j\pm 1}(x_{i,j}, z_{j\pm 1,k})$   (Axiom 1)
Axiom 1 states that if a parent node $x_{i,j}$ has a named prerequisite $x_{i+1,j\pm 1}$ (one level below it in the hierarchy), on either its right-hand or left-hand side as denoted by $j \pm 1$ (+ for right, and − for left), and the named prerequisite has a named leafnode $z_{j\pm 1,k}$, then the parent node has a direct relation to the leafnode just as the prerequisite has. An example of this transitive closure is hasPrerequisite(c4, c6) ∧ hasKB(c6, n11) → hasKB(c4, n11), which satisfies the property of transitivity. In addition, Axiom 1 identifies the leafnodes $z_{j\pm 1,k}$ that are: i) pre-assessed upon, and ii) recommended when any leafnode N connected to the prerequisite node $x_{i+1,j\pm 1}$ is failed. On the other hand, the counterpart Axiom 2
$p_{i,j}(x_{i,j}, x_{i+1,j\pm 1}) \wedge p_{i+1,j\pm 1}(x_{i+1,j\pm 1}, z_{j\pm 1,k}) \rightarrow p_{i+1,j}(x_{i,j}, z_{j\pm 1,k})$   (Axiom 2)
is the axiom that also satisfies the property of transitivity. In this case, it serves the recommendation of leafnodes $z_{j\pm 1,k}$ that have a direct relation to the desired topic $x_{i,j}$, given that every pre-assessment episode on the leafnodes $z_{j\pm 1,k}$ connected to the prerequisite $x_{i+1,j\pm 1}$ has been attempted and passed. An example of this logical Axiom 2 is hasPrerequisite(c1, c3) ∧ hasKB(c3, {n4, n5, n6}) → hasKB(c1, n1).
In our agent-based pre-assessment system, agents need to communicate the ground-fact representation of these logical axioms. For instance, for an agent to resolve the relevant plans for its next action, this group of agents must inter-communicate the desired topics, passed leafnodes and/or failed leafnodes using the predicate logic forms passed($z_{j\pm 1,k}$) and failed($z_{j\pm 1,k}$). These predicates, which record the decisions taken by the multi-agents based on students' responses to questions, form the basis of the facts about a student's outcome using logic programming. This is because any object has a property that it satisfies, or any object is connected by some relation to another object. From the foregoing, the explicitly stated logic-based formulas are the premises on which the multi-agents of the pre-assessment system interact, select ontological nodes, select questions associated with leafnodes, assess users, classify user skills and recommend learning materials.
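For illustration only, the following Python sketch (not the Jason/AgentSpeak implementation used in the system) shows how ground facts of the hasPrerequisite and hasKB relations can be closed transitively in the style of Axioms 1 and 2, and how passed(N)/failed(N) facts would gate a recommendation; the particular concept and leafnode names follow the examples above, and the function names are hypothetical.

# Illustrative Python sketch; the actual system exchanges these facts as
# Jason/AgentSpeak predicates between agents.

# Ground facts (a hypothetical fragment, following the examples in the text).
has_prerequisite = {("c1", "c3"), ("c4", "c6")}        # hasPrerequisite(parent, prerequisite)
has_kb = {("c3", "n4"), ("c3", "n5"), ("c3", "n6"),
          ("c6", "n11"), ("c1", "n1")}                 # hasKB(concept, leafnode)

def transitive_kb(has_prerequisite, has_kb):
    """Axiom 1 style closure: hasPrerequisite(x, y) and hasKB(y, z) imply hasKB(x, z)."""
    return {(parent, leaf)
            for (parent, prereq) in has_prerequisite
            for (concept, leaf) in has_kb
            if concept == prereq}

def leafnodes_to_assess(desired, has_prerequisite, has_kb):
    """Leafnodes pre-assessed for a desired topic: those of its prerequisite."""
    return {leaf for (parent, leaf) in transitive_kb(has_prerequisite, has_kb)
            if parent == desired}

def next_recommendation(desired, passed, failed, has_prerequisite, has_kb):
    """Failed prerequisite leafnodes are recommended first; once all are passed,
    the desired topic's own leafnodes are recommended (Axiom 2 style)."""
    prereq_leaves = leafnodes_to_assess(desired, has_prerequisite, has_kb)
    if prereq_leaves & failed:
        return sorted(prereq_leaves & failed)
    if prereq_leaves <= passed:
        return sorted(leaf for (c, leaf) in has_kb if c == desired)
    return []

# passed(N)/failed(N) facts communicated by the assessing agent.
passed, failed = {"n4", "n5", "n6"}, set()
print(transitive_kb(has_prerequisite, has_kb))                               # includes ('c4', 'n11')
print(next_recommendation("c1", passed, failed, has_prerequisite, has_kb))   # ['n1']

Running the sketch reproduces the two worked examples: hasKB(c4, n11) is derived from the c4/c6 facts, and once n4, n5 and n6 are all passed, n1 under the desired topic c1 is recommended.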
The work in [13] conceptualized a VLE application on mobile agent technology for the assessment of students' knowledge, described agent roles and agent interactions using a UML tool, and finally proceeded to implementation using JADE.
The InfoStation system [2], a project of the distributed e-learning centre (DeLC), also used multi-agent technology with a proposed implementation on JADE [14]. With a UML, [2] described the InfoStation system as a system of interactive agents whose functions included designated e-services. Also, in [15] the AGILE-PASSI methodology was reported as the development tool for a medical educational game called MEDEDUC for the purpose of improving learning in medical education and clinical performance. As a game, MEDEDUC allowed students to answer questions at different levels of difficulty on multimedia presentations. While many agent-based applications have been developed in fields such as commerce and security, or adaptive dynamic programming [16], very limited attention has been given to agent-based development for student learning. Among the aforementioned few, none had the combined system goal of skills classification and recommendation of learning materials that we are presenting in this paper.
2.2. Prometheus
Prometheus [27] is a methodology designed for the realisation of BDI agent systems with the use of
goals and plans. It supports development activities from requirements specification through to detailed design
for implementation. Prometheus design tool (PDT) [29, 30] is a graphical editor that supports the Prometheus
methodology. The PDT supports the development and documentation of all the phases of the Prometheus
methodology for building agent-based systems. Prometheus has three inter-connected design phases, which are: system specification, architectural design, and detailed design.
[Figure 2 here. The figure lists the initial goals (observe percept, understanding of prerequisite, testing, classifying, continuous feedback, KB update, recommend materials) and, for each, the sub-goals developed to achieve it, grouped under user interaction, persistent belief store, and recommendation (e.g., receive user concept, present concept, search ontological relation, store user learning activity persistently, fetch URL link and present to user).]
Figure 2. High level description of problem including initial goal and overall system goal specification
Table 2. The PDT notation symbols and their meanings [3, 32, 33] (graphical symbols not reproduced)
Agent: The agent symbol.
Action: What the agent does that has an effect on the environment or on other agents.
Role: Symbolizes a role or a group of roles for agents.
Protocol: Specifies an interaction between agents. Protocols are specified using textual notations that map to AUML2.
Data: Represents a belief (internal knowledge model) or external data; it is where functionalities, and hence agents, read or write data or information.
Messages: Symbolizes a message communicated between agents.
BDI Messages: Represents messages that update the beliefs of agents.
Percept: Represents the input coming from the environment to the agent.
Scenario: An abstract description of a sequence of steps taken in the development of a system; usually the initial step that starts the breakdown of the "statement of problem" or description of the problem to solve.
Goal: A realizable target or achievement set for an agent.
Connection arrows: Edges that connect entities (i.e., symbols) together.
classify and persistentBB update goals after its decision-making function; and the classify goal further connects to the recommend-materials goal.
overview phase are adopted for specifying agents’ details. The inherited interfaces are the notation symbols
that appear greyish in colour.
Agent agInterface: Figure 7 shows a more refined and detailed design in which a CArtAgO artifact is the medium used to get user input. The interface agent first creates the artifact in order to observe it. The observed inputs are communicated as messages in an agent plan (shown with the plan diagram or symbol) to other agents, e.g., the agent agSupport that is responsible for pre-assessing students.
Agent agSupport: This is the pre-test agent that is saddled with the task of questioning a user's skills before making recommendations, as shown in Figures 8-9. The agent agSupport uses its achievement goals for navigation, from leafnode $z_{j,k}$ to leafnode $z_{j,k+1}$ in the hierarchy of concepts, to retrieve quizzes that are represented in predicate logic in its BB to test students' skills. Using the answer percept received, it compares and matches the given answer input with the predefined answer in its BB. Besides taking the decision for either a passed or a failed predicate on every answer received, this agent also communicates all assessment activities, namely: the decision reached per question, the questions asked, and the answers received, to the other agents in the MAS that need to know. This agent also date- and time-stamps every learning activity.
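A minimal Python sketch of this compare-and-decide step is given below purely for illustration; the system itself implements this behaviour as Jason plans, so the question bank, the normalisation rule and the helper names here are assumptions made for the example.

from datetime import datetime

# Hypothetical question bank: leafnode -> (question, expected SQL answer).
question_bank = {
    "selectAll":   ("Write a query to return every row of table Staff.", "SELECT * FROM Staff"),
    "selectWhere": ("Return staff whose salary exceeds 50000.",
                    "SELECT * FROM Staff WHERE salary > 50000"),
}

def normalise(sql: str) -> str:
    """Case- and whitespace-insensitive comparison of SQL answers (an assumption)."""
    return " ".join(sql.split()).rstrip(";").lower()

def assess(leafnode: str, student_answer: str) -> dict:
    """Compare the answer percept with the predefined answer, decide passed/failed,
    and time-stamp the activity as agSupport does for every assessment."""
    _question, expected = question_bank[leafnode]
    outcome = "passed" if normalise(student_answer) == normalise(expected) else "failed"
    decision = {"leafnode": leafnode, "outcome": outcome,
                "answer": student_answer, "timestamp": datetime.now().isoformat()}
    # In the MAS this decision would be communicated to the other agents via message passing.
    return decision

print(assess("selectAll", "select * from Staff;"))      # outcome: passed
print(assess("selectWhere", "SELECT name FROM Staff"))  # outcome: failed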
Agent agModelling: This agent gets message percepts from agent agSupport for every leafnode (question attached to a unit of learning) in the ontology whose pre-assessment has been completed. This agent uses the percept (or information) it receives to match the pre-conditions in its plan context, and thereafter classifies the student's skills. The category of information (in one plan) determined by this agent is communicated to the next receiving agent (agMaterial), which will in turn send learning material to the student, as shown in Figure 10.
Agent agMaterial: Figure 11 shows agent agMaterial, which keeps the URL links of learning materials as an ontology. The performative used in messages to this agent is "achieve". On receiving the "achieve" performative message from the classifier agent (after classification), the agent agMaterial then releases learning materials for students to learn. These materials depend on the number of failed and passed prerequisite assessments.
Agent agModel: This agent uses the Java TextPersistentBB class to store all the learning activities in the system. The TextPersistentBB class was configured in the MAS at the point of declaration or creation of the multi-agent project with the Mas2j [34] extension at the level of implementation. The activities stored are the
messages sent to the agent; they include students' desired topics and answer-to-question percepts (both correct and incorrect). As shown in Figure 12, the persistent beliefs are permanently stored in the system.
of the multi-agent pre-assessment system whose software engineering design steps we have presented in the preceding sections; such that ⅅ is the desired concept (also called the desired topic) that subsumes some prerequisites $C_i$, which further subsume some leafnodes $N_{i,j}$. In description logic notation, this is $N_{i,j}$ ⊑ $C_i$ ⊑ ⅅ. In the system, the content of learning is in the domain of SQL (structured query language), from which topics, which we have called the DesiredConcept ⅅ, are chosen and studied by students.
Now, let ⅅ = { c ∈ ⅅ | p(c) } and N = { n ∈ C | q(n) }. ⅅ precedes C in the hierarchy of concepts (or topics) of learning, such that |ⅅ| = |C| + 1. Then the set of topics, otherwise known as the elements considered in the domain ⅅ, is given as
ⅅ = {union, join, update, delete, insert, select};
the set of all prerequisites C underneath ⅅ is given as
C = {join, update, delete, insert, select};
and the set of all terminal leafnodes N in C and ⅅ is given as
N = {union, unionAll, selfJoin, fullOuterJoin, innerJoin, updateSelect, updateWhere, deleteSelect, deleteWhere, insertSelect, insertValue, selectOrderBy, selectDistinct, selectWhere, selectAll}.
In education, teaching-learning is chronological, and this forms the basis for the connection of a previous learning to a new or ongoing learning. Let the relation R be the set of connections between nodes, i.e., between a new topic and a previously learned topic. Then we state that D and C, and C and N, belong to some relations, respectively, as shown in the following sets A and B with regard to the given ontological node relationships:
A = D × C ⊆ R and B = C × N ⊆ R.
Symbolically, it holds that
∀d ∈ ⅅ ∀c ∈ C ∀n ∈ N, R(d, c) ∧ R(c, n).
Furthermore, the elements of the sets D, C and N satisfy the following definitions and their respective properties p and q in predicate formulas:
(d, c) = {p ∈ R | p(d, c)}, where the relation p = hasPrerequisite; and
(c, n) = {q ∈ R | q(c, n)}, where the relation q = hasKB.
Symbolically, the conjunction of the above given relations is
∀d ∈ ⅅ ∀c ∈ C ∀n ∈ N, hasPrerequisite(d, c) ∧ hasKB(c, n).
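The following Python sketch, provided only for illustration, builds these sets and the hasPrerequisite/hasKB relations for the SQL domain described above; the chain of prerequisites and the grouping of leafnodes under topics are assumptions inferred from the names, not the system's exact ontology.

# Sets of topics (D), prerequisites (C) and leafnodes (N) for the SQL domain above.
D = ["union", "join", "update", "delete", "insert", "select"]
C = ["join", "update", "delete", "insert", "select"]
N = ["union", "unionAll", "selfJoin", "fullOuterJoin", "innerJoin",
     "updateSelect", "updateWhere", "deleteSelect", "deleteWhere",
     "insertSelect", "insertValue", "selectOrderBy", "selectDistinct",
     "selectWhere", "selectAll"]
assert len(D) == len(C) + 1                      # |D| = |C| + 1

# hasPrerequisite(d, c): assumed here to be a simple chain down the hierarchy.
has_prerequisite = {(D[i], D[i + 1]) for i in range(len(D) - 1)}

# hasKB(c, n): leafnodes attached to each topic (grouping inferred from the names).
kb = {
    "union":  ["union", "unionAll"],
    "join":   ["selfJoin", "fullOuterJoin", "innerJoin"],
    "update": ["updateSelect", "updateWhere"],
    "delete": ["deleteSelect", "deleteWhere"],
    "insert": ["insertSelect", "insertValue"],
    "select": ["selectOrderBy", "selectDistinct", "selectWhere", "selectAll"],
}
has_kb = {(c, n) for c, leaves in kb.items() for n in leaves}

# A = D x C and B = C x N: every hasPrerequisite pair lies in D x C,
# and every hasKB pair lies in D x N (with C a subset of D).
assert has_prerequisite <= {(d, c) for d in D for c in C}
assert has_kb <= {(c, n) for c in D for n in N}

print(sorted(has_prerequisite))
print(sorted(has_kb))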
4. DISCUSSION
The paper has presented the Prometheus AUML design tool for the design and analysis of the pre-assessment system, and its implementation with Jason, a Java-based interpreter and declarative language. The choice of the Prometheus methodology ensured that every requirement and detailed design activity was captured with the appropriate symbol. This we have depicted from initial goal specifications, to subgoals, to agent roles and interaction, using distinctive diagrams. From critical analysis, Prometheus provides support on how requirement statements may be acquired, starting with initial goals specification, as well as a general system architecture, as against some other AUML tools. These steps are vital, as any left-out functionality would cause a void in the system: a void that may require the re-engineering of the whole system. In a declarative language, agents communicate via message passing in predicate logic form. Thus, in line with the reported mechanism of pre-assessment and recommendation and the formalized (FOL-based) pre-assessment rules [35-37], in which the MAS made accurate recommendations after pre-assessment, Figure 13 presents the pseudocode of the operation of the system and shows how the knowledge perceived by agents is used: from percept acquisition at the interface (line 7), through to other agents via the .send() internal action [4] (on lines 9, 11, 18, 23, 27; as shown in Figure 13), which clearly shows the number of interactive agents in the system. Between each internal action is the action designated for a receiving agent to execute.
Figure 13. Pseudo-algorithm of the pre-assessment process that depends on the number of leafnodes N
considered under a desiredConcept
if the answer percept coming into the system does not match some initially predefined SQL
queries then inform the student that the answer given is incorrect and then select the next
leafnode question and present to the student.
Literally, from the behaviour exhibited by the agent, the agent's interpretation of an incorrect query was to select any other plan whose plan context had no match to any already known knowledge in the agent's belief base. The mis-selection of plans was due to some uncertainty in the agent's ability to map an incorrect query percept to beliefs. This behaviour, as observed, adversely altered the order of subsequent goal/question selection of a prerequisite's leafnodes N, in contrast to the arrangement of nodes in the ontology tree. This was a non-trivial problem. At the implementation phase, one of the key principles of software methodology is to combine coding and testing [38]. This principle, which enables a system to be investigated while it is still being developed, ensured that this non-trivial problem was checked before the system was completely built.
To enable the pre-assessment agent (Figures 8-9) to accurately select the relevant plan(s) for a match of its plan context to the percept adopted in the \== operator, and to correctly determine the next appropriate agent goal and pass accurate messages to other agents, we had to introduce a process of iteration that counts plan selections in the agent for every parent node (or topic) and their connected leafnodes N. In Jason predicate logic form, an example syntax of this iteration is countForDeletePre(X), which depicts the counter for the delete node, where X is a positive integer. In addition, the negation of some incoming percepts was required to stop unsolicited plan triggering. An example of such negation in the context part of a plan was not desiredConcept("insert"), which was used to block off the desiredConcept("insert") belief in a plan context so as not to trigger the wrong plan and the wrong agent goal at a given time. As the system kept expanding, with more parent nodes D and leafnodes N being programmed and added, this block-off strategy continued to be used to mitigate anomalous agent behaviour. These two combined strategies effectively controlled the behaviour of each agent, and of the entire multi-agent system, in handling incorrect SQL query inputs.
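As an illustration of these two strategies, the sketch below mimics in Python the plan-selection counter and the negated-percept guard described above; in the actual system both are expressed directly in Jason plan contexts (e.g., countForDeletePre(X) and not desiredConcept("insert")), so the class and method names here are hypothetical.

class PlanSelectionGuard:
    """Counts plan selections per parent node and blocks plans whose topic is not the
    current desired concept, mirroring the Jason-level counters and negation guards."""

    def __init__(self, leafnodes_per_topic: dict):
        self.leafnodes = leafnodes_per_topic
        self.counters = {topic: 0 for topic in leafnodes_per_topic}   # e.g. countForDeletePre(X)

    def can_trigger(self, plan_topic: str, desired_concept: str) -> bool:
        """A plan for plan_topic may only fire when it matches the desired concept
        (the negation guard) and its leafnodes are not yet exhausted (the counter)."""
        if plan_topic != desired_concept:          # 'not desiredConcept("insert")' style block-off
            return False
        return self.counters[plan_topic] < len(self.leafnodes[plan_topic])

    def next_leafnode(self, topic: str) -> str:
        """Select the next leafnode question in ontology order and advance the counter."""
        leaf = self.leafnodes[topic][self.counters[topic]]
        self.counters[topic] += 1
        return leaf

guard = PlanSelectionGuard({"delete": ["deleteSelect", "deleteWhere"],
                            "insert": ["insertSelect", "insertValue"]})
print(guard.can_trigger("insert", "delete"))    # False: blocked by the negation guard
while guard.can_trigger("delete", "delete"):
    print(guard.next_leafnode("delete"))        # deleteSelect, deleteWhere in ontology order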
the persistent belief of the student's response to a pre-assessment on the Full Outer Join query; and
the persistent belief of a response to a pre-assessment on the Inner Join query; and then
the persistent belief of failed(N) pre-assessments on Full Outer Join and Inner Join.
In Table 3 are the data and the outcomes, either passed(N) or failed(N) binary states [1, 0], for each leafnode N, together with the time spent on each leafnode N pre-assessment task.
Visualization of the data is presented in Figures 15-17. The data were plotted using an 80% training and 20% test split. Figure 15 shows the scatter plot of the time spent against the leafnodes N encoded as integer values, with the respective leafnode N displayed per time spent. Figure 16 is the scatter plot of the linear regression model. From the plot, the linear model predicts that there would be an increase in the passed(N) ≡ 1 binary state as the time spent on pre-assessment tasks decreases. Invariably, the plot shows a correlation between an increase in passed pre-assessments and a continuous decrease in time spent, which implies an increase in the recommendation of chosen desired topics. Figure 17 is a plot of the logistic regression model. Like the linear regression model, it also predicts an increase in passed(N) pre-assessments. That is, in future more students are likely to pass their pre-assessments in the domain of SQL, provided the two approaches introduced here are kept and adopted.
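A minimal Python sketch of this analysis pipeline is shown below; it uses only a small excerpt of the Table 4 style data (leafnode encoding, time spent converted to seconds, and the [1, 0] outcome) to illustrate the 80/20 split and the linear and logistic regression fits, and it is not the exact script or dataset used to produce Figures 15-17.

from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

def mmss_to_seconds(t: str) -> int:
    """Convert a mm:ss time-spent string to seconds."""
    m, s = t.split(":")
    return int(m) * 60 + int(s)

# Small illustrative excerpt in the style of Table 4: (leafnode encoding, time spent, outcome).
records = [
    (3, "12:09", 1), (3, "09:11", 1), (3, "10:12", 0), (3, "07:13", 0),
    (6, "19:00", 1), (6, "20:03", 0), (6, "13:44", 0), (6, "07:11", 0),
    (11, "00:59", 1), (11, "06:50", 1), (11, "02:59", 0), (11, "13:05", 0),
    (15, "00:37", 1), (15, "02:55", 1), (15, "00:28", 1), (15, "01:58", 1),
]
X = [[leaf, mmss_to_seconds(t)] for leaf, t, _ in records]   # features: leafnode, seconds
y = [outcome for _, _, outcome in records]                   # target: passed(N)=1 / failed(N)=0

# 80% training / 20% test split, as used for the plots.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
logistic = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("linear predictions:", linear.predict(X_test))
print("logistic accuracy:", logistic.score(X_test, y_test))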
Table 4. Boolean classification [1, 0] and time spent on each pre-assessment task
Leafnode encoding: Boolean classification [1, 0] | Time spent (mm:ss)
Leafnode 1: Nil | Nil
Leafnode 2: Nil | Nil
Leafnode 3: [1] | (12:09), [1] | (09:11), [1] | (05:33), [1] | (10:01), [1] | (04:21), [1] | (11:12), [1] | (05:21), [1] | (08:01), [1] | (04:07), [1] | (07:25), [1] | (08:45), [0] | (10:12), [0] | (07:13)
Leafnode 4: [1] | (05:16), [1] | (13:02), [1] | (10:22), [1] | (06:56), [1] | (11:34), [1] | (15:08), [0] | (09:19), [0] | (05:33), [0] | (16:48), [0] | (17:59), [0] | (06:41), [0] | (05:00), [0] | (11:54)
Leafnode 5: [1] | (05:55), [1] | (04:35), [1] | (16:24), [1] | (07:31), [1] | (02:47), [1] | (06:57), [0] | (09:35), [0] | (09:12), [0] | (11:43), [0] | (05:13), [0] | (11:48), [0] | (13:10), [0] | (14:19)
Leafnode 6: [1] | (19:00), [0] | (20:03), [0] | (13:44), [0] | (07:11), [0] | (15:17), [0] | (15:08), [0] | (03:51), [0] | (08:10), [0] | (02:14), [0] | (01:46), [0] | (15:16), [0] | (18:05), [0] | (11:10), [0] | (03:49), [0] | (14:10), [0] | (09:43)
Leafnode 7: [1] | (01:23), [1] | (01:58), [1] | (04:11), [1] | (11:29), [1] | (03:14), [1] | (15:10), [1] | (11:21), [1] | (08:41), [1] | (11:03), [1] | (05:51), [1] | (15:09), [1] | (04:17), [1] | (04:16), [1] | (01:44), [1] | (03:17), [1] | (11:04)
Leafnode 8: [1] | (10:26), [1] | (12:05), [1] | (13:02), [1] | (17:33), [1] | (12:24), [0] | (11:15), [0] | (03:45), [0] | (07:30), [0] | (11:19), [0] | (05:18), [0] | (03:55), [0] | (18:00), [0] | (03:44), [0] | (21:40), [0] | (07:25), [0] | (11:37), [0] | (19:16), [0] | (02:41), [0] | (14:12), [0] | (08:13), [0] | (04:58)
Table 4. Boolean classification [1, 0] and time spent on each pre-assessment task (continued)
Leafnode encoding: Boolean classification [1, 0] | Time spent (mm:ss)
Leafnode 9: [1] | (02:22), [1] | (10:15), [1] | (06:08), [1] | (06:11), [1] | (05:40), [1] | (02:15), [1] | (01:58), [1] | (02:12), [1] | (17:22), [1] | (09:21), [1] | (08:39), [1] | (07:47), [1] | (07:15), [1] | (01:16), [1] | (12:18), [1] | (01:54), [1] | (15:15), [1] | (07:11), [1] | (11:18), [1] | (11:54), [0] | (01:32)
Leafnode 10: [1] | (02:22), [1] | (01:23), [1] | (07:36), [0] | (01:55), [0] | (01:57), [0] | (04:10), [0] | (11:31), [0] | (02:20), [0] | (14:25), [0] | (03:00), [0] | (03:58), [0] | (08:14), [0] | (06:47), [0] | (12:37), [0] | (05:21), [0] | (12:17), [0] | (11:12), [0] | (04:11), [0] | (07:15), [0] | (08:18)
Leafnode 11: [1] | (00:59), [1] | (06:50), [1] | (02:01), [1] | (01:29), [1] | (01:45), [1] | (03:04), [1] | (03:22), [1] | (04:01), [1] | (07:23), [1] | (05:11), [1] | (02:48), [1] | (04:07), [1] | (04:10), [1] | (09:10), [1] | (05:31), [1] | (01:23), [0] | (02:59), [0] | (02:21), [0] | (05:26), [0] | (13:05)
Leafnode 12: [1] | (03:56), [1] | (04:00), [1] | (00:53), [1] | (07:34), [1] | (01:19), [1] | (03:12), [1] | (06:22), [1] | (04:31), [1] | (05:12), [1] | (07:04), [1] | (07:17), [0] | (01:40), [0] | (04:51), [0] | (06:33)
Leafnode 13: [1] | (01:01), [1] | (01:51), [1] | (02:28), [1] | (03:12), [1] | (03:41), [1] | (02:35), [1] | (07:48), [1] | (03:27), [1] | (07:16), [1] | (04:43), [1] | (01:59), [1] | (02:55), [1] | (04:17), [1] | (03:26), [0] | (02:38)
Leafnode 14: [1] | (03:21), [1] | (11:14), [1] | (00:56), [1] | (05:21), [1] | (04:11), [1] | (02:22), [1] | (00:50), [1] | (02:32), [1] | (00:26), [1] | (03:39), [1] | (02:01), [1] | (04:21), [1] | (04:15), [1] | (04:44)
Leafnode 15: [1] | (00:37), [1] | (02:55), [1] | (00:28), [1] | (01:58), [1] | (03:01), [1] | (01:45), [1] | (01:13), [1] | (01:42), [1] | (05:29), [1] | (01:17), [1] | (01:30), [1] | (04:27), [1] | (00:23), [1] | (00:34)
Figure 15. Scatter plot of time spent per leafnode N
Figure 16. Linear regression plot of time spent to Boolean classification
5. CONCLUSIONS
This paper has presented a detailed analysis and design of a formative e-learning multi-agent pre-assessment system using the Prometheus methodology design tool, and its implementation with the Jason (AgentSpeak) language. Detailed descriptions and functions of the agents have been presented using different diagrams of the respective agents and their roles. The paper covered all design activities, including issues that evolved during implementation and the solution strategy that was adopted, then evaluation, data collection
and analysis of the data. The design activity also covered percept observation by the interface agent, inter-agent communication, the decision-making strategy, classification of user skills, and recommendation of materials for students' study. The content of the system is databases/SQL: a subject that has been asserted to pose difficulty to students. This project was designed to identify the gaps between what a student wants to learn and what the student has already learned. The two conditions introduced during the pre-skill tests in this paper have shown that changes in certain factors can change the narrative of the difficulty faced by students in SQL programming. Further work is the formalization of agent rules using a formal language in the pre-assessment and recommendation strategy.
REFERENCES
[1] UML, “Introduction to OMG's Unified Modeling Language,” 5 Feb 2020. [Online]. Available:http://www.omg.org/
gettingstarted/what_is_uml.htm.
[2] S. Stoyanov, I. Ganchev, I. Popchev and M. O’Droma, “An Approach for the Development of InfoStation-Based
eLearning Architectures,” Compt. Rend. Acad. Bulg. Sci, vol. 61, pp. 1189-1198, 2008.
[3] RMIT, “Agents Group,” RMIT University, Australia, 2012. [Online]. Available:
https://sites.google.com/site/rmitagents/software/prometheusPDT/tutorials. [Accessed 5 Apr 2020].
[4] R. H. Bordini, J. F. Hübner and M. Wooldridge, Programming Multi-Agent Systems in AgentSpeak Using Jason, John Wiley & Sons, 2007.
[5] C. Rouveirol and V. Ventos, “Towards learning in CARIN-ALN,” in International Conference on Inductive Logic Programming, Berlin, Heidelberg, 2000.
[6] S. Konstantopoulos and A. Charalambidis, “Formulating description logic learning as an inductive logic programming task,” in International Conference on Fuzzy Systems, 2010.
[7] J. Chen and J. Li, “Globally fuzzy leader-follower consensus of mixed-order nonlinear multi-agent systems with partially unknown direction control,” Information Sciences, vol. 523, pp. 184-196, 2020.
[8] W. A. Munassar and A. F. Ali, “Semantic Web Technology and Ontology for E-Learning Environment,” Egyptian
Computer Science Journal, vol. 43, no. 2, pp. 88-100, 2019.
[9] A. S. Aziz, S. A. Taie and R. A. El-Khoribi, “The Relation between the Learner Characteristics and Adaptation
Techniques in the Adaptive E-Learning Systems.,” in International Conference on Innovative Trends in
Communication and Computer Engineering (ITCE), 2020.
[10] A. Trifa, A. Hedhili and W. L. Chaari, “Knowledge tracing with an intelligent agent, in an e-learning platform,”
Education and Information Technologies, vol. 24, no. 1, pp. 711-741, 2019.
[11] U. C. Apoki, S. Ennouamani, H. K. M. Al-Chalabi and G. C. Crisan, “A Model of a Weighted Agent System for
Personalised E-Learning Curriculum,” in International Conference on Modelling and Development of Intelligent
Systems, 2019.
[12] F. El Hajj, A. El Hajj and R. A. Chehade, “Vulnerability detector for a secured E-learning environment,” in Sixth
International Conference on Digital Information Processing and Communications (ICDIPC), Beirut, 2016.
[13] C. Anghel and I. Salomie, “JADE Based solutions for knowledge assessment in eLearning Environments,” EXP-in
search of innovation (Special Issue on JADE), 2003.
[14] N. Stancheva, I. Popchev, A. Stoyanova-Doycheva and S. Stoyanov, “Automatic generation of test questions by software agents using ontologies,” in 8th International Conference on Intelligent Systems (IS), 2016.
[15] V. M. F. Ferreira, J. C. C. Carvalho, R. M. E. M. da Costa and V. M. B. Werneck, “Developing an educational medical game using AgilePASSI multi-agent methodology,” in 28th International Symposium on Computer-Based Medical Systems (CBMS), 2015.
[16] X. Lan, L. Liu and Y. Wang, “ADP-Based Intelligent Decentralized Control for Multi-Agent Systems Moving in Obstacle Environment,” IEEE Access, vol. 7, pp. 59624-59630, 2019.
[17] G. Al-Hudhud, “Designing e-Coordinator for improved teams collaboration in graduation projects,” Computers in
Human Behavior, no. 15, pp. 640-644, 2015.
[18] M. Wooldridge, N. Jennings and D. Kinny, “The Gaia methodology for agent-oriented analysis and design,” Autonomous Agents and Multi-Agent Systems, vol. 3, no. 3, pp. 285-312, 2000.
[19] L. Cernuzzi and F. Zambonelli, “Gaia4E: A Tool Supporting the Design of MAS using Gaia,” in ICEIS (4), 2009.
[20] N. R. Jennings and M. J. Wooldridge, “Applications of intelligent agents,” 1998.
[21] T. I. Zhang, E. Kendall and H. Jiang, “An agent-oriented software engineering methodology with application of information gathering systems for LCC,” in Proceedings of AOIS-2002, 2002.
[22] P. Bresciani, A. Perini, P. Giorgini, F. Giunchiglia and J. Mylopoulos, “Tropos: An agent-oriented software development methodology,” Autonomous Agents and Multi-Agent Systems, vol. 8, no. 3, pp. 203-236, 2004.
[23] M. Morandini, D. C. Nguyen, L. Penserini, A. Perini and A. Susi, “Tropos Modeling, Code Generation and Testing with the Taom4E Tool,” in iStar, 2011.
[24] S. A. DeLoach, “Analysis and Design using MaSE and agentTool,” Air force inst of tech wright-patterson afb oh
school of engineering and management., 2001.
[25] M. Cossentino and C. Potts, “A CASE tool supported methodology for the design of multi-agent systems,” in International Conference on Software Engineering Research and Practice (SERP'02), 2002.
[26] M. Cossentino, “From requirements to code with the PASSI methodology,” Agent-oriented methodologies, 3690, pp.
79-106, 2005.
[27] L. Padgham and M. Winikoff, “Developing intelligent agent systems: A practical guide,” 2004.
[28] S. J. Juneidi and G. A. Vouros, “Agent role locking (ARL): theory for multi agent system with e-learning case study.,”
in IADIS AC, 2005.
[29] L. Padgham, J. Thangarajah and M. Winikoff, “Prometheus Design Tool,” in AAAI, 2008.
[30] Z. Zhang, J. Thangarajah and L. Padgham, “Automated unit testing intelligent agents in PDT,” in Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems: Demo Papers, 2008.
[31] N. Manouselis, H. Drachsler, R. Vuorikari, H. Hummel and R. Koper, “Recommender systems in technology
enhanced learning.,” Recommender systems handbook, pp. 387-415, 2011.
[32] K. E. Ehimwenma, A multi-agent approach to adaptive learning using a structured ontology classification system,
Sheffield UK: Doctoral thesis, Sheffield Hallam University, 2017.
[33] “AUML-2 & Interaction Diagram Tool,” [Online]. Available: http://waitaki.otago.ac.nz/~michael/auml/.
[34] R. H. Bordini, J. F. Hübner and D. M. Tralamazza, “Using Jason to implement a team of gold miners,” in International Workshop on Computational Logic in Multi-Agent Systems, Berlin Heidelberg, 2006.
[35] K. Ehimwenma, M. Beer and P. Crowther, “Pre-assessment and learning recommendation mechanism for a multi-agent system,” in 14th International Conference on Advanced Learning Technologies (ICALT), Sheffield UK, 2014.
[36] K. E. Ehimwenma, P. Crowther and M. Beer, “Formalizing logic based rules for skills classification and recommendation of learning materials,” International Journal of Information Technology and Computer Science (IJITCS), vol. 10, no. 9, pp. 1-12, 2018.
[37] K. E. Ehimwenma, M. Beer and P. Crowther, “Computational Estimate Visualisation and Evaluation of Agent
Classified Rules Learning System,” International Journal of Emerging Technologies in Learning (IJET), vol. 11, no.
1, pp. 38-47, 2016.
[38] A. Dennis, B. H. Wixom and D. Tegarden, Systems Analysis and Design with UML, Wiley, 2009.
[39] J. C. Prior, “Online assessment of SQL query formulation skills,” In Proceedings of the fifth Australasian conference
on Computing education, Australian Computer Society, Inc, vol. 20, pp. 247-256, 2004.
[40] K. E. Ehimwenma, M. Beer and P. Crowther, “Student Modelling and Classification Rules Learning for Educational
Resource Prediction in a Multiagent System,” in 7th Computer Science and Electronic Engineering Conference
(CEEC2015), 2015.
BIOGRAPHIES OF AUTHORS
Kennedy E. Ehimwenma obtained his PhD in multi-agent systems, knowledge representation and rule-based logic in the area of e-learning application development at Sheffield Hallam University, United
Kingdom. His research interests include intelligent agent learning, semantic ontology, rule-based
logic, and decision support systems. Dr. Ehimwenma is a lecturer in the Department of
Computer Science, Wenzhou-Kean University. Email: [email protected],
[email protected]. ORCID: https://orcid.org/0000-0002-7616-9342