CHAPTER-1
INTRODUCTION
1.1 Introduction:
With the widespread adoption of personal mobile devices and ubiquitous access to the
internet, the global number of digital buyers is expected to reach 2.14 billion people
within the next few years, roughly one quarter of the world population. With such a
huge number of buyers and the wide variety of available products, the efficiency of an
online store is measured by its ability to match the right user with the right product,
which is where product recommendation systems become useful. Generally speaking,
product recommendation systems are divided into two main classes: (1) Collaborative
filtering (CF). CF systems recommend new products to a given user based on his/her
previous (rating/viewing/buying) history and the history of his/her neighbours. For
example, as shown in Figure 1 (a), most of the people who previously bought a football
jersey also bought a football, so the system predicts that the user might be interested in
buying a football. (2) Content filtering or content-based filtering (CBF). CBF systems
recommend new items by measuring their similarity with the previously
(rated/viewed/bought) products. For example, as shown in Figure 1 (b), the football is
recommended because it is semantically similar to the football jersey.
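To make the distinction concrete, here is a minimal Python sketch with invented toy data (not the system's actual dataset): collaborative filtering scores items from co-rating patterns alone, while content-based filtering scores them from feature similarity alone.

import numpy as np

# Toy user-item rating matrix (rows: users, columns: items).
# Items: 0 = football jersey, 1 = football, 2 = novel.
R = np.array([
    [5, 4, 0],   # user 0 rated the jersey and the football highly
    [4, 5, 1],   # user 1 likewise
    [5, 0, 0],   # user 2 rated only the jersey -> CF suggests the football
])

def cosine(a, b):
    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Collaborative filtering (item-based): two items are similar when the
# same users rated them highly, regardless of what the items are.
print("CF similarity (jersey, football):",
      round(float(cosine(R[:, 0], R[:, 1])), 3))

# Content-based filtering: two items are similar when their content
# features (e.g. category tags) are close, regardless of who bought them.
features = {                      # [sport, football-related, fiction]
    "jersey":   np.array([1, 1, 0]),
    "football": np.array([1, 1, 0]),
    "novel":    np.array([0, 0, 1]),
}
print("CBF similarity (jersey, football):",
      round(float(cosine(features["jersey"], features["football"])), 3))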
Beyond that, with the popularity of online social networks such as Facebook,
Twitter and Instagram, many users use social media to express their feelings or opinions
about different topics, or in some cases even to explicitly express their desire to buy a
specific product. This makes social media content a rich resource for understanding
users' needs and interests [1]. On the other hand, the emergence of personality
computing [2] has offered new opportunities to improve the efficiency of user modelling
in general, and of recommendation systems in particular, by incorporating the user's
personality traits into the recommendation process. In this work, we propose a product
recommendation system that predicts the user's needs and the associated items, even if
his history does not contain these items or similar ones. This is done by analyzing the
user's topical interests and eventually recommending the items associated with these
interests. The proposed system is personality-aware in two aspects: it incorporates the
user's personality traits to predict his topics of interest, and it matches the user's
personality facets with the associated items. As shown in Figure 2, the proposed system
is based on a hybrid filtering approach (CF and CBF) combined with personality-aware
interest mining.
techniques required to collect that information. Since, in practice, the networks are
usually composed of hundreds of thousands or even millions of nodes, the method used
to perform link prediction in a HIN must be highly efficient. However, computing only
local information could lead to poor predictions, especially in very sparse networks.
Therefore, in our approach, we make use of meta-paths that start from user nodes and
end at the predicted node (product nodes in our case), and we try to fuse the information
from these meta-paths to make the prediction. The contributions of this work are
summarized as follows: 1) We propose a product recommendation system that infers
the user's needs based on her/his topical interests. 2) The proposed system incorporates
the user's Big-Five personality traits to enhance the interest mining process, as well as
to perform personality-aware product filtering. 3) The relationship between users and
products is predicted using graph-based meta-path discovery; therefore the system can
predict implicit as well as explicit interests. The remainder of this paper is organized as
follows. In Section 2 we review the related works, while in Section 3 the design of the
proposed system is presented. In Section 4 we evaluate the proposed system. Finally, in
Section 5 we conclude the work and state some future directions.
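To illustrate contribution 3), the following self-contained sketch enumerates and fuses instances of a User -> Topic -> Product meta-path on a toy graph. The node names and the simple path-count scoring are ours for illustration; the actual system operates on a much larger heterogeneous graph and weights the paths.

from collections import Counter

# A tiny heterogeneous graph stored as adjacency lists (toy data).
user_topic = {"u1": ["football", "cooking"]}       # User -> Topic edges
topic_product = {                                  # Topic -> Product edges
    "football": ["jersey", "ball"],
    "cooking":  ["frying pan"],
}

def user_product_paths(user):
    """Enumerate instances of the meta-path User -> Topic -> Product."""
    for topic in user_topic.get(user, []):
        for product in topic_product.get(topic, []):
            yield (user, topic, product)

# Score each candidate product by the number of meta-path instances
# that reach it; only nodes reachable from the query user are touched,
# which is what keeps this step cheap on large, sparse networks.
scores = Counter(product for _, _, product in user_product_paths("u1"))
print(scores.most_common())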
1.2 Scope:
The scope of our Personality-Aware Product Recommendation System spans the
intersection of user psychology, interest mining, and metapath discovery. We aim to
analyze user preferences, behaviours, and interactions to unearth valuable insights into
individual personalities. By employing advanced algorithms for metapath discovery, we
connect the dots between diverse product categories, ensuring holistic and context-
aware recommendations. The system's scope encompasses not just product suggestions,
but a comprehensive understanding of users' evolving tastes, ultimately shaping a
dynamic and highly personalized shopping journey.
1.3 Purpose:
1.3.1 Personalization:
Gain deeper insights into user preferences, behaviours, and connections between
different product categories through metapath discovery, allowing for a comprehensive
understanding of individual tastes.
In essence, our system aspires to redefine how users discover and interact with
products, making the online shopping experience not just efficient but an enjoyable
journey reflective of their unique tastes and personalities.
1.4 Objective:
1.4.1 Dynamic User Profiling:
Implement a robust system for continuously updating and refining user profiles,
incorporating real-time changes in preferences through effective metapath discovery.
Build a system that adapts to evolving user preferences over time, utilizing
metapath discovery to understand the evolving relationships between different product
categories and user interests.
CHAPTER-2
LITERATURE SURVEY
content. Piao et al. [1] surveyed the literature on user interest mining from social
networks; the authors reviewed the previous works by emphasizing four aspects: (1)
data collection, (2) representation of user interest profiles, (3) construction and
refinement of user interest profiles, and (4) the evaluation measures of the constructed
profiles. Zarrinkalam et al. [12] presented a graph-based link prediction scheme that
operates over a representation model built from three categories of information: users'
explicit and implicit contributions to topics, relationships between users, and the
similarity among topics. Trikha et al. [13] investigated the possibility of predicting
users' implicit interests based only on topic matching using frequent pattern mining,
without considering the semantic similarities of the topics. Wang et al. [14] proposed a
regularization framework based on the relation bipartite graph, which can be
constructed from any kind of relationship; they evaluated the proposed system on social
networks built from retweeting relationships. In [15], the authors discussed the usage of
users' interests to customize the services offered by a cyber-enabled smart home. Faralli
et al. [16] proposed Twixonomy, a method for modelling Twitter users by a hierarchical
representation based on their interests. Twixonomy is built by identifying topical friends
(a friend that represents an interest instead of a social relationship) and associating each
of these users with a page on Wikipedia. Dhelim et al. [17] used social media analysis to
extract the user's topical interests. Kang et al. [18] proposed a user modelling
framework that maps the user's posted content in social media to the associated
category in news media platforms; based on this mapping, they used Wikipedia as a
knowledge base to construct a rich user profile that represents the user's interests. Liu et
al. [19] introduced iExpand, a collaborative filtering recommendation system based on
user interest expansion via personalized ranking. iExpand uses a three-layer
(user-interests-item) representation scheme, which makes the recommendation more
accurate at lower computational cost and helps the understanding of the interactions
among users, items, and user interests. Table I shows a comparison between the
proposed system and some of the related works presented above. Some works, such as
metapath2vec [20] and Shi et al. [21], have used meta-path embeddings to represent the
network information in lower dimensions for easy manipulation of heterogeneous
graphs. However, in highly dynamic graphs such as the user-topic-product graph in our
case, where graph updates happen very frequently, recomputing the meta-path
embedding from scratch is very expensive in terms of computation.
2.2.4 Dynamic User Models:
Create dynamic user models that adapt to changing preferences and evolving
personalities over time. Employ metapath-based insights to continuously refine and
update user profiles, ensuring recommendations remain relevant.
The proposed system offers personalized product recommendations by
integrating user interest mining and metapath discovery. By analyzing individual
preferences, it tailors suggestions, enhancing user satisfaction. Metapath discovery adds
depth, identifying intricate relationships in user behaviour for more accurate predictions.
2.4 FEASIBILITY STUDY:
2.4.1 ECONOMICAL FEASIBILITY:
This study is carried out to check the economic impact that the system will have
on the organization. The amount of funds that the company can pour into the research
and development of the system is limited, so the expenditures must be justified. The
developed system is well within the budget, and this was achieved because most of the
technologies used are freely available; only the customized products had to be
purchased.
2.4.2 TECHNICAL FEASIBILITY:
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, as this would lead to high demands being placed on
the client. The developed system must have modest requirements, as only minimal or
no changes are required for implementing this system.
2.4.3 SOCIAL FEASIBILITY:
This aspect of the study is to check the level of acceptance of the system by the
user. This includes the process of training the user to use the system efficiently. The
user must not feel threatened by the system, but must instead accept it as a necessity.
The level of acceptance by the users depends solely on the methods that are employed
to educate the user about the system and to make him familiar with it. His level of
confidence must be raised so that he is also able to make some constructive criticism,
which is welcomed, as he is the final user of the system.
CHAPTER-3
3.1 Classification:
Logistic regression analysis studies the association between a categorical
dependent variable and a set of independent (explanatory) variables. The name logistic
regression is used when the dependent variable has only two values, such as 0 and 1 or
Yes and No. The name multinomial logistic regression is usually reserved for the case
when the dependent variable has three or more unique values, such as Married, Single,
Divorced, or Widowed. Although the type of data used for the dependent variable is
different from that of multiple regression, the practical use of the procedure is similar.
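As a brief illustration of the two cases (binary and multinomial), the following scikit-learn sketch fits both on synthetic data; the datasets and parameter choices are invented for illustration only.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Binary case: the dependent variable takes only two values (0/1).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
binary_model = LogisticRegression().fit(X, y)
print(binary_model.predict_proba(X[:1]))   # [P(y=0), P(y=1)]

# Multinomial case: four classes, e.g. Married/Single/Divorced/Widowed
# encoded as 0..3; scikit-learn fits a multinomial model automatically
# for multiclass targets with the default lbfgs solver.
Xm, ym = make_classification(n_samples=400, n_features=6, n_informative=4,
                             n_classes=4, random_state=0)
multi_model = LogisticRegression(max_iter=500).fit(Xm, ym)
print(multi_model.predict(Xm[:5]))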
While the Naive Bayes classifier is widely used in the research world, it is not
widespread among practitioners who want to obtain usable results. On the one hand,
researchers find it especially easy to program and implement: its parameters are easy to
estimate, learning is very fast even on very large databases, and its accuracy is
reasonably good in comparison with other approaches. On the other hand, the final
users do not obtain a model that is easy to interpret and deploy, and they do not
understand the interest of such a technique.
Thus, we introduce a new presentation of the results of the learning process. The
classifier is easier to understand, and its deployment is also made easier. In the first part
of this tutorial, we present some theoretical aspects of the Naive Bayes classifier. Then,
we implement the approach on a dataset with Tanagra. We compare the obtained results
(the parameters of the model) to those obtained with other linear approaches such as
logistic regression, linear discriminant analysis and the linear SVM. We note that the
results are highly consistent, which largely explains the good performance of the
method in comparison with others. In the second part, we use various tools on the same
dataset (Weka 3.6.0, R 2.9.2, Knime 2.1.1, Orange 2.0b and RapidMiner 4.6.0), trying
above all to understand the obtained results.
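The consistency between Naive Bayes and the linear approaches can be checked in a few lines. This is a minimal sketch with an invented toy corpus (not the tutorial's dataset), comparing a multinomial Naive Bayes with a logistic regression on the same bag-of-words features:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

# Toy labelled reviews: 1 = recommend, 0 = do not recommend.
texts = ["great product, highly recommend", "terrible, waste of money",
         "works perfectly, love it", "broke after one day, awful"]
labels = [1, 0, 1, 0]

X = CountVectorizer().fit_transform(texts)

# Both classifiers are linear in the bag-of-words features, which is one
# reason their parameters and predictions tend to agree on such data.
for model in (MultinomialNB(), LogisticRegression()):
    model.fit(X, labels)
    print(type(model).__name__, model.predict(X))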
3.2 Modules:
3.2.1 Service Provider:
In this module, the Service Provider has to log in using a valid user name and
password. After a successful login, he can perform operations such as: Train and Test
Data Sets, View Trained and Tested Accuracy in Bar Chart, View Trained and Tested
Accuracy Results, Predict Product Recommendation From Data Set Details, Find
Product Recommendation Prediction Ratio on Data Sets, Download Trained Data Sets,
View Product Recommendation Prediction Ratio Results, and View All Remote Users.
3.2.2 View and Authorize Users:
In this module, the admin can view the list of all registered users. The admin can
view the user's details such as user name, email and address, and authorizes the users.
3.2.3 Remote User:
In this module, any number of users may be present. A user should register
before performing any operations. Once a user registers, their details are stored in the
database. After successful registration, he has to log in using an authorized user name
and password. Once login is successful, the user can perform operations such as: Post
Product Data Sets, Predict Product Recommendation, and View Your Profile.
CHAPTER-4
SOFTWARE DESIGN
4.2 Sequence Diagram:
Fig. 4.2. Sequence Diagram
4.3 Use Case Diagram:
4.4 Data Flow Diagram:
4.5 Flow Charts:
4.5.2 Remote User:
CHAPTER-5
CHAPTER-6
CODING
6.1 Code:
import warnings
warnings.filterwarnings("ignore")

import openpyxl
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('ggplot')

from django.shortcuts import render, redirect

from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report, f1_score)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Project models; adjust the import path to the actual app layout.
from Remote_User.models import (ClientRegister_Model, Product_Details,
                                Recommend_Prediction)


def login(request):
    username = request.POST.get('username')
    password = request.POST.get('password')
    try:
        enter = ClientRegister_Model.objects.get(username=username,
                                                 password=password)
        request.session["userid"] = enter.id
        return redirect('Add_DataSet_Details')
    except ClientRegister_Model.DoesNotExist:
        pass
    return render(request, 'RUser/login.html')


def Add_DataSet_Details(request):
    if request.method == "GET":
        return render(request, 'RUser/Add_DataSet_Details.html', {})

    excel_file = request.FILES["excel_file"]
    # You may put validations here to check the extension or file size.
    wb = openpyxl.load_workbook(excel_file)
    print(wb.sheetnames)            # all sheet names
    worksheet = wb["Sheet1"]        # a particular sheet
    active_sheet = wb.active        # the active sheet
    print(worksheet["A1"].value)    # reading a cell

    # Iterate over the rows, collecting the value of each cell.
    excel_data = []
    for row in worksheet.iter_rows():
        excel_data.append([str(cell.value) for cell in row])

    # Reload the product tables from the uploaded sheet.
    Product_Details.objects.all().delete()
    Recommend_Prediction.objects.all().delete()
    for r in range(1, active_sheet.max_row + 1):
        Product_Details.objects.create(
            idno=active_sheet.cell(r, 1).value,
            ProductId=active_sheet.cell(r, 2).value,
            UserId=active_sheet.cell(r, 3).value,
            ProfileName=active_sheet.cell(r, 4).value,
            HelpfulnessNumerator=active_sheet.cell(r, 5).value,
            HelpfulnessDenominator=active_sheet.cell(r, 6).value,
            Score=active_sheet.cell(r, 7).value,
            Time=active_sheet.cell(r, 8).value,
            Summary=active_sheet.cell(r, 9).value,
            Review=active_sheet.cell(r, 10).value)
    return render(request, 'RUser/Add_DataSet_Details.html', {})


def ViewYourProfile(request):
    userid = request.session['userid']
    obj = ClientRegister_Model.objects.get(id=userid)
    return render(request, 'RUser/ViewYourProfile.html', {'object': obj})


def Search_DataSets(request):
    if request.method == "POST":
        kword = request.POST.get('keyword')

        df = pd.read_csv('Reviews.csv')
        df.rename(columns={'Score': 'Rating', 'Text': 'Review'}, inplace=True)

        def apply_recommend(rating):
            if rating <= 2:
                return 0  # No Recommend
            return 1      # Recommend

        df['recommend'] = df['Rating'].apply(apply_recommend)
        df.drop(['Rating'], axis=1, inplace=True)
        recommend = df['recommend'].value_counts()  # class balance
        df.drop(['Id', 'ProductId', 'UserId', 'ProfileName',
                 'HelpfulnessNumerator', 'HelpfulnessDenominator', 'Time',
                 'Summary'], axis=1, inplace=True)

        # Bag-of-words features from the review text.
        cv = CountVectorizer()
        X = cv.fit_transform(df['Review'])
        y = df['recommend']

        X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                            test_size=0.20)
        print(X_train.shape, X_test.shape, y_train.shape)

        # Majority vote over Naive Bayes and logistic regression.
        models = [('naive_bayes', MultinomialNB()),
                  ('logistic', LogisticRegression())]
        classifier = VotingClassifier(models)
        classifier.fit(X_train, y_train)
        y_pred = classifier.predict(X_test)
        print(accuracy_score(y_test, y_pred))

        # Classify the keyword entered by the user.
        vector1 = cv.transform([kword]).toarray()
        predict_text = classifier.predict(vector1)
        prediction = int(predict_text[0])
        if prediction == 0:
            predict = 'No Recommend'
        else:
            predict = 'Recommend'
6.2 Python:
Often, programmers fall in love with Python because of the increased productivity
it provides. Since there is no compilation step, the edit-test-debug cycle is incredibly
fast. Debugging Python programs is easy: a bug or bad input will never cause a
segmentation fault. Instead, when the interpreter discovers an error, it raises an
exception. When the program doesn't catch the exception, the interpreter prints a stack
trace. A source level debugger allows inspection of local and global variables,
evaluation of arbitrary expressions, setting breakpoints, stepping through the code a line
at a time, and so on. The debugger is written in Python itself, testifying to Python's
introspective power. On the other hand, often the quickest way to debug a program is to
add a few print statements to the source: the fast edit-test-debug cycle makes this simple
approach very effective.
o Python is Interactive: You can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
o Python is Object-Oriented: Python supports an Object-Oriented style or
technique of programming that encapsulates code within objects.
o Python is a Beginner's Language: Python is a great language for beginner-
level programmers and supports the development of a wide range of
applications, from simple text processing to WWW browsers to games.
Django's templating engine enables the separation of logic and presentation in web
applications. This encourages the creation of clean and maintainable code by allowing
developers to define dynamic content within HTML templates. The framework's URL
routing system organizes the application's structure, mapping URLs to specific views
and facilitating a clear and scalable project layout.
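For instance, a urls.py along the following lines would wire the views from Section 6.1 into such a routing table. The route strings are hypothetical; only the view and URL names come from the project code.

from django.urls import path
from . import views

urlpatterns = [
    # Each pattern maps one URL to exactly one view; the names let
    # templates and redirect() resolve URLs without hard-coding paths.
    path('login/', views.login, name='login'),
    path('datasets/add/', views.Add_DataSet_Details,
         name='Add_DataSet_Details'),
    path('profile/', views.ViewYourProfile, name='ViewYourProfile'),
]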
Django follows the "Don't Repeat Yourself" (DRY) principle, providing reusable
components and a modular design. This promotes code efficiency and reduces
redundancy, allowing developers to focus on implementing unique features rather than
repetitive tasks. Additionally, Django's extensive ecosystem of third-party packages,
known as "Django apps," further accelerates development by offering pre-built solutions
for various functionalities.
CHAPTER-7
TESTING
o Unit Testing
o Integration Testing
o User Acceptance Testing
o Output Testing
o Validation testing
7.1 Unit Testing:
Unit testing focuses verification effort on the smallest unit of software design:
the module. Unit testing exercises specific paths in a module's control structure to
ensure complete coverage and maximum error detection. This test focuses on each
module individually, ensuring that it functions properly as a unit; hence the name, unit
testing.
During this testing, each module is tested individually and the module interfaces
are verified for consistency with the design specification. All important processing
paths are tested for the expected results. All error-handling paths are also tested.
7.2 Integration Testing:
Integration testing addresses the issues associated with the dual problems of
verification and program construction. After the software has been integrated, a set of
high-order tests is conducted. The main objective of this testing process is to take unit-
tested modules and build a program structure that has been dictated by the design.
7.2.1 Top-down Integration:
Modules are incorporated into the structure in either a depth-first or breadth-first
manner. In this method, the software is tested from the main module, and individual
stubs are replaced as the test proceeds downwards.
7.2.2 Bottom-up Integration:
This method begins construction and testing with the modules at the lowest
level in the program structure. Since the modules are integrated from the bottom up, the
processing required for modules subordinate to a given level is always available and the
need for stubs is eliminated. The bottom-up integration strategy may be implemented
with the following steps:
o The low-level modules are combined into clusters that perform a specific
software sub-function.
o A driver (i.e. the control program for testing) is written to coordinate test
case input and output.
o The cluster is tested.
o Drivers are removed and clusters are combined, moving upward in the
program structure.
The bottom-up approach tests each module individually; then each module is
integrated with a main module and tested for functionality.
7.3 User Acceptance Testing:
User acceptance of a system is the key factor for the success of any system. The
system under consideration was tested for user acceptance by constantly keeping in
touch with the prospective system users at the time of development and making changes
wherever required. The developed system provides a friendly user interface that can
easily be understood even by a person who is new to the system.
7.4 Output Testing:
After performing the validation testing, the next step is output testing of the
proposed system, since no system can be useful if it does not produce the required
output in the specified format. Asking the users about the format they require tests the
outputs generated or displayed by the system under consideration. Hence the output
format is considered in two ways: one on screen and the other in printed format.
7.5 Validation Testing:
7.5.1 Text Field:
The text field can contain only a number of characters less than or equal to its
size. The text fields are alphanumeric in some tables and alphabetic in others. An
incorrect entry always flashes an error message.
7.5.2 Numeric Field:
The numeric field can contain only numbers from 0 to 9. An entry of any other
character flashes an error message. The individual modules are checked for accuracy
and for what they have to perform. Each module is subjected to a test run along with
sample data. The individually tested modules are then integrated into a single system.
Testing involves executing the program with real data; the existence of any program
defect is inferred from the output. Testing should be planned so that all the
requirements are individually tested.
A successful test is one that brings out the defects for inappropriate data and
produces output revealing the errors in the system.
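A minimal sketch of the two field-validation rules described above, assuming server-side checks in plain Python (the function names are ours, for illustration):

import re

def validate_text_field(value, size):
    """Accept at most `size` characters, alphanumeric only."""
    if len(value) > size or not re.fullmatch(r"[A-Za-z0-9]*", value):
        raise ValueError("invalid text field entry")   # flashed as an error
    return value

def validate_numeric_field(value):
    """Accept only the digits 0 to 9; any other character is an error."""
    if not re.fullmatch(r"[0-9]+", value):
        raise ValueError("numeric field accepts digits 0-9 only")
    return int(value)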
7.5.3 Preparation of Test Data:
The above testing is done using various kinds of test data. Preparation of test
data plays a vital role in system testing. After preparing the test data, the system under
study is tested using it. While testing the system with this test data, errors are again
uncovered and corrected using the above testing steps, and the corrections are noted for
future use.
Live test data are those that are actually extracted from organization files. After a
system is partially constructed, programmers or analysts often ask users to key in a set
of data from their normal activities. Then, the systems person uses this data as a way to
partially test the system. In other instances, programmers or analysts extract a set of
live data from the files and have it entered themselves.
Artificial test data are created solely for test purposes, since they can be
generated to test all combinations of formats and values. In other words, the artificial
data, which can quickly be prepared by a data-generating utility program in the
information systems department, make possible the testing of all logic and control paths
through the program.
The most effective test programs use artificial test data generated by persons
other than those who wrote the programs. Often, an independent team of testers
formulates a testing plan, using the systems specifications.
The developed package has satisfied all the requirements specified in the
software requirement specification and was accepted.
USER TRAINING:
MAINTENANCE:
This covers a wide range of activities, including correcting code and design
errors. To reduce the need for maintenance in the long run, we have more accurately
defined the user's requirements during the process of system development. Depending
on the requirements, this system has been developed to satisfy the needs to the largest
possible extent. With developments in technology, it may be possible to add many
more features based on the requirements in future. The coding and design are simple
and easy to understand, which will make maintenance easier.
TESTING STRATEGY:
A strategy for system testing integrates system test cases and design techniques
into a well-planned series of steps that result in the successful construction of software.
The testing strategy must incorporate test planning, test case design, test execution, and
the resultant data collection and evaluation. A strategy for software testing must
accommodate low-level tests that are necessary to verify that a small source code
segment has been correctly implemented, as well as high-level tests that validate major
system functions against user requirements.
SYSTEM TESTING:
Software, once validated, must be combined with other system elements (e.g.
hardware, people, databases). System testing verifies that all the elements mesh
properly and that overall system function and performance is achieved. It also tests to
find discrepancies between the system and its original objective, current specifications
and system documentation.
UNIT TESTING:
In unit testing, different modules are tested against the specifications produced
during the design of the modules. Unit testing is essential for verification of the code
produced during the coding phase; hence the goal is to test the internal logic of the
modules. Using the detailed design description as a guide, important control paths are
tested to uncover errors within the boundary of the modules. This testing is carried out
during the programming stage itself. In this testing step, each module was found to be
working satisfactorily as regards the expected output from the module.
CHAPTER-8
OUTPUT SCREENS
Fig. 8.2. Service Provider Login Page
Fig. 8.4. Post Product Data Sets
Fig. 8.6. Prediction of Product Recommendation
Fig. 8.8. Trained and Tested Data Sets Results
Fig. 8.10. View Trained and Tested Accuracy Bar Graph
Fig. 8.12. Predict Product Recommendation From Data Set Details
CHAPTER-9
CONCLUSION
1) In this work, the users' personality traits were measured through
questionnaires. Integrating an automatic personality recognition system, which can
detect users' personality traits based on their shared data, into Meta-Interest is one of
our future directions.
2) The proposed system uses the Big-Five model to represent the user's
personality. Extending Meta-Interest to include other personality trait models, such as
the Myers-Briggs Type Indicator, is a future direction.
3) The proposed system could be further improved by integrating a knowledge
graph and inferring topic-item associations using semantic reasoning.
CHAPTER-10
FUTURE SCOPE
CHAPTER-11
REFERENCES
[1] G. Piao and J. G. Breslin, "Inferring user interests in microblogging social networks: a survey," User Modeling and User-Adapted Interaction, vol. 28, no. 3, pp. 277–329, Aug. 2018. [Online]. Available: http://link.springer.com/10.1007/s11257-018-9207-8
[2] A. Vinciarelli and G. Mohammadi, "A Survey of Personality Computing," IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 273–291, Jul. 2014. [Online]. Available: http://ieeexplore.ieee.org/document/6834774/
[3] V. Martínez, F. Berzal, and J.-C. Cubero, "A Survey of Link Prediction in Complex Networks," ACM Computing Surveys, vol. 49, no. 4, pp. 1–33, Feb. 2017.
[4] H.-C. Yang and Z.-R. Huang, "Mining personality traits from social messages for game recommender systems," Knowledge-Based Systems, vol. 165, pp. 157–168, Feb. 2019. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S095070511830577X
[5] W. Wu, L. Chen, and Y. Zhao, "Personalizing recommendation diversity based on user personality," User Modeling and User-Adapted Interaction, vol. 28, no. 3, pp. 237–276, 2018.
[6] H. Ning, S. Dhelim, and N. Aung, "PersoNet: Friend Recommendation System Based on Big-Five Personality Traits and Hybrid Filtering," IEEE Transactions on Computational Social Systems, pp. 1–9, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8675299/
[7] B. Ferwerda, M. Tkalcic, and M. Schedl, "Personality Traits and Music Genres: What Do People Prefer to Listen To?" in Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. ACM, 2017, pp. 285–288.
[8] B. Ferwerda, E. Yang, M. Schedl, and M. Tkalcic, "Personality and taxonomy preferences, and the influence of category choice on the user experience for music streaming services," Multimedia Tools and Applications, pp. 1–34, 2019.
[9] Z. Yusefi Hafshejani, M. Kaedi, and A. Fatemi, "Improving sparsity and new user problems in collaborative filtering by clustering the personality factors," Electronic Commerce Research, vol. 18, no. 4, pp. 813–836, Dec. 2018. [Online]. Available: http://link.springer.com/10.1007/s10660-018-9287-x
[10] S. Dhelim, N. Huansheng, S. Cui, M. Jianhua, R. Huang, and K. I.-K. Wang, "Cyberentity and its consistency in the cyber-physical-social-thinking hyperspace," Computers & Electrical Engineering, vol. 81, p. 106506, Jan. 2020. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0045790618334839
[11] A. Khelloufi, H. Ning, S. Dhelim, T. Qiu, J. Ma, R. Huang, and L. Atzori, "A Social Relationships Based Service Recommendation System For SIoT Devices," IEEE Internet of Things Journal, pp. 1–1, 2020. [Online]. Available: https://ieeexplore.ieee.org/document/9167284/
[12] F. Zarrinkalam, M. Kahani, and E. Bagheri, "Mining user interests over active topics on social networks," Information Processing & Management, vol. 54, no. 2, pp. 339–357, 2018.
[13] A. K. Trikha, F. Zarrinkalam, and E. Bagheri, "Topic-Association Mining for User Interest Detection," in European Conference on Information Retrieval. Springer, 2018, pp. 665–671.
[14] J. Wang, W. X. Zhao, Y. He, and X. Li, "Infer user interests via link structure regularization," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 5, no. 2, p. 23, 2014.
[15] S. Dhelim, H. Ning, M. A. Bouras, and J. Ma, "Cyber-Enabled Human-Centric Smart Home Architecture," in 2018 IEEE SmartWorld. IEEE, Oct. 2018, pp. 1880–1886. [Online]. Available: https://ieeexplore.ieee.org/document/8560294/
[16] S. Faralli, G. Stilo, and P. Velardi, "Automatic acquisition of a taxonomy of microblogs users' interests," Web Semantics: Science, Services and Agents on the World Wide Web, vol. 45, pp. 23–40, 2017.
[17] S. Dhelim, N. Aung, and H. Ning, "Mining user interest based on personality-aware hybrid filtering in social networks," Knowledge-Based Systems, p. 106227, Jul. 2020. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0950705120304354
[18] J. Kang and H. Lee, "Modeling user interest in social media using news media and wikipedia," Information Systems, vol. 65, pp. 52–64, 2017.
[19] Q. Liu, E. Chen, H. Xiong, C. H. Q. Ding, and J. Chen, "Enhancing Collaborative Filtering by User Interest Expansion via Personalized Ranking," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 1, pp. 218–233, Feb. 2012. [Online]. Available: http://ieeexplore.ieee.org/document/6006538/
[20] Y. Dong, N. V. Chawla, and A. Swami, "metapath2vec: Scalable Representation Learning for Heterogeneous Networks," in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, Aug. 2017, pp. 135–144. [Online]. Available: https://dl.acm.org/doi/10.1145/3097983.3098036
[21] C. Shi, B. Hu, W. X. Zhao, and P. S. Yu, "Heterogeneous Information Network Embedding for Recommendation," IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 2, pp. 357–370, Feb. 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8355676/
[22] M. Zhang and Y. Chen, "Link prediction based on graph neural networks," in Advances in Neural Information Processing Systems, 2018, pp. 5165–5175.
[23] W. Song, Z. Xiao, Y. Wang, L. Charlin, M. Zhang, and J. Tang, "Session-Based Social Recommendation via Dynamic Graph Attention Networks," in Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. New York, NY, USA: ACM, Jan. 2019, pp. 555–563. [Online]. Available: https://dl.acm.org/doi/10.1145/3289600.3290989
[24] P. I. Armstrong and S. F. Anthoney, "Personality facets and RIASEC interests: An integrated model," Journal of Vocational Behavior, vol. 75, no. 3, pp. 346–359, Dec. 2009. [Online]. Available: https://linkinghub.elsevier.com/retrieve/pii/S0001879109000657
[25] U. Wolfradt and J. E. Pretz, "Individual differences in creativity: Personality, story writing, and hobbies," European Journal of Personality, 2001.