Dynamic Trust Based Two Layer
Neurocomputing
Article history: Received 27 June 2017; Revised 15 November 2017; Accepted 30 December 2017; Available online 31 January 2018. Communicated by Prof. Yicong Zhou.

Keywords: Recommender system; Collaborative filtering; Neighbors; Availability; Trust; Temporal information; Forgetting factor

Abstract: Collaborative filtering has become one of the most widely used methods for providing recommendations in various online environments. Its recommendation accuracy relies heavily on the selection of appropriate neighbors for the target user/item. However, existing neighbor selection schemes have some inevitable inadequacies, such as neglecting users' capability of providing trustworthy recommendations and ignoring users' preference changes. Such inadequacies may lead to a drop in recommendation accuracy, especially when recommender systems face the data sparseness issue caused by the dramatic increase of users and items. To improve the recommendation accuracy, we propose a novel two-layer neighbor selection scheme that takes users' capability and trustworthiness into account. In particular, the proposed scheme consists of two modules: (1) a capability module that selects the first-layer neighbors based on their capability of providing recommendations, and (2) a trust module that further identifies the second-layer neighbors based on their dynamic trustworthiness on recommendations. The performance of the proposed scheme is validated through experiments on real user datasets. Compared to three existing neighbor selection schemes, the proposed scheme consistently achieves the highest recommendation accuracy across datasets with different degrees of sparseness.

© 2018 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.neucom.2017.12.063
Z. Zhang et al. / Neurocomputing 285 (2018) 94–103 95
appropriate neighbors is the key of the CF algorithm that dramatically influences the recommendation accuracy.

The CF algorithm is facing new challenges in the big data era. One of these challenges is the data sparseness issue. As discussed above, recent online recommender systems often contain a vast number of items, and the number of items is still rapidly increasing. Compared to the total number of items available in the system, the number of items rated by an individual user becomes very limited, leading to high uncertainty in estimating this user's preferences. Although the total number of users involved in recommender systems is also increasing, how to appropriately select capable and reliable (i.e., trustworthy) neighbors to predict the target user's preferences for accurate recommendations remains very challenging, which leads to the so-called data sparseness problem.

Many existing neighbor selection mechanisms, however, have intrinsic inadequacies in handling the data sparseness issue. First, most existing user similarity calculation mechanisms, such as the Pearson correlation coefficient, cosine-based similarity, and adjusted cosine-based similarity [5–7], calculate the similarity between a pair of users as a symmetric value, while ignoring these two users' asymmetric capabilities in recommending items to each other. Second, when comparing two neighbors, their total numbers of commonly rated items with the target user are often ignored, leading to a counterintuitive scenario where a neighbor sharing only one commonly rated item with the target user may yield a higher similarity score than a neighbor who shares 100 commonly rated items with the target user. Third, most current neighbor selection schemes do not consider the consistency of users' preferences across different items, not to mention the dynamic changes of users' preferences.

This paper aims to improve the recommendation accuracy of user-similarity-based CF, which, as discussed above, highly relies on precisely selecting neighbors for the target user, and to resolve the problems caused by sparse data through the optimization of neighbor selection. To achieve this goal, we propose a novel two-layer neighbor selection scheme that selects capable and trustworthy neighbors based on two modules: (1) a capability module that selects the first-layer neighbors by considering the asymmetric capabilities as well as the total number of commonly rated items between a pair of users, and (2) a dynamic trust module that performs the second-layer neighbor selection by considering users' preference consistency across different items. Experiments on real user datasets verify that the proposed scheme consistently achieves high recommendation accuracy across datasets with different degrees of sparseness.

The rest of the paper is organized as follows. Section 2 reviews the related work; Section 3 introduces the proposed scheme; Section 4 describes the experiments and results, followed by the conclusion in Section 5.

2. Related work

2.1. Memory-based and model-based CF algorithms

User-similarity-based CF algorithms can be conducted through either memory-based methods or model-based methods. Memory-based CF methods calculate similarities directly from the user-item matrix. Users with higher similarities to the target user are identified as the neighbors, whose preferences will then be utilized to predict the preferences of the target user [8]. As a result, the recommendation accuracy of such methods highly relies on precise neighbor selection. Although memory-based CF algorithms have been widely adopted by many recommender systems, they still have some significant limitations when handling data sparseness, cold start, and scalability. On the other hand, model-based CF methods model users' behavior patterns based on the user-item matrix. Such patterns will then be used to predict these users' preferences. The most popular model-based CF approaches are based on clustering [9], co-clustering [10,11], matrix factorization [12–15], mixture models [16,17], and transfer learning approaches [18,19]. Compared to memory-based CF methods, model-based CF methods can handle large-scale datasets well and provide faster predictions once the model has been established. However, the modeling process itself is usually time-consuming and often causes information loss, which may lead to a drop in recommendation accuracy.

2.2. User similarity calculations

Due to the wide adoption of memory-based CF, many researchers have been attracted to improving its accuracy, and consequently a number of methods have been proposed. The Pearson correlation coefficient, cosine-based similarity, and adjusted cosine-based similarity are the most popular methods for calculating user similarity, which serves as the foundation for selecting appropriate neighbors for recommendations. Some extended methods have been proposed to further improve the accuracy of user similarity computations. In [20], the authors propose a method to detect and correct unreliable ratings to ensure the availability of the data set. In [21], a significance-based similarity measure is proposed to compute user similarities based on three types of significances. A new similarity function, proposed in [22], achieves higher recommendation accuracy by (1) assigning different weights to each individual item and (2) selecting different sets of neighbors for each specific user. This scheme, however, also significantly increases the computational complexity. A new information entropy-driven user similarity measure model is proposed in [23] to measure the relative difference between ratings, and a Manhattan distance-based model is then developed to address the fat-tail problem by estimating an alternative active-user average rating, which improves the accuracy of the similarity computation. In [24], a multi-level collaborative filtering method is proposed to assign a higher similarity score to a pair of users if their Pearson correlation coefficient or their number of commonly rated items exceeds a certain threshold.

These methods, however, calculate user similarities without considering the size of their commonly rated item sets and their asymmetric capabilities in recommending items to each other, not to mention their recommendation trustworthiness. More importantly, the data sparseness issue makes such inadequacies even worse, which may then cause a significant drop in recommendation accuracy.

2.3. User trustworthiness evaluations

Another trend is to improve recommendation accuracy by introducing trust values among users in collaborative filtering. For instance, trust values can be computed based on transitivity rules for similarities among users [25]. Some researchers propose trust models based on users' social-network trust relationships or the propagation effect of online word-of-mouth social networks [26–29]. In [30], the authors propose an innovative Trust-Semantic Fusion (TSF)-based recommendation approach within the CF framework, which incorporates additional information from the users' social trust network and the items' semantic domain knowledge in order to deal with the data sparsity and the user and item cold-start problems. However, these trust models may not be applicable in many recommender systems due to the lack of information about users' social relationships or behavior. Some researchers propose trust models based on subjective logic or belief theory, such as [31] and [32]. In [33], the model calculates direct and indirect trust among users by considering one-hop or multiple-hop distances among items. The authors in [34] propose
provided ratings a long time ago may already have changed his/her preferences.

As a summary, using the Pearson correlation coefficient calculation as an example, we have demonstrated that conventional user similarity calculations are not adequate for selecting the most appropriate neighbors to predict the target user's preferences. To address these issues, we propose a scheme that contains two key modules: a capability evaluation module and a trust evaluation module. Details of the proposed scheme are discussed in the following sections.

3.2. Capability evaluation module

In this section, we present the proposed capability evaluation module, which addresses the first two inadequacies discussed above by introducing user capability. Specifically, a user v's capability to be used for predicting target user u's preferences (i.e., marked as ava(u, v)) is calculated as:

ava(u, v) = 0,                              if I_v ⊂ I_u or sim(u, v) < 0
ava(u, v) = (|I_uv| / |I_u|) × sim(u, v),   otherwise   (2)

where |I_u| denotes the total number of items rated by target user u; |I_uv| denotes the number of items rated by user u and user v in common; and sim(u, v) denotes the Pearson correlation coefficient between user u and user v.

The first inadequacy is addressed by considering the number of commonly rated items in the user capability calculation. Specifically, we calculate ava(u, v) by multiplying sim(u, v) by |I_uv|/|I_u|. As a result, the more items that user v rates in common with the target user, the higher the capability he/she will have to be used for providing recommendations for the target user u. On the other hand, users rating very few common items with the target user u will not be able to obtain a very high capability score and hence have a lower possibility of being selected as neighbors of the target user.

Furthermore, by setting zero capability scores for users who cannot provide influential information for recommending items to the target user, we can avoid selecting these users as neighbors, which addresses the second inadequacy. Specifically, in Eq. (2), when I_v ⊂ I_u, which indicates that all the items rated by user v are already rated by the target user u, ava(u, v) is set to 0. In addition, when sim(u, v) < 0, indicating an extremely low similarity between users u and v, ava(u, v) is also set to 0. Please note that although the proposed scheme ignores users with negative Pearson correlation coefficients, it is compatible with the case where the absolute value of the Pearson similarity is used.

By applying the proposed capability evaluation module to the sample user-item rating matrix in Table 1, we obtain the capability scores ava(u1, u3) = 0.5, ava(u1, u2) = 0.333, and ava(u3, u1) = 0, indicating that (1) compared to user u2, user u3 is more capable of recommending items to user u1; and (2) although user u3 could be a capable neighbor for user u1, user u1 is not capable of providing recommendations in the opposite direction. These observations further validate our previous arguments.

As a summary, we construct the capability evaluation module by considering the number of commonly rated items and setting zero capability scores for users who cannot provide helpful information for recommending items to the target user. In this way, we can select neighbors with higher capability, which addresses the first two inadequacies.

3.3. Trust evaluation module

Although the proposed capability evaluation module can address the first two inadequacies, it cannot address the third issue: identifying "trustworthy" users who share consistent preferences with the target user across different items. Therefore, in this section, we propose a novel trust evaluation module, which evaluates users' trustworthiness in providing recommendations to the target user. This trust evaluation module serves as an important basis for our neighbor selection strategy.

3.3.1. The revised Beta trust model

In the first step, we aim to use trust models to evaluate whether a user shares consistent preferences with the target user across different items. There are diverse models to evaluate trust, such as the Beta trust model [40], Bayesian networks [41], and belief theory [42]. In this work, we adopt and revise the Beta trust model [40] to conduct the basic trust evaluation due to its low computational complexity.

In particular, we consider that the rating offset between user u and user v follows a Beta distribution. In the classic Beta trust model, a trustee's behavior is evaluated by a trustor as a binary value (i.e., either "good" or "bad"). In our proposed scheme, we evaluate the trustworthiness of a user v from the target user u's perspective. For each specific item that users u and v rate in common, the rating offset between these two users can serve as one observation of their preference consistency. Depending on whether v's preference on an item is close to or far away from target user u's preference, we consider user v to have displayed either a "good behavior" (i.e., consistent preference) or a "bad behavior" (i.e., inconsistent preference). As a result, for a given user, his/her preference consistency with the target user across different items can be modeled as a random variable that follows a Beta distribution.

More importantly, in the proposed trust module, we further modify the Beta trust model to better fit the multivariate rating values available in recommender systems. Specifically, we consider that a user v's behavior contains both a good portion and a bad portion (i.e., marked as g_i(u, v) and b_i(u, v)), each of which can be quantified as a continuous value in the range (0, 1). Specifically, g_i(u, v) and b_i(u, v) are computed as follows:

g_i(u, v) = 1 − |R_vi − R_ui| / (R_max − R_min)   (3)

b_i(u, v) = |R_vi − R_ui| / (R_max − R_min)   (4)

where R_max and R_min represent the maximum and minimum rating values in a recommender system, respectively. For instance, on a 5-star rating scale, R_max is 5 and R_min is 1. According to (3) and (4), for each item i rated by user u and user v in common, user v's "good" and "bad" portions of rating behavior will always add up to 1. In addition, user v is believed to conduct a higher portion of "good behavior" if his/her rating value is closer to the rating value of the target user u.

Furthermore, the total number of good/bad behaviors conducted by user v is calculated as the sum of his/her good/bad behavior values over all the commonly rated items:

G(u, v) = Σ_{i ∈ I_uv} g_i(u, v)   (5)

B(u, v) = Σ_{i ∈ I_uv} b_i(u, v)   (6)

At the end, the trustworthiness of user v from target user u's perspective (i.e., marked as tru(u, v)) is calculated as in (7). We can observe that the more good behaviors a user v conducts, the higher the trust value he/she will obtain.

tru(u, v) = (G(u, v) + 1) / (G(u, v) + B(u, v) + 2)   (7)
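As a rough illustration, Eqs. (2)–(7) can be sketched in Python. This is a minimal sketch, not the authors' implementation: the dictionary-based rating representation and the helper names (`pearson`, `capability`, `trust`) are our own assumptions.

```python
from math import sqrt


def pearson(ratings, u, v, items):
    """Pearson correlation between users u and v over the given items."""
    if not items:
        return 0.0
    ru = [ratings[u][i] for i in items]
    rv = [ratings[v][i] for i in items]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = sqrt(sum((a - mu) ** 2 for a in ru)) * sqrt(sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0


def capability(ratings, u, v):
    """Capability score ava(u, v) of user v for target user u, per Eq. (2).

    `ratings` maps each user id to a dict {item: rating}.
    """
    I_u, I_v = set(ratings[u]), set(ratings[v])
    I_uv = I_u & I_v
    # Zero capability when v rated nothing beyond u's items,
    # or when the Pearson similarity is negative.
    if I_v <= I_u or pearson(ratings, u, v, I_uv) < 0:
        return 0.0
    return len(I_uv) / len(I_u) * pearson(ratings, u, v, I_uv)


def trust(ratings, u, v, r_max=5, r_min=1):
    """Revised Beta trust tru(u, v), per Eqs. (3)-(7)."""
    I_uv = set(ratings[u]) & set(ratings[v])
    G = B = 0.0
    for i in I_uv:
        offset = abs(ratings[v][i] - ratings[u][i]) / (r_max - r_min)
        G += 1 - offset   # "good" portion g_i(u, v), Eq. (3)
        B += offset       # "bad" portion b_i(u, v), Eq. (4)
    return (G + 1) / (G + B + 2)  # Eq. (7)
```

Note that `capability` uses a subset-or-equal test, so a user whose rated items are exactly the target user's items also scores zero, matching the intent that such a user offers no new items to recommend.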
3.3.2. Time factor

In the second step, we further evaluate whether a user shares consistent preferences with the target user by introducing a time factor, an essential factor that is seldom studied by existing work on selecting trustworthy neighbors. The time factor is critical because users' interests/preferences can change. This naturally leads to two consequences. First, the ratings provided by a user a long time ago should take lower weights in influencing this user's trustworthiness, as they may no longer accurately reflect this user's up-to-date preferences. Second, comparing two users v and w, who share consistent preferences with the target user a long time ago and more recently, respectively, the latter should be more trustworthy, as he/she can probably provide more accurate information for inferring the target user's current preferences.

To let users' trust values accurately reflect their preference changes, we propose to gradually forget users' previous preferences by introducing time decay factors. Specifically, we need to carefully determine when to forget and how to forget. These two questions will be answered in detail in the following two subsections.

v or user w. Therefore, the timing of R_ui is ignored in the above computation.

Furthermore, as user v may rate multiple common items with the target user u in window θ, the total number of good/bad behaviors in window θ is calculated as the sum of his/her good/bad behavior values over all the commonly rated items:

g_θ(u, v) = Σ_{i ∈ I_uv^θ} g_i^θ(u, v)   (11)

b_θ(u, v) = Σ_{i ∈ I_uv^θ} b_i^θ(u, v)   (12)

Next, we discuss the overall amounts of "good behaviors" and "bad behaviors" with the forgetting factor. To let more recent behaviors take higher weights in the calculation of users' trustworthiness, we use time decay factors to gradually forget a user's previous behaviors. Specifically, for time window θ, the overall amounts of "good behaviors" and "bad behaviors" (i.e., G_θ(u, v) and B_θ(u, v)) can be calculated as:
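The exact decay equations for G_θ(u, v) and B_θ(u, v) fall outside this excerpt. As a hedged sketch only, the following assumes an exponential per-window decay governed by forgetting factors f_g and f_b (the scheme's parameters fg and fb) and a window length t_w; the function name and data layout are our own assumptions, not the authors' formulation.

```python
def windowed_trust(observations, t_end, t_w, f_g, f_b):
    """Sketch: good/bad tallies with per-window forgetting, then Eq. (7).

    `observations` is a list of (timestamp, g_i, b_i) tuples for the
    commonly rated items of users u and v, where g_i and b_i follow
    Eqs. (3)-(4). Windows of length `t_w` are counted backwards from
    `t_end` (the training ending time); behaviors in older windows are
    discounted by the assumed exponential factors f_g, f_b in (0, 1).
    """
    G = B = 0.0
    for t, g, b in observations:
        theta = int((t_end - t) // t_w)   # window index; 0 = most recent
        G += g * (f_g ** theta)           # decayed "good" behavior
        B += b * (f_b ** theta)           # decayed "bad" behavior
    return (G + 1) / (G + B + 2)          # Eq. (7) on the decayed tallies
```

Under this assumed decay, a perfectly consistent rating from the most recent window contributes its full weight, while one from k windows earlier contributes only f_g^k, which captures the section's intent of weighting recent behavior more heavily.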
3.4. Neighbor selection strategy

In this section, we propose a two-layer neighbor selection strategy by integrating the capability evaluation module and the trust evaluation module.

In conventional collaborative filtering algorithms, there are two popular neighbor selection methods. One is to select a fixed number (i.e., K) of neighbors with the highest similarity scores; the other is to select the neighbors with similarity scores higher than a certain threshold. However, both of these methods have their own limitations. The former can easily include users with small similarity scores among the top K neighbors. The latter has to deal with threshold selection for different environment settings, which is not trivial. An extension of the latter is the scheme proposed in [19], which sets the threshold as the average similarity score of all neighbors. Nevertheless, the range of threshold values determined by this extension is hard to adjust dynamically. In addition, it also introduces more computational costs.

To address these limitations, we propose a two-layer neighbor selection strategy where the first-layer neighbors (i.e., marked as N′(u)) are selected as the top K′ users with the highest availability scores. Assume K is the number of neighbors that we plan to select, and ε ∈ {ε ∈ R | ε ≥ 1}. The number of first-layer neighbors (i.e., K′) is obtained as the maximum integer value not exceeding ε × K, calculated as:

K′ = ⌊ε × K⌋   (16)

Furthermore, the second-layer neighbors, marked as N(u), are further selected as the top K users with the highest trust values. We display the proposed strategy in Algorithm 1. By using the proposed two-layer neighbor selection strategy, we aim to select appropriate neighbors with high capability and trustworthiness for predicting the target user's preferences.

Algorithm 1.
Input: the user-item matrix; the target user u; maximum number of neighbors K; parameter ε.
Output: the set of user u's neighbors N(u)
Calculate K′ based on Eq. (16)
for each user v in the user-item matrix except user u do
    Calculate the commonly rated items between user u and user v, and store them in set I_uv
    Calculate ava(u, v) based on Eqs. (1) and (2)
end for
if the number of users with non-zero ava(u, v) > K′ then
    Insert the K′ users with the highest ava(u, v) into N′(u)
else
    Insert all users with non-zero ava(u, v) into N′(u)
end if
for each user v in N′(u) do
    Calculate tru(u, v) based on Eqs. (3)–(15)
end for
if the size of N′(u) > K then
    Insert the K users with the highest tru(u, v) into N(u)
else
    N(u) = N′(u)
end if
return N(u)

4. Experimental results

In this section, in order to validate the effectiveness of the proposed scheme, we conduct several experiments based on a real user dataset. The experiments are performed on a computer with an Intel i5 2.4 GHz CPU, 12 GB of RAM, and the Windows 8.1 operating system. All the comparison methods are implemented in the Python programming language.

4.1. Experiment data set

We conduct experiments using the MovieLens-100k dataset collected by the GroupLens Research Project at the University of Minnesota, which is one of the most popular datasets used by researchers in the field of collaborative filtering. The MovieLens-100k dataset consists of 100,000 ratings from 943 users on 1682 movies. The rating values are integers ranging from 1 to 5. The minimum number of items rated by each user is 20.

In addition, the training and testing data are extracted based on temporal information to examine the effectiveness of the time factor proposed in our scheme. Specifically, we first order all the user ratings as a sequence according to the time when they were provided. Then the first 70% of the ratings in the sequence are used as training data and the remaining 30% are used as testing data. In this way, all the ratings in the training set are provided before any rating in the testing set, which matches real-world recommendation scenarios. Furthermore, the time of the last rating in the training set represents the training ending time, which will be used later for forgetting purposes.

Such a way of organizing training and testing data also brings one issue: some users, who only provide ratings after the training ending time, may not have any ratings in the training set. As we have no knowledge about these users at the training stage, we simply exclude them from the testing user group. On the other hand, there are also some users who have sufficient rating data in the training set but never provide any ratings after the training ending time. These users are also excluded from the testing user group, as there is no ground truth for us to validate the performance of the recommendation scheme on them. However, they may still be used as trustworthy neighbors by the recommendation algorithm to predict other users' future ratings. Please note that there are 943 users in the original data set; after our preprocessing, there are 687 users in the training data and 126 users in the testing data, which still provides sufficient experiment data.

After the above preprocessing, the extracted training and testing data are used in the later experiments for performance validation. The detailed experiment results and analysis are presented in the following sections.

4.2. Performance evaluation metric

The predicted rating of item i for the target user u is the weighted average of the ratings from user u's selected neighbors. It is calculated as:

P_ui = R̄_u + [ Σ_{v ∈ N(u)} (R_vi − R̄_v) × ava(u, v) × tru(u, v) ] / [ Σ_{v ∈ N(u)} |ava(u, v) × tru(u, v)| ]   (17)

To measure the effectiveness of the proposed method, we use the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE), which are widely accepted by the research community. MAE and RMSE compute the deviation between the predicted ratings and the actual ratings in all experiments. Specifically, MAE and RMSE are calculated as:

MAE = (1/N) Σ_{i=1}^{N} |r_i − p_i|   (18)

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (r_i − p_i)² )   (19)
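Algorithm 1 together with the prediction rule of Eq. (17) can be sketched as follows. This is a minimal sketch: `ava` and `tru` stand for the capability and trust functions of Section 3 and are passed in as parameters, and the helper names and data layout are our own assumptions.

```python
from math import floor


def select_neighbors(users, u, K, eps, ava, tru):
    """Two-layer neighbor selection, following Algorithm 1.

    `users` is the set of all user ids; `ava(u, v)` and `tru(u, v)` are
    the capability and trust scores of Section 3; eps >= 1.
    """
    K1 = floor(eps * K)                     # Eq. (16): K' = floor(eps * K)
    # Only users with non-zero capability are candidates.
    candidates = [v for v in users if v != u and ava(u, v) > 0]
    # First layer N'(u): top-K' candidates by capability score.
    layer1 = sorted(candidates, key=lambda v: ava(u, v), reverse=True)[:K1]
    # Second layer N(u): top-K of those by trust value.
    if len(layer1) > K:
        return sorted(layer1, key=lambda v: tru(u, v), reverse=True)[:K]
    return layer1


def predict(ratings, means, u, i, neighbors, ava, tru):
    """Predicted rating P_ui per Eq. (17); `means` holds each user's mean rating."""
    num = sum((ratings[v][i] - means[v]) * ava(u, v) * tru(u, v)
              for v in neighbors if i in ratings[v])
    den = sum(abs(ava(u, v) * tru(u, v))
              for v in neighbors if i in ratings[v])
    return means[u] + (num / den if den else 0.0)
```

Note how the two sorts mirror the two layers: capability first widens the pool to K′ ≥ K candidates, and trust then narrows it back to the final K neighbors.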
schemes; please refer to Section 2 for a more detailed discussion of these comparison schemes.

Fig. 4 shows the performance of the different schemes with different K values, where the x-axis represents the K value and the y-axes represent the MAE and RMSE, respectively. In Fig. 4, we observe that the proposed scheme achieves the best MAE and RMSE for all K values. Specifically, taking the K value of 40 as an example, the proposed scheme outperforms CF, DNC-CF and ML-CF by decreasing the MAE by 16.24%, 11.82% and 2.82%, and the RMSE by 16.39%, 10.95% and 2.80%, respectively.

At the end, we would like to compare the performance of the different schemes in terms of handling the data sparseness issue. Specifically, we set the K value to 40 and show the performance of the different schemes on datasets with various degrees of sparseness in Fig. 5. For example, a data sparseness degree of 0.3 represents the scenario in which only 30% of the original training data is adopted for parameter training. As shown in Fig. 5, we compare the proposed scheme to the three other schemes when 30%, 40%, 50%, 60%, 70%, 80% and 90% of the original training data is used for training, respectively. We observe that the proposed scheme stably outperforms all other schemes when the dataset is sparse to different degrees, demonstrating its high and stable capability in handling the data sparseness issue. In addition, the ML-CF scheme achieves very good performance when more than 50% of the original training data set is used for training. However, when the data sparseness issue becomes more significant (e.g., 0.3 or 0.4), it performs the worst, which indicates a poor capability to handle the data sparseness issue.

As a summary, with appropriate values of ε, t_w, f_g and f_b, the proposed scheme achieves the best recommendation accuracy regardless of the K value and the data sparseness.

5. Conclusion

In this work, a novel two-layer neighbor selection scheme is proposed for collaborative filtering recommender systems, aiming at improving the recommendation accuracy by selecting the most capable and trustworthy neighbors. Specifically, the proposed scheme contains two modules: the capability evaluation module and the trust evaluation module. The capability module selects the first-layer neighbors by (1) considering the number of commonly rated items between potential neighbors and the target user, and (2) setting zero capability scores for potential neighbors who cannot provide helpful information for recommending items to the target user. In addition, the trust module further identifies the second-layer neighbors who share consistent preferences with the target user across different items. To evaluate the performance of the proposed scheme, experiments are conducted on the MovieLens-100K dataset. The experimental results show that the proposed scheme outperforms the comparison schemes by consistently achieving higher recommendation accuracy across datasets with different degrees of data sparseness.

References

[1] Z.Y. Zhang, Y.H. Liu, Z.G. Jin, R. Zhang, Selecting influential and trustworthy neighbors for collaborative filtering recommender systems, in: Proceedings of the 7th IEEE Annual Computing and Communication Workshop and Conference, IEEE CCWC, Las Vegas, USA, 2017, pp. 1–7.
[2] M. Zhang, X. Guo, G. Chen, Prediction uncertainty in collaborative filtering: enhancing personalized online product ranking, Decis. Support Syst. 83 (2016) 10–21.
[3] F. Xie, Z. Chen, J. Shang, W. Huang, J. Li, Item similarity learning methods for collaborative filtering recommender systems, in: Proceedings of the IEEE International Conference on Advanced Information Networking and Applications, 2015, pp. 896–903.
[4] J. Lu, D. Wu, M. Mao, W. Wang, G. Zhang, Recommender system application developments, Decis. Support Syst. 74 (C) (2015) 12–32.
[5] H. Yu, J.H. Li, Algorithm to solve the cold-start problem in new item recommendations, Chin. J. Software 26 (6) (2015) 1395–1408.
[6] D. Li, C. Chen, Q. Lv, L. Shang, Y. Zhao, T. Lu, N. Gu, An algorithm for efficient privacy-preserving item-based collaborative filtering, Future Gener. Comput. Syst. 55 (2016) 311–320.
[7] B.K. Patra, R. Launonen, V. Ollikainen, S. Nandi, A new similarity measure using Bhattacharyya coefficient for collaborative filtering in sparse data, Knowl. Based Syst. 82 (2015) 163–177.
[8] S. Jamalzehi, M.B. Menhaj, A new similarity measure based on item proximity and closeness for collaborative filtering recommendation, in: Proceedings of the International Conference on Control, Instrumentation, and Automation, 2015, pp. 445–450.
[9] A. Salah, N. Rogovschi, M. Nadif, A dynamic collaborative filtering system via a weighted clustering approach, Neurocomputing 175 (2016) 206–215.
[10] T. George, S. Merugu, A scalable collaborative filtering framework based on co-clustering, 2005, pp. 625–628.
[11] M. Khoshneshin, W.N. Street, Incremental collaborative filtering via evolutionary co-clustering, in: Proceedings of the Conference on Recommender Systems, RecSys, ACM, Barcelona, Spain, 2010, pp. 325–328.
[12] Y. Koren, R. Bell, C. Volinsky, Matrix factorization techniques for recommender systems, Computer 42 (8) (2009) 30–37.
[13] Y. Xu, R. Hao, W. Yin, Z. Su, Parallel matrix factorization for low-rank tensor completion, Inverse Probl. Imaging 9 (2) (2017) 601–624.
[14] R. Mazumder, T. Hastie, R. Tibshirani, Spectral regularization algorithms for learning large incomplete matrices, J. Mach. Learn. Res. 11 (2010) 2287–2322.
[15] T. Hastie, R. Mazumder, J.D. Lee, R. Zadeh, Matrix completion and low-rank SVD via fast alternating least squares, J. Mach. Learn. Res. 16 (1) (2015) 3367–3402.
[16] B.M. Marlin, R.S. Zemel, S. Roweis, M. Slaney, Collaborative filtering and the missing at random assumption, in: Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2007, pp. 267–275.
[17] Y.D. Kim, S. Choi, Bayesian binomial mixture model for collaborative prediction with non-random missing data, in: Proceedings of the Conference on Recommender Systems, ACM, 2014, pp. 201–208.
[18] B. Li, Q. Yang, X. Xue, Can movies and books collaborate? Cross-domain collaborative filtering for sparsity reduction, in: Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI, Pasadena, California, USA, 2009, pp. 2052–2057.
[19] J. Wang, L. Ke, Feature subspace transfer for collaborative filtering, Neurocomputing 136 (1) (2014) 1–6.
[20] P. Moradi, S. Ahmadian, A reliability-based recommendation method to improve trust-aware recommender systems, Pergamon Press, Inc., 2015.
[21] A. Hernando, F. Ortega, Collaborative filtering based on significances, Inf. Sci. 185 (1) (2012) 1–17.
[22] K. Choi, Y. Suh, A new similarity function for selecting neighbors for each target item in collaborative filtering, Knowl. Based Syst. 37 (1) (2013) 146–153.
[23] W. Wang, G. Zhang, J. Lu, Collaborative filtering with entropy-driven user similarity in recommender systems, Int. J. Intell. Syst. 30 (8) (2015) 854–870.
[24] N. Polatidis, C.K. Georgiadis, A multi-level collaborative filtering method that improves recommendations, Expert Syst. Appl. 48 (2016) 100–110.
[25] M. Papagelis, D. Plexousakis, T. Kutsuras, Alleviating the sparsity problem of collaborative filtering using trust inferences, in: Proceedings of the Third International Conference on Trust Management, iTrust, Paris, France, 2005, pp. 224–239.
[26] S. Deng, L. Huang, G. Xu, X. Wu, Z. Wu, On deep learning for trust-aware recommendations in social networks, IEEE Trans. Neural Netw. Learn. Syst. 28 (5) (2016) 1164–1177.
[27] K.T. Senthilkumar, R. Ponnusamy, Diffusing multi-aspects of local and global social trust for personalizing trust enhanced recommender system, in: Proceedings of the International Conference on Advanced Computing and Communication Systems, 2016, pp. 1–8.
[28] D.H. Alahmadi, X.J. Zeng, Twitter-based recommender system to address cold-start: a genetic algorithm based trust modelling and probabilistic sentiment analysis, 2015, pp. 1045–1052.
[29] R.S. Liu, T.C. Yang, Improving recommendation accuracy by considering electronic word-of-mouth and the effects of its propagation using collective matrix

Ziyang Zhang received the B.S. degree from Tianjin University, Tianjin, China, in 2015. He is currently an M.S. student at the School of Electrical and Information Engineering, Tianjin University. His research interests include developing trust models, recommendation algorithms and social media.

Yuhong Liu received the B.S. and M.S. degrees from Beijing University of Posts and Telecommunications, Beijing, China, in 2004 and 2007, respectively, and the Ph.D. degree from the University of Rhode Island in 2012. She is an assistant professor at the Department of Computer Engineering, Santa Clara University. She is the recipient of the 2013 University of Rhode Island Graduate School Excellence in Doctoral Research Award. With expertise in trustworthy computing and cyber security, her research interests include developing trust models and applying them to emerging applications, such as online social media, cyber-physical systems and cloud computing. She is the recipient of the best paper awards at the IEEE Interna-
factorization, in: Proceedings of the IEEE Datacom, 2016. tional Conference on Social Computing 2010 (acceptance rate = 13%) and The 9th
[30] Q. Shambour, J. Lu, A trust-semantic fusion-based recommendation approach International Conference on Ubi-Media Computing (UMEDIA 2016).
for e-business applications, Decis. Support Syst. 54 (1) (2012) 768–780.
[31] G. Pitsilis, L. Marshall, A model of trust derivation from evidence for use in
recommendation systems, Proceedings of the PREP, Presented Poster(2004). Zhigang Jin received his Ph.D. degree of EE from Tian-
[32] G. Pitsilis, L.F. Marshall, Modeling Trust for Recommender Systems using Sim- jin University, Tianjin, China, in 1999. He was a visiting
ilarity Metrics, Springer, US, 2008. professor in Ottawa University, Ottawa, Canada, in 2002.
[33] X.M. Wang, X.M. Zhang, W.U. Jiang-Xing, Collaborative filtering recommen- He is currently a professor in Tianjin University, Tianjin,
dation algorithm based on one-jump trust model, J. Commun. 36 (6) (2015) China. His research interests focus on underwater sensor
197–204. networks, the management and security of the computer
[34] D. Jia, F. Zhang, A collaborative filtering recommendation algorithm based networks, and social networks.
on double neighbor choosing strategy, J. Comput. Res. Dev. 50 (5) (2013)
1076–1084.
[35] N. Lathia, S. Hailes, L. Capra, X. Amatriain, Temporal diversity in recommender
systems, in: Proceedings of the International ACM SIGIR Conference on Re-
search and Development in Information Retrieval, 2010, pp. 210–217.
[36] G. Zhao, M.L. Lee, W. Hsu, W. Chen, Increasing temporal diversity with
purchase intervals, in: Proceedings of the 35th International ACM SIGIR
Conference on Research and Development in Information Retrieval, 2012, Rui Zhang received his M.S. degree in Electronic and In-
pp. 165–174. formation Engineering from the college of Electrical and
[37] L.I. Jing-Jiao, L.M. Sun, W. Jiao, SRL recommendation system model im- Information Engineering of Tianjin University. He is cur-
proving session recommendation diversity, J. Northeast. Univ. 34 (5) (2013) rently a Ph.D. student at the School of Electrical and Infor-
650–653+662. mation Engineering, Tianjin University. He was a lecturer
[38] Y. Xiao, A.I. Pengqiang, C.H. Hsu, H. Wang, X. Jiao, Time-ordered collaborative in the Department of Software and Communication at
filtering for news recommendation, China Commun. 12 (12) (2015) 53–62. Tianjin Sino-German University of Applied Sciences, Tian-
[39] Y. Ding, X. Li, Time weight collaborative filtering, in: Proceedings of the ACM jin. His research interests include computer vision, deep
International Conference on Information and Knowledge Management, CIKM, learning and social media as well as computational ge-
Bremen, Germany, 2005, pp. 485–492. ometry and artificial intelligence.
[40] A. Jsang, R. Ismail, The Beta Reputation System (2002).
[41] A. Whitby, A. Jsang, J. Indulska, Filtering out unfair ratings in Bayesian rep-
utation systems, in: Proceedings of the International Joint Conference on Au-
tonomous Agenst Systems, 2005, pp. 106–117.
[42] B. Yu, M.P. Singh, An evidential model of distributed reputation management,
in: Proceedings of the International Joint Conference on Autonomous Agents
and Multiagent Systems, 2002, pp. 294–301.