Congratulations to all top performers! Ranking list of top 10 participants.

Rank  Team                 Task    Organization                                          Spearman's Rho  MSE      MAE
1     TaiwanNo.1           SMP-T1  National Tsing Hua University                         0.8268          2.0528   1.0676
2     heihei               SMP-T1  East China Normal University                          0.8093          2.1767   1.1059
3     NLPR_MMC_Passerby    SMP-T1  Chinese Academy of Sciences                           0.7927          2.4973   1.1783
4     BUPTMM               SMP-T1  Beijing University of Posts and Telecommunications    0.7723          2.4482   1.1733
5     bluesky              SMP-T1  Guangdong University of Technology                    0.7406          2.7293   1.2475
6     WePREdictIt          SMP-T1  National Taiwan University of Science and Technology  0.5631          4.2022   1.6278
7     FirstBlood           SMP-T1  National Cheng Kung University                        0.6456          6.3815   1.6761
8     ride_snail_to_race   SMP-T1  Guangdong University of Technology                    -0.0405         9.2715   2.4274
9     CERTH-ITI-MKLAB      SMP-T1  Centre for Research and Technology Hellas (CERTH)     0.3554          19.3593  3.8178
10    yoyoyo               SMP-T1  National Tsing Hua University                         (gave up)
No team ranking is displayed for Task 2, since none of the received Task 2 submissions achieved effective performance (0 hit rate).


The competition ranking is based on the results for T1 and T2 separately. Specifically, for each task, a ranked list of teams is produced by sorting their scores on each evaluation metric.


Task Metrics
SMP-T1 Spearman's Rho, MAE, and MSE
SMP-T2 HR, MAE, and Spearman's Rho

The evaluation provided here can be used to obtain performance on the SMP testing set. It covers several common metrics: Spearman Rank Correlation (Spearman's Rho), Mean Absolute Error (MAE), Mean Squared Error (MSE), and Hit Ratio (HR).
Through quantitative evaluation, we measure the systems submitted to this challenge on a testing set. Our evaluation protocol is applied to the following criteria:
Prediction Correlation: how well the predicted popularity correlates with the actual popularity
Prediction Error: the error between the predicted and actual scores
Ranking Relevance: the agreement between the actual top-k items and the predicted ranking
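As an illustration, the metrics above could be computed as follows. This is a minimal pure-Python sketch, not the official evaluation script; details such as tie handling in the ranks and how the top-k sets are formed for HR are assumptions here.

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(y_true, y_pred):
    """Spearman's Rho: Pearson correlation of the rank vectors."""
    rt, rp = ranks(y_true), ranks(y_pred)
    n = len(rt)
    mt, mp = sum(rt) / n, sum(rp) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(rt, rp))
    st = sum((a - mt) ** 2 for a in rt) ** 0.5
    sp = sum((b - mp) ** 2 for b in rp) ** 0.5
    return cov / (st * sp)

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean Squared Error."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def hit_ratio(y_true, y_pred, k):
    """Overlap between the actual and predicted top-k item sets."""
    top_true = set(sorted(range(len(y_true)), key=lambda i: -y_true[i])[:k])
    top_pred = set(sorted(range(len(y_pred)), key=lambda i: -y_pred[i])[:k])
    return len(top_true & top_pred) / k
```

A perfect submission gives Rho = 1.0 and MAE = MSE = 0; a reversed ranking gives Rho = -1.0, matching the sign convention seen in the leaderboard above.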