This year we will focus on the task of Headline Popularity Prediction, and we are also open to innovative self-proposed topics. The contestants are asked to develop their prediction algorithms based on the SMHP dataset provided by the Challenge (as training data), plus possibly additional public/private data, to address one or both of the given tasks. For evaluation purposes, a contesting system is asked to produce popularity prediction results, whose accuracy will be assessed by pre-defined quantitative metrics. The contestants are also expected to present their algorithms and datasets at the conference.

The task is designed to discover the top-k popular posts on social media. Given a set of candidate photos and historical data of past photo sharing, the goal is to automatically predict which photos will be the most popular on the next day.
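As a minimal illustration of the top-k selection step described above, the sketch below ranks candidate posts by a predicted popularity score and keeps the k highest. The function name and the dictionary of scores are hypothetical; how the scores themselves are produced is up to each contestant's model.

```python
# Illustrative sketch only: assumes a model has already assigned each
# candidate post a predicted popularity score (names are hypothetical).

def top_k_popular(predictions, k):
    """Return the k post ids with the highest predicted popularity scores.

    predictions: dict mapping post_id -> predicted popularity score.
    """
    # Sort by score in descending order, then keep the first k ids.
    ranked = sorted(predictions.items(), key=lambda item: item[1], reverse=True)
    return [post_id for post_id, _ in ranked[:k]]

scores = {"post1": 0.7, "post2": 2.3, "post3": 1.1}
print(top_k_popular(scores, 2))  # ['post2', 'post3']
```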
To encourage exploration of the SMHP application scope, we also accept innovative topics proposed by the participants themselves, e.g., behavior prediction, interest mining, etc. For open topics, the participants need to clearly define the topic, demonstrate the technical advancement of their proposed solutions, specify the evaluation protocols, and provide supporting experimental results.
Each team is allowed to submit the results of at most three runs and must select one run as the primary run of the submission (we do not guarantee to evaluate additional runs); the primary run will be used for performance comparison across teams.
Each submission is required to be formatted as a JSON file as follows. Here we also provide a submission example for reference.

    {
        "version": "VERSION 1.2",
        "result": [
            {
                "post_id": "post6374637",
                "ranking_position": 1,
                "popularity_score": 2.1345
            },
            {
                "post_id": "post3637373",
                "ranking_position": 5,
                "popularity_score": 3.1415
            }
        ],
        "external_data": {
            "used": "true",
            "details": "VGG-19 pre-trained on ImageNet training set"
        }
    }
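A submission file in this format can be assembled programmatically. The sketch below is one possible way to do it, assuming predictions are available as an ordered list of (post_id, score) pairs; the helper name and its arguments are illustrative, not part of the Challenge specification.

```python
import json

# Minimal sketch of building a submission in the required JSON format.
# build_submission and its arguments are hypothetical helpers; the field
# names and "VERSION 1.2" string follow the submission example.

def build_submission(ranked_posts, used_external, details):
    return {
        "version": "VERSION 1.2",
        "result": [
            {
                "post_id": post_id,
                "ranking_position": rank,
                "popularity_score": score,
            }
            # Rank positions start at 1, following the example above.
            for rank, (post_id, score) in enumerate(ranked_posts, start=1)
        ],
        "external_data": {
            "used": str(used_external).lower(),
            "details": details,
        },
    }

submission = build_submission(
    [("post6374637", 2.1345), ("post3637373", 3.1415)],
    used_external=True,
    details="VGG-19 pre-trained on ImageNet training set",
)
print(json.dumps(submission, indent=4))
```

Serializing with `json.dumps` (or `json.dump` to a file) guarantees the output is well-formed JSON, which avoids rejected submissions due to hand-edited syntax errors.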