TASK DESCRIPTION

This year we will focus on two particular tasks, Popularity Prediction and Tomorrow's Top Prediction. Meanwhile, we are open to innovative self-proposed topics. Contestants are asked to develop their prediction algorithms based on the SMP dataset provided by the Challenge (as training data), plus possibly additional public/private data, to address one or both of the given tasks (SMP-T1 or SMP-T2). For evaluation, a contesting system is asked to produce popularity prediction results, whose accuracy will be measured by predefined quantitative metrics. Contestants will also present their algorithms and datasets at the conference.

SMP-T1 (Popularity Prediction): This task is designed to predict the impact of sharing different posts for a publisher on social media. Given a photo (a.k.a. post) from a publisher, the goal is to automatically predict the popularity of the photo, e.g., view count for Flickr, Pin count for Pinterest, etc.
SMP-T2 (Tomorrow's Top Prediction): This task is designed to discover the top-k popular posts on social media. Given a set of candidate photos and the historical data of past photo sharing, the goal is to automatically predict which photos will be the most popular on the next day.
To encourage the exploration of the SMP application scope, we also accept innovative topics proposed by the participants themselves, e.g., behavior prediction, interest mining, etc. For open topics, the participants need to clearly define the topic, demonstrate the technical advancement of their proposed solutions, specify the evaluation protocols, and provide SMP based experimental results.

RESULT SUBMISSION

After registering on the SMP website and once the submission button opens (June 10, 2017), please send your prediction results by Submission Mail. The mail title should be "Submission Mail-(teamname)", with the JSON results file attached and named after the task ("SMP-T1.json" or "SMP-T2.json").

We will take the last Submission Mail sent by each team before the submission deadline as its final submission.

SUBMISSION FORMAT

Each team is allowed to submit the results of at most three runs and must select one run as the primary run of the submission (we do not guarantee to evaluate additional runs); the primary run will be used for performance comparison across teams.
Each submission is required to be formatted as a JSON file as follows. We also provide submission examples for reference.

SMP-T1 example:
{
    "version": "VERSION 1.2",
    "result": [
        {
            "post_id": "post6374637",
            "popularity_score": 2.1345
        },
        ...
        {
            "post_id": "post3637373",
            "popularity_score": 3.1415
        }
    ],
    "external_data": {
        "used": "true",
        "details": "VGG-19 pre-trained on ImageNet training set"
    }
}

SMP-T2 example:
{
    "version": "VERSION 1.2",
    "result": [
        {
            "post_id": "post6374637",
            "ranking_position": 1,
            "popularity_score": 2.1345
        },
        ...
        {
            "post_id": "post3637373",
            "ranking_position": 5,
            "popularity_score": 3.1415
        }
    ],
    "external_data": {
        "used": "true",
        "details": "VGG-19 pre-trained on ImageNet training set"
    }
}
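
The format above can be produced with a few lines of Python. The sketch below is illustrative only (the helper name and the shape of the `predictions` input are assumptions, not part of the Challenge spec); it writes an SMP-T1 submission file. For SMP-T2, each `result` entry additionally needs a `"ranking_position"` field as shown in the second example.

```python
import json

def write_submission(predictions, path="SMP-T1.json", external_details=None):
    """Write predictions in the SMP-T1 submission format.

    predictions: list of (post_id, popularity_score) pairs,
    e.g. [("post6374637", 2.1345), ...] -- an assumed input shape.
    """
    payload = {
        "version": "VERSION 1.2",
        "result": [
            {"post_id": pid, "popularity_score": float(score)}
            for pid, score in predictions
        ],
        # The spec's example encodes "used" as a string, so we follow that.
        "external_data": {
            "used": "true" if external_details else "false",
            "details": external_details or "",
        },
    }
    with open(path, "w") as f:
        json.dump(payload, f, indent=4)

write_submission(
    [("post6374637", 2.1345), ("post3637373", 3.1415)],
    external_details="VGG-19 pre-trained on ImageNet training set",
)
```

Attach the resulting file to the Submission Mail exactly as named ("SMP-T1.json" or "SMP-T2.json").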