###################################################
CALL FOR TASK PARTICIPATION
NTCIR-13 Lifelog Access and Retrieval (Lifelog) Task
Twitter: https://twitter.com/NTCIRLifelog
Registration: http://ntcir.nii.ac.jp/NTCIR13Regist/
Important Dates:
November 22, 2016: Phase I: Data release
December 15, 2016: Phase I: Dry run submissions due
December 31, 2016: Phase I: Dry run results release
January 21, 2017: Phase I: Formal run topics release
February 28, 2017: Phase I: Formal run submissions due
March 15, 2017: Phase I: Formal results release
April 15, 2017: Phase II: Data, dry run topics release
April 30, 2017: Phase II API open
May 15, 2017: Phase II: Dry run submissions due
May 31, 2017: Phase II: Dry run results release
June 30, 2017: Phase II: Formal run topics release
July 30, 2017: Phase II: Formal run submissions due
September 01, 2017: Phase II: Formal run results release
October 01, 2017: Participant papers due
November 01, 2017: Camera ready due
December 5-8, 2017: NTCIR-13 Conference @ NII, Tokyo, Japan
=== OVERVIEW ===
Digital recording of life experience (lifelogging) is gaining popularity as new types of wearable sensors can generate a rich archive of life experience. Such wearable sensors include wearable cameras, fitness trackers and various mobile devices. The objective of the NTCIR Lifelog task is to encourage research in this field and to understand the current state-of-the-art in lifelog retrieval.
A real-world lifelog dataset will be distributed to task participants. This dataset will consist of at least 45 days of data from two active lifeloggers. Much of the data will be recorded 24/7, though multimedia data will only be recorded during waking hours. The dataset will contain wearable camera images, biometric records, human activity logs, and computer usage logs. A set of real-world topics will accompany the dataset.
Task participants must submit a paper to the NTCIR-13 Conference, and at least one member of each participating group must attend either the conference at NII, Tokyo, or a satellite event in Europe in December 2017 to present their work.
=== TASKS ===
NTCIR-13 Lifelog includes four subtasks, each of which can be entered independently.
Phase I
(November 2016 - March 2017)
==Lifelog Annotation Task (LAT) ==
The aim of this subtask is to explore the most effective computer vision algorithms to accurately describe the visual content of lifelog images. A small ontology will be generated of important lifelog concepts (activities, environments and objects) and the task will require the development of automated approaches to annotating these concepts.
Both the image content and the provided metadata and external evidence sources can be used to generate the annotations. Participants producing the best-performing annotation outputs are encouraged to release them for other participants to use in the other three subtasks.
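Annotation outputs of this kind are typically scored per image against a ground-truth concept set. As a minimal sketch, the concept names, image labels, and the flat precision/recall scoring below are illustrative assumptions; the official ontology and evaluation protocol will be released with the dataset.

```python
# Sketch: per-image precision/recall for concept annotation (LAT-style).
# The concept names used here are hypothetical examples, not the official
# lifelog ontology.

def precision_recall(predicted: set, actual: set) -> tuple:
    """Score one image's predicted concepts against its ground truth."""
    if not predicted:
        return 0.0, 0.0
    tp = len(predicted & actual)          # correctly predicted concepts
    precision = tp / len(predicted)
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

pred = {"kitchen", "coffee_cup", "indoor"}
gold = {"kitchen", "indoor", "table"}
p, r = precision_recall(pred, gold)
print(round(p, 2), round(r, 2))  # two of three predictions are correct
```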
Phase II
(April - September 2017)
==Lifelog Semantic Access Task (LSAT) ==
In this subtask, participants must retrieve specific moments from a lifelogger's life. We define moments as semantic events, or activities, that the individual was involved in. The task is best compared to a known-item search task, as in TRECVid.
Example search tasks include:
Find the moment(s) where I was boarding an A380.
Find the moment(s) where I am in my kitchen.
Find the moment(s) where I am playing with my phone.
Find the moment(s) where I am preparing breakfast.
Tasks can be undertaken in an interactive manner (user in the loop) or an automatic manner (automatic query processing). Submissions should indicate the time (in minutes) of all instances of ranked results that best match the topic description; for interactive runs, the time taken to find each result is also required. The main evaluation metric will be confirmed on the task website; depending on feedback received from participants during the dry run, the metrics may be revised.
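An automatic LSAT run reduces to ranking candidate minutes against a topic. The sketch below models moments as per-minute records with concept annotations and ranks them by concept overlap; this record schema, the minute identifiers, and the scoring are illustrative assumptions, not the official dataset format or baseline.

```python
# Sketch: a naive automatic LSAT run over hypothetical per-minute records.
# Real runs would use the released metadata and richer query processing.

def retrieve(moments: dict, query_concepts: list, k: int = 10) -> list:
    """Rank minutes by overlap between their annotations and the query."""
    scored = []
    for minute_id, concepts in moments.items():
        score = len(set(concepts) & set(query_concepts))
        if score:
            scored.append((minute_id, score))
    # Higher overlap first; break ties chronologically.
    scored.sort(key=lambda item: (-item[1], item[0]))
    return scored[:k]

moments = {
    "2016-11-22_08:05": ["kitchen", "kettle", "indoor"],
    "2016-11-22_08:06": ["kitchen", "toast", "indoor"],
    "2016-11-22_09:10": ["office", "desktop", "indoor"],
}
# Topic: "Find the moment(s) where I am in my kitchen."
print(retrieve(moments, ["kitchen", "indoor"]))
```

The two kitchen minutes rank above the office minute because they match both query concepts rather than one.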
==Lifelog Insight Task (LIT) ==
The aim of this subtask is to gain insights into the lifelogger's life. It follows the idea of the Quantified Self movement, which focuses on the visualization of knowledge mined from self-tracking data to provide "self-knowledge through numbers". Participants are requested to generate new types of visualisations and insights about the life of the lifeloggers by generating a themed diary. This task is not evaluated in the traditional sense, but participants will be expected to present their work in a special session at NTCIR. An event segmentation will be defined for the data.
Example tasks include:
Provide insights on my social interactions.
Provide insights on my coffee drinking habits.
Provide insights on the time I spend commuting to work.
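Insights of this kind usually start from a simple aggregation over the activity logs before any visualization. As a minimal sketch for the commuting topic, the log schema below (date, activity label, duration in minutes) is a hypothetical assumption for illustration; the released dataset will define its own format.

```python
# Sketch: a minimal LIT-style aggregation - total daily commute minutes
# from a hypothetical activity log. A themed diary or chart would be
# built on top of summaries like this one.
from collections import defaultdict

activity_log = [
    ("2017-04-16", "commuting", 35),
    ("2017-04-16", "working", 420),
    ("2017-04-17", "commuting", 40),
    ("2017-04-17", "commuting", 38),
]

commute = defaultdict(int)
for day, activity, minutes in activity_log:
    if activity == "commuting":
        commute[day] += minutes

for day in sorted(commute):
    print(day, commute[day])  # e.g. one line per day with total minutes
```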
==Lifelog Event Segmentation Task (LEST) ==
The aim of this subtask is to examine approaches to event segmentation of continuous lifelog stream data. Events have long been proposed as the standard unit of retrieval, and a number of approaches to segmenting lifelog data into events have been suggested. The proposal here is to release a training set of manually segmented lifelog data and to evaluate how well participants can segment a test set of lifelog data. A sliding-window tolerance will be applied when calculating the accuracy of submissions.
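A sliding-window tolerance typically means a predicted event boundary counts as correct if it falls within a fixed window of an unmatched ground-truth boundary. The sketch below illustrates that idea; the window size, the minute-offset representation of boundaries, and the precision/recall scoring are assumptions for illustration, not the official LEST evaluation.

```python
# Sketch: sliding-window matching of predicted vs ground-truth event
# boundaries. Boundaries are minute offsets from the start of the day;
# a prediction within `window` minutes of an unmatched truth boundary
# counts as a hit.

def boundary_scores(predicted: list, truth: list, window: int = 5) -> tuple:
    """Return (precision, recall) under a sliding-window tolerance."""
    truth = sorted(truth)
    matched = set()   # indices of truth boundaries already claimed
    hits = 0
    for p in sorted(predicted):
        for i, t in enumerate(truth):
            if i not in matched and abs(p - t) <= window:
                matched.add(i)
                hits += 1
                break
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(truth) if truth else 0.0
    return precision, recall

p, r = boundary_scores([12, 58, 130], [10, 60, 200], window=5)
print(round(p, 2), round(r, 2))  # 12 and 58 match; 130 is too far from 200
```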
Please visit NTCIR lifelog website for more information about the task:
http://ntcir-lifelog.computing.dcu.ie/
=== ORGANIZERS ===
Cathal Gurrin (Dublin City University, Ireland)
Hideo Joho (University of Tsukuba, Japan)
Frank Hopfgartner (University of Glasgow, UK)
Liting Zhou (Dublin City University, Ireland)
Rami Albatal (Heystaks, Ireland)
=== CONTACT ===
ntcir-lifelog at computing dot dcu dot ie