- Human action recognition requires multimodal sensor setups, which leads to interesting ML problems
- Lack of benchmarks for activity recognition <there are benchmarks – maybe they are no good though>
- In many cases, data isn’t shared, or the approach is highly tailored to the specific sensor and environmental setup considered
- Action recognition requires “tackling issues ranging from the feature selection and classification (…), to decision fusion and fault-tolerance (…).”
- They recorded “realistic daily life activities in a sensor-rich environment…” <I think no head-cam>
- They really wire up all the items in the environment
- Mention some other datasets:
- Placelab: Environment where almost everything is wired up, married couple lived there for 15 days. <Sensors are comprehensive and very hardcore! Even monitor water flow and electricity usage at each plug.>
- Van Kasteren: 28-day recording, but the data is much less rich
- Darmstadt routine dataset: just 2 accelerometers on the arm, plus narration
- TUM Kitchen <already discussed in blog>
- HASC Corpus: Just 1 accelerometer, but on many people
- There is also something called the Body Sensor Network contest, but it seems devoted to recording very short activity episodes
- In this dataset, there is a loosely scripted series of activities the subject is instructed to undertake
- The rest is details on the challenge they propose for their dataset <skipping>
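The "decision fusion and fault-tolerance" issues quoted above can be illustrated with a minimal sketch (my own illustration, not from the paper): majority voting over per-sensor classifier outputs, where sensors that dropped out for a time window are simply skipped.

```python
from collections import Counter

def fuse_decisions(per_sensor_labels):
    """Fuse per-sensor activity predictions for one time window by majority vote.

    Fault tolerance here just means ignoring sensors that produced no
    prediction (None), e.g. due to dropout or battery failure.
    """
    valid = [label for label in per_sensor_labels if label is not None]
    if not valid:
        return None  # every sensor failed for this window
    return Counter(valid).most_common(1)[0][0]

# Hypothetical predictions from three wearable sensors for one window:
print(fuse_decisions(["walk", "walk", "stand"]))  # -> walk
print(fuse_decisions(["sit", None, "sit"]))       # -> sit
```

Real systems fuse at the feature or score level too; this label-level vote is just the simplest scheme that tolerates missing sensors.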