Wearable computing systems generate vast streams of multimodal sensor data and therefore need intelligent mechanisms to recognize and interpret user situations. This paper proposes a situation identification framework that integrates machine learning models with context space theory (CST) to map raw sensor data into high-level contextual states. The approach provides a systematic way to handle uncertainty and dynamic environmental change, enabling robust and adaptive situation recognition. Experimental evaluation shows that combining CST with learning techniques significantly improves accuracy and generalization in complex real-world scenarios, laying a foundation for next-generation smart wearable systems.
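To give a flavor of the core idea, the sketch below illustrates how CST can represent a situation as a region over context attributes and score a sensor-derived state against it. This is a minimal illustration, not the paper's implementation: the attribute names, acceptable ranges, and weights are invented for the example.

```python
# Illustrative CST-style situation inference (hypothetical values throughout).
# A situation space assigns each context attribute an acceptable region and a
# weight; confidence is the weighted fraction of attributes inside their region.

def situation_confidence(state, situation):
    """Confidence that a context state belongs to a situation space.

    state:     dict mapping attribute name -> sensor-derived value.
    situation: dict mapping attribute name -> (lo, hi, weight); weights sum to 1.
    """
    conf = 0.0
    for attr, (lo, hi, weight) in situation.items():
        value = state.get(attr)
        if value is not None and lo <= value <= hi:
            conf += weight  # attribute falls inside its acceptable region
    return conf

# Hypothetical situation "jogging", defined over three context attributes.
JOGGING = {
    "heart_rate_bpm":   (110, 170, 0.4),
    "speed_m_s":        (2.0, 4.5, 0.4),
    "ambient_light_lx": (50, 100_000, 0.2),  # outdoors in daylight
}

state = {"heart_rate_bpm": 132, "speed_m_s": 2.8, "ambient_light_lx": 12_000}
print(situation_confidence(state, JOGGING))
```

In a full system, the regions and weights would not be hand-tuned as above; this is where the learned models come in, fitting the situation-space parameters from labeled sensor traces.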