Presented as part of a poster session; in press, Springer International Publishing.
Nowadays, there is ever-growing interest in assessing the collective intelligence (CI) of a team in a wide range of scenarios, thanks to its potential for enhancing teamwork and group performance. Recently, special attention has been devoted to the clinical setting, where breakdowns in teamwork, leadership, and communication can lead to adverse events, compromising patient safety. So far, researchers have mostly relied on surveys to study human behavior and group dynamics; however, this method has proven ineffective. In contrast, wearable technologies represent a promising solution for monitoring behavioral and individual features that are reflective of CI. To date, the field of CI assessment still appears unstructured; therefore, the aim of this narrative review is to provide a detailed overview of the main group and individual parameters that can be monitored to evaluate CI in clinical settings, together with the wearables either already used to assess them or with the potential to be applied in this scenario. The working principles, advantages, and disadvantages of each device are introduced in order to bring order to this field and provide a guide for future CI investigations in medical contexts.
Considerable progress has been made in developing sensors and wearable systems for monitoring physiological parameters in different fields. Among them, healthcare and sports are showing increasing interest in monitoring respiratory rate through these sensors. However, several open challenges limit their reliability. This study presents the design, development, and testing of a wearable sensor based on conductive textiles for respiratory monitoring in sports. Our approach involved an initial analysis of breathing kinematics to investigate the magnitude of chest wall strains during breathing. This analysis guided the design of the sensing element, as well as the metrological characterization of the sensor and its integration into a wearable strap. A pilot experiment was then carried out on a healthy volunteer to assess the sensor’s performance under three different breathing patterns (bradypnea, quiet breathing, and tachypnea) using a wearable reference system. The obtained results are very promising and contribute to developing a reliable and efficient wearable device for monitoring respiratory rate. Furthermore, the design process employed in this study provides insight into the attributes needed to accurately capture breathing movements while maintaining comfort and usability.
An interesting and innovative activity in Collective Intelligence systems is Sentiment Analysis (SA), which, starting from users’ feedback, aims to identify their opinion about a specific subject, for example in order to develop, improve, or customize products and services. Feedback gathering, however, is complex, time-consuming, and often invasive, possibly decreasing the truthfulness and reliability of its outcome. Moreover, the subsequent feedback processing may suffer from scalability, cost, and privacy issues when the sample size is large or the data to be processed are sensitive. The Internet of Things (IoT) and Edge Intelligence (EI) can greatly help in both respects by providing, respectively, a pervasive and transparent way to collect a huge amount of heterogeneous data from users (e.g., audio, images, and video) and an efficient, low-cost, and privacy-preserving solution to analyze them locally without resorting to Cloud computing-based platforms. Therefore, in this paper we outline an innovative collective SA system which leverages IoT and EI (specifically, TinyML techniques and the EdgeImpulse platform) to gather and immediately process audio in the proximity of entities of interest in order to determine whether the audience’s opinions are positive, negative, or neutral. The architecture of the proposed system, exemplified in a museum use case, is presented, and a preliminary, yet very promising, implementation is shown, revealing interesting insights towards its full development.
The expansion of Internet of Things (IoT) technology has led to the widespread use of sensors in various everyday environments, including healthcare. Body Sensor Networks (BSNs) enable continuous monitoring of human physiological signals and activities, benefiting healthcare and well-being. However, existing BSN systems primarily focus on single-user activity recognition, disregarding multi-user scenarios. Therefore, this paper introduces a collaborative BSN-based architecture for multi-user activity recognition to identify group collaborations among nearby users. We first discuss the general problem of multi-user activity recognition and the associated challenges along with potential solutions (such as data processing, mining techniques, sensor noise, and the complexity of multi-user activities), and then the software abstractions and components of our architecture. This architecture represents an innovative collective-intelligence solution and holds significant potential for enhancing healthcare and well-being applications by enabling real-time detection of group activities and behaviors.
The healthcare industry faces challenges due to rising treatment costs, an aging population, and limited medical resources. Remote monitoring technology offers a promising solution to these issues. This paper introduces an innovative adaptive method that deploys an Ultra-Wideband (UWB) radar-based Internet-of-Medical-Things (IoMT) system to remotely monitor elderly individuals’ vital signs and fall events during their daily routines. The system employs edge computing for prioritizing critical tasks and a combined cloud infrastructure for further processing and storage. This approach enables monitoring and telehealth services for elderly individuals. A case study demonstrates the system’s effectiveness in accurately recognizing high-risk conditions and abnormal activities such as sleep apnea and falls. The experimental results show that the proposed system achieved high accuracy levels, with a Mean Absolute Error (MAE) ± Standard Deviation of Absolute Error (SDAE) of 1.23±1.16 bpm for heart rate (HR) detection and 0.22±0.27 bpm for respiratory rate (RR) detection. Moreover, the system demonstrated a recognition accuracy of 90.60% for three types of falls (i.e., stand, bow, squat to fall), one daily activity, and No Activity Background. These findings indicate that the radar sensor provides a high degree of accuracy suitable for various remote monitoring applications, thus enhancing the safety and well-being of elderly individuals in their homes.
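The MAE ± SDAE figures reported above are simple aggregate statistics over per-sample errors. The following self-contained Scala sketch (with made-up sample rates, not the paper's data) shows how the two quantities are obtained from a reference series and an estimated series.

```scala
object Metrics {
  // Mean Absolute Error (MAE) and Standard Deviation of the Absolute
  // Error (SDAE) between a reference and an estimated series (e.g., bpm).
  def maeSdae(reference: Seq[Double], estimated: Seq[Double]): (Double, Double) = {
    require(reference.nonEmpty && reference.length == estimated.length)
    val absErrors = reference.zip(estimated).map { case (r, e) => math.abs(r - e) }
    val mae = absErrors.sum / absErrors.length
    // Population standard deviation of the absolute errors
    val sdae = math.sqrt(absErrors.map(e => (e - mae) * (e - mae)).sum / absErrors.length)
    (mae, sdae)
  }
}
```

For instance, `Metrics.maeSdae(Seq(60, 62, 61), Seq(61, 61, 63))` compares three reference heart-rate samples against three estimates and returns the MAE of the absolute errors together with their spread.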
The development of feature-oriented programming (FOP) and of its generalization, delta-oriented programming (DOP), has focused primarily on software product lines (SPLs) of class-based object-oriented programs. In this paper, we introduce delta-oriented SPLs of functional programs with algebraic data types (ADTs). To pave the way towards SPLs of multi-paradigm programs, we tailor our presentation to the functional sublanguage of the multi-paradigm modeling language ABS, which already features DOP support for its class-based object-oriented sublanguage. Our main contributions are: (i) we motivate and illustrate our proposal by an example from an industrial modeling scenario; (ii) we formalize delta-oriented SPLs for functional programs with ADTs in terms of a foundational calculus; (iii) we define family-based analyses to check whether an SPL satisfies certain well-formedness conditions and whether all variants can be generated and are well-typed; and (iv) we briefly outline how, in the context of the ABS toolchain, the proposed delta-oriented constructs and analyses for functional programs can be integrated with their counterparts for object-oriented programs.
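As a rough illustration of the delta-oriented idea applied to ADTs — in Scala rather than ABS, with a hypothetical `Expr` product line that is not taken from the paper — one can model an ADT definition as data and deltas as transformations over it; variant generation then folds the deltas selected by a feature configuration over the base definition.

```scala
object DeltaSpl {
  // A (highly simplified) ADT definition: its type name plus constructor names.
  final case class Adt(name: String, constructors: Set[String])

  // A delta rewrites an ADT definition, e.g., adding or removing a constructor.
  type Delta = Adt => Adt

  // Hypothetical expression-language product line (illustrative names only).
  val base = Adt("Expr", Set("Lit", "Add"))
  val dNeg: Delta = adt => adt.copy(constructors = adt.constructors + "Neg")
  val dNoAdd: Delta = adt => adt.copy(constructors = adt.constructors - "Add")

  // Variant generation: apply, in order, the deltas selected by a configuration.
  def generate(base: Adt, deltas: List[Delta]): Adt =
    deltas.foldLeft(base)((adt, d) => d(adt))
}
```

A family-based analysis in the spirit of contribution (iii) would then check, over all configurations at once, that every generated variant is well-formed (e.g., that no removed constructor is still referenced).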
Social activities are a fundamental form of social interaction in our daily life. Current smart systems based on human-computer interaction (e.g., for security, safety, and healthcare applications) can significantly benefit from, and often require, an understanding of the individual and group activities users perform. Recent advancements in Wi-Fi signal analysis suggest that this pervasive communication infrastructure can also serve as a convenient, non-invasive, contactless sensing method to detect human activities. In this paper, we propose a data-level fusion method based on Wi-Fi Channel State Information (CSI) analysis to recognize social activities (e.g., walking together) and gestures (e.g., hand-shaking) in an indoor environment. Our results show that off-the-shelf Wi-Fi devices can be effectively used as a contactless sensing method for social activity recognition, as an alternative to approaches based on computer vision and wearable sensors.
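Data-level fusion here means combining the raw CSI streams before feature extraction, rather than merging per-link classification results. A minimal Scala sketch of the idea — with hypothetical per-link amplitude vectors, not the paper's actual pipeline — could look as follows.

```scala
object CsiFusion {
  // One CSI sample: amplitude per subcarrier for a single Tx-Rx link.
  type CsiSample = Vector[Double]

  // Data-level fusion: concatenate the per-link amplitude vectors captured
  // at the same time step into one joint sample, then scale it so that
  // links with different gains remain comparable.
  def fuse(links: Seq[CsiSample]): CsiSample = {
    require(links.exists(_.nonEmpty))
    val joint = links.flatten.toVector
    val max = joint.map(math.abs).max
    if (max == 0.0) joint else joint.map(_ / max)
  }
}
```

The fused vector would then be fed to a single classifier, which can exploit cross-link correlations that per-link (decision-level) fusion discards.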
Stream Runtime Verification (SRV) has recently been proposed for monitoring input streams of data while producing output streams in response. The Aggregate Programming (AP) paradigm for collections of distributed devices has been used to implement distributed runtime verification of spatial and temporal Boolean properties. In this paper we outline how distributed SRV could be implemented by AP and the new opportunities AP could bring to the field of distributed SRV.
The importance of monitoring groups of devices working together towards shared global objectives is growing, for instance when they are used for crucial purposes such as search and rescue operations during emergencies. Effective approaches in this context include expressing global properties of a swarm as logical formulas in a spatial or temporal logic, which can be automatically translated into executable distributed run-time monitors. This can be accomplished by leveraging frameworks such as Aggregate Computing (AC) and by proving non-trivial “translation correctness” results, in which subtle bugs may easily hide if one relies on hand-written proofs.
In this paper, we present an implementation of AC in Coq, which makes it possible to automatically verify monitor correctness, further raising the security level of the monitored system. This implementation may also make it possible to integrate static analysis of program correctness properties with run-time monitors for properties too difficult to prove in Coq. We showcase the usefulness of our implementation by means of a paradigmatic example, proving in Coq the correctness of an AC monitor for a past-CTL formula.
Recent trends like the Internet of Things (IoT) suggest a vision of dense and multi-scale deployments of computing devices in nearly all kinds of environments. A prominent engineering challenge revolves around programming the collective adaptive behaviour of such computational ecosystems. This requires abstractions able to capture concepts like ensembles (dynamic groups of cooperating devices) and collective tasks (joint activities carried out by ensembles). In this work, we consider collections of devices that interact with neighbours and execute in nearly-synchronised sense–compute–interact rounds, where the computation is given by a single control program. To support programming whole computational collectives, we propose the abstraction of a distributed collective process (DCP), which can be used to define at once the ensemble formation logic and its collective task. We implement the abstraction in the eXchange Calculus (XC), a core language based on neighbouring values (maps from neighbours to values) where state management and interaction are handled through a single primitive, exchange. Then, we discuss the features of the abstraction and its suitability for different kinds of distributed computing applications, and provide a proof-of-concept implementation of a wave-like process propagation.
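To give a flavour of the neighbouring-value abstraction, the following Scala sketch — a simplification, not actual XC syntax — models one sense–compute–interact round of a gradient-style computation: from the values received from its neighbours, a device derives its new local value and builds the neighbouring value it will send back.

```scala
object Nvalues {
  type DeviceId = Int
  // A neighbouring value: a map from each neighbour to a value.
  type NValue[A] = Map[DeviceId, A]

  // One round, in the style of XC's 'exchange': compute the new local
  // value (hop-count gradient: minimum received distance + 1, sources
  // fixed at 0) and the neighbouring value to send to each neighbour.
  def round(self: DeviceId, isSource: Boolean, received: NValue[Double]): (Double, NValue[Double]) = {
    val local =
      if (isSource) 0.0
      else if (received.isEmpty) Double.PositiveInfinity
      else received.values.min + 1.0
    // This sketch sends the same value to every neighbour; XC's exchange
    // also permits per-neighbour (anisotropic) sends.
    (local, received.keySet.map(_ -> local).toMap)
  }
}
```

Iterating such rounds across all devices makes the hop-count field stabilise to each device's distance from the nearest source, the classic self-organising building block on which process propagation is layered.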
In recent decades, many smart sensing solutions have been proposed for monitoring human health, ranging from systems equipped with electrical sensors to mechanical and optical ones. In this scenario, wearables based on fiber optic sensors such as fiber Bragg gratings (FBGs) can be a valuable solution, since they show many advantages over competitors, such as miniaturized size, light weight, and high sensitivity. Unfortunately, one of the main issues with this technology is its inherent fragility. For this reason, various encapsulation modalities have been proposed to embed FBGs into flexible biocompatible materials for improved robustness and a skin-like appearance. Recently, 3D printing techniques have been proposed to innovate this process thanks to their numerous advantages, such as a quick fabrication process and high accuracy, repeatability, and resolution. Moreover, the possibility of easily customizing the sensor design by choosing a set of printing parameters (e.g., printing orientation, material selection, shape, size, density, and pattern) can help in developing sensing solutions optimized for specific applications. Here, we present a rectangular 3D-printed sensor developed by fused deposition modeling (FDM). A detailed description of the design and fabrication stages is provided. In addition, changes in the spectral response as well as in the metrological properties of the embedded FBG sensor are investigated. The presented data can be used not only to improve and optimize design and fabrication processes, but may also be beneficial for future research on the production of highly sensitive 3D-printed sensors for applications in wearable technology and, more generally, the healthcare setting.
Supported by current socio-scientific trends, programming the global behaviour of whole computational collectives offers great opportunities, but also poses significant challenges. Recently, aggregate computing has emerged as a prominent paradigm for so-called collective adaptive systems programming. To narrow the gap between such research endeavours and mainstream software development and engineering, we present ScaFi, a Scala toolkit providing an internal domain-specific language, libraries, a simulation environment, and runtime support for practical aggregate computing systems development.
The Internet of Things and edge computing are fostering a future of ecosystems hosting complex decentralized computations, deeply integrated with our very dynamic environments. Digitalized buildings, communities of people, and cities will be the next-generation “hardware and platform,” counting myriads of interconnected devices, on top of which intrinsically-distributed computational processes will run and self-organize. They will spontaneously spawn, diffuse to pertinent logical/physical regions, cooperate and compete, opportunistically summon required resources, collect and analyze data, compute results, trigger distributed actions, and eventually decay. What would a programming model for such ecosystems look like? Based on research findings on self-adaptive/self-organizing systems, this paper proposes design abstractions based on “dynamic decentralization domains”: regions of space opportunistically formed to support situated recognition and action. We embody the approach into a Scala application program interface (API) enacting distributed execution and show its applicability in a case study of environmental monitoring.
Swarm intelligence leverages collective behaviours emerging from the interaction and activity of several “simple” agents to solve problems in various environments. One problem of interest in large swarms featuring a variety of sub-goals is swarm clustering, where the individuals of a swarm are assigned to, or choose to belong to, zero or more groups, also called clusters. In this work, we address the sensing-based swarm clustering problem, where clusters are defined based on both the values sensed from the environment and the spatial distribution of the values and the agents. Moreover, we address it in a setting characterised by decentralisation of computation and interaction, and by dynamicity of values and mobility of agents. For the solution, we propose to use the field-based computing paradigm, where computation and interaction are expressed in terms of a functional manipulation of fields, distributed and evolving data structures mapping each individual of the system to values over time. We devise a solution to sensing-based swarm clustering leveraging multiple concurrent field computations with limited domain and evaluate the approach experimentally by means of simulations, showing that the programmed swarms form clusters that closely reflect the dynamics of the underlying environmental phenomena.
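A one-hop caricature of the sensing-based clustering idea — not the paper's actual field-based algorithm — can be sketched in Scala: an agent whose sensed value dominates its neighbourhood elects itself cluster leader, while every other agent follows the neighbour with the highest sensed value, so clusters form around local maxima of the sensed field.

```scala
object SwarmClustering {
  type AgentId = Int

  // Each agent knows its own sensed value and those of its one-hop
  // neighbours. It becomes a cluster leader if its value is a strict
  // local maximum; otherwise it joins the cluster of the neighbour
  // with the highest sensed value.
  def clusterOf(self: AgentId, sensed: Double, neighbours: Map[AgentId, Double]): AgentId =
    if (neighbours.values.forall(_ < sensed)) self
    else neighbours.maxBy(_._2)._1
}
```

In the actual field-based setting, cluster membership would instead be carried by a distributed field recomputed every round, so that leaders and followers adapt as agents move and sensed values change.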
The use of wearable devices in daily activities is growing continuously and rapidly. Wearable technology provides seamless sensing, monitoring, and multimodal interaction, without continuous manual intervention and effort from users. These devices support the realization of novel applications in many domains, from healthcare to security and entertainment, improving users’ quality of life. The situation awareness (SA) paradigm allows wearable computing systems to be aware of what is happening to users and in the surrounding environment, supporting automatic smart adaptive behaviors based on the identified situation. Although situation-aware wearable devices have recently attracted a lot of attention, there is still a lack of methodological approaches and reference models for defining such systems. In this paper, we propose a reference architecture for situation-aware wearable computing systems grounded on Endsley’s SA model. A specialization of the architecture in the context of multi-user wearable computing systems is also proposed to support team situation awareness. An illustrative example shows a practical instantiation of the architecture in the context of contact tracing using smart sensorized face masks.
Background and objective: the COVID-19 outbreak has become one of the most challenging problems for human beings. It is a communicable disease caused by a new coronavirus strain, which has already infected over 375 million people and caused almost 6 million deaths. This paper aims to develop and design a framework for early diagnosis and fast classification of COVID-19 symptoms using multimodal Deep Learning techniques. Methods: we collected chest X-ray and cough sample data from open-source datasets (including the Cohen dataset) and local hospitals. Features are extracted from the chest X-ray images. We also used cough audio datasets from the Coswara project and local hospitals. The publicly available Coughvid, DetectNow, and Virufy datasets are used to evaluate COVID-19 detection based on speech, respiratory, and cough sounds. The collected audio data comprises slow and fast breathing, shallow and deep coughing, spoken digits, and phonation of sustained vowels. Gender, geographical location, age, preexisting medical conditions, and current health status (COVID-19 and Non-COVID-19) are recorded. Results: the proposed framework uses a selection algorithm over pre-trained networks to determine the best fusion model, characterized by the pre-trained chest X-ray and cough models. Then, deep chest X-ray fusion by discriminant correlation analysis is used to fuse discriminatory features from the two models. The proposed framework achieved recognition accuracy, specificity, and sensitivity of 98.91%, 96.25%, and 97.69%, respectively. With the fusion method we obtained 94.99% accuracy. Conclusion: this paper examines the effectiveness of well-known ML architectures on a joint collection of chest X-rays and cough samples for early classification of COVID-19. It shows that existing methods can be effectively used for diagnosis, suggesting that the fusion learning paradigm could be a crucial asset in diagnosing future unknown illnesses. The proposed framework provides a health-informatics basis for early diagnosis, clinical decision support, and accurate prediction.
In the last fifteen years, there has been a widespread diffusion of wearable sensorized devices for a plethora of applications in heterogeneous domains. Wearable technology provides fundamental capabilities such as smart sensing, monitoring, data recording, and multi-modal interaction in a seamless, pervasive, and easy-to-use way. An emerging research trend is the definition of situation-aware wearable computing systems, i.e., wearable devices able to perceive and understand what is happening in the environment in order to adapt their behavior and anticipate users’ needs, a capability known as situation awareness. Despite the increasing interest of the research community in situation-aware wearable devices, there is a lack of studies, formal models, methodological approaches, and theoretical groundings on which these systems can be based. As a result, only a very limited number of smart sensors (physical or virtual) capable of effectively and efficiently supporting situation awareness have been proposed so far. In this article, we provide a survey and a classification of state-of-the-art situation-aware wearable systems, outlining current research trends, shortcomings, and challenges, with an emphasis on the models, approaches, and computational techniques of situation awareness and wearable computing on which they are based. The survey has been performed using the PRISMA methodology for systematic reviews. The analysis has been conducted with respect to a reference architecture, namely SA-WCS, of a generic situation-aware wearable computing system that we propose in this article, grounded on Endsley’s model of situation awareness. Such a reference architecture not only provides a systematic framework for the comparison and categorization of the works, but also aims to promote the development of the next generation of WCSs.
Why do groups perform better than individuals? The answer is hidden behind the concept of so-called Collective Intelligence (CI). CI is defined as the ability of a group to perform a wide variety of tasks, and team behavior and individual characteristics are consistent predictors of CI. A complex environment in which CI is increasingly recognized as a determinant of safe and efficient functioning is the operating room (OR), where individual inputs and efforts should be adapted to those of teammates to accomplish shared goals. To date, although teamwork failure accounts for 70–80% of serious medical errors, the lack of quantitative measurements of individual responses and interpersonal dynamics leaves CI poorly characterized in the OR. This work proposes an innovative wearable platform for monitoring physiological biomarkers and joint movements of individuals while they perform tasks. The ECG trace and breathing waveform, combined with skin conductance and movement patterns of both wrist and elbow, are recorded unobtrusively and without impairing any activity of the user. A preliminary assessment of these devices was carried out through two trials (i.e., in a static condition to obtain the user’s baseline, and while handling tools to simulate typical surgical tasks). This study, with its preliminary findings, can be considered a first attempt toward establishing an innovative strategy to improve team performance and, consequently, surgical outcomes and patient safety in the clinical routine.