by Davide Domini, Laura Erhan, Giovanni Aguzzi, Lucia Cavallaro, Amir Douzandeh Zenoozi, Antonio Liotta, Mirko Viroli
published in International Joint Conference on Neural Networks (IJCNN 2025), Rome, Italy, June 30 – July 5, 2025
The growing demands of Society 5.0 require intelligent systems that are both cooperative and energy efficient. This paper introduces Sparse Self-Federated Learning (SSFL), a novel paradigm combining sparsification with self-federation to reduce communication and computation costs in distributed training. By leveraging sparsity, the approach ensures lightweight yet accurate models, while self-federation allows nodes to autonomously coordinate their training without centralized orchestration. Experimental results demonstrate significant gains in energy efficiency and scalability, highlighting the potential of SSFL for large-scale cooperative intelligence on edge devices and cyber-physical systems.
Voice signals contain rich information about human health, and pathological conditions often manifest as distinctive distortions in vocal features. This paper explores acoustic and signal-based characteristics of pathological voice distortions, identifying which features provide the most discriminative power for detecting and classifying vocal pathologies. The study emphasizes the importance of robust feature extraction for developing human–machine systems that support medical diagnostics and continuous health monitoring. Results suggest that carefully selected features can improve classification performance, paving the way for practical applications in computer-aided voice pathology detection.
Developing large-scale collective adaptive systems for safety-critical applications requires an extensive effort, involving the interplay of distributed programming techniques and mathematical proofs of real-time guarantees. This effort could be significantly reduced by allowing the system developer to rely on libraries of predefined algorithms. By exploiting such algorithms, distributed behaviour and (hard) real-time guarantees for the final application could be automatically inferred, effectively shifting the verification burden from the system designer to the algorithm developer.
Following earlier work on real-time guarantees for aggregate computing algorithms, we argue that aggregate computing could provide a convenient framework towards this aim. As a first step, we give a detailed description of different kinds of models that abstract aggregate programs as mathematical functions. Then, building on such models, we start investigating the problem of how real-time behavior constraints could be specified in a compositional way. Finally, we conclude by singling out a number of potential building block library algorithms that could constitute such a real-time aggregate computing library, with the potential of providing a valuable asset for supporting the rigorous engineering of safety-critical large-scale collective adaptive systems.
Full paper
Programming swarm behaviors is a challenging task, due to the need to express collective behaviors in terms of local interactions among simple agents. Although several programming frameworks have been proposed, they are often based on low-level abstractions, which makes the development of swarm applications complex and error-prone. Thus, we present MacroSwarm, an aggregate programming framework for the development of swarm behaviors. With this framework, it is possible to define a large variety of swarm behaviors, starting from simple movements to more complex ones, such as aggregation, flocking, and collective decision-making. In this paper, we present the main features of the framework and some simple examples of its API usage.
Full paper
Swarm behaviour engineering is inherently challenging due to the need for agents to coordinate based on local interactions. This paper introduces MacroSwarm, a Scala-based aggregate programming framework that supports the design of swarm behaviors via composable, high-level building blocks. These include primitives for tasks like aggregation, flocking, and collective decision-making. Based on field-based coordination, MacroSwarm enables expressive and reusable behavior modeling, making it well suited to large-scale, decentralized swarm systems.
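To give a concrete flavour of what composing building blocks can look like, the sketch below implements a classic flocking behaviour as a weighted combination of alignment, cohesion, and separation in plain Scala. It is an illustration of the compositional idea only: the names and signatures are hypothetical and do not reproduce the actual MacroSwarm API.

```scala
// Illustrative sketch of composable swarm building blocks in plain Scala.
// Names (alignment, cohesion, separation, flock) are hypothetical, not the MacroSwarm API.
object FlockingSketch {
  type Vec = (Double, Double)

  def plus(a: Vec, b: Vec): Vec = (a._1 + b._1, a._2 + b._2)
  def scale(a: Vec, k: Double): Vec = (a._1 * k, a._2 * k)
  def avg(vs: Seq[Vec]): Vec =
    if (vs.isEmpty) (0.0, 0.0) else scale(vs.reduce(plus), 1.0 / vs.size)

  // Each building block maps neighbourhood observations to a velocity contribution.
  def alignment(neighbourVelocities: Seq[Vec]): Vec = avg(neighbourVelocities)
  def cohesion(myPos: Vec, neighbourPositions: Seq[Vec]): Vec =
    plus(avg(neighbourPositions), scale(myPos, -1.0))            // steer towards the local centroid
  def separation(myPos: Vec, neighbourPositions: Seq[Vec]): Vec =
    avg(neighbourPositions.map(p => plus(myPos, scale(p, -1.0)))) // steer away from close neighbours

  // Composition: a flocking behaviour is a weighted sum of simpler blocks.
  def flock(myPos: Vec, nPos: Seq[Vec], nVel: Seq[Vec]): Vec =
    List(
      scale(alignment(nVel), 0.5),
      scale(cohesion(myPos, nPos), 0.3),
      scale(separation(myPos, nPos), 0.8)
    ).reduce(plus)
}
```

In a field-based setting, the same kind of combinators would be applied to neighbourhood fields rather than to plain sequences of observations.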
Pneumonia remains a major global health concern, and chest X-ray imaging is one of the most common diagnostic tools for its detection. This paper presents an experimental comparison of deep learning models applied to pneumonia classification in chest X-ray datasets. The authors evaluate several convolutional neural network (CNN) architectures and training strategies, analyzing their performance in terms of accuracy, sensitivity, and robustness. The results provide insights into the trade-offs between model complexity and diagnostic reliability, offering practical guidance for deploying AI-driven medical imaging tools in healthcare settings.
Efficient deployment of AI models on edge devices requires balancing predictive accuracy with resource constraints. This chapter presents a comparative study of neural network pruning strategies, evaluating their impact on accuracy, sparsity, and computational efficiency. By systematically analyzing different pruning approaches—such as weight pruning, structured pruning, and hybrid strategies—the authors highlight trade-offs between compression rate and performance. The results provide practical guidance for selecting pruning techniques suited for edge environments, supporting reliable and resource-efficient AI deployment in real-world applications.
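As a minimal illustration of one of the strategy families compared in this chapter, the sketch below performs unstructured magnitude pruning in plain Scala: it zeroes the fraction of weights with the smallest absolute values. The flat-array representation and all names are illustrative assumptions, not the chapter's code.

```scala
// Unstructured magnitude pruning: zero the `sparsity` fraction of smallest-magnitude weights.
object MagnitudePruning {
  def prune(weights: Array[Double], sparsity: Double): Array[Double] = {
    require(sparsity >= 0.0 && sparsity <= 1.0)
    val k = (weights.length * sparsity).toInt              // number of weights to remove
    if (k == 0) weights.clone()
    else {
      val threshold = weights.map(math.abs).sorted.apply(k - 1)  // k-th smallest magnitude
      weights.map(w => if (math.abs(w) <= threshold) 0.0 else w)
    }
  }

  def main(args: Array[String]): Unit = {
    val pruned = prune(Array(0.01, -0.5, 0.03, 0.9, -0.02, 0.4), sparsity = 0.5)
    println(pruned.mkString(", "))   // roughly half of the entries become 0.0
  }
}
```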
Federated learning enables collaborative model training across distributed devices without sharing raw data, offering strong privacy guarantees. However, deploying such systems on edge devices faces limitations in computation and energy efficiency. This paper presents a hybrid edge–cloud federated learning framework applied to the problem of lightweight smoking detection. The proposed approach leverages edge devices for preliminary local training and offloads computationally intensive tasks to the cloud, achieving a balance between low-latency inference, privacy, and resource efficiency. Experimental results show that the hybrid strategy outperforms purely edge-based or cloud-based alternatives, making it a promising solution for healthcare monitoring applications.
This article introduces ScaRLib, a Scala-based framework that aims to streamline the development of cyber-physical swarm scenarios (i.e., systems of many interacting distributed devices that collectively accomplish system-wide tasks) by integrating macroprogramming and multi-agent reinforcement learning to design collective behavior. This framework serves as the starting point for a broader toolchain that will integrate these two approaches at multiple points to harness the capabilities of both, enabling the expression of complex and adaptive collective behavior.
Full paper
In recent years, the infrastructure supporting the execution of situated distributed computations has evolved at a fast pace. Modern collective adaptive applications – as found in the Internet of Things, swarm robotics, and social computing – are designed to be executed on very diverse devices and to be deployed on infrastructures composed of devices ranging from cloud servers to wearable devices, constituting together a cloud–edge continuum. The availability of such an infrastructure opens the way to better resource utilisation and performance but, at the same time, introduces new challenges to software designers, as applications must be conceived to be able to adapt to changing deployment domains and conditions. In this paper, we introduce a practical framework for the development of systems based on the concept of pulverisation, meant to neatly separate business logic and deployment concerns, allowing applications to be defined independently of the infrastructure they will execute upon, thus supporting scalability. The framework is based on a domain-specific language capturing, in a declarative fashion: pulverised application components, device capabilities, resource allocation, and (runtime re-)configuration policies. The framework, implemented in Kotlin multiplatform and available as open source, is then evaluated in a small-scale real-world demo and in a city-scale simulated scenario, demonstrating the feasibility of the approach and its potential benefits in achieving better trade-offs between performance and resource utilisation.
Full paper
The edge–cloud continuum provides a heterogeneous, multi-scale, and dynamic infrastructure supporting complex deployment profiles and trade-offs for application scenarios like those found in the Internet of Things and large-scale cyber–physical systems domains. To exploit the continuum, applications should be designed in a way that promotes flexibility and reconfigurability, and proper management (sub-)systems should take care of reconfiguring them in response to changes in the environment or non-functional requirements. Approaches may leverage optimisation-based or heuristic-based policies, and decision making may be centralised or distributed: this work investigates decentralised heuristic-based approaches. In particular, we focus on the pulverisation approach, whereby a distributed software system is automatically partitioned (“pulverised”) into different deployment units. In this context, we address two main research problems: how to support the runtime reconfiguration of pulverised systems, and how to specify decentralised reconfiguration policies from a global perspective. To address the first problem, we design and implement a middleware for pulverised systems separating infrastructural and application concerns. To address the second problem, we leverage aggregate computing and exploit self-organisation patterns to devise self-stabilising reconfiguration strategies. By simulating deployments on different kinds of complex infrastructures, we assess the flexibility of the pulverisation middleware design as well as the effectiveness and resilience of the aggregate computing-based reconfiguration policies.
Full paper
Sparse Neural Networks are gaining attention for their ability to deliver compact, efficient models suitable for resource-constrained environments like edge devices. This chapter examines and compares the performance of sparsely-trained Convolutional Neural Networks versus traditional dense models for binary classification tasks using medical image data. Experiments on low-resolution grayscale images (e.g., 28×28 pixels) demonstrate that high sparsification levels (>75%) can nearly match the performance of dense counterparts while reducing inference time and memory usage—boosting efficiency in edge computing scenarios. However, extremely high sparsity (e.g., >90%) may lead to unstable performance.
Aggregate computing is a promising programming paradigm for modeling and engineering collective adaptive systems, where global behaviors emerge from local interactions among distributed entities. This paper explores an extension towards real-time aggregate computing, aiming to enrich the paradigm with time-aware constructs that guarantee predictable and coordinated system behaviors under timing constraints. The proposed approach provides a foundation for rigorous modeling of distributed systems that must operate in real-time, paving the way for applications in safety-critical and adaptive domains.
Swarm programming is focused on the design and implementation of algorithms for large-scale systems, such as fleets of robots, ensembles of IoT devices, and sensor networks. Writing algorithms for these systems requires skills and familiarity with programming languages, which can be a barrier for non-expert users. Although visual programming environments have been proposed for swarm systems, they are often limited to specific platforms or tasks, and do not provide a high-level programming model that can be used to design algorithms for a wide range of swarm systems. Therefore, in this paper, we propose a low-code swarm programming environment, called ScaFi-Blocks, which allows users to design and implement swarm algorithms visually.
Full paper
The advent of highly distributed systems, such as the Internet of Things, has created a growing need for efficient and resilient runtime monitoring. Among the various monitoring techniques, runtime verification is a lightweight method that assesses the correctness of a running system against a formal specification. In this paper, we investigate the optimization of aggregate monitors, i.e., monitors that operate on ensembles of devices, for properties expressed in the Spatial Logic of Closure Spaces (SLCS), a formal logic designed to reason about spatial relationships between entities in a distributed system. We propose three different algorithms for the implementation of the “somewhere” operator, a key construct in SLCS, and we evaluate their performance through a series of simulations, comparing their convergence time and computational load.
Full paper
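For the paper above, a useful mental model of the “somewhere” operator is reachability of a satisfying node. The sketch below computes it with a naive centralized fixpoint over a static graph in Scala; it is only a baseline illustration of the operator's semantics, not one of the three distributed algorithms proposed in the paper.

```scala
// Naive, centralized semantics of somewhere(p) on a static graph:
// a node satisfies somewhere(p) iff some node reachable from it satisfies p.
object SomewhereSketch {
  def somewhere(neighbours: Map[Int, Set[Int]], p: Set[Int]): Set[Int] = {
    var sat = p                                      // nodes currently known to satisfy somewhere(p)
    var changed = true
    while (changed) {                                // iterate until a fixpoint is reached
      val next = neighbours.collect {
        case (n, ns) if ns.exists(sat.contains) => n
      }.toSet ++ sat
      changed = next != sat
      sat = next
    }
    sat
  }

  def main(args: Array[String]): Unit = {
    val graph = Map(1 -> Set(2), 2 -> Set(1, 3), 3 -> Set(2), 4 -> Set.empty[Int])
    println(somewhere(graph, p = Set(3)))            // contains 1, 2 and 3; node 4 is disconnected
  }
}
```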
Federated Learning has gained increasing interest in recent years, as it allows machine learning models to be trained across a large number of devices by exchanging only the weights of the trained neural networks. Without the need to upload the training data to a central server, privacy concerns and potential bottlenecks can be removed, as less data is transmitted. However, the current state-of-the-art solutions are typically centralized, and do not provide suitable coordination mechanisms to take into account the spatial distribution of devices and local communications, which can sometimes play a crucial role. Therefore, we propose a field-based coordination approach to federated learning, where the devices coordinate with each other through the use of computational fields. We show that this approach can be used to train models in a completely peer-to-peer fashion. Additionally, our approach allows zones of interest to emerge and produces specialized models for each zone, enabling each agent to refine its model for the tasks at hand. We evaluate our approach in a simulated environment leveraging aggregate computing—the reference global-to-local field-based coordination programming paradigm. The results show that our approach is comparable to the state-of-the-art centralized solutions, while enabling a more flexible and scalable approach to federated learning.
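A toy sketch of the underlying coordination pattern, in plain Scala: each device repeatedly replaces its model parameters with the average of those received from its neighbourhood. This illustrates the peer-to-peer averaging idea only and is not the paper's actual field-based algorithm.

```scala
// Peer-to-peer neighbourhood averaging of model parameters (illustration only).
object NeighbourhoodAveraging {
  type Model = Vector[Double]

  def average(models: Seq[Model]): Model =
    models.transpose.map(ps => ps.sum / ps.size).toVector

  // One synchronous round: every device averages its own model with its neighbours' models.
  def step(neighbours: Map[Int, Set[Int]], models: Map[Int, Model]): Map[Int, Model] =
    models.map { case (id, _) =>
      val received = (neighbours.getOrElse(id, Set.empty[Int]) + id).toSeq.map(models)
      id -> average(received)
    }

  def main(args: Array[String]): Unit = {
    val neighbours = Map(1 -> Set(2), 2 -> Set(1, 3), 3 -> Set(2))
    var models = Map(1 -> Vector(0.0, 0.0), 2 -> Vector(1.0, 1.0), 3 -> Vector(2.0, 2.0))
    (1 to 5).foreach(_ => models = step(neighbours, models))
    println(models)   // parameters of connected devices drift towards a common value
  }
}
```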
With the rise of digital health technologies, smartphones have become a promising tool for non-invasive medical screening. This study introduces NeuroEnhanceNet, a deep learning architecture tailored for inertial sensor data collected during walking, enabling early detection of Parkinson’s disease (PD). The pipeline includes preprocessing (normalization, scaling, rotation) of accelerometer signals followed by NeuroEnhanceNet, which captures both long-term intra-channel patterns and inter-channel correlations. The model achieves an impressively low false negative rate of 0.053 for early-stage PD detection. Comparative analysis highlights that gait-derived digital biomarkers outperform those derived from resting-state recordings, underscoring the potential of smartphone-based, walk-derived data for scalable and accurate early PD screening.
Conventional federated learning frameworks typically rely on a central server to coordinate model aggregation, which introduces scalability bottlenecks and fails to address data heterogeneity across geographically distributed clients. This paper introduces Proximity-based Self-Federated Learning (PSFL), a decentralized approach where clients autonomously form local federations based on geographical proximity and data similarity. In PSFL, nodes exchange only model updates within their neighborhoods and self-organize into specialized model groups without any central orchestration. This enables higher adaptability to non-IID data distributions while reducing communication overhead. Experiments on benchmark datasets demonstrate that PSFL achieves superior performance compared to traditional centralized FL in highly heterogeneous environments.
Modeling complex collective behaviors like herding requires capturing both individual motion and group dynamics. This paper presents an agent-based model of directional multi-herds, where autonomous agents form and move in multiple herd-like structures based on direction and local interaction rules. The model explores how simple agent behaviors at the local level give rise to emergent patterns of coordinated movement across multiple sub-herds. Simulations demonstrate that varying interaction parameters can yield different herd formations and movement synchronization, providing insight into decentralized coordination mechanisms.
Federated learning, one of the recent advancements in machine learning, allows a network of distributed clients to collaboratively develop a global model without needing to share their local data. This technique aims to safeguard privacy, countering the vulnerabilities of conventional centralized learning methods. Traditional federated learning approaches often rely on a central server to coordinate model training across clients, aiming to replicate the same model uniformly across all nodes. However, these methods overlook the significance of geographical and local data variances in vast networks, potentially affecting model effectiveness and applicability. Moreover, relying on a central server might become a bottleneck in large networks, such as the ones promoted by edge computing. Our paper introduces a novel, fully-distributed federated learning strategy called proximity-based self-federated learning that enables the self-organised creation of multiple federations of clients based on their geographic proximity and data distribution without exchanging raw data. Indeed, unlike traditional algorithms, our approach encourages clients to share and adjust their models with neighbouring nodes based on geographic proximity and model accuracy. This method not only addresses the limitations posed by diverse data distributions but also enhances the model’s adaptability to different regional characteristics, creating specialized models for each federation.
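One illustrative way to write such a neighbour-weighted model update is given below; the specific functional form (exponential decay in geographic distance and in local loss) is an assumption made here for exposition, not the exact rule used in the paper.

```latex
% Device i mixes neighbouring models with weights that decay with geographic
% distance d_{ij} and grow with how well neighbour j's model fits i's local data
% (local loss L_i). Illustrative form only.
\[
  w_i^{t+1} \;=\; \sum_{j \in N(i) \cup \{i\}} \alpha_{ij}\, w_j^{t},
  \qquad
  \alpha_{ij} \;\propto\; \exp\!\big(-\lambda\, d_{ij}\big)\,\exp\!\big(-\mu\, L_i(w_j^{t})\big),
  \qquad
  \sum_{j} \alpha_{ij} = 1 .
\]
```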
The concept of Edge-cloud Continuum (ECC) serves as a strategic infrastructure for deploying modern Collective-adaptive Systems (CASs). In this framework, heterogeneous devices create a continuum between the edge and the cloud, offering new opportunities and challenges for deploying collective systems such as smart cities, IoT applications, and more. Preliminary work, like the pulverisation approach, models a system as an ensemble of logical entities connected to form a dynamic graph, where each device is decomposed into five independent components (i.e., sensors, actuators, state, communication, and behaviour). This approach addresses the challenge of devising an application partitioning strategy to effectively deploy collective systems in the continuum but does not provide an explicit mechanism to handle dynamic system reconfiguration. For this reason, learning approaches can be effective in managing the dynamic and continuously evolving requirements of the ECC (e.g., latency, power consumption, computational resources). In this paper, we propose a new generation of “Intelligent Collective Services” that uses advanced partitioning models and learning approaches, such as Graph Neural Network (GNN) and Many-agent Reinforcement Learning (MARL), to enhance adaptability and pave the way for the next generation of CAS in the ECC.
Cyber-physical swarms represent a paradigm shift in distributed systems, mirroring characteristics akin to natural swarms, such as self-organization, scalability, and fault tolerance. This paper delves into these complex systems, characterized by vast networks of cyber-physical entities with limited environmental awareness, yet capable of exhibiting emergent collective behaviors. These systems encompass a diverse array of scenarios, ranging from swarm robotics to the interconnectivity in smart cities, as well as the collaboration among augmented humans. The engineering of such systems presents unique challenges, primarily due to their intricate complexity and the spontaneous nature of their collective behaviors. This paper aims to dissect these challenges, offering a clear delineation of potential approaches. We present a comprehensive analysis, shedding light on the intricacies of engineering cyber-physical swarms and discussing modern solutions in engineering collective applications for such systems.
Preservation of the existing biodiversity and wildlife is a crucial task for the future of our planet. To be able to protect and conserve animal populations, it is essential to understand their behavior and the factors that influence it. One of the major sources of information for biologists and ethologists is the acquisition of photos and videos from camera traps, Unmanned Aerial Vehicles (UAVs), and Unmanned Ground Vehicles (UGVs). However, appropriately positioning camera traps and optimizing the movement of unmanned devices is difficult, often requiring trial-and-error, and thus amenable to improvement through in-silico simulation. In this context, an appropriate actionable model of the herd behavior of wildlife is of paramount importance, as it can provide a reasonably realistic context for simulating the deployment and control of unmanned devices before field operations. Using ground-truth data from the Kenyan Animal Behavior Recognition (KABR) dataset, we propose a model of directional multi-herds that can be used to simulate the movement of multiple herds of animals. The model and analysis are enriched by an implementation and evaluation in an existing discrete event simulator.
Selecting the most appropriate evaluation metric for binary classification models remains a foundational challenge. Traditional metrics—such as Accuracy, F1-score, or MCC—can provide conflicting rankings when models’ confusion matrices change. This chapter introduces the Worthiness Benchmark (γ), a novel concept that defines the minimal change required in a confusion matrix for one classifier to be considered superior to another. The authors propose a structured γ-analysis, examining how various evaluation metrics respond to such perturbations and highlighting their implicit ranking principles. This framework offers practitioners clearer guidance when choosing metrics tailored to problem-specific contexts.
Full chapter
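For reference, the three metrics named in the chapter above are computed from the confusion matrix entries (true positives TP, true negatives TN, false positives FP, false negatives FN) as follows.

```latex
\[
  \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
  F_1 = \frac{2\,TP}{2\,TP + FP + FN},
\]
\[
  \mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}
                      {\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} .
\]
```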
Neural networks with high sparsity levels are increasingly studied as a means of reducing computational and memory requirements while retaining predictive accuracy. This chapter investigates high sparsity training strategies in binary classification tasks, focusing on their trade-offs between efficiency and performance. The authors analyze sparsity thresholds and their impact on accuracy, demonstrating that carefully chosen sparsification techniques can preserve classification quality while significantly improving model compactness and execution efficiency. The findings are relevant for deploying AI in resource-constrained environments, such as embedded systems and edge computing.
The burden of Alzheimer’s Disease (AD) extends to both patients and caregivers (CGs), necessitating effective management strategies. Non-pharmacological methods like Walk and Talk Programs have shown promise in enhancing their quality of life. The interactions between patients and CGs and their psychophysical load may influence adherence to the program, but they still lack an objective evaluation. The assessment of these aspects, aligning with the principles of Collective Intelligence (CI), can foster their refinement with consequent superior outcomes. Wearable systems can monitor parameters related to the patient-CG psychophysical load with good acceptance among AD patients. We proposed a multiparametric wearable system embedding two inertial measurement unit (IMU) sensors to monitor key elements in CI, such as heart rate (HR), respiratory rate (RR) and activity level (LA). The feasibility of the whole system was assessed in a pilot study on eight volunteers, replicating patient-CG interactions typical of the Walk and Talk Program. Results showed low mean percentage errors for HR and RR estimations, validated against a reference chest strap. Dynamic conditions notably captured group dynamics, consistently detecting the LAs of the subjects. In summary, our study laid the foundation for a more comprehensive and efficacious AD management approach.
Full paper
In Alzheimer’s disease, maintaining proximity between patients and caregivers (CGs) is crucial for their well-being, necessitating continuous monitoring indoors and outdoors. The integration of Collective Intelligence through collaborative approaches monitored by integrated technologies may be beneficial in enhancing caregiving outcomes. However, non-pharmacological interventions like the ‘Walk and Talk Program’ (WTP) lack objective assessment methods. Multi-sensing approaches leveraging wearables for proximity detection using Bluetooth Low Energy and computer vision technologies for identifying subjects in the scene and measuring dialogue time are vital for accurate evaluation. This paper evaluates the validity and accuracy of these tools in supporting the WTP, through simulated real-life scenarios of patient-CG interactions. A pilot study involving volunteers replicating WTP interactions revealed promising outcomes for both wearable devices and camera systems, emphasizing their potential in advancing dementia care practices.
Full paper
Recent research on collective adaptive systems and macro-programming has shown the importance of programming abstractions for expressing the self-organising behaviour of ensembles, large and dynamic sets of collaborating devices. These generally leverage the interplay between the execution model and the program logic to steer the global-level emergent behaviour of the system. One notable example is the aggregate process abstraction: in an asynchronous round-based computational model, it allows specifying how aggregate-level computations are spawned, take form or spread on a domain of devices, and ultimately quit. Previous presentations of aggregate processes, however, are given in the formal framework of the field calculus, requiring knowledge of its syntax and articulated semantics. To provide a more accessible and language-agnostic presentation of such an abstraction, in this paper we introduce a general formal framework of collective computational processes (CCP). Specifically, as key contribution, we model and describe the programming interface (spawn construct) and dynamics of CCPs on event structures. Furthermore, we also propose novel algorithms for efficient propagation and termination of CCPs, based on statistics on the information speed and a notion of progressive wave-like closure. Crucially, thanks to our theoretical framework, we can provide optimality guarantees for the proposed algorithms, whose performance, superior to the state of the art, is assessed by simulation. Finally, to show applicability of CCPs, we provide a case study of situated service discovery in peer-to-peer networks.
Federated learning traditionally depends on centralized servers to orchestrate model aggregation, which can limit scalability and resilience. This paper introduces a field-based coordination (FBC) paradigm to support federated learning in a fully decentralized way. By leveraging computational fields, devices exchange local information that self-organizes into global coordination patterns, enabling model training without central control. The approach supports partitioned models, autonomous collaboration, and enhanced fault tolerance. Experimental evaluation shows that FBC achieves performance comparable to centralized approaches, while improving scalability and flexibility in distributed and dynamic environments.
Accurate gait analysis plays a key role in rehabilitation and healthcare monitoring, but wearable-based measurements often suffer from calibration errors and variability across subjects. This paper proposes a visual calibration driven gait analysis model, which integrates wearable sensor data with vision-based calibration cues to enhance accuracy and reliability. The framework leverages multimodal data fusion to reduce drift and misalignment in wearable measurements. Experimental evaluation demonstrates improved gait parameter estimation and robustness across diverse walking conditions, making it a promising solution for real-time, personalized gait monitoring in clinical and home settings.
Effective human–machine interaction requires systems not only to perceive the current context but also to project future situations to proactively adapt to user needs. This paper introduces a situation projection framework based on rule mining, enabling machines to infer likely future states from past and present context data. The approach integrates knowledge discovery with cognitive interaction models, enhancing adaptability and responsiveness in complex environments. Experimental validation demonstrates that rule-based projection improves system performance in terms of situation awareness, offering a robust basis for next-generation interactive systems.
Despite the huge importance that centrality metrics have in understanding the topology of a network, little is known about how small alterations in the network’s topology affect the norm of the centrality vector, which stores node centralities. This paper addresses this gap by formalizing centrality definitions and empirically examining three fundamental metrics (Degree, Eigenvector, and Katz centrality) under two probabilistic node-failure models: Uniform (each node removed independently with fixed probability) and Best Connected (removal probability proportional to node degree). The findings show that Degree centrality remains relatively stable under minor perturbations, while Eigenvector and Katz centralities can be extremely sensitive — even small changes may cause large distortions under specific conditions.
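For reference, the three centrality metrics examined here are defined, for a graph with adjacency matrix A, as follows (α and β are the usual Katz parameters, with α strictly below the reciprocal of the largest eigenvalue of A).

```latex
\[
  c_D(i) = \sum_{j} A_{ij}, \qquad
  c_E(i) = \frac{1}{\lambda_{\max}} \sum_{j} A_{ij}\, c_E(j), \qquad
  c_K(i) = \alpha \sum_{j} A_{ij}\, c_K(j) + \beta,
\]
\[
  \text{i.e.}\quad c_K = \beta\,(I - \alpha A)^{-1}\mathbf{1}, \qquad 0 < \alpha < 1/\lambda_{\max}.
\]
```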
Continuous monitoring of sitting posture in wheelchair users can enable early detection of health-related changes in functional status. Traditional observation methods rely on intermittent clinical assessments, which limit timely interventions. This paper presents a novel unsupervised anomaly detection system using pressure, inertial, or related sensor data to automatically identify deviations from personalized sitting patterns, without the need for labelled data. The method operates in two stages: (1) modeling normal posture behavior per user, and (2) detecting anomalies in real time. Comparative analysis across multiple unsupervised algorithms shows that dimensionality reduction techniques notably enhance detection accuracy. The personalization of normal posture models further improves system performance, making this an effective solution for real-time posture monitoring in wheelchair users.
Accurate classification of ECG signals is vital for early detection of cardiovascular conditions, particularly in wearable healthcare devices with limited computational resources. This paper presents Emcnet, an ensemble multiscale convolutional neural network designed for single-lead ECG classification. By combining multiscale convolutional feature extraction with ensemble strategies, Emcnet captures both fine-grained and global temporal patterns in ECG signals. Experimental results on benchmark datasets demonstrate that Emcnet achieves superior accuracy and robustness compared to state-of-the-art baselines, while remaining efficient enough for deployment on wearable platforms.
The design of community-oriented wearable systems requires controlled environments where new algorithms and interaction paradigms can be tested. This paper presents a proximity-based wearable computing testbed that enables experimentation with group dynamics, social interaction models, and context-aware applications. The testbed integrates wearable devices equipped with proximity sensors to capture real-time co-location and interaction patterns. Validation experiments demonstrate the system’s capability to model community behaviors, offering researchers a robust platform for developing and testing collaborative wearable applications in pervasive computing scenarios.
Freezing of Gait (FoG) is one of the most disabling motor symptoms of Parkinson’s disease (PD), often leading to falls and reduced quality of life. This paper proposes AiCarePWP, a deep learning-based framework designed to forecast FoG episodes before they occur, enabling preventive interventions. Leveraging wearable sensor data, AiCarePWP employs temporal modeling to capture subtle gait dynamics and detect precursors of FoG events. Experimental evaluation shows that AiCarePWP achieves high predictive accuracy and robustness compared to conventional detection approaches, offering a path toward real-time, patient-centric monitoring systems for Parkinson’s disease management.
The behavior of distributed systems situated in space can be required to satisfy spatial properties, in addition to the more widely known temporal properties. In particular, it has been previously shown that fully distributed monitors in eXchange Calculus (XC) can be automatically derived for verifying properties of situated systems expressed in the Spatial Logic of Closure Spaces (SLCS). While it has been proven that such monitors eventually compute the truth value of the desired properties, the actual time required for such computations has been thus far disregarded.
In the present paper, we fill this gap by investigating the real-time guarantees that can be given in terms of upper bounds on the time taken by the XC monitors to compute the truth of SLCS properties after stabilisation of inputs.
Full paper
by Audrito, G., Bortoluzzi, D., Damiani, F., Scarso, G., Torta, G.
published in Coordination Models and Languages. COORDINATION 2024. Lecture Notes in Computer Science, vol 14676. Springer, Cham.
Recent work in the area of coordination models and collective adaptive systems promotes a view of distributed computations as functions manipulating computational fields (data structures spread over space and evolving over time) and introduces the eXchange Calculus (XC) as a novel formal foundation for field computations. In XC, evolution (time) and neighbor interaction (space) are handled by a single communication primitive called exchange, working on the neighbouring value data structure to represent both received values and values to share. However, the exchange primitive does not allow information about neighbours to be directly retained across subsequent rounds of computation. This hampers the convenient expression of useful algorithms in XC, such as the computation of a neighbour reliability score.
In this paper, we introduce a new generalised version of the exchange primitive, also implementing it in the FCPP DSL. This primitive allows for neighbour data retention across rounds, strictly expanding the expressiveness of the exchange primitive in XC. The contribution is then evaluated through a case study on distributed sensing in a wireless sensor network of battery-powered devices, exploiting the reliability scores to improve robustness.
Full paper
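To illustrate the kind of per-neighbour state retention that the generalised exchange enables, the plain-Scala sketch below keeps a reliability score per neighbour across rounds, updated as an exponential moving average of whether a message was received. It is deliberately written outside XC/FCPP syntax: the names and the update rule are illustrative assumptions, not the case study's code.

```scala
// Per-neighbour reliability score retained across rounds (illustration only).
object ReliabilityScores {
  // Exponential moving average: observation is 1.0 if the neighbour answered this round, else 0.0.
  def update(previous: Map[Int, Double], heardFrom: Set[Int], known: Set[Int],
             alpha: Double = 0.2): Map[Int, Double] =
    known.map { n =>
      val old = previous.getOrElse(n, 0.5)          // neutral default for newly seen neighbours
      val obs = if (heardFrom.contains(n)) 1.0 else 0.0
      n -> ((1 - alpha) * old + alpha * obs)
    }.toMap

  def main(args: Array[String]): Unit = {
    val known = Set(1, 2, 3)
    var scores = Map.empty[Int, Double]
    scores = update(scores, heardFrom = Set(1, 2), known = known)
    scores = update(scores, heardFrom = Set(1), known = known)
    println(scores)   // neighbour 1 trends high, neighbour 3 trends low
  }
}
```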
In the field of deep learning (DL), the deployment of complex Neural Network (NN) models on memory-constrained devices presents a significant challenge. TinyML focuses on optimizing DL models for such environments, where computational and storage resources are limited. A key aspect of this optimization is reducing the size of the models without unduly compromising their performance.
We have investigated the efficacy of various quantization techniques in optimizing DL models for deployment on memory-constrained devices. To understand the challenges posed by the memory requirements of standard deep learning models, we conducted comprehensive literature reviews and identified quantization methods as a potent approach for model size reduction. Our study targets popular NN architectures such as ResNetV1 and V2, MobileNetV1 and V2, and introduces a custom-designed model, examining their suitability for TinyML constraints.
We analyzed the CIFAR-10 and MNIST datasets to assess the impact of four distinct quantization techniques on model size and accuracy. These techniques include Dynamic Range Quantization, Full Integer Quantization, Float16 Quantization, and Integer 16×8 Quantization. Our aim is to contribute valuable insights into model optimization for efficient deployment in resource-limited environments.
Full paper
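As a sketch of the arithmetic underlying post-training weight quantization (the core idea behind dynamic range quantization of weights), the plain-Scala snippet below stores a weight tensor as int8 values plus a per-tensor symmetric scale and dequantizes on the fly. It is illustrative only and not tied to any specific toolchain.

```scala
// Symmetric per-tensor int8 quantization of a weight vector (illustration only).
object Int8WeightQuantization {
  final case class Quantized(values: Array[Byte], scale: Double)

  def quantize(weights: Array[Double]): Quantized = {
    val maxAbs = weights.map(math.abs).foldLeft(1e-12)((a, b) => math.max(a, b))
    val scale = maxAbs / 127.0                                   // one scale for the whole tensor
    val q = weights.map(w => math.round(w / scale).toInt.max(-127).min(127).toByte)
    Quantized(q, scale)
  }

  def dequantize(q: Quantized): Array[Double] = q.values.map(_ * q.scale)

  def main(args: Array[String]): Unit = {
    val w = Array(-0.81, 0.02, 0.45, -0.10, 0.77)
    val dq = dequantize(quantize(w))
    val maxErr = w.zip(dq).map { case (a, b) => math.abs(a - b) }.max
    println(f"max reconstruction error: $maxErr%.4f")            // small relative to the value range
  }
}
```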
presented as part of a poster session, in press
Springer International Publishing (in press)
Wearable computing systems generate vast streams of multimodal data, requiring intelligent mechanisms to recognize and interpret user situations. This paper proposes a situation identification framework that integrates machine learning models with context space theory (CST) to effectively map raw sensor data into high-level contextual states. The approach provides a systematic way to handle uncertainty and dynamic changes in the environment, while enabling robust and adaptive situation recognition. Experimental evaluation shows that combining CST with learning techniques significantly improves accuracy and generalization in complex real-world scenarios, laying a foundation for next-generation smart wearable systems.
Nowadays, there is an ever-growing interest in assessing the collective intelligence (CI) of a team in a wide range of scenarios, thanks to its potential in enhancing teamwork and group performance. Recently, special attention has been devoted to the clinical setting, where breakdowns in teamwork, leadership, and communication can lead to adverse events, compromising patient safety. So far, researchers have mostly relied on surveys to study human behavior and group dynamics; however, this method is ineffective. In contrast, a promising solution to monitor behavioral and individual features that are reflective of CI is represented by wearable technologies. To date, the field of CI assessment still appears unstructured; therefore, the aim of this narrative review is to provide a detailed overview of the main group and individual parameters that can be monitored to evaluate CI in clinical settings, together with the wearables either already used to assess them or that have the potential to be applied in this scenario. The working principles, advantages, and disadvantages of each device are introduced in order to try to bring order to this field and provide a guide for future CI investigations in medical contexts.
Full paper
Sparse neural networks are increasingly recognized for their ability to deliver compact yet accurate models. While much work has focused on binary classification, this chapter investigates the role of high sparsity strategies in multi-class classification tasks. The authors evaluate the trade-offs between sparsity levels, classification accuracy, and model efficiency across diverse datasets. Results indicate that, under appropriate sparsity thresholds, multi-class classifiers not only maintain performance but in some cases even outperform their dense counterparts. These findings support the use of sparsification as a practical method for building efficient AI models deployable in real-world, resource-constrained environments.
Smartwatch sensors generate continuous multimodal data streams that can be exploited for human activity monitoring in real time. This paper proposes an attention-based multihead deep learning framework that effectively captures both temporal dynamics and cross-sensor dependencies. The model employs multihead attention mechanisms to learn discriminative feature representations from raw smartwatch signals, enabling accurate recognition of diverse daily activities. Extensive experiments demonstrate that the framework outperforms conventional deep models in terms of accuracy and robustness, highlighting its potential for online, real-world activity monitoring in IoT-enabled healthcare and lifestyle applications.
Engineering self-organising systems – e.g., robot swarms, collectives of wearables, or distributed infrastructures – has been investigated and addressed through various kinds of approaches: devising algorithms by taking inspiration from nature, relying on design patterns, using learning to synthesise behaviour from expectations of emergent behaviour, and exposing key mechanisms and abstractions at the level of a programming language. Focussing on the latter approach, most of the state-of-the-art languages for self-organisation leverage a round-based execution model, where devices repeatedly evaluate their context and control program fully: this model is simple to reason about but limited in terms of flexibility and fine-grained management of sub-activities. Taking inspiration from the so-called functional reactive paradigm, in this paper we propose a reactive self-organisation programming approach that makes it possible to fully decouple the program logic from the scheduling of its sub-activities. Specifically, we implement the idea through a functional reactive implementation of aggregate programming in Scala, based on the functional reactive library Sodium. The result is a functional reactive self-organisation programming model, called FRASP, that maintains the same expressiveness and benefits of aggregate programming, while enabling significant improvements in terms of scheduling controllability, flexibility in the sensing/actuation model, and execution efficiency.
Considerable progress has been made in developing sensors and wearable systems for monitoring physiological parameters in different fields. Among these, healthcare and sports are showing increasing interest in monitoring respiratory rate through these sensors. However, several open challenges limit their reliability. This study presents the design, development, and testing of a wearable sensor based on conductive textiles for respiratory monitoring in sports. Our approach involved an initial analysis of the breathing kinematics to investigate the magnitude of chest wall strains during breathing. This analysis was useful to guide the design of the sensing element, as well as the metrological characterization of the sensor and its integration into a wearable strap. A pilot experiment was then carried out on a healthy volunteer to assess the sensor’s performance under three different breathing patterns (bradypnea, quiet breathing, and tachypnea) using a wearable reference system. The obtained results are very promising and contribute to the development of a reliable and efficient wearable device for monitoring respiratory rate. Furthermore, the design process employed in this study provides insight into the attributes needed to accurately capture breathing movements while maintaining comfort and usability.
Full paper
An interesting and innovative activity in Collective Intelligence systems is Sentiment Analysis (SA) which, starting from users’ feedback, aims to identify their opinion about a specific subject, for example in order to develop/improve/customize products and services. The feedback gathering, however, is complex, time-consuming, and often invasive, possibly resulting in decreased truthfulness and reliability for its outcome. Moreover, the subsequent feedback processing may suffer from scalability, cost, and privacy issues when the sample size is large or the data to be processed is sensitive. Internet of Things (IoT) and Edge Intelligence (EI) can greatly help in both aspects by providing, respectively, a pervasive and transparent way to collect a huge amount of heterogeneous data from users (e.g., audio, images, video, etc.) and an efficient, low-cost, and privacy-preserving solution to locally analyze them without resorting to Cloud computing-based platforms. Therefore, in this paper we outline an innovative collective SA system which leverages IoT and EI (specifically, TinyML techniques and the EdgeImpulse platform) to gather and immediately process audio in the proximity of entities-of-interest in order to determine whether the audience’s opinions are positive, negative, or neutral. The architecture of the proposed system, exemplified in a museum use case, is presented, and a preliminary, yet very promising, implementation is shown, revealing interesting insights towards its full development.
The expansion of Internet of Things (IoT) technology has led to the widespread use of sensors in various everyday environments, including healthcare. Body Sensor Networks (BSNs) enable continuous monitoring of human physiological signals and activities, benefiting healthcare and well-being. However, existing BSN systems primarily focus on single-user activity recognition, disregarding multi-user scenarios. Therefore, this paper introduces a collaborative BSN-based architecture for multi-user activity recognition to identify group collaborations among nearby users. We first discuss the general problem of multi-user activity recognition, the associated challenges along with potential solutions (such as data processing, mining techniques, sensor noise, and the complexity of multi-user activities) and, then, the software abstractions and the components of our architecture. This represents an innovative collective intelligence solution and holds significant potential for enhancing healthcare and well-being applications by enabling real-time detection of group activities and behaviors.
This paper introduces FRASP (Functional Reactive Approach to Self-Organisation Programming), a novel macro-programming paradigm that reinterprets aggregate computing through the lens of functional reactive programming (FRP). Implemented as a Scala DSL, FRASP provides reactive abstractions—such as Flow for time-varying distributed computations and NbrField for neighbor data—that enable declarative specifications of self-organizing behaviors. Examples include gradient creation and self-healing communication channels. Through microbenchmarks, FRASP demonstrates efficient and expressive modeling of adaptive collective behaviors in distributed systems.
The healthcare industry faces challenges due to rising treatment costs, an aging population, and limited medical resources. Remote monitoring technology offers a promising solution to these issues. This paper introduces an innovative adaptive method that deploys an Ultra-Wideband (UWB) radar-based Internet-of-Medical-Things (IoMT) system to remotely monitor elderly individuals’ vital signs and fall events during their daily routines. The system employs edge computing for prioritizing critical tasks and a combined cloud infrastructure for further processing and storage. This approach enables monitoring and telehealth services for elderly individuals. A case study demonstrates the system’s effectiveness in accurately recognizing high-risk conditions and abnormal activities such as sleep apnea and falls. The experimental results show that the proposed system achieved high accuracy levels, with a Mean Absolute Error (MAE) ± Standard Deviation of Absolute Error (SDAE) of 1.23±1.16 bpm for heart rate (HR) detection and 0.22±0.27 bpm for respiratory rate (RR) detection. Moreover, the system demonstrated a recognition accuracy of 90.60% for three types of falls (i.e., stand, bow, squat to fall), one daily activity, and No Activity Background. These findings indicate that the radar sensor provides a high degree of accuracy suitable for various remote monitoring applications, thus enhancing the safety and well-being of elderly individuals in their homes.
The development of feature-oriented programming (FOP) and of (its generalization) delta-oriented programming (DOP) has focused primarily on SPLs of class-based object-oriented programs. In this paper, we introduce delta-oriented SPLs of functional programs with algebraic data types (ADTs). To pave the way towards SPLs of multi-paradigm programs, we tailor our presentation to the functional sublanguage of the multi-paradigm modeling language ABS, which already features DOP support for its class-based object-oriented sublanguage. Our main contributions are: (i) we motivate and illustrate our proposal by an example from an industrial modeling scenario; (ii) we formalize delta-oriented SPLs for functional programs with ADTs in terms of a foundational calculus; (iii) we define family-based analyses to check whether an SPL satisfies certain well-formedness conditions and whether all variants can be generated and are well-typed; and (iv) we briefly outline how, in the context of the toolchain of ABS, the proposed delta-oriented constructs and analyses for functional programs can be integrated with their counterparts for object-oriented programs.
Social activities are a fundamental form of social interaction in our daily life. Current smart systems based on human-computer interaction (e.g. for security, safety, and healthcare applications) may significantly benefit and often require an understanding of users’ individual and group activities performed. Recent advancements in Wi-Fi signal analysis suggest that this pervasive communication infrastructure can also represent a convenient, non-invasive, contactless sensing method to detect human activities. In this paper, we propose a data-level fusion method based on Wi-Fi Channel State Information (CSI) analysis to recognize social activities (e.g., walking together) and gestures (e.g., hand-shaking) in an indoor environment. Our results show that off-the-shelf Wi-Fi devices can be effectively used as a contact-less sensing method for social activity recognition alternative to other approaches such as those based on computer vision and wearable sensors.
Stream Runtime Verification (SRV) has been recently proposed for monitoring input streams of data while producing output streams in response. The Aggregate Programming (AP) paradigm for collection of distributed devices has been used to implement distributed runtime verification of spatial and temporal Boolean properties. In this paper we outline how distributed SRV could be implemented by AP and the new opportunities AP could bring to the field of distributed SRV.
The importance of monitoring groups of devices working together towards shared global objectives is growing, for instance when they are used for crucial purposes like search and rescue operations during emergencies. Effective approaches in this context include expressing global properties of a swarm as logical formulas in a spatial or temporal logic, which can be automatically translated into executable distributed run-time monitors. This can be accomplished leveraging frameworks such as Aggregate Computing (AC), and proving non-trivial “translation correctness” results, in which subtle bugs may easily hide if relying on hand-made proofs.
In this paper, we present an implementation of AC in Coq, which makes it possible to automatically verify monitor correctness, further raising the security level of the monitored system. This implementation may also make it possible to integrate static analysis of program correctness properties with run-time monitors for properties too difficult to prove in Coq. We showcase the usefulness of our implementation by means of a paradigmatic example, proving the correctness of an AC monitor for a past-CTL formula in Coq.
Deploying binary classifiers on constrained platforms requires models that are both compact and accurate. This chapter explores the miniaturisation of binary classifiers through sparse neural networks, showing how sparsification techniques can significantly reduce memory and computational costs while retaining high predictive performance. The authors present comparative analyses of different sparsity levels, demonstrating the trade-offs between efficiency and accuracy, and discuss the potential of sparse models as a practical solution for resource-limited edge and embedded AI applications.
Recent trends like the Internet of Things (IoT) suggest a vision of dense and multi-scale deployments of computing devices in nearly all kinds of environments. A prominent engineering challenge revolves around programming the collective adaptive behaviour of such computational ecosystems. This requires abstractions able to capture concepts like ensembles (dynamic groups of cooperating devices) and collective tasks (joint activities carried out by ensembles). In this work, we consider collections of devices interacting with neighbours and that execute in nearly-synchronised sense–compute–interact rounds, where the computation is given by a single control program. To support programming whole computational collectives, we propose the abstraction of a distributed collective process (DCP), which can be used to define at once the ensemble formation logic and its collective task. We implement the abstraction in the eXchange Calculus (XC), a core language based on neighbouring values (maps from neighbours to values) where state management and interaction is handled through a single primitive, exchange. Then, we discuss the features of the abstraction, its suitability for different kinds of distributed computing applications, and provide a proof-of-concept implementation of a wave-like process propagation.
In the last decades, many smart sensing solutions have been provided for monitoring human health ranging from systems equipped with electrical to mechanical and optical sensors. In this scenario, wearables based on fiber optic sensors like fiber Bragg gratings (FBGs) can be a valuable solution since they show many advantages over the competitors, like miniaturized size, lightness, and high sensitivity. Unfortunately, one of the main issues with this technology is its inherent fragility. For this reason, various encapsulation modalities have been proposed to embed FBG into flexible biocompatible materials for robustness improvements and skin-like appearance. Recently, 3D printing techniques have been proposed to innovate this process thanks to their numerous advantages like a quick fabrication process, high accuracy, repeatability, and resolution. Moreover, the possibility of easily customizing the sensor design by choosing a set of printing parameters (e.g., printing orientation, material selection, shape, size, density, and pattern) can help in developing sensing solutions optimized for specific applications. Here, we present a 3D-printed sensor developed by fused deposition modeling (FDM) with a rectangular shape. A detailed description of the design and fabrication stages is proposed. In addition, changes in the spectral response as well as in the metrological properties of the embedded FBG sensor are investigated. The presented data can be utilized not only for improving and optimizing design and fabrication processes but may also be beneficial for future research on the production of highly sensitive 3D-printed sensors for applications in wearable technology and, more generally, healthcare settings.
Full paper
by Reza Alizadehsani, Maryam Roshanzamir, Niloofar H. Izadi, Raffaele Gravina, H. D. Kabir, D. Nahavandi, Giancarlo Fortino, et al.
published in Sensors, vol. 23, no. 3, 1466, 2023
The integration of swarm intelligence (SI) with the Internet of Medical Things (IoMT) is opening new avenues for intelligent, cooperative healthcare solutions. This comprehensive review surveys the application of SI algorithms—such as particle swarm optimization, ant colony optimization, and artificial bee colony—in IoMT contexts. The paper highlights how swarm-inspired approaches can enhance resource allocation, anomaly detection, patient monitoring, and decision support in distributed healthcare systems. Key challenges, including scalability, security, and energy efficiency, are discussed along with future research directions, positioning SI as a cornerstone for next-generation IoMT-based smart healthcare ecosystems.
by Lucia Cavallaro, Pasquale De Meo, Keyvan Golalipour, Xiaoyang Liu, Giacomo Fiumara, Andrea Tagarelli, Antonio Liotta
published in Complex Networks and Their Applications XI, Springer, Vol. 1078, pp. 433-444, 2023
Centrality measures such as Degree, Eigenvector, and Katz play a pivotal role in understanding the structure and functioning of complex networks. However, these metrics may be sensitive to graph perturbations—changes like random node failures or targeted attacks. In this chapter, the authors examine how small perturbations, modelled via Uniform and Best Connected probabilistic failure models, affect different centrality metrics. They find that Eigenvector centrality is particularly susceptible under uniform perturbations, while in targeted (Best Connected) scenarios the degree of perturbation scales with the proportion of attacked nodes. This study sheds light on the robustness of centrality measures and offers guidance on their reliability in perturbed network environments.
Supported by current socio-scientific trends, programming the global behaviour of whole computational collectives makes for great opportunities, but also significant challenges. Recently, aggregate computing has emerged as a prominent paradigm for so-called collective adaptive systems programming. To shorten the gap between such research endeavours and mainstream software development and engineering, we present ScaFi, a Scala toolkit providing an internal domain-specific language, libraries, a simulation environment, and runtime support for practical aggregate computing systems development.
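To give a flavor of the internal DSL, the snippet below sketches the classic self-healing gradient written with ScaFi-style constructs (rep, nbr, mux, foldhoodPlus), paraphrased from published ScaFi examples. The import path, trait names, and exact signatures are assumptions that may differ across ScaFi versions and should be checked against the toolkit's documentation.

```scala
// Sketch of a classic aggregate program against ScaFi-style constructs.
// Import path, trait names, and signatures are assumptions based on published
// ScaFi examples; verify against the version of the toolkit in use.
import it.unibo.scafi.incarnations.BasicSimulationIncarnation._

class GradientProgram extends AggregateProgram with StandardSensors {
  // Self-healing gradient: each device estimates its distance to the nearest
  // source by repeatedly taking the minimum over neighbours' estimates plus
  // the estimated distance to each neighbour.
  def gradient(source: Boolean): Double =
    rep(Double.PositiveInfinity) { d =>
      mux(source) { 0.0 } {
        foldhoodPlus(Double.PositiveInfinity)(math.min)(nbr { d } + nbrRange())
      }
    }

  override def main(): Double = gradient(sense[Boolean]("source"))
}
```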
The Internet of Things and edge computing are fostering a future of ecosystems hosting complex decentralized computations, deeply integrated with our very dynamic environments. Digitalized buildings, communities of people, and cities will be the next-generation “hardware and platform,” counting myriads of interconnected devices, on top of which intrinsically distributed computational processes will run and self-organize. They will spontaneously spawn, diffuse to pertinent logical/physical regions, cooperate and compete, opportunistically summon required resources, collect and analyze data, compute results, trigger distributed actions, and eventually decay. What would a programming model for such ecosystems look like? Based on research findings on self-adaptive/self-organizing systems, this paper proposes design abstractions based on “dynamic decentralization domains”: regions of space opportunistically formed to support situated recognition and action. We embody the approach into a Scala application program interface (API) enacting distributed execution and show its applicability in a case study of environmental monitoring.
by Daniela Lo Presti, Francesca De Tommasi, Chiara Romano, Blandina Lanni, Massimiliano Carassiti, Giancarlo Fortino, Emiliano Schena
published in 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech)
The ability of a team to work together across a wide variety of tasks is known as collective intelligence (CI). In recent decades, CI has been gaining traction in healthcare, given its potential to enhance teamwork and patient safety through improved medical procedures. Nevertheless, CI remains poorly characterized in the clinical setting, and its implications for improving teamwork and surgical outcomes are scarcely addressed in the literature. Recently, wearable systems have been used to measure physiological signals and quantify the group behaviors of a surgical team. However, no work has yet focused on investigating how individual characteristics and group behaviors can be combined to establish models of effective teamwork and, consequently, strengthen CI. In this study, we propose the combined use of a wearable system and video recordings to quantitatively assess changes in the physiological traits of two team members (a medical trainee and an anesthesiologist) before and during a medical procedure. In detail, a wearable chest strap was used to monitor vital signs and the level of activity of each user, while a video was contextually recorded to evaluate the level of teamwork in terms of speaking time and face-to-face interactions. The proposed technologies were able to operate in the scenario of interest, recording data useful to quantify aspects related to both individual traits and human interactions. The most remarkable changes according to the level of experience were found in the heart rate and its variability. These promising results will foster future investigations in clinical scenarios involving a higher number of team members and more challenging medical procedures (e.g., inside the operating room), with the aim of improving team effectiveness and supporting the development of CI in clinical settings.
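Since the most marked differences were observed in heart rate and its variability, a brief note on how such variability is commonly quantified may help. SDNN and RMSSD are standard time-domain HRV metrics computed from successive RR intervals; the sketch below is an illustrative computation, not the study's actual analysis pipeline, and the sample RR values are invented.

```scala
// Illustrative computation of two standard heart-rate-variability metrics
// (SDNN and RMSSD) from RR intervals in milliseconds. Textbook signal
// processing, not the analysis pipeline used in the study.
object HrvSketch {
  def mean(xs: Seq[Double]): Double = xs.sum / xs.size

  // SDNN: standard deviation of the RR (NN) intervals.
  def sdnn(rr: Seq[Double]): Double = {
    val m = mean(rr)
    math.sqrt(rr.map(x => math.pow(x - m, 2)).sum / (rr.size - 1))
  }

  // RMSSD: root mean square of successive RR-interval differences.
  def rmssd(rr: Seq[Double]): Double = {
    val diffs = rr.sliding(2).map { case Seq(a, b) => b - a }.toSeq
    math.sqrt(diffs.map(d => d * d).sum / diffs.size)
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical RR intervals (ms) from a chest-strap ECG.
    val rr = Seq(812.0, 790.0, 835.0, 820.0, 805.0, 798.0, 842.0, 815.0)
    println(f"mean HR: ${60000.0 / mean(rr)}%.1f bpm, SDNN: ${sdnn(rr)}%.1f ms, RMSSD: ${rmssd(rr)}%.1f ms")
  }
}
```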
Full paper
Swarm intelligence leverages collective behaviours emerging from the interaction and activity of many “simple” agents to solve problems in various environments. One problem of interest in large swarms featuring a variety of sub-goals is swarm clustering, where the individuals of a swarm are assigned or choose to belong to zero or more groups, also called clusters. In this work, we address the sensing-based swarm clustering problem, where clusters are defined based on both the values sensed from the environment and the spatial distribution of those values and of the agents. Moreover, we address it in a setting characterised by decentralisation of computation and interaction, dynamicity of values, and mobility of agents. For the solution, we propose to use the field-based computing paradigm, where computation and interaction are expressed in terms of a functional manipulation of fields, i.e., distributed and evolving data structures mapping each individual of the system to values over time. We devise a solution to sensing-based swarm clustering leveraging multiple concurrent field computations with limited domain, and evaluate the approach experimentally by means of simulations, showing that the programmed swarms form clusters that closely reflect the dynamics of the underlying environmental phenomena.
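The sketch below conveys only the basic intuition in a centralized, much simplified form: each agent joins the cluster led by the highest-sensing agent within its communication range, so that clusters reflect both sensed values and spatial proximity. The paper's actual solution is decentralized and field-based; all names, parameters, and data here are illustrative assumptions.

```scala
// Simplified, centralized sketch of sensing-based clustering: each agent
// adopts as cluster leader the agent with the highest sensed value among
// those within communication range (itself included). Intuition only; the
// paper's solution is decentralized and expressed as field computations.
object SwarmClusteringSketch {
  final case class Agent(id: Int, x: Double, y: Double, sensed: Double)

  def dist(a: Agent, b: Agent): Double = math.hypot(a.x - b.x, a.y - b.y)

  def clusterOf(agents: Seq[Agent], range: Double): Map[Int, Int] =
    agents.map { a =>
      val leader = agents.filter(b => dist(a, b) <= range).maxBy(_.sensed)
      a.id -> leader.id
    }.toMap

  def main(args: Array[String]): Unit = {
    val agents = Seq(
      Agent(0, 0.0, 0.0, 0.9), Agent(1, 1.0, 0.5, 0.4), Agent(2, 0.5, 1.0, 0.3),
      Agent(3, 8.0, 8.0, 0.7), Agent(4, 8.5, 7.5, 0.2), Agent(5, 9.0, 8.5, 0.1)
    )
    clusterOf(agents, range = 2.5).toSeq.sortBy(_._1).foreach {
      case (agent, leader) => println(s"agent $agent -> cluster led by $leader")
    }
  }
}
```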
The use of wearable devices in daily activities is continuously and rapidly growing. Wearable technology provides seamless sensing, monitoring, and multimodal interaction, without requiring continuous manual intervention and effort from users. These devices support the realization of novel applications in many domains, from healthcare to security and entertainment, improving the quality of life of users. The situation awareness paradigm allows wearable computing systems to be aware of what is happening to users and in the surrounding environment, supporting automatic smart adaptive behaviors based on the identified situation. Although situation-aware wearable devices have recently attracted a lot of attention, there is still a lack of methodological approaches and reference models for defining such systems. In this paper, we propose a reference architecture for situation-aware wearable computing systems grounded on Endsley’s SA model. A specialization of the architecture in the context of multi-user wearable computing systems is also proposed to support team situation awareness. An illustrative example shows a practical instantiation of the architecture in the context of contact tracing using smart sensorized face masks.
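Endsley’s model structures situation awareness into three levels: perception of the relevant elements, comprehension of their meaning, and projection of their future status. The sketch below shows one hypothetical way a wearable pipeline could mirror these levels for the contact-tracing example; all type and method names are invented for illustration and do not come from the paper's reference architecture.

```scala
// Hypothetical sketch of a three-stage pipeline mirroring Endsley's SA levels
// (perception -> comprehension -> projection). Names and types are invented
// for illustration; they are not the paper's reference architecture.
object SituationAwarenessSketch {
  final case class SensorReading(source: String, value: Double, timestamp: Long)
  final case class Situation(label: String, confidence: Double)
  final case class Projection(nextSituation: String, horizonSeconds: Int)

  trait Perception     { def perceive(): Seq[SensorReading] }
  trait Comprehension  { def comprehend(readings: Seq[SensorReading]): Situation }
  trait ProjectionStep { def project(current: Situation): Projection }

  // Toy instantiation: proximity readings (metres) from a sensorized face mask.
  object MaskPerception extends Perception {
    def perceive(): Seq[SensorReading] =
      Seq(SensorReading("proximity-m", 0.8, System.currentTimeMillis()))
  }
  object ContactComprehension extends Comprehension {
    def comprehend(rs: Seq[SensorReading]): Situation = {
      val close = rs.exists(r => r.source == "proximity-m" && r.value < 1.0)
      if (close) Situation("close-contact", 0.9) else Situation("no-contact", 0.9)
    }
  }
  object ContactProjection extends ProjectionStep {
    def project(s: Situation): Projection =
      if (s.label == "close-contact") Projection("prolonged-contact-risk", 60)
      else Projection("no-contact", 60)
  }

  def main(args: Array[String]): Unit = {
    val situation  = ContactComprehension.comprehend(MaskPerception.perceive())
    val projection = ContactProjection.project(situation)
    println(s"situation: $situation, projection: $projection")
  }
}
```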
Background and objective: the COVID-19 outbreak has become one of the most challenging problems for human beings. It is a communicable disease caused by a new coronavirus strain, which has already infected over 375 million people and caused almost 6 million deaths. This paper aims to design and develop a framework for early diagnosis and fast classification of COVID-19 symptoms using multimodal deep learning techniques. Methods: we collected chest X-ray and cough sample data from open-source datasets, such as the Cohen dataset, and from local hospitals, and extracted features from the chest X-ray images. We also used cough audio datasets from the Coswara project and local hospitals. The publicly available Coughvid, DetectNow, and Virufy datasets are used to evaluate COVID-19 detection based on speech, respiratory, and cough sounds. The collected audio data comprises slow and fast breathing, shallow and deep coughing, spoken digits, and phonation of sustained vowels. Gender, geographical location, age, preexisting medical conditions, and current health status (COVID-19 and non-COVID-19) are recorded. Results: the proposed framework uses a pre-trained network selection algorithm to determine the best fusion model from the pre-trained chest X-ray and cough models. Deep feature fusion by discriminant correlation analysis is then used to fuse the discriminatory features of the two models. The proposed framework achieved recognition accuracy, specificity, and sensitivity of 98.91%, 96.25%, and 97.69%, respectively. With the fusion method, we obtained 94.99% accuracy. Conclusion: this paper examines the effectiveness of well-known ML architectures on a joint collection of chest X-rays and cough samples for early classification of COVID-19. It shows that existing methods can be effectively used for diagnosis, and suggests that the fusion learning paradigm could be a crucial asset in diagnosing future unknown illnesses. The proposed framework supports health informatics for early diagnosis, clinical decision support, and accurate prediction.
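As a rough illustration of late feature fusion (a much simplified stand-in for the discriminant correlation analysis used in the paper), the sketch below z-score normalizes two modality feature vectors, concatenates them, and scores the result with a linear classifier; the feature dimensions, weights, and values are entirely invented.

```scala
// Much simplified stand-in for multimodal feature fusion: normalize the chest
// X-ray and cough feature vectors, concatenate them, and score with a linear
// classifier. The paper uses pretrained deep models and discriminant
// correlation analysis; all dimensions, weights, and values here are invented.
object FusionSketch {
  def zscore(v: Vector[Double]): Vector[Double] = {
    val m = v.sum / v.size
    val s = math.sqrt(v.map(x => math.pow(x - m, 2)).sum / v.size)
    if (s == 0) v.map(_ => 0.0) else v.map(x => (x - m) / s)
  }

  def fuse(xray: Vector[Double], cough: Vector[Double]): Vector[Double] =
    zscore(xray) ++ zscore(cough)

  // Linear score with invented weights; a positive score is read as "positive".
  def score(features: Vector[Double], weights: Vector[Double]): Double =
    features.zip(weights).map { case (f, w) => f * w }.sum

  def main(args: Array[String]): Unit = {
    val xrayFeatures  = Vector(0.2, 0.7, 0.1, 0.9) // toy stand-in for a CNN embedding
    val coughFeatures = Vector(0.5, 0.3, 0.8)      // toy stand-in for an audio embedding
    val fused   = fuse(xrayFeatures, coughFeatures)
    val weights = Vector(0.4, -0.2, 0.1, 0.6, -0.3, 0.5, 0.2)
    println(f"fused score: ${score(fused, weights)}%.3f")
  }
}
```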
In the last fifteen years, there has been a widespread diffusion of wearable sensorized devices for a plethora of applications in heterogeneous domains. Wearable technology provides fundamental capabilities such as smart sensing, monitoring, data recording, and multi-modal interaction, in a seamless, pervasive, and easy-to-use way. An emerging research trend is the definition of situation-aware wearable computing systems, i.e., wearable devices able to perceive and understand what is happening in the environment in order to adapt their behavior and anticipate users’ needs, a capability known as situation awareness. Despite the increasing interest of the research community in situation-aware wearable devices, there is a lack of studies, formal models, methodological approaches, and theoretical foundations on which these systems can be grounded. As a result, a very limited number of smart sensors (physical or virtual) capable of effectively and efficiently supporting situation awareness have been proposed so far. In this article, we provide a survey and a classification of state-of-the-art situation-aware wearable systems, outlining current research trends, shortcomings, and challenges, with an emphasis on the models, approaches, and computational techniques of situation awareness and wearable computing on which they are based. The survey has been performed using the PRISMA methodology for systematic reviews. The analysis has been conducted with respect to a reference architecture, namely SA-WCS, of a generic situation-aware wearable computing system that we propose in this article, grounded on Endsley’s model of Situation Awareness. Such a reference architecture not only provides a systematic framework for the comparison and categorization of the surveyed works, but also aims to promote the development of the next generation of WCSs.
Why do groups perform better than individuals? The answer lies in the concept of so-called Collective Intelligence (CI). CI is defined as the ability of a group to perform a wide variety of tasks, and team behavior and individual characteristics are consistent predictors of CI. A complex environment in which CI is increasingly recognized as a determinant of safe and efficient functioning is the operating room (OR), where individual inputs and efforts should be adapted to those of teammates to accomplish shared goals. To date, although teamwork failure accounts for 70–80% of serious medical errors, the lack of quantitative measurements of individual responses and interpersonal dynamics leaves CI poorly characterized in the OR. This work proposes an innovative wearable platform for monitoring the physiological biomarkers and joint movements of individuals while performing tasks. The ECG trace and breathing waveform, combined with skin conductance and movement patterns of both wrist and elbow, are recorded unobtrusively and without impairing any activity of the user. The preliminary assessment of these devices was carried out by performing two trials (i.e., in a static condition, to obtain the user baseline, and while handling tools to simulate typical surgical tasks). This study, with its preliminary findings, can be considered a first attempt toward establishing an innovative strategy to improve team performance and, consequently, surgical outcomes and patient safety in the clinical routine.
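To make the multimodal monitoring idea concrete, the sketch below computes simple per-window statistics (mean and standard deviation) over a few synchronized sensor channels, the kind of low-level features a platform like this could feed into teamwork models. The channel names, window size, and sample values are assumptions, not the platform's actual processing chain.

```scala
// Illustrative windowed feature extraction from synchronized sensor channels
// (e.g., ECG-derived heart rate, respiration, skin conductance). Channel
// names, window size, and data are assumptions, not the platform's pipeline.
object MultimodalFeatureSketch {
  def windowFeatures(samples: Seq[Double], window: Int): Seq[(Double, Double)] =
    samples.grouped(window).filter(_.size == window).map { w =>
      val mean = w.sum / w.size
      val std  = math.sqrt(w.map(x => math.pow(x - mean, 2)).sum / w.size)
      (mean, std)
    }.toSeq

  def main(args: Array[String]): Unit = {
    val channels: Map[String, Seq[Double]] = Map(
      "heartRate"       -> Seq(72, 74, 73, 80, 85, 88, 90, 87).map(_.toDouble),
      "respirationRate" -> Seq(14, 14, 15, 16, 18, 19, 19, 18).map(_.toDouble),
      "skinConductance" -> Seq(2.1, 2.0, 2.2, 2.6, 3.1, 3.4, 3.3, 3.2)
    )
    channels.foreach { case (name, samples) =>
      val feats = windowFeatures(samples, window = 4)
      println(s"$name -> " + feats.map { case (m, s) => f"(mean $m%.1f, std $s%.2f)" }.mkString(", "))
    }
  }
}
```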
Full paper