Developing large-scale collective adaptive systems for safety-critical applications requires an extensive effort, involving the interplay of distributed programming techniques and mathematical proofs of real-time guarantees. This effort could be significantly reduced by allowing the system developer to rely on libraries of predefined algorithms. By exploiting such algorithms, distributed behaviour and (hard) real-time guarantees for the final application could be automatically inferred, effectively shifting the verification burden from the system designer to the algorithm developer.
Following earlier work on real-time guarantees for aggregate computing algorithms, we argue that aggregate computing could provide a convenient framework towards this aim. As a first step, we give a detailed description of different kinds of models that abstract aggregate programs as mathematical functions. Then, building on such models, we start investigating the problem of how real-time behavior constraints could be specified in a compositional way. Finally, we conclude by singling out a number of potential building block library algorithms that could constitute such a real-time aggregate computing library, with the potential of providing a valuable asset for supporting the rigorous engineering of safety-critical large-scale collective adaptive systems.
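To make the idea of a reusable building block concrete, the plain-Scala sketch below simulates a hop-count gradient (distance from source devices) computed through synchronous rounds on a small fixed graph, and counts the rounds needed for the field to stabilise, which is the kind of quantity a real-time bound would constrain. The topology, the hop-count metric, and all names are illustrative assumptions, not taken from the paper.

```scala
// Illustrative sketch (not the paper's formal model): a hop-count gradient
// computed by repeated local updates on a fixed graph, with the number of
// rounds to stabilisation as a crude "real-time" measure.
object GradientSketch {
  // Undirected line topology 0 - 1 - 2 - 3 - 4; node 0 is the source.
  val neighbours: Map[Int, Set[Int]] =
    Map(0 -> Set(1), 1 -> Set(0, 2), 2 -> Set(1, 3), 3 -> Set(2, 4), 4 -> Set(3))
  val sources: Set[Int] = Set(0)

  // One synchronous round: every node recomputes from its neighbours' previous values.
  def round(field: Map[Int, Double]): Map[Int, Double] =
    field.map { case (id, _) =>
      val fromNbrs = neighbours(id).map(field(_) + 1.0)
      id -> (if (sources(id)) 0.0 else (fromNbrs + Double.PositiveInfinity).min)
    }

  def main(args: Array[String]): Unit = {
    var field = neighbours.keys.map(_ -> Double.PositiveInfinity).toMap
    var rounds = 0
    var stable = false
    while (!stable) {
      val next = round(field)
      stable = next == field
      field = next
      if (!stable) rounds += 1
    }
    println(s"stabilised after $rounds rounds: $field")
  }
}
```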
Full paper
Programming swarm behaviors is a challenging task, due to the need to express collective behaviors in terms of local interactions among simple agents. Although several programming frameworks have been proposed, they are often based on low-level abstractions, which makes the development of swarm applications complex and error-prone. Thus, we present MacroSwarm, an aggregate programming framework for the development of swarm behaviors. With this framework, it is possible to define a large variety of swarm behaviors, ranging from simple movements to more complex ones, such as aggregation, flocking, and collective decision-making. In this paper, we present the main features of the framework and some simple examples of its API usage.
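For a flavour of the kind of behaviour such a framework packages into reusable blocks, here is a self-contained Scala sketch of a single flocking step (cohesion, alignment, and separation computed from neighbours' states); it is a generic illustration and does not use the actual MacroSwarm API.

```scala
// Illustrative sketch only: one flocking step (cohesion + alignment + separation)
// computed locally from neighbours' states. Plain Scala, not the MacroSwarm API.
object FlockingSketch {
  final case class Vec(x: Double, y: Double) {
    def +(o: Vec) = Vec(x + o.x, y + o.y)
    def -(o: Vec) = Vec(x - o.x, y - o.y)
    def *(k: Double) = Vec(x * k, y * k)
    def /(k: Double) = Vec(x / k, y / k)
  }
  final case class Agent(pos: Vec, vel: Vec)

  def average(vs: Seq[Vec]): Vec = vs.reduce(_ + _) / vs.size

  // New velocity of `self` given the agents it can perceive.
  def flockStep(self: Agent, nbrs: Seq[Agent]): Vec =
    if (nbrs.isEmpty) self.vel
    else {
      val cohesion   = (average(nbrs.map(_.pos)) - self.pos) * 0.01 // move towards neighbours
      val alignment  = (average(nbrs.map(_.vel)) - self.vel) * 0.05 // match their heading
      val separation = average(nbrs.map(n => self.pos - n.pos)) * 0.03 // avoid crowding
      self.vel + cohesion + alignment + separation
    }

  def main(args: Array[String]): Unit = {
    val self = Agent(Vec(0, 0), Vec(1, 0))
    val nbrs = Seq(Agent(Vec(1, 1), Vec(0, 1)), Agent(Vec(-1, 2), Vec(1, 1)))
    println(flockStep(self, nbrs))
  }
}
```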
Full paper
This article introduces ScaRLib, a Scala-based framework that aims to streamline the development of cyber-physical swarm scenarios (i.e., systems of many interacting distributed devices that collectively accomplish system-wide tasks) by integrating macroprogramming and multi-agent reinforcement learning to design collective behavior. This framework serves as the starting point for a broader toolchain that will integrate these two approaches at multiple points to harness the capabilities of both, enabling the expression of complex and adaptive collective behavior.
Full paper
In recent years, the infrastructure supporting the execution of situated distributed computations has evolved at a fast pace. Modern collective adaptive applications – as found in the Internet of Things, swarm robotics, and social computing – are designed to be executed on very diverse devices and to be deployed on infrastructures composed of devices ranging from cloud servers to wearable devices, together constituting a cloud–edge continuum. The availability of such an infrastructure opens the way to better resource utilisation and performance but, at the same time, introduces new challenges to software designers, as applications must be conceived to be able to adapt to changing deployment domains and conditions. In this paper, we introduce a practical framework for the development of systems based on the concept of pulverisation, meant to neatly separate business logic and deployment concerns, allowing applications to be defined independently of the infrastructure they will execute upon, thus supporting scalability. The framework is based on a domain-specific language capturing, in a declarative fashion: pulverised application components, device capabilities, resource allocation, and (runtime re-) configuration policies. The framework, implemented in Kotlin multiplatform and available as open source, is then evaluated in a small-scale real-world demo and in a city-scale simulated scenario, demonstrating the feasibility of the approach and its potential benefits in achieving better trade-offs between performance and resource utilisation.
Full paper
The edge–cloud continuum provides a heterogeneous, multi-scale, and dynamic infrastructure supporting complex deployment profiles and trade-offs for application scenarios like those found in the Internet of Things and large-scale cyber–physical systems domains. To exploit the continuum, applications should be designed in a way that promotes flexibility and reconfigurability, and proper management (sub-)systems should take care of reconfiguring them in response to changes in the environment or non-functional requirements. Approaches may leverage optimisation-based or heuristic-based policies, and decision making may be centralised or distributed: this work investigates decentralised heuristic-based approaches. In particular, we focus on the pulverisation approach, whereby a distributed software system is automatically partitioned (“pulverised”) into different deployment units. In this context, we address two main research problems: how to support the runtime reconfiguration of pulverised systems, and how to specify decentralised reconfiguration policies from a global perspective. To address the first problem, we design and implement a middleware for pulverised systems separating infrastructural and application concerns. To address the second problem, we leverage aggregate computing and exploit self-organisation patterns to devise self-stabilising reconfiguration strategies. By simulating deployments on different kinds of complex infrastructures, we assess the flexibility of the pulverisation middleware design as well as the effectiveness and resilience of the aggregate computing-based reconfiguration policies.
Full paper
Swarm programming is focused on the design and implementation of algorithms for large-scale systems, such as fleets of robots, ensembles of IoT devices, and sensor networks. Writing algorithms for these systems requires skills and familiarity with programming languages, which can be a barrier for non-expert users. Although visual programming environments have been proposed for swarm systems, they are often limited to specific platforms or tasks, and do not provide a high-level programming model that can be used to design algorithms for a wide range of swarm systems. Therefore, in this paper, we propose a low-code swarm programming environment, called ScaFi-Blocks, which allows users to design and implement swarm algorithms visually.
Full paper
The advent of highly distributed systems, such as the Internet of Things, has led to systems that require efficient and resilient runtime monitoring. Among the various monitoring techniques, runtime verification is a lightweight verification method that assesses the correctness of a running system against a formal specification. In this paper, we investigate the optimization of aggregate monitors, i.e., monitors that operate on ensembles of devices, for properties expressed in the Spatial Logic of Closure Spaces (SLCS), a formal logic designed to reason about spatial relationships between entities in a distributed system. We propose three different algorithms for the implementation of the “somewhere” operator, a key construct in SLCS, and we evaluate their performance through a series of simulations, comparing their convergence time and computational load.
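As an intuition of what a distributed monitor for "somewhere P" computes, the following self-contained Scala sketch evaluates it by iterated disjunction over neighbours until a fixpoint; it is a naive baseline for illustration, not one of the three algorithms proposed in the paper, and the graph and property values are invented.

```scala
// Illustrative sketch: naive distributed evaluation of SLCS "somewhere P"
// by iterated OR over neighbours until a fixpoint (not the paper's optimised algorithms).
object SomewhereSketch {
  val neighbours: Map[Int, Set[Int]] =
    Map(0 -> Set(1), 1 -> Set(0, 2), 2 -> Set(1), 3 -> Set(4), 4 -> Set(3))
  // Local truth of the atomic property P on each node.
  val p: Map[Int, Boolean] = Map(0 -> false, 1 -> false, 2 -> true, 3 -> false, 4 -> false)

  // One round: a node is true if P holds locally or some neighbour was true last round.
  def round(field: Map[Int, Boolean]): Map[Int, Boolean] =
    field.map { case (id, _) => id -> (p(id) || neighbours(id).exists(field)) }

  def main(args: Array[String]): Unit = {
    var field = p
    var next = round(field)
    var rounds = 1
    while (next != field) { field = next; next = round(field); rounds += 1 }
    // Nodes 0, 1, 2 (connected to a P-node) become true; 3 and 4 stay false.
    println(s"somewhere(P) after $rounds rounds: $field")
  }
}
```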
Full paper
Federated Learning has gained increasing interest in the last years, as it allows the training of machine learning models with a large number of devices by exchanging only the weights of the trained neural networks. Without the need to upload the training data to a central server, privacy concerns and potential bottlenecks can be removed as less data is transmitted. However, the current state-of-the-art solutions are typically centralized, and do not provide suitable coordination mechanisms to take into account the spatial distribution of devices and local communications, which can sometimes play a crucial role. Therefore, we propose a field-based coordination approach for federated learning, where the devices coordinate with each other through the use of computational fields. We show that this approach can be used to train models in a completely peer-to-peer fashion. Additionally, our approach allows zones of interest to emerge and produces specialized models for each zone, enabling each agent to refine its model for the task at hand. We evaluate our approach in a simulated environment leveraging aggregate computing, the reference global-to-local field-based coordination programming paradigm. The results show that our approach is comparable to the state-of-the-art centralized solutions, while enabling a more flexible and scalable approach to federated learning.
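The core mechanism can be pictured with a small self-contained Scala sketch: each device repeatedly performs a local update and then averages its model weights with those of its neighbours, with no central server. The topology, the placeholder local update, and the plain averaging rule are illustrative assumptions, not the paper's exact algorithm.

```scala
// Illustrative sketch: decentralised, neighbourhood-based model averaging,
// the core mechanism behind peer-to-peer (field-based) federated learning.
object P2PFederatedAveraging {
  type Weights = Vector[Double]
  val neighbours: Map[Int, Set[Int]] =
    Map(0 -> Set(1), 1 -> Set(0, 2), 2 -> Set(1))

  // Placeholder for a local training step (here: a tiny nudge using node-specific "data").
  def localUpdate(id: Int, w: Weights): Weights = w.map(_ + 0.01 * (id + 1))

  // One communication round: train locally, then average with neighbours' models.
  def round(models: Map[Int, Weights]): Map[Int, Weights] =
    models.map { case (id, w) =>
      val trained = localUpdate(id, w)
      val pool = neighbours(id).toSeq.map(models) :+ trained
      val avg = pool.transpose.map(col => col.sum / col.size).toVector
      id -> avg
    }

  def main(args: Array[String]): Unit = {
    var models: Map[Int, Weights] =
      Map(0 -> Vector(0.0, 0.0), 1 -> Vector(1.0, 1.0), 2 -> Vector(2.0, 2.0))
    for (r <- 1 to 5) {
      models = round(models)
      println(s"round $r: $models")
    }
  }
}
```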
Among recent advancements in machine learning, federated learning allows a network of distributed clients to collaboratively develop a global model without needing to share their local data. This technique aims to safeguard privacy, countering the vulnerabilities of conventional centralized learning methods. Traditional federated learning approaches often rely on a central server to coordinate model training across clients, aiming to replicate the same model uniformly across all nodes. However, these methods overlook the significance of geographical and local data variances in vast networks, potentially affecting model effectiveness and applicability. Moreover, relying on a central server might become a bottleneck in large networks, such as the ones promoted by edge computing. Our paper introduces a novel, fully-distributed federated learning strategy called proximity-based self-federated learning that enables the self-organised creation of multiple federations of clients based on their geographic proximity and data distribution without exchanging raw data. Indeed, unlike traditional algorithms, our approach encourages clients to share and adjust their models with neighbouring nodes based on geographic proximity and model accuracy. This method not only addresses the limitations posed by diverse data distributions but also enhances the model’s adaptability to different regional characteristics, creating specialized models for each federation.
The concept of Edge-cloud Continuum (ECC) serves as a strategic infrastructure for deploying modern Collective-adaptive Systems (CASs). In this framework, heterogeneous devices create a continuum between the edge and the cloud, offering new opportunities and challenges for deploying collective systems such as smart cities, IoT applications, and more. Preliminary work, like the pulverisation approach, models a system as an ensemble of logical entities connected to form a dynamic graph, where each device is decomposed into five independent components (i.e., sensors, actuators, state, communication, and behaviour). This approach addresses the challenge of devising an application partitioning strategy to effectively deploy collective systems in the continuum but does not provide an explicit mechanism to handle dynamic system reconfiguration. For this reason, learning approaches can be effective in managing the dynamic and continuously evolving requirements of the ECC (e.g., latency, power consumption, computational resources). In this paper, we propose a new generation of “Intelligent Collective Services” that uses advanced partitioning models and learning approaches, such as Graph Neural Networks (GNNs) and Many-agent Reinforcement Learning (MARL), to enhance adaptability and pave the way for the next generation of CASs in the ECC.
Cyber-physical swarms represent a paradigm shift in distributed systems, mirroring characteristics akin to natural swarms, such as self-organization, scalability, and fault tolerance. This paper delves into these complex systems, characterized by vast networks of cyber-physical entities with limited environmental awareness, yet capable of exhibiting emergent collective behaviors. These systems encompass a diverse array of scenarios, ranging from swarm robotics to the interconnectivity in smart cities, as well as the collaboration among augmented humans. The engineering of such systems presents unique challenges, primarily due to their intricate complexity and the spontaneous nature of their collective behaviors. This paper aims to dissect these challenges, offering a clear delineation of potential approaches. We present a comprehensive analysis, shedding light on the intricacies of engineering cyber-physical swarms and discussing modern solutions for engineering collective applications for such systems.
Preservation of the existing biodiversity and wildlife is a crucial task for the future of our planet. To be able to protect and conserve animal populations, it is essential to understand their behavior and the factors that influence it. One of the major sources of information for biologists and ethologists is the acquisition of photos and videos from camera traps, Unmanned Aerial Vehicles (UAVs), and Unmanned Ground Vehicles (UGVs). However, appropriately positioning camera traps and optimizing the movement of unmanned devices is difficult, often requiring trial-and-error, and thus amenable to improvement through in-silico simulation. In this context, an appropriate actionable model of the herd behavior of wildlife is of paramount importance, as it can provide a reasonably realistic context for simulating the deployment and control of unmanned devices before field operations. Using ground-truth data from the Kenyan Animal Behavior Recognition (KABR) dataset, we propose a directional multi-herd model that can be used to simulate the movement of multiple herds of animals. The model and its analysis are complemented by an implementation and evaluation in an existing discrete-event simulator.
The burden of Alzheimer’s Disease (AD) extends to both patients and caregivers (CGs), necessitating effective management strategies. Non-pharmacological methods like the Walk and Talk Programs have shown promise in enhancing their quality of life. The interactions between patients and CGs and their psychophysical load may influence adherence to the program, but they still lack an objective evaluation. The assessment of these aspects, aligning with the principles of Collective Intelligence (CI), can foster their refinement with consequent superior outcomes. Wearable systems may intervene to monitor parameters related to the patient-CG psychophysical load with good acceptance among AD patients. We proposed a multiparametric wearable system embedding two inertial measurement unit (IMU) sensors to monitor key elements in CI, such as heart rate (HR), respiratory rate (RR), and level of activity (LA). The feasibility of the whole system was assessed in a pilot study on eight volunteers, replicating patient-CG interactions typical of the Walk and Talk Program. Results showed low mean percentage errors for HR and RR estimations, validated against a reference chest strap. Dynamic conditions notably captured group dynamics, consistently detecting the LAs of the subjects. In summary, our study laid the foundation for a more comprehensive and efficacious AD management approach.
Full paper
In Alzheimer’s disease, maintaining proximity between patients and caregivers (CGs) is crucial for their well-being, necessitating continuous monitoring indoors and outdoors. The integration of Collective Intelligence through collaborative approaches monitored by integrated technologies may be beneficial in enhancing caregiving outcomes. However, non-pharmacological interventions like the ‘Walk and Talk Program’ (WTP) lack objective assessment methods. Multi-sensing approaches leveraging wearables for proximity detection using Bluetooth Low Energy and computer vision technologies for identifying subjects in the scene and measuring dialogue time are vital for accurate evaluation. This paper evaluates the validity and accuracy of these tools in supporting the WTP, through simulated real-life scenarios of patient-CG interactions. A pilot study involving volunteers replicating WTP interactions revealed promising outcomes for both wearable devices and camera systems, emphasizing their potential in advancing dementia care practices.
Full paper
The behavior of distributed systems situated in space can be required to satisfy spatial properties, in addition to the more widely known temporal properties. In particular, it has been previously shown that fully distributed monitors in eXchange Calculus (XC) can be automatically derived for verifying properties of situated systems expressed in the Spatial Logic of Closure Spaces (SLCS). While it has been proven that such monitors eventually compute the truth value of the desired properties, the actual time required for such computations has been thus far disregarded.
In the present paper, we fill this gap by investigating the real-time guarantees that can be given in terms of upper bounds on the time taken by the XC monitors to compute the truth of SLCS properties after stabilisation of inputs.
Full paper
by Audrito, G., Bortoluzzi, D., Damiani, F., Scarso, G., Torta, G.
published in Coordination Models and Languages. COORDINATION 2024. Lecture Notes in Computer Science, vol 14676. Springer, Cham.
Recent work in the area of coordination models and collective adaptive systems promotes a view of distributed computations as functions manipulating computational fields (data structures spread over space and evolving over time) and introduces the eXchange Calculus (XC) as a novel formal foundation for field computations. In XC, evolution (time) and neighbour interaction (space) are handled by a single communication primitive called exchange, working on the neighbouring value data structure to represent both received values and values to share. However, the exchange primitive does not allow information about neighbours to be directly retained across subsequent rounds of computation. This hampers the convenient expression of useful algorithms in XC, such as the computation of a neighbour reliability score.
In this paper, we introduce a new generalised version of the exchange primitive and implement it in the FCPP DSL. This primitive allows for neighbour data retention across rounds, strictly expanding the expressiveness of the exchange primitive in XC. The contribution is then evaluated through a case study on distributed sensing in a wireless sensor network of battery-powered devices, exploiting the reliability scores to improve robustness.
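The following self-contained Scala sketch conveys the kind of computation that per-neighbour retention enables: a device keeps a map from neighbour identifiers to counters, updates it at every round from the messages actually received, and derives a simple reliability score from it. This is an abstraction for illustration only, not the XC exchange primitive nor the FCPP API.

```scala
// Illustrative sketch: per-neighbour state retained across rounds, used to derive
// a simple reliability score (share of recent rounds a neighbour was heard).
// Plain Scala abstraction, not the XC exchange primitive nor the FCPP API.
object NeighbourRetentionSketch {
  type DeviceId = Int
  // Per-neighbour counters: (rounds heard, rounds elapsed since first seen).
  final case class NbrState(heard: Int, total: Int) {
    def reliability: Double = if (total == 0) 0.0 else heard.toDouble / total
  }

  // One round on a single device: update retained per-neighbour state
  // given the set of neighbours from which a message was actually received.
  def round(state: Map[DeviceId, NbrState], received: Set[DeviceId]): Map[DeviceId, NbrState] = {
    val known = state.keySet ++ received
    known.map { id =>
      val prev = state.getOrElse(id, NbrState(0, 0))
      id -> NbrState(prev.heard + (if (received(id)) 1 else 0), prev.total + 1)
    }.toMap
  }

  def main(args: Array[String]): Unit = {
    // Rounds as seen by one device: neighbour 1 is always heard, neighbour 2 intermittently.
    val receivedPerRound = Seq(Set(1, 2), Set(1), Set(1, 2), Set(1))
    val finalState = receivedPerRound.foldLeft(Map.empty[DeviceId, NbrState])(round)
    finalState.foreach { case (id, s) => println(s"neighbour $id reliability = ${s.reliability}") }
  }
}
```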
Full paper
In the field of deep learning (DL), the deployment of complex Neural Network (NN) models on memory-constrained devices presents a significant challenge. TinyML focuses on optimizing DL models for such environments, where computational and storage resources are limited. A key aspect of this optimization is reducing the size of the models without unduly compromising their performance.
We have investigated the efficacy of various quantization techniques in optimizing DL models for deployment on memory-constrained devices. To understand the memory requirements of standard deep learning models, we conducted a comprehensive literature review and identified quantization methods as a potent approach for model size reduction. Our study targets popular NN architectures such as ResNetV1 and V2 and MobileNetV1 and V2, and introduces a custom-designed model, examining their suitability for TinyML constraints.
We have analyzed CIFAR-10 and MNIST datasets to assess the impact of four distinct quantization techniques on model size and accuracy. These techniques include Dynamic Range Quantization, Full Integer Quantization, Float16 Quantization, and Integer 16×8 Quantization. Our aim is to contribute valuable insights into model optimization for efficient deployment in resource-limited environments.
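The size/precision trade-off underlying these techniques can be illustrated with basic affine (scale and zero-point) int8 quantization of a weight vector, storing one byte per weight instead of four. The sketch below shows the principle in plain Scala and is not the TensorFlow Lite implementation of the listed techniques.

```scala
// Illustrative sketch of affine int8 quantization of a weight vector:
// real ≈ scale * (q - zeroPoint), storing 1 byte per weight instead of 4 (float32).
object QuantizationSketch {
  final case class QuantParams(scale: Double, zeroPoint: Int)

  // Pick scale/zero-point so that the observed range (including 0) maps onto [-128, 127].
  def chooseParams(w: Seq[Double]): QuantParams = {
    val (lo, hi) = (math.min(w.min, 0.0), math.max(w.max, 0.0))
    val scale = math.max((hi - lo) / 255.0, 1e-12) // guard against a degenerate range
    val zeroPoint = math.round(-128 - lo / scale).toInt
    QuantParams(scale, zeroPoint)
  }

  def quantize(w: Seq[Double], p: QuantParams): Seq[Byte] =
    w.map(x => math.max(-128, math.min(127, math.round(x / p.scale) + p.zeroPoint)).toByte)

  def dequantize(q: Seq[Byte], p: QuantParams): Seq[Double] =
    q.map(v => p.scale * (v - p.zeroPoint))

  def main(args: Array[String]): Unit = {
    val weights = Seq(-0.42, 0.0, 0.13, 0.37, 0.91)
    val p = chooseParams(weights)
    val q = quantize(weights, p)
    println(s"params = $p")
    println(s"quantized = ${q.mkString(", ")}")
    println(s"reconstructed = ${dequantize(q, p).map(x => f"$x%.3f").mkString(", ")}")
  }
}
```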
Full paper
presented as part of a poster session, in press
Springer International Publishing (in press)
Nowadays, there is an ever-growing interest in assessing the collective intelligence (CI) of a team in a wide range of scenarios, thanks to its potential in enhancing teamwork and group performance. Recently, special attention has been devoted to the clinical setting, where breakdowns in teamwork, leadership, and communication can lead to adverse events, compromising patient safety. So far, researchers have mostly relied on surveys to study human behavior and group dynamics; however, this method is ineffective. In contrast, a promising solution for monitoring behavioral and individual features that are reflective of CI is represented by wearable technologies. To date, the field of CI assessment still appears unstructured; therefore, the aim of this narrative review is to provide a detailed overview of the main group and individual parameters that can be monitored to evaluate CI in clinical settings, together with the wearables that are either already used to assess them or have the potential to be applied in this scenario. The working principles, advantages, and disadvantages of each device are introduced in order to bring order to this field and provide a guide for future CI investigations in medical contexts.
Full paper
Engineering self-organising systems – e.g., robot swarms, collectives of wearables, or distributed infrastructures – has been investigated and addressed through various kinds of approaches: devising algorithms by taking inspiration from nature, relying on design patterns, using learning to synthesise behaviour from expectations of emergent behaviour, and exposing key mechanisms and abstractions at the level of a programming language. Focussing on the latter approach, most of the state-of-the-art languages for self-organisation leverage a round-based execution model, where devices repeatedly evaluate their context and control program fully: this model is simple to reason about but limited in terms of flexibility and fine-grained management of sub-activities. Taking inspiration from the so-called functional reactive paradigm, in this paper we propose a reactive self-organisation programming approach that makes it possible to fully decouple the program logic from the scheduling of its sub-activities. Specifically, we implement the idea through a functional reactive implementation of aggregate programming in Scala, based on the functional reactive library Sodium. The result is a functional reactive self-organisation programming model, called FRASP, that maintains the same expressiveness and benefits of aggregate programming, while enabling significant improvements in terms of scheduling controllability, flexibility in the sensing/actuation model, and execution efficiency.
Considerable progress has been made in developing sensors and wearable systems for monitoring physiological parameters in different fields. Among them, healthcare and sports are showing increasing interest in monitoring respiratory rate through these sensors. However, several open challenges limit their reliability. This study presents the design, development, and testing of a wearable sensor based on conductive textiles for respiratory monitoring in sports. Our approach involved an initial analysis of breathing kinematics to investigate the magnitude of chest wall strains during breathing. This analysis was useful to guide the design of the sensing element, as well as the metrological characterization of the sensor and its integration into a wearable strap. A pilot experiment was then carried out on a healthy volunteer to assess the sensor’s performance under three different breathing patterns (bradypnea, quiet breathing, and tachypnea) using a wearable reference system. The obtained results are very promising and contribute to the development of a reliable and efficient wearable device for monitoring respiratory rate. Furthermore, the design process employed in this study provides insight into the attributes needed to accurately capture breathing movements while maintaining comfort and usability.
Full paper
An interesting and innovative activity in Collective Intelligence systems is Sentiment Analysis (SA) which, starting from users’ feedback, aims to identify their opinion about a specific subject, for example in order to develop/improve/customize products and services. The feedback gathering, however, is complex, time-consuming, and often invasive, possibly resulting in decreased truthfulness and reliability of its outcome. Moreover, the subsequent feedback processing may suffer from scalability, cost, and privacy issues when the sample size is large or the data to be processed is sensitive. Internet of Things (IoT) and Edge Intelligence (EI) can greatly help in both aspects by providing, respectively, a pervasive and transparent way to collect a huge amount of heterogeneous data from users (e.g., audio, images, video, etc.) and an efficient, low-cost, and privacy-preserving solution to locally analyze them without resorting to Cloud computing-based platforms. Therefore, in this paper we outline an innovative collective SA system which leverages IoT and EI (specifically, TinyML techniques and the EdgeImpulse platform) to gather and immediately process audio in the proximity of entities-of-interest in order to determine whether the audience’s opinions are positive, negative, or neutral. The architecture of the proposed system, exemplified in a museum use case, is presented, and a preliminary, yet very promising, implementation is shown, revealing interesting insights towards its full development.
The expansion of Internet of Things (IoT) technology has led to the widespread use of sensors in various everyday environments, including healthcare. Body Sensor Networks (BSNs) enable continuous monitoring of human physiological signals and activities, benefiting healthcare and well-being. However, existing BSN systems primarily focus on single-user activity recognition, disregarding multi-user scenarios. Therefore, this paper introduces a collaborative BSN-based architecture for multi-user activity recognition to identify group collaborations among nearby users. We first discuss the general problem of multi-user activity recognition and the associated challenges along with potential solutions (such as data processing, mining techniques, sensor noise, and the complexity of multi-user activities) and, then, the software abstractions and the components of our architecture. This represents an innovative collective intelligence solution and holds significant potential for enhancing healthcare and well-being applications by enabling real-time detection of group activities and behaviors.
The healthcare industry faces challenges due to rising treatment costs, an aging population, and limited medical resources. Remote monitoring technology offers a promising solution to these issues. This paper introduces an innovative adaptive method that deploys an Ultra-Wideband (UWB) radar-based Internet-of-Medical-Things (IoMT) system to remotely monitor elderly individuals’ vital signs and fall events during their daily routines. The system employs edge computing for prioritizing critical tasks and a combined cloud infrastructure for further processing and storage. This approach enables monitoring and telehealth services for elderly individuals. A case study demonstrates the system’s effectiveness in accurately recognizing high-risk conditions and abnormal activities such as sleep apnea and falls. The experimental results show that the proposed system achieved high accuracy levels, with a Mean Absolute Error (MAE) ± Standard Deviation of Absolute Error (SDAE) of 1.23±1.16 bpm for heart rate (HR) detection and 0.22±0.27 bpm for respiratory rate (RR) detection. Moreover, the system demonstrated a recognition accuracy of 90.60% for three types of falls (i.e., stand, bow, squat to fall), one daily activity, and No Activity Background. These findings indicate that the radar sensor provides a high degree of accuracy suitable for various remote monitoring applications, thus enhancing the safety and well-being of elderly individuals in their homes.
The development of feature-oriented programming (FOP) and of (its generalization) delta-oriented programming (DOP) has focused primarily on SPLs of class-based object oriented programs. In this paper, we introduce delta-oriented SPLs of functional programs with algebraic data types (ADTs). To pave the way towards SPLs of multi-paradigm programs, we tailor our presentation to the functional sublanguage of the multi-paradigm modeling language ABS, which already features DOP support for its class-based object-oriented sublanguage. Our main contributions are: (i) we motivate and illustrate our proposal by an example from an industrial modeling scenario; (ii) we formalize delta-oriented SPLs for functional programs with ADTs in terms of a foundational calculus; (iii) we define family-based analyses to check whether an SPL satisfies certain well-formedness conditions and whether all variants can be generated and are well-typed; and (iv) we briefly outline how, in the context of the toolchain of ABS, the proposed delta-oriented constructs and analyses for functional programs can be integrated with their counterparts for object-oriented programs.
Social activities are a fundamental form of social interaction in our daily life. Current smart systems based on human-computer interaction (e.g., for security, safety, and healthcare applications) may significantly benefit from, and often require, an understanding of the individual and group activities performed by users. Recent advancements in Wi-Fi signal analysis suggest that this pervasive communication infrastructure can also represent a convenient, non-invasive, contactless sensing method to detect human activities. In this paper, we propose a data-level fusion method based on Wi-Fi Channel State Information (CSI) analysis to recognize social activities (e.g., walking together) and gestures (e.g., hand-shaking) in an indoor environment. Our results show that off-the-shelf Wi-Fi devices can be effectively used as a contact-less sensing method for social activity recognition, alternative to other approaches such as those based on computer vision and wearable sensors.
Stream Runtime Verification (SRV) has been recently proposed for monitoring input streams of data while producing output streams in response. The Aggregate Programming (AP) paradigm for collections of distributed devices has been used to implement distributed runtime verification of spatial and temporal Boolean properties. In this paper we outline how distributed SRV could be implemented by AP and the new opportunities AP could bring to the field of distributed SRV.
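For readers unfamiliar with SRV, the self-contained Scala sketch below shows the basic style on a single device: an output (verdict) stream is computed incrementally from an input stream, here checking that the last three readings stayed below a threshold. The property, threshold, and window size are invented for illustration, and the sketch is not tied to any specific SRV tool.

```scala
// Illustrative sketch of stream runtime verification: an output stream is computed
// incrementally from an input stream; here, "the last 3 readings were all below 50".
object SrvSketch {
  // Incremental monitor state: the most recent readings (bounded window).
  final case class MonitorState(window: Vector[Double]) {
    def step(sample: Double): (MonitorState, Boolean) = {
      val w = (window :+ sample).takeRight(3)
      (MonitorState(w), w.size == 3 && w.forall(_ < 50.0))
    }
  }

  def main(args: Array[String]): Unit = {
    val input = Seq(42.0, 48.0, 45.0, 51.0, 44.0, 43.0, 41.0)
    // Fold the monitor over the input stream, collecting the Boolean output stream.
    val (_, output) = input.foldLeft((MonitorState(Vector.empty), Vector.empty[Boolean])) {
      case ((state, outs), sample) =>
        val (next, verdict) = state.step(sample)
        (next, outs :+ verdict)
    }
    println(output.mkString(", ")) // false, false, true, false, false, false, true
  }
}
```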
The importance of monitoring groups of devices working together towards shared global objectives is growing, for instance when they are used for crucial purposes like search and rescue operations during emergencies. Effective approaches in this context include expressing global properties of a swarm as logical formulas in a spatial or temporal logic, which can be automatically translated into executable distributed run-time monitors. This can be accomplished leveraging frameworks such as Aggregate Computing (AC), and proving non-trivial “translation correctness” results, in which subtle bugs may easily hide when relying on hand-written proofs.
In this paper, we present an implementation of AC in Coq, which makes it possible to automatically verify monitor correctness, further raising the security level of the monitored system. This implementation may also allow integrating static analysis of program correctness properties with run-time monitors for properties too difficult to prove in Coq. We showcase the usefulness of our implementation by means of a paradigmatic example, proving the correctness of an AC monitor for a past-CTL formula in Coq.
Recent trends like the Internet of Things (IoT) suggest a vision of dense and multi-scale deployments of computing devices in nearly all kinds of environments. A prominent engineering challenge revolves around programming the collective adaptive behaviour of such computational ecosystems. This requires abstractions able to capture concepts like ensembles (dynamic groups of cooperating devices) and collective tasks (joint activities carried out by ensembles). In this work, we consider collections of devices interacting with neighbours and executing in nearly-synchronised sense–compute–interact rounds, where the computation is given by a single control program. To support programming whole computational collectives, we propose the abstraction of a distributed collective process (DCP), which can be used to define at once the ensemble formation logic and its collective task. We implement the abstraction in the eXchange Calculus (XC), a core language based on neighbouring values (maps from neighbours to values), where state management and interaction are handled through a single primitive, exchange. Then, we discuss the features of the abstraction, its suitability for different kinds of distributed computing applications, and provide a proof-of-concept implementation of a wave-like process propagation.
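A rough rendering of the wave-like propagation idea, in self-contained Scala rather than XC: a process is spawned at one device and, round by round, expands to the neighbours of current participants while former participants drop out, so the active region travels as a wave. The graph and the joining/leaving rule are illustrative assumptions, not the paper's implementation.

```scala
// Illustrative sketch: wave-like propagation of a collective process on a graph.
// Each round, the process expands to neighbours of current participants,
// while former participants drop out, so the "active" set moves as a wave.
object WavePropagationSketch {
  val neighbours: Map[Int, Set[Int]] =
    Map(0 -> Set(1), 1 -> Set(0, 2), 2 -> Set(1, 3), 3 -> Set(2, 4), 4 -> Set(3))

  def round(active: Set[Int], done: Set[Int]): (Set[Int], Set[Int]) = {
    val next = active.flatMap(neighbours) -- active -- done // newly reached devices
    (next, done ++ active)
  }

  def main(args: Array[String]): Unit = {
    var active = Set(0) // process spawned at device 0
    var done = Set.empty[Int]
    var t = 0
    while (active.nonEmpty) {
      println(s"t=$t active=$active")
      val (a, d) = round(active, done)
      active = a; done = d; t += 1
    }
  }
}
```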
In the last decades, many smart sensing solutions have been provided for monitoring human health, ranging from systems equipped with electrical to mechanical and optical sensors. In this scenario, wearables based on fiber optic sensors like fiber Bragg gratings (FBGs) can be a valuable solution since they show many advantages over the competitors, like miniaturized size, lightness, and high sensitivity. Unfortunately, one of the main issues with this technology is its inherent fragility. For this reason, various encapsulation modalities have been proposed to embed FBGs into flexible biocompatible materials for robustness improvements and a skin-like appearance. Recently, 3D printing techniques have been proposed to innovate this process thanks to their numerous advantages like a quick fabrication process, high accuracy, repeatability, and resolution. Moreover, the possibility of easily customizing the sensor design by choosing a set of printing parameters (e.g., printing orientation, material selection, shape, size, density, and pattern) can help in developing sensing solutions optimized for specific applications. Here, we present a 3D-printed sensor with a rectangular shape developed by fused deposition modeling (FDM). A detailed description of the design and fabrication stages is proposed. In addition, changes in the spectral response as well as in the metrological properties of the embedded FBG sensor are investigated. The presented data can be utilized not only for improving and optimizing design and fabrication processes but may also be beneficial for future research on the production of highly sensitive 3D-printed sensors for applications in wearable technology and, more generally, in healthcare settings.
Full paper
Supported by current socio-scientific trends, programming the global behaviour of whole computational collectives makes for great opportunities, but also significant challenges. Recently, aggregate computing has emerged as a prominent paradigm for so-called collective adaptive systems programming. To shorten the gap between such research endeavours and mainstream software development and engineering, we present ScaFi, a Scala toolkit providing an internal domain-specific language, libraries, a simulation environment, and runtime support for practical aggregate computing systems development.
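A minimal sketch of what an aggregate program looks like in ScaFi's internal DSL, assuming its documented constructs (AggregateProgram, rep, nbr, foldhood); the exact incarnation import and trait mix-ins may differ across ScaFi versions.

```scala
// Hedged sketch of a ScaFi aggregate program; the import path below is an assumption
// that may vary across ScaFi releases.
import it.unibo.scafi.incarnations.BasicSimulationIncarnation._

// Each device evolves a round counter over time and counts the devices
// in its neighbourhood (ScaFi's foldhood includes the device itself).
class NeighbourCount extends AggregateProgram {
  override def main(): Any = {
    val rounds = rep(0)(_ + 1)              // local state evolved across rounds
    val nbrs   = foldhood(0)(_ + _)(nbr(1)) // sum 1 over the neighbourhood
    (rounds, nbrs)
  }
}
```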
The Internet of Things and edge computing are fostering a future of ecosystems hosting complex decentralized computations, deeply integrated with our very dynamic environments. Digitalized buildings, communities of people, and cities will be the next-generation “hardware and platform,” counting myriads of interconnected devices, on top of which intrinsically-distributed computational processes will run and self-organize. They will spontaneously spawn, diffuse to pertinent logical/physical regions, cooperate and compete, opportunistically summon required resources, collect and analyze data, compute results, trigger distributed actions, and eventually decay. What would a programming model for such ecosystems look like? Based on research findings on self-adaptive/self-organizing systems, this paper proposes design abstractions based on “dynamic decentralization domains”: regions of space opportunistically formed to support situated recognition and action. We embody the approach into a Scala application program interface (API) enacting distributed execution and show its applicability in a case study of environmental monitoring.
by Daniela Lo Presti, Francesca De Tommasi, Chiara Romano, Blandina Lanni, Massimiliano Carassiti, Giancarlo Fortino, Emiliano Schena
published in 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech)
The ability of a team to work together across a wide variety of tasks is known as collective intelligence (CI). In the last decades, CI has been gaining traction in healthcare thanks to its potential for enhancing teamwork and patient safety through improved medical procedures. Nevertheless, CI remains poorly characterized in the clinical setting, and its implications for improving teamwork and surgical outcomes are largely missing from the literature. Recently, wearable systems have been used to measure physiological signals and quantify the group behaviors of a surgical team. However, no work has yet focused on investigating how individual characteristics and group behaviors can be combined to establish models of effective teamwork and, consequently, strengthen CI. In this study, we proposed the combined use of a wearable system and video recordings to quantitatively assess changes in physiological traits of two team members (a medical trainee and an anesthesiologist) before and during a medical procedure. In detail, a wearable chest strap was used to monitor vital signs and the level of activity of each user, while a video was contextually recorded to evaluate the level of teamwork in terms of speaking time and face-to-face interactions. The proposed technologies were able to work in the scenario of interest, recording data useful to quantify aspects related to both individuals’ traits and human interactions. The most remarkable changes according to the level of experience were found in the heart rate and its variability. These promising results will foster future interventions in a clinical scenario involving a higher number of team members and more challenging medical procedures (e.g., inside the operating room) for improving team effectiveness and supporting the development of CI in clinical settings.
Full paper
Swarm intelligence leverages collective behaviours emerging from interaction and activity of several “simple” agents to solve problems in various environments. One problem of interest in large swarms featuring a variety of sub-goals is swarm clustering, where the individuals of a swarm are assigned or choose to belong to zero or more groups, also called clusters. In this work, we address the sensing-based swarm clustering problem, where clusters are defined based on both the values sensed from the environment and the spatial distribution of the values and the agents. Moreover, we address it in a setting characterised by decentralisation of computation and interaction, and dynamicity of values and mobility of agents. For the solution, we propose to use the field-based computing paradigm, where computation and interaction are expressed in terms of a functional manipulation of fields, distributed and evolving data structures mapping each individual of the system to values over time. We devise a solution to sensing-based swarm clustering leveraging multiple concurrent field computations with limited domain and evaluate the approach experimentally by means of simulations, showing that the programmed swarms form clusters that well reflect the underlying environmental phenomena dynamics.
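To give a flavour of the problem, the self-contained Scala sketch below clusters devices on a graph by electing as leaders those whose sensed value is a local maximum and then letting membership spread hop by hop; it is a simplified illustration under invented data, not the field-based solution evaluated in the paper.

```scala
// Illustrative sketch: sensing-based clustering on a graph.
// Local maxima of the sensed value become cluster leaders; membership
// then spreads to neighbours round by round (highest-sensing covered neighbour wins).
object SensingClusteringSketch {
  val neighbours: Map[Int, Set[Int]] =
    Map(0 -> Set(1), 1 -> Set(0, 2), 2 -> Set(1, 3), 3 -> Set(2, 4), 4 -> Set(3, 5), 5 -> Set(4))
  val sensed: Map[Int, Double] =
    Map(0 -> 0.9, 1 -> 0.7, 2 -> 0.3, 3 -> 0.2, 4 -> 0.6, 5 -> 0.8)

  def main(args: Array[String]): Unit = {
    // Leaders: devices whose sensed value is not exceeded by any neighbour's.
    val leaders = sensed.keySet.filter(id => neighbours(id).forall(n => sensed(n) <= sensed(id)))
    // Cluster membership spreads outward from leaders, one hop per round.
    var cluster: Map[Int, Int] = leaders.map(id => id -> id).toMap
    while (cluster.size < sensed.size) {
      val uncovered = sensed.keySet -- cluster.keySet
      val newcomers = uncovered.collect {
        case id if neighbours(id).exists(cluster.contains) =>
          val best = neighbours(id).filter(cluster.contains).maxBy(sensed)
          id -> cluster(best)
      }
      cluster ++= newcomers
    }
    println(s"leaders = $leaders")
    println(s"clusters = $cluster")
  }
}
```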
The use of wearable devices in daily activities is continuously and rapidly growing. Wearable technology provides seamless sensing, monitoring, and multimodal interaction without continuous manual intervention and effort for users. These devices support the realization of novel applications in many domains, from healthcare to security and entertainment, improving the quality of life of users. The situation awareness paradigm allows wearable computing systems to be aware of what is happening to users and in the surrounding environment, supporting automatic smart adaptive behaviors based on the identified situation. Although situation-aware wearable devices have recently attracted a lot of attention, there is still a lack of methodological approaches and reference models for defining such systems. In this paper, we propose a reference architecture for situation-aware wearable computing systems grounded on Endsley’s SA model. A specialization of the architecture in the context of multi-user wearable computing systems is also proposed to support team situation awareness. An illustrative example shows a practical instantiation of the architecture in the context of contact tracing using smart sensorized face masks.
Background and objective: the COVID-19 outbreak has become one of the most challenging problems for humankind. It is a communicable disease caused by a new coronavirus strain, which has already infected over 375 million people and caused almost 6 million deaths. This paper aims to develop and design a framework for early diagnosis and fast classification of COVID-19 symptoms using multimodal Deep Learning techniques. Methods: we collected chest X-ray and cough sample data from open-source datasets, such as the Cohen dataset, and from local hospitals. Features are extracted from the chest X-ray images. We also used cough audio data from the Coswara project and local hospitals. The publicly available Coughvid, DetectNow, and Virufy datasets are used to evaluate COVID-19 detection based on speech, respiratory, and cough sounds. The collected audio data comprises slow and fast breathing, shallow and deep coughing, spoken digits, and phonation of sustained vowels. Gender, geographical location, age, preexisting medical conditions, and current health status (COVID-19 and Non-COVID-19) are recorded. Results: the proposed framework uses a selection algorithm over the pre-trained networks to determine the best fusion model, characterized by the pre-trained chest X-ray and cough models. Then, deep chest X-ray fusion by discriminant correlation analysis is used to fuse discriminatory features from the two models. The proposed framework achieved recognition accuracy, specificity, and sensitivity of 98.91%, 96.25%, and 97.69%, respectively. With the fusion method we obtained 94.99% accuracy. Conclusion: this paper examines the effectiveness of well-known ML architectures on a joint collection of chest X-rays and cough samples for early classification of COVID-19. It shows that existing methods can be effectively used for diagnosis, suggesting that the fusion learning paradigm could be a crucial asset in diagnosing future unknown illnesses. The proposed framework supports health informatics based on early diagnosis, clinical decision support, and accurate prediction.
In the last fifteen years, there has been a widespread diffusion of wearable sensorized devices for a plethora of applications in heterogeneous domains. Wearable technology provides fundamental capabilities such as smart sensing, monitoring, data recording, and multi-modal interaction, in a seamless, pervasive, and easy-to-use way. An emerging research trend is the definition of situation-aware wearable computing systems, i.e., wearable devices able to perceive and understand what is happening in the environment in order to adapt their behavior and anticipate users’ needs, a capability known as situation awareness. Despite the increasing interest of the research community in situation-aware wearable devices, there is a lack of studies, formal models, methodological approaches, and theoretical foundations on which these systems can be grounded. As a result, a very limited number of smart sensors (physical or virtual) capable of effectively and efficiently supporting Situation Awareness have been proposed so far. In this article, we provide a survey and a classification of state-of-the-art situation-aware wearable systems, outlining current research trends, shortcomings, and challenges, with an emphasis on the models, approaches, and computational techniques of situation awareness and wearable computing on which they are based. The survey has been performed using the PRISMA methodology for systematic reviews. The analysis has been conducted with respect to a reference architecture, namely SA-WCS, of a generic situation-aware wearable computing system that we propose in this article, grounded on Endsley’s model of Situation Awareness. Such a reference architecture not only provides a systematic framework for the comparison and categorization of the works, but also aims to promote the development of the next generation of WCSs.
Why do groups perform better than individuals? The answer is hidden behind the concept of the so-called Collective Intelligence (CI). CI is defined as the ability of a group to perform a wide variety of tasks, and team behavior and individual characteristics are consistent predictors of CI. A complex environment in which CI is increasingly recognized as a determinant of safe and efficient functioning is the operating room (OR), where individual inputs and efforts should be adapted to those of teammates to accomplish shared goals. To date, although teamwork failure accounts for 70–80% of serious medical errors, the lack of quantitative measurements of individual responses and interpersonal dynamics makes CI poorly characterized in the OR. This work proposes an innovative wearable platform for monitoring physiological biomarkers and joint movements of individuals while performing tasks. The ECG trace and breathing waveform, combined with skin conductance and movement patterns of both the wrist and elbow, are recorded unobtrusively and without impairing any activity of the user. The preliminary assessment of these devices was carried out by performing two trials (i.e., in a static condition to obtain the user baseline and while handling tools to simulate typical surgical tasks). This study, with its preliminary findings, can be considered the first attempt toward the establishment of an innovative strategy to improve team performance and, consequently, surgical outcomes and patient safety in the clinical routine.
Full paper