Supported by current socio-scientific trends, programming the global behaviour of whole computational collectives offers great opportunities, but also poses significant challenges. Recently, aggregate computing has emerged as a prominent paradigm for so-called collective adaptive systems programming. To narrow the gap between such research endeavours and mainstream software development and engineering, we present ScaFi, a Scala toolkit providing an internal domain-specific language, libraries, a simulation environment, and runtime support for the practical development of aggregate computing systems.
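To convey the flavour of an aggregate computation, the following is a minimal plain-Scala sketch (deliberately *not* the actual ScaFi API) of a self-stabilizing distance gradient, simulated as synchronous rounds over a fixed device graph; the object name, topology, and link costs are illustrative assumptions.

```scala
// Hypothetical sketch: each device repeatedly takes the minimum neighbour
// estimate plus link cost; the field stabilizes to hop distances from a source.
object GradientSketch {
  // Symmetric link distances between device ids (assumed line topology).
  val links: Map[Int, Map[Int, Double]] = Map(
    0 -> Map(1 -> 1.0),
    1 -> Map(0 -> 1.0, 2 -> 1.0),
    2 -> Map(1 -> 1.0, 3 -> 1.0),
    3 -> Map(2 -> 1.0)
  )
  val source = 0

  // One synchronous round of the gradient computation.
  def round(field: Map[Int, Double]): Map[Int, Double] =
    field.map { case (id, _) =>
      val viaNbr = links(id).map { case (n, d) => field(n) + d }
      val local = if (id == source) 0.0 else Double.PositiveInfinity
      id -> math.min(local, if (viaNbr.isEmpty) local else viaNbr.min)
    }

  // Iterate rounds from an all-infinity field until (practically) stable.
  def stabilize(rounds: Int): Map[Int, Double] = {
    val init = links.keys.map(_ -> Double.PositiveInfinity).toMap
    (1 to rounds).foldLeft(init)((f, _) => round(f))
  }
}
```

In an actual aggregate language the per-device round would be written once as a declarative field expression, and the platform would handle neighbour exchange and scheduling.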
The Internet of Things and edge computing are fostering a future of ecosystems hosting complex decentralized computations, deeply integrated with our highly dynamic environments. Digitalized buildings, communities of people, and cities will be the next-generation “hardware and platform,” comprising myriads of interconnected devices, on top of which intrinsically-distributed computational processes will run and self-organize. They will spontaneously spawn, diffuse to pertinent logical/physical regions, cooperate and compete, opportunistically summon required resources, collect and analyze data, compute results, trigger distributed actions, and eventually decay. What would a programming model for such ecosystems look like? Based on research findings on self-adaptive/self-organizing systems, this paper proposes design abstractions based on “dynamic decentralization domains”: regions of space opportunistically formed to support situated recognition and action. We embody the approach in a Scala application programming interface (API) enacting distributed execution and show its applicability in a case study of environmental monitoring.
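One way to picture a “decentralization domain” is as a partition of the network around designated sources. The hypothetical plain-Scala sketch below (names, topology, and the gossip scheme are assumptions, not the paper's API) has devices repeatedly exchange (leader, distance) pairs with neighbours, so the network splits into regions around the closest source.

```scala
// Hypothetical sketch: devices gossip (leaderId, hopDistance) pairs; each
// adopts the closest source as its domain leader, partitioning the network.
object DomainSketch {
  // Assumed line topology 0-1-2-3-4 with sources at both ends.
  val links: Map[Int, List[Int]] = Map(
    0 -> List(1), 1 -> List(0, 2), 2 -> List(1, 3), 3 -> List(2, 4), 4 -> List(3)
  )
  val sources = Set(0, 4)

  type State = Map[Int, (Int, Double)] // device -> (domain leader, hop distance)

  def round(s: State): State =
    s.map { case (id, _) =>
      val own: (Int, Double) =
        if (sources(id)) (id, 0.0) else (-1, Double.PositiveInfinity)
      val viaNbr = links(id).map { n => val (l, d) = s(n); (l, d + 1.0) }
      // minBy keeps the first minimum, making ties deterministic.
      id -> (own :: viaNbr).minBy(_._2)
    }

  def domains(rounds: Int): Map[Int, Int] = {
    val init: State = links.keys.map(_ -> (-1, Double.PositiveInfinity)).toMap
    (1 to rounds).foldLeft(init)((s, _) => round(s)).map { case (id, (l, _)) => id -> l }
  }
}
```

Domains formed this way adapt automatically: if a source disappears or devices move, re-running the rounds re-partitions the space.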
Swarm intelligence leverages collective behaviours, emerging from the interaction and activity of many “simple” agents, to solve problems in various environments. One problem of interest in large swarms featuring a variety of sub-goals is swarm clustering, where the individuals of a swarm are assigned or choose to belong to zero or more groups, also called clusters. In this work, we address the sensing-based swarm clustering problem, where clusters are defined based on both the values sensed from the environment and the spatial distribution of the values and the agents. Moreover, we address it in a setting characterised by decentralisation of computation and interaction, and by dynamicity of values and mobility of agents. As a solution, we propose to use the field-based computing paradigm, where computation and interaction are expressed in terms of a functional manipulation of fields: distributed, evolving data structures mapping each individual of the system to values over time. We devise a solution to sensing-based swarm clustering leveraging multiple concurrent field computations with limited domain, and evaluate the approach experimentally by means of simulations, showing that the programmed swarms form clusters that closely reflect the dynamics of the underlying environmental phenomena.
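A toy version of sensing-based clustering can be sketched in plain Scala as follows (a hypothetical illustration, not the paper's algorithm; topology and sensed values are assumptions): each agent repeatedly points at the neighbour, or itself, with the highest sensed value, so chains of pointers climb the value landscape and converge on local maxima, which serve as cluster identifiers.

```scala
// Hypothetical sketch: cluster membership follows the sensed-value landscape.
// Pointers move uphill each round; local maxima become cluster identifiers.
object SwarmClusteringSketch {
  // Assumed line topology with two "peaks" in the sensed field (at 0 and 4).
  val links: Map[Int, List[Int]] = Map(
    0 -> List(1), 1 -> List(0, 2), 2 -> List(1, 3), 3 -> List(2, 4), 4 -> List(3)
  )
  val sensed: Map[Int, Double] = Map(0 -> 0.9, 1 -> 0.5, 2 -> 0.1, 3 -> 0.4, 4 -> 0.8)

  def round(cluster: Map[Int, Int]): Map[Int, Int] =
    cluster.map { case (id, _) =>
      // Pick the highest-valued device in the closed neighbourhood,
      // then adopt its current cluster (one uphill step per round).
      val best = (id :: links(id)).maxBy(sensed)
      id -> cluster(best)
    }

  def clusters(rounds: Int): Map[Int, Int] = {
    val init = links.keys.map(id => id -> id).toMap
    (1 to rounds).foldLeft(init)((c, _) => round(c))
  }
}
```

Because the rounds keep running, cluster membership tracks changes in the sensed values and in agent positions, matching the dynamic setting the abstract describes.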
Background and objective: the COVID-19 outbreak has become one of the most challenging problems for human beings. It is a communicable disease caused by a new coronavirus strain, which has already infected over 375 million people and caused almost 6 million deaths. This paper aims to design and develop a framework for the early diagnosis and fast classification of COVID-19 symptoms using multimodal deep learning techniques. Methods: we collected chest X-ray and cough sample data from open-source datasets, including the Cohen dataset, and from local hospitals. Features are extracted from the chest X-ray images in these datasets. We also used cough audio datasets from the Coswara project and local hospitals. The publicly available Coughvid, DetectNow, and Virufy datasets are used to evaluate COVID-19 detection based on speech, respiratory, and cough sounds. The collected audio data comprises slow and fast breathing, shallow and deep coughing, spoken digits, and phonation of sustained vowels. Gender, geographical location, age, preexisting medical conditions, and current health status (COVID-19 and non-COVID-19) are recorded. Results: the proposed framework uses a pre-trained network selection algorithm to determine the best fusion model, characterized by the pre-trained chest X-ray and cough models. Then, fusion by discriminant correlation analysis is used to fuse discriminatory features from the two models. The proposed framework achieved recognition accuracy, specificity, and sensitivity of 98.91%, 96.25%, and 97.69%, respectively. With the fusion method, we obtained 94.99% accuracy. Conclusion: this paper examines the effectiveness of well-known ML architectures on a joint collection of chest X-rays and cough samples for the early classification of COVID-19. It shows that existing methods can be effectively used for diagnosis, suggesting that the fusion learning paradigm could be a crucial asset in diagnosing future unknown illnesses. The proposed framework supports health informatics based on early diagnosis, clinical decision support, and accurate prediction.
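To illustrate the general idea of combining two modalities, the sketch below shows simple decision-level fusion in Scala (a hypothetical illustration, not the paper's discriminant-correlation-analysis feature fusion): class probabilities from an X-ray model and a cough model are combined by a weighted average before taking the argmax; the weight is an assumption.

```scala
// Hypothetical sketch: decision-level fusion of two modality classifiers.
object FusionSketch {
  // Probabilities over classes (e.g., COVID-19, non-COVID-19) per modality.
  def fuse(xray: Vector[Double], cough: Vector[Double],
           wXray: Double = 0.6): Vector[Double] = {
    val wCough = 1.0 - wXray
    // Element-wise weighted average of the two probability vectors.
    xray.zip(cough).map { case (px, pc) => wXray * px + wCough * pc }
  }

  // Predicted class index: argmax of the fused probabilities.
  def predict(fused: Vector[Double]): Int = fused.indexOf(fused.max)
}
```

Feature-level fusion, as in the paper, instead combines intermediate representations of the two networks before the final classifier, which typically preserves more cross-modal information than averaging output probabilities.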
In the last fifteen years, there has been a widespread diffusion of wearable sensorized devices for a plethora of applications in heterogeneous domains. Wearable technology provides fundamental capabilities such as smart sensing, monitoring, data recording, and multi-modal interaction, in a seamless, pervasive, and easy-to-use way. An emerging research trend is the definition of situation-aware wearable computing systems, i.e., wearable devices able to perceive and understand what is happening in the environment in order to adapt their behavior and anticipate users’ needs, a capability known as situation awareness. Despite the increasing interest of the research community in situation-aware wearable devices, there is a lack of studies, formal models, methodological approaches, and theoretical foundations on which these systems can be grounded. As a result, a very limited number of smart sensors (physical or virtual) capable of effectively and efficiently supporting situation awareness have been proposed so far. In this article, we provide a survey and a classification of state-of-the-art situation-aware wearable systems, outlining current research trends, shortcomings, and challenges, with an emphasis on the models, approaches, and computational techniques of situation awareness and wearable computing on which they are based. The survey has been performed using the PRISMA methodology for systematic reviews. The analysis has been conducted with respect to a reference architecture, namely SA-WCS, of a generic situation-aware wearable computing system that we propose in this article, grounded on Endsley’s model of situation awareness. Such a reference architecture not only provides a systematic framework for the comparison and categorization of the works, but also aims to promote the development of the next generation of wearable computing systems (WCS).
The use of wearable devices in daily activities is continuously and rapidly growing. Wearable technology provides seamless sensing, monitoring, and multimodal interaction, without requiring continuous manual intervention and effort from users. These devices support the realization of novel applications in many domains, from healthcare to security and entertainment, improving users’ quality of life. The situation awareness paradigm allows wearable computing systems to be aware of what is happening to users and in the surrounding environment, supporting automatic smart adaptive behaviors based on the identified situation. Although situation-aware wearable devices have recently attracted a lot of attention, there is still a lack of methodological approaches and reference models for defining such systems. In this paper, we propose a reference architecture for situation-aware wearable computing systems grounded on Endsley’s SA model. A specialization of the architecture in the context of multi-user wearable computing systems is also proposed to support team situation awareness. An illustrative example shows a practical instantiation of the architecture in the context of contact tracing using smart sensorized face masks.