
Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography, created 04 Dec 2023 at 01:39

Cloud Computing

From Wikipedia: Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
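The "pay-as-you-go" model mentioned above can be made concrete with a small sketch. The rates below are hypothetical, not any provider's actual pricing; the point is only that cost scales directly with consumption, which is exactly what makes a burst month hard to budget for on a fixed grant:

```python
# Minimal pay-as-you-go cost sketch. Rates are invented for illustration.

def monthly_cost(hours_used, rate_per_hour, storage_gb=0.0, rate_per_gb=0.0):
    """Estimate a pay-as-you-go bill: you pay only for what you consume."""
    return hours_used * rate_per_hour + storage_gb * rate_per_gb

# A steady workload vs. a burst month: same rates, very different bills.
baseline = monthly_cost(hours_used=720, rate_per_hour=0.10)   # one always-on VM
burst = monthly_cost(hours_used=720 * 4, rate_per_hour=0.10)  # 4x peak demand
print(f"baseline: ${baseline:.2f}, burst month: ${burst:.2f}")
```

A four-fold demand spike quadruples the bill, which is the cost-overrun scenario the paragraph warns grant-funded institutions about.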

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2023-12-03

Doo FX, Parekh VS, Kanhere A, et al (2023)

Evaluation of Climate-Aware Metrics Tools for Radiology Informatics and Artificial Intelligence: Towards a Potential Radiology Eco-Label.

Journal of the American College of Radiology : JACR pii:S1546-1440(23)00960-2 [Epub ahead of print].

Radiology, with its embedded and growing technological roots, introduces a distinct dimension to the unfolding narrative of healthcare's contributions to climate change. Delivering modern patient care requires a robust informatics team to move images from the imaging equipment to the workstations and the health care system. Radiology informatics is the field that manages medical imaging information technology. This involves the acquisition, storage, retrieval, and use of imaging information in healthcare efficiently and effectively to improve access and quality, which includes PACS (picture archiving and communication system), cloud services, and artificial intelligence (AI). However, the electricity consumption of computing and the life cycle of various computer components expand the carbon footprint of healthcare. This manuscript provides a general framework to understand the environmental impact of clinical radiology informatics. We use the international Greenhouse Gas (GHG) Protocol to draft a definition of scopes of emissions pertinent to radiology informatics, and explore existing tools available to measure and account for these emissions. A novel standard eco-label for radiology informatics tools, analogous to the "Energy Star" label for consumer devices or "Leadership in Energy and Environmental Design (LEED)" certification for buildings, should be developed to promote awareness and guide radiologists and radiology informatics leaders in making environmentally conscious decisions for their clinical practice. At this critical climate juncture, the radiology community has a unique and pressing obligation to consider our shared environmental responsibility in innovating clinical technology for patient care.

RevDate: 2023-12-02

Shaikh TA, Rasool T, P Verma (2023)

Machine intelligence and medical cyber-physical system architectures for smart healthcare: Taxonomy, challenges, opportunities, and possible solutions.

Artificial intelligence in medicine, 146:102692.

Hospitals increasingly use medical cyber-physical systems (MCPS) to give patients quality continuous care. An MCPS is a life-critical, context-aware, networked system of medical equipment. Achieving high assurance in system software, interoperability, context-aware intelligence, autonomy, security and privacy, and device certifiability has been challenging because of the need to create complicated MCPS that are safe and efficient. The paper presents the MCPS as a newly developed application case study of artificial intelligence in healthcare. Applications for various CPS-based healthcare systems are discussed, such as telehealthcare systems for managing chronic diseases (cardiovascular diseases, epilepsy, hearing loss, and respiratory diseases), supporting medication intake management, and tele-homecare systems. The goal of this study is to provide a thorough overview of the essential components of the MCPS from several angles, including design, methodology, and important enabling technologies such as sensor networks, the Internet of Things (IoT), cloud computing, and multi-agent systems. Additionally, some significant applications are investigated, such as smart cities, which are regarded as one of the key applications that will offer new services for industrial systems, transportation networks, energy distribution, monitoring of environmental changes, business and commerce applications, emergency response, and other social and recreational activities. The four levels of an MCPS's general architecture (data collection, data aggregation, cloud processing, and action) are shown in this study. Different encryption techniques must be employed to ensure data privacy inside each layer due to the variations in hardware and communication capabilities of each layer. We compare established and new encryption techniques based on how well they support safe data exchange, secure computing, and secure storage. Our thorough experimental study of each method reveals that, although enabling innovative new features like secure sharing and safe computing, developing encryption approaches significantly increases computational and storage overhead. To increase the usability of newly developed encryption schemes in an MCPS and to provide a comprehensive list of tools and databases to assist other researchers, we provide a list of opportunities and challenges for incorporating machine intelligence-based MCPS in healthcare applications in our paper's conclusion.

RevDate: 2023-12-01

Chen X, Li J, Chen D, et al (2023)

CloudBrain-MRS: An intelligent cloud computing platform for in vivo magnetic resonance spectroscopy preprocessing, quantification, and analysis.

Journal of magnetic resonance (San Diego, Calif. : 1997), 358:107601 pii:S1090-7807(23)00236-7 [Epub ahead of print].

Magnetic resonance spectroscopy (MRS) is an important clinical imaging method for diagnosis of diseases. The MRS spectrum is used to observe the signal intensity of metabolites or further infer their concentrations. Although magnetic resonance vendors commonly provide basic functions for spectrum plotting and metabolite quantification, the spread of clinical MRS research is still limited by the lack of easy-to-use processing software or platforms. To address this issue, we have developed CloudBrain-MRS, a cloud-based online platform that provides powerful hardware and advanced algorithms. The platform can be accessed simply through a web browser, without the need for any program installation on the user side. CloudBrain-MRS also integrates the classic LCModel and advanced artificial intelligence algorithms and supports batch preprocessing, quantification, and analysis of MRS data from different vendors. Additionally, the platform offers useful functions: (1) automatic statistical analysis to find biomarkers for diseases; (2) consistency verification between the classic and artificial intelligence quantification algorithms; (3) colorful three-dimensional visualization for easy observation of individual metabolite spectra. Lastly, data from both healthy subjects and patients with mild cognitive impairment are used to demonstrate the functions of the platform. To the best of our knowledge, this is the first cloud computing platform for in vivo MRS with artificial intelligence processing. We have shared our cloud platform at MRSHub, providing at least two years of free access and service. If you are interested, please visit https://mrshub.org/software_all/#CloudBrain-MRS or https://csrc.xmu.edu.cn/CloudBrain.html.

RevDate: 2023-11-30

Zhao K, Farrell K, Mashiku M, et al (2023)

A search-based geographic metadata curation pipeline to refine sequencing institution information and support public health.

Frontiers in public health, 11:1254976.

BACKGROUND: The National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) has amassed a vast reservoir of genetic data since its inception in 2007. These public data hold immense potential for supporting pathogen surveillance and control. However, the lack of standardized metadata and inconsistent submission practices in SRA may impede the data's utility in public health.

METHODS: To address this issue, we introduce the Search-based Geographic Metadata Curation (SGMC) pipeline. SGMC utilized Python and web scraping to extract geographic data of sequencing institutions from NCBI SRA in the Cloud and its website. It then harnessed ChatGPT to refine the sequencing institution and location assignments. To illustrate the pipeline's utility, we examined the geographic distribution of the sequencing institutions and their countries relevant to polio eradication and categorized them.

RESULTS: SGMC successfully identified 7,649 sequencing institutions and their global locations from a random selection of 2,321,044 SRA accessions. These institutions were distributed across 97 countries, with strong representation in the United States, the United Kingdom and China. However, there was a lack of data from African, Central Asian, and Central American countries, indicating potential disparities in sequencing capabilities. Comparison with manually curated data for U.S. institutions reveals SGMC's accuracy rates of 94.8% for institutions, 93.1% for countries, and 74.5% for geographic coordinates.

CONCLUSION: SGMC may represent a novel approach using a generative AI model to enhance geographic data (country and institution assignments) for large numbers of samples within SRA datasets. This information can be utilized to bolster public health endeavors.
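The accuracy rates in the results (pipeline assignments scored against a manually curated reference) reduce to simple exact-match agreement. A toy illustration, with invented placeholder institution names rather than the study's data:

```python
# Score pipeline-assigned institutions against a manually curated reference.
# Names are invented placeholders, not records from the SGMC study.
curated  = ["CDC", "Broad", "JCVI", "CDC", "Broad"]
pipeline = ["CDC", "Broad", "JCVI", "CDC", "NIH"]

accuracy = sum(a == b for a, b in zip(curated, pipeline)) / len(curated)
print(f"institution accuracy: {accuracy:.1%}")  # 4 of 5 agree -> 80.0%
```

The same per-field comparison, applied separately to institutions, countries, and coordinates, yields the three accuracy figures reported above.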

RevDate: 2023-11-30

Olson RH, Cohen Kalafut N, D Wang (2023)

MANGEM: A web app for multimodal analysis of neuronal gene expression, electrophysiology, and morphology.

Patterns (New York, N.Y.), 4(11):100847 pii:S2666-3899(23)00226-X.

Single-cell techniques like Patch-seq have enabled the acquisition of multimodal data from individual neuronal cells, offering systematic insights into neuronal functions. However, these data can be heterogeneous and noisy. To address this, machine learning methods have been used to align cells from different modalities onto a low-dimensional latent space, revealing multimodal cell clusters. The use of those methods can be challenging without computational expertise or suitable computing infrastructure for computationally expensive methods. To address this, we developed a cloud-based web application, MANGEM (multimodal analysis of neuronal gene expression, electrophysiology, and morphology). MANGEM provides a step-by-step accessible and user-friendly interface to machine learning alignment methods of neuronal multimodal data. It can run asynchronously for large-scale data alignment, provide users with various downstream analyses of aligned cells, and visualize the analytic results. We demonstrated the usage of MANGEM by aligning multimodal data of neuronal cells in the mouse visual cortex.

RevDate: 2023-11-29

Ait Abdelmoula I, Idrissi Kaitouni S, Lamrini N, et al (2023)

Towards a sustainable edge computing framework for condition monitoring in decentralized photovoltaic systems.

Heliyon, 9(11):e21475 pii:S2405-8440(23)08683-8.

In recent times, the rapid advancements in technology have led to a digital revolution in urban areas, and new computing frameworks are emerging to address the current issues in monitoring and fault detection, particularly in the context of growing renewable decentralized energy systems. This research proposes a novel framework for monitoring the condition of decentralized photovoltaic systems within a smart city infrastructure. The approach uses edge computing to overcome the challenges associated with costly processing through remote cloud servers. By processing data at the edge of the network, this concept allows for significant gains in speed and bandwidth consumption, making it suitable for a sustainable city environment. In the proposed edge-learning scheme, several machine learning models are compared to find the model best achieving both high accuracy and low latency in detecting photovoltaic faults. Four light and rapid machine learning models, namely CBLOF, LOF, KNN, and ANN, are selected as best performers and trained locally in decentralized edge nodes. The overall approach is deployed in a smart solar campus with multiple distributed PV units located in the R&D platform Green & Smart Building Park. Several experiments were conducted on different anomaly scenarios, and the models were evaluated based on their supervision method, f1-score, inference time, RAM usage, and model size. The paper also investigates the impact of the type of supervision and the class of the model on the anomaly detection performance. The findings indicated that the supervised artificial neural network (ANN) had superior performance compared to other models, obtaining an f1-score of 80 % even in the most unfavorable conditions. The findings also showed that KNN was the most suitable unsupervised model for the investigated experiments, achieving good f1-scores (100 %, 95 % and 92 %) in 3 out of 4 scenarios, making it a good candidate for similar anomaly detection tasks.
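The evaluation methodology in this abstract (scoring supervised and unsupervised detectors with the same f1 metric) can be sketched on synthetic data. Everything below is invented for illustration, not the paper's PV dataset or models, and logistic regression stands in for the paper's ANN:

```python
# Hedged sketch: compare an unsupervised outlier detector (LOF, as in the
# paper) and a supervised classifier on labeled anomalies via f1-score.
# Data are synthetic; logistic regression is a stand-in for the ANN.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 2))   # healthy readings
faults = rng.normal(4.0, 1.0, size=(20, 2))    # anomalous readings
X = np.vstack([normal, faults])
y = np.array([0] * 200 + [1] * 20)             # 1 = fault

# Unsupervised: LOF ranks outliers without ever seeing the labels.
lof = LocalOutlierFactor(n_neighbors=35, contamination=0.1)
lof_pred = (lof.fit_predict(X) == -1).astype(int)   # -1 marks an outlier

# Supervised stand-in for the paper's ANN, trained on the labels.
clf = LogisticRegression(max_iter=1000).fit(X, y)
sup_pred = clf.predict(X)

lof_f1 = f1_score(y, lof_pred)
sup_f1 = f1_score(y, sup_pred)
print(f"LOF f1: {lof_f1:.2f}  supervised f1: {sup_f1:.2f}")
```

On edge devices, the paper's additional criteria (inference time, RAM usage, model size) would be measured alongside the f1-score before picking a model.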

RevDate: 2023-11-29

Mohammed MA, Lakhan A, Abdulkareem KH, et al (2023)

Multi-objectives reinforcement federated learning blockchain enabled Internet of things and Fog-Cloud infrastructure for transport data.

Heliyon, 9(11):e21639 pii:S2405-8440(23)08847-3.

For the past decade, there has been a significant increase in customer usage of public transport applications in smart cities. These applications rely on various services, such as communication and computation, provided by additional nodes within the smart city environment. However, these services are delivered by a diverse range of cloud computing-based servers that are widely spread and heterogeneous, making cybersecurity a crucial challenge among these servers. Numerous machine-learning approaches have been proposed in the literature to address the cybersecurity challenges in heterogeneous transport applications within smart cities. However, the centralized security and scheduling strategies suggested so far have yet to produce optimal results for transport applications. This work aims to present a secure decentralized infrastructure for transporting data in fog cloud networks. This paper introduces Multi-Objectives Reinforcement Federated Learning Blockchain (MORFLB) for Transport Infrastructure. MORFLB aims to minimize processing and transfer delays while maximizing long-term rewards by identifying known and unknown attacks on remote sensing data in vehicle applications. MORFLB incorporates multi-agent policies, proof-of-work hashing validation, and decentralized deep neural network training to achieve minimal processing and transfer delays. It comprises vehicle applications, decentralized fog, and cloud nodes based on blockchain reinforcement federated learning, which improves rewards through trial and error. The study formulates a combinatorial problem that minimizes and maximizes various factors for vehicle applications. The experimental results demonstrate that MORFLB effectively reduces processing and transfer delays while maximizing rewards compared to existing studies. It provides a promising solution to address the cybersecurity challenges in intelligent transport applications within smart cities. In conclusion, this paper presents MORFLB, a combination of different schemes that ensure the execution of transport data under their constraints and achieve optimal results with the suggested decentralized infrastructure based on blockchain technology.

RevDate: 2023-11-29

Guo LL, Calligan M, Vettese E, et al (2023)

Development and validation of the SickKids Enterprise-wide Data in Azure Repository (SEDAR).

Heliyon, 9(11):e21586 pii:S2405-8440(23)08794-7.

OBJECTIVES: To describe the processes developed by The Hospital for Sick Children (SickKids) to enable utilization of electronic health record (EHR) data by creating sequentially transformed schemas for use across multiple user types.

METHODS: We used Microsoft Azure as the cloud service provider and named this effort the SickKids Enterprise-wide Data in Azure Repository (SEDAR). Epic Clarity data from on-premises was copied to a virtual network in Microsoft Azure. Three sequential schemas were developed. The Filtered Schema added a filter to retain only SickKids and valid patients. The Curated Schema created a data structure that was easier to navigate and query. Each table contained a logical unit such as patients, hospital encounters or laboratory tests. Data validation of randomly sampled observations in the Curated Schema was performed. The SK-OMOP Schema was designed to facilitate research and machine learning. Two individuals mapped medical elements to standard Observational Medical Outcomes Partnership (OMOP) concepts.

RESULTS: A copy of Clarity data was transferred to Microsoft Azure and updated each night using log shipping. The Filtered Schema and Curated Schema were implemented as stored procedures and executed each night with incremental updates or full loads. Data validation required up to 16 iterations for each Curated Schema table. OMOP concept mapping achieved at least 80 % coverage for each SK-OMOP table.

CONCLUSIONS: We described our experience in creating three sequential schemas to address different EHR data access requirements. Future work should consider replicating this approach at other institutions to determine whether approaches are generalizable.

RevDate: 2023-11-29

Han J, Sun R, Zeeshan M, et al (2023)

The impact of digital transformation on green total factor productivity of heavily polluting enterprises.

Frontiers in psychology, 14:1265391.

INTRODUCTION: Digital transformation has become an important engine for high-quality economic development and high-level environmental protection. However, for green total factor productivity (GTFP), an indicator that comprehensively reflects both economic and environmental benefits, there is a lack of studies analyzing the effect of digital transformation on heavily polluting enterprises' GTFP from a micro perspective, and its impact mechanism is still unclear. Therefore, we aim to study the impact of digital transformation on heavily polluting enterprises' GTFP and its mechanism, and explore the heterogeneity of its impact.

METHODS: We use Chinese A-share listed enterprises in the heavily polluting industry data from 2007 to 2019, measure enterprise digital transformation indicator using text analysis, and measure enterprise GTFP indicator using the GML index based on SBM directional distance function, to investigate the impact of digital transformation on heavily polluting enterprises' GTFP.

RESULTS: Digital transformation can significantly enhance heavily polluting enterprises' GTFP, and this finding still holds after considering the endogeneity problem and conducting robustness tests. Digital transformation can enhance heavily polluting enterprises' GTFP by promoting green innovation, improving management efficiency, and reducing external transaction costs. The improvement effect of digital transformation on heavily polluting enterprises' GTFP is more obvious in the samples of non-state-owned enterprises, non-high-tech industries, and the eastern region. Compared with blockchain technology, the application of artificial intelligence, cloud computing, big data, and digital technologies can significantly improve heavily polluting enterprises' GTFP.

DISCUSSION: Our paper breaks through the limitations of existing research, which not only theoretically enriches the literature related to digital transformation and GTFP, but also practically provides policy implications for continuously promoting heavily polluting enterprises' digital transformation and facilitating their high-quality development.

RevDate: 2023-11-29

Ko HYK, Tripathi NK, Mozumder C, et al (2023)

Real-Time Remote Patient Monitoring and Alarming System for Noncommunicable Lifestyle Diseases.

International journal of telemedicine and applications, 2023:9965226.

Telemedicine and remote patient monitoring (RPM) systems have been gaining interest and seeing adoption in healthcare sectors since the COVID-19 pandemic due to their efficiency and capability to deliver timely healthcare services while containing COVID-19 transmission. These systems were developed using the latest technology in wireless sensors, medical devices, cloud computing, mobile computing, telecommunications, and machine learning. In this article, a real-time remote patient monitoring system is proposed with an accessible, compact, accurate, and low-cost design. The implemented system is designed as an end-to-end communication interface between medical practitioners and patients. The objective of this study is to provide remote healthcare services to patients who need ongoing care or those who have been discharged from the hospital without affecting their daily routines. The developed monitoring system was then evaluated on 1177 records from the MIMIC-III clinical dataset (patients aged between 19 and 99 years). The performance analysis of the proposed system achieved 88.7% accuracy in generating alerts with a logistic regression classification algorithm. This result reflects positively on the quality and robustness of the proposed study. Since the processing time of the proposed system is less than 2 minutes, the system has high computational speed and is convenient for real-time monitoring. Furthermore, the proposed system can help compensate for low doctor-to-patient ratios by monitoring patients in remote locations and elderly people who reside in their own homes.
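The alerting approach in this abstract (a logistic regression classifier mapping patient measurements to alert/no-alert) can be sketched in a few lines. The features, thresholds, and labels below are invented for illustration, not the study's MIMIC-III pipeline:

```python
# Illustrative-only sketch of logistic-regression alerting on vital signs.
# Features, thresholds, and labels are invented, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
heart_rate = rng.normal(80, 15, n)   # beats per minute
spo2 = rng.normal(96, 3, n)          # oxygen saturation, %

# Hypothetical alert rule standing in for clinician-derived labels:
# alert when vitals drift far outside typical ranges.
y = ((heart_rate > 110) | (spo2 < 90)).astype(int)
X = np.column_stack([heart_rate, spo2])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"alert accuracy: {acc:.3f}")
```

In an RPM deployment, the trained model would score incoming sensor readings in the cloud and push alerts to practitioners when the predicted class is 1.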

RevDate: 2023-11-28

Rogage K, Mahamedi E, Brilakis I, et al (2022)

Beyond digital shadows: Digital Twin used for monitoring earthwork operation in large infrastructure projects.

AI in civil engineering, 1(1):7.

Current research on Digital Twin (DT) is largely focused on the performance of built assets in their operational phases, as well as on the urban environment. However, the construction phase has not received enough attention in Digital Twin research. This paper therefore proposes a Digital Twin framework for the construction phase, develops a DT prototype, and tests it for the use case of measuring productivity and monitoring earthwork operations. The DT framework and its prototype are underpinned by the principles of versatility, scalability, usability and automation to enable the DT to fulfil the requirements of large-sized earthwork projects and the dynamic nature of their operation. Cloud computing and dashboard visualisation were deployed to enable automated and repeatable data pipelines and data analytics at scale and to provide insights in near-real time. The testing of the DT prototype in a motorway project in the Northeast of England successfully demonstrated its ability to produce key insights by using the following approaches: (i) to predict equipment utilisation ratios and productivities; (ii) to detect the percentage of time spent on different tasks (i.e., loading, hauling, dumping, returning or idling), the distance travelled by equipment over time and the speed distribution; and (iii) to visualise certain earthwork operations.

RevDate: 2023-11-25

Geroski T, Gkaintes O, Vulović A, et al (2023)

SGABU computational platform for multiscale modeling: Bridging the gap between education and research.

Computer methods and programs in biomedicine, 243:107935 pii:S0169-2607(23)00601-6 [Epub ahead of print].

BACKGROUND AND OBJECTIVE: In accordance with the latest aspirations in the field of bioengineering, there is a need to create a web-accessible but powerful cloud computational platform that combines datasets and multiscale models related to bone modeling, cancer, cardiovascular diseases and tissue engineering. The SGABU platform may become a powerful information system for research and education that can integrate data, extract information, and facilitate knowledge exchange with the goal of creating and developing appropriate computing pipelines to provide accurate and comprehensive biological information from the molecular to organ level.

METHODS: The datasets integrated into the platform are obtained from experimental and/or clinical studies and are mainly in tabular or image file format, including metadata. The implementation of multiscale models is an ambitious effort of the platform to capture phenomena at different length scales, described using partial and ordinary differential equations, which are solved numerically on complex geometries with the use of the finite element method. The majority of the SGABU platform's simulation pipelines are provided as Common Workflow Language (CWL) workflows. Each of them requires creating a CWL implementation on the backend and a user-friendly interface using standard web technologies. The platform is available at https://sgabu-test.unic.kg.ac.rs/login.

RESULTS: The main dashboard of the SGABU platform is divided into sections for each field of research, each of which includes a subsection of datasets and multiscale models. The datasets can be presented in a simple form as tabular data, or using technologies such as Plotly.js for 2D plot interactivity and Kitware ParaView Glance for 3D viewing. Regarding the models, Docker containerization is used for packaging the individual tools, and CWL orchestration describes inputs with validation forms and outputs with tabular views, interactive diagrams, 3D views and animations for output visualization.

CONCLUSIONS: In practice, the structure of SGABU platform means that any of the integrated workflows can work equally well on any other bioengineering platform. The key advantage of the SGABU platform over similar efforts is its versatility offered with the use of modern, modular, and extensible technology for various levels of architecture.

RevDate: 2023-11-25

Zhang T, Jin X, Bai S, et al (2023)

Smart Public Transportation Sensing: Enhancing Perception and Data Management for Efficient and Safety Operations.

Sensors (Basel, Switzerland), 23(22): pii:s23229228.

The use of cloud computing, big data, IoT, and mobile applications in the public transportation industry has resulted in the generation of vast and complex data, whose large volume and variety have posed several obstacles to effective, high-efficiency data sensing and processing in a real-time data-driven public transportation management system. To overcome the above-mentioned challenges and to guarantee optimal data availability for data sensing and processing in public transportation perception, a public transportation sensing platform is proposed to collect, integrate, and organize diverse data from different data sources. The proposed data perception platform connects multiple data systems and edge intelligent perception devices to enable the collection of various types of data, including traveling information of passengers and transaction data of smart cards. To enable the efficient extraction of precise and detailed traveling behavior, an efficient field-level data lineage exploration method is proposed during logical plan generation and is integrated into the FlinkSQL system seamlessly. Furthermore, a row-level fine-grained permission control mechanism is adopted to support flexible data management. With these two techniques, the proposed data management system can support efficient processing of large amounts of data and conduct comprehensive analysis and application of business data from numerous different sources to realize the value of the data with high data safety. Through operational testing in real environments, the proposed platform has proven highly efficient and effective in managing organizational operations, data assets, the data life cycle, offline development, and backend administration over large amounts of various types of public transportation traffic data.

RevDate: 2023-11-25

Nugroho AK, Shioda S, T Kim (2023)

Optimal Resource Provisioning and Task Offloading for Network-Aware and Federated Edge Computing.

Sensors (Basel, Switzerland), 23(22): pii:s23229200.

Compared to cloud computing, mobile edge computing (MEC) is a promising solution for delay-sensitive applications due to its proximity to end users. Because of its ability to offload resource-intensive tasks to nearby edge servers, MEC allows a diverse range of compute- and storage-intensive applications to operate on resource-constrained devices. The optimal utilization of MEC can lead to enhanced responsiveness and quality of service, but it requires careful design from the perspective of user-base station association, virtualized resource provisioning, and task distribution. Also, considering the limited exploration of the federation concept in the existing literature, its impact on resource allocation and management remains not widely recognized. In this paper, we study the network and MEC resource scheduling problem, where some edge servers are federated, limiting resource expansion within the same federations. The integration of network and MEC is crucial, emphasizing the necessity of a joint approach. In this work, we present NAFEOS, a solution formulated as a two-stage algorithm that can effectively integrate association optimization with vertical and horizontal scaling. The Stage-1 problem optimizes the user-base station association and federation assignment so that the edge servers can be utilized in a balanced manner. The following Stage-2 dynamically schedules both vertical and horizontal scaling so that the fluctuating task-offloading demands from users are fulfilled. The extensive evaluations and comparison results show that the proposed approach can effectively achieve optimal resource utilization.
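The Stage-1 goal of balanced server utilization can be illustrated with a much simpler baseline than NAFEOS itself. The sketch below is a generic greedy least-loaded assignment, not the paper's algorithm, and the demand numbers are invented:

```python
# Rough sketch of balanced user-to-server assignment (NOT the NAFEOS
# algorithm): greedily send each user to the currently least-loaded server.
import heapq

def balanced_assign(user_demands, n_servers):
    """Greedy least-loaded assignment; returns {user: server}."""
    heap = [(0.0, s) for s in range(n_servers)]  # (current load, server id)
    heapq.heapify(heap)
    assignment = {}
    for user, demand in enumerate(user_demands):
        load, server = heapq.heappop(heap)       # least-loaded server
        assignment[user] = server
        heapq.heappush(heap, (load + demand, server))
    return assignment

demands = [4, 3, 3, 2, 2, 2]                     # invented task demands
print(balanced_assign(demands, 3))
```

A federation-aware scheduler like the paper's would add constraints on which servers may absorb overflow, but the balancing objective is the same.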

RevDate: 2023-11-25

Oliveira M, Chauhan S, Pereira F, et al (2023)

Blockchain Protocols and Edge Computing Targeting Industry 5.0 Needs.

Sensors (Basel, Switzerland), 23(22): pii:s23229174.

"Industry 5.0" is the latest industrial revolution. A variety of cutting-edge technologies, including artificial intelligence, the Internet of Things (IoT), and others, come together to form it. Billions of devices are connected for high-speed data transfer, especially in a 5G-enabled industrial environment for information collection and processing. Most of the current solutions for issues such as access control mechanisms, time to fetch data from different devices, and protocols used may not be applicable in the future, as these protocols are based upon a centralized mechanism. This centralized mechanism may have a single point of failure along with computational overhead. Thus, there is a need for an efficient decentralized access control mechanism for device-to-device (D2D) communication in various industrial sectors; for example, sensors in different regions may collect and process data for making intelligent decisions. In such an environment, reliability, security, and privacy are major concerns, as most of the solutions are based upon a centralized control mechanism. To mitigate the aforementioned issues, this paper presents the opportunities and highlights some of the most impressive initiatives that help to shape the future. This new era will bring about significant changes in the way businesses operate, allowing them to become more cost-effective, more efficient, and produce higher-quality goods and services. As sensors are getting more accurate and cheaper and have lower response times, 5G networks are being integrated, and more industrial equipment and machinery are becoming available; hence, various sectors, including the manufacturing sector, are going through a significant period of transition right now. 
Additionally, the emergence of the cloud enables modern production models that use the cloud (both internal and external services), networks, and systems to leverage the cloud's low cost, scalability, increased computational power, real-time communication, and data transfer capabilities to create much smarter and more autonomous systems. We discuss the ways in which decentralized networks that make use of protocols help to achieve decentralization and how network meshes can grow to make things more secure, reliable, and cohere with these technologies, which are not going away anytime soon. We emphasize the significance of new design in regard to cybersecurity, data integrity, and storage by using straightforward examples that have the potential to lead to the excellence of distributed systems. This groundbreaking paper delves deep into the world of industrial automation and explores the possibilities to adopt blockchain for developing solutions for smart cities, smart homes, healthcare, smart agriculture, autonomous vehicles, and supply chain management within Industry 5.0. With an in-depth examination of various consensus mechanisms, readers gain a comprehensive understanding of the latest developments in this field. The paper also explores the current issues and challenges associated with blockchain adaptation for industrial automation and provides a thorough comparison of the available consensus, enabling end customers to select the most suitable one based on its unique advantages. Case studies highlight how to enable the adoption of blockchain in Industry 5.0 solutions effectively and efficiently, offering valuable insights into the potential challenges that lie ahead, particularly for smart industrial applications.

RevDate: 2023-11-25

Kim J, H Koh (2023)

MiTree: A Unified Web Cloud Analytic Platform for User-Friendly and Interpretable Microbiome Data Mining Using Tree-Based Methods.

Microorganisms, 11(11): pii:microorganisms11112816.

The advent of next-generation sequencing has greatly accelerated the field of human microbiome studies. Currently, investigators are seeking, struggling and competing to find new ways to diagnose, treat and prevent human diseases through the human microbiome. Machine learning is a promising approach to help such an effort, especially due to the high complexity of microbiome data. However, many of the current machine learning algorithms are in a "black box", i.e., they are difficult to understand and interpret. In addition, clinicians, public health practitioners and biologists are not usually skilled at computer programming, and they do not always have high-end computing devices. Thus, in this study, we introduce a unified web cloud analytic platform, named MiTree, for user-friendly and interpretable microbiome data mining. MiTree employs tree-based learning methods, including decision tree, random forest and gradient boosting, that are well understood and suited to human microbiome studies. We also stress that MiTree can address both classification and regression problems through covariate-adjusted or unadjusted analysis. MiTree should serve as an easy-to-use and interpretable data mining tool for microbiome-based disease prediction modeling, and should provide new insights into microbiome-based diagnostics, treatment and prevention. MiTree is an open-source software that is available on our web server.

RevDate: 2023-11-21

Bahadur FT, Shah SR, RR Nidamanuri (2023)

Applications of remote sensing vis-à-vis machine learning in air quality monitoring and modelling: a review.

Environmental monitoring and assessment, 195(12):1502.

Environmental contamination especially air pollution is an exponentially growing menace requiring immediate attention, as it lingers on with the associated risks of health, economic and ecological crisis. The special focus of this study is on the advances in Air Quality (AQ) monitoring using modern sensors, integrated monitoring systems, remote sensing and the usage of Machine Learning (ML), Deep Learning (DL) algorithms, artificial neural networks, recent computational techniques, hybridizing techniques and different platforms available for AQ modelling. The modern world is data-driven, where critical decisions are taken based on the available and accessible data. Today's data analytics is a consequence of the information explosion we have reached. The current research also tends to re-evaluate its scope with data analytics. The emergence of artificial intelligence and machine learning in the research scenario has radically changed the methodologies and approaches of modern research. The aim of this review is to assess the impact of data analytics such as ML/DL frameworks, data integration techniques, advanced statistical modelling, cloud computing platforms and constantly improving optimization algorithms on AQ research. The usage of remote sensing in AQ monitoring along with providing enormous datasets is constantly filling the spatial gaps of ground stations, as the long-term air pollutant dynamics is best captured by the panoramic view of satellites. Remote sensing coupled with the techniques of ML/DL has the most impact in shaping the modern trends in AQ research. Current standing of research in this field, emerging trends and future scope are also discussed.

RevDate: 2023-11-20

Jayathilaka H, Krintz C, R Wolski (2020)

Detecting Performance Anomalies in Cloud Platform Applications.

IEEE transactions on cloud computing, 8(3):764-777.

We present Roots, a full-stack monitoring and analysis system for performance anomaly detection and bottleneck identification in cloud platform-as-a-service (PaaS) systems. Roots facilitates application performance monitoring as a core capability of PaaS clouds, and relieves the developers from having to instrument application code. Roots tracks HTTP/S requests to hosted cloud applications and their use of PaaS services. To do so it employs lightweight monitoring of PaaS service interfaces. Roots processes this data in the background using multiple statistical techniques that in combination detect performance anomalies (i.e. violations of service-level objectives). For each anomaly, Roots determines whether the event was caused by a change in the request workload or by a performance bottleneck in a PaaS service. By correlating data collected across different layers of the PaaS, Roots is able to trace high-level performance anomalies to bottlenecks in specific components in the cloud platform. We implement Roots using the AppScale PaaS and evaluate its overhead and accuracy.

RevDate: 2023-11-18

Wilkinson R, Mleczko MM, Brewin RJW, et al (2023)

Environmental impacts of earth observation data in the constellation and cloud computing era.

The Science of the total environment pii:S0048-9697(23)07212-1 [Epub ahead of print].

Numbers of Earth Observation (EO) satellites have increased exponentially over the past decade reaching the current population of 1193 (January 2023). Consequently, EO data volumes have mushroomed and data processing has migrated to the cloud. Whilst attention has been given to the launch and in-orbit environmental impacts of satellites, EO data environmental footprints have been overlooked. These issues require urgent attention given data centre water and energy consumption, high carbon emissions for computer component manufacture, and difficulty of recycling computer components. Doing so is essential if the environmental good of EO is to withstand scrutiny. We provide the first assessment of the EO data life-cycle and estimate that the current size of the global EO data collection is ~807 PB, increasing by ~100 PB/year. Storage of this data volume generates annual CO2 equivalent emissions of 4101 t. Major state-funded EO providers use 57 of their own data centres globally, and a further 178 private cloud services, with duplication of datasets across repositories. We explore scenarios for the environmental cost of performing EO functions on the cloud compared to desktop machines. A simple band arithmetic function applied to a Landsat 9 scene using Google Earth Engine (GEE) generated CO2 equivalent (e) emissions of 0.042-0.69 g CO2e (locally) and 0.13-0.45 g CO2e (European data centre; values multiply by nine for Australian data centre). Computation-based emissions scale rapidly for more intense processes and when testing code. When using cloud services like GEE, users have no choice about the data centre used and we push for EO providers to be more transparent about the location-specific impacts of EO work, and to provide tools for measuring the environmental cost of cloud computation. The EO community as a whole needs to critically consider the broad suite of EO data life-cycle impacts.
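
The per-scene emission figures in the abstract above come from multiplying a job's energy draw by the carbon intensity of the electricity grid serving the data centre. A minimal sketch of that style of estimate follows; all constants (power draw, PUE, grid intensity) are illustrative assumptions, not the study's measured values.

```python
# Rough computation-carbon estimate: energy (kWh) x grid intensity (g CO2e/kWh).
# All constants below are illustrative assumptions, not values from the study.

def job_energy_kwh(runtime_s: float, avg_power_w: float, pue: float = 1.5) -> float:
    """Energy for one job, scaled by the data centre's power usage effectiveness (PUE)."""
    return (avg_power_w * runtime_s / 3600.0) / 1000.0 * pue

def job_emissions_g(runtime_s: float, avg_power_w: float,
                    grid_intensity_g_per_kwh: float, pue: float = 1.5) -> float:
    """Grams of CO2-equivalent for one job on a given electricity grid."""
    return job_energy_kwh(runtime_s, avg_power_w, pue) * grid_intensity_g_per_kwh

# Example: a hypothetical 10 s band-arithmetic job at 200 W on a 300 g/kWh grid.
print(round(job_emissions_g(10, 200, 300), 3))  # prints 0.25
```

The grid-intensity term is why the abstract's figures differ so sharply between European and Australian data centres: the same computation inherits the carbon profile of whichever grid hosts it.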

RevDate: 2023-11-18

Tomassini S, Falcionelli N, Bruschi G, et al (2023)

On-cloud decision-support system for non-small cell lung cancer histology characterization from thorax computed tomography scans.

Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society, 110:102310 pii:S0895-6111(23)00128-3 [Epub ahead of print].

Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians make targeted therapeutic decisions but also spare subjects from undergoing lung biopsy, which is challenging and can lead to clinical complications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. This aim was pursued by selecting thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD); implementing and comparing two end-to-end neural networks (the core layer of which is a convolutional long short-term memory layer); evaluating performance on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade; and dynamically interpreting the achieved results by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, demonstrating high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically useful decision-support system able to provide timely, non-invasive, reliable and visually understandable predictions on LUAD and LUSC subjects in relation to clinically relevant information.

RevDate: 2023-11-14

Kwabla W, Dinc F, Oumimoun K, et al (2023)

Evaluation of WebRTC in the Cloud for Surgical Simulations: A case study on Virtual Rotator Cuff Arthroscopic Skill Trainer (ViRCAST).

Learning and Collaboration Technologies: 10th International Conference, LCT 2023, Held as Part of the 25th HCI International Conference, HCII 2023, Copenhagen, Denmark, July 23-28, 2023, Proceedings, Part II, 14041:127-143.

Web Real-Time Communication (WebRTC) is an open-source technology which enables remote peer-to-peer video and audio connection. It has quickly become the new standard for real-time communications over the web and is commonly used as a video conferencing platform. In this study, we present a different application domain which may greatly benefit from WebRTC technology, that is virtual reality (VR) based surgical simulations. Virtual Rotator Cuff Arthroscopic Skill Trainer (ViRCAST) is our testing platform that we completed preliminary feasibility studies for WebRTC. Since the elasticity of cloud computing provides the ability to meet possible future hardware/software requirements and demand growth, ViRCAST is deployed in a cloud environment. Additionally, in order to have plausible simulations and interactions, any VR-based surgery simulator must have haptic feedback. Therefore, we implemented an interface to WebRTC for integrating haptic devices. We tested ViRCAST on Google cloud through haptic-integrated WebRTC at various client configurations. Our experiments showed that WebRTC with cloud and haptic integrations is a feasible solution for VR-based surgery simulators. From our experiments, the WebRTC integrated simulation produced an average frame rate of 33 fps, and the hardware integration produced an average lag of 0.7 milliseconds in real-time.

RevDate: 2023-11-14

Farooq MS, Abdullah M, Riaz S, et al (2023)

A Survey on the Role of Industrial IoT in Manufacturing for Implementation of Smart Industry.

Sensors (Basel, Switzerland), 23(21): pii:s23218958.

The Internet of Things (IoT) is an innovative technology that presents effective and attractive solutions to revolutionize various domains. Numerous solutions based on the IoT have been designed to automate industries, manufacturing units, and production houses to mitigate human involvement in hazardous operations. Owing to the large number of publications in the IoT paradigm, in particular those focusing on industrial IoT (IIoT), a comprehensive survey is significantly important to provide insights into recent developments. This survey presents the workings of the IoT-based smart industry and its major components and proposes the state-of-the-art network infrastructure, including structured layers of IIoT architecture, IIoT network topologies, protocols, and devices. Furthermore, the relationship between IoT-based industries and key technologies is analyzed, including big data storage, cloud computing, and data analytics. A detailed discussion of IIoT-based application domains, smartphone application solutions, and sensor- and device-based IIoT applications developed for the management of the smart industry is also presented. Consequently, IIoT-based security attacks and their relevant countermeasures are highlighted. By analyzing the essential components, their security risks, and available solutions, future research directions regarding the implementation of IIoT are outlined. Finally, a comprehensive discussion of open research challenges and issues related to the smart industry is also presented.

RevDate: 2023-11-14

Leng J, Chen X, Zhao J, et al (2023)

A Light Vehicle License-Plate-Recognition System Based on Hybrid Edge-Cloud Computing.

Sensors (Basel, Switzerland), 23(21): pii:s23218913.

With the world moving towards low-carbon and environmentally friendly development, the rapid growth of new-energy vehicles is evident, and the use of deep-learning-based license-plate-recognition (LPR) algorithms has become widespread. However, existing LPR systems have difficulty achieving timely, effective, and energy-saving recognition due to inherent limitations such as high latency and energy consumption. An innovative Edge-LPR system that leverages edge computing and lightweight network models is proposed in this paper; this design mitigates the excessive reliance on cloud computing capacity and the uneven distribution of cloud resources. The system is a deliberately simple LPR pipeline. Channel pruning was used to reconstruct the backbone layer, reduce the network model parameters, and effectively reduce GPU resource consumption. By utilizing the computing resources of the second-generation Intel compute stick, the network models were deployed on edge gateways to detect license plates directly. The reliability and effectiveness of the Edge-LPR system were validated through experimental analysis of the CCPD standard dataset and a real-time monitoring dataset from charging stations. The experimental results on the CCPD common dataset demonstrated that the network's total number of parameters was only 0.606 MB, with an impressive accuracy rate of 97%.

RevDate: 2023-11-14

Younas MI, Iqbal MJ, Aziz A, et al (2023)

Toward QoS Monitoring in IoT Edge Devices Driven Healthcare-A Systematic Literature Review.

Sensors (Basel, Switzerland), 23(21): pii:s23218885.

Smart healthcare is altering the delivery of healthcare by combining the benefits of IoT, mobile, and cloud computing. Cloud computing has tremendously helped the health industry connect healthcare facilities, caregivers, and patients for information sharing. The main drivers for implementing effective healthcare systems are low latency and faster response times. Thus, quick responses among healthcare organizations are important in general, but in an emergency, significant latency at different stakeholders might result in disastrous situations. Thus, cutting-edge approaches like edge computing and artificial intelligence (AI) can deal with such problems. A packet cannot be sent from one location to another unless the "quality of service" (QoS) specifications are met. The term QoS refers to how well a service works for users. QoS parameters like throughput, bandwidth, transmission delay, availability, jitter, latency, and packet loss are crucial in this regard. Our focus is on the individual devices present at different levels of the smart healthcare infrastructure and the QoS requirements of the healthcare system as a whole. The contribution of this paper is five-fold: first, a novel pre-SLR method for comprehensive keyword research on subject-related themes for mining pertinent research papers for quality SLR; second, SLR on QoS improvement in smart healthcare apps; third a review of several QoS techniques used in current smart healthcare apps; fourth, the examination of the most important QoS measures in contemporary smart healthcare apps; fifth, offering solutions to the problems encountered in delivering QoS in smart healthcare IoT applications to improve healthcare services.
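
The QoS parameters listed in this abstract (jitter, latency, packet loss, etc.) are typically computed from packet traces. A toy sketch of two of them follows; the function names and the simple mean-deviation jitter definition are illustrative assumptions (production monitors use smoothed estimators such as the one in RFC 3550, not this simplified form).

```python
# Toy QoS metrics from packet traces: loss ratio from sequence numbers, and
# jitter as the mean absolute difference of consecutive one-way delays.
# Simplified for illustration; real monitors use smoothed estimators.

def packet_loss_ratio(sent_seq: list[int], received_seq: list[int]) -> float:
    """Fraction of distinct sent sequence numbers never received."""
    return 1.0 - len(set(received_seq)) / len(set(sent_seq))

def mean_jitter_ms(delays_ms: list[float]) -> float:
    """Mean absolute variation between consecutive packet delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

print(round(packet_loss_ratio([1, 2, 3, 4, 5], [1, 2, 4, 5]), 3))  # prints 0.2
print(round(mean_jitter_ms([20.0, 22.0, 21.0, 25.0]), 2))          # prints 2.33
```

In an emergency-care scenario like the one the survey describes, thresholds on exactly these quantities are what an SLA-style QoS monitor would alarm on.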

RevDate: 2023-11-14

Abbas Q, Ahmad G, Alyas T, et al (2023)

Revolutionizing Urban Mobility: IoT-Enhanced Autonomous Parking Solutions with Transfer Learning for Smart Cities.

Sensors (Basel, Switzerland), 23(21): pii:s23218753.

Smart cities have emerged as a specialized domain encompassing various technologies, transitioning from civil engineering to technology-driven solutions. The accelerated development of technologies, such as the Internet of Things (IoT), software-defined networks (SDN), 5G, artificial intelligence, cognitive science, and analytics, has played a crucial role in providing solutions for smart cities. Smart cities heavily rely on devices, ad hoc networks, and cloud computing to integrate and streamline various activities towards common goals. However, the complexity arising from multiple cloud service providers offering myriad services necessitates a stable and coherent platform for sustainable operations. The Smart City Operational Platform Ecology (SCOPE) model has been developed to address the growing demands, and incorporates machine learning, cognitive correlates, ecosystem management, and security. SCOPE provides an ecosystem that establishes a balance for achieving sustainability and progress. In the context of smart cities, Internet of Things (IoT) devices play a significant role in enabling automation and data capture. This research paper focuses on a specific module of SCOPE, which deals with data processing and learning mechanisms for object identification in smart cities. Specifically, it presents a car parking system that utilizes smart identification techniques to identify vacant slots. The learning controller in SCOPE employs a two-tier approach, and utilizes two different models, namely Alex Net and YOLO, to ensure procedural stability and improvement.

RevDate: 2023-11-13

Biswas J, Jobaer MA, Haque SF, et al (2023)

Mapping and monitoring land use land cover dynamics employing Google Earth Engine and machine learning algorithms on Chattogram, Bangladesh.

Heliyon, 9(11):e21245.

Land use land cover change (LULC) significantly impacts urban sustainability, urban planning, climate change, natural resource management, and biodiversity. The Chattogram Metropolitan Area (CMA) has been going through rapid urbanization, which has impacted the LULC transformation and accelerated the growth of urban sprawl and unplanned development. To map those urban sprawls and natural resources depletion, this study aims to monitor the LULC change using Landsat satellite imagery from 2003 to 2023 in the cloud-based remote sensing platform Google Earth Engine (GEE). LULC has been classified into five distinct classes: waterbody, build-up, bare land, dense vegetation, and cropland, employing four machine learning algorithms (random forest, gradient tree boost, classification & regression tree, and support vector machine) in the GEE platform. The overall accuracy (kappa statistics) and the receiver operating characteristic (ROC) curve have demonstrated satisfactory results. The results indicate that the CART model outperforms other LULC models when considering efficiency and accuracy in the designated study region. The analysis of LULC conversions revealed notable trends, patterns, and magnitudes across all periods: 2003-2013, 2013-2023, and 2003-2023. The expansion of unregulated built-up areas and the decline of croplands emerged as primary concerns. However, there was a positive indication of a significant increase in dense vegetation within the study area over the 20 years.

RevDate: 2023-11-13

Possik J, Asgary A, Solis AO, et al (2023)

An Agent-Based Modeling and Virtual Reality Application Using Distributed Simulation: Case of a COVID-19 Intensive Care Unit.

IEEE transactions on engineering management, 70(8):2931-2943.

Hospitals and other healthcare settings use various simulation methods to improve their operations, management, and training. The COVID-19 pandemic, with the resulting necessity for rapid and remote assessment, has highlighted the critical role of modeling and simulation in healthcare, particularly distributed simulation (DS). DS enables integration of heterogeneous simulations to further increase the usability and effectiveness of individual simulations. This article presents a DS system that integrates two different simulations developed for a hospital intensive care unit (ICU) ward dedicated to COVID-19 patients. AnyLogic has been used to develop a simulation model of the ICU ward using agent-based and discrete event modeling methods. This simulation depicts and measures physical contacts between healthcare providers and patients. The Unity platform has been utilized to develop a virtual reality simulation of the ICU environment and operations. The high-level architecture, an IEEE standard for DS, has been used to build a cloud-based DS system by integrating and synchronizing the two simulation platforms. While enhancing the capabilities of both simulations, the DS system can be used for training purposes and assessment of different managerial and operational decisions to minimize contacts and disease transmission in the ICU ward by enabling data exchange between the two simulations.

RevDate: 2023-11-10

Healthcare Engineering JO (2023)

Retracted: Sports Training Teaching Device Based on Big Data and Cloud Computing.

Journal of healthcare engineering, 2023:9795604.

[This retracts the article DOI: 10.1155/2021/7339486.].

RevDate: 2023-11-10

Intelligence And Neuroscience C (2023)

Retracted: Real-Time Detection of Body Nutrition in Sports Training Based on Cloud Computing and Somatosensory Network.

Computational intelligence and neuroscience, 2023:9784817.

[This retracts the article DOI: 10.1155/2022/9911905.].

RevDate: 2023-11-10

Horsley JJ, Thomas RH, Chowdhury FA, et al (2023)

Complementary structural and functional abnormalities to localise epileptogenic tissue.


BACKGROUND: When investigating suitability for epilepsy surgery, people with drug-refractory focal epilepsy may have intracranial EEG (iEEG) electrodes implanted to localise seizure onset. Diffusion-weighted magnetic resonance imaging (dMRI) may be acquired to identify key white matter tracts for surgical avoidance. Here, we investigate whether structural connectivity abnormalities, inferred from dMRI, may be used in conjunction with functional iEEG abnormalities to aid localisation of the epileptogenic zone (EZ), improving surgical outcomes in epilepsy.

METHODS: We retrospectively investigated data from 43 patients with epilepsy who had surgery following iEEG. Twenty-five patients (58%) were free from disabling seizures (ILAE 1 or 2) at one year. Interictal iEEG functional, and dMRI structural connectivity abnormalities were quantified by comparison to a normative map and healthy controls. We explored whether the resection of maximal abnormalities related to improved surgical outcomes, in both modalities individually and concurrently. Additionally, we suggest how connectivity abnormalities may inform the placement of iEEG electrodes pre-surgically using a patient case study.

FINDINGS: Seizure freedom was 15 times more likely in patients with resection of maximal connectivity and iEEG abnormalities (p=0.008). Both modalities separately distinguished patient surgical outcome groups and when used simultaneously, a decision tree correctly separated 36 of 43 (84%) patients.

INTERPRETATION: Our results suggest that both connectivity and iEEG abnormalities may localise epileptogenic tissue, and that these two modalities may provide complementary information in pre-surgical evaluations.

FUNDING: This research was funded by UKRI, CDT in Cloud Computing for Big Data, NIH, MRC, Wellcome Trust and Epilepsy Research UK.

RevDate: 2023-11-09

Faruqui N, Yousuf MA, Kateb FA, et al (2023)

Healthcare As a Service (HAAS): CNN-based cloud computing model for ubiquitous access to lung cancer diagnosis.

Heliyon, 9(11):e21520.

The field of automated lung cancer diagnosis using Computed Tomography (CT) scans has been significantly advanced by the precise predictions offered by Convolutional Neural Network (CNN)-based classifiers. Critical areas of study include improving image quality, optimizing learning algorithms, and enhancing diagnostic accuracy. To facilitate a seamless transition from research laboratories to real-world applications, it is crucial to improve the technology's usability, a factor frequently overlooked in current state-of-the-art research. This paper introduces Healthcare-As-A-Service (HAAS), an innovative concept inspired by Software-As-A-Service (SAAS) within the cloud computing paradigm. As a comprehensive lung cancer diagnosis service system, HAAS has the potential to reduce lung cancer mortality rates by providing early diagnosis opportunities to everyone. We present HAASNet, a cloud-compatible CNN that achieves an accuracy rate of 96.07%. By integrating HAASNet predictions with physio-symptomatic data from the Internet of Medical Things (IoMT), the proposed HAAS model generates accurate and reliable lung cancer diagnosis reports. Leveraging IoMT and cloud technology, the proposed service is globally accessible via the Internet, transcending geographic boundaries. This lung cancer diagnosis service achieves average precision, recall, and F1-scores of 96.47%, 95.39%, and 94.81%, respectively.

RevDate: 2023-11-09

Wang C, W Dai (2023)

Lung nodule segmentation via semi-residual multi-resolution neural networks.

Open life sciences, 18(1):20220727.

The integration of deep neural networks and cloud computing has become increasingly prevalent within the domain of medical image processing, facilitated by the recent strides in neural network theory and the advent of the internet of things (IoTs). This juncture has led to the emergence of numerous image segmentation networks and innovative solutions that facilitate medical practitioners in diagnosing lung cancer. Within the contours of this study, we present an end-to-end neural network model, christened as the "semi-residual Multi-resolution Convolutional Neural Network" (semi-residual MCNN), devised to engender precise lung nodule segmentation maps within the milieu of cloud computing. Central to the architecture are three pivotal features, each coalescing to effectuate a notable enhancement in predictive accuracy: the incorporation of semi-residual building blocks, the deployment of group normalization techniques, and the orchestration of multi-resolution output heads. This innovative model is systematically subjected to rigorous training and testing regimes, using the LIDC-IDRI dataset - a widely embraced and accessible repository - comprising a diverse ensemble of 1,018 distinct lung CT images tailored to the realm of lung nodule segmentation.

RevDate: 2023-11-08

Wadford DA, Baumrind N, Baylis EF, et al (2023)

Implementation of California COVIDNet - a multi-sector collaboration for statewide SARS-CoV-2 genomic surveillance.

Frontiers in public health, 11:1249614.

INTRODUCTION: The SARS-CoV-2 pandemic represented a formidable scientific and technological challenge to public health due to its rapid spread and evolution. To meet these challenges and to characterize the virus over time, the State of California established the California SARS-CoV-2 Whole Genome Sequencing (WGS) Initiative, or "California COVIDNet". This initiative constituted an unprecedented multi-sector collaborative effort to achieve large-scale genomic surveillance of SARS-CoV-2 across California to monitor the spread of variants within the state, to detect new and emerging variants, and to characterize outbreaks in congregate, workplace, and other settings.

METHODS: California COVIDNet consists of 50 laboratory partners that include public health laboratories, private clinical diagnostic laboratories, and academic sequencing facilities as well as expert advisors, scientists, consultants, and contractors. Data management, sample sourcing and processing, and computational infrastructure were major challenges that had to be resolved in the midst of the pandemic chaos in order to conduct SARS-CoV-2 genomic surveillance. Data management, storage, and analytics needs were addressed with both conventional database applications and newer cloud-based data solutions, which also fulfilled computational requirements.

RESULTS: Representative and randomly selected samples were sourced from state-sponsored community testing sites. Since March of 2021, California COVIDNet partners have contributed more than 450,000 SARS-CoV-2 genomes sequenced from remnant samples from both molecular and antigen tests. Combined with genomes from CDC-contracted WGS labs, there are currently nearly 800,000 genomes from all 61 local health jurisdictions (LHJs) in California in the COVIDNet sequence database. More than 5% of all reported positive tests in the state have been sequenced, with similar rates of sequencing across 5 major geographic regions in the state.

DISCUSSION: Implementation of California COVIDNet revealed challenges and limitations in the public health system. These were overcome by engaging in novel partnerships that established a successful genomic surveillance program which provided valuable data to inform the COVID-19 public health response in California. Significantly, California COVIDNet has provided a foundational data framework and computational infrastructure needed to respond to future public health crises.

RevDate: 2023-11-07

Varadi M, Bertoni D, Magana P, et al (2023)

AlphaFold Protein Structure Database in 2024: providing structure coverage for over 214 million protein sequences.

Nucleic acids research pii:7337620 [Epub ahead of print].

The AlphaFold Protein Structure Database (AlphaFold DB, https://alphafold.ebi.ac.uk) has significantly impacted structural biology by amassing over 214 million predicted protein structures, expanding from the initial 300k structures released in 2021. Enabled by the groundbreaking AlphaFold2 artificial intelligence (AI) system, the predictions archived in AlphaFold DB have been integrated into primary data resources such as PDB, UniProt, Ensembl, InterPro and MobiDB. Our manuscript details subsequent enhancements in data archiving, covering successive releases encompassing model organisms, global health proteomes, Swiss-Prot integration, and a host of curated protein datasets. We detail the data access mechanisms of AlphaFold DB, from direct file access via FTP to advanced queries using Google Cloud Public Datasets and the programmatic access endpoints of the database. We also discuss the improvements and services added since its initial release, including enhancements to the Predicted Aligned Error viewer, customisation options for the 3D viewer, and improvements in the search engine of AlphaFold DB.

RevDate: 2023-11-06

Mangalampalli S, Karri GR, Mohanty SN, et al (2023)

Fault tolerant trust based task scheduler using Harris Hawks optimization and deep reinforcement learning in multi cloud environment.

Scientific reports, 13(1):19179.

The cloud computing model provides on-demand delivery of seamless services to customers around the world, yet single points of failure occur when tasks are improperly assigned to virtual machines. This increases the failure rate, which degrades SLA-based trust parameters (availability, success rate, turnaround efficiency) and, in turn, trust in the cloud provider. In this paper, we propose a task scheduling algorithm that captures the priorities of all tasks and virtual resources from the task manager on the cloud application console and feeds them to a task scheduler, which makes scheduling decisions by hybridizing Harris hawks optimization with ML-based reinforcement learning to enhance the scheduling process. Task scheduling in this research is performed in two phases: task selection and task mapping. In the task selection phase, the incoming priorities of tasks and VMs are captured and schedules are generated using Harris hawks optimization. In the task mapping phase, the generated schedules are optimized using a DQN model based on deep reinforcement learning. In this research, we use a multi-cloud environment to maintain VM availability when incoming tasks increase dynamically, and migrate tasks from one cloud to another to mitigate migration time. Extensive simulations are conducted in CloudSim using workloads generated from fabricated datasets and real-time synthetic workloads from NASA and HPC2N to check the efficacy of our proposed scheduler (FTTHDRL). Compared against existing task schedulers (the MOABCQ, RATS-HM, and AINN-BPSO approaches), FTTHDRL outperforms existing mechanisms by minimizing the failure rate and resource cost and improving SLA-based trust parameters.
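As an illustration of the task-mapping objective described above (not the paper's actual HHO + DQN scheduler), a minimal earliest-completion-time heuristic shows what is being optimized: the makespan of a task set over heterogeneous VMs. Task lengths and VM speeds are toy values:

```python
# Illustrative mapping of tasks (length in millions of instructions)
# onto VMs (speed in MIPS). Greedy earliest-completion-time only sketches
# the objective; the cited scheduler uses Harris hawks optimization + DQN.

def map_tasks(tasks, vm_speeds):
    """Assign each task to the VM that would finish it earliest."""
    finish = [0.0] * len(vm_speeds)             # current finish time per VM
    assignment = []
    for length in sorted(tasks, reverse=True):  # longest task first
        candidates = [(finish[i] + length / s, i)
                      for i, s in enumerate(vm_speeds)]
        end, vm = min(candidates)               # earliest completion wins
        finish[vm] = end
        assignment.append(vm)
    return assignment, max(finish)              # mapping and makespan

assignment, makespan = map_tasks([400, 200, 100, 300], [100, 200])
print(assignment, makespan)  # → [1, 0, 1, 1] 3.5
```

A metaheuristic such as Harris hawks optimization searches over many candidate assignments like this one, scoring each by makespan (and, in the paper, by SLA-trust terms) instead of committing to a single greedy pass.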

RevDate: 2023-11-06

Bao J, Wu C, Lin Y, et al (2023)

A scalable approach to optimize traffic signal control with federated reinforcement learning.

Scientific reports, 13(1):19184.

Intelligent Transportation has seen significant advancements with Deep Learning and the Internet of Things, making Traffic Signal Control (TSC) research crucial for reducing congestion, travel time, emissions, and energy consumption. Reinforcement Learning (RL) has emerged as the primary method for TSC, but centralized learning poses communication and computing challenges, while distributed learning struggles to adapt across intersections. This paper presents a novel approach using Federated Learning (FL)-based RL for TSC. FL integrates knowledge from local agents into a global model, overcoming intersection variations with a unified agent state structure. To endow the model with the capacity to globally represent the TSC task while preserving the distinctive feature information inherent to each intersection, a segment of the RL neural network is aggregated to the cloud, and the remaining layers undergo fine-tuning upon convergence of the model training process. Extensive experiments demonstrate reduced queuing and waiting times globally, and the successful scalability of the proposed model is validated on a real-world traffic network in Monaco, showing its potential for new intersections.
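The partial-aggregation step described above, where only a shared segment of each agent's network is averaged into the global model while intersection-specific layers stay local, can be sketched in a few lines. The layer names and the plain averaging rule are illustrative assumptions, not the paper's exact update:

```python
# Minimal sketch of federated averaging restricted to shared layers.
# Each "model" is a dict of layer name -> flat weight list (toy values).

def federated_average(local_models, shared_layers):
    """Average the shared layers across agents; other layers stay local."""
    n = len(local_models)
    global_update = {}
    for layer in shared_layers:
        stacked = [m[layer] for m in local_models]
        # element-wise mean of this layer's weights across all agents
        global_update[layer] = [sum(ws) / n for ws in zip(*stacked)]
    return global_update

agents = [
    {"encoder": [1.0, 2.0], "head": [9.0]},   # intersection A
    {"encoder": [3.0, 4.0], "head": [5.0]},   # intersection B
]
print(federated_average(agents, shared_layers=["encoder"]))
# → {'encoder': [2.0, 3.0]}
```

Keeping the `head` layers out of the average is what lets each intersection retain its distinctive features while the `encoder` converges to a shared representation.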

RevDate: 2023-11-06

Mee L, SM Barribeau (2023)

Influence of social lifestyles on host-microbe symbioses in the bees.

Ecology and evolution, 13(11):e10679.

Microbiomes are increasingly recognised as critical for the health of an organism. In eusocial insect societies, frequent social interactions allow for high-fidelity transmission of microbes across generations, leading to closer host-microbe coevolution. The microbial communities of bees with other social lifestyles are less studied, and few comparisons have been made between taxa that vary in social structure. To address this gap, we leveraged a cloud-computing resource and publicly available transcriptomic data to conduct a survey of microbial diversity in bee samples from a variety of social lifestyles and taxa. We consistently recover the core microbes of well-studied corbiculate bees, supporting this method's ability to accurately characterise microbial communities. We find that the bacterial communities of bees are influenced by host location, phylogeny and social lifestyle, although no clear effect was found for fungal or viral microbial communities. Bee genera with more complex societies tend to harbour more diverse microbes, with Wolbachia detected more commonly in solitary tribes. We present a description of the microbiota of euglossine bees and find that they do not share the "corbiculate core" microbiome. Notably, we find that bacteria with known anti-pathogenic properties are present across social bee genera, suggesting that symbioses that enhance host immunity become more important with higher sociality. Our approach provides an inexpensive means of exploring the microbiome of a given taxon and identifying avenues for further research. These findings contribute to our understanding of the relationships between bees and their associated microbial communities, highlighting the importance of considering microbiome dynamics in investigations of bee health.

RevDate: 2023-11-02

Qian J, Q She (2023)

The impact of corporate digital transformation on the export product quality: Evidence from Chinese enterprises.

PloS one, 18(11):e0293461 pii:PONE-D-23-08706.

The digital economy has become a driving force in the rapid development of the global economy and the promotion of export trade. Pivotal in its advent, the digital transformation of enterprises uses cloud computing, big data, artificial intelligence, and other digital technologies to drive evolution and transformation across industries and fields, and has been critical for enhancing both quality and efficiency in enterprises based in the People's Republic of China. Using the available data on its listed enterprises, this paper measures their digital transformation through a textual analysis and examines how this transformation influences their export product quality. We then explore the possible mechanisms at work in this influence from the perspective of enterprise heterogeneity. The results show that: (1) digital transformation significantly enhances an enterprise's export product quality, and the empirical findings hold after a series of robustness tests; (2) further mechanism analysis reveals that digital transformation can positively affect export product quality through two mechanisms: process productivity (φ), the ability to produce output using fewer variable inputs, and product productivity (ξ), the ability to produce quality with fewer fixed outlays; (3) in terms of enterprise heterogeneity, the impact of digital transformation on export product quality is significant for enterprises engaged in general trade or high-tech industries and those with strong corporate governance. In terms of heterogeneity in enterprise digital transformation and regional digital infrastructure, the higher the level of digital transformation and regional digital infrastructure, the greater the impact of digital transformation on export product quality.
This paper has practical implications for public policies that offer vital aid to enterprises as they pursue digital transformation to remain in sync with the digital economy, upgrade their product quality, and drive the sustainable, high-quality, and healthy development of their nation's economy.

RevDate: 2023-10-31

Copeland CJ, Roddy JW, Schmidt AK, et al (2023)

VIBES: A Workflow for Annotating and Visualizing Viral Sequences Integrated into Bacterial Genomes.

bioRxiv : the preprint server for biology pii:2023.10.17.562434.

Bacteriophages are viruses that infect bacteria. Many bacteriophages integrate their genomes into the bacterial chromosome and become prophages. Prophages may substantially burden or benefit host bacteria fitness, acting in some cases as parasites and in others as mutualists, and have been demonstrated to increase host virulence. The increasing ease of bacterial genome sequencing provides an opportunity to deeply explore prophage prevalence and insertion sites. Here we present VIBES, a workflow intended to automate prophage annotation in complete bacterial genome sequences. VIBES provides additional context to prophage annotations by annotating bacterial genes and viral proteins in user-provided bacterial and viral genomes. The VIBES pipeline is implemented as a Nextflow-driven workflow, providing a simple, unified interface for execution on local, cluster, and cloud computing environments. For each step of the pipeline, a container including all necessary software dependencies is provided. VIBES produces results in simple tab separated format and generates intuitive and interactive visualizations for data exploration. Despite VIBES' primary emphasis on prophage annotation, its generic alignment-based design allows it to be deployed as a general-purpose sequence similarity search manager. We demonstrate the utility of the VIBES prophage annotation workflow by searching for 178 Pf phage genomes across 1,072 Pseudomonas spp. genomes. VIBES software is available at https://github.com/TravisWheelerLab/VIBES .

RevDate: 2023-10-30

Cai T, Herner K, Yang T, et al (2023)

Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data Processing.

Computing and software for big science, 7(1):11.

We study the performance of a cloud-based GPU-accelerated inference server to speed up event reconstruction in neutrino data batch jobs. Using detector data from the ProtoDUNE experiment and employing the standard DUNE grid job submission tools, we attempt to reprocess the data by running several thousand concurrent grid jobs, a rate we expect to be typical of current and future neutrino physics experiments. We process most of the dataset with the GPU version of our processing algorithm and the remainder with the CPU version for timing comparisons. We find that a 100-GPU cloud-based server is able to easily meet the processing demand, and that using the GPU version of the event processing algorithm is two times faster than processing these data with the CPU version when comparing to the newest CPUs in our sample. The amount of data transferred to the inference server during the GPU runs can overwhelm even the highest-bandwidth network switches, however, unless care is taken to observe network facility limits or otherwise distribute the jobs to multiple sites. We discuss the lessons learned from this processing campaign and several avenues for future improvements.

RevDate: 2023-10-28

Horsley JJ, Thomas RH, Chowdhury FA, et al (2023)

Complementary structural and functional abnormalities to localise epileptogenic tissue.

EBioMedicine, 97:104848 pii:S2352-3964(23)00414-0 [Epub ahead of print].

BACKGROUND: When investigating suitability for epilepsy surgery, people with drug-refractory focal epilepsy may have intracranial EEG (iEEG) electrodes implanted to localise seizure onset. Diffusion-weighted magnetic resonance imaging (dMRI) may be acquired to identify key white matter tracts for surgical avoidance. Here, we investigate whether structural connectivity abnormalities, inferred from dMRI, may be used in conjunction with functional iEEG abnormalities to aid localisation of the epileptogenic zone (EZ), improving surgical outcomes in epilepsy.

METHODS: We retrospectively investigated data from 43 patients (42% female) with epilepsy who had surgery following iEEG. Twenty-five patients (58%) were free from disabling seizures (ILAE 1 or 2) at one year. Interictal iEEG functional, and dMRI structural connectivity abnormalities were quantified by comparison to a normative map and healthy controls. We explored whether the resection of maximal abnormalities related to improved surgical outcomes, in both modalities individually and concurrently. Additionally, we suggest how connectivity abnormalities may inform the placement of iEEG electrodes pre-surgically using a patient case study.

FINDINGS: Seizure freedom was 15 times more likely in patients with resection of maximal connectivity and iEEG abnormalities (p = 0.008). Both modalities separately distinguished patient surgical outcome groups and when used simultaneously, a decision tree correctly separated 36 of 43 (84%) patients.

INTERPRETATION: Our results suggest that both connectivity and iEEG abnormalities may localise epileptogenic tissue, and that these two modalities may provide complementary information in pre-surgical evaluations.

FUNDING: This research was funded by UKRI, CDT in Cloud Computing for Big Data, NIH, MRC, Wellcome Trust and Epilepsy Research UK.

RevDate: 2023-10-28

Ramzan M, Shoaib M, Altaf A, et al (2023)

Distributed Denial of Service Attack Detection in Network Traffic Using Deep Learning Algorithm.

Sensors (Basel, Switzerland), 23(20): pii:s23208642.

Internet security is a major concern these days due to the increasing demand for information technology (IT)-based platforms and cloud computing. With its expansion, the Internet has been facing various types of attacks. Viruses, denial of service (DoS) attacks, distributed DoS (DDoS) attacks, code injection attacks, and spoofing are the most common types of attacks in the modern era. Due to the expansion of IT, the volume and severity of network attacks have been increasing lately. DoS and DDoS are the most frequently reported network traffic attacks. Traditional solutions such as intrusion detection systems and firewalls cannot detect complex DDoS and DoS attacks. With the integration of artificial intelligence-based machine learning and deep learning methods, several novel approaches have been presented for DoS and DDoS detection. In particular, deep learning models have played a crucial role in detecting DDoS attacks due to their exceptional performance. This study adopts deep learning models including recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) to detect DDoS attacks on the most recent dataset, CICDDoS2019, and a comparative analysis is conducted with the CICIDS2017 dataset. The comparative analysis contributes to the development of a competent and accurate method for detecting DDoS attacks with reduced execution time and complexity. The experimental results demonstrate that the models perform equally well on the CICDDoS2019 dataset with an accuracy score of 0.99, but there is a difference in execution time, with GRU requiring less execution time than RNN and LSTM.
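The execution-time advantage of the GRU comes from its two-gate update (versus the LSTM's three gates and separate cell state). A single scalar GRU step written out in pure Python makes the gating explicit; the weights below are toy values, not a trained detector:

```python
# One scalar GRU update: two gates (update z, reset r) blend the old hidden
# state with a candidate state. Toy weights; a real DDoS detector learns
# them from flow features in a dataset such as CICDDoS2019.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU update for scalar input x and hidden state h."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)                # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)                # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_cand                     # gated blend

w = {"wz": 0.5, "uz": 0.1, "wr": 0.4, "ur": 0.2, "wh": 0.9, "uh": 0.3}
h = 0.0
for x in [0.2, 0.8, 0.1]:   # a toy sequence of packet-rate features
    h = gru_step(x, h, w)
print(h)
```

Because `h` is always a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays in (-1, 1), one reason GRUs train stably on long traffic sequences.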

RevDate: 2023-10-28

Sheu RK, Lin YC, Pardeshi MS, et al (2023)

Adaptive Autonomous Protocol for Secured Remote Healthcare Using Fully Homomorphic Encryption (AutoPro-RHC).

Sensors (Basel, Switzerland), 23(20): pii:s23208504.

The outreach of healthcare services is a challenge in remote areas with affected populations. Fortunately, remote health monitoring (RHM) has improved hospital service quality and proved its sustainable growth. However, the absence of security may breach the Health Insurance Portability and Accountability Act (HIPAA), which has an exclusive set of rules for the privacy of medical data. Therefore, the goal of this work is to design and implement the adaptive Autonomous Protocol (AutoPro) on the patient's remote healthcare (RHC) monitoring data for the hospital using fully homomorphic encryption (FHE). The aim is to perform adaptive autonomous FHE computations on recent RHM data to provide health status reporting while maintaining the confidentiality of every patient. The autonomous protocol works independently within the group of prime hospital servers without depending on third-party systems. The adaptiveness of the protocol modes is based on whether the patient's condition is slight, medium, or severe. Related applications include glucose monitoring for diabetes, digital blood pressure for stroke, pulse oximetry for COVID-19, electrocardiogram (ECG) for cardiac arrest, etc. The design for this work consists of an autonomous protocol, hospital servers combining multiple prime/local hospitals, and an algorithm based on the fast fully homomorphic encryption over the torus (TFHE) library with a ring variant of the Gentry, Sahai, and Waters (GSW) scheme. The Concrete-ML model used within this work is trained using an open heart disease dataset from the UCI machine learning repository. Preprocessing is performed to recover lost and incomplete data in the dataset. The Concrete-ML model is evaluated both on a workstation and on a cloud server, and the FHE protocol is implemented on the AWS cloud network with performance details.
The advantages include providing confidentiality for the patient's data and reports while saving travel and waiting time for hospital services. The patient's data remain completely confidential, and patients can receive emergency services immediately. The FHE results show that the highest accuracy is achieved by support vector classification (SVC) at 88% and linear regression (LR) at 86%, with areas under the curve (AUC) of 91% and 90%, respectively. Ultimately, the FHE-based protocol presents a novel system that is successfully demonstrated on the cloud network.

RevDate: 2023-10-28

Ramachandran D, Naqi SM, Perumal G, et al (2023)

DLTN-LOSP: A Novel Deep-Linear-Transition-Network-Based Resource Allocation Model with the Logic Overhead Security Protocol for Cloud Systems.

Sensors (Basel, Switzerland), 23(20): pii:s23208448.

Cloud organizations now face a challenge in managing the enormous volume of data and various resources in the cloud due to the rapid growth of the virtualized environment serving many users, ranging from small business owners to large corporations. The performance of cloud computing may suffer from ineffective resource management. As a result, resources must be distributed fairly among various stakeholders without sacrificing the organization's profitability or the satisfaction of its customers. A customer's request cannot be put on hold indefinitely just because the necessary resources are not available. Therefore, a novel cloud resource allocation model incorporating security management is developed in this paper. Here, the Deep Linear Transition Network (DLTN) mechanism is developed for effectively allocating resources to cloud systems. Then, an Adaptive Mongoose Optimization Algorithm (AMOA) is deployed to compute the beamforming solution for reward prediction, which supports the process of resource allocation. Moreover, the Logic Overhead Security Protocol (LOSP) is implemented to ensure secure resource management in the cloud system, where Burrows-Abadi-Needham (BAN) logic is used to predict the agreement logic. In the results analysis, the performance of the proposed DLTN-LOSP model is validated and compared using metrics such as makespan, processing time, and utilization rate. For system validation and testing, 100 to 500 resources are used in this study, and the results achieved a make-up of 2.3% and a utilization rate of 13%. The obtained results confirm the superiority of the proposed framework, with better performance outcomes.

RevDate: 2023-10-28

Pierleoni P, Concetti R, Belli A, et al (2023)

A Cloud-IoT Architecture for Latency-Aware Localization in Earthquake Early Warning.

Sensors (Basel, Switzerland), 23(20): pii:s23208431.

An effective earthquake early warning system requires rapid and reliable earthquake source detection. Despite the numerous epicenter localization solutions proposed in recent years, their utilization within the Internet of Things (IoT) framework and integration with IoT-oriented cloud platforms remain underexplored. This paper proposes a complete IoT architecture for earthquake detection, localization, and event notification. The architecture, which has been designed, deployed, and tested on a standard cloud platform, introduces an innovative approach by implementing P-wave "picking" directly on IoT devices, deviating from traditional regional earthquake early warning (EEW) approaches. Pick association, source localization, event declaration, and user notification functionalities are also deployed on the cloud. The cloud integration simplifies the addition of other services to the architecture, such as data storage and device management. Moreover, a localization algorithm based on the hyperbola method is proposed, applying the time-difference-of-arrival (TDOA) multilateration often used in wireless sensor network applications. The results show that the proposed end-to-end architecture is able to provide a quick estimate of the earthquake epicenter location with acceptable errors for an EEW system scenario. Rigorous testing against the standard of reference in Italy for regional EEW showed an overall 3.39 s gain in system localization speed, offering a tangible metric of the efficiency and potential of the proposed system as an EEW solution.
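The TDOA idea behind the localization algorithm can be illustrated with a brute-force 2-D grid search: pick the grid point whose predicted arrival-time differences best match the observed ones. Station geometry, wave speed, and grid resolution below are illustrative assumptions, not the paper's configuration:

```python
# Toy 2-D TDOA multilateration by grid search. Each hyperbola in the
# "hyperbola method" is the locus of points with a fixed arrival-time
# difference between two stations; minimizing squared residuals over a
# grid approximates their intersection.
import math

SPEED = 6.5  # assumed P-wave speed, km/s

def travel_time(src, sta):
    return math.dist(src, sta) / SPEED

def locate(stations, arrivals, extent=100):
    """Return the 1-km grid point minimizing squared TDOA residuals."""
    obs = [t - arrivals[0] for t in arrivals]   # differences vs. station 0
    best, best_err = None, float("inf")
    for gx in range(extent + 1):
        for gy in range(extent + 1):
            p = (gx, gy)
            pred0 = travel_time(p, stations[0])
            err = sum((travel_time(p, s) - pred0 - d) ** 2
                      for s, d in zip(stations, obs))
            if err < best_err:
                best, best_err = p, err
    return best

stations = [(0, 0), (100, 0), (0, 100), (100, 100)]  # km
true_src = (30, 40)
arrivals = [travel_time(true_src, s) for s in stations]
print(locate(stations, arrivals))  # → (30, 40)
```

Using only arrival-time *differences* means the (unknown) origin time of the quake cancels out, which is exactly why TDOA suits early warning, where picks arrive before the origin time is known.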

RevDate: 2023-10-28

Lorenzo-Villegas DL, Gohil NV, Lamo P, et al (2023)

Innovative Biosensing Approaches for Swift Identification of Candida Species, Intrusive Pathogenic Organisms.

Life (Basel, Switzerland), 13(10): pii:life13102099.

Candida is the largest genus of medically significant fungi. Although most of its members are commensals, residing harmlessly in human bodies, some are opportunistic and dangerously invasive. These have the ability to cause severe nosocomial candidiasis and candidemia that affect the viscera and bloodstream. A prompt diagnosis will lead to a successful treatment modality. The smart solution of biosensing technologies for rapid and precise detection of Candida species has made remarkable progress. The development of point-of-care (POC) biosensor devices involves sensor precision down to pico-/femtogram level, cost-effectiveness, portability, rapidity, and user-friendliness. However, futuristic diagnostics will depend on exploiting technologies such as multiplexing for high-throughput screening, CRISPR, artificial intelligence (AI), neural networks, the Internet of Things (IoT), and cloud computing of medical databases. This review gives an insight into different biosensor technologies designed for the detection of medically significant Candida species, especially Candida albicans and C. auris, and their applications in the medical setting.

RevDate: 2023-10-28

Dineva K, T Atanasova (2023)

Health Status Classification for Cows Using Machine Learning and Data Management on AWS Cloud.

Animals : an open access journal from MDPI, 13(20): pii:ani13203254.

The health and welfare of livestock are significant for ensuring the sustainability and profitability of the agricultural industry. Addressing efficient ways to monitor and report the health status of individual cows is critical to prevent outbreaks and maintain herd productivity. The purpose of the study is to develop a machine learning (ML) model to classify the health status of milk cows into three categories. In this research, data are collected from existing non-invasive IoT devices and tools in a dairy farm, monitoring the micro- and macroenvironment of the cow in combination with particular information on age, days in milk, lactation, and more. A workflow of various data-processing methods is systematized and presented to create a complete, efficient, and reusable roadmap for data processing, modeling, and real-world integration. Following the proposed workflow, the data were treated, and five different ML algorithms were trained and tested to select the most descriptive one to monitor the health status of individual cows. The highest result for health status assessment is obtained by random forest classifier (RFC) with an accuracy of 0.959, recall of 0.954, and precision of 0.97. To increase the security, speed, and reliability of the work process, a cloud architecture of services is presented to integrate the trained model as an additional functionality in the Amazon Web Services (AWS) environment. The classification results of the ML model are visualized in a newly created interface in the client application.
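The reported accuracy, recall, and precision all follow from a confusion matrix over the predicted health classes. A minimal sketch with hypothetical labels (not the study's data) shows how the three figures relate:

```python
# Per-class precision/recall plus overall accuracy from paired label lists.
# Class names are illustrative stand-ins for the study's health categories.

def precision_recall_accuracy(y_true, y_pred, positive):
    """Precision and recall for one class, plus overall accuracy."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    correct = sum(1 for t, p in pairs if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, correct / len(pairs)

y_true = ["healthy", "sick", "healthy", "at_risk", "sick"]
y_pred = ["healthy", "sick", "sick", "at_risk", "sick"]
print(precision_recall_accuracy(y_true, y_pred, positive="sick"))
# precision 2/3, recall 1.0, accuracy 0.8
```

In a multi-class setting like the three health statuses here, the study's single recall/precision figures are presumably averages of such per-class values.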

RevDate: 2023-10-27

Intelligence And Neuroscience C (2023)

Retracted: Cloud Computing Load Balancing Mechanism Taking into Account Load Balancing Ant Colony Optimization Algorithm.

Computational intelligence and neuroscience, 2023:9831926.

[This retracts the article DOI: 10.1155/2022/3120883.].

RevDate: 2023-10-27

Hachisuca AMM, de Souza EG, Oliveira WKM, et al (2023)

AgDataBox-IoT - application development for agrometeorological stations in smart.

MethodsX, 11:102419.

Currently, Brazil is one of the world's largest grain producers and exporters. Agriculture entered its 4.0 version (2017), also known as digital agriculture, after industry entered the 4.0 era (2011). This new paradigm uses Internet of Things (IoT) techniques, sensors installed in the field, networks of interconnected sensors in the plot, drones for crop monitoring, multispectral cameras, storage and processing of data in cloud computing, and big data techniques to process the large volumes of generated data. One practical option for implementing precision agriculture is segmenting the plot into management zones, aiming to maximize profit according to the productive potential of each zone, which is economically viable even for small producers. Considering that climate factors directly influence yield, this study describes the development of a sensor network for climate monitoring of management zones (microclimates), allowing the identification of climate factors that influence yield at each of its stages.
•Application of the Internet of Things to assist decision making in the agricultural production system.
•The AgDataBox (ADB-IoT) web platform has an Application Programming Interface (API).
•An agrometeorological station capable of monitoring all meteorological parameters was developed (Kate 3.0).

RevDate: 2023-10-25

Dube T, Dube T, Dalu T, et al (2023)

Assessment of land use and land cover, water nutrient and metal concentration related to illegal mining activities in an Austral semi-arid river system: A remote sensing and multivariate analysis approach.

The Science of the total environment pii:S0048-9697(23)06546-4 [Epub ahead of print].

The mining sector in various countries, particularly in the sub-Saharan African region, faces significant impact from the emergence of small-scale unlicensed artisanal mines. This trend is influenced by the rising demand and prices for minerals, along with prevalent poverty levels. The detrimental impacts of these artisanal mines on the natural environment (i.e., rivers) have remained poorly understood, particularly in the Zimbabwean context. To understand the consequences of this situation, a study was conducted in the Umzingwane Catchment, located in southern Zimbabwe, focusing on the variations in water nutrient and metal concentrations in rivers affected by illegal mining activities along their riparian zones. Using multi-year Sentinel-2 composite data and the random forest machine learning algorithm on the Google Earth Engine cloud-computing platform, we mapped the spatial distribution of illegal mines in the affected regions, identifying seven distinct land use classes, including artisanal mines, bare surfaces, settlements, official mines, croplands, and natural vegetation, with acceptable overall and class accuracies of approximately 70 %. Artisanal mines were found to be located along rivers, which was attributed to the large water requirements of the mining process. The water quality analysis revealed elevated nutrient concentrations, such as ammonium and nitrate (range 0.10-20.0 mg L[-1]), which could be attributed to mine drainage from the use of ammonium nitrate explosives during mining activities. Additionally, the prevalence of croplands in the area may have contributed to increased nutrient concentrations. The principal component analysis and hierarchical cluster analysis revealed three clusters. One cluster showed parameters such as Ca, Mg, K, Hg and Na, which are usually associated with the mineral gypsum found in the drainage of artisanal mines in the selected rivers.
Cluster 2 consisted of B, Cu, Fe, Pb, and Mn, which are likely from the natural environment, and cluster 3 contained As, Cd, Cr, and Zn, which were likely associated with both legal and illegal mining operations. These findings provide essential insights into the health of the studied river system and the impacts of human activities in the region. They further serve as a foundation for developing and implementing regulatory measures aimed at protecting riverine systems, in line with Sustainable Development Goal 15.1, which focuses on preserving and conserving terrestrial and inland freshwater ecosystems, including rivers. By acting on this information, authorities can work towards safeguarding these vital natural resources and promoting sustainable development in the area.

RevDate: 2023-10-23

Gal-Nadasan N, Stoicu-Tivadar V, Gal-Nadasan E, et al (2023)

Robotic Process Automation Based Data Extraction from Handwritten Medical Forms.

Studies in health technology and informatics, 309:68-72.

This paper proposes an RPA (robotic process automation)-based software robot that can digitalize and extract data from handwritten medical forms. The RPA robot uses a taxonomy specific to the medical form and associates the extracted data with that taxonomy. This is accomplished using UiPath Studio to create the robot, Google Cloud Vision OCR (optical character recognition) to create the DOM (digital object model) file, and the UiPath machine learning (ML) API to extract the data from the medical form. Because the medical form is in a non-standard format, a data extraction template had to be applied. After the extraction process, the data can be saved into databases or spreadsheets.

RevDate: 2023-10-23

Eneh AH, Udanor CN, Ossai NI, et al (2023)

Towards an improved internet of things sensors data quality for a smart aquaponics system yield prediction.

MethodsX, 11:102436.

The mobile aquaponics system is a sustainable integrated aquaculture-crop production system in which wastewater from fish ponds is utilized in crop production, filtered, and returned for aquaculture use. This process optimizes water and nutrients and allows the simultaneous production of fish and crops in portable homestead models. The lack of datasets and documentation on monitoring growth parameters in Sub-Saharan Africa hampers the effective management and prediction of yields. Water quality impacts the fish growth rate, feed consumption, and general well-being irrespective of the system. This research presents an improvement on the IoT water quality sensor system developed in a previous study carried out in conjunction with two local catfish farmers. The improved system produced datasets that, when trained using several machine learning algorithms, achieved a test RMSE score of 0.6140 against 1.0128 from the old system for fish length prediction using a Decision Tree Regressor. Further testing with the XGBoost Regressor achieved a test RMSE score of 7.0192 for fish weight prediction from the initial IoT dataset and 0.7793 from the improved IoT dataset. Both systems achieved a prediction accuracy of 99%. These evaluations clearly show that the improved system outperformed the initial one.
•The discovery and use of improved IoT pond water quality sensors.
•Development of machine learning models to evaluate the methods.
•Testing of the datasets from the two methods using the machine learning models.
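The RMSE figures used to compare the two sensor systems are simple to reproduce; a minimal helper with toy numbers (not the study's data):

```python
# Root mean squared error: the metric behind the 0.6140-vs-1.0128 comparison.
import math

def rmse(actual, predicted):
    """RMSE between two equal-length sequences of measurements."""
    if len(actual) != len(predicted):
        raise ValueError("length mismatch")
    squared = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return math.sqrt(squared / len(actual))

# Toy fish-length readings (cm) vs. model predictions:
print(round(rmse([10.0, 12.0, 14.0], [10.0, 13.0, 15.0]), 4))  # → 0.8165
```

Because RMSE is in the same units as the target, the drop from 1.0128 to 0.6140 reads directly as a smaller typical error in predicted fish length.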

RevDate: 2023-10-21

Patel M, Dayan I, Fishman EK, et al (2023)

Accelerating artificial intelligence: How federated learning can protect privacy, facilitate collaboration, and improve outcomes.

Health informatics journal, 29(4):14604582231207744.

Cross-institution collaborations are constrained by data-sharing challenges. These challenges hamper innovation, particularly in artificial intelligence, where models require diverse data to ensure strong performance. Federated learning (FL) addresses these data-sharing challenges. In typical collaborations, data are sent to a central repository where models are trained. With FL, models are sent to participating sites, trained locally, and the model weights are aggregated to create a master model with improved performance. At the 2021 Radiological Society of North America (RSNA) conference, a panel was conducted titled "Accelerating AI: How Federated Learning Can Protect Privacy, Facilitate Collaboration and Improve Outcomes." Two groups shared insights: researchers from the EXAM study (EMR CXR AI Model) and members of the National Cancer Institute's Early Detection Research Network (EDRN) pancreatic cancer working group. EXAM brought together 20 institutions to create a model to predict the oxygen requirements of patients seen in the emergency department with COVID-19 symptoms. The EDRN collaboration is focused on improving outcomes for pancreatic cancer patients through earlier detection. This paper describes major insights from the panel, including direct quotes. The panelists described the impetus for FL, the long-term potential vision of FL, challenges faced in FL, and the immediate path forward for FL.
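The aggregation step described above (local training at each site, then weight averaging into a master model) can be sketched as follows; the dataset-size weighting follows the common FedAvg scheme, and the site weight vectors here are hypothetical:

```python
def fed_avg(site_weights, site_sizes):
    """Aggregate per-site model weights into a master model.

    Each site trains locally and shares only its weight vector (never its
    data); the server averages the vectors, weighted by local dataset size.
    """
    total = sum(site_sizes)
    master = [0.0] * len(site_weights[0])
    for weights, size in zip(site_weights, site_sizes):
        for i, w in enumerate(weights):
            master[i] += w * size / total
    return master

# Hypothetical 2-parameter models from three participating hospitals
sites = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sizes = [100, 300, 600]
master = fed_avg(sites, sizes)
```

Because only weights leave each institution, the raw patient data never crosses the data-sharing boundary.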

RevDate: 2023-10-20

Naboureh A, Li A, Bian J, et al (2023)

Land cover dataset of the China Central-Asia West-Asia Economic Corridor from 1993 to 2018.

Scientific data, 10(1):728.

Land Cover (LC) maps offer vital knowledge for various studies, ranging from sustainable development to climate change. The China Central-Asia West-Asia Economic Corridor region, as a core component of the Belt and Road initiative program, has been experiencing some of the most severe LC change tragedies, such as the Aral Sea crisis and Lake Urmia shrinkage, in recent decades. Therefore, there is a high demand for producing a fine-resolution, spatially-explicit, and long-term LC dataset for this region. However, except for China, such a dataset is currently lacking for the rest of the region (Kyrgyzstan, Turkmenistan, Kazakhstan, Uzbekistan, Tajikistan, Turkey, and Iran). Here, we constructed a historical set of six 30-m resolution LC maps between 1993 and 2018 at 5-year intervals for these seven countries, for which nearly 200,000 Landsat scenes were classified into nine LC types within the Google Earth Engine cloud computing platform. The generated LC maps displayed high accuracies. This publicly available dataset has the potential to be broadly applied in environmental policy and management.

RevDate: 2023-10-20

Muratore L, N Tsagarakis (2023)

XBot2D: towards a robotics hybrid cloud architecture for field robotics.

Frontiers in robotics and AI, 10:1168694.

Nowadays, robotics applications requiring the execution of complex tasks in real-world scenarios still face many challenges related to highly unstructured and dynamic environments in domains such as emergency response and search and rescue, where robots have to operate for prolonged periods, trading off computational performance against power autonomy and vice versa. In particular, there is a crucial need for robots capable of adapting to such settings while at the same time providing robustness and extended power autonomy. A possible approach to overcoming the conflicting demands of a computationally performant system and long power autonomy is cloud robotics, which can boost the computational capabilities of the robot while reducing energy consumption by offloading resources to the cloud. Nevertheless, the communication constraints typical of field robotics, i.e., limited bandwidth, latency, and intermittent connectivity, make cloud-enabled robotics solutions challenging to deploy in real-world applications. In this context, we designed and realized the XBot2D software architecture, which provides a hybrid cloud manager capable of dynamically and seamlessly allocating robotics skills to perform a distributed computation based on the current network condition, the required latency, and the computational/energy resources of the robot in use. The proposed framework leverages the two dimensions, i.e., 2D (local and cloud), in a way that is transparent to the user, providing support for real-time (RT) skill execution on the local robot as well as machine learning and AI resources on the cloud, with the possibility of automatically relocating the above based on the required performance and communication quality.
The XBot2D implementation and its functionalities are presented and validated in realistic tasks involving the CENTAURO robot and the Amazon Web Services Elastic Compute Cloud (AWS EC2) infrastructure under different network conditions.
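A toy sketch of the kind of allocation decision such a hybrid cloud manager makes; the rule and thresholds below are illustrative assumptions, not the actual XBot2D policy:

```python
def allocate_skill(requires_rt, latency_ms, bandwidth_mbps,
                   max_latency_ms=20.0, min_bandwidth_mbps=5.0):
    """Toy local-vs-cloud allocation rule for one robot skill.

    Hard real-time skills always run on the local robot; other skills are
    offloaded to the cloud only when the current network condition meets
    the skill's latency and bandwidth requirements.
    """
    if requires_rt:
        return "local"
    if latency_ms <= max_latency_ms and bandwidth_mbps >= min_bandwidth_mbps:
        return "cloud"
    return "local"
```

For example, a perception skill with good connectivity would be offloaded, while the same skill over a degraded field link would fall back to on-board execution.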

RevDate: 2023-10-20

Post AR, Ho N, Rasmussen E, et al (2023)

Hypermedia-based software architecture enables Test-Driven Development.

JAMIA open, 6(4):ooad089.

OBJECTIVES: Using agile software development practices, develop and evaluate an architecture and implementation for reliable and user-friendly self-service management of bioinformatic data stored in the cloud.

MATERIALS AND METHODS: Comprehensive Oncology Research Environment (CORE) Browser is a new open-source web application for cancer researchers to manage sequencing data organized in a flexible format in Amazon Simple Storage Service (S3) buckets. It has a microservices- and hypermedia-based architecture, which we integrated with Test-Driven Development (TDD), the iterative writing of computable specifications for how software should work prior to development. Relying on repeating patterns found in hypermedia-based architectures, we hypothesized that hypermedia would permit developing test "templates" that can be parameterized and executed for each microservice, maximizing code coverage while minimizing effort.
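The idea of a parameterized test "template" executed once per microservice can be illustrated with Python's unittest; the resource names and the `fetch_links` stub below are hypothetical, standing in for real hypermedia requests against the CORE Browser backend:

```python
import unittest

def make_link_test(resource, expected_rels):
    """Template for one hypermedia link test, parameterized per resource.

    `fetch_links` is a stub standing in for an HTTP GET that reads a
    service's hypermedia links, so the template is runnable here.
    """
    def fetch_links(name):
        links = {"datasets": ["self", "items"], "samples": ["self", "items"]}
        return links[name]

    def test(self):
        for rel in expected_rels:
            with self.subTest(resource=resource, rel=rel):
                self.assertIn(rel, fetch_links(resource))
    return test

class HypermediaLinkTests(unittest.TestCase):
    pass

# Instantiate the template once per (hypothetical) microservice resource
for res in ("datasets", "samples"):
    setattr(HypermediaLinkTests, f"test_{res}_links",
            make_link_test(res, ["self", "items"]))
```

Because hypermedia responses share the same link structure, one template like this can be stamped out across many services, which is how a small number of templates can yield thousands of executed parameterized tests.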

RESULTS: After one-and-a-half years of development, the CORE Browser backend had 121 test templates and 875 custom tests that were parameterized and executed 3031 times, providing 78% code coverage.

DISCUSSION: Architecting to permit test reuse through a hypermedia approach was a key success factor for our testing efforts. CORE Browser's application of hypermedia and TDD illustrates one way to integrate software engineering methods into data-intensive networked applications. Separating bioinformatic data management from analysis distinguishes this platform from others in bioinformatics and may provide stable data management while permitting analysis methods to advance more rapidly.

CONCLUSION: Software engineering practices are underutilized in informatics. Similar informatics projects will more likely succeed through application of good architecture and automated testing. Our approach is broadly applicable to data management tools involving cloud data storage.

RevDate: 2023-10-20

Healthcare Engineering JO (2023)

Retracted: Application of Cloud Computing in the Prediction of Exercise Improvement of Cardiovascular and Digestive Systems in Obese Patients.

Journal of healthcare engineering, 2023:9872648.

[This retracts the article DOI: 10.1155/2021/4695722.].

RevDate: 2023-10-20

Healthcare Engineering JO (2023)

Retracted: Medical Cloud Computing Data Processing to Optimize the Effect of Drugs.

Journal of healthcare engineering, 2023:9869843.

[This retracts the article DOI: 10.1155/2021/5560691.].

RevDate: 2023-10-20

Healthcare Engineering JO (2023)

Retracted: Cloud Computing into Respiratory Rehabilitation Training-Assisted Treatment of Patients with Pneumonia.

Journal of healthcare engineering, 2023:9795658.

[This retracts the article DOI: 10.1155/2021/5884174.].

RevDate: 2023-10-20

Hornik J, Rachamim M, S Graguer (2023)

Fog computing: a platform for big-data marketing analytics.

Frontiers in artificial intelligence, 6:1242574.

Marketing science embraces a wide variety of data types and measurement tools necessary for strategy, research, and applied decision making. Managing the marketing data generated by Internet of Things (IoT) sensors and actuators is one of the biggest challenges marketing managers face when deploying an IoT system. This short note shows how traditional cloud-based IoT systems are challenged by the large scale, heterogeneity, and high latency witnessed in some cloud ecosystems. It introduces researchers to one recent breakthrough, fog computing, an emerging concept that decentralizes applications, strategies, and data analytics into the network itself using a distributed and federated computing model. Fog computing transforms the centralized cloud into a distributed fog by bringing storage and computation closer to the user end. It is considered a novel marketplace phenomenon that can support AI and management strategies, especially for the design of "smart marketing".

RevDate: 2023-10-19

Uhlrich SD, Falisse A, Kidziński Ł, et al (2023)

OpenCap: Human movement dynamics from smartphone videos.

PLoS computational biology, 19(10):e1011462 pii:PCOMPBIOL-D-23-00154.

Measures of human movement dynamics can predict outcomes like injury risk or musculoskeletal disease progression. However, these measures are rarely quantified in large-scale research studies or clinical practice due to the prohibitive cost, time, and expertise required. Here we present and validate OpenCap, an open-source platform for computing both the kinematics (i.e., motion) and dynamics (i.e., forces) of human movement using videos captured from two or more smartphones. OpenCap leverages pose estimation algorithms to identify body landmarks from videos; deep learning and biomechanical models to estimate three-dimensional kinematics; and physics-based simulations to estimate muscle activations and musculoskeletal dynamics. OpenCap's web application enables users to collect synchronous videos and visualize movement data that is automatically processed in the cloud, thereby eliminating the need for specialized hardware, software, and expertise. We show that OpenCap accurately predicts dynamic measures, like muscle activations, joint loads, and joint moments, which can be used to screen for disease risk, evaluate intervention efficacy, assess between-group movement differences, and inform rehabilitation decisions. Additionally, we demonstrate OpenCap's practical utility through a 100-subject field study, where a clinician using OpenCap estimated musculoskeletal dynamics 25 times faster than a laboratory-based approach at less than 1% of the cost. By democratizing access to human movement analysis, OpenCap can accelerate the incorporation of biomechanical metrics into large-scale research studies, clinical trials, and clinical practice.

RevDate: 2023-10-19

Zhang M (2023)

Optimization Strategy of College Students' Education Management Based on Smart Cloud Platform Teaching.

Computational intelligence and neuroscience, 2023:5642142.

With the passage of time and social change, the form of education is also changing step by step. In just a few decades, information technology has developed by leaps and bounds, yet digital education has not yet been widely promoted. Intelligent education cloud platforms based on big data, the Internet of Things, cloud computing, and artificial intelligence have begun to emerge. Research on the "smart campus" cloud platform is conducive to improving the utilization rate of existing hardware equipment in colleges and universities and to improving the level of teaching software deployment. At the same time, this research also provides a new idea for research in the field of cloud security. While cloud computing brings convenience to teaching work, it also brings new problems for system security. At present, virtualization technology is still in an ascendant stage in the construction of the "smart campus" in colleges and universities and is gradually being applied to cloud computing service products. There are many existing teaching resource platforms, but most were modified from early resource management systems; they exhibit strong coupling within a single system, insufficient functions for collecting, processing, searching, sharing, and reusing resources, and weak application support for related business systems. Against this background, this paper studies a teaching process management system for the intelligent classroom.

RevDate: 2023-10-18

Wang Y, Hollingsworth PM, Zhai D, et al (2023)

High-resolution maps show that rubber causes substantial deforestation.

Nature [Epub ahead of print].

Understanding the effects of cash crop expansion on natural forest is of fundamental importance. However, for most crops there are no remotely sensed global maps[1], and global deforestation impacts are estimated using models and extrapolations. Natural rubber is an example of a principal commodity for which deforestation impacts have been highly uncertain, with estimates differing more than fivefold[1-4]. Here we harnessed Earth observation satellite data and cloud computing[5] to produce high-resolution maps of rubber (10 m pixel size) and associated deforestation (30 m pixel size) for Southeast Asia. Our maps indicate that rubber-related forest loss has been substantially underestimated in policy, by the public and in recent reports[6-8]. Our direct remotely sensed observations show that deforestation for rubber is at least twofold to threefold higher than suggested by figures now widely used for setting policy[4]. With more than 4 million hectares of forest loss for rubber since 1993 (at least 2 million hectares since 2000) and more than 1 million hectares of rubber plantations established in Key Biodiversity Areas, the effects of rubber on biodiversity and ecosystem services in Southeast Asia could be extensive. Thus, rubber deserves more attention in domestic policy, within trade agreements and in incoming due-diligence legislation.

RevDate: 2023-10-18

Teng Z, Chen J, Wang J, et al (2023)

Panicle-Cloud: An Open and AI-Powered Cloud Computing Platform for Quantifying Rice Panicles from Drone-Collected Imagery to Enable the Classification of Yield Production in Rice.

Plant phenomics (Washington, D.C.), 5:0105.

Rice (Oryza sativa) is an essential staple food for many rice-consuming nations in the world; hence the importance of improving its yield under global climate change. To evaluate the yield performance of different rice varieties, key yield-related traits such as panicle number per unit area (PNpM[2]) are key indicators, which have attracted much attention from many plant research groups. Nevertheless, it is still challenging to conduct large-scale screening of rice panicles to quantify the PNpM[2] trait due to complex field conditions, the large variation of rice cultivars, and their panicle morphological features. Here, we present Panicle-Cloud, an open and artificial intelligence (AI)-powered cloud computing platform that is capable of quantifying rice panicles from drone-collected imagery. To facilitate the development of AI-powered detection models, we first established an open, diverse rice panicle detection dataset that was annotated by a group of rice specialists; then, we integrated several state-of-the-art deep learning models (including a preferred model called Panicle-AI) into the Panicle-Cloud platform, so that nonexpert users could select a pretrained model to detect rice panicles from their own aerial images. We trialed the AI models with images collected at different altitudes and growth stages, through which the right timing and preferred image resolutions for phenotyping rice panicles in the field were identified. Then, we applied the platform in a 2-season rice breeding trial to validate its biological relevance and classified yield production using the platform-derived PNpM[2] trait from hundreds of rice varieties. Through correlation analysis between computational analysis and manual scoring, we found that the platform could quantify the PNpM[2] trait reliably, based on which yield production was classified with high accuracy.
Hence, we trust that our work demonstrates a valuable advance in phenotyping the PNpM[2] trait in rice, which provides a useful toolkit to enable rice breeders to screen and select desired rice varieties under field conditions.

RevDate: 2023-10-17

Kline JA, Reed B, Frost A, et al (2023)

Database derived from an electronic medical record-based surveillance network of US emergency department patients with acute respiratory illness.

BMC medical informatics and decision making, 23(1):224.

BACKGROUND: For surveillance of episodic illness, the emergency department (ED) represents one of the largest interfaces for generalizable data about segments of the US public experiencing a need for unscheduled care. This protocol manuscript describes the development and operation of a national network linking symptom, clinical, laboratory and disposition data that provides a public database dedicated to the surveillance of acute respiratory infections (ARIs) in EDs.

METHODS: The Respiratory Virus Laboratory Emergency Department Network Surveillance (RESP-LENS) network includes 26 academic investigators, from 24 sites, with 91 hospitals, and the Centers for Disease Control and Prevention (CDC) to survey viral infections. All data originate from electronic medical records (EMRs) accessed by structured query language (SQL) coding. Each Tuesday, data are imported into the standard data form for ARI visits that occurred the prior week (termed the index file); outcomes at 30 days and ED volume are also recorded. Up to 325 data fields can be populated for each case. Data are transferred from sites into an encrypted Google Cloud Platform, then programmatically checked for compliance, parsed, and aggregated into a central database housed on a second cloud platform prior to transfer to CDC.
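A minimal sketch of the kind of programmatic compliance check the pipeline describes before records are parsed and aggregated; the field names and allowed values here are hypothetical (the real network populates up to 325 fields per case):

```python
def check_record(record, required=("site_id", "visit_date", "test_result")):
    """Return a list of compliance problems for one weekly ARI record.

    An empty list means the record passes and may be aggregated; field
    names and the allowed result values are illustrative only.
    """
    problems = [f"missing field: {f}" for f in required if not record.get(f)]
    allowed = (None, "positive", "negative", "indeterminate")
    if record.get("test_result") not in allowed:
        problems.append("unrecognised test_result value")
    return problems
```

Records failing such checks would be flagged back to the submitting site rather than silently dropped from the central database.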

RESULTS: As of August, 2023, the network has reported data on over 870,000 ARI cases selected from approximately 5.2 million ED encounters. Post-contracting challenges to network execution have included local shifts in testing policies and platforms, delays in ICD-10 coding to detect ARI cases, and site-level personnel turnover. The network is addressing these challenges and is poised to begin streaming weekly data for dissemination.

CONCLUSIONS: The RESP-LENS network provides a weekly updated database that is a public health resource to survey the epidemiology, viral causes, and outcomes of ED patients with acute respiratory infections.

RevDate: 2023-10-17

Atchyuth BAS, Swain R, P Das (2023)

Near real-time flood inundation and hazard mapping of Baitarani River Basin using Google Earth Engine and SAR imagery.

Environmental monitoring and assessment, 195(11):1331.

Flood inundation mapping and satellite imagery monitoring are critical and effective responses during flood events. Mapping a flood using optical data is limited by the unavailability of cloud-free images. Because of its capacity to penetrate clouds and operate in all kinds of weather, synthetic aperture radar is preferred for water inundation mapping. Flood mapping in Eastern India's Baitarani River Basin for 2018, 2019, 2020, 2021, and 2022 was performed in this study using Sentinel-1 imagery and Google Earth Engine with Otsu's algorithm. Different machine-learning algorithms were used to map the LULC of the study region. Dual polarizations VH and VV and their combinations VV×VH, VV+VH, VH-VV, VV-VH, VV/VH, and VH/VV were examined to identify non-water and water bodies. The normalized difference water index (NDWI) map derived from Sentinel-2 data validated the surface water inundation with 80% accuracy. The total inundated areas were identified as 440.3 km[2] in 2018, 268.58 km[2] in 2019, 178.40 km[2] in 2020, 203.79 km[2] in 2021, and 321.33 km[2] in 2022. Overlaying the flood maps on the LULC map indicated that flooding heavily affected agricultural and urban areas in these years. The approach, using near-real-time Sentinel-1 SAR imagery and the GEE platform, can be operationalized for periodic flood mapping, helps develop flood control measures, and enhances flood management. The generated annual flood inundation maps are also useful for policy development, agricultural yield estimation, crop insurance framing, etc.
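Otsu's algorithm, which the study applies to separate water from non-water pixels, can be sketched in pure Python on quantized pixel values (treating low backscatter as the water class is an assumption for illustration):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance.

    `values` are pixel intensities quantized to 0..bins-1 (e.g. scaled SAR
    backscatter); pixels <= threshold form the low (water) class.
    """
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg, sum_bg = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w_bg += hist[t]             # pixels at or below candidate threshold
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram (dark water vs. brighter land), the returned threshold falls in the valley between the two modes, which is what makes the method attractive for unsupervised water masking.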

RevDate: 2023-10-16

Familiar AM, Mahtabfar A, Fathi Kazerooni A, et al (2023)

Radio-pathomic approaches in pediatric neuro-oncology: Opportunities and challenges.

Neuro-oncology advances, 5(1):vdad119.

With medical software platforms moving to cloud environments with scalable storage and computing, the translation of predictive artificial intelligence (AI) models to aid in clinical decision-making and facilitate personalized medicine for cancer patients is becoming a reality. Medical imaging, namely radiologic and histologic images, has immense analytical potential in neuro-oncology, and models utilizing integrated radiomic and pathomic data may yield a synergistic effect and provide a new modality for precision medicine. At the same time, the ability to harness multi-modal data is met with challenges in aggregating data across medical departments and institutions, as well as significant complexity in modeling the phenotypic and genotypic heterogeneity of pediatric brain tumors. In this paper, we review recent pathomic and integrated pathomic, radiomic, and genomic studies with clinical applications. We discuss current challenges limiting translational research on pediatric brain tumors and outline technical and analytical solutions. Overall, we propose that to empower the potential residing in radio-pathomics, systemic changes in cross-discipline data management and end-to-end software platforms to handle multi-modal data sets are needed, in addition to embracing modern AI-powered approaches. These changes can improve the performance of predictive models, and ultimately the ability to advance brain cancer treatments and patient outcomes through the development of such models.

RevDate: 2023-10-16

Jang H, Park S, H Koh (2023)

Comprehensive microbiome causal mediation analysis using MiMed on user-friendly web interfaces.

Biology methods & protocols, 8(1):bpad023.

It is a central goal of human microbiome studies to understand the roles of the microbiome as a mediator that transmits environmental, behavioral, or medical exposures to health or disease outcomes. Yet, mediation analysis is not used as much as it should be. One reason is the lack of carefully planned routines, compilers, and automated computing systems for microbiome mediation analysis (MiMed) to perform the series of data processing, diversity calculation, data normalization, downstream data analysis, and visualization steps. Many researchers in various disciplines (e.g. clinicians, public health practitioners, and biologists) are also unfamiliar with the related statistical methods and programming languages on command-line interfaces. Thus, in this article, we introduce a cloud computing web platform, named MiMed, that enables comprehensive MiMed on user-friendly web interfaces. The main features of MiMed are as follows. First, MiMed can survey the microbiome in various spheres (i) as a whole microbial ecosystem using different ecological measures (e.g. alpha- and beta-diversity indices) or (ii) as individual microbial taxa (e.g. phyla, classes, orders, families, genera, and species) using different data normalization methods. Second, MiMed enables covariate-adjusted analysis to control for potential confounding factors (e.g. age and gender), which is essential to enhance the causality of the results, especially for observational studies. Third, MiMed enables a breadth of statistical inferences in both mediation effect estimation and significance testing. Fourth, MiMed provides flexible and easy-to-use data processing and analytic modules and creates nice graphical representations. Finally, MiMed employs ChatGPT to search for what is known about the microbial taxa that are found to be significant mediators, using artificial intelligence technologies.
For demonstration purposes, we applied MiMed to the study on the mediating roles of oral microbiome in subgingival niches between e-cigarette smoking and gingival inflammation. MiMed is freely available on our web server (http://mimed.micloud.kr).

RevDate: 2023-10-14

Li W, Li SM, Kang MC, et al (2023)

Multi-characteristic tannic acid-reinforced polyacrylamide/sodium carboxymethyl cellulose ionic hydrogel strain sensor for human-machine interaction.

International journal of biological macromolecules pii:S0141-8130(23)04331-3 [Epub ahead of print].

Big data and cloud computing are propelling research on human-computer interaction within academia. However, the potential of wearable human-machine interaction (HMI) devices utilizing multiperformance ionic hydrogels remains largely unexplored. Here, we present a motion recognition-based HMI system that enhances movement training. We engineered dual-network PAM/CMC/TA (PCT) hydrogels by reinforcing polyacrylamide (PAM) and sodium carboxymethyl cellulose (CMC) polymers with tannic acid (TA). These hydrogels possess exceptional transparency, adhesion, and remodelling features. By combining an elastic PAM backbone with tunable amounts of CMC and TA, the PCT hydrogels achieve optimal electromechanical performance. As strain sensors, they demonstrate high sensitivity (GF = 4.03), a low detection limit (0.5 %), and good linearity (0.997). Furthermore, we developed a highly accurate (97.85 %) motion recognition system using machine learning and hydrogel-based wearable sensors. This system enables contactless real-time training monitoring and wireless control of trolley operations. Our research underscores the effectiveness of PCT hydrogels for real-time HMI, thus advancing next-generation HMI systems.
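The quoted gauge factor (GF = 4.03) is the standard strain-sensor figure of merit: relative resistance change per unit strain. A one-line sketch, with hypothetical resistance values:

```python
def gauge_factor(delta_r, r0, strain):
    """Gauge factor GF = (ΔR/R0) / ε for a resistive strain sensor."""
    return (delta_r / r0) / strain
```

For instance, a sensor whose resistance rises 4.03% under 1% strain has GF ≈ 4, comparable to the figure reported for the PCT hydrogels.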

RevDate: 2023-10-14

Al-Bazzaz H, Azam M, Amayri M, et al (2023)

Unsupervised Mixture Models on the Edge for Smart Energy Consumption Segmentation with Feature Saliency.

Sensors (Basel, Switzerland), 23(19): pii:s23198296.

Smart meter datasets have recently transitioned from monthly intervals to one-second granularity, yielding invaluable insights for diverse metering functions. Clustering analysis, a fundamental data mining technique, is extensively applied to discern unique energy consumption patterns. However, the advent of high-resolution smart meter data brings forth formidable challenges, including non-Gaussian data distributions, unknown cluster counts, and varying feature importance within high-dimensional spaces. This article introduces an innovative learning framework integrating the expectation-maximization algorithm with the minimum message length criterion. This unified approach enables concurrent feature and model selection, finely tuned for the proposed bounded asymmetric generalized Gaussian mixture model with feature saliency. Our experiments aim to replicate an efficient smart meter data analysis scenario by incorporating three distinct feature extraction methods. We rigorously validate the clustering efficacy of our proposed algorithm against several state-of-the-art approaches, employing diverse performance metrics across synthetic and real smart meter datasets. The clusters that we identify effectively highlight variations in residential energy consumption, furnishing utility companies with actionable insights for targeted demand reduction efforts. Moreover, we demonstrate our method's robustness and real-world applicability by harnessing Concordia's High-Performance Computing infrastructure. This facilitates efficient energy pattern characterization, particularly within smart meter environments involving edge cloud computing. Finally, we emphasize that our proposed mixture model outperforms three other models in this paper's comparative study. We achieve superior performance compared to the non-bounded variant of the proposed mixture model by an average percentage improvement of 7.828%.
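The expectation-maximization skeleton underlying the proposed framework can be sketched for a plain 1-D two-component Gaussian mixture; the paper's actual model (a bounded asymmetric generalized Gaussian mixture with feature saliency and minimum-message-length selection) is considerably richer than this illustration:

```python
import math

def normal_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, pis, mus, variances):
    """One EM iteration for a 1-D two-component Gaussian mixture."""
    # E-step: posterior responsibility of each component for each point
    resp = []
    for x in data:
        w = [p * normal_pdf(x, m, v) for p, m, v in zip(pis, mus, variances)]
        s = sum(w)
        resp.append([wi / s for wi in w])
    # M-step: re-estimate mixing weights, means, and variances
    n_k = [sum(r[k] for r in resp) for k in range(2)]
    pis = [n / len(data) for n in n_k]
    mus = [sum(r[k] * x for r, x in zip(resp, data)) / n_k[k] for k in range(2)]
    variances = [sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / n_k[k]
                 + 1e-9 for k in range(2)]   # small floor avoids degeneracy
    return pis, mus, variances
```

Iterating `em_step` to convergence recovers cluster weights, centers, and spreads; the full method additionally scores candidate models (cluster counts, salient features) with the minimum message length criterion.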

RevDate: 2023-10-13

Schacherer DP, Herrmann MD, Clunie DA, et al (2023)

The NCI Imaging Data Commons as a platform for reproducible research in computational pathology.

Computer methods and programs in biomedicine, 242:107839 pii:S0169-2607(23)00505-9 [Epub ahead of print].

BACKGROUND AND OBJECTIVES: Reproducibility is a major challenge in developing machine learning (ML)-based solutions in computational pathology (CompPath). The NCI Imaging Data Commons (IDC) provides >120 cancer image collections according to the FAIR principles and is designed to be used with cloud ML services. Here, we explore its potential to facilitate reproducibility in CompPath research.

METHODS: Using the IDC, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets. To assess reproducibility, the experiments were run multiple times with separate but identically configured instances of common ML services.

RESULTS: The results of different runs of the same experiment were reproducible to a large extent. However, we observed occasional, small variations in AUC values, indicating a practical limit to reproducibility.

CONCLUSIONS: We conclude that the IDC facilitates approaching the reproducibility limit of CompPath research (i) by enabling researchers to reuse exactly the same datasets and (ii) by integrating with cloud ML services so that experiments can be run in identically configured computing environments.

RevDate: 2023-10-13

Saif Y, Yusof Y, Rus AZM, et al (2023)

Implementing circularity measurements in industry 4.0-based manufacturing metrology using MQTT protocol and Open CV: A case study.

PloS one, 18(10):e0292814 pii:PONE-D-23-19982.

In the context of Industry 4.0, manufacturing metrology is crucial for inspecting and measuring machines. Internet of Things (IoT) technology enables seamless communication between advanced industrial devices through local and cloud computing servers. This study investigates the use of the MQTT protocol to enhance the performance of circularity-measurement data transmission between cloud servers and round-hole data sources through OpenCV. Accurate inspection of circular characteristics, particularly roundness errors, is vital for lubricant distribution, assemblies, and rotational force innovation. Circularity measurement techniques employ algorithms such as the minimum zone circle tolerance algorithm. Vision inspection systems, utilizing image processing techniques, can promptly and accurately detect quality concerns by analyzing the model's surface through circular dimension analysis. This involves sending the model's image to a computer, which employs techniques such as the Hough Transform, edge detection, and contour analysis to identify circular features and extract relevant parameters. This method is utilized in the camera industry and component assembly. To assess performance, a comparative experiment was conducted between the non-contact 3SMVI system and the contact-based CMM system widely used in various industries for roundness evaluation. The CMM technique is known for its high precision but is time-consuming. Experimental results indicated a variation of 5 to 9.6 micrometers between the two methods. It is suggested that using a high-resolution camera and appropriate lighting conditions can further enhance result precision.
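A simple way to see what a roundness (circularity) error is: measure radial deviations of the profile about a reference centre. This sketch uses the centroid as the centre, which only upper-bounds the minimum-zone result the abstract refers to; the profile points are synthetic:

```python
import math

def roundness_error(points):
    """Approximate circularity (roundness) error of a measured profile.

    Uses the centroid as the reference centre and reports the radial
    peak-to-valley (Rmax - Rmin); the minimum-zone method instead searches
    for the centre minimizing this value, so this is an upper bound on it.
    """
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return max(radii) - min(radii)

# Synthetic profile: a nominally 10 mm radius hole sampled every degree
pts = [(10 * math.cos(a), 10 * math.sin(a))
       for a in (2 * math.pi * k / 360 for k in range(360))]
```

In a vision pipeline, `points` would come from contour extraction on the hole's image; a perfect circle yields an error near zero, and any local bump raises it by roughly the bump's height.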

RevDate: 2023-10-13

Intelligence And Neuroscience C (2023)

Retracted: An Optimized Decision Method for Smart Teaching Effect Based on Cloud Computing and Deep Learning.

Computational intelligence and neuroscience, 2023:9862737.

[This retracts the article DOI: 10.1155/2022/6907172.].

RevDate: 2023-10-13

Intelligence And Neuroscience C (2023)

Retracted: The Construction of Big Data Computational Intelligence System for E-Government in Cloud Computing Environment and Its Development Impact.

Computational intelligence and neuroscience, 2023:9873976.

[This retracts the article DOI: 10.1155/2022/7295060.].

RevDate: 2023-10-13

Healthcare Engineering JO (2023)

Retracted: Construction of a Health Management Model for Early Identification of Ischaemic Stroke in Cloud Computing.

Journal of healthcare engineering, 2023:9820647.

[This retracts the article DOI: 10.1155/2022/1018056.].

RevDate: 2023-10-11

Wang TY, Cui J, Y Fan (2023)

A wearable-based sports health monitoring system using CNN and LSTM with self-attentions.

PloS one, 18(10):e0292012 pii:PONE-D-23-18164.

Sports performance and health monitoring are essential for athletes to maintain peak performance and avoid potential injuries. In this paper, we propose a sports health monitoring system that utilizes wearable devices, cloud computing, and deep learning to monitor the health status of sports persons. The system consists of a wearable device that collects various physiological parameters and a cloud server that contains a deep learning model to predict the sportsperson's health status. The proposed model combines a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and self-attention mechanisms. The model is trained on a large dataset of sports persons' physiological data and achieves an accuracy of 93%, specificity of 94%, precision of 95%, and an F1 score of 92%. The sports person can access the cloud server using their mobile phone to receive a report of their health status, which can be used to monitor their performance and make any necessary adjustments to their training or competition schedule.
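The self-attention mechanism named in this abstract can be sketched in a few lines. The function below is an illustrative simplification (queries, keys, and values all equal to the input, with learned projection weights omitted), computing scaled dot-product attention over a sequence of feature vectors:

```python
import math

def self_attention(x):
    """Scaled dot-product self-attention over a list of feature vectors.
    Each output vector is an attention-weighted mix of all input vectors."""
    d = len(x[0])
    out = []
    for q in x:
        # Similarity of this query against every position, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
        # Numerically stable softmax over the scores.
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        tot = sum(w)
        w = [v / tot for v in w]
        # Weighted sum of the value vectors (here, the inputs themselves).
        out.append([sum(w[t] * x[t][j] for t in range(len(x))) for j in range(d)])
    return out
```

In the paper's setting, such a layer would sit on top of CNN/LSTM features so that each time step can weigh the physiological readings at every other time step.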

RevDate: 2023-10-11

Ruiz-Zafra A, Precioso D, Salvador B, et al (2023)

NeoCam: An Edge-Cloud Platform for Non-Invasive Real-Time Monitoring in Neonatal Intensive Care Units.

IEEE journal of biomedical and health informatics, 27(6):2614-2624.

In this work we introduce NeoCam, an open source hardware-software platform for video-based monitoring of preterm infants in Neonatal Intensive Care Units (NICUs). NeoCam includes an edge computing device that performs video acquisition and processing in real time. Compared to other proposed solutions, it has the advantage of handling data more efficiently by performing most of the processing on the device, including proper anonymisation for better compliance with privacy regulations. In addition, it can perform various video analysis tasks of clinical interest in parallel at speeds of between 20 and 30 frames per second. We introduce algorithms to measure without contact the breathing rate, motor activity, body pose and emotional status of the infants. For breathing rate, our system shows good agreement with existing methods provided there is sufficient light and proper imaging conditions. Models for motor activity and stress detection are new to the best of our knowledge. NeoCam has been tested on preterm infants in the NICU of the University Hospital Puerta del Mar (Cádiz, Spain), and we report the lessons learned from this trial.

RevDate: 2023-10-11

Machado IA, Lacerda MAS, Martinez-Blanco MDR, et al (2023)

Chameleon: a cloud computing Industry 4.0 neutron spectrum unfolding code.

Radiation protection dosimetry, 199(15-16):1877-1882.

This work presents Chameleon, a cloud computing (CC) Industry 4.0 (I4) neutron spectrum unfolding code. The code was designed in the Python programming language using the Streamlit framework, and it is executed in the cloud as an I4 CC technology, accessed over the internet from mobile devices with a web browser. In its first version, as a proof of concept, the SPUNIT algorithm was implemented. The main functionalities and the preliminary tests performed to validate the code are presented. Chameleon solves the neutron spectrum unfolding problem and is easy, friendly, and intuitive to use. It can be applied with success in various workplaces. More validation tests are in progress. Future implementations will include improving the graphical user interface, inserting other algorithms, such as GRAVEL, MAXED and neural networks, and implementing an algorithm to estimate uncertainties in the calculated integral quantities.

RevDate: 2023-10-10

PLOS ONE Editors (2023)

Retraction: Relationship between employees' career maturity and career planning of edge computing and cloud collaboration from the perspective of organizational behavior.

PloS one, 18(10):e0292209 pii:PONE-D-23-30379.

RevDate: 2023-10-09

Chen C, Yang X, Jiang S, et al (2023)

Mapping and spatiotemporal dynamics of land-use and land-cover change based on the Google Earth Engine cloud platform from Landsat imagery: A case study of Zhoushan Island, China.

Heliyon, 9(9):e19654.

Land resources are an essential foundation for socioeconomic development. Island land resources are limited, the type changes are particularly frequent, and the environment is fragile. Therefore, large-scale, long-term, and high-accuracy land-use classification and spatiotemporal characteristic analysis are of great significance for the sustainable development of islands. Based on the advantages of remote sensing indices and principal component analysis in accurate classification, and taking Zhoushan Archipelago, China, as the study area, in this work long-term satellite remote sensing data were used to perform land-use classification and spatiotemporal characteristic analysis. The classification results showed that the land-use types could be accurately classified, with the overall accuracy and Kappa coefficient greater than 94% and 0.93, respectively. The results of the spatiotemporal characteristic analysis showed that the built-up land and forest land areas increased by 90.00 km² and 36.83 km², respectively, while the area of the cropland/grassland decreased by 69.77 km². The areas of the water bodies, tidal flats, and bare land exhibited slight change trends. The spatial coverage of Zhoushan Island continuously expanded toward the coast, encroaching on nearby sea areas and tidal flats. The cropland/grassland was the most transferred-out area, at up to 108.94 km², and built-up land was the most transferred-in area, at up to 73.31 km². This study provides a data basis and technical support for the scientific management of land resources.

RevDate: 2023-10-07

Lakhan A, Mohammed MA, Abdulkareem KH, et al (2023)

Autism Spectrum Disorder detection framework for children based on federated learning integrated CNN-LSTM.

Computers in biology and medicine, 166:107539 pii:S0010-4825(23)01004-1 [Epub ahead of print].

The incidence of Autism Spectrum Disorder (ASD) among children, attributed to genetics and environmental factors, has been increasing daily. ASD is a non-curable neurodevelopmental disorder that affects children's communication, behavior, social interaction, and learning skills. While machine learning has been employed for ASD detection in children, existing ASD frameworks offer limited services to monitor and improve the health of ASD patients. This paper presents a complex and efficient ASD framework with comprehensive services to enhance the results of existing ASD frameworks. Our proposed approach is the Federated Learning-enabled CNN-LSTM (FCNN-LSTM) scheme, designed for ASD detection in children using multimodal datasets. The ASD framework is built in a distributed computing environment where different ASD laboratories are connected to the central hospital. The FCNN-LSTM scheme enables local laboratories to train and validate different datasets, including the Ages and Stages Questionnaires (ASQ), Facial Communication and Symbolic Behavior Scales (CSBS) Dataset, Parents' Evaluation of Developmental Status (PEDS), Modified Checklist for Autism in Toddlers (M-CHAT), and Screening Tool for Autism in Toddlers and Children (STAT) datasets, on different computing laboratories. To ensure the security of patient data, we have implemented a security mechanism based on the Advanced Encryption Standard (AES) within the federated learning environment. This mechanism allows all laboratories to offload and download data securely. We integrate all trained datasets into the aggregated nodes and make the final decision for ASD patients based on the decision process tree. Additionally, we have designed various Internet of Things (IoT) applications to improve the efficiency of ASD patients and achieve more optimal learning results. Simulation results demonstrate that our proposed framework achieves an ASD detection accuracy of approximately 99% compared to all existing ASD frameworks.
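The aggregation step in a federated setup like the one described can be sketched with the standard FedAvg rule, shown below. This is a generic illustration, not the paper's exact aggregation scheme: each laboratory's model parameters are averaged, weighted by its local dataset size.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters,
    weighted by the number of local training samples at each client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]
```

A client holding three times as much data pulls the global model three times as hard toward its local parameters.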

RevDate: 2023-10-05

Lee J, Kim H, F Kron (2023)

Virtual education strategies in the context of sustainable health care and medical education: A topic modelling analysis of four decades of research.

Medical education [Epub ahead of print].

BACKGROUND: The growing importance of sustainability has led to the current literature being saturated with studies on the necessity of, and suggested topics for, education for sustainable health care (ESH). Even so, ESH implementation has been hindered by educator unpreparedness and resource scarcity. A potential resolution lies in virtual education. However, research on the strategies needed for successfully implementing virtual education in the context of sustainable health care and medical education is sparse; this study aims to fill the gap.

METHODS: Topic modelling, a computational text-mining method for analysing recurring patterns of co-occurring word clusters to reveal key topics prevalent across the texts, was used to examine how sustainability was addressed in research in medicine, medical education, and virtual education. A total of 17 631 studies, retrieved from Web of Science, Scopus and PubMed, were analysed.

RESULTS: Sustainability-related topics within health care, medical education and virtual education provided systematic implications for Sustainable Virtual Medical Education (SVME)-ESH via virtual platforms in a sustainable way. Analyses of keywords, phrases, topics and their associated networks indicate that SVME should address the three pillars of environmental, social and economic sustainability and medical practices to uphold them; employ different technologies and methods including simulations, virtual reality (VR), artificial intelligence (AI), cloud computing, distance learning; and implement strategies for collaborative development, persuasive diffusion and quality assurance.

CONCLUSIONS: This research suggests that sustainable strategies in virtual education for ESH require a systems approach, encompassing components such as learning content and objectives, evaluation, targeted learners, media, methods and strategies. The advancement of SVME necessitates that medical educators and researchers play a central and bridging role, guiding both the fields of sustainable health care and medical education in the development and implementation of SVME. In this way, they can prepare future physicians to address sustainability issues that impact patient care.

RevDate: 2023-09-29

Buyukcavus MH, Aydogan Akgun F, Solak S, et al (2023)

Facial recognition by cloud-based APIs following surgically assisted rapid maxillary expansion.

Journal of orofacial orthopedics = Fortschritte der Kieferorthopadie : Organ/official journal Deutsche Gesellschaft fur Kieferorthopadie [Epub ahead of print].

INTRODUCTION: This study aimed to investigate whether the facial soft tissue changes of individuals who had undergone surgically assisted rapid maxillary expansion (SARME) would be detected by three different well-known facial biometric recognition applications.

METHODS: To calculate similarity scores, the pre- and postsurgical photographs of 22 patients who had undergone SARME treatment were examined using three prominent cloud computing-based facial recognition application programming interfaces (APIs): AWS Rekognition (Amazon Web Services, Seattle, WA, USA), Microsoft Azure Cognitive (Microsoft, Redmond, WA, USA), and Face++ (Megvii, Beijing, China). The pre- and post-SARME photographs of the patients (relaxed, smiling, profile, and semiprofile) were used to calculate similarity scores using the APIs. Friedman's two-way analysis of variance and the Wilcoxon signed-rank test were used to compare the similarity scores obtained from the photographs of the different aspects of the face before and after surgery using the different programs. The relationship between measurements on lateral and posteroanterior cephalograms and the similarity scores was evaluated using the Spearman rank correlation.

RESULTS: The similarity scores were found to be lower with the Face++ program. When looking at the photo types, it was observed that the similarity scores were higher in the smiling photos. A statistically significant difference in the similarity scores (P < 0.05) was found between the relaxed and smiling photographs using the different programs. The correlation between the cephalometric and posteroanterior measurements and the similarity scores was not significant (P > 0.05).

CONCLUSION: SARME treatment caused a significant change in the similarity scores calculated with the help of three different facial recognition programs. The highest similarity scores were found in the smiling photographs, whereas the lowest scores were found in the profile photographs.

RevDate: 2023-09-30

Mangalampalli S, Karri GR, Gupta A, et al (2023)

Fault-Tolerant Trust-Based Task Scheduling Algorithm Using Harris Hawks Optimization in Cloud Computing.

Sensors (Basel, Switzerland), 23(18):.

Cloud computing is a distributed computing model which renders services for cloud users around the world. These services need to be rendered to customers with high availability and fault tolerance, but there are still chances of single-point failures in the cloud paradigm, and one challenge for cloud providers is effectively scheduling tasks to avoid failures and earn users' trust in their cloud services. This research proposes a fault-tolerant trust-based task scheduling algorithm in which we carefully schedule tasks within precise virtual machines by calculating priorities for tasks and VMs. Harris hawks optimization was used as the methodology to design our scheduler. We used CloudSim as the simulation tool for the entire experiment, with synthetically fabricated data following different distributions as well as real-time supercomputer work logs. Finally, we evaluated the proposed approach (FTTATS) against state-of-the-art approaches, i.e., ACO, PSO, and GA. From the simulation results, our proposed FTTATS reduces the makespan relative to the ACO, PSO, and GA algorithms by 24.3%, 33.31%, and 29.03%, respectively. The rate of failures was reduced relative to ACO, PSO, and GA by 65.31%, 65.4%, and 60.44%, respectively. Trust-based SLA parameters also improved: availability improved over ACO, PSO, and GA by 33.38%, 35.71%, and 28.24%, respectively; the success rate improved by 52.69%, 39.41%, and 38.45%, respectively; and turnaround efficiency improved by 51.8%, 47.2%, and 33.6%, respectively.
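The priority-driven task-to-VM placement described here can be illustrated with a simplified greedy baseline (not the paper's Harris-hawks-based FTTATS algorithm): longest tasks are scheduled first, each on the VM that would finish it earliest, and the makespan is the latest VM finish time.

```python
def schedule(tasks, vm_speeds):
    """Greedy priority scheduling: tasks (in instruction counts) are placed
    longest-first, each on the VM (speed in instructions/sec) that would
    finish it earliest. Returns (task -> VM assignment, makespan)."""
    ready = [0.0] * len(vm_speeds)          # time each VM becomes free
    assignment = {}
    for tid, length in sorted(enumerate(tasks), key=lambda t: -t[1]):
        # Earliest-finish-time VM for this task.
        finish = [ready[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=finish.__getitem__)
        ready[best] = finish[best]
        assignment[tid] = best
    return assignment, max(ready)
```

A metaheuristic such as Harris hawks optimization searches over assignments like these, scoring each candidate by makespan and trust/SLA terms instead of committing to the greedy choice.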

RevDate: 2023-10-03
CmpDate: 2023-09-29

Emish M, Kelani Z, Hassani M, et al (2023)

A Mobile Health Application Using Geolocation for Behavioral Activity Tracking.

Sensors (Basel, Switzerland), 23(18):.

The increasing popularity of mHealth presents an opportunity for collecting rich datasets using mobile phone applications (apps). Our health-monitoring mobile application uses motion detection to track an individual's physical activity and location. The data collected are used to improve health outcomes, such as reducing the risk of chronic diseases and promoting healthier lifestyles through analyzing physical activity patterns. Using smartphone motion detection sensors and GPS receivers, we implemented an energy-efficient tracking algorithm that captures user locations whenever they are in motion. To ensure security and efficiency in data collection and storage, encryption algorithms are used with a serverless and scalable cloud storage design. The database schema is designed around the Mobile Advertising ID (MAID) as a unique identifier for each device, allowing for accurate tracking and high data quality. Our application uses Google's Activity Recognition Application Programming Interface (API) on Android OS, or geofencing and motion sensors on iOS, to support most available smartphones. In addition, our app leverages blockchain and traditional payments to streamline participant compensation and has an intuitive user interface to encourage participation in research. The mobile tracking app was tested for 20 days on an iPhone 14 Pro Max, finding that it accurately captured location during movement and promptly resumed tracking after inactivity periods, while consuming a low percentage of battery life when running in the background.

RevDate: 2023-09-30

Lilhore UK, Manoharan P, Simaiya S, et al (2023)

HIDM: Hybrid Intrusion Detection Model for Industry 4.0 Networks Using an Optimized CNN-LSTM with Transfer Learning.

Sensors (Basel, Switzerland), 23(18):.

Industrial automation systems are undergoing a revolutionary change with the use of Internet-connected operating equipment and the adoption of cutting-edge advanced technology such as AI, IoT, cloud computing, and deep learning within business organizations. These innovative and additional solutions are facilitating Industry 4.0. However, the emergence of these technological advances and the quality solutions that they enable will also introduce unique security challenges whose consequences need to be identified. This research presents a hybrid intrusion detection model (HIDM) that uses OCNN-LSTM and transfer learning (TL) for Industry 4.0. The proposed model utilizes an optimized CNN with enhanced parameters obtained via the grey wolf optimizer (GWO) method, which fine-tunes the CNN parameters and helps to improve the model's prediction accuracy. The transfer learning model helps to train the model and transfers the knowledge to the OCNN-LSTM model. The TL method enhances the training process, acquiring the necessary knowledge from the OCNN-LSTM model and utilizing it in each next cycle, which helps to improve detection accuracy. To measure the performance of the proposed model, we conducted a multi-class classification analysis on various online industrial IDS datasets, i.e., ToN-IoT and UNSW-NB15. We conducted two experiments for these two datasets, and various performance-measuring parameters, i.e., precision, F-measure, recall, accuracy, and detection rate, were calculated for the OCNN-LSTM model with and without TL and also for the CNN and LSTM models. For the ToN-IoT dataset, the OCNN-LSTM with TL model achieved a precision of 92.7%; for the UNSW-NB15 dataset, the precision was 94.25%, higher than that of OCNN-LSTM without TL.
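The grey wolf optimizer used here for CNN parameter tuning follows a simple scheme: candidate solutions move toward the three best wolves found so far (alpha, beta, delta), with a step size that decays over iterations. A minimal one-dimensional sketch (illustrative only, not the paper's implementation) is:

```python
import random

def gwo_minimize(fitness, bounds, wolves=8, iters=60, seed=1):
    """Simplified grey wolf optimizer: follower wolves move toward the
    three best solutions found so far; the exploration factor decays to 0."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi)] for _ in range(wolves)]   # 1-D search space
    for it in range(iters):
        pack.sort(key=fitness)
        leaders = pack[:3]                  # alpha, beta, delta
        a = 2 * (1 - it / iters)            # exploration factor: 2 -> 0
        for w in pack[3:]:
            x = 0.0
            for leader in leaders:
                A = a * (2 * rng.random() - 1)
                C = 2 * rng.random()
                # Step toward this leader, perturbed by A and C.
                x += leader[0] - A * abs(C * leader[0] - w[0])
            w[0] = min(hi, max(lo, x / 3))  # average the three pulls, clamp
    return min(pack, key=fitness)
```

In the paper's use case the "position" would be a vector of CNN hyperparameters and the fitness a validation-loss evaluation rather than a closed-form function.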

RevDate: 2023-09-30

Li M, Zhang J, Lin J, et al (2023)

FireFace: Leveraging Internal Function Features for Configuration of Functions on Serverless Edge Platforms.

Sensors (Basel, Switzerland), 23(18):.

The emerging serverless computing has become a captivating paradigm for deploying cloud applications, alleviating developers' concerns about infrastructure resource management by configuring necessary parameters such as latency and memory constraints. Existing resource configuration solutions for cloud-based serverless applications can be broadly classified into modeling based on historical data or a combination of sparse measurements and interpolation/modeling. In pursuit of service response and conserving network bandwidth, platforms have progressively expanded from the traditional cloud to the edge. Compared to cloud platforms, serverless edge platforms often lead to more running overhead due to their limited resources, resulting in undesirable financial costs for developers when using the existing solutions. Meanwhile, it is extremely challenging to handle the heterogeneity of edge platforms, characterized by distinct pricing owing to their varying resource preferences. To tackle these challenges, we propose an adaptive and efficient approach called FireFace, consisting of prediction and decision modules. The prediction module extracts the internal features of all functions within the serverless application and uses this information to predict the execution time of the functions under specific configuration schemes. Based on the prediction module, the decision module analyzes the environment information and uses the Adaptive Particle Swarm Optimization algorithm and Genetic Algorithm Operator (APSO-GA) algorithm to select the most suitable configuration plan for each function, including CPU, memory, and edge platforms. In this way, it is possible to effectively minimize the financial overhead while fulfilling the Service Level Objectives (SLOs). Extensive experimental results show that our prediction model obtains optimal results under all three metrics, and the prediction error rate for real-world serverless applications is in the range of 4.25∼9.51%. Our approach can find the optimal resource configuration scheme for each application, which saves 7.2∼44.8% on average compared to other classic algorithms. Moreover, FireFace exhibits rapid adaptability, efficiently adjusting resource allocation schemes in response to dynamic environments.

RevDate: 2023-09-30

Yang D, Liu Z, S Wei (2023)

Interactive Learning for Network Anomaly Monitoring and Detection with Human Guidance in the Loop.

Sensors (Basel, Switzerland), 23(18):.

With the advancement in big data and cloud computing technology, we have witnessed tremendous developments in applying intelligent techniques in network operation and management. However, learning- and data-based solutions for network operation and maintenance cannot effectively adapt to the dynamic security situation or satisfy administrators' expectations alone. Anomaly detection of time-series monitoring indicators has been a major challenge for network administrative personnel. Monitored indicators in network operations are characterized by multiple instances with high dimensions and fluctuating time-series features and rely on system resource deployment and business environment variations. Hence, there is a growing consensus that conducting anomaly detection with machine intelligence under the operation and maintenance personnel's guidance is more effective than solely using learning and modeling. This paper intends to model the anomaly detection task as a Markov Decision Process and adopts the Double Deep Q-Network algorithm to train an anomaly detection agent, in which the multidimensional temporal convolution network is applied as the principal structure of the Q network and the interactive guidance information from the operation and maintenance personnel is introduced into the procedure to facilitate model convergence. Experimental results on the SMD dataset indicate that the proposed modeling and detection method achieves higher precision and recall rates compared to other learning-based methods. Our method achieves model optimization by using human-computer interactions continuously, which guarantees a faster and more consistent model training procedure and convergence.
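The Double Deep Q-Network update at the core of this approach decouples action selection (online network) from action evaluation (target network). A minimal sketch of the target computation, with Q-values shown as plain lists rather than neural network outputs:

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online network chooses the next action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    # Action selection by the online network...
    best = max(range(len(next_q_online)), key=next_q_online.__getitem__)
    # ...but evaluation by the (slower-moving) target network.
    return reward + gamma * next_q_target[best]
```

In the paper's setting the state would encode a window of monitoring indicators, the actions "normal"/"anomalous", and the reward would incorporate the operator's interactive guidance.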

RevDate: 2023-10-03

Canonico M, Desimoni F, Ferrero A, et al (2023)

Gait Monitoring and Analysis: A Mathematical Approach.

Sensors (Basel, Switzerland), 23(18):.

Gait abnormalities are common in the elderly and in individuals diagnosed with Parkinson's disease, often leading to reduced mobility and increased fall risk. Monitoring and assessing gait patterns in these populations play a crucial role in understanding disease progression, early detection of motor impairments, and developing personalized rehabilitation strategies. In particular, by identifying gait irregularities at an early stage, healthcare professionals can implement timely interventions and personalized therapeutic approaches, potentially delaying the onset of severe motor symptoms and improving overall patient outcomes. In this paper, we studied older adults affected by chronic diseases and/or Parkinson's disease by monitoring their gait with wearable devices that can accurately detect a person's movements. About 50 people were involved in the trial (20 with Parkinson's disease and 30 with chronic diseases), each wearing our device for at least 6 months. During the experiment, each device collected 25 accelerometer samples per second. By analyzing those data, we propose a metric for "gait quality" based on a measure of entropy obtained by applying the Fourier transform.
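The proposed gait-quality metric combines a Fourier transform with an entropy measure. One plausible reading is the spectral entropy of the accelerometer signal; the sketch below is an assumption, not the authors' exact formula, and computes the Shannon entropy of the normalized DFT power spectrum:

```python
import cmath
import math

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum of a signal.
    Regular (periodic) gait concentrates power in few bins -> low entropy;
    irregular gait spreads power across bins -> high entropy."""
    n = len(signal)
    power = []
    # Naive O(n^2) DFT is fine for short accelerometer windows.
    for k in range(1, n // 2 + 1):          # skip the DC component
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        power.append(abs(s) ** 2)
    total = sum(power) or 1.0
    p = [v / total for v in power]
    return -sum(v * math.log(v) for v in p if v > 0)
```

A pure sinusoid (a perfectly regular stride) yields entropy near zero, while a broadband signal yields a markedly higher value.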

RevDate: 2023-10-03

Wu YL, Wang CS, Weng WC, et al (2023)

Development of a Cloud-Based Image Processing Health Checkup System for Multi-Item Urine Analysis.

Sensors (Basel, Switzerland), 23(18):.

With the busy pace of modern life, an increasing number of people are afflicted by lifestyle diseases. Going directly to the hospital for medical checks is not only time-consuming but also costly. Fortunately, the emergence of rapid tests has alleviated this burden. Accurately interpreting test results is extremely important; misinterpreting the results of rapid tests could lead to delayed medical treatment. Given that the URS-10 serves as a rapid test capable of detecting 10 distinct parameters in urine samples, the results of assessing these parameters can offer insights into the subject's physiological condition. These parameters encompass aspects such as metabolism, renal function, diabetes, urinary tract disorders, hemolytic diseases, and acid-base balance, among others. Although the operational procedure is straightforward, the variegated color changes exhibited in the outcomes of individual parameters make it challenging for lay users to deduce causal factors solely from color variations. Moreover, potential misinterpretations could arise due to visual discrepancies. In this study, we successfully developed a cloud-based health checkup system that can be used in an indoor environment. The system is used by placing a URS-10 test strip on a colorimetric board developed for this study, then using a smartphone application to take images, which are uploaded to a server for cloud computing. Finally, the interpretation results are stored in the cloud and sent back to the smartphone to be checked by the user. Furthermore, to confirm whether the color calibration technology could eliminate color differences between different cameras, and whether the colorimetric board and the urine test strips could perform color comparisons correctly under different light intensities, indoor environments simulating specific light intensities were established for testing purposes. When comparing the experimental results to real test strips, only two groups failed to reach an identification success rate of 100%, and in both of these cases the success rate reached 95%. The experimental results confirmed that the system developed in this study was able to eliminate color differences between camera devices and could be used without special technical requirements or training.
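The colorimetric interpretation step can be sketched as nearest-reference matching in RGB space. The code and reference colors below are hypothetical illustrations, not the system's calibrated values:

```python
def match_color(sample, reference):
    """Classify a test-pad RGB reading by the nearest reference color
    (Euclidean distance in RGB space)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(reference, key=lambda label: dist(sample, reference[label]))
```

In the real system, the colorimetric board allows per-camera calibration before this comparison, so the distances are computed on color-corrected values.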

RevDate: 2023-09-27

Palmer GA, Tomkin G, Martín-Alcalá HE, et al (2023)

The Internet of Things in assisted reproduction.

Reproductive biomedicine online, 47(5):103338 pii:S1472-6483(23)00438-8 [Epub ahead of print].

The Internet of Things (IoT) is a network connecting physical objects with sensors, software and internet connectivity for data exchange. Integrating the IoT with medical devices shows promise in healthcare, particularly in IVF laboratories. By leveraging telecommunications, cybersecurity, data management and intelligent systems, the IoT can enable a data-driven laboratory with automation, improved conditions, personalized treatment and efficient workflows. The integration of 5G technology ensures fast and reliable connectivity for real-time data transmission, while blockchain technology secures patient data. Fog computing reduces latency and enables real-time analytics. Microelectromechanical systems enable wearable IoT and miniaturized monitoring devices for tracking IVF processes. However, challenges such as security risks and network issues must be addressed through cybersecurity measures and networking advancements. Clinical embryologists should maintain their expertise and knowledge for safety and oversight, even with IoT in the IVF laboratory.

RevDate: 2023-09-26

Baghdadi A, Guo E, Lama S, et al (2023)

Force Profile as Surgeon-Specific Signature.

Annals of surgery open : perspectives of surgical history, education, and clinical approaches, 4(3):e326.

OBJECTIVE: To investigate the notion that a surgeon's force profile can be the signature of their identity and performance.

SUMMARY BACKGROUND DATA: Surgeon performance in the operating room is an understudied topic. The advent of deep learning methods paired with a sensorized surgical device presents an opportunity to incorporate quantitative insight into surgical performance and processes. Using a device called the SmartForceps System and through automated analytics, we have previously reported surgeon force profile, surgical skill, and task classification. However, an investigation of whether an individual surgeon can be identified by surgical technique has yet to be studied.

METHODS: In this study, we investigate multiple neural network architectures to identify the surgeon associated with their time-series tool-tissue forces using bipolar forceps data. The surgeon associated with each 10-second window of force data was labeled, and the data were randomly split into 80% for model training and validation (10% validation) and 20% for testing. Data imbalance was mitigated through subsampling from more populated classes with a random size adjustment based on 0.1% of sample counts in the respective class. An exploratory analysis of force segments was performed to investigate underlying patterns differentiating individual surgical techniques.

RESULTS: In a dataset of 2819 ten-second time segments from 89 neurosurgical cases, the best-performing model achieved a micro-average area under the curve of 0.97, a testing F1-score of 0.82, a sensitivity of 82%, and a precision of 82%. This model used a time-series ResNet to extract features from the time-series data, with a linearized output fed into the XGBoost algorithm. Furthermore, we found that convolutional neural networks outperformed long short-term memory networks in both performance and speed. Using a weighted average approach, an ensemble model was able to identify an expert surgeon with 83.8% accuracy on a validation dataset.

CONCLUSIONS: Our results demonstrate that each surgeon has a unique force profile amenable to identification using deep learning methods. We anticipate our models will enable a quantitative framework to provide bespoke feedback to surgeons and to track their skill progression longitudinally. Furthermore, the ability to recognize individual surgeons introduces the mechanism of correlating outcome to surgeon performance.

RevDate: 2023-09-26

Habib W, J Connolly (2023)

A national-scale assessment of land use change in peatlands between 1989 and 2020 using Landsat data and Google Earth Engine-a case study of Ireland.

Regional environmental change, 23(4):124.

Over the centuries, anthropogenic pressure has severely impacted peatlands on the European continent. Peatlands cover ~ 21% (1.46 Mha) of Ireland's land surface, but 85% have been degraded due to management activities (land use). Ireland needs to meet its 2030 climate energy framework targets related to greenhouse gas (GHG) emissions from land use, land use change and forestry, including wetlands. Despite Ireland's voluntary decision to include peatlands in this system in 2020, information on land use activities and associated GHG emissions from peatlands is lacking. This study strives to fill this information gap by using Landsat (5, 8) data with Google Earth Engine and machine learning to examine and quantify land use on Irish peatlands across three time periods: 1990, 2005 and 2019. Four peatland land use classes were mapped and assessed: industrial peat extraction, forestry, grassland and residual peatland. The overall accuracy of the classification was 86% and 85% for the 2005 and 2019 maps, respectively. The accuracy of the 1990 dataset could not be assessed due to the unavailability of high-resolution reference data. The results indicate that extensive management activities have taken place in peatlands over the past three decades, which may have negative impacts on their ecological integrity and the many ecosystem services they provide. By utilising cloud computing, temporal mosaicking and Landsat data, this study developed a robust methodology that overcomes cloud contamination and produces the first peatland land use maps of Ireland with wall-to-wall coverage. This has the potential for regional and global applications, providing maps that could help understand unsustainable management practices on peatlands and the impact on GHG emissions.

RevDate: 2023-09-26

Computational Intelligence and Neuroscience (2023)

Retracted: The Reform of University Education Teaching Based on Cloud Computing and Big Data Background.

Computational intelligence and neuroscience, 2023:9893153.

[This retracts the article DOI: 10.1155/2022/8169938.].

RevDate: 2023-10-04

Verner E, Petropoulos H, Baker B, et al (2023)

BrainForge: an online data analysis platform for integrative neuroimaging acquisition, analysis, and sharing.

Concurrency and computation: practice & experience, 35(18).

BrainForge is a cloud-enabled, web-based analysis platform for neuroimaging research. This website allows users to archive data from a study and effortlessly process data on a high-performance computing cluster. After analyses are completed, results can be quickly shared with colleagues. BrainForge solves multiple problems for researchers who want to analyze neuroimaging data, including issues related to software, reproducibility, computational resources, and data sharing. BrainForge can currently process structural, functional, diffusion, and arterial spin labeling MRI modalities, including preprocessing and group level analyses. Additional pipelines are currently being added, and the pipelines can accept the BIDS format. Analyses are conducted completely inside of Singularity containers and utilize popular software packages including Nipype, Statistical Parametric Mapping, the Group ICA of fMRI Toolbox, and FreeSurfer. BrainForge also features several interfaces for group analysis, including a fully automated adaptive ICA approach.

RevDate: 2023-09-25
CmpDate: 2023-09-25

Lim HG, Fann YC, YG Lee (2023)

COWID: an efficient cloud-based genomics workflow for scalable identification of SARS-COV-2.

Briefings in bioinformatics, 24(5).

Implementing a specific cloud resource to analyze extensive genomic data on severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) poses a challenge when resources are limited. To overcome this, we repurposed a cloud platform initially designed for use in research on cancer genomics (https://cgc.sbgenomics.com) to enable its use in research on SARS-CoV-2 to build Cloud Workflow for Viral and Variant Identification (COWID). COWID is a workflow based on the Common Workflow Language that realizes the full potential of sequencing technology for use in reliable SARS-CoV-2 identification and leverages cloud computing to achieve efficient parallelization. COWID outperformed other contemporary methods for identification by offering scalable identification and reliable variant findings with no false-positive results. COWID typically processed each sample of raw sequencing data within 5 min at a cost of only US$0.01. The COWID source code is publicly available (https://github.com/hendrick0403/COWID) and can be accessed on any computer with Internet access. COWID is designed to be user-friendly; it can be implemented without prior programming knowledge. Therefore, COWID is a time-efficient tool that can be used during a pandemic.
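The efficiency claim above rests on a common cloud pattern: each sample's identification run is independent, so a batch fans out to parallel workers. A minimal sketch of that pattern with Python's standard library follows; the `identify_variants` stub is a hypothetical placeholder (threads stand in for cloud workers), not COWID's actual CWL pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def identify_variants(sample_id):
    # Hypothetical stand-in for one per-sample run: in COWID this
    # would be a full CWL workflow invocation on raw sequencing data.
    return {"sample": sample_id, "status": "done"}

def run_batch(sample_ids, workers=4):
    # Samples are independent, so the batch parallelizes trivially —
    # the property that makes per-sample cloud scale-out effective.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(identify_variants, sample_ids))

results = run_batch(["S1", "S2", "S3"])
```

In a real deployment each task would be a separate cloud job rather than a thread, but the fan-out/collect shape is the same.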

RevDate: 2023-09-22

Pessin VZ, Santos CAS, Yamane LH, et al (2023)

A method of Mapping Process for scientific production using the Smart Bibliometrics.

MethodsX, 11:102367.

Big data launches a modern way of producing science and research around the world. Due to an explosion of data available in scientific databases, combined with recent advances in information technology, the researcher has at his disposal new methods and technologies that facilitate scientific development. Considering the challenges of producing science in a dynamic and complex scenario, the main objective of this article is to present a method aligned with tools recently developed to support scientific production, based on steps and technologies that will help researchers to materialize their objectives efficiently and effectively. Applying this method, the researcher can apply science mapping and bibliometric techniques with agility, taking advantage of an easy-to-use solution with cloud computing capabilities. From the application of the "Scientific Mapping Process", the researcher will be able to generate strategic information for a result-oriented scientific production, assertively going through the main steps of research and boosting scientific discovery in the most diverse fields of investigation.
•The Scientific Mapping Process provides a method and a system to boost scientific development.
•It automates Science Mapping and bibliometric analysis from scientific datasets.
•It facilitates the researcher's work, increasing the assertiveness in scientific production.

RevDate: 2023-09-23

Willett DS, Brannock J, Dissen J, et al (2023)

NOAA Open Data Dissemination: Petabyte-scale Earth system data in the cloud.

Science advances, 9(38):eadh0032.

NOAA Open Data Dissemination (NODD) makes NOAA environmental data publicly and freely available on Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). These data can be accessed by anyone with an internet connection and span key datasets across the Earth system including satellite imagery, radar, weather models and observations, ocean databases, and climate data records. Since its inception, NODD has grown to provide public access to more than 24 PB of NOAA data and can support billions of requests and petabytes of access daily. Stakeholders routinely access more than 5 PB of NODD data every month. NODD continues to grow to support open petabyte-scale Earth system data science in the cloud by onboarding additional NOAA data and exploring performant data formats. Here, we document how this program works with a focus on provenance, key datasets, and use. We also highlight how to access these data with the goal of accelerating use of NOAA resources in the cloud.
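Because NODD buckets are public, objects can be fetched with a plain HTTPS GET and no AWS credentials. The helper below only constructs the virtual-hosted-style S3 URL; the GOES-16 bucket name and key layout shown are illustrative and should be checked against NOAA's NODD documentation before use.

```python
def nodd_https_url(bucket, key):
    # Public open-data objects are readable anonymously, so a plain
    # HTTPS GET against the bucket's S3 endpoint is sufficient.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# Hypothetical example: a GOES-16 ABI product object. The exact
# bucket and key structure are assumptions for illustration.
url = nodd_https_url("noaa-goes16",
                     "ABI-L2-CMIPF/2023/001/00/example.nc")
```

The same pattern applies to the Azure and GCP mirrors, with each provider's own anonymous-access endpoint format.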

RevDate: 2023-09-19
CmpDate: 2023-09-19

Namazi F, Ezoji M, EG Parmehr (2023)

Paddy Rice mapping in fragmented lands by improved phenology curve and correlation measurements on Sentinel-2 imagery in Google earth engine.

Environmental monitoring and assessment, 195(10):1220.

Accurate and timely rice crop mapping is important to address the challenges of food security, water management, disease transmission, and land use change. However, accurate rice crop mapping is difficult due to the presence of mixed pixels in small and fragmented rice fields as well as cloud cover. In this paper, a phenology-based method using Sentinel-2 time series images is presented to solve these problems. First, the improved rice phenology curve is extracted based on Normalized Difference Vegetation Index and Land Surface Water Index time series data of rice fields. Then, the correlation between the rice phenology curve and the time series data of each pixel is computed. The correlation result of each pixel shows the similarity of its time series behavior to the proposed rice phenology curve. In the next step, the maximum correlation value and its occurrence time are used as the feature vector of each pixel for classification. Since correlation measurement provides data with better separability than its input data, the classifier can be trained with fewer samples and the classification is more accurate. The proposed correlation-based algorithm can be implemented in parallel. All the processes were performed on the Google Earth Engine cloud platform on Sentinel-2 time series images. The implementations show the high accuracy of this method.
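The per-pixel step described above — correlating each pixel's time series with a reference phenology curve, then keeping the maximum correlation and the time at which it occurs as classification features — can be sketched with NumPy. This is an illustrative reading of the method using a sliding Pearson correlation; the series lengths and synthetic values are assumptions, not the paper's data.

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation of two equal-length 1-D arrays.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def phenology_features(pixel_series, reference_curve):
    """Slide the reference phenology curve along a pixel's time
    series; return (max correlation, index where it occurs)."""
    k = len(reference_curve)
    corrs = np.array([pearson(pixel_series[i:i + k], reference_curve)
                      for i in range(len(pixel_series) - k + 1)])
    return corrs.max(), int(corrs.argmax())

# Synthetic pixel whose values follow the reference curve from t = 3,
# so the best match should be a perfect correlation at index 3.
ref = np.array([0.2, 0.5, 0.8, 0.5, 0.2])
pixel = np.concatenate([np.full(3, 0.1), ref, np.full(3, 0.1)])
best_r, best_t = phenology_features(pixel, ref)
```

Because each pixel is processed independently, this step parallelizes naturally, which is what makes it a good fit for a cloud platform like Google Earth Engine.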

RevDate: 2023-09-16

Yang J, Han J, Wan Q, et al (2023)

A novel similarity measurement for triangular cloud models based on dual consideration of shape and distance.

PeerJ. Computer science, 9:e1506.

It is important to be able to measure the similarity between two uncertain concepts for many real-life AI applications, such as image retrieval, collaborative filtering, risk assessment, and data clustering. Cloud models are important cognitive computing models that show promise in measuring the similarity of uncertain concepts. Here, we aim to address the shortcomings of existing cloud model similarity measurement algorithms, such as poor discrimination ability and unstable measurement results. We propose an EPTCM algorithm based on the triangular fuzzy number EW-type closeness and cloud drop variance, considering the shape and distance similarities of existing cloud models. The experimental results show that the EPTCM algorithm has good recognition and classification accuracy and is more accurate than the existing Likeness comparing method (LICM), overlap-based expectation curve (OECM), fuzzy distance-based similarity (FDCM) and multidimensional similarity cloud model (MSCM) methods. The experimental results also demonstrate that the EPTCM algorithm has successfully overcome the shortcomings of existing algorithms. In summary, the EPTCM method proposed here is effective and feasible to implement.
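The paper's core idea — scoring similarity of uncertain concepts by combining a shape term with a distance term — can be illustrated on normal cloud models represented by their numerical characteristics (Ex, En, He). The combination below is a deliberately simple toy, not the EPTCM measure itself; the 3En support rule and the weighting are illustrative assumptions.

```python
def cloud_similarity(c1, c2, w_shape=0.5):
    """Toy similarity for two cloud models given as (Ex, En, He).

    Combines a distance term (closeness of expectations, scaled by
    pooled entropy) with a shape term (overlap of the
    [Ex - 3En, Ex + 3En] supports). Illustrative only — not EPTCM.
    """
    ex1, en1, _ = c1
    ex2, en2, _ = c2
    # Distance similarity: decays as the expectation gap grows.
    scale = 3 * (en1 + en2) or 1.0
    dist_sim = max(0.0, 1.0 - abs(ex1 - ex2) / scale)
    # Shape similarity: overlap ratio of the 3En supports.
    lo = max(ex1 - 3 * en1, ex2 - 3 * en2)
    hi = min(ex1 + 3 * en1, ex2 + 3 * en2)
    union = (max(ex1 + 3 * en1, ex2 + 3 * en2)
             - min(ex1 - 3 * en1, ex2 - 3 * en2))
    shape_sim = max(0.0, hi - lo) / union if union else 1.0
    return w_shape * shape_sim + (1 - w_shape) * dist_sim

same = cloud_similarity((5.0, 1.0, 0.1), (5.0, 1.0, 0.1))
far = cloud_similarity((0.0, 1.0, 0.1), (10.0, 1.0, 0.1))
```

Identical concepts score 1.0 and widely separated ones score near 0, which is the minimal behavior any such dual shape-and-distance measure must exhibit.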

