QUERY RUN: 26 Jul 2024 at 01:31
HITS: 3651
Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, educator, science administrator, publisher, information technologist, and IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography, created 26 Jul 2024 at 01:31

Cloud Computing

Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
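For a concrete sense of how pay-as-you-go pricing can surprise a budget, the arithmetic below sketches a burst-computing scenario; the node counts and the hourly rate are invented for illustration and are not any provider's actual prices.

```python
# Hypothetical illustration of pay-as-you-go burst costs; the hourly rate
# below is invented for the example and is not any provider's real price.
baseline_nodes, burst_nodes = 4, 64          # steady-state vs. peak-demand VMs
rate_per_node_hour = 0.50                    # assumed USD per VM-hour
burst_hours_per_month = 40                   # assumed peak-demand window

baseline_cost = baseline_nodes * rate_per_node_hour * 24 * 30
burst_cost = (burst_nodes - baseline_nodes) * rate_per_node_hour * burst_hours_per_month

print(f"baseline: ${baseline_cost:,.2f}/month, burst surcharge: ${burst_cost:,.2f}")
# baseline: $1,440.00/month, burst surcharge: $1,200.00
```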

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2024-07-23
CmpDate: 2024-07-23

Nguyen H, Pham VD, Nguyen H, et al (2024)

CCPA: cloud-based, self-learning modules for consensus pathway analysis using GO, KEGG and Reactome.

Briefings in bioinformatics, 25(Supplement_1):.

This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The module delivers learning materials on Cloud-based Consensus Pathway Analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Pathway analysis is important because it allows us to gain insights into biological mechanisms underlying conditions. But the availability of many pathway analysis methods, the requirement of coding skills, and the focus of current tools on only a few species all make it very difficult for biomedical researchers to self-learn and perform pathway analysis efficiently. Furthermore, there is a lack of tools that allow researchers to compare analysis results obtained from different experiments and different analysis methods to find consensus results. To address these challenges, we have designed a cloud-based, self-learning module that provides consensus results among established, state-of-the-art pathway analysis techniques to provide students and researchers with necessary training and example materials. The training module consists of five Jupyter Notebooks that provide complete tutorials for the following tasks: (i) process expression data; (ii) perform differential analysis, then visualize and compare the results obtained from four differential analysis methods (limma, t-test, edgeR, DESeq2); (iii) process three pathway databases (GO, KEGG and Reactome); (iv) perform pathway analysis using eight methods (ORA, CAMERA, KS test, Wilcoxon test, FGSEA, GSA, SAFE and PADOG); and (v) combine results of multiple analyses. We also provide examples, source code, explanations and instructional videos for trainees to complete each Jupyter Notebook. The module supports the analysis of many model (e.g. human, mouse, fruit fly, zebrafish) and non-model species. The module is publicly available at https://github.com/NIGMS/Consensus-Pathway-Analysis-in-the-Cloud.
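Among the eight pathway-analysis methods listed, over-representation analysis (ORA) reduces to a hypergeometric test, which the sketch below illustrates on invented gene-set sizes; it is a minimal stand-in, not code from the module's notebooks.

```python
# Minimal over-representation analysis (ORA) sketch -- one of the eight
# pathway-analysis methods the module covers. Gene-set sizes are invented.
from scipy.stats import hypergeom

universe = 20000          # genes measured in the experiment
pathway_size = 150        # genes annotated to the pathway (e.g. a KEGG term)
de_genes = 400            # differentially expressed genes
overlap = 12              # DE genes that fall in the pathway

# P(X >= overlap) under sampling without replacement
p_value = hypergeom.sf(overlap - 1, universe, pathway_size, de_genes)
print(f"ORA enrichment p-value: {p_value:.3g}")
```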

RevDate: 2024-07-23
CmpDate: 2024-07-23

Woessner AE, Anjum U, Salman H, et al (2024)

Identifying and training deep learning neural networks on biomedical-related datasets.

Briefings in bioinformatics, 25(Supplement_1):.

This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on implementing deep learning algorithms for biomedical image data in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical-related datasets are widely used in both research and clinical settings, but the ability of professionally trained clinicians and researchers to interpret these datasets becomes difficult as their size and breadth increase. Artificial intelligence, and specifically deep learning neural networks, have recently become an important tool in novel biomedical research. However, use is limited due to their computational requirements and confusion regarding different neural network architectures. The goal of this learning module is to introduce types of deep learning neural networks and cover practices that are commonly used in biomedical research. This module is subdivided into four submodules that cover classification, augmentation, segmentation and regression. Each complementary submodule was written on the Google Cloud Platform and contains detailed code and explanations, as well as quizzes and challenges to facilitate user training. Overall, the goal of this learning module is to enable users to identify and integrate the correct type of neural network with their data while highlighting the ease of use of cloud computing for implementing neural networks.
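As a flavor of what the classification submodule introduces, here is a minimal convolutional classifier in PyTorch; the architecture, input size, and two-class setup are illustrative assumptions rather than the module's actual notebook code.

```python
# Sketch of the kind of small image classifier a classification submodule
# introduces; layer sizes and the two-class setup are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
scores = model(torch.randn(8, 1, 64, 64))   # batch of 8 grayscale images
print(scores.shape)                          # torch.Size([8, 2])
```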

RevDate: 2024-07-23
CmpDate: 2024-07-23

O'Connell KA, Kopchick B, Carlson T, et al (2024)

Understanding proteome quantification in an interactive learning module on Google Cloud Platform.

Briefings in bioinformatics, 25(Supplement_1):.

This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on protein quantification in an interactive format that uses appropriate cloud resources for data access and analyses. Quantitative proteomics is a rapidly growing discipline due to the cutting-edge technologies of high-resolution mass spectrometry. There are many data types to consider for proteome quantification, including data-dependent acquisition, data-independent acquisition, multiplexing with Tandem Mass Tag (TMT) reporter ions, spectral counts, and more. As part of the NIH NIGMS Sandbox effort, we developed a learning module to introduce students to mass spectrometry terminology, normalization methods, statistical designs, and the basics of R programming. By utilizing the Google Cloud environment, the learning module is easily accessible without the need for complex installation procedures. The proteome quantification module demonstrates the analysis of a provided TMT10plex data set using MS3 reporter ion intensity quantitative values in a Jupyter notebook with an R kernel. The learning module begins with the raw intensities, performs normalization and differential abundance analysis using limma models, and is designed for researchers with a basic understanding of mass spectrometry and the R programming language. Learners walk away with a better understanding of how to navigate the Google Cloud Platform for proteomic research and with the basics of mass spectrometry data analysis at the command line.
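The normalization-then-testing workflow described can be sketched compactly. The module itself works in R with limma, so the Python version below (median normalization of log intensities, per-protein t-tests, Benjamini-Hochberg correction) only mirrors the logic on synthetic data.

```python
# Illustrative median normalization of TMT reporter-ion intensities followed
# by a per-protein t-test; the module uses R/limma, so this sketch mirrors
# the normalization/testing logic, not the module's actual code.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
log_intensities = rng.normal(20, 2, size=(500, 10))     # 500 proteins x 10 channels

# Median normalization: align each channel's median log-intensity
log_norm = log_intensities - np.median(log_intensities, axis=0)

group_a, group_b = log_norm[:, :5], log_norm[:, 5:]     # TMT10plex split 5 vs 5
t, p = stats.ttest_ind(group_a, group_b, axis=1)
adj_p = multipletests(p, method="fdr_bh")[1]            # Benjamini-Hochberg FDR
print((adj_p < 0.05).sum(), "proteins pass 5% FDR")
```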

RevDate: 2024-07-23
CmpDate: 2024-07-23

Qin Y, Maggio A, Hawkins D, et al (2024)

Whole-genome bisulfite sequencing data analysis learning module on Google Cloud Platform.

Briefings in bioinformatics, 25(Supplement_1):.

This study describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module is designed to facilitate interactive learning of whole-genome bisulfite sequencing (WGBS) data analysis utilizing cloud-based tools in Google Cloud Platform, such as Cloud Storage, Vertex AI notebooks and Google Batch. WGBS is a powerful technique that can provide comprehensive insights into DNA methylation patterns at single-cytosine resolution, essential for understanding epigenetic regulation across the genome. The learning module first provides step-by-step tutorials that guide learners through the two main stages of WGBS data analysis: preprocessing and the identification of differentially methylated regions. It then provides a streamlined workflow and demonstrates how to use it effectively for large datasets, given the power of cloud infrastructure. The integration of these interconnected submodules progressively deepens the user's understanding of the WGBS analysis process along with the use of cloud resources. Through this module, we aim to enhance the accessibility and adoption of cloud computing in epigenomic research, accelerating advances in the field and beyond.
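A toy version of the per-region comparison underlying differentially methylated region calling might look like the following; the counts are invented, and the module's actual pipeline operates on real bisulfite alignments with dedicated tools.

```python
# Toy example of the per-region test behind differentially methylated region
# (DMR) calling: compare methylated/unmethylated read counts between two
# conditions. Counts are invented for illustration.
from scipy.stats import fisher_exact

# (methylated, unmethylated) CpG read counts in one candidate region
control = (180, 20)     # ~90% methylated
treated = (95, 105)     # ~48% methylated

odds_ratio, p_value = fisher_exact([control, treated])
print(f"odds ratio={odds_ratio:.2f}, p={p_value:.2e}")
```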

RevDate: 2024-07-23
CmpDate: 2024-07-23

Hemme CL, Beaudry L, Yosufzai Z, et al (2024)

A cloud-based learning module for biomarker discovery.

Briefings in bioinformatics, 25(Supplement_1):.

This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on basic principles in biomarker discovery in an interactive format that uses appropriate cloud resources for data access and analyses. In collaboration with Google Cloud, Deloitte Consulting and NIGMS, the Rhode Island INBRE Molecular Informatics Core developed a cloud-based training module for biomarker discovery. The module consists of nine submodules covering various topics on biomarker discovery and assessment; it is deployed on the Google Cloud Platform and available for public use through the NIGMS Sandbox. The submodules are written as a series of Jupyter Notebooks utilizing R and Bioconductor for biomarker and omics data analysis. The submodules cover the following topics: (1) introduction to biomarkers; (2) introduction to R data structures; (3) introduction to linear models; (4) introduction to exploratory analysis; (5) rat renal ischemia-reperfusion injury case study; (6) linear and logistic regression for comparison of quantitative biomarkers; (7) exploratory analysis of proteomics IRI data; (8) identification of IRI biomarkers from proteomic data; and (9) machine learning methods for biomarker discovery. Each notebook includes an in-line quiz for self-assessment on the submodule topic, and an overview video is available on YouTube (https://www.youtube.com/watch?v=2-Q9Ax8EW84).
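In the spirit of submodules (6) and (9), the sketch below fits a logistic-regression biomarker classifier and scores it by AUC; it uses scikit-learn on synthetic data, whereas the module itself uses R and Bioconductor.

```python
# Sketch of logistic regression as a baseline biomarker classifier with AUC
# assessment; synthetic data stands in for the module's case-study datasets.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```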

RevDate: 2024-07-23
CmpDate: 2024-07-23

Wilkins OM, Campbell R, Yosufzai Z, et al (2024)

Cloud-based introduction to BASH programming for biologists.

Briefings in bioinformatics, 25(Supplement_1):.

This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning', https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial authored by the National Institute of General Medical Sciences, NIGMS Sandbox: A Learning Platform toward Democratizing Cloud Computing for Biomedical Research, at the beginning of this supplement. This module delivers learning materials introducing the utility of the BASH (Bourne Again Shell) programming language for genomic data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. The next-generation sequencing revolution has generated massive amounts of novel biological data from a multitude of platforms that survey an ever-growing list of genomic modalities. These data require significant downstream computational and statistical analyses to glean meaningful biological insights. However, the skill sets required to generate these data are vastly different from the skills required to analyze them. Bench scientists who generate next-generation data often lack the training required to analyze these datasets and require support from bioinformatics specialists. Dedicated computational training is required to empower biologists in the area of genomic data analysis; however, learning to efficiently leverage a command-line interface is a significant barrier to learning how to leverage common analytical tools. Cloud platforms have the potential to democratize access to the technical tools and computational resources necessary to work with modern sequencing data, providing an effective framework for bioinformatics education. This module aims to provide an interactive platform that gradually builds the technical skills and knowledge needed to interact with genomics data on the command line in the Cloud. The sandbox format of this module enables users to move through the material at their own pace and test their grasp of the material with knowledge self-checks before building on that material in the next submodule.

RevDate: 2024-07-23
CmpDate: 2024-07-23

Veerappa AM, Rowley MJ, Maggio A, et al (2024)

CloudATAC: a cloud-based framework for ATAC-Seq data analysis.

Briefings in bioinformatics, 25(Supplement_1):.

Assay for transposase-accessible chromatin with high-throughput sequencing (ATAC-seq) generates genome-wide chromatin accessibility profiles, providing valuable insights into epigenetic gene regulation at both pooled-cell and single-cell population levels. Comprehensive analysis of ATAC-seq data involves the use of various interdependent programs. Learning the correct sequence of steps needed to process the data can represent a major hurdle. Selecting appropriate parameters at each stage, including pre-analysis, core analysis, and advanced downstream analysis, is important to ensure accurate analysis and interpretation of ATAC-seq data. Additionally, obtaining and working within a limited computational environment presents a significant challenge to non-bioinformatic researchers. Therefore, we present CloudATAC, an open-source, cloud-based interactive framework with a scalable, flexible, and streamlined analysis workflow based on best practices for pooled-cell and single-cell ATAC-seq data. The framework uses the on-demand computational power and memory, scalability, and secure, compliant environment provided by Google Cloud. Additionally, we leverage Jupyter Notebook's interactive computing platform that combines live code, tutorials, narrative text, flashcards, quizzes, and custom visualizations to enhance learning and analysis. Further, leveraging GPU instances has significantly improved the run-time of the single-cell framework. The source code and data are publicly available through NIH Cloud Lab https://github.com/NIGMS/ATAC-Seq-and-Single-Cell-ATAC-Seq-Analysis. This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.

RevDate: 2024-07-23

Almalawi A, Zafar A, Unhelkar B, et al (2024)

Enhancing security in smart healthcare systems: Using intelligent edge computing with a novel Salp Swarm Optimization and radial basis neural network algorithm.

Heliyon, 10(13):e33792.

A smart healthcare system (SHS) is a health service system that employs advanced technologies such as wearable devices, the Internet of Things (IoT), and mobile internet to dynamically access information and connect people and institutions related to healthcare, thereby actively managing and responding to medical ecosystem needs. Edge computing (EC) plays a significant role in SHS as it enables real-time data processing and analysis at the data source, which reduces latency and improves medical intervention speed. However, the integration of patient information, including electronic health records (EHRs), into the SHS framework raises security and privacy concerns. To address these issues, an intelligent EC framework was proposed in this study. The objective of this study is to accurately identify security threats and ensure secure data transmission in the SHS environment. The proposed EC framework leverages the effectiveness of Salp Swarm Optimization and a Radial Basis Functional Neural Network (SS-RBFN) to enhance security and data privacy. The proposed methodology commences with the collection of healthcare information, which is then pre-processed to ensure the consistency and quality of the database for further analysis. Subsequently, the SS-RBFN algorithm was trained using the pre-processed database to distinguish between normal and malicious data streams accurately, offering continuous monitoring in the SHS environment. Additionally, a Rivest-Shamir-Adleman (RSA) approach was applied to safeguard data against security threats during transmission to cloud storage. The proposed model was trained and validated using the IoT-based healthcare database available on Kaggle, and the experimental results demonstrated that it achieved 99.87% accuracy, 99.76% precision, 99.49% F-measure, 98.99% recall, 97.37% throughput, and 1.2 s latency. Furthermore, the results achieved by the proposed model were compared with existing models to validate its effectiveness in enhancing security.
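The RSA leg of the pipeline (protecting data in transit to cloud storage) can be sketched with the Python `cryptography` package; the 2048-bit key and OAEP padding are typical choices assumed here, not parameters taken from the paper.

```python
# Sketch of RSA-protected transmission of a health record to cloud storage.
# Key size and padding are common defaults assumed for illustration.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

record = b"patient vitals: hr=72, spo2=98"      # invented sample payload
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(record, oaep)    # what leaves the edge node
assert private_key.decrypt(ciphertext, oaep) == record
```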

RevDate: 2024-07-22
CmpDate: 2024-07-22

Pulido-Gaytan B, A Tchernykh (2024)

Self-learning activation functions to increase accuracy of privacy-preserving Convolutional Neural Networks with homomorphic encryption.

PloS one, 19(7):e0306420 pii:PONE-D-23-25899.

The widespread adoption of cloud computing necessitates privacy-preserving techniques that allow information to be processed without disclosure. This paper proposes a method to increase the accuracy and performance of privacy-preserving Convolutional Neural Networks with Homomorphic Encryption (CNN-HE) by Self-Learning Activation Functions (SLAF). SLAFs are polynomials with trainable coefficients updated during training, together with synaptic weights, for each polynomial independently to learn task-specific and CNN-specific features. We theoretically prove its feasibility to approximate any continuous activation function to the desired error as a function of the SLAF degree. Two CNN-HE models are proposed: CNN-HE-SLAF and CNN-HE-SLAF-R. In the first model, all activation functions are replaced by SLAFs, and the CNN is trained to find weights and coefficients. In the second, the CNN is trained with the original activation, then weights are fixed, the activation is substituted by SLAF, and the CNN is shortly re-trained to adapt the SLAF coefficients. We show that such self-learning can achieve the same accuracy (99.38%) as a non-polynomial ReLU over non-homomorphic CNNs and lead to an increase in accuracy (99.21%) and higher performance (6.26 times faster) than the state-of-the-art CNN-HE CryptoNets on the MNIST optical character recognition benchmark dataset.
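A minimal PyTorch rendering of the SLAF idea, a polynomial activation whose coefficients are trained alongside the synaptic weights, might look like this; the degree and initialization are illustrative assumptions.

```python
# Minimal sketch of a self-learning activation function (SLAF): a polynomial
# with trainable coefficients, as described in the abstract. Degree and
# initialization are illustrative; the paper's training details may differ.
import torch
import torch.nn as nn

class SLAF(nn.Module):
    def __init__(self, degree: int = 2):
        super().__init__()
        # One trainable coefficient per power x^0 .. x^degree
        self.coeffs = nn.Parameter(torch.zeros(degree + 1))
        with torch.no_grad():
            self.coeffs[1] = 1.0          # start near the identity function

    def forward(self, x):
        powers = torch.stack([x**i for i in range(len(self.coeffs))], dim=-1)
        return (powers * self.coeffs).sum(dim=-1)

act = SLAF(degree=2)
y = act(torch.randn(4, 8))    # polynomial is HE-friendly: no ReLU comparison
print(y.shape, [p.requires_grad for p in act.parameters()])
```

Polynomials matter here because homomorphic encryption schemes evaluate additions and multiplications but not the comparison inside ReLU, so a low-degree trainable polynomial is a natural drop-in.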

RevDate: 2024-07-19

Luo W, Huang K, Liang X, et al (2024)

Process Manufacturing Intelligence Empowered by Industrial Metaverse: A Survey.

IEEE transactions on cybernetics, PP: [Epub ahead of print].

The goal of intelligent process manufacturing is to achieve high efficiency and greener production across the entire process. However, the information systems it relies on are functionally independent, leaving knowledge gaps between levels, and decision-making still requires substantial manual work by knowledge workers. The industrial metaverse is a necessary means of bridging these knowledge gaps through sharing and collaborative decision-making. Considering the safety and stability requirements of process manufacturing, this article presents a thorough survey of process manufacturing intelligence empowered by the industrial metaverse. First, it analyzes the current status and challenges of process manufacturing intelligence, and then summarizes the latest developments in key enabling technologies of the industrial metaverse, such as interconnection technologies, artificial intelligence, cloud-edge computing, digital twins (DTs), immersive interaction, and blockchain technology. On this basis, taking into account the characteristics of process manufacturing, a construction approach and architecture for the process industrial metaverse are proposed: a virtual-real fused industrial metaverse construction method that combines DTs with physical avatars, which can effectively ensure the safety of the metaverse's application in industrial scenarios. Finally, we conduct a preliminary exploration to demonstrate the feasibility of the proposed method.

RevDate: 2024-07-18
CmpDate: 2024-07-18

McCoy ES, Park SK, Patel RP, et al (2024)

Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses.

Pain, 165(8):1793-1805.

Facial grimacing is used to quantify spontaneous pain in mice and other mammals, but scoring relies on humans with different levels of proficiency. Here, we developed a cloud-based software platform called PainFace (http://painface.net) that uses machine learning to detect 4 facial action units of the mouse grimace scale (orbitals, nose, ears, whiskers) and score facial grimaces of black-coated C57BL/6 male and female mice on a 0 to 8 scale. Platform accuracy was validated in 2 different laboratories, with 3 conditions that evoke grimacing: laparotomy surgery, bilateral hindpaw injection of carrageenan, and intraplantar injection of formalin. PainFace can generate up to 1 grimace score per second from a standard 30 frames/s video, making it possible to quantify facial grimacing over time, and operates at a speed that scales with computing power. By analyzing the frequency distribution of grimace scores, we found that mice spent 7x more time in a "high grimace" state following laparotomy surgery relative to sham surgery controls. Our study shows that PainFace reproducibly quantifies facial grimaces indicative of nonevoked spontaneous pain and enables laboratories to standardize and scale up facial grimace analyses.

RevDate: 2024-07-18

Malakhov KS (2024)

Innovative Hybrid Cloud Solutions for Physical Medicine and Telerehabilitation Research.

International journal of telerehabilitation, 16(1):e6635.

PURPOSE: The primary objective of this study was to develop and implement a Hybrid Cloud Environment for Telerehabilitation (HCET) to enhance patient care and research in the Physical Medicine and Rehabilitation (PM&R) domain. This environment aims to integrate advanced information and communication technologies to support both traditional in-person therapy and digital health solutions.

BACKGROUND: Telerehabilitation is emerging as a core component of modern healthcare, especially within the PM&R field. By applying digital health technologies, telerehabilitation provides continuous, comprehensive support for patient rehabilitation, bridging the gap between traditional therapy and remote healthcare delivery. This study focuses on the design and implementation of a hybrid HCET system tailored for the PM&R domain.

METHODS: The study involved the development of a comprehensive architectural and structural organization for the HCET, including a three-layer model (infrastructure, platform, service layers). Core components of the HCET were designed and implemented, such as the Hospital Information System (HIS) for PM&R, the MedRehabBot system, and the MedLocalGPT project. These components were integrated using advanced technologies like large language models (LLMs), word embeddings, and ontology-related approaches, along with APIs for enhanced functionality and interaction.

FINDINGS: The HCET system was successfully implemented and is operational, providing a robust platform for telerehabilitation. Key features include the MVP of the HIS for PM&R, supporting patient profile management and rehabilitation goal tracking; the MedRehabBot and WhiteBookBot systems; and the MedLocalGPT project, which offers sophisticated querying capabilities and access to extensive domain-specific knowledge. The system supports both Ukrainian and English languages, ensuring broad accessibility and usability.

INTERPRETATION: The practical implementation and operation of the HCET system demonstrate its potential to transform telerehabilitation within the PM&R domain. By integrating advanced technologies and providing comprehensive digital health solutions, the HCET enhances patient care, supports ongoing rehabilitation, and facilitates advanced research. Future work will focus on optimizing services and expanding language support to further improve the system's functionality and impact.

RevDate: 2024-07-17
CmpDate: 2024-07-17

Idalino FD, Rosa KKD, Hillebrand FL, et al (2024)

Variability in wet and dry snow radar zones in the North of the Antarctic Peninsula using a cloud computing environment.

Anais da Academia Brasileira de Ciencias, 96(suppl 2):e20230704 pii:S0001-37652024000401101.

This work investigated the annual variations in the dry snow radar zone (DSRZ) and wet snow radar zone (WSRZ) in the north of the Antarctic Peninsula between 2015 and 2023. A specific code for snow zone detection on Sentinel-1 images was created in Google Earth Engine by combining the CryoSat-2 digital elevation model and air temperature data from ERA5. Regions with backscatter coefficients (σ⁰) exceeding -6.5 dB were considered the extent of surface melt occurrence, and the dry snow line was taken to coincide with the -11 °C isotherm of the average annual air temperature. The annual variation in WSRZ exhibited moderate correlations with annual average air temperature, total precipitation, and the sum of annual degree-days. However, statistical tests indicated low coefficients of determination and no significant trends in DSRZ behavior with atmospheric variables. The reduction in DSRZ area for 2019/2020 and 2020/2021 compared to 2018/2018 indicated an upward shift of the dry snow line in this region of the Antarctic Peninsula. The methodology demonstrated its efficacy for both quantitative and qualitative analyses of data obtained in digital processing environments, allowing monitoring of large-scale spatial and temporal variations and improving understanding of changes in glacier mass loss.
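The thresholding step the authors describe (treating σ⁰ > -6.5 dB as melt extent) can be sketched with the Earth Engine Python API; the collection ID and HH band are typical for Sentinel-1 GRD over this region, but the paper's actual Google Earth Engine script may differ.

```python
# Hedged sketch of Sentinel-1 backscatter thresholding in Earth Engine:
# values above -6.5 dB are flagged as the wet-snow/melt extent. Dates,
# location, and band choice are illustrative assumptions.
import ee
ee.Initialize()  # assumes prior Earth Engine authentication

s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
        .filterDate("2020-01-01", "2020-02-28")        # austral melt season
        .filterBounds(ee.Geometry.Point(-58.0, -63.5))  # N Antarctic Peninsula
        .select("HH"))

mean_db = s1.mean()            # mean sigma0 in dB over the period
wet_snow = mean_db.gt(-6.5)    # 1 where surface melt is inferred
print(wet_snow.getInfo()["bands"][0]["id"])
```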

RevDate: 2024-07-15

Lee G, CW Connor (2024)

"Alexa, Cycle The Blood Pressure": A Voice Control Interface Method for Anesthesia Monitoring.

Anesthesia and analgesia pii:00000539-990000000-00865 [Epub ahead of print].

BACKGROUND: Anesthesia monitors and devices are usually controlled with some combination of dials, keypads, a keyboard, or a touch screen. Thus, anesthesiologists can operate their monitors only when they are physically close to them, and not otherwise task-loaded with sterile procedures such as line or block placement. Voice recognition technology has become commonplace and may offer advantages in anesthesia practice such as reducing surface contamination rates and allowing anesthesiologists to effect changes in monitoring and therapy when they would otherwise presently be unable to do so. We hypothesized that this technology is practicable and that anesthesiologists would consider it useful.

METHODS: A novel voice-driven prototype controller was designed for the GE Solar 8000M anesthesia patient monitor. The apparatus was implemented using a Raspberry Pi 4 single-board computer, an external conference audio device, a Google Cloud Speech-to-Text platform, and a modified Solar controller to effect commands. Fifty anesthesia providers tested the prototype. Evaluations and surveys were completed in a nonclinical environment to avoid any ethical or safety concerns regarding the use of the device in direct patient care. All anesthesiologists sampled were fluent English speakers; many with inflections from their first language or national origin, reflecting diversity in the population of practicing anesthesiologists.

RESULTS: The prototype was uniformly well-received by anesthesiologists. Ease-of-use, usefulness, and effectiveness were assessed on a Likert scale with means of 9.96, 7.22, and 8.48 of 10, respectively. No population cofactors were associated with these results. Advancing level of training (eg, nonattending versus attending) was not correlated with any preference. Accent of country or region was not correlated with any preference. Vocal pitch register did not correlate with any preference. Statistical analyses were performed with analysis of variance and the unpaired t-test.

CONCLUSIONS: The use of voice recognition to control operating room monitors was well received by anesthesia providers. Additional commands are easily implemented on the prototype controller. No adverse relationship was found between acceptability and level of anesthesia experience, pitch of voice, or presence of accent. Voice recognition is a promising method of controlling anesthesia monitors and devices that could potentially increase usability and situational awareness in circumstances where the anesthesiologist is otherwise out-of-position or task-loaded.
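A minimal sketch of the speech-recognition leg of such a controller, using the google-cloud-speech client library, is shown below; the audio settings and the command-phrase matching are illustrative assumptions, not the authors' implementation.

```python
# Sketch of cloud speech recognition for a voice-driven monitor controller.
# The command grammar ("cycle the blood pressure") and audio settings are
# illustrative; the prototype's actual code is not published here.
from google.cloud import speech

client = speech.SpeechClient()
with open("command.wav", "rb") as f:            # hypothetical captured audio
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
response = client.recognize(config=config, audio=audio)
for result in response.results:
    transcript = result.alternatives[0].transcript.lower()
    if "cycle the blood pressure" in transcript:
        print("-> send NIBP start command to the monitor")  # hypothetical action
```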

RevDate: 2024-07-15

Hsu WT, MR Shirts (2024)

Replica Exchange of Expanded Ensembles: A Generalized Ensemble Approach with Enhanced Flexibility and Parallelizability.

Journal of chemical theory and computation [Epub ahead of print].

Generalized ensemble methods such as Hamiltonian replica exchange (HREX) and expanded ensemble (EE) have been shown effective in free energy calculations for various contexts, given their ability to circumvent free energy barriers via nonphysical pathways defined by states with different modified Hamiltonians. However, both HREX and EE methods come with drawbacks, such as limited flexibility in parameter specification or the lack of parallelizability for more complicated applications. To address this challenge, we present the method of replica exchange of expanded ensembles (REXEE), which integrates the principles of HREX and EE methods by periodically exchanging coordinates of EE replicas sampling different yet overlapping sets of alchemical states. With the solvation free energy calculation of anthracene and binding free energy calculation of the CB7-10 binding complex, we show that the REXEE method achieves the same level of accuracy in free energy calculations as the HREX and EE methods, while offering enhanced flexibility and parallelizability. Additionally, we examined REXEE simulations with various setups to understand how different exchange frequencies and replica configurations influence the sampling efficiency in the fixed-weight phase and the weight convergence in the weight-updating phase. The REXEE approach can be further extended to support asynchronous parallelization schemes, allowing looser communication between larger numbers of loosely coupled processors, such as in cloud computing, and therefore promising much more scalable and adaptive executions of alchemical free energy calculations. All algorithms for the REXEE method are available in the Python package ensemble_md, which offers an interface for REXEE simulation management without modifying the source code in GROMACS.
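The building block REXEE shares with HREX is the Metropolis acceptance test for swapping coordinates between two replicas; the sketch below is a generic, conceptual version of that test, not the ensemble_md API.

```python
# Generic Metropolis acceptance test for swapping configurations between two
# replicas, the building block REXEE shares with HREX. Conceptual sketch only.
import math
import random

def accept_swap(u_11, u_22, u_12, u_21, beta=1.0):
    """u_ij: reduced potential of replica i's coordinates evaluated under
    replica j's Hamiltonian. Accept with the usual Metropolis probability."""
    delta = beta * ((u_12 + u_21) - (u_11 + u_22))
    return delta <= 0 or random.random() < math.exp(-delta)

# Invented energies: swapping here raises the total potential slightly,
# so acceptance is stochastic rather than guaranteed.
print(accept_swap(u_11=10.0, u_22=12.0, u_12=11.0, u_21=12.5))
```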

RevDate: 2024-07-13

Nyangaresi VO, Abduljabbar ZA, Mutlaq KA, et al (2024)

Smart city energy efficient data privacy preservation protocol based on biometrics and fuzzy commitment scheme.

Scientific reports, 14(1):16223.

Advancements in cloud computing, flying ad-hoc networks, wireless sensor networks, artificial intelligence, big data, the 5th generation mobile network and the internet of things have led to the development of smart cities. Owing to their massive interconnectedness, high volumes of data are collected and exchanged over the public internet. Therefore, the exchanged messages are susceptible to numerous security and privacy threats across these open public channels. Although many security techniques have been designed to address this issue, most of them are still vulnerable to attacks, while some deploy computationally intensive cryptographic operations such as bilinear pairings and blockchain. In this paper, we leverage biometrics, error correction codes and fuzzy commitment schemes to develop a secure and energy-efficient authentication scheme for smart cities. This is informed by the fact that biometric data is cumbersome to reproduce, and hence attacks such as side-channeling are thwarted. We formally analyze the security of our protocol using Burrows-Abadi-Needham (BAN) logic, which shows that our scheme achieves strong mutual authentication among the communicating entities. The semantic analysis of our protocol shows that it mitigates attacks such as de-synchronization, eavesdropping, session hijacking, forgery and side-channeling. In addition, its formal security analysis demonstrates that it is secure under the Canetti and Krawczyk attack model. In terms of performance, our scheme is shown to reduce computation overheads by 20.7% and hence is the most efficient among the state-of-the-art protocols.
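A toy fuzzy commitment in the Juels-Wattenberg style illustrates the binding of a secret key to a biometric template; real constructions add an error-correcting code so that noisy re-captures of the biometric still release the key, which this sketch omits.

```python
# Toy fuzzy commitment: bind a secret key to a biometric template with XOR
# so the same (or, with an error-correcting code, a close-enough) template
# can release it. The ECC layer is omitted for brevity.
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

biometric = secrets.token_bytes(16)         # stand-in for an encoded template
key = secrets.token_bytes(16)               # secret to be bound

helper = xor(key, biometric)                # public helper data
commitment = hashlib.sha256(key).digest()   # stored for later verification

recovered = xor(helper, biometric)          # same template -> same key
assert hashlib.sha256(recovered).digest() == commitment
```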

RevDate: 2024-07-13

Alwakeel AM, AK Alnaim (2024)

Trust Management and Resource Optimization in Edge and Fog Computing Using the CyberGuard Framework.

Sensors (Basel, Switzerland), 24(13): pii:s24134308.

The growing importance of edge and fog computing in the modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource use and slowed advancements. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with the blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain's inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models like SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard's effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments.
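The combination of SVM, KNN, and random forests for trust assessment can be sketched as a soft-voting ensemble; the data and hyperparameters below are synthetic placeholders, not CyberGuard's configuration.

```python
# Sketch of an SVM/KNN/random-forest ensemble for trust assessment, in the
# spirit of the models named in the abstract; data are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, random_state=1)

trust_model = VotingClassifier(estimators=[
    ("svm", SVC(probability=True)),
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier()),
], voting="soft").fit(X, y)

print("trusted" if trust_model.predict(X[:1])[0] else "flagged")
```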

RevDate: 2024-07-13

Alwakeel AM, AK Alnaim (2024)

Network Slicing in 6G: A Strategic Framework for IoT in Smart Cities.

Sensors (Basel, Switzerland), 24(13): pii:s24134254.

The emergence of 6G communication technologies brings both opportunities and challenges for the Internet of Things (IoT) in smart cities. In this paper, we introduce an advanced network slicing framework designed to meet the complex demands of 6G smart cities' IoT deployments. The framework development follows a detailed methodology that encompasses requirement analysis, metric formulation, constraint specification, objective setting, mathematical modeling, configuration optimization, performance evaluation, parameter tuning, and validation of the final design. Our evaluations demonstrate the framework's high efficiency, evidenced by low round-trip time (RTT), minimal packet loss, increased availability, and enhanced throughput. Notably, the framework scales effectively, managing multiple connections simultaneously without compromising resource efficiency. Enhanced security is achieved through robust features such as 256-bit encryption and a high rate of authentication success. The discussion elaborates on these findings, underscoring the framework's impressive performance, scalability, and security capabilities.

RevDate: 2024-07-13

Shahid U, Ahmed G, Siddiqui S, et al (2024)

Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing.

Sensors (Basel, Switzerland), 24(13): pii:s24134195.

Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration into the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; the developers can focus on how to deploy and create code efficiently. Since FaaS aligns well with the IoT, it easily integrates with IoT devices, thereby making it possible to perform event-based actions and real-time computations. In our research, we offer an exclusive likelihood-based model of adaptive machine learning for identifying the right placement for a function. We employ the XGBoost regressor to estimate the execution time for each function and utilize the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. For replication, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. Thus, the primary objectives are to meet deadlines and enhance resource use, and from this, we can see that effective utilization of resources leads to enhanced deadline compliance.
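The placement logic described, one regressor for execution time, another for network latency, and a scheduler that picks the node minimizing their sum, can be sketched as follows; the features and training targets are synthetic stand-ins.

```python
# Sketch of latency-sensitive placement: an XGBoost regressor predicts
# per-node execution time, a decision tree predicts network latency, and
# the scheduler picks the node minimizing their sum. Features are invented.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X_hist = rng.random((200, 4))          # node features: cpu, mem, load, hops
exec_time = X_hist @ [2.0, 1.0, 3.0, 0.1] + rng.normal(0, 0.1, 200)
latency = X_hist @ [0.1, 0.0, 0.5, 4.0] + rng.normal(0, 0.1, 200)

exec_model = XGBRegressor(n_estimators=50).fit(X_hist, exec_time)
lat_model = DecisionTreeRegressor(max_depth=5).fit(X_hist, latency)

candidates = rng.random((5, 4))        # 5 candidate nodes for a function
total = exec_model.predict(candidates) + lat_model.predict(candidates)
print("place function on node", int(np.argmin(total)))
```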

RevDate: 2024-07-13

Liu X, Dong X, Jia N, et al (2024)

Federated Learning-Oriented Edge Computing Framework for the IIoT.

Sensors (Basel, Switzerland), 24(13): pii:s24134182.

With the maturity of artificial intelligence (AI) technology, applications of AI in edge computing will greatly promote the development of industrial technology. However, the existing studies on the edge computing framework for the Industrial Internet of Things (IIoT) still face several challenges, such as deep hardware and software coupling, diverse protocols, difficult deployment of AI models, insufficient computing capabilities of edge devices, and sensitivity to delay and energy consumption. To solve the above problems, this paper proposes a software-defined AI-oriented three-layer IIoT edge computing framework and presents the design and implementation of an AI-oriented edge computing system, aiming to support device access, enable the acceptance and deployment of AI models from the cloud, and allow the whole process from data acquisition to model training to be completed at the edge. In addition, this paper proposes a time series-based method for device selection and computation offloading in the federated learning process, which selectively offloads the tasks of inefficient nodes to the edge computing center to reduce the training delay and energy consumption. Finally, experiments carried out to verify the feasibility and effectiveness of the proposed method are reported. The model training time with the proposed method is generally 30% to 50% less than that with the random device selection method, and the training energy consumption under the proposed method is generally 35% to 55% less.
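The aggregation step such a federated learning framework coordinates can be sketched as federated averaging, with each selected device's update weighted by its local sample count; the plain-array weights below are purely illustrative.

```python
# Minimal federated averaging step: the edge center aggregates model weights
# from selected devices, weighting by local sample counts. Plain arrays
# stand in for real model parameters.
import numpy as np

def fed_avg(updates):
    """updates: list of (weights, n_samples) from the selected devices."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

device_updates = [(np.array([0.2, 0.8]), 1000),   # capable device
                  (np.array([0.4, 0.6]), 250)]    # offloaded/slow device
print(fed_avg(device_updates))                    # -> [0.24 0.76]
```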

RevDate: 2024-07-13

Zuo G, Wang R, Wan C, et al (2024)

Unveiling the Evolution of Virtual Reality in Medicine: A Bibliometric Analysis of Research Hotspots and Trends over the Past 12 Years.

Healthcare (Basel, Switzerland), 12(13): pii:healthcare12131266.

BACKGROUND: Virtual reality (VR), widely used in the medical field, may affect future medical training and treatment. Therefore, this study examined VR's potential uses and research directions in medicine.

METHODS: Citation data were downloaded from the Web of Science Core Collection database (WoSCC) to evaluate VR in medicine in articles published between 1 January 2012 and 31 December 2023. These data were analyzed using CiteSpace 6.2.R2 software. Present limitations and future opportunities were summarized based on the data.

RESULTS: A total of 2143 related publications from 86 countries and regions were analyzed. The country with the highest number of publications is the USA, with 461 articles. The University of London has the most publications among institutions, with 43 articles. The burst keywords represent the research frontier from 2020 to 2023, such as "task analysis", "deep learning", and "machine learning".

CONCLUSION: The number of publications on VR applications in the medical field has been steadily increasing year by year. The USA is the leading country in this area, while the University of London stands out as the most published and most influential institution. Currently, there is a strong focus on integrating VR and AI to address complex issues such as medical education and training, rehabilitation, and surgical navigation. Looking ahead, the future trend involves integrating VR, augmented reality (AR), and mixed reality (MR) with the Internet of Things (IoT), wireless sensor networks (WSNs), big data analysis (BDA), and cloud computing (CC) technologies to develop intelligent healthcare systems within hospitals or medical centers.

RevDate: 2024-07-12
CmpDate: 2024-07-12

Allers S, O'Connell KA, Carlson T, et al (2024)

Reusable tutorials for using cloud-based computing environments for the analysis of bacterial gene expression data from bulk RNA sequencing.

Briefings in bioinformatics, 25(4):.

This manuscript describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on RNA sequencing (RNAseq) data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Biomedical research is increasingly data-driven, and dependent upon data management and analysis methods that facilitate rigorous, robust, and reproducible research. Cloud-based computing resources provide opportunities to broaden the application of bioinformatics and data science in research. Two obstacles for researchers, particularly those at small institutions, are: (i) access to bioinformatics analysis environments tailored to their research; and (ii) training in how to use Cloud-based computing resources. We developed five reusable tutorials for bulk RNAseq data analysis to address these obstacles. Using Jupyter notebooks run on the Google Cloud Platform, the tutorials guide the user through a workflow featuring an RNAseq dataset from a study of prophage-altered drug resistance in Mycobacterium chelonae. The first tutorial uses a subset of the data so users can learn analysis steps rapidly, and the second uses the entire dataset. Next, a tutorial demonstrates how to analyze the read count data to generate lists of differentially expressed genes using R/DESeq2. Additional tutorials generate read counts using the Snakemake workflow manager and Nextflow with Google Batch. All tutorials are open-source and can be used as templates for other analyses.
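As a taste of the count-normalization step that precedes differential expression, the sketch below computes log counts-per-million in Python; the tutorials themselves use R/DESeq2, which applies its own size-factor normalization.

```python
# Sketch of simple read-count normalization (counts-per-million). The
# tutorials use R/DESeq2 with size factors; this only mirrors the idea.
import numpy as np

counts = np.array([[500, 900],      # gene x sample raw read counts (invented)
                   [20,  35],
                   [4500, 8000]], dtype=float)

cpm = counts / counts.sum(axis=0) * 1e6
log_cpm = np.log2(cpm + 1)          # stabilize variance for plotting
print(log_cpm.round(2))
```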

RevDate: 2024-07-11

Kaur R, R Vaithiyanathan (2024)

Hybrid YSGOA and neural networks based software failure prediction in cloud systems.

Scientific reports, 14(1):16035.

In the realm of cloud computing, ensuring the dependability and robustness of software systems is paramount. The intricate and evolving nature of cloud infrastructures, however, presents substantial obstacles in the pre-emptive identification and rectification of software anomalies. This study introduces an innovative methodology that amalgamates hybrid optimization algorithms with Neural Networks (NN) to refine the prediction of software malfunctions. The core objective is to augment the purity metric of our method across diverse operational conditions. This is accomplished through the utilization of two distinct optimization algorithms: the Yellow Saddle Goat Fish Algorithm (YSGA), which is instrumental in the discernment of pivotal features linked to software failures, and the Grasshopper Optimization Algorithm (GOA), which further polishes the feature compilation. These features are then processed by Neural Networks (NN), capitalizing on their proficiency in deciphering intricate data patterns and interconnections. The NNs are integral to the classification of instances predicated on the ascertained features. Our evaluation, conducted using the Failure-Dataset-OpenStack database and MATLAB Software, demonstrates that the hybrid optimization strategy employed for feature selection significantly curtails complexity and expedites processing.

RevDate: 2024-07-11

Martinez C, Etxaniz I, Molinuevo A, et al (2024)

MEDINA Catalogue of Cloud Security controls and metrics: Towards Continuous Cloud Security compliance.

Open research Europe, 4:90.

In order to address current challenges in the security certification of European ICT products, processes and services, the European Commission, through ENISA (the European Union Agency for Cybersecurity), has developed the European Cybersecurity Certification Scheme for Cloud Services (EUCS). This paper presents an overview of the H2020 MEDINA project approach and tools to support the adoption of EUCS, and offers a detailed description of one of the core components of the framework, the MEDINA Catalogue of Controls and Metrics. The main objective of the MEDINA Catalogue is to provide automated functionalities for cloud service providers' (CSPs') compliance managers and auditors to ease the certification process towards EUCS, through the provision of all information and guidance related to the scheme, namely categories, controls, security requirements, assurance levels, etc. The tool has been enhanced with all the research and implementation work performed in MEDINA, such as the definition of compliance metrics, suggestion of related implementation guidelines, alignment of similar controls in other schemes, and a set of self-assessment questionnaires, which are presented and discussed in this paper.

RevDate: 2024-07-10

Alsadie D (2024)

Advancements in heuristic task scheduling for IoT applications in fog-cloud computing: challenges and prospects.

PeerJ. Computer science, 10:e2128.

Fog computing has emerged as a prospective paradigm to address the computational requirements of IoT applications, extending the capabilities of cloud computing to the network edge. Task scheduling is pivotal in enhancing energy efficiency, optimizing resource utilization and ensuring the timely execution of tasks within fog computing environments. This article presents a comprehensive review of the advancements in task scheduling methodologies for fog computing systems, covering priority-based, greedy heuristics, metaheuristics, learning-based, hybrid heuristics, and nature-inspired heuristic approaches. Through a systematic analysis of relevant literature, we highlight the strengths and limitations of each approach and identify key challenges facing fog computing task scheduling, including dynamic environments, heterogeneity, scalability, resource constraints, security concerns, and algorithm transparency. Furthermore, we propose future research directions to address these challenges, including the integration of machine learning techniques for real-time adaptation, leveraging federated learning for collaborative scheduling, developing resource-aware and energy-efficient algorithms, incorporating security-aware techniques, and advancing explainable AI methodologies. By addressing these challenges and pursuing these research directions, we aim to facilitate the development of more robust, adaptable, and efficient task-scheduling solutions for fog computing environments, ultimately fostering trust, security, and sustainability in fog computing systems and facilitating their widespread adoption across diverse applications and domains.

RevDate: 2024-07-09

Chen C, Nguyen DT, Lee SJ, et al (2024)

Accelerating Computational Materials Discovery with Machine Learning and Cloud High-Performance Computing: from Large-Scale Screening to Experimental Validation.

Journal of the American Chemical Society [Epub ahead of print].

High-throughput computational materials discovery has promised significant acceleration of the design and discovery of new materials for many years. Despite a surge in interest and activity, the constraints imposed by large-scale computational resources present a significant bottleneck. Furthermore, examples of very large-scale computational discovery carried through to experimental validation remain scarce, especially for materials with product applicability. Here, we demonstrate how this vision became reality by combining state-of-the-art machine learning (ML) models and traditional physics-based models on cloud high-performance computing (HPC) resources to quickly navigate through more than 32 million candidates and predict around half a million potentially stable materials. By focusing on solid-state electrolytes for battery applications, our discovery pipeline further identified 18 promising candidates with new compositions and rediscovered a decade's worth of collective knowledge in the field as a byproduct. We then synthesized and experimentally characterized the structures and conductivities of our top candidates, the NaxLi3-xYCl6 (0 ≤ x ≤ 3) series, demonstrating the potential of these compounds to serve as solid electrolytes. Additional candidate materials that are currently under experimental investigation could offer more examples of the computational discovery of new phases of Li- and Na-conducting solid electrolytes. The showcased screening of millions of materials candidates highlights the transformative potential of advanced ML and HPC methodologies, propelling materials discovery into a new era of efficiency and innovation.

RevDate: 2024-07-08

Kumar A, G Verma (2024)

Multi-level authentication for security in cloud using improved quantum key distribution.

Network (Bristol, England) [Epub ahead of print].

Cloud computing is an on-demand virtual-based technology to develop, configure, and modify applications online through the internet. It enables users to handle various operations such as storage, back-up, and recovery of data, data analysis, delivery of software applications, implementation of new services and applications, hosting of websites and blogs, and streaming of audio and video files. While it provides many benefits, it suffers from problems related to cloud security such as data leakage, data loss, and cyber attacks. To address these security concerns, researchers have developed a variety of authentication mechanisms; the authentication procedure used in the suggested method is multi-level. Accordingly, an improved quantum key distribution (QKD) method is offered to strengthen cloud security against different types of security risks. Key generation for the enhanced QKD is based on the attribute-based encryption (ABE) public-key cryptography approach; specifically, an approach named CPABE (ciphertext-policy ABE) is used in the improved QKD. The improved QKD scored a reduced KCA attack rating of 0.3193, which is superior to CMMLA (0.7915), CPABE (0.8916), AES (0.5277), Blowfish (0.6144), and ECC (0.4287). Finally, this multi-level authentication using an improved QKD approach is analysed under various measures and validates the enhancement over state-of-the-art models.

RevDate: 2024-07-08

Yan L, Wang G, Feng H, et al (2024)

Efficient and accountable anti-leakage attribute-based encryption scheme for cloud storage.

Heliyon, 10(12):e32404 pii:S2405-8440(24)08435-4.

To ensure secure and flexible data sharing in cloud storage, attribute-based encryption (ABE) is introduced to meet the requirements of fine-grained access control and secure one-to-many data sharing. However, the computational burden imposed by attribute encryption renders it unsuitable for resource-constrained environments such as the Internet of Things (IoT) and edge computing. Furthermore, the issue of accountability for illegal keys is crucial, as authorized users may actively disclose or sell authorization keys for personal gain, and keys may also passively leak due to management negligence or hacking incidents. Additionally, since all authorization keys are generated by the attribute authorization center, there is a potential risk of unauthorized key forgery. In response to these challenges, this paper proposes an efficient and accountable leakage-resistant scheme based on attribute encryption. The scheme adopts more secure online/offline encryption mechanisms and cloud server-assisted decryption to alleviate the computational burden on resource-constrained devices. For illegal keys, the scheme supports accountability for both users and the authorization center, allowing the revocation of decryption privileges for malicious users. In the case of passively leaked keys, timely key updates and revocation of decryption capabilities for leaked keys are implemented. Finally, the paper provides selective security and accountability proofs for the scheme under standard models. Efficiency analysis and experimental results demonstrate that the proposed scheme enhances encryption/decryption efficiency, and the storage overhead for accountability is also extremely low.

RevDate: 2024-07-04
CmpDate: 2024-07-04

Edfeldt K, Edwards AM, Engkvist O, et al (2024)

A data science roadmap for open science organizations engaged in early-stage drug discovery.

Nature communications, 15(1):5640.

The Structural Genomics Consortium is an international open science research organization with a focus on accelerating early-stage drug discovery, namely hit discovery and optimization. We, as many others, believe that artificial intelligence (AI) is poised to be a main accelerator in the field. The question is then how to best benefit from recent advances in AI and how to generate, format and disseminate data to enable future breakthroughs in AI-guided drug discovery. We present here the recommendations of a working group composed of experts from both the public and private sectors. Robust data management requires precise ontologies and standardized vocabulary while a centralized database architecture across laboratories facilitates data integration into high-value datasets. Lab automation and opening electronic lab notebooks to data mining push the boundaries of data sharing and data modeling. Important considerations for building robust machine-learning models include transparent and reproducible data processing, choosing the most relevant data representation, defining the right training and test sets, and estimating prediction uncertainty. Beyond data-sharing, cloud-based computing can be harnessed to build and disseminate machine-learning models. Important vectors of acceleration for hit and chemical probe discovery will be (1) the real-time integration of experimental data generation and modeling workflows within design-make-test-analyze (DMTA) cycles openly, and at scale and (2) the adoption of a mindset where data scientists and experimentalists work as a unified team, and where data science is incorporated into the experimental design.

RevDate: 2024-07-04

Khazali M (2024)

Universal terminal for cloud quantum computing.

Scientific reports, 14(1):15412.

To bring quantum computing capacity to personal edge devices, the optimum approach is to have simple, non-error-corrected personal devices that offload computational tasks to scalable quantum computers via edge servers with cryogenic components and fault-tolerant schemes. Hence, the network elements deploy different encoding protocols. This article proposes quantum terminals that are compatible with different encoding protocols, paving the way for realizing mobile edge-quantum computing. By accommodating the atomic lattice processor inside a cavity, the entangling mechanism is provided by Rydberg cavity-QED technology. The auxiliary atom, responsible for photon emission, senses the logical qubit state via the long-range Rydberg interaction. In other words, the state of the logical qubit determines the interaction-induced level shift at the central atom and hence drives the system through distinct eigenstates, featuring photon emission at early or late times controlled by quantum interference. Applying an entanglement-swapping gate on two emitted photons would make the far-separated logical qubits entangled regardless of their encoding protocols. The proposed scheme provides a universal photonic interface for clustering the processors and connecting them with the quantum memories and the quantum cloud compatible with different encoding formats.

RevDate: 2024-07-04

Li F, Lv K, Liu X, et al (2024)

Accurately Computing the Interacted Volume of Molecules over Their 3D Mesh Models.

Journal of chemical information and modeling [Epub ahead of print].

For quickly predicting the rational arrangement of catalysts and substrates, we previously proposed a method to calculate the interacted volumes of molecules over their 3D point cloud models. However, the nonuniform density in molecular point clouds may lead to incomplete contours in some slices, reducing the accuracy of the previous method. In this paper, we propose a two-step method for more accurately computing molecular interacted volumes. First, by employing a prematched mesh slicing method, we layer the 3D triangular mesh models of the electrostatic potential isosurfaces of two molecules globally, transforming the volume calculation into finding the intersecting areas in each layer. Next, by subdividing polygonal edges, we accurately identify intersecting parts within each layer, ensuring precise calculation of interacted volumes. In addition, we present a concise overview for computing intersecting areas in cases of multiple contour intersections and for improving computational efficiency by incorporating bounding boxes at three stages. Experimental results demonstrate that our method maintains high accuracy in different experimental data sets, with an average relative error of 0.16%. On the same experimental setup, our average relative error is 0.07%, which is lower than the previous algorithm's 1.73%, improving the accuracy and stability in calculating interacted volumes.
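
As a rough illustration of the slicing idea (not the authors' implementation), the shared volume of two closed surfaces can be approximated by intersecting corresponding slice contours and summing area times layer thickness. The sketch below uses the shapely library and assumes both molecules have already been sliced at the same z heights, with one polygonal contour per layer:

```python
from shapely.geometry import Polygon

def interacted_volume(layers_a, layers_b, dz):
    """Approximate the shared volume of two sliced surfaces.

    layers_a, layers_b: lists of [(x, y), ...] contour rings, one per layer,
    sliced at the same z heights (a simplifying assumption).
    dz: slice thickness.
    """
    volume = 0.0
    for ring_a, ring_b in zip(layers_a, layers_b):
        inter = Polygon(ring_a).intersection(Polygon(ring_b))
        volume += inter.area * dz  # overlap area in this layer
    return volume

# Two overlapping unit squares, offset by 0.5 in x, over 10 layers:
a = [[(0, 0), (1, 0), (1, 1), (0, 1)]] * 10
b = [[(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]] * 10
print(interacted_volume(a, b, dz=0.1))  # ~0.5
```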

RevDate: 2024-06-28
CmpDate: 2024-06-28

Seaman RP, Campbell R, Doe V, et al (2024)

A cloud-based training module for efficient de novo transcriptome assembly using Nextflow and Google cloud.

Briefings in bioinformatics, 25(4):.

This study describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on de novo transcriptome assembly using Nextflow in an interactive format that uses appropriate cloud resources for data access and analysis. Cloud computing is a powerful new means by which biomedical researchers can access resources and capacity that were previously either unattainable or prohibitively expensive. To take advantage of these resources, however, the biomedical research community needs new skills and knowledge. We present here a cloud-based training module, developed in conjunction with Google Cloud, Deloitte Consulting, and the NIH STRIDES Program, that uses the biological problem of de novo transcriptome assembly to demonstrate and teach the concepts of computational workflows (using Nextflow) and cost- and resource-efficient use of Cloud services (using Google Cloud Platform). Our work highlights the reduced necessity of on-site computing resources and the accessibility of cloud-based infrastructure for bioinformatics applications.

RevDate: 2024-06-28

Tanade C, Rakestraw E, Ladd W, et al (2023)

Cloud Computing to Enable Wearable-Driven Longitudinal Hemodynamic Maps.

International Conference for High Performance Computing, Networking, Storage and Analysis : [proceedings]. SC (Conference : Supercomputing), 2023:.

Tracking hemodynamic responses to treatment and stimuli over long periods remains a grand challenge. Moving from established single-heartbeat technology to longitudinal profiles would require continuous data describing how the patient's state evolves, new methods to extend the temporal domain over which flow is sampled, and high-throughput computing resources. While personalized digital twins can accurately measure 3D hemodynamics over several heartbeats, state-of-the-art methods would require hundreds of years of wallclock time on leadership scale systems to simulate one day of activity. To address these challenges, we propose a cloud-based, parallel-in-time framework leveraging continuous data from wearable devices to capture the first 3D patient-specific, longitudinal hemodynamic maps. We demonstrate the validity of our method by establishing ground truth data for 750 beats and comparing the results. Our cloud-based framework is based on an initial fixed set of simulations to enable the wearable-informed creation of personalized longitudinal hemodynamic maps.

RevDate: 2024-06-27

Siruvoru V, S Aparna (2024)

Hybrid deep learning and optimized clustering mechanism for load balancing and fault tolerance in cloud computing.

Network (Bristol, England) [Epub ahead of print].

Cloud services are among the most quickly developing technologies, and load balancing is recognized as a fundamental challenge for achieving energy efficiency. The primary function of load balancing is to deliver optimal services by spreading the load over multiple resources, while fault tolerance is used to improve the reliability and accessibility of the network. In this paper, a hybrid deep learning-based load balancing algorithm is developed. Initially, tasks are allocated to all VMs in a round-robin fashion. A Deep Embedding Cluster (DEC) model then uses the Central Processing Unit (CPU), bandwidth, memory, processing elements, and frequency scaling factors to determine whether a VM is overloaded or underloaded. Tasks running on overloaded VMs are evaluated and reassigned to underloaded VMs to balance the cloud load. In addition, a Deep Q Recurrent Neural Network (DQRNN) is proposed to balance the load based on factors such as supply, demand, capacity, load, resource utilization, and fault tolerance. The effectiveness of this model is assessed by load, capacity, resource consumption, and success rate, achieving ideal values of 0.147, 0.726, 0.527, and 0.895, respectively.
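
A minimal sketch of the scheduling skeleton described above (round-robin placement followed by threshold-based migration). The DEC and DQRNN components are stood in for by a simple CPU-utilization threshold, so this illustrates the control flow only, not the paper's models:

```python
from itertools import cycle

def load(vm):
    return sum(t["cpu"] for t in vm["tasks"])

def round_robin_assign(tasks, vms):
    """Initial placement: hand tasks to VMs in turn."""
    for task, vm in zip(tasks, cycle(vms)):
        vm["tasks"].append(task)

def rebalance(vms, overload=0.9, underload=0.6):
    """Move tasks from overloaded to underloaded VMs
    (a simple stand-in for the DEC overload test and DQRNN policy)."""
    hot = [vm for vm in vms if load(vm) > overload]
    cold = [vm for vm in vms if load(vm) < underload]
    for vm in hot:
        while load(vm) > overload and cold:
            cold[0]["tasks"].append(vm["tasks"].pop())
            if load(cold[0]) >= underload:
                cold.pop(0)

vms = [{"id": i, "tasks": []} for i in range(3)]
round_robin_assign([{"cpu": 0.5} for _ in range(5)], vms)
rebalance(vms)
print([round(load(vm), 2) for vm in vms])  # e.g. [0.5, 1.0, 1.0]
```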

RevDate: 2024-06-27

Francini S, Marcelli A, Chirici G, et al (2024)

Per-Pixel Forest Attribute Mapping and Error Estimation: The Google Earth Engine and R dataDriven Tool.

Sensors (Basel, Switzerland), 24(12): pii:s24123947.

Remote sensing products are typically assessed using a single accuracy estimate for the entire map, despite significant variations in accuracy across different map areas or classes. Estimating per-pixel uncertainty is a major challenge for enhancing the usability and potential of remote sensing products. This paper introduces the dataDriven open access tool, a novel statistical design-based approach that addresses this issue by estimating per-pixel uncertainty through a bootstrap resampling procedure. Leveraging Sentinel-2 remote sensing data as auxiliary information, the capabilities of the Google Earth Engine cloud computing platform, and the R programming language, dataDriven can be applied in any region of the world and to any variable of interest. In this study, the dataDriven tool was tested in the Rincine forest estate study area (eastern Tuscany, Italy), focusing on volume density as the variable of interest. The average volume density was 0.042 m³ per m², corresponding to 420 m³ per hectare. The estimated pixel errors ranged between 93 m³ and 979 m³ per hectare and averaged 285 m³ per hectare. The ability to produce error estimates for each pixel in the map is novel in the context of current advances in remote sensing and forest monitoring and assessment. It constitutes significant support for forest management applications and is also a powerful communication tool, since it informs users about areas where map estimates are unreliable while highlighting areas where the information provided by the map is more trustworthy. In light of this, the dataDriven tool aims to support researchers and practitioners in the spatially exhaustive use of remote sensing-derived products and in map validation.
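
The actual tool runs on Google Earth Engine and R; purely as an illustration of the underlying bootstrap idea, the numpy/scikit-learn sketch below refits a model on resampled field plots and takes the per-pixel spread of the replicate predictions as the pixel-level error estimate:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy data: Sentinel-2-like features for 200 field plots and 1,000 map pixels.
X_plots = rng.normal(size=(200, 4))
y_plots = X_plots @ [3.0, -1.0, 2.0, 0.5] + rng.normal(scale=0.5, size=200)
X_pixels = rng.normal(size=(1000, 4))

B = 200  # bootstrap replicates
preds = np.empty((B, len(X_pixels)))
for b in range(B):
    idx = rng.integers(0, len(X_plots), len(X_plots))  # resample plots
    model = LinearRegression().fit(X_plots[idx], y_plots[idx])
    preds[b] = model.predict(X_pixels)

pixel_estimate = preds.mean(axis=0)  # per-pixel map value
pixel_error = preds.std(axis=0)      # per-pixel uncertainty
```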

RevDate: 2024-06-27

Hong S, Kim Y, Nam J, et al (2024)

On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads.

Sensors (Basel, Switzerland), 24(12): pii:s24123774.

A recent development in cloud computing has introduced serverless technology, enabling convenient and flexible management of cloud-native applications. Typically, Function-as-a-Service (FaaS) solutions rely on serverless backends, such as Kubernetes (K8s) and Knative, to leverage resource management for the underlying containerized contexts, including auto-scaling and pod scheduling. To exploit these advantages, cloud service providers increasingly deploy self-hosted serverless services on their own on-premises FaaS platforms rather than relying on commercial public cloud offerings. However, the lack of standardized guidelines on K8s abstractions for fair scheduling and resource allocation across auto-scaling configuration options in such on-premises hosting environments makes it challenging to meet the service-level objectives (SLOs) of diverse workloads. This study fills this gap by exploring the relationship between auto-scaling behavior and the performance of FaaS workloads under different scaling-related K8s configurations. Based on comprehensive measurement studies, we derive guidance on which scaling configurations, such as the base metric and threshold, should be applied to which workloads to maximize latency-SLO attainment and the number of responses. Additionally, we propose a methodology to assess the scaling efficiency of the related K8s configurations with respect to the quality of service (QoS) of FaaS workloads.
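
For readers experimenting with the kinds of scaling knobs studied here, a minimal sketch using the official kubernetes Python client to attach a CPU-based HorizontalPodAutoscaler to a deployment; the names, namespace, and thresholds are placeholders rather than the paper's benchmark configuration:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="faas-fn-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="faas-fn"),
        min_replicas=1,
        max_replicas=10,
        # The base metric / threshold pair is exactly the kind of
        # configuration whose effect on SLOs the study measures.
        target_cpu_utilization_percentage=50,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```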

RevDate: 2024-06-25

Hernández Olcina J, Anquela Julián AB, ÁE Martín Furones (2024)

Navigating latency hurdles: an in-depth examination of a cloud-powered GNSS real-time positioning application on mobile devices.

Scientific reports, 14(1):14668.

A growing dependence on real-time positioning apps for navigation, safety, and location-based services necessitates a deep understanding of latency challenges within cloud-based Global Navigation Satellite System (GNSS) solutions. This study analyses a GNSS real-time positioning app on smartphones that utilizes cloud computing for positioning data delivery. The study investigates and quantifies diverse latency contributors throughout the system architecture, including GNSS signal acquisition, data transmission, cloud processing, and result dissemination. Controlled experiments and real-world scenarios are employed to assess the influence of network conditions, device capabilities, and cloud server load on overall positioning latency. Findings highlight system bottlenecks and their relative contributions to latency. Additionally, practical recommendations are presented for developers and cloud service providers to mitigate these challenges and guarantee an optimal user experience for real-time positioning applications. This study not only elucidates the complex interplay of factors affecting GNSS app latency, but also paves the way for future advancements in cloud-based positioning solutions, ensuring the accuracy and timeliness critical for safety-critical and emerging applications.

RevDate: 2024-06-25

Ćosić K, Popović S, BK Wiederhold (2024)

Enhancing Aviation Safety through AI-Driven Mental Health Management for Pilots and Air Traffic Controllers.

Cyberpsychology, behavior and social networking [Epub ahead of print].

This article provides an overview of the mental health challenges faced by pilots and air traffic controllers (ATCs), whose stressful professional lives may negatively impact global flight safety and security. The adverse effects of mental health disorders on their flight performance pose a particular safety risk, especially in sudden, unexpected startle situations. Therefore, the early detection, prediction and prevention of mental health deterioration in pilots and ATCs, particularly among those at high risk, are crucial to minimize potential air crash incidents caused by human factors. Recent research in artificial intelligence (AI) demonstrates the potential of machine and deep learning, edge and cloud computing, virtual reality and wearable multimodal physiological sensors for monitoring and predicting mental health disorders. Longitudinal monitoring and analysis of pilots' and ATCs' physiological, cognitive and behavioral states could help predict individuals at risk of undisclosed or emerging mental health disorders. Utilizing AI tools and methodologies to identify and select these individuals for preventive mental health training and interventions could be a promising and effective approach to preventing potential air crash accidents attributed to human factors and related mental health problems. Based on these insights, the article advocates for the design of a multidisciplinary mental healthcare ecosystem in modern aviation using AI tools and technologies, to foster more efficient and effective mental health management, thereby enhancing flight safety and security standards. This proposed ecosystem requires the collaboration of multidisciplinary experts, including psychologists, neuroscientists, physiologists and psychiatrists, to address these challenges in modern aviation.

RevDate: 2024-06-25

Czech E, Millar TR, White T, et al (2024)

Analysis-ready VCF at Biobank scale using Zarr.

bioRxiv : the preprint server for biology pii:2024.06.11.598241.

BACKGROUND: Variant Call Format (VCF) is the standard file format for interchanging genetic variation data and associated quality control metrics. The usual row-wise encoding of the VCF data model (either as text or packed binary) emphasises efficient retrieval of all data for a given variant, but accessing data on a field or sample basis is inefficient. Biobank-scale datasets currently available consist of hundreds of thousands of whole genomes and hundreds of terabytes of compressed VCF. Row-wise data storage is fundamentally unsuitable, and a more scalable approach is needed.

RESULTS: We present the VCF Zarr specification, an encoding of the VCF data model using Zarr which makes retrieving subsets of the data much more efficient. Zarr is a cloud-native format for storing multi-dimensional data, widely used in scientific computing. We show how this format is far more efficient than standard VCF based approaches, and competitive with specialised methods for storing genotype data in terms of compression ratios and calculation performance. We demonstrate the VCF Zarr format (and the vcf2zarr conversion utility) on a subset of the Genomics England aggV2 dataset comprising 78,195 samples and 59,880,903 variants, with a 5X reduction in storage and greater than 300X reduction in CPU usage in some representative benchmarks.
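
A short sketch of the column-wise access pattern that drives these efficiency gains, using the zarr Python package; the array names follow VCF Zarr/sgkit conventions (variant_position, call_genotype) and the store path is a placeholder:

```python
import zarr

# Open a (possibly cloud-hosted) VCF Zarr store read-only.
root = zarr.open_group("example.vcf.zarr", mode="r")

# Column-wise layout: read one field for all variants without touching
# genotypes, something a row-encoded VCF cannot do cheaply.
positions = root["variant_position"][:]

# Read genotypes for a slice of variants and the first 100 samples only.
gt = root["call_genotype"][10_000:20_000, :100]
print(positions.shape, gt.shape)
```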

CONCLUSIONS: Large row-encoded VCF files are a major bottleneck for current research, and storing and processing these files incurs a substantial cost. The VCF Zarr specification, building on widely used, open-source technologies, has the potential to greatly reduce these costs and may enable a diverse ecosystem of next-generation tools for analysing genetic variation data directly from cloud-based object stores.

RevDate: 2024-06-24

Yang Y, Ren K, J Song (2024)

Enhancing Earth data analysis in 5G satellite networks: A novel lightweight approach integrating improved deep learning.

Heliyon, 10(11):e32071.

Efficiently handling huge amounts of data and enabling processing-intensive applications to run in remote areas simultaneously is the ultimate objective of 5G networks. To distribute computing tasks, ongoing studies are exploring the incorporation of fog-cloud servers onto satellites, a promising solution to enhance connectivity in remote areas. Nevertheless, analyzing the copious amounts of data produced by scattered sensors remains a challenging endeavor, and the conventional strategy of transmitting these data to a central server for analysis can be costly. In contrast to centralized learning methods, distributed machine learning (ML) provides an alternative approach, albeit with notable drawbacks. This paper addresses the comparative learning costs of centralized and distributed learning systems to tackle these challenges directly. It proposes an integrated system that harmoniously merges cloud servers with satellite network structures, leveraging the strengths of each. This integration could represent a major breakthrough in satellite-based networking technology by streamlining data processing from remote nodes and cutting costs. The core of the approach lies in adaptively tailoring learning techniques to individual entities based on their specific contexts. The experimental findings underscore the capability of the proposed lightweight strategy, LMAED2L (Enhanced Deep Learning for Earth Data Analysis), across a spectrum of machine learning tasks, showing strong and consistent performance under diverse operational conditions. Through a strategic fusion of centralized and distributed learning frameworks, LMAED2L emerges as a dynamic and effective remedy for the intricate data-analysis challenges encountered in satellite networks interfaced with cloud servers. The empirical findings reveal a significant performance boost over traditional methods, with average increases in reward (4.1%), task completion rate (3.9%), and delivered packets (3.4%). These advancements should catalyze the integration of cutting-edge machine learning algorithms in future networks, elevating responsiveness, efficiency, and resource utilization.

RevDate: 2024-06-22

Qu L, Xie HQ, Pei JL, et al (2024)

Cloud inversion analysis of surrounding rock parameters for underground powerhouse based on PSO-BP optimized neural network and web technology.

Scientific reports, 14(1):14399.

To address the shortcomings of the BP neural network in practical applications, such as its tendency to fall into local extrema and its slow convergence, we optimized the initial weights and thresholds of the BP neural network using particle swarm optimization (PSO). Additionally, cloud computing services, web technology, a cloud database, and numerical simulation were integrated to construct an intelligent feedback-analysis cloud program for underground engineering safety monitoring based on the PSO-BP algorithm. The program can conveniently, quickly, and intelligently carry out numerical analysis of underground engineering and dynamic feedback analysis of surrounding rock parameters. It was applied to the cloud inversion analysis of the surrounding rock parameters for the underground powerhouse of the Shuangjiangkou Hydropower Station. The displacement simulated with the back-analyzed parameters matches the measured displacement very well. The posterior variance evaluation shows a posterior error ratio of 0.045 and a small-error probability of 0.999. These results indicate that the intelligent feedback-analysis cloud program has high accuracy and can be applied in engineering practice.
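
A compact sketch of the core idea, PSO searching over a small network's initial weights, in plain numpy. This toy version optimizes a one-hidden-layer network directly on synthetic data and is only an illustration, not the paper's engineered cloud program:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))              # monitoring inputs (toy)
y = np.sin(X).sum(axis=1, keepdims=True)  # target displacements (toy)

def unpack(p):  # 3-8-1 network: 3*8 + 8 + 8*1 + 1 = 41 parameters
    W1 = p[:24].reshape(3, 8); b1 = p[24:32]
    W2 = p[32:40].reshape(8, 1); b2 = p[40:]
    return W1, b1, W2, b2

def loss(p):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

# Standard global-best PSO over the 41-dimensional weight vector.
n, dim, w, c1, c2 = 30, 41, 0.7, 1.5, 1.5
pos = rng.normal(size=(n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best MSE:", pbest_f.min())
```

In the PSO-BP scheme, the swarm's best particle would then seed the initial weights for conventional BP (gradient) training rather than serve as the final model.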

RevDate: 2024-06-22

Tonti S, Marzolini B, M Bulgheroni (2021)

Smartphone-Based Passive Sensing for Behavioral and Physical Monitoring in Free-Life Conditions: Technical Usability Study.

JMIR biomedical engineering, 6(2):e15417 pii:v6i2e15417.

BACKGROUND: Smartphone use is widely spreading in society. Their embedded functions and sensors may play an important role in therapy monitoring and planning. However, the use of smartphones for intrapersonal behavioral and physical monitoring is not yet fully supported by adequate studies addressing technical reliability and acceptance.

OBJECTIVE: The objective of this paper is to identify and discuss technical issues that may impact on the wide use of smartphones as clinical monitoring tools. The focus is on the quality of the data and transparency of the acquisition process.

METHODS: QuantifyMyPerson is a platform for continuous monitoring of smartphone use and embedded sensor data. The platform consists of an app for data acquisition, a backend cloud server for data storage and processing, and a web-based dashboard for data management and visualization. Data processing aims to extract meaningful features for describing daily life, such as phone status, calls, app use, GPS, and accelerometer data. Healthy subjects installed the app on their smartphones and ran it for 7 months. The acquired data were analyzed to assess the impact on smartphone performance (ie, battery consumption and anomalies in functioning) and data integrity. The relevance of the selected features in describing changes in daily life was assessed through the computation of a k-nearest neighbors global anomaly score to detect days that differ from others.
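
A small sketch of a k-nearest neighbors global anomaly score of the kind described, assuming one feature vector per day (the feature names are illustrative): each day's score is its mean distance to its k nearest neighbor days.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# One row per day: e.g., [screen_on_hours, n_calls, n_app_events, km_moved]
days = np.array([
    [4.2, 3, 120, 2.1],
    [4.0, 2, 110, 1.9],
    [4.5, 4, 130, 2.3],
    [0.5, 0, 10, 0.0],  # an unusual day
    [4.1, 3, 125, 2.0],
])

k = 2
nn = NearestNeighbors(n_neighbors=k + 1).fit(days)  # +1: each point is its own neighbor
dist, _ = nn.kneighbors(days)
anomaly_score = dist[:, 1:].mean(axis=1)  # mean distance to k nearest other days
print(anomaly_score.round(2))  # the fourth day scores highest
```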

RESULTS: The effectiveness of smartphone-based monitoring depends on the acceptability and interoperability of the system as user retention and data integrity are key aspects. Acceptability was confirmed by the full transparency of the app and the absence of any conflicts with daily smartphone use. The only perceived issue was the battery consumption even though the trend of battery drain with and without the app running was comparable. Regarding interoperability, the app was successfully installed and run on several Android brands. The study shows that some smartphone manufacturers implement power-saving policies not allowing continuous sensor data acquisition and impacting integrity. Data integrity was 96% on smartphones whose power-saving policies do not impact the embedded sensor management and 84% overall.

CONCLUSIONS: The main technological barriers to continuous behavioral and physical monitoring (ie, battery consumption and manufacturers' power-saving policies) may be overcome. The increase in battery consumption is mainly due to GPS triangulation and may be limited, while data missed because of power-saving policies relate only to periods when the phone is not in use, since the embedded sensors are reactivated by any smartphone event. Overall, smartphone-based passive sensing is fully feasible and scalable despite the fragmentation of the Android market.

RevDate: 2024-06-21

Navaneethakrishnan M, Robinson Joel M, Kalavai Palani S, et al (2024)

EfficientNet-deep quantum neural network-based economic denial of sustainability attack detection to enhance network security in cloud.

Network (Bristol, England) [Epub ahead of print].

Cloud computing (CC) is a future revolution in the information technology (IT) and communication field, and security and internet connectivity are the major factors slowing its proliferation. Recently, a new kind of distributed denial-of-service (DDoS) attack, known as the Economic Denial of Sustainability (EDoS) attack, has been emerging. Though EDoS attacks are small at the moment, they can be expected to grow in the near future in tandem with the progression of cloud usage. Here, an EfficientNet-B3-Attn-2 fused Deep Quantum Neural Network (EfficientNet-DQNN) is presented for EDoS detection. Initially, a cloud is simulated; thereafter, the input log file is pre-processed using Z-Score Normalization (ZSN). Afterwards, feature fusion (FF) is accomplished with a Deep Neural Network (DNN) using Kulczynski similarity. Then, data augmentation (DA) is executed by oversampling based on the Synthetic Minority Over-sampling Technique (SMOTE). Finally, attack detection is conducted utilizing EfficientNet-DQNN, which is formed by incorporating EfficientNet-B3-Attn-2 into a DQNN. EfficientNet-DQNN attained an F1-score of 89.8%, accuracy of 90.4%, precision of 91.1%, and recall of 91.2% on the BOT-IoT dataset with 9-fold cross-validation.
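
A hedged sketch of the two pre-processing steps named above, z-score normalization followed by SMOTE oversampling, using scikit-learn and imbalanced-learn on placeholder log-file features; the feature-fusion and EfficientNet-DQNN stages are beyond a few lines:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))            # toy log-file features
y = (rng.random(1000) < 0.05).astype(int)  # rare attack class (~5%)

X_norm = StandardScaler().fit_transform(X)  # Z-Score Normalization (ZSN)
X_aug, y_aug = SMOTE(random_state=0).fit_resample(X_norm, y)  # oversample minority

print(np.bincount(y), "->", np.bincount(y_aug))  # classes balanced after SMOTE
```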

RevDate: 2024-06-20

Dai S (2024)

On the quantum circuit implementation of modus ponens.

Scientific reports, 14(1):14245.

The process of inference reflects the structure of propositions with assigned truth values, either true or false. Modus ponens is a fundamental form of inference that affirms the antecedent in order to affirm the consequent. Inspired by quantum computing, a superposition of true and false is used for parallel processing. In this work, we propose a quantum version of modus ponens. Additionally, we introduce two generalizations of quantum modus ponens: the quantum modus ponens inference chain and multidimensional quantum modus ponens. Finally, a simple implementation of quantum modus ponens on the OriginQ quantum computing cloud platform is demonstrated.

RevDate: 2024-06-19

Gazis A, E Katsiri (2024)

Streamline Intelligent Crowd Monitoring with IoT Cloud Computing Middleware.

Sensors (Basel, Switzerland), 24(11):.

This article introduces a novel middleware that utilizes cost-effective, low-power computing devices like the Raspberry Pi to analyze data from wireless sensor networks (WSNs). It is designed for indoor settings like historical buildings and museums, tracking visitors and identifying points of interest. It serves as an evacuation aid by monitoring occupancy and gauging the popularity of specific areas, subjects, or art exhibitions. The middleware employs a basic form of the MapReduce algorithm to gather WSN data and distribute the work across available computing nodes. Data collected by RFID sensors on visitor badges are stored on mini-computers placed in exhibition rooms and then transmitted to a remote database after a preset time frame. Utilizing MapReduce for data analysis and a leader-election algorithm for fault tolerance, the middleware demonstrates its viability through metrics, supporting applications like swift prototyping and accurate validation of findings. Despite using simpler hardware, its performance matches resource-intensive methods involving audiovisual and AI techniques. The design's innovation lies in its fault-tolerant, distributed setup using budget-friendly, low-power devices rather than resource-heavy hardware or methods. Successfully tested at a historical building in Greece (M. Hatzidakis' residence), it is tailored for indoor spaces. This paper compares its algorithmic application layer with other implementations, highlighting its technical strengths and advantages. Particularly in the wake of the COVID-19 pandemic, and as general monitoring middleware for indoor locations, this middleware holds promise for tracking visitor counts and overall building occupancy.
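
A toy illustration of the "basic form of the MapReduce algorithm" used for occupancy counting (plain Python; in the real middleware the map step runs on the mini-computers in each room): map each RFID badge reading to a (room, 1) pair, then reduce by summing per room.

```python
from collections import defaultdict

readings = [  # (badge_id, room) tuples from RFID sensors
    ("b1", "room_a"), ("b2", "room_a"), ("b3", "room_b"),
    ("b1", "room_b"), ("b4", "room_a"),
]

# Map: emit a (room, 1) pair per reading; in the middleware this step
# is distributed across the low-power nodes in the exhibition rooms.
mapped = [(room, 1) for _, room in readings]

# Shuffle + Reduce: group by room and sum the counts.
counts = defaultdict(int)
for room, n in mapped:
    counts[room] += n

print(dict(counts))  # {'room_a': 3, 'room_b': 2}
```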

RevDate: 2024-06-19

López-Ortiz EJ, Perea-Trigo M, Soria-Morillo LM, et al (2024)

Energy-Efficient Edge and Cloud Image Classification with Multi-Reservoir Echo State Network and Data Processing Units.

Sensors (Basel, Switzerland), 24(11):.

In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning models, such as convolutional neural networks (CNNs), pose significant computational and memory challenges, limiting their use in resource-constrained environments. Echo State Networks (ESNs), based on reservoir computing principles, offer a promising alternative with reduced computational complexity and shorter training times. This study explores the applicability of ESN-based architectures in image classification and weather forecasting tasks, using benchmarks such as the MNIST, Fashion-MNIST, and CloudCast datasets. Through comprehensive evaluations, the Multi-Reservoir ESN (MRESN) architecture emerges as a standout performer, demonstrating its potential for deployment on DPUs or home stations. By exploiting the dynamic adaptability of MRESN to changing input signals, such as weather forecasts, continuous on-device training becomes feasible, eliminating the need for static pre-trained models. Our results highlight the importance of lightweight models such as MRESN in cloud and edge computing applications where efficiency and sustainability are paramount. This study contributes to the advancement of efficient computing practices by providing novel insights into the performance and versatility of MRESN architectures. By facilitating the adoption of lightweight models in resource-constrained environments, our research provides a viable alternative for improved efficiency and scalability in modern computing paradigms.
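
A minimal echo state network in numpy to make the reservoir-computing idea concrete (a single reservoir; MRESN combines several). Only the linear readout is trained, which is what keeps training time short:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius < 1 (echo state)

def run_reservoir(u_seq):
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = np.tanh(Win @ np.atleast_1d(u) + W @ x)  # fixed random dynamics
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a toy signal; only the readout is fitted.
t = np.linspace(0, 8 * np.pi, 800)
u = np.sin(t)
S = run_reservoir(u[:-1])
ridge = 1e-6 * np.eye(n_res)
Wout = np.linalg.solve(S.T @ S + ridge, S.T @ u[1:])  # ridge-regression readout
pred = S @ Wout
print("train MSE:", np.mean((pred - u[1:]) ** 2))
```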

RevDate: 2024-06-14

Bayerlein R, Swarnakar V, Selfridge A, et al (2024)

Cloud-based serverless computing enables accelerated monte carlo simulations for nuclear medicine imaging.

Biomedical physics & engineering express [Epub ahead of print].

This study investigates the potential of cloud-based serverless computing to accelerate Monte Carlo (MC) simulations for nuclear medicine imaging tasks. MC simulations can pose a high computational burden, even when executed on modern multi-core computing servers, whereas cloud computing allows simulation tasks to be highly parallelized and considerably accelerated. We investigate the computational performance of a cloud-based serverless MC simulation of radioactive decays for positron emission tomography imaging using the Amazon Web Services (AWS) Lambda serverless computing platform for the first time in the scientific literature. We compare the computational performance of AWS to a modern on-premises multi-thread reconstruction server by measuring the execution times of processes using between 10^5 and 2×10^10 simulated decays. We deployed two popular MC simulation frameworks, SimSET and GATE, within the AWS computing environment. Containerized application images were used as the basis for an AWS Lambda function, and local (non-cloud) scripts were used to orchestrate the deployment of simulations. The task was broken down into smaller parallel runs launched on concurrently running AWS Lambda instances, and the results were post-processed and downloaded via the Simple Storage Service. Our implementation of cloud-based MC simulations with SimSET outperforms local server-based computations by more than an order of magnitude. However, the GATE implementation creates more and larger output files and reveals that internet connection speed can become the primary bottleneck for data transfers. Simulating 10^9 decays using SimSET is possible within 5 min and accrues computation costs of about $10 on AWS, whereas GATE would have to run in batches for more than 100 min at considerably higher cost. Adopting a cloud-based serverless computing architecture in medical imaging research facilities can considerably improve processing times and overall workflow efficiency, and future research may explore additional enhancements through optimized configurations and computational methods.
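
A skeletal version of the fan-out pattern described, using boto3 to launch concurrent AWS Lambda runs; it assumes a pre-deployed function (called "mc-decay-sim" here, a placeholder) that writes its partial results to S3:

```python
import json
import boto3

lam = boto3.client("lambda")

total_decays = 10**9
n_batches = 1000  # split the simulation into parallel Lambda-sized chunks

for batch in range(n_batches):
    payload = {
        "batch_id": batch,
        "n_decays": total_decays // n_batches,
        "output_bucket": "my-simset-results",  # placeholder S3 bucket
    }
    # 'Event' = asynchronous invocation, so all batches run concurrently.
    lam.invoke(
        FunctionName="mc-decay-sim",  # placeholder function name
        InvocationType="Event",
        Payload=json.dumps(payload),
    )
# Results are later collected from S3 and post-processed locally.
```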

RevDate: 2024-06-14

Guo Y, Ganti S, Y Wu (2024)

Enhancing Energy Efficiency in Telehealth Internet of Things Systems Through Fog and Cloud Computing Integration: Simulation Study.

JMIR biomedical engineering, 9:e50175 pii:v9i1e50175.

BACKGROUND: The increasing adoption of telehealth Internet of Things (IoT) devices in health care informatics has led to concerns about energy use and data processing efficiency.

OBJECTIVE: This paper introduces an innovative model that integrates telehealth IoT devices with a fog and cloud computing-based platform, aiming to enhance energy efficiency in telehealth IoT systems.

METHODS: The proposed model incorporates adaptive energy-saving strategies, localized fog nodes, and a hybrid cloud infrastructure. Simulation analyses were conducted to assess the model's effectiveness in reducing energy consumption and enhancing data processing efficiency.

RESULTS: Simulation results demonstrated significant energy savings, with a 2% reduction in energy consumption achieved through adaptive energy-saving strategies. The sample size for the simulation ranged from 10 to 40, providing statistical robustness to the findings.

CONCLUSIONS: The proposed model successfully addresses energy and data processing challenges in telehealth IoT scenarios. By integrating fog computing for local processing and a hybrid cloud infrastructure, substantial energy savings are achieved. Ongoing research will focus on refining the energy conservation model and exploring additional functional enhancements for broader applicability in health care and industrial contexts.

RevDate: 2024-06-14

Chan NB, Li W, Aung T, et al (2023)

Machine Learning-Based Time in Patterns for Blood Glucose Fluctuation Pattern Recognition in Type 1 Diabetes Management: Development and Validation Study.

JMIR AI, 2:e45450 pii:v2i1e45450.

BACKGROUND: Continuous glucose monitoring (CGM) for diabetes combines noninvasive glucose biosensors, continuous monitoring, cloud computing, and analytics to connect and simulate a hospital setting in a person's home. CGM systems inspired analytics methods to measure glycemic variability (GV), but existing GV analytics methods disregard glucose trends and patterns; hence, they fail to capture entire temporal patterns and do not provide granular insights about glucose fluctuations.

OBJECTIVE: This study aimed to propose a machine learning-based framework for blood glucose fluctuation pattern recognition, which enables a more comprehensive representation of GV profiles that could present detailed fluctuation information, be easily understood by clinicians, and provide insights about patient groups based on time in blood fluctuation patterns.

METHODS: Overall, 1.5 million measurements from 126 patients in the United Kingdom with type 1 diabetes mellitus (T1DM) were collected, and prevalent blood fluctuation patterns were extracted using dynamic time warping. The patterns were further validated in 225 patients in the United States with T1DM. Hierarchical clustering was then applied on time in patterns to form 4 clusters of patients. Patient groups were compared using statistical analysis.
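
To make the pattern-extraction step concrete, here is the textbook dynamic time warping distance in numpy; the study applies DTW to group CGM segments into prevalent fluctuation patterns, and this is the standard algorithm rather than the authors' full pipeline:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two CGM-like segments (mmol/L): same shape, one time-shifted.
seg1 = np.array([5.0, 5.5, 7.0, 9.0, 8.0, 6.5, 5.5])
seg2 = np.array([5.0, 5.2, 5.5, 7.0, 9.0, 8.0, 6.5])
print(dtw_distance(seg1, seg2))  # small despite the shift
```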

RESULTS: In total, 6 patterns depicting distinctive glucose levels and trends were identified and validated, based on which 4 GV profiles of patients with T1DM were found. They were significantly different in terms of glycemic statuses such as diabetes duration (P=.04), glycated hemoglobin level (P<.001), and time in range (P<.001) and thus had different management needs.

CONCLUSIONS: The proposed method can analytically extract existing blood fluctuation patterns from CGM data. Thus, time in patterns can capture a rich view of patients' GV profile. Its conceptual resemblance with time in range, along with rich blood fluctuation details, makes it more scalable, accessible, and informative to clinicians.

RevDate: 2024-06-13

Danning Z, Jia Q, Yinni M, et al (2024)

Establishment and Verification of a Skin Cancer Diagnosis Model Based on Image Convolutional Neural Network Analysis and Artificial Intelligence Algorithms.

Alternative therapies in health and medicine pii:AT10026 [Epub ahead of print].

Skin cancer is a serious public health problem, with countless deaths due to skin cancer each year. Early detection and aggressive, effective treatment of the primary lesion are the best approach to skin cancer and are important for improving patients' prognosis and reducing the death rate of the disease. However, judging skin tumors by the naked eye alone is highly subjective, and diagnoses can vary greatly even among professionally trained physicians. Clinically, skin endoscopy is a commonly used method for early diagnosis; however, manual examination is time-consuming, laborious, and highly dependent on the clinical experience of dermatologists. With the rapid development of information technology, the amount of information is increasing at a geometric rate, and new technologies such as cloud computing, distributed computing, data mining, and meta-heuristics are emerging. In this paper, we design and build a computer-aided diagnosis system for dermatoscopic images and apply meta-heuristic algorithms to image enhancement and image cutting to improve the quality of images, thereby increasing the speed of diagnosis and enabling earlier detection and treatment.

RevDate: 2024-06-13

Hu Y, Schnaubelt M, Chen L, et al (2024)

MS-PyCloud: A Cloud Computing-Based Pipeline for Proteomic and Glycoproteomic Data Analyses.

Analytical chemistry [Epub ahead of print].

Rapid development and wide adoption of mass spectrometry-based glycoproteomic technologies have empowered scientists to study proteins and protein glycosylation in complex samples on a large scale. This progress has also created unprecedented challenges for individual laboratories to store, manage, and analyze proteomic and glycoproteomic data, both in the cost for proprietary software and high-performance computing and in the long processing time that discourages on-the-fly changes of data processing settings required in explorative and discovery analysis. We developed an open-source, cloud computing-based pipeline, MS-PyCloud, with graphical user interface (GUI), for proteomic and glycoproteomic data analysis. The major components of this pipeline include data file integrity validation, MS/MS database search for spectral assignments to peptide sequences, false discovery rate estimation, protein inference, quantitation of global protein levels, and specific glycan-modified glycopeptides as well as other modification-specific peptides such as phosphorylation, acetylation, and ubiquitination. To ensure the transparency and reproducibility of data analysis, MS-PyCloud includes open-source software tools with comprehensive testing and versioning for spectrum assignments. Leveraging public cloud computing infrastructure via Amazon Web Services (AWS), MS-PyCloud scales seamlessly based on analysis demand to achieve fast and efficient performance. Application of the pipeline to the analysis of large-scale LC-MS/MS data sets demonstrated the effectiveness and high performance of MS-PyCloud. The software can be downloaded at https://github.com/huizhanglab-jhu/ms-pycloud.

RevDate: 2024-06-13
CmpDate: 2024-06-13

Sochat V, Culquicondor A, Ojea A, et al (2024)

The Flux Operator.

F1000Research, 13:203.

Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator - an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.

RevDate: 2024-06-11

Salcedo A, Tarabichi M, Buchanan A, et al (2024)

Crowd-sourced benchmarking of single-sample tumor subclonal reconstruction.

Nature biotechnology [Epub ahead of print].

Subclonal reconstruction algorithms use bulk DNA sequencing data to quantify parameters of tumor evolution, allowing an assessment of how cancers initiate, progress and respond to selective pressures. We launched the ICGC-TCGA (International Cancer Genome Consortium-The Cancer Genome Atlas) DREAM Somatic Mutation Calling Tumor Heterogeneity and Evolution Challenge to benchmark existing subclonal reconstruction algorithms. This 7-year community effort used cloud computing to benchmark 31 subclonal reconstruction algorithms on 51 simulated tumors. Algorithms were scored on seven independent tasks, leading to 12,061 total runs. Algorithm choice influenced performance substantially more than tumor features but purity-adjusted read depth, copy-number state and read mappability were associated with the performance of most algorithms on most tasks. No single algorithm was a top performer for all seven tasks and existing ensemble strategies were unable to outperform the best individual methods, highlighting a key research need. All containerized methods, evaluation code and datasets are available to support further assessment of the determinants of subclonal reconstruction accuracy and development of improved methods to understand tumor evolution.

RevDate: 2024-06-11
CmpDate: 2024-06-11

Ko G, Lee JH, Sim YM, et al (2024)

KoNA: Korean Nucleotide Archive as A New Data Repository for Nucleotide Sequence Data.

Genomics, proteomics & bioinformatics, 22(1):.

During the last decade, the generation and accumulation of petabase-scale high-throughput sequencing data have resulted in great challenges, including access to human data, as well as transfer, storage, and sharing of enormous amounts of data. To promote data-driven biological research, the Korean government announced that all biological data generated from government-funded research projects should be deposited at the Korea BioData Station (K-BDS), which consists of multiple databases for individual data types. Here, we introduce the Korean Nucleotide Archive (KoNA), a repository of nucleotide sequence data. As of July 2022, the Korean Read Archive in KoNA has collected over 477 TB of raw next-generation sequencing data from national genome projects. To ensure data quality and prepare for international alignment, a standard operating procedure was adopted, which is similar to that of the International Nucleotide Sequence Database Collaboration. The standard operating procedure includes quality control processes for submitted data and metadata using an automated pipeline, followed by manual examination. To ensure fast and stable data transfer, a high-speed transmission system called GBox is used in KoNA. Furthermore, the data uploaded to or downloaded from KoNA through GBox can be readily processed using a cloud computing service called Bio-Express. This seamless coupling of KoNA, GBox, and Bio-Express enhances the data experience, including submission, access, and analysis of raw nucleotide sequences. KoNA not only satisfies the unmet needs for a national sequence repository in Korea but also provides datasets to researchers globally and contributes to advances in genomics. The KoNA is available at https://www.kobic.re.kr/kona/.

RevDate: 2024-06-11

McMurry AJ, Gottlieb DI, Miller TA, et al (2024)

Cumulus: a federated electronic health record-based learning system powered by Fast Healthcare Interoperability Resources and artificial intelligence.

Journal of the American Medical Informatics Association : JAMIA pii:7690920 [Epub ahead of print].

OBJECTIVE: To address challenges in large-scale electronic health record (EHR) data exchange, we sought to develop, deploy, and test an open source, cloud-hosted app "listener" that accesses standardized data across the SMART/HL7 Bulk FHIR Access application programming interface (API).

METHODS: We advance a model for scalable, federated, data sharing and learning. Cumulus software is designed to address key technology and policy desiderata including local utility, control, and administrative simplicity as well as privacy preservation during robust data sharing, and artificial intelligence (AI) for processing unstructured text.

RESULTS: Cumulus relies on containerized, cloud-hosted software installed within a healthcare organization's security envelope. Cumulus accesses EHR data via the Bulk FHIR interface and streamlines automated processing and sharing. The modular design enables use of the latest AI and natural language processing tools and supports provider autonomy and administrative simplicity. In an initial test, Cumulus was deployed across 5 healthcare systems, each partnered with public health. Cumulus output consists of patient counts, which were aggregated into a table stratifying variables of interest to enable population health studies. All code is available open source. A policy stipulating that only aggregate data leave the institution greatly facilitated data sharing agreements.

DISCUSSION AND CONCLUSION: Cumulus addresses barriers to data sharing based on (1) federally required support for standard APIs, (2) increasing use of cloud computing, and (3) advances in AI. There is potential for scalability to support learning across myriad network configurations and use cases.

RevDate: 2024-06-11

Yang T, Du Y, Sun M, et al (2024)

Risk Management for Whole-Process Safe Disposal of Medical Waste: Progress and Challenges.

Risk management and healthcare policy, 17:1503-1522.

Over the past decade, the global outbreaks of SARS, influenza A (H1N1), COVID-19, and other major infectious diseases have exposed the insufficient capacity for emergency disposal of medical waste in numerous countries and regions. Particularly during epidemics of major infectious diseases, medical waste exhibits new characteristics such as accelerated growth rate, heightened risk level, and more stringent disposal requirements. Consequently, there is an urgent need for advanced theoretical approaches that can perceive, predict, evaluate, and control risks associated with safe disposal throughout the entire process in a timely, accurate, efficient, and comprehensive manner. This article provides a systematic review of relevant research on collection, storage, transportation, and disposal of medical waste throughout its entirety to illustrate the current state of safe disposal practices. Building upon this foundation and leveraging emerging information technologies like Internet of Things (IoT), cloud computing, big data analytics, and artificial intelligence (AI), we deeply contemplate future research directions with an aim to minimize risks across all stages of medical waste disposal while offering valuable references and decision support to further advance safe disposal practices.

RevDate: 2024-06-10

Ullah S, Ou J, Xie Y, et al (2024)

Facial expression recognition (FER) survey: a vision, architectural elements, and future directions.

PeerJ. Computer science, 10:e2024.

With the cutting-edge advancements in computer vision, facial expression recognition (FER) is an active research area due to its broad practical applications. It has been utilized in various fields, including education, advertising and marketing, entertainment and gaming, health, and transportation. FER-based systems are rapidly evolving in response to new challenges, and significant research has been conducted on both basic and compound facial expressions of emotions; however, measuring emotions remains challenging. Motivated by these recent advancements and challenges, in this article we discuss the basics of FER and its architectural elements, FER applications and use cases, leading global FER companies, and the interconnection between FER, the Internet of Things (IoT) and cloud computing, and we summarize in depth the open challenges facing FER technologies and future directions, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. In the end, conclusions and future thoughts are discussed. By overcoming the challenges and pursuing the future directions identified in this research study, researchers can advance the discipline of facial expression recognition.

RevDate: 2024-06-10
CmpDate: 2024-06-10

Aman SS, N'guessan BG, Agbo DDA, et al (2023)

Search engine Performance optimization: methods and techniques.

F1000Research, 12:1317.

BACKGROUND: With the rapid advancement of information technology, search engine optimisation (SEO) has become crucial for enhancing the visibility and relevance of online content. In this context, the use of cloud platforms like Microsoft Azure is being explored to bolster SEO capabilities.

METHODS: This scientific article offers an in-depth study of search engine optimisation. It explores the different methods and techniques used to improve the performance and efficiency of a search engine, focusing on key aspects such as result relevance, search speed and user experience. The article also presents case studies and concrete examples to illustrate the practical application of optimisation techniques.

RESULTS: The results demonstrate the importance of optimisation in delivering high quality search results and meeting the increasing demands of users.

CONCLUSIONS: The article addresses the enhancement of search engines through the Microsoft Azure infrastructure and its associated components. It highlights methods such as indexing, semantic analysis, parallel searches, and caching to strengthen the relevance of results, speed up searches, and optimise the user experience. Following the application of these methods, a marked improvement was observed in these areas, thereby showcasing the capability of Microsoft Azure in enhancing search engines. The study sheds light on the implementation and analysis of these Azure-focused techniques, introduces a methodology for assessing their efficacy, and details the specific benefits of each method. Looking forward, the article suggests integrating artificial intelligence to elevate the relevance of results, venturing into other cloud infrastructures to boost performance, and evaluating these methods in specific scenarios, such as multimedia information search. In summary, with Microsoft Azure, the enhancement of search engines appears promising, with increased relevance and a heightened user experience in a rapidly evolving sector.

RevDate: 2024-06-06

Hie BL, Kim S, Rando TA, et al (2024)

Scanorama: integrating large and diverse single-cell transcriptomic datasets.

Nature protocols [Epub ahead of print].

Merging diverse single-cell RNA sequencing (scRNA-seq) data from numerous experiments, laboratories and technologies can uncover important biological insights. Nonetheless, integrating scRNA-seq data encounters special challenges when the datasets are composed of diverse cell type compositions. Scanorama offers a robust solution for improving the quality and interpretation of heterogeneous scRNA-seq data by effectively merging information from diverse sources. Scanorama is designed to address the technical variation introduced by differences in sample preparation, sequencing depth and experimental batches that can confound the analysis of multiple scRNA-seq datasets. Here we provide a detailed protocol for using Scanorama within a Scanpy-based single-cell analysis workflow coupled with Google Colaboratory, a cloud-based free Jupyter notebook environment service. The protocol involves Scanorama integration, a process that typically spans 0.5-3 h. Scanorama integration requires a basic understanding of cellular biology, transcriptomic technologies and bioinformatics. Our protocol and new Scanorama-Colaboratory resource should make scRNA-seq integration more widely accessible to researchers.
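
A hedged sketch of the protocol's core call inside a Scanpy workflow; the function names follow the public scanorama package and the input file names are placeholders:

```python
import anndata as ad
import scanpy as sc
import scanorama

# Load per-batch AnnData objects (placeholder file names).
adatas = [sc.read_h5ad(f) for f in ["batch1.h5ad", "batch2.h5ad"]]

# Light per-batch preprocessing before integration.
for a in adatas:
    sc.pp.normalize_total(a, target_sum=1e4)
    sc.pp.log1p(a)

# Scanorama batch correction plus a shared low-dimensional embedding.
corrected = scanorama.correct_scanpy(adatas, return_dimred=True)

# Merge batches and continue a standard Scanpy workflow on the
# integrated embedding stored in .obsm["X_scanorama"].
merged = ad.concat(corrected, label="batch")
sc.pp.neighbors(merged, use_rep="X_scanorama")
sc.tl.umap(merged)
```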

RevDate: 2024-06-05

Zheng P, Yang J, Lou J, et al (2024)

Design and application of virtual simulation teaching platform for intelligent manufacturing.

Scientific reports, 14(1):12895.

Practical teaching in intelligent-manufacturing majors faces a shortage of equipment and teachers, as well as problems such as high equipment investment, high material loss, high teaching risk, difficulty implementing internships, difficulty observing production, and difficulty reproducing results. Taking the electrical automation technology, mechatronics technology, and industrial robotics technology majors as examples, we design and establish a virtual simulation teaching platform for intelligent manufacturing that combines a cloud computing platform, edge computing technology, and terminal equipment. The platform includes six virtual simulation modules: electrician electronics and PLC control; a virtual-real combination of typical intelligent-manufacturing production lines; a dual-axis collaborative robotics workstation; digital twin simulation; virtual disassembly of industrial robots; and a flexible magnetic-yoke-axis production line. The platform covers virtual simulation teaching content for basic principle experiments, advanced application experiments, and advanced integration experiments in intelligent-manufacturing majors. To test the platform's effectiveness for practical engineering teaching, we organized a teaching practice activity involving 246 students from two parallel classes in three different majors. Through a one-year teaching application, we analyzed grades for the 7 core courses involved in the three majors over one academic year, the proportion of participation in competitions and innovation activities, the number of awards and professional qualification certificates, and subjective questionnaires from the testers. The analysis shows that learners who used the proposed virtual simulation teaching platform outperformed learners taught with traditional methods in academic performance, participation in competitions and innovation activities, and awards and certificates, by more than 13%, 37%, 36%, 27%, and 22%, respectively. The platform thus shows clear advantages in addressing the "three highs and three difficulties" of practical engineering teaching, and questionnaire feedback indicates that it can effectively alleviate the shortage of practical training equipment, stimulate interest in learning, and help broaden and improve learners' knowledge systems.

RevDate: 2024-06-05

Lai Q, S Guo (2024)

Heterogeneous coexisting attractors, large-scale amplitude control and finite-time synchronization of central cyclic memristive neural networks.

Neural networks : the official journal of the International Neural Network Society, 178:106412 pii:S0893-6080(24)00336-8 [Epub ahead of print].

Memristors are of great theoretical and practical significance for research on the chaotic dynamics of brain-like neural networks because of their excellent physical properties, such as brain-synapse-like memorability and nonlinearity; they are especially relevant to the promotion of large AI models, cloud computing, and intelligent systems in the artificial intelligence field. In this paper, we introduce memristors as self-connecting synapses into a four-dimensional Hopfield neural network, constructing a central cyclic memristive neural network (CCMNN), and achieve its effective control. The model adopts a central loop topology and exhibits a variety of complex dynamic behaviors such as chaos, bifurcation, and homogeneous and heterogeneous coexisting attractors. The complex dynamic behaviors of the CCMNN are investigated in depth numerically through equilibrium-point stability analysis together with phase trajectories, bifurcation diagrams, time-domain waveforms, and Lyapunov exponents (LEs). It is found that as the internal parameters of the memristor vary, asymmetric heterogeneous attractor coexistence appears under different initial conditions, including the multi-stable coexistence of periodic-periodic, periodic-stable point, periodic-chaotic, and stable point-chaotic behaviors. In addition, by adjusting the structural parameters, a wide range of amplitude control can be realized without changing the chaotic state of the system. Finally, based on the CCMNN model, an adaptive synchronization controller is designed to achieve finite-time synchronization control, and its application prospects in simple secure communication are discussed. A microcontroller-based hardware circuit and NIST tests verify the correctness of the numerical results and theoretical analysis.

RevDate: 2024-06-05
CmpDate: 2024-06-05

Oliva A, Kaphle A, Reguant R, et al (2024)

Future-proofing genomic data and consent management: a comprehensive review of technology innovations.

GigaScience, 13:.

Genomic information is increasingly used to inform medical treatments and manage future disease risks. However, any personal and societal gains must be carefully balanced against the risk to individuals contributing their genomic data. Expanding our understanding of actionable genomic insights requires researchers to access large global datasets to capture the complexity of genomic contribution to diseases. Similarly, clinicians need efficient access to a patient's genome as well as population-representative historical records for evidence-based decisions. Both researchers and clinicians hence rely on participants to consent to the use of their genomic data, which in turn requires trust in the professional and ethical handling of this information. Here, we review existing and emerging solutions for secure and effective genomic information management, including storage, encryption, consent, and authorization that are needed to build participant trust. We discuss recent innovations in cloud computing, quantum-computing-proof encryption, and self-sovereign identity. These innovations can augment key developments from within the genomics community, notably GA4GH Passports and the Crypt4GH file container standard. We also explore how decentralized storage as well as the digital consenting process can offer culturally acceptable processes to encourage data contributions from ethnic minorities. We conclude that the individual and their right for self-determination needs to be put at the center of any genomics framework, because only on an individual level can the received benefits be accurately balanced against the risk of exposing private information.

RevDate: 2024-06-04

Peter R, Moreira S, Tagliabue E, et al (2024)

Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.

International journal of computer assisted radiology and surgery [Epub ahead of print].

PURPOSE: This work presents a novel platform for stereo reconstruction in anterior segment ophthalmic surgery to enable enhanced scene understanding, especially depth perception, for advanced computer-assisted eye surgery by effectively addressing the lack of texture and the corneal distortion artifacts in the surgical scene.

METHODS: The proposed platform for stereo reconstruction uses a two-step approach: first generating a sparse 3D point cloud from microscopic images, then deriving a dense 3D representation by fitting surfaces onto the point cloud while considering geometrical priors of the eye anatomy. We incorporate a pre-processing step to rectify distortion artifacts induced by the cornea's high refractive power, achieved by aligning a 3D phenotypical cornea geometry model to the images and computing a distortion map using ray tracing.

RESULTS: The accuracy of 3D reconstruction is evaluated on stereo microscopic images of ex vivo porcine eyes, rigid phantom eyes, and synthetic photo-realistic images. The results demonstrate the potential of the proposed platform to enhance scene understanding via an accurate 3D representation of the eye and enable the estimation of instrument-to-layer distances in porcine eyes with a mean average error of 190 μm, comparable to the scale of surgeons' hand tremor.

CONCLUSION: This work marks a significant advancement in stereo reconstruction for ophthalmic surgery by addressing corneal distortions, a previously often overlooked aspect in such surgical scenarios. This could improve surgical outcomes by allowing for intra-operative computer assistance, e.g., in the form of virtual distance sensors.

RevDate: 2024-06-04
CmpDate: 2024-06-04

H S M, P Gupta (2024)

Federated learning inspired Antlion based orchestration for Edge computing environment.

PloS one, 19(6):e0304067 pii:PONE-D-24-06086.

Edge computing is a scalable, modern, and distributed computing architecture that brings computational workloads closer to smart gateways or Edge devices. This computing model delivers IoT (Internet of Things) computations and processes IoT requests from the Edge of the network. In a diverse and independent environment like Fog-Edge, resource management is a critical issue, so scheduling is a vital process for enhancing efficiency and allocating resources properly to tasks. The manuscript proposes an Artificial Neural Network (ANN) inspired Antlion algorithm for task orchestration in Edge environments, aiming to enhance resource utilization and reduce energy consumption. Comparative analysis with different algorithms shows that the proposed algorithm balances the load on the Edge layer, which results in a lower load on the cloud and improves power consumption, CPU utilization, network utilization, and the average waiting time for requests. The proposed model is tested for a healthcare application in an Edge computing environment, and the evaluation shows that it outperforms the existing fuzzy logic and round-robin algorithms on the same performance metrics: power consumption, CPU utilization, network utilization, and average waiting time for requests. The proposed technique achieves an average cloud energy consumption improvement of 95.94%, an average Edge energy consumption improvement of 16.79%, a 19.85% improvement in average CPU utilization in the Edge computing environment, a 10.64% improvement in average CPU utilization in the cloud environment, and a 23.33% improvement in average network utilization, while the average waiting time decreases by 96% compared to fuzzy logic and 1.4% compared to round-robin.
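
The paper's algorithm is not reproduced in this abstract, but the flavor of metaheuristic task orchestration can be sketched: score candidate task-to-node assignments with a load-balance fitness function and search over them. The following is a generic stand-in with invented capacities and demands, not the authors' ANN-Antlion hybrid:

    import random

    # Hypothetical per-node CPU capacities and per-task CPU demands.
    NODE_CAPACITY = [100, 100, 80]
    TASKS = [14, 7, 22, 9, 31, 12, 18, 5]

    def fitness(assignment):
        """Lower is better: penalize utilization imbalance across nodes
        (a rough proxy for energy use and waiting time)."""
        load = [0.0] * len(NODE_CAPACITY)
        for task, node in zip(TASKS, assignment):
            load[node] += task
        util = [l / c for l, c in zip(load, NODE_CAPACITY)]
        mean = sum(util) / len(util)
        return sum((u - mean) ** 2 for u in util)

    # Population-based random search standing in for the guided optimizer.
    best = min(
        ([random.randrange(len(NODE_CAPACITY)) for _ in TASKS]
         for _ in range(5000)),
        key=fitness,
    )
    print("assignment:", best, "fitness:", round(fitness(best), 4))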

RevDate: 2024-06-04

Herre C, Ho A, Eisenbraun B, et al (2024)

Introduction of the Capsules environment to support further growth of the SBGrid structural biology software collection.

Acta crystallographica. Section D, Structural biology pii:S2059798324004881 [Epub ahead of print].

The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on-premises and cloud-based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross-platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict-free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables application beyond structural biology into other scientific fields.

RevDate: 2024-06-03

Rathinam R, Sivakumar P, Sigamani S, et al (2024)

SJFO: Sail Jelly Fish Optimization enabled VM migration with DRNN-based prediction for load balancing in cloud computing.

Network (Bristol, England) [Epub ahead of print].

Load balancing distributes a dynamic workload evenly among all nodes, such as hosts or VMs; in the cloud this is also offered as Load Balancing as a Service (LBaaS). In this research work, the load is balanced through Virtual Machine (VM) migration carried out by the proposed Sail Jelly Fish Optimization (SJFO), which is formed by combining the Sail Fish Optimizer (SFO) and the Jellyfish Search (JS) optimizer. In the Cloud model, many Physical Machines (PMs) are present, and each PM hosts many VMs. Each VM runs many tasks, and these tasks depend on various parameters such as Central Processing Unit (CPU), memory, Million Instructions per Second (MIPS), capacity, the total number of processing entities, and bandwidth. Here, the load is predicted by a Deep Recurrent Neural Network (DRNN), the predicted load is compared with a threshold value, and VM migration is performed based on the predicted values. Furthermore, the performance of SJFO-VM is analysed using metrics including capacity, load, and resource utilization. The proposed method shows better performance, with a higher capacity of 0.598, a lower load of 0.089, and a lower resource utilization of 0.257.
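
The decision logic described (forecast the load, compare it with a threshold, migrate if exceeded) can be pictured in a few lines. Everything below is an illustrative stand-in: the DRNN forecaster and the SJFO-driven choice of destination are not reproduced:

    # Threshold-triggered migration check (all names and values hypothetical).
    THRESHOLD = 0.8   # utilization threshold on the physical machine

    def forecast_load(history):
        # Stand-in for the paper's DRNN predictor: a short moving average.
        return sum(history[-3:]) / 3.0

    def pick_vm_to_migrate(vms):
        # Simple heuristic: move the busiest VM off the overloaded PM.
        return max(vms, key=lambda vm: vm["load"])

    pm = {"history": [0.70, 0.78, 0.93],
          "vms": [{"id": "vm1", "load": 0.55}, {"id": "vm2", "load": 0.35}]}

    if forecast_load(pm["history"]) > THRESHOLD:
        print("migrate", pick_vm_to_migrate(pm["vms"])["id"])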

RevDate: 2024-06-03

McCormick I, Butcher R, Ramke J, et al (2024)

The Rapid Assessment of Avoidable Blindness survey: Review of the methodology and protocol for the seventh version (RAAB7).

Wellcome open research, 9:133.

The Rapid Assessment of Avoidable Blindness (RAAB) is a population-based cross-sectional survey methodology used to collect data on the prevalence of vision impairment and its causes and eye care service indicators among the population 50 years and older. RAAB has been used for over 20 years with modifications to the protocol over time reflected in changing version numbers; this paper describes the latest version of the methodology-RAAB7. RAAB7 is a collaborative project between the International Centre for Eye Health and Peek Vision with guidance from a steering group of global eye health stakeholders. We have fully digitised RAAB, allowing for fast, accurate and secure data collection. A bespoke Android mobile application automatically synchronises data to a secure Amazon Web Services virtual private cloud when devices are online so users can monitor data collection in real-time. Vision is screened using Peek Vision's digital visual acuity test for mobile devices and uncorrected, corrected and pinhole visual acuity are collected. An optional module on Disability is available. We have rebuilt the RAAB data repository as the end point of RAAB7's digital data workflow, including a front-end website to access the past 20 years of RAAB surveys worldwide. This website (https://www.raab.world) hosts open access RAAB data to support the advocacy and research efforts of the global eye health community. Active research sub-projects are finalising three new components in 2024-2025: 1) Near vision screening to address data gaps on near vision impairment and effective refractive error coverage; 2) an optional Health Economics module to assess the affordability of eye care services and productivity losses associated with vision impairment; 3) an optional Health Systems data collection module to support RAAB's primary aim to inform eye health service planning by supporting users to integrate eye care facility data with population data.

RevDate: 2024-06-03

Zhu X, X Peng (2024)

Strategic assessment model of smart stadiums based on genetic algorithms and literature visualization analysis: A case study from Chengdu, China.

Heliyon, 10(11):e31759.

This paper leverages CiteSpace and VOSviewer software to perform a comprehensive bibliometric analysis of a corpus of 384 references related to smart sports venues, spanning 1998 to 2022. The analysis encompasses various facets, including author network analysis, institutional network analysis, temporal mapping, keyword clustering, and co-citation network analysis. Moreover, this paper constructs a smart stadiums strategic assessment model (SSSAM), using genetic algorithms (GA) to compensate for confusion and aimlessness in strategic planning. Our findings indicate exponential year-over-year growth in publications on smart sports venues. Arizona State University emerges as the institution with the highest number of collaborative publications, Energy and Buildings is the publication with the most documents, and Wang X stands out as the scholar with the most substantial contribution to the field. In scrutinizing the betweenness centrality indicators, a paradigm shift in research hotspots becomes evident: from intelligent software to the domains of the Internet of Things (IoT), intelligent services, and artificial intelligence (AI). The SSSAM model, based on artificial neural networks (ANN) and GA algorithms, reached similar conclusions through a case study of the International University Sports Federation (FISU): Building Information Modeling (BIM), cloud computing, and the Artificial Intelligence Internet of Things (AIoT) are expected to develop in the future. Three key themes developed over time. Finally, a comprehensive knowledge system with common references and future hot spots is proposed.

RevDate: 2024-06-03

Nisanova A, Yavary A, Deaner J, et al (2024)

Performance of Automated Machine Learning in Predicting Outcomes of Pneumatic Retinopexy.

Ophthalmology science, 4(5):100470.

PURPOSE: Automated machine learning (AutoML) has emerged as a novel tool for medical professionals lacking coding experience, enabling them to develop predictive models for treatment outcomes. This study evaluated the performance of AutoML tools in developing models predicting the success of pneumatic retinopexy (PR) in treatment of rhegmatogenous retinal detachment (RRD). These models were then compared with custom models created by machine learning (ML) experts.

DESIGN: Retrospective multicenter study.

PARTICIPANTS: Five hundred thirty-nine consecutive patients with primary RRD who underwent PR performed by a vitreoretinal fellow at 6 training hospitals between 2002 and 2022.

METHODS: We used 2 AutoML platforms: MATLAB Classification Learner and Google Cloud AutoML. Additional models were developed by computer scientists. We included patient demographics and baseline characteristics, including lens and macula status, RRD size, number and location of breaks, presence of vitreous hemorrhage and lattice degeneration, and physicians' experience. The dataset was split into a training (n = 483) and test set (n = 56). The training set, with a 2:1 success-to-failure ratio, was used to train the MATLAB models. Because Google Cloud AutoML requires a minimum of 1000 samples, the training set was tripled to create a new set with 1449 datapoints. Additionally, balanced datasets with a 1:1 success-to-failure ratio were created using Python.

MAIN OUTCOME MEASURES: Single-procedure anatomic success rate, as predicted by the ML models. F2 scores and area under the receiver operating characteristic curve (AUROC) were used as primary metrics to compare models.

RESULTS: The best performing AutoML model (F2 score: 0.85; AUROC: 0.90; MATLAB), showed comparable performance to the custom model (0.92, 0.86) when trained on the balanced datasets. However, training the AutoML model with imbalanced data yielded misleadingly high AUROC (0.81) despite low F2-score (0.2) and sensitivity (0.17).

CONCLUSIONS: We demonstrated the feasibility of using AutoML as an accessible tool for medical professionals to develop models from clinical data. Such models can ultimately aid in the clinical decision-making, contributing to better patient outcomes. However, outcomes can be misleading or unreliable if used naively. Limitations exist, particularly if datasets contain missing variables or are highly imbalanced. Proper model selection and data preprocessing can improve the reliability of AutoML tools.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
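
The imbalance pitfall the authors describe is easy to reproduce. A small sketch with scikit-learn (toy labels, not study data): a model that never predicts the minority class can still post high accuracy while its recall-weighted F2 score collapses. F2 is F-beta with beta = 2, i.e. F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), which weights recall four times as heavily as precision.

    import numpy as np
    from sklearn.metrics import fbeta_score

    y_true = np.array([0] * 90 + [1] * 10)   # 1 = failure, the minority class
    y_pred = np.zeros(100, dtype=int)        # degenerate model: never predicts failure

    # Accuracy looks fine at 0.90, but F2 on the minority class is 0.
    print((y_true == y_pred).mean())                             # 0.9
    print(fbeta_score(y_true, y_pred, beta=2, zero_division=0))  # 0.0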

RevDate: 2024-06-03

Rodriguez A, Kim Y, Nandi TN, et al (2024)

Accelerating Genome- and Phenome-Wide Association Studies using GPUs - A case study using data from the Million Veteran Program.

bioRxiv : the preprint server for biology pii:2024.05.17.594583.

The expansion of biobanks has significantly propelled genomic discoveries, yet the sheer scale of data within these repositories poses formidable computational hurdles, particularly in handling extensive matrix operations required by prevailing statistical frameworks. In this work, we introduce computational optimizations to the SAIGE (Scalable and Accurate Implementation of Generalized Mixed Model) algorithm, notably employing a GPU-based distributed computing approach to tackle these challenges. We applied these optimizations to conduct a large-scale genome-wide association study (GWAS) across 2,068 phenotypes derived from electronic health records of 635,969 diverse participants from the Veterans Affairs (VA) Million Veteran Program (MVP). Our strategies enabled scaling up the analysis to over 6,000 nodes on the Department of Energy (DOE) Oak Ridge Leadership Computing Facility (OLCF) Summit High-Performance Computer (HPC), resulting in a 20-fold acceleration compared to the baseline model. We also provide a Docker container with our optimizations that was successfully used on multiple cloud infrastructures on UK Biobank and All of Us datasets, where we showed significant time and cost benefits over the baseline SAIGE model.

RevDate: 2024-06-03

Lowndes JS, Holder AM, Markowitz EH, et al (2024)

Shifting institutional culture to develop climate solutions with Open Science.

Ecology and evolution, 14(6):e11341.

To address our climate emergency, "we must rapidly, radically reshape society" (Johnson & Wilkinson, All We Can Save). In science, reshaping requires formidable technical (cloud, coding, reproducibility) and cultural shifts (mindsets, hybrid collaboration, inclusion). We are a group of cross-government and academic scientists who are exploring better ways of working, and who are not too entrenched in our bureaucracies to do better science, support colleagues, and change the culture at our organizations. We share much-needed success stories and actions for what we can all do to reshape science as part of the Open Science movement and the 2023 Year of Open Science.

RevDate: 2024-05-30

Mimar S, Paul AS, Lucarelli N, et al (2024)

ComPRePS: An Automated Cloud-based Image Analysis tool to democratize AI in Digital Pathology.

Proceedings of SPIE--the International Society for Optical Engineering, 12933:.

Artificial intelligence (AI) has extensive applications in a wide range of disciplines including healthcare and clinical practice. Advances in high-resolution whole-slide brightfield microscopy allow for the digitization of histologically stained tissue sections, producing gigapixel-scale whole-slide images (WSI). The significant improvement in computing and revolution of deep neural network (DNN)-based AI technologies over the last decade allow us to integrate massively parallelized computational power, cutting-edge AI algorithms, and big data storage, management, and processing. Applied to WSIs, AI has created opportunities for improved disease diagnostics and prognostics with the ultimate goal of enhancing precision medicine and resulting patient care. The National Institutes of Health (NIH) has recognized the importance of developing standardized principles for data management and discovery for the advancement of science and proposed the Findable, Accessible, Interoperable, Reusable (FAIR) Data Principles [1] with the goal of building a modernized biomedical data resource ecosystem to establish collaborative research communities. In line with this mission and to democratize AI-based image analysis in digital pathology, we propose ComPRePS: an end-to-end automated Computational Renal Pathology Suite which combines massive scalability, on-demand cloud computing, and an easy-to-use web-based user interface for data upload, storage, management, slide-level visualization, and domain expert interaction. Moreover, our platform is equipped with both in-house and collaborator-developed sophisticated AI algorithms in the back-end server for image analysis to identify clinically relevant micro-anatomic functional tissue units (FTU) and to extract image features.

RevDate: 2024-05-29

Yu J, Nie S, Liu W, et al (2024)

Mapping global mangrove canopy height by integrating Ice, Cloud, and Land Elevation Satellite-2 photon-counting LiDAR data with multi-source images.

The Science of the total environment pii:S0048-9697(24)03634-9 [Epub ahead of print].

Large-scale and precise measurement of mangrove canopy height is crucial for understanding and evaluating wetland ecosystems' condition, health, and productivity. This study generates a global mangrove canopy height map with a 30 m resolution by integrating Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) photon-counting light detection and ranging (LiDAR) data with multi-source imagery. Initially, high-quality mangrove canopy height samples were extracted using meticulous processing and filtering of ICESat-2 data. Subsequently, mangrove canopy height models were established using the random forest (RF) algorithm, incorporating ICESat-2 canopy height samples, Sentinel-2 data, TanDEM-X DEM data and WorldClim data. Furthermore, a global 30 m mangrove canopy height map was generated utilizing the Google Earth Engine platform. Finally, the global map's accuracy was evaluated by comparing it with reference canopy heights derived from both space-borne and airborne LiDAR data. Results indicate that the global 30 m resolution mangrove height map was found to be consistent with canopy heights obtained from space-borne (r = 0.88, Bias = -0.07 m, RMSE = 3.66 m, RMSE% = 29.86%) and airborne LiDAR (r = 0.52, Bias = -1.08 m, RMSE = 3.39 m, RMSE% = 39.05%). Additionally, our findings reveal that mangroves worldwide exhibit an average height of 12.65 m, with the tallest mangrove reaching a height of 44.94 m. These results demonstrate the feasibility and effectiveness of using ICESat-2 data integrated with multi-source imagery to generate a global mangrove canopy height map. This dataset offers reliable information that can significantly support government and organizational efforts to protect and conserve mangrove ecosystems.
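
The regression step described above maps cleanly onto Earth Engine's built-in random forest. A sketch with the Python API, where the asset IDs, band stack, and property name are illustrative rather than the authors' actual assets:

    import ee

    ee.Initialize()

    # Training samples: ICESat-2-derived canopy heights (hypothetical asset).
    samples = ee.FeatureCollection("users/example/icesat2_canopy_samples")

    # Predictor stack: Sentinel-2, TanDEM-X DEM, WorldClim (hypothetical asset).
    predictors = ee.Image("users/example/predictor_stack_30m")
    bands = predictors.bandNames()

    rf = (ee.Classifier.smileRandomForest(numberOfTrees=100)
          .setOutputMode("REGRESSION")
          .train(features=samples,
                 classProperty="canopy_height_m",
                 inputProperties=bands))

    height_map = predictors.classify(rf)   # per-pixel 30 m canopy height estimate

setOutputMode("REGRESSION") is what turns the classifier into a per-pixel height regressor; exporting height_map then proceeds through the usual ee.batch.Export.image workflow.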

RevDate: 2024-05-27

Oh S, Gravel-Pucillo K, Ramos M, et al (2024)

AnVILWorkflow: A runnable workflow package for Cloud-implemented bioinformatics analysis pipelines.

Research square pii:rs.3.rs-4370115.

Advancements in sequencing technologies and the development of new data collection methods produce large volumes of biological data. The Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL) provides a cloud-based platform for democratizing access to large-scale genomics data and analysis tools. However, utilizing the full capabilities of AnVIL can be challenging for researchers without extensive bioinformatics expertise, especially for executing complex workflows. Here we present the AnVILWorkflow R package, which enables the convenient execution of bioinformatics workflows hosted on AnVIL directly from an R environment. AnVILWorkflow simplifies the setup of the cloud computing environment, input data formatting, workflow submission, and retrieval of results through intuitive functions. We demonstrate the utility of AnVILWorkflow for three use cases: bulk RNA-seq analysis with Salmon, metagenomics analysis with bioBakery, and digital pathology image processing with PathML. The key features of AnVILWorkflow include user-friendly browsing of available data and workflows, seamless integration of R and non-R tools within a reproducible analysis pipeline, and accessibility to scalable computing resources without direct management overhead. While some limitations exist around workflow customization, AnVILWorkflow lowers the barrier to taking advantage of AnVIL's resources, especially for exploratory analyses or bulk processing with established workflows. This empowers a broader community of researchers to leverage the latest genomics tools and datasets using familiar R syntax. This package is distributed through the Bioconductor project (https://bioconductor.org/packages/AnVILWorkflow), and the source code is available through GitHub (https://github.com/shbrief/AnVILWorkflow).

RevDate: 2024-05-26
CmpDate: 2024-05-26

Alrashdi I (2024)

Fog-based deep learning framework for real-time pandemic screening in smart cities from multi-site tomographies.

BMC medical imaging, 24(1):123.

The quick proliferation of pandemic diseases has been imposing many concerns on the international health infrastructure. To combat pandemic diseases in smart cities, Artificial Intelligence of Things (AIoT) technology, based on the integration of artificial intelligence (AI) with the Internet of Things (IoT), is commonly used to promote efficient control and diagnosis during the outbreak, thereby minimizing possible losses. However, the presence of multi-source institutional data remains one of the major challenges hindering the practical usage of AIoT solutions for pandemic disease diagnosis. This paper presents a novel framework that utilizes multi-site data fusion to boost the accuracy of pandemic disease diagnosis. In particular, we focus on a case study of COVID-19 lesion segmentation, a crucial task for understanding disease progression and optimizing treatment strategies. In this study, we propose a novel multi-decoder segmentation network for efficient segmentation of infections from cross-domain CT scans in smart cities. The multi-decoder segmentation network leverages data from heterogeneous domains and utilizes strong learning representations to accurately segment infections. Performance evaluation of the multi-decoder segmentation network was conducted on three publicly accessible datasets, demonstrating robust results with an average Dice score of 89.9% and an average surface Dice of 86.87%. To address scalability and latency issues associated with centralized cloud systems, fog computing (FC) emerges as a viable solution. FC brings resources closer to the operator, offering low latency and energy-efficient data management and processing. In this context, we propose a unique FC technique called PANDFOG to deploy the multi-decoder segmentation network on edge nodes for practical and clinical applications of automated COVID-19 pneumonia analysis. The results of this study highlight the efficacy of the multi-decoder segmentation network in accurately segmenting infections from cross-domain CT scans. Moreover, the proposed PANDFOG system demonstrates the practical deployment of the multi-decoder segmentation network on edge nodes, providing real-time access to COVID-19 segmentation findings for improved patient monitoring and clinical decision-making.

RevDate: 2024-05-25

de Azevedo Soares Dos Santos HC, Rodrigues Cintra Armellini B, Naves GL, et al (2024)

Using "adopt a bacterium" as an E-learning tool for simultaneously teaching microbiology to different health-related university courses.

FEMS microbiology letters pii:7681972 [Epub ahead of print].

The COVID-19 pandemic has posed challenges for education, particularly in undergraduate teaching. In this study, we report on the experience of how a private university successfully addressed this challenge through an active methodology applied to a microbiology discipline offered remotely to students from various health-related courses (veterinary, physiotherapy, nursing, biomedicine, and nutrition). Remote teaching was combined with the 'Adopt a Bacterium' methodology, implemented for the first time on Google Sites. The distance learning activity notably improved student participation in microbiology discussions, both through word cloud analysis and the richness of discourse measured by the Shannon index. Furthermore, feedback from students about the e-learning approach was highly positive, indicating its effectiveness in motivating and involving students in the learning process. The results also demonstrate that despite being offered simultaneously to students, the methodology allowed for the acquisition of specialized knowledge within each course and sparked student interest in various aspects of microbiology. In conclusion, the remote 'Adopt a Bacterium' methodology facilitated knowledge sharing among undergraduate students from different health-related courses and represented a valuable resource in distance microbiology education.
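 
Of the two engagement measures mentioned, the Shannon index has a compact closed form, H' = -Σ p_i ln p_i over relative token frequencies p_i. A quick sketch of how discourse richness could be scored from student posts (toy tokens, not the study's data):

    import math
    from collections import Counter

    def shannon_index(tokens):
        """Shannon diversity H' = -sum(p_i * ln p_i) over token frequencies."""
        counts = Counter(tokens)
        n = sum(counts.values())
        return -sum((c / n) * math.log(c / n) for c in counts.values())

    # Toy posts: a richer vocabulary yields a higher index.
    print(shannon_index("gram stain reveals clustered cocci in pairs".split()))
    print(shannon_index("bacteria bacteria bacteria bacteria".split()))  # 0.0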

RevDate: 2024-05-25

Shaghaghi N, Fazlollahi F, Shrivastav T, et al (2024)

DOxy: A Dissolved Oxygen Monitoring System.

Sensors (Basel, Switzerland), 24(10): pii:s24103253.

Dissolved Oxygen (DO) in water enables marine life. Measuring the prevalence of DO in a body of water is an important part of sustainability efforts because low oxygen levels are a primary indicator of contamination and distress in bodies of water. Aquariums and aquaculture operations of all types therefore need near real-time dissolved oxygen monitoring, and they spend a lot of money purchasing and maintaining DO meters that are expensive, inefficient, or manually operated; in the last case they must also ensure that manual readings are taken frequently, which is time consuming. Hence a cost-effective and sustainable automated Internet of Things (IoT) system for this task is necessary and long overdue. DOxy is such an IoT system, under research and development at Santa Clara University's Ethical, Pragmatic, and Intelligent Computing (EPIC) Laboratory, which utilizes cost-effective, accessible, and sustainable Sensing Units (SUs) to measure the dissolved oxygen levels present in bodies of water and send the readings to a web-based cloud infrastructure for storage, analysis, and visualization. DOxy's SUs are equipped with a high-sensitivity pulse oximeter meant for measuring dissolved oxygen levels in human blood, not water. Hence a number of parallel readings of water samples were gathered with both the high-sensitivity pulse oximeter and a standard dissolved oxygen meter, and two approaches for relating the readings were investigated. In the first, various machine learning models were trained and tested to produce a dynamic mapping of sensor readings to actual DO values. In the second, curve-fitting models were used to produce a conversion formula usable in the DOxy SUs offline. Both proved successful in producing accurate results.
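
The second (offline) approach amounts to fitting a conversion formula between paired readings. A tiny sketch with SciPy, using invented numbers rather than the authors' calibration data, and a linear functional form chosen purely for illustration:

    import numpy as np
    from scipy.optimize import curve_fit

    # Paired samples: raw pulse-oximeter signal vs. reference DO meter (mg/L).
    raw = np.array([12.0, 25.0, 40.0, 55.0, 70.0, 88.0])   # hypothetical
    do_ref = np.array([1.1, 2.4, 3.9, 5.2, 6.6, 8.3])      # hypothetical

    def model(x, a, b):
        # Linear conversion; the paper's fitted form may differ.
        return a * x + b

    (a, b), _ = curve_fit(model, raw, do_ref)
    print(f"DO ~= {a:.4f} * raw + {b:.4f}")

Once the coefficients are fixed, the formula can run on the sensing unit itself with no cloud round-trip, which is the point of the offline variant.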

RevDate: 2024-05-25

Kitsiou A, Sideri M, Pantelelis M, et al (2024)

Specification of Self-Adaptive Privacy-Related Requirements within Cloud Computing Environments (CCE).

Sensors (Basel, Switzerland), 24(10): pii:s24103227.

This paper presents a novel approach to address the challenges of self-adaptive privacy in cloud computing environments (CCE). Under the Cloud-InSPiRe project, the aim is to provide an interdisciplinary framework and a beta-version tool for self-adaptive privacy design, effectively focusing on the integration of technical measures with social needs. To address that, a pilot taxonomy that aligns technical, infrastructural, and social requirements is proposed after two supplementary surveys that have been conducted, focusing on users' privacy needs and developers' perspectives on self-adaptive privacy. Through the integration of users' social identity-based practices and developers' insights, the taxonomy aims to provide clear guidance for developers, ensuring compliance with regulatory standards and fostering a user-centric approach to self-adaptive privacy design tailored to diverse user groups, ultimately enhancing satisfaction and confidence in cloud services.

RevDate: 2024-05-25
CmpDate: 2024-05-25

Zimmerleiter R, Greibl W, Meininger G, et al (2024)

Sensor for Rapid In-Field Classification of Cannabis Samples Based on Near-Infrared Spectroscopy.

Sensors (Basel, Switzerland), 24(10): pii:s24103188.

A rugged handheld sensor for rapid in-field classification of cannabis samples based on their THC content using ultra-compact near-infrared spectrometer technology is presented. The device is designed for use by the Austrian authorities to discriminate between legal and illegal cannabis samples directly at the place of intervention. Hence, the sensor allows direct measurement through commonly encountered transparent plastic packaging made from polypropylene or polyethylene without any sample preparation. The measurement time is below 20 s. Measured spectral data are evaluated using partial least squares discriminant analysis directly on the device's hardware, eliminating the need for internet connectivity for cloud computing. The classification result is visually indicated directly on the sensor via a colored LED. Validation of the sensor is performed on an independent data set acquired by non-expert users after a short introduction. Despite the challenging setting, the achieved classification accuracy is higher than 80%. Therefore, the handheld sensor has the potential to reduce the number of unnecessarily confiscated legal cannabis samples, which would lead to significant monetary savings for the authorities.
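
The on-device evaluation step named above, partial least squares discriminant analysis (PLS-DA), is commonly implemented by regressing a binary class label on the spectra and thresholding the prediction. A minimal sketch with scikit-learn, using random placeholder spectra rather than real NIR data:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Placeholder "spectra": 60 training samples x 200 wavelengths,
    # with a 0/1 label standing in for legal vs. illegal THC content.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(60, 200))
    y_train = rng.integers(0, 2, size=60)
    X_new = rng.normal(size=(5, 200))

    # Fit the latent-variable regression, then threshold at 0.5 (PLS-DA).
    pls = PLSRegression(n_components=8).fit(X_train, y_train)
    labels = (pls.predict(X_new).ravel() > 0.5).astype(int)
    print(labels)

Because both fitting and prediction reduce to small matrix operations, the trained model can run entirely on the device's hardware, matching the abstract's point about avoiding cloud connectivity.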

RevDate: 2024-05-25

Lin J, Y Guan (2024)

Load Prediction in Double-Channel Residual Self-Attention Temporal Convolutional Network with Weight Adaptive Updating in Cloud Computing.

Sensors (Basel, Switzerland), 24(10): pii:s24103181.

When resource demand increases and decreases rapidly, container clusters in the cloud environment need to adjust the number of containers in a timely manner to ensure service quality. Resource load prediction is a prominent challenge accompanying the widespread adoption of cloud computing. A novel cloud computing load prediction method, the Double-channel residual Self-attention Temporal convolutional Network with Weight adaptive updating (DSTNW), has been proposed to make the response of the container cluster more rapid and accurate. A Double-channel Temporal Convolution Network model (DTN) has been developed to capture long-term sequence dependencies and enhance feature extraction capabilities when the model handles long load sequences; double-channel dilated causal convolution replaces the single-channel dilated causal convolution in the DTN. A residual temporal self-attention mechanism (SM) has been proposed to improve the performance of the network and focus on features with significant contributions from the DTN. The DTN and SM jointly constitute the double-channel residual self-attention temporal convolutional network (DSTN). In addition, by evaluating the accuracy of single and stacked DSTNs, an adaptive weight strategy has been proposed to assign corresponding weights to the single and stacked DSTNs. The experimental results highlight that the developed method has outstanding prediction performance for cloud computing in comparison with some state-of-the-art methods, achieving average improvements of 24.16% and 30.48% on the Container dataset and Google dataset, respectively.
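
The two architectural ingredients named in the abstract, dilated causal convolution and a double-channel arrangement, can be sketched compactly in PyTorch. The wiring below is an assumption for illustration; the paper's exact block structure, attention mechanism, and weighting scheme are not reproduced:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Module):
        """One dilated causal convolution: left-pad so the output at time t
        sees only inputs up to t (the standard TCN building block)."""
        def __init__(self, ch_in, ch_out, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(ch_in, ch_out, kernel_size, dilation=dilation)

        def forward(self, x):            # x: (batch, channels, time)
            return self.conv(F.pad(x, (self.pad, 0)))

    # Two parallel branches sketch the "double-channel" idea.
    x = torch.randn(8, 1, 64)            # batch of load sequences
    branch_a = CausalConv1d(1, 16, dilation=1)(x)
    branch_b = CausalConv1d(1, 16, dilation=2)(x)
    fused = branch_a + branch_b          # shape (8, 16, 64)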

RevDate: 2024-05-25

Xie Y, Meng X, Nguyen DT, et al (2024)

A Discussion of Building a Smart SHM Platform for Long-Span Bridge Monitoring.

Sensors (Basel, Switzerland), 24(10): pii:s24103163.

This paper explores the development of a smart Structural Health Monitoring (SHM) platform tailored for long-span bridge monitoring, using the Forth Road Bridge (FRB) as a case study. It discusses the selection of smart sensors available for real-time monitoring, the formulation of an effective data strategy encompassing the collection, processing, management, analysis, and visualization of monitoring data sets to support decision-making, and the establishment of a cost-effective and intelligent sensor network aligned with the objectives set through comprehensive communication with asset owners. Due to the high data rates and dense sensor installations, conventional processing techniques are inadequate for fulfilling monitoring functionalities and ensuring security. Cloud computing emerges as a widely adopted solution for processing and storing vast monitoring data sets. Drawing from the authors' experience in implementing long-span bridge monitoring systems in the UK and China, this paper compares the advantages and limitations of employing cloud computing for long-span bridge monitoring. Furthermore, it explores strategies for developing a robust data strategy and leveraging artificial intelligence (AI) and digital twin (DT) technologies to extract relevant information or patterns regarding asset health conditions. This information is then visualized through the interaction between physical and virtual worlds, facilitating timely and informed decision-making in managing critical road transport infrastructure.

RevDate: 2024-05-24

Peralta T, Menoscal M, Bravo G, et al (2024)

Rock Slope Stability Analysis Using Terrestrial Photogrammetry and Virtual Reality on Ignimbritic Deposits.

Journal of imaging, 10(5): pii:jimaging10050106.

Puerto de Cajas serves as a vital high-altitude passage in Ecuador, connecting the coastal region to the city of Cuenca. The stability of this rocky massif is carefully managed through the assessment of blocks and discontinuities, ensuring safe travel. This study presents a novel approach, employing rapid and cost-effective methods to evaluate an unexplored area within the protected expanse of Cajas. Using terrestrial photogrammetry to digitize the slopes, together with geomechanical stations positioned strategically along them, we generated a detailed point cloud capturing elusive terrain features. The collected data were validated by comparing directional data from CloudCompare software with manual readings taken at control points using a digital compass integrated into a phone. The analysis encompasses three slopes, employing the SMR, Q-slope, and kinematic methodologies. Results from the SMR system closely align with the kinematic analysis, indicating satisfactory slope quality. Nonetheless, continued vigilance in stability control remains imperative for ensuring road safety and preserving the site's integrity. Moreover, this research lays the groundwork for the creation of a publicly accessible 3D repository, enhancing visualization capabilities through Google Virtual Reality. This initiative not only aids in replicating the findings but also facilitates access to an augmented reality environment, thereby fostering collaborative research endeavors.

RevDate: 2024-05-22

Li X, Zhao P, Liang M, et al (2024)

Dynamics changes of coastal aquaculture ponds based on the Google Earth Engine in Jiangsu Province, China.

Marine pollution bulletin, 203:116502 pii:S0025-326X(24)00479-X [Epub ahead of print].

Monitoring the spatiotemporal variation in coastal aquaculture zones is essential to providing a scientific basis for formulating scientifically reasonable land management policies. This study uses the Google Earth Engine (GEE) remote sensing cloud platform to extract aquaculture information from Landsat series and Sentinel-2 images for six years between 1984 and 2021 (1984, 1990, 2000, 2010, 2016 and 2021), in order to analyze the changes in coastal aquaculture pond area and their spatiotemporal characteristics in Jiangsu Province. The overall area of coastal aquaculture ponds in Jiangsu increased in the early period and decreased in the later period; over the past 37 years, the area of coastal aquaculture ponds has increased by a total of 54,639.73 ha. This study can provide basic data for the sustainable development of coastal aquaculture in Jiangsu, and a reference for related studies in other regions.

RevDate: 2024-05-21

Hulagappa Nebagiri M, L Pillappa Hnumanthappa (2024)

Fractional social optimization-based migration and replica management algorithm for load balancing in distributed file system for cloud computing.

Network (Bristol, England) [Epub ahead of print].

Effective management of data is a major issue in a Distributed File System (DFS), like the cloud. This issue is handled by replicating files effectively, which can reduce data access time and increase data availability. This paper devises a Fractional Social Optimization Algorithm (FSOA) for replica management along with load balancing in a DFS in the cloud. Balancing the workload of the DFS is the main objective. Here, chunks are created by partitioning each file into a number of chunks using Deep Fuzzy Clustering (DFC), and the chunks are then assigned to Virtual Machines (VMs) in a round-robin manner. Load balancing is then performed with the proposed FSOA, considering objectives such as resource use, energy consumption, and migration cost. The FSOA is formulated by uniting the Social Optimization Algorithm (SOA) and Fractional Calculus (FC). Replica management is done in the DFS using the proposed FSOA by considering these objectives. The FSOA achieves the smallest load of 0.299, smallest cost of 0.395, smallest energy consumption of 0.510, smallest overhead of 0.358, and smallest throughput of 0.537.
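
The round-robin placement step is the simplest part of this pipeline to picture. A small sketch (fixed-size splitting stands in for the paper's DFC-based chunking; all names are illustrative):

    # Split a file into chunks and deal them out to VMs in round-robin order.
    def make_chunks(data: bytes, chunk_size: int):
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    def round_robin_assign(chunks, vm_ids):
        placement = {vm: [] for vm in vm_ids}
        for i, chunk in enumerate(chunks):
            placement[vm_ids[i % len(vm_ids)]].append(chunk)
        return placement

    chunks = make_chunks(b"x" * 1000, 128)           # 8 chunks
    placement = round_robin_assign(chunks, ["vm0", "vm1", "vm2"])
    print({vm: len(cs) for vm, cs in placement.items()})  # {'vm0': 3, 'vm1': 3, 'vm2': 2}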

RevDate: 2024-05-21

Qureshi KM, Mewada BG, Kaur S, et al (2024)

Investigating industry 4.0 technologies in logistics 4.0 usage towards sustainable manufacturing supply chain.

Heliyon, 10(10):e30661.

In the era of Industry 4.0 (I4.0), automation and data analysis have undergone significant advancements, greatly impacting production management and operations management. Technologies such as the Internet of Things (IoT), robotics, cloud computing (CC), and big data have played a crucial role in shaping Logistics 4.0 (L4.0) and improving the efficiency of the manufacturing supply chain (SC), ultimately contributing to sustainability goals. The present research investigates the role of I4.0 technologies within the framework of the extended theory of planned behavior (ETPB). The research explores various variables including subjective norms, attitude, and perceived behavior control, leading to word-of-mouth and purchase intention. By modeling these variables, the study aims to understand the influence of I4.0 technologies on L4.0 to establish a sustainable manufacturing SC. A questionnaire was administered to gather input from small and medium-sized firms (SMEs) in the manufacturing industry. An empirical study, using partial least squares structural equation modeling (SEM), was conducted to analyze the data. The findings indicate that the use of I4.0 technology in L4.0 influences subjective norms, which subsequently influence attitudes and personal behavior control. This, in turn, leads to word-of-mouth and purchase intention. The results provide valuable insights for shippers and logistics service providers, empowering them to enhance their performance and contribute to achieving sustainability objectives. Consequently, this study contributes to promoting sustainability in the manufacturing SC by stimulating the adoption of I4.0 technologies in L4.0.

RevDate: 2024-05-20
CmpDate: 2024-05-20

Vo DH, Vo AT, Dinh CT, et al (2024)

Corporate restructuring and firm performance in Vietnam: The moderating role of digital transformation.

PloS one, 19(5):e0303491 pii:PONE-D-23-40777.

In the digital age, firms should continually innovate and adapt to remain competitive and enhance performance. Innovation and adaptation require firms to take a holistic approach to their corporate structuring to ensure efficiency and effectiveness to stay competitive. This study examines how corporate restructuring impacts firm performance in Vietnam. We then investigate the moderating role of digital transformation in the corporate restructuring-firm performance nexus. We use content analysis, with a focus on particular terms, including "digitalization," "big data," "cloud computing," "blockchain," and "information technology" for 11 years, from 2011 to 2021. The frequency index from these keywords is developed to proxy the digital transformation for the Vietnamese listed firms. A final sample includes 118 Vietnamese listed firms with sufficient data for the analysis using the generalized method of moments (GMM) approach. The results indicate that corporate restructuring, including financial, portfolio, and operational restructuring, has a negative effect on firm performance in Vietnam. Digital transformation also negatively affects firm performance. However, corporate restructuring implemented in conjunction with digital transformation improves the performance of Vietnamese listed firms. These findings largely remain unchanged across various robustness analyses.

RevDate: 2024-05-16

Gupta I, Saxena D, Singh AK, et al (2024)

A Multiple Controlled Toffoli Driven Adaptive Quantum Neural Network Model for Dynamic Workload Prediction in Cloud Environments.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

The key challenges in cloud computing encompass dynamic resource scaling, load balancing, and power consumption. Accurate workload prediction is identified as a crucial strategy to address these challenges. Despite the numerous methods proposed to tackle this issue, existing approaches fall short of capturing the high-variance nature of volatile and dynamic cloud workloads. Consequently, this paper introduces a novel Multiple Controlled Toffoli-driven Adaptive Quantum Neural Network (MCT-AQNN) model that establishes an empirical solution to complex, elastic, and challenging workload prediction problems by optimizing exploration, adaptation, and exploitation proficiencies through quantum learning. The computational adaptability of quantum computing is combined with machine learning algorithms to derive more precise correlations from dynamic and complex workloads. Input data points and learned neural weights are re-encoded as qubits, while the controlling effects of Multiple Controlled Toffoli (MCT) gates are applied at the hidden and output layers of the Quantum Neural Network (QNN) to enhance learning capabilities. Complementarily, a Uniformly Adaptive Quantum Machine Learning (UAQL) algorithm has been developed to functionally and effectively train the QNN. Extensive experiments were conducted, with comparisons against state-of-the-art methods on four real-world benchmark datasets. Experimental results show that MCT-AQNN achieves up to 32%-96% higher accuracy than existing approaches.
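
For readers unfamiliar with the gate named in the title: a multiple-controlled Toffoli (MCX) flips a target qubit only when every control qubit is |1>. A minimal Qiskit sketch of the primitive alone; this illustrates the gate, not the MCT-AQNN architecture:

    from qiskit import QuantumCircuit

    # Three controls in superposition, one target.
    qc = QuantumCircuit(4)
    qc.h([0, 1, 2])        # put the control qubits in superposition
    qc.mcx([0, 1, 2], 3)   # multiple-controlled Toffoli onto qubit 3
    print(qc.draw())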

RevDate: 2024-05-15

Koenig Z, Yohannes MT, Nkambule LL, et al (2024)

A harmonized public resource of deeply sequenced diverse human genomes.

Genome research pii:gr.278378.123 [Epub ahead of print].

Underrepresented populations are often excluded from genomic studies due in part to a lack of resources supporting their analyses. The 1000 Genomes Project (1kGP) and Human Genome Diversity Project (HGDP), which have recently been sequenced to high coverage, are valuable genomic resources because of the global diversity they capture and their open data sharing policies. Here, we harmonized a high-quality set of 4,094 whole genomes from 80 populations in the HGDP and 1kGP with data from the Genome Aggregation Database (gnomAD) and identified over 153 million high-quality SNVs, indels, and SVs. We performed a detailed ancestry analysis of this cohort, characterizing population structure and patterns of admixture across populations, analyzing site frequency spectra, and measuring variant counts at global and subcontinental levels. We also demonstrate substantial added value from this dataset compared to the prior versions of the component resources, typically combined via liftOver and variant intersection; for example, we catalog millions of new genetic variants, mostly rare, compared to previous releases. In addition to unrestricted individual-level public release, we provide detailed tutorials for conducting many of the most common quality control steps and analyses with these data in a scalable cloud-computing environment and publicly release this new phased joint callset for use as a haplotype resource in phasing and imputation pipelines. This jointly called reference panel will serve as a key resource to support research of diverse ancestry populations.
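
Hail is the toolkit commonly used for gnomAD-style joint callsets and scalable cloud QC of this kind. Assuming the release is available as a Hail MatrixTable (the bucket path and thresholds below are placeholders, not the release's actual location or recommended filters), a basic QC pass might look like:

    import hail as hl

    hl.init()
    mt = hl.read_matrix_table("gs://example-bucket/hgdp_1kgp.mt")  # placeholder path

    # Annotate per-variant QC metrics, then filter on them.
    mt = hl.variant_qc(mt)
    mt = mt.filter_rows((mt.variant_qc.call_rate > 0.95) &
                        (mt.variant_qc.AF[1] > 0.0))
    print(mt.count())   # (variants, samples) after filtering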

RevDate: 2024-05-15

Thiriveedhi VK, Krishnaswamy D, Clunie D, et al (2024)

Cloud-based large-scale curation of medical imaging data using AI segmentation.

Research square pii:rs.3.rs-4351526.

Rapid advances in medical imaging Artificial Intelligence (AI) offer unprecedented opportunities for automatic analysis and extraction of data from large imaging collections. The computational demands of such modern AI tools may be difficult to satisfy with the capabilities available on premises. Cloud computing offers the promise of economical access and extreme scalability. Few studies examine the price/performance tradeoffs of using the cloud, in particular for medical image analysis tasks. We investigate the use of cloud-provisioned compute resources for AI-based curation of the National Lung Screening Trial (NLST) Computed Tomography (CT) images available from the National Cancer Institute (NCI) Imaging Data Commons (IDC). We evaluated NCI Cancer Research Data Commons (CRDC) Cloud Resources, the Terra (FireCloud) and Seven Bridges-Cancer Genomics Cloud (SB-CGC) platforms, to perform automatic image segmentation with TotalSegmentator and pyradiomics feature extraction for a large cohort containing >126,000 CT volumes from >26,000 patients. Utilizing >21,000 Virtual Machines (VMs) over the course of the computation, we completed the analysis in under 9 hours, compared to the estimated 522 days that would be needed on a single workstation. The total cost of utilizing the cloud for this analysis was $1,011.05. Our contributions include: 1) an evaluation of the numerous tradeoffs towards optimizing the use of cloud resources for large-scale image analysis; 2) CloudSegmentator, an open source reproducible implementation of the developed workflows, which can be reused and extended; 3) practical recommendations for utilizing the cloud for large-scale medical image computing tasks. We also share the results of the analysis: a total of 9,565,554 segmentations of anatomic structures and the accompanying radiomics features in IDC as of release v18.
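
The per-volume work inside each VM pairs two widely used open-source tools. A sketch of one worker iteration under that reading (file names are placeholders; the authors' actual workflows live in their CloudSegmentator repository):

    import subprocess
    from radiomics import featureextractor

    # Step 1: segment the CT volume with the TotalSegmentator CLI,
    # which writes one NIfTI mask per anatomic structure.
    subprocess.run(["TotalSegmentator", "-i", "ct.nii.gz", "-o", "seg"],
                   check=True)

    # Step 2: extract radiomics features for one structure's mask.
    extractor = featureextractor.RadiomicsFeatureExtractor()
    features = extractor.execute("ct.nii.gz", "seg/liver.nii.gz")
    print(sorted(features)[:5])   # first few feature names

Fanning this loop out across thousands of VMs, one volume per task, is what turns a workstation-scale 522-day job into a sub-9-hour cloud run.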

RevDate: 2024-05-14

Philippou J, Yáñez Feliú G, TJ Rudge (2024)

WebCM: A Web-Based Platform for Multiuser Individual-Based Modeling of Multicellular Microbial Populations and Communities.

ACS synthetic biology [Epub ahead of print].

WebCM is a web platform that enables users to create, edit, run, and view individual-based simulations of multicellular microbial populations and communities on a remote compute server. WebCM builds upon the simulation software CellModeller in the back end and provides users with a web-browser-based modeling interface including model editing, execution, and playback. Multiple users can run and manage multiple simulations simultaneously, sharing the host hardware. Since it is based on CellModeller, it can utilize both GPU and CPU parallelization. The user interface provides real-time interactive 3D graphical representations for inspection of simulations at all time points, and the results can be downloaded for detailed offline analysis. It can be run on cloud computing services or on a local server, allowing collaboration within and between laboratories.

RevDate: 2024-05-11

Lin Z, J Liang (2024)

Edge Caching Data Distribution Strategy with Minimum Energy Consumption.

Sensors (Basel, Switzerland), 24(9): pii:s24092898.

In the context of the rapid development of the Internet of Vehicles, virtual reality, automatic driving and the industrial Internet, terminal devices in the network are growing explosively. As a result, more and more information is generated at the edge of the network, and data throughput in the mobile communication network is increasing dramatically. As a key technology of the fifth-generation mobile communication network, mobile edge caching, which caches popular data on edge servers deployed at the edge of the network, avoids the data transmission delay of the backhaul link and the occurrence of network congestion. As the network grows in scale, however, distributing hot data from cloud servers to edge servers generates substantial energy consumption. To support the green and sustainable development of the communication industry and reduce the energy consumed in distributing data to be cached on edge servers, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC) in this paper. First, we model and formulate the problem as a constrained optimization problem and then prove its NP-hardness. Subsequently, we design a greedy algorithm with a computational complexity of O(n²) to solve the problem approximately. Experimental results show that, compared with the distribution strategy of each edge server directly requesting data from the cloud server, the strategy obtained by the algorithm can significantly reduce the energy consumption of data distribution.
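
The abstract does not reproduce the paper's greedy algorithm, but the idea of exploiting cheap edge-to-edge transfers can be sketched with invented costs: each server fetches an item from the cloud only if no peer has cached it yet. This is an illustrative stand-in, not the published algorithm:

    # Greedy minimum-energy distribution sketch (costs are illustrative).
    CLOUD_COST = 5.0   # energy units to fetch one item from the cloud
    PEER_COST = 1.0    # energy units to fetch one item from an edge peer

    def distribute(items, servers):
        cached = {item: set() for item in items}
        total = 0.0
        for item in items:
            for s in servers:
                # Prefer a peer copy whenever one already exists.
                total += PEER_COST if cached[item] else CLOUD_COST
                cached[item].add(s)
        return total

    # 2 cloud fetches + 4 peer fetches = 14.0 energy units,
    # vs. 30.0 if every server pulled straight from the cloud.
    print(distribute(["a", "b"], ["e1", "e2", "e3"]))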

RevDate: 2024-05-11

Emvoliadis A, Vryzas N, Stamatiadou ME, et al (2024)

Multimodal Environmental Sensing Using AI & IoT Solutions: A Cognitive Sound Analysis Perspective.

Sensors (Basel, Switzerland), 24(9): pii:s24092755.

This study presents a novel audio compression technique, tailored for environmental monitoring within multi-modal data processing pipelines. Considering the crucial role that audio data play in environmental evaluations, particularly in contexts with extreme resource limitations, our strategy substantially decreases bit rates to facilitate efficient data transfer and storage. This is accomplished without undermining the accuracy necessary for trustworthy air pollution analysis while simultaneously minimizing processing expenses. More specifically, our approach fuses a Deep-Learning-based model, optimized for edge devices, along with a conventional coding schema for audio compression. Once transmitted to the cloud, the compressed data undergo a decoding process, leveraging vast cloud computing resources for accurate reconstruction and classification. The experimental results indicate that our approach leads to a relatively minor decrease in accuracy, even at notably low bit rates, and demonstrates strong robustness in identifying data from labels not included in our training dataset.

RevDate: 2024-05-11

Hanczewski S, Stasiak M, M Weissenberg (2024)

An Analytical Model of IaaS Architecture for Determining Resource Utilization.

Sensors (Basel, Switzerland), 24(9): pii:s24092758.

Cloud computing has become a major component of the modern IT ecosystem. A key contributor to this has been the development of the Infrastructure as a Service (IaaS) architecture, in which users' virtual machines (VMs) run on the service provider's physical infrastructure, freeing users from the need to purchase their own physical machines (PMs). A main consideration when designing such systems is achieving optimal utilization of individual resources, such as processor, RAM, disk, and available bandwidth. In response to these challenges, the authors developed an analytical model (the ARU method) to determine the average utilization levels of these resources. Its effectiveness was evaluated by comparing the model's results with those obtained from a digital simulation of a cloud system operating under the IaaS paradigm. The results show that the model is effective regardless of the structure of incoming requests, the variability of the capacity of individual resources, and the number of physical machines in the system, which makes it applicable in the design of cloud systems.
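
A minimal Monte Carlo sketch of the kind of digital simulation the ARU model is checked against: VM requests with multi-resource demands are placed first-fit onto physical machines, and per-resource utilization is then read off. The capacities, demand distributions, and first-fit policy are assumptions, and VM departures, which a full simulation would include, are omitted here.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed PM capacity per resource: CPU cores, RAM (GB), disk (GB), bandwidth (Mbps).
    CAP = np.array([32.0, 128.0, 2000.0, 1000.0])
    N_PM = 10
    free = np.tile(CAP, (N_PM, 1))

    accepted = 0
    for _ in range(400):                   # stream of VM requests (no departures here)
        demand = rng.uniform([1, 2, 20, 10], [8, 32, 200, 100])
        for pm in range(N_PM):             # first-fit placement (an assumption)
            if np.all(free[pm] >= demand):
                free[pm] -= demand
                accepted += 1
                break                      # otherwise the request is blocked

    util = 1.0 - free.sum(axis=0) / (CAP * N_PM)
    for name, u in zip(["CPU", "RAM", "disk", "bandwidth"], util):
        print(f"{name} utilization: {u:.1%}")
    print("accepted VMs:", accepted)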

RevDate: 2024-05-10

Du X, Novoa-Laurentiev J, Plasek JM, et al (2024)

Enhancing Early Detection of Cognitive Decline in the Elderly: A Comparative Study Utilizing Large Language Models in Clinical Notes.

medRxiv : the preprint server for health sciences pii:2024.04.03.24305298.

BACKGROUND: Large language models (LLMs) have shown promising performance in various healthcare domains, but their effectiveness in identifying specific clinical conditions in real medical records is less explored. This study evaluates LLMs for detecting signs of cognitive decline in real electronic health record (EHR) clinical notes, comparing their error profiles with traditional models. The insights gained will inform strategies for performance enhancement.

METHODS: This study, conducted at Mass General Brigham in Boston, MA, analyzed clinical notes from the four years prior to a 2019 diagnosis of mild cognitive impairment in patients aged 50 and older. For model development, we used a randomly selected, annotated sample of 4,949 note sections filtered with keywords related to cognitive functions; for testing, we used a randomly selected, annotated sample of 1,996 note sections without keyword filtering. We developed prompts for two LLMs, Llama 2 and GPT-4, on HIPAA-compliant cloud-computing platforms, using multiple approaches (e.g., both hard and soft prompting and error analysis-based instructions) to select the optimal LLM-based method. Baseline models included a hierarchical attention-based neural network and XGBoost. Subsequently, we constructed an ensemble of the three models using a majority-vote approach.

RESULTS: GPT-4 demonstrated superior accuracy and efficiency compared to Llama 2, but did not outperform traditional models. The ensemble model outperformed the individual models, achieving a precision of 90.3%, a recall of 94.2%, and an F1-score of 92.2%. Notably, the ensemble model showed a significant improvement in precision, increasing from a range of 70%-79% to above 90%, compared to the best-performing single model. Error analysis revealed that 63 samples were incorrectly predicted by at least one model; however, only 2 cases (3.2%) were mutual errors across all models, indicating diverse error profiles among them.

CONCLUSIONS: LLMs and traditional machine learning models trained using local EHR data exhibited diverse error profiles. The ensemble of these models was found to be complementary, enhancing diagnostic performance. Future research should investigate integrating LLMs with smaller, localized models and incorporating medical data and domain knowledge to enhance performance on specific tasks.
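
The majority-vote ensemble described in the methods reduces to a few lines. In the sketch below the three arrays merely stand in for predictions from GPT-4, the attention-based network, and XGBoost; they are not the study's outputs.

    import numpy as np

    def majority_vote(*prediction_sets):
        # Combine binary predictions; with three models a label needs two votes.
        votes = np.vstack(prediction_sets)           # shape: (n_models, n_samples)
        return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

    llm = np.array([1, 0, 1, 1, 0])    # toy GPT-4 predictions
    hann = np.array([1, 0, 0, 1, 0])   # toy attention-network predictions
    xgb = np.array([0, 0, 1, 1, 1])    # toy XGBoost predictions
    print(majority_vote(llm, hann, xgb))             # -> [1 0 1 1 0]

Diverse error profiles are exactly what makes such a vote effective: a sample is misclassified only when at least two models err on it simultaneously.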

RevDate: 2024-05-08

Kent RM, Barbosa WAS, DJ Gauthier (2024)

Controlling chaos using edge computing hardware.

Nature communications, 15(1):3886.

Machine learning provides a data-driven approach for creating a digital twin of a system - a digital model used to predict the system behavior. Having an accurate digital twin can drive many applications, such as controlling autonomous systems. Often, the size, weight, and power consumption of the digital twin or related controller must be minimized, ideally realized on embedded computing hardware that can operate without a cloud-computing connection. Here, we show that a nonlinear controller based on next-generation reservoir computing can tackle a difficult control problem: controlling a chaotic system to an arbitrary time-dependent state. The model is accurate, yet it is small enough to be evaluated on a field-programmable gate array typically found in embedded devices. Furthermore, the model only requires 25.0 ± 7.0 nJ per evaluation, well below other algorithms, even without systematic power optimization. Our work represents the first step in deploying efficient machine learning algorithms to the computing "edge."
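
Next-generation reservoir computing replaces a recurrent reservoir with a linear readout over time-delayed inputs and their nonlinear products, which is what keeps the model small enough for an FPGA. The sketch below is a minimal one-step predictor on a toy chaotic map; the logistic map, delay depth, and ridge parameter are assumptions, and the control loop itself is omitted.

    import numpy as np

    # Toy chaotic series (logistic map) standing in for the system to be modeled.
    def logistic_map(n, x0=0.2, r=3.9):
        x = np.empty(n)
        x[0] = x0
        for t in range(n - 1):
            x[t + 1] = r * x[t] * (1.0 - x[t])
        return x

    x = logistic_map(1200)
    k = 2                                   # number of time-delay taps

    def features(window):
        # NG-RC feature vector: bias + delayed values + their pairwise products.
        quad = np.outer(window, window)[np.triu_indices(len(window))]
        return np.concatenate(([1.0], window, quad))

    Phi = np.array([features(x[t - k:t]) for t in range(k, len(x))])
    y = x[k:]                               # one-step-ahead targets
    A, b = Phi[:1000], y[:1000]
    ridge = 1e-8                            # Tikhonov regularization of the readout
    W = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ b)

    pred = Phi[1000:] @ W
    nrmse = np.sqrt(np.mean((pred - y[1000:]) ** 2)) / np.std(y[1000:])
    print(f"held-out one-step NRMSE: {nrmse:.2e}")

Because the logistic map is exactly quadratic in its previous state, the quadratic feature set fits it almost perfectly; real control targets need longer delay windows and more careful regularization.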

RevDate: 2024-05-07

Buchanan BC, Tang Y, Lopez H, et al (2024)

Development of a cloud-based flow rate tool for eNAMPT biomarker detection.

PNAS nexus, 3(5):pgae173.

Increased levels of extracellular nicotinamide phosphoribosyltransferase (eNAMPT) are increasingly recognized as a highly useful biomarker of inflammatory disease and disease severity. In preclinical animal studies, a monoclonal antibody that neutralizes eNAMPT has been generated to successfully reduce the extent of inflammatory cascade activation. Thus, the rapid detection of eNAMPT concentration in plasma samples at the point of care (POC) would be of great utility in assessing the benefit of administering an anti-eNAMPT therapeutic. To determine the feasibility of this POC test, we conducted a particle immunoagglutination assay on a paper microfluidic platform and quantified its extent with a flow rate measurement in less than 1 min. A smartphone and cloud-based Google Colab were used to analyze the flow rates automatically. A horizontal flow model and an immunoagglutination binding model were evaluated to optimize the detection time, sample dilution, and particle concentration. This assay successfully detected eNAMPT in both human whole blood and plasma samples (diluted to 10 and 1%), with a limit of detection of 1-20 pg/mL (equivalent to 0.1-0.2 ng/mL in undiluted blood and plasma) and a linear range of 5-40 pg/mL. Furthermore, the smartphone POC assay distinguished clinical samples with low, mid, and high eNAMPT concentrations. Together, these results indicate that this POC assay, which utilizes low-cost materials, time-effective methods, and a straightforward immunoassay (without surface immobilization), may reliably allow rapid determination of eNAMPT blood/plasma levels to support patient stratification in clinical trials and guide ALT-100 mAb therapeutic decision-making.
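
The flow-rate readout can be pictured with the standard Lucas-Washburn model for horizontal capillary flow, L(t) = sqrt(k t), under which immunoagglutination changes the fitted constant. The flow-front positions below are invented for illustration, and the mapping from the fitted slope to eNAMPT concentration is omitted.

    import numpy as np

    # Assumed flow-front positions on a paper strip, measured from smartphone video.
    t = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])  # seconds
    L = np.array([3.1, 4.5, 5.4, 6.3, 7.1, 7.6])       # millimeters

    # Least-squares fit of L = slope * sqrt(t), i.e. slope of L against sqrt(t).
    slope = np.sum(np.sqrt(t) * L) / np.sum(t)          # mm / s^0.5
    rate = 0.5 * slope / np.sqrt(t)                     # dL/dt = slope / (2 sqrt(t))
    print(f"slope: {slope:.3f} mm/s^0.5; flow rate at t=30 s: {rate[-1]:.3f} mm/s")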

RevDate: 2024-05-06

Sankar M S K, Gupta S, Luthra S, et al (2024)

Empowering sustainable manufacturing: Unleashing digital innovation in spool fabrication industries.

Heliyon, 10(9):e29994.

In industrial landscapes, spool fabrication industries play a crucial role in the successful completion of numerous industrial projects by providing prefabricated modules. However, the implementation of digitalized sustainable practices in spool fabrication industries is progressing slowly and is still in its embryonic stage due to several challenges. Implementing digitalized sustainable manufacturing (SM) requires digital technologies such as the Internet of Things, cloud computing, big data analytics, cyber-physical systems, augmented reality, virtual reality, and machine learning, applied in the context of sustainability. The present study prioritizes the enablers that promote the implementation of digitalized sustainable practices in spool fabrication industries using the Improved Fuzzy Stepwise Weight Assessment Ratio Analysis (IMF-SWARA) method integrated with the Triangular Fuzzy Bonferroni Mean (TFBM). The enablers are identified through a systematic literature review and validated by a team of seven experts through a questionnaire survey. The validated enablers are then analyzed using the integrated IMF-SWARA and TFBM approach. The results indicate that the most significant enablers are management support, leadership, and governmental policies and regulations for implementing digitalized SM. The study provides a comprehensive analysis of digital SM enablers in the spool fabrication industry and offers guidelines for transforming conventional systems into digitalized SM practices.
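
For orientation, classical SWARA turns an expert's ranked "comparative importance" judgments into weights; the paper's IMF-SWARA variant replaces the crisp numbers below with triangular fuzzy numbers aggregated via a Bonferroni mean. The enabler names and judgment values here are illustrative assumptions, not the study's data.

    # Classical (crisp) SWARA weighting; criteria are pre-sorted from most to
    # least important, and s[j] is the comparative importance of criterion j
    # relative to criterion j-1.
    def swara_weights(s):
        q = [1.0]
        for sj in s[1:]:
            q.append(q[-1] / (sj + 1.0))   # k_j = s_j + 1, q_j = q_{j-1} / k_j
        total = sum(q)
        return [qj / total for qj in q]

    enablers = ["management support", "leadership",
                "government policy", "digital skills"]   # illustrative only
    s = [0.0, 0.30, 0.15, 0.25]                          # assumed expert judgments
    for name, w in zip(enablers, swara_weights(s)):
        print(f"{name}: {w:.3f}")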

RevDate: 2024-05-05

Mishra A, Kim HS, Kumar R, et al (2024)

Advances in Vibrio-related infection management: an integrated technology approach for aquaculture and human health.

Critical reviews in biotechnology [Epub ahead of print].

Vibrio species pose significant threats worldwide, causing mortalities in aquaculture and infections in humans. With global warming, new strains of Vibrio and the diseases they cause are emerging worldwide at an increasing rate. Control of Vibrio species requires effective monitoring, diagnosis, and treatment strategies at the global scale. Despite current efforts based on chemical, biological, and mechanical means, Vibrio control management faces limitations due to complicated implementation processes. This review explores the intricacies and challenges of Vibrio-related diseases, including accurate and cost-effective diagnosis and effective control. The global burden due to emerging Vibrio species further complicates management strategies. We propose an innovative integrated technology model that harnesses cutting-edge technologies to address these obstacles. The proposed model incorporates advanced tools, such as biosensing technologies, the Internet of Things (IoT), remote sensing devices, cloud computing, and machine learning. This model offers invaluable insights and supports better decision-making by integrating real-time ecological data and biological phenotype signatures. A major advantage of our approach lies in leveraging cloud-based analytics programs, efficiently extracting meaningful information from vast and complex datasets. Collaborating with data and clinical professionals ensures logical and customized solutions tailored to each unique situation. Aquaculture biotechnology that prioritizes sustainability may have a large impact on human health and the seafood industry. Our review underscores the importance of adopting this model, revolutionizing the prognosis and management of Vibrio-related infections, even under complex circumstances. Furthermore, this model has promising implications for aquaculture and public health, addressing the United Nations Sustainable Development Goals and their development agenda.

RevDate: 2024-05-02

Han Y, Wei Z, G Huang (2024)

An imbalance data quality monitoring based on SMOTE-XGBOOST supported by edge computing.

Scientific reports, 14(1):10151.

Product assembly involves extensive production data characterized by high dimensionality, multiple samples, and class imbalance. This article proposes an edge computing-based framework for monitoring product assembly quality in the industrial Internet of Things. Edge computing relieves the pressure of aggregating enormous amounts of data to a cloud center for processing. To address the problem of data imbalance, we compared five sampling methods: Borderline SMOTE, Random Downsampling, Random Upsampling, SMOTE, and ADASYN. Finally, we propose the quality monitoring model SMOTE-XGBoost, whose hyperparameters are optimized using the Grid Search method. The proposed framework and quality control methodology were applied to an assembly line of IGBT modules for a traction system, and the validity of the model was experimentally verified.
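
The SMOTE-plus-XGBoost-plus-grid-search combination maps directly onto standard libraries. A minimal sketch with an imblearn pipeline follows; the toy data and hyperparameter grid are illustrative, not the paper's settings. Putting SMOTE inside the pipeline matters, since synthetic samples should be generated only on the training folds of each cross-validation split.

    from imblearn.over_sampling import SMOTE
    from imblearn.pipeline import Pipeline
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from xgboost import XGBClassifier

    # Imbalanced toy data standing in for assembly-quality records (~95% pass).
    X, y = make_classification(n_samples=4000, n_features=20, weights=[0.95],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    pipe = Pipeline([("smote", SMOTE(random_state=0)),
                     ("xgb", XGBClassifier(eval_metric="logloss"))])

    grid = GridSearchCV(pipe,
                        {"xgb__max_depth": [3, 5],
                         "xgb__n_estimators": [100, 300],
                         "xgb__learning_rate": [0.05, 0.1]},
                        scoring="f1", cv=3)
    grid.fit(X_tr, y_tr)
    print("best params:", grid.best_params_)
    print("held-out F1:", grid.score(X_te, y_te))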

RevDate: 2024-05-02

Peccoud S, Berezin CT, Hernandez SI, et al (2024)

PlasCAT: Plasmid cloud assembly tool.

Bioinformatics (Oxford, England) pii:7663467 [Epub ahead of print].

SUMMARY: PlasCAT is an easy-to-use cloud-based bioinformatics tool that enables de novo plasmid sequence assembly from raw sequencing data. Non-technical users can now assemble sequences from long reads and short reads without ever touching a line of code. PlasCAT uses high-performance computing servers to reduce run times on assemblies and deliver results faster.

PlasCAT is freely available on the web at https://sequencing.genofab.com. The assembly pipeline source code and server code are available for download at https://bitbucket.org/genofabinc/workspace/projects/PLASCAT; click the Cancel button to access the source code without authenticating. The web servers are implemented in React.js and Python, and all major browsers are supported.

RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.

This is a must-read book for anyone with an interest in invasion biology. The full title of the book lays out the author's premise — The New Wild: Why Invasive Species Will Be Nature's Salvation. Not only is species movement not bad for ecosystems, it is the way that ecosystems respond to perturbation — it is the way ecosystems heal. Even if you are one of those who is absolutely convinced that invasive species are actually "a blight, pollution, an epidemic, or a cancer on nature", you should read this book to clarify your own thinking. True scientific understanding never comes from just interacting with those with whom you already agree. R. Robbins

963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)