


Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 21 Jun 2024 at 01:41

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on the sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during short periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
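The pay-as-you-go risk described above can be made concrete with a small sketch. All rates and usage figures below are invented for illustration, not any provider's actual price list:

```python
# Hypothetical pay-as-you-go cost model: the bill is metered usage
# times per-unit rates. Every number here is made up for the example.

def monthly_cost(cpu_hours, gb_stored, gb_egress,
                 cpu_rate=0.05, storage_rate=0.023, egress_rate=0.09):
    """Estimate a monthly bill (USD) from metered usage."""
    return (cpu_hours * cpu_rate
            + gb_stored * storage_rate
            + gb_egress * egress_rate)

# A steady baseline month vs. a burst month: same pricing model,
# very different bills -- the surprise a grant budget must absorb.
baseline = monthly_cost(cpu_hours=200, gb_stored=500, gb_egress=50)
burst = monthly_cost(cpu_hours=5000, gb_stored=500, gb_egress=800)
print(f"baseline ${baseline:.2f}, burst ${burst:.2f}")
```

The same linear model that makes the baseline bill predictable is what makes an unplanned burst expensive: costs scale directly with usage, with no cap unless one is configured.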

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
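For readers who want to reproduce the listing, the query above can be submitted to NCBI's E-utilities `esearch` endpoint. The endpoint and parameter names below are the documented E-utilities ones; the sketch only builds the request URL and does not perform the network call:

```python
from urllib.parse import urlencode

# The bibliography's PubMed query, verbatim.
QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB]'
         ' OR google[TIAB] OR "microsoft azure"[TIAB]) )'
         ' NOT pmcbook NOT ispreviousversion')

params = urlencode({
    "db": "pubmed",     # search the PubMed database
    "term": QUERY,      # the boolean query shown above
    "retmode": "json",  # machine-readable response
    "retmax": 100,      # page size, matching the 100-citation blocks
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
print(url)
```

Fetching that URL returns the matching PMIDs, which can then be passed to `efetch` for the full citations.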

Citations: The Papers (from PubMed®)


RevDate: 2024-06-20

Dai S (2024)

On the quantum circuit implementation of modus ponens.

Scientific reports, 14(1):14245.

The process of inference reflects the structure of propositions with assigned truth values, either true or false. Modus ponens is a fundamental form of inference that affirms the antecedent in order to affirm the consequent. Inspired by quantum computing, a superposition of true and false is used for parallel processing. In this work, we propose a quantum version of modus ponens. Additionally, we introduce two generalizations of quantum modus ponens: the quantum modus ponens inference chain and multidimensional quantum modus ponens. Finally, a simple implementation of quantum modus ponens on the OriginQ quantum computing cloud platform is demonstrated.
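The inference rule itself is simple enough to state in a few lines. This is a classical sketch of the idea only: it evaluates modus ponens on every truth assignment "in parallel", which is the role a superposition of true and false plays on actual quantum hardware; it is not the paper's circuit.

```python
from itertools import product

def modus_ponens(p, p_implies_q):
    """From p and (p -> q), infer q; returns None when the rule does not fire."""
    if p and p_implies_q:
        return True    # antecedent affirmed, so the consequent holds
    return None        # nothing follows from this assignment

# Evaluate every (p, q) assignment at once, as a superposition would.
for p, q in product([False, True], repeat=2):
    implies = (not p) or q        # classical truth table of p -> q
    print(p, q, "=>", modus_ponens(p, implies))
```

A quantum implementation replaces the loop over assignments with a single run on a superposed input state.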

RevDate: 2024-06-19

Gazis A, E Katsiri (2024)

Streamline Intelligent Crowd Monitoring with IoT Cloud Computing Middleware.

Sensors (Basel, Switzerland), 24(11):.

This article introduces a novel middleware that utilizes cost-effective, low-power computing devices like Raspberry Pi to analyze data from wireless sensor networks (WSNs). It is designed for indoor settings like historical buildings and museums, tracking visitors and identifying points of interest. It serves as an evacuation aid by monitoring occupancy and gauging the popularity of specific areas, subjects, or art exhibitions. The middleware employs a basic form of the MapReduce algorithm to gather WSN data and distribute it across available computer nodes. Data collected by RFID sensors on visitor badges is stored on mini-computers placed in exhibition rooms and then transmitted to a remote database after a preset time frame. Utilizing MapReduce for data analysis and a leader election algorithm for fault tolerance, this middleware showcases its viability through metrics, demonstrating applications like swift prototyping and accurate validation of findings. Despite using simpler hardware, its performance matches resource-intensive methods involving audiovisual and AI techniques. This design's innovation lies in its fault-tolerant, distributed setup using budget-friendly, low-power devices rather than resource-heavy hardware or methods. Successfully tested at a historical building in Greece (M. Hatzidakis' residence), it is tailored for indoor spaces. This paper compares its algorithmic application layer with other implementations, highlighting its technical strengths and advantages. Particularly relevant in the wake of the COVID-19 pandemic and general monitoring middleware for indoor locations, this middleware holds promise in tracking visitor counts and overall building occupancy.
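The middleware's MapReduce data flow can be sketched in miniature: RFID badge readings are mapped to (room, 1) pairs and reduced to per-room visitor counts. The record fields and readings below are invented for the example; the real system distributes the two phases across mini-computer nodes:

```python
from collections import defaultdict

# Toy RFID readings, one per badge detection event.
readings = [
    {"badge": "A17", "room": "gallery-1"},
    {"badge": "B02", "room": "gallery-1"},
    {"badge": "A17", "room": "gallery-2"},
    {"badge": "C44", "room": "gallery-1"},
]

def map_phase(records):
    # Emit one (key, 1) pair per sensor reading.
    for r in records:
        yield r["room"], 1

def reduce_phase(pairs):
    # Sum the emitted counts per key, as a reducer node would.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

occupancy = reduce_phase(map_phase(readings))
print(occupancy)  # per-room visit counts
```

In the deployed system the same two phases run across the Raspberry Pi nodes, with the leader election algorithm deciding which node plays the reducer if one fails.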

RevDate: 2024-06-19

López-Ortiz EJ, Perea-Trigo M, Soria-Morillo LM, et al (2024)

Energy-Efficient Edge and Cloud Image Classification with Multi-Reservoir Echo State Network and Data Processing Units.

Sensors (Basel, Switzerland), 24(11):.

In an era dominated by Internet of Things (IoT) devices, software-as-a-service (SaaS) platforms, and rapid advances in cloud and edge computing, the demand for efficient and lightweight models suitable for resource-constrained devices such as data processing units (DPUs) has surged. Traditional deep learning models, such as convolutional neural networks (CNNs), pose significant computational and memory challenges, limiting their use in resource-constrained environments. Echo State Networks (ESNs), based on reservoir computing principles, offer a promising alternative with reduced computational complexity and shorter training times. This study explores the applicability of ESN-based architectures in image classification and weather forecasting tasks, using benchmarks such as the MNIST, FashionMnist, and CloudCast datasets. Through comprehensive evaluations, the Multi-Reservoir ESN (MRESN) architecture emerges as a standout performer, demonstrating its potential for deployment on DPUs or home stations. In exploiting the dynamic adaptability of MRESN to changing input signals, such as weather forecasts, continuous on-device training becomes feasible, eliminating the need for static pre-trained models. Our results highlight the importance of lightweight models such as MRESN in cloud and edge computing applications where efficiency and sustainability are paramount. This study contributes to the advancement of efficient computing practices by providing novel insights into the performance and versatility of MRESN architectures. By facilitating the adoption of lightweight models in resource-constrained environments, our research provides a viable alternative for improved efficiency and scalability in modern computing paradigms.
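The computational appeal of ESNs comes from their update rule: the reservoir state evolves as x' = tanh(W_in u + W x) with fixed random weights, and only a linear readout is ever trained. A bare-bones, pure-Python sketch with toy sizes and weights (not the paper's MRESN architecture):

```python
import math, random

random.seed(0)
N = 20                                   # toy reservoir size
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def step(x, u):
    """One reservoir update x' = tanh(W_in*u + W*x) for scalar input u."""
    return [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
            for i in range(N)]

x = [0.0] * N
for u in [0.1, 0.5, -0.3]:               # drive with a short input signal
    x = step(x, u)
print(len(x))
```

Because the recurrent weights are never trained, fitting the readout reduces to a linear regression over collected states, which is what keeps training times short on DPU-class hardware. A multi-reservoir variant chains several such reservoirs.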

RevDate: 2024-06-14

Bayerlein R, Swarnakar V, Selfridge A, et al (2024)

Cloud-based serverless computing enables accelerated monte carlo simulations for nuclear medicine imaging.

Biomedical physics & engineering express [Epub ahead of print].

This study investigates the potential of cloud-based serverless computing to accelerate Monte Carlo (MC) simulations for nuclear medicine imaging tasks. MC simulations can pose a high computational burden, even when executed on modern multi-core computing servers. Cloud computing allows simulation tasks to be highly parallelized and considerably accelerated. We investigate the computational performance of a cloud-based serverless MC simulation of radioactive decays for positron emission tomography imaging using the Amazon Web Services (AWS) Lambda serverless computing platform for the first time in the scientific literature. We provide a comparison of the computational performance of AWS to a modern on-premises multi-thread reconstruction server by measuring the execution times of the processes using between 10^5 and 2∙10^10 simulated decays. We deployed two popular MC simulation frameworks, SimSET and GATE, within the AWS computing environment. Containerized application images were used as a basis for an AWS Lambda function, and local (non-cloud) scripts were used to orchestrate the deployment of simulations. The task was broken down into smaller parallel runs launched on concurrently running AWS Lambda instances, and the results were postprocessed and downloaded via the Simple Storage Service. Our implementation of cloud-based MC simulations with SimSET outperforms local server-based computations by more than an order of magnitude. However, the GATE implementation creates more and larger output files and reveals that the internet connection speed can become the primary bottleneck for data transfers. Simulating 10^9 decays using SimSET is possible within 5 min and accrues computation costs of about $10 on AWS, whereas GATE would have to run in batches for more than 100 min at considerably higher cost. Adopting a cloud-based serverless computing architecture in medical imaging research facilities can considerably improve processing times and overall workflow efficiency, with future research exploring additional enhancements through optimized configurations and computational methods.
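The fan-out pattern described above can be sketched locally. Here a thread pool stands in for concurrent Lambda invocations, and a toy hit counter stands in for SimSET or GATE; the batch size and detection probability are invented for the example:

```python
import random
from concurrent.futures import ThreadPoolExecutor

BATCH = 250_000  # decays per worker invocation (arbitrary for the sketch)

def simulate_batch(n_decays, seed):
    """Toy 'physics': count decays whose photon happens to hit the detector."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_decays) if rng.random() < 0.05)

def run(total_decays):
    # Split the decay budget into fixed-size batches and launch them
    # concurrently, as the orchestration scripts launch Lambda instances.
    batches = [BATCH] * (total_decays // BATCH)
    with ThreadPoolExecutor(max_workers=8) as pool:
        hits = pool.map(simulate_batch, batches, range(len(batches)))
        return sum(hits)  # post-process: merge the per-batch results

print(run(1_000_000))
```

In the actual deployment each batch is a separate Lambda invocation writing its output to S3, so the merge step also pays the download cost that makes GATE's larger files a bottleneck.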

RevDate: 2024-06-14

Guo Y, Ganti S, Y Wu (2024)

Enhancing Energy Efficiency in Telehealth Internet of Things Systems Through Fog and Cloud Computing Integration: Simulation Study.

JMIR biomedical engineering, 9:e50175 pii:v9i1e50175.

BACKGROUND: The increasing adoption of telehealth Internet of Things (IoT) devices in health care informatics has led to concerns about energy use and data processing efficiency.

OBJECTIVE: This paper introduces an innovative model that integrates telehealth IoT devices with a fog and cloud computing-based platform, aiming to enhance energy efficiency in telehealth IoT systems.

METHODS: The proposed model incorporates adaptive energy-saving strategies, localized fog nodes, and a hybrid cloud infrastructure. Simulation analyses were conducted to assess the model's effectiveness in reducing energy consumption and enhancing data processing efficiency.

RESULTS: Simulation results demonstrated significant energy savings, with a 2% reduction in energy consumption achieved through adaptive energy-saving strategies. The sample size for the simulation was 10-40, providing statistical robustness to the findings.

CONCLUSIONS: The proposed model successfully addresses energy and data processing challenges in telehealth IoT scenarios. By integrating fog computing for local processing and a hybrid cloud infrastructure, substantial energy savings are achieved. Ongoing research will focus on refining the energy conservation model and exploring additional functional enhancements for broader applicability in health care and industrial contexts.

RevDate: 2024-06-14

Chan NB, Li W, Aung T, et al (2023)

Machine Learning-Based Time in Patterns for Blood Glucose Fluctuation Pattern Recognition in Type 1 Diabetes Management: Development and Validation Study.

JMIR AI, 2:e45450 pii:v2i1e45450.

BACKGROUND: Continuous glucose monitoring (CGM) for diabetes combines noninvasive glucose biosensors, continuous monitoring, cloud computing, and analytics to connect and simulate a hospital setting in a person's home. CGM systems inspired analytics methods to measure glycemic variability (GV), but existing GV analytics methods disregard glucose trends and patterns; hence, they fail to capture entire temporal patterns and do not provide granular insights about glucose fluctuations.

OBJECTIVE: This study aimed to propose a machine learning-based framework for blood glucose fluctuation pattern recognition, which enables a more comprehensive representation of GV profiles that could present detailed fluctuation information, be easily understood by clinicians, and provide insights about patient groups based on time in blood fluctuation patterns.

METHODS: Overall, 1.5 million measurements from 126 patients in the United Kingdom with type 1 diabetes mellitus (T1DM) were collected, and prevalent blood fluctuation patterns were extracted using dynamic time warping. The patterns were further validated in 225 patients in the United States with T1DM. Hierarchical clustering was then applied on time in patterns to form 4 clusters of patients. Patient groups were compared using statistical analysis.

RESULTS: In total, 6 patterns depicting distinctive glucose levels and trends were identified and validated, based on which 4 GV profiles of patients with T1DM were found. They were significantly different in terms of glycemic statuses such as diabetes duration (P=.04), glycated hemoglobin level (P<.001), and time in range (P<.001) and thus had different management needs.

CONCLUSIONS: The proposed method can analytically extract existing blood fluctuation patterns from CGM data. Thus, time in patterns can capture a rich view of patients' GV profile. Its conceptual resemblance with time in range, along with rich blood fluctuation details, makes it more scalable, accessible, and informative to clinicians.
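The pattern-extraction step in the methods relies on dynamic time warping (DTW). The textbook O(n·m) recurrence below is a minimal sketch of that alignment measure, not the authors' full pipeline; the glucose traces are invented toy values:

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost of a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

# Two traces with the same shape but different timing align at zero cost,
# which is why DTW suits glucose curves that rise and fall at varying speeds.
print(dtw([5.0, 7.0, 9.0, 7.0], [5.0, 5.0, 7.0, 9.0, 9.0, 7.0]))
```

Clustering patients by "time in patterns" then amounts to hierarchical clustering over how long each patient spends in each DTW-derived pattern.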

RevDate: 2024-06-13

Danning Z, Jia Q, Yinni M, et al (2024)

Establishment and Verification of a Skin Cancer Diagnosis Model Based on Image Convolutional Neural Network Analysis and Artificial Intelligence Algorithms.

Alternative therapies in health and medicine pii:AT10026 [Epub ahead of print].

Skin cancer is a serious public health problem, with countless deaths due to skin cancer each year. Early detection and aggressive, effective treatment of the primary lesion offer the best outcome for skin cancer and are important for improving patients' prognosis and reducing the death rate of the disease. However, judging skin tumors by the naked eye alone is highly subjective, and the diagnosis can vary greatly even among professionally trained physicians. Clinically, skin endoscopy is a commonly used method for early diagnosis. However, manual examination is time-consuming, laborious, and highly dependent on the clinical experience of dermatologists. In today's society, with the rapid development of information technology, the amount of information is increasing at a geometric rate, and new technologies such as cloud computing, distributed computing, data mining, and metaheuristics are emerging. In this paper, we design and build a computer-aided diagnosis system for dermatoscopic images and apply metaheuristic algorithms to image enhancement and image segmentation to improve the quality of images, thus increasing the speed of diagnosis and enabling earlier detection and treatment.

RevDate: 2024-06-13

Hu Y, Schnaubelt M, Chen L, et al (2024)

MS-PyCloud: A Cloud Computing-Based Pipeline for Proteomic and Glycoproteomic Data Analyses.

Analytical chemistry [Epub ahead of print].

Rapid development and wide adoption of mass spectrometry-based glycoproteomic technologies have empowered scientists to study proteins and protein glycosylation in complex samples on a large scale. This progress has also created unprecedented challenges for individual laboratories to store, manage, and analyze proteomic and glycoproteomic data, both in the cost for proprietary software and high-performance computing and in the long processing time that discourages on-the-fly changes of data processing settings required in explorative and discovery analysis. We developed an open-source, cloud computing-based pipeline, MS-PyCloud, with graphical user interface (GUI), for proteomic and glycoproteomic data analysis. The major components of this pipeline include data file integrity validation, MS/MS database search for spectral assignments to peptide sequences, false discovery rate estimation, protein inference, quantitation of global protein levels, and specific glycan-modified glycopeptides as well as other modification-specific peptides such as phosphorylation, acetylation, and ubiquitination. To ensure the transparency and reproducibility of data analysis, MS-PyCloud includes open-source software tools with comprehensive testing and versioning for spectrum assignments. Leveraging public cloud computing infrastructure via Amazon Web Services (AWS), MS-PyCloud scales seamlessly based on analysis demand to achieve fast and efficient performance. Application of the pipeline to the analysis of large-scale LC-MS/MS data sets demonstrated the effectiveness and high performance of MS-PyCloud. The software can be downloaded at https://github.com/huizhanglab-jhu/ms-pycloud.
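Among the pipeline components listed above, false discovery rate (FDR) estimation is the easiest to sketch. The standard target-decoy estimate, at a score cutoff, approximates FDR as (decoy hits)/(target hits) above the cutoff. The scores below are invented, and real pipelines add refinements such as q-values; this is an illustration of the general technique, not MS-PyCloud's exact code:

```python
def fdr_at_cutoff(psms, cutoff):
    """Target-decoy FDR estimate. psms: list of (score, is_decoy) tuples."""
    targets = sum(1 for s, d in psms if s >= cutoff and not d)
    decoys = sum(1 for s, d in psms if s >= cutoff and d)
    return decoys / targets if targets else 0.0

# Toy peptide-spectrum matches: search scores with decoy flags.
psms = [(9.1, False), (8.7, False), (8.2, True), (7.9, False),
        (7.5, False), (6.8, True), (6.1, False)]
print(fdr_at_cutoff(psms, 7.0))   # 1 decoy / 4 targets
```

Sweeping the cutoff until the estimate drops below a tolerance (commonly 1%) yields the score threshold used to filter spectrum assignments.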

RevDate: 2024-06-13
CmpDate: 2024-06-13

Sochat V, Culquicondor A, Ojea A, et al (2024)

The Flux Operator.

F1000Research, 13:203.

Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator - an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.

RevDate: 2024-06-11

Salcedo A, Tarabichi M, Buchanan A, et al (2024)

Crowd-sourced benchmarking of single-sample tumor subclonal reconstruction.

Nature biotechnology [Epub ahead of print].

Subclonal reconstruction algorithms use bulk DNA sequencing data to quantify parameters of tumor evolution, allowing an assessment of how cancers initiate, progress and respond to selective pressures. We launched the ICGC-TCGA (International Cancer Genome Consortium-The Cancer Genome Atlas) DREAM Somatic Mutation Calling Tumor Heterogeneity and Evolution Challenge to benchmark existing subclonal reconstruction algorithms. This 7-year community effort used cloud computing to benchmark 31 subclonal reconstruction algorithms on 51 simulated tumors. Algorithms were scored on seven independent tasks, leading to 12,061 total runs. Algorithm choice influenced performance substantially more than tumor features but purity-adjusted read depth, copy-number state and read mappability were associated with the performance of most algorithms on most tasks. No single algorithm was a top performer for all seven tasks and existing ensemble strategies were unable to outperform the best individual methods, highlighting a key research need. All containerized methods, evaluation code and datasets are available to support further assessment of the determinants of subclonal reconstruction accuracy and development of improved methods to understand tumor evolution.

RevDate: 2024-06-11
CmpDate: 2024-06-11

Ko G, Lee JH, Sim YM, et al (2024)

KoNA: Korean Nucleotide Archive as A New Data Repository for Nucleotide Sequence Data.

Genomics, proteomics & bioinformatics, 22(1):.

During the last decade, the generation and accumulation of petabase-scale high-throughput sequencing data have resulted in great challenges, including access to human data, as well as transfer, storage, and sharing of enormous amounts of data. To promote data-driven biological research, the Korean government announced that all biological data generated from government-funded research projects should be deposited at the Korea BioData Station (K-BDS), which consists of multiple databases for individual data types. Here, we introduce the Korean Nucleotide Archive (KoNA), a repository of nucleotide sequence data. As of July 2022, the Korean Read Archive in KoNA has collected over 477 TB of raw next-generation sequencing data from national genome projects. To ensure data quality and prepare for international alignment, a standard operating procedure was adopted, which is similar to that of the International Nucleotide Sequence Database Collaboration. The standard operating procedure includes quality control processes for submitted data and metadata using an automated pipeline, followed by manual examination. To ensure fast and stable data transfer, a high-speed transmission system called GBox is used in KoNA. Furthermore, the data uploaded to or downloaded from KoNA through GBox can be readily processed using a cloud computing service called Bio-Express. This seamless coupling of KoNA, GBox, and Bio-Express enhances the data experience, including submission, access, and analysis of raw nucleotide sequences. KoNA not only satisfies the unmet needs for a national sequence repository in Korea but also provides datasets to researchers globally and contributes to advances in genomics. The KoNA is available at https://www.kobic.re.kr/kona/.

RevDate: 2024-06-11

McMurry AJ, Gottlieb DI, Miller TA, et al (2024)

Cumulus: a federated electronic health record-based learning system powered by Fast Healthcare Interoperability Resources and artificial intelligence.

Journal of the American Medical Informatics Association : JAMIA pii:7690920 [Epub ahead of print].

OBJECTIVE: To address challenges in large-scale electronic health record (EHR) data exchange, we sought to develop, deploy, and test an open source, cloud-hosted app "listener" that accesses standardized data across the SMART/HL7 Bulk FHIR Access application programming interface (API).

METHODS: We advance a model for scalable, federated, data sharing and learning. Cumulus software is designed to address key technology and policy desiderata including local utility, control, and administrative simplicity as well as privacy preservation during robust data sharing, and artificial intelligence (AI) for processing unstructured text.

RESULTS: Cumulus relies on containerized, cloud-hosted software, installed within a healthcare organization's security envelope. Cumulus accesses EHR data via the Bulk FHIR interface and streamlines automated processing and sharing. The modular design enables use of the latest AI and natural language processing tools and supports provider autonomy and administrative simplicity. In an initial test, Cumulus was deployed across 5 healthcare systems each partnered with public health. Cumulus output is patient counts which were aggregated into a table stratifying variables of interest to enable population health studies. All code is available open source. A policy stipulating that only aggregate data leave the institution greatly facilitated data sharing agreements.

DISCUSSION AND CONCLUSION: Cumulus addresses barriers to data sharing based on (1) federally required support for standard APIs, (2) increasing use of cloud computing, and (3) advances in AI. There is potential for scalability to support learning across myriad network configurations and use cases.
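The "only aggregate data leave the institution" policy can be illustrated with a short sketch: patient-level records stay local, and only a count table stratified by variables of interest is shared. The variable names and records below are fabricated for the example, not Cumulus's schema:

```python
from collections import Counter

# Toy patient-level rows; in practice these never leave the institution.
records = [
    {"age_band": "0-17", "sex": "F", "dx": "flu"},
    {"age_band": "0-17", "sex": "M", "dx": "flu"},
    {"age_band": "18-64", "sex": "F", "dx": "flu"},
    {"age_band": "0-17", "sex": "F", "dx": "flu"},
]

def aggregate(rows, strata):
    """Collapse row-level data into counts per stratum combination."""
    return Counter(tuple(r[k] for k in strata) for r in rows)

table = aggregate(records, ("age_band", "sex"))
print(table)  # only these counts would be transmitted
```

Because the shared artifact is a count table rather than row-level data, the data sharing agreement only has to cover aggregates, which is the simplification the authors credit for faster approvals.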

RevDate: 2024-06-11

Yang T, Du Y, Sun M, et al (2024)

Risk Management for Whole-Process Safe Disposal of Medical Waste: Progress and Challenges.

Risk management and healthcare policy, 17:1503-1522.

Over the past decade, the global outbreaks of SARS, influenza A (H1N1), COVID-19, and other major infectious diseases have exposed the insufficient capacity for emergency disposal of medical waste in numerous countries and regions. Particularly during epidemics of major infectious diseases, medical waste exhibits new characteristics such as accelerated growth rate, heightened risk level, and more stringent disposal requirements. Consequently, there is an urgent need for advanced theoretical approaches that can perceive, predict, evaluate, and control risks associated with safe disposal throughout the entire process in a timely, accurate, efficient, and comprehensive manner. This article provides a systematic review of relevant research on collection, storage, transportation, and disposal of medical waste throughout its entirety to illustrate the current state of safe disposal practices. Building upon this foundation and leveraging emerging information technologies like Internet of Things (IoT), cloud computing, big data analytics, and artificial intelligence (AI), we deeply contemplate future research directions with an aim to minimize risks across all stages of medical waste disposal while offering valuable references and decision support to further advance safe disposal practices.

RevDate: 2024-06-10

Ullah S, Ou J, Xie Y, et al (2024)

Facial expression recognition (FER) survey: a vision, architectural elements, and future directions.

PeerJ. Computer science, 10:e2024.

With cutting-edge advancements in computer vision, facial expression recognition (FER) is an active research area due to its broad practical applications. It has been utilized in various fields, including education, advertising and marketing, entertainment and gaming, health, and transportation. FER-based systems are rapidly evolving in response to new challenges, and significant research has been conducted on both basic and compound facial expressions of emotion; however, measuring emotions is challenging. Motivated by these recent advancements and challenges, in this article we discuss the basics of FER and its architectural elements, FER applications and use cases, leading global FER companies, and the interconnection between FER, the Internet of Things (IoT), and cloud computing; we summarize open challenges for FER technologies in depth and outline future directions, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) method. In the end, conclusions and future thoughts are discussed. By overcoming the challenges and pursuing the future directions identified in this study, researchers will advance the discipline of facial expression recognition.

RevDate: 2024-06-10
CmpDate: 2024-06-10

Aman SS, N'guessan BG, Agbo DDA, et al (2023)

Search engine Performance optimization: methods and techniques.

F1000Research, 12:1317.

BACKGROUND: With the rapid advancement of information technology, search engine optimisation (SEO) has become crucial for enhancing the visibility and relevance of online content. In this context, the use of cloud platforms like Microsoft Azure is being explored to bolster SEO capabilities.

METHODS: This scientific article offers an in-depth study of search engine optimisation. It explores the different methods and techniques used to improve the performance and efficiency of a search engine, focusing on key aspects such as result relevance, search speed and user experience. The article also presents case studies and concrete examples to illustrate the practical application of optimisation techniques.

RESULTS: The results demonstrate the importance of optimisation in delivering high quality search results and meeting the increasing demands of users.

CONCLUSIONS: The article addresses the enhancement of search engines through the Microsoft Azure infrastructure and its associated components. It highlights methods such as indexing, semantic analysis, parallel searches, and caching to strengthen the relevance of results, speed up searches, and optimise the user experience. Following the application of these methods, a marked improvement was observed in these areas, thereby showcasing the capability of Microsoft Azure in enhancing search engines. The study sheds light on the implementation and analysis of these Azure-focused techniques, introduces a methodology for assessing their efficacy, and details the specific benefits of each method. Looking forward, the article suggests integrating artificial intelligence to elevate the relevance of results, venturing into other cloud infrastructures to boost performance, and evaluating these methods in specific scenarios, such as multimedia information search. In summary, with Microsoft Azure, the enhancement of search engines appears promising, with increased relevance and a heightened user experience in a rapidly evolving sector.
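Of the techniques the conclusions name, caching is the simplest to sketch. Python's standard library memoization stands in here for a search-result cache; the backend function and its latency are stand-ins invented for the example, not Azure components:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def search(query):
    """Simulated expensive backend search; cached per query string."""
    time.sleep(0.01)              # pretend this is an index scan
    return f"results for {query!r}"

search("cloud computing")         # cold: pays the backend cost
search("cloud computing")         # warm: answered from the cache
print(search.cache_info())        # hits/misses show the cache working
```

A production search cache adds what `lru_cache` lacks, such as entry expiry and invalidation when the index is updated, but the core speed-up mechanism is the same: repeated queries never reach the backend.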

RevDate: 2024-06-06

Hie BL, Kim S, Rando TA, et al (2024)

Scanorama: integrating large and diverse single-cell transcriptomic datasets.

Nature protocols [Epub ahead of print].

Merging diverse single-cell RNA sequencing (scRNA-seq) data from numerous experiments, laboratories and technologies can uncover important biological insights. Nonetheless, integrating scRNA-seq data encounters special challenges when the datasets are composed of diverse cell type compositions. Scanorama offers a robust solution for improving the quality and interpretation of heterogeneous scRNA-seq data by effectively merging information from diverse sources. Scanorama is designed to address the technical variation introduced by differences in sample preparation, sequencing depth and experimental batches that can confound the analysis of multiple scRNA-seq datasets. Here we provide a detailed protocol for using Scanorama within a Scanpy-based single-cell analysis workflow coupled with Google Colaboratory, a cloud-based free Jupyter notebook environment service. The protocol involves Scanorama integration, a process that typically spans 0.5-3 h. Scanorama integration requires a basic understanding of cellular biology, transcriptomic technologies and bioinformatics. Our protocol and new Scanorama-Colaboratory resource should make scRNA-seq integration more widely accessible to researchers.

RevDate: 2024-06-05

Zheng P, Yang J, Lou J, et al (2024)

Design and application of virtual simulation teaching platform for intelligent manufacturing.

Scientific reports, 14(1):12895.

Practical teaching in intelligent manufacturing majors faces a lack of equipment and a shortage of teachers, along with problems such as high equipment investment, high material loss, high teaching risk, internships that are difficult to implement, production that is difficult to observe, and results that are difficult to reproduce. Taking the electrical automation technology, mechatronics technology, and industrial robotics technology majors as examples, we design and establish a virtual simulation teaching platform for intelligent manufacturing majors using a cloud computing platform, edge computing technology, and terminal equipment in synergy. The platform includes six virtual simulation modules: electrician electronics and PLC control, typical intelligent manufacturing production lines combining virtual and real elements, a dual-axis collaborative robotics workstation, digital twin simulation, virtual disassembly of industrial robots, and a magnetic yoke axis flexible production line. The platform covers virtual simulation teaching content for basic principle experiments, advanced application experiments, and advanced integration experiments in intelligent manufacturing majors. To test the effectiveness of this virtual simulation platform for practical engineering teaching, we organized a teaching practice activity involving 246 students from two parallel classes in three different majors. Through a one-year teaching application, we analyzed data on grades in the 7 core courses of the three majors over one academic year, the proportion of students participating in competitions and innovation activities, the number of awards and professional qualification certificates, and subjective questionnaires from the testers. The analysis shows that learners who used the virtual simulation teaching platform proposed in this paper outperformed learners under traditional teaching methods in academic performance, participation in competitions and innovation activities, and awards and certificates by more than 13%, 37%, 36%, 27%, and 22%, respectively. Therefore, the virtual simulation teaching platform for intelligent manufacturing established in this paper has clear advantages in addressing the "three highs and three difficulties" of practical engineering teaching, and questionnaire feedback from the testers indicates that the platform can effectively alleviate the shortage of practical training equipment, stimulate interest in learning, and help broaden and improve learners' knowledge systems.

RevDate: 2024-06-05

Lai Q, S Guo (2024)

Heterogeneous coexisting attractors, large-scale amplitude control and finite-time synchronization of central cyclic memristive neural networks.

Neural networks : the official journal of the International Neural Network Society, 178:106412 pii:S0893-6080(24)00336-8 [Epub ahead of print].

Memristors are of great theoretical and practical significance for chaotic dynamics research of brain-like neural networks due to their excellent physical properties such as brain synapse-like memorability and nonlinearity, especially crucial for the promotion of AI big models, cloud computing, and intelligent systems in the artificial intelligence field. In this paper, we introduce memristors as self-connecting synapses into a four-dimensional Hopfield neural network, constructing a central cyclic memristive neural network (CCMNN), and achieving its effective control. The model adopts a central loop topology and exhibits a variety of complex dynamic behaviors such as chaos, bifurcation, and homogeneous and heterogeneous coexisting attractors. The complex dynamic behaviors of the CCMNN are investigated in depth numerically by equilibrium point stability analysis as well as phase trajectory maps, bifurcation maps, time-domain maps, and LEs. It is found that with the variation of the internal parameters of the memristor, asymmetric heterogeneous attractor coexistence phenomena appear under different initial conditions, including the multi-stable coexistence behaviors of periodic-periodic, periodic-stable point, periodic-chaotic, and stable point-chaotic. In addition, by adjusting the structural parameters, a wide range of amplitude control can be realized without changing the chaotic state of the system. Finally, based on the CCMNN model, an adaptive synchronization controller is designed to achieve finite-time synchronization control, and its application prospect in simple secure communication is discussed. A microcontroller-based hardware circuit and NIST test are conducted to verify the correctness of the numerical results and theoretical analysis.

RevDate: 2024-06-05
CmpDate: 2024-06-05

Oliva A, Kaphle A, Reguant R, et al (2024)

Future-proofing genomic data and consent management: a comprehensive review of technology innovations.

GigaScience, 13:.

Genomic information is increasingly used to inform medical treatments and manage future disease risks. However, any personal and societal gains must be carefully balanced against the risk to individuals contributing their genomic data. Expanding our understanding of actionable genomic insights requires researchers to access large global datasets to capture the complexity of genomic contribution to diseases. Similarly, clinicians need efficient access to a patient's genome as well as population-representative historical records for evidence-based decisions. Both researchers and clinicians hence rely on participants to consent to the use of their genomic data, which in turn requires trust in the professional and ethical handling of this information. Here, we review existing and emerging solutions for secure and effective genomic information management, including storage, encryption, consent, and authorization that are needed to build participant trust. We discuss recent innovations in cloud computing, quantum-computing-proof encryption, and self-sovereign identity. These innovations can augment key developments from within the genomics community, notably GA4GH Passports and the Crypt4GH file container standard. We also explore how decentralized storage as well as the digital consenting process can offer culturally acceptable processes to encourage data contributions from ethnic minorities. We conclude that the individual and their right for self-determination needs to be put at the center of any genomics framework, because only on an individual level can the received benefits be accurately balanced against the risk of exposing private information.

RevDate: 2024-06-04

Peter R, Moreira S, Tagliabue E, et al (2024)

Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.

International journal of computer assisted radiology and surgery [Epub ahead of print].

PURPOSE: This work presents a novel platform for stereo reconstruction in anterior segment ophthalmic surgery to enable enhanced scene understanding, especially depth perception, for advanced computer-assisted eye surgery by effectively addressing the lack of texture and corneal distortion artifacts in the surgical scene.

METHODS: The proposed platform for stereo reconstruction uses a two-step approach: generating a sparse 3D point cloud from microscopic images, then deriving a dense 3D representation by fitting surfaces onto the point cloud while considering geometrical priors of the eye anatomy. We incorporate a pre-processing step to rectify distortion artifacts induced by the cornea's high refractive power, achieved by aligning a 3D phenotypical cornea geometry model to the images and computing a distortion map using ray tracing.
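The surface-fitting step with geometric priors can be illustrated by a generic least-squares primitive fit. The sketch below (not the authors' implementation) fits a sphere, a common prior for the eye globe, to a synthetic point cloud by solving the linearized system |p|² = 2c·p + (r² − |c|²) for the center c and radius r; all points and values are illustrative.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit: solves the linearized system
    |p|^2 = 2 c.p + (r^2 - |c|^2) for center c and radius r."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2 * p, np.ones((len(p), 1))])  # unknowns: cx, cy, cz, d
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)         # d = r^2 - |c|^2
    return center, radius

# Synthetic points on a sphere of radius 2 centered at (1, 0, 0)
pts = [(3, 0, 0), (-1, 0, 0), (1, 2, 0), (1, -2, 0), (1, 0, 2), (1, 0, -2)]
c, r = fit_sphere(pts)
print(np.round(c, 6), round(float(r), 6))  # center ~ (1, 0, 0), radius ~ 2
```

In practice such a fit would run on the sparse reconstruction, with robust weighting to reject outlier points.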

RESULTS: The accuracy of 3D reconstruction is evaluated on stereo microscopic images of ex vivo porcine eyes, rigid phantom eyes, and synthetic photo-realistic images. The results demonstrate the potential of the proposed platform to enhance scene understanding via an accurate 3D representation of the eye and enable the estimation of instrument-to-layer distances in porcine eyes with a mean average error of 190 μm, comparable to the scale of surgeons' hand tremor.

CONCLUSION: This work marks a significant advancement in stereo reconstruction for ophthalmic surgery by addressing corneal distortions, a previously often overlooked aspect in such surgical scenarios. This could improve surgical outcomes by allowing for intra-operative computer assistance, e.g., in the form of virtual distance sensors.

RevDate: 2024-06-04
CmpDate: 2024-06-04

H S M, P Gupta (2024)

Federated learning inspired Antlion based orchestration for Edge computing environment.

PloS one, 19(6):e0304067 pii:PONE-D-24-06086.

Edge computing is a scalable, modern, and distributed computing architecture that brings computational workloads closer to smart gateways or Edge devices. This computing model delivers IoT (Internet of Things) computations and processes IoT requests from the Edge of the network. In a diverse and independent environment like Fog-Edge, resource management is a critical issue, so scheduling is vital for efficiency and proper allocation of resources to tasks. The manuscript proposes an Artificial Neural Network (ANN)-inspired Antlion algorithm for task orchestration in Edge environments, aiming to enhance resource utilization and reduce energy consumption. Comparative analysis with different algorithms shows that the proposed algorithm balances the load on the Edge layer, which lowers the load on the cloud and improves power consumption, CPU utilization, network utilization, and average waiting time for requests. The proposed model is tested for a healthcare application in an Edge computing environment, where it outperforms existing fuzzy logic and round-robin algorithms on these metrics. The proposed technique achieves an average cloud energy consumption improvement of 95.94%, an average Edge energy consumption improvement of 16.79%, a 19.85% improvement in average CPU utilization in the Edge environment, a 10.64% improvement in average CPU utilization in the cloud environment, and a 23.33% improvement in average network utilization, while average waiting time decreases by 96% compared to fuzzy logic and 1.4% compared to round-robin.

RevDate: 2024-06-04

Herre C, Ho A, Eisenbraun B, et al (2024)

Introduction of the Capsules environment to support further growth of the SBGrid structural biology software collection.

Acta crystallographica. Section D, Structural biology pii:S2059798324004881 [Epub ahead of print].

The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on-premises and cloud-based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross-platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict-free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables application beyond structural biology into other scientific fields.

RevDate: 2024-06-03

Rathinam R, Sivakumar P, Sigamani S, et al (2024)

SJFO: Sail Jelly Fish Optimization enabled VM migration with DRNN-based prediction for load balancing in cloud computing.

Network (Bristol, England) [Epub ahead of print].

The dynamic workload is evenly distributed among all nodes, such as hosts or VMs, using balancing methods. Load balancing in the cloud is also known as Load Balancing as a Service (LBaaS). In this research work, the load is balanced through Virtual Machine (VM) migration carried out by the proposed Sail Jelly Fish Optimization (SJFO), formed by combining the Sail Fish Optimizer (SFO) and the Jellyfish Search (JS) optimizer. In the Cloud model, many Physical Machines (PMs) are present, each comprising many VMs. Each VM runs many tasks, and these tasks depend on various parameters such as Central Processing Unit (CPU), memory, Million Instructions per Second (MIPS), capacity, total number of processing entities, and bandwidth. Here, the load is predicted by a Deep Recurrent Neural Network (DRNN) and compared with a threshold value, and VM migration is performed based on the predicted values. Furthermore, the performance of SJFO-VM is analysed using metrics such as capacity, load, and resource utilization. The proposed method shows better performance, with a superior capacity of 0.598, an inferior load of 0.089, and an inferior resource utilization of 0.257.
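The migration trigger described in the abstract (predicted load compared against a threshold) can be sketched as follows. The DRNN predictor is replaced here by a hypothetical moving-average stand-in, and the VM names, loads, and threshold are illustrative, not values from the paper.

```python
from collections import deque

def predict_load(history, window=3):
    """Stand-in load predictor: moving average over recent samples
    (the paper uses a DRNN in this role)."""
    recent = list(history)[-window:]
    return sum(recent) / len(recent)

def select_migrations(vm_loads, threshold=0.8):
    """Flag VMs whose predicted load exceeds the threshold as
    candidates for migration to another PM."""
    return [vm for vm, hist in vm_loads.items()
            if predict_load(hist) > threshold]

vm_loads = {
    "vm1": deque([0.55, 0.60, 0.58]),  # predicted ~0.58: stays put
    "vm2": deque([0.85, 0.90, 0.95]),  # predicted ~0.90: overloaded
}
print(select_migrations(vm_loads))  # -> ['vm2']
```

The optimizer's job in the full system is then to choose a destination PM for each flagged VM, which this sketch does not cover.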

RevDate: 2024-06-03

McCormick I, Butcher R, Ramke J, et al (2024)

The Rapid Assessment of Avoidable Blindness survey: Review of the methodology and protocol for the seventh version (RAAB7).

Wellcome open research, 9:133.

The Rapid Assessment of Avoidable Blindness (RAAB) is a population-based cross-sectional survey methodology used to collect data on the prevalence of vision impairment and its causes, and on eye care service indicators, among the population aged 50 years and older. RAAB has been used for over 20 years, with modifications to the protocol over time reflected in changing version numbers; this paper describes the latest version of the methodology, RAAB7. RAAB7 is a collaborative project between the International Centre for Eye Health and Peek Vision, with guidance from a steering group of global eye health stakeholders. We have fully digitised RAAB, allowing for fast, accurate and secure data collection. A bespoke Android mobile application automatically synchronises data to a secure Amazon Web Services virtual private cloud when devices are online, so users can monitor data collection in real time. Vision is screened using Peek Vision's digital visual acuity test for mobile devices, and uncorrected, corrected and pinhole visual acuity are collected. An optional module on disability is available. We have rebuilt the RAAB data repository as the end point of RAAB7's digital data workflow, including a front-end website to access the past 20 years of RAAB surveys worldwide. This website (https://www.raab.world) hosts open access RAAB data to support the advocacy and research efforts of the global eye health community. Active research sub-projects are finalising three new components in 2024-2025: 1) near vision screening to address data gaps on near vision impairment and effective refractive error coverage; 2) an optional health economics module to assess the affordability of eye care services and productivity losses associated with vision impairment; and 3) an optional health systems data collection module to support RAAB's primary aim of informing eye health service planning by helping users integrate eye care facility data with population data.

RevDate: 2024-06-03

Zhu X, X Peng (2024)

Strategic assessment model of smart stadiums based on genetic algorithms and literature visualization analysis: A case study from Chengdu, China.

Heliyon, 10(11):e31759.

This paper leverages CiteSpace and VOSviewer software to perform a comprehensive bibliometric analysis of a corpus of 384 references related to smart sports venues, spanning 1998 to 2022. The analysis encompasses author network analysis, institutional network analysis, temporal mapping, keyword clustering, and co-citation network analysis. Moreover, the paper constructs a smart stadiums strategic assessment model (SSSAM) based on genetic algorithms (GA) to counter confusion and aimlessness in strategic planning. Our findings indicate exponential year-over-year growth in publications on smart sports venues. Arizona State University emerges as the institution with the most collaborative publications, Energy and Buildings is the journal with the most documents, and Wang X stands out as the scholar with the most substantial contribution to the field. Scrutinizing the betweenness centrality indicators reveals a paradigm shift in research hotspots from intelligent software to the domains of the Internet of Things (IoT), intelligent services, and artificial intelligence (AI). The SSSAM model, based on artificial neural networks (ANN) and GA algorithms, reached similar conclusions through a case study of the International University Sports Federation (FISU): Building Information Modeling (BIM), cloud computing, and the artificial intelligence Internet of Things (AIoT) are expected to develop in the future. Three key themes developed over time. Finally, a comprehensive knowledge system with common references and future hot spots is proposed.

RevDate: 2024-06-03

Nisanova A, Yavary A, Deaner J, et al (2024)

Performance of Automated Machine Learning in Predicting Outcomes of Pneumatic Retinopexy.

Ophthalmology science, 4(5):100470.

PURPOSE: Automated machine learning (AutoML) has emerged as a novel tool for medical professionals lacking coding experience, enabling them to develop predictive models for treatment outcomes. This study evaluated the performance of AutoML tools in developing models predicting the success of pneumatic retinopexy (PR) in treatment of rhegmatogenous retinal detachment (RRD). These models were then compared with custom models created by machine learning (ML) experts.

DESIGN: Retrospective multicenter study.

PARTICIPANTS: Five hundred and thirty-nine consecutive patients with primary RRD who underwent PR performed by a vitreoretinal fellow at 6 training hospitals between 2002 and 2022.

METHODS: We used 2 AutoML platforms: MATLAB Classification Learner and Google Cloud AutoML. Additional models were developed by computer scientists. We included patient demographics and baseline characteristics, including lens and macula status, RRD size, number and location of breaks, presence of vitreous hemorrhage and lattice degeneration, and physicians' experience. The dataset was split into a training (n = 483) and test set (n = 56). The training set, with a 2:1 success-to-failure ratio, was used to train the MATLAB models. Because Google Cloud AutoML requires a minimum of 1000 samples, the training set was tripled to create a new set with 1449 datapoints. Additionally, balanced datasets with a 1:1 success-to-failure ratio were created using Python.
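The dataset preparation described above (replicating the training set to meet Google Cloud AutoML's 1,000-sample minimum, and random oversampling to a 1:1 success-to-failure ratio) can be sketched as follows; the record layout, label key, and counts are hypothetical, not the study's data.

```python
import random

def triple(dataset):
    """Replicate the dataset to satisfy a minimum-row requirement,
    as done to reach Google Cloud AutoML's 1000-sample floor."""
    return dataset * 3

def balance(dataset, label_key="success", seed=0):
    """Random oversampling of the minority class to a 1:1 ratio."""
    rng = random.Random(seed)
    pos = [r for r in dataset if r[label_key]]
    neg = [r for r in dataset if not r[label_key]]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    minority = minority + [rng.choice(minority)
                           for _ in range(len(majority) - len(minority))]
    return majority + minority

# Hypothetical 2:1 success-to-failure set, as in the training split
data = [{"success": True}] * 4 + [{"success": False}] * 2
print(len(triple(data)))   # -> 18
balanced = balance(data)
print(len(balanced))       # -> 8, with a 1:1 class ratio
```

Note that naive replication adds no new information, which is one reason the imbalanced-data results below should be read cautiously.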

MAIN OUTCOME MEASURES: Single-procedure anatomic success rate, as predicted by the ML models. F2 scores and area under the receiver operating curve (AUROC) were used as primary metrics to compare models.

RESULTS: The best-performing AutoML model (F2 score: 0.85; AUROC: 0.90; MATLAB) showed performance comparable to the custom model (0.92, 0.86) when trained on the balanced datasets. However, training the AutoML model with imbalanced data yielded a misleadingly high AUROC (0.81) despite a low F2 score (0.2) and sensitivity (0.17).
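The gap between a respectable AUROC and a collapsing F2 score on imbalanced data can be illustrated directly from the F-beta formula, where beta = 2 weights recall above precision; the confusion counts below are hypothetical, chosen only to show how low sensitivity drags F2 down.

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score from confusion counts; beta=2 weights recall
    twice as heavily as precision."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A classifier that rarely predicts the minority (failure) class:
# decent precision (2/3) but recall of only 2/12.
print(round(f_beta(tp=2, fp=1, fn=10), 2))  # -> 0.2
```

Because AUROC is computed from the ranking of scores rather than from a decision threshold, it can stay high even while threshold-dependent metrics like F2 and sensitivity collapse on the minority class.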

CONCLUSIONS: We demonstrated the feasibility of using AutoML as an accessible tool for medical professionals to develop models from clinical data. Such models can ultimately aid clinical decision-making, contributing to better patient outcomes. However, outcomes can be misleading or unreliable if the tools are used naively. Limitations exist, particularly if datasets contain missing variables or are highly imbalanced. Proper model selection and data preprocessing can improve the reliability of AutoML tools.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

RevDate: 2024-06-03

Rodriguez A, Kim Y, Nandi TN, et al (2024)

Accelerating Genome- and Phenome-Wide Association Studies using GPUs - A case study using data from the Million Veteran Program.

bioRxiv : the preprint server for biology pii:2024.05.17.594583.

The expansion of biobanks has significantly propelled genomic discoveries, yet the sheer scale of data within these repositories poses formidable computational hurdles, particularly in handling the extensive matrix operations required by prevailing statistical frameworks. In this work, we introduce computational optimizations to the SAIGE (Scalable and Accurate Implementation of Generalized Mixed Model) algorithm, notably employing a GPU-based distributed computing approach to tackle these challenges. We applied these optimizations to conduct a large-scale genome-wide association study (GWAS) across 2,068 phenotypes derived from electronic health records of 635,969 diverse participants from the Veterans Affairs (VA) Million Veteran Program (MVP). Our strategies enabled scaling the analysis up to over 6,000 nodes on the Department of Energy (DOE) Oak Ridge Leadership Computing Facility (OLCF) Summit High-Performance Computer (HPC), resulting in a 20-fold acceleration compared to the baseline model. We also provide a Docker container with our optimizations that was successfully used on multiple cloud infrastructures with the UK Biobank and All of Us datasets, where we showed significant time and cost benefits over the baseline SAIGE model.

RevDate: 2024-06-03

Lowndes JS, Holder AM, Markowitz EH, et al (2024)

Shifting institutional culture to develop climate solutions with Open Science.

Ecology and evolution, 14(6):e11341.

To address our climate emergency, "we must rapidly, radically reshape society" (Johnson & Wilkinson, All We Can Save). In science, reshaping requires formidable technical (cloud, coding, reproducibility) and cultural shifts (mindsets, hybrid collaboration, inclusion). We are a group of cross-government and academic scientists that are exploring better ways of working and not being too entrenched in our bureaucracies to do better science, support colleagues, and change the culture at our organizations. We share much-needed success stories and action for what we can all do to reshape science as part of the Open Science movement and 2023 Year of Open Science.

RevDate: 2024-05-30

Mimar S, Paul AS, Lucarelli N, et al (2024)

ComPRePS: An Automated Cloud-based Image Analysis tool to democratize AI in Digital Pathology.

Proceedings of SPIE--the International Society for Optical Engineering, 12933:.

Artificial intelligence (AI) has extensive applications in a wide range of disciplines, including healthcare and clinical practice. Advances in high-resolution whole-slide brightfield microscopy allow for the digitization of histologically stained tissue sections, producing gigapixel-scale whole-slide images (WSI). Significant improvements in computing and the revolution in deep neural network (DNN)-based AI technologies over the last decade allow us to integrate massively parallelized computational power, cutting-edge AI algorithms, and big data storage, management, and processing. Applied to WSIs, AI has created opportunities for improved disease diagnostics and prognostics, with the ultimate goal of enhancing precision medicine and resulting patient care. The National Institutes of Health (NIH) has recognized the importance of developing standardized principles for data management and discovery for the advancement of science and proposed the Findable, Accessible, Interoperable, Reusable (FAIR) Data Principles [1] with the goal of building a modernized biomedical data resource ecosystem to establish collaborative research communities. In line with this mission, and to democratize AI-based image analysis in digital pathology, we propose ComPRePS: an end-to-end automated Computational Renal Pathology Suite which combines massive scalability, on-demand cloud computing, and an easy-to-use web-based user interface for data upload, storage, management, slide-level visualization, and domain expert interaction. Moreover, our platform is equipped with sophisticated AI algorithms, developed both in-house and by collaborators, in the back-end server for image analysis to identify clinically relevant micro-anatomic functional tissue units (FTU) and to extract image features.

RevDate: 2024-05-29

Yu J, Nie S, Liu W, et al (2024)

Mapping global mangrove canopy height by integrating Ice, Cloud, and Land Elevation Satellite-2 photon-counting LiDAR data with multi-source images.

The Science of the total environment pii:S0048-9697(24)03634-9 [Epub ahead of print].

Large-scale and precise measurement of mangrove canopy height is crucial for understanding and evaluating wetland ecosystems' condition, health, and productivity. This study generates a global mangrove canopy height map with a 30 m resolution by integrating Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) photon-counting light detection and ranging (LiDAR) data with multi-source imagery. Initially, high-quality mangrove canopy height samples were extracted through meticulous processing and filtering of ICESat-2 data. Subsequently, mangrove canopy height models were established using the random forest (RF) algorithm, incorporating ICESat-2 canopy height samples, Sentinel-2 data, TanDEM-X DEM data, and WorldClim data. Furthermore, a global 30 m mangrove canopy height map was generated utilizing the Google Earth Engine platform. Finally, the global map's accuracy was evaluated by comparing it with reference canopy heights derived from both space-borne and airborne LiDAR data. Results indicate that the global 30 m resolution mangrove height map is consistent with canopy heights obtained from space-borne (r = 0.88, Bias = -0.07 m, RMSE = 3.66 m, RMSE% = 29.86%) and airborne LiDAR (r = 0.52, Bias = -1.08 m, RMSE = 3.39 m, RMSE% = 39.05%). Additionally, our findings reveal that mangroves worldwide exhibit an average height of 12.65 m, with the tallest mangrove reaching a height of 44.94 m. These results demonstrate the feasibility and effectiveness of using ICESat-2 data integrated with multi-source imagery to generate a global mangrove canopy height map. This dataset offers reliable information that can significantly support government and organizational efforts to protect and conserve mangrove ecosystems.
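The modeling step (RF regression from remote-sensing covariates to LiDAR-derived heights) can be sketched with scikit-learn; the synthetic covariates and height labels below are stand-ins for the actual Sentinel-2/TanDEM-X/WorldClim predictors and ICESat-2 samples, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))           # stand-in covariates (bands, DEM, climate)
y = 5 + 30 * X[:, 0] + rng.normal(0, 1, 200)   # stand-in canopy heights (m)

# Fit an RF regressor, the same model family the study uses
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
pred = rf.predict(X[:5])
print(pred.shape)  # -> (5,)
```

At global scale the equivalent fit and the wall-to-wall prediction are run on Google Earth Engine rather than locally, but the model structure is the same.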

RevDate: 2024-05-27

Oh S, Gravel-Pucillo K, Ramos M, et al (2024)

AnVILWorkflow: A runnable workflow package for Cloud-implemented bioinformatics analysis pipelines.

Research square pii:rs.3.rs-4370115.

Advancements in sequencing technologies and the development of new data collection methods produce large volumes of biological data. The Genomic Data Science Analysis, Visualization, and Informatics Lab-space (AnVIL) provides a cloud-based platform for democratizing access to large-scale genomics data and analysis tools. However, utilizing the full capabilities of AnVIL can be challenging for researchers without extensive bioinformatics expertise, especially for executing complex workflows. Here we present the AnVILWorkflow R package, which enables the convenient execution of bioinformatics workflows hosted on AnVIL directly from an R environment. AnVILWorkflow simplifies the setup of the cloud computing environment, input data formatting, workflow submission, and retrieval of results through intuitive functions. We demonstrate the utility of AnVILWorkflow for three use cases: bulk RNA-seq analysis with Salmon, metagenomics analysis with bioBakery, and digital pathology image processing with PathML. The key features of AnVILWorkflow include user-friendly browsing of available data and workflows, seamless integration of R and non-R tools within a reproducible analysis pipeline, and accessibility to scalable computing resources without direct management overhead. While some limitations exist around workflow customization, AnVILWorkflow lowers the barrier to taking advantage of AnVIL's resources, especially for exploratory analyses or bulk processing with established workflows. This empowers a broader community of researchers to leverage the latest genomics tools and datasets using familiar R syntax. This package is distributed through the Bioconductor project (https://bioconductor.org/packages/AnVILWorkflow), and the source code is available through GitHub (https://github.com/shbrief/AnVILWorkflow).

RevDate: 2024-05-26
CmpDate: 2024-05-26

Alrashdi I (2024)

Fog-based deep learning framework for real-time pandemic screening in smart cities from multi-site tomographies.

BMC medical imaging, 24(1):123.

The quick proliferation of pandemic diseases has been imposing many concerns on the international health infrastructure. To combat pandemic diseases in smart cities, Artificial Intelligence of Things (AIoT) technology, based on the integration of artificial intelligence (AI) with the Internet of Things (IoT), is commonly used to promote efficient control and diagnosis during an outbreak, thereby minimizing possible losses. However, the presence of multi-source institutional data remains one of the major challenges hindering the practical usage of AIoT solutions for pandemic disease diagnosis. This paper presents a novel framework that utilizes multi-site data fusion to boost the accuracy of pandemic disease diagnosis. In particular, we focus on a case study of COVID-19 lesion segmentation, a crucial task for understanding disease progression and optimizing treatment strategies. In this study, we propose a novel multi-decoder segmentation network for efficient segmentation of infections from cross-domain CT scans in smart cities. The multi-decoder segmentation network leverages data from heterogeneous domains and utilizes strong learning representations to accurately segment infections. Performance evaluation of the multi-decoder segmentation network was conducted on three publicly accessible datasets, demonstrating robust results with an average Dice score of 89.9% and an average surface Dice of 86.87%. To address scalability and latency issues associated with centralized cloud systems, fog computing (FC) emerges as a viable solution. FC brings resources closer to the operator, offering low latency and energy-efficient data management and processing. In this context, we propose a unique FC technique called PANDFOG to deploy the multi-decoder segmentation network on edge nodes for practical and clinical applications of automated COVID-19 pneumonia analysis. The results of this study highlight the efficacy of the multi-decoder segmentation network in accurately segmenting infections from cross-domain CT scans. Moreover, the proposed PANDFOG system demonstrates the practical deployment of the multi-decoder segmentation network on edge nodes, providing real-time access to COVID-19 segmentation findings for improved patient monitoring and clinical decision-making.

RevDate: 2024-05-25

de Azevedo Soares Dos Santos HC, Rodrigues Cintra Armellini B, Naves GL, et al (2024)

Using "adopt a bacterium" as an E-learning tool for simultaneously teaching microbiology to different health-related university courses.

FEMS microbiology letters pii:7681972 [Epub ahead of print].

The COVID-19 pandemic has posed challenges for education, particularly in undergraduate teaching. In this study, we report on how a private university successfully addressed this challenge through an active methodology applied to a microbiology discipline offered remotely to students from various health-related courses (veterinary, physiotherapy, nursing, biomedicine, and nutrition). Remote teaching was combined with the 'Adopt a Bacterium' methodology, implemented for the first time on Google Sites. The distance learning activity notably improved student participation in microbiology discussions, as shown both by word cloud analysis and by the richness of discourse measured by the Shannon index. Furthermore, feedback from students about the e-learning approach was highly positive, indicating its effectiveness in motivating and involving students in the learning process. The results also demonstrate that, despite being offered simultaneously to all students, the methodology allowed for the acquisition of specialized knowledge within each course and sparked student interest in various aspects of microbiology. In conclusion, the remote 'Adopt a Bacterium' methodology facilitated knowledge sharing among undergraduate students from different health-related courses and represented a valuable resource in distance microbiology education.
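The Shannon index used here to quantify richness of discourse is a simple function of token frequencies, H = -Σ pᵢ ln pᵢ; a minimal sketch with hypothetical word lists (not the study's data):

```python
import math
from collections import Counter

def shannon_index(tokens):
    """Shannon diversity H = -sum(p_i * ln p_i) over token frequencies."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Richer vocabulary yields a higher index
print(round(shannon_index("coccus bacillus vibrio spirillum".split()), 3))  # -> 1.386
print(round(shannon_index("coccus coccus bacillus bacillus".split()), 3))   # -> 0.693
```

For equally frequent tokens H reduces to ln(number of distinct tokens), which is why the two illustrative values are ln 4 and ln 2.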

RevDate: 2024-05-25

Shaghaghi N, Fazlollahi F, Shrivastav T, et al (2024)

DOxy: A Dissolved Oxygen Monitoring System.

Sensors (Basel, Switzerland), 24(10): pii:s24103253.

Dissolved Oxygen (DO) in water enables marine life. Measuring the prevalence of DO in a body of water is an important part of sustainability efforts because low oxygen levels are a primary indicator of contamination and distress in bodies of water. Aquariums and aquaculture operations of all types therefore need near real-time dissolved oxygen monitoring, and they spend heavily on purchasing and maintaining DO meters that are expensive, inefficient, or manually operated, in which case they must also ensure that manual readings are taken frequently, which is time-consuming. Hence a cost-effective and sustainable automated Internet of Things (IoT) system for this task is necessary and long overdue. DOxy is such an IoT system, under research and development at Santa Clara University's Ethical, Pragmatic, and Intelligent Computing (EPIC) Laboratory. It utilizes cost-effective, accessible, and sustainable Sensing Units (SUs) to measure the dissolved oxygen levels present in bodies of water and send the readings to a web-based cloud infrastructure for storage, analysis, and visualization. DOxy's SUs are equipped with a high-sensitivity pulse oximeter meant for measuring dissolved oxygen levels in human blood, not water. Hence a number of parallel readings of water samples were gathered with both the high-sensitivity pulse oximeter and a standard dissolved oxygen meter, and two approaches for relating the readings were investigated. In the first, various machine learning models were trained and tested to produce a dynamic mapping of sensor readings to actual DO values. In the second, curve-fitting models were used to produce a conversion formula usable in the DOxy SUs offline. Both proved successful in producing accurate results.
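The curve-fitting approach can be sketched with the simplest possible calibration, an ordinary least-squares line mapping raw oximeter readings to reference DO-meter values; the paired readings below are hypothetical, and the paper's actual fitted model may be nonlinear.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b, usable offline on a
    sensing unit once a and b are computed from paired readings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical paired samples: (raw oximeter reading, reference DO in mg/L)
raw = [10, 20, 30, 40]
ref = [2.1, 4.0, 6.1, 8.0]
a, b = linear_fit(raw, ref)
print(round(a, 3), round(b, 3))  # -> 0.198 0.1
```

Because the formula is two constants, it runs on the microcontroller with no connectivity, which is exactly the offline-deployment property the abstract highlights.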

RevDate: 2024-05-25

Kitsiou A, Sideri M, Pantelelis M, et al (2024)

Specification of Self-Adaptive Privacy-Related Requirements within Cloud Computing Environments (CCE).

Sensors (Basel, Switzerland), 24(10): pii:s24103227.

This paper presents a novel approach to address the challenges of self-adaptive privacy in cloud computing environments (CCE). Under the Cloud-InSPiRe project, the aim is to provide an interdisciplinary framework and a beta-version tool for self-adaptive privacy design, focusing on the integration of technical measures with social needs. To address this, a pilot taxonomy that aligns technical, infrastructural, and social requirements is proposed, informed by two complementary surveys focusing on users' privacy needs and developers' perspectives on self-adaptive privacy. Through the integration of users' social identity-based practices and developers' insights, the taxonomy aims to provide clear guidance for developers, ensuring compliance with regulatory standards and fostering a user-centric approach to self-adaptive privacy design tailored to diverse user groups, ultimately enhancing satisfaction and confidence in cloud services.

RevDate: 2024-05-25
CmpDate: 2024-05-25

Zimmerleiter R, Greibl W, Meininger G, et al (2024)

Sensor for Rapid In-Field Classification of Cannabis Samples Based on Near-Infrared Spectroscopy.

Sensors (Basel, Switzerland), 24(10): pii:s24103188.

A rugged handheld sensor for rapid in-field classification of cannabis samples based on their THC content using ultra-compact near-infrared spectrometer technology is presented. The device is designed for use by the Austrian authorities to discriminate between legal and illegal cannabis samples directly at the place of intervention. Hence, the sensor allows direct measurement through commonly encountered transparent plastic packaging made from polypropylene or polyethylene without any sample preparation. The measurement time is below 20 s. Measured spectral data are evaluated using partial least squares discriminant analysis directly on the device's hardware, eliminating the need for internet connectivity for cloud computing. The classification result is visually indicated directly on the sensor via a colored LED. Validation of the sensor is performed on an independent data set acquired by non-expert users after a short introduction. Despite the challenging setting, the achieved classification accuracy is higher than 80%. Therefore, the handheld sensor has the potential to reduce the number of unnecessarily confiscated legal cannabis samples, which would lead to significant monetary savings for the authorities.

RevDate: 2024-05-25

Lin J, Y Guan (2024)

Load Prediction in Double-Channel Residual Self-Attention Temporal Convolutional Network with Weight Adaptive Updating in Cloud Computing.

Sensors (Basel, Switzerland), 24(10): pii:s24103181.

When resource demand increases and decreases rapidly, container clusters in the cloud environment need to adjust the number of containers in a timely manner to ensure service quality. Resource load prediction is a prominent challenge with the widespread adoption of cloud computing. A novel cloud computing load prediction method, the Double-channel residual Self-attention Temporal convolutional Network with Weight adaptive updating (DSTNW), has been proposed to make the response of the container cluster more rapid and accurate. A Double-channel Temporal Convolution Network model (DTN) has been developed to capture long-term sequence dependencies and enhance feature extraction capabilities when the model handles long load sequences. Double-channel dilated causal convolution has been adopted to replace the single-channel dilated causal convolution in the DTN. A residual temporal self-attention mechanism (SM) has been proposed to improve the performance of the network and focus on features with significant contributions from the DTN. The DTN and SM jointly constitute a double-channel residual self-attention temporal convolutional network (DSTN). In addition, by evaluating the accuracy of single and stacked DSTNs, an adaptive weight strategy has been proposed to assign corresponding weights to the single and stacked DSTNs, respectively. The experimental results highlight that the developed method has outstanding prediction performance for cloud computing in comparison with state-of-the-art methods, achieving an average improvement of 24.16% and 30.48% on the Container dataset and Google dataset, respectively.
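
The building block the abstract names, dilated causal convolution, can be sketched in a few lines. This toy single-channel version (not the authors' implementation) shows the defining property: each output depends only on the current and past inputs, and dilation widens the receptive field over past time steps:

```python
def dilated_causal_conv1d(x, kernel, dilation=1):
    """Causal 1D convolution: out[t] depends only on x[t], x[t-d], x[t-2d], ...
    Taps that reach before the start of the sequence are zero-padded."""
    k = len(kernel)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i in range(k):
            j = t - i * dilation  # tap i*dilation steps into the past
            if j >= 0:
                acc += kernel[i] * x[j]
        out.append(acc)
    return out
```

With kernel `[1.0, -1.0]` the output is a first difference of the load series; stacking layers with dilations 1, 2, 4, ... is what lets a TCN cover long load sequences cheaply.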

RevDate: 2024-05-25

Xie Y, Meng X, Nguyen DT, et al (2024)

A Discussion of Building a Smart SHM Platform for Long-Span Bridge Monitoring.

Sensors (Basel, Switzerland), 24(10): pii:s24103163.

This paper explores the development of a smart Structural Health Monitoring (SHM) platform tailored for long-span bridge monitoring, using the Forth Road Bridge (FRB) as a case study. It discusses the selection of smart sensors available for real-time monitoring, the formulation of an effective data strategy encompassing the collection, processing, management, analysis, and visualization of monitoring data sets to support decision-making, and the establishment of a cost-effective and intelligent sensor network aligned with the objectives set through comprehensive communication with asset owners. Due to the high data rates and dense sensor installations, conventional processing techniques are inadequate for fulfilling monitoring functionalities and ensuring security. Cloud computing emerges as a widely adopted solution for processing and storing vast monitoring data sets. Drawing from the authors' experience in implementing long-span bridge monitoring systems in the UK and China, this paper compares the advantages and limitations of employing cloud computing for long-span bridge monitoring. Furthermore, it explores strategies for developing a robust data strategy and leveraging artificial intelligence (AI) and digital twin (DT) technologies to extract relevant information or patterns regarding asset health conditions. This information is then visualized through the interaction between physical and virtual worlds, facilitating timely and informed decision-making in managing critical road transport infrastructure.

RevDate: 2024-05-24

Peralta T, Menoscal M, Bravo G, et al (2024)

Rock Slope Stability Analysis Using Terrestrial Photogrammetry and Virtual Reality on Ignimbritic Deposits.

Journal of imaging, 10(5): pii:jimaging10050106.

Puerto de Cajas serves as a vital high-altitude passage in Ecuador, connecting the coastal region to the city of Cuenca. The stability of this rocky massif is carefully managed through the assessment of blocks and discontinuities, ensuring safe travel. This study presents a novel approach, employing rapid and cost-effective methods to evaluate an unexplored area within the protected expanse of Cajas. Using terrestrial photogrammetry and strategically positioned geomechanical stations along the slopes, we generated a detailed point cloud capturing elusive terrain features and digitized the slope. Validation of the collected data was achieved by comparing directional data from Cloud Compare software with manual readings taken at control points using a digital compass integrated into a phone. The analysis encompasses three slopes, employing the SMR, Q-slope, and kinematic methodologies. Results from the SMR system closely align with kinematic analysis, indicating satisfactory slope quality. Nonetheless, continued vigilance in stability control remains imperative for ensuring road safety and preserving the site's integrity. Moreover, this research lays the groundwork for the creation of a publicly accessible 3D repository, enhancing visualization capabilities through Google Virtual Reality. This initiative not only aids in replicating the findings but also facilitates access to an augmented reality environment, thereby fostering collaborative research endeavors.

RevDate: 2024-05-22

Li X, Zhao P, Liang M, et al (2024)

Dynamics changes of coastal aquaculture ponds based on the Google Earth Engine in Jiangsu Province, China.

Marine pollution bulletin, 203:116502 pii:S0025-326X(24)00479-X [Epub ahead of print].

Monitoring the spatiotemporal variation in coastal aquaculture zones is essential to providing a scientific basis for formulating scientifically reasonable land management policies. This study uses the Google Earth Engine (GEE) remote sensing cloud platform to extract aquaculture information from Landsat series and Sentinel-2 images for six years between 1984 and 2021 (1984, 1990, 2000, 2010, 2016, and 2021), so as to analyze the changes in the coastal aquaculture pond area, along with its spatiotemporal characteristics, in Jiangsu Province. The overall area of coastal aquaculture ponds in Jiangsu shows an increasing trend in the early period and a decreasing trend in the later period. Over the past 37 years, the area of coastal aquaculture ponds has increased by a total of 54,639.73 ha. This study can provide basic data for the sustainable development of coastal aquaculture in Jiangsu, and a reference for related studies in other regions.
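
The abstract does not state the extraction rule set, but water masking on Landsat/Sentinel-2 imagery commonly starts from the Normalized Difference Water Index, NDWI = (green - NIR) / (green + NIR), with water pixels above a threshold. A toy sketch on flat pixel lists (an assumed method for illustration, not the authors' GEE workflow):

```python
def ndwi(green, nir):
    """Normalized Difference Water Index per pixel; 0.0 where both bands are 0."""
    return [
        (g - n) / (g + n) if (g + n) != 0 else 0.0
        for g, n in zip(green, nir)
    ]

def water_mask(green, nir, threshold=0.0):
    """Water typically reflects more green than near-infrared, so NDWI > 0."""
    return [v > threshold for v in ndwi(green, nir)]

def area_ha(mask, pixel_m=30):
    """Area of masked pixels in hectares (Landsat pixel is about 30 m)."""
    return sum(mask) * pixel_m * pixel_m / 10_000
```

Counting masked pixels per acquisition year and differencing the areas is the basic arithmetic behind a time series like the 54,639.73 ha change reported above.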

RevDate: 2024-05-21

Hulagappa Nebagiri M, L Pillappa Hnumanthappa (2024)

Fractional social optimization-based migration and replica management algorithm for load balancing in distributed file system for cloud computing.

Network (Bristol, England) [Epub ahead of print].

Effective management of data is a major issue in Distributed File Systems (DFS) such as the cloud. This issue is handled by replicating files effectively, which can minimize data access time and elevate data availability. This paper devises a Fractional Social Optimization Algorithm (FSOA) for replica management along with load balancing in DFS in the cloud. Balancing the workload across the DFS is the main objective. Chunks are created by partitioning each file using Deep Fuzzy Clustering (DFC), and the chunks are then assigned to Virtual Machines (VMs) in a round-robin manner. Load balancing is then performed with the proposed FSOA, considering objectives such as resource use, energy consumption, and migration cost. The FSOA is formulated by uniting the Social Optimization Algorithm (SOA) and Fractional Calculus (FC). Replica management in DFS is likewise done using the proposed FSOA by considering the various objectives. The FSOA achieved the smallest load of 0.299, smallest cost of 0.395, smallest energy consumption of 0.510, smallest overhead of 0.358, and smallest throughput of 0.537.
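
The chunk-to-VM placement step is round-robin by construction and easy to sketch. Fixed-size chunking below is a stand-in for the paper's DFC-based chunk creation, and the FSOA optimization itself is omitted:

```python
def make_chunks(data, chunk_size):
    """Split a file's contents into chunks (a stand-in for DFC-based chunking)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def round_robin_assign(chunks, num_vms):
    """Assign chunk i to VM (i mod num_vms), spreading chunks evenly."""
    placement = {vm: [] for vm in range(num_vms)}
    for i, chunk in enumerate(chunks):
        placement[i % num_vms].append(chunk)
    return placement
```

Round-robin gives each VM an equal share of chunks up front; the FSOA then refines the placement against the load, energy, and migration-cost objectives.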

RevDate: 2024-05-21

Qureshi KM, Mewada BG, Kaur S, et al (2024)

Investigating industry 4.0 technologies in logistics 4.0 usage towards sustainable manufacturing supply chain.

Heliyon, 10(10):e30661.

In the era of Industry 4.0 (I4.0), automation and data analysis have undergone significant advancements, greatly impacting production management and operations management. Technologies such as the Internet of Things (IoT), robotics, cloud computing (CC), and big data have played a crucial role in shaping Logistics 4.0 (L4.0) and improving the efficiency of the manufacturing supply chain (SC), ultimately contributing to sustainability goals. The present research investigates the role of I4.0 technologies within the framework of the extended theory of planned behavior (ETPB). The research explores variables including subjective norms, attitude, and perceived behavioral control, which lead to word-of-mouth and purchase intention. By modeling these variables, the study aims to understand the influence of I4.0 technologies on L4.0 to establish a sustainable manufacturing SC. A questionnaire was administered to gather input from small and medium-sized firms (SMEs) in the manufacturing industry. An empirical study, along with partial least squares structural equation modeling (SEM), was conducted to analyze the data. The findings indicate that the use of I4.0 technology in L4.0 influences subjective norms, which subsequently influence attitudes and perceived behavioral control. This, in turn, leads to word-of-mouth and purchase intention. The results provide valuable insights for shippers and logistics service providers, empowering them to enhance their performance and contribute to achieving sustainability objectives. Consequently, this study contributes to promoting sustainability in the manufacturing SC by stimulating the adoption of I4.0 technologies in L4.0.

RevDate: 2024-05-20
CmpDate: 2024-05-20

Vo DH, Vo AT, Dinh CT, et al (2024)

Corporate restructuring and firm performance in Vietnam: The moderating role of digital transformation.

PloS one, 19(5):e0303491 pii:PONE-D-23-40777.

In the digital age, firms should continually innovate and adapt to remain competitive and enhance performance. Innovation and adaptation require firms to take a holistic approach to their corporate structuring to ensure efficiency and effectiveness to stay competitive. This study examines how corporate restructuring impacts firm performance in Vietnam. We then investigate the moderating role of digital transformation in the corporate restructuring-firm performance nexus. We use content analysis, with a focus on particular terms, including "digitalization," "big data," "cloud computing," "blockchain," and "information technology" for 11 years, from 2011 to 2021. The frequency index from these keywords is developed to proxy the digital transformation for the Vietnamese listed firms. A final sample includes 118 Vietnamese listed firms with sufficient data for the analysis using the generalized method of moments (GMM) approach. The results indicate that corporate restructuring, including financial, portfolio, and operational restructuring, has a negative effect on firm performance in Vietnam. Digital transformation also negatively affects firm performance. However, corporate restructuring implemented in conjunction with digital transformation improves the performance of Vietnamese listed firms. These findings largely remain unchanged across various robustness analyses.

RevDate: 2024-05-16

Gupta I, Saxena D, Singh AK, et al (2024)

A Multiple Controlled Toffoli Driven Adaptive Quantum Neural Network Model for Dynamic Workload Prediction in Cloud Environments.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

The key challenges in cloud computing encompass dynamic resource scaling, load balancing, and power consumption. Accurate workload prediction is identified as a crucial strategy to address these challenges. Despite numerous methods proposed to tackle this issue, existing approaches fall short of capturing the high-variance nature of volatile and dynamic cloud workloads. Consequently, this paper introduces a novel Multiple Controlled Toffoli-driven Adaptive Quantum Neural Network (MCT-AQNN) model to establish an empirical solution to complex, elastic, and challenging workload prediction problems by optimizing exploration, adaptation, and exploitation proficiencies through quantum learning. The computational adaptability of quantum computing is ingrained with machine learning algorithms to derive more precise correlations from dynamic and complex workloads. The furnished input data points and hatched neural weights are refitted in the form of qubits, while the controlling effects of Multiple Controlled Toffoli (MCT) gates are applied at the hidden and output layers of the Quantum Neural Network (QNN) to enhance learning capabilities. Complementarily, a Uniformly Adaptive Quantum Machine Learning (UAQL) algorithm is developed to effectively train the QNN. Extensive experiments are conducted and comparisons are performed with state-of-the-art methods using four real-world benchmark datasets. Experimental results show that MCT-AQNN achieves up to 32%-96% higher accuracy than existing approaches.
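
A Multiple Controlled Toffoli gate flips its target qubit only when every control qubit is 1; restricted to computational basis states, this reduces to a classical rule, sketched below (a toy illustration of the gate itself, not the paper's quantum neural network):

```python
def apply_mct(bits, controls, target):
    """Multiple Controlled Toffoli on a computational basis state.
    bits: tuple of 0/1 qubit values; flips bits[target] iff every
    control bit is 1. The gate is its own inverse."""
    bits = list(bits)
    if all(bits[c] == 1 for c in controls):
        bits[target] ^= 1
    return tuple(bits)
```

Applying the gate twice returns the original state, which is the reversibility property quantum circuits require.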

RevDate: 2024-05-15

Koenig Z, Yohannes MT, Nkambule LL, et al (2024)

A harmonized public resource of deeply sequenced diverse human genomes.

Genome research pii:gr.278378.123 [Epub ahead of print].

Underrepresented populations are often excluded from genomic studies due in part to a lack of resources supporting their analyses. The 1000 Genomes Project (1kGP) and Human Genome Diversity Project (HGDP), which have recently been sequenced to high coverage, are valuable genomic resources because of the global diversity they capture and their open data sharing policies. Here, we harmonized a high quality set of 4,094 whole genomes from 80 populations in the HGDP and 1kGP with data from the Genome Aggregation Database (gnomAD) and identified over 153 million high-quality SNVs, indels, and SVs. We performed a detailed ancestry analysis of this cohort, characterizing population structure and patterns of admixture across populations, analyzing site frequency spectra, and measuring variant counts at global and subcontinental levels. We also demonstrate substantial added value from this dataset compared to the prior versions of the component resources, typically combined via liftOver and variant intersection; for example, we catalog millions of new genetic variants, mostly rare, compared to previous releases. In addition to unrestricted individual-level public release, we provide detailed tutorials for conducting many of the most common quality control steps and analyses with these data in a scalable cloud-computing environment and publicly release this new phased joint callset for use as a haplotype resource in phasing and imputation pipelines. This jointly called reference panel will serve as a key resource to support research of diverse ancestry populations.

RevDate: 2024-05-15

Thiriveedhi VK, Krishnaswamy D, Clunie D, et al (2024)

Cloud-based large-scale curation of medical imaging data using AI segmentation.

Research square pii:rs.3.rs-4351526.

Rapid advances in medical imaging Artificial Intelligence (AI) offer unprecedented opportunities for automatic analysis and extraction of data from large imaging collections. Computational demands of such modern AI tools may be difficult to satisfy with the capabilities available on premises. Cloud computing offers the promise of economical access and extreme scalability. Few studies examine the price/performance tradeoffs of using the cloud, in particular for medical image analysis tasks. We investigate the use of cloud-provisioned compute resources for AI-based curation of the National Lung Screening Trial (NLST) Computed Tomography (CT) images available from the National Cancer Institute (NCI) Imaging Data Commons (IDC). We evaluated NCI Cancer Research Data Commons (CRDC) Cloud Resources - Terra (FireCloud) and Seven Bridges-Cancer Genomics Cloud (SB-CGC) platforms - to perform automatic image segmentation with TotalSegmentator and pyradiomics feature extraction for a large cohort containing >126,000 CT volumes from >26,000 patients. Utilizing >21,000 Virtual Machines (VMs) over the course of the computation, we completed analysis in under 9 hours, as compared to the estimated 522 days that would be needed on a single workstation. The total cost of utilizing the cloud for this analysis was $1,011.05. Our contributions include: 1) an evaluation of the numerous tradeoffs towards optimizing the use of cloud resources for large-scale image analysis; 2) CloudSegmentator, an open source reproducible implementation of the developed workflows, which can be reused and extended; 3) practical recommendations for utilizing the cloud for large-scale medical image computing tasks. We also share the results of the analysis: a total of 9,565,554 segmentations of the anatomic structures and the accompanying radiomics features in IDC as of release v18.
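
The reported runtimes imply a speedup of roughly 1,400x; checking that arithmetic, and the per-volume cost, directly:

```python
# Back-of-envelope check of the reported cloud speedup and cost.
# Numbers are taken from the abstract; ">126,000 CT volumes" is used
# as a lower bound, so the per-volume cost is an upper bound.
workstation_days = 522          # estimated single-workstation runtime
cloud_hours = 9                 # reported cloud runtime
speedup = workstation_days * 24 / cloud_hours   # 12,528 h / 9 h = 1392x

total_cost_usd = 1011.05
volumes = 126_000
cost_per_volume = total_cost_usd / volumes      # well under one cent per CT volume
```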

RevDate: 2024-05-14

Philippou J, Yáñez Feliú G, TJ Rudge (2024)

WebCM: A Web-Based Platform for Multiuser Individual-Based Modeling of Multicellular Microbial Populations and Communities.

ACS synthetic biology [Epub ahead of print].

WebCM is a web platform that enables users to create, edit, run, and view individual-based simulations of multicellular microbial populations and communities on a remote compute server. WebCM builds upon the simulation software CellModeller in the back end and provides users with a web-browser-based modeling interface including model editing, execution, and playback. Multiple users can run and manage multiple simulations simultaneously, sharing the host hardware. Since it is based on CellModeller, it can utilize both GPU and CPU parallelization. The user interface provides real-time interactive 3D graphical representations for inspection of simulations at all time points, and the results can be downloaded for detailed offline analysis. It can be run on cloud computing services or on a local server, allowing collaboration within and between laboratories.

RevDate: 2024-05-11

Lin Z, J Liang (2024)

Edge Caching Data Distribution Strategy with Minimum Energy Consumption.

Sensors (Basel, Switzerland), 24(9): pii:s24092898.

In the context of the rapid development of the Internet of Vehicles, virtual reality, automatic driving and the industrial Internet, the terminal devices in the network show explosive growth. As a result, more and more information is generated at the edge of the network, which makes the data throughput increase dramatically in the mobile communication network. As a key technology of the fifth-generation mobile communication network, mobile edge caching, which caches popular data on edge servers deployed at the edge of the network, avoids the data transmission delay of the backhaul link and the occurrence of network congestion. With the growing scale of the network, distributing hot data from cloud servers to edge servers will generate huge energy consumption. To realize the green and sustainable development of the communication industry and reduce the energy consumption of distributing data that needs to be cached in edge servers, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC) in this paper. First, we model and formulate the problem as a constrained optimization problem and then prove its NP-hardness. Subsequently, we design a greedy algorithm with computational complexity of O(n²) to solve the problem approximately. Experimental results show that compared with the distribution strategy of each edge server directly requesting data from the cloud server, the strategy obtained by the algorithm can significantly reduce the energy consumption of data distribution.
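
The baseline above has every edge server pull the data from the cloud. A greedy strategy in the same spirit as the paper's O(n²) algorithm (a sketch under assumed cost structure, not necessarily the authors' exact method) lets each server pull from whichever source is cheapest: the cloud, or a server that already holds the data. All energy costs below are invented:

```python
def greedy_distribution(cloud_cost, neighbor_cost):
    """Greedy edge-caching data distribution.
    cloud_cost[i]: energy for server i to pull the data from the cloud.
    neighbor_cost[i][j]: energy for server i to pull it from server j.
    At each step the cheapest remaining server is served, then the other
    servers' best-known costs are updated: n steps x O(n) work = O(n^2).
    Returns the total distribution energy."""
    n = len(cloud_cost)
    best = list(cloud_cost)      # cheapest known source for each server
    served = [False] * n
    total = 0.0
    for _ in range(n):
        i = min((k for k in range(n) if not served[k]), key=lambda k: best[k])
        served[i] = True
        total += best[i]
        for k in range(n):       # server i now holds the data: maybe cheaper
            if not served[k]:
                best[k] = min(best[k], neighbor_cost[k][i])
    return total
```

With three servers whose cloud pulls cost 10 each but whose peer links cost 1, the greedy plan pulls once from the cloud and twice from peers (total 12), versus 30 for the all-cloud baseline.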

RevDate: 2024-05-11

Emvoliadis A, Vryzas N, Stamatiadou ME, et al (2024)

Multimodal Environmental Sensing Using AI & IoT Solutions: A Cognitive Sound Analysis Perspective.

Sensors (Basel, Switzerland), 24(9): pii:s24092755.

This study presents a novel audio compression technique, tailored for environmental monitoring within multi-modal data processing pipelines. Considering the crucial role that audio data play in environmental evaluations, particularly in contexts with extreme resource limitations, our strategy substantially decreases bit rates to facilitate efficient data transfer and storage. This is accomplished without undermining the accuracy necessary for trustworthy air pollution analysis while simultaneously minimizing processing expenses. More specifically, our approach fuses a Deep-Learning-based model, optimized for edge devices, along with a conventional coding schema for audio compression. Once transmitted to the cloud, the compressed data undergo a decoding process, leveraging vast cloud computing resources for accurate reconstruction and classification. The experimental results indicate that our approach leads to a relatively minor decrease in accuracy, even at notably low bit rates, and demonstrates strong robustness in identifying data from labels not included in our training dataset.
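
The conventional coding schema is not detailed in the abstract. As a generic illustration of lossy low-bit-rate audio coding, here is mu-law companding with uniform quantization (a telephony standard, not necessarily the authors' codec):

```python
import math

MU = 255  # standard mu-law compression constant

def mulaw_encode(x, bits=8):
    """Compress a sample x in [-1, 1] to a signed integer code:
    logarithmic companding followed by uniform quantization."""
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    levels = 2 ** (bits - 1) - 1
    return int(round(y * levels))

def mulaw_decode(code, bits=8):
    """Invert the quantization and companding (lossy: a small error remains)."""
    levels = 2 ** (bits - 1) - 1
    y = code / levels
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

The logarithmic curve spends code levels where small signals live, which is why 8-bit mu-law sounds far better than 8-bit linear quantization at the same bit rate.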

RevDate: 2024-05-11

Hanczewski S, Stasiak M, M Weissenberg (2024)

An Analytical Model of IaaS Architecture for Determining Resource Utilization.

Sensors (Basel, Switzerland), 24(9): pii:s24092758.

Cloud computing has become a major component of the modern IT ecosystem. A key contributor to this has been the development of Infrastructure as a Service (IaaS) architecture, in which users' virtual machines (VMs) are run on the service provider's physical infrastructure, making it possible to become independent of the need to purchase one's own physical machines (PMs). One of the main aspects to consider when designing such systems is achieving the optimal utilization of individual resources, such as processor, RAM, disk, and available bandwidth. In response to these challenges, the authors developed an analytical model (the ARU method) to determine the average utilization levels of the aforementioned resources. The effectiveness of the proposed analytical model was evaluated by comparing the results obtained by utilizing the model with those obtained by conducting a digital simulation of the operation of a cloud system according to the IaaS paradigm. The results show the effectiveness of the model regardless of the structure of the emerging requests, the variability of the capacity of individual resources, and the number of physical machines in the system. This translates into the applicability of the model in the design process of cloud systems.

RevDate: 2024-05-10

Du X, Novoa-Laurentiev J, Plasaek JM, et al (2024)

Enhancing Early Detection of Cognitive Decline in the Elderly: A Comparative Study Utilizing Large Language Models in Clinical Notes.

medRxiv : the preprint server for health sciences pii:2024.04.03.24305298.

BACKGROUND: Large language models (LLMs) have shown promising performance in various healthcare domains, but their effectiveness in identifying specific clinical conditions in real medical records is less explored. This study evaluates LLMs for detecting signs of cognitive decline in real electronic health record (EHR) clinical notes, comparing their error profiles with traditional models. The insights gained will inform strategies for performance enhancement.

METHODS: This study, conducted at Mass General Brigham in Boston, MA, analyzed clinical notes from the four years prior to a 2019 diagnosis of mild cognitive impairment in patients aged 50 and older. We used a random annotated sample of 4,949 note sections, filtered with keywords related to cognitive functions, for model development. For testing, a random annotated sample of 1,996 note sections without keyword filtering was utilized. We developed prompts for two LLMs, Llama 2 and GPT-4, on HIPAA-compliant cloud-computing platforms using multiple approaches (e.g., both hard and soft prompting and error analysis-based instructions) to select the optimal LLM-based method. Baseline models included a hierarchical attention-based neural network and XGBoost. Subsequently, we constructed an ensemble of the three models using a majority vote approach.

RESULTS: GPT-4 demonstrated superior accuracy and efficiency compared to Llama 2, but did not outperform traditional models. The ensemble model outperformed the individual models, achieving a precision of 90.3%, a recall of 94.2%, and an F1-score of 92.2%. Notably, the ensemble model showed a significant improvement in precision, increasing from a range of 70%-79% to above 90%, compared to the best-performing single model. Error analysis revealed that 63 samples were incorrectly predicted by at least one model; however, only 2 cases (3.2%) were mutual errors across all models, indicating diverse error profiles among them.

CONCLUSIONS: LLMs and traditional machine learning models trained using local EHR data exhibited diverse error profiles. The ensemble of these models was found to be complementary, enhancing diagnostic performance. Future research should investigate integrating LLMs with smaller, localized models and incorporating medical data and domain knowledge to enhance performance on specific tasks.
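
The methods above combine three binary classifiers by majority vote and score the result with precision, recall, and F1; both pieces can be sketched minimally (all labels below are invented):

```python
def majority_vote(predictions):
    """Combine per-model binary predictions: a sample is positive
    if at least 2 of the 3 models vote positive."""
    return [1 if sum(votes) >= 2 else 0 for votes in zip(*predictions)]

def precision_recall_f1(y_true, y_pred):
    """Standard binary-classification metrics from true/predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Majority voting helps precisely when the models' error profiles differ, as the error analysis above found: a false positive from one model is usually outvoted by the other two.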

RevDate: 2024-05-08

Kent RM, Barbosa WAS, DJ Gauthier (2024)

Controlling chaos using edge computing hardware.

Nature communications, 15(1):3886.

Machine learning provides a data-driven approach for creating a digital twin of a system - a digital model used to predict the system behavior. Having an accurate digital twin can drive many applications, such as controlling autonomous systems. Often, the size, weight, and power consumption of the digital twin or related controller must be minimized, ideally realized on embedded computing hardware that can operate without a cloud-computing connection. Here, we show that a nonlinear controller based on next-generation reservoir computing can tackle a difficult control problem: controlling a chaotic system to an arbitrary time-dependent state. The model is accurate, yet it is small enough to be evaluated on a field-programmable gate array typically found in embedded devices. Furthermore, the model only requires 25.0 ± 7.0 nJ per evaluation, well below other algorithms, even without systematic power optimization. Our work represents the first step in deploying efficient machine learning algorithms to the computing "edge."
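
Next-generation reservoir computing replaces a recurrent reservoir with features built from time-delayed inputs and their products, followed by a trained linear readout. A minimal sketch of the feature construction that defines the approach (the readout training and control loop are omitted, and this is not the paper's FPGA controller):

```python
def ngrc_features(history, delays=2):
    """Feature vector for a next-generation reservoir computer:
    a constant bias term, the last `delays` samples, and all pairwise
    products of those samples (the quadratic monomials)."""
    taps = history[-delays:]                 # most recent time-delayed inputs
    linear = list(taps)
    quadratic = [taps[i] * taps[j]
                 for i in range(len(taps))
                 for j in range(i, len(taps))]
    return [1.0] + linear + quadratic
```

For d delays the vector has 1 + d + d(d+1)/2 entries, which is why such models stay small enough to evaluate on an FPGA at nanojoule energy budgets.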

RevDate: 2024-05-07

Buchanan BC, Tang Y, Lopez H, et al (2024)

Development of a cloud-based flow rate tool for eNAMPT biomarker detection.

PNAS nexus, 3(5):pgae173.

Increased levels of extracellular nicotinamide phosphoribosyltransferase (eNAMPT) are increasingly recognized as a highly useful biomarker of inflammatory disease and disease severity. In preclinical animal studies, a monoclonal antibody that neutralizes eNAMPT has been generated to successfully reduce the extent of inflammatory cascade activation. Thus, the rapid detection of eNAMPT concentration in plasma samples at the point of care (POC) would be of great utility in assessing the benefit of administering an anti-eNAMPT therapeutic. To determine the feasibility of this POC test, we conducted a particle immunoagglutination assay on a paper microfluidic platform and quantified its extent with a flow rate measurement in less than 1 min. A smartphone and cloud-based Google Colab were used to analyze the flow rates automatically. A horizontal flow model and an immunoagglutination binding model were evaluated to optimize the detection time, sample dilution, and particle concentration. This assay successfully detected eNAMPT in both human whole blood and plasma samples (diluted to 10 and 1%), with the limit of detection of 1-20 pg/mL (equivalent to 0.1-0.2 ng/mL in undiluted blood and plasma) and a linear range of 5-40 pg/mL. Furthermore, the smartphone POC assay distinguished clinical samples with low, mid, and high eNAMPT concentrations. Together, these results indicate this POC assay, which utilizes low-cost materials, time-effective methods, and a straightforward immunoassay (without surface immobilization), may reliably allow rapid determination of eNAMPT blood/plasma levels to aid patient stratification in clinical trials and guide ALT-100 mAb therapeutic decision-making.

RevDate: 2024-05-06

Sankar M S K, Gupta S, Luthra S, et al (2024)

Empowering sustainable manufacturing: Unleashing digital innovation in spool fabrication industries.

Heliyon, 10(9):e29994.

In industrial landscapes, spool fabrication industries play a crucial role in the successful completion of numerous industrial projects by providing prefabricated modules. However, the implementation of digitalized sustainable practices in spool fabrication industries is progressing slowly and is still in its embryonic stage due to several challenges. To implement digitalized sustainable manufacturing (SM), digital technologies such as the Internet of Things, cloud computing, big data analytics, cyber-physical systems, augmented reality, virtual reality, and machine learning are required in the context of sustainability. The scope of the present study entails prioritization of the enablers that promote the implementation of digitalized sustainable practices in spool fabrication industries using the Improved Fuzzy Stepwise Weight Assessment Ratio Analysis (IMF-SWARA) method integrated with the Triangular Fuzzy Bonferroni Mean (TFBM). The enablers are identified through a systematic literature review and are validated by a team of seven experts through a questionnaire survey. The finally identified enablers are then analyzed using the integrated IMF-SWARA and TFBM approach. The results indicate that the most significant enablers are management support, leadership, and governmental policies and regulations to implement digitalized SM. The study provides a comprehensive analysis of digital SM enablers in the spool fabrication industry and offers guidelines for the transformation of conventional systems into digitalized SM practices.

RevDate: 2024-05-05

Mishra A, Kim HS, Kumar R, et al (2024)

Advances in Vibrio-related infection management: an integrated technology approach for aquaculture and human health.

Critical reviews in biotechnology [Epub ahead of print].

Vibrio species pose significant threats worldwide, causing mortalities in aquaculture and infections in humans. With global warming, Vibrio strains and the diseases they cause are emerging worldwide at an increasing rate. Controlling Vibrio species requires effective monitoring, diagnosis, and treatment strategies at the global scale. Despite current efforts based on chemical, biological, and mechanical means, Vibrio control faces limitations due to complicated implementation processes. This review explores the intricacies and challenges of Vibrio-related diseases, including accurate and cost-effective diagnosis and effective control. The global burden of emerging Vibrio species further complicates management strategies. We propose an innovative integrated technology model that harnesses cutting-edge technologies to address these obstacles. The proposed model incorporates advanced tools, such as biosensing technologies, the Internet of Things (IoT), remote sensing devices, cloud computing, and machine learning. This model offers invaluable insights and supports better decision-making by integrating real-time ecological data and biological phenotype signatures. A major advantage of our approach lies in leveraging cloud-based analytics programs, efficiently extracting meaningful information from vast and complex datasets. Collaborating with data and clinical professionals ensures logical and customized solutions tailored to each unique situation. Aquaculture biotechnology that prioritizes sustainability may have a large impact on human health and the seafood industry. Our review underscores the importance of adopting this model, revolutionizing the prognosis and management of Vibrio-related infections, even under complex circumstances. Furthermore, this model has promising implications for aquaculture and public health, addressing the United Nations Sustainable Development Goals and their development agenda.

RevDate: 2024-05-02

Han Y, Wei Z, G Huang (2024)

An imbalance data quality monitoring based on SMOTE-XGBOOST supported by edge computing.

Scientific reports, 14(1):10151.

Product assembly involves extensive production data characterized by high dimensionality, multiple samples, and class imbalance. This article proposes an edge computing-based framework for monitoring product assembly quality in the industrial Internet of Things. Edge computing relieves the pressure of aggregating enormous amounts of data to a cloud center for processing. To address the problem of data imbalance, we compared five sampling methods: Borderline SMOTE, Random Downsampling, Random Upsampling, SMOTE, and ADASYN. Finally, the quality monitoring model SMOTE-XGBoost is proposed, and the hyperparameters of the model are optimized using the Grid Search method. The proposed framework and quality control methodology were applied to an assembly line of IGBT modules for a traction system, and the validity of the model was experimentally verified.
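SMOTE balances classes by synthesizing minority samples along line segments between a minority point and one of its nearest minority neighbours. A minimal pure-Python sketch of that idea follows; it is not the library implementation the authors presumably used, and the XGBoost classifier and Grid Search tuning are omitted.

```python
import random

def smote(minority, n_new, k=2, rng=random.Random(0)):
    """Minimal SMOTE: create n_new synthetic minority samples by interpolating
    between a random minority sample and one of its k nearest minority neighbours."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb)))
    return synthetic

majority = [(0.0, 0.0)] * 95                  # imbalanced toy data: 95 vs 5
minority = [(5.0, 5.0), (6.0, 5.0), (5.5, 6.0), (6.5, 6.5), (5.2, 5.8)]
new = smote(minority, n_new=len(majority) - len(minority))
print(len(minority) + len(new), len(majority))   # classes now balanced
```

After oversampling, the balanced set would be fed to the classifier; synthetic points always fall on segments between real minority points, which is what distinguishes SMOTE from plain random upsampling.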

RevDate: 2024-05-02

Peccoud S, Berezin CT, Hernandez SI, et al (2024)

PlasCAT: Plasmid cloud assembly tool.

Bioinformatics (Oxford, England) pii:7663467 [Epub ahead of print].

SUMMARY: PlasCAT is an easy-to-use cloud-based bioinformatics tool that enables de novo plasmid sequence assembly from raw sequencing data. Non-technical users can now assemble sequences from long reads and short reads without ever touching a line of code. PlasCAT uses high-performance computing servers to reduce run times on assemblies and deliver results faster.

PlasCAT is freely available on the web at https://sequencing.genofab.com. The assembly pipeline source code and server code are available for download at https://bitbucket.org/genofabinc/workspace/projects/PLASCAT. Click the Cancel button to access the source code without authenticating. The web servers are implemented in React.js and Python, and all major browsers are supported.

RevDate: 2024-05-02

Blindenbach J, Kang J, Hong S, et al (2024)

Ultra-secure storage and analysis of genetic data for the advancement of precision medicine.

bioRxiv : the preprint server for biology pii:2024.04.16.589793.

Cloud computing provides the opportunity to store the ever-growing genotype-phenotype data sets needed to achieve the full potential of precision medicine. However, due to the sensitive nature of this data and the patchwork of data privacy laws across states and countries, additional security protections are proving necessary to ensure data privacy and security. Here we present SQUiD, a secure queryable database for storing and analyzing genotype-phenotype data. With SQUiD, genotype-phenotype data can be stored in a low-security, low-cost public cloud in the encrypted form, which researchers can securely query without the public cloud ever being able to decrypt the data. We demonstrate the usability of SQUiD by replicating various commonly used calculations such as polygenic risk scores, cohort creation for GWAS, MAF filtering, and patient similarity analysis both on synthetic and UK Biobank data. Our work represents a new and scalable platform enabling the realization of precision medicine without security and privacy concerns.
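One workload replicated on SQUiD is the polygenic risk score, which is simply a weighted sum of per-variant risk-allele counts; SQUiD's contribution is evaluating such queries homomorphically on encrypted data, which is not reproduced here. A plaintext sketch of the score itself, with an invented five-variant panel:

```python
def polygenic_risk_score(genotypes, effect_sizes):
    """PRS as the weighted sum of risk-allele counts (0, 1, or 2 per variant)
    times the per-variant GWAS effect sizes (betas)."""
    assert len(genotypes) == len(effect_sizes)
    return sum(g * beta for g, beta in zip(genotypes, effect_sizes))

# Hypothetical panel: allele counts for one individual and invented betas.
genotypes = [0, 1, 2, 1, 0]
effect_sizes = [0.12, -0.05, 0.30, 0.08, 0.20]
print(round(polygenic_risk_score(genotypes, effect_sizes), 2))  # 0.63
```

In SQUiD both vectors would remain encrypted in the public cloud while the same dot product is computed, so the server never sees genotypes or scores.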

RevDate: 2024-04-29

Drmota P, Nadlinger DP, Main D, et al (2024)

Verifiable Blind Quantum Computing with Trapped Ions and Single Photons.

Physical review letters, 132(15):150604.

We report the first hybrid matter-photon implementation of verifiable blind quantum computing. We use a trapped-ion quantum server and a client-side photonic detection system networked via a fiber-optic quantum link. The availability of memory qubits and deterministic entangling gates enables interactive protocols without postselection, key requirements for any scalable blind server that previous realizations could not provide. We quantify the privacy at ≲0.03 leaked classical bits per qubit. This experiment demonstrates a path to fully verified quantum computing in the cloud.

RevDate: 2024-04-29

Schweitzer M, Ostheimer P, Lins A, et al (2024)

Transforming Tele-Ophthalmology: Utilizing Cloud Computing for Remote Eye Care.

Studies in health technology and informatics, 313:215-220.

BACKGROUND: Tele-ophthalmology is gaining recognition for its role in improving eye care accessibility via cloud-based solutions. The Google Cloud Platform (GCP) Healthcare API enables secure and efficient management of medical image data such as high-resolution ophthalmic images.

OBJECTIVES: This study investigates cloud-based solutions' effectiveness in tele-ophthalmology, with a focus on GCP's role in data management, annotation, and integration for a novel imaging device.

METHODS: Leveraging the Integrating the Healthcare Enterprise (IHE) Eye Care profile, the cloud platform was utilized as a PACS and integrated with the Open Health Imaging Foundation (OHIF) Viewer for image display and annotation capabilities for ophthalmic images.

RESULTS: The setup of a GCP DICOM storage and the OHIF Viewer facilitated remote image data analytics. Prolonged loading times and relatively large individual image file sizes indicated system challenges.

CONCLUSION: Cloud platforms have the potential to ease distributed data analytics, as needed for efficient tele-ophthalmology scenarios in research and clinical practice, by providing scalable and secure image management solutions.

RevDate: 2024-04-29

Iorga M, K Scarfone (2016)

Using a Capability Oriented Methodology to Build Your Cloud Ecosystem.

IEEE cloud computing, 3(2):.

Organizations often struggle to capture the necessary functional capabilities for each cloud-based solution adopted for their information systems. Identifying, defining, selecting, and prioritizing these functional capabilities and the security components that implement and enforce them is surprisingly challenging. This article explains recent developments by the National Institute of Standards and Technology (NIST) in addressing these challenges. The article focuses on the capability oriented methodology for orchestrating a secure cloud ecosystem proposed as part of the NIST Cloud Computing Security Reference Architecture. The methodology recognizes that risk may vary for cloud Actors within a single ecosystem, so it takes a risk-based approach to functional capabilities. The result is an assessment of which cloud Actor is responsible for implementing each security component and how implementation should be prioritized. A cloud Actor, especially a cloud Consumer, that follows the methodology can more easily make well-informed decisions regarding their cloud ecosystems.

RevDate: 2024-04-27

van der Laan EE, Hazenberg P, AH Weerts (2024)

Simulation of long-term storage dynamics of headwater reservoirs across the globe using public cloud computing infrastructure.

The Science of the total environment pii:S0048-9697(24)02825-0 [Epub ahead of print].

Reservoirs play an important role in water security, flood risk, hydropower, and the natural flow regime. This study derives a novel dataset with a long-term daily water balance (reservoir volume, inflow, outflow, evaporation, and precipitation) of headwater reservoirs and their storage dynamics across the globe. The data are generated using public cloud computing infrastructure and the high-resolution distributed hydrological model wflow_sbm. Model results are validated against earth-observed surface water area and in-situ measured reservoir volume and show overall good model performance. Simulated headwater reservoir storage indicates that 19.4-24.4% of the reservoirs had a significant decrease in storage, a change mainly driven by a decrease in reservoir inflow and an increase in evaporation. Deployment in a Kubernetes cloud environment with reproducible workflows shows that these kinds of simulations and analyses can be conducted in less than a day.
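Underneath the distributed hydrological model, the storage dynamics are bookkept by a daily water balance. A toy sketch of that balance with invented numbers (the real wflow_sbm model adds spatially distributed hydrology, operating rules, and much more):

```python
def simulate_storage(v0, v_max, inflow, outflow, evap, precip):
    """Daily reservoir water balance: V[t+1] = V[t] + Q_in - Q_out - E + P,
    clipped to [0, v_max]. All terms share the same volume-per-day units."""
    v = v0
    series = []
    for qin, qout, e, p in zip(inflow, outflow, evap, precip):
        v = min(max(v + qin - qout - e + p, 0.0), v_max)
        series.append(v)
    return series

# Illustrative 5-day record (units arbitrary, e.g. million m^3 per day):
storage = simulate_storage(
    v0=50.0, v_max=100.0,
    inflow=[4.0, 3.5, 3.0, 2.5, 2.0],
    outflow=[3.0, 3.0, 3.0, 3.0, 3.0],
    evap=[0.4, 0.5, 0.5, 0.6, 0.6],
    precip=[0.1, 0.0, 0.2, 0.0, 0.0],
)
print([round(v, 1) for v in storage])
```

A declining inflow with steady outflow and rising evaporation drains storage over time, which is the signature the study reports for roughly a fifth of headwater reservoirs.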

RevDate: 2024-04-27

Abdullahi I, Longo S, M Samie (2024)

Towards a Distributed Digital Twin Framework for Predictive Maintenance in Industrial Internet of Things (IIoT).

Sensors (Basel, Switzerland), 24(8): pii:s24082663.

Digital twin (DT) technology sits at the intersection of the Internet of Things, cloud computing, and software engineering, providing a suitable tool for replicating physical objects in the digital space. This study uses a wind turbine case study, as a subdomain of the Industrial Internet of Things (IIoT), to showcase an architecture for implementing a distributed digital twin in which all important aspects of a predictive maintenance solution run on a fog computing paradigm. The typical predictive maintenance DT is improved to offer better asset utilization and management through real-time condition monitoring, predictive analytics, and health management of selected components of wind turbines in a wind farm. DTs can facilitate the implementation of asset management in manufacturing systems through predictive maintenance solutions leveraged by machine learning (ML). With DTs, a solution architecture can use data and software to implement asset management solutions such as condition monitoring and predictive maintenance, drawing on sensor data acquired from physical objects and computing capabilities in the digital space. While DT offers a good solution, it is an emerging technology that could be improved with better standards, architectural frameworks, and implementation methodologies. Researchers in both academia and industry have showcased DT implementations with different levels of success. However, DTs still lack standards and architectures that offer efficient predictive maintenance solutions with real-time sensor data and intelligent DT capabilities, and an appropriate feedback mechanism is also needed to improve asset management operations.

RevDate: 2024-04-26
CmpDate: 2024-04-26

Hsiao J, Deng LC, Moroz LL, et al (2024)

Ocean to Tree: Leveraging Single-Molecule RNA-Seq to Repair Genome Gene Models and Improve Phylogenomic Analysis of Gene and Species Evolution.

Methods in molecular biology (Clifton, N.J.), 2757:461-490.

Understanding gene evolution across genomes and organisms, including ctenophores, can provide unexpected biological insights. It enables powerful integrative approaches that leverage sequence diversity to advance biomedicine. Sequencing and bioinformatic tools can be inexpensive and user-friendly, but numerous options and coding can intimidate new users. Distinct challenges exist in working with data from diverse species but may go unrecognized by researchers accustomed to gold-standard genomes. Here, we provide a high-level workflow and detailed pipeline to enable animal collection, single-molecule sequencing, and phylogenomic analysis of gene and species evolution. As a demonstration, we focus on (1) PacBio RNA-seq of the genome-sequenced ctenophore Mnemiopsis leidyi, (2) diversity and evolution of the mechanosensitive ion channel Piezo in genetic models and basal-branching animals, and (3) associated challenges and solutions to working with diverse species and genomes, including gene model updating and repair using single-molecule RNA-seq. We provide a Python Jupyter Notebook version of our pipeline (GitHub Repository: Ctenophore-Ocean-To-Tree-2023 https://github.com/000generic/Ctenophore-Ocean-To-Tree-2023) that can be run for free in the Google Colab cloud to replicate our findings or modified for specific or greater use. Our protocol enables users to design new sequencing projects in ctenophores, marine invertebrates, or other novel organisms. It provides a simple, comprehensive platform that can ease new user entry into running their evolutionary sequence analyses.

RevDate: 2024-04-26

El Jaouhari A, Arif J, Samadhiya A, et al (2024)

Exploring the application of ICTs in decarbonizing the agriculture supply chain: A literature review and research agenda.

Heliyon, 10(8):e29564 pii:S2405-8440(24)05595-6.

The contemporary agricultural supply chain necessitates the integration of information and communication technologies to effectively mitigate the multifaceted challenges posed by climate change and rising global demand for food products. Furthermore, recent developments in information and communication technologies, such as blockchain, big data analytics, the internet of things, artificial intelligence, and cloud computing, have made this transformation possible. Each of these technologies plays a particular role in enabling the agriculture supply chain ecosystem to be intelligent enough to handle today's challenges. Thus, this paper reviews the crucial information and communication technologies-enabled agriculture supply chains to understand their potential uses and contemporary developments. The review is supported by 57 research papers from the Scopus database. The applications of the reviewed technologies in the agriculture supply chain are analyzed across five research areas: food safety and traceability, security and information system management, food waste, supervision and tracking, and agricultural businesses and decision-making, along with other applications not explicitly related to the agriculture supply chain. The study also emphasizes how information and communication technologies can help agriculture supply chains and promote agriculture supply chain decarbonization. An information and communication technologies application framework for a decarbonized agriculture supply chain is suggested based on the research's findings. The framework identifies the contribution of information and communication technologies to decision-making in agriculture supply chains. The review also offers guidelines to academics, policymakers, and practitioners on managing agriculture supply chains successfully for enhanced agricultural productivity and decarbonization.

RevDate: 2024-04-26

Khazali M, W Lechner (2023)

Scalable quantum processors empowered by the Fermi scattering of Rydberg electrons.

Communications physics, 6(1):57.

Quantum computing promises exponential speed-up compared to its classical counterpart. While the neutral atom processors are the pioneering platform in terms of scalability, the dipolar Rydberg gates impose the main bottlenecks on the scaling of these devices. This article presents an alternative scheme for neutral atom quantum processing, based on the Fermi scattering of a Rydberg electron from ground-state atoms in spin-dependent lattice geometries. Instead of relying on Rydberg pair-potentials, the interaction is controlled by engineering the electron cloud of a sole Rydberg atom. The present scheme addresses the scaling obstacles in Rydberg processors by exponentially suppressing the population of short-lived states and by operating in ultra-dense atomic lattices. The restoring forces in molecule type Rydberg-Fermi potential preserve the trapping over a long interaction period. Furthermore, the proposed scheme mitigates different competing infidelity criteria, eliminates unwanted cross-talks, and significantly suppresses the operation depth in running complicated quantum algorithms.

RevDate: 2024-04-25

Ullah R, Yahya M, Mostarda L, et al (2024)

Intelligent decision making for energy efficient fog nodes selection and smart switching in the IOT: a machine learning approach.

PeerJ. Computer science, 10:e1833.

With the emergence of Internet of Things (IoT) technology, a huge amount of data is generated, which is costly to transfer to cloud data centers in terms of security, bandwidth, and latency. Fog computing is an efficient paradigm for locally processing and manipulating IoT-generated data. It is difficult, however, to configure fog nodes to provide all of the services required by end devices because of their static configuration and limited processing and storage capacities. To enhance fog nodes' capabilities, it is essential to reconfigure them to accommodate a broader range and variety of hosted services. In this study, we focus on the placement of fog services and their dynamic reconfiguration in response to end-device requests. Given its success and popularity in the IoT era, the Decision Tree (DT) machine learning model is implemented to predict the occurrence of requests and events in advance, enabling a fog node to anticipate requests for a specific service and reconfigure itself accordingly. The performance of the proposed model is evaluated in terms of throughput, energy consumption, and dynamic fog node smart switching. The simulation results demonstrate a notable increase in fog node hit ratios, scaling up to 99% for the majority of services, with a substantial reduction in miss ratios. Furthermore, energy consumption is reduced by over 50% compared to a static node.
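The paper trains a Decision Tree on historical request logs so a fog node can preconfigure the services it is likely to need. As a much simpler stand-in for that model (a per-hour majority vote rather than a real decision tree; the log format and service names are invented), the idea can be sketched as:

```python
from collections import Counter, defaultdict

class RequestPredictor:
    """Toy stand-in for the paper's Decision Tree: learns the most likely
    service per hour-of-day so a fog node can preconfigure it in advance."""
    def __init__(self):
        self.by_hour = defaultdict(Counter)

    def fit(self, log):
        for hour, service in log:
            self.by_hour[hour][service] += 1

    def predict(self, hour):
        counts = self.by_hour.get(hour)
        return counts.most_common(1)[0][0] if counts else None

log = [(8, "video"), (8, "video"), (8, "sensor"),
       (20, "sensor"), (20, "sensor"), (20, "video")]
p = RequestPredictor()
p.fit(log)
print(p.predict(8), p.predict(20))  # services to preconfigure for each period
```

A real decision tree would additionally split on features such as device type or load, but the operational loop is the same: predict the upcoming service, reconfigure the node before the request arrives, and count a hit when the prediction matches.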

RevDate: 2024-04-25

Cambronero ME, Martínez MA, Llana L, et al (2024)

Towards a GDPR-compliant cloud architecture with data privacy controlled through sticky policies.

PeerJ. Computer science, 10:e1898.

Data privacy is one of the biggest challenges facing system architects at the system design stage. Especially when certain laws, such as the General Data Protection Regulation (GDPR), have to be complied with by cloud environments. In this article, we want to help cloud providers comply with the GDPR by proposing a GDPR-compliant cloud architecture. To do this, we use model-driven engineering techniques to design cloud architecture and analyze cloud interactions. In particular, we develop a complete framework, called MDCT, which includes a Unified Modeling Language profile that allows us to define specific cloud scenarios and profile validation to ensure that certain required properties are met. The validation process is implemented through the Object Constraint Language (OCL) rules, which allow us to describe the constraints in these models. To comply with many GDPR articles, the proposed cloud architecture considers data privacy and data tracking, enabling safe and secure data management and tracking in the context of the cloud. For this purpose, sticky policies associated with the data are incorporated to define permission for third parties to access the data and track instances of data access. As a result, a cloud architecture designed with MDCT contains a set of OCL rules to validate it as a GDPR-compliant cloud architecture. Our tool models key GDPR points such as user consent/withdrawal, the purpose of access, and data transparency and auditing, and considers data privacy and data tracking with the help of sticky policies.
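A sticky policy travels with the data it governs and is evaluated at every access, which also yields the audit trail the GDPR requires. A minimal conceptual sketch follows; the field names and policy structure are invented, and MDCT itself operates at the UML/OCL modeling level rather than in executable Python.

```python
import datetime

def make_record(payload, allowed_parties, purpose, expires):
    """Attach a sticky policy (allowed parties, purpose, expiry) to the data."""
    return {"payload": payload,
            "policy": {"allowed": set(allowed_parties),
                       "purpose": purpose,
                       "expires": expires},
            "audit": []}

def access(record, party, purpose, now):
    """Grant access only if the sticky policy permits it; log every attempt."""
    pol = record["policy"]
    ok = (party in pol["allowed"] and purpose == pol["purpose"]
          and now <= pol["expires"])
    record["audit"].append((party, purpose, now.isoformat(), ok))
    return record["payload"] if ok else None

rec = make_record("patient-data", {"hospital-a"}, "treatment",
                  datetime.datetime(2025, 1, 1))
now = datetime.datetime(2024, 6, 1)
print(access(rec, "hospital-a", "treatment", now))   # granted
print(access(rec, "ad-broker", "marketing", now))    # denied -> None
print(len(rec["audit"]))                             # both attempts tracked
```

The audit list mirrors the data-tracking requirement: every access attempt, granted or denied, is recorded alongside the consenting purpose, which is what an OCL validation rule would check in the MDCT models.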

RevDate: 2024-04-25

Hassan SR, Rehman AU, Alsharabi N, et al (2024)

Design of load-aware resource allocation for heterogeneous fog computing systems.

PeerJ. Computer science, 10:e1986.

The execution of delay-aware applications can be effectively handled by various computing paradigms, including the fog computing, edge computing, and cloudlets. Cloud computing offers services in a centralized way through a cloud server. On the contrary, the fog computing paradigm offers services in a dispersed manner providing services and computational facilities near the end devices. Due to the distributed provision of resources by the fog paradigm, this architecture is suitable for large-scale implementation of applications. Furthermore, fog computing offers a reduction in delay and network load as compared to cloud architecture. Resource distribution and load balancing are always important tasks in deploying efficient systems. In this research, we have proposed heuristic-based approach that achieves a reduction in network consumption and delays by efficiently utilizing fog resources according to the load generated by the clusters of edge nodes. The proposed algorithm considers the magnitude of data produced at the edge clusters while allocating the fog resources. The results of the evaluations performed on different scales confirm the efficacy of the proposed approach in achieving optimal performance.
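The proposed heuristic allocates fog resources according to the load generated by clusters of edge nodes. The abstract does not spell out the exact algorithm, so the following is one plausible greedy sketch (heaviest cluster first, most-free fog node wins), with invented loads and capacities:

```python
def allocate(clusters, fog_capacity):
    """Greedy load-aware heuristic: assign each edge cluster (heaviest first)
    to the fog node with the most remaining capacity. Returns {cluster: node}."""
    remaining = dict(fog_capacity)
    assignment = {}
    for name, load in sorted(clusters.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)   # most-free node
        if remaining[node] < load:
            raise ValueError(f"no fog node can host cluster {name}")
        remaining[node] -= load
        assignment[name] = node
    return assignment

clusters = {"c1": 30, "c2": 50, "c3": 20, "c4": 40}   # data load per cluster
fog = {"f1": 80, "f2": 70}                            # fog node capacities
plan = allocate(clusters, fog)
print(plan)
```

Sorting heaviest-first is the classic trick for greedy packing: placing the large loads before the small ones leaves the leftover capacity for clusters that fit anywhere, which tends to balance nodes and reduce the delay and network consumption the paper targets.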

RevDate: 2024-04-24

Wei W, Xia X, Li T, et al (2024)

Shaoxia: a web-based interactive analysis platform for single cell RNA sequencing data.

BMC genomics, 25(1):402.

BACKGROUND: In recent years, Single-cell RNA sequencing (scRNA-seq) is increasingly accessible to researchers of many fields. However, interpreting its data demands proficiency in multiple programming languages and bioinformatic skills, which limited researchers, without such expertise, exploring information from scRNA-seq data. Therefore, there is a tremendous need to develop easy-to-use software, covering all the aspects of scRNA-seq data analysis.

RESULTS: We proposed a clear analysis framework for scRNA-seq data, which emphasized the fundamental and crucial roles of cell identity annotation, abstracting the analysis process into three stages: upstream analysis, cell annotation and downstream analysis. The framework can equip researchers with a comprehensive understanding of the analysis procedure and facilitate effective data interpretation. Leveraging the developed framework, we engineered Shaoxia, an analysis platform designed to democratize scRNA-seq analysis by accelerating processing through high-performance computing capabilities and offering a user-friendly interface accessible even to wet-lab researchers without programming expertise.

CONCLUSION: Shaoxia stands as a powerful and user-friendly open-source software for automated scRNA-seq analysis, offering comprehensive functionality for streamlined functional genomics studies. Shaoxia is freely accessible at http://www.shaoxia.cloud, and its source code is publicly available at https://github.com/WiedenWei/shaoxia.

RevDate: 2024-04-20

Abbas Q, Alyas T, Alghamdi T, et al (2024)

Redefining governance: a critical analysis of sustainability transformation in e-governance.

Frontiers in big data, 7:1349116.

With the rapid growth of information and communication technologies, governments worldwide are embracing digital transformation to enhance service delivery and governance practices. In the rapidly evolving landscape of information technology (IT), secure data management stands as a cornerstone for organizations aiming to safeguard sensitive information. Robust data modeling techniques are pivotal in structuring and organizing data, ensuring its integrity, and facilitating efficient retrieval and analysis. As the world increasingly emphasizes sustainability, integrating eco-friendly practices into data management processes becomes imperative. This study focuses on the specific context of Pakistan and investigates the potential of cloud computing in advancing e-governance capabilities. Cloud computing offers scalability, cost efficiency, and enhanced data security, making it an ideal technology for digital transformation. Through an extensive literature review, analysis of case studies, and interviews with stakeholders, this research explores the current state of e-governance in Pakistan, identifies the challenges faced, and proposes a framework for leveraging cloud computing to overcome these challenges. The findings reveal that cloud computing can significantly enhance the accessibility, scalability, and cost-effectiveness of e-governance services, thereby improving citizen engagement and satisfaction. This study provides valuable insights for policymakers, government agencies, and researchers interested in the digital transformation of e-governance in Pakistan and offers a roadmap for leveraging cloud computing technologies in similar contexts. The findings contribute to the growing body of knowledge on e-governance and cloud computing, supporting the advancement of digital governance practices globally. This research identifies monitoring parameters necessary to establish a sustainable e-governance system incorporating big data and cloud computing. The proposed framework, Monitoring and Assessment System using Cloud (MASC), is validated through secondary data analysis and successfully fulfills the research objectives. By leveraging big data and cloud computing, governments can revolutionize their digital governance practices, driving transformative changes and enhancing efficiency and effectiveness in public administration.

RevDate: 2024-04-18

Wang TH, Kao CC, TH Chang (2024)

Ensemble Machine Learning for Predicting 90-Day Outcomes and Analyzing Risk Factors in Acute Kidney Injury Requiring Dialysis.

Journal of multidisciplinary healthcare, 17:1589-1602.

PURPOSE: Our objectives were to (1) employ ensemble machine learning algorithms utilizing real-world clinical data to predict 90-day prognosis, including dialysis dependence and mortality, following the first hospitalized dialysis and (2) identify the significant factors associated with overall outcomes.

PATIENTS AND METHODS: We identified hospitalized patients with Acute kidney injury requiring dialysis (AKI-D) from a dataset of the Taipei Medical University Clinical Research Database (TMUCRD) from January 2008 to December 2020. The extracted data comprise demographics, comorbidities, medications, and laboratory parameters. Ensemble machine learning models were developed utilizing real-world clinical data through the Google Cloud Platform.

RESULTS: The study analyzed 1080 patients in the dialysis-dependent module, out of which 616 received regular dialysis after 90 days. Our ensemble model, consisting of 25 feedforward neural network models, demonstrated the best performance with an AUROC of 0.846. We identified the baseline creatinine value, assessed at least 90 days before the initial dialysis, as the most crucial factor. We selected 2358 patients, 984 of whom were deceased after 90 days, for the survival module. The ensemble model, comprising 15 feedforward neural network models and 10 gradient-boosted decision tree models, achieved superior performance with an AUROC of 0.865. The pre-dialysis creatinine value, tested within 90 days prior to the initial dialysis, was identified as the most significant factor.

CONCLUSION: Ensemble machine learning models outperform the logistic regression models reported in the existing literature in predicting outcomes of AKI-D. Our study, which includes a large sample size from three different hospitals, supports the significance of the creatinine value tested before the first hospitalized dialysis in determining overall prognosis. Healthcare providers could benefit from utilizing our validated prediction model to improve clinical decision-making and enhance patient care for this high-risk population.
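The ensembles here soft-vote over multiple feedforward networks (and gradient-boosted trees) and are scored by AUROC. Both ingredients are easy to sketch in pure Python; the scores and labels below are invented and have nothing to do with the study's Google Cloud models or patient data.

```python
def ensemble_prob(model_probs):
    """Soft-voting ensemble: average each model's predicted probability."""
    n = len(model_probs)
    return [sum(ps) / n for ps in zip(*model_probs)]

def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    positive is scored above a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Three hypothetical models scoring six patients (1 = dialysis-dependent):
labels = [1, 1, 1, 0, 0, 0]
probs = [[0.9, 0.6, 0.4, 0.5, 0.2, 0.1],
         [0.8, 0.7, 0.3, 0.4, 0.3, 0.2],
         [0.7, 0.8, 0.35, 0.3, 0.4, 0.1]]
avg = ensemble_prob(probs)
print(round(auroc(labels, avg), 3))
```

Averaging smooths out individual models' mistakes, which is why the 25-model and 25-model-mixed ensembles in the study beat any single learner.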

RevDate: 2024-04-18

Fujinami H, Kuraishi S, Teramoto A, et al (2024)

Development of a novel endoscopic hemostasis-assisted navigation AI system in the standardization of post-ESD coagulation.

Endoscopy international open, 12(4):E520-E525.

Background and study aims While gastric endoscopic submucosal dissection (ESD) has become a treatment with fewer complications, delayed bleeding remains a challenge. Post-ESD coagulation (PEC) is performed to prevent delayed bleeding. We therefore developed an artificial intelligence (AI) system to detect vessels that require PEC in real time. Materials and methods Training data were extracted from 153 gastric ESD videos with sufficient images taken during second-look endoscopy (SLE) and annotated as follows: (1) vessels that bled during SLE without PEC; (2) vessels that did not bleed during SLE with PEC; and (3) vessels that did not bleed even without PEC. The training model was created using Google Cloud Vertex AI, and a program was created to display the vessels requiring PEC in real time using a bounding box. The AI was evaluated on 12 unseen test videos, including four cases that required additional coagulation during SLE. Results Validation on the test videos indicated that 109 vessels on the ulcers required cauterization. Of these, 80 vessels (73.4%) were correctly determined as not requiring additional treatment. However, 25 vessels (22.9%) that did not require PEC were overestimated as requiring it. In the four videos that required additional coagulation during SLE, the AI detected all bleeding vessels. Conclusions The effectiveness and safety of this endoscopic treatment-assisted AI system, which identifies visible vessels requiring PEC, should be confirmed in future studies.

RevDate: 2024-04-19
CmpDate: 2024-04-18

Frimpong T, Hayfron Acquah JB, Missah YM, et al (2024)

Securing cloud data using secret key 4 optimization algorithm (SK4OA) with a non-linearity run time trend.

PloS one, 19(4):e0301760.

Cloud computing refers to the on-demand availability of computer system resources, primarily data storage and processing power, without the customer's direct personal involvement. Cloud computing has been adopted widely by organizations due to benefits such as cost savings, resource pooling, broad network access, and ease of management; nonetheless, security has been a major concern. Researchers have proposed several cryptographic methods to offer cloud data security; however, their execution times are linear and longer. A Security Key 4 Optimization Algorithm (SK4OA) with a non-linear run time is proposed in this paper. The secret key of SK4OA, rather than the size of the data, determines the run time; the algorithm can therefore transmit large volumes of data with minimal bandwidth and resist security attacks such as brute force, since its execution timings are unpredictable. A dataset from Kaggle was used to determine the algorithm's mean and standard deviation after thirty (30) executions. Data sizes of 3 KB, 5 KB, 8 KB, 12 KB, and 16 KB were used in this study. An empirical analysis was performed against RC4, Salsa20, and ChaCha20 based on encryption time, decryption time, throughput, and memory utilization. The analysis showed that SK4OA generated the lowest mean non-linear run time of 5.545±2.785 when 16 KB of data was executed. Additionally, SK4OA's standard deviation was greater, indicating that the observed data varied far from the mean, whereas RC4, Salsa20, and ChaCha20 showed smaller standard deviations, making them more clustered around the mean and resulting in predictable run times.
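The study's measurement protocol (running each cipher thirty times over fixed payload sizes and reporting the mean and standard deviation of execution time) can be sketched generically. The cipher below is a toy XOR stream used only as a timing placeholder; it is neither SK4OA, whose details are in the paper, nor secure.

```python
import statistics
import time

def xor_stream(key, data):
    """Toy XOR 'cipher' used only as a stand-in workload for timing.
    It is NOT secure and is NOT the paper's SK4OA algorithm."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def time_runs(fn, runs=30):
    """Mean and standard deviation of wall-clock time over repeated runs,
    mirroring the paper's 30-execution protocol."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

data = bytes(16 * 1024)          # 16 KB payload, the largest size in the study
key = b"hypothetical-key"
mean, sd = time_runs(lambda: xor_stream(key, data))
print(f"mean={mean:.6f}s stdev={sd:.6f}s")
```

A key-dependent (non-linear) run time would show up in this harness as a large standard deviation relative to the mean, which is exactly the SK4OA signature the paper reports.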

RevDate: 2024-04-15

Ocampo AF, Fida MR, Elmokashfi A, et al (2024)

Assessing the Cloud-RAN in the Linux Kernel: Sharing Computing and Network Resources.

Sensors (Basel, Switzerland), 24(7):.

Cloud-based Radio Access Network (Cloud-RAN) leverages virtualization to enable the coexistence of multiple virtual Base Band Units (vBBUs) with collocated workloads on a single edge computer, aiming for economic and operational efficiency. However, this coexistence can cause performance degradation in vBBUs due to resource contention. In this paper, we conduct an empirical analysis of vBBU performance on a Linux RT-Kernel, highlighting the impact of resource sharing with user-space tasks and Kernel threads. Furthermore, we evaluate CPU management strategies such as CPU affinity and CPU isolation as potential solutions to these performance challenges. Our results highlight that the implementation of CPU affinity can significantly reduce throughput variability by up to 40%, decrease vBBU NACK ratios, and reduce vBBU scheduling latency within the Linux RT-Kernel. Collectively, these findings underscore the potential of CPU management strategies to enhance vBBU performance in Cloud-RAN environments, enabling more efficient and stable network operations. The paper concludes with a discussion on the efficient realization of Cloud-RAN, elucidating the benefits of implementing the proposed CPU affinity allocations. The demonstrated enhancements, including reduced scheduling latency and improved end-to-end throughput, affirm the practicality and efficacy of the proposed strategies for optimizing Cloud-RAN deployments.
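
The CPU affinity strategy evaluated above can be exercised from user space; on Linux, Python exposes the `sched_setaffinity(2)` system call directly. A minimal sketch (the wrapper function and the choice of pinning to CPU 0 are ours, not the paper's):

```python
import os

def pin_process(pid: int, cpus: set) -> set:
    """Pin `pid` (0 = current process) to the given CPU set and return
    the resulting affinity mask. Linux-only; returns an empty set on
    platforms without sched_setaffinity support."""
    if not hasattr(os, "sched_setaffinity"):
        return set()
    # Intersect with the CPUs this process is actually allowed to use,
    # so the call cannot fail inside a restricted container.
    target = cpus & os.sched_getaffinity(pid)
    if target:
        os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

# Pin the current process to CPU 0, as one might isolate a vBBU thread.
affinity = pin_process(0, {0})
```

In a real Cloud-RAN deployment the same effect is usually achieved with `taskset`/cgroup cpusets around the vBBU processes rather than from inside them.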

RevDate: 2024-04-15

Liang YP, Chang CM, CC Chung (2024)

Implementation of Lightweight Convolutional Neural Networks with an Early Exit Mechanism Utilizing 40 nm CMOS Process for Fire Detection in Unmanned Aerial Vehicles.

Sensors (Basel, Switzerland), 24(7):.

The advancement of unmanned aerial vehicles (UAVs) enables early detection of numerous disasters. Efforts have been made to automate the monitoring of data from UAVs, with machine learning methods recently attracting significant interest. These solutions often face challenges with high computational costs and energy usage. Conventionally, data from UAVs are processed using cloud computing, where they are sent to the cloud for analysis. However, this method might not meet the real-time needs of disaster relief scenarios. In contrast, edge computing provides real-time processing at the site but still struggles with computational and energy efficiency issues. To overcome these obstacles and enhance resource utilization, this paper presents a convolutional neural network (CNN) model with an early exit mechanism designed for fire detection in UAVs. This model is implemented using TSMC 40 nm CMOS technology, which aids in hardware acceleration. Notably, the neural network has a modest parameter count of 11.2 k. In the hardware computation part, the CNN circuit completes fire detection in approximately 230,000 cycles. Power-gating techniques are also used to turn off inactive memory, contributing to reduced power consumption. The experimental results show that this neural network reaches a maximum accuracy of 81.49% in the hardware implementation stage. After automatic layout and routing, the CNN hardware accelerator can operate at 300 MHz, consuming 117 mW of power.
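
The early-exit mechanism described above can be sketched independently of the hardware: intermediate classifier heads let inference stop as soon as one is confident, skipping the remaining, more expensive layers. The callable-based API below is a hypothetical illustration, not the paper's implementation:

```python
def early_exit_infer(x, stages, exit_heads, threshold=0.9):
    """Run input through successive feature stages; after each stage an
    exit head returns (label, confidence). Return at the first exit
    whose confidence clears `threshold`, or at the final stage.
    `stages` and `exit_heads` are parallel lists of callables."""
    features = x
    for depth, (stage, head) in enumerate(zip(stages, exit_heads)):
        features = stage(features)
        label, confidence = head(features)
        if confidence >= threshold or depth == len(stages) - 1:
            return label, confidence, depth  # depth = exit point taken

# Toy demo: the first head is unsure, so inference runs to the second.
stages = [lambda f: f, lambda f: f]
heads = [lambda f: ("no_fire", 0.55), lambda f: ("fire", 0.97)]
label, confidence, exit_at = early_exit_infer([0.1, 0.2], stages, heads)
```

The energy saving comes from easy inputs exiting at shallow depth, which pairs naturally with the power-gating of inactive memory mentioned in the abstract.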

RevDate: 2024-04-15

Gomes B, Soares C, Torres JM, et al (2024)

An Efficient Edge Computing-Enabled Network for Used Cooking Oil Collection.

Sensors (Basel, Switzerland), 24(7):.

In Portugal, more than 98% of domestic cooking oil is disposed of improperly every day. This prevents its recycling/reconversion into other forms of energy, and it may also become a potentially harmful contaminant of soil and water. Driven by the utility of recycled cooking oil, and leveraging the exponential growth of ubiquitous computing approaches, we propose an IoT smart solution for domestic used cooking oil (UCO) collection bins. We call this approach SWAN, which stands for Smart Waste Accumulation Network. It is deployed and evaluated in Portugal, and consists of a countrywide network of collection bin units available in public areas. Two metrics are considered to evaluate the system's success: (i) user engagement, and (ii) used cooking oil collection efficiency. The presented system should (i) perform under scenarios of temporary communication network failures, and (ii) be scalable to accommodate an ever-growing number of installed collection units. Thus, we chose an approach that departs from the traditional cloud computing paradigm: it relies on edge node infrastructure to process, store, and act upon the locally collected data, with communication treated as a delay-tolerant task, i.e., an edge computing solution. We conduct a comparative analysis revealing the benefits of the edge-computing-enabled collection bin vs. a cloud computing solution. The studied period covers four years of collected data. An exponential increase in the amount of used cooking oil collected is identified, with the developed solution being responsible for surpassing the national collection totals of previous years. During the same period, we also improved the collection process, as we were able to more accurately estimate the optimal collection and system maintenance intervals.

RevDate: 2024-04-15

Armijo A, D Zamora-Sánchez (2024)

Integration of Railway Bridge Structural Health Monitoring into the Internet of Things with a Digital Twin: A Case Study.

Sensors (Basel, Switzerland), 24(7):.

Structural health monitoring (SHM) is critical for ensuring the safety of infrastructure such as bridges. This article presents a digital twin solution for the SHM of railway bridges using low-cost wireless accelerometers and machine learning (ML). The system architecture combines on-premises edge computing and cloud analytics to enable efficient real-time monitoring and complete storage of relevant time-history datasets. After train crossings, the accelerometers stream raw vibration data, which are processed in the frequency domain and analyzed using machine learning to detect anomalies that indicate potential structural issues. The digital twin approach is demonstrated on an in-service railway bridge for which vibration data were collected over two years under normal operating conditions. By learning allowable ranges for vibration patterns, the digital twin model identifies abnormal spectral peaks that indicate potential changes in structural integrity. The long-term pilot proves that this affordable SHM system can provide automated and real-time warnings of bridge damage and also supports the use of lower-cost, in-house-designed sensors with edge computing capabilities, such as those used in the demonstration. The successful on-premises-cloud hybrid implementation provides a cost-effective and scalable model for expanding monitoring to thousands of railway bridges, democratizing SHM to improve safety by avoiding catastrophic failures.
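
The "allowable ranges for vibration patterns" learned by the digital twin can be approximated with a simple per-bin mean ± k·σ envelope over spectra recorded under normal operating conditions. The paper does not publish its exact model, so the following is only a plausible baseline:

```python
import statistics

def learn_allowable_range(history, k=3.0):
    """Learn a per-frequency-bin allowable range (mean +/- k*sigma) from
    spectra observed under normal conditions. `history` is a list of
    spectra, each a list of bin amplitudes of equal length."""
    ranges = []
    for amplitudes in zip(*history):  # transpose: one tuple per bin
        mu = statistics.mean(amplitudes)
        sigma = statistics.stdev(amplitudes)
        ranges.append((mu - k * sigma, mu + k * sigma))
    return ranges

def flag_anomalies(spectrum, ranges):
    """Return the indices of spectral bins outside the learned envelope."""
    return [i for i, (a, (lo, hi)) in enumerate(zip(spectrum, ranges))
            if not lo <= a <= hi]

ranges = learn_allowable_range([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, 2.0]])
alerts = flag_anomalies([1.0, 5.0], ranges)  # bin 1 is outside the envelope
```

A production system would track spectral peaks rather than raw bins and account for load and temperature effects, but the learn-then-flag structure is the same.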

RevDate: 2024-04-15

Gaffurini M, Flammini A, Ferrari P, et al (2024)

End-to-End Emulation of LoRaWAN Architecture and Infrastructure in Complex Smart City Scenarios Exploiting Containers.

Sensors (Basel, Switzerland), 24(7):.

In a LoRaWAN network, the backend is generally distributed as Software as a Service (SaaS) based on container technology, and recently, a containerized version of the LoRaWAN node stack has also become available. Exploiting the disaggregation of LoRaWAN components, this paper focuses on the emulation of complex end-to-end architectures and infrastructures for smart city scenarios, leveraging lightweight virtualization technology. The fundamental metrics for gaining insights and evaluating the scaling complexity of the emulated scenario are defined. Then, the methodology is applied to use cases taken from a real LoRaWAN application in a smart city with hundreds of nodes. As a result, the proposed container-based approach allows for the following: (i) deployment of functionalities on diverse distributed hosts; (ii) the use of the very same software running on real nodes; (iii) simple configuration and management of the emulation process; (iv) affordable costs. Both on-premises and cloud servers are considered as emulation platforms to evaluate the resource requirements and emulation cost of the proposed approach. For instance, emulating one hour of an entire LoRaWAN network with hundreds of nodes requires very affordable hardware that, if realized with a cloud-based computing platform, may cost less than USD 1.
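
To make the "less than USD 1 per emulated hour" claim concrete, a back-of-envelope cost model helps; every price and per-container footprint below is a hypothetical input for illustration, not a figure from the paper:

```python
def emulation_cost_usd(n_nodes, vcpu_per_node, mem_gb_per_node,
                       vcpu_price_hr, gb_price_hr, hours=1.0):
    """Back-of-envelope cloud cost for emulating `n_nodes` containerized
    LoRaWAN nodes on a pay-per-resource platform. All prices and
    per-container footprints are hypothetical inputs."""
    vcpus = n_nodes * vcpu_per_node
    mem_gb = n_nodes * mem_gb_per_node
    return hours * (vcpus * vcpu_price_hr + mem_gb * gb_price_hr)

# e.g. 300 idle-ish node containers at 0.01 vCPU / 16 MB each
cost = emulation_cost_usd(300, 0.01, 0.016, 0.04, 0.004)
```

With these toy numbers the emulated hour costs well under USD 1, consistent with the abstract's claim that lightweight containers keep per-node overhead tiny.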

RevDate: 2024-04-12

Gupta P, DP Shukla (2024)

Demi-decadal land use land cover change analysis of Mizoram, India, with topographic correction using machine learning algorithm.

Environmental science and pollution research international [Epub ahead of print].

Mizoram (India) is part of UNESCO's biodiversity hotspots in India and is primarily populated by tribes who engage in shifting agriculture; hence, the land use land cover (LULC) pattern of the state changes frequently. We used Landsat 5 and 8 satellite images to prepare LULC maps at five-year intervals from 2000 to 2020. The atmospherically corrected images were pre-processed to remove cloud cover and then classified into six classes: waterbodies, farmland, settlement, open forest, dense forest, and bare land. We applied four machine learning (ML) algorithms for classification, namely, random forest (RF), classification and regression tree (CART), minimum distance (MD), and support vector machine (SVM), to the images from 2000 to 2020. With 80% training and 20% testing data, we found that the RF classifier achieved the highest accuracy among the four. The average overall accuracy (OA) and Kappa coefficient (KC) from 2000 to 2020 were 84.00% and 0.79 with the RF classifier; with SVM, CART, and MD, the averages were 78.06%, 0.73; 78.60%, 0.72; and 73.32%, 0.65, respectively. We utilised three methods of topographic correction, namely, C-correction, SCS (sun canopy sensor) correction, and SCS + C correction, to reduce the misclassification due to shadow effects. SCS + C correction worked best for this region; hence, we prepared the demi-decadal LULC maps from 2000 to 2020 with the RF classifier on SCS + C-corrected satellite images. The OA for 2000, 2005, 2010, 2015, and 2020 was 84%, 81%, 81%, 85%, and 89%, respectively. Dense forest decreased from 2000 to 2020 with increases in open forest, settlement, and agriculture; in years when farmland was low, barren land increased. The results improved significantly with the topographic correction, and misclassification was considerably reduced.
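
The overall accuracy and Kappa coefficient reported above are both derived from the classification confusion matrix; a minimal sketch of the computation (the two-class matrix in the demo is invented for illustration):

```python
def oa_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    n = sum(sum(row) for row in confusion)
    diag = sum(confusion[i][i] for i in range(len(confusion)))
    po = diag / n                                  # observed agreement (OA)
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(col) for col in zip(*confusion)]
    # chance agreement expected from the marginal class frequencies
    pe = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return po, (po - pe) / (1 - pe)                # (OA, kappa)

oa, kc = oa_and_kappa([[50, 10], [5, 35]])
```

Kappa discounts the agreement that the marginal class frequencies would produce by chance, which is why it is reported alongside OA for imbalanced LULC classes.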

RevDate: 2024-04-15

Zhang Y, Geng H, Su L, et al (2024)

An efficient polynomial-based verifiable computation scheme on multi-source outsourced data.

Scientific reports, 14(1):8512.

With the development of cloud computing, users are more inclined to outsource complex computing tasks to cloud servers with strong computing capacity, and the cloud returns the final calculation results. However, the cloud is not completely trustworthy: it may leak user data and even deliberately return incorrect results. Therefore, it is important to verify the results of computing tasks without compromising user privacy. Among these computing tasks, polynomial calculation is widely used in information security, linear algebra, signal processing, and other fields. Most existing polynomial-based verifiable computation schemes require that the input of the polynomial function come from a single data source, which means that the data must be signed by a single user. However, in practical applications the input of the polynomial may come from multiple users. To solve this problem, researchers have proposed schemes for multi-source outsourced data, but these schemes share the problem of low efficiency. To improve efficiency, this paper proposes an efficient polynomial-based verifiable computation scheme on multi-source outsourced data. We optimize the polynomials using Horner's method to increase the speed of verification, in which the addition gates and multiplication gates can be interleaved to represent the polynomial function. To fit this structure, we design a corresponding homomorphic verification tag, so that the input of the polynomial can come from multiple data sources. We prove the correctness and rationality of the scheme and carry out numerical analysis and evaluation to verify its efficiency. The experimental results indicate that data contributors can sign 1,000 new data items in merely 2 s, while the verification of a delegated polynomial function of degree 100 requires only 18 ms. These results confirm that the proposed scheme outperforms existing schemes.
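
Horner's method, which the scheme builds its interleaved addition/multiplication gate structure on, rewrites a degree-n polynomial as n nested multiply-adds. A sketch of the evaluation step alone (the homomorphic verification tags are beyond this snippet):

```python
def horner_eval(coeffs, x):
    """Evaluate a polynomial by Horner's rule. `coeffs` runs from the
    highest-degree term down: [a_n, ..., a_1, a_0] is computed as
    (((a_n*x + a_{n-1})*x + ...)*x + a_0, i.e. n multiplications and
    n additions, strictly interleaved."""
    acc = 0
    for a in coeffs:
        acc = acc * x + a
    return acc

# 2x^3 + 3x^2 + x + 5 evaluated at x = 2
value = horner_eval([2, 3, 1, 5], 2)
```

The naive evaluation would cost O(n^2) multiplications (or O(n) with cached powers); Horner's nesting is what lets a verification tag be updated once per gate as the accumulator passes through.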

RevDate: 2024-04-15
CmpDate: 2024-04-15

Li S, Nair R, SM Naqvi (2024)

Acoustic and Text Features Analysis for Adult ADHD Screening: A Data-Driven Approach Utilizing DIVA Interview.

IEEE journal of translational engineering in health and medicine, 12:359-370.

Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder, commonly seen in childhood, that leads to behavioural changes in social development and communication patterns and often continues into undiagnosed adulthood due to a global shortage of psychiatrists, resulting in delayed diagnoses with lasting consequences for individuals' well-being and for society. Recently, machine learning methodologies have been incorporated into healthcare systems to facilitate the diagnosis of mental health conditions and enhance the prediction of treatment outcomes. Previous research on ADHD detection focused on utilizing functional magnetic resonance imaging (fMRI) or electroencephalography (EEG) signals, which require costly equipment and trained personnel for data collection. In recent years, speech and text modalities have garnered increasing attention due to their cost-effectiveness and non-wearable sensing in data collection. In this research, conducted in collaboration with the Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, we gathered audio data from both ADHD patients and normal controls based on the clinically popular Diagnostic Interview for ADHD in adults (DIVA). Subsequently, we transformed the speech data into text through the Google Cloud Speech API. We extracted both acoustic and text features from the data, encompassing traditional acoustic features (e.g., MFCC), specialized feature sets (e.g., eGeMAPS), as well as deep-learned linguistic and semantic features derived from pre-trained deep learning models. These features are employed in conjunction with a support vector machine for ADHD classification, yielding promising outcomes in the utilization of audio and text data for effective adult ADHD screening. Clinical impact: This research introduces a transformative approach to ADHD diagnosis, employing speech and text analysis to facilitate early and more accessible detection, particularly beneficial in areas with limited psychiatric resources. Clinical and Translational Impact Statement: The successful application of machine learning techniques in analyzing audio and text data for ADHD screening represents a significant advancement in mental health diagnostics, paving the way for its integration into clinical settings and potentially improving patient outcomes on a broader scale.

RevDate: 2024-04-12

Sachdeva S, Bhatia S, Al Harrasi A, et al (2024)

Unraveling the role of cloud computing in health care system and biomedical sciences.

Heliyon, 10(7):e29044.

Cloud computing has emerged as a transformative force in healthcare and biomedical sciences, offering scalable, on-demand resources for managing vast amounts of data. This review explores the integration of cloud computing within these fields, highlighting its pivotal role in enhancing data management, security, and accessibility. We examine the application of cloud computing in various healthcare domains, including electronic medical records, telemedicine, and personalized patient care, as well as its impact on bioinformatics research, particularly in genomics, proteomics, and metabolomics. The review also addresses the challenges and ethical considerations associated with cloud-based healthcare solutions, such as data privacy and cybersecurity. By providing a comprehensive overview, we aim to assist readers in understanding the significance of cloud computing in modern medical applications and its potential to revolutionize both patient care and biomedical research.

RevDate: 2024-04-09

Hicks CB, TJ Martinez (2024)

Massively scalable workflows for quantum chemistry: BigChem and ChemCloud.

The Journal of chemical physics, 160(14):.

Electronic structure theory, i.e., quantum chemistry, is the fundamental building block for many problems in computational chemistry. We present a new distributed computing framework (BigChem), which allows for an efficient solution of many quantum chemistry problems in parallel. BigChem is designed to be easily composable and leverages industry-standard middleware (e.g., Celery, RabbitMQ, and Redis) for distributed approaches to large scale problems. BigChem can harness any collection of worker nodes, including ones on cloud providers (such as AWS or Azure), local clusters, or supercomputer centers (and any mixture of these). BigChem builds upon MolSSI packages, such as QCEngine, to standardize the operation of numerous computational chemistry programs, demonstrated here with Psi4, xtb, geomeTRIC, and TeraChem. BigChem delivers full utilization of compute resources at scale, offers a programmable canvas for designing sophisticated quantum chemistry workflows, and is fault tolerant to node failures and network disruptions. We demonstrate linear scalability of BigChem running computational chemistry workloads on up to 125 GPUs. Finally, we present ChemCloud, a web API to BigChem and successor to TeraChem Cloud. ChemCloud delivers scalable and secure access to BigChem over the Internet.
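
BigChem's fan-out of independent quantum chemistry tasks to workers rests on Celery and RabbitMQ; the stdlib sketch below mirrors only the submit/collect pattern at toy scale, with a stand-in function in place of a real Psi4 or TeraChem job (all names and the returned "energies" are invented):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_energy(geometry_id: int) -> float:
    """Stand-in for the single-point energy a worker would compute;
    the real framework dispatches Psi4/xtb/TeraChem jobs via Celery."""
    return -76.0 - 0.001 * geometry_id  # hypothetical energies

def run_batch(geometry_ids):
    """Fan independent tasks out to a worker pool and gather results in
    submission order, mirroring the submit/collect workflow pattern."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fake_energy, geometry_ids))

energies = run_batch(range(8))
```

In the real system the pool is a fleet of remote workers pulling from a broker queue, which is what makes the pattern fault tolerant: a task whose worker dies is simply redelivered to another.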

RevDate: 2024-04-10

Holl F, Clarke L, Raffort T, et al (2024)

The Red Cross Red Crescent Health Information System (RCHIS): an electronic medical records and health information management system for the red cross red crescent emergency response units.

Conflict and health, 18(1):28.

BACKGROUND: The Red Cross and Red Crescent Movement (RCRC) utilizes specialized Emergency Response Units (ERUs) for international disaster response. However, data collection and reporting within ERUs have been time-consuming and paper-based. The Red Cross Red Crescent Health Information System (RCHIS) was developed to improve clinical documentation and reporting, ensuring accuracy and ease of use while increasing compliance with reporting standards.

CASE PRESENTATION: RCHIS is an Electronic Medical Record (EMR) and Health Information System (HIS) designed for RCRC ERUs. It can be accessed on Android tablets or Windows laptops, both online and offline. The system securely stores data on Microsoft Azure cloud, with synchronization facilitated through a local ERU server. The functional architecture covers all clinical functions of ERU clinics and hospitals, incorporating user-friendly features. A pilot study was conducted with the Portuguese Red Cross (PRC) during a large-scale event. Thirteen super users were trained and subsequently trained the staff. During the four-day pilot, 77 user accounts were created, and 243 patient files were documented. Feedback indicated that RCHIS was easy to use, required minimal training time, and that the training provided was sufficient for full utilization. Real-time reporting facilitated coordination with the civil defense authority.

CONCLUSIONS: The development and pilot use of RCHIS demonstrated its feasibility and efficacy within RCRC ERUs. The system addressed the need for an EMR and HIS solution, enabling comprehensive clinical documentation and supporting administrative reporting functions. The pilot study validated the training of trainers' approach and paved the way for further domestic use of RCHIS. RCHIS has the potential to improve patient safety, quality of care, and reporting efficiency within ERUs. Automated reporting reduces the burden on ERU leadership, while electronic compilation enhances record completeness and correctness. Ongoing feedback collection and feature development continue to enhance RCHIS's functionality. Further trainings took place in 2023 and preparations for international deployments are under way. RCHIS represents a significant step toward improved emergency medical care and coordination within the RCRC and has implications for similar systems in other Emergency Medical Teams.

RevDate: 2024-04-09

Chen A, Yu S, Yang X, et al (2024)

IoT data security in outsourced databases: A survey of verifiable database.

Heliyon, 10(7):e28117.

With the swift advancement of cloud computing and the Internet of Things (IoT), to address the issue of massive data storage, IoT devices opt to offload their data to cloud servers so as to alleviate the pressure of resident storage and computation. However, storing local data in an outsourced database is bound to face the danger of tampering. To handle this problem, the verifiable database (VDB), initially suggested in 2011, has garnered sustained interest from researchers. The concept of VDB enables resource-limited clients to securely outsource extremely large databases to untrusted servers, where users can retrieve database records and modify them by allocating new values, and any attempts at tampering will be detected. This paper provides a systematic summary of VDB. First, a definition of VDB is given, along with correctness and security proofs. VDB schemes based on commitment constructions are then introduced, mainly divided into vector commitments and polynomial commitments. Next, VDB schemes based on delegated polynomial functions are introduced, mainly in combination with Merkle trees and forward-secure symmetric searchable encryption. We then classify current VDB schemes by the four different assumptions they rely on, and further classify established schemes by the two different groups they are built upon. Finally, we introduce the applications and future development of VDB. To our knowledge, this is the first VDB review paper to date.
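
The Merkle-tree building block mentioned above underlies several VDB constructions: the client keeps only the root digest while the untrusted server stores the records, and any tampering breaks proof verification. A minimal sketch (the binary-tree layout and API are our illustrative choices, not any particular scheme):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    """Hash adjacent pairs; an unpaired last node is promoted as-is."""
    return [_h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i]
            for i in range(0, len(level), 2)]

def merkle_root(leaves):
    """Root digest over a list of byte-string records."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with is-left flags) proving leaves[index]."""
    level, proof = [_h(leaf) for leaf in leaves], []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < index))
        level, index = _next_level(level), index // 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from a claimed record and its proof."""
    node = _h(leaf)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root

root = merkle_root([b"rec0", b"rec1", b"rec2"])
```

Full VDB schemes add efficient updates and non-repudiation on top; vector and polynomial commitments achieve constant-size proofs where Merkle proofs are logarithmic.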

RevDate: 2024-04-18

Mimar S, Paul AS, Lucarelli N, et al (2024)

ComPRePS: An Automated Cloud-based Image Analysis tool to democratize AI in Digital Pathology.

bioRxiv : the preprint server for biology.

Artificial intelligence (AI) has extensive applications in a wide range of disciplines including healthcare and clinical practice. Advances in high-resolution whole-slide brightfield microscopy allow for the digitization of histologically stained tissue sections, producing gigapixel-scale whole-slide images (WSI). The significant improvement in computing and revolution of deep neural network (DNN)-based AI technologies over the last decade allow us to integrate massively parallelized computational power, cutting-edge AI algorithms, and big data storage, management, and processing. Applied to WSIs, AI has created opportunities for improved disease diagnostics and prognostics with the ultimate goal of enhancing precision medicine and resulting patient care. The National Institutes of Health (NIH) has recognized the importance of developing standardized principles for data management and discovery for the advancement of science and proposed the Findable, Accessible, Interoperable, Reusable (FAIR) Data Principles [1] with the goal of building a modernized biomedical data resource ecosystem to establish collaborative research communities. In line with this mission and to democratize AI-based image analysis in digital pathology, we propose ComPRePS: an end-to-end automated Computational Renal Pathology Suite which combines massive scalability, on-demand cloud computing, and an easy-to-use web-based user interface for data upload, storage, management, slide-level visualization, and domain expert interaction. Moreover, our platform is equipped with both in-house and collaborator-developed sophisticated AI algorithms in the back-end server for image analysis to identify clinically relevant micro-anatomic functional tissue units (FTU) and to extract image features.

RevDate: 2024-04-22

Copeland CJ, Roddy JW, Schmidt AK, et al (2024)

VIBES: a workflow for annotating and visualizing viral sequences integrated into bacterial genomes.

NAR genomics and bioinformatics, 6(2):lqae030.

Bacteriophages are viruses that infect bacteria. Many bacteriophages integrate their genomes into the bacterial chromosome and become prophages. Prophages may substantially burden or benefit host bacteria fitness, acting in some cases as parasites and in others as mutualists. Some prophages have been demonstrated to increase host virulence. The increasing ease of bacterial genome sequencing provides an opportunity to deeply explore prophage prevalence and insertion sites. Here we present VIBES (Viral Integrations in Bacterial genomES), a workflow intended to automate prophage annotation in complete bacterial genome sequences. VIBES provides additional context to prophage annotations by annotating bacterial genes and viral proteins in user-provided bacterial and viral genomes. The VIBES pipeline is implemented as a Nextflow-driven workflow, providing a simple, unified interface for execution on local, cluster and cloud computing environments. For each step of the pipeline, a container including all necessary software dependencies is provided. VIBES produces results in simple tab-separated format and generates intuitive and interactive visualizations for data exploration. Despite VIBES's primary emphasis on prophage annotation, its generic alignment-based design allows it to be deployed as a general-purpose sequence similarity search manager. We demonstrate the utility of the VIBES prophage annotation workflow by searching for 178 Pf phage genomes across 1072 Pseudomonas spp. genomes.

RevDate: 2024-04-08
CmpDate: 2024-04-08

Nawaz Tareen F, Alvi AN, Alsamani B, et al (2024)

EOTE-FSC: An efficient offloaded task execution for fog enabled smart cities.

PloS one, 19(4):e0298363.

Smart cities ease the lifestyle of their community members with the help of Information and Communication Technology (ICT). They provide better water, waste, and energy management, enhance the security and safety of citizens, and offer better health facilities. Most of these applications are based on IoT sensor networks that are deployed in different application areas according to demand. Due to limited processing capabilities, sensor nodes cannot process multiple tasks simultaneously and need to offload some of their tasks to remotely placed cloud servers, which may cause delays. To reduce the delay, computing nodes placed in different vicinities, acting as fog-computing nodes, are used to execute the offloaded tasks. It has been observed that the offloaded tasks are not uniformly received by fog computing nodes: some fog nodes may receive more tasks while others receive fewer. This may increase overall task execution time. Furthermore, these tasks have different priority levels and must be executed before their deadlines. In this work, an Efficient Offloaded Task Execution for Fog enabled Smart cities (EOTE-FSC) is proposed. EOTE-FSC proposes a load balancing mechanism that modifies the greedy algorithm to efficiently distribute the offloaded tasks to the attached fog nodes, reducing the overall task execution time. This results in the successful execution of most of the tasks within their deadlines. In addition, EOTE-FSC modifies the task sequencing with deadline algorithm for the fog node to optimally execute the offloaded tasks in such a way that most of the high-priority tasks are entertained. The load balancing results of EOTE-FSC are compared with the state-of-the-art Round Robin, Greedy, Round Robin with longest job first (LJF), and Round Robin with shortest job first (SJF) algorithms, while the fog computing results are compared with the First Come First Serve (FCFS) algorithm. The results show that EOTE-FSC effectively offloads the tasks to fog nodes, reducing the maximum load on the fog computing nodes by up to 29%, 27.3%, 23%, and 24.4% compared to Round Robin, Greedy, Round Robin with LJF, and Round Robin with SJF, respectively. Moreover, within the same computing capacity of fog nodes, the proposed EOTE-FSC executes the maximum number of offloaded high-priority tasks compared to the FCFS algorithm.
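
The baseline that EOTE-FSC's load balancer modifies, greedy assignment of each task to the currently least-loaded fog node, can be sketched compactly. The longest-job-first ordering shown is the classic refinement of this greedy scheme, not the paper's exact modification:

```python
import heapq

def greedy_assign(task_sizes, n_nodes, longest_first=True):
    """Greedy list scheduling: give each task to the currently
    least-loaded fog node (min-heap keyed on load). Sorting tasks
    longest-first (LPT) typically tightens the makespan, i.e. the
    maximum load on any node."""
    heap = [(0, node) for node in range(n_nodes)]  # (load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    for size in sorted(task_sizes, reverse=longest_first):
        load, node = heapq.heappop(heap)
        assignment[node].append(size)
        heapq.heappush(heap, (load + size, node))
    makespan = max(sum(tasks) for tasks in assignment.values())
    return assignment, makespan

_, makespan = greedy_assign([7, 5, 4, 3, 2, 2], 2)
```

EOTE-FSC layers deadline-aware, priority-aware sequencing on top of this kind of balancing, so high-priority tasks are served before their deadlines rather than purely by size.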

RevDate: 2024-04-03

Khan NS, Roy SK, Talukdar S, et al (2024)

Empowering real-time flood impact assessment through the integration of machine learning and Google Earth Engine: a comprehensive approach.

Environmental science and pollution research international [Epub ahead of print].

Floods cause substantial losses to life and property, especially in flood-prone regions like northwestern Bangladesh. Timely and precise evaluation of flood impacts is critical for effective flood management and decision-making. This research demonstrates an integrated approach utilizing machine learning and Google Earth Engine to enable real-time flood assessment. Synthetic aperture radar (SAR) data from Sentinel-1 and the Google Earth Engine platform were employed to generate near real-time flood maps of the 2020 flood in Kurigram and Lalmonirhat, and an automatic thresholding technique quantified flooded areas. For land use/land cover (LULC) analysis, Sentinel-2's high resolution and machine learning models such as artificial neural networks (ANN), random forests (RF), and support vector machines (SVM) were leveraged. ANN delivered the best LULC mapping with 0.94 accuracy based on metrics including accuracy, kappa, mean F1 score, mean sensitivity, mean specificity, mean positive predictive value, mean negative predictive value, mean precision, mean recall, mean detection rate, and mean balanced accuracy. Results showed over 600,000 people exposed at peak inundation in July, about 17% of the population. The machine learning-enabled LULC maps reliably identified vulnerable areas to prioritize flood management; over half of the croplands flooded in July. This research demonstrates the potential of integrating SAR, machine learning, and cloud computing to empower authorities through real-time monitoring and accurate LULC mapping essential for effective flood response. The proposed comprehensive methodology can assist stakeholders in developing data-driven flood management strategies to reduce impacts.
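
The abstract does not name its automatic thresholding technique; Otsu's method is a common choice for separating water from land in SAR backscatter histograms within Google Earth Engine workflows, so it stands in here as an illustrative assumption:

```python
def otsu_threshold(values, n_bins=256):
    """Otsu's automatic threshold: choose the histogram split that
    maximizes between-class variance. `values` are scalars, e.g.
    SAR backscatter; pixels below the threshold would be 'water'."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    hist = [0] * n_bins
    for v in values:
        hist[min(int((v - lo) / width), n_bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(n_bins):
        w_bg += hist[t]
        sum_bg += t * hist[t]
        if w_bg == 0 or w_bg == total:
            continue
        w_fg = total - w_bg
        mu_bg, mu_fg = sum_bg / w_bg, (total_sum - sum_bg) / w_fg
        between_var = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return lo + (best_t + 1) * width  # threshold in the original units

# bimodal toy data: a "water" cluster near 0.1 and a "land" cluster near 0.8
t = otsu_threshold([0.10, 0.12, 0.11, 0.09, 0.80, 0.82, 0.79, 0.81])
```

In the real workflow the histogram comes from millions of Sentinel-1 pixels, and the area below the threshold is summed per district to quantify inundation.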

RevDate: 2024-04-03

Gheni HM, AbdulRahaim LA, A Abdellatif (2024)

Real-time driver identification in IoV: A deep learning and cloud integration approach.

Heliyon, 10(7):e28109.

The Internet of Vehicles (IoV) emerges as a pivotal extension of the Internet of Things (IoT), specifically geared towards transforming the automotive landscape. In this evolving ecosystem, the demand for a seamless end-to-end system becomes paramount for enhancing operational efficiency and safety. Hence, this study introduces an innovative method for real-time driver identification by integrating cloud computing with deep learning. Utilizing the integrated capabilities of Google Cloud, Thingsboard, and Apache Kafka, the developed solution tailored for IoV technology is adept at managing real-time data collection, processing, prediction, and visualization, with resilience against sensor data anomalies. This research also proposes a driver identification method combining Convolutional Neural Networks (CNN) with multi-head self-attention. The proposed model is validated on two datasets: the Security dataset and a collected dataset. Moreover, the results show that the proposed model surpassed previous works by achieving an accuracy and F1 score of 99.95%. Even when challenged with data anomalies, the model maintains a high accuracy of 96.2%. By achieving accurate driver identification, the proposed end-to-end IoV system can aid in optimizing fleet management, vehicle security, personalized driving experiences, insurance, and risk assessment. This emphasizes its potential for road safety and more effective transportation management.

RevDate: 2024-04-03

Li Y, Xue F, Li B, et al (2024)

Analyzing bivariate cross-trait genetic architecture in GWAS summary statistics with the BIGA cloud computing platform.

bioRxiv : the preprint server for biology.

As large-scale biobanks provide increasing access to deep phenotyping and genomic data, genome-wide association studies (GWAS) are rapidly uncovering the genetic architecture behind various complex traits and diseases. GWAS publications typically make their summary-level data (GWAS summary statistics) publicly available, enabling further exploration of genetic overlaps between phenotypes gathered from different studies and cohorts. However, systematically analyzing high-dimensional GWAS summary statistics for thousands of phenotypes can be both logistically challenging and computationally demanding. In this paper, we introduce BIGA (https://bigagwas.org/), a website that aims to offer unified data analysis pipelines and processed data resources for cross-trait genetic architecture analyses using GWAS summary statistics. We have developed a framework to implement statistical genetics tools on a cloud computing platform, combined with extensive curated GWAS data resources. Through BIGA, users can upload data, submit jobs, and share results, providing the research community with a convenient tool for consolidating GWAS data and generating new insights.

RevDate: 2024-04-10

Marini S, Barquero A, Wadhwani AA, et al (2024)

OCTOPUS: Disk-based, Multiplatform, Mobile-friendly Metagenomics Classifier.

bioRxiv : the preprint server for biology.

Portable genomic sequencers such as Oxford Nanopore's MinION enable real-time applications in both clinical and environmental health, e.g., detection of bacterial outbreaks. However, there is a bottleneck in the downstream analytics when bioinformatics pipelines are unavailable, e.g., when cloud processing is unreachable due to absence of Internet connection, or only low-end computing devices can be carried on site. For instance, metagenomics classifiers usually require a large amount of memory or specific operating systems/libraries. In this work, we present platform-friendly software for portable metagenomic analysis of Nanopore data, the Oligomer-based Classifier of Taxonomic Operational and Pan-genome Units via Singletons (OCTOPUS). OCTOPUS is written in Java and reimplements several features of the popular Kraken2 and KrakenUniq software, adding original components for improving metagenomics classification on incomplete/sampled reference databases (e.g., selection of bacteria of public health priority), making it ideal for running on smartphones or tablets. We indexed both OCTOPUS and Kraken2 on a bacterial database with ~4,000 reference genomes, then simulated one positive (bacterial genomes from the same species, but different genomes) and two negative (viral, mammalian) Nanopore test sets. On the bacterial test set OCTOPUS yielded sensitivity and precision comparable to Kraken2 (94.4% and 99.8% versus 94.5% and 99.1%, respectively). On non-bacterial sequences (mammals and viral), OCTOPUS dramatically decreased (4- to 16-fold) the false positive rate when compared to Kraken2 (2.1% and 0.7% versus 8.2% and 11.2%, respectively). We also developed customized databases including viruses, and the World Health Organization's set of bacteria of concern for drug resistance, tested with real Nanopore data on an Android smartphone. OCTOPUS is publicly available at https://github.com/DataIntellSystLab/OCTOPUS and https://github.com/Ruiz-HCI-Lab/OctopusMobile.
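As a rough illustration of the k-mer-based classification that Kraken2-style tools (and, by extension, OCTOPUS) perform, the sketch below indexes reference genomes by k-mer and assigns a read by majority vote, rejecting reads with too few database hits — one simple way to curb false positives on out-of-database sequences. The tiny k-mer size, the threshold, and all names are illustrative, not OCTOPUS internals:

```python
from collections import defaultdict

K = 8  # toy k-mer size; production classifiers typically use k near 31

def index_references(refs):
    """Map each k-mer to the set of reference labels containing it."""
    index = defaultdict(set)
    for label, genome in refs.items():
        for i in range(len(genome) - K + 1):
            index[genome[i:i + K]].add(label)
    return index

def classify_read(read, index, min_fraction=0.3):
    """Assign the read to the label matching the most k-mers, or None
    when too few k-mers hit the index (likely out-of-database)."""
    votes = defaultdict(int)
    n_kmers = max(len(read) - K + 1, 1)
    hits = 0
    for i in range(len(read) - K + 1):
        labels = index.get(read[i:i + K])
        if labels:
            hits += 1
            for label in labels:
                votes[label] += 1
    if hits / n_kmers < min_fraction:
        return None
    return max(votes, key=votes.get)
```

A disk-backed version of this index (rather than an in-memory hash) is what makes the approach feasible on memory-constrained devices such as smartphones.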

RevDate: 2024-03-30

Du J, Dong G, Ning J, et al (2024)

Identity-based controlled delegated outsourcing data integrity auditing scheme.

Scientific reports, 14(1):7582.

With the continuous development of cloud computing, the application of cloud storage has become more and more popular. To ensure the integrity and availability of cloud data, scholars have proposed several cloud data auditing schemes. Still, most fail to adequately address outsourced data integrity, controlled outsourcing, and source file auditing. Therefore, we propose a controlled delegated outsourcing data integrity auditing scheme based on the identity-based encryption model. Our proposed scheme allows users to specify a dedicated agent to assist in uploading data to the cloud. These authorized proxies use recognizable identities for authentication and authorization, thus avoiding the need for cumbersome certificate management in a secure distributed computing system. In addition, our scheme adopts a bucket-based red-black tree structure that efficiently supports dynamic data updates and the structural rebalancing they require, keeping data operations highly efficient. We define the security model of the scheme in detail and prove the scheme's security under the underlying hardness assumption. In the performance analysis section, the proposed scheme is compared experimentally with other schemes, and the results show that it is efficient and secure.

RevDate: 2024-03-28

Chen X, Xu G, Xu X, et al (2024)

Multicenter Hierarchical Federated Learning With Fault-Tolerance Mechanisms for Resilient Edge Computing Networks.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

In the realm of federated learning (FL), the conventional dual-layered architecture, comprising a central parameter server and peripheral devices, often encounters challenges due to its significant reliance on the central server for communication and security. This dependence becomes particularly problematic in scenarios involving potential malfunctions of devices and servers. While existing device-edge-cloud hierarchical FL (HFL) models alleviate some dependence on central servers and reduce communication overheads, they primarily focus on load balancing within edge computing networks and fall short of achieving complete decentralization and edge-centric model aggregation. Addressing these limitations, we introduce the multicenter HFL (MCHFL) framework. This framework replaces the traditional single-central-server architecture with a distributed network of robust global aggregation centers located at the edge, inherently enhancing the fault tolerance crucial for maintaining operational integrity amidst edge network disruptions. Our comprehensive experiments with the MNIST, FashionMNIST, and CIFAR-10 datasets demonstrate the MCHFL's superior performance. Notably, even under high paralysis ratios of up to 50%, the MCHFL maintains high accuracy levels, with maximum accuracy reductions of only 2.60%, 5.12%, and 16.73% on these datasets, respectively. This performance significantly surpasses the notable accuracy declines observed in traditional single-center models under similar conditions. To the best of our knowledge, the MCHFL is the first edge multicenter FL framework with theoretical underpinnings. Our extensive experimental results across various datasets validate the MCHFL's effectiveness, showcasing its higher accuracy, faster convergence speed, and stronger robustness compared to single-center models, thereby establishing it as a pioneering paradigm in edge multicenter FL.
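The core idea — aggregating through several edge centers so that a failed center does not abort the round — can be sketched in a few lines. This toy simulation uses a scalar model and hypothetical helper names; it is not the MCHFL algorithm itself:

```python
def local_update(weights, data, lr=0.1):
    """Toy local training step: nudge a scalar model toward the
    client's data mean."""
    target = sum(data) / len(data)
    return weights + lr * (target - weights)

def federated_round(global_w, clients, centers_up):
    """One round of multicenter aggregation.

    Each client reports to one edge center; a failed center drops its
    clients' updates for the round, but surviving centers still merge
    their partial aggregates, so the round completes (fault tolerance).
    """
    per_center = [[] for _ in centers_up]
    for i, data in enumerate(clients):
        per_center[i % len(centers_up)].append(local_update(global_w, data))
    partials = [(sum(u) / len(u), len(u))
                for up, u in zip(centers_up, per_center) if up and u]
    if not partials:
        return global_w  # every center down: keep the old model
    total = sum(n for _, n in partials)
    return sum(m * n for m, n in partials) / total  # sample-weighted merge
```

With one of three centers paralyzed, the merged model still converges toward the mean of the surviving clients' targets, which mirrors the paper's observation that accuracy degrades gracefully rather than collapsing.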

RevDate: 2024-03-29

Lock C, Toh EMS, NC Keong (2024)

Structural volumetric and Periodic Table DTI patterns in Complex Normal Pressure Hydrocephalus-Toward the principles of a translational taxonomy.

Frontiers in human neuroscience, 18:1188533.

INTRODUCTION: We previously proposed a novel taxonomic framework to describe the diffusion tensor imaging (DTI) profiles of white matter tracts by their diffusivity and neural properties. We have shown the relevance of this strategy toward interpreting brain tissue signatures in Classic Normal Pressure Hydrocephalus vs. comparator cohorts of mild traumatic brain injury and Alzheimer's disease. In this iteration of the Periodic Table of DTI Elements, we examined patterns of tissue distortion in Complex NPH (CoNPH) and validated the methodology against an open-access dataset of healthy subjects, to expand its accessibility to a larger community.

METHODS: DTI measures for 12 patients with CoNPH with multiple comorbidities and 45 cognitively normal controls from the ADNI database were derived using the image processing pipeline on the brainlife.io open cloud computing platform. Using the Periodic Table algorithm, DTI profiles for CoNPH vs. controls were mapped according to injury patterns.

RESULTS: Structural volumes in most structures tested were significantly lower and the lateral ventricles higher in CoNPH vs. controls. In CoNPH, significantly lower fractional anisotropy (FA) and higher mean, axial, and radial diffusivities (MD, L1, and L2 and 3, respectively) were observed in white matter related to the lateral ventricles. Most diffusivity measures across supratentorial and infratentorial structures were significantly higher in CoNPH, with the largest differences in the cerebellum cortex. In subcortical deep gray matter structures, CoNPH and controls differed most significantly in the hippocampus, with the CoNPH group having a significantly lower FA and higher MD, L1, and L2 and 3. Cerebral and cerebellar white matter demonstrated more potential reversibility of injury compared to cerebral and cerebellar cortices.

DISCUSSION: The findings of widespread and significant reductions in subcortical deep gray matter structures, in comparison to healthy controls, support the hypothesis that Complex NPH cohorts retain imaging features associated with Classic NPH. The use of the algorithm of the Periodic Table allowed for greater consistency in the interpretation of DTI results by focusing on patterns of injury rather than an over-reliance on the interrogation of individual measures by statistical significance alone. Our aim is to provide a prototype that could be refined for an approach toward the concept of a "translational taxonomy."
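The diffusivity measures named above follow standard definitions from the three tensor eigenvalues; these are textbook DTI formulas, not anything specific to this article. A small sketch:

```python
import math

def diffusivity_measures(l1, l2, l3):
    """Standard DTI scalars from the diffusion-tensor eigenvalues,
    with l1 the principal (axial) eigenvalue."""
    md = (l1 + l2 + l3) / 3.0   # mean diffusivity
    ad = l1                     # axial diffusivity (L1)
    rd = (l2 + l3) / 2.0        # radial diffusivity (mean of L2 and L3)
    # Fractional anisotropy: normalized spread of eigenvalues, 0..1.
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(1.5) * num / den if den else 0.0
    return {"FA": fa, "MD": md, "AD": ad, "RD": rd}
```

Isotropic diffusion (equal eigenvalues) gives FA = 0; the lower FA with higher MD, L1, and L2/L3 reported for periventricular white matter corresponds to a flatter, larger tensor.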

RevDate: 2024-03-30

Kang S, Lee S, Y Jung (2024)

Design of Network-on-Chip-Based Restricted Coulomb Energy Neural Network Accelerator on FPGA Device.

Sensors (Basel, Switzerland), 24(6):.

Sensor applications in Internet of Things (IoT) systems, coupled with artificial intelligence (AI) technology, are becoming an increasingly significant part of modern life. For low-latency AI computation in IoT systems, there is a growing preference for edge-based computing over cloud-based alternatives. The restricted Coulomb energy neural network (RCE-NN) is a machine learning algorithm well-suited for implementation on edge devices due to its simple learning and recognition scheme. In addition, because the RCE-NN generates neurons as needed, it is easy to adjust the network structure and learn additional data. Therefore, the RCE-NN can provide edge-based real-time processing for various sensor applications. However, previous RCE-NN accelerators have limited scalability when the number of neurons increases. In this paper, we propose a network-on-chip (NoC)-based RCE-NN accelerator and present the results of implementation on a field-programmable gate array (FPGA). NoC is an effective solution for managing massive interconnections. The proposed RCE-NN accelerator utilizes a hierarchical-star (H-star) topology, which efficiently handles a large number of neurons, along with routers specifically designed for the RCE-NN. These approaches result in only a slight decrease in the maximum operating frequency as the number of neurons increases. Consequently, the maximum operating frequency of the proposed RCE-NN accelerator with 512 neurons increased by 126.1% compared to a previous RCE-NN accelerator. This enhancement was verified with two datasets for gas and sign language recognition, achieving accelerations of up to 54.8% in learning time and up to 45.7% in recognition time. The NoC scheme of the proposed RCE-NN accelerator is an appropriate solution to ensure the scalability of the neural network while providing high-performance on-chip learning and recognition.
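The "generates neurons as needed" behavior can be sketched with the classic RCE training rule: commit a new prototype when no same-class neuron covers a sample, and shrink the radius of any wrong-class neuron that does. This is a minimal software illustration of the algorithm family, not the paper's hardware accelerator:

```python
import math

R_MAX = 1.0  # initial influence radius for newly committed neurons

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rce_train(samples, epochs=3):
    """Train an RCE network. Each neuron is (prototype, radius, label):
    a sample spawns a new neuron only when no same-class neuron covers
    it, and wrong-class neurons that fire have their radii shrunk."""
    neurons = []
    for _ in range(epochs):
        for x, label in samples:
            covered = False
            for i, (p, r, lab) in enumerate(neurons):
                d = dist(p, x)
                if d < r:
                    if lab == label:
                        covered = True
                    else:
                        neurons[i] = (p, d, lab)  # shrink conflicting neuron
            if not covered:
                neurons.append((x, R_MAX, label))
    return neurons

def rce_classify(x, neurons):
    """Return the single label whose neurons fire, else None
    (unknown or ambiguous region)."""
    fired = {lab for p, r, lab in neurons if dist(p, x) < r}
    return fired.pop() if len(fired) == 1 else None
```

Because the neuron list only grows with genuinely novel samples, incremental learning is cheap — which is also why scaling the on-chip interconnect to many neurons, the paper's NoC contribution, matters.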

RevDate: 2024-03-30

Zhan Y, Xie W, Shi R, et al (2024)

Dynamic Privacy-Preserving Anonymous Authentication Scheme for Condition-Matching in Fog-Cloud-Based VANETs.

Sensors (Basel, Switzerland), 24(6):.

Secure group communication in Vehicle Ad hoc Networks (VANETs) over open channels remains a challenging task. To enable secure group communications with conditional privacy, it is necessary to establish a secure session using Authenticated Key Agreement (AKA). However, existing AKAs suffer from problems such as cross-domain dynamic group session key negotiation and heavy computational burdens on the Trusted Authority (TA) and vehicles. To address these challenges, we propose a dynamic privacy-preserving anonymous authentication scheme for condition matching in fog-cloud-based VANETs. The scheme employs general Elliptic Curve Cryptosystem (ECC) technology and fog-cloud computing methods to decrease computational overhead for On-Board Units (OBUs) and supports multiple TAs for improved service quality and robustness. Furthermore, certificateless technology relieves TAs of key-management burdens. The security analysis indicates that our solution satisfies the communication security and privacy requirements. Experimental simulations verify that our method achieves optimal overall performance with lower computational costs and smaller communication overhead compared to state-of-the-art solutions.

RevDate: 2024-03-30
CmpDate: 2024-03-29

Yuan DY, Park JH, Li Z, et al (2024)

A New Cloud-Native Tool for Pharmacogenetic Analysis.

Genes, 15(3):.

BACKGROUND: The advancement of next-generation sequencing (NGS) technologies provides opportunities for large-scale Pharmacogenetic (PGx) studies and pre-emptive PGx testing to cover a wide range of genotypes present in diverse populations. However, NGS-based PGx testing is limited by the lack of comprehensive computational tools to support genetic data analysis and clinical decisions.

METHODS: Bioinformatics utilities specialized for human genomics and the latest cloud-based technologies were used to develop a bioinformatics pipeline for analyzing the genomic sequence data and reporting PGx genotypes. A database was created and integrated in the pipeline for filtering the actionable PGx variants and clinical interpretations. Strict quality verification procedures were conducted on variant calls with the whole genome sequencing (WGS) dataset of the 1000 Genomes Project (G1K). The accuracy of PGx allele identification was validated using the WGS dataset of the Pharmacogenetics Reference Materials from the Centers for Disease Control and Prevention (CDC).
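The filtering step — matching variant calls against a database of actionable PGx variants — can be sketched as a simple keyed lookup. The database entries and field names below are illustrative stand-ins, not Pgxtools internals, and the interpretations are examples rather than clinical guidance:

```python
# Hypothetical actionable-variant table: (gene, rsID, genotype) -> note.
PGX_DB = {
    ("CYP2C19", "rs4244285", "A/A"):
        "Example note: reduced CYP2C19 function; review clopidogrel use",
    ("DPYD", "rs3918290", "C/T"):
        "Example note: reduced DPYD activity; review fluoropyrimidine dose",
}

def annotate_calls(variant_calls):
    """Keep only calls with an actionable PGx interpretation.

    variant_calls: list of dicts with 'gene', 'rsid', and 'genotype'
    keys, as an upstream variant-calling step might produce."""
    report = []
    for call in variant_calls:
        key = (call["gene"], call["rsid"], call["genotype"])
        if key in PGX_DB:
            report.append({**call, "interpretation": PGX_DB[key]})
    return report
```

In a real pipeline the table would be curated from clinical practice guidelines and the lookup applied after quality-filtered genotyping, but the report-building shape is the same.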

RESULTS: The newly created bioinformatics pipeline, Pgxtools, can analyze genomic sequence data, identify actionable variants in 13 PGx relevant genes, and generate reports annotated with specific interpretations and recommendations based on clinical practice guidelines. Verified with two independent methods, we have found that Pgxtools consistently identifies variants more accurately than the results in the G1K dataset on GRCh37 and GRCh38.

CONCLUSIONS: Pgxtools provides an integrated workflow for large-scale genomic data analysis and PGx clinical decision support. Implemented with cloud-native technologies, it is highly portable in a wide variety of environments from a single laptop to High-Performance Computing (HPC) clusters and cloud platforms for different production scales and requirements.

RevDate: 2024-03-29

Kukkar A, Kumar Y, Sandhu JK, et al (2024)

DengueFog: A Fog Computing-Enabled Weighted Random Forest-Based Smart Health Monitoring System for Automatic Dengue Prediction.

Diagnostics (Basel, Switzerland), 14(6):.

Dengue is a distinctive and potentially fatal infectious disease spread by female Aedes aegypti mosquitoes. It is a notable concern for developing countries due to its low diagnosis rate. Because of the severe platelet depletion it causes, dengue has one of the highest mortality levels among fevers of its class and can be categorized as life-threatening. Dengue fever also shares many symptoms with other flu-like fevers. Meanwhile, the research community is closely studying IoT, fog, and cloud computing for the diagnosis and prediction of diseases, and IoT-, fog-, and cloud-based technologies are being used to construct a number of health care systems. Accordingly, in this study, a DengueFog monitoring system based on fog computing was created for the prediction and detection of dengue sickness. The proposed DengueFog system includes a weighted random forest (WRF) classifier to monitor and predict dengue infection. The system's efficacy was evaluated using data on dengue infection gathered between 2016 and 2018 from several hospitals in the Delhi-NCR region. The accuracy, F-value, recall, precision, error rate, and specificity metrics were used to assess the simulation results of the suggested monitoring system. The proposed DengueFog monitoring system with WRF outperformed the traditional classifiers.
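A weighted random forest differs from a plain one mainly at prediction time: each tree's vote is scaled by a weight, for example its validation accuracy. A minimal sketch with hypothetical decision stumps and thresholds, not the DengueFog model:

```python
def weighted_forest_predict(trees, weights, x):
    """Weighted majority vote: each tree's vote counts in proportion
    to its weight (e.g., per-tree validation accuracy)."""
    votes = {}
    for tree, w in zip(trees, weights):
        label = tree(x)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

Weighting lets reliable trees dominate borderline cases, which is one way a WRF can beat an unweighted forest on noisy clinical sensor data.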


