QUERY RUN: 22 May 2024 at 01:41
HITS: 3568

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 22 May 2024 at 01:41

Cloud Computing

Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2024-05-21

Hulagappa Nebagiri M, L Pillappa Hnumanthappa (2024)

Fractional social optimization-based migration and replica management algorithm for load balancing in distributed file system for cloud computing.

Network (Bristol, England) [Epub ahead of print].

Effective management of data is a major issue in a Distributed File System (DFS), such as the cloud. This issue is handled by replicating files in an effective manner, which can minimize data access time and increase data availability. This paper devises a Fractional Social Optimization Algorithm (FSOA) for replica management along with load balancing in a cloud DFS. Balancing the workload of the DFS is the main objective. Here, chunk creation is done by partitioning the file into a number of chunks using Deep Fuzzy Clustering (DFC), and the chunks are then assigned to Virtual Machines (VMs) in a round-robin manner. Load balancing is then performed with the proposed FSOA, considering objectives such as resource use, energy consumption, and migration cost. The FSOA is formulated by uniting the Social Optimization Algorithm (SOA) and Fractional Calculus (FC). Replica management is done in the DFS using the proposed FSOA by considering these objectives. The FSOA achieved the smallest load of 0.299, smallest cost of 0.395, smallest energy consumption of 0.510, smallest overhead of 0.358, and smallest throughput of 0.537.
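
To make the chunking and round-robin assignment step described above concrete, here is a minimal Python sketch. The names and the byte-level chunking are illustrative assumptions; the paper's Deep Fuzzy Clustering and FSOA optimization are not reproduced.

```python
# Minimal sketch of chunk creation and round-robin VM assignment, as
# described in the abstract above. Names are illustrative; the paper's
# DFC clustering and FSOA optimization are not reproduced here.

def partition_into_chunks(data: bytes, n_chunks: int) -> list[bytes]:
    """Split a file's bytes into roughly equal chunks."""
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

def round_robin_assign(chunks: list[bytes], vms: list[str]) -> dict[str, list[bytes]]:
    """Assign chunks to VMs in round-robin order."""
    placement: dict[str, list[bytes]] = {vm: [] for vm in vms}
    for i, chunk in enumerate(chunks):
        placement[vms[i % len(vms)]].append(chunk)
    return placement

if __name__ == "__main__":
    file_bytes = b"x" * 1000
    vms = ["vm-0", "vm-1", "vm-2"]
    placement = round_robin_assign(partition_into_chunks(file_bytes, 6), vms)
    for vm, chunks in placement.items():
        print(vm, [len(c) for c in chunks])
```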

RevDate: 2024-05-21

Qureshi KM, Mewada BG, Kaur S, et al (2024)

Investigating industry 4.0 technologies in logistics 4.0 usage towards sustainable manufacturing supply chain.

Heliyon, 10(10):e30661.

In the era of Industry 4.0 (I4.0), automation and data analysis have undergone significant advancements, greatly impacting production management and operations management. Technologies such as the Internet of Things (IoT), robotics, cloud computing (CC), and big data have played a crucial role in shaping Logistics 4.0 (L4.0) and improving the efficiency of the manufacturing supply chain (SC), ultimately contributing to sustainability goals. The present research investigates the role of I4.0 technologies within the framework of the extended theory of planned behavior (ETPB). The research explores variables including subjective norms, attitude, and perceived behavioral control, which lead to word-of-mouth and purchase intention. By modeling these variables, the study aims to understand the influence of I4.0 technologies on L4.0 to establish a sustainable manufacturing SC. A questionnaire was administered to gather input from small and medium-sized firms (SMEs) in the manufacturing industry. An empirical study, along with partial least squares structural equation modeling (SEM), was conducted to analyze the data. The findings indicate that the use of I4.0 technology in L4.0 influences subjective norms, which subsequently influence attitudes and personal behavior control. This, in turn, leads to word-of-mouth and purchase intention. The results provide valuable insights for shippers and logistics service providers, empowering them to enhance their performance and contribute to achieving sustainability objectives. Consequently, this study contributes to promoting sustainability in the manufacturing SC by stimulating the adoption of I4.0 technologies in L4.0.

RevDate: 2024-05-20
CmpDate: 2024-05-20

Vo DH, Vo AT, Dinh CT, et al (2024)

Corporate restructuring and firm performance in Vietnam: The moderating role of digital transformation.

PloS one, 19(5):e0303491 pii:PONE-D-23-40777.

In the digital age, firms should continually innovate and adapt to remain competitive and enhance performance. Innovation and adaptation require firms to take a holistic approach to their corporate structuring to ensure efficiency and effectiveness to stay competitive. This study examines how corporate restructuring impacts firm performance in Vietnam. We then investigate the moderating role of digital transformation in the corporate restructuring-firm performance nexus. We use content analysis, with a focus on particular terms, including "digitalization," "big data," "cloud computing," "blockchain," and "information technology" for 11 years, from 2011 to 2021. The frequency index from these keywords is developed to proxy the digital transformation for the Vietnamese listed firms. A final sample includes 118 Vietnamese listed firms with sufficient data for the analysis using the generalized method of moments (GMM) approach. The results indicate that corporate restructuring, including financial, portfolio, and operational restructuring, has a negative effect on firm performance in Vietnam. Digital transformation also negatively affects firm performance. However, corporate restructuring implemented in conjunction with digital transformation improves the performance of Vietnamese listed firms. These findings largely remain unchanged across various robustness analyses.

RevDate: 2024-05-16

Gupta I, Saxena D, Singh AK, et al (2024)

A Multiple Controlled Toffoli Driven Adaptive Quantum Neural Network Model for Dynamic Workload Prediction in Cloud Environments.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

The key challenges in cloud computing encompass dynamic resource scaling, load balancing, and power consumption. Accurate workload prediction is identified as a crucial strategy to address these challenges. Despite numerous methods proposed to tackle this issue, existing approaches fall short of capturing the high-variance nature of volatile and dynamic cloud workloads. Consequently, this paper introduces a novel model aimed at addressing this limitation. This paper presents a novel Multiple Controlled Toffoli-driven Adaptive Quantum Neural Network (MCT-AQNN) model to establish an empirical solution to complex, elastic, and challenging workload prediction problems by optimizing the exploration, adaptation, and exploitation proficiencies through quantum learning. The computational adaptability of quantum computing is ingrained with machine learning algorithms to derive more precise correlations from dynamic and complex workloads. The furnished input data points and hatched neural weights are refitted in the form of qubits, while the controlling effects of Multiple Controlled Toffoli (MCT) gates are operated at the hidden and output layers of the Quantum Neural Network (QNN) to enhance learning capabilities. Complementarily, a Uniformly Adaptive Quantum Machine Learning (UAQL) algorithm is developed to functionally and effectually train the QNN. Extensive experiments are conducted and comparisons are performed with state-of-the-art methods using four real-world benchmark datasets. Experimental results show that MCT-AQNN has up to 32%-96% higher accuracy than the existing approaches.

RevDate: 2024-05-15

Koenig Z, Yohannes MT, Nkambule LL, et al (2024)

A harmonized public resource of deeply sequenced diverse human genomes.

Genome research pii:gr.278378.123 [Epub ahead of print].

Underrepresented populations are often excluded from genomic studies due in part to a lack of resources supporting their analyses. The 1000 Genomes Project (1kGP) and Human Genome Diversity Project (HGDP), which have recently been sequenced to high coverage, are valuable genomic resources because of the global diversity they capture and their open data sharing policies. Here, we harmonized a high quality set of 4,094 whole genomes from 80 populations in the HGDP and 1kGP with data from the Genome Aggregation Database (gnomAD) and identified over 153 million high-quality SNVs, indels, and SVs. We performed a detailed ancestry analysis of this cohort, characterizing population structure and patterns of admixture across populations, analyzing site frequency spectra, and measuring variant counts at global and subcontinental levels. We also demonstrate substantial added value from this dataset compared to the prior versions of the component resources, typically combined via liftOver and variant intersection; for example, we catalog millions of new genetic variants, mostly rare, compared to previous releases. In addition to unrestricted individual-level public release, we provide detailed tutorials for conducting many of the most common quality control steps and analyses with these data in a scalable cloud-computing environment and publicly release this new phased joint callset for use as a haplotype resource in phasing and imputation pipelines. This jointly called reference panel will serve as a key resource to support research of diverse ancestry populations.

RevDate: 2024-05-15

Thiriveedhi VK, Krishnaswamy D, Clunie D, et al (2024)

Cloud-based large-scale curation of medical imaging data using AI segmentation.

Research square pii:rs.3.rs-4351526.

Rapid advances in medical imaging Artificial Intelligence (AI) offer unprecedented opportunities for automatic analysis and extraction of data from large imaging collections. Computational demands of such modern AI tools may be difficult to satisfy with the capabilities available on premises. Cloud computing offers the promise of economical access and extreme scalability. Few studies examine the price/performance tradeoffs of using the cloud, in particular for medical image analysis tasks. We investigate the use of cloud-provisioned compute resources for AI-based curation of the National Lung Screening Trial (NLST) Computed Tomography (CT) images available from the National Cancer Institute (NCI) Imaging Data Commons (IDC). We evaluated NCI Cancer Research Data Commons (CRDC) Cloud Resources - Terra (FireCloud) and Seven Bridges-Cancer Genomics Cloud (SB-CGC) platforms - to perform automatic image segmentation with TotalSegmentator and pyradiomics feature extraction for a large cohort containing >126,000 CT volumes from >26,000 patients. Utilizing >21,000 Virtual Machines (VMs) over the course of the computation we completed analysis in under 9 hours, as compared to the estimated 522 days that would be needed on a single workstation. The total cost of utilizing the cloud for this analysis was $1,011.05. Our contributions include: 1) an evaluation of the numerous tradeoffs towards optimizing the use of cloud resources for large-scale image analysis; 2) CloudSegmentator, an open source reproducible implementation of the developed workflows, which can be reused and extended; 3) practical recommendations for utilizing the cloud for large-scale medical image computing tasks. We also share the results of the analysis: the total of 9,565,554 segmentations of the anatomic structures and the accompanying radiomics features in IDC as of release v18.
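
The scale of the reported savings can be sanity-checked with simple arithmetic, using only the figures quoted in the abstract:

```python
# Speedup and per-volume cost implied by the figures above.
single_workstation_hours = 522 * 24        # estimated 522 days -> 12,528 h
cloud_hours = 9                            # wall-clock time on ~21,000 VMs
print(f"speedup ~ {single_workstation_hours / cloud_hours:.0f}x")   # ~1392x

total_cost, ct_volumes = 1011.05, 126_000  # quoted totals
print(f"cost per CT volume < ${total_cost / ct_volumes:.4f}")       # < $0.0081
```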

RevDate: 2024-05-14

Philippou J, Yáñez Feliú G, TJ Rudge (2024)

WebCM: A Web-Based Platform for Multiuser Individual-Based Modeling of Multicellular Microbial Populations and Communities.

ACS synthetic biology [Epub ahead of print].

WebCM is a web platform that enables users to create, edit, run, and view individual-based simulations of multicellular microbial populations and communities on a remote compute server. WebCM builds upon the simulation software CellModeller in the back end and provides users with a web-browser-based modeling interface including model editing, execution, and playback. Multiple users can run and manage multiple simulations simultaneously, sharing the host hardware. Since it is based on CellModeller, it can utilize both GPU and CPU parallelization. The user interface provides real-time interactive 3D graphical representations for inspection of simulations at all time points, and the results can be downloaded for detailed offline analysis. It can be run on cloud computing services or on a local server, allowing collaboration within and between laboratories.

RevDate: 2024-05-11

Lin Z, J Liang (2024)

Edge Caching Data Distribution Strategy with Minimum Energy Consumption.

Sensors (Basel, Switzerland), 24(9): pii:s24092898.

In the context of the rapid development of the Internet of Vehicles, virtual reality, automatic driving, and the industrial Internet, terminal devices in the network are growing explosively. As a result, more and more information is generated at the edge of the network, which dramatically increases data throughput in the mobile communication network. As a key technology of the fifth-generation mobile communication network, mobile edge caching, which caches popular data on edge servers deployed at the edge of the network, avoids the data transmission delay of the backhaul link and the occurrence of network congestion. With the growing scale of the network, distributing hot data from cloud servers to edge servers generates huge energy consumption. To realize the green and sustainable development of the communication industry and reduce the energy consumption of distributing data that needs to be cached in edge servers, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC) in this paper. First, we model and formulate the problem as a constrained optimization problem and then prove its NP-hardness. Subsequently, we design a greedy algorithm with computational complexity of O(n²) to solve the problem approximately. Experimental results show that, compared with the strategy of each edge server directly requesting data from the cloud server, the strategy obtained by the algorithm can significantly reduce the energy consumption of data distribution.
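
The paper's algorithm itself is not reproduced here, but the following Python sketch captures the general idea under stated assumptions: once an edge server holds the item, relaying it onward can be cheaper than another cloud download, and a greedy rule always serves the cheapest remaining server first. All costs below are made up, and this illustrative version favors clarity over the paper's O(n²) bound.

```python
# Illustrative greedy sketch in the spirit of the ECDDMEC problem above.

def greedy_distribution(servers, cloud_cost, edge_cost):
    """Return (total_energy, plan) for distributing one hot data item.

    cloud_cost[s]     : energy to send the item from the cloud to server s
    edge_cost[(u, s)] : energy to relay the item from served server u to s
    Greedy rule: always serve next the server with the cheapest source.
    """
    served, remaining, total, plan = set(), set(servers), 0.0, []

    def best_source(s):
        options = [("cloud", cloud_cost[s])]
        options += [(u, edge_cost[(u, s)]) for u in served]
        return min(options, key=lambda o: o[1])

    while remaining:
        s = min(remaining, key=lambda s: best_source(s)[1])
        src, cost = best_source(s)
        total += cost
        plan.append((src, s, cost))
        served.add(s)
        remaining.remove(s)
    return total, plan

total, plan = greedy_distribution(
    ["e1", "e2", "e3"],
    {"e1": 10.0, "e2": 9.0, "e3": 12.0},
    {(u, v): 3.0 for u in ("e1", "e2", "e3") for v in ("e1", "e2", "e3") if u != v},
)
print(total, plan)  # 15.0 here, versus 31.0 if every server fetched from the cloud
```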

RevDate: 2024-05-11

Emvoliadis A, Vryzas N, Stamatiadou ME, et al (2024)

Multimodal Environmental Sensing Using AI & IoT Solutions: A Cognitive Sound Analysis Perspective.

Sensors (Basel, Switzerland), 24(9): pii:s24092755.

This study presents a novel audio compression technique, tailored for environmental monitoring within multi-modal data processing pipelines. Considering the crucial role that audio data play in environmental evaluations, particularly in contexts with extreme resource limitations, our strategy substantially decreases bit rates to facilitate efficient data transfer and storage. This is accomplished without undermining the accuracy necessary for trustworthy air pollution analysis while simultaneously minimizing processing expenses. More specifically, our approach fuses a Deep-Learning-based model, optimized for edge devices, along with a conventional coding schema for audio compression. Once transmitted to the cloud, the compressed data undergo a decoding process, leveraging vast cloud computing resources for accurate reconstruction and classification. The experimental results indicate that our approach leads to a relatively minor decrease in accuracy, even at notably low bit rates, and demonstrates strong robustness in identifying data from labels not included in our training dataset.

RevDate: 2024-05-11

Hanczewski S, Stasiak M, M Weissenberg (2024)

An Analytical Model of IaaS Architecture for Determining Resource Utilization.

Sensors (Basel, Switzerland), 24(9): pii:s24092758.

Cloud computing has become a major component of the modern IT ecosystem. A key contributor to this has been the development of Infrastructure as a Service (IaaS) architecture, in which users' virtual machines (VMs) are run on the service provider's physical infrastructure, making it possible to become independent of the need to purchase one's own physical machines (PMs). One of the main aspects to consider when designing such systems is achieving the optimal utilization of individual resources, such as processor, RAM, disk, and available bandwidth. In response to these challenges, the authors developed an analytical model (the ARU method) to determine the average utilization levels of the aforementioned resources. The effectiveness of the proposed analytical model was evaluated by comparing the results obtained by utilizing the model with those obtained by conducting a digital simulation of the operation of a cloud system according to the IaaS paradigm. The results show the effectiveness of the model regardless of the structure of the emerging requests, the variability of the capacity of individual resources, and the number of physical machines in the system. This translates into the applicability of the model in the design process of cloud systems.

RevDate: 2024-05-10

Du X, Novoa-Laurentiev J, Plasek JM, et al (2024)

Enhancing Early Detection of Cognitive Decline in the Elderly: A Comparative Study Utilizing Large Language Models in Clinical Notes.

medRxiv : the preprint server for health sciences pii:2024.04.03.24305298.

BACKGROUND: Large language models (LLMs) have shown promising performance in various healthcare domains, but their effectiveness in identifying specific clinical conditions in real medical records is less explored. This study evaluates LLMs for detecting signs of cognitive decline in real electronic health record (EHR) clinical notes, comparing their error profiles with traditional models. The insights gained will inform strategies for performance enhancement.

METHODS: This study, conducted at Mass General Brigham in Boston, MA, analyzed clinical notes from the four years prior to a 2019 diagnosis of mild cognitive impairment in patients aged 50 and older. We used a randomly annotated sample of 4,949 note sections, filtered with keywords related to cognitive functions, for model development. For testing, a random annotated sample of 1,996 note sections without keyword filtering was utilized. We developed prompts for two LLMs, Llama 2 and GPT-4, on HIPAA-compliant cloud-computing platforms using multiple approaches (e.g., both hard and soft prompting and error analysis-based instructions) to select the optimal LLM-based method. Baseline models included a hierarchical attention-based neural network and XGBoost. Subsequently, we constructed an ensemble of the three models using a majority vote approach.

RESULTS: GPT-4 demonstrated superior accuracy and efficiency compared to Llama 2, but did not outperform traditional models. The ensemble model outperformed the individual models, achieving a precision of 90.3%, a recall of 94.2%, and an F1-score of 92.2%. Notably, the ensemble model showed a significant improvement in precision, increasing from a range of 70%-79% to above 90%, compared to the best-performing single model. Error analysis revealed that 63 samples were incorrectly predicted by at least one model; however, only 2 cases (3.2%) were mutual errors across all models, indicating diverse error profiles among them.

CONCLUSIONS: LLMs and traditional machine learning models trained using local EHR data exhibited diverse error profiles. The ensemble of these models was found to be complementary, enhancing diagnostic performance. Future research should investigate integrating LLMs with smaller, localized models and incorporating medical data and domain knowledge to enhance performance on specific tasks.
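
As an illustration of the majority-vote ensembling described in the methods, and a consistency check of the reported metrics, consider this short sketch (synthetic 0/1 predictions, not the study's data or models):

```python
import numpy as np

# Majority vote across three binary classifiers, as described in METHODS.
def majority_vote(preds: np.ndarray) -> np.ndarray:
    """preds: shape (n_models, n_samples) of 0/1 predictions."""
    return (preds.sum(axis=0) >= preds.shape[0] // 2 + 1).astype(int)

preds = np.array([[1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 1]])
print(majority_vote(preds))  # [1 0 1 1]

# Consistency check of the ensemble metrics quoted above:
p, r = 0.903, 0.942
print(f"F1 = {2 * p * r / (p + r):.3f}")  # ~0.922, matching the 92.2% reported
```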

RevDate: 2024-05-08

Kent RM, Barbosa WAS, DJ Gauthier (2024)

Controlling chaos using edge computing hardware.

Nature communications, 15(1):3886.

Machine learning provides a data-driven approach for creating a digital twin of a system - a digital model used to predict the system behavior. Having an accurate digital twin can drive many applications, such as controlling autonomous systems. Often, the size, weight, and power consumption of the digital twin or related controller must be minimized, ideally realized on embedded computing hardware that can operate without a cloud-computing connection. Here, we show that a nonlinear controller based on next-generation reservoir computing can tackle a difficult control problem: controlling a chaotic system to an arbitrary time-dependent state. The model is accurate, yet it is small enough to be evaluated on a field-programmable gate array typically found in embedded devices. Furthermore, the model only requires 25.0 ± 7.0 nJ per evaluation, well below other algorithms, even without systematic power optimization. Our work represents the first step in deploying efficient machine learning algorithms to the computing "edge."

RevDate: 2024-05-07

Buchanan BC, Tang Y, Lopez H, et al (2024)

Development of a cloud-based flow rate tool for eNAMPT biomarker detection.

PNAS nexus, 3(5):pgae173.

Increased levels of extracellular nicotinamide phosphoribosyltransferase (eNAMPT) are increasingly recognized as a highly useful biomarker of inflammatory disease and disease severity. In preclinical animal studies, a monoclonal antibody that neutralizes eNAMPT has been generated to successfully reduce the extent of inflammatory cascade activation. Thus, the rapid detection of eNAMPT concentration in plasma samples at the point of care (POC) would be of great utility in assessing the benefit of administering an anti-eNAMPT therapeutic. To determine the feasibility of this POC test, we conducted a particle immunoagglutination assay on a paper microfluidic platform and quantified its extent with a flow rate measurement in less than 1 min. A smartphone and cloud-based Google Colab were used to analyze the flow rates automatically. A horizontal flow model and an immunoagglutination binding model were evaluated to optimize the detection time, sample dilution, and particle concentration. This assay successfully detected eNAMPT in both human whole blood and plasma samples (diluted to 10 and 1%), with the limit of detection of 1-20 pg/mL (equivalent to 0.1-0.2 ng/mL in undiluted blood and plasma) and a linear range of 5-40 pg/mL. Furthermore, the smartphone POC assay distinguished clinical samples with low, mid, and high eNAMPT concentrations. Together, these results indicate this POC assay, which utilizes low-cost materials, time-effective methods, and a straightforward immunoassay (without surface immobilization), may reliably allow rapid determination of eNAMPT blood/plasma levels to advantage patient stratification in clinical trials and guide ALT-100 mAb therapeutic decision-making.

RevDate: 2024-05-06

Sankar M S K, Gupta S, Luthra S, et al (2024)

Empowering sustainable manufacturing: Unleashing digital innovation in spool fabrication industries.

Heliyon, 10(9):e29994.

In industrial landscapes, spool fabrication industries play a crucial role in the successful completion of numerous industrial projects by providing prefabricated modules. However, the implementation of digitalized sustainable practices in spool fabrication industries is progressing slowly and is still in its embryonic stage due to several challenges. To implement digitalized sustainable manufacturing (SM), digital technologies such as Internet of Things, Cloud computing, Big data analytics, Cyber-physical systems, Augmented reality, Virtual reality, and Machine learning are required in the context of sustainability. The scope of the present study entails prioritization of the enablers that promote the implementation of digitalized sustainable practices in spool fabrication industries using the Improved Fuzzy Stepwise Weight Assessment Ratio Analysis (IMF-SWARA) method integrated with Triangular Fuzzy Bonferroni Mean (TFBM). The enablers are identified through a systematic literature review and are validated by a team of seven experts through a questionnaire survey. Then the finally identified enablers are analyzed by the IMF-SWARA and TFBM integrated approach. The results indicate that the most significant enablers are management support, leadership, governmental policies and regulations to implement digitalized SM. The study provides a comprehensive analysis of digital SM enablers in the spool fabrication industry and offers guidelines for the transformation of conventional systems into digitalized SM practices.

RevDate: 2024-05-05

Mishra A, Kim HS, Kumar R, et al (2024)

Advances in Vibrio-related infection management: an integrated technology approach for aquaculture and human health.

Critical reviews in biotechnology [Epub ahead of print].

Vibrio species pose significant threats worldwide, causing mortalities in aquaculture and infections in humans. With global warming, worldwide strains of Vibrio diseases are emerging at an increasing rate. Control of Vibrio species requires effective monitoring, diagnosis, and treatment strategies at the global scale. Despite current efforts based on chemical, biological, and mechanical means, Vibrio control management faces limitations due to complicated implementation processes. This review explores the intricacies and challenges of Vibrio-related diseases, including accurate and cost-effective diagnosis and effective control. The global burden due to emerging Vibrio species further complicates management strategies. We propose an innovative integrated technology model that harnesses cutting-edge technologies to address these obstacles. The proposed model incorporates advanced tools, such as biosensing technologies, the Internet of Things (IoT), remote sensing devices, cloud computing, and machine learning. This model offers invaluable insights and supports better decision-making by integrating real-time ecological data and biological phenotype signatures. A major advantage of our approach lies in leveraging cloud-based analytics programs, efficiently extracting meaningful information from vast and complex datasets. Collaborating with data and clinical professionals ensures logical and customized solutions tailored to each unique situation. Aquaculture biotechnology that prioritizes sustainability may have a large impact on human health and the seafood industry. Our review underscores the importance of adopting this model, revolutionizing the prognosis and management of Vibrio-related infections, even under complex circumstances. Furthermore, this model has promising implications for aquaculture and public health, addressing the United Nations Sustainable Development Goals and their development agenda.

RevDate: 2024-05-02

Han Y, Wei Z, G Huang (2024)

An imbalance data quality monitoring based on SMOTE-XGBOOST supported by edge computing.

Scientific reports, 14(1):10151.

Product assembly involves extensive production data that is characterized by high dimensionality, multiple samples, and data imbalance. The article proposes an edge computing-based framework for monitoring product assembly quality in the industrial Internet of Things. Edge computing technology relieves the pressure of aggregating enormous amounts of data to the cloud center for processing. To address the problem of data imbalance, we compared five sampling methods: Borderline SMOTE, Random Downsampling, Random Upsampling, SMOTE, and ADASYN. Finally, the quality monitoring model SMOTE-XGBoost is proposed, and the hyperparameters of the model are optimized using the Grid Search method. The proposed framework and quality control methodology were applied to an assembly line of IGBT modules for a traction system, and the validity of the model was experimentally verified.
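
A pipeline of this shape can be sketched with the imbalanced-learn and xgboost packages. The data and parameter grid below are illustrative stand-ins, not the paper's configuration:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# Imbalanced synthetic data standing in for the assembly-quality records.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Oversample the minority (defect) class with SMOTE.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# 2) Tune XGBoost hyperparameters with grid search, as in the paper.
grid = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [3, 5], "n_estimators": [100, 300],
                "learning_rate": [0.05, 0.1]},
    scoring="f1", cv=3,
)
grid.fit(X_bal, y_bal)
print(grid.best_params_, grid.score(X_te, y_te))
```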

RevDate: 2024-05-02

Peccoud S, Berezin CT, Hernandez SI, et al (2024)

PlasCAT: Plasmid cloud assembly tool.

Bioinformatics (Oxford, England) pii:7663467 [Epub ahead of print].

SUMMARY: PlasCAT is an easy-to-use cloud-based bioinformatics tool that enables de novo plasmid sequence assembly from raw sequencing data. Non-technical users can now assemble sequences from long reads and short reads without ever touching a line of code. PlasCAT uses high-performance computing servers to reduce run times on assemblies and deliver results faster.

PlasCAT is freely available on the web at https://sequencing.genofab.com. The assembly pipeline source code and server code are available for download at https://bitbucket.org/genofabinc/workspace/projects/PLASCAT. Click the Cancel button to access the source code without authenticating. Web servers implemented in React.js and Python, with all major browsers supported.

RevDate: 2024-05-02

Blindenbach J, Kang J, Hong S, et al (2024)

Ultra-secure storage and analysis of genetic data for the advancement of precision medicine.

bioRxiv : the preprint server for biology pii:2024.04.16.589793.

Cloud computing provides the opportunity to store the ever-growing genotype-phenotype data sets needed to achieve the full potential of precision medicine. However, due to the sensitive nature of this data and the patchwork of data privacy laws across states and countries, additional security protections are proving necessary to ensure data privacy and security. Here we present SQUiD, a secure queryable database for storing and analyzing genotype-phenotype data. With SQUiD, genotype-phenotype data can be stored in a low-security, low-cost public cloud in the encrypted form, which researchers can securely query without the public cloud ever being able to decrypt the data. We demonstrate the usability of SQUiD by replicating various commonly used calculations such as polygenic risk scores, cohort creation for GWAS, MAF filtering, and patient similarity analysis both on synthetic and UK Biobank data. Our work represents a new and scalable platform enabling the realization of precision medicine without security and privacy concerns.

RevDate: 2024-04-29

Drmota P, Nadlinger DP, Main D, et al (2024)

Verifiable Blind Quantum Computing with Trapped Ions and Single Photons.

Physical review letters, 132(15):150604.

We report the first hybrid matter-photon implementation of verifiable blind quantum computing. We use a trapped-ion quantum server and a client-side photonic detection system networked via a fiber-optic quantum link. The availability of memory qubits and deterministic entangling gates enables interactive protocols without postselection, a key requirement for any scalable blind server, which previous realizations could not provide. We quantify the privacy at ≲0.03 leaked classical bits per qubit. This experiment demonstrates a path to fully verified quantum computing in the cloud.

RevDate: 2024-04-29

Schweitzer M, Ostheimer P, Lins A, et al (2024)

Transforming Tele-Ophthalmology: Utilizing Cloud Computing for Remote Eye Care.

Studies in health technology and informatics, 313:215-220.

BACKGROUND: Tele-ophthalmology is gaining recognition for its role in improving eye care accessibility via cloud-based solutions. The Google Cloud Platform (GCP) Healthcare API enables secure and efficient management of medical image data such as high-resolution ophthalmic images.

OBJECTIVES: This study investigates cloud-based solutions' effectiveness in tele-ophthalmology, with a focus on GCP's role in data management, annotation, and integration for a novel imaging device.

METHODS: Leveraging the Integrating the Healthcare Enterprise (IHE) Eye Care profile, the cloud platform was utilized as a PACS and integrated with the Open Health Imaging Foundation (OHIF) Viewer for image display and annotation capabilities for ophthalmic images.

RESULTS: The setup of a GCP DICOM storage and the OHIF Viewer facilitated remote image data analytics. Prolonged loading times and relatively large individual image file sizes indicated system challenges.

CONCLUSION: Cloud platforms have the potential to ease distributed data analytics, as needed for efficient tele-ophthalmology scenarios in research and clinical practice, by providing scalable and secure image management solutions.

RevDate: 2024-04-29

Iorga M, K Scarfone (2016)

Using a Capability Oriented Methodology to Build Your Cloud Ecosystem.

IEEE cloud computing, 3(2):.

Organizations often struggle to capture the necessary functional capabilities for each cloud-based solution adopted for their information systems. Identifying, defining, selecting, and prioritizing these functional capabilities and the security components that implement and enforce them is surprisingly challenging. This article explains recent developments by the National Institute of Standards and Technology (NIST) in addressing these challenges. The article focuses on the capability oriented methodology for orchestrating a secure cloud ecosystem proposed as part of the NIST Cloud Computing Security Reference Architecture. The methodology recognizes that risk may vary for cloud Actors within a single ecosystem, so it takes a risk-based approach to functional capabilities. The result is an assessment of which cloud Actor is responsible for implementing each security component and how implementation should be prioritized. A cloud Actor, especially a cloud Consumer, that follows the methodology can more easily make well-informed decisions regarding their cloud ecosystems.

RevDate: 2024-04-27

van der Laan EE, Hazenberg P, AH Weerts (2024)

Simulation of long-term storage dynamics of headwater reservoirs across the globe using public cloud computing infrastructure.

The Science of the total environment pii:S0048-9697(24)02825-0 [Epub ahead of print].

Reservoirs play an important role in relation to water security, flood risk, hydropower, and the natural flow regime. This study derives a novel dataset with a long-term daily water balance (reservoir volume, inflow, outflow, evaporation, and precipitation) of headwater reservoirs and storage dynamics across the globe. The data are generated using cloud computing infrastructure and a high-resolution distributed hydrological model, wflow_sbm. Model results are validated against earth-observed surface water area and in-situ measured reservoir volume and show overall good model performance. Simulated headwater reservoir storage indicates that 19.4-24.4 % of the reservoirs had a significant decrease in storage. This change is mainly driven by a decrease in reservoir inflow and an increase in evaporation. Deployment on a Kubernetes cloud environment and the use of reproducible workflows show that these kinds of simulations and analyses can be conducted in less than a day.

RevDate: 2024-04-27

Abdullahi I, Longo S, M Samie (2024)

Towards a Distributed Digital Twin Framework for Predictive Maintenance in Industrial Internet of Things (IIoT).

Sensors (Basel, Switzerland), 24(8): pii:s24082663.

This study uses a wind turbine case study as a subdomain of Industrial Internet of Things (IIoT) to showcase an architecture for implementing a distributed digital twin in which all important aspects of a predictive maintenance solution in a DT use a fog computing paradigm, and the typical predictive maintenance DT is improved to offer better asset utilization and management through real-time condition monitoring, predictive analytics, and health management of selected components of wind turbines in a wind farm. Digital twin (DT) is a technology that sits at the intersection of Internet of Things, Cloud Computing, and Software Engineering to provide a suitable tool for replicating physical objects in the digital space. This can facilitate the implementation of asset management in manufacturing systems through predictive maintenance solutions leveraged by machine learning (ML). With DTs, a solution architecture can easily use data and software to implement asset management solutions such as condition monitoring and predictive maintenance using acquired sensor data from physical objects and computing capabilities in the digital space. While DT offers a good solution, it is an emerging technology that could be improved with better standards, architectural framework, and implementation methodologies. Researchers in both academia and industry have showcased DT implementations with different levels of success. However, DTs remain limited in standards and architectures that offer efficient predictive maintenance solutions with real-time sensor data and intelligent DT capabilities. An appropriate feedback mechanism is also needed to improve asset management operations.

RevDate: 2024-04-26
CmpDate: 2024-04-26

Hsiao J, Deng LC, Moroz LL, et al (2024)

Ocean to Tree: Leveraging Single-Molecule RNA-Seq to Repair Genome Gene Models and Improve Phylogenomic Analysis of Gene and Species Evolution.

Methods in molecular biology (Clifton, N.J.), 2757:461-490.

Understanding gene evolution across genomes and organisms, including ctenophores, can provide unexpected biological insights. It enables powerful integrative approaches that leverage sequence diversity to advance biomedicine. Sequencing and bioinformatic tools can be inexpensive and user-friendly, but numerous options and coding can intimidate new users. Distinct challenges exist in working with data from diverse species but may go unrecognized by researchers accustomed to gold-standard genomes. Here, we provide a high-level workflow and detailed pipeline to enable animal collection, single-molecule sequencing, and phylogenomic analysis of gene and species evolution. As a demonstration, we focus on (1) PacBio RNA-seq of the genome-sequenced ctenophore Mnemiopsis leidyi, (2) diversity and evolution of the mechanosensitive ion channel Piezo in genetic models and basal-branching animals, and (3) associated challenges and solutions to working with diverse species and genomes, including gene model updating and repair using single-molecule RNA-seq. We provide a Python Jupyter Notebook version of our pipeline (GitHub Repository: Ctenophore-Ocean-To-Tree-2023 https://github.com/000generic/Ctenophore-Ocean-To-Tree-2023) that can be run for free in the Google Colab cloud to replicate our findings or modified for specific or greater use. Our protocol enables users to design new sequencing projects in ctenophores, marine invertebrates, or other novel organisms. It provides a simple, comprehensive platform that can ease new user entry into running their evolutionary sequence analyses.

RevDate: 2024-04-26

El Jaouhari A, Arif J, Samadhiya A, et al (2024)

Exploring the application of ICTs in decarbonizing the agriculture supply chain: A literature review and research agenda.

Heliyon, 10(8):e29564 pii:S2405-8440(24)05595-6.

The contemporary agricultural supply chain necessitates the integration of information and communication technologies to effectively mitigate the multifaceted challenges posed by climate change and rising global demand for food products. Furthermore, recent developments in information and communication technologies, such as blockchain, big data analytics, the internet of things, artificial intelligence, cloud computing, etc., have made this transformation possible. Each of these technologies plays a particular role in enabling the agriculture supply chain ecosystem to be intelligent enough to handle today's world's challenges. Thus, this paper reviews the crucial information and communication technologies-enabled agriculture supply chains to understand their potential uses and contemporary developments. The review is supported by 57 research papers from the Scopus database. Five research areas analyze the applications of the technology reviewed in the agriculture supply chain: food safety and traceability, security and information system management, wasting food, supervision and tracking, agricultural businesses and decision-making, and other applications not explicitly related to the agriculture supply chain. The study also emphasizes how information and communication technologies can help agriculture supply chains and promote agriculture supply chain decarbonization. An information and communication technologies application framework for a decarbonized agriculture supply chain is suggested based on the research's findings. The framework identifies the contribution of information and communication technologies to decision-making in agriculture supply chains. The review also offers guidelines to academics, policymakers, and practitioners on managing agriculture supply chains successfully for enhanced agricultural productivity and decarbonization.

RevDate: 2024-04-26

Khazali M, W Lechner (2023)

Scalable quantum processors empowered by the Fermi scattering of Rydberg electrons.

Communications physics, 6(1):57.

Quantum computing promises exponential speed-up compared to its classical counterpart. While the neutral atom processors are the pioneering platform in terms of scalability, the dipolar Rydberg gates impose the main bottlenecks on the scaling of these devices. This article presents an alternative scheme for neutral atom quantum processing, based on the Fermi scattering of a Rydberg electron from ground-state atoms in spin-dependent lattice geometries. Instead of relying on Rydberg pair-potentials, the interaction is controlled by engineering the electron cloud of a sole Rydberg atom. The present scheme addresses the scaling obstacles in Rydberg processors by exponentially suppressing the population of short-lived states and by operating in ultra-dense atomic lattices. The restoring forces in molecule type Rydberg-Fermi potential preserve the trapping over a long interaction period. Furthermore, the proposed scheme mitigates different competing infidelity criteria, eliminates unwanted cross-talks, and significantly suppresses the operation depth in running complicated quantum algorithms.

RevDate: 2024-04-25

Ullah R, Yahya M, Mostarda L, et al (2024)

Intelligent decision making for energy efficient fog nodes selection and smart switching in the IOT: a machine learning approach.

PeerJ. Computer science, 10:e1833.

With the emergence of Internet of Things (IoT) technology, a huge amount of data is generated, which is costly to transfer to cloud data centers in terms of security, bandwidth, and latency. Fog computing is an efficient paradigm for locally processing and manipulating IoT-generated data. It is difficult to configure fog nodes to provide all of the services required by end devices because of their static configuration and limited processing and storage capacities. To enhance fog nodes' capabilities, it is essential to reconfigure them to accommodate a broader range and variety of hosted services. In this study, we focus on the placement of fog services and their dynamic reconfiguration in response to end-device requests. Due to its growing success and popularity in the IoT era, the Decision Tree (DT) machine learning model is implemented to predict the occurrence of requests and events in advance. The DT model enables a fog node to predict requests for a specific service in advance and reconfigure itself accordingly. The performance of the proposed model is evaluated in terms of high throughput, minimized energy consumption, and dynamic fog node smart switching. The simulation results demonstrate a notable increase in fog node hit ratios, scaling up to 99% for the majority of services, concurrently with a substantial reduction in miss ratios. Furthermore, energy consumption is reduced by over 50% as compared to a static node.
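
The prediction step can be illustrated with scikit-learn's DecisionTreeClassifier. The features and service labels below are hypothetical stand-ins for the paper's setup:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Sketch of the DT-based prediction described above: from simple request
# features, predict which service a fog node should pre-configure next.
# Columns (hypothetical): [hour_of_day, requests_last_interval, device_type_id]
X = np.array([[8, 12, 0], [9, 30, 0], [20, 5, 1], [21, 7, 1], [13, 18, 2]])
y = np.array(["video", "video", "sensor-log", "sensor-log", "compute"])

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

incoming = np.array([[9, 25, 0]])
predicted_service = model.predict(incoming)[0]
print(f"pre-configure fog node for: {predicted_service}")
# The fog node could now load the predicted service ahead of time,
# improving its hit ratio and avoiding a round-trip to the cloud.
```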

RevDate: 2024-04-25

Cambronero ME, Martínez MA, Llana L, et al (2024)

Towards a GDPR-compliant cloud architecture with data privacy controlled through sticky policies.

PeerJ. Computer science, 10:e1898.

Data privacy is one of the biggest challenges facing system architects at the system design stage. Especially when certain laws, such as the General Data Protection Regulation (GDPR), have to be complied with by cloud environments. In this article, we want to help cloud providers comply with the GDPR by proposing a GDPR-compliant cloud architecture. To do this, we use model-driven engineering techniques to design cloud architecture and analyze cloud interactions. In particular, we develop a complete framework, called MDCT, which includes a Unified Modeling Language profile that allows us to define specific cloud scenarios and profile validation to ensure that certain required properties are met. The validation process is implemented through the Object Constraint Language (OCL) rules, which allow us to describe the constraints in these models. To comply with many GDPR articles, the proposed cloud architecture considers data privacy and data tracking, enabling safe and secure data management and tracking in the context of the cloud. For this purpose, sticky policies associated with the data are incorporated to define permission for third parties to access the data and track instances of data access. As a result, a cloud architecture designed with MDCT contains a set of OCL rules to validate it as a GDPR-compliant cloud architecture. Our tool models key GDPR points such as user consent/withdrawal, the purpose of access, and data transparency and auditing, and considers data privacy and data tracking with the help of sticky policies.
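
The sticky-policy idea admits a compact illustration: a policy object travels with the data, gates access by party and declared purpose, and logs every access attempt. The structure below is an assumption for illustration, not the MDCT framework's actual model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sticky-policy sketch: consent, purpose-limited third-party
# access, and an audit trail of every access attempt (illustrative only).

@dataclass
class StickyPolicy:
    allowed_parties: set[str]
    allowed_purposes: set[str]
    consent_given: bool = True            # user may withdraw consent (GDPR)
    access_log: list = field(default_factory=list)

    def check(self, party: str, purpose: str) -> bool:
        ok = (self.consent_given
              and party in self.allowed_parties
              and purpose in self.allowed_purposes)
        self.access_log.append(
            (datetime.now(timezone.utc).isoformat(), party, purpose, ok))
        return ok

policy = StickyPolicy({"analytics-partner"}, {"billing", "audit"})
print(policy.check("analytics-partner", "audit"))   # True, and logged
policy.consent_given = False                        # consent withdrawal
print(policy.check("analytics-partner", "audit"))   # False, still logged
```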

RevDate: 2024-04-25

Hassan SR, Rehman AU, Alsharabi N, et al (2024)

Design of load-aware resource allocation for heterogeneous fog computing systems.

PeerJ. Computer science, 10:e1986.

The execution of delay-aware applications can be effectively handled by various computing paradigms, including fog computing, edge computing, and cloudlets. Cloud computing offers services in a centralized way through a cloud server. On the contrary, the fog computing paradigm offers services in a dispersed manner, providing services and computational facilities near the end devices. Due to the distributed provision of resources by the fog paradigm, this architecture is suitable for large-scale implementation of applications. Furthermore, fog computing offers a reduction in delay and network load as compared to cloud architecture. Resource distribution and load balancing are always important tasks in deploying efficient systems. In this research, we have proposed a heuristic-based approach that reduces network consumption and delays by efficiently utilizing fog resources according to the load generated by clusters of edge nodes. The proposed algorithm considers the magnitude of data produced at the edge clusters while allocating the fog resources. The results of evaluations performed at different scales confirm the efficacy of the proposed approach in achieving optimal performance.
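
A load-aware allocation heuristic of the general kind described here can be sketched in a few lines. The rule below (heaviest clusters first, least-loaded feasible fog node, cloud as fallback) is an illustrative assumption, not the paper's exact algorithm:

```python
# Illustrative load-aware allocation: assign each edge cluster, sorted by
# the data volume it generates, to the fog node with enough remaining
# capacity; fall back to the cloud tier when no fog node can take it.

def allocate(clusters: dict[str, float], fog_capacity: dict[str, float]):
    remaining = dict(fog_capacity)
    assignment = {}
    # Place heaviest clusters first so large loads get fitted early.
    for cluster, load in sorted(clusters.items(), key=lambda kv: -kv[1]):
        candidates = [f for f, cap in remaining.items() if cap >= load]
        if not candidates:
            assignment[cluster] = "cloud"   # fall back to the cloud tier
            continue
        target = max(candidates, key=remaining.get)  # least-loaded feasible node
        remaining[target] -= load
        assignment[cluster] = target
    return assignment

print(allocate({"c1": 40.0, "c2": 25.0, "c3": 50.0},
               {"fog-a": 60.0, "fog-b": 70.0}))
# {'c3': 'fog-b', 'c1': 'fog-a', 'c2': 'cloud'}
```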

RevDate: 2024-04-24

Wei W, Xia X, Li T, et al (2024)

Shaoxia: a web-based interactive analysis platform for single cell RNA sequencing data.

BMC genomics, 25(1):402.

BACKGROUND: In recent years, single-cell RNA sequencing (scRNA-seq) has become increasingly accessible to researchers in many fields. However, interpreting its data demands proficiency in multiple programming languages and bioinformatic skills, which limits researchers without such expertise from exploring information in scRNA-seq data. Therefore, there is a tremendous need to develop easy-to-use software covering all aspects of scRNA-seq data analysis.

RESULTS: We proposed a clear analysis framework for scRNA-seq data, which emphasized the fundamental and crucial roles of cell identity annotation, abstracting the analysis process into three stages: upstream analysis, cell annotation and downstream analysis. The framework can equip researchers with a comprehensive understanding of the analysis procedure and facilitate effective data interpretation. Leveraging the developed framework, we engineered Shaoxia, an analysis platform designed to democratize scRNA-seq analysis by accelerating processing through high-performance computing capabilities and offering a user-friendly interface accessible even to wet-lab researchers without programming expertise.

CONCLUSION: Shaoxia stands as a powerful and user-friendly open-source software for automated scRNA-seq analysis, offering comprehensive functionality for streamlined functional genomics studies. Shaoxia is freely accessible at http://www.shaoxia.cloud, and its source code is publicly available at https://github.com/WiedenWei/shaoxia.

RevDate: 2024-04-20

Abbas Q, Alyas T, Alghamdi T, et al (2024)

Redefining governance: a critical analysis of sustainability transformation in e-governance.

Frontiers in big data, 7:1349116.

With the rapid growth of information and communication technologies, governments worldwide are embracing digital transformation to enhance service delivery and governance practices. In the rapidly evolving landscape of information technology (IT), secure data management stands as a cornerstone for organizations aiming to safeguard sensitive information. Robust data modeling techniques are pivotal in structuring and organizing data, ensuring its integrity, and facilitating efficient retrieval and analysis. As the world increasingly emphasizes sustainability, integrating eco-friendly practices into data management processes becomes imperative. This study focuses on the specific context of Pakistan and investigates the potential of cloud computing in advancing e-governance capabilities. Cloud computing offers scalability, cost efficiency, and enhanced data security, making it an ideal technology for digital transformation. Through an extensive literature review, analysis of case studies, and interviews with stakeholders, this research explores the current state of e-governance in Pakistan, identifies the challenges faced, and proposes a framework for leveraging cloud computing to overcome these challenges. The findings reveal that cloud computing can significantly enhance the accessibility, scalability, and cost-effectiveness of e-governance services, thereby improving citizen engagement and satisfaction. This study provides valuable insights for policymakers, government agencies, and researchers interested in the digital transformation of e-governance in Pakistan and offers a roadmap for leveraging cloud computing technologies in similar contexts. The findings contribute to the growing body of knowledge on e-governance and cloud computing, supporting the advancement of digital governance practices globally. This research identifies monitoring parameters necessary to establish a sustainable e-governance system incorporating big data and cloud computing. The proposed framework, Monitoring and Assessment System using Cloud (MASC), is validated through secondary data analysis and successfully fulfills the research objectives. By leveraging big data and cloud computing, governments can revolutionize their digital governance practices, driving transformative changes and enhancing efficiency and effectiveness in public administration.

RevDate: 2024-04-18

Wang TH, Kao CC, TH Chang (2024)

Ensemble Machine Learning for Predicting 90-Day Outcomes and Analyzing Risk Factors in Acute Kidney Injury Requiring Dialysis.

Journal of multidisciplinary healthcare, 17:1589-1602.

PURPOSE: Our objectives were to (1) employ ensemble machine learning algorithms utilizing real-world clinical data to predict 90-day prognosis, including dialysis dependence and mortality, following the first hospitalized dialysis and (2) identify the significant factors associated with overall outcomes.

PATIENTS AND METHODS: We identified hospitalized patients with Acute kidney injury requiring dialysis (AKI-D) from a dataset of the Taipei Medical University Clinical Research Database (TMUCRD) from January 2008 to December 2020. The extracted data comprise demographics, comorbidities, medications, and laboratory parameters. Ensemble machine learning models were developed utilizing real-world clinical data through the Google Cloud Platform.

RESULTS: The study analyzed 1080 patients in the dialysis-dependent module, out of which 616 received regular dialysis after 90 days. Our ensemble model, consisting of 25 feedforward neural network models, demonstrated the best performance with an AUROC of 0.846. We identified the baseline creatinine value, assessed at least 90 days before the initial dialysis, as the most crucial factor. We selected 2358 patients, 984 of whom were deceased after 90 days, for the survival module. The ensemble model, comprising 15 feedforward neural network models and 10 gradient-boosted decision tree models, achieved superior performance with an AUROC of 0.865. The pre-dialysis creatinine value, tested within 90 days prior to the initial dialysis, was identified as the most significant factor.

CONCLUSION: Ensemble machine learning models outperform logistic regression models in predicting outcomes of AKI-D, compared to existing literature. Our study, which includes a large sample size from three different hospitals, supports the significance of the creatinine value tested before the first hospitalized dialysis in determining overall prognosis. Healthcare providers could benefit from utilizing our validated prediction model to improve clinical decision-making and enhance patient care for the high-risk population.

RevDate: 2024-04-18

Fujinami H, Kuraishi S, Teramoto A, et al (2024)

Development of a novel endoscopic hemostasis-assisted navigation AI system in the standardization of post-ESD coagulation.

Endoscopy international open, 12(4):E520-E525.

Background and study aims: While gastric endoscopic submucosal dissection (ESD) has become a treatment with fewer complications, delayed bleeding remains a challenge. Post-ESD coagulation (PEC) is performed to prevent delayed bleeding. Therefore, we developed an artificial intelligence (AI) system to detect vessels that require PEC in real time. Materials and methods: Training data were extracted from 153 gastric ESD videos with sufficient images taken with a second-look endoscopy (SLE) and annotated as follows: (1) vessels that showed bleeding during SLE without PEC; (2) vessels that did not bleed during SLE with PEC; and (3) vessels that did not bleed even without PEC. The training model was created using Google Cloud Vertex AI, and a program was created to display the vessels requiring PEC in real time using a bounding box. The evaluation of this AI was verified with 12 unlearned test videos, including four cases that required additional coagulation during SLE. Results: The test video validation indicated that 109 vessels on the ulcer required cauterization. Of these, 80 vessels (73.4%) were correctly determined as not requiring additional treatment. However, 25 vessels (22.9%), which did not require PEC, were overestimated. In the four videos that required additional coagulation in SLE, the AI was able to detect all bleeding vessels. Conclusions: The effectiveness and safety of this endoscopic treatment-assisted AI system that identifies visible vessels requiring PEC should be confirmed in future studies.

RevDate: 2024-04-19
CmpDate: 2024-04-18

Frimpong T, Hayfron Acquah JB, Missah YM, et al (2024)

Securing cloud data using secret key 4 optimization algorithm (SK4OA) with a non-linearity run time trend.

PloS one, 19(4):e0301760.

Cloud computing refers to the on-demand availability of computer system resources, primarily data storage and processing power, without the customer's direct personal involvement. Cloud computing has grown dramatically among many organizations due to benefits such as cost savings, resource pooling, broad network access, and ease of management; nonetheless, security has been a major concern. Researchers have proposed several cryptographic methods to offer cloud data security; however, their execution times are linear and longer. A Secret Key 4 Optimization Algorithm (SK4OA) with a non-linear run time is proposed in this paper. The secret key of SK4OA determines the run time rather than the size of the data; as such, it is able to transmit large volumes of data with minimal bandwidth and to resist security attacks like brute force, since its execution timings are unpredictable. A data set from Kaggle was used to determine the algorithm's mean and standard deviation after thirty (30) executions. Data sizes of 3 KB, 5 KB, 8 KB, 12 KB, and 16 KB were used in this study. An empirical analysis was done against RC4, Salsa20, and ChaCha20 based on encryption time, decryption time, throughput, and memory utilization. The analysis showed that SK4OA generated the lowest mean non-linear run time of 5.545±2.785 when 16 KB of data was executed. Additionally, SK4OA's standard deviation was greater, indicating that the observed data varied far from the mean. However, RC4, Salsa20, and ChaCha20 showed smaller standard deviations, making them more clustered around the mean, resulting in predictable run times.
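
SK4OA itself is not publicly available, but an encryption-time comparison of the kind reported above can be set up with the `cryptography` package, shown here for ChaCha20:

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

# How an encryption-time benchmark like the one above can be set up,
# shown for ChaCha20. SK4OA is not publicly available, so it is omitted.

def time_chacha20(data: bytes, runs: int = 30) -> list[float]:
    key, nonce = os.urandom(32), os.urandom(16)
    timings = []
    for _ in range(runs):
        enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
        start = time.perf_counter()
        enc.update(data)
        timings.append(time.perf_counter() - start)
    return timings

for size_kb in (3, 5, 8, 12, 16):          # data sizes used in the study
    t = time_chacha20(os.urandom(size_kb * 1024))
    mean = sum(t) / len(t)
    std = (sum((x - mean) ** 2 for x in t) / len(t)) ** 0.5
    print(f"{size_kb:>2} KB: mean={mean * 1e6:.1f} us, std={std * 1e6:.1f} us")
```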

RevDate: 2024-04-15

Ocampo AF, Fida MR, Elmokashfi A, et al (2024)

Assessing the Cloud-RAN in the Linux Kernel: Sharing Computing and Network Resources.

Sensors (Basel, Switzerland), 24(7):.

Cloud-based Radio Access Network (Cloud-RAN) leverages virtualization to enable the coexistence of multiple virtual Base Band Units (vBBUs) with collocated workloads on a single edge computer, aiming for economic and operational efficiency. However, this coexistence can cause performance degradation in vBBUs due to resource contention. In this paper, we conduct an empirical analysis of vBBU performance on a Linux RT-Kernel, highlighting the impact of resource sharing with user-space tasks and Kernel threads. Furthermore, we evaluate CPU management strategies such as CPU affinity and CPU isolation as potential solutions to these performance challenges. Our results highlight that the implementation of CPU affinity can significantly reduce throughput variability by up to 40%, decrease vBBU's NACK ratios, and reduce vBBU scheduling latency within the Linux RT-Kernel. Collectively, these findings underscore the potential of CPU management strategies to enhance vBBU performance in Cloud-RAN environments, enabling more efficient and stable network operations. The paper concludes with a discussion on the efficient realization of Cloud-RAN, elucidating the benefits of implementing proposed CPU affinity allocations. The demonstrated enhancements, including reduced scheduling latency and improved end-to-end throughput, affirm the practicality and efficacy of the proposed strategies for optimizing Cloud-RAN deployments.
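CPU affinity of the kind evaluated here can be set from user space via the Linux scheduling interface. A minimal sketch using Python's os module follows; the core numbers are illustrative, and CPU isolation proper is a kernel boot parameter (e.g., isolcpus) rather than something set at run time.

```python
# Minimal sketch of the CPU-affinity strategy the paper evaluates, using the
# Linux scheduling interface exposed by Python's os module (Linux only).
import os

os.sched_setaffinity(0, {2, 3})   # pin this process (pid 0 = self) to cores 2-3
print(os.sched_getaffinity(0))    # -> {2, 3}
```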

RevDate: 2024-04-15

Liang YP, Chang CM, CC Chung (2024)

Implementation of Lightweight Convolutional Neural Networks with an Early Exit Mechanism Utilizing 40 nm CMOS Process for Fire Detection in Unmanned Aerial Vehicles.

Sensors (Basel, Switzerland), 24(7):.

The advancement of unmanned aerial vehicles (UAVs) enables early detection of numerous disasters. Efforts have been made to automate the monitoring of data from UAVs, with machine learning methods recently attracting significant interest. These solutions often face challenges with high computational costs and energy usage. Conventionally, data from UAVs are processed using cloud computing, where they are sent to the cloud for analysis. However, this method might not meet the real-time needs of disaster relief scenarios. In contrast, edge computing provides real-time processing at the site but still struggles with computational and energy efficiency issues. To overcome these obstacles and enhance resource utilization, this paper presents a convolutional neural network (CNN) model with an early exit mechanism designed for fire detection in UAVs. This model is implemented using TSMC 40 nm CMOS technology, which aids in hardware acceleration. Notably, the neural network has a modest parameter count of 11.2 k. In the hardware computation part, the CNN circuit completes fire detection in approximately 230,000 cycles. Power-gating techniques are also used to turn off inactive memory, contributing to reduced power consumption. The experimental results show that this neural network reaches a maximum accuracy of 81.49% in the hardware implementation stage. After automatic layout and routing, the CNN hardware accelerator can operate at 300 MHz, consuming 117 mW of power.
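The early-exit idea is that a confident prediction from an intermediate classifier skips the remaining layers, saving cycles and energy. The sketch below is a hypothetical PyTorch module, not the paper's 40 nm hardware design; it shows the control flow for a batch of one.

```python
# Hedged sketch of an early-exit CNN: if the first exit head is confident,
# later stages are skipped. Assumes input of shape (1, 3, H, W).
import torch
import torch.nn as nn

class EarlyExitCNN(nn.Module):
    def __init__(self, threshold: float = 0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(8, 2))     # fire / no fire
        self.stage2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(16, 2))
        self.threshold = threshold

    def forward(self, x):
        x = self.stage1(x)
        logits = self.exit1(x)
        conf = logits.softmax(-1).max(-1).values
        if conf.item() >= self.threshold:    # confident: exit early
            return logits
        return self.exit2(self.stage2(x))    # otherwise run remaining layers
```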

RevDate: 2024-04-15

Gomes B, Soares C, Torres JM, et al (2024)

An Efficient Edge Computing-Enabled Network for Used Cooking Oil Collection.

Sensors (Basel, Switzerland), 24(7):.

In Portugal, more than 98% of domestic cooking oil is disposed of improperly every day. This prevents its recycling or reconversion into another form of energy, and it may become a potentially harmful contaminant of soil and water. Driven by the utility of recycled cooking oil, and leveraging the exponential growth of ubiquitous computing approaches, we propose an IoT smart solution for domestic used cooking oil (UCO) collection bins. We call this approach SWAN, which stands for Smart Waste Accumulation Network. It is deployed and evaluated in Portugal and consists of a countrywide network of collection bin units available in public areas. Two metrics are considered to evaluate the system's success: (i) user engagement and (ii) used cooking oil collection efficiency. The presented system should (i) perform under scenarios of temporary communication network failures and (ii) be scalable to accommodate an ever-growing number of installed collection units. Thus, we chose an approach that departs from the traditional cloud computing paradigm: it relies on edge node infrastructure to process, store, and act upon the locally collected data, with communication treated as a delay-tolerant task, i.e., an edge computing solution. We conduct a comparative analysis revealing the benefits of the edge computing-enabled collection bin vs. a cloud computing solution. The studied period covers four years of collected data. An exponential increase in the amount of used cooking oil collected is identified, with the developed solution being responsible for surpassing the national collection totals of previous years. During the same period, we also improved the collection process, as we were able to more accurately estimate the optimal collection and system maintenance intervals.
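The delay-tolerant, store-and-forward behaviour described above can be sketched as a local persistent buffer that is flushed only when delivery succeeds. The SQLite file name and the send_to_server callback below are illustrative assumptions, not the SWAN implementation.

```python
# Hedged sketch of delay-tolerant edge buffering: readings persist locally
# and are deleted only after confirmed delivery to the server.
import sqlite3
import time

db = sqlite3.connect("swan_buffer.db")        # illustrative local store
db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, fill_level REAL)")

def record(fill_level: float) -> None:
    db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), fill_level))
    db.commit()

def flush(send_to_server) -> None:
    rows = db.execute("SELECT ts, fill_level FROM readings").fetchall()
    if rows and send_to_server(rows):         # delete only on confirmed delivery
        db.execute("DELETE FROM readings")
        db.commit()
```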

RevDate: 2024-04-15

Armijo A, D Zamora-Sánchez (2024)

Integration of Railway Bridge Structural Health Monitoring into the Internet of Things with a Digital Twin: A Case Study.

Sensors (Basel, Switzerland), 24(7):.

Structural health monitoring (SHM) is critical for ensuring the safety of infrastructure such as bridges. This article presents a digital twin solution for the SHM of railway bridges using low-cost wireless accelerometers and machine learning (ML). The system architecture combines on-premises edge computing and cloud analytics to enable efficient real-time monitoring and complete storage of relevant time-history datasets. After train crossings, the accelerometers stream raw vibration data, which are processed in the frequency domain and analyzed using machine learning to detect anomalies that indicate potential structural issues. The digital twin approach is demonstrated on an in-service railway bridge for which vibration data were collected over two years under normal operating conditions. By learning allowable ranges for vibration patterns, the digital twin model identifies abnormal spectral peaks that indicate potential changes in structural integrity. The long-term pilot shows that this affordable SHM system can provide automated, real-time warnings of bridge damage, and that it supports in-house-designed sensors with lower cost and edge computing capabilities, such as those used in the demonstration. The successful on-premises-cloud hybrid implementation provides a cost-effective and scalable model for expanding monitoring to thousands of railway bridges, democratizing SHM to improve safety by avoiding catastrophic failures.
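A minimal sketch of the frequency-domain anomaly check described above follows, assuming baseline spectra learned from the two years of normal-operation data; the threshold factor and windowing are illustrative choices, not the paper's calibrated values.

```python
# Hedged sketch: flag spectral peaks outside the allowable range learned
# from baseline (normal-operation) vibration data.
import numpy as np

def spectrum(signal, fs):
    """Magnitude spectrum of an accelerometer window sampled at fs Hz."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return freqs, mags

def is_anomalous(mags, baseline_mean, baseline_std, k=3.0):
    # any bin more than k standard deviations from the learned mean is abnormal
    return bool(np.any(np.abs(mags - baseline_mean) > k * baseline_std))
```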

RevDate: 2024-04-15

Gaffurini M, Flammini A, Ferrari P, et al (2024)

End-to-End Emulation of LoRaWAN Architecture and Infrastructure in Complex Smart City Scenarios Exploiting Containers.

Sensors (Basel, Switzerland), 24(7):.

In a LoRaWAN network, the backend is generally distributed as Software as a Service (SaaS) based on container technology, and recently, a containerized version of the LoRaWAN node stack has also become available. Exploiting the disaggregation of LoRaWAN components, this paper focuses on the emulation of complex end-to-end architectures and infrastructures for smart city scenarios, leveraging lightweight virtualization technology. The fundamental metrics to gain insights and evaluate the scaling complexity of the emulated scenario are defined. Then, the methodology is applied to use cases taken from a real LoRaWAN application in a smart city with hundreds of nodes. As a result, the proposed approach based on containers allows for the following: (i) deployment of functionalities on diverse distributed hosts; (ii) the use of the very same software that runs on real nodes; (iii) simple configuration and management of the emulation process; (iv) affordable costs. Both on-premises and cloud servers are considered as emulation platforms to evaluate the resource requirements and emulation cost of the proposed approach. For instance, emulating one hour of an entire LoRaWAN network with hundreds of nodes requires very affordable hardware that, if realized with a cloud-based computing platform, may cost less than USD 1.
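A minimal sketch of the container-based scaling idea with the Docker SDK for Python follows; the image names are placeholders rather than the paper's artifacts, and a real emulation would also attach the containers to a shared network.

```python
# Hedged sketch: launching one network-server container and N containerized
# node stacks with the Docker SDK for Python. Image names are placeholders.
import docker

client = docker.from_env()
client.containers.run("example/lorawan-server", detach=True, name="lns")
for i in range(200):                         # hundreds of emulated nodes
    client.containers.run("example/lorawan-node",
                          detach=True,
                          name=f"node-{i}",
                          environment={"DEV_EUI": f"{i:016x}"})
```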

RevDate: 2024-04-12

Gupta P, DP Shukla (2024)

Demi-decadal land use land cover change analysis of Mizoram, India, with topographic correction using machine learning algorithm.

Environmental science and pollution research international [Epub ahead of print].

Mizoram (India) is part of UNESCO's biodiversity hotspots in India and is primarily populated by tribes who engage in shifting agriculture; hence, the land use land cover (LULC) pattern of the state changes frequently. We used Landsat 5 and 8 satellite images to prepare LULC maps at five-year intervals from 2000 to 2020. The atmospherically corrected images were pre-processed to remove cloud cover and then classified into six classes: waterbodies, farmland, settlement, open forest, dense forest, and bare land. We applied four machine learning (ML) algorithms for classification, namely, random forest (RF), classification and regression tree (CART), minimum distance (MD), and support vector machine (SVM), to the images from 2000 to 2020. With 80% training and 20% testing data, we found that the RF classifier achieved the highest accuracy of the four. The average overall accuracy (OA) and Kappa coefficient (KC) from 2000 to 2020 were 84.00% and 0.79 when the RF classifier was used. When using SVM, CART, and MD, the average OA and KC were 78.06%, 0.73; 78.60%, 0.72; and 73.32%, 0.65, respectively. We utilised three methods of topographic correction, namely, C-correction, SCS (sun canopy sensor) correction, and SCS + C correction, to reduce misclassification due to shadow effects. SCS + C correction worked best for this region; hence, we prepared the demi-decadal LULC maps from 2000 to 2020 on SCS + C corrected satellite images using the RF classifier. The OA for 2000, 2005, 2010, 2015, and 2020 was found to be 84%, 81%, 81%, 85%, and 89%, respectively, using RF. Dense forest decreased from 2000 to 2020 with an increase in open forest, settlement, and agriculture; where farmland was low, barren land increased. The results were significantly improved by the topographic correction, and misclassification was substantially reduced.
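The accuracy assessment used above (80/20 split, overall accuracy and Kappa coefficient) is straightforward to sketch with scikit-learn; the feature matrix below is random stand-in data rather than the Landsat bands used in the study.

```python
# Hedged sketch of an 80/20 random-forest assessment with OA and Kappa.
# X and y are synthetic stand-ins for per-pixel band values and LULC labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

X = np.random.rand(5000, 6)          # stand-in: six spectral bands per pixel
y = np.random.randint(0, 6, 5000)    # stand-in: six LULC classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pred = RandomForestClassifier(n_estimators=500).fit(X_tr, y_tr).predict(X_te)
print(f"OA={accuracy_score(y_te, pred):.2%}, KC={cohen_kappa_score(y_te, pred):.2f}")
```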

RevDate: 2024-04-15

Zhang Y, Geng H, Su L, et al (2024)

An efficient polynomial-based verifiable computation scheme on multi-source outsourced data.

Scientific reports, 14(1):8512.

With the development of cloud computing, users are more inclined to outsource complex computing tasks to cloud servers with strong computing capacity, and the cloud returns the final calculation results. However, the cloud is not completely trustworthy: it may leak user data and even deliberately return incorrect calculations. Therefore, it is important to verify the results of computing tasks without revealing the privacy of the users. Among all computing tasks, polynomial calculation is widely used in information security, linear algebra, signal processing, and other fields. Most existing polynomial-based verifiable computation schemes require that the input of the polynomial function come from a single data source, which means that the data must be signed by a single user. However, in practical applications the input of the polynomial may come from multiple users. To solve this problem, researchers have proposed some schemes for multi-source outsourced data, but these schemes share the common problem of low efficiency. To improve efficiency, this paper proposes an efficient polynomial-based verifiable computation scheme on multi-source outsourced data. We optimize the polynomials using Horner's method to increase the speed of verification, in which the addition gate and the multiplication gate can be interleaved to represent the polynomial function. To adapt to this structure, we design a corresponding homomorphic verification tag, so that the input of the polynomial can come from multiple data sources. We prove the correctness and rationality of the scheme, and carry out numerical analysis and evaluation to verify its efficiency. The experimental results indicate that data contributors can sign 1000 new data items in merely 2 s, while the verification of a delegated polynomial function with a power of 100 requires only 18 ms. These results confirm that the proposed scheme outperforms existing schemes.
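Horner's method, the optimization named above, rewrites a_n*x^n + ... + a_0 as (((a_n*x + a_(n-1))*x + ...)*x + a_0), so one multiplication and one addition alternate per coefficient, which is exactly the interleaved gate structure the scheme exploits. A minimal sketch:

```python
# Horner's rule: evaluate a degree-n polynomial with n multiplies and n adds,
# alternating multiplication and addition "gates".
def horner(coeffs, x):
    """coeffs ordered from the highest-degree term down to the constant."""
    acc = 0
    for a in coeffs:
        acc = acc * x + a
    return acc

assert horner([2, -3, 5], 4) == 2 * 4**2 - 3 * 4 + 5  # = 25
```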

RevDate: 2024-04-15
CmpDate: 2024-04-15

Li S, Nair R, SM Naqvi (2024)

Acoustic and Text Features Analysis for Adult ADHD Screening: A Data-Driven Approach Utilizing DIVA Interview.

IEEE journal of translational engineering in health and medicine, 12:359-370.

Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder commonly seen in childhood that leads to behavioural changes in social development and communication patterns. It often continues undiagnosed into adulthood because of a global shortage of psychiatrists, resulting in delayed diagnoses with lasting consequences for individuals' well-being and for society. Recently, machine learning methodologies have been incorporated into healthcare systems to facilitate the diagnosis and enhance the potential prediction of treatment outcomes for mental health conditions. In ADHD detection, previous research has focused on utilizing functional magnetic resonance imaging (fMRI) or electroencephalography (EEG) signals, which require costly equipment and trained personnel for data collection. In recent years, speech and text modalities have garnered increasing attention due to their cost-effectiveness and non-wearable sensing in data collection. In this research, conducted in collaboration with the Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, we gathered audio data from both ADHD patients and normal controls based on the clinically popular Diagnostic Interview for ADHD in adults (DIVA). Subsequently, we transformed the speech data into text using the Google Cloud Speech API. We extracted both acoustic and text features from the data, encompassing traditional acoustic features (e.g., MFCC), specialized feature sets (e.g., eGeMAPS), as well as deep-learned linguistic and semantic features derived from pre-trained deep learning models. These features are employed in conjunction with a support vector machine for ADHD classification, yielding promising outcomes in the utilization of audio and text data for effective adult ADHD screening. Clinical impact: This research introduces a transformative approach to ADHD diagnosis, employing speech and text analysis to facilitate early and more accessible detection, particularly beneficial in areas with limited psychiatric resources. Clinical and Translational Impact Statement: The successful application of machine learning techniques in analyzing audio and text data for ADHD screening represents a significant advancement in mental health diagnostics, paving the way for its integration into clinical settings and potentially improving patient outcomes on a broader scale.
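The transcription step can be reproduced with the Google Cloud Speech client library. A minimal sketch follows, in which the bucket URI and language code are assumptions; note also that the synchronous recognize call suits only short clips, and longer interview recordings would use the long-running variant.

```python
# Hedged sketch of the speech-to-text step with google-cloud-speech.
# The gs:// URI and language code are illustrative assumptions.
from google.cloud import speech

client = speech.SpeechClient()
audio = speech.RecognitionAudio(uri="gs://example-bucket/diva_interview.wav")
config = speech.RecognitionConfig(language_code="en-GB",
                                  enable_automatic_punctuation=True)
response = client.recognize(config=config, audio=audio)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)
```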

RevDate: 2024-04-12

Sachdeva S, Bhatia S, Al Harrasi A, et al (2024)

Unraveling the role of cloud computing in health care system and biomedical sciences.

Heliyon, 10(7):e29044.

Cloud computing has emerged as a transformative force in healthcare and biomedical sciences, offering scalable, on-demand resources for managing vast amounts of data. This review explores the integration of cloud computing within these fields, highlighting its pivotal role in enhancing data management, security, and accessibility. We examine the application of cloud computing in various healthcare domains, including electronic medical records, telemedicine, and personalized patient care, as well as its impact on bioinformatics research, particularly in genomics, proteomics, and metabolomics. The review also addresses the challenges and ethical considerations associated with cloud-based healthcare solutions, such as data privacy and cybersecurity. By providing a comprehensive overview, we aim to assist readers in understanding the significance of cloud computing in modern medical applications and its potential to revolutionize both patient care and biomedical research.

RevDate: 2024-04-09

Hicks CB, TJ Martinez (2024)

Massively scalable workflows for quantum chemistry: BigChem and ChemCloud.

The Journal of chemical physics, 160(14):.

Electronic structure theory, i.e., quantum chemistry, is the fundamental building block for many problems in computational chemistry. We present a new distributed computing framework (BigChem), which allows for an efficient solution of many quantum chemistry problems in parallel. BigChem is designed to be easily composable and leverages industry-standard middleware (e.g., Celery, RabbitMQ, and Redis) for distributed approaches to large scale problems. BigChem can harness any collection of worker nodes, including ones on cloud providers (such as AWS or Azure), local clusters, or supercomputer centers (and any mixture of these). BigChem builds upon MolSSI packages, such as QCEngine, to standardize the operation of numerous computational chemistry programs, demonstrated here with Psi4, xtb, geomeTRIC, and TeraChem. BigChem delivers full utilization of compute resources at scale, offers a programmable canvas for designing sophisticated quantum chemistry workflows, and is fault tolerant to node failures and network disruptions. We demonstrate linear scalability of BigChem running computational chemistry workloads on up to 125 GPUs. Finally, we present ChemCloud, a web API to BigChem and successor to TeraChem Cloud. ChemCloud delivers scalable and secure access to BigChem over the Internet.
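A minimal sketch of the middleware pattern BigChem builds on follows: Celery tasks brokered by RabbitMQ, with Redis as the result backend. The task body is a placeholder, not BigChem's actual API.

```python
# Hedged sketch of the Celery/RabbitMQ/Redis pattern named above.
# Broker/backend URLs and the task body are illustrative placeholders.
from celery import Celery

app = Celery("qc",
             broker="amqp://localhost",      # RabbitMQ queues the tasks
             backend="redis://localhost")    # Redis stores the results

@app.task
def single_point_energy(xyz: str, program: str = "psi4") -> float:
    # A real worker would hand this to a QCEngine-style program wrapper.
    raise NotImplementedError

# Client side: fan out many molecules and gather results as they complete.
# results = [single_point_energy.delay(m) for m in molecules]
```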

RevDate: 2024-04-10

Holl F, Clarke L, Raffort T, et al (2024)

The Red Cross Red Crescent Health Information System (RCHIS): an electronic medical records and health information management system for the red cross red crescent emergency response units.

Conflict and health, 18(1):28.

BACKGROUND: The Red Cross and Red Crescent Movement (RCRC) utilizes specialized Emergency Response Units (ERUs) for international disaster response. However, data collection and reporting within ERUs have been time-consuming and paper-based. The Red Cross Red Crescent Health Information System (RCHIS) was developed to improve clinical documentation and reporting, ensuring accuracy and ease of use while increasing compliance with reporting standards.

CASE PRESENTATION: RCHIS is an Electronic Medical Record (EMR) and Health Information System (HIS) designed for RCRC ERUs. It can be accessed on Android tablets or Windows laptops, both online and offline. The system securely stores data on the Microsoft Azure cloud, with synchronization facilitated through a local ERU server. The functional architecture covers all clinical functions of ERU clinics and hospitals, incorporating user-friendly features. A pilot study was conducted with the Portuguese Red Cross (PRC) during a large-scale event. Thirteen super users were trained and subsequently trained the staff. During the four-day pilot, 77 user accounts were created and 243 patient files were documented. Feedback indicated that RCHIS was easy to use, required minimal training time, and that the training provided was sufficient for full utilization. Real-time reporting facilitated coordination with the civil defense authority.

CONCLUSIONS: The development and pilot use of RCHIS demonstrated its feasibility and efficacy within RCRC ERUs. The system addressed the need for an EMR and HIS solution, enabling comprehensive clinical documentation and supporting administrative reporting functions. The pilot study validated the training-of-trainers approach and paved the way for further domestic use of RCHIS. RCHIS has the potential to improve patient safety, quality of care, and reporting efficiency within ERUs. Automated reporting reduces the burden on ERU leadership, while electronic compilation enhances record completeness and correctness. Ongoing feedback collection and feature development continue to enhance RCHIS's functionality. Further training sessions took place in 2023, and preparations for international deployments are under way. RCHIS represents a significant step toward improved emergency medical care and coordination within the RCRC and has implications for similar systems in other Emergency Medical Teams.

RevDate: 2024-04-09

Chen A, Yu S, Yang X, et al (2024)

IoT data security in outsourced databases: A survey of verifiable database.

Heliyon, 10(7):e28117.

With the swift advancement of cloud computing and the Internet of Things (IoT), and to address the issue of massive data storage, IoT devices opt to offload their data to cloud servers so as to alleviate the pressure of local storage and computation. However, storing local data in an outsourced database is bound to face the danger of tampering. To handle this problem, the verifiable database (VDB), initially suggested in 2011, has garnered sustained interest from researchers. The concept of VDB enables resource-limited clients to securely outsource extremely large databases to untrusted servers, where users can retrieve database records and modify them by assigning new values, and any attempt at tampering will be detected. This paper provides a systematic summary of VDB. First, a definition of VDB is given, along with correctness and security proofs, and VDB schemes based on commitment constructions are introduced, divided mainly into vector commitments and polynomial commitments. Then, VDB schemes based on delegated polynomial functions are introduced, mainly in combination with Merkle trees and forward-secure symmetric searchable encryption. We then classify the current VDB schemes according to four different assumptions, and further classify the established VDB schemes by the two different groups they are built upon. Finally, we introduce the applications and future development of VDB. To our knowledge, this is the first VDB review paper to date.
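As a toy illustration of the tamper-evidence idea behind VDBs, consider a Merkle tree, one of the constructions surveyed above: the client keeps only the root hash, and any modification of an outsourced record changes the root and is therefore detected.

```python
# Toy Merkle-root sketch of tamper evidence: changing any record changes the root.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"row1", b"row2", b"row3", b"row4"]
root = merkle_root(records)
assert merkle_root([b"row1", b"tampered", b"row3", b"row4"]) != root
```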

RevDate: 2024-04-18

Mimar S, Paul AS, Lucarelli N, et al (2024)

ComPRePS: An Automated Cloud-based Image Analysis tool to democratize AI in Digital Pathology.

bioRxiv : the preprint server for biology.

Artificial intelligence (AI) has extensive applications in a wide range of disciplines including healthcare and clinical practice. Advances in high-resolution whole-slide brightfield microscopy allow for the digitization of histologically stained tissue sections, producing gigapixel-scale whole-slide images (WSI). The significant improvement in computing and revolution of deep neural network (DNN)-based AI technologies over the last decade allow us to integrate massively parallelized computational power, cutting-edge AI algorithms, and big data storage, management, and processing. Applied to WSIs, AI has created opportunities for improved disease diagnostics and prognostics with the ultimate goal of enhancing precision medicine and resulting patient care. The National Institutes of Health (NIH) has recognized the importance of developing standardized principles for data management and discovery for the advancement of science and proposed the Findable, Accessible, Interoperable, Reusable, (FAIR) Data Principles[1] with the goal of building a modernized biomedical data resource ecosystem to establish collaborative research communities. In line with this mission and to democratize AI-based image analysis in digital pathology, we propose ComPRePS: an end-to-end automated Computational Renal Pathology Suite which combines massive scalability, on-demand cloud computing, and an easy-to-use web-based user interface for data upload, storage, management, slide-level visualization, and domain expert interaction. Moreover, our platform is equipped with both in-house and collaborator developed sophisticated AI algorithms in the back-end server for image analysis to identify clinically relevant micro-anatomic functional tissue units (FTU) and to extract image features.

RevDate: 2024-04-22

Copeland CJ, Roddy JW, Schmidt AK, et al (2024)

VIBES: a workflow for annotating and visualizing viral sequences integrated into bacterial genomes.

NAR genomics and bioinformatics, 6(2):lqae030.

Bacteriophages are viruses that infect bacteria. Many bacteriophages integrate their genomes into the bacterial chromosome and become prophages. Prophages may substantially burden or benefit host bacteria fitness, acting in some cases as parasites and in others as mutualists. Some prophages have been demonstrated to increase host virulence. The increasing ease of bacterial genome sequencing provides an opportunity to deeply explore prophage prevalence and insertion sites. Here we present VIBES (Viral Integrations in Bacterial genomES), a workflow intended to automate prophage annotation in complete bacterial genome sequences. VIBES provides additional context to prophage annotations by annotating bacterial genes and viral proteins in user-provided bacterial and viral genomes. The VIBES pipeline is implemented as a Nextflow-driven workflow, providing a simple, unified interface for execution on local, cluster and cloud computing environments. For each step of the pipeline, a container including all necessary software dependencies is provided. VIBES produces results in simple tab-separated format and generates intuitive and interactive visualizations for data exploration. Despite VIBES's primary emphasis on prophage annotation, its generic alignment-based design allows it to be deployed as a general-purpose sequence similarity search manager. We demonstrate the utility of the VIBES prophage annotation workflow by searching for 178 Pf phage genomes across 1072 Pseudomonas spp. genomes.

RevDate: 2024-04-08
CmpDate: 2024-04-08

Nawaz Tareen F, Alvi AN, Alsamani B, et al (2024)

EOTE-FSC: An efficient offloaded task execution for fog enabled smart cities.

PloS one, 19(4):e0298363.

Smart cities ease the lifestyle of their community members with the help of Information and Communication Technology (ICT). They provide better water, waste, and energy management, enhance the security and safety of citizens, and offer better health facilities. Most of these applications are based on IoT sensor networks that are deployed in different application areas according to demand. Due to limited processing capabilities, sensor nodes cannot process multiple tasks simultaneously and need to offload some of their tasks to remotely placed cloud servers, which may cause delays. To reduce the delay, computing nodes placed in different vicinities act as fog-computing nodes that execute the offloaded tasks. It has been observed that offloaded tasks are not uniformly received by fog computing nodes: some fog nodes may receive more tasks while others receive fewer. This may increase the overall task execution time. Furthermore, these tasks have different priority levels and must be executed before their deadlines. In this work, an Efficient Offloaded Task Execution for Fog enabled Smart cities (EOTE-FSC) is proposed. EOTE-FSC proposes a load balancing mechanism that modifies the greedy algorithm to efficiently distribute offloaded tasks to the attached fog nodes and reduce the overall task execution time. This results in the successful execution of most tasks within their deadlines. In addition, EOTE-FSC modifies the task sequencing with deadline algorithm for the fog node to optimally execute the offloaded tasks in such a way that most of the high-priority tasks are entertained. The load balancing results of EOTE-FSC are compared with the state-of-the-art Round Robin, Greedy, Round Robin with longest job first, and Round Robin with shortest job first algorithms, while the fog computing results of EOTE-FSC are compared with the First Come First Serve (FCFS) algorithm. The results show that EOTE-FSC effectively offloads tasks onto fog nodes: the maximum load on the fog computing nodes is reduced by up to 29%, 27.3%, 23%, and 24.4% compared to the Round Robin, Greedy, Round Robin with LJF, and Round Robin with SJF algorithms, respectively, and EOTE-FSC executes the maximum number of offloaded high-priority tasks compared to the FCFS algorithm within the same computing capacity of fog nodes.
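A hedged sketch of the greedy least-loaded dispatch that such a load balancer builds on follows; priorities and deadlines, which the full EOTE-FSC scheme also handles, are omitted here.

```python
# Hedged sketch: greedy least-loaded dispatch of offloaded tasks to fog nodes.
# Placing the largest tasks first keeps the maximum node load near an even split.
import heapq

def greedy_balance(task_costs, n_nodes):
    heap = [(0.0, node) for node in range(n_nodes)]   # (current load, fog node)
    heapq.heapify(heap)
    plan = []
    for cost in sorted(task_costs, reverse=True):     # largest tasks first
        load, node = heapq.heappop(heap)              # least-loaded node
        plan.append((cost, node))
        heapq.heappush(heap, (load + cost, node))
    return plan

print(greedy_balance([5, 3, 8, 2, 7, 4], n_nodes=3))
```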

RevDate: 2024-04-03

Khan NS, Roy SK, Talukdar S, et al (2024)

Empowering real-time flood impact assessment through the integration of machine learning and Google Earth Engine: a comprehensive approach.

Environmental science and pollution research international [Epub ahead of print].

Floods cause substantial losses of life and property, especially in flood-prone regions like northwestern Bangladesh. Timely and precise evaluation of flood impacts is critical for effective flood management and decision-making. This research demonstrates an integrated approach utilizing machine learning and Google Earth Engine to enable real-time flood assessment. Synthetic aperture radar (SAR) data from Sentinel-1 and the Google Earth Engine platform were employed to generate near real-time flood maps of the 2020 flood in Kurigram and Lalmonirhat. An automatic thresholding technique quantified flooded areas. For land use/land cover (LULC) analysis, Sentinel-2's high resolution and machine learning models such as artificial neural networks (ANN), random forests (RF), and support vector machines (SVM) were leveraged. ANN delivered the best LULC mapping with 0.94 accuracy based on metrics including accuracy, kappa, mean F1 score, mean sensitivity, mean specificity, mean positive predictive value, mean negative predictive value, mean precision, mean recall, mean detection rate, and mean balanced accuracy. Results showed over 600,000 people exposed at peak inundation in July, about 17% of the population. The machine learning-enabled LULC maps reliably identified vulnerable areas to prioritize flood management. Over half of the croplands flooded in July. This research demonstrates the potential of integrating SAR, machine learning, and cloud computing to empower authorities through real-time monitoring and accurate LULC mapping, which are essential for effective flood response. The proposed comprehensive methodology can assist stakeholders in developing data-driven flood management strategies to reduce impacts.
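A minimal sketch of the Sentinel-1 flood-mapping step with the Earth Engine Python API follows; the extent, dates, and backscatter threshold are illustrative assumptions rather than the study's calibrated values.

```python
# Hedged sketch: SAR backscatter thresholding for flood mapping in Earth Engine.
# The rectangle, dates, and -18 dB threshold are illustrative assumptions.
import ee

ee.Initialize()  # requires prior Earth Engine authentication

aoi = ee.Geometry.Rectangle([89.0, 25.5, 90.0, 26.5])   # hypothetical extent
s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(aoi)
      .filterDate("2020-07-01", "2020-07-31")
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VH"))
      .select("VH"))
flood = s1.min().lt(-18)   # low VH backscatter marks open water / inundation
```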

RevDate: 2024-04-03

Gheni HM, AbdulRahaim LA, A Abdellatif (2024)

Real-time driver identification in IoV: A deep learning and cloud integration approach.

Heliyon, 10(7):e28109.

The Internet of Vehicles (IoV) emerges as a pivotal extension of the Internet of Things (IoT), specifically geared towards transforming the automotive landscape. In this evolving ecosystem, the demand for a seamless end-to-end system becomes paramount for enhancing operational efficiency and safety. Hence, this study introduces an innovative method for real-time driver identification by integrating cloud computing with deep learning. Utilizing the integrated capabilities of Google Cloud, Thingsboard, and Apache Kafka, the developed solution tailored for IoV technology is adept at managing real-time data collection, processing, prediction, and visualization, with resilience against sensor data anomalies. This research also proposes a driver identification method that combines Convolutional Neural Networks (CNN) with multi-head self-attention. The proposed model is validated on two datasets: the Security dataset and a collected dataset. The results show that the proposed model surpasses previous work, achieving an accuracy and F1 score of 99.95%. Even when challenged with data anomalies, the model maintains a high accuracy of 96.2%. By achieving accurate driver identification, the proposed end-to-end IoV system can aid in optimizing fleet management, vehicle security, personalized driving experiences, insurance, and risk assessment. This emphasizes its potential for road safety and more effective transportation management.
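A minimal sketch of the ingestion side of such a pipeline with the kafka-python client follows; the broker address, topic name, and message fields are assumptions, not the paper's configuration.

```python
# Hedged sketch: streaming OBD-style sensor readings to a Kafka topic,
# the ingestion pattern an IoV pipeline like this one builds on.
import json
from kafka import KafkaProducer  # kafka-python package

producer = KafkaProducer(
    bootstrap_servers="broker:9092",                     # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode())

reading = {"vehicle_id": "V42", "rpm": 2150, "speed_kmh": 63.5}
producer.send("iov-telemetry", reading)                  # hypothetical topic
producer.flush()
```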

RevDate: 2024-04-03

Li Y, Xue F, Li B, et al (2024)

Analyzing bivariate cross-trait genetic architecture in GWAS summary statistics with the BIGA cloud computing platform.

bioRxiv : the preprint server for biology.

As large-scale biobanks provide increasing access to deep phenotyping and genomic data, genome-wide association studies (GWAS) are rapidly uncovering the genetic architecture behind various complex traits and diseases. GWAS publications typically make their summary-level data (GWAS summary statistics) publicly available, enabling further exploration of genetic overlaps between phenotypes gathered from different studies and cohorts. However, systematically analyzing high-dimensional GWAS summary statistics for thousands of phenotypes can be both logistically challenging and computationally demanding. In this paper, we introduce BIGA (https://bigagwas.org/), a website that aims to offer unified data analysis pipelines and processed data resources for cross-trait genetic architecture analyses using GWAS summary statistics. We have developed a framework to implement statistical genetics tools on a cloud computing platform, combined with extensive curated GWAS data resources. Through BIGA, users can upload data, submit jobs, and share results, providing the research community with a convenient tool for consolidating GWAS data and generating new insights.

RevDate: 2024-04-10

Marini S, Barquero A, Wadhwani AA, et al (2024)

OCTOPUS: Disk-based, Multiplatform, Mobile-friendly Metagenomics Classifier.

bioRxiv : the preprint server for biology.

Portable genomic sequencers such as Oxford Nanopore's MinION enable real-time applications in both clinical and environmental health, e.g., detection of bacterial outbreaks. However, there is a bottleneck in the downstream analytics when bioinformatics pipelines are unavailable, e.g., when cloud processing is unreachable due to the absence of an Internet connection, or when only low-end computing devices can be carried on site. For instance, metagenomics classifiers usually require a large amount of memory or specific operating systems/libraries. In this work, we present platform-friendly software for portable metagenomic analysis of Nanopore data, the Oligomer-based Classifier of Taxonomic Operational and Pan-genome Units via Singletons (OCTOPUS). OCTOPUS is written in Java and reimplements several features of the popular Kraken2 and KrakenUniq software, with original components for improving metagenomics classification on incomplete/sampled reference databases (e.g., selection of bacteria of public health priority), making it ideal for running on smartphones or tablets. We indexed both OCTOPUS and Kraken2 on a bacterial database with ~4,000 reference genomes, then simulated a positive (bacterial genomes from the same species, but different genomes) and two negative (viral, mammalian) Nanopore test sets. On the bacterial test set, OCTOPUS yielded sensitivity and precision comparable to Kraken2 (94.4% and 99.8% versus 94.5% and 99.1%, respectively). On non-bacterial sequences (mammalian and viral), OCTOPUS dramatically decreased (4- to 16-fold) the false positive rate compared to Kraken2 (2.1% and 0.7% versus 8.2% and 11.2%, respectively). We also developed customized databases including viruses and the World Health Organization's set of bacteria of concern for drug resistance, tested with real Nanopore data on an Android smartphone. OCTOPUS is publicly available at https://github.com/DataIntellSystLab/OCTOPUS and https://github.com/Ruiz-HCI-Lab/OctopusMobile.
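A toy sketch of the oligomer (k-mer) matching idea behind classifiers like OCTOPUS and Kraken2 follows; real tools use compact disk-based indexes, while this in-memory dictionary version is illustrative only.

```python
# Toy sketch of k-mer based read classification: look each k-mer of a read
# up in a k-mer -> taxon index and report the majority taxon.
from collections import Counter

K = 21  # typical k-mer length in such classifiers

def kmers(seq):
    return (seq[i:i + K] for i in range(len(seq) - K + 1))

def classify(read, index):
    """index: dict mapping k-mer strings to taxon labels (toy stand-in)."""
    hits = Counter(index[k] for k in kmers(read) if k in index)
    return hits.most_common(1)[0][0] if hits else None  # None = unclassified
```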

RevDate: 2024-03-30

Du J, Dong G, Ning J, et al (2024)

Identity-based controlled delegated outsourcing data integrity auditing scheme.

Scientific reports, 14(1):7582.

With the continuous development of cloud computing, the application of cloud storage has become increasingly popular. To ensure the integrity and availability of cloud data, scholars have proposed several cloud data auditing schemes. Still, most fall short on outsourced data integrity, controlled outsourcing, and source file auditing. Therefore, we propose a controlled delegation outsourcing data integrity auditing scheme based on the identity-based encryption model. Our proposed scheme allows users to specify a dedicated agent to assist in uploading data to the cloud. These authorized proxies use recognizable identities for authentication and authorization, thus avoiding cumbersome certificate management in a secure distributed computing system. While solving the above problems, our scheme adopts a bucket-based red-black tree structure to efficiently realize dynamic updating of data, performing data updates and structural rebalancing efficiently. We define the security model of the scheme in detail and prove the scheme's security under the hardness assumption of the underlying problem. In the performance analysis section, the proposed scheme is analyzed experimentally in comparison with other schemes, and the results show that the proposed scheme is efficient and secure.

RevDate: 2024-03-28

Chen X, Xu G, Xu X, et al (2024)

Multicenter Hierarchical Federated Learning With Fault-Tolerance Mechanisms for Resilient Edge Computing Networks.

IEEE transactions on neural networks and learning systems, PP: [Epub ahead of print].

In the realm of federated learning (FL), the conventional dual-layered architecture, comprising a central parameter server and peripheral devices, often encounters challenges due to its significant reliance on the central server for communication and security. This dependence becomes particularly problematic in scenarios involving potential malfunctions of devices and servers. While existing device-edge-cloud hierarchical FL (HFL) models alleviate some dependence on central servers and reduce communication overheads, they primarily focus on load balancing within edge computing networks and fall short of achieving complete decentralization and edge-centric model aggregation. Addressing these limitations, we introduce the multicenter HFL (MCHFL) framework. This innovative framework replaces the traditional single central server architecture with a distributed network of robust global aggregation centers located at the edge, inherently enhancing fault tolerance crucial for maintaining operational integrity amidst edge network disruptions. Our comprehensive experiments with the MNIST, FashionMNIST, and CIFAR-10 datasets demonstrate the MCHFL's superior performance. Notably, even under high paralysis ratios of up to 50%, the MCHFL maintains high accuracy levels, with maximum accuracy reductions of only 2.60%, 5.12%, and 16.73% on these datasets, respectively. This performance significantly surpasses the notable accuracy declines observed in traditional single-center models under similar conditions. To the best of our knowledge, the MCHFL is the first edge multicenter FL framework with theoretical underpinnings. Our extensive experimental results across various datasets validate the MCHFL's effectiveness, showcasing its higher accuracy, faster convergence speed, and stronger robustness compared to single-center models, thereby establishing it as a pioneering paradigm in edge multicenter FL.
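The core aggregation step that an edge aggregation center would run can be sketched as weighted parameter averaging (FedAvg-style); the multicenter, fault-tolerant protocol described in the paper is considerably more involved, so this shows only the averaging primitive.

```python
# Hedged sketch of FedAvg-style aggregation: average client model weights,
# weighted by each client's sample count.
import torch

def fedavg(state_dicts, sample_counts):
    total = sum(sample_counts)
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(n * sd[key] for n, sd in zip(sample_counts, state_dicts)) / total
    return avg  # load into a model with model.load_state_dict(avg)
```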

RevDate: 2024-03-29

Lock C, Toh EMS, NC Keong (2024)

Structural volumetric and Periodic Table DTI patterns in Complex Normal Pressure Hydrocephalus-Toward the principles of a translational taxonomy.

Frontiers in human neuroscience, 18:1188533.

INTRODUCTION: We previously proposed a novel taxonomic framework to describe the diffusion tensor imaging (DTI) profiles of white matter tracts by their diffusivity and neural properties. We have shown the relevance of this strategy toward interpreting brain tissue signatures in Classic Normal Pressure Hydrocephalus vs. comparator cohorts of mild traumatic brain injury and Alzheimer's disease. In this iteration of the Periodic Table of DTI Elements, we examined patterns of tissue distortion in Complex NPH (CoNPH) and validated the methodology against an open-access dataset of healthy subjects, to expand its accessibility to a larger community.

METHODS: DTI measures for 12 patients with CoNPH with multiple comorbidities and 45 cognitively normal controls from the ADNI database were derived using the image processing pipeline on the brainlife.io open cloud computing platform. Using the Periodic Table algorithm, DTI profiles for CoNPH vs. controls were mapped according to injury patterns.

RESULTS: Structural volumes in most structures tested were significantly lower and the lateral ventricles higher in CoNPH vs. controls. In CoNPH, significantly lower fractional anisotropy (FA) and higher mean, axial, and radial diffusivities (MD, L1, and L2 and 3, respectively) were observed in white matter related to the lateral ventricles. Most diffusivity measures across supratentorial and infratentorial structures were significantly higher in CoNPH, with the largest differences in the cerebellum cortex. In subcortical deep gray matter structures, CoNPH and controls differed most significantly in the hippocampus, with the CoNPH group having a significantly lower FA and higher MD, L1, and L2 and 3. Cerebral and cerebellar white matter demonstrated more potential reversibility of injury compared to cerebral and cerebellar cortices.

DISCUSSION: The findings of widespread and significant reductions in subcortical deep gray matter structures, in comparison to healthy controls, support the hypothesis that Complex NPH cohorts retain imaging features associated with Classic NPH. The use of the algorithm of the Periodic Table allowed for greater consistency in the interpretation of DTI results by focusing on patterns of injury rather than an over-reliance on the interrogation of individual measures by statistical significance alone. Our aim is to provide a prototype that could be refined for an approach toward the concept of a "translational taxonomy."

RevDate: 2024-03-30

Kang S, Lee S, Y Jung (2024)

Design of Network-on-Chip-Based Restricted Coulomb Energy Neural Network Accelerator on FPGA Device.

Sensors (Basel, Switzerland), 24(6):.

Sensor applications in internet of things (IoT) systems, coupled with artificial intelligence (AI) technology, are becoming an increasingly significant part of modern life. For low-latency AI computation in IoT systems, there is a growing preference for edge-based computing over cloud-based alternatives. The restricted Coulomb energy neural network (RCE-NN) is a machine learning algorithm well-suited for implementation on edge devices due to its simple learning and recognition scheme. In addition, because the RCE-NN generates neurons as needed, it is easy to adjust the network structure and learn additional data. Therefore, the RCE-NN can provide edge-based real-time processing for various sensor applications. However, previous RCE-NN accelerators have limited scalability as the number of neurons increases. In this paper, we propose a network-on-chip (NoC)-based RCE-NN accelerator and present the results of implementation on a field-programmable gate array (FPGA). NoC is an effective solution for managing massive interconnections. The proposed RCE-NN accelerator utilizes a hierarchical-star (H-star) topology, which efficiently handles a large number of neurons, along with routers specifically designed for the RCE-NN. These approaches result in only a slight decrease in the maximum operating frequency as the number of neurons increases. Consequently, the maximum operating frequency of the proposed RCE-NN accelerator with 512 neurons increased by 126.1% compared to a previous RCE-NN accelerator. This enhancement was verified with two datasets for gas and sign language recognition, achieving accelerations of up to 54.8% in learning time and up to 45.7% in recognition time. The NoC scheme of the proposed RCE-NN accelerator is an appropriate solution for ensuring the scalability of the neural network while providing high-performance on-chip learning and recognition.

RevDate: 2024-03-30

Zhan Y, Xie W, Shi R, et al (2024)

Dynamic Privacy-Preserving Anonymous Authentication Scheme for Condition-Matching in Fog-Cloud-Based VANETs.

Sensors (Basel, Switzerland), 24(6):.

Secure group communication in Vehicle Ad hoc Networks (VANETs) over open channels remains a challenging task. To enable secure group communications with conditional privacy, it is necessary to establish a secure session using Authenticated Key Agreement (AKA). However, existing AKAs suffer from problems such as cross-domain dynamic group session key negotiation and heavy computational burdens on the Trusted Authority (TA) and vehicles. To address these challenges, we propose a dynamic privacy-preserving anonymous authentication scheme for condition matching in fog-cloud-based VANETs. The scheme employs general Elliptic Curve Cryptosystem (ECC) technology and fog-cloud computing methods to decrease computational overhead for On-Board Units (OBUs) and supports multiple TAs for improved service quality and robustness. Furthermore, certificateless technology alleviates TAs of key management burdens. The security analysis indicates that our solution satisfies the communication security and privacy requirements. Experimental simulations verify that our method achieves optimal overall performance with lower computational costs and smaller communication overhead compared to state-of-the-art solutions.
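As a hedged illustration of the ECC primitive underlying such key agreement, here is an ephemeral ECDH exchange with HKDF key derivation using Python's cryptography package; this is a generic sketch, not the paper's certificateless, condition-matching protocol.

```python
# Hedged sketch: ephemeral ECDH session-key agreement, the ECC building block
# on which AKA protocols like the one above are constructed.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

vehicle_priv = ec.generate_private_key(ec.SECP256R1())   # OBU side
fog_priv = ec.generate_private_key(ec.SECP256R1())       # fog node side

shared = vehicle_priv.exchange(ec.ECDH(), fog_priv.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"vanet-session").derive(shared)  # 256-bit session key
```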

RevDate: 2024-03-30
CmpDate: 2024-03-29

Yuan DY, Park JH, Li Z, et al (2024)

A New Cloud-Native Tool for Pharmacogenetic Analysis.

Genes, 15(3):.

BACKGROUND: The advancement of next-generation sequencing (NGS) technologies provides opportunities for large-scale Pharmacogenetic (PGx) studies and pre-emptive PGx testing to cover a wide range of genotypes present in diverse populations. However, NGS-based PGx testing is limited by the lack of comprehensive computational tools to support genetic data analysis and clinical decisions.

METHODS: Bioinformatics utilities specialized for human genomics and the latest cloud-based technologies were used to develop a bioinformatics pipeline for analyzing the genomic sequence data and reporting PGx genotypes. A database was created and integrated in the pipeline for filtering the actionable PGx variants and clinical interpretations. Strict quality verification procedures were conducted on variant calls with the whole genome sequencing (WGS) dataset of the 1000 Genomes Project (G1K). The accuracy of PGx allele identification was validated using the WGS dataset of the Pharmacogenetics Reference Materials from the Centers for Disease Control and Prevention (CDC).

RESULTS: The newly created bioinformatics pipeline, Pgxtools, can analyze genomic sequence data, identify actionable variants in 13 PGx relevant genes, and generate reports annotated with specific interpretations and recommendations based on clinical practice guidelines. Verified with two independent methods, we have found that Pgxtools consistently identifies variants more accurately than the results in the G1K dataset on GRCh37 and GRCh38.

CONCLUSIONS: Pgxtools provides an integrated workflow for large-scale genomic data analysis and PGx clinical decision support. Implemented with cloud-native technologies, it is highly portable in a wide variety of environments from a single laptop to High-Performance Computing (HPC) clusters and cloud platforms for different production scales and requirements.

RevDate: 2024-03-29

Kukkar A, Kumar Y, Sandhu JK, et al (2024)

DengueFog: A Fog Computing-Enabled Weighted Random Forest-Based Smart Health Monitoring System for Automatic Dengue Prediction.

Diagnostics (Basel, Switzerland), 14(6):.

Dengue is a distinctive and fatal infectious disease that spreads through female mosquitoes called Aedes aegypti. It is a notable concern for developing countries due to its low diagnosis rate. Dengue has an astoundingly high mortality level compared to other diseases due to tremendous platelet depletion, and it can hence be categorized as life-threatening among fevers of its class. Additionally, it has been shown that dengue fever shares many of the same symptoms as other flu-based fevers. Meanwhile, the research community is closely monitoring the popular research fields related to IoT, fog, and cloud computing for the diagnosis and prediction of diseases, and a number of healthcare systems have been constructed using IoT, fog, and cloud-based technologies. Accordingly, in this study, a DengueFog monitoring system was created based on fog computing for the prediction and detection of dengue sickness. The proposed DengueFog system includes a weighted random forest (WRF) classifier to monitor and predict dengue infection. The proposed system's efficacy was evaluated using data on dengue infection gathered between 2016 and 2018 from several hospitals in the Delhi-NCR region. The accuracy, F-value, recall, precision, error rate, and specificity metrics were used to assess the simulation results of the suggested monitoring system. The proposed DengueFog monitoring system with WRF was demonstrated to outperform traditional classifiers.
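A hedged sketch of a weighted random forest with scikit-learn follows, where class_weight stands in for the paper's WRF weighting (the exact scheme is not given in the abstract); the feature matrix and labels are hypothetical.

```python
# Hedged sketch: class-weighted random forest as a stand-in for the WRF
# classifier; class_weight="balanced" up-weights rare dengue-positive cases.
from sklearn.ensemble import RandomForestClassifier

wrf = RandomForestClassifier(n_estimators=200,
                             class_weight="balanced",
                             random_state=0)
# wrf.fit(X_symptoms, y_dengue)   # hypothetical feature matrix and labels
# proba = wrf.predict_proba(X_new)[:, 1]
```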

RevDate: 2024-03-29

Ali I, Wassif K, H Bayomi (2024)

Dimensionality reduction for images of IoT using machine learning.

Scientific reports, 14(1):7205.

Sensors, wearables, mobile devices, and other Internet of Things (IoT) devices are becoming increasingly integrated into all aspects of our lives. They are capable of gathering enormous amounts of data, such as image data, which can then be sent to the cloud for processing. However, this results in an increase in network traffic and latency. To overcome these difficulties, edge computing has been proposed as a paradigm for computing that brings processing closer to the location where data is produced. This paper explores the merging of cloud and edge computing for IoT and investigates approaches using machine learning for dimensionality reduction of images on the edge, employing the autoencoder deep learning-based approach and principal component analysis (PCA). The encoded data is then sent to the cloud server, where it is used directly for any machine learning task without significantly impacting the accuracy of the data processed in the cloud. The proposed approach has been evaluated on an object detection task using a set of 4000 images randomly chosen from three datasets: COCO, human detection, and HDA datasets. Results show that a 77% reduction in data did not have a significant impact on the object detection task's accuracy.
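A minimal sketch of the PCA branch of the edge-side reduction described above follows; the image matrix is random stand-in data, and the 95% explained-variance target is an illustrative choice rather than the paper's setting.

```python
# Minimal sketch: PCA compresses flattened frames at the edge; only the
# low-dimensional encoding is sent to the cloud instead of raw pixels.
import numpy as np
from sklearn.decomposition import PCA

images = np.random.rand(1000, 64 * 64)      # stand-in for flattened frames
pca = PCA(n_components=0.95)                # keep 95% of the variance
encoded = pca.fit_transform(images)         # payload uploaded to the cloud
print(f"data reduction: {1 - encoded.shape[1] / images.shape[1]:.0%}")
```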

RevDate: 2024-03-30

Huettmann F, Andrews P, Steiner M, et al (2024)

A super SDM (species distribution model) 'in the cloud' for better habitat-association inference with a 'big data' application of the Great Gray Owl for Alaska.

Scientific reports, 14(1):7213.

The currently available distribution and range maps for the Great Grey Owl (GGOW; Strix nebulosa) are ambiguous, contradictory, imprecise, outdated, often hand-drawn, and thus not quantified, not based on data, or not scientific. In this study, we present a proof of concept with a biological application for technical and biological workflow progress on the latest global open access 'Big Data' sharing, using open-source methods of R and geographic information systems (OGIS and QGIS) assessed with six recent multi-evidence citizen-science sightings of the GGOW. This proposed workflow can be applied for quantified inference for any species-habitat model such as those typically applied with species distribution models (SDMs). Using Random Forest (an ensemble-type model of Machine Learning following Leo Breiman's approach of inference from predictions), we present a Super SDM for GGOWs in Alaska running on Oracle Cloud Infrastructure (OCI). These Super SDMs were based on the best publicly available data (410 occurrences + 1% new assessment sightings) and over 100 environmental GIS habitat predictors ('Big Data'). The compiled global open access data and the associated workflow overcome for the first time the limitations of traditionally used PCs and laptops. It breaks new ground and has real-world implications for conservation and land management for the GGOW, for Alaska, and for other species worldwide as a 'new' baseline. As this research field remains dynamic, Super SDMs can have limits and are not the ultimate and final statement on species-habitat associations; however, they summarize all publicly available data and information on a topic in a quantified and testable fashion, allowing fine-tuning and improvements as needed. At minimum, they allow for low-cost rapid assessment and a great leap forward to be more ecological and inclusive of all information at hand. Using GGOWs, we aim here to correct the perception of this species towards a more inclusive, holistic, and scientifically correct assessment of this urban-adapted owl in the Anthropocene, rather than a mysterious wilderness-inhabiting species (aka the 'Phantom of the North'). Such a Super SDM has never been created for any bird species before and opens new perspectives for impact assessment policy and global sustainability.

RevDate: 2024-03-28
CmpDate: 2024-03-27

Budge J, Carrell T, Yaqub M, et al (2024)

The ARIA trial protocol: a randomised controlled trial to assess the clinical, technical, and cost-effectiveness of a cloud-based, ARtificially Intelligent image fusion system in comparison to standard treatment to guide endovascular Aortic aneurysm repair.

Trials, 25(1):214.

BACKGROUND: Endovascular repair of aortic aneurysmal disease is established due to perceived advantages in patient survival, reduced postoperative complications, and shorter hospital lengths of stay. High spatial and contrast resolution 3D CT angiography images are used to plan the procedures and inform device selection and manufacture, but in standard care, the surgery is performed using image-guidance from 2D X-ray fluoroscopy with injection of nephrotoxic contrast material to visualise the blood vessels. This study aims to assess the benefit to patients, practitioners, and the health service of a novel image fusion medical device (Cydar EV), which allows this high-resolution 3D information to be available to operators at the time of surgery.

METHODS: The trial is a multi-centre, open-label, two-armed randomised controlled clinical trial of 340 patients, randomised 1:1 to either standard treatment in endovascular aneurysm repair or treatment using Cydar EV, a CE-marked medical device comprising cloud computing, augmented intelligence, and computer vision. The primary outcome is procedural time, with secondary outcomes of procedural efficiency, technical effectiveness, patient outcomes, and cost-effectiveness. Patients with a clinical diagnosis of AAA or TAAA suitable for endovascular repair and able to provide written informed consent will be invited to participate.

DISCUSSION: This trial is the first randomised controlled trial evaluating advanced image fusion technology in endovascular aortic surgery and is well placed to evaluate the effect of this technology on patient outcomes and cost to the NHS.

TRIAL REGISTRATION: ISRCTN13832085. Dec. 3, 2021.

RevDate: 2024-03-28
CmpDate: 2024-03-27

Zhang S, Li H, Jing Q, et al (2024)

Anesthesia decision analysis using a cloud-based big data platform.

European journal of medical research, 29(1):201.

Big data technologies have proliferated since the dawn of the cloud-computing era. Traditional data storage, extraction, transformation, and analysis technologies have thus become unsuitable for the large volume, diversity, high processing speed, and low value density of big data in medical strategies, which require the development of novel big data application technologies. In this regard, we investigated the most recent big data platform breakthroughs in anesthesiology and designed an anesthesia decision model based on a cloud system for storing and analyzing massive amounts of data from anesthetic records. The presented Anesthesia Decision Analysis Platform performs distributed computing on medical records via several programming tools, and provides services such as keyword search, data filtering, and basic statistics to reduce inaccurate and subjective judgments by decision-makers. Importantly, it has the potential to improve anesthetic strategy and create individualized anesthesia decisions, lowering the likelihood of perioperative complications.

RevDate: 2024-03-26

Mukuka A (2024)

Data on mathematics teacher educators' proficiency and willingness to use technology: A structural equation modelling analysis.

Data in brief, 54:110307.

The role of Mathematics Teacher Educators (MTEs) in preparing future teachers to effectively integrate technology into their mathematics instruction is of paramount importance yet remains an underexplored domain. Technology has the potential to enhance the development of 21st-century skills, such as problem-solving and critical thinking, which are essential for students in the era of the fourth industrial revolution. However, the rapid evolution of technology and the emergence of new trends like data analytics, the Internet of Things, machine learning, cloud computing, and artificial intelligence present new challenges in the realm of mathematics teaching and learning. Consequently, MTEs need to equip prospective teachers with the knowledge and skills to harness technology in innovative ways within their future mathematics classrooms. This paper presents and describes data from a survey of 104 MTEs in Zambia. The study focuses on MTEs' proficiency, perceived usefulness, perceived ease of use, and willingness to incorporate technology in their classrooms. This data-driven article aims to unveil patterns and trends within the dataset, with the objective of offering insights rather than drawing definitive conclusions. The article also highlights the data collection process and outlines the procedure for assessing the measurement model of the hypothesised relationships among variables through structural equation modelling analysis. The data described in this article not only sheds light on the current landscape but also serves as a valuable resource for mathematics teacher training institutions and other stakeholders seeking to understand the requisites for MTEs to foster technological skills among prospective teachers of mathematics.

RevDate: 2024-04-17
CmpDate: 2024-04-17

Tadi AA, Alhadidi D, L Rueda (2024)

PPPCT: Privacy-Preserving framework for Parallel Clustering Transcriptomics data.

Computers in biology and medicine, 173:108351.

Single-cell transcriptomics data provides crucial insights into patients' health, yet poses significant privacy concerns. Genomic data privacy attacks can have deep implications, encompassing not only the patients' health information but also extending widely to compromise their families' privacy. Moreover, the permanence of leaked data exacerbates the challenges, making retraction an impossibility. While extensive efforts have been directed towards clustering single-cell transcriptomics data, addressing critical challenges, especially in the realm of privacy, remains pivotal. This paper introduces an efficient, fast, privacy-preserving approach for clustering single-cell RNA-sequencing (scRNA-seq) datasets. The key contributions include ensuring data privacy, achieving high-quality clustering, accommodating the high dimensionality inherent in the datasets, and maintaining reasonable computation time for large-scale datasets. Our proposed approach utilizes the map-reduce scheme to parallelize clustering, addressing intensive calculation challenges. Intel Software Guard eXtension (SGX) processors are used to ensure the security of sensitive code and data during processing. Additionally, the approach incorporates a logarithm transformation as a preprocessing step, employs non-negative matrix factorization for dimensionality reduction, and utilizes parallel k-means for clustering. The approach fully leverages the computing capabilities of all processing resources within a secure private cloud environment. Experimental results demonstrate the efficacy of our approach in preserving patient privacy while surpassing state-of-the-art methods in both clustering quality and computation time. Our method consistently achieves a minimum of 7% higher Adjusted Rand Index (ARI) than existing approaches, contingent on dataset size. Additionally, due to parallel computations and dimensionality reduction, our approach exhibits efficiency, converging to very good results in less than 10 seconds for a scRNA-seq dataset with 5000 genes and 6000 cells when prioritizing privacy, and in under two seconds without privacy considerations. Availability and implementation: code and datasets are available at https://github.com/University-of-Windsor/PPPCT.
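
A simplified, non-secure sketch of the preprocessing and clustering stages named above (log transform, NMF dimensionality reduction, k-means). The SGX enclaves and map-reduce parallelisation are omitted, and the random count matrix is a stand-in for a real cells-by-genes expression table.

```python
# Toy pipeline: log transform -> NMF -> k-means, per the stages above.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(600, 500))   # toy cells x genes matrix

log_counts = np.log1p(counts)                # logarithm transformation
factors = NMF(n_components=20, init="nndsvda",
              max_iter=200).fit_transform(log_counts)   # reduce dimensions
labels = KMeans(n_clusters=8, n_init=10).fit_predict(factors)
print("cluster labels of first 10 cells:", labels[:10])
```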

RevDate: 2024-03-22

Hajiaghabozorgi M, Fischbach M, Albrecht M, et al (2024)

BridGE: a pathway-based analysis tool for detecting genetic interactions from GWAS.

Nature protocols [Epub ahead of print].

Genetic interactions have the potential to modulate phenotypes, including human disease. In principle, genome-wide association studies (GWAS) provide a platform for detecting genetic interactions; however, traditional methods for identifying them, which tend to focus on testing individual variant pairs, lack statistical power. In this protocol, we describe a novel computational approach, called Bridging Gene sets with Epistasis (BridGE), for discovering genetic interactions between biological pathways from GWAS data. We present a Python-based implementation of BridGE along with instructions for its application to a typical human GWAS cohort. The major stages include initial data processing and quality control, construction of a variant-level genetic interaction network, measurement of pathway-level genetic interactions, evaluation of statistical significance using sample permutations and generation of results in a standardized output format. The BridGE software pipeline includes options for running the analysis on multiple cores and multiple nodes for users who have access to computing clusters or a cloud computing environment. In a cluster computing environment with 10 nodes and 100 GB of memory per node, the method can be run in less than 24 h for typical human GWAS cohorts. Using BridGE requires knowledge of running Python programs and basic shell script programming experience.
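
The sample-permutation stage can be sketched as below; this is a generic permutation test built around an invented placeholder statistic, not BridGE's actual pathway-level interaction measure.

```python
# Generic permutation test: shuffle case/control labels to build a null
# distribution for a pathway-level statistic.
import numpy as np

def pathway_stat(burden, labels):
    # Placeholder: difference in mean pathway risk-allele burden between
    # cases (1) and controls (0). BridGE's real measure is more involved.
    return burden[labels == 1].mean() - burden[labels == 0].mean()

def permutation_pvalue(burden, labels, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = pathway_stat(burden, labels)
    null = np.array([pathway_stat(burden, rng.permutation(labels))
                     for _ in range(n_perm)])
    # Add-one correction keeps the p-value strictly positive.
    return (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
burden = rng.integers(0, 20, size=500).astype(float)  # toy per-sample burden
labels = rng.integers(0, 2, size=500)                 # toy case/control labels
print("permutation p-value:", permutation_pvalue(burden, labels))
```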

RevDate: 2024-04-06
CmpDate: 2024-03-21

Sahu KS, Dubin JA, Majowicz SE, et al (2024)

Revealing the Mysteries of Population Mobility Amid the COVID-19 Pandemic in Canada: Comparative Analysis With Internet of Things-Based Thermostat Data and Google Mobility Insights.

JMIR public health and surveillance, 10:e46903.

BACKGROUND: The COVID-19 pandemic necessitated public health policies to limit human mobility and curb infection spread. Human mobility, which is often underestimated, plays a pivotal role in health outcomes, impacting both infectious and chronic diseases. Collecting precise mobility data is vital for understanding human behavior and informing public health strategies. Google's GPS-based location tracking, which is compiled in Google Mobility Reports, became the gold standard for monitoring outdoor mobility during the pandemic. However, indoor mobility remains underexplored.

OBJECTIVE: This study investigates in-home mobility data from ecobee's smart thermostats in Canada (February 2020 to February 2021) and compares it directly with Google's residential mobility data. By assessing the suitability of smart thermostat data, we aim to shed light on indoor mobility patterns, contributing valuable insights to public health research and strategies.

METHODS: Motion sensor data were acquired from the ecobee "Donate Your Data" initiative via Google's BigQuery cloud platform. Concurrently, residential mobility data were sourced from the Google Mobility Report. This study centered on 4 Canadian provinces-Ontario, Quebec, Alberta, and British Columbia-during the period from February 15, 2020, to February 14, 2021. Data processing, analysis, and visualization were conducted on the Microsoft Azure platform using Python (Python Software Foundation) and R programming languages (R Foundation for Statistical Computing). Our investigation involved assessing changes in mobility relative to the baseline in both data sets, with the strength of this relationship assessed using Pearson and Spearman correlation coefficients. We scrutinized daily, weekly, and monthly variations in mobility patterns across the data sets and performed anomaly detection for further insights.

RESULTS: The results revealed noteworthy week-to-week and month-to-month shifts in population mobility within the chosen provinces, aligning with pandemic-driven policy adjustments. Notably, the ecobee data exhibited a robust correlation with Google's data set. Examination of Google's daily patterns detected more pronounced mobility fluctuations during weekdays, a trend not mirrored in the ecobee data. Anomaly detection successfully identified substantial mobility deviations coinciding with policy modifications and cultural events.

CONCLUSIONS: This study's findings illustrate the substantial influence of the Canadian stay-at-home and work-from-home policies on population mobility. This impact was discernible through both Google's out-of-house residential mobility data and ecobee's in-house smart thermostat data. As such, we deduce that smart thermostats represent a valid tool for facilitating intelligent monitoring of population mobility in response to policy-driven shifts.
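
A brief sketch of the correlation step described in the methods above, using synthetic daily series in place of the ecobee and Google mobility data.

```python
# Compare two daily mobility series with Pearson and Spearman coefficients.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(7)
google = rng.normal(0, 1, 365).cumsum()      # toy % change from baseline
ecobee = google + rng.normal(0, 2, 365)      # correlated in-home signal

r_p, p_p = pearsonr(ecobee, google)
r_s, p_s = spearmanr(ecobee, google)
print(f"Pearson r={r_p:.2f} (p={p_p:.1e}), Spearman rho={r_s:.2f}")
```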

RevDate: 2024-03-19

Wang H, Chen H, Y Wang (2024)

Analysis of Hot Topics Regarding Global Smart Elderly Care Research - 1997-2021.

China CDC weekly, 6(9):157-161.

With the assistance of the internet, big data, cloud computing, and other technologies, the concept of smart elderly care has emerged.

WHAT IS ADDED BY THIS REPORT?: This study presents information on the countries or regions that have conducted research on smart elderly care, as well as identifies global hotspots and development trends in this field.

The results of this study suggest that future research should focus on fall detection, health monitoring, and guidance systems that are user-friendly and contribute to the creation of smarter, safer communities for the well-being of the elderly.

RevDate: 2024-04-12

Li J, Xiong Y, Feng S, et al (2024)

CloudProteoAnalyzer: scalable processing of big data from proteomics using cloud computing.

Bioinformatics advances, 4(1):vbae024.

SUMMARY: Shotgun proteomics is widely used in many systems biology studies to determine the global protein expression profiles of tissues, cultures, and microbiomes. Many non-distributed computer algorithms have been developed for users to process proteomics data on their local computers. However, the amount of data acquired in a typical proteomics study has grown rapidly in recent years, owing to the increasing throughput of mass spectrometry and the expanding scale of study designs. This presents a big data challenge for researchers to process proteomics data in a timely manner. To overcome this challenge, we developed a cloud-based parallel computing application to offer end-to-end proteomics data analysis software as a service (SaaS). A web interface was provided for users to upload mass spectrometry-based proteomics data, configure parameters, submit jobs, and monitor job status. The data processing was distributed across multiple nodes in a supercomputer to achieve scalability for large datasets. Our study demonstrated SaaS for proteomics as a viable solution for the community to scale up data processing using cloud computing.

This application is available online at https://sipros.oscer.ou.edu/ or https://sipros.unt.edu for free use. The source code is available at https://github.com/Biocomputing-Research-Group/CloudProteoAnalyzer under the GPL version 3.0 license.

RevDate: 2024-03-19
CmpDate: 2024-03-18

Clements J, Goina C, Hubbard PM, et al (2024)

NeuronBridge: an intuitive web application for neuronal morphology search across large data sets.

BMC bioinformatics, 25(1):114.

BACKGROUND: Neuroscience research in Drosophila is benefiting from large-scale connectomics efforts using electron microscopy (EM) to reveal all the neurons in a brain and their connections. To exploit this knowledge base, researchers relate a connectome's structure to neuronal function, often by studying individual neuron cell types. Vast libraries of fly driver lines expressing fluorescent reporter genes in sets of neurons have been created and imaged using confocal light microscopy (LM), enabling the targeting of neurons for experimentation. However, creating a fly line for driving gene expression within a single neuron found in an EM connectome remains a challenge, as it typically requires identifying a pair of driver lines where only the neuron of interest is expressed in both. This task and other emerging scientific workflows require finding similar neurons across large data sets imaged using different modalities.

RESULTS: Here, we present NeuronBridge, a web application for easily and rapidly finding putative morphological matches between large data sets of neurons imaged using different modalities. We describe the functionality and construction of the NeuronBridge service, including its user-friendly graphical user interface (GUI), extensible data model, serverless cloud architecture, and massively parallel image search engine.

CONCLUSIONS: NeuronBridge fills a critical gap in the Drosophila research workflow and is used by hundreds of neuroscience researchers around the world. We offer our software code, open APIs, and processed data sets for integration and reuse, and provide the application as a service at http://neuronbridge.janelia.org.

RevDate: 2024-03-15
CmpDate: 2024-03-14

Tripathi A, Waqas A, Venkatesan K, et al (2024)

Building Flexible, Scalable, and Machine Learning-Ready Multimodal Oncology Datasets.

Sensors (Basel, Switzerland), 24(5):.

The advancements in data acquisition, storage, and processing techniques have resulted in the rapid growth of heterogeneous medical data. Integrating radiological scans, histopathology images, and molecular information with clinical data is essential for developing a holistic understanding of the disease and optimizing treatment. The need for integrating data from multiple sources is further pronounced in complex diseases such as cancer for enabling precision medicine and personalized treatments. This work proposes Multimodal Integration of Oncology Data System (MINDS)-a flexible, scalable, and cost-effective metadata framework for efficiently fusing disparate data from public sources such as the Cancer Research Data Commons (CRDC) into an interconnected, patient-centric framework. MINDS consolidates over 41,000 cases from across repositories while achieving a high compression ratio relative to the 3.78 PB source data size. It offers sub-5-s query response times for interactive exploration. MINDS offers an interface for exploring relationships across data types and building cohorts for developing large-scale multimodal machine learning models. By harmonizing multimodal data, MINDS aims to potentially empower researchers with greater analytical ability to uncover diagnostic and prognostic insights and enable evidence-based personalized care. MINDS tracks granular end-to-end data provenance, ensuring reproducibility and transparency. The cloud-native architecture of MINDS can handle exponential data growth in a secure, cost-optimized manner while ensuring substantial storage optimization, replication avoidance, and dynamic access capabilities. Auto-scaling, access controls, and other mechanisms guarantee pipelines' scalability and security. MINDS overcomes the limitations of existing biomedical data silos via an interoperable metadata-driven approach that represents a pivotal step toward the future of oncology data integration.

RevDate: 2024-03-15

Gaba P, Raw RS, Kaiwartya O, et al (2024)

B-SAFE: Blockchain-Enabled Security Architecture for Connected Vehicle Fog Environment.

Sensors (Basel, Switzerland), 24(5):.

Vehicles are no longer stand-alone mechanical entities due to the advancements in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication-centric Internet of Connected Vehicles (IoV) frameworks. However, the advancement in connected vehicles leads to another serious security threat, online vehicle hijacking, where the steering control of vehicles can be hacked online. The feasibility of traditional security solutions in IoV environments is very limited, considering the intermittent network connectivity to cloud servers and vehicle-centric computing capability constraints. In this context, this paper presents a Blockchain-enabled Security Architecture for a connected vehicular Fog networking Environment (B-SAFE). Firstly, blockchain security and vehicular fog networking are introduced as preliminaries of the framework. Secondly, a three-layer architecture of B-SAFE is presented, focusing on vehicular communication, blockchain at fog nodes, and the cloud as trust and reward management for vehicles. Thirdly, details of the blockchain implementation at fog nodes are presented, along with a flowchart and algorithm. The performance evaluation of the proposed B-SAFE framework attests to its benefits in terms of trust, reward points, and threshold calculation.
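
For readers unfamiliar with the ledger primitive involved, a toy hash-linked chain is sketched below; the consensus, trust scoring, and reward management of the B-SAFE design are deliberately omitted, and all field names are illustrative.

```python
# Minimal hash-linked ledger: each block commits to the previous block's
# hash, so tampering with history invalidates every later link.
import hashlib, json, time

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"vehicle": "V42", "event": "V2I msg"},  # toy record
                        chain[-1]["hash"]))

assert chain[1]["prev"] == chain[0]["hash"]   # integrity of the link
```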

RevDate: 2024-03-15

Vercheval N, Royen R, Munteanu A, et al (2024)

PCGen: A Fully Parallelizable Point Cloud Generative Model.

Sensors (Basel, Switzerland), 24(5):.

Generative models have the potential to revolutionize 3D extended reality. A primary obstacle is that augmented and virtual reality need real-time computing. Current state-of-the-art point cloud random generation methods are not fast enough for these applications. We introduce a vector-quantized variational autoencoder model (VQVAE) that can synthesize high-quality point clouds in milliseconds. Unlike previous work on VQVAEs, our model offers a compact sample representation suitable for conditional generation and data exploration, with potential applications in rapid prototyping. We achieve this result by combining architectural improvements with an innovative approach to probabilistic random generation. First, we rethink current parallel point cloud autoencoder structures and propose several solutions to improve robustness, efficiency, and reconstruction quality. Notable contributions in the decoder architecture include an innovative computation layer to process the shape semantic information, an attention mechanism that helps the model focus on different areas, and a filter to cover possible sampling errors. Second, we introduce a parallel sampling strategy for VQVAE models consisting of a double encoding system, where a variational autoencoder learns to generate the complex discrete distribution of the VQVAE, not only allowing quick inference but also describing the shape with a few global variables. We compare the proposed decoder and our VQVAE model with established and concurrent work, and we demonstrate, one by one, the validity of each individual contribution.

RevDate: 2024-03-15

AlSaleh I, Al-Samawi A, L Nissirat (2024)

Novel Machine Learning Approach for DDoS Cloud Detection: Bayesian-Based CNN and Data Fusion Enhancements.

Sensors (Basel, Switzerland), 24(5):.

Cloud computing has revolutionized the information technology landscape, offering businesses the flexibility to adapt to diverse business models without the need for costly on-site servers and network infrastructure. A recent survey reveals that 95% of enterprises have already embraced cloud technology, with 79% of their workloads migrating to cloud environments. However, the deployment of cloud technology introduces significant cybersecurity risks, including network security vulnerabilities, data access control challenges, and the ever-looming threat of cyber-attacks such as Distributed Denial of Service (DDoS) attacks, which pose substantial risks to both cloud and network security. While Intrusion Detection Systems (IDS) have traditionally been employed for DDoS attack detection, prior studies have been constrained by various limitations. In response to these challenges, we present an innovative machine learning approach for DDoS cloud detection, known as the Bayesian-based Convolutional Neural Network (BaysCNN) model. Leveraging the CICDDoS2019 dataset, which encompasses 88 features, we employ Principal Component Analysis (PCA) for dimensionality reduction. Our BaysCNN model comprises 19 layers of analysis, forming the basis for training and validation. Our experimental findings conclusively demonstrate that the BaysCNN model significantly enhances the accuracy of DDoS cloud detection, achieving an impressive average accuracy rate of 99.66% across 13 multi-class attacks. To further elevate the model's performance, we introduce the Data Fusion BaysFusCNN approach, encompassing 27 layers. By leveraging Bayesian methods to estimate uncertainties and integrating features from multiple sources, this approach attains an even higher average accuracy of 99.79% across the same 13 multi-class attacks. Our proposed methodology not only offers valuable insights for the development of robust machine learning-based intrusion detection systems but also enhances the reliability and scalability of IDS in cloud computing environments. This empowers organizations to proactively mitigate security risks and fortify their defenses against malicious cyber-attacks.
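
A small sketch of the PCA preprocessing step mentioned above, assuming a standardised 88-feature flow matrix like CICDDoS2019; the random data and the 95%-variance cut-off are illustrative choices, and the downstream BaysCNN layers are not reproduced.

```python
# PCA dimensionality reduction of 88 network-flow features before a CNN.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(1000, 88))   # toy flows x features
X_std = StandardScaler().fit_transform(X)              # PCA needs centred data
pca = PCA(n_components=0.95)                           # keep 95% of variance
X_red = pca.fit_transform(X_std)
print(X_red.shape, pca.explained_variance_ratio_.sum())
```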

RevDate: 2024-03-13

Yakubu B, Appiah EM, AF Adu (2024)

Pangenome Analysis of Helicobacter pylori Isolates from Selected Areas of Africa Indicated Diverse Antibiotic Resistance and Virulence Genes.

International journal of genomics, 2024:5536117.

The challenge facing Helicobacter pylori (H. pylori) infection management in some parts of Africa is the evolution of drug-resistant species, the lack of a gold standard in diagnostic methods, and the ineffectiveness of current vaccines against the bacteria. It is being established that even though clinical consequences linked to the bacteria vary geographically, there is rather a generic approach to treatment. This situation has remained problematic in the successful fight against the bacteria in parts of Africa. As a result, this study compared the genomes of selected H. pylori isolates from selected areas of Africa and evaluated their virulence and antibiotic drug resistance, distinguishing those that are highly pathogenic and associated with specific clinical outcomes from those that are less virulent and rarely associated with clinical outcomes. A total of 146 genomes of H. pylori isolated from selected locations in Africa were sampled, and bioinformatic tools such as Abricate, CARD RGI, MLST, Prokka, Roary, Phandango, Google Sheets, and iTOL were used to compare the isolates and their antibiotic resistance or susceptibility. Over 20,000 virulence and AMR genes were observed. About 95% of the isolates were genetically diverse, 90% of the isolates harbored shell genes, and 50% harbored cloud and core genes. Some isolates did not retain the cagA and vacA genes. Resistance genes against clarithromycin, metronidazole, amoxicillin, and tinidazole were common, alongside the virulence genes vacA, cagA, oip, and bab. Conclusion: this study found both virulence and AMR genes, in differing quantities, in all H. pylori strains across the selected geographies in Africa. MLST, pangenome, and ORF analyses showed disparities among the isolates, which in general could imply diversity in genetics, evolution, and protein production. Therefore, generic administration of antibiotics such as clarithromycin, amoxicillin, and erythromycin as treatment methods in the African subregion could be contributing to the spread of the bacterium's antibiotic resistance.

RevDate: 2024-03-13

Tripathy SS, Bebortta S, Chowdhary CL, et al (2024)

FedHealthFog: A federated learning-enabled approach towards healthcare analytics over fog computing platform.

Heliyon, 10(5):e26416.

The emergence of the federated learning (FL) technique in fog-enabled healthcare systems has enhanced privacy by safeguarding sensitive patient information across heterogeneous computing platforms. In this paper, we introduce the FedHealthFog framework, which was meticulously developed to overcome the difficulties of distributed learning in resource-constrained IoT-enabled healthcare systems, particularly those sensitive to delays and energy efficiency. Conventional federated learning approaches face challenges stemming from substantial compute requirements and significant communication costs. This is primarily due to their reliance on a single server for the aggregation of global data, which results in inefficient training models. We present a transformational approach to address these problems by elevating strategically placed fog nodes to the position of local aggregators within the federated learning architecture. A sophisticated greedy heuristic technique is used to optimize the choice of a fog node as the global aggregator in each communication cycle between edge devices and the cloud. The FedHealthFog system achieves reductions in communication latency of 87.01%, 26.90%, and 71.74%, and in energy consumption of 57.98%, 34.36%, and 35.37%, respectively, compared with three benchmark algorithms analyzed in this study. The effectiveness of FedHealthFog is strongly supported by the outcomes of our experiments compared to cutting-edge alternatives, while simultaneously reducing the number of global aggregation cycles. These findings highlight FedHealthFog's potential to transform federated learning in resource-constrained IoT environments for delay-sensitive applications.
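
A toy sketch of the two ideas above, under stated assumptions: a greedy rule picks the fog node that minimises a combined latency-energy score to act as the round's aggregator, and the selected node performs FedAvg-style weighted averaging of edge-device models. The score weights, model sizes, and data are invented; the paper's heuristic and objectives differ in detail.

```python
# Greedy fog-aggregator selection plus FedAvg-style weight averaging.
import numpy as np

def pick_aggregator(latency, energy):
    # Greedy choice: minimise a weighted latency + energy cost (toy weights).
    return int(np.argmin(0.5 * latency + 0.5 * energy))

def fedavg(client_weights, client_sizes):
    # Average client models, weighted by local dataset sizes.
    return np.average(client_weights, axis=0,
                      weights=np.asarray(client_sizes, dtype=float))

rng = np.random.default_rng(0)
fog_latency, fog_energy = rng.random(5), rng.random(5)   # 5 fog nodes
weights = rng.normal(size=(10, 128))    # 10 edge devices, 128-dim models
sizes = rng.integers(50, 500, size=10)  # local dataset sizes

agg = pick_aggregator(fog_latency, fog_energy)
global_model = fedavg(weights, sizes)
print(f"fog node {agg} aggregates; global weight norm "
      f"{np.linalg.norm(global_model):.3f}")
```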

RevDate: 2024-03-13
CmpDate: 2024-03-13

Shafi I, Din S, Farooq S, et al (2024)

Design and development of patient health tracking, monitoring and big data storage using Internet of Things and real time cloud computing.

PloS one, 19(3):e0298582.

With the outbreak of the COVID-19 pandemic, social isolation and quarantine have become commonplace across the world. IoT health monitoring solutions eliminate the need for regular doctor visits and interactions among patients and medical personnel. Many patients in wards or intensive care units require continuous monitoring of their health. Continuous patient monitoring is a hectic practice in hospitals with limited staff; in a pandemic situation like COVID-19, it becomes much more difficult when hospitals are working at full capacity and there is still a risk of medical workers being infected. In this study, we propose an Internet of Things (IoT)-based patient health monitoring system that collects real-time data on important health indicators, such as pulse rate, blood oxygen saturation, and body temperature, and can be expanded to include more parameters. Our system comprises a hardware component that collects and transmits data from sensors to a cloud-based storage system, where the data can be accessed and analyzed by healthcare specialists. An ESP-32 microcontroller interfaces with the multiple sensors and wirelessly transmits the collected data to the cloud storage system. A pulse oximeter measures blood oxygen saturation and body temperature, and a heart rate monitor measures pulse rate. A web-based interface is also implemented, allowing healthcare practitioners to access and visualize the collected data in real time, making remote patient monitoring easier. Overall, our IoT-based patient health monitoring system represents a significant advancement in remote patient monitoring, allowing healthcare practitioners to access real-time data on important health metrics and detect potential health issues before they escalate.

RevDate: 2024-04-08
CmpDate: 2024-04-08

Ghiandoni GM, Evertsson E, Riley DJ, et al (2024)

Augmenting DMTA using predictive AI modelling at AstraZeneca.

Drug discovery today, 29(4):103945.

Design-Make-Test-Analyse (DMTA) is the discovery cycle through which molecules are designed, synthesised, and assayed to produce data that in turn are analysed to inform the next iteration. The process is repeated until viable drug candidates are identified, often requiring many cycles before reaching a sweet spot. The advent of artificial intelligence (AI) and cloud computing presents an opportunity to innovate drug discovery to reduce the number of cycles needed to yield a candidate. Here, we present the Predictive Insight Platform (PIP), a cloud-native modelling platform developed at AstraZeneca. The impact of PIP in each step of DMTA, as well as its architecture, integration, and usage, are discussed and used to provide insights into the future of drug discovery.

RevDate: 2024-03-09

Gokool S, Mahomed M, Brewer K, et al (2024)

Crop mapping in smallholder farms using unmanned aerial vehicle imagery and geospatial cloud computing infrastructure.

Heliyon, 10(5):e26913.

Smallholder farms are major contributors to agricultural production, food security, and socio-economic growth in many developing countries. However, they generally lack the resources to fully maximize their potential. Consequently, they require innovative, evidence-based and lower-cost solutions to optimize their productivity. Recently, precision agricultural practices facilitated by unmanned aerial vehicles (UAVs) have gained traction in the agricultural sector and have great potential for smallholder farm applications. Furthermore, advances in geospatial cloud computing have opened new and exciting possibilities in the remote sensing arena. In light of these recent developments, the focus of this study was to explore and demonstrate the utility of the advanced image processing capabilities of the Google Earth Engine (GEE) geospatial cloud computing platform to process and analyse a very high spatial resolution multispectral UAV image for mapping land use and land cover (LULC) within smallholder farms. The results showed that LULC could be mapped at a 0.50 m spatial resolution with an overall accuracy of 91%. Overall, we found GEE to be an extremely useful platform for conducting advanced image analysis on UAV imagery and for rapid communication of results. Notwithstanding the limitations of the study, the findings presented herein are quite promising and clearly demonstrate how modern agricultural practices can be implemented to facilitate improved agricultural management among smallholder farmers.
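
As a hedged illustration of this kind of GEE classification workflow, the sketch below trains a random-forest classifier on a UAV image with the Earth Engine Python API. The asset IDs and the "lulc" label property are hypothetical placeholders, not the study's actual data, and the paper's exact preprocessing is not reproduced.

```python
# Random-forest LULC classification in the Earth Engine Python API.
import ee
ee.Initialize()

image = ee.Image("users/example/uav_multispectral")              # hypothetical
points = ee.FeatureCollection("users/example/training_points")   # hypothetical

# Sample the image bands at labelled points (0.5 m pixels, as in the study).
training = image.sampleRegions(collection=points,
                               properties=["lulc"], scale=0.5)

classifier = ee.Classifier.smileRandomForest(100).train(
    features=training, classProperty="lulc",
    inputProperties=image.bandNames())

lulc_map = image.classify(classifier)
print(lulc_map.getInfo()["bands"])
```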

RevDate: 2024-03-11

Inam S, Kanwal S, Firdous R, et al (2024)

Blockchain based medical image encryption using Arnold's cat map in a cloud environment.

Scientific reports, 14(1):5678.

Improved software for processing medical images has inspired tremendous interest in modern medicine in recent years. Modern healthcare equipment generates huge amounts of data, such as scanned medical images and computerized patient information, which must be secured for future use. Diversity in the healthcare industry, namely in the form of medical data, is one of the largest challenges for researchers. Cloud environments and blockchain technology have each demonstrated their usefulness, and the purpose of this study is to combine both technologies for safe and secure transactions. Storing or sending medical data through public clouds exposes the information to potential eavesdropping, data breaches, and unauthorized access; encrypting data before transmission is crucial to mitigate these security risks. As a result, a Blockchain-based Chaotic Arnold's cat map Encryption Scheme (BCAES) is proposed in this paper. The BCAES first encrypts the image using the Arnold's cat map encryption scheme, then sends the encrypted image to the cloud server and stores the signed document of the plain image in the blockchain. As the blockchain is often considered more secure due to its distributed nature and consensus mechanism, the data receiver can ensure the integrity and authenticity of the image after decryption using the signed document stored in the blockchain. Various analysis techniques have been used to examine the proposed scheme. The results of analyses such as key sensitivity analysis, key space analysis, information entropy, histogram correlation of adjacent pixels, Number of Pixel Change Rate, Peak Signal-to-Noise Ratio, Unified Average Changing Intensity, and similarity analyses such as Mean Square Error and Structural Similarity Index Measure illustrate that our proposed scheme is an efficient encryption scheme compared with the recent literature. Our current achievements surpass all previous endeavors, setting a new standard of excellence.
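
To make the scrambling step concrete, here is a minimal sketch of Arnold's cat map as commonly defined, mapping pixel (x, y) of an N x N image to ((x + y) mod N, (x + 2y) mod N). The blockchain signing and cloud transfer described above are beyond this sketch, and the toy image stands in for real medical data.

```python
# Arnold's cat map pixel scrambling for a square image.
import numpy as np

def arnold_cat_map(img, iterations=1):
    assert img.shape[0] == img.shape[1], "requires a square image"
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # (x, y) -> ((x + y) mod n, (x + 2y) mod n)
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)   # toy image
enc = arnold_cat_map(img, iterations=10)
# The map is periodic: iterating long enough returns the original image.
```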

RevDate: 2024-03-26
CmpDate: 2024-03-26

Zhong C, Darbandi M, Nassr M, et al (2024)

A new cloud-based method for composition of healthcare services using deep reinforcement learning and Kalman filtering.

Computers in biology and medicine, 172:108152.

Healthcare has significantly contributed to the well-being of individuals around the globe; nevertheless, further benefits could be derived from a more streamlined healthcare system without incurring additional costs. Recently, the main attributes of cloud computing, such as on-demand service, high scalability, and virtualization, have brought many benefits across many areas, especially in medical services. Cloud computing is considered an important element in healthcare services, enhancing their performance and efficacy. The current state of the healthcare industry requires the efficient supply of healthcare products and services, increasing viability for everyone involved. Developing new approaches for discovering and selecting healthcare services in the cloud has become more critical due to the rising popularity of these kinds of services. Given the diverse array of healthcare services, service composition enables the execution of intricate operations by integrating the functionalities of multiple services into a single procedure. However, many methods in this field encounter several issues, such as high energy consumption, cost, and response time. This article introduces a novel layered method for selecting and evaluating healthcare services to find optimal service selection and composition solutions based on Deep Reinforcement Learning (Deep RL), Kalman filtering, and repeated training, addressing the aforementioned issues. The results revealed that the proposed method achieves acceptable results in terms of availability, reliability, energy consumption, and response time when compared to other methods.

RevDate: 2024-03-07

Wang J, Yin J, Nguyen MH, et al (2024)

Editorial: Big scientific data analytics on HPC and cloud.

Frontiers in big data, 7:1353988.

RevDate: 2024-03-08

Saad M, Enam RN, R Qureshi (2024)

Optimizing multi-objective task scheduling in fog computing with GA-PSO algorithm for big data application.

Frontiers in big data, 7:1358486.

As the volume and velocity of Big Data continue to grow, traditional cloud computing approaches struggle to meet the demands of real-time processing and low latency. Fog computing, with its distributed network of edge devices, emerges as a compelling solution. However, efficient task scheduling in fog computing remains a challenge due to its inherently multi-objective nature, balancing factors like execution time, response time, and resource utilization. This paper proposes a hybrid Genetic Algorithm (GA)-Particle Swarm Optimization (PSO) algorithm to optimize multi-objective task scheduling in fog computing environments. The hybrid approach combines the strengths of GA and PSO, achieving effective exploration and exploitation of the search space and leading to improved performance compared to traditional single-algorithm approaches. With varying task inputs, the proposed hybrid algorithm improved execution time by 85.68% compared with GA, 84% compared with Hybrid PWOA, and 51.03% compared with PSO; response time by 67.28%, 54.24%, and 75.40%, respectively; and completion time by 68.69%, 98.91%, and 75.90%, respectively. With varying numbers of fog nodes, it improved execution time by 84.87%, 88.64%, and 85.07%; response time by 65.92%, 80.51%, and 85.26%; and completion time by 67.60%, 81.34%, and 85.23% compared with GA, Hybrid PWOA, and PSO, respectively.
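
A toy sketch of a hybrid GA-PSO scheduler in the spirit of the approach above: PSO velocity updates move candidate task-to-node assignments while a GA-style mutation injects diversity. The cost matrix is synthetic, and the fitness is plain makespan rather than the paper's multi-objective mix.

```python
# Hybrid GA-PSO task scheduling on synthetic task/node costs.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_nodes, n_particles = 30, 5, 20
cost = rng.uniform(1, 10, size=(n_tasks, n_nodes))   # task runtimes per node

def makespan(assign):
    # Fitness: the longest single-task runtime under this assignment.
    return max(cost[t, assign[t]] for t in range(n_tasks))

pos = rng.integers(0, n_nodes, size=(n_particles, n_tasks))  # assignments
vel = rng.normal(0, 1, size=(n_particles, n_tasks))
pbest = pos.copy()
gbest = min(pos, key=makespan).copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(np.round(pos + vel), 0, n_nodes - 1).astype(int)
    mutate = rng.random(pos.shape) < 0.02            # GA-style mutation
    pos[mutate] = rng.integers(0, n_nodes, size=mutate.sum())
    for i in range(n_particles):
        if makespan(pos[i]) < makespan(pbest[i]):
            pbest[i] = pos[i].copy()
    gbest = min([gbest] + [p.copy() for p in pbest], key=makespan)

print("best makespan:", round(makespan(gbest), 2))
```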

RevDate: 2024-03-05

Mehmood T, Latif S, Jamail NSM, et al (2024)

LSTMDD: an optimized LSTM-based drift detector for concept drift in dynamic cloud computing.

PeerJ. Computer science, 10:e1827.

This study aims to investigate the problem of concept drift in cloud computing and emphasizes the importance of early detection for enabling optimum resource utilization and offering an effective solution. The analysis includes synthetic and real-world cloud datasets, stressing the need for appropriate drift detectors tailored to the cloud domain. A modified version of Long Short-Term Memory (LSTM) called the LSTM Drift Detector (LSTMDD) is proposed and compared with other top drift detection techniques using prediction error as the primary evaluation metric. LSTMDD is optimized to improve performance in detecting anomalies in non-Gaussian distributed cloud environments. The experiments show that LSTMDD outperforms other methods for gradual and sudden drift in the cloud domain. The findings suggest that machine learning techniques such as LSTMDD could be a promising approach to addressing the problem of concept drift in cloud computing, leading to more efficient resource allocation and improved performance.

RevDate: 2024-03-04

Yin X, Fang W, Liu Z, et al (2024)

A novel multi-scale CNN and Bi-LSTM arbitration dense network model for low-rate DDoS attack detection.

Scientific reports, 14(1):5111.

Low-rate distributed denial of service attacks, also known as LDDoS attacks, pose notorious security risks in cloud computing networks. They overload cloud servers and degrade network service quality with a stealthy strategy. Furthermore, this kind of low-ratio, pulse-like abnormal traffic leads to a serious data-scale problem. As a result, the existing models for detecting minority-class and adversarial LDDoS attacks are insufficient in both detection accuracy and time consumption. This paper proposes a novel multi-scale Convolutional Neural Network (CNN) and bidirectional Long Short-Term Memory (bi-LSTM) arbitration dense network model (called MSCBL-ADN) for learning and detecting LDDoS attack behaviors under conditions of limited data and time. The MSCBL-ADN incorporates a CNN for preliminary spatial feature extraction and an embedding-based bi-LSTM for temporal relationship extraction. It then employs an arbitration network to re-weigh feature importance for higher accuracy. Finally, it uses a 2-block densely connected network to perform the final classification. Experimental results on the popular ISCX-2016-SlowDos dataset demonstrate that the proposed MSCBL-ADN model offers a significant improvement, with high detection accuracy and superior time performance over state-of-the-art models.

RevDate: 2024-03-12
CmpDate: 2024-03-01

Mahato T, Parida BR, S Bar (2024)

Assessing tea plantations biophysical and biochemical characteristics in Northeast India using satellite data.

Environmental monitoring and assessment, 196(3):327.

Despite advancements in using multi-temporal satellite data to assess long-term changes in Northeast India's tea plantations, a research gap exists in understanding the intricate interplay between biophysical and biochemical characteristics. Further exploration is crucial for precise, sustainable monitoring and management. In this study, satellite-derived vegetation indices and near-proximal sensor data were deployed to deduce various physico-chemical characteristics and to evaluate the health conditions of tea plantations in northeast India. Districts such as Sonitpur, Jorhat, Sibsagar, Dibrugarh, and Tinsukia in Assam were selected, as they are the major contributors to the tea industry in India. The Sentinel-2A (2022) data were processed in the Google Earth Engine (GEE) cloud platform and utilized for analyzing the biochemical and biophysical properties of the tea plantations. Leaf chlorophyll (Cab) content was determined using the Normalized Area Over Reflectance Curve (NAOC) index, and nitrogen content from flavanol contents. Biophysical and biochemical parameters of the tea assessed during the spring season (March-April) of 2022 revealed that tea plantations located in Tinsukia and Dibrugarh were much healthier than those in the other districts of Assam, which is evident from the satellite-derived Enhanced Vegetation Index (EVI), Modified Soil Adjusted Vegetation Index (MSAVI), Leaf Area Index (LAI), and Fraction of Absorbed Photosynthetically Active Radiation (fPAR), including the Cab and nitrogen contents. The Cab of healthy tea plants varied from 25 to 35 µg/cm². Pearson correlation between satellite-derived Cab and nitrogen and field measurements showed an R² of 0.61-0.62 (p-value < 0.001). This study offers vital information about land alterations and tea health conditions, which can be crucial for conservation, monitoring, and management practices.

RevDate: 2024-03-01

Liu X, Wider W, Fauzi MA, et al (2024)

The evolution of smart hotels: A bibliometric review of the past, present and future trends.

Heliyon, 10(4):e26472.

This study provides a bibliometric analysis of smart hotel research, drawing from 613 publications in the Web of Science (WoS) database to examine scholarly trends and developments in this dynamic field. Smart hotels, characterized by integrating advanced technologies such as AI, IoT, cloud computing, and big data, aim to redefine customer experiences and operational efficiency. Utilizing co-citation and co-word analysis techniques, the research delves into the depth of literature from past to future trends. In co-citation analysis, clusters including "Sustainable Hotel and Green Hotel", "Theories Integration in Smart Hotel Research", and "Consumers' Decisions about Green Hotels" underscore the pivotal areas of past and current research. Co-word analysis further reveals emergent trend clusters: "The New Era of Sustainable Tourism", "Elevating Standards and Guest Loyalty", and "Hotels' New Sustainable Blueprint in Modern Travel". These clusters reflect the industry's evolving focus on sustainability and technology-enhanced guest experiences. Theoretically, this research bridges gaps in smart hotel literature, proposing new frameworks for understanding customer decisions amid technological advancements and environmental responsibilities. Practically, it offers valuable insights for hotel managers, guiding technology integration strategies for enhanced efficiency and customer loyalty while underscoring the critical role of green strategies and sustainability.

RevDate: 2024-03-01

Mukred M, Mokhtar UA, Hawash B, et al (2024)

The adoption and use of learning analytics tools to improve decision making in higher learning institutions: An extension of technology acceptance model.

Heliyon, 10(4):e26315.

Learning Analytics Tools (LATs) can be used for informed decision-making regarding teaching strategies and their continuous enhancement. LATs should therefore be adopted in higher learning institutions, but several factors hinder their implementation, primarily the lack of an implementation model. In this study, the focus is directed towards examining LATs adoption in Higher Learning Institutions (HLIs), with emphasis on the determinants of the adoption process. The study mainly aims to design a model of LAT adoption and use it in the above context to improve the institutions' decision-making; accordingly, the study adopted an extended version of the Technology Acceptance Model (TAM) as the underpinning theory. Five experts validated the employed survey instrument, and 500 questionnaire copies were distributed through e-mails, from which 275 copies were retrieved from Saudi employees working at public HLIs. The gathered data were subjected to Partial Least Squares-Structural Equation Modeling (PLS-SEM) to test the proposed conceptual model. Based on the findings, the perceived usefulness of LATs plays a significant role as a determinant of their adoption. Other significant variables include top management support, financial support, and the government's role in LAT acceptance and adoption among HLIs. The findings also supported the contribution of LAT adoption and acceptance towards making informed decisions and highlighted the need for big data facilities and cloud computing capability for LATs to be useful. The findings have significant implications for successful LAT implementation among HLIs, providing clear insights into the factors that can enhance adoption and acceptance. They also lay the basis for future studies in the area to further validate the effect of LATs on decision-making among HLIs. Furthermore, the obtained findings are expected to offer practical guidance for policymakers and educational leaders in their objective to implement LATs using a multi-layered method that considers other aspects in addition to the perceptions of the individual user.

RevDate: 2024-02-29
CmpDate: 2024-02-28

Grossman RL, Boyles RR, Davis-Dusenbery BN, et al (2024)

A Framework for the Interoperability of Cloud Platforms: Towards FAIR Data in SAFE Environments.

Scientific data, 11(1):241.

As the number of cloud platforms supporting scientific research grows, there is an increasing need to support interoperability between two or more cloud platforms. A well-accepted core concept is to make data in cloud platforms Findable, Accessible, Interoperable, and Reusable (FAIR). We introduce a companion concept that applies to cloud-based computing environments, which we call a Secure and Authorized FAIR Environment (SAFE). SAFE environments require data and platform governance structures and are designed to support the interoperability of sensitive or controlled-access data, such as biomedical data. A SAFE environment is a cloud platform that has been approved, through a defined data and platform governance process, as authorized to hold data from another cloud platform and exposes appropriate APIs for the two platforms to interoperate.

RevDate: 2024-02-26

Rusinovich Y, Rusinovich V, Buhayenka A, et al (2024)

Classification of anatomic patterns of peripheral artery disease with automated machine learning (AutoML).

Vascular [Epub ahead of print].

AIM: The aim of this study was to investigate the potential of novel automated machine learning (AutoML) in vascular medicine by developing a discriminative artificial intelligence (AI) model for the classification of anatomical patterns of peripheral artery disease (PAD).

MATERIAL AND METHODS: Random open-source angiograms of lower limbs were collected using a web-indexed search. An experienced researcher in vascular medicine labelled the angiograms according to the most applicable grade of femoropopliteal disease in the Global Limb Anatomic Staging System (GLASS). An AutoML model was trained using the Vertex AI (Google Cloud) platform to classify the angiograms according to GLASS grade with a multi-label algorithm. Following deployment, we conducted a test using 25 random angiograms (five from each GLASS grade). After the initial evaluation, the model was tuned through incremental training, introducing new angiograms up to the limit of the allocated quota, to determine the effect on the software's performance.

RESULTS: We collected 323 angiograms to create the AutoML model. Among these, 80 angiograms were labelled as grade 0 of femoropopliteal disease in GLASS, 114 as grade 1, 34 as grade 2, 25 as grade 3, and 70 as grade 4. After 4.5 h of training, the AI model was deployed. The AI self-assessed average precision was 0.77 (where 0 is minimal and 1 is maximal). During the testing phase, the AI model successfully determined the GLASS grade in 100% of the cases. Agreement with the researcher was almost perfect, with 22 observed agreements (88%) and Kappa = 0.85 (95% CI 0.69-1.0). The best results were achieved in predicting GLASS grades 0 and 4 (initial precision: 0.76 and 0.84). However, the AI model exhibited poorer results in classifying GLASS grade 3 (initial precision: 0.2) compared with the other grades. Disagreements between the AI and the researcher were associated with the low resolution of the test images. Incremental training expanded the initial dataset by 23% to a total of 417 images, which improved the model's average precision by 11% to 0.86.

CONCLUSION: After a brief training period with a limited dataset, AutoML has demonstrated its potential in identifying and classifying the anatomical patterns of PAD, operating unhindered by the factors that can affect human analysts, such as fatigue or lack of experience. This technology bears the potential to revolutionize outcome prediction and standardize evidence-based revascularization strategies for patients with PAD, leveraging its adaptability and ability to continuously improve with additional data. The pursuit of further research in AutoML within the field of vascular medicine is both promising and warranted. However, it necessitates additional financial support to realize its full potential.

RevDate: 2024-02-27
CmpDate: 2024-02-27

Wu ZF, Yang SJ, Yang YQ, et al (2024)

[Current situation and development trend of digital traditional Chinese medicine pharmacy].

Zhongguo Zhong yao za zhi = Zhongguo zhongyao zazhi = China journal of Chinese materia medica, 49(2):285-293.

The 21st century is a highly information-driven era, and traditional Chinese medicine(TCM) pharmacy is also moving towards digitization and informatization. New technologies such as artificial intelligence and big data with information technology as the core are being integrated into various aspects of drug research, manufacturing, evaluation, and application, promoting interaction between these stages and improving the quality and efficiency of TCM preparations. This, in turn, provides better healthcare services to the general population. The deep integration of emerging technologies such as artificial intelligence, big data, and cloud computing with the TCM pharmaceutical industry will innovate TCM pharmaceutical technology, accelerate the research and industrialization process of TCM pharmacy, provide cutting-edge technological support to the global scientific community, boost the efficiency of the TCM industry, and promote economic and social development. Drawing from recent developments in TCM pharmacy in China, this paper discussed the current research status and future trends in digital TCM pharmacy, aiming to provide a reference for future research in this field.

RevDate: 2024-02-27
CmpDate: 2024-02-26

Alasmary H (2024)

ScalableDigitalHealth (SDH): An IoT-Based Scalable Framework for Remote Patient Monitoring.

Sensors (Basel, Switzerland), 24(4):.

Addressing the increasing demand for remote patient monitoring, especially among the elderly and mobility-impaired, this study proposes the "ScalableDigitalHealth" (SDH) framework. The framework integrates smart digital health solutions with latency-aware edge computing autoscaling, providing a novel approach to remote patient monitoring. By leveraging IoT technology and application autoscaling, the "SDH" enables the real-time tracking of critical health parameters, such as ECG, body temperature, blood pressure, and oxygen saturation. These vital metrics are efficiently transmitted in real time to AWS cloud storage through a layered networking architecture. The contributions are two-fold: (1) establishing real-time remote patient monitoring and (2) developing a scalable architecture that features latency-aware horizontal pod autoscaling for containerized healthcare applications. The architecture incorporates a scalable IoT-based architecture and an innovative microservice autoscaling strategy in edge computing, driven by dynamic latency thresholds and enhanced by the integration of custom metrics. This work ensures heightened accessibility, cost-efficiency, and rapid responsiveness to patient needs, marking a significant leap forward in the field. By dynamically adjusting pod numbers based on latency, the system optimizes system responsiveness, particularly in edge computing's proximity-based processing. This innovative fusion of technologies not only revolutionizes remote healthcare delivery but also enhances Kubernetes performance, preventing unresponsiveness during high usage.
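
The latency-aware autoscaling idea can be illustrated with a toy control loop, sketched below under invented numbers; a real deployment would read custom metrics and patch replica counts through the Kubernetes API rather than simulate them.

```python
# Toy latency-aware autoscaler: scale out on threshold breach, scale in
# when there is ample latency headroom. All values are synthetic.
import random

replicas, min_pods, max_pods = 2, 1, 10
threshold_ms = 200.0                    # dynamic latency threshold

for step in range(20):
    # Synthetic metric: latency falls as replicas absorb the load.
    latency = random.uniform(50, 400) / replicas ** 0.5
    if latency > threshold_ms and replicas < max_pods:
        replicas += 1                   # scale out on breach
    elif latency < 0.5 * threshold_ms and replicas > min_pods:
        replicas -= 1                   # scale in with headroom
    print(f"step {step:2d}: latency {latency:6.1f} ms -> {replicas} pods")
```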

RevDate: 2024-02-27

Dhiman P, Saini N, Gulzar Y, et al (2024)

A Review and Comparative Analysis of Relevant Approaches of Zero Trust Network Model.

Sensors (Basel, Switzerland), 24(4):.

The Zero Trust safety architecture emerged as an intriguing approach for overcoming the shortcomings of standard network security solutions. This extensive survey study provides a meticulous explanation of the underlying principles of Zero Trust, as well as an assessment of the many strategies and possibilities for effective implementation. The survey begins by examining the role of authentication and access control within Zero Trust Architectures, and subsequently investigates innovative authentication, as well as access control solutions across different scenarios. It more deeply explores traditional techniques for encryption, micro-segmentation, and security automation, emphasizing their importance in achieving a secure Zero Trust environment. Zero Trust Architecture is explained in brief, along with the Taxonomy of Zero Trust Network Features. This review article provides useful insights into the Zero Trust paradigm, its approaches, problems, and future research objectives for scholars, practitioners, and policymakers. This survey contributes to the growth and implementation of secure network architectures in critical infrastructures by developing a deeper knowledge of Zero Trust.

RevDate: 2024-02-27

Li W, Zhou H, Lu Z, et al (2024)

Navigating the Evolution of Digital Twins Research through Keyword Co-Occurrence Network Analysis.

Sensors (Basel, Switzerland), 24(4):.

Digital twin technology has become increasingly popular and has revolutionized data integration and system modeling across various industries, such as manufacturing, energy, and healthcare. This study aims to explore the evolving research landscape of digital twins using Keyword Co-occurrence Network (KCN) analysis. We analyze metadata from 9639 peer-reviewed articles published between 2000 and 2023. The results unfold in two parts. The first part examines trends and keyword interconnection over time, and the second part maps sensing technology keywords to six application areas. This study reveals that research on digital twins is rapidly diversifying, with focused themes such as predictive and decision-making functions. Additionally, there is an emphasis on real-time data and point cloud technologies. The advent of federated learning and edge computing also highlights a shift toward distributed computation, prioritizing data privacy. This study confirms that digital twins have evolved into complex systems that can conduct predictive operations through advanced sensing technologies. The discussion also identifies challenges in sensor selection and empirical knowledge integration.
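
A minimal sketch of how a keyword co-occurrence network of this kind can be built: keywords become nodes, and edge weights count how many articles list both keywords. The article keyword lists here are invented examples, not the study's corpus.

```python
# Build a keyword co-occurrence network and rank pivotal themes.
from itertools import combinations
import networkx as nx

articles = [
    ["digital twin", "edge computing", "IoT"],
    ["digital twin", "manufacturing", "point cloud"],
    ["edge computing", "federated learning", "IoT"],
]

G = nx.Graph()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)   # count co-listings as edge weight

# Degree centrality highlights the field's most connected themes.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```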

RevDate: 2024-02-27
CmpDate: 2024-02-26

Wiryasaputra R, Huang CY, Lin YJ, et al (2024)

An IoT Real-Time Potable Water Quality Monitoring and Prediction Model Based on Cloud Computing Architecture.

Sensors (Basel, Switzerland), 24(4):.

In order to achieve the Sustainable Development Goals (SDGs), it is imperative to ensure the safety of drinking water. The characteristics of any drinkable water, encompassing taste, aroma, and appearance, are unique. Inadequate water infrastructure and treatment can affect these features and may also threaten public health. This study utilizes the Internet of Things (IoT) in developing a monitoring system, particularly for water quality, to reduce the risk of contracting diseases. Water quality data, such as water temperature, alkalinity or acidity, and contaminants, were obtained through a series of linked sensors. An Arduino microcontroller board acquired all the data, and Narrow Band-IoT (NB-IoT) transmitted them to the web server. Because human resources to observe the water quality physically are limited, the monitoring was complemented by real-time notification alerts via a telephone text-messaging application. The water quality data were monitored using Grafana in web mode, and binary machine learning classifiers were applied to predict whether the water was drinkable or not based on the collected data, which were stored in a database. Decision-tree and non-decision-tree models were evaluated based on the improvements of the artificial intelligence framework. With 60% of the data used for training, 20% for validation, and 10% for testing, the performance of the decision tree (DT) model was more prominent than the Gradient Boosting (GB), Random Forest (RF), Neural Network (NN), and Support Vector Machine (SVM) modeling approaches. Through monitoring and prediction of the results, the authorities can sample the water sources every two weeks.
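
A hedged sketch of the described model comparison, using a synthetic feature matrix in place of the sensor readings and splitting the data in the stated 60:20:10 proportions before fitting a decision tree.

```python
# Decision-tree potability classifier with a 60/20/10-proportioned split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))        # toy temperature, pH, contaminant level
y = (X[:, 1] > 0).astype(int)         # toy "potable" label

# 6:2:1 split (the stated 60:20:10 proportions, normalised to the data).
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=1 / 9, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("validation acc:", accuracy_score(y_val, dt.predict(X_val)))
print("test acc:", accuracy_score(y_test, dt.predict(X_test)))
```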

RevDate: 2024-02-27

Pan S, Huang C, Fan J, et al (2024)

Optimizing Internet of Things Fog Computing: Through Lyapunov-Based Long Short-Term Memory Particle Swarm Optimization Algorithm for Energy Consumption Optimization.

Sensors (Basel, Switzerland), 24(4):.

In the era of continuous development in Internet of Things (IoT) technology, smart services are penetrating various facets of societal life, leading to a growing demand for interconnected devices. Many contemporary devices are no longer mere data producers but also consumers of data. As a result, massive amounts of data are transmitted to the cloud, but the latency incurred in edge-to-cloud communication is unacceptable for many tasks. In response, this paper introduces a layered computing network built on the principles of fog computing, accompanied by a newly devised algorithm designed to optimize user tasks and allocate computing resources within rechargeable networks. The proposed algorithm, a synergy of Lyapunov optimization, dynamic Long Short-Term Memory (LSTM) networks, and Particle Swarm Optimization (PSO), enables predictive task allocation. The fog servers dynamically train LSTM networks to forecast the data features of user tasks, facilitating sound offloading decisions based on task priorities. Because hardware upgrades in edge devices lag behind user demands, the algorithm optimizes the utilization of low-power devices and addresses their performance limitations. Additionally, this paper considers the unique characteristics of rechargeable networks, in which computing nodes acquire energy through charging. Using Lyapunov functions for dynamic resource control enables nodes with abundant resources to maximize their potential, significantly reducing energy consumption and enhancing overall performance. The simulation results demonstrate that the algorithm surpasses traditional methods in terms of energy efficiency and resource allocation optimization. Despite limits on prediction accuracy at the Fog Servers (FS), the proposed method still delivers a significant overall performance gain, improving the efficiency and user experience of IoT systems in terms of latency and energy consumption.
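
The PSO component can be illustrated in isolation: particles search for per-task offloading fractions that minimize a combined latency-and-energy cost. The cost model and all constants below are hypothetical stand-ins for the paper's formulation, which additionally couples LSTM predictions and Lyapunov-based resource control.

# A minimal PSO sketch for task offloading: minimize latency + energy cost.

import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_particles, iters = 5, 20, 50

def cost(x):
    # x[i] in [0, 1]: fraction of task i offloaded to the fog server.
    latency = np.sum(1.0 * x + 2.0 * (1 - x))   # fog assumed faster than local (toy numbers)
    energy = np.sum(0.5 * x ** 2)               # transmission energy grows with offloading
    return latency + energy

pos = rng.random((n_particles, n_tasks))        # candidate offloading vectors
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best offloading fractions:", gbest.round(2), "cost:", round(cost(gbest), 2))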

RevDate: 2024-02-27

Brata KC, Funabiki N, Panduman YYF, et al (2024)

An Enhancement of Outdoor Location-Based Augmented Reality Anchor Precision through VSLAM and Google Street View.

Sensors (Basel, Switzerland), 24(4):.

Outdoor Location-Based Augmented Reality (LAR) applications require precise positioning for seamless integration of virtual content into immersive experiences. However, common outdoor LAR solutions rely on traditional smartphone sensor fusion methods, such as the Global Positioning System (GPS) and compasses, which often lack the accuracy needed for precise AR content alignment. In this paper, we introduce an innovative approach to enhancing LAR anchor precision in outdoor environments. We leveraged Visual Simultaneous Localization and Mapping (VSLAM) technology, in combination with cloud-based methodologies, and harnessed the extensive visual reference database of Google Street View (GSV) to address these accuracy limitations. For the evaluation, 10 Point of Interest (POI) locations were used as anchor point coordinates. We comprehensively compared our approach with the common sensor-fusion LAR solution through accuracy benchmarking and runtime load performance testing. The results demonstrate substantial enhancements in overall positioning accuracy compared to conventional GPS-based approaches for aligning AR anchor content in the real world.
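
The accuracy benchmarking step reduces to measuring, for each POI, the distance between an estimated anchor position and the ground-truth coordinate. A minimal sketch using the haversine formula appears below; all coordinate values are hypothetical.

# A minimal sketch of anchor accuracy benchmarking via haversine distance.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

truth = (34.68512, 133.91940)       # ground-truth POI anchor (hypothetical)
gps_fix = (34.68525, 133.91958)     # raw GPS/compass estimate
vslam_fix = (34.68513, 133.91942)   # VSLAM + GSV-corrected estimate

print("GPS error   (m):", round(haversine_m(*truth, *gps_fix), 2))
print("VSLAM error (m):", round(haversine_m(*truth, *vslam_fix), 2))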

RevDate: 2024-03-06
CmpDate: 2024-03-05

Horstmann A, Riggs S, Chaban Y, et al (2024)

A service-based approach to cryoEM facility processing pipelines at eBIC.

Acta crystallographica. Section D, Structural biology, 80(Pt 3):174-180.

Electron cryo-microscopy image-processing workflows are typically composed of elements that may, broadly speaking, be categorized as high-throughput workloads which transition to high-performance workloads as preprocessed data are aggregated. The high-throughput elements are of particular importance in the context of live processing, where an optimal response is highly coupled to the temporal profile of the data collection. In other words, each movie should be processed as quickly as possible at the earliest opportunity. The high level of disconnected parallelization in the high-throughput problem directly allows a completely scalable solution across a distributed computer system, with the only technical obstacle being an efficient and reliable implementation. The cloud computing frameworks primarily developed for the deployment of high-availability web applications provide an environment with a number of appealing features for such high-throughput processing tasks. Here, an implementation of an early-stage processing pipeline for electron cryotomography experiments using a service-based architecture deployed on a Kubernetes cluster is discussed in order to demonstrate the benefits of this approach and how it may be extended to scenarios of considerably increased complexity.
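
The high-throughput pattern described here, in which each movie is processed independently by stateless workers, can be sketched in miniature with an in-process queue. In production the queue would be a message broker and each worker a replicated container on the Kubernetes cluster; the file names and processing step below are illustrative stand-ins.

# A minimal sketch of scalable, disconnected-parallel movie processing.

import queue
import threading

movies = queue.Queue()
for i in range(8):
    movies.put(f"movie_{i:04d}.tiff")        # hypothetical movie files

def preprocess(movie: str) -> None:
    # Stand-in for per-movie work such as motion correction or CTF estimation.
    print(f"processed {movie}")

def worker() -> None:
    while True:
        try:
            movie = movies.get(timeout=1)    # exit once the queue drains
        except queue.Empty:
            return
        preprocess(movie)
        movies.task_done()

# Each thread mimics one service replica; adding replicas scales throughput.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()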

RevDate: 2024-02-26

McMurry AJ, Gottlieb DI, Miller TA, et al (2024)

Cumulus: A federated EHR-based learning system powered by FHIR and AI.

medRxiv : the preprint server for health sciences.

OBJECTIVE: To address challenges in large-scale electronic health record (EHR) data exchange, we sought to develop, deploy, and test an open source, cloud-hosted app 'listener' that accesses standardized data across the SMART/HL7 Bulk FHIR Access application programming interface (API).

METHODS: We advance a model for scalable, federated data sharing and learning. Cumulus software is designed to address key technology and policy desiderata, including local utility, control, administrative simplicity, privacy preservation during robust data sharing, and AI for processing unstructured text.

RESULTS: Cumulus relies on containerized, cloud-hosted software installed within a healthcare organization's security envelope. Cumulus accesses EHR data via the Bulk FHIR interface and streamlines automated processing and sharing. The modular design enables use of the latest AI and natural language processing tools and supports provider autonomy and administrative simplicity. In an initial test, Cumulus was deployed across five healthcare systems, each partnered with public health. Cumulus outputs patient counts, which were aggregated into a table stratified by variables of interest to enable population health studies. All code is available open source. A policy stipulating that only aggregate data leave the institution greatly facilitated data sharing agreements.

DISCUSSION AND CONCLUSION: Cumulus addresses barriers to data sharing based on (1) federally required support for standard APIs, (2) increasing use of cloud computing, and (3) advances in AI. There is potential for scalability to support learning across myriad network configurations and use cases.
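
The Bulk FHIR interface that Cumulus builds on follows a standard kick-off-and-poll pattern defined by the SMART/HL7 Bulk Data Access specification. The sketch below shows that pattern with the Python requests library; the server URL and token are hypothetical placeholders, and SMART Backend Services authorization is omitted.

# A minimal sketch of the Bulk FHIR export kick-off-and-poll pattern.

import time
import requests

BASE = "https://ehr.example.org/fhir"            # hypothetical FHIR server
HEADERS = {
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",                   # required for bulk export kick-off
    "Authorization": "Bearer <token>",           # placeholder token
}

# Kick off a system-wide patient export; the server replies 202 Accepted
# with a Content-Location URL to poll for status.
kickoff = requests.get(f"{BASE}/Patient/$export", headers=HEADERS)
status_url = kickoff.headers["Content-Location"]

while True:
    status = requests.get(status_url, headers={"Authorization": "Bearer <token>"})
    if status.status_code == 200:                # export complete: manifest returned
        break
    if status.status_code >= 400:                # export failed
        status.raise_for_status()
    time.sleep(int(status.headers.get("Retry-After", 10)))

# The completion manifest lists NDJSON files, one URL per resource-type batch.
for item in status.json()["output"]:
    print(item["type"], item["url"])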

RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences; at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.


963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version
