QUERY RUN: 27 Mar 2024 at 01:41
HITS: 3502

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 27 Mar 2024 at 01:41.

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
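For readers who want to reproduce or update this bibliography programmatically, the sketch below shows one way such a query could be submitted to the NCBI E-utilities ESearch endpoint from Python; the retmax value and JSON handling are illustrative choices, not part of the ESP workflow.

```python
import requests

# NCBI E-utilities ESearch endpoint (public; an API key is optional for light use)
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
    'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
    'NOT pmcbook NOT ispreviousversion'
)

resp = requests.get(EUTILS, params={
    "db": "pubmed",      # search the PubMed database
    "term": query,       # the bibliography query string shown above
    "retmode": "json",   # return JSON instead of XML
    "retmax": 100,       # first block of 100 citations
})
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("Total hits:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```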

Citations: The Papers (from PubMed®)


RevDate: 2024-03-26

Budge J, Carrell T, Yaqub M, et al (2024)

The ARIA trial protocol: a randomised controlled trial to assess the clinical, technical, and cost-effectiveness of a cloud-based, ARtificially Intelligent image fusion system in comparison to standard treatment to guide endovascular Aortic aneurysm repair.

Trials, 25(1):214.

BACKGROUND: Endovascular repair of aortic aneurysmal disease is established due to perceived advantages in patient survival, reduced postoperative complications, and shorter hospital lengths of stay. High spatial and contrast resolution 3D CT angiography images are used to plan the procedures and inform device selection and manufacture, but in standard care, the surgery is performed using image-guidance from 2D X-ray fluoroscopy with injection of nephrotoxic contrast material to visualise the blood vessels. This study aims to assess the benefit to patients, practitioners, and the health service of a novel image fusion medical device (Cydar EV), which allows this high-resolution 3D information to be available to operators at the time of surgery.

METHODS: The trial is a multi-centre, open label, two-armed randomised controlled clinical trial of 340 patients, randomised 1:1 to either standard treatment in endovascular aneurysm repair or treatment using Cydar EV, a CE-marked medical device comprising cloud computing, augmented intelligence, and computer vision. The primary outcome is procedural time, with secondary outcomes of procedural efficiency, technical effectiveness, patient outcomes, and cost-effectiveness. Patients with a clinical diagnosis of AAA or TAAA suitable for endovascular repair and able to provide written informed consent will be invited to participate.

DISCUSSION: This trial is the first randomised controlled trial evaluating advanced image fusion technology in endovascular aortic surgery and is well placed to evaluate the effect of this technology on patient outcomes and cost to the NHS.

TRIAL REGISTRATION: ISRCTN13832085. Dec. 3, 2021.

RevDate: 2024-03-26

Zhang S, Li H, Jing Q, et al (2024)

Anesthesia decision analysis using a cloud-based big data platform.

European journal of medical research, 29(1):201.

Big data technologies have proliferated since the dawn of the cloud-computing era. Traditional data storage, extraction, transformation, and analysis technologies have thus become unsuitable for the large volume, diversity, high processing speed, and low value density of big data in medical strategies, which require the development of novel big data application technologies. In this regard, we investigated the most recent big data platform breakthroughs in anesthesiology and designed an anesthesia decision model based on a cloud system for storing and analyzing massive amounts of data from anesthetic records. The presented Anesthesia Decision Analysis Platform performs distributed computing on medical records via several programming tools, and provides services such as keyword search, data filtering, and basic statistics to reduce inaccurate and subjective judgments by decision-makers. Importantly, it can potentially improve anesthetic strategy and support individualized anesthesia decisions, lowering the likelihood of perioperative complications.

RevDate: 2024-03-25

Mukuka A (2024)

Data on mathematics teacher educators' proficiency and willingness to use technology: A structural equation modelling analysis.

Data in brief, 54:110307.

The role of Mathematics Teacher Educators (MTEs) in preparing future teachers to effectively integrate technology into their mathematics instruction is of paramount importance yet remains an underexplored domain. Technology has the potential to enhance the development of 21st-century skills, such as problem-solving and critical thinking, which are essential for students in the era of the fourth industrial revolution. However, the rapid evolution of technology and the emergence of new trends like data analytics, the Internet of Things, machine learning, cloud computing, and artificial intelligence present new challenges in the realm of mathematics teaching and learning. Consequently, MTEs need to equip prospective teachers with the knowledge and skills to harness technology in innovative ways within their future mathematics classrooms. This paper presents and describes data from a survey of 104 MTEs in Zambia. The study focuses on MTEs' proficiency, perceived usefulness, perceived ease of use, and willingness to incorporate technology in their classrooms. This data-driven article aims to unveil patterns and trends within the dataset, with the objective of offering insights rather than drawing definitive conclusions. The article also highlights the data collection process and outlines the procedure for assessing the measurement model of the hypothesised relationships among variables through structural equation modelling analysis. The data described in this article not only sheds light on the current landscape but also serves as a valuable resource for mathematics teacher training institutions and other stakeholders seeking to understand the requisites for MTEs to foster technological skills among prospective teachers of mathematics.

RevDate: 2024-03-23

Tadi AA, Alhadidi D, L Rueda (2024)

PPPCT: Privacy-Preserving framework for Parallel Clustering Transcriptomics data.

Computers in biology and medicine, 173:108351 pii:S0010-4825(24)00435-9 [Epub ahead of print].

Single-cell transcriptomics data provides crucial insights into patients' health, yet poses significant privacy concerns. Genomic data privacy attacks can have deep implications, compromising not only the patients' health information but also that of their families. Moreover, the permanence of leaked data exacerbates the challenges, making retraction an impossibility. While extensive efforts have been directed towards clustering single-cell transcriptomics data, addressing critical challenges, especially in the realm of privacy, remains pivotal. This paper introduces an efficient, fast, privacy-preserving approach for clustering single-cell RNA-sequencing (scRNA-seq) datasets. The key contributions include ensuring data privacy, achieving high-quality clustering, accommodating the high dimensionality inherent in the datasets, and maintaining reasonable computation time for big-scale datasets. Our proposed approach utilizes the map-reduce scheme to parallelize clustering, addressing intensive calculation challenges. Intel Software Guard eXtension (SGX) processors are used to ensure the security of sensitive code and data during processing. Additionally, the approach incorporates a logarithm transformation as a preprocessing step, employs non-negative matrix factorization for dimensionality reduction, and utilizes parallel k-means for clustering. The approach fully leverages the computing capabilities of all processing resources within a secure private cloud environment. Experimental results demonstrate the efficacy of our approach in preserving patient privacy while surpassing state-of-the-art methods in both clustering quality and computation time. Our method consistently achieves a minimum of 7% higher Adjusted Rand Index (ARI) than existing approaches, contingent on dataset size. Additionally, due to parallel computations and dimensionality reduction, our approach exhibits efficiency, converging to very good results in less than 10 seconds for a scRNA-seq dataset with 5000 genes and 6000 cells when prioritizing privacy and under two seconds without privacy considerations. Availability and implementation: Code and datasets are available at https://github.com/University-of-Windsor/PPPCT.
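As an illustration of the analysis core described above (log transformation, non-negative matrix factorization, k-means, and ARI scoring), the following minimal Python sketch uses scikit-learn; it omits the SGX enclaves and map-reduce parallelization that the paper relies on, and the parameter values are assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def cluster_scrna(counts, n_components=20, n_clusters=10, labels=None):
    """Log-transform, reduce with NMF, then cluster with k-means.

    counts: cells x genes matrix of non-negative expression counts.
    """
    log_counts = np.log1p(counts)                              # logarithm transformation step
    factors = NMF(n_components=n_components, init="nndsvda",
                  max_iter=500).fit_transform(log_counts)      # dimensionality reduction
    pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(factors)
    if labels is not None:
        print("ARI:", adjusted_rand_score(labels, pred))       # clustering quality check
    return pred
```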

RevDate: 2024-03-22

Hajiaghabozorgi M, Fischbach M, Albrecht M, et al (2024)

BridGE: a pathway-based analysis tool for detecting genetic interactions from GWAS.

Nature protocols [Epub ahead of print].

Genetic interactions have the potential to modulate phenotypes, including human disease. In principle, genome-wide association studies (GWAS) provide a platform for detecting genetic interactions; however, traditional methods for identifying them, which tend to focus on testing individual variant pairs, lack statistical power. In this protocol, we describe a novel computational approach, called Bridging Gene sets with Epistasis (BridGE), for discovering genetic interactions between biological pathways from GWAS data. We present a Python-based implementation of BridGE along with instructions for its application to a typical human GWAS cohort. The major stages include initial data processing and quality control, construction of a variant-level genetic interaction network, measurement of pathway-level genetic interactions, evaluation of statistical significance using sample permutations and generation of results in a standardized output format. The BridGE software pipeline includes options for running the analysis on multiple cores and multiple nodes for users who have access to computing clusters or a cloud computing environment. In a cluster computing environment with 10 nodes and 100 GB of memory per node, the method can be run in less than 24 h for typical human GWAS cohorts. Using BridGE requires knowledge of running Python programs and basic shell script programming experience.
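The sample-permutation step mentioned in the protocol can be illustrated with a generic sketch: shuffle the phenotype labels, recompute the pathway-level statistic, and derive an empirical p-value. The statistic callable and inputs below are placeholders, not BridGE's actual implementation.

```python
import numpy as np

def permutation_pvalue(statistic, genotypes, phenotypes, n_perm=1000, seed=None):
    """Empirical p-value for a pathway-level interaction statistic.

    statistic: callable(genotypes, phenotypes) -> float, larger = stronger signal.
    Phenotype labels are shuffled across samples to build the null distribution.
    """
    rng = np.random.default_rng(seed)
    observed = statistic(genotypes, phenotypes)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = statistic(genotypes, rng.permutation(phenotypes))
    # add-one correction keeps the estimate away from zero
    return (1 + np.sum(null >= observed)) / (n_perm + 1)
```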

RevDate: 2024-03-20

Sahu KS, Dubin JA, Majowicz SE, et al (2024)

Revealing the Mysteries of Population Mobility Amid the COVID-19 Pandemic in Canada: Comparative Analysis With Internet of Things-Based Thermostat Data and Google Mobility Insights.

JMIR public health and surveillance, 10:e46903 pii:v10i1e46903.

BACKGROUND: The COVID-19 pandemic necessitated public health policies to limit human mobility and curb infection spread. Human mobility, which is often underestimated, plays a pivotal role in health outcomes, impacting both infectious and chronic diseases. Collecting precise mobility data is vital for understanding human behavior and informing public health strategies. Google's GPS-based location tracking, which is compiled in Google Mobility Reports, became the gold standard for monitoring outdoor mobility during the pandemic. However, indoor mobility remains underexplored.

OBJECTIVE: This study investigates in-home mobility data from ecobee's smart thermostats in Canada (February 2020 to February 2021) and compares it directly with Google's residential mobility data. By assessing the suitability of smart thermostat data, we aim to shed light on indoor mobility patterns, contributing valuable insights to public health research and strategies.

METHODS: Motion sensor data were acquired from the ecobee "Donate Your Data" initiative via Google's BigQuery cloud platform. Concurrently, residential mobility data were sourced from the Google Mobility Report. This study centered on 4 Canadian provinces-Ontario, Quebec, Alberta, and British Columbia-during the period from February 15, 2020, to February 14, 2021. Data processing, analysis, and visualization were conducted on the Microsoft Azure platform using Python (Python Software Foundation) and R programming languages (R Foundation for Statistical Computing). Our investigation involved assessing changes in mobility relative to the baseline in both data sets, with the strength of this relationship assessed using Pearson and Spearman correlation coefficients. We scrutinized daily, weekly, and monthly variations in mobility patterns across the data sets and performed anomaly detection for further insights.

RESULTS: The results revealed noteworthy week-to-week and month-to-month shifts in population mobility within the chosen provinces, aligning with pandemic-driven policy adjustments. Notably, the ecobee data exhibited a robust correlation with Google's data set. Examination of Google's daily patterns detected more pronounced mobility fluctuations during weekdays, a trend not mirrored in the ecobee data. Anomaly detection successfully identified substantial mobility deviations coinciding with policy modifications and cultural events.

CONCLUSIONS: This study's findings illustrate the substantial influence of the Canadian stay-at-home and work-from-home policies on population mobility. This impact was discernible through both Google's out-of-house residential mobility data and ecobee's in-house smart thermostat data. As such, we deduce that smart thermostats represent a valid tool for facilitating intelligent monitoring of population mobility in response to policy-driven shifts.
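To illustrate the correlation comparison described in the Methods, the sketch below computes Pearson and Spearman coefficients between two daily mobility series with pandas and SciPy; the file names and column names are hypothetical stand-ins for the ecobee and Google Mobility exports.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical daily series; each file is assumed to have a "change" column
# holding the percent change from baseline for that data source.
ecobee = pd.read_csv("ecobee_motion_daily.csv", parse_dates=["date"], index_col="date")
google = pd.read_csv("google_residential_daily.csv", parse_dates=["date"], index_col="date")

merged = ecobee.join(google, how="inner", lsuffix="_ecobee", rsuffix="_google").dropna()

r, p_r = pearsonr(merged["change_ecobee"], merged["change_google"])
rho, p_rho = spearmanr(merged["change_ecobee"], merged["change_google"])
print(f"Pearson r = {r:.2f} (p = {p_r:.3g}), Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
```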

RevDate: 2024-03-18

Wang H, Chen H, Y Wang (2024)

Analysis of Hot Topics Regarding Global Smart Elderly Care Research - 1997-2021.

China CDC weekly, 6(9):157-161.

With the assistance of the internet, big data, cloud computing, and other technologies, the concept of smart elderly care has emerged.

WHAT IS ADDED BY THIS REPORT?: This study presents information on the countries or regions that have conducted research on smart elderly care, as well as identifies global hotspots and development trends in this field.

The results of this study suggest that future research should focus on fall detection, health monitoring, and guidance systems that are user-friendly and contribute to the creation of smarter, safer communities for the well-being of the elderly.

RevDate: 2024-03-18

Li J, Xiong Y, Feng S, et al (2024)

CloudProteoAnalyzer: scalable processing of big data from proteomics using cloud computing.

Bioinformatics advances, 4(1):vbae024.

SUMMARY: Shotgun proteomics is widely used in many systems biology studies to determine the global protein expression profiles of tissues, cultures, and microbiomes. Many non-distributed computer algorithms have been developed for users to process proteomics data on their local computers. However, the amount of data acquired in a typical proteomics study has grown rapidly in recent years, owing to the increasing throughput of mass spectrometry and the expanding scale of study designs. This presents a big data challenge for researchers to process proteomics data in a timely manner. To overcome this challenge, we developed a cloud-based parallel computing application to offer end-to-end proteomics data analysis software as a service (SaaS). A web interface was provided to users to upload mass spectrometry-based proteomics data, configure parameters, submit jobs, and monitor job status. The data processing was distributed across multiple nodes in a supercomputer to achieve scalability for large datasets. Our study demonstrated SaaS for proteomics as a viable solution for the community to scale up the data processing using cloud computing.

This application is available online at https://sipros.oscer.ou.edu/ or https://sipros.unt.edu for free use. The source code is available at https://github.com/Biocomputing-Research-Group/CloudProteoAnalyzer under the GPL version 3.0 license.

RevDate: 2024-03-16

Clements J, Goina C, Hubbard PM, et al (2024)

NeuronBridge: an intuitive web application for neuronal morphology search across large data sets.

BMC bioinformatics, 25(1):114.

BACKGROUND: Neuroscience research in Drosophila is benefiting from large-scale connectomics efforts using electron microscopy (EM) to reveal all the neurons in a brain and their connections. To exploit this knowledge base, researchers relate a connectome's structure to neuronal function, often by studying individual neuron cell types. Vast libraries of fly driver lines expressing fluorescent reporter genes in sets of neurons have been created and imaged using confocal light microscopy (LM), enabling the targeting of neurons for experimentation. However, creating a fly line for driving gene expression within a single neuron found in an EM connectome remains a challenge, as it typically requires identifying a pair of driver lines where only the neuron of interest is expressed in both. This task and other emerging scientific workflows require finding similar neurons across large data sets imaged using different modalities.

RESULTS: Here, we present NeuronBridge, a web application for easily and rapidly finding putative morphological matches between large data sets of neurons imaged using different modalities. We describe the functionality and construction of the NeuronBridge service, including its user-friendly graphical user interface (GUI), extensible data model, serverless cloud architecture, and massively parallel image search engine.

CONCLUSIONS: NeuronBridge fills a critical gap in the Drosophila research workflow and is used by hundreds of neuroscience researchers around the world. We offer our software code, open APIs, and processed data sets for integration and reuse, and provide the application as a service at http://neuronbridge.janelia.org.

RevDate: 2024-03-15

Liang F, Yu W, Liu X, et al (2020)

Towards Edge-Based Deep Learning in Industrial Internet of Things.

IEEE internet of things journal, 7(5):.

As a typical application of the Internet of Things (IoT), the Industrial Internet of Things (IIoT) connects all the related IoT sensing and actuating devices ubiquitously so that the monitoring and control of numerous industrial systems can be realized. Deep learning, as one viable way to carry out big data-driven modeling and analysis, could be integrated in IIoT systems to aid the automation and intelligence of IIoT systems. As deep learning requires large computation power, it is commonly deployed in cloud servers. Thus, the data collected by IoT devices must be transmitted to the cloud for the training process, contributing to network congestion and affecting the IoT network performance as well as the supported applications. To address this issue, in this paper we leverage the fog/edge computing paradigm and propose an edge computing-based deep learning model, which utilizes edge computing to migrate the deep learning process from cloud servers to edge nodes, reducing data transmission demands in the IIoT network and mitigating network congestion. Since edge nodes have limited computation ability compared to servers, we design a mechanism to optimize the deep learning model so that its requirements for computational power can be reduced. To evaluate our proposed solution, we design a testbed implemented in the Google cloud and deploy the proposed Convolutional Neural Network (CNN) model, utilizing a real-world IIoT dataset to evaluate our approach. Our experimental results confirm the effectiveness of our approach, which can not only reduce the network traffic overhead for IIoT, but also maintain the classification accuracy in comparison with several baseline schemes.
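For a sense of what migrating inference to resource-limited edge nodes implies, here is a minimal Keras sketch of a deliberately small CNN (few filters, global average pooling instead of large dense layers); it is not the authors' model, and the input shape and class count are assumptions.

```python
import tensorflow as tf

def build_edge_cnn(input_shape=(32, 32, 1), n_classes=10):
    """A deliberately compact CNN: few filters and parameters to suit edge nodes."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(8, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),   # cheaper than a large dense layer
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_edge_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()   # shows the small parameter count targeted for edge deployment
```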

RevDate: 2024-03-13

Tripathi A, Waqas A, Venkatesan K, et al (2024)

Building Flexible, Scalable, and Machine Learning-Ready Multimodal Oncology Datasets.

Sensors (Basel, Switzerland), 24(5): pii:s24051634.

The advancements in data acquisition, storage, and processing techniques have resulted in the rapid growth of heterogeneous medical data. Integrating radiological scans, histopathology images, and molecular information with clinical data is essential for developing a holistic understanding of the disease and optimizing treatment. The need for integrating data from multiple sources is further pronounced in complex diseases such as cancer for enabling precision medicine and personalized treatments. This work proposes Multimodal Integration of Oncology Data System (MINDS)-a flexible, scalable, and cost-effective metadata framework for efficiently fusing disparate data from public sources such as the Cancer Research Data Commons (CRDC) into an interconnected, patient-centric framework. MINDS consolidates over 41,000 cases from across repositories while achieving a high compression ratio relative to the 3.78 PB source data size. It offers sub-5-s query response times for interactive exploration. MINDS offers an interface for exploring relationships across data types and building cohorts for developing large-scale multimodal machine learning models. By harmonizing multimodal data, MINDS aims to potentially empower researchers with greater analytical ability to uncover diagnostic and prognostic insights and enable evidence-based personalized care. MINDS tracks granular end-to-end data provenance, ensuring reproducibility and transparency. The cloud-native architecture of MINDS can handle exponential data growth in a secure, cost-optimized manner while ensuring substantial storage optimization, replication avoidance, and dynamic access capabilities. Auto-scaling, access controls, and other mechanisms guarantee pipelines' scalability and security. MINDS overcomes the limitations of existing biomedical data silos via an interoperable metadata-driven approach that represents a pivotal step toward the future of oncology data integration.

RevDate: 2024-03-13

Gaba P, Raw RS, Kaiwartya O, et al (2024)

B-SAFE: Blockchain-Enabled Security Architecture for Connected Vehicle Fog Environment.

Sensors (Basel, Switzerland), 24(5): pii:s24051515.

Vehicles are no longer stand-alone mechanical entities due to the advancements in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication-centric Internet of Connected Vehicles (IoV) frameworks. However, the advancement in connected vehicles leads to another serious security threat, online vehicle hijacking, where the steering control of vehicles can be hacked online. The feasibility of traditional security solutions in IoV environments is very limited, considering the intermittent network connectivity to cloud servers and vehicle-centric computing capability constraints. In this context, this paper presents a Blockchain-enabled Security Architecture for a connected vehicular Fog networking Environment (B-SAFE). Firstly, blockchain security and vehicular fog networking are introduced as preliminaries of the framework. Secondly, a three-layer architecture of B-SAFE is presented, focusing on vehicular communication, blockchain at fog nodes, and the cloud as trust and reward management for vehicles. Thirdly, details of the blockchain implementation at fog nodes are presented, along with a flowchart and algorithm. The performance evaluation of the proposed B-SAFE framework attests to its benefits in terms of trust, reward points, and threshold calculation.

RevDate: 2024-03-13

Vercheval N, Royen R, Munteanu A, et al (2024)

PCGen: A Fully Parallelizable Point Cloud Generative Model.

Sensors (Basel, Switzerland), 24(5): pii:s24051414.

Generative models have the potential to revolutionize 3D extended reality. A primary obstacle is that augmented and virtual reality need real-time computing. Current state-of-the-art point cloud random generation methods are not fast enough for these applications. We introduce a vector-quantized variational autoencoder model (VQVAE) that can synthesize high-quality point clouds in milliseconds. Unlike previous work in VQVAEs, our model offers a compact sample representation suitable for conditional generation and data exploration with potential applications in rapid prototyping. We achieve this result by combining architectural improvements with an innovative approach for probabilistic random generation. First, we rethink current parallel point cloud autoencoder structures, and we propose several solutions to improve robustness, efficiency and reconstruction quality. Notable contributions in the decoder architecture include an innovative computation layer to process the shape semantic information, an attention mechanism that helps the model focus on different areas and a filter to cover possible sampling errors. Second, we introduce a parallel sampling strategy for VQVAE models consisting of a double encoding system, where a variational autoencoder learns how to generate the complex discrete distribution of the VQVAE, not only allowing quick inference but also describing the shape with a few global variables. We compare the proposed decoder and our VQVAE model with established and concurrent work, and we prove, one by one, the validity of the individual contributions.

RevDate: 2024-03-13

AlSaleh I, Al-Samawi A, L Nissirat (2024)

Novel Machine Learning Approach for DDoS Cloud Detection: Bayesian-Based CNN and Data Fusion Enhancements.

Sensors (Basel, Switzerland), 24(5): pii:s24051418.

Cloud computing has revolutionized the information technology landscape, offering businesses the flexibility to adapt to diverse business models without the need for costly on-site servers and network infrastructure. A recent survey reveals that 95% of enterprises have already embraced cloud technology, with 79% of their workloads migrating to cloud environments. However, the deployment of cloud technology introduces significant cybersecurity risks, including network security vulnerabilities, data access control challenges, and the ever-looming threat of cyber-attacks such as Distributed Denial of Service (DDoS) attacks, which pose substantial risks to both cloud and network security. While Intrusion Detection Systems (IDS) have traditionally been employed for DDoS attack detection, prior studies have been constrained by various limitations. In response to these challenges, we present an innovative machine learning approach for DDoS cloud detection, known as the Bayesian-based Convolutional Neural Network (BaysCNN) model. Leveraging the CICDDoS2019 dataset, which encompasses 88 features, we employ Principal Component Analysis (PCA) for dimensionality reduction. Our BaysCNN model comprises 19 layers of analysis, forming the basis for training and validation. Our experimental findings conclusively demonstrate that the BaysCNN model significantly enhances the accuracy of DDoS cloud detection, achieving an impressive average accuracy rate of 99.66% across 13 multi-class attacks. To further elevate the model's performance, we introduce the Data Fusion BaysFusCNN approach, encompassing 27 layers. By leveraging Bayesian methods to estimate uncertainties and integrating features from multiple sources, this approach attains an even higher average accuracy of 99.79% across the same 13 multi-class attacks. Our proposed methodology not only offers valuable insights for the development of robust machine learning-based intrusion detection systems but also enhances the reliability and scalability of IDS in cloud computing environments. This empowers organizations to proactively mitigate security risks and fortify their defenses against malicious cyber-attacks.
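The PCA dimensionality-reduction step applied to the 88 CICDDoS2019 features can be sketched as follows with scikit-learn; the feature file and the 95%-variance threshold are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# X: n_flows x 88 feature matrix extracted from CICDDoS2019 (loading not shown here;
# the file name is a hypothetical placeholder for a preprocessed array).
X = np.load("cicddos2019_features.npy")

X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale
pca = PCA(n_components=0.95)                   # keep components explaining 95% of variance
X_reduced = pca.fit_transform(X_scaled)
print(f"{X.shape[1]} features reduced to {X_reduced.shape[1]} components")
```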

RevDate: 2024-03-12

Yakubu B, Appiah EM, AF Adu (2024)

Pangenome Analysis of Helicobacter pylori Isolates from Selected Areas of Africa Indicated Diverse Antibiotic Resistance and Virulence Genes.

International journal of genomics, 2024:5536117.

The challenge facing Helicobacter pylori (H. pylori) infection management in some parts of Africa is the evolution of drug-resistant species, the lack of gold standard in diagnostic methods, and the ineffectiveness of current vaccines against the bacteria. It is being established that even though clinical consequences linked to the bacteria vary geographically, there is rather a generic approach to treatment. This situation has remained problematic in the successful fight against the bacteria in parts of Africa. As a result, this study compared the genomes of selected H. pylori isolates from selected areas of Africa and evaluated their virulence and antibiotic drug resistance, those that are highly pathogenic and are associated with specific clinical outcomes and those that are less virulent and rarely associated with clinical outcomes. 146 genomes of H. pylori isolated from selected locations of Africa were sampled, and bioinformatic tools such as Abricate, CARD RGI, MLST, Prokka, Roary, Phandango, Google Sheets, and iTOLS were used to compare the isolates and their antibiotic resistance or susceptibility. Over 20 k virulence and AMR genes were observed. About 95% of the isolates were genetically diverse, 90% of the isolates harbored shell genes, and 50% harbored cloud and core genes. Some isolates did not retain the cagA and vacA genes. Clarithromycin, metronidazole, amoxicillin, and tinidazole were resistant to most AMR genes (vacA, cagA, oip, and bab). Conclusion. This study found both virulence and AMR genes in all H. pylori strains in all the selected geographies around Africa with differing quantities. MLST, Pangenome, and ORF analyses showed disparities among the isolates. This in general could imply diversities in terms of genetics, evolution, and protein production. Therefore, generic administration of antibiotics such as clarithromycin, amoxicillin, and erythromycin as treatment methods in the African subregion could be contributing to the spread of the bacterium's antibiotic resistance.

RevDate: 2024-03-12

Tripathy SS, Bebortta S, Chowdhary CL, et al (2024)

FedHealthFog: A federated learning-enabled approach towards healthcare analytics over fog computing platform.

Heliyon, 10(5):e26416.

The emergence of federated learning (FL) technique in fog-enabled healthcare system has leveraged enhanced privacy towards safeguarding sensitive patient information over heterogeneous computing platforms. In this paper, we introduce the FedHealthFog framework, which was meticulously developed to overcome the difficulties of distributed learning in resource-constrained IoT-enabled healthcare systems, particularly those sensitive to delays and energy efficiency. Conventional federated learning approaches face challenges stemming from substantial compute requirements and significant communication costs. This is primarily due to their reliance on a singular server for the aggregation of global data, which results in inefficient training models. We present a transformational approach to address these problems by elevating strategically placed fog nodes to the position of local aggregators within the federated learning architecture. A sophisticated greedy heuristic technique is used to optimize the choice of a fog node as the global aggregator in each communication cycle between edge devices and the cloud. The FedHealthFog system notably achieves a drop in communication latency of 87.01%, 26.90%, and 71.74%, and in energy consumption of 57.98%, 34.36%, and 35.37%, respectively, for the three benchmark algorithms analyzed in this study. The effectiveness of FedHealthFog is strongly supported by outcomes of our experiments compared to cutting-edge alternatives while simultaneously reducing the number of global aggregation cycles. These findings highlight FedHealthFog's potential to transform federated learning in resource-constrained IoT environments for delay-sensitive applications.
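A simple way to picture the greedy choice of a fog node as global aggregator in each communication cycle is the sketch below, which ranks candidate nodes by a weighted latency/energy cost; the node attributes and weights are hypothetical, not the paper's heuristic.

```python
def pick_global_aggregator(fog_nodes, w_latency=0.5, w_energy=0.5):
    """Greedy choice of the fog node with the lowest weighted latency/energy cost.

    fog_nodes: list of dicts with hypothetical keys 'id', 'latency_ms', 'energy_j'.
    Returns the id of the node chosen as this round's global aggregator.
    """
    def cost(node):
        return w_latency * node["latency_ms"] + w_energy * node["energy_j"]
    return min(fog_nodes, key=cost)["id"]

# One communication round with three candidate fog nodes (illustrative values)
nodes = [
    {"id": "fog-a", "latency_ms": 12.0, "energy_j": 3.1},
    {"id": "fog-b", "latency_ms": 8.5,  "energy_j": 4.0},
    {"id": "fog-c", "latency_ms": 15.2, "energy_j": 2.2},
]
print(pick_global_aggregator(nodes))
```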

RevDate: 2024-03-11

Shafi I, Din S, Farooq S, et al (2024)

Design and development of patient health tracking, monitoring and big data storage using Internet of Things and real time cloud computing.

PloS one, 19(3):e0298582 pii:PONE-D-22-32000.

With the outbreak of the COVID-19 pandemic, social isolation and quarantine have become commonplace across the world. IoT health monitoring solutions eliminate the need for regular doctor visits and interactions among patients and medical personnel. Many patients in wards or intensive care units require continuous monitoring of their health. Continuous patient monitoring is a hectic practice in hospitals with limited staff; in a pandemic situation like COVID-19, it becomes a much more difficult practice when hospitals are working at full capacity and there is still a risk of medical workers being infected. In this study, we propose an Internet of Things (IoT)-based patient health monitoring system that collects real-time data on important health indicators such as pulse rate, blood oxygen saturation, and body temperature but can be expanded to include more parameters. Our system is comprised of a hardware component that collects and transmits data from sensors to a cloud-based storage system, where it can be accessed and analyzed by healthcare specialists. The ESP-32 microcontroller interfaces with the multiple sensors and wirelessly transmits the collected data to the cloud storage system. A pulse oximeter is utilized in our system to measure blood oxygen saturation and body temperature, as well as a heart rate monitor to measure pulse rate. A web-based interface is also implemented, allowing healthcare practitioners to access and visualize the collected data in real-time, making remote patient monitoring easier. Overall, our IoT-based patient health monitoring system represents a significant advancement in remote patient monitoring, allowing healthcare practitioners to access real-time data on important health metrics and detect potential health issues before they escalate.
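On the cloud side, the kind of ingestion endpoint that a microcontroller such as the ESP-32 could POST readings to can be sketched with Flask as below; the route, field names, and in-memory store are illustrative assumptions rather than the system described in the paper.

```python
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = []   # stand-in for a real cloud storage backend

@app.route("/readings", methods=["POST"])
def ingest():
    """Accept a JSON reading posted by the device and store it with a timestamp."""
    data = request.get_json(force=True)
    record = {
        "device_id": data.get("device_id"),
        "pulse_bpm": data.get("pulse_bpm"),   # heart rate monitor
        "spo2_pct": data.get("spo2_pct"),     # pulse oximeter
        "temp_c": data.get("temp_c"),         # body temperature
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    readings.append(record)
    return jsonify({"status": "ok", "stored": len(readings)}), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```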

RevDate: 2024-03-09

Marco G, Evertsson E, Riley DJ, et al (2024)

Augmenting DMTA using predictive AI modelling at AstraZeneca.

Drug discovery today pii:S1359-6446(24)00070-9 [Epub ahead of print].

Design-Make-Test-Analyse (DMTA) is the discovery cycle through which molecules are designed, synthesised, and assayed to produce data that in turn are analysed to inform the next iteration. The process is repeated until viable drug candidates are identified, often requiring many cycles before reaching a sweet spot. The advent of artificial intelligence (AI) and cloud computing presents an opportunity to innovate drug discovery to reduce the number of cycles needed to yield a candidate. Here, we present the Predictive Insight Platform (PIP), a cloud-native modelling platform developed at AstraZeneca. The impact of PIP in each step of DMTA, as well as its architecture, integration, and usage, are discussed and used to provide insights into the future of drug discovery. Teaser: This review of the role, impact, and architecture of AstraZeneca's Predictive Insight Platform (PIP), a cloud-native modelling platform that aims to accelerate drug discovery, offers perspective on the evolution of R&D in pharma.

RevDate: 2024-03-08

Gokool S, Mahomed M, Brewer K, et al (2024)

Crop mapping in smallholder farms using unmanned aerial vehicle imagery and geospatial cloud computing infrastructure.

Heliyon, 10(5):e26913 pii:S2405-8440(24)02944-X.

Smallholder farms are major contributors to agricultural production, food security, and socio-economic growth in many developing countries. However, they generally lack the resources to fully maximize their potential. Consequently, they require innovative, evidence-based, and lower-cost solutions to optimize their productivity. Recently, precision agricultural practices facilitated by unmanned aerial vehicles (UAVs) have gained traction in the agricultural sector and have great potential for smallholder farm applications. Furthermore, advances in geospatial cloud computing have opened new and exciting possibilities in the remote sensing arena. In light of these recent developments, the focus of this study was to explore and demonstrate the utility of using the advanced image processing capabilities of the Google Earth Engine (GEE) geospatial cloud computing platform to process and analyse a very high spatial resolution multispectral UAV image for mapping land use land cover (LULC) within smallholder farms. The results showed that LULC could be mapped at a 0.50 m spatial resolution with an overall accuracy of 91%. Overall, we found GEE to be an extremely useful platform for conducting advanced image analysis on UAV imagery and rapid communication of results. Notwithstanding the limitations of the study, the findings presented herein are quite promising and clearly demonstrate how modern agricultural practices can be implemented to facilitate improved agricultural management among smallholder farmers.
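As a sketch of supervised land-cover classification of a UAV mosaic in the GEE Python API (the paper does not specify its classifier, so a random forest is assumed here), the asset IDs and class property below are hypothetical, and an authenticated Earth Engine account is required.

```python
import ee

ee.Initialize()

# Hypothetical asset IDs for the ingested UAV mosaic and labelled training points
uav = ee.Image("projects/my-project/assets/uav_multispectral_mosaic")
training_points = ee.FeatureCollection("projects/my-project/assets/lulc_training_points")

bands = uav.bandNames()
samples = uav.sampleRegions(collection=training_points,
                            properties=["lulc_class"], scale=0.5)

classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty="lulc_class", inputProperties=bands)

lulc_map = uav.classify(classifier)   # per-pixel land use / land cover classes
print(lulc_map.getInfo()["bands"])
```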

RevDate: 2024-03-07

Inam S, Kanwal S, Firdous R, et al (2024)

Blockchain based medical image encryption using Arnold's cat map in a cloud environment.

Scientific reports, 14(1):5678.

Improved software for processing medical images has inspired tremendous interest in modern medicine in recent years. Modern healthcare equipment generates huge amounts of data, such as scanned medical images and computerized patient information, which must be secured for future use. Diversity in the healthcare industry, namely in the form of medical data, is one of the largest challenges for researchers. The cloud environment and blockchain technology have each demonstrated their own usefulness. The purpose of this study is to combine both technologies for safe and secure transactions. Storing or sending medical data through public clouds exposes information to potential eavesdropping, data breaches and unauthorized access. Encrypting data before transmission is crucial to mitigate these security risks. As a result, a Blockchain based Chaotic Arnold's cat map Encryption Scheme (BCAES) is proposed in this paper. The BCAES first encrypts the image using Arnold's cat map encryption scheme and then sends the encrypted image to a Cloud Server and stores the signed document of the plain image in the blockchain. As the blockchain is often considered more secure due to its distributed nature and consensus mechanism, the data receiver can verify the data integrity and authenticity of the image after decryption using the signed document stored in the blockchain. Various analysis techniques have been used to examine the proposed scheme. The results of analyses such as key sensitivity analysis, key space analysis, Information Entropy, histogram correlation of adjacent pixels, Number of Pixel Change Rate, Peak Signal Noise Ratio, Unified Average Changing Intensity, and similarity analyses such as Mean Square Error and Structural Similarity Index Measure illustrate that our proposed scheme is an efficient encryption scheme compared to some recent literature. Our current achievements surpass all previous endeavors, setting a new standard of excellence.
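Arnold's cat map itself is a simple pixel permutation, (x, y) -> ((x + y) mod N, (x + 2y) mod N) on an N x N image, iterated a secret number of times. The sketch below shows only that scrambling step; the BCAES pipeline's blockchain signing and cloud transfer are not reproduced here.

```python
import numpy as np

def arnold_cat_map(image, iterations=1):
    """Scramble a square image with Arnold's cat map.

    Each pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N). The map is
    periodic, so iterating it a known number of times acts as a symmetric-key
    scrambling step that can later be undone by continuing the iteration.
    """
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "Arnold's cat map needs a square image"
    scrambled = image.copy()
    for _ in range(iterations):
        out = np.empty_like(scrambled)
        for x in range(n):
            for y in range(n):
                out[(x + y) % n, (x + 2 * y) % n] = scrambled[x, y]
        scrambled = out
    return scrambled

# Toy example: scramble an 8x8 gradient "image"
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(arnold_cat_map(img, iterations=3))
```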

RevDate: 2024-03-07

Zhong C, Darbandi M, Nassr M, et al (2024)

A new cloud-based method for composition of healthcare services using deep reinforcement learning and Kalman filtering.

Computers in biology and medicine, 172:108152 pii:S0010-4825(24)00236-1 [Epub ahead of print].

Healthcare has significantly contributed to the well-being of individuals around the globe; nevertheless, further benefits could be derived from a more streamlined healthcare system without incurring additional costs. Recently, the main attributes of cloud computing, such as on-demand service, high scalability, and virtualization, have brought many benefits across many areas, especially in medical services. It is considered an important element in healthcare services, enhancing the performance and efficacy of the services. The current state of the healthcare industry requires the supply of healthcare products and services, increasing its viability for everyone involved. Developing new approaches for discovering and selecting healthcare services in the cloud has become more critical due to the rising popularity of these kinds of services. As a result of the diverse array of healthcare services, service composition enables the execution of intricate operations by integrating multiple services' functionalities into a single procedure. However, many methods in this field encounter several issues, such as high energy consumption, cost, and response time. This article introduces a novel layered method for selecting and evaluating healthcare services to find optimal service selection and composition solutions based on Deep Reinforcement Learning (Deep RL), Kalman filtering, and repeated training, addressing the aforementioned issues. The results revealed that the proposed method has achieved acceptable results in terms of availability, reliability, energy consumption, and response time when compared to other methods.

RevDate: 2024-03-07

Wang J, Yin J, Nguyen MH, et al (2024)

Editorial: Big scientific data analytics on HPC and cloud.

Frontiers in big data, 7:1353988.

RevDate: 2024-03-07

Saad M, Enam RN, R Qureshi (2024)

Optimizing multi-objective task scheduling in fog computing with GA-PSO algorithm for big data application.

Frontiers in big data, 7:1358486.

As the volume and velocity of Big Data continue to grow, traditional cloud computing approaches struggle to meet the demands of real-time processing and low latency. Fog computing, with its distributed network of edge devices, emerges as a compelling solution. However, efficient task scheduling in fog computing remains a challenge due to its inherently multi-objective nature, balancing factors like execution time, response time, and resource utilization. This paper proposes a hybrid Genetic Algorithm (GA)-Particle Swarm Optimization (PSO) algorithm to optimize multi-objective task scheduling in fog computing environments. The hybrid approach combines the strengths of GA and PSO, achieving effective exploration and exploitation of the search space, leading to improved performance compared to traditional single-algorithm approaches. For varying task inputs, the proposed hybrid algorithm improved execution time by 85.68% compared with the GA algorithm, 84% compared with Hybrid PWOA, and 51.03% compared with the PSO algorithm; it improved response time by 67.28% compared with GA, 54.24% compared with Hybrid PWOA, and 75.40% compared with PSO; and it improved completion time by 68.69% compared with GA, 98.91% compared with Hybrid PWOA, and 75.90% compared with PSO. For varying numbers of fog nodes, it improved execution time by 84.87% compared with GA, 88.64% compared with Hybrid PWOA, and 85.07% compared with PSO; response time by 65.92% compared with GA, 80.51% compared with Hybrid PWOA, and 85.26% compared with PSO; and completion time by 67.60% compared with GA, 81.34% compared with Hybrid PWOA, and 85.23% compared with PSO.

RevDate: 2024-03-05

Mehmood T, Latif S, Jamail NSM, et al (2024)

LSTMDD: an optimized LSTM-based drift detector for concept drift in dynamic cloud computing.

PeerJ. Computer science, 10:e1827.

This study aims to investigate the problem of concept drift in cloud computing and emphasizes the importance of early detection for enabling optimum resource utilization and offering an effective solution. The analysis includes synthetic and real-world cloud datasets, stressing the need for appropriate drift detectors tailored to the cloud domain. A modified version of Long Short-Term Memory (LSTM) called the LSTM Drift Detector (LSTMDD) is proposed and compared with other top drift detection techniques using prediction error as the primary evaluation metric. LSTMDD is optimized to improve performance in detecting anomalies in non-Gaussian distributed cloud environments. The experiments show that LSTMDD outperforms other methods for gradual and sudden drift in the cloud domain. The findings suggest that machine learning techniques such as LSTMDD could be a promising approach to addressing the problem of concept drift in cloud computing, leading to more efficient resource allocation and improved performance.

RevDate: 2024-03-04

Yin X, Fang W, Liu Z, et al (2024)

A novel multi-scale CNN and Bi-LSTM arbitration dense network model for low-rate DDoS attack detection.

Scientific reports, 14(1):5111.

Low-rate distributed denial of service attacks, also known as LDDoS attacks, pose notorious security risks in cloud computing networks. They overload cloud servers and degrade network service quality with a stealthy strategy. Furthermore, this kind of small-ratio and pulse-like abnormal traffic leads to a serious data scale problem. As a result, the existing models for detecting minority and adversary LDDoS attacks are insufficient in both detection accuracy and time consumption. This paper proposes a novel multi-scale Convolutional Neural Networks (CNN) and bidirectional Long-short Term Memory (bi-LSTM) arbitration dense network model (called MSCBL-ADN) for learning and detecting LDDoS attack behaviors under the condition of limited dataset and time consumption. The MSCBL-ADN incorporates CNN for preliminary spatial feature extraction and embedding-based bi-LSTM for time relationship extraction. It then employs an arbitration network to re-weigh feature importance for higher accuracy. At last, it uses a 2-block dense connection network to perform final classification. The experimental results conducted on the popular ISCX-2016-SlowDos dataset have demonstrated that the proposed MSCBL-ADN model has a significant improvement with high detection accuracy and superior time performance over the state-of-the-art models.

RevDate: 2024-03-01
CmpDate: 2024-03-01

Mahato T, Parida BR, S Bar (2024)

Assessing tea plantations biophysical and biochemical characteristics in Northeast India using satellite data.

Environmental monitoring and assessment, 196(3):327.

Despite advancements in using multi-temporal satellite data to assess long-term changes in Northeast India's tea plantations, a research gap exists in understanding the intricate interplay between biophysical and biochemical characteristics. Further exploration is crucial for precise, sustainable monitoring and management. In this study, satellite-derived vegetation indices and near-proximal sensor data were deployed to deduce various physico-chemical characteristics and to evaluate the health conditions of tea plantations in northeast India. The districts, such as Sonitpur, Jorhat, Sibsagar, Dibrugarh, and Tinsukia in Assam were selected, which are the major contributors to the tea industry in India. The Sentinel-2A (2022) data was processed in the Google Earth Engine (GEE) cloud platform and utilized for analyzing tea plantation biochemical and biophysical properties. Leaf chlorophyll (Cab) and nitrogen contents are determined using the Normalized Area Over Reflectance Curve (NAOC) index and flavanol contents, respectively. Biophysical and biochemical parameters of the tea assessed during the spring season (March-April) 2022 revealed that tea plantations located in Tinsukia and Dibrugarh were much healthier than those in the other districts of Assam, as is evident from the satellite-derived Enhanced Vegetation Index (EVI), Modified Soil Adjusted Vegetation Index (MSAVI), Leaf Area Index (LAI), and Fraction of Absorbed Photosynthetically Active Radiation (fPAR), including the Cab and nitrogen contents. The Cab of healthy tea plants varied from 25 to 35 µg/cm². Pearson correlation of satellite-derived Cab and nitrogen with field measurements showed R² of 0.61-0.62 (p-value < 0.001). This study offered vital information about land alterations and tea health conditions, which can be crucial for conservation, monitoring, and management practices.
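One of the indices named above, the Enhanced Vegetation Index, can be computed directly from surface-reflectance bands; the sketch below assumes Sentinel-2 bands B8 (NIR), B4 (red), and B2 (blue) already scaled to 0-1 reflectance, with toy values for illustration.

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index from surface-reflectance bands scaled to 0-1.

    For Sentinel-2, NIR/red/blue correspond to bands B8, B4 and B2; raw digital
    numbers are assumed to have been divided by 10000 beforehand.
    """
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Toy 2x2 reflectance patches (illustrative values only)
nir = np.array([[0.42, 0.40], [0.38, 0.45]])
red = np.array([[0.08, 0.09], [0.10, 0.07]])
blue = np.array([[0.04, 0.05], [0.05, 0.04]])
print(evi(nir, red, blue))
```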

RevDate: 2024-03-01

Liu X, Wider W, Fauzi MA, et al (2024)

The evolution of smart hotels: A bibliometric review of the past, present and future trends.

Heliyon, 10(4):e26472.

This study provides a bibliometric analysis of smart hotel research, drawing from 613 publications in the Web of Science (WoS) database to examine scholarly trends and developments in this dynamic field. Smart hotels, characterized by integrating advanced technologies such as AI, IoT, cloud computing, and big data, aim to redefine customer experiences and operational efficiency. Utilizing co-citation and co-word analysis techniques, the research delves into the depth of literature from past to future trends. In co-citation analysis, clusters including "Sustainable Hotel and Green Hotel", "Theories Integration in Smart Hotel Research", and "Consumers' Decisions about Green Hotels" underscore the pivotal areas of past and current research. Co-word analysis further reveals emergent trend clusters: "The New Era of Sustainable Tourism", "Elevating Standards and Guest Loyalty", and "Hotels' New Sustainable Blueprint in Modern Travel". These clusters reflect the industry's evolving focus on sustainability and technology-enhanced guest experiences. Theoretically, this research bridges gaps in smart hotel literature, proposing new frameworks for understanding customer decisions amid technological advancements and environmental responsibilities. Practically, it offers valuable insights for hotel managers, guiding technology integration strategies for enhanced efficiency and customer loyalty while underscoring the critical role of green strategies and sustainability.

RevDate: 2024-03-01

Mukred M, Mokhtar UA, Hawash B, et al (2024)

The adoption and use of learning analytics tools to improve decision making in higher learning institutions: An extension of technology acceptance model.

Heliyon, 10(4):e26315.

Learning Analytics Tools (LATs) can be used for informed decision-making regarding teaching strategies and their continuous enhancement. Therefore, LATs must be adopted in higher learning institutions, but several factors hinder their implementation, primarily due to the lack of an implementation model. Therefore, in this study, the focus is directed towards examining LATs adoption in Higher Learning Institutions (HLIs), with emphasis on the determinants of the adoption process. The study mainly aims to design a model of LAT adoption and use it in the above context to improve the institutions' decision-making and accordingly, the study adopted an extended version of the Technology Acceptance Model (TAM) as the underpinning theory. Five experts validated the employed survey instrument, and 500 questionnaire copies were distributed through e-mails, from which 275 copies were retrieved from Saudi employees working at public HLIs. The gathered data were subjected to Partial Least Square-Structural Equation Modeling (PLS-SEM) for analysis and to test the proposed conceptual model. Based on the findings, the perceived usefulness of LAT plays a significant role as a determinant of its adoption. Other variables include top management support, financial support, and the government's role in LATs acceptance and adoption among HLIs. The findings also supported the contribution of LAT adoption and acceptance towards making informed decisions and highlighted the need for big data facilities and cloud computing capability to support the usefulness of LATs. The findings have significant implications for LATs implementation success among HLIs, providing clear insights into the factors that can enhance its adoption and acceptance. They also lay the basis for future studies in the area to further validate the effect of LATs on decision-making among HLIs. Furthermore, the obtained findings are expected to serve as practical implications for policy makers and educational leaders in their objective to implement LAT using a multi-layered method that considers other aspects in addition to the perceptions of the individual user.

RevDate: 2024-02-29
CmpDate: 2024-02-28

Grossman RL, Boyles RR, Davis-Dusenbery BN, et al (2024)

A Framework for the Interoperability of Cloud Platforms: Towards FAIR Data in SAFE Environments.

Scientific data, 11(1):241.

As the number of cloud platforms supporting scientific research grows, there is an increasing need to support interoperability between two or more cloud platforms. A well accepted core concept is to make data in cloud platforms Findable, Accessible, Interoperable and Reusable (FAIR). We introduce a companion concept that applies to cloud-based computing environments that we call a Secure and Authorized FAIR Environment (SAFE). SAFE environments require data and platform governance structures and are designed to support the interoperability of sensitive or controlled access data, such as biomedical data. A SAFE environment is a cloud platform that has been approved through a defined data and platform governance process as authorized to hold data from another cloud platform and exposes appropriate APIs for the two platforms to interoperate.

RevDate: 2024-02-26

Rusinovich Y, Rusinovich V, Buhayenka A, et al (2024)

Classification of anatomic patterns of peripheral artery disease with automated machine learning (AutoML).

Vascular [Epub ahead of print].

AIM: The aim of this study was to investigate the potential of novel automated machine learning (AutoML) in vascular medicine by developing a discriminative artificial intelligence (AI) model for the classification of anatomical patterns of peripheral artery disease (PAD).

MATERIAL AND METHODS: Random open-source angiograms of lower limbs were collected using a web-indexed search. An experienced researcher in vascular medicine labelled the angiograms according to the most applicable grade of femoropopliteal disease in the Global Limb Anatomic Staging System (GLASS). An AutoML model was trained using the Vertex AI (Google Cloud) platform to classify the angiograms according to the GLASS grade with a multi-label algorithm. Following deployment, we conducted a test using 25 random angiograms (five from each GLASS grade). Model tuning through incremental training by introducing new angiograms was executed to the limit of the allocated quota following the initial evaluation to determine its effect on the software's performance.

RESULTS: We collected 323 angiograms to create the AutoML model. Among these, 80 angiograms were labelled as grade 0 of femoropopliteal disease in GLASS, 114 as grade 1, 34 as grade 2, 25 as grade 3 and 70 as grade 4. After 4.5 h of training, the AI model was deployed. The AI self-assessed average precision was 0.77 (0 is minimal and 1 is maximal). During the testing phase, the AI model successfully determined the GLASS grade in 100% of the cases. The agreement with the researcher was almost perfect with the number of observed agreements being 22 (88%), Kappa = 0.85 (95% CI 0.69-1.0). The best results were achieved in predicting GLASS grade 0 and grade 4 (initial precision: 0.76 and 0.84). However, the AI model exhibited poorer results in classifying GLASS grade 3 (initial precision: 0.2) compared to other grades. Disagreements between the AI and the researcher were associated with the low resolution of the test images. Incremental training expanded the initial dataset by 23% to a total of 417 images, which improved the model's average precision by 11% to 0.86.

CONCLUSION: After a brief training period with a limited dataset, AutoML has demonstrated its potential in identifying and classifying the anatomical patterns of PAD, operating unhindered by the factors that can affect human analysts, such as fatigue or lack of experience. This technology bears the potential to revolutionize outcome prediction and standardize evidence-based revascularization strategies for patients with PAD, leveraging its adaptability and ability to continuously improve with additional data. The pursuit of further research in AutoML within the field of vascular medicine is both promising and warranted. However, it necessitates additional financial support to realize its full potential.
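The reported agreement statistics (observed agreements and Cohen's kappa) can be computed as in the sketch below; the two label lists are hypothetical GLASS grades for 25 angiograms, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical GLASS grades (0-4) assigned to 25 test angiograms
researcher = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4]
ai_model   = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 2, 1, 3, 4, 4, 4, 4, 4]

kappa = cohen_kappa_score(researcher, ai_model)
agreements = sum(a == b for a, b in zip(researcher, ai_model))
print(f"Observed agreements: {agreements}/25, Cohen's kappa = {kappa:.2f}")
print(confusion_matrix(researcher, ai_model))   # per-grade breakdown of disagreements
```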

RevDate: 2024-02-27
CmpDate: 2024-02-27

Wu ZF, Yang SJ, Yang YQ, et al (2024)

[Current situation and development trend of digital traditional Chinese medicine pharmacy].

Zhongguo Zhong yao za zhi = Zhongguo zhongyao zazhi = China journal of Chinese materia medica, 49(2):285-293.

The 21st century is a highly information-driven era, and traditional Chinese medicine(TCM) pharmacy is also moving towards digitization and informatization. New technologies such as artificial intelligence and big data with information technology as the core are being integrated into various aspects of drug research, manufacturing, evaluation, and application, promoting interaction between these stages and improving the quality and efficiency of TCM preparations. This, in turn, provides better healthcare services to the general population. The deep integration of emerging technologies such as artificial intelligence, big data, and cloud computing with the TCM pharmaceutical industry will innovate TCM pharmaceutical technology, accelerate the research and industrialization process of TCM pharmacy, provide cutting-edge technological support to the global scientific community, boost the efficiency of the TCM industry, and promote economic and social development. Drawing from recent developments in TCM pharmacy in China, this paper discusses the current research status and future trends in digital TCM pharmacy, aiming to provide a reference for future research in this field.

RevDate: 2024-02-27
CmpDate: 2024-02-26

Alasmary H (2024)

ScalableDigitalHealth (SDH): An IoT-Based Scalable Framework for Remote Patient Monitoring.

Sensors (Basel, Switzerland), 24(4):.

Addressing the increasing demand for remote patient monitoring, especially among the elderly and mobility-impaired, this study proposes the "ScalableDigitalHealth" (SDH) framework. The framework integrates smart digital health solutions with latency-aware edge computing autoscaling, providing a novel approach to remote patient monitoring. By leveraging IoT technology and application autoscaling, the "SDH" enables the real-time tracking of critical health parameters, such as ECG, body temperature, blood pressure, and oxygen saturation. These vital metrics are efficiently transmitted in real time to AWS cloud storage through a layered networking architecture. The contributions are two-fold: (1) establishing real-time remote patient monitoring and (2) developing a scalable architecture that features latency-aware horizontal pod autoscaling for containerized healthcare applications. The architecture combines a scalable IoT-based design with an innovative microservice autoscaling strategy in edge computing, driven by dynamic latency thresholds and enhanced by the integration of custom metrics. This work ensures heightened accessibility, cost-efficiency, and rapid responsiveness to patient needs, marking a significant leap forward in the field. By dynamically adjusting pod numbers based on latency, the system optimizes responsiveness, particularly in edge computing's proximity-based processing. This innovative fusion of technologies not only revolutionizes remote healthcare delivery but also enhances Kubernetes performance, preventing unresponsiveness during high usage.
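The abstract above describes latency-aware horizontal pod autoscaling for containerized healthcare services. The following is a minimal, hypothetical control-loop sketch of that idea using the official Kubernetes Python client; the deployment name, namespace, thresholds, and latency probe are assumptions, and a production setup would more likely use a HorizontalPodAutoscaler driven by custom metrics rather than a hand-rolled loop.

```python
import time

from kubernetes import client, config

# Hypothetical names and thresholds; not taken from the SDH paper
DEPLOYMENT, NAMESPACE = "vitals-ingest", "sdh"
TARGET_MS, MIN_PODS, MAX_PODS = 150, 2, 20

def observe_p95_latency_ms():
    # Placeholder for a real metrics query (e.g., a Prometheus histogram quantile)
    return 120.0

config.load_incluster_config()  # use config.load_kube_config() outside the cluster
apps = client.AppsV1Api()

while True:
    latency = observe_p95_latency_ms()
    scale = apps.read_namespaced_deployment_scale(DEPLOYMENT, NAMESPACE)
    replicas = scale.spec.replicas
    if latency > TARGET_MS and replicas < MAX_PODS:
        replicas += 1   # scale out when observed latency exceeds the threshold
    elif latency < 0.5 * TARGET_MS and replicas > MIN_PODS:
        replicas -= 1   # scale in when there is ample latency headroom
    apps.patch_namespaced_deployment_scale(
        DEPLOYMENT, NAMESPACE, {"spec": {"replicas": replicas}})
    time.sleep(30)
```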

RevDate: 2024-02-27

Dhiman P, Saini N, Gulzar Y, et al (2024)

A Review and Comparative Analysis of Relevant Approaches of Zero Trust Network Model.

Sensors (Basel, Switzerland), 24(4):.

The Zero Trust security architecture emerged as an intriguing approach for overcoming the shortcomings of standard network security solutions. This extensive survey study provides a meticulous explanation of the underlying principles of Zero Trust, as well as an assessment of the many strategies and possibilities for effective implementation. The survey begins by examining the role of authentication and access control within Zero Trust Architectures, and subsequently investigates innovative authentication and access control solutions across different scenarios. It then explores in greater depth traditional techniques for encryption, micro-segmentation, and security automation, emphasizing their importance in achieving a secure Zero Trust environment. Zero Trust Architecture is explained in brief, along with the Taxonomy of Zero Trust Network Features. This review article provides useful insights into the Zero Trust paradigm, its approaches, problems, and future research objectives for scholars, practitioners, and policymakers. This survey contributes to the growth and implementation of secure network architectures in critical infrastructures by developing a deeper knowledge of Zero Trust.

RevDate: 2024-02-27

Li W, Zhou H, Lu Z, et al (2024)

Navigating the Evolution of Digital Twins Research through Keyword Co-Occurrence Network Analysis.

Sensors (Basel, Switzerland), 24(4):.

Digital twin technology has become increasingly popular and has revolutionized data integration and system modeling across various industries, such as manufacturing, energy, and healthcare. This study aims to explore the evolving research landscape of digital twins using Keyword Co-occurrence Network (KCN) analysis. We analyze metadata from 9639 peer-reviewed articles published between 2000 and 2023. The results unfold in two parts. The first part examines trends and keyword interconnection over time, and the second part maps sensing technology keywords to six application areas. This study reveals that research on digital twins is rapidly diversifying, with focused themes such as predictive and decision-making functions. Additionally, there is an emphasis on real-time data and point cloud technologies. The advent of federated learning and edge computing also highlights a shift toward distributed computation, prioritizing data privacy. This study confirms that digital twins have evolved into complex systems that can conduct predictive operations through advanced sensing technologies. The discussion also identifies challenges in sensor selection and empirical knowledge integration.
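Keyword co-occurrence network (KCN) analysis, as used in this study, links keywords that appear together in the same article and weights each edge by co-occurrence frequency. A toy sketch of the construction step with networkx is shown below; the keyword lists are invented examples, not data from the study.

```python
from itertools import combinations

import networkx as nx

# Each record is the keyword list of one article (toy examples only)
records = [
    ["digital twin", "edge computing", "predictive maintenance"],
    ["digital twin", "point cloud", "manufacturing"],
    ["digital twin", "edge computing", "federated learning"],
]

G = nx.Graph()
for keywords in records:
    # Every unordered keyword pair in an article adds one unit of edge weight
    for u, v in combinations(sorted(set(keywords)), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Rank keyword pairs by how often they co-occur across the corpus
top_pairs = sorted(G.edges(data="weight"), key=lambda e: e[2], reverse=True)
print(top_pairs[:5])
```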

RevDate: 2024-02-27
CmpDate: 2024-02-26

Wiryasaputra R, Huang CY, Lin YJ, et al (2024)

An IoT Real-Time Potable Water Quality Monitoring and Prediction Model Based on Cloud Computing Architecture.

Sensors (Basel, Switzerland), 24(4):.

In order to achieve the Sustainable Development Goals (SDG), it is imperative to ensure the safety of drinking water. The characteristics of each drinkable water, encompassing taste, aroma, and appearance, are unique. Inadequate water infrastructure and treatment can affect these features and may also threaten public health. This study utilizes the Internet of Things (IoT) in developing a monitoring system, particularly for water quality, to reduce the risk of contracting diseases. Water quality component data, such as water temperature, alkalinity or acidity, and contaminants, were obtained through a series of linked sensors. An Arduino microcontroller board acquired all the data, and Narrowband IoT (NB-IoT) transmitted them to the web server. Because limited human resources were available to observe the water quality physically, the monitoring was complemented by real-time notification alerts via a telephone text messaging application. The water quality data were monitored using Grafana in web mode, and binary machine learning classifiers were applied to predict whether the water was drinkable based on the collected data, which were stored in a database. Both decision tree and non-decision-tree classifiers were evaluated within the artificial intelligence framework. With 60% of the data used for training, 20% for validation, and 10% for testing, the performance of the decision tree (DT) model was more prominent in comparison with the Gradient Boosting (GB), Random Forest (RF), Neural Network (NN), and Support Vector Machine (SVM) modeling approaches. Through the monitoring and prediction of results, the authorities can sample the water sources every two weeks.
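The decision tree classifier highlighted in this abstract can be prototyped in a few lines with scikit-learn. The sketch below is illustrative only: the CSV file, column names, and hyperparameters are assumptions, and the split shown is a simplified stand-in for the study's 60/20/10 train/validation/test ratio.

```python
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sensor log with a binary 'potable' label; file and column names are assumptions
df = pd.read_csv("water_quality.csv")
X = df[["temperature", "ph", "turbidity", "tds"]]
y = df["potable"]

# Simplified 60/20/20 split for illustration (the study reports 60/20/10)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.6, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=42)

clf = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```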

RevDate: 2024-02-27

Pan S, Huang C, Fan J, et al (2024)

Optimizing Internet of Things Fog Computing: Through Lyapunov-Based Long Short-Term Memory Particle Swarm Optimization Algorithm for Energy Consumption Optimization.

Sensors (Basel, Switzerland), 24(4):.

In the era of continuous development in Internet of Things (IoT) technology, smart services are penetrating various facets of societal life, leading to a growing demand for interconnected devices. Many contemporary devices are no longer mere data producers but also consumers of data. As a result, massive amounts of data are transmitted to the cloud, but the latency generated in edge-to-cloud communication is unacceptable for many tasks. In response, this paper introduces a novel contribution: a layered computing network built on the principles of fog computing, accompanied by a newly devised algorithm designed to optimize user tasks and allocate computing resources within rechargeable networks. The proposed algorithm, a synergy of Lyapunov optimization, dynamic Long Short-Term Memory (LSTM) networks, and Particle Swarm Optimization (PSO), allows for predictive task allocation. The fog servers dynamically train LSTM networks to effectively forecast the data features of user tasks, facilitating proper offload decisions based on task priorities. In response to the challenge of slower hardware upgrades in edge devices compared to user demands, the algorithm optimizes the utilization of low-power devices and addresses performance limitations. Additionally, this paper considers the unique characteristics of rechargeable networks, where computing nodes acquire energy through charging. Utilizing Lyapunov functions for dynamic resource control enables nodes with abundant resources to maximize their potential, significantly reducing energy consumption and enhancing overall performance. The simulation results demonstrate that our algorithm surpasses traditional methods in terms of energy efficiency and resource allocation optimization. Despite limited prediction accuracy at the Fog Servers (FS), the proposed approach still improves overall performance significantly. The proposed approach improves the efficiency and the user experience of Internet of Things systems in terms of latency and energy consumption.

RevDate: 2024-02-27

Brata KC, Funabiki N, Panduman YYF, et al (2024)

An Enhancement of Outdoor Location-Based Augmented Reality Anchor Precision through VSLAM and Google Street View.

Sensors (Basel, Switzerland), 24(4):.

Outdoor Location-Based Augmented Reality (LAR) applications require precise positioning for seamless integration of virtual content into immersive experiences. However, common solutions in outdoor LAR applications rely on traditional smartphone sensor fusion methods, such as the Global Positioning System (GPS) and compasses, which often lack the accuracy needed for precise AR content alignment. In this paper, we introduce an innovative approach to enhance LAR anchor precision in outdoor environments. We leveraged Visual Simultaneous Localization and Mapping (VSLAM) technology, in combination with innovative cloud-based methodologies, and harnessed the extensive visual reference database of Google Street View (GSV) to address these accuracy limitations. For the evaluation, 10 Point of Interest (POI) locations were used as anchor point coordinates in the experiments. We comprehensively compared the accuracy of our approach with that of the common sensor fusion LAR solution, covering both accuracy benchmarking and running-load performance testing. The results demonstrate substantial enhancements in overall positioning accuracy compared to conventional GPS-based approaches for aligning AR anchor content in the real world.

RevDate: 2024-03-06
CmpDate: 2024-03-05

Horstmann A, Riggs S, Chaban Y, et al (2024)

A service-based approach to cryoEM facility processing pipelines at eBIC.

Acta crystallographica. Section D, Structural biology, 80(Pt 3):174-180.

Electron cryo-microscopy image-processing workflows are typically composed of elements that may, broadly speaking, be categorized as high-throughput workloads which transition to high-performance workloads as preprocessed data are aggregated. The high-throughput elements are of particular importance in the context of live processing, where an optimal response is highly coupled to the temporal profile of the data collection. In other words, each movie should be processed as quickly as possible at the earliest opportunity. The high level of disconnected parallelization in the high-throughput problem directly allows a completely scalable solution across a distributed computer system, with the only technical obstacle being an efficient and reliable implementation. The cloud computing frameworks primarily developed for the deployment of high-availability web applications provide an environment with a number of appealing features for such high-throughput processing tasks. Here, an implementation of an early-stage processing pipeline for electron cryotomography experiments using a service-based architecture deployed on a Kubernetes cluster is discussed in order to demonstrate the benefits of this approach and how it may be extended to scenarios of considerably increased complexity.

RevDate: 2024-02-26

McMurry AJ, Gottlieb DI, Miller TA, et al (2024)

Cumulus: A federated EHR-based learning system powered by FHIR and AI.

medRxiv : the preprint server for health sciences.

OBJECTIVE: To address challenges in large-scale electronic health record (EHR) data exchange, we sought to develop, deploy, and test an open source, cloud-hosted app 'listener' that accesses standardized data across the SMART/HL7 Bulk FHIR Access application programming interface (API).

METHODS: We advance a model for scalable, federated data sharing and learning. Cumulus software is designed to address key technology and policy desiderata, including local utility, control, and administrative simplicity, as well as privacy preservation during robust data sharing, and AI for processing unstructured text.

RESULTS: Cumulus relies on containerized, cloud-hosted software, installed within a healthcare organization's security envelope. Cumulus accesses EHR data via the Bulk FHIR interface and streamlines automated processing and sharing. The modular design enables use of the latest AI and natural language processing tools and supports provider autonomy and administrative simplicity. In an initial test, Cumulus was deployed across five healthcare systems, each partnered with public health. Cumulus outputs patient counts, which were aggregated into a table stratified by variables of interest to enable population health studies. All code is available open source. A policy stipulating that only aggregate data leave the institution greatly facilitated data sharing agreements.

DISCUSSION AND CONCLUSION: Cumulus addresses barriers to data sharing based on (1) federally required support for standard APIs, (2) increasing use of cloud computing, and (3) advances in AI. There is potential for scalability to support learning across myriad network configurations and use cases.
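Cumulus pulls EHR data through the SMART/HL7 Bulk FHIR Access API. For orientation, a minimal sketch of the standard $export kickoff-and-poll flow is given below using the requests library; the base URL, group ID, resource types, and token handling are hypothetical and are not taken from the Cumulus codebase.

```python
import time

import requests

BASE = "https://ehr.example.org/fhir"               # hypothetical FHIR base URL
AUTH = {"Authorization": "Bearer <access-token>"}   # token from SMART Backend Services auth

# Kick off a group-level bulk export for selected resource types
kickoff = requests.get(
    f"{BASE}/Group/example-cohort/$export",
    params={"_type": "Patient,Condition,Observation"},
    headers={**AUTH, "Accept": "application/fhir+json", "Prefer": "respond-async"},
)
status_url = kickoff.headers["Content-Location"]

# Poll the status endpoint until the server reports the export is complete
while True:
    status = requests.get(status_url, headers=AUTH)
    if status.status_code == 200:
        break
    time.sleep(int(status.headers.get("Retry-After", "30")))

# Download each NDJSON output file listed in the completion manifest
for item in status.json()["output"]:
    ndjson = requests.get(item["url"], headers=AUTH).text
    print(item["type"], ndjson.count("\n"), "resources")
```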

RevDate: 2024-02-20

Yadav N, Pattabiraman B, Tummuru NR, et al (2024)

Toward improving water-energy-food nexus through dynamic energy management of solar powered automated irrigation system.

Heliyon, 10(4):e25359.

This paper focuses on developing a water- and energy-saving reliable irrigation system using state-of-the-art computing, communication, and optimal energy management frameworks. The framework integrates real-time soil moisture and weather forecasting information to decide the time of irrigation and quantity of water required for potato crops, which is made available to users across a region through the cloud-based irrigation decision support system. This is accomplished through various modules such as data acquisition, soil moisture forecasting, smart irrigation scheduling, and an energy management scheme. The main emphasis is on the electrical segment, which demonstrates an energy management scheme for a PV-battery-based grid-connected system to operate the irrigation system valves and water pump. The proposed scheme is verified through simulation and dSpace-based real-time experiment studies. Overall, the proposed energy management system demonstrates an improvement in the optimal onsite solar power generation and storage capacity to power the solar pump, which saves electrical energy as well as water in order to establish an improved solar irrigation system. Finally, the proposed system achieved water and energy savings of around 9.24% for the potato crop with full irrigation, enhancing the Water-Energy-Food nexus at field scale.

RevDate: 2024-02-20
CmpDate: 2024-02-19

Beteri J, Lyimo JG, JV Msinde (2024)

The influence of climatic and environmental variables on sunflower planting season suitability in Tanzania.

Scientific reports, 14(1):3906.

Crop survival and growth require identifying correlations between a suitable planting season and relevant climatic and environmental characteristics. Climatic and environmental conditions may cause water and heat stress at critical stages of crop development, thus affecting planting suitability. Consequently, this may affect crop yield and productivity. This study assesses the influence of climate and environmental variables on rain-fed sunflower planting season suitability in Tanzania. Data on rainfall, temperature, slope, elevation, soil, and land use/cover were accessed from publicly available sources using Google Earth Engine, a cloud-based geospatial computing platform for remotely sensed datasets. The Tanzania sunflower production calendar of 2022 was adopted to mark the start and end limits of planting across the country. The default climate and environmental parameters from the FAO database were used. In addition, Pearson correlation was used to evaluate the relationship of rainfall and temperature with the Normalized Difference Vegetation Index (NDVI) from 2000 to 2020 at five-year intervals for January-April and June-September, the high- and poor-suitability seasons. The results showed that planting suitability of sunflower in Tanzania is driven more by rainfall than temperature. It was revealed that intra-annual planting suitability increases gradually from the short- to the long-rain season and diminishes towards the dry season of the year. The January-April planting window showed the highest suitability (41.65%), whereas June-September indicated the lowest suitability (0.05%). Though not statistically significant, rainfall and NDVI were positively correlated (r = 0.65 and 0.75), whereas temperature and NDVI were negatively correlated (r = -0.60 and -0.77). We recommend sunflower subsector interventions that consider appropriate intra-regional and seasonal diversity as an important adaptive mechanism to ensure high sunflower yields.

RevDate: 2024-02-18

Periola AA, Alonge AA, KA Ogudo (2024)

Ocean warming events resilience capability in underwater computing platforms.

Scientific reports, 14(1):3781.

Underwater data centers (UDCs) use the ocean's cold-water resources for free cooling and have low cooling costs. However, UDC cooling is affected by marine heat waves and underwater seismic events, thereby affecting UDC operating continuity. Though feasible, the use of reservoirs for UDC cooling is non-scalable due to the high computing overhead and the inability to support continuity during long-duration marine heat waves. The presented research proposes a mobile UDC (capable of migration) to address this challenge. The proposed UDC migrates from ocean regions with high underwater ground displacement to regions with no or small underwater ground displacement. It supports multiple client underwater applications without requiring clients to develop, deploy, and launch their own UDCs. The manner of resource utilization is influenced by the client's service level agreement. Hence, the proposed UDC provides resilient services to clients and their applications. Analysis shows that using the mobile UDC instead of the existing reservoir UDC approach enhances the operational duration and power usage effectiveness by 8.9-48.5% and 55.6-70.7% on average, respectively. In addition, the overhead is reduced by an average of 95.8-99.4%.

RevDate: 2024-02-18

Kashyap P, Shivgan K, Patil S, et al (2024)

Unsupervised deep learning framework for temperature-compensated damage assessment using ultrasonic guided waves on edge device.

Scientific reports, 14(1):3751.

Fueled by the rapid development of machine learning (ML) and greater access to cloud computing and graphics processing units, various deep learning-based models have been proposed for improving the performance of ultrasonic guided wave structural health monitoring (GW-SHM) systems, especially to counter complexity and heterogeneity in data due to varying environmental factors (e.g., temperature) and types of damage. Such models typically comprise millions of trainable parameters and therefore add to the cost of deployment due to requirements of cloud connectivity and processing, thus limiting the scale of deployment of GW-SHM. In this work, we propose an alternative solution that leverages the TinyML framework for development of lightweight ML models that can be directly deployed on embedded edge devices. The utility of our solution is illustrated by presenting an unsupervised learning framework for damage detection in a honeycomb composite sandwich structure with disbond and delamination types of damage, validated using data generated by finite element simulations and experiments performed at various temperatures in the range 0-90 °C. We demonstrate a fully integrated solution using a Xilinx Artix-7 FPGA for data acquisition and control, and edge inference of damage. Despite the limited number of features, the lightweight model shows reasonably high accuracy, thereby enabling detection of small-size defects with improved sensitivity on an edge device for online GW-SHM.

RevDate: 2024-02-17
CmpDate: 2024-02-15

Feng Q, Niu B, Ren Y, et al (2024)

A 10-m national-scale map of ground-mounted photovoltaic power stations in China of 2020.

Scientific data, 11(1):198.

We provide a remote sensing derived dataset of large-scale ground-mounted photovoltaic (PV) power stations in China for 2020, with a high spatial resolution of 10 meters. The dataset was produced on the Google Earth Engine (GEE) cloud computing platform using a random forest classifier and an active learning strategy. Specifically, ground samples were carefully collected across China via both field survey and visual interpretation. Afterwards, spectral and texture features were calculated from publicly available Sentinel-2 imagery. Meanwhile, topographic features consisting of slope and aspect, which are sensitive to PV locations, were also included, aiming to construct a multi-dimensional and discriminative feature space. Finally, the trained random forest model was adopted to predict PV power stations across China in parallel on GEE. Technical validation was carefully performed across China and achieved a satisfactory accuracy of over 89%. Above all, as the first publicly released 10-m national-scale distribution dataset of China's ground-mounted PV power stations, it can provide data references for relevant researchers in fields such as energy, land, remote sensing, and environmental sciences.
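The workflow described, Sentinel-2 features classified with a random forest on Google Earth Engine, can be outlined with the Earth Engine Python API roughly as follows. This is a schematic sketch rather than the authors' code: the region, date range, band list, training asset, and class property are assumptions.

```python
import ee

ee.Initialize()

# Hypothetical region of interest and labelled training points (class property assumed)
roi = ee.Geometry.Rectangle([115.0, 38.0, 116.0, 39.0])
samples = ee.FeatureCollection("users/example/pv_training_points")

# Median Sentinel-2 surface-reflectance composite for 2020 over the region
bands = ["B2", "B3", "B4", "B8", "B11", "B12"]
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(roi)
      .filterDate("2020-01-01", "2020-12-31")
      .median()
      .select(bands))

# Sample the composite at the labelled points and train a random forest classifier
training = s2.sampleRegions(collection=samples, properties=["class"], scale=10)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty="class", inputProperties=bands)

# Classify the composite; the result could then be exported as a 10-m map
pv_map = s2.classify(classifier)
```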

RevDate: 2024-02-17
CmpDate: 2024-02-15

Chuntakaruk H, Hengphasatporn K, Shigeta Y, et al (2024)

FMO-guided design of darunavir analogs as HIV-1 protease inhibitors.

Scientific reports, 14(1):3639.

The prevalence of HIV-1 infection continues to pose a significant global public health issue, highlighting the need for antiretroviral drugs that target viral proteins to reduce viral replication. One such target is HIV-1 protease (PR), responsible for cleaving viral polyproteins, leading to the maturation of viral proteins. While darunavir (DRV) is a potent HIV-1 PR inhibitor, drug resistance can arise due to mutations in HIV-1 PR. To address this issue, we developed a novel approach using the fragment molecular orbital (FMO) method and structure-based drug design to create DRV analogs. Using combinatorial programming, we generated novel analogs, freely accessible via the cloud-based Combined Analog generator Tool (CAT) implemented in Google Colab. The designed analogs underwent cascade screening through molecular docking with HIV-1 PR wild-type and major mutations at the active site. Molecular dynamics (MD) simulations were then used to assess ligand binding and susceptibility of the screened designed analogs. Our findings indicate that the three designed analogs guided by FMO, 19-0-14-3, 19-8-10-0, and 19-8-14-3, are superior to DRV and have the potential to serve as efficient PR inhibitors. These findings demonstrate the effectiveness of our approach and its potential to be used in further studies for developing new antiretroviral drugs.

RevDate: 2024-02-15
CmpDate: 2024-02-15

Bell J, Decker B, Eichmann A, et al (2024)

Effectiveness of Virtual Reality for Upper Extremity Function and Motor Performance of Children With Cerebral Palsy: A Systematic Review.

The American journal of occupational therapy : official publication of the American Occupational Therapy Association, 78(2):.

IMPORTANCE: Research on the functional and motor performance impact of virtual reality (VR) as an intervention tool for children with cerebral palsy (CP) is limited.

OBJECTIVE: To understand whether VR is an effective intervention to improve upper extremity (UE) function and motor performance of children diagnosed with CP.

DATA SOURCES: Databases used in the search were EBSCOhost, One Search, PubMed, Cloud Source, CINAHL, SPORTDiscus, and Google Scholar.

Studies published from 2006 to 2021 were included if children had a diagnosis of CP and were age 21 yr or younger, VR was used as an intervention, and measures of UE function and motor performance were used.

FINDINGS: Twenty-one studies were included, and the results provided promising evidence for improvements in areas of UE function, motor performance, and fine motor skills when VR is used as an intervention. To yield noticeable UE improvements in children with CP, VR should be implemented for 30 to 60 min/session and for at least 360 min over more than 3 wk. Additional areas of improvement include gross motor skills, functional mobility, occupational performance, and intrinsic factors.

CONCLUSIONS AND RELEVANCE: The use of VR as an intervention for children with CP to improve UE function and motor performance is supported. More randomized controlled trials with larger sample sizes focusing on similar outcomes and intervention frequencies are needed to determine the most effective type of VR for use in clinical occupational therapy. Plain-Language Summary: This systematic review explains how virtual reality (VR) has been used as an intervention with children with cerebral palsy (CP). The review synthesizes the results of 21 research studies of children who had a diagnosis of CP and who were 21 years old or younger. The findings support using VR to improve upper extremity performance, motor performance, and fine motor skills. The findings also show that occupational therapy practitioners should use a VR intervention at a minimum frequency of 30 to 60 minutes per session and for at least 360 minutes over more than 3 weeks to yield noticeable improvements in upper extremity, motor performance, and fine motor skills for children with CP.

RevDate: 2024-02-14

Bhattacharjee T, Kiwuwa-Muyingo S, Kanjala C, et al (2024)

INSPIRE datahub: a pan-African integrated suite of services for harmonising longitudinal population health data using OHDSI tools.

Frontiers in digital health, 6:1329630.

INTRODUCTION: Population health data integration remains a critical challenge in low- and middle-income countries (LMIC), hindering the generation of actionable insights to inform policy and decision-making. This paper proposes a pan-African, Findable, Accessible, Interoperable, and Reusable (FAIR) research architecture and infrastructure named the INSPIRE datahub. This cloud-based Platform-as-a-Service (PaaS) and on-premises setup aims to enhance the discovery, integration, and analysis of clinical, population-based surveys, and other health data sources.

METHODS: The INSPIRE datahub, part of the Implementation Network for Sharing Population Information from Research Entities (INSPIRE), employs the Observational Health Data Sciences and Informatics (OHDSI) open-source stack of tools and the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) to harmonise data from African longitudinal population studies. Operating on Microsoft Azure and Amazon Web Services cloud platforms, and on on-premises servers, the architecture offers adaptability and scalability for other cloud providers and technology infrastructure. The OHDSI-based tools enable a comprehensive suite of services for data pipeline development, profiling, mapping, extraction, transformation, loading, documentation, anonymization, and analysis.

RESULTS: The INSPIRE datahub's "On-ramp" services facilitate the integration of data and metadata from diverse sources into the OMOP CDM. The datahub supports the implementation of OMOP CDM across data producers, harmonizing source data semantically with standard vocabularies and structurally conforming to OMOP table structures. Leveraging OHDSI tools, the datahub performs quality assessment and analysis of the transformed data. It ensures FAIR data by establishing metadata flows, capturing provenance throughout the ETL processes, and providing accessible metadata for potential users. The ETL provenance is documented in a machine- and human-readable Implementation Guide (IG), enhancing transparency and usability.

CONCLUSION: The pan-African INSPIRE datahub presents a scalable and systematic solution for integrating health data in LMICs. By adhering to FAIR principles and leveraging established standards like OMOP CDM, this architecture addresses the current gap in generating evidence to support policy and decision-making for improving the well-being of LMIC populations. The federated research network provisions allow data producers to maintain control over their data, fostering collaboration while respecting data privacy and security concerns. A use-case demonstrated the pipeline using OHDSI and other open-source tools.

RevDate: 2024-02-29

Zandesh Z (2024)

Privacy, Security, and Legal Issues in the Health Cloud: Structured Review for Taxonomy Development.

JMIR formative research, 8:e38372.

BACKGROUND: Privacy in our digital world is a very complicated topic, especially when the technological achievements of cloud computing meet its multidimensional context. Here, privacy is an extended concept that is sometimes referred to as legal, philosophical, or even technical. Consequently, there is a need to harmonize it with other aspects in health care in order to provide a new ecosystem. This new ecosystem can lead to a paradigm shift involving the reconstruction and redesign of some of the most important and essential requirements like privacy concepts, legal issues, and security services. Cloud computing in the health domain has markedly contributed to other technologies, such as mobile health, the health Internet of Things, and wireless body area networks, with their increasing numbers of embedded applications. Other dependent applications that are usually used in health businesses, like social networks, as well as some newly introduced applications, have issues regarding privacy transparency boundaries and privacy-preserving principles, which have made policy making difficult in the field.

OBJECTIVE: One way to overcome this challenge is to develop a taxonomy to identify all relevant factors. A taxonomy serves to bring conceptual clarity to the set of alternatives in in-person health care delivery. This study aimed to construct a comprehensive taxonomy for privacy in the health cloud, which also provides a prospective landscape for privacy in related technologies.

METHODS: A search was performed for relevant published English papers in databases, including Web of Science, IEEE Digital Library, Google Scholar, Scopus, and PubMed. A total of 2042 papers were related to the health cloud privacy concept according to predefined keywords and search strings. Taxonomy designing was performed using the deductive methodology.

RESULTS: This taxonomy has 3 layers. The first layer has 4 main dimensions, including cloud, data, device, and legal. The second layer has 15 components, and the final layer has related subcategories (n=57). This taxonomy covers some related concepts, such as privacy, security, confidentiality, and legal issues, which are categorized here and defined by their expansion and distinctive boundaries. The main merits of this taxonomy are its ability to clarify privacy terms for different scenarios and signalize the privacy multidisciplinary objectification in eHealth.

CONCLUSIONS: This taxonomy can cover health industry requirements with its specifications like health data and scenarios, which are considered the most complicated among businesses and industries. Therefore, the use of this taxonomy could be generalized and customized to other domains and businesses that have fewer complications. Moreover, this taxonomy has different stakeholders, including people, organizations, and systems. If the antecedent effort in the taxonomy is proven, subject matter experts could enhance the extent of privacy in the health cloud by verifying, evaluating, and revising this taxonomy.

RevDate: 2024-02-12

McCoy ES, Park SK, Patel RP, et al (2024)

Development of PainFace software to simplify, standardize, and scale up mouse grimace analyses.

Pain pii:00006396-990000000-00526 [Epub ahead of print].

Facial grimacing is used to quantify spontaneous pain in mice and other mammals, but scoring relies on humans with different levels of proficiency. Here, we developed a cloud-based software platform called PainFace (http://painface.net) that uses machine learning to detect 4 facial action units of the mouse grimace scale (orbitals, nose, ears, whiskers) and score facial grimaces of black-coated C57BL/6 male and female mice on a 0 to 8 scale. Platform accuracy was validated in 2 different laboratories, with 3 conditions that evoke grimacing-laparotomy surgery, bilateral hindpaw injection of carrageenan, and intraplantar injection of formalin. PainFace can generate up to 1 grimace score per second from a standard 30 frames/s video, making it possible to quantify facial grimacing over time, and operates at a speed that scales with computing power. By analyzing the frequency distribution of grimace scores, we found that mice spent 7x more time in a "high grimace" state following laparotomy surgery relative to sham surgery controls. Our study shows that PainFace reproducibly quantifies facial grimaces indicative of nonevoked spontaneous pain and enables laboratories to standardize and scale-up facial grimace analyses.
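PainFace generates up to one grimace score per second from 30 frames/s video. A generic frame-sampling loop that could feed such a scorer is sketched below with OpenCV; the `model` callable is a hypothetical stand-in for a grimace classifier and is not the PainFace model itself.

```python
import cv2

def score_video(path, model, scores_per_second=1):
    """Sample roughly one frame per second from a video and score each sampled frame.

    `model` is a hypothetical callable taking a BGR frame and returning a 0-8 score.
    """
    cap = cv2.VideoCapture(path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30   # fall back to 30 fps if unknown
    step = max(1, int(round(native_fps / scores_per_second)))
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            scores.append(model(frame))   # one grimace score per sampled frame
        idx += 1
    cap.release()
    return scores
```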

RevDate: 2024-02-17

Simpson RL, Lee JA, Li Y, et al (2024)

Medicare meets the cloud: the development of a secure platform for the storage and analysis of claims data.

JAMIA open, 7(1):ooae007.

INTRODUCTION: Cloud-based solutions are a modern-day necessity for data-intense computing. This case report describes in detail the development and implementation of Amazon Web Services (AWS) at Emory: a secure, reliable, and scalable platform to store and analyze identifiable research data from the Centers for Medicare and Medicaid Services (CMS).

MATERIALS AND METHODS: Interdisciplinary teams from CMS, MBL Technologies, and Emory University collaborated to ensure compliance with CMS policy that consolidates laws, regulations, and other drivers of information security and privacy.

RESULTS: A dedicated team of individuals ensured successful transition from a physical storage server to a cloud-based environment. This included implementing access controls, vulnerability scanning, and audit logs that are reviewed regularly with a remediation plan. User adaptation required specific training to overcome the challenges of cloud computing.

CONCLUSION: Challenges created opportunities for lessons learned through the creation of an end-product accepted by CMS and shared across disciplines university-wide.

RevDate: 2024-02-12

González-Herbón R, González-Mateos G, Rodríguez-Ossorio JR, et al (2024)

An Approach to Develop Digital Twins in Industry.

Sensors (Basel, Switzerland), 24(3):.

Industry is currently undergoing a digital revolution driven by the integration of several enabling technologies. These include automation, robotics, cloud computing, industrial cybersecurity, systems integration, digital twins, etc. Of particular note is the increasing use of digital twins, which offer significant added value by providing realistic and fully functional process simulations. This paper proposes an approach for developing digital twins in industrial environments. The novelty lies not only in obtaining the model of the industrial system and integrating virtual reality and/or augmented reality but also in emphasizing the importance of incorporating other enabling technologies of Industry 4.0, such as system integration, connectivity with standard and specific industrial protocols, cloud services, or new industrial automation systems, to enhance the capabilities of the digital twin. Furthermore, the software tools that can be used to achieve this integration are proposed. Unity is chosen as the real-time 3D development tool for its cross-platform capability and streamlined industrial system modeling. The integration of augmented reality is facilitated by the Vuforia SDK. Node-RED is selected as the system integration option, and communications are carried out with the MQTT protocol. Finally, cloud-based services are recommended for effective data storage and processing. This approach has been used to develop a digital twin of a robotic electro-pneumatic cell.
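The digital twin approach above routes plant data through Node-RED over MQTT. As a small, hypothetical example of the publishing side, the sketch below pushes simulated sensor readings to an MQTT broker with paho-mqtt; the broker address, topic, and payload format are assumptions, not details from the paper.

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.local"     # hypothetical MQTT broker that Node-RED subscribes to
TOPIC = "cell1/pressure"            # hypothetical topic mirrored by the digital twin

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

# Publish one simulated pressure reading per second for the twin to mirror
for reading in [5.1, 5.3, 5.2]:
    payload = json.dumps({"ts": time.time(), "bar": reading})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```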

RevDate: 2024-02-12

Lu Y, Zhou L, Zhang A, et al (2024)

Application of Deep Learning and Intelligent Sensing Analysis in Smart Home.

Sensors (Basel, Switzerland), 24(3):.

Deep learning technology can improve sensing efficiency and has the ability to discover potential patterns in data; as a result, the efficiency of user behavior recognition in the field of smart homes has been further improved, making the recognition process more intelligent and humanized. This paper analyzes the optical sensors commonly used in smart homes and their working principles through case studies and explores the technical framework of user behavior recognition based on optical sensors. At the same time, CiteSpace (Basic version 6.2.R6) software is used to visualize and analyze the related literature, describe the main research hotspots and evolutionary changes of optical sensor-based smart home user behavior recognition, and summarize future research trends. Finally, fully utilizing the advantages of cloud computing technology, such as scalability and on-demand services, and combining typical life situations with the requirements of smart home users, a smart home data collection and processing technology framework based on elderly fall monitoring scenarios is designed. Based on the comprehensive research results, the application and positive impact of optical sensors in smart home user behavior recognition were analyzed, and inspiration was provided for future smart home user experience research.

RevDate: 2024-02-12

Ehtisham M, Hassan MU, Al-Awady AA, et al (2024)

Internet of Vehicles (IoV)-Based Task Scheduling Approach Using Fuzzy Logic Technique in Fog Computing Enables Vehicular Ad Hoc Network (VANET).

Sensors (Basel, Switzerland), 24(3):.

The intelligent transportation system (ITS) relies heavily on the vehicular ad hoc network (VANET) and the internet of vehicles (IoV), which combine cloud and fog to improve task processing capabilities. As an extension of the cloud, the fog infrastructure is close to the VANET, fostering an environment favorable to smart cars with IT equipment and effective task management oversight. In a VANET, vehicle processing power, bandwidth, and time are limited, while mobility is high. It is critical to satisfy the vehicles' requirements for minimal latency and fast reaction times while offloading duties to the fog layer. We propose a fuzzy logic-based task scheduling system in VANET to minimize latency and improve response time when offloading tasks in the IoV. The proposed method effectively transfers workloads to the fog computing layer while considering the constrained resources of car nodes. After choosing a suitable processing unit, the algorithm sends the job and its associated resources to the fog layer. The dataset comprises over 5000 crisp values for fog computing covering system utilization, latency, and task deadline time. Task execution, latency, task deadline, storage, CPU, and bandwidth utilization are used as fuzzy set values. We proved the effectiveness of our proposed task scheduling framework via simulation tests, outperforming current algorithms in terms of task ratio by 13%, decreasing average turnaround time by 9%, minimizing makespan time by 15%, and effectively reducing average latency within the network parameters. The proposed technique shows better results and responses than previous techniques by scheduling tasks toward fog layers with less response time and minimizing the overall time from task submission to completion.
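Fuzzy logic-based offloading decisions of the kind described can be prototyped with scikit-fuzzy. The sketch below is a simplified, hypothetical rule base, not the authors' scheduler: the universes, membership functions, and rules are assumptions chosen only to show the mechanics.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Inputs and output on 0-100 scales; all membership functions are illustrative assumptions
latency = ctrl.Antecedent(np.arange(0, 101, 1), "latency")
vehicle_load = ctrl.Antecedent(np.arange(0, 101, 1), "vehicle_load")
offload = ctrl.Consequent(np.arange(0, 101, 1), "offload")

latency["low"] = fuzz.trimf(latency.universe, [0, 0, 50])
latency["high"] = fuzz.trimf(latency.universe, [40, 100, 100])
vehicle_load["low"] = fuzz.trimf(vehicle_load.universe, [0, 0, 50])
vehicle_load["high"] = fuzz.trimf(vehicle_load.universe, [40, 100, 100])
offload["keep_local"] = fuzz.trimf(offload.universe, [0, 0, 50])
offload["send_to_fog"] = fuzz.trimf(offload.universe, [50, 100, 100])

rules = [
    # Offload when the vehicle is busy and network latency is tolerable
    ctrl.Rule(vehicle_load["high"] & latency["low"], offload["send_to_fog"]),
    # Keep the task local when latency is high or the vehicle has spare capacity
    ctrl.Rule(latency["high"] | vehicle_load["low"], offload["keep_local"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["latency"] = 25
sim.input["vehicle_load"] = 80
sim.compute()
print("offload score:", sim.output["offload"])  # > 50 suggests sending the task to fog
```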

RevDate: 2024-02-12

Hassan MU, Al-Awady AA, Ali A, et al (2024)

Smart Resource Allocation in Mobile Cloud Next-Generation Network (NGN) Orchestration with Context-Aware Data and Machine Learning for the Cost Optimization of Microservice Applications.

Sensors (Basel, Switzerland), 24(3):.

Mobile cloud computing (MCC) provides resources to users to handle smart mobile applications. In MCC, task scheduling is the solution for mobile users' context-aware, computation- and resource-rich applications. Most existing approaches have achieved only a moderate service reliability rate due to a lack of instance-centric resource estimation and task offloading, which is an NP-hard problem. The current intelligent scheduling process cannot address NP-hard problems with traditional task offloading approaches. To address this problem, the authors design an efficient context-aware service offloading approach based on instance-centric measurements. The revised machine learning model/algorithm employs task adaptation to make decisions regarding task offloading. The proposed MCVS scheduling algorithm predicts the usage rates of individual microservices for a practical task scheduling scheme, considering mobile device time, cost, network, location, and central processing unit (CPU) power to train data. One notable feature of the microservice software architecture is its capacity to facilitate the scalability, flexibility, and independent deployment of individual components. A series of simulation results show the efficiency of the proposed technique based on offloading, CPU usage, and execution time metrics. The experimental results efficiently show the learning rate in training and testing in comparison with existing approaches, showing efficient training and task offloading phases. The proposed system has lower costs and uses less energy to offload microservices in MCC. Graphical results are presented to define the effectiveness of the proposed model. For a service arrival rate of 80%, the proposed model achieves an average 4.5% service offloading rate and 0.18% CPU usage rate compared with state-of-the-art approaches. The proposed method demonstrates efficiency in terms of cost and energy savings for microservice offloading in mobile cloud computing (MCC).

RevDate: 2024-02-14
CmpDate: 2024-02-14

Parracciani C, Gigante D, Bonini F, et al (2024)

Leveraging Google Earth Engine for a More Effective Grassland Management: A Decision Support Application Perspective.

Sensors (Basel, Switzerland), 24(3):.

Grasslands cover a substantial portion of the Earth's surface and agricultural land and are crucial for human well-being and livestock farming. Ranchers and grassland management authorities face challenges in effectively controlling herders' grazing behavior and grassland utilization due to underdeveloped infrastructure and poor communication in pastoral areas. Cloud-based grazing management and decision support systems (DSS) are needed to address this issue, promote sustainable grassland use, and preserve their ecosystem services. These systems should enable rapid and large-scale grassland growth and utilization monitoring, providing a basis for decision-making in managing grazing and grassland areas. In this context, this study contributes to the objectives of the EU LIFE IMAGINE project, aiming to develop a Web-GIS app for conserving and monitoring Umbria's grasslands and promoting more informed decisions for more sustainable livestock management. The app, called "Praterie" and developed in Google Earth Engine, utilizes historical Sentinel-2 satellite data and harmonic modeling of the EVI (Enhanced Vegetation Index) to estimate vegetation growth curves and maturity periods for the forthcoming vegetation cycle. The app is updated in quasi-real time and enables users to visualize estimates for the upcoming vegetation cycle, including the maximum greenness, the days remaining to the subsequent maturity period, the accuracy of the harmonic models, and the grassland greenness status in the previous 10 days. Even though future additional developments can improve the informative value of the Praterie app, this platform can contribute to optimizing livestock management and biodiversity conservation by providing timely and accurate data about grassland status and growth curves.

RevDate: 2024-02-14
CmpDate: 2024-02-14

Gragnaniello M, Borghese A, Marrazzo VR, et al (2024)

Real-Time Myocardial Infarction Detection Approaches with a Microcontroller-Based Edge-AI Device.

Sensors (Basel, Switzerland), 24(3):.

Myocardial Infarction (MI), commonly known as heart attack, is a cardiac condition characterized by damage to a portion of the heart, specifically the myocardium, due to the disruption of blood flow. Given its recurring and often asymptomatic nature, there is a need for continuous monitoring using wearable devices. This paper proposes a single-microcontroller-based system designed for the automatic detection of MI based on the Edge Computing paradigm. Two solutions for MI detection are evaluated, based on Machine Learning (ML) and Deep Learning (DL) techniques. The developed algorithms are based on two different approaches currently available in the literature, and they are optimized for deployment on low-resource hardware. A feasibility assessment of their implementation on a single 32-bit microcontroller with an ARM Cortex-M4 core was examined, and a comparison in terms of accuracy, inference time, and memory usage was detailed. For ML techniques, significant data processing for feature extraction, coupled with a simpler Neural Network (NN), is involved. On the other hand, the second method, based on DL, employs Spectrogram Analysis for feature extraction and a Convolutional Neural Network (CNN) with a longer inference time and higher memory utilization. Both methods run on the same low-power hardware, reaching accuracies of 89.40% and 94.76%, respectively. The final prototype is an energy-efficient system capable of real-time detection of MI without the need to connect to remote servers or the cloud. All processing is performed at the edge, enabling NN inference on the same microcontroller.

RevDate: 2024-02-09

Huang Z, Herbozo Contreras LF, Yu L, et al (2024)

S4D-ECG: A Shallow State-of-the-Art Model for Cardiac Abnormality Classification.

Cardiovascular engineering and technology [Epub ahead of print].

PURPOSE: This study introduces an algorithm specifically designed for processing unprocessed 12-lead electrocardiogram (ECG) data, with the primary aim of detecting cardiac abnormalities.

METHODS: The proposed model integrates Diagonal State Space Sequence (S4D) model into its architecture, leveraging its effectiveness in capturing dynamics within time-series data. The S4D model is designed with stacked S4D layers for processing raw input data and a simplified decoder using a dense layer for predicting abnormality types. Experimental optimization determines the optimal number of S4D layers, striking a balance between computational efficiency and predictive performance. This comprehensive approach ensures the model's suitability for real-time processing on hardware devices with limited capabilities, offering a streamlined yet effective solution for heart monitoring.

RESULTS: Among the notable features of this algorithm is its strong resilience to noise, enabling the algorithm to achieve an average F1-score of 81.2% and an AUROC of 95.5% in generalization. The model underwent testing specifically on the lead II ECG signal, exhibiting consistent performance with an F1-score of 79.5% and an AUROC of 95.7%.

CONCLUSION: The algorithm is characterized by the elimination of pre-processing features and a low-complexity architecture, making it easy to implement on numerous computing devices. Consequently, this algorithm exhibits considerable potential for practical applications in analyzing real-world ECG data. This model can be placed on the cloud for diagnosis. The model was also tested on lead II of the ECG alone and demonstrated promising results, supporting its potential for on-device application.

RevDate: 2024-02-10

Schönherr S, Schachtl-Riess JF, Di Maio S, et al (2024)

Performing highly parallelized and reproducible GWAS analysis on biobank-scale data.

NAR genomics and bioinformatics, 6(1):lqae015.

Genome-wide association studies (GWAS) are transforming genetic research and enable the detection of novel genotype-phenotype relationships. In the last two decades, over 60 000 genetic associations across thousands of traits have been discovered using a GWAS approach. Due to increasing sample sizes, researchers are increasingly faced with computational challenges. A reproducible, modular and extensible pipeline with a focus on parallelization is essential to simplify data analysis and to allow researchers to devote their time to other essential tasks. Here we present nf-gwas, a Nextflow pipeline to run biobank-scale GWAS analysis. The pipeline automatically performs numerous pre- and post-processing steps, integrates regression modeling from the REGENIE package and supports single-variant, gene-based and interaction testing. It includes an extensive reporting functionality that allows users to inspect thousands of phenotypes and navigate interactive Manhattan plots directly in the web browser. The pipeline is tested using the unit-style testing framework nf-test, a crucial requirement in clinical and pharmaceutical settings. Furthermore, we validated the pipeline against published GWAS datasets and benchmarked the pipeline on high-performance computing and cloud infrastructures to provide cost estimations to end users. nf-gwas is a highly parallelized, scalable and well-tested Nextflow pipeline to perform GWAS analysis in a reproducible manner.

RevDate: 2024-02-22
CmpDate: 2024-02-21

Swetnam TL, Antin PB, Bartelme R, et al (2024)

CyVerse: Cyberinfrastructure for open science.

PLoS computational biology, 20(2):e1011270.

CyVerse, the largest publicly-funded open-source research cyberinfrastructure for life sciences, has played a crucial role in advancing data-driven research since the 2010s. As the technology landscape evolved with the emergence of cloud computing platforms, machine learning and artificial intelligence (AI) applications, CyVerse has enabled access by providing interfaces, Software as a Service (SaaS), and cloud-native Infrastructure as Code (IaC) to leverage new technologies. CyVerse services enable researchers to integrate institutional and private computational resources, custom software, perform analyses, and publish data in accordance with open science principles. Over the past 13 years, CyVerse has registered more than 124,000 verified accounts from 160 countries and was used for over 1,600 peer-reviewed publications. Since 2011, 45,000 students and researchers have been trained to use CyVerse. The platform has been replicated and deployed in three countries outside the US, with additional private deployments on commercial clouds for US government agencies and multinational corporations. In this manuscript, we present a strategic blueprint for creating and managing SaaS cyberinfrastructure and IaC as free and open-source software.

RevDate: 2024-02-11

Lewis EC, Zhu S, Oladimeji AT, et al (2024)

Design of an innovative digital application to facilitate access to healthy foods in low-income urban settings.

mHealth, 10:2.

BACKGROUND: Under-resourced urban minority communities in the United States are characterized by food environments with low access to healthy foods, high food insecurity, and high rates of diet-related chronic disease. In Baltimore, Maryland, low access to healthy food largely results from a distribution gap between small food sources (retailers) and their suppliers. Digital interventions have the potential to address this gap, while keeping costs low.

METHODS: In this paper, we describe the technical (I) front-end design and (II) back-end development process of the Baltimore Urban food Distribution (BUD) application (app). We identify and detail four main phases of the process: (I) information architecture; (II) low and high-fidelity wireframes; (III) prototype; and (IV) back-end components, while considering formative research and a pre-pilot test of a preliminary version of the BUD app.

RESULTS: Our lessons learned provide valuable insight into developing a stable app with a user-friendly experience and interface, and accessible cloud computing services for advanced technical features.

CONCLUSIONS: Next steps will involve a pilot trial of the app in Baltimore, and eventually, other urban and rural settings nationwide. Once iterative feedback is incorporated into the app, all code will be made publicly available via an open source repository to encourage adaptation for desired communities.

TRIAL REGISTRATION: ClinicalTrials.gov NCT05010018.

RevDate: 2024-02-10

Pacios D, Vázquez-Poletti JL, Dhuri DB, et al (2024)

A serverless computing architecture for Martian aurora detection with the Emirates Mars Mission.

Scientific reports, 14(1):3029.

Remote sensing technologies are experiencing a surge in adoption for monitoring Earth's environment, demanding more efficient and scalable methods for image analysis. This paper presents a new approach for the Emirates Mars Mission (Hope probe): a serverless computing architecture designed to analyze images of Martian auroras, a key aspect in understanding the Martian atmosphere. Harnessing the power of OpenCV and machine learning algorithms, our architecture offers image classification, object detection, and segmentation in a swift and cost-effective manner. Leveraging the scalability and elasticity of cloud computing, this innovative system is capable of managing high volumes of image data, adapting to fluctuating workloads. This technology, applied to the study of Martian auroras within the HOPE Mission, not only solves a complex problem but also paves the way for future applications in the broad field of remote sensing.
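The serverless pattern described, per-image analysis with OpenCV triggered in the cloud, can be illustrated with an AWS Lambda-style handler. The sketch below is a stand-in rather than mission code: the event shape, bucket and key names, and the simple threshold-and-contour step used in place of the mission's machine learning models are all assumptions.

```python
import boto3
import cv2
import numpy as np

s3 = boto3.client("s3")

def handler(event, context):
    """Lambda-style entry point: fetch one image from object storage and flag
    bright emission regions. Bucket, key, and threshold values are assumptions."""
    bucket = event["bucket"]
    key = event["key"]
    raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    img = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_GRAYSCALE)

    # Simple segmentation stand-in for the mission's ML models: threshold + contours
    _, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return {"aurora_candidates": len(contours)}
```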

RevDate: 2024-02-22

Xu J (2024)

The Current Status and Promotional Strategies for Cloud Migration of Hospital Information Systems in China: Strengths, Weaknesses, Opportunities, and Threats Analysis.

JMIR medical informatics, 12:e52080.

BACKGROUND: In the 21st century, Chinese hospitals have witnessed innovative medical business models, such as online diagnosis and treatment, cross-regional multidepartment consultation, and real-time sharing of medical test results, that surpass traditional hospital information systems (HISs). The introduction of cloud computing provides an excellent opportunity for hospitals to address these challenges. However, there is currently no comprehensive research assessing the cloud migration of HISs in China. This lack may hinder the widespread adoption and secure implementation of cloud computing in hospitals.

OBJECTIVE: The objective of this study is to comprehensively assess external and internal factors influencing the cloud migration of HISs in China and propose promotional strategies.

METHODS: Academic articles from January 1, 2007, to February 21, 2023, on the topic were searched in PubMed and HuiyiMd databases, and relevant documents such as national policy documents, white papers, and survey reports were collected from authoritative sources for analysis. A systematic assessment of factors influencing cloud migration of HISs in China was conducted by combining a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis and literature review methods. Then, various promotional strategies based on different combinations of external and internal factors were proposed.

RESULTS: After conducting a thorough search and review, this study included 94 academic articles and 37 relevant documents. The analysis of these documents reveals the increasing application of and research on cloud computing in Chinese hospitals, and that it has expanded to 22 disciplinary domains. However, more than half (n=49, 52%) of the documents primarily focused on task-specific cloud-based systems in hospitals, while only 22% (n=21 articles) discussed integrated cloud platforms shared across the entire hospital, medical alliance, or region. The SWOT analysis showed that cloud computing adoption in Chinese hospitals benefits from policy support, capital investment, and social demand for new technology. However, it also faces threats like loss of digital sovereignty, supplier competition, cyber risks, and insufficient supervision. Factors driving cloud migration for HISs include medical big data analytics and use, interdisciplinary collaboration, health-centered medical service provision, and successful cases. Barriers include system complexity, security threats, lack of strategic planning and resource allocation, relevant personnel shortages, and inadequate investment. This study proposes 4 promotional strategies: encouraging more hospitals to migrate, enhancing hospitals' capabilities for migration, establishing a provincial-level unified medical hybrid multi-cloud platform, and strengthening legal frameworks while providing robust technical support.

CONCLUSIONS: Cloud computing is an innovative technology that has gained significant attention from both the Chinese government and the global community. In order to effectively support the rapid growth of a novel, health-centered medical industry, it is imperative for Chinese health authorities and hospitals to seize this opportunity by implementing comprehensive strategies aimed at encouraging hospitals to migrate their HISs to the cloud.

RevDate: 2024-02-06

Ssekagiri A, Jjingo D, Bbosa N, et al (2024)

HIVseqDB: a portable resource for NGS and sample metadata integration for HIV-1 drug resistance analysis.

Bioinformatics advances, 4(1):vbae008.

SUMMARY: Human immunodeficiency virus (HIV) remains a public health threat, with drug resistance being a major concern in HIV treatment. Next-generation sequencing (NGS) is a powerful tool for identifying low-abundance drug resistance mutations (LA-DRMs) that conventional Sanger sequencing cannot reliably detect. To fully understand the significance of LA-DRMs, it is necessary to integrate NGS data with clinical and demographic data. However, freely available tools for NGS-based HIV-1 drug resistance analysis do not integrate these data. This poses a challenge for interpreting the impact of LA-DRMs, particularly in resource-limited settings where bioinformatics expertise is scarce. To address this challenge, we present HIVseqDB, a portable, secure, and user-friendly resource for integrating NGS data with associated clinical and demographic data for analysis of HIV drug resistance. HIVseqDB currently supports uploading of NGS data and associated sample data, HIV-1 drug resistance data analysis, browsing of uploaded data, and browsing and visualization of analysis results. Each function of HIVseqDB corresponds to an individual Django application. This ensures efficient incorporation of additional features with minimal effort. HIVseqDB can be deployed on various computing environments, such as on-premises high-performance computing facilities and cloud-based platforms.

HIVseqDB is available at https://github.com/AlfredUg/HIVseqDB. A deployed instance of HIVseqDB is available at https://hivseqdb.org.

RevDate: 2024-02-19
CmpDate: 2024-02-19

Lan L, Wang YG, Chen HS, et al (2024)

Improving on mapping long-term surface water with a novel framework based on the Landsat imagery series.

Journal of environmental management, 353:120202.

Surface water plays a crucial role in the ecological environment and societal development. Remote sensing detection serves as a significant approach to understanding the temporal and spatial changes in surface water series (SWS) and to directly constructing long-term SWS. Limited by various factors such as cloud, cloud shadow, and problematic satellite sensor monitoring, existing surface water mapping datasets might be short and incomplete due to losing raw information on certain dates. Improved algorithms are desired to increase the completeness and quality of SWS datasets. The present study proposes an automated framework to detect SWS, based on the Google Earth Engine and Landsat satellite imagery. This framework incorporates a raw image filtering algorithm to increase the number of available images, thereby expanding the completeness. It improves Otsu thresholding by replacing anomalous thresholds with the median value, thus enhancing the accuracy of SWS datasets. Gaps caused by the Landsat 7 ETM+ SLC-off failure are repaired with the random forest algorithm and morphological operations. The results show that this novel framework effectively expands the long-term series of SWS for three surface water bodies with distinct geomorphological patterns. The evaluation of confusion matrices suggests good performance in extracting surface water, with overall accuracy ranging from 0.96 to 0.97, user's accuracy between 0.96 and 0.98, producer's accuracy ranging from 0.83 to 0.89, and Matthews correlation coefficient ranging from 0.87 to 0.9 for several spectral water indices (NDWI, MNDWI, ANNDWI, and AWEI). Compared with the Global Reservoirs Surface Area Dynamics (GRSAD) dataset, our constructed datasets improve the completeness of SWS datasets by 27.01%-91.89% for the selected water bodies. The proposed framework for detecting SWS shows good potential for enlarging and completing long-term global-scale SWS datasets, capable of supporting assessments of surface-water-related environmental management and disaster prevention.
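
As a rough illustration of the thresholding idea, the sketch below computes per-image Otsu thresholds on a water index and replaces outlying thresholds with the series median, assuming the index images are already available as NumPy arrays; the index choice (MNDWI) and the outlier rule are assumptions, not the paper's exact procedure.

```python
# Minimal numpy/scikit-image sketch: per-image Otsu thresholds on a water index,
# with anomalous thresholds replaced by the median threshold of the series.
import numpy as np
from skimage.filters import threshold_otsu


def mndwi(green, swir):
    """Modified NDWI from green and SWIR reflectance arrays."""
    return (green - swir) / (green + swir + 1e-9)


def water_masks(index_series, z_limit=2.0):
    """Threshold each image; replace outlier thresholds with the series median."""
    thresholds = np.array([threshold_otsu(img[np.isfinite(img)]) for img in index_series])
    median_t = np.median(thresholds)
    outliers = np.abs(thresholds - np.mean(thresholds)) > z_limit * np.std(thresholds)
    thresholds[outliers] = median_t  # approximation of the paper's median replacement
    return [img > t for img, t in zip(index_series, thresholds)]
```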

RevDate: 2024-02-02

Lv W, Chen J, Cheng S, et al (2024)

QoS-driven resource allocation in fog radio access network: A VR service perspective.

Mathematical biosciences and engineering : MBE, 21(1):1573-1589.

While immersive media services represented by virtual reality (VR) are booming, they are facing fundamental challenges, i.e., soaring multimedia applications, large operation costs and scarce spectrum resources. It is difficult to simultaneously address these service challenges in a conventional radio access network (RAN) system. These problems motivated us to explore a quality-of-service (QoS)-driven resource allocation framework from a VR service perspective based on the fog radio access network (F-RAN) architecture. We elaborated the details of deployment on caching allocation, dynamic base station (BS) clustering, statistical beamforming and cost strategy under the QoS constraints in the F-RAN architecture. The key solutions aimed to break through the bottleneck of the network design and to deeply integrate the network-computing resources from the different perspectives of cloud, network, edge and terminal, making use of collaboration and integration. Accordingly, we provided a tailored algorithm to solve the corresponding formulation problem. This is the first design of VR services based on caching and statistical beamforming under the F-RAN. A case study is provided to demonstrate the advantage of our proposed framework compared with existing schemes. Finally, we concluded the article and discussed possible open research problems.

RevDate: 2024-02-09
CmpDate: 2024-02-09

Niu Q, Li H, Liu Y, et al (2024)

Toward the Internet of Medical Things: Architecture, trends and challenges.

Mathematical biosciences and engineering : MBE, 21(1):650-678.

In recent years, the growing pervasiveness of wearable technology has created new opportunities for medical and emergency rescue operations to protect users' health and safety, such as cost-effective medical solutions, more convenient healthcare and quick hospital treatments, which make it easier for the Internet of Medical Things (IoMT) to evolve. The study first presents an overview of the IoMT before introducing the IoMT architecture. Later, it portrays an overview of the core technologies of the IoMT, including cloud computing, big data and artificial intelligence, and it elucidates their utilization within the healthcare system. Further, several emerging challenges, such as cost-effectiveness, security, privacy, accuracy and power consumption, are discussed, and potential solutions for these challenges are also suggested.

RevDate: 2024-02-21

Shrestha N, Kolarik NE, JS Brandt (2024)

Mesic vegetation persistence: A new approach for monitoring spatial and temporal changes in water availability in dryland regions using cloud computing and the sentinel and Landsat constellations.

The Science of the total environment, 917:170491.

Climate change and anthropogenic activity pose severe threats to water availability in drylands. A better understanding of water availability response to these threats could improve our ability to adapt and mitigate climate and anthropogenic effects. Here, we present a Mesic Vegetation Persistence (MVP) workflow that takes every usable image in the Sentinel (10-m) and Landsat (30-m) archives to generate a dense time-series of water availability that is continuously updated as new images become available in Google Earth Engine. MVP takes advantage of the fact that mesic vegetation can be used as a proxy of available water in drylands. Our MVP workflow combines a novel moisture-based index (moisture change index - MCI) with a vegetation index (Modified Chlorophyll Absorption Ratio Vegetation Index (MCARI2)). MCI is the difference in soil moisture condition between an individual pixel's state and the dry and wet reference reflectance in the image, derived using 5th and 95th percentiles of the visible and shortwave infra-red drought index (VSDI). We produced and validated our MVP products across drylands of the western U.S., covering a broad range of elevation, land use, and ecoregions. MVP outperforms NDVI, a commonly-employed index for mesic ecosystem health, in both rangeland and forested ecosystems, and in mesic habitats with particularly high and low vegetation cover. We applied our MVP product at case study sites and found that MVP more accurately characterizes differences in mesic persistence, late-season water availability, and restoration success compared to NDVI. MVP could be applied as an indicator of change in a variety of contexts to provide a greater understanding of how water availability changes as a result of climate and management. Our MVP product for the western U.S. is freely available within a Google Earth Engine Web App, and the MVP workflow is replicable for other dryland regions.
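
The moisture change index described above can be pictured as a per-image normalization of VSDI between dry and wet reference states. The snippet below is an illustrative sketch of that logic only, assuming a precomputed VSDI array; the published MVP workflow runs in Google Earth Engine and may differ in detail.

```python
# Sketch of the moisture change index (MCI) logic: scale each pixel's VSDI between
# the image's dry (5th percentile) and wet (95th percentile) reference states.
import numpy as np


def moisture_change_index(vsdi):
    """Normalize a VSDI array between the image's dry and wet reference states."""
    dry_ref = np.nanpercentile(vsdi, 5)    # driest reference reflectance state
    wet_ref = np.nanpercentile(vsdi, 95)   # wettest reference reflectance state
    return (vsdi - dry_ref) / (wet_ref - dry_ref + 1e-9)
```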

RevDate: 2024-02-01

Zurqani HA (2024)

The first generation of a regional-scale 1-m forest canopy cover dataset using machine learning and google earth engine cloud computing platform: A case study of Arkansas, USA.

Data in brief, 52:109986.

Forest canopy cover (FCC) is essential in forest assessment and management, affecting ecosystem services such as carbon sequestration, wildlife habitat, and water regulation. Ongoing advancements in techniques for accurately and efficiently mapping and extracting FCC information require a thorough evaluation of their validity and reliability. The primary objectives of this study are to: (1) create a large-scale forest FCC dataset with a 1-meter spatial resolution, (2) assess the spatial distribution of FCC at a regional scale, and (3) investigate differences in FCC areas among the Global Forest Change (Hansen et al., 2013) and U.S. Forest Service Tree Canopy Cover products at various spatial scales in Arkansas (i.e., county and city levels). This study utilized high-resolution aerial imagery and a machine learning algorithm processed and analyzed using the Google Earth Engine cloud computing platform to produce the FCC dataset. The accuracy of this dataset was validated using one-third of the reference locations obtained from the Global Forest Change (Hansen et al., 2013) dataset and the National Agriculture Imagery Program (NAIP) aerial imagery with a 0.6-m spatial resolution. The results showed that the dataset successfully identified FCC at a 1-m resolution in the study area, with overall accuracy ranging between 83.31% and 94.35% per county. Spatial comparison results between the produced FCC dataset and the Hansen et al., 2013 and USFS products indicated a strong positive correlation, with R² values ranging between 0.94 and 0.98 for county and city levels. This dataset provides valuable information for monitoring, forecasting, and managing forest resources in Arkansas and beyond. The methodology followed in this study enhances efficiency, cost-effectiveness, and scalability, as it enables the processing of large-scale datasets with high computational demands in a cloud-based environment. It also demonstrates that machine learning and cloud computing technologies can generate high-resolution forest cover datasets, which might be helpful in other regions of the world.
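
For readers unfamiliar with the platform, the sketch below outlines the general Earth Engine pattern the study follows: sample labeled points from imagery, train a random forest, and classify the image. The asset IDs, band names, label property, and parameters are placeholders, not the study's actual inputs.

```python
# Hedged Google Earth Engine (Python API) sketch of the general workflow only.
import ee

ee.Initialize()

# NAIP aerial imagery; the date range and compositing choice are placeholders.
naip = ee.ImageCollection("USDA/NAIP/DOQQ").filterDate("2019-01-01", "2019-12-31")
image = naip.median().select(["R", "G", "B", "N"])

# 'training_points' is a hypothetical FeatureCollection with a 'canopy' label (0/1).
training_points = ee.FeatureCollection("users/example/canopy_training")
samples = image.sampleRegions(collection=training_points, properties=["canopy"], scale=1)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty="canopy", inputProperties=image.bandNames()
)
canopy_map = image.classify(classifier)  # 1-m canopy / non-canopy map (sketch only)
```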

RevDate: 2024-02-01

Li W, Zhang Z, Xie B, et al (2024)

HiOmics: A cloud-based one-stop platform for the comprehensive analysis of large-scale omics data.

Computational and structural biotechnology journal, 23:659-668.

Analyzing the vast amount of omics data generated comprehensively by high-throughput sequencing technology is of utmost importance for scientists. In this context, we propose HiOmics, a cloud-based platform equipped with nearly 300 plugins designed for the comprehensive analysis and visualization of omics data. HiOmics utilizes the Element Plus framework to craft a user-friendly interface and harnesses Docker container technology to ensure the reliability and reproducibility of data analysis results. Furthermore, HiOmics employs the Workflow Description Language and Cromwell engine to construct workflows, ensuring the portability of data analysis and simplifying the examination of intricate data. Additionally, HiOmics has developed DataCheck, a tool based on Golang, which verifies and converts data formats. Finally, by leveraging the object storage technology and batch computing capabilities of public cloud platforms, HiOmics enables the storage and processing of large-scale data while maintaining resource independence among users.

RevDate: 2024-02-06
CmpDate: 2024-02-01

Abbasi IA, Jan SU, Alqahtani AS, et al (2024)

A lightweight and robust authentication scheme for the healthcare system using public cloud server.

PloS one, 19(1):e0294429.

Cloud computing is vital in various applications, such as healthcare, transportation, governance, and mobile computing. When using a public cloud server, it must be secured against all known threats, because even a minor disturbance by an attacker severely threatens the whole system. A public cloud server is exposed to numerous threats; an adversary can easily enter the server to access sensitive information, especially in the healthcare industry, which offers services to patients, researchers, labs, and hospitals in a flexible way with minimal operational costs. It is challenging to make such a system reliable and to ensure the privacy and security of a cloud-enabled healthcare system. In this regard, numerous security mechanisms have been proposed in past decades. These protocols either suffer from replay attacks, require three to four round trips to complete, or incur high computation costs, meaning that security is not balanced against performance. Thus, this work uses a fuzzy extractor method to propose a robust security method for a cloud-enabled healthcare system based on Elliptic Curve Cryptography (ECC). The proposed scheme's security has been analyzed formally with BAN logic, ROM and ProVerif, and informally through pragmatic illustration and discussion of different attacks. The proposed security mechanism is analyzed in terms of communication and computation costs. Upon comparing the proposed protocol with prior work, it has been demonstrated that our scheme is 33.91% better in communication costs and 35.39% superior to its competitors in computation costs.

RevDate: 2024-02-06
CmpDate: 2024-02-01

Sun Y, Du X, Niu S, et al (2024)

A lightweight attribute-based signcryption scheme based on cloud-fog assisted in smart healthcare.

PloS one, 19(1):e0297002.

In the environment of big data of the Internet of Things, smart healthcare is developed in combination with cloud computing. However, with the generation of massive data in smart healthcare systems and the need for real-time data processing, traditional cloud computing is no longer suitable for the resource-constrained devices of the Internet of Things. To address this issue, we combine the advantages of fog computing and propose a cloud-fog assisted attribute-based signcryption for smart healthcare. In the constructed "cloud-fog-terminal" three-layer model, before the patient (data owner) performs signcryption, some of the heavy computational burden is first offloaded to fog nodes, and the doctor (data user) likewise outsources some complicated operations to fog nodes before unsigncryption by providing a blinded private key, which greatly reduces the calculation overhead of the resource-constrained devices of patient and doctor and improves calculation efficiency. It thus implements a lightweight signcryption algorithm. Security analysis confirms that the proposed scheme achieves indistinguishability under chosen ciphertext attack and existential unforgeability under chosen message attack if the computational bilinear Diffie-Hellman problem and the decisional bilinear Diffie-Hellman problem hold. Furthermore, performance analysis demonstrates that our new scheme has less computational overhead for both doctors and patients, so it offers higher computational efficiency and is well-suited for application scenarios of smart healthcare.

RevDate: 2024-01-31

Amjad S, Akhtar A, Ali M, et al (2024)

Orchestration and Management of Adaptive IoT-centric Distributed Applications.

IEEE internet of things journal, 11(3):3779-3791.

Current Internet of Things (IoT) devices provide a diverse range of functionalities, ranging from measurement and dissemination of sensory data observation, to computation services for real-time data stream processing. In extreme situations such as emergencies, a significant benefit of IoT devices is that they can help gain a more complete situational understanding of the environment. However, this requires the ability to utilize IoT resources while taking into account location, battery life, and other constraints of the underlying edge and IoT devices. A dynamic approach is proposed for orchestration and management of distributed workflow applications using services available in cloud data centers, deployed on servers, or IoT devices at the network edge. Our proposed approach is specifically designed for knowledge-driven business process workflows that are adaptive, interactive, evolvable and emergent. A comprehensive empirical evaluation shows that the proposed approach is effective and resilient to situational changes.

RevDate: 2024-02-28
CmpDate: 2024-02-07

Wu Y, Sanati O, Uchimiya M, et al (2024)

SAND: Automated Time-Domain Modeling of NMR Spectra Applied to Metabolite Quantification.

Analytical chemistry, 96(5):1843-1851.

Developments in untargeted nuclear magnetic resonance (NMR) metabolomics enable the profiling of thousands of biological samples. The exploitation of this rich source of information requires a detailed quantification of spectral features. However, the development of a consistent and automatic workflow has been challenging because of extensive signal overlap. To address this challenge, we introduce the software Spectral Automated NMR Decomposition (SAND). SAND follows on from the previous success of time-domain modeling and automatically quantifies entire spectra without manual interaction. The SAND approach uses hybrid optimization with Markov chain Monte Carlo methods, employing subsampling in both time and frequency domains. In particular, SAND randomly divides the time-domain data into training and validation sets to help avoid overfitting. We demonstrate the accuracy of SAND, which provides a correlation of ∼0.9 with ground truth on cases including highly overlapped simulated data sets, a two-compound mixture, and a urine sample spiked with different amounts of a four-compound mixture. We further demonstrate an automated annotation using correlation networks derived from SAND decomposed peaks, and on average, 74% of peaks for each compound can be recovered in single clusters. SAND is available in NMRbox, the cloud computing environment for NMR software hosted by the Network for Advanced NMR (NAN). Since the SAND method uses time-domain subsampling (i.e., random subset of time-domain points), it has the potential to be extended to a higher dimensionality and nonuniformly sampled data.
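
The core idea of time-domain modeling with subsampling can be illustrated with a much simpler stand-in than SAND itself: fit a sum of damped sinusoids to a randomly chosen subset of time points and hold out the rest. The sketch below makes several simplifying assumptions (a real-valued cosine model, plain least squares instead of the hybrid MCMC optimization) and is not the SAND implementation.

```python
# Illustrative sketch of time-domain modeling with random time-point subsampling.
import numpy as np
from scipy.optimize import least_squares


def fid_model(params, t):
    """Sum of exponentially damped cosines; params are (amplitude, frequency Hz, decay) triples."""
    a, f, r = params.reshape(-1, 3).T
    return (a[:, None] * np.exp(-r[:, None] * t) * np.cos(2 * np.pi * f[:, None] * t)).sum(axis=0)


def fit_fid(signal, t, init_params, train_frac=0.8, seed=0):
    """Fit on a random subset of time points; the remainder can serve for validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(t.size)
    train = idx[: int(train_frac * t.size)]
    residual = lambda p: fid_model(p, t[train]) - signal[train]
    return least_squares(residual, init_params)
```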

RevDate: 2024-02-17

Dral PO, Ge F, Hou YF, et al (2024)

MLatom 3: A Platform for Machine Learning-Enhanced Computational Chemistry Simulations and Workflows.

Journal of chemical theory and computation, 20(3):1193-1213.

Machine learning (ML) is increasingly becoming a common tool in computational chemistry. At the same time, the rapid development of ML methods requires a flexible software framework for designing custom workflows. MLatom 3 is a program package designed to leverage the power of ML to enhance typical computational chemistry simulations and to create complex workflows. This open-source package provides plenty of choice to the users who can run simulations with the command-line options, input files, or with scripts using MLatom as a Python package, both on their computers and on the online XACS cloud computing service at XACScloud.com. Computational chemists can calculate energies and thermochemical properties, optimize geometries, run molecular and quantum dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and two-photon absorption spectra with ML, quantum mechanical, and combined models. The users can choose from an extensive library of methods containing pretrained ML models and quantum mechanical approximations such as AIQM1 approaching coupled-cluster accuracy. The developers can build their own models using various ML algorithms. The great flexibility of MLatom is largely due to the extensive use of the interfaces to many state-of-the-art software packages and libraries.

RevDate: 2024-02-04
CmpDate: 2024-01-26

Renato A, Luna D, S Benítez (2024)

Development of an ASR System for Medical Conversations.

Studies in health technology and informatics, 310:664-668.

In this work we document the development of an ASR system for the transcription of conversations between patient and doctor, and we point out the critical aspects of the domain. The system was trained on an acoustic corpus of spontaneous speech, together with a domain language model and a supervised phonetic dictionary. Its performance was compared with two systems: a) NeMo End-to-End Conformers in Spanish and b) Google API ASR (Automatic Speech Recognition) Cloud. The evaluation was carried out on a set of 208 teleconsultations recorded during the year 2020. WER (Word Error Rate) was evaluated for ASR, and Recall and F1 for recognized medical entities. In conclusion, the developed system performed better, reaching 72.5% accuracy in the domain of teleconsultations and an F1 of 0.80 for entity recognition.

RevDate: 2024-01-29

Malik AW, Bhatti DS, Park TJ, et al (2024)

Cloud Digital Forensics: Beyond Tools, Techniques, and Challenges.

Sensors (Basel, Switzerland), 24(2):.

Cloud computing technology is rapidly becoming ubiquitous and indispensable. However, its widespread adoption also exposes organizations and individuals to a broad spectrum of potential threats. Despite the multiple advantages the cloud offers, organizations remain cautious about migrating their data and applications to the cloud due to fears of data breaches and security compromises. In light of these concerns, this study has conducted an in-depth examination of a variety of articles to enhance the comprehension of the challenges related to safeguarding and fortifying data within the cloud environment. Furthermore, the research has scrutinized several well-documented data breaches, analyzing the financial consequences they inflicted. Additionally, it scrutinizes the distinctions between conventional digital forensics and the forensic procedures specific to cloud computing. As a result of this investigation, the study has concluded by proposing potential opportunities for further research in this critical domain. By doing so, it contributes to our collective understanding of the complex panorama of cloud data protection and security, while acknowledging the evolving nature of technology and the need for ongoing exploration and innovation in this field. This study also helps in understanding the compound annual growth rate (CAGR) of cloud digital forensics, which is found to be quite high at ≈16.53% from 2023 to 2031. Moreover, its market is expected to reach ≈USD 36.9 billion by the year 2031; presently, it is ≈USD 11.21 billion, which shows that there are great opportunities for investment in this area. This study also strategically addresses emerging challenges in cloud digital forensics, providing a comprehensive approach to navigating and overcoming the complexities associated with the evolving landscape of cloud computing.
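
The quoted growth figure can be sanity-checked with the standard CAGR formula; the snippet below assumes an eight-year span from 2023 to 2031, since the exact convention used by the underlying market report is not stated.

```python
# Quick check of the compound annual growth rate quoted above (assumed 8-year span).
start, end, years = 11.21, 36.9, 2031 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR ≈ {cagr:.2%}")  # ≈ 16.1%, close to the ≈16.53% cited
```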

RevDate: 2024-01-28

Molnár T, G Király (2024)

Forest Disturbance Monitoring Using Cloud-Based Sentinel-2 Satellite Imagery and Machine Learning.

Journal of imaging, 10(1):.

Forest damage has become more frequent in Hungary in the last decades, and remote sensing offers a powerful tool for monitoring it rapidly and cost-effectively. A combined approach was developed to utilise high-resolution ESA Sentinel-2 satellite imagery, Google Earth Engine cloud computing, and field-based forest inventory data. Maps and charts were derived from vegetation indices (NDVI and Z∙NDVI) of satellite images to detect forest disturbances in the Hungarian study site for the period of 2017-2020. The NDVI maps were classified to reveal forest disturbances, and the cloud-based method successfully showed drought and frost damage in the oak-dominated Nagyerdő forest of Debrecen. Differences in the reactions to damage between tree species were visible on the index maps; therefore, a random forest machine learning classifier was applied to show the spatial distribution of dominant species. An accuracy assessment was accomplished with confusion matrices that compared classified index maps to field-surveyed data, demonstrating 99.1% producer, 71% user, and 71% total accuracies for forest damage and 81.9% for tree species. Based on the results of this study and the resilience of Google Earth Engine, the presented method has the potential to be extended to monitor all of Hungary in a faster, more accurate way using systematically collected field data, the latest satellite imagery, and artificial intelligence.
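
The Z∙NDVI index referred to above is essentially a standardized anomaly of NDVI against a multi-year reference. The snippet below is a minimal NumPy illustration of that calculation, assuming a per-year NDVI stack; the study itself computes the indices in Google Earth Engine.

```python
# Small numpy sketch of a standardized NDVI anomaly (Z·NDVI) for disturbance detection.
import numpy as np


def z_ndvi(ndvi_stack):
    """ndvi_stack: array (years, rows, cols); returns per-year Z-scores per pixel."""
    mean = np.nanmean(ndvi_stack, axis=0)
    std = np.nanstd(ndvi_stack, axis=0) + 1e-9
    return (ndvi_stack - mean) / std  # strongly negative values flag likely damage
```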

RevDate: 2024-02-02
CmpDate: 2024-01-23

Willingham TB, Stowell J, Collier G, et al (2024)

Leveraging Emerging Technologies to Expand Accessibility and Improve Precision in Rehabilitation and Exercise for People with Disabilities.

International journal of environmental research and public health, 21(1):.

Physical rehabilitation and exercise training have emerged as promising solutions for improving health, restoring function, and preserving quality of life in populations that face disparate health challenges related to disability. Despite the immense potential for rehabilitation and exercise to help people with disabilities live longer, healthier, and more independent lives, people with disabilities can experience physical, psychosocial, environmental, and economic barriers that limit their ability to participate in rehabilitation, exercise, and other physical activities. Together, these barriers contribute to health inequities in people with disabilities, by disproportionately limiting their ability to participate in health-promoting physical activities, relative to people without disabilities. Therefore, there is great need for research and innovation focusing on the development of strategies to expand accessibility and promote participation in rehabilitation and exercise programs for people with disabilities. Here, we discuss how cutting-edge technologies related to telecommunications, wearables, virtual and augmented reality, artificial intelligence, and cloud computing are providing new opportunities to improve accessibility in rehabilitation and exercise for people with disabilities. In addition, we highlight new frontiers in digital health technology and emerging lines of scientific research that will shape the future of precision care strategies for people with disabilities.

RevDate: 2024-01-28

Yan Z, Lin X, Zhang X, et al (2024)

Identity-Based Matchmaking Encryption with Equality Test.

Entropy (Basel, Switzerland), 26(1):.

The identity-based encryption with equality test (IBEET) has become a hot research topic in cloud computing as it provides an equality test for ciphertexts generated under different identities while preserving the confidentiality. Subsequently, for the sake of the confidentiality and authenticity of the data, the identity-based signcryption with equality test (IBSC-ET) has been put forward. Nevertheless, the existing schemes do not consider the anonymity of the sender and the receiver, which leads to the potential leakage of sensitive personal information. How to ensure confidentiality, authenticity, and anonymity in the IBEET setting remains a significant challenge. In this paper, we put forward the concept of the identity-based matchmaking encryption with equality test (IBME-ET) to address this issue. We formalized the system model, the definition, and the security models of the IBME-ET and, then, put forward a concrete scheme. Furthermore, our scheme was confirmed to be secure and practical by proving its security and evaluating its performance.

RevDate: 2024-01-28

Kim J, Jang H, H Koh (2024)

MiMultiCat: A Unified Cloud Platform for the Analysis of Microbiome Data with Multi-Categorical Responses.

Bioengineering (Basel, Switzerland), 11(1):.

The field of the human microbiome is rapidly growing due to the recent advances in high-throughput sequencing technologies. Meanwhile, there have also been many new analytic pipelines, methods and/or tools developed for microbiome data preprocessing and analytics. They are usually focused on microbiome data with continuous (e.g., body mass index) or binary responses (e.g., diseased vs. healthy), yet multi-categorical responses that have more than two categories are also common in reality. In this paper, we introduce a new unified cloud platform, named MiMultiCat, for the analysis of microbiome data with multi-categorical responses. The two main distinguishing features of MiMultiCat are as follows: First, MiMultiCat streamlines a long sequence of microbiome data preprocessing and analytic procedures on user-friendly web interfaces; as such, it is easy to use for many people in various disciplines (e.g., biology, medicine, public health). Second, MiMultiCat performs both association testing and prediction modeling extensively. For association testing, MiMultiCat handles both ecological (e.g., alpha and beta diversity) and taxonomical (e.g., phylum, class, order, family, genus, species) contexts through covariate-adjusted or unadjusted analysis. For prediction modeling, MiMultiCat employs the random forest and gradient boosting algorithms that are well suited to microbiome data while providing nice visual interpretations. We demonstrate its use through the reanalysis of gut microbiome data on obesity with body mass index categories. MiMultiCat is freely available on our web server.
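
Outside the web interface, the prediction-modeling step described above corresponds to a familiar pattern: fit random forest and gradient boosting classifiers to a taxa-abundance table with a multi-categorical label. The sketch below shows that pattern with scikit-learn under assumed file names and column labels; it is not MiMultiCat's own code.

```python
# Generic scikit-learn sketch of multi-categorical prediction from taxa abundances.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

taxa = pd.read_csv("genus_abundance.csv", index_col=0)                # hypothetical abundance table
labels = pd.read_csv("bmi_category.csv", index_col=0)["bmi_class"]    # e.g., lean/overweight/obese

for model in (RandomForestClassifier(n_estimators=500, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    scores = cross_val_score(model, taxa.values, labels.values, cv=5)
    print(type(model).__name__, scores.mean())
```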

RevDate: 2024-01-19

Xun D, Wang R, Zhang X, et al (2024)

Microsnoop: A generalist tool for microscopy image representation.

Innovation (Cambridge (Mass.)), 5(1):100541.

Accurate profiling of microscopy images from small scale to high throughput is an essential procedure in basic and applied biological research. Here, we present Microsnoop, a novel deep learning-based representation tool trained on large-scale microscopy images using masked self-supervised learning. Microsnoop can process various complex and heterogeneous images, and we classified images into three categories: single-cell, full-field, and batch-experiment images. Our benchmark study on 10 high-quality evaluation datasets, containing over 2,230,000 images, demonstrated Microsnoop's robust and state-of-the-art microscopy image representation ability, surpassing existing generalist and even several custom algorithms. Microsnoop can be integrated with other pipelines to perform tasks such as superresolution histopathology image and multimodal analysis. Furthermore, Microsnoop can be adapted to various hardware and can be easily deployed on local or cloud computing platforms. We will regularly retrain and reevaluate the model using community-contributed data to consistently improve Microsnoop.

RevDate: 2024-01-19

Putra IMS, Siahaan D, A Saikhu (2024)

SNLI Indo: A recognizing textual entailment dataset in Indonesian derived from the Stanford Natural Language Inference dataset.

Data in brief, 52:109998.

Recognizing textual entailment (RTE) is an essential task in natural language processing (NLP). It is the task of determining the inference relationship between text fragments (premise and hypothesis), of which the inference relationship is either entailment (true), contradiction (false), or neutral (undetermined). The most popular approach for RTE is neural networks, which has resulted in the best RTE models. Neural network approaches, in particular deep learning, are data-driven and, consequently, the quantity and quality of the data significantly influences the performance of these approaches. Therefore, we introduce SNLI Indo, a large-scale RTE dataset in the Indonesian language, which was derived from the Stanford Natural Language Inference (SNLI) corpus by translating the original sentence pairs. SNLI is a large-scale dataset that contains premise-hypothesis pairs that were generated using a crowdsourcing framework. The SNLI dataset is comprised of a total of 569,027 sentence pairs with the distribution of sentence pairs as follows: 549,365 pairs for training, 9,840 pairs for model validation, and 9,822 pairs for testing. We translated the original sentence pairs of the SNLI dataset from English to Indonesian using the Google Cloud Translation API. The existence of SNLI Indo addresses the resource gap in the field of NLP for the Indonesian language. Even though large datasets are available in other languages, in particular English, the SNLI Indo dataset enables a more optimal development of deep learning models for RTE in the Indonesian language.
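
Translating sentence pairs with the Google Cloud Translation API, as the dataset authors describe, can be done with the v2 Python client roughly as sketched below; credentials must be configured separately, and the example sentence and surrounding handling are illustrative assumptions rather than the authors' script.

```python
# Sketch of English-to-Indonesian translation with the Google Cloud Translation API (v2 client).
from google.cloud import translate_v2 as translate

client = translate.Client()  # assumes application-default credentials are configured


def to_indonesian(text: str) -> str:
    """Translate a single English sentence to Indonesian."""
    result = client.translate(text, source_language="en", target_language="id")
    return result["translatedText"]


premise_id = to_indonesian("A man inspects the uniform of a figure in some East Asian country.")
```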

RevDate: 2024-01-19

Koulgi P, S Jumani (2024)

Dataset of temporal trends of surface water area across India's rivers and basins.

Data in brief, 52:109991.

This dataset [1] quantifies the extent and rate of annual change in surface water area (SWA) across India's rivers and basins over a period of 30 years spanning 1991 to 2020. This data has been derived from the Global Surface Water Explorer, which maps historical terrestrial surface water occurrence globally using the Landsat satellite image archive since 1984, at a spatial resolution of 30 m/pixel and a temporal resolution of once a month. This monthly time-series was used to create annual composites of wet-season (October, November, December), dry-season (February, March, April), and permanent (October, November, December, February, March, April) surface water extent, which were then used to estimate annual rates of change. To estimate SWA trends for both river networks and their basins, we conducted our analysis at two spatial scales - (1) cross-sectional reaches (transects) across river networks, and (2) sub-basins within river catchments. For each reach and sub-basin (henceforth basin), temporal trends in wet-season, dry-season, and permanent SWA were estimated using the non-parametric Sen's slope estimator. For every valid reach and basin, the temporal timeseries of invalid or missing data was also computed as a fractional area to inform the level of certainty associated with reported SWA trends estimates. In addition to a Zenodo data repository, this data [1] is presented as an interactive web application (https://sites.google.com/view/surface-water-trends-india/; henceforth Website) to allow users to visualize the trends of permanent, wet-season, and dry-season water along with the extent of missing data for individual transects or basins across India. The Website provides a simple user interface to enable users to download seasonal time-series of SWA for any region of interest at the scale of the river network or basin. The Website also provides details about accessing the annual permanent, dry and wet season composites, which are stored as publicly accessible cloud assets on the Google Earth Engine platform. The spatial (basin and reach) and temporal (wet season, dry season, and permanent water scenarios) scales of information provided in this dataset yield a granular understanding of water systems in India. We envision this dataset to serve as a baseline information layer that can be used in combination with other data sources to support regional analysis of hydrologic trends, watershed-based analysis, and conservation planning. Specific applications include, but are not limited to, monitoring and identifying at-risk wetlands, visualizing and measuring changes to surface water extent before and after water infrastructure projects (such as dams and water abstraction projects), mapping drought prone regions, and mapping natural and anthropogenic changes to SWA along river networks. Intended users include, but are not limited to, students, academics, decision-makers, planners, policymakers, activists, and others interested in water-related issues.
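
The Sen's slope estimator used for these trends is available off the shelf; the snippet below is a minimal illustration with SciPy's Theil-Sen implementation on a placeholder annual surface water area series, not the authors' processing chain.

```python
# Minimal sketch of a non-parametric Sen's slope trend on an annual SWA series.
import numpy as np
from scipy.stats import theilslopes

years = np.arange(1991, 2021)
swa_km2 = np.random.default_rng(1).normal(100, 5, size=years.size)  # placeholder SWA series

slope, intercept, lo_slope, hi_slope = theilslopes(swa_km2, years)
print(f"Sen's slope: {slope:.3f} km²/yr (95% CI {lo_slope:.3f} to {hi_slope:.3f})")
```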

RevDate: 2024-02-28
CmpDate: 2024-02-23

Gheisari M, Ghaderzadeh M, Li H, et al (2024)

Mobile Apps for COVID-19 Detection and Diagnosis for Future Pandemic Control: Multidimensional Systematic Review.

JMIR mHealth and uHealth, 12:e44406.

BACKGROUND: In the modern world, mobile apps are essential for human advancement, and pandemic control is no exception. The use of mobile apps and technology for the detection and diagnosis of COVID-19 has been the subject of numerous investigations, although no thorough analysis of COVID-19 pandemic prevention has been conducted using mobile apps, creating a gap.

OBJECTIVE: With the intention of helping software companies and clinical researchers, this study provides comprehensive information regarding the different fields in which mobile apps were used to diagnose COVID-19 during the pandemic.

METHODS: In this systematic review, 535 studies were found after searching 5 major research databases (ScienceDirect, Scopus, PubMed, Web of Science, and IEEE). Of these, only 42 (7.9%) studies concerned with diagnosing and detecting COVID-19 were chosen after applying inclusion and exclusion criteria using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) protocol.

RESULTS: Mobile apps were categorized into 6 areas based on the content of these 42 studies: contact tracing, data gathering, data visualization, artificial intelligence (AI)-based diagnosis, rule- and guideline-based diagnosis, and data transformation. Patients with COVID-19 were identified via mobile apps using a variety of clinical, geographic, demographic, radiological, serological, and laboratory data. Most studies concentrated on using AI methods to identify people who might have COVID-19. Additionally, symptoms, cough sounds, and radiological images were used more frequently compared to other data types. Deep learning techniques, such as convolutional neural networks, performed comparatively better in the processing of health care data than other types of AI techniques, which improved the diagnosis of COVID-19.

CONCLUSIONS: Mobile apps could soon play a significant role as a powerful tool for data collection, epidemic health data analysis, and the early identification of suspected cases. These technologies can work with the internet of things, cloud storage, 5th-generation technology, and cloud computing. Processing pipelines can be moved to mobile device processing cores using new deep learning methods, such as lightweight neural networks. In the event of future pandemics, mobile apps will play a critical role in rapid diagnosis using various image data and clinical symptoms. Consequently, the rapid diagnosis of these diseases can improve the management of their effects and obtain excellent results in treating patients.

RevDate: 2024-01-19

Simaiya S, Lilhore UK, Sharma YK, et al (2024)

A hybrid cloud load balancing and host utilization prediction method using deep learning and optimization techniques.

Scientific reports, 14(1):1337.

Virtual machine (VM) integration methods have proven effective for optimizing load balancing in cloud data centers. The main challenge with VM integration methods is the trade-off among cost effectiveness, quality of service, performance, optimal resource utilization and avoidance of service level agreement violations. Deep learning methods are widely used in existing research on cloud load balancing. However, capturing noisy, multilayered fluctuations in workload remains a problem when resource-level provisioning is limited. The long short-term memory (LSTM) model plays a vital role in the prediction of server load and workload provisioning. This research presents a hybrid model using deep learning with Particle Swarm Intelligence and Genetic Algorithm ("DPSO-GA") for dynamic workload provisioning in cloud computing. The proposed model works in two phases. The first phase utilizes a hybrid PSO-GA approach to address the prediction challenge by combining the benefits of these two methods in fine-tuning the hyperparameters. In the second phase, CNN-LSTM is utilized. Before the CNN-LSTM approach is used to forecast resource consumption, the hybrid PSO-GA approach is used to train it. In the proposed framework, a one-dimensional CNN and LSTM are used to forecast cloud resource utilization at various subsequent time steps. The LSTM module simulates temporal information that predicts the upcoming VM workload, while a CNN module extracts complicated distinguishing features gathered from VM workload statistics. The proposed model simultaneously integrates utilization across multiple resources, which helps overcome the load balancing and over-provisioning issues. Comprehensive simulations are carried out utilizing the Google cluster traces benchmark dataset to verify the efficiency of the proposed DPSO-GA technique in enhancing the distribution of resources and load balancing for the cloud. The proposed model achieves outstanding results in terms of better precision, accuracy and load allocation.
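
The forecasting backbone described above (a one-dimensional CNN feeding an LSTM) can be sketched in a few lines of Keras. The snippet below uses arbitrary layer sizes and random placeholder traces, and omits the PSO-GA hyperparameter search entirely; it is an illustrative assumption of the architecture, not the authors' model.

```python
# Hedged Keras sketch of a Conv1D + LSTM forecaster for multi-resource utilization.
import numpy as np
import tensorflow as tf

window, features = 30, 2          # e.g., CPU and memory utilization per time step
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local feature extraction
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                                      # temporal dependencies
    tf.keras.layers.Dense(features),                               # next-step utilization
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(256, window, features).astype("float32")   # placeholder workload traces
y = np.random.rand(256, features).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```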

RevDate: 2024-01-16

Zhao Y, Sazlina SG, Rokhani FZ, et al (2024)

The expectations and acceptability of a smart nursing home model among Chinese older adults: a mixed methods study.

BMC nursing, 23(1):40.

BACKGROUND: Smart nursing homes (SNHs) integrate advanced technologies, including IoT, digital health, big data, AI, and cloud computing to optimise remote clinical services, monitor abnormal events, enhance decision-making, and support daily activities for older residents, ensuring overall well-being in a safe and cost-effective environment. This study developed and validated a 24-item Expectation and Acceptability of Smart Nursing Homes Questionnaire (EASNH-Q), and examined the levels of expectations and acceptability of SNHs and associated factors among older adults in China.

METHODS: This was an exploratory sequential mixed methods study, where the qualitative case study was conducted in Hainan and Dalian, while the survey was conducted in Xi'an, Nanjing, Shenyang, and Xiamen. The validation of EASNH-Q also included exploratory and confirmatory factor analyses. Multinomial logistic regression analysis was used to estimate the determinants of expectations and acceptability of SNHs.

RESULTS: The newly developed EASNH-Q uses a Likert Scale ranging from 1 (strongly disagree) to 5 (strongly agree), and underwent validation and refinement from 49 items to the final 24 items. The content validity indices for relevance, comprehensibility, and comprehensiveness were all above 0.95. The expectations and acceptability of SNHs exhibited a strong correlation (r = 0.85, p < 0.01), and good test-retest reliability for expectation (0.90) and acceptability (0.81). The highest tertile of expectations (χ²=28.89, p < 0.001) and acceptability (χ²=25.64, p < 0.001) towards SNHs were significantly associated with the willingness to relocate to such facilities. Older adults with self-efficacy in applying smart technologies (OR: 28.0) and those expressing a willingness to move to a nursing home (OR: 3.0) were more likely to have the highest tertile of expectations compared to those in the lowest tertile. Similarly, older adults with self-efficacy in applying smart technologies were more likely to be in the highest tertile of acceptability of SNHs (OR: 13.8).

CONCLUSIONS: EASNH-Q demonstrated commendable validity, reliability, and stability. The majority of Chinese older adults have high expectations for and accept SNHs. Self-efficacy in applying smart technologies and willingness to relocate to a nursing home were associated with high expectations and acceptability of SNHs.

RevDate: 2024-01-16

Putzier M, Khakzad T, Dreischarf M, et al (2024)

Implementation of cloud computing in the German healthcare system.

NPJ digital medicine, 7(1):12.

With the advent of artificial intelligence and big data projects, the necessity for a transition from analog medicine to modern-day solutions such as cloud computing becomes unavoidable. Even though this need is now common knowledge, the process is not always easy to start. Legislative changes, for example at the level of the European Union, are helping the respective healthcare systems to take the necessary steps. This article provides an overview of how a German university hospital is dealing with European data protection laws in the integration of cloud computing into everyday clinical practice. By describing our model approach, we aim to identify opportunities and possible pitfalls in order to sustainably influence digitization in Germany.

RevDate: 2024-01-17

Chen M, Wei Z, Li L, et al (2024)

Edge computing-based proactive control method for industrial product manufacturing quality prediction.

Scientific reports, 14(1):1288.

With the emergence of intelligent manufacturing, new-generation information technologies such as big data and artificial intelligence are rapidly integrating with the manufacturing industry. One of the primary applications is to assist manufacturing plants in predicting product quality. Traditional predictive models primarily focus on establishing high-precision classification or regression models, with less emphasis on imbalanced data, which is a specific but common scenario in practical industrial environments concerning quality prediction. A SMOTE-XGBoost proactive quality-prediction control method based on joint hyperparameter optimization is proposed to address the problem of imbalanced data classification in product quality prediction. In addition, edge computing technology is introduced to address issues in industrial manufacturing, such as the large bandwidth load and resource limitations associated with traditional cloud computing models. Finally, the practicality and effectiveness of the proposed method are validated through a case study of a brake disc production line. Experimental results indicate that the proposed method outperforms other classification methods in brake disc quality prediction.
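
The imbalance-handling step pairs SMOTE oversampling with an XGBoost classifier. The sketch below shows that generic combination on synthetic data with placeholder parameters; the paper's joint hyperparameter optimization and production-line features are not reproduced.

```python
# Generic sketch: oversample the minority (defect) class with SMOTE, then train XGBoost.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance the training set only
clf = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
clf.fit(X_bal, y_bal)
print("Test accuracy:", clf.score(X_te, y_te))
```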

RevDate: 2024-01-12

Zhao B, Chen WN, Wei FF, et al (2024)

PEGA: A Privacy-Preserving Genetic Algorithm for Combinatorial Optimization.

IEEE transactions on cybernetics, PP: [Epub ahead of print].

Evolutionary algorithms (EAs), such as the genetic algorithm (GA), offer an elegant way to handle combinatorial optimization problems (COPs). However, limited by expertise and resources, most users lack the capability to implement EAs for solving COPs. An intuitive and promising solution is to outsource evolutionary operations to a cloud server; however, this poses privacy concerns. To this end, this article proposes a novel computing paradigm called evolutionary computation as a service (ECaaS), where a cloud server renders evolutionary computation services for users while ensuring their privacy. Following the concept of ECaaS, this article presents PEGA, a privacy-preserving GA designed specifically for COPs. PEGA enables users, regardless of their domain expertise or resource availability, to outsource COPs to a cloud server that holds a competitive GA and approximates the optimal solution while safeguarding privacy. Notably, PEGA features the following characteristics. First, PEGA empowers users without domain expertise or sufficient resources to solve COPs effectively. Second, PEGA protects the privacy of users by preventing the leakage of optimization problem details. Third, PEGA performs comparably to the conventional GA when approximating the optimal solution. To realize its functionality, we implement PEGA in a twin-server architecture and evaluate it on two widely known COPs: 1) the traveling salesman problem (TSP) and 2) the 0/1 knapsack problem (KP). In particular, we utilize encryption cryptography to protect users' privacy and carefully design a suite of secure computing protocols to support the evolutionary operators of GA on encrypted chromosomes. Privacy analysis demonstrates that PEGA successfully preserves the confidentiality of COP contents. Experimental evaluation results on several TSP datasets and KP datasets reveal that PEGA performs equivalently to the conventional GA in approximating the optimal solution.

RevDate: 2024-01-15
CmpDate: 2024-01-15

Sun X, Sun W, Z Wang (2024)

Novel enterprises digital transformation influence empirical study.

PloS one, 19(1):e0296693.

With the rapid development of technologies such as cloud computing and big data, various levels of government departments in the country have successively introduced digital subsidy policies to promote enterprises' digital transformation. However, the effectiveness of these policies and their ability to truly achieve policy objectives have become pressing concerns across society. Against this backdrop, this paper employs a moderated mediation effects model to empirically analyze the incentive effects of financial subsidies on the digital transformation of A-share listed manufacturing companies in the Shanghai and Shenzhen stock markets from 2013 to 2022. The research findings indicate a significant promotion effect of financial subsidies on the digital transformation of manufacturing enterprises, especially demonstrating a notable incentive impact on the digital transformation of large enterprises, non-asset-intensive enterprises, technology-intensive enterprises, and non-labor-intensive enterprises. However, the incentive effect on the digital transformation of small and medium-sized enterprises (SMEs), asset-intensive enterprises, non-technology-intensive enterprises, and labor-intensive enterprises is not significant. Notably, the expansion of financial subsidies positively influences the augmentation of R&D investment within manufacturing enterprises, subsequently providing indirect encouragement for their digital transformation. Additionally, the incorporation of the degree of marketization implies its potential to moderate both the direct and indirect impacts of financial subsidies on enterprise digital transformation. This study enriches the research on the mechanism of the role of financial subsidies in digital transformation and provides empirical evidence on how market participation influences the effects of financial subsidies, thereby assisting policymakers in comprehensively understanding the impact of financial subsidy policies on different types of enterprises.

RevDate: 2024-01-15
CmpDate: 2024-01-15

Fan Y (2024)

Load balance -aware dynamic cloud-edge-end collaborative offloading strategy.

PloS one, 19(1):e0296897.

Cloud-edge-end (CEE) computing is a hybrid computing paradigm that converges the principles of edge and cloud computing. In the design of CEE systems, a crucial challenge is to develop efficient offloading strategies to achieve the collaboration of edge and cloud offloading. Although CEE offloading problems have been widely studied under various backgrounds and methodologies, load balance, which is an indispensable scheme in CEE systems to ensure the full utilization of edge resources, is still a factor that has not yet been accounted for. To fill this research gap, we are devoted to developing a dynamic load balance-aware CEE offloading strategy. First, we propose a load evolution model to characterize the influences of offloading strategies on the system load dynamics and, on this basis, establish a latency model as a performance metric of different offloading strategies. Then, we formulate an optimal control model to seek the optimal offloading strategy that minimizes the latency. Second, we analyze the feasibility of typical optimal control numerical methods in solving our proposed model, and develop a numerical method based on the framework of genetic algorithms. Third, through a series of numerical experiments, we verify our proposed method. Results show that our method is effective.
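
A genetic-algorithm solver of the kind mentioned above can be illustrated with a toy example: chromosomes encode per-task offloading fractions and fitness is a stand-in latency function. The sketch below is illustrative only; the paper's latency model is coupled to the load evolution dynamics and is considerably richer.

```python
# Toy genetic algorithm minimizing a placeholder latency over offloading fractions.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, pop_size, generations = 8, 40, 60


def latency(x):
    """Placeholder latency: cloud offloading cost plus a quadratic edge-overload penalty."""
    edge_load = 1.0 - x
    return float(np.sum(0.4 * x + edge_load ** 2))


pop = rng.random((pop_size, n_tasks))                              # rows: offloading fractions in [0, 1]
for _ in range(generations):
    fitness = np.array([latency(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]            # truncation selection
    mothers = parents[rng.integers(0, len(parents), pop_size)]
    fathers = parents[rng.integers(0, len(parents), pop_size)]
    children = np.where(rng.random((pop_size, n_tasks)) < 0.5, mothers, fathers)  # uniform crossover
    mutate = rng.random(children.shape) < 0.2
    children = children + mutate * rng.normal(0.0, 0.05, children.shape)          # Gaussian mutation
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmin([latency(ind) for ind in pop])]
print("best offloading fractions:", np.round(best, 2))
```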

RevDate: 2024-02-28
CmpDate: 2024-02-02

Peltzer A, Mohr C, Stadermann KB, et al (2024)

nf-core/nanostring: a pipeline for reproducible NanoString nCounter analysis.

Bioinformatics (Oxford, England), 40(1):.

MOTIVATION: The NanoString™ nCounter® technology platform is a widely used targeted quantification platform for the analysis of gene expression of up to ∼800 genes. Whereas the manufacturer's software tools can perform the analysis in an interactive, GUI-driven approach, there is no portable and user-friendly workflow available that can be used to perform reproducible analysis of multiple samples simultaneously in a scalable fashion on different computing infrastructures.

RESULTS: Here, we present the nf-core/nanostring open-source pipeline to perform a comprehensive analysis including quality control and additional features such as expression visualization, annotation with additional metadata and input creation for differential gene expression analysis. The workflow features an easy installation, comprehensive documentation, open-source code with the possibility for further extensions, a strong portability across multiple computing environments and detailed quality metrics reporting covering all parts of the pipeline. nf-core/nanostring has been implemented in the Nextflow workflow language and supports Docker, Singularity, Podman container technologies as well as Conda environments, enabling easy deployment on any Nextflow supported compatible system, including most widely used cloud computing environments such as Google GCP or Amazon AWS.

The source code, documentation and installation instructions as well as results for continuous tests are freely available at https://github.com/nf-core/nanostring and https://nf-co.re/nanostring.

RevDate: 2024-02-10
CmpDate: 2024-02-09

Ayeni KI, Berry D, Ezekiel CN, et al (2024)

Enhancing microbiome research in sub-Saharan Africa.

Trends in microbiology, 32(2):111-115.

While there are lighthouse examples of microbiome research in sub-Saharan Africa (SSA), a significant proportion of local researchers face several challenges. Here, we highlight prevailing issues limiting microbiome research in SSA and suggest potential technological, societal, and research-based solutions. We emphasize the need for considerable investment in infrastructures, training, and appropriate funding to democratize modern technologies with a view to providing useful data to improve human health.

RevDate: 2024-01-13

An X, Cai B, L Chai (2024)

Research on Over-the-Horizon Perception Distance Division of Optical Fiber Communication Based on Intelligent Roadways.

Sensors (Basel, Switzerland), 24(1):.

As more and more intelligent networking demonstration projects are built and put into use, large numbers of advanced roadside digital infrastructures are being deployed along both sides of intelligent roads. These devices sense the road situation in real time through perception algorithms and transmit the results to edge computing units and cloud control platforms over high-speed optical fiber transmission networks. This article proposes a cloud-edge-terminal architecture based on cloud-edge cooperation, together with a data exchange protocol for the cloud control basic platform. The over-the-horizon scene division and the optical fiber network communication model are verified by deploying intelligent roadside devices on an intelligent highway. The article also applies the optical fiber network communication algorithm and a ModelScope large model to run inference on real-time video data. The results on real data show that the StreamYOLO (Stream You Only Look Once) model can use streaming perception to detect and continuously track target vehicles in real-time video. Finally, the proposed method was experimentally validated in an actual smart highway digital infrastructure construction project. The experimental results demonstrate the high application value and promising prospects of optical fiber networks for dividing over-the-horizon perception distance in intelligent roadway construction.
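The streaming-perception loop described above can be pictured with the following hypothetical Python sketch: frames are pulled from a roadside camera, a placeholder detector stands in for the StreamYOLO / ModelScope model, and tracked vehicles would be forwarded to the edge unit and cloud control platform. The stream URL, detector, and range value are all assumptions.

    import cv2  # OpenCV for video capture; the detector itself is a stand-in

    def detect(frame):
        """Placeholder for a StreamYOLO-style detector; a real deployment would
        load the model (for example via ModelScope) and return tracked boxes."""
        return []  # list of (x, y, w, h, track_id) tuples in this sketch

    def streaming_perception(stream_url, roadside_range_m=200):
        cap = cv2.VideoCapture(stream_url)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            for (x, y, w, h, track_id) in detect(frame):
                # A roadside unit would forward tracked targets beyond the driver's
                # line of sight to the edge unit and cloud control platform here.
                print(f"vehicle {track_id} at ({x},{y}), within {roadside_range_m} m zone")
        cap.release()

    if __name__ == "__main__":
        streaming_perception("rtsp://example.invalid/roadside-camera")  # hypothetical URL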

RevDate: 2024-01-11

Sheik AT, Maple C, Epiphaniou G, et al (2023)

Securing Cloud-Assisted Connected and Autonomous Vehicles: An In-Depth Threat Analysis and Risk Assessment.

Sensors (Basel, Switzerland), 24(1): pii:s24010241.

As threat vectors and adversarial capabilities evolve, Cloud-Assisted Connected and Autonomous Vehicles (CCAVs) are becoming more vulnerable to cyberattacks. Several established threat analysis and risk assessment (TARA) methodologies are publicly available to address the evolving threat landscape. However, these methodologies inadequately capture the threat data of CCAVs, resulting in poorly defined threat boundaries or reduced efficacy of the TARA. This is due to multiple factors, including complex hardware-software interactions, rapid technological advancements, outdated security frameworks, heterogeneous standards and protocols, and human errors in CCAV systems. To address these factors, this study begins by systematically evaluating TARA methods and applying the Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privileges (STRIDE) threat model and the Damage, Reproducibility, Exploitability, Affected Users, and Discoverability (DREAD) risk assessment to target system architectures. This study identifies vulnerabilities, quantifies risks, and methodically examines defined data processing components. In addition, this study offers an attack tree to delineate attack vectors and provides a novel defense taxonomy against identified risks. This article demonstrates the efficacy of the TARA in systematically capturing compromised security requirements, threats, limits, and associated risks with greater precision. By doing so, we further discuss the challenges in protecting hardware-software assets against multi-staged attacks due to emerging vulnerabilities. As a result, this research informs advanced threat analyses and risk management strategies for enhanced security engineering of cyber-physical CCAV systems.
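As an illustration of the DREAD step in such an assessment, the sketch below scores a hypothetical CCAV threat by averaging the five DREAD ratings. The example threat, the 1-10 scale, and the risk bands are assumptions and do not reproduce the paper's ratings.

    from dataclasses import dataclass

    @dataclass
    class DreadRating:
        damage: int
        reproducibility: int
        exploitability: int
        affected_users: int
        discoverability: int

        def score(self) -> float:
            # Conventional DREAD score: mean of the five ratings (1-10 scale assumed here).
            parts = (self.damage, self.reproducibility, self.exploitability,
                     self.affected_users, self.discoverability)
            return sum(parts) / len(parts)

    # Hypothetical CCAV threat: spoofed GNSS positions pushed through the cloud link.
    threat = DreadRating(damage=9, reproducibility=6, exploitability=5,
                         affected_users=8, discoverability=4)
    risk = threat.score()
    band = "high" if risk >= 7 else "medium" if risk >= 4 else "low"  # assumed bands
    print(f"DREAD score {risk:.1f} -> {band} risk")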

RevDate: 2024-01-13

Suo L, Ma H, Jiao W, et al (2023)

Job-Deadline-Guarantee-Based Joint Flow Scheduling and Routing Scheme in Data Center Networks.

Sensors (Basel, Switzerland), 24(1):.

Many emerging Internet of Things (IoT) applications deployed on cloud platforms have strict latency requirements or deadline constraints, so meeting those deadlines is crucial to ensure quality of service for users and revenue for service providers in these delay-sensitive IoT applications. Efficient flow scheduling in data center networks (DCNs) plays a major role in reducing job execution times and has garnered significant attention in recent years. However, only a few studies have attempted to combine job-level flow scheduling and routing to guarantee the deadlines of multi-stage jobs. In this paper, an efficient heuristic joint flow scheduling and routing (JFSR) scheme is proposed. First, aiming to maximize the number of jobs that meet their deadlines, we formulate the joint flow scheduling and routing optimization problem for multiple multi-stage jobs. Second, because this problem is mathematically intractable, it is decomposed into two sub-problems: inter-coflow scheduling and intra-coflow scheduling. In the first sub-problem, coflows from different jobs are scheduled according to their relative remaining times; in the second sub-problem, an iterative coflow scheduling and routing (ICSR) algorithm is designed to alternately optimize the routing path and bandwidth allocation for each scheduled coflow. Finally, simulation results demonstrate that the proposed JFSR scheme can significantly increase the number of jobs that meet their deadlines in DCNs.
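To make the inter-coflow idea concrete, the following toy Python sketch orders coflows by a relative-remaining-time measure (slack divided by estimated transfer time) and checks which deadlines survive a serial transfer. It is a simplification for illustration, not the JFSR or ICSR algorithm, and all numbers are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Coflow:
        job_id: str
        bytes_left: float        # remaining data to transfer (GB)
        bandwidth: float         # allocated bandwidth (GB/s), fixed in this toy model
        deadline: float          # absolute deadline (s)

    def relative_remaining_time(c: Coflow, now: float) -> float:
        # Slack relative to the work left: a smaller value means more urgent.
        transfer_time = c.bytes_left / c.bandwidth
        return (c.deadline - now) / transfer_time

    def schedule(coflows, now=0.0):
        """Order coflows by urgency and report which can still meet their deadlines."""
        order = sorted(coflows, key=lambda c: relative_remaining_time(c, now))
        t = now
        feasible = []
        for c in order:                    # serial transfer in this simplified model
            t += c.bytes_left / c.bandwidth
            feasible.append((c.job_id, t <= c.deadline))
        return feasible

    if __name__ == "__main__":
        demo = [Coflow("job-A", 4.0, 1.0, 6.0),
                Coflow("job-B", 2.0, 1.0, 3.0),
                Coflow("job-C", 1.0, 1.0, 10.0)]
        print(schedule(demo))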

RevDate: 2024-01-13

Oyucu S, Polat O, Türkoğlu M, et al (2023)

Ensemble Learning Framework for DDoS Detection in SDN-Based SCADA Systems.

Sensors (Basel, Switzerland), 24(1):.

Supervisory Control and Data Acquisition (SCADA) systems play a crucial role in overseeing and controlling renewable energy sources such as solar, wind, hydro, and geothermal resources. Nevertheless, as conventional SCADA network infrastructures expand, significant management and scaling challenges arise due to increased size, complexity, and device diversity. Using Software-Defined Networking (SDN) technology in traditional SCADA network infrastructure offers management, scaling, and flexibility benefits. However, as SDN-based SCADA systems are increasingly integrated with modern technologies such as the Internet of Things, cloud computing, and big data analytics, cybersecurity becomes a major concern for these systems; cyber-physical energy systems (CPES) must therefore be considered together with all energy systems. One of the most dangerous types of cyber-attacks against SDN-based SCADA systems is the Distributed Denial of Service (DDoS) attack, which disrupts the management of energy resources, causes service interruptions, and increases operational costs. The first step in protecting against DDoS attacks in SDN-based SCADA systems is therefore to develop an effective intrusion detection system. This paper proposes a decision-tree-based ensemble learning technique to detect DDoS attacks in SDN-based SCADA systems by accurately distinguishing between normal and DDoS attack traffic. For training and testing the ensemble learning models, normal and DDoS attack traffic data are collected on a simulated experimental network topology. Feature selection and hyperparameter tuning are used to optimize the performance of the decision tree ensemble models. Experimental results show that feature selection, combining different decision tree ensemble models, and hyperparameter tuning can lead to a more accurate machine learning model with better performance in detecting DDoS attacks against SDN-based SCADA systems.
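A minimal sketch of that kind of pipeline, using scikit-learn on synthetic data rather than the paper's SDN-based SCADA traffic, is shown below. The specific estimators, feature counts, and grid values are assumptions chosen only to illustrate feature selection, ensembling, and hyperparameter tuning.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.pipeline import Pipeline

    # Synthetic stand-in for SDN/SCADA flow features (the paper uses traffic captured
    # on a simulated topology; feature counts and class balance here are assumptions).
    X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                               weights=[0.7, 0.3], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("et", ExtraTreesClassifier(random_state=0))],
        voting="soft")

    pipe = Pipeline([("select", SelectKBest(f_classif)),   # feature selection step
                     ("model", ensemble)])                 # decision-tree ensembles

    search = GridSearchCV(pipe,
                          {"select__k": [10, 20],
                           "model__rf__n_estimators": [100, 300]},
                          cv=3, scoring="f1")
    search.fit(X_tr, y_tr)
    print("best params:", search.best_params_)
    print("test F1:", search.score(X_te, y_te))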

RevDate: 2024-01-13

Rodríguez-Azar PI, Mejía-Muñoz JM, Cruz-Mejía O, et al (2023)

Fog Computing for Control of Cyber-Physical Systems in Industry Using BCI.

Sensors (Basel, Switzerland), 24(1):.

Brain-computer interfaces use signals from the brain, such as EEG, to determine brain states, which in turn can be used to issue commands, for example, to control industrial machinery. While cloud computing can aid in the creation and operation of industrial multi-user BCI systems, the vast amount of data generated from EEG signals can lead to slow response times and bandwidth problems. Fog computing reduces latency in high-demand computation networks. Hence, this paper introduces a fog computing solution for BCI processing. The solution uses fog nodes that incorporate machine learning algorithms to convert EEG signals into commands for controlling a cyber-physical system. The machine learning module uses a deep learning encoder to generate feature images from EEG signals, which are subsequently classified into commands by a random forest. Several classifiers were compared for this classification step, with the random forest achieving the best performance. Additionally, a fog computing simulator was used to compare the fog computing approach with a cloud-only approach. The results indicate that the fog computing method achieved lower latency than the purely cloud-based approach.
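The fog-node pipeline (encode EEG into features, then classify into commands) can be sketched as follows. PCA stands in for the paper's deep learning encoder and random data stands in for EEG, so the numbers are placeholders, but the encode-then-random-forest structure mirrors the described approach.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for windowed EEG: 600 trials of 8 channels x 128 samples.
    # The paper builds feature images with a deep encoder; PCA is only a lightweight
    # substitute so the fog-node flow (encode -> classify -> command) stays visible.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 8 * 128))
    y = rng.integers(0, 3, size=600)          # 3 hypothetical machine commands

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    encoder = PCA(n_components=32).fit(X_tr)              # feature extraction on the fog node
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(encoder.transform(X_tr), y_tr)

    commands = clf.predict(encoder.transform(X_te[:5]))   # commands for the cyber-physical system
    print("predicted commands:", commands)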

RevDate: 2024-01-13

Feng YC, Zeng SY, TY Liang (2023)

Part2Point: A Part-Oriented Point Cloud Reconstruction Framework.

Sensors (Basel, Switzerland), 24(1):.

Three-dimensional object modeling is necessary for developing virtual and augmented reality applications. Traditionally, application engineers must manually edit object shapes in art software or scan physical objects with LIDAR to construct 3D models, which is very time-consuming and costly. Fortunately, GPUs have recently provided a cost-effective solution for massive data computation. With GPU support, many studies have proposed 3D model generators based on different learning architectures that can automatically convert 2D object pictures into 3D object models with good performance. However, as the demanded model resolution increases, the required computing time and memory space grow as significantly as the parameters of the learning architecture, which seriously degrades the efficiency of 3D model construction and the feasibility of resolution improvement. To resolve this problem, this paper proposes a part-oriented point cloud reconstruction framework called Part2Point. The framework segments the object into parts, reconstructs a point cloud for each part individually, and combines the part point clouds into the complete object point cloud. It can therefore reduce the number of learning network parameters at a given resolution, effectively minimizing computation time and required memory space. Moreover, it can improve the resolution of the reconstructed point cloud so that the reconstructed model presents more details of object parts.
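The part-oriented idea, reconstructing each segmented part separately and then merging the per-part clouds, can be sketched as below. The per-part generator is a placeholder (a real system would run a learned 2D-to-3D network), and the point counts and part list are assumptions.

    import numpy as np

    def reconstruct_part(part_image: np.ndarray, n_points: int = 1024) -> np.ndarray:
        """Placeholder for a per-part point-cloud generator; here it just emits a
        random cluster so the merge step below is runnable."""
        center = np.random.uniform(-1, 1, size=3)
        return center + 0.1 * np.random.randn(n_points, 3)

    def part2point(part_images):
        # Reconstruct each segmented part independently, then merge the part clouds
        # into one object cloud; each per-part network stays small at a fixed resolution.
        clouds = [reconstruct_part(img) for img in part_images]
        return np.concatenate(clouds, axis=0)

    if __name__ == "__main__":
        fake_parts = [np.zeros((64, 64)) for _ in range(4)]   # e.g. seat, legs, back, arms
        cloud = part2point(fake_parts)
        print(cloud.shape)   # (4096, 3) combined object point cloud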

RevDate: 2024-01-13

Chen C, Gong L, Luo X, et al (2024)

Research on a new management model of distribution Internet of Things.

Scientific reports, 14(1):995.

Based on the controllable-intelligence characteristics of the Internet of Things (IoT) and the functional and transmission-delay requirements of the new distribution network, this study proposes a method that combines edge collaborative computing with the distribution network station area and builds a distribution network management structure model that incorporates the Packet Transport Network (PTN) structure. A multi-terminal node distribution model of the distribution IoT is established. Finally, a distribution IoT management model is constructed based on an edge multi-node cooperative reasoning algorithm and a collaborative computing architecture model. The aim is to solve the problem of long inference delays caused by heavy computing tasks on distribution cloud servers. The results show that the model reduces cloud inference delay when a large number of distribution IoT smart device terminals are connected to the network.
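A back-of-the-envelope comparison of edge-cooperative inference versus purely cloud-side inference, in the spirit of the delay problem described above, might look like the following. Every number is an assumption and the model ignores queueing effects, so it only illustrates why parallel edge nodes can cut inference delay when many terminals connect.

    # Toy comparison (all figures are assumptions) of pushing inference for many
    # distribution-IoT terminals to cooperating edge nodes versus one cloud server.
    TERMINALS = 120                 # smart device terminals in one station area
    EDGE_NODES = 6                  # cooperating edge units
    EDGE_INFER_MS = 40.0            # per-terminal inference time on an edge node
    CLOUD_INFER_MS = 10.0           # per-terminal inference time on the cloud server
    WAN_RTT_MS = 60.0               # round-trip delay terminal <-> cloud

    def edge_latency():
        # Terminals are spread over the edge nodes, which work in parallel.
        per_node = -(-TERMINALS // EDGE_NODES)          # ceiling division
        return per_node * EDGE_INFER_MS

    def cloud_latency():
        # All requests are served sequentially by one server and pay the WAN round trip.
        return TERMINALS * CLOUD_INFER_MS + WAN_RTT_MS

    print(f"edge  : {edge_latency():.0f} ms")
    print(f"cloud : {cloud_latency():.0f} ms")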


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences, at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.




Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version

RJR Picks from Around the Web (updated 11 MAY 2018)