Query run: 07 Oct 2025 at 01:42 | Hits: 4243

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 07 Oct 2025 at 01:42

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2025-10-03

Samantray S, Lockwood M, Andersen A, et al (2025)

PTM-Psi on the Cloud: A Cloud-Compatible Workflow for Scalable, High-Throughput Simulation of Post-Translational Modifications in Protein Complexes.

Journal of chemical information and modeling [Epub ahead of print].

We developed an advanced computational framework to accelerate the study of the impact of post-translational modifications on protein structures and interactions (PTM-Psi) using asynchronous, loosely coupled workflows on the Azure Quantum Elements Cloud platform. We seamlessly integrate emerging cloud computing assets that further expand the scope and capability of the PTM-Psi Python package by refactoring it into a cloud-compatible library. We employed a "workflow of workflows" approach, wherein a parent workflow spawns one or more child workflows, manages them, and acts on their results. This approach enabled us to optimize resource allocation according to each workflow's needs and allowed us to use the cloud's heterogeneous architecture for the computational investigation of a combinatorial explosion of thiol protein PTMs on an exemplary protein megacomplex critical to the Calvin-Benson cycle of light-dependent sugar production in cyanobacteria. With PTM-Psi on the cloud, we transformed the pipeline for thiol PTM analysis to achieve high throughput by leveraging the strengths of the cloud service. PTM-Psi on the cloud reduces operational complexity and lowers entry barriers to data interpretation with structural modeling for redox proteomics mass spectrometry specialists.
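
The "workflow of workflows" pattern can be illustrated with a small asynchronous sketch in which a parent coroutine spawns child workflows, supervises them concurrently, and acts on their results. The function names and the per-variant simulation below are hypothetical illustrations, not the PTM-Psi implementation.

    import asyncio
    import random

    async def simulate_ptm_variant(variant: str) -> dict:
        # Stand-in for a child workflow that runs one PTM simulation job.
        await asyncio.sleep(random.uniform(0.1, 0.5))  # pretend compute time
        return {"variant": variant, "score": random.random()}

    async def run_child_workflow(variant: str) -> dict:
        # A child workflow could itself spawn further tasks; here it wraps one job.
        return await simulate_ptm_variant(variant)

    async def parent_workflow(variants):
        # The parent spawns one child workflow per PTM variant, manages them
        # concurrently, and acts on their results once all have finished.
        tasks = [asyncio.create_task(run_child_workflow(v)) for v in variants]
        results = await asyncio.gather(*tasks)
        return max(results, key=lambda r: r["score"])

    if __name__ == "__main__":
        variants = ["S-glutathionylation", "S-nitrosylation", "disulfide"]
        print(asyncio.run(parent_workflow(variants)))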

RevDate: 2025-10-03

Catalucci S, Koutecký T, Senin N, et al (2025)

Investigation on the effects of the application of a sublimating matte coating in optical coordinate measurement of additively manufactured parts.

The International journal of advanced manufacturing technology, 140(5-6):2749-2775.

Coating sprays play a crucial role in extending the capabilities of optical measuring systems, especially when dealing with reflective surfaces, where excessive reflections, caused by incident light hitting the object surface, lead to increased noise and missing data points in the measurement results. This work focuses on metal additively manufactured parts and explores how the application of a sublimating matting spray on the measured surfaces can improve measurement performance. The use of sublimating matting sprays is a recent development for achieving temporary coatings that are useful for measurement, but then disappear in the final product. A series of experiments was performed involving measurement by fringe projection on a selected test part pre- and post-application of a sublimating coating layer. A comparison of measurement performance across the experiments was carried out by computing a selected set of custom-developed point cloud quality indicators: rate of surface coverage, level of sampling density, local point dispersion, and variation of selected linear dimensions computed from the point clouds. In addition, measurements were performed using an optical profilometer on the coated and uncoated surfaces to determine both the thickness of the coating layer and changes in surface texture (matte effect) due to the presence of the coating layer.
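
Two of the point cloud quality indicators named above can be given simple, hypothetical definitions for illustration: sampling density as points per unit footprint area and local point dispersion as the mean distance to the k nearest neighbours. The study's custom indicator definitions may differ.

    import numpy as np
    from scipy.spatial import cKDTree

    def local_point_dispersion(points: np.ndarray, k: int = 8) -> float:
        # Mean distance from each point to its k nearest neighbours
        # (one plausible reading of "local point dispersion").
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
        return float(dists[:, 1:].mean())

    def sampling_density(points: np.ndarray) -> float:
        # Points per unit area of the XY bounding box: a crude density proxy.
        extent = points[:, :2].max(axis=0) - points[:, :2].min(axis=0)
        area = float(np.prod(extent))
        return len(points) / area if area > 0 else float("nan")

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cloud = rng.uniform(0, 10, size=(5000, 3))      # synthetic point cloud (mm)
        print("density (pts/mm^2):", sampling_density(cloud))
        print("local dispersion (mm):", local_point_dispersion(cloud))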

RevDate: 2025-10-02

Sun X, Liao B, Huang S, et al (2025)

Evaluation of the particle characteristics of aggregates from construction spoils treatment through a real-time detection multimodal module based on 3D point cloud technology.

Waste management (New York, N.Y.), 208:115165 pii:S0956-053X(25)00576-8 [Epub ahead of print].

Construction spoils are generated during construction activities and typically contain aggregates along with mud, requiring size distribution (gradation) assessment for reuse. Conventional methods using the square opening sieves are inefficient and labor-intensive. This study introduced an intelligent multi-modal module primarily for gradation detection based on 3D scanning technology to replace traditional sieve techniques. The proposed Particle Point Cloud Clustering algorithm achieved nearly 100% segmentation accuracy for multi-particle point clouds within 2 s through adaptive point-spacing optimization. A Particle Sieving Size Determination method ensured particle size classification accuracy exceeding 93.0%. A particle surface reconstruction algorithm was integrated into the Particle Characteristics Extraction (PCE) method to address the challenge of volume calculation for unscanned particle bottom surfaces, providing a novel strategy for computing particle geometry that encompasses traditional analysis. To streamline volume calculation and bypass individual particle reconstruction, we developed a volume prediction approach that combines the Oriented Bounding Box volume with the particle morphological parameter (λ) obtained through the PCE method. Furthermore, the Particle Mass Modification model determined aggregate mass by multiplying the predicted volume with the established density. This model significantly reduced gradation errors to less than 1.2% on average, which was experimentally validated. Experimental results also confirmed that the proposed method achieves real-time, second-level detection and fulfills the typical application needs in a construction site. This study is expected to benefit other industrial processes, such as particle screening in the mining industry, since information on particle characteristics is equally crucial for this sector.
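
The volume-prediction idea, an oriented-bounding-box volume scaled by a morphological parameter λ and multiplied by density to obtain mass, can be sketched as follows. The PCA-based OBB approximation, the λ value, and the density are illustrative assumptions rather than the paper's PCE outputs.

    import numpy as np

    def obb_volume(points: np.ndarray) -> float:
        # Approximate the oriented bounding box via PCA of the point cloud:
        # rotate points into the principal axes and take the axis-aligned extents.
        centered = points - points.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        projected = centered @ vt.T
        extents = projected.max(axis=0) - projected.min(axis=0)
        return float(np.prod(extents))

    def predicted_particle_mass(points: np.ndarray, lam: float, density: float) -> float:
        # Volume prediction = OBB volume scaled by a morphological parameter lambda;
        # mass = predicted volume * material density (values here are placeholders).
        return obb_volume(points) * lam * density

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        particle = rng.normal(size=(2000, 3)) * [8.0, 5.0, 3.0]   # synthetic particle (mm)
        lam = 0.55            # hypothetical shape factor from a PCE-like step
        density = 2.65e-3     # g/mm^3, typical aggregate density
        print("predicted mass (g):", predicted_particle_mass(particle, lam, density))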

RevDate: 2025-10-02

Ma Q, Fan R, Zhao L, et al (2025)

SGSG: Stroke-Guided Scene Graph Generation.

IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].

3D scene graph generation is essential for spatial computing in Extended Reality (XR), providing structured semantics for task planning and intelligent perception. However, unlike instance-segmentation-driven setups, generating semantic scene graphs still suffer from limited accuracy due to coarse and noisy point cloud data typically acquired in practice, and from the lack of interactive strategies to incorporate users, spatialized and intuitive guidance. We identify three key challenges: designing controllable interaction forms, involving guidance in inference, and generalizing from local corrections. To address these, we propose SGSG, a Stroke-Guided Scene Graph generation method that enables users to interactively refine 3D semantic relationships and improve predictions in real time. We propose three types of strokes and a lightweight SGstrokes dataset tailored for this modality. Our model integrates stroke guidance representation and injection for spatio-temporal feature learning and reasoning correction, along with intervention losses that combine consistency-repulsive and geometry-sensitive constraints to enhance accuracy and generalization. Experiments and the user study show that SGSG outperforms state-of-the-art methods 3DSSG and SGFN in overall accuracy and precision, surpasses JointSSG in predicate-level metrics, and reduces task load across all control conditions, establishing SGSG as a new benchmark for interactive 3D scene graph generation and semantic understanding in XR. Implementation resources are available at: https://github.com/Sycamore-Ma/SGSG-runtime.

RevDate: 2025-10-01

Sudhakar M, K Vivekrabinson (2025)

Enhanced CNN based approach for IoT edge enabled smart car driving system for improving real time control and navigation.

Scientific reports, 15(1):33932.

This study investigates the critical control factors differentiating human-driven vehicles from IoT edge-enabled smart driving systems. Real-time steering, throttle, and brake control are the main areas of emphasis. By combining many high-precision sensors and using edge computing for real-time processing, the research seeks to improve autonomous vehicle decision-making. The suggested system gathers real-time time-series data using LiDAR, radar, GPS, IMU, and ultrasonic sensors. Before sending this data to a cloud server, edge nodes preprocess it. There, a Convolutional Neural Network (CNN) creates predicted control vectors for vehicle navigation. The study uses a MATLAB 2023 simulation framework that includes 100 autonomous cars, five edge nodes, and a centralized cloud server. Multiple convolutional and pooling layers make up the CNN architecture, which is followed by fully connected layers. To enhance trajectory estimation, grayscale and optical flow images are used. Trajectory smoothness measures, loss function trends, and Root Mean Square Error (RMSE) are used to evaluate performance. According to experimental data, the suggested CNN-based edge-enabled driving system outperforms conventional autonomous driving techniques in terms of navigation accuracy, achieving an RMSE of 15.123 and a loss value of 2.114. The results show how edge computing may improve vehicle autonomy and reduce computational delay, opening the door for more effective smart driving systems. To better evaluate the system's suitability for dynamic situations, future studies will incorporate real-world validation.
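
A minimal PyTorch sketch of the kind of CNN described, mapping a stacked grayscale/optical-flow input to a three-element control vector (steering, throttle, brake), is given below. The layer sizes and two-channel input are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class ControlCNN(nn.Module):
        # Two-channel input: one grayscale frame and one optical-flow magnitude map.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
                nn.Linear(64, 3),          # [steering, throttle, brake]
            )

        def forward(self, x):
            return self.head(self.features(x))

    if __name__ == "__main__":
        model = ControlCNN()
        frames = torch.randn(8, 2, 128, 128)   # a batch of preprocessed edge-node frames
        controls = model(frames)
        loss = nn.functional.mse_loss(controls, torch.zeros_like(controls))
        print(controls.shape, float(loss))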

RevDate: 2025-09-30

Kario K, Asayama K, Arima H, et al (2025)

Digital hypertension - what we need for the high-quality management of hypertension in the new era.

Hypertension research : official journal of the Japanese Society of Hypertension [Epub ahead of print].

Digital technologies are playing an increasing role in hypertension management. Digital hypertension is a new field that integrates advancing technologies into hypertension management. This research area encompasses various aspects of digital transformation technologies, including the development of novel blood pressure (BP) measurement devices (whether cuffless or cuff-based sensors), the transmission of large-scale time-series BP data, cloud-based computing and analysis of BP indices, presentation of the results, and feedback systems for both patients and physicians. A key component of this approach is novel BP monitoring devices. Novel BP monitoring includes cuffless devices that estimate BP, but cuffless devices require achieving accuracy without the need for calibration using conventional cuff-based devices. New BP monitoring devices can provide information on novel biomarkers beyond BP and may improve risk assessment and outcomes. Integration of BP data with omics and clinical information should enable personalized hypertension management. Key data gaps relating to novel BP monitoring devices are accuracy/validation in different settings/populations, the association between BP metrics and hard clinical outcomes, and the measurement/interpretation of BP variability data. Human- and health system-related factors also need to be addressed or overcome before these devices can be successfully integrated into routine clinical practice. If these goals can be achieved, new BP monitoring technologies could transform hypertension management and play a pivotal role in the future of remote healthcare. This article summarizes the latest information and discussions about digital hypertension from the Digital Hypertension symposium that took place during the 2024 Japan Society of Hypertension scientific meeting.

RevDate: 2025-09-30

Alamro H, Albouq SS, Khan J, et al (2025)

An intelligent deep representation learning with enhanced feature selection approach for cyberattack detection in internet of things enabled cloud environment.

Scientific reports, 15(1):34013.

Users of computer networks can take advantage of cloud computing (CC), a relatively new concept that provides features such as processing, in addition to storing and sharing data. CC is attracting global investment due to its services, while the IoT faces rising advanced cyberattacks, making its cybersecurity crucial to protect privacy and digital assets. A significant challenge for intrusion detection systems (IDS) is detecting complex and hidden malware, as attackers use advanced evasion techniques to bypass conventional security measures. At the cutting edge of cybersecurity is artificial intelligence (AI), which is applied to develop composite models that protect systems and networks, including Internet of Things (IoT) systems. AI-based deep learning (DL) is highly effective in detecting cybersecurity threats. This paper presents an Intelligent Hybrid Deep Learning Method for Cyber Attack Detection Using an Enhanced Feature Selection Technique (IHDLM-CADEFST) for IoT-enabled cloud networks. The aim is to strengthen IoT cybersecurity by identifying key threats and developing effective detection and mitigation strategies. Initially, the data pre-processing phase uses the standard scaler method to convert input data into a suitable format. Furthermore, the feature selection (FS) strategy is implemented using the recursive feature elimination with information gain (RFE-IG) model to detect the most pertinent features and prevent overfitting. Finally, a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model is employed for attack classification, utilizing the RMSprop optimizer to enhance the performance and efficiency of the classification process. The IHDLM-CADEFST approach is evaluated on the ToN-IoT and Edge-IIoT datasets. The comparative analysis shows that the IHDLM-CADEFST approach yields superior accuracy values of 99.45% and 99.19% on the two datasets compared to recent models.
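
The RFE-IG feature-selection stage can be approximated with a short sketch: score the remaining features by mutual information (an estimate of information gain) and recursively drop the weakest ones. The dataset is synthetic, scikit-learn's mutual_info_classif stands in for the information-gain measure, and the downstream CNN-LSTM classifier is omitted.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif

    def rfe_information_gain(X, y, n_keep=10, drop_per_round=5):
        # Repeatedly score the remaining features with mutual information
        # (an information-gain estimate) and drop the weakest ones.
        remaining = list(range(X.shape[1]))
        while len(remaining) > n_keep:
            scores = mutual_info_classif(X[:, remaining], y, random_state=0)
            order = np.argsort(scores)                       # weakest first
            n_drop = min(drop_per_round, len(remaining) - n_keep)
            remaining = [remaining[i] for i in sorted(order[n_drop:])]
        return remaining

    if __name__ == "__main__":
        X, y = make_classification(n_samples=500, n_features=40,
                                   n_informative=8, random_state=0)
        selected = rfe_information_gain(X, y, n_keep=10)
        print("selected feature indices:", selected)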

RevDate: 2025-09-30

He M, Zhou N, Peng H, et al (2025)

A Multivariate Cloud Workload Prediction Method Integrating Convolutional Nonlinear Spiking Neural Model with Bidirectional Long Short-Term Memory.

International journal of neural systems [Epub ahead of print].

Multivariate workload prediction in cloud computing environments is a critical research problem. Effectively capturing inter-variable correlations and temporal patterns in multivariate time series is key to addressing this challenge. To this end, this paper proposes a convolutional model based on a Nonlinear Spiking Neural P System (ConvNSNP), which enhances the ability to process nonlinear data compared to conventional convolutional models. Building upon this, a hybrid forecasting model is developed by integrating ConvNSNP with a Bidirectional Long Short-Term Memory (BiLSTM) network. ConvNSNP is first employed to extract temporal and cross-variable dependencies from the multivariate time series, followed by BiLSTM to further strengthen long-term temporal modeling. Comprehensive experiments are conducted on three public cloud workload traces from Alibaba and Google. The proposed model is compared with a range of established deep learning approaches, including CNN, RNN, LSTM, TCN, and hybrid models such as LSTNet, CNN-GRU, and CNN-LSTM. Experimental results on three public datasets demonstrate that our proposed model achieves up to 9.9% improvement in RMSE and 11.6% improvement in MAE compared with the most effective baseline methods. The model also achieves favorable performance in terms of MAPE, further validating its effectiveness in multivariate workload prediction.

RevDate: 2025-09-30
CmpDate: 2025-09-30

Labayle O, Roskams-Hieter B, Slaughter J, et al (2024)

Semiparametric efficient estimation of small genetic effects in large-scale population cohorts.

Biostatistics (Oxford, England), 26(1):.

Population genetics seeks to quantify DNA variant associations with traits or diseases, as well as interactions among variants and with environmental factors. Computing millions of estimates in large cohorts, in which small effect sizes and tight confidence intervals are expected, necessitates minimizing model-misspecification bias to increase power and control false discoveries. We present TarGene, a unified statistical workflow for the semi-parametric efficient and double robust estimation of genetic effects, including $k$-point interactions among categorical variables, in the presence of confounding and weak population dependence. $k$-point interactions, or Average Interaction Effects (AIEs), are a direct generalization of the usual average treatment effect (ATE). We estimate genetic effects with cross-validated and/or weighted versions of Targeted Minimum Loss-based Estimators (TMLE) and One-Step Estimators (OSE). The effect of dependence among data units on variance estimates is corrected by using sieve plateau variance estimators based on genetic relatedness across the units. We present extensive realistic simulations to demonstrate power, coverage, and control of type I error. Our motivating application is the targeted estimation of genetic effects on traits, including two-point and higher-order gene-gene and gene-environment interactions, in large-scale genomic databases such as UK Biobank and All of Us. All cross-validated and/or weighted TMLE and OSE for the AIE $k$-point interaction, as well as ATEs, conditional ATEs and functions thereof, are implemented in the general purpose Julia package TMLE.jl. For high-throughput applications in population genomics, we provide the open-source Nextflow pipeline and software TarGene, which integrates seamlessly with modern high-performance and cloud computing platforms.
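
For orientation, the average treatment effect (ATE) and its two-point average interaction effect (AIE) generalization can be written in potential-outcome notation roughly as follows; this is a standard formulation, and the paper's exact estimands and notation may differ:

    \mathrm{ATE} = \mathbb{E}\left[ Y(1) - Y(0) \right]
    \mathrm{AIE}_{2} = \mathbb{E}\left[ Y(1,1) - Y(1,0) - Y(0,1) + Y(0,0) \right]

Higher-order $k$-point AIEs apply the same alternating-difference construction across $k$ treatment variables.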

RevDate: 2025-09-29

Ala'anzy MA, Abilakim A, Zhanuzak R, et al (2025)

Real time smart parking system based on IoT and fog computing evaluated through a practical case study.

Scientific reports, 15(1):33483.

The increasing urban population and the growing preference for private transportation have led to a significant rise in vehicle numbers, exacerbating traffic congestion and parking challenges. Cruising for parking not only consumes time and fuel but also contributes to environmental and energy inefficiencies. Smart parking systems have emerged as essential solutions to these issues, addressing everyday urban challenges and enabling the development of smart, sustainable cities. By reducing traffic congestion and streamlining parking processes, these systems promote eco-friendly and efficient urban transportation. This paper introduces a provenance-based smart parking system leveraging fog computing to enhance real-time parking space management and resource allocation. The proposed system employs a hierarchical fog architecture with four layers of nodes for efficient data storage, transfer, and resource utilisation. The provenance component empowers users with real-time insights into parking availability, facilitating informed decision-making. Simulations conducted using the iFogSim2 toolkit evaluated the system across key metrics, including end-to-end latency, execution cost, execution time, network usage, and energy consumption in both fog and cloud-based environments. A comparative analysis demonstrates that the fog-based approach significantly outperforms its cloud-based counterpart in terms of efficiency and responsiveness. Additionally, the system minimises network usage and optimises space utilisation, reducing the need for parking area expansion. A real-world case study from SDU University Park validated the proposed system, showcasing its effectiveness in managing parking spaces, particularly during peak hours.

RevDate: 2025-09-29
CmpDate: 2025-09-29

Yao S, Yu T, Ramos AFV, et al (2025)

Toward smart and in-situ mycotoxin detection in food via vibrational spectroscopy and machine learning.

Food chemistry: X, 31:103016.

Recent advances in vibrational spectroscopy combined with machine learning are enabling smart and in-situ detection of mycotoxins in complex food matrices. Infrared and spontaneous Raman spectroscopy detect molecular vibrations or compositional changes in host matrices, capturing direct or indirect mycotoxin fingerprints, while surface-enhanced Raman spectroscopy (SERS) amplifies characteristic mycotoxin molecular vibrations via plasmonic nanostructures, enabling ultra-sensitive detection. Machine learning further enhances analysis by extracting subtle and unique mycotoxin spectral features from information-rich spectra, suppressing noise, and enabling robust predictions across heterogeneous samples. This review critically examines recent sensing strategies, model development, application performance, non-destructive screening, and potential application challenges, highlighting strengths and limitations relative to conventional methods. Innovations in portable, miniaturized spectrometers integrated with cloud computation are also discussed, supporting scalable, rapid, and on-site mycotoxin monitoring. By integrating state-of-the-art vibrational fingerprints with computational analysis, these approaches provide a pathway toward sensitive, smart, and field-deployable mycotoxin detection in food.

RevDate: 2025-09-27

Mangalampalli SS, Reddy PV, Reddy Karri G, et al (2025)

Priority-Aware Multi-Objective Task Scheduling in Fog Computing Using Simulated Annealing.

Sensors (Basel, Switzerland), 25(18): pii:s25185744.

The number of IoT devices has been increasing at a rapid rate, and the advent of information-intensive Internet of Multimedia Things (IoMT) applications has placed serious challenges on computing infrastructure, especially for latency, energy efficiency, and responsiveness to tasks. The legacy cloud-centric approach cannot meet such requirements because it suffers from high latency and centralized resource allocation. To overcome such limitations, fog computing proposes a decentralized model, bringing computation closer to data sources and thereby reducing latency. However, effective scheduling of tasks within heterogeneous and resource-limited fog environments is still an NP-hard problem, especially in multi-criteria optimization and priority-sensitive situations. This research work proposes a new simulated annealing (SA)-based task scheduling framework to perform multi-objective optimization for fog computing environments. The proposed model minimizes makespan, energy consumption, and execution cost, and integrates a priority-aware penalty function to provide high responsiveness to high-priority tasks. The SA algorithm searches the scheduling solution space by accepting potentially sub-optimal configurations during the initial iterations and further improving towards optimality as the temperature decreases. Experimental analyses on benchmark datasets obtained from Google Cloud Job Workloads demonstrate that the proposed approach outperforms the ACO, PSO, I-FASC, and M2MPA approaches in terms of makespan, energy consumption, execution cost, and reliability at all task volume scales. These results confirm the proposed SA-based scheduler as a scalable and effective solution for smart task scheduling within fog-enabled IoT infrastructures.
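
The scheduling idea, simulated annealing over a weighted multi-objective cost with a priority-aware penalty, can be sketched as follows. The cost weights, penalty rule, workload values, and cooling schedule are illustrative assumptions, not the paper's configuration.

    import math
    import random

    random.seed(0)
    N_TASKS, N_NODES = 30, 5
    task_len = [random.randint(100, 1000) for _ in range(N_TASKS)]      # MI
    priority = [random.choice([0, 1]) for _ in range(N_TASKS)]          # 1 = high priority
    node_speed = [random.randint(500, 1500) for _ in range(N_NODES)]    # MIPS
    node_power = [random.uniform(5, 20) for _ in range(N_NODES)]        # W

    def cost(assign):
        node_time = [0.0] * N_NODES
        energy = penalty = 0.0
        for t, n in enumerate(assign):
            exec_t = task_len[t] / node_speed[n]
            node_time[n] += exec_t
            energy += exec_t * node_power[n]
            if priority[t] and node_speed[n] < 1000:    # toy penalty: high-priority task on a slow node
                penalty += exec_t
        makespan = max(node_time)
        return 1.0 * makespan + 0.1 * energy + 2.0 * penalty   # illustrative weights

    def simulated_annealing(T=10.0, cooling=0.95, steps=2000):
        current = [random.randrange(N_NODES) for _ in range(N_TASKS)]
        cur_c = cost(current)
        best, best_c = current[:], cur_c
        for _ in range(steps):
            cand = current[:]
            cand[random.randrange(N_TASKS)] = random.randrange(N_NODES)  # move one task
            cand_c = cost(cand)
            # Accept better solutions always; worse ones with Boltzmann probability.
            if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / T):
                current, cur_c = cand, cand_c
                if cur_c < best_c:
                    best, best_c = current[:], cur_c
            T = max(T * cooling, 1e-3)
        return best, best_c

    if __name__ == "__main__":
        schedule, c = simulated_annealing()
        print("best cost:", round(c, 2))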

RevDate: 2025-09-27

Arockiyadoss MA, Yao CK, Liu PC, et al (2025)

Spectral Demodulation of Mixed-Linewidth FBG Sensor Networks Using Cloud-Based Deep Learning for Land Monitoring.

Sensors (Basel, Switzerland), 25(18): pii:s25185627.

Fiber Bragg grating (FBG) sensing systems face significant challenges in resolving overlapping spectral signatures when multiple sensors operate within limited wavelength ranges, severely limiting sensor density and network scalability. This study introduces a novel Transformer-based neural network architecture that effectively resolves spectral overlap in both uniform and mixed-linewidth FBG sensor arrays, operating under bidirectional drift. The system uniquely combines dual-linewidth configurations with reflection and transmission mode fusion to enhance demodulation accuracy and sensing capacity. By integrating cloud computing, the model enables scalable deployment and near-real-time inference even in large-scale monitoring environments. The proposed approach supports self-healing functionality through dynamic switching between spectral modes during fiber breaks and enhances resilience against spectral congestion. Comprehensive evaluation across twelve drift scenarios demonstrates exceptional demodulation performance under severe spectral overlap conditions that challenge conventional peak-finding algorithms. This breakthrough establishes a new paradigm for high-density, distributed FBG sensing networks applicable to land monitoring, soil stability assessment, groundwater detection, maritime surveillance, and smart agriculture.

RevDate: 2025-09-27

Wang Y, Tang Z, Qian G, et al (2025)

A Prototype of a Lightweight Structural Health Monitoring System Based on Edge Computing.

Sensors (Basel, Switzerland), 25(18): pii:s25185612.

Bridge Structural Health Monitoring (BSHM) is vital for assessing structural integrity and operational safety. Traditional wired systems are limited by high installation costs and complexity, while existing wireless systems still face issues with cost, synchronization, and reliability. Moreover, cloud-based methods for extreme event detection struggle to meet real-time and bandwidth constraints in edge environments. To address these challenges, this study proposes a lightweight wireless BSHM system based on edge computing, enabling local data acquisition and real-time intelligent detection of extreme events. The system consists of wireless sensor nodes for front-end acceleration data collection and an intelligent hub for data storage, visualization, and earthquake recognition. Acceleration data are converted into time-frequency images to train a MobileNetV2-based model. With model quantization and Neural Processing Unit (NPU) acceleration, efficient on-device inference is achieved. Experiments on a laboratory steel bridge verify the system's high acquisition accuracy, precise clock synchronization, and strong anti-interference performance. Compared with inference on a general-purpose ARM CPU running the unquantized model, the quantized model deployed on the NPU achieves a 26× speedup in inference, a 35% reduction in power consumption, and less than 1% accuracy loss. This solution provides a cost-effective, reliable BSHM framework for small-to-medium-sized bridges, offering local intelligence and rapid response with strong potential for real-world applications.
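
The conversion of acceleration records into time-frequency images for the CNN can be sketched as follows. A plain spectrogram stands in for whatever transform the authors used, and the sampling rate, image size, and crude resize step are assumptions; MobileNetV2 training and NPU quantization are not reproduced.

    import numpy as np
    from scipy.signal import spectrogram

    def accel_to_tf_image(signal: np.ndarray, fs: float, size: int = 96) -> np.ndarray:
        # Time-frequency representation of an acceleration record, scaled to an
        # 8-bit grayscale image that a lightweight CNN (e.g. MobileNetV2) could ingest.
        f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
        img = 10 * np.log10(Sxx + 1e-12)                 # dB scale
        img = (img - img.min()) / (img.max() - img.min())
        img = (img * 255).astype(np.uint8)
        # Crude resize by index sampling (placeholder for a proper image resize).
        rows = np.linspace(0, img.shape[0] - 1, size).astype(int)
        cols = np.linspace(0, img.shape[1] - 1, size).astype(int)
        return img[np.ix_(rows, cols)]

    if __name__ == "__main__":
        fs = 200.0                                        # Hz, typical accelerometer rate
        tvec = np.arange(0, 60, 1 / fs)
        accel = 0.05 * np.sin(2 * np.pi * 2.5 * tvec) + 0.01 * np.random.randn(tvec.size)
        print(accel_to_tf_image(accel, fs).shape)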

RevDate: 2025-09-26

Reddy CL, K Malathi (2025)

Revolutionary hybrid ensembled deep learning model for accurate and robust side-channel attack detection in cloud computing.

Scientific reports, 15(1):32949.

Cryptographic systems are essential for securing sensitive information but are increasingly susceptible to side-channel attacks (SCAs) that exploit physical data leakages. In cloud computing environments, where resources are shared across multiple tenants, detecting SCAs becomes particularly challenging due to increased noise and complex data patterns. This study aims to develop a robust detection model for SCAs in cloud environments, leveraging deep learning techniques to capture the multi-dimensional characteristics of power traces while ensuring scalability and accuracy. We propose a hybrid ensembled deep learning (HEDL) model that integrates convolutional neural networks (CNN), long short-term memory (LSTM) networks, and AutoEncoders, enhanced by an attention mechanism to focus on the most critical data segments. The model was trained and evaluated on the ASCAD dataset, a benchmark dataset for SCA research, and implemented in a cloud environment to assess real-time detection capabilities. The HEDL model achieved a detection accuracy of 98.65%, significantly outperforming traditional machine learning and standalone deep learning models in both clean and noisy data conditions. The attention mechanism improved the model's focus on key data segments, reducing computational demands and enhancing detection precision. The proposed HEDL model demonstrates superior robustness and accuracy in SCA detection within noisy cloud environments, marking a significant advancement in cloud-based cryptographic security.

RevDate: 2025-09-25
CmpDate: 2025-09-26

Adebangbe SA, Dixon DP, B Barrett (2025)

Evaluating contaminated land and the environmental impact of oil spills in the Niger Delta region: a remote sensing-based approach.

Environmental monitoring and assessment, 197(10):1149.

The Niger Delta region of Nigeria is a major oil-producing area which experiences frequent oil spills that severely impact the local environment and communities. Effective environmental monitoring and management remain inadequate in this area due to negligence, slow response times following oil spills, and difficulties regarding access and safety. This study investigates the pervasive issue of oil spills in the Niger Delta region by employing a remote sensing approach, leveraging geospatial cloud computing and machine learning to evaluate vegetation health indices (SR, SR2, NDVI, EVI2, GRNDVI, GNDVI) derived from PlanetScope satellite data. These indices were analysed using Slow Moving Average regression, which revealed significant declines in vegetation health following oil spill events. The contaminated landcovers exhibit a Spearman's correlation coefficient (ρ) ranging from -0.68 to -0.82 (P < 0.005), with P-values below 0.05 in most landcover categories, suggesting a clear and consistent downward trend in the indices' values, reflecting a decrease in vegetation health in contaminated areas between 2016 and 2023. A random forest classifier further quantified the extent of contaminated land cover, demonstrating the effectiveness of this method for monitoring environmental damage in this challenging terrain. Contaminated vegetation, wetland, farmland, and grassland cover approximately 4% (1180 ha) of the total Niger Delta area. This integrated approach will enable decision-makers, including government agencies and oil companies, to gain a deeper understanding of the environmental consequences of oil pollution and implement targeted mitigation and remediation strategies.
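
The vegetation-index trend analysis can be illustrated with a small synthetic sketch: compute NDVI from red and near-infrared reflectance, smooth it with a moving average (standing in for the Slow Moving Average regression), and test for a monotonic decline with Spearman's ρ. The data below are simulated, not the study's PlanetScope observations.

    import numpy as np
    from scipy.stats import spearmanr

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        return (nir - red) / (nir + red)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        months = np.arange(96)                              # 2016-2023, monthly
        nir = 0.45 - 0.0015 * months + rng.normal(0, 0.02, months.size)  # slow decline
        red = 0.10 + 0.0005 * months + rng.normal(0, 0.01, months.size)
        series = ndvi(nir, red)
        smoothed = np.convolve(series, np.ones(6) / 6, mode="valid")     # moving average
        rho, p = spearmanr(np.arange(smoothed.size), smoothed)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")     # negative rho -> declining vegetation health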

RevDate: 2025-09-25

Schlenz MA, Chillemi L, B Wöstmann (2025)

Clinical Study on the Accuracy of Wireless Intraoral Scanners for Digital Full Arch Impressions of Dentate Arches.

Journal of dentistry pii:S0300-5712(25)00578-0 [Epub ahead of print].

OBJECTIVE: The aim of this clinical study was to update the literature on the scan accuracy (trueness and precision) of four modern wireless intraoral scanners (IOS) and to compare their performance with wired IOS and conventional impressions (CVI). A metallic reference aid was employed as the reference dataset.

METHODS: Digital impressions were obtained from four wireless IOS (Dexis IS 3800W, Medit i700, Primescan 2, and Trios 5), one wired IOS (Primescan AC), and one CVI in thirty patients. Scan data were analysed using 3D software, and CVI dental stone casts were evaluated using a coordinate measuring machine. Scan accuracy between the reference aid and the various impression systems was compared. Statistical analysis was performed using mixed-effects ANOVA models, with significance set at p < 0.05.

RESULTS: Statistically significant differences in trueness and precision were observed between the impression systems (p < 0.05). A significant interaction between impression system and linear distance (p < 0.05) indicated that performance varied depending on the length of scan path. The Dexis IS 3800W and Medit i700 exhibited the greatest deviations, whereas the cloud-native Primescan 2 demonstrated comparable or superior accuracy to other impression systems.

CONCLUSIONS: Within the limitations of this clinical study, the overall accuracy of CVI remained high. Accuracy was influenced by both the impression system and the length of the scan path, with smaller deviations observed over short distances and increased inaccuracies over longer distances, particularly in diagonal and intermolar regions.

CLINICAL SIGNIFICANCE: Wireless IOS demonstrated statistically significant differences in certain cases, highlighting the importance of carefully evaluating the performance of each system individually.

RevDate: 2025-09-25
CmpDate: 2025-09-25

Ahmad SZ, Qamar F, Alshehri H, et al (2025)

A GAN-Based Approach for enhancing security in satellite based IoT networks using MPI enabled HPC.

PloS one, 20(9):e0331019 pii:PONE-D-25-23842.

Satellite-based Internet of Things (IoT) networks are becoming increasingly critical for mission-critical applications, including disaster recovery, environmental surveillance, and remote sensing. While becoming more widespread, they are also more vulnerable to various risks, particularly due to the heterogeneous communication technologies they support and the limited computing capacity on each device. When such IoT systems are connected with central High-Performance Computing (HPC) clouds, particularly by satellite links, new security issues arise, the primary one being the secure transmission of confidential information. To overcome such challenges, this research proposes a new security framework termed DLGAN (Deep Learning-based Generative Adversarial Network), specially designed for satellite-based IoT scenarios. The model leverages the strengths of Convolutional Neural Networks (CNNs) for real-time anomaly detection, combined with Generative Adversarial Networks (GANs) to generate realistic synthetic attack data, thereby addressing the challenge of skewed datasets prevalent in cybersecurity research. Since training GANs may be computationally expensive, the model is optimized to run on an HPC system via the Message Passing Interface (MPI) to enable scalable parallel processing of huge volumes of IoT data. Fundamentally, the DLGAN model is based on a generator/discriminator mechanism for effectively distinguishing network traffic as either benign or malicious, with the capability to detect 14 different types of attacks. By harnessing AI-enabled GPUs in the HPC cloud, the system can provide fast and accurate detection while maintaining low computational costs. Experimental evaluations demonstrate that the framework significantly enhances detection accuracy, reduces training time, and scales well with large data volumes, making it highly suitable for real-time security operations. Overall, this study highlights how integrating advanced deep learning technologies with HPC-based distributed environments can deliver an efficient and dynamic defense mechanism for contemporary IoT networks. The envisaged solution is unique in its ability to scale, maximize efficiency, and resist attacks while securing satellite-based IoT infrastructures.
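
The MPI-enabled training idea, each rank processing its own data shard while updates are averaged across ranks, can be sketched with mpi4py as below. The "gradient" is a dummy placeholder for the DLGAN generator/discriminator losses, and the data shapes and learning rate are illustrative only.

    # Run with e.g.:  mpiexec -n 4 python mpi_training_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Root creates the full traffic dataset and splits it into per-rank shards.
    if rank == 0:
        data = np.random.randn(4000, 32).astype(np.float64)
        shards = np.array_split(data, size)
    else:
        shards = None
    shard = comm.scatter(shards, root=0)

    weights = np.zeros(32)                     # shared "model" parameters
    for step in range(10):
        # Dummy local gradient: in the real system this would come from the
        # GAN losses computed on this rank's shard.
        local_grad = shard.mean(axis=0) - weights
        global_grad = np.empty_like(local_grad)
        comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
        weights += 0.1 * (global_grad / size)  # averaged update, identical on all ranks

    if rank == 0:
        print("final weight norm:", np.linalg.norm(weights))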

RevDate: 2025-09-25

Lee Y, Chen R, S Bhattacharyya (2025)

An Online Learning Framework for Neural Decoding in Embedded Neuromodulation Systems.

Brain connectivity [Epub ahead of print].

Introduction: Advancements in brain-computer interfaces (BCIs) have improved real-time neural signal decoding, enabling adaptive closed-loop neuromodulation. These systems dynamically adjust stimulation parameters based on neural biomarkers, enhancing treatment precision and adaptability. However, existing neuromodulation frameworks often depend on high-power computational platforms, limiting their feasibility for portable, real-time applications. Methods: We propose RONDO (Recursive Online Neural DecOding), a resource-efficient neural decoding framework that employs dynamic updating schemes in online learning with recurrent neural networks (RNNs). RONDO supports simple RNNs, long short-term memory networks, and gated recurrent units, allowing flexible adaptation to different signal types, accuracy requirements, and real-time constraints. Results: Experimental results show that RONDO's adaptive model updating improves neural decoding accuracy by 35% to 45% compared to offline learning. Additionally, RONDO operates within the real-time constraints of neuroimaging devices without requiring cloud-based or high-performance computing. Its dynamic updating scheme ensures high accuracy with minimal updates, improving energy efficiency and robustness in resource-limited settings. Conclusions: RONDO presents a scalable, adaptive, and energy-efficient solution for real-time closed-loop neuromodulation, eliminating reliance on cloud computing. Its flexibility makes it a promising tool for clinical and research applications, advancing personalized neurostimulation and adaptive BCIs.

RevDate: 2025-09-24

Mehrtabar S, Marey A, Desai A, et al (2025)

Ethical Considerations in Patient Privacy and Data Handling for AI in Cardiovascular Imaging and Radiology.

Journal of imaging informatics in medicine [Epub ahead of print].

The integration of artificial intelligence (AI) into cardiovascular imaging and radiology offers the potential to enhance diagnostic accuracy, streamline workflows, and personalize patient care. However, the rapid adoption of AI has introduced complex ethical challenges, particularly concerning patient privacy, data handling, informed consent, and data ownership. This narrative review explores these issues by synthesizing literature from clinical, technical, and regulatory perspectives. We examine the tensions between data utility and data protection, the evolving role of transparency and explainable AI, and the disparities in ethical and legal frameworks across jurisdictions such as the European Union, the USA, and emerging players like China. We also highlight the vulnerabilities introduced by cloud computing, adversarial attacks, and the use of commercial datasets. Ethical frameworks and regulatory guidelines are compared, and proposed mitigation strategies such as federated learning, blockchain, and differential privacy are discussed. To ensure ethical implementation, we emphasize the need for shared accountability among clinicians, developers, healthcare institutions, and policymakers. Ultimately, the responsible development of AI in medical imaging must prioritize patient trust, fairness, and equity, underpinned by robust governance and transparent data stewardship.

RevDate: 2025-09-24
CmpDate: 2025-09-24

Chen Y, Chan WH, Su ELM, et al (2025)

Multi-objective optimization for smart cities: a systematic review of algorithms, challenges, and future directions.

PeerJ. Computer science, 11:e3042.

With the growing complexity and interdependence of urban systems, multi-objective optimization (MOO) has become a critical tool for smart-city planning, sustainability, and real-time decision-making. This article presents a systematic literature review (SLR) of 117 peer-reviewed studies published between 2015 and 2025, assessing the evolution, classification, and performance of MOO techniques in smart-city contexts. Existing algorithms are organised into four families (bio-inspired, mathematical theory-driven, physics-inspired, and machine-learning-enhanced) and benchmarked for computational efficiency, scalability, and scenario suitability across six urban domains: infrastructure, energy, transportation, Internet of Things (IoT)/cloud systems, agriculture, and water management. While established methods such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) remain prevalent, hybrid frameworks that couple deep learning with evolutionary search display superior adaptability in high-dimensional, dynamic environments. Persistent challenges include limited cross-domain generalisability, inadequate uncertainty handling, and low interpretability of artificial intelligence (AI)-assisted models. Twelve research gaps are synthesised, ranging from privacy-preserving optimisation and sustainable trade-off resolution to integration with digital twins, large language models, and neuromorphic computing, and a roadmap towards scalable, interpretable, and resilient optimisation frameworks is outlined. Finally, a ready-to-use benchmarking toolkit and a deployment-oriented algorithm-selection matrix are provided to guide researchers, engineers, and policy-makers in real-world smart-city applications. This review targets interdisciplinary researchers, optimisation developers, and smart-city practitioners seeking to apply or advance MOO techniques in complex urban systems.

RevDate: 2025-09-24
CmpDate: 2025-09-24

Huang W, Tian H, Wang L, et al (2025)

SA3C-ID: a novel network intrusion detection model using feature selection and adversarial training.

PeerJ. Computer science, 11:e3089.

With the continuous proliferation of emerging technologies such as cloud computing, 5G networks, and the Internet of Things, the field of cybersecurity is facing an increasing number of complex challenges. Network intrusion detection systems, as a fundamental part of network security, have become increasingly significant. However, traditional intrusion detection methods exhibit several limitations, including insufficient feature extraction from network data, high model complexity, and data imbalance, which result in issues like low detection efficiency, as well as frequent false positives and missed alarms. To address the above issues, this article proposes an adversarial intrusion detection model (Soft Adversarial Asynchronous Actor-Critic Intrusion Detection, SA3C-ID) based on reinforcement learning. Firstly, the raw dataset is preprocessed via one-hot encoding and standardization. Subsequently, the refined data undergoes feature selection employing an improved pigeon-inspired optimizer (PIO) algorithm. This operation eliminates redundant and irrelevant features, consequently reducing data dimensionality while maintaining critical information. Next, the network intrusion detection process is modeled as a Markov decision process and integrated with the Soft Actor-Critic (SAC) reinforcement learning algorithm to construct the agents. In the context of adversarial training, two agents, designated as the attacker and the defender, are defined to perform asynchronous adversarial training. During this training process, both agents calculate the reward value, update their respective strategies, and transfer parameters based on the classification results. Finally, to verify the robustness and generalization ability of the SA3C-ID model, ablation experiments and comparative evaluations are conducted on two benchmark datasets, NSL-KDD and CSE-CIC-IDS2018. The experimental results demonstrate that SA3C-ID exhibits superior performance in comparison to other prevalent intrusion detection models. The F1-score attained by SA3C-ID was 92.58% and 98.76% on the NSL-KDD and CSE-CIC-IDS2018 datasets, respectively.

RevDate: 2025-09-24
CmpDate: 2025-09-24

Jenifer P, J Angela Jennifa Sujana (2025)

Quality of experience-aware application deployment in fog computing environments using machine learning.

PeerJ. Computer science, 11:e3143.

Edge intelligence is fast becoming indispensable as billions of sensors demand real-time inference without saturating backbone links or exposing sensitive data in remote data centres, supported by emerging artificial intelligence (AI)-edge boards such as NVIDIA boards with 16 GB RAM and microcontrollers with an on-chip neural processing unit (NPU) (<1 W). This article introduces the Energy-Smart Component Placement (ESCP) algorithm, which organises fog devices such as fog cluster manager nodes (FCMNs) and fog nodes (FNs), allocates modules to fog devices, and saves energy by deactivating inactive devices; the framework transparently distributes compressed neural workloads across serverless cloud, fog, and extreme edge layers while upholding application-level quality of service (QoS) and quality of experience (QoE). To optimize the deployment of AI workloads on fog edge devices as a service (FEdaaS), this work aims to provide a reliable and dynamic architecture that guarantees QoS and QoE. The framework combines two machine learning (ML) methods, fusing eXtreme Gradient Boosting (XGB)-based instantaneous QoS scoring with long short-term memory (LSTM) forecasting of node congestion, and a meta-heuristic scheduler that uses XGB for instantaneous QoS scoring and LSTM for short-horizon load forecasting. Compared with a cloud-only baseline, ESCP improved bandwidth utilization by 5.2%, scalability (requests per second) by 3.2%, energy consumption by 3.8%, and response time by 2.1% while maintaining prediction accuracy within +0.4%. The results confirm that low-resource AI-edge devices, when orchestrated through our adaptive framework, can meet QoE targets such as 250 ms latency and 24 h of battery life. Future work will explore federated on-device learning to enhance data privacy, extend the scheduler to neuromorphic processors, and validate the architecture in real-time intensive care and smart city deployments.

RevDate: 2025-09-23

Castilla-Puentes R, Isidoro AF, Orosito A, et al (2025)

Perinatal bereavement rooms: a narrative review of physical space in perinatal grief.

Archives of gynecology and obstetrics [Epub ahead of print].

BACKGROUND: Perinatal loss is a profoundly complex form of grief, often linked to heightened risk of prolonged bereavement and adverse mental health outcomes. Perinatal grief rooms-private, supportive spaces within healthcare settings-aim to help families process their loss, spend time with their baby, and create meaningful memories in a respectful environment. While bereavement care has received growing attention, the role of the physical environment in supporting grief remains underexplored.

OBJECTIVE: To synthesize current evidence on how dedicated physical spaces can support individuals and families after perinatal loss, and to identify priorities for research, design standards, and interdisciplinary collaboration.

METHODS: A narrative review was conducted in accordance with PRISMA-ScR guidelines. Literature searches were performed across PubMed, PsycINFO, Medline (OVID), Embase, ScienceDirect, SCOPUS, SciELO, and Google Scholar using terms such as "perinatal grief rooms", "bereavement rooms", "angel suites", "butterfly suites", "snowdrop suites", "cloud rooms", "designated units for perinatal loss", and "birthing + bereavement suites". The review examined (1) the current role of physical spaces in the perinatal loss experience, and (2) how their availability and design may influence grief outcomes.

RESULTS: Of the 17 articles meeting inclusion criteria, only 4 (24%) referenced bereavement rooms, and just 3 (18%) noted the need for formal protocols, without offering concrete examples. No studies evaluated implementation, design standards, or measurable impact on grief, mental health, or family well-being. This lack of empirical evidence and standardized guidance underscores a critical gap that limits integration of therapeutic environments into perinatal bereavement care.

CONCLUSION: Despite increasing recognition of the importance of bereavement care, dedicated grief rooms remain under-researched and inconsistently implemented. Advancing this field will require rigorously designed studies, development of design standards, and collaborative partnerships among healthcare providers, researchers, policymakers, and design experts to ensure equitable access to therapeutic spaces for grieving families.

RevDate: 2025-09-23

Ying X, Zhang Q, Jiang H, et al (2025)

High isolation, low inter-channel interference, eight-channel LAN-WDM SiPh transceiver for reliable Tbps transmission.

Optics express, 33(16):34052-34067.

The rapid growth of artificial intelligence (AI) inference, training, and cloud computing has driven continuous demands for data transmission bandwidth and rate, enlarging the scale and number of modern data centers. However, high-speed, long-reach (LR) (∼10 km) data center interconnection (DCI) faces significant performance degradation caused by device nonlinearity, optical link loss, channel interference, and related impairments when adopting a wavelength-division multiplexing (WDM) architecture. This work establishes an 8-channel multiplexer (MUX)/demultiplexer (DeMUX)-based optoelectronic transceiver scheme with high isolation, low inter-channel interference, and polarization-insensitive features to minimize the four-wave mixing (FWM) interference for reliable Tbps DCI transmission. What we believe to be a novel scheme is applied to an elaborately designed 8-channel intensity modulation direct detection (IM-DD) silicon photonic (SiPh) transceiver system for the LR8 Tbps DCI-Campus (∼10 km transmission) scenario. Experimental results demonstrate significant performance improvement, reaching 200 Gbps with a total 1.1 Tbps transmission rate, ultra-high channel isolation (>45 dB), thorough polarization-insensitive inter-channel interference suppression, a high signal-to-noise ratio (SNR), and good channel response uniformity.

RevDate: 2025-09-22

Glatt-Holtz NE, Holbrook AJ, Krometis JA, et al (2024)

Parallel MCMC algorithms: theoretical foundations, algorithm design, case studies.

Transactions of mathematics and its applications : a journal of the IMA, 8(2):.

Parallel Markov Chain Monte Carlo (pMCMC) algorithms generate clouds of proposals at each step to efficiently resolve a target probability distribution μ. We build a rigorous foundational framework for pMCMC algorithms that situates these methods within a unified 'extended phase space' measure-theoretic formalism. Drawing on our recent work that provides a comprehensive theory for reversible single-proposal methods, we herein derive general criteria for multiproposal acceptance mechanisms that yield ergodic chains on general state spaces. Our formulation encompasses a variety of methodologies, including proposal cloud resampling and Hamiltonian methods, while providing a basis for the derivation of novel algorithms. In particular, we obtain a top-down picture for a class of methods arising from 'conditionally independent' proposal structures. As an immediate application of this formalism, we identify several new algorithms including a multiproposal version of the popular preconditioned Crank-Nicolson (pCN) sampler suitable for high- and infinite-dimensional target measures that are absolutely continuous with respect to a Gaussian base measure. To supplement the aforementioned theoretical results, we carry out a selection of numerical case studies that evaluate the efficacy of these novel algorithms. First, noting that the true potential of pMCMC algorithms arises from their natural parallelizability and the ease with which they map to modern high-performance computing architectures, we provide a limited parallelization study using TensorFlow and a graphics processing unit to scale pMCMC algorithms that leverage as many as 100k proposals at each step. Second, we use our multiproposal pCN algorithm (mpCN) to resolve a selection of problems in Bayesian statistical inversion for partial differential equations motivated by fluid measurement. These examples provide preliminary evidence of the efficacy of mpCN for high-dimensional target distributions featuring complex geometries and multimodal structures.
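
As a toy illustration of the multiproposal idea, and not the paper's pCN-based construction, the sketch below implements an independence-type multiproposal step in the spirit of iterated sampling importance resampling: a cloud of proposals is drawn from a fixed Gaussian, the current state is kept in the candidate pool, and the next state is resampled with probability proportional to importance weights π/q. The target, proposal, and cloud size are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    def log_target(x):
        # Toy bimodal target: mixture of two unit-variance Gaussians at +/-2.
        return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

    def log_proposal(x):
        # Fixed wide Gaussian proposal, independent of the current state.
        return -0.5 * (x / 4.0) ** 2 - np.log(4.0)

    def multiproposal_step(x, n_prop=16):
        pool = np.concatenate([[x], rng.normal(0.0, 4.0, size=n_prop)])  # current + proposal cloud
        log_w = log_target(pool) - log_proposal(pool)                    # importance weights
        w = np.exp(log_w - log_w.max())
        return rng.choice(pool, p=w / w.sum())       # resample next state from the pool

    if __name__ == "__main__":
        chain = np.empty(5000)
        chain[0] = 0.0
        for i in range(1, chain.size):
            chain[i] = multiproposal_step(chain[i - 1])
        print("mean ~ 0, abs-mean ~ 2:", chain.mean().round(2), np.abs(chain).mean().round(2))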

RevDate: 2025-09-22
CmpDate: 2025-09-22

Gershkovich P (2025)

Wearing a fur coat in the summertime: Should digital pathology redefine medical imaging?.

Journal of pathology informatics, 18:100450.

Slides are data. Once digitized, they function like any enterprise asset: accessible anywhere, ready for AI, and integrated into cloud workflows. But in pathology, they enter a realm of clinical complexity-demanding systems that handle nuance, integrate diverse data streams, scale effectively, enable computational exploration, and enforce rigorous security. Although the Digital Imaging and Communications in Medicine (DICOM) standard revolutionized radiology, it is imperative to explore its adequacy in addressing modern digital pathology's orchestration needs. Designed more than 30 years ago, DICOM reflects assumptions and architectural choices that predate modular software, cloud computing, and AI-driven workflows. This article shows that by embedding metadata, annotations, and communication protocols into a unified container, DICOM limits interoperability and exposes architectural vulnerabilities. The article begins by examining these innate design risks, then challenges DICOM's interoperability claims, and ultimately presents a modular, standards-aligned alternative. The article argues that separating image data from orchestration logic improves scalability, security, and performance. Standards such as HL7 FHIR (Health Level Seven Fast Healthcare Interoperability Resources) and modern databases manage clinical metadata; formats like Scalable Vector Graphics handle annotations; and fast, cloud-native file transfer protocols, and microservices support tile-level image access. This separation of concerns allows each component to evolve independently, optimizes performance across the system, and better adapts to emerging AI-driven workflows-capabilities that are inherently constrained in monolithic architectures where these elements are tightly coupled. It further shows that security requirements should not be embedded within the DICOM standard itself. Instead, security must be addressed through a layered, format-independent framework that spans systems, networks, applications, and data governance. Security is not a discrete feature but an overarching discipline-defined by its own evolving set of standards and best practices. Overlays such as those outlined in the National Institute of Standards and Technology SP 800-53 support modern Transport Layer Security, single sign-on, cryptographic hashing, and other controls that protect data streams without imposing architectural constraints or restricting technological choices. Pathology stands at a rare inflection point. Unlike radiology, where DICOM is deeply entrenched, pathology workflows still operate in polyglot environments-leveraging proprietary formats, hybrid standards, and emerging cloud-native tools. This diversity, often seen as a limitation, offers a clean slate: an opportunity to architect a modern, modular infrastructure free from legacy constraints. While a full departure from DICOM is unnecessary, pathology is uniquely positioned to prototype the future-to define a more flexible, secure, and interoperable model that other domains in medical imaging may one day follow. With support from forward-looking DICOM advocates, pathology can help reshape not just its own infrastructure, but the trajectory of medical imaging itself.

RevDate: 2025-09-22
CmpDate: 2025-09-22

Demattê JAM, Poppiel RR, Novais JJM, et al (2025)

Frontiers in earth observation for global soil properties assessment linked to environmental and socio-economic factors.

Innovation (Cambridge (Mass.)), 6(9):100985.

Soil has garnered global attention for its role in food security and climate change. Fine-scale soil-mapping techniques are urgently needed to support food, water, and biodiversity services. A global soil dataset integrated into an Earth observation system and supported by cloud computing enabled the development of the first global soil grid of six key properties at a 90-m spatial resolution. Assessing them from environmental and socio-economic perspectives, we demonstrated that 64% of the world's topsoils are primarily sandy, with low fertility and high susceptibility to degradation. These conditions limit crop productivity and highlight potential risks to food security. Results reveal that approximately 900 Gt of soil organic carbon (SOC) is stored up to 20 cm deep. Arid biomes store three times more SOC than mangroves based on total areas. SOC content in agricultural soils is reduced by at least 60% compared to soils under natural vegetation. Most agricultural areas are being fertilized while simultaneously experiencing a depletion of the carbon pool. By integrating soil capacity with economic and social factors, we highlight the critical role of soil in supporting societal prosperity. The top 10 largest countries in area per continent store 75% of the global SOC stock. However, the poorest countries face rapid organic matter degradation. We indicate an interconnection between societal growth and spatially explicit mapping of soil properties. This soil-human nexus establishes a geographically based link between soil health and human development. It underscores the importance of soil management in enhancing agricultural productivity and promotes sustainable-land-use planning.

RevDate: 2025-09-22
CmpDate: 2025-09-22

Thapa N, Nepali S, Shrestha R, et al (2025)

Time series flood mapping using the Copernicus dataset in Google Earth Engine of the Mountainous Region.

Data in brief, 62:112010.

In mountainous countries like Nepal, floods are a major challenge due to complex topography, intense snowmelt, and highly variable monsoon rainfall that drive frequent flooding events. This study focuses on the Hilly and Himalayan regions of Nepal, where flood monitoring and risk management are increasingly important for safeguarding vulnerable communities and infrastructure. It presents a high-resolution, time-series flood extent dataset derived from Copernicus Sentinel-2 Level-2A imagery at a 10-meter spatial resolution, covering the years 2019 to 2023. Flood mapping was performed using the Normalized Difference Vegetation Index (NDVI) combined with region-specific thresholding. NDVI values below 0 represent open water, while values between 0 and 0.1 often indicate mud or bare soil. A threshold of NDVI <0.019 was applied to identify flood-affected areas in the hilly region to capture debris-flow-type floods, whereas NDVI <0 was used for the Himalayan region, because the presence of snow and water complicated classification due to their spectral similarity with other features. Snow-covered areas were masked using the Copernicus Global Land Cover dataset to improve accuracy in high-altitude zones. Data processing was performed on the Google Earth Engine (GEE) platform. Monsoon-season image composites were generated after applying cloud masking using the Scene Classification Layer (SCL), and temporal cloud gaps were filled using post-monsoon imagery to ensure continuous temporal data. The resulting flood extent maps reveal consistent spatial patterns and provide critical data for flood forecasting, risk-sensitive land use planning, and interdisciplinary studies. Despite challenges with cloud interference and complex terrain, this dataset offers valuable insights into flood dynamics across Nepal's mountainous landscape.
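
A minimal Google Earth Engine (Python API) sketch in the spirit of the workflow described above, assuming the harmonized Sentinel-2 Level-2A collection: SCL-based cloud masking, a monsoon-season composite, NDVI computation, and the two region-specific thresholds. The area of interest, date range, and masked SCL classes are illustrative placeholders, not the dataset's exact processing chain.

    import ee

    ee.Initialize()

    aoi = ee.Geometry.Rectangle([85.0, 27.5, 86.0, 28.5])    # hypothetical area in Nepal
    s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
          .filterBounds(aoi)
          .filterDate('2023-06-01', '2023-09-30'))           # monsoon season

    def mask_clouds(img):
        scl = img.select('SCL')
        # drop cloud shadow (3), cloud medium/high probability (8, 9) and cirrus (10)
        keep = scl.neq(3).And(scl.neq(8)).And(scl.neq(9)).And(scl.neq(10))
        return img.updateMask(keep)

    composite = s2.map(mask_clouds).median()
    ndvi = composite.normalizedDifference(['B8', 'B4']).rename('NDVI')

    flood_hilly = ndvi.lt(0.019)   # hilly-region threshold reported in the paper
    flood_himalayan = ndvi.lt(0)   # Himalayan-region threshold reported in the paper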

RevDate: 2025-09-19
CmpDate: 2025-09-19

Jang WD, Gu C, Noh Y, et al (2025)

ChemBounce: a computational framework for scaffold hopping in drug discovery.

Bioinformatics (Oxford, England), 41(9):.

SUMMARY: Scaffold hopping is a critical strategy in medicinal chemistry for generating novel and patentable drug candidates. Here, we present ChemBounce, a computational framework designed to facilitate scaffold hopping by generating structurally diverse scaffolds with high synthetic accessibility. Given a user-supplied molecule in SMILES format, ChemBounce identifies the core scaffolds and replaces them using a curated in-house library of over 3 million fragments derived from the ChEMBL database. The generated compounds are evaluated based on Tanimoto and electron shape similarities to ensure retention of pharmacophores and potential biological activity. By enabling systematic exploration of unexplored chemical space, ChemBounce represents a valuable tool for hit expansion and lead optimization in modern drug discovery.

The source code for ChemBounce is available at https://github.com/jyryu3161/chembounce. In addition, a cloud-based implementation of ChemBounce is available as a Google Colaboratory notebook.
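
The Tanimoto-similarity filter mentioned in the summary can be illustrated with RDKit. This is not ChemBounce code, only a sketch of how a query molecule and a scaffold-hopped candidate might be compared using Morgan fingerprints; the SMILES strings are arbitrary examples.

    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    # Arbitrary example molecules: a query and one hypothetical scaffold-hopped candidate
    query = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')
    candidate = Chem.MolFromSmiles('CC(=O)Oc1ccncc1C(=O)O')

    fp_query = AllChem.GetMorganFingerprintAsBitVect(query, radius=2, nBits=2048)
    fp_cand = AllChem.GetMorganFingerprintAsBitVect(candidate, radius=2, nBits=2048)

    # Candidates below a chosen similarity cutoff could be discarded during hit expansion
    print('Tanimoto similarity:', DataStructs.TanimotoSimilarity(fp_query, fp_cand))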

RevDate: 2025-09-19

Li X, Wood AR, Yuan Y, et al (2025)

Streamlining large-scale genomic data management: Insights from the UK Biobank whole-genome sequencing data.

Cell genomics pii:S2666-979X(25)00265-4 [Epub ahead of print].

Biobank-scale whole-genome sequencing (WGS) studies are increasingly pivotal in unraveling the genetic bases of diverse health outcomes. However, the sheer volume and complexity of these datasets present significant management and analysis challenges. We highlight the annotated genomic data structure (aGDS) format, which substantially reduces the WGS data file size while enabling seamless integration of genomic and functional information for comprehensive WGS analyses. The aGDS format yielded 23 chromosome-specific files for the UK Biobank 500k WGS dataset, occupying only 1.10 tebibytes of storage. We develop the vcf2agds toolkit, which streamlines the conversion of WGS data from VCF to aGDS format. Additionally, the STAARpipeline equipped with the aGDS files enabled scalable, comprehensive, and functionally informed WGS analysis, facilitating the detection of common and rare coding and noncoding phenotype-genotype associations. Overall, the vcf2agds toolkit and STAARpipeline provide a streamlined solution that facilitates efficient data management and analysis of biobank-scale WGS data across hundreds of thousands of samples.

RevDate: 2025-09-19

Wang J, Garthwaite MC, Wang C, et al (2025)

Development of a Multi-Sensor GNSS-IoT System for Precise Water Surface Elevation Measurement.

Sensors (Basel, Switzerland), 25(11): pii:s25113566.

The Global Navigation Satellite System (GNSS), Internet of Things (IoT) and cloud computing technologies enable high-precision positioning with flexible data communication, making real-time/near-real-time monitoring more economical and efficient. In this study, a multi-sensor GNSS-IoT system was developed for measuring precise water surface elevation (WSE). The system, which includes ultrasonic and accelerometer sensors, was deployed on a floating platform in Googong reservoir, Australia, over a four-month period in 2024. WSE data derived from the system were compared against independent reference measurements from the reservoir operator, achieving an accuracy of 7 mm for 6 h averaged solutions and 28 mm for epoch-by-epoch solutions. The results demonstrate the system's potential for remote, autonomous WSE monitoring and its suitability for validating satellite Earth observation data, particularly from the Surface Water and Ocean Topography (SWOT) mission. Despite environmental challenges such as moderate gale conditions, the system maintained robust performance, with over 90% of solutions meeting quality assurance standards. This study highlights the advantages of combining the GNSS with IoT technologies and multiple sensors for cost-effective, long-term WSE monitoring in remote and dynamic environments. Future work will focus on optimizing accuracy and expanding applications to diverse aquatic settings.

RevDate: 2025-09-19
CmpDate: 2025-09-19

Thang DV, Volkov A, Muthanna A, et al (2025)

Future of Telepresence Services in the Evolving Fog Computing Environment: A Survey on Research and Use Cases.

Sensors (Basel, Switzerland), 25(11): pii:s25113488.

With the continuing development of technology, telepresence services have emerged as an essential part of modern communication systems. Concurrently, the rapid growth of fog computing presents new opportunities and challenges for integrating telepresence capabilities into distributed networks. Fog computing is a component of the cloud computing model that is used to meet the diverse computing needs of applications in the emergence and development of fifth- and sixth-generation (5G and 6G) networks. The incorporation of fog computing into this model provides benefits that go beyond the traditional model. This survey investigates the convergence of telepresence services with fog computing, evaluating the latest research developments and practical use cases. This study examines the changes brought about by the 6G network as well as the promising future directions of 6G. This study presents the concepts of fog computing and its basic structure. We analyze Cisco's model and propose an alternative model to address its weaknesses. Additionally, this study synthesizes, analyzes, and evaluates a body of articles on remote presence services from major bibliographic databases. In summary, this work thoroughly reviews current research on telepresence services and fog computing for the future.

RevDate: 2025-09-19

Sun H, Xu R, Luo J, et al (2025)

Review of the Application of UAV Edge Computing in Fire Rescue.

Sensors (Basel, Switzerland), 25(11): pii:s25113304.

The use of unmanned aerial vehicles (UAVs) attracts significant attention, especially in fire emergency rescue, where UAVs serve as indispensable tools. In fire rescue scenarios, the rapid increase in the amount of data collected and transmitted by sensors poses significant challenges to traditional methods of data storage and computing. Sensor-data processing utilizing UAV edge computing technology is emerging as a research hotspot in this field and aims to address the challenges of data preprocessing and feature analysis during fire emergency rescue. This review first analyzes fire-rescue scenarios involving UAVs, including forest fires, high-rise building fires, chemical plant fires, and mine fires. Then it discusses the current status of UAV edge computing technology and its application to integrating sensor data in fire emergency rescue, analyzes the advantages and disadvantages of UAV use in fire scenarios, and identifies challenges faced during UAV operations in environments with no GNSS signal. Finally, based on the analysis of fire emergency-rescue scenarios, this review argues that compared with centralized computing centers and cloud computing, distributed UAV edge computing technology based on sensor data exhibits higher mobility and timeliness and is more adaptable to the urgent nature of emergency rescue. This review also seeks to provide support and reference for the research and development of UAV edge technology.

RevDate: 2025-09-18
CmpDate: 2025-09-18

Bilal M, Shah AA, Abbas S, et al (2025)

High-Performance Deep Learning for Instant Pest and Disease Detection in Precision Agriculture.

Food science & nutrition, 13(9):e70963.

Global farm productivity is constantly under attack from pests and diseases, resulting in massive crop loss and food insecurity. Manual scouting, expert estimation, and laboratory-based microscopy are time-consuming, prone to human error, and labor-intensive. Although traditional machine learning classifiers such as SVM, Random Forest, and Decision Trees provide better accuracy, they are not field deployable. This article presents a high-performance deep learning fusion model using MobileNetV2 and EfficientNetB0 for real-time detection of pests and diseases in precision farming. The model, trained on the CCMT dataset (24,881 original and 102,976 augmented images in 22 classes of cashew, cassava, maize, and tomato crops), attained a global accuracy of 89.5%, precision and recall of 95.68%, F1-score of 95.67%, and ROC-AUC of 0.95. For supporting deployment in edge environments, methods such as quantization, pruning, and knowledge distillation were employed to decrease inference time to below 10 ms per image. The suggested model is superior to baseline CNN models, including ResNet-50 (81.25%), VGG-16 (83.10%), and other edge lightweight models (83.00%). The optimized model is run on low-power devices such as smartphones, Raspberry Pi, and farm drones without the need for cloud computing, allowing real-time detection in far-off fields. Field trials using drones validated rapid image capture and inference performance. This study delivers a scalable, cost-effective, and accurate early pest and disease detection framework for sustainable agriculture and supporting food security at the global level. The model has been successfully implemented with TensorFlow Lite within Android applications and Raspberry Pi systems.
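
A hedged sketch of a two-backbone fusion classifier and post-training quantization of the kind described above, using the public Keras application models and the TensorFlow Lite converter. The input size, fusion-by-concatenation design, class count, and output file name are assumptions rather than the authors' exact architecture, and the compression shown here is plain dynamic-range quantization rather than the full pruning and distillation pipeline.

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(224, 224, 3))
    mobile = tf.keras.applications.MobileNetV2(
        include_top=False, pooling='avg', weights=None, input_shape=(224, 224, 3))(inputs)
    effnet = tf.keras.applications.EfficientNetB0(
        include_top=False, pooling='avg', weights=None, input_shape=(224, 224, 3))(inputs)

    fused = tf.keras.layers.Concatenate()([mobile, effnet])
    outputs = tf.keras.layers.Dense(22, activation='softmax')(fused)  # 22 CCMT classes
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

    # Post-training quantization for edge deployment (smartphone, Raspberry Pi, drone)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open('pest_disease_model.tflite', 'wb') as f:
        f.write(converter.convert())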

RevDate: 2025-09-17

Zolfagharinejad M, Büchel J, Cassola L, et al (2025)

Analogue speech recognition based on physical computing.

Nature [Epub ahead of print].

With the rise of decentralized computing, such as in the Internet of Things, autonomous driving and personalized healthcare, it is increasingly important to process time-dependent signals 'at the edge' efficiently: right at the place where the temporal data are collected, avoiding time-consuming, insecure and costly communication with a centralized computing facility (or 'cloud'). However, modern-day processors often cannot meet the restrained power and time budgets of edge systems because of intrinsic limitations imposed by their architecture (von Neumann bottleneck) or domain conversions (analogue to digital and time to frequency). Here we propose an edge temporal-signal processor based on two in-materia computing systems for both feature extraction and classification, reaching near-software accuracy for the TI-46-Word[1] and Google Speech Commands[2] datasets. First, a nonlinear, room-temperature reconfigurable-nonlinear-processing-unit[3,4] layer realizes analogue, time-domain feature extraction from the raw audio signals, similar to the human cochlea. Second, an analogue in-memory computing chip[5], consisting of memristive crossbar arrays, implements a compact neural network trained on the extracted features for classification. With submillisecond latency, reconfigurable-nonlinear-processing-unit-based feature extraction consuming roughly 300 nJ per inference, and the analogue in-memory computing-based classifier using around 78 µJ (with potential for roughly 10 µJ)[6], our findings offer a promising avenue for advancing the compactness, efficiency and performance of heterogeneous smart edge processors through in materia computing hardware.

RevDate: 2025-09-16

Zhao Z, Zhang H, Li R, et al (2025)

Revisiting Transferable Adversarial Images: Systemization, Evaluation, and New Insights.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

Transferable adversarial images raise critical security concerns for computer vision systems in real-world, black-box attack scenarios. Although many transfer attacks have been proposed, existing research lacks a systematic and comprehensive evaluation. In this paper, we systemize transfer attacks into five categories around the general machine learning pipeline and provide the first comprehensive evaluation, with 23 representative attacks against 11 representative defenses, including the recent, transfer-oriented defense and the real-world Google Cloud Vision. In particular, we identify two main problems of existing evaluations: (1) for attack transferability, lack of intra-category analyses with fair hyperparameter settings, and (2) for attack stealthiness, lack of diverse measures. Our evaluation results validate that these problems have indeed caused misleading conclusions and missing points, and addressing them leads to new, consensus-challenging insights, such as (1) an early attack, DI, even outperforms all similar follow-up ones, (2) the state-of-the-art (white-box) defense, DiffPure, is even vulnerable to (black-box) transfer attacks, and (3) even under the same Lp constraint, different attacks yield dramatically different stealthiness results regarding diverse imperceptibility metrics, finer-grained measures, and a user study. We hope that our analyses will serve as guidance on properly evaluating transferable adversarial images and advance the design of attacks and defenses.

RevDate: 2025-09-15

Moharam MH, Ashraf K, Alaa H, et al (2025)

Real-time detection of Wi-Fi attacks using hybrid deep learning models on NodeMCU.

Scientific reports, 15(1):32544.

This paper presents a real-time, lightweight system for detecting Wi-Fi deauthentication (DA) attacks that uses the NodeMCU ESP8266 microcontroller for live packet sniffing and feature extraction. Tailored for low-power IoT environments, the system combines the sequential learning capabilities of long short-term memory (LSTM), gated recurrent unit (GRU), and recurrent neural network (RNN) models with the interpretability of logistic regression (LR). These hybrid models analyze Wi-Fi traffic in real time to detect anomalous behavior based on key metrics such as Received Signal Strength Indicator (RSSI), DA, packet count, and signal-to-noise ratio (SNR), which are also displayed live on an OLED screen. The proposed framework uniquely integrates hybrid temporal deep learning with interpretable classification via LR in an ultra-low-cost embedded data acquisition setup built on NodeMCU, addressing a gap in existing intrusion detection research that often focuses on either cloud-based processing or non-interpretable models. The system was trained and validated on a dataset of over 5,600 labeled samples collected under varied network conditions. Among the evaluated models, GRU_LR achieved the highest accuracy (96%) and demonstrated superior performance in identifying minority-class threats. By combining explainable AI with cost-effective embedded sensing, this work delivers a practical and transparent intrusion detection approach that can be readily adapted to diverse IoT and wireless security contexts.
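
The GRU_LR idea, a recurrent feature extractor feeding an interpretable logistic-regression classifier, might be sketched as follows. The window length, feature count, and synthetic data are placeholders, and the GRU is left untrained here purely to keep the example short; in practice it would first be trained on the labeled traffic windows.

    import numpy as np
    import tensorflow as tf
    from sklearn.linear_model import LogisticRegression

    WINDOW, FEATURES = 20, 4   # e.g. RSSI, deauth frames, packet count, SNR per time step

    # GRU encoder turns each traffic window into a compact temporal feature vector
    encoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
        tf.keras.layers.GRU(16),
    ])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, WINDOW, FEATURES)).astype('float32')  # stand-in traffic windows
    y = rng.integers(0, 2, size=400)                                # 1 = deauthentication attack

    # Interpretable classifier on top of the learned temporal features
    features = encoder.predict(X, verbose=0)
    clf = LogisticRegression(max_iter=1000).fit(features, y)
    print('attack probability:', clf.predict_proba(features[:1])[0, 1])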

RevDate: 2025-09-15

Ayouni S, Khan MH, Ibrahim M, et al (2025)

IoT-based Approach for Diabetes Patient Monitoring Using Machine Learning.

SLAS technology pii:S2472-6303(25)00106-2 [Epub ahead of print].

This study presents an IoT-based framework for real-time diabetes monitoring and management, addressing key limitations identified in previous studies by integrating four datasets: BVH Dataset, PIMA Diabetes Dataset, Simulated Dataset, and an Integrated Dataset. The proposed approach ensures diverse demographic representation and a wide range of features including real-time vital signs (e.g., oxygen saturation, pulse rate, temperature) and subjective variables (e.g., skin color, moisture, consciousness level). Advanced preprocessing techniques, including Kalman Filtering for noise reduction, KNN imputation for addressing missing data, and SMOTE-ENN for improving data quality and class balance, were employed. These methods resulted in a 25% improvement in Recall and a 20% increase in the F1-score, demonstrating the model's effectiveness and robustness. By applying PCA and SHAP for feature engineering, high-impact features were identified, enabling the tuning of models such as Random Forest, SVM, and Logistic Regression, which achieved an accuracy of 97% and an F1-score of 0.98. A novel triage system, integrated with edge and cloud computing, classifies health status in real-time (Green, Yellow, Red, Black), reducing latency by 35%. The proposed system sets a new benchmark for scalable, individualized diabetes care in IoT-based healthcare solutions, demonstrating significant improvements in accuracy, response time, and feature incorporation compared to prior works.
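
A minimal sketch of the preprocessing chain named above, KNN imputation followed by SMOTE-ENN resampling, using scikit-learn and imbalanced-learn. The synthetic features and the downstream random forest are stand-ins for the study's real vital-sign data and tuned models.

    import numpy as np
    from sklearn.impute import KNNImputer
    from imblearn.combine import SMOTEENN
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))                   # stand-in vital-sign features
    X[rng.random(X.shape) < 0.05] = np.nan          # simulate missing sensor readings
    y = rng.integers(0, 2, size=500)                # stand-in risk labels

    X_imputed = KNNImputer(n_neighbors=5).fit_transform(X)
    X_balanced, y_balanced = SMOTEENN(random_state=0).fit_resample(X_imputed, y)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_balanced, y_balanced)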

RevDate: 2025-09-15

Osei-Wusu F, Asiedu W, Yeboah D, et al (2025)

Leveraging Information Technology tools to create cost-effective alternatives: Using Google Sheets as a platform for competitive debate and public speaking tabulation.

PloS one, 20(9):e0332576 pii:PONE-D-24-60341.

Traditional web-based debate tabulation systems like Tabbycat offer robust features but often pose high costs and accessibility barriers that limit participation and the smooth organization of events. In this work, we present Tab XYZ, a novel debate and public-speaking tabulation platform built on Google Sheets, as a cost-effective alternative to conventional systems. We built Tab XYZ on cloud-based features such as Google Apps Script automation, Google Forms for data input, and real-time collaboration to replicate core functionalities of standard tabulation software without the need for dedicated servers or paid licenses. The proposed system was evaluated in five tournaments with a total of 435 participants and compared against a popular web-based platform on key metrics including setup time, user satisfaction, reliability, and error handling. Results indicate that Tab XYZ eliminated all licensing and hosting costs while achieving user satisfaction scores (overall average 4.7 out of 5) comparable to the conventional system (4.6 out of 5). Tab XYZ also demonstrated robust data security and offline-capable error recovery by leveraging Google's infrastructure. These findings illustrate a viable pathway to leverage readily available IT tools like spreadsheets and cloud services to create innovative solutions for specialized domains, avoiding the cost and complexity barriers of traditional approaches.

RevDate: 2025-09-15

Gomase VS (2025)

Cybersecurity, Research Data Management (RDM), and Regulatory Compliance in Clinical Trials.

Reviews on recent clinical trials pii:RRCT-EPUB-150556 [Epub ahead of print].

INTRODUCTION: The intersection of drug discovery and cybersecurity is becoming critical as the pharmaceutical sector adopts digital technologies to drive research and development. Drug discovery entails extensive collaboration and large volumes of data, making it highly susceptible to cyberattacks. Emerging technologies, such as big data analytics, artificial intelligence (AI), and cloud computing, hold significant innovation potential but also pose risks to the industry that can undermine intellectual property (IP), clinical trial results, and collaborative research. This review discusses the importance of cybersecurity in the drug discovery process. The focus is on determining major threats, defining best practices for protecting sensitive information, and ensuring compliance with regulatory requirements. The objective is to highlight the strategic significance of cybersecurity practices in protecting research integrity and fostering innovation.

METHODS: The review-based approach is employed to analyze present-day trends in drug discovery cybersecurity. Emerging technologies, security issues, regulatory needs, and the security controls most frequently utilized in the industry, such as encryption, multi-factor authentication, and secure data sharing, are discussed in the chapter.

RESULTS: The pharmaceutical sector has advanced significantly in securing sensitive research information through robust cybersecurity measures. However, the vulnerabilities remain for cloud security as well as for protecting AI models. Adhering to the regulatory guidelines of GDPR (General Data Protection Regulation) and HIPAA (Health Insurance Portability and Accountability Act) remains a concern as international norms evolve.

DISCUSSION: As digital technologies transform drug discovery, cybersecurity has become crucial in protecting sensitive data and intellectual property rights. Strengthening compliance with evolving regulations is key to ensuring safety and innovative pharmaceutical research.

CONCLUSION: Cybersecurity is critical in preserving the integrity of drug discovery. With the increasing adoption of digital technologies, pharmaceutical firms must implement robust cybersecurity measures to protect sensitive information, ensure compliance, and foster innovation in a secure environment.

RevDate: 2025-09-13

Liang F (2025)

Decentralized and Network-Aware Task Offloading for Smart Transportation via Blockchain.

Sensors (Basel, Switzerland), 25(17): pii:s25175555.

As intelligent transportation systems (ITSs) evolve rapidly, the increasing computational demands of connected vehicles call for efficient task offloading. Centralized approaches face challenges in scalability, security, and adaptability to dynamic network conditions. To address these issues, we propose a blockchain-based decentralized task offloading framework with network-aware resource allocation and tokenized economic incentives. In our model, vehicles generate computational tasks that are dynamically mapped to available computing nodes-including vehicle-to-vehicle (V2V) resources, roadside edge servers (RSUs), and cloud data centers-based on a multi-factor score considering computational power, bandwidth, latency, and probabilistic packet loss. A blockchain transaction layer ensures auditable and secure task assignment, while a proof-of-stake (PoS) consensus and smart-contract-driven dynamic pricing jointly incentivize participation and balance workloads to minimize delay. In extensive simulations reflecting realistic ITS dynamics, our approach reduces total completion time by 12.5-24.3%, achieves a task success rate of 84.2-88.5%, improves average resource utilization to 88.9-92.7%, and sustains >480 transactions per second (TPS) with a 10 s block interval, outperforming centralized/cloud-based baselines. These results indicate that integrating blockchain incentives with network-aware offloading yields secure, scalable, and efficient management of computational resources for future ITSs.
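
The multi-factor mapping of tasks to computing nodes could look something like the toy scoring function below; the weights and normalization constants are illustrative assumptions, not the values used in the paper, and the blockchain, consensus, and pricing layers are not shown.

    def node_score(cpu_gflops, bandwidth_mbps, latency_ms, loss_prob,
                   w_cpu=0.4, w_bw=0.3, w_lat=0.2, w_loss=0.1):
        """Toy multi-factor score: higher is better. Weights are illustrative."""
        return (w_cpu * cpu_gflops / 100.0
                + w_bw * bandwidth_mbps / 1000.0
                - w_lat * latency_ms / 100.0
                - w_loss * loss_prob)

    candidates = {
        'v2v_peer': node_score(20, 100, 5, 0.05),
        'rsu_edge': node_score(80, 300, 10, 0.02),
        'cloud_dc': node_score(500, 1000, 60, 0.01),
    }
    best_node = max(candidates, key=candidates.get)   # the task is offloaded to this node
    print(best_node, candidates[best_node])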

RevDate: 2025-09-13

Dembski J, Wiszniewski B, A Kołakowska (2025)

Anomaly Detection and Segmentation in Measurement Signals on Edge Devices Using Artificial Neural Networks.

Sensors (Basel, Switzerland), 25(17): pii:s25175526.

In this paper, three alternative solutions to the problem of detecting and cleaning anomalies in soil signal time series, involving the use of artificial neural networks deployed on in situ data measurement end devices, are proposed and investigated. These models are designed to perform calculations on MCUs, characterized by significantly limited computing capabilities and a limited supply of electrical power. Training of neural network models is carried out based on data from multiple sensors in the supporting computing cloud instance, while detection and removal of anomalies with a trained model takes place on the constrained end devices. With such a distribution of work, it is necessary to achieve a sound compromise between prediction accuracy and the computational complexity of the detection process. In this study, neural-primed heuristic (NPH), autoencoder-based (AEB), and U-Net-based (UNB) approaches were tested, which were found to vary regarding both prediction accuracy and computational complexity. Labeled data were used to train the models, transforming the detection task into an anomaly segmentation task. The obtained results reveal that the UNB approach presents certain advantages; however, it requires a significant volume of training data and has a relatively high time complexity which, in turn, translates into increased power consumption by the end device. For this reason, the other two approaches-NPH and AEB-may be worth considering as reasonable alternatives when developing in situ data cleaning solutions for IoT measurement systems.
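
A compact sketch of the autoencoder-based (AEB) idea: a small dense autoencoder is trained on clean signal windows in the supporting cloud instance, and a reconstruction-error threshold then flags anomalous windows on the device. The window length, layer sizes, synthetic data, and threshold rule are assumptions; a deployable MCU version would additionally be quantized.

    import numpy as np
    import tensorflow as tf

    WINDOW = 32   # length of each soil-signal window (assumption)
    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW,)),
        tf.keras.layers.Dense(8, activation='relu'),        # bottleneck
        tf.keras.layers.Dense(WINDOW, activation='linear'),
    ])
    autoencoder.compile(optimizer='adam', loss='mse')

    # Stand-in for clean (anomaly-free) training windows gathered in the cloud
    x_train = np.random.default_rng(0).normal(size=(1000, WINDOW)).astype('float32')
    autoencoder.fit(x_train, x_train, epochs=5, verbose=0)

    # Reconstruction-error threshold learned from the clean data
    errors = np.mean((autoencoder.predict(x_train, verbose=0) - x_train) ** 2, axis=1)
    threshold = errors.mean() + 3 * errors.std()

    def is_anomalous(window):
        err = np.mean((autoencoder.predict(window[None], verbose=0) - window) ** 2)
        return err > threshold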

RevDate: 2025-09-13

Zhang L, Wu S, Z Wang (2025)

LoRA-INT8 Whisper: A Low-Cost Cantonese Speech Recognition Framework for Edge Devices.

Sensors (Basel, Switzerland), 25(17): pii:s25175404.

To address the triple bottlenecks of data scarcity, oversized models, and slow inference that hinder Cantonese automatic speech recognition (ASR) in low-resource and edge-deployment settings, this study proposes a cost-effective Cantonese ASR system based on LoRA fine-tuning and INT8 quantization. First, Whisper-tiny is parameter-efficiently fine-tuned on the Common Voice zh-HK training set using LoRA with rank = 8. Only 1.6% of the original weights are updated, reducing the character error rate (CER) from 49.5% to 11.1%, a performance close to full fine-tuning (10.3%), while cutting the training memory footprint and computational cost by approximately one order of magnitude. Next, the fine-tuned model is compressed into a 60 MB INT8 checkpoint via dynamic quantization in ONNX Runtime. On a MacBook Pro M1 Max CPU, the quantized model achieves an RTF = 0.20 (offline inference 5 × real-time) and 43% lower latency than the FP16 baseline; on an NVIDIA A10 GPU, it reaches RTF = 0.06, meeting the requirements of high-concurrency cloud services. Ablation studies confirm that the LoRA-INT8 configuration offers the best trade-off among accuracy, speed, and model size. Limitations include the absence of spontaneous-speech noise data, extreme-hardware validation, and adaptive LoRA structure optimization. Future work will incorporate large-scale self-supervised pre-training, tone-aware loss functions, AdaLoRA architecture search, and INT4/NPU quantization, and will establish an mJ/char energy-accuracy curve. The ultimate goal is to achieve CER ≤ 8%, RTF < 0.1, and mJ/char < 1 for low-power real-time Cantonese ASR in practical IoT scenarios.
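
A hedged sketch of the two steps named above: rank-8 LoRA adaptation of Whisper-tiny with the peft library, followed by INT8 dynamic quantization with ONNX Runtime. The target modules, LoRA hyperparameters, and file names are assumptions, and the actual fine-tuning loop and ONNX export (e.g. via optimum) are omitted.

    from transformers import WhisperForConditionalGeneration
    from peft import LoraConfig, get_peft_model
    from onnxruntime.quantization import quantize_dynamic, QuantType

    base = WhisperForConditionalGeneration.from_pretrained('openai/whisper-tiny')
    lora_cfg = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05,
                          target_modules=['q_proj', 'v_proj'])   # assumed target modules
    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()   # only a small fraction of weights is trainable

    # ... fine-tune on Common Voice zh-HK, merge the adapters, export to ONNX ...

    # INT8 dynamic quantization of the exported model for CPU/edge inference
    quantize_dynamic('whisper_tiny_cantonese.onnx',
                     'whisper_tiny_cantonese_int8.onnx',
                     weight_type=QuantType.QInt8)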

RevDate: 2025-09-13

Yu M, Du Y, Zhang X, et al (2025)

Efficient Navigable Area Computation for Underground Autonomous Vehicles via Ground Feature and Boundary Processing.

Sensors (Basel, Switzerland), 25(17): pii:s25175355.

Accurate boundary detection is critical for autonomous trackless rubber-wheeled vehicles in underground coal mines, as it prevents lateral collisions with tunnel walls. Unlike open-road environments, underground tunnels suffer from poor illumination, water mist, and dust, which degrade visual imaging. To address these challenges, this paper proposes a navigable area computation method for underground autonomous vehicles via ground feature and boundary processing, consisting of three core steps. First, a real-time point cloud correction process via pre-correction and dynamic update aligns ground point clouds with the LiDAR coordinate system to ensure parallelism. Second, corrected point clouds are projected onto a 2D grid map using a grid-based method, effectively mitigating the impact of ground unevenness on boundary extraction. Third, an adaptive boundary completion method is designed to resolve boundary discontinuities in junctions and shunting chambers. Additionally, the method emphasizes continuous extraction of boundaries over extended periods by integrating temporal context, ensuring the continuity of boundary detection during vehicle operation. Experiments on real underground vehicle data validate that the method achieves accurate detection and consistent tracking of dual-sided boundaries across straight tunnels, curves, intersections, and shunting chambers, meeting the requirements of underground autonomous driving. This work provides a rule-based, real-time solution feasible under limited computing power, offering critical safety redundancy when deep learning methods fail in harsh underground environments.

RevDate: 2025-09-13

Honarparvar S, Honarparvar Y, Ashena Z, et al (2025)

GICEDCam: A Geospatial Internet of Things Framework for Complex Event Detection in Camera Streams.

Sensors (Basel, Switzerland), 25(17): pii:s25175331.

Complex event detection (CED) adds value to camera stream data in various applications such as workplace safety, task monitoring, security, and health. Recent CED frameworks have addressed the issues of limited spatiotemporal labels and costly training by decomposing the CED into low-level features, as well as spatial and temporal relationship extraction. However, these frameworks suffer from high resource costs, low scalability, and an increased number of false positives and false negatives. This paper proposes GICEDCAM, which distributes CED across edge, stateless, and stateful layers to improve scalability and reduce computation cost. Additionally, we introduce a Spatial Event Corrector component that leverages geospatial data analysis to minimize false negatives and false positives in spatial event detection. We evaluate GICEDCAM on 16 camera streams covering four complex events. Relative to a strong open-source baseline configured for our setting, GICEDCAM reduces end-to-end latency by 36% and total computational cost by 45%, with the advantage widening as objects per frame increase. Among corrector variants, Bayesian Network (BN) yields the lowest latency, Long Short-Term Memory (LSTM) achieves the highest accuracy, and trajectory analysis offers the best accuracy-latency trade-off for this architecture.

RevDate: 2025-09-13

Gong R, Zhang H, Li G, et al (2025)

Edge Computing-Enabled Smart Agriculture: Technical Architectures, Practical Evolution, and Bottleneck Breakthroughs.

Sensors (Basel, Switzerland), 25(17): pii:s25175302.

As the global digital transformation of agriculture accelerates, the widespread deployment of farming equipment has triggered an exponential surge in agricultural production data. Consequently, traditional cloud computing frameworks face critical challenges: communication latency in the field, the demand for low-power devices, and stringent real-time decision constraints. These bottlenecks collectively exacerbate bandwidth constraints, diminish response efficiency, and introduce data security vulnerabilities. In this context, edge computing offers a promising solution for smart agriculture. By provisioning computing resources to the network periphery and enabling localized processing at data sources adjacent to agricultural machinery, sensors, and crops, edge computing leverages low-latency responses, bandwidth optimization, and distributed computation capabilities. This paper provides a comprehensive survey of the research landscape in agricultural edge computing. We begin by defining its core concepts and highlighting its advantages over cloud computing. Subsequently, anchored in the "terminal sensing-edge intelligence-cloud coordination" architecture, we analyze technological evolution in edge sensing devices, lightweight intelligent algorithms, and cooperative communication mechanisms. Additionally, through precision farming, intelligent agricultural machinery control, and full-chain crop traceability, we demonstrate its efficacy in enhancing real-time agricultural decision-making. Finally, we identify adaptation challenges in complex environments and outline future directions for research and development in this field.

RevDate: 2025-09-13

Ali EM, Abawajy J, Lemma F, et al (2025)

Analysis of Deep Reinforcement Learning Algorithms for Task Offloading and Resource Allocation in Fog Computing Environments.

Sensors (Basel, Switzerland), 25(17): pii:s25175286.

Fog computing is increasingly preferred over cloud computing for processing tasks from Internet of Things (IoT) devices with limited resources. However, placing tasks and allocating resources in distributed and dynamic fog environments remains a major challenge, especially when trying to meet strict Quality of Service (QoS) requirements. Deep reinforcement learning (DRL) has emerged as a promising solution to these challenges, offering adaptive, data-driven decision-making in real-time and uncertain conditions. While several surveys have explored DRL in fog computing, most focus on traditional centralized offloading approaches or emphasize reinforcement learning (RL) with limited integration of deep learning. To address this gap, this paper presents a comprehensive and focused survey on the full-scale application of DRL to the task offloading problem in fog computing environments involving multiple user devices and multiple fog nodes. We systematically analyze and classify the literature based on architecture, resource allocation methods, QoS objectives, offloading topology and control, optimization strategies, DRL techniques used, and application scenarios. We also introduce a taxonomy of DRL-based task offloading models and highlight key challenges, open issues, and future research directions. This survey serves as a valuable resource for researchers by identifying unexplored areas and suggesting new directions for advancing DRL-based solutions in fog computing. For practitioners, it provides insights into selecting suitable DRL techniques and system designs to implement scalable, efficient, and QoS-aware fog computing applications in real-world environments.

RevDate: 2025-09-13

Xu T, Zou K, Liu C, et al (2025)

Special Issue on Advanced Optical Technologies for Communications, Perception, and Chips.

Sensors (Basel, Switzerland), 25(17): pii:s25175278.

With the iterative upgrade and popular application of new information technologies such as 5G, cloud computing, big data, and artificial intelligence (AI), the global data traffic and the demand for computing power have ushered in explosive growth [...].

RevDate: 2025-09-13

Tasmurzayev N, Amangeldy B, Imanbek B, et al (2025)

Digital Cardiovascular Twins, AI Agents, and Sensor Data: A Narrative Review from System Architecture to Proactive Heart Health.

Sensors (Basel, Switzerland), 25(17): pii:s25175272.

Cardiovascular disease remains the world's leading cause of mortality, yet everyday care still relies on episodic, symptom-driven interventions that detect ischemia, arrhythmias, and remodeling only after tissue damage has begun, limiting the effectiveness of therapy. A narrative review synthesized 183 studies published between 2016 and 2025 that were located through PubMed, MDPI, Scopus, IEEE Xplore, and Web of Science. This review examines CVD diagnostics built on digital cardiovascular twins, which collect data from wearable IoT devices (electrocardiography (ECG), photoplethysmography (PPG), and mechanocardiography), clinical records, laboratory biomarkers, and genetic markers. These data are integrated with artificial intelligence (AI), including machine learning and deep learning together with graph and transformer networks for interpreting multi-dimensional data streams and creating prognostic models; with generative AI, medical large language models (LLMs), and autonomous agents for decision support, personalized alerts, and treatment scenario modeling; and with cloud and edge computing for data processing. This multi-layered architecture enables the detection of silent pathologies long before clinical manifestations, transforming continuous observations into actionable recommendations and shifting cardiology from reactive treatment to predictive and preventive care. Evidence converges on four layers: sensors streaming multimodal clinical and environmental data; hybrid analytics that integrate hemodynamic models with deep, graph, and transformer learning while Bayesian and Kalman filters manage uncertainty; decision support delivered by domain-tuned medical LLMs and autonomous agents; and prospective simulations that trial pacing or pharmacotherapy before bedside use, closing the prediction-intervention loop. This stack flags silent pathology weeks in advance and steers proactive personalized prevention. It also lays the groundwork for software-as-a-medical-device ecosystems and new regulatory guidance for trustworthy AI-enabled cardiovascular care.

RevDate: 2025-09-13

Luo H, Dai S, Hu Y, et al (2025)

Integrating Knowledge-Based and Machine Learning for Betel Palm Mapping on Hainan Island Using Sentinel-1/2 and Google Earth Engine.

Plants (Basel, Switzerland), 14(17): pii:plants14172696.

The betel palm is a critical economic crop on Hainan Island. Accurate and timely maps of betel palms are fundamental for the industry's management and ecological environment evaluation. To date, mapping the spatial distribution of betel palms across a large regional scale remains a significant challenge. In this study, we propose an integrated framework that combines knowledge-based and machine learning approaches to produce a map of betel palms at 10 m spatial resolution based on Sentinel-1/2 data and Google Earth Engine (GEE) for 2023 on Hainan Island, which accounts for 95% of betel nut acreage in China. The forest map was initially delineated based on signature information and the Green Normalized Difference Vegetation Index (GNDVI) acquired from Sentinel-1 and Sentinel-2 data, respectively. Subsequently, patches of betel palms were extracted from the forest map using a random forest classifier and feature selection method via logistic regression (LR). The resultant 10 m betel palm map achieved user's, producer's, and overall accuracy of 86.89%, 88.81%, and 97.51%, respectively. According to the betel palm map in 2023, the total planted area was 189,805 hectares (ha), exhibiting high consistency with statistical data (R[2] = 0.74). The spatial distribution was primarily concentrated in eastern Hainan, reflecting favorable climatic and topographic conditions. The results demonstrate the significant potential of Sentinel-1/2 data for identifying betel palms in complex tropical regions characterized by diverse land cover types, fragmented cultivated land, and frequent cloud and rain interference. This study provides a reference framework for mapping tropical crops, and the findings are crucial for tropical agricultural management and optimization.
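
A brief Earth Engine (Python API) sketch of the classification step described above: a GNDVI band is added to a Sentinel-2 composite and a random forest is trained on labelled sample points. The geometry, training-asset path, band list, and tree count are placeholders, and the knowledge-based forest masking and logistic-regression feature selection are not shown.

    import ee

    ee.Initialize()

    hainan = ee.Geometry.Rectangle([108.5, 18.1, 111.1, 20.2])   # rough bounding box
    composite = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
                 .filterBounds(hainan)
                 .filterDate('2023-01-01', '2023-12-31')
                 .median())

    gndvi = composite.normalizedDifference(['B8', 'B3']).rename('GNDVI')
    stack = composite.select(['B2', 'B3', 'B4', 'B8']).addBands(gndvi)

    samples = ee.FeatureCollection('users/example/betel_training_points')  # hypothetical asset
    training = stack.sampleRegions(collection=samples, properties=['class'], scale=10)

    classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
        features=training, classProperty='class', inputProperties=stack.bandNames())
    betel_map = stack.classify(classifier)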

RevDate: 2025-09-12

Rosenblum J, Dong J, S Narayanasamy (2025)

Confidential computing for population-scale genome-wide association studies with SECRET-GWAS.

Nature computational science [Epub ahead of print].

Genomic data from a single institution lacks global diversity representation, especially for rare variants and diseases. Confidential computing can enable collaborative genome-wide association studies (GWAS) without compromising privacy or accuracy. However, due to limited secure memory space and performance overheads, previous solutions fail to support widely used regression methods. Here we present SECRET-GWAS-a rapid, privacy-preserving, population-scale, collaborative GWAS tool. We discuss several system optimizations, including streaming, batching, data parallelization and reducing trusted hardware overheads to efficiently scale linear and logistic regression to over a thousand processor cores on an Intel SGX-based cloud platform. In addition, we protect SECRET-GWAS against several hardware side-channel attacks. SECRET-GWAS is an open-source tool and works with the widely used Hail genomic analysis framework. Our experiments on Azure's Confidential Computing platform demonstrate that SECRET-GWAS enables multivariate linear and logistic regression GWAS queries on population-scale datasets from ten independent sources in just 4.5 and 29 minutes, respectively.
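
SECRET-GWAS works with the Hail framework, so the style of regression query it secures can be illustrated with a standard, non-confidential Hail snippet; the file paths, phenotype fields, and covariates below are hypothetical.

    import hail as hl

    hl.init()

    # Hypothetical inputs: a joint-called VCF and a tab-separated phenotype table
    mt = hl.import_vcf('data/cohort.vcf.bgz', reference_genome='GRCh38')
    pheno = hl.import_table('data/phenotypes.tsv', impute=True, key='sample_id')
    mt = mt.annotate_cols(pheno=pheno[mt.s])

    # Multivariate linear regression of a quantitative trait on genotype dosage
    gwas = hl.linear_regression_rows(
        y=mt.pheno.ldl_cholesterol,                      # assumed phenotype column
        x=mt.GT.n_alt_alleles(),
        covariates=[1.0, mt.pheno.age, mt.pheno.sex])
    gwas.order_by(gwas.p_value).show(5)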

RevDate: 2025-09-12

Smith DS, Ramadass K, Jones L, et al (2025)

Secondary use of radiological imaging data: Vanderbilt's ImageVU approach.

Journal of biomedical informatics pii:S1532-0464(25)00134-0 [Epub ahead of print].

OBJECTIVE: To develop ImageVU, a scalable research imaging infrastructure that integrates clinical imaging data with metadata-driven cohort discovery, enabling secure, efficient, and regulatory-compliant access to imaging for secondary and opportunistic research use. This manuscript presents a detailed description of ImageVU's key components and lessons learned to assist other institutions in developing similar research imaging services and infrastructure.

METHODS: ImageVU was designed to support the secondary use of radiological imaging data through a dedicated research imaging store. The system comprises four interconnected components: a Research PACS, an Ad Hoc Backfill Host, a Cloud Storage System, and a De-Identification System. Imaging metadata are extracted and stored in the Research Derivative (RD), an identified clinical data repository, and the Synthetic Derivative (SD), a de-identified research data repository, with access facilitated through the RD Discover web portal. Researchers interact with the system via structured metadata queries and multiple data delivery options, including web-based viewing, bulk downloads, and dataset preparation for high-performance computing environments.

RESULTS: The integration of metadata-driven search capabilities has streamlined cohort discovery and improved imaging data accessibility. As of December 2024, ImageVU has processed 12.9 million MRI and CT series from 1.36 million studies across 453,403 patients. The system has supported 75 project requests, delivering over 50 TB of imaging data to 55 investigators, leading to 66 published research papers.

CONCLUSION: ImageVU demonstrates a scalable and efficient approach for integrating clinical imaging into research workflows. By combining institutional data infrastructure with cloud-based storage and metadata-driven cohort identification, the platform enables secure and compliant access to imaging for translational research.

RevDate: 2025-09-09
CmpDate: 2025-09-09

Sanjalawe Y, Fraihat S, Al-E'mari S, et al (2025)

Smart load balancing in cloud computing: Integrating feature selection with advanced deep learning models.

PloS one, 20(9):e0329765 pii:PONE-D-24-52330.

The increasing dependence on cloud computing as a cornerstone of modern technological infrastructures has introduced significant challenges in resource management. Traditional load-balancing techniques often prove inadequate in addressing cloud environments' dynamic and complex nature, resulting in suboptimal resource utilization and heightened operational costs. This paper presents a novel smart load-balancing strategy incorporating advanced techniques to mitigate these limitations. Specifically, it addresses the critical need for a more adaptive and efficient approach to workload management in cloud environments, where conventional methods fall short in handling dynamic and fluctuating workloads. To bridge this gap, the paper proposes a hybrid load-balancing methodology that integrates feature selection and deep learning models for optimizing resource allocation. The proposed Smart Load Adaptive Distribution with Reinforcement and Optimization approach, SLADRO, combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) algorithms for load prediction, a hybrid bio-inspired optimization technique, Orthogonal Arrays and Particle Swarm Optimization (OOA-PSO), for feature selection, and Deep Reinforcement Learning (DRL) for dynamic task scheduling. Extensive simulations conducted on the real-world Google Cluster Trace dataset reveal that the SLADRO model significantly outperforms traditional load-balancing approaches, yielding notable improvements in throughput, makespan, resource utilization, and energy efficiency. This integration of advanced techniques offers a scalable and adaptive solution, providing a comprehensive framework for efficient load balancing in cloud computing environments.
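
The CNN-LSTM load-prediction component might be sketched as below. The window length, feature count, and the synthetic stand-in for Google Cluster Trace windows are assumptions, and the OOA-PSO feature selection and DRL scheduler that complete SLADRO are not shown.

    import numpy as np
    import tensorflow as tf

    STEPS, FEATURES = 60, 5   # 60 past intervals of 5 resource metrics (assumption)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(STEPS, FEATURES)),
        tf.keras.layers.Conv1D(32, kernel_size=3, activation='relu'),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),   # predicted load for the next interval
    ])
    model.compile(optimizer='adam', loss='mse')

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, STEPS, FEATURES)).astype('float32')  # stand-in trace windows
    y = rng.normal(size=(256, 1)).astype('float32')
    model.fit(X, y, epochs=2, verbose=0)

    next_load = model.predict(X[:1], verbose=0)   # would feed the DRL task scheduler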

RevDate: 2025-09-09

Degatano K, Awdeh A, Cox Iii RS, et al (2025)

Warp Analysis Research Pipelines: Cloud-optimized workflows for biological data processing and reproducible analysis.

Bioinformatics (Oxford, England) pii:8250097 [Epub ahead of print].

SUMMARY: In the era of large data, the cloud is increasingly used as a computing environment, necessitating the development of cloud-compatible pipelines that can provide uniform analysis across disparate biological datasets. The Warp Analysis Research Pipelines (WARP) repository is a GitHub repository of open-source, cloud-optimized workflows for biological data processing that are semantically versioned, tested, and documented. A companion repository, WARP-Tools, hosts Docker containers and custom tools used in WARP workflows.

The WARP and WARP-Tools repositories and code are freely available at https://github.com/broadinstitute/WARP and https://github.com/broadinstitute/WARP-tools, respectively. The pipelines are available for download from the WARP repository, can be exported from Dockstore, and can be imported to a bioinformatics platform such as Terra.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

RevDate: 2025-09-08

El-Warrak LO, Miceli de Farias C, VHDM De Azevedo Costa (2025)

Simulation-based assessment of digital twin systems for immunisation.

Frontiers in digital health, 7:1603550.

BACKGROUND: This paper presents the application of simulation to assess the functionality of a proposed Digital Twin (DT) architecture for immunisation services in primary healthcare centres. The solution is based on Industry 4.0 concepts and technologies, such as IoT, machine learning, and cloud computing, and adheres to the ISO 23247 standard.

METHODS: The system modelling is carried out using the Unified Modelling Language (UML) to define the workflows and processes involved, including vaccine storage temperature monitoring and population vaccination status tracking. The proposed architecture is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain. To validate the system's performance and feasibility, simulations are conducted using SimPy, enabling the evaluation of its response under various operational scenarios.
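
A toy SimPy sketch in the spirit of the simulations described above, with each ice-lined refrigerator modelled as a process whose temperature drifts and triggers an alert when it leaves the 2-8 °C cold-chain band; the thresholds, drift model, and time step are illustrative assumptions.

    import random
    import simpy

    def fridge(env, name, low=2.0, high=8.0):
        """Ice-lined refrigerator whose temperature drifts each simulated hour."""
        temperature = 5.0
        while True:
            yield env.timeout(1)                          # one simulated hour
            temperature += random.uniform(-0.5, 0.7)      # random drift
            if temperature < low or temperature > high:
                print(f'{env.now:4.0f} h  {name}: temperature alert ({temperature:.1f} C)')
                temperature = 5.0                         # staff restore the set point

    env = simpy.Environment()
    env.process(fridge(env, 'ILR-01'))
    env.process(fridge(env, 'ILR-02'))
    env.run(until=240)                                    # ten days of monitoring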

RESULTS: The system facilitates the storage, monitoring, and visualisation of data related to the thermal conditions of ice-lined refrigerators (ILR) and thermal boxes. Additionally, it analyses patient vaccination coverage based on the official immunisation schedule. The key benefits include optimising vaccine storage conditions, reducing dose wastage, continuously monitoring immunisation coverage, and supporting strategic vaccination planning.

CONCLUSION: The paper discusses the future impacts of this approach on immunisation management and its scalability for diverse public health contexts. By leveraging advanced technologies and simulation, this digital twin framework aims to improve the performance and overall impact of immunisation services.

RevDate: 2025-09-08

Zhou Y, Wu Y, Su Y, et al (2025)

Cloud-magnetic resonance imaging system: In the era of 6G and artificial intelligence.

Magnetic resonance letters, 5(1):200138.

Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system and its four generations that integrate emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system is called Cloud-MRI, aiming at solving the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. Then, the data are uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis. Finally, the outcomes are seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and ultimately improve diagnostic accuracy and work efficiency.

RevDate: 2025-09-05

Zhang YH, He JY, Lin SJ, et al (2025)

[Development and practice of an interactive chromatography learning tool for beginners based on GeoGebra: a case study of plate theory].

Se pu = Chinese journal of chromatography, 43(9):1078-1085.

This study developed a GeoGebra platform-based interactive pedagogical tool focusing on plate theory to address challenges associated with abstract theory transmission, unidirectional knowledge delivery, and low student engagement in chromatography teaching in instrumental analysis courses. This study introduced an innovative methodology that encompasses theoretical model reconstruction, tool development, and teaching-chain integration that addresses the limitations of existing teaching tools, including the complex operation of professional software, restricted accessibility to web-based tools, and insufficient parameter-adjustment flexibility. An improved mathematical plate-theory model was established by incorporating mobile-phase flow rate, dead time, and phase ratio parameters. A three-tier progressive learning system (single-component simulation, multi-component simulation, and retention-time-equation derivation modules) was developed on a cloud-based computing platform. An integrated teaching chain that combined mathematical modeling (AI-assisted "Doubao" derivation), interactive-parameter adjustment (multiple adjustable chromatographic parameters), and visual verification (chromatographic elution-curve simulation) was implemented. Teaching practice demonstrated that: (1) The developed tool transcends the dimensional limitations of traditional instruction, elevating the classroom task completion rate to 94% and improving the student accuracy rate for solving advanced problems to 76%. (2) The dynamic-parameter-adjustment feature significantly enhances learning engagement by enabling 85% of the students to independently use the tool in subsequent studies and experiments. (3) The AI-powered derivation and regression-analysis modules enable the interdisciplinary integration of theoretical chemistry and computational tools. The process of deriving chromatographic retention-time equations through this methodological approach proved more convincing than the current textbook practice of directly presenting conclusions. The developed innovative "theoretical-model-visualizable, model-parameter-adjustable, interactive-knowledge-generating" model provides a new avenue for addressing teaching challenges associated with chromatography theory, and its open-source framework and modular design philosophy can offer valuable references for the digital teaching reform in analytical chemistry.
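
The plate-theory simulation at the heart of the tool can be reproduced numerically: under plate theory, an eluting peak is approximately Gaussian with standard deviation sigma = tR/sqrt(N), where tR is the retention time and N the plate number. The sketch below uses illustrative retention times and plate numbers, not the tool's actual parameter set.

    import numpy as np

    def elution_curve(t, t_r, n_plates):
        """Gaussian approximation of the plate-theory elution profile (unit peak area)."""
        sigma = t_r / np.sqrt(n_plates)
        return np.exp(-(t - t_r) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    t = np.linspace(0, 20, 2000)                         # time axis in minutes
    peak_a = elution_curve(t, t_r=8.0, n_plates=2500)    # illustrative component A
    peak_b = elution_curve(t, t_r=10.0, n_plates=2500)   # illustrative component B
    chromatogram = peak_a + peak_b                       # simulated two-component chromatogram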

RevDate: 2025-09-02

Ting T, M Li (2025)

Enhanced secure storage and data privacy management system for big data based on multilayer model.

Scientific reports, 15(1):32285.

As big data systems expand in scale and complexity, managing and securing sensitive data, especially personnel records, has become a critical challenge in cloud environments. This paper proposes a novel Multi-Layer Secure Cloud Storage Model (MLSCSM) tailored for large-scale personnel data. The model integrates fast and secure ChaCha20 encryption, Dual Stage Data Partitioning (DSDP) to maintain statistical reliability across blocks, k-anonymization to ensure privacy, SHA-512 hashing for data integrity, and Cauchy matrix-based dispersion for fault-tolerant distributed storage. A key novelty lies in combining cryptographic and statistical methods to enable privacy-preserving partitioned storage, optimized for distributed Cloud Computing Environments (CCE). Data blocks are securely encoded, masked, and stored in discrete locations across several cloud platforms, based on factors such as latency, bandwidth, cost, and security. They are later retrieved with integrity verification. The model also includes audit logs, load balancing, and real-time resource evaluation. To validate the system, experiments were conducted using the MIMIC-III dataset on a 20-node Hadoop cluster. Compared to baseline models such as RDFA, SDPMC, and P&XE, the proposed model achieved a reduction in encoding time to 250 ms (block size 75), a CPU usage of 23% for 256 MB of data, a latency as low as 14 ms, and a throughput of up to 139 ms. These results confirm that the model offers superior security, efficiency, and scalability for cloud-based big data storage applications.
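
The encryption and integrity layers named above can be illustrated with the Python cryptography package and hashlib. Note that the AEAD variant ChaCha20-Poly1305 is used here for convenience rather than raw ChaCha20, and the record content, key handling, and storage dispersion logic are simplified placeholders.

    import os
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    record = b'{"employee_id": "E-1024", "grade": "B3"}'   # hypothetical personnel block

    key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)
    cipher = ChaCha20Poly1305(key)
    ciphertext = cipher.encrypt(nonce, record, None)

    digest = hashlib.sha512(record).hexdigest()   # integrity fingerprint stored with the block

    # On retrieval: decrypt, then verify the SHA-512 digest before accepting the block
    restored = cipher.decrypt(nonce, ciphertext, None)
    assert hashlib.sha512(restored).hexdigest() == digest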

RevDate: 2025-09-01

Mushtaq SU, Sheikh S, Nain A, et al (2025)

CRFTS: a cluster-centric and reservation-based fault-tolerant scheduling strategy to enhance QoS in cloud computing.

Scientific reports, 15(1):32233.

Cloud systems supply different kinds of on-demand services in accordance with client needs. As the landscape of cloud computing undergoes continuous development, there is a growing imperative for effective utilization of resources, task scheduling, and fault tolerance mechanisms. To decrease the user task execution time (shorten the makespan) with reduced operational expenses, to improve the distribution of load, and to boost utilization of resources, proper mapping of user tasks to the available VMs is necessary. This study introduces a unique perspective in tackling these challenges by implementing inventive scheduling strategies along with robust and proactive fault tolerance mechanisms in cloud environments. This paper presents the Clustering and Reservation Fault-tolerant Scheduling (CRFTS), which adapts the heartbeat mechanism to detect failed VMs proactively and maximizes the system reliability while making it fault-tolerant and optimizing other Quality of Service (QoS) parameters, such as makespan, average resource utilization, and reliability. The study optimizes the allocation of tasks to improve resource utilization and reduce the time required for their completion. At the same time, the proactive reservation-based fault tolerance framework is presented to ensure continuous service delivery throughout its execution without any interruption. The effectiveness of the suggested model is illustrated through simulations and empirical analyses, highlighting enhancements in several QoS parameters while comparing with HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO for various cases and conditions across different tasks and VMs. The outcomes demonstrate that CRFTS achieves average improvements of about 48.7%, 51.2%, 45.4%, 11.8%, 24.5%, 24.4% in terms of makespan and 13.1%, 9.3%, 6.5%, 21%, 22.1%, 26.3% in terms of average resource utilization compared to HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO, respectively.
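
A minimal sketch of a heartbeat-style failure detector of the kind CRFTS builds on: VMs report in periodically, and any VM silent for longer than a timeout is treated as failed so its queued tasks can be moved to reserved backup capacity. The timeout value and data structures are illustrative assumptions.

    import time

    HEARTBEAT_TIMEOUT = 3.0   # seconds without a heartbeat before a VM is declared failed
    last_seen = {}            # vm_id -> timestamp of the most recent heartbeat

    def record_heartbeat(vm_id):
        last_seen[vm_id] = time.monotonic()

    def failed_vms():
        """VMs whose heartbeat is overdue; their tasks would be re-scheduled elsewhere."""
        now = time.monotonic()
        return [vm for vm, ts in last_seen.items() if now - ts > HEARTBEAT_TIMEOUT]

    record_heartbeat('vm-01')
    record_heartbeat('vm-02')
    time.sleep(0.1)
    print(failed_vms())   # [] while both VMs keep reporting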

RevDate: 2025-09-01

Kishor I, Mamodiya U, Patil V, et al (2025)

AI-Integrated autonomous robotics for solar panel cleaning and predictive maintenance using drone and ground-based systems.

Scientific reports, 15(1):32187.

Solar photovoltaic (PV) systems, especially in dusty and high-temperature regions, suffer performance degradation due to dust accumulation, surface heating, and delayed maintenance. This study proposes an AI-integrated autonomous robotic system combining real-time monitoring, predictive analytics, and intelligent cleaning for enhanced solar panel performance. We developed a hybrid system that integrates CNN-LSTM-based fault detection, Reinforcement Learning (DQN)-driven robotic cleaning, and Edge AI analytics for low-latency decision-making. Thermal and LiDAR-equipped drones detect panel faults, while ground robots clean panel surfaces based on real-time dust and temperature data. The system is built on Jetson Nano and Raspberry Pi 4B units with MQTT-based IoT communication. The system achieved an average cleaning efficiency of 91.3%, reducing dust density from 3.9 to 0.28 mg/m[3], and restoring up to 31.2% energy output on heavily soiled panels. CNN-LSTM-based fault detection delivered 92.3% accuracy, while the RL-based cleaning policy reduced energy and water consumption by 34.9%. Edge inference latency averaged 47.2 ms, outperforming cloud processing by 63%. A strong correlation (r = 0.87) between dust concentration and thermal anomalies was confirmed. The proposed IEEE 1876-compliant framework offers a resilient and intelligent solution for real-time solar panel maintenance. By leveraging AI, robotics, and edge computing, the system enhances energy efficiency, reduces manual labor, and provides a scalable model for climate-resilient, smart solar infrastructure.
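
The abstract mentions MQTT-based IoT communication between the Jetson Nano and Raspberry Pi units; a minimal sketch of how dust and temperature telemetry could be published from such an edge device with the paho-mqtt client (1.x-style constructor) is shown below. The broker address, topic, payload fields, and publish interval are hypothetical, not taken from the paper.

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.local"      # hypothetical broker address
TOPIC = "solar/panel01/telemetry"    # hypothetical topic name

def read_sensors() -> dict:
    """Placeholder for real dust/temperature sensor reads on the edge device."""
    return {"dust_mg_m3": 0.28, "panel_temp_c": 41.7, "ts": time.time()}

client = mqtt.Client()               # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
client.connect(BROKER, 1883, keepalive=60)

for _ in range(3):                   # a few sample readings; a real device would loop indefinitely
    client.publish(TOPIC, json.dumps(read_sensors()), qos=1)  # QoS 1: broker acknowledges delivery
    time.sleep(10)

client.disconnect()
```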

RevDate: 2025-09-01

Maciá-Lillo A, Mora H, Jimeno-Morenilla A, et al (2025)

AI edge cloud service provisioning for knowledge management smart applications.

Scientific reports, 15(1):32246.

This paper investigates a serverless edge-cloud architecture to support knowledge management (KM) processes within smart cities, which align with the goals of Society 5.0 to create human-centered, data-driven urban environments. The proposed architecture leverages cloud computing for scalability and on-demand resource provisioning, and edge computing for cost-efficiency and data processing closer to data sources, while also supporting serverless computing for simplified application development. Together, these technologies enhance the responsiveness and efficiency of smart city applications, such as traffic management, public safety, and infrastructure governance, by minimizing latency and improving data handling at scale. Experimental analysis demonstrates the benefits of deploying KM processes on this hybrid architecture, particularly in reducing data transmission times and alleviating network congestion, while at the same time providing options for cost-efficient computations. In addition, the study identifies the characteristics, opportunities, and limitations of the edge and cloud environments in terms of computation and network communication times. This architecture represents a flexible framework for advancing knowledge-driven services in smart cities, supporting further development of smart city applications in KM processes.

RevDate: 2025-08-28
CmpDate: 2025-08-29

Kim EM, Y Lim (2025)

Mapping interconnectivity of digital twin healthcare research themes through structural topic modeling.

Scientific reports, 15(1):31734.

Digital twin (DT) technology is revolutionizing healthcare systems by leveraging real-time data integration and advanced analytics to enhance patient care, optimize clinical operations, and facilitate simulation. This study aimed to identify key research trends related to the application of DTs to healthcare using structural topic modeling (STM). Five electronic databases were searched for articles related to healthcare and DT. Using the held-out likelihood, residual, semantic coherence, and lower bound as metrics revealed that the optimal number of topics was eight. The "security solutions to improve data processes and communication in healthcare" topic was positioned at the center of the network and connected to multiple nodes. The "cloud computing and data network architecture" and "machine-learning algorithms for accurate detection and prediction" topics served as a bridge between technical and healthcare topics, suggesting their high potential for use in various fields. The widespread adoption of DTs in healthcare requires robust governance structures to protect individual rights, ensure data security and privacy, and promote transparency and fairness. Compliance with regulatory frameworks, ethical guidelines, and a commitment to accountability are also crucial.

RevDate: 2025-08-28
CmpDate: 2025-08-28

Zhang Y, Ran H, Guenther A, et al (2025)

Improved modelling of biogenic emissions in human-disturbed forest edges and urban areas.

Nature communications, 16(1):8064.

Biogenic volatile organic compounds (BVOCs) are critical to biosphere-atmosphere interactions, profoundly influencing atmospheric chemistry, air quality and climate, yet accurately estimating their emissions across diverse ecosystems remains challenging. Here we introduce GEE-MEGAN, a cloud-native extension of the widely used MEGAN2.1 model, integrating dynamic satellite-derived land cover and vegetation within Google Earth Engine to produce near-real-time BVOC emissions at 10-30 m resolution, enabling fine-scale tracking of emissions in rapidly changing environments. GEE-MEGAN reduces BVOC emission estimates by 31% and decreases root mean square errors by up to 48.6% relative to MEGAN2.1 in human-disturbed forest edges, and reveals summertime BVOC emissions up to 25‑fold higher than previous estimates in urban areas such as London, Los Angeles, Paris, and Beijing. By capturing fine-scale landscape heterogeneity and human-driven dynamics, GEE-MEGAN significantly improves BVOC emission estimates, providing crucial insights to the complex interactions among BVOCs, climate, and air quality across both natural and human-modified environments.

RevDate: 2025-08-28

Panagou IC, Katsoulis S, Nannos E, et al (2025)

A Comprehensive Evaluation of IoT Cloud Platforms: A Feature-Driven Review with a Decision-Making Tool.

Sensors (Basel, Switzerland), 25(16): pii:s25165124.

The rapid proliferation of Internet of Things (IoT) devices has led to a growing ecosystem of Cloud Platforms designed to manage, process, and analyze IoT data. Selecting the optimal IoT Cloud Platform is a critical decision for businesses and developers, yet it presents a significant challenge due to the diverse range of features, pricing models, and architectural nuances. This manuscript presents a comprehensive, feature-driven review of twelve prominent IoT Cloud Platforms, including AWS IoT Core, IoT on Google Cloud Platform, and Microsoft Azure IoT Hub among others. We meticulously analyze each platform across nine key features: Security, Scalability and Performance, Interoperability, Data Analytics and AI/ML Integration, Edge Computing Support, Pricing Models and Cost-effectiveness, Developer Tools and SDK Support, Compliance and Standards, and Over-The-Air (OTA) Update Capabilities. For each feature, platforms are quantitatively scored (1-10) based on an in-depth assessment of their capabilities and offerings at the time of research. Recognizing the dynamic nature of this domain, we present our findings in a two-dimensional table to provide a clear comparative overview. Furthermore, to empower users in their decision-making process, we introduce a novel, web-based tool for evaluating IoT Cloud Platforms, called the "IoT Cloud Platforms Selector". This interactive tool allows users to assign personalized weights to each feature, dynamically calculating and displaying weighted scores for each platform, thereby facilitating a tailored selection process. This research provides a valuable resource for researchers, practitioners, and organizations seeking to navigate the complex landscape of IoT Cloud Platforms.
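
The selection logic behind the "IoT Cloud Platforms Selector", user-assigned feature weights combined with per-feature scores into a weighted sum, can be reproduced in a few lines; the scores and weights below are made up purely to illustrate the calculation and are not the paper's ratings.

```python
# Hypothetical per-feature scores (1-10) for three platforms; not the paper's actual ratings.
scores = {
    "AWS IoT Core":     {"Security": 9, "Scalability": 9, "Pricing": 6},
    "Google Cloud IoT": {"Security": 8, "Scalability": 9, "Pricing": 7},
    "Azure IoT Hub":    {"Security": 9, "Scalability": 8, "Pricing": 7},
}

# User-assigned weights expressing how much each feature matters (normalized below).
weights = {"Security": 0.5, "Scalability": 0.3, "Pricing": 0.2}

def weighted_score(feature_scores: dict, weights: dict) -> float:
    total_weight = sum(weights.values())
    return sum(feature_scores[f] * w for f, w in weights.items()) / total_weight

ranking = sorted(scores, key=lambda p: weighted_score(scores[p], weights), reverse=True)
for platform in ranking:
    print(f"{platform}: {weighted_score(scores[platform], weights):.2f}")
```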

RevDate: 2025-08-28
CmpDate: 2025-08-28

Rao S, S Neethirajan (2025)

Computational Architectures for Precision Dairy Nutrition Digital Twins: A Technical Review and Implementation Framework.

Sensors (Basel, Switzerland), 25(16): pii:s25164899.

Sensor-enabled digital twins (DTs) are reshaping precision dairy nutrition by seamlessly integrating real-time barn telemetry with advanced biophysical simulations in the cloud. Drawing insights from 122 peer-reviewed studies spanning 2010-2025, this systematic review reveals how DT architectures for dairy cattle are conceptualized, validated, and deployed. We introduce a novel five-dimensional classification framework-spanning application domain, modeling paradigms, computational topology, validation protocols, and implementation maturity-to provide a coherent comparative lens across diverse DT implementations. Hybrid edge-cloud architectures emerge as optimal solutions, with lightweight CNN-LSTM models embedded in collar or rumen-bolus microcontrollers achieving over 90% accuracy in recognizing feeding and rumination behaviors. Simultaneously, remote cloud systems harness mechanistic fermentation simulations and multi-objective genetic algorithms to optimize feed composition, minimize greenhouse gas emissions, and balance amino acid nutrition. Field-tested prototypes indicate significant agronomic benefits, including 15-20% enhancements in feed conversion efficiency and water use reductions of up to 40%. Nevertheless, critical challenges remain: effectively fusing heterogeneous sensor data amid high barn noise, ensuring millisecond-level synchronization across unreliable rural networks, and rigorously verifying AI-generated nutritional recommendations across varying genotypes, lactation phases, and climates. Overcoming these gaps necessitates integrating explainable AI with biologically grounded digestion models, federated learning protocols for data privacy, and standardized PRISMA-based validation approaches. The distilled implementation roadmap offers actionable guidelines for sensor selection, middleware integration, and model lifecycle management, enabling proactive rather than reactive dairy management-an essential leap toward climate-smart, welfare-oriented, and economically resilient dairy farming.

RevDate: 2025-08-28

Alamri M, Humayun M, Haseeb K, et al (2025)

AI-Powered Adaptive Disability Prediction and Healthcare Analytics Using Smart Technologies.

Diagnostics (Basel, Switzerland), 15(16): pii:diagnostics15162104.

Background: By leveraging advanced wireless technologies, Healthcare Industry 5.0 promotes the continuous monitoring of real-time medical data acquisition from the physical environment. These systems help identify diseases early by promptly collecting health records from patients' bodies using biosensors. The dynamic nature of medical devices not only enhances the data analysis in medical services and the prediction of chronic diseases, but also improves remote diagnostics with the latency-aware healthcare system. However, due to scalability and reliability limitations in data processing, most existing healthcare systems pose research challenges in the timely detection of personalized diseases, leading to inconsistent diagnoses, particularly when continuous monitoring is crucial. Methods: This work proposes an adaptive and secure framework for disability identification using the Internet of Medical Things (IoMT), integrating edge computing and artificial intelligence. To achieve the shortest response time for medical decisions, the proposed framework explores lightweight edge computing processes that collect physiological and behavioral data using biosensors. Furthermore, it offers a trusted mechanism using decentralized strategies to protect big data analytics from malicious activities and increase authentic access to sensitive medical data. Lastly, it provides personalized healthcare interventions while monitoring healthcare applications using realistic health records, thereby enhancing the system's ability to identify diseases associated with chronic conditions. Results: The proposed framework is tested using simulations, and the results indicate the high accuracy of the healthcare system in detecting disabilities at the edges, while enhancing the prompt response of the cloud server and guaranteeing the security of medical data through lightweight encryption methods and federated learning techniques. Conclusions: The proposed framework offers a secure and efficient solution for identifying disabilities in healthcare systems by leveraging IoMT, edge computing, and AI. It addresses critical challenges in real-time disease monitoring, enhancing diagnostic accuracy and ensuring the protection of sensitive medical data.

RevDate: 2025-08-28

Gao H (2025)

Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space-Air-Marine Network.

Entropy (Basel, Switzerland), 27(8): pii:e27080803.

This paper investigates the problem of computation offloading and resource allocation in an integrated space-air-sea network based on unmanned aerial vehicles (UAVs) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud-edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, and indicate that the proposed MADDPG-based algorithm reduces the total system cost by 15-60%.
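
The optimization objective described above, minimizing a total cost that trades off energy consumption against latency across offloading decisions, can be written compactly; the sketch below is a generic weighted-cost formulation under assumed weights, not the paper's exact system model.

```python
def total_system_cost(energy_j: list[float], latency_s: list[float],
                      w_energy: float = 0.5, w_latency: float = 0.5) -> float:
    """Weighted sum of per-task energy and latency costs (generic form, assumed weights)."""
    assert len(energy_j) == len(latency_s)
    return sum(w_energy * e + w_latency * t for e, t in zip(energy_j, latency_s))

# Example: three M-IoT tasks, each with (energy in joules, latency in seconds) after offloading.
print(total_system_cost([1.2, 0.8, 2.1], [0.05, 0.12, 0.30]))
```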

RevDate: 2025-08-27

Massimi F, Tedeschi A, Bagadi K, et al (2025)

Integrating Google Maps and Smooth Street View Videos for Route Planning.

Journal of imaging, 11(8):.

This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client-server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.

RevDate: 2025-08-27
CmpDate: 2025-08-27

Tang H, Yuan Y, Liu H, et al (2025)

Application of a "nursing education cloud platform"-based combined and phased training model in the education of standardized-training nurses: A quasi-experimental study.

Medicine, 104(34):e44138.

The evolution of nursing education has rendered traditional standardized-training models increasingly inadequate, primarily due to their inflexible curricula, limited personalized instruction, and delayed feedback loops. While stage-based training models offer improved coherence through structured planning, they encounter difficulties in resource integration and real-time interaction. Contemporary advancements in cloud computing and Internet of Things technologies present novel opportunities for educational reform. Nursing Education Cloud Platform (NECP)-based systems have demonstrated efficacy in medical education, particularly in efficient resource management, data-driven decision-making, and the design of adaptable learning pathways. Despite the nascent implementation of cloud platforms in standardized nurse training, the sustained impact on multifaceted competencies, including professional identity and clinical reasoning, warrants further investigation. The primary objective of this investigation was to assess the effectiveness of a NECP-integrated, phased training model in enhancing standardized-training nurses' theoretical comprehension, practical competencies, professional self-perception, and clinical decision-making capabilities, while also examining its potential to refine nursing education methodologies. This quasi-experimental, non-randomized controlled trial evaluated the impact of a NECP-based training program. The study encompassed an experimental group (n = 56, receiving cloud platform-based training from September 2021 to August 2022) and a control group (n = 56, undergoing traditional training from September 2020 to August 2021). Group assignment was determined by the hospital's annual training schedule, thus employing a natural grouping based on the time period. Propensity score matching was utilized to mitigate baseline characteristic imbalances. The intervention's effects were assessed across several domains, including theoretical knowledge, operational skills, professional identity, and clinical reasoning abilities. ANCOVA was employed to account for temporal covariates. The experimental group scored significantly higher than the control group in theoretical knowledge (88.70 ± 5.07 vs 75.55 ± 9.01, P < .05), operational skills (94.27 ± 2.04 vs 90.95 ± 3.69, P < .05), professional identity (73.18 ± 10.18 vs 62.54 ± 15.48, P < .05), and clinical reasoning ability (60.95 ± 8.90 vs 51.09 ± 12.28, P < .05). The integration of the "NECP" with a phased training model demonstrates efficacy in augmenting nurses' competencies. However, the potential for selection bias, inherent in the non-randomized design, warrants careful consideration in the interpretation of these findings. Further investigation, specifically through multicenter longitudinal studies, is recommended to ascertain the generalizability of these results.

RevDate: 2025-08-26
CmpDate: 2025-08-26

Brown S, Kudia O, Kleine K, et al (2025)

Comparing Multiple Imputation Methods to Address Missing Patient Demographics in Immunization Information Systems: Retrospective Cohort Study.

JMIR public health and surveillance, 11:e73916 pii:v11i1e73916.

BACKGROUND: Immunization Information Systems (IIS) and surveillance data are essential for public health interventions and programming; however, missing data are often a challenge, potentially introducing bias and impacting the accuracy of vaccine coverage assessments, particularly in addressing disparities.

OBJECTIVE: This study aimed to evaluate the performance of 3 multiple imputation methods, Stata's (StataCorp LLC) multiple imputation using chained equations (MICE), scikit-learn's Iterative-Imputer, and Python's miceforest package, in managing missing race and ethnicity data in large-scale surveillance datasets. We compared these methodologies in their ability to preserve demographic distribution, computational efficiency, and performed G-tests on contingency tables to obtain likelihood ratio statistics to assess the association between race and ethnicity and flu vaccination status.

METHODS: In this retrospective cohort study, we analyzed 2021-2022 flu vaccination and demographic data from the West Virginia Immunization Information System (N=2,302,036), where race (15%) and ethnicity (34%) were missing. MICE, Iterative Imputer, and miceforest were used to impute missing variables, generating 15 datasets each. Computational efficiency, demographic distribution preservation, and spatial clustering patterns were assessed using G-statistics.

RESULTS: After imputation, an additional 780,339 observations were obtained compared with complete case analysis. All imputation methods exhibited significant spatial clustering for race imputation (G-statistics: MICE=26,452.7, Iterative-Imputer=128,280.3, Miceforest=26,891.5; P<.001), while ethnicity imputation showed variable clustering patterns (G-statistics: MICE=1142.2, Iterative-Imputer=1.7, Miceforest=2185.0; P: MICE<.001, Iterative-Imputer=1.7, Miceforest<.001). MICE and miceforest best preserved the proportional distribution of demographics. Computational efficiency varied, with MICE requiring 14 hours, Iterative Imputer 2 minutes, and miceforest 10 minutes for 15 imputations. Postimputation estimates indicated a 0.87%-18% reduction in stratified flu vaccination coverage rates. Overall estimated flu vaccination rates decreased from 26% to 19% after imputations.

CONCLUSIONS: Both MICE and Miceforest offer flexible and reliable approaches for imputing missing demographic data while mitigating bias compared with Iterative-Imputer. Our results also highlight that the imputation method can profoundly affect research findings. Though MICE and Miceforest had better effect sizes and reliability, MICE was much more computationally and time-expensive, limiting its use in large, surveillance datasets. Miceforest can use cloud-based computing, which further enhances efficiency by offloading resource-intensive tasks, enabling parallel execution, and minimizing processing delays. The significant decrease in vaccination coverage estimates validates how incomplete or missing data can eclipse real disparities. Our findings support regular application of imputation methods in immunization surveillance to improve health equity evaluations and shape targeted public health interventions and programming.
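
For readers who want to reproduce the scikit-learn arm of this comparison, a minimal IterativeImputer example on a toy data frame is shown below; categorical race/ethnicity fields must first be numerically encoded, and the column names and values here are hypothetical rather than taken from the West Virginia IIS.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the estimator)
from sklearn.impute import IterativeImputer

# Toy stand-in for an IIS extract; real data would hold encoded race/ethnicity plus covariates.
df = pd.DataFrame({
    "age":       [34, 51, 29, 62, 45, 38],
    "race_code": [1, np.nan, 2, 1, np.nan, 3],   # missing values to impute
    "flu_vax":   [1, 0, 1, 1, 0, 1],
})

imputer = IterativeImputer(max_iter=10, random_state=0, sample_posterior=True)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

# Rounding back to the nearest category code is a crude post-step; MICE-style packages
# handle categorical variables more carefully.
imputed["race_code"] = imputed["race_code"].round().astype(int)
print(imputed)
```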

RevDate: 2025-08-26
CmpDate: 2025-08-26

Nguyen C, Nguyen T, Trivitt G, et al (2025)

Modular and cloud-based bioinformatics pipelines for high-confidence biomarker detection in cancer immunotherapy clinical trials.

PloS one, 20(8):e0330827 pii:PONE-D-25-08135.

BACKGROUND: The Cancer Immune Monitoring and Analysis Centers - Cancer Immunologic Data Center (CIMAC-CIDC) network aims to improve cancer immunotherapy by providing harmonized molecular assays and standardized bioinformatics analysis.

RESULTS: In response to evolving bioinformatics standards and the migration of the CIDC to the National Cancer Institute (NCI), we undertook the enhancement of the CIDC's extant whole exome sequencing (WES) and RNA sequencing (RNA-Seq) pipelines. Leveraging open-source tools and cloud-based technologies, we implemented modular workflows using Snakemake and Docker for efficient deployment on the Google Cloud Platform (GCP). Benchmarking analyses demonstrate improved reproducibility, precision, and recall across validated truth sets for variant calling, transcript quantification, and fusion detection.

CONCLUSION: This work establishes a scalable framework for harmonized multi-omic analyses, ensuring the continuity and reliability of bioinformatics workflows in multi-site clinical research aimed at advancing cancer biomarker discovery and personalized medicine.

RevDate: 2025-08-25

Nazmul Haque SM, MJ Uddin (2025)

Monitoring LULC dynamics and detecting transformation hotspots in sylhet, Bangladesh (2000-2023) using Google Earth Engine.

Scientific reports, 15(1):31263.

Sylhet, located in the northeastern part of Bangladesh, is characterized by a unique topography and climatic conditions that make it susceptible to flash floods. The interplay of rapid urbanization and climatic variability has exacerbated these flood risks in recent years. Effective monitoring and planning of land use/land cover (LULC) are crucial strategies for mitigating these hazards. While previous studies analyzed LULC in parts of Sylhet using traditional GIS approaches, no comprehensive, district-wide assessment has been carried out using long-term satellite data and cloud computing platforms. This study addresses that gap by applying Google Earth Engine (GEE) for an extensive analysis of LULC changes, transitions, and hot/cold spots across the district. Accordingly, this work investigates the LULC changes in Sylhet district over the past twenty-three years (2000-2023). Using satellite imagery from Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI), LULC is classified for six selected years (2000, 2005, 2010, 2015, 2020, and 2023). A supervised machine learning algorithm, the Random Forest Classifier, is employed on the cloud computing platform Google Earth Engine to analyze LULC dynamics and detect changes. The Getis-Ord Gi[*] statistical model is applied to identify land transformation hot spot and cold spot areas. The results reveal a significant increase in built-up areas and a corresponding reduction in water bodies. Spatial analysis at the upazila level indicates urban expansion in every upazila, with the most substantial increase observed in Beani Bazar upazila, where urban areas expanded by approximately 1500%. Conversely, Bishwanath upazila experienced the greatest reduction in water bodies, with a decrease of about 90%. Sylhet Sadar upazila showed a 240% increase in urban areas and a 72% decrease in water bodies. According to hotspot analysis, Kanaighat upazila has the largest share of unchanging land (7%), whereas Balaganj upazila has the largest share of LULC transformation (5.5%). Overall, the urban area in the Sylhet district has grown by approximately 300%, while water bodies have diminished by about 77%, reflecting trends of urbanization and river-filling. These findings underscore the necessity of ensuring adequate drainage infrastructure to decrease flash flood hazards in the Sylhet district and offer valuable insights to relevant authorities, politicians, and water resource engineers.
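
A minimal Earth Engine Python sketch of the supervised classification step described above (a Landsat composite classified with smileRandomForest) is shown below; the training-point asset ID, band list, and filters are placeholders, and this is not the authors' published script.

```python
import ee
ee.Initialize()

# Median Landsat 8 surface-reflectance composite for one of the study years (bands assumed).
bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
             .filterDate("2020-01-01", "2020-12-31")
             .filter(ee.Filter.lt("CLOUD_COVER", 20))
             .median()
             .select(bands))

# 'samples' would be a FeatureCollection of labeled training points with a 'landcover' property.
samples = ee.FeatureCollection("users/example/sylhet_training_points")  # placeholder asset ID
training = composite.sampleRegions(collection=samples, properties=["landcover"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty="landcover", inputProperties=bands)

classified = composite.classify(classifier)   # per-pixel LULC map, ready for change analysis
```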

RevDate: 2025-08-25

Dhanaraj RK, Maragatharajan M, Sureshkumar A, et al (2025)

On-device AI for climate-resilient farming with intelligent crop yield prediction using lightweight models on smart agricultural devices.

Scientific reports, 15(1):31195.

In recent times, Artificial Intelligence (AI) applications have proliferated across various domains, and agricultural consumer electronics are no exception. These innovations have significantly enhanced the intelligence of agricultural processes, leading to increased efficiency and sustainability. This study introduces an intelligent crop yield prediction system that uses a Random Forest (RF) classifier to optimize water usage based on environmental factors. By integrating lightweight machine learning with consumer electronics, such as sensors connected to smart display devices, this work aims to improve water management and promote sustainable farming practices. With a focus on sustainable agriculture, water use efficiency in irrigation is enhanced by predicting optimal watering schedules, which reduces environmental impact and supports climate-resilient farming. The proposed lightweight model was trained on real-time agricultural data with minimal memory resources for sustainability prediction, achieved 90.1% accuracy in detecting crop yield suitability for the farmland, and outperformed existing methods, including an AI-enabled IoT model with mobile sensors and deep learning architectures (89%), LoRa-based systems (87.2%), and adaptive AI with self-learning techniques (88%). Deploying computationally efficient machine learning models such as Random Forest emphasizes real-time decision-making without depending on cloud computing. The performance and effectiveness of the proposed method are evaluated using prediction accuracy, which assesses how accurately the AI model predicts irrigation needs from the sensor data.
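
A lightweight Random Forest of the kind described, predicting whether irrigation is needed from a handful of sensor readings, fits in a few lines of scikit-learn; the feature names and toy data below are illustrative, not the study's dataset.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy sensor readings: [soil_moisture_%, air_temp_C, humidity_%]; label 1 = irrigate.
X = [[12, 35, 30], [45, 28, 60], [10, 38, 25], [50, 24, 70],
     [18, 33, 35], [42, 26, 65], [15, 36, 28], [48, 25, 72]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# A small forest with shallow trees keeps the model light enough for on-device inference.
model = RandomForestClassifier(n_estimators=20, max_depth=4, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("irrigate now?", model.predict([[14, 34, 32]]))   # single fresh sensor reading
```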

RevDate: 2025-08-25

Ozlem K, Gumus C, Yilmaz AF, et al (2025)

Cloud-Based Control System with Sensing and Actuating Textile-Based IoT Gloves for Telerehabilitation Applications.

Advanced intelligent systems (Weinheim an der Bergstrasse, Germany), 7(8):2400894.

Remote manipulation devices extend human capabilities over vast distances or in inaccessible environments, removing constraints between patients and treatment. The integration of therapeutic and assistive devices with the Internet of Things (IoT) has demonstrated high potential to develop and enhance intelligent rehabilitation systems in the e-health domain. Within such devices, soft robotic products distinguish themselves through their lightweight and adaptable characteristics, facilitating secure collaboration between humans and robots. The objective of this research is to combine a textile-based sensorized glove with an air-driven soft robotic glove, operated wirelessly using the developed control system architecture. The sensing glove equipped with capacitive sensors on each finger captures the movements of the medical staff's hand. Meanwhile, the pneumatic rehabilitation glove designed to aid patients affected by impaired hand function due to stroke, brain injury, or spinal cord injury replicates the movements of the medical personnel. The proposed artificial intelligence-based system detects finger gestures and actuates the pneumatic system, responding within an average response time of 48.4 ms. The evaluation of the system further in terms of accuracy and transmission quality metrics verifies the feasibility of the proposed system integrating textile gloves into IoT infrastructure, enabling remote motion sensing and actuation.

RevDate: 2025-08-25

Saratkar SY, Langote M, Kumar P, et al (2025)

Digital twin for personalized medicine development.

Frontiers in digital health, 7:1583466.

Digital Twin (DT) technology is revolutionizing healthcare by enabling real-time monitoring, predictive analytics, and highly personalized medical care. As a key innovation of Industry 4.0, DTs integrate advanced tools like artificial intelligence (AI), the Internet of Things (IoT), and machine learning (ML) to create dynamic, data-driven replicas of patients. These digital replicas allow simulations of disease progression, optimize diagnostics, and personalize treatment plans based on individual genetic and lifestyle profiles. This review explores the evolution, architecture, and enabling technologies of DTs, focusing on their transformative applications in personalized medicine (PM). While the integration of DTs offers immense potential to improve outcomes and efficiency in healthcare, challenges such as data privacy, system interoperability, and ethical concerns must be addressed. The paper concludes by highlighting future directions, where AI, cloud computing, and blockchain are expected to play a pivotal role in overcoming these limitations and advancing precision medicine.

RevDate: 2025-08-24
CmpDate: 2025-08-24

Beć KB, Grabska J, CW Huck (2025)

Handheld NIR spectroscopy for real-time on-site food quality and safety monitoring.

Advances in food and nutrition research, 115:293-389.

This chapter reviews the applications and future directions of portable near-infrared (NIR) spectroscopy in food analytics, with a focus on quality control, safety monitoring, and fraud detection. Portable NIR spectrometers are essential for real-time, non-destructive analysis of food composition, and their use is rapidly expanding across various stages of the food production chain-from agriculture and processing to retail and consumer applications. The functional design of miniaturized NIR spectrometers is examined, linking the technological diversity of these sensors to their application potential in specific roles within the food sector, while discussing challenges related to thermal stability, energy efficiency, and spectral accuracy. Current trends in data analysis, including chemometrics and artificial intelligence, are also highlighted, as the successful application of portable spectroscopy heavily depends on this key aspect of the analytical process. This discussion is based on recent literature, with a focus on the last five years, and addresses the application of portable NIR spectroscopy in food quality assessment and composition analysis, food safety and contaminant detection, and food authentication and fraud prevention. The chapter concludes that portable NIR spectroscopy has significantly enhanced food analytics over the past decade, with ongoing trends likely to lead to even wider adoption in the near future. Future challenges related to ultra-miniaturization and emerging consumer-oriented spectrometers emphasize the need for robust pre-calibrated models and the development of global models for key applications. The integration of NIR spectrometers with cloud computing, IoT, and machine learning is expected to drive advancements in real-time monitoring, predictive modeling, and data processing, fitting the growing demand for improved safety, quality, and fraud detection from the farm to the fork.

RevDate: 2025-08-21
CmpDate: 2025-08-21

Cui D, Peng Z, Li K, et al (2025)

A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing.

PloS one, 20(8):e0329669 pii:PONE-D-24-45416.

With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem. To address the challenge of large-scale task scheduling in cloud environments, this paper proposes a novel cloud task scheduling framework based on hierarchical deep reinforcement learning (DRL). The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to the cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in cloud environments by continuously learning and updating network parameters. Experiments demonstrate that it skillfully balances cost and performance. In low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, it effectively optimizes load balancing, cost, and overdue time, achieving a 10% overall improvement. The experimental results demonstrate that this approach effectively balances cost and performance, optimizing objectives such as load balance, cost, and overdue time. One potential shortcoming of the proposed hierarchical deep reinforcement learning (DRL) framework for cloud task scheduling is its complexity and computational overhead. Implementing and maintaining a DRL-based scheduler requires significant computational resources and expertise in machine learning. There are still shortcomings in the method used in this study. First, the continuous learning and updating of network parameters might introduce latency, which could impact real-time task scheduling efficiency. Furthermore, the framework's performance heavily depends on the quality and quantity of training data, which might be challenging to obtain and maintain in a dynamic cloud environment.
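
The two-level decision the framework describes, picking a VM cluster first and then a VM inside it, can be sketched independently of the DRL machinery; below, placeholder value functions stand in for the trained networks, and the epsilon-greedy choice is an assumed exploration strategy rather than the paper's exact policy.

```python
import random

def choose(options, value_fn, epsilon=0.1):
    """Epsilon-greedy pick: mostly the highest-valued option, occasionally a random one."""
    if random.random() < epsilon:
        return random.choice(options)
    return max(options, key=value_fn)

def schedule_task(task, clusters, cluster_value, vm_value):
    """Hierarchical assignment: cluster first, then a VM inside the chosen cluster.
    cluster_value / vm_value stand in for the learned DRL value estimates."""
    cluster = choose(list(clusters), lambda c: cluster_value(task, c))
    vm = choose(clusters[cluster], lambda v: vm_value(task, v))
    return cluster, vm

# Example with placeholder value functions (prefer clusters/VMs with spare capacity).
clusters = {"low-cost": ["vm1", "vm2"], "high-perf": ["vm3", "vm4"]}
spare = {"vm1": 0.2, "vm2": 0.7, "vm3": 0.9, "vm4": 0.4}
print(schedule_task(
    {"cpu": 2},
    clusters,
    cluster_value=lambda t, c: max(spare[v] for v in clusters[c]),
    vm_value=lambda t, v: spare[v],
))
```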

RevDate: 2025-08-20

Manhary FN, Mohamed MH, M Farouk (2025)

A scalable machine learning strategy for resource allocation in database.

Scientific reports, 15(1):30567.

Modern cloud computing systems require intelligent resource allocation strategies that balance quality-of-service (QoS), operational costs, and energy sustainability. Existing deep Q-learning (DQN) methods suffer from sample inefficiency, centralization bottlenecks, and reactive decision-making during workload spikes. Transformer-based forecasting models such as Temporal Fusion Transformer (TFT) offer improved accuracy but introduce computational overhead, limiting real-time deployment. We propose LSTM-MARL-Ape-X, a novel framework integrating bidirectional Long Short-Term Memory (BiLSTM) for workload forecasting with Multi-Agent Reinforcement Learning (MARL) in a distributed Ape-X architecture. This approach enables proactive, decentralized, and scalable resource management through three innovations: high-accuracy forecasting using BiLSTM with feature-wise attention, variance-regularized credit assignment for stable multi-agent coordination, and faster convergence via adaptive prioritized replay. Experimental validation on real-world traces demonstrates 94.6% SLA compliance, 22% reduction in energy consumption, and linear scalability to over 5,000 nodes with sub-100 ms decision latency. The framework converges 3.2× faster than uniform sampling baselines and outperforms transformer-based models in both accuracy and inference speed. Unlike decoupled prediction-action frameworks, our method provides end-to-end optimization, enabling robust and sustainable cloud orchestration at scale.
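
A bidirectional LSTM workload forecaster of the general shape described can be defined in a few lines of Keras; the window length, feature count, and layer sizes below are assumptions, and the paper's feature-wise attention and multi-agent components are omitted.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, FEATURES = 24, 4   # assumed: 24 past time steps, 4 workload metrics per step

model = keras.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.Bidirectional(layers.LSTM(64)),   # BiLSTM encoder over the workload window
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                         # next-step demand forecast
])
model.compile(optimizer="adam", loss="mse")

# Synthetic training data just to show the expected tensor shapes.
X = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

print(model.predict(X[:1]))   # forecast for one window of recent workload metrics
```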

RevDate: 2025-08-19

Park SY, Takayama C, Ryu J, et al (2025)

Design and evaluation of next-generation HIV genotyping for detection of resistance mutations to 28 antiretroviral drugs across five major classes including lenacapavir.

Clinical infectious diseases : an official publication of the Infectious Diseases Society of America pii:8237671 [Epub ahead of print].

BACKGROUND: The emergence and spread of HIV drug-resistant strains present a major barrier to effective lifelong Antiretroviral Therapy (ART). The anticipated rise in long-acting subcutaneous lenacapavir (LEN) use, along with the increased risk of transmitted resistance and Pre-Exposure Prophylaxis (PrEP)-associated resistance, underscores the urgent need for advanced genotyping methods to enhance clinical care and prevention strategies.

METHODS: We developed the Portable HIV Genotyping (PHG) platform which combines cost-effective next-generation sequencing with cloud computing to screen for resistance to 28 antiretroviral drugs across five major classes, including LEN. We analyzed three study cohorts and compared our drug resistance findings against standard care testing results and high-fidelity sequencing data obtained through unique molecular identifier (UMI) labeling.

RESULTS: PHG identified two major LEN-resistance mutations in one participant, confirmed by an additional independent sequencing run. Across three study cohorts, PHG consistently detected the same drug resistance mutations as standard care genotyping and high-fidelity UMI-labeling in most tested specimens. PHG's 10% limit of detection minimized false positives and enabled identification of minority variants less than 20% frequency, pointing to underdiagnosis of drug resistance in clinical care. Furthermore, PHG identified linked cross-class resistance mutations, confirmed by UMI-labeling, including linked cross-resistance in a participant who reported use of long-acting cabotegravir (CAB) and rilpivirine (RPV). We also observed multi-year persistence of linked cross-class resistance mutations.

CONCLUSIONS: PHG demonstrates significant improvements over standard care HIV genotyping, offering deeper insights into LEN-resistance, minority variants, and cross-class resistance using a low-cost high-throughput portable sequencing technology and publicly available cloud computing.

RevDate: 2025-08-17

Wu J, Bian Z, Gao H, et al (2025)

A Blockchain-Based Secure Data Transaction and Privacy Preservation Scheme in IoT System.

Sensors (Basel, Switzerland), 25(15):.

With the explosive growth of Internet of Things (IoT) devices, massive amounts of heterogeneous data are continuously generated. However, IoT data transactions and sharing face multiple challenges such as limited device resources, untrustworthy network environment, highly sensitive user privacy, and serious data silos. How to achieve fine-grained access control and privacy protection for massive devices while ensuring secure and reliable data circulation has become a key issue that needs to be urgently addressed in the current IoT field. To address the above challenges, this paper proposes a blockchain-based data transaction and privacy protection framework. First, the framework builds a multi-layer security architecture that integrates blockchain and IPFS and adapts to the "end-edge-cloud" collaborative characteristics of IoT. Secondly, a data sharing mechanism that takes into account both access control and interest balance is designed. On the one hand, the mechanism uses attribute-based encryption (ABE) technology to achieve dynamic and fine-grained access control for massive heterogeneous IoT devices; on the other hand, it introduces a game theory-driven dynamic pricing model to effectively balance the interests of both data supply and demand. Finally, in response to the needs of confidential analysis of IoT data, a secure computing scheme based on CKKS fully homomorphic encryption is proposed, which supports efficient statistical analysis of encrypted sensor data without leaking privacy. Security analysis and experimental results show that this scheme is secure under standard cryptographic assumptions and can effectively resist common attacks in the IoT environment. Prototype system testing verifies the functional completeness and performance feasibility of the scheme, providing a complete and effective technical solution to address the challenges of data integrity, verifiable transactions, and fine-grained access control, while mitigating the reliance on a trusted central authority in IoT data sharing.
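
CKKS-style encrypted statistics of the kind mentioned above can be prototyped with the TenSEAL library; the sketch below averages two encrypted sensor vectors under assumed encryption parameters and is a generic illustration, not the authors' scheme.

```python
import tenseal as ts

# CKKS context with commonly used (assumed) parameters.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40

day1 = ts.ckks_vector(context, [21.5, 22.1, 20.9, 23.4])   # encrypted sensor readings
day2 = ts.ckks_vector(context, [20.8, 22.6, 21.3, 22.9])

enc_mean = (day1 + day2) * 0.5      # element-wise average computed directly on ciphertexts
print([round(x, 2) for x in enc_mean.decrypt()])   # only the data owner decrypts the result
```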

RevDate: 2025-08-18

Chapman OS, Sridhar S, Chow EY, et al (2025)

Extrachromosomal DNA associates with poor survival across a broad spectrum of childhood solid tumors.

medRxiv : the preprint server for health sciences.

Circular extrachromosomal DNA (ecDNA) is a common form of oncogene amplification in aggressive cancers. The frequency and diversity of ecDNA has been catalogued in adult and some childhood cancers; however, its role in most pediatric cancers is not well-understood. To address this gap, we accessed large pediatric cancer genomics data repositories and identified ecDNA from whole genome sequencing data using cloud computing. This retrospective cohort comprises 3,631 solid tumor biopsies from 2,968 patients covering all major childhood solid tumor types. Aggressive tumor types had particularly high incidences of ecDNA. Pediatric patients whose tumors harbored extrachromosomal DNA had significantly poorer five-year overall survival than children whose tumors contained only chromosomal amplifications. We catalogue known and potentially novel oncogenes recurrently amplified on ecDNA and show that ecDNA often evolves during disease progression. These results highlight patient populations that could potentially benefit from future ecDNA-directed therapies. To facilitate discovery, we developed an interactive catalogue of ecDNA in childhood cancer at https://ccdi-ecdna.org/.

RevDate: 2023-11-10

Vahidy F, Jones SL, Tano ME, et al (2021)

Rapid Response to Drive COVID-19 Research in a Learning Health Care System: Rationale and Design of the Houston Methodist COVID-19 Surveillance and Outcomes Registry (CURATOR).

JMIR medical informatics, 9(2):e26773.

BACKGROUND: The COVID-19 pandemic has exacerbated the challenges of meaningful health care digitization. The need for rapid yet validated decision-making requires robust data infrastructure. Organizations with a focus on learning health care (LHC) systems tend to adapt better to rapidly evolving data needs. Few studies have demonstrated a successful implementation of data digitization principles in an LHC context across health care systems during the COVID-19 pandemic.

OBJECTIVE: We share our experience and provide a framework for assembling and organizing multidisciplinary resources, structuring and regulating research needs, and developing a single source of truth (SSoT) for COVID-19 research by applying fundamental principles of health care digitization, in the context of LHC systems across a complex health care organization.

METHODS: Houston Methodist (HM) comprises eight tertiary care hospitals and an expansive primary care network across Greater Houston, Texas. During the early phase of the pandemic, institutional leadership envisioned the need to streamline COVID-19 research and established the retrospective research task force (RRTF). We describe an account of the structure, functioning, and productivity of the RRTF. We further elucidate the technical and structural details of a comprehensive data repository-the HM COVID-19 Surveillance and Outcomes Registry (CURATOR). We particularly highlight how CURATOR conforms to standard health care digitization principles in the LHC context.

RESULTS: The HM COVID-19 RRTF comprises expertise in epidemiology, health systems, clinical domains, data sciences, information technology, and research regulation. The RRTF initially convened in March 2020 to prioritize and streamline COVID-19 observational research; to date, it has reviewed over 60 protocols and made recommendations to the institutional review board (IRB). The RRTF also established the charter for CURATOR, which in itself was IRB-approved in April 2020. CURATOR is a relational structured query language database that is directly populated with data from electronic health records, via largely automated extract, transform, and load procedures. The CURATOR design enables longitudinal tracking of COVID-19 cases and controls before and after COVID-19 testing. CURATOR has been set up following the SSoT principle and is harmonized across other COVID-19 data sources. CURATOR eliminates data silos by leveraging unique and disparate big data sources for COVID-19 research and provides a platform to capitalize on institutional investment in cloud computing. It currently hosts deeply phenotyped sociodemographic, clinical, and outcomes data of approximately 200,000 individuals tested for COVID-19. It supports more than 30 IRB-approved protocols across several clinical domains and has generated numerous publications from its core and associated data sources.

CONCLUSIONS: A data-driven decision-making strategy is paramount to the success of health care organizations. Investment in cross-disciplinary expertise, health care technology, and leadership commitment are key ingredients to foster an LHC system. Such systems can mitigate the effects of ongoing and future health care catastrophes by providing timely and validated decision support.

RevDate: 2023-11-11
CmpDate: 2016-10-17

Dinov ID (2016)

Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

GigaScience, 5:12.

Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

RevDate: 2025-08-18

Isik MS, Parente L, Consoli D, et al (2025)

Light use efficiency (LUE) based bimonthly gross primary productivity (GPP) for global grasslands at 30 m spatial resolution (2000-2022).

PeerJ, 13:e19774 pii:19774.

The article describes production of a high spatial resolution (30 m) bimonthly light use efficiency (LUE) based gross primary productivity (GPP) data set representing grasslands for the period 2000 to 2022. The data set is based on a reconstructed, globally complete and consistent bimonthly Landsat archive (400TB of data), combined with 1 km MOD11A1 temperature data and 1° CERES Photosynthetically Active Radiation (PAR). First, the LUE model was implemented by taking the biome-specific productivity factor (maximum LUE parameter) as a global constant, producing global bimonthly (uncalibrated) productivity data for the complete land mask. Second, the 30 m bimonthly GPP maps were derived for global grasslands using the annual grassland predictions, with values calibrated based on a maximum LUE factor of 0.86 gCm[-2]d[-1]MJ[-1]. The results of validation of the produced GPP estimates based on 527 eddy covariance flux towers show an R-square between 0.48-0.71 and root mean square error (RMSE) below ~2.3 gCm[-2]d[-1] for all land cover classes. Using a total of 92 flux towers located in grasslands, the validation of the GPP product calibrated for the grassland biome revealed an R-square between 0.51-0.70 and an RMSE smaller than ~2 gCm[-2]d[-1]. The final time-series of maps (uncalibrated and grassland GPP) are available as bimonthly (daily estimates in units of gCm[-2]d[-1]) and annual (daily average accumulated by 365 days in units of gCm[-2]yr[-1]) in Cloud-Optimized GeoTIFF (~23TB in size) as open data (CC-BY license). Recommended uses of the data include trend analysis (e.g., to determine where the largest losses in GPP occur, which could indicate potential land degradation), crop yield mapping, and modeling GHG fluxes at finer spatial resolution. Produced maps are available via SpatioTemporal Asset Catalog (http://stac.openlandmap.org) and Google Earth Engine.

RevDate: 2025-08-17

Periasamy JK, Prabhakar S, Vanathi A, et al (2025)

Enhancing cloud security and deduplication efficiency with SALIGP and cryptographic authentication.

Scientific reports, 15(1):30112.

Cloud computing enables data storage and application deployment over the internet, offering benefits such as mobility, resource pooling, and scalability. However, it also presents major challenges, particularly in managing shared resources, ensuring data security, and controlling distributed applications in the absence of centralized oversight. One key issue is data duplication, which leads to inefficient storage, increased costs, and potential privacy and security risks. To address these challenges, this study proposes a post-quantum mechanism that enhances both cloud security and deduplication efficiency. The proposed SALIGP method leverages Genetic Programming and a Geometric Approach, integrating Bloom Filters for efficient duplication detection. The Cryptographic Deduplication Authentication Scheme (CDAS) is introduced, which utilizes blockchain technology to securely store and retrieve files, while ensuring that encrypted access is limited to authorized users. This dual-layered approach effectively resolves the issue of redundant data in dynamic, distributed cloud environments. Experimental results demonstrate that the proposed method significantly reduces computation and communication times at various network nodes, particularly in key generation and group operations. Encrypting user data prior to outsourcing ensures enhanced privacy protection during the deduplication process. Overall, the proposed system leads to substantial improvements in cloud data security, reliability, and storage efficiency, offering a scalable and secure framework for modern cloud computing environments.
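
The Bloom-filter duplicate check mentioned above is straightforward to illustrate; the sketch below is a generic Bloom filter over file fingerprints (the bit-array size and hash count are arbitrary), not the SALIGP/CDAS implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter keyed on file fingerprints (illustrative sizes only)."""

    def __init__(self, size_bits: int = 10_000, num_hashes: int = 4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits)

    def _positions(self, item: bytes):
        # Derive several independent positions by salting SHA-256 with a counter.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def probably_contains(self, item: bytes) -> bool:
        """False means definitely new; True means possibly a duplicate (needs a full check)."""
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
fingerprint = hashlib.sha256(b"chunk-contents").digest()
print(bf.probably_contains(fingerprint))   # False: not seen yet, safe to store
bf.add(fingerprint)
print(bf.probably_contains(fingerprint))   # True: candidate duplicate, skip re-upload
```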

RevDate: 2025-08-18

Wang J, Li K, Han T, et al (2025)

Long-term Land Cover Dataset of the Mongolian Plateau Based on Multi-source Data and Rich Sample Annotations.

Scientific data, 12(1):1434.

The Mongolian Plateau (MP), with its unique geographical landscape and nomadic cultural features, is vital to regional ecological security and sustainable development in North Asia. Existing global land cover products often lack the classification specificity and temporal continuity required for MP-specific studies, particularly for grassland and bare area subtypes. To address this gap, a new land cover classification was designed for MP, which includes 14 categories: forests, shrubs, meadows, real steppes, dry steppes, desert steppes, wetlands, water, croplands, built-up land, barren land, desert, sand, and ice. Using machine learning and cloud computing, a novel dataset spanning the period 1990-2020 was produced. A Random Forest algorithm was employed to integrate training samples with multi-source features for land cover classification, and a two-step Random Forest classification strategy was used to improve detailed land cover results in transition regions. This process involved accurately annotating 64,345 sample points within a gridded framework. The resulting dataset achieved an overall accuracy of 83.6%. This land cover product and its approach have potential for application in vast arid and semi-arid areas.

RevDate: 2025-08-14

Ahmad T, Schuchart J, Al Ars Z, et al (2025)

GenMPI: Cluster Scalable Variant Calling for Short/Long Reads Sequencing Data.

IEEE transactions on computational biology and bioinformatics, PP: [Epub ahead of print].

Rapid technological advancements in sequencing technologies allow producing cost-effective, high-volume sequencing data. Processing this data for real-time clinical diagnosis is potentially time-consuming if done on a single computing node. This work presents a complete variant calling workflow, implemented using the Message Passing Interface (MPI) to leverage the benefits of high bandwidth interconnects. This solution (GenMPI) is portable and flexible, meaning it can be deployed to any private or public cluster/cloud infrastructure. Any alignment or variant calling application can be used with minimal adaptation. To achieve high performance, compressed input data can be streamed in parallel to alignment applications while uncompressed data can use internal file seek functionality to eliminate the bottleneck of streaming input data from a single node. Alignment output can be directly stored in multiple chromosome-specific SAM files or a single SAM file. After alignment, a distributed queue using MPI RMA (Remote Memory Access) atomic operations is created for sorting, indexing, marking of duplicates (if necessary) and variant calling applications. We ensure the accuracy of variants as compared to the original single node methods. We also show that for 300x coverage data, alignment scales almost linearly up to 64 nodes (8192 CPU cores). Overall, this work outperforms existing big data based workflows by a factor of two and is almost 20% faster than other MPI-based implementations for alignment without any extra memory overheads. Sorting, indexing, duplicate removal, and variant calling are also scalable up to an 8-node cluster. For paired-end short-read (Illumina) data, we integrated the BWA-MEM aligner and three variant callers (GATK HaplotypeCaller, DeepVariant and Octopus), while for long-read data, we integrated the Minimap2 aligner and three different variant callers (DeepVariant, DeepVariant with WhatsHap for phasing (PacBio) and Clair3 (ONT)). All codes and scripts are available at: https://github.com/abs-tudelft/gen-mpi.
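
A minimal mpi4py pattern for the kind of per-chromosome work distribution GenMPI describes is sketched below; the region list and the per-region action are placeholders, and this is not the GenMPI code itself (which additionally streams reads and uses RMA-based queues).

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Placeholder work units: one per chromosome.
regions = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]

# Static round-robin assignment of regions to MPI ranks.
my_regions = [r for i, r in enumerate(regions) if i % size == rank]

for region in my_regions:
    # A real pipeline would launch the aligner / variant caller for this region here,
    # e.g. via subprocess; we only report the assignment.
    print(f"rank {rank}: processing {region}")

comm.Barrier()                 # wait until every rank has finished its regions
if rank == 0:
    print("all ranks done; per-region outputs could now be merged and sorted")
```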

RevDate: 2025-08-17

Liu S, Shan N, Bao X, et al (2025)

Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence.

Sensors (Basel, Switzerland), 25(15):.

Unmanned platforms such as unmanned aerial vehicles, unmanned ground vehicles, and autonomous underwater vehicles often face challenges of data, device, and model heterogeneity when performing collaborative data processing tasks. Existing research does not address all three of these aspects simultaneously. To close this gap, this study designs an unmanned platform cluster architecture inspired by the cloud-edge-end model. This architecture integrates federated learning for privacy protection, leverages the advantages of distributed model training, and utilizes edge computing's near-source data processing capabilities. Additionally, this paper proposes a federated edge intelligence method (DSIA-FEI), which comprises two key components. Building on traditional federated learning, a data sharing mechanism is introduced, in which data is extracted from edge-side platforms and placed into a data sharing platform to form a public dataset. At the beginning of model training, random samples are drawn from the public dataset and distributed to each unmanned platform to mitigate the impact of data distribution heterogeneity and class imbalance during collaborative data processing on unmanned platforms. Moreover, an intelligent model aggregation strategy based on similarity measurement and loss gradients is developed. This strategy maps heterogeneous model parameters to a unified space via hierarchical parameter alignment and evaluates, in real time, the similarity between the local and global models of edge devices, along with the loss gradient, to select the optimal models for global aggregation, reducing the influence of device and model heterogeneity on the cooperative learning of unmanned platform swarms. Extensive validation was carried out on multiple datasets, and the experimental results showed that the accuracy of the proposed DSIA-FEI reaches 0.91, 0.91, 0.88, and 0.87 on the FEMNIST, FEAIR, EuroSAT, and RSSCN7 datasets, respectively, more than 10% higher than the baseline methods. In addition, the number of communication rounds is reduced by more than 40%, outperforming existing mainstream methods and verifying the effectiveness of the proposed approach.
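
The similarity-plus-loss-gradient selection idea can be sketched in a few lines. The Python snippet below loosely echoes that idea under simplifying assumptions: client models are represented as flattened weight vectors, similarity is cosine similarity to the current global model, and the additive combination with a reported loss drop is purely illustrative, not the DSIA-FEI scoring rule.

```python
# Minimal sketch (assumption: flattened parameter vectors per client): select
# clients for global aggregation by cosine similarity to the global model plus
# each client's reported loss improvement, then average the chosen models.
import numpy as np

def select_and_aggregate(global_w, client_ws, loss_drops, top_k=3):
    global_w = np.asarray(global_w, dtype=float)
    sims = np.array([
        np.dot(global_w, w) / (np.linalg.norm(global_w) * np.linalg.norm(w) + 1e-12)
        for w in client_ws
    ])
    score = sims + np.asarray(loss_drops)          # illustrative combination
    chosen = np.argsort(score)[-top_k:]            # keep the best-scoring clients
    return np.mean([client_ws[i] for i in chosen], axis=0)

# Toy usage with random weight vectors standing in for local models.
rng = np.random.default_rng(1)
g = rng.normal(size=100)
clients = [g + rng.normal(scale=s, size=100) for s in (0.1, 0.5, 1.0, 2.0)]
new_global = select_and_aggregate(g, clients, loss_drops=[0.3, 0.2, 0.1, 0.05])
```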

RevDate: 2025-08-17

Cui M, Y Wang (2025)

An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing.

Sensors (Basel, Switzerland), 25(15):.

Workflow scheduling in cloud computing is attracting increasing attention. Cloud computing can assign tasks to available virtual machine resources in cloud data centers according to scheduling strategies, providing a powerful computing platform for the execution of workflow tasks. However, developing effective workflow scheduling algorithms to find optimal or near-optimal task-to-VM allocation solutions that meet users' specific QoS requirements remains an open area of research. In this paper, we propose a hybrid QoS-aware workflow scheduling algorithm named HLWOA to address the problem of simultaneously minimizing the completion time and execution cost of workflow scheduling in cloud computing. First, the workflow scheduling problem in cloud computing is modeled as a multi-objective optimization problem. Then, based on the heterogeneous earliest finish time (HEFT) heuristic optimization algorithm, tasks are reverse topologically sorted and assigned to virtual machines with the earliest finish time to construct an initial workflow task scheduling sequence. Furthermore, an improved Whale Optimization Algorithm (WOA) based on Lévy flight is proposed. The output solution of HEFT is used as one of the initial population solutions in WOA to accelerate the convergence speed of the algorithm. Subsequently, a Lévy flight search strategy is introduced in the iterative optimization phase to prevent the algorithm from falling into local optima. The proposed HLWOA is evaluated on the WorkflowSim platform using real-world scientific workflows (CyberShake and Montage) with different task scales (100 and 1000). Experimental results demonstrate that HLWOA outperforms HEFT, HEPGA, and standard WOA in both makespan and cost, with normalized fitness values consistently ranking first.
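
The Lévy-flight component lends itself to a compact illustration. Below is a minimal Python sketch of the standard Mantegna-style Lévy step generator commonly used in Lévy-enhanced WOA variants; the step scale, dimensionality, and encoding of a task-to-VM assignment as a real-valued vector are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (standard Mantegna formulation, not the authors' code):
# generating Levy-flight steps of the kind used to perturb candidate solutions
# so the whale search can escape local optima.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a candidate task-to-VM assignment encoded as a real-valued vector.
rng = np.random.default_rng(0)
position = rng.random(50)                        # 50 workflow tasks (illustrative)
position = position + 0.01 * levy_step(50, rng=rng)
```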

RevDate: 2025-08-17

Mtowe DP, Long L, DM Kim (2025)

Low-Latency Edge-Enabled Digital Twin System for Multi-Robot Collision Avoidance and Remote Control.

Sensors (Basel, Switzerland), 25(15):.

This paper proposes a low-latency and scalable architecture for Edge-Enabled Digital Twin networked control systems (E-DTNCS) aimed at multi-robot collision avoidance and remote control in dynamic and latency-sensitive environments. Traditional approaches, which rely on centralized cloud processing or direct sensor-to-controller communication, are inherently limited by excessive network latency, bandwidth bottlenecks, and a lack of predictive decision-making, thus constraining their effectiveness in real-time multi-agent systems. To overcome these limitations, we propose a novel framework that seamlessly integrates edge computing with digital twin (DT) technology. By performing localized preprocessing at the edge, the system extracts semantically rich features from raw sensor data streams, reducing the transmission overhead of the original data. This shift from raw data to feature-based communication significantly alleviates network congestion and enhances system responsiveness. The DT layer leverages these extracted features to maintain high-fidelity synchronization with physical robots and to execute predictive models for proactive collision avoidance. To empirically validate the framework, a real-world testbed was developed, and extensive experiments were conducted with multiple mobile robots. The results revealed a substantial reduction in collision rates when DT was deployed, and further improvements were observed with E-DTNCS integration due to significantly reduced latency. These findings confirm the system's enhanced responsiveness and its effectiveness in handling real-time control tasks. The proposed framework demonstrates the potential of combining edge intelligence with DT-driven control in advancing the reliability, scalability, and real-time performance of multi-robot systems for industrial automation and mission-critical cyber-physical applications.

RevDate: 2025-08-17

Stojanović R, Đurković J, Vukmirović M, et al (2025)

Medical Data over Sound-CardiaWhisper Concept.

Sensors (Basel, Switzerland), 25(15):.

Data over sound (DoS) is an established technique that has experienced a resurgence in recent years, finding applications in areas such as contactless payments, device pairing, authentication, presence detection, toys, and offline data transfer. This study introduces CardiaWhisper, a system that extends the DoS concept to the medical domain by using a medical data-over-sound (MDoS) framework. CardiaWhisper integrates wearable biomedical sensors with home care systems, edge or IoT gateways, and telemedical networks or cloud platforms. Using a transmitter device, vital signs such as ECG (electrocardiogram) signals, PPG (photoplethysmogram) signals, RR (respiratory rate), and ACC (acceleration/movement) are sensed, conditioned, encoded, and acoustically transmitted to a nearby receiver (typically a smartphone, tablet, or other gadget) and can be further relayed to edge and cloud infrastructures. As a case study, this paper presents the real-time transmission and processing of ECG signals. The transmitter integrates an ECG sensing module, an encoder (either a PLL-based FM modulator chip or a microcontroller), and a sound emitter in the form of a standard piezoelectric speaker. The receiver, in the form of a mobile phone, tablet, or desktop computer, captures the acoustic signal via its built-in microphone and executes software routines to decode the data. It then enables a range of control and visualization functions for both local and remote users. Emphasis is placed on describing the system architecture and its key components, as well as the software methodologies used for signal decoding on the receiver side, where several algorithms are implemented using open-source, platform-independent technologies such as JavaScript, HTML, and CSS. While the main focus is on the transmission of analog data, digital data transmission is also illustrated. The CardiaWhisper system is evaluated across several performance parameters, including functionality, complexity, speed, noise immunity, power consumption, range, and cost-efficiency. Quantitative measurements of the signal-to-noise ratio (SNR) were performed in various realistic indoor scenarios, including different distances, obstacles, and noise environments. Preliminary results are presented, along with a discussion of design challenges, limitations, and feasible applications. Our experience demonstrates that CardiaWhisper provides a low-power, eco-friendly alternative to traditional RF or Bluetooth-based medical wearables in various applications.
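
A core building block of any DoS receiver is detecting which audio tone dominates a short microphone frame. The Python sketch below uses the Goertzel algorithm for that purpose; it assumes a simple two-tone FSK-style encoding, which is one common DoS scheme but not necessarily CardiaWhisper's modulation, and Python is used here even though the paper's receiver routines are implemented in JavaScript.

```python
# Minimal sketch (assumed two-tone FSK-style encoding, not CardiaWhisper's
# actual scheme): deciding a bit from a short audio frame by comparing
# Goertzel power at two candidate frequencies.
import numpy as np

def goertzel_power(frame, target_hz, fs):
    n = len(frame)
    k = int(0.5 + n * target_hz / fs)            # nearest DFT bin
    w = 2 * np.pi * k / n
    coeff = 2 * np.cos(w)
    s_prev = s_prev2 = 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

fs = 44100
t = np.arange(0, 0.02, 1 / fs)
frame = np.sin(2 * np.pi * 18000 * t)            # simulated "1" tone at 18 kHz
bit = int(goertzel_power(frame, 18000, fs) > goertzel_power(frame, 17000, fs))
print(bit)  # -> 1
```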

RevDate: 2025-08-16

Cui G, Zhang W, Xu W, et al (2025)

Efficient workflow scheduling using an improved multi-objective memetic algorithm in cloud-edge-end collaborative framework.

Scientific reports, 15(1):29754 pii:10.1038/s41598-025-08691-y.

With the rapid advancement of large-scale model technologies, AI agent frameworks built on foundation models have become a central focus of artificial-intelligence research. In cloud-edge-end collaborative computing frameworks, efficient workflow scheduling is essential to reducing both server energy consumption and overall makespan. This paper addresses this challenge by proposing an Improved Multi-Objective Memetic Algorithm (IMOMA) that simultaneously optimizes energy consumption and makespan. First, a multi-objective optimization model incorporating task execution constraints and priority constraints is developed, and complexity analysis confirms its NP-hard nature. Second, the IMOMA algorithm enhances population diversity through dynamic opposition-based learning, introduces local search operators tailored for bi-objective optimization, and maintains Pareto optimal solutions via an elite archive. A dynamic selection mechanism based on operator historical performance and an adaptive local search triggering strategy effectively balance global exploration and local exploitation capabilities. Experimental results on 10 standard datasets demonstrate that IMOMA achieves improvements of 93%, 7%, and 19% in hypervolume and 58%, 1%, and 23% in inverted generational distance compared to MOPSO, NSGA-II, and SPEA-II algorithms. Additionally, ablation experiments reveal the influence mechanisms of scheduling strategies, server configurations, and other constraints on optimization objectives, providing an engineering-oriented solution for real-world cloud-edge-end collaborative scenarios.
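
The elite-archive mechanism for bi-objective (energy, makespan) minimization can be shown generically. The Python sketch below is a standard Pareto-archive helper, not IMOMA itself, and the objective values are invented purely for illustration.

```python
# Minimal sketch (generic bi-objective helper, not IMOMA): keeping an elite
# archive of Pareto-optimal (energy, makespan) pairs, both to be minimized.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(kept, candidate) for kept in archive):
        return archive                                   # candidate is dominated
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

archive = []
for solution in [(120, 30), (100, 35), (90, 40), (95, 33), (150, 25)]:
    archive = update_archive(archive, solution)
print(archive)  # surviving non-dominated (energy, makespan) trade-offs
```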

RevDate: 2025-08-16

Maray M (2025)

Intelligent deep learning for human activity recognition in individuals with disabilities using sensor based IoT and edge cloud continuum.

Scientific reports, 15(1):29640.

Aging is associated with a reduction in the capability to perform routine everyday activities and a decline in physical activity, which affects physical and mental health. A human activity recognition (HAR) system can be a valuable tool for elderly individuals or patients, as it monitors their activities and detects any significant changes in behavior or events. When integrated with the Internet of Things (IoT), this system enables individuals to live independently while ensuring their well-being. The IoT-edge-cloud framework enhances this by processing data as close to the source as possible, either on edge devices or directly on the IoT devices themselves. However, the massive number of activity constellations and sensor configurations makes the HAR problem challenging to solve deterministically. HAR involves collecting sensor data to classify diverse human activities and is a rapidly growing field. It presents valuable insights into the health, fitness, and overall wellness of individuals outside of hospital settings. Therefore, machine learning (ML) models are widely used to build HAR systems that learn patterns of human activity from sensor data. In this manuscript, an Intelligent Deep Learning Technique for Human Activity Recognition of Persons with Disabilities using the Sensors Technology (IDLTHAR-PDST) technique is proposed. The purpose of the IDLTHAR-PDST technique is to efficiently recognize and interpret activities by leveraging sensor technology within a smart IoT-Edge-Cloud continuum. Initially, the IDLTHAR-PDST technique utilizes a min-max normalization-based data pre-processing model to optimize sensor data consistency and enhance model performance. For feature subset selection, the enhanced honey badger algorithm (EHBA) model is used to effectively reduce dimensionality while retaining critical activity-related features. Finally, the deep belief network (DBN) model is employed for HAR. To exhibit the improved performance of the proposed IDLTHAR-PDST model, a comprehensive simulation study is accomplished. The performance validation of the IDLTHAR-PDST model showed a superior accuracy of 98.75% over existing techniques.
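
The overall normalize-then-classify flow can be illustrated with a short Python sketch. It is a generic scikit-learn pipeline, not the IDLTHAR-PDST code: a small MLP stands in for the deep belief network, there is no EHBA feature selection step, and the sensor features and activity labels are synthetic.

```python
# Minimal sketch (generic pipeline, not IDLTHAR-PDST): min-max scaling of
# sensor features followed by a simple HAR classifier; an MLP stands in for
# the paper's deep belief network.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))                 # 24 accelerometer/gyroscope features
y = rng.integers(0, 6, size=600)               # 6 activity classes (illustrative)

har_model = make_pipeline(
    MinMaxScaler(),                            # the min-max pre-processing step
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
)
har_model.fit(X[:500], y[:500])
print(har_model.score(X[500:], y[500:]))       # held-out accuracy on toy data
```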

RevDate: 2025-08-16

Sorin V, Collins JD, Bratt AK, et al (2025)

Evaluating prompt and data perturbation sensitivity in large language models for radiology reports classification.

JAMIA open, 8(4):ooaf073.

OBJECTIVES: Large language models (LLMs) offer potential in natural language processing tasks in healthcare. Due to the need for high accuracy, understanding their limitations is essential. The purpose of this study was to evaluate the performance of LLMs in classifying radiology reports for the presence of pulmonary embolism (PE) under various conditions, including different prompt designs and data perturbations.

MATERIALS AND METHODS: In this retrospective, institutional review board-approved study, we evaluated three Google LLMs (Gemini-1.5-Pro, Gemini-1.5-Flash-001, and Gemini-1.5-Flash-002) in classifying 11,999 pulmonary CT angiography radiology reports for PE. Ground truth labels were determined by concordance between a computer vision-based PE detection (CVPED) algorithm and multiple LLM runs under various configurations. Discrepancies between the algorithms' classifications were aggregated and manually reviewed. We evaluated the effects of prompt design, data perturbations, and repeated analyses across geographic cloud regions. Performance metrics were calculated.

RESULTS: Of 11,999 reports, 1296 (10.8%) were PE-positive. Accuracy across LLMs ranged between 0.953 and 0.996. The highest recall was achieved with a prompt modified after a review of the misclassified cases (up to 0.997). Few-shot prompting improved recall (up to 0.99), while chain-of-thought prompting generally degraded performance. Gemini-1.5-Flash-002 demonstrated the highest robustness against data perturbations. Geographic cloud region variability was minimal for Gemini-1.5-Pro, while the Flash models showed stable performance.

DISCUSSION AND CONCLUSION: LLMs demonstrated high performance in classifying radiology reports, though results varied with prompt design and data quality. These findings underscore the need for systematic evaluation and validation of LLMs for clinical applications, particularly in high-stakes scenarios.
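
Few-shot prompt construction of the kind evaluated here is easy to sketch. In the Python snippet below, the example reports, labels, and the `call_llm` placeholder are all hypothetical; the study's actual Gemini prompts and client configuration are not reproduced.

```python
# Minimal sketch of few-shot prompt construction for PE report classification.
# `call_llm` is a hypothetical placeholder for whatever model client is used;
# only the prompt-building logic is shown concretely.
FEW_SHOT_EXAMPLES = [
    ("Filling defect in the right lower lobe segmental artery.", "PE-positive"),
    ("No evidence of pulmonary embolism. Patent pulmonary arteries.", "PE-negative"),
]

def build_prompt(report_text):
    lines = ["Classify the CT pulmonary angiography report as PE-positive or PE-negative.",
             "Answer with exactly one label.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines += [f"Report: {example}", f"Label: {label}", ""]
    lines += [f"Report: {report_text}", "Label:"]
    return "\n".join(lines)

def call_llm(prompt):
    # Hypothetical stand-in: replace with the chosen model client's call.
    raise NotImplementedError("plug in the model client of your choice")

prompt = build_prompt("Segmental filling defects bilaterally, consistent with PE.")
print(prompt)
```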

RevDate: 2025-08-14

Hizem M, Aoueileyine MO, Belhaouari SB, et al (2025)

Sustainable E-Health: Energy-Efficient Tiny AI for Epileptic Seizure Detection via EEG.

Biomedical engineering and computational biology, 16:11795972241283101.

Tiny Artificial Intelligence (Tiny AI) is transforming resource-constrained embedded systems, particularly in e-health applications, by introducing a shift in Tiny Machine Learning (TinyML) and its integration with the Internet of Things (IoT). Unlike conventional machine learning (ML), which demands substantial processing power and typically delegates processing to cloud infrastructure, TinyML allows lightweight models to run on embedded devices. This study aimed to (i) develop a TinyML workflow that details the steps for model creation and deployment in resource-constrained environments and (ii) apply the workflow to e-health applications for the real-time detection of epileptic seizures using electroencephalography (EEG) data. The methodology employs a dataset of 4097 EEG recordings per patient, each 23.5 seconds long, from 500 patients, to develop a robust and resilient model. The model was deployed using TinyML on microcontrollers tailored to hardware with limited resources. TensorFlow Lite (TFLite) efficiently runs ML models on small devices, such as wearables. Simulation outcomes demonstrated significant performance, particularly in predicting epileptic seizures, with the ExtraTrees Classifier achieving a notable 99.6% Area Under the Curve (AUC) on the validation set. Because of its superior performance, the ExtraTrees Classifier was selected as the preferred model. For the optimized TinyML model, the accuracy remained practically unchanged, whereas inference time was significantly reduced. Additionally, the converted model had a smaller size of 256 KB, approximately ten times smaller, making it suitable for microcontrollers with a capacity of no more than 1 MB. These findings highlight the potential of TinyML to significantly enhance healthcare applications by enabling real-time, energy-efficient decision-making directly on local devices. This is especially valuable in scenarios with limited computing resources or during emergencies, as it reduces latency, ensures privacy, and operates without reliance on cloud infrastructure. Moreover, by reducing the size of training datasets needed, TinyML helps lower overall costs and minimizes the risk of overfitting, making it an even more cost-effective and reliable solution for healthcare innovations.
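
The conversion step that shrinks a trained model for microcontroller deployment can be illustrated with TensorFlow Lite directly. In the Python sketch below, a small Keras network stands in for the paper's ExtraTrees classifier (which TFLite does not convert directly), and the input dimensionality and file name are illustrative assumptions.

```python
# Minimal sketch (assumption: a small Keras model is available): converting a
# trained model to TensorFlow Lite with default optimizations, the step that
# shrinks it to fit microcontroller-class storage budgets.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(178,)),             # EEG window features (illustrative)
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # seizure / no-seizure
])
model.compile(optimizer="adam", loss="binary_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("seizure_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"model size: {len(tflite_model) / 1024:.1f} KB")
```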

RevDate: 2025-08-12

Osório NS, LD Garma (2025)

Teaching Python with team-based learning: using cloud-based notebooks for interactive coding education.

FEBS open bio [Epub ahead of print].

Computer programming and bioinformatics are increasingly essential topics in life sciences research, facilitating the analysis of large and complex 'omics' datasets. However, they remain challenging for students without a background in mathematics or computing. To address challenges in teaching programming within biomedical education, this study integrates team-based learning (TBL) with cloud-hosted interactive Python notebooks, targeting enhanced student engagement, understanding, and collaboration in bioinformatics in two Master's-level classes with 28 biomedical students in total. Four interactive notebooks covering Python basics and practical bioinformatics applications, ranging from data manipulation to multi-omics analysis, were developed. Hosted on GitHub and integrated with Google Colaboratory, these notebooks ensured equal access and eliminated technical barriers for students with varied computing setups. During the TBL session, students were highly engaged with the notebooks, which led to a greater interest in Python and increased confidence in using bioinformatics tools. Feedback highlighted the value of TBL and interactive notebooks in enriching the learning experience, while also identifying a need for further development in bioinformatics research skills. Although more validity evidence is needed in future studies, this blended, cloud-based TBL approach effectively made bioinformatics education more accessible and engaging, suggesting its potential for enhancing computational training across life sciences.

RevDate: 2025-08-13
CmpDate: 2025-08-11

González LL, Arias-Serrano I, Villalba-Meneses F, et al (2024)

Deep learning neural network development for the classification of bacteriocin sequences produced by lactic acid bacteria.

F1000Research, 13:981.

BACKGROUND: The rise of antibiotic-resistant bacteria presents a pressing need for exploring new natural compounds with innovative mechanisms to replace existing antibiotics. Bacteriocins offer promising alternatives for developing therapeutic and preventive strategies in livestock, aquaculture, and human health. Specifically, bacteriocins produced by lactic acid bacteria (LAB) are Generally Recognized As Safe (GRAS) and hold Qualified Presumption of Safety (QPS) status. This study aims to develop a deep learning model specifically designed to classify bacteriocins by their LAB origin, using interpretable k-mer features and embedding vectors to enable applications in antimicrobial discovery.

METHODS: We developed a deep learning neural network for binary classification of bacteriocin amino acid sequences (BacLAB vs. Non-BacLAB). Features were extracted using k-mers (k=3,5,7,15,20) and vector embeddings (EV). Ten feature combinations were tested (e.g., EV, EV+5-mers+7-mers). Sequences were filtered by length (50-2000 AA) to ensure uniformity, and class balance was maintained (24,964 BacLAB vs. 25,000 Non-BacLAB). The model was trained on Google Colab, demonstrating computational accessibility without specialized hardware.

RESULTS: The '5-mers+7-mers+EV' group achieved the best performance, with k-fold cross-validation (k=30) showing: 9.90% loss, 90.14% accuracy, 90.30% precision, 90.10% recall and F1 score. Fold 22 stood out with 8.50% loss, 91.47% accuracy, and 91.00% precision, recall, and F1 score. Five sets of 100 LAB-specific k-mers were identified, revealing conserved motifs. Despite high accuracy, sequence length variation (50-2000 AA) may bias k-mer representation, favoring longer sequences. Additionally, experimental validation is required to confirm the biological activity of predicted bacteriocins. These aspects highlight directions for future research.

CONCLUSIONS: The model developed in this study achieved results consistent with those reported in the reviewed literature, outperforming some studies by 3-10%. Its implementation in resource-limited settings is feasible via cloud platforms like Google Colab. The identified k-mers could guide the design of synthetic antimicrobials, pending further in vitro validation.
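
K-mer featurization of amino-acid sequences, the backbone of the '5-mers+7-mers+EV' feature group, can be sketched generically in Python. The snippet below is not the authors' pipeline: the k values are taken from the abstract, but the toy sequences and the simple shared-vocabulary count vectors are illustrative assumptions (the study additionally combines these with embedding vectors).

```python
# Minimal sketch (generic k-mer featurization, not the authors' pipeline):
# turning amino-acid sequences into k-mer count vectors over a shared vocabulary.
from collections import Counter

def kmer_counts(sequence, k):
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def featurize(sequences, ks=(5, 7)):
    per_seq = []
    for s in sequences:
        counts = Counter()
        for k in ks:
            counts.update(kmer_counts(s, k))
        per_seq.append(counts)
    vocab = sorted(set().union(*per_seq))          # shared k-mer vocabulary
    return [[c[kmer] for kmer in vocab] for c in per_seq], vocab

X, vocab = featurize(["MKTLLILAVLAAVSA", "MKKIEKLTEKEMANIIGG"])  # toy sequences
print(len(vocab), X[0][:10])
```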

RevDate: 2025-08-13

Gao Z, Liu D, C Zheng (2025)

Vehicle-to-everything decision optimization and cloud control based on deep reinforcement learning.

Scientific reports, 15(1):29160.

To address the challenges of decision optimization and road segment hazard assessment within complex traffic environments, and to enhance the safety and responsiveness of autonomous driving, a Vehicle-to-Everything (V2X) decision framework is proposed. This framework is structured into three modules: vehicle perception, decision-making, and execution. The vehicle perception module integrates sensor fusion techniques to capture real-time environmental data, employing deep neural networks to extract essential information. In the decision-making module, deep reinforcement learning algorithms are applied to optimize decision processes by maximizing expected rewards. Meanwhile, the road segment hazard classification module, utilizing both historical traffic data and real-time perception information, adopts a hazard evaluation model to classify road conditions automatically, providing real-time feedback to guide vehicle decision-making. Furthermore, an autonomous driving cloud control platform is designed, augmenting decision-making capabilities through centralized computing resources, enabling large-scale data analysis, and facilitating collaborative optimization. Experimental evaluations conducted within simulation environments and utilizing the KITTI dataset demonstrate that the proposed V2X decision optimization method substantially outperforms conventional decision algorithms. Vehicle decision accuracy increased by 9.0%, rising from 89.2 to 98.2%. Additionally, the response time of the cloud control system decreased from 178 ms to 127 ms, marking a reduction of 28.7%, which significantly enhances decision efficiency and real-time performance. The introduction of the road segment hazard classification model also results in a hazard assessment accuracy of 99.5%, maintaining over 95% accuracy even in high-density traffic and complex road conditions, thus illustrating strong adaptability. The results highlight the effectiveness of the proposed V2X decision optimization framework and cloud control platform in enhancing the decision quality and safety of autonomous driving systems.
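
The reward-maximizing decision loop at the heart of such a framework can be illustrated in miniature. The Python sketch below uses tabular Q-learning as a deliberately simplified stand-in for the paper's deep reinforcement learning module; the state and action discretizations, reward signal, and transition model are all invented for illustration.

```python
# Minimal sketch (tabular Q-learning as a simplified stand-in for deep RL):
# updating action values for discrete driving decisions from
# (state, action, reward, next_state) experience tuples.
import numpy as np

n_states, n_actions = 10, 3          # e.g. hazard level x {keep lane, slow, reroute}
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def q_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

rng = np.random.default_rng(0)
for _ in range(1000):                # toy experience loop with random transitions
    s, a = rng.integers(n_states), rng.integers(n_actions)
    r, s_next = rng.normal(), rng.integers(n_states)
    q_update(s, a, r, s_next)

policy = Q.argmax(axis=1)            # greedy decision per (discretized) state
print(policy)
```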

RevDate: 2025-08-13

Murala DK, Prasada Rao KV, Vuyyuru VA, et al (2025)

A service-oriented microservice framework for differential privacy-based protection in industrial IoT smart applications.

Scientific reports, 15(1):29230.

The rapid advancement of key technologies such as Artificial Intelligence (AI), the Internet of Things (IoT), and edge-cloud computing has significantly accelerated the transformation toward smart industries across various domains, including finance, manufacturing, and healthcare. Edge and cloud computing offer low-cost, scalable, and on-demand computational resources, enabling service providers to deliver intelligent data analytics and real-time insights to end-users. However, despite their potential, the practical adoption of these technologies faces critical challenges, particularly concerning data privacy and security. AI models, especially in distributed environments, may inadvertently retain and leak sensitive training data, exposing users to privacy risks in the event of malicious attacks. To address these challenges, this study proposes a privacy-preserving, service-oriented microservice architecture tailored for intelligent Industrial IoT (IIoT) applications. The architecture integrates Differential Privacy (DP) mechanisms into the machine learning pipeline to safeguard sensitive information. It supports both centralised and distributed deployments, promoting flexible, scalable, and secure analytics. We developed and evaluated differentially private models, including Radial Basis Function Networks (RBFNs), across a range of privacy budgets (ɛ), using both real-world and synthetic IoT datasets. Experimental evaluations using RBFNs demonstrate that the framework maintains high predictive accuracy (up to 96.72%) with acceptable privacy guarantees for budgets [Formula: see text]. Furthermore, the microservice-based deployment achieves an average latency reduction of 28.4% compared to monolithic baselines. These results confirm the effectiveness and practicality of the proposed architecture in delivering privacy-preserving, efficient, and scalable intelligence for IIoT environments. Additionally, the microservice-based design enhanced computational efficiency and reduced latency through dynamic service orchestration. This research demonstrates the feasibility of deploying robust, privacy-conscious AI services in IIoT environments, paving the way for secure, intelligent, and scalable industrial systems.
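
The role of the privacy budget ε can be shown with the simplest differentially private primitive. The Python sketch below applies the generic Laplace mechanism to a bounded sensor statistic; it is not the paper's DP-RBFN training, and the value bounds and ε settings are illustrative assumptions.

```python
# Minimal sketch (generic Laplace mechanism, not the paper's RBFN pipeline):
# releasing a differentially private mean of a bounded sensor reading, showing
# how a larger epsilon budget trades privacy for accuracy.
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)   # sensitivity of the mean
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

readings = np.random.default_rng(0).uniform(20, 80, size=1000)  # e.g. temperatures
for eps in (0.1, 1.0, 10.0):
    print(eps, round(dp_mean(readings, 0, 100, eps), 3))
```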


