QUERY RUN: 20 Oct 2025 at 01:41
HITS: 4263

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 20 Oct 2025 at 01:41

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2025-10-17

Upadhiyay A, A Jain (2025)

Cyber resilient framework with energy efficient swarm routing and ensemble threat detection in fog assisted wireless sensor networks.

Scientific reports, 15(1):36461.

The rapid growth of Wireless Sensor Networks (WSNs) and their integration with fog computing have enabled faster data processing and reduced reliance on cloud infrastructures. However, these networks remain constrained by limited energy resources, increased latency under dynamic traffic, and heightened vulnerability to cyberattacks. Traditional routing protocols typically optimize either energy efficiency or security, but rarely address both in a unified and adaptive manner. This work proposes a cyber-resilient, energy-optimized routing framework for fog-enabled WSNs that integrates a modified Ant Colony Optimization (ACO) algorithm with an ensemble-based Intrusion Detection System (IDS). The routing layer employs a multi-objective cost function that jointly considers distance, residual energy, and security risk. To enhance adaptability, CatBoost is deployed at energy-constrained sensor nodes for local energy and density assessment, while XGBoost operates at fog nodes to evaluate global path quality and congestion. The IDS ensemble—comprising Support Vector Machines (SVM), k-Nearest Neighbours (KNN), and Long Short-Term Memory (LSTM) networks—detects Denial-of-Service (DoS), Probe, R2L, and U2R attacks in real time. Importantly, detected threats immediately influence routing decisions, enabling compromised links to be bypassed without disrupting network operations. Extensive MATLAB simulations show that the proposed framework achieves 96.5% energy savings, an 85.83% latency reduction, and an 89% intrusion detection rate, validated through statistical analysis across multiple runs. By transforming IDS from a passive monitoring tool into an active routing controller, this work delivers a secure, adaptive, and energy-efficient solution for dynamic and resource-constrained IoT and WSN environments.
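
The multi-objective cost function is not spelled out in the abstract; as a rough illustration only, the Python sketch below shows how distance, residual energy, and an IDS-reported security risk might be folded into a single link cost that an ACO-style router could minimize. The weights and normalization constants are assumptions, not values from the paper.

```python
def link_cost(distance_m, residual_energy_j, security_risk,
              max_distance_m=100.0, max_energy_j=5.0,
              w_d=0.4, w_e=0.4, w_s=0.2):
    """Combine distance, residual energy, and IDS-reported risk into one cost.

    Weights and normalization constants are illustrative assumptions only.
    """
    d = distance_m / max_distance_m              # shorter hops are cheaper
    e = 1.0 - residual_energy_j / max_energy_j   # depleted nodes are penalized
    s = security_risk                            # 0 (trusted) .. 1 (compromised)
    return w_d * d + w_e * e + w_s * s

# A link flagged by the IDS ensemble becomes expensive and is bypassed:
print(link_cost(40.0, 4.2, 0.05))   # healthy, trusted link
print(link_cost(40.0, 4.2, 0.95))   # same link after a detected threat
```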

RevDate: 2025-10-18

Lockee B, Vandervelden CA, Tilden DR, et al (2025)

Establishment of a Diabetes-Tailored Data Intelligence Platform Enhances Clinical Care, Enables Risk-Based Monitoring, and Facilitates Population-Health-Based Approaches at a Pediatric Diabetes Network.

Journal of diabetes science and technology [Epub ahead of print].

BACKGROUND: Patient-generated health data (PGHD) represents an opportunity to customize care, particularly in type 1 diabetes (T1D) care, where continuous glucose monitor (CGM) and insulin pump usage continues to rise. Previous solutions for integrating CGM data into the electronic health record (EHR) have been limited in their ability to integrate data from multiple sources and data streams, ensure data fidelity, and rapidly adapt to changes in data output from numerous vendors. We developed a novel data infrastructure contained outside of the EHR to provide an alternative approach to PGHD integration, enable diabetes centers to identify and predict risk, and facilitate research and quality improvement.

METHODS: We identified three key capabilities: ingesting and storing a wide variety of data, refining raw data into actionable insights, and visualizing and reporting to decision makers. To meet these requirements, we built a data intelligence platform, which we coined the diabetes data dock (D-data dock), on the Microsoft Azure cloud platform.

RESULTS: The D-data dock houses approximately 100 million CGM measurements, one million clinical events and insulin bolus records, and a near complete EHR record covering approximately 3000 patients per year from 2016 to 2023. We provide case studies detailing how the D-data dock allows timely monitoring of CGM data, enables novel study designs, and powers machine-learning-informed supplemental care interventions.

CONCLUSIONS: The D-data dock is a novel approach to harnessing disparate data streams to improve patient care, enable timely interventions, and drive innovation to improve the lives and care of people with T1D.

RevDate: 2025-10-16

Sun F, Guo L, Meng Y, et al (2025)

Ultra-simplified fabrication of all-silver memristor arrays.

Nanoscale advances [Epub ahead of print].

Brain-inspired neuromorphic computing strives to emulate the human brain's remarkable capabilities, including parallel information processing, adaptive learning, and cognitive inference, while maintaining ultra-low power consumption characteristics. The exponential progress in cloud computing and supercomputing technologies has generated an increasing demand for highly integrated electronic storage systems with enhanced performance capabilities. To address the challenges of tedious fabrication, we innovatively offer a feasible strategy: using weaved silver electrodes combined with in situ formed silver oxide insulating layers to create a high-performance two-terminal memristor array configuration. This memristor possesses a high ON/OFF ratio (above 10[6]) and good durability (200 cycles). Moreover, its innovative weaving-type configuration enables higher integration density while maintaining conformal attachment capability onto the skin. Our ultra-simplified fabrication strategy provides a novel alternative approach for streamlining fabrication processes, enabling the realization of advanced device integration and system miniaturization.

RevDate: 2025-10-16

Khan HM, Jabeen F, Khan A, et al (2025)

IoT-Enabled Fog-Based Secure Aggregation in Smart Grids Supporting Data Analytics.

Sensors (Basel, Switzerland), 25(19): pii:s25196240.

The Internet of Things (IoT) has transformed multiple industries, providing significant potential for automation, efficiency, and enhanced decision-making. The incorporation of IoT and data analytics in smart grid represents a groundbreaking opportunity for the energy sector, delivering substantial advantages in efficiency, sustainability, and customer empowerment. This integration enables smart grids to autonomously monitor energy flows and adjust to fluctuations in energy demand and supply in a flexible and real-time fashion. Statistical analytics, as a fundamental component of data analytics, provides the necessary tools and techniques to uncover patterns, trends, and insights within datasets. Nevertheless, it is crucial to address privacy and security issues to fully maximize the potential of data analytics in smart grids. This paper makes several significant contributions to the literature on secure, privacy-aware aggregation schemes in smart grids. First, we introduce a Fog-enabled Secure Data Analytics Operations (FESDAO) scheme which offers a distributed architecture incorporating robust security features such as secure aggregation, authentication, fault tolerance and resilience against insider threats. The scheme achieves privacy during data aggregation through a modified Boneh-Goh-Nissim cryptographic scheme along with other mechanisms. Second, FESDAO also supports statistical analytics on metering data at the cloud control center and fog node levels. FESDAO ensures reliable aggregation and accurate data analytical results, even in scenarios where smart meters fail to report data, thereby preserving both analytical operation computation accuracy and latency. We further provide comprehensive security analyses to demonstrate that the proposed approach effectively supports data privacy, source authentication, fault tolerance, and resilience against false data injection and replay attacks. Lastly, we offer thorough performance evaluations to illustrate the efficiency of the suggested scheme in comparison to current state-of-the-art schemes, considering encryption, computation, aggregation, decryption, and communication costs. Moreover, a detailed security analysis has been conducted to verify the scheme's resistance against insider collusion attacks, replay attack, and false data injection (FDI) attack.
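
The abstract does not give the details of the modified Boneh-Goh-Nissim construction; the sketch below only recalls the additively homomorphic property that this family of schemes relies on for privacy-preserving aggregation, where a fog node combines ciphertexts without decrypting individual meter readings.

```latex
% Additive homomorphism (illustrative; parameters of the modified scheme omitted):
% multiplying ciphertexts corresponds to adding the underlying plaintext readings.
E(m_1)\cdot E(m_2) = E(m_1 + m_2)
\quad\Longrightarrow\quad
\prod_{i=1}^{n} E(m_i) = E\!\left(\sum_{i=1}^{n} m_i\right)
```

The control center therefore decrypts only the aggregate, never an individual smart meter's value.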

RevDate: 2025-10-16
CmpDate: 2025-10-16

Qian Y, KL Siau (2025)

Advances in IoT, AI, and Sensor-Based Technologies for Disease Treatment, Health Promotion, Successful Ageing, and Ageing Well.

Sensors (Basel, Switzerland), 25(19): pii:s25196207.

Recent advancements in the Internet of Things (IoT) and artificial intelligence (AI) are unlocking transformative opportunities across society. One of the most critical challenges addressed by these technologies is the ageing population, which presents mounting concerns for healthcare systems and quality of life worldwide. By supporting continuous monitoring, personal care, and data-driven decision-making, IoT and AI are shifting healthcare delivery from a reactive approach to a proactive one. This paper presents a comprehensive overview of IoT-based systems with a particular focus on the Internet of Healthcare Things (IoHT) and their integration with AI, referred to as the Artificial Intelligence of Things (AIoT). We illustrate the operating procedures of IoHT systems in detail. We highlight their applications in disease management, health promotion, and active ageing. Key enabling technologies, including cloud computing, edge computing architectures, machine learning, and smart sensors, are examined in relation to continuous health monitoring, personalized interventions, and predictive decision support. This paper also indicates potential challenges that IoHT systems face, including data privacy, ethical concerns, and technology transition and aversion, and it reviews corresponding defense mechanisms from perception, policy, and technology levels. Future research directions are discussed, including explainable AI, digital twins, metaverse applications, and multimodal sensor fusion. By integrating IoT and AI, these systems offer the potential to support more adaptive and human-centered healthcare delivery, ultimately improving treatment outcomes and supporting healthy ageing.

RevDate: 2025-10-16

Qi Y, Du Y, Guo Y, et al (2025)

Task Offloading and Resource Allocation Strategy in Non-Terrestrial Networks for Continuous Distributed Task Scenarios.

Sensors (Basel, Switzerland), 25(19): pii:s25196195.

Leveraging non-terrestrial networks for edge computing is crucial for the development of 6G, the Internet of Things, and ubiquitous digitalization. In such scenarios, diverse tasks often exhibit continuously distributed attributes, while existing research predominantly relies on qualitative thresholds for task classification, failing to accommodate quantitatively continuous task requirements. To address this issue, this paper models a multi-task scenario with continuously distributed attributes and proposes a three-tier cloud-edge collaborative offloading architecture comprising UAV-based edge nodes, LEO satellites, and ground cloud data centers. We further formulate a system cost minimization problem that integrates UAV network load balancing and satellite energy efficiency. To solve this non-convex, multi-stage optimization problem, a two-layer multi-type-agent deep reinforcement learning (TMDRL) algorithm is developed. This algorithm categorizes agents according to their functional roles in the Markov decision process and jointly optimizes task offloading and resource allocation by integrating DQN and DDPG frameworks. Simulation results demonstrate that the proposed algorithm reduces system cost by 7.82% compared to existing baseline methods.

RevDate: 2025-10-16

Ma Y, Zhao Y, Hu Y, et al (2025)

Multi-Agent Deep Reinforcement Learning for Joint Task Offloading and Resource Allocation in IIoT with Dynamic Priorities.

Sensors (Basel, Switzerland), 25(19): pii:s25196160.

The rapid growth of Industrial Internet of Things (IIoT) terminals has resulted in tasks exhibiting increased concurrency, heterogeneous resource demands, and dynamic priorities, significantly increasing the complexity of task scheduling in edge computing. Cloud-edge-end collaborative computing leverages cross-layer task offloading to alleviate edge node resource contention and improve task scheduling efficiency. However, existing methods generally neglect the joint optimization of task offloading, resource allocation, and priority adaptation, making it difficult to balance task execution and resource utilization under resource-constrained and competitive conditions. To address this, this paper proposes a two-stage dynamic-priority-aware joint task offloading and resource allocation method (DPTORA). In the first stage, an improved Multi-Agent Proximal Policy Optimization (MAPPO) algorithm integrated with a Priority-Gated Attention Module (PGAM) enhances the robustness and accuracy of offloading strategies under dynamic priorities; in the second stage, the resource allocation problem is formulated as a single-objective convex optimization task and solved globally using the Lagrangian dual method. Simulation results show that DPTORA significantly outperforms existing multi-agent reinforcement learning baselines in terms of task latency, energy consumption, and the task completion rate.
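
The second stage is described as a single-objective convex problem solved with the Lagrangian dual method; a generic form of that construction is sketched below. The objective f and the single capacity constraint are stand-ins, not the paper's exact formulation.

```latex
% Generic convex allocation with a capacity constraint (illustrative only)
\min_{x \succeq 0} f(x) \quad \text{s.t.} \quad \sum_i x_i \le C,
\qquad
L(x,\lambda) = f(x) + \lambda\Big(\sum_i x_i - C\Big),\; \lambda \ge 0,
\qquad
g(\lambda) = \min_{x \succeq 0} L(x,\lambda),\quad \max_{\lambda \ge 0} g(\lambda)
```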

RevDate: 2025-10-16

Mushtaq S, Mohsin M, MM Mushtaq (2025)

A Systematic Literature Review on the Implementation and Challenges of Zero Trust Architecture Across Domains.

Sensors (Basel, Switzerland), 25(19): pii:s25196118.

The Zero Trust Architecture (ZTA) model has emerged as a foundational cybersecurity paradigm that eliminates implicit trust and enforces continuous verification across users, devices, and networks. This study presents a systematic literature review of 74 peer-reviewed articles published between 2016 and 2025, spanning domains such as cloud computing (24 studies), Internet of Things (11), healthcare (7), enterprise and remote work systems (6), industrial and supply chain networks (5), mobile networks (5), artificial intelligence and machine learning (5), blockchain (4), big data and edge computing (3), and other emerging contexts (4). The analysis shows that authentication, authorization, and access control are the most consistently implemented ZTA components, whereas auditing, orchestration, and environmental perception remain underexplored. Across domains, the main challenges include scalability limitations, insufficient lightweight cryptographic solutions for resource-constrained systems, weak orchestration mechanisms, and limited alignment with regulatory frameworks such as GDPR and HIPAA. Cross-domain comparisons reveal that cloud and enterprise systems demonstrate relatively mature implementations, while IoT, blockchain, and big data deployments face persistent performance and compliance barriers. Overall, the findings highlight both the progress and the gaps in ZTA adoption, underscoring the need for lightweight cryptography, context-aware trust engines, automated orchestration, and regulatory integration. This review provides a roadmap for advancing ZTA research and practice, offering implications for researchers, industry practitioners, and policymakers seeking to enhance cybersecurity resilience.

RevDate: 2025-10-16

Yildirim N, Cao M, Yun M, et al (2025)

EcoWild: Reinforcement Learning for Energy-Aware Wildfire Detection in Remote Environments.

Sensors (Basel, Switzerland), 25(19): pii:s25196011.

Early wildfire detection in remote areas remains a critical challenge due to limited connectivity, intermittent solar energy, and the need for autonomous, long-term operation. Existing systems often rely on fixed sensing schedules or cloud connectivity, making them impractical for energy-constrained deployments. We introduce EcoWild, a reinforcement learning-driven cyber-physical system for energy-adaptive wildfire detection on solar-powered edge devices. EcoWild combines a decision tree-based fire risk estimator, lightweight on-device smoke detection, and a reinforcement learning agent that dynamically adjusts sensing and communication strategies based on battery levels, solar input, and estimated fire risk. The system models realistic solar harvesting, battery dynamics, and communication costs to ensure sustainable operation on embedded platforms. We evaluate EcoWild using real-world solar, weather, and fire image datasets in a high-fidelity simulation environment. Results show that EcoWild consistently maintains responsiveness while avoiding battery depletion under diverse conditions. Compared to static baselines, it achieves 2.4× to 7.7× faster detection, maintains moderate energy consumption, and avoids system failure due to battery depletion across 125 deployment scenarios.

RevDate: 2025-10-16

Zhang F, Xia X, Gao H, et al (2025)

A Blockchain-Enabled Multi-Authority Secure IoT Data-Sharing Scheme with Attribute-Based Searchable Encryption for Intelligent Systems.

Sensors (Basel, Switzerland), 25(19): pii:s25195944.

With the advancement of technologies such as 5G, digital twins, and edge computing, the Internet of Things (IoT) as a critical component of intelligent systems is profoundly driving the transformation of various industries toward digitalization and intelligence. However, the exponential growth of network connection nodes has expanded the attack exposure surface of IoT devices. The IoT devices with limited storage and computing resources struggle to cope with new types of attacks, and IoT devices lack mature authorization and authentication mechanisms. It is difficult for traditional data-sharing solutions to meet the security requirements of cloud-based shared data. Therefore, this paper proposes a blockchain-based multi-authority IoT data-sharing scheme with attribute-based searchable encryption for intelligent system (BM-ABSE), aiming to address the security, efficiency, and verifiability issues of data sharing in an IoT environment. Our scheme decentralizes management responsibilities through a multi-authority mechanism to avoid the risk of single-point failure. By utilizing the immutability and smart contract function of blockchain, this scheme can ensure data integrity and the reliability of search results. Meanwhile, some decryption computing tasks are outsourced to the cloud to reduce the computing burden on IoT devices. Our scheme meets the static security and IND-CKA security requirements of the standard model, as demonstrated by theoretical analysis, which effectively defends against the stealing or tampering of ciphertexts and keywords by attackers. Experimental simulation results indicate that the scheme has excellent computational efficiency on resource-constrained IoT devices, with core algorithm execution time maintained in milliseconds, and as the number of attributes increases, it has a controllable performance overhead.

RevDate: 2025-10-15

Kayalvili S, Senthilkumar R, Yasotha S, et al (2025)

An optimized resource allocation in cloud using prediction enabled reinforcement learning.

Scientific reports, 15(1):36088.

Due to its many applications, cloud computing has gained popularity in recent years. It is simple and fast to access shared resources at any time from any location. Cloud-based package facilities need adaptive resource allocation (RA) to provide Quality of Service (QoS) while lowering resource costs under workloads and service demands that change over time. Because system states shift constantly, resource allocation presents enormous challenges. Older methods often require specialist knowledge, which may result in poor adaptability; moreover, they target environments with fixed workloads and therefore cannot be used successfully in real-world contexts with fluctuating workloads. This research therefore proposes a prediction-enabled feedback system to address these problems within a reinforcement learning-based RA (PCRA) framework. First, the research builds a more accurate Q-value predictor to forecast the value of management actions under various system conditions, using Q-values as the basis. For accurate Q-value prediction, the model makes use of several prediction learners with the Q-learning method. In addition, an improved optimization-based algorithm, the Feature Selection Whale Optimization Algorithm (FSWOA), is utilized to discover impartial resource allocations. Simulations based on practical scenarios using the CloudStack and RUBiS benchmarks demonstrate the effectiveness of PCRA for real-time RA, showing that the framework achieves 94.7% Q-value prediction accuracy and reduces SLA violations and resource cost by 17.4% compared to traditional round-robin scheduling.
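
PCRA's prediction learners are trained on Q-values; for orientation, the standard tabular Q-learning update that such predictors approximate is sketched below in Python. The state/action encoding, learning rate, discount factor, and exploration rate are illustrative assumptions, not the paper's settings.

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.2    # illustrative hyperparameters
Q = defaultdict(float)                    # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy selection over candidate resource-allocation actions."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```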

RevDate: 2025-10-14

Wit N, Bertlin J, Hynes-Allen A, et al (2025)

Mapping SET1B chromatin interactions with DamID using DamMapper, a comprehensive Snakemake workflow.

BMC genomics, 26(1):914.

BACKGROUND: DNA adenine methyltransferase identification followed by sequencing (DamID-seq) is a powerful method used to map genome-wide chromatin-protein interactions. However, the bioinformatic analysis of DamID-seq data presents significant challenges due to the inherent complexities of the data and a notable lack of comprehensive software solutions for data-processing and downstream analysis.

RESULTS: To address these challenges, we present a comprehensive bioinformatic workflow for DamID-seq data analysis, DamMapper, using the Snakemake workflow management system. Key features include straightforward processing of multiple biological replicates, visualisation of quality control, such as correlation heatmaps and principal component analysis (PCA), and robust code quality maintained through continuous integration (CI). Reproducibility is ensured across diverse computational environments, including cloud computing and high-performance computing (HPC) clusters, through the implementation of software environments (Conda) and containerisation (Docker/Apptainer). We validate this workflow using a previously published DamID-seq dataset and apply it to analyse novel datasets for proteins involved in the hypoxia response, specifically the transcription factor HIF-1α and the histone methyltransferase SET1B. This application reveals a strong concordance between our HIF-1α DamID-seq results and ChIP-seq data, and importantly, provides the first genome-wide DNA binding map for SET1B.

CONCLUSIONS: This work provides a validated, reproducible, and feature-rich workflow that overcomes common hurdles in DamID-seq data analysis. By streamlining the processing and ensuring robustness, DamMapper facilitates reliable analysis and enables new biological discoveries, as demonstrated by the characterization of SET1B binding sites. The workflow is available under an MIT license at https://github.com/niekwit/damid-seq.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12864-025-12075-x.

RevDate: 2025-10-15
CmpDate: 2025-10-15

Chow W, Venkataraman N, Oh HC, et al (2025)

Building an artificial intelligence and digital ecosystem: a smart hospital's data-driven path to healthcare excellence.

Singapore medical journal, 66(Suppl 1):S75-S83.

Hospitals worldwide recognise the importance of data and digital transformation in healthcare. We traced a smart hospital's data-driven journey to build an artificial intelligence and digital ecosystem (AIDE) to achieve healthcare excellence. We measured the impact of data and digital transformation on patient care and hospital operations, identifying key success factors, challenges, and opportunities. The use of data analytics and data science, robotic process automation, AI, cloud computing, Medical Internet of Things and robotics were stand-out areas for a hospital's data-driven journey. In the future, the adoption of a robust AI governance framework, enterprise risk management system, AI assurance and AI literacy are critical for success. Hospitals must adopt a digital-ready, digital-first strategy to build a thriving healthcare system and innovate care for tomorrow.

RevDate: 2025-10-14

Padmavathi V, R Saminathan (2025)

A federated edge intelligence framework with trust based access control for secure and privacy preserving IoT systems.

Scientific reports, 15(1):35832.

The rapid growth of Internet of Things (IoT) ecosystems has generated substantial industrial progress, yet it has also introduced intricate security and privacy issues. IoT deployments cannot be properly supported with traditional cloud-centric approaches because they require improved bandwidth utilization, reduced latency, and enhanced trust mechanisms. The research proposes Artificial Intelligence-Driven Secure Edge Trust Framework (AI-SET), which establishes a comprehensive edge-based security design that connects network intrusion detection with federated learning capabilities to implement adaptive trust-based access control for IoT system protection. The AI-SET framework comprises three central elements. Real-time anomaly detection at the network edge through the Edge-Resident Intrusion Detection System operates with lightweight AI algorithms to minimize dependency on centralized systems. Privacy-preserving federated learning utilizes the modified FedAvg algorithm, which is supported by differential privacy and homomorphic encryption. Security measures enabled by this model allow algorithms to be trained across decentralized sources that contain heterogeneous and non-identically distributed (non-IID) data. A dynamic access control system utilizes trust assessment models to evaluate device context and behavior for real-time permission evaluations. The framework undergoes validation by running tests with the NAB dataset, supported by Jetson Nano and Raspberry Pi edge devices, and tools including Suricata, Metasploit, and the WAZUH threat platform. Evidence shows that AI-SET boasts higher accuracy in intrusion detection, enhanced communication performance, and superior access control security compared to standard approaches. AI-SET demonstrates immunity against attempted model poisoning attacks and unauthorized system breaches, achieving this protection while maintaining low operational costs and ensuring secure data privacy. The research presents AI-SET as an adaptable, resilient, and sensitive-minded security framework for future IoT systems, through its holistic control of edge intelligence, secure network operations, and automated trust management.
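
The modified FedAvg with differential privacy and homomorphic encryption is specific to AI-SET and is not detailed in the abstract; the Python sketch below shows only the core FedAvg step (weighted averaging of client parameters, with optional Gaussian noise as a crude stand-in for a differential-privacy mechanism). All names and parameters here are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes, dp_sigma=0.0, rng=None):
    """Aggregate client model parameters, weighting by local dataset size.

    client_weights: list of 1-D parameter vectors (one per edge device).
    dp_sigma: std-dev of optional Gaussian noise (illustrative DP stand-in).
    """
    rng = rng or np.random.default_rng(0)
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                    # (n_clients, n_params)
    avg = (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()
    if dp_sigma > 0:
        avg = avg + rng.normal(0.0, dp_sigma, size=avg.shape)
    return avg

# Example: three heterogeneous (non-IID) clients with different data volumes
global_model = fedavg([np.ones(4), 2 * np.ones(4), 3 * np.ones(4)],
                      client_sizes=[100, 50, 10], dp_sigma=0.01)
print(global_model)
```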

RevDate: 2025-10-14

Sina EM, Limage K, Anisman E, et al (2025)

Automated Machine Learning Differentiation of Pituitary Macroadenomas and Parasellar Meningiomas Using Preoperative Magnetic Resonance Imaging.

Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery [Epub ahead of print].

INTRODUCTION: Automated machine learning (AutoML) is an artificial intelligence tool that facilitates image recognition model development. This study evaluates the diagnostic performance of AutoML in differentiating pituitary macroadenomas (PA) and parasellar meningiomas (PSM) using preoperative MRI.

STUDY DESIGN: Model development and retrospective analysis.

SETTING: Single academic institution with external validation from a public dataset.

METHODS: 1628 contrast-enhanced T1-weighted MRI sequences from 116 patients (997 PA, 631 PSM) were uploaded to Google Cloud VertexAI AutoML. A single-label classification model was developed using an 80%-10%-10% training-validation-testing split. External validation included 930 PA and 29 PSM images. A subanalysis evaluated the classification of anatomical PSM subtypes (planum sphenoidale [PS] versus tuberculum sellae [TS]). Performance metrics were calculated at 0.25, 0.5, and 0.75 confidence thresholds.

RESULTS: At a 0.5 confidence threshold, the AutoML model achieved an aggregate AUPRC of 0.997, with F1 score, sensitivity, specificity, PPV, and NPV equilibrated to 97.55%. The model achieved strong performance in classifying PA (F1 = 97.98%; sensitivity = 97.00%; specificity = 98.96%) and PSM (F1 = 96.88%; sensitivity = 98.41%; specificity = 95.53%). External validation demonstrated high accuracy (AUPRC = 0.999 for PA; 1.000 for PSM). The PSM subanalysis yielded an aggregate F1 score of 97.30%, with PS and TS classified at 97.44% and 97.14%, respectively.

CONCLUSION: Our customized AutoML model accurately differentiates PAs from PSMs using preoperative MRIs and outperforms traditional ML. It is the first AutoML model specifically trained for parasellar tumor classification. Its highly automated, user-friendly design may facilitate scalable integration into clinical practice.

RevDate: 2025-10-14

Wang Z, Veniaminovna Kalugina O, Vladimirovna Volichenko O, et al (2025)

Correction: Sustainability in construction economics as a barrier to cloud computing adoption in small-scale Building projects.

Scientific reports, 15(1):35562 pii:10.1038/s41598-025-23126-4.

RevDate: 2025-10-13

Tian Q, G Li (2025)

Fog computing based cost optimization for university governance.

Scientific reports, 15(1):35691.

This study presents a new architecture based on fog computing to reduce the burdensome cost of university governance. The approach works by enhancing network performance and optimizing resource utilization, overcoming the inherent shortcomings of traditional cloud-based systems, which incur exorbitant costs and suffer delayed response times, especially for the distributed computing arrangements used in online education. The main contribution of this work is a low-cost, fog-based model that uniquely combines a new cost function for optimizing resource allocation with a fuzzy inference system for intelligent error handling and resource prioritization. Our approach can significantly help alleviate the computational burden in the evolving paradigm of online education and distributed computing in on-premise or hybrid cloud environments. Simulation results obtained in a MATLAB environment validate the cost minimization and resource optimization achieved in university networks using the proposed solution. This study is an important addition to the existing knowledge base on the use of fog computing in university governance, and it lays the groundwork for future research into optimizing administrative processes and lowering costs for educational institutions.

RevDate: 2025-10-13

Fonari A, Elliott SD, Brock CN, et al (2025)

Finding the temperature window for atomic layer deposition of ruthenium metal via efficient phonon calculations.

Physical chemistry chemical physics : PCCP [Epub ahead of print].

We investigate the use of first principles thermodynamics based on periodic density functional theory (DFT) to examine the gas-surface chemistry of an oxidized ruthenium surface reacting with hydrogen gas. This reaction system features in the growth of ultrathin Ru films by atomic layer deposition (ALD). We reproduce and rationalize the experimental observation that ALD of the metal from RuO4 and H2 occurs only in a narrow temperature window above 100 °C, and this validates the approach. Specifically, the temperature-dependent reaction free energies are computed for the competing potential reactions of the H2 reagent, and show that surface oxide is reduced to water, which is predicted to desorb thermally above 113 °C, exposing bare Ru that can further react to surface hydride, and hence deposit Ru metal. The saturating coverages give a predicted growth rate of 0.7 Å per cycle of Ru. At lower temperatures, free energies indicate that water is retained at the surface and reacts with the RuO4 precursor to form an oxide film, also in agreement with experiment. The temperature dependence is obtained with the required accuracy by computing Gibbs free energy corrections from phonon calculations within the harmonic approximation. Surface phonons are computed rapidly and efficiently by parallelization on a cloud architecture within the Schrödinger Materials Science Suite. We also show that rotational and translational entropy of gases dominate the free energies, permitting an alternative approach without phonon calculations, which would be suitable for rapid pre-screening of gas-surface chemistries.
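
The Gibbs free-energy corrections referred to above come from phonon calculations in the harmonic approximation; for reference, the textbook vibrational free-energy expression on which such corrections are built is given below. This is standard statistical thermodynamics, not a formula quoted from the paper, and gas-phase translational and rotational contributions are treated separately as the authors note.

```latex
F_{\mathrm{vib}}(T) = \sum_i \left[ \frac{\hbar\omega_i}{2}
  + k_B T \,\ln\!\left(1 - e^{-\hbar\omega_i / k_B T}\right) \right],
\qquad
\Delta G(T) \approx \Delta E_{\mathrm{DFT}} + \Delta F_{\mathrm{vib}}(T)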

RevDate: 2025-10-11
CmpDate: 2025-10-11

Gakh RV, Fedoniuk LY, Furman OY, et al (2025)

The role of health monitoring technologies in optimising athletes' self-regulation.

Wiadomosci lekarskie (Warsaw, Poland : 1960), 78(8):1544-1553.

OBJECTIVE: To analyse current approaches to monitoring the sports performance and health of athletes by developing an intelligent system that combines wearable devices, cloud computing, and deep learning methods.

PATIENTS AND METHODS: The paper analyses related literature in sports medicine, informatics, and artificial intelligence. The work is based on studying the effectiveness of devices such as the Fitbit Charge 5, Garmin Venu 2, Samsung Galaxy Watch 4, and Oura Ring Gen 3.

RESULTS: The results showed that such systems provide high accuracy in predicting athletes' health status. The presented models allow real-time tracking of physiological parameters, analysing the data and generating health reports for prompt adjustment of the training process. These devices enable systematic monitoring of various indicators, such as heart rate, stress level, sleep quality and overall physical activity. Reading these indicators allows athletes to receive objective information about their condition. This, in turn, contributes to more effective training planning, recovery and injury prevention.

CONCLUSION: Integrating the wearables, cloud computing, and deep learning methods available in the latest devices is a promising approach to sports health monitoring. The analysed devices can improve athletes' performance, prevent injuries and optimise training programmes.

RevDate: 2025-10-08
CmpDate: 2025-10-08

Hosseinzadeh F, Liu G, Tsai E, et al (2025)

Utilizing a publicly accessible automated machine learning platform to enable diagnosis before tumor surgery.

Communications medicine, 5(1):419.

BACKGROUND: In benign tumors with potential for malignant transformation, sampling error during pre-operative biopsy can significantly change patient counseling and surgical planning. Sinonasal inverted papilloma (IP) is the most common benign soft tissue tumor of the sinuses, yet it can undergo malignant transformation to squamous cell carcinoma (IP-SCC), for which the planned surgery could be drastically different. Artificial intelligence (AI) could potentially help with this diagnostic challenge.

METHODS: CT images from 19 institutions were used to train the Google Cloud Vertex AI platform to distinguish between IP and IP-SCC. The model was evaluated on a holdout test dataset of images from patients whose data were not used for training or validation. Performance metrics of area under the curve (AUC), sensitivity, specificity, accuracy, and F1 were used to assess the model.

RESULTS: CT image data from 958 patients, comprising 41,099 individual images, were labeled to train and validate the deep learning image classification model. The model demonstrated 95.8% sensitivity in correctly identifying IP-SCC cases from IP, while specificity was robust at 99.7%. Overall, the model achieved an accuracy of 99.1%.

CONCLUSIONS: A deep automated machine learning model, created from a publicly available artificial intelligence tool, using pre-operative CT imaging alone, identified malignant transformation of inverted papilloma with excellent accuracy.
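
For readers checking the metrics reported above, the short Python sketch below shows how sensitivity, specificity, and accuracy follow from a binary confusion matrix. The counts used are made up for illustration and are not the study's holdout data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)     # recall on the positive (IP-SCC) class
    specificity = tn / (tn + fp)     # recall on the negative (IP) class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts only (not the study's test set):
print(binary_metrics(tp=92, fp=1, tn=330, fn=4))
```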

RevDate: 2025-10-03

Samantray S, Lockwood M, Andersen A, et al (2025)

PTM-Psi on the Cloud: A Cloud-Compatible Workflow for Scalable, High-Throughput Simulation of Post-Translational Modifications in Protein Complexes.

Journal of chemical information and modeling [Epub ahead of print].

We developed an advanced computational framework to accelerate the study of the impact of post-translational modifications on protein structures and interactions (PTM-Psi) using asynchronous, loosely coupled workflows on the Azure Quantum Elements cloud platform. We seamlessly integrate emerging cloud computing assets that further expand the scope and capability of the PTM-Psi Python package by refactoring it into a cloud-compatible library. We employed a "workflow of workflows" approach, wherein a parent workflow spawns one or more child workflows, manages them, and acts on their results. This approach enabled us to optimize resource allocation according to each workflow's needs and allowed us to use the cloud's heterogeneous architecture for the computational investigation of a combinatorial explosion of thiol protein PTMs on an exemplary protein megacomplex critical to the Calvin-Benson cycle of light-dependent sugar production in cyanobacteria. With PTM-Psi on the cloud, we transformed the pipeline for thiol PTM analysis to achieve high throughput by leveraging the strengths of the cloud service. PTM-Psi on the cloud reduces operational complexity and lowers entry barriers to data interpretation with structural modeling for redox proteomics mass spectrometry specialists.

RevDate: 2025-10-03

Catalucci S, Koutecký T, Senin N, et al (2025)

Investigation on the effects of the application of a sublimating matte coating in optical coordinate measurement of additively manufactured parts.

The International journal, advanced manufacturing technology, 140(5-6):2749-2775.

Coating sprays play a crucial role in extending the capabilities of optical measuring systems, especially when dealing with reflective surfaces, where excessive reflections, caused by incident light hitting the object surface, lead to increased noise and missing data points in the measurement results. This work focuses on metal additively manufactured parts, and explores how the application of a sublimating matting spray on the measured surfaces can improve measurement performance. The use of sublimating matting sprays is a recent development for achieving temporary coatings that are useful for measurement, but then disappear in the final product. A series of experiments was performed involving measurement by fringe projection on a selected test part pre- and post-application of a sublimating coating layer. A comparison of measurement performance across the experiments was run by computing a selected set of custom-developed point cloud quality indicators: rate of surface coverage, level of sampling density, local point dispersion, variation of selected linear dimensions computed from the point clouds. In addition, measurements were performed using an optical profilometer on the coated and uncoated surfaces to determine both thickness of the coating layer and changes of surface texture (matte effect) due to the presence of the coating layer.

RevDate: 2025-10-02

Sun X, Liao B, Huang S, et al (2025)

Evaluation of the particle characteristics of aggregates from construction spoils treatment through a real-time detection multimodal module based on 3D point cloud technology.

Waste management (New York, N.Y.), 208:115165 pii:S0956-053X(25)00576-8 [Epub ahead of print].

Construction spoils are generated during construction activities and typically contain aggregates along with mud, requiring size distribution (gradation) assessment for reuse. Conventional methods using the square opening sieves are inefficient and labor-intensive. This study introduced an intelligent multi-modal module primarily for gradation detection based on 3D scanning technology to replace traditional sieve techniques. The proposed Particle Point Cloud Clustering algorithm achieved nearly 100% segmentation accuracy for multi-particle point clouds within 2 s through adaptive point-spacing optimization. A Particle Sieving Size Determination method ensured particle size classification accuracy exceeding 93.0%. A particle surface reconstruction algorithm was integrated into the Particle Characteristics Extraction (PCE) method to address the challenge of volume calculation for unscanned particle bottom surfaces, providing a novel strategy for computing particle geometry that encompasses traditional analysis. To streamline volume calculation and bypass individual particle reconstruction, we developed a volume prediction approach that combines the Oriented Bounding Box volume with the particle morphological parameter (λ) obtained through the PCE method. Furthermore, the Particle Mass Modification model determined aggregate mass by multiplying the predicted volume with the established density. This model significantly reduced gradation errors to less than 1.2% on average, which was experimentally validated. Experimental results also confirmed that the proposed method achieves real-time, second-level detection and fulfills the typical application needs in a construction site. This study is expected to benefit other industrial processes, such as particle screening in the mining industry, since information on particle characteristics is equally crucial for this sector.
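
The volume-prediction shortcut described above combines an oriented-bounding-box (OBB) volume with a morphological parameter, and the mass model multiplies the predicted volume by a known density. A minimal numerical sketch of that chain is shown below; the shape factor and density value are illustrative assumptions, since the paper's calibrated lambda and density are not given in the abstract.

```python
def predicted_particle_volume(obb_dims_mm, shape_factor):
    """Approximate particle volume as shape_factor * oriented-bounding-box volume.

    shape_factor plays the role of the paper's morphological parameter (lambda);
    the value used here is illustrative, not taken from the study.
    """
    lx, ly, lz = obb_dims_mm
    return shape_factor * lx * ly * lz            # mm^3

def particle_mass_g(volume_mm3, density_g_per_cm3=2.65):
    """Mass = predicted volume x assumed aggregate density (2.65 g/cm^3 is typical)."""
    return volume_mm3 / 1000.0 * density_g_per_cm3

v = predicted_particle_volume((20.0, 15.0, 10.0), shape_factor=0.45)
print(round(particle_mass_g(v), 2), "g")
```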

RevDate: 2025-10-02

Ma Q, Fan R, Zhao L, et al (2025)

SGSG: Stroke-Guided Scene Graph Generation.

IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].

3D scene graph generation is essential for spatial computing in Extended Reality (XR), providing structured semantics for task planning and intelligent perception. However, unlike instance-segmentation-driven setups, generating semantic scene graphs still suffers from limited accuracy due to the coarse and noisy point cloud data typically acquired in practice, and from the lack of interactive strategies to incorporate users' spatialized and intuitive guidance. We identify three key challenges: designing controllable interaction forms, involving guidance in inference, and generalizing from local corrections. To address these, we propose SGSG, a Stroke-Guided Scene Graph generation method that enables users to interactively refine 3D semantic relationships and improve predictions in real time. We propose three types of strokes and a lightweight SGstrokes dataset tailored for this modality. Our model integrates stroke guidance representation and injection for spatio-temporal feature learning and reasoning correction, along with intervention losses that combine consistency-repulsive and geometry-sensitive constraints to enhance accuracy and generalization. Experiments and the user study show that SGSG outperforms the state-of-the-art methods 3DSSG and SGFN in overall accuracy and precision, surpasses JointSSG in predicate-level metrics, and reduces task load across all control conditions, establishing SGSG as a new benchmark for interactive 3D scene graph generation and semantic understanding in XR. Implementation resources are available at: https://github.com/Sycamore-Ma/SGSG-runtime.

RevDate: 2025-10-01

Sudhakar M, K Vivekrabinson (2025)

Enhanced CNN based approach for IoT edge enabled smart car driving system for improving real time control and navigation.

Scientific reports, 15(1):33932.

This study investigates the critical control factors differentiating human-driven vehicles from IoT edge-enabled smart driving systems, with real-time steering, throttle, and brake control as the main areas of emphasis. By combining many high-precision sensors and using edge computing for real-time processing, the research seeks to improve autonomous vehicle decision-making. The suggested system gathers real-time time-series data using LiDAR, radar, GPS, IMU, and ultrasonic sensors. Edge nodes preprocess this data before sending it to a cloud server, where a Convolutional Neural Network (CNN) creates predicted control vectors for vehicle navigation. The study uses a MATLAB 2023 simulation framework that includes 100 autonomous cars, five edge nodes, and a centralized cloud server. The CNN architecture comprises multiple convolutional and pooling layers followed by fully connected layers. To enhance trajectory estimation, grayscale and optical-flow images are used. Trajectory smoothness measures, loss function trends, and Root Mean Square Error (RMSE) are used to evaluate performance. According to experimental data, the suggested CNN-based edge-enabled driving system outperforms conventional autonomous driving techniques in terms of navigation accuracy, achieving an RMSE of 15.123 and a loss value of 2.114. The results show how edge computing may improve vehicle autonomy and reduce computational delay, opening the door for more effective smart driving systems. To better evaluate the system's suitability for dynamic situations, future work will incorporate real-world validation.
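
The abstract describes a CNN with convolutional, pooling, and fully connected layers that maps sensor-derived grayscale and optical-flow images to a steering/throttle/brake control vector. A minimal Keras sketch of a model with that general shape is given below; the input size, layer widths, and optimizer are assumptions, not the paper's architecture.

```python
import tensorflow as tf

def build_control_cnn(input_shape=(66, 200, 2)):
    """Toy CNN: grayscale + optical-flow channels in, 3 control values out."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(24, (5, 5), strides=2, activation="relu"),
        tf.keras.layers.Conv2D(36, (5, 5), strides=2, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3),   # [steering, throttle, brake]
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

model = build_control_cnn()
model.summary()
```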

RevDate: 2025-09-30

Kario K, Asayama K, Arima H, et al (2025)

Digital hypertension - what we need for the high-quality management of hypertension in the new era.

Hypertension research : official journal of the Japanese Society of Hypertension [Epub ahead of print].

Digital technologies are playing an increasing role in hypertension management. Digital hypertension is a new field that integrates advancing technologies into hypertension management. This research area encompasses various aspects of digital transformation technologies, including the development of novel blood pressure (BP) measurement devices (whether cuffless or cuff-based sensors), the transmission of large-scale time-series BP data, cloud-based computing and analysis of BP indices, presentation of the results, and feedback systems for both patients and physicians. A key component of this approach is novel BP monitoring devices. Novel BP monitoring includes cuffless devices that estimate BP, but such devices must achieve accuracy without the need for calibration against conventional cuff-based devices. New BP monitoring devices can provide information on novel biomarkers beyond BP and may improve risk assessment and outcomes. Integration of BP data with omics and clinical information should enable personalized hypertension management. Key data gaps relating to novel BP monitoring devices are accuracy/validation in different settings/populations, association between BP metrics and hard clinical outcomes, and measurement/interpretation of BP variability data. Human- and health system-related factors also need to be addressed or overcome before these devices can be successfully integrated into routine clinical practice. If these challenges can be met, new BP monitoring technologies could transform hypertension management and play a pivotal role in the future of remote healthcare. This article summarizes the latest information and discussions about digital hypertension from the Digital Hypertension symposium held at the 2024 Japan Society of Hypertension scientific meeting.

RevDate: 2025-09-30

Alamro H, Albouq SS, Khan J, et al (2025)

An intelligent deep representation learning with enhanced feature selection approach for cyberattack detection in internet of things enabled cloud environment.

Scientific reports, 15(1):34013.

Cloud computing (CC) is a relatively new paradigm that provides features such as processing, storing, and sharing data to users of computer networks. CC is attracting global investment due to its services, while IoT faces increasingly advanced cyberattacks, making its cybersecurity crucial for protecting privacy and digital assets. A significant challenge for intrusion detection systems (IDS) is detecting complex and hidden malware, as attackers use advanced evasion techniques to bypass conventional security measures. At the cutting edge of cybersecurity is artificial intelligence (AI), which is applied to develop composite models that protect systems and networks, including Internet of Things (IoT) systems. AI-based deep learning (DL) is highly effective in detecting cybersecurity threats. This paper presents an Intelligent Hybrid Deep Learning Method for Cyber Attack Detection Using an Enhanced Feature Selection Technique (IHDLM-CADEFST) approach in IoT-enabled cloud networks. The aim is to strengthen IoT cybersecurity by identifying key threats and developing effective detection and mitigation strategies. Initially, the data pre-processing phase uses the standard scaler method to convert input data into a suitable format. Furthermore, the feature selection (FS) strategy is implemented using the recursive feature elimination with information gain (RFE-IG) model to detect the most pertinent features and prevent overfitting. Finally, a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model is employed for attack classification, utilizing the RMSprop optimizer to enhance the performance and efficiency of the classification process. The IHDLM-CADEFST approach is evaluated on the ToN-IoT and Edge-IIoT datasets, where comparative analysis yielded superior accuracy values of 99.45% and 99.19%, respectively, relative to recent models.
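
The RFE-IG step is described only at a high level; as a rough stand-in, the Python sketch below ranks features by mutual information (an information-gain estimate) with scikit-learn and recursively drops the weakest ones. The ranking details and stopping criterion of the paper's RFE-IG are assumptions here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def rfe_by_information_gain(X, y, n_keep=10, drop_per_round=5):
    """Recursively eliminate the lowest-information-gain features."""
    keep = np.arange(X.shape[1])
    while keep.size > n_keep:
        ig = mutual_info_classif(X[:, keep], y, random_state=0)
        n_drop = min(drop_per_round, keep.size - n_keep)
        keep = keep[np.argsort(ig)[n_drop:]]   # drop the weakest features
    return np.sort(keep)

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
print(rfe_by_information_gain(X, y))
```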

RevDate: 2025-09-30

He M, Zhou N, Peng H, et al (2025)

A Multivariate Cloud Workload Prediction Method Integrating Convolutional Nonlinear Spiking Neural Model with Bidirectional Long Short-Term Memory.

International journal of neural systems [Epub ahead of print].

Multivariate workload prediction in cloud computing environments is a critical research problem. Effectively capturing inter-variable correlations and temporal patterns in multivariate time series is key to addressing this challenge. To address this issue, this paper proposes a convolutional model based on a Nonlinear Spiking Neural P System (ConvNSNP), which enhances the ability to process nonlinear data compared to conventional convolutional models. Building upon this, a hybrid forecasting model is developed by integrating ConvNSNP with a Bidirectional Long Short-Term Memory (BiLSTM) network. ConvNSNP is first employed to extract temporal and cross-variable dependencies from the multivariate time series, followed by BiLSTM to further strengthen long-term temporal modeling. Comprehensive experiments are conducted on three public cloud workload traces from Alibaba and Google. The proposed model is compared with a range of established deep learning approaches, including CNN, RNN, LSTM, TCN and hybrid models such as LSTNet, CNN-GRU and CNN-LSTM. Experimental results on three public datasets demonstrate that our proposed model achieves up to 9.9% improvement in RMSE and 11.6% improvement in MAE compared with the most effective baseline methods. The model also achieves favorable performance in terms of MAPE, further validating its effectiveness in multivariate workload prediction.

RevDate: 2025-09-30
CmpDate: 2025-09-30

Labayle O, Roskams-Hieter B, Slaughter J, et al (2024)

Semiparametric efficient estimation of small genetic effects in large-scale population cohorts.

Biostatistics (Oxford, England), 26(1):.

Population genetics seeks to quantify DNA variant associations with traits or diseases, as well as interactions among variants and with environmental factors. Computing millions of estimates in large cohorts in which small effect sizes and tight confidence intervals are expected, necessitates minimizing model-misspecification bias to increase power and control false discoveries. We present TarGene, a unified statistical workflow for the semi-parametric efficient and double robust estimation of genetic effects including k-point interactions among categorical variables in the presence of confounding and weak population dependence. k-point interactions, or Average Interaction Effects (AIEs), are a direct generalization of the usual average treatment effect (ATE). We estimate genetic effects with cross-validated and/or weighted versions of Targeted Minimum Loss-based Estimators (TMLE) and One-Step Estimators (OSE). The effect of dependence among data units on variance estimates is corrected by using sieve plateau variance estimators based on genetic relatedness across the units. We present extensive realistic simulations to demonstrate power, coverage, and control of type I error. Our motivating application is the targeted estimation of genetic effects on traits, including two-point and higher-order gene-gene and gene-environment interactions, in large-scale genomic databases such as UK Biobank and All of Us. All cross-validated and/or weighted TMLE and OSE for the AIE k-point interaction, as well as ATEs, conditional ATEs and functions thereof, are implemented in the general purpose Julia package TMLE.jl. For high-throughput applications in population genomics, we provide the open-source Nextflow pipeline and software TarGene which integrates seamlessly with modern high-performance and cloud computing platforms.
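
For orientation, the average treatment effect and its two-point interaction generalization referenced above can be written in standard potential-outcome notation as follows; the paper's exact estimand definitions may differ in detail.

```latex
\mathrm{ATE} = \mathbb{E}\,[\,Y(1) - Y(0)\,],
\qquad
\mathrm{AIE}_{2} = \mathbb{E}\,[\,Y(1,1) - Y(1,0) - Y(0,1) + Y(0,0)\,]
```

Higher-order k-point interactions are obtained by iterating the same differencing over k factors.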

RevDate: 2025-09-29

Ala'anzy MA, Abilakim A, Zhanuzak R, et al (2025)

Real time smart parking system based on IoT and fog computing evaluated through a practical case study.

Scientific reports, 15(1):33483.

The increasing urban population and the growing preference for private transportation have led to a significant rise in vehicle numbers, exacerbating traffic congestion and parking challenges. Cruising for parking not only consumes time and fuel but also contributes to environmental and energy inefficiencies. Smart parking systems have emerged as essential solutions to these issues, addressing everyday urban challenges and enabling the development of smart, sustainable cities. By reducing traffic congestion and streamlining parking processes, these systems promote eco-friendly and efficient urban transportation. This paper introduces a provenance-based smart parking system leveraging fog computing to enhance real-time parking space management and resource allocation. The proposed system employs a hierarchical fog architecture with four layers of nodes for efficient data storage, transfer, and resource utilisation. The provenance component empowers users with real-time insights into parking availability, facilitating informed decision-making. Simulations conducted using the iFogSim2 toolkit evaluated the system across key metrics, including end-to-end latency, execution cost, execution time, network usage, and energy consumption in both fog- and cloud-based environments. A comparative analysis demonstrates that the fog-based approach significantly outperforms its cloud-based counterpart in terms of efficiency and responsiveness. Additionally, the system minimises network usage and optimises space utilisation, reducing the need for parking area expansion. A real-world case study from SDU University Park validated the proposed system, showcasing its effectiveness in managing parking spaces, particularly during peak hours.

RevDate: 2025-09-29
CmpDate: 2025-09-29

Yao S, Yu T, Ramos AFV, et al (2025)

Toward smart and in-situ mycotoxin detection in food via vibrational spectroscopy and machine learning.

Food chemistry: X, 31:103016.

Recent advances in vibrational spectroscopy combined with machine learning are enabling smart and in-situ detection of mycotoxins in complex food matrices. Infrared and spontaneous Raman spectroscopy detect molecular vibrations or compositional changes in host matrices, capturing direct or indirect mycotoxin fingerprints, while surface-enhanced Raman spectroscopy (SERS) amplifies characteristic mycotoxin molecular vibrations via plasmonic nanostructures, enabling ultra-sensitive detection. Machine learning further enhances analysis by extracting subtle and unique mycotoxin spectral features from information-rich spectra, suppressing noise, and enabling robust predictions across heterogeneous samples. This review critically examines recent sensing strategies, model development, application performance, non-destructive screening, and potential application challenges, highlighting strengths and limitations relative to conventional methods. Innovations in portable, miniaturized spectrometers integrated with cloud computation are also discussed, supporting scalable, rapid, and on-site mycotoxin monitoring. By integrating state-of-the-art vibrational fingerprints with computational analysis, these approaches provide a pathway toward sensitive, smart, and field-deployable mycotoxin detection in food.
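As a rough, hypothetical illustration of the kind of spectral preprocessing and chemometric pipeline surveyed above (the smoothing parameters, class labels, and synthetic data are assumptions, not taken from the review):

# Hypothetical sketch: classify mycotoxin-positive vs. blank samples from Raman/IR spectra
# using smoothing, dimensionality reduction, and an SVM.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 900))          # 120 spectra x 900 wavenumber bins (synthetic)
y = rng.integers(0, 2, size=120)         # 0 = blank matrix, 1 = spiked with mycotoxin (synthetic)

X_smooth = savgol_filter(X, window_length=11, polyorder=3, axis=1)   # denoise each spectrum

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(cross_val_score(model, X_smooth, y, cv=5).mean())              # chance-level on synthetic data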

RevDate: 2025-09-27

Mangalampalli SS, Reddy PV, Reddy Karri G, et al (2025)

Priority-Aware Multi-Objective Task Scheduling in Fog Computing Using Simulated Annealing.

Sensors (Basel, Switzerland), 25(18): pii:s25185744.

The number of IoT devices has been increasing at a rapid rate, and the advent of information-intensive Internet of Multimedia Things (IoMT) applications has placed serious challenges on computing infrastructure, especially for latency, energy efficiency, and responsiveness to tasks. The legacy cloud-centric approach cannot meet such requirements because it suffers from high latency and centralized resource allocation. To overcome such limitations, fog computing offers a decentralized model that brings computation closer to data sources and reduces latency. However, effective scheduling of tasks within heterogeneous and resource-limited fog environments is still an NP-hard problem, especially in multi-criteria optimization and priority-sensitive situations. This research work proposes a new simulated annealing (SA)-based task scheduling framework to perform multi-objective optimization for fog computing environments. The proposed model minimizes makespan, energy consumption, and execution cost, and integrates a priority-aware penalty function to provide high responsiveness to high-priority tasks. The SA algorithm searches the scheduling solution space by accepting potentially sub-optimal configurations during the initial iterations and further improving towards optimality as the temperature decreases. Experimental analyses on benchmark datasets obtained from Google Cloud Job Workloads demonstrate that the proposed approach outperforms ACO, PSO, I-FASC, and M2MPA approaches in terms of makespan, energy consumption, execution cost, and reliability at all task volume scales. These results confirm the proposed SA-based scheduler as a scalable and effective solution for smart task scheduling within fog-enabled IoT infrastructures.
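A minimal sketch of a simulated-annealing task-to-node assignment loop with a weighted multi-objective cost and a priority penalty; the cost terms, weights, and cooling schedule below are illustrative assumptions, not the paper's exact formulation:

# Hypothetical SA scheduler: assign tasks to fog nodes minimising a weighted sum of
# makespan, energy, and cost, with a penalty for delayed high-priority tasks.
import math, random

random.seed(1)
tasks = [{"mi": random.randint(200, 2000), "priority": random.choice([0, 1])} for _ in range(40)]
nodes = [{"mips": random.choice([500, 1000, 2000]), "watt": random.uniform(5, 20),
          "cost": random.uniform(0.01, 0.05)} for _ in range(6)]

def cost(assign):
    finish = [0.0] * len(nodes)
    energy = money = penalty = 0.0
    for t, n in zip(tasks, assign):
        rt = t["mi"] / nodes[n]["mips"]
        finish[n] += rt
        energy += rt * nodes[n]["watt"]
        money += rt * nodes[n]["cost"]
        if t["priority"] and finish[n] > 2.0:       # penalise late high-priority tasks
            penalty += finish[n] - 2.0
    return 0.5 * max(finish) + 0.3 * energy / 100 + 0.1 * money + 0.1 * penalty

current = [random.randrange(len(nodes)) for _ in tasks]
best_cost, temp = cost(current), 10.0
while temp > 0.01:
    cand = list(current)
    cand[random.randrange(len(tasks))] = random.randrange(len(nodes))    # perturb one assignment
    delta = cost(cand) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temp):           # accept worse moves early on
        current = cand
        best_cost = min(best_cost, cost(current))
    temp *= 0.95                                                         # geometric cooling
print(round(best_cost, 3))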

RevDate: 2025-09-27

Arockiyadoss MA, Yao CK, Liu PC, et al (2025)

Spectral Demodulation of Mixed-Linewidth FBG Sensor Networks Using Cloud-Based Deep Learning for Land Monitoring.

Sensors (Basel, Switzerland), 25(18): pii:s25185627.

Fiber Bragg grating (FBG) sensing systems face significant challenges in resolving overlapping spectral signatures when multiple sensors operate within limited wavelength ranges, severely limiting sensor density and network scalability. This study introduces a novel Transformer-based neural network architecture that effectively resolves spectral overlap in both uniform and mixed-linewidth FBG sensor arrays, operating under bidirectional drift. The system uniquely combines dual-linewidth configurations with reflection and transmission mode fusion to enhance demodulation accuracy and sensing capacity. By integrating cloud computing, the model enables scalable deployment and near-real-time inference even in large-scale monitoring environments. The proposed approach supports self-healing functionality through dynamic switching between spectral modes during fiber breaks and enhances resilience against spectral congestion. Comprehensive evaluation across twelve drift scenarios demonstrates exceptional demodulation performance under severe spectral overlap conditions that challenge conventional peak-finding algorithms. This breakthrough establishes a new paradigm for high-density, distributed FBG sensing networks applicable to land monitoring, soil stability assessment, groundwater detection, maritime surveillance, and smart agriculture.

RevDate: 2025-09-27

Wang Y, Tang Z, Qian G, et al (2025)

A Prototype of a Lightweight Structural Health Monitoring System Based on Edge Computing.

Sensors (Basel, Switzerland), 25(18): pii:s25185612.

Bridge Structural Health Monitoring (BSHM) is vital for assessing structural integrity and operational safety. Traditional wired systems are limited by high installation costs and complexity, while existing wireless systems still face issues with cost, synchronization, and reliability. Moreover, cloud-based methods for extreme event detection struggle to meet real-time and bandwidth constraints in edge environments. To address these challenges, this study proposes a lightweight wireless BSHM system based on edge computing, enabling local data acquisition and real-time intelligent detection of extreme events. The system consists of wireless sensor nodes for front-end acceleration data collection and an intelligent hub for data storage, visualization, and earthquake recognition. Acceleration data are converted into time-frequency images to train a MobileNetV2-based model. With model quantization and Neural Processing Unit (NPU) acceleration, efficient on-device inference is achieved. Experiments on a laboratory steel bridge verify the system's high acquisition accuracy, precise clock synchronization, and strong anti-interference performance. Compared with inference on a general-purpose ARM CPU running the unquantized model, the quantized model deployed on the NPU achieves a 26× speedup in inference, a 35% reduction in power consumption, and less than 1% accuracy loss. This solution provides a cost-effective, reliable BSHM framework for small-to-medium-sized bridges, offering local intelligence and rapid response with strong potential for real-world applications.
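A small, hedged sketch of the post-training quantization step described above, using the public TensorFlow Lite converter API on a stand-in MobileNetV2 classifier; the input shape, class count, and output filename are placeholders, not the authors' configuration:

# Hypothetical sketch: quantize a MobileNetV2-based classifier for on-device inference.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3), include_top=False,
                                          weights=None, pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]    # post-training quantization
tflite_model = converter.convert()

with open("bshm_event_classifier.tflite", "wb") as f:   # deployable on an edge hub / NPU runtime
    f.write(tflite_model)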

RevDate: 2025-09-26

Reddy CL, K Malathi (2025)

Revolutionary hybrid ensembled deep learning model for accurate and robust side-channel attack detection in cloud computing.

Scientific reports, 15(1):32949.

Cryptographic systems are essential for securing sensitive information but are increasingly susceptible to side-channel attacks (SCAs) that exploit physical data leakages. In cloud computing environments, where resources are shared across multiple tenants, detecting SCAs becomes particularly challenging due to increased noise and complex data patterns. This study aims to develop a robust detection model for SCAs in cloud environments, leveraging deep learning techniques to capture the multi-dimensional characteristics of power traces while ensuring scalability and accuracy. We propose a hybrid ensembled deep learning (HEDL) model that integrates convolutional neural networks (CNN), long short-term memory (LSTM) networks, and AutoEncoders, enhanced by an attention mechanism to focus on the most critical data segments. The model was trained and evaluated on the ASCAD dataset, a benchmark dataset for SCA research, and was implemented in a cloud environment to assess real-time detection capabilities. The HEDL model achieved a detection accuracy of 98.65%, significantly outperforming traditional machine learning and standalone deep learning models in both clean and noisy data conditions. The attention mechanism improved the model's focus on key data segments, reducing computational demands and enhancing detection precision. The proposed HEDL model demonstrates superior robustness and accuracy in SCA detection within noisy cloud environments, marking a significant advancement in cloud-based cryptographic security.

RevDate: 2025-09-25
CmpDate: 2025-09-26

Adebangbe SA, Dixon DP, B Barrett (2025)

Evaluating contaminated land and the environmental impact of oil spills in the Niger Delta region: a remote sensing-based approach.

Environmental monitoring and assessment, 197(10):1149.

The Niger Delta region of Nigeria is a major oil-producing area which experiences frequent oil spills that severely impact the local environment and communities. Effective environmental monitoring and management remain inadequate in this area due to negligence, slow response times following oil spills, and difficulties regarding access and safety. This study investigates the pervasive issue of oil spills in the Niger Delta region by employing a remote sensing approach, leveraging geospatial cloud computing and machine learning to evaluate vegetation health indices (SR, SR2, NDVI, EVI2, GRNDVI, GNDVI) derived from PlanetScope satellite data. These indices were analysed using Slow Moving Average regression, which revealed significant declines in vegetation health following oil spill events. The contaminated landcovers exhibit Spearman's correlation coefficients (ρ) ranging from -0.68 to -0.82 (P < 0.005), with P-values below 0.05 in most landcover categories, suggesting a clear and consistent downward trend in the indices' values and reflecting a decrease in vegetation health in contaminated areas between 2016 and 2023. A random forest classifier further quantified the extent of contaminated land cover, demonstrating the effectiveness of this method for monitoring environmental damage in this challenging terrain. Contaminated vegetation, wetland, farmland, and grassland cover approximately 4% (1180 ha) of the total Niger Delta area. This integrated approach will enable decision-makers, including government agencies and oil companies, to gain a deeper understanding of the environmental consequences of oil pollution and implement targeted mitigation and remediation strategies.
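A minimal sketch of the kind of rank-correlation trend test reported above, applied to a synthetic vegetation-index time series; the values are illustrative, not the study's data:

# Hypothetical sketch: Spearman rank correlation between time and a vegetation index
# to test for a monotonic post-spill decline.
import numpy as np
from scipy.stats import spearmanr

years = np.arange(2016, 2024)
ndvi = np.array([0.62, 0.60, 0.55, 0.51, 0.47, 0.44, 0.40, 0.38])   # synthetic declining NDVI

rho, p = spearmanr(years, ndvi)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")    # a strongly negative rho indicates decline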

RevDate: 2025-09-25

Schlenz MA, Chillemi L, B Wöstmann (2025)

Clinical Study on the Accuracy of Wireless Intraoral Scanners for Digital Full Arch Impressions of Dentate Arches.

Journal of dentistry pii:S0300-5712(25)00578-0 [Epub ahead of print].

OBJECTIVE: The aim of this clinical study was to update the literature on the scan accuracy (trueness and precision) of four modern wireless intraoral scanners (IOS) and to compare their performance with wired IOS and conventional impressions (CVI). A metallic reference aid was employed as the reference dataset.

METHODS: Digital impressions were obtained from four wireless IOS (Dexis IS 3800W, Medit i700, Primescan 2, and Trios 5), one wired IOS (Primescan AC), and one CVI in thirty patients. Scan data were analysed using 3D software, and CVI dental stone casts were evaluated using a coordinate measuring machine. Scan accuracy between the reference aid and the various impression systems was compared. Statistical analysis was performed using mixed-effects ANOVA models, with significance set at p < 0.05.

RESULTS: Statistically significant differences in trueness and precision were observed between the impression systems (p < 0.05). A significant interaction between impression system and linear distance (p < 0.05) indicated that performance varied depending on the length of the scan path. The Dexis IS 3800W and Medit i700 exhibited the greatest deviations, whereas the cloud-native Primescan 2 demonstrated comparable or superior accuracy to other impression systems.

CONCLUSIONS: Within the limitations of this clinical study, the overall accuracy of CVI remained high. Accuracy was influenced by both the impression system and the length of the scan path, with smaller deviations observed over short distances and increased inaccuracies over longer distances, particularly in diagonal and intermolar regions.

CLINICAL SIGNIFICANCE: Wireless IOS demonstrated statistically significant differences in certain cases, highlighting the importance of carefully evaluating the performance of each system individually.

RevDate: 2025-09-25
CmpDate: 2025-09-25

Ahmad SZ, Qamar F, Alshehri H, et al (2025)

A GAN-Based Approach for enhancing security in satellite based IoT networks using MPI enabled HPC.

PloS one, 20(9):e0331019 pii:PONE-D-25-23842.

Satellite-based Internet of Things (IoT) networks are becoming increasingly critical for mission-critical applications, including disaster recovery, environmental surveillance, and remote sensing. While becoming more widespread, they are also more vulnerable to various risks, particularly due to the heterogeneous communication technologies they support and the limited computing capacity on each device. When such IoT systems are connected with central High-Performance Computing (HPC) clouds, particularly by satellite links, new security issues arise, the primary one being the secure transmission of confidential information. To overcome such challenges, this research proposes a new security framework termed DLGAN (Deep Learning-based Generative Adversarial Network), specially designed for satellite-based IoT scenarios. The model leverages the strengths of Convolutional Neural Networks (CNNs) for real-time anomaly detection, combined with Generative Adversarial Networks (GANs) to generate realistic synthetic attack data, thereby addressing the challenge of skewed datasets prevalent in cybersecurity research. Since training GANs may be computationally expensive, the model is optimized to run on an HPC system via the Message Passing Interface (MPI) to enable scalable parallel processing of huge IoT data. Fundamentally, the DLGAN model is based on a generator/discriminator mechanism for effectively distinguishing network traffic as either benign or malicious, with the capability to detect 14 different types of attacks. By harnessing AI-enabled GPUs in the HPC cloud, the system can provide fast and accurate detection while maintaining low computational costs. Experimental evaluations demonstrate that the framework significantly enhances detection accuracy, reduces training time, and scales well with large data volumes, making it highly suitable for real-time security operations. In total, this study highlights how integrating advanced deep learning technologies with HPC-based distributed environments can deliver an efficient and dynamic defense mechanism for contemporary IoT networks. The envisaged solution is unique in its ability to scale, maximize efficiency, and resist attacks while securing satellite-based IoT infrastructures.
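A brief sketch of the MPI-based data-partitioning pattern referred to above, using mpi4py to split a training set across ranks; the array shapes, shard sizes, and the averaging step standing in for gradient aggregation are placeholders, not the authors' implementation:

# Hypothetical sketch: scatter training data across MPI ranks for parallel model training.
# Run with e.g.:  mpiexec -n 4 python train_partition.py   (script name is a placeholder)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    data = np.random.default_rng(0).normal(size=(4000, 64)).astype("float32")
    shards = np.array_split(data, size)               # one shard of traffic features per rank
else:
    shards = None

local = comm.scatter(shards, root=0)                  # each rank trains on its own shard
local_mean = float(local.mean())
global_mean = comm.allreduce(local_mean, op=MPI.SUM) / size   # stand-in for gradient averaging
if rank == 0:
    print(f"{size} ranks, global feature mean ~ {global_mean:.4f}")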

RevDate: 2025-09-25

Lee Y, Chen R, S Bhattacharyya (2025)

An Online Learning Framework for Neural Decoding in Embedded Neuromodulation Systems.

Brain connectivity [Epub ahead of print].

Introduction: Advancements in brain-computer interfaces (BCIs) have improved real-time neural signal decoding, enabling adaptive closed-loop neuromodulation. These systems dynamically adjust stimulation parameters based on neural biomarkers, enhancing treatment precision and adaptability. However, existing neuromodulation frameworks often depend on high-power computational platforms, limiting their feasibility for portable, real-time applications. Methods: We propose RONDO (Recursive Online Neural DecOding), a resource-efficient neural decoding framework that employs dynamic updating schemes in online learning with recurrent neural networks (RNNs). RONDO supports simple RNNs, long short-term memory networks, and gated recurrent units, allowing flexible adaptation to different signal types, accuracy requirements, and real-time constraints. Results: Experimental results show that RONDO's adaptive model updating improves neural decoding accuracy by 35% to 45% compared to offline learning. Additionally, RONDO operates within real-time constraints of neuroimaging devices without requiring cloud-based or high-performance computing. Its dynamic updating scheme ensures high accuracy with minimal updates, improving energy efficiency and robustness in resource-limited settings. Conclusions: RONDO presents a scalable, adaptive, and energy-efficient solution for real-time closed-loop neuromodulation, eliminating reliance on cloud computing. Its flexibility makes it a promising tool for clinical and research applications, advancing personalized neurostimulation and adaptive BCIs.

RevDate: 2025-09-24

Mehrtabar S, Marey A, Desai A, et al (2025)

Ethical Considerations in Patient Privacy and Data Handling for AI in Cardiovascular Imaging and Radiology.

Journal of imaging informatics in medicine [Epub ahead of print].

The integration of artificial intelligence (AI) into cardiovascular imaging and radiology offers the potential to enhance diagnostic accuracy, streamline workflows, and personalize patient care. However, the rapid adoption of AI has introduced complex ethical challenges, particularly concerning patient privacy, data handling, informed consent, and data ownership. This narrative review explores these issues by synthesizing literature from clinical, technical, and regulatory perspectives. We examine the tensions between data utility and data protection, the evolving role of transparency and explainable AI, and the disparities in ethical and legal frameworks across jurisdictions such as the European Union, the USA, and emerging players like China. We also highlight the vulnerabilities introduced by cloud computing, adversarial attacks, and the use of commercial datasets. Ethical frameworks and regulatory guidelines are compared, and proposed mitigation strategies such as federated learning, blockchain, and differential privacy are discussed. To ensure ethical implementation, we emphasize the need for shared accountability among clinicians, developers, healthcare institutions, and policymakers. Ultimately, the responsible development of AI in medical imaging must prioritize patient trust, fairness, and equity, underpinned by robust governance and transparent data stewardship.

RevDate: 2025-09-24
CmpDate: 2025-09-24

Chen Y, Chan WH, Su ELM, et al (2025)

Multi-objective optimization for smart cities: a systematic review of algorithms, challenges, and future directions.

PeerJ. Computer science, 11:e3042.

With the growing complexity and interdependence of urban systems, multi-objective optimization (MOO) has become a critical tool for smart-city planning, sustainability, and real-time decision-making. This article presents a systematic literature review (SLR) of 117 peer-reviewed studies published between 2015 and 2025, assessing the evolution, classification, and performance of MOO techniques in smart-city contexts. Existing algorithms are organised into four families (bio-inspired, mathematical theory-driven, physics-inspired, and machine-learning-enhanced) and benchmarked for computational efficiency, scalability, and scenario suitability across six urban domains: infrastructure, energy, transportation, Internet of Things (IoT)/cloud systems, agriculture, and water management. While established methods such as Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D) remain prevalent, hybrid frameworks that couple deep learning with evolutionary search display superior adaptability in high-dimensional, dynamic environments. Persistent challenges include limited cross-domain generalisability, inadequate uncertainty handling, and low interpretability of artificial intelligence (AI)-assisted models. Twelve research gaps are synthesised, ranging from privacy-preserving optimisation and sustainable trade-off resolution to integration with digital twins, large language models, and neuromorphic computing, and a roadmap towards scalable, interpretable, and resilient optimisation frameworks is outlined. Finally, a ready-to-use benchmarking toolkit and a deployment-oriented algorithm-selection matrix are provided to guide researchers, engineers, and policy-makers in real-world smart-city applications. This review targets interdisciplinary researchers, optimisation developers, and smart-city practitioners seeking to apply or advance MOO techniques in complex urban systems.

RevDate: 2025-09-24
CmpDate: 2025-09-24

Huang W, Tian H, Wang L, et al (2025)

SA3C-ID: a novel network intrusion detection model using feature selection and adversarial training.

PeerJ. Computer science, 11:e3089.

With the continuous proliferation of emerging technologies such as cloud computing, 5G networks, and the Internet of Things, the field of cybersecurity is facing an increasing number of complex challenges. Network intrusion detection systems, as a fundamental part of network security, have become increasingly significant. However, traditional intrusion detection methods exhibit several limitations, including insufficient feature extraction from network data, high model complexity, and data imbalance, which result in issues like low detection efficiency, as well as frequent false positives and missed alarms. To address the above issues, this article proposes an adversarial intrusion detection model (Soft Adversarial Asynchronous Actor-Critic Intrusion Detection, SA3C-ID) based on reinforcement learning. Firstly, the raw dataset is preprocessed via one-hot encoding and standardization. Subsequently, the refined data undergoes feature selection employing an improved pigeon-inspired optimizer (PIO) algorithm. This operation eliminates redundant and irrelevant features, consequently reducing data dimensionality while maintaining critical information. Next, the network intrusion detection process is modeled as a Markov decision process and integrated with the Soft Actor-Critic (SAC) reinforcement learning algorithm, with a view to constructing agents. In the context of adversarial training, two agents, designated as the attacker and the defender, are defined to perform asynchronous adversarial training. During this training process, both agents calculate the reward value, update their respective strategies, and transfer parameters based on the classification results. Finally, to verify the robustness and generalization ability of the SA3C-ID model, ablation experiments and comparative evaluations are conducted on two benchmark datasets, NSL-KDD and CSE-CIC-IDS2018. The experimental results demonstrate that SA3C-ID exhibits superior performance in comparison to other prevalent intrusion detection models. The F1-score attained by SA3C-ID was 92.58% and 98.76% on the NSL-KDD and CSE-CIC-IDS2018 datasets, respectively.

RevDate: 2025-09-24
CmpDate: 2025-09-24

Jenifer P, J Angela Jennifa Sujana (2025)

Quality of experience-aware application deployment in fog computing environments using machine learning.

PeerJ. Computer science, 11:e3143.

Edge intelligence is fast becoming indispensable as billions of sensors demand real-time inference without saturating backbone links or exposing sensitive data in remote data centres, aided by emerging artificial intelligence (AI)-edge boards such as NVIDIA devices with 16 GB RAM and microcontrollers with on-chip neural processing units (NPUs) drawing less than 1 W. This article introduces the Energy-Smart Component Placement (ESCP) algorithm, which organises fog devices such as fog cluster manager nodes (FCMNs) and fog nodes (FNs), allocates application modules to fog devices, and saves energy by deactivating inactive devices, while the framework transparently distributes compressed neural workloads across the serverless cloud, fog, and extreme edge layers. To optimise the deployment of AI workloads on fog edge devices as a service (FEdaaS), this work aims to provide a reliable and dynamic architecture that guarantees quality of service (QoS) and quality of experience (QoE) at the application level. Two machine learning (ML) methods are fused, eXtreme Gradient Boosting (XGB)-based instantaneous QoS scoring and long short-term memory (LSTM) forecasting of node congestion, feeding a meta-heuristic scheduler that uses the XGB scores and short-horizon LSTM load forecasts. Compared with a cloud-only baseline, ESCP improved bandwidth utilization by 5.2%, scalability (requests per second) by 3.2%, energy consumption by 3.8%, and response time by 2.1%, while maintaining prediction accuracy within +0.4%. The results confirm that low-resource AI-edge devices, when orchestrated through our adaptive framework, can meet QoE targets such as 250 ms latency and 24 h of battery life. Future work will explore federated on-device learning to enhance data privacy, extend the scheduler to neuromorphic processors, and validate the architecture in real-time intensive care and smart city deployments.

RevDate: 2025-09-23

Castilla-Puentes R, Isidoro AF, Orosito A, et al (2025)

Perinatal bereavement rooms: a narrative review of physical space in perinatal grief.

Archives of gynecology and obstetrics [Epub ahead of print].

BACKGROUND: Perinatal loss is a profoundly complex form of grief, often linked to heightened risk of prolonged bereavement and adverse mental health outcomes. Perinatal grief rooms-private, supportive spaces within healthcare settings-aim to help families process their loss, spend time with their baby, and create meaningful memories in a respectful environment. While bereavement care has received growing attention, the role of the physical environment in supporting grief remains underexplored.

OBJECTIVE: To synthesize current evidence on how dedicated physical spaces can support individuals and families after perinatal loss, and to identify priorities for research, design standards, and interdisciplinary collaboration.

METHODS: A narrative review was conducted in accordance with PRISMA-ScR guidelines. Literature searches were performed across PubMed, PsycINFO, Medline (OVID), Embase, ScienceDirect, SCOPUS, SciELO, and Google Scholar using terms, such as "perinatal grief rooms", "bereavement rooms", "angel suites", "butterfly suites", "snowdrop suites", "cloud rooms", "designated units for perinatal loss", and "birthing + bereavement suites". The review examined (1) the current role of physical spaces in the perinatal loss experience, and (2) how their availability and design may influence grief outcomes.

RESULTS: Of the 17 articles meeting inclusion criteria, only 4 (24%) referenced bereavement rooms, and just 3 (18%) noted the need for formal protocols-without offering concrete examples. No studies evaluated implementation, design standards, or measurable impact on grief, mental health, or family well-being. This lack of empirical evidence and standardized guidance underscores a critical gap that limits integration of therapeutic environments into perinatal bereavement care.

CONCLUSION: Despite increasing recognition of the importance of bereavement care, dedicated grief rooms remain under-researched and inconsistently implemented. Advancing this field will require rigorously designed studies, development of design standards, and collaborative partnerships among healthcare providers, researchers, policymakers, and design experts to ensure equitable access to therapeutic spaces for grieving families.

RevDate: 2025-09-23

Ying X, Zhang Q, Jiang H, et al (2025)

High isolation, low inter-channel interference, eight-channel LAN-WDM SiPh transceiver for reliable Tbps transmission.

Optics express, 33(16):34052-34067.

The rapid growth of artificial intelligence (AI) inference, training, and cloud computing has driven continuously growing demands for data transmission bandwidth and rate, enlarging the scale and number of modern data centers. However, high-speed, long-reach (LR, ∼10 km) data center interconnection (DCI) faces significant performance degradation caused by device nonlinearity, optical link loss, channel interference, and other impairments when adopting a wavelength-division multiplexing (WDM) architecture. This work establishes an 8-channel multiplexer (MUX)/demultiplexer (DeMUX)-based optoelectronic transceiver scheme with high isolation, low inter-channel interference, and polarization-insensitive features to minimize the four-wave mixing (FWM) interference for reliable Tbps DCI transmission. What we believe to be a novel scheme is applied to an elaborately designed 8-channel intensity modulation direct detection (IM-DD) silicon photonic (SiPh) transceiver system for the LR8 Tbps DCI-Campus (∼10 km transmission) scenario. Experimental results demonstrate a significant performance improvement of 200 Gbps, with a total 1.1 Tbps transmission rate, ultra-high channel isolation (>45 dB), thorough polarization-insensitive inter-channel interference suppression, high signal-to-noise ratio (SNR), as well as good channel response uniformity.

RevDate: 2025-09-22

Glatt-Holtz NE, Holbrook AJ, Krometis JA, et al (2024)

Parallel MCMC algorithms: theoretical foundations, algorithm design, case studies.

Transactions of mathematics and its applications : a journal of the IMA, 8(2):.

Parallel Markov Chain Monte Carlo (pMCMC) algorithms generate clouds of proposals at each step to efficiently resolve a target probability distribution μ. We build a rigorous foundational framework for pMCMC algorithms that situates these methods within a unified 'extended phase space' measure-theoretic formalism. Drawing on our recent work that provides a comprehensive theory for reversible single-proposal methods, we herein derive general criteria for multiproposal acceptance mechanisms that yield ergodic chains on general state spaces. Our formulation encompasses a variety of methodologies, including proposal cloud resampling and Hamiltonian methods, while providing a basis for the derivation of novel algorithms. In particular, we obtain a top-down picture for a class of methods arising from 'conditionally independent' proposal structures. As an immediate application of this formalism, we identify several new algorithms including a multiproposal version of the popular preconditioned Crank-Nicolson (pCN) sampler suitable for high- and infinite-dimensional target measures that are absolutely continuous with respect to a Gaussian base measure. To supplement the aforementioned theoretical results, we carry out a selection of numerical case studies that evaluate the efficacy of these novel algorithms. First, noting that the true potential of pMCMC algorithms arises from their natural parallelizability and the ease with which they map to modern high-performance computing architectures, we provide a limited parallelization study using TensorFlow and a graphics processing unit to scale pMCMC algorithms that leverage as many as 100k proposals at each step. Second, we use our multiproposal pCN algorithm (mpCN) to resolve a selection of problems in Bayesian statistical inversion for partial differential equations motivated by fluid measurement. These examples provide preliminary evidence of the efficacy of mpCN for high-dimensional target distributions featuring complex geometries and multimodal structures.
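For intuition, a toy Python sketch of the standard single-proposal preconditioned Crank-Nicolson (pCN) sampler for a target absolutely continuous with respect to a standard Gaussian; the paper's mpCN variant draws a cloud of such proposals per step with its own acceptance mechanism, which is not reproduced here:

# Toy pCN sampler for a target with density proportional to exp(-Phi(x)) against N(0, I).
import numpy as np

rng = np.random.default_rng(0)
d, beta, n_iter = 10, 0.3, 20000

def phi(x):                                    # negative log-likelihood part of the target
    return 0.5 * np.sum((x - 1.0) ** 2)

x = rng.standard_normal(d)
samples = []
for _ in range(n_iter):
    prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(d)   # preserves N(0, I)
    if np.log(rng.uniform()) < phi(x) - phi(prop):   # accept with prob. min(1, e^{Phi(x)-Phi(x')})
        x = prop
    samples.append(x.copy())
print(np.mean(samples[n_iter // 2:], axis=0).round(2))   # posterior mean is 0.5 per coordinate here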

RevDate: 2025-09-22
CmpDate: 2025-09-22

Gershkovich P (2025)

Wearing a fur coat in the summertime: Should digital pathology redefine medical imaging?.

Journal of pathology informatics, 18:100450.

Slides are data. Once digitized, they function like any enterprise asset: accessible anywhere, ready for AI, and integrated into cloud workflows. But in pathology, they enter a realm of clinical complexity-demanding systems that handle nuance, integrate diverse data streams, scale effectively, enable computational exploration, and enforce rigorous security. Although the Digital Imaging and Communications in Medicine (DICOM) standard revolutionized radiology, it is imperative to explore its adequacy in addressing modern digital pathology's orchestration needs. Designed more than 30 years ago, DICOM reflects assumptions and architectural choices that predate modular software, cloud computing, and AI-driven workflows. This article shows that by embedding metadata, annotations, and communication protocols into a unified container, DICOM limits interoperability and exposes architectural vulnerabilities. The article begins by examining these innate design risks, then challenges DICOM's interoperability claims, and ultimately presents a modular, standards-aligned alternative. The article argues that separating image data from orchestration logic improves scalability, security, and performance. Standards such as HL7 FHIR (Health Level Seven Fast Healthcare Interoperability Resources) and modern databases manage clinical metadata; formats like Scalable Vector Graphics handle annotations; and fast, cloud-native file transfer protocols, and microservices support tile-level image access. This separation of concerns allows each component to evolve independently, optimizes performance across the system, and better adapts to emerging AI-driven workflows-capabilities that are inherently constrained in monolithic architectures where these elements are tightly coupled. It further shows that security requirements should not be embedded within the DICOM standard itself. Instead, security must be addressed through a layered, format-independent framework that spans systems, networks, applications, and data governance. Security is not a discrete feature but an overarching discipline-defined by its own evolving set of standards and best practices. Overlays such as those outlined in the National Institute of Standards and Technology SP 800-53 support modern Transport Layer Security, single sign-on, cryptographic hashing, and other controls that protect data streams without imposing architectural constraints or restricting technological choices. Pathology stands at a rare inflection point. Unlike radiology, where DICOM is deeply entrenched, pathology workflows still operate in polyglot environments-leveraging proprietary formats, hybrid standards, and emerging cloud-native tools. This diversity, often seen as a limitation, offers a clean slate: an opportunity to architect a modern, modular infrastructure free from legacy constraints. While a full departure from DICOM is unnecessary, pathology is uniquely positioned to prototype the future-to define a more flexible, secure, and interoperable model that other domains in medical imaging may one day follow. With support from forward-looking DICOM advocates, pathology can help reshape not just its own infrastructure, but the trajectory of medical imaging itself.

RevDate: 2025-09-22
CmpDate: 2025-09-22

Demattê JAM, Poppiel RR, Novais JJM, et al (2025)

Frontiers in earth observation for global soil properties assessment linked to environmental and socio-economic factors.

Innovation (Cambridge (Mass.)), 6(9):100985.

Soil has garnered global attention for its role in food security and climate change. Fine-scale soil-mapping techniques are urgently needed to support food, water, and biodiversity services. A global soil dataset integrated into an Earth observation system and supported by cloud computing enabled the development of the first global soil grid of six key properties at a 90-m spatial resolution. Assessing them from environmental and socio-economic perspectives, we demonstrated that 64% of the world's topsoils are primarily sandy, with low fertility and high susceptibility to degradation. These conditions limit crop productivity and highlight potential risks to food security. Results reveal that approximately 900 Gt of soil organic carbon (SOC) is stored up to 20 cm deep. Arid biomes store three times more SOC than mangroves based on total areas. SOC content in agricultural soils is reduced by at least 60% compared to soils under natural vegetation. Most agricultural areas are being fertilized while simultaneously experiencing a depletion of the carbon pool. By integrating soil capacity with economic and social factors, we highlight the critical role of soil in supporting societal prosperity. The top 10 largest countries in area per continent store 75% of the global SOC stock. However, the poorest countries face rapid organic matter degradation. We indicate an interconnection between societal growth and spatially explicit mapping of soil properties. This soil-human nexus establishes a geographically based link between soil health and human development. It underscores the importance of soil management in enhancing agricultural productivity and promotes sustainable-land-use planning.

RevDate: 2025-09-22
CmpDate: 2025-09-22

Thapa N, Nepali S, Shrestha R, et al (2025)

Time series flood mapping using the Copernicus dataset in Google Earth Engine of the Mountainous Region.

Data in brief, 62:112010.

In mountainous countries like Nepal, floods are a major challenge due to complex topography, intense snowmelt, and highly variable monsoon rainfall that drive frequent flooding events. This study focuses on the Hilly and Himalayan regions of Nepal, where flood monitoring and risk management are increasingly important for safeguarding vulnerable communities and infrastructure. This study presents a high-resolution, time-series flood extent dataset derived from Copernicus Sentinel-2 Level-2A imagery at a 10-meter spatial resolution, covering the years 2019 to 2023. Flood mapping was performed using the Normalized Difference Vegetation Index (NDVI) combined with region-specific thresholding. NDVI values below 0 represent open water, while values between 0 and 0.1 often indicate mud or bare soil. A threshold of NDVI <0.019 was applied to identify flood-affected areas in the hilly region in order to capture debris-flow-type floods, whereas NDVI <0 was used for the Himalayan region, because the presence of snow and water complicated classification due to their spectral similarity with other features. Snow-covered areas were masked using the Copernicus Global Land Cover dataset to improve accuracy in the high-altitude zones. Data processing was performed on the Google Earth Engine (GEE) platform. Monsoon-season image composites were generated after applying cloud masking using the Scene Classification Layer (SCL), and temporal cloud gaps were filled using post-monsoon imagery to ensure continuous temporal data. The resulting flood extent maps reveal consistent spatial patterns and provide critical data for flood forecasting, risk-sensitive land use planning, and interdisciplinary studies. Despite challenges with cloud interference and complex terrain, this dataset offers valuable insights into flood dynamics across Nepal's mountainous landscape.
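A hedged sketch of the NDVI-threshold flood-masking workflow described above, using the Earth Engine Python API; the area of interest, dates, cloud filter, and the simplified SCL masking below are assumptions, and only the two NDVI thresholds come from the dataset description:

# Hypothetical sketch: NDVI-threshold flood mask from Sentinel-2 L2A in Google Earth Engine.
import ee

ee.Initialize()                                           # assumes prior `earthengine authenticate`

region = ee.Geometry.Rectangle([85.0, 27.5, 86.0, 28.3])  # placeholder AOI in Nepal
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(region)
      .filterDate("2023-06-01", "2023-09-30")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 40)))

def mask_clouds(img):
    scl = img.select("SCL")                               # Scene Classification Layer
    clear = scl.neq(8).And(scl.neq(9)).And(scl.neq(10))   # drop cloud / cirrus classes
    return img.updateMask(clear)

composite = s2.map(mask_clouds).median()
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
flood_hilly = ndvi.lt(0.019)                              # hilly-region threshold from the dataset
flood_himalayan = ndvi.lt(0)                              # Himalayan-region threshold
print(flood_hilly.getInfo()["bands"][0]["id"])            # sanity check that the mask band exists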

RevDate: 2025-09-19
CmpDate: 2025-09-19

Jang WD, Gu C, Noh Y, et al (2025)

ChemBounce: a computational framework for scaffold hopping in drug discovery.

Bioinformatics (Oxford, England), 41(9):.

SUMMARY: Scaffold hopping is a critical strategy in medicinal chemistry for generating novel and patentable drug candidates. Here, we present ChemBounce, a computational framework designed to facilitate scaffold hopping by generating structurally diverse scaffolds with high synthetic accessibility. Given a user-supplied molecule in SMILES format, ChemBounce identifies the core scaffolds and replaces them using a curated in-house library of over 3 million fragments derived from the ChEMBL database. The generated compounds are evaluated based on Tanimoto and electron shape similarities to ensure retention of pharmacophores and potential biological activity. By enabling systematic exploration of unexplored chemical space, ChemBounce represents a valuable tool for hit expansion and lead optimization in modern drug discovery.

The source code for ChemBounce is available at https://github.com/jyryu3161/chembounce. In addition, a cloud-based implementation of ChemBounce is available as a Google Colaboratory notebook.
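For orientation, a small sketch of the Tanimoto similarity filter mentioned in the summary, computed with RDKit Morgan fingerprints on two example SMILES; the molecules, fingerprint settings, and any cutoff are arbitrary illustrations, not ChemBounce defaults:

# Hypothetical sketch: score a scaffold-hopped candidate against a query molecule
# by Tanimoto similarity of Morgan fingerprints.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")       # aspirin as a stand-in query
candidate = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")      # paracetamol as a stand-in candidate

fp_q = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
fp_c = AllChem.GetMorganFingerprintAsBitVect(candidate, 2, nBits=2048)

sim = DataStructs.TanimotoSimilarity(fp_q, fp_c)
print(f"Tanimoto similarity: {sim:.2f}")                  # keep candidates above a chosen cutoff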

RevDate: 2025-09-19

Li X, Wood AR, Yuan Y, et al (2025)

Streamlining large-scale genomic data management: Insights from the UK Biobank whole-genome sequencing data.

Cell genomics pii:S2666-979X(25)00265-4 [Epub ahead of print].

Biobank-scale whole-genome sequencing (WGS) studies are increasingly pivotal in unraveling the genetic bases of diverse health outcomes. However, the sheer volume and complexity of these datasets present significant challenges for data management and analysis. We highlight the annotated genomic data structure (aGDS) format, which substantially reduces the WGS data file size while enabling seamless integration of genomic and functional information for comprehensive WGS analyses. The aGDS format yielded 23 chromosome-specific files for the UK Biobank 500k WGS dataset, occupying only 1.10 tebibytes of storage. We develop the vcf2agds toolkit that streamlines the conversion of WGS data from VCF to aGDS format. Additionally, the STAARpipeline equipped with the aGDS files enabled scalable, comprehensive, and functionally informed WGS analysis, facilitating the detection of common and rare coding and noncoding phenotype-genotype associations. Overall, the vcf2agds toolkit and STAARpipeline provide a streamlined solution that facilitates efficient data management and analysis of biobank-scale WGS data across hundreds of thousands of samples.

RevDate: 2025-09-19

Wang J, Garthwaite MC, Wang C, et al (2025)

Development of a Multi-Sensor GNSS-IoT System for Precise Water Surface Elevation Measurement.

Sensors (Basel, Switzerland), 25(11): pii:s25113566.

The Global Navigation Satellite System (GNSS), Internet of Things (IoT) and cloud computing technologies enable high-precision positioning with flexible data communication, making real-time/near-real-time monitoring more economical and efficient. In this study, a multi-sensor GNSS-IoT system was developed for measuring precise water surface elevation (WSE). The system, which includes ultrasonic and accelerometer sensors, was deployed on a floating platform in Googong reservoir, Australia, over a four-month period in 2024. WSE data derived from the system were compared against independent reference measurements from the reservoir operator, achieving an accuracy of 7 mm for 6 h averaged solutions and 28 mm for epoch-by-epoch solutions. The results demonstrate the system's potential for remote, autonomous WSE monitoring and its suitability for validating satellite Earth observation data, particularly from the Surface Water and Ocean Topography (SWOT) mission. Despite environmental challenges such as moderate gale conditions, the system maintained robust performance, with over 90% of solutions meeting quality assurance standards. This study highlights the advantages of combining the GNSS with IoT technologies and multiple sensors for cost-effective, long-term WSE monitoring in remote and dynamic environments. Future work will focus on optimizing accuracy and expanding applications to diverse aquatic settings.

RevDate: 2025-09-19
CmpDate: 2025-09-19

Thang DV, Volkov A, Muthanna A, et al (2025)

Future of Telepresence Services in the Evolving Fog Computing Environment: A Survey on Research and Use Cases.

Sensors (Basel, Switzerland), 25(11): pii:s25113488.

With the continuing development of technology, telepresence services have emerged as an essential part of modern communication systems. Concurrently, the rapid growth of fog computing presents new opportunities and challenges for integrating telepresence capabilities into distributed networks. Fog computing is a component of the cloud computing model that is used to meet the diverse computing needs of applications in the emergence and development of fifth- and sixth-generation (5G and 6G) networks. The incorporation of fog computing into this model provides benefits that go beyond the traditional model. This survey investigates the convergence of telepresence services with fog computing, evaluating the latest advancements in research developments and practical use cases. This study examines the changes brought about by the 6G network as well as the promising future directions of 6G. This study presents the concepts of fog computing and its basic structure. We analyze Cisco's model and propose an alternative model to improve its weaknesses. Additionally, this study synthesizes, analyzes, and evaluates a body of articles on remote presence services from major bibliographic databases. Summing up, this work thoroughly reviews current research on telepresence services and fog computing for the future.

RevDate: 2025-09-19

Sun H, Xu R, Luo J, et al (2025)

Review of the Application of UAV Edge Computing in Fire Rescue.

Sensors (Basel, Switzerland), 25(11): pii:s25113304.

The use of unmanned aerial vehicles (UAVs) attracts significant attention, especially in fire emergency rescue, where UAVs serve as indispensable tools. In fire rescue scenarios, the rapid increase in the amount of data collected and transmitted by sensors poses significant challenges to traditional methods of data storage and computing. Sensor-data processing utilizing UAV edge computing technology is emerging as a research hotspot in this field and aims to address the challenges of data preprocessing and feature analysis during fire emergency rescue. This review first analyzes fire-rescue scenarios involving UAVs, including forest fires, high-rise building fires, chemical plant fires, and mine fires. Then it discusses the current status of UAV edge computing technology and its application to integrating sensor data in fire emergency rescue, analyzes the advantages and disadvantages of UAV use in fire scenarios, and identifies challenges encountered during UAV operations in environments with no GNSS signal. Finally, based on the analysis of fire emergency-rescue scenarios, this review argues that compared with centralized computing centers and cloud computing, distributed UAV edge computing technology based on sensor data exhibits higher mobility and timeliness and is more adaptable to the urgent nature of emergency rescue. This review also seeks to provide support and reference for the research and development of UAV edge technology.

RevDate: 2025-09-18
CmpDate: 2025-09-18

Bilal M, Shah AA, Abbas S, et al (2025)

High-Performance Deep Learning for Instant Pest and Disease Detection in Precision Agriculture.

Food science & nutrition, 13(9):e70963.

Global farm productivity is constantly under attack from pests and diseases, resulting in massive crop loss and food insecurity. Manual scouting, expert estimation, and laboratory-based microscopy are time-consuming, prone to human error, and labor-intensive. Although traditional machine learning classifiers such as SVM, Random Forest, and Decision Trees provide better accuracy, they are not field deployable. This article presents a high-performance deep learning fusion model using MobileNetV2 and EfficientNetB0 for real-time detection of pests and diseases in precision farming. The model, trained on the CCMT dataset (24,881 original and 102,976 augmented images in 22 classes of cashew, cassava, maize, and tomato crops), attained a global accuracy of 89.5%, precision and recall of 95.68%, F1-score of 95.67%, and ROC-AUC of 0.95. For supporting deployment in edge environments, methods such as quantization, pruning, and knowledge distillation were employed to decrease inference time to below 10 ms per image. The suggested model is superior to baseline CNN models, including ResNet-50 (81.25%), VGG-16 (83.10%), and other edge lightweight models (83.00%). The optimized model is run on low-power devices such as smartphones, Raspberry Pi, and farm drones without the need for cloud computing, allowing real-time detection in far-off fields. Field trials using drones validated rapid image capture and inference performance. This study delivers a scalable, cost-effective, and accurate early pest and disease detection framework for sustainable agriculture and supporting food security at the global level. The model has been successfully implemented with TensorFlow Lite within Android applications and Raspberry Pi systems.

RevDate: 2025-09-17

Zolfagharinejad M, Büchel J, Cassola L, et al (2025)

Analogue speech recognition based on physical computing.

Nature [Epub ahead of print].

With the rise of decentralized computing, such as in the Internet of Things, autonomous driving and personalized healthcare, it is increasingly important to process time-dependent signals 'at the edge' efficiently: right at the place where the temporal data are collected, avoiding time-consuming, insecure and costly communication with a centralized computing facility (or 'cloud'). However, modern-day processors often cannot meet the restrained power and time budgets of edge systems because of intrinsic limitations imposed by their architecture (von Neumann bottleneck) or domain conversions (analogue to digital and time to frequency). Here we propose an edge temporal-signal processor based on two in-materia computing systems for both feature extraction and classification, reaching near-software accuracy for the TI-46-Word[1] and Google Speech Commands[2] datasets. First, a nonlinear, room-temperature reconfigurable-nonlinear-processing-unit[3,4] layer realizes analogue, time-domain feature extraction from the raw audio signals, similar to the human cochlea. Second, an analogue in-memory computing chip[5], consisting of memristive crossbar arrays, implements a compact neural network trained on the extracted features for classification. With submillisecond latency, reconfigurable-nonlinear-processing-unit-based feature extraction consuming roughly 300 nJ per inference, and the analogue in-memory computing-based classifier using around 78 µJ (with potential for roughly 10 µJ)[6], our findings offer a promising avenue for advancing the compactness, efficiency and performance of heterogeneous smart edge processors through in materia computing hardware.

RevDate: 2025-09-16

Zhao Z, Zhang H, Li R, et al (2025)

Revisiting Transferable Adversarial Images: Systemization, Evaluation, and New Insights.

IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].

Transferable adversarial images raise critical security concerns for computer vision systems in real-world, black-box attack scenarios. Although many transfer attacks have been proposed, existing research lacks a systematic and comprehensive evaluation. In this paper, we systemize transfer attacks into five categories around the general machine learning pipeline and provide the first comprehensive evaluation, with 23 representative attacks against 11 representative defenses, including the recent, transfer-oriented defense and the real-world Google Cloud Vision. In particular, we identify two main problems of existing evaluations: (1) for attack transferability, lack of intra-category analyses with fair hyperparameter settings, and (2) for attack stealthiness, lack of diverse measures. Our evaluation results validate that these problems have indeed caused misleading conclusions and missing points, and addressing them leads to new, consensus-challenging insights, such as (1) an early attack, DI, even outperforms all similar follow-up ones, (2) the state-of-the-art (white-box) defense, DiffPure, is even vulnerable to (black-box) transfer attacks, and (3) even under the same Lp constraint, different attacks yield dramatically different stealthiness results regarding diverse imperceptibility metrics, finer-grained measures, and a user study. We hope that our analyses will serve as guidance on properly evaluating transferable adversarial images and advance the design of attacks and defenses.

RevDate: 2025-09-15

Moharam MH, Ashraf K, Alaa H, et al (2025)

Real-time detection of Wi-Fi attacks using hybrid deep learning models on NodeMCU.

Scientific reports, 15(1):32544.

This paper presents a real-time, lightweight system for detecting Wi-Fi deauthentication (DA) attacks that uses the NodeMCU ESP8266 microcontroller for live packet sniffing and feature extraction. Tailored for low-power IoT environments, the system combines the sequential learning capabilities of long short-term memory (LSTM), gated recurrent unit (GRU), and recurrent neural network (RNN) models with the interpretability of logistic regression (LR). These hybrid models analyze Wi-Fi traffic in real time to detect anomalous behavior based on key metrics such as the Received Signal Strength Indicator (RSSI), DA frames, packet count, and signal-to-noise ratio (SNR), which are also displayed live on an OLED screen. The proposed framework uniquely integrates hybrid temporal deep learning with interpretable classification through LR in an ultra-low-cost embedded data acquisition setup built on the NodeMCU, addressing a gap in existing intrusion detection research that often focuses on either cloud-based processing or non-interpretable models. The system was trained and validated on a dataset of over 5,600 labeled samples collected under varied network conditions. Among the evaluated models, GRU_LR achieved the highest accuracy (96%) and demonstrated superior performance in identifying minority-class threats. By combining explainable AI with cost-effective embedded sensing, this work delivers a practical and transparent intrusion detection approach that can be readily adapted to diverse IoT and wireless security contexts.
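A minimal, hypothetical sketch of the hybrid pattern described above: a recurrent network summarises a window of per-packet features (RSSI, deauth count, packet count, SNR) and a logistic-regression head provides an interpretable decision; the architecture sizes, window length, and synthetic data are placeholders, not the paper's configuration:

# Hypothetical GRU + logistic regression hybrid for Wi-Fi deauthentication detection.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20, 4)).astype("float32")   # 600 windows x 20 timesteps x 4 features
y = rng.integers(0, 2, size=600)                      # 0 = benign, 1 = deauth attack (synthetic)

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 4)),
    tf.keras.layers.GRU(16),                          # sequence summary vector
])
features = encoder.predict(X, verbose=0)              # untrained here; trained end-to-end in practice

clf = LogisticRegression(max_iter=500).fit(features, y)
print(clf.score(features, y))                         # interpretable head over GRU features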

RevDate: 2025-09-15

Ayouni S, Khan MH, Ibrahim M, et al (2025)

IoT-based Approach for Diabetes Patient Monitoring Using Machine Learning.

SLAS technology pii:S2472-6303(25)00106-2 [Epub ahead of print].

This study presents an IoT-based framework for real-time diabetes monitoring and management, addressing key limitations identified in previous studies by integrating four datasets: BVH Dataset, PIMA Diabetes Dataset, Simulated Dataset, and an Integrated Dataset. The proposed approach ensures diverse demographic representation and a wide range of features including real-time vital signs (e.g., oxygen saturation, pulse rate, temperature) and subjective variables (e.g., skin color, moisture, consciousness level). Advanced preprocessing techniques, including Kalman Filtering for noise reduction, KNN imputation for addressing missing data, and SMOTE-ENN for improving data quality and class balance, were employed. These methods resulted in a 25% improvement in Recall and a 20% increase in the F1-score, demonstrating the model's effectiveness and robustness. By applying PCA and SHAP for feature engineering, high-impact features were identified, enabling the tuning of models such as Random Forest, SVM, and Logistic Regression, which achieved an accuracy of 97% and an F1-score of 0.98. A novel triage system, integrated with edge and cloud computing, classifies health status in real-time (Green, Yellow, Red, Black), reducing latency by 35%. The proposed system sets a new benchmark for scalable, individualized diabetes care in IoT-based healthcare solutions, demonstrating significant improvements in accuracy, response time, and feature incorporation compared to prior works.
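A hedged sketch of the preprocessing chain described above, combining KNN imputation of missing vitals with SMOTE-ENN resampling; the feature count, missingness rate, and class ratio are synthetic assumptions, not the study's data:

# Hypothetical sketch: KNN imputation followed by SMOTE-ENN class balancing.
import numpy as np
from sklearn.impute import KNNImputer
from imblearn.combine import SMOTEENN

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                   # e.g., SpO2, pulse, temperature, ... (synthetic)
X[rng.random(X.shape) < 0.05] = np.nan          # inject 5% missing values
y = (rng.random(300) < 0.15).astype(int)        # imbalanced labels: ~15% positive

X_imputed = KNNImputer(n_neighbors=5).fit_transform(X)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_imputed, y)
print(X_res.shape, np.bincount(y_res))          # classes end up far closer to balanced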

RevDate: 2025-09-15

Osei-Wusu F, Asiedu W, Yeboah D, et al (2025)

Leveraging Information Technology tools to create cost-effective alternatives: Using Google Sheets as a platform for competitive debate and public speaking tabulation.

PloS one, 20(9):e0332576 pii:PONE-D-24-60341.

Traditional web-based debate tabulation systems like Tabbycat offer robust features but often pose high costs and accessibility barriers that limit participation and the smooth organization of events. In this work, we present Tab XYZ, a novel debate and public-speaking tabulation platform built on Google Sheets, as a cost-effective alternative to conventional systems. Tab XYZ leverages cloud-based features such as Google Apps Script automation, Google Forms for data input, and real-time collaboration to replicate core functionalities of standard tabulation software without the need for dedicated servers or paid licenses. The proposed system was evaluated in five tournaments comprising a total of 435 participants, and compared against a popular web-based platform on key metrics including setup time, user satisfaction, reliability, and error handling. Results indicate that Tab XYZ eliminated all licensing and hosting costs while achieving user satisfaction scores (overall average 4.7 out of 5) comparable to the conventional system (4.6 out of 5). Tab XYZ also demonstrated robust data security and offline-capable error recovery by leveraging Google's infrastructure. These findings illustrate a viable pathway to leverage readily available IT tools, like spreadsheets and cloud services, to create innovative solutions for specialized domains while avoiding the cost and complexity barriers of traditional approaches.

RevDate: 2025-09-15

Gomase VS (2025)

Cybersecurity, Research Data Management (RDM), and Regulatory Compliance in Clinical Trials.

Reviews on recent clinical trials pii:RRCT-EPUB-150556 [Epub ahead of print].

INTRODUCTION: The intersection of drug discovery and cybersecurity is becoming critical as the pharmaceutical sector adopts digital technologies to drive research and development. Drug discovery entails extensive collaboration and large volumes of data, making it highly susceptible to cyberattacks. Emerging technologies, such as big data analytics, artificial intelligence (AI), and cloud computing, hold significant innovation potential but also pose risks to the industry that can undermine intellectual property (IP), clinical trial results, and collaborative research. This review discusses the importance of cybersecurity in the drug discovery process. The focus is on determining major threats, defining best practices for protecting sensitive information, and ensuring compliance with regulatory requirements. The objective is to highlight the strategic significance of cybersecurity practices in protecting research integrity and fostering innovation.

METHODS: A review-based approach is employed to analyze present-day trends in drug discovery cybersecurity. Emerging technologies, security issues, regulatory needs, and the security controls most frequently utilized in the industry, such as encryption, multi-factor authentication, and secure data sharing, are discussed.

RESULTS: The pharmaceutical sector has advanced significantly in securing sensitive research information through robust cybersecurity measures. However, vulnerabilities remain in cloud security and in the protection of AI models. Adhering to regulatory guidelines such as the GDPR (General Data Protection Regulation) and HIPAA (Health Insurance Portability and Accountability Act) remains a concern as international norms evolve.

DISCUSSION: As digital technologies transform drug discovery, cybersecurity has become crucial in protecting sensitive data and intellectual property rights. Strengthening compliance with evolving regulations is key to ensuring safety and innovative pharmaceutical research.

CONCLUSION: Cybersecurity is critical in preserving the integrity of drug discovery. With the increasing adoption of digital technologies, pharmaceutical firms must implement robust cybersecurity measures to protect sensitive information, ensure compliance, and foster innovation in a secure environment.

RevDate: 2025-09-13

Liang F (2025)

Decentralized and Network-Aware Task Offloading for Smart Transportation via Blockchain.

Sensors (Basel, Switzerland), 25(17): pii:s25175555.

As intelligent transportation systems (ITSs) evolve rapidly, the increasing computational demands of connected vehicles call for efficient task offloading. Centralized approaches face challenges in scalability, security, and adaptability to dynamic network conditions. To address these issues, we propose a blockchain-based decentralized task offloading framework with network-aware resource allocation and tokenized economic incentives. In our model, vehicles generate computational tasks that are dynamically mapped to available computing nodes-including vehicle-to-vehicle (V2V) resources, roadside edge servers (RSUs), and cloud data centers-based on a multi-factor score considering computational power, bandwidth, latency, and probabilistic packet loss. A blockchain transaction layer ensures auditable and secure task assignment, while a proof-of-stake (PoS) consensus and smart-contract-driven dynamic pricing jointly incentivize participation and balance workloads to minimize delay. In extensive simulations reflecting realistic ITS dynamics, our approach reduces total completion time by 12.5-24.3%, achieves a task success rate of 84.2-88.5%, improves average resource utilization to 88.9-92.7%, and sustains >480 transactions per second (TPS) with a 10 s block interval, outperforming centralized/cloud-based baselines. These results indicate that integrating blockchain incentives with network-aware offloading yields secure, scalable, and efficient management of computational resources for future ITSs.
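
The abstract does not give the scoring formula, so the snippet below is only a hedged sketch of a multi-factor offloading score over computational power, bandwidth, latency, and packet loss; the weights, normalisation, and node values are illustrative assumptions.

# Illustrative multi-factor scoring for offloading-target selection:
# higher compute and bandwidth help, latency and loss penalise.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_gflops: float
    bandwidth_mbps: float
    latency_ms: float
    loss_prob: float

def score(n: Node, w=(0.4, 0.3, 0.2, 0.1)) -> float:
    # Normalise each factor into a comparable range before weighting (simple heuristics).
    return (w[0] * n.cpu_gflops / 100.0
            + w[1] * n.bandwidth_mbps / 1000.0
            - w[2] * n.latency_ms / 100.0
            - w[3] * n.loss_prob)

candidates = [
    Node("V2V peer", cpu_gflops=20, bandwidth_mbps=200, latency_ms=5, loss_prob=0.05),
    Node("RSU edge", cpu_gflops=60, bandwidth_mbps=500, latency_ms=10, loss_prob=0.02),
    Node("Cloud DC", cpu_gflops=500, bandwidth_mbps=300, latency_ms=60, loss_prob=0.01),
]
best = max(candidates, key=score)
print(best.name, round(score(best), 3))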

RevDate: 2025-09-13

Dembski J, Wiszniewski B, A Kołakowska (2025)

Anomaly Detection and Segmentation in Measurement Signals on Edge Devices Using Artificial Neural Networks.

Sensors (Basel, Switzerland), 25(17): pii:s25175526.

In this paper, three alternative solutions to the problem of detecting and cleaning anomalies in soil signal time series, involving the use of artificial neural networks deployed on in situ data measurement end devices, are proposed and investigated. These models are designed to perform calculations on MCUs, characterized by significantly limited computing capabilities and a limited supply of electrical power. Training of neural network models is carried out based on data from multiple sensors in the supporting computing cloud instance, while detection and removal of anomalies with a trained model takes place on the constrained end devices. With such a distribution of work, it is necessary to achieve a sound compromise between prediction accuracy and the computational complexity of the detection process. In this study, neural-primed heuristic (NPH), autoencoder-based (AEB), and U-Net-based (UNB) approaches were tested, which were found to vary regarding both prediction accuracy and computational complexity. Labeled data were used to train the models, transforming the detection task into an anomaly segmentation task. The obtained results reveal that the UNB approach presents certain advantages; however, it requires a significant volume of training data and has a relatively high time complexity which, in turn, translates into increased power consumption by the end device. For this reason, the other two approaches-NPH and AEB-may be worth considering as reasonable alternatives when developing in situ data cleaning solutions for IoT measurement systems.
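
As a rough sketch of the autoencoder-based (AEB) idea, assuming training happens in the cloud on windows of normal soil readings and only the reconstruction-error check runs on the device, one could write something like the following; the window length, architecture, and threshold rule are assumptions.

# Tiny dense autoencoder trained on "normal" windows; high reconstruction error flags anomalies.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(1)
win = 32
normal = rng.normal(0.4, 0.05, (2000, win)).astype("float32")   # synthetic clean soil-moisture windows

ae = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(win,)),
    tf.keras.layers.Dense(8, activation="relu"),    # tiny bottleneck, sized for MCU deployment
    tf.keras.layers.Dense(win, activation="linear"),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(normal, normal, epochs=10, batch_size=64, verbose=0)

# Threshold derived from the training reconstruction errors (99th percentile).
errors = np.mean((ae.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = float(np.percentile(errors, 99))

def is_anomalous(window):
    err = float(np.mean((ae.predict(window[None], verbose=0) - window) ** 2))
    return err > threshold

spiky = normal[0].copy()
spiky[10:14] += 1.0                                  # injected spike anomaly
print(is_anomalous(normal[1]), is_anomalous(spiky))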

RevDate: 2025-09-13

Zhang L, Wu S, Z Wang (2025)

LoRA-INT8 Whisper: A Low-Cost Cantonese Speech Recognition Framework for Edge Devices.

Sensors (Basel, Switzerland), 25(17): pii:s25175404.

To address the triple bottlenecks of data scarcity, oversized models, and slow inference that hinder Cantonese automatic speech recognition (ASR) in low-resource and edge-deployment settings, this study proposes a cost-effective Cantonese ASR system based on LoRA fine-tuning and INT8 quantization. First, Whisper-tiny is parameter-efficiently fine-tuned on the Common Voice zh-HK training set using LoRA with rank = 8. Only 1.6% of the original weights are updated, reducing the character error rate (CER) from 49.5% to 11.1%, a performance close to full fine-tuning (10.3%), while cutting the training memory footprint and computational cost by approximately one order of magnitude. Next, the fine-tuned model is compressed into a 60 MB INT8 checkpoint via dynamic quantization in ONNX Runtime. On a MacBook Pro M1 Max CPU, the quantized model achieves an RTF = 0.20 (offline inference 5 × real-time) and 43% lower latency than the FP16 baseline; on an NVIDIA A10 GPU, it reaches RTF = 0.06, meeting the requirements of high-concurrency cloud services. Ablation studies confirm that the LoRA-INT8 configuration offers the best trade-off among accuracy, speed, and model size. Limitations include the absence of spontaneous-speech noise data, extreme-hardware validation, and adaptive LoRA structure optimization. Future work will incorporate large-scale self-supervised pre-training, tone-aware loss functions, AdaLoRA architecture search, and INT4/NPU quantization, and will establish an mJ/char energy-accuracy curve. The ultimate goal is to achieve CER ≤ 8%, RTF < 0.1, and mJ/char < 1 for low-power real-time Cantonese ASR in practical IoT scenarios.
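
A minimal sketch of the two key steps, assuming the Hugging Face PEFT and ONNX Runtime tool chains (the paper does not state its exact tooling): attach rank-8 LoRA adapters to Whisper-tiny, fine-tune, then apply post-training dynamic INT8 quantization to the exported ONNX model. The target modules and file names are placeholders.

# Step 1: parameter-efficient fine-tuning with rank-8 LoRA adapters on Whisper-tiny.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model
from onnxruntime.quantization import quantize_dynamic, QuantType

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"])     # which projections to adapt is an assumption
model = get_peft_model(base, lora)
model.print_trainable_parameters()                         # only a small fraction of weights is trainable

# ... fine-tune on Common Voice zh-HK (e.g., with Seq2SeqTrainer), merge adapters, export to ONNX ...

# Step 2: post-training dynamic quantization of the exported model with ONNX Runtime.
quantize_dynamic(model_input="whisper_tiny_zh_hk.onnx",     # placeholder path for the exported model
                 model_output="whisper_tiny_zh_hk_int8.onnx",
                 weight_type=QuantType.QInt8)               # INT8 weights, FP32 activations at runtime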

RevDate: 2025-09-13

Yu M, Du Y, Zhang X, et al (2025)

Efficient Navigable Area Computation for Underground Autonomous Vehicles via Ground Feature and Boundary Processing.

Sensors (Basel, Switzerland), 25(17): pii:s25175355.

Accurate boundary detection is critical for autonomous trackless rubber-wheeled vehicles in underground coal mines, as it prevents lateral collisions with tunnel walls. Unlike open-road environments, underground tunnels suffer from poor illumination, water mist, and dust, which degrade visual imaging. To address these challenges, this paper proposes a navigable-area computation method for underground autonomous vehicles based on ground feature and boundary processing, consisting of three core steps. First, a real-time point cloud correction process via pre-correction and dynamic update aligns ground point clouds with the LiDAR coordinate system to ensure parallelism. Second, corrected point clouds are projected onto a 2D grid map using a grid-based method, effectively mitigating the impact of ground unevenness on boundary extraction. Third, an adaptive boundary completion method is designed to resolve boundary discontinuities in junctions and shunting chambers. Additionally, the method emphasizes continuous extraction of boundaries over extended periods by integrating temporal context, ensuring the continuity of boundary detection during vehicle operation. Experiments on real underground vehicle data validate that the method achieves accurate detection and consistent tracking of dual-sided boundaries across straight tunnels, curves, intersections, and shunting chambers, meeting the requirements of underground autonomous driving. This work provides a rule-based, real-time solution feasible under limited computing power, offering critical safety redundancy when deep learning methods fail in harsh underground environments.
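
To make the grid-projection step concrete, here is a hedged sketch (cell size, ranges, and the wall-height threshold are assumptions, not the paper's parameters): corrected LiDAR points are flattened onto a 2D grid so that cells containing tall returns can be treated as wall/boundary candidates.

# Project a corrected point cloud onto a 2D occupancy grid of boundary candidates.
import numpy as np

def to_grid(points, cell=0.2, x_range=(0, 40), y_range=(-6, 6), wall_height=0.5):
    """points: (N, 3) array of x, y, z in the vehicle/LiDAR frame after ground correction."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny) & (points[:, 2] > wall_height)
    grid[ix[keep], iy[keep]] = 1          # cells with tall returns = wall/boundary candidates
    return grid

rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(0, 40, 5000), rng.uniform(-5, 5, 5000), rng.uniform(0, 2, 5000)])
print(to_grid(cloud).sum(), "occupied cells")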

RevDate: 2025-09-13

Honarparvar S, Honarparvar Y, Ashena Z, et al (2025)

GICEDCam: A Geospatial Internet of Things Framework for Complex Event Detection in Camera Streams.

Sensors (Basel, Switzerland), 25(17): pii:s25175331.

Complex event detection (CED) adds value to camera stream data in various applications such as workplace safety, task monitoring, security, and health. Recent CED frameworks have addressed the issues of limited spatiotemporal labels and costly training by decomposing the CED into low-level features, as well as spatial and temporal relationship extraction. However, these frameworks suffer from high resource costs, low scalability, and an increased number of false positives and false negatives. This paper proposes GICEDCAM, which distributes CED across edge, stateless, and stateful layers to improve scalability and reduce computation cost. Additionally, we introduce a Spatial Event Corrector component that leverages geospatial data analysis to minimize false negatives and false positives in spatial event detection. We evaluate GICEDCAM on 16 camera streams covering four complex events. Relative to a strong open-source baseline configured for our setting, GICEDCAM reduces end-to-end latency by 36% and total computational cost by 45%, with the advantage widening as objects per frame increase. Among corrector variants, Bayesian Network (BN) yields the lowest latency, Long Short-Term Memory (LSTM) achieves the highest accuracy, and trajectory analysis offers the best accuracy-latency trade-off for this architecture.

RevDate: 2025-09-13

Gong R, Zhang H, Li G, et al (2025)

Edge Computing-Enabled Smart Agriculture: Technical Architectures, Practical Evolution, and Bottleneck Breakthroughs.

Sensors (Basel, Switzerland), 25(17): pii:s25175302.

As the global digital transformation of agriculture accelerates, the widespread deployment of farming equipment has triggered an exponential surge in agricultural production data. Consequently, traditional cloud computing frameworks face critical challenges: communication latency in the field, the demand for low-power devices, and stringent real-time decision constraints. These bottlenecks collectively exacerbate bandwidth constraints, diminish response efficiency, and introduce data security vulnerabilities. In this context, edge computing offers a promising solution for smart agriculture. By provisioning computing resources to the network periphery and enabling localized processing at data sources adjacent to agricultural machinery, sensors, and crops, edge computing leverages low-latency responses, bandwidth optimization, and distributed computation capabilities. This paper provides a comprehensive survey of the research landscape in agricultural edge computing. We begin by defining its core concepts and highlighting its advantages over cloud computing. Subsequently, anchored in the "terminal sensing-edge intelligence-cloud coordination" architecture, we analyze technological evolution in edge sensing devices, lightweight intelligent algorithms, and cooperative communication mechanisms. Additionally, through precision farming, intelligent agricultural machinery control, and full-chain crop traceability, we demonstrate its efficacy in enhancing real-time agricultural decision-making. Finally, we identify adaptation challenges in complex environments and outline future directions for research and development in this field.

RevDate: 2025-09-13

Ali EM, Abawajy J, Lemma F, et al (2025)

Analysis of Deep Reinforcement Learning Algorithms for Task Offloading and Resource Allocation in Fog Computing Environments.

Sensors (Basel, Switzerland), 25(17): pii:s25175286.

Fog computing is increasingly preferred over cloud computing for processing tasks from Internet of Things (IoT) devices with limited resources. However, placing tasks and allocating resources in distributed and dynamic fog environments remains a major challenge, especially when trying to meet strict Quality of Service (QoS) requirements. Deep reinforcement learning (DRL) has emerged as a promising solution to these challenges, offering adaptive, data-driven decision-making in real-time and uncertain conditions. While several surveys have explored DRL in fog computing, most focus on traditional centralized offloading approaches or emphasize reinforcement learning (RL) with limited integration of deep learning. To address this gap, this paper presents a comprehensive and focused survey on the full-scale application of DRL to the task offloading problem in fog computing environments involving multiple user devices and multiple fog nodes. We systematically analyze and classify the literature based on architecture, resource allocation methods, QoS objectives, offloading topology and control, optimization strategies, DRL techniques used, and application scenarios. We also introduce a taxonomy of DRL-based task offloading models and highlight key challenges, open issues, and future research directions. This survey serves as a valuable resource for researchers by identifying unexplored areas and suggesting new directions for advancing DRL-based solutions in fog computing. For practitioners, it provides insights into selecting suitable DRL techniques and system designs to implement scalable, efficient, and QoS-aware fog computing applications in real-world environments.

RevDate: 2025-09-13

Xu T, Zou K, Liu C, et al (2025)

Special Issue on Advanced Optical Technologies for Communications, Perception, and Chips.

Sensors (Basel, Switzerland), 25(17): pii:s25175278.

With the iterative upgrade and popular application of new information technologies such as 5G, cloud computing, big data, and artificial intelligence (AI), the global data traffic and the demand for computing power has ushered in explosive growth [...].

RevDate: 2025-09-13

Tasmurzayev N, Amangeldy B, Imanbek B, et al (2025)

Digital Cardiovascular Twins, AI Agents, and Sensor Data: A Narrative Review from System Architecture to Proactive Heart Health.

Sensors (Basel, Switzerland), 25(17): pii:s25175272.

Cardiovascular disease remains the world's leading cause of mortality, yet everyday care still relies on episodic, symptom-driven interventions that detect ischemia, arrhythmias, and remodeling only after tissue damage has begun, limiting the effectiveness of therapy. A narrative review synthesized 183 studies published between 2016 and 2025 that were located through PubMed, MDPI, Scopus, IEEE Xplore, and Web of Science. This review examines CVD diagnostics built on digital cardiovascular twins, which collect data from wearable IoT devices (electrocardiography (ECG), photoplethysmography (PPG), and mechanocardiography), clinical records, laboratory biomarkers, and genetic markers. These data are integrated with artificial intelligence (AI), including machine learning, deep learning, and graph and transformer networks, to interpret multi-dimensional data streams and create prognostic models; with generative AI, medical large language models (LLMs), and autonomous agents for decision support, personalized alerts, and treatment scenario modeling; and with cloud and edge computing for data processing. This multi-layered architecture enables the detection of silent pathologies long before clinical manifestations, transforming continuous observations into actionable recommendations and shifting cardiology from reactive treatment to predictive and preventive care. Evidence converges on four layers: sensors streaming multimodal clinical and environmental data; hybrid analytics that integrate hemodynamic models with deep-, graph- and transformer learning while Bayesian and Kalman filters manage uncertainty; decision support delivered by domain-tuned medical LLMs and autonomous agents; and prospective simulations that trial pacing or pharmacotherapy before bedside use, closing the prediction-intervention loop. This stack flags silent pathology weeks in advance and steers proactive personalized prevention. It also lays the groundwork for software-as-a-medical-device ecosystems and new regulatory guidance for trustworthy AI-enabled cardiovascular care.

RevDate: 2025-09-13

Luo H, Dai S, Hu Y, et al (2025)

Integrating Knowledge-Based and Machine Learning for Betel Palm Mapping on Hainan Island Using Sentinel-1/2 and Google Earth Engine.

Plants (Basel, Switzerland), 14(17): pii:plants14172696.

The betel palm is a critical economic crop on Hainan Island. Accurate and timely maps of betel palms are fundamental for the industry's management and ecological environment evaluation. To date, mapping the spatial distribution of betel palms across a large regional scale remains a significant challenge. In this study, we propose an integrated framework that combines knowledge-based and machine learning approaches to produce a map of betel palms at 10 m spatial resolution based on Sentinel-1/2 data and Google Earth Engine (GEE) for 2023 on Hainan Island, which accounts for 95% of betel nut acreage in China. The forest map was initially delineated based on signature information and the Green Normalized Difference Vegetation Index (GNDVI) acquired from Sentinel-1 and Sentinel-2 data, respectively. Subsequently, patches of betel palms were extracted from the forest map using a random forest classifier and feature selection method via logistic regression (LR). The resultant 10 m betel palm map achieved user's, producer's, and overall accuracy of 86.89%, 88.81%, and 97.51%, respectively. According to the betel palm map in 2023, the total planted area was 189,805 hectares (ha), exhibiting high consistency with statistical data (R[2] = 0.74). The spatial distribution was primarily concentrated in eastern Hainan, reflecting favorable climatic and topographic conditions. The results demonstrate the significant potential of Sentinel-1/2 data for identifying betel palms in complex tropical regions characterized by diverse land cover types, fragmented cultivated land, and frequent cloud and rain interference. This study provides a reference framework for mapping tropical crops, and the findings are crucial for tropical agricultural management and optimization.
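
A hedged sketch of the Google Earth Engine side of such a workflow is shown below, using the Python API: a cloud-filtered Sentinel-2 median composite, GNDVI = (NIR - Green) / (NIR + Green), and a random-forest classifier. The bounding box, training asset, band subset, and tree count are illustrative assumptions rather than the authors' configuration, and the Sentinel-1 features and logistic-regression feature selection are omitted.

# GEE Python API sketch: Sentinel-2 composite, GNDVI band, random-forest classification.
import ee
ee.Initialize()

hainan = ee.Geometry.BBox(108.6, 18.1, 111.1, 20.2)              # rough bounding box (assumption)
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
        .filterBounds(hainan)
        .filterDate("2023-01-01", "2023-12-31")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .median())

gndvi = s2.normalizedDifference(["B8", "B3"]).rename("GNDVI")    # (NIR - Green) / (NIR + Green)
stack = s2.select(["B2", "B3", "B4", "B8", "B11", "B12"]).addBands(gndvi)

# 'samples' would be a FeatureCollection of labelled points (class 0/1) digitised by the analyst.
samples = ee.FeatureCollection("users/example/betel_training_points")   # placeholder asset
training = stack.sampleRegions(collection=samples, properties=["class"], scale=10)
clf = ee.Classifier.smileRandomForest(numberOfTrees=200).train(training, "class", stack.bandNames())
betel_map = stack.classify(clf)          # export or visualise as needed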

RevDate: 2025-09-12

Rosenblum J, Dong J, S Narayanasamy (2025)

Confidential computing for population-scale genome-wide association studies with SECRET-GWAS.

Nature computational science [Epub ahead of print].

Genomic data from a single institution lacks global diversity representation, especially for rare variants and diseases. Confidential computing can enable collaborative genome-wide association studies (GWAS) without compromising privacy or accuracy. However, due to limited secure memory space and performance overheads, previous solutions fail to support widely used regression methods. Here we present SECRET-GWAS-a rapid, privacy-preserving, population-scale, collaborative GWAS tool. We discuss several system optimizations, including streaming, batching, data parallelization and reducing trusted hardware overheads to efficiently scale linear and logistic regression to over a thousand processor cores on an Intel SGX-based cloud platform. In addition, we protect SECRET-GWAS against several hardware side-channel attacks. SECRET-GWAS is an open-source tool and works with the widely used Hail genomic analysis framework. Our experiments on Azure's Confidential Computing platform demonstrate that SECRET-GWAS enables multivariate linear and logistic regression GWAS queries on population-scale datasets from ten independent sources in just 4.5 and 29 minutes, respectively.

RevDate: 2025-09-12

Smith DS, Ramadass K, Jones L, et al (2025)

Secondary use of radiological imaging data: Vanderbilt's ImageVU approach.

Journal of biomedical informatics pii:S1532-0464(25)00134-0 [Epub ahead of print].

OBJECTIVE: To develop ImageVU, a scalable research imaging infrastructure that integrates clinical imaging data with metadata-driven cohort discovery, enabling secure, efficient, and regulatory-compliant access to imaging for secondary and opportunistic research use. This manuscript presents a detailed description of ImageVU's key components and lessons learned to assist other institutions in developing similar research imaging services and infrastructure.

METHODS: ImageVU was designed to support the secondary use of radiological imaging data through a dedicated research imaging store. The system comprises four interconnected components: a Research PACS, an Ad Hoc Backfill Host, Cloud Storage System, and a De-Identification System. Imaging metadata are extracted and stored in the Research Derivative (RD), an identified clinical data repository, and the Synthetic Derivative (SD), a de-identified research data repository, with access facilitated through the RD Discover web portal. Researchers interact with the system via structured metadata queries and multiple data delivery options, including web-based viewing, bulk downloads, and dataset preparation for high-performance computing environments.

RESULTS: The integration of metadata-driven search capabilities has streamlined cohort discovery and improved imaging data accessibility. As of December 2024, ImageVU has processed 12.9 million MRI and CT series from 1.36 million studies across 453,403 patients. The system has supported 75 project requests, delivering over 50 TB of imaging data to 55 investigators, leading to 66 published research papers.

CONCLUSION: ImageVU demonstrates a scalable and efficient approach for integrating clinical imaging into research workflows. By combining institutional data infrastructure with cloud-based storage and metadata-driven cohort identification, the platform enables secure and compliant access to imaging for translational research.

RevDate: 2025-09-09
CmpDate: 2025-09-09

Sanjalawe Y, Fraihat S, Al-E'mari S, et al (2025)

Smart load balancing in cloud computing: Integrating feature selection with advanced deep learning models.

PloS one, 20(9):e0329765 pii:PONE-D-24-52330.

The increasing dependence on cloud computing as a cornerstone of modern technological infrastructures has introduced significant challenges in resource management. Traditional load-balancing techniques often prove inadequate in addressing cloud environments' dynamic and complex nature, resulting in suboptimal resource utilization and heightened operational costs. This paper presents a novel smart load-balancing strategy incorporating advanced techniques to mitigate these limitations. Specifically, it addresses the critical need for a more adaptive and efficient approach to workload management in cloud environments, where conventional methods fall short in handling dynamic and fluctuating workloads. To bridge this gap, the paper proposes a hybrid load-balancing methodology that integrates feature selection and deep learning models for optimizing resource allocation. The proposed Smart Load Adaptive Distribution with Reinforcement and Optimization approach, SLADRO, combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) algorithms for load prediction, a hybrid bio-inspired optimization technique-Orthogonal Arrays and Particle Swarm Optimization (OOA-PSO)-for feature selection, and Deep Reinforcement Learning (DRL) for dynamic task scheduling. Extensive simulations conducted on the real-world Google Cluster Trace dataset reveal that the SLADRO model significantly outperforms traditional load-balancing approaches, yielding notable improvements in throughput, makespan, resource utilization, and energy efficiency. This integration of advanced techniques offers a scalable and adaptive solution, providing a comprehensive framework for efficient load balancing in cloud computing environments.
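
An illustrative CNN-LSTM workload predictor of the kind SLADRO combines with DRL scheduling might look like the sketch below; the synthetic utilisation series, window length, and layer sizes are assumptions, and the real model would be trained on Google Cluster Trace data.

# CNN-LSTM next-step CPU-utilisation forecaster on a synthetic series.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
steps, window = 5000, 48
cpu = 0.5 + 0.3 * np.sin(np.arange(steps) / 24.0) + rng.normal(0, 0.05, steps)   # synthetic utilisation
X = np.stack([cpu[i:i + window] for i in range(steps - window)])[..., None]
y = cpu[window:]                                                                  # next-step targets

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),  # local temporal features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(32),                                      # longer-range dynamics
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print("next-step CPU forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))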

RevDate: 2025-09-09

Degatano K, Awdeh A, Cox Iii RS, et al (2025)

Warp Analysis Research Pipelines: Cloud-optimized workflows for biological data processing and reproducible analysis.

Bioinformatics (Oxford, England) pii:8250097 [Epub ahead of print].

SUMMARY: In the era of large data, the cloud is increasingly used as a computing environment, necessitating the development of cloud-compatible pipelines that can provide uniform analysis across disparate biological datasets. The Warp Analysis Research Pipelines (WARP) repository is a GitHub repository of open-source, cloud-optimized workflows for biological data processing that are semantically versioned, tested, and documented. A companion repository, WARP-Tools, hosts Docker containers and custom tools used in WARP workflows.

AVAILABILITY AND IMPLEMENTATION: The WARP and WARP-Tools repositories and code are freely available at https://github.com/broadinstitute/WARP and https://github.com/broadinstitute/WARP-tools, respectively. The pipelines are available for download from the WARP repository, can be exported from Dockstore, and can be imported to a bioinformatics platform such as Terra.

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

RevDate: 2025-09-08

El-Warrak LO, Miceli de Farias C, VHDM De Azevedo Costa (2025)

Simulation-based assessment of digital twin systems for immunisation.

Frontiers in digital health, 7:1603550.

BACKGROUND: This paper presents the application of simulation to assess the functionality of a proposed Digital Twin (DT) architecture for immunisation services in primary healthcare centres. The solution is based on Industry 4.0 concepts and technologies, such as IoT, machine learning, and cloud computing, and adheres to the ISO 23247 standard.

METHODS: The system modelling is carried out using the Unified Modelling Language (UML) to define the workflows and processes involved, including vaccine storage temperature monitoring and population vaccination status tracking. The proposed architecture is structured into four domains: observable elements/entities, data collection and device control, digital twin platform, and user domain. To validate the system's performance and feasibility, simulations are conducted using SimPy, enabling the evaluation of its response under various operational scenarios.

RESULTS: The system facilitates the storage, monitoring, and visualisation of data related to the thermal conditions of ice-lined refrigerators (ILR) and thermal boxes. Additionally, it analyses patient vaccination coverage based on the official immunisation schedule. The key benefits include optimising vaccine storage conditions, reducing dose wastage, continuously monitoring immunisation coverage, and supporting strategic vaccination planning.

CONCLUSION: The paper discusses the future impacts of this approach on immunisation management and its scalability for diverse public health contexts. By leveraging advanced technologies and simulation, this digital twin framework aims to improve the performance and overall impact of immunization services.
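
As a hedged illustration of the SimPy-based evaluation described in the methods, the sketch below simulates an ice-lined refrigerator whose temperature drifts and a digital-twin monitor that raises alerts outside the 2-8 °C band; the drift model, sampling interval, and thresholds are assumptions, not the authors' simulation parameters.

# SimPy sketch: temperature drift process plus a monitoring process that raises alerts.
import random
import simpy

def fridge(env, state):
    while True:
        yield env.timeout(5)                         # temperature drifts every 5 min
        state["temp"] += random.uniform(-0.4, 0.5)

def monitor(env, state):
    while True:
        yield env.timeout(15)                        # DT platform samples every 15 min
        if not 2.0 <= state["temp"] <= 8.0:
            print(f"t={env.now:4.0f} min  ALERT: {state['temp']:.1f} °C out of range")

random.seed(3)
env = simpy.Environment()
state = {"temp": 5.0}
env.process(fridge(env, state))
env.process(monitor(env, state))
env.run(until=24 * 60)                               # simulate one day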

RevDate: 2025-09-08

Zhou Y, Wu Y, Su Y, et al (2025)

Cloud-magnetic resonance imaging system: In the era of 6G and artificial intelligence.

Magnetic resonance letters, 5(1):200138.

Magnetic resonance imaging (MRI) plays an important role in medical diagnosis, generating petabytes of image data annually in large hospitals. This voluminous data stream requires a significant amount of network bandwidth and extensive storage infrastructure. Additionally, local data processing demands substantial manpower and hardware investments. Data isolation across different healthcare institutions hinders cross-institutional collaboration in clinics and research. In this work, we anticipate an innovative MRI system and its four generations that integrate emerging distributed cloud computing, 6G bandwidth, edge computing, federated learning, and blockchain technology. This system is called Cloud-MRI, aiming at solving the problems of MRI data storage security, transmission speed, artificial intelligence (AI) algorithm maintenance, hardware upgrading, and collaborative work. The workflow commences with the transformation of k-space raw data into the standardized International Society for Magnetic Resonance in Medicine Raw Data (ISMRMRD) format. The data are then uploaded to the cloud or edge nodes for fast image reconstruction, neural network training, and automatic analysis, and the outcomes are seamlessly transmitted to clinics or research institutes for diagnosis and other services. The Cloud-MRI system will save the raw imaging data, reduce the risk of data loss, facilitate inter-institutional medical collaboration, and ultimately improve diagnostic accuracy and work efficiency.

RevDate: 2025-09-05

Zhang YH, He JY, Lin SJ, et al (2025)

[Development and practice of an interactive chromatography learning tool for beginners based on GeoGebra: a case study of plate theory].

Se pu = Chinese journal of chromatography, 43(9):1078-1085.

This study developed a GeoGebra platform-based interactive pedagogical tool focusing on plate theory to address challenges associated with abstract theory transmission, unidirectional knowledge delivery, and low student engagement in chromatography teaching in instrumental analysis courses. It introduced an innovative methodology that encompasses theoretical model reconstruction, tool development, and teaching-chain integration, addressing the limitations of existing teaching tools, including the complex operation of professional software, restricted accessibility to web-based tools, and insufficient parameter-adjustment flexibility. An improved mathematical plate-theory model was established by incorporating mobile-phase flow rate, dead time, and phase ratio parameters. A three-tier progressive learning system (single-component simulation, multi-component simulation, and retention-time-equation derivation modules) was developed on a cloud-based computing platform. An integrated teaching chain that combined mathematical modeling (AI-assisted "Doubao" derivation), interactive-parameter adjustment (multiple adjustable chromatographic parameters), and visual verification (chromatographic elution-curve simulation) was implemented. Teaching practice demonstrated that: (1) The developed tool transcends the dimensional limitations of traditional instruction, elevating the classroom task completion rate to 94% and improving the student accuracy rate for solving advanced problems to 76%. (2) The dynamic-parameter-adjustment feature significantly enhances learning engagement by enabling 85% of the students to independently use the tool in subsequent studies and experiments. (3) The AI-powered derivation and regression-analysis modules enable the interdisciplinary integration of theoretical chemistry and computational tools. The process of deriving chromatographic retention-time equations through this methodological approach proved more convincing than the current textbook practice of directly presenting conclusions. The developed innovative "theoretical-model visualizable-model-parameter adjustable-interactive-knowledge generating" model provides a new avenue for addressing teaching challenges associated with chromatography theory, and its open-source framework and modular design philosophy can offer valuable references for the digital teaching reform in analytical chemistry.
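
For readers unfamiliar with the underlying theory, the standard plate-theory relations that such a tool typically visualises are summarised below in LaTeX; the paper's improved model additionally incorporates mobile-phase flow rate, dead time, and phase ratio, which are not reproduced here.

\[
  t_R = t_M\,(1 + k), \qquad k = K\,\frac{V_s}{V_m} = \frac{K}{\beta},
\]
\[
  N = 16\left(\frac{t_R}{W_b}\right)^{2} = 5.54\left(\frac{t_R}{W_{1/2}}\right)^{2}, \qquad H = \frac{L}{N},
\]
where $t_R$ is the retention time, $t_M$ the dead time, $k$ the retention factor, $K$ the distribution constant, $\beta = V_m/V_s$ the phase ratio, $W_b$ and $W_{1/2}$ the baseline and half-height peak widths, $N$ the plate number, $H$ the plate height, and $L$ the column length.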

RevDate: 2025-09-02

Ting T, M Li (2025)

Enhanced secure storage and data privacy management system for big data based on multilayer model.

Scientific reports, 15(1):32285.

As big data systems expand in scale and complexity, managing and securing sensitive data-especially personnel records-has become a critical challenge in cloud environments. This paper proposes a novel Multi-Layer Secure Cloud Storage Model (MLSCSM) tailored for large-scale personnel data. The model integrates fast and secure ChaCha20 encryption, Dual Stage Data Partitioning (DSDP) to maintain statistical reliability across blocks, k-anonymization to ensure privacy, SHA-512 hashing for data integrity, and Cauchy matrix-based dispersion for fault-tolerant distributed storage. A key novelty lies in combining cryptographic and statistical methods to enable privacy-preserving partitioned storage, optimized for distributed Cloud Computing Environments (CCE). Data blocks are securely encoded, masked, and stored in discrete locations across several cloud platforms, based on factors such as latency, bandwidth, cost, and security. They are later retrieved with integrity verification. The model also includes audit logs, load balancing, and real-time resource evaluation. To validate the system, experiments were conducted on a 20-node Hadoop cluster using the MIMIC-III dataset. Compared to baseline models such as RDFA, SDPMC, and P&XE, the proposed model achieved a reduction in encoding time to 250 ms (block size 75), a CPU usage of 23% for 256 MB of data, a latency as low as 14 ms, and a throughput of up to 139 ms. These results confirm that the model offers superior security, efficiency, and scalability for cloud-based big data storage applications.
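
Two of the named building blocks can be sketched with standard Python libraries (the 'cryptography' package and hashlib); this is only a hedged illustration of ChaCha20 encryption plus SHA-512 integrity digests, and the partitioning, k-anonymisation, and Cauchy-matrix dispersion layers are not shown.

# Encrypt one data block with ChaCha20-Poly1305 and keep a SHA-512 digest for integrity checks.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

block = b"employee_id,dept,grade\n1042,R&D,7"          # one (already partitioned/anonymised) data block
key = ChaCha20Poly1305.generate_key()
nonce = os.urandom(12)

ciphertext = ChaCha20Poly1305(key).encrypt(nonce, block, None)
digest = hashlib.sha512(block).hexdigest()             # stored alongside the block for later verification

# On retrieval: decrypt, then re-hash and compare with the stored digest.
recovered = ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)
assert hashlib.sha512(recovered).hexdigest() == digest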

RevDate: 2025-09-01

Mushtaq SU, Sheikh S, Nain A, et al (2025)

CRFTS: a cluster-centric and reservation-based fault-tolerant scheduling strategy to enhance QoS in cloud computing.

Scientific reports, 15(1):32233.

Cloud systems supply different kinds of on-demand services in accordance with client needs. As the landscape of cloud computing undergoes continuous development, there is a growing imperative for effective utilization of resources, task scheduling, and fault tolerance mechanisms. To decrease the user task execution time (shorten the makespan) with reduced operational expenses, to improve the distribution of load, and to boost utilization of resources, proper mapping of user tasks to the available VMs is necessary. This study introduces a unique perspective in tackling these challenges by implementing inventive scheduling strategies along with robust and proactive fault tolerance mechanisms in cloud environments. This paper presents the Clustering and Reservation Fault-tolerant Scheduling (CRFTS) strategy, which adapts the heartbeat mechanism to detect failed VMs proactively and maximizes system reliability while making it fault-tolerant and optimizing other Quality of Service (QoS) parameters, such as makespan, average resource utilization, and reliability. The study optimizes the allocation of tasks to improve resource utilization and reduce the time required for their completion. At the same time, a proactive reservation-based fault tolerance framework is presented to ensure continuous service delivery throughout execution without any interruption. The effectiveness of the suggested model is illustrated through simulations and empirical analyses, highlighting enhancements in several QoS parameters in comparison with HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO for various cases and conditions across different tasks and VMs. The outcomes demonstrate that CRFTS achieves average improvements of about 48.7%, 51.2%, 45.4%, 11.8%, 24.5%, and 24.4% in makespan and 13.1%, 9.3%, 6.5%, 21%, 22.1%, and 26.3% in average resource utilization compared to HEFT, FTSA-1, DBSA, E-HEFT, LB-HEFT, BDHEFT, HO-SSA, and MOTSWAO, respectively.
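
A hedged sketch of heartbeat-based failure detection with reassignment to a least-loaded backup VM is given below; the timeout, data structures, and reassignment rule are illustrative assumptions rather than the CRFTS design.

# Mark VMs that miss heartbeats as failed and move their tasks to the least-loaded survivor.
import time

HEARTBEAT_TIMEOUT = 3.0      # seconds without a heartbeat before a VM is presumed failed
last_beat = {"vm-1": time.time(), "vm-2": time.time(), "vm-3": time.time()}
assignments = {"vm-1": ["t1", "t4"], "vm-2": ["t2"], "vm-3": ["t3", "t5"]}

def record_heartbeat(vm_id):
    last_beat[vm_id] = time.time()

def detect_and_reschedule(now=None):
    now = now or time.time()
    for vm, ts in list(last_beat.items()):
        if now - ts > HEARTBEAT_TIMEOUT and assignments.get(vm):
            orphaned = assignments.pop(vm)
            backup = min(assignments, key=lambda v: len(assignments[v]))   # least-loaded survivor
            assignments[backup].extend(orphaned)
            print(f"{vm} missed heartbeats; moved {orphaned} to {backup}")

record_heartbeat("vm-1")
last_beat["vm-3"] -= 10          # simulate a silent (failed) VM
detect_and_reschedule()
print(assignments)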

RevDate: 2025-09-01

Kishor I, Mamodiya U, Patil V, et al (2025)

AI-Integrated autonomous robotics for solar panel cleaning and predictive maintenance using drone and ground-based systems.

Scientific reports, 15(1):32187.

Solar photovoltaic (PV) systems, especially in dusty and high-temperature regions, suffer performance degradation due to dust accumulation, surface heating, and delayed maintenance. This study proposes an AI-integrated autonomous robotic system combining real-time monitoring, predictive analytics, and intelligent cleaning for enhanced solar panel performance. We developed a hybrid system that integrates CNN-LSTM-based fault detection, Reinforcement Learning (DQN)-driven robotic cleaning, and Edge AI analytics for low-latency decision-making. Thermal and LiDAR-equipped drones detect panel faults, while ground robots clean panel surfaces based on real-time dust and temperature data. The system is built on Jetson Nano and Raspberry Pi 4B units with MQTT-based IoT communication. The system achieved an average cleaning efficiency of 91.3%, reducing dust density from 3.9 to 0.28 mg/m[3], and restoring up to 31.2% energy output on heavily soiled panels. CNN-LSTM-based fault detection delivered 92.3% accuracy, while the RL-based cleaning policy reduced energy and water consumption by 34.9%. Edge inference latency averaged 47.2 ms, outperforming cloud processing by 63%. A strong correlation (r = 0.87) between dust concentration and thermal anomalies was confirmed. The proposed IEEE 1876-compliant framework offers a resilient and intelligent solution for real-time solar panel maintenance. By leveraging AI, robotics, and edge computing, the system enhances energy efficiency, reduces manual labor, and provides a scalable model for climate-resilient, smart solar infrastructure.

RevDate: 2025-09-01

Maciá-Lillo A, Mora H, Jimeno-Morenilla A, et al (2025)

AI edge cloud service provisioning for knowledge management smart applications.

Scientific reports, 15(1):32246.

This paper investigates a serverless edge-cloud architecture to support knowledge management processes within smart cities, which align with the goals of Society 5.0 to create human-centered, data-driven urban environments. The proposed architecture leverages cloud computing for scalability and on-demand resource provisioning, and edge computing for cost-efficiency and data processing closer to data sources, while also supporting serverless computing for simplified application development. Together, these technologies enhance the responsiveness and efficiency of smart city applications, such as traffic management, public safety, and infrastructure governance, by minimizing latency and improving data handling at scale. Experimental analysis demonstrates the benefits of deploying KM processes on this hybrid architecture, particularly in reducing data transmission times and alleviating network congestion, while at the same time providing options for cost-efficient computations. In addition to that, the study also identifies the characteristics, opportunities and limitations of the edge and cloud environment in terms of computation and network communication times. This architecture represents a flexible framework for advancing knowledge-driven services in smart cities, supporting further development of smart city applications in KM processes.

RevDate: 2025-08-28
CmpDate: 2025-08-29

Kim EM, Y Lim (2025)

Mapping interconnectivity of digital twin healthcare research themes through structural topic modeling.

Scientific reports, 15(1):31734.

Digital twin (DT) technology is revolutionizing healthcare systems by leveraging real-time data integration and advanced analytics to enhance patient care, optimize clinical operations, and facilitate simulation. This study aimed to identify key research trends related to the application of DTs to healthcare using structural topic modeling (STM). Five electronic databases were searched for articles related to healthcare and DT. Using the held-out likelihood, residual, semantic coherence, and lower bound as metrics revealed that the optimal number of topics was eight. The "security solutions to improve data processes and communication in healthcare" topic was positioned at the center of the network and connected to multiple nodes. The "cloud computing and data network architecture" and "machine-learning algorithms for accurate detection and prediction" topics served as a bridge between technical and healthcare topics, suggesting their high potential for use in various fields. The widespread adoption of DTs in healthcare requires robust governance structures to protect individual rights, ensure data security and privacy, and promote transparency and fairness. Compliance with regulatory frameworks, ethical guidelines, and a commitment to accountability are also crucial.

RevDate: 2025-08-28
CmpDate: 2025-08-28

Zhang Y, Ran H, Guenther A, et al (2025)

Improved modelling of biogenic emissions in human-disturbed forest edges and urban areas.

Nature communications, 16(1):8064.

Biogenic volatile organic compounds (BVOCs) are critical to biosphere-atmosphere interactions, profoundly influencing atmospheric chemistry, air quality and climate, yet accurately estimating their emissions across diverse ecosystems remains challenging. Here we introduce GEE-MEGAN, a cloud-native extension of the widely used MEGAN2.1 model, integrating dynamic satellite-derived land cover and vegetation within Google Earth Engine to produce near-real-time BVOC emissions at 10-30 m resolution, enabling fine-scale tracking of emissions in rapidly changing environments. GEE-MEGAN reduces BVOC emission estimates by 31% and decreases root mean square errors by up to 48.6% relative to MEGAN2.1 in human-disturbed forest edges, and reveals summertime BVOC emissions up to 25‑fold higher than previous estimates in urban areas such as London, Los Angeles, Paris, and Beijing. By capturing fine-scale landscape heterogeneity and human-driven dynamics, GEE-MEGAN significantly improves BVOC emission estimates, providing crucial insights to the complex interactions among BVOCs, climate, and air quality across both natural and human-modified environments.

RevDate: 2025-08-28

Panagou IC, Katsoulis S, Nannos E, et al (2025)

A Comprehensive Evaluation of IoT Cloud Platforms: A Feature-Driven Review with a Decision-Making Tool.

Sensors (Basel, Switzerland), 25(16): pii:s25165124.

The rapid proliferation of Internet of Things (IoT) devices has led to a growing ecosystem of Cloud Platforms designed to manage, process, and analyze IoT data. Selecting the optimal IoT Cloud Platform is a critical decision for businesses and developers, yet it presents a significant challenge due to the diverse range of features, pricing models, and architectural nuances. This manuscript presents a comprehensive, feature-driven review of twelve prominent IoT Cloud Platforms, including AWS IoT Core, IoT on Google Cloud Platform, and Microsoft Azure IoT Hub among others. We meticulously analyze each platform across nine key features: Security, Scalability and Performance, Interoperability, Data Analytics and AI/ML Integration, Edge Computing Support, Pricing Models and Cost-effectiveness, Developer Tools and SDK Support, Compliance and Standards, and Over-The-Air (OTA) Update Capabilities. For each feature, platforms are quantitatively scored (1-10) based on an in-depth assessment of their capabilities and offerings at the time of research. Recognizing the dynamic nature of this domain, we present our findings in a two-dimensional table to provide a clear comparative overview. Furthermore, to empower users in their decision-making process, we introduce a novel, web-based tool for evaluating IoT Cloud Platforms, called the "IoT Cloud Platforms Selector". This interactive tool allows users to assign personalized weights to each feature, dynamically calculating and displaying weighted scores for each platform, thereby facilitating a tailored selection process. This research provides a valuable resource for researchers, practitioners, and organizations seeking to navigate the complex landscape of IoT Cloud Platforms.
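
The weighted-score idea behind the described selector can be sketched in a few lines; the feature weights and the per-platform scores below are purely illustrative placeholders, not the manuscript's actual ratings.

# Weighted scoring of platforms: per-feature scores (1-10) times user-assigned weights.
weights = {"security": 0.3, "scalability": 0.2, "pricing": 0.2, "edge_support": 0.15, "ota": 0.15}

platforms = {
    "Platform A": {"security": 9, "scalability": 8, "pricing": 6, "edge_support": 8, "ota": 7},
    "Platform B": {"security": 8, "scalability": 9, "pricing": 7, "edge_support": 7, "ota": 8},
}

def weighted_score(scores, weights):
    return sum(weights[f] * scores[f] for f in weights)

ranking = sorted(platforms, key=lambda p: weighted_score(platforms[p], weights), reverse=True)
for p in ranking:
    print(p, round(weighted_score(platforms[p], weights), 2))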

RevDate: 2025-08-28
CmpDate: 2025-08-28

Rao S, S Neethirajan (2025)

Computational Architectures for Precision Dairy Nutrition Digital Twins: A Technical Review and Implementation Framework.

Sensors (Basel, Switzerland), 25(16): pii:s25164899.

Sensor-enabled digital twins (DTs) are reshaping precision dairy nutrition by seamlessly integrating real-time barn telemetry with advanced biophysical simulations in the cloud. Drawing insights from 122 peer-reviewed studies spanning 2010-2025, this systematic review reveals how DT architectures for dairy cattle are conceptualized, validated, and deployed. We introduce a novel five-dimensional classification framework-spanning application domain, modeling paradigms, computational topology, validation protocols, and implementation maturity-to provide a coherent comparative lens across diverse DT implementations. Hybrid edge-cloud architectures emerge as optimal solutions, with lightweight CNN-LSTM models embedded in collar or rumen-bolus microcontrollers achieving over 90% accuracy in recognizing feeding and rumination behaviors. Simultaneously, remote cloud systems harness mechanistic fermentation simulations and multi-objective genetic algorithms to optimize feed composition, minimize greenhouse gas emissions, and balance amino acid nutrition. Field-tested prototypes indicate significant agronomic benefits, including 15-20% enhancements in feed conversion efficiency and water use reductions of up to 40%. Nevertheless, critical challenges remain: effectively fusing heterogeneous sensor data amid high barn noise, ensuring millisecond-level synchronization across unreliable rural networks, and rigorously verifying AI-generated nutritional recommendations across varying genotypes, lactation phases, and climates. Overcoming these gaps necessitates integrating explainable AI with biologically grounded digestion models, federated learning protocols for data privacy, and standardized PRISMA-based validation approaches. The distilled implementation roadmap offers actionable guidelines for sensor selection, middleware integration, and model lifecycle management, enabling proactive rather than reactive dairy management-an essential leap toward climate-smart, welfare-oriented, and economically resilient dairy farming.

RevDate: 2025-08-28

Alamri M, Humayun M, Haseeb K, et al (2025)

AI-Powered Adaptive Disability Prediction and Healthcare Analytics Using Smart Technologies.

Diagnostics (Basel, Switzerland), 15(16): pii:diagnostics15162104.

Background: By leveraging advanced wireless technologies, Healthcare Industry 5.0 promotes the continuous monitoring of real-time medical acquisition from the physical environment. These systems help identify early diseases by collecting health records from patients' bodies promptly using biosensors. The dynamic nature of medical devices not only enhances the data analysis in medical services and the prediction of chronic diseases, but also improves remote diagnostics with the latency-aware healthcare system. However, due to scalability and reliability limitations in data processing, most existing healthcare systems pose research challenges in the timely detection of personalized diseases, leading to inconsistent diagnoses, particularly when continuous monitoring is crucial. Methods: This work proposes an adaptive and secure framework for disability identification using the Internet of Medical Things (IoMT), integrating edge computing and artificial intelligence. To achieve the shortest response time for medical decisions, the proposed framework explores lightweight edge computing processes that collect physiological and behavioral data using biosensors. Furthermore, it offers a trusted mechanism using decentralized strategies to protect big data analytics from malicious activities and increase authentic access to sensitive medical data. Lastly, it provides personalized healthcare interventions while monitoring healthcare applications using realistic health records, thereby enhancing the system's ability to identify diseases associated with chronic conditions. Results: The proposed framework is tested using simulations, and the results indicate the high accuracy of the healthcare system in detecting disabilities at the edges, while enhancing the prompt response of the cloud server and guaranteeing the security of medical data through lightweight encryption methods and federated learning techniques. Conclusions: The proposed framework offers a secure and efficient solution for identifying disabilities in healthcare systems by leveraging IoMT, edge computing, and AI. It addresses critical challenges in real-time disease monitoring, enhancing diagnostic accuracy and ensuring the protection of sensitive medical data.

RevDate: 2025-08-28

Gao H (2025)

Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space-Air-Marine Network.

Entropy (Basel, Switzerland), 27(8): pii:e27080803.

This paper investigates the problem of computation offloading and resource allocation in an integrated space-air-sea network based on unmanned aerial vehicle (UAV) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud-edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, reducing the total system cost by 15-60%.

RevDate: 2025-08-27

Massimi F, Tedeschi A, Bagadi K, et al (2025)

Integrating Google Maps and Smooth Street View Videos for Route Planning.

Journal of imaging, 11(8):.

This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates and presents the route conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client-server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.

RevDate: 2025-08-27
CmpDate: 2025-08-27

Tang H, Yuan Y, Liu H, et al (2025)

Application of a "nursing education cloud platform"-based combined and phased training model in the education of standardized-training nurses: A quasi-experimental study.

Medicine, 104(34):e44138.

The evolution of nursing education has rendered traditional standardized-training models increasingly inadequate, primarily due to their inflexible curricula, limited personalized instruction, and delayed feedback loops. While stage-based training models offer improved coherence through structured planning, they encounter difficulties in resource integration and real-time interaction. Contemporary advancements in cloud computing and Internet of Things technologies present novel opportunities for educational reform. Nursing Education Cloud Platform (NECP)-based systems have demonstrated efficacy in medical education, particularly in efficient resource management, data-driven decision-making, and the design of adaptable learning pathways. Despite the nascent implementation of cloud platforms in standardized nurse training, the sustained impact on multifaceted competencies, including professional identity and clinical reasoning, warrants further investigation. The primary objective of this investigation was to assess the effectiveness of a NECP-integrated, phased training model in enhancing standardized-training nurses' theoretical comprehension, practical competencies, professional self-perception, and clinical decision-making capabilities, while also examining its potential to refine nursing education methodologies. This quasi-experimental, non-randomized controlled trial evaluated the impact of a NECP-based training program. The study encompassed an experimental group (n = 56, receiving cloud platform-based training from September 2021 to August 2022) and a control group (n = 56, undergoing traditional training from September 2020 to August 2021). Group assignment was determined by the hospital's annual training schedule, thus employing a natural grouping based on the time period. Propensity score matching was utilized to mitigate baseline characteristic imbalances. The intervention's effects were assessed across several domains, including theoretical knowledge, operational skills, professional identity, and clinical reasoning abilities. ANCOVA was employed to account for temporal covariates. The experimental group scored significantly higher than the control group in theoretical knowledge (88.70 ± 5.07 vs 75.55 ± 9.01, P < .05), operational skills (94.27 ± 2.04 vs 90.95 ± 3.69, P < .05), professional identity (73.18 ± 10.18 vs 62.54 ± 15.48, P < .05), and clinical reasoning ability (60.95 ± 8.90 vs 51.09 ± 12.28, P < .05). The integration of the "NECP" with a phased training model demonstrates efficacy in augmenting nurses' competencies. However, the potential for selection bias, inherent in the non-randomized design, warrants careful consideration in the interpretation of these findings. Further investigation, specifically through multicenter longitudinal studies, is recommended to ascertain the generalizability of these results.

RevDate: 2025-08-26
CmpDate: 2025-08-26

Brown S, Kudia O, Kleine K, et al (2025)

Comparing Multiple Imputation Methods to Address Missing Patient Demographics in Immunization Information Systems: Retrospective Cohort Study.

JMIR public health and surveillance, 11:e73916 pii:v11i1e73916.

BACKGROUND: Immunization Information Systems (IIS) and surveillance data are essential for public health interventions and programming; however, missing data are often a challenge, potentially introducing bias and impacting the accuracy of vaccine coverage assessments, particularly in addressing disparities.

OBJECTIVE: This study aimed to evaluate the performance of 3 multiple imputation methods, Stata's (StataCorp LLC) multiple imputation using chained equations (MICE), scikit-learn's Iterative-Imputer, and Python's miceforest package, in managing missing race and ethnicity data in large-scale surveillance datasets. We compared these methodologies on their ability to preserve demographic distributions and on computational efficiency, and performed G-tests on contingency tables to obtain likelihood ratio statistics assessing the association between race and ethnicity and flu vaccination status.

METHODS: In this retrospective cohort study, we analyzed 2021-2022 flu vaccination and demographic data from the West Virginia Immunization Information System (N=2,302,036), where race (15%) and ethnicity (34%) were missing. MICE, Iterative Imputer, and miceforest were used to impute missing variables, generating 15 datasets each. Computational efficiency, demographic distribution preservation, and spatial clustering patterns were assessed using G-statistics.

RESULTS: After imputation, an additional 780,339 observations were available compared with complete case analysis. All imputation methods exhibited significant spatial clustering for race imputation (G-statistics: MICE=26,452.7, Iterative-Imputer=128,280.3, miceforest=26,891.5; P<.001), while ethnicity imputation showed variable clustering patterns (G-statistics: MICE=1142.2, Iterative-Imputer=1.7, miceforest=2185.0; P<.001 for MICE and miceforest). MICE and miceforest best preserved the proportional distribution of demographics. Computational efficiency varied, with MICE requiring 14 hours, Iterative-Imputer 2 minutes, and miceforest 10 minutes for 15 imputations. Postimputation estimates indicated a 0.87%-18% reduction in stratified flu vaccination coverage rates, and overall estimated flu vaccination rates decreased from 26% to 19% after imputation.

CONCLUSIONS: Both MICE and miceforest offer flexible and reliable approaches for imputing missing demographic data while mitigating bias compared with Iterative-Imputer. Our results also highlight that the choice of imputation method can profoundly affect research findings. Although MICE and miceforest produced better effect sizes and reliability, MICE was far more computationally expensive and time-consuming, limiting its use in large surveillance datasets. The miceforest package can leverage cloud-based computing, which further enhances efficiency by offloading resource-intensive tasks, enabling parallel execution, and minimizing processing delays. The marked decrease in vaccination coverage estimates illustrates how incomplete or missing data can obscure real disparities. Our findings support the routine application of imputation methods in immunization surveillance to improve health equity evaluations and shape targeted public health interventions and programming.
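
For readers unfamiliar with these tools, the snippet below is a minimal sketch of one of the three compared approaches, scikit-learn's IterativeImputer, applied to a toy demographic table. The column names, the integer encoding of race and ethnicity, and the missingness pattern are assumptions; the Stata MICE and miceforest runs are not shown.

    # Sketch of multiple imputation with scikit-learn's IterativeImputer.
    # Race/ethnicity are assumed to be integer-encoded categories; real IIS data
    # would need proper categorical handling.
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "age": rng.integers(0, 90, 1000),
        "race_code": rng.integers(0, 5, 1000).astype(float),
        "ethnicity_code": rng.integers(0, 2, 1000).astype(float),
        "flu_vaccinated": rng.integers(0, 2, 1000),
    })
    # Introduce missingness comparable to the study (15% race, 34% ethnicity).
    df.loc[df.sample(frac=0.15, random_state=1).index, "race_code"] = np.nan
    df.loc[df.sample(frac=0.34, random_state=2).index, "ethnicity_code"] = np.nan

    # Generate several imputed datasets by varying the random state,
    # mimicking the 15 imputations used in the study.
    imputed_sets = []
    for m in range(15):
        imp = IterativeImputer(max_iter=10, random_state=m, sample_posterior=True)
        completed = pd.DataFrame(imp.fit_transform(df), columns=df.columns)
        completed["race_code"] = completed["race_code"].round().clip(0, 4)
        completed["ethnicity_code"] = completed["ethnicity_code"].round().clip(0, 1)
        imputed_sets.append(completed)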

RevDate: 2025-08-26
CmpDate: 2025-08-26

Nguyen C, Nguyen T, Trivitt G, et al (2025)

Modular and cloud-based bioinformatics pipelines for high-confidence biomarker detection in cancer immunotherapy clinical trials.

PloS one, 20(8):e0330827 pii:PONE-D-25-08135.

BACKGROUND: The Cancer Immune Monitoring and Analysis Centers - Cancer Immunologic Data Center (CIMAC-CIDC) network aims to improve cancer immunotherapy by providing harmonized molecular assays and standardized bioinformatics analysis.

RESULTS: In response to evolving bioinformatics standards and the migration of the CIDC to the National Cancer Institute (NCI), we enhanced the CIDC's existing whole exome sequencing (WES) and RNA sequencing (RNA-Seq) pipelines. Leveraging open-source tools and cloud-based technologies, we implemented modular workflows using Snakemake and Docker for efficient deployment on the Google Cloud Platform (GCP). Benchmarking analyses demonstrate improved reproducibility, precision, and recall across validated truth sets for variant calling, transcript quantification, and fusion detection.

CONCLUSION: This work establishes a scalable framework for harmonized multi-omic analyses, ensuring the continuity and reliability of bioinformatics workflows in multi-site clinical research aimed at advancing cancer biomarker discovery and personalized medicine.
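
The abstract names Snakemake, Docker, and GCP but does not reproduce the workflows. As a simplified stand-in (plain Python rather than Snakemake rules), the sketch below shows the underlying idea of running each modular step inside its own container; the stages, images, and commands are placeholders, not the CIDC pipelines.

    # Simplified illustration of the modular, containerized workflow idea:
    # each step runs in its own Docker container via the docker CLI.
    import subprocess
    from pathlib import Path

    def run_step(image, command, workdir):
        """Run one pipeline step inside a container, mounting the working directory."""
        subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{Path(workdir).resolve()}:/data", "-w", "/data",
             image] + command,
            check=True,
        )

    def wes_pipeline(workdir):
        # Placeholder steps: a real pipeline would swap in QC, alignment, and
        # variant-calling images; here a stock Python image just echoes each stage.
        for stage in ["qc", "alignment", "variant_calling"]:
            run_step("python:3.12-slim",
                     ["python", "-c", f"print('running {stage}')"], workdir)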

RevDate: 2025-08-25

Nazmul Haque SM, MJ Uddin (2025)

Monitoring LULC dynamics and detecting transformation hotspots in sylhet, Bangladesh (2000-2023) using Google Earth Engine.

Scientific reports, 15(1):31263.

Sylhet, located in the northeastern part of Bangladesh, is characterized by a unique topography and climatic conditions that make it susceptible to flash floods. The interplay of rapid urbanization and climatic variability has exacerbated these flood risks in recent years. Effective monitoring and planning of land use/land cover (LULC) are crucial strategies for mitigating these hazards. While previous studies analyzed LULC in parts of Sylhet using traditional GIS approaches, no comprehensive, district-wide assessment had been carried out using long-term satellite data and cloud computing platforms. This study addresses that gap by applying Google Earth Engine (GEE) for an extensive analysis of LULC changes, transitions, and hot/cold spots across the district. Accordingly, this work investigates LULC changes in Sylhet district over the past twenty-three years (2000-2023). Using satellite imagery from Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 Operational Land Imager (OLI), LULC is classified for six selected years (2000, 2005, 2010, 2015, 2020, and 2023). A supervised machine learning algorithm, the Random Forest classifier, is employed on the Google Earth Engine cloud computing platform to analyze LULC dynamics and detect changes. The Getis-Ord Gi* statistic is applied to identify land-transformation hot spot and cold spot areas. The results reveal a significant increase in built-up areas and a corresponding reduction in water bodies. Spatial analysis at the upazila level indicates urban expansion in every upazila, with the most substantial increase observed in Beani Bazar upazila, where urban areas expanded by approximately 1500%. Conversely, Bishwanath upazila experienced the greatest reduction in water bodies, with a decrease of about 90%. Sylhet Sadar upazila showed a 240% increase in urban areas and a 72% decrease in water bodies. According to the hotspot analysis, Kanaighat upazila has the largest share of unchanged land at 7%, whereas Balaganj upazila has the largest share of LULC transformation at 5.5%. Overall, the urban area in the Sylhet district has grown by approximately 300%, while water bodies have diminished by about 77%, reflecting trends of urbanization and river-filling. These findings underscore the necessity of ensuring adequate drainage infrastructure to reduce flash flood hazards in the Sylhet district and offer useful information to relevant authorities, policymakers, and water resource engineers.
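
The abstract does not include the Earth Engine script; a condensed sketch of the classification step in the GEE Python API might look as follows, with the training-point asset, cloud-cover threshold, and class property being assumptions rather than the authors' settings.

    # Rough Earth Engine (Python API) sketch of the classification step described
    # above: a median Landsat 8 composite over Sylhet classified with a random forest.
    import ee
    ee.Initialize()

    sylhet = (ee.FeatureCollection("FAO/GAUL/2015/level2")
              .filter(ee.Filter.eq("ADM2_NAME", "Sylhet")))

    bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
    composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                 .filterBounds(sylhet)
                 .filterDate("2023-01-01", "2023-12-31")
                 .filter(ee.Filter.lt("CLOUD_COVER", 20))
                 .median()
                 .select(bands)
                 .clip(sylhet.geometry()))

    # Labelled points with an integer 'lulc' class (placeholder asset path).
    training_points = ee.FeatureCollection("users/example/sylhet_lulc_training")
    training = composite.sampleRegions(collection=training_points,
                                       properties=["lulc"], scale=30)

    classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
        features=training, classProperty="lulc", inputProperties=bands)
    classified_2023 = composite.classify(classifier)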

RevDate: 2025-08-25

Dhanaraj RK, Maragatharajan M, Sureshkumar A, et al (2025)

On-device AI for climate-resilient farming with intelligent crop yield prediction using lightweight models on smart agricultural devices.

Scientific reports, 15(1):31195.

In recent years, artificial intelligence (AI) applications have proliferated across many domains, and agricultural consumer electronics are no exception. These innovations have significantly enhanced the intelligence of agricultural processes, leading to increased efficiency and sustainability. This study introduces an intelligent crop yield prediction system that uses a Random Forest (RF) classifier to optimize water usage based on environmental factors. By integrating lightweight machine learning with consumer electronics, such as sensors embedded in smart display devices, the work aims to improve water management and promote sustainable farming practices. For sustainable agriculture, irrigation water-use efficiency is enhanced by predicting optimal watering schedules, which reduces environmental impact and supports climate-resilient farming. The proposed lightweight model was trained on real-time agricultural data with minimal memory requirements and achieved 90.1% accuracy in identifying crop yields suitable for the farmland, outperforming existing methods including an AI-enabled IoT model with mobile sensors and deep learning architectures (89%), LoRa-based systems (87.2%), and adaptive AI with self-learning techniques (88%). Deploying computationally efficient machine learning models such as random forests emphasizes real-time decision-making on the device, without depending on cloud computing. The performance and effectiveness of the proposed method are evaluated using prediction accuracy, which assesses how accurately the model predicts irrigation needs from sensor data.
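
No code accompanies the abstract; the following sketch shows the general shape of a small-footprint random-forest classifier on tabular sensor readings, here predicting irrigation need as a stand-in for the paper's yield/watering-schedule model. The synthetic features, labels, and hyperparameters are assumptions chosen only for illustration.

    # Minimal sketch of a lightweight random-forest model on environmental readings.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(42)
    n = 2000
    X = np.column_stack([
        rng.uniform(10, 45, n),   # air temperature (deg C)
        rng.uniform(10, 95, n),   # relative humidity (%)
        rng.uniform(5, 60, n),    # soil moisture (%)
        rng.uniform(0, 30, n),    # rainfall in last 24 h (mm)
    ])
    # Toy label: irrigate when the soil is dry and there was little recent rain.
    y = ((X[:, 2] < 25) & (X[:, 3] < 5)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    # A small forest with shallow trees keeps the memory footprint suitable
    # for on-device deployment.
    model = RandomForestClassifier(n_estimators=30, max_depth=6, random_state=0)
    model.fit(X_tr, y_tr)
    print("holdout accuracy:", accuracy_score(y_te, model.predict(X_te)))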

RevDate: 2025-08-25

Ozlem K, Gumus C, Yilmaz AF, et al (2025)

Cloud-Based Control System with Sensing and Actuating Textile-Based IoT Gloves for Telerehabilitation Applications.

Advanced intelligent systems (Weinheim an der Bergstrasse, Germany), 7(8):2400894.

Remote manipulation devices extend human capabilities over vast distances or in inaccessible environments, removing constraints between patients and treatment. The integration of therapeutic and assistive devices with the Internet of Things (IoT) has demonstrated high potential to develop and enhance intelligent rehabilitation systems in the e-health domain. Within such devices, soft robotic products distinguish themselves through their lightweight and adaptable characteristics, facilitating secure collaboration between humans and robots. The objective of this research is to combine a textile-based sensorized glove with an air-driven soft robotic glove, operated wirelessly using the developed control system architecture. The sensing glove, equipped with capacitive sensors on each finger, captures the movements of the medical staff's hand. Meanwhile, the pneumatic rehabilitation glove, designed to aid patients with impaired hand function due to stroke, brain injury, or spinal cord injury, replicates the movements of the medical personnel. The proposed artificial intelligence-based system detects finger gestures and actuates the pneumatic system with an average response time of 48.4 ms. Further evaluation of the system in terms of accuracy and transmission-quality metrics confirms the feasibility of integrating textile gloves into an IoT infrastructure for remote motion sensing and actuation.
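
The control system itself is not published in the abstract; the sketch below illustrates, in plain Python, one plausible sensing-to-actuation path: per-finger capacitive readings are converted to bend/straight flags and mapped to valve commands. Calibration values, thresholds, and the wireless transport (elided here) are assumptions, not the authors' design.

    # Illustrative-only control loop for a sensing-to-actuation path of this kind.
    from dataclasses import dataclass

    @dataclass
    class FingerChannel:
        baseline: float      # capacitance with the finger straight
        full_bend: float     # capacitance with the finger fully bent

    def detect_gesture(readings, channels, threshold=0.5):
        """Return a tuple of 0/1 flags (straight/bent) per finger."""
        gesture = []
        for value, ch in zip(readings, channels):
            span = max(ch.full_bend - ch.baseline, 1e-6)
            bend_ratio = (value - ch.baseline) / span
            gesture.append(1 if bend_ratio >= threshold else 0)
        return tuple(gesture)

    def actuation_command(gesture):
        """Map the gesture to per-finger valve states for the pneumatic glove."""
        return {f"valve_{i}": ("inflate" if bent else "vent")
                for i, bent in enumerate(gesture)}

    channels = [FingerChannel(baseline=10.0, full_bend=18.0) for _ in range(5)]
    readings = [16.5, 11.0, 17.2, 10.8, 15.9]          # one sample per finger
    print(actuation_command(detect_gesture(readings, channels)))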

RevDate: 2025-08-25

Saratkar SY, Langote M, Kumar P, et al (2025)

Digital twin for personalized medicine development.

Frontiers in digital health, 7:1583466.

Digital Twin (DT) technology is revolutionizing healthcare by enabling real-time monitoring, predictive analytics, and highly personalized medical care. As a key innovation of Industry 4.0, DTs integrate advanced tools like artificial intelligence (AI), the Internet of Things (IoT), and machine learning (ML) to create dynamic, data-driven replicas of patients. These digital replicas allow simulations of disease progression, optimize diagnostics, and personalize treatment plans based on individual genetic and lifestyle profiles. This review explores the evolution, architecture, and enabling technologies of DTs, focusing on their transformative applications in personalized medicine (PM). While the integration of DTs offers immense potential to improve outcomes and efficiency in healthcare, challenges such as data privacy, system interoperability, and ethical concerns must be addressed. The paper concludes by highlighting future directions, where AI, cloud computing, and blockchain are expected to play a pivotal role in overcoming these limitations and advancing precision medicine.

RevDate: 2025-08-24
CmpDate: 2025-08-24

Beć KB, Grabska J, CW Huck (2025)

Handheld NIR spectroscopy for real-time on-site food quality and safety monitoring.

Advances in food and nutrition research, 115:293-389.

This chapter reviews the applications and future directions of portable near-infrared (NIR) spectroscopy in food analytics, with a focus on quality control, safety monitoring, and fraud detection. Portable NIR spectrometers are essential for real-time, non-destructive analysis of food composition, and their use is rapidly expanding across various stages of the food production chain, from agriculture and processing to retail and consumer applications. The functional design of miniaturized NIR spectrometers is examined, linking the technological diversity of these sensors to their application potential in specific roles within the food sector, while discussing challenges related to thermal stability, energy efficiency, and spectral accuracy. Current trends in data analysis, including chemometrics and artificial intelligence, are also highlighted, as the successful application of portable spectroscopy heavily depends on this key aspect of the analytical process. This discussion is based on recent literature, with a focus on the last five years, and addresses the application of portable NIR spectroscopy in food quality assessment and composition analysis, food safety and contaminant detection, and food authentication and fraud prevention. The chapter concludes that portable NIR spectroscopy has significantly enhanced food analytics over the past decade, with ongoing trends likely to lead to even wider adoption in the near future. Future challenges related to ultra-miniaturization and emerging consumer-oriented spectrometers emphasize the need for robust pre-calibrated models and the development of global models for key applications. The integration of NIR spectrometers with cloud computing, IoT, and machine learning is expected to drive advancements in real-time monitoring, predictive modeling, and data processing, meeting the growing demand for improved safety, quality, and fraud detection from the farm to the fork.

RevDate: 2025-08-21
CmpDate: 2025-08-21

Cui D, Peng Z, Li K, et al (2025)

A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing.

PloS one, 20(8):e0329669 pii:PONE-D-24-45416.

With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem. To address the challenges of large-scale task scheduling in cloud computing environments, this paper proposes a novel cloud task scheduling framework based on hierarchical deep reinforcement learning (DRL). The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to a cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. Experiments demonstrate that it effectively balances cost and performance: in low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, the framework optimizes load balance, cost, and overdue time, achieving a 10% overall improvement. One potential shortcoming of the proposed hierarchical DRL framework is its complexity and computational overhead: implementing and maintaining a DRL-based scheduler requires significant computational resources and expertise in machine learning. In addition, the continuous learning and updating of network parameters may introduce latency, which could impact real-time scheduling efficiency, and the framework's performance depends heavily on the quality and quantity of training data, which can be challenging to obtain and maintain in a dynamic cloud environment.
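
The paper's DRL scheduler is not reproduced here; the toy sketch below conveys only the two-level decision structure (cluster first, then VM), using tabular epsilon-greedy Q-learning in place of deep networks. The state discretization, reward, and cluster layout are assumptions.

    # Toy two-level scheduler: pick a cluster, then a VM inside it.
    import random
    from collections import defaultdict

    class EpsilonGreedyQ:
        def __init__(self, n_actions, eps=0.1, alpha=0.2, gamma=0.9):
            self.q = defaultdict(lambda: [0.0] * n_actions)
            self.n_actions, self.eps, self.alpha, self.gamma = n_actions, eps, alpha, gamma

        def act(self, state):
            if random.random() < self.eps:
                return random.randrange(self.n_actions)
            values = self.q[state]
            return values.index(max(values))

        def learn(self, state, action, reward, next_state):
            target = reward + self.gamma * max(self.q[next_state])
            self.q[state][action] += self.alpha * (target - self.q[state][action])

    def load_bucket(loads):
        """Discretise mean utilisation into a coarse state."""
        return int(sum(loads) / len(loads) * 10)

    random.seed(0)
    clusters = [[0.0] * 4 for _ in range(3)]          # 3 clusters of 4 VMs (utilisation)
    cluster_agent = EpsilonGreedyQ(n_actions=len(clusters))
    vm_agents = [EpsilonGreedyQ(n_actions=4) for _ in clusters]

    for task_load in [random.uniform(0.05, 0.2) for _ in range(500)]:
        state = load_bucket([u for c in clusters for u in c])
        c = cluster_agent.act(state)                  # level 1: choose a cluster
        vm_state = load_bucket(clusters[c])
        v = vm_agents[c].act(vm_state)                # level 2: choose a VM
        clusters[c][v] = min(1.0, clusters[c][v] + task_load)
        reward = -clusters[c][v]                      # assumed objective: avoid hot VMs
        next_state = load_bucket([u for c_ in clusters for u in c_])
        cluster_agent.learn(state, c, reward, next_state)
        vm_agents[c].learn(vm_state, v, reward, load_bucket(clusters[c]))
        clusters[c][v] = max(0.0, clusters[c][v] - 0.1)  # crude task-completion decay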

RevDate: 2025-08-20

Manhary FN, Mohamed MH, M Farouk (2025)

A scalable machine learning strategy for resource allocation in database.

Scientific reports, 15(1):30567.

Modern cloud computing systems require intelligent resource allocation strategies that balance quality-of-service (QoS), operational costs, and energy sustainability. Existing deep Q-learning (DQN) methods suffer from sample inefficiency, centralization bottlenecks, and reactive decision-making during workload spikes. Transformer-based forecasting models such as Temporal Fusion Transformer (TFT) offer improved accuracy but introduce computational overhead, limiting real-time deployment. We propose LSTM-MARL-Ape-X, a novel framework integrating bidirectional Long Short-Term Memory (BiLSTM) for workload forecasting with Multi-Agent Reinforcement Learning (MARL) in a distributed Ape-X architecture. This approach enables proactive, decentralized, and scalable resource management through three innovations: high-accuracy forecasting using BiLSTM with feature-wise attention, variance-regularized credit assignment for stable multi-agent coordination, and faster convergence via adaptive prioritized replay. Experimental validation on real-world traces demonstrates 94.6% SLA compliance, 22% reduction in energy consumption, and linear scalability to over 5,000 nodes with sub-100 ms decision latency. The framework converges 3.2× faster than uniform sampling baselines and outperforms transformer-based models in both accuracy and inference speed. Unlike decoupled prediction-action frameworks, our method provides end-to-end optimization, enabling robust and sustainable cloud orchestration at scale.
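
As a minimal illustration of the forecasting component described above, the Keras sketch below trains a bidirectional LSTM on a synthetic workload trace; the feature-wise attention, MARL agents, and Ape-X replay are omitted, and the window length and layer sizes are assumptions.

    # Bidirectional LSTM forecaster on a synthetic workload trace (sketch only).
    import numpy as np
    from tensorflow.keras import layers, models

    WINDOW, FEATURES = 48, 3          # e.g. CPU, memory, request rate per step
    rng = np.random.default_rng(0)
    series = rng.random((5000, FEATURES)).astype("float32")
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW - 1)])
    y = series[WINDOW + 1:, 0]        # next-step CPU demand

    model = models.Sequential([
        layers.Input(shape=(WINDOW, FEATURES)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=64, validation_split=0.1, verbose=0)
    next_demand = model.predict(X[-1:], verbose=0)   # forecast that would feed the RL agents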

RevDate: 2025-08-19

Park SY, Takayama C, Ryu J, et al (2025)

Design and evaluation of next-generation HIV genotyping for detection of resistance mutations to 28 antiretroviral drugs across five major classes including lenacapavir.

Clinical infectious diseases : an official publication of the Infectious Diseases Society of America pii:8237671 [Epub ahead of print].

BACKGROUND: The emergence and spread of HIV drug-resistant strains present a major barrier to effective lifelong Antiretroviral Therapy (ART). The anticipated rise in long-acting subcutaneous lenacapavir (LEN) use, along with the increased risk of transmitted resistance and Pre-Exposure Prophylaxis (PrEP)-associated resistance, underscores the urgent need for advanced genotyping methods to enhance clinical care and prevention strategies.

METHODS: We developed the Portable HIV Genotyping (PHG) platform which combines cost-effective next-generation sequencing with cloud computing to screen for resistance to 28 antiretroviral drugs across five major classes, including LEN. We analyzed three study cohorts and compared our drug resistance findings against standard care testing results and high-fidelity sequencing data obtained through unique molecular identifier (UMI) labeling.

RESULTS: PHG identified two major LEN-resistance mutations in one participant, confirmed by an additional independent sequencing run. Across three study cohorts, PHG consistently detected the same drug resistance mutations as standard care genotyping and high-fidelity UMI-labeling in most tested specimens. PHG's 10% limit of detection minimized false positives and enabled identification of minority variants less than 20% frequency, pointing to underdiagnosis of drug resistance in clinical care. Furthermore, PHG identified linked cross-class resistance mutations, confirmed by UMI-labeling, including linked cross-resistance in a participant who reported use of long-acting cabotegravir (CAB) and rilpivirine (RPV). We also observed multi-year persistence of linked cross-class resistance mutations.

CONCLUSIONS: PHG demonstrates significant improvements over standard care HIV genotyping, offering deeper insights into LEN-resistance, minority variants, and cross-class resistance using a low-cost high-throughput portable sequencing technology and publicly available cloud computing.
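
The PHG pipeline itself is not shown in the abstract; the short sketch below only illustrates how a 10% limit of detection could be applied to per-mutation read counts when deciding which resistance mutations to report. The counts and coverage values are invented for illustration.

    # Applying a 10% limit of detection to per-position read support (toy data).
    LOD = 0.10   # report variants at or above 10% frequency

    # Hypothetical read support at resistance-associated positions:
    # {mutation: (reads supporting the mutant residue, total coverage)}
    observed = {
        "RT:K103N": (1200, 10500),
        "IN:Q148H": (310, 9800),
        "CA:M66I":  (95, 8700),
    }

    for mutation, (alt, depth) in observed.items():
        freq = alt / depth
        status = "reported" if freq >= LOD else "below LOD"
        print(f"{mutation}: {freq:.1%} ({status})")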


RJR Experience and Expertise

Researcher

Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.

Educator

Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.

Administrator

Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences, at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.

Technologist

Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.

Publisher

While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.

Speaker

Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.

Facilitator

Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.

Designer

Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.



963 Red Tail Lane
Bellingham, WA 98226

206-300-3443

E-mail: RJR8222@gmail.com

Collection of publications by R J Robbins

Reprints and preprints of publications, slide presentations, instructional materials, and data compilations written or prepared by Robert Robbins. Most papers deal with computational biology, genome informatics, using information technology to support biomedical research, and related matters.

Research Gate page for R J Robbins

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. According to a study by Nature and an article in Times Higher Education, it is the largest academic social network in terms of active users.

Curriculum Vitae for R J Robbins

short personal version

Curriculum Vitae for R J Robbins

long standard version
