

QUERY RUN: 29 Nov 2025 at 01:42
HITS: 4332

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 29 Nov 2025 at 01:42

Cloud Computing

Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

The Papers (from PubMed®)


RevDate: 2025-11-29

Du H, P Butkaew (2025)

A deep learning-based intelligent curriculum system for enhancing public music education: a case study across three universities in Southwest China.

Scientific reports, 15(1):42798.

Responding to national aesthetic education reforms, this study introduces a deep learning-driven platform to enhance public music education in Southwest China's universities. Utilizing LSTM and Transformer models, the system analyzes real-time student learning, predicts mastery trends, and delivers personalized feedback via a cloud-based interface. A semester-long experiment across Guizhou Minzu University, Guizhou University, and Xichang University compared three groups: traditional instruction, MOOC-based hybrid teaching, and AI-enhanced personalized learning. The AI group achieved 32% higher post-test mastery scores, with predictive models maintaining high accuracy (RMSE < 0.15). The platform supports adaptive assessments, intelligent feedback, and instructional decision-making, offering a scalable solution for AI integration in arts education, particularly in culturally diverse, data-scarce settings. This work informs policymakers and developers aiming to modernize aesthetic education through advanced computing.

RevDate: 2025-11-27

Jamshidi O, Abbasi M, Ramazani A, et al (2025)

Modifier guided resilient CNN inference enables fault-tolerant edge collaboration for IoT.

Scientific reports pii:10.1038/s41598-025-28454-z [Epub ahead of print].

In resource-constrained Internet of Things (IoT) scenarios, implementing robust and accurate deep learning inference is problematic due to device failures, limited computing power, and privacy concerns. We present a resilient, completely edge-based distributed convolutional neural network (CNN) architecture that eliminates cloud dependencies while enabling accurate and fault-tolerant inference. At its core is a lightweight Modifier Module deployed at the edge, which synthesizes predictions for failing devices by pooling peer CNN outputs and weights. This dynamic mechanism is trained via a novel fail-simulation technique, allowing it to mimic missing outputs in real time without model duplication or cloud fallback. We assess our methodology using MNIST and CIFAR-10 datasets under both homogeneous and heterogeneous data partitions, with up to five simultaneous device failures. The system achieves up to 1.5% absolute accuracy improvement, 30% error-rate reduction, and stable operation even with over 80% device dropout, exceeding ensemble, dropout, and federated baselines. Our approach combines statistically significant gains, low resource utilization (~15 KB per model), and real-time responsiveness, making it well suited for safety-critical IoT installations where cloud access is infeasible.
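The abstract describes the Modifier Module only at a high level. As a toy illustration of the pooling idea, the sketch below synthesizes a failed device's class distribution from the softmax outputs of surviving peers; the function name, the weighted-average rule, and the renormalization step are assumptions for illustration, not the authors' implementation.

```python
def pool_peer_predictions(peer_outputs, peer_weights):
    """Synthesize a missing device's prediction from surviving peers.

    peer_outputs: one softmax probability vector per surviving peer.
    peer_weights: per-peer reliability weights (assumed to be known).
    """
    total = sum(peer_weights)
    weights = [w / total for w in peer_weights]        # normalize weights
    n_classes = len(peer_outputs[0])
    pooled = [sum(w * out[c] for w, out in zip(weights, peer_outputs))
              for c in range(n_classes)]
    s = sum(pooled)
    return [p / s for p in pooled]                     # keep it a distribution
```

In the paper's setting, such a substitute prediction would stand in for a failed device's output so that the distributed ensemble can still vote.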

RevDate: 2025-11-27

Yin H, Ding Y, Long C, et al (2025)

Hybrid modeling and rapid prototyping technology based on the geomagic system.

Scientific reports, 15(1):42456.

The structural characteristics of gear parts are analyzed, and an appropriate point cloud processing workflow is formulated. Taking the Geomagic system as the computing platform and taking spur gears and spiral bevel gears as examples, forward and reverse hybrid modelling is carried out, and a solid model that meets the accuracy requirements is obtained, which verifies the effectiveness of the hybrid modelling. A 3D printing process is then carried out on the generated solid model, and the corresponding process parameters are set to obtain a feasible physical model. This hybrid modelling + rapid prototyping solution can effectively improve the design efficiency of products, reduce product development costs, and improve the competitiveness of enterprises.

RevDate: 2025-11-27
CmpDate: 2025-11-27

Shi Q, Oztekin A, Matthew G, et al (2025)

GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.

JMIR formative research, 9:e79038 pii:v9i1e79038.

BACKGROUND: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity.

OBJECTIVE: This study aimed to develop a secure, artificial intelligence-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements.

METHODS: GrantCheck (University of Massachusetts Chan Medical School) was built on a private Amazon Web Services virtual private cloud, integrating a rule-based natural language processing engine with large language models accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale.
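The hybrid pipeline's rule-based half detects flagged terms before the LLM proposes rewording. A minimal sketch of that detection step might look like the following; the term list and suggested replacements are invented placeholders, not GrantCheck's actual policy rules.

```python
import re

# Placeholder rules -- NOT GrantCheck's actual policy dictionary.
FLAGGED_TERMS = {
    "cutting-edge": "advanced",
    "paradigm shift": "substantial change",
}

def scan_for_flagged_terms(text):
    """Return (matched text, offset, suggested alternative) tuples."""
    hits = []
    for term, alternative in FLAGGED_TERMS.items():
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            hits.append((m.group(0), m.start(), alternative))
    return sorted(hits, key=lambda h: h[1])            # report in text order
```

In the full system, hits like these would then be handed to an LLM for context-aware rephrasing, with validation steps to guard against hallucinated rewrites.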

RESULTS: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.00, recall of 0.73, and an F1-score of 0.84-outperforming general-purpose models including GPT-4o (OpenAI; F1=0.43), Deepseek R1 (High-Flyer; F1=0.40), Llama 3.1 (Meta AI; F1=0.27), Gemini 2.5 Flash (Google; F1=0.58), and even Gemini 2.5 Pro (Google; F1=0.72). Usability testing among 25 faculty and staff yielded a mean System Usability Scale score of 85.9 (SD 13.4), indicating high user satisfaction and strong workflow integration.

CONCLUSIONS: GrantCheck demonstrates the feasibility of deploying institutionally hosted, artificial intelligence-driven systems to support compliant and researcher-friendly grant writing. Beyond administrative efficiency, such systems can indirectly safeguard public health research continuity by minimizing grant delays and funding losses caused by language-related policy changes. By maintaining compliance without suppressing scientific rigor or inclusivity, GrantCheck helps protect the pipeline of research that advances biomedical discovery, health equity, and patient outcomes. This capability is particularly relevant for proposals in sensitive domains-such as social determinants of health, behavioral medicine, and community-based research-that are most vulnerable to evolving policy restrictions. As a proof-of-concept development study, our implementation is tailored to one institution's policy environment and security infrastructure, and findings should be interpreted as preliminary rather than universally generalizable.

RevDate: 2025-11-27

Baseca CC, Dionísio R, Ribeiro F, et al (2025)

Edge-Computing Smart Irrigation Controller Using LoRaWAN and LSTM for Predictive Controlled Deficit Irrigation.

Sensors (Basel, Switzerland), 25(22): pii:s25227079.

Enhancing sustainability in agriculture has become a significant challenge: in the current context of climate change, particularly in Mediterranean countries, the amount of water available for irrigation is becoming increasingly limited. Automating irrigation processes using affordable sensors can help save irrigation water and produce almonds more sustainably. This work presents an IoT-enabled edge computing model for smart irrigation systems focused on precision agriculture. The model combines IoT sensors, hybrid machine learning algorithms, and edge computing to predict soil moisture and manage Controlled Deficit Irrigation (CDI) strategies in high-density almond orchards, applying reductions of 35% of ETc (crop evapotranspiration). By gathering and analyzing meteorological, soil moisture, and crop data, a soft ML (machine learning) model has been developed to enhance irrigation practices and identify crop anomalies in real time without cloud computing. This methodology has the potential to transform agricultural practices by enabling precise and efficient water management, even in remote locations lacking internet access. This study represents an initial step toward implementing ML algorithms for CDI irrigation strategies.
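As a back-of-the-envelope illustration of the CDI strategy described above, the sketch below computes an irrigation dose after applying the study's 35% ETc reduction; the function name and the simple rainfall offset are assumptions for illustration only, not the authors' controller logic.

```python
def cdi_irrigation_mm(etc_mm, effective_rain_mm, reduction=0.35):
    """Water (mm) to apply under controlled deficit irrigation.

    etc_mm: estimated crop evapotranspiration for the period.
    reduction: the CDI cut (35% of ETc, as in the study).
    """
    deficit_demand = etc_mm * (1.0 - reduction)     # reduced crop demand
    return max(deficit_demand - effective_rain_mm, 0.0)
```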

RevDate: 2025-11-27

Zhao X, Cao X, Ding M, et al (2025)

Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid.

Sensors (Basel, Switzerland), 25(22): pii:s25226872.

Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise maps. This paper presents a high-accuracy online mapping method that combines weight-matching LiDAR-IMU-GNSS odometry with an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching. Weight feature point matching enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, eventually leading to filtering out the object-level highly dynamic point clouds during the online mapping process. Compared to the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy on the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meanwhile, the proposed highly dynamic point cloud filtering algorithm exhibits better detection precision than Removert and ERASOR. Furthermore, a high-accuracy online map is built from a real-time dataset with comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to the field of high-accuracy online mapping, especially in the advanced filtering of highly dynamic objects.
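To make the pseudo-occupancy idea concrete, here is a toy sketch: a grid cell that is heavily occupied in the current frame but rarely occupied in the prior submap is flagged as dynamic. The cell size, the +1 smoothing term, and the ratio threshold are all invented parameters, not the paper's values.

```python
from collections import defaultdict

CELL = 0.5  # grid cell edge length in metres (assumed)

def cell_of(point):
    x, y, z = point
    return (int(x // CELL), int(y // CELL), int(z // CELL))

def dynamic_cells(frame_points, submap_points, ratio_threshold=4.0):
    """Flag cells whose current occupancy far exceeds prior occupancy."""
    frame, submap = defaultdict(int), defaultdict(int)
    for p in frame_points:
        frame[cell_of(p)] += 1
    for p in submap_points:
        submap[cell_of(p)] += 1
    # pseudo-occupancy ratio with +1 smoothing on the prior count
    return {c for c, n in frame.items()
            if n / (submap[c] + 1) >= ratio_threshold}
```

In the paper, flagged grids are further refined with object-level clustering (curved voxel clustering) before the points are removed from the map.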

RevDate: 2025-11-27
CmpDate: 2025-11-27

Almufareh MF, Humayun M, K Haseeb (2025)

Transforming Smart Healthcare Systems with AI-Driven Edge Computing for Distributed IoMT Networks.

Bioengineering (Basel, Switzerland), 12(11): pii:bioengineering12111232.

The Internet of Medical Things (IoMT) with edge computing provides opportunities for the rapid growth and development of smart healthcare systems. Such a system consists of wearable sensors, physical objects, and electronic devices that collect health data, perform local processing, and later forward it to a cloud platform for further analysis. Most existing approaches focus on diagnosing health conditions and reporting them to medical experts for personalized treatment. However, they overlook the need for dynamic approaches that address the unpredictable nature of the healthcare system, which relies on public infrastructure that all connected devices can access. Furthermore, the rapid processing of health data on constrained devices often leads to uneven load distribution and affects the system's responsiveness in critical circumstances. Our research study proposes a model based on AI-driven and edge computing technologies to provide a lightweight and innovative healthcare system. It enhances the learning capabilities of the system and efficiently detects network anomalies in a distributed IoMT network, without incurring additional overhead on a bounded system. The proposed model is verified and tested through simulations using synthetic data, and the obtained results demonstrate its efficacy, improving energy consumption by 53%, latency by 46%, packet loss rate by 52%, network throughput by 56%, and overhead by 48% compared with related solutions.

RevDate: 2025-11-26

Yu X, Mi J, Tang L, et al (2025)

Dynamic multi objective task scheduling in cloud computing using reinforcement learning for energy and cost optimization.

Scientific reports pii:10.1038/s41598-025-29280-z [Epub ahead of print].

Efficient task scheduling in cloud computing is crucial for managing dynamic workloads while balancing performance, energy efficiency, and operational costs. This paper introduces a novel Reinforcement Learning-Driven Multi-Objective Task Scheduling (RL-MOTS) framework that leverages a Deep Q-Network (DQN) to dynamically allocate tasks across virtual machines. By integrating multi-objective optimization, RL-MOTS simultaneously minimizes energy consumption, reduces costs, and ensures Quality of Service (QoS) under varying workload conditions. The framework employs a reward function that adapts to real-time resource utilization, task deadlines, and energy metrics, enabling robust performance in heterogeneous cloud environments. Evaluations conducted using a simulated cloud platform demonstrate that RL-MOTS achieves up to 27% reduction in energy consumption and 18% improvement in cost efficiency compared to state-of-the-art heuristic and metaheuristic methods, while meeting stringent deadline constraints. Its adaptability to hybrid cloud-edge architectures makes RL-MOTS a forward-looking solution for next-generation distributed computing systems.
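The multi-objective reward described above can be sketched as a weighted combination of cost terms plus a deadline bonus. The weights, the assumption that energy and cost are pre-normalized, and the met/missed QoS term are illustrative choices, not the RL-MOTS authors' actual reward function.

```python
def scheduling_reward(energy, cost, latency, deadline,
                      w_energy=0.4, w_cost=0.3, w_qos=0.3):
    """Toy multi-objective reward for a DQN task scheduler.

    energy and cost are assumed pre-normalized to [0, 1]; QoS is a
    simple bonus/penalty for meeting or missing the task deadline.
    """
    qos_bonus = 1.0 if latency <= deadline else -1.0
    return -(w_energy * energy + w_cost * cost) + w_qos * qos_bonus
```

A DQN agent trained against a reward of this shape would trade energy and cost against the deadline bonus when choosing which virtual machine receives each task.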

RevDate: 2025-11-26

Houmenou CT, Sokhna C, Fenollar F, et al (2025)

Advancements and challenges in bioinformatics tools for microbial genomics in the last decade: Toward the smart integration of bioinformatics tools, digital resources, and emerging technologies for the analysis of complex biological data.

Infection, genetics and evolution : journal of molecular epidemiology and evolutionary genetics in infectious diseases pii:S1567-1348(25)00148-0 [Epub ahead of print].

Over the past decade, microbial genomics has been transformed by advances in sequencing technologies and bioinformatics, enabling the transition from targeted gene markers to complete genome assemblies and ecological scale metagenomic surveys. This review presents a comprehensive overview of the bioinformatics pipelines that structure this field, from sample preparation, PCR amplification, and next-generation sequencing (NGS) to read preprocessing, genome assembly, polishing, structural and functional annotation, and submission to public databases. We highlight the major tools that have become standards at each stage, including FastQC, SPAdes, Prokka, Bakta, CARD, GTDB-Tk, QIIME 2, and Kraken2, while also emphasizing recent innovations such as hybrid assemblers, ontology-driven annotation frameworks, and automated workflows (nf-core, Bactopia). Applications extend across microbiology, from antimicrobial resistance surveillance and phylogenetic classification to ecological studies, exemplified here by three case studies: termite gut microbiota profiling by 16S metabarcoding, the description of new Bartonella species from bats, and the genomic characterization of rare Salmonella enterica serovars from primates. Despite these advances, persistent challenges remain, including incomplete and biased reference databases, computational bottlenecks, and economic disparities in sequencing and storage capacities. In response, international initiatives increasingly promote open, interoperable, and reusable bioinformatics infrastructures. Conforming to the Findable, Accessible, Interoperable, Reusable (FAIR) principles and global frameworks such as Global Alliance for Genomics and Health (GA4GH), these efforts are driving greater standardization, transparency, and data sharing across the microbial genomics community. 
Future perspectives point toward the integration of artificial intelligence, long-read and telomere-to-telomere (T2T) sequencing, cloud-native infrastructures, and even quantum computing, paving the way for a predictive, reproducible, and globally inclusive microbial genomics.

RevDate: 2025-11-26
CmpDate: 2025-11-26

Molnár T, Bolla B, Szabó O, et al (2025)

Sentinel-2-Based Forest Health Survey of ICP Forests Level I and II Plots in Hungary.

Journal of imaging, 11(11): pii:jimaging11110413.

Forest damage has been increasingly recorded over the past decade in both Europe and Hungary, primarily due to prolonged droughts, causing a decline in forest health. In the framework of ICP Forests, forest damage has been monitored for decades; however, this monitoring is labour-intensive and time-consuming. Satellite-based remote sensing offers a rapid and efficient method for assessing large-scale damage events, complementing the ground-based ICP Forests datasets. This study utilised cloud computing and Sentinel-2 satellite imagery to monitor forest health and detect anomalies. Standardised NDVI (Z NDVI) maps were produced for the period from 2017 to 2023 to identify disturbances in the forest. The research focused on seven active ICP Forests Level II and 78 Level I plots in Hungary. Z NDVI values were divided into five categories based on damage severity, and there was agreement between Level II field data and satellite imagery. In 2017, severe damage was caused by late frost and wind; however, the forest recovered by 2018. Another decline was observed in 2021 due to wind and in 2022 due to drought. Data from the ICP Forests Level I plots, which represent forest condition in Hungary, indicated that 80% of the monitored stands were damaged, with 30% suffering moderate damage and 15% experiencing severe damage. Z NDVI classifications aligned with the field data, showing widespread forest damage across the country.
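A standardized NDVI (Z NDVI) is an NDVI z-score against a reference period, binned into severity classes. The sketch below shows that computation; the five class breakpoints are illustrative assumptions, not the study's exact thresholds.

```python
from statistics import mean, stdev

# Illustrative breakpoints -- the study's five-class thresholds may differ.
DAMAGE_CLASSES = [
    (-float("inf"), -2.0, "severe damage"),
    (-2.0, -1.5, "moderate damage"),
    (-1.5, -1.0, "slight damage"),
    (-1.0, 1.0, "stable"),
    (1.0, float("inf"), "improved"),
]

def z_ndvi(current_ndvi, reference_series):
    """Standardize current NDVI against a multi-year reference series."""
    return (current_ndvi - mean(reference_series)) / stdev(reference_series)

def classify_damage(z):
    """Map a Z NDVI value to its damage-severity class."""
    for lo, hi, label in DAMAGE_CLASSES:
        if lo <= z < hi:
            return label
```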

RevDate: 2025-11-25

Zhang W, H Ou (2025)

Reinforcement learning based multi objective task scheduling for energy efficient and cost effective cloud edge computing.

Scientific reports, 15(1):41716.

The rapid proliferation of Internet of Things (IoT) devices and latency-sensitive applications has amplified the need for efficient task scheduling in hybrid cloud-edge environments. Traditional heuristic and metaheuristic algorithms often fall short in addressing the dynamic nature of workloads and the conflicting objectives of performance, energy efficiency, and cost-effectiveness. To overcome these challenges, this study introduces Reinforcement Learning-Based Multi-Objective Task Scheduling (RL-MOTS), a framework leveraging Deep Q-Networks (DQNs) for intelligent and adaptive resource allocation. The proposed model formulates scheduling as a Markov Decision Process, incorporating a priority-aware dynamic queueing mechanism and a multi-objective reward function that balances task latency, energy consumption, and operational costs. Additionally, the framework employs a state-reward tensor to capture trade-offs among objectives, enabling real-time decision-making across heterogeneous cloud and edge nodes. Comprehensive simulations using CloudSim validate the robustness of RL-MOTS under varying workload conditions. Compared to baseline strategies such as FCFS, Min-Min, and multi-objective heuristic models, RL-MOTS achieves up to 28% reduction in energy consumption, 20% improvement in cost efficiency, and significant reductions in makespan and deadline violations, while maintaining strict Quality of Service (QoS) requirements. The framework's adaptability to preemptive and non-preemptive scheduling further enhances its resilience and scalability. These findings establish RL-MOTS as a forward-looking solution for sustainable, cost-efficient, and performance-oriented computing in next-generation distributed systems. Future research will focus on integrating transfer learning and federated learning to increase scalability and privacy in large, decentralized environments, including those applicable to the medical industry.

RevDate: 2025-11-25

Samriya JK, Kumar A, Bhansali A, et al (2025)

Enhancing IIoT security through blockchain-enabled workload analysis in fog computing environments.

Scientific reports pii:10.1038/s41598-025-27694-3 [Epub ahead of print].

Robots and software are utilized in industrial automation to run machinery and processes in a variety of sectors. Numerous applications incorporate machine learning, the Internet of Things (IoT), and other methods to offer intelligent features that enhance user experience. Businesses and individuals can successfully accomplish both commercial and noncommercial requirements with the help of such technologies. Due to the high risk and inefficiency of traditional procedures, organisations are increasingly expected to automate industrial processes. The aim of this research is to propose a novel workload-analysis technique for fog networks and a blockchain model for improving security in IIoT applications. Here, malicious activity in the IIoT network is analysed using a blockchain reinforcement Gaussian neural network. Workload analysis for the manufacturing industry is then carried out using a fog-cloud-based virtual machine multilayer perceptron model. The experimental analysis is carried out on various security datasets from the manufacturing industry in terms of latency, QoS, accuracy, reliability, and data integrity.

RevDate: 2025-11-24
CmpDate: 2025-11-24

Sunderland N, Hite D, Smadbeck P, et al (2025)

GWASHub: An Automated Cloud-Based Platform for Genome-Wide Association Study Meta-Analysis.

medRxiv : the preprint server for health sciences pii:2025.10.21.25338463.

Genome-wide association studies (GWAS) often aggregate data from millions of participants across multiple cohorts using meta-analysis to maximise power for genetic discovery. The increase in availability of genomic biobanks, together with a growing focus on phenotypic subgroups, genetic diversity, and sex-stratified analyses, has led GWAS meta-analyses to routinely produce hundreds of summary statistic files accompanied by detailed meta-data. Scalable infrastructures for data handling, quality control (QC), and meta-analysis workflows are essential to prevent errors, ensure reproducibility, and reduce the burden on researchers, allowing them to focus on downstream research and clinical translation. To address this need, we developed GWASHub, a secure cloud-based platform designed for the curation, processing and meta-analysis of GWAS summary statistics. GWASHub features i) private and secure project spaces, ii) automated file harmonisation and data validation, iii) GWAS meta-data capture, iv) customisable variant QC, v) GWAS meta-analysis, vi) analysis reporting and visualisation, and vii) results download. Users interact with the portal via an intuitive web interface built on Nuxt.js, a high-performance JavaScript framework. Data is securely managed through an Amazon Web Services (AWS) MySQL database and S3 block storage. Analysis jobs are distributed to AWS compute resources in a scalable fashion. The QC dashboard presents tabular and graphical QC outputs allowing manual review of individual datasets. Those passing QC are made available to the meta-analysis module. Individual datasets and meta-analysis results are available for download by project users with appropriate access permissions. In GWASHub, a "project" serves as a virtual workspace spanning an entire consortium, allowing individuals with different roles, such as data contributors (users) and project coordinators (main analysts), to collaborate securely under a unified framework. 
GWASHub has a flexible architecture to allow for ongoing development and incorporation of alternative quality control or meta-analysis procedures, to meet the specific needs of researchers. GWASHub was developed as a joint initiative by the HERMES Consortium and the Cardiovascular Knowledge Portal, and access to the platform is free and available upon request. GWASHub addresses a critical need in the genetics research community by providing a scalable, secure, and user-friendly platform for managing the complexity of large-scale GWAS meta-analyses. As the volume and diversity of GWAS data continue to grow, platforms like GWASHub may help to accelerate insights into the genetic architecture of complex traits.

RevDate: 2025-11-24
CmpDate: 2025-11-24

Chu YC, Chen YC, Hsu CY, et al (2025)

Hybrid artificial intelligence frameworks for otoscopic diagnosis: Integrating convolutional neural networks and large language models toward real-time mobile health.

Digital health, 11:20552076251395449.

BACKGROUND: Otitis media remains a significant global health concern, particularly in resource-limited settings where timely diagnosis is challenging. Artificial intelligence (AI) offers promising solutions to enhance diagnostic accuracy in mobile health applications.

OBJECTIVE: This study introduces a hybrid AI framework that integrates convolutional neural networks (CNNs) for image classification with large language models (LLMs) for clinical reasoning, enabling real-time otoscopic diagnosis.

METHODS: We developed a dual-path system combining CNN-based feature extraction with LLM-supported interpretation. The framework was optimized for mobile deployment, with lightweight models operating on-device and advanced reasoning performed via secure cloud APIs. A dataset of 10,465 otoendoscopic images (expanded from 2820 original clinical images through data augmentation) across 10 middle-ear conditions was used for training and validation. Diagnostic performance was benchmarked against clinicians of varying expertise.

RESULTS: The hybrid CNN-LLM system achieved an overall diagnostic accuracy of 97.6%, demonstrating the synergistic benefit of combining CNN-driven visual analysis with LLM-based clinical reasoning. The system delivered sub-200 ms feedback and achieved specialist-level performance in identifying common ear pathologies.

CONCLUSIONS: This hybrid AI framework substantially improves diagnostic precision and responsiveness in otoscopic evaluation. Its mobile-friendly design supports scalable deployment in telemedicine and primary care, offering a practical solution to enhance ear disease diagnosis in underserved regions.

RevDate: 2025-11-23

Champendal M, Lokaj B, de Gevigney VD, et al (2025)

Exploring environmental sustainability of artificial intelligence in radiology: A scoping review.

European journal of radiology, 194:112558 pii:S0720-048X(25)00644-8 [Epub ahead of print].

OBJECTIVE: Artificial intelligence (AI) is increasingly used in radiology, but its environmental implications have not been sufficiently studied, so far. This study aims to synthesize existing literature on the environmental sustainability of AI in radiology and highlights strategies proposed to mitigate its impact.

METHODS: A scoping review was conducted following the Joanna Briggs Institute methodology. Searches across MEDLINE, Embase, CINAHL, and Web of Science focused on English and French publications from 2014 to 2024, targeting AI, environmental sustainability, and medical imaging. Eligible studies addressed environmental sustainability of AI in medical imaging. Conference abstracts, non-radiological or non-human studies, and unavailable full texts were excluded. Two independent reviewers assessed titles, abstracts, and full texts, while four reviewers conducted data extraction and analysis.

RESULTS: The search identified 3,723 results, of which 13 met inclusion criteria: nine research articles and four reviews. Four themes emerged: energy consumption (n = 10), carbon footprint (n = 6), computational resources (n = 9), and water consumption (n = 2). Reported metrics included CO2-equivalent emissions, training time, power use effectiveness, equivalent distance travelled by car, energy demands, and water consumption. Strategies to enhance sustainability included lightweight model architectures, quantization and pruning, efficient optimizers, and early stopping. Broader recommendations encompassed integrating carbon and energy metrics into AI evaluation, transitioning to cloud computing, and developing an eco-label for radiology AI systems.

CONCLUSIONS: Research on sustainable AI in radiology remains scarce but is rapidly growing. This review highlights key metrics and strategies to guide future research and practice toward more transparent, consistent, and environmentally responsible AI development in radiology.

ABBREVIATIONS: AI, Artificial intelligence; CNN, Convolutional neural networks; CT, Computed tomography; CPU, Central Processing Unit; DL, Deep learning; FLOP, Floating-point operation; GHG, Greenhouses gas; GPU, Graphics Processing Unit; LCA, Life Cycle Assessment; LLM, Large Language Model; MeSH, Medical Subject Headings; ML, Machine learning; MRI, Magnetic resonance imaging; NLP, Natural language processing; PUE, Power Usage Effectiveness; TPU, Tensor Processing Unit; USA, United States of America; ViT, Vision Transformer; WUE, Water Usage Effectiveness.

RevDate: 2025-11-22

Naeem AB, Senapati B, Rasheed J, et al (2025)

An intelligent job scheduling and real-time resource optimization for edge-cloud continuum in next generation networks.

Scientific reports pii:10.1038/s41598-025-25452-z [Epub ahead of print].

While cloud-edge infrastructures demand flexible and sophisticated resource management, 6G networks necessitate very low latency, high dependability, and broad connectivity. Cloud computing's scalability and agility enable it to prioritize service delivery at various levels of detail while serving billions of users. However, due to resource inefficiencies, virtual machine (VM) issues, response delays, and deadline violations, real-time task scheduling is challenging in these settings. This study develops an AI-powered task scheduling system based on the newly published Unfair Semi-Greedy (USG) algorithm, the Earliest Deadline First (EDF) algorithm, and the Enhanced Deadline Zero-Laxity (EDZL) algorithm. The system chooses the best scheduler based on load and task criticality by combining reinforcement learning adaptive logic with a dynamic resource table. Over 10,000 soft real-time task sets were utilized to evaluate the framework across various cloud-edge scenarios. Compared to standalone EDF and EDZL solutions, the proposed hybrid method reduced average response times by up to 26.3% and deadline violations by 41.7%. The USG component achieved 98.6% task schedulability under saturated edge settings, even with significant workload variation. These findings suggest that the method is well suited to applications that require rapid response. This architecture is especially appropriate for autonomous systems, remote healthcare, and immersive media, all of which require low latency and dependability, and it may be extended to AI-native 6G networks.
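Of the three schedulers combined above, EDF is the simplest to sketch: always run the ready task with the nearest deadline. The sketch below orders a ready queue by deadline; USG and EDZL are more involved and are not reproduced here.

```python
import heapq

def edf_order(tasks):
    """Return task names in Earliest-Deadline-First execution order.

    tasks: iterable of (name, absolute_deadline) pairs.
    """
    heap = [(deadline, name) for name, deadline in tasks]
    heapq.heapify(heap)                         # min-heap keyed on deadline
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```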

RevDate: 2025-11-22

Bertoni D, Tsenkov M, Magana P, et al (2025)

AlphaFold Protein Structure Database 2025: a redesigned interface and updated structural coverage.

Nucleic acids research pii:8340156 [Epub ahead of print].

The AlphaFold Protein Structure Database (AFDB; https://alphafold.ebi.ac.uk), developed by EMBL-EBI and Google DeepMind, provides open access to hundreds of millions of high-accuracy protein structure predictions, transforming research in structural biology and the wider life sciences. Since its launch, AFDB has become a widely used bioinformatics resource, integrated into major databases, visualization platforms, and analysis pipelines. Here, we report the update of the database to align with the UniProt 2025_03 release, along with a comprehensive redesign of the entry page to enhance usability, accessibility, and structural interpretation. The new design integrates annotations directly with an interactive 3D viewer and introduces dedicated domains and summary tabs. Structural coverage has also been updated to include isoforms plus underlying multiple sequence alignments. Data are available through the website, FTP, Google Cloud, and updated APIs. Together, these advances reinforce AFDB as a sustainable resource for exploring protein sequence-structure relationships.

RevDate: 2025-11-21

RahimiZadeh K, Beheshti A, Javadi B, et al (2025)

An integrated queuing and certainty factor theory model for efficient edge computing in remote patient monitoring systems.

Scientific reports pii:10.1038/s41598-025-28703-1 [Epub ahead of print].

Remote Patient Monitoring Systems (RPMS) require efficient resource management to prioritize life-critical data in latency-sensitive healthcare environments. This research introduces an Integrated Queuing and Certainty Factor Theory (IQCT) model aimed at optimizing bandwidth allocation and task scheduling within fog-edge-cloud architectures. IQCT prioritizes patient requests in real time by classifying them into emergency, warning, and normal categories using certainty factor (CF)-based urgency assessment. Simulated on Raspberry Pi fog nodes with the UCI Heart Disease dataset, its performance was benchmarked against FCFS, PQ, and WFQ using metrics such as latency, energy consumption, and response time under varying workloads. IQCT reduced latency for emergency requests by 54.5% and improved network efficiency by 30.08% compared to FCFS. It also lowered response and execution times by 49.5% and 36%, respectively, and decreased fog-layer energy consumption by 30.8%. Scalability tests confirmed stable quality of service (QoS) under peak loads, demonstrating adaptability to dynamic demand. The adoption of PQ and CF theory can thus yield more efficient and optimized performance in RPMS.
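Class-based prioritization of the emergency/warning/normal kind described above can be sketched with a plain priority queue. The category names come from the abstract; everything else (class name, API) is an illustrative assumption, and the certainty-factor urgency scoring is omitted.

```python
import heapq
from itertools import count

# Illustrative sketch, not the paper's IQCT model.
PRIORITY = {"emergency": 0, "warning": 1, "normal": 2}

class TriageQueue:
    """Serve requests strictly by urgency class, FIFO within a class."""

    def __init__(self):
        self._heap = []
        self._seq = count()          # tie-breaker preserving arrival order

    def push(self, request, category):
        heapq.heappush(self._heap, (PRIORITY[category], next(self._seq), request))

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```

An emergency request pushed last is still popped before any waiting warning or normal request.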

RevDate: 2025-11-21

Chennam KK, V UM, Aluvalu R, et al (2025)

Load balancing for cloud computing using optimized cluster based federated learning.

Scientific reports, 15(1):41328.

Task scheduling and load balancing in cloud computing represent challenging NP-hard optimization problems that often result in inefficient resource utilization, elevated energy consumption, and prolonged execution times. This study introduces a novel Cluster-based Federated Learning (FL) framework that addresses system heterogeneity by clustering virtual machines (VMs) with similar characteristics via unsupervised learning, enabling dynamic and efficient task allocation. The proposed method leverages VM capabilities and a derivative-based objective function to optimize scheduling. We benchmark the approach against established metaheuristic algorithms including Whale Optimization Algorithm (WOA), Butterfly Optimization (BFO), Mayfly Optimization (MFO), and Fire Hawk Optimization (FHO). Evaluated using makespan, idle time, and degree of imbalance, the Cluster-based FL model coupled with the COA algorithm consistently outperforms existing methods, achieving up to a 10% reduction in makespan, a 15% decrease in idle time, and a significant improvement in load balancing across VMs. These results highlight the efficacy of integrating clustering within federated learning paradigms to deliver scalable, adaptive, and resilient cloud resource management solutions.
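The evaluation metrics named above (makespan, idle time, degree of imbalance) have standard textbook definitions; a small sketch under the usual conventions (the degree-of-imbalance formula shown is the commonly used one, not necessarily the exact variant in the paper):

```python
# Illustrative definitions, not the paper's evaluation code.
def makespan(finish_times):
    """Completion time of the last VM to finish."""
    return max(finish_times)

def idle_time(finish_times):
    """Total time VMs sit idle waiting for the slowest VM."""
    end = max(finish_times)
    return sum(end - t for t in finish_times)

def degree_of_imbalance(loads):
    """DI = (T_max - T_min) / T_avg over per-VM loads; 0 means perfect balance."""
    avg = sum(loads) / len(loads)
    return (max(loads) - min(loads)) / avg
```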

RevDate: 2025-11-21
CmpDate: 2025-11-21

Alohali MA, Alanazi F, Alsahafi YA, et al (2025)

Intelligent feature fusion with dynamic graph convolutional recurrent network for robust object detection to assist individuals with disabilities in a smart IoT edge-cloud environment.

Scientific reports, 15(1):41228.

Smart Internet of Things (IoT)-edge-cloud computing defines intelligent systems in which IoT devices generate data at the network's edge, which is processed and analyzed on local edge devices before transmission to the cloud for deeper insights and storage. Visual impairment, such as blindness, has a profound effect on a person's psychological and cognitive functions, and assistive models can help mitigate these adverse effects and improve the quality of life of individuals who are blind. Much current research concentrates on mobility, navigation, and object detection (OD) in smart devices and other assistive technologies for visually impaired people. OD is a core computer vision task that involves locating and categorizing objects within an image, enabling applications such as augmented reality and image retrieval. Recently, deep learning (DL) models have emerged as an excellent technique for mining feature representations from data, driving significant advances in OD; such models can be trained on many images of the objects most relevant to visually impaired individuals. This paper presents an intelligent Feature Fusion with Dynamic Graph Convolutional Recurrent Network for Robust Object Detection (FFDGCRN-ROD) approach to assist individuals with disabilities. The aim is an intelligent OD framework for individuals with disabilities in a smart IoT edge-cloud environment, enabling monitoring and assistive decision-making. First, the image pre-processing phase involves resizing, normalization, and image enhancement to remove noise and improve image quality. For the OD process, the FFDGCRN-ROD approach employs Faster R-CNN to automatically identify and locate targets within images, while three fusion models, CapsNet, SqueezeNet, and Inceptionv3, perform feature extraction. Finally, the FFDGCRN-ROD model applies a dynamic adaptive graph convolutional recurrent network (DA-GCRN) to detect and classify objects for visually impaired people accurately. Experimental validation of the FFDGCRN-ROD methodology on the Indoor OD dataset demonstrated a superior accuracy of 99.65% over existing techniques.

RevDate: 2025-11-21

Chou CL, Su CK, Cruz SKD, et al (2025)

Using artificial intelligence to automate the analysis of psoriasis severity: A pilot study.

Dermatology (Basel, Switzerland) pii:000549640 [Epub ahead of print].

INTRODUCTION: The Psoriasis Area and Severity Index (PASI) score is widely used to assess psoriasis severity; however, manual PASI scoring is susceptible to environmental variability and subjective interpretation. This study leverages artificial intelligence to improve the consistency and objectivity of psoriasis severity classification based on features extracted from 2D clinical images.

METHODS: This study employed the YOLOv8 deep learning model to classify psoriatic lesions according to the severity of erythema, thickness, and scaling, key subcomponents of the PASI scoring system. Severity was graded as none (0), mild (1), moderate (2), severe (3), or very severe (4). Model training and analysis were conducted in a cloud-based environment (Google Colab) using three different datasets. Stratified k-fold cross-validation was employed to ensure robustness by preserving the distribution of PASI scores across folds. Model performance was assessed using a confusion matrix and accuracy metrics.
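Stratified k-fold cross-validation, as used above, keeps each fold's class mix close to the full dataset's. A minimal sketch (round-robin assignment within each class; the function name is an assumption, and in practice a library routine such as scikit-learn's StratifiedKFold would be used):

```python
from collections import defaultdict

# Illustrative sketch of stratification, not the study's pipeline.
def stratified_folds(labels, k):
    """Split sample indices into k folds, preserving class proportions
    by dealing each class's indices round-robin across folds."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for j, idx in enumerate(indices):
            folds[j % k].append(idx)
    return folds
```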

RESULTS: In experiments, the YOLOv8 model proved highly effective in classifying psoriasis images based on PASI scores. Stratified k-fold cross-validation was shown to enhance model reliability across diverse datasets.

CONCLUSIONS: This study represents a significant advancement in the application of AI to the automated classification of lesion severity based on erythema, thickness, and scaling, key subcomponents of PASI.

RevDate: 2025-11-21
CmpDate: 2025-11-21

Guerrero P, Ernebjerg M, Holst T, et al (2025)

The AIR·MS data platform for artificial intelligence in healthcare.

JAMIA open, 8(6):ooaf145.

OBJECTIVE: To present the Artificial Intelligence-Ready Mount Sinai (AIR·MS) platform, which provides unified access to diverse clinical datasets from the Mount Sinai Health System (MSHS) along with computational infrastructure for AI-driven research, and to demonstrate its utility with 3 research projects.

MATERIALS AND METHODS: AIR·MS integrates structured and unstructured data from multiple MSHS sources via the OMOP Common Data Model on an in-memory columnar database. Unstructured pathology and radiology data are integrated through metadata extracted from and linking the raw source data. Data access and analytics are supported from the HIPAA-compliant Azure cloud and the on-premises Minerva High-Performance Computing (HPC) environment.

RESULTS: AIR·MS provides access to structured electronic health records, clinical notes, and metadata for pathology and radiology images, covering over 12M patients. The platform enables interactive cohort building and AI model training. Experimentation with complex cohort queries confirms high system performance. Use cases demonstrate risk-factor discovery and federated cardiovascular risk modeling.

DISCUSSION: AIR·MS demonstrates how clinical data and infrastructure can be integrated to support large-scale AI-based research. The platform's performance, scale, and cross-institutional design position it as a model for similar initiatives.

CONCLUSION: AIR·MS provides a scalable, secure, and collaborative platform for AI-enabled healthcare research on multimodal clinical data.

RevDate: 2025-11-20

Ahmad Z, Seo JT, S Jeon (2025)

Cloud edge enabled stacked ensemble learning framework with meta model for situation aware maritime traffic monitoring and control systems.

Scientific reports, 15(1):41099.

In the last few years, increasing vessel density, the variety of vessel types, and the growing need for real-time data have made maritime traffic management significantly more difficult. This study presents a situation-aware framework based on stacked ensemble learning and cloud-edge hybridization, aimed at enhancing maritime traffic monitoring and control systems. The approach combines stacked ensemble learning with a meta-model for vessel type classification and employs a cloud-edge architecture to strike a balance between computational efficiency and delay minimization. The edge layer handles real-time inference and situational analysis on the go, while the cloud layer handles model training and the amalgamation of data from various sources. Our evaluation used a comprehensive maritime vessel dataset and compared performance with state-of-the-art deep learning models (VGG16, VGG19, DenseNet121, and ResNet50). Our experiments show that stacked ensemble learning with a meta-model significantly outperforms the traditional models, achieving an overall accuracy of 0.98, macro average precision of 0.97, macro average recall of 0.98, and an F1-score of 0.98. Both ROC and PR curves also demonstrate excellent AUC values, approaching 1.00 for almost all vessel categories, indicating strong performance in distinguishing vessel types. Test predictions are outstandingly accurate, with confidence in vessel classification exceeding 99% in most cases. These results show that the proposed method offers the robustness, scalability, and effectiveness needed for real-time maritime surveillance, naval defense systems, and autonomous vessel traffic control in industrial IoT environments.
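Stacked ensembles like the one above feed base-model predictions to a meta-model as features. A deliberately tiny sketch with toy one-dimensional learners: every name and learner here is an illustrative assumption; the paper's base models are deep CNNs, and its meta-model is far richer than this accuracy-weighted vote.

```python
# Illustrative stacking recipe with toy learners, not the paper's models.
def fit_threshold(X, y):
    """Toy base learner: best single cut point on a 1-D feature."""
    best = max(set(X), key=lambda t: sum((x >= t) == label for x, label in zip(X, y)))
    return lambda x: int(x >= best)

def fit_midpoint(X, y):
    """Toy base learner: cut at the midpoint of the two class means."""
    mean = lambda vals: sum(vals) / len(vals)
    mid = (mean([x for x, l in zip(X, y) if l == 0]) +
           mean([x for x, l in zip(X, y) if l == 1])) / 2
    return lambda x: int(x >= mid)

def fit_stack(base_fits, X_train, y_train, X_hold, y_hold):
    """Stacking: train base learners on the training split, then combine
    them using their hold-out predictions (here, accuracy-weighted voting)."""
    models = [fit(X_train, y_train) for fit in base_fits]
    weights = [sum(m(x) == label for x, label in zip(X_hold, y_hold)) / len(y_hold)
               for m in models]
    half = sum(weights) / 2
    return lambda x: int(sum(w * m(x) for w, m in zip(weights, models)) >= half)
```

The key design point, shared with the paper's setup, is that the combiner is fit on data the base learners did not train on, so it learns which base model to trust.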

RevDate: 2025-11-20

Niu Z, Bruyère T, Manthey D, et al (2025)

NimbusImage: a cloud-computing platform for image analysis.

RevDate: 2025-11-20

Swathi K, Durga P, Prasad KV, et al (2025)

Secure blockchain integrated deep learning framework for federated risk-adaptive and privacy-preserving IoT edge intelligence sets.

Scientific reports, 15(1):41133.

An enormous demand for secure, scalable, intelligent edge computing frameworks has emerged as the exponentially increasing number of Internet of Things (IoT) devices becomes a substrate of modern digital infrastructure. Edge nodes distributed across heterogeneous environments serve as primary interfaces for sensing, computation, and actuation, and their physical deployment in unattended settings makes them targets for resource manipulation. In widely adopted IoT architectures, the traditional edge exposes an unbounded attack surface: anything that can remotely connect to an edge node from the cloud domain is a potential threat to its centralized knowledge. Existing strategies either ignore the dynamic risk context of edge nodes or fail to achieve a reasonable trade-off between security and resource constraints, degrading the robustness and trustworthiness of solutions intended for real-life scenarios. To address these gaps, this work presents a novel Blockchain Integrated Deep Learning Framework for secure IoT edge computing, introducing a hybrid architecture in which the transparency of blockchain meets the flexibility of deep learning. The proposed system incorporates five specialized components. Blockchain-Orchestrated Federated Curriculum Learning (BOFCL) ensures risk-prioritized training using threat indices derived from blockchain logs; this adaptive sequencing enhances responsiveness to high-risk edge scenarios. The Zero-Knowledge Proof Enabled Secure Inference Engine (ZK-SIE) provides verifiable privacy-preserving inference, ensuring model integrity without exposing input data or model internals. The Blockchain Indexed Adversarial Attack Simulator (BI-AAS) tests models in edge environments against attack scenarios drawn from common adversarial profiles, facilitating defensive retraining. Energy-Aware Lightweight Consensus with Adaptive Synchronization (ELCAS) avoids overhead by selecting energy-efficient participants for global model synchronization in constrained environments. The Trust Indexed Model Provenance and Deployment Ledger (TIMPDL) ensures transparent model lineage tracking and deployability through composite trust scores computed from data quality, node reputation, and validation metrics. Altogether, the framework combines data integrity, adversarial robustness, and trust-aware deployment while reducing training latency, synchronization energy, and privacy leakage. It is a foundational advancement supporting secure decentralized edge intelligence for next-generation IoT ecosystems.

RevDate: 2025-11-20
CmpDate: 2025-11-20

Silva Araujo SDC, Ong Michael GK, Deshpande UU, et al (2025)

ResNet-18 based multi-task visual inference and adaptive control for an edge-deployed autonomous robot.

Frontiers in robotics and AI, 12:1680285.

Current industrial robots deployed in small and medium-sized businesses (SMEs) are too complex, expensive, or dependent on external computing resources. To bridge this gap, we introduce an autonomous logistics robot that combines adaptive control and visual perception on a small edge computing platform. An NVIDIA Jetson Nano runs a modified ResNet-18 model that concurrently executes three tasks: object-handling zone recognition, obstacle detection, and path tracking. A lightweight rack-and-pinion mechanism enables payload lifting of up to 2 kg without external assistance. Experimental evaluation in semi-structured warehouse settings demonstrated a path tracking accuracy of 92%, obstacle avoidance success of 88%, and object handling success of 90%, with a maximum perception-to-action latency of 150 ms. The system maintains stable operation for up to 3 hours on a single charge. Unlike approaches that focus on single functions or require cloud support, our design integrates navigation, perception, and mechanical handling into a low-power, standalone solution, highlighting its potential as a practical and cost-effective automation platform for SMEs.

RevDate: 2025-11-19

Park H, Pritchard BP, LP Wang (2025)

High-Throughput Approach for Minimum Energy Pathway Search Using the Nudged Elastic Band Method with Efficient Data Handling and Parallel Computing.

Journal of chemical theory and computation [Epub ahead of print].

The Nudged Elastic Band (NEB) method is critical for mapping chemical reaction pathways but is a computationally and data-intensive workflow involving a large number of single-point (SP) calculations. Additionally, due to the complexity of the NEB method, understanding how variations in the protocol (algorithm, levels of theory, and parameters) impact performance is challenging. To address these issues, we developed and tested a high-throughput approach on the QCArchive cloud-based infrastructure, utilizing two open-source projects, QCFractal and geomeTRIC, to enhance the NEB efficiency. This approach parallelizes SP energy and gradient calculations and stores results in a database, facilitating data organization and retrieval. To evaluate its performance, we optimized four elementary reactions from the RGD1 data set of organic reactions using the B3LYP/6-31G(d), B3LYP-D3/def2-TZVP methods, and the PM7 semiempirical model. We tested 72 different combinations of chain optimization parameters and three types of band forces: conventional NEB, a hybrid band that projects out the perpendicular energy gradient as in NEB but retains the full spring force, and a plain band that does not project any forces. The highest-energy images of the optimized chains were used as the initial structures for transition state (TS) optimization to locate the first-order saddle points. The NEB and TS steps may be performed at different levels of theory, allowing us to perform NEB calculations with either DFT or PM7, followed by TS optimizations at the DFT level. The final TS structures were compared with reference geometries from the data set, which were further optimized at the corresponding level of theory. The convergence rates of TS and NEB are reported to demonstrate how the parameters influence the performance. 
Next, we performed NEB calculations on 118 diverse chemical reactions from a compilation of seven barrier height data sets from the literature using two selected protocols: one uses the NEB method, while the other employs the hybrid band. Notably, the hybrid band yielded consistently higher convergence rates across reactions from both data sets. Lastly, three elementary reactions from our previous work involving molecular transition metal catalysts were optimized using the hybrid band, successfully reproducing the earlier results. This study demonstrates that the high-throughput approach can perform a large number of NEB calculations concurrently in parallel while storing all calculation results in a database. The results presented here also confirm the reliability and correctness of the new implementation.
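For orientation, the band forces compared above correspond to the textbook NEB force decomposition (standard form, not transcribed from the paper): the spring force acts along the local path tangent while the true force acts perpendicular to it.

```latex
% Standard NEB force on image i: parallel spring force plus
% the perpendicular component of the true force.
\mathbf{F}_i^{\mathrm{NEB}}
  = \mathbf{F}_i^{s,\parallel} - \left.\nabla E(\mathbf{R}_i)\right|_{\perp},
\qquad
\mathbf{F}_i^{s,\parallel}
  = k\left(\lvert\mathbf{R}_{i+1}-\mathbf{R}_i\rvert
          - \lvert\mathbf{R}_i-\mathbf{R}_{i-1}\rvert\right)\hat{\boldsymbol{\tau}}_i,
\qquad
\left.\nabla E\right|_{\perp}
  = \nabla E - (\nabla E\cdot\hat{\boldsymbol{\tau}}_i)\,\hat{\boldsymbol{\tau}}_i .
```

In this notation, the hybrid band described above keeps the full spring force in place of its parallel projection, and the plain band applies the full spring force and full gradient with no projection at all.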

RevDate: 2025-11-19

Abedin ZU, Jianbin L, Siddique M, et al (2025)

Optimizing dispatch factor in smart energy networks using cloud-based computational resources.

Scientific reports, 15(1):40683.

The integration of renewable energy sources and smart grid technologies has transformed the power grid system. Optimized dispatch of distributed resources, assisted by cloud-based technologies, is inevitable in this evolving era of cloud computing. This paper provides a comprehensive framework for cloud-based load management, with a focus on the dispatch factor as a crucial parameter in energy dispatch decisions. Cloud computing provides grid administrators with the flexibility and computational power needed to optimize energy dispatch in real time. The contributions of this work on optimized dispatch of power sources are: i) formulation of a constrained optimization objective function for the power distribution network; ii) a novel algorithm for evaluating the parametric values in the proposed objective function; and iii) a framework for offloading the computational burden to a cloud platform. Together these contributions form a methodology for efficient energy dispatch, highlighting the use of machine learning, optimization algorithms, and real-time data analytics to adjust the dispatch factor dynamically. The paper concludes with a discussion of results obtained after implementing the proposed methodology on Google Cloud Platform, which demonstrate its effectiveness.

RevDate: 2025-11-19

Ding X, Li T, Li S, et al (2025)

Automated detection of Mycobacterium tuberculosis based on cloud computing.

BMC infectious diseases, 25(1):1620.

RevDate: 2025-11-19

Mustafa NE, SG Kandlikar (2025)

Performance Evaluation of Boiling Chamber With Microchannel Chip and Taper Microgap.

ASME journal of heat and mass transfer, 147(12):121605.

The increasing trend of power densities in high-performance computing, driven by artificial intelligence, machine learning, and cloud computing, necessitates advanced thermal management solutions to maintain operational stability and energy efficiency. This study examines the effectiveness of cooling a 1.5 U simulated copper microchannel chip compared to a plain chip. Both chip types were tested with and without configurations for dual taper microgaps to enhance the heat transfer performance of a boiling chamber (BC). Experimental investigation was conducted using 500 μm wide × 400 μm deep microchannels separated by 200 μm fins. Varying inlet gaps (0.5-4 mm) and taper lengths (8.25 mm and 16.5 mm) with a taper angle of 3 deg were employed in dual taper configuration. Their impact on critical heat flux (CHF) and subcooled boiling dynamics was investigated. Microchannels provided considerable performance enhancement over a plain surface with or without the dual taper microgap. The findings demonstrate that smaller inlet gaps (0.5-1 mm) and longer taper lengths (16.5 mm, with central liquid inlet) significantly enhance nucleate boiling. These configurations improve vapor escape and delay CHF through subcooled boiling and submerged condensation. However, a lower CHF was noted due to vapor agglomeration within the microgap. The 80% fill ratio microchannel chip exhibited the highest CHF as subcooled boiling increased liquid replenishment and prevented vapor stagnation. Similarly, lower coolant temperatures (20-30 °C) enhanced boiling performance, where submerged condensation accelerated bubble collapse and improved heat dissipation efficiency in lower surface temperatures.

RevDate: 2025-11-19
CmpDate: 2025-11-19

Agudelo-Romero P, Iosifidis T, Lim J, et al (2025)

Programming of the respiratory epithelium in utero - insight from the amniotic epithelial methylome.

medRxiv : the preprint server for health sciences pii:2025.10.02.25337047.

BACKGROUND: Dysregulation of the airway epithelium contributes to recurrent wheezing and asthma and may have developmental origins. Here, we investigated the relationship between the placental amniotic and nasal epithelial methylation landscapes to determine whether amniotic epithelium provides insight into fetal programming of respiratory tissue.

METHODS: We conducted high-throughput target-capture DNA methylation sequencing of 84 matched pairs of placental amniotic and neonatal nasal brushings samples within the Airway Epithelium Respiratory Illnesses and Allergy (AERIAL) cohort. Comparative analysis of tissue-specific methylation profiles, and conservation of methylation changes associated with gestational exposures (maternal smoking and maternal asthma), was explored.

RESULTS: Between amniotic and nasal tissues, we identified 4,897 differentially methylated regions (FDR ≤ 0.05 and |log2FC| ≥ 0.2) that were generally hypermethylated in the nasal epithelium. Despite these extensive tissue-specific differences, filtering for loci with non-significant differential methylation (FDR ≥ 0.1) revealed 1,493,976 CpG loci (∼20% of the measured methylome) with highly concordant methylation ratios between tissues (Pearson's R ≥ 0.8). These loci included genes crucial to epithelial and lung development. Within these conserved regions, associations with maternal asthma and prenatal smoking were consistently represented in both tissues.

CONCLUSIONS: The conserved methylome signatures support the use of amniotic tissue as a valuable tissue for investigating the developmental programming of airway vulnerability, potentially leading to early risk stratification and targeted interventions for childhood asthma.

SOURCES OF SUPPORT: This work was supported by grants from the National Health and Medical Research Council of Australia (APP1157548), Department of Health (Western Australia)-Future Health Research and Innovation Fund (2020, 2021, 2022). S.M.S. is supported by an NHMRC Investigator Grant (NHMRC2007725). P.A-R. received funding from the Google Cloud Education Program, a Telethon Kids Institute Theme Collaboration Award grant (PR030564), the Branchi Family Foundation, and a Future Health Research and Innovation (FHRI) Fellowship by the Department of Health (IF2024-25/1), Government of Western Australia. T.I. is supported through the Channel 7 Telethon Trust, Stan Perron Charitable Foundation People Fellowship and previously supported by the Future Health Research Innovation Fund (FHRIF 2020-2023) and Imogen Miranda Suleski Fellowship. A.K is a Rothwell Family Fellow and D.G.H. is a Stan Perron/Perth Children's Hospital Foundation (PCHF) Fellow. D.M. is supported by FHRIF. A.B. is supported by the NIH (R21 AI176305-01A1, R01AI099108-11A1). The ORIGINS birth cohort has received core funding support from the Paul Ramsay Foundation and the Commonwealth Government of Australia through the Channel 7 Telethon Trust. Substantial in-kind support has been provided by The Kids Research Institute Australia and Joondalup Health Campus.

RevDate: 2025-11-19
CmpDate: 2025-11-19

Franzoso E, Santorsola M, F Lescai (2025)

Rapid NGS Analysis on Google Cloud Platform: Performance Benchmark and User Tutorial.

Clinical and translational science, 18(11):e70416.

Next-Generation Sequencing (NGS) is being increasingly adopted in clinical settings as a tool to increase diagnostic yield in genetically determined pathologies. However, for patients in critical conditions the time to results of data analysis is crucial for a rapid diagnosis and response. Sentieon DNASeq and Clara Parabricks Germline are two widely used pipelines for ultra-rapid NGS analysis, but their high computational demands often exceed the resources available in many healthcare facilities. Cloud platforms, like Google Cloud Platform (GCP), offer scalable solutions to address these limitations. Yet, setting up these pipelines in a cloud environment can be complex. This work provides a benchmark of the two solutions, and offers a comprehensive tutorial aimed at easing their implementation on GCP by healthcare bioinformaticians. Additionally, it presents valuable cost guidance to healthcare managers who consider implementing cloud-based NGS processing. Using five publicly available exome (WES) and five genome (WGS) samples, we benchmarked both pipelines on GCP in terms of runtime, cost, and resource utilization. Our results show that Sentieon and Parabricks perform comparably. Both pipelines are viable options for rapid, cloud-based NGS analysis, enabling healthcare providers to access advanced genomic tools without the need for extensive local infrastructure.

RevDate: 2025-11-17

Patil DA, S G (2025)

A comprehensive survey on securing the social internet of things: protocols, threat mitigation, technological integrations, tools, and performance metrics.

Scientific reports, 15(1):40190.

The integration of social networking concepts with the Internet of Things (IoT) has led to the Social Internet of Things (SIoT)-a paradigm enabling autonomous, context-aware interactions among devices based on social relationships. While this connectivity improves interoperability, it also raises critical challenges in trust management, secure communication, and data protection. This survey reviews 225 papers published between 2014 and 18 September 2025, analyzing advancements in SIoT security. Sources include IEEE Xplore, ACM Digital Library, Springer, ScienceDirect (Elsevier), MDPI, Wiley, Taylor & Francis, and Google Scholar. Blockchain and AI/ML approaches feature prominently, with blockchain referenced in more than 50 papers, AI/ML in over 80, and many adopting both in combination. The literature is examined across architectural foundations, security requirements, and layered defenses, with evaluation most often based on latency, accuracy, scalability, and false-positive rate. The review further highlights existing security and communication protocols, attack mitigation strategies, and the adoption of blockchain, cloud, and edge computing for scalable and decentralized processing. The survey traces the evolution of SIoT research, identifies future directions to strengthen security and transparency, and serves as a reference for researchers and practitioners designing secure and decentralized SIoT environments.

RevDate: 2025-11-17

Laneve A, Ronco G, Beccaceci M, et al (2025)

Quantum teleportation with dissimilar quantum dots over a hybrid quantum network.

Nature communications, 16(1):10028.

Photonic quantum information processing in metropolitan quantum networks lays the foundation for cloud quantum computing, secure communication, and the realization of a global quantum internet. This paradigm shift requires on-demand and high-rate generation of flying qubits and their quantum state teleportation over long distances. Although the last decade has witnessed impressive progress in the performance of deterministic photon sources, the exploitation of distinct quantum emitters to implement a quantum relay among distant parties has remained elusive. Here, we overcome this challenge by using dissimilar quantum dots whose electronic and optical properties are engineered by light-matter interaction, multi-axial strain and magnetic fields so as to make them suitable for the teleportation of polarization qubits. This is demonstrated in a quantum network harnessing both fiber connections and a 270 m free-space optical link connecting two buildings of the Sapienza University campus in Rome. The protocol exploits GPS-assisted synchronization, ultra-fast single photon detectors as well as stabilization systems that compensate for atmospheric turbulence. The achieved teleportation state fidelity reaches up to 82 ± 1%, above the classical limit by more than 10 standard deviations. Our field demonstration of all-photonic quantum teleportation opens a new route to implement solid-state based quantum relays and builds the foundation for practical quantum networks.

RevDate: 2025-11-15

Liu X, Tan Z, Tang P, et al (2025)

Research on a novel gene sequence prediction and homomorphic encryption method based on Mamba-VMD.

Computational biology and chemistry, 120(Pt 1):108780 pii:S1476-9271(25)00442-6 [Epub ahead of print].

Gene sequence prediction not only effectively identifies homologous genes but also provides crucial insights into gene function and evolutionary relationships, making it one of the significant research directions in bioinformatics. Given the high sensitivity of bioinformatics data, particularly gene sequences, plaintext transmission and processing in cloud environments pose risks of privacy leakage. To address this, the study proposes an analytical method combining Mamba neural network-based gene sequence prediction with homomorphic encryption, enabling secure data transmission while employing deep learning for sequence prediction. Using the monkeypox virus experimental ID SRX17751190 and sequencing ID SRR21755835 as examples, the study first decomposes the original gene sequence using VMD modal decomposition, then predicts the gene sequence from the decomposed components using the Mamba neural network, and finally performs homomorphic encryption and spatial similarity analysis on the predicted data in a cloud computing environment. Experimental results demonstrate that the average MAE, MSE, RMSE, MAPE, and MSPE for the top ten predicted gene sequences are 0.0140, 0.0003, 0.0190, 0.6527, and 789.52959, respectively, with an average CKKS homomorphic encryption computation error of 0.582492456. This ensures secure similarity calculations for gene sequence data in cloud environments.
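The error metrics the study reports (MAE, MSE, RMSE, MAPE) have standard definitions; as a reminder of how they are computed, here is a minimal sketch with illustrative numbers, not the paper's code or data:

```python
import math

def regression_errors(y_true, y_pred):
    """Compute MAE, MSE, RMSE, and MAPE for paired sequences."""
    n = len(y_true)
    abs_err = [abs(t - p) for t, p in zip(y_true, y_pred)]
    sq_err = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    mae = sum(abs_err) / n
    mse = sum(sq_err) / n
    rmse = math.sqrt(mse)
    # MAPE is undefined when a true value is 0; such points are skipped here.
    pct = [abs((t - p) / t) for t, p in zip(y_true, y_pred) if t != 0]
    mape = 100 * sum(pct) / len(pct)
    return mae, mse, rmse, mape

mae, mse, rmse, mape = regression_errors([1.0, 2.0, 4.0], [1.1, 1.8, 4.4])
```

MAPE's division by the true value is why it can dwarf the other metrics (as with the large MSPE figure above) whenever true values are close to zero.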

RevDate: 2025-11-14

Lin P (2025)

Fuzzy based priority aware task scheduling optimization for mobile edge computing environments.

Scientific reports, 15(1):39998.

Mobile Edge Computing (MEC) is an innovative solution designed to address key challenges in mobile cloud computing, including latency, limited capacity, and resource constraints. The primary objective of MEC is to enable dynamic scheduling and efficient resource allocation with minimal cost. This paper proposes a three-tier system architecture comprising mobile devices, edge computing nodes, and traditional cloud infrastructure. It introduces two methods for task offloading and scheduling. For task allocation on mobile devices, the system leverages the Greedy Auto-Scaling Offloading algorithm, which prioritizes high-energy-consuming tasks to enhance energy efficiency. On the edge computing layer, a dynamic scheduling approach based on fuzzy logic is presented, which ranks and allocates tasks according to two specific criteria. Numerical evaluations demonstrate that, compared to existing alternatives, the proposed method significantly reduces task waiting time, latency, and overall system load, while maintaining system balance with minimal resource consumption. Moreover, the proposed system achieves up to ~ 64% reduction in battery consumption in our simulated environment compared with local execution. The results also indicate that over 93% of tasks are successfully executed within the edge environment.
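The abstract does not spell out the fuzzy rules, so the following is only a hypothetical sketch of the general idea: rank edge tasks by a fuzzy score blended from two criteria (here, deadline urgency and computational load; the membership ramps and the 0.6/0.4 weights are invented for illustration).

```python
def ramp_down(x, lo, hi):
    """Membership: 1 at or below lo, 0 at or above hi, linear between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def ramp_up(x, lo, hi):
    """Membership: 0 at or below lo, 1 at or above hi, linear between."""
    return 1.0 - ramp_down(x, lo, hi)

def fuzzy_priority(deadline_s, load_mcycles):
    """Blend 'urgent' and 'heavy' memberships into one priority score."""
    urgency = ramp_down(deadline_s, 1.0, 10.0)     # tight deadline -> urgent
    weight = ramp_up(load_mcycles, 100.0, 1000.0)  # many cycles -> heavy
    return 0.6 * urgency + 0.4 * weight

# Rank three toy tasks: (deadline in seconds, load in megacycles).
tasks = {"t1": (2.0, 800.0), "t2": (8.0, 200.0), "t3": (0.5, 1200.0)}
order = sorted(tasks, key=lambda t: fuzzy_priority(*tasks[t]), reverse=True)
```

The edge node would then dispatch tasks in `order`, re-scoring as new tasks arrive; the actual paper's criteria and defuzzification may differ.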

RevDate: 2025-11-13

Ren F, Xu L, Fei J, et al (2025)

Design of multi-mode intelligent system architecture for surface defect detection of steel based on cloud technology.

Scientific reports, 15(1):39735.

This research establishes a cloud-native multimodal intelligent system architecture dedicated to automated detection and real-time monitoring of steel surface defects. The architecture integrates the advantages of edge, cloud, and mobile computing, achieving efficient processing and precise analysis of industrial big data through deep learning. The main contributions of this research include: (1) optimized deployment of improved YOLOv5-based algorithms, including the lightweight YOLOv5-EfficientNet and YOLOv5-MobileNet series, which significantly enhance defect recognition in complex industrial environments; (2) a microservice architecture and distributed message queue system providing high availability and dynamic scalability; (3) a multi-dimensional result distribution ecosystem encompassing cross-platform services (WeChat, email), real-time push notifications (iMessage), and mobile application alerts (Android applications), ensuring timely delivery of detection results to different end users. In evaluations on a large-scale self-built steel defect dataset, the system demonstrates good performance on key metrics including detection accuracy, recall, and processing speed. Practical deployment has validated the system's reliability and adaptability in industrial production environments. This research provides a feasible technical solution for the intelligent transformation of the steel industry and a forward-looking technical paradigm for intelligent manufacturing in the context of Industry 4.0, laying a foundation for unmanned, high-quality modern industrial production.

RevDate: 2025-11-13

Rocha H, Chong YJ, Thirunavukarasu AJ, et al (2025)

Performance of Foundation Models vs Physicians in Textual and Multimodal Ophthalmological Questions.

JAMA ophthalmology pii:2841079 [Epub ahead of print].

IMPORTANCE: There is an increasing amount of literature evaluating the clinical knowledge and reasoning performance of large language models (LLMs) in ophthalmology, but to date, investigations into its multimodal abilities clinically-such as interpreting images and tables-have been limited.

OBJECTIVE: To evaluate the multimodal performance of the following 7 foundation models (FMs): GPT-4o (OpenAI), Gemini 1.5 Pro (Google), Claude 3.5 Sonnet (Anthropic), Llama-3.2-11B (Meta), DeepSeek V3 (High-Flyer), Qwen2.5-Max (Alibaba Cloud), and Qwen2.5-VL-72B (Alibaba Cloud) in answering offline Fellowship of the Royal College of Ophthalmologists part 2 written multiple-choice textual and multimodal questions, with head-to-head comparisons with physicians.

DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study was conducted between September 2024 and March 2025 using questions sourced from a textbook used as an examination preparation resource for the Fellowship of the Royal College of Ophthalmologists part 2 written examination.

EXPOSURE: FM performance.

MAIN OUTCOMES AND MEASURES: The primary outcome measure was FM accuracy, defined as the proportion of answers generated by the model matching the textbook's labeled letter answer.

RESULTS: For textual questions, Claude 3.5 Sonnet (accuracy, 77.7%) outperformed all other FMs (followed by GPT-4o [accuracy, 69.9%], Qwen2.5-Max [accuracy, 69.3%], DeepSeek V3 [accuracy, 63.2%], Gemini Advanced [accuracy, 62.6%], Qwen2.5-VL-72B [accuracy, 58.3%], and Llama-3.2-11B [accuracy, 50.7%]), ophthalmology trainees (difference, 9.0%; 95% CI, 2.4%-15.6%; P = .01) and junior physicians (difference, 35.2%; 95% CI, 28.3%-41.9%; P < .001), with comparable performance with expert ophthalmologists (difference, 1.3%; 95% CI, -5.1% to 7.4%; P = .72). GPT-4o (accuracy, 69.9%) outperformed GPT-4 (OpenAI; difference, 8.5%; 95% CI, 1.1%-15.8%; P = .02) and GPT-3.5 (OpenAI; difference, 21.8%; 95% CI, 14.3%-29.2%; P < .001). For multimodal questions, GPT-4o (accuracy, 57.5%) outperformed all other FMs (Claude 3.5 Sonnet [accuracy, 47.5%], Qwen2.5-VL-72B [accuracy, 45%], Gemini Advanced [accuracy, 35%], and Llama-3.2-11B [accuracy, 25%]) and the junior physician (difference, 15%; 95% CI, -6.7% to 36.7%; P = .18) but was weaker than expert ophthalmologists (accuracy range, 70.0%-85.0%; P = .16) and trainees (accuracy range, 62.5%-80%; P = .35).

CONCLUSIONS AND RELEVANCE: Results of this cross-sectional study suggest that for textual questions, current FMs exhibited notable improvements in ophthalmological knowledge reasoning when compared with older LLMs and ophthalmology trainees, with performance comparable with that of expert ophthalmologists. These models demonstrated potential for medical assistance for answering ophthalmological textual queries, but their multimodal abilities remain limited. Further research or fine-tuning models with diverse ophthalmic multimodal data may lead to more capable applications with multimodal functionalities.

RevDate: 2025-11-13

Hafner A, Kourousias G, De Simone M, et al (2026)

Modular Adaptive Processing Infrastructure (MAPI): a blueprint for interconnecting generic workflows with modern interfaces.

Journal of synchrotron radiation pii:S1600577525009269 [Epub ahead of print].

In this paper, we introduce the Modular Adaptive Processing Infrastructure (MAPI), a comprehensive software suite and approach designed to streamline and enhance data analysis workflows in scientific research laboratories. MAPI selects and integrates multiple frameworks and toolkits into a web-based platform, offering a highly modular and adaptable solution for diverse data analysis requirements. By design, MAPI supports distributed processing across heterogeneous backends (edge workstations, on-premises servers, high-performance computing and public cloud), making it suitable for various beamlines and data-processing labs. This blueprint, or 'recipe', provides a flexible infrastructure that can be tailored to specific experimental needs. We showcase MAPI's application through its successful implementation on the X-ray computed tomography (CT) beamline, resulting in a system for tomographic processing (STP3). The case study demonstrates MAPI's effectiveness in meeting complex computational demands, highlighting its potential for widespread adoption in scientific research environments. Most of the results reported in the paper are from a production deployment on Elettra's SYRMEP beamline using two on-premises GPU servers, but two additional ongoing deployments on different beamlines are discussed.

RevDate: 2025-11-13
CmpDate: 2025-11-13

Safi A, Shaikh M, Hoang MT, et al (2025)

Decoding sepsis: A technical blueprint for an algorithm-driven system architecture.

Digital health, 11:20552076251389389.

OBJECTIVE: Sepsis remains a leading cause of mortality in healthcare, requiring rapid detection and intervention. This paper presents a scalable, serverless machine learning (ML) operations architecture for near real-time sepsis risk-stratification in Emergency Department (ED) waiting rooms, where pathology data is often unavailable and recognising sepsis presents the biggest opportunity for timely treatment.

METHODS: The system integrates HL7 message processing through MuleSoft in a secure Amazon Web Services (AWS) cloud environment, leveraging AWS services such as Lambda for real-time data processing and SageMaker for ML model deployment. To optimise the model's performance, the receiver operating characteristic (ROC) curve was used to evaluate different probability cutoff thresholds across age groups (16-35, 35-65, and 65-115), aiming for >80% sensitivity while minimising false negatives. Processed data is stored in Aurora PostgreSQL Relational Database Service and visualised in an on-premises proprietary dashboard for clinical decision support.

RESULTS: Despite high reliability, with 99.7% of HL7 messages successfully processed, limitations include occasional failures due to downtime and code set mismatches, as well as peak execution times of 10.5 s under heavy loads, highlighting areas for optimisation. A model developed on eligible ED encounters (n = 484,617) using XGBoost was integrated as a real-time endpoint in SageMaker. The Extreme Gradient Boosting model achieved the highest overall accuracy (0.84) and F1-score (0.80), with balanced sensitivity and specificity for our specified limited features within an ED. ROC analysis for the age groups (16-35, 36-65, 66-115) showed strong performance in all cohorts (AUCs: 0.864, 0.867, 0.806).

CONCLUSION: This paper outlines the system's design, implementation, and potential for enhancing early sepsis risk-stratification through near real-time monitoring in the ED waiting room.
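The threshold-tuning step described in METHODS, choosing a probability cutoff per age band that keeps sensitivity above 80%, can be sketched as follows. This is not the authors' code; the labels and probabilities are invented toy data.

```python
def sens_spec(y_true, y_prob, thr):
    """Sensitivity and specificity at a given probability cutoff."""
    tp = sum(y == 1 and p >= thr for y, p in zip(y_true, y_prob))
    fn = sum(y == 1 and p < thr for y, p in zip(y_true, y_prob))
    tn = sum(y == 0 and p < thr for y, p in zip(y_true, y_prob))
    fp = sum(y == 0 and p >= thr for y, p in zip(y_true, y_prob))
    return tp / (tp + fn), tn / (tn + fp)

def pick_cutoff(y_true, y_prob, min_sens=0.80):
    """Walk candidate cutoffs from highest to lowest and return the
    first (i.e. most specific) one whose sensitivity meets the floor."""
    for thr in sorted(set(y_prob), reverse=True):
        sens, _ = sens_spec(y_true, y_prob, thr)
        if sens >= min_sens:
            return thr
    return min(y_prob)  # degenerate fallback: flag everyone

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_prob = [0.9, 0.8, 0.7, 0.6, 0.2, 0.5, 0.4, 0.3, 0.1, 0.05]
cutoff = pick_cutoff(y_true, y_prob)
```

Running `pick_cutoff` separately on each age band's encounters would yield the per-cohort thresholds the paper describes; in practice this would be done on held-out validation data.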

RevDate: 2025-11-12

Taslim C, Zhang Y, Rask G, et al (2025)

RNA-SeqEZPZ: A Point-and-Click Pipeline for Comprehensive Transcriptomics Analysis with Interactive Visualizations.

GigaScience pii:8321683 [Epub ahead of print].

BACKGROUND: RNA-Seq analysis has become a routine task in numerous genomic research labs, driven by the reduced cost of bulk RNA sequencing experiments. These studies generate billions of reads that require easy-to-run, comprehensive, and reproducible analysis. However, many labs rely on in-house scripts, which can be challenging for bench scientists to use and hinder standardization and reproducibility. While existing RNA-Seq pipelines attempt to address these challenges, they often lack a complete end-to-end user interface.

FINDINGS: To bridge this gap, we developed RNA-SeqEZPZ, an automated pipeline with a user-friendly point-and-click interface, enabling rigorous and reproducible RNA-Seq analysis without requiring programming or bioinformatics expertise. For advanced users, the pipeline can also be executed from the command line, allowing customization of steps to suit specific applications. The innovation of this pipeline lies in the combination of three key features: (1) all software is packaged within a Singularity container, eliminating installation issues, (2) it offers a point-and-click interface from raw FASTQ files through differential expression and pathway analysis, and (3) it includes a Nextflow version, enabling scalability and portability for seamless execution across various platforms including job submission in the cloud and cluster computing. Additionally, RNA-SeqEZPZ generates a thorough statistical report and offers an option for batch adjustment to minimize effects of noise due to technical variations across replicates. Reports can also be reviewed by a bioinformatician to ensure the overall quality of the analysis.

CONCLUSIONS: RNA-SeqEZPZ is a robust, accessible, and scalable solution for comprehensive RNA-Seq analysis, enabling researchers to focus on biological insights rather than computational challenges.

RevDate: 2025-11-10

Du R, Wang Z, J Shen (2025)

Certificateless data integrity auditing with sparse Merkle trees for the cloud-edge environment.

Scientific reports, 15(1):39202.

Ensuring data integrity in cloud-edge environments is critical for IoT ecosystems but is challenged by dynamic data and resource constraints. This paper proposes a certificateless auditing scheme harmonizing cloud security with edge efficiency. By integrating online/offline cryptography and sparse Merkle trees, our approach achieves (1) a significant reduction in user-side computation via offline or edge-side tag generation, (2) [Formula: see text] dynamic update complexity versus traditional [Formula: see text] approaches, and (3) 75% communication overhead savings through a pre-download mechanism. The scheme eliminates certificate management and mitigates Key Generation Centre (KGC) risks via decentralized trust mechanisms. Security proofs demonstrate resilience against KGC collusion and tag forgery under the Inv-CDH assumption. Experiments show our scheme audits faster than prior schemes, supporting 500k+ operations at sub-second latency. This work bridges scalability and real-time demands for smart cities and Industry 4.0 while enabling future extensions in ML-optimized caching and blockchain trust models.
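The sparse-Merkle-tree idea the scheme relies on, precomputed default hashes for empty subtrees so that only occupied slots cost work, can be sketched as below. This is a generic illustration (toy depth, SHA-256), not the paper's construction:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

DEPTH = 8  # 2**8 leaf slots; production schemes use far deeper trees

# Hash of an all-empty subtree at each height, computed once.
EMPTY = [H(b"")]
for _ in range(DEPTH):
    EMPTY.append(H(EMPTY[-1] + EMPTY[-1]))

def smt_root(leaves, height=DEPTH, index=0):
    """Root over `leaves` (slot -> value). Empty subtrees reuse the
    precomputed defaults, so cost scales with the number of occupied
    slots and the depth, not with the 2**DEPTH nominal size."""
    if not leaves:
        return EMPTY[height]
    if height == 0:
        return H(leaves[index])
    mid = index + (1 << (height - 1))
    left = {i: v for i, v in leaves.items() if i < mid}
    right = {i: v for i, v in leaves.items() if i >= mid}
    return H(smt_root(left, height - 1, index) +
             smt_root(right, height - 1, mid))

root = smt_root({3: b"block-tag-a", 200: b"block-tag-b"})
```

An auditor holding `root` can detect any change to a stored tag, since modifying one leaf changes the root; dynamic updates touch only one root-to-leaf path, which is the source of the logarithmic update cost such schemes claim.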

RevDate: 2025-11-09

Montagnese M, Rangelov B, Doel T, et al (2025)

Cloud computing for equitable, data-driven dementia medicine.

The Lancet. Digital health pii:S2589-7500(25)00084-6 [Epub ahead of print].

Dementia poses an increasing global health challenge, and the introduction of new drugs with diverse activity profiles underscores the need for the rapid development and deployment of tailored predictive models. Machine learning has shown promise in dementia research, but it remains largely untested in routine dementia health care-particularly for image-based decision support-owing to data unavailability. Thus, data drift remains a key barrier for equitable real-world translation. We propose and pilot a scalable, cloud-based infrastructure as code solution incorporating privacy-preserving federated learning. This architecture preserves patient privacy by keeping data localised and secure, while enabling the development of robust, adaptable artificial intelligence models. Although technology giants have successfully implemented such technologies in consumer applications, their potential in health-care applications remains largely underutilised. This Viewpoint outlines the key challenges and solutions in implementing cloud-based federated learning for dementia medicine and provides a well-documented codebase to support further research.

RevDate: 2025-11-08
CmpDate: 2025-11-08

Ahadov B, Gadirli F, G Hajiyeva (2025)

Assessment of water quality and coastal changes under climate impacts using multi-sensor satellite techniques in the Southern Absheron Peninsula.

Environmental monitoring and assessment, 197(12):1312.

The southern Absheron Peninsula is experiencing increasing ecological stress caused by both climatic shifts and anthropogenic pressures. Using multi-sensor satellite data, we estimated the water quality indicators Chlorophyll-a (Chl-a), Trophic State Index (TSI), Colored Dissolved Organic Matter (CDOM), and Land Surface Temperature (LST) for 2021-2025, based on Sentinel-2 optical data processed in Google Earth Engine (GEE). The integration of remote sensing and cloud-based processing provides a practical framework for long-term monitoring, supporting data-driven decision-making for sustainable coastal management. To explore the physical drivers underlying these ecological changes, ICESat-2 ATL03 photon data were used to evaluate vertical seafloor changes along a fixed coastal track; the observed shoaling is attributed to sediment accumulation or coastal infilling, likely linked to reclamation activities and altered hydrodynamic conditions near the Zigh shoreline. Surface temperature showed summer warming, mostly surpassing 30 °C, overlapping with higher Chl-a and TSI values and highlighting the role of temperature in intensifying eutrophication risk. Land reclamation along the Zigh and Hovsan coastlines decreased water circulation, resulting in increased CDOM and Chl-a accumulation. We complemented this with ICESat-2 photon-based bathymetry estimation and GRACE/GRACE-FO sea-level observations. The ICESat-2 bathymetry analyses confirmed significant nearshore changes, with depths decreasing from ~7 m (2020-2021) to 2-3 m in 2024, reflecting sedimentation and shoreline transformation, while GRACE/GRACE-FO observations of sea-level decline reinforce the combined influence of hydrological change, sediment redistribution, and human-driven alterations on coastal morphology. The results underscore the pressing nature of the situation, with climate-driven sea-level decline, rising temperatures, and local anthropogenic activities jointly degrading water quality and reshaping bathymetry in the southern Absheron Peninsula. This integrated approach highlights the urgent need for enhanced wastewater management and sustainable coastal planning to protect the Caspian Sea's ecosystems.

RevDate: 2025-11-07

Subburaj S, Liu C, T Xu (2025)

Emerging trends in AI-integrated optical biosensors for point-of-care diagnostics: current status and future prospects.

Chemical communications (Cambridge, England) [Epub ahead of print].

Optical biosensors have emerged as a transformative class of point-of-care diagnostic (POCD) devices, offering sensitive, specific, and rapid detection of diseases. The integration of optical biosensors with artificial intelligence (AI) brings a new revolution to POCD by enabling enhanced analytical performance and real-time decision-making. This review presents an overview of the current status and future prospects of AI-integrated optical biosensors, with an emphasis on progress in sensor design, data science, and miniaturization. We also point out the advantages of AI algorithms, especially machine learning and deep learning, in improving the sensitivity, specificity, and multiplexing of optical biosensors through intelligent signal processing, pattern recognition, and automated decision-making. The optical biosensing techniques, including SPR, fluorescence, colorimetric, and Raman-based methods, are reviewed with respect to the improvements facilitated by AI technology. Finally, we examine the possibilities of integrating optical biosensors with IoT and cloud computing and critically address challenges related to data privacy, integration complexity, and clinical validation. In summary, this review provides a realistic and future-oriented outlook for researchers, clinicians, and industry stakeholders interested in using AI-enhanced optical biosensors to redefine the future of POCD.

RevDate: 2025-11-06
CmpDate: 2025-11-06

Nandini K, K Rahimunnisa (2025)

IoT assisted fetal health classification using mother optimization algorithm with deep learning approach on cardiotocogram data.

Scientific reports, 15(1):38979.

The adoption of the Internet of Things (IoT) for smart health applications is an effective approach to distributed, intelligent automated diagnosis systems. Fetal movement is a basic index of fetal well-being, and IoT-based fetal health classification leverages IoT technology to remotely assess and monitor fetal well-being in real time. Continuous data streams, including uterine contractions, fetal heart rate (FHR), and movement patterns, can be gathered and analyzed by combining sensors with cloud computing and machine learning (ML) algorithms. This allows prompt diagnosis of distress indicators or abnormalities, enabling quick measures to optimize prenatal care outcomes. Furthermore, IoT-based systems provide an opportunity for personalized monitoring of individual pregnancies, enhancing fetal and maternal health monitoring throughout gestation. However, present medical technology does not offer an easily accessible, long-term, and effective way to monitor fetal movement. Lately, ML and deep learning (DL) approaches have been considered appropriate for the automatic classification of fetal health. This study presents an IoT-assisted Fetal Health Detection and Classification method using the Mother Optimization Algorithm with Deep Learning (AFHDC-MOADL). The goal of the AFHDC-MOADL technique is to accurately classify fetal health into three classes: normal, suspect, and pathological. The AFHDC-MOADL technique involves a multifaceted process. First, IoT devices acquire fetal health-related data. The data then undergo preprocessing in two steps: K-nearest neighbor (KNN)-based imputation and standard scaling. The technique applies a mother optimization algorithm (MOA) to reduce the high-dimensionality problem by selecting an optimal subset of features. A graph convolutional neural network (GCN) model is exploited for the fetal health classification, and the root mean square propagation (RMSProp) optimizer is utilized for optimal hyperparameter selection of the GCN. The simulation outcomes of the AFHDC-MOADL algorithm are assessed on the Fetal Health Classification dataset from Kaggle. The experimental validation highlighted the significant performance of the AFHDC-MOADL technique over recent DL approaches.

RevDate: 2025-11-06
CmpDate: 2025-11-06

Zhou H, Zhang H, Zhang R, et al (2025)

AI-driven photonic noses: from conventional sensors to cloud-to-edge intelligent microsystems.

Microsystems & nanoengineering, 11(1):209.

The photonic nose is an emerging class of optical sensing systems designed to mimic the olfactory capabilities of a human nose. Evolving from conventional chemical and gas sensors, photonic noses leverage optical phenomena to achieve high sensitivity and fast, label-free analysis of chemical volatiles. This review provides an in-depth analysis of the evolution and current state of photonic nose technologies, particularly focusing on their integration with artificial intelligence (AI) and machine learning (ML). We first discuss key optical sensing and fabrication methods, including colorimetry, refractive index sensing, spectroscopy, and integrated photonic devices. Then, the role of ML algorithms in photonic noses is highlighted, and the integration of photonic noses into cloud-to-edge computing systems is also explored, demonstrating intelligent microsystem designs capable of on-chip real-time analytics and distributed data processing. Additionally, we highlight representative application scenarios where AI-driven photonic noses show significant advantages, including environmental monitoring, early-stage medical diagnostics, and ensuring food quality and safety. A concise comparative analysis between photonic noses, electronic noses, and analytical instruments is provided. Finally, this review identifies the remaining challenges in AI-driven photonic noses and offers insights into future development pathways toward smarter, miniaturized, and more robust photonic sensing systems.

RevDate: 2025-11-05

Zhang F, C Shi (2025)

Fair-efficient allocation mechanism with meta-types resources in cloud computing.

Scientific reports, 15(1):38821.

Fair and efficient resource allocation is a fundamental goal of cloud computing systems. However, diverse user requirements and heterogeneous resource types make it difficult to balance utilization efficiency and user-perceived fairness. To address this challenge, we propose a meta-type-based resource allocation mechanism, GAF-MT, which is based on the principle of asset fairness. GAF-MT introduces meta-types to model structured resource groupings and supports user-specific requirements while reducing fragmentation. We design a scheduling algorithm to find feasible solutions and implement GAF-MT based on GUROBI. Extensive experiments in small-scale and large-scale user environments show that GAF-MT not only ensures fairness, but also significantly improves resource utilization and maintains high performance even under high user loads.
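GAF-MT's internals are not given in the abstract, but the asset-fairness principle it builds on has a simple reading: value each user's bundle as the sum of its normalized shares of every resource type, and call an allocation fair when those values are equal. A hypothetical sketch, with invented capacities and demands:

```python
def asset_value(alloc, capacity):
    """Asset fairness: a user's 'asset' is the sum of the fractions of
    each resource's total capacity that the user holds."""
    return sum(alloc[r] / capacity[r] for r in capacity)

capacity = {"cpu": 100.0, "mem_gb": 200.0}
allocs = {
    "u1": {"cpu": 30.0, "mem_gb": 20.0},  # CPU-heavy user
    "u2": {"cpu": 10.0, "mem_gb": 60.0},  # memory-heavy user
}
assets = {u: asset_value(a, capacity) for u, a in allocs.items()}
# Both bundles are worth the same fraction of the cluster, so this
# allocation is asset-fair even though the resource mixes differ.
```

A meta-type, as the abstract describes it, would group such resource types into structured bundles; the solver (GUROBI in the paper) would then search for feasible allocations that equalize these asset values.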

RevDate: 2025-11-05

Nagarjun AV, S Rajkumar (2025)

Quantum deep learning-enhanced ethereum blockchain for cloud security: intrusion detection, fraud prevention, and secure data migration.

Scientific reports, 15(1):38711.

With the rapid acceleration of cloud computing, data transfer security and intrusion detection in cloud networks have become pressing concerns. Traditional security mechanisms have centralized vulnerabilities, cannot detect threats in real time, and are ineffective against zero-day attacks. The signature-based approaches of existing intrusion detection systems (IDS) do not keep pace with the dynamically changing nature of cyber threats, and conventional blockchain security methods suffer from poor scalability and lack dynamic threat analysis. Therefore, this research proposes integrating the Ethereum blockchain and deep learning to construct a well-founded security framework for cloud networks with secure data migration and real-time intrusion detection. The architecture comprises five distinct methods, each addressing particular security issues. Blockchain-Aware Federated Learning for Secure Model Training (BAFL-SMT) guarantees tamper-proof, decentralized deep learning model training, reducing model poisoning attacks by 98.4%. Graph Neural Networks for Adaptive Intrusion Detection (GNN-AID) captures graph structures for real-time anomaly detection in networks while reducing false positives to 1.2%. Quantum-inspired Variational Autoencoders (QI-VAE-ZDAD) provide enhanced zero-day attack detection, with an improved detection rate of 92%. Self-Supervised Contrastive Learning for Blockchain Security Auditing (SSCL-BSA) detects smart contract vulnerabilities automatically, resulting in an 87% reduction in fraud risk. Finally, Hierarchical Transformers for Secure Data Migration (HT-SDM) enhance the transfer security of large-scale cloud data, achieving an attack classification accuracy of 99.1%. Overall, this multi-layer security framework greatly enhances cloud security by preserving data integrity, cutting intrusion detection time by up to 65%, and improving response mechanisms. By combining the immutable transparency of blockchain with the anomaly detection strength of deep learning, this research provides a scalable, real-time, and intelligent approach to securing data transfer within cloud networks.

RevDate: 2025-11-04

Su J, Y Liu (2025)

Task offloading decision making for IoV based on deep reinforcement learning.

Scientific reports, 15(1):38586.

With the popularization and development of in-vehicle applications, the limitations of computing, storage, and energy resources on vehicles have become increasingly prominent. To meet the growing demand for compute-intensive applications, cloud-edge collaborative computing has emerged as a key scheme. However, challenges remain: current task offloading schemes under cloud-edge collaboration are generally limited to the assumption of full offloading, failing to address the need for partial offloading in practical scenarios such as segmented data processing in autonomous driving, which makes it difficult to determine the optimal offloading rate. Furthermore, most schemes fail to establish a priority model based on tasks' resource requirements, struggling to balance efficient offloading and rational resource allocation. To address these issues, this paper designs communication, energy consumption, cost, priority, and task offloading models, and proposes a task offloading decision scheme based on deep reinforcement learning that selects optimal offloading strategies in dynamic environments. Experimental results demonstrate significantly better performance than existing schemes reported in the literature: after convergence, the IDDPG-based scheme reduces latency by 59.46% and 67.39% and energy consumption by 18.37% and 11.76% relative to the DQN-based and DDPG-based schemes, respectively.

RevDate: 2025-11-04

Fadl ME, Zekari M, Labad R, et al (2025)

Integrating RUSLE, AHP, GIS, and cloud-based geospatial analysis for soil erosion assessment under mediterranean conditions.

Scientific reports, 15(1):38494.

Soil erosion is a major environmental challenge in Mediterranean regions, where climatic variability, steep slopes, and human activities accelerate land degradation. In the north-central region of Algeria, the Mitidja Plain faces increasing erosion pressure, threatening biodiversity, agricultural productivity, and long-term soil sustainability. This study aims to assess soil erosion risk by integrating the Revised Universal Soil Loss Equation (RUSLE), the Analytical Hierarchy Process (AHP), and Geographic Information System (GIS) techniques within a Cloud-Based Geospatial (CBG) framework using the Google Earth Engine (GEE) platform. High-resolution datasets on rainfall, topography, soil properties, and land cover were processed in GEE to derive five RUSLE factors: rainfall runoff erosivity (R[E]), soil erodibility (K[S]), slope length steepness (L[S]), cropping management (C[M]), and management practices (P[C]). The analysis revealed that 41% of the Mitidja Plain is at severe erosion risk, with an average soil loss of 88.72 t ha[-1] yr[-1] and a maximum of 161.13 t ha[-1] yr[-1]. Erosion hotspots correspond to areas where slopes exceed 22°, vegetation cover is sparse, and rainfall intensity is high. The AHP-weighted integration achieved strong predictive accuracy (AUC = 0.87), identifying slope characteristics as the most influential factor (weight = 0.292). Forested areas reduced erosion risk in 30% of the region, while unprotected mountainous zones covering 22% of the study area require urgent intervention. These findings demonstrate the effectiveness of CBG-enhanced modeling for mapping priority conservation areas. Recommendations include terracing, check dams, vegetation restoration, and adaptive agricultural practices to reduce soil loss, particularly in agricultural lands with moderate to high vulnerability (48% of the plain). The methodology provides a replicable framework for other Mediterranean regions facing similar erosion pressures, offering robust spatial data to guide soil management and conservation planning.
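As a sketch of the per-pixel RUSLE computation the abstract describes (annual soil loss as the product of the five factors), with illustrative factor values rather than the study's GEE-derived rasters, and toy severity class breaks that are not taken from the paper:

```python
# RUSLE per-pixel soil loss: A = R * K * LS * C * P  (t/ha/yr).
# Factor values below are illustrative placeholders, not the study's data.

def rusle_soil_loss(r, k, ls, c, p):
    """Annual soil loss A for one pixel, given the five RUSLE factors."""
    return r * k * ls * c * p

def classify_risk(a, breaks=(10, 50)):
    """Toy severity classes; the paper's own class breaks are not given."""
    if a < breaks[0]:
        return "low"
    if a < breaks[1]:
        return "moderate"
    return "severe"

# Hypothetical pixel: erosive rainfall, steep slope, sparse vegetation.
a = rusle_soil_loss(r=1200, k=0.03, ls=4.5, c=0.35, p=1.0)
```

In the study the factors are rasters computed in Google Earth Engine and the multiplication is applied image-wise; the scalar version above only shows the arithmetic.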

RevDate: 2025-11-04
CmpDate: 2025-11-04

He J, Zhang J, Wang Z, et al (2025)

Quantitative Assessment of Strabismus Using Cloud AI Computing: Validation Study.

JMIR formative research, 9:e79280 pii:v9i1e79280.

BACKGROUND: Strabismus measurement is essential in vision assessment and screening. It typically requires skilled clinicians or specialized equipment. Photographic strabismus measurement methods have value in terms of accessibility and convenience of use.

OBJECTIVE: This study aimed to evaluate Eyeturn Cloud, a cloud-based artificial intelligence (AI) system for measuring strabismus angles based on eye images captured with smartphone cameras under cover test conditions.

METHODS: The Eyeturn Cloud web app uses AI models to recognize the eyes, eyelids, and irises, and then to segment the iris precisely. It then computes the strabismus angle based on ellipse fitting of the iris boundary and the corneal reflection. The system was evaluated in patients (without glasses) with manifest strabismus and in control participants. Clinicians measured eye deviations using the prism alternate cover test and also captured pictures of the participants' eyes under alternate cover and unilateral cover conditions. The pictures were processed by Eyeturn Cloud.

RESULTS: In total, 79 participants (mean age 11.9, SD 6.3 years; esotropia: n=15, exotropia: n=55, orthotropia: n=9) were enrolled, of whom 71 yielded usable data (8/79, 10.1% processing failure). The prism alternate cover test strabismus magnitude ranged from 78 base-in to 78 base-out prism diopters (PDs). A strong correlation was found between Eyeturn Cloud and clinical measurements (R[2]=0.95; slope=0.91; P<.001). Bland-Altman analysis revealed that the 95% limits of agreement between the 2 measurements were -20.2 to 14.6 PD. A repeatability test with 15 participants (4 photos each) found an SD of 1.53 PD.

CONCLUSIONS: The cloud AI web app can compute strabismus angles reliably under alternate and unilateral cover conditions in clinical settings, and its potential for use in telehealth settings needs further evaluation.
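The Bland-Altman limits of agreement quoted in the results are computed as the mean paired difference ± 1.96 SD of the differences; a minimal sketch with made-up paired measurements (not the study's data):

```python
import numpy as np

def bland_altman_limits(a, b, z=1.96):
    """95% limits of agreement between two paired measurement series."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample SD of the paired differences
    return bias - z * sd, bias + z * sd

# Illustrative paired deviation angles (PD) from two hypothetical methods.
lo, hi = bland_altman_limits([10, 12, 14, 20], [9, 13, 13, 18])
```

The midpoint of the two limits is the bias (mean difference); the paper's -20.2 to 14.6 PD interval follows the same construction on its own data.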

RevDate: 2025-11-03
CmpDate: 2025-11-03

Zhang X, Chen L, J Wang (2025)

The research landscape and evolutionary trends of robot-assisted orthopedic telesurgery: a bibliometric and visualized analysis.

Journal of robotic surgery, 19(1):743.

This study employed bibliometric methodologies to systematically investigate the research landscape and evolutionary trajectory of robot-assisted orthopedic telesurgery (RAOTS) from 2010 to 2025. A visual analysis of 3,226 publications indexed in the Web of Science Core Collection revealed an annual growth rate of 18.24%, with international collaborations accounting for 26.35% of the total output. In terms of publication volume, China, the United States, and England ranked at the forefront, while Shanghai Jiao Tong University and the Chinese Academy of Sciences emerged as central contributors in this field. Keyword co-occurrence and burst analysis indicated a transition in research emphasis, from an initial stage centered on "force feedback" and "master-slave control" to a more sophisticated paradigm integrating "artificial intelligence", "cloud computing" and "5G communication". Simultaneously, technical challenges have evolved beyond latency optimization to include pressing issues such as cybersecurity, ethical governance, medical insurance reimbursement, and multimodal sensory integration. This study presents, for the first time, a three-stage model characterizing the knowledge evolution of RAOTS and proposes a structured framework mapping its technological development path, thereby providing empirical evidence to support future standardization, clinical translation, and interdisciplinary cooperation.

RevDate: 2025-11-01

Nagel GW, Darby SE, J Leyland (2025)

Surface Water Transitions 1984-2022: A Global Dataset at Annual Resolution.

Scientific data, 12(1):1729.

Recent advances in satellite technology and cloud computing have enabled global-scale monitoring of long-term surface water changes. The dynamic nature of surface water, driven by seasonal fluctuations and climatic events, presents challenges for accurately interpreting these dynamics. Here, we introduce the first global dataset that identifies the timing, at annual resolution, of surface water advance or recession from 1984 to 2022. Our approach focuses on identifying persistent changes in surface water features by filtering out seasonal or shorter-term fluctuations. Using a novel algorithm, we mapped the timing of surface water transitions globally, including rivers, lakes, reservoirs, flooded agriculture, and coastal regions. In the dataset each 30 m × 30 m pixel records whether water advance or recession occurred and specifies the year of transition. This dataset enables users to visualize the location, type, and magnitude of changes, while its focus on timing provides new insights into the drivers of water dynamics. Designed for accessibility, the dataset supports scientific research as well as NGOs, policymakers, and water managers in addressing surface water-related challenges.

RevDate: 2025-10-31

Kocifaj M, Falchi F, F Kundracik (2025)

An all-sky light pollution model for global-scale applications that embraces a full range of cloud distributions.

Proceedings of the National Academy of Sciences of the United States of America, 122(44):e2508001122.

Light pollution has been traditionally modeled using clear or completely overcast conditions. Usually, atmospheric conditions are more complex and involve variable cloudiness. To predict light pollution in a realistic atmosphere, we developed a model for computing the artificial night sky brightness over the sky hemisphere, considering the presence of different types of clouds and the cloud fraction. The model is applied to the city of Žilina, Slovakia, which has moderate levels of light pollution and a population of approximately 80,000. We performed simulations for various aerosol optical depths, distances from the city, cloud types, and cloud coverages (from clear to completely overcast). Results show that above the simulated city, the clouds can amplify the zenith artificial radiance by more than 15 times and the irradiance incident at ground level by more than 4 times compared to clear-sky conditions. Outside the city, however, the presence of clouds can have a screening effect, lowering the artificial zenith radiance. Additionally, an analysis performed in photopic units demonstrated that over the urban area, amplification caused by low clouds can generate amplification factors of up to 27 in zenith luminance and 17 in horizontal illuminance. These amplification factors are obtained for a moderately urbanized environment; in highly urbanized areas, even stronger amplification effects might be expected. The model can be used to explain observational data collected by light pollution monitoring networks, particularly at sites where the combination of amplifying and darkening effects of clouds generates ambiguous brightness outcomes.

RevDate: 2025-10-30
CmpDate: 2025-10-30

Yu P, Teng F, Zhu W, et al (2025)

Cloud-edge-device collaborative computing in smart agriculture: architectures, applications, and future perspectives.

Frontiers in plant science, 16:1668545.

Smart agriculture is rapidly evolving in response to growing global demands for food security and sustainable resource management. Cloud-edge-device collaborative computing has emerged as a transformative paradigm, addressing the limitations of traditional centralized architectures by enabling distributed intelligence, real-time processing, and adaptive decision-making. This review provides a comprehensive overview of the architectures, technical characteristics, and application scenarios of cloud-edge-device collaboration in agriculture. Key domains covered include environmental monitoring, intelligent irrigation, UAV-machinery coordination, livestock health management, and pest and disease control. Major challenges such as device heterogeneity, data consistency, resource constraints, and privacy concerns are identified and discussed. Furthermore, six critical research directions are outlined, including intelligent scheduling algorithms, lightweight edge AI, hierarchical data fusion, federated learning, interoperability frameworks, and digital twin technologies. This review aims to serve as a practical reference and theoretical foundation for advancing the design and implementation of next-generation smart agriculture systems.

RevDate: 2025-10-29

Lu Y, Pan Z, Zhang R, et al (2025)

Spatially-enhanced Spiking neural network for efficient point cloud analysis.

Neural networks : the official journal of the International Neural Network Society, 195:108190 pii:S0893-6080(25)01070-6 [Epub ahead of print].

Spiking Neural Networks (SNNs), with their spike-driven mechanism and low power consumption, attract extensive research attention in 2D visual tasks. For computationally intensive 3D point cloud tasks, SNNs exhibit greater potential in addressing the high computational complexity, yet they still leave considerable room for improvement. Unlike 2D data with fixed spatial positions, complex and unordered points contain rich spatial information, posing significant challenges for spike feature modeling. We find that 3D spatial modeling is especially crucial for SNN-based point cloud analysis. Therefore, we analyze the key aspects of spiking spatial perception and introduce parameter-free Spiking Spatial Position Encoding (SSPE) to extract local positional information. In addition, Spiking Cross-feature Graph Position Encoding (SCGPE) is proposed to capture global spatial relationships. Our spatial enhancement is embedded within a framework centered on spiking fully connected layers, referred to as Spiking 3D Network (S3DNet). Extensive experiments demonstrate S3DNet's low energy consumption and state-of-the-art performance among SNNs. Specifically, with only 1.18M parameters, S3DNet achieves a classification accuracy of 92.34% on ModelNet40 and 84.49% on ScanObjectNN. Additionally, we explore the SNN point cloud segmentation task for the first time and achieve an accuracy of 85.0% on ShapeNetPart with only 2.27M parameters. Overall, enhanced by spiking positional encoding, S3DNet demonstrates the potential of SNNs in point cloud analysis.

RevDate: 2025-10-29

Hu H, Liu Y, Liu S, et al (2025)

A Reconfigurable Memristor-Based Computing-in-Memory Circuit for Content-Addressable Memory in Sensor Systems.

Sensors (Basel, Switzerland), 25(20): pii:s25206464.

To meet the demand for energy-efficient and high-performance computing in resource-limited sensor edge applications, this paper presents a reconfigurable memristor-based computing-in-memory circuit for Content-Addressable Memory (CAM). The scheme exploits the analog multi-level resistance characteristics of memristors to enable parallel multi-bit processing, overcoming the constraints of traditional binary computing and significantly improving storage density and computational efficiency. Furthermore, by employing dynamic adjustment of the mapping between input signals and reference voltages, the circuit supports dynamic switching between exact and approximate CAM modes, substantially enhancing functional flexibility. Experimental results demonstrate that the 32 × 36 memristor array based on a TiN/TiOx/HfO2/TiN structure exhibits eight stable and distinguishable resistance states with excellent retention characteristics. In large-scale array simulations, the minimum voltage separation between state-representing waveforms exceeds 6.5 mV, ensuring reliable discrimination by the readout circuit. This work provides an efficient and scalable hardware solution for intelligent edge computing in next-generation sensor networks, particularly suitable for real-time biometric recognition, distributed sensor data fusion, and lightweight artificial intelligence inference, effectively reducing system dependence on cloud communication and overall power consumption.

RevDate: 2025-10-29

Sharobiddinov D, Siddiqui HUR, Saleem AA, et al (2025)

Edge-Based Autonomous Fire and Smoke Detection Using MobileNetV2.

Sensors (Basel, Switzerland), 25(20): pii:s25206419.

Forest fires pose significant threats to ecosystems, human life, and the global climate, necessitating rapid and reliable detection systems. Traditional fire detection approaches, including sensor networks, satellite monitoring, and centralized image analysis, often suffer from delayed response, high false positives, and limited deployment in remote areas. Recent deep learning-based methods offer high classification accuracy but are typically computationally intensive and unsuitable for low-power, real-time edge devices. This study presents an autonomous, edge-based forest fire and smoke detection system using a lightweight MobileNetV2 convolutional neural network. The model is trained on a balanced dataset of fire, smoke, and non-fire images and optimized for deployment on resource-constrained edge devices. The system performs near real-time inference, achieving a test accuracy of 97.98% with an average end-to-end prediction latency of 0.77 s per frame (approximately 1.3 FPS) on the Raspberry Pi 5 edge device. Predictions include the class label, confidence score, and timestamp, all generated locally without reliance on cloud connectivity, thereby enhancing security and robustness against potential cyber threats. Experimental results demonstrate that the proposed solution maintains high predictive performance comparable to state-of-the-art methods while providing efficient, offline operation suitable for real-world environmental monitoring and early wildfire mitigation. This approach enables cost-effective, scalable deployment in remote forest regions, combining accuracy, speed, and autonomous edge processing for timely fire and smoke detection.

RevDate: 2025-10-29

Wei J, Peng Q, Xie Y, et al (2025)

Intelligent Gas Sensors: From Mechanism to Applications.

Sensors (Basel, Switzerland), 25(20): pii:s25206321.

Intelligent gas sensors are indispensable devices widely used in modern society for environmental monitoring, healthcare, the food industry, and public safety. Recent advancements in wireless communication, cloud storage, computing technologies, and artificial intelligence algorithms have significantly enhanced the intelligence level and performance requirements of these sensors. Particularly in the Internet of Things (IoT) environment, flexible and wearable gas sensors are playing an increasingly important role due to their convenience and real-time monitoring capabilities. This review systematically summarizes the latest progress in intelligent gas sensors, covering conceptual frameworks, working principles, and applications across various fields, as well as the construction of IoT networks using sensor arrays. It provides a comprehensive assessment of recent advancements in intelligent gas sensing technologies, highlighting innovations in device architecture, functional mechanisms, and performance in diverse application environments. Special emphasis is placed on transformative developments in flexible and wearable sensor platforms and the enhanced intelligence achieved through the integration of advanced computational algorithms and machine learning techniques. Finally, a summary and future prospects are presented. Despite significant progress, intelligent gas sensors still face challenges related to sensing accuracy, stability, and cost in future applications.

RevDate: 2025-10-29

Singh AR, Rathore RS, Jiang W, et al (2025)

A scalable cloud-integrated AI platform for real-time optimization of EV charging and resilient microgrid energy management.

Scientific reports, 15(1):37692.

The emergence of electric vehicles (EVs) as key elements in the decarbonization of transportation demands a new class of intelligent infrastructure capable of optimizing charging behavior while maintaining power system stability. This paper proposes a novel Scalable Cloud-Based Continuous Monitoring Platform (SC-CMP) designed to support real-time optimization of microgrid operations, particularly in EV-dense and renewable-integrated environments. By fusing cloud computing, machine learning (ML), and artificial intelligence (AI) with Internet of Things (IoT) data acquisition, SC-CMP enables continuous monitoring, predictive scheduling, and adaptive energy management across distributed power networks. Unlike conventional systems, SC-CMP supports both centralized and decentralized microgrid architectures, providing scalable support for dynamic load balancing, V2G coordination, and resilient energy dispatch. Simulation and validation are performed using a real-world dataset of 3395 EV charging sessions across 105 stations, demonstrating SC-CMP's superiority over existing AI/ML baselines. Quantitatively, the platform achieves 97.34% predictive accuracy, 96.81% grid stability improvement, 94.5% resource allocation efficiency, 93% scalability, and 95.2% data privacy assurance. These outcomes position SC-CMP as a comprehensive, adaptive, and cost-effective solution for microgrid-oriented EV integration, offering substantial advances in resilient power distribution, renewable energy utilization, and sustainable electric mobility. The platform serves as a foundation for next-generation microgrid control systems that demand real-time intelligence, scalability, and reliability across evolving smart grid landscapes.

RevDate: 2025-10-28

Karasawa M, Leow CS, Yajima H, et al (2025)

ColabReaction: Accelerating Transition State Searches with Machine Learning Potentials on Google Colaboratory.

Journal of chemical information and modeling [Epub ahead of print].

We have developed a rapid and automated transition state (TS) search method for chemical reactions by combining the double-ended method, Direct MaxFlux (DMF), with machine learning (ML) potentials. Compared to conventional quantum mechanical (QM) scan-based approaches, this method achieves approximately 2 orders of magnitude speedup, typically locating TS structures within 10 min. To promote broad accessibility, the method is implemented on Google Colaboratory (Colab), leveraging its cloud-based GPU environment to eliminate the need for local computational resources. We named this implementation ColabReaction. A modified panel-based graphical user interface is also provided, allowing users to perform TS searches through a web-based interface without writing code. This platform offers a cost-free, user-friendly solution for reaction pathway exploration and mechanistic analysis, particularly for experimental researchers and students without prior experience in computational chemistry. ColabReaction is open-source and freely available at https://ColabReaction.net and https://github.com/BILAB/ColabReaction.

RevDate: 2025-10-28
CmpDate: 2025-10-28

Zhu M, Li J, X Yang (2025)

A Hybrid SAO and RIME Optimizer for Global Optimization and Cloud Task Scheduling.

Biomimetics (Basel, Switzerland), 10(10): pii:biomimetics10100690.

In a global industrial landscape where the digital economy accounts for over 40% of total output, cloud computing technology is reshaping business models at a compound annual growth rate of 19%. This trend has led to an increasing number of cloud computing tasks requiring timely processing. However, most computational tasks are latency-sensitive and cannot tolerate significant delays. This has led to the urgent need for researchers to address the challenge of effectively scheduling cloud computing tasks. This paper proposes a hybrid SAO and RIME optimizer (HSAO) for global optimization and cloud task scheduling problems. First, population initialization based on ecological niche differentiation is proposed to enhance the initial population quality of SAO, enabling it to better explore the solution space. Then, the introduction of the soft frost search strategy and hard frost piercing mechanism from the RIME optimization algorithm enables the algorithm to better escape local optima and accelerate its convergence. Additionally, a population-based collaborative boundary control method is proposed to handle outlier individuals, preventing them from clustering at the boundary and enabling more effective exploration of the solution space. To evaluate the effectiveness of the proposed algorithm, we compared it with 11 other algorithms using the IEEE CEC2017 test set and assessed the differences through statistical analysis. Experimental data demonstrate that the HSAO algorithm exhibits significant advantages. Furthermore, to validate its practical applicability, we applied HSAO to real-world cloud computing task scheduling problems, achieving excellent results and successfully completing the scheduling planning of cloud computing tasks.
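Metaheuristics such as HSAO score candidate schedules with a fitness function; the abstract does not give the paper's exact objective, so the sketch below uses makespan (finish time of the busiest VM), a common choice for cloud task scheduling, with hypothetical task lengths and VM speeds:

```python
def makespan(assignment, task_len, vm_speed):
    """Finish time of the busiest VM for one candidate schedule.
    assignment[i] is the index of the VM that runs task i."""
    finish = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        finish[vm] += task_len[task] / vm_speed[vm]
    return max(finish)

# Three hypothetical tasks on two VMs: one candidate an optimizer would score.
span = makespan([0, 1, 0], task_len=[4.0, 2.0, 6.0], vm_speed=[2.0, 1.0])
```

An optimizer like HSAO would iterate over many such assignment vectors, keeping those that minimize this fitness.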

RevDate: 2025-10-27
CmpDate: 2025-10-27

Kounakis K, Layana Castro PE, Garvi AG, et al (2025)

Automated Analysis of C. elegans Fluorescence Images using SegElegans.

Journal of visualized experiments : JoVE.

Microscopy, particularly of the fluorescent kind, is a frequently used tool in C. elegans research. The analysis of data from microscopy experiments can, however, be quite tedious and time-consuming. Thus, automation is desirable. We developed SegElegans, a two-headed U-net-based convolutional neural network system that is specifically designed for the automated segmentation of worms, even in images with large numbers of touching or overlapping individuals. The first part of SegElegans consists of one encoder and two decoders. The encoder, based on the SmaAt AT model, applies double convolution layers followed by a Convolutional Block Attention Module (CBAM). Both decoders use convolutional LSTMs: one performs semantic segmentation of worm images (body, edge, background, or overlap), while the other extracts a linear skeleton along each worm. The second part is a post-processing algorithm that combines the outputs of the two decoders and uses them to generate accurate instance segmentations. These segmentations can then be fed to ImageJ or other appropriate image analysis tools. Here we present instructions on how to access and run this system. We provide an online, cloud computing-based implementation as well as two methods to use the SegElegans models offline, on a local machine, should the required hardware be available.

RevDate: 2025-10-24
CmpDate: 2025-10-24

Zeba Z, Lartey ST, Durneva P, et al (2025)

Best Practices for Data Modernization Across the United States Public Health System: Scoping Review.

Journal of medical Internet research, 27:e70946 pii:v27i1e70946.

BACKGROUND: The adoption of new technologies and data modernization approaches in public health aims to enhance the use of health data to inform decision-making and improve population health. However, public health departments struggle with legacy systems, siloed data, and privacy concerns, which hamper the adoption of new technology and data sharing with stakeholders. This paper maps how to address these shortcomings by identifying data modernization challenges, initiatives, and progress.

OBJECTIVE: This study aims to characterize evidence for data modernization-associated gaps and best practices in public health.

METHODS: This scoping review was conducted using the 5-stage framework developed by Arksey and O'Malley and was reported according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. A structured search was performed in databases PubMed, Scopus, CINAHL, and PsycINFO, and was complemented by a further search in the Google Scholar search engine, covering publications from January 1, 2019, to April 30, 2024. Eligible studies were peer-reviewed, published in English, and focused on data modernization initiatives within US public health system and reported on best practices, challenges, and outcomes. Search terms combined concepts such as "Data Modernization," "Interoperability," and "Public Health" using Boolean operators. Two reviewers independently screened titles, abstracts, and full texts using Rayyan QCRI, with conflicts resolved through consultation with a third reviewer. Data were extracted into Microsoft Excel and thematically analyzed.

RESULTS: This review analyzed 21 studies focused on public health data modernization. Across the literature, common components included transitioning to cloud-based systems, consolidating fragmented data into unified platforms, applying governance frameworks, and implementing analytics tools to support decision-making. Primary data sources were electronic health records, insurance claims, and disease surveillance registries. Key challenges identified across studies involved data quality issues, lack of interoperability, and limited resources, particularly in underfunded settings. Notable benefits included more timely and accessible data, improved integration across systems, and enhanced analytical capabilities, which collectively support more responsive and effective public health interventions when guided by clear standards and policy alignment.

CONCLUSIONS: Progress hinges on balancing local adaptability with national coordination, improving data governance practices, and enhancing collaboration across institutions. These steps are vital to ensure that public health systems can deliver timely, accurate, and actionable information to support effective public health efforts.

RevDate: 2025-10-23

Rieneck K, Clausen FB, MH Dziegiel (2025)

Detection of fetal cfDNA in maternal blood.

Transfusion medicine (Oxford, England) [Epub ahead of print].

BACKGROUND: An NGS-based assay was developed to determine the presence or absence of paternally inherited genetic variants in cfDNA derived from the fetus in the plasma of pregnant women. This assay can be used in connection with NGS-based prenatal prediction of fetal blood groups in immunised pregnant women. The purpose of the assay is to minimise the risk of a false-negative outcome in the situation with a prediction of the absence of a blood group allele from the fetus.

METHODS: The underlying principle was to examine genetic markers, with each single marker giving a small contribution to the probability of differentiating between individuals (woman and fetus) and combining several markers into one multiplex PCR assay with enough discerning power to determine whether cfDNA from a fetus was present in maternal plasma. If only maternal cfDNA was detected, a prediction of the absence of a fetal blood group might be due to a false-negative result based on insufficient amounts of fetal cfDNA.

RESULTS: The assay did not require knowledge of maternal or paternal genotypes. The genetic markers were deletion or insertion variants and were selected using an SQL algorithm searching all autosomes from gnomAD v3 on the Google Cloud Platform, and included alleles with a frequency close to 0.5 in four different ethnic populations, among several other criteria. The final assay consisted of a multiplex PCR amplification of 22 different biallelic deletion/insertion (delins) markers, each located on a separate chromosome. The assay is informative in >99% of cases with at least one primer set. After experimental testing, an algorithm for scoring test results was defined, and the cut-off was set at <0.15%.

CONCLUSION: Per sample, the control assay required one extra dedicated multiplex PCR, which was eventually spiked into the sequencing reaction. The assay estimated the presence of non-self-genetic variation and may have applications beyond control for the presence of fetal cfDNA.

RevDate: 2025-10-21

Satpathy S, Tripathy U, PK Swain (2025)

Cloud-based DDoS detection using hybrid feature selection with deep reinforcement learning (DRL).

Scientific reports, 15(1):36546.

The escalating size and complexity of Distributed Denial of Service (DDoS) attacks pose significant threats to the security and availability of cloud computing infrastructure. Traditional Intrusion Detection Systems (IDSs), which rely primarily on static or signature-based methods, are inadequate in adapting to the rapidly evolving nature of modern attack vectors. This paper proposes a deep reinforcement learning (DRL)-based framework for real-time DDoS detection in cloud environments. Specifically, the study investigates three actor-critic DRL algorithms: Twin Delayed Deep Deterministic Policy Gradient (TD3), Deep Deterministic Policy Gradient (DDPG), and Advantage Actor-Critic (A2C) to differentiate between benign and malicious network traffic. A robust hybrid feature selection strategy is introduced, combining Boruta (wrapper-based), SHAP (model explainability-based), and cross-validation stability analysis to ensure that selected features are statistically robust, interpretable, and consistent across datasets. The model is trained and evaluated on two benchmark datasets, CICDDoS2019 and UNSW-NB15, following a comprehensive preprocessing pipeline that includes feature selection, normalization, and class imbalance handling through reward shaping and stratified experience replay. Experimental results demonstrate that the TD3 algorithm achieves superior performance, with an average accuracy of 99.12%, an AUC of 99.21%, and an inference latency of 1.87 milliseconds per sample, making it suitable for real-time deployment. An ablation study confirms the critical contribution of each preprocessing component, and SHAP-based analysis is employed to interpret model decisions by identifying key traffic features influencing predictions. The findings underscore the effectiveness, scalability, and interpretability of the proposed DRL-based approach, particularly TD3, in overcoming the limitations of traditional IDSs and providing an adaptive solution for DDoS detection in dynamic cloud environments.
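The cross-validation stability analysis in the hybrid selection step can be sketched as keeping features that rank highly in most folds; the abstract does not specify the paper's exact rule, so the `top_k` and `min_frac` thresholds and the feature names below are illustrative:

```python
from collections import Counter

def stable_features(fold_rankings, top_k, min_frac=0.8):
    """Keep features that appear in the per-fold top-k importance
    ranking in at least min_frac of the folds."""
    counts = Counter()
    for ranking in fold_rankings:
        counts.update(ranking[:top_k])
    need = min_frac * len(fold_rankings)
    return sorted(f for f, c in counts.items() if c >= need)

# Hypothetical per-fold importance rankings (most important first).
folds = [["pkt_rate", "flow_dur", "ttl"],
         ["pkt_rate", "ttl", "flow_dur"],
         ["pkt_rate", "win_size", "flow_dur"]]
selected = stable_features(folds, top_k=2)
```

In the paper this step is combined with Boruta and SHAP importances before the DRL agent sees the data.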

RevDate: 2025-10-21

Zahoor A, Abbasi W, Babar MZ, et al (2025)

Robust IoT security using isolation forest and one class SVM algorithms.

Scientific reports, 15(1):36586.

The rapid growth of cloud computing and the Internet of Things (IoT) has increased the exposure of IoT devices to cyber-attacks due to their resource limitations and lack of standardized security protocols. This paper presents a robust anomaly detection framework for IoT networks using two unsupervised machine learning models: Isolation Forest (IF) and One-Class Support Vector Machine (OCSVM). Leveraging the TON_IoT dataset, we conduct a comparative evaluation of IF, OCSVM, and a lightweight fusion approach called Combined Scoring Anomaly Detection (CSAD). Results show that OCSVM achieves superior precision, recall, and accuracy compared to both IF and CSAD. To ensure reliability, we apply Random Forest-based feature importance analysis, fivefold cross-validation and hyperparameter tuning. Model resilience is further examined under adversarial label-flip poisoning attacks and interpretability is enhanced through Local Interpretable Model-Agnostic Explanations (LIME). The findings demonstrate that lightweight unsupervised algorithms can provide effective, low-resource anomaly detection for modern IoT environments.
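A minimal sketch of the two unsupervised detectors the paper compares, using scikit-learn on synthetic traffic features (the TON_IoT dataset itself is not reproduced here, and the contamination/nu settings are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 3))   # synthetic benign traffic
attacks = rng.normal(6.0, 1.0, size=(25, 3))   # far-off synthetic anomalies
X = np.vstack([benign, attacks])

# Both models return +1 for inliers and -1 for anomalies.
if_pred = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
oc_pred = OneClassSVM(nu=0.05, gamma="scale").fit(benign).predict(X)
```

The shared +1/-1 labeling convention is what makes a score-fusion variant like the paper's CSAD straightforward to build on top of the two detectors.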

RevDate: 2025-10-21

Wang G, Che J, Gao C, et al (2025)

Integrated Neuromorphic Photonic Computing for AI Acceleration: Emerging Devices, Network Architectures, and Future Paradigms.

Advanced materials (Deerfield Beach, Fla.) [Epub ahead of print].

Deep learning stands as a cornerstone of modern artificial intelligence (AI), revolutionizing fields from computer vision to large language models (LLMs). However, as electronic hardware approaches fundamental physical limits-constrained by transistor scaling challenges, von Neuman architecture, and thermal dissipation-critical bottlenecks emerge in computational density and energy efficiency. To bridge the gap between algorithmic ambition and hardware limitations, photonic neuromorphic computing emerges as a transformative candidate, exploiting light's inherent parallelism, sub-nanosecond latency, and near-zero thermal losses to natively execute matrix operations-the computational backbone of neural networks. Photonic neural networks (PNNs) have achieved influential milestones in AI acceleration, demonstrating single-chip integration of both inference and in situ training-a leap forward with profound implications for next-generation computing. This review synthesizes a decade of progress in PNNs core components, critically analyzing advances in linear synaptic devices, nonlinear neuron devices, and network architectures, summarizing their respective strengths and persistent challenges. Furthermore, application-specific requirements are systematically analyzed for PNN deployment across computational regimes: cloud-scale and edge/client-side AIs. Finally, actionable pathways are outlined for overcoming material- and system-level barriers, emphasizing topology-optimized active/passive devices and advanced packaging strategies. These multidisciplinary advances position PNNs as a paradigm-shifting platform for post-Moore AI hardware.

RevDate: 2025-10-17

Upadhiyay A, A Jain (2025)

Cyber resilient framework with energy efficient swarm routing and ensemble threat detection in fog assisted wireless sensor networks.

Scientific reports, 15(1):36461.

The rapid growth of Wireless Sensor Networks (WSNs) and their integration with fog computing have enabled faster data processing and reduced reliance on cloud infrastructures. However, these networks remain constrained by limited energy resources, increased latency under dynamic traffic, and heightened vulnerability to cyberattacks. Traditional routing protocols typically optimize either energy efficiency or security, but rarely address both in a unified and adaptive manner. This work proposes a cyber-resilient, energy-optimized routing framework for fog-enabled WSNs that integrates a modified Ant Colony Optimization (ACO) algorithm with an ensemble-based Intrusion Detection System (IDS). The routing layer employs a multi-objective cost function that jointly considers distance, residual energy, and security risk. To enhance adaptability, CatBoost is deployed at energy-constrained sensor nodes for local energy and density assessment, while XGBoost operates at fog nodes to evaluate global path quality and congestion. The IDS ensemble—comprising Support Vector Machines (SVM), k-Nearest Neighbours (KNN), and Long Short-Term Memory (LSTM) networks—detects Denial-of-Service (DoS), Probe, R2L, and U2R attacks in real time. Importantly, detected threats immediately influence routing decisions, enabling compromised links to be bypassed without disrupting network operations. Extensive MATLAB simulations show that the proposed framework achieves 96.5% energy savings, an 85.83% latency reduction, and an 89% intrusion detection rate, validated through statistical analysis across multiple runs. By transforming IDS from a passive monitoring tool into an active routing controller, this work delivers a secure, adaptive, and energy-efficient solution for dynamic and resource-constrained IoT and WSN environments.
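The multi-objective routing cost and the ACO-style next-hop choice described above can be sketched as follows; the weights, the inverse residual-energy term, and the two-candidate topology are hypothetical illustrations, not the paper's exact formulation:

```python
import random

# Hypothetical per-link cost combining the three factors named in the abstract:
# distance, residual energy, and security risk. The weights are illustrative.
def link_cost(distance, residual_energy, risk, w_d=0.4, w_e=0.4, w_s=0.2):
    # Lower residual energy and higher risk both raise the cost.
    return w_d * distance + w_e * (1.0 / max(residual_energy, 1e-6)) + w_s * risk

# ACO-style next-hop choice: probability proportional to
# pheromone^alpha * (1/cost)^beta over the candidate neighbours.
def choose_next_hop(pheromone, costs, alpha=1.0, beta=2.0, rng=random.Random(0)):
    nodes = list(costs)
    weights = [pheromone[n] ** alpha * (1.0 / costs[n]) ** beta for n in nodes]
    r = rng.uniform(0, sum(weights))
    acc = 0.0
    for node, w in zip(nodes, weights):
        acc += w
        if acc >= r:
            return node
    return nodes[-1]

# Node B: farther but healthy and low-risk; node C: closer but depleted and risky.
costs = {"B": link_cost(10, 0.9, 0.1), "C": link_cost(8, 0.2, 0.8)}
pher = {"B": 1.0, "C": 1.0}
print(choose_next_hop(pher, costs))
```

With equal pheromone, the healthy low-risk link B gets the lower cost and therefore the higher selection probability, which is how detected threats can steer traffic away from compromised links.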

RevDate: 2025-10-18

Lockee B, Vandervelden CA, Tilden DR, et al (2025)

Establishment of a Diabetes-Tailored Data Intelligence Platform Enhances Clinical Care, Enables Risk-Based Monitoring, and Facilitates Population-Health-Based Approaches at a Pediatric Diabetes Network.

Journal of diabetes science and technology [Epub ahead of print].

BACKGROUND: Patient-generated health data (PGHD) represents an opportunity to customize care, particularly in type 1 diabetes (T1D) care, where continuous glucose monitor (CGM) and insulin pump usage continues to rise. Previous solutions for integrating CGM data into the electronic health record (EHR) have been limited in their ability to integrate data from multiple sources and data streams, ensure data fidelity, and rapidly adapt to changes in data output from numerous vendors. We developed a novel data infrastructure contained outside of the EHR to provide an alternative approach to PGHD integration, enable diabetes centers to identify and predict risk, and facilitate research and quality improvement.

METHODS: We identified three key capabilities: ingesting and storing a wide variety of data, refining raw data into actionable insights, and visualizing and reporting to decision makers. To meet these requirements, we built a data intelligence platform we coined the diabetes data dock (D-data dock) in the Microsoft Azure cloud platform.

RESULTS: The D-data dock houses approximately 100 million CGM measurements, one million clinical events and insulin bolus records, and a near complete EHR record covering approximately 3000 patients per year from 2016 to 2023. We provide case studies detailing how the D-data dock allows timely monitoring of CGM data, enables novel study designs, and powers machine-learning-informed supplemental care interventions.

CONCLUSIONS: The D-data dock is a novel approach to harnessing disparate data streams to improve patient care, enable timely interventions, and drive innovation to improve the lives and care of people with T1D.

RevDate: 2025-10-16

Sun F, Guo L, Meng Y, et al (2025)

Ultra-simplified fabrication of all-silver memristor arrays.

Nanoscale advances [Epub ahead of print].

Brain-inspired neuromorphic computing strives to emulate the human brain's remarkable capabilities, including parallel information processing, adaptive learning, and cognitive inference, while maintaining ultra-low power consumption. The exponential progress in cloud computing and supercomputing technologies has generated an increasing demand for highly integrated electronic storage systems with enhanced performance. To address the challenge of tedious fabrication, we offer a feasible strategy: using woven silver electrodes combined with in situ formed silver oxide insulating layers to create a high-performance two-terminal memristor array configuration. This memristor possesses a high ON/OFF ratio (above 10[6]) and good durability (200 cycles). Moreover, its innovative weaving-type configuration enables higher integration density while maintaining conformal attachment onto the skin. Our ultra-simplified fabrication strategy provides a novel approach for streamlining fabrication processes, enabling advanced device integration and system miniaturization.

RevDate: 2025-10-16

Khan HM, Jabeen F, Khan A, et al (2025)

IoT-Enabled Fog-Based Secure Aggregation in Smart Grids Supporting Data Analytics.

Sensors (Basel, Switzerland), 25(19): pii:s25196240.

The Internet of Things (IoT) has transformed multiple industries, providing significant potential for automation, efficiency, and enhanced decision-making. The incorporation of IoT and data analytics in smart grids represents a groundbreaking opportunity for the energy sector, delivering substantial advantages in efficiency, sustainability, and customer empowerment. This integration enables smart grids to autonomously monitor energy flows and adjust to fluctuations in energy demand and supply in a flexible and real-time fashion. Statistical analytics, as a fundamental component of data analytics, provides the necessary tools and techniques to uncover patterns, trends, and insights within datasets. Nevertheless, it is crucial to address privacy and security issues to fully maximize the potential of data analytics in smart grids. This paper makes several significant contributions to the literature on secure, privacy-aware aggregation schemes in smart grids. First, we introduce a Fog-enabled Secure Data Analytics Operations (FESDAO) scheme, which offers a distributed architecture incorporating robust security features such as secure aggregation, authentication, fault tolerance, and resilience against insider threats. The scheme achieves privacy during data aggregation through a modified Boneh-Goh-Nissim cryptographic scheme along with other mechanisms. Second, FESDAO also supports statistical analytics on metering data at the cloud control center and fog node levels. FESDAO ensures reliable aggregation and accurate analytical results, even in scenarios where smart meters fail to report data, thereby preserving both computational accuracy and latency. We further provide comprehensive security analyses to demonstrate that the proposed approach effectively supports data privacy, source authentication, fault tolerance, and resilience against false data injection (FDI), replay, and insider collusion attacks. Lastly, we offer thorough performance evaluations to illustrate the efficiency of the suggested scheme in comparison to current state-of-the-art schemes, considering encryption, computation, aggregation, decryption, and communication costs.
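As one illustration of how an aggregator can learn only the total while individual readings stay hidden, here is a simplified additive-masking sketch; this is not the modified Boneh-Goh-Nissim scheme the paper uses, just the aggregate-only idea in toy form:

```python
import random

rng = random.Random(42)
readings = [12, 7, 31, 5]   # per-meter consumption values (toy data)
MOD = 2**32

# Each meter adds a random mask; the masks are chosen to cancel in the sum,
# so the aggregator sees only masked values yet recovers the exact total.
masks = [rng.randrange(MOD) for _ in readings[:-1]]
masks.append((-sum(masks)) % MOD)

masked = [(r + m) % MOD for r, m in zip(readings, masks)]
total = sum(masked) % MOD
print(total)  # equals sum(readings) = 55
```

Each individual `masked` value is statistically independent of the reading it hides, yet the sum of all masked values reproduces the true aggregate, the property a secure-aggregation cryptosystem provides with stronger guarantees.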

RevDate: 2025-10-16
CmpDate: 2025-10-16

Qian Y, KL Siau (2025)

Advances in IoT, AI, and Sensor-Based Technologies for Disease Treatment, Health Promotion, Successful Ageing, and Ageing Well.

Sensors (Basel, Switzerland), 25(19): pii:s25196207.

Recent advancements in the Internet of Things (IoT) and artificial intelligence (AI) are unlocking transformative opportunities across society. One of the most critical challenges addressed by these technologies is the ageing population, which presents mounting concerns for healthcare systems and quality of life worldwide. By supporting continuous monitoring, personal care, and data-driven decision-making, IoT and AI are shifting healthcare delivery from a reactive approach to a proactive one. This paper presents a comprehensive overview of IoT-based systems with a particular focus on the Internet of Healthcare Things (IoHT) and their integration with AI, referred to as the Artificial Intelligence of Things (AIoT). We illustrate the operating procedures of IoHT systems in detail. We highlight their applications in disease management, health promotion, and active ageing. Key enabling technologies, including cloud computing, edge computing architectures, machine learning, and smart sensors, are examined in relation to continuous health monitoring, personalized interventions, and predictive decision support. This paper also indicates potential challenges that IoHT systems face, including data privacy, ethical concerns, and technology transition and aversion, and it reviews corresponding defense mechanisms from perception, policy, and technology levels. Future research directions are discussed, including explainable AI, digital twins, metaverse applications, and multimodal sensor fusion. By integrating IoT and AI, these systems offer the potential to support more adaptive and human-centered healthcare delivery, ultimately improving treatment outcomes and supporting healthy ageing.

RevDate: 2025-10-16

Qi Y, Du Y, Guo Y, et al (2025)

Task Offloading and Resource Allocation Strategy in Non-Terrestrial Networks for Continuous Distributed Task Scenarios.

Sensors (Basel, Switzerland), 25(19): pii:s25196195.

Leveraging non-terrestrial networks for edge computing is crucial for the development of 6G, the Internet of Things, and ubiquitous digitalization. In such scenarios, diverse tasks often exhibit continuously distributed attributes, while existing research predominantly relies on qualitative thresholds for task classification, failing to accommodate quantitatively continuous task requirements. To address this issue, this paper models a multi-task scenario with continuously distributed attributes and proposes a three-tier cloud-edge collaborative offloading architecture comprising UAV-based edge nodes, LEO satellites, and ground cloud data centers. We further formulate a system cost minimization problem that integrates UAV network load balancing and satellite energy efficiency. To solve this non-convex, multi-stage optimization problem, a two-layer multi-type-agent deep reinforcement learning (TMDRL) algorithm is developed. This algorithm categorizes agents according to their functional roles in the Markov decision process and jointly optimizes task offloading and resource allocation by integrating DQN and DDPG frameworks. Simulation results demonstrate that the proposed algorithm reduces system cost by 7.82% compared to existing baseline methods.

RevDate: 2025-10-16

Ma Y, Zhao Y, Hu Y, et al (2025)

Multi-Agent Deep Reinforcement Learning for Joint Task Offloading and Resource Allocation in IIoT with Dynamic Priorities.

Sensors (Basel, Switzerland), 25(19): pii:s25196160.

The rapid growth of Industrial Internet of Things (IIoT) terminals has resulted in tasks exhibiting increased concurrency, heterogeneous resource demands, and dynamic priorities, significantly increasing the complexity of task scheduling in edge computing. Cloud-edge-end collaborative computing leverages cross-layer task offloading to alleviate edge node resource contention and improve task scheduling efficiency. However, existing methods generally neglect the joint optimization of task offloading, resource allocation, and priority adaptation, making it difficult to balance task execution and resource utilization under resource-constrained and competitive conditions. To address this, this paper proposes a two-stage dynamic-priority-aware joint task offloading and resource allocation method (DPTORA). In the first stage, an improved Multi-Agent Proximal Policy Optimization (MAPPO) algorithm integrated with a Priority-Gated Attention Module (PGAM) enhances the robustness and accuracy of offloading strategies under dynamic priorities; in the second stage, the resource allocation problem is formulated as a single-objective convex optimization task and solved globally using the Lagrangian dual method. Simulation results show that DPTORA significantly outperforms existing multi-agent reinforcement learning baselines in terms of task latency, energy consumption, and the task completion rate.

RevDate: 2025-10-16

Mushtaq S, Mohsin M, MM Mushtaq (2025)

A Systematic Literature Review on the Implementation and Challenges of Zero Trust Architecture Across Domains.

Sensors (Basel, Switzerland), 25(19): pii:s25196118.

The Zero Trust Architecture (ZTA) model has emerged as a foundational cybersecurity paradigm that eliminates implicit trust and enforces continuous verification across users, devices, and networks. This study presents a systematic literature review of 74 peer-reviewed articles published between 2016 and 2025, spanning domains such as cloud computing (24 studies), Internet of Things (11), healthcare (7), enterprise and remote work systems (6), industrial and supply chain networks (5), mobile networks (5), artificial intelligence and machine learning (5), blockchain (4), big data and edge computing (3), and other emerging contexts (4). The analysis shows that authentication, authorization, and access control are the most consistently implemented ZTA components, whereas auditing, orchestration, and environmental perception remain underexplored. Across domains, the main challenges include scalability limitations, insufficient lightweight cryptographic solutions for resource-constrained systems, weak orchestration mechanisms, and limited alignment with regulatory frameworks such as GDPR and HIPAA. Cross-domain comparisons reveal that cloud and enterprise systems demonstrate relatively mature implementations, while IoT, blockchain, and big data deployments face persistent performance and compliance barriers. Overall, the findings highlight both the progress and the gaps in ZTA adoption, underscoring the need for lightweight cryptography, context-aware trust engines, automated orchestration, and regulatory integration. This review provides a roadmap for advancing ZTA research and practice, offering implications for researchers, industry practitioners, and policymakers seeking to enhance cybersecurity resilience.

RevDate: 2025-10-16

Yildirim N, Cao M, Yun M, et al (2025)

EcoWild: Reinforcement Learning for Energy-Aware Wildfire Detection in Remote Environments.

Sensors (Basel, Switzerland), 25(19): pii:s25196011.

Early wildfire detection in remote areas remains a critical challenge due to limited connectivity, intermittent solar energy, and the need for autonomous, long-term operation. Existing systems often rely on fixed sensing schedules or cloud connectivity, making them impractical for energy-constrained deployments. We introduce EcoWild, a reinforcement learning-driven cyber-physical system for energy-adaptive wildfire detection on solar-powered edge devices. EcoWild combines a decision tree-based fire risk estimator, lightweight on-device smoke detection, and a reinforcement learning agent that dynamically adjusts sensing and communication strategies based on battery levels, solar input, and estimated fire risk. The system models realistic solar harvesting, battery dynamics, and communication costs to ensure sustainable operation on embedded platforms. We evaluate EcoWild using real-world solar, weather, and fire image datasets in a high-fidelity simulation environment. Results show that EcoWild consistently maintains responsiveness while avoiding battery depletion under diverse conditions. Compared to static baselines, it achieves 2.4× to 7.7× faster detection, maintains moderate energy consumption, and avoids system failure due to battery depletion across 125 deployment scenarios.

RevDate: 2025-10-16

Zhang F, Xia X, Gao H, et al (2025)

A Blockchain-Enabled Multi-Authority Secure IoT Data-Sharing Scheme with Attribute-Based Searchable Encryption for Intelligent Systems.

Sensors (Basel, Switzerland), 25(19): pii:s25195944.

With the advancement of technologies such as 5G, digital twins, and edge computing, the Internet of Things (IoT), as a critical component of intelligent systems, is profoundly driving the transformation of various industries toward digitalization and intelligence. However, the exponential growth of network connection nodes has expanded the attack surface of IoT devices. IoT devices with limited storage and computing resources struggle to cope with new types of attacks and lack mature authorization and authentication mechanisms, and traditional data-sharing solutions find it difficult to meet the security requirements of cloud-based shared data. Therefore, this paper proposes a blockchain-based multi-authority IoT data-sharing scheme with attribute-based searchable encryption for intelligent systems (BM-ABSE), aiming to address the security, efficiency, and verifiability issues of data sharing in an IoT environment. Our scheme decentralizes management responsibilities through a multi-authority mechanism to avoid the risk of single-point failure. By utilizing the immutability and smart contract functions of blockchain, the scheme ensures data integrity and the reliability of search results. Meanwhile, some decryption computing tasks are outsourced to the cloud to reduce the computing burden on IoT devices. Theoretical analysis demonstrates that our scheme meets the static security and IND-CKA security requirements of the standard model, effectively defending against theft or tampering of ciphertexts and keywords by attackers. Experimental simulation results indicate that the scheme achieves excellent computational efficiency on resource-constrained IoT devices, with core algorithm execution times maintained in milliseconds and a controllable performance overhead as the number of attributes increases.

RevDate: 2025-10-15

Kayalvili S, Senthilkumar R, Yasotha S, et al (2025)

An optimized resource allocation in cloud using prediction enabled reinforcement learning.

Scientific reports, 15(1):36088.

Due to its many applications, cloud computing has gained popularity in recent years. It is simple and fast to access shared resources at any time from any location. Cloud-based service platforms need adaptive resource allocation (RA) to provide Quality-of-Service (QoS) while lowering resource costs under workloads and service demands that change over time. The constantly shifting system states make resource allocation enormously challenging. Older methods often require specialist knowledge, which may result in poor adaptability, and they target environments with fixed workloads, so they cannot be used successfully in real-world contexts with fluctuating workloads. This research therefore proposes a Prediction-enabled feedback system to solve these problems within a reinforcement-learning-based RA (PCRA) framework. First, it builds a more accurate Q-value prediction model to forecast the value of management actions under various system states. For accurate Q-value prediction, the model uses several prediction learners together with the Q-learning method. In addition, an improved optimization algorithm, the Feature Selection Whale Optimization Algorithm (FSWOA), is used to discover unbiased resource allocations. Simulations based on practical scenarios using CloudStack and RUBiS benchmarks demonstrate the effectiveness of PCRA for real-time RA: the framework achieves 94.7% Q-value prediction accuracy and reduces SLA violations and resource cost by 17.4% compared to traditional round-robin scheduling.
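The idea of averaging several prediction learners to forecast Q-values can be sketched with scikit-learn; the toy state features, the underlying Q-value function, and the choice of two learners are illustrative assumptions, not the PCRA framework itself:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
# Toy (state, action) features with a noisy underlying Q-value, standing in
# for Q-tables logged while an RL-based resource allocator runs.
X = rng.uniform(0, 1, size=(200, 3))          # e.g. load, latency, queue length
q_true = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]
y = q_true + rng.normal(0, 0.05, size=200)    # observed noisy Q-values

# Several prediction learners, as in the abstract; their forecasts are averaged.
learners = [LinearRegression(), DecisionTreeRegressor(max_depth=5, random_state=0)]
preds = np.mean([m.fit(X, y).predict(X) for m in learners], axis=0)
print(float(np.abs(preds - q_true).mean()))   # mean absolute prediction error
```

Averaging the learners damps the individual models' errors, which is the usual motivation for using an ensemble rather than a single Q-value predictor.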

RevDate: 2025-10-14

Wit N, Bertlin J, Hynes-Allen A, et al (2025)

Mapping SET1B chromatin interactions with DamID using DamMapper, a comprehensive Snakemake workflow.

BMC genomics, 26(1):914.

BACKGROUND: DNA adenine methyltransferase identification followed by sequencing (DamID-seq) is a powerful method used to map genome-wide chromatin-protein interactions. However, the bioinformatic analysis of DamID-seq data presents significant challenges due to the inherent complexities of the data and a notable lack of comprehensive software solutions for data-processing and downstream analysis.

RESULTS: To address these challenges, we present a comprehensive bioinformatic workflow for DamID-seq data analysis, DamMapper, using the Snakemake workflow management system. Key features include straightforward processing of multiple biological replicates, visualisation of quality control, such as correlation heatmaps and principal component analysis (PCA), and robust code quality maintained through continuous integration (CI). Reproducibility is ensured across diverse computational environments, including cloud computing and high-performance computing (HPC) clusters, through the implementation of software environments (Conda) and containerisation (Docker/Apptainer). We validate this workflow using a previously published DamID-seq dataset and apply it to analyse novel datasets for proteins involved in the hypoxia response, specifically the transcription factor HIF-1α and the histone methyltransferase SET1B. This application reveals a strong concordance between our HIF-1α DamID-seq results and ChIP-seq data, and importantly, provides the first genome-wide DNA binding map for SET1B.

CONCLUSIONS: This work provides a validated, reproducible, and feature-rich workflow that overcomes common hurdles in DamID-seq data analysis. By streamlining the processing and ensuring robustness, DamMapper facilitates reliable analysis and enables new biological discoveries, as demonstrated by the characterization of SET1B binding sites. The workflow is available under an MIT license at https://github.com/niekwit/damid-seq.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12864-025-12075-x.

RevDate: 2025-10-15
CmpDate: 2025-10-15

Chow W, Venkataraman N, Oh HC, et al (2025)

Building an artificial intelligence and digital ecosystem: a smart hospital's data-driven path to healthcare excellence.

Singapore medical journal, 66(Suppl 1):S75-S83.

Hospitals worldwide recognise the importance of data and digital transformation in healthcare. We traced a smart hospital's data-driven journey to build an artificial intelligence and digital ecosystem (AIDE) to achieve healthcare excellence. We measured the impact of data and digital transformation on patient care and hospital operations, identifying key success factors, challenges, and opportunities. The use of data analytics and data science, robotic process automation, AI, cloud computing, Medical Internet of Things and robotics were stand-out areas for a hospital's data-driven journey. In the future, the adoption of a robust AI governance framework, enterprise risk management system, AI assurance and AI literacy are critical for success. Hospitals must adopt a digital-ready, digital-first strategy to build a thriving healthcare system and innovate care for tomorrow.

RevDate: 2025-10-14

Padmavathi V, R Saminathan (2025)

A federated edge intelligence framework with trust based access control for secure and privacy preserving IoT systems.

Scientific reports, 15(1):35832.

The rapid growth of Internet of Things (IoT) ecosystems has generated substantial industrial progress, yet it has also introduced intricate security and privacy issues. IoT deployments cannot be properly supported with traditional cloud-centric approaches because they require improved bandwidth utilization, reduced latency, and enhanced trust mechanisms. This research proposes the Artificial Intelligence-Driven Secure Edge Trust Framework (AI-SET), a comprehensive edge-based security design that connects network intrusion detection with federated learning capabilities to implement adaptive trust-based access control for IoT system protection. The AI-SET framework comprises three central elements. First, an Edge-Resident Intrusion Detection System performs real-time anomaly detection at the network edge with lightweight AI algorithms to minimize dependency on centralized systems. Second, privacy-preserving federated learning utilizes a modified FedAvg algorithm supported by differential privacy and homomorphic encryption; this allows models to be trained across decentralized sources that contain heterogeneous and non-identically distributed (non-IID) data. Third, a dynamic access control system utilizes trust assessment models to evaluate device context and behavior for real-time permission evaluations. The framework is validated with the NAB dataset on Jetson Nano and Raspberry Pi edge devices, using tools including Suricata, Metasploit, and the WAZUH threat platform. Evidence shows that AI-SET achieves higher accuracy in intrusion detection, enhanced communication performance, and superior access control security compared to standard approaches, and that it resists model poisoning attacks and unauthorized system breaches while maintaining low operational costs and preserving data privacy. The research presents AI-SET as an adaptable, resilient, and privacy-sensitive security framework for future IoT systems, through its holistic combination of edge intelligence, secure network operations, and automated trust management.
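The federated averaging step at the heart of such a framework can be sketched in a few lines; the Gaussian noise term is a crude illustrative stand-in for the paper's differential-privacy and encryption machinery, not the modified FedAvg algorithm itself:

```python
import numpy as np

def fedavg(client_weights, client_sizes, noise_std=0.0, rng=np.random.default_rng(0)):
    """Size-weighted average of client model parameters (standard FedAvg step).
    noise_std > 0 adds Gaussian noise as a toy stand-in for a
    differential-privacy mechanism (illustrative only)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                   # (n_clients, n_params)
    avg = (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()
    return avg + rng.normal(0.0, noise_std, size=avg.shape)

# Three edge devices with different local dataset sizes (toy parameter vectors).
w = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
global_w = fedavg(w, client_sizes=[10, 30, 60])
print(global_w)  # size-weighted mean of the three parameter vectors
```

The server only ever sees parameter vectors, not raw sensor data, which is the basic privacy argument for federated training on edge devices.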

RevDate: 2025-10-14

Sina EM, Limage K, Anisman E, et al (2025)

Automated Machine Learning Differentiation of Pituitary Macroadenomas and Parasellar Meningiomas Using Preoperative Magnetic Resonance Imaging.

Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery [Epub ahead of print].

INTRODUCTION: Automated machine learning (AutoML) is an artificial intelligence tool that facilitates image recognition model development. This study evaluates the diagnostic performance of AutoML in differentiating pituitary macroadenomas (PA) and parasellar meningiomas (PSM) using preoperative MRI.

STUDY DESIGN: Model development and retrospective analysis.

SETTING: Single academic institution with external validation from a public dataset.

METHODS: 1628 contrast-enhanced T1-weighted MRI sequences from 116 patients (997 PA, 631 PSM) were uploaded to Google Cloud VertexAI AutoML. A single-label classification model was developed using an 80%-10%-10% training-validation-testing split. External validation included 930 PA and 29 PSM images. A subanalysis evaluated the classification of anatomical PSM subtypes (planum sphenoidale [PS] versus tuberculum sellae [TS]). Performance metrics were calculated at 0.25, 0.5, and 0.75 confidence thresholds.

RESULTS: At a 0.5 confidence threshold, the AutoML model achieved an aggregate AUPRC of 0.997, with F1 score, sensitivity, specificity, PPV, and NPV equilibrated to 97.55%. The model achieved strong performance in classifying PA (F1 = 97.98%; sensitivity = 97.00%; specificity = 98.96%) and PSM (F1 = 96.88%; sensitivity = 98.41%; specificity = 95.53%). External validation demonstrated high accuracy (AUPRC = 0.999 for PA; 1.000 for PSM). The PSM subanalysis yielded an aggregate F1 score of 97.30%, with PS and TS classified at 97.44% and 97.14%, respectively.

CONCLUSION: Our customized AutoML model accurately differentiates PAs from PSMs using preoperative MRIs and outperforms traditional ML. It is the first AutoML model specifically trained for parasellar tumor classification. Its highly automated, user-friendly design may facilitate scalable integration into clinical practice.

RevDate: 2025-10-14

Wang Z, Veniaminovna Kalugina O, Vladimirovna Volichenko O, et al (2025)

Correction: Sustainability in construction economics as a barrier to cloud computing adoption in small-scale Building projects.

Scientific reports, 15(1):35562 pii:10.1038/s41598-025-23126-4.

RevDate: 2025-10-13

Tian Q, G Li (2025)

Fog computing based cost optimization for university governance.

Scientific reports, 15(1):35691.

This study presents a new architecture based on fog computing to reduce the heavy cost of university governance by enhancing network performance and optimizing resource utilization. The solution overcomes the inherent shortcomings of traditional cloud-based systems, which incur exorbitant costs and delayed response times, especially in distributed computing arrangements for online education. The main contribution of this work is a low-cost, fog-based model that combines a new cost function for optimizing resource allocation with a fuzzy inference system for intelligent error handling and resource prioritization. The approach can significantly alleviate the computational burden in the evolving paradigm of online education and distributed computing in on-premise or hybrid cloud environments. Simulation results, obtained in a MATLAB environment, validate the cost minimization and resource optimization achieved in university networks by the proposed solution. This study is an important addition to the existing knowledge base on the use of fog computing in university governance, and it lays the groundwork for future research into optimizing administrative processes and lowering the costs of educational institutions.

RevDate: 2025-10-13

Fonari A, Elliott SD, Brock CN, et al (2025)

Finding the temperature window for atomic layer deposition of ruthenium metal via efficient phonon calculations.

Physical chemistry chemical physics : PCCP [Epub ahead of print].

We investigate the use of first principles thermodynamics based on periodic density functional theory (DFT) to examine the gas-surface chemistry of an oxidized ruthenium surface reacting with hydrogen gas. This reaction system features in the growth of ultrathin Ru films by atomic layer deposition (ALD). We reproduce and rationalize the experimental observation that ALD of the metal from RuO4 and H2 occurs only in a narrow temperature window above 100 °C, and this validates the approach. Specifically, the temperature-dependent reaction free energies are computed for the competing potential reactions of the H2 reagent, and show that surface oxide is reduced to water, which is predicted to desorb thermally above 113 °C, exposing bare Ru that can further react to surface hydride, and hence deposit Ru metal. The saturating coverages give a predicted growth rate of 0.7 Å per cycle of Ru. At lower temperatures, free energies indicate that water is retained at the surface and reacts with the RuO4 precursor to form an oxide film, also in agreement with experiment. The temperature dependence is obtained with the required accuracy by computing Gibbs free energy corrections from phonon calculations within the harmonic approximation. Surface phonons are computed rapidly and efficiently by parallelization on a cloud architecture within the Schrödinger Materials Science Suite. We also show that rotational and translational entropy of gases dominate the free energies, permitting an alternative approach without phonon calculations, which would be suitable for rapid pre-screening of gas-surface chemistries.
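The Gibbs free energy corrections described above follow the standard harmonic-approximation formula F_vib(T) = sum_i [e_i/2 + kB*T*ln(1 - exp(-e_i/(kB*T)))], where e_i are the phonon mode energies. A minimal sketch with illustrative mode energies (not the paper's computed surface phonons):

```python
import math

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def harmonic_free_energy(mode_energies_eV, T):
    """Vibrational free energy (eV) in the harmonic approximation.
    Mode energies are hbar*omega_i given directly in eV."""
    zpe = sum(e / 2 for e in mode_energies_eV)           # zero-point energy
    if T == 0:
        return zpe
    kt = K_B * T
    thermal = sum(kt * math.log(1 - math.exp(-e / kt)) for e in mode_energies_eV)
    return zpe + thermal

modes = [0.05, 0.10, 0.20]                 # illustrative mode energies, eV
print(harmonic_free_energy(modes, 0))      # zero-point energy only
print(harmonic_free_energy(modes, 386.15)) # ~113 degrees C, the desorption threshold
```

The thermal term is always negative and grows in magnitude with temperature, which is how entropy eventually tips reaction free energies toward desorption in a calculation like the paper's.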

RevDate: 2025-10-11
CmpDate: 2025-10-11

Gakh RV, Fedoniuk LY, Furman OY, et al (2025)

The role of health monitoring technologies in optimising athletes' self-regulation.

Wiadomosci lekarskie (Warsaw, Poland : 1960), 78(8):1544-1553.

OBJECTIVE: To analyse current approaches to monitoring sports performance and the health of athletes by developing an intelligent system that combines wearable devices, cloud computing, and deep learning methods.

PATIENTS AND METHODS: The paper analyses related literature in sports medicine, informatics, and artificial intelligence. The work is based on studying the effectiveness of devices such as the Fitbit Charge 5, Garmin Venu 2, Samsung Galaxy Watch 4, and Oura Ring Gen 3.

RESULTS: The results showed that such systems provide high accuracy in predicting athletes' health status. The presented models allow real-time tracking of physiological parameters, analysing the data, and generating health reports for prompt adjustment of the training process. These devices enable systematic monitoring of various indicators, such as heart rate, stress level, sleep quality, and overall physical activity. Reading these indicators allows athletes to receive objective information about their condition. This, in turn, contributes to more effective training planning, recovery, and injury prevention.

CONCLUSION: Integrating wearables, cloud computing, and deep learning methods, as implemented in the latest devices, is a promising approach to sports health monitoring. The analysed devices can improve athletes' performance, prevent injuries, and optimise training programmes.

RevDate: 2025-10-08
CmpDate: 2025-10-08

Hosseinzadeh F, Liu G, Tsai E, et al (2025)

Utilizing a publicly accessible automated machine learning platform to enable diagnosis before tumor surgery.

Communications medicine, 5(1):419.

BACKGROUND: In benign tumors with potential for malignant transformation, sampling error during pre-operative biopsy can significantly change patient counseling and surgical planning. Sinonasal inverted papilloma (IP) is the most common benign soft tissue tumor of the sinuses, yet it can undergo malignant transformation to squamous cell carcinoma (IP-SCC), for which the planned surgery could be drastically different. Artificial intelligence (AI) could potentially help with this diagnostic challenge.

METHODS: CT images from 19 institutions were used to train the Google Cloud Vertex AI platform to distinguish between IP and IP-SCC. The model was evaluated on a holdout test dataset of images from patients whose data were not used for training or validation. Performance metrics of area under the curve (AUC), sensitivity, specificity, accuracy, and F1 were used to assess the model.

RESULTS: CT image data from 958 patients, comprising 41,099 individual images, were labeled to train and validate the deep learning image classification model. The model demonstrated 95.8% sensitivity in correctly identifying IP-SCC cases versus IP, while specificity was robust at 99.7%. Overall, the model achieved an accuracy of 99.1%.

CONCLUSIONS: A deep automated machine learning model, created from a publicly available artificial intelligence tool, using pre-operative CT imaging alone, identified malignant transformation of inverted papilloma with excellent accuracy.
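The sensitivity, specificity, accuracy, and F1 values reported above all follow directly from a binary confusion matrix. A small sketch with made-up counts (not the study's data) shows how each is derived:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

# Hypothetical counts for illustration only.
print(binary_metrics(tp=9, fp=1, tn=99, fn=1))
```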

RevDate: 2025-10-03

Samantray S, Lockwood M, Andersen A, et al (2025)

PTM-Psi on the Cloud: A Cloud-Compatible Workflow for Scalable, High-Throughput Simulation of Post-Translational Modifications in Protein Complexes.

Journal of chemical information and modeling [Epub ahead of print].

We developed an advanced computational framework to accelerate the study of the impact of post-translational modifications on protein structures and interactions (PTM-Psi) using asynchronous, loosely coupled workflows on the Azure Quantum Elements Cloud platform. We seamlessly integrate emerging cloud computing assets that further expand the scope and capability of the PTM-Psi Python package by refactoring it into a cloud-compatible library. We employed a "workflow of workflows" approach, wherein a parent workflow spawns one or more child workflows, manages them, and acts on their results. This approach enabled us to optimize resource allocation according to each workflow's needs and allowed us to use the cloud's heterogeneous architecture for the computational investigation of a combinatorial explosion of thiol protein PTMs on an exemplary protein megacomplex critical to the Calvin-Benson cycle of light-dependent sugar production in cyanobacteria. With PTM-Psi on the cloud, we transformed the pipeline for the thiol PTM analysis to achieve high throughput by leveraging the strengths of the cloud service. PTM-Psi on the cloud reduces operational complexity and lowers entry barriers to data interpretation with structural modeling for a redox proteomics mass spectrometry specialist.
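The "workflow of workflows" pattern — a parent workflow that spawns child workflows, waits on them, and acts on their results — can be sketched generically with Python's asyncio. This is only an illustration of the orchestration idea; PTM-Psi's actual cloud orchestration on Azure Quantum Elements is far more elaborate:

```python
import asyncio

async def child_workflow(name, delay):
    # Child workflow: a stand-in for one PTM-variant simulation task.
    await asyncio.sleep(delay)
    return f"{name}:done"

async def parent_workflow(jobs):
    # Parent workflow: spawn all children, wait for completion,
    # and collect their results in submission order.
    tasks = [asyncio.create_task(child_workflow(n, d)) for n, d in jobs]
    return await asyncio.gather(*tasks)

results = asyncio.run(parent_workflow([("ptm-A", 0.01), ("ptm-B", 0.0)]))
print(results)
```

Because the children run concurrently, total wall time is bounded by the slowest child rather than the sum, which is the property that makes combinatorial PTM sweeps tractable on elastic cloud resources.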

RevDate: 2025-10-03

Catalucci S, Koutecký T, Senin N, et al (2025)

Investigation on the effects of the application of a sublimating matte coating in optical coordinate measurement of additively manufactured parts.

The International journal of advanced manufacturing technology, 140(5-6):2749-2775.

Coating sprays play a crucial role in extending the capabilities of optical measuring systems, especially when dealing with reflective surfaces, where excessive reflections, caused by incident light hitting the object surface, lead to increased noise and missing data points in the measurement results. This work focuses on metal additively manufactured parts, and explores how the application of a sublimating matting spray on the measured surfaces can improve measurement performance. The use of sublimating matting sprays is a recent development for achieving temporary coatings that are useful for measurement, but then disappear in the final product. A series of experiments was performed involving measurement by fringe projection on a selected test part pre- and post-application of a sublimating coating layer. A comparison of measurement performance across the experiments was run by computing a selected set of custom-developed point cloud quality indicators: rate of surface coverage, level of sampling density, local point dispersion, and variation of selected linear dimensions computed from the point clouds. In addition, measurements were performed using an optical profilometer on the coated and uncoated surfaces to determine both the thickness of the coating layer and changes of surface texture (matte effect) due to the presence of the coating layer.
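Two of the named point cloud quality indicators — sampling density and local point dispersion — can be illustrated with a naive sketch, here using mean nearest-neighbour distance as the dispersion measure. The paper's custom-developed indicators may be defined differently:

```python
import math

def sampling_density(points, surface_area):
    """Points per unit area: a simple sampling-density indicator."""
    return len(points) / surface_area

def local_dispersion(points):
    """Mean nearest-neighbour distance: a simple local-dispersion indicator.
    O(n^2) brute force; a k-d tree would be used for real point clouds."""
    nn_dists = []
    for i, p in enumerate(points):
        nn = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        nn_dists.append(nn)
    return sum(nn_dists) / len(nn_dists)

# Four corners of a unit square: density 4 points per unit area,
# nearest neighbour exactly 1.0 apart for every point.
corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(sampling_density(corners, 1.0), local_dispersion(corners))
```

Missing data from reflections would show up in such indicators as lower density and larger, more uneven nearest-neighbour distances.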

RevDate: 2025-10-02

Sun X, Liao B, Huang S, et al (2025)

Evaluation of the particle characteristics of aggregates from construction spoils treatment through a real-time detection multimodal module based on 3D point cloud technology.

Waste management (New York, N.Y.), 208:115165 pii:S0956-053X(25)00576-8 [Epub ahead of print].

Construction spoils are generated during construction activities and typically contain aggregates along with mud, requiring size distribution (gradation) assessment for reuse. Conventional methods using the square opening sieves are inefficient and labor-intensive. This study introduced an intelligent multi-modal module primarily for gradation detection based on 3D scanning technology to replace traditional sieve techniques. The proposed Particle Point Cloud Clustering algorithm achieved nearly 100% segmentation accuracy for multi-particle point clouds within 2 s through adaptive point-spacing optimization. A Particle Sieving Size Determination method ensured particle size classification accuracy exceeding 93.0%. A particle surface reconstruction algorithm was integrated into the Particle Characteristics Extraction (PCE) method to address the challenge of volume calculation for unscanned particle bottom surfaces, providing a novel strategy for computing particle geometry that encompasses traditional analysis. To streamline volume calculation and bypass individual particle reconstruction, we developed a volume prediction approach that combines the Oriented Bounding Box volume with the particle morphological parameter (λ) obtained through the PCE method. Furthermore, the Particle Mass Modification model determined aggregate mass by multiplying the predicted volume with the established density. This model significantly reduced gradation errors to less than 1.2% on average, which was experimentally validated. Experimental results also confirmed that the proposed method achieves real-time, second-level detection and fulfills the typical application needs in a construction site. This study is expected to benefit other industrial processes, such as particle screening in the mining industry, since information on particle characteristics is equally crucial for this sector.
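The volume-prediction step described above — combining the Oriented Bounding Box volume with the morphological parameter λ and an established density to estimate particle mass — reduces to a simple product. A hedged sketch with made-up numbers (the paper's λ comes from its PCE method, not from these placeholders):

```python
def obb_volume(extents):
    """Volume of an Oriented Bounding Box from its three edge lengths."""
    lx, ly, lz = extents
    return lx * ly * lz

def predicted_mass(extents, shape_factor, density):
    """Mass estimate: density * lambda * V_OBB, where lambda (shape_factor)
    expresses how much of the bounding box the particle actually fills."""
    return density * shape_factor * obb_volume(extents)

# Hypothetical particle: OBB of 2x1x1 cm, fills half its box,
# density 2.65 g/cm^3 (typical for quartz aggregate).
print(predicted_mass((2.0, 1.0, 1.0), shape_factor=0.5, density=2.65))
```

Summing such per-particle masses per sieve-size class is what yields the predicted gradation curve that the paper validates against physical sieving.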

RevDate: 2025-10-02

Ma Q, Fan R, Zhao L, et al (2025)

SGSG: Stroke-Guided Scene Graph Generation.

IEEE transactions on visualization and computer graphics, PP: [Epub ahead of print].

3D scene graph generation is essential for spatial computing in Extended Reality (XR), providing structured semantics for task planning and intelligent perception. However, unlike instance-segmentation-driven setups, generating semantic scene graphs still suffers from limited accuracy due to the coarse and noisy point cloud data typically acquired in practice, and from the lack of interactive strategies to incorporate users' spatialized and intuitive guidance. We identify three key challenges: designing controllable interaction forms, involving guidance in inference, and generalizing from local corrections. To address these, we propose SGSG, a Stroke-Guided Scene Graph generation method that enables users to interactively refine 3D semantic relationships and improve predictions in real time. We propose three types of strokes and a lightweight SGstrokes dataset tailored for this modality. Our model integrates stroke guidance representation and injection for spatio-temporal feature learning and reasoning correction, along with intervention losses that combine consistency-repulsive and geometry-sensitive constraints to enhance accuracy and generalization. Experiments and the user study show that SGSG outperforms state-of-the-art methods 3DSSG and SGFN in overall accuracy and precision, surpasses JointSSG in predicate-level metrics, and reduces task load across all control conditions, establishing SGSG as a new benchmark for interactive 3D scene graph generation and semantic understanding in XR. Implementation resources are available at: https://github.com/Sycamore-Ma/SGSG-runtime.

RevDate: 2025-10-01

Sudhakar M, K Vivekrabinson (2025)

Enhanced CNN based approach for IoT edge enabled smart car driving system for improving real time control and navigation.

Scientific reports, 15(1):33932.

This study investigates the critical control factors differentiating human-driven vehicles from IoT edge-enabled smart driving systems, with real-time steering, throttle, and brake control as the main areas of emphasis. By combining multiple high-precision sensors and using edge computing for real-time processing, the research seeks to improve autonomous vehicle decision-making. The suggested system gathers real-time time-series data using LiDAR, radar, GPS, IMU, and ultrasonic sensors. Edge nodes preprocess this data before sending it to a cloud server, where a Convolutional Neural Network (CNN) creates predicted control vectors for vehicle navigation. The study uses a MATLAB 2023 simulation framework that includes 100 autonomous cars, five edge nodes, and a centralized cloud server. The CNN architecture comprises multiple convolutional and pooling layers followed by fully connected layers. To enhance trajectory estimation, grayscale and optical-flow images are used. Trajectory smoothness measures, loss function trends, and Root Mean Square Error (RMSE) are used to evaluate performance. Experimental data show that the suggested CNN-based edge-enabled driving system outperforms conventional autonomous driving techniques in navigation accuracy, achieving an RMSE of 15.123 and a loss value of 2.114. The results show how edge computing may improve vehicle autonomy and reduce computational delay, opening the door to more effective smart driving systems. Future work will incorporate real-world validation to better evaluate the system's suitability for dynamic situations.
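RMSE, the headline evaluation metric here, is straightforward to compute from predicted and actual control values; a minimal sketch:

```python
import math

def rmse(predicted, actual):
    """Root Mean Square Error between paired prediction/ground-truth values."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Toy example: perfect predictions give RMSE 0.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```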

RevDate: 2025-09-30

Kario K, Asayama K, Arima H, et al (2025)

Digital hypertension - what we need for the high-quality management of hypertension in the new era.

Hypertension research : official journal of the Japanese Society of Hypertension [Epub ahead of print].

Digital technologies are playing an increasing role in hypertension management. Digital hypertension is a new field that integrates advancing technologies into hypertension management. This research area encompasses various aspects of digital transformation technologies, including the development of novel blood pressure (BP) measurement devices-whether cuffless or cuff-based sensors-the transmission of large-scale time-series BP data, cloud-based computing and analysis of BP indices, presentation of the results, and feedback systems for both patients and physicians. A key component of this approach is novel BP monitoring devices. Novel BP monitoring includes cuffless devices that estimate BP, but cuffless devices require achieving accuracy without the need for calibration using conventional cuff-based devices. New BP monitoring devices can provide information on novel biomarkers beyond BP and may improve risk assessment and outcomes. Integration of BP data with omics and clinical information should enable personalized hypertension management. Key data gaps relating to novel BP monitoring devices are accuracy/validation in different settings/populations, association between BP metrics and hard clinical outcomes, and measurement/interpretation of BP variability data. Human- and health system-related factors also need to be addressed or overcome before these devices can be successfully integrated into routine clinical practice. If these goals can be achieved, new BP monitoring technologies could transform hypertension management and play a pivotal role in the future of remote healthcare. This article summarizes the latest information and discussions about digital hypertension from the Digital Hypertension symposium that took place during the 2024 Japan Society of Hypertension scientific meeting.

RevDate: 2025-09-30

Alamro H, Albouq SS, Khan J, et al (2025)

An intelligent deep representation learning with enhanced feature selection approach for cyberattack detection in internet of things enabled cloud environment.

Scientific reports, 15(1):34013.

Cloud computing (CC) is a relatively new concept that provides users of computer networks with processing, storage, and data-sharing capabilities, and its services are attracting global investment. Meanwhile, the Internet of Things (IoT) faces rising advanced cyberattacks, making its cybersecurity crucial to protect privacy and digital assets. A significant challenge for intrusion detection systems (IDS) is detecting complex and hidden malware, as attackers use advanced evasion techniques to bypass conventional security measures. At the cutting edge of cybersecurity is artificial intelligence (AI), which is applied to develop composite models that protect systems and networks, including IoT systems. AI-based deep learning (DL) is highly effective in detecting cybersecurity threats. This paper presents an Intelligent Hybrid Deep Learning Method for Cyber Attack Detection Using an Enhanced Feature Selection Technique (IHDLM-CADEFST) approach in IoT-enabled cloud networks. The aim is to strengthen IoT cybersecurity by identifying key threats and developing effective detection and mitigation strategies. Initially, the data pre-processing phase uses the standard scaler method to convert input data into a suitable format. Furthermore, the feature selection (FS) strategy is implemented using the recursive feature elimination with information gain (RFE-IG) model to detect the most pertinent features and prevent overfitting. Finally, a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model is employed for attack classification, utilizing the RMSprop optimizer to enhance the performance and efficiency of the classification process. The experimentation of the IHDLM-CADEFST approach is examined under the ToN-IoT and Edge-IIoT datasets. The comparison analysis of the IHDLM-CADEFST approach yielded superior accuracy values of 99.45% and 99.19% compared to recent models on the dual dataset.
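The information-gain half of the RFE-IG feature selector ranks features by how much knowing them reduces label entropy. A self-contained sketch of that ranking criterion follows; the paper's full pipeline additionally performs recursive elimination and feeds the surviving features to the CNN-LSTM classifier:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - sum_v p(X=v) * H(Y | X=v)."""
    total = entropy(labels)
    n = len(labels)
    conditional = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        conditional += (len(subset) / n) * entropy(subset)
    return total - conditional

# A feature that perfectly predicts the label has IG equal to H(Y);
# an uninformative feature has IG 0.
labels = [0, 0, 1, 1]
print(information_gain([0, 0, 1, 1], labels))  # perfectly predictive
print(information_gain([0, 1, 0, 1], labels))  # uninformative
```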

RevDate: 2025-09-30

He M, Zhou N, Peng H, et al (2025)

A Multivariate Cloud Workload Prediction Method Integrating Convolutional Nonlinear Spiking Neural Model with Bidirectional Long Short-Term Memory.

International journal of neural systems [Epub ahead of print].

Multivariate workload prediction in cloud computing environments is a critical research problem. Effectively capturing inter-variable correlations and temporal patterns in multivariate time series is key to addressing this challenge. To address this issue, this paper proposes a convolutional model based on a Nonlinear Spiking Neural P System (ConvNSNP), which enhances the ability to process nonlinear data compared to conventional convolutional models. Building upon this, a hybrid forecasting model is developed by integrating ConvNSNP with a Bidirectional Long Short-Term Memory (BiLSTM) network. ConvNSNP is first employed to extract temporal and cross-variable dependencies from the multivariate time series, followed by BiLSTM to further strengthen long-term temporal modeling. Comprehensive experiments are conducted on three public cloud workload traces from Alibaba and Google. The proposed model is compared with a range of established deep learning approaches, including CNN, RNN, LSTM, TCN and hybrid models such as LSTNet, CNN-GRU and CNN-LSTM. Experimental results on three public datasets demonstrate that our proposed model achieves up to 9.9% improvement in RMSE and 11.6% improvement in MAE compared with the most effective baseline methods. The model also achieves favorable performance in terms of MAPE, further validating its effectiveness in multivariate workload prediction.
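Workload-prediction models of this kind are typically trained on sliding windows over the trace: each input is a fixed-length history and the target is a later value. A generic windowing sketch follows; the window length and horizon are illustrative, not the paper's settings:

```python
def make_windows(series, window, horizon=1):
    """Split a workload trace into (history window, future target) pairs.
    `window` is the input length; `horizon` is how far ahead to predict."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])          # model input
        y.append(series[i + window + horizon - 1])  # prediction target
    return X, y

# Toy univariate trace; multivariate traces use vectors per time step.
X, y = make_windows([1, 2, 3, 4, 5], window=2)
print(X, y)
```

The (X, y) pairs produced this way are what a ConvNSNP/BiLSTM-style model would consume, with RMSE and MAE computed on the held-out targets.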

RevDate: 2025-09-30
CmpDate: 2025-09-30

Labayle O, Roskams-Hieter B, Slaughter J, et al (2024)

Semiparametric efficient estimation of small genetic effects in large-scale population cohorts.

Biostatistics (Oxford, England), 26(1):.

Population genetics seeks to quantify DNA variant associations with traits or diseases, as well as interactions among variants and with environmental factors. Computing millions of estimates in large cohorts, in which small effect sizes and tight confidence intervals are expected, necessitates minimizing model-misspecification bias to increase power and control false discoveries. We present TarGene, a unified statistical workflow for the semi-parametric efficient and doubly robust estimation of genetic effects, including $k$-point interactions among categorical variables, in the presence of confounding and weak population dependence. $k$-point interactions, or Average Interaction Effects (AIEs), are a direct generalization of the usual average treatment effect (ATE). We estimate genetic effects with cross-validated and/or weighted versions of Targeted Minimum Loss-based Estimators (TMLE) and One-Step Estimators (OSE). The effect of dependence among data units on variance estimates is corrected by using sieve plateau variance estimators based on genetic relatedness across the units. We present extensive realistic simulations to demonstrate power, coverage, and control of type I error. Our motivating application is the targeted estimation of genetic effects on traits, including two-point and higher-order gene-gene and gene-environment interactions, in large-scale genomic databases such as UK Biobank and All of Us. All cross-validated and/or weighted TMLE and OSE for the AIE $k$-point interaction, as well as ATEs, conditional ATEs, and functions thereof, are implemented in the general-purpose Julia package TMLE.jl. For high-throughput applications in population genomics, we provide the open-source Nextflow pipeline and software TarGene, which integrates seamlessly with modern high-performance and cloud computing platforms.
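For orientation, the average treatment effect (ATE) that these estimators generalize can be contrasted with the naive unadjusted plug-in estimate, which is just the mean outcome difference between exposure groups. This sketch is purely illustrative: TarGene's TMLE/OSE estimators add confounding adjustment, targeting, and bias correction that this baseline lacks:

```python
def plugin_ate(outcomes, treatments):
    """Naive unadjusted ATE estimate: mean(Y | T=1) - mean(Y | T=0).
    No confounding adjustment; only a baseline for comparison."""
    treated = [y for y, t in zip(outcomes, treatments) if t == 1]
    control = [y for y, t in zip(outcomes, treatments) if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Toy data: carriers of a variant (t=1) vs non-carriers (t=0).
print(plugin_ate([3.0, 3.0, 1.0, 1.0], [1, 1, 0, 0]))
```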

RevDate: 2025-09-29

Ala'anzy MA, Abilakim A, Zhanuzak R, et al (2025)

Real time smart parking system based on IoT and fog computing evaluated through a practical case study.

Scientific reports, 15(1):33483.

The increasing urban population and the growing preference for private transportation have led to a significant rise in vehicle numbers, exacerbating traffic congestion and parking challenges. Cruising for parking not only consumes time and fuel but also contributes to environmental and energy inefficiencies. Smart parking systems have emerged as essential solutions to these issues, addressing everyday urban challenges and enabling the development of smart, sustainable cities. By reducing traffic congestion and streamlining parking processes, these systems promote eco-friendly and efficient urban transportation. This paper introduces a provenance-based smart parking system leveraging fog computing to enhance real-time parking space management and resource allocation. The proposed system employs a hierarchical four-layer fog architecture whose nodes provide efficient data storage, transfer, and resource utilisation. The provenance component empowers users with real-time insights into parking availability, facilitating informed decision-making. Simulations conducted using the iFogSim2 toolkit evaluated the system across key metrics, including end-to-end latency, execution cost, execution time, network usage, and energy consumption in both fog and cloud-based environments. A comparative analysis demonstrates that the fog-based approach significantly outperforms its cloud-based counterpart in terms of efficiency and responsiveness. Additionally, the system minimises network usage and optimises space utilisation, reducing the need for parking area expansion. A real-world case study from SDU University Park validated the proposed system, showcasing its effectiveness in managing parking spaces, particularly during peak hours.

RevDate: 2025-09-29
CmpDate: 2025-09-29

Yao S, Yu T, Ramos AFV, et al (2025)

Toward smart and in-situ mycotoxin detection in food via vibrational spectroscopy and machine learning.

Food chemistry: X, 31:103016.

Recent advances in vibrational spectroscopy combined with machine learning are enabling smart and in-situ detection of mycotoxins in complex food matrices. Infrared and spontaneous Raman spectroscopy detect molecular vibrations or compositional changes in host matrices, capturing direct or indirect mycotoxin fingerprints, while surface-enhanced Raman spectroscopy (SERS) amplifies characteristic mycotoxin molecular vibrations via plasmonic nanostructures, enabling ultra-sensitive detection. Machine learning further enhances analysis by extracting subtle and unique mycotoxin spectral features from information-rich spectra, suppressing noise, and enabling robust predictions across heterogeneous samples. This review critically examines recent sensing strategies, model development, application performance, non-destructive screening, and potential application challenges, highlighting strengths and limitations relative to conventional methods. Innovations in portable, miniaturized spectrometers integrated with cloud computation are also discussed, supporting scalable, rapid, and on-site mycotoxin monitoring. By integrating state-of-the-art vibrational fingerprints with computational analysis, these approaches provide a pathway toward sensitive, smart, and field-deployable mycotoxin detection in food.

