Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.
ESP: PubMed Auto Bibliography (04 Apr 2026 at 01:41)
Cloud Computing
Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to adjust resources more rapidly to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
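For readers who want to reproduce this listing programmatically, the same query can be submitted to the NCBI E-utilities esearch endpoint. A minimal Python sketch follows (it assumes the requests package; retmax and the email address are illustrative placeholders):

    # Run the bibliography query against NCBI E-utilities (esearch).
    import requests

    QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
             'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
             'NOT pmcbook NOT ispreviousversion')

    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": QUERY, "retmode": "json",
                "retmax": 100, "email": "you@example.org"},
        timeout=30,
    )
    resp.raise_for_status()
    pmids = resp.json()["esearchresult"]["idlist"]
    print(len(pmids), "PMIDs returned, e.g.", pmids[:5])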
Citations: The Papers (from PubMed®)
RevDate: 2026-04-03
CmpDate: 2026-04-03
AI-Enhanced Adaptive Virtual Screening Platform Enabling Exploration of 69 Billion Molecules Discovers Structurally Validated FSP1 Inhibitors.
bioRxiv : the preprint server for biology pii:2023.04.25.537981.
Identifying potent lead molecules for specific targets remains a major bottleneck in drug discovery. As structural information about proteins becomes increasingly available, ultra-large virtual screenings (ULVSs) which computationally evaluate billions of molecules offer a powerful way to accelerate early-stage drug discovery. Here, we introduce AdaptiveFlow, an open-source platform designed to make ULVSs more accessible, scalable, and efficient. AdaptiveFlow provides free access to a screening-ready version of the Enamine REAL Space, the largest library of ready-to-dock, drug-like molecules, containing 69 billion compounds that we prepared using the ligand preparation module of the platform. A key innovation of the platform is its use of a multi-dimensional grid of molecular properties, which helps researchers explore and prioritize chemical space more effectively and reduce the computational costs by a factor of approximately 1000. This grid forms the basis of a new method for identifying promising regions of chemical space, enabling systematic exploration and prioritization of compound libraries. An optional active learning component can further accelerate this process by adaptively steering the search toward molecules most likely to bind a given target. To support a broad range of applications, AdaptiveFlow is compatible with over 1,500 docking methods. The platform achieves near-linear scaling on up to 5.6 million CPUs in the AWS Cloud, setting a new benchmark for large-scale cloud computing in drug discovery. Using this approach, we identified nanomolar inhibitors of two disease-relevant targets: ferroptosis suppressor protein 1 (FSP1) and poly(ADP-ribose) polymerase 1 (PARP-1). By leveraging newly solved crystal structures of FSP1 in complex with NAD+, FAD, and coenzyme Q1, we validated these hits experimentally and determined the first co-crystal structures of FSP1 bound to small-molecule inhibitors, enabling insights into inhibitor binding mechanisms previously unknown. With its high scalability, flexibility, and open accessibility, AdaptiveFlow offers a powerful new resource for discovering and optimizing drug candidates at an unprecedented scale and speed.
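In general terms, the active-learning component described above is a loop that docks a small batch, fits a surrogate model to the observed scores, and uses its predictions to choose the next batch. The following is a minimal generic sketch of such a loop, not the AdaptiveFlow code; the fingerprint library, the dock_score placeholder, the random-forest surrogate, and the batch size of 200 are all invented for illustration:

    # Generic active-learning loop for virtual screening (illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def dock_score(fps):
        # Placeholder for a real docking program; returns synthetic scores here.
        return fps.sum(axis=1) * -0.01 + rng.normal(0, 0.1, len(fps))

    library = rng.integers(0, 2, size=(10_000, 256)).astype(float)  # fake fingerprints
    scored_idx, scores = [], []

    candidates = rng.choice(len(library), size=200, replace=False)  # random seed batch
    for _ in range(5):                                              # 5 selection rounds
        batch_scores = dock_score(library[candidates])
        scored_idx.extend(candidates.tolist())
        scores.extend(batch_scores.tolist())
        surrogate = RandomForestRegressor(n_estimators=100, n_jobs=-1)
        surrogate.fit(library[scored_idx], scores)
        pred = surrogate.predict(library)
        pred[scored_idx] = np.inf                    # exclude already-docked molecules
        candidates = np.argsort(pred)[:200]          # most negative prediction = best

    print("best docking score observed:", min(scores))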
Additional Links: PMID-41929058
Citation:
@article {pmid41929058,
year = {2026},
author = {Cecchini, D and Nigam, A and Tang, M and Reis, J and Koop, M and Gottinger, A and Nicoll, CR and Wang, Y and Jayaraj, A and Cinaroglu, SS and Törner, R and Malets, Y and Gehev, M and Padmanabha Das, KM and Churion, K and Kim, J and Thomas, N and Li, Y and Seo, HS and Dhe-Paganon, S and Secker, C and Haddadnia, M and Hasson, A and Li, M and Kumar, A and Levin-Konigsberg, R and Choi, EB and Shapiro, GI and Cox, H and Sebastian, L and Braithwaite, C and Bashyal, P and Radchenko, DS and Kumar, A and Yang, L and Aquilanti, PY and Gabb, H and Alhossary, A and Wagner, G and Aspuru-Guzik, A and Moroz, YS and Kalodimos, CG and Fackeldey, K and Schuetz, JD and Mattevi, A and Arthanari, H and Gorgulla, C},
title = {AI-Enhanced Adaptive Virtual Screening Platform Enabling Exploration of 69 Billion Molecules Discovers Structurally Validated FSP1 Inhibitors.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2023.04.25.537981},
pmid = {41929058},
issn = {2692-8205},
abstract = {Identifying potent lead molecules for specific targets remains a major bottleneck in drug discovery. As structural information about proteins becomes increasingly available, ultra-large virtual screenings (ULVSs) which computationally evaluate billions of molecules offer a powerful way to accelerate early-stage drug discovery. Here, we introduce AdaptiveFlow, an open-source platform designed to make ULVSs more accessible, scalable, and efficient. AdaptiveFlow provides free access to a screening-ready version of the Enamine REAL Space, the largest library of ready-to-dock, drug-like molecules, containing 69 billion compounds that we prepared using the ligand preparation module of the platform. A key innovation of the platform is its use of a multi-dimensional grid of molecular properties, which helps researchers explore and prioritize chemical space more effectively and reduce the computational costs by a factor of approximately 1000. This grid forms the basis of a new method for identifying promising regions of chemical space, enabling systematic exploration and prioritization of compound libraries. An optional active learning component can further accelerate this process by adaptively steering the search toward molecules most likely to bind a given target. To support a broad range of applications, AdaptiveFlow is compatible with over 1,500 docking methods. The platform achieves near-linear scaling on up to 5.6 million CPUs in the AWS Cloud, setting a new benchmark for large-scale cloud computing in drug discovery. Using this approach, we identified nanomolar inhibitors of two disease-relevant targets: ferroptosis suppressor protein 1 (FSP1) and poly(ADP-ribose) polymerase 1 (PARP-1). By leveraging newly solved crystal structures of FSP1 in complex with NAD+, FAD, and coenzyme Q1, we validated these hits experimentally and determined the first co-crystal structures of FSP1 bound to small-molecule inhibitors, enabling insights into inhibitor binding mechanisms previously unknown. With its high scalability, flexibility, and open accessibility, AdaptiveFlow offers a powerful new resource for discovering and optimizing drug candidates at an unprecedented scale and speed.},
}
RevDate: 2026-04-03
Artificial Intelligence-Assisted reflectance confocal microscopy for Real-Time intraoperative margin assessment in oral squamous cell carcinoma.
Oral oncology, 177:107939 pii:S1368-8375(26)00092-8 [Epub ahead of print].
BACKGROUND: Oral cavity squamous cell carcinoma (OSCC) is a global health burden, where negative margins are essential for reducing recurrence and improving survival. Intraoperative frozen-section analysis is limited by time, sampling error, and interpretive variability, underscoring the need for more reliable margin assessment. Reflectance confocal microscopy (RCM) enables real-time, in vivo high-resolution imaging, but accuracy depends on expert interpretation. This study evaluated the diagnostic performance of an artificial intelligence (AI)-driven model for RCM in OSCC, aiming to develop a point-of-care platform for intraoperative use.
METHODS: Patients with biopsy-confirmed OSCC underwent in vivo RCM imaging using a handheld intraoral probe before biopsy. Histopathology was the reference standard. A deep learning model was developed with the Google Cloud Vertex AI Automated Machine Learning (AutoML) Vision platform and trained on 4,090 annotated RCM images (1,998 benign, 2,092 malignant). Performance was compared with blinded expert pathologist and RCM readers.
RESULTS: The AI model achieved an area under the precision-recall curve (AUC-PR) of 0.99 and an area under the receiver operating characteristic curve (AUC-ROC) of 0.99, with sensitivity 98.09%, specificity 95.00%, accuracy 96.58%, positive predictive value (PPV) 95.35%, and negative predictive value (NPV) 97.94%. Expert readers showed sensitivity 90.00%, specificity 98.30%, accuracy 94.15%, PPV 88.20%, and NPV 96.60%. Inter-reader agreement was 95.00% for benign and 81.70% for malignant cases.
CONCLUSIONS: AI-driven RCM interpretation provides an accurate, rapid, noninvasive approach for OSCC diagnosis and intraoperative margin assessment. It outperformed expert readers and can reduce reliance on frozen-section analysis, streamline workflows, and improve outcomes.
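For reference, every metric reported above (sensitivity, specificity, PPV, NPV, accuracy) is derived from the four confusion-matrix counts. A short sketch with hypothetical counts, not the study's data, illustrates the definitions:

    # Diagnostic metrics from a confusion matrix (hypothetical counts, not study data).
    tp, fn = 206, 4     # malignant images classified correctly / missed
    tn, fp = 190, 10    # benign images classified correctly / falsely flagged

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)

    for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                        ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
        print(f"{name}: {value:.2%}")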
Additional Links: PMID-41932031
Citation:
@article {pmid41932031,
year = {2026},
author = {Hosseinzadeh, F and Zanoni, D and de Souza França, PD and Valero, C and Peterson, G and Ardigo, M and Ghossein, R and Dusza, SW and Pillarsetty, N and Kose, K and Wong, RJ and Ganly, I and Rajadhyaksha, M and Patel, SG},
title = {Artificial Intelligence-Assisted reflectance confocal microscopy for Real-Time intraoperative margin assessment in oral squamous cell carcinoma.},
journal = {Oral oncology},
volume = {177},
number = {},
pages = {107939},
doi = {10.1016/j.oraloncology.2026.107939},
pmid = {41932031},
issn = {1879-0593},
abstract = {BACKGROUND: Oral cavity squamous cell carcinoma (OSCC) is a global health burden, where negative margins are essential for reducing recurrence and improving survival. Intraoperative frozen-section analysis is limited by time, sampling error, and interpretive variability, underscoring the need for more reliable margin assessment. Reflectance confocal microscopy (RCM) enables real-time, in vivo high-resolution imaging, but accuracy depends on expert interpretation. This study evaluated the diagnostic performance of an artificial intelligence (AI)-driven model for RCM in OSCC, aiming to develop a point-of-care platform for intraoperative use.
METHODS: Patients with biopsy-confirmed OSCC underwent in vivo RCM imaging using a handheld intraoral probe before biopsy. Histopathology was the reference standard. A deep learning model was developed with the Google Cloud Vertex AI Automated Machine Learning (AutoML) Vision platform and trained on 4,090 annotated RCM images (1,998 benign, 2,092 malignant). Performance was compared with blinded expert pathologist and RCM readers.
RESULTS: The AI model achieved an area under the precision-recall curve (AUC-PR) of 0.99 and an area under the receiver operating characteristic curve (AUC-ROC) of 0.99, with sensitivity 98.09%, specificity 95.00%, accuracy 96.58%, positive predictive value (PPV) 95.35%, and negative predictive value (NPV) 97.94%. Expert readers showed sensitivity 90.00%, specificity 98.30%, accuracy 94.15%, PPV 88.20%, and NPV 96.60%. Inter-reader agreement was 95.00% for benign and 81.70% for malignant cases.
CONCLUSIONS: AI-driven RCM interpretation provides an accurate, rapid, noninvasive approach for OSCC diagnosis and intraoperative margin assessment. It outperformed expert readers and can reduce reliance on frozen-section analysis, streamline workflows, and improve outcomes.},
}
RevDate: 2026-04-01
An innovative framework for secure data transmission using machine learning based classification and ElGamal encryption with Ramanujan primes.
Scientific reports, 16(1).
The secure processing and transmission of sensitive data has become a crucial concern in the modern digital world. This paper presents a conditional ElGamal framework in which the prime modulus is chosen from either conventional primes or a Ramanujan prime based on data sensitivity, while the encryption and decryption techniques remain identical to the standard ElGamal scheme. To classify data into normal and highly sensitive categories, several machine learning models are evaluated, among which the Support Vector Machine achieves the highest mean accuracy under 5-fold cross-validation. Normal-sensitivity data are encrypted using the standard ElGamal encryption method, whereas highly sensitive data are encrypted using the proposed variant with Ramanujan prime-based key generation. The security analysis confirms that the proposed framework preserves the security of the standard ElGamal scheme under known-plaintext and chosen-plaintext attacks. The choice of Ramanujan primes affects only the key-generation procedure and does not change or strengthen the theoretical security guarantees of the ElGamal scheme. To ensure both data credibility and reliability, a hash-based message authentication code is appended to each message before transmission. The encryption-decryption time and the average data rate for highly sensitive data are lower than for normal-sensitivity data, which indicates lower CPU and memory usage for the new framework. Hence, the proposed framework can be suitable for applications in cloud computing, healthcare, and e-governance environments.
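As background for the conditional scheme above, textbook ElGamal key generation, encryption, and decryption can be sketched in a few lines of Python. The prime below is a small toy modulus and g = 2 is simply assumed to be a suitable generator; the paper's Ramanujan-prime selection, classifier, and message-authentication steps are not reproduced:

    # Textbook ElGamal over a small prime field (toy parameters, illustration only).
    import secrets

    p = 30803          # toy prime; real deployments use primes of 2048 bits or more
    g = 2              # assumed generator for this toy prime

    x = secrets.randbelow(p - 2) + 1       # private key
    y = pow(g, x, p)                       # public key

    def encrypt(m, y):
        k = secrets.randbelow(p - 2) + 1   # fresh ephemeral key per message
        return pow(g, k, p), (m * pow(y, k, p)) % p

    def decrypt(c1, c2, x):
        s = pow(c1, x, p)
        return (c2 * pow(s, p - 2, p)) % p  # s^(-1) via Fermat's little theorem

    m = 12345 % p
    c1, c2 = encrypt(m, y)
    assert decrypt(c1, c2, x) == m
    print("roundtrip ok:", m)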
Additional Links: PMID-41748746
Citation:
@article {pmid41748746,
year = {2026},
author = {Haritha, N and Narayanan, V and Srikanth, R},
title = {An innovative framework for secure data transmission using machine learning based classification and ElGamal encryption with Ramanujan primes.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {},
pmid = {41748746},
issn = {2045-2322},
abstract = {The secure processing and transmission of sensitive data has arisen as a crucial concern in the modern digital world. This paper presents a conditional ElGamal framework in which the prime modulus is chosen from either conventional primes or the Ramanujan prime based on data sensitivity, while the encryption and decryption techniques remain identical to the standard ElGamal scheme. To classify data into normal and highly sensitive categories, several machine learning models are evaluated among which the Support Vector Machine model achieves highest mean accuracy under 5-fold cross-validation. The normal sensitive data is encrypted using the standard ElGamal encryption method whereas highly sensitive data is encrypted by using the proposed variant with Ramanujan prime-based key generation. The security analysis confirmed that the proposed framework preserves the security of standard ElGamal scheme under known-plain text attack and chosen-plain text attack. The choice of Ramanujan primes affects only the key-generation procedure and does not change or strengthen the theoretical security guarantees of ElGamal scheme. To ensure both data credibility and reliability, a hash-based message authentication code is appended to each message before transmission. The encryption-decryption time and the average data rate for highly sensitive data is less compared to the normal sensitive data which ensures the less CPU memory usage of the new framework. Hence, the proposed framework can be suitable for applications in cloud computing, healthcare, and e-governance environments.},
}
RevDate: 2026-04-02
CmpDate: 2026-04-02
GermVarX: A Robust Workflow for Joint Germline Variant Exploration in whole-exome sequencing cohorts.
PloS one, 21(4):e0345561 pii:PONE-D-25-50266.
Accurate identification of germline variants from whole-exome sequencing (WES) data is foundational to population genetics, disease association studies, and clinical genomics. However, variant calling across cohorts poses challenges in scalability, consistency, and reproducibility. We present GermVarX, a fully automated, modular workflow for joint germline variant discovery and exploration in WES cohort studies. A key feature of GermVarX is its implementation of joint variant calling, enabling simultaneous genotyping of multiple samples to produce a single, high-confidence multi-sample VCF, optimized for downstream analyses. Developed with Nextflow DSL2, GermVarX ensures reproducibility, portability, and efficient parallelization across diverse computing environments, including workstations, HPC clusters, and cloud platforms. The workflow integrates two state-of-the-art variant callers-GATK HaplotypeCaller and DeepVariant-with joint genotyping performed via GATK or GLnexus. To increase reliability, GermVarX supports consensus generation between callers, coupled with sample- and cohort-level quality control, functional annotation using the Variant Effect Predictor (VEP), and unified reporting through MultiQC. In addition, it provides PLINK-compatible outputs, facilitating seamless integration with statistical and association analyses. GermVarX delivers a scalable, reproducible, and comprehensive solution for germline variant analysis in large WES studies, supporting consistent and interpretable results for both research and clinical genomics. The source code and usage instructions are available at https://github.com/thaontp711/GermVarX.
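The consensus step mentioned above, keeping only variants reported by both callers, can be approximated outside the workflow with a few lines of Python over two VCFs. The file names below are hypothetical; GermVarX itself performs this step inside its Nextflow processes:

    # Rough caller consensus: keep sites present in both GATK and DeepVariant VCFs.
    import gzip

    def sites(path):
        keys = set()
        with gzip.open(path, "rt") as fh:
            for line in fh:
                if line.startswith("#"):
                    continue
                chrom, pos, _id, ref, alt = line.split("\t")[:5]
                keys.add((chrom, pos, ref, alt))
        return keys

    gatk = sites("cohort.gatk.vcf.gz")                # hypothetical file names
    deepvariant = sites("cohort.deepvariant.vcf.gz")
    consensus = gatk & deepvariant
    print(f"GATK: {len(gatk)}  DeepVariant: {len(deepvariant)}  consensus: {len(consensus)}")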
Additional Links: PMID-41926483
Citation:
@article {pmid41926483,
year = {2026},
author = {Nguyen, TTP and Nguyen, DD and Mai, TV and Nguyen, DK and Nguyen, TD and Truong, NTM and Ha, HH and Tran, TTH},
title = {GermVarX: A Robust Workflow for Joint Germline Variant Exploration in whole-exome sequencing cohorts.},
journal = {PloS one},
volume = {21},
number = {4},
pages = {e0345561},
doi = {10.1371/journal.pone.0345561},
pmid = {41926483},
issn = {1932-6203},
mesh = {Humans ; *Exome Sequencing/methods ; Workflow ; *Software ; *Germ-Line Mutation ; Cohort Studies ; Reproducibility of Results ; Genotype ; },
abstract = {Accurate identification of germline variants from whole-exome sequencing (WES) data is foundational to population genetics, disease association studies, and clinical genomics. However, variant calling across cohorts poses challenges in scalability, consistency, and reproducibility. We present GermVarX, a fully automated, modular workflow for joint germline variant discovery and exploration in WES cohort studies. A key feature of GermVarX is its implementation of joint variant calling, enabling simultaneous genotyping of multiple samples to produce a single, high-confidence multi-sample VCF, optimized for downstream analyses. Developed with Nextflow DSL2, GermVarX ensures reproducibility, portability, and efficient parallelization across diverse computing environments, including workstations, HPC clusters, and cloud platforms. The workflow integrates two state-of-the-art variant callers-GATK HaplotypeCaller and DeepVariant-with joint genotyping performed via GATK or GLnexus. To increase reliability, GermVarX supports consensus generation between callers, coupled with sample- and cohort-level quality control, functional annotation using the Variant Effect Predictor (VEP), and unified reporting through MultiQC. In addition, it provides PLINK-compatible outputs, facilitating seamless integration with statistical and association analyses. GermVarX delivers a scalable, reproducible, and comprehensive solution for germline variant analysis in large WES studies, supporting consistent and interpretable results for both research and clinical genomics. The source code and usage instructions are available at https://github.com/thaontp711/GermVarX.},
}
MeSH Terms:
Humans
*Exome Sequencing/methods
Workflow
*Software
*Germ-Line Mutation
Cohort Studies
Reproducibility of Results
Genotype
RevDate: 2026-04-02
CmpDate: 2026-04-02
Robotic process automation for identifying missing codes on insurance claims.
BMJ health & care informatics, 33(1): pii:bmjhci-2025-101821.
OBJECTIVES: This study aimed to develop and implement robotic process automation (RPA) for identifying missing codes during insurance claim post-review at a tertiary hospital and to evaluate its feasibility and effectiveness.
METHODS: As a single-centre, operational implementation, an RPA system integrated with optical character recognition (OCR) and electronic medical record (EMR) platforms was developed using Blue Prism. The system compared 532 surgical procedure codes with 21 cutting device codes, automatically flagging discrepancies. Accuracy and efficiency were compared with manual review.
RESULTS: Between 1 and 31 May 2025, the RPA system analysed 61 claim statements and performed 199 OCR processes. The Google Cloud Vision API (application programming interface) achieved 100% detection accuracy without false positives, while Tesseract yielded lower accuracy. The RPA reduced average processing time from 120 min (manual review) to 54 min, representing a 55% efficiency gain.
DISCUSSION: RPA reliably automated repetitive, rule-based administrative tasks, improving accuracy and standardisation of insurance claim audits. Secure system architecture ensured compliance with healthcare data protection standards. User-centred development and integration with EMR demonstrated feasibility in complex healthcare workflows.
CONCLUSION: Implementing RPA for insurance claim post-review significantly enhanced efficiency and accuracy, reduced administrative workload and provided a scalable model for digital transformation in healthcare administration.
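The reported 55% efficiency gain corresponds to (120 - 54) / 120 of the manual review time. The OCR step uses the Google Cloud Vision API; a minimal text-detection call with the official Python client looks roughly like the sketch below (the image file name is a placeholder, credentials must be configured separately, and the code-comparison logic is omitted):

    # Minimal Google Cloud Vision text detection (requires google-cloud-vision and
    # application credentials; the input file name is a placeholder).
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("claim_statement_page1.png", "rb") as fh:
        image = vision.Image(content=fh.read())

    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)

    full_text = response.text_annotations[0].description if response.text_annotations else ""
    print(full_text[:500])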
Additional Links: PMID-41927105
Citation:
@article {pmid41927105,
year = {2026},
author = {Lee, J and Cho, JH and Lee, WJ and Kim, DH and Gong, YL and Kim, WT and Kim, CW},
title = {Robotic process automation for identifying missing codes on insurance claims.},
journal = {BMJ health & care informatics},
volume = {33},
number = {1},
pages = {},
doi = {10.1136/bmjhci-2025-101821},
pmid = {41927105},
issn = {2632-1009},
mesh = {Humans ; Electronic Health Records ; *Automation ; *Robotics/methods ; *Insurance Claim Review ; *Clinical Coding ; },
abstract = {OBJECTIVES: This study aimed to develop and implement robotic process automation (RPA) for identifying missing codes during insurance claim post-review at a tertiary hospital and to evaluate its feasibility and effectiveness METHODS: As a single-centre, operational implementation, an RPA system integrated with optical character recognition (OCR) and electronic medical record (EMR) platforms was developed using Blue Prism. The system compared 532 surgical procedure codes with 21 cutting device codes, automatically flagging discrepancies. Accuracy and efficiency were compared with manual review.
RESULTS: Between 1 and 31 May 2025, the RPA system analysed 61 claim statements and performed 199 OCR processes. The Google Cloud Vision API (application programming interface) achieved 100% detection accuracy without false positives, while Tesseract yielded lower accuracy. The RPA reduced average processing time from 120 (manual review) to 54 min, representing a 55% efficiency gain.
DISCUSSION: RPA reliably automated repetitive, rule-based administrative tasks, improving accuracy and standardisation of insurance claim audits. Secure system architecture ensured compliance with healthcare data protection standards. User-centred development and integration with EMR demonstrated feasibility in complex healthcare workflows.
CONCLUSION: Implementing RPA for insurance claim post-review significantly enhanced efficiency and accuracy, reduced administrative workload and provided a scalable model for digital transformation in healthcare administration.},
}
MeSH Terms:
Humans
Electronic Health Records
*Automation
*Robotics/methods
*Insurance Claim Review
*Clinical Coding
RevDate: 2026-04-02
Slow drift aware dynamic risk assessment in cyber physical systems using quantum neutrosophic fuzzy modelling.
Scientific reports pii:10.1038/s41598-026-41732-8 [Epub ahead of print].
The dynamic risk assessment identifies, evaluates, and mitigates the vulnerabilities that compromise the reliability of the cyber-physical system (CPS). The slow degradation in network behaviour is known as the gradual and long-term shifts in traffic characteristics (variations in latency, packet flow distribution, and protocol level patterns) that accumulate over time. These subtle variations cause faults, stealthy intrusion, and network aging in CPS. However, the prevailing works overlooked the slow degradation that persists in the network behaviour. Thus, Quantum State-based Exponentially Weighted Moving Average (QS-EWMA)-based slow drift analysis is proposed. Initially, the medical devices are registered in the cloud server, followed by data sensing. During data transfer, the intrusion is detected by the Network Intrusion Detection System (NIDS). In NIDS, the data collection, pre-processing, and feature extraction are performed. Further, the QS-EWMA-based slow drift analysis is performed, followed by Sparse Random Walk Softplus S-shaped Rectified Gated Recurrent Unit (SRWSSR-GRU)-based intrusion detection. Next, the dependency-aware aggregation is performed using the Choquet k-Additive Function Integral Model (CkAFIM). Then, the uncertainty-based risk is assessed using Min-Max Normalization-based Neutrosophic Logic System (MMN-NLS). Here, the explainability is improved using SHapley Max-Abs Scaling Additive exPlanation (SMAS-HAP). Finally, the decision-making is performed using the Markov Linear Discrete-Time Propagation Decision Process (MLDTPDP). Hence, the proposed system effectively assessed the risk with an Indeterminacy Detection Rate (IDR) of 13.43%, showing superiority over prevailing works.
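The slow-drift idea rests on the classical exponentially weighted moving average, z_t = lambda * x_t + (1 - lambda) * z_(t-1), flagged when it leaves its control limits. A plain, non-quantum EWMA drift check is sketched below; the smoothing factor, limit width, and synthetic latency series are illustrative, and the paper's quantum-state variant is not reproduced:

    # Plain EWMA control chart for slow drift in a latency series (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    latency = rng.normal(20.0, 1.0, 500)
    latency[250:] += np.linspace(0.0, 3.0, 250)   # inject a slow upward drift

    lam, width = 0.1, 3.0                         # smoothing factor and limit width
    mu, sigma = latency[:100].mean(), latency[:100].std()

    z = mu
    for t, x in enumerate(latency):
        z = lam * x + (1 - lam) * z
        limit = width * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(z - mu) > limit:
            print(f"drift flagged at sample {t}, EWMA = {z:.2f}")
            break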
Additional Links: PMID-41927643
Citation:
@article {pmid41927643,
year = {2026},
author = {Kiruthika, K and Rajesh, A and Dhapekar, NK and Dubey, TK},
title = {Slow drift aware dynamic risk assessment in cyber physical systems using quantum neutrosophic fuzzy modelling.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41732-8},
pmid = {41927643},
issn = {2045-2322},
abstract = {The dynamic risk assessment identifies, evaluates, and mitigates the vulnerabilities that compromise the reliability of the cyber-physical system (CPS). The slow degradation in network behaviour is known as the gradual and long-term shifts in traffic characteristics (variations in latency, packet flow distribution, and protocol level patterns) that accumulate over time. These subtle variations cause faults, stealthy intrusion, and network aging in CPS. However, the prevailing works overlooked the slow degradation that persists in the network behaviour. Thus, Quantum State-based Exponentially Weighted Moving Average (QS-EWMA)-based slow drift analysis is proposed. Initially, the medical devices are registered in the cloud server, followed by data sensing. During data transfer, the intrusion is detected by the Network Intrusion Detection System (NIDS). In NIDS, the data collection, pre-processing, and feature extraction are performed. Further, the QS-EWMA-based slow drift analysis is performed, followed by Sparse Random Walk Softplus S-shaped Rectified Gated Recurrent Unit (SRWSSR-GRU)-based intrusion detection. Next, the dependency-aware aggregation is performed using the Choquet k-Additive Function Integral Model (CkAFIM). Then, the uncertainty-based risk is assessed using Min-Max Normalization-based Neutrosophic Logic System (MMN-NLS). Here, the explainability is improved using SHapley Max-Abs Scaling Additive exPlanation (SMAS-HAP). Finally, the decision-making is performed using the Markov Linear Discrete-Time Propagation Decision Process (MLDTPDP). Hence, the proposed system effectively assessed the risk with an Indeterminacy Detection Rate (IDR) of 13.43%, showing superiority over prevailing works.},
}
RevDate: 2026-04-01
CmpDate: 2026-04-01
Efficient large-scale land cover change detection using Google Earth Engine: Climate-driven vegetation dynamics in Asian drylands (2001-2022).
PloS one, 21(4):e0344835 pii:PONE-D-25-35978.
Monitoring land cover dynamics and understanding vegetation responses to climate change are critical for ecological assessment and management in dryland regions. This study systematically analyzes land cover dynamics, vegetation type transitions, and their climatic drivers across Asian drylands from 2001 to 2022 by integrating MODIS land cover data, TerraClimate climate reanalysis datasets, and the Google Earth Engine (GEE) platform. Using a unified framework that combines land cover dynamic indices, transition probability and transfer matrix analyses, and climate attribution, we quantify spatiotemporal change patterns and identify dominant vegetation transition pathways. The results reveal pronounced land cover changes across Asian drylands over the past two decades, characterized by expansions of grasslands (GRA), savannas (SAV), croplands (CRO), and water, snow, and ice (WSI), alongside contractions of shrublands (SH), mixed forests (MF), permanent wetlands (WET), and barren land (BAR). Land cover transition analysis indicates that the most prominent conversion pathways are from barren land to grasslands and from grasslands to croplands, reflecting the combined influences of climate variability and land use processes. Climate attribution analyses further demonstrate that vegetation dynamics across different stability zones exhibit distinct responses to long-term climate trends, with increasing maximum temperature, soil moisture, and vapor-related variables, together with declining precipitation, drought indices, and surface radiation, jointly shaping vegetation persistence, expansion, or degradation. By integrating long-term multi-source datasets and cloud-based geospatial computing, this study provides a scalable and reproducible framework for assessing land cover change and vegetation stability in arid and semi-arid regions. The findings enhance understanding of dryland ecosystem dynamics under climate change and support large-scale ecological assessment in data-scarce environments.
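The MODIS land-cover layers used in studies like this one can be pulled with the Earth Engine Python API. A minimal sketch for tabulating land-cover transitions between two years follows; the region geometry and scale are placeholders, and this is not the authors' script:

    # Sketch: MODIS land-cover transition counts in Google Earth Engine (Python API).
    # Requires an authenticated Earth Engine account; the region box is a placeholder.
    import ee

    ee.Initialize()

    lc = ee.ImageCollection("MODIS/061/MCD12Q1").select("LC_Type1")
    lc2001 = lc.filterDate("2001-01-01", "2001-12-31").first()
    lc2022 = lc.filterDate("2022-01-01", "2022-12-31").first()

    # Encode each (from, to) class pair as from*100 + to, then tabulate frequencies.
    transition = lc2001.multiply(100).add(lc2022).rename("transition")
    region = ee.Geometry.Rectangle([60.0, 35.0, 90.0, 50.0])

    hist = transition.reduceRegion(
        reducer=ee.Reducer.frequencyHistogram(),
        geometry=region,
        scale=500,
        maxPixels=1e10,
    ).get("transition")
    print(hist.getInfo())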
Additional Links: PMID-41920809
Citation:
@article {pmid41920809,
year = {2026},
author = {Wu, J and Wei, S and Hao, H and Chen, M and Ismail, S},
title = {Efficient large-scale land cover change detection using Google Earth Engine: Climate-driven vegetation dynamics in Asian drylands (2001-2022).},
journal = {PloS one},
volume = {21},
number = {4},
pages = {e0344835},
doi = {10.1371/journal.pone.0344835},
pmid = {41920809},
issn = {1932-6203},
mesh = {*Climate Change ; Asia ; Grassland ; Ecosystem ; Forests ; Wetlands ; *Environmental Monitoring/methods ; },
abstract = {Monitoring land cover dynamics and understanding vegetation responses to climate change are critical for ecological assessment and management in dryland regions. This study systematically analyzes land cover dynamics, vegetation type transitions, and their climatic drivers across Asian drylands from 2001 to 2022 by integrating MODIS land cover data, TerraClimate climate reanalysis datasets, and the Google Earth Engine (GEE) platform. Using a unified framework that combines land cover dynamic indices, transition probability and transfer matrix analyses, and climate attribution, we quantify spatiotemporal change patterns and identify dominant vegetation transition pathways. The results reveal pronounced land cover changes across Asian drylands over the past two decades, characterized by expansions of grasslands (GRA), savannas (SAV), croplands (CRO), and water, snow, and ice (WSI), alongside contractions of shrublands (SH), mixed forests (MF), permanent wetlands (WET), and barren land (BAR). Land cover transition analysis indicates that the most prominent conversion pathways are from barren land to grasslands and from grasslands to croplands, reflecting the combined influences of climate variability and land use processes. Climate attribution analyses further demonstrate that vegetation dynamics across different stability zones exhibit distinct responses to long-term climate trends, with increasing maximum temperature, soil moisture, and vapor-related variables, together with declining precipitation, drought indices, and surface radiation, jointly shaping vegetation persistence, expansion, or degradation. By integrating long-term multi-source datasets and cloud-based geospatial computing, this study provides a scalable and reproducible framework for assessing land cover change and vegetation stability in arid and semi-arid regions. The findings enhance understanding of dryland ecosystem dynamics under climate change and support large-scale ecological assessment in data-scarce environments.},
}
MeSH Terms:
*Climate Change
Asia
Grassland
Ecosystem
Forests
Wetlands
*Environmental Monitoring/methods
RevDate: 2026-04-01
Dynamic machine learning approach for workload prediction in cloud environments.
Scientific reports, 16(1).
Additional Links: PMID-41922438
Citation:
@article {pmid41922438,
year = {2026},
author = {Nashaat, M and Moussa, W and Rizk, R and Saber, W},
title = {Dynamic machine learning approach for workload prediction in cloud environments.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {},
pmid = {41922438},
issn = {2045-2322},
}
RevDate: 2026-03-31
Cloud assisted blockchain-enabled split federated learning framework for security and privacy-preserving of IoMT in healthcare 5.0.
Scientific reports pii:10.1038/s41598-026-41771-1 [Epub ahead of print].
The rapid adoption of the Internet of Medical Things (IoMT) in Federated Learning (FL)-enabled Smart Healthcare 5.0 raises pressing concerns regarding privacy, security, and real-time threat detection. FL remains vulnerable to training-phase attacks, parameter breaches, and aggregation threats, with the central server posing risks of inference, poisoning, and single-point failure. To address the challenges of single-point failure and the lack of an efficient decentralized trust mechanism, we propose a novel framework that integrates Split Federated Learning (SFL) with Blockchain. SFL uses a hybrid Deep Learning (DL) model that the architecture splits between the IoMT and Edge layers: Edge-based BiLSTM networks identify threats and temporal patterns, while lightweight CNNs at the IoT layer extract spatial features from patient data. While SFL facilitates safe decentralized training, Blockchain uses the Practical Byzantine Fault Tolerance (PBFT) consensus mechanism to guarantee tamper-proof integrity, trust, and authentication. This method improves decentralized trust, protects privacy, and lessens data leakage. The framework is a robust and real-time intrusion detection solution for next-generation smart healthcare, as demonstrated by experiments on ToN-IoT and IoT healthcare datasets, which show improved privacy, robustness against attacks, a block commit rate of 450 b/s, and reduced consensus time (250 ms), in addition to superior accuracy (99.95%).
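Split learning, as used in the framework above, partitions one model so the device computes only the front layers and sends intermediate activations to the edge, which completes the forward pass. A minimal PyTorch sketch of that partition is shown below; the layer sizes and the fake input are arbitrary, and the blockchain and PBFT components are outside its scope:

    # Minimal split-model forward pass: a small CNN front on the device, a BiLSTM
    # back on the edge. Layer sizes are arbitrary; illustration only.
    import torch
    import torch.nn as nn

    class DeviceFront(nn.Module):          # runs on the IoMT device
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.Conv1d(8, 16, 3, padding=1), nn.ReLU())
        def forward(self, x):              # x: (batch, 1, seq_len)
            return self.conv(x)            # smashed activations sent to the edge

    class EdgeBack(nn.Module):             # runs on the edge server
        def __init__(self, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(16, 32, batch_first=True, bidirectional=True)
            self.head = nn.Linear(64, n_classes)
        def forward(self, h):              # h: (batch, 16, seq_len)
            out, _ = self.lstm(h.transpose(1, 2))
            return self.head(out[:, -1, :])

    x = torch.randn(4, 1, 128)             # fake vital-sign windows
    logits = EdgeBack()(DeviceFront()(x))
    print(logits.shape)                    # torch.Size([4, 2])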
Additional Links: PMID-41917035
Citation:
@article {pmid41917035,
year = {2026},
author = {Baihan, A and Kryvinska, N and Amoon, M and Jiang, W and Ullah, Z and Shafiq, M},
title = {Cloud assisted blockchain-enabled split federated learning framework for security and privacy-preserving of IoMT in healthcare 5.0.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41771-1},
pmid = {41917035},
issn = {2045-2322},
abstract = {The rapid adoption of Internet of Medical Things (IoMT) in Federated Learning (FL)-enabled Smart Healthcare 5.0 raises pressing concerns regarding privacy, security, and real-time threat detection. FL remains vulnerable to training-phase attacks, parameter breaches, and aggregation threats, with the central server posing risks of inference, poisoning, and single-point failure. To address the challenge of single point failure and lack of efficient decentralized trust mechanism, we propose a novel framework that integrates Split Federated Learning (SFL) with Blockchain. SFL uses hybrid Deep Learning (DL) model which is divided between the IoMT and Edge layers by the architecture: Edge-based BiLSTM networks identify threats and temporal patterns, while lightweight CNN at the IoT layer extract spatial features from patient data. While SFL facilitates safe decentralized training, Blockchain uses the Practical Byzantine Fault Tolerance (PBFT) consensus mechanism to guarantee tamper-proof integrity, trust, and authentication. This method improves decentralized trust, protects privacy, and lessens data leakage. The framework is a robust and real-time intrusion detection solution for next-generation smart healthcare, as demonstrated by experiments on ToN-IoT and IoT healthcare datasets, which show improved privacy, robustness against attacks, block commit rate (450 b/s), and reduced consensus time (250 ms) in addition to superior accuracy (99.95%).},
}
RevDate: 2026-03-30
A unified low-carbon cybersecurity framework integrating energy-efficient intrusion detection, lightweight cryptography, and carbon-aware scheduling for edge-cloud architectures.
Scientific reports pii:10.1038/s41598-026-44260-7 [Epub ahead of print].
The rapid expansion of edge-cloud computing infrastructures has intensified both cybersecurity demands and the associated energy consumption and carbon footprint of intrusion detection systems (IDS). This paper presents GreenShield, a unified low-carbon cybersecurity framework that integrates energy-efficient deep learning-based intrusion detection with knowledge distillation and dynamic quantization, ASCON lightweight cryptography, hierarchical federated learning with gradient compression, and a carbon-aware scheduling engine across distributed edge-fog-cloud architectures. GreenShield employs a threat-adaptive quantization mechanism that scales model precision (4-32 bit) based on real-time threat levels and a carbon-conscious scheduling controller that dynamically aligns security workload execution with renewable energy availability forecasts. Extensive experiments on the UNSW-NB15 and CIC-IDS2017 datasets demonstrate that GreenShield achieves 98.73% detection accuracy with 67.4% energy reduction compared to conventional deep learning-based IDS, while reducing operational carbon emissions by up to 97.6% (equivalent to approximately 2.8 kg CO2-eq per hour savings in a typical edge deployment). The hierarchical federated learning architecture reduces communication overhead by 58.2% through Top-k gradient sparsification, and the dynamic quantization mechanism achieves 71.3% inference energy reduction during low-threat periods. These results establish GreenShield as a viable, scalable solution for sustainable cybersecurity that supports carbon-conscious security workflows in next-generation edge-cloud computing environments.
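The two control knobs described above, precision selected from threat level and execution deferred toward greener hours, reduce to a small policy function. A schematic sketch follows; the thresholds, bit widths, and carbon-intensity figures are illustrative and not taken from the paper:

    # Schematic policy: choose model precision from threat level and defer
    # deferrable work toward low-carbon hours. All numbers are illustrative.

    def pick_precision(threat_level: float) -> int:
        """Map a 0-1 threat score to a quantization bit width."""
        if threat_level > 0.8:
            return 32          # full precision under active attack
        if threat_level > 0.5:
            return 16
        if threat_level > 0.2:
            return 8
        return 4               # quiet periods run the cheapest model

    def schedule(deferrable: bool, carbon_gco2_per_kwh: float, threshold: float = 200.0) -> str:
        """Run now, or wait for a forecast low-carbon window."""
        if not deferrable or carbon_gco2_per_kwh <= threshold:
            return "run_now"
        return "defer_to_green_window"

    print(pick_precision(0.9), schedule(True, 350.0))    # 32 defer_to_green_window
    print(pick_precision(0.1), schedule(False, 350.0))   # 4 run_now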
Additional Links: PMID-41912554
Citation:
@article {pmid41912554,
year = {2026},
author = {Alshammari, A},
title = {A unified low-carbon cybersecurity framework integrating energy-efficient intrusion detection, lightweight cryptography, and carbon-aware scheduling for edge-cloud architectures.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-44260-7},
pmid = {41912554},
issn = {2045-2322},
support = {0054-1446-S//Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia/ ; },
abstract = {The rapid expansion of edge-cloud computing infrastructures has intensified both cybersecurity demands and the associated energy consumption and carbon footprint of intrusion detection systems (IDS). This paper presents GreenShield, a unified low-carbon cybersecurity framework that integrates energy-efficient deep learning-based intrusion detection with knowledge distillation and dynamic quantization, ASCON lightweight cryptography, hierarchical federated learning with gradient compression, and a carbon-aware scheduling engine across distributed edge-fog-cloud architectures. GreenShield employs a threat-adaptive quantization mechanism that scales model precision (4-32 bit) based on real-time threat levels and a carbon-conscious scheduling controller that dynamically aligns security workload execution with renewable energy availability forecasts. Extensive experiments on the UNSW-NB15 and CIC-IDS2017 datasets demonstrate that GreenShield achieves 98.73% detection accuracy with 67.4% energy reduction compared to conventional deep learning-based IDS, while reducing operational carbon emissions by up to 97.6% (equivalent to approximately 2.8 kg CO2-eq per hour savings in a typical edge deployment). The hierarchical federated learning architecture reduces communication overhead by 58.2% through Top-k gradient sparsification, and the dynamic quantization mechanism achieves 71.3% inference energy reduction during low-threat periods. These results establish GreenShield as a viable, scalable solution for sustainable cybersecurity that supports carbon-conscious security workflows in next-generation edge-cloud computing environments.},
}
RevDate: 2026-03-28
Toward Energy-Efficient and Low-Carbon Intrusion Detection in Edge and Cloud Computing Based on GreenShield Cybersecurity Framework.
Sensors (Basel, Switzerland), 26(6): pii:s26061780.
The fast growth of edge-cloud computing infrastructures has increased the cybersecurity burden even as it has substantially amplified the energy use and carbon footprint of intrusion detection systems (IDSs). To overcome this challenge, this paper proposes GreenShield, a low-carbon cybersecurity framework that combines lightweight cryptography, energy-efficient deep learning, and carbon-conscious system optimization across distributed edge and cloud setups. GreenShield employs a hierarchical federated learning architecture with integrated knowledge distillation and a carbon-aware scheduling controller that dynamically adjusts security response execution based on threat intensity and renewable energy availability. As extensive experiments on the UNSW-NB15 and CIC-IDS2017 datasets show, GreenShield attains 98.73% detection accuracy and is 67.4% more energy efficient than traditional deep learning-based IDSs. Further, the suggested system reduces operational carbon emissions by up to 97.6%, equivalent to a reduction of around 2.8 kg CO2-equivalent per hour in a typical edge deployment, without undermining detection performance. These findings suggest that GreenShield can be a meaningful alternative for viable, scalable, and sustainable cybersecurity that supports carbon-conscious security workflows in future edge-cloud computing architectures.
Additional Links: PMID-41901949
Citation:
@article {pmid41901949,
year = {2026},
author = {Alshammari, A},
title = {Toward Energy-Efficient and Low-Carbon Intrusion Detection in Edge and Cloud Computing Based on GreenShield Cybersecurity Framework.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {6},
pages = {},
doi = {10.3390/s26061780},
pmid = {41901949},
issn = {1424-8220},
abstract = {The fast growth of edge-cloud computing infrastructures has increased the cybersecurity burden even as it has substantially amplified the energy use and carbon footprint of intrusion detection systems (IDSs). In order to overcome this challenge, this paper suggests GreenShield, which is a framework of low-carbon cybersecurity involving lightweight cryptography, deep learning that is energy efficient, and carbon conscious system optimization across distributed edges and in cloud setup. GreenShield employs a hierarchical federated learning architecture with integrated knowledge distillation and a carbon-aware scheduling controller that dynamically adjusts security response execution based on threat intensity and renewable energy availability. As extensive experiments on the UNSW-NB15 and CIC-IDS2017 datasets show, GreenShield attains 98.73% detection accuracy and is 67.4% more energy efficient than traditional deeplearning-based IDSs. Further, the suggested system reduces the operational carbon emissions up to 97.6%, which is equivalent to a reduction of around 2.8 kg CO2-equivalent/per hour in a typical edge-deployment situation, yet it does not undermine the performance of the detection. These findings suggest that GreenShield can be one of the meaningful alternatives in providing viable and scalable sustainable cybersecurity that supports carbon-conscious security workflows in the future edge-cloud computing architecture.},
}
RevDate: 2026-03-28
AgroNova: An Autonomous IoT Platform for Greenhouse Climate Control.
Sensors (Basel, Switzerland), 26(6): pii:s26061861.
This study presents AgroNova-a hybrid IoT architecture for autonomous monitoring and management of microclimate in greenhouse environments. The system combines a capillary wireless sensor network, gateway-level rule-based agents, a server agent, cloud services and an advisory component based on a large language model (LLM) that supports local decision-making by incorporating external contextual information from meteorological services. The proposed architecture was validated through a seven-month deployment in an unheated tomato greenhouse, during which more than 380,000 environmental measurements were collected from five sensor nodes. The system operated continuously under real agricultural conditions, including during temporary internet connectivity interruptions, due to the autonomous gateway-level control and deterministic fallback mechanisms. The analysis of the collected data includes 3110 environmental threshold exceedance events, in which recovery dynamics, reaction latency, and actuator activation frequency were evaluated. The results show that the architecture supports stable autonomous operation under limited actuation conditions, with an average local reaction latency of less than 1 s, while physical actuator operations occur in approximately 2.3% of all control decisions. This behavior reflects a conservative control strategy that limits unnecessary mechanical operations and contributes to stable system operation. The experimental integration of a consultative LLM module within the server-side agent demonstrates the potential for context-enriched decision support using external meteorological data, while final control decisions remain under the authority of the gateway-based deterministic control mechanism. The main contribution of this study is the demonstration of a hybrid IoT architecture that combines edge-level autonomy with context-assisted reasoning, validated through deployment in a real greenhouse environment.
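The gateway-level rule-based control that keeps AgroNova autonomous during connectivity loss is, at its core, threshold rules with hysteresis so that actuators are not toggled on every reading. A schematic sketch follows; the setpoints and the temperature series are invented for illustration:

    # Schematic gateway rule: greenhouse ventilation with hysteresis (invented setpoints).
    VENT_ON_C = 28.0    # open vents above this temperature
    VENT_OFF_C = 25.0   # close them only after cooling this far (dead-band)

    def decide_vent(temp_c: float, vent_open: bool) -> bool:
        if not vent_open and temp_c >= VENT_ON_C:
            return True
        if vent_open and temp_c <= VENT_OFF_C:
            return False
        return vent_open              # inside the dead-band: keep the current state

    state, actuations = False, 0
    for t in [24.0, 27.5, 28.2, 29.0, 26.1, 24.8, 27.9, 28.5]:
        new_state = decide_vent(t, state)
        actuations += int(new_state != state)
        state = new_state
    print("actuator operations:", actuations)   # few operations despite many readings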
Additional Links: PMID-41902029
Citation:
@article {pmid41902029,
year = {2026},
author = {Toskov, B and Toskova, A},
title = {AgroNova: An Autonomous IoT Platform for Greenhouse Climate Control.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {6},
pages = {},
doi = {10.3390/s26061861},
pmid = {41902029},
issn = {1424-8220},
support = {D01-65/19.03.2021//Ministry of Education and Science of the Republic of Bulgaria/ ; },
abstract = {This study presents AgroNova-a hybrid IoT architecture for autonomous monitoring and management of microclimate in greenhouse environments. The system combines a capillary wireless sensor network, gateway-level rule-based agents, a server agent, cloud services and an advisory component based on a large language model (LLM) that supports local decision-making by incorporating external contextual information from meteorological services. The proposed architecture was validated through a seven-month deployment in an unheated tomato greenhouse, during which more than 380,000 environmental measurements were collected from five sensor nodes. The system operated continuously under real agricultural conditions, including during temporary internet connectivity interruptions, due to the autonomous gateway-level control and deterministic fallback mechanisms. The analysis of the collected data includes 3110 environmental threshold exceedance events, in which recovery dynamics, reaction latency, and actuator activation frequency were evaluated. The results show that the architecture supports stable autonomous operation under limited actuation conditions, with an average local reaction latency of less than 1 s, while physical actuator operations occur in approximately 2.3% of all control decisions. This behavior reflects a conservative control strategy that limits unnecessary mechanical operations and contributes to stable system operation. The experimental integration of a consultative LLM module within the server-side agent demonstrates the potential for context-enriched decision support using external meteorological data, while final control decisions remain under the authority of the gateway-based deterministic control mechanism. The main contribution of this study is the demonstration of a hybrid IoT architecture that combines edge-level autonomy with context-assisted reasoning, validated through deployment in a real greenhouse environment.},
}
RevDate: 2026-03-27
Enhancement of cryptography algorithms for security of cloud-based IoT with machine learning models.
Scientific reports pii:10.1038/s41598-026-45938-8 [Epub ahead of print].
The rapid expansion of cloud-based Internet of Things (IoT) systems has intensified security challenges due to the large-scale transmission of sensitive data from resource-constrained devices to cloud infrastructures. Conventional cryptographic techniques often impose high computational and memory overhead. Consequently, there is a critical need for security frameworks that balance strong data protection with efficient resource utilization while supporting intelligent threat detection. This study proposes an integrated security framework that combines lightweight and hybrid cryptographic algorithms with machine learning (ML) models to secure IoT data transmission in cloud-based environments. Four encryption techniques, XOR, ChaCha20, AES, and a hybrid AES-RSA scheme, are systematically evaluated in terms of memory consumption, CPU usage, and overall resource efficiency using the Overall Resource Consumption Score (ORCS). Secure data transmission is simulated using the MQTT protocol, while ML-based intrusion detection is performed using Random Forest (RF), XGBoost, CatBoost, and ensemble classifiers. Experiments are conducted on two real-world IoT datasets, MQTTEEB-D and CIC IoT 2023 for IoT network traffic. On the MQTTEEB-D dataset, the hybrid AES-RSA scheme achieved a low memory usage of 0.126 KB per traffic with an ORCS of 0.56, while the voting ensemble classifier attained the highest detection accuracy of 92.68%. On the CIC IoT 2023 dataset, comprising 605,839 test records, the hybrid AES-RSA method required 0.374 KB per traffic and achieved an ORCS of 0.5425, whereas the voting ensemble model achieved an accuracy of 81.09%. The findings demonstrate that hybrid cryptography provides an effective balance between security and efficiency for cloud-based IoT systems, while ensemble ML models significantly enhance intrusion detection performance.
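The hybrid AES-RSA scheme evaluated above follows the usual envelope pattern: encrypt the payload with a fresh AES key, then encrypt that key with the recipient's RSA public key. A minimal sketch with the Python cryptography package follows; key sizes and the sample payload are illustrative, and the MQTT transport and ML classifiers are not shown:

    # Hybrid AES-RSA envelope encryption sketch using the cryptography package.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Recipient key pair (in practice generated once and distributed out of band).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    payload = b'{"device": "sensor-17", "temp": 36.8}'
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender: fresh AES-256-GCM key for the payload, RSA-OAEP to wrap the key.
    aes_key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, payload, None)
    wrapped_key = public_key.encrypt(aes_key, oaep)

    # Receiver: unwrap the AES key, then decrypt the payload.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == payload
    print("roundtrip ok")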
Additional Links: PMID-41888344
Citation:
@article {pmid41888344,
year = {2026},
author = {Qasem, MA and Motiram, BM and Thorat, S and Al-Hejri, AM and Alshamrani, SS and Alshmrany, KM},
title = {Enhancement of cryptography algorithms for security of cloud-based IoT with machine learning models.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-45938-8},
pmid = {41888344},
issn = {2045-2322},
abstract = {The rapid expansion of cloud-based Internet of Things (IoT) systems has intensified security challenges due to the large-scale transmission of sensitive data from resource-constrained devices to cloud infrastructures. Conventional cryptographic techniques often impose high computational and memory overhead. Consequently, there is a critical need for security frameworks that balance strong data protection with efficient resource utilization while supporting intelligent threat detection. This study proposes an integrated security framework that combines lightweight and hybrid cryptographic algorithms with machine learning (ML) models to secure IoT data transmission in cloud-based environments. Four encryption techniques, XOR, ChaCha20, AES, and a hybrid AES-RSA scheme, are systematically evaluated in terms of memory consumption, CPU usage, and overall resource efficiency using the Overall Resource Consumption Score (ORCS). Secure data transmission is simulated using the MQTT protocol, while ML-based intrusion detection is performed using Random Forest (RF), XGBoost, CatBoost, and ensemble classifiers. Experiments are conducted on two real-world IoT datasets, MQTTEEB-D and CIC IoT 2023 for IoT network traffic. On the MQTTEEB-D dataset, the hybrid AES-RSA scheme achieved a low memory usage of 0.126 KB per traffic with an ORCS of 0.56, while the voting ensemble classifier attained the highest detection accuracy of 92.68%. On the CIC IoT 2023 dataset, comprising 605,839 test records, the hybrid AES-RSA method required 0.374 KB per traffic and achieved an ORCS of 0.5425, whereas the voting ensemble model achieved an accuracy of 81.09%. The findings demonstrate that hybrid cryptography provides an effective balance between security and efficiency for cloud-based IoT systems, while ensemble ML models significantly enhance intrusion detection performance.},
}
RevDate: 2026-03-27
CmpDate: 2026-03-27
A Cloud-Aware Scalable Architecture for Distributed Edge-Enabled BCI Biosensor System.
Biosensors, 16(3): pii:bios16030157.
BCI biosensors enable continuous monitoring of neural activity, but existing systems face challenges in scalability, latency, and reliable integration with cloud infrastructure. This work presents a cloud-aware, real-time cognitive grid architecture for multimodal BCI biosensors, validated at the system level through a full physical prototype. The system integrates the BioAmp EXG Pill for signal acquisition with an RP2040 microcontroller for local preprocessing using edge-resident TinyML deployment for on-device feature/inference feasibility coupled with environmental context sensors to augment signal context for downstream analytics talking to the external world via Wi-Fi/4G connectivity. A tiered data pipeline was implemented: SD card buffering for raw signals, Redis for near-real-time streaming, PostgreSQL for structured analytics, and AWS S3 with Glacier for long-term archival. End-to-end validation demonstrated consistent edge-level inference with bounded latency, while cloud-assisted telemetry and analytics exhibited variable transmission and processing delays consistent with cellular connectivity and serverless execution characteristics; packet loss remained below 5%. Visualization was achieved through Python 3.10 using Matplotlib GUI, Grafana 10.2.3 dashboards, and on-device LCD displays. Hybrid deployment strategies-local development, simulated cloud testing, and limited cloud usage for benchmark capture-enabled cost-efficient validation while preserving architectural fidelity and latency observability. The results establish a scalable, modular, and energy-efficient biosensor framework, providing a foundation for advanced analytics and translational BCI applications to be explored in subsequent work, with explicit consideration of both edge-resident TinyML inference and cloud-based machine learning workflows.
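The tiered pipeline described above, with Redis for near-real-time streaming and object storage for archival, can be sketched in a few lines. The hostnames, stream name, bucket, and file paths below are placeholders; the PostgreSQL analytics tier and the Glacier lifecycle transition are omitted:

    # Sketch of the streaming and archival tiers: push a reading to a Redis stream
    # and ship a buffered raw-signal file to S3. All names are placeholders.
    import json
    import boto3
    import redis

    r = redis.Redis(host="localhost", port=6379)
    sample = {"channel": "EEG1", "value": 42.7, "t_ms": 1712200000000}

    # Near-real-time tier: append the reading to a stream consumed by dashboards.
    r.xadd("bci:readings", {"payload": json.dumps(sample)})

    # Archival tier: upload the raw-signal buffer; a bucket lifecycle rule would
    # later transition it to Glacier.
    s3 = boto3.client("s3")
    s3.upload_file("raw_signals_2026-03-27.bin", "bci-archive-bucket",
                   "raw/2026/03/27/raw_signals.bin")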
Additional Links: PMID-41892049
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41892049,
year = {2026},
author = {Ghosh, S and Bhuvanakantham, R and Sindhujaa, P and Harishita, PB and Mohan, A and Gulyás, B and Máthé, D and Padmanabhan, P},
title = {A Cloud-Aware Scalable Architecture for Distributed Edge-Enabled BCI Biosensor System.},
journal = {Biosensors},
volume = {16},
number = {3},
pages = {},
doi = {10.3390/bios16030157},
pmid = {41892049},
issn = {2079-6374},
mesh = {*Biosensing Techniques ; *Brain-Computer Interfaces ; *Cloud Computing ; Humans ; Electroencephalography ; Signal Processing, Computer-Assisted ; },
abstract = {BCI biosensors enable continuous monitoring of neural activity, but existing systems face challenges in scalability, latency, and reliable integration with cloud infrastructure. This work presents a cloud-aware, real-time cognitive grid architecture for multimodal BCI biosensors, validated at the system level through a full physical prototype. The system integrates the BioAmp EXG Pill for signal acquisition with an RP2040 microcontroller for local preprocessing, uses an edge-resident TinyML deployment to demonstrate on-device feature extraction and inference feasibility, couples these with environmental context sensors that augment signal context for downstream analytics, and communicates with the external world via Wi-Fi/4G connectivity. A tiered data pipeline was implemented: SD card buffering for raw signals, Redis for near-real-time streaming, PostgreSQL for structured analytics, and AWS S3 with Glacier for long-term archival. End-to-end validation demonstrated consistent edge-level inference with bounded latency, while cloud-assisted telemetry and analytics exhibited variable transmission and processing delays consistent with cellular connectivity and serverless execution characteristics; packet loss remained below 5%. Visualization was achieved in Python 3.10 using a Matplotlib GUI, Grafana 10.2.3 dashboards, and on-device LCD displays. Hybrid deployment strategies (local development, simulated cloud testing, and limited cloud usage for benchmark capture) enabled cost-efficient validation while preserving architectural fidelity and latency observability. The results establish a scalable, modular, and energy-efficient biosensor framework, providing a foundation for advanced analytics and translational BCI applications to be explored in subsequent work, with explicit consideration of both edge-resident TinyML inference and cloud-based machine learning workflows.},
}
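The tiered pipeline described above moves readings from local buffers into Redis for near-real-time streaming and into AWS S3 (with Glacier lifecycle rules) for long-term archival. The following sketch covers only the Redis-to-S3 leg using redis-py and boto3; the host, stream name, and bucket name are hypothetical placeholders rather than the system's actual configuration.

# Sketch of the Redis -> S3 leg of a tiered telemetry pipeline.
# Host, stream, and bucket names are illustrative assumptions.
import json, time
import boto3
import redis

r = redis.Redis(host="localhost", port=6379)
s3 = boto3.client("s3")

def publish_sample(channel_uV: float, temp_c: float) -> None:
    # Near-real-time tier: append one reading to a Redis stream.
    r.xadd("eeg:stream", {"uV": channel_uV, "temp_c": temp_c, "ts": time.time()})

def archive_batch(bucket: str = "bci-archive") -> None:
    # Archival tier: drain the stream and push a JSON batch to S3;
    # bucket lifecycle rules can transition objects onward to Glacier.
    entries = r.xrange("eeg:stream")
    payload = json.dumps([{k.decode(): v.decode() for k, v in fields.items()}
                          for _, fields in entries])
    s3.put_object(Bucket=bucket,
                  Key=f"raw/{int(time.time())}.json",
                  Body=payload.encode())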
RevDate: 2026-03-26
A hybrid RL-GA-LSTM-AE framework for energy-aware and SLA-driven task scheduling in cloud computing environments.
Scientific reports pii:10.1038/s41598-026-43108-4 [Epub ahead of print].
Additional Links: PMID-41882156
@article {pmid41882156,
year = {2026},
author = {Narsimhulu, B and Kumar, TS},
title = {A hybrid RL-GA-LSTM-AE framework for energy-aware and SLA-driven task scheduling in cloud computing environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-43108-4},
pmid = {41882156},
issn = {2045-2322},
}
RevDate: 2026-03-25
Comparative evaluation of a deep learning method QNet and LCModel for MRS quantification on the cloud computing platform CloudBrain-MRS.
BMC medical imaging pii:10.1186/s12880-026-02292-5 [Epub ahead of print].
Additional Links: PMID-41877086
@article {pmid41877086,
year = {2026},
author = {Lin, M and Guo, L and Chen, D and Chen, J and Tu, Z and Huang, X and Wang, J and Qi, J and Long, Y and Huang, Z and Guo, D and Qu, X and Han, H},
title = {Comparative evaluation of a deep learning method QNet and LCModel for MRS quantification on the cloud computing platform CloudBrain-MRS.},
journal = {BMC medical imaging},
volume = {},
number = {},
pages = {},
doi = {10.1186/s12880-026-02292-5},
pmid = {41877086},
issn = {1471-2342},
support = {2023YFF0714200//National Key R&D Program of China/ ; 62331021 and 62371410//National Natural Science Foundation of China/ ; 2023J02005//Natural Science Foundation of Fujian Province of China/ ; 231107173160805//Industry-University Cooperation Projects of the Ministry of Education of China/ ; 0621-Z0332004//Zhou Yongtang Fund for High Talent Team/ ; },
}
RevDate: 2026-03-24
A 30 m Multi-Year Dataset of Major Crop Distributions in Xinjiang, China (2013-2024) Based on Harmonized Landsat-Sentinel-2 Data.
Scientific data pii:10.1038/s41597-026-07082-w [Epub ahead of print].
Accurate and timely information on the spatial distribution of crops is essential for ensuring food security, achieving sustainable agricultural management, and understanding ecosystem interactions. However, in large-scale arid regions like Xinjiang, China, constructing high-spatial-resolution, continuous, and multi-year crop distribution datasets remains a significant challenge due to complex terrain, sparse ground observations, and limited computational resources. In this study, we developed a robust crop classification framework leveraging the Google Earth Engine (GEE) cloud platform. The framework integrates all available NASA-Sentinel-2 (HLSL30) imagery to construct harmonic models based on NDVI and LSWI indices, effectively characterizing crop phenological trajectories. These features are combined with a Random Forest (RF) algorithm to achieve detailed identification of major crop types. To minimize interference from non-crop vegetation and background land cover, we implemented a pre-extracted cropland mask. Using this approach, we generated a 30 m resolution dataset of major crops (including cotton, maize, wheat, and rice) across Xinjiang for the period 2013-2024. Accuracy assessments using independent validation samples from 2018 and 2019 yielded producer accuracies of 0.83-0.99 and user accuracies of 0.83-0.96. The overall accuracy reached 0.90 and 0.93, with Kappa coefficients of 0.86 and 0.89, respectively. Furthermore, the estimated crop areas at the prefecture level show high consistency with official statistical yearbooks and align well with existing distribution maps of cotton, maize, and wheat. This dataset provides a systematic characterization of the long-term spatial dynamics of major crops in Xinjiang, offering critical and reliable data support for regional agricultural monitoring, food security assessment, policy formulation, and environmental change research.
Additional Links: PMID-41872228
@article {pmid41872228,
year = {2026},
author = {Liang, Q and Di, Y and Hao, X and Zhang, J and Ci, M and Sun, F and Wang, C and Fan, X and Guo, X},
title = {A 30 m Multi-Year Dataset of Major Crop Distributions in Xinjiang, China (2013-2024) Based on Harmonized Landsat-Sentinel-2 Data.},
journal = {Scientific data},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41597-026-07082-w},
pmid = {41872228},
issn = {2052-4463},
support = {XDB0720101//Chinese Academy of Sciences/ ; },
abstract = {Accurate and timely information on the spatial distribution of crops is essential for ensuring food security, achieving sustainable agricultural management, and understanding ecosystem interactions. However, in large-scale arid regions like Xinjiang, China, constructing high-spatial-resolution, continuous, and multi-year crop distribution datasets remains a significant challenge due to complex terrain, sparse ground observations, and limited computational resources. In this study, we developed a robust crop classification framework leveraging the Google Earth Engine (GEE) cloud platform. The framework integrates all available NASA-Sentinel-2 (HLSL30) imagery to construct harmonic models based on NDVI and LSWI indices, effectively characterizing crop phenological trajectories. These features are combined with a Random Forest (RF) algorithm to achieve detailed identification of major crop types. To minimize interference from non-crop vegetation and background land cover, we implemented a pre-extracted cropland mask. Using this approach, we generated a 30 m resolution dataset of major crops (including cotton, maize, wheat, and rice) across Xinjiang for the period 2013-2024. Accuracy assessments using independent validation samples from 2018 and 2019 yielded producer accuracies of 0.83-0.99 and user accuracies of 0.83-0.96. The overall accuracy reached 0.90 and 0.93, with Kappa coefficients of 0.86 and 0.89, respectively. Furthermore, the estimated crop areas at the prefecture level show high consistency with official statistical yearbooks and align well with existing distribution maps of cotton, maize, and wheat. This dataset provides a systematic characterization of the long-term spatial dynamics of major crops in Xinjiang, offering critical and reliable data support for regional agricultural monitoring, food security assessment, policy formulation, and environmental change research.},
}
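The harmonic-model workflow above can be sketched in the Google Earth Engine Python API: per-pixel NDVI is regressed on constant, cosine, and sine terms, and the fitted coefficients feed a Random Forest classifier. The collection ID, band names, training asset, and parameters below are assumptions for illustration and do not reproduce the paper's exact feature set or sampling design.

# Sketch of harmonic NDVI features plus Random Forest classification in GEE.
import math
import ee

ee.Initialize()  # assumes an authenticated Earth Engine session

hls = (ee.ImageCollection("NASA/HLS/HLSL30/v002")
       .filterDate("2020-01-01", "2020-12-31"))

def add_harmonic_bands(img):
    ndvi = img.normalizedDifference(["B5", "B4"]).rename("NDVI")  # NIR, red (assumed band names)
    t = img.date().difference(ee.Date("2020-01-01"), "year")
    omega = ee.Number(t).multiply(2 * math.pi)
    return (ndvi
            .addBands(ee.Image.constant(1).rename("constant"))
            .addBands(ee.Image.constant(omega.cos()).rename("cos"))
            .addBands(ee.Image.constant(omega.sin()).rename("sin"))
            .toFloat())

# Per-pixel fit of NDVI = a + b*cos(2*pi*t) + c*sin(2*pi*t).
fit = (hls.map(add_harmonic_bands)
          .select(["constant", "cos", "sin", "NDVI"])
          .reduce(ee.Reducer.linearRegression(numX=3, numY=1)))
coeffs = (fit.select("coefficients")
             .arrayProject([0])
             .arrayFlatten([["a", "b", "c"]]))

# Hypothetical training points carrying a 'crop' class property.
training_points = ee.FeatureCollection("users/example/xinjiang_crop_samples")
samples = coeffs.sampleRegions(collection=training_points, scale=30)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=samples, classProperty="crop", inputProperties=["a", "b", "c"])
crop_map = coeffs.classify(classifier)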
RevDate: 2026-03-24
Quantum Computing and Visualization Research Challenges and Opportunities.
IEEE computer graphics and applications, 46(2):120-130.
Quantum computing (QC) has experienced rapid growth in recent years with the advent of robust programming environments, readily accessible software simulators and cloud-based QC hardware platforms, and growing interest in learning how to design useful methods that leverage this emerging technology for practical applications. From the perspective of the field of visualization, this article examines research challenges and opportunities along the path from initial feasibility to practical use of QC platforms applied to meaningful problems.
Additional Links: PMID-41875008
@article {pmid41875008,
year = {2026},
author = {Bethel, EW and Van Beeumen, R and Perciano, T and Rhyne, TM},
title = {Quantum Computing and Visualization Research Challenges and Opportunities.},
journal = {IEEE computer graphics and applications},
volume = {46},
number = {2},
pages = {120-130},
doi = {10.1109/MCG.2025.3646315},
pmid = {41875008},
issn = {1558-1756},
abstract = {Quantum computing (QC) has experienced rapid growth in recent years with the advent of robust programming environments, readily accessible software simulators and cloud-based QC hardware platforms, and growing interest in learning how to design useful methods that leverage this emerging technology for practical applications. From the perspective of the field of visualization, this article examines research challenges and opportunities along the path from initial feasibility to practical use of QC platforms applied to meaningful problems.},
}
RevDate: 2026-03-23
A low-latency deep learning framework for volcanic ash cloud nowcasting using geostationary satellite imagery.
Scientific reports pii:10.1038/s41598-026-42230-7 [Epub ahead of print].
Rapid assessment of hazardous aerosol dispersion is critical for emergency response, yet operational dispersion workflows can exhibit end-to-end latency that is incompatible with the first minutes of decision-making. This study develops and validates a deep learning approach for near-real-time nowcasting of volcanic ash dispersion from geostationary observations. The model was trained on an archive of volcanic ash satellite imagery from EUMETSAT's SEVIRI instrument (Ash RGB composite) and achieved a structural similarity index of 0.88 for 15-minute next-frame forecasts. The complete edge workflow, including data download and inference, runs in under five seconds on an NVIDIA Jetson AGX Orin. To illustrate how the same nowcasting pipeline can be used for hypothetical scenario exploration across particulate sources, a pixel-based event-injection algorithm is introduced to overlay synthetic plumes of varying sizes into real-time satellite frames before inference. Scenario demonstrations parameterized by nuclear-yield-inspired sizes (10 kt to 100 Mt) are presented at urban (Paris, London, Berlin), national (Iberian Peninsula), and continental (Europe-wide) scales. These scenario outputs are intended as illustrative, low-latency visualizations of kinematic transport patterns in the SEVIRI observation space, not as validated predictions of nuclear plume morphology. The primary contribution is a fast, low-cost volcanic ash nowcasting system, complemented by a generalizable injection framework for rapid scenario visualization on edge computing.
Additional Links: PMID-41866598
@article {pmid41866598,
year = {2026},
author = {Alves, D and Radeta, M and Mendonça, F and Pereira, L and Morgado-Dias, F},
title = {A low-latency deep learning framework for volcanic ash cloud nowcasting using geostationary satellite imagery.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-42230-7},
pmid = {41866598},
issn = {2045-2322},
support = {10.54499/LA/P/0083/2022 & UID/50009/202//Fundação para a Ciência e a Tecnologia/ ; },
abstract = {Rapid assessment of hazardous aerosol dispersion is critical for emergency response, yet operational dispersion workflows can exhibit end-to-end latency that is incompatible with the first minutes of decision-making. This study develops and validates a deep learning approach for near-real-time nowcasting of volcanic ash dispersion from geostationary observations. The model was trained on an archive of volcanic ash satellite imagery from EUMETSAT's SEVIRI instrument (Ash RGB composite) and achieved a structural similarity index of 0.88 for 15-minute next-frame forecasts. The complete edge workflow, including data download and inference, runs in under five seconds on an NVIDIA Jetson AGX Orin. To illustrate how the same nowcasting pipeline can be used for hypothetical scenario exploration across particulate sources, a pixel-based event-injection algorithm is introduced to overlay synthetic plumes of varying sizes into real-time satellite frames before inference. Scenario demonstrations parameterized by nuclear-yield-inspired sizes (10 kt to 100 Mt) are presented at urban (Paris, London, Berlin), national (Iberian Peninsula), and continental (Europe-wide) scales. These scenario outputs are intended as illustrative, low-latency visualizations of kinematic transport patterns in the SEVIRI observation space, not as validated predictions of nuclear plume morphology. The primary contribution is a fast, low-cost volcanic ash nowcasting system, complemented by a generalizable injection framework for rapid scenario visualization on edge computing.},
}
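The 0.88 figure reported above is the standard structural similarity index (SSIM) between predicted and observed frames. A brief sketch of how such a score can be computed with scikit-image follows; the synthetic frames below stand in for real SEVIRI Ash RGB composites.

# Sketch of scoring a 15-minute next-frame nowcast with SSIM.
# Array shapes, scaling, and the synthetic data are illustrative.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def score_nowcast(predicted: np.ndarray, observed: np.ndarray) -> float:
    # Both frames are 2-D float arrays scaled to [0, 1] (a single channel
    # is used here for simplicity).
    return ssim(observed, predicted, data_range=1.0)

rng = np.random.default_rng(0)
frame_t1 = rng.random((256, 256))
frame_pred = np.clip(frame_t1 + rng.normal(0, 0.02, (256, 256)), 0, 1)
print(f"SSIM = {score_nowcast(frame_pred, frame_t1):.2f}")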
RevDate: 2026-03-21
Edge machine learning over IoT for chipless RFID environmental sensing in smart agriculture.
Scientific reports, 16(1):.
Chipless Radio Frequency Identification (RFID) has emerged as a promising technology for battery-free and maintenance-free sensing in Internet of Things (IoT) applications, particularly in smart agriculture where large-scale deployment and long-term autonomy are essential. However, practical agricultural sensing requires more than isolated tag or sensor designs; it demands an integrated system that jointly supports high-capacity identification, reliable multi-parameter environmental monitoring, and robust data interpretation within realistic radio-frequency (RF) environments. This work presents an IoT-ready chipless RFID framework that unifies resonator-based tag design, functional sensing materials, and physics-guided machine learning within a coherent hardware–analytics architecture. A compact 24-bit chipless RFID identification tag based on T-shaped resonators is designed within a 60 × 40 mm² footprint, achieving dense encoding with approximately 100 MHz spectral spacing. The tag is validated through full-wave CST simulations and anechoic-chamber radar cross-section (RCS) measurements, demonstrating high-Q, well-isolated spectral notches. Building on this platform, a 12-resonator sensing variant is developed for dual-parameter microclimate monitoring, exploiting the temperature-sensitive permittivity of Taconic RF-35 and the humidity responsiveness of a Kapton HN/PVA bilayer, with approximately 160 MHz reserved per sensing slot to prevent spectral overlap. To enable reliable operation under deployment-realistic interference, a physics-driven, edge-deployable machine learning framework is introduced, operating on interpretable RCS features rather than raw spectra. A hybrid ensemble combining Random Forest, Support Vector Regression, and XGBoost models is developed and augmented with k-means-based anomaly detection. This framework achieves 96.2% temperature-bin classification accuracy, with mean errors of ± 1.3 °C for temperature and ± 2.1% for relative humidity under frequency jitter, attenuation, and multipath distortions. The proposed co-design demonstrates a scalable, interpretable, and energy-autonomous solution for precision agriculture. It is compatible with edge gateways and cloud or serverless IoT infrastructures.
Additional Links: PMID-41708718
@article {pmid41708718,
year = {2026},
author = {Mekki, K and Ghezaiel, N and Slimene, MB and Neffati, S and Rmili, H and Gharsallah, A},
title = {Edge machine learning over IoT for chipless RFID environmental sensing in smart agriculture.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {},
pmid = {41708718},
issn = {2045-2322},
abstract = {Chipless Radio Frequency Identification (RFID) has emerged as a promising technology for battery-free and maintenance-free sensing in Internet of Things (IoT) applications, particularly in smart agriculture where large-scale deployment and long-term autonomy are essential. However, practical agricultural sensing requires more than isolated tag or sensor designs; it demands an integrated system that jointly supports high-capacity identification, reliable multi-parameter environmental monitoring, and robust data interpretation within realistic radio-frequency (RF) environments. This work presents an IoT-ready chipless RFID framework that unifies resonator-based tag design, functional sensing materials, and physics-guided machine learning within a coherent hardware–analytics architecture. A compact 24-bit chipless RFID identification tag based on T-shaped resonators is designed within a 60 × 40 mm² footprint, achieving dense encoding with approximately 100 MHz spectral spacing. The tag is validated through full-wave CST simulations and anechoic-chamber radar cross-section (RCS) measurements, demonstrating high-Q, well-isolated spectral notches. Building on this platform, a 12-resonator sensing variant is developed for dual-parameter microclimate monitoring, exploiting the temperature-sensitive permittivity of Taconic RF-35 and the humidity responsiveness of a Kapton HN/PVA bilayer, with approximately 160 MHz reserved per sensing slot to prevent spectral overlap. To enable reliable operation under deployment-realistic interference, a physics-driven, edge-deployable machine learning framework is introduced, operating on interpretable RCS features rather than raw spectra. A hybrid ensemble combining Random Forest, Support Vector Regression, and XGBoost models is developed and augmented with k-means-based anomaly detection. This framework achieves 96.2% temperature-bin classification accuracy, with mean errors of ± 1.3 °C for temperature and ± 2.1% for relative humidity under frequency jitter, attenuation, and multipath distortions. The proposed co-design demonstrates a scalable, interpretable, and energy-autonomous solution for precision agriculture. It is compatible with edge gateways and cloud or serverless IoT infrastructures.},
}
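The hybrid ensemble described above combines Random Forest, Support Vector Regression, and XGBoost, with k-means used to flag anomalous readings. The sketch below shows one plausible scikit-learn/xgboost arrangement of that combination for temperature regression; the synthetic features, hyperparameters, and anomaly threshold are placeholders, not the paper's tuned configuration.

# Sketch of an RF + SVR + XGBoost voting ensemble with a k-means outlier flag.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.svm import SVR
from sklearn.cluster import KMeans
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                     # e.g. notch frequency shifts and depths
y = 20 + 3 * X[:, 0] + rng.normal(0, 0.5, 500)    # synthetic temperature (degrees C)

ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("svr", SVR(C=10.0, epsilon=0.1)),
    ("xgb", XGBRegressor(n_estimators=200, max_depth=4)),
]).fit(X, y)

# k-means anomaly screen: flag readings far from every learned cluster centre.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dist = np.min(km.transform(X), axis=1)
is_anomaly = dist > np.percentile(dist, 99)

print("predicted T:", ensemble.predict(X[:3]).round(1), "| anomalies:", int(is_anomaly.sum()))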
RevDate: 2026-03-22
Enhanced smart commuting with artificial intelligence for intelligent health and safety monitoring in school buses.
Scientific reports pii:10.1038/s41598-026-41628-7 [Epub ahead of print].
This paper introduces ESC.AI (Enhanced Smart Commuting with Artificial Intelligence), an intelligent and integrated safety framework designed to improve health monitoring, environmental awareness, behavioral detection, driver supervision, and route optimization in school bus transportation systems. The proposed framework combines multimodal sensing, edge-based artificial intelligence, adaptive routing, and secure data management to enable proactive risk detection and real-time decision-making during transit. Although school buses remain one of the safest modes of transportation for students, recent national statistics continue to highlight persistent risks related to health emergencies, behavioral incidents, and environmental hazards. According to data from the National Safety Council (NSC) and the National Highway Traffic Safety Administration (NHTSA), school bus-related crashes resulted in 104 fatalities in the United States in 2022, representing a 3.7% decrease from 2021. Between 2013 and 2022, approximately 71% of fatalities involved occupants of other vehicles, 16% were pedestrians, and only 5% were school bus passengers. Injury statistics show a similar pattern, emphasizing the need for safety solutions that protect both students and surrounding road users. ESC.AI addresses these challenges through a unified platform that integrates Internet of Things (IoT) sensors for physiological and environmental monitoring, computer vision-based behavioral analysis, driver monitoring, and intelligent routing. Edge-cloud computing is employed to ensure low-latency responses, while blockchain-based mechanisms are used selectively to enhance data integrity, traceability, and access control for sensitive safety records. Together, these components form a cohesive and scalable framework aimed at improving transparency, responsiveness, and reliability in school transportation systems.
Additional Links: PMID-41864994
@article {pmid41864994,
year = {2026},
author = {Hossam, H and Tamer, R and Mohsen, M and Ahmed, O and Alaa, M and Mourad, J and Adel, R and Hatem, A and Sherif, S},
title = {Enhanced smart commuting with artificial intelligence for intelligent health and safety monitoring in school buses.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41628-7},
pmid = {41864994},
issn = {2045-2322},
abstract = {This paper introduces ESC.AI (Enhanced Smart Commuting with Artificial Intelligence), an intelligent and integrated safety framework designed to improve health monitoring, environmental awareness, behavioral detection, driver supervision, and route optimization in school bus transportation systems. The proposed framework combines multimodal sensing, edge-based artificial intelligence, adaptive routing, and secure data management to enable proactive risk detection and real-time decision-making during transit. Although school buses remain one of the safest modes of transportation for students, recent national statistics continue to highlight persistent risks related to health emergencies, behavioral incidents, and environmental hazards. According to data from the National Safety Council (NSC) and the National Highway Traffic Safety Administration (NHTSA), school bus-related crashes resulted in 104 fatalities in the United States in 2022, representing a 3.7% decrease from 2021. Between 2013 and 2022, approximately 71% of fatalities involved occupants of other vehicles, 16% were pedestrians, and only 5% were school bus passengers. Injury statistics show a similar pattern, emphasizing the need for safety solutions that protect both students and surrounding road users. ESC.AI addresses these challenges through a unified platform that integrates Internet of Things (IoT) sensors for physiological and environmental monitoring, computer vision-based behavioral analysis, driver monitoring, and intelligent routing. Edge-cloud computing is employed to ensure low-latency responses, while blockchain-based mechanisms are used selectively to enhance data integrity, traceability, and access control for sensitive safety records. Together, these components form a cohesive and scalable framework aimed at improving transparency, responsiveness, and reliability in school transportation systems.},
}
RevDate: 2026-03-21
Cloud Computing Startup for Data Science- A Tutorial.
The American surgeon [Epub ahead of print].
Cloud computing has revolutionized analysis of large datasets. This tutorial provides a comprehensive, practical guide for research groups seeking to leverage cloud platforms for data analysis. The tutorial covers the foundations of cloud computing, including its history, rationale, and use cases for research, followed by detailed comparisons of the 3 major platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Side-by-side comparisons of services, costs, ease of use, and selection guidelines assist researchers in choosing the most appropriate platform. A complete step-by-step example using hospital price transparency data demonstrates the entire workflow from account creation through results retrieval, enabling researchers to begin productive cloud-based analysis within hours.
Additional Links: PMID-41863225
@article {pmid41863225,
year = {2026},
author = {Kuo, AC and Hiraldo, L and Wolansky, RL and Sujka, J and Kuo, PC},
title = {Cloud Computing Startup for Data Science- A Tutorial.},
journal = {The American surgeon},
volume = {},
number = {},
pages = {31348261433646},
doi = {10.1177/00031348261433646},
pmid = {41863225},
issn = {1555-9823},
abstract = {Cloud computing has revolutionized analysis of large datasets. This tutorial provides a comprehensive, practical guide for research groups seeking to leverage cloud platforms for data analysis. The tutorial covers the foundations of cloud computing, including its history, rationale, and use cases for research, followed by detailed comparisons of the 3 major platforms: Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Side-by-side comparisons of services, costs, ease of use, and selection guidelines assist researchers in choosing the most appropriate platform. A complete step-by-step example using hospital price transparency data demonstrates the entire workflow from account creation through results retrieval, enabling researchers to begin productive cloud-based analysis within hours.},
}
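To give a flavour of the workflow such a tutorial walks through, the sketch below loads a local CSV of hospital price-transparency data into BigQuery (one of the three platforms compared) and runs an aggregate query with the google-cloud-bigquery client; the project, dataset, table, file, and column names are hypothetical.

# Sketch of a load-and-query workflow on Google Cloud BigQuery.
# Names and schema are hypothetical; an authenticated GCP session is assumed.
from google.cloud import bigquery

client = bigquery.Client(project="my-research-project")
table_id = "my-research-project.pricing.hospital_prices"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,          # infer the schema from the header row
    skip_leading_rows=1,
)
with open("hospital_prices.csv", "rb") as f:
    client.load_table_from_file(f, table_id, job_config=job_config).result()

query = """
    SELECT hospital_name, AVG(gross_charge) AS mean_charge
    FROM `my-research-project.pricing.hospital_prices`
    GROUP BY hospital_name
    ORDER BY mean_charge DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.hospital_name, round(row.mean_charge, 2))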
RevDate: 2026-03-20
CmpDate: 2026-03-20
The health informatics centre: a safe haven and trusted research environment enabling world-leading research.
International journal of population data science, 8(6):3320.
INTRODUCTION: The Health Informatics Centre (HIC) is a regional Scottish Safe Haven dedicated to secure data management, ensuring its integrity, confidentiality, and availability through a robust information governance framework.
METHODS: As a data processor, HIC is responsible for the secure curation, storage, and provision of research data extracts. Research-ready data are made available to approved researchers via our customisable cloud Trusted Research Environment (TRE).
RESULTS: The available granular data spans over 20 years and covers 2.1 million people of the Scottish population, and HIC offers more than 170 datasets, with the most commonly used published with a digital object identifier. Data sources include clinical, hospital, laboratory, imaging, and research datasets, which can be linked to new and existing datasets. The data is quality-assured and released as project-specific extracts, ensuring robust privacy protection and research readiness. HIC's infrastructure is secure by design, supports high-performance computing and advanced data analytics, and is customisable to researchers' needs.
CONCLUSION: HIC has a long history of supporting a wide range of data-led research projects as a trusted and capable partner. At the time of publication 175 projects across academia, the NHS, and public sector organisations are active within HIC. Through adaptability, innovation and investment in people and infrastructure we have established a sustainable model which will continue to meet future needs and demands from world-leading research with sensitive data.
Additional Links: PMID-41858610
@article {pmid41858610,
year = {2023},
author = {Ward, LM and Johnston, J and Milburn, KR and Hall, C and Jones, C and Guignard-Duff, M and Krueger, S and Milligan, G and Anderson, J and Walls, R and Cole, C},
title = {The health informatics centre: a safe haven and trusted research environment enabling world-leading research.},
journal = {International journal of population data science},
volume = {8},
number = {6},
pages = {3320},
pmid = {41858610},
issn = {2399-4908},
mesh = {Scotland ; *Medical Informatics/organization & administration ; Humans ; Confidentiality ; *Computer Security ; *Data Management ; Biomedical Research ; Research ; },
abstract = {INTRODUCTION: The Health Informatics Centre (HIC) is a regional Scottish Safe Haven dedicated to secure data management, ensuring its integrity, confidentiality, and availability through a robust information governance framework.
METHODS: As a data processor, HIC is responsible for the secure curation, storage, and provision of research data extracts. Research-ready data are made available to approved researchers via our customisable cloud Trusted Research Environment (TRE).
RESULTS: The available granular data spans over 20 years and covers 2.1 million people of the Scottish population, and HIC offers more than 170 datasets, with the most commonly used published with a digital object identifier. Data sources include clinical, hospital, laboratory, imaging, and research datasets, which can be linked to new and existing datasets. The data is quality-assured and released as project-specific extracts, ensuring robust privacy protection and research readiness. HIC's infrastructure is secure by design, supports high-performance computing and advanced data analytics, and is customisable to researchers' needs.
CONCLUSION: HIC has a long history of supporting a wide range of data-led research projects as a trusted and capable partner. At the time of publication 175 projects across academia, the NHS, and public sector organisations are active within HIC. Through adaptability, innovation and investment in people and infrastructure we have established a sustainable model which will continue to meet future needs and demands from world-leading research with sensitive data.},
}
RevDate: 2026-03-19
CmpDate: 2026-03-19
Sant'Andrea della valle dataset: Georeferenced 2D study models of a Theatine Church.
Data in brief, 65:112618.
This article presents a georeferenced dataset of 2D study models of the church of Sant'Andrea della Valle in Rome, a key monument of Theatine architecture. Integrating architectural, historical, and geospatial data, the dataset supports interdisciplinary research, heritage documentation, and digital preservation. It originates from a 3D laser scanning campaign conducted in 2023, combined with archival and bibliographic investigations mapping Theatine settlements in Italy and abroad. Sant'Andrea della Valle was selected for its historical relevance and unresolved questions about its architectural evolution. The dataset includes a CAD file with 2D restitution (bidimensional study model) derived from 3D point clouds, a GIS-compatible shapefile, and a KML file for Google Earth. The point cloud, captured with a Z + F 5016 scanner, was aligned in Z + F LaserControl and processed in AutoCAD to produce plans, elevations, and sections later georeferenced in AutoCAD Map 3D; georeferencing ensured spatial consistency and integration with other datasets for multi-scale architectural and urban analysis. Archival sources were digitized and linked to the spatial models, enabling reconstruction of construction phases and stylistic evolution. The dataset supports architectural, historical, and urban studies by providing standardized, reusable CAD, GIS, and KML formats. Despite some gaps in archival data and variable point cloud density, it offers a robust, reproducible basis for analyzing Theatine architecture and preserving its heritage through digital documentation. The georeferencing of the plan derived from the surveys can contribute to creating a comprehensive, georeferenced mapping of all Theatine works in Italy and worldwide.
Additional Links: PMID-41852844
@article {pmid41852844,
year = {2026},
author = {Bianchini, C and De Stefano, F and Giuliani, G and Griffo, M and Porfiri, F},
title = {Sant'Andrea della valle dataset: Georeferenced 2D study models of a Theatine Church.},
journal = {Data in brief},
volume = {65},
number = {},
pages = {112618},
pmid = {41852844},
issn = {2352-3409},
abstract = {This article presents a georeferenced dataset of 2D study models of the church of Sant'Andrea della Valle in Rome, a key monument of Theatine architecture. Integrating architectural, historical, and geospatial data, the dataset supports interdisciplinary research, heritage documentation, and digital preservation. It originates from a 3D laser scanning campaign conducted in 2023, combined with archival and bibliographic investigations mapping Theatine settlements in Italy and abroad. Sant'Andrea della Valle was selected for its historical relevance and unresolved questions about its architectural evolution. The dataset includes a CAD file with 2D restitution (bidimensional study model) derived from 3D point clouds, a GIS-compatible shapefile, and a KML file for Google Earth. The point cloud, captured with a Z + F 5016 scanner, was aligned in Z + F LaserControl and processed in AutoCAD to produce plans, elevations, and sections later georeferenced in AutoCAD Map 3D; georeferencing ensured spatial consistency and integration with other datasets for multi-scale architectural and urban analysis. Archival sources were digitized and linked to the spatial models, enabling reconstruction of construction phases and stylistic evolution. The dataset supports architectural, historical, and urban studies by providing standardized, reusable CAD, GIS, and KML formats. Despite some gaps in archival data and variable point cloud density, it offers a robust, reproducible basis for analyzing Theatine architecture and preserving its heritage through digital documentation. The georeferencing of the plan derived from the surveys can contribute to creating a comprehensive, georeferenced mapping of all Theatine works in Italy and worldwide.},
}
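For reuse of the GIS-compatible layers described above, a short geopandas sketch is given below: it reads the shapefile, reprojects it to WGS84, and exports GeoJSON for web or Google Earth overlays. The file and layer names are hypothetical, since the dataset's actual naming may differ.

# Sketch of loading and reprojecting the georeferenced 2D plan layer.
import geopandas as gpd

plans = gpd.read_file("sant_andrea_della_valle_plan.shp")   # hypothetical file name
print(plans.crs, len(plans), "features")

plans_wgs84 = plans.to_crs(epsg=4326)                        # Google Earth / web maps
plans_wgs84.to_file("sant_andrea_plan.geojson", driver="GeoJSON")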
RevDate: 2026-03-19
CmpDate: 2026-03-19
mHealth Intervention to Promote Nonexercise Physical Activity in Patients With Type 2 Diabetes: Secondary Analysis and Implementation Study.
JMIR formative research, 10:e80304 pii:v10i1e80304.
BACKGROUND: Physical activity (PA) has an important role in the prevention and treatment of type 2 diabetes (T2D). Interventions with mobile-based technology (mobile health [mHealth]) seem promising in PA promotion, but their behavioral framework is often vague, and the implementation is seldom reported.
OBJECTIVE: This paper examines perceived behavior change needs and implementation of an mHealth approach in increasing nonexercise PA in patients with T2D.
METHODS: A 3-arm mHealth intervention was conducted in primary care. Information on perceived behavior change needs was collected with a modified capability, opportunity, motivation-behavior (COM-B) questionnaire before the intervention from a separate sample of patients with T2D (n=25) and at the intervention baseline (n=119). Implementation evaluation focused on the fidelity and acceptability of the main arm of the intervention (n=39), which included 24-hour accelerometer use, a smartphone app with personal feedback, a PA leaflet, a YouTube video on walking, and individual counseling with 3 face-to-face sessions and 4 telephone contacts. Data on fidelity were accumulated during the intervention through counseling cards and cloud computing. Data on acceptability were collected with a questionnaire at the end of the intervention (Likert scale from 1 to 5). Data analysis was mainly descriptive.
RESULTS: The participants' responses revealed 3 items in capability and 2 in motivation, which stood out as perceived behavior change needs. Moreover, the main intervention arm showed good fidelity (eg, face-to-face sessions: 112/117, 96% and telephone contacts completed: 145/156, 93%; mean weekly accelerometer use 54%; ranging from 80% to 17% during the intervention) and acceptability (mean score ranging from 3.8 to 4.8), although some challenges were also experienced, especially in cloud-computed feedback and accelerometer-app use.
CONCLUSIONS: The findings on behavior change needs call for additional research since no comparable studies were found. In addition, the explanatory value of the COM-B model and the psychometric properties of the COM-B questionnaire deserve further attention. The main intervention arm seemed applicable to clinical practice. However, the challenges discovered underscore the importance of pretesting technology-based approaches in patients with T2D.
Additional Links: PMID-41855496
@article {pmid41855496,
year = {2026},
author = {Aittasalo, M and Tokola, K and Vähä-Ypyä, H and Husu, P and Mänttäri, A and Martiskainen, T and Laatikainen, T and Sievänen, H},
title = {mHealth Intervention to Promote Nonexercise Physical Activity in Patients With Type 2 Diabetes: Secondary Analysis and Implementation Study.},
journal = {JMIR formative research},
volume = {10},
number = {},
pages = {e80304},
doi = {10.2196/80304},
pmid = {41855496},
issn = {2561-326X},
mesh = {Humans ; *Diabetes Mellitus, Type 2/therapy/psychology ; Female ; Male ; Middle Aged ; Telemedicine ; *Exercise/psychology ; Aged ; Surveys and Questionnaires ; *Health Promotion/methods ; Motivation ; Mobile Applications ; Adult ; },
abstract = {BACKGROUND: Physical activity (PA) has an important role in the prevention and treatment of type 2 diabetes (T2D). Interventions with mobile-based technology (mobile health [mHealth]) seem promising in PA promotion, but their behavioral framework is often vague, and the implementation is seldom reported.
OBJECTIVE: This paper examines perceived behavior change needs and implementation of an mHealth approach in increasing nonexercise PA in patients with T2D.
METHODS: A 3-arm mHealth intervention was conducted in primary care. Information on perceived behavior change needs was collected with a modified capability, opportunity, motivation-behavior (COM-B) questionnaire before the intervention from a separate sample of patients with T2D (n=25) and at the intervention baseline (n=119). Implementation evaluation focused on the fidelity and acceptability of the main arm of the intervention (n=39), which included 24-hour accelerometer use, a smartphone app with personal feedback, a PA leaflet, a YouTube video on walking, and individual counseling with 3 face-to-face sessions and 4 telephone contacts. Data on fidelity were accumulated during the intervention through counseling cards and cloud computing. Data on acceptability were collected with a questionnaire at the end of the intervention (Likert scale from 1 to 5). Data analysis was mainly descriptive.
RESULTS: The participants' responses revealed 3 items in capability and 2 in motivation, which stood out as perceived behavior change needs. Moreover, the main intervention arm showed good fidelity (eg, face-to-face sessions: 112/117, 96% and telephone contacts completed: 145/156, 93%; mean weekly accelerometer use 54%; ranging from 80% to 17% during the intervention) and acceptability (mean score ranging from 3.8 to 4.8), although some challenges were also experienced, especially in cloud-computed feedback and accelerometer-app use.
CONCLUSIONS: The findings on behavior change needs call for additional research since no comparable studies were found. In addition, the explanatory value of the COM-B model and the psychometric properties of the COM-B questionnaire deserve further attention. The main intervention arm seemed applicable to clinical practice. However, the challenges discovered underscore the importance of pretesting technology-based approaches in patients with T2D.},
}
RevDate: 2026-03-18
Blockchain-driven trust management and AI computing for sensor networks optimization.
Scientific reports pii:10.1038/s41598-026-41302-y [Epub ahead of print].
The Internet of Things (IoT) and emerging technologies have converged to drive the remarkable development of intelligent systems. The interconnection of physical objects, sensors, and tiny communication devices enables data aggregation, which is then forwarded to edge computing for local processing and analysis. Such a system improves response time and enhances network capabilities while managing the massive amount of collected data. On the other hand, existing approaches include cloud-based schemes that leverage edge-level offloading to control and manage demanding traffic. Furthermore, data security and network integrity are ensured by integrating blockchain technology with device identity authentication. However, in a dynamic environment, most approaches still incur interception and data eavesdropping, thereby affecting the reliability of connected communication channels. Therefore, developing trustworthiness and an authenticated system is a significant research challenge for the growth of smart systems. In this research, we introduce a lightweight, trusted AI-driven model to enhance security in complex systems and to ensure a more robust data-forwarding path over the long term. First, optimized methods are introduced that use an adaptive technique to explore network conditions and generate efficient data-transfer decision policies. Secondly, distributed and collaborative interactions are enabled across devices with minimal computing resources, thereby improving the system's response time through load balancing. Ultimately, trust is continuously updated by leveraging real-time parameters and records of neighbours' communication, thereby providing fault tolerance and trusted channels. The proposed model is verified and validated for efficacy through a wide range of simulations, and performance results demonstrate its superiority over existing approaches on realistic scenarios and metrics.
Additional Links: PMID-41844731
@article {pmid41844731,
year = {2026},
author = {Alharbi, M and Haseeb, K and Jhanjhi, NZ and Khan, A and Humayun, M and Khan, MA},
title = {Blockchain-driven trust management and AI computing for sensor networks optimization.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41302-y},
pmid = {41844731},
issn = {2045-2322},
support = {DGSSR-2025-NF-02-068//Al Jouf University/ ; },
abstract = {The Internet of Things (IoT) and emerging technologies have converged to drive the remarkable development of intelligent systems. The interconnection of physical objects, sensors, and tiny communication devices enables data aggregation, which is then forwarded to edge computing for local processing and analysis. Such a system improves response time and enhances network capabilities while managing the massive amount of collected data. On the other hand, existing approaches include cloud-based schemes that leverage edge-level offloading to control and manage demanding traffic. Furthermore, data security and network integrity are ensured by integrating blockchain technology with device identity authentication. However, in a dynamic environment, most approaches still incur interception and data eavesdropping, thereby affecting the reliability of connected communication channels. Therefore, developing trustworthiness and an authenticated system is a significant research challenge for the growth of smart systems. In this research, we introduce a lightweight, trusted AI-driven model to enhance security in complex systems and to ensure a more robust data-forwarding path over the long term. First, optimized methods are introduced that use an adaptive technique to explore network conditions and generate efficient data-transfer decision policies. Secondly, distributed and collaborative interactions are enabled across devices with minimal computing resources, thereby improving the system's response time through load balancing. Ultimately, trust is continuously updated by leveraging real-time parameters and records of neighbours' communication, thereby providing fault tolerance and trusted channels. The proposed model is verified and validated for efficacy through a wide range of simulations, and performance results demonstrate its superiority over existing approaches on realistic scenarios and metrics.},
}
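The continuous trust update described above can be illustrated, in much simplified form, as an exponentially weighted score per neighbour derived from observed forwarding outcomes, with low-trust nodes excluded from the data-forwarding path. The weights, latency normalisation, and threshold in this sketch are assumptions, not the paper's formula.

# Toy sketch of a per-neighbour trust score updated from forwarding outcomes.
from collections import defaultdict

ALPHA = 0.2           # weight of the newest observation (assumed)
TRUST_FLOOR = 0.5     # minimum trust to remain a forwarding candidate (assumed)
trust = defaultdict(lambda: 0.5)   # neutral prior for unseen neighbours

def record_interaction(node_id: str, delivered: bool, latency_ms: float) -> None:
    # Combine delivery success and latency into one bounded observation in [0, 1].
    obs = (1.0 if delivered else 0.0) * max(0.0, 1.0 - latency_ms / 500.0)
    trust[node_id] = (1 - ALPHA) * trust[node_id] + ALPHA * obs

def trusted_next_hops(candidates):
    return [n for n in candidates if trust[n] >= TRUST_FLOOR]

record_interaction("node-7", delivered=True, latency_ms=40)
record_interaction("node-9", delivered=False, latency_ms=300)
print(trusted_next_hops(["node-7", "node-9"]))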
RevDate: 2026-03-14
A Lightweight Net with Dual-Path Feature Enhancer and Bidirectional Gated Fusion for Cloud Detection.
Sensors (Basel, Switzerland), 26(5): pii:s26051727.
Cloud detection serves as a critical preprocessing step in remote sensing image processing and quantitative applications. However, prevailing deep learning-based models often depend on computationally intensive backbone networks to achieve high accuracy, which hinders their deployment in resource-constrained scenarios such as on-board processing or edge computing. To bridge the trade-off between accuracy and efficiency, this paper introduces a lightweight network for cloud detection. The core innovations of our network are twofold: (1) a dual-path feature enhancer that operates at the front end to extract and fuse multi-scale features through a parallel architecture, significantly enriching feature diversity and representational capacity, thereby alleviating the need for a complex backbone, and (2) a bidirectional gated fusion module, which adaptively integrates multi-scale features from the dual-path feature enhancer with deep semantic features from the backbone decoder through a gated attention mechanism and dynamic convolution, thereby enhancing feature discriminability. Comprehensive experiments on the public HRC_WHU dataset demonstrate that the proposed model achieves a high overall accuracy of 96.31% and a mean intersection-over-union of 92.82%, with only 12.04 GFLOPs of computational cost, outperforming several state-of-the-art methods. These results validate that our approach effectively balances high detection performance with computational efficiency, offering a practical solution for real-time, lightweight cloud detection in high-resolution remote sensing imagery.
Additional Links: PMID-41829688
@article {pmid41829688,
year = {2026},
author = {Mo, Y and Chen, P and Bai, S and Xiao, E},
title = {A Lightweight Net with Dual-Path Feature Enhancer and Bidirectional Gated Fusion for Cloud Detection.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {5},
pages = {},
doi = {10.3390/s26051727},
pmid = {41829688},
issn = {1424-8220},
support = {62261038//the National Natural Science Foundation of China/ ; },
abstract = {Cloud detection serves as a critical preprocessing step in remote sensing image processing and quantitative applications. However, prevailing deep learning-based models often depend on computationally intensive backbone networks to achieve high accuracy, which hinders their deployment in resource-constrained scenarios such as on-board processing or edge computing. To bridge the trade-off between accuracy and efficiency, this paper introduces a lightweight network for cloud detection. The core innovations of our network are twofold: (1) a dual-path feature enhancer that operates at the front end to extract and fuse multi-scale features through a parallel architecture, significantly enriching feature diversity and representational capacity, thereby alleviating the need for a complex backbone, and (2) a bidirectional gated fusion module, which adaptively integrates multi-scale features from the dual-path feature enhancer with deep semantic features from the backbone decoder through a gated attention mechanism and dynamic convolution, thereby enhancing feature discriminability. Comprehensive experiments on the public HRC_WHU dataset demonstrate that the proposed model achieves a high overall accuracy of 96.31% and a mean intersection-over-union of 92.82%, with only 12.04 GFLOPs of computational cost, outperforming several state-of-the-art methods. These results validate that our approach effectively balances high detection performance with computational efficiency, offering a practical solution for real-time, lightweight cloud detection in high-resolution remote sensing imagery.},
}
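One way to read the bidirectional gated fusion idea is as two sigmoid gates in which each feature stream reweights the other before a 1x1 convolution merges them. The PyTorch sketch below is an interpretation under that assumption, not the authors' exact module (their dynamic-convolution component is omitted).

# Simplified sketch of a bidirectional gated fusion of shallow multi-scale
# features and deep semantic features.
import torch
import torch.nn as nn

class BidirectionalGatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate_from_deep = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_from_shallow = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Each stream gates the other, then the two gated maps are fused.
        shallow_gated = shallow * self.gate_from_deep(deep)
        deep_gated = deep * self.gate_from_shallow(shallow)
        return self.merge(torch.cat([shallow_gated, deep_gated], dim=1))

fuse = BidirectionalGatedFusion(64)
out = fuse(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(out.shape)   # torch.Size([1, 64, 128, 128])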
RevDate: 2026-03-14
Risks Related to Advanced Bridge Monitoring Technologies.
Sensors (Basel, Switzerland), 26(5): pii:s26051603.
Bridge monitoring has undergone a significant transformation with the integration of advanced technologies, including structural health monitoring systems, Internet of Things sensors, unmanned aerial vehicles, artificial intelligence, and cloud computing. These technologies enable continuous real-time data acquisition, processing, and early detection of structural degradation. However, their deployment also introduces a range of emerging risks that require careful consideration. This paper presents descriptive risk listings and proposes a comprehensive risk-governance framework for advanced bridge monitoring using the SWOT analysis. The framework integrates a unified risk taxonomy and assessment that links sensor and AI performance with cyber threat modeling and data governance requirements. The application of two real deployments, the Jindo Bridge SHM program and the Stava Bridge digital-twin implementation, shows how the framework converts heterogeneous measurements for improving bridge lifecycle management with the implementation of advanced monitoring technologies. Compared with prior studies that primarily catalog risks, the contribution of the paper is an interdisciplinary, operationalizable method that couples reliability, security, and governance into a single process, thereby ensuring that advanced technologies enhance, rather than erode, the safety and resilience of bridge infrastructure.
Additional Links: PMID-41829563
@article {pmid41829563,
year = {2026},
author = {Miške, M and Daponte, P and De Vito, L and Figuli, L},
title = {Risks Related to Advanced Bridge Monitoring Technologies.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {5},
pages = {},
doi = {10.3390/s26051603},
pmid = {41829563},
issn = {1424-8220},
support = {SPS G6140 Advanced Technologies for Physical ResIlience Of cRitical Infrastructure (APRIORI)//North Atlantic Treaty Organization/ ; APVV-22-0562, Strengthening the REsilience MAnagement of Key infrastructue El-ements using advances in 3D modeling//Slovak Research and Development Agency/ ; },
abstract = {Bridge monitoring has undergone a significant transformation with the integration of advanced technologies, including structural health monitoring systems, Internet of Things sensors, unmanned aerial vehicles, artificial intelligence, and cloud computing. These technologies enable continuous real-time data acquisition, processing, and early detection of structural degradation. However, their deployment also introduces a range of emerging risks that require careful consideration. This paper presents descriptive risk listings and proposes a comprehensive risk-governance framework for advanced bridge monitoring using the SWOT analysis. The framework integrates a unified risk taxonomy and assessment that links sensor and AI performance with cyber threat modeling and data governance requirements. The application of two real deployments, the Jindo Bridge SHM program and the Stava Bridge digital-twin implementation, shows how the framework converts heterogeneous measurements for improving bridge lifecycle management with the implementation of advanced monitoring technologies. Compared with prior studies that primarily catalog risks, the contribution of the paper is an interdisciplinary, operationalizable method that couples reliability, security, and governance into a single process, thereby ensuring that advanced technologies enhance, rather than erode, the safety and resilience of bridge infrastructure.},
}
RevDate: 2026-03-14
LITO: Lemur-Inspired Task Offloading for Edge-Fog-Cloud Continuum Systems.
Sensors (Basel, Switzerland), 26(5): pii:s26051497.
Edge, fog, and cloud continuum architectures that interconnect resource-constrained devices, intermediate edge servers, and remote cloud data centers face persistent challenges in handling heterogeneous and latency-sensitive workloads while reducing energy consumption and improving resource utilization. Classical task offloading approaches either rely on static heuristics, which lack adaptability to dynamic conditions, or on metaheuristic optimizers, which often incur high computational overhead and centralized coordination. This paper proposes LITO, a lemur-inspired task offloading algorithm for edge, fog, and cloud continuum systems that models the infrastructure as a social system in which computing nodes assume distinct roles that mirror lemur social hierarchies. Building on an abstracted model of lemur group behavior, LITO incorporates two key lemur-inspired mechanisms: an energy-aware task assignment mechanism based on sun basking, a thermoregulation behavior in which lemurs seek favorable warm spots, mapped here to selecting energetically efficient execution nodes, and a cooperative scheduling policy based on huddling, group clustering under stress, mapped here to sharing load among overloaded nodes. These mechanisms are combined with a continual supervised policy-learning layer with contextual bandit feedback that refines offloading decisions from online feedback. The resulting multi-objective formulation jointly minimizes energy consumption and deadline violations while maximizing resource utilization and throughput under high-load conditions in the edge and fog segment of the continuum. Simulations under diverse workload regimes and task complexities show that LITO outperforms representative multi-objective offloading baselines in terms of energy consumption, resource utilization, latency, Service Level Agreement (SLA) violations, and throughput in congested scenarios.
Additional Links: PMID-41829461
@article {pmid41829461,
year = {2026},
author = {Almulifi, A and Kurdi, H},
title = {LITO: Lemur-Inspired Task Offloading for Edge-Fog-Cloud Continuum Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {5},
pages = {},
doi = {10.3390/s26051497},
pmid = {41829461},
issn = {1424-8220},
abstract = {Edge, fog, and cloud continuum architectures that interconnect resource-constrained devices, intermediate edge servers, and remote cloud data centers face persistent challenges in handling heterogeneous and latency-sensitive workloads while reducing energy consumption and improving resource utilization. Classical task offloading approaches either rely on static heuristics, which lack adaptability to dynamic conditions, or on metaheuristic optimizers, which often incur high computational overhead and centralized coordination. This paper proposes LITO, a lemur-inspired task offloading algorithm for edge, fog, and cloud continuum systems that models the infrastructure as a social system in which computing nodes assume distinct roles that mirror lemur social hierarchies. Building on an abstracted model of lemur group behavior, LITO incorporates two key lemur-inspired mechanisms: an energy-aware task assignment mechanism based on sun basking, a thermoregulation behavior in which lemurs seek favorable warm spots, mapped here to selecting energetically efficient execution nodes, and a cooperative scheduling policy based on huddling, group clustering under stress, mapped here to sharing load among overloaded nodes. These mechanisms are combined with a continual supervised policy-learning layer with contextual bandit feedback that refines offloading decisions from online feedback. The resulting multi-objective formulation jointly minimizes energy consumption and deadline violations while maximizing resource utilization and throughput under high-load conditions in the edge and fog segment of the continuum. Simulations under diverse workload regimes and task complexities show that LITO outperforms representative multi-objective offloading baselines in terms of energy consumption, resource utilization, latency, Service Level Agreement (SLA) violations, and throughput in congested scenarios.},
}
RevDate: 2026-03-14
CmpDate: 2026-03-14
Edge-AI Enabled Acoustic Monitoring and Spatial Localisation for Sow Oestrus Detection.
Animals : an open access journal from MDPI, 16(5): pii:ani16050804.
Timely and accurate detection of sow oestrus is crucial for enhancing reproductive efficiency and reducing non-productive days (NPDs) in large-scale pig farms. However, traditional manual observation is labour-intensive and subjective, while cloud-based deep learning solutions face challenges such as high latency and privacy risks when applied in intensive housing environments. This study developed an edge-intelligent monitoring system that integrates deep temporal modelling with sound source localisation technology. A three-stage hierarchical screening strategy was utilised to select and deploy a lightweight Stacked-LSTM model on the resource-constrained ESP32-S3 hardware platform. This model was trained and calibrated using a high-quality acoustic dataset validated against serum reproductive hormones, specifically follicle-stimulating hormone (FSH), luteinising hormone (LH), and progesterone (P4). Experimental results demonstrate that the optimised model achieved a classification accuracy of 96.17%, with an inference latency of only 41 ms, thereby fully satisfying the stringent real-time monitoring requirements while maintaining a minimal memory footprint. Furthermore, the system integrates a localisation algorithm based on Generalised Cross-Correlation with Phase Transform (GCC-PHAT). Through spatial geometric modelling, the system successfully implements the functional mapping of vocalisation events to individual gestation stalls (Stall IDs). Laboratory pressure tests validated the robustness and low-cost deployment advantages of the "edge recognition-cloud synchronization" architecture, providing a reliable technical framework for the precision management of smart livestock farming.
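For readers unfamiliar with GCC-PHAT, the sketch below estimates the time difference of arrival between two microphone signals using the phase-transform weighting mentioned above; the sampling rate and test signals are invented for the example, and the paper's stall-mapping geometry is not reproduced.

```python
# Minimal GCC-PHAT time-difference-of-arrival estimate (illustrative only).
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    n = sig.size + ref.size                      # zero-pad to avoid circular wrap
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                       # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(fs)                     # delay in seconds

fs = 16000
src = np.random.default_rng(0).standard_normal(fs)
delayed = np.roll(src, 8)                        # simulate an 8-sample delay
print(gcc_phat(delayed, src, fs))                # ~ 8 / 16000 = 0.0005 s
```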
Additional Links: PMID-41829012
Citation:
@article {pmid41829012,
year = {2026},
author = {Liu, H and Li, H and Cao, Y and Cao, R and Hu, G and Liu, Z},
title = {Edge-AI Enabled Acoustic Monitoring and Spatial Localisation for Sow Oestrus Detection.},
journal = {Animals : an open access journal from MDPI},
volume = {16},
number = {5},
pages = {},
doi = {10.3390/ani16050804},
pmid = {41829012},
issn = {2076-2615},
support = {2023-092//Shanxi Scholarship Council of China/ ; 202302010101002//Key R&D Program of Shanxi Province/ ; },
abstract = {Timely and accurate detection of sow oestrus is crucial for enhancing reproductive efficiency and reducing non-productive days (NPDs) in large-scale pig farms. However, traditional manual observation is labour-intensive and subjective, while cloud-based deep learning solutions face challenges such as high latency and privacy risks when applied in intensive housing environments. This study developed an edge-intelligent monitoring system that integrates deep temporal modelling with sound source localisation technology. A three-stage hierarchical screening strategy was utilised to select and deploy a lightweight Stacked-LSTM model on the resource-constrained ESP32-S3 hardware platform. This model was trained and calibrated using a high-quality acoustic dataset validated against serum reproductive hormones, specifically follicle-stimulating hormone (FSH), luteinising hormone (LH), and progesterone (P4). Experimental results demonstrate that the optimised model achieved a classification accuracy of 96.17%, with an inference latency of only 41 ms, thereby fully satisfying the stringent real-time monitoring requirements while maintaining a minimal memory footprint. Furthermore, the system integrates a localisation algorithm based on Generalised Cross-Correlation with Phase Transform (GCC-PHAT). Through spatial geometric modelling, the system successfully implements the functional mapping of vocalisation events to individual gestation stalls (Stall IDs). Laboratory pressure tests validated the robustness and low-cost deployment advantages of the "edge recognition-cloud synchronization" architecture, providing a reliable technical framework for the precision management of smart livestock farming.},
}
RevDate: 2026-03-13
Web-based cloud-computing Asthma Control Test differentiates uncontrolled asthma.
Respiratory investigation, 64(3):101398 pii:S2212-5345(26)00032-8 [Epub ahead of print].
BACKGROUND: Mobile health technologies have been shown to improve asthma control. However, the effect of a web-based Asthma Control Test (ACT) with cloud computing on asthma control status remains unknown. We aimed to examine how predictions of ACT scores based on self-perceived symptoms and cloud computing influence asthma control levels defined by the Global Initiative for Asthma (GINA).
METHODS: We created an interactive web-based application providing automated calculations of the frequency of asthma symptoms and reliever medication use in the past month, the traditional ACT, and a cloud-computing ACT (ccACT). Adult asthma patients aged between 20 and 65 years were included in the study. Participants were encouraged to input perceived asthma symptoms and reliever medication use and to perform a monthly assessment of the web-based traditional ACT. The ccACT scores were calculated by the application once the monthly assessments had been completed. Receiver operating characteristic (ROC) curves were used to compare the performance of the traditional ACT and the ccACT. Asthma control levels defined by GINA were used as outcome variables.
RESULTS: Eighty-five qualified records obtained from 28 participants were included in the analysis. To differentiate well- and partly-controlled asthma from uncontrolled asthma, the optimal cutoff was ≥21 for both traditional ACT and ccACT scores. The area under the ROC curve for the ccACT was greater than that for the traditional ACT (0.99 vs. 0.91), and the difference in performance between the two tests was statistically significant (P = 0.01).
CONCLUSIONS: The ccACT may differentiate well- and partly-controlled asthma from uncontrolled asthma better than the traditional ACT, potentially supporting the development of a more effective method of asthma control.
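As a purely illustrative aside, ROC analysis of the kind reported above can be reproduced with standard tooling; the toy scores and labels below are invented and do not come from the study.

```python
# Comparing two screening scores against an "uncontrolled asthma" label via ROC AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

uncontrolled = np.array([1, 1, 1, 0, 0, 0, 0, 0])     # 1 = uncontrolled asthma
act   = np.array([14, 18, 22, 21, 20, 23, 24, 25])    # toy traditional ACT scores
ccact = np.array([13, 15, 19, 22, 21, 23, 24, 25])    # toy cloud-computing ACT scores

for name, score in [("traditional ACT", act), ("ccACT", ccact)]:
    # Lower ACT scores indicate worse control, so negate before scoring.
    print(name, "AUC =", round(roc_auc_score(uncontrolled, -score), 2))
```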
Additional Links: PMID-41825338
Citation:
@article {pmid41825338,
year = {2026},
author = {Wu, TJ and Wu, JH and Ko, YC and Huang, MS and Wu, JT},
title = {Web-based cloud-computing Asthma Control Test differentiates uncontrolled asthma.},
journal = {Respiratory investigation},
volume = {64},
number = {3},
pages = {101398},
doi = {10.1016/j.resinv.2026.101398},
pmid = {41825338},
issn = {2212-5353},
abstract = {BACKGROUND: Mobile health technologies have been shown to improve asthma control. However, the effect on web-based Asthma Control Test (ACT) with cloud computing on asthma control status remains unknown. We aimed to examine how predictions of ACT scores based on self-perceived symptoms and cloud computing influence asthma control levels defined by Global Initiative for Asthma (GINA).
METHODS: We created an interactive web-based application providing automated calculations of frequencies of asthma symptoms and reliever medication used in the past month, traditional ACT, and cloud-computing ACT (ccACT). Adult asthma patients aged between 20- and 65-year-old were included in the study. Participants were encouraged to input perceived asthma symptoms and reliever medication use and to perform monthly assessment of web-based traditional ACT. The ccACT scores were calculated by the application when monthly assessments have been completed. The receiver operating characteristic (ROC) curves were used to compare the performance of traditional ACT and ccACT. Asthma control levels defined by GINA were used as outcome variables.
RESULTS: Eighty-five qualified records obtained from 28 participants were included for analysis. To differentiate well- and partly-controlled asthma from uncontrolled asthma, the better cutoff was ≥21 for both traditional ACT and ccACT scores. The area under the ROC curve for ccACT was greater than traditional ACT (0.99 vs. 0.91). The performance of both tests was statistically different (P = 0.01).
CONCLUSIONS: ccACT may differentiate well- and partly-controlled asthma from uncontrolled asthma better than traditional ACT. This potentially leads to the development of a more effective method in asthma control.},
}
RevDate: 2026-03-13
CmpDate: 2026-03-13
Surgery for interplanetary space missions.
The British journal of surgery, 113(3):.
As human spaceflight expands beyond low Earth orbit, the ability to deliver advanced surgical care in space becomes critical. Current medical provisions on board the International Space Station (ISS) are geared towards treating low-risk conditions, with a 'stabilize-and-evacuate' principle for more complex cases, an approach that is not viable for extended missions to the Moon and Mars. This review summarizes research on space surgery, with a particular focus on surgical robotics. Experiments in parabolic flight and analogue environments demonstrate that, provided the operator, patient, and instruments are restrained, surgical skill is largely unaffected by reduced gravity. Robotic surgery has primarily been explored in remote undersea habitats and in limited flight studies. There are several challenges to the implementation of surgical systems in space, including size, weight, and power constraints, communication latency, and crew training. Means of fluid and debris containment, provision of anaesthesia, and postoperative recovery in altered physiology must also be considered. The key features of an ideal space surgery robotic set-up are outlined. It should be compact, multifunctional, adaptable, reliable, and optimized in technical design and material composition for use in habitable volumes. Such systems should incorporate artificial intelligence (AI)-driven decision-making support, variable autonomy, and human-in-the-loop control. Crew members must be trained and supported to deliver and recover from surgical care in space. Cloud and edge computing will mitigate latency while expanding on-board data processing capabilities. Although not yet operationally mature, robotic surgery is a critical capability for future exploratory space missions, but requires continued multidisciplinary development.
Additional Links: PMID-41823369
Citation:
@article {pmid41823369,
year = {2026},
author = {Khanna, R and Li, Y and Cook, M and Sawant, P and Hounon, R and Carroll, D and Lowe, L and Lindenroth, L and Mahmoodi, T and Raison, N and Granados, A and Ojha, A and Bergeles, C and Breda, A and Ourselin, S and Dasgupta, P},
title = {Surgery for interplanetary space missions.},
journal = {The British journal of surgery},
volume = {113},
number = {3},
pages = {},
doi = {10.1093/bjs/znag005},
pmid = {41823369},
issn = {1365-2168},
mesh = {*Space Flight ; Humans ; *Robotic Surgical Procedures/methods ; Robotics ; },
abstract = {As human spaceflight expands beyond low Earth orbit, the ability to deliver advanced surgical care in space becomes critical. Current medical provisions on board the International Space Station (ISS) are geared towards treating low-risk conditions, with a 'stabilize-and-evacuate' principle for more complex cases-an approach that is not viable for extended missions to the Moon and Mars. This review summarizes research conducted around space surgery, with a particular focus on surgical robotics. Experiments in parabolic flight and analogue environments demonstrate that, provided the operator, patient, and instruments are restrained, surgical skill is largely unaffected by reduced gravity. Robotic surgery has primarily been explored in remote undersea habitats and in limited flight studies. There are several challenges to the implementation of surgical systems in space, including size, weight, and power constraints, communication latency, and crew training. Means of fluid and debris containment, provision of anaesthesia, and postoperative recovery in altered physiology must also be considered. The key features of an ideal space surgery robotic set-up are outlined. It should be compact, multifunctional, adaptable, reliable, and optimized in technical design and material composition for use in habitable volumes. Such systems should incorporate artificial intelligence (AI)-driven decision-making support, variable autonomy, and human-in-the-loop control. Crew members must be trained and supported to deliver and recover from surgical care in space. Cloud and edge computing will mitigate latency while expanding on-board data processing capabilities. Although not yet operationally mature, robotic surgery is a critical capability for future exploratory space missions, but requires continued multidisciplinary development.},
}
MeSH Terms:
*Space Flight
Humans
*Robotic Surgical Procedures/methods
Robotics
RevDate: 2026-03-13
Quantum-Inspired Adaptive Meta-Heuristic-Machine Learning framework for resilient and energy-efficient task scheduling in multi-cloud ecosystems.
Scientific reports pii:10.1038/s41598-026-43125-3 [Epub ahead of print].
Additional Links: PMID-41820483
Citation:
@article {pmid41820483,
year = {2026},
author = {Divya, N and Kiranbabu, MNV and Babu, GC},
title = {Quantum-Inspired Adaptive Meta-Heuristic-Machine Learning framework for resilient and energy-efficient task scheduling in multi-cloud ecosystems.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-43125-3},
pmid = {41820483},
issn = {2045-2322},
}
RevDate: 2026-03-12
Hybrid prediction system for reliable multi-seasonal sustainable energy generation under meteorological and environmental volatility.
Scientific reports, 16(1):.
Amid escalating climate change and energy crises, wind energy, as a pivotal renewable resource, poses significant challenges to grid stability and energy management due to its inherent stochastic intermittency and nonlinear dynamics. Consequently, this research presents a hybrid prediction system, ICEEMDAN-NCRBMO-AELM, integrating data decomposition with intelligent computing to reveal spatiotemporal coupling patterns in climatic variables for reliable wind power forecasting. This system utilizes Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN) to dissect sequences into several modes, addressing time–frequency features and alleviating mode mixing through a dynamic noise-weighting scheme. To further optimize Adaptive Extreme Learning Machine (AELM) performance, this research proposes a novel Normal Cloud Red-billed Blue Magpie Intelligent Optimization (NCRBMO) algorithm, motivated by cloud model theory and the swarm behavior of the Red-billed Blue Magpie. NCRBMO employs a multiphase mapping inverse generation strategy for initializing individuals and designs five heuristic search strategies for global optimization. Regarding hyperparameter tuning, NCRBMO optimizes the weight matrix and bias vector in the output layer of a single-hidden-layer feedforward network, enhancing prediction accuracy and stability. The interseasonal wind power prediction results from the Jiangsu region of China indicate that this system surpasses competing representative techniques in addressing complex seasonal trends and meteorological abrupt changes.
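The output layer that NCRBMO tunes belongs to a single-hidden-layer feedforward network of the extreme learning machine (ELM) family; the toy sketch below shows only the standard closed-form ELM solve on synthetic data, whereas the paper replaces this step with metaheuristic tuning of the output weights and biases.

```python
# Basic extreme learning machine regression on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))             # e.g. lagged wind-power features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2          # synthetic target

n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
b = rng.standard_normal(n_hidden)                 # fixed random hidden biases
H = np.tanh(X @ W + b)                            # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # least-squares output weights
y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```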
Additional Links: PMID-41794986
Citation:
@article {pmid41794986,
year = {2026},
author = {Liu, H and Cai, C and Li, P and Wang, Y and Zhao, M and Tang, C and Tu, B and Li, Y and Zheng, X and Ma, Y and Liang, H and Chen, M},
title = {Hybrid prediction system for reliable multi-seasonal sustainable energy generation under meteorological and environmental volatility.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {},
pmid = {41794986},
issn = {2045-2322},
support = {11975177//Natural Science Foundation of China/ ; },
abstract = {Amid escalating climate change and energy crises, wind energy, as a pivotal renewable resource, poses significant challenges to grid stability and energy management due to its inherent stochastic intermittency and nonlinear dynamics. Consequently, this research presents a hybrid prediction system, ICEEMDAN-NCRBMO-AELM, integrating data decomposition with intelligent computing to reveal spatiotemporal coupling patterns in climatic variables for reliable wind power forecasting. This system utilizes Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN) to dissect sequences into several modes, addressing time–frequency features and alleviating mode mixing through a dynamic noise-weighting scheme. To further optimize Adaptive Extreme Learning Machine (AELM) performance, this research proposes a novel Normal Cloud Red-billed Blue Magpie Intelligent Optimization (NCRBMO) algorithm, motivated by cloud model theory and the swarm behavior of Red-billed Blue Magpie. NCRBMO employs a multiphase mapping inverse generation strategy for initializing individuals and designs five heuristic search strategies for global optimization. Regarding hyperparameter tuning, NCRBMO optimizes the weight matrix and bias vector in the output layer of a single-hidden-layer feedforward network, enhancing prediction accuracy and stability. The interseasonal wind power prediction results from Jiangsu region, China, indicate that this system surpasses competing representative techniques in addressing complex seasonal trends and meteorological abrupt changes.},
}
RevDate: 2026-03-11
Attention-based workload prediction and dynamic resource allocation for heterogeneous computing environments.
Scientific reports, 16(1):.
UNLABELLED: The rapid proliferation of artificial intelligence applications in modern data centers demands intelligent resource management strategies that can effectively handle diverse workloads across heterogeneous computing infrastructures. This paper proposes an integrated framework that combines multi-head spatial-temporal attention mechanisms for workload prediction with dynamic resource allocation algorithms optimized for heterogeneous environments. The spatial-temporal attention architecture separately models temporal evolution patterns within individual workload streams and spatial correlations across concurrent task types, enabling accurate forecasting of resource demands. The allocation framework formulates resource assignment as a multi-objective optimization problem that jointly considers performance, energy efficiency, and utilization while explicitly accounting for prediction uncertainty. Experimental evaluation on real-world cluster traces demonstrates that our approach achieves 78.4% resource utilization with only 2.3% SLA violations, reduces average task completion time by 25.8%, and decreases energy consumption by 15.1% compared to production-grade baseline methods. The framework provides practical benefits for cloud service providers and enterprise data centers seeking to maximize infrastructure efficiency while maintaining service quality guarantees.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-026-38622-4.
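The temporal-attention idea at the core of the predictor can be illustrated with a bare scaled dot-product attention pass over a window of workload readings; the shapes, features, and random weights below are assumptions for the example, not the paper's architecture.

```python
# Toy single-head temporal attention over past workload readings (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
window = rng.uniform(0, 1, size=(16, 4))        # 16 time steps x 4 workload features
d_k = 8
Wq, Wk, Wv = (rng.standard_normal((4, d_k)) for _ in range(3))

Q, K, V = window @ Wq, window @ Wk, window @ Wv
scores = Q @ K.T / np.sqrt(d_k)                 # (16, 16) temporal affinities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over past time steps
context = weights @ V                           # attention-weighted summary

# A linear read-out of the last step's context could feed the demand forecast.
forecast = context[-1] @ rng.standard_normal(d_k)
print(forecast)
```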
Additional Links: PMID-41680330
Citation:
@article {pmid41680330,
year = {2026},
author = {Shao, S and Ding, X and Zhao, B and Ye, P},
title = {Attention-based workload prediction and dynamic resource allocation for heterogeneous computing environments.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {},
pmid = {41680330},
issn = {2045-2322},
abstract = {UNLABELLED: The rapid proliferation of artificial intelligence applications in modern data centers demands intelligent resource management strategies that can effectively handle diverse workloads across heterogeneous computing infrastructures. This paper proposes an integrated framework that combines multi-head spatial-temporal attention mechanisms for workload prediction with dynamic resource allocation algorithms optimized for heterogeneous environments. The spatial-temporal attention architecture separately models temporal evolution patterns within individual workload streams and spatial correlations across concurrent task types, enabling accurate forecasting of resource demands. The allocation framework formulates resource assignment as a multi-objective optimization problem that jointly considers performance, energy efficiency, and utilization while explicitly accounting for prediction uncertainty. Experimental evaluation on real-world cluster traces demonstrates that our approach achieves 78.4% resource utilization with only 2.3% SLA violations, reduces average task completion time by 25.8%, and decreases energy consumption by 15.1% compared to production-grade baseline methods. The framework provides practical benefits for cloud service providers and enterprise data centers seeking to maximize infrastructure efficiency while maintaining service quality guarantees.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-026-38622-4.},
}
RevDate: 2026-03-11
Holistic IoT and cloud-based telemetry architecture for proactive fire monitoring in smart agriculture.
Scientific reports pii:10.1038/s41598-026-43538-0 [Epub ahead of print].
Additional Links: PMID-41807538
Citation:
@article {pmid41807538,
year = {2026},
author = {Morchid, A and Salami, A and Khalid, HM and Said, Z},
title = {Holistic IoT and cloud-based telemetry architecture for proactive fire monitoring in smart agriculture.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-43538-0},
pmid = {41807538},
issn = {2045-2322},
}
RevDate: 2026-03-11
Brain Tissue Strain During Adolescent Soccer Heading Using the Cloud-Based Brain Simulation Research Platform Finite Element Head Model.
Conference proceedings. International Research Council on Biomechanics of Injury, 2024:375-381.
The current study aimed to quantify the strain of the brain associated with soccer headers in adolescent athletes using a cloud-based finite element (FE) human head model. Eleven male and female participants aged 13-18 years completed 10 frontal or oblique soccer headers. Linear acceleration and angular velocity of the head were captured using the Prevent Biometrics boil-and-bite Impact Monitoring Mouthguard (IMM). Head kinematics time series were applied to the Brain Simulation Research Platform (BSRP) FE head model. Frontal headers resulted in significantly (p<0.001) higher mean peak linear acceleration (17.5±0.5 g) but significantly (p<0.001) lower mean peak angular acceleration (1142±45 rad/s²) than oblique headers (12.3±0.4 g, 1431±66 rad/s²). Frontal headers had similar peak MPS95 values compared to oblique headers (4.8±1.1% vs. 4.5±1.2%, p=0.128). Using equivalent loading conditions, frontal and oblique headers did not differ in peak MPS95 despite oblique headers having significantly higher angular kinematics, which is associated with brain tissue strains. Comparisons with previous results from a validated FE head model suggest that the BSRP FE head model has potential for simulating on-field head impact sensor data, especially considering the reduced computational time, but estimations of strain and model comparisons with more severe on-field sporting impacts are needed.
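For orientation, an MPS95-style summary (the 95th percentile of maximum principal strain over brain elements) can be computed from per-element strain tensors as sketched below; the random tensors stand in for finite element output and are not the study's data.

```python
# Illustrative 95th-percentile maximum principal strain from toy strain tensors.
import numpy as np

rng = np.random.default_rng(0)
n_elements = 5000
A = rng.normal(0.0, 0.02, size=(n_elements, 3, 3))   # toy magnitudes of a few percent
strain = 0.5 * (A + np.transpose(A, (0, 2, 1)))      # symmetrise each tensor

principal = np.linalg.eigvalsh(strain)               # eigenvalues, ascending per element
mps = principal[:, -1]                               # maximum principal strain
print(f"MPS95 = {100 * np.percentile(mps, 95):.1f}%")
```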
Additional Links: PMID-41810402
Citation:
@article {pmid41810402,
year = {2024},
author = {Huber, CM and Patton, DA and Arbogast, KB and Kraft, RH},
title = {Brain Tissue Strain During Adolescent Soccer Heading Using the Cloud-Based Brain Simulation Research Platform Finite Element Head Model.},
journal = {Conference proceedings. International Research Council on Biomechanics of Injury},
volume = {2024},
number = {},
pages = {375-381},
pmid = {41810402},
issn = {2235-3151},
abstract = {The current study aimed to quantify the strain of the brain associated with soccer headers in adolescent athletes using a cloud-based finite element (FE) human head model. Eleven male and female participants aged 13-18 years completed 10 frontal or oblique soccer headers. Linear acceleration and angular velocity of the head were captured using the Prevent Biometrics boil-and-bite Impact Monitoring Mouthguard (IMM). Head kinematics time series were applied to the Brain Simulation Research Platform (BSRP) FE head model. Frontal headers resulted in significantly (p<0.001) higher mean peak linear acceleration (17.5±0.5 g) but significantly (p<0.001) lower mean peak angular acceleration (1142±45 rad/s[2]) than oblique headers (12.3±0.4 g, 1431±66 rad/s[2]). Frontal headers had similar peak MPS95 values compared to oblique headers (4.8±1.1% vs. 4.5±1.2%, p=0.128). Using equivalent loading conditions, frontal and oblique headers did not differ in peak MPS95 despite oblique headers having significantly higher angular kinematics, which is associated with brain tissue strains. Comparisons with previous results from a validated FE head model suggest that the BSRP FE head model has potential for simulating on-field head impact sensor data, especially considering the reduced computational time, but estimations of strain and model comparisons with more severe on-field sporting impacts are needed.},
}
RevDate: 2026-03-10
A cloud-based solution for managing next-generation sequencing data to comprehend forensic population statistics.
International journal of legal medicine [Epub ahead of print].
Additional Links: PMID-41805780
Citation:
@article {pmid41805780,
year = {2026},
author = {Vongpaisarnsin, K and Sukawutthiya, P and Noh, H and Sathirapatya, T},
title = {A cloud-based solution for managing next-generation sequencing data to comprehend forensic population statistics.},
journal = {International journal of legal medicine},
volume = {},
number = {},
pages = {},
pmid = {41805780},
issn = {1437-1596},
}
RevDate: 2026-03-10
CmpDate: 2026-03-10
Artificial Intelligence Integrated Smart Medical Imaging Lab Framework for Enhanced Diagnosis and Treatment of Pandemic-Prone Diseases.
Health science reports, 9(3):e71972.
BACKGROUND: The COVID-19 pandemic has caused massive devastation worldwide, and its effects still persist. Managing the early stages was difficult, but scientists worked tirelessly to control it. The emergence of variants continues to pose a threat, raising doubts about the capability of the healthcare system. Healthcare practitioners have faced immense strain under a massive patient load, while delays in testing have caused deaths due to untimely treatment. Moreover, relying only on RT-PCR testing is insufficient because of its diagnostic errors.
MATERIALS AND METHODS: To address these challenges, this study introduces a Smart Imaging Lab Framework for hospitals. The approach uses a convolutional neural network (CNN) model to carry out rapid X-ray and CT-scan assessments of emergency patients showing severe symptoms, following RT-PCR testing. In addition, blood tests help determine the severity of infection. Patients in critical condition are transferred to intensive care units, while those with milder cases remain in general wards.
RESULTS: The framework uses a 16-layer CNN for X-ray and CT-scan imaging, achieving 99.02% and 98.49% accuracy, respectively. Severity assessment with Extra Randomized Trees reached 98.00% accuracy.
DISCUSSION: These findings highlight the potential of the system to be adopted in hospitals, enabling regular health monitoring and timely intervention. In addition, explainable AI (XAI) tools like Grad-CAM increase transparency by highlighting the lung regions most relevant to the diagnosis.
CONCLUSION: The study demonstrates the potential of artificial intelligence, internet of things, and cloud computing to address future pandemic-prone diseases.
Additional Links: PMID-41804494
Citation:
@article {pmid41804494,
year = {2026},
author = {Tungal, A and Singh, P and Singh, K and Kaur, PD and Bharany, S and Pant, R and Kumar, A and Rehman, AU and Hussen, S},
title = {Artificial Intelligence Integrated Smart Medical Imaging Lab Framework for Enhanced Diagnosis and Treatment of Pandemic-Prone Diseases.},
journal = {Health science reports},
volume = {9},
number = {3},
pages = {e71972},
pmid = {41804494},
issn = {2398-8835},
abstract = {BACKGROUND: The COVID-19 pandemic has caused massive devastation worldwide, and its effects still persist. Managing the early stages was difficult, but scientists worked tirelessly to control it. The emergence of variants continues to pose a threat, raising doubts about the capability of the healthcare system. Healthcare practitioners have faced immense strain under a massive patient load, while delays in testing have caused deaths due to untimely treatment. Moreover, relying only on RT-PCR testing is insufficient because of its diagnostic errors.
MATERIALS AND METHODS: To address these challenges, this study introduces a Smart Imaging Lab Framework for hospitals. The approach uses a convolutional neural network (CNN) model to carry out rapid X-ray and CT-scan assessments of emergency patients showing severe symptoms, following RT-PCR testing. In addition, blood tests help determine the severity of infection. Patients in critical condition are transferred to intensive care units, while those with milder cases remain in general wards.
RESULTS: The framework uses a 16-layer CNN framework for X-ray and CT-scan imaging, achieving 99.02% and 98.49% accuracy, respectively. Severity assessment with Extra Randomized Trees reached 98.00% accuracy.
DISCUSSION: These findings highlight the potential of the system to be adopted in hospitals, enabling regular health monitoring and timely intervention. In addition, explainable AI XAI tools like Grad-CAM increase transparency by highlighting the lung regions most relevant to the diagnosis.
CONCLUSION: The study demonstrates the potential of artificial intelligence, internet of things, and cloud computing to address future pandemic-prone diseases.},
}
RevDate: 2026-03-10
An integrated edge-cloud IoT framework for resilient disaster prevention in fire detection and forest carbon assessment.
Scientific reports pii:10.1038/s41598-026-43053-2 [Epub ahead of print].
Additional Links: PMID-41803442
Citation:
@article {pmid41803442,
year = {2026},
author = {Chen, LH and Kolhe, SS and Hu, J and Tseng, KH and Chung, MY},
title = {An integrated edge-cloud IoT framework for resilient disaster prevention in fire detection and forest carbon assessment.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-43053-2},
pmid = {41803442},
issn = {2045-2322},
support = {NSTC 114-2622-E-027-015//Ministry of Science and Technology/ ; },
}
RevDate: 2026-03-10
QRGEC: quantum reinforcement learning with golden jackal optimization for resilient edge cloud coordination in internet computing.
Scientific reports pii:10.1038/s41598-026-42859-4 [Epub ahead of print].
The rapid growth of Internet-scale computing has exposed critical limitations in existing edge cloud coordination mechanisms, particularly in terms of resilience, energy efficiency, and adaptability under heterogeneous and dynamic environments. Current optimization- and learning-based approaches often suffer from slow convergence, limited exploration capability, and poor robustness when managing distributed edge cloud resources. To address these challenges, this research proposes QRGEC: Quantum Reinforcement Learning with Golden Jackal Optimization for Resilient Edge Cloud Coordination in Internet Computing. The proposed hybrid framework integrates quantum-focused policy exploration with adaptive metaheuristic tuning to enhance distributed Internet computing optimization. Policy representations are encoded using variational quantum circuits, enabling efficient exploration of high-dimensional decision spaces. Furthermore, the Golden Jackal Optimization mechanism adaptively adjusts reinforcement parameters to improve convergence stability and accelerate learning, thereby enabling resilient and energy-efficient coordination across heterogeneous edge and cloud environments. A resilience-aware scheduler seamlessly balances energy efficiency, latency, and recovery within dynamic edge cloud workloads. Extensive experimental evaluations demonstrate that QRGEC outperforms previously established deep reinforcement learning and quantum heuristic baselines, with a latency reduction of 36.8%, an increase in energy efficiency of 24.7%, an improvement in resilience of 48.2%, and a sustained resource utilization of 94%. QRGEC also demonstrates the ability to recover automatically from network congestion and failures while maintaining a balanced latency-energy trade-off, underscoring its effectiveness in autonomous recovery, latency-energy balancing, and energy conservation.
Additional Links: PMID-41803380
Citation:
@article {pmid41803380,
year = {2026},
author = {Lella, KK and Krishna, MSR},
title = {QRGEC: quantum reinforcement learning with golden jackal optimization for resilient edge cloud coordination in internet computing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-42859-4},
pmid = {41803380},
issn = {2045-2322},
abstract = {The rapid growth of Internet scale computing has exposed critical limitations in existing edge cloud coordination mechanisms, particularly in terms of resilience, energy efficiency, and adaptability under heterogeneous and dynamic environments. Current optimization and learning based approaches often suffer from slow convergence, limited exploration capability, and poor robustness when managing distributed edge cloud resources. To address these challenges, this research proposes QRGEC: Quantum Reinforcement Learning with Golden Jackal Optimization for Resilient Edge Cloud Coordination in Internet Computing. The proposed hybrid framework integrates quantum focused policy exploration with adaptive metaheuristic tuning to enhance distributed Internet computing optimization. Policy representations are encoded using variational quantum circuits, enabling efficient exploration of high dimensional decision spaces. Furthermore, the Golden Jackal Optimization mechanism adaptively adjusts reinforcement parameters to improve convergence stability and accelerate learning, thereby enabling resilient and energy-efficient coordination across heterogeneous edge and cloud environments. A resilience aware scheduler seamlessly balances energy efficiency, latency, and recovery within dynamic edge cloud workloads. Extensive experimental evaluations in QRGEC demonstrate that the framework is capable of outperforming previously established deep reinforcement and quantum heuristic baselines with a latency reduction of 36.8%, an increase in energy efficiency of 24.7%, an improvement in resilience of 48.2%, and a sustained resource utilization of 94%. QRGEC also displays the ability to automatically recover from network congestion and failures, recover from network congestion, and maintain balances latency energy trade-offs. This emphasizes the efficiency of QRGEC in autonomous recovery from network failures, making latency energy balance adjustments, and conserving energy.},
}
RevDate: 2026-03-07
Adaptive machine learning models for predictive maintenance in industrial internet of things (IIoT) systems.
Scientific reports pii:10.1038/s41598-026-42666-x [Epub ahead of print].
The research examines how reinforcement learning (RL) and deep reinforcement learning (DRL) models can be used to enhance the prediction of maintenance needs in the IIoT setting. The purpose is to assess the accuracy, precision, recall, F1 score, and AUC-ROC of adaptive models against non-adaptive models. The results make clear that adaptive models outperform traditional models in fault prediction, providing better accuracy and more reliable predictions. Furthermore, adaptive models can handle changes in the environment and the equipment better than other models. Moreover, when these models are used with edge and cloud computing, they ensure that decisions are applied quickly and that the models can be easily integrated into industrial systems. The research also demonstrates that adaptive machine learning models can improve model accuracy and reduce both false positive and false negative cases. Experimental assessment reveals consistent and statistically significant performance gains for adaptive models across all criteria. The Adaptive Ensemble performed best overall, achieving 93.4% accuracy and 95.2% AUC-ROC. In comparison to the most robust non-adaptive baseline (Random Forest), it enhances recall by 8.5 percentage points, precision by 7.8 percentage points, and F1-score by 8.2 percentage points; in comparison to SVM, recall increases by 11.2 percentage points and precision by 10.2 percentage points, signifying substantial decreases in undetected faults and false positives. The study provides information about how adaptive learning can be used in IIoT-based predictive maintenance (PdM) systems and offers advice to industries that want to make their PdM systems more reliable, effective, and cost-efficient.
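The evaluation metrics named above can be computed with standard tooling, as in the toy sketch below; the labels and scores are invented and do not reflect the study's datasets or models.

```python
# Fault-prediction metrics on toy labels/scores (illustrative only).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # 1 = impending fault
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.3, 0.6, 0.4, 0.1])
y_pred  = (y_score >= 0.5).astype(int)

print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))
print("recall   ", recall_score(y_true, y_pred))
print("F1       ", f1_score(y_true, y_pred))
print("AUC-ROC  ", roc_auc_score(y_true, y_score))
```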
Additional Links: PMID-41794976
Citation:
@article {pmid41794976,
year = {2026},
author = {Subashree, S and Rajakumaran, M and Pushpa, G and Manivannan, R},
title = {Adaptive machine learning models for predictive maintenance in industrial internet of things (IIoT) systems.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-42666-x},
pmid = {41794976},
issn = {2045-2322},
abstract = {The research examines how RL and DRL models can be used to enhance the prediction of maintenance needs in the IIoT setting. The purpose is to assess the accuracy, precision, recall, F1 score and the AUC-ROC of adaptive models against non-adaptive models. It is clear from the results that adaptive models outperform traditional models in fault prediction, providing better accuracy and more accurate predictions. Furthermore, adaptive models can handle changes in the environment and the equipment better than other models. Moreover, when these models are used with edge and cloud computing, they make sure that decisions are applied quickly and that the models can be easily integrated into industrial systems. The research also demonstrates that adaptive machine learning models can improve the accuracy of the model and reduce both false positive and false negative cases. When compared to non-adaptive baselines, adaptive models increased recall by up to 11.2% points and precision by up to 10.2% points. The Adaptive Ensemble performed best overall (93.4% accuracy, 95.2% AUC-ROC). Experimental assessment reveals consistent and statistically significant enhancements in performance for adaptive models across all criteria. The Adaptive Ensemble attains superior performance, achieving 93.4% accuracy and 95.2% AUC-ROC. In comparison to the most robust non-adaptive baseline (Random Forest), it enhances memory by 8.5% points, precision by 7.8% points, and F1-score by 8.2% points. In comparison to SVM, recall increases by 11.2% points and precision by 10.2% points, signifying significant decreases in undetected faults and false positives.The study provides information about how adaptive learning can be used in IIoT-based PdM systems and offers advice to industries that want to make their PdM systems more reliable, effective and cost-efficient.},
}
RevDate: 2026-03-07
Secure quantum-resilient smart city communication networks using QSC-Net with MF-MBO-based energy-aware task scheduling.
Scientific reports pii:10.1038/s41598-026-41015-2 [Epub ahead of print].
Modern task management in smart city development and in distributed edge-cloud computing systems necessitates adaptable optimisation that preserves efficiency under time-varying system dynamics. Conventional optimisation techniques such as genetic algorithms, particle swarm optimisation, and classical monarch butterfly optimisation (MBO) suffer from premature convergence, poor multi-objective performance, and limited adaptability to changing environments. Further, virtualised infrastructures impose operational constraints that impair their ability to uniformly support heterogeneous task types and quality-of-service demands. We present a hybrid scheduling framework called multi-strategy fuzzy-enhanced monarch butterfly optimisation (MF-MBO) that combines fuzzy dominance for robust multi-objective ranking, self-adaptive quantum-inspired tunnelling (a classical acceptance strategy) to escape stagnation, and bounded greedy migration for stable local refinement and load balancing. To accelerate convergence while maintaining task fairness across distributed virtual machines, MF-MBO dynamically balances exploration and exploitation. In experimental evaluation under different workload conditions, MF-MBO clearly outperforms baseline algorithms, providing improvements of 17.4% in task execution time, 22.8% in load-balancing efficiency, and 15.6% in energy consumption. The results are reported with respect to the standard MBO, and comparisons with GA and PSO use the same evaluation budget and workload conditions. The results show increased operational efficiency and scalability, along with greater robustness across varying environments. The introduced MF-MBO framework enables practical adaptation for smart city infrastructure services, distributed edge computing, and IoT-based applications through a reproducible, explainable optimisation pipeline. The study concludes with empirical results and benchmarks to support future extensions such as broader benchmarking and hardware-aware validation.
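A small helper clarifies the plain (non-fuzzy) dominance relation that multi-objective schedule ranking builds on; the candidate schedules and objective values below are invented, and the paper's fuzzy-dominance ranking is considerably richer.

```python
# Pareto dominance over (execution time, energy, load imbalance), all minimised.
def dominates(a, b):
    """True if schedule a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = {"S1": (120, 300, 0.20), "S2": (110, 320, 0.25), "S3": (100, 280, 0.18)}
front = [n for n, v in candidates.items()
         if not any(dominates(w, v) for m, w in candidates.items() if m != n)]
print("non-dominated schedules:", front)
```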
Additional Links: PMID-41794855
Citation:
@article {pmid41794855,
year = {2026},
author = {Reddy, NR and Dalton, GA and Swathi, K and Subrahmanyam, DVSS and G, LK and Nagaraju, G},
title = {Secure quantum-resilient smart city communication networks using QSC-Net with MF-MBO-based energy-aware task scheduling.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41015-2},
pmid = {41794855},
issn = {2045-2322},
abstract = {Adaptable optimisation that preserves efficiency under time-varying system dynamics necessitates modern task management in innovative city development and in distributed edge-cloud computing systems. Conservative optimisation techniques such as genetic algorithms, particle swarm optimisation, and classical monarch butterfly optimisation (MBO) suffer from premature convergence, poor multi-objective performance, and limited adaptability to changing environments. Further, virtualised infrastructures contextualise operational constraints that impair their ability to homogenously support heterogeneity in task types and quality-of-service demands. We present a hybrid scheduling framework called multi-strategy fuzzy-enhanced monarch butterfly optimisation (MF-MBO) that combines fuzzy dominance for strong multi-objective ranking, self-adaptive quantum-inspired tunnelling (classical acceptance strategy) to escape stagnation, and bounded greedy migration for stable local refinement and load balancing. To accelerate convergence while maintaining task fairness across distributed virtual machines, MF-MBO dynamically balances exploration and exploitation. In the experimental evaluation under different workload conditions, MF-MBO clearly outperforms baseline algorithms, providing improvements of 17.4% in task execution time, 22.8% in load-balancing efficiency, and 15.6% in energy consumption. The results are reported with respect to the standard MBO, while we also compare them with both GA and PSO under the same evaluation budget and workload conditions. The results show increased operational efficiency and scalability, along with greater robustness across varying environments. The idea behind the introduced MF-MBO framework enables practical adaptation for smart city infrastructure services, distributed edge computing, and IoT-based applications, through a reproducible, explainable optimisation pipeline. The last part of this study reports empirical results and sets a few benchmarks to support future extensions, such as broader-angle benchmarking and hardware-aware validation.},
}
RevDate: 2026-03-07
Research on multi-level container security isolation operation strategy based on static game.
Accident; analysis and prevention, 231:108478 pii:S0001-4575(26)00087-4 [Epub ahead of print].
With the increasing pressure on rail transit passenger services, the introduction of cloud-edge collaborative computing technology into passenger service systems has emerged as a research hotspot in recent years. The key issue in applying cloud computing to passenger service systems lies in enhancing the security and reliability of applications/data within containers in cloud-edge computing environments. Since passenger service systems operate in open network environments, they are exposed to numerous security vulnerabilities and malicious cyber-attacks, potentially leading to system crashes or the leakage of critical data. To mitigate the severe impacts of such attacks, effective container security isolation policies must be implemented. Firstly, a multi-level container security isolation model is proposed, along with the design of rules for configuring model security policies. Subsequently, a static game model is utilized to dynamically adjust optimal security strategies. By integrating the wolf pack algorithm with the co-evolution algorithm, a wolf pack-co-evolution algorithm is devised to ascertain the optimal solution for the static game, thereby determining the optimal security strategy. Finally, simulation experiments demonstrate that the wolf pack-co-evolution algorithm can effectively solve for the Nash equilibrium in the multi-level container security isolation static game model, enabling dynamic adjustments to be made to security strategies. This approach ensures the security of both container subjects and objects while enhancing container computing efficiency.
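As a toy companion to the static game formulation, the sketch below enumerates pure-strategy Nash equilibria of a small attacker-defender payoff matrix; the payoffs are invented, and the paper instead solves its multi-level isolation game with the wolf pack-co-evolution algorithm.

```python
# Enumerating pure-strategy Nash equilibria in a 2x2 attacker-defender game.
import numpy as np

# Rows: defender isolation level (0 = low, 1 = high);
# columns: attacker action (0 = attack, 1 = refrain). Payoffs are illustrative.
defender_payoff = np.array([[-5, 2],
                            [ 1, 1]])
attacker_payoff = np.array([[ 5, 0],
                            [ 1, 0]])

equilibria = []
for d in range(defender_payoff.shape[0]):
    for a in range(attacker_payoff.shape[1]):
        best_d = defender_payoff[d, a] >= defender_payoff[:, a].max()   # defender best response
        best_a = attacker_payoff[d, a] >= attacker_payoff[d, :].max()   # attacker best response
        if best_d and best_a:
            equilibria.append((d, a))
print("pure-strategy Nash equilibria:", equilibria)   # [(1, 0)] for these toy payoffs
```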
Additional Links: PMID-41793843
Citation:
@article {pmid41793843,
year = {2026},
author = {Wen, J and Cao, Y and Sun, Y and Wang, F},
title = {Research on multi-level container security isolation operation strategy based on static game.},
journal = {Accident; analysis and prevention},
volume = {231},
number = {},
pages = {108478},
doi = {10.1016/j.aap.2026.108478},
pmid = {41793843},
issn = {1879-2057},
abstract = {With the increasing pressure on rail transit passenger services, the introduction of cloud-edge collaborative computing technology into passenger service systems has emerged as a research hotspot in recent years. The key issue in applying cloud computing to passenger service systems lies in enhancing the security and reliability of applications/data within containers in cloud-edge computing environments. Since passenger service systems operate in open network environments, they are exposed to numerous security vulnerabilities and malicious cyber-attacks, potentially leading to system crashes or the leakage of critical data. To mitigate the severe impacts of such attacks, effective container security isolation policies must be implemented. Firstly, a multi-level container security isolation model is proposed, along with the design of rules for configuring model security policies. Subsequently, a static game model is utilized to dynamically adjust optimal security strategies. By integrating the wolf pack algorithm with the co-evolution algorithm, a wolf pack-co-evolution algorithm is devised to ascertain the optimal solution for the static game, thereby determining the optimal security strategy. Finally, simulation experiments demonstrate that the wolf pack-co-evolution algorithm can effectively solve for the Nash equilibrium in the multi-level container security isolation static game model, enabling dynamic adjustments to be made to security strategies. This approach ensures the security of both container subjects and objects while enhancing container computing efficiency.},
}
RevDate: 2026-03-06
Unlocking the road to entrepreneurial success: quality drivers and digital competence in cloud computing adoption.
Scientific reports pii:10.1038/s41598-026-41143-9 [Epub ahead of print].
Additional Links: PMID-41792405
Citation:
@article {pmid41792405,
year = {2026},
author = {Wu, C and Mehta, AM and Li, Z and Asif, M and Shahzad, MF},
title = {Unlocking the road to entrepreneurial success: quality drivers and digital competence in cloud computing adoption.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41143-9},
pmid = {41792405},
issn = {2045-2322},
}
RevDate: 2026-03-06
CmpDate: 2026-03-06
Integrated photonic 3D tensor processing engine.
Light, science & applications, 15(1):.
Optical computing offers high bandwidth, low latency, and power efficiency, and is considered one of the most effective solutions for accelerating deep learning tasks. However, mainstream photonic hardware accelerators are primarily optimized for two-dimensional (2D) matrix-vector multiplications (MVMs). To implement three-dimensional (3D) convolutional neural networks (CNNs), high-order tensors must be reshaped in the electrical domain according to the size of the accelerators before computation, leading to extra memory usage and time overheads. Additionally, synchronization across multiple channels depends on external electronic clocks, which increases the complexity of the system. In this work, we propose an integrated photonic 3D tensor processing engine (3D-TPE) based on the interleaving modulation of time, wavelength, and space. Data caching, channel synchronization, and computation are realized entirely within the optical domain, reducing memory and time usage and simplifying the system. Optical caching and synchronization are achieved with an optical tunable delay line (OTDL) chip supporting versatile clock frequencies up to 200 GHz, and optical computing is accomplished with a dual-coupled micro-ring resonator (MRR)-based crossbar chip with a 3-dB passband width of 50 GHz. We verify the processing capabilities of the 3D-TPE at clock frequencies ranging from 10 GHz to 30 GHz and perform a proof-of-concept experiment for a LiDAR 3D point cloud image recognition task operating at 20 GHz, achieving a recognition accuracy of 97.06%. The proposed 3D-TPE is anticipated to facilitate high-order tensor convolutions, playing an important role in autonomous driving, healthcare, video analytics, virtual reality, etc.
Additional Links: PMID-41792110
Citation:
@article {pmid41792110,
year = {2026},
author = {Wu, Y and Ni, Z and Li, X and Wang, Y and Lu, L and Chen, J and Zhou, L},
title = {Integrated photonic 3D tensor processing engine.},
journal = {Light, science & applications},
volume = {15},
number = {1},
pages = {},
pmid = {41792110},
issn = {2047-7538},
support = {62135010//National Natural Science Foundation of China (National Science Foundation of China)/ ; },
abstract = {Optical computing leverages high bandwidth, low latency, and power efficiency, which is considered as one of the most effective solutions for accelerating deep learning tasks. However, mainstream photonic hardware accelerators are primarily optimized for two-dimensional (2D) matrix-vector multiplications (MVMs). To implement three-dimensional (3D) convolutional neural networks (CNNs), high-order tensors must be reshaped in the electrical domain according to the size of the accelerators before computation, leading to extra memory usage and time overheads. Additionally, synchronization across multiple channels depends on external electronic clocks, which increases the complexity of the system. In this work, we propose an integrated photonic 3D tensor processing engine (3D-TPE) based on the interleaving modulation of time, wavelength, and space. Data caching, channel synchronization and computation are realized entirely within the optical domain, reducing memory and time usage, and simplifying the system. Optical caching and synchronization are achieved with an optical tunable delay line (OTDL) chip supporting versatile clock frequencies up to 200 GHz, and optical computing is accomplished with a dual-coupled micro-ring resonators (MRRs) based crossbar chip with a 3-dB passband width of 50 GHz. We verify the processing capabilities of the 3D-TPE at clock frequencies ranging from 10 GHz to 30 GHz and perform a proof-of-concept experiment for a LiDAR 3D point cloud image recognition task operating at 20 GHz, achieving a recognition accuracy of 97.06%. The proposed 3D-TPE is anticipated to facilitate high-order tensor convolutions, playing an important role in autonomous driving, healthcare, video analytics, virtual reality, etc.},
}
RevDate: 2026-03-06
Benchmarking Radiation Transport Monte Carlo Simulations with MCNP and Geant4 Using High Performance Computing.
Health physics pii:00004032-990000000-00329 [Epub ahead of print].
The objective of this paper is to compare the performance of several high-performance computing systems in order to inform decisions regarding their use for Monte Carlo simulations of radiation transport. Gamma ray emission from 131I in the human thyroid and detection using a personal radiation detector were modeled using the MCNP and Geant4 Monte Carlo software. These simulations were benchmarked by recording the computing time needed to run the simulation as a function of the number of parallel computing threads used. Simulations were run using a virtual machine, two desktop PCs, a CX-1 supercomputer, the Government of Canada General Purpose Science Cluster, and cloud computing. Using a higher number of parallel threads on these high-performance computing systems was found to reduce the computing time needed to run the MCNP and Geant4 simulations. The optimal configuration for running the simulations on cloud computing was evaluated, considering the number of available processors, the computing time, and the cost. Cloud computing was found to be a cost-effective, on-demand, high performance computing option for Monte Carlo simulations.
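Benchmark results of this kind are often summarised as speedup, parallel efficiency, and pay-per-use cost; the sketch below does exactly that for made-up timings and an assumed hourly vCPU rate, not the paper's measurements.

```python
# Post-processing a thread-scaling benchmark (illustrative numbers only).
threads = [1, 2, 4, 8, 16, 32]
wall_clock_h = [20.0, 10.4, 5.5, 3.0, 1.7, 1.1]   # hypothetical runtimes in hours
price_per_vcpu_hour = 0.05                         # assumed cloud rate in USD

for n, t in zip(threads, wall_clock_h):
    speedup = wall_clock_h[0] / t
    efficiency = speedup / n
    cost = n * t * price_per_vcpu_hour
    print(f"{n:>3} threads: speedup {speedup:5.2f}, "
          f"efficiency {efficiency:4.2f}, cost ${cost:6.2f}")
```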
Additional Links: PMID-41790045
Citation:
@article {pmid41790045,
year = {2026},
author = {Moats, K and Desrosiers, M and Linton, G and Wood, R},
title = {Benchmarking Radiation Transport Monte Carlo Simulations with MCNP and Geant4 Using High Performance Computing.},
journal = {Health physics},
volume = {},
number = {},
pages = {},
doi = {10.1097/HP.0000000000002134},
pmid = {41790045},
issn = {1538-5159},
abstract = {The objective of this paper is to compare the performance of several high-performance computing systems in order to inform decisions regarding their use for Monte Carlo simulations of radiation transport. Gamma ray emission from 131I in the human thyroid and detection using a personal radiation detector were modeled using the MCNP and Geant4 Monte Carlo software. These simulations were benchmarked by recording the computing time needed to run the simulation as a function of the number of parallel computing threads used. Simulations were run using a virtual machine, two desktop PCs, a CX-1 supercomputer, the Government of Canada General Purpose Science Cluster, and cloud computing. Using a higher number of parallel threads on these high-performance computing systems was found to reduce the computing time needed to run the MCNP and Geant4 simulations. The optimal configuration for running the simulations on cloud computing was evaluated, considering the number of available processors, the computing time, and the cost. Cloud computing was found to be a cost-effective, on-demand, high performance computing option for Monte Carlo simulations.},
}
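The benchmarking procedure described above, wall-clock time recorded as a function of the number of parallel threads, can be sketched with a small harness; the wrapper command below is a hypothetical placeholder, not the actual MCNP or Geant4 invocation used in the study.

import csv
import subprocess
import time

def run_simulation(threads: int) -> float:
    """Run one simulation with a given thread count and return its wall-clock time."""
    cmd = ["./run_simulation.sh", "--threads", str(threads)]  # hypothetical wrapper script
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    with open("thread_scaling.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["threads", "wall_time_s"])
        for n in (1, 2, 4, 8, 16, 32, 64):      # scan thread counts to measure speedup
            writer.writerow([n, run_simulation(n)])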
RevDate: 2026-03-05
CmpDate: 2026-03-05
Microcomb-enabled parallel self-calibration optical convolution streaming processor.
Light, science & applications, 15(1):.
The exponential growth of cloud computing and artificial intelligence (AI) applications has driven an urgent need for high-bandwidth, energy-efficient hardware architectures in data centers. With Moore's Law nearing its limits, optical neuromorphic computing hardware offers a promising alternative, providing ultra-high speeds and minimal energy consumption due to its analog architecture. Here, we propose the microcomb-enabled parallel optical convolution streaming processor (OCSP) with time, space, and wavelength three-dimensional multiplexing, operating at data rates of 50 GBaud or higher, achieving a convolution computing speed of up to 4 trillion operations per second (TOPS). Moreover, the OCSP uses a robust self-calibration mechanism to achieve accurate optical phase calibration and set-up of its convolution function. This innovative approach leverages time-space interleaving passive periodic interference architecture, incorporating wavelength-division-multiplexing technology, and is verified experimentally for parallel image feature extraction and recognition tasks. Our OCSP offers a practical pathway for seamlessly integrating photonic computing units into data center interconnects, unlocking photonic computing's potential for scalable, low-latency AI workloads.
Additional Links: PMID-41786696
@article {pmid41786696,
year = {2026},
author = {Wang, J and Xu, X and Zhu, X and Xu, Y and Chen, S and Zhang, H and Zheng, Y and Li, S and Bai, Y and Liu, Z and Morandotti, R and Little, BE and Chu, ST and Lowery, AJ and Moss, DJ and Xu, K},
title = {Microcomb-enabled parallel self-calibration optical convolution streaming processor.},
journal = {Light, science & applications},
volume = {15},
number = {1},
pages = {},
pmid = {41786696},
issn = {2047-7538},
abstract = {The exponential growth of cloud computing and artificial intelligence (AI) applications has driven an urgent need for high-bandwidth, energy-efficient hardware architectures in data centers. With Moore's Law nearing its limits, optical neuromorphic computing hardware offers a promising alternative, providing ultra-high speeds and minimal energy consumption due to its analog architecture. Here, we propose the microcomb-enabled parallel optical convolution streaming processor (OCSP) with time, space, and wavelength three-dimensional multiplexing, operating at data rates of 50 GBaud or higher, achieving a convolution computing speed of up to 4 trillion operations per second (TOPS). Moreover, the OCSP uses a robust self-calibration mechanism to achieve accurate optical phase calibration and set-up of its convolution function. This innovative approach leverages time-space interleaving passive periodic interference architecture, incorporating wavelength-division-multiplexing technology, and is verified experimentally for parallel image feature extraction and recognition tasks. Our OCSP offers a practical pathway for seamlessly integrating photonic computing units into data center interconnects, unlocking photonic computing's potential for scalable, low-latency AI workloads.},
}
RevDate: 2026-03-03
Feature Compression for Cloud-Edge Multimodal 3D Object Detection.
IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].
Machine vision systems, which can efficiently manage extensive visual perception tasks, are becoming increasingly popular in industrial production and daily life. Due to the challenge of simultaneously obtaining accurate depth and texture information with a single sensor, multimodal data captured by cameras and LiDAR is commonly used to enhance performance. Additionally, cloud-edge cooperation has emerged as a novel computing approach to improve user experience and ensure data security in machine vision systems. This paper proposes a pioneering solution to address the feature compression problem in multimodal 3D object detection. Given a sparse tensor-based object detection network at the edge device, we introduce two modes to accommodate different application requirements: Transmission-Friendly Feature Compression (T-FFC) and Accuracy-Friendly Feature Compression (A-FFC). In T-FFC mode, only the output of the last layer of the network's backbone is transmitted from the edge device. The received feature is processed at the cloud device through a channel expansion module and two spatial upsampling modules to generate multi-scale features. In A-FFC mode, we expand upon the T-FFC mode by transmitting two additional types of features. These added features enable the cloud device to generate more accurate multi-scale features. Experimental results on the KITTI dataset using the VirConv-L detection network showed that T-FFC was able to compress the features by a factor of 4933 with less than a 3% reduction in detection performance. On the other hand, A-FFC compressed the features by a factor of about 733 with almost no degradation in detection performance. We also designed optional residual extraction and 3D object reconstruction modules to facilitate the reconstruction of detected objects. The reconstructed objects effectively reflected the shape, occlusion, and details of the original objects.
Additional Links: PMID-41774639
@article {pmid41774639,
year = {2026},
author = {Tian, C and Li, Z and Yuan, H and Hamzaoui, R and Shen, L and Kwong, S},
title = {Feature Compression for Cloud-Edge Multimodal 3D Object Detection.},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TPAMI.2026.3669471},
pmid = {41774639},
issn = {1939-3539},
abstract = {Machine vision systems, which can efficiently manage extensive visual perception tasks, are becoming increasingly popular in industrial production and daily life. Due to the challenge of simultaneously obtaining accurate depth and texture information with a single sensor, multimodal data captured by cameras and LiDAR is commonly used to enhance performance. Additionally, cloud-edge cooperation has emerged as a novel computing approach to improve user experience and ensure data security in machine vision systems. This paper proposes a pioneering solution to address the feature compression problem in multimodal 3D object detection. Given a sparse tensor-based object detection network at the edge device, we introduce two modes to accommodate different application requirements: Transmission-Friendly Feature Compression (T-FFC) and Accuracy-Friendly Feature Compression (A-FFC). In T-FFC mode, only the output of the last layer of the network's backbone is transmitted from the edge device. The received feature is processed at the cloud device through a channel expansion module and two spatial upsampling modules to generate multi-scale features. In A-FFC mode, we expand upon the T-FFC mode by transmitting two additional types of features. These added features enable the cloud device to generate more accurate multi-scale features. Experimental results on the KITTI dataset using the VirConv-L detection network showed that T-FFC was able to compress the features by a factor of 4933 with less than a 3% reduction in detection performance. On the other hand, A-FFC compressed the features by a factor of about 733 with almost no degradation in detection performance. We also designed optional residual extraction and 3D object reconstruction modules to facilitate the reconstruction of detected objects. The reconstructed objects effectively reflected the shape, occlusion, and details of the original objects.},
}
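A rough sketch of the cloud-side path described for T-FFC (a channel expansion module followed by spatial upsampling modules that regenerate multi-scale features from the single transmitted feature map) is given below in PyTorch; the layer choices and channel sizes are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class CloudFeatureDecoder(nn.Module):
    """Expand channels of the single transmitted feature map, then upsample twice
    to approximate a multi-scale feature pyramid (T-FFC-style sketch)."""
    def __init__(self, in_ch=64, mid_ch=128, out_ch=256):
        super().__init__()
        self.expand = nn.Sequential(                       # channel expansion module
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.up1 = nn.ConvTranspose2d(out_ch, mid_ch, kernel_size=2, stride=2)
        self.up2 = nn.ConvTranspose2d(mid_ch, in_ch, kernel_size=2, stride=2)

    def forward(self, f):
        f0 = self.expand(f)     # same resolution, more channels
        f1 = self.up1(f0)       # 2x spatial upsampling
        f2 = self.up2(f1)       # 4x spatial upsampling
        return [f0, f1, f2]     # multi-scale features for the detection head

feat = torch.randn(1, 64, 44, 50)        # transmitted last-layer backbone feature
print([t.shape for t in CloudFeatureDecoder()(feat)])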
RevDate: 2026-03-03
Local-Global-Graph Network-Based Biokey Generation with Electrocardiogram Signal and Lightweight Authentication in Cloud-Based Internet of Medical Things Networks.
Critical reviews in biomedical engineering, 54(1):67-95.
The internet of medical things (IoMT) is regarded as a promising framework, which is used to expand and improve telemedicine services. Cloud-based IoMT refers to the integration of medical devices and sensors with cloud computing infrastructure, enabling real-time remote data collection, processing, storage, and analysis. This architecture supports the efficient management of patient health information and facilitates advanced telemedicine services by offering scalable, secure, and accessible healthcare solutions. Ensuring secure access and communication in such systems is critical, as vulnerabilities in the network can expose sensitive patient data to significant risks. Among various security measures, authentication using biomedical signals, particularly electrocardiogram (ECG) signals, is gaining attention due to their unique, individual-specific characteristics. Therefore, this paper develops a new approach called local-global-graph network-based biokey generation (LGGNet-BioKey) for authentication in Cloud-based IoMT. Initially, the Cloud-based IoMT network is simulated, and it includes three entities: the cloud server, gateway, and patient. First, the public key and security parameters are initialized, and then the entities are registered with the cloud server. Next, the key generation is done using LGGNet, and then the BioKey generation is performed using an ECG signal. Next, the lightweight authentication is done and lastly, attribute-based encryption and decryption are performed in the data preservation phase. Furthermore, the LGGNet-BioKey model achieved an execution time, memory usage, and key generation time of 3.772 sec, 9.096 MB, and 3.771 sec, respectively.
Additional Links: PMID-41774489
@article {pmid41774489,
year = {2026},
author = {Nagarathinam, SKA and Bhukya, RN},
title = {Local-Global-Graph Network-Based Biokey Generation with Electrocardiogram Signal and Lightweight Authentication in Cloud-Based Internet of Medical Things Networks.},
journal = {Critical reviews in biomedical engineering},
volume = {54},
number = {1},
pages = {67-95},
doi = {10.1615/CritRevBiomedEng.2025058925},
pmid = {41774489},
issn = {1943-619X},
abstract = {The internet of medical things (IoMT) is regarded as a promising framework, which is used to expand and improve telemedicine services. Cloud-based IoMT refers to the integration of medical devices and sensors with cloud computing infrastructure, enabling real-time remote data collection, processing, storage, and analysis. This architecture supports the efficient management of patient health information and facilitates advanced telemedicine services by offering scalable, secure, and accessible healthcare solutions. Ensuring secure access and communication in such systems is critical, as vulnerabilities in the network can expose sensitive patient data to significant risks. Among various security measures, authentication using biomedical signals, particularly electrocardiogram (ECG) signals, is gaining attention due to their unique, individual-specific characteristics. Therefore, this paper develops a new approach called local-global-graph network-based biokey generation (LGGNet-BioKey) for authentication in Cloud-based IoMT. Initially, the Cloud-based IoMT network is simulated, and it includes three entities: the cloud server, gateway, and patient. First, the public key and security parameters are initialized, and then the entities are registered with the cloud server. Next, the key generation is done using LGGNet, and then the BioKey generation is performed using an ECG signal. Next, the lightweight authentication is done and lastly, attribute-based encryption and decryption are performed in the data preservation phase. Furthermore, the LGGNet-BioKey model achieved an execution time, memory usage, and key generation time of 3.772 sec, 9.096 MB, and 3.771 sec, respectively.},
}
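The general idea of deriving reproducible key material from an ECG template, which LGGNet-BioKey implements with a learned network, can be sketched with simple quantization and hashing; this toy stand-in is not the paper's LGGNet, and it omits the lightweight authentication and attribute-based encryption phases.

import hashlib
import numpy as np

def ecg_biokey(features: np.ndarray, bins: int = 16, lo: float = -4.0, hi: float = 4.0) -> bytes:
    """Toy biokey derivation: quantize a fixed-length, normalized ECG feature vector on a
    fixed grid so that small acquisition noise usually maps to the same bin pattern, then
    hash that pattern. A real system would add fuzzy-extractor/error-correction logic."""
    clipped = np.clip(features, lo, hi)
    quantized = np.floor((clipped - lo) / (hi - lo) * (bins - 1)).astype(np.uint8)
    return hashlib.sha256(quantized.tobytes()).digest()    # 256-bit key material

rng = np.random.default_rng(7)
template = rng.standard_normal(32)                  # stand-in for extracted ECG features
noisy = template + rng.normal(scale=1e-4, size=32)  # tiny re-acquisition noise
print(ecg_biokey(template) == ecg_biokey(noisy))    # typically True for very small noise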
RevDate: 2026-03-02
Benchmarking multiple instance learning architectures from patches to pathology for prostate cancer detection and grading using attention-based weak supervision.
Scientific reports pii:10.1038/s41598-026-39196-x [Epub ahead of print].
Histopathological evaluation is necessary for the diagnosis and grading of prostate cancer, which is still one of the most common cancers in men globally. Traditional evaluation is time-consuming, prone to inter-observer variability, and challenging to scale. The clinical usefulness of current AI systems is limited by the need for comprehensive pixel-level annotations. The objective of this research is to develop and evaluate a large-scale benchmarking study on a weakly supervised deep learning framework that minimizes the need for annotation and ensures interpretability for automated prostate cancer diagnosis and International Society of Urological Pathology (ISUP) grading using whole slide images (WSIs). This study rigorously tested six cutting-edge multiple instance learning (MIL) architectures (CLAM-MB, CLAM-SB, ILRA-MIL, AC-MIL, AMD-MIL, WiKG-MIL), three feature encoders (ResNet50, CTransPath, UNI2), and four patch extraction techniques (varying sizes and overlap) using the PANDA dataset (10,616 WSIs), yielding 72 experimental configurations. The methodology used distributed cloud computing to process over 31 million tissue patches, implementing advanced attention mechanisms to ensure clinical interpretability through Grad-CAM visualizations. The optimum configuration (UNI2 encoder with ILRA-MIL, 256×256 patches, 50% overlap) achieved 78.75% accuracy and 90.12% quadratic weighted kappa (QWK), outperforming traditional methods and approaching expert pathologist-level diagnostic capability. Overlapping smaller patches offered the best balance of spatial resolution and contextual information, while domain-specific foundation models performed noticeably better than generic encoders. This work is the first large-scale, comprehensive comparison of weakly supervised MIL methods for prostate cancer diagnosis and grading. The proposed approach has excellent clinical diagnostic performance, scalability, practical feasibility through cloud computing, and interpretability using visualization tools.
Additional Links: PMID-41771952
@article {pmid41771952,
year = {2026},
author = {Butt, NA and Sarwat, D and Noya, ID and Tutusaus, K and Samee, NA and Ashraf, I},
title = {Benchmarking multiple instance learning architectures from patches to pathology for prostate cancer detection and grading using attention-based weak supervision.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-39196-x},
pmid = {41771952},
issn = {2045-2322},
support = {PNURSP2026R746//Princess Nourah bint Abdulrahman University Researchers Supporting Project/ ; },
abstract = {Histopathological evaluation is necessary for the diagnosis and grading of prostate cancer, which is still one of the most common cancers in men globally. Traditional evaluation is time-consuming, prone to inter-observer variability, and challenging to scale. The clinical usefulness of current AI systems is limited by the need for comprehensive pixel-level annotations. The objective of this research is to develop and evaluate a large-scale benchmarking study on a weakly supervised deep learning framework that minimizes the need for annotation and ensures interpretability for automated prostate cancer diagnosis and International Society of Urological Pathology (ISUP) grading using whole slide images (WSIs). This study rigorously tested six cutting-edge multiple instance learning (MIL) architectures (CLAM-MB, CLAM-SB, ILRA-MIL, AC-MIL, AMD-MIL, WiKG-MIL), three feature encoders (ResNet50, CTransPath, UNI2), and four patch extraction techniques (varying sizes and overlap) using the PANDA dataset (10,616 WSIs), yielding 72 experimental configurations. The methodology used distributed cloud computing to process over 31 million tissue patches, implementing advanced attention mechanisms to ensure clinical interpretability through Grad-CAM visualizations. The optimum configuration (UNI2 encoder with ILRA-MIL, 256×256 patches, 50% overlap) achieved 78.75% accuracy and 90.12% quadratic weighted kappa (QWK), outperforming traditional methods and approaching expert pathologist-level diagnostic capability. Overlapping smaller patches offered the best balance of spatial resolution and contextual information, while domain-specific foundation models performed noticeably better than generic encoders. This work is the first large-scale, comprehensive comparison of weakly supervised MIL methods for prostate cancer diagnosis and grading. The proposed approach has excellent clinical diagnostic performance, scalability, practical feasibility through cloud computing, and interpretability using visualization tools.},
}
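Attention-based weak supervision of the kind benchmarked above typically aggregates patch embeddings into a slide-level prediction through a learned attention pooling step; the PyTorch sketch below shows only that core mechanism, with illustrative dimensions, and is not one of the six benchmarked architectures.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Minimal attention pooling for multiple instance learning:
    patch embeddings -> attention weights -> weighted slide embedding -> ISUP logits."""
    def __init__(self, embed_dim=1024, hidden=256, n_classes=6):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, patches):                    # patches: (num_patches, embed_dim)
        scores = self.attention(patches)           # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)     # attention over patches in the slide "bag"
        slide_embedding = (weights * patches).sum(dim=0)
        return self.classifier(slide_embedding), weights   # logits + interpretable weights

bag = torch.randn(3000, 1024)                      # e.g. foundation-model patch features for one WSI
logits, attn = AttentionMIL()(bag)
print(logits.shape, attn.shape)                    # torch.Size([6]) torch.Size([3000, 1])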
RevDate: 2026-03-02
Sovereignty-as-a-service: How big tech companies co-opt and redefine digital sovereignty.
Media, culture, and society, 48(2):416-424 pii:10.1177_01634437251395003.
This article introduces the concept of sovereignty-as-a-service to describe how Big Tech companies, specifically Microsoft, Amazon, and Google/Alphabet, are strategically redefining digital sovereignty through their programs of cloud infrastructure. Drawing on critical discourse analysis of official materials released between 2022 and 2023, we show how these companies respond to regulatory pressures, particularly in Europe, by offering modular and branded solutions that frame sovereignty as a technical, legal, and infrastructural matter. Rather than sovereignty being exercised over platforms, it is now provisioned by them, on their terms. We argue that sovereignty-as-a-service constitutes a form of discursive capture that empties the concept, aligning it with the ideological legacy of the Californian Ideology. In this reframing, digital sovereignty becomes a service to be purchased, configured, and optimized through proprietary platforms. By conceptualizing sovereignty as a site of contested meanings open to appropriation, this article contributes to critical debates on digital sovereignty and technology governance.
Additional Links: PMID-41769682
@article {pmid41769682,
year = {2026},
author = {Grohmann, R and Costa Barbosa, A},
title = {Sovereignty-as-a-service: How big tech companies co-opt and redefine digital sovereignty.},
journal = {Media, culture, and society},
volume = {48},
number = {2},
pages = {416-424},
doi = {10.1177/01634437251395003},
pmid = {41769682},
issn = {1460-3675},
abstract = {This article introduces the concept of sovereignty-as-a-service to describe how Big Tech companies, specifically Microsoft, Amazon, and Google/Alphabet, are strategically redefining digital sovereignty through their programs of cloud infrastructure. Drawing on critical discourse analysis of official materials released between 2022 and 2023, we show how these companies respond to regulatory pressures, particularly in Europe, by offering modular and branded solutions that frame sovereignty as a technical, legal, and infrastructural matter. Rather than sovereignty being exercised over platforms, it is now provisioned by them, on their terms. We argue that sovereignty-as-a-service constitutes a form of discursive capture that empties the concept, aligning it with the ideological legacy of the Californian Ideology. In this reframing, digital sovereignty becomes a service to be purchased, configured, and optimized through proprietary platforms. By conceptualizing sovereignty as a site of contested meanings open to appropriation, this article contributes to critical debates on digital sovereignty and technology governance.},
}
RevDate: 2026-03-02
Monitoring environmental impacts of a designated aquaculture area in the Karaburun Peninsula using Google Earth Engine.
PeerJ, 14:e20873.
Satellite-based monitoring of aquaculture impacts remains constrained by the absence of standardized, reproducible methodologies capable of capturing long-term environmental dynamics. This study introduces a novel framework that integrates Difference-in-Differences (DiD) causal inference with multi-decadal Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data and Google Earth Engine (GEE) cloud computing to evaluate aquaculture-related changes in coastal ecosystems. Using 20 years of satellite observations (2002-2022) from the Karaburun Peninsula, İzmir, Türkiye, we compared three representative sites: an aquaculture zone, a coastal area influenced by human settlements, and an offshore reference site with minimal anthropogenic activity. The human-impacted coastal site consistently exhibited the highest concentrations of surface parameters, reflecting dominant background anthropogenic influences. However, DiD analysis revealed no statistically significant differences in chlorophyll-a (Chl-a), particulate organic carbon (POC), or other parameters between the aquaculture and control sites, indicating that potential aquaculture-related effects remained below the detection threshold of the 1 km MODIS resolution. Despite these null results, the study demonstrates the feasibility and limitations of combining causal inference and cloud-based remote sensing for aquaculture monitoring. This methodological integration provides a scalable, cost-effective, and transferable framework for detecting and interpreting environmental change across large spatial and temporal domains. By defining the sensitivity limits of satellite-based detection, this work lays a foundation for future applications that merge high-resolution sensors, in-situ validation, and process-based modeling in sustainable aquaculture management.
Additional Links: PMID-41769407
@article {pmid41769407,
year = {2026},
author = {Tosun, DD},
title = {Monitoring environmental impacts of a designated aquaculture area in the Karaburun Peninsula using Google Earth Engine.},
journal = {PeerJ},
volume = {14},
number = {},
pages = {e20873},
pmid = {41769407},
issn = {2167-8359},
abstract = {Satellite-based monitoring of aquaculture impacts remains constrained by the absence of standardized, reproducible methodologies capable of capturing long-term environmental dynamics. This study introduces a novel framework that integrates Difference-in-Differences (DiD) causal inference with multi-decadal Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data and Google Earth Engine (GEE) cloud computing to evaluate aquaculture-related changes in coastal ecosystems. Using 20 years of satellite observations (2002-2022) from the Karaburun Peninsula, İzmir, Türkiye, we compared three representative sites: an aquaculture zone, a coastal area influenced by human settlements, and an offshore reference site with minimal anthropogenic activity. The human-impacted coastal site consistently exhibited the highest concentrations of surface parameters, reflecting dominant background anthropogenic influences. However, DiD analysis revealed no statistically significant differences in chlorophyll-a (Chl-a), particulate organic carbon (POC), or other parameters between the aquaculture and control sites, indicating that potential aquaculture-related effects remained below the detection threshold of the 1 km MODIS resolution. Despite these null results, the study demonstrates the feasibility and limitations of combining causal inference and cloud-based remote sensing for aquaculture monitoring. This methodological integration provides a scalable, cost-effective, and transferable framework for detecting and interpreting environmental change across large spatial and temporal domains. By defining the sensitivity limits of satellite-based detection, this work lays a foundation for future applications that merge high-resolution sensors, in-situ validation, and process-based modeling in sustainable aquaculture management.},
}
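The Difference-in-Differences step of the workflow above reduces, in its simplest form, to an interaction regression of site by period; the statsmodels sketch below assumes a tidy table of satellite-derived means with illustrative column names and values, and is not the author's Google Earth Engine script.

import pandas as pd
import statsmodels.formula.api as smf

# Illustrative tidy table: one row per site per period with a MODIS-derived mean.
df = pd.DataFrame({
    "chl_a":   [0.31, 0.29, 0.35, 0.33, 0.30, 0.28, 0.34, 0.36],
    "treated": [1, 1, 0, 0, 1, 1, 0, 0],   # 1 = aquaculture zone, 0 = offshore control
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = after farm operations began
})

# Classic DiD: the 'treated:post' coefficient estimates the aquaculture-related change.
model = smf.ols("chl_a ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"], model.pvalues["treated:post"])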
RevDate: 2026-03-02
CmpDate: 2026-03-02
Enhancing E-health system accuracy using Rendezvous Data Processing Model (RDPM) with IoT-cloud integration.
Digital health, 12:20552076251406312.
OBJECTIVE: The study's overarching goal is to improve E-health monitoring systems' precision and performance by developing and implementing a Rendezvous Data Processing Model (RDPM) that is compatible with IoT-cloud architecture. The approach solves a problem with current E-health systems; these systems frequently make incorrect or redundant suggestions because they depend too much on static analytic methods and isolated data augmentation.
METHODS: The recommended RDPM system improves real-time decision-making by digesting historical suggestions and present analytical flaws. Divided features and data streams allow it to validate new hypotheses by comparing them to earlier observations. The state learning process has been improved by earlier efforts; to avoid errors and data duplication, the model must distinguish intervening and non-intervening data. Internet-connected sensors collect massive volumes of patient and environment data. Cloud analytics evaluates the system's precision using these data.
RESULTS: Experimental results show that RDPM reduces data interruptions, analytical errors, and recommendation ratios while improving decision correctness. The model shows that it can quickly interpret many input streams without compromising accuracy. Compared to IoT-based healthcare analytics, the RDPM improves suggestion accuracy and reduces computing redundancy.
CONCLUSION: IoT-cloud technologies with the RDPM system establish an adaptive and scalable platform for sophisticated E-health monitoring. State learning and dynamic data validation allow RDPM to make more accurate and convenient health recommendations. This approach allows a healthcare system to self-improve, understand context, and manage massive, real-time datasets.
Additional Links: PMID-41767873
@article {pmid41767873,
year = {2026},
author = {Shahab, S and Kumar Dutta, A and Shaikh, ZA and Yousef, A and Anjum, M},
title = {Enhancing E-health system accuracy using Rendezvous Data Processing Model (RDPM) with IoT-cloud integration.},
journal = {Digital health},
volume = {12},
number = {},
pages = {20552076251406312},
pmid = {41767873},
issn = {2055-2076},
abstract = {OBJECTIVE: The study's overarching goal is to improve E-health monitoring systems' precision and performance by developing and implementing a Rendezvous Data Processing Model (RDPM) that is compatible with IoT-cloud architecture. The approach solves a problem with current E-health systems; these systems frequently make incorrect or redundant suggestions because they depend too much on static analytic methods and isolated data augmentation.
METHODS: The recommended RDPM system improves real-time decision-making by digesting historical suggestions and present analytical flaws. Divided features and data streams allow it to validate new hypotheses by comparing them to earlier observations. The state learning process has been improved by earlier efforts; to avoid errors and data duplication, the model must distinguish intervening and non-intervening data. Internet-connected sensors collect massive volumes of patient and environment data. Cloud analytics evaluates the system's precision using these data.
RESULTS: Experimental results show that RDPM reduces data interruptions, analytical errors, and recommendation ratios while improving decision correctness. The model shows that it can quickly interpret many input streams without compromising accuracy. Compared to IoT-based healthcare analytics, the RDPM improves suggestion accuracy and reduces computing redundancy.
CONCLUSION: IoT-cloud technologies with the RDPM system establish an adaptive and scalable platform for sophisticated E-health monitoring. State learning and dynamic data validation allow RDPM to make more accurate and convenient health recommendations. This approach allows a healthcare system to self-improve, understand context, and manage massive, real-time datasets.},
}
RevDate: 2026-02-27
IntelliScheduler: an edge-cloud computing environment hybrid deep learning framework for task scheduling based on learning.
Scientific reports pii:10.1038/s41598-026-41330-8 [Epub ahead of print].
Edge-cloud computing has emerged as an important paradigm for modern Internet of Things (IoT) workflow applications, enabling low latency and on-demand resource allocation. In scenarios with heterogeneous deadlines and varying workloads, SLA compliance requires efficient coordination between edge and cloud resources. However, cloud-centric scheduling and heuristic approaches tend to lack adaptability to rapidly changing system conditions and, as a result, experience long waiting times (the same applies to QoS). To tackle these issues, we present IntelliScheduler, a hybrid actor-critic deep reinforcement learning framework for adaptive task scheduling in an edge-cloud system. Our framework presents a runtime-aware state representation combined with a learning-based decision mechanism, backed by a multi-buffer experience replay architecture. Second, a learning-based optimal task scheduling (LbOTS) algorithm is developed to minimise total task execution delay by discovering optimal deployment decisions across edge and cloud computational resources using latency-aware reward modelling. We assess the proposed approach by conducting extensive simulation experiments under different workloads. We evaluate LbOTS across various experimental scenarios and report up to 13% higher normalised reward, 67% lower training loss, 52-66% lower operational cost, and 80-90% lower rejection rate compared to PSO, MBO, and MOPSO baselines, achieving approximately 15-75% better QoE. Though the current assessment is simulation-based, the adaptive learning formulation is highly relevant for application in dynamic edge-cloud scheduling scenarios.
Additional Links: PMID-41760833
@article {pmid41760833,
year = {2026},
author = {Raju, LR and Reddy, MVK and Surukanti, SR and Sudhakar, G and Subrahmanya Sarma M, VV and Adepu, A},
title = {IntelliScheduler: an edge-cloud computing environment hybrid deep learning framework for task scheduling based on learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41330-8},
pmid = {41760833},
issn = {2045-2322},
abstract = {Edge-cloud computing has emerged as an important paradigm for modern Internet of Things (IoT) workflow applications, enabling low latency and on-demand resource allocation. In scenarios with heterogeneous deadlines and varying workloads, SLA compliance requires efficient coordination between edge and cloud resources. However, cloud-centric scheduling and heuristic approaches tend to lack adaptability to rapidly changing system conditions and, as a result, experience long waiting times (the same applies to QoS). To tackle these issues, we present IntelliScheduler, a hybrid actor-critic deep reinforcement learning framework for adaptive task scheduling in an edge-cloud system. Our framework presents a runtime-aware state representation combined with a learning-based decision mechanism, backed by a multi-buffer experience replay architecture. Second, a learning-based optimal task scheduling (LbOTS) algorithm is developed to minimise total task execution delay by discovering optimal deployment decisions across edge and cloud computational resources using latency-aware reward modelling. We assess the proposed approach by conducting extensive simulation experiments under different workloads. We evaluate LbOTS across various experimental scenarios and report up to 13% higher normalised reward, 67% lower training loss, 52-66% lower operational cost, and 80-90% lower rejection rate compared to PSO, MBO, and MOPSO baselines, achieving approximately 15-75% better QoE. Though the current assessment is simulation-based, the adaptive learning formulation is highly relevant for application in dynamic edge-cloud scheduling scenarios.},
}
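The latency-aware reward modelling mentioned for LbOTS can be sketched as a function that penalises execution delay, operational cost, and deadline misses; the weights and penalty below are illustrative assumptions, not values from the paper.

def scheduling_reward(exec_delay_s: float, deadline_s: float, op_cost: float,
                      alpha: float = 1.0, beta: float = 0.1,
                      violation_penalty: float = 10.0) -> float:
    """Latency-aware reward for an edge/cloud placement decision: shorter delays and
    lower cost raise the reward; missing the deadline incurs a fixed penalty so the
    agent learns SLA-compliant placements."""
    reward = -alpha * exec_delay_s - beta * op_cost
    if exec_delay_s > deadline_s:
        reward -= violation_penalty
    return reward

# Cloud placement: low compute delay but some extra network latency and cost.
print(scheduling_reward(exec_delay_s=0.8, deadline_s=1.0, op_cost=2.0))
# Edge placement that misses its deadline is penalised heavily.
print(scheduling_reward(exec_delay_s=1.4, deadline_s=1.0, op_cost=0.5))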
RevDate: 2026-02-27
The Evolving Cyberinfrastructure at the National Institutes of Health to Support Data and AI in Biomedical Research.
Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 31:859-864.
Technological advancements have made biomedicine rich in data. With the generation of enormous volumes of biomedical and clinical data, it has become imperative to support biomedical computing investigators to utilize this wealth of biologically meaningful information. Moreover, advancements in Artificial Intelligence (AI) techniques, in conjunction with improved capabilities in implementing large-scale data processing pipelines, have led to the development of robust computational techniques and algorithms to solve complex biological problems. However, there are many challenges associated with providing researchers with secure systems for accessing biomedical data and computational resources that must be addressed. Establishing and maintaining an impactful data and AI ecosystem to support efforts in advancing biomedical research requires effective, scalable, and standardized information technology solutions, funding programs, and technical guidance that facilitate researchers in utilizing the state-of-the-art. The U.S. National Institutes of Health (NIH) has established novel initiatives to implement a cyberinfrastructure that democratizes secure access to large biomedical datasets and cloud-based computing resources, equipping biocomputing scientists to pursue pioneering research. This workshop will highlight the major issues restraining researchers' access to biomedical datasets and computing infrastructures and will cover the key components of the NIH's cyberinfrastructure aimed at advancing data science and AI research for biomedical applications.
Additional Links: PMID-41758192
@article {pmid41758192,
year = {2026},
author = {Ramwala, OA and Weber, N and Mooney, SD},
title = {The Evolving Cyberinfrastructure at the National Institutes of Health to Support Data and AI in Biomedical Research.},
journal = {Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing},
volume = {31},
number = {},
pages = {859-864},
doi = {10.1142/9789819824755_0064},
pmid = {41758192},
issn = {2335-6936},
abstract = {Technological advancements have made biomedicine rich in data. With the generation of enormous volumes of biomedical and clinical data, it has become imperative to support biomedical computing investigators to utilize this wealth of biologically meaningful information. Moreover, advancements in Artificial Intelligence (AI) techniques, in conjunction with improved capabilities in implementing large-scale data processing pipelines, have led to the development of robust computational techniques and algorithms to solve complex biological problems. However, there are many challenges associated with providing researchers with secure systems for accessing biomedical data and computational resources that must be addressed. Establishing and maintaining an impactful data and AI ecosystem to support efforts in advancing biomedical research requires effective, scalable, and standardized information technology solutions, funding programs, and technical guidance that facilitate researchers in utilizing the state-of-the-art. The U.S. National Institutes of Health (NIH) has established novel initiatives to implement a cyberinfrastructure that democratizes secure access to large biomedical datasets and cloud-based computing resources, equipping biocomputing scientists to pursue pioneering research. This workshop will highlight the major issues restraining researchers' access to biomedical datasets and computing infrastructures and will cover the key components of the NIH's cyberinfrastructure aimed at advancing data science and AI research for biomedical applications.},
}
RevDate: 2026-02-27
CmpDate: 2026-02-27
Single cell RNA sequencing data processing using cloud-based serverless computing.
bioRxiv : the preprint server for biology pii:2025.04.26.650787.
Single cell RNA sequencing (scRNA-seq) has become a routine method for measuring cell activities. Processing large scRNA-seq datasets requires high-performance computing resources. The emergence of cloud computing allows us to leverage its on-demand capabilities without major investment in infrastructure. Serverless computing provides cost efficiency by allowing users to pay only for actual resource usage, eliminating the necessity for pre-allocated server capacities. Additionally, there is no requirement to set up servers in advance. We present a novel and generalizable methodology using serverless cloud computing to accelerate computationally intensive workflows. We create an on-demand "supercomputer" using rapidly deployable cloud serverless functions as automatically provisioned computation units. We tested our methodology of optimizing a scRNA-seq workflow by leveraging serverless functions on the cloud using two publicly available peripheral blood mononuclear cell (PBMC) datasets. In addition, we demonstrate our approach using data generated by the NIH MorPhiC program, where we process a 450 GB human scRNA-seq dataset across 86 cell lines designed to study the temporal impact of perturbations on pancreatic differentiation. We compared the total execution time of the scRNA-seq serverless workflow with the traditional workflow without using serverless functions, and demonstrate major speedup for large scRNA-seq datasets.
Additional Links: PMID-41756869
@article {pmid41756869,
year = {2026},
author = {Hung, LH and Nasam, N and Biju, C and Lloyd, W and Yeung, KY},
title = {Single cell RNA sequencing data processing using cloud-based serverless computing.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2025.04.26.650787},
pmid = {41756869},
issn = {2692-8205},
abstract = {Single cell RNA sequencing (scRNA-seq) has become a routine method for measuring cell activities. Processing large scRNA-seq datasets requires high-performance computing resources. The emergence of cloud computing allows us to leverage its on-demand capabilities without major investment in infrastructure. Serverless computing provides cost efficiency by allowing users to pay only for actual resource usage, eliminating the necessity for pre-allocated server capacities. Additionally, there is no requirement to set up servers in advance. We present a novel and generalizable methodology using serverless cloud computing to accelerate computationally intensive workflows. We create an on-demand "supercomputer" using rapidly deployable cloud serverless functions as automatically provisioned computation units. We tested our methodology of optimizing a scRNA-seq workflow by leveraging serverless functions on the cloud using two publicly available peripheral blood mononuclear cell (PBMC) datasets. In addition, we demonstrate our approach using data generated by the NIH MorPhiC program, where we process a 450 GB human scRNA-seq dataset across 86 cell lines designed to study the temporal impact of perturbations on pancreatic differentiation. We compared the total execution time of the scRNA-seq serverless workflow with the traditional workflow without using serverless functions, and demonstrate major speedup for large scRNA-seq datasets.},
}
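The fan-out pattern behind such a serverless "supercomputer" (many short-lived functions processing data chunks concurrently, billed only while they run) can be sketched with AWS Lambda and a thread pool; the function name, bucket paths, and payload fields below are hypothetical, not the authors' pipeline.

import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def process_chunk(chunk_uri: str) -> dict:
    """Invoke one serverless worker synchronously on a single scRNA-seq data chunk."""
    response = lambda_client.invoke(
        FunctionName="scrnaseq-chunk-worker",          # hypothetical function name
        InvocationType="RequestResponse",
        Payload=json.dumps({"input_uri": chunk_uri}),  # hypothetical payload schema
    )
    return json.loads(response["Payload"].read())

if __name__ == "__main__":
    chunks = [f"s3://example-bucket/pbmc/chunk_{i:04d}.h5ad" for i in range(256)]
    # Concurrency replaces a pre-provisioned cluster; each function bills only its own runtime.
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = list(pool.map(process_chunk, chunks))
    print(len(results), "chunks processed")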
RevDate: 2026-02-27
Intelligent Water Quality Assessment and Prediction System for Public Networks: A Comparative Analysis of ML Algorithms and Rule-Based Recommender Techniques.
Sensors (Basel, Switzerland), 26(4): pii:s26041392.
An assessment and prediction system for the quality of public water networks was developed, using Timișoara, Romania, as a case study. This was implemented on a Google Firebase cloud storage system and comprised twelve ML algorithms applied to test samples for drinkability and used in predictions of upcoming samples. The system compares 17 water quality parameters to the World Health Organization and public reports of Timișoara drinking water standards for 804 samples. The system provides real-time data storage, drinkability prediction for the reservoir water system, and rule-based critical water recommendations for elementary treatment in samples. Of the ML models, the decision tree algorithm was the most accurate and best calibrated, outperforming the random forest, gradient boosting, and logistic regression algorithms. The experimental findings also determine the regions of the worst and best water quality and propose respective treatment. In contrast to previous research and structures, the paper demonstrates an approved stable solution for smart water monitoring, correlating practical deployment with sophisticated data-based conclusions. The results contribute to enhancing public health, enhancing water management measures, and upscaling the system for larger-scale applications.
Additional Links: PMID-41755335
@article {pmid41755335,
year = {2026},
author = {Paliuc, C and Banu-Taran, P and Petruc, SI and Bogdan, R and Popa, M},
title = {Intelligent Water Quality Assessment and Prediction System for Public Networks: A Comparative Analysis of ML Algorithms and Rule-Based Recommender Techniques.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041392},
pmid = {41755335},
issn = {1424-8220},
abstract = {An assessment and prediction system for the quality of public water networks was developed, using Timișoara, Romania, as a case study. This was implemented on a Google Firebase cloud storage system and comprised twelve ML algorithms applied to test samples for drinkability and used in predictions of upcoming samples. The system compares 17 water quality parameters to the World Health Organization and public reports of Timișoara drinking water standards for 804 samples. The system provides real-time data storage, drinkability prediction for the reservoir water system, and rule-based critical water recommendations for elementary treatment in samples. Of the ML models, the decision tree algorithm was the most accurate and best calibrated, outperforming the random forest, gradient boosting, and logistic regression algorithms. The experimental findings also determine the regions of the worst and best water quality and propose respective treatment. In contrast to previous research and structures, the paper demonstrates an approved stable solution for smart water monitoring, correlating practical deployment with sophisticated data-based conclusions. The results contribute to enhancing public health, enhancing water management measures, and upscaling the system for larger-scale applications.},
}
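The winning model in the comparison above, a decision tree classifying drinkability from water-quality parameters, corresponds to a standard scikit-learn workflow; the snippet below uses synthetic data and two illustrative parameters rather than the 17-parameter Timișoara dataset.

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in for the water samples: only pH and turbidity (NTU) are used here.
X = np.column_stack([rng.uniform(6.0, 9.0, 804), rng.uniform(0.1, 6.0, 804)])
y = ((X[:, 0] > 6.5) & (X[:, 0] < 8.5) & (X[:, 1] < 5.0)).astype(int)  # WHO-style drinkability rule

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("drinkability accuracy:", accuracy_score(y_test, clf.predict(X_test)))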
RevDate: 2026-02-27
Verifiable Differential Privacy Partial Disclosure for IoT with Stateless k-Use Tokens.
Sensors (Basel, Switzerland), 26(4): pii:s26041393.
Internet of Things (IoT) applications often require only minimal necessary information-such as threshold judgments, binning, or prefixes-yet they must control privacy leakage arising from multi-round and cross-entity access without exposing raw values. Existing solutions, however, frequently rely on ciphertext structures and server-side states, making it difficult to define a leakage upper bound for restricted answers in the sense of Differential Privacy (DP), or they lack unified information budgeting and k-use control. To address these challenges, this paper proposes a verifiable differential privacy partial disclosure scheme for IoT. We employ DP accounting to uniformly constrain the leakage of three types of operators: threshold, binning, and prefix. Furthermore, we design stateless k-use tokens based on Verifiable Random Functions (VRFs) and chained receipts to generate publicly verifiable compliance evidence for each response. We implemented an end-edge-cloud prototype system and evaluated its performance on two use cases: smart meter threshold alarms and industrial sensor out-of-bound detection. Experimental results demonstrate that compared with a baseline relying on server-state counting for k-use control, our stateless k-use mechanism improves throughput by approximately 25-37% under concurrency scales of 1, 8, and 16, and reduces p95 latency by an average of 15%. Meanwhile, in multi-party splicing attack experiments, the re-identification accuracy remains stable in the 0.50-0.52 range, approximating random guessing. These results validate that the proposed scheme possesses low-energy engineering feasibility and audit-friendliness while effectively suppressing splicing risks.
Additional Links: PMID-41755332
@article {pmid41755332,
year = {2026},
author = {Zheng, D and Shi, W and Pan, Y and Shu, S and Xu, C and Li, Z and Wang, B and Lin, Y and Liu, P},
title = {Verifiable Differential Privacy Partial Disclosure for IoT with Stateless k-Use Tokens.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041393},
pmid = {41755332},
issn = {1424-8220},
support = {2022YFB3305302//National Key Research and Development Program of China/ ; 202510423112X//China National University Student Innovation & Entrepreneurship Development Program/ ; },
abstract = {Internet of Things (IoT) applications often require only minimal necessary information-such as threshold judgments, binning, or prefixes-yet they must control privacy leakage arising from multi-round and cross-entity access without exposing raw values. Existing solutions, however, frequently rely on ciphertext structures and server-side states, making it difficult to define a leakage upper bound for restricted answers in the sense of Differential Privacy (DP), or they lack unified information budgeting and k-use control. To address these challenges, this paper proposes a verifiable differential privacy partial disclosure scheme for IoT. We employ DP accounting to uniformly constrain the leakage of three types of operators: threshold, binning, and prefix. Furthermore, we design stateless k-use tokens based on Verifiable Random Functions (VRFs) and chained receipts to generate publicly verifiable compliance evidence for each response. We implemented an end-edge-cloud prototype system and evaluated its performance on two use cases: smart meter threshold alarms and industrial sensor out-of-bound detection. Experimental results demonstrate that compared with a baseline relying on server-state counting for k-use control, our stateless k-use mechanism improves throughput by approximately 25-37% under concurrency scales of 1, 8, and 16, and reduces p95 latency by an average of 15%. Meanwhile, in multi-party splicing attack experiments, the re-identification accuracy remains stable in the 0.50-0.52 range, approximating random guessing. These results validate that the proposed scheme possesses low-energy engineering feasibility and audit-friendliness while effectively suppressing splicing risks.},
}
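One of the three restricted operators above, the threshold judgment, has a standard differentially private form: add Laplace noise calibrated to the reading's sensitivity, release only the comparison bit, and charge the spent epsilon to a budget. The sketch below shows that accounting idea only; it omits the VRF-based stateless k-use tokens and chained receipts.

import numpy as np

class DPThreshold:
    """Release only 'value > threshold' under epsilon-DP via the Laplace mechanism
    (the released bit is post-processing of the noisy value), with simple budget accounting."""
    def __init__(self, total_epsilon: float, sensitivity: float):
        self.remaining = total_epsilon
        self.sensitivity = sensitivity      # max change of the reading between neighbouring inputs

    def query(self, value: float, threshold: float, epsilon: float) -> bool:
        if epsilon > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= epsilon           # unified accounting per restricted answer
        noisy = value + np.random.laplace(scale=self.sensitivity / epsilon)
        return noisy > threshold            # only this bit leaves the device

meter = DPThreshold(total_epsilon=1.0, sensitivity=0.5)      # e.g. smart-meter kWh reading
print(meter.query(value=3.2, threshold=3.0, epsilon=0.2))    # alarm bit; raw value never disclosed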
RevDate: 2026-02-27
Two-Stage Wildlife Event Classification for Edge Deployment.
Sensors (Basel, Switzerland), 26(4): pii:s26041366.
Camera-based wildlife monitoring is often overwhelmed by non-target triggers and slowed by manual review or cloud-dependent inference, which can prevent timely intervention for high stakes human-wildlife conflicts. Our key contribution is a deployable, fully offline edge vision sensor that achieves near-real-time, highly accurate wildlife event classification by combining detector-based empty-image suppression with a lightweight classifier trained with a staged transfer-learning curriculum. Specifically, Stage 1 uses a pretrained You Only Look Once (YOLO)-family detector for permissive animal localization and empty-trigger suppression, and Stage 2 uses a lightweight EfficientNet-based binary classifier to confirm puma on detector crops and gate downstream actions. Our design is robust to low-quality nighttime monochrome imagery (motion blur, low contrast, illumination artifacts, and partial-body captures) and operates using commercially available components in connectivity-limited settings. In field deployments running since May 2025, end-to-end latency from camera trigger to action command is approximately 4 s. Ablation studies using a dataset of labeled wildlife images (pumas, not pumas) show that the two-stage approach substantially reduces false alarms in identifying pumas relative to a full-image classifier while maintaining high recall. On the held-out test set (N=1434 events), the proposed two-stage cascade achieves precision 0.983, recall 0.975, F1 0.979, accuracy 0.986, and balanced accuracy 0.983, with only 8 false positives and 12 false negatives. The system can be easily adapted for other species, as demonstrated by rapid retraining of the second stage to classify ringtails. Downstream responses (e.g., notifications and optional audio/light outputs) provide flexible actuation capabilities that can be configured to support intervention.
Additional Links: PMID-41755305
@article {pmid41755305,
year = {2026},
author = {Viswanathan, AS and Bock, A and Bent, Z and Peyton, MA and Tartakovsky, DM and Santos, JE},
title = {Two-Stage Wildlife Event Classification for Edge Deployment.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041366},
pmid = {41755305},
issn = {1424-8220},
support = {FA9550-24-1-0237//United States Air Force Office of Scientific Research/ ; DE-SC0023163//Office of Advanced Scientific Computing Research/ ; },
abstract = {Camera-based wildlife monitoring is often overwhelmed by non-target triggers and slowed by manual review or cloud-dependent inference, which can prevent timely intervention for high stakes human-wildlife conflicts. Our key contribution is a deployable, fully offline edge vision sensor that achieves near-real-time, highly accurate wildlife event classification by combining detector-based empty-image suppression with a lightweight classifier trained with a staged transfer-learning curriculum. Specifically, Stage 1 uses a pretrained You Only Look Once (YOLO)-family detector for permissive animal localization and empty-trigger suppression, and Stage 2 uses a lightweight EfficientNet-based binary classifier to confirm puma on detector crops and gate downstream actions. Our design is robust to low-quality nighttime monochrome imagery (motion blur, low contrast, illumination artifacts, and partial-body captures) and operates using commercially available components in connectivity-limited settings. In field deployments running since May 2025, end-to-end latency from camera trigger to action command is approximately 4 s. Ablation studies using a dataset of labeled wildlife images (pumas, not pumas) show that the two-stage approach substantially reduces false alarms in identifying pumas relative to a full-image classifier while maintaining high recall. On the held-out test set (N=1434 events), the proposed two-stage cascade achieves precision 0.983, recall 0.975, F1 0.979, accuracy 0.986, and balanced accuracy 0.983, with only 8 false positives and 12 false negatives. The system can be easily adapted for other species, as demonstrated by rapid retraining of the second stage to classify ringtails. Downstream responses (e.g., notifications and optional audio/light outputs) provide flexible actuation capabilities that can be configured to support intervention.},
}
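The two-stage gating logic above (a permissive detector suppresses empty triggers, then a lightweight classifier confirms the species on each detector crop) can be summarised in a short cascade; detect_animals and classify_puma are placeholder callables standing in for the YOLO-family detector and the EfficientNet classifier.

from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]            # (x1, y1, x2, y2) pixel coordinates

def two_stage_event(image,
                    detect_animals: Callable[[object], List[Box]],
                    classify_puma: Callable[[object], float],
                    puma_threshold: float = 0.5) -> bool:
    """Stage 1: a permissive detector suppresses empty triggers.
    Stage 2: a lightweight classifier confirms the species on each detector crop."""
    boxes = detect_animals(image)           # deliberately over-inclusive animal localization
    if not boxes:                           # empty trigger: stop early, no classifier call
        return False
    for x1, y1, x2, y2 in boxes:
        crop = image[y1:y2, x1:x2]          # classify crops, not the full frame
        if classify_puma(crop) >= puma_threshold:
            return True                     # gate downstream notification / deterrent outputs
    return False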
RevDate: 2026-02-27
Dynamic Micro-Batch and Token-Budget Scheduling for IoT-Scale Pipeline-Parallel LLM Inference.
Sensors (Basel, Switzerland), 26(4): pii:s26041101.
Large language models in IoT-edge-cloud settings face bursty, heterogeneous requests that make pipeline-parallel inference prone to micro-batch imbalance and communication stalls, causing GPU idle time and SLO violations. We propose a runtime-adaptive scheduler that jointly tunes token budgets and micro-batch counts to balance prefill/decode workloads and minimize pipeline bubbles under changing compute and network conditions. On a four-node pipeline-parallel cluster across Llama-2-13b and Qwen2.5-14b at 100/1000 Mbps, our method outperforms vLLM and SGLang, reducing GPU idle time by up to 55% and improving throughput by up to 1.61 × while improving TTFT/ITL SLO satisfaction. These results show that dynamic scheduling is essential for scalable, latency-stable LLM inference in IoT-edge-cloud environments.
Additional Links: PMID-41755042
@article {pmid41755042,
year = {2026},
author = {Ahn, J and Son, Y and Kim, D and Park, S},
title = {Dynamic Micro-Batch and Token-Budget Scheduling for IoT-Scale Pipeline-Parallel LLM Inference.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041101},
pmid = {41755042},
issn = {1424-8220},
support = {25DIH-32//Daegu Digital Innovation Promotion Agency(DIP)/ ; },
abstract = {Large language models in IoT-edge-cloud settings face bursty, heterogeneous requests that make pipeline-parallel inference prone to micro-batch imbalance and communication stalls, causing GPU idle time and SLO violations. We propose a runtime-adaptive scheduler that jointly tunes token budgets and micro-batch counts to balance prefill/decode workloads and minimize pipeline bubbles under changing compute and network conditions. On a four-node pipeline-parallel cluster across Llama-2-13b and Qwen2.5-14b at 100/1000 Mbps, our method outperforms vLLM and SGLang, reducing GPU idle time by up to 55% and improving throughput by up to 1.61 × while improving TTFT/ITL SLO satisfaction. These results show that dynamic scheduling is essential for scalable, latency-stable LLM inference in IoT-edge-cloud environments.},
}
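The core scheduling knob described above, forming micro-batches whose combined token count stays under a runtime-tuned budget, can be sketched as a simple packing routine; the budget value, request format, and greedy strategy are illustrative and are not the authors' scheduler.

from typing import Dict, List

def pack_micro_batches(requests: List[Dict], token_budget: int) -> List[List[Dict]]:
    """Greedily pack prefill requests into micro-batches whose total prompt tokens
    stay under the current token budget, so no pipeline stage is overloaded."""
    batches, current, used = [], [], 0
    for req in sorted(requests, key=lambda r: r["prompt_tokens"], reverse=True):
        if used + req["prompt_tokens"] > token_budget and current:
            batches.append(current)          # flush the full micro-batch
            current, used = [], 0
        current.append(req)
        used += req["prompt_tokens"]
    if current:
        batches.append(current)
    return batches

reqs = [{"id": i, "prompt_tokens": t} for i, t in enumerate([512, 128, 2048, 64, 900, 300])]
for batch in pack_micro_batches(reqs, token_budget=2048):    # budget would be tuned at runtime
    print([r["id"] for r in batch], sum(r["prompt_tokens"] for r in batch))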
RevDate: 2026-02-27
Challenges and Opportunities in Multi-Omics Data Acquisition and Analysis: Toward Integrative Solutions.
Biomolecules, 16(2): pii:biom16020271.
In this perspective, we discuss the current challenges and opportunities in multi-omics, a rapidly evolving approach that integrates multiple molecular layers to advance our understanding of complex biological systems. As biomedical research moves toward precision medicine, the ability to correlate genotype, phenotype, and environmental contexts has never been more critical. Multi-omics enhances biomarker discovery and elucidates regulatory networks underlying health and disease. The dominant scientific paradigm for over a century was to take a reductionist approach, studying individual molecular components in isolation or as simplified systems. The advent of omics technologies in the 1990s enabled a systems paradigm, allowing holistic analyses of molecular networks. These early systems studies were constrained by technology and methodology to bulk tissue measurements and single-omics analyses. Recent advances in single-cell and spatial omics, high-throughput proteomics and metabolomics, cloud computing, and artificial intelligence now allow high-resolution, spatially contextualized multi-omics analyses. Despite these gains, challenges in data analysis and interpretation remain, including high dimensionality, missing or incomplete data, multiple batch effects, and method-specific variability. Emerging strategies-such as paired data collection, staged or joint integration, and latent factor or quasi-mediation frameworks-offer promising solutions, positioning multi-omics as a transformative tool for elucidating complex mechanisms and guiding personalized medicine. Continued refinement of these approaches may further enhance the utility of multi-omics for understanding complex biological systems.
Additional Links: PMID-41750340
@article {pmid41750340,
year = {2026},
author = {Hemme, CL and Atoyan, J and Cai, A and Liu, C},
title = {Challenges and Opportunities in Multi-Omics Data Acquisition and Analysis: Toward Integrative Solutions.},
journal = {Biomolecules},
volume = {16},
number = {2},
pages = {},
doi = {10.3390/biom16020271},
pmid = {41750340},
issn = {2218-273X},
support = {P20GM103430/GM/NIGMS NIH HHS/United States ; },
abstract = {In this perspective, we discuss the current challenges and opportunities in multi-omics, a rapidly evolving approach that integrates multiple molecular layers to advance our understanding of complex biological systems. As biomedical research moves toward precision medicine, the ability to correlate genotype, phenotype, and environmental contexts has never been more critical. Multi-omics enhances biomarker discovery and elucidates regulatory networks underlying health and disease. The dominant scientific paradigm for over a century was to take a reductionist approach, studying individual molecular components in isolation or as simplified systems. The advent of omics technologies in the 1990s enabled a systems paradigm, allowing holistic analyses of molecular networks. These early systems studies were constrained by technology and methodology to bulk tissue measurements and single-omics analyses. Recent advances in single-cell and spatial omics, high-throughput proteomics and metabolomics, cloud computing, and artificial intelligence now allow high-resolution, spatially contextualized multi-omics analyses. Despite these gains, challenges in data analysis and interpretation remain, including high dimensionality, missing or incomplete data, multiple batch effects, and method-specific variability. Emerging strategies-such as paired data collection, staged or joint integration, and latent factor or quasi-mediation frameworks-offer promising solutions, positioning multi-omics as a transformative tool for elucidating complex mechanisms and guiding personalized medicine. Continued refinement of these approaches may further enhance the utility of multi-omics for understanding complex biological systems.},
}
RevDate: 2026-02-27
CmpDate: 2026-02-27
Stroke Rehabilitation, Novel Technology and the Internet of Medical Things.
Brain sciences, 16(2): pii:brainsci16020124.
Stroke continues to impose an enormous morbidity and mortality burden worldwide. Stroke survivors often incur debilitating consequences that impair motor function, independence in activities of daily living and quality of life. Rehabilitation is a pivotal intervention to minimize disability and promote functional recovery following a stroke. The Internet of Medical Things, a network of connected medical devices, software and health systems that collect, store and analyze health data over the internet, is an emerging resource in neurorehabilitation for stroke survivors. Technologies such as asynchronous transmission to handle intermittent connectivity, edge computing to conserve bandwidth and lengthen device life, functional interoperability across platforms, security mechanisms scalable to resource constraints, and hybrid architectures that combine local processing with cloud synchronization help bridge the digital divide and infrastructure limitations in low-resource environments. This manuscript reviews emerging rehabilitation technologies such as robotic devices, virtual reality, brain-computer interfaces and telerehabilitation in the setting of neurorehabilitation for stroke patients.
Additional Links: PMID-41750125
@article {pmid41750125,
year = {2026},
author = {Costa, A and Schmalzried, E and Tong, J and Khanyan, B and Wang, W and Jin, Z and Bergese, SD},
title = {Stroke Rehabilitation, Novel Technology and the Internet of Medical Things.},
journal = {Brain sciences},
volume = {16},
number = {2},
pages = {},
doi = {10.3390/brainsci16020124},
pmid = {41750125},
issn = {2076-3425},
abstract = {Stroke continues to impose an enormous morbidity and mortality burden worldwide. Stroke survivors often incur debilitating consequences that impair motor function, independence in activities of daily living and quality of life. Rehabilitation is a pivotal intervention to minimize disability and promote functional recovery following a stroke. The Internet of Medical Things, a network of connected medical devices, software and health systems that collect, store and analyze health data over the internet, is an emerging resource in neurorehabilitation for stroke survivors. Technologies such as asynchronous transmission to handle intermittent connectivity, edge computing to conserve bandwidth and lengthen device life, functional interoperability across platforms, security mechanisms scalable to resource constraints, and hybrid architectures that combine local processing with cloud synchronization help bridge the digital divide and infrastructure limitations in low-resource environments. This manuscript reviews emerging rehabilitation technologies such as robotic devices, virtual reality, brain-computer interfaces and telerehabilitation in the setting of neurorehabilitation for stroke patients.},
}
RevDate: 2026-02-26
An Intelligent, low-cost water quality monitoring system with on-device machine learning and cloud integration.
Scientific reports pii:10.1038/s41598-026-37287-3 [Epub ahead of print].
Additional Links: PMID-41748630
@article {pmid41748630,
year = {2026},
author = {Sharma, S and Mishra, D and Yadav, A and Gami, B and Madhan, ES},
title = {An Intelligent, low-cost water quality monitoring system with on-device machine learning and cloud integration.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-37287-3},
pmid = {41748630},
issn = {2045-2322},
}
RevDate: 2026-02-26
Copernicus Data Space Ecosystem establishes public cloud processing for earth observation data.
Scientific data pii:10.1038/s41597-026-06765-8 [Epub ahead of print].
The Copernicus Data Space Ecosystem is the official data platform for the Copernicus Programme's satellites. CDSE combines instant access to satellite imagery with Application Programming Interfaces and virtual machine processing. Instead of downloading satellite imagery for local computation, CDSE utilizes cloud-optimized files to provide data according to the filtering and processing request of the user, facilitating large-scale scientific analysis. Cloud computing on CDSE eliminates the need for users to rely on their own data infrastructure. The incorporated standards support both Open Science and commercialization of scientific tools and algorithms. CDSE serves all users from beginners to professionals, from the interactive visualization of imagery to custom ML algorithms. Acquiring the skills required to process Earth Observation data is facilitated by the open-source codebase and tutorials. Access to public cloud processing is expected to foster the uptake of Earth Observation across new domains. CDSE now provides the critical mass to serve as a tool for knowledge exchange and to influence commercial and public providers alike to support cloud processing.
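As an illustration of the API-driven access pattern described above, the sketch below queries a STAC catalogue for a small area and date range instead of downloading whole scenes. It is not taken from CDSE documentation: the endpoint URL, collection identifier, and bounding box are assumptions to be checked against the current CDSE documentation.

```python
# Minimal sketch of catalogue-based access (assumptions: the STAC endpoint URL
# and the collection name below are illustrative, not verified CDSE values).
from pystac_client import Client

# Open the (assumed) CDSE STAC catalogue endpoint.
catalog = Client.open("https://catalogue.dataspace.copernicus.eu/stac")

# Search a small bounding box and date range rather than downloading full scenes.
search = catalog.search(
    collections=["SENTINEL-2"],       # assumed collection identifier
    bbox=[4.0, 50.5, 4.5, 51.0],      # lon/lat box, illustrative area
    datetime="2024-06-01/2024-06-30",
    max_items=5,
)

for item in search.items():
    # Each item references cloud-optimized assets that can be read remotely.
    print(item.id, list(item.assets))
```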
Additional Links: PMID-41748608
@article {pmid41748608,
year = {2026},
author = {D Kovács, D and Musial, J and Bojanowski, J and Clarijs, D and de la Mar, J and Zlinszky, A},
title = {Copernicus Data Space Ecosystem establishes public cloud processing for earth observation data.},
journal = {Scientific data},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41597-026-06765-8},
pmid = {41748608},
issn = {2052-4463},
abstract = {The Copernicus Data Space Ecosystem is the official data platform for the Copernicus Programme's satellites. CDSE combines instant access to satellite imagery with Application Programming Interfaces and virtual machine processing. Instead of downloading satellite imagery for local computation, CDSE utilizes cloud-optimized files to provide data according to the filtering and processing request of the user, facilitating large-scale scientific analysis. Cloud computing on CDSE eliminates the need for users to rely on their own data infrastructure. The incorporated standards support both Open Science and commercialization of scientific tools and algorithms. CDSE serves all users from beginners to professionals, from the interactive visualization of imagery to custom ML algorithms. Acquiring the skills required to process Earth Observation data is facilitated by the open-source codebase and tutorials. Access to public cloud processing is expected to foster the uptake of Earth Observation across new domains. CDSE now provides the critical mass to serve as a tool for knowledge exchange and to influence commercial and public providers alike to support cloud processing.},
}
RevDate: 2026-02-26
Design and implementation of a comprehensive management platform for drilling engineering.
PloS one, 21(2):e0343700 pii:PONE-D-25-60433.
To enhance the efficiency, safety, and data accuracy of drilling engineering, this study developed an integrated business management platform for drilling engineering grassroots units based on the Business Model Driven (BMD) approach. The platform is built on a "five horizontal, three vertical" cloud computing architecture, establishing a five-layer system from the infrastructure layer to the user layer horizontally, and supported by standard specifications, safety, and maintenance systems vertically, enabling collaboration across multiple business scenarios and data integration. Currently, four major modules with over 20 functionalities have been developed, supporting applications such as task coordination, engineering supervision, data analysis, and accident handling. Operational results demonstrate that the platform effectively promotes integrated management of drilling engineering through real-time data sharing, full-process quality control, and intelligent decision-making, thereby enhancing operational quality and safety, reducing accident risks, and providing critical technological support for the digital transformation and upgrading of the drilling industry.
Additional Links: PMID-41746936
@article {pmid41746936,
year = {2026},
author = {Du, Y and Yang, Y and Wu, X and Gao, P and Ma, H},
title = {Design and implementation of a comprehensive management platform for drilling engineering.},
journal = {PloS one},
volume = {21},
number = {2},
pages = {e0343700},
doi = {10.1371/journal.pone.0343700},
pmid = {41746936},
issn = {1932-6203},
abstract = {To enhance the efficiency, safety, and data accuracy of drilling engineering, this study developed an integrated business management platform for drilling engineering grassroots units based on the Business Model Driven (BMD) approach. The platform is built on a "five horizontal, three vertical" cloud computing architecture, establishing a five-layer system from the infrastructure layer to the user layer horizontally, and supported by standard specifications, safety, and maintenance systems vertically, enabling collaboration across multiple business scenarios and data integration. Currently, four major modules with over 20 functionalities have been developed, supporting applications such as task coordination, engineering supervision, data analysis, and accident handling. Operational results demonstrate that the platform effectively promotes integrated management of drilling engineering through real-time data sharing, full-process quality control, and intelligent decision-making, thereby enhancing operational quality and safety, reducing accident risks, and providing critical technological support for the digital transformation and upgrading of the drilling industry.},
}
RevDate: 2026-02-26
Automatic Speech Recognition for Intelligibility Assessment in Children With Dysarthria.
Journal of speech, language, and hearing research : JSLHR [Epub ahead of print].
PURPOSE: Accurate assessment of speech intelligibility is critical for children with dysarthria secondary to cerebral palsy. Traditional assessment methods, such as human listeners' orthographic transcription and perceptual ratings (e.g., of ease of understanding [EoU]), are time consuming or subjective. Automatic speech recognition (ASR) may provide a more efficient, objective alternative, but its use for assessing intelligibility in this population is unexamined. This study evaluated the potential of ASR for intelligibility assessment in children with dysarthria and identified the most appropriate ASR systems for approximating human listeners' judgments.
METHOD: Five ASR systems transcribed speech samples from 20 children with dysarthria. Additionally, 168 adult listeners provided orthographic transcriptions and EoU ratings. Word recognition rate (WRR) was used as the metric for calculating ASR and human listeners' transcription accuracy. Spearman correlations were used to assess the relationship between ASR WRR and human WRR, as well as between ASR WRR and human EoU ratings.
RESULTS: The WRR yielded by four ASR systems (WhisperX-small, WhisperX-medium, WhisperX-large, and Google Cloud) showed strong correlations with human WRR, with WhisperX-medium demonstrating the strongest correlation. These four systems' WRRs also exhibited moderate-to-strong correlations with EoU ratings, with Google Cloud ASR showing the strongest correlation. In contrast, the WRR of Wav2Vec2 demonstrated a weak correlation with both human WRR and EoU ratings.
CONCLUSIONS: ASR shows promise for use in intelligibility assessment in children with dysarthria. Of the tested ASR systems, WhisperX-medium appears most promising for approximating human transcription accuracy, whereas Google Cloud ASR aligns best with perceptual ratings. Such differences in ASR performance highlight the need for careful system selection in clinical applications.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.31397457.
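To make the correlation analysis concrete, here is a toy sketch (not the study's code) that computes a simplified, alignment-free word recognition rate and a Spearman correlation between ASR-derived and listener-derived scores; the per-speaker numbers are invented.

```python
# Illustrative only: toy word recognition rate (WRR) and Spearman correlation
# between ASR WRR and human-listener WRR across speakers.
from scipy.stats import spearmanr

def wrr(reference: str, hypothesis: str) -> float:
    """Fraction of reference words recovered (toy, order-insensitive metric)."""
    ref_words = reference.lower().split()
    hyp_words = hypothesis.lower().split()
    hits = sum(1 for w in ref_words if w in hyp_words)
    return hits / len(ref_words) if ref_words else 0.0

print(wrr("the dog ran home", "a dog ran home today"))  # 0.75

# Hypothetical per-speaker scores from an ASR system and from human listeners.
asr_wrr = [0.42, 0.55, 0.61, 0.73, 0.80]
human_wrr = [0.38, 0.57, 0.66, 0.70, 0.85]

rho, p = spearmanr(asr_wrr, human_wrr)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```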
Additional Links: PMID-41746192
@article {pmid41746192,
year = {2026},
author = {Choi, J and Moya-Galé, G and Hwang, K and Hirschberg, J and Levy, ES},
title = {Automatic Speech Recognition for Intelligibility Assessment in Children With Dysarthria.},
journal = {Journal of speech, language, and hearing research : JSLHR},
volume = {},
number = {},
pages = {1-17},
doi = {10.1044/2025_JSLHR-25-00562},
pmid = {41746192},
issn = {1558-9102},
abstract = {PURPOSE: Accurate assessment of speech intelligibility is critical for children with dysarthria secondary to cerebral palsy. Traditional assessment methods, such as human listeners' orthographic transcription and perceptual ratings (e.g., of ease of understanding [EoU]), are time consuming or subjective. Automatic speech recognition (ASR) may provide a more efficient, objective alternative, but its use for assessing intelligibility in this population is unexamined. This study evaluated the potential of ASR for intelligibility assessment in children with dysarthria and identified the most appropriate ASR systems for approximating human listeners' judgments.
METHOD: Five ASR systems transcribed speech samples from 20 children with dysarthria. Additionally, 168 adult listeners provided orthographic transcriptions and EoU ratings. Word recognition rate (WRR) was used as the metric for calculating ASR and human listeners' transcription accuracy. Spearman correlations were used to assess the relationship between ASR WRR and human WRR, as well as between ASR WRR and human EoU ratings.
RESULTS: The WRR yielded by four ASR systems (WhisperX-small, WhisperX-medium, WhisperX-large, and Google Cloud) showed strong correlations with human WRR, with WhisperX-medium demonstrating the strongest correlation. These four systems' WRRs also exhibited moderate-to-strong correlations with EoU ratings, with Google Cloud ASR showing the strongest correlation. In contrast, the WRR of Wav2Vec2 demonstrated a weak correlation with both human WRR and EoU ratings.
CONCLUSIONS: ASR shows promise for use in intelligibility assessment in children with dysarthria. Of the tested ASR systems, WhisperX-medium appears most promising for approximating human transcription accuracy, whereas Google Cloud ASR aligns best with perceptual ratings. Such differences in ASR performance highlight the need for careful system selection in clinical applications.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.31397457.},
}
RevDate: 2026-02-26
CmpDate: 2026-02-26
Accelerating Point Cloud Computation via Memory in Embedded Structured Light Cameras.
Journal of imaging, 12(2):.
Embedded structured light cameras have been widely applied in various fields. However, due to constraints such as insufficient computing resources, it remains difficult to achieve high-speed structured light point cloud computation. To address this issue, this study proposes a memory-driven computational framework for accelerating point cloud computation. Specifically, the point cloud computation process is precomputed as much as possible and stored in memory in the form of parameters, thereby significantly reducing the computational load during actual point cloud computation. The framework is instantiated in two forms: a low-memory method that minimizes memory footprint at the expense of point cloud stability, and a high-memory method that preserves the nonlinear phase-distance relation via an extensive lookup table. Experimental evaluations demonstrate that the proposed methods achieve comparable accuracy to the conventional method while delivering substantial speedups, and data-format optimizations further reduce required bandwidth. This framework offers a generalizable paradigm for optimizing structured light pipelines, paving the way for enhanced real-time 3D sensing in embedded applications.
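The memory-for-compute idea can be illustrated with a toy per-pixel lookup table that maps quantized phase values to depth; the linear phase-to-depth model, image size, and calibration coefficients below are assumptions for illustration, not the paper's pipeline.

```python
# Toy sketch: precompute a per-pixel lookup table (LUT) mapping quantized phase
# to depth offline, then reduce online point cloud computation to indexing.
import numpy as np

H, W, LEVELS = 120, 160, 256                 # small toy resolution and phase levels

# Offline: precompute depth for every (pixel, quantized phase) combination.
a = np.random.uniform(0.8, 1.2, (H, W))      # per-pixel scale (stand-in for calibration)
b = np.random.uniform(-5.0, 5.0, (H, W))     # per-pixel offset
phase_levels = np.linspace(0, 2 * np.pi, LEVELS, dtype=np.float32)
lut = (a[..., None] * phase_levels + b[..., None]).astype(np.float32)  # (H, W, LEVELS)

# Online: convert a measured phase map to depth by table lookup only.
phase = np.random.uniform(0, 2 * np.pi, (H, W)).astype(np.float32)
idx = np.clip((phase / (2 * np.pi) * (LEVELS - 1)).astype(np.int32), 0, LEVELS - 1)
depth = np.take_along_axis(lut, idx[..., None], axis=2)[..., 0]

print(depth.shape, f"LUT size ~ {lut.nbytes / 1e6:.0f} MB")  # memory traded for speed
```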
Additional Links: PMID-41745455
@article {pmid41745455,
year = {2026},
author = {Zhang, Y and Meng, S and Wang, S and Ren, Y},
title = {Accelerating Point Cloud Computation via Memory in Embedded Structured Light Cameras.},
journal = {Journal of imaging},
volume = {12},
number = {2},
pages = {},
pmid = {41745455},
issn = {2313-433X},
support = {F2025302004//Natural Science Foundation of Hebei Province/ ; 25B601//Hebei Academy of Sciences/ ; },
abstract = {Embedded structured light cameras have been widely applied in various fields. However, due to constraints such as insufficient computing resources, it remains difficult to achieve high-speed structured light point cloud computation. To address this issue, this study proposes a memory-driven computational framework for accelerating point cloud computation. Specifically, the point cloud computation process is precomputed as much as possible and stored in memory in the form of parameters, thereby significantly reducing the computational load during actual point cloud computation. The framework is instantiated in two forms: a low-memory method that minimizes memory footprint at the expense of point cloud stability, and a high-memory method that preserves the nonlinear phase-distance relation via an extensive lookup table. Experimental evaluations demonstrate that the proposed methods achieve comparable accuracy to the conventional method while delivering substantial speedups, and data-format optimizations further reduce required bandwidth. This framework offers a generalizable paradigm for optimizing structured light pipelines, paving the way for enhanced real-time 3D sensing in embedded applications.},
}
RevDate: 2026-02-25
Intelligent cloud-based RAS management: integration of DDPG reinforcement learning with AWS IoT for optimized aquaculture production.
Scientific reports pii:10.1038/s41598-025-33736-7 [Epub ahead of print].
While Deep Deterministic Policy Gradient (DDPG) reinforcement learning has demonstrated significant potential for optimizing aquaculture operations in laboratory and controlled environments, its practical deployment in commercial-scale Recirculating Aquaculture Systems (RAS) faces critical scalability and infrastructure challenges. This paper presents a novel cloud-edge hybrid architecture that enables the deployment of DDPG-based control systems across diverse commercial aquaculture operations, from small research facilities to large-scale production systems. Building upon our previous work in DDPG-based feeding rate optimization and energy management, we develop a comprehensive framework that addresses the practical challenges of deploying AI-based control systems in real-world aquaculture environments. The proposed architecture integrates AWS IoT Core for sensor connectivity, AWS Greengrass for edge intelligence, and a suite of cloud services for scalable model deployment and management. Edge optimization techniques, including 16-bit quantization and architecture pruning, reduced the DDPG model size by 74% (32 MB to 8.3 MB) while maintaining accuracy within 1.5% of the full-precision version, enabling real-time inference with 47 ± 8 ms latency across all deployment scales. Field validation in a commercial facility with 108 tanks (3,132 m[3] total volume) demonstrated exceptional scalability, with only 8.9% latency increase from small-scale (1,000 L) to large-scale (50,000 L) operations. The system achieved 99.97% IoT message delivery rates and maintained 98.7% reliability in critical parameter control, while comprehensive failsafe mechanisms ensured safe operation during network disruptions lasting up to 72 h. Network resilience testing validated robust performance under various connectivity challenges, maintaining 98.5% performance retention during minor network latency and 85.2% retention during 12-hour complete disconnections. This research establishes a practical blueprint for transitioning DDPG-based aquaculture management from research environments to commercial deployment, addressing critical gaps in scalability, reliability, and operational resilience that have previously limited the adoption of AI-based control systems in the aquaculture industry.
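A back-of-the-envelope sketch of the quantization arithmetic follows: converting float32 weights to float16 halves the payload, and a crude stand-in for pruning brings a 32 MB toy parameter set near the reported 8.3 MB. The pruning fraction is an assumption chosen to match the reported figure, not the authors' method.

```python
# Illustrative only: the size effect of 16-bit quantization plus a pruning stand-in.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(8_000_000).astype(np.float32)   # toy network: 32 MB

quantized = weights.astype(np.float16)                         # halves the payload
kept = quantized[: int(0.52 * quantized.size)]                 # crude pruning stand-in

print(f"float32 size: {weights.nbytes / 1e6:.1f} MB")
print(f"float16 size: {quantized.nbytes / 1e6:.1f} MB")
print(f"after pruning stand-in: {kept.nbytes / 1e6:.1f} MB")
print(f"max abs quantization error: {np.abs(weights - quantized.astype(np.float32)).max():.4f}")
```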
Additional Links: PMID-41741505
@article {pmid41741505,
year = {2026},
author = {Elmessery, WM and Shams, MY and El-Hafeez, TA and Eid, MH and Székács, A and Saeed, O and Ahmed, AF and Alhumedi, M and Elwakeel, AE},
title = {Intelligent cloud-based RAS management: integration of DDPG reinforcement learning with AWS IoT for optimized aquaculture production.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-33736-7},
pmid = {41741505},
issn = {2045-2322},
abstract = {While Deep Deterministic Policy Gradient (DDPG) reinforcement learning has demonstrated significant potential for optimizing aquaculture operations in laboratory and controlled environments, its practical deployment in commercial-scale Recirculating Aquaculture Systems (RAS) faces critical scalability and infrastructure challenges. This paper presents a novel cloud-edge hybrid architecture that enables the deployment of DDPG-based control systems across diverse commercial aquaculture operations, from small research facilities to large-scale production systems. Building upon our previous work in DDPG-based feeding rate optimization and energy management, we develop a comprehensive framework that addresses the practical challenges of deploying AI-based control systems in real-world aquaculture environments. The proposed architecture integrates AWS IoT Core for sensor connectivity, AWS Greengrass for edge intelligence, and a suite of cloud services for scalable model deployment and management. Edge optimization techniques, including 16-bit quantization and architecture pruning, reduced the DDPG model size by 74% (32 MB to 8.3 MB) while maintaining accuracy within 1.5% of the full-precision version, enabling real-time inference with 47 ± 8 ms latency across all deployment scales. Field validation in a commercial facility with 108 tanks (3,132 m[3] total volume) demonstrated exceptional scalability, with only 8.9% latency increase from small-scale (1,000 L) to large-scale (50,000 L) operations. The system achieved 99.97% IoT message delivery rates and maintained 98.7% reliability in critical parameter control, while comprehensive failsafe mechanisms ensured safe operation during network disruptions lasting up to 72 h. Network resilience testing validated robust performance under various connectivity challenges, maintaining 98.5% performance retention during minor network latency and 85.2% retention during 12-hour complete disconnections. This research establishes a practical blueprint for transitioning DDPG-based aquaculture management from research environments to commercial deployment, addressing critical gaps in scalability, reliability, and operational resilience that have previously limited the adoption of AI-based control systems in the aquaculture industry.},
}
RevDate: 2026-02-20
SLA aware deep reinforcement learning for adaptive EdgeCloud task scheduling.
Scientific reports pii:10.1038/s41598-026-40237-8 [Epub ahead of print].
Additional Links: PMID-41720948
@article {pmid41720948,
year = {2026},
author = {Yamsani, N and P, CR},
title = {SLA aware deep reinforcement learning for adaptive EdgeCloud task scheduling.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-40237-8},
pmid = {41720948},
issn = {2045-2322},
}
RevDate: 2026-02-18
A quantum-driven multi-objective scheduler for scalable task orchestration in fog-based cyber-physical-social systems.
Scientific reports, 16(1):6874.
Fog computing extends cloud capabilities toward the network edge, enabling low-latency Cyber-Physical-Social System (CPSS) services in domains such as smart cities and healthcare. However, multi-objective task scheduling in fog environments remains challenging due to conflicting goals (minimizing execution time, resource costs, and energy consumption) combined with the scalability limitations of classical evolutionary algorithms, which often converge slowly and produce poorly distributed Pareto fronts in large networks. To address these issues, this paper introduces FOG-QIEA, a quantum-inspired evolutionary algorithm designed for tri-objective fog scheduling. FOG-QIEA augments adaptive neighborhood mechanisms with quantum-inspired operators, including superposition-based population initialization, rotation-gate–driven updates, and measurement-guided selection, enabling faster and more diverse exploration of the solution space. The proposed model jointly optimizes total execution time, cost (including SLA violations), and energy efficiency while maintaining scalability across CPSS deployments with thousands of IoT tasks. Extensive simulations in iFogSim using realistic CPSS scenarios show that FOG-QIEA outperforms NSGA-II, MMPA-based approaches, and classical adaptive fog schedulers by 20–35% in convergence speed and 15–25% in energy reduction, while achieving significantly improved Pareto diversity. These results demonstrate the potential of FOG-QIEA as a sustainable and efficient scheduling framework, supporting future advancements toward quantum-hybrid optimization in fog and edge networks.
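The quantum-inspired operators named above can be illustrated with a generic, single-objective toy loop (not FOG-QIEA itself): each bit is held as a probability amplitude, "measurement" samples candidate solutions, and a rotation-gate-style update nudges amplitudes toward the best solution found so far. The OneMax objective stands in for the real tri-objective scheduling problem.

```python
# Generic quantum-inspired evolutionary loop (illustrative, not FOG-QIEA).
import numpy as np

rng = np.random.default_rng(1)
N_BITS, POP, GENS, DELTA = 32, 20, 50, 0.05 * np.pi

theta = np.full((POP, N_BITS), np.pi / 4)           # 50/50 superposition per bit

def measure(th):
    """Sample bitstrings: P(bit = 1) = sin(theta)^2."""
    return (rng.random(th.shape) < np.sin(th) ** 2).astype(int)

best, best_fit = None, -1
for _ in range(GENS):
    pop = measure(theta)
    fits = pop.sum(axis=1)                           # OneMax fitness (toy objective)
    if fits.max() > best_fit:
        best_fit, best = fits.max(), pop[fits.argmax()].copy()
    # Rotation-gate-style step: rotate each amplitude toward the best-so-far bit.
    direction = np.where(best == 1, 1.0, -1.0)
    theta = np.clip(theta + DELTA * direction, 0.01, np.pi / 2 - 0.01)

print("best OneMax fitness:", best_fit, "of", N_BITS)
```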
Additional Links: PMID-41622305
@article {pmid41622305,
year = {2026},
author = {Hammouda, NG and Shalaby, M and Alfilh, RHC and Singh, NSS},
title = {A quantum-driven multi-objective scheduler for scalable task orchestration in fog-based cyber-physical-social systems.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {6874},
pmid = {41622305},
issn = {2045-2322},
abstract = {Fog computing extends cloud capabilities toward the network edge, enabling low-latency Cyber-Physical-Social System (CPSS) services in domains such as smart cities and healthcare. However, multi-objective task scheduling in fog environments remains challenging due to conflicting goals (minimizing execution time, resource costs, and energy consumption) combined with the scalability limitations of classical evolutionary algorithms, which often converge slowly and produce poorly distributed Pareto fronts in large networks. To address these issues, this paper introduces FOG-QIEA, a quantum-inspired evolutionary algorithm designed for tri-objective fog scheduling. FOG-QIEA augments adaptive neighborhood mechanisms with quantum-inspired operators, including superposition-based population initialization, rotation-gate–driven updates, and measurement-guided selection, enabling faster and more diverse exploration of the solution space. The proposed model jointly optimizes total execution time, cost (including SLA violations), and energy efficiency while maintaining scalability across CPSS deployments with thousands of IoT tasks. Extensive simulations in iFogSim using realistic CPSS scenarios show that FOG-QIEA outperforms NSGA-II, MMPA-based approaches, and classical adaptive fog schedulers by 20–35% in convergence speed and 15–25% in energy reduction, while achieving significantly improved Pareto diversity. These results demonstrate the potential of FOG-QIEA as a sustainable and efficient scheduling framework, supporting future advancements toward quantum-hybrid optimization in fog and edge networks.},
}
RevDate: 2026-02-18
HoloQA: Full Reference Video Quality Assessor of Rendered Human Avatars in Virtual Reality.
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, PP: [Epub ahead of print].
We present HoloQA, a new state-of-the-art Full Reference Video Quality Assessment (VQA) model that was designed using principles of visual neuroscience, information theory, and self-supervised deep learning to accurately predict the quality of rendered digital human avatars in Virtual Reality (VR) and Augmented Reality (AR) systems. The growing adoption of VR/AR applications that aim to transmit digital human avatars over bandwidth-limited video networks has driven the need for VQA algorithms that better account for the kinds of distortions that reduce the quality of rendered and viewed avatars. As we will show, standard VQA models often fail to capture distortions unique to the rendering, transmission, and compression of videos containing human avatars. Towards solving this difficult problem, we adopt a multi-level Mixture-of-Experts approach. This involves computing distortion-aware perceptual features and high-level content-aware deep features that capture semantic attributes of human body avatars. The high-level features are computed using a self-supervised, pre-trained deep learning network. We show that HoloQA is able to achieve state-of-the-art performance on the recently introduced LIVE-Meta Rendered Human Avatar VQA database, demonstrating its efficacy in predicting the quality of rendered human avatars in VR. Furthermore, we demonstrate the competitive performance of HoloQA on other digital human avatar databases and on another synthetically generated video quality use case: cloud gaming. The code associated with this work will be made available on GitHub.
Additional Links: PMID-41706772
@article {pmid41706772,
year = {2026},
author = {Saha, A and Chen, YC and Hane, C and Bazin, JC and Katsavounidis, I and Chapiro, A and Bovik, AC},
title = {HoloQA: Full Reference Video Quality Assessor of Rendered Human Avatars in Virtual Reality.},
journal = {IEEE transactions on image processing : a publication of the IEEE Signal Processing Society},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TIP.2026.3663930},
pmid = {41706772},
issn = {1941-0042},
abstract = {We present HoloQA, a new state-of-the-art Full Reference Video Quality Assessment (VQA) model that was designed using principles of visual neuroscience, information theory, and self-supervised deep learning to accurately predict the quality of rendered digital human avatars in Virtual Reality (VR) and Augmented Reality (AR) systems. The growing adoption of VR/AR applications that aim to transmit digital human avatars over bandwidth-limited video networks has driven the need for VQA algorithms that better account for the kinds of distortions that reduce the quality of rendered and viewed avatars. As we will show, standard VQA models often fail to capture distortions unique to the rendering, transmission, and compression of videos containing human avatars. Towards solving this difficult problem, we adopt a multi-level Mixture-of-Experts approach. This involves computing distortion-aware perceptual features and high-level content-aware deep features that capture semantic attributes of human body avatars. The high-level features are computed using a self-supervised, pre-trained deep learning network. We show that HoloQA is able to achieve state-of-the-art performance on the recently introduced LIVE-Meta Rendered Human Avatar VQA database, demonstrating its efficacy in predicting the quality of rendered human avatars in VR. Furthermore, we demonstrate the competitive performance of HoloQA on other digital human avatar databases and on another synthetically generated video quality use case: cloud gaming. The code associated with this work will be made available on GitHub.},
}
RevDate: 2026-02-18
CmpDate: 2026-02-18
Dataset on resource allocation and usage for a private cloud.
Data in brief, 65:112514.
While public cloud providers dominate the commercial landscape, private clouds are widely adopted by academic and research institutions to meet specific governance and operational requirements. There are multiple available datasets about resource usage of public clouds; however, datasets capturing usage patterns in private clouds remain scarce, which limits research in this area. This work presents a dataset comprising over 64 million records collected from a private OpenStack-based cloud operated by the Distributed Systems Laboratory at the Federal University of Campina Grande, Brazil. Data was continuously gathered over nearly twelve months (May 23, 2024 to May 16, 2025), periodically querying OpenStack APIs and monitoring services every five minutes. The dataset captures different aspects of the infrastructure, allocation quotas, user-to-project associations (as OpenStack groups users into projects), server (virtual machines) specifications, and resource utilization for users and projects. Entries are timestamped, enabling temporal analyses of system dynamics. Sensitive attributes, such as user names, project names, IP addresses, and server names were protected, leaving only system-generated UUIDs. By offering a detailed, time-stamped, view of a private cloud, this dataset provides a valuable resource for cloud computing research, helping to bridge the gap in publicly available datasets from non-commercial cloud environments. The dataset is valuable not only for academic institutions but also for companies considering cloud repatriation.
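A periodic collection loop of the kind described might look like the sketch below; the openstacksdk cloud name and the fields retained per server are assumptions, not the dataset's actual schema.

```python
# Sketch of a five-minute polling collector (assumptions: a clouds.yaml entry
# named "dsl-cloud" exists; the retained fields are illustrative only).
import time
from datetime import datetime, timezone

import openstack  # openstacksdk

conn = openstack.connect(cloud="dsl-cloud")

def snapshot():
    """Query the compute API once and return timestamped rows."""
    ts = datetime.now(timezone.utc).isoformat()
    rows = []
    for server in conn.compute.servers(all_projects=True):
        rows.append({
            "timestamp": ts,
            "server_id": server.id,          # system-generated UUID only
            "project_id": server.project_id,
            "status": server.status,
        })
    return rows

while True:
    for row in snapshot():
        print(row)        # in practice: append to a CSV/Parquet sink
    time.sleep(300)       # five-minute interval, as in the dataset description
```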
Additional Links: PMID-41704506
@article {pmid41704506,
year = {2026},
author = {Marques, P and Mendes, M and Pereira, TE and Farias, G},
title = {Dataset on resource allocation and usage for a private cloud.},
journal = {Data in brief},
volume = {65},
number = {},
pages = {112514},
pmid = {41704506},
issn = {2352-3409},
abstract = {While public cloud providers dominate the commercial landscape, private clouds are widely adopted by academic and research institutions to meet specific governance and operational requirements. There are multiple available datasets about resource usage of public clouds; however, datasets capturing usage patterns in private clouds remain scarce, which limits research in this area. This work presents a dataset comprising over 64 million records collected from a private OpenStack-based cloud operated by the Distributed Systems Laboratory at the Federal University of Campina Grande, Brazil. Data was continuously gathered over nearly twelve months (May 23, 2024 to May 16, 2025), periodically querying OpenStack APIs and monitoring services every five minutes. The dataset captures different aspects of the infrastructure, allocation quotas, user-to-project associations (as OpenStack groups users into projects), server (virtual machines) specifications, and resource utilization for users and projects. Entries are timestamped, enabling temporal analyses of system dynamics. Sensitive attributes, such as user names, project names, IP addresses, and server names were protected, leaving only system-generated UUIDs. By offering a detailed, time-stamped, view of a private cloud, this dataset provides a valuable resource for cloud computing research, helping to bridge the gap in publicly available datasets from non-commercial cloud environments. The dataset is valuable not only for academic institutions but also for companies considering cloud repatriation.},
}
RevDate: 2026-02-17
AsynDBT: asynchronous distributed bilevel tuning for efficient in-context learning with large language models.
Scientific reports pii:10.1038/s41598-026-39582-5 [Epub ahead of print].
With the rapid development of large language models (LLMs), an increasing number of applications leverage cloud-based LLM APIs to reduce usage costs. However, since the parameters and gradients of cloud-based models are inaccessible, users have to adjust prompts manually or with heuristic algorithms to steer LLM outputs, which requires costly optimization procedures. In-context learning (ICL) has recently emerged as a promising paradigm that enables LLMs to adapt to new tasks using examples provided within the input, eliminating the need for parameter updates. Nevertheless, the advancement of ICL is often hindered by the lack of high-quality data, which is often sensitive and difficult to share. Federated learning (FL) offers a potential solution by enabling collaborative training of distributed LLMs while preserving data privacy. However, previous FL approaches that incorporate ICL have struggled with severe straggler problems and challenges associated with heterogeneous, non-identically distributed data. To address these problems, we propose an asynchronous distributed bilevel tuning (AsynDBT) algorithm that optimizes both in-context learning samples and prompt fragments based on feedback from the LLM, thereby enhancing downstream task performance. Benefiting from its distributed architecture, AsynDBT provides privacy protection and adaptability to heterogeneous computing environments. Furthermore, we present a theoretical analysis establishing the convergence guarantees of the proposed algorithm. Extensive experiments conducted on multiple benchmark datasets demonstrate the effectiveness and efficiency of AsynDBT.
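Purely as an illustration of the bilevel idea (selecting in-context examples in an inner loop and prompt fragments in an outer loop from black-box feedback), here is a toy sketch; `query_llm` is a hypothetical stub standing in for a scored call to a cloud LLM API, and nothing here reflects AsynDBT's actual update rules or asynchrony.

```python
# Toy bilevel selection loop (illustrative only, not AsynDBT).
import random

def query_llm(prompt: str) -> float:
    """Hypothetical stub: in reality this would call a cloud LLM API and score
    its output on a small validation set; here it just rewards example-rich prompts."""
    return min(1.0, 0.4 + 0.01 * prompt.count("Example") + 0.05 * random.random())

pool = [f"Example {i}: input -> label" for i in range(10)]
fragments = ["Answer concisely.", "Think step by step.", "Label the input."]

best_prompt, best_score = "", -1.0
for frag in fragments:                      # outer level: choose a prompt fragment
    chosen, prompt = [], frag
    for _ in range(3):                      # inner level: greedily add in-context examples
        cand = max(pool, key=lambda ex: query_llm(prompt + "\n" + ex))
        chosen.append(cand)
        prompt = prompt + "\n" + cand
    score = query_llm(prompt)
    if score > best_score:
        best_prompt, best_score = prompt, score

print(f"best score {best_score:.2f}\n{best_prompt}")
```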
Additional Links: PMID-41702990
@article {pmid41702990,
year = {2026},
author = {Ma, H and Dou, S and Liu, Y and Xing, F and Feng, L and Pi, F},
title = {AsynDBT: asynchronous distributed bilevel tuning for efficient in-context learning with large language models.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-39582-5},
pmid = {41702990},
issn = {2045-2322},
support = {5105250183m//the Tianchi Talents - Young Doctor Program/ ; 2024B03028//Science and Technology Program of Xinjiang Uyghur Autonomous Region/ ; 202512120005//Regional Fund of the National Natural Science Foundation of China/ ; },
abstract = {With the rapid development of large language models (LLMs), an increasing number of applications leverage cloud-based LLM APIs to reduce usage costs. However, since the parameters and gradients of cloud-based models are inaccessible, users have to adjust prompts manually or with heuristic algorithms to steer LLM outputs, which requires costly optimization procedures. In-context learning (ICL) has recently emerged as a promising paradigm that enables LLMs to adapt to new tasks using examples provided within the input, eliminating the need for parameter updates. Nevertheless, the advancement of ICL is often hindered by the lack of high-quality data, which is often sensitive and difficult to share. Federated learning (FL) offers a potential solution by enabling collaborative training of distributed LLMs while preserving data privacy. However, previous FL approaches that incorporate ICL have struggled with severe straggler problems and challenges associated with heterogeneous, non-identically distributed data. To address these problems, we propose an asynchronous distributed bilevel tuning (AsynDBT) algorithm that optimizes both in-context learning samples and prompt fragments based on feedback from the LLM, thereby enhancing downstream task performance. Benefiting from its distributed architecture, AsynDBT provides privacy protection and adaptability to heterogeneous computing environments. Furthermore, we present a theoretical analysis establishing the convergence guarantees of the proposed algorithm. Extensive experiments conducted on multiple benchmark datasets demonstrate the effectiveness and efficiency of AsynDBT.},
}
RevDate: 2026-02-13
TropMol: a cloud-based web tool for virtual screening and early-stage prediction of acetylcholinesterase inhibitors using machine learning.
Organic & biomolecular chemistry [Epub ahead of print].
Alzheimer's disease (AD) is the most common type of dementia, accounting for at least two-thirds of dementia cases in people aged 65 and older. Numerous approaches have been studied for the treatment of this disease, including the cholinergic hypothesis. Acetylcholinesterase (AChE) is the most promising target studied within the cholinergic hypothesis for the treatment of AD. Therefore, it is necessary to develop predictive models for the identification of AChE inhibitors. Thus, general drug design models can assist chemical synthesis groups and biochemical testing laboratories by enabling virtual screening and drug design. In this work, the objective is to build a generic molecular screening prediction model for public, online and free use based on pIC50, using a random forest model (RF). For this, a dataset with approximately 16 000 compounds and 134 classes of descriptors was used, resulting in more than 2 000 000 calculated descriptors. Other algorithms were studied, such as gradient boosting, XGBoost, LightGBM, and RF with descriptors from principal component analysis (PCA), but none demonstrated significantly superior results compared to the RF model. The final model studied obtained an R[2] = 0.76 with a 15% test set and obtained an R[2] = 0.73 with a 30% test set, with rigorous Y-scrambling confirming the absence of chance correlation. External validation performed on an independent test set comprising 10% of the data yielded an R[2] of 0.77 and an RMSE of 0.67, statistically confirming that the model retains high predictive accuracy for novel chemical scaffolds and is free from overfitting. It is suggested that compounds containing oxime groups (RR'C = NOH) and those with high structural branching (higher Balaban index) tend to be less potent AChE inhibitors (negative correlation). In addition, some descriptors indicate that electronic charge distribution, molecular surface area, and hydrophobicity play important roles in correlating with the inhibitory activity (pIC50) of the compounds. The presence of linear alkane chains also seems relevant to activity (positive correlation and greater importance). The data and models are available at the following link: (https://colab.research.google.com/drive/1gMcuXAsrqTIBMNnsCEWG9xfkK7aaZAbn?usp=sharing).
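The modeling pattern (random forest regression on molecular descriptors, a held-out test split, and R^2 evaluation) can be sketched with scikit-learn on synthetic data; the descriptor matrix and target below are toys, not the published dataset or model.

```python
# Illustrative scikit-learn sketch of the descriptor-based pIC50 regression setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1600, 50))                                 # toy descriptor matrix
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1600)     # toy pIC50 target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=42)
model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_tr, y_tr)

print(f"test R^2: {r2_score(y_te, model.predict(X_te)):.2f}")
# Descriptor importances are the kind of output used to argue, for example,
# that structural branching correlates negatively with potency.
print("top importance indices:", np.argsort(model.feature_importances_)[-5:])
```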
Additional Links: PMID-41685429
@article {pmid41685429,
year = {2026},
author = {Doring, TH},
title = {TropMol: a cloud-based web tool for virtual screening and early-stage prediction of acetylcholinesterase inhibitors using machine learning.},
journal = {Organic & biomolecular chemistry},
volume = {},
number = {},
pages = {},
doi = {10.1039/d6ob00094k},
pmid = {41685429},
issn = {1477-0539},
abstract = {Alzheimer's disease (AD) is the most common type of dementia, accounting for at least two-thirds of dementia cases in people aged 65 and older. Numerous approaches have been studied for the treatment of this disease, including the cholinergic hypothesis. Acetylcholinesterase (AChE) is the most promising target studied within the cholinergic hypothesis for the treatment of AD. Therefore, it is necessary to develop predictive models for the identification of AChE inhibitors. Thus, general drug design models can assist chemical synthesis groups and biochemical testing laboratories by enabling virtual screening and drug design. In this work, the objective is to build a generic molecular screening prediction model for public, online and free use based on pIC50, using a random forest model (RF). For this, a dataset with approximately 16 000 compounds and 134 classes of descriptors was used, resulting in more than 2 000 000 calculated descriptors. Other algorithms were studied, such as gradient boosting, XGBoost, LightGBM, and RF with descriptors from principal component analysis (PCA), but none demonstrated significantly superior results compared to the RF model. The final model studied obtained an R[2] = 0.76 with a 15% test set and obtained an R[2] = 0.73 with a 30% test set, with rigorous Y-scrambling confirming the absence of chance correlation. External validation performed on an independent test set comprising 10% of the data yielded an R[2] of 0.77 and an RMSE of 0.67, statistically confirming that the model retains high predictive accuracy for novel chemical scaffolds and is free from overfitting. It is suggested that compounds containing oxime groups (RR'C = NOH) and those with high structural branching (higher Balaban index) tend to be less potent AChE inhibitors (negative correlation). In addition, some descriptors indicate that electronic charge distribution, molecular surface area, and hydrophobicity play important roles in correlating with the inhibitory activity (pIC50) of the compounds. The presence of linear alkane chains also seems relevant to activity (positive correlation and greater importance). The data and models are available at the following link: (https://colab.research.google.com/drive/1gMcuXAsrqTIBMNnsCEWG9xfkK7aaZAbn?usp=sharing).},
}
RevDate: 2026-02-13
CmpDate: 2026-02-13
Large language models for structured cardiovascular data extraction: a foundation for scalable research and clinical applications.
European heart journal. Digital health, 7(2):ztaf127.
AIMS: Automated extraction of information from cardiac reports would benefit both clinical reporting and research. Large language models (LLMs) hold promise for such automation, but their clinical performance and practical implementation across various computational environments remain unclear. This study aims to evaluate the feasibility and performance of LLM-based classification of echocardiogram and invasive coronary angiography reports, using real-world clinical data across local, high-performance computing and cloud-based platforms.
METHODS AND RESULTS: The angiography and echocardiography reports of 1000 patients, admitted with acute coronary syndrome, were labelled for multiple key diagnostic elements, including left ventricular function (LVF), culprit vessel, and acute occlusions. Report classification models were developed using LLMs via (i) prompt-based and (ii) fine-tuning approaches. Performance was assessed across different model types and compute infrastructures, with attention to class imbalance, ambiguous label annotations, and implementation costs. Large language models demonstrated strong performance in extracting structured diagnostic information from cardiac reports. Cloud-based models (such as GPT-4o) achieved the highest accuracy (0.87 for culprit vessel and 1.0 for LVF) and generalizability, but also smaller models run on a local high-performance cluster achieved reasonable accuracy, especially for less complex tasks (0.634 for culprit vessel and 0.984 for LVF). Classification was feasible with minimal pre-processing, enabling potential integration into electronic health record systems or research pipelines. Class imbalance, reflective of real-world prevalence, had a greater impact on fine-tuning approaches.
CONCLUSION: Large language models can reliably classify structured cardiology reports across diverse compute infrastructures. Their accuracy and adaptability support their use in clinical and research settings, particularly for scalable report structuring and dataset generation.
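A minimal prompt-based sketch of this kind of report classification is shown below; the report text is synthetic, the model identifier and output schema are assumptions, and any real use would need to respect patient-data governance.

```python
# Illustrative prompt-based extraction sketch (not the study's pipeline).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

report = "Echo: moderately reduced LV systolic function, EF ~40%. No pericardial effusion."
prompt = (
    "Extract structured fields from this cardiology report.\n"
    "Return JSON with keys: lvf (normal/mildly/moderately/severely reduced), "
    "culprit_vessel (LAD/LCx/RCA/none/unknown).\n\nReport:\n" + report
)

resp = client.chat.completions.create(
    model="gpt-4o",                        # assumed model identifier
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(resp.choices[0].message.content)     # e.g. {"lvf": "moderately reduced", ...}
```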
Additional Links: PMID-41684376
@article {pmid41684376,
year = {2026},
author = {van der Loo, W and van der Valk, V and van den Broek, T and Atsma, D and Staring, M and Scherptong, R},
title = {Large language models for structured cardiovascular data extraction: a foundation for scalable research and clinical applications.},
journal = {European heart journal. Digital health},
volume = {7},
number = {2},
pages = {ztaf127},
pmid = {41684376},
issn = {2634-3916},
abstract = {AIMS: Automated extraction of information from cardiac reports would benefit both clinical reporting and research. Large language models (LLMs) hold promise for such automation, but their clinical performance and practical implementation across various computational environments remain unclear. This study aims to evaluate the feasibility and performance of LLM-based classification of echocardiogram and invasive coronary angiography reports, using real-world clinical data across local, high-performance computing and cloud-based platforms.
METHODS AND RESULTS: The angiography and echocardiography reports of 1000 patients, admitted with acute coronary syndrome, were labelled for multiple key diagnostic elements, including left ventricular function (LVF), culprit vessel, and acute occlusions. Report classification models were developed using LLMs via (i) prompt-based and (ii) fine-tuning approaches. Performance was assessed across different model types and compute infrastructures, with attention to class imbalance, ambiguous label annotations, and implementation costs. Large language models demonstrated strong performance in extracting structured diagnostic information from cardiac reports. Cloud-based models (such as GPT-4o) achieved the highest accuracy (0.87 for culprit vessel and 1.0 for LVF) and generalizability, but also smaller models run on a local high-performance cluster achieved reasonable accuracy, especially for less complex tasks (0.634 for culprit vessel and 0.984 for LVF). Classification was feasible with minimal pre-processing, enabling potential integration into electronic health record systems or research pipelines. Class imbalance, reflective of real-world prevalence, had a greater impact on fine-tuning approaches.
CONCLUSION: Large language models can reliably classify structured cardiology reports across diverse compute infrastructures. Their accuracy and adaptability support their use in clinical and research settings, particularly for scalable report structuring and dataset generation.},
}
RevDate: 2026-02-13
Privacy Protection Optimization Method for Cloud Platforms Based on Federated Learning and Homomorphic Encryption.
Sensors (Basel, Switzerland), 26(3): pii:s26030890.
With the wide application of cloud computing in multi-tenant, heterogeneous, and high-concurrency environments, model parameters frequently interact during distributed training, which easily leads to privacy leakage, communication redundancy, and decreased aggregation efficiency. To jointly optimize privacy protection and computing performance, this study proposes the Heterogeneous Federated Homomorphic Encryption Cloud (HFHE-Cloud) model, which integrates federated learning (FL) and homomorphic encryption to construct a secure and efficient collaborative learning framework for cloud platforms. Without exposing the original data, the model reduces the performance bottleneck caused by encryption computation and communication delay through hierarchical key mapping and a dynamic scheduling mechanism for heterogeneous nodes. The experimental results show that HFHE-Cloud significantly outperforms five baseline models in overall performance: Federated Averaging (FedAvg), Federated Proximal (FedProx), Federated Personalization (FedPer), Federated Normalized Averaging (FedNova), and Homomorphically Encrypted Federated Averaging (HE-FedAvg). In terms of privacy protection, the global accuracy reaches 94.25% and the loss remains below 0.09. In terms of computing performance, encryption and decryption time is shortened by about one third, and the encryption overhead is held at 13%. In terms of distributed training efficiency, the number of communication rounds is reduced by about one fifth, and the node participation rate remains above 90%. These results verify the model's ability to achieve high security and high scalability in multi-tenant environments. The study offers cloud service providers and enterprise data holders a deployable solution that combines strong privacy protection with efficient collaborative training on real cloud platforms.
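The additive homomorphic aggregation at the heart of such schemes can be illustrated with python-paillier: clients encrypt their updates, the aggregator sums ciphertexts without seeing any plaintext, and only the key holder decrypts the aggregate. This is a generic illustration, not the HFHE-Cloud protocol itself.

```python
# Generic homomorphic aggregation sketch; requires the `phe` package (python-paillier).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Three clients each encrypt a (toy, scalar) model update with the shared public key.
client_updates = [0.12, -0.07, 0.31]
encrypted = [public_key.encrypt(u) for u in client_updates]

# The aggregator sums ciphertexts without decrypting any single contribution.
encrypted_sum = encrypted[0]
for c in encrypted[1:]:
    encrypted_sum = encrypted_sum + c

# Only the key holder recovers the aggregate (divided here to get the average).
average = private_key.decrypt(encrypted_sum) / len(client_updates)
print(f"federated average of updates: {average:.3f}")
```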
Additional Links: PMID-41682403
@article {pmid41682403,
year = {2026},
author = {Wang, J and Wang, Y},
title = {Privacy Protection Optimization Method for Cloud Platforms Based on Federated Learning and Homomorphic Encryption.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030890},
pmid = {41682403},
issn = {1424-8220},
abstract = {With the wide application of cloud computing in multi-tenant, heterogeneous, and high-concurrency environments, model parameters frequently interact during distributed training, which easily leads to privacy leakage, communication redundancy, and decreased aggregation efficiency. To jointly optimize privacy protection and computing performance, this study proposes the Heterogeneous Federated Homomorphic Encryption Cloud (HFHE-Cloud) model, which integrates federated learning (FL) and homomorphic encryption to construct a secure and efficient collaborative learning framework for cloud platforms. Without exposing the original data, the model reduces the performance bottleneck caused by encryption computation and communication delay through hierarchical key mapping and a dynamic scheduling mechanism for heterogeneous nodes. The experimental results show that HFHE-Cloud significantly outperforms five baseline models in overall performance: Federated Averaging (FedAvg), Federated Proximal (FedProx), Federated Personalization (FedPer), Federated Normalized Averaging (FedNova), and Homomorphically Encrypted Federated Averaging (HE-FedAvg). In terms of privacy protection, the global accuracy reaches 94.25% and the loss remains below 0.09. In terms of computing performance, encryption and decryption time is shortened by about one third, and the encryption overhead is held at 13%. In terms of distributed training efficiency, the number of communication rounds is reduced by about one fifth, and the node participation rate remains above 90%. These results verify the model's ability to achieve high security and high scalability in multi-tenant environments. The study offers cloud service providers and enterprise data holders a deployable solution that combines strong privacy protection with efficient collaborative training on real cloud platforms.},
}
RevDate: 2026-02-13
Precision Farming with Smart Sensors: Current State, Challenges and Future Outlook.
Sensors (Basel, Switzerland), 26(3): pii:s26030882.
The agricultural sector, a vital industry for human survival and a primary source of food and raw materials, faces increasing pressure due to global population growth and environmental strains. Productivity, efficiency, and sustainability constraints are preventing traditional farming methods from adequately meeting the growing demand for food. Precision farming has emerged as a transformative paradigm to address these issues. It integrates advanced technologies to improve decision making, optimize yield, and conserve resources. This approach leverages technologies such as wireless sensor networks, the Internet of Things (IoT), robotics, drones, artificial intelligence (AI), and cloud computing to provide effective and cost-efficient agricultural services. Smart sensor technologies are foundational to precision farming. They offer crucial information regarding soil conditions, plant growth, and environmental factors in real time. This review explores the status, challenges, and prospects of smart sensor technologies in precision farming. The integration of smart sensors with the IoT and AI has significantly transformed how agricultural data is collected, analyzed, and utilized to optimize yield, conserve resources, and enhance overall farm efficiency. The review delves into various types of smart sensors used, their applications, and emerging technologies that promise to further innovate data acquisition and decision making in agriculture. Despite progress, challenges persist. They include sensor calibration, data privacy, interoperability, and adoption barriers. To fully realize the potential of smart sensors in ensuring global food security and promoting sustainable farming, the challenges need to be addressed.
Additional Links: PMID-41682397
@article {pmid41682397,
year = {2026},
author = {Manono, BO and Mwami, B and Mutavi, S and Nzilu, F},
title = {Precision Farming with Smart Sensors: Current State, Challenges and Future Outlook.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030882},
pmid = {41682397},
issn = {1424-8220},
abstract = {The agricultural sector, a vital industry for human survival and a primary source of food and raw materials, faces increasing pressure due to global population growth and environmental strains. Productivity, efficiency, and sustainability constraints are preventing traditional farming methods from adequately meeting the growing demand for food. Precision farming has emerged as a transformative paradigm to address these issues. It integrates advanced technologies to improve decision making, optimize yield, and conserve resources. This approach leverages technologies such as wireless sensor networks, the Internet of Things (IoT), robotics, drones, artificial intelligence (AI), and cloud computing to provide effective and cost-efficient agricultural services. Smart sensor technologies are foundational to precision farming. They offer crucial information regarding soil conditions, plant growth, and environmental factors in real time. This review explores the status, challenges, and prospects of smart sensor technologies in precision farming. The integration of smart sensors with the IoT and AI has significantly transformed how agricultural data is collected, analyzed, and utilized to optimize yield, conserve resources, and enhance overall farm efficiency. The review delves into various types of smart sensors used, their applications, and emerging technologies that promise to further innovate data acquisition and decision making in agriculture. Despite progress, challenges persist. They include sensor calibration, data privacy, interoperability, and adoption barriers. To fully realize the potential of smart sensors in ensuring global food security and promoting sustainable farming, the challenges need to be addressed.},
}
RevDate: 2026-02-13
EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification.
Sensors (Basel, Switzerland), 26(3): pii:s26030824.
Desertification is the impoverishment of fertile land, caused by various factors and environmental effects, such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, classical AI algorithms often become trapped in local optima and consume more energy. This research proposed an improved multi-objective routing protocol, namely, the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and a software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The methodology of the proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies, such as the Cloud, SDN, EC, and quantum technique-based ARO. This protocol increases the data analysis speed because of the suggested iterated quantum gates with the ARO, which can rapidly escape local optima and converge toward the global optimum. The protocol avoids desertification because of a new effective objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results established that the suggested EQARO-ECS procedure increases accuracy and improves network lifetime by reducing energy depletion compared to other algorithms.
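The abstract above hinges on an objective function that weighs energy consumption, communication cost, and desertification parameters when selecting cluster heads. The following minimal sketch shows what such a weighted multi-objective fitness could look like; the weights, normalization, and criteria are illustrative assumptions, not the published EQARO-ECS objective.

```python
# Minimal sketch of a weighted multi-objective fitness over candidate cluster
# heads: energy, communication cost, and desertification risk (all illustrative).
import numpy as np

def fitness(residual_energy, link_cost, desertification_risk,
            w_energy=0.4, w_cost=0.3, w_risk=0.3):
    """Score candidate cluster heads; lower is better."""
    def norm(x):
        # Rescale each criterion to [0, 1] so the weights are comparable.
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    energy_term = 1.0 - norm(residual_energy)    # penalize low residual energy
    cost_term = norm(link_cost)                  # penalize expensive links
    risk_term = 1.0 - norm(desertification_risk) # favor nodes near high-risk zones

    return w_energy * energy_term + w_cost * cost_term + w_risk * risk_term

scores = fitness(residual_energy=[0.9, 0.4, 0.7],
                 link_cost=[2.0, 1.0, 3.0],
                 desertification_risk=[0.2, 0.8, 0.5])
print("best cluster-head candidate:", int(np.argmin(scores)))
```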
Additional Links: PMID-41682340
@article {pmid41682340,
year = {2026},
author = {Al-Janabi, TA and Al-Raweshidy, HS and Zouri, M},
title = {EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030824},
pmid = {41682340},
issn = {1424-8220},
abstract = {Desertification is the impoverishment of fertile land, caused by various factors and environmental effects, such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, the classical AI algorithms usually suffer from falling into local optimum issues and consuming more energy. This research proposed an improved multi-objective routing protocol, namely, the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and a software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The methodology of the proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies, such as the Cloud, SDN, EC, and quantum technique-based ARO. This protocol increases the data analysis speed because of the suggested iterated quantum gates with the ARO, which can rapidly penetrate from the local to the global optimum. The protocol avoids desertification because of a new effective objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results established that the suggested EQARO-ECS procedure increases accuracy and improves network lifetime by reducing energy depletion compared to other algorithms.},
}
RevDate: 2026-02-13
A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges.
Sensors (Basel, Switzerland), 26(3): pii:s26030799.
The goal of the study presented in this work is to analyze recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. At the same time, the rapid penetration of IoT technology in modern-era networks, along with associated applications, poses new challenges for efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth-generation (6G) networks are already taking place, and these networks are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches to satisfying the aforementioned challenges and requirements are presented and analyzed. In addition, a proposed architectural approach is presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios that can be leveraged by meta-OSs in the computing continuum are presented as well. Finally, open issues and related challenges are also discussed.
Additional Links: PMID-41682316
@article {pmid41682316,
year = {2026},
author = {Gkonis, PK and Giannopoulos, A and Nomikos, N and Sarakis, L and Nikolakakis, V and Patsourakis, G and Trakadas, P},
title = {A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030799},
pmid = {41682316},
issn = {1424-8220},
abstract = {The goal of the study presented in this work is to analyze all recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. To this end, the rapid penetration of IoT technology in modern-era networks, along with associated applications, poses new challenges towards efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth generation (6G) networks are already taking place, which they are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches to satisfy the aforementioned challenges and requirements are presented and analyzed. To this end, a proposed architectural approach is also presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios that are leveraged from a meta-OSs in the computing continuum are presented as well. Finally, open issues and related challenges are also discussed.},
}
RevDate: 2026-02-12
FMLCA: explainable and privacy-preserving federated machine learning classification algorithms for predicting heart disease in patients.
European journal of medical research pii:10.1186/s40001-026-04023-6 [Epub ahead of print].
BACKGROUND: Heart disease is a global health concern that significantly contributes to worldwide mortality. Machine Learning (ML) models have emerged as a powerful tool for predicting Coronary Artery Disease (CAD), a type of heart disease, by utilizing clinical features for classification. Federated Learning (FL) offers a solution for collaborative training without sharing raw data, thus addressing privacy concerns.
METHODS: This study presents an innovative approach, Federated Machine Learning Classification Algorithms (FMLCA), which utilizes cloud computing, privacy preservation techniques, and ML classification algorithms, including Decision Tree (DT), Adaptive Boosting (AdaBoost), K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), to predict CAD. In addition, privacy preservation is addressed through the k-anonymity technique, and the SHapley Additive exPlanations (SHAP) technique is used to identify the features most important to the model's decision-making process.
RESULTS: The proposed RF model, compared to other models, obtained better performance. This RF model achieved an accuracy of 83.21% with privacy preservation and 84.49% without it. Furthermore, the SHAP technique enhances transparency by attributing feature influences in predictions.
CONCLUSION: Implementing these models on a cloud platform results in efficient computational performance. This proposed approach represents a significant advancement in predictive healthcare tools, capable of accurately predicting CAD across distributed environments. By placing a strong emphasis on privacy and security, this approach underscores its importance and paves the way for a transformative healthcare ecosystem that centers on the needs of patients and healthcare providers.
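The METHODS above mention k-anonymity as the privacy-preservation step applied before federated training. A minimal sketch of such a check is shown below: a site verifies that every combination of quasi-identifiers occurs at least k times before its table leaves the institution. Column names, the quasi-identifier set, and the value of k are illustrative assumptions, not the published FMLCA configuration.

```python
# Minimal k-anonymity check over a patient table (illustrative columns).
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list[str], k: int) -> bool:
    """True if every quasi-identifier combination occurs in at least k rows."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

patients = pd.DataFrame({
    "age_band": ["50-59", "50-59", "60-69", "60-69", "60-69"],
    "sex":      ["F",     "F",     "M",     "M",     "M"],
    "cad":      [1,       0,       1,       1,       0],
})
print(is_k_anonymous(patients, ["age_band", "sex"], k=2))  # True for this toy table
```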
Additional Links: PMID-41680929
@article {pmid41680929,
year = {2026},
author = {Sorayaie Azar, A and Gholami, F and Sharifi, L and Asl Asgharian Sardroud, A and Bagherzadeh Mohasefi, J and Wiil, UK},
title = {FMLCA: explainable and privacy-preserving federated machine learning classification algorithms for predicting heart disease in patients.},
journal = {European journal of medical research},
volume = {},
number = {},
pages = {},
doi = {10.1186/s40001-026-04023-6},
pmid = {41680929},
issn = {2047-783X},
abstract = {BACKGROUND: Heart disease is a global health concern that significantly contributes to worldwide mortality. Machine Learning (ML) models have emerged as a powerful tool for predicting Coronary Artery Disease (CAD), a type of heart disease, by utilizing clinical features for classification. Federated Learning (FL) offers a solution for collaborative training without sharing raw data, thus addressing privacy concerns.
METHODS: This study presents an innovative approach, Federated Machine Learning Classification Algorithms (FMLCA), which utilizes cloud computing, privacy preservation techniques, and ML classification algorithms, including Decision Tree (DT), Adaptive Boosting (AdaBoost), K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), to predict CAD. In addition, privacy preserving is considered through the k-anonymity technique, and SHapley Additive exPlanations (SHAP) technique was utilized to identify features important in the model decision-making process.
RESULTS: The proposed RF model, compared to other models, obtained better performance. This RF model achieved an accuracy of 83.21% with privacy preservation and 84.49% without it. Furthermore, the SHAP technique enhances transparency by attributing feature influences in predictions.
CONCLUSION: Implementing these models on a cloud platform results in efficient computational performance. This proposed approach represents a significant advancement in predictive healthcare tools, capable of accurately predicting CAD across distributed environments. By placing a strong emphasis on privacy and security, this approach underscores its importance and paves the way for a transformative healthcare ecosystem that centers on the needs of patients and healthcare providers.},
}
RevDate: 2026-02-11
WeMol: A Cloud-Based and Zero-Code Platform for AI-Driven Molecular Design and Simulation.
Journal of chemical information and modeling [Epub ahead of print].
Artificial intelligence (AI) has demonstrated remarkable potential in reshaping modern drug discovery, yet its widespread adoption is hindered by fragmented tools, high technical barriers, and the lack of user-friendly interfaces. Here, we present WeMol, an AI-driven one-stop molecular computing platform designed to streamline early-stage drug discovery. WeMol integrates a series of modules, covering molecular similarity search, structure-based and AI-enhanced docking, ADMET prediction, molecular generation, and molecular dynamics simulations. The platform features a zero-code, cloud-based interface that enables researchers without programming expertise to construct and execute comprehensive computational workflows. By integrating advanced AI algorithms with practical applications, WeMol lowers the entry barrier for nonexperts and provides a versatile, accessible, and reproducible solution to accelerate early drug design and discovery.
Additional Links: PMID-41668343
@article {pmid41668343,
year = {2026},
author = {Liu, H and Yan, X and Fang, H and Ge, H and Hou, X},
title = {WeMol: A Cloud-Based and Zero-Code Platform for AI-Driven Molecular Design and Simulation.},
journal = {Journal of chemical information and modeling},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jcim.6c00014},
pmid = {41668343},
issn = {1549-960X},
abstract = {Artificial intelligence (AI) has demonstrated remarkable potential in reshaping modern drug discovery, yet its widespread adoption is hindered by fragmented tools, high technical barriers, and the lack of user-friendly interfaces. Here, we present WeMol, an AI-driven one-stop molecular computing platform designed to streamline early-stage drug discovery. WeMol integrates a series of modules, covering molecular similarity search, structure-based and AI-enhanced docking, ADMET prediction, molecular generation, and molecular dynamics simulations. The platform features a zero-code, cloud-based interface that enables researchers without programming expertise to construct and execute comprehensive computational workflows. By integrating advanced AI algorithms with practical applications, WeMol lowers the entry barrier for nonexperts and provides a versatile, accessible, and reproducible solution to accelerate early drug design and discovery.},
}
RevDate: 2026-02-10
O-RAID: a satellite constellation architecture for ultra-resilient global data backup.
Scientific reports pii:10.1038/s41598-026-38784-1 [Epub ahead of print].
Growing global data volumes and the increasing frequency of climate-related and geopolitical threats highlight the need for ultra-resilient backup infrastructures. This paper proposes a novel Satellite-RAID architecture, named O-RAID, in which clusters of satellites operate as a distributed redundant array of independent disks (RAID), enabling large-scale cold and warm backup storage in Earth's orbit. Unlike previous work on space-based computing or satellite cloud relays, this research presents a formal design for orbital storage redundancy, inter-satellite parity exchange, latency-tolerant RAID protocols and power provisioning using a geostationary solar-energy beam. To establish a foundation for quantifying system resilience, we develop a reliability framework based on a Continuous-Time Markov Chain (CTMC) model, defining the states and transition rates for future survivability analysis of an orbital RAID equivalent. The paper provides a comprehensive analysis of the system architecture, its core components and the mathematical underpinnings for erasure coding and communication. An in-depth examination of system feasibility, survivability simulations, key constraints and communication overhead is presented, concluding that orbital backup storage presents a viable and promising paradigm for national archives, disaster-resilient storage and long-term scientific data preservation with technical readiness projected by 2035.
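The entry above builds its reliability argument on a Continuous-Time Markov Chain (CTMC) model of orbital RAID survivability. The sketch below shows the generic calculation in that spirit: a small generator matrix for a 2-of-3 redundant store with per-node failure rate and a re-replication (repair) rate, integrated over a mission horizon. All states and rates here are illustrative assumptions, not the paper's model.

```python
# Minimal CTMC survivability sketch for a 2-of-3 redundant store.
# States: 0 = all healthy, 1 = one node lost (recoverable), 2 = data lost (absorbing).
import numpy as np
from scipy.linalg import expm

lam = 1.0 / (5 * 365 * 24)   # illustrative: one failure per ~5 years, in 1/hours
mu = 1.0 / 72                # illustrative: re-replication completes in ~72 hours

# Generator matrix Q (rows sum to zero); Q[i, j] is the transition rate i -> j.
Q = np.array([
    [-3 * lam,  3 * lam,        0.0    ],
    [ mu,      -(mu + 2 * lam), 2 * lam],
    [ 0.0,      0.0,            0.0    ],
])

t = 10 * 365 * 24            # mission horizon: 10 years, in hours
P = expm(Q * t)              # transition-probability matrix over the horizon
print(f"P(data lost within 10 years) ≈ {P[0, 2]:.2e}")
```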
Additional Links: PMID-41667810
@article {pmid41667810,
year = {2026},
author = {Meegama, RGN},
title = {O-RAID: a satellite constellation architecture for ultra-resilient global data backup.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-38784-1},
pmid = {41667810},
issn = {2045-2322},
abstract = {Growing global data volumes and the increasing frequency of climate-related and geopolitical threats highlight the need for ultra-resilient backup infrastructures. This paper proposes a novel Satellite-RAID architecture, named O-RAID, in which clusters of satellites operate as a distributed redundant array of independent disks (RAID), enabling large-scale cold and warm backup storage in Earth's orbit. Unlike previous work on space-based computing or satellite cloud relays, this research presents a formal design for orbital storage redundancy, inter-satellite parity exchange, latency-tolerant RAID protocols and power provisioning using a geostationary solar-energy beam. To establish a foundation for quantifying system resilience, we develop a reliability framework based on a Continuous-Time Markov Chain (CTMC) model, defining the states and transition rates for future survivability analysis of an orbital RAID equivalent. The paper provides a comprehensive analysis of the system architecture, its core components and the mathematical underpinnings for erasure coding and communication. An in-depth examination of system feasibility, survivability simulations, key constraints and communication overhead is presented, concluding that orbital backup storage presents a viable and promising paradigm for national archives, disaster-resilient storage and long-term scientific data preservation with technical readiness projected by 2035.},
}
RevDate: 2026-02-09
CmpDate: 2026-02-09
Big data in healthcare and medicine revisited design and managerial challenges in the age of artificial intelligence.
Health information science and systems, 14(1):38.
A decade ago, we characterized big data in healthcare as a nascent field anchored in distributed computing paradigms. The intervening years have witnessed a transformation so profound that revisiting our original framework is essential. This paper critically examines the evolution of big data in healthcare and medicine, assessing the shift from Hadoop-centric architectures to cloud computing platforms and GPU-accelerated artificial intelligence, including large language models and the emerging paradigm of agentic AI. The landscape has been reshaped by landmark biobank initiatives, breakthrough applications such as AlphaFold's Nobel Prize-winning solution to protein structure prediction, and the rapid growth of FDA-cleared AI medical devices from fewer than ten in 2015 to over 1200 by mid-2025. AI has enabled advances across precision oncology, drug discovery, and public health surveillance. Yet new challenges have emerged: algorithmic bias perpetuating health disparities, opacity undermining clinical trust, environmental sustainability concerns, and unresolved questions of privacy, security, data ownership, and interoperability. We propose extending the original "4Vs" framework to accommodate veracity through explainability, validity through fairness, and viability through sustainability. The paper concludes with prescriptive implications for healthcare organizations, technology developers, policymakers, and researchers.
Additional Links: PMID-41659839
@article {pmid41659839,
year = {2026},
author = {Raghupathi, W and Raghupathi, V},
title = {Big data in healthcare and medicine revisited design and managerial challenges in the age of artificial intelligence.},
journal = {Health information science and systems},
volume = {14},
number = {1},
pages = {38},
pmid = {41659839},
issn = {2047-2501},
abstract = {A decade ago, we characterized big data in healthcare as a nascent field anchored in distributed computing paradigms. The intervening years have witnessed a transformation so profound that revisiting our original framework is essential. This paper critically examines the evolution of big data in healthcare and medicine, assessing the shift from Hadoop-centric architectures to cloud computing platforms and GPU-accelerated artificial intelligence, including large language models and the emerging paradigm of agentic AI. The landscape has been reshaped by landmark biobank initiatives, breakthrough applications such as AlphaFold's Nobel Prize-winning solution to protein structure prediction, and the rapid growth of FDA-cleared AI medical devices from fewer than ten in 2015 to over 1200 by mid-2025. AI has enabled advances across precision oncology, drug discovery, and public health surveillance. Yet new challenges have emerged: algorithmic bias perpetuating health disparities, opacity undermining clinical trust, environmental sustainability concerns, and unresolved questions of privacy, security, data ownership, and interoperability. We propose extending the original "4Vs" framework to accommodate veracity through explainability, validity through fairness, and viability through sustainability. The paper concludes with prescriptive implications for healthcare organizations, technology developers, policymakers, and researchers.},
}
RevDate: 2026-02-09
CmpDate: 2026-02-09
Emerging trends and bibliometric analysis of internet of medical things for innovative healthcare (2016-2023).
Digital health, 12:20552076251395701.
BACKGROUND: The internet of medical things (IoMT) is revolutionizing digital health through continuous monitoring, real-time diagnostics, and remote care capabilities. Nonetheless, research in this domain remains disjointed, with a restricted comprehension of its growth trajectories, principal contributors, and thematic emphasis. A comprehensive evaluation is thus required to inform forthcoming research, policy, and advancements in resilient healthcare technologies.
METHODS: This study performed a bibliometric and literature-based analysis of IoMT research indexed in the Scopus database from 2016 to 2023. The dataset was optimized by keyword screening, resulting in 762 pertinent papers. Bibliometric indices, including publication and citation trends, authorship and institutional output, and funding patterns, were analyzed. Thematic evolution was examined by keyword co-occurrence and cluster mapping utilizing VOSviewer, complemented by a synthesis of the literature.
RESULTS: A total of 762 publications on IOMT were identified, comprising 63.12% journal articles, 30.97% conference papers, and 5.91% review papers. The total publications rose from 1 in 2016 to 301 in 2023, indicating a 30,000% increase. Total citations reached 19,014, with an h-index of 171. The most prolific contributors were Mohsen M. Guizani, King Saud University, and India. Collaborations and funding, particularly from international agencies, were found to significantly drive research productivity. Keyword and cluster analyses revealed two dominant thematic areas: Smart Medical Diagnostics and Privacy-Driven Health Technologies. The literature further confirmed strong integration of machine learning, blockchain, sensor technologies, and cloud computing in IOMT applications.
CONCLUSION: This analysis consolidates fragmented IoMT research, providing a structured overview of its development, contributors, and thematic trajectories. The findings highlight the rapid growth, global collaborations, and integration of advanced technologies driving the field. By mapping benchmarks and research hotspots, the study offers valuable evidence to guide future investigations, interdisciplinary collaborations, and policy efforts aimed at strengthening secure and patient-centered digital health systems.
Additional Links: PMID-41659061
@article {pmid41659061,
year = {2026},
author = {Xin, H and Ajibade, SM and Alhassan, GN and Yilmaz, Y},
title = {Emerging trends and bibliometric analysis of internet of medical things for innovative healthcare (2016-2023).},
journal = {Digital health},
volume = {12},
number = {},
pages = {20552076251395701},
pmid = {41659061},
issn = {2055-2076},
abstract = {BACKGROUND: The internet of medical things (IoMT) is revolutionizing digital health through continuous monitoring, real-time diagnostics, and remote care capabilities. Nonetheless, research in this domain remains disjointed, with a restricted comprehension of its growth trajectories, principal contributors, and thematic emphasis. A comprehensive evaluation is thus required to inform forthcoming research, policy, and advancements in resilient healthcare technologies.
METHODS: This study performed a bibliometric and literature-based analysis of IoMT research indexed in the Scopus database from 2016 to 2023. The dataset was optimized by keyword screening, resulting in 762 pertinent papers. Bibliometric indices, including as publication and citation trends, authorship and institutional output, and funding patterns, were analyzed. Thematic evolution was examined by keyword co-occurrence and cluster mapping utilizing VOSviewer, complemented by a synthesis of literature.
RESULTS: A total of 762 publications on IOMT were identified, comprising 63.12% journal articles, 30.97% conference papers, and 5.91% review papers. The total publications rose from 1 in 2016 to 301 in 2023, indicating a 30,000% increase. Total citations reached 19,014, with an h-index of 171. The most prolific contributors were Mohsen M. Guizani, King Saud University, and India. Collaborations and funding, particularly from international agencies, were found to significantly drive research productivity. Keyword and cluster analyses revealed two dominant thematic areas: Smart Medical Diagnostics and Privacy-Driven Health Technologies. The literature further confirmed strong integration of machine learning, blockchain, sensor technologies, and cloud computing in IOMT applications.
CONCLUSION: This analysis consolidates fragmented IoMT research, providing a structured overview of its development, contributors, and thematic trajectories. The findings highlight the rapid growth, global collaborations, and integration of advanced technologies driving the field. By mapping benchmarks and research hotspots, the study offers valuable evidence to guide future investigations, interdisciplinary collaborations, and policy efforts aimed at strengthening secure and patient-centered digital health systems.},
}
RevDate: 2026-02-09
CmpDate: 2026-02-09
IoMT-Fog-Cloud-based AI frameworks for chronic disease diagnosis: updated comparative analysis with recent AI-IoMT models (2020-2025).
Frontiers in medical technology, 8:1748964.
Chronic diseases such as diabetes and cardiovascular disease require frequent monitoring and timely clinical feedback to prevent complications. Internet of Medical Things (IoMT) systems increasingly combine near-patient sensing with Fog and Cloud computing so that time-critical preprocessing and inference can run close to the patient while compute-intensive training and population-level analytics remain in the Cloud. This review synthesizes primary studies published between 2020 and 2025 that implement AI-enabled IoMT, with an emphasis on systems that report both diagnostic performance and network quality-of-service (QoS). Following PRISMA 2020, we screened database records and included 14 primary studies; we focus the joint performance-QoS synthesis on six IoMT-Fog-Cloud frameworks for diabetes and cardiovascular disease and compare them with two recent multi-disease AI-IoMT models (DACL and TasLA). Diabetes-oriented implementations commonly report accuracy around 95%-96% using explainable or ensemble deep learning, whereas some cardiovascular frameworks report >99% accuracy in controlled settings; we therefore discuss plausible sources of optimistic performance, including small datasets, class imbalance, curated benchmarks, and potential leakage/overfitting in simulation-based evaluations. Across IoMT-Fog-Cloud studies, placing preprocessing and/or inference at the Fog layer repeatedly reduces end-to-end latency for streaming biosignals, but multi-Fog provisioning can increase energy and power demands. To support more reproducible comparisons, we organize 14 extracted metrics into (i) diagnostic performance (accuracy, precision, recall, F1-score, sensitivity, specificity) and (ii) system/network QoS (latency, jitter, throughput, bandwidth utilization, processing/execution time, network usage, energy consumption, power consumption), and we translate the evidence into study-linked design recommendations for future deployments.
Additional Links: PMID-41657732
@article {pmid41657732,
year = {2026},
author = {Locharoenrat, K},
title = {IoMT-Fog-Cloud-based AI frameworks for chronic disease diagnosis: updated comparative analysis with recent AI-IoMT models (2020-2025).},
journal = {Frontiers in medical technology},
volume = {8},
number = {},
pages = {1748964},
pmid = {41657732},
issn = {2673-3129},
abstract = {Chronic diseases such as diabetes and cardiovascular disease require frequent monitoring and timely clinical feedback to prevent complications. Internet of Medical Things (IoMT) systems increasingly combine near-patient sensing with Fog and Cloud computing so that time-critical preprocessing and inference can run close to the patient while compute-intensive training and population-level analytics remain in the Cloud. This review synthesizes primary studies published between 2020 and 2025 that implement AI-enabled IoMT, with an emphasis on systems that report both diagnostic performance and network quality-of-service (QoS). Following PRISMA 2020, we screened database records and included 14 primary studies; we focus the joint performance-QoS synthesis on six IoMT-Fog-Cloud frameworks for diabetes and cardiovascular disease and compare them with two recent multi-disease AI-IoMT models (DACL and TasLA). Diabetes-oriented implementations commonly report accuracy around 95%-96% using explainable or ensemble deep learning, whereas some cardiovascular frameworks report >99% accuracy in controlled settings; we therefore discuss plausible sources of optimistic performance, including small datasets, class imbalance, curated benchmarks, and potential leakage/overfitting in simulation-based evaluations. Across IoMT-Fog-Cloud studies, placing preprocessing and/or inference at the Fog layer repeatedly reduces end-to-end latency for streaming biosignals, but multi-Fog provisioning can increase energy and power demands. To support more reproducible comparisons, we organize 14 extracted metrics into (i) diagnostic performance (accuracy, precision, recall, F1-score, sensitivity, specificity) and (ii) system/network QoS (latency, jitter, throughput, bandwidth utilization, processing/execution time, network usage, energy consumption, power consumption), and we translate the evidence into study-linked design recommendations for future deployments.},
}
RevDate: 2026-02-06
Energy and makespan optimised task mapping in fog enabled IoT application: a hybrid approach.
Scientific reports, 16(1):5210.
The Internet of Things (IoT) refers to billions of connected devices that share data through the Internet. However, the increasing volume of data generated by IoT devices makes remote cloud data centers inefficient for delay-sensitive applications. In this regard, fog computing, which brings computation closer to the data source, plays a significant role in addressing the above issue. However, resource constraints in fog computing demand an effective task-scheduling technique to handle the enormous volume of data. Many researchers have proposed a variety of heuristic and meta-heuristic approaches for effective scheduling; however, there is still scope for improvement. In this paper, we propose EMAPSO (energy makespan-aware PSO). The simultaneous minimization of makespan and energy is presented as a bi-objective optimization problem. The approach also considers load balancing when assigning tasks to VMs in a fog/cloud environment. The proposed algorithm, EMAPSO, is compared to standard PSO, Modified PSO (MPSO), Bird Swarm Optimization (BSO), and the Bee Life Algorithm (BLA). The experimental results show that the proposed method outperforms the compared algorithms in terms of resource utilization, makespan, and energy consumption.
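The core of the bi-objective formulation described above is a cost that trades off makespan against energy for a given task-to-VM mapping. The minimal sketch below scores one mapping with a weighted sum of the two normalized objectives; task lengths, VM speeds, power figures, and weights are illustrative assumptions rather than the published EMAPSO parameters.

```python
# Minimal bi-objective cost for a task-to-VM mapping (weighted makespan + energy).
import numpy as np

def cost(mapping, task_len, vm_speed, vm_power, w_makespan=0.5, w_energy=0.5):
    """mapping[i] = index of the VM that runs task i; lower cost is better."""
    busy = np.zeros(len(vm_speed))
    for task, vm in enumerate(mapping):
        busy[vm] += task_len[task] / vm_speed[vm]    # execution time on that VM
    makespan = busy.max()
    energy = float(np.sum(busy * vm_power))          # active energy per VM
    # Normalize by simple worst-case bounds so the objectives are comparable.
    worst_makespan = task_len.sum() / vm_speed.min()
    worst_energy = worst_makespan * vm_power.max()
    return w_makespan * makespan / worst_makespan + w_energy * energy / worst_energy

task_len = np.array([400.0, 250.0, 300.0, 150.0])    # million instructions
vm_speed = np.array([100.0, 150.0])                  # MIPS
vm_power = np.array([20.0, 35.0])                    # watts while busy
print(cost(mapping=[0, 1, 1, 0], task_len=task_len,
           vm_speed=vm_speed, vm_power=vm_power))
```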
Additional Links: PMID-41535366
@article {pmid41535366,
year = {2026},
author = {Tripathy, N and Sahoo, S and Alghamdi, NS and Viriyasitavat, W and Dhiman, G},
title = {Energy and makespan optimised task mapping in fog enabled IoT application: a hybrid approach.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {5210},
pmid = {41535366},
issn = {2045-2322},
abstract = {The Internet of Things (IoT) points to billions of connected devices that share data through the Internet. However, the increasing volume of data generated by IoT devices makes remote cloud data centers inefficient for delay-sensitive applications. In this regard, fog computing, which brings computation closer to the data source, plays a significant role in addressing the above issue. However, resource constraints in fog computing demand an effective task-scheduling technique to handle the enormous volume of data. Many researchers have proposed a variety of heuristic and meta-heuristic approaches for effective scheduling; however, there is still scope for improvement. In this paper, we propose EMAPSO (energy makespan-aware PSO). The simultaneous minimization of makespan and energy is presented as a bi-objective optimization problem. The approach also considered the load-balancing factor while assigning a task to a VM in a fog/cloud environment. The proposed algorithm, EMAPSO, is compared to standard PSO, Modified PSO (MPSO), Bird swarm optimization (BSO), and the Bee Life Algorithm (BLA). The experimental results show that the proposed method outperforms the compared algorithms in terms of resource utilization, makespan, and energy consumption.},
}
RevDate: 2026-02-07
DT-aided resource allocation via generative adversarial imitation learning in complex cloud-edge-end scenarios.
Scientific reports pii:10.1038/s41598-026-38367-0 [Epub ahead of print].
Traditional DRL-based resource allocation for cloud-edge-end computing primarily depends on known state parameters and real-time feedback rewards when making decisions. The traditional model, which heavily relies on prior knowledge and real-time feedback of the scene, faces challenges in delivering effective services in complex scenarios. We propose a DT-aided Expert-driven Generative Adversarial Imitation Learning (E-GAIL) model that leverages imitation learning capability to jointly allocate multiple constrained resources. Firstly, we introduce a single-expert trajectory generation algorithm based on Actor-Critic and Noisynet by using the rich historical data provided in DT Networks. This idea can enhance the fidelity of the imitated expert trajectory by utilizing the critic to update the network iteratively. Secondly, we fuse different single-expert trajectories into a multi-expert trajectory to expand the coverage area. We also employ the Nash equilibrium to identify the optimal equilibrium solution and reduce the conflicts among different experts. Finally, the parameters of the generator and discriminator in E-GAIL are updated according to the respective gradients to fit the multi-expert trajectory during the training process. Once the task is uploaded, the E-GAIL Agent in the edge server can rapidly obtain the resource allocation policy even without prior knowledge or real-time reward feedback. The experiment results indicate that E-GAIL can obtain the best-fit expert trajectory in large-scale noisy environments.
Additional Links: PMID-41654653
@article {pmid41654653,
year = {2026},
author = {Zhang, X and Xin, M and Li, Y and Fu, Q},
title = {DT-aided resource allocation via generative adversarial imitation learning in complex cloud-edge-end scenarios.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-38367-0},
pmid = {41654653},
issn = {2045-2322},
support = {JSQB2023206S005//National Defense Basic Scientiffc Research Project/ ; No.:220XQD061//University of South China Doctoral Research Start-up Fund Project/ ; },
abstract = {Traditional DRL-based resource allocation for cloud-edge-end computing primarily depends on known state parameters and real-time feedback rewards when making decisions. The traditional model, which heavily relies on prior knowledge and real-time feedback of the scene, faces challenges in delivering effective services in complex scenarios. We propose a DT-aided Expert-driven Generative Adversarial Imitation Learning (E-GAIL) model that leverages imitation learning capability to jointly allocate multiple constrained resources. Firstly, we introduce a single-expert trajectory generation algorithm based on Actor-Critic and Noisynet by using the rich historical data provided in DT Networks. This idea can enhance the fidelity of the imitated expert trajectory by utilizing the critic to update the network iteratively. Secondly, we fuse different single-expert trajectories into a multi-expert trajectory to expand the coverage area. We also employ the Nash equilibrium to identify the optimal equilibrium solution and reduce the conflicts among different experts. Finally, the parameters of the generator and discriminator in E-GAIL are updated according to the respective gradients to fit the multi-expert trajectory during the training process. Once the task is uploaded, the E-GAIL Agent in the edge server can rapidly obtain the resource allocation policy even without prior knowledge or real-time reward feedback. The experiment results indicate that E-GAIL can obtain the best-fit expert trajectory in large-scale noisy environments.},
}
RevDate: 2026-02-07
Adaptive and intelligent customized deep Q-network for energy-efficient task offloading in mobile edge computing environments.
Scientific reports pii:10.1038/s41598-025-34765-y [Epub ahead of print].
The rapid expansion of edge-cloud infrastructures and latency-sensitive Internet of Things (IoT) applications has intensified the challenge of intelligent task offloading in dynamic and resource-constrained environments. This paper presents an Adaptive and Intelligent Customized Deep Q-Network (AICDQN), a novel reinforcement learning-based framework for real-time, priority-aware task scheduling in mobile edge computing systems. The proposed model formulates task offloading as a Markov Decision Process (MDP) and integrates a hybrid Gated Recurrent Unit-Long Short-Term Memory (GRU-LSTM) load prediction module to forecast workload fluctuations and task urgency trends. This foresight enables a Dynamic Dueling Double Deep Q-Network [Formula: see text] agent to make informed offloading decisions across local, edge, and cloud tiers. The system models compute nodes using priority-aware M/M/1, M/M/c and M/M/∞ queuing systems, enabling delay-sensitive and queue-aware decision-making. A dynamic priority scoring function integrates task urgency, deadline proximity, and node-level queue saturation, ensuring real-time tasks are prioritized effectively. Furthermore, an energy-aware scheduling policy proactively transitions underutilized servers into low-power states without compromising performance. Extensive simulations demonstrate that AICDQN achieves up to 33.39% reduction in delay, 57.74% improvement in energy efficiency, and 81.25% reduction in task drop rate compared with existing offloading algorithms, including Deep Deterministic Policy Gradient (DDPG), Distributed Dynamic Task Offloading (DDTO-DRL), Potential Game based Offloading Algorithm (PGOA), and the User-Level Online Offloading Framework (ULOOF). These results validate AICDQN as a scalable and adaptive solution for next-generation edge-cloud systems requiring efficient, intelligent, and energy-constrained task offloading.
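Two building blocks recur in the abstract above: queue-aware delay estimates for each compute node and a dynamic priority score combining task urgency, deadline proximity, and queue saturation. The sketch below shows a plain version of both, using the standard M/M/1 mean sojourn time; the score form and weights are illustrative assumptions, not the published AICDQN formulation.

```python
# Minimal sketch: M/M/1 mean sojourn time plus an illustrative priority score.
def mm1_sojourn_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in an M/M/1 system, W = 1 / (mu - lambda), if stable."""
    if arrival_rate >= service_rate:
        return float("inf")                 # unstable queue
    return 1.0 / (service_rate - arrival_rate)

def priority_score(urgency, time_to_deadline, queue_utilization,
                   w_urgency=0.5, w_deadline=0.3, w_queue=0.2):
    """Higher score = schedule sooner; urgency and queue_utilization in [0, 1]."""
    deadline_term = 1.0 / (1.0 + max(time_to_deadline, 0.0))  # closer deadline -> higher
    return (w_urgency * urgency
            + w_deadline * deadline_term
            + w_queue * queue_utilization)

print(mm1_sojourn_time(arrival_rate=8.0, service_rate=10.0))   # 0.5 time units
print(priority_score(urgency=0.9, time_to_deadline=0.2, queue_utilization=0.7))
```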
Additional Links: PMID-41654577
@article {pmid41654577,
year = {2026},
author = {Anand, J and Karthikeyan, B},
title = {Adaptive and intelligent customized deep Q-network for energy-efficient task offloading in mobile edge computing environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-34765-y},
pmid = {41654577},
issn = {2045-2322},
abstract = {The rapid expansion of edge-cloud infrastructures and latency-sensitive Internet of Things (IoT) applications has intensified the challenge of intelligent task offloading in dynamic and resource-constrained environments. This paper presents an Adaptive and Intelligent Customized Deep Q-Network (AICDQN), a novel reinforcement learning-based framework for real-time, priority-aware task scheduling in mobile edge computing systems. The proposed model formulates task offloading as a Markov Decision Process (MDP) and integrates a hybrid Gated Recurrent Unit-Long Short-Term Memory (GRU-LSTM) load prediction module to forecast workload fluctuations and task urgency trends. This foresight enables a Dynamic Dueling Double Deep Q-Network [Formula: see text] agent to make informed offloading decisions across local, edge, and cloud tiers. The system models compute nodes using priority-aware M/M/1, M/M/c and M/M/∞ queuing systems, enabling delay-sensitive and queue-aware decision-making. A dynamic priority scoring function integrates task urgency, deadline proximity, and node-level queue saturation, ensuring real-time tasks are prioritized effectively. Furthermore, an energy-aware scheduling policy proactively transitions underutilized servers into low-power states without compromising performance. Extensive simulations demonstrate that AICDQN achieves up to 33.39% reduction in delay, 57.74% improvement in energy efficiency, and 81.25% reduction in task drop rate compared with existing offloading algorithms, including Deep Deterministic Policy Gradient (DDPG), Distributed Dynamic Task Offloading (DDTO-DRL), Potential Game based Offloading Algorithm (PGOA), and the User-Level Online Offloading Framework (ULOOF). These results validate AICDQN as a scalable and adaptive solution for next-generation edge-cloud systems requiring efficient, intelligent, and energy-constrained task offloading.},
}
RevDate: 2026-02-07
CmpDate: 2026-02-07
A cloud-edge reference architecture for intertwining health digital domains.
Health informatics journal, 32(1):14604582251383803.
Objective: In the present work, LinkAll is introduced as a novel architectural model designed to enable real-time monitoring and cross-referential data analysis in remote monitoring systems across human, animal, and environmental health domains. LinkAll leverages Edge-Computing and Internet of Things principles to handle data collection, processing, and presentation from various sources. Methods: Two sibling systems were implemented to demonstrate its capability, one for monitoring urban greenery and the other for elderly home care. These systems were evaluated based on their ability to integrate with existing information systems, collect biophysical parameters, and ensure data cross-referencing. Results: Both systems demonstrate effective pluggability and cross-referenceability performances, meeting the stakeholders' requirements. LinkAll's ability to integrate diverse sensors and devices into existing infrastructures while providing real-time, machine-actionable insights, is also underscored. Conclusion: Pluggability, cross-referenceability, and compliance with FAIR principles make the architectural model introduced a robust solution for integrating human, animal, and environmental health monitoring systems, enhancing decision-making and contributing to One (Digital) Health's strategic goals.
Additional Links: PMID-41653444
@article {pmid41653444,
year = {2026},
author = {Tramontano, A and Tamburis, O and Perillo, G and Iaccarino, G and Benis, A and Magliulo, M},
title = {A cloud-edge reference architecture for intertwining health digital domains.},
journal = {Health informatics journal},
volume = {32},
number = {1},
pages = {14604582251383803},
doi = {10.1177/14604582251383803},
pmid = {41653444},
issn = {1741-2811},
mesh = {Humans ; *Cloud Computing ; },
abstract = {Objective: In the present work, LinkAll is introduced as a novel architectural model designed to enable real-time monitoring and cross-referential data analysis in remote monitoring systems across human, animal, and environmental health domains. LinkAll leverages Edge-Computing and Internet of Things principles to handle data collection, processing, and presentation from various sources. Methods: Two sibling systems were implemented to demonstrate its capability, one for monitoring urban greenery and the other for elderly home care. These systems were evaluated based on their ability to integrate with existing information systems, collect biophysical parameters, and ensure data cross-referencing. Results: Both systems demonstrate effective pluggability and cross-referenceability performances, meeting the stakeholders' requirements. LinkAll's ability to integrate diverse sensors and devices into existing infrastructures while providing real-time, machine-actionable insights, is also underscored. Conclusion: Pluggability, cross-referenceability, and compliance with FAIR principles make the architectural model introduced a robust solution for integrating human, animal, and environmental health monitoring systems, enhancing decision-making and contributing to One (Digital) Health's strategic goals.},
}
RevDate: 2026-02-07
DFDD: A Cloud-Ready Tool for Distance-Guided Fully Dynamic Docking in Host-Guest Complexation.
Journal of chemical information and modeling [Epub ahead of print].
Fully dynamic sampling of host-guest inclusion remains difficult because conventional docking and conventional molecular dynamics simulations can sample inclusion, but crystal-like binding is typically stochastic and difficult to reproduce. Here, we introduce DFDD (Distance-Guided Fully Dynamic Docking), a cloud-ready implementation of the LB-PaCS-MD framework designed to capture inclusion processes via unbiased molecular dynamics in explicit solvent. DFDD automates system setup, parameter generation, iterative short-cycle MD sampling, and trajectory analysis within a single workflow that runs on Google Colab without any installation. Progress toward complexation is guided only by the host-guest center-of-mass distance, allowing force-free exploration of insertion pathways and enabling the recovery of both stable and transient binding modes. Using β-cyclodextrin as a representative host, DFDD reproduces experimentally observed inclusion geometries within minutes and reveals intermediate states along the insertion route. Optional coupling with pKaNET-Cloud enables pH-aware, stereochemically consistent ligand protonation states prior to simulation, supporting robust host-guest modeling. This Application Note provides a transparent and accessible platform for efficient host-guest complexation studies. The DFDD framework is publicly available at https://github.com/nyelidl/DFDD.
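The DFDD entry above uses a single progress variable, the host-guest center-of-mass (COM) distance, to guide sampling. The minimal sketch below computes that distance from atomic coordinates and masses; the toy coordinates are placeholders, and in practice the values would come from trajectory frames (for example via MDAnalysis or mdtraj).

```python
# Minimal sketch: mass-weighted host-guest center-of-mass distance.
import numpy as np

def com(coords: np.ndarray, masses: np.ndarray) -> np.ndarray:
    """Center of mass of one molecule; coords has shape (n_atoms, 3)."""
    masses = masses.reshape(-1, 1)
    return (coords * masses).sum(axis=0) / masses.sum()

def host_guest_distance(host_xyz, host_m, guest_xyz, guest_m) -> float:
    return float(np.linalg.norm(com(host_xyz, host_m) - com(guest_xyz, guest_m)))

host_xyz = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])   # toy host coordinates
host_m = np.array([12.0, 12.0])
guest_xyz = np.array([[5.0, 1.0, 0.0], [6.0, -1.0, 0.0]]) # toy guest coordinates
guest_m = np.array([12.0, 16.0])
d = host_guest_distance(host_xyz, host_m, guest_xyz, guest_m)
print(f"host-guest COM distance: {d:.2f} Å")
```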
Additional Links: PMID-41653112
@article {pmid41653112,
year = {2026},
author = {Hengphasatporn, K and Duan, L and Harada, R and Shigeta, Y},
title = {DFDD: A Cloud-Ready Tool for Distance-Guided Fully Dynamic Docking in Host-Guest Complexation.},
journal = {Journal of chemical information and modeling},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jcim.5c02852},
pmid = {41653112},
issn = {1549-960X},
abstract = {Fully dynamic sampling of host-guest inclusion remains difficult because conventional docking and conventional molecular dynamics simulations can sample inclusion, but crystal-like binding is typically stochastic and difficult to reproduce. Here, we introduce DFDD (Distance-Guided Fully Dynamic Docking), a cloud-ready implementation of the LB-PaCS-MD framework designed to capture inclusion processes via unbiased molecular dynamics in explicit solvent. DFDD automates system setup, parameter generation, iterative short-cycle MD sampling, and trajectory analysis within a single workflow that runs on Google Colab without any installation. Progress toward complexation is guided only by the host-guest center-of-mass distance, allowing force-free exploration of insertion pathways and enabling the recovery of both stable and transient binding modes. Using β-cyclodextrin as a representative host, DFDD reproduces experimentally observed inclusion geometries within minutes and reveals intermediate states along the insertion route. Optional coupling with pKaNET-Cloud enables pH-aware, stereochemically consistent ligand protonation states prior to simulation, supporting robust host-guest modeling. This Application Note provides a transparent and accessible platform for efficient host-guest complexation studies. The DFDD framework is publicly available at https://github.com/nyelidl/DFDD.},
}
RevDate: 2026-02-06
Undergraduate medical students' perceptions of an interactive and collaborative cloud-based learning strategy: survey at a single institution.
BMC medical education pii:10.1186/s12909-026-08640-x [Epub ahead of print].
Additional Links: PMID-41645196
@article {pmid41645196,
year = {2026},
author = {Cortes, C and Jackman, TD and Dersch, AM and Taylor, TAH},
title = {Undergraduate medical students' perceptions of an interactive and collaborative cloud-based learning strategy: survey at a single institution.},
journal = {BMC medical education},
volume = {},
number = {},
pages = {},
doi = {10.1186/s12909-026-08640-x},
pmid = {41645196},
issn = {1472-6920},
}
RevDate: 2026-02-04
Bridging the implementation gap: Challenges and opportunities for integrating whole genome sequencing in tuberculosis surveillance in low-resource settings.
Diagnostic microbiology and infectious disease, 115(1):117282 pii:S0732-8893(26)00032-5 [Epub ahead of print].
INTRODUCTION: Tuberculosis (TB) remains a major global health concern, particularly in low-income countries where the impact is greater. The lack of proper surveillance tools in these countries is a big impediment to effective TB control. Whole-genome sequencing (WGS) has successfully been integrated into routine TB programs in high-income countries and transformed disease surveillance by providing rapid, high-resolution transmission insights, drug resistance profiling, and outbreak detection. However, its uptake in resource-limited settings where TB burden is most prevalent remains limited.
METHODS: This review examines how WGS is currently being utilised for TB surveillance and highlights the main obstacles to its adoption in limited-resource settings as well as the strategies that could improve its uptake. A literature search was conducted in PubMed, Google Scholar, and the World Health Organisation (WHO) databases with keywords "whole genome sequencing," "tuberculosis," "surveillance," "transmission," and "drug resistance." Studies published between 2015 and 2025 were prioritised, with a focus on applications in high-burden settings.
RESULTS: Key challenges identified include infrastructural issues whereby 78% of high-burden countries lack adequate sequencing facilities according to WHO 2023 data; financial barriers, with recurring costs surpassing $150 per sample in low-resource settings as compared to $80 in high-income countries, and a shortage of trained personnel with only 2.3 bioinformaticians being available per African country. Other hurdles involve concerns over data sovereignty, weak regulatory frameworks, and ethical dilemmas surrounding privacy and equitable data usage, with only 31% of low-resource countries having genomic data policies. Nevertheless, promising innovations like portable sequencing devices which have a sensitivity of up to 92% and cloud-based platforms that reduce computational needs by 70% offer scalable opportunities for equitable integration. We also highlight partnership models that blend WHO technical guidance, Global Fund financing, and South-South collaborations that could enhance sustainability.
CONCLUSION: To realise the full potential of WGS in TB-endemic regions, a coordinated approach that combines technical advancements with policy changes, ethical data governance, and sustained investment is needed. Tackling these challenges is essential in achieving equitable, genomics-informed TB control that aligns with global TB elimination goals.
Additional Links: PMID-41637877
@article {pmid41637877,
year = {2026},
author = {Micheni, LN and Wambua, S and Magutah, K and Nkaiwuatei, J and Bazira, J and Sande, C},
title = {Bridging the implementation gap: Challenges and opportunities for integrating whole genome sequencing in tuberculosis surveillance in low-resource settings.},
journal = {Diagnostic microbiology and infectious disease},
volume = {115},
number = {1},
pages = {117282},
doi = {10.1016/j.diagmicrobio.2026.117282},
pmid = {41637877},
issn = {1879-0070},
abstract = {INTRODUCTION: Tuberculosis (TB) remains a major global health concern, particularly in low-income countries where the impact is greater. The lack of proper surveillance tools in these countries is a big impediment to effective TB control. Whole-genome sequencing (WGS) has successfully been integrated into routine TB programs in high-income countries and transformed disease surveillance by providing rapid, high-resolution transmission insights, drug resistance profiling, and outbreak detection. However, its uptake in resource-limited settings where TB burden is most prevalent remains limited.
METHODS: This review examines how WGS is currently being utilised for TB surveillance and highlights the main obstacles to its adoption in limited-resource settings as well as the strategies that could improve its uptake. A literature search was conducted in PubMed, Google Scholar, and the World Health Organisation (WHO) databases with keywords "whole genome sequencing," "tuberculosis," "surveillance," "transmission," and "drug resistance." Studies published between 2015 and 2025 were prioritised, with a focus on applications in high-burden settings.
RESULTS: Key challenges identified include infrastructural issues whereby 78% of high-burden countries lack adequate sequencing facilities according to WHO 2023 data; financial barriers, with recurring costs surpassing $150 per sample in low-resource settings as compared to $80 in high-income countries, and a shortage of trained personnel with only 2.3 bioinformaticians being available per African country. Other hurdles involve concerns over data sovereignty, weak regulatory frameworks, and ethical dilemmas surrounding privacy and equitable data usage, with only 31% of low-resource countries having genomic data policies. Nevertheless, promising innovations like portable sequencing devices which have a sensitivity of up to 92% and cloud-based platforms that reduce computational needs by 70% offer scalable opportunities for equitable integration. We also highlight partnership models that blend WHO technical guidance, Global Fund financing, and South-South collaborations that could enhance sustainability.
CONCLUSION: To realise the full potential of WGS in TB-endemic regions, a coordinated approach that combines technical advancements with policy changes, ethical data governance, and sustained investment is needed. Tackling these challenges is essential in achieving equitable, genomics-informed TB control that aligns with global TB elimination goals.},
}
RevDate: 2026-02-04
CmpDate: 2026-02-04
Streamline Protocol for Bulk-RNA Sequencing: From Data Extraction to Expression Analysis.
Current protocols, 6(2):e70304.
Next-generation RNA sequencing (RNA-seq) allows researchers to study gene expression across the whole genome. However, its analysis often needs powerful computers and advanced command-line skills, which can be challenging when resources are limited. This protocol provides a simple, start-to-finish RNA-seq data analysis method that is easy to follow, reproducible, and requires minimal local hardware. It uses free tools such as SRA Toolkit, FastQC, Trimmomatic, BWA/HISAT2, Samtools, and Subread, along with Python and R for further analysis using Google Colab. The process includes downloading raw data from NCBI GEO/SRA, checking data quality, trimming adapters and low-quality reads, aligning sequences to reference genomes, converting file formats, counting reads, normalizing to TPM, and creating visualizations such as heatmaps, bar plots, and volcano plots. Differential gene expression is analyzed with pyDESeq2, and functional enrichment is done using g:Profiler. Troubleshooting in RNA-seq generally involves configuring essential tools, resolving path and dependency issues, and ensuring proper handling of paired-end reads during analysis. By running the heavy computational steps on cloud platforms, this workflow makes RNA-seq analysis affordable and accessible to more researchers. © 2026 Wiley Periodicals LLC. Basic Protocol 1: Extracting and processing a high-throughput RNA-seq dataset with the command prompt and Windows Subsystem for Linux. Basic Protocol 2: Normalization and visualization of processed RNA-seq dataset with Google Colab and Python 3.
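The counting-to-TPM step that sits between read counting and visualization in this workflow is easy to reproduce outside the authors' Colab notebooks. The following is a minimal sketch, not taken from the protocol itself, assuming a featureCounts-style gene-by-sample count matrix and a table of gene lengths; the file names and column labels are hypothetical placeholders.

```python
# Minimal sketch of the TPM-normalization step described in the protocol,
# assuming a read-count matrix (genes x samples) and per-gene lengths in bp.
# File names and column layout are hypothetical placeholders.
import pandas as pd

def counts_to_tpm(counts: pd.DataFrame, gene_lengths_bp: pd.Series) -> pd.DataFrame:
    """Convert raw read counts to transcripts per million (TPM)."""
    lengths_kb = gene_lengths_bp.loc[counts.index] / 1_000   # gene length in kilobases
    rpk = counts.div(lengths_kb, axis=0)                     # reads per kilobase
    per_sample_scale = rpk.sum(axis=0) / 1_000_000           # "per million" scaling factor per sample
    return rpk.div(per_sample_scale, axis=1)                 # TPM for every gene and sample

if __name__ == "__main__":
    counts = pd.read_csv("featurecounts_matrix.tsv", sep="\t", index_col=0)   # e.g. Subread/featureCounts output
    lengths = pd.read_csv("gene_lengths.tsv", sep="\t", index_col=0)["length_bp"]
    tpm = counts_to_tpm(counts, lengths)
    tpm.to_csv("tpm_matrix.tsv", sep="\t")
```

The resulting TPM matrix is the kind of input the protocol then uses for heatmaps, bar plots, and other visualizations in Google Colab.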
Additional Links: PMID-41637157
@article {pmid41637157,
year = {2026},
author = {Mohit, AA and Das, NR and Jain, A and Alam, NB and Mustafiz, A},
title = {Streamline Protocol for Bulk-RNA Sequencing: From Data Extraction to Expression Analysis.},
journal = {Current protocols},
volume = {6},
number = {2},
pages = {e70304},
doi = {10.1002/cpz1.70304},
pmid = {41637157},
issn = {2691-1299},
mesh = {Software ; *Sequence Analysis, RNA/methods ; *High-Throughput Nucleotide Sequencing/methods ; *Gene Expression Profiling/methods ; *RNA-Seq/methods ; Humans ; *Computational Biology/methods ; },
abstract = {Next-generation RNA sequencing (RNA-seq) allows researchers to study gene expression across the whole genome. However, its analysis often needs powerful computers and advanced command-line skills, which can be challenging when resources are limited. This protocol provides a simple, start-to-finish RNA-seq data analysis method that is easy to follow, reproducible, and requires minimal local hardware. It uses free tools such as SRA Toolkit, FastQC, Trimmomatic, BWA/HISAT2, Samtools, and Subread, along with Python and R for further analysis using Google Colab. The process includes downloading raw data from NCBI GEO/SRA, checking data quality, trimming adapters and low-quality reads, aligning sequences to reference genomes, converting file formats, counting reads, normalizing to TPM, and creating visualizations such as heatmaps, bar plots, and volcano plots. Differential gene expression is analyzed with pyDESeq2, and functional enrichment is done using g:Profiler. Troubleshooting in RNA-seq generally involves configuring essential tools, resolving path and dependency issues, and ensuring proper handling of paired-end reads during analysis. By running the heavy computational steps on cloud platforms, this workflow makes RNA-seq analysis affordable and accessible to more researchers. © 2026 Wiley Periodicals LLC. Basic Protocol 1: Extracting and processing a high-throughput RNA-seq dataset with the command prompt and Windows Subsystem for Linux Basic Protocol 2: Normalization and visualization of processed RNA-seq dataset with Google Colab and Python 3.},
}
MeSH Terms:
Software
*Sequence Analysis, RNA/methods
*High-Throughput Nucleotide Sequencing/methods
*Gene Expression Profiling/methods
*RNA-Seq/methods
Humans
*Computational Biology/methods
RevDate: 2026-02-03
CmpDate: 2026-02-03
AI-driven routing and layered architectures for intelligent ICT in nanosensor networked systems.
iScience, 29(2):114626.
This review examines the emerging integration of nanosensor networks with modern information and communication technologies to address critical needs in healthcare, environmental monitoring, and smart infrastructure. It evaluates how machine learning and artificial intelligence techniques improve data processing, energy management, real-time communication, and scalable system coordination within nanosensor environments. The analysis compares major learning approaches, including supervised, unsupervised, reinforcement, and deep learning methods, and highlights their effectiveness in data routing, anomaly detection, security, and predictive maintenance. The review also assesses new system architectures based on edge computing, cloud federated models, and intelligent communication protocols, focusing on performance indicators such as latency, throughput, and energy efficiency. Key challenges involving computational load, data privacy, and system interoperability are identified, and potential solutions inspired by biological systems, interpretable models, and quantum-based learning are explored. Overall, this work provides a unified framework for advancing intelligent and resource-efficient nanosensor communication systems with broad societal impact.
Additional Links: PMID-41630924
@article {pmid41630924,
year = {2026},
author = {Yousif Dafhalla, AK and Attia Gasmalla, TA and Filali, A and Osman Sid Ahmed, NM and Adam, T and Elobaid, ME and Chandra Bose Gopinath, S},
title = {AI-driven routing and layered architectures for intelligent ICT in nanosensor networked systems.},
journal = {iScience},
volume = {29},
number = {2},
pages = {114626},
pmid = {41630924},
issn = {2589-0042},
abstract = {This review examines the emerging integration of nanosensor networks with modern information and communication technologies to address critical needs in healthcare, environmental monitoring, and smart infrastructure. It evaluates how machine learning and artificial intelligence techniques improve data processing, energy management, real-time communication, and scalable system coordination within nanosensor environments. The analysis compares major learning approaches, including supervised, unsupervised, reinforcement, and deep learning methods, and highlights their effectiveness in data routing, anomaly detection, security, and predictive maintenance. The review also assesses new system architectures based on edge computing, cloud federated models, and intelligent communication protocols, focusing on performance indicators such as latency, throughput, and energy efficiency. Key challenges involving computational load, data privacy, and system interoperability are identified, and potential solutions inspired by biological systems, interpretable models, and quantum-based learning are explored. Overall, this work provides a unified framework for advancing intelligent and resource-efficient nanosensor communication systems with broad societal impact.},
}
RevDate: 2026-02-02
Improvements on Scalable and Reproducible Cloud Implementation of Numerical Groundwater Modeling.
Ground water [Epub ahead of print].
In the past decade, the groundwater modeling industry has trended toward more computationally intensive methods that necessarily require more parallel computing power due to the number of model runs required for these methods. Groundwater modeling that requires many parallel model runs is often limited by numerical burden or by the modeler's access to computational resources. Over the last 15 years, use of the cloud to accelerate groundwater model solutions has progressed; however, there are no apparent literature reviews of MODFLOW and PEST cloud implementation, specifically with regard to open-source, efficient, and scalable solutions. Here we describe infrastructure-as-code used to develop the architecture for running PEST++ in parallel on the cloud using Docker containers and open-source software to allow simple and repeatable cloud execution. The architecture utilizes Amazon Web Services and Terraform to facilitate cloud deployment and monitoring. A publicly available MODFLOW-6 model was used to evaluate parallel performance locally and in the cloud. Local model runs were found to have a linear 12 s increase in model run time per agent on a typical office computer compared to the cloud implementation's 0.02 s per model, indicating near-perfect scaling even at up to 200 concurrent model runs. A consulting groundwater model was calibrated with the cloud infrastructure, which enabled acceleration of project completion at minimal cost.
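The reported scaling figures can be put in perspective with a back-of-the-envelope calculation. The sketch below assumes a hypothetical 10-minute base model run time (not stated in the abstract) and applies only the reported per-agent overheads of 12 s (local office computer) and 0.02 s (cloud implementation).

```python
# Rough comparison of the per-agent run-time overheads reported in the abstract,
# assuming the overhead accumulates linearly with the number of concurrent agents.
BASE_RUN_S = 600.0        # hypothetical single-model run time; not stated in the abstract
LOCAL_OVERHEAD_S = 12.0   # reported linear increase per agent on an office computer
CLOUD_OVERHEAD_S = 0.02   # reported increase per model in the cloud implementation

for agents in (10, 50, 200):
    local = BASE_RUN_S + LOCAL_OVERHEAD_S * agents
    cloud = BASE_RUN_S + CLOUD_OVERHEAD_S * agents
    print(f"{agents:>3} agents: local ~{local:7.1f} s/run, cloud ~{cloud:7.1f} s/run")
```

Under these assumptions, 200 concurrent agents add roughly 40 minutes to every local model run but only about 4 seconds in the cloud, which is what the authors describe as near-perfect scaling.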
Additional Links: PMID-41626743
@article {pmid41626743,
year = {2026},
author = {Roth, M and Grove, J and Davis, A and Cornell, J},
title = {Improvements on Scalable and Reproducible Cloud Implementation of Numerical Groundwater Modeling.},
journal = {Ground water},
volume = {},
number = {},
pages = {},
doi = {10.1111/gwat.70052},
pmid = {41626743},
issn = {1745-6584},
abstract = {In the past decade the groundwater modeling industry has trended toward more computationally intensive methods that necessarily require more parallel computing power due to the number of model runs required for these methods. Groundwater modeling that requires many parallel model runs is often limited by numerical burden or by the modeler's access to computational resources. Over the last 15 years the evolution of the cloud in accelerating groundwater model solutions has progressed; however, there are no apparent literature reviews of MODFLOW and PEST cloud implementation, specifically with regards to open-source and efficient scalable solutions. Here we describe infrastructure as code used to develop the architecture for running PEST++ in parallel on the cloud using Docker containers and open-source software to allow simple and repeatable cloud execution. The architecture utilizes Amazon Web Services and Terraform to facilitate cloud deployment and monitoring. A publicly available MODFLOW-6 model was used to evaluate parallel performance locally and in the cloud. Local model runs were found to have a linear 12 s increase in model run time per agent on a typical office computer compared to the cloud implementation's 0.02 s per model, indicating near perfect scaling even at up to 200 concurrent model runs. A consulting groundwater model was calibrated with the cloud infrastructure, which enabled acceleration of project completion at minimal cost.},
}
RevDate: 2026-02-02
CmpDate: 2026-02-02
TropMol-Caipora: A Cloud-Based Web Tool to Predict Cruzain Inhibitors by Machine Learning.
ACS omega, 11(3):4167-4174.
Chagas disease (CD) affects approximately 8 million people and is classified as a high-priority neglected tropical disease by the WHO research and development actions. One promising avenue for drug development for CD is the inhibition of cruzain, a crucial cysteine protease of T. cruzi and one of the most extensively studied therapeutic targets. This study aims to construct a generic molecular screening model for public, online, and free use, based on pIC50 cruzain predictions using a Random Forest model. For this, a data set with approximately 8,000 compounds and 168 classes of descriptors was used, resulting in more than a million calculated descriptors. The model achieved R² = 0.91 (RMSE = 0.33) for the training set and R² = 0.72 (RMSE = 0.55) for the test set. In 5-fold cross-validation, performance remained consistent (R² = 0.72 ± 0.01; RMSE = 0.57 ± 0.01). Some relevant insights were also observed. (1) Aromaticity was shown to be a key factor in inhibitory activity: compounds with nitrogenous aromatic rings are more likely to be effective inhibitors, and aromatics in general show correlation and structural relevance for an effective inhibitor. (2) Halogenation may favor activity; the positive correlation suggests that the introduction of halogen atoms may improve the activity of the compounds. (3) Bicyclic or very rigid structures may decrease the inhibition efficiency of the tested candidates. (4) Molecular accessibility and charge influence activity. Available at: https://colab.research.google.com/drive/1hotsXPddbJ6E0_hysLT9AqsXL-74Na-z?usp=sharing.
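For readers who want to reproduce this kind of screening model, the sketch below shows a generic Random Forest pIC50 regression with a held-out test set and 5-fold cross-validation in scikit-learn. It is not the TropMol-Caipora code; the descriptor file and column names are hypothetical stand-ins for the descriptor sets used in the paper.

```python
# Generic Random Forest pIC50 regression sketch (scikit-learn), assuming a CSV
# with one descriptor column per feature and a "pIC50" target column.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import r2_score, mean_squared_error

data = pd.read_csv("cruzain_descriptors.csv")            # hypothetical descriptor matrix + pIC50 column
X, y = data.drop(columns=["pIC50"]), data["pIC50"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=500, random_state=42, n_jobs=-1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("test R2  :", round(r2_score(y_test, pred), 2))
print("test RMSE:", round(mean_squared_error(y_test, pred) ** 0.5, 2))

# 5-fold cross-validation on the full data set, as reported in the abstract
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R2:", round(cv_r2.mean(), 2), "+/-", round(cv_r2.std(), 2))
```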
Additional Links: PMID-41626475
@article {pmid41626475,
year = {2026},
author = {Doring, TH},
title = {TropMol-Caipora: A Cloud-Based Web Tool to Predict Cruzain Inhibitors by Machine Learning.},
journal = {ACS omega},
volume = {11},
number = {3},
pages = {4167-4174},
pmid = {41626475},
issn = {2470-1343},
abstract = {Chagas disease (CD) affects approximately 8 million people and is classified as a high-priority neglected tropical disease by the WHO research and development actions. One promising avenue for drug development for CD is the inhibition of cruzain, a crucial cysteine protease of T. cruzi and one of the most extensively studied therapeutic targets. This study aims to construct a generic molecular screening model for public, online, and free use, based on pIC50 cruzain predictions using a Random Forest model. For this, a data set with approximately 8 thousand compounds and 168 classes of descriptors was used, resulting in more than a million calculated descriptors. The model achieved R [2] = 0.91 (RMSE = 0.33) for the training set and R [2] = 0.72 (RMSE = 0.55) for the test set. In 5-fold cross-validation, performance remained consistent (R [2] = 0.72 ± 0.01; RMSE = 0.57 ± 0.01). Some relevant insights were also observed. 1 - Aromaticity was shown to be a key factor in inhibitory activity. Compounds with nitrogenous aromatic rings are more likely to be more effective inhibitors. Aromatics in general also present correlation and structural relevance for an effective inhibitor. 2 - Halogenation may favor activity. The positive correlation may suggest that the introduction of halogen atoms may improve the activity of the compounds. 3 - Bicyclic or very rigid structures may decrease the inhibition efficiency of the tested candidates. 4 - Molecular accessibility and charge influence activity. Available in: https://colab.research.google.com/drive/1hotsXPddbJ6E0_hysLT9AqsXL-74Na-z?usp=sharing.},
}
RevDate: 2026-02-02
Towards intelligent edge computing through reinforcement learning based offloading in public edge as a service.
Scientific reports, 16(1):4355.
Internet of Things (IoT) deployments face increasing challenges in meeting strict latency and cost requirements while ensuring efficient resource utilization in distributed environments. Traditional offloading often overlooks the role of intermediate regional layers and mobility, resulting in inefficiencies in real-world deployments. To address this gap, we propose Public Edge as a Service (PEaaS) as an intermediate tier and develop RegionalEdgeSimPy, a Python simulator to model and evaluate this framework. It uses a Proximal Policy Optimization (PPO) scheduler that models mobility and considers multiple input parameters (e.g., network latency, cost, congestion, and energy). Tasks are first evaluated at the serving Wireless Access Point (WAP) for feasibility under utilization thresholds. This decision uses action masking to restrict invalid options, and a reward function that integrates latency, cost, congestion, and energy to guide optimal offloading. Simulations were conducted with 10 to 3000 devices in a 10 × 10 km smart city area. Results show that PPO prioritizes Edge processing until over-utilization, after which workloads are offloaded to the nearest PEaaS, with Cloud used sparingly. On average, Edge achieves 75.8% utilization, PEaaS stabilizes near 52.9%, and Cloud remains under 1.2% when active. These findings demonstrate that PPO scheduling significantly reduces delay, cost, and task failures, providing improved scalability for mobility in IoT big data processing.
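The two mechanisms the abstract emphasises, masking of over-utilized tiers and a multi-term reward, can be illustrated with a short sketch. This is not the RegionalEdgeSimPy code; the utilization threshold and reward weights below are hypothetical.

```python
# Illustrative action mask and reward for edge/PEaaS/cloud offloading decisions.
# Threshold and weights are hypothetical, not values from the paper.
import numpy as np

TIERS = ("edge", "peaas", "cloud")
UTIL_THRESHOLD = 0.85                        # hypothetical over-utilization cut-off

def action_mask(utilization: dict) -> np.ndarray:
    """1 = tier may accept the task, 0 = masked out as infeasible."""
    return np.array([1 if utilization[t] < UTIL_THRESHOLD else 0 for t in TIERS])

def reward(latency_ms, cost, congestion, energy_j, w=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Negative weighted sum: lower latency/cost/congestion/energy gives a higher reward."""
    return -(w[0] * latency_ms + w[1] * cost + w[2] * congestion + w[3] * energy_j)

mask = action_mask({"edge": 0.92, "peaas": 0.53, "cloud": 0.01})
print(dict(zip(TIERS, mask)))                # edge masked out, PEaaS and cloud allowed
print(reward(latency_ms=35.0, cost=0.8, congestion=0.4, energy_j=2.1))
```

A PPO agent would see the masked action set and receive this kind of scalar reward after each offloading decision; the simulator's actual state, weights, and thresholds are described in the paper.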
Additional Links: PMID-41622280
@article {pmid41622280,
year = {2026},
author = {Jalal, A and Farooq, U and Rabbi, I and Badshah, A and Khan, A and Alam, MM and Su'ud, MM},
title = {Towards intelligent edge computing through reinforcement learning based offloading in public edge as a service.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {4355},
pmid = {41622280},
issn = {2045-2322},
abstract = {Internet of Things (IoT) deployments face increasing challenges in meeting strict latency and cost requirements while ensuring efficient resource utilization in distributed environments. Traditional offloading often overlooks the role of intermediate regional layers and mobility, resulting in inefficiencies in real-world deployments. To address this gap, we propose Public Edge as a Service (PEaaS) as an intermediate tier and develop RegionalEdgeSimPy, a Python simulator to model and evaluate this framework. It uses a Proximal Policy Optimization (PPO) scheduler that models mobility and considers multiple input parameters (e.g., network latency, cost, congestion, and energy). Tasks are first evaluated at the serving (Wireless Access Point (WAP)) for feasibility under utilization thresholds. This decision uses action masking to restrict invalid options, and a reward function that integrates latency, cost, congestion, and energy to guide optimal offloading. Simulations conducted with 10 to 3000 devices in a 10 × 10 Kilometers smart city area. Results show that PPo prioritizes Edge processing until over-utilization, after which workloads are offloaded to the nearest PEaaS, with Cloud used sparingly. On average, Edge achieves 75.8% utilization, PEaaS stabilizes near 52.9%, and Cloud remains under 1.2% when active. These findings demonstrate that the PPO scheduling significantly reduces delay, cost, and task failures, providing improved scalability for mobility in IoT big data processing.},
}
RevDate: 2026-01-30
Efficient workflow scheduling in fog-cloud collaboration using a hybrid IPSO-GWO algorithm.
Scientific reports pii:10.1038/s41598-025-34462-w [Epub ahead of print].
With the rapid advancement of fog-cloud computing, task offloading and workflow scheduling have become pivotal in determining system performance and cost efficiency. To address the inherent complexity of this heterogeneous environment, a novel hybrid optimization strategy is introduced, integrating the Improved Particle Swarm Optimization (IPSO) algorithm, enhanced by a linearly decreasing inertia weight, with the Grey Wolf Optimization (GWO) algorithm. This hybridization is not merely a combination but a synergistic fusion, wherein the inertia weight adapts dynamically throughout the optimization process. Such adaptation ensures a balanced trade-off between exploration and exploitation, thereby mitigating the risk of premature convergence commonly observed in standard PSO. To assess the effectiveness of the proposed IPSO-GWO algorithm, extensive simulations were carried out using the FogWorkflowSim framework, an environment specifically developed to capture the complexities of workflow execution within fog-cloud architectures. Our evaluation encompasses a range of real-world scientific workflows, scaling up to 1000 tasks, and benchmarks the performance against PSO, GWO, IPSO, and the Gravitational Search Algorithm (GSA). The Analysis of Variance (ANOVA) is employed to substantiate the results. The experimental results reveal that the proposed IPSO-GWO approach consistently outperforms existing baseline methods across key performance metrics, including total cost, average energy consumption, and overall workflow execution time (makespan) in most scenarios, with average reductions of up to 26.14% in makespan, 37.73% in energy consumption, and 12.52% in total cost. Beyond algorithmic innovation, this study contributes to a deeper understanding of workflow optimization dynamics in distributed fog-cloud systems, paving the way for more intelligent and adaptive task scheduling mechanisms in future computing paradigms.
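The linearly decreasing inertia weight at the heart of the IPSO component follows the standard PSO convention. The sketch below illustrates that update with commonly used bounds (w_max = 0.9, w_min = 0.4) and cognitive/social coefficients c1 = c2 = 2.0; these are textbook defaults, not values reported in the paper, and the GWO hybridization itself is not reproduced here.

```python
# Standard PSO velocity update with a linearly decreasing inertia weight,
# illustrating the IPSO ingredient described in the abstract. Parameter values
# are conventional defaults, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)

def inertia(t: int, t_max: int, w_max: float = 0.9, w_min: float = 0.4) -> float:
    """Inertia weight decreases linearly from w_max to w_min over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max

def pso_velocity(v, x, p_best, g_best, t, t_max, c1=2.0, c2=2.0):
    """One velocity update: inertia term + cognitive pull + social pull."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return (inertia(t, t_max) * v
            + c1 * r1 * (p_best - x)
            + c2 * r2 * (g_best - x))

# Early iterations favor exploration (large w), late iterations exploitation (small w).
print(inertia(0, 100), inertia(100, 100))    # 0.9 ... 0.4
x = np.zeros(3); v = np.zeros(3)
print(pso_velocity(v, x, p_best=np.ones(3), g_best=np.full(3, 2.0), t=10, t_max=100))
```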
Additional Links: PMID-41617810
@article {pmid41617810,
year = {2026},
author = {Awad, S and Gamal, M and El Salam, KA and Abdel-Kader, RF},
title = {Efficient workflow scheduling in fog-cloud collaboration using a hybrid IPSO-GWO algorithm.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-34462-w},
pmid = {41617810},
issn = {2045-2322},
abstract = {With the rapid advancement of fog-cloud computing, task offloading and workflow scheduling have become pivotal in determining system performance and cost efficiency. To address the inherent complexity of this heterogeneous environment, a novel hybrid optimization strategy is introduced, integrating the Improved Particle Swarm Optimization (IPSO) algorithm, enhanced by a linearly decreasing inertia weight, with the Grey Wolf Optimization (GWO) algorithm. This hybridization is not merely a combination but a synergistic fusion, wherein the inertia weight adapts dynamically throughout the optimization process. Such adaptation ensures a balanced trade-off between exploration and exploitation, thereby mitigating the risk of premature convergence commonly observed in standard PSO. To assess the effectiveness of the proposed IPSO-GWO algorithm, extensive simulations were carried out using the FogWorkflowSim framework-an environment specifically developed to capture the complexities of workflow execution within fog-cloud architectures. Our evaluation encompasses a range of real-world scientific workflows, scaling up to 1000 tasks, and benchmarks the performance against PSO, GWO, IPSO, and the Gravitational Search Algorithm (GSA). The Analysis of Variance (ANOVA) is employed to substantiate the results. The experimental results reveal that the proposed IPSO-GWO approach consistently outperforms existing baseline methods across key performance metrics, including total cost, average energy consumption, and overall workflow execution time (makespan) in most scenarios, with average reductions of up to 26.14% in makespan, 37.73% in energy consumption, and 12.52% in total cost Beyond algorithmic innovation, this study contributes to a deeper understanding of workflow optimization dynamics in distributed fog-cloud systems, paving the way for more intelligent and adaptive task scheduling mechanisms in future computing paradigms.},
}
RevDate: 2026-01-29
Employing AI tools to predict features for dental care use in the United States during the global respiratory illness outbreak.
Frontiers in public health, 13:1692540.
Additional Links: PMID-41607911
@article {pmid41607911,
year = {2025},
author = {Zanwar, PP and Kodan-Ghadr, HR and Thirumalai, V and Ghaddar, S and Huang, SJ and Harkness, B and Rey, E and Shah, R and Kurelli, SR and Patel, JS and Calzoni, L and Dede Yildirim, E and Duran, DG},
title = {Employing AI tools to predict features for dental care use in the United States during the global respiratory illness outbreak.},
journal = {Frontiers in public health},
volume = {13},
number = {},
pages = {1692540},
pmid = {41607911},
issn = {2296-2565},
}