Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.
ESP: PubMed Auto Bibliography, created 12 Jan 2026 at 01:40
Cloud Computing
Wikipedia: Cloud Computing
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
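For readers who want to rerun or extend this bibliography, the query above maps directly onto NCBI's public E-utilities interface. The sketch below is illustrative only, not the script used to build this page; it assumes Python with the requests package and simply returns the matching PMIDs.

# Minimal sketch: run the bibliography query against NCBI E-utilities (esearch).
# Not the ESP production script; endpoint and parameters follow the public
# E-utilities documentation, everything else is illustrative.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
QUERY = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
         'NOT pmcbook NOT ispreviousversion')

def fetch_pmids(query: str, retmax: int = 100) -> list[str]:
    """Return up to `retmax` PMIDs matching a PubMed query."""
    params = {"db": "pubmed", "term": query, "retmax": retmax, "retmode": "json"}
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    pmids = fetch_pmids(QUERY)
    print(f"{len(pmids)} PMIDs returned, e.g. {pmids[:5]}")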
Citations: The Papers (from PubMed®)
RevDate: 2026-01-10
CmpDate: 2026-01-10
A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning.
Foods (Basel, Switzerland), 15(1): pii:foods15010133.
As the global food industry expands and consumers demand higher food safety and quality standards, high-throughput detection technology utilizing digital intelligent optical sensors has emerged as a research hotspot in food testing due to its advantages of speed, precision, and non-destructive operation. Integrating cutting-edge achievements in optics, electronics, and computer science with machine learning algorithms, this technology efficiently processes massive datasets. This paper systematically summarizes the construction principles of intelligent optical sensors and their applications in food inspection. Sensors convert light signals into electrical signals using nanomaterials such as quantum dots, metal nanoparticles, and upconversion nanoparticles, and then employ machine learning algorithms including support vector machines, random forests, and convolutional neural networks for data analysis and model optimization. This enables efficient detection of target substances like pesticide residues, heavy metals, microorganisms, and food freshness. Furthermore, the integration of multiple detection mechanisms (including spectral analysis, fluorescence imaging, and hyperspectral imaging) has significantly broadened the sensors' application scenarios. Looking ahead, optical sensors will evolve toward multifunctional integration, miniaturization, and intelligent operation. By leveraging cloud computing and IoT technologies, they will deliver innovative solutions for comprehensive monitoring of food quality and safety across the entire supply chain.
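The abstract names support vector machines, random forests, and convolutional neural networks as the analysis step for sensor readouts. Purely as an illustration of that step, not the authors' pipeline, here is a minimal scikit-learn sketch on synthetic "spectral" feature vectors; the data, labels, and sizes are invented, so the reported accuracy is only chance level and the point is the API shape.

# Illustrative only: a random-forest classifier on synthetic sensor spectra,
# standing in for the kind of ML step the review describes. Not from the paper.
# Labels are random, so accuracy lands near chance; only the workflow matters here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 300, 64           # hypothetical spectral readout
X = rng.normal(size=(n_samples, n_wavelengths))
y = rng.integers(0, 2, size=n_samples)       # e.g. contaminated vs. clean

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))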
Additional Links: PMID-41517198
@article {pmid41517198,
year = {2026},
author = {Wang, Y and Yang, Y and Liu, H},
title = {A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning.},
journal = {Foods (Basel, Switzerland)},
volume = {15},
number = {1},
pages = {},
doi = {10.3390/foods15010133},
pmid = {41517198},
issn = {2304-8158},
support = {the National Key Research and Development Program (No. 2023YFF1104801)//Huilin Liu/ ; },
abstract = {As the global food industry expands and consumers demand higher food safety and quality standards, high-throughput detection technology utilizing digital intelligent optical sensors has emerged as a research hotspot in food testing due to its advantages of speed, precision, and non-destructive operation. Integrating cutting-edge achievements in optics, electronics, and computer science with machine learning algorithms, this technology efficiently processes massive datasets. This paper systematically summarizes the construction principles of intelligent optical sensors and their applications in food inspection. Sensors convert light signals into electrical signals using nanomaterials such as quantum dots, metal nanoparticles, and upconversion nanoparticles, and then employ machine learning algorithms including support vector machines, random forests, and convolutional neural networks for data analysis and model optimization. This enables efficient detection of target substances like pesticide residues, heavy metals, microorganisms, and food freshness. Furthermore, the integration of multiple detection mechanisms-including spectral analysis, fluorescence imaging, and hyperspectral imaging-has significantly broadened the sensors' application scenarios. Looking ahead, optical sensors will evolve toward multifunctional integration, miniaturization, and intelligent operation. By leveraging cloud computing and IoT technologies, they will deliver innovative solutions for comprehensive monitoring of food quality and safety across the entire supply chain.},
}
RevDate: 2026-01-10
CmpDate: 2026-01-10
Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems.
Sensors (Basel, Switzerland), 26(1): pii:s26010348.
Fog computing has revolutionized service delivery by providing services close to the user premises, which reduces communication latency for many real-time applications. This latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction through slow response times. Many real-time applications, such as smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality, are delay-sensitive and require quick responses, and response delay in certain critical healthcare applications might cause serious harm to patients. By leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to its close proximity to end users, fog computing is considered the most suitable computing platform for real-time applications. However, fog devices are resource constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. The proposed technique was evaluated through extensive simulations in the iFogSim toolkit in terms of system response time, execution cost, and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.
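The modified PSO itself is not specified in the abstract, so the sketch below shows only a generic particle swarm search over task-to-node assignments that minimizes makespan; it is a toy stand-in rather than the paper's MPSO, and all task lengths, node speeds, and swarm settings are invented.

# Toy particle swarm optimization for assigning tasks to fog nodes so as to
# minimize makespan. Generic PSO only -- not the paper's MPSO algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_nodes, n_particles, iters = 30, 5, 40, 200
task_len = rng.uniform(50, 500, n_tasks)        # hypothetical MI per task
node_mips = rng.uniform(500, 2000, n_nodes)     # hypothetical MIPS per node

def makespan(position):
    """Decode a continuous position into a task-to-node assignment and score it."""
    assign = np.clip(position.astype(int), 0, n_nodes - 1)
    finish = np.zeros(n_nodes)
    for t, node in enumerate(assign):
        finish[node] += task_len[t] / node_mips[node]
    return finish.max()

pos = rng.uniform(0, n_nodes, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_nodes - 1e-9)
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan (s):", round(pbest_val.min(), 3))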
Additional Links: PMID-41516782
@article {pmid41516782,
year = {2026},
author = {Khan, S and Shah, IA and Loh, WK and Khan, JA and Mylonas, A and Pitropakis, N},
title = {Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010348},
pmid = {41516782},
issn = {1424-8220},
mesh = {Algorithms ; Humans ; *Cloud Computing ; },
abstract = {Fog computing has revolutionized the world by providing its services close to the user premises, which results in reducing the communication latency for many real-time applications. This communication latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response time. Many real-time applications like smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality are delay-sensitive real-time applications and require quick response times. The response delay in certain critical healthcare applications might cause serious loss to health patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to close proximity to end users, fog computing is considered to be the most suitable computing platform for real-time applications. However, fog devices are resource constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. Using the iFogSim toolkit, the proposed technique was evaluated for many extensive simulations to obtain the desired results in terms of system response time, cost of execution and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.},
}
MeSH Terms:
Algorithms
Humans
*Cloud Computing
RevDate: 2026-01-10
MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability.
Sensors (Basel, Switzerland), 26(1): pii:s26010314.
The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system's efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications.
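The MIGS code is not reproduced here; the sketch below only illustrates, in generic Python, the decoupling pattern the abstract describes: each southbound protocol instance polls in its own thread and pushes readings into a cache queue that a northbound forwarder drains independently. The protocol reads are stubbed and every identifier is made up.

# Generic sketch of instance-isolated acquisition threads feeding a cache queue,
# the decoupling pattern described in the abstract. Protocol reads are stubbed;
# this is not the MIGS code base.
import queue
import random
import threading
import time

cache: "queue.Queue[dict]" = queue.Queue(maxsize=1000)   # Cache Component stand-in

def southbound_instance(name: str, period_s: float, stop: threading.Event) -> None:
    """One isolated acquisition loop per device/protocol instance."""
    while not stop.is_set():
        reading = {"source": name, "value": random.random(), "ts": time.time()}
        cache.put(reading)                                 # decouple from transmission
        time.sleep(period_s)

def northbound_forwarder(stop: threading.Event) -> None:
    """Drain the cache and 'transmit' readings upstream (stubbed as print)."""
    while not stop.is_set() or not cache.empty():
        try:
            item = cache.get(timeout=0.5)
        except queue.Empty:
            continue
        print("forwarding", item)

stop = threading.Event()
threads = [threading.Thread(target=southbound_instance, args=(n, 0.2, stop))
           for n in ("modbus-plc-1", "mqtt-sensor-7", "opcua-robot-2")]
threads.append(threading.Thread(target=northbound_forwarder, args=(stop,)))
for t in threads:
    t.start()
time.sleep(2)
stop.set()
for t in threads:
    t.join()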
Additional Links: PMID-41516748
@article {pmid41516748,
year = {2026},
author = {Ai, Y and Zhu, Y and Jiang, Y and Deng, Y},
title = {MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010314},
pmid = {41516748},
issn = {1424-8220},
abstract = {The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system's efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications.},
}
RevDate: 2026-01-10
CmpDate: 2026-01-10
A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks.
Sensors (Basel, Switzerland), 26(1): pii:s26010229.
Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging; federated learning offers a way to train models across devices while keeping raw data local. When combined with edge, fog, or cloud computing, federated learning offers a way to support near-real-time mental health analysis while keeping raw data local. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.
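Since federated learning recurs throughout the included studies, a reminder of its core aggregation step may help: the numpy sketch below implements plain FedAvg, averaging client parameters weighted by local sample counts. It is generic and not taken from any reviewed system; client shapes and sizes are invented.

# Minimal FedAvg aggregation: average client model parameters weighted by the
# number of local samples. Illustrative only; not from any reviewed study.
import numpy as np

def fedavg(client_params, client_sizes):
    """client_params: one list of numpy arrays per client; returns the weighted average."""
    total = sum(client_sizes)
    agg = [np.zeros_like(p) for p in client_params[0]]
    for params, n in zip(client_params, client_sizes):
        for layer, p in zip(agg, params):
            layer += (n / total) * p
    return agg

# Three hypothetical clients with a tiny two-layer model each.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
sizes = [120, 80, 200]                        # local dataset sizes (made up)
global_model = fedavg(clients, sizes)
print([w.shape for w in global_model])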
Additional Links: PMID-41516665
@article {pmid41516665,
year = {2025},
author = {Fiaz, I and Kanwal, N and Al-Said Ahmad, A},
title = {A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010229},
pmid = {41516665},
issn = {1424-8220},
mesh = {*Cloud Computing ; Humans ; *Mental Health ; *Mental Disorders/diagnosis ; Wearable Electronic Devices ; Mobile Applications ; Telemedicine ; },
abstract = {Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging; federated learning offers a way to train models across devices while keeping raw data local. When combined with edge, fog, or cloud computing, federated learning offers a way to support near-real-time mental health analysis while keeping raw data local. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.},
}
MeSH Terms:
*Cloud Computing
Humans
*Mental Health
*Mental Disorders/diagnosis
Wearable Electronic Devices
Mobile Applications
Telemedicine
RevDate: 2026-01-10
A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC.
Sensors (Basel, Switzerland), 26(1): pii:s26010217.
The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev-Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF-cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.
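The full PXRA protocol is in the paper; the sketch below only demonstrates, with the Python cryptography package, how AEAD can bind a session context as associated data so that a ciphertext spliced into another session fails to authenticate. Key derivation, PUF responses, and the ECC offload are out of scope, and the identifiers are invented.

# Illustrative AEAD usage: the session context travels as associated data, so a
# ciphertext replayed or spliced into another session fails to authenticate.
# Not the PXRA protocol itself; requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)     # in PXRA this would be session-derived
aead = AESGCM(key)
nonce = os.urandom(12)
context = b"device=xr-glasses-01|session=42"  # hypothetical context binding

ciphertext = aead.encrypt(nonce, b"pose+gaze telemetry", context)
print(aead.decrypt(nonce, ciphertext, context))          # authenticates and decrypts

try:
    aead.decrypt(nonce, ciphertext, b"device=xr-glasses-01|session=43")
except InvalidTag:
    print("rejected: associated data (session context) does not match")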
Additional Links: PMID-41516652
@article {pmid41516652,
year = {2025},
author = {Cha, W and Lee, HJ and Kook, S and Kim, K and Won, D},
title = {A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010217},
pmid = {41516652},
issn = {1424-8220},
abstract = {The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev-Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF-cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.},
}
RevDate: 2026-01-10
CmpDate: 2026-01-10
An Efficient Clinical Decision Support Framework Using IoMT Based on Explainable and Trustworthy Artificial Intelligence with Transformer Model and Blockchain-Integrated Chunking.
Diagnostics (Basel, Switzerland), 16(1): pii:diagnostics16010007.
Background/Objectives: The use of edge-cloud architectures has increased rapidly to move the analysis of AI-enabled health data to global environments. However, data security, communication overhead, cost-effectiveness, and data transmission losses are still important problems to be solved. Methods: In this paper, we propose a reliable, explainable, and energy-efficient stress detection framework supported by a cost-oriented blockchain-based content-defined chunking approach to minimise the losses during data transfer. In the proposed architecture, the Nurse Stress dataset represents IoMT data. While the chunking process reduces communication volume and storage costs by avoiding data duplication, blockchain technology eliminates the risks of unauthorised access and manipulation by ensuring the immutability and traceability of data blocks. Results: All Transformer-based models have demonstrated over 99% accuracy. The TimesNet model, in particular, has been designated as the system's reference model, exhibiting superior performance in terms of both stability and accuracy. The main contribution of this study lies in proposing one of the first integrated frameworks that jointly employs chunking-based data management, blockchain-enabled trust mechanisms, and edge-cloud computing with XAI to ensure secure and transparent IoMT data processing. The proposed system not only performs highly accurate stress detection, but also optimises the dimensions of reliable data transmission, energy and cost efficiency, and clinical reliability. Conclusions: In this respect, the study presents a scalable, reliable, and repeatable approach in health decision support systems by combining data security, integrity, and explainability issues, which are addressed separately in the literature, in a holistic manner.
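Content-defined chunking is a standard building block; as a generic illustration, not the paper's scheme, the sketch below splits a byte stream at rolling-hash boundaries and deduplicates chunks by SHA-256 digest, the property the framework relies on to cut transfer volume and storage cost.

# Generic content-defined chunking with a polynomial rolling hash, plus SHA-256
# deduplication of the resulting chunks. Illustrative; not the paper's method.
import hashlib

WINDOW, BASE, MOD = 48, 257, (1 << 31) - 1
MASK = (1 << 12) - 1                      # ~4 KiB average chunk size

def chunk(data: bytes, min_size=1024, max_size=16384):
    """Split data at content-defined boundaries (rolling hash hits the mask)."""
    chunks, start, h = [], 0, 0
    pow_w = pow(BASE, WINDOW, MOD)
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i - start >= WINDOW:
            h = (h - data[i - WINDOW] * pow_w) % MOD   # slide the window
        at_boundary = (h & MASK) == 0 and (i - start + 1) >= min_size
        if at_boundary or (i - start + 1) >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

data = bytes(1_000_000)                   # stand-in for an IoMT signal payload
chunks = chunk(data)
store = {}                                # content-addressed store: hash -> chunk
for c in chunks:
    store.setdefault(hashlib.sha256(c).hexdigest(), c)
print(f"{len(chunks)} chunks, {len(store)} unique after deduplication")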
Additional Links: PMID-41515502
@article {pmid41515502,
year = {2025},
author = {Arslanoğlu, K and Karaköse, M},
title = {An Efficient Clinical Decision Support Framework Using IoMT Based on Explainable and Trustworthy Artificial Intelligence with Transformer Model and Blockchain-Integrated Chunking.},
journal = {Diagnostics (Basel, Switzerland)},
volume = {16},
number = {1},
pages = {},
doi = {10.3390/diagnostics16010007},
pmid = {41515502},
issn = {2075-4418},
abstract = {Background/Objectives: The use of edge-cloud architectures has increased rapidly to move the analysis of AI-enabled health data to global environments. However, data security, communication overhead, cost-effectiveness, and data transmission losses are still important problems to be solved. Methods: In this paper, we propose a reliable, explainable, and energy-efficient stress detection framework supported by a cost-oriented blockchain-based content-defined chunking approach to minimise the losses during data transfer. In the proposed architecture, the Nurse Stress dataset represents IoMT data. While the chunking process reduces communication volume and storage costs by avoiding data duplication, blockchain technology eliminates the risks of unauthorised access and manipulation by ensuring the immutability and traceability of data blocks. Results: All Transformer-based models have demonstrated over 99% accuracy. The TimesNet model, in particular, has been designated as the system's reference model, exhibiting superior performance in terms of both stability and accuracy. The main contribution of this study lies in proposing one of the first integrated frameworks that jointly employs chunking-based data management, blockchain-enabled trust mechanisms, and edge-cloud computing with XAI to ensure secure and transparent IoMT data processing. The proposed system not only performs highly accurate stress detection, but also optimises the dimensions of reliable data transmission, energy and cost efficiency, and clinical reliability. Conclusions: In this respect, the study presents a scalable, reliable, and repeatable approach in health decision support systems by combining data security, integrity, and explainability issues, which are addressed separately in the literature, in a holistic manner.},
}
RevDate: 2026-01-09
Enhancing patient admission efficiency through a hybrid cloud framework for medical record sharing.
Scientific reports pii:10.1038/s41598-026-35014-6 [Epub ahead of print].
The fragmentation of patient data across multiple healthcare institutions presents a significant challenge to realizing timely and effective treatment. Although electronic medical records have replaced traditional paper records, they often remain isolated within individual hospital information systems, limiting data exchange and preventing physicians from accessing complete medical histories during patient admission. These restrictions hinder the efficiency of diagnosis and treatment, particularly in critical care settings, such as emergency departments. Cloud computing provides a promising solution by enabling controlled electronic medical record sharing, thereby improving the continuity and quality of care. This study presents a system-level, multi-layered hybrid cloud architecture framework designed to facilitate seamless and managed exchange of electronic medical records among healthcare organizations. To further enhance operational efficiency, the system integrates fingerprint authentication based on hashed identifiers for rapid patient identification and an Internet of Things bracelet for real-time monitoring of vital signs. System performance was evaluated using discrete-event simulation implemented in the OMNeT++ framework, with simulation parameters informed by real emergency department data from three hospitals in Saudi Arabia. The evaluation considers multiple workflow scenarios and incorporates repeated simulation runs to assess performance stability. The simulation results indicate consistent reductions in average patient waiting times, while treatment durations remain stable and patient throughput increases. These findings highlight the potential of the proposed framework to enhance electronic medical record management, streamline clinical workflows, and improve operational efficiency in time-critical environments.
Additional Links: PMID-41513951
@article {pmid41513951,
year = {2026},
author = {Abughazalah, M and Alsaggaf, W and Saifuddin, S and Sarhan, S},
title = {Enhancing patient admission efficiency through a hybrid cloud framework for medical record sharing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35014-6},
pmid = {41513951},
issn = {2045-2322},
abstract = {The fragmentation of patient data across multiple healthcare institutions presents a significant challenge to realizing timely and effective treatment. Although electronic medical records have replaced traditional paper records, they often remain isolated within individual hospital information systems, limiting data exchange and preventing physicians from accessing complete medical histories during patient admission. These restrictions hinder the efficiency of diagnosis and treatment, particularly in critical care settings, such as emergency departments. Cloud computing provides a promising solution by enabling controlled electronic medical record sharing, thereby improving the continuity and quality of care. This study presents a system-level, multi-layered hybrid cloud architecture framework designed to facilitate seamless and managed exchange of electronic medical records among healthcare organizations. To further enhance operational efficiency, the system integrates fingerprint authentication based on hashed identifiers for rapid patient identification and an Internet of Things bracelet for real-time monitoring of vital signs. System performance was evaluated using discrete-event simulation implemented in the OMNeT++ framework, with simulation parameters informed by real emergency department data from three hospitals in Saudi Arabia. The evaluation considers multiple workflow scenarios and incorporates repeated simulation runs to assess performance stability. The simulation results indicate consistent reductions in average patient waiting times, while treatment durations remain stable and patient throughput increases. These findings highlight the potential of the proposed framework to enhance electronic medical record management, streamline clinical workflows, and improve operational efficiency in time-critical environments.},
}
RevDate: 2026-01-09
CmpDate: 2026-01-09
SNAP: Streamlined Nextflow Analysis Pipeline for Immunoprecipitation-Based Epigenomic Profiling of Circulating Chromatin.
bioRxiv : the preprint server for biology.
Epigenomic profiling of circulating chromatin is a powerful and minimally invasive approach for detecting and monitoring disease, but there are no bioinformatics pipelines tailored to the unique characteristics of cell-free chromatin. We present SNAP (Streamlined Nextflow Analysis Pipeline), a reproducible, scalable, and modular workflow specifically designed for immunoprecipitation-based methods for profiling cell-free chromatin. SNAP incorporates quality control metrics optimized for circulating chromatin, including enrichment score and fragment count thresholds, as well as direct estimation of circulating tumor DNA (ctDNA) content from fragment length distributions. It also includes SNP fingerprinting to enable sample identity verification. When applied to cfChIP-seq and cfMeDIP-seq data across multiple cancer types, SNAP's quality filters significantly improved classification performance while maintaining high data retention. Independent validation using plasma from patients with osteosarcoma confirmed the detection of tumor-associated epigenomic signatures that correlated with ctDNA levels and reflected disease biology. SNAP's modular architecture enables straightforward extension to additional cell-free immunoprecipitation-based assays, providing a robust framework to support studies of circulating chromatin broadly. SNAP is compatible with cloud and high-performance computing environments and is publicly available at https://github.com/prc992/SNAP/ .
Additional Links: PMID-41509217
@article {pmid41509217,
year = {2025},
author = {Zhang, Z and Da Silva Cordeiro, P and Chhetri, SB and Fortunato, B and Jin, Z and El Hajj Chehade, R and Semaan, K and Gulati, G and Lee, GG and Hemauer, C and Bian, W and Sotudian, S and Zhang, Z and Osei-Hwedieh, D and Heim, TE and Painter, C and Nawfal, R and Eid, M and Vasseur, D and Canniff, J and Savignano, H and Phillips, N and Seo, JH and Weiss, KR and Freedman, ML and Baca, SC},
title = {SNAP: Streamlined Nextflow Analysis Pipeline for Immunoprecipitation-Based Epigenomic Profiling of Circulating Chromatin.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
pmid = {41509217},
issn = {2692-8205},
abstract = {Epigenomic profiling of circulating chromatin is a powerful and minimally invasive approach for detecting and monitoring disease, but there are no bioinformatics pipelines tailored to the unique characteristics of cell-free chromatin. We present SNAP (Streamlined Nextflow Analysis Pipeline), a reproducible, scalable, and modular workflow specifically designed for immunoprecipitation-based methods for profiling cell-free chromatin. SNAP incorporates quality control metrics optimized for circulating chromatin, including enrichment score and fragment count thresholds, as well as direct estimation of circulating tumor DNA (ctDNA) content from fragment length distributions. It also includes SNP fingerprinting to enable sample identity verification. When applied to cfChIP-seq and cfMeDIP-seq data across multiple cancer types, SNAP's quality filters significantly improved classification performance while maintaining high data retention. Independent validation using plasma from patients with osteosarcoma confirmed the detection of tumor-associated epigenomic signatures that correlated with ctDNA levels and reflected disease biology. SNAP's modular architecture enables straightforward extension to additional cell-free immunoprecipitation-based assays, providing a robust framework to support studies of circulating chromatin broadly. SNAP is compatible with cloud and high-performance computing environments and is publicly available at https://github.com/prc992/SNAP/ .},
}
RevDate: 2026-01-07
Credibility measurement of cloud services based on information entropy and Markov chain.
Scientific reports pii:10.1038/s41598-026-35346-3 [Epub ahead of print].
Despite the rapid advancement of cloud computing technologies, user skepticism about service credibility remains a major barrier to the adoption of cloud services. There is currently no comprehensive, systematic understanding of the factors that affect cloud service credibility. Given the uncertainty of and correlation among these factors, this study analyzed users' trust and credibility requirements and divided cloud service credibility into six attribute dimensions: visibility, controllability, security, reliability, provider viability, and user satisfaction. A credibility measurement model combining information entropy and a Markov chain was established to quantify the uncertainty of each factor in the attribute model, its degree of influence on overall credibility, and the credibility level of the cloud service provider. Experimental validation shows that the model achieves a 15% improvement in prediction accuracy over traditional AHP methods, with particularly notable gains in adaptability to dynamic scenarios, helping users make informed decisions when selecting cloud services.
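The abstract combines the entropy weight method with a Markov chain; the numpy sketch below shows only those two generic calculations, entropy-based attribute weights and the stationary distribution of a transition matrix over credibility levels. The scores and transition probabilities are invented and this is not the paper's full model.

# Generic entropy-weight calculation over credibility attributes plus the
# stationary distribution of a Markov chain over credibility levels.
# Numbers are invented; this is not the paper's model.
import numpy as np

# Rows: cloud providers, columns: the six attribute dimensions.
scores = np.array([[0.8, 0.7, 0.9, 0.6, 0.7, 0.8],
                   [0.6, 0.9, 0.7, 0.8, 0.6, 0.7],
                   [0.9, 0.6, 0.8, 0.7, 0.8, 0.6]])

p = scores / scores.sum(axis=0)                      # column-normalized proportions
entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(scores))
weights = (1 - entropy) / (1 - entropy).sum()        # entropy weights per attribute
print("attribute weights:", np.round(weights, 3))

# Transition matrix between credibility levels (low, medium, high), invented.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()                       # normalize the eigenvector
print("stationary credibility distribution:", np.round(stationary, 3))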
Additional Links: PMID-41501126
@article {pmid41501126,
year = {2026},
author = {Ou, L and Yu, J},
title = {Credibility measurement of cloud services based on information entropy and Markov chain.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35346-3},
pmid = {41501126},
issn = {2045-2322},
support = {JAT230720//Science and Technology Project of Fujian Provincial Department of Education, China/ ; SHE2524//2025 Higher Education Research Project of Sanming University in China/ ; },
abstract = {Despite the rapid advancement of cloud computing technologies, user skepticism about service credibility remains a major barrier to adoption of cloud services. At present, there is not a comprehensive and systematic understanding of the factors that affect the credibility of cloud services. In view of the uncertainty and correlation between the factors of cloud service credibility, this study analyzed the user's demand for credit and credibility. The cloud service credibility attributes were divided into six dimensions: cloud service visibility, controllability, security, reliability, cloud service provider viability and user satisfaction. A cloud service credibility measurement model combining information entropy and Markov chain was established, which could calculate the uncertainty of each factor in the attribute model. The degree of influence on the credibility of cloud service and the credibility level of cloud service provider are calculated in the model. The experimental validation demonstrates that the information entropy and Markov chain model achieves a 15% improvement in prediction accuracy compared to traditional AHP methods, with particularly notable enhancements in dynamic scenario adaptability, which helps users make informed decisions when selecting cloud services.},
}
RevDate: 2026-01-07
CmpDate: 2026-01-07
The future of big data and artificial intelligence on dairy farms: A proposed dairy data ecosystem.
JDS communications, 6(Suppl 1):S9-S14.
The dairy sector should overcome challenges in productivity, sustainability, and data management by adopting intelligent, scalable, and privacy-preserving technological solutions. Adopting data and artificial intelligence (AI) technologies is essential to ensure efficient operations and informed decision making and to keep a competitive market advantage. This paper proposes an integrated, multimodal AI framework to support data-intensive dairy farm operations by leveraging big data principles and advancing them through AI technologies. The proposed architecture incorporates edge computing, autonomous AI agents, and federated learning to enable real-time, privacy-preserving analytics at the farm level and promote knowledge sharing and refinement through research farms and cloud collaboration. Farms collect heterogeneous data, which can be transformed into embeddings for both local inference and cloud analysis. These embeddings form the input of AI agents that support health monitoring, risk prediction, operational optimization, and decision making. Privacy is preserved by sharing only model weights or anonymized data externally. The edge layer handles time-sensitive tasks and communicates with a centralized enterprise cloud hosting global models and distributing updates. A research and development cloud linked to research farms ensures model testing and validation. The entire system is orchestrated by autonomous AI agents that manage data, choose models, and interact with stakeholders, and human oversight ensures safe decisions, as illustrated in the practical use case of mastitis management. This architecture could support data integrity, scalability, and real-time personalization, along with opening up space for partnerships between farms, research institutions, and regulatory bodies to promote secure, cross-sector innovation.
Additional Links: PMID-41497383
@article {pmid41497383,
year = {2025},
author = {Hostens, M and Franceschini, S and van Leerdam, M and Yang, H and Pokharel, S and Liu, E and Niu, P and Zhang, H and Noor, S and Hermans, K and Salamone, M and Sharma, S},
title = {The future of big data and artificial intelligence on dairy farms: A proposed dairy data ecosystem.},
journal = {JDS communications},
volume = {6},
number = {Suppl 1},
pages = {S9-S14},
pmid = {41497383},
issn = {2666-9102},
abstract = {The dairy sector should overcome challenges in productivity, sustainability, and data management by adopting intelligent, scalable, and privacy-preserving technological solutions. Adopting data and artificial intelligence (AI) technologies is essential to ensure efficient operations and informed decision making and to keep a competitive market advantage. This paper proposes an integrated, multimodal AI framework to support data-intensive dairy farm operations by leveraging big data principles and advancing them through AI technologies. The proposed architecture incorporates edge computing, autonomous AI agents, and federated learning to enable real-time, privacy-preserving analytics at the farm level and promote knowledge sharing and refinement through research farms and cloud collaboration. Farms collect heterogeneous data, which can be transformed into embeddings for both local inference and cloud analysis. These embeddings form the input of AI agents that support health monitoring, risk prediction, operational optimization, and decision making. Privacy is preserved by sharing only model weights or anonymized data externally. The edge layer handles time-sensitive tasks and communicates with a centralized enterprise cloud hosting global models and distributing updates. A research and development cloud linked to research farms ensures model testing and validation. The entire system is orchestrated by autonomous AI agents that manage data, choose models, and interact with stakeholders, and human oversight ensures safe decisions, as illustrated in the practical use case of mastitis management. This architecture could support data integrity, scalability, and real-time personalization, along with opening up space for partnerships between farms, research institutions, and regulatory bodies to promote secure, cross-sector innovation.},
}
RevDate: 2026-01-06
Two-Tier heuristic search for ransomware-as-a-service based cyberattack défense analysis using explainable Bayesian deep learning model.
Scientific reports, 16(1):437.
Data security assurance is essential given the growing popularity of cloud computing and its extensive use across industries, particularly in light of the increasing number of cyber-security attacks. Ransomware-as-a-service (RaaS) attacks are prominent and widespread, allowing individuals with minimal technical skill to carry out ransomware operations. While RaaS offerings have lowered the barriers to entry for cyber threats, the growth of generative artificial intelligence (AI) may create new opportunities for offenders. The high prevalence of RaaS-based cyberattacks therefore poses significant challenges to cybersecurity, requiring advanced and interpretable defensive mechanisms. Moreover, deep learning and machine learning (ML) methods typically operate as black boxes, offering little insight into how they reach a decision; understanding the details of a classification model's decision can help explain how a threat is identified. This study presents a novel Two-Tier Metaheuristic Algorithm for Cyberattack Defense Analysis using Explainable Artificial Intelligence based Bayesian Deep Learning (TTMCDA-XAIBDL) method, whose main aim is to detect and mitigate ransomware threats. The method first performs data preprocessing using Z-score normalization to standardize and scale the features. Next, an improved sand cat swarm optimization (ISCSO) technique is used for feature selection, and a Bayesian neural network (BNN) classifies the cyberattack defence outcome. The BNN's hyperparameters are fine-tuned with the whale optimization algorithm (WOA), optimizing its performance for effective ransomware detection. Finally, explainable AI using SHAP is integrated to provide explainability, offering insight into the model's decision-making process and building trust in the system. To demonstrate the effectiveness of the TTMCDA-XAIBDL technique, a series of simulations was conducted on a ransomware detection dataset to evaluate its classification performance, where it achieved a superior accuracy of 99.29% compared with recent methods.
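Of the pipeline stages listed, the preprocessing step is simple enough to show generically: Z-score normalization standardizes each feature to zero mean and unit variance before feature selection. The numpy sketch below uses invented data and is not the paper's code.

# Z-score normalization of a feature matrix, the preprocessing step named in the
# abstract. Generic numpy version with invented data; not the paper's code.
import numpy as np

X = np.random.default_rng(0).uniform(0, 100, size=(500, 12))   # raw features
mu, sigma = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / np.where(sigma == 0, 1.0, sigma)   # guard against constant columns
print(Z.mean(axis=0).round(6), Z.std(axis=0).round(6))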
Additional Links: PMID-41490912
@article {pmid41490912,
year = {2026},
author = {Almuflih, AS},
title = {Two-Tier heuristic search for ransomware-as-a-service based cyberattack défense analysis using explainable Bayesian deep learning model.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {437},
pmid = {41490912},
issn = {2045-2322},
abstract = {Data security assurance is essential owing to the improving popularity of cloud computing and its extensive usage through several industries, particularly in light of the increasing number of cyber-security attacks. Ransomware-as-a-service (RaaS) attacks are prominent and widespread, allowing uniform individuals with minimum technology to perform ransomware processes. While RaaS methods have declined the access barriers for cyber threats, generative artificial intelligence (AI) growth might result in new possibilities for offenders. The high prevalence of RaaS-based cyberattacks poses essential challenges to cybersecurity, requiring progressive and understandable defensive mechanisms. Furthermore, deep or machine learning (ML) methods mainly provide a black box, giving no data about how it functions. Understanding the details of a classification model’s decision can be beneficial for understanding the work way to be identified. This study presents a novel Two-Tier Metaheuristic Algorithm for Cyberattack Defense Analysis using Explainable Artificial Intelligence based Bayesian Deep Learning (TTMCDA-XAIBDL) method. The main intention of the TTMCDA-XAIBDL method is to detect and mitigate ransomware cyber threats. Initially, the TTMCDA-XAIBDL method performs data preprocessing using Z-score normalization to ensure standardization and scalability of features. Next, the improved sand cat swarm optimization (ISCSO) technique is used for the feature selection. The Bayesian neural network (BNN) is employed to classify cyberattack defence. Moreover, the BNN’s hyperparameters are fine-tuned using the whale optimization algorithm (WOA) model, optimizing its performance for effective detection of ransomware threats. Finally, the XAI using SHAP is integrated to provide explainability, offering perceptions of the model’s decision-making procedure and adopting trust in the system. To demonstrate the effectiveness of the TTMCDA-XAIBDL technique, a series of simulations are conducted using a ransomware detection dataset to evaluate its classification performance. The performance validation of the TTMCDA-XAIBDL technique portrayed a superior accuracy value of 99.29% over the recent methods.},
}
RevDate: 2026-01-04
Computing power network dynamic resource scheduling integrating time series mixing dynamic state estimation and hierarchical reinforcement learning.
Scientific reports pii:10.1038/s41598-025-32753-w [Epub ahead of print].
With the evolution of cloud computing towards a multi-cloud architecture, cross-cloud resource scheduling faces challenges such as heterogeneous environment adaptation and slow dynamic load response. How to improve resource utilization while ensuring service quality has become a core challenge in the field of cloud management. To address this need, we propose the TSL-HRL intelligent scheduling framework, which integrates time-series feature modeling and hierarchical reinforcement learning. The framework utilizes a time-series mixing module to deeply mine the periodic fluctuations and burst demand features of computing, storage, and network resources. It integrates a dynamic state estimation module with Kalman filtering to capture real-time changes in resource supply and demand. Additionally, it constructs a hierarchical reinforcement learning architecture with high-level planning and low-level response: the high-level Q-learning algorithm formulates a global long-term resource allocation strategy to ensure optimal overall scheduling, while the low-level A2C algorithm adjusts the execution plan based on real-time network fluctuations and node load, enabling fast adaptation to dynamic changes and forming a macro-micro collaborative decision mechanism. In experiments on the Multi-Cloud Service Composition Dataset and Google 2019 Cluster dynamic node scenarios, TSL-HRL effectively balanced resource utilization efficiency and scheduling real-time performance with its three-level architecture design of time-series feature extraction, dynamic state perception, and hierarchical strategy optimization. The study shows that TSL-HRL provides a systematic solution for resource management in multi-cloud environments. Future research will focus on lightweight extensions for edge-cloud collaborative scenarios, multi-objective energy consumption optimization frameworks, and meta-learning-driven rapid adaptation technologies, promoting the application and generalization of intelligent resource scheduling technologies in real-world complex scenarios.
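The abstract names Kalman filtering for dynamic state estimation of resource supply and demand. The sketch below is only a textbook scalar Kalman filter smoothing a noisy utilization signal, with all noise parameters invented; it is not the TSL-HRL module.

# Textbook scalar Kalman filter tracking a noisy resource-utilization signal,
# standing in for the dynamic state estimation step. Not the TSL-HRL module.
import numpy as np

rng = np.random.default_rng(0)
true_load = 0.5 + 0.2 * np.sin(np.linspace(0, 6 * np.pi, 200))  # hypothetical load
measurements = true_load + rng.normal(0, 0.05, size=true_load.size)

Q, R = 1e-4, 0.05**2          # process and measurement noise variances (assumed)
x, P = measurements[0], 1.0   # initial estimate and its variance
estimates = []
for z in measurements:
    P = P + Q                 # predict (random-walk model)
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # update with the new measurement
    P = (1 - K) * P
    estimates.append(x)

err_raw = np.mean((measurements - true_load) ** 2)
err_kf = np.mean((np.array(estimates) - true_load) ** 2)
print(f"MSE raw: {err_raw:.5f}  MSE filtered: {err_kf:.5f}")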
Additional Links: PMID-41486178
@article {pmid41486178,
year = {2026},
author = {Liu, H and Zhang, S and Li, L and Sun, T and Xue, W and Yao, X and Xu, Y},
title = {Computing power network dynamic resource scheduling integrating time series mixing dynamic state estimation and hierarchical reinforcement learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32753-w},
pmid = {41486178},
issn = {2045-2322},
abstract = {With the evolution of cloud computing towards a multi-cloud architecture, cross-cloud resource scheduling faces challenges such as heterogeneous environment adaptation and slow dynamic load response. How to improve resource utilization while ensuring service quality has become a core challenge in the field of cloud management. To address this need, we propose the TSL-HRL intelligent scheduling framework, which integrates time-series feature modeling and hierarchical reinforcement learning. The framework utilizes a time-series mixing module to deeply mine the periodic fluctuations and burst demand features of computing, storage, and network resources. It integrates a dynamic state estimation module with Kalman filtering to capture real-time changes in resource supply and demand. Additionally, it constructs a high-level planning - low-level response hierarchical reinforcement learning architecture: the high-level Q-learning algorithm formulates a global long-term resource allocation strategy to ensure optimal overall scheduling, while the low-level A2C algorithm adjusts the execution plan based on real-time network fluctuations and node load, enabling fast adaptation to dynamic changes, forming a macro-micro collaborative decision mechanism. In experiments on the Multi-Cloud Service Composition Dataset and Google 2019 Cluster dynamic node scenarios, TSL-HRL effectively balanced resource utilization efficiency and scheduling real-time performance with its three-level architecture design of time-series feature extraction - dynamic state perception - hierarchical strategy optimization. The study shows that TSL-HRL provides a systematic solution for resource management in multi-cloud environments. Future research will focus on lightweight extensions for edge-cloud collaborative scenarios, multi-objective energy consumption optimization frameworks, and meta-learning-driven rapid adaptation technologies, promoting the application and generalization of intelligent resource scheduling technologies in real-world complex scenarios.},
}
RevDate: 2026-01-04
Data security storage and transmission framework for AI computing power platforms.
Scientific reports pii:10.1038/s41598-025-31786-5 [Epub ahead of print].
In the era of rapidly expanding artificial intelligence (AI) applications, ensuring secure data storage and transmission within AI computing power platforms remains a critical challenge. This research presents a novel data security storage and transmission system, termed Secure Artificial Intelligence Data Storage and Transmission (Secure AI-DST), tailored for AI computing environments. The proposed framework integrates a hybrid encryption mechanism that combines Amended Merkle Tree (AMerT) hashing with Secret Elliptic Curve Cryptography (SEllC) for enhanced data confidentiality. For secure storage and decentralization, the system leverages blockchain with InterPlanetary File System (IPFS) integration, ensuring tamper-proof and scalable data handling. To classify various attack types, a novel deep learning model, an attention bidirectional gated recurrent unit-assisted residual network (Att-BGR), is deployed, offering accurate detection of intrusions. Simulation studies conducted in MATLAB® 2023b using both synthetic and real-time datasets show that the Secure AI-DST system reduces unauthorized access attempts by 92.7%, maintains data integrity with 99.98% accuracy under simulated cyberattacks, and achieves a packet validation success rate of 97.6% across edge-to-cloud transmissions. Furthermore, the proposed method introduces only a 4.3% computational overhead, making it highly suitable for real-time AI workloads. These outcomes confirm the effectiveness of Secure AI-DST in ensuring end-to-end data protection, resilience against cyber threats, and scalable performance for next-generation AI computing infrastructures.
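The "Amended Merkle Tree" is not described in the abstract, so the sketch below shows only a plain Merkle root computation over data blocks with hashlib, the integrity primitive such schemes build on; it is not the AMerT construction.

# Plain Merkle root over a list of data blocks (SHA-256), the integrity primitive
# underlying Merkle-tree schemes. Generic; not the paper's "Amended Merkle Tree".
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Hash the leaves, then pair and re-hash until one root remains."""
    level = [sha256(b) for b in blocks]
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [f"record-{i}".encode() for i in range(5)]
print("root:", merkle_root(blocks).hex())

tampered = blocks.copy()
tampered[2] = b"record-2-tampered"
print("roots differ after tampering:", merkle_root(blocks) != merkle_root(tampered))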
Additional Links: PMID-41484422
@article {pmid41484422,
year = {2026},
author = {Chen, J and Lu, Z and Zheng, H and Ren, Z and Chen, Y and Shang, J},
title = {Data security storage and transmission framework for AI computing power platforms.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31786-5},
pmid = {41484422},
issn = {2045-2322},
abstract = {In the era of rapidly expanding artificial intelligence (AI) applications, ensuring secure data storage and transmission within AI computing power platforms remains a critical challenge. This research presents a novel data security storage and transmission system, termed as secure artificial intelligence data storage and transmission (Secure AI-DST), tailored for AI computing environments. The proposed framework integrates a hybrid encryption mechanism that combines Amended Merkle Tree (AMerT) hashing with Secret Elliptic Curve Cryptography (SEllC) enhanced data confidentiality. For secure storage and decentralization, the system leverages blockchain with InterPlanetary File System (IPFS) integration, ensuring tamper-proof and scalable data handling. To classify various attack types, a novel deep learning model attention bidirectional gated recurrent unit-assisted residual network (Att-BGR) is deployed, offering accurate detection of intrusions. Simulation studies conducted in MATLAB® 2023b using both synthetic and real-time datasets show that the Secure AI-DST system reduces unauthorized access attempts by 92.7%, maintains data integrity with 99.98% accuracy under simulated cyberattacks, and achieves a packet validation success rate of 97.6% across edge-to-cloud transmissions. Furthermore, the proposed method introduces only a 4.3% computational overhead, making it highly suitable for real-time AI workloads. These outcomes confirm the effectiveness of Secure AI-DST in ensuring end-to-end data guard, resilience against cyber threats, and scalable presentation for next-generation AI computing substructures.},
}
RevDate: 2026-01-03
Ensemble deep learning approach for traffic video analytics in edge computing.
Scientific reports pii:10.1038/s41598-025-25628-7 [Epub ahead of print].
Video analytics is the new era of computer vision for identifying and classifying objects. Traffic surveillance videos can be analysed using computer vision to comprehend road traffic, and monitoring real-time road traffic is essential to controlling it. Computer vision helps identify vehicles on the road, but present techniques perform the video analysis either on the cloud platform or on the edge platform: the former introduces additional processing delay when control is needed in real time, while the latter is less accurate in estimating the current road traffic. YOLO algorithms are the most notable for efficient real-time object detection; to make such detection feasible in lightweight environments, its smaller variant, Tiny YOLO, is used. Edge computing is an efficient framework for performing computation at the edge of the physical layer, without moving data into the cloud, to reduce latency. A novel hybrid model for vehicle detection and classification using Tiny YOLO and YOLOR is constructed at the edge layer. This hybrid model processes video frames at a higher rate and produces the traffic estimate. The numerical traffic volume is sent to Ensemble Learning in Traffic Video Analytics (ELITVA), which uses F-RNN to make decisions that reduce traffic flow seamlessly. Experimental results on a drone dataset captured at road signals show increases in precision of 13.8%, accuracy of 4.8%, recall of 17.4%, F1 score of 19.9%, and frame processing rate of 12.8% compared with existing traffic surveillance systems, along with efficient control of road traffic.
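No fusion rule for the Tiny YOLO + YOLOR hybrid is given in the abstract; the sketch below shows one common way to ensemble two detectors, pooling score-weighted boxes and applying per-class non-maximum suppression. The mocked detections, weights, and IoU threshold are assumptions for illustration.

    # Illustrative sketch: fusing detections from two object detectors (e.g. Tiny YOLO
    # and YOLOR) by pooling score-weighted boxes and applying non-maximum suppression.
    # Detections are (x1, y1, x2, y2, score, class_id); weights/thresholds are assumed.

    def iou(a, b):
        ax1, ay1, ax2, ay2 = a[:4]
        bx1, by1, bx2, by2 = b[:4]
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (ax2 - ax1) * (ay2 - ay1)
        area_b = (bx2 - bx1) * (by2 - by1)
        return inter / (area_a + area_b - inter + 1e-9)

    def ensemble_nms(dets_a, dets_b, w_a=0.5, w_b=0.5, iou_thr=0.5):
        # Pool both detectors' boxes, reweighting confidences per detector.
        pooled = [(x1, y1, x2, y2, s * w_a, c) for x1, y1, x2, y2, s, c in dets_a]
        pooled += [(x1, y1, x2, y2, s * w_b, c) for x1, y1, x2, y2, s, c in dets_b]
        pooled.sort(key=lambda d: d[4], reverse=True)
        kept = []
        for det in pooled:
            # Suppress boxes of the same class that overlap a higher-scoring kept box.
            if all(d[5] != det[5] or iou(d, det) < iou_thr for d in kept):
                kept.append(det)
        return kept

    tiny_yolo = [(10, 10, 50, 60, 0.80, 2), (100, 40, 160, 120, 0.55, 2)]
    yolor     = [(12, 11, 52, 61, 0.90, 2)]
    print(ensemble_nms(tiny_yolo, yolor))  # kept detections = per-frame vehicle estimate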
Additional Links: PMID-41484116
@article {pmid41484116,
year = {2026},
author = {Sathyamoorthy, M and Rajasekar, V and Krishnamoorthi, S and Pamucar, D},
title = {Ensemble deep learning approach for traffic video analytics in edge computing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-25628-7},
pmid = {41484116},
issn = {2045-2322},
abstract = {Video analytics is the new era of computer vision in identifying and classifying objects. Traffic surveillance videos can be analysed to using computer vision to comprehend the road traffic. Monitoring the real-time road traffic is essential to control them. Computer vision helps in identifying the vehicles on the road, but the present techniques either perform the video analysis on the cloud platform or the edge platform. The former introduces more delay in processing while controlling is needed in real-time, the latter is not accurate in estimating the current road traffic. YOLO algorithms are the most notable ones for efficient real-time object detection. To make such object detections feasible in lightweight environments, its tinier version called Tiny YOLO is used. Edge computing is the efficient framework to have its computation done on the edge of the physical layer without the need to move data into the cloud to reduce latency. A novel hybrid model of vehicle detection and classification using Tiny YOLO and YOLOR is constructed at the edge layer. This hybrid model processes the video frames at a higher rate and produces the traffic estimate. The numerical traffic volume is sent to Ensemble Learning in Traffic Video Analytics (ELITVA) which uses F-RNN to make decisions in reducing the traffic flow seamlessly. The experimental results performed on drone dataset captured at road signals show an increase in precision by 13.8%, accuracy by 4.8%, recall by 17.4%, F1 score by 19.9%, and frame rate processing by 12.8% compared to other existing traffic surveillance systems and efficient controlling of road traffic.},
}
RevDate: 2026-01-04
CmpDate: 2026-01-02
Collaborative optimization of computational offloading and resource allocation based on Stackelberg game.
PloS one, 21(1):e0339955.
The exponential growth of the Internet of Things and mobile edge computing has intensified the need for substantial data processing and instantaneous response. Consequently, collaboration between the cloud, the edge, and the end devices has become a key computing paradigm. However, in this architecture task scheduling is complex, resources are heterogeneous and dynamic, and achieving low-latency, energy-efficient task processing remains a serious challenge. To address the lack of dynamic collaborative optimization in existing research, this paper introduces a collaborative optimization approach for computational offloading and resource allocation that uses a Stackelberg game to maximize the system's total utility. First, an overall utility model integrating delay, energy consumption, and revenue is constructed for scenarios involving multiple cloud servers, multiple edge servers, and multiple users. Subsequently, a three-tier Stackelberg game model is developed in which the cloud acts as the leader, setting resource pricing strategies; the edge acts as the sub-leader, fine-tuning the distribution of computational resources in alignment with the cloud's strategy; and the mobile terminal acts as the follower, optimizing its computation offloading ratio in response to the strategies set by the upper tiers. Next, through game equilibrium analysis, the existence and uniqueness of the Stackelberg equilibrium are proven. Finally, BI-PRO, a backward-induction-based resource pricing, allocation, and computation offloading optimization algorithm, is proposed. The experimental findings indicate that the proposed Stackelberg game method optimizes the system's total revenue and maintains stable performance across various scenarios. These results confirm the superiority and robustness of the method.
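As a heavily simplified picture of the backward induction underlying BI-PRO, the sketch below solves a toy two-level Stackelberg pricing game: the leader chooses a resource price over a grid after anticipating the follower's best-response offloading ratio. The utility functions and constants are invented for illustration and do not come from the paper's three-tier model.

    # Toy backward-induction sketch for a two-level Stackelberg pricing game:
    # the leader (cloud) sets a resource price p; the follower (user) chooses an
    # offloading ratio x in [0, 1] that maximizes its own utility given p.
    # All utility functions and constants below are illustrative assumptions.
    import numpy as np

    def follower_utility(x, p, local_cost=4.0, remote_speedup=6.0):
        # Benefit of offloading minus payment and residual local processing cost.
        return remote_speedup * np.sqrt(x) - p * x - local_cost * (1.0 - x)

    def follower_best_response(p, grid=np.linspace(0.0, 1.0, 501)):
        return grid[np.argmax(follower_utility(grid, p))]

    def leader_utility(p, x, unit_cost=1.0):
        # Revenue from sold resources minus operating cost.
        return (p - unit_cost) * x

    # Backward induction: evaluate the leader's payoff at the follower's anticipated reply.
    prices = np.linspace(1.0, 10.0, 181)
    best_p = max(prices, key=lambda p: leader_utility(p, follower_best_response(p)))
    best_x = follower_best_response(best_p)
    print(f"equilibrium price ~ {best_p:.2f}, offloading ratio ~ {best_x:.2f}")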
Additional Links: PMID-41481652
@article {pmid41481652,
year = {2026},
author = {Li, L and Yu, Q and Wang, C and Zhao, J and Lv, J and Wang, S and Hu, C},
title = {Collaborative optimization of computational offloading and resource allocation based on Stackelberg game.},
journal = {PloS one},
volume = {21},
number = {1},
pages = {e0339955},
pmid = {41481652},
issn = {1932-6203},
mesh = {*Resource Allocation/methods ; *Game Theory ; *Cloud Computing ; Algorithms ; Cooperative Behavior ; Humans ; Models, Theoretical ; },
abstract = {The exponential growth of the Internet of Things and mobile edge computing has intensified the need for substantial data processing and instantaneous response. Consequently, collaboration between the cloud, the edge and the end has become a key computing paradigm. However, in this architecture, task scheduling is complex, resources are heterogeneous and dynamic, and it is still a serious challenge to achieve low-latency and energy-efficient task processing. Aiming at the deficiency of dynamic collaborative optimization in the existing research, this paper introduces a collaborative optimization approach for computational offloading and resource allocation, utilizing the Stackelberg game to maximize the system's total utility. First, an overall utility model that integrates delay, energy consumption, and revenue is constructed for application scenarios involving multi-cloud servers, multi-edge servers, and multiple users. Subsequently, a three-tier Stackelberg game model is developed in which the cloud assumes the role of the leader, focusing on the establishment of resource pricing strategies. Concurrently, the edge operates as the sub-leader, fine-tuning the distribution of computational resources in alignment with the cloud's strategic initiatives. Meanwhile, the mobile terminal functions as the follower, meticulously optimizing the computation offloading ratio in response to the superior strategies delineated by the preceding tiers. Next, through game equilibrium analysis, the existence and uniqueness of the Stackelberg equilibrium are proven. Finally, a BI-PRO is proposed based on the backward induction resource pricing, allocation, and computation offload optimization algorithm. The experimental findings indicate that the proposed Stackelberg game method optimizes the system's total revenue and maintains stable performance across various scenarios. These results confirm the superiority and robustness of the method.},
}
RevDate: 2026-01-02
CmpDate: 2026-01-02
MorphoCloud: Democratizing Access to High-Performance Computing for Morphological Data Analysis.
ArXiv pii:2512.21408.
The digitization of biological specimens has revolutionized the field of morphology, creating large collections of 3D data, microCT data in particular. This revolution was initially supported by the development of open-source software tools, specifically the SlicerMorph extension to the open-source image analytics platform 3D Slicer. Through SlicerMorph and 3D Slicer, biologists, morphologists, and scientists in related fields have all the necessary tools to import, visualize, and analyze these complex and large datasets in a single platform that is flexible and expandable, without the need for proprietary software that hinders scientific collaboration and sharing. Yet a significant "compute gap" remains: while data and software are now open and accessible, the high-end computing resources needed to run them are not equally accessible at all institutions, and are particularly lacking at Primarily Undergraduate Institutions (PUIs) and in other educational settings. Here, we present MorphoCloud, an "IssuesOps"-based platform that leverages GitHub Actions and the JetStream2 cloud farm to provide on-demand, research-grade computing environments to researchers working with 3D morphological datasets. By delivering a GPU-accelerated full desktop experience via a web browser, MorphoCloud eliminates hardware barriers, enabling complex 3D analysis and AI-assisted segmentation. This paper explains the platform and its architecture, as well as the use cases it is designed to support.
Additional Links: PMID-41479453
@article {pmid41479453,
year = {2025},
author = {Maga, AM and Fillion-Robin, JC},
title = {MorphoCloud: Democratizing Access to High-Performance Computing for Morphological Data Analysis.},
journal = {ArXiv},
volume = {},
number = {},
pages = {},
pmid = {41479453},
issn = {2331-8422},
abstract = {The digitization of biological specimens has revolutionized the field of morphology, creating large collections of 3D data, and microCT in particular. This revolution was initially supported by the development of open-source software tools, specifically the development of SlicerMorph extension to the open-source image analytics platform 3D Slicer. Through SlicerMorph and 3D Slicer, biologists, morphologists and scientists in related fields have all the necessary tools to import, visualize and analyze these complex and large datasets in a single platform that is flexible and expandible, without the need of proprietary software that hinders scientific collaboration and sharing. Yet, a significant "compute gap" remains: While data and software are now open and accessible, the necessary high-end computing resources to run them are often not equally accessible in all institutions, and particularly lacking at Primarily Undergraduate Institutions (PUIs) and other educational settings. Here, we present MorphoCloud, an "IssuesOps"-based platform that leverages Github Actions and the JetStream2 cloud farm to provide on-demand, research-grade computing environments to researchers working with 3D morphological datasets. By delivering a GPU-accelerated full desktop experience via a web browser, MorphoCloud eliminates hardware barriers, enabling complex 3D analysis and AI-assisted segmentation. This paper explains the platform and its architecture, as well as use cases it is designed to support.},
}
RevDate: 2025-12-31
Scalable photonic reservoir computing for parallel machine learning tasks.
Nature communications pii:10.1038/s41467-025-67983-z [Epub ahead of print].
Neuromorphic photonics enables brain-inspired information processing with higher bandwidth and lower energy consumption than traditional electronics, addressing the growing computational demands of the Internet of Things, cloud services, and edge computing. However, even current state-of-the-art electronic and photonic platforms are incapable of delivering the scalable throughput, multitasking processing, and energy efficiency required by these applications. Here, we demonstrate a tunable photonic reservoir computing device based on a nonlinear amplifying loop mirror (NALM), leveraging a time-delayed, single-unit, all-optical architecture. By combining dense temporal encoding with wavelength-division multiplexing, the system supports concurrent multitasking across independent data channels, enabling scalable computational performance without additional hardware complexity. Experiments and theoretical validation on classification and prediction benchmarks demonstrate the device's performance, achieving a throughput of 20 tera-operations-per-second and an energy efficiency of 4.4 fJ per operation. These results highlight a promising path towards reconfigurable, compact, and high-performance photonic processors for real-time intelligent applications.
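The photonic hardware cannot be reproduced in software, but the time-delayed, single-node reservoir scheme it implements has a standard numerical analogue; the sketch below drives one nonlinear node with delayed feedback to create virtual nodes and trains only a ridge-regression readout. The node count, feedback strength, and benchmark task are generic assumptions rather than the paper's settings.

    # Numerical analogue of a time-delay (single-node) reservoir computer:
    # one nonlinear node with delayed feedback yields N "virtual nodes" per input
    # step; only the linear readout is trained (ridge regression). Generic sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    N, eta, gamma = 50, 0.5, 0.05          # virtual nodes, feedback and input scaling
    mask = rng.uniform(-1.0, 1.0, N)       # fixed random input mask

    def reservoir_states(u):
        states = np.zeros((len(u), N))
        x = np.zeros(N)
        for t, ut in enumerate(u):
            # Each virtual node mixes the delayed state with the masked input.
            x = np.tanh(eta * np.roll(x, 1) + gamma * mask * ut)
            states[t] = x
        return states

    # Task: one-step-ahead prediction of a noisy sine wave (stand-in benchmark).
    u = np.sin(0.2 * np.arange(1200)) + 0.05 * rng.standard_normal(1200)
    X, y = reservoir_states(u[:-1]), u[1:]
    ridge = 1e-6
    W = np.linalg.solve(X[:1000].T @ X[:1000] + ridge * np.eye(N), X[:1000].T @ y[:1000])
    pred = X[1000:] @ W
    print("test NRMSE:", np.sqrt(np.mean((pred - y[1000:]) ** 2)) / np.std(y[1000:]))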
Additional Links: PMID-41476165
@article {pmid41476165,
year = {2025},
author = {Aadhi, A and Di Lauro, L and Fischer, B and Dmitriev, P and Alamgir, I and Mazoukh, C and Perron, N and Viktorov, EA and Kovalev, AV and Eshaghi, A and Vakili, S and Chemnitz, M and Roztocki, P and Little, BE and Chu, ST and Moss, DJ and Morandotti, R},
title = {Scalable photonic reservoir computing for parallel machine learning tasks.},
journal = {Nature communications},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41467-025-67983-z},
pmid = {41476165},
issn = {2041-1723},
abstract = {Neuromorphic photonics enables brain-inspired information processing with higher bandwidth and lower energy consumption than traditional electronics, addressing the growing computational demands of the Internet of Things, cloud services, and edge computing. However, even current state-of-the-art electronic and photonic platforms are incapable of delivering the scalable throughput, multitasking processing, and energy efficiency required by these applications. Here, we demonstrate a tunable photonic reservoir computing device based on a nonlinear amplifying loop mirror (NALM), leveraging a time-delayed, single-unit, all-optical architecture. By combining dense temporal encoding with wavelength-division multiplexing, the system supports concurrent multitasking across independent data channels, enabling scalable computational performance without additional hardware complexity. Experiments and theoretical validation on classification and prediction benchmarks demonstrate the device's performance, achieving a throughput of 20 tera-operations-per-second and an energy efficiency of 4.4 fJ per operation. These results highlight a promising path towards reconfigurable, compact, and high-performance photonic processors for real-time intelligent applications.},
}
RevDate: 2026-01-02
CmpDate: 2025-12-31
MTBseq-nf: Enabling Scalable Tuberculosis Genomics "Big Data" Analysis Through a User-Friendly Nextflow Wrapper for MTBseq Pipeline.
Microorganisms, 13(12):.
The MTBseq pipeline, published in 2018, was designed to address bioinformatics challenges in tuberculosis (TB) research using whole-genome sequencing (WGS) data. It was the first publicly available tool on GitHub to perform full analysis of WGS data for the Mycobacterium tuberculosis complex (MTBC), encompassing quality control through mapping, variant calling for lineage classification, drug resistance prediction, and phylogenetic inference. However, the pipeline's architecture is not optimal for analyses in high-performance computing or cloud computing environments that often involve large datasets. To overcome this limitation, we developed MTBseq-nf, a Nextflow wrapper that provides parallelization for faster execution in addition to several other significant enhancements. The MTBseq-nf wrapper can run several instances of the same step in parallel, fully utilizing the available resources, unlike the linear, batched analysis of samples in the TBfull step of the MTBseq pipeline. To evaluate scalability and reproducibility, we used 90 M. tuberculosis genomes (European Nucleotide Archive (ENA) accession PRJEB7727) for benchmarking on a dedicated computational server. In our benchmarks, MTBseq-nf in its parallel mode is at least twice as fast as the standard MTBseq pipeline for cohorts exceeding 20 samples. Through integration with the best practices of the nf-core, Bioconda, and Biocontainers projects, MTBseq-nf ensures reproducibility and platform independence, providing a scalable and efficient solution for TB genomic surveillance.
Additional Links: PMID-41471889
@article {pmid41471889,
year = {2025},
author = {Sharma, A and Marcon, DJ and Loubser, J and Lima, KVB and van der Spuy, G and Conceição, EC},
title = {MTBseq-nf: Enabling Scalable Tuberculosis Genomics "Big Data" Analysis Through a User-Friendly Nextflow Wrapper for MTBseq Pipeline.},
journal = {Microorganisms},
volume = {13},
number = {12},
pages = {},
pmid = {41471889},
issn = {2076-2607},
support = {445784/2023-7//National Council for Scientific and Technological Development/ ; 3083687//Oracle Cloud credits/ ; PhD Scholarship//National Research Foundation/ ; },
abstract = {The MTBseq pipeline, published in 2018, was designed to address bioinformatics challenges in tuberculosis (TB) research using whole-genome sequencing (WGS) data. It was the first publicly available tool on GitHub to perform full analysis of WGS data for Mycobacterium tuberculosis complex (MTBC) encompassing quality control through mapping, variant calling for lineage classification, drug resistance prediction, and phylogenetic inference. However, the pipeline's architecture is not optimal for analyses on high-performance computing or cloud computing environments that often involve large datasets. To overcome this limitation, we developed MTBseq-nf, a Nextflow wrapper that provides parallelization for faster execution speeds in addition to several other significant enhancements. The MTBseq-nf wrapper can run several instances of the same step in parallel, fully utilizing the available resources, unlike the linear, batched analysis of samples in the TBfull step of the MTBseq pipeline. For evaluation of scalability and reproducibility, we used 90 M. tuberculosis genomes (European Nucleotide Archive-ENA accession PRJEB7727) for the benchmarking analysis on a dedicated computational server. In our benchmarks, MTBseq-nf in its parallel mode is at least twice as fast as the standard MTBseq pipeline for cohorts exceeding 20 samples. Through integration with the best practices of nf-core, Bioconda, and Biocontainers projects MTBseq-nf ensures reproducibility and platform independence, providing a scalable and efficient solution for TB genomic surveillance.},
}
RevDate: 2026-01-03
CmpDate: 2025-12-31
Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases.
Sensors (Basel, Switzerland), 25(24):.
The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.
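The aggregation rule named in the abstract, Federated Averaging (FedAvg), reduces to a sample-count-weighted average of client parameters; the sketch below shows that step with NumPy arrays standing in for EfficientNet-B0 weights. Client names, shapes, and sizes are illustrative.

    # Minimal FedAvg aggregation sketch: the server averages client parameter
    # tensors weighted by each client's number of local training samples.
    # Arrays stand in for real model weights (e.g. EfficientNet-B0 layers).
    import numpy as np

    def fedavg(client_weights, client_sizes):
        """client_weights: list of dicts {layer_name: ndarray}; client_sizes: list of ints."""
        total = float(sum(client_sizes))
        agg = {name: np.zeros_like(w) for name, w in client_weights[0].items()}
        for weights, n in zip(client_weights, client_sizes):
            for name, w in weights.items():
                agg[name] += (n / total) * w
        return agg

    # Two IoT nodes with locally trained (mock) weights and different dataset sizes.
    node_a = {"conv1": np.full((3, 3), 1.0), "fc": np.array([1.0, 2.0])}
    node_b = {"conv1": np.full((3, 3), 3.0), "fc": np.array([3.0, 4.0])}
    global_model = fedavg([node_a, node_b], client_sizes=[100, 300])
    print(global_model["fc"])  # -> [2.5 3.5], biased toward the larger client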
Additional Links: PMID-41471641
@article {pmid41471641,
year = {2025},
author = {Papanikolaou, A and Tziouvaras, A and Floros, G and Xenakis, A and Bonsignorio, F},
title = {Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471641},
issn = {1424-8220},
mesh = {*Deep Learning ; *Plant Diseases ; *Internet of Things ; Algorithms ; Neural Networks, Computer ; Crops, Agricultural ; Plant Leaves ; },
abstract = {The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.},
}
RevDate: 2026-01-03
CmpDate: 2025-12-31
Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems.
Sensors (Basel, Switzerland), 25(24):.
The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.
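The abstract does not fix the exact parameterization of the ECC + HMAC-SHA256 + AES configuration; the sketch below shows one conventional way to combine these primitives with the Python cryptography package: ECDH key agreement on P-256, HKDF-SHA256 key derivation, AES-256-GCM encryption, and a separate HMAC-SHA256 tag over the ciphertext. The curve, key lengths, GCM mode, and info string are assumptions, not the paper's choices.

    # One conventional ECC + HMAC-SHA256 + AES construction (sketch; assumptions noted
    # in the text above): ECDH on P-256 -> HKDF-SHA256 -> AES-256-GCM + HMAC-SHA256.
    # Requires the 'cryptography' package.
    import os
    from cryptography.hazmat.primitives import hashes, hmac
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Edge gateway and hospital server each hold an EC key pair (P-256).
    edge_priv = ec.generate_private_key(ec.SECP256R1())
    server_priv = ec.generate_private_key(ec.SECP256R1())

    def derive_keys(own_priv, peer_pub):
        shared = own_priv.exchange(ec.ECDH(), peer_pub)
        material = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
                        info=b"hie-demo").derive(shared)
        return material[:32], material[32:]          # AES-256 key, HMAC key

    enc_key, mac_key = derive_keys(edge_priv, server_priv.public_key())

    record = b'{"patient_id": "anon-001", "spo2": 97}'
    nonce = os.urandom(12)
    ciphertext = AESGCM(enc_key).encrypt(nonce, record, b"vitals")

    tag = hmac.HMAC(mac_key, hashes.SHA256())        # integrity tag over the ciphertext
    tag.update(nonce + ciphertext)
    mac = tag.finalize()

    # Receiver re-derives the same keys, checks the HMAC, then decrypts.
    dec_key, check_key = derive_keys(server_priv, edge_priv.public_key())
    verifier = hmac.HMAC(check_key, hashes.SHA256())
    verifier.update(nonce + ciphertext)
    verifier.verify(mac)                             # raises InvalidSignature on tampering
    print(AESGCM(dec_key).decrypt(nonce, ciphertext, b"vitals"))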
Additional Links: PMID-41471577
@article {pmid41471577,
year = {2025},
author = {Ghani, NA and Bagustari, BA and Ahmad, M and Tolle, H and Kurnianingtyas, D},
title = {Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471577},
issn = {1424-8220},
support = {IMG005-2023//University of Malaya/ ; 01703/UN10.A0101/B/TU.01.00.1/2024//University of Brawijaya/ ; },
mesh = {*Computer Security ; *Health Information Exchange ; *Internet of Things ; Humans ; Confidentiality ; Delivery of Health Care ; Algorithms ; Cloud Computing ; },
abstract = {The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.},
}
RevDate: 2026-01-03
Two Novel Cloud-Masking Algorithms Tested in a Tropical Forest Setting Using High-Resolution NICFI-Planet Basemaps.
Sensors (Basel, Switzerland), 25(24):.
The high-resolution NICFI-Planet image collection on Google Earth Engine (GEE) promises fine-scale tropical forest monitoring, but persistent cloud cover, shadows, and haze undermine its value. Here, we present two simple, fully reproducible cloud-masking algorithms: (A) a Blue and Near-Infrared threshold and (B) a Sentinel-2-derived statistical thresholding approach that sets per-band cutoffs. Both are implemented end-to-end in GEE for operational use. The algorithms were first developed, tuned, and evaluated in the Sundarbans (Bangladesh) using strongly contrasting dry- and monsoon-season scenes. To assess their broader utility, we additionally tested them in two independent deltaic mangrove systems, the Bidyadhari Delta in West Bengal, India, and the Ayeyarwady Delta in Myanmar. Across all sites, Algorithm B consistently removes the largest share of cloud and bright-water pixels but tends to over-mask haze and low-contrast features. Algorithm A retains more usable pixels; however, its aggressiveness is region-dependent: it is more conservative in the Sundarbans but noticeably more over-inclusive in the India and Myanmar scenes. A map produced by a Random Forest classifier offers a useful reference, but that model depends on the quantity and quality of labeled samples. The novelty of the algorithms lies in their design specifically for NICFI-Planet basemaps and their ability to operate without labeled samples. Because they rely on simple, fully shareable GEE code, they can be readily applied across regions in a consistent manner. These two algorithms offer a pragmatic operational pathway: apply them as a first-pass filter, keeping in mind that their behavior may vary across environments.
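The published thresholds are not quoted in the abstract, so the sketch below only mirrors the logic of Algorithm A (a Blue and Near-Infrared brightness threshold) on an in-memory NumPy reflectance array rather than in Google Earth Engine; band order and cutoff values are placeholders.

    # Sketch of the Blue/NIR threshold logic (Algorithm A) applied to an in-memory
    # reflectance array instead of a GEE ImageCollection. Thresholds and band order
    # are placeholders; the published algorithm runs in Google Earth Engine.
    import numpy as np

    def blue_nir_cloud_mask(img, blue_idx=0, nir_idx=3,
                            blue_thresh=0.25, nir_thresh=0.35):
        """img: (H, W, bands) surface reflectance in [0, 1]. Returns True where cloudy."""
        blue, nir = img[..., blue_idx], img[..., nir_idx]
        # Clouds are bright in both the blue and near-infrared bands.
        return (blue > blue_thresh) & (nir > nir_thresh)

    rng = np.random.default_rng(1)
    scene = rng.uniform(0.0, 0.2, size=(4, 4, 4))
    scene[0, 0] = [0.4, 0.4, 0.4, 0.5]                 # synthetic bright "cloud" pixel
    mask = blue_nir_cloud_mask(scene)
    masked = np.where(mask[..., None], np.nan, scene)  # drop cloudy pixels
    print(mask.sum(), "cloudy pixels masked")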
Additional Links: PMID-41471553
@article {pmid41471553,
year = {2025},
author = {Islam, KMA and Abir, S and Kennedy, R},
title = {Two Novel Cloud-Masking Algorithms Tested in a Tropical Forest Setting Using High-Resolution NICFI-Planet Basemaps.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471553},
issn = {1424-8220},
support = {80NSSC23K0245//This work was supported by a grant from NASA's SERVIR program under agreement 80NSSC23K0245/ ; },
abstract = {High-resolution NICFI-Planet image collection on Google Earth Engine (GEE) promises fine-scale tropical forest monitoring, but persistent cloud covers, shadows, and haze undermine its value. Here, we present two simple, fully reproducible cloud-masking algorithms. We introduce (A) a Blue and Near-Infrared threshold and (B) a Sentinel-2-derived statistical thresholding approach that sets per-band cutoffs. Both are implemented end-to-end in GEE for operational use. The algorithms were first developed, tuned, and evaluated in the Sundarbans (Bangladesh) using strongly contrasting dry- and monsoon-season scenes. To assess their broader utility, we additionally tested them in two independent deltaic mangrove systems, namely, the Bidyadhari Delta in West Bengal, India, and the Ayeyarwady Delta in Myanmar. Across all sites, Algorithm B consistently removes the largest share of cloud and bright-water pixels but tends to over-mask haze and low-contrast features. Algorithm A retains more usable pixels; however, its aggressiveness is region-dependent. It appears more conservative in the Sundarbans but noticeably more over-inclusive in the India and Myanmar scenes. A Random Forest classifier provided map offers a useful reference but the model is dependent on the quantity and quality of labeled samples. The novelty of the algorithms lies in their design specifically for NICFI-Planet basemaps and their ability to operate without labeled samples. Because they rely on simple, fully shareable GEE code, they can be readily applied in regions in a consistent manner. These two algorithms offer a pragmatic operational pathway: apply them as a first-pass filter keeping in mind that its behavior may vary across environments.},
}
RevDate: 2026-01-03
The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication.
Sensors (Basel, Switzerland), 25(24):.
Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel, edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms, and it reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Finally, a content-aware, deterministic message-queue data distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which combines a "rules first" scheduling strategy with dynamic resource allocation, guarantees low latency for key instructions even when many data messages must be served concurrently. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads involving over 100 heterogeneous data sources. These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains.
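As a stripped-down illustration of the event-triggered adaptive Kalman filtering described above, the sketch below runs a scalar filter that adapts its measurement-noise estimate from the innovation sequence and transmits an update only when the innovation exceeds a threshold. The constants and the adaptation rule are simplified assumptions, not the paper's multi-threshold design.

    # Stripped-down 1-D event-triggered adaptive Kalman filter: the measurement-noise
    # estimate R is adapted from recent innovations, and a sample is "transmitted"
    # only when the innovation magnitude exceeds a threshold. Constants are assumed.
    import numpy as np

    rng = np.random.default_rng(2)
    Q, R_est, P, x = 1e-4, 0.04, 1.0, 0.0      # process/measurement noise, covariance, state
    threshold, alpha = 0.3, 0.05               # trigger level, R adaptation rate
    transmitted = 0

    true_signal = np.sin(0.05 * np.arange(400))
    measurements = true_signal + 0.2 * rng.standard_normal(400)

    for z in measurements:
        P += Q                                  # predict (random-walk state model)
        innovation = z - x
        # Adapt R toward the observed innovation power (simple exponential smoothing).
        R_est = (1 - alpha) * R_est + alpha * (innovation ** 2 - P)
        R_est = max(R_est, 1e-4)
        if abs(innovation) > threshold:         # event trigger: update and transmit
            K = P / (P + R_est)
            x += K * innovation
            P *= (1 - K)
            transmitted += 1

    print(f"transmitted {transmitted} of {len(measurements)} samples")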
Additional Links: PMID-41471512
@article {pmid41471512,
year = {2025},
author = {Tian, J and Shang, C and Ren, T and Li, Z and Zhang, E and Yang, J and He, M},
title = {The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471512},
issn = {1424-8220},
support = {Grant No. U24A6005//National Natural Science Foundation of China/ ; },
abstract = {Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel and edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms. This approach significantly reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Ultimately, a content-aware deterministic message queue data distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which utilizes a "rules first" scheduling strategy and a dynamic resource allocation mechanism, guarantees low latency for key instructions even under the response loads of multiple data messages. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components, and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads, which involve over 100 heterogeneous data sources. These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward the development of real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains.},
}
RevDate: 2026-01-03
Adaptive Reinforcement Learning-Based Framework for Energy-Efficient Task Offloading in a Fog-Cloud Environment.
Sensors (Basel, Switzerland), 25(24):.
Ever-increasing computational demand introduced by the expanding scale of Internet of Things (IoT) devices poses significant concerns in terms of energy consumption in a fog-cloud environment. Due to the limited resources of IoT devices, energy-efficient task offloading becomes even more challenging for time-sensitive tasks. In this paper, we propose a reinforcement learning-based framework, namely Adaptive Q-learning-based Energy-aware Task Offloading (AQETO), that dynamically manages the energy consumption of fog nodes in a fog-cloud network. Concurrently, it considers IoT task delay tolerance and allocates computational resources while satisfying deadline requirements. The proposed approach dynamically determines energy states of each fog node using Q-learning depending on workload fluctuations. Moreover, AQETO prioritizes allocation of the most urgent tasks to minimize delays. Extensive experiments demonstrate the effectiveness of AQETO in terms of the minimization of fog node energy consumption and delay and the maximization of system efficiency.
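AQETO's state and action spaces are not detailed in the abstract; the sketch below shows the tabular Q-learning update such a scheme builds on, with a toy fog node whose state is a discretized workload level and whose actions are energy modes. Rewards, transitions, and hyperparameters are invented for illustration.

    # Toy tabular Q-learning loop in the spirit of energy-aware fog scheduling:
    # states = discretized workload levels, actions = node energy modes.
    # Reward trades off energy cost against deadline misses. All values are assumed.
    import numpy as np

    rng = np.random.default_rng(3)
    n_states, actions = 5, ["sleep", "low", "high"]
    energy_cost = {"sleep": 0.0, "low": 1.0, "high": 3.0}
    capacity = {"sleep": 0, "low": 1, "high": 3}       # tasks served per step
    Q = np.zeros((n_states, len(actions)))
    alpha, gamma, eps = 0.1, 0.9, 0.1

    state = 0
    for step in range(20000):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[state]))
        arrivals = rng.integers(0, 3)                  # new tasks this step
        backlog = min(n_states - 1, max(0, state + arrivals - capacity[actions[a]]))
        missed = max(0, state + arrivals - capacity[actions[a]] - (n_states - 1))
        reward = -energy_cost[actions[a]] - 5.0 * missed - 0.5 * backlog
        Q[state, a] += alpha * (reward + gamma * np.max(Q[backlog]) - Q[state, a])
        state = backlog

    print("learned policy per workload level:",
          [actions[int(np.argmax(Q[s]))] for s in range(n_states)])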
Additional Links: PMID-41471511
@article {pmid41471511,
year = {2025},
author = {Mikavica, B and Kostic-Ljubisavljevic, A},
title = {Adaptive Reinforcement Learning-Based Framework for Energy-Efficient Task Offloading in a Fog-Cloud Environment.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471511},
issn = {1424-8220},
abstract = {Ever-increasing computational demand introduced by the expanding scale of Internet of Things (IoT) devices poses significant concerns in terms of energy consumption in a fog-cloud environment. Due to the limited resources of IoT devices, energy-efficient task offloading becomes even more challenging for time-sensitive tasks. In this paper, we propose a reinforcement learning-based framework, namely Adaptive Q-learning-based Energy-aware Task Offloading (AQETO), that dynamically manages the energy consumption of fog nodes in a fog-cloud network. Concurrently, it considers IoT task delay tolerance and allocates computational resources while satisfying deadline requirements. The proposed approach dynamically determines energy states of each fog node using Q-learning depending on workload fluctuations. Moreover, AQETO prioritizes allocation of the most urgent tasks to minimize delays. Extensive experiments demonstrate the effectiveness of AQETO in terms of the minimization of fog node energy consumption and delay and the maximization of system efficiency.},
}
RevDate: 2026-01-03
Edge Temporal Digital Twin Network for Sensor-Driven Fault Detection in Nuclear Power Systems.
Sensors (Basel, Switzerland), 25(24):.
The safe and efficient operation of nuclear power systems largely relies on sensor networks that continuously collect and transmit monitoring data. However, due to the high sensitivity of the nuclear power field and strict privacy restrictions, data among different nuclear entities are typically not directly shareable, which poses challenges to constructing a global digital twin with strong generalization capability. Moreover, most existing digital twin approaches tend to treat sensor data as static, overlooking critical temporal patterns that could enhance fault prediction performance. To address these issues, this paper proposes an Edge Temporal Digital Twin Network (ETDTN) for cloud-edge collaborative, sensor-driven fault detection in nuclear power systems. ETDTN introduces a continuous variable temporal representation to fully exploit temporal information from sensors, incorporates a global representation module to alleviate the non-IID characteristics among different subsystems, and integrates a temporal attention mechanism based on graph neural networks in the latent space to strengthen temporal feature learning. Extensive experiments on real nuclear power datasets from 17 independent units demonstrate that ETDTN achieves significantly better fault detection performance than existing methods under non-sharing data scenarios, obtaining the best results in both accuracy and F1 score. The findings indicate that ETDTN not only effectively preserves data privacy through federated parameter aggregation but also captures latent temporal patterns, providing a powerful tool for sensor-driven fault detection and predictive maintenance in nuclear power systems.
Additional Links: PMID-41471501
@article {pmid41471501,
year = {2025},
author = {Liu, S and Ye, G and Zhao, X},
title = {Edge Temporal Digital Twin Network for Sensor-Driven Fault Detection in Nuclear Power Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471501},
issn = {1424-8220},
abstract = {The safe and efficient operation of nuclear power systems largely relies on sensor networks that continuously collect and transmit monitoring data. However, due to the high sensitivity of the nuclear power field and strict privacy restrictions, data among different nuclear entities are typically not directly shareable, which poses challenges to constructing a global digital twin with strong generalization capability. Moreover, most existing digital twin approaches tend to treat sensor data as static, overlooking critical temporal patterns that could enhance fault prediction performance. To address these issues, this paper proposes an Edge Temporal Digital Twin Network (ETDTN) for cloud-edge collaborative, sensor-driven fault detection in nuclear power systems. ETDTN introduces a continuous variable temporal representation to fully exploit temporal information from sensors, incorporates a global representation module to alleviate the non-IID characteristics among different subsystems, and integrates a temporal attention mechanism based on graph neural networks in the latent space to strengthen temporal feature learning. Extensive experiments on real nuclear power datasets from 17 independent units demonstrate that ETDTN achieves significantly better fault detection performance than existing methods under non-sharing data scenarios, obtaining the best results in both accuracy and F1 score. The findings indicate that ETDTN not only effectively preserves data privacy through federated parameter aggregation but also captures latent temporal patterns, providing a powerful tool for sensor-driven fault detection and predictive maintenance in nuclear power systems.},
}
RevDate: 2026-01-03
AI-Enabled Dynamic Edge-Cloud Resource Allocation for Smart Cities and Smart Buildings.
Sensors (Basel, Switzerland), 25(24):.
The rapid expansion of IoT devices represents significant progress in areas such as smart buildings and smart cities, but the volume of data generated is also a challenge: it can lead to real bottlenecks in the data analysis process and thus to increased waiting times for end users. Cloud-based solutions may prove inefficient in some cases, as the bandwidth required for transmitting data generated by IoT devices is limited. Integration with Edge computing mitigates this issue by bringing data processing closer to the resource that generates it. Edge computing plays a key role in improving cloud performance by offloading tasks closer to the data source and optimizing resource allocation. Achieving the desired performance requires a dynamic approach to resource management, in which task execution can be prioritized based on current load conditions at either the Edge node or the Cloud node. This paper proposes an approach based on the Seasonal Auto Regressive Integrated Moving Average (SARIMA) model for seamlessly switching between the Cloud and Edge nodes in the event of a loss of connection between them, thereby ensuring the command loop remains closed by transferring the task to the Edge node until the Cloud node becomes available. In this way, a prediction that could underlie a command is not jeopardized by the lack of connection to the Cloud node. The method was evaluated using real-world resource utilization data and compared against a Simple Moving Average (SMA) baseline using standard metrics: RMSE, MAE, MAPE, and MSE. Experimental results demonstrate that SARIMA significantly improves prediction accuracy, achieving up to 64% improvement for CPU usage and 35% for RAM usage compared to SMA. These findings highlight the effectiveness of incorporating seasonality and autoregressive components in predictive models for edge computing, contributing to more efficient resource allocation and enhanced performance in smart city environments.
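To make the SARIMA-versus-SMA comparison concrete, the sketch below fits statsmodels' SARIMAX to a synthetic hourly CPU trace with daily seasonality and compares its 24-step forecast against a flat moving-average baseline. The model orders and the synthetic data are assumptions, not the paper's configuration.

    # SARIMA vs. simple-moving-average forecast of a synthetic hourly CPU-usage trace
    # with daily seasonality. Orders and data are illustrative; requires statsmodels.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(4)
    hours = np.arange(10 * 24)
    cpu = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, len(hours))
    train, test = cpu[:-24], cpu[-24:]

    # SARIMA(1,0,1)(1,1,1)_24: seasonal differencing captures the daily cycle.
    model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
    sarima_forecast = model.fit(disp=False).forecast(steps=24)

    # Baseline: simple moving average of the last 24 observations, held flat.
    sma_forecast = np.full(24, train[-24:].mean())

    rmse = lambda f: float(np.sqrt(np.mean((f - test) ** 2)))
    print(f"SARIMA RMSE: {rmse(sarima_forecast):.2f}  SMA RMSE: {rmse(sma_forecast):.2f}")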
Additional Links: PMID-41471434
@article {pmid41471434,
year = {2025},
author = {Dumitru, MC and Caramihai, SI and Dumitrascu, A and Pietraru, RN and Moisescu, MA},
title = {AI-Enabled Dynamic Edge-Cloud Resource Allocation for Smart Cities and Smart Buildings.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471434},
issn = {1424-8220},
abstract = {The rapid expansion of IoT devices represents significant progress in areas such as smart buildings and smart cities, but at the same time, the volume of data generated represents a challenge, which can lead to real bottlenecks in the data analysis process, thus resulting in increased waiting times for end users. The use of cloud-based solutions may prove inefficient in some cases, as the bandwidth required for transmitting data generated by IoT devices is limited. The integration with Edge computing mitigates this issue, bringing data processing closer to the resource that generates it. Edge computing plays a key role in improving cloud performance by offloading tasks closer to the data source, optimizing resource allocation. Achieving the desired performance requires a dynamic approach to resource management, where task execution can be prioritized based on current load conditions: either at the Edge node or the Cloud node. This paper proposes an approach based on the Seasonal Auto Regressive Integrated Moving Average (SARIMA) model for seamlessly switching between the Cloud and Edge nodes in the event of a loss of connection between the Cloud and Edge nodes. Thereby ensuring the command loop remains closed by transferring the task to the Edge node until the Cloud node becomes available. In this way, the prediction that could underlie a command is not jeopardized by the lack of connection to the cloud node. The method was evaluated using real-world resource utilization data and compared against a Simple Moving Average (SMA) baseline using standard metrics: RMSE, MAE, MAPE, and MSE. Experimental results demonstrate that SRIMA significantly improves prediction accuracy, achieving up to 64% improvement for CPU usage and 35% for RAM usage compared to SMA. These findings highlight the effectiveness of incorporating seasonality and autoregressive components in predictive models for edge computing, contributing to more efficient resource allocation and enhanced performance in smart city environments.},
}
RevDate: 2026-01-02
CmpDate: 2025-12-30
The Use of Industry 4.0 and 5.0 Technologies in the Transformation of Food Services: An Integrative Review.
Foods (Basel, Switzerland), 14(24):.
Industry 5.0 involves the integration of advanced technologies, collaboration between humans and intelligent machines, resilience and sustainability, all of which are essential for the advancement of the food services industry. This analysis reviews the scientific literature on Industries 4.0 and 5.0 technologies, whether experimental or implemented, focused on producing large meals in food service. The review has been conducted through a systematic search, covering aspects from consumer ordering and the cooking process to distribution while considering management, quality control, and sustainability. A total of thirty-one articles, published between 2006 and 2025, were selected, with the majority focusing on Industry 5.0 (71%) and a significant proportion on testing phases (77.4%). In the context of Food Service Perspectives, the emphasis has been placed on customer service (32.3%), highlighting the use of Artificial Intelligence (AI)-powered robots for serving customers and AI for service personalization. Sustainability has also received attention (29%), focusing on AI and machine learning (ML) applications aimed at waste reduction. In management (22.6%), AI has been applied to optimize production schedules, enhance menu engineering, and improve overall management. Big Data (BD) and ML were utilized for sales analysis, while Blockchain technology was employed for traceability. Cooking innovations (9.7%) centered on automation, particularly the use of collaborative robots (cobots). For Quality Control (6.4%), AI, along with the Internet of Things (IoT) and Cloud Computing, has been used to monitor the physical aspects of food. The study underscores the importance of strategic investments in technology to optimize processes and resources, personalize services, and ensure food quality, thereby promoting balance and sustainability.
Additional Links: PMID-41465025
@article {pmid41465025,
year = {2025},
author = {Cantarelli da Silva, R and Bacharini Lima, L and Batistela Dos Santos, E and Akutsu, RC},
title = {The Use of Industry 4.0 and 5.0 Technologies in the Transformation of Food Services: An Integrative Review.},
journal = {Foods (Basel, Switzerland)},
volume = {14},
number = {24},
pages = {},
pmid = {41465025},
issn = {2304-8158},
abstract = {Industry 5.0 involves the integration of advanced technologies, collaboration between humans and intelligent machines, resilience and sustainability, all of which are essential for the advancement of the food services industry. This analysis reviews the scientific literature on Industries 4.0 and 5.0 technologies, whether experimental or implemented, focused on producing large meals in food service. The review has been conducted through a systematic search, covering aspects from consumer ordering and the cooking process to distribution while considering management, quality control, and sustainability. A total of thirty-one articles, published between 2006 and 2025, were selected, with the majority focusing on Industry 5.0 (71%) and a significant proportion on testing phases (77.4%). In the context of Food Service Perspectives, the emphasis has been placed on customer service (32.3%), highlighting the use of Artificial Intelligence (AI)-powered robots for serving customers and AI for service personalization. Sustainability has also received attention (29%), focusing on AI and machine learning (ML) applications aimed at waste reduction. In management (22.6%), AI has been applied to optimize production schedules, enhance menu engineering, and improve overall management. Big Data (BD) and ML were utilized for sales analysis, while Blockchain technology was employed for traceability. Cooking innovations (9.7%) centered on automation, particularly the use of collaborative robots (cobots). For Quality Control (6.4%), AI, along with the Internet of Things (IoT) and Cloud Computing, has been used to monitor the physical aspects of food. The study underscores the importance of strategic investments in technology to optimize processes and resources, personalize services, and ensure food quality, thereby promoting balance and sustainability.},
}
RevDate: 2025-12-31
CmpDate: 2025-12-29
The Challenges of Data Privacy and Cybersecurity in Cloud Computing and Artificial Intelligence (AI) Applications for EQA Organizations.
EJIFCC, 36(4):599-604.
BACKGROUND: The adoption of cloud computing and Artificial Intelligence (AI) technologies offers significant advantages for External Quality Assessment (EQA) providers, including scalability, cost efficiency, and broader accessibility. However, these benefits come with substantial cybersecurity and data privacy challenges.
METHODOLOGY: We performed a systematic literature review on cybersecurity risks in healthcare cloud computing, consulted experts in bioinformatics and cybersecurity, and analyzed real-world hacking incidents targeting EQA organizations. A risk-focused framework was developed to outline key challenges and best practice mitigation strategies.
RESULTS: Ten key challenges were identified: 1. data breaches and unauthorized access, 2. compliance with regulations such as HIPAA and GDPR, 3. data sovereignty and jurisdictional issues, 4. shared infrastructure vulnerabilities, 5. insider threats, 6. data loss and availability concerns, 7. inadequate security measures by cloud providers, 8. application vulnerabilities, 9. limited visibility and control, and 10. the complexity of cloud security management.
CONCLUSION: To fully benefit from cloud computing and AI, EQA providers must implement robust security practices, ensure regulatory compliance, and continuously monitor their environments. Proactive cybersecurity strategies are essential to safeguarding sensitive laboratory data and maintaining operational continuity and accreditation.
Additional Links: PMID-41459181
@article {pmid41459181,
year = {2025},
author = {Haliassos, A and Kasvis, D and Karathanos, S},
title = {The Challenges of Data Privacy and Cybersecurity in Cloud Computing and Artificial Intelligence (AI) Applications for EQA Organizations.},
journal = {EJIFCC},
volume = {36},
number = {4},
pages = {599-604},
pmid = {41459181},
issn = {1650-3414},
abstract = {BACKGROUND: The adoption of cloud computing and Artificial Intelligence (AI) technologies offers significant advantages for External Quality Assessment (EQA) providers, including scalability, cost efficiency, and broader accessibility. However, these benefits come with substantial cybersecurity and data privacy challenges.
METHODOLOGY: We performed a systematic literature review on cybersecurity risks in healthcare cloud computing, consulted experts in bioinformatics and cybersecurity, and analyzed real-world hacking incidents targeting EQA organizations. A risk-focused framework was developed to outline key challenges and best practice mitigation strategies.
RESULTS: Ten key challenges were identified: 1. data breaches and unauthorized access, 2. compliance with regulations such as HIPAA and GDPR, 3. data sovereignty and jurisdictional issues, 4. shared infrastructure vulnerabilities, 5. insider threats, 6. data loss and availability concerns, 7. inadequate security measures by cloud providers, 8. application vulnerabilities, 9. limited visibility and control, and 10. the complexity of cloud security management.
CONCLUSION: To fully benefit from cloud computing and AI, EQA providers must implement robust security practices, ensure regulatory compliance, and continuously monitor their environments. Proactive cybersecurity strategies are essential to safeguarding sensitive laboratory data and maintaining operational continuity and accreditation.},
}
RevDate: 2025-12-27
Evaluation and optimization of resource matching for perception services in power communication networks.
Scientific reports pii:10.1038/s41598-025-31776-7 [Epub ahead of print].
In the cloud-edge-end communication architecture of the new power system, heterogeneous perception services face a fundamental and long-standing demand-supply mismatch with multi-dimensional resources (computing, storage, spectrum/bandwidth, and power) under QoS constraints such as delay, reliability, and accuracy. To uniformly measure and minimize this mismatch under resource-limited and time-varying network conditions, and thereby enable precise and efficient perception, this paper proposes an intelligent perception-service efficiency evaluation and optimization method for electric power information and communication networks based on fit entropy. First, based on the theory of information entropy, fit entropy is defined to measure the degree of matching between the requirements of perception services, such as delay and reliability, and the provision of resources. Then, based on fit entropy, a three-layer matching model spanning the business, logical, and physical domains is constructed, from which a many-to-many matching optimization problem between the business, service function chain, and physical device layers is formulated. Furthermore, a dynamic hypergraph neural network based on a gated attention mechanism is designed to solve this problem: multi-type perception service requests are dynamically mapped to cross-domain hyperedges, and fit entropy is used as the hyperedge weight to quantify the global fit among the three domains. Fit entropy is then optimized by adaptively adjusting the hypergraph structure and the hyperedge weights. The simulation results show that this method can significantly improve the quality of service of perception services and effectively balance network resource utilization and service adaptability.
Additional Links: PMID-41455713
@article {pmid41455713,
year = {2025},
author = {Wei, L and Shang, L and Zhang, M and Li, H and Zhu, X},
title = {Evaluation and optimization of resource matching for perception services in power communication networks.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31776-7},
pmid = {41455713},
issn = {2045-2322},
support = {J2024160//Research on Enhancing Support Capabilities and Optimizing Key Technologies for the Global Information Network/ ; },
abstract = {In the cloud-edge-end communication architecture of the new power system, heterogeneous perception services face a fundamental and long-standing demand-supply mismatch with multi-dimensional resources (computing, storage, spectrum/bandwidth, and power) under QoS constraints such as delay, reliability, and accuracy. To uniformly measure and minimize this mismatch under resource-limited and time-varying network conditions-thereby enabling precise and efficient perception-this paper proposes an intelligent perception-service efficiency evaluation and optimization method for electric power information and communication networks based on fit entropy. First, based on the theory of information entropy, the fit entropy is defined for the degree of matching between the requirements of perception services such as delay and reliability and the provision of resources. Then, based on the fit entropy, a three-layer matching model of business domain- logical domain- physical domain is constructed, and then a many-to-many matching optimization problem between the business, service function chain and physical device is formed. Furthermore, a dynamic hypergraph neural network based on the gated attention mechanism is designed to solve this problem, where the multi-type aware service requests are dynamically mapped to cross-domain hyperedges, and the fit entropy is used as the weight of the hyperedges to quantify the global fit among the three domains. The fit entropy is optimized by adaptively adjusting the hypergraph structure and the weight of the hyperedges. The simulation results show that this method can significantly improve the quality of service of perceptive services and effectively balance the utilization of network resources and service adaptability.},
}
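As a rough illustration of the fit-entropy idea described above, the sketch below computes a Shannon-entropy-style score over normalized demand-to-supply ratios across resource dimensions. The function name, normalization, and interpretation are assumptions made for illustration only, not the authors' formulation.

```python
import numpy as np

def fit_entropy(demand, supply, eps=1e-12):
    """Illustrative 'fit entropy': Shannon entropy of the normalized
    demand-to-supply ratios across resource dimensions. In this toy version,
    a demand profile that is evenly matched by the available supply yields
    high entropy, while a mismatch concentrated in one dimension yields low
    entropy."""
    ratio = np.asarray(demand, dtype=float) / (np.asarray(supply, dtype=float) + eps)
    p = ratio / (ratio.sum() + eps)            # normalize ratios to a distribution
    return float(-(p * np.log2(p + eps)).sum())

# Demand vs. supply across (computing, storage, bandwidth, power)
print(fit_entropy([0.6, 0.2, 0.4, 0.1], [1.0, 1.0, 1.0, 1.0]))
```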
RevDate: 2025-12-26
NF-MORL: a neuro-fuzzy multi-objective reinforcement learning framework for task scheduling in fog computing environments.
Scientific reports pii:10.1038/s41598-025-32235-z [Epub ahead of print].
The proliferation of IoT devices has exerted significant demand on computing systems to process data rapidly, efficiently, and in proximity to its source. Conventional cloud-based methods frequently fall short because of elevated latency and centralized constraints. Fog computing has emerged as a viable option by decentralizing computation to the edge; yet, successfully scheduling work in these dynamic and heterogeneous contexts continues to pose a significant difficulty. This research presents a Neuro-Fuzzy Multi-Objective Reinforcement Learning (NF-MORL) framework, an innovative approach that integrates neuro-fuzzy systems with multi-objective reinforcement learning to tackle task scheduling in fog networks. The concept is straightforward yet impactful: a Takagi-Sugeno fuzzy layer addresses uncertainty and offers interpretable priorities, while a multi-objective actor-critic agent learns to reconcile conflicting objectives (makespan, energy consumption, cost, and reliability) through practical experience. We assessed NF-MORL using empirical data from Google Cluster and EdgeBench. The findings were promising: relative to cutting-edge techniques, our methodology decreased makespan by up to 35%, enhanced energy efficiency by about 30%, reduced operational expenses by up to 40%, and augmented fault tolerance by as much as 37%. These enhancements persisted across various workload sizes, demonstrating that NF-MORL can effectively adapt to fluctuating conditions. Our research indicates that integrating human-like reasoning through fuzzy logic with autonomous learning via reinforcement learning can yield more effective and resilient schedulers for real fog deployments.
Additional Links: PMID-41453898
@article {pmid41453898,
year = {2025},
author = {Yu, X and Tang, L and Mi, J and Long, L and Qin, X and Li, X and Mo, Q},
title = {NF-MORL: a neuro-fuzzy multi-objective reinforcement learning framework for task scheduling in fog computing environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32235-z},
pmid = {41453898},
issn = {2045-2322},
abstract = {The proliferation of IoT devices has exerted significant demand on computing systems to process data rapidly, efficiently, and in proximity to its source. Conventional cloud-based methods frequently fail because of elevated latency and centralized constraints. Fog computing has emerged as a viable option by decentralizing computation to the edge; yet, successfully scheduling work in these dynamic and heterogeneous contexts continues to pose a significant difficulty. This research presents A Neuro-Fuzzy Multi-Objective Reinforcement Learning (NF-MORL), an innovative framework that integrates neuro-fuzzy systems with multi-objective reinforcement learning to tackle task scheduling in fog networks. The concept is straightforward yet impactful: a Takagi-Sugeno fuzzy layer addresses uncertainty and offers interpretable priorities, while a multi-objective actor-critic agent acquires the capacity to reconcile conflicting objectives makespan, energy consumption, cost, and reliability through practical experience. We assessed NF-MORL using empirical data from Google Cluster and EdgeBench. The findings were promising: relative to cutting-edge techniques, our methodology decreased makespan by up to 35%, enhanced energy efficiency by about 30%, reduced operational expenses by up to 40%, and augmented fault tolerance by as much as 37%. These enhancements persisted across various workload sizes, demonstrating that NF-MORL can effectively adjust to fluctuating situations. Our research indicates that integrating human-like reasoning through fuzzy logic with autonomous learning via reinforcement learning can yield more effective and resilient schedulers for actual fog deployments.},
}
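To make the scheduling idea concrete, here is a minimal sketch assuming a toy Takagi-Sugeno-style rule base and a hypothetical weighted scalarization of the four objectives. The membership functions, weights, and node fields are illustrative stand-ins, not taken from the paper.

```python
def fuzzy_urgency(deadline_slack):
    """Toy Takagi-Sugeno-style rule base: map normalized deadline slack in
    [0, 1] to an urgency weight via three triangular memberships and a
    weighted average of crisp rule consequents (illustrative only)."""
    tight = max(0.0, 1.0 - 2.0 * deadline_slack)   # fires when slack is small
    loose = max(0.0, 2.0 * deadline_slack - 1.0)   # fires when slack is large
    medium = 1.0 - tight - loose
    return (tight * 1.0 + medium * 0.6 + loose * 0.3) / (tight + medium + loose)

def score_node(task, node, w=(0.4, 0.3, 0.2, 0.1)):
    """Weighted multi-objective score for placing one task on one fog node:
    lower makespan, energy, and cost are better; higher reliability is better.
    Weights w are hypothetical."""
    makespan = task["cycles"] / node["cpu_rate"]
    urgency = fuzzy_urgency(task["deadline_slack"])
    return (w[0] * urgency * makespan
            + w[1] * node["energy_per_cycle"] * task["cycles"]
            + w[2] * node["cost_per_cycle"] * task["cycles"]
            - w[3] * node["reliability"])

task = {"cycles": 5e8, "deadline_slack": 0.2}
nodes = [{"cpu_rate": 1e9, "energy_per_cycle": 1e-9, "cost_per_cycle": 2e-9, "reliability": 0.95},
         {"cpu_rate": 5e8, "energy_per_cycle": 5e-10, "cost_per_cycle": 1e-9, "reliability": 0.99}]
print(min(nodes, key=lambda n: score_node(task, n)))   # node with the best (lowest) score
```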
RevDate: 2026-01-05
CmpDate: 2025-12-26
Analyzing Large Connectome Graphs With BossDB Network Tools.
Current protocols, 5(12):e70273.
Modern connectomics enables large-scale, comparative network neuroscience across individuals, species, development, and evolution. The field now regularly produces extensive maps of neural connectivity exceeding hundreds of millions of synapses in continuous volumes. When connectomes are deposited in central archives such as BossDB with standardized metadata, researchers can pose previously intractable questions about neuronal networks. Here, we present step-by-step protocols for connectome dataset discovery and access, scalable graph construction and analysis, and reproducible comparative connectomics using BossDB, Motif Studio, DotMotif, Neuroglancer, neuPrint, and Python-based workflows. These protocols target bench neuroscientists and computational biologists and emphasize replicability, cloud-friendly options, and publication-quality visualization. © 2025 Wiley Periodicals LLC. Basic Protocol 1: Discovering connectome datasets and computing summary statistics with BossDB and Motif Studio Basic Protocol 2: Writing queries with DotMotif Basic Protocol 3: Querying known network motifs locally with DotMotif Support Protocol 1: Provisioning ad hoc graph databases for large-scale graph analysis Support Protocol 2: Querying structures and systems in the cloud with neuPrint Basic Protocol 4: Viewing anatomical motif features with BossDB and Neuroglancer.
Additional Links: PMID-41451919
@article {pmid41451919,
year = {2025},
author = {Matelsky, JK and Martinez, H and Xenes, D and Robinette, M and Panigrahi, A and Wester, B},
title = {Analyzing Large Connectome Graphs With BossDB Network Tools.},
journal = {Current protocols},
volume = {5},
number = {12},
pages = {e70273},
doi = {10.1002/cpz1.70273},
pmid = {41451919},
issn = {2691-1299},
mesh = {*Connectome/methods ; Humans ; *Software ; Nerve Net/physiology ; Animals ; Databases, Factual ; Computational Biology/methods ; },
abstract = {Modern connectomics enables large-scale, comparative network neuroscience across individuals, species, development, and evolution. The field now regularly produces extensive maps of neural connectivity exceeding hundreds of millions of synapses in continuous volumes. When connectomes are deposited in central archives such as BossDB with standardized metadata, researchers can pose previously intractable questions about neuronal networks. Here, we present step-by-step protocols for connectome dataset discovery and access, scalable graph construction and analysis, and reproducible comparative connectomics using BossDB, Motif Studio, DotMotif, Neuroglancer, neuPrint, and Python-based workflows. These protocols target bench neuroscientists and computational biologists and emphasize replicability, cloud-friendly options, and publication-quality visualization. © 2025 Wiley Periodicals LLC. Basic Protocol 1: Discovering connectome datasets and computing summary statistics with BossDB and Motif Studio Basic Protocol 2: Writing queries with DotMotif Basic Protocol 3: Querying known network motifs locally with DotMotif Support Protocol 1: Provisioning ad hoc graph databases for large-scale graph analysis Support Protocol 2: Querying structures and systems in the cloud with neuPrint Basic Protocol 4: Viewing anatomical motif features with BossDB and Neuroglancer.},
}
MeSH Terms:
*Connectome/methods
Humans
*Software
Nerve Net/physiology
Animals
Databases, Factual
Computational Biology/methods
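As a small local counterpart to the DotMotif-style motif queries mentioned in these protocols, the following sketch counts feed-forward triangles in a toy directed graph with NetworkX. The graph and the brute-force search are purely illustrative; BossDB, DotMotif, and neuPrint handle such queries declaratively and at scale.

```python
import networkx as nx
from itertools import permutations

# Toy directed connectome: nodes are neurons, edges are synaptic connections.
g = nx.DiGraph([("n1", "n2"), ("n2", "n3"), ("n1", "n3"), ("n3", "n4")])

def feedforward_triangles(graph):
    """Enumerate A->B, B->C, A->C motifs by brute force (fine for small
    graphs; motif-query engines avoid this combinatorial scan)."""
    hits = []
    for a, b, c in permutations(graph.nodes, 3):
        if graph.has_edge(a, b) and graph.has_edge(b, c) and graph.has_edge(a, c):
            hits.append((a, b, c))
    return hits

print(feedforward_triangles(g))   # [('n1', 'n2', 'n3')]
```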
RevDate: 2025-12-28
Scalable, reproducible, and cost-effective processing of large-scale medical imaging datasets.
Proceedings of SPIE--the International Society for Optical Engineering, 13411:.
Curating, processing, and combining large-scale medical imaging datasets from national studies is a non-trivial task due to the intense computation and data throughput required, the variability of acquired data, and the associated financial overhead. Existing platforms and tools for large-scale data curation, processing, and storage have difficulty achieving a viable cost-to-scale ratio of computation speed for research purposes, being either too slow or too expensive. Additionally, managing and processing large data consistently in a team-driven manner is non-trivial. We design a BIDS-compliant method for an efficient and robust data processing pipeline of large-scale diffusion-weighted and T1-weighted MRI data compatible with low-cost, high-efficiency computing systems. Our method automates the querying of data available for processing and runs processes in a consistent and reproducible manner with long-term stability, while using heterogeneous low-cost computational resources and storage systems for efficient processing and data transfer. We demonstrate how our organizational structure permits efficiency in a semi-automated data processing pipeline and show how our method is comparable in processing time to cloud-based computation while being almost 20 times more cost-effective. Our design allows for fast data throughput and low latency to reduce the time for data transfer between storage servers and computation servers, achieving an average of 0.60 Gb/s compared to 0.33 Gb/s for cloud-based processing methods. The design of our workflow engine permits rapid process execution while maintaining the flexibility to adapt to newly acquired data.
Additional Links: PMID-41450588
@article {pmid41450588,
year = {2025},
author = {Kim, ME and Ramadass, K and Gao, C and Kanakaraj, P and Newlin, NR and Rudravaram, G and Schilling, KG and Dewey, BE and Archer, D and Hohman, TJ and Li, Z and Bao, S and Landman, BA and Khairi, NM},
title = {Scalable, reproducible, and cost-effective processing of large-scale medical imaging datasets.},
journal = {Proceedings of SPIE--the International Society for Optical Engineering},
volume = {13411},
number = {},
pages = {},
pmid = {41450588},
issn = {0277-786X},
support = {UL1 TR000445/TR/NCATS NIH HHS/United States ; R01 EB017230/EB/NIBIB NIH HHS/United States ; U01 AG068057/AG/NIA NIH HHS/United States ; K01 EB032898/EB/NIBIB NIH HHS/United States ; U24 AG074855/AG/NIA NIH HHS/United States ; S10 OD023680/OD/NIH HHS/United States ; UL1 TR002243/TR/NCATS NIH HHS/United States ; K01 AG073584/AG/NIA NIH HHS/United States ; R01 AG059716/AG/NIA NIH HHS/United States ; S10 OD020154/OD/NIH HHS/United States ; },
abstract = {Curating, processing, and combining large-scale medical imaging datasets from national studies is a non-trivial task due to the intense computation and data throughput required, variability of acquired data, and associated financial overhead. Existing platforms or tools for large-scale data curation, processing, and storage have difficulty achieving a viable cost-to-scale ratio of computation speed for research purposes, either being too slow or too expensive. Additionally, management and consistency of processing large data in a team-driven manner is a non-trivial task. We design a BIDS-compliant method for an efficient and robust data processing pipeline of large-scale diffusion-weighted and T1-weighted MRI data compatible with low-cost, high-efficiency computing systems. Our method accomplishes automated querying of data available for processing and process running in a consistent and reproducible manner that has long-term stability, while using heterogenous low-cost computational resources and storage systems for efficient processing and data transfer. We demonstrate how our organizational structure permits efficiency in a semi-automated data processing pipeline and show how our method is comparable in processing time to cloud-based computation while being almost 20 times more cost-effective. Our design allows for fast data throughput speeds and low latency to reduce the time for data transfer between storage servers and computation servers, achieving an average of 0.60 Gb/s compared to 0.33 Gb/s for using cloud-based processing methods. The design of our workflow engine permits quick process running while maintaining flexibility to adapt to newly acquired data.},
}
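A minimal sketch of the "automated querying of data available for processing" step, assuming a conventional BIDS layout with a derivatives/&lt;pipeline&gt; folder. The directory paths and pipeline label are hypothetical, not the authors' implementation.

```python
from pathlib import Path

def subjects_pending(bids_root: str, pipeline: str = "dwi_pipeline"):
    """List BIDS subjects that have raw data but no derivatives yet for a
    given (hypothetical) pipeline, so only unprocessed subjects are queued."""
    root = Path(bids_root)
    done = {p.name for p in (root / "derivatives" / pipeline).glob("sub-*")}
    return sorted(p.name for p in root.glob("sub-*") if p.is_dir() and p.name not in done)

# Dispatch only the pending subjects to available compute (path is an example).
for sub in subjects_pending("/data/study_bids"):
    print(f"queue {sub} for processing")
```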
RevDate: 2025-12-27
A scalable scheduling and resource management framework for cloud-native B2B applications.
Scientific reports, 15(1):44500.
In modern cloud computing environments, customers increasingly depend on on-demand resource provisioning to handle dynamic workloads. However, fluctuations in job arrival rates can result in prolonged queue times, which negatively affect overall system performance. Although existing scheduling algorithms provide efficient job management, they often fail to account for the combined impact of queue delays and the need for flexible resource provisioning-particularly in business-critical applications. In order to tackle these issues, the paper proposes a new Optimized Job Scheduling and Resource Scaling (OJSRS) algorithm designed to improve job execution efficiency and support elastic resource management in cloud environments. The OJSRS algorithm integrates two key components: Tree-based Job Scheduling (TJS) and Automated Resource Scaling and Scheduling (ARSS). The TJS component constructs a hierarchical structure that concurrently maps incoming jobs to the most suitable Virtual Machines (VMs), thereby minimizing queue delays. Meanwhile, ARSS adjusts resource allocation dynamically, increasing or decreasing capacity according to workload requirements and cloud service provider policies, enabling responsive and adaptive provisioning. Experimental results show that the OJSRS algorithm increases resource utilization by approximately 5-10% and accelerates job completion through proactive resource scaling. This approach provides a significant performance advantage for cloud-native business applications that require both efficiency and scalability.
Additional Links: PMID-41444306
@article {pmid41444306,
year = {2025},
author = {Komarasamy, D and Rajavel, R and Harimoorthy, K and Pitchai, A},
title = {A scalable scheduling and resource management framework for cloud-native B2B applications.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {44500},
pmid = {41444306},
issn = {2045-2322},
abstract = {In modern cloud computing environments, customers increasingly depend on on-demand resource provisioning to handle dynamic workloads. However, fluctuations in job arrival rates can result in prolonged queue times, which negatively affect overall system performance. Although existing scheduling algorithms provide efficient job management, they often fail to account for the combined impact of queue delays and the need for flexible resource provisioning-particularly in business-critical applications. In order to tackle these issues, the paper proposes a new Optimized Job Scheduling and Resource Scaling (OJSRS) algorithm designed to improve job execution efficiency and support elastic resource management in cloud environments. The OJSRS algorithm integrates two key components: Tree-based Job Scheduling (TJS) and Automated Resource Scaling and Scheduling (ARSS). The TJS component constructs a hierarchical structure that concurrently maps incoming jobs to the most suitable Virtual Machines (VMs), thereby minimizing queue delays. Meanwhile, ARSS adjusts resource allocation dynamically, increasing or decreasing capacity according to workload requirements and cloud service provider policies, enabling responsive and adaptive provisioning. Experimental results show that the OJSRS algorithm increases resource utilization by approximately 5-10% and accelerates job completion through proactive resource scaling. This approach provides a significant performance advantage for cloud-native business applications that require both efficiency and scalability.},
}
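The following toy sketch conveys the flavour of the approach: assign each job to the VM that frees up earliest, and scale out when the predicted wait grows too long. The heap-based mapping and the max_wait threshold are stand-ins for illustration, not the actual TJS or ARSS algorithms.

```python
import heapq

def schedule(jobs, vm_count, max_wait=30.0):
    """Toy stand-in for OJSRS: assign each job (arrival, runtime) to the VM
    that becomes free earliest; add a VM when the predicted wait exceeds
    max_wait (a simple scale-out rule)."""
    vms = [0.0] * vm_count                 # time at which each VM becomes free
    heapq.heapify(vms)
    assignments = []
    for arrival, runtime in sorted(jobs):
        free_at = heapq.heappop(vms)
        wait = max(0.0, free_at - arrival)
        if wait > max_wait:                # scale out: keep the busy VM, add a new one
            heapq.heappush(vms, free_at)
            free_at, wait = arrival, 0.0
        heapq.heappush(vms, max(free_at, arrival) + runtime)
        assignments.append((arrival, wait))
    return assignments

print(schedule([(0, 50), (1, 40), (2, 60), (3, 45)], vm_count=2))
```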
RevDate: 2025-12-23
CmpDate: 2025-12-23
Digital twins: A new paradigm for innovation in clinical research and medical affairs.
The Malaysian journal of pathology, 47(3):355-368.
Digital Twin (DT) technology, originally conceptualised in engineering, has recently emerged as a transformative paradigm in healthcare, promising to redefine the generation, interpretation, and application of biomedical evidence. DTs enable real-time simulation, prediction, and optimisation of clinical outcomes. The review aims to elucidate how DTs may enhance methodological efficiency, ethical standards, and strategic innovation in biomedical science, while addressing their epistemological and regulatory challenges. A DT is a dynamic, data-driven virtual replica of a biological entity or clinical process, continuously updated through real-time data to simulate, predict, and optimise outcomes. Originating in engineering, DTs are now entering healthcare as enablers of predictive, preventive, and precision medicine. Supported by Internet of Things (IoT) technologies, cloud computing, and machine learning, DTs integrate heterogeneous data-genomic, physiological, behavioural, and environmental-into adaptive models capable of mirroring and anticipating patient trajectories. In clinical research, they enable synthetic control arms and in silico trials, reducing recruitment barriers, improving statistical power, and addressing ethical issues associated with placebo use. The recent qualification of DT-based methodologies such as PROCOVA™ by the EMA and FDA confirms their growing scientific and regulatory credibility. DTs are redefining Medical Affairs, strengthening its role as a bridge between data science and clinical practice. They enable patient-level insights and personalised scientific communication, transforming Medical Affairs into a predictive, data-driven discipline that supports evidence-based and patient-centered decisions.
Additional Links: PMID-41432469
@article {pmid41432469,
year = {2025},
author = {Torresi, G and Verna, R},
title = {Digital twins: A new paradigm for innovation in clinical research and medical affairs.},
journal = {The Malaysian journal of pathology},
volume = {47},
number = {3},
pages = {355-368},
pmid = {41432469},
issn = {0126-8635},
mesh = {Humans ; *Biomedical Research/methods/trends ; Precision Medicine/methods ; Inventions ; },
abstract = {Digital Twin (DT) technology, originally conceptualised in engineering, has recently emerged as a transformative paradigm in healthcare, promising to redefine the generation, interpretation, and application of biomedical evidence. DTs enable real-time simulation, prediction, and optimisation of clinical outcomes. The review aims to elucidate how DTs may enhance methodological efficiency, ethical standards, and strategic innovation in biomedical science, while addressing their epistemological and regulatory challenges. A DT is a dynamic, data-driven virtual replica of a biological entity or clinical process, continuously updated through real-time data to simulate, predict, and optimise outcomes. Originating in engineering, DTs are now entering healthcare as enablers of predictive, preventive, and precision medicine. Supported by Internet of Things (IoT) technologies, cloud computing, and machine learning, DTs integrate heterogeneous data-genomic, physiological, behavioural, and environmental-into adaptive models capable of mirroring and anticipating patient trajectories. In clinical research, they enable synthetic control arms and in silico trials, reducing recruitment barriers, improving statistical power, and addressing ethical issues associated with placebo use. The recent qualification of DT-based methodologies such as PROCOVA™ by the EMA and FDA confirms their growing scientific and regulatory credibility. DTs are redefining Medical Affairs, strengthening its role as a bridge between data science and clinical practice. They enable patient-level insights and personalised scientific communication, transforming Medical Affairs into a predictive, data-driven discipline that supports evidence-based and patient-centered decisions.},
}
MeSH Terms:
Humans
*Biomedical Research/methods/trends
Precision Medicine/methods
Inventions
RevDate: 2025-12-20
Federated learning-based trust and energy-aware routing in Fog-Cloud computing environments for the Internet of Things.
Scientific reports pii:10.1038/s41598-025-32010-0 [Epub ahead of print].
The rapid convergence of Fog, Cloud, and Internet of Things (IoT) technologies has introduced a new era of distributed intelligence and real-time data processing. However, ensuring secure, reliable, and energy-efficient communication across heterogeneous and resource-constrained nodes remains a fundamental challenge. This paper introduces a novel framework entitled Federated Learning-Based Trust and Energy-Aware Routing (FL-TEAR), designed to enhance routing performance in hybrid Fog-Cloud-IoT environments through collaborative intelligence, adaptive trust management, and dynamic energy optimization. The FL-TEAR system replaces static trust evaluation with a federated learning paradigm, allowing IoT and fog nodes to cooperatively train a global trust-energy model without exposing raw data. Trust scores are continuously refined based on behavioral patterns, communication reliability, and residual energy, while routing paths are selected using a composite fitness function integrating trustworthiness, energy availability, latency, and link stability. The hierarchical architecture, spanning IoT, fog, and cloud layers, reduces communication overhead, supports scalability, and preserves privacy. Simulation results confirm that FL-TEAR significantly outperforms state-of-the-art baselines such as E-ODMA (Energy-Efficient On-Demand Multipath Adaptive) + AOMDV (Ad hoc On-Demand Multipath Distance Vector), TAGA (Trust-Aware Geographic Routing Algorithm), and EigenTrust, achieving approximately 23% higher trust accuracy, 23% lower energy consumption, approximately 13% greater packet delivery ratio, and 37% lower delay. These findings demonstrate that federated learning can effectively balance security, sustainability, and quality of service (QoS) in large-scale IoT ecosystems, establishing FL-TEAR as a viable pathway toward intelligent, secure, and energy-efficient next-generation networks.
Additional Links: PMID-41422288
@article {pmid41422288,
year = {2025},
author = {Wang, F and Wang, K},
title = {Federated learning-based trust and energy-aware routing in Fog-Cloud computing environments for the Internet of Things.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32010-0},
pmid = {41422288},
issn = {2045-2322},
abstract = {The rapid convergence of Fog, Cloud, and Internet of Things (IoT) technologies has introduced a new era of distributed intelligence and real-time data processing. However, ensuring secure, reliable, and energy-efficient communication across heterogeneous and resource-constrained nodes remains a fundamental challenge. This paper introduces a novel framework entitled Federated Learning-Based Trust and Energy-Aware Routing (FL-TEAR), designed to enhance routing performance in hybrid Fog-Cloud-IoT environments through collaborative intelligence, adaptive trust management, and dynamic energy optimization. The FL-TEAR system replaces static trust evaluation with a federated learning paradigm, allowing IoT and fog nodes to cooperatively train a global trust-energy model without exposing raw data. Trust scores are continuously refined based on behavioral patterns, communication reliability, and residual energy, while routing paths are selected using a composite fitness function integrating trustworthiness, energy availability, latency, and link stability. The hierarchical architecture, spanning IoT, fog, and cloud layers, reduces communication overhead, supports scalability, and preserves privacy. Simulation results confirm that FL-TEAR significantly outperforms state-of-the-art baselines such as E-ODMA (Energy-Efficient On-Demand Multipath Adaptive) + AOMDV (Ad hoc On-Demand Multipath Distance Vector), TAGA (Trust-Aware Geographic Routing Algorithm), and EigenTrust, achieving approximately 23% higher trust accuracy, 23% lower energy consumption, approximately 13% greater packet delivery ratio, and 37% lower delay. These findings demonstrate that federated learning can effectively balance security, sustainability, and quality of service (QoS) in large-scale IoT ecosystems, establishing FL-TEAR as a viable pathway toward intelligent, secure, and energy-efficient next-generation networks.},
}
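As an illustration of the composite fitness function described above, the sketch below scores candidate next hops from trust, residual energy, link stability, and latency. The weights and node fields are hypothetical examples, not the paper's parameters.

```python
def route_fitness(node, w_trust=0.35, w_energy=0.30, w_delay=0.20, w_link=0.15):
    """Illustrative composite fitness for next-hop selection: higher trust,
    residual energy, and link stability raise the score; higher normalized
    latency lowers it. Weights are hypothetical."""
    return (w_trust * node["trust"]
            + w_energy * node["residual_energy"]
            + w_link * node["link_stability"]
            - w_delay * node["latency_norm"])

candidates = [
    {"id": "fog-1", "trust": 0.9, "residual_energy": 0.4, "link_stability": 0.8, "latency_norm": 0.2},
    {"id": "fog-2", "trust": 0.7, "residual_energy": 0.9, "link_stability": 0.6, "latency_norm": 0.1},
]
print(max(candidates, key=route_fitness)["id"])   # 'fog-2'
```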
RevDate: 2025-12-20
High-resolution landfill characterization using SAR remote sensing and cloud-based processing.
Scientific reports pii:10.1038/s41598-025-32908-9 [Epub ahead of print].
Solid waste management in developing countries such as India faces persistent challenges due to weak monitoring systems and the absence of reliable reporting mechanisms for landfill statistics. To address this gap, this study develops a remote sensing methodology that integrates Python programming with the Sentinel Application Platform (SNAP) to generate Digital Elevation Models (DEMs) from Sentinel-1 synthetic aperture radar (SAR) imagery for quantifying landfill characteristics. Key parameters, including waste height and volumetric estimates, were extracted from satellite observations and processed through Google Earth Engine (GEE), enabling efficient large-scale analysis. A total of 80 landfill sites distributed across India were examined, providing the first nationwide assessment of landfill volume using a uniform and replicable framework. Field validation was conducted at two representative sites, Gondiya Landfill and Ujjain Ring Road Trenching Ground, through drone surveys and Differential Global Positioning System (DGPS) measurements. The evaluation showed deviations of 21.12% and 0.12% in height, 0.7% and 0.65% in area delineation, and 20.21% and 0.8% in volume for Gondiya and Ujjain, respectively, confirming the reliability of the proposed approach. These results demonstrate that SAR-based DEMs offer a cost-effective and scalable solution for systematic, near real-time monitoring of landfills across large regions. The framework not only supports capacity planning, environmental assessments, and policy formulation but also provides a pathway for developing countries to transition toward data-driven waste management strategies in the context of rapid urbanization and increasing waste generation.
Additional Links: PMID-41422276
@article {pmid41422276,
year = {2025},
author = {Agrawal, S and Rakkasagi, S and Goyal, MK},
title = {High-resolution landfill characterization using SAR remote sensing and cloud-based processing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32908-9},
pmid = {41422276},
issn = {2045-2322},
abstract = {Solid waste management in developing countries such as India faces persistent challenges due to weak monitoring systems and the absence of reliable reporting mechanisms for landfill statistics. To address this gap, this study develops a remote sensing methodology that integrates Python programming with the Sentinel Application Platform (SNAP) to generate Digital Elevation Models (DEMs) from Sentinel-1 synthetic aperture radar (SAR) imagery for quantifying landfill characteristics. Key parameters, including waste height and volumetric estimates, were extracted from satellite observations and processed through Google Earth Engine (GEE), enabling efficient large-scale analysis. A total of 80 landfill sites distributed across India were examined, providing the first nationwide assessment of landfill volume using a uniform and replicable framework. Field validation was conducted at two representative sites, Gondiya Landfill and Ujjain Ring Road Trenching Ground, through drone surveys and Differential Global Positioning System (DGPS) measurements. The evaluation showed deviations of 21.12% and 0.12% in height, 0.7% and 0.65% in area delineation, and 20.21% and 0.8% in volume for Gondiya and Ujjain, respectively, confirming the reliability of the proposed approach. These results demonstrate that SAR-based DEMs offer a cost-effective and scalable solution for systematic, near real-time monitoring of landfills across large regions. The framework not only supports capacity planning, environmental assessments, and policy formulation but also provides a pathway for developing countries to transition toward data-driven waste management strategies in the context of rapid urbanization and increasing waste generation.},
}
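A minimal sketch of the volumetric step, assuming a post-filling DEM and a pre-filling baseline surface on the same grid: waste volume is the positive height difference summed over pixels, scaled by the pixel footprint. The arrays and pixel size below are made-up examples, not the study's data.

```python
import numpy as np

def landfill_volume(dem, baseline, pixel_area_m2=100.0):
    """Estimate waste volume (m^3) as the positive height of the landfill DEM
    above a baseline terrain model, summed over pixels; pixel_area_m2 is the
    ground footprint of one DEM cell (10 m x 10 m here)."""
    height = np.clip(np.asarray(dem) - np.asarray(baseline), 0.0, None)
    return float(height.sum() * pixel_area_m2)

dem = np.array([[12.0, 15.0], [14.0, 11.0]])       # post-filling surface (m)
baseline = np.array([[10.0, 10.0], [10.0, 10.0]])  # pre-filling terrain (m)
print(landfill_volume(dem, baseline))              # 1200.0 m^3
```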
RevDate: 2025-12-20
FDA-Approved AI Solutions in Dental Imaging: A Narrative Review of Applications, Evidence, and Outlook.
International dental journal, 76(1):109315 pii:S0020-6539(25)08598-3 [Epub ahead of print].
INTRODUCTION AND AIMS: Artificial intelligence (AI) has rapidly transformed dental imaging by enabling automated detection, diagnosis, and analysis of various dental conditions. However, a comprehensive synthesis of United States Food and Drug Administration (FDA)-cleared, clinically validated AI solutions in dental imaging remains limited. This review aims to catalog all standalone, cloud-based dental AI platforms with FDA clearance, highlighting their clinical applications, performance outcomes, and supporting evidence to guide evidence-based integration.
METHODS: A two-phase systematic search was conducted. In the first phase, searches of U.S. FDA regulatory databases (510[k], De Novo, and PMA) were performed through July 2025 to identify standalone, cloud-based dental AI imaging devices cleared or authorized for autonomous or semi-autonomous analysis. In the second phase, PubMed, Web of Science, and Google Scholar were systematically searched to retrieve studies assessing the performance or clinical utility of the identified platforms. Two independent reviewers performed data screening and extraction, with discrepancies resolved by a third reviewer.
RESULTS: Thirteen companies were identified as offering twenty-nine FDA-cleared AI products for dental imaging. These solutions addressed diverse clinical tasks, including caries detection, periodontal disease assessment, cephalometric analysis, multi-pathology diagnostics, automated dental charting, and three-dimensional segmentation. Performance outcomes reported by the FDA demonstrated high accuracy, sensitivity, and specificity across most platforms, particularly for caries detection, periodontal disease measurement, and cephalometric analysis. Among these, Relu Creator and WebCeph were supported by the highest number of peer-reviewed publications, whereas several newer platforms lacked independent clinical validation.
CONCLUSION: Standalone, FDA-cleared AI platforms represent a paradigm shift in dental imaging, providing clinically validated tools for diagnosis, treatment planning, and patient monitoring. By systematically cataloging these solutions, this review delivers an evidence-based reference for clinicians and researchers, supporting informed adoption and identifying areas for future investigation.
Additional Links: PMID-41421004
@article {pmid41421004,
year = {2025},
author = {Shujaat, S and Aljadaan, H and Alrashid, H and Aboalela, AA and Riaz, M},
title = {FDA-Approved AI Solutions in Dental Imaging: A Narrative Review of Applications, Evidence, and Outlook.},
journal = {International dental journal},
volume = {76},
number = {1},
pages = {109315},
doi = {10.1016/j.identj.2025.109315},
pmid = {41421004},
issn = {1875-595X},
abstract = {INTRODUCTION AND AIMS: Artificial intelligence (AI) has rapidly transformed dental imaging by enabling automated detection, diagnosis, and analysis of various dental conditions. However, a comprehensive synthesis of United States Food and Drug Administration (FDA)-cleared, clinically validated AI solutions in dental imaging remains limited. This review aims to catalog all standalone, cloud-based dental AI platforms with FDA clearance, highlighting their clinical applications, performance outcomes, and supporting evidence to guide evidence-based integration.
METHODS: A two-phase systematic search was conducted. In the first phase, searches of U.S. FDA regulatory databases (510[k], De Novo, and PMA) were performed through July 2025 to identify standalone, cloud-based dental AI imaging devices cleared or authorized for autonomous or semi-autonomous analysis. In the second phase, PubMed, Web of Science, and Google Scholar were systematically searched to retrieve studies assessing the performance or clinical utility of the identified platforms. Two independent reviewers performed data screening and extraction, with discrepancies resolved by a third reviewer.
RESULTS: Thirteen companies were identified as offering twenty-nine FDA-cleared AI products for dental imaging. These solutions addressed diverse clinical tasks, including caries detection, periodontal disease assessment, cephalometric analysis, multi-pathology diagnostics, automated dental charting, and three-dimensional segmentation. Performance outcomes reported by the FDA demonstrated high accuracy, sensitivity, and specificity across most platforms, particularly for caries detection, periodontal disease measurement, and cephalometric analysis. Among these, Relu Creator and WebCeph were supported by the highest number of peer-reviewed publications, whereas several newer platforms lacked independent clinical validation.
CONCLUSION: Standalone, FDA-cleared AI platforms represent a paradigm shift in dental imaging, providing clinically validated tools for diagnosis, treatment planning, and patient monitoring. By systematically cataloging these solutions, this review delivers an evidence-based reference for clinicians and researchers, supporting informed adoption and identifying areas for future investigation.},
}
RevDate: 2025-12-28
CmpDate: 2025-12-26
Democratising high performance computing for bioinformatics through serverless cloud computing: A case study on CRISPR-Cas9 guide RNA design with Crackling Cloud.
PLoS computational biology, 21(12):e1013819.
Organisations are challenged when meeting the computational requirements of large-scale bioinformatics analyses using their own resources. Cloud computing has democratised access to large-scale resources, and to reduce the barriers of working with large-scale compute, leading cloud vendors offer serverless computing, a low-maintenance and low-cost model that provides ample resources for highly scalable software applications. While serverless computing has broad use, its adoption in bioinformatics remains poor. Here, we demonstrate the most extensive use of high-performance serverless computing for bioinformatics by applying the available technologies to CRISPR-Cas9 guide RNA (gRNA) design. Our adaptation of the established gRNA design tool, named Crackling, implements a novel, cloud-native, serverless high-performance computing environment using technologies made available by Amazon Web Services (AWS). The architecture, which is compatible with technologies from all leading cloud vendors, and the AWS implementation contribute to reducing the barrier to large computational capacity in bioinformatics and in CRISPR-Cas9 gRNA design. Crackling Cloud can be deployed to any AWS account and is freely available on GitHub under the BSD 3-clause license: https://github.com/bmds-lab/Crackling-AWS.
Additional Links: PMID-41417859
@article {pmid41417859,
year = {2025},
author = {Bradford, J and Joy, D and Winsen, M and Meurant, N and Wilkins, M and Wilson, LOW and Bauer, DC and Perrin, D},
title = {Democratising high performance computing for bioinformatics through serverless cloud computing: A case study on CRISPR-Cas9 guide RNA design with Crackling Cloud.},
journal = {PLoS computational biology},
volume = {21},
number = {12},
pages = {e1013819},
pmid = {41417859},
issn = {1553-7358},
mesh = {*Cloud Computing ; *Computational Biology/methods ; *RNA, Guide, CRISPR-Cas Systems/genetics ; *CRISPR-Cas Systems/genetics ; Software ; },
abstract = {Organisations are challenged when meeting the computational requirements of large-scale bioinformatics analyses using their own resources. Cloud computing has democratised large-scale resources, and to reduce the barriers of working with large-scale compute, leading cloud vendors offer serverless computing, a low-maintenance and low-cost model that provides ample resources for highly scalable software applications. While serverless computing has broad use, its adoption in bioinformatics remains poor. Here, we demonstrate the most extensive use of high-performance serverless computing for bioinformatics by applying the available technologies to CRISPR-Cas9 guide RNA (gRNA) design. Our adaptation of the established gRNA design tool, named Crackling, implements a novel, cloud-native and serverless-based, high-performance computing environment using technologies made available by Amazon Web Services (AWS). The architecture, compatible with technologies from all leading cloud vendors, and the AWS implementation, contributes to an effort of reducing the barrier to large computational capacity in bioinformatics and for CRISPR-Cas9 gRNA design. Crackling Cloud can be deployed to any AWS account, and is freely available on GitHub under the BSD 3-clause license: https://github.com/bmds-lab/Crackling-AWS.},
}
MeSH Terms:
*Cloud Computing
*Computational Biology/methods
*RNA, Guide, CRISPR-Cas Systems/genetics
*CRISPR-Cas Systems/genetics
Software
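To illustrate the serverless pattern in the simplest possible terms, here is a hypothetical Lambda-style handler that scans an input sequence for SpCas9 targets, i.e. a 20-nt protospacer immediately followed by an NGG PAM. It sketches only the deployment shape under those assumptions; it is not Crackling Cloud's actual scoring code.

```python
import re

def handler(event, context=None):
    """Lambda-style entry point (illustrative only): return the candidate
    SpCas9 protospacers (20 nt followed by an NGG PAM) found in the input."""
    seq = event["sequence"].upper()
    candidates = [m.group(1) for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", seq)]
    return {"count": len(candidates), "candidates": candidates}

print(handler({"sequence": "TTT" + "ACGT" * 5 + "TGGAAA"}))
```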
RevDate: 2025-12-30
CmpDate: 2025-12-29
ImmunoNX: a robust bioinformatics workflow to support personalized neoantigen vaccine trials.
ArXiv.
Personalized neoantigen vaccines represent a promising immunotherapy approach that harnesses tumor-specific antigens to stimulate anti-tumor immune responses. However, the design of these vaccines requires sophisticated computational workflows to predict and prioritize neoantigen candidates from patient sequencing data, coupled with rigorous review to ensure candidate quality. While numerous computational tools exist for neoantigen prediction, to our knowledge, there are no established protocols detailing the complete process from raw sequencing data through systematic candidate selection. Here, we present ImmunoNX (Immunogenomics Neoantigen eXplorer), an end-to-end protocol for neoantigen prediction and vaccine design that has supported over 185 patients across 11 clinical trials. The workflow integrates tumor DNA/RNA and matched normal DNA sequencing data through a computational pipeline built with Workflow Definition Language (WDL) and executed via Cromwell on Google Cloud Platform. ImmunoNX employs consensus-based variant calling, in-silico HLA typing, and pVACtools for neoantigen prediction. Additionally, we describe a two-stage immunogenomics review process with prioritization of neoantigen candidates, enabled by pVACview, followed by manual assessment of variants using the Integrative Genomics Viewer (IGV). This workflow enables vaccine design in under three months. We demonstrate the protocol using the HCC1395 breast cancer cell line dataset, identifying 78 high-confidence neoantigen candidates from 322 initial predictions. Although demonstrated here for vaccine development, this workflow can be adapted for diverse neoantigen therapies and experiments. Therefore, this protocol provides the research community with a reproducible, version-controlled framework for designing personalized neoantigen vaccines, supported by detailed documentation, example datasets, and open-source code.
Additional Links: PMID-41415611
@article {pmid41415611,
year = {2025},
author = {Singhal, K and Schmidt, E and Kiwala, S and Goedegebuure, SP and Miller, CA and Xia, H and Cotto, KC and Li, J and Yao, J and Hendrickson, L and Richters, MM and Hoang, MH and Khanfar, M and Risch, I and O'Laughlin, S and Myers, N and Vickery, T and Davies, SR and Du, F and Mooney, TB and Coffman, A and Chang, GS and Hundal, J and Garza, JE and McLellan, MD and McMichael, JF and Maruska, J and Inabinett, WB and Hoos, WA and Karchin, R and Johanns, TM and Dunn, GP and Pachynski, RK and Fehniger, TA and Ward, JP and Foltz, JA and Gillanders, WE and Griffith, OL and Griffith, M},
title = {ImmunoNX: a robust bioinformatics workflow to support personalized neoantigen vaccine trials.},
journal = {ArXiv},
volume = {},
number = {},
pages = {},
pmid = {41415611},
issn = {2331-8422},
support = {T32 CA009621/CA/NCI NIH HHS/United States ; U01 CA248235/CA/NCI NIH HHS/United States ; U01 CA209936/CA/NCI NIH HHS/United States ; U01 CA231844/CA/NCI NIH HHS/United States ; T32 GM139774/GM/NIGMS NIH HHS/United States ; R00 HG007940/HG/NHGRI NIH HHS/United States ; P30 CA091842/CA/NCI NIH HHS/United States ; U24 CA237719/CA/NCI NIH HHS/United States ; P50 CA196510/CA/NCI NIH HHS/United States ; R01 CA240983/CA/NCI NIH HHS/United States ; UL1 TR002345/TR/NCATS NIH HHS/United States ; P50 CA272213/CA/NCI NIH HHS/United States ; K22 CA282364/CA/NCI NIH HHS/United States ; },
abstract = {Personalized neoantigen vaccines represent a promising immunotherapy approach that harnesses tumor-specific antigens to stimulate anti-tumor immune responses. However, the design of these vaccines requires sophisticated computational workflows to predict and prioritize neoantigen candidates from patient sequencing data, coupled with rigorous review to ensure candidate quality. While numerous computational tools exist for neoantigen prediction, to our knowledge, there are no established protocols detailing the complete process from raw sequencing data through systematic candidate selection. Here, we present ImmunoNX (Immunogenomics Neoantigen eXplorer), an end-to-end protocol for neoantigen prediction and vaccine design that has supported over 185 patients across 11 clinical trials. The workflow integrates tumor DNA/RNA and matched normal DNA sequencing data through a computational pipeline built with Workflow Definition Language (WDL) and executed via Cromwell on Google Cloud Platform. ImmunoNX employs consensus-based variant calling, in-silico HLA typing, and pVACtools for neoantigen prediction. Additionally, we describe a two-stage immunogenomics review process with prioritization of neoantigen candidates, enabled by pVACview, followed by manual assessment of variants using the Integrative Genomics Viewer (IGV). This workflow enables vaccine design in under three months. We demonstrate the protocol using the HCC1395 breast cancer cell line dataset, identifying 78 high-confidence neoantigen candidates from 322 initial predictions. Although demonstrated here for vaccine development, this workflow can be adapted for diverse neoantigen therapies and experiments. Therefore, this protocol provides the research community with a reproducible, version-controlled framework for designing personalized neoantigen vaccines, supported by detailed documentation, example datasets, and open-source code.},
}
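As a sketch of the kind of prioritization applied during candidate review, the snippet below filters hypothetical neoantigen candidates by predicted MHC binding (using the conventional IC50 < 500 nM cut-off), variant allele fraction, and tumor expression, then ranks by binding strength. The thresholds and fields are illustrative assumptions, not the trial's criteria or pVACview's logic.

```python
def prioritize(candidates, ic50_max=500.0, min_vaf=0.05, min_expr=1.0):
    """Keep peptides with strong predicted MHC binding, adequate variant
    allele fraction, and tumor expression, ranked by binding strength.
    All thresholds are conventional examples, not the trial criteria."""
    keep = [c for c in candidates
            if c["ic50_nm"] < ic50_max and c["vaf"] >= min_vaf and c["tpm"] >= min_expr]
    return sorted(keep, key=lambda c: c["ic50_nm"])

candidates = [
    {"peptide": "SLYNTVATL", "ic50_nm": 32.0,  "vaf": 0.41, "tpm": 12.3},
    {"peptide": "KVAELVHFL", "ic50_nm": 880.0, "vaf": 0.35, "tpm": 5.1},
]
print([c["peptide"] for c in prioritize(candidates)])   # ['SLYNTVATL']
```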
RevDate: 2025-12-19
[Exploring the Spatial and Temporal Evolution of Fractional Vegetation Cover and Driving Factors in Zhahe Mining Area from 1987 to 2023].
Huan jing ke xue= Huanjing kexue, 46(12):7841-7852.
Coal mining significantly affects vegetation evolution, but the patterns of vegetation change and their driving factors in underground (shaft) mining areas remain under-explored. The Zhahe mining area in Huaibei City, China, was used as the study area to extract fractional vegetation cover (FVC) between 1987 and 2023 and to explore its underlying drivers. Relying on the Google Earth Engine cloud platform, a total of 734 scenes of Landsat-5, Landsat-7, and Landsat-8 satellite imagery acquired from 1987 to 2023 were used. Based on the pixel dichotomy model, the spatial and temporal changes in FVC in the Zhahe mining area over the 37-year period were quantitatively analyzed using trend analysis and stability analysis, and the impacts of nine driving factors on FVC, covering climate, topography, and human activities, were analyzed with the geographical detector. The results showed that: ① FVC in the Zhahe mining area has been decreasing over the past 37 years, with an average rate of change of 0.02%·a[-1]. The average FVC level in the area was high, with areas of medium coverage and above accounting for 81.8%, and a spatial distribution characterized by "high in the northeast and low in the southwest." ② The FVC of each coal mine in the Zhahe mining area was dominated by high-stability areas, accounting for more than 30% of the area, and the land use type was dominated by cultivated land and construction land, while the areas of lower stability were mainly concentrated in Shuoxi Lake, the collapse zone of Zhong Lake, and the areas close to town roads in the study area. ③ Among the nine driving factors, the order of influence on FVC was: land use type (0.41) > precipitation (0.164) > nighttime light (0.12) > temperature (0.095) > GDP (0.079) > population density (0.048) > elevation (0.043) > slope (0.040) > slope (0.021). The interaction between land use type and other factors had the strongest effect on the spatial variability of FVC.
Additional Links: PMID-41414004
@article {pmid41414004,
year = {2025},
author = {Gu, XR and Yang, KM and Zhang, C and Jiang, KG and Chen, XY and Peng, LS},
title = {[Exploring the Spatial and Temporal Evolution of Fractional Vegetation Cover and Driving Factors in Zhahe Mining Area from 1987 to 2023].},
journal = {Huan jing ke xue= Huanjing kexue},
volume = {46},
number = {12},
pages = {7841-7852},
doi = {10.13227/j.hjkx.202410215},
pmid = {41414004},
issn = {0250-3301},
abstract = {Coal mining significantly affects vegetation evolution, but the patterns of vegetation change and the driving factors behind them in shaft mining mines are less explored. The Zhahe mining area in Huaibei City, China, was used as the study area to extract the vegetation cover (FVC) between 1987 and 2023 and explore the deep-seated drivers. Relying on the Google Earth Engine cloud platform, a total of 734 scenes of Landsat-5, Landsat-7, and Landsat-8 satellite image data were acquired from 1987 to 2023. Based on the image element dichotomous model, the spatial and temporal changes in FVC in the Zhahe mining area during the 37 years period were quantitatively analyzed by using trend analysis and stability analysis, and the impacts of nine driving factors on FVC in three aspects, namely, climate, topography, and human activities, were analyzed by using the geodetic detector. The results showed that: ① FVC in the Zhahe mining area has been decreasing over the past 37 years, with an average rate of change of 0.02%·a[-1]. The average FVC level in the area was high, and the area with medium coverage and above accounted for 81.8%, with a spatial distribution characterized by "high in the northeast and low in the southwest." ② The FVC of each coal mine in the Zhahe mining area was dominated by high stability areas, accounting for more than 30% of the area, and the land use type was dominated by cultivated land and construction land, while the areas of lower stability were mainly concentrated in the Shuoxi Lake, the collapse zone of Zhong Lake, and the areas close to the town roads in the study area. ③ In the exploration of the influence of the nine driving factors on FVC, the order of the influence of each factor on FVC was as follows: land use type (0.41) > precipitation (0.164) > nighttime light (0.12) > temperature (0.095) > GDP (0.079) > population density (0.048) > elevation (0.043) > slope (0.040) > slope (0.021). The interaction between land use type and other factors had the strongest effect on the spatial variability of FVC.},
}
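For reference, the pixel dichotomy model estimates FVC as (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil). The sketch below applies it with percentile-based endmembers, a common convention; the study's exact endmember choice may differ.

```python
import numpy as np

def fvc_dichotomy(ndvi, soil_pct=5, veg_pct=95):
    """Pixel dichotomy model: FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil),
    clipped to [0, 1]. Endmembers are taken as low/high NDVI percentiles here,
    which is a common (but not universal) convention."""
    ndvi = np.asarray(ndvi, dtype=float)
    ndvi_soil = np.percentile(ndvi, soil_pct)
    ndvi_veg = np.percentile(ndvi, veg_pct)
    return np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

print(fvc_dichotomy([0.05, 0.2, 0.45, 0.7, 0.85]).round(2))
```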
RevDate: 2025-12-21
CmpDate: 2025-12-18
Autonomous vehicles with augmented reality internet of things and edge intelligence system for industry 5.0 based on 6G.
PloS one, 20(12):e0339022.
In an era of rapidly evolving technology, traditional cloud computing struggles to meet the demands of resource-intensive smart devices. This necessitates a shift towards Edge Computing (EC), which brings computation and data storage closer to the network's edge, enhancing efficiency and reducing latency. This is particularly crucial for the Internet of Things (IoT), where supporting mobility, location awareness, and real-time processing are paramount. However, the scalability of EC applications is significantly influenced by network parameters and the capabilities of the computing system. This paper proposes a novel system architecture for Industry 5.0 that leverages the synergy between 6G networks, autonomous vehicles, Augmented Reality (AR), IoT, and edge intelligence to revolutionize transportation systems. Our approach integrates AR for enhanced user interfaces, utilizes IoT for data acquisition and control, and employs edge computing for real-time decision-making. Our experimental results demonstrate a strong correlation between processing speed and network bandwidth, and increasing either parameter individually enhances overall system performance. The two-tier architecture, combined with the Entity Objects (EO) model, demonstrates superior scalability compared to traditional approaches. By distributing processing tasks and leveraging the resources of other edge servers, the system can handle increasing numbers of AVs and data loads without compromising performance.
Additional Links: PMID-41411273
@article {pmid41411273,
year = {2025},
author = {Ahmed, AA and Kadhim, AK and Hasan, MK and Al-Ghuribi, SM and Hamed Abd, D and Aliesawi, SA and Hezam Murshed, BA and Topham, L and Khan, W and Hussain, AJ},
title = {Autonomous vehicles with augmented reality internet of things and edge intelligence system for industry 5.0 based on 6G.},
journal = {PloS one},
volume = {20},
number = {12},
pages = {e0339022},
pmid = {41411273},
issn = {1932-6203},
mesh = {*Internet of Things ; *Augmented Reality ; *Industry ; Algorithms ; Cloud Computing ; *Artificial Intelligence ; Humans ; },
abstract = {In an era of rapidly evolving technology, traditional cloud computing struggles to meet the demands of resource-intensive smart devices. This necessitates a shift towards Edge Computing (EC), which brings computation and data storage closer to the network's edge, enhancing efficiency and reducing latency. This is particularly crucial for the Internet of Things (IoT), where supporting mobility, location awareness, and real-time processing are paramount. However, the scalability of EC applications is significantly influenced by network parameters and the capabilities of the computing system. This paper proposes a novel system architecture for Industry 5.0 that leverages the synergy between 6G networks, autonomous vehicles, Augmented Reality (AR), IoT, and edge intelligence to revolutionize transportation systems. Our approach integrates AR for enhanced user interfaces, utilizes IoT for data acquisition and control, and employs edge computing for real-time decision-making. Our experimental results demonstrate a strong correlation between processing speed and network bandwidth. While increasing either parameter individually enhances overall system performance. The two-tier architecture, combined with the Entity Objects (EO) model, demonstrates superior scalability compared to traditional approaches. By distributing processing tasks and leveraging the resources of other edge servers, the system can handle increasing numbers of AVs and data loads without compromising performance.},
}
MeSH Terms:
*Internet of Things
*Augmented Reality
*Industry
Algorithms
Cloud Computing
*Artificial Intelligence
Humans
RevDate: 2025-12-17
Research on cloud-edge-end distributed collaborative computing based on deep reinforcement learning.
Scientific reports pii:10.1038/s41598-025-32813-1 [Epub ahead of print].
Additional Links: PMID-41407848
@article {pmid41407848,
year = {2025},
author = {Wu, C and Ye, Q and Wang, Y and Zhang, D and Zhang, W and Jiang, X},
title = {Research on cloud-edge-end distributed collaborative computing based on deep reinforcement learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32813-1},
pmid = {41407848},
issn = {2045-2322},
support = {5700- 202358842A-4-3-WL//science and technology program of State grid Corporation of China/ ; },
}
RevDate: 2025-12-19
Blockchain-based secure MEC model for VANETs using hybrid networks.
Scientific reports, 15(1):43912.
Vehicular Ad-hoc Networks (VANETs) are a type of mobile ad-hoc network that enables vehicles to interact with one another and with roadside infrastructure. Multi-Access Edge Computing (MEC) provides a promising solution by positioning storage and computation resources closer to the network edge, which helps to reduce latency and improve performance. The combination of MEC and blockchain enhances data processing and security. This integration improves privacy safeguards, prevents fraud, and supports trusted communication within VANETs. Consequently, this proposed model aims to develop an innovative approach that leverages these technologies. The main objective of the implemented technique is to create a blockchain architecture powered by deep learning, which ensures the safety of VANETs. The network architecture consists of three layers: perception, edge computing, and services. The main goal of the initial layer is to protect the privacy of VANET data through blockchain activities. The perception layer processes data using edge computing and cloud services. The service layer ensures data protection through blockchain technology and stores information in a public cloud. The last layer focuses on addressing user demands for throughput and Quality of Service (QoS). The proposed framework is well suited to assessing the dependability of vehicle nodes stored on the blockchain. To accomplish node authentication, an Adaptive and Dilated Hybrid Network (ADHyNet) is used. In this approach, a Residual Long Short-Term Memory (Res-LSTM) network with a Gated Recurrent Unit (GRU) forms the ADHyNet, and the Random Number Updated Skill Optimization Algorithm (RNU-SOA) is used to optimize the hyperparameters. Finally, encryption is carried out using Homomorphic Encryption combined with Elliptic Curve Cryptography (HECC) to secure data, ensuring that confidential user information is protected against unauthorized access. The functionality of the system is thoroughly assessed and simulated. The suggested technique outperforms other approaches in terms of data security in VANETs.
Additional Links: PMID-41402461
@article {pmid41402461,
year = {2025},
author = {Goud, GV and Arunachalam, R and Shukla, SK and Saranya, K and Venugopal, S and Palanisamy, P},
title = {Blockchain-based secure MEC model for VANETs using hybrid networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {43912},
pmid = {41402461},
issn = {2045-2322},
abstract = {Vehicular Ad-hoc Networks (VANETs) are a type of mobile ad-hoc network that enables vehicles to interact with one another and roadside infrastructure. Multi-Access Edge Computing (MEC) provides a promising solution by positioning storage and computation resources closer to the network edge. This helps to reduce the latency and improve performance. The combination of MEC and blockchain enhances data processing and security. This integration improves privacy safeguards, prevents fraud, and supports trusted communication within VANETs. Consequently, this proposed model aims to develop an innovative approach that leverages these technologies. The main objective of the implemented technique is to create a blockchain architecture powered by deep learning, which ensures the safety of VANETs. The network architecture consists of three layers: perception, edge computing, and services. The main goal of the initial layer is to protect the privacy of VANET data through blockchain activities. The perception layer processes data using edge computing and cloud services. The service layer ensures data protection by through the blockchain technology and storing information in a public cloud. The last layer focuses on addressing user demands for throughput and Quality of Service (QoS). The proposed framework is good for assessing the dependability of vehicle nodes stored on the blockchain. To accomplish node authentication, an Adaptive and Dilated Hybrid Network (ADHyNet) is used. In this approach, the Residual Long Short-Term Memory (Res-LSTM) with Gated Recurrent Unit (GRU) forms the ADHyNet, where the Random Number Updated Skill Optimization Algorithm (RNU-SOA) is used to optimize the hyperparameters. Finally, the encryption process is carried out using Homomorphic Encryption combined with Elliptic Curve Cryptography (HECC) to secure data. This process ensures that confidential user information is protected against unauthorized access. The functionality of the system is thoroughly assessed and simulated. The suggested technique outperforms well than other approaches in terms of data security in VANET.},
}
RevDate: 2025-12-16
Molecular crystal memristor-based edge AI platform for energy-efficient and real-time smart grid inspection.
Science bulletin pii:S2095-9273(25)01227-7 [Epub ahead of print].
Vast power grid infrastructure generates enormous volumes of inspection data from smart meters, unmanned aerial vehicle (UAV) patrols, and high-definition video monitoring. Meeting the demand for real-time analysis places stringent requirements on latency, energy efficiency, and on-device intelligence at the edge. Here, we present a molecular crystal memristor-based edge artificial intelligence (AI) hardware platform that can be directly deployed in inspection devices, enabling real-time grid monitoring with drastically reduced computational and storage overheads. The memristor exhibits highly controllable filamentary switching behavior, stable multi-level conductance states, femtowatt-scale power consumption, and outstanding retention. Leveraging these properties, the platform enables fully hardware-integrated convolution, achieving 97% feature-extraction accuracy and 67.75 TOPS/W energy efficiency, thereby substantially alleviating the computational and storage load of cloud servers. This work establishes a scalable and energy-efficient in-memory computing framework for smart grid inspection and provides a powerful foundation for broader edge AI applications.
Additional Links: PMID-41402193
@article {pmid41402193,
year = {2025},
author = {Guan, P and Qin, L and Ning, K and Liu, J and Ouyang, D and Yu, Y and Wu, J and Lu, X and Fu, Y and Li, Y and Li, H and Zhai, T},
title = {Molecular crystal memristor-based edge AI platform for energy-efficient and real-time smart grid inspection.},
journal = {Science bulletin},
volume = {},
number = {},
pages = {},
doi = {10.1016/j.scib.2025.11.062},
pmid = {41402193},
issn = {2095-9281},
abstract = {Vast power grid infrastructure generates enormous volumes of inspection data from smart meters, unmanned aerial vehicle (UAV) patrols, and high-definition video monitoring. Meeting the demand for real-time analysis places stringent requirements on latency, energy efficiency, and on-device intelligence at the edge. Here, we present a molecular crystal memristor-based edge artificial intelligence (AI) hardware platform that can be directly deployed in inspection devices, enabling real-time grid monitoring with drastically reduced computational and storage overheads. The memristor exhibits highly controllable filamentary switching behavior, stable multi-level conductance states, femtowatt-scale power consumption, and outstanding retention. Leveraging these properties, the platform enables fully hardware-integrated convolution, achieving 97% feature-extraction accuracy and 67.75 TOPS/W energy efficiency, thereby substantially alleviating the computational and storage load of cloud servers. This work establishes a scalable and energy-efficient in-memory computing framework for smart grid inspection and provides a powerful foundation for broader edge AI applications.},
}
RevDate: 2026-01-03
CmpDate: 2026-01-03
AI-embedded IoT healthcare optimization with trust-aware mobile edge computing.
Scientific reports, 16(1):10.
Embedded technologies combined with the Internet of Things (IoT) have transformed healthcare monitoring systems into automated and responsive platforms. In recent decades, many existing approaches have been based on edge computing to reduce response time in patient monitoring and to provide a reliable method for interaction between the medical team and experts during disease diagnosis. Such approaches interconnect battery-powered devices and physical objects to capture physiological data streams for medical treatment and to facilitate personalized healthcare systems. However, as wireless devices have limited resources for fulfilling end-user requests, this affects the accuracy of the medical system, especially in the presence of malicious devices on the communication infrastructure. Under diverse network conditions, such solutions lower the reliability level of the devices and increase the likelihood of suspicious processes. Therefore, to address these significant concerns in IoT-based healthcare applications, trust and security should be adopted while collecting patients' data over an insecure medium. In this research study, we propose a model referred to as Edge-Cloud Trusted Intelligence (ECTI), aiming to decrease the computing overhead on the devices. Additionally, multi-level security is implemented to ensure privacy preservation by adopting trusted behavior when communicating in a distributed environment. The edge nodes utilize resources efficiently by employing task offloading strategies, enabling lightweight collaborative decision-making for routing in the healthcare domain. The performance results revealed a notable improvement of the proposed model over related schemes in terms of various network metrics.
Additional Links: PMID-41398195
@article {pmid41398195,
year = {2025},
author = {Alamri, M and Haseeb, K and Humayun, M and Alshammeri, M},
title = {AI-embedded IoT healthcare optimization with trust-aware mobile edge computing.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {10},
pmid = {41398195},
issn = {2045-2322},
support = {DGSSR-2025-02-01298//Deanship of Graduate Studies and Scientific Research at Jouf University/ ; },
mesh = {*Internet of Things ; Humans ; Computer Security ; Cloud Computing ; Wireless Technology ; Trust ; *Artificial Intelligence ; Delivery of Health Care ; Telemedicine ; },
abstract = {Embedded technologies combined with the Internet of Things (IoT), have transformed healthcare monitoring systems into automated and responsive platforms. In recent decades, many existing approaches have been based on edge computing to reduce response time in patient monitoring and provide a reliable method for interaction among the medical team and experts during disease diagnosis. Such approaches are the interconnection of battery-powered devices and physical objects to capture the physiological data streams for medical treatment and facilitate personalized healthcare systems. However, as wireless devices have limited resources for fulfilling end-user requests, this affects the accuracy of the medical system, especially in the presence of malicious devices on the communication infrastructure. Under diverse network conditions, such solutions lower the reliability level of the devices and increase the likelihood of suspicious processes. Therefore, to keep these significant concerns in IoT-based healthcare applications, trust and security should be adopted while collecting patients' data over an insecure medium. In this research study, we propose a model referred to as Edge-Cloud Trusted Intelligence (ECTI), aiming to decrease the computing overhead on the devices. Additionally, multi-level security is implemented to ensure privacy preservation by adopting trusted behavior when communicating in a distributed environment. The edges utilize resources efficiently by employing task offloading strategies, enabling lightweight collaborative decision-making for routing in the healthcare domain. The performance results revealed notable improvement of the proposed model against related schemes in terms of various network metrics.},
}
MeSH Terms:
*Internet of Things
Humans
Computer Security
Cloud Computing
Wireless Technology
Trust
*Artificial Intelligence
Delivery of Health Care
Telemedicine
RevDate: 2025-12-13
A custom hash algorithm for hosting secure gray scale image repository in public cloud.
Scientific reports pii:10.1038/s41598-025-31792-7 [Epub ahead of print].
Nowadays, cloud computing is an essential platform for securing resources and effectively managing files. In digital technology, data leaks or breaches frequently occur during storage and transmission, and several techniques for secure image transmission have been developed by researchers worldwide. In the traditional approach, data loss prevention (DLP) is the primary way to protect sensitive data from breaches, but storing massive amounts of data is not feasible in existing storage systems, and ensuring data security remains a serious challenge. Cloud infrastructure provides a more robust, reliable, and scalable solution to overcome attacks in developing regions. The primary objective of cloud storage is to provide affordable and easy access to storage, with vast amounts of data stored across multiple cloud storage services. This paper proposes a custom block-based hash algorithm that generates a digital fingerprint from a grayscale image. The pivotal contribution of the proposed work lies in data-integrity generation and validation, tamper detection, and accurate identification of the tampered region. The entire 256 × 256 image is considered for tamper-proofing, and the hash values are generated by the proposed method. In the integrity validation process, the computed digest is compared with the original digest. The cloud environment provides scalable infrastructure for securely managing and storing the digital fingerprint. User-level authentication is also incorporated into the proposed framework. Additionally, a Graphical User Interface (GUI) application has been developed for generating a hash and verifying whether an image has been tampered with, marking any tampered region with a bounding box. Various benchmark metrics, including quantitative and qualitative tests for integrity codes, the collision property, and the avalanche effect, were analysed to validate the proposed algorithm, which exhibits a good ability towards integrity validation.
Additional Links: PMID-41390772
@article {pmid41390772,
year = {2025},
author = {Murugesan, V and Chidambaram, N and Amirtharajan, R},
title = {A custom hash algorithm for hosting secure gray scale image repository in public cloud.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31792-7},
pmid = {41390772},
issn = {2045-2322},
support = {SR/FST/ET-I/2018/221(C)//DST FIST Fund/ ; },
abstract = {Nowadays, Cloud computing is an essential platform for securing resources and effectively managing files. In digital technology, many data leaks or breaches frequently occur during the storage and transmission process. Several techniques for secure image transmission have been developed by researchers worldwide. In the traditional method, data loss prevention (DLP) is the best way to protect sensitive data from breaches. The massive amount of data is not feasible in the existing storage system. However, ensuring data security remains a severe challenge. Cloud infrastructure provides a more robust, reliable, and scalable solution to overcome attacks in developing regions. The primary objective of cloud storage is to provide affordable and easy access to storage, with a vast amount of data stored across multiple cloud storage services. This paper proposed a custom block-based hash algorithm that generates a digital fingerprint from the grayscale-scale image. The pivotal contribution presented in the proposed work lies in emphasising data integrity generation and validation, tamper detection, and accurate identification of the tampered region. The entire 256 × 256 image is considered for tamper-proofing, and the hash values generated are based on the proposed work. In the integrity validation process, it compares the digest with the original digest. The cloud environment provides scalable infrastructure for securely managing and storing the digital fingerprint. User-level authentication is also incorporated into the proposed framework. Additionally, a Graphical User Interface (GUI) application has been developed for generating a hash and verifying whether the image has been tampered with or not, with the tampered region marked by a bounding box. Various benchmark metrics are analysed for validating the outfit of the proposed algorithm. The metrics, including quantitative and qualitative tests for integrity codes, collision property, and avalanche effect, were analysed, and the proposed algorithm exhibits a good ability towards integrity validation.},
}
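The entry above fingerprints a 256 × 256 grayscale image block by block and localises tampered regions by comparing digests. The sketch below illustrates the same pattern, with SHA-256 from hashlib standing in for the authors' custom hash; the block size and test data are illustrative assumptions, not the paper's settings.

```python
# Block-wise image fingerprinting and tamper localisation (illustrative only;
# SHA-256 stands in for the paper's custom hash).
import hashlib
import numpy as np

BLOCK = 32  # hypothetical block size for a 256x256 image

def block_digests(img: np.ndarray, block: int = BLOCK) -> dict:
    """Return {(row, col): hex digest} for each block of a grayscale image."""
    h, w = img.shape
    out = {}
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[(r, c)] = hashlib.sha256(img[r:r+block, c:c+block].tobytes()).hexdigest()
    return out

def tampered_blocks(reference: dict, received: dict) -> list:
    """Blocks whose digests no longer match the stored fingerprint."""
    return [pos for pos, digest in reference.items() if received.get(pos) != digest]

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
fingerprint = block_digests(original)          # stored in the cloud repository

tampered = original.copy()
tampered[64:96, 128:160] = 0                   # simulate one tampered block
print(tampered_blocks(fingerprint, block_digests(tampered)))  # -> [(64, 128)]
```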
RevDate: 2025-12-18
BlueEdge neural network approach and its application to automated data type classification in mobile edge computing.
Scientific reports, 15(1):43823.
UNLABELLED: Owing to the increasing number of IoT gadgets and the growth of big data, we are now facing massive amounts of diverse data that require proper preprocessing before they can be analyzed. Conventional methods send data directly to the cloud, where it is cleaned and sorted, resulting in a more congested network, increased latency, and a potential threat to users' privacy. This paper presents an enhanced version of the BlueEdge framework, a neural network solution designed for the automated classification of data types on edge devices. We achieve this by utilizing a feed-forward neural network and optimized features to identify the presence of 14 distinct data types. As a result, input data can be preprocessed near its source rather than in the cloud. We utilized a comprehensive dataset comprising 1400 samples, encompassing various data formats from around the world. Compared with rule-based methods, the experimental assessment shows better performance, with data transmission reduced by 62% and processing latency 78 times lower than cloud-based systems, at a resource cost suitable for low-end mobile devices. Additionally, our strategy demonstrates strong performance under various data conditions, achieving accuracy levels of over 85% on datasets with variations and noise levels as high as 20%. The approach is capable of processing data for IoT devices used in education, which can lead to more efficient connections with the cloud and better privacy preservation.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-30445-z.
Additional Links: PMID-41387763
@article {pmid41387763,
year = {2025},
author = {Elmobark, N and El-Ghareeb, H and Elhishi, S},
title = {BlueEdge neural network approach and its application to automated data type classification in mobile edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {43823},
pmid = {41387763},
issn = {2045-2322},
abstract = {UNLABELLED: Owing to the increasing number of IoT gadgets and the growth of big data, we are now facing massive amounts of diverse data that require proper preprocessing before they can be analyzed. Conventional methods involve sending data directly to the cloud, where it is cleaned and sorted, resulting in a more crowded network, increased latency, and a potential threat to users’ privacy. This paper presents an enhanced version of the BlueEdge framework—a neural network solution designed for the automated classification of data types on edge devices. We achieve this by utilizing a feed-forward neural network and optimized features to identify the presence of 14 distinct data types. Because of this, input data can be preprocessed near its source, and not in the cloud. We utilized a comprehensive dataset comprising 1400 samples, encompassing various data formats from around the world. Compared with rule-based methods, experimental assessment achieves better performance, and results in reduced data transmission (reduced by 62%) and processing latency (78 times faster than cloud-based systems), with resource efficiency comparable to low-end mobile devices. Additionally, our strategy demonstrates strong performance under various data conditions, achieving accuracy levels of over 85% on datasets that may include variations and a noise level as high as 20%. The approach used here is capable of processing data for IoT devices used in education, which can lead to more efficient connections with the cloud and better privacy preservation.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-30445-z.},
}
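The entry above classifies raw values into 14 data types with a feed-forward network running on edge devices. The sketch below shows the general pattern with a small scikit-learn MLP over a handful of character-level features; the feature set, labels, and toy samples are hypothetical and far simpler than the paper's 1400-sample dataset.

```python
# Minimal feed-forward data-type classifier (illustrative; not the BlueEdge feature set).
import numpy as np
from sklearn.neural_network import MLPClassifier

def features(value: str) -> list:
    """Tiny hand-crafted character-level features for a raw string value."""
    return [
        len(value),
        sum(ch.isdigit() for ch in value) / max(len(value), 1),
        sum(ch.isalpha() for ch in value) / max(len(value), 1),
        value.count("@"),
        value.count("-") + value.count("/"),
        value.count("."),
    ]

samples = [("42", "integer"), ("3.14", "float"), ("2024-05-01", "date"),
           ("alice@example.com", "email"), ("hello world", "text"),
           ("7", "integer"), ("0.5", "float"), ("1999/12/31", "date"),
           ("bob@mail.org", "email"), ("edge computing", "text")]

X = np.array([features(v) for v, _ in samples])
y = [label for _, label in samples]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
# Predictions on this toy data are only illustrative of the workflow.
print(clf.predict(np.array([features("2025-11-30"), features("carol@uni.edu")])))
```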
RevDate: 2025-12-14
PyEOGPR: A Python package for vegetation trait mapping with Gaussian Process Regression on Earth observation cloud platforms.
Ecological informatics, 92:103497.
Developed to efficiently quantify vegetation traits from satellite Earth Observation (EO) data, the PyEOGPR Python package presented here makes trained probabilistic Gaussian Process Regression (GPR) models readily accessible within cloud-computing platforms like Google Earth Engine (GEE) and openEO. PyEOGPR provides a diversity of validated hybrid GPR models targeting common vegetation traits, as well as newer, more challenging ones such as canopy nitrogen content (CNC), applicable to Sentinel-2 (S2) and Sentinel-3 (S3) data. The package also enables users to incorporate newly trained GPR models for quantifying user-defined surface properties. A key advantage of GPR models is their provision of associated uncertainty estimates, significantly enhancing retrieval reliability. PyEOGPR streamlines large-scale vegetation analysis, facilitating quantitative map generation from local to global scales with customizable time windows, eliminating the need for local image downloads or processing. This paper outlines the complete processing pipeline and demonstrates the generation of landscape-scale maps of key vegetation traits using S2 (20 m resolution) data, and global trait maps using S3 data. PyEOGPR currently supports 27 generically applicable GPR models, aiding environmental monitoring and sustainable agroecological management, with minimal coding expertise required. This integration democratizes access to advanced GPR models within cloud environments, making spatial vegetation dynamics analyses accessible to a broader user base and improving the efficiency of EO data processing.
Additional Links: PMID-41383661
@article {pmid41383661,
year = {2025},
author = {Kovács, DD and De Clerck, E and Verrelst, J},
title = {PyEOGPR: A Python package for vegetation trait mapping with Gaussian Process Regression on Earth observation cloud platforms.},
journal = {Ecological informatics},
volume = {92},
number = {},
pages = {103497},
pmid = {41383661},
issn = {1574-9541},
abstract = {Developed to efficiently quantify vegetation traits from satellite Earth Observation (EO) data, the here presented PyEOGPR Python package makes trained probabilistic Gaussian Process Regression (GPR) models readily accessible within cloud-computing platforms like Google Earth Engine (GEE) and openEO. PyEOGPR provides a diversity of validated hybrid GPR models targeting common vegetation traits, as well as newer, more challenging ones such as canopy nitrogen content (CNC), applicable to Sentinel-2 (S2) and Sentinel-3 (S3) data. The package also enables users to incorporate newly trained GPR models for quantifying user-defined surface properties. A key advantage of GPR models is their provision of associated uncertainty estimates, significantly enhancing retrieval reliability. PyEOGPR streamlines large-scale vegetation analysis, facilitating quantitative map generation from local to global scales with customizable time windows, eliminating the need for local image downloads or processing. This paper outlines the complete processing pipeline and demonstrates the generation of landscape-scale maps of key vegetation traits using S2 (20 m resolution) data, and global trait maps using S3 data. PyEOGPR currently supports 27 generically applicable GPR models, aiding environmental monitoring and sustainable agroecological management, with minimal coding expertise required. This integration democratizes access to advanced GPR models within cloud environments, making spatial vegetation dynamics analyses accessible to a broader user base and improving the efficiency of EO data processing.},
}
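The entry above relies on Gaussian Process Regression models that return both a trait estimate and an associated uncertainty. The sketch below is not PyEOGPR's own API; it only illustrates the underlying idea with scikit-learn's GaussianProcessRegressor on synthetic reflectance-like inputs and a synthetic trait.

```python
# GPR with predictive uncertainty (illustrative; not the PyEOGPR API).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_train = rng.uniform(0.0, 1.0, size=(50, 4))                 # synthetic band reflectances
y_train = 3.0 * X_train[:, 0] - 1.5 * X_train[:, 2] + rng.normal(0, 0.05, 50)  # synthetic trait

kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(0.0, 1.0, size=(3, 4))
mean, std = gpr.predict(X_new, return_std=True)               # estimate plus per-sample uncertainty
for m, s in zip(mean, std):
    print(f"trait = {m:.3f} +/- {s:.3f}")
```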
RevDate: 2025-12-14
CmpDate: 2025-12-11
Secure Fog Computing for Remote Health Monitoring with Data Prioritisation and AI-Based Anomaly Detection.
Sensors (Basel, Switzerland), 25(23):.
Smart remote health monitoring requires time-critical medical data of patients from IoT-enabled cyber-physical systems (CPSs) to be securely transmitted and analysed in real time for early interventions and personalised patient care. Existing cloud architectures are insufficient for smart health systems due to their inherent issues with latency, bandwidth, and privacy. Fog architectures using data storage closer to edge devices introduce challenges in data management, security, and privacy for effective monitoring of a patient's sensitive and critical health data. These gaps found in the literature form the main research focus of this study. As an initial modest step to advance research further, we propose an innovative fog-based framework which is the first of its kind to integrate secure communication with intelligent data prioritisation (IDP) integrated into an AI-based enhanced Random Forest anomaly and threat detection model. Our experimental study to validate our model involves a simulated smart healthcare scenario with synthesised health data streams from distributed wearable devices. Features such as heart rate, SpO2, and breathing rate are dynamically prioritised using AI strategies and rule-based thresholds so that urgent health anomalies are transmitted securely in real time to support clinicians and medical experts for personalised early interventions. We establish a successful proof-of-concept implementation of our framework by achieving high predictive performance measures with an initial high score of 93.5% accuracy, 90.8% precision, 88.7% recall, and 89.7% F1-score.
Additional Links: PMID-41374704
@article {pmid41374704,
year = {2025},
author = {Fahd, K and Parvin, S and Di Serio, A and Venkatraman, S},
title = {Secure Fog Computing for Remote Health Monitoring with Data Prioritisation and AI-Based Anomaly Detection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374704},
issn = {1424-8220},
mesh = {Humans ; Monitoring, Physiologic/methods ; *Cloud Computing ; *Artificial Intelligence ; *Computer Security ; Wearable Electronic Devices ; Telemedicine ; Algorithms ; Remote Sensing Technology ; },
abstract = {Smart remote health monitoring requires time-critical medical data of patients from IoT-enabled cyber-physical systems (CPSs) to be securely transmitted and analysed in real time for early interventions and personalised patient care. Existing cloud architectures are insufficient for smart health systems due to their inherent issues with latency, bandwidth, and privacy. Fog architectures using data storage closer to edge devices introduce challenges in data management, security, and privacy for effective monitoring of a patient's sensitive and critical health data. These gaps found in the literature form the main research focus of this study. As an initial modest step to advance research further, we propose an innovative fog-based framework which is the first of its kind to integrate secure communication with intelligent data prioritisation (IDP) integrated into an AI-based enhanced Random Forest anomaly and threat detection model. Our experimental study to validate our model involves a simulated smart healthcare scenario with synthesised health data streams from distributed wearable devices. Features such as heart rate, SpO2, and breathing rate are dynamically prioritised using AI strategies and rule-based thresholds so that urgent health anomalies are transmitted securely in real time to support clinicians and medical experts for personalised early interventions. We establish a successful proof-of-concept implementation of our framework by achieving high predictive performance measures with an initial high score of 93.5% accuracy, 90.8% precision, 88.7% recall, and 89.7% F1-score.},
}
MeSH Terms:
Humans
Monitoring, Physiologic/methods
*Cloud Computing
*Artificial Intelligence
*Computer Security
Wearable Electronic Devices
Telemedicine
Algorithms
Remote Sensing Technology
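The entry above prioritises vital-sign streams with rule-based thresholds so urgent anomalies are transmitted first. The sketch below shows one way such a rule layer might look; the thresholds and priority levels are illustrative assumptions, not clinical values or rules taken from the paper.

```python
# Rule-based prioritisation of vital-sign readings (illustrative thresholds only).
from typing import Dict

def priority(reading: Dict[str, float]) -> str:
    """Assign 'urgent', 'elevated', or 'routine' to a wearable reading."""
    hr, spo2, br = reading["heart_rate"], reading["spo2"], reading["breathing_rate"]
    if spo2 < 90 or hr > 140 or hr < 40 or br > 30:
        return "urgent"          # forwarded immediately over the secure channel
    if spo2 < 94 or hr > 110 or br > 24:
        return "elevated"        # forwarded in the next batch
    return "routine"             # aggregated at the fog node

stream = [
    {"heart_rate": 72, "spo2": 98, "breathing_rate": 14},
    {"heart_rate": 118, "spo2": 93, "breathing_rate": 22},
    {"heart_rate": 150, "spo2": 88, "breathing_rate": 32},
]
order = ["urgent", "elevated", "routine"]
for r in sorted(stream, key=lambda x: order.index(priority(x))):
    print(priority(r), r)
```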
RevDate: 2025-12-14
Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT Intrusion Detection.
Sensors (Basel, Switzerland), 25(23):.
The rapid expansion of the Internet of Things (IoT) across critical sectors such as healthcare, energy, cybersecurity, smart cities, and finance has increased its exposure to cyberattacks. Conventional centralized machine learning-based Intrusion Detection Systems (IDS) face limitations, including data privacy risks, legal restrictions on cross-border data transfers, and high communication overhead. To overcome these challenges, we propose Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT intrusion detection, where fog nodes serve as intermediaries between IoT devices and the cloud, collecting and preprocessing local data and training models on behalf of IoT clusters. The framework incorporates Personalized Federated Learning (PFL) to handle heterogeneous, non-independent and identically distributed (non-IID) data and leverages differential privacy (DP) to protect sensitive information. Experiments on the RT-IoT 2022 and CIC-IoT 2023 datasets demonstrate that PP-HFFL achieves detection accuracy comparable to centralized systems, reduces communication overhead, preserves privacy, and adapts effectively across non-IID data. This hierarchical approach provides a practical and secure solution for next-generation IoT intrusion detection.
Additional Links: PMID-41374671
@article {pmid41374671,
year = {2025},
author = {Islam, MM and Abdullah, WM and Saha, BN},
title = {Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT Intrusion Detection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374671},
issn = {1424-8220},
support = {CRG-SEED-2501-07//Concordia University of Edmonton/ ; },
abstract = {The rapid expansion of the Internet of Things (IoT) across critical sectors such as healthcare, energy, cybersecurity, smart cities, and finance has increased its exposure to cyberattacks. Conventional centralized machine learning-based Intrusion Detection Systems (IDS) face limitations, including data privacy risks, legal restrictions on cross-border data transfers, and high communication overhead. To overcome these challenges, we propose Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT intrusion detection, where fog nodes serve as intermediaries between IoT devices and the cloud, collecting and preprocessing local data, thus training models on behalf of IoT clusters. The framework incorporates a Personalized Federated Learning (PFL) to handle heterogeneous, non-independent, and identically distributed (non-IID) data and leverages differential privacy (DP) to protect sensitive information. Experiments on RT-IoT 2022 and CIC-IoT 2023 datasets demonstrate that PP-HFFL achieves detection accuracy comparable to centralized systems, reduces communication overhead, preserves privacy, and adapts effectively across non-IID data. This hierarchical approach provides a practical and secure solution for next-generation IoT intrusion detection.},
}
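The entry above combines hierarchical federated averaging at fog nodes with differential privacy. The sketch below illustrates the two core ingredients in isolation: FedAvg-style weighted aggregation of client updates and Gaussian noise added to clipped updates; the clipping norm, noise scale, and update sizes are illustrative, not the paper's settings.

```python
# Federated averaging with Gaussian-noise differential privacy (illustrative parameters).
import numpy as np

def dp_update(update: np.ndarray, clip: float, sigma: float, rng) -> np.ndarray:
    """Clip a client's model update to L2 norm `clip`, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

def fog_aggregate(updates: list, sizes: list) -> np.ndarray:
    """Weighted (FedAvg) aggregation of client updates at a fog node."""
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
clients = [rng.normal(0, 1, size=10) for _ in range(4)]   # per-client model deltas
sizes = [120, 80, 200, 50]                                # local dataset sizes

noisy = [dp_update(u, clip=1.0, sigma=0.5, rng=rng) for u in clients]
fog_model_delta = fog_aggregate(noisy, sizes)             # passed up to the cloud tier
print(fog_model_delta.round(3))
```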
RevDate: 2025-12-14
A Framework for Integration of Machine Vision with IoT Sensing.
Sensors (Basel, Switzerland), 25(23):.
Automated monitoring systems increasingly leverage diverse sensing sources, yet a disconnect often persists between machine vision and IoT sensor pipelines. While IoT sensors provide reliable point measurements and cameras offer rich spatial context, their independent operation limits coherent environmental interpretation. Existing multimodal fusion frameworks frequently lack tight synchronization and efficient cross-modal learning. This paper introduces a unified edge-cloud framework that deeply integrates cameras as active sensing nodes within an IoT network. Our approach features tight time synchronization between visual and IoT data streams and employs cross-modal knowledge distillation to enable efficient model training on resource-constrained edge devices. The system leverages a multi-task learning setup with dynamically adjusted loss weighting, combining architectures like EfficientNet, Vision Transformers, and U-Net derivatives. Validation on environmental monitoring tasks, including classification, segmentation, and anomaly detection, demonstrates the framework's robustness. Experiments deployed on compact edge hardware (Jetson Nano, Coral TPU) achieved 94.8% classification accuracy and 87.6% segmentation quality (mIoU), and they also sustained sub-second inference latency. The results confirm that the proposed synchronized, knowledge-driven fusion yields a more adaptive, context-aware, and deployment-ready sensing solution, significantly advancing the practical integration of machine vision within IoT ecosystems.
Additional Links: PMID-41374611
@article {pmid41374611,
year = {2025},
author = {Nwatuzie, G and Peyravi, H},
title = {A Framework for Integration of Machine Vision with IoT Sensing.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374611},
issn = {1424-8220},
abstract = {Automated monitoring systems increasingly leverage diverse sensing sources, yet a disconnect often persists between machine vision and IoT sensor pipelines. While IoT sensors provide reliable point measurements and cameras offer rich spatial context, their independent operation limits coherent environmental interpretation. Existing multimodal fusion frameworks frequently lack tight synchronization and efficient cross-modal learning. This paper introduces a unified edge-cloud framework that deeply integrates cameras as active sensing nodes within an IoT network. Our approach features tight time synchronization between visual and IoT data streams and employs cross-modal knowledge distillation to enable efficient model training on resource-constrained edge devices. The system leverages a multi-task learning setup with dynamically adjusted loss weighting, combining architectures like EfficientNet, Vision Transformers, and U-Net derivatives. Validation on environmental monitoring tasks, including classification, segmentation, and anomaly detection, demonstrates the framework's robustness. Experiments deployed on compact edge hardware (Jetson Nano, Coral TPU) achieved 94.8% classification accuracy and 87.6% segmentation quality (mIoU), and they also sustained sub-second inference latency. The results confirm that the proposed synchronized, knowledge-driven fusion yields a more adaptive, context-aware, and deployment-ready sensing solution, significantly advancing the practical integration of machine vision within IoT ecosystems.},
}
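The entry above trains classification, segmentation, and anomaly-detection heads jointly with dynamically adjusted loss weighting. The exact weighting rule is not given in the abstract; the sketch below uses the common learned-uncertainty weighting as one plausible realisation in PyTorch, with stand-in scalar losses.

```python
# Multi-task loss with learned uncertainty-based weighting (one common scheme;
# the paper's exact weighting rule is not specified in the abstract).
import torch
import torch.nn as nn

class WeightedMultiTaskLoss(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task; the effective weights adapt during training.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# Toy usage with three stand-in task losses.
criterion = WeightedMultiTaskLoss(num_tasks=3)
task_losses = [torch.tensor(0.9, requires_grad=True),
               torch.tensor(1.4, requires_grad=True),
               torch.tensor(0.3, requires_grad=True)]
combined = criterion(task_losses)
combined.backward()
print(float(combined), criterion.log_vars.grad)
```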
RevDate: 2025-12-14
CmpDate: 2025-12-11
The B-Health Box: A Standards-Based Fog IoT Gateway for Interoperable Health and Wellbeing Data Collection.
Sensors (Basel, Switzerland), 25(23):.
In recent years, healthcare has been evolving to meet the needs of a growing and ageing population. To support better and more reliable care, a comprehensive and up-to-date Personal Health Record (PHR) is essential. Ideally, the PHR should contain all health-related information about an individual and be available for sharing with healthcare institutions. However, due to interoperability issues of medical and fitness devices, the PHR most of the time only contains the same information as the patient's Electronic Health Record (EHR). This results in a lack of health-related information (e.g., physical activity, working patterns) essential to address medical conditions, support prescriptions, and follow up on treatment. This paper introduces the B-Health IoT Box, a fog IoT computing framework for eHealth interoperability and data collection that enables seamless, secure integration of health and contextual data into interoperable health records. The system was deployed in real-world settings involving over 4500 users, successfully collecting and transmitting more than 1.5 million datasets. The validation showed that data were collected, harmonized, and properly stored in different eHealth platforms, enriching data from the personal EHR with data from mobile and wearable sensors. The solution supports real-time and near-real-time data collection, fast prototyping, and secure cloud integration, offering a modular, standards-compliant gateway for digital health ecosystems. The health and health-related data are available in FHIR format, enabling interoperable eHealth ecosystems and better equality of access to health and care services.
Additional Links: PMID-41374490
@article {pmid41374490,
year = {2025},
author = {Marques, M and Delgado-Gomes, V and Januário, F and Lopes, C and Jardim-Goncalves, R and Agostinho, C},
title = {The B-Health Box: A Standards-Based Fog IoT Gateway for Interoperable Health and Wellbeing Data Collection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374490},
issn = {1424-8220},
support = {826117, 857172, 872548, 101016000, 101092043//European Commission/ ; },
mesh = {Humans ; Electronic Health Records ; Telemedicine ; *Data Collection/methods ; Wearable Electronic Devices ; Health Records, Personal ; },
abstract = {In recent years, healthcare is evolving to meet the needs of a growing and ageing population. To support better and more reliable care, a comprehensive and up-to-date Personal Health Record (PHR) is essential. Ideally, the PHR should contain all health-related information about an individual and be available for sharing with healthcare institutions. However, due to interoperability issues of the medical and fitness devices, most of the times, the PHR only contains the same information as the patient Electronic Health Record (EHR). This results in lack of health-related information (e.g., physical activity, working patterns) essential to address medical conditions, support prescriptions, and treatment follow-up. This paper introduces the B-Health IoT Box, a fog IoT computing framework for eHealth interoperability and data collection that enables seamless, secure integration of health and contextual data into interoperable health records. The system was deployed in real-world settings involving over 4500 users, successfully collecting and transmitting more than 1.5 million datasets. The validation shown that data was collected, harmonized, and properly stored in different eHealth platforms, enriching data from personal EHR with mobile and wearable sensors data. The solution supports real-time and near real-time data collection, fast prototyping, and secure cloud integration, offering a modular, standards-compliant gateway for digital health ecosystems. The health and health-related data is available in FHIR format enabling interoperable eHealth ecosystems, and better equality of access to health and care services.},
}
MeSH Terms:
Humans
Electronic Health Records
Telemedicine
*Data Collection/methods
Wearable Electronic Devices
Health Records, Personal
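The entry above exposes collected health data as FHIR resources. As a small illustration of what that format looks like (independent of the B-Health Box implementation), the sketch below builds a minimal FHIR R4-style Observation for a heart-rate reading as a plain Python dictionary; the patient identifier is a placeholder.

```python
# Minimal FHIR R4-style Observation for a heart-rate reading (illustrative;
# the patient identifier is a placeholder, not data from the paper).
import json
from datetime import datetime, timezone

def heart_rate_observation(patient_id: str, bpm: float) -> dict:
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{"system": "http://terminology.hl7.org/CodeSystem/observation-category",
                                   "code": "vital-signs"}]}],
        "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }

print(json.dumps(heart_rate_observation("example-123", 72), indent=2))
```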
RevDate: 2025-12-12
CmpDate: 2025-12-10
The Nextflow nf-core/metatdenovo pipeline for reproducible annotation of metatranscriptomes, and more.
PeerJ, 13:e20328.
Metatranscriptomics-the sequencing of community RNA-has become a popular tool in microbial ecology, proving useful for both in situ surveys and experiments. However, annotating raw sequence data remains challenging for many research groups with limited computational experience. Standardized and reproducible analyses are important to enhance transparency, comparability across studies, and long-term reproducibility. To simplify metatranscriptome processing for biologists, and to promote reproducible analyses, we introduce nf-core/metatdenovo, a Nextflow-based workflow. Nextflow pipelines run on different computing platforms, from standalone systems to high-performance computing clusters and cloud platforms (e.g., AWS, Google Cloud, Azure) and use container technology such as Docker or Singularity to reproducibly provision software. Biologists can access the pipeline using either the command line or the Seqera platform, which provides a web browser-based interface to Nextflow pipelines. Collaborating with nf-core ensures high-quality, documented, reproducible workflows. Our nf-core/metatdenovo pipeline adheres to these established standards, enabling FAIR metatranscriptome de novo assembly, quantification, and annotation.
Additional Links: PMID-41368505
@article {pmid41368505,
year = {2025},
author = {Di Leo, D and Nilsson, E and Krinos, A and Pinhassi, J and Lundin, D},
title = {The Nextflow nf-core/metatdenovo pipeline for reproducible annotation of metatranscriptomes, and more.},
journal = {PeerJ},
volume = {13},
number = {},
pages = {e20328},
pmid = {41368505},
issn = {2167-8359},
mesh = {*Software ; Reproducibility of Results ; Workflow ; *Transcriptome ; *Computational Biology/methods ; *Molecular Sequence Annotation/methods ; *Metagenomics/methods ; },
abstract = {Metatranscriptomics-the sequencing of community RNA-has become a popular tool in microbial ecology, proving useful for both in situ surveys and experiments. However, annotating raw sequence data remains challenging for many research groups with limited computational experience. Standardized and reproducible analyses are important to enhance transparency, comparability across studies, and long-term reproducibility. To simplify metatranscriptome processing for biologists, and to promote reproducible analyses, we introduce nf-core/metatdenovo, a Nextflow-based workflow. Nextflow pipelines run on different computing platforms, from standalone systems to high-performance computing clusters and cloud platforms (e.g., AWS, Google Cloud, Azure) and use container technology such as Docker or Singularity to reproducibly provision software. Biologists can access the pipeline using either the command line or the Seqera platform, which provides a web browser-based interface to Nextflow pipelines. Collaborating with nf-core ensures high-quality, documented, reproducible workflows. Our nf-core/metatdenovo pipeline adheres to these established standards, enabling FAIR metatranscriptome de novo assembly, quantification, and annotation.},
}
MeSH Terms:
*Software
Reproducibility of Results
Workflow
*Transcriptome
*Computational Biology/methods
*Molecular Sequence Annotation/methods
*Metagenomics/methods
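The entry above runs as a standard nf-core pipeline that can execute locally, on an HPC cluster, or on cloud platforms. Keeping to the single code language used for these sketches, the snippet below launches the pipeline from Python via subprocess; the `--input` samplesheet and `--outdir` parameters follow general nf-core conventions and are assumptions here, so the pipeline documentation remains the authoritative reference.

```python
# Launching the nf-core/metatdenovo pipeline from Python (illustrative; parameter
# names follow common nf-core conventions and should be checked against the docs).
import subprocess

cmd = [
    "nextflow", "run", "nf-core/metatdenovo",
    "-profile", "docker",               # or singularity / an institutional or cloud profile
    "--input", "samplesheet.csv",       # sample sheet listing the metatranscriptome reads
    "--outdir", "results",
]
subprocess.run(cmd, check=True)
```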
RevDate: 2025-12-08
SLICED: A secure and adaptive cloud-iot framework for low-latency e-learning environments.
Scientific reports pii:10.1038/s41598-025-31428-w [Epub ahead of print].
Providing dependable, secure connectivity remains a persistent challenge in digital education, particularly in data-sensitive, remote learning environments. This study presents SLICED, which stands for Secure Learning Integration via Cloud and Edge Devices. It is a framework that integrates Internet of Things edge devices with Amazon Web Services (AWS) Cloud services. SLICED orchestrates AWS IoT Core, Lambda, and Key Management Service (KMS) to enable encrypted communication, user authentication, and real-time edge analytics. When compared to traditional AWS-IoT educational systems, this adaptive integration cuts down on latency and increases the level of data protection. The results of experiments conducted in simulated learning networks demonstrate that SLICED can achieve up to 27% lower latency and 33% greater security, thereby providing smart learning environments that are both scalable and safe.
Additional Links: PMID-41361235
@article {pmid41361235,
year = {2025},
author = {Aswin, K and Shanmugapriya, N and Gopi, R},
title = {SLICED: A secure and adaptive cloud-iot framework for low-latency e-learning environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31428-w},
pmid = {41361235},
issn = {2045-2322},
abstract = {Providing dependable, secure connectivity remains a persistent challenge in digital education, particularly in data-sensitive, remote learning environments. This study presents SLICED, which stands for Secure Learning Integration via Cloud and Edge Devices. It is a framework that integrates Internet of Things edge devices with Amazon Web Services (AWS) Cloud services. SLICED orchestrates AWS IoT Core, Lambda, and Key Management Service (KMS) to enable encrypted communication, user authentication, and real-time edge analytics. When compared to traditional AWS-IoT educational systems, this adaptive integration cuts down on latency and increases the level of data protection. The results of experiments conducted in simulated learning networks demonstrate that SLICED can achieve up to 27% lower latency and 33% greater security, thereby providing smart learning environments that are both scalable and safe.},
}
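The entry above orchestrates AWS IoT Core, Lambda, and KMS for encrypted e-learning telemetry. The sketch below shows one plausible client-side fragment with boto3: encrypting a payload under a KMS key and publishing the ciphertext to an IoT topic. The key alias, topic name, region, and payload are placeholders, and the actual SLICED wiring is not specified in the abstract.

```python
# Encrypt a reading with AWS KMS and publish it to AWS IoT Core (illustrative;
# key alias, topic, region, and payload are placeholders, not values from the paper).
import json
import boto3

REGION = "eu-west-1"
KEY_ID = "alias/elearning-demo-key"      # hypothetical KMS key alias
TOPIC = "classroom/sensor/attendance"    # hypothetical MQTT topic

kms = boto3.client("kms", region_name=REGION)
iot = boto3.client("iot-data", region_name=REGION)

reading = {"device": "edge-node-01", "students_present": 23}
ciphertext = kms.encrypt(KeyId=KEY_ID,
                         Plaintext=json.dumps(reading).encode())["CiphertextBlob"]

iot.publish(topic=TOPIC, qos=1, payload=ciphertext)
```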
RevDate: 2025-12-11
CmpDate: 2025-12-08
Automated pipeline for operant behavior phenotyping for high-throughput data management, processing, and visualization.
NPP - digital psychiatry and neuroscience, 3(1):25.
Operant behavior paradigms are essential in preclinical models of neuropsychiatric disorders, such as substance use disorders, enabling the study of complex behaviors including learning, salience, motivation, and preference. These tasks often involve repeated, time-resolved interactions over extended periods, producing large behavioral datasets with rich temporal structure. To support genome-wide association studies (GWAS), the Preclinical Addiction Research Consortium (PARC) has phenotyped over 3000 rats for oxycodone and cocaine addiction-like behaviors using extended access self-administration, producing over 100,000 data files. To manage, store, and process this data efficiently, we leveraged Dropbox, Microsoft Azure Cloud Services, and other widely available computational tools to develop a robust, automated data processing pipeline. Raw MedPC operant output files are automatically converted into structured Excel files using custom scripts, then integrated with standardized experimental, behavioral, and metadata spreadsheets, all uploaded from Dropbox into a relational SQL database on Azure. The pipeline enables automated quality control, data backups, daily summary reports, and interactive visualizations. This approach has dramatically improved PARC's high-throughput phenotyping capabilities by reducing human workload and error, while improving data quality, richness, and accessibility. We here share our approach, as these streamlined workflows can deliver benefits to operant studies of any scale, supporting more efficient, transparent, reproducible, and collaborative preclinical research.
Additional Links: PMID-41360967
@article {pmid41360967,
year = {2025},
author = {Kim, S and Huang, Y and Singla, U and Hu, A and Kalra, S and Morgan, AA and Sichel, B and Othman, D and Carrette, LLG},
title = {Automated pipeline for operant behavior phenotyping for high-throughput data management, processing, and visualization.},
journal = {NPP - digital psychiatry and neuroscience},
volume = {3},
number = {1},
pages = {25},
pmid = {41360967},
issn = {2948-1570},
abstract = {Operant behavior paradigms are essential in preclinical models of neuropsychiatric disorders, such as substance use disorders, enabling the study of complex behaviors including learning, salience, motivation, and preference. These tasks often involve repeated, time-resolved interactions over extended periods, producing large behavioral datasets with rich temporal structure. To support genome-wide association studies (GWAS), the Preclinical Addiction Research Consortium (PARC) has phenotyped over 3000 rats for oxycodone and cocaine addiction-like behaviors using extended access self-administration, producing over 100,000 data files. To manage, store, and process this data efficiently, we leveraged Dropbox, Microsoft Azure Cloud Services, and other widely available computational tools to develop a robust, automated data processing pipeline. Raw MedPC operant output files are automatically converted into structured Excel files using custom scripts, then integrated with standardized experimental, behavioral, and metadata spreadsheets, all uploaded from Dropbox into a relational SQL database on Azure. The pipeline enables automated quality control, data backups, daily summary reports, and interactive visualizations. This approach has dramatically improved PARC's high-throughput phenotyping capabilities by reducing human workload and error, while improving data quality, richness, and accessibility. We here share our approach, as these streamlined workflows can deliver benefits to operant studies of any scale, supporting more efficient, transparent, reproducible, and collaborative preclinical research.},
}
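The entry above converts raw operant output files into structured tables and loads them into a relational database in the cloud. The sketch below shows the general shape of such a step with a made-up session-file layout and an in-process SQLite database standing in for the Azure SQL backend; it is not the PARC pipeline's actual parser or schema.

```python
# Parse a (hypothetical) operant session export and load it into a SQL table.
# SQLite stands in here for the cloud-hosted relational database.
import csv
import sqlite3
from pathlib import Path

def parse_session(path: Path) -> list:
    """Read 'subject,session,timestamp_s,event' rows from a session export."""
    with path.open(newline="") as fh:
        return [(r["subject"], int(r["session"]), float(r["timestamp_s"]), r["event"])
                for r in csv.DictReader(fh)]

def load(rows: list, db_path: str = ":memory:") -> sqlite3.Connection:
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS operant_events
                   (subject TEXT, session INTEGER, timestamp_s REAL, event TEXT)""")
    con.executemany("INSERT INTO operant_events VALUES (?, ?, ?, ?)", rows)
    con.commit()
    return con

demo = Path("session_demo.csv")
demo.write_text("subject,session,timestamp_s,event\n"
                "R001,1,12.5,lever_press\nR001,1,14.2,infusion\n")
con = load(parse_session(demo))
print(con.execute("SELECT COUNT(*) FROM operant_events").fetchone())
```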
RevDate: 2025-12-06
SlingBAG: point cloud-based iterative algorithm for large-scale 3D photoacoustic imaging.
Nature communications pii:10.1038/s41467-025-66855-w [Epub ahead of print].
Large-scale 3D photoacoustic imaging has become increasingly important for both clinical and pre-clinical applications. Limited by cost and system complexity, only systems with sparsely-distributed sensors can be widely implemented, which necessitates advanced reconstruction algorithms to reduce artifacts. However, the high computing memory and time consumption of traditional iterative reconstruction (IR) algorithms is practically unacceptable for large-scale 3D photoacoustic imaging. Here, we propose a point cloud-based IR algorithm that reduces memory consumption by several orders of magnitude, wherein the 3D photoacoustic scene is modeled as a series of Gaussian-distributed spherical sources stored in the form of a point cloud. During the IR process, not only are the properties of each Gaussian source, including its peak intensity (initial pressure value), standard deviation (size), and mean (position), continuously optimized, but each Gaussian source itself also adaptively undergoes destruction, splitting, and duplication along the gradient direction. This method, named SlingBAG (the sliding Gaussian ball adaptive growth algorithm), enables high-quality large-scale 3D photoacoustic reconstruction with fast iteration and extremely low memory usage. We validated the SlingBAG algorithm in both simulation studies and in vivo animal experiments.
Additional Links: PMID-41353449
@article {pmid41353449,
year = {2025},
author = {Li, S and Wang, Y and Gao, J and Kim, C and Choi, S and Zhang, Y and Chen, Q and Yao, Y and Li, C},
title = {SlingBAG: point cloud-based iterative algorithm for large-scale 3D photoacoustic imaging.},
journal = {Nature communications},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41467-025-66855-w},
pmid = {41353449},
issn = {2041-1723},
abstract = {Large-scale 3D photoacoustic imaging has become increasingly important for both clinical and pre-clinical applications. Limited by cost and system complexity, only systems with sparsely-distributed sensors can be widely implemented, which necessitates advanced reconstruction algorithms to reduce artifacts. However, the high computing memory and time consumption of traditional iterative reconstruction (IR) algorithms is practically unacceptable for large-scale 3D photoacoustic imaging. Here, we propose a point cloud-based IR algorithm that reduces memory consumption by several orders, wherein the 3D photoacoustic scene is modeled as a series of Gaussian-distributed spherical sources stored in form of point cloud. During the IR process, not only are properties of each Gaussian source, including its peak intensity (initial pressure value), standard deviation (size) and mean (position) continuously optimized, but also each Gaussian source itself adaptively undergoes destroying, splitting, and duplication along the gradient direction. This method, named SlingBAG, the sliding Gaussian ball adaptive growth algorithm, enables high-quality large-scale 3D photoacoustic reconstruction with fast iteration and extremely low memory usage. We validated the SlingBAG algorithm in both simulation study and in vivo animal experiments.},
}
RevDate: 2025-12-05
Blockchain-based cryptographic framework for secure data transmission in IoT edge environments using ECaps-Net.
Scientific reports pii:10.1038/s41598-025-30906-5 [Epub ahead of print].
In the evolving landscape of the Internet of Things (IoT), the integration of interconnected devices and cloud computing has revolutionized data collection and processing. However, this connectivity poses numerous challenges regarding data privacy, integrity, and security. Traditional cloud-based security approaches are inadequate for managing the distributed and dynamic nature of IoT ecosystems. The emergence of the edge computing paradigm has allowed data processing and storage to move closer to local edge devices, but it introduces new vulnerabilities at the edge. Thus, an Intrusion Detection System (IDS) is required in this situation. An IDS built at the edge can quickly detect and mitigate possible attacks by continually monitoring network traffic, device interactions, and real-time anomalies. Therefore, in this study, we propose an Enhanced Deep Learning (DL)-based IDS integrated with a Blockchain-Based Cryptographic Algorithm to ensure secure data transmission in an IoT edge computing environment. Initially, the intrusion dataset undergoes a preprocessing step to enhance its quality by eliminating unnecessary data and normalizing the dataset. Then, the pre-processed data is classified using an Enhanced Capsule Network (ECaps-Net), which incorporates a Squeeze and Excitation (SE) block to highlight important features and suppress less important ones. After classification, the classified normal data is converted into blocks using blockchain technology. Every block is hashed using the Merkle-Damgard cryptographic algorithm to ensure data integrity and confidentiality. The proposed framework outperformed existing methods with maximum accuracies of 98.90% and 98.78% on the KDD Cup-99 and UNSW-NB 15 datasets, respectively. The proposed mechanism protects cloud servers and edge devices from malicious access, offering a reliable and efficient solution for secure data transmission in IoT edge environments.
Additional Links: PMID-41350368
@article {pmid41350368,
year = {2025},
author = {Mohamed Meerasha, I and Syed Masood, JAI and P, T and R, AA},
title = {Blockchain-based cryptographic framework for secure data transmission in IoT edge environments using ECaps-Net.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-30906-5},
pmid = {41350368},
issn = {2045-2322},
abstract = {In the evolving landscape of Internet of Things (IoT), the integration of interconnected devices and cloud computing has revolutionized data collection and processing. However, this connectivity poses numerous security challenges about data privacy, integrity, and security. Traditional cloud-based security approaches inadequate for managing the distributed and dynamic nature of IoT ecosystems. The emergence of the edge computing paradigm allowed for the transfer of data processing and storage closer to local edge devices, but introduces new vulnerabilities at the edges. Thus, an Intrusion Detection System (IDS) is required in this situation. IDS built at the edge can quickly detect and mitigate possible attacks by continually monitoring network traffic, device interactions, and real-time anomalies. Therefore, in this study, we propose an Enhanced Deep Learning (DL)-based IDS integrated with a Blockchain-Based Cryptographic-Algorithm to ensure secure data transmission in an IoT edge computing environment. Initially, the intrusion dataset undergoes preprocessing step to enhance its quality by eliminating unnecessary data and normalizing the dataset. then, the pre-processed data is classified using an Enhanced Capsule Network (ECaps-Net), which incorporates a Squeeze and Excitation (SE) block to highlight important features and surpasses less important ones. After classification, the classified normal data is converted into blocks using Blockchain technology. Every block is hashed using the Merkle-Damgard cryptographic algorithm to ensure data integrity and confidentiality. The proposed framework outperformed existing methods with a maximum accuracy of 98.90% and 98.78% on the KDD Cup-99 and UNSW-NB 15 datasets, respectively. The proposed mechanism protects cloud server and edge devices from malicious access, offering a reliable and efficient solution for secure data transmission in IoT edge environments.},
}
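The entry above stores classified traffic records as hash-linked blocks, with each block digested by a Merkle-Damgard construction. The sketch below illustrates that chaining pattern using SHA-256 (itself a Merkle-Damgard hash); the block contents are illustrative, and the paper's consensus and network layers are omitted entirely.

```python
# Hash-chained blocks using SHA-256 (a Merkle-Damgard construction); illustrative only.
import hashlib
import json
import time

def make_block(records: list, prev_hash: str) -> dict:
    body = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every digest and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block([{"flow": "10.0.0.5->10.0.0.9", "label": "normal"}], prev_hash="0" * 64)
chain = [genesis,
         make_block([{"flow": "10.0.0.7->10.0.0.2", "label": "normal"}], genesis["hash"])]
print(verify(chain))                        # True
chain[0]["records"][0]["label"] = "attack"  # tamper with stored data
print(verify(chain))                        # False
```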
RevDate: 2025-12-03
Detecting continuous structural heterogeneity in single molecule localization microscopy data with a point cloud variational auto-encoder.
Scientific reports pii:10.1038/s41598-025-31201-z [Epub ahead of print].
The low degree of labeling and limited photon count of fluorescent emitters in single molecule localization microscopy result in poor-quality images of macro-molecular complexes. Particle fusion provides a single reconstruction with a high signal-to-noise ratio by combining many single molecule localization microscopy images of the same structure. The underlying assumption of homogeneity is not always valid; heterogeneity can arise due to geometrical shape variations or distinct conformational states. We introduce a Point Cloud Variational Auto-Encoder that works directly on 2D and 3D localization data to detect multiple modes of variation in such datasets. The computing time is on the order of a few minutes, enabled by the linear scaling with dataset size and fast network training in just four epochs. The use of lists of localization data instead of pixelated images leads to only minor differences in computational burden between the 2D and 3D cases. With the proposed method, we detected radius variation in 2D Nuclear Pore Complex data, height variations in 3D DNA origami tetrahedron data, and both radius and height variations in 3D Nuclear Pore Complex data. In all cases, the detected variations were on the few-nanometer scale.
Additional Links: PMID-41339750
@article {pmid41339750,
year = {2025},
author = {Haghparast, S and Zhang, Y and Tao, Q and Stallinga, S and Rieger, B},
title = {Detecting continuous structural heterogeneity in single molecule localization microscopy data with a point cloud variational auto-encoder.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31201-z},
pmid = {41339750},
issn = {2045-2322},
support = {17046//Nederlandse Organisatie voor Wetenschappelijk Onderzoek/ ; },
abstract = {The low degree of labeling and limited photon count of fluorescent emitters in single molecule localization microscopy results in poor quality images of macro-molecular complexes. Particle fusion provides a single reconstruction with high signal-to-noise ratio by combining many single molecule localization microscopy images of the same structure. The underlying assumption of homogeneity is not always valid, heterogeneity can arise due to geometrical shape variations or distinct conformational states. We introduce a Point Cloud Variational Auto-Encoder that works directly on 2D and 3D localization data, to detect multiple modes of variation in such datasets. The computing time is on the order of a few minutes, enabled by the linear scaling with dataset size, and fast network training in just four epochs. The use of lists of localization data instead of pixelated images leads to just minor differences in computational burden between 2D and 3D cases. With the proposed method, we detected radius variation in 2D Nuclear Pore Complex data, height variations in 3D DNA origami tetrahedron data, and both radius and height variations in 3D Nuclear Pore Complex data. In all cases, the detected variations were on the few nanometer scale.},
}
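For readers unfamiliar with variational auto-encoders on localization data, the sketch below shows the general shape of such a model in PyTorch: a permutation-invariant encoder over a fixed-size 2D point set, a low-dimensional latent space, and a decoder back to coordinates. Layer sizes, the pooling choice, and the plain MSE reconstruction term are assumptions for illustration, not the published architecture.

# Minimal sketch of a variational auto-encoder over fixed-size 2D localization
# sets (PyTorch). Layer sizes, max-pooling over points, and the MSE
# reconstruction term are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class PointSetVAE(nn.Module):
    def __init__(self, n_points=256, latent_dim=2):
        super().__init__()
        self.point_enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 128))
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_points * 2))
        self.n_points = n_points

    def forward(self, x):                          # x: (batch, n_points, 2)
        h = self.point_enc(x).max(dim=1).values    # permutation-invariant pooling
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(z).view(-1, self.n_points, 2)
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).mean()
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kld

model = PointSetVAE()
x = torch.rand(8, 256, 2)                          # 8 synthetic localization sets
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()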
RevDate: 2025-12-07
Identifying and assessing the cloud computing implementation drivers for sustainable building projects.
Scientific reports, 15(1):43122.
Sustainability must be considered during all phases of decision-making in construction project execution to obtain its full advantages without compromising the project objectives. Cloud computing (CC) has been a valued tool for successful and viable building processes in various nations over the past twenty years. CC and its drivers have certainly enhanced the successful and sustainable targets of quality, cost, and time. Conversely, CC adoption by the building industry in Egypt remains limited. Hence, the aim of this study is to build a decision support model for CC adoption by analyzing the relationships among the drivers of CC in the Egyptian building business. The drivers were derived from various sources in the literature. This was followed by a questionnaire survey for quantitative data generation, with data collected from 106 building practitioners in Egypt. Consequently, the study employed exploratory factor analysis (EFA) to validate the findings derived from the survey tool. The results categorized the drivers into three groups: Technology Drivers, Client Support Drivers, and Organization Drivers. Structural equation modeling using partial least squares (PLS-SEM) was then applied to test the relationships and rank their influence. Findings indicate that Technology is the most significant driver of CC adoption (β = 0.378, p < 0.001), followed closely by Client Support (β = 0.372, p < 0.001) and Organization (β = 0.360, p < 0.001). These findings can be used as a baseline or criteria for decision-making concerning improvements in the cost-effectiveness of CC and its capacity to increase efficiency in the building sector. Therefore, this study adds to the understanding of contemporary construction management and engineering by extending the existing literature on CC adoption drivers and their effects on the building industry.
Additional Links: PMID-41339371
@article {pmid41339371,
year = {2025},
author = {Alkersh, M and Alhusban, M},
title = {Identifying and assessing the cloud computing implementation drivers for sustainable building projects.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {43122},
pmid = {41339371},
issn = {2045-2322},
support = {(PSAU/2024/01/ 29773)//Prince Sattam bin Abdulaziz University/ ; },
abstract = {Sustainability must be considered during all phases of decision-making in construction project execution to obtain its full advantages without compromising the project objectives. Cloud computing (CC) has been a valued tool for successful and viable building processes in various nations over the past twenty years. CC and its drivers have certainly enhanced the successful and sustainable targets of quality, cost, and time. Conversely, CC adoption by the building industry in Egypt remains limited. Hence, the aim of this study is to build a decision support model for CC adoption by analyzing the relationships among the drivers of CC in the Egyptian building business. The drivers were derived from various sources in the literature. This was followed by a questionnaire survey for quantitative data generation, with data collected from 106 building practitioners in Egypt. Consequently, the study employed exploratory factor analysis (EFA) to validate the findings derived from the survey tool. The results categorized the drivers into three groups: Technology Drivers, Client Support Drivers, and Organization Drivers. Structural equation modeling using partial least squares (PLS-SEM) was then applied to test the relationships and rank their influence. Findings indicate that Technology is the most significant driver of CC adoption (β = 0.378, p < 0.001), followed closely by Client Support (β = 0.372, p < 0.001) and Organization (β = 0.360, p < 0.001). These findings can be used as a baseline or criteria for decision-making concerning improvements in the cost-effectiveness of CC and its capacity to increase efficiency in the building sector. Therefore, this study adds to the understanding of contemporary construction management and engineering by extending the existing literature on CC adoption drivers and their effects on the building industry.},
}
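The EFA step reported above can be made concrete with a small scikit-learn sketch. The synthetic Likert responses, the 15 items, and the three-factor varimax solution below are placeholders chosen to mirror the reported grouping (Technology, Client Support, Organization), not the study's actual questionnaire or results.

# Illustrative exploratory factor analysis on synthetic survey data.
# Item names, responses, and the 3-factor choice are placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(106, 15)).astype(float)  # 106 respondents, 15 Likert items

X = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(X)

loadings = fa.components_.T            # rows: items, columns: extracted factors
for i, row in enumerate(loadings):
    print(f"item_{i + 1:02d}", np.round(row, 2))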
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Cloud-based Risk Stratification Platform for Cardiovascular Disease, Depression and Comorbidities.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
There is strong clinical evidence that patients with depression have a high probability of exhibiting cardiovascular disease (CVD) and vice versa. Thus, it is important to accurately identify these patients to provide optimal management of the comorbid conditions. Although the existing literature focuses on the development of artificial intelligence (AI) models for the diagnosis of CVD and/or depression, there is not currently any reported tool or system which integrates such models for clinical practice. In this work, we present a cloud-based platform to enable the easier, accurate, and cost-effective diagnosis of CVD and depression. The cloud-based platform is an integrated cloud-enabled computing unit that provides the execution of artificial intelligence computing algorithms along with data exchange services by utilizing the REST (Representational State Transfer) architecture. The platform enables the seamless and transparent interfacing of AI models and applications for the end-users. During development, a variety of state-of-the-art technologies and architectural models were integrated through a Payara Application Server, the Python programming environment (version 3) and a MySQL database server. Java SDK 11 was used for developing the full-stack API of the user interfaces and the back-end logic including the REST interfaces. The platform is hosted on a Linux Virtual Machine (VM). The development resulted in a cost-effective, accurate and efficient tool for the risk stratification of depression and CVD. Clinical Relevance: This is a state-of-the-art cloud-based platform for the risk stratification of CVD and depression. Example: Cardiologists and psychiatrists can use this platform to identify patients with CVD and depression and then prescribe more detailed examinations.
Additional Links: PMID-41335843
@article {pmid41335843,
year = {2025},
author = {Kalatzis, F and Tsakanikas, V and Pezoulas, VC and Tassi, S and Tsarapatsani, K and Bourantas, G and Fotiadis, D and Sakellarios, A},
title = {A Cloud-based Risk Stratification Platform for Cardiovascular Disease, Depression and Comorbidities.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11253640},
pmid = {41335843},
issn = {2694-0604},
mesh = {*Cardiovascular Diseases/diagnosis/epidemiology ; Humans ; *Depression/diagnosis/epidemiology ; *Cloud Computing ; Comorbidity ; Risk Assessment ; Algorithms ; Artificial Intelligence ; },
abstract = {There is strong clinical evidence that patients with depression have a high probability of exhibiting cardiovascular disease (CVD) and vice versa. Thus, it is important to accurately identify these patients to provide optimal management of the comorbid conditions. Although the existing literature focuses on the development of artificial intelligence (AI) models for the diagnosis of CVD and/or depression, there is not currently any reported tool or system which integrates such models for clinical practice. In this work, we present a cloud-based platform to enable the easier, accurate, and cost-effective diagnosis of CVD and depression. The cloud-based platform is an integrated cloud-enabled computing unit that provides the execution of artificial intelligence computing algorithms along with data exchange services by utilizing the REST (Representational State Transfer) architecture. The platform enables the seamless and transparent interfacing of AI models and applications for the end-users. During development, a variety of state-of-the-art technologies and architectural models were integrated through a Payara Application Server, the Python programming environment (version 3) and a MySQL database server. Java SDK 11 was used for developing the full-stack API of the user interfaces and the back-end logic including the REST interfaces. The platform is hosted on a Linux Virtual Machine (VM). The development resulted in a cost-effective, accurate and efficient tool for the risk stratification of depression and CVD. Clinical Relevance: This is a state-of-the-art cloud-based platform for the risk stratification of CVD and depression. Example: Cardiologists and psychiatrists can use this platform to identify patients with CVD and depression and then prescribe more detailed examinations.},
}
MeSH Terms:
*Cardiovascular Diseases/diagnosis/epidemiology
Humans
*Depression/diagnosis/epidemiology
*Cloud Computing
Comorbidity
Risk Assessment
Algorithms
Artificial Intelligence
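To illustrate the REST data-exchange pattern the platform above relies on, here is a minimal risk-scoring endpoint sketched in Python with Flask. The real system is built on a Payara Application Server with Java SDK 11 and Python model services; the route name, input fields, and scoring rule below are invented for illustration only.

# Hypothetical REST endpoint sketch (Flask), not the platform's actual API.
# The /api/risk route, input fields, and scoring rule are invented placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/risk", methods=["POST"])
def risk():
    payload = request.get_json(force=True)
    # Placeholder logic: combine a hypothetical depression score and CVD probability.
    score = 0.6 * payload.get("phq9", 0) / 27 + 0.4 * payload.get("cvd_prob", 0)
    return jsonify({"risk_score": round(score, 3),
                    "stratum": "high" if score > 0.5 else "low"})

if __name__ == "__main__":
    # Example call:
    # curl -X POST localhost:8080/api/risk -H 'Content-Type: application/json' \
    #      -d '{"phq9": 20, "cvd_prob": 0.7}'
    app.run(port=8080)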
RevDate: 2026-01-03
CmpDate: 2026-01-02
Analysis of clinical, single cell, and spatial data from the Human Tumor Atlas Network (HTAN) with massively distributed cloud-based queries.
Research square.
Cancer research increasingly relies on large-scale, multimodal datasets that capture the complexity of tumor ecosystems across diverse patients, cancer types, and disease stages. The Human Tumor Atlas Network (HTAN) generates such data, including single-cell transcriptomics, proteomics, and multiplexed imaging. However, the volume and heterogeneity of the data present challenges for researchers seeking to integrate, explore, and analyze these datasets at scale. To this end, HTAN developed a cloud-based infrastructure that transforms clinical and assay metadata into aggregate Google BigQuery tables, hosted through the Institute for Systems Biology Cancer Gateway in the Cloud (ISB-CGC). This infrastructure introduces two key innovations: (1) a provenance-based HTAN ID table that simplifies cohort construction and cross-assay integration, and (2) the novel adaptation of BigQuery's geospatial functions for use in spatial biology, enabling neighborhood and correlation analysis of tumor microenvironments. We demonstrate these capabilities through R and Python notebooks that highlight use cases such as identifying precancer and organ-specific sample cohorts, integrating multimodal datasets, and analyzing single-cell and spatial data. By lowering technical and computational barriers, this infrastructure provides a cost-effective and intuitive entry point for researchers, highlighting the potential of cloud-based platforms to accelerate cancer discoveries.
Additional Links: PMID-41333415
@article {pmid41333415,
year = {2025},
author = {Gibbs, DL and Pozhidayeva, D and Katariya, Y and Aguilar, B and Anton, K and Lau, C and Longabaugh, WJ and de Bruijn, I and Lash, A and Nikolov, M and Altreuter, J and Clayton, A and Gopalan, A and Taylor, AJ and Schultz, N and Cerami, E and Thorsson, V},
title = {Analysis of clinical, single cell, and spatial data from the Human Tumor Atlas Network (HTAN) with massively distributed cloud-based queries.},
journal = {Research square},
volume = {},
number = {},
pages = {},
pmid = {41333415},
issn = {2693-5015},
support = {U24 CA233243/CA/NCI NIH HHS/United States ; },
abstract = {Cancer research increasingly relies on large-scale, multimodal datasets that capture the complexity of tumor ecosystems across diverse patients, cancer types, and disease stages. The Human Tumor Atlas Network (HTAN) generates such data, including single-cell transcriptomics, proteomics, and multiplexed imaging. However, the volume and heterogeneity of the data present challenges for researchers seeking to integrate, explore, and analyze these datasets at scale. To this end, HTAN developed a cloud-based infrastructure that transforms clinical and assay metadata into aggregate Google BigQuery tables, hosted through the Institute for Systems Biology Cancer Gateway in the Cloud (ISB-CGC). This infrastructure introduces two key innovations: (1) a provenance-based HTAN ID table that simplifies cohort construction and cross-assay integration, and (2) the novel adaptation of BigQuery's geospatial functions for use in spatial biology, enabling neighborhood and correlation analysis of tumor microenvironments. We demonstrate these capabilities through R and Python notebooks that highlight use cases such as identifying precancer and organ-specific sample cohorts, integrating multimodal datasets, and analyzing single-cell and spatial data. By lowering technical and computational barriers, this infrastructure provides a cost-effective and intuitive entry point for researchers, highlighting the potential of cloud-based platforms to accelerate cancer discoveries.},
}
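The abstract's central idea, reusing BigQuery's GEOGRAPHY functions for cellular neighborhood analysis, can be sketched with the google-cloud-bigquery client. ST_GEOGPOINT and ST_DWITHIN are standard BigQuery functions, but the project, dataset, table, and column names below are hypothetical placeholders rather than the actual HTAN/ISB-CGC schema, and the cell coordinates are assumed to be pre-rescaled into a valid longitude/latitude range.

# Sketch of a neighborhood-count query over cell centroids using BigQuery
# GEOGRAPHY functions. Table and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials are configured

query = """
SELECT a.cell_id,
       COUNT(b.cell_id) AS neighbors
FROM `my-project.htan_demo.cell_positions` AS a
JOIN `my-project.htan_demo.cell_positions` AS b
  ON a.image_id = b.image_id AND a.cell_id != b.cell_id
WHERE ST_DWITHIN(ST_GEOGPOINT(a.x_deg, a.y_deg),
                 ST_GEOGPOINT(b.x_deg, b.y_deg),
                 50)   -- 50 "metres" in the rescaled coordinate frame
GROUP BY a.cell_id
ORDER BY neighbors DESC
"""
for row in client.query(query).result():
    print(row.cell_id, row.neighbors)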
RevDate: 2025-12-05
CmpDate: 2025-12-03
SZBC-AI4TCM: a comprehensive web-based computing platform for traditional Chinese medicine research and development.
Frontiers in pharmacology, 16:1698202.
INTRODUCTION: In recent years, the increasing complexity and volume of data in traditional Chinese medicine (TCM) research have rendered the conventional experimental methods inadequate for modern TCM development. The analysis of intricate TCM data demands proficiency in multiple programming languages, artificial intelligence (AI) techniques, and bioinformatics, posing significant challenges for researchers lacking such expertise. Thus, there is an urgent need to develop user-friendly software tools that encompass various aspects of TCM data analysis.
METHODS: We developed SZBC-AI4TCM, a comprehensive web-based computing platform for traditional Chinese medicine that embodies the "ShuZhiBenCao" (Digital Herbal) concept through artificial intelligence, designed to accelerate TCM research and reduce costs by integrating advanced AI algorithms and bioinformatics tools.
RESULTS: Leveraging machine learning, deep learning, and big data analytics, the platform enables end-to-end analysis, from TCM formulation and mechanism elucidation to drug screening. Featuring an intuitive visual interface and hardware-software acceleration, SZBC-AI4TCM allows researchers without computational backgrounds to conduct comprehensive and accurate analyses efficiently. By using the TCM research in Alzheimer's disease as an example, we showcase its functionalities, operational methods, and analytical capabilities.
DISCUSSION: SZBC-AI4TCM not only provides robust computational support for TCM research but also significantly enhances efficiency and reduces costs. It offers novel approaches for studying complex TCM systems, thereby advancing the modernization of TCM. As interdisciplinary collaboration and cloud computing continue to evolve, SZBC-AI4TCM is poised to play a strong role in TCM research and foster its growth in addition to contributing to global health. SZBC-AI4TCM is publicly accessible at https://ai.tasly.com/ui/#/frontend/login.
Additional Links: PMID-41333020
@article {pmid41333020,
year = {2025},
author = {Lang, J and Guo, K and Yang, J and Yang, P and Wei, Y and Han, J and Zhao, S and Liu, Z and Yi, H and Yan, X and Chen, B and Wang, C and Xu, J and Ge, J and Zhang, W and Zhou, X and Fang, J and Su, J and Yan, K and Hu, Y and Wang, W},
title = {SZBC-AI4TCM: a comprehensive web-based computing platform for traditional Chinese medicine research and development.},
journal = {Frontiers in pharmacology},
volume = {16},
number = {},
pages = {1698202},
pmid = {41333020},
issn = {1663-9812},
abstract = {INTRODUCTION: In recent years, the increasing complexity and volume of data in traditional Chinese medicine (TCM) research have rendered the conventional experimental methods inadequate for modern TCM development. The analysis of intricate TCM data demands proficiency in multiple programming languages, artificial intelligence (AI) techniques, and bioinformatics, posing significant challenges for researchers lacking such expertise. Thus, there is an urgent need to develop user-friendly software tools that encompass various aspects of TCM data analysis.
METHODS: We developed SZBC-AI4TCM, a comprehensive web-based computing platform for traditional Chinese medicine that embodies the "ShuZhiBenCao" (Digital Herbal) concept through artificial intelligence, designed to accelerate TCM research and reduce costs by integrating advanced AI algorithms and bioinformatics tools.
RESULTS: Leveraging machine learning, deep learning, and big data analytics, the platform enables end-to-end analysis, from TCM formulation and mechanism elucidation to drug screening. Featuring an intuitive visual interface and hardware-software acceleration, SZBC-AI4TCM allows researchers without computational backgrounds to conduct comprehensive and accurate analyses efficiently. By using the TCM research in Alzheimer's disease as an example, we showcase its functionalities, operational methods, and analytical capabilities.
DISCUSSION: SZBC-AI4TCM not only provides robust computational support for TCM research but also significantly enhances efficiency and reduces costs. It offers novel approaches for studying complex TCM systems, thereby advancing the modernization of TCM. As interdisciplinary collaboration and cloud computing continue to evolve, SZBC-AI4TCM is poised to play a strong role in TCM research and foster its growth in addition to contributing to global health. SZBC-AI4TCM is publicly accessible at https://ai.tasly.com/ui/#/frontend/login.},
}
RevDate: 2025-12-22
CmpDate: 2025-12-22
FERAL: A Video-Understanding System for Direct Video-to-Behavior Mapping.
bioRxiv : the preprint server for biology.
Animal behavior unfolds continuously in time, yet quantitative analyses often require segmenting it into discrete, interpretable states. Although manual annotation can achieve this, it remains slow, subjective, and difficult to scale. Most automated pipelines use tracked body parts to infer actions, but are limited by tracking quality, and discard much of the visual information contained in raw videos. Here we present FERAL (Feature Extraction for Recognition of Animal Locomotion), a supervised video-understanding toolkit that bridges this gap by mapping raw video directly to frame-level behavioral labels, bypassing the need for pose estimation. Across benchmarks, FERAL outperforms state-of-the-art pose- and video-based baselines: on a benchmarking dataset of mouse social interaction, it surpasses Google's Videoprism using just a quarter of the training data. FERAL generalizes across species, recording conditions, and levels of behavioral organization: from single-animal locomotion to complex social interactions and emergent collective dynamics. Released as a user-friendly, open-source package, FERAL overcomes the challenges of traditional approaches, integrates easily with existing analysis pipelines, and can be deployed locally or on cloud servers with a few clicks. By mapping raw video directly to annotated behavior, FERAL lowers the barrier to scalable, cross-species behavioral quantification and broadens the range of behavioral analyses possible in both the lab and the wild.
Additional Links: PMID-41332589
@article {pmid41332589,
year = {2025},
author = {Skovorodnikov, P and Zhao, J and Buck, F and Kay, T and Frank, DD and Koger, B and Costelloe, BR and Couzin, ID and Razzauti, J},
title = {FERAL: A Video-Understanding System for Direct Video-to-Behavior Mapping.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
pmid = {41332589},
issn = {2692-8205},
support = {F31 NS132477/NS/NINDS NIH HHS/United States ; K99 DC021506/DC/NIDCD NIH HHS/United States ; T32 GM152349/GM/NIGMS NIH HHS/United States ; },
abstract = {Animal behavior unfolds continuously in time, yet quantitative analyses often require segmenting it into discrete, interpretable states. Although manual annotation can achieve this, it remains slow, subjective, and difficult to scale. Most automated pipelines use tracked body parts to infer actions, but are limited by tracking quality, and discard much of the visual information contained in raw videos. Here we present FERAL (Feature Extraction for Recognition of Animal Locomotion), a supervised video-understanding toolkit that bridges this gap by mapping raw video directly to frame-level behavioral labels, bypassing the need for pose estimation. Across benchmarks, FERAL outperforms state-of-the-art pose- and video-based baselines: on a benchmarking dataset of mouse social interaction, it surpasses Google's Videoprism using just a quarter of the training data. FERAL generalizes across species, recording conditions, and levels of behavioral organization: from single-animal locomotion to complex social interactions and emergent collective dynamics. Released as a user-friendly, open-source package, FERAL overcomes the challenges of traditional approaches, integrates easily with existing analysis pipelines, and can be deployed locally or on cloud servers with a few clicks. By mapping raw video directly to annotated behavior, FERAL lowers the barrier to scalable, cross-species behavioral quantification and broadens the range of behavioral analyses possible in both the lab and the wild.},
}
RevDate: 2025-12-02
Improved multi-strategy secretary bird optimization for efficient IoT task scheduling in fog cloud computing.
Scientific reports pii:10.1038/s41598-025-30918-1 [Epub ahead of print].
Applications designed for real-time IoT operations improve cloud-based service utilization due to their rapid scalability. Though cloud computing appears to be more effective for data processing and storage in a range of IoT applications, its real-time scalability presents issues in fulfilling the demands of network bandwidth and latency-sensitive applications. In this context, fog computing is shown to be a complementary paradigm to cloud computing, providing extra benefits and capabilities aimed at extending cloud services to end users and edge devices. Due to the restricted capabilities of fog nodes, only lightweight activities can be conducted locally, while jobs requiring more processing time are handled in the cloud. As a result, an Improved Multi-Strategy Enhanced Secretary Bird Optimization Algorithm using Reinforcement Learning (IMSESBOA + RL) for IoT Task Scheduling (TS) mechanism is presented to reduce data processing time and enhance Quality of Service (QoS) in fog-cloud computing. This IMSESBOA + RL approach is designed as an efficient scheduling model that investigates and processes various scalable quantities of tasks while minimizing latency and energy costs. It used a multi-objective methodology based on Secretary Bird Optimization Algorithm's (SBOA) balanced exploration and exploitation capabilities, which has multi-strategy benefits in terms of maximizing resource consumption rate and shortening makespan. It further uses RL for dynamically adapting to the new workloads by excelling in learning optimal strategies using the interaction of trial and error with the environment. The simulation findings of the IMSESBOA + RL approach verified that it reduced makespan by 19.42% and execution time by 18.32% compared to the baseline approaches with various jobs originating from IoT applications.
Additional Links: PMID-41331066
@article {pmid41331066,
year = {2025},
author = {Sangeetha, K and Kanthimathi, M},
title = {Improved multi-strategy secretary bird optimization for efficient IoT task scheduling in fog cloud computing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-30918-1},
pmid = {41331066},
issn = {2045-2322},
abstract = {Applications designed for real-time IoT operations improve cloud-based service utilization due to their rapid scalability. Though cloud computing appears to be more effective for data processing and storage in a range of IoT applications, its real-time scalability presents issues in fulfilling the demands of network bandwidth and latency-sensitive applications. In this context, fog computing is shown to be a complementary paradigm to cloud computing, providing extra benefits and capabilities aimed at extending cloud services to end users and edge devices. Due to the restricted capabilities of fog nodes, only lightweight activities can be conducted locally, while jobs requiring more processing time are handled in the cloud. As a result, an Improved Multi-Strategy Enhanced Secretary Bird Optimization Algorithm using Reinforcement Learning (IMSESBOA + RL) for IoT Task Scheduling (TS) mechanism is presented to reduce data processing time and enhance Quality of Service (QoS) in fog-cloud computing. This IMSESBOA + RL approach is designed as an efficient scheduling model that investigates and processes various scalable quantities of tasks while minimizing latency and energy costs. It used a multi-objective methodology based on Secretary Bird Optimization Algorithm's (SBOA) balanced exploration and exploitation capabilities, which has multi-strategy benefits in terms of maximizing resource consumption rate and shortening makespan. It further uses RL for dynamically adapting to the new workloads by excelling in learning optimal strategies using the interaction of trial and error with the environment. The simulation findings of the IMSESBOA + RL approach verified that it reduced makespan by 19.42% and execution time by 18.32% compared to the baseline approaches with various jobs originating from IoT applications.},
}
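To make the scheduling objective concrete, the sketch below computes the makespan of a simple longest-processing-time greedy assignment of tasks to fog and cloud nodes. This is only a baseline for comparison, not the IMSESBOA + RL method; node speeds and task lengths are made-up numbers.

# Greedy longest-processing-time baseline for the makespan objective.
# Not the IMSESBOA + RL scheduler; all workload numbers are illustrative.
import heapq

def greedy_makespan(task_lengths, node_speeds):
    """Assign each task to the node that would finish it earliest."""
    heap = [(0.0, i) for i in range(len(node_speeds))]  # (current finish time, node)
    heapq.heapify(heap)
    for length in sorted(task_lengths, reverse=True):   # longest tasks first
        finish, node = heapq.heappop(heap)
        finish += length / node_speeds[node]
        heapq.heappush(heap, (finish, node))
    return max(t for t, _ in heap)                       # makespan

tasks = [8, 3, 5, 13, 2, 7, 9, 4]     # e.g. task lengths in millions of instructions
nodes = [1.0, 1.0, 2.5]               # two fog nodes and one faster cloud VM
print(round(greedy_makespan(tasks, nodes), 2))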
RevDate: 2026-01-03
BHFVAL: Blockchain-Enabled Hierarchical Federated Variational Autoencoder Framework for Secure Intrusion Detection in Vehicular Networks.
Scientific reports, 15(1):45742.
In modern vehicular systems, providing secure data processing with decentralized learning efficacy under limited computational resources and varying network conditions is challenging. This paper introduces an intelligent, effective, and secure learning model for the Internet of Vehicles (IoV) as a solution to the vulnerability of centralized architectures and the inefficiency of existing federated learning in adversarial environments. The Blockchain-Enabled Hierarchical Federated Variational Autoencoder Learning (BHFVAL) model uses a multilevel learning process on edge, fog, and cloud layers protected by a Reputation-Based Byzantine Fault Tolerance (RBFT) mechanism filtering out incorrect inputs during model aggregation. HFVAL is at its core, providing adaptive encoding and learning task assignments based on dynamic networks and resource status. To minimize communication latency, the platform employs a lightweight edge-computing (LEC) module to enable proximity-based processing. Hyperparameter optimization is enabled using the Osprey Optimization Algorithm (OOA) for maximum convergence effectiveness. Secure communication is achieved by implementing a Lightweight Secure Communication Protocol (LSCP) on Elliptic Curve-Based Homomorphic Encryption (ECHE) to enable encrypted V2X communication with minimal computational overhead and reduced latency. Extensive experimentation using the UNSW-NB15 and CIC-IDS-2017 datasets exhibited strong detection performance: UNSW-NB15 achieved 96.83% accuracy and 96.65% F1-score under IID, slightly declining to 95.74% accuracy and 95.40% F1-score under non-IID conditions. The CIC-IDS-2017 achieved 97.36% accuracy, 97.2% AUROC, and 97.1% F1-score under IID, slightly declining to 96.40% accuracy and 96.20% F1-score under non-IID conditions. The results attest to the dependability, adaptability, and efficacy of the framework in decentralized privacy-sensitive vehicular networks.
Additional Links: PMID-41326496
@article {pmid41326496,
year = {2025},
author = {Visuvanathan, GE and Sayeed, MS and Yogarayan, S},
title = {BHFVAL: Blockchain-Enabled Hierarchical Federated Variational Autoencoder Framework for Secure Intrusion Detection in Vehicular Networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {45742},
pmid = {41326496},
issn = {2045-2322},
abstract = {In modern vehicular systems, providing secure data processing with decentralized learning efficacy under limited computational resources and varying network conditions is challenging. This paper introduces an intelligent, effective, and secure learning model for the Internet of Vehicles (IoV) as a solution to the vulnerability of centralized architectures and the inefficiency of existing federated learning in adversarial environments. The Blockchain-Enabled Hierarchical Federated Variational Autoencoder Learning (BHFVAL) model uses a multilevel learning process on edge, fog, and cloud layers protected by a Reputation-Based Byzantine Fault Tolerance (RBFT) mechanism filtering out incorrect inputs during model aggregation. HFVAL is at its core, providing adaptive encoding and learning task assignments based on dynamic networks and resource status. To minimize communication latency, the platform employs a lightweight edge-computing (LEC) module to enable proximity-based processing. Hyperparameter optimization is enabled using the Osprey Optimization Algorithm (OOA) for maximum convergence effectiveness. Secure communication is achieved by implementing a Lightweight Secure Communication Protocol (LSCP) on Elliptic Curve-Based Homomorphic Encryption (ECHE) to enable encrypted V2X communication with minimal computational overhead and reduced latency. Extensive experimentation using the UNSW-NB15 and CIC-IDS-2017 datasets exhibited strong detection performance: UNSW-NB15 achieved 96.83% accuracy and 96.65% F1-score under IID, slightly declining to 95.74% accuracy and 95.40% F1-score under non-IID conditions. The CIC-IDS-2017 achieved 97.36% accuracy, 97.2% AUROC, and 97.1% F1-score under IID, slightly declining to 96.40% accuracy and 96.20% F1-score under non-IID conditions. The results attest to the dependability, adaptability, and efficacy of the framework in decentralized privacy-sensitive vehicular networks.},
}
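A bare-bones federated-averaging sketch helps make the aggregation idea concrete. It is deliberately far simpler than BHFVAL: no edge/fog/cloud hierarchy, blockchain ledger, RBFT filtering, or encryption, and the client updates are synthetic.

# Toy federated averaging over synthetic "vehicle" updates; a stand-in for the
# hierarchical, blockchain-protected aggregation described in the paper.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
global_model = np.zeros(4)
for rnd in range(3):
    # each client perturbs the global model as a stand-in for local training
    updates = [global_model + rng.normal(0, 0.1, size=4) for _ in range(5)]
    sizes = [120, 80, 200, 150, 60]
    global_model = fedavg(updates, sizes)
    print(f"round {rnd}:", np.round(global_model, 3))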
RevDate: 2025-12-01
CmpDate: 2025-12-01
An Open-source Protocol for Deep Learning-based Segmentation of Tubular Structures in 3D Fluorescence Microscopy Images.
Journal of visualized experiments : JoVE.
Segmenting tubular structures in dense biological tissues from 3D fluorescence microscopy images is critical to study complex tissue but remains challenging due to image complexity, variability, and quality issues. Here, we introduce an open-source, user-friendly toolbox for end-to-end segmentation of tubular structures in 3D images, accessible to researchers without formal programming training. The toolbox features interactive Jupyter notebooks implementing two simple yet efficient deep learning architectures -- 3D U-Net and 3D U-Net with attention mechanisms -- for precise 3D segmentation of tubular networks. A key innovation is our simulation-based data augmentation strategy, which enhances model performance even with minimal training data (as few as one 3D image). Employing user-provided masks, the protocol generates artificial microscopy images with varying signal-to-noise ratios and simulates realistic imaging artifacts, including uneven staining, point spread function convolution, axial intensity variations, and Poisson and Gaussian noise. The protocol systematically guides users through data augmentation, model training, qualitative and quantitative evaluation on test sets, and inference on new images. We validate the toolbox by analyzing two morphologically distinct tubular networks in mouse liver tissue -- the bile canaliculi and sinusoidal networks -- demonstrating that both architectures perform well, with the attention U-Net slightly outperforming the standard U-Net when trained with augmented data. Our comprehensive toolbox, executable on local Graphics Processing Units (GPUs), high-performance computing clusters, or cloud platforms, contributes to the democratization of advanced image analysis for a broad spectrum of researchers.
Additional Links: PMID-41325317
@article {pmid41325317,
year = {2025},
author = {Velasco, R and Pérez-Gallardo, C and Segovia-Miranda, F and Morales-Navarrete, H},
title = {An Open-source Protocol for Deep Learning-based Segmentation of Tubular Structures in 3D Fluorescence Microscopy Images.},
journal = {Journal of visualized experiments : JoVE},
volume = {},
number = {225},
pages = {},
doi = {10.3791/68004},
pmid = {41325317},
issn = {1940-087X},
mesh = {*Deep Learning ; Microscopy, Fluorescence/methods ; Animals ; *Imaging, Three-Dimensional/methods ; Mice ; Software ; Liver ; },
abstract = {Segmenting tubular structures in dense biological tissues from 3D fluorescence microscopy images is critical to study complex tissue but remains challenging due to image complexity, variability, and quality issues. Here, we introduce an open-source, user-friendly toolbox for end-to-end segmentation of tubular structures in 3D images, accessible to researchers without formal programming training. The toolbox features interactive Jupyter notebooks implementing two simple yet efficient deep learning architectures -- 3D U-Net and 3D U-Net with attention mechanisms -- for precise 3D segmentation of tubular networks. A key innovation is our simulation-based data augmentation strategy, which enhances model performance even with minimal training data (as few as one 3D image). Employing user-provided masks, the protocol generates artificial microscopy images with varying signal-to-noise ratios and simulates realistic imaging artifacts, including uneven staining, point spread function convolution, axial intensity variations, and Poisson and Gaussian noise. The protocol systematically guides users through data augmentation, model training, qualitative and quantitative evaluation on test sets, and inference on new images. We validate the toolbox by analyzing two morphologically distinct tubular networks in mouse liver tissue -- the bile canaliculi and sinusoidal networks -- demonstrating that both architectures perform well, with the attention U-Net slightly outperforming the standard U-Net when trained with augmented data. Our comprehensive toolbox, executable on local Graphics Processing Units (GPUs), high-performance computing clusters, or cloud platforms, contributes to the democratization of advanced image analysis for a broad spectrum of researchers.},
}
MeSH Terms:
*Deep Learning
Microscopy, Fluorescence/methods
Animals
*Imaging, Three-Dimensional/methods
Mice
Software
Liver
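The simulation-based augmentation strategy described above can be sketched in a few lines of NumPy/SciPy: blur a binary tubular mask with a Gaussian approximation of the point spread function, then add Poisson shot noise and Gaussian read noise. The parameter values are illustrative and not the toolbox defaults.

# Synthetic-microscopy augmentation sketch: Gaussian PSF blur plus Poisson and
# Gaussian noise. Parameter values are illustrative assumptions only.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

def simulate_image(mask, psf_sigma=(2.0, 1.0, 1.0), photons=50, read_noise=3.0):
    signal = mask.astype(float) * photons
    blurred = gaussian_filter(signal, sigma=psf_sigma)      # anisotropic PSF (z, y, x)
    noisy = rng.poisson(blurred).astype(float)              # shot noise
    noisy += rng.normal(0.0, read_noise, size=mask.shape)   # camera read noise
    return np.clip(noisy, 0, None)

mask = np.zeros((32, 64, 64), dtype=np.uint8)
mask[:, 30:34, :] = 1                                        # a crude "tube" along x
synthetic = simulate_image(mask)
print(synthetic.shape, synthetic.max())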
RevDate: 2025-11-29
AIFS: an efficient face recognition method based on AI and enhanced few-shot learning.
Scientific reports pii:10.1038/s41598-025-29992-2 [Epub ahead of print].
The growing demand for real-time, adaptive facial recognition in resource-constrained environments like telemedicine, surveillance, and biometric authentication necessitates scalable AI solutions. Existing systems often falter under low-data conditions or limited computational resources. This paper introduces AIFS, an efficient and hybrid facial recognition framework that unifies traditional feature-based learning with modern few-shot deep learning under a shared Siamese architecture. The framework proposes two synergistic approaches: (1) a lightweight edge-oriented path using the Viola-Jones algorithm combined with Particle Swarm Optimization (PSO) for facial feature extraction within a Siamese network, optimized for low-power devices, and (2) a deep learning cloud-oriented path using a Siamese network with triplet loss, employing EfficientNetV2 and InceptionV3 as high-capacity feature encoders for enhanced generalization from limited examples. The proposed AIFS framework is validated across diverse platforms to simulate real-world deployment, with CPUs and Raspberry Pi representing resource-constrained edge devices, and GPUs representing high-capacity cloud environments. Tested on the Kaggle Face Recognition Dataset under a one-shot, low-data setting, AIFS achieves up to 99% accuracy. The results demonstrate a balance between latency, inference speed, and resource efficiency, confirming AIFS as a scalable and robust solution for real-time facial recognition in heterogeneous computing scenarios.
Additional Links: PMID-41318652
@article {pmid41318652,
year = {2025},
author = {Nasralla, MM},
title = {AIFS: an efficient face recognition method based on AI and enhanced few-shot learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-29992-2},
pmid = {41318652},
issn = {2045-2322},
abstract = {The growing demand for real-time, adaptive facial recognition in resource-constrained environments like telemedicine, surveillance, and biometric authentication necessitates scalable AI solutions. Existing systems often falter under low-data conditions or limited computational resources. This paper introduces AIFS, an efficient and hybrid facial recognition framework that unifies traditional feature-based learning with modern few-shot deep learning under a shared Siamese architecture. The framework proposes two synergistic approaches: (1) a lightweight edge-oriented path using the Viola-Jones algorithm combined with Particle Swarm Optimization (PSO) for facial feature extraction within a Siamese network, optimized for low-power devices, and (2) a deep learning cloud-oriented path using a Siamese network with triplet loss, employing EfficientNetV2 and InceptionV3 as high-capacity feature encoders for enhanced generalization from limited examples. The proposed AIFS framework is validated across diverse platforms to simulate real-world deployment, with CPUs and Raspberry Pi representing resource-constrained edge devices, and GPUs representing high-capacity cloud environments. Tested on the Kaggle Face Recognition Dataset under a one-shot, low-data setting, AIFS achieves up to 99% accuracy. The results demonstrate a balance between latency, inference speed, and resource efficiency, confirming AIFS as a scalable and robust solution for real-time facial recognition in heterogeneous computing scenarios.},
}
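The triplet-loss objective at the heart of the cloud-oriented Siamese path can be illustrated in a few lines of PyTorch. The tiny fully connected embedding network and random tensors below are placeholders standing in for the EfficientNetV2/InceptionV3 encoders used in AIFS.

# Minimal triplet-loss sketch: pull anchor and positive embeddings together,
# push the negative away. The embedding net and data are placeholders.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 32))
criterion = nn.TripletMarginLoss(margin=0.5)

anchor_img   = torch.rand(16, 1, 64, 64)   # 16 face crops of person A
positive_img = torch.rand(16, 1, 64, 64)   # different crops of person A
negative_img = torch.rand(16, 1, 64, 64)   # crops of other people

loss = criterion(embed(anchor_img), embed(positive_img), embed(negative_img))
loss.backward()
print(float(loss))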
RevDate: 2025-12-04
Smart transplant+: A HyCARE hybrid AI-cloud framework for intelligent donor-recipient matching, workflow automation, and post-transplant optimization.
Transplant immunology, 94:102332 pii:S0966-3274(25)00160-1 [Epub ahead of print].
Organ transplantation is a life-saving medical intervention to reverse end-stage organ failure. Despite its life-saving potential, organ transplantation faces inefficiencies like organ shortages, long wait times, and rejection risks due to manual, clinically limited donor-recipient matching. The rapid growth of AI and cloud computing offers new opportunities to enhance organ transplantation. This study proposes Smart Transplant+, a HyCARE system enabling intelligent matching, decision-making, and process automation. The architecture leverages a large Organ Transplant Dataset and advanced methods such as Feedforward Neural Networks and Genetic Algorithms to maximize donor-recipient matching. Gated Recurrent Units are utilized in pre-transplant risk prediction, and post-transplant care is augmented with real-time tracking by IoT-based wearable sensors. The system has been programmed using Python, along with software tools like TensorFlow for machine learning and AES encryption for secure data storage and transmission. The Smart Transplant+ system achieves 95-98% accuracy, higher than existing methods, in identifying suitable donors and recipients and the potential for successful transplantation, and greatly enhances organ transplant efficiency and success rate. This study illustrates the revolutionary potential of synergizing IoT, cloud technology, and AI to optimize transplant care and improve outcomes.
Additional Links: PMID-41317747
@article {pmid41317747,
year = {2025},
author = {Pulakhandam, W and Chaluvadi, A and Vallu, VR and Padmavathy, R},
title = {Smart transplant+: A HyCARE hybrid AI-cloud framework for intelligent donor-recipient matching, workflow automation, and post-transplant optimization.},
journal = {Transplant immunology},
volume = {94},
number = {},
pages = {102332},
doi = {10.1016/j.trim.2025.102332},
pmid = {41317747},
issn = {1878-5492},
abstract = {Organ transplantation is a life-saving medical intervention to reverse end-stage organ failure. Despite its life-saving potential, organ transplantation faces inefficiencies like organ shortages, long wait times, and rejection risks due to manual, clinically limited donor-recipient matching. The rapid growth of AI and cloud computing offers new opportunities to enhance organ transplantation. This study proposes Smart Transplant+, a HyCARE system enabling intelligent matching, decision-making, and process automation. The architecture leverages a large Organ Transplant Dataset and advanced methods such as Feedforward Neural Networks and Genetic Algorithms to maximize donor-recipient matching. Gated Recurrent Units are utilized in pre-transplant risk prediction, and post-transplant care is augmented with real-time tracking by IoT-based wearable sensors. The system has been programmed using Python, along with software tools like TensorFlow for machine learning and AES encryption for secure data storage and transmission. The Smart Transplant+ system achieves 95-98% accuracy, higher than existing methods, in identifying suitable donors and recipients and the potential for successful transplantation, and greatly enhances organ transplant efficiency and success rate. This study illustrates the revolutionary potential of synergizing IoT, cloud technology, and AI to optimize transplant care and improve outcomes.},
}
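For intuition about the matching step, the sketch below treats donor-recipient pairing as a classical assignment problem over a compatibility matrix solved with the Hungarian algorithm. This deliberately replaces the paper's Feedforward Neural Network plus Genetic Algorithm pipeline with a simpler baseline, and the scores are fabricated.

# Assignment-based matching baseline (Hungarian algorithm), not the FNN + GA
# approach from the paper; the compatibility scores are fabricated.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
compatibility = rng.uniform(0, 1, size=(4, 6))   # 4 donors x 6 recipients (higher = better)

donor_idx, recipient_idx = linear_sum_assignment(compatibility, maximize=True)
for d, r in zip(donor_idx, recipient_idx):
    print(f"donor {d} -> recipient {r} (score {compatibility[d, r]:.2f})")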
RevDate: 2025-11-29
The construction of an integrated cloud network digital intelligence platform for rail transit based on artificial intelligence.
Scientific reports pii:10.1038/s41598-025-29732-6 [Epub ahead of print].
This study presents the design and validation of a closed-loop control platform for rail transit construction. The platform integrates multi-source data, enables real-time prediction, and supports AI-driven scheduling, with strategy execution and feedback implemented via digital twins. A three-layer architecture is constructed, comprising edge sensing, cloud computing, and intelligent interaction. The system incorporates data fusion middleware, an AI decision engine, and a 3D digital twins module. The operational workflow follows the perception-fusion-prediction/optimization-execution/feedback loop: edge devices collect on-site status, cloud middleware integrates and serves the data, the AI engine performs prediction and scheduling optimization, and the digital twins layer validates strategies and dispatches execution to the front end. At the data modeling level, a Transformer-Encoder-based multimodal temporal fusion model is designed, and graph attention networks are employed for heterogeneous structure modeling. Apache Kafka and Flink handle streaming data to achieve high-frequency, low-latency processing. The intelligent analysis layer integrates a Spatio-Temporal Graph Convolutional Network for passenger flow and construction period prediction, a Shifted Window Transformer for image recognition, and the Proximal Policy Optimization (PPO) algorithm for task scheduling optimization. Field tests in an urban rail construction project show that the platform maintains 91.6% accuracy in passenger flow prediction under high-concurrency conditions and achieves 98.2% accuracy in image recognition. PPO-based scheduling reduces average task completion time by 27.4%. The system sustains an average response latency of 280 ms, peak throughput of 27,000 messages per second, and over 95% closed-loop execution success rate. These results indicate that the platform meets its design targets in prediction accuracy, response latency, and scheduling efficiency under real-world conditions, providing a foundation for informatization and intelligent upgrading in urban rail transit.
Additional Links: PMID-41315657
@article {pmid41315657,
year = {2025},
author = {Wang, K and Zhou, X and Guan, J},
title = {The construction of an integrated cloud network digital intelligence platform for rail transit based on artificial intelligence.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-29732-6},
pmid = {41315657},
issn = {2045-2322},
abstract = {This study presents the design and validation of a closed-loop control platform for rail transit construction. The platform integrates multi-source data, enables real-time prediction, and supports AI-driven scheduling, with strategy execution and feedback implemented via digital twins. A three-layer architecture is constructed, comprising edge sensing, cloud computing, and intelligent interaction. The system incorporates data fusion middleware, an AI decision engine, and a 3D digital twins module. The operational workflow follows the perception-fusion-prediction/optimization-execution/feedback loop: edge devices collect on-site status, cloud middleware integrates and serves the data, the AI engine performs prediction and scheduling optimization, and the digital twins layer validates strategies and dispatches execution to the front end. At the data modeling level, a Transformer-Encoder-based multimodal temporal fusion model is designed, and graph attention networks are employed for heterogeneous structure modeling. Apache Kafka and Flink handle streaming data to achieve high-frequency, low-latency processing. The intelligent analysis layer integrates a Spatio-Temporal Graph Convolutional Network for passenger flow and construction period prediction, a Shifted Window Transformer for image recognition, and the Proximal Policy Optimization (PPO) algorithm for task scheduling optimization. Field tests in an urban rail construction project show that the platform maintains 91.6% accuracy in passenger flow prediction under high-concurrency conditions and achieves 98.2% accuracy in image recognition. PPO-based scheduling reduces average task completion time by 27.4%. The system sustains an average response latency of 280 ms, peak throughput of 27,000 messages per second, and over 95% closed-loop execution success rate. These results indicate that the platform meets its design targets in prediction accuracy, response latency, and scheduling efficiency under real-world conditions, providing a foundation for informatization and intelligent upgrading in urban rail transit.},
}
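The edge-to-cloud ingestion step described above, edge sensors publishing to Kafka for downstream Flink analytics, can be sketched with the kafka-python client. The broker address, topic name, and message fields are illustrative placeholders, not the platform's actual configuration.

# Kafka producer sketch for edge telemetry; broker, topic, and fields are
# illustrative placeholders only.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {
    "sensor_id": "edge-gate-07",
    "timestamp": time.time(),
    "passenger_count": 42,
    "construction_zone": "segment-3",
}
producer.send("rail-edge-telemetry", reading)
producer.flush()   # block until the broker acknowledges the batch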
RevDate: 2025-12-01
A deep learning-based intelligent curriculum system for enhancing public music education: a case study across three universities in Southwest China.
Scientific reports, 15(1):42798.
Responding to national aesthetic education reforms, this study introduces a deep learning-driven platform to enhance public music education in Southwest China's universities. Utilizing LSTM and Transformer models, the system analyzes real-time student learning, predicts mastery trends, and delivers personalized feedback via a cloud-based interface. A semester-long experiment across Guizhou Minzu University, Guizhou University, and Xichang University compared three groups: traditional instruction, MOOC-based hybrid teaching, and AI-enhanced personalized learning. The AI group achieved 32% higher post-test mastery scores, with predictive models maintaining high accuracy (RMSE < 0.15). The platform supports adaptive assessments, intelligent feedback, and instructional decision-making, offering a scalable solution for AI integration in arts education, particularly in culturally diverse, data-scarce settings. This work informs policymakers and developers aiming to modernize aesthetic education through advanced computing.
Additional Links: PMID-41315653
@article {pmid41315653,
year = {2025},
author = {Du, H and Butkaew, P},
title = {A deep learning-based intelligent curriculum system for enhancing public music education: a case study across three universities in Southwest China.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {42798},
pmid = {41315653},
issn = {2045-2322},
support = {YBS202324//Research on the Efficiency and Improvement Path of Public Music Education in Comprehensive Colleges and Universities in Southwest China/ ; },
abstract = {Responding to national aesthetic education reforms, this study introduces a deep learning-driven platform to enhance public music education in Southwest China's universities. Utilizing LSTM and Transformer models, the system analyzes real-time student learning, predicts mastery trends, and delivers personalized feedback via a cloud-based interface. A semester-long experiment across Guizhou Minzu University, Guizhou University, and Xichang University compared three groups: traditional instruction, MOOC-based hybrid teaching, and AI-enhanced personalized learning. The AI group achieved 32% higher post-test mastery scores, with predictive models maintaining high accuracy (RMSE < 0.15). The platform supports adaptive assessments, intelligent feedback, and instructional decision-making, offering a scalable solution for AI integration in arts education, particularly in culturally diverse, data-scarce settings. This work informs policymakers and developers aiming to modernize aesthetic education through advanced computing.},
}
RevDate: 2026-01-01
Modifier guided resilient CNN inference enables fault-tolerant edge collaboration for IoT.
Scientific reports, 15(1):45458.
In resource-constrained Internet of Things (IoT) scenarios, implementing robust and accurate deep learning inference is problematic due to device failures, limited computing power, and privacy concerns. We present a resilient, completely edge-based distributed convolutional neural network (CNN) architecture that eliminates cloud dependencies while enabling accurate and fault-tolerant inference. At its core is a lightweight Modifier Module deployed at the edge, which synthesizes predictions for failing devices by pooling peer CNN outputs and weights. This dynamic mechanism is trained via a novel fail-simulation technique, allowing it to mimic missing outputs in real-time without model duplication or cloud fallback. We assess our methodology using MNIST and CIFAR-10 datasets under both homogeneous and heterogeneous data partitions, with up to five simultaneous device failures. The system displays up to 1.5% absolute accuracy improvement, 30% error rate reduction, and stable operation even with over 80% device dropout, exceeding ensemble, dropout, and federated baselines. Our strategy combines significant statistical significance, low resource utilization (~ 15 KB per model), and real-time responsiveness, making it well-suited for safety-critical IoT installations where cloud access is infeasible.
Additional Links: PMID-41310049
@article {pmid41310049,
year = {2025},
author = {Jamshidi, O and Abbasi, M and Ramazani, A and Salimi Shahraki, A and Taherkordi, A},
title = {Modifier guided resilient CNN inference enables fault-tolerant edge collaboration for IoT.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {45458},
pmid = {41310049},
issn = {2045-2322},
abstract = {In resource-constrained Internet of Things (IoT) scenarios, implementing robust and accurate deep learning inference is problematic due to device failures, limited computing power, and privacy concerns. We present a resilient, completely edge-based distributed convolutional neural network (CNN) architecture that eliminates cloud dependencies while enabling accurate and fault-tolerant inference. At its core is a lightweight Modifier Module deployed at the edge, which synthesizes predictions for failing devices by pooling peer CNN outputs and weights. This dynamic mechanism is trained via a novel fail-simulation technique, allowing it to mimic missing outputs in real-time without model duplication or cloud fallback. We assess our methodology using MNIST and CIFAR-10 datasets under both homogeneous and heterogeneous data partitions, with up to five simultaneous device failures. The system displays up to 1.5% absolute accuracy improvement, 30% error rate reduction, and stable operation even with over 80% device dropout, exceeding ensemble, dropout, and federated baselines. Our strategy combines significant statistical significance, low resource utilization (~ 15 KB per model), and real-time responsiveness, making it well-suited for safety-critical IoT installations where cloud access is infeasible.},
}
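A toy example clarifies the recovery idea: when one edge device drops out, pool the surviving devices' class-probability outputs to stand in for the missing prediction. The confidence-weighted average below is a crude stand-in for the learned Modifier Module, and all numbers are synthetic.

# Confidence-weighted pooling of peer predictions; a simplified stand-in for
# the paper's learned Modifier Module, with synthetic numbers.
import numpy as np

peer_outputs = {                      # softmax outputs from surviving devices
    "device_1": np.array([0.70, 0.20, 0.10]),
    "device_2": np.array([0.60, 0.30, 0.10]),
    "device_4": np.array([0.65, 0.25, 0.10]),
}
peer_confidence = {k: v.max() for k, v in peer_outputs.items()}   # crude weights

total = sum(peer_confidence.values())
substitute = sum(v * (peer_confidence[k] / total) for k, v in peer_outputs.items())
print("synthesized prediction for failed device_3:", np.round(substitute, 3),
      "-> class", int(substitute.argmax()))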
RevDate: 2025-11-30
Hybrid modeling and rapid prototyping technology based on the geomagic system.
Scientific reports, 15(1):42456 pii:10.1038/s41598-025-26566-0.
The structural characteristics of gear parts are analyzed, and an appropriate point cloud processing flow is formulated. Taking the Geomagic system as the computing platform and spur gears and spiral bevel gears as examples, forward and reverse hybrid modelling is carried out, and a solid model that meets the accuracy requirements is obtained, which verifies the effectiveness of the hybrid modelling. A 3D printing process is then carried out on the generated solid model, and the corresponding process parameters are set to obtain a feasible physical model. This hybrid modelling + rapid prototyping solution can effectively improve the design efficiency of products, reduce product development costs, and improve the competitiveness of enterprises.
Additional Links: PMID-41309939
@article {pmid41309939,
year = {2025},
author = {Yin, H and Ding, Y and Long, C and Wang, L and Jiang, Z and Wang, Z and Zhang, J and Yang, Y and Wu, G and Li, X},
title = {Hybrid modeling and rapid prototyping technology based on the geomagic system.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {42456},
doi = {10.1038/s41598-025-26566-0},
pmid = {41309939},
issn = {2045-2322},
support = {KJQN202401341//Science and Technology Research Program of Chongqing Municipal Education Commission/ ; 2024yc-cxfz30079//Technology Innovation and Application Development Project of Chongqing Yongchuan District Science and Technology Bureau/ ; 2024yc-cxfz30073//Technology Innovation and Application Development Project of Chongqing Yongchuan District Science and Technology Bureau/ ; KJQN202001336//Technology Research Program of Chongqing Municipal Education Commission/ ; KJQN202301311//Technology Research Program of Chongqing Municipal Education Commission/ ; CSTB2022NSCQ-MSX0352//Natural Science Foundation of Chongqing/ ; },
abstract = {The structural characteristics of gear parts are analyzed, and appropriate point cloud processing flow is formulated. Taking the Geomagic system as the computing platform and taking spur gears and spiral bevel gear gears as examples, the forward and reverse hybrid modelling is carried out, and a solid model that meets the accuracy requirements is obtained, which verifies the effectiveness of the hybrid modelling. A 3D printing process is then carried out on the generated solid model, and the corresponding process parameters are set to obtain a feasible physical model. This hybrid modelling + rapid prototyping solution can effectively improve the design efficiency of products, reduce product development costs, and improve the competitiveness of enterprises.},
}
RevDate: 2025-12-15
CmpDate: 2025-11-27
GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.
JMIR formative research, 9:e79038.
BACKGROUND: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity.
OBJECTIVE: This study aimed to develop a secure, artificial intelligence-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements.
METHODS: GrantCheck (University of Massachusetts Chan Medical School) was built on a private Amazon Web Services virtual private cloud, integrating a rule-based natural language processing engine with large language models accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale.
RESULTS: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.00, recall of 0.73, and an F1-score of 0.84-outperforming general-purpose models including GPT-4o (OpenAI; F1=0.43), Deepseek R1 (High-Flyer; F1=0.40), Llama 3.1 (Meta AI; F1=0.27), Gemini 2.5 Flash (Google; F1=0.58), and even Gemini 2.5 Pro (Google; F1=0.72). Usability testing among 25 faculty and staff yielded a mean System Usability Scale score of 85.9 (SD 13.4), indicating high user satisfaction and strong workflow integration.
CONCLUSIONS: GrantCheck demonstrates the feasibility of deploying institutionally hosted, artificial intelligence-driven systems to support compliant and researcher-friendly grant writing. Beyond administrative efficiency, such systems can indirectly safeguard public health research continuity by minimizing grant delays and funding losses caused by language-related policy changes. By maintaining compliance without suppressing scientific rigor or inclusivity, GrantCheck helps protect the pipeline of research that advances biomedical discovery, health equity, and patient outcomes. This capability is particularly relevant for proposals in sensitive domains-such as social determinants of health, behavioral medicine, and community-based research-that are most vulnerable to evolving policy restrictions. As a proof-of-concept development study, our implementation is tailored to one institution's policy environment and security infrastructure, and findings should be interpreted as preliminary rather than universally generalizable.
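As a rough illustration of the rule-based half of such a hybrid pipeline, the sketch below scans a draft for flagged terms with regular expressions and attaches a suggested rewording. The term list, the suggestions, and the scan_document helper are hypothetical, and the LLM generation and validation steps described in METHODS are only indicated in a comment.

import re

# Illustrative only: the real GrantCheck term list and prompts are not public.
FLAGGED_TERMS = {
    "underserved populations": "populations with limited access to care",
    "health equity": "fair access to health outcomes",
}

def scan_document(text, flagged=FLAGGED_TERMS):
    """Rule-based pass: locate flagged terms and propose rewordings.
    A production pipeline would send each hit to an LLM (e.g. via a cloud
    inference API) and validate the suggestion before reporting it."""
    findings = []
    for term, suggestion in flagged.items():
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            findings.append({"term": m.group(0), "span": m.span(), "suggestion": suggestion})
    return findings

sample = "Our aim is to advance health equity for underserved populations."
for hit in scan_document(sample):
    print(f"{hit['term']!r} at {hit['span']} -> {hit['suggestion']!r}")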
Additional Links: PMID-41308189
@article {pmid41308189,
year = {2025},
author = {Shi, Q and Oztekin, A and Matthew, G and Bortle, J and Jenkins, H and Wong, SK and Langlois, P and Zaki, A and Coleman, B and Luzuriaga, K and Zai, AH},
title = {GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.},
journal = {JMIR formative research},
volume = {9},
number = {},
pages = {e79038},
pmid = {41308189},
issn = {2561-326X},
mesh = {Humans ; *Artificial Intelligence ; *Natural Language Processing ; *Writing ; *Research Support as Topic ; },
abstract = {BACKGROUND: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity.
OBJECTIVE: This study aimed to develop a secure, artificial intelligence-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements.
METHODS: GrantCheck (University of Massachusetts Chan Medical School) was built on a private Amazon Web Services virtual private cloud, integrating a rule-based natural language processing engine with large language models accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale.
RESULTS: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.00, recall of 0.73, and an F1-score of 0.84-outperforming general-purpose models including GPT-4o (OpenAI; F1=0.43), Deepseek R1 (High-Flyer; F1=0.40), Llama 3.1 (Meta AI; F1=0.27), Gemini 2.5 Flash (Google; F1=0.58), and even Gemini 2.5 Pro (Google; F1=0.72). Usability testing among 25 faculty and staff yielded a mean System Usability Scale score of 85.9 (SD 13.4), indicating high user satisfaction and strong workflow integration.
CONCLUSIONS: GrantCheck demonstrates the feasibility of deploying institutionally hosted, artificial intelligence-driven systems to support compliant and researcher-friendly grant writing. Beyond administrative efficiency, such systems can indirectly safeguard public health research continuity by minimizing grant delays and funding losses caused by language-related policy changes. By maintaining compliance without suppressing scientific rigor or inclusivity, GrantCheck helps protect the pipeline of research that advances biomedical discovery, health equity, and patient outcomes. This capability is particularly relevant for proposals in sensitive domains-such as social determinants of health, behavioral medicine, and community-based research-that are most vulnerable to evolving policy restrictions. As a proof-of-concept development study, our implementation is tailored to one institution's policy environment and security infrastructure, and findings should be interpreted as preliminary rather than universally generalizable.},
}
MeSH Terms:
Humans
*Artificial Intelligence
*Natural Language Processing
*Writing
*Research Support as Topic
RevDate: 2025-11-30
Edge-Computing Smart Irrigation Controller Using LoRaWAN and LSTM for Predictive Controlled Deficit Irrigation.
Sensors (Basel, Switzerland), 25(22):.
Enhancing sustainability in agriculture has become a significant challenge: in the current context of climate change, particularly in countries of the Mediterranean area, the amount of water available for irrigation is becoming increasingly limited. Automating irrigation processes using affordable sensors can help save irrigation water and produce almonds more sustainably. This work presents an IoT-enabled edge computing model for smart irrigation systems focused on precision agriculture. The model combines IoT sensors, hybrid machine learning algorithms, and edge computing to predict soil moisture and manage Controlled Deficit Irrigation (CDI) strategies in high-density almond tree fields, applying reductions of 35% of ETc (crop evapotranspiration). By gathering and analyzing meteorological, soil moisture, and crop data, a soft ML (Machine Learning) model has been developed to enhance irrigation practices and identify crop anomalies in real time without cloud computing. This methodology has the potential to transform agricultural practices by enabling precise and efficient water management, even in remote locations without internet access. This study represents an initial step toward implementing ML algorithms for CDI irrigation strategies.
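As a hedged illustration of the controlled-deficit decision logic described above, the sketch below pairs a placeholder soil-moisture predictor with a simple dosing rule that targets 65% of ETc. The predictor, thresholds, and function names are assumptions standing in for the paper's LSTM and LoRaWAN data path.

import numpy as np

ETC_REDUCTION = 0.35   # the study's controlled-deficit target: irrigate at 65% of ETc

def predict_soil_moisture(history):
    """Placeholder predictor. The paper trains an LSTM on LoRaWAN sensor data;
    a simple linear extrapolation stands in here so the sketch is self-contained."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * len(history) + intercept

def irrigation_dose(etc_mm, predicted_moisture, wilting_point=0.12, field_capacity=0.33):
    """Decide the irrigation dose (mm) for the next cycle under CDI (illustrative rule)."""
    deficit_dose = etc_mm * (1.0 - ETC_REDUCTION)
    if predicted_moisture >= 0.9 * field_capacity:
        return 0.0                # soil already near field capacity: skip irrigation
    if predicted_moisture <= wilting_point:
        return etc_mm             # near wilting point: fall back to full ETc
    return deficit_dose

history = [0.26, 0.25, 0.24, 0.23, 0.22]   # volumetric soil moisture readings
m_next = predict_soil_moisture(history)
print(f"predicted moisture: {m_next:.3f}, dose: {irrigation_dose(5.0, m_next):.2f} mm")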
Additional Links: PMID-41305289
@article {pmid41305289,
year = {2025},
author = {Baseca, CC and Dionísio, R and Ribeiro, F and Metrôlho, J},
title = {Edge-Computing Smart Irrigation Controller Using LoRaWAN and LSTM for Predictive Controlled Deficit Irrigation.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {22},
pages = {},
pmid = {41305289},
issn = {1424-8220},
abstract = {Enhancing sustainability in agriculture has become a significant challenge today where in the current context of climate change, particularly in countries of the Mediterranean area, the amount of water available for irrigation is becoming increasingly limited. Automating irrigation processes using affordable sensors can help save irrigation water and produce almonds more sustainably. This work presents an IoT-enabled edge computing model for smart irrigation systems focused on precision agriculture. This model combines IoT sensors, hybrid machine learning algorithms, and edge computing to predict soil moisture and manage Controlled Deficit Irrigation (CDI) strategies in high density almond tree fields applying reductions of 35% ETc (crop evapotranspiration). By gathering and analyzing meteorological, humidity soil, and crop data, a soft ML (Machine Learning) model has been developed to enhance irrigation practices and identify crop anomalies in real-time without cloud computing. This methodology has the potential to transform agricultural practices by enabling precise and efficient water management, even in remote locations with lack of internet access. This study represents an initial step toward implementing ML algorithms for irrigation CDI strategies.},
}
RevDate: 2025-11-30
Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid.
Sensors (Basel, Switzerland), 25(22):.
Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise maps. This paper presents a high-accuracy online mapping method comprising weight matching LiDAR-IMU-GNSS odometry and an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching. Weight feature point matching enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, eventually filtering out object-level highly dynamic point clouds during the online mapping process. Compared to the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy on the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meanwhile, the proposed highly dynamic point cloud filtering algorithm exhibits better detection precision than Removert and ERASOR. Furthermore, a high-accuracy online map is built from a real-time dataset with comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to the field of high-accuracy online mapping, especially in filtering highly dynamic objects.
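The pseudo-occupancy idea can be illustrated with a very small Python sketch: voxelize the current frame and the prior submap, compare per-voxel occupancy counts, and flag voxels whose ratio exceeds a threshold. The voxel size, threshold, and helper names are illustrative; the paper's probability update and curved voxel clustering are not reproduced here.

import numpy as np
from collections import Counter

VOXEL = 0.5  # grid resolution in metres (illustrative)

def voxel_keys(points, voxel=VOXEL):
    """Map 3D points (N, 3) to integer voxel indices."""
    return [tuple(k) for k in np.floor(points / voxel).astype(int)]

def dynamic_voxels(frame_pts, submap_pts, ratio_thresh=3.0):
    """Flag voxels whose occupancy in the current frame is much higher than in
    the prior submap (a crude stand-in for the pseudo-occupancy update)."""
    frame_occ = Counter(voxel_keys(frame_pts))
    submap_occ = Counter(voxel_keys(submap_pts))
    flagged = set()
    for key, n_now in frame_occ.items():
        pseudo_ratio = n_now / (submap_occ.get(key, 0) + 1)   # +1 avoids division by zero
        if pseudo_ratio > ratio_thresh:
            flagged.add(key)
    return flagged

rng = np.random.default_rng(1)
static_scene = rng.uniform(0, 10, size=(2000, 3))
moving_car = rng.uniform(4, 5, size=(300, 3))          # appears only in the current frame
frame = np.vstack([static_scene[:1500], moving_car])
print(f"{len(dynamic_voxels(frame, static_scene))} voxels flagged as dynamic")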
Additional Links: PMID-41305080
@article {pmid41305080,
year = {2025},
author = {Zhao, X and Cao, X and Ding, M and Jiang, D and Wei, C},
title = {Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {22},
pages = {},
pmid = {41305080},
issn = {1424-8220},
support = {8KD006(2024)-2//the State Administration of Science, Technology and Industry for National Defense Project/ ; },
abstract = {Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise map. This paper presents a high accuracy online mapping including weight matching LiDAR-IMU-GNSS odometry and an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching. Weight feature point matching enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, eventually leading to filtering out the object-level highly dynamic point clouds during the online mapping process. Compared to the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy in the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meantime, the proposed highly dynamic point cloud filtering algorithm exhibits better detection precision than the performance of Removert and ERASOR. Furthermore, the high-accuracy online mapping is built from a real-time dataset with the comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to the field of high-accuracy online mapping, especially in filtering highly dynamic objects in an advanced way.},
}
RevDate: 2025-11-30
CmpDate: 2025-11-27
Transforming Smart Healthcare Systems with AI-Driven Edge Computing for Distributed IoMT Networks.
Bioengineering (Basel, Switzerland), 12(11):.
The Internet of Medical Things (IoMT) with edge computing provides opportunities for the rapid growth and development of smart healthcare systems. These systems consist of wearable sensors, physical objects, and electronic devices that collect health data, perform local processing, and later forward it to a cloud platform for further analysis. Most existing approaches focus on diagnosing health conditions and reporting them to medical experts for personalized treatment. However, they overlook the need for dynamic approaches that address the unpredictable nature of the healthcare system, which relies on public infrastructure accessible to all connected devices. Furthermore, the rapid processing of health data on constrained devices often leads to uneven load distribution and affects the system's responsiveness in critical circumstances. Our research study proposes a model based on AI-driven and edge computing technologies to provide a lightweight and innovative healthcare system. It enhances the learning capabilities of the system and efficiently detects network anomalies in a distributed IoMT network, without incurring additional overhead on a bounded system. The proposed model is verified and tested through simulations using synthetic data, and the obtained results demonstrate improvements over related solutions of 53% in energy consumption, 46% in latency, 52% in packet loss rate, 56% in network throughput, and 48% in overhead.
Additional Links: PMID-41301188
@article {pmid41301188,
year = {2025},
author = {Almufareh, MF and Humayun, M and Haseeb, K},
title = {Transforming Smart Healthcare Systems with AI-Driven Edge Computing for Distributed IoMT Networks.},
journal = {Bioengineering (Basel, Switzerland)},
volume = {12},
number = {11},
pages = {},
pmid = {41301188},
issn = {2306-5354},
support = {GSSR-2025-02-01292//Deanship of Graduate Studies and Scientific Research at Jouf University/ ; },
abstract = {The Internet of Medical Things (IoMT) with edge computing provides opportunities for the rapid growth and development of a smart healthcare system (SHM). It consists of wearable sensors, physical objects, and electronic devices that collect health data, perform local processing, and later forward it to a cloud platform for further analysis. Most existing approaches focus on diagnosing health conditions and reporting them to medical experts for personalized treatment. However, they overlook the need to provide dynamic approaches to address the unpredictable nature of the healthcare system, which relies on public infrastructure that all connected devices can access. Furthermore, the rapid processing of health data on constrained devices often leads to uneven load distribution and affects the system's responsiveness in critical circumstances. Our research study proposes a model based on AI-driven and edge computing technologies to provide a lightweight and innovative healthcare system. It enhances the learning capabilities of the system and efficiently detects network anomalies in a distributed IoMT network, without incurring additional overhead on a bounded system. The proposed model is verified and tested through simulations using synthetic data, and the obtained results prove its efficacy in terms of energy consumption by 53%, latency by 46%, packet loss rate by 52%, network throughput by 56%, and overhead by 48% than related solutions.},
}
RevDate: 2026-01-01
Dynamic multi objective task scheduling in cloud computing using reinforcement learning for energy and cost optimization.
Scientific reports, 15(1):45387.
Efficient task scheduling in cloud computing is crucial for managing dynamic workloads while balancing performance, energy efficiency, and operational costs. This paper introduces a novel Reinforcement Learning-Driven Multi-Objective Task Scheduling (RL-MOTS) framework that leverages a Deep Q-Network (DQN) to dynamically allocate tasks across virtual machines. By integrating multi-objective optimization, RL-MOTS simultaneously minimizes energy consumption, reduces costs, and ensures Quality of Service (QoS) under varying workload conditions. The framework employs a reward function that adapts to real-time resource utilization, task deadlines, and energy metrics, enabling robust performance in heterogeneous cloud environments. Evaluations conducted using a simulated cloud platform demonstrate that RL-MOTS achieves up to 27% reduction in energy consumption and 18% improvement in cost efficiency compared to state-of-the-art heuristic and metaheuristic methods, while meeting stringent deadline constraints. Its adaptability to hybrid cloud-edge architectures makes RL-MOTS a forward-looking solution for next-generation distributed computing systems.
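A minimal sketch of a multi-objective scheduling reward of the kind described above is shown below; the weights, the assumption that inputs are already scaled to comparable ranges, and the function signature are illustrative and are not the paper's exact formulation.

def scheduling_reward(energy, cost, finish_time, deadline,
                      w_energy=0.4, w_cost=0.3, w_qos=0.3):
    """Illustrative multi-objective reward: combine (pre-scaled) energy and cost
    terms with a deadline penalty into one scalar a DQN agent can maximise."""
    deadline_penalty = max(0.0, finish_time - deadline)
    return -(w_energy * energy + w_cost * cost + w_qos * deadline_penalty)

# example: the same task placed on two candidate virtual machines
print(scheduling_reward(energy=0.8, cost=0.2, finish_time=9.0, deadline=10.0))
print(scheduling_reward(energy=0.5, cost=0.5, finish_time=12.0, deadline=10.0))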
Additional Links: PMID-41298680
@article {pmid41298680,
year = {2025},
author = {Yu, X and Mi, J and Tang, L and Long, L and Qin, X},
title = {Dynamic multi objective task scheduling in cloud computing using reinforcement learning for energy and cost optimization.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {45387},
pmid = {41298680},
issn = {2045-2322},
abstract = {Efficient task scheduling in cloud computing is crucial for managing dynamic workloads while balancing performance, energy efficiency, and operational costs. This paper introduces a novel Reinforcement Learning-Driven Multi-Objective Task Scheduling (RL-MOTS) framework that leverages a Deep Q-Network (DQN) to dynamically allocate tasks across virtual machines. By integrating multi-objective optimization, RL-MOTS simultaneously minimizes energy consumption, reduces costs, and ensures Quality of Service (QoS) under varying workload conditions. The framework employs a reward function that adapts to real-time resource utilization, task deadlines, and energy metrics, enabling robust performance in heterogeneous cloud environments. Evaluations conducted using a simulated cloud platform demonstrate that RL-MOTS achieves up to 27% reduction in energy consumption and 18% improvement in cost efficiency compared to state-of-the-art heuristic and metaheuristic methods, while meeting stringent deadline constraints. Its adaptability to hybrid cloud-edge architectures makes RL-MOTS a forward-looking solution for next-generation distributed computing systems.},
}
RevDate: 2025-12-15
CmpDate: 2025-12-15
Advancements and challenges in bioinformatics tools for microbial genomics in the last decade: Toward the smart integration of bioinformatics tools, digital resources, and emerging technologies for the analysis of complex biological data.
Infection, genetics and evolution : journal of molecular epidemiology and evolutionary genetics in infectious diseases, 136:105859.
Over the past decade, microbial genomics has been transformed by advances in sequencing technologies and bioinformatics, enabling the transition from targeted gene markers to complete genome assemblies and ecological scale metagenomic surveys. This review presents a comprehensive overview of the bioinformatics pipelines that structure this field, from sample preparation, PCR amplification, and next-generation sequencing (NGS) to read preprocessing, genome assembly, polishing, structural and functional annotation, and submission to public databases. We highlight the major tools that have become standards at each stage, including FastQC, SPAdes, Prokka, Bakta, CARD, GTDB-Tk, QIIME 2, and Kraken2, while also emphasizing recent innovations such as hybrid assemblers, ontology-driven annotation frameworks, and automated workflows (nf-core, Bactopia). Applications extend across microbiology, from antimicrobial resistance surveillance and phylogenetic classification to ecological studies, exemplified here by three case studies: termite gut microbiota profiling by 16S metabarcoding, the description of new Bartonella species from bats, and the genomic characterization of rare Salmonella enterica serovars from primates. Despite these advances, persistent challenges remain, including incomplete and biased reference databases, computational bottlenecks, and economic disparities in sequencing and storage capacities. In response, international initiatives increasingly promote open, interoperable, and reusable bioinformatics infrastructures. Conforming to the Findable, Accessible, Interoperable, Reusable (FAIR) principles and global frameworks such as Global Alliance for Genomics and Health (GA4GH), these efforts are driving greater standardization, transparency, and data sharing across the microbial genomics community. Future perspectives point toward the integration of artificial intelligence, long-read and telomere-to-telomere (T2T) sequencing, cloud-native infrastructures, and even quantum computing, paving the way for a predictive, reproducible, and globally inclusive microbial genomics.
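For readers new to these tools, the short sketch below chains three of the named CLIs (FastQC, SPAdes, Prokka) from Python. File names are placeholders and flags are kept to a minimum; consult each tool's documentation, or use nf-core/Bactopia workflows as the review recommends, for production use.

import subprocess
from pathlib import Path

# Minimal sketch of the reads -> assembly -> annotation chain with stock CLIs.
R1, R2 = Path("sample_R1.fastq.gz"), Path("sample_R2.fastq.gz")   # placeholder inputs

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["fastqc", str(R1), str(R2)])                                    # read quality control
run(["spades.py", "-1", str(R1), "-2", str(R2), "-o", "assembly"])   # de novo assembly
run(["prokka", "assembly/contigs.fasta", "--outdir", "annotation",   # structural/functional annotation
     "--prefix", "sample"])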
Additional Links: PMID-41297621
@article {pmid41297621,
year = {2025},
author = {Houmenou, CT and Sokhna, C and Fenollar, F and Mediannikov, O},
title = {Advancements and challenges in bioinformatics tools for microbial genomics in the last decade: Toward the smart integration of bioinformatics tools, digital resources, and emerging technologies for the analysis of complex biological data.},
journal = {Infection, genetics and evolution : journal of molecular epidemiology and evolutionary genetics in infectious diseases},
volume = {136},
number = {},
pages = {105859},
doi = {10.1016/j.meegid.2025.105859},
pmid = {41297621},
issn = {1567-7257},
mesh = {*Computational Biology/methods ; *Genomics/methods ; High-Throughput Nucleotide Sequencing ; *Metagenomics/methods ; Humans ; Animals ; },
abstract = {Over the past decade, microbial genomics has been transformed by advances in sequencing technologies and bioinformatics, enabling the transition from targeted gene markers to complete genome assemblies and ecological scale metagenomic surveys. This review presents a comprehensive overview of the bioinformatics pipelines that structure this field, from sample preparation, PCR amplification, and next-generation sequencing (NGS) to read preprocessing, genome assembly, polishing, structural and functional annotation, and submission to public databases. We highlight the major tools that have become standards at each stage, including FastQC, SPAdes, Prokka, Bakta, CARD, GTDB-Tk, QIIME 2, and Kraken2, while also emphasizing recent innovations such as hybrid assemblers, ontology-driven annotation frameworks, and automated workflows (nf-core, Bactopia). Applications extend across microbiology, from antimicrobial resistance surveillance and phylogenetic classification to ecological studies, exemplified here by three case studies: termite gut microbiota profiling by 16S metabarcoding, the description of new Bartonella species from bats, and the genomic characterization of rare Salmonella enterica serovars from primates. Despite these advances, persistent challenges remain, including incomplete and biased reference databases, computational bottlenecks, and economic disparities in sequencing and storage capacities. In response, international initiatives increasingly promote open, interoperable, and reusable bioinformatics infrastructures. Conforming to the Findable, Accessible, Interoperable, Reusable (FAIR) principles and global frameworks such as Global Alliance for Genomics and Health (GA4GH), these efforts are driving greater standardization, transparency, and data sharing across the microbial genomics community. Future perspectives point toward the integration of artificial intelligence, long-read and telomere-to-telomere (T2T) sequencing, cloud-native infrastructures, and even quantum computing, paving the way for a predictive, reproducible, and globally inclusive microbial genomics.},
}
MeSH Terms:
*Computational Biology/methods
*Genomics/methods
High-Throughput Nucleotide Sequencing
*Metagenomics/methods
Humans
Animals
RevDate: 2025-11-28
CmpDate: 2025-11-26
Sentinel-2-Based Forest Health Survey of ICP Forests Level I and II Plots in Hungary.
Journal of imaging, 11(11):.
Forest damage has been increasingly recorded over the past decade in both Europe and Hungary, primarily due to prolonged droughts, causing a decline in forest health. In the framework of ICP Forests, the forest damage has been monitored for decades; however, it is labour-intensive and time-consuming. Satellite-based remote sensing offers a rapid and efficient method for assessing large-scale damage events, combining the ground-based ICP Forests datasets. This study utilised cloud computing and Sentinel-2 satellite imagery to monitor forest health and detect anomalies. Standardised NDVI (Z NDVI) maps were produced for the period from 2017 to 2023 to identify disturbances in the forest. The research focused on seven active ICP Forests Level II and 78 Level I plots in Hungary. Z NDVI values were divided into five categories based on damage severity, and there was agreement between Level II field data and satellite imagery. In 2017, severe damage was caused by late frost and wind; however, the forest recovered by 2018. Another decline was observed in 2021 due to wind and in 2022 due to drought. Data from the ICP Forests Level I plots, which represent forest condition in Hungary, indicated that 80% of the monitored stands were damaged, with 30% suffering moderate damage and 15% experiencing severe damage. Z NDVI classifications aligned with the field data, showing widespread forest damage across the country.
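The standardised NDVI anomaly used in this kind of study is straightforward to compute once a multi-year NDVI stack is available. The sketch below shows the per-pixel Z-score and an illustrative five-class severity mapping whose thresholds are assumptions, not the study's published class boundaries.

import numpy as np

def z_ndvi(ndvi_current, ndvi_history):
    """Standardised NDVI anomaly: how many standard deviations the current
    composite deviates from the multi-year mean for the same pixel/period."""
    mean = np.nanmean(ndvi_history, axis=0)
    std = np.nanstd(ndvi_history, axis=0)
    return (ndvi_current - mean) / np.where(std == 0, np.nan, std)

def damage_class(z):
    """Five illustrative severity classes; the study's exact thresholds may differ."""
    bins = [-np.inf, -2.0, -1.5, -1.0, -0.5, np.inf]
    labels = ["severe", "moderate", "slight", "marginal", "undamaged"]
    return labels[int(np.digitize(z, bins)) - 1]

history = np.array([[0.78, 0.80], [0.75, 0.79], [0.77, 0.81]])   # 3 years x 2 pixels
current = np.array([0.60, 0.80])
z = z_ndvi(current, history)
print(z.round(2), [damage_class(v) for v in z])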
Additional Links: PMID-41295130
@article {pmid41295130,
year = {2025},
author = {Molnár, T and Bolla, B and Szabó, O and Koltay, A},
title = {Sentinel-2-Based Forest Health Survey of ICP Forests Level I and II Plots in Hungary.},
journal = {Journal of imaging},
volume = {11},
number = {11},
pages = {},
pmid = {41295130},
issn = {2313-433X},
support = {TKP2021-NKTA-43//Ministry of Culture and Innovation of Hungary/ ; },
abstract = {Forest damage has been increasingly recorded over the past decade in both Europe and Hungary, primarily due to prolonged droughts, causing a decline in forest health. In the framework of ICP Forests, the forest damage has been monitored for decades; however, it is labour-intensive and time-consuming. Satellite-based remote sensing offers a rapid and efficient method for assessing large-scale damage events, combining the ground-based ICP Forests datasets. This study utilised cloud computing and Sentinel-2 satellite imagery to monitor forest health and detect anomalies. Standardised NDVI (Z NDVI) maps were produced for the period from 2017 to 2023 to identify disturbances in the forest. The research focused on seven active ICP Forests Level II and 78 Level I plots in Hungary. Z NDVI values were divided into five categories based on damage severity, and there was agreement between Level II field data and satellite imagery. In 2017, severe damage was caused by late frost and wind; however, the forest recovered by 2018. Another decline was observed in 2021 due to wind and in 2022 due to drought. Data from the ICP Forests Level I plots, which represent forest condition in Hungary, indicated that 80% of the monitored stands were damaged, with 30% suffering moderate damage and 15% experiencing severe damage. Z NDVI classifications aligned with the field data, showing widespread forest damage across the country.},
}
RevDate: 2025-11-28
Reinforcement learning based multi objective task scheduling for energy efficient and cost effective cloud edge computing.
Scientific reports, 15(1):41716.
The rapid proliferation of Internet of Things (IoT) devices and latency-sensitive applications has amplified the need for efficient task scheduling in hybrid cloud-edge environments. Traditional heuristic and metaheuristic algorithms often fall short in addressing the dynamic nature of workloads and the conflicting objectives of performance, energy efficiency, and cost-effectiveness. To overcome these challenges, this study introduces Reinforcement Learning-Based Multi-Objective Task Scheduling (RL-MOTS), a framework leveraging Deep Q-Networks (DQNs) for intelligent and adaptive resource allocation. The proposed model formulates scheduling as a Markov Decision Process, incorporating a priority-aware dynamic queueing mechanism and a multi-objective reward function that balances task latency, energy consumption, and operational costs. Additionally, the framework employs a state-reward tensor to capture trade-offs among objectives, enabling real-time decision-making across heterogeneous cloud and edge nodes. Comprehensive simulations using CloudSim validate the robustness of RL-MOTS under varying workload conditions. Compared to baseline strategies such as FCFS, Min-Min, and multi-objective heuristic models, RL-MOTS achieves up to 28% reduction in energy consumption, 20% improvement in cost efficiency, and significant reductions in makespan and deadline violations, while maintaining strict Quality of Service (QoS) requirements. The framework's adaptability to preemptive and non-preemptive scheduling further enhances its resilience and scalability. These findings establish RL-MOTS as a forward-looking solution for sustainable, cost-efficient, and performance-oriented computing in next-generation distributed systems. Future research will focus on integrating transfer learning and federated learning to increase scalability and privacy in large, decentralized environments, including those applicable to the medical industry.
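To complement the reward sketch given earlier for the companion RL-MOTS paper, the fragment below illustrates a priority-aware dynamic queue of the kind this abstract mentions, ordering tasks by criticality and laxity with a heap. The key design and task fields are assumptions for illustration, not the paper's mechanism.

import heapq, itertools

_counter = itertools.count()   # tie-breaker so heap entries never compare names

def laxity(deadline, remaining, now):
    """Slack before the deadline if the task started right now."""
    return deadline - now - remaining

def push(queue, name, deadline, remaining, critical, now=0.0):
    key = (0 if critical else 1, laxity(deadline, remaining, now), next(_counter))
    heapq.heappush(queue, (key, name))

def pop(queue):
    return heapq.heappop(queue)[1]

q = []
push(q, "video-analytics", deadline=50.0, remaining=20.0, critical=False)
push(q, "health-alert", deadline=10.0, remaining=2.0, critical=True)
push(q, "batch-report", deadline=300.0, remaining=60.0, critical=False)
print([pop(q) for _ in range(3)])   # critical, low-laxity work drains first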
Additional Links: PMID-41286289
@article {pmid41286289,
year = {2025},
author = {Zhang, W and Ou, H},
title = {Reinforcement learning based multi objective task scheduling for energy efficient and cost effective cloud edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41716},
pmid = {41286289},
issn = {2045-2322},
abstract = {The rapid proliferation of Internet of Things (IoT) devices and latency-sensitive applications has amplified the need for efficient task scheduling in hybrid cloud-edge environments. Traditional heuristic and metaheuristic algorithms often fall short in addressing the dynamic nature of workloads and the conflicting objectives of performance, energy efficiency, and cost-effectiveness. To overcome these challenges, this study introduces Reinforcement Learning-Based Multi-Objective Task Scheduling (RL-MOTS), a framework leveraging Deep Q-Networks (DQNs) for intelligent and adaptive resource allocation. The proposed model formulates scheduling as a Markov Decision Process, incorporating a priority-aware dynamic queueing mechanism and a multi-objective reward function that balances task latency, energy consumption, and operational costs. Additionally, the framework employs a state-reward tensor to capture trade-offs among objectives, enabling real-time decision-making across heterogeneous cloud and edge nodes. Comprehensive simulations using CloudSim validate the robustness of RL-MOTS under varying workload conditions. Compared to baseline strategies such as FCFS, Min-Min, and multi-objective heuristic models, RL-MOTS achieves up to 28% reduction in energy consumption, 20% improvement in cost efficiency, and significant reductions in makespan and deadline violations, while maintaining strict Quality of Service (QoS) requirements. The framework's adaptability to preemptive and non-preemptive scheduling further enhances its resilience and scalability. These findings establish RL-MOTS as a forward-looking solution for sustainable, cost-efficient, and performance-oriented computing in next-generation distributed systems. Future research will focus on integrating transfer learning and federated learning to increase scalability and privacy in large, decentralized environments, including those applicable to the medical industry.},
}
RevDate: 2025-12-05
Enhancing IIoT security through blockchain-enabled workload analysis in fog computing environments.
Scientific reports, 15(1):42898.
Robots and software are utilized in industrial automation to run machinery and processes in a variety of sectors. Numerous applications incorporate machine learning, the Internet of Things (IoT), and other methods to offer intelligent features that enhance user experience. Businesses and individuals can successfully accomplish both commercial and non-commercial requirements with the help of such technologies. Because traditional procedures are risky and inefficient, organisations are increasingly expected to automate industrial processes. The aim of this research is to propose a novel workload-analysis technique for fog networks together with a blockchain model that improves security for IIoT applications. Malicious activity in the IIoT network is analysed using a blockchain-based reinforcement Gaussian neural network, and workload analysis for the manufacturing industry is carried out using a fog-cloud virtual machine multilayer perceptron model. The experimental analysis is carried out on various manufacturing-industry security datasets and evaluated in terms of latency, QoS, accuracy, reliability, and data integrity.
Additional Links: PMID-41286215
@article {pmid41286215,
year = {2025},
author = {Samriya, JK and Kumar, A and Bhansali, A and Malik, M and Pan, SH and Arya, V and Alhalabi, W and Gupta, BB},
title = {Enhancing IIoT security through blockchain-enabled workload analysis in fog computing environments.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {42898},
pmid = {41286215},
issn = {2045-2322},
abstract = {Robots and software are utilized in industrial automation to run machinery and processes in a variety of sectors. Numerous applications incorporate machine learning, the Internet of Things (IoT), and other methods to offer clever features that enhance user experience. Businesses and individuals can successfully accomplish both commercial and noncommercial requirements with the help of such technologies. Due to high risk as well as inefficiency of traditional procedures, organisations are expected to automate industrial processes. Aim of this research is to propose novel technique in workload analysis for fog network and blockchain model in security improvement for IIoT application. Here the IIoT network malicious activity is analysed using blockchain reinforcement gaussian neural network. Then the manufacturing industry workload analysis is carried out using fog cloud based virtual machine multilayer perceptron model. The experimental analysis is carried out for various security dataset in manufacturing industry in terms of latency, QoS, accuracy, reliability, data integrity.},
}
RevDate: 2025-11-24
CmpDate: 2025-11-24
GWASHub: An Automated Cloud-Based Platform for Genome-Wide Association Study Meta-Analysis.
medRxiv : the preprint server for health sciences pii:2025.10.21.25338463.
Genome-wide association studies (GWAS) often aggregate data from millions of participants across multiple cohorts using meta-analysis to maximise power for genetic discovery. The increase in availability of genomic biobanks, together with a growing focus on phenotypic subgroups, genetic diversity, and sex-stratified analyses, has led GWAS meta-analyses to routinely produce hundreds of summary statistic files accompanied by detailed meta-data. Scalable infrastructures for data handling, quality control (QC), and meta-analysis workflows are essential to prevent errors, ensure reproducibility, and reduce the burden on researchers, allowing them to focus on downstream research and clinical translation. To address this need, we developed GWASHub, a secure cloud-based platform designed for the curation, processing and meta-analysis of GWAS summary statistics. GWASHub features i) private and secure project spaces, ii) automated file harmonisation and data validation, iii) GWAS meta-data capture, iv) customisable variant QC, v) GWAS meta-analysis, vi) analysis reporting and visualisation, and vii) results download. Users interact with the portal via an intuitive web interface built on Nuxt.js, a high-performance JavaScript framework. Data is securely managed through an Amazon Web Services (AWS) MySQL database and S3 block storage. Analysis jobs are distributed to AWS compute resources in a scalable fashion. The QC dashboard presents tabular and graphical QC outputs allowing manual review of individual datasets. Those passing QC are made available to the meta-analysis module. Individual datasets and meta-analysis results are available for download by project users with appropriate access permissions. In GWASHub, a "project" serves as a virtual workspace spanning an entire consortium, allowing individuals with different roles, such as data contributors (users) and project coordinators (main analysts), to collaborate securely under a unified framework. GWASHub has a flexible architecture to allow for ongoing development and incorporation of alternative quality control or meta-analysis procedures, to meet the specific needs of researchers. GWASHub was developed as a joint initiative by the HERMES Consortium and the Cardiovascular Knowledge Portal, and access to the platform is free and available upon request. GWASHub addresses a critical need in the genetics research community by providing a scalable, secure, and user-friendly platform for managing the complexity of large-scale GWAS meta-analyses. As the volume and diversity of GWAS data continue to grow, platforms like GWASHub may help to accelerate insights into the genetic architecture of complex traits.
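The core statistical step of such a platform, fixed-effect inverse-variance-weighted meta-analysis of per-variant summary statistics, can be written in a few lines. The sketch below shows the textbook formula and should not be taken as the exact method GWASHub applies.

import numpy as np

def fixed_effect_meta(betas, ses):
    """Inverse-variance-weighted fixed-effect meta-analysis for one variant:
    pooled effect, its standard error, and the resulting Z statistic."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    beta_meta = np.sum(w * betas) / np.sum(w)
    se_meta = np.sqrt(1.0 / np.sum(w))
    return beta_meta, se_meta, beta_meta / se_meta

# three cohorts reporting effect size and standard error for the same SNP
print(fixed_effect_meta(betas=[0.12, 0.08, 0.15], ses=[0.05, 0.04, 0.07]))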
Additional Links: PMID-41282854
@article {pmid41282854,
year = {2025},
author = {Sunderland, N and Hite, D and Smadbeck, P and Hoang, Q and Jang, DK and Tragante, V and Jiang, JC and Shah, S and Paternoster, L and Burtt, NP and Flannick, J and Lumbers, RT},
title = {GWASHub: An Automated Cloud-Based Platform for Genome-Wide Association Study Meta-Analysis.},
journal = {medRxiv : the preprint server for health sciences},
volume = {},
number = {},
pages = {},
doi = {10.1101/2025.10.21.25338463},
pmid = {41282854},
abstract = {Genome-wide association studies (GWAS) often aggregate data from millions of participants across multiple cohorts using meta-analysis to maximise power for genetic discovery. The increase in availability of genomic biobanks, together with a growing focus on phenotypic subgroups, genetic diversity, and sex-stratified analyses, has led GWAS meta-analyses to routinely produce hundreds of summary statistic files accompanied by detailed meta-data. Scalable infrastructures for data handling, quality control (QC), and meta-analysis workflows are essential to prevent errors, ensure reproducibility, and reduce the burden on researchers, allowing them to focus on downstream research and clinical translation. To address this need, we developed GWASHub, a secure cloud-based platform designed for the curation, processing and meta-analysis of GWAS summary statistics. GWASHub features i) private and secure project spaces, ii) automated file harmonisation and data validation, iii) GWAS meta-data capture, iv) customisable variant QC, v) GWAS meta-analysis, vi) analysis reporting and visualisation, and vii) results download. Users interact with the portal via an intuitive web interface built on Nuxt.js, a high-performance JavaScript framework. Data is securely managed through an Amazon Web Services (AWS) MySQL database and S3 block storage. Analysis jobs are distributed to AWS compute resources in a scalable fashion. The QC dashboard presents tabular and graphical QC outputs allowing manual review of individual datasets. Those passing QC are made available to the meta-analysis module. Individual datasets and meta-analysis results are available for download by project users with appropriate access permissions. In GWASHub, a "project" serves as a virtual workspace spanning an entire consortium, allowing individuals with different roles, such as data contributors (users) and project coordinators (main analysts), to collaborate securely under a unified framework. GWASHub has a flexible architecture to allow for ongoing development and incorporation of alternative quality control or meta-analysis procedures, to meet the specific needs of researchers. GWASHub was developed as a joint initiative by the HERMES Consortium and the Cardiovascular Knowledge Portal, and access to the platform is free and available upon request. GWASHub addresses a critical need in the genetics research community by providing a scalable, secure, and user-friendly platform for managing the complexity of large-scale GWAS meta-analyses. As the volume and diversity of GWAS data continue to grow, platforms like GWASHub may help to accelerate insights into the genetic architecture of complex traits.},
}
RevDate: 2025-11-26
CmpDate: 2025-11-24
Hybrid artificial intelligence frameworks for otoscopic diagnosis: Integrating convolutional neural networks and large language models toward real-time mobile health.
Digital health, 11:20552076251395449.
BACKGROUND: Otitis media remains a significant global health concern, particularly in resource-limited settings where timely diagnosis is challenging. Artificial intelligence (AI) offers promising solutions to enhance diagnostic accuracy in mobile health applications.
OBJECTIVE: This study introduces a hybrid AI framework that integrates convolutional neural networks (CNNs) for image classification with large language models (LLMs) for clinical reasoning, enabling real-time otoscopic diagnosis.
METHODS: We developed a dual-path system combining CNN-based feature extraction with LLM-supported interpretation. The framework was optimized for mobile deployment, with lightweight models operating on-device and advanced reasoning performed via secure cloud APIs. A dataset of 10,465 otoendoscopic images (expanded from 2820 original clinical images through data augmentation) across 10 middle-ear conditions was used for training and validation. Diagnostic performance was benchmarked against clinicians of varying expertise.
RESULTS: The hybrid CNN-LLM system achieved an overall diagnostic accuracy of 97.6%, demonstrating the synergistic benefit of combining CNN-driven visual analysis with LLM-based clinical reasoning. The system delivered sub-200 ms feedback and achieved specialist-level performance in identifying common ear pathologies.
CONCLUSIONS: This hybrid AI framework substantially improves diagnostic precision and responsiveness in otoscopic evaluation. Its mobile-friendly design supports scalable deployment in telemedicine and primary care, offering a practical solution to enhance ear disease diagnosis in underserved regions.
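One plausible way to orchestrate such a dual-path system is to run the lightweight CNN on-device and escalate ambiguous cases to a cloud-hosted LLM; the sketch below encodes that routing with a confidence gate. The gate, the stub models, and the threshold are assumptions; the abstract does not specify the authors' orchestration logic.

import numpy as np

LOCAL_CONFIDENCE_GATE = 0.85   # illustrative threshold, not from the paper

def on_device_cnn(image):
    """Stand-in for the lightweight on-device CNN: returns class probabilities."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % 2**32)
    return rng.dirichlet(np.ones(10))

def cloud_llm_reasoning(probs):
    """Placeholder for the secure cloud LLM call that turns CNN probabilities
    into a narrative interpretation; a real system would call a hosted API here."""
    top = int(np.argmax(probs))
    return f"Most consistent with class {top} (p={probs[top]:.2f}); correlate clinically."

def diagnose(image):
    probs = on_device_cnn(image)
    if probs.max() >= LOCAL_CONFIDENCE_GATE:
        return f"On-device result: class {int(np.argmax(probs))}"
    return cloud_llm_reasoning(probs)   # escalate ambiguous cases to the cloud path

print(diagnose(np.zeros((224, 224, 3), dtype=np.uint8)))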
Additional Links: PMID-41278373
@article {pmid41278373,
year = {2025},
author = {Chu, YC and Chen, YC and Hsu, CY and Kuo, CT and Cheng, YF and Lin, KH and Liao, WH},
title = {Hybrid artificial intelligence frameworks for otoscopic diagnosis: Integrating convolutional neural networks and large language models toward real-time mobile health.},
journal = {Digital health},
volume = {11},
number = {},
pages = {20552076251395449},
pmid = {41278373},
issn = {2055-2076},
abstract = {BACKGROUND: Otitis media remains a significant global health concern, particularly in resource-limited settings where timely diagnosis is challenging. Artificial intelligence (AI) offers promising solutions to enhance diagnostic accuracy in mobile health applications.
OBJECTIVE: This study introduces a hybrid AI framework that integrates convolutional neural networks (CNNs) for image classification with large language models (LLMs) for clinical reasoning, enabling real-time otoscopic diagnosis.
METHODS: We developed a dual-path system combining CNN-based feature extraction with LLM-supported interpretation. The framework was optimized for mobile deployment, with lightweight models operating on-device and advanced reasoning performed via secure cloud APIs. A dataset of 10,465 otoendoscopic images (expanded from 2820 original clinical images through data augmentation) across 10 middle-ear conditions was used for training and validation. Diagnostic performance was benchmarked against clinicians of varying expertise.
RESULTS: The hybrid CNN-LLM system achieved an overall diagnostic accuracy of 97.6%, demonstrating the synergistic benefit of combining CNN-driven visual analysis with LLM-based clinical reasoning. The system delivered sub-200 ms feedback and achieved specialist-level performance in identifying common ear pathologies.
CONCLUSIONS: This hybrid AI framework substantially improves diagnostic precision and responsiveness in otoscopic evaluation. Its mobile-friendly design supports scalable deployment in telemedicine and primary care, offering a practical solution to enhance ear disease diagnosis in underserved regions.},
}
RevDate: 2025-12-21
CmpDate: 2025-12-01
Exploring environmental sustainability of artificial intelligence in radiology: A scoping review.
European journal of radiology, 194:112558.
OBJECTIVE: Artificial intelligence (AI) is increasingly used in radiology, but its environmental implications have not been sufficiently studied, so far. This study aims to synthesize existing literature on the environmental sustainability of AI in radiology and highlights strategies proposed to mitigate its impact.
METHODS: A scoping review was conducted following the Joanna Briggs Institute methodology. Searches across MEDLINE, Embase, CINAHL, and Web of Science focused on English and French publications from 2014 to 2024, targeting AI, environmental sustainability, and medical imaging. Eligible studies addressed environmental sustainability of AI in medical imaging. Conference abstracts, non-radiological or non-human studies, and unavailable full texts were excluded. Two independent reviewers assessed titles, abstracts, and full texts, while four reviewers conducted data extraction and analysis.
RESULTS: The search identified 3,723 results, of which 13 met inclusion criteria: nine research articles and four reviews. Four themes emerged: energy consumption (n = 10), carbon footprint (n = 6), computational resources (n = 9), and water consumption (n = 2). Reported metrics included CO2-equivalent emissions, training time, power use effectiveness, equivalent distance travelled by car, energy demands, and water consumption. Strategies to enhance sustainability included lightweight model architectures, quantization and pruning, efficient optimizers, and early stopping. Broader recommendations encompassed integrating carbon and energy metrics into AI evaluation, transitioning to cloud computing, and developing an eco-label for radiology AI systems.
CONCLUSIONS: Research on sustainable AI in radiology remains scarce but is rapidly growing. This review highlights key metrics and strategies to guide future research and practice toward more transparent, consistent, and environmentally responsible AI development in radiology.
ABBREVIATIONS: AI, Artificial intelligence; CNN, Convolutional neural networks; CT, Computed tomography; CPU, Central Processing Unit; DL, Deep learning; FLOP, Floating-point operation; GHG, Greenhouses gas; GPU, Graphics Processing Unit; LCA, Life Cycle Assessment; LLM, Large Language Model; MeSH, Medical Subject Headings; ML, Machine learning; MRI, Magnetic resonance imaging; NLP, Natural language processing; PUE, Power Usage Effectiveness; TPU, Tensor Processing Unit; USA, United States of America; ViT, Vision Transformer; WUE, Water Usage Effectiveness.
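Several of the metrics catalogued in this review reduce to a simple energy-and-intensity calculation. The sketch below estimates training energy and CO2-equivalent emissions from GPU hours, a PUE factor, and a grid intensity; all default values are chosen purely for illustration and are not measurements from any included study.

def training_footprint(gpu_hours, gpu_watts=300.0, pue=1.4, grid_kgco2_per_kwh=0.4):
    """Back-of-the-envelope footprint: energy = power x time x PUE,
    emissions = energy x grid carbon intensity. Defaults are illustrative."""
    energy_kwh = gpu_hours * (gpu_watts / 1000.0) * pue
    co2_kg = energy_kwh * grid_kgco2_per_kwh
    return energy_kwh, co2_kg

energy, co2 = training_footprint(gpu_hours=120)
print(f"{energy:.1f} kWh, {co2:.1f} kg CO2e")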
Additional Links: PMID-41275851
@article {pmid41275851,
year = {2026},
author = {Champendal, M and Lokaj, B and de Gevigney, VD and Brulé, G and Zaghir, J and Boiko, P and Lovis, C and Müller, H and Schmid, J and Ribeiro, RT},
title = {Exploring environmental sustainability of artificial intelligence in radiology: A scoping review.},
journal = {European journal of radiology},
volume = {194},
number = {},
pages = {112558},
doi = {10.1016/j.ejrad.2025.112558},
pmid = {41275851},
issn = {1872-7727},
mesh = {*Artificial Intelligence ; *Radiology/methods ; Humans ; *Conservation of Natural Resources ; Carbon Footprint ; *Diagnostic Imaging ; },
abstract = {OBJECTIVE: Artificial intelligence (AI) is increasingly used in radiology, but its environmental implications have not been sufficiently studied, so far. This study aims to synthesize existing literature on the environmental sustainability of AI in radiology and highlights strategies proposed to mitigate its impact.
METHODS: A scoping review was conducted following the Joanna Briggs Institute methodology. Searches across MEDLINE, Embase, CINAHL, and Web of Science focused on English and French publications from 2014 to 2024, targeting AI, environmental sustainability, and medical imaging. Eligible studies addressed environmental sustainability of AI in medical imaging. Conference abstracts, non-radiological or non-human studies, and unavailable full texts were excluded. Two independent reviewers assessed titles, abstracts, and full texts, while four reviewers conducted data extraction and analysis.
RESULTS: The search identified 3,723 results, of which 13 met inclusion criteria: nine research articles and four reviews. Four themes emerged: energy consumption (n = 10), carbon footprint (n = 6), computational resources (n = 9), and water consumption (n = 2). Reported metrics included CO2-equivalent emissions, training time, power use effectiveness, equivalent distance travelled by car, energy demands, and water consumption. Strategies to enhance sustainability included lightweight model architectures, quantization and pruning, efficient optimizers, and early stopping. Broader recommendations encompassed integrating carbon and energy metrics into AI evaluation, transitioning to cloud computing, and developing an eco-label for radiology AI systems.
CONCLUSIONS: Research on sustainable AI in radiology remains scarce but is rapidly growing. This review highlights key metrics and strategies to guide future research and practice toward more transparent, consistent, and environmentally responsible AI development in radiology.
ABBREVIATIONS: AI, Artificial intelligence; CNN, Convolutional neural networks; CT, Computed tomography; CPU, Central Processing Unit; DL, Deep learning; FLOP, Floating-point operation; GHG, Greenhouses gas; GPU, Graphics Processing Unit; LCA, Life Cycle Assessment; LLM, Large Language Model; MeSH, Medical Subject Headings; ML, Machine learning; MRI, Magnetic resonance imaging; NLP, Natural language processing; PUE, Power Usage Effectiveness; TPU, Tensor Processing Unit; USA, United States of America; ViT, Vision Transformer; WUE, Water Usage Effectiveness.},
}
MeSH Terms:
*Artificial Intelligence
*Radiology/methods
Humans
*Conservation of Natural Resources
Carbon Footprint
*Diagnostic Imaging
RevDate: 2025-11-27
An intelligent job scheduling and real-time resource optimization for edge-cloud continuum in next generation networks.
Scientific reports, 15(1):41534.
While cloud-edge infrastructures demand flexible and sophisticated resource management, 6G networks necessitate very low latency, high dependability, and broad connectivity. Cloud computing's scalability and agility enable it to prioritize service delivery at various levels of detail while serving billions of users. However, due to resource inefficiencies, virtual machine (VM) issues, response delays, and deadline violations, real-time task scheduling is challenging in these settings. This study develops an AI-powered task scheduling system based on the recently published Unfair Semi-Greedy (USG) algorithm, Earliest Deadline First (EDF), and the Enhanced Deadline Zero-Laxity (EDZL) algorithm. The system chooses the best scheduler based on load and task criticality by combining reinforcement-learning adaptive logic with a dynamic resource table. Over 10,000 soft real-time task sets were utilized to evaluate the framework across various cloud-edge scenarios. Compared to standalone EDF and EDZL solutions, the recommended hybrid method reduced average response times by up to 26.3% and deadline violations by 41.7%. The USG component achieved 98.6% task schedulability under saturated edge settings, even with significant workload variations. These findings suggest that the method is useful for applications that need a rapid turnaround. This architecture is especially well-suited for autonomous systems, remote healthcare, and immersive media, all of which require low latency and dependability, and it may be extended to AI-native 6G networks.
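A toy version of the scheduler-selection idea is sketched below: a rule picks among USG, EDZL, and EDF from utilisation and task criticality, and an EDF helper picks the next ready task. The rule, thresholds, and Task fields are assumptions for illustration; the paper learns this choice with reinforcement learning rather than fixed rules.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float     # absolute deadline (s)
    remaining: float    # remaining execution time (s)
    critical: bool = False

def edf_pick(ready):
    """Earliest Deadline First: run the ready task with the nearest deadline."""
    return min(ready, key=lambda t: t.deadline)

def choose_scheduler(utilisation, has_critical):
    """Illustrative selection policy standing in for the paper's learned choice."""
    if utilisation > 0.9:
        return "USG"    # saturated edge: favour the semi-greedy heuristic
    if has_critical:
        return "EDZL"   # zero-laxity handling for critical tasks
    return "EDF"

ready = [Task("telemetry", 5.0, 1.0), Task("control", 2.0, 0.5, critical=True)]
print(choose_scheduler(utilisation=0.7, has_critical=True), "->", edf_pick(ready).name)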
Additional Links: PMID-41274945
@article {pmid41274945,
year = {2025},
author = {Naeem, AB and Senapati, B and Rasheed, J and Baili, J and Osman, O},
title = {An intelligent job scheduling and real-time resource optimization for edge-cloud continuum in next generation networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41534},
pmid = {41274945},
issn = {2045-2322},
support = {RGP2/109/46//Deanship of Research and Graduate Studies, King Khalid University, Saudi Arabia/ ; },
abstract = {While cloud-edge infrastructures demand flexible and sophisticated resource management, 6G networks necessitate very low latency, great dependability, and broad connection. Cloud computing's scalability and agility enable it to prioritize service delivery at various levels of detail while serving billions of users. However, due to resource inefficiencies, virtual machine (VM) issues, response delays, and deadline violations, real-time task scheduling is challenging in these settings. This study develops an AI-powered task scheduling system based on the newly published Unfair Semi-Greedy (USG) algorithm, Earliest Deadline First (EDF), and Enhanced Deadline Zero-Laxity (EDZL) algorithm. The system chooses the best scheduler based on load and work criticality by combining reinforcement learning adaptive logic with a dynamic resource table. Over 10,000 soft real-time task sets were utilized to evaluate the framework across various cloud-edge scenarios. When compared to solo EDF and EDZL solutions, the recommended hybrid method reduced average response times by up to 26.3% and deadline exceptions by 41.7%. The USG component achieved 98.6% task stimulability under saturated edge settings, indicating significant changes in workload. These findings suggest that the method might be useful for applications that need a speedy turnaround. This architecture is especially well-suited for autonomous systems, remote healthcare, and immersive media, all of which require low latency and dependability, and it may be extended to AI-native 6G networks.},
}
RevDate: 2025-11-22
AlphaFold Protein Structure Database 2025: a redesigned interface and updated structural coverage.
Nucleic acids research pii:8340156 [Epub ahead of print].
The AlphaFold Protein Structure Database (AFDB; https://alphafold.ebi.ac.uk), developed by EMBL-EBI and Google DeepMind, provides open access to hundreds of millions of high-accuracy protein structure predictions, transforming research in structural biology and the wider life sciences. Since its launch, AFDB has become a widely used bioinformatics resource, integrated into major databases, visualization platforms, and analysis pipelines. Here, we report the update of the database to align with the UniProt 2025_03 release, along with a comprehensive redesign of the entry page to enhance usability, accessibility, and structural interpretation. The new design integrates annotations directly with an interactive 3D viewer and introduces dedicated domains and summary tabs. Structural coverage has also been updated to include isoforms plus underlying multiple sequence alignments. Data are available through the website, FTP, Google Cloud, and updated APIs. Together, these advances reinforce AFDB as a sustainable resource for exploring protein sequence-structure relationships.
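Editor's note: the abstract mentions access through updated APIs. The Python sketch below shows one plausible programmatic retrieval pattern; the endpoint path and the response field name used here are assumptions based on the publicly documented AFDB API and should be checked against https://alphafold.ebi.ac.uk before use.

# Sketch of programmatic access to the AlphaFold Protein Structure Database.
# The endpoint path and the "pdbUrl" field are stated as assumptions; consult the
# AFDB documentation for the authoritative, current API.
import requests

def fetch_afdb_prediction(uniprot_accession: str) -> dict:
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()[0]                 # one record per predicted model

if __name__ == "__main__":
    entry = fetch_afdb_prediction("P69905")          # human haemoglobin alpha chain
    pdb = requests.get(entry["pdbUrl"], timeout=30)  # assumed field name
    open("AF-P69905.pdb", "wb").write(pdb.content)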
Additional Links: PMID-41273079
Citation:
@article {pmid41273079,
year = {2025},
author = {Bertoni, D and Tsenkov, M and Magana, P and Nair, S and Pidruchna, I and Querino Lima Afonso, M and Midlik, A and Paramval, U and Lawal, D and Tanweer, A and Last, M and Patel, R and Laydon, A and Lasecki, D and Dietrich, N and Tomlinson, H and Žídek, A and Green, T and Kovalevskiy, O and Lau, A and Kandathil, S and Bordin, N and Sillitoe, I and Mirdita, M and Jones, D and Orengo, C and Steinegger, M and Fleming, JR and Velankar, S},
title = {AlphaFold Protein Structure Database 2025: a redesigned interface and updated structural coverage.},
journal = {Nucleic acids research},
volume = {},
number = {},
pages = {},
doi = {10.1093/nar/gkaf1226},
pmid = {41273079},
issn = {1362-4962},
support = {20-BBSRC/NSF-BIO//BBRSC/ ; BB/Y000455/1//BBRSC/ ; BB/W018802/1//BBRSC/ ; BB/T019409/1//BBRSC/ ; BB/W008556/1//BBRSC/ ; 221327/Z/20/Z//Welcome Trust/ ; 310300/Z/24/Z//Welcome Trust/ ; RS-2020-NR049543//National Research Foundation/ ; RS-2021-NR061659//National Research Foundation/ ; RS-2021-NR056571//National Research Foundation/ ; RS-2024-00396026//National Research Foundation/ ; NNF24SA0092560//Creative-Pioneering Researchers Program and Novo Nordisk Foundation/ ; RS-2023-00250470//National Research Foundation of Korea/ ; //European Molecular Biology Laboratory/ ; //European Molecular Biology Laboratory/ ; },
abstract = {The AlphaFold Protein Structure Database (AFDB; https://alphafold.ebi.ac.uk), developed by EMBL-EBI and Google DeepMind, provides open access to hundreds of millions of high-accuracy protein structure predictions, transforming research in structural biology and the wider life sciences. Since its launch, AFDB has become a widely used bioinformatics resource, integrated into major databases, visualization platforms, and analysis pipelines. Here, we report the update of the database to align with the UniProt 2025_03 release, along with a comprehensive redesign of the entry page to enhance usability, accessibility, and structural interpretation. The new design integrates annotations directly with an interactive 3D viewer and introduces dedicated domains and summary tabs. Structural coverage has also been updated to include isoforms plus underlying multiple sequence alignments. Data are available through the website, FTP, Google Cloud, and updated APIs. Together, these advances reinforce AFDB as a sustainable resource for exploring protein sequence-structure relationships.},
}
RevDate: 2026-01-01
CmpDate: 2025-12-30
An integrated queuing and certainty factor theory model for efficient edge computing in remote patient monitoring systems.
Scientific reports, 15(1):44973.
Remote Patient Monitoring Systems (RPMS) require efficient resource management to prioritize life-critical data in latency-sensitive healthcare environments. This research introduces an Integrated Queuing and Certainty Factor Theory (IQCT) model aimed at optimizing bandwidth allocation and task scheduling within fog-edge-cloud architectures. IQCT prioritizes patient requests in real time by classifying them into emergency, warning, and normal categories using certainty factor (CF)-based urgency assessment. Simulated on Raspberry Pi fog nodes with the UCI Heart Disease dataset, its performance was benchmarked against FCFS, PQ, and WFQ using metrics such as latency, energy consumption, and response time under varying workloads. IQCT reduced latency for emergency requests by 54.5% and improved network efficiency by 30.08% compared to FCFS. It also lowered response and execution times by 49.5% and 36%, and decreased fog-layer energy consumption by 30.8%. Scalability tests confirmed stable quality of service (QoS) under peak loads, demonstrating adaptability to dynamic demand. The adaptation of PQ and CF theory can lead to more efficient and optimized performance in RPMS. Overall, the IQCT model significantly reduced latency by 54.5% in emergency situations compared with existing models.
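Editor's note: the model combines two standard ingredients, certainty-factor evidence combination and priority queuing. The Python sketch below illustrates only those two ideas with hypothetical CF values and thresholds; the paper's bandwidth-allocation and queuing mathematics are not reproduced.

# Illustration of the two ideas the abstract combines: certainty-factor (CF)
# evidence combination and priority queuing. Thresholds and CF values are
# hypothetical; the full IQCT scheduling model is not reproduced here.
import heapq

def combine_cf(cf1: float, cf2: float) -> float:
    """Classical MYCIN-style combination for two positive certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

def triage(cf_urgency: float) -> int:
    # Smaller number = higher priority (served first by the min-heap below).
    if cf_urgency >= 0.8:
        return 0        # emergency
    if cf_urgency >= 0.5:
        return 1        # warning
    return 2            # normal

queue = []
for patient, evidence in [("p1", [0.6, 0.7]), ("p2", [0.3, 0.2]), ("p3", [0.9, 0.5])]:
    cf = 0.0
    for e in evidence:
        cf = combine_cf(cf, e)           # fuse the evidence for this patient
    heapq.heappush(queue, (triage(cf), patient, round(cf, 3)))

while queue:
    print(heapq.heappop(queue))          # emergencies dispatched before warnings/normal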
Additional Links: PMID-41272028
Citation:
@article {pmid41272028,
year = {2025},
author = {RahimiZadeh, K and Beheshti, A and Javadi, B and Yazdani, A},
title = {An integrated queuing and certainty factor theory model for efficient edge computing in remote patient monitoring systems.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {44973},
pmid = {41272028},
issn = {2045-2322},
support = {//Shiraz Transplant Research Center, Shiraz University of Medical Sciences/ ; },
mesh = {Humans ; Monitoring, Physiologic/methods ; *Models, Theoretical ; Cloud Computing ; Algorithms ; Telemedicine ; Remote Patient Monitoring ; },
abstract = {Remote Patient Monitoring Systems (RPMS) require efficient resource management to prioritize life-critical data in latency-sensitive healthcare environments. This research introduces an Integrated Queuing and Certainty Factor Theory (IQCT) model aimed at optimizing bandwidth allocation and task scheduling within fog-edge-cloud architectures. IQCT prioritizes patient requests in real time by classifying them into emergency, warning, and normal categories using certainty factor(CF) -based urgency assessment. Simulated on Raspberry Pi fog nodes with the UCI Heart Disease dataset, its performance was benchmarked against FCFS, PQ, and WFQ using metrics such as latency, energy consumption, and response time under varying workloads. IQCT reduced latency for emergency requests by 54.5% and improved network efficiency by 30.08% compared to FCFS. It also lowered response and execution times by 49.5% and 36%, and decreased fog-layer energy consumption by 30.8%. Scalability tests confirmed stable quality of service (QoS) under peak loads, demonstrating adaptability to dynamic demand. The adaptation of PQ and CF theory can lead to more efficient and optimized performance in RPMS. The IQCT model has significantly reduced the latency by 54.5% in emergency situations, in comparison with the existing models.},
}
MeSH Terms:
Humans
Monitoring, Physiologic/methods
*Models, Theoretical
Cloud Computing
Algorithms
Telemedicine
Remote Patient Monitoring
RevDate: 2025-11-24
Load balancing for cloud computing using optimized cluster based federated learning.
Scientific reports, 15(1):41328.
Task scheduling and load balancing in cloud computing represent challenging NP-hard optimization problems that often result in inefficient resource utilization, elevated energy consumption, and prolonged execution times. This study introduces a novel Cluster-based Federated Learning (FL) framework that addresses system heterogeneity by clustering virtual machines (VMs) with similar characteristics via unsupervised learning, enabling dynamic and efficient task allocation. The proposed method leverages VM capabilities and a derivative-based objective function to optimize scheduling. We benchmark the approach against established metaheuristic algorithms including Whale Optimization Algorithm (WOA), Butterfly Optimization (BFO), Mayfly Optimization (MFO), and Fire Hawk Optimization (FHO). Evaluated using makespan, idle time, and degree of imbalance, the Cluster-based FL model coupled with the COA algorithm consistently outperforms existing methods, achieving up to a 10% reduction in makespan, a 15% decrease in idle time, and a significant improvement in load balancing across VMs. These results highlight the efficacy of integrating clustering within federated learning paradigms to deliver scalable, adaptive, and resilient cloud resource management solutions.
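Editor's note: the core idea above is to cluster VMs by capability before assigning tasks. The Python sketch below shows a minimal version of that step with k-means and a least-loaded assignment rule; the VM features, cluster count, and rule are illustrative, and the paper's derivative-based objective function and COA optimizer are not reproduced.

# Minimal sketch of the clustering step described in the abstract: group VMs by
# capability, then route each task to the least-loaded VM of a suitable cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: [MIPS, RAM_GB, bandwidth_Mbps] for 12 hypothetical VMs.
vms = rng.uniform([500, 2, 100], [5000, 32, 1000], size=(12, 3))

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vms)
load = np.zeros(len(vms))                 # accumulated execution time per VM

def assign(task_mi: float) -> int:
    """Send the task to the least-loaded VM of the most capable cluster."""
    best_cluster = max(range(3), key=lambda c: vms[clusters == c, 0].mean())
    candidates = np.where(clusters == best_cluster)[0]
    vm = min(candidates, key=lambda i: load[i])
    load[vm] += task_mi / vms[vm, 0]      # exec time = instructions / MIPS
    return vm

for mi in rng.uniform(1e4, 1e5, size=20):
    assign(mi)
print("makespan estimate:", round(load.max(), 2), "s")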
Additional Links: PMID-41271909
Citation:
@article {pmid41271909,
year = {2025},
author = {Chennam, KK and V, UM and Aluvalu, R and Chinthaginjala, R and AbWahab, M and Zhao, X and Tolba, A},
title = {Load balancing for cloud computing using optimized cluster based federated learning.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41328},
pmid = {41271909},
issn = {2045-2322},
abstract = {Task scheduling and load balancing in cloud computing represent challenging NP-hard optimization problems that often result in inefficient resource utilization, elevated energy consumption, and prolonged execution times. This study introduces a novel Cluster-based Federated Learning (FL) framework that addresses system heterogeneity by clustering virtual machines (VMs) with similar characteristics via unsupervised learning, enabling dynamic and efficient task allocation. The proposed method leverages VM capabilities and a derivative-based objective function to optimize scheduling. We benchmark the approach against established metaheuristic algorithms including Whale Optimization Algorithm (WOA), Butterfly Optimization (BFO), Mayfly Optimization (MFO), and Fire Hawk Optimization (FHO). Evaluated using makespan, idle time, and degree of imbalance, the Cluster-based FL model coupled with the COA algorithm consistently outperforms existing methods, achieving up to a 10% reduction in makespan, a 15% decrease in idle time, and a significant improvement in load balancing across VMs. These results highlight the efficacy of integrating clustering within federated learning paradigms to deliver scalable, adaptive, and resilient cloud resource management solutions.},
}
RevDate: 2025-11-24
CmpDate: 2025-11-21
Intelligent feature fusion with dynamic graph convolutional recurrent network for robust object detection to assist individuals with disabilities in a smart IoT edge-cloud environment.
Scientific reports, 15(1):41228.
Smart Internet of Things (IoT)-edge-cloud computing defines intelligent systems where IoT devices create data at the network's edge, which is then further processed and analyzed in local edge devices before transmission to the cloud for deeper insights and storage. Visual impairment, like blindness, has a deep effect on a person's psychological and cognitive functions. So, the use of assistive models can help mitigate the adverse effects and improve the quality of life for individuals who are blind. Much current research mainly concentrates on mobility, navigation, and object detection (OD) in smart devices and advanced technologies for visually challenged people. OD is a vital feature of computer vision that includes categorizing objects within an image, allowing applications like augmented reality, image retrieval, etc. Recently, deep learning (DL) models have emerged as an excellent technique for mining feature representation from data, primarily due to significant developments in OD. The DL model is well-trained with manifold images of objects that are highly applicable to visually impaired individuals. This paper presents an intelligent Feature Fusion with Dynamic Graph Convolutional Recurrent Network for Robust Object Detection (FFDGCRN-ROD) approach to assist individuals with disabilities. The paper aims to present an intelligent OD framework for individuals with disabilities utilizing a smart IoT edge cloud environment to enable monitoring and assistive decision-making. At first, the image pre-processing phase involves resizing, normalization, and image enhancement to eliminate the noise and enhance the image quality. For the OD process, the FFDGCRN-ROD approach employs the faster R-CNN to identify and locate specific targets within the images automatically. Furthermore, the fusion models, namely CapsNet, SqueezeNet, and Inceptionv3, are used for the feature extraction process. Finally, the FFDGCRN-ROD model implements the dynamic adaptive graph convolutional recurrent network (DA-GCRN) model to detect and classify objects for visually impaired people accurately. The experimental validation of the FFDGCRN-ROD methodology is performed under the Indoor OD dataset. The comparison analysis of the FFDGCRN-ROD methodology demonstrated a superior accuracy value of 99.65% over existing techniques.
Additional Links: PMID-41271840
Citation:
@article {pmid41271840,
year = {2025},
author = {Alohali, MA and Alanazi, F and Alsahafi, YA and Yaseen, I},
title = {Intelligent feature fusion with dynamic graph convolutional recurrent network for robust object detection to assist individuals with disabilities in a smart Iot edge-cloud environment.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41228},
pmid = {41271840},
issn = {2045-2322},
mesh = {Humans ; Deep Learning ; *Internet of Things ; Neural Networks, Computer ; *Cloud Computing ; *Persons with Disabilities ; *Persons with Visual Disabilities ; Self-Help Devices ; Algorithms ; Image Processing, Computer-Assisted/methods ; },
abstract = {Smart Internet of Things (IoT)-edge-cloud computing defines intelligent systems where IoT devices create data at the network's edge, which is then further processed and analyzed in local edge devices before transmission to the cloud for deeper insights and storage. Visual impairment, like blindness, has a deep effect on a person's psychological and cognitive functions. So, the use of assistive models can help mitigate the adverse effects and improve the quality of life for individuals who are blind. Much current research mainly concentrates on mobility, navigation, and object detection (OD) in smart devices and advanced technologies for visually challenged people. OD is a vital feature of computer vision that includes categorizing objects within an image, allowing applications like augmented reality, image retrieval, etc. Recently, deep learning (DL) models have emerged as an excellent technique for mining feature representation from data, primarily due to significant developments in OD. The DL model is well-trained with manifold images of objects that are highly applicable to visually impaired individuals. This paper presents an intelligent Feature Fusion with Dynamic Graph Convolutional Recurrent Network for Robust Object Detection (FFDGCRN-ROD) approach to assist individuals with disabilities. The paper aims to present an intelligent OD framework for individuals with disabilities utilizing a smart IoT edge cloud environment to enable monitoring and assistive decision-making. At first, the image pre-processing phase involves resizing, normalization, and image enhancement to eliminate the noise and enhance the image quality. For the OD process, the FFDGCRN-ROD approach employs the faster R-CNN to identify and locate specific targets within the images automatically. Furthermore, the fusion models, namely CapsNet, SqueezeNet, and Inceptionv3, are used for the feature extraction process. Finally, the FFDGCRN-ROD model implements the dynamic adaptive graph convolutional recurrent network (DA-GCRN) model to detect and classify objects for visually impaired people accurately. The experimental validation of the FFDGCRN-ROD methodology is performed under the Indoor OD dataset. The comparison analysis of the FFDGCRN-ROD methodology demonstrated a superior accuracy value of 99.65% over existing techniques.},
}
MeSH Terms:
Humans
Deep Learning
*Internet of Things
Neural Networks, Computer
*Cloud Computing
*Persons with Disabilities
*Persons with Visual Disabilities
Self-Help Devices
Algorithms
Image Processing, Computer-Assisted/methods
RevDate: 2025-11-21
Using artificial intelligence to automate the analysis of psoriasis severity: A pilot study.
Dermatology (Basel, Switzerland) pii:000549640 [Epub ahead of print].
INTRODUCTION: The Psoriasis Area and Severity Index (PASI) score is widely used to assess psoriasis severity; however, manual PASI scoring is susceptible to environmental variability and subjective interpretation. This study leverages artificial intelligence to improve the consistency and objectivity of psoriasis severity classification based on features extracted from 2D clinical images.
METHODS: This study employed the YOLOv8 deep learning model to classify psoriatic lesions according to the severity of erythema, thickness, and scaling, key subcomponents of the PASI scoring system. Severity was assessed as follows: none (0), mild (1), moderate (2), severe (3), or very severe (4). Model training and analysis were conducted in a cloud-based environment (Google Colab) using three different datasets. Stratified k-fold cross-validation was employed to ensure robustness by preserving the distribution of PASI scores across folds. Model performance was assessed using a confusion matrix and accuracy metrics.
RESULTS: In experiments, the YOLOv8 model proved highly effective in classifying psoriasis images based on PASI scores. Stratified k-fold cross-validation was shown to enhance model reliability across diverse datasets.
CONCLUSIONS: This study represents a significant advancement in the application of AI to the automated classification of lesion severity based on erythema, thickness, and scaling, key subcomponents of PASI.
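Editor's note: the stratified k-fold protocol described in METHODS can be expressed directly with scikit-learn. The sketch below uses synthetic severity labels and a placeholder in place of the per-fold YOLOv8 training run.

# Sketch of the stratified k-fold protocol mentioned in METHODS: folds preserve
# the distribution of severity scores (0-4). Labels here are synthetic and the
# YOLOv8 training call is replaced by a placeholder comment.
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.random.default_rng(1).integers(0, 5, size=200)   # severity 0..4
images = np.arange(len(labels))                              # stand-in for image ids

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for fold, (train_idx, val_idx) in enumerate(skf.split(images, labels)):
    train_dist = np.bincount(labels[train_idx], minlength=5)
    val_dist = np.bincount(labels[val_idx], minlength=5)
    # train_model(images[train_idx]) would be the per-fold YOLOv8 run (omitted).
    print(f"fold {fold}: train {train_dist.tolist()} | val {val_dist.tolist()}")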
Additional Links: PMID-41269911
Citation:
@article {pmid41269911,
year = {2025},
author = {Chou, CL and Su, CK and Cruz, SKD and Peng, SJ},
title = {Using artificial intelligence to automate the analysis of psoriasis severity: A pilot study.},
journal = {Dermatology (Basel, Switzerland)},
volume = {},
number = {},
pages = {1-23},
doi = {10.1159/000549640},
pmid = {41269911},
issn = {1421-9832},
abstract = {INTRODUCTION: The Psoriasis Area and Severity Index (PASI) score is widely used to assess psoriasis severity; however, manual PASI scoring is susceptible to environmental variability and subjective interpretation. This study leverages artificial intelligence to improve the consistency and objectivity of psoriasis severity classification based on features extracted from 2D clinical images.
METHODS: This study employed the YOLOv8 deep learning model to classify psoriatic lesions according to the severity of erythema, thickness, and scaling- key subcomponents of the PASI scoring system. Severity was assessed as follows: (0), mild (1), moderate (2), severe (3), or very severe (4). Model training and analysis were conducted in a cloud-based environment (Google Colab) using three different datasets. Stratified k-fold cross-validation was employed to ensure robustness by preserving the distribution of PASI scores across folds. Model performance was assessed using a confusion matrix and accuracy metrics.
RESULTS: In experiments, the YOLOv8 model proved highly effective in classifying psoriasis images based on PASI scores. Stratified k-fold cross-validation was shown to enhance model reliability across diverse datasets.
CONCLUSIONS: This study represents a significant advancement in the application of AI to the automated classification of lesion severity based on erythema, thickness, and scaling-key subcomponents of PASI.},
}
RevDate: 2025-11-23
CmpDate: 2025-11-21
The AIR·MS data platform for artificial intelligence in healthcare.
JAMIA open, 8(6):ooaf145.
OBJECTIVE: To present the Artificial Intelligence-Ready Mount Sinai (AIR·MS) platform-unified access to diverse clinical datasets from the Mount Sinai Health System (MSHS), along with computational infrastructure for AI-driven research and demonstrate its utility with 3 research projects.
MATERIALS AND METHODS: AIR·MS integrates structured and unstructured data from multiple MSHS sources via the OMOP Common Data Model on an in-memory columnar database. Unstructured pathology and radiology data are integrated through metadata extracted from and linking the raw source data. Data access and analytics are supported from the HIPAA-compliant Azure cloud and the on-premises Minerva High-Performance Computing (HPC) environment.
RESULTS: AIR·MS provides access to structured electronic health records, clinical notes, and metadata for pathology and radiology images, covering over 12M patients. The platform enables interactive cohort building and AI model training. Experimentation with complex cohort queries confirms high system performance. Three use cases demonstrate risk-factor discovery and federated cardiovascular risk modeling.
DISCUSSION: AIR·MS demonstrates how clinical data and infrastructure can be integrated to support large-scale AI-based research. The platform's performance, scale, and cross-institutional design position it as a model for similar initiatives.
CONCLUSION: AIR·MS provides a scalable, secure, and collaborative platform for AI-enabled healthcare research on multimodal clinical data.
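Editor's note: because the platform exposes data through the OMOP Common Data Model, cohort definitions reduce to SQL over standard tables. The Python sketch below runs such a query against a toy in-memory database; the DuckDB engine and the concept identifier are illustrative choices, not details of the AIR·MS stack.

# Sketch of a cohort query against OMOP CDM tables, the data model the platform
# uses. Table/column names follow the OMOP standard; the concept_id (type 2
# diabetes) and the DuckDB engine are illustrative, not AIR-MS implementation details.
import duckdb

con = duckdb.connect()   # in-memory database with tiny demo tables
con.execute("CREATE TABLE person (person_id INTEGER, year_of_birth INTEGER)")
con.execute("CREATE TABLE condition_occurrence ("
            "person_id INTEGER, condition_concept_id INTEGER, condition_start_date DATE)")
con.execute("INSERT INTO person VALUES (1, 1960), (2, 1975)")
con.execute("INSERT INTO condition_occurrence VALUES "
            "(1, 201826, DATE '2019-03-01'), (1, 201826, DATE '2021-06-15')")

cohort = con.execute("""
    SELECT p.person_id, MIN(co.condition_start_date) AS index_date
    FROM person p
    JOIN condition_occurrence co USING (person_id)
    WHERE co.condition_concept_id = ?
    GROUP BY p.person_id
""", [201826]).fetchall()
print(cohort)   # [(person_id, earliest qualifying condition date)]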
Additional Links: PMID-41267854
Citation:
@article {pmid41267854,
year = {2025},
author = {Guerrero, P and Ernebjerg, M and Holst, T and Weese, D and DiBello, H and Ibing, S and Schmidt, L and Ungaro, R and Renard, B and Lippert, C and Alleva, E and Quinn, TD and Kovatch, P and Antao, EM and Heyneke, E and Rasheed, A and Kalabakov, S and Arnrich, B and Charney, A and Wieler, LH and Nadkarni, G},
title = {The AIR·MS data platform for artificial intelligence in healthcare.},
journal = {JAMIA open},
volume = {8},
number = {6},
pages = {ooaf145},
pmid = {41267854},
issn = {2574-2531},
abstract = {OBJECTIVE: To present the Artificial Intelligence-Ready Mount Sinai (AIR·MS) platform-unified access to diverse clinical datasets from the Mount Sinai Health System (MSHS), along with computational infrastructure for AI-driven research and demonstrate its utility with 3 research projects.
MATERIALS AND METHODS: AIR·MS integrates structured and unstructured data from multiple MSHS sources via the OMOP Common Data Model on an in-memory columnar database. Unstructured pathology and radiology data are integrated through metadata extracted from and linking the raw source data. Data access and analytics are supported from the HIPAA-compliant Azure cloud and the on-premises Minerva High-Performance Computing (HPC) environment.
RESULTS: AIR·MS provides access to structured electronic health records, clinical notes, and metadata for pathology and radiology images, covering over 12M patients. The platform enables interactive cohort building and AI model training. Experimentation with complex cohort queries confirm a high system performance. Three use cases demonstrate, risk-factor discovery, and federated cardiovascular risk modeling.
DISCUSSION: AIR·MS demonstrates how clinical data and infrastructure can be integrated to support large-scale AI-based research. The platform's performance, scale, and cross-institutional design position it as a model for similar initiatives.
CONCLUSION: AIR·MS provides a scalable, secure, and collaborative platform for AI-enabled healthcare research on multimodal clinical data.},
}
RevDate: 2025-11-23
Cloud edge enabled stacked ensemble learning framework with meta model for situation aware maritime traffic monitoring and control systems.
Scientific reports, 15(1):41099.
In the last few years, the increasing trend of vessel density, different types of vessels, and the increased need for real-time data have made maritime traffic management significantly more difficult. This study presents a situation-aware framework based on stacked ensemble learning and cloud-edge hybridization, aimed at enhancing maritime traffic monitoring and control systems. This approach combines stacked ensemble learning with a meta-model for vessel type classification and employs the concept of cloud-edge architecture to strike a balance between computational efficiency and delay minimization. While the edge layer takes care of real-time inference and situational analysis on the go, the cloud layer takes care of model training and amalgamation of data from various sources. Our evaluation made use of a comprehensive maritime vessel dataset and compared the performance with state-of-the-art deep learning models (VGG16, VGG19, DenseNet121, and ResNet50). Our experiments show that the stacked ensemble learning with a meta-model significantly outperforms these baselines, achieving an overall accuracy of 0.98, macro average precision of 0.97, macro average recall of 0.98, and an F1-score of 0.98. Both ROC and PR curves also demonstrate excellent AUC values, approaching 1.00 for almost all vessel categories, indicating strong discrimination between vessel types. Test predictions are outstandingly accurate, with confidence in vessel classification exceeding 99% in most cases. From these results, the proposed method shows robustness, scalability, and effectiveness for real-time maritime surveillance, naval defense systems, and autonomous vessel traffic control in industrial IoT environments.
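Editor's note: the stacking-with-meta-model pattern described above can be illustrated compactly with scikit-learn. The base learners and synthetic features below stand in for the paper's CNN-derived vessel features; this is not the authors' pipeline, only the general pattern.

# Minimal stacked-ensemble-with-meta-model sketch using scikit-learn. Synthetic
# tabular features replace the image pipelines used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=32, n_classes=4,
                           n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),   # the meta-model
    cv=5, stack_method="predict_proba")
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))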
Additional Links: PMID-41266744
Citation:
@article {pmid41266744,
year = {2025},
author = {Ahmad, Z and Seo, JT and Jeon, S},
title = {Cloud edge enabled stacked ensemble learning framework with meta model for situation aware maritime traffic monitoring and control systems.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41099},
pmid = {41266744},
issn = {2045-2322},
support = {RS-2023-00241376//Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT)/ ; },
abstract = {In the last few years, the increasing trend of vessel density, different types of vessels, and the increased need for real-time data have made maritime traffic management significantly more difficult. This study presents a situation-aware framework based on stacked ensemble learning and cloud-edge hybridization, which is aimed at enhancing the maritime traffic monitoring and control systems. This approach combines stacked ensemble learning with a meta-model for vessel type classification and employs the concept of cloud-edge architecture to strike a balance between computational efficiency and delay minimization. While the edge layer takes care of real-time inference and situational analysis on the go, the cloud layer takes care of model training and amalgamation of data from various sources. Our evaluation made use of a comprehensive maritime vessel dataset and compared the performance with the state-of-the-art deep learning models (VGG16, VGG19, DenseNet121, and ResNet50). Our experiments show that the stacked ensemble learning with a meta-model significantly outperforms the traditional ones, achieving an overall accuracy of 0.98, macro average precision of 0.97, macro average recall of 0.98, and an F1-score of 0.98. Both ROC and PR curves also demonstrate excellent AUC values, which tend to 1.00 for almost all categories of vessels, which is a strong performance in distinguishing vessels from each other. Test predictions are outstandingly accurate, with confidence in vessel classification exceeding 99% in most cases. From these results, the proposed method shows robustness, scalability, and effectiveness for real-time maritime surveillance, naval defense systems, and autonomous vessel traffic control in industrial IoT environments.},
}
RevDate: 2025-11-20
NimbusImage: a cloud-computing platform for image analysis.
Additional Links: PMID-41266644
Citation:
@article {pmid41266644,
year = {2025},
author = {Niu, Z and Bruyère, T and Manthey, D and Li, J and O'Farrell, A and Raj, A},
title = {NimbusImage: a cloud-computing platform for image analysis.},
journal = {Nature methods},
volume = {},
number = {},
pages = {},
pmid = {41266644},
issn = {1548-7105},
}
RevDate: 2025-11-23
Secure blockchain integrated deep learning framework for federated risk-adaptive and privacy-preserving IoT edge intelligence sets.
Scientific reports, 15(1):41133.
An enormous demand for a secure, scalable, intelligent edge computing framework has emerged for the exponentially increasing number of Internet of Things (IoT) devices for any substrate of modern digital infrastructure. These edge nodes distributed across heterogeneous environments serve as primary interfaces for sensing, computation, and actuations. Their physical deployment in unattended scenarios puts them at risk of being targets for resource manipulation. One widely accepted IoT architecture with traditional notions of edge may consider a threat to its centralized knowledge with an unbounded attack surface that includes anything that can remotely connect to the edge from the cloud-like domain. Existing strategies either forget the dynamic risk context of edge nodes or do not achieve a reasonable trade-off between security and resource constraints, essentially degrading the robustness and trustworthiness of solutions intended for real-life scenarios. To address the existing gaps, the work presents a novel Blockchain Integrated Deep Learning Framework for secure IoT edge computing, introducing a hybrid architecture where the transparency of blockchain meets deep learning flexibility. The proposed system incorporates five specialized components: Blockchain-Orchestrated Federated Curriculum Learning (BOFCL), which ensures risk-prioritized training using threat indices derived from blockchain logs; this adaptive sequencing enhances responsiveness to high-risk edge scenarios. Zero-Knowledge Proof Enabled Secure Inference Engine (ZK-SIE) provides verifiable privacy-preserving inference, ensuring model integrity without exposing input data or model internals in process. Blockchain Indexed Adversarial Attack Simulator (BI-AAS) focuses on testing the models in edge environments against attack scenarios drawn from common adversarial profiles and thereby facilitates a model defensive retraining. Energy-Aware Lightweight Consensus with Adaptive Synchronization (ELCAS) avoids overhead by seeking energy-efficient participants for global model synchronization in constrained environments. Trust Indexed Model Provenance and Deployment Ledger (TIMPDL) ensures model lineage tracking and deploy ability in a transparent manner by providing composite trust scores computed from data quality, node reputation, and validation metrics. Altogether, the framework combines the data integrity, adversarial robustness, and trust-aware deployment, shortening training latency, synchronization energy, and privacy leakage. It is a foundational advancement supporting secure decentralized edge intelligence for next-generation IoT ecosystems.
Additional Links: PMID-41266631
Citation:
@article {pmid41266631,
year = {2025},
author = {Swathi, K and Durga, P and Prasad, KV and Chaitanya, AK and Santhi, K and Vidyullatha, P and Rao, SVA},
title = {Secure blockchain integrated deep learning framework for federated risk-adaptive and privacy-preserving IoT edge intelligence sets.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41133},
pmid = {41266631},
issn = {2045-2322},
abstract = {An enormous demand for a secure, scalable, intelligent edge computing framework has emerged for the exponentially increasing number of Internet of Things (IoT) devices for any substrate of modern digital infrastructure. These edge nodes distributed across heterogeneous environments serve as primary interfaces for sensing, computation, and actuations. Their physical deployment in unattended scenarios puts them at risk of being targets for resource manipulation. One widely accepted IoT architecture with traditional notions of edge may consider a threat to its centralized knowledge with an unbounded attack surface that includes anything that can remotely connect to the edge from the cloud-like domain. Existing strategies either forget the dynamic risk context of edge nodes or do not achieve a reasonable trade-off between security and resource constraints, essentially degrading the robustness and trustworthiness of solutions intended for real-life scenarios. To address the existing gaps, the work presents a novel Blockchain Integrated Deep Learning Framework for secure IoT edge computing, introducing a hybrid architecture where the transparency of blockchain meets deep learning flexibility. The proposed system incorporates five specialized components: Blockchain-Orchestrated Federated Curriculum Learning (BOFCL), which ensures risk-prioritized training using threat indices derived from blockchain logs; this adaptive sequencing enhances responsiveness to high-risk edge scenarios. Zero-Knowledge Proof Enabled Secure Inference Engine (ZK-SIE) provides verifiable privacy-preserving inference, ensuring model integrity without exposing input data or model internals in process. Blockchain Indexed Adversarial Attack Simulator (BI-AAS) focuses on testing the models in edge environments against attack scenarios drawn from common adversarial profiles and thereby facilitates a model defensive retraining. Energy-Aware Lightweight Consensus with Adaptive Synchronization (ELCAS) avoids overhead by seeking energy-efficient participants for global model synchronization in constrained environments. Trust Indexed Model Provenance and Deployment Ledger (TIMPDL) ensures model lineage tracking and deploy ability in a transparent manner by providing composite trust scores computed from data quality, node reputation, and validation metrics. Altogether, the framework combines the data integrity, adversarial robustness, and trust-aware deployment, shortening training latency, synchronization energy, and privacy leakage. It is a foundational advancement supporting secure decentralized edge intelligence for next-generation IoT ecosystems.},
}
RevDate: 2025-11-22
CmpDate: 2025-11-20
ResNet-18 based multi-task visual inference and adaptive control for an edge-deployed autonomous robot.
Frontiers in robotics and AI, 12:1680285.
Current industrial robots deployed in small and medium-sized businesses (SMEs) are too complex, expensive, or dependent on external computing resources. In order to bridge this gap, we introduce an autonomous logistics robot that combines adaptive control and visual perception on a small edge computing platform. The NVIDIA Jetson Nano was equipped with a modified ResNet-18 model that allowed it to concurrently execute three tasks: object-handling zone recognition, obstacle detection, and path tracking. A lightweight rack-and-pinion mechanism enables payload lifting of up to 2 kg without external assistance. Experimental evaluation in semi-structured warehouse settings demonstrated a path tracking accuracy of 92%, obstacle avoidance success of 88%, and object handling success of 90%, with a maximum perception-to-action latency of 150 ms. The system maintains stable operation for up to 3 hours on a single charge. Unlike other approaches that focus on single functions or require cloud support, our design integrates navigation, perception, and mechanical handling into a low-power, standalone solution. This highlights its potential as a practical and cost-effective automation platform for SMEs.
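Editor's note: a shared backbone with separate task heads is the usual way to run several visual tasks in one forward pass on an edge device. The PyTorch sketch below shows that structure for ResNet-18; the head sizes and class counts are assumptions for illustration, not the authors' architecture or training setup.

# Sketch of a shared ResNet-18 backbone with three lightweight task heads
# (path tracking, obstacle detection, handling-zone recognition).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskResNet18(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.steer_head = nn.Linear(512, 1)       # path tracking: steering value
        self.obstacle_head = nn.Linear(512, 2)    # obstacle / no obstacle
        self.zone_head = nn.Linear(512, 3)        # e.g. pickup / dropoff / none (assumed)

    def forward(self, x):
        z = self.features(x).flatten(1)           # shared embedding, shape (N, 512)
        return self.steer_head(z), self.obstacle_head(z), self.zone_head(z)

model = MultiTaskResNet18().eval()
with torch.no_grad():
    steer, obstacle, zone = model(torch.randn(1, 3, 224, 224))
print(steer.shape, obstacle.shape, zone.shape)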
Additional Links: PMID-41262207
Citation:
@article {pmid41262207,
year = {2025},
author = {Silva Araujo, SDC and Ong Michael, GK and Deshpande, UU and Deshpande, S and Avalappa, MG and Amasi, Y and Patil, S and Bhat, S and Karigoudar, S},
title = {ResNet-18 based multi-task visual inference and adaptive control for an edge-deployed autonomous robot.},
journal = {Frontiers in robotics and AI},
volume = {12},
number = {},
pages = {1680285},
pmid = {41262207},
issn = {2296-9144},
abstract = {Current industrial robots deployed in small and medium-sized businesses (SMEs) are too complex, expensive, or dependent on external computing resources. In order to bridge this gap, we introduce an autonomous logistics robot that combines adaptive control and visual perception on a small edge computing platform. The NVIDIA Jetson Nano was equipped with a modified ResNet-18 model that allowed it to concurrently execute three tasks: object-handling zone recognition, obstacle detection, and path tracking. A lightweight rack-and-pinion mechanism enables payload lifting of up to 2 kg without external assistance. Experimental evaluation in semi-structured warehouse settings demonstrated a path tracking accuracy of 92%, obstacle avoidance success of 88%, and object handling success of 90%, with a maximum perception-to-action latency of 150 m. The system maintains stable operation for up to 3 hours on a single charge. Unlike other approaches that focus on single functions or require cloud support, our design integrates navigation, perception, and mechanical handling into a low-power, standalone solution. This highlights its potential as a practical and cost-effective automation platform for SMEs.},
}
RevDate: 2025-12-09
High-Throughput Approach for Minimum Energy Pathway Search Using the Nudged Elastic Band Method with Efficient Data Handling and Parallel Computing.
Journal of chemical theory and computation, 21(23):12048-12063.
The Nudged Elastic Band (NEB) method is critical for mapping chemical reaction pathways but is a computationally and data-intensive workflow involving a large number of single-point (SP) calculations. Additionally, due to the complexity of the NEB method, understanding how variations in the protocol (algorithm, levels of theory, and parameters) impact performance is challenging. To address these issues, we developed and tested a high-throughput approach on the QCArchive cloud-based infrastructure, utilizing two open-source projects, QCFractal and geomeTRIC, to enhance the NEB efficiency. This approach parallelizes SP energy and gradient calculations and stores results in a database, facilitating data organization and retrieval. To evaluate its performance, we optimized four elementary reactions from the RGD1 data set of organic reactions using the B3LYP/6-31G(d), B3LYP-D3/def2-TZVP methods, and the PM7 semiempirical model. We tested 72 different combinations of chain optimization parameters and three types of band forces: conventional NEB, a hybrid band that projects out the perpendicular energy gradient as in NEB but retains the full spring force, and a plain band that does not project any forces. The highest-energy images of the optimized chains were used as the initial structures for transition state (TS) optimization to locate the first-order saddle points. The NEB and TS steps may be performed at different levels of theory, allowing us to perform NEB calculations with either DFT or PM7, followed by TS optimizations at the DFT level. The final TS structures were compared with reference geometries from the data set, which were further optimized at the corresponding level of theory. The convergence rates of TS and NEB are reported to demonstrate how the parameters influence the performance. Next, we performed NEB calculations on 118 diverse chemical reactions from a compilation of seven barrier height data sets from the literature using two selected protocols: one uses the NEB method, while the other employs the hybrid band. Notably, the hybrid band yielded consistently higher convergence rates across reactions from both data sets. Lastly, three elementary reactions from our previous work involving molecular transition metal catalysts were optimized using the hybrid band, successfully reproducing the earlier results. This study demonstrates that the high-throughput approach can perform a large number of NEB calculations concurrently in parallel while storing all calculation results in a database. The results presented here also confirm the reliability and correctness of the new implementation.
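Editor's note: the three band forces compared in the study differ only in how the gradient and spring terms are assembled per image. The NumPy sketch below spells that out with the simple neighbour-difference tangent; production implementations typically use improved tangent estimates, and the toy geometry and gradient are invented for illustration.

# NumPy sketch of the three band forces compared in the study: conventional NEB,
# the hybrid band (NEB-style projected gradient but full spring force), and the
# plain band (no projection at all).
import numpy as np

def band_force(r_prev, r_i, r_next, grad_i, k=1.0, mode="neb"):
    tau = r_next - r_prev
    tau /= np.linalg.norm(tau)                         # unit tangent at image i
    grad_perp = grad_i - np.dot(grad_i, tau) * tau     # gradient with parallel part removed
    spring_full = k * (r_next - 2.0 * r_i + r_prev)    # full spring force
    spring_parallel = k * (np.linalg.norm(r_next - r_i)
                           - np.linalg.norm(r_i - r_prev)) * tau
    if mode == "neb":      # perpendicular gradient + parallel spring component
        return -grad_perp + spring_parallel
    if mode == "hybrid":   # projected gradient, but keep the full spring force
        return -grad_perp + spring_full
    return -grad_i + spring_full                       # "plain" band

# Toy 2-D example with an arbitrary gradient, for illustration only.
r_prev, r_i, r_next = np.array([0.0, 0.0]), np.array([1.0, 0.4]), np.array([2.0, 0.0])
grad = np.array([0.2, 0.8])
for m in ("neb", "hybrid", "plain"):
    print(m, band_force(r_prev, r_i, r_next, grad, mode=m).round(3))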
Additional Links: PMID-41259716
Citation:
@article {pmid41259716,
year = {2025},
author = {Park, H and Pritchard, BP and Wang, LP},
title = {High-Throughput Approach for Minimum Energy Pathway Search Using the Nudged Elastic Band Method with Efficient Data Handling and Parallel Computing.},
journal = {Journal of chemical theory and computation},
volume = {21},
number = {23},
pages = {12048-12063},
doi = {10.1021/acs.jctc.5c01540},
pmid = {41259716},
issn = {1549-9626},
abstract = {The Nudged Elastic Band (NEB) method is critical for mapping chemical reaction pathways but is a computationally and data-intensive workflow involving a large number of single-point (SP) calculations. Additionally, due to the complexity of the NEB method, understanding how variations in the protocol (algorithm, levels of theory, and parameters) impact performance is challenging. To address these issues, we developed and tested a high-throughput approach on the QCArchive cloud-based infrastructure, utilizing two open-source projects, QCFractal and geomeTRIC, to enhance the NEB efficiency. This approach parallelizes SP energy and gradient calculations and stores results in a database, facilitating data organization and retrieval. To evaluate its performance, we optimized four elementary reactions from the RGD1 data set of organic reactions using the B3LYP/6-31G(d), B3LYP-D3/def2-TZVP methods, and the PM7 semiempirical model. We tested 72 different combinations of chain optimization parameters and three types of band forces: conventional NEB, a hybrid band that projects out the perpendicular energy gradient as in NEB but retains the full spring force, and a plain band that does not project any forces. The highest-energy images of the optimized chains were used as the initial structures for transition state (TS) optimization to locate the first-order saddle points. The NEB and TS steps may be performed at different levels of theory, allowing us to perform NEB calculations with either DFT or PM7, followed by TS optimizations at the DFT level. The final TS structures were compared with reference geometries from the data set, which were further optimized at the corresponding level of theory. The convergence rates of TS and NEB are reported to demonstrate how the parameters influence the performance. Next, we performed NEB calculations on 118 diverse chemical reactions from a compilation of seven barrier height data sets from the literature using two selected protocols: one uses the NEB method, while the other employs the hybrid band. Notably, the hybrid band yielded consistently higher convergence rates across reactions from both data sets. Lastly, three elementary reactions from our previous work involving molecular transition metal catalysts were optimized using the hybrid band, successfully reproducing the earlier results. This study demonstrates that the high-throughput approach can perform a large number of NEB calculations concurrently in parallel while storing all calculation results in a database. The results presented here also confirm the reliability and correctness of the new implementation.},
}
RevDate: 2025-11-22
Optimizing dispatch factor in smart energy networks using cloud-based computational resources.
Scientific reports, 15(1):40683 pii:10.1038/s41598-025-23033-8.
The integration of renewable power sources and smart grid technologies has transformed the power grid. Optimized dispatch of distributed resources, assisted by cloud-based technologies, is inevitable in this evolving era of cloud computing. This paper provides a comprehensive framework of cloud-based load management technologies, with a focus on the dispatch factor as a crucial parameter in energy dispatch decisions. Cloud computing provides grid administrators with the adaptability and computational power needed to optimize energy dispatch continuously. The contributions of this work on the optimized dispatch of power sources include: i) formulation of a constrained optimization objective function for the power distribution network; ii) a novel algorithm for evaluating the parametric values involved in the proposed objective function; and iii) a framework for offloading the computational burden to a cloud computing platform. Together these contributions constitute a methodology for efficient energy dispatch that uses machine learning, optimization algorithms, and real-time data analytics to adjust the dispatch factor dynamically. The paper concludes with a discussion of results obtained by implementing the proposed methodology on the Google Cloud Platform, which demonstrates its effectiveness.
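Editor's note: the abstract describes a constrained optimization objective for dispatch but does not give its form. The Python sketch below shows a generic constrained economic-dispatch problem solved with SciPy, with a dispatch factor computed as each unit's share of demand; all coefficients, limits, and the demand value are invented for illustration and are not the paper's formulation.

# Hedged sketch of a constrained economic-dispatch optimization of the kind the
# abstract describes. All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

a = np.array([0.010, 0.012, 0.008])     # quadratic cost coefficients ($/MW^2 h)
b = np.array([20.0, 18.0, 22.0])        # linear cost coefficients ($/MWh)
demand = 450.0                          # MW to be supplied
bounds = [(50, 200), (50, 250), (50, 150)]   # generation limits per unit (MW)

cost = lambda p: float(np.sum(a * p**2 + b * p))
balance = {"type": "eq", "fun": lambda p: np.sum(p) - demand}   # supply = demand

res = minimize(cost, x0=np.array([150.0, 150.0, 150.0]),
               bounds=bounds, constraints=[balance], method="SLSQP")
dispatch_factors = res.x / demand       # share of demand served by each unit
print(res.x.round(1), dispatch_factors.round(3), round(res.fun, 1))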
Additional Links: PMID-41258143
Citation:
@article {pmid41258143,
year = {2025},
author = {Abedin, ZU and Jianbin, L and Siddique, M and Khan, HMA},
title = {Optimizing dispatch factor in smart energy networks using cloud-based computational resources.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {40683},
doi = {10.1038/s41598-025-23033-8},
pmid = {41258143},
issn = {2045-2322},
support = {Self Funding//Zain Ul Abedin/ ; Self Funding//Zain Ul Abedin/ ; Self Funding//Zain Ul Abedin/ ; Self Funding//Zain Ul Abedin/ ; },
abstract = {The coordination of eco-friendly power sources and brilliant matrix advances has changed the power lattice framework. Optimized dispatch of distributed resources assisted by cloud-based technologies, is inevitable in this evolving era of could computing. This paper provides a comprehensive framework of cloud-based load management technologies, with a focus on the dispatch factor as a crucial parameter in making energy dispatch decisions. Cloud computing provides grid administrators with the adaptability and computational power expected to advance energy dispatch continuously. The contributions of this research work about the optimized dispatch of power sources include i) formulation of constrained optimization objective function for power distribution network ii) proposed a novel algorithm for evaluation of the parametric values involved in proposed objective function. iii) proposed a framework for resourcing the computational burden to cloud computational platform. These contributions inculcates a methodology for efficient energy dispatch, highlighting the use of machine learning, optimization algorithms, and real-time data analytics to adjust the dispatch factor dynamically. The paper concludes with the discussion of results obtained after implementation of proposed methodology on google cloud platform which shows the effectiveness of the proposed methodology.},
}
RevDate: 2025-11-23
Automated detection of mycobacterium tuberculosis based on cloud computing.
BMC infectious diseases, 25(1):1620.
BACKGROUND: Tuberculosis (TB) is a prevalent infectious disease that infects about a quarter of the world's population and has become one of the leading causes of death worldwide. Early diagnosis and treatment are crucial for preventing the spread of the disease and improving the patient's prognosis. Mycobacterium tuberculosis microscopy can provide a basis for TB diagnosis, but its accuracy is influenced by the doctor's experience. With the development of artificial intelligence technology, deep learning can provide automated bacteriological diagnostic assistance for doctors. This study aims to develop an efficient and accurate automated detection method for Mycobacterium tuberculosis on a cloud computing platform using deep learning algorithms.
METHODS: The study incorporated 9963 annotated data from 1265 acid-fast stained smears on the Kaggle database. It used the deep learning algorithm YOLOv5 on Jiutian platform to extract and analyze features from the acid-fast stained positive regions and build a disease detection model.
RESULTS: The results indicate that the model demonstrates rapid detection of Mycobacterium tuberculosis, with a maximum mAP50 of 94.80%, an accuracy of 93.72%, and a recall of 91.24%.
CONCLUSION: The study is expected to provide accurate and efficient aid for doctors in diagnosing TB.
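Editor's note: the study's YOLOv5 model was trained on the Jiutian platform with acid-fast smear data, and those weights are not assumed to be available here. The Python sketch below therefore shows only the generic YOLOv5 inference pattern via torch.hub, with a placeholder image path.

# Inference-pattern sketch only: loads a generic pretrained YOLOv5 model via
# torch.hub and runs detection on one image. 'yolov5s' and the image path are
# placeholders, not the study's trained model or data.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25                            # confidence threshold for detections
results = model("smear_field_of_view.jpg")   # placeholder image path
detections = results.pandas().xyxy[0]        # one row per box: coords, confidence, class
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]].head())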
Additional Links: PMID-41257591
Citation:
@article {pmid41257591,
year = {2025},
author = {Ding, X and Li, T and Li, S and Zhao, W and Xiong, M and Pei, G and Pan, Y},
title = {Automated detection of mycobacterium tuberculosis based on cloud computing.},
journal = {BMC infectious diseases},
volume = {25},
number = {1},
pages = {1620},
pmid = {41257591},
issn = {1471-2334},
support = {2021-A3//Southern University of Science and Technology Hospital/ ; 2021-A3//Southern University of Science and Technology Hospital/ ; 2021-A3//Southern University of Science and Technology Hospital/ ; NSZD2023059//Shenzhen Nanshan District/ ; NSZD2023059//Shenzhen Nanshan District/ ; NSZD2023059//Shenzhen Nanshan District/ ; JCYJ2022053011260200//Shenzhen Science and Technology Innovation Commission/ ; SGDX20211123114204007//Shenzhen Science and Technology Innovation Commission/ ; SGDX20211123114204007//Shenzhen Science and Technology Innovation Commission/ ; JCYJ2022053011260200//Shenzhen Science and Technology Innovation Commission/ ; SGDX20211123114204007//Shenzhen Science and Technology Innovation Commission/ ; JCYJ2022053011260200//Shenzhen Science and Technology Innovation Commission/ ; },
abstract = {BACKGROUND: Tuberculosis (TB) is a prevalent infectious disease that infects about a quarter of the world's population and has become one of the leading causes of death in the world's population. Early diagnosis and treatment are crucial for preventing the spread of the disease and improving the patient's prognosis. Mycobacterium tuberculosis microscopy can provide a basis for TB disease diagnosis, the accuracy of which is influenced by the doctor's experience. With the development of artificial intelligence technology, deep learning can provide automated bacteriological diagnostic assistance for doctors. This study aims to develop an efficient and accurate automated detection method for Mycobacterium tuberculosis based on a cloud computing platform utilizing deep learning algorithms.
METHODS: The study incorporated 9963 annotated data from 1265 acid-fast stained smears on the Kaggle database. It used the deep learning algorithm YOLOv5 on Jiutian platform to extract and analyze features from the acid-fast stained positive regions and build a disease detection model.
RESULTS: The results indicate that the model demonstrates rapid detection of Mycobacterium tuberculosis, with a maximum mAP50 of 94.80%, an accuracy of 93.72%, and a recall of 91.24%.
CONCLUSION: The study is expected to provide accurate and efficient aid for doctors in diagnosing TB.},
}
RevDate: 2025-11-22
Performance Evaluation of Boiling Chamber With Microchannel Chip and Taper Microgap.
ASME journal of heat and mass transfer, 147(12):121605.
The increasing trend of power densities in high-performance computing, driven by artificial intelligence, machine learning, and cloud computing, necessitates advanced thermal management solutions to maintain operational stability and energy efficiency. This study examines the effectiveness of cooling a 1.5 U simulated copper microchannel chip compared to a plain chip. Both chip types were tested with and without configurations for dual taper microgaps to enhance the heat transfer performance of a boiling chamber (BC). Experimental investigation was conducted using 500 μm wide × 400 μm deep microchannels separated by 200 μm fins. Varying inlet gaps (0.5-4 mm) and taper lengths (8.25 mm and 16.5 mm) with a taper angle of 3 deg were employed in dual taper configuration. Their impact on critical heat flux (CHF) and subcooled boiling dynamics was investigated. Microchannels provided considerable performance enhancement over a plain surface with or without the dual taper microgap. The findings demonstrate that smaller inlet gaps (0.5-1 mm) and longer taper lengths (16.5 mm, with central liquid inlet) significantly enhance nucleate boiling. These configurations improve vapor escape and delay CHF through subcooled boiling and submerged condensation. However, a lower CHF was noted due to vapor agglomeration within the microgap. The 80% fill ratio microchannel chip exhibited the highest CHF as subcooled boiling increased liquid replenishment and prevented vapor stagnation. Similarly, lower coolant temperatures (20-30 °C) enhanced boiling performance, where submerged condensation accelerated bubble collapse and improved heat dissipation efficiency in lower surface temperatures.
Additional Links: PMID-41256764
Citation:
@article {pmid41256764,
year = {2025},
author = {Mustafa, NE and Kandlikar, SG},
title = {Performance Evaluation of Boiling Chamber With Microchannel Chip and Taper Microgap.},
journal = {ASME journal of heat and mass transfer},
volume = {147},
number = {12},
pages = {121605},
pmid = {41256764},
issn = {2832-8469},
support = {R35 GM128877/GM/NIGMS NIH HHS/United States ; },
abstract = {The increasing trend of power densities in high-performance computing, driven by artificial intelligence, machine learning, and cloud computing, necessitates advanced thermal management solutions to maintain operational stability and energy efficiency. This study examines the effectiveness of cooling a 1.5 U simulated copper microchannel chip compared to a plain chip. Both chip types were tested with and without configurations for dual taper microgaps to enhance the heat transfer performance of a boiling chamber (BC). Experimental investigation was conducted using 500 μm wide × 400 μm deep microchannels separated by 200 μm fins. Varying inlet gaps (0.5-4 mm) and taper lengths (8.25 mm and 16.5 mm) with a taper angle of 3 deg were employed in dual taper configuration. Their impact on critical heat flux (CHF) and subcooled boiling dynamics was investigated. Microchannels provided considerable performance enhancement over a plain surface with or without the dual taper microgap. The findings demonstrate that smaller inlet gaps (0.5-1 mm) and longer taper lengths (16.5 mm, with central liquid inlet) significantly enhance nucleate boiling. These configurations improve vapor escape and delay CHF through subcooled boiling and submerged condensation. However, a lower CHF was noted due to vapor agglomeration within the microgap. The 80% fill ratio microchannel chip exhibited the highest CHF as subcooled boiling increased liquid replenishment and prevented vapor stagnation. Similarly, lower coolant temperatures (20-30 °C) enhanced boiling performance, where submerged condensation accelerated bubble collapse and improved heat dissipation efficiency in lower surface temperatures.},
}
RevDate: 2025-11-23
CmpDate: 2025-11-19
Rapid NGS Analysis on Google Cloud Platform: Performance Benchmark and User Tutorial.
Clinical and translational science, 18(11):e70416.
Next-Generation Sequencing (NGS) is being increasingly adopted in clinical settings as a tool to increase diagnostic yield in genetically determined pathologies. For patients in critical condition, however, the turnaround time of data analysis is crucial for rapid diagnosis and response. Sentieon DNASeq and Clara Parabricks Germline are two widely used pipelines for ultra-rapid NGS analysis, but their high computational demands often exceed the resources available in many healthcare facilities. Cloud platforms such as Google Cloud Platform (GCP) offer scalable solutions to address these limitations, yet setting up these pipelines in a cloud environment can be complex. This work benchmarks the two solutions and offers a comprehensive tutorial aimed at easing their implementation on GCP by healthcare bioinformaticians. It also presents cost guidance for healthcare managers considering cloud-based NGS processing. Using five publicly available exome (WES) and five genome (WGS) samples, we benchmarked both pipelines on GCP in terms of runtime, cost, and resource utilization. Our results show that Sentieon and Parabricks perform comparably. Both pipelines are viable options for rapid, cloud-based NGS analysis, enabling healthcare providers to access advanced genomic tools without extensive local infrastructure.
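Since the paper's cost guidance ultimately comes down to billed runtime multiplied by instance pricing, the following Python sketch shows that arithmetic as a per-sample cost estimate. The sample names, runtimes, and hourly rates are hypothetical placeholders, not figures from the study or current GCP pricing; substitute your own measured runtimes and up-to-date on-demand rates.

    # Per-sample cost estimate for a cloud NGS benchmark:
    #   cost = billed runtime (hours) x on-demand hourly rate (VM + optional GPU).
    # All rates and runtimes below are hypothetical placeholders, NOT values from
    # the paper or current GCP pricing.

    from dataclasses import dataclass

    @dataclass
    class Run:
        sample: str                    # e.g., a WES or WGS sample identifier
        runtime_hours: float           # measured wall-clock runtime of the pipeline
        vm_rate_usd_hr: float          # assumed on-demand VM price
        gpu_rate_usd_hr: float = 0.0   # assumed GPU price for GPU-accelerated runs

        def cost(self) -> float:
            return self.runtime_hours * (self.vm_rate_usd_hr + self.gpu_rate_usd_hr)

    runs = [
        Run("WES-sample-1", runtime_hours=0.5, vm_rate_usd_hr=1.20),
        Run("WGS-sample-1", runtime_hours=1.5, vm_rate_usd_hr=1.20, gpu_rate_usd_hr=2.50),
    ]

    for r in runs:
        print(f"{r.sample}: ~${r.cost():.2f}")
    print(f"Total for the batch: ~${sum(r.cost() for r in runs):.2f}")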
Additional Links: PMID-41255067
@article {pmid41255067,
year = {2025},
author = {Franzoso, E and Santorsola, M and Lescai, F},
title = {Rapid NGS Analysis on Google Cloud Platform: Performance Benchmark and User Tutorial.},
journal = {Clinical and translational science},
volume = {18},
number = {11},
pages = {e70416},
pmid = {41255067},
issn = {1752-8062},
mesh = {Humans ; *Cloud Computing ; *High-Throughput Nucleotide Sequencing/methods/economics ; Benchmarking ; Computational Biology/methods ; Software ; },
abstract = {Next-Generation Sequencing (NGS) is being increasingly adopted in clinical settings as a tool to increase diagnostic yield in genetically determined pathologies. However, for patients in critical conditions the time to results of data analysis is crucial for a rapid diagnosis and response. Sentieon DNASeq and Clara Parabricks Germline are two widely used pipelines for ultra-rapid NGS analysis, but their high computational demands often exceed the resources available in many healthcare facilities. Cloud platforms, like Google Cloud Platform (GCP), offer scalable solutions to address these limitations. Yet, setting up these pipelines in a cloud environment can be complex. This work provides a benchmark of the two solutions, and offers a comprehensive tutorial aimed at easing their implementation on GCP by healthcare bioinformaticians. Additionally, it presents valuable cost guidance to healthcare managers who consider implementing cloud-based NGS processing. Using five publicly available exome (WES) and five genome (WGS) samples, we benchmarked both pipelines on GCP in terms of runtime, cost, and resource utilization. Our results show that Sentieon and Parabricks perform comparably. Both pipelines are viable options for rapid, cloud-based NGS analysis, enabling healthcare providers to access advanced genomic tools without the need for extensive local infrastructure.},
}
MeSH Terms:
Humans
*Cloud Computing
*High-Throughput Nucleotide Sequencing/methods/economics
Benchmarking
Computational Biology/methods
Software
RevDate: 2025-11-20
A comprehensive survey on securing the social internet of things: protocols, threat mitigation, technological integrations, tools, and performance metrics.
Scientific reports, 15(1):40190.
The integration of social networking concepts with the Internet of Things (IoT) has led to the Social Internet of Things (SIoT), a paradigm enabling autonomous, context-aware interactions among devices based on social relationships. While this connectivity improves interoperability, it also raises critical challenges in trust management, secure communication, and data protection. This survey reviews 225 papers published between 2014 and 18 September 2025, analyzing advances in SIoT security. Sources include IEEE Xplore, ACM Digital Library, Springer, ScienceDirect (Elsevier), MDPI, Wiley, Taylor & Francis, and Google Scholar. Blockchain and AI/ML approaches feature prominently: blockchain is referenced in more than 50 papers, AI/ML in over 80, and many works adopt both in combination. The literature is examined across architectural foundations, security requirements, and layered defenses, with evaluation most often based on latency, accuracy, scalability, and false-positive rate. The review further highlights existing security and communication protocols, attack mitigation strategies, and the adoption of blockchain, cloud, and edge computing for scalable and decentralized processing. The survey traces the evolution of SIoT research, identifies future directions for strengthening security and transparency, and serves as a reference for researchers and practitioners designing secure and decentralized SIoT environments.
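Two of the evaluation metrics the survey highlights, accuracy and false-positive rate, are simple confusion-matrix ratios. The Python sketch below computes them for a hypothetical SIoT intrusion-detection outcome; the counts are invented for illustration and do not come from any surveyed paper.

    # Confusion-matrix metrics named in the survey (accuracy, false-positive rate),
    # computed for a hypothetical intrusion-detection run of 1,000 events.

    def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
        """Fraction of all decisions that were correct."""
        return (tp + tn) / (tp + tn + fp + fn)

    def false_positive_rate(fp: int, tn: int) -> float:
        """Fraction of benign events incorrectly flagged as attacks."""
        return fp / (fp + tn)

    # Hypothetical counts: 200 attacks (180 caught, 20 missed),
    # 800 benign events (760 passed, 40 wrongly flagged).
    tp, fn = 180, 20
    tn, fp = 760, 40

    print(f"accuracy            = {accuracy(tp, tn, fp, fn):.3f}")    # 0.940
    print(f"false-positive rate = {false_positive_rate(fp, tn):.3f}")  # 0.050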
Additional Links: PMID-41249278
@article {pmid41249278,
year = {2025},
author = {Patil, DA and G, S},
title = {A comprehensive survey on securing the social internet of things: protocols, threat mitigation, technological integrations, tools, and performance metrics.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {40190},
pmid = {41249278},
issn = {2045-2322},
abstract = {The integration of social networking concepts with the Internet of Things (IoT) has led to the Social Internet of Things (SIoT)-a paradigm enabling autonomous, context-aware interactions among devices based on social relationships. While this connectivity improves interoperability, it also raises critical challenges in trust management, secure communication, and data protection. This survey reviews 225 papers published between 2014 and 18 September 2025, analyzing advancements in SIoT security. Sources include IEEE Xplore, ACM Digital Library, Springer, ScienceDirect (Elsevier), MDPI, Wiley, Taylor & Francis, and Google Scholar. Blockchain and AI/ML approaches feature prominently, with blockchain referenced in more than 50 papers, AI/ML in over 80, and many adopting both in combination. The literature is examined across architectural foundations, security requirements, and layered defenses, with evaluation most often based on latency, accuracy, scalability, and false-positive rate. The review further highlights existing security and communication protocols, attack mitigation strategies, and the adoption of blockchain, cloud, and edge computing for scalable and decentralized processing. The survey traces the evolution of SIoT research, identifies future directions to strengthen security and transparency, and serves as a reference for researchers and practitioners designing secure and decentralized SIoT environments.},
}