ESP: PubMed Auto Bibliography (created 20 Jan 2026 at 01:42)
Cloud Computing
Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
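The query above can be rerun programmatically through NCBI's E-utilities. The sketch below uses Biopython's Entrez module (an assumption: the biopython package is installed and a contact e-mail is supplied, as NCBI requests); it only retrieves matching PMIDs, not full records.

# Minimal sketch: rerun the bibliography query against PubMed via E-utilities.
# Assumes the "biopython" package is available; the e-mail address is a placeholder.
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder contact address for NCBI

query = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
         'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
         'NOT pmcbook NOT ispreviousversion')

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)  # first 20 PMIDs
record = Entrez.read(handle)
handle.close()

print(record["Count"], "matching records")
print(record["IdList"])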
Citations: The Papers (from PubMed®)
RevDate: 2026-01-19
A chronic kidney disease prediction system based on Internet of Things using walrus optimized deep learning technique.
Informatics for health & social care [Epub ahead of print].
The Internet of Things (IoT) and cloud computing (CC) are commonly incorporated into healthcare applications. In the healthcare industry, a huge quantity of patient data is generated by IoT devices, and the storage and processing power of the cloud are used to analyze these data. The Internet of Medical Things (IoMT) combines health monitoring mechanisms with medical equipment and sensors to monitor patient records and offer smarter, more capable healthcare services. This paper proposes an effective, walrus-optimized deep learning (DL) technique for chronic kidney disease (CKD) prediction in IoT. First, the data are collected from the CKD dataset, and preprocessing procedures such as missing-value imputation, numerical conversion, and normalization are performed to improve the quality of the dataset. Then, the dataset is balanced using the k-means (KM) clustering algorithm to prevent the model from making inaccurate predictions. Next, an enhanced residual network 50 (EResNet50) is used to extract more discriminative features from the dataset, from which the optimal features are selected via the elite-opposition and Cauchy-distribution-based walrus optimization algorithm (ECWOA). Finally, classification uses a walrus-optimized bidirectional long short-term memory (WOBLSTM). The simulation outcomes demonstrated the effectiveness of our method over existing techniques, with a higher sensitivity of 99.89% for CKD prediction.
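As an illustration of the preprocessing steps this abstract lists (missing-value imputation and normalization), the sketch below uses scikit-learn on a tiny hypothetical table; it is not the study's pipeline, and the values are made up.

# Sketch: tabular preprocessing of the kind the abstract describes.
# Column values are hypothetical, not from the CKD study.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.2, np.nan, 140.0],
              [0.9, 4.4, np.nan],
              [np.nan, 5.1, 155.0]])

preprocess = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # fill missing values with column means
    ("scale", MinMaxScaler()),                   # normalize each column to [0, 1]
])

X_clean = preprocess.fit_transform(X)
print(X_clean)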
Additional Links: PMID-41553156
@article {pmid41553156,
year = {2026},
author = {M, S and M, T and G, ER and M, A},
title = {A chronic kidney disease prediction system based on Internet of Things using walrus optimized deep learning technique.},
journal = {Informatics for health & social care},
volume = {},
number = {},
pages = {1-21},
doi = {10.1080/17538157.2025.2610695},
pmid = {41553156},
issn = {1753-8165},
abstract = {The Internet of Things (IoT) and cloud computing (CC) are commonly incorporated into healthcare applications. In the healthcare industry, a huge quantity of patient data is generated by IoT devices, and the storage and processing power of the cloud are used to analyze these data. The Internet of Medical Things (IoMT) combines health monitoring mechanisms with medical equipment and sensors to monitor patient records and offer smarter, more capable healthcare services. This paper proposes an effective, walrus-optimized deep learning (DL) technique for chronic kidney disease (CKD) prediction in IoT. First, the data are collected from the CKD dataset, and preprocessing procedures such as missing-value imputation, numerical conversion, and normalization are performed to improve the quality of the dataset. Then, the dataset is balanced using the k-means (KM) clustering algorithm to prevent the model from making inaccurate predictions. Next, an enhanced residual network 50 (EResNet50) is used to extract more discriminative features from the dataset, from which the optimal features are selected via the elite-opposition and Cauchy-distribution-based walrus optimization algorithm (ECWOA). Finally, classification uses a walrus-optimized bidirectional long short-term memory (WOBLSTM). The simulation outcomes demonstrated the effectiveness of our method over existing techniques, with a higher sensitivity of 99.89% for CKD prediction.},
}
RevDate: 2026-01-19
CmpDate: 2026-01-19
Efficient data replication in distributed clouds via quantum entanglement algorithms.
MethodsX, 16:103762.
In cloud computing, it remains difficult to make data available in a cloud service such that the data are replicated and maintained consistently across various data centers. Traditional replication systems work, but they take too long to process updates, cause significant data transfers, and struggle with eventual data consistency. This work presents a new method, the Quantum Entanglement-Based Replication Algorithm (QERA), which uses quantum entanglement to ensure quick, high-performance synchronization of cloud data across all nodes. The QERA approach encodes data changes at the primary cloud node onto quantum states and distributes them via entangled qubit pairs to the related replica nodes. As a result, any change is quickly reflected on all replicas without the usual overhead and delay of message broadcasts. Simulations show how QERA is designed to decrease latency, promote consistency, and make better use of resources in cloud environments. The paper develops a theoretical framework using the IBM Qiskit and Microsoft Quantum Development Kit simulators to compare classical and quantum baseline algorithms. The results show that QERA may greatly enhance the way updates and replications are managed across many cloud systems and can ensure highly synchronized replication among remote cloud nodes. QERA employs entangled qubit pairs to minimize latency and reduce bandwidth costs as updates propagate, and it combines the idea of quantum teleportation with non-invasive verification methods designed to maintain state integrity without disturbing the quantum system.
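The basic primitive behind this abstract, an entangled qubit pair, can be reproduced in a few lines of Qiskit (one of the simulators the paper mentions). The sketch below only prepares and inspects a Bell state; it is not the QERA algorithm.

# Sketch: prepare a Bell (entangled) qubit pair in Qiskit and inspect its state.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

bell = QuantumCircuit(2)
bell.h(0)      # put qubit 0 into superposition
bell.cx(0, 1)  # entangle qubit 0 with qubit 1

state = Statevector.from_instruction(bell)
print(state)   # only |00> and |11> carry amplitude: the pair is entangled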
Additional Links: PMID-41551262
@article {pmid41551262,
year = {2026},
author = {B, PS and N, R and Ravi, J and C, V and J, G and I, G and K, DK and Muthusamy, E},
title = {Efficient data replication in distributed clouds via quantum entanglement algorithms.},
journal = {MethodsX},
volume = {16},
number = {},
pages = {103762},
pmid = {41551262},
issn = {2215-0161},
abstract = {In cloud computing, it remains difficult to make data available in a cloud service such that the data are replicated and maintained consistently across various data centers. Traditional replication systems work, but they take too long to process updates, cause significant data transfers, and struggle with eventual data consistency. This work presents a new method, the Quantum Entanglement-Based Replication Algorithm (QERA), which uses quantum entanglement to ensure quick, high-performance synchronization of cloud data across all nodes. The QERA approach encodes data changes at the primary cloud node onto quantum states and distributes them via entangled qubit pairs to the related replica nodes. As a result, any change is quickly reflected on all replicas without the usual overhead and delay of message broadcasts. Simulations show how QERA is designed to decrease latency, promote consistency, and make better use of resources in cloud environments. The paper develops a theoretical framework using the IBM Qiskit and Microsoft Quantum Development Kit simulators to compare classical and quantum baseline algorithms. The results show that QERA may greatly enhance the way updates and replications are managed across many cloud systems and can ensure highly synchronized replication among remote cloud nodes. QERA employs entangled qubit pairs to minimize latency and reduce bandwidth costs as updates propagate, and it combines the idea of quantum teleportation with non-invasive verification methods designed to maintain state integrity without disturbing the quantum system.},
}
RevDate: 2026-01-18
An automated pipeline for efficiently generating standardized, child-friendly audiovisual language stimuli.
Developmental cognitive neuroscience, 78:101674 pii:S1878-9293(26)00006-X [Epub ahead of print].
Creating engaging language stimuli suitable for children can be difficult and time-consuming. To simplify and accelerate the process, we developed an automated pipeline that combines existing audio generation and animation tools to generate customizable audiovisual stimuli from text input. The pipeline consists of two components: the first uses Google Cloud Text-to-Speech to generate audio stimuli from text, and the second uses Adobe Character Animator to create video stimuli in which an animated character "speaks" the audio with speech-aligned mouth movements. We evaluated the pipeline with two stimulus sets, including an acoustic comparison between generated audio stimuli and existing human-recorded stimuli. The pipeline is efficient, taking less than 2 min to generate each audiovisual stimulus, and fewer than 9 % of stimuli needed to be regenerated. The audio generation component is particularly fast, taking less than 1 s per stimulus. By leveraging automated tools for language stimulus creation, this pipeline can facilitate developmental research on language and other domains of cognition, especially in cognitive neuroscience studies that require large numbers of stimuli.
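The audio half of this pipeline rests on Google Cloud Text-to-Speech, which is scriptable from Python. The sketch below follows the standard client-library pattern (assumptions: the google-cloud-texttospeech package is installed and application credentials are configured); the voice settings are illustrative, not the authors' choices.

# Sketch: synthesize one text stimulus with Google Cloud Text-to-Speech.
# Assumes google-cloud-texttospeech is installed and GCP credentials are set up.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(text="The dog chased the big red ball.")
voice = texttospeech.VoiceSelectionParams(language_code="en-US")  # voice choice is illustrative
audio_config = texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("stimulus.mp3", "wb") as out:
    out.write(response.audio_content)  # write the MP3 bytes returned by the API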
Additional Links: PMID-41548476
@article {pmid41548476,
year = {2026},
author = {Santi, B and Soza, M and Tuckute, G and Sathe, A and Fedorenko, E and Olson, H},
title = {An automated pipeline for efficiently generating standardized, child-friendly audiovisual language stimuli.},
journal = {Developmental cognitive neuroscience},
volume = {78},
number = {},
pages = {101674},
doi = {10.1016/j.dcn.2026.101674},
pmid = {41548476},
issn = {1878-9307},
abstract = {Creating engaging language stimuli suitable for children can be difficult and time-consuming. To simplify and accelerate the process, we developed an automated pipeline that combines existing audio generation and animation tools to generate customizable audiovisual stimuli from text input. The pipeline consists of two components: the first uses Google Cloud Text-to-Speech to generate audio stimuli from text, and the second uses Adobe Character Animator to create video stimuli in which an animated character "speaks" the audio with speech-aligned mouth movements. We evaluated the pipeline with two stimulus sets, including an acoustic comparison between generated audio stimuli and existing human-recorded stimuli. The pipeline is efficient, taking less than 2 min to generate each audiovisual stimulus, and fewer than 9 % of stimuli needed to be regenerated. The audio generation component is particularly fast, taking less than 1 s per stimulus. By leveraging automated tools for language stimulus creation, this pipeline can facilitate developmental research on language and other domains of cognition, especially in cognitive neuroscience studies that require large numbers of stimuli.},
}
RevDate: 2026-01-17
Impact of Labeling Inaccuracy and Image Noise on Tooth Segmentation in Panoramic Radiographs using Federated, Centralized and Local Learning.
Dento maxillo facial radiology pii:8428117 [Epub ahead of print].
OBJECTIVES: Federated learning (FL) may mitigate privacy constraints, heterogeneous data quality, and inconsistent labeling in dental diagnostic artificial intelligence (AI). FL was compared with centralized (CL) and local learning (LL) for tooth segmentation in panoramic radiographs across multiple data corruption scenarios.
METHODS: An Attention U-Net was trained on 2066 radiographs from six institutions across four settings: baseline (unaltered data); label manipulation (dilated/missing annotations); image-quality manipulation (additive Gaussian noise); and exclusion of one faulty client with corrupted data. FL was implemented via the Flower AI framework. Per-client training and validation loss trajectories were monitored for anomaly detection, and a set of metrics (Dice, IoU, HD, HD95, and ASSD) was evaluated on a hold-out test set. Significance of differences in these metrics was assessed with the Wilcoxon signed-rank test. CL and LL served as comparators.
RESULTS: Baseline: FL achieved a median Dice of 0.94889 (ASSD: 1.33229), slightly better than CL at 0.94706 (ASSD: 1.37074) and LL at 0.93557-0.94026 (ASSD: 1.51910-1.69777). Label manipulation: FL maintained the best median Dice score at 0.94884 (ASSD: 1.46487) versus CL's 0.94183 (ASSD: 1.75738) and LL's 0.93003-0.94026 (ASSD: 1.51910-2.11462). Similar performance was observed when two faulty clients were introduced. Image noise: FL led with Dice at 0.94853 (ASSD: 1.31088); CL scored 0.94787 (ASSD: 1.36131); LL ranged from 0.93179-0.94026 (ASSD: 1.51910-1.77350). Similar performance was observed when two faulty clients were introduced, with CL performing slightly better than FL. Faulty-client exclusion: FL reached Dice at 0.94790 (ASSD: 1.33113) better than CL's 0.94550 (ASSD: 1.39318). Loss-curve monitoring reliably flagged the corrupted site.
CONCLUSIONS: FL matches or exceeds CL and outperforms LL across corruption scenarios while preserving privacy. Per-client loss trajectories provide an effective anomaly-detection mechanism and support FL as a practical, privacy-preserving approach for scalable clinical AI deployment.
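For readers unfamiliar with the reported quantities, the sketch below shows the Dice overlap coefficient and the Wilcoxon signed-rank comparison on hypothetical per-case scores; the numbers are illustrative and unrelated to this study's data.

# Sketch: Dice coefficient for binary masks and a paired Wilcoxon comparison.
import numpy as np
from scipy.stats import wilcoxon

def dice(pred, truth):
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

a = np.array([[1, 1], [0, 1]])
b = np.array([[1, 0], [0, 1]])
print("Dice:", dice(a, b))  # 2*2 / (3+2) = 0.8

# Hypothetical per-case Dice scores for two training strategies.
fl_scores = np.array([0.948, 0.951, 0.946, 0.950, 0.949, 0.947])
cl_scores = np.array([0.947, 0.949, 0.944, 0.948, 0.946, 0.945])
stat, p = wilcoxon(fl_scores, cl_scores)
print(f"Wilcoxon statistic={stat:.3f}, p={p:.3f}")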
Additional Links: PMID-41546377
@article {pmid41546377,
year = {2026},
author = {Andreas Balle Rubak, J and Naveed, K and Jain, S and Esterle, L and Iosifidis, A and Pauwels, R},
title = {Impact of Labeling Inaccuracy and Image Noise on Tooth Segmentation in Panoramic Radiographs using Federated, Centralized and Local Learning.},
journal = {Dento maxillo facial radiology},
volume = {},
number = {},
pages = {},
doi = {10.1093/dmfr/twag001},
pmid = {41546377},
issn = {1476-542X},
abstract = {OBJECTIVES: Federated learning (FL) may mitigate privacy constraints, heterogeneous data quality, and inconsistent labeling in dental diagnostic artificial intelligence (AI). FL was compared with centralized (CL) and local learning (LL) for tooth segmentation in panoramic radiographs across multiple data corruption scenarios.
METHODS: An Attention U-Net was trained on 2066 radiographs from six institutions across four settings: baseline (unaltered data); label manipulation (dilated/missing annotations); image-quality manipulation (additive Gaussian noise); and exclusion of one faulty client with corrupted data. FL was implemented via the Flower AI framework. Per-client training and validation loss trajectories were monitored for anomaly detection, and a set of metrics (Dice, IoU, HD, HD95, and ASSD) was evaluated on a hold-out test set. Significance of differences in these metrics was assessed with the Wilcoxon signed-rank test. CL and LL served as comparators.
RESULTS: Baseline: FL achieved a median Dice of 0.94889 (ASSD: 1.33229), slightly better than CL at 0.94706 (ASSD: 1.37074) and LL at 0.93557-0.94026 (ASSD: 1.51910-1.69777). Label manipulation: FL maintained the best median Dice score at 0.94884 (ASSD: 1.46487) versus CL's 0.94183 (ASSD: 1.75738) and LL's 0.93003-0.94026 (ASSD: 1.51910-2.11462). Similar performance was observed when two faulty clients were introduced. Image noise: FL led with Dice at 0.94853 (ASSD: 1.31088); CL scored 0.94787 (ASSD: 1.36131); LL ranged from 0.93179-0.94026 (ASSD: 1.51910-1.77350). Similar performance was observed when two faulty clients were introduced, with CL performing slightly better than FL. Faulty-client exclusion: FL reached Dice at 0.94790 (ASSD: 1.33113) better than CL's 0.94550 (ASSD: 1.39318). Loss-curve monitoring reliably flagged the corrupted site.
CONCLUSIONS: FL matches or exceeds CL and outperforms LL across corruption scenarios while preserving privacy. Per-client loss trajectories provide an effective anomaly-detection mechanism and support FL as a practical, privacy-preserving approach for scalable clinical AI deployment.},
}
RevDate: 2026-01-15
Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness.
Scientific reports, 16(1):2064.
Distributed Denial of Service (DDoS) attacks represent one of the most strategically executed and severe threats in cloud computing, often leading to substantial data loss and significant financial damage for both cloud service providers and their users. Numerous studies have been conducted to enhance cloud security against such attacks through the application of machine learning techniques. This paper implements the Optimized CatBoost machine learning algorithm (OCML) with hyperparameter optimization using Optuna to achieve efficient training. Feature selection was conducted using the SHAP (SHapley Additive exPlanations) method, as the dataset contains over 80 features. The proposed model achieved an accuracy of 99.2% in detecting Distributed Denial of Service (DDoS) attacks in cloud virtual machines (VMs), enabling the system to filter out malicious jobs and allocate resources efficiently. The CICIDS 2019 dataset was used as the benchmark for evaluation. Furthermore, the robustness of the proposed model was assessed using adversarial attacks, specifically the Fast Gradient Sign Method (FGSM), the Carlini-Wagner (CW) attack, and Projected Gradient Descent (PGD). The CatBoost model achieves accuracies of 97%, 80%, and 71%, respectively, against these attacks. In addition, robustness against time-series network traffic attacks using pulse-wave, random-burst, and slow-ramp patterns reaches 80%, 83%, and 77%, respectively.
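The tuning pattern described here (a CatBoost classifier with an Optuna search) looks roughly like the sketch below on synthetic data; the search space, trial budget, and dataset are illustrative assumptions, not the paper's configuration.

# Sketch: Optuna hyperparameter search around a CatBoost classifier.
import optuna
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def objective(trial):
    params = {
        "depth": trial.suggest_int("depth", 4, 10),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "iterations": trial.suggest_int("iterations", 100, 400),
        "verbose": 0,
    }
    model = CatBoostClassifier(**params)
    model.fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))  # maximize held-out accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best params:", study.best_params, "best accuracy:", study.best_value)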
Additional Links: PMID-41540130
@article {pmid41540130,
year = {2026},
author = {Samy, H and Bahaa-Eldin, AM and Sobh, MA and Taha, A},
title = {Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {2064},
pmid = {41540130},
issn = {2045-2322},
abstract = {Distributed Denial of Service (DDoS) attacks represent one of the most strategically executed and severe threats in cloud computing, often leading to substantial data loss and significant financial damage for both cloud service providers and their users. Numerous studies have been conducted to enhance cloud security against such attacks through the application of machine learning techniques. This paper implements the Optimized CatBoost machine learning algorithm (OCML) with hyperparameter optimization using Optuna to achieve efficient training. Feature selection was conducted using the SHAP (SHapley Additive exPlanations) method, as the dataset contains over 80 features. The proposed model achieved an accuracy of 99.2% in detecting Distributed Denial of Service (DDoS) attacks in cloud virtual machines (VMs), enabling the system to filter out malicious jobs and allocate resources efficiently. The CICIDS 2019 dataset was used as the benchmark for evaluation. Furthermore, the robustness of the proposed model was assessed using adversarial attacks, specifically the Fast Gradient Sign Method (FGSM), the Carlini-Wagner (CW) attack, and Projected Gradient Descent (PGD). The CatBoost model achieves accuracies of 97%, 80%, and 71%, respectively, against these attacks. In addition, robustness against time-series network traffic attacks using pulse-wave, random-burst, and slow-ramp patterns reaches 80%, 83%, and 77%, respectively.},
}
RevDate: 2026-01-16
Smart irrigation-based internet of things and cloud computing technologies for sustainable farming.
Scientific reports pii:10.1038/s41598-026-35810-0 [Epub ahead of print].
Sustainable water management in agriculture is a major challenge, particularly in regions facing water scarcity and the growing impacts of climate change. The lack of efficiency of traditional irrigation methods often leads to water waste, reduced productivity, and increased pressure on natural resources. In this context, it is imperative to develop innovative solutions to optimize water use while maintaining agricultural performance. This paper proposes a smart irrigation system based on the internet of things (IoT) and cloud computing. The system incorporates several sensors to measure key environmental parameters, such as temperature, air humidity, soil moisture, and water level. An embedded ESP32 microcontroller collects and transmits the data to the thingsBoard cloud platform, where it is analyzed in real time to determine precise irrigation needs. The system's algorithm automatically makes the necessary decisions to activate or deactivate the irrigation pump, ensuring optimal and accurate water management. Experimental results demonstrate that the system significantly reduces water waste while optimizing irrigation based on the actual needs of the soil and crops. Real-time measurements and automated decision-making ensure accurate and efficient irrigation that adapts to fluctuations in environmental conditions. Performance analysis shows that the proposed approach significantly improves water resource management compared to traditional methods. The integration of cloud computing and the IoT facilitates remote monitoring and automated decision-making, making the system adaptable to a variety of crops and agricultural lands. The estimated cost of implementing the smart irrigation system is approximately $44.00, confirming its economic feasibility and appeal to small and medium-sized farms seeking to optimize water use. This solution also helps to build farmers' resilience to climate change and water scarcity. The system presented represents a significant advance in the field of smart and sustainable irrigation. By optimizing water use and improving agricultural productivity, the system directly contributes to food security, water resource conservation, and climate resilience. Thus, this study provides a replicable and adaptable model for the development of large-scale smart and sustainable agricultural solutions.
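A minimal version of the sense-decide-report loop such a system runs could look like the sketch below; the ThingsBoard HTTP telemetry endpoint format, the access token, and the moisture threshold are assumptions for illustration rather than the paper's implementation.

# Sketch: threshold-based pump decision plus telemetry push to a ThingsBoard host.
# Host, token, and threshold are placeholders/assumptions, not the paper's values.
import requests

THINGSBOARD_HOST = "http://demo.thingsboard.io"   # assumed host
ACCESS_TOKEN = "DEVICE_ACCESS_TOKEN"              # placeholder device token
SOIL_MOISTURE_THRESHOLD = 30.0                    # percent, illustrative

def decide_pump(soil_moisture: float) -> bool:
    """Turn the pump on when soil moisture falls below the threshold."""
    return soil_moisture < SOIL_MOISTURE_THRESHOLD

def report(telemetry: dict) -> None:
    """Post one telemetry record to the device telemetry endpoint."""
    url = f"{THINGSBOARD_HOST}/api/v1/{ACCESS_TOKEN}/telemetry"
    requests.post(url, json=telemetry, timeout=5)

reading = {"temperature": 27.4, "humidity": 41.0, "soil_moisture": 22.5}
reading["pump_on"] = decide_pump(reading["soil_moisture"])
report(reading)
print("pump on:", reading["pump_on"])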
Additional Links: PMID-41545563
@article {pmid41545563,
year = {2026},
author = {Morchid, A and Qjidaa, H and Alami, RE and Mobayen, S and Skruch, P and Bossoufi, B},
title = {Smart irrigation-based internet of things and cloud computing technologies for sustainable farming.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35810-0},
pmid = {41545563},
issn = {2045-2322},
abstract = {Sustainable water management in agriculture is a major challenge, particularly in regions facing water scarcity and the growing impacts of climate change. The lack of efficiency of traditional irrigation methods often leads to water waste, reduced productivity, and increased pressure on natural resources. In this context, it is imperative to develop innovative solutions to optimize water use while maintaining agricultural performance. This paper proposes a smart irrigation system based on the internet of things (IoT) and cloud computing. The system incorporates several sensors to measure key environmental parameters, such as temperature, air humidity, soil moisture, and water level. An embedded ESP32 microcontroller collects and transmits the data to the thingsBoard cloud platform, where it is analyzed in real time to determine precise irrigation needs. The system's algorithm automatically makes the necessary decisions to activate or deactivate the irrigation pump, ensuring optimal and accurate water management. Experimental results demonstrate that the system significantly reduces water waste while optimizing irrigation based on the actual needs of the soil and crops. Real-time measurements and automated decision-making ensure accurate and efficient irrigation that adapts to fluctuations in environmental conditions. Performance analysis shows that the proposed approach significantly improves water resource management compared to traditional methods. The integration of cloud computing and the IoT facilitates remote monitoring and automated decision-making, making the system adaptable to a variety of crops and agricultural lands. The estimated cost of implementing the smart irrigation system is approximately $44.00, confirming its economic feasibility and appeal to small and medium-sized farms seeking to optimize water use. This solution also helps to build farmers' resilience to climate change and water scarcity. The system presented represents a significant advance in the field of smart and sustainable irrigation. By optimizing water use and improving agricultural productivity, the system directly contributes to food security, water resource conservation, and climate resilience. Thus, this study provides a replicable and adaptable model for the development of large-scale smart and sustainable agricultural solutions.},
}
RevDate: 2026-01-16
IoT-driven smart irrigation system to improve water use efficiency.
Scientific reports pii:10.1038/s41598-025-33826-6 [Epub ahead of print].
The agriculture sector is the cornerstone of many economies, playing a central role in ensuring food security and contributing to gross domestic product. Difficulties caused by traditional irrigation methods, population growth, and climate change are driving the development of modern irrigation systems. This study presents a smart irrigation system using techniques such as the Internet of Things (IoT), cloud computing, embedded systems, and sensors. The smart system integrates real-time monitoring and control during irrigation, fertilization, and biopesticide application, and a mobile application is implemented to monitor and control the entire system. Results showed that using wood vinegar at low concentrations is an effective way to improve water use efficiency, increase lettuce yield, and optimize disease control compared with other concentrations. A concentration of 400 achieved the best values of the evaluation criteria at 26% moisture content. The smart system reduces water consumption by 47%, achieves a 43% increase in yield, and attains the lowest disease severity index, at 7.78%. The proposed system features real-time monitoring and control, improving water use efficiency, supporting smart agriculture practices, and contributing to food and water security.
Additional Links: PMID-41545460
@article {pmid41545460,
year = {2026},
author = {Mohamed, ZE and Afify, MK and Badr, MM and Omar, OA},
title = {IoT-driven smart irrigation system to improve water use efficiency.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-33826-6},
pmid = {41545460},
issn = {2045-2322},
abstract = {The agriculture sector is the cornerstone of many economies, playing a central role in ensuring food security and contributing to gross domestic product. Difficulties caused by traditional irrigation methods, population growth, and climate change are driving the development of modern irrigation systems. This study presents a smart irrigation system using techniques such as the Internet of Things (IoT), cloud computing, embedded systems, and sensors. The smart system integrates real-time monitoring and control during irrigation, fertilization, and biopesticide application, and a mobile application is implemented to monitor and control the entire system. Results showed that using wood vinegar at low concentrations is an effective way to improve water use efficiency, increase lettuce yield, and optimize disease control compared with other concentrations. A concentration of 400 achieved the best values of the evaluation criteria at 26% moisture content. The smart system reduces water consumption by 47%, achieves a 43% increase in yield, and attains the lowest disease severity index, at 7.78%. The proposed system features real-time monitoring and control, improving water use efficiency, supporting smart agriculture practices, and contributing to food and water security.},
}
RevDate: 2026-01-16
CmpDate: 2026-01-16
Data Science Education for Residents, Researchers, and Students in Psychiatry and Psychology: Program Development and Evaluation Study.
JMIR medical education, 12:e75125 pii:v12i1e75125.
BACKGROUND: The use of artificial intelligence (AI) to analyze health care data has become common in behavioral health sciences. However, the lack of training opportunities for mental health professionals limits clinicians' ability to adopt AI in clinical settings. AI education is essential for trainees, equipping them with the literacy needed to implement AI tools in practice, collaborate effectively with data scientists, and develop skills as interdisciplinary researchers with computing skills.
OBJECTIVE: As part of the Penn Innovation in Suicide Prevention Implementation Research Center, we developed, implemented, and evaluated a virtual workshop to educate psychiatry and psychology trainees on using AI for suicide prevention research.
METHODS: The workshop introduced trainees to natural language processing (NLP) concepts and Python coding skills using Jupyter notebooks within a secure Microsoft Azure Databricks cloud computing and analytics environment. We designed a 3-hour workshop that covered 4 key NLP topics: data characterization, data standardization, concept extraction, and statistical analysis. To demonstrate real-world applications, we processed chief complaints from electronic health records to compare the prevalence of suicide-related encounters across populations by race, ethnicity, and age. Training materials were developed based on standard NLP techniques and domain-specific tasks, such as preprocessing psychiatry-related acronyms. Two researchers drafted and demonstrated the code, incorporating feedback from the Methods Core of the Innovation in Suicide Prevention Implementation Research to refine the materials. To evaluate the effectiveness of the workshop, we used the Kirkpatrick program evaluation model, focusing on participants' reactions (level 1) and learning outcomes (level 2). Confidence changes in knowledge and skills before and after the workshop were assessed using paired t tests, and open-ended questions were included to gather feedback for future improvements.
RESULTS: A total of 10 trainees participated in the workshop virtually, including residents, postdoctoral researchers, and graduate students from the psychiatry and psychology departments. The participants found the workshop helpful (mean 3.17 on a scale of 1-4, SD 0.41). Their overall confidence in NLP knowledge significantly increased (P=.002) from 1.35 (SD 0.47) to 2.79 (SD 0.46). Confidence in coding abilities also improved significantly (P=.01), increasing from 1.33 (SD 0.60) to 2.25 (SD 0.42). Open-ended feedback suggested incorporating thematic analysis and exploring additional datasets for future workshops.
CONCLUSIONS: This study illustrates the effectiveness of a tailored data science workshop for trainees in psychiatry and psychology, focusing on applying NLP techniques for suicide prevention research. The workshop significantly enhanced participants' confidence in conducting data science research. Future workshops will cover additional topics of interest, such as working with large language models, thematic analysis, diverse datasets, and multifaceted outcomes. This includes examining how participants' learning impacts their practice and research, as well as assessing knowledge and skills beyond self-reported confidence through methods such as case studies for deeper insights.
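To give a flavor of the NLP content described above (acronym expansion and concept extraction over free-text chief complaints), the sketch below uses plain regular expressions; the acronym map, keyword list, and example complaints are hypothetical, not the workshop materials.

# Sketch: expand psychiatry acronyms and flag suicide-related chief complaints.
# The acronym map and keyword list are illustrative assumptions.
import re

ACRONYMS = {"si": "suicidal ideation", "sa": "suicide attempt", "hi": "homicidal ideation"}
FLAG_TERMS = {"suicidal ideation", "suicide attempt", "self-harm"}

def normalize(complaint: str) -> str:
    text = complaint.lower()
    for abbr, full in ACRONYMS.items():
        text = re.sub(rf"\b{abbr}\b", full, text)  # expand whole-word acronyms only
    return text

def flag(complaint: str) -> bool:
    text = normalize(complaint)
    return any(term in text for term in FLAG_TERMS)

complaints = ["Pt presents with SI", "ankle pain after fall", "SA last night"]
for c in complaints:
    print(f"{c!r:35} -> flagged: {flag(c)}")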
Additional Links: PMID-41544003
@article {pmid41544003,
year = {2026},
author = {Donnelly, HK and Mandell, D and Hwang, S and Schriver, E and Vurgun, U and Neill, G and Patel, E and Reilly, ME and Steinberg, M and Calloway, A and Gallop, R and Oquendo, MA and Brown, GK and Mowery, DL},
title = {Data Science Education for Residents, Researchers, and Students in Psychiatry and Psychology: Program Development and Evaluation Study.},
journal = {JMIR medical education},
volume = {12},
number = {},
pages = {e75125},
doi = {10.2196/75125},
pmid = {41544003},
issn = {2369-3762},
mesh = {Humans ; *Psychiatry/education ; *Data Science/education ; *Psychology/education ; Program Evaluation ; Internship and Residency ; Program Development ; *Research Personnel/education ; Artificial Intelligence ; Suicide Prevention ; Natural Language Processing ; Students ; },
abstract = {BACKGROUND: The use of artificial intelligence (AI) to analyze health care data has become common in behavioral health sciences. However, the lack of training opportunities for mental health professionals limits clinicians' ability to adopt AI in clinical settings. AI education is essential for trainees, equipping them with the literacy needed to implement AI tools in practice, collaborate effectively with data scientists, and develop skills as interdisciplinary researchers with computing skills.
OBJECTIVE: As part of the Penn Innovation in Suicide Prevention Implementation Research Center, we developed, implemented, and evaluated a virtual workshop to educate psychiatry and psychology trainees on using AI for suicide prevention research.
METHODS: The workshop introduced trainees to natural language processing (NLP) concepts and Python coding skills using Jupyter notebooks within a secure Microsoft Azure Databricks cloud computing and analytics environment. We designed a 3-hour workshop that covered 4 key NLP topics: data characterization, data standardization, concept extraction, and statistical analysis. To demonstrate real-world applications, we processed chief complaints from electronic health records to compare the prevalence of suicide-related encounters across populations by race, ethnicity, and age. Training materials were developed based on standard NLP techniques and domain-specific tasks, such as preprocessing psychiatry-related acronyms. Two researchers drafted and demonstrated the code, incorporating feedback from the Methods Core of the Innovation in Suicide Prevention Implementation Research to refine the materials. To evaluate the effectiveness of the workshop, we used the Kirkpatrick program evaluation model, focusing on participants' reactions (level 1) and learning outcomes (level 2). Confidence changes in knowledge and skills before and after the workshop were assessed using paired t tests, and open-ended questions were included to gather feedback for future improvements.
RESULTS: A total of 10 trainees participated in the workshop virtually, including residents, postdoctoral researchers, and graduate students from the psychiatry and psychology departments. The participants found the workshop helpful (mean 3.17 on a scale of 1-4, SD 0.41). Their overall confidence in NLP knowledge significantly increased (P=.002) from 1.35 (SD 0.47) to 2.79 (SD 0.46). Confidence in coding abilities also improved significantly (P=.01), increasing from 1.33 (SD 0.60) to 2.25 (SD 0.42). Open-ended feedback suggested incorporating thematic analysis and exploring additional datasets for future workshops.
CONCLUSIONS: This study illustrates the effectiveness of a tailored data science workshop for trainees in psychiatry and psychology, focusing on applying NLP techniques for suicide prevention research. The workshop significantly enhanced participants' confidence in conducting data science research. Future workshops will cover additional topics of interest, such as working with large language models, thematic analysis, diverse datasets, and multifaceted outcomes. This includes examining how participants' learning impacts their practice and research, as well as assessing knowledge and skills beyond self-reported confidence through methods such as case studies for deeper insights.},
}
RevDate: 2026-01-14
Desargues cloud TPS: a cloud-based automatic radiation treatment planning system for IMRT.
Biomedical engineering online pii:10.1186/s12938-026-01510-z [Epub ahead of print].
PURPOSE: To develop a cloud-based automated treatment planning system for intensity-modulated radiation therapy and evaluate its efficacy and safety for tumors in various anatomical sites under general clinical scenarios.
RESULTS: All the plans from both groups satisfy the PTV prescription dose coverage requirement of at least 95% of the PTV volume. The mean HI of the plan A and plan B groups is 0.084 and 0.081, respectively, with no statistically significant difference from that of the plan C group. The mean CI, PQM, OOT, and POT are 0.806, 77.55, 410 s, and 185 s for the plan A group, and 0.841, 76.87, 515.1 s, and 271.1 s for the plan B group; these were significantly superior to those of the plan C group, except for the CI of the plan A group. There is no statistically significant difference between the dose accuracies of the plan B and plan C groups.
CONCLUSIONS: The overall efficacy and safety of the Desargues Cloud TPS are not significantly different from those of Varian Eclipse, while some efficacy indicators of plans generated by automatic planning, with or without manual adjustments, are even significantly superior to those of fully manual plans from Eclipse. Cloud-based automatic treatment planning additionally increases the efficiency of the treatment planning process and facilitates the sharing of planning knowledge.
MATERIALS AND METHODS: The cloud-based automatic radiation treatment planning system, Desargues Cloud TPS, was designed and developed in a browser/server mode, in which all computing-intensive functions were deployed on the server and the user interfaces were implemented on the web. Communication between the browser and the server took place over the local area network (LAN) of a radiotherapy institution. The automatic treatment planning module adopted a hybrid of knowledge-based planning (KBP) and protocol-based automatic iterative optimization (PB-AIO), consisting of three steps: beam angle optimization (BAO), beam fluence optimization (BFO), and machine parameter optimization (MPO). Fifty-three patients from two institutions were enrolled in a multi-center, self-controlled clinical validation. For each patient, three IMRT plans were designed. Plans A and B were designed on the Desargues Cloud TPS using automatic planning without and with manual adjustments, respectively, and plan C was designed on the Varian Eclipse TPS using fully manual planning. The efficacy indicators were the heterogeneity index, conformity index, plan quality metric, overall operation time, and plan optimization time. The safety indicators were the gamma indices of dose verification.
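For context on two of the reported indicators, the sketch below computes a homogeneity index and a simple conformity index; definitions of both vary between protocols, and these particular formulas and numbers are illustrative assumptions, not the study's implementation.

# Sketch: one common homogeneity index (HI) and a simple conformity index (CI).
# Dose and volume values are illustrative.
def homogeneity_index(d2, d98, d50):
    """HI = (D2% - D98%) / D50%; lower means a more homogeneous dose."""
    return (d2 - d98) / d50

def conformity_index(target_volume_covered, prescription_isodose_volume):
    """A simple CI: covered target volume divided by the prescription-isodose volume."""
    return target_volume_covered / prescription_isodose_volume

print("HI:", round(homogeneity_index(d2=52.0, d98=47.8, d50=50.0), 3))
print("CI:", round(conformity_index(target_volume_covered=95.0,
                                    prescription_isodose_volume=112.0), 3))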
Additional Links: PMID-41530840
@article {pmid41530840,
year = {2026},
author = {Guo, J and Qin, S and Guo, C and Zhu, M and Zhou, Y and Wang, H and Xu, X and Zhan, W and Chen, L and Ni, J and Tang, Y and Chen, J and Shen, Y and Chen, H and Men, K and Liu, H and Pan, Y and Ye, J and Huan, J and Zhou, J},
title = {Desargues cloud TPS: a cloud-based automatic radiation treatment planning system for IMRT.},
journal = {Biomedical engineering online},
volume = {},
number = {},
pages = {},
doi = {10.1186/s12938-026-01510-z},
pmid = {41530840},
issn = {1475-925X},
support = {ZDXK202235//Jiangsu Provincial Medical Key Discipline/ ; 2024Z046//Key Research and Development Project of Ningbo/ ; 2024Z220//Key Research and Development Project of Ningbo/ ; 2025C02249(SD2)//Zhejiang Provincial Leading Goose Plan Project/ ; 2022YFC2402303//National Key Research and Development Program of China/ ; LCZX202351//Clinical Key Disease Diagnosis and Treatment Technology Project of Suzhou City/ ; },
abstract = {PURPOSE: To develop a cloud-based automated treatment planning system for intensity-modulated radiation therapy and evaluate its efficacy and safety for tumors in various anatomical sites under general clinical scenarios.
RESULTS: All the plans from both groups satisfy the PTV prescription dose coverage requirement of at least 95% of the PTV volume. The mean HI of the plan A and plan B groups is 0.084 and 0.081, respectively, with no statistically significant difference from that of the plan C group. The mean CI, PQM, OOT, and POT are 0.806, 77.55, 410 s, and 185 s for the plan A group, and 0.841, 76.87, 515.1 s, and 271.1 s for the plan B group; these were significantly superior to those of the plan C group, except for the CI of the plan A group. There is no statistically significant difference between the dose accuracies of the plan B and plan C groups.
CONCLUSIONS: The overall efficacy and safety of the Desargues Cloud TPS are not significantly different from those of Varian Eclipse, while some efficacy indicators of plans generated by automatic planning, with or without manual adjustments, are even significantly superior to those of fully manual plans from Eclipse. Cloud-based automatic treatment planning additionally increases the efficiency of the treatment planning process and facilitates the sharing of planning knowledge.
MATERIALS AND METHODS: The cloud-based automatic radiation treatment planning system, Desargues Cloud TPS, was designed and developed in a browser/server mode, in which all computing-intensive functions were deployed on the server and the user interfaces were implemented on the web. Communication between the browser and the server took place over the local area network (LAN) of a radiotherapy institution. The automatic treatment planning module adopted a hybrid of knowledge-based planning (KBP) and protocol-based automatic iterative optimization (PB-AIO), consisting of three steps: beam angle optimization (BAO), beam fluence optimization (BFO), and machine parameter optimization (MPO). Fifty-three patients from two institutions were enrolled in a multi-center, self-controlled clinical validation. For each patient, three IMRT plans were designed. Plans A and B were designed on the Desargues Cloud TPS using automatic planning without and with manual adjustments, respectively, and plan C was designed on the Varian Eclipse TPS using fully manual planning. The efficacy indicators were the heterogeneity index, conformity index, plan quality metric, overall operation time, and plan optimization time. The safety indicators were the gamma indices of dose verification.},
}
RevDate: 2026-01-12
CmpDate: 2026-01-12
Ferroelectric Optoelectronic Sensor for Intelligent Flame Detection and In-Sensor Motion Perception.
Nano-micro letters, 18(1):123.
Next-generation fire safety systems demand precise detection and motion recognition of flames. In-sensor computing, which integrates sensing, memory, and processing capabilities, has emerged as a key technology in flame detection. However, the implementation of hardware-level functional demonstrations based on artificial vision systems in the solar-blind ultraviolet (UV) band (200-280 nm) is hindered by weak detection capability. Here, we propose Ga2O3/In2Se3 heterojunctions for a ferroelectric (Fe) optoelectronic sensor (OES) array (5 × 5 pixels), which is capable of ultraweak UV light detection with ultrahigh detectivity through ferroelectric regulation and features configurable multimode functionality. The Fe-OES array can directly sense different flame motions and simulate the non-spiking gradient neurons of the insect visual system. Moreover, the flame signal can be effectively amplified in combination with leaky integrate-and-fire neuron hardware. Using this Fe-OES system and neuromorphic hardware, we successfully demonstrate three flame processing tasks: efficient flame detection across all time periods with terminal and cloud-based alarms; flame motion recognition with a lightweight convolutional neural network achieving 96.47% accuracy; and flame light recognition with 90.51% accuracy by means of a photosensitive artificial neural system. This work provides effective tools and approaches for addressing a variety of complex flame detection tasks.
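The leaky integrate-and-fire neuron mentioned above is simple to simulate; the sketch below is a generic textbook LIF model with illustrative parameters, not the paper's neuromorphic hardware.

# Sketch: a minimal leaky integrate-and-fire (LIF) neuron driven by a step stimulus.
import numpy as np

def lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate the input; emit a spike (1) whenever the threshold is crossed."""
    v = v_rest
    spikes = np.zeros_like(input_current)
    for i, current in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:
            spikes[i] = 1.0
            v = v_reset
    return spikes

stimulus = np.concatenate([np.zeros(50), 3.0 * np.ones(100), np.zeros(50)])
print("spike count:", int(lif(stimulus).sum()))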
Additional Links: PMID-41526779
@article {pmid41526779,
year = {2026},
author = {Wei, J and Ma, G and Liang, R and Wang, W and Chen, J and Guan, S and Jiang, J and Zhu, X and Cheng, Q and Shen, Y and Xia, Q and Wu, S and Wan, H and Zeng, L and Li, M and Wang, Y and Shen, L and Han, W and Wang, H},
title = {Ferroelectric Optoelectronic Sensor for Intelligent Flame Detection and In-Sensor Motion Perception.},
journal = {Nano-micro letters},
volume = {18},
number = {1},
pages = {123},
pmid = {41526779},
issn = {2150-5551},
abstract = {Next-generation fire safety systems demand precise detection and motion recognition of flames. In-sensor computing, which integrates sensing, memory, and processing capabilities, has emerged as a key technology in flame detection. However, the implementation of hardware-level functional demonstrations based on artificial vision systems in the solar-blind ultraviolet (UV) band (200-280 nm) is hindered by weak detection capability. Here, we propose Ga2O3/In2Se3 heterojunctions for a ferroelectric (Fe) optoelectronic sensor (OES) array (5 × 5 pixels), which is capable of ultraweak UV light detection with ultrahigh detectivity through ferroelectric regulation and features configurable multimode functionality. The Fe-OES array can directly sense different flame motions and simulate the non-spiking gradient neurons of the insect visual system. Moreover, the flame signal can be effectively amplified in combination with leaky integrate-and-fire neuron hardware. Using this Fe-OES system and neuromorphic hardware, we successfully demonstrate three flame processing tasks: efficient flame detection across all time periods with terminal and cloud-based alarms; flame motion recognition with a lightweight convolutional neural network achieving 96.47% accuracy; and flame light recognition with 90.51% accuracy by means of a photosensitive artificial neural system. This work provides effective tools and approaches for addressing a variety of complex flame detection tasks.},
}
RevDate: 2026-01-12
A Personalized Point-of-Care Platform for Discovery and Validation of miRNA Targets Using AI and Edge Computing Supporting Personalized Cancer Therapy.
IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].
The paradigm of cancer therapy is rapidly shifting towards personalized precision medicine, yet current diagnostic approaches remain constrained by centralized laboratory infrastructure, creating critical delays between sample collection and therapeutic intervention. To address this limitation, we present GTMIT, a novel point-of-care (POC) platform integrating artificial intelligence (AI) and edge computing for real-time discovery and validation of microRNA (miRNA) targets directly at the patient's bedside. Unlike traditional laboratory-centric models, GTMIT with edge computing operates on local hardware resources (e.g., portable sequencers and mobile devices), enabling POC decision-making without reliance on the cloud. Our framework combines three key innovations: 1) a Transformer-GNN hybrid architecture with Power Normalization for robust miRNA-mRNA interaction prediction; 2) SNP-adaptive Gapped Pattern Graph Convolutional Networks (GP-GCN) accounting for patient-specific genetic variations; and 3) edge therapeutic optimization incorporating regional cancer prevalence patterns and resource constraints. We evaluate our proposed platform on several clinical datasets. GTMIT demonstrates excellent performance on a range of metrics, achieving 94% AUC, 87% precision, and 79% recall on benchmark datasets. By bridging molecular diagnostics with immediate intervention at the POC, GTMIT reduces time-to-treatment from days to minutes, particularly benefiting resource-limited settings.
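The reported metrics (AUC, precision, recall) can be reproduced on any set of predictions with scikit-learn, as in the sketch below; the labels and scores are hypothetical, not GTMIT outputs.

# Sketch: compute AUC, precision, and recall for hypothetical binary predictions.
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.15, 0.80, 0.55, 0.40, 0.05, 0.70, 0.60])
y_pred = (y_score >= 0.5).astype(int)  # threshold the scores at 0.5

print("AUC:      ", roc_auc_score(y_true, y_score))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))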
Additional Links: PMID-41525639
@article {pmid41525639,
year = {2026},
author = {Li, D and Li, C and Zhu, F and Chen, X and Mishra, S and Routray, S},
title = {A Personalized Point-of-Care Platform for Discovery and Validation of miRNA Targets Using AI and Edge Computing Supporting Personalized Cancer Therapy.},
journal = {IEEE journal of biomedical and health informatics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/JBHI.2025.3626933},
pmid = {41525639},
issn = {2168-2208},
abstract = {The paradigm of cancer therapy is rapidly shifting towards personalized precision medicine, yet current diagnostic approaches remain constrained by centralized laboratory infrastructure, creating critical delays between sample collection and therapeutic intervention. To address this limitation, we present GTMIT, a novel point-of-care (POC) platform integrating artificial intelligence (AI) and edge computing for real-time discovery and validation of microRNA (miRNA) targets directly at the patient's bedside. Unlike traditional laboratory-centric models, GTMIT with edge computing operates on local hardware resources (e.g., portable sequencers and mobile devices), enabling POC decision-making without reliance on the cloud. Our framework combines three key innovations: 1) a Transformer-GNN hybrid architecture with Power Normalization for robust miRNA-mRNA interaction prediction; 2) SNP-adaptive Gapped Pattern Graph Convolutional Networks (GP-GCN) accounting for patient-specific genetic variations; and 3) edge therapeutic optimization incorporating regional cancer prevalence patterns and resource constraints. We evaluate our proposed platform on several clinical datasets. GTMIT demonstrates excellent performance on a range of metrics, achieving 94% AUC, 87% precision, and 79% recall on benchmark datasets. By bridging molecular diagnostics with immediate intervention at the POC, GTMIT reduces time-to-treatment from days to minutes, particularly benefiting resource-limited settings.},
}
RevDate: 2026-01-12
CmpDate: 2026-01-12
Data management for distributed computational workflows: An iRODS-based setup and its performance.
PloS one, 21(1):e0340757 pii:PONE-D-24-57570.
Modern data-management frameworks promise a flexible and efficient management of data and metadata across storage backends. However, such claims need to be put to a meaningful test in daily practice. We conjecture that such frameworks should be fit to construct a data backend for workflows which use geographically distributed high-performance and cloud computing systems. Cross-site data transfers within such a backend should largely saturate network bandwidth, in particular when parameters such as buffer sizes are optimized. To explore this further, we evaluate the "integrated Rule-Oriented Data System" iRODS with EUDAT's B2SAFE module as data backend for the "Distributed Data Infrastructure" within the LEXIS Platform for complex computing workflow orchestration and distributed data management. The focus of our study is on testing our conjectures-i.e., on construction and assessment of the data infrastructure and on measurements of data-transfer performance over the wide-area network between two selected supercomputing sites connected to LEXIS. We analyze limitations and identify optimization opportunities. Efficient utilization of the available network bandwidth is possible and depends on suitable client configuration and file size. Our work shows that systems such as iRODS nowadays fit the requirements for integration in federated computing infrastructures involving web-based authentication flows with OpenID Connect and rich on-line services. We are continuing to exploit these properties in the EXA4MIND project, where we aim at optimizing data-heavy workflows, integrating various systems for managing structured and unstructured data.
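A single data-object transfer of the kind benchmarked here can be timed with the python-irodsclient package, as sketched below; the host, zone, credentials, and paths are placeholders, and the use of that client package is itself an assumption rather than a description of the LEXIS setup.

# Sketch: time one upload to an iRODS zone and report rough throughput.
# Assumes python-irodsclient is installed; all connection details are placeholders.
import os
import time
from irods.session import iRODSSession

local_file = "sample.dat"
irods_path = "/tempZone/home/rods/sample.dat"

with iRODSSession(host="irods.example.org", port=1247,
                  user="rods", password="secret", zone="tempZone") as session:
    start = time.time()
    session.data_objects.put(local_file, irods_path)  # upload one data object
    elapsed = time.time() - start

size_mb = os.path.getsize(local_file) / 1e6
print(f"transferred {size_mb:.1f} MB in {elapsed:.2f} s "
      f"({size_mb / elapsed:.1f} MB/s)")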
Additional Links: PMID-41525253
@article {pmid41525253,
year = {2026},
author = {Hayek, M and Golasowski, M and Hachinger, S and García-Hernández, RJ and Munke, J and Lindner, G and Slaninová, K and Tunka, P and Vondrák, V and Kranzlmüller, D and Martinovič, J},
title = {Data management for distributed computational workflows: An iRODS-based setup and its performance.},
journal = {PloS one},
volume = {21},
number = {1},
pages = {e0340757},
doi = {10.1371/journal.pone.0340757},
pmid = {41525253},
issn = {1932-6203},
mesh = {*Workflow ; *Data Management/methods ; Cloud Computing ; Software ; },
abstract = {Modern data-management frameworks promise a flexible and efficient management of data and metadata across storage backends. However, such claims need to be put to a meaningful test in daily practice. We conjecture that such frameworks should be fit to construct a data backend for workflows which use geographically distributed high-performance and cloud computing systems. Cross-site data transfers within such a backend should largely saturate network bandwidth, in particular when parameters such as buffer sizes are optimized. To explore this further, we evaluate the "integrated Rule-Oriented Data System" iRODS with EUDAT's B2SAFE module as data backend for the "Distributed Data Infrastructure" within the LEXIS Platform for complex computing workflow orchestration and distributed data management. The focus of our study is on testing our conjectures-i.e., on construction and assessment of the data infrastructure and on measurements of data-transfer performance over the wide-area network between two selected supercomputing sites connected to LEXIS. We analyze limitations and identify optimization opportunities. Efficient utilization of the available network bandwidth is possible and depends on suitable client configuration and file size. Our work shows that systems such as iRODS nowadays fit the requirements for integration in federated computing infrastructures involving web-based authentication flows with OpenID Connect and rich on-line services. We are continuing to exploit these properties in the EXA4MIND project, where we aim at optimizing data-heavy workflows, integrating various systems for managing structured and unstructured data.},
}
RevDate: 2026-01-10
CmpDate: 2026-01-10
A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning.
Foods (Basel, Switzerland), 15(1): pii:foods15010133.
As the global food industry expands and consumers demand higher food safety and quality standards, high-throughput detection technology utilizing digital intelligent optical sensors has emerged as a research hotspot in food testing due to its advantages of speed, precision, and non-destructive operation. Integrating cutting-edge achievements in optics, electronics, and computer science with machine learning algorithms, this technology efficiently processes massive datasets. This paper systematically summarizes the construction principles of intelligent optical sensors and their applications in food inspection. Sensors convert light signals into electrical signals using nanomaterials such as quantum dots, metal nanoparticles, and upconversion nanoparticles, and then employ machine learning algorithms including support vector machines, random forests, and convolutional neural networks for data analysis and model optimization. This enables efficient detection of target substances like pesticide residues, heavy metals, microorganisms, and food freshness. Furthermore, the integration of multiple detection mechanisms-including spectral analysis, fluorescence imaging, and hyperspectral imaging-has significantly broadened the sensors' application scenarios. Looking ahead, optical sensors will evolve toward multifunctional integration, miniaturization, and intelligent operation. By leveraging cloud computing and IoT technologies, they will deliver innovative solutions for comprehensive monitoring of food quality and safety across the entire supply chain.
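The sensor-plus-classifier pattern the review describes reduces, at its simplest, to fitting a classical model on spectral features; the sketch below trains an SVM on synthetic "spectra" as a stand-in for real optical-sensor readouts.

# Sketch: SVM classification of synthetic spectral features (illustrative data only).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two hypothetical classes (e.g., "contaminated" vs. "clean") with shifted spectra.
clean = rng.normal(loc=0.0, scale=1.0, size=(200, 50))
contaminated = rng.normal(loc=0.5, scale=1.0, size=(200, 50))
X = np.vstack([clean, contaminated])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))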
Additional Links: PMID-41517198
Citation:
@article {pmid41517198,
year = {2026},
author = {Wang, Y and Yang, Y and Liu, H},
title = {A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning.},
journal = {Foods (Basel, Switzerland)},
volume = {15},
number = {1},
pages = {},
doi = {10.3390/foods15010133},
pmid = {41517198},
issn = {2304-8158},
support = {the National Key Research and Development Program (No. 2023YFF1104801)//Huilin Liu/ ; },
abstract = {As the global food industry expands and consumers demand higher food safety and quality standards, high-throughput detection technology utilizing digital intelligent optical sensors has emerged as a research hotspot in food testing due to its advantages of speed, precision, and non-destructive operation. Integrating cutting-edge achievements in optics, electronics, and computer science with machine learning algorithms, this technology efficiently processes massive datasets. This paper systematically summarizes the construction principles of intelligent optical sensors and their applications in food inspection. Sensors convert light signals into electrical signals using nanomaterials such as quantum dots, metal nanoparticles, and upconversion nanoparticles, and then employ machine learning algorithms including support vector machines, random forests, and convolutional neural networks for data analysis and model optimization. This enables efficient detection of target substances like pesticide residues, heavy metals, microorganisms, and food freshness. Furthermore, the integration of multiple detection mechanisms-including spectral analysis, fluorescence imaging, and hyperspectral imaging-has significantly broadened the sensors' application scenarios. Looking ahead, optical sensors will evolve toward multifunctional integration, miniaturization, and intelligent operation. By leveraging cloud computing and IoT technologies, they will deliver innovative solutions for comprehensive monitoring of food quality and safety across the entire supply chain.},
}
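As a minimal sketch of the classifier families the review surveys, the following snippet trains an SVM and a random forest on synthetic "spectral" features with scikit-learn; the data are random stand-ins, not a real optical-sensor dataset.

```python
# Minimal sketch of the classifiers the review describes, on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 300 synthetic samples x 64 spectral channels, two classes (e.g. contaminated / clean).
X = rng.normal(size=(300, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # toy decision rule for labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```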
RevDate: 2026-01-10
CmpDate: 2026-01-10
Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems.
Sensors (Basel, Switzerland), 26(1): pii:s26010348.
Fog computing has revolutionized the world by providing its services close to the user premises, which results in reducing the communication latency for many real-time applications. This communication latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response time. Many real-time applications, such as smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality, are delay-sensitive and require quick response times. The response delay in certain critical healthcare applications might cause serious harm to patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to its close proximity to end users, fog computing is considered the most suitable computing platform for real-time applications. However, fog devices are resource constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. Using the iFogSim toolkit, the proposed technique was evaluated through extensive simulations in terms of system response time, execution cost, and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.
Additional Links: PMID-41516782
Citation:
@article {pmid41516782,
year = {2026},
author = {Khan, S and Shah, IA and Loh, WK and Khan, JA and Mylonas, A and Pitropakis, N},
title = {Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010348},
pmid = {41516782},
issn = {1424-8220},
mesh = {Algorithms ; Humans ; *Cloud Computing ; },
abstract = {Fog computing has revolutionized the world by providing its services close to the user premises, which results in reducing the communication latency for many real-time applications. This communication latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response time. Many real-time applications like smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality are delay-sensitive real-time applications and require quick response times. The response delay in certain critical healthcare applications might cause serious loss to health patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to close proximity to end users, fog computing is considered to be the most suitable computing platform for real-time applications. However, fog devices are resource constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. Using the iFogSim toolkit, the proposed technique was evaluated for many extensive simulations to obtain the desired results in terms of system response time, cost of execution and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.},
}
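For readers unfamiliar with PSO-based scheduling, here is a minimal sketch of plain particle swarm optimization assigning tasks to fog nodes to minimize makespan; it is not the paper's Modified PSO, and the task lengths and node capacities are invented.

```python
# Illustrative PSO sketch for task-to-fog-node assignment (standard PSO,
# not the paper's MPSO); task lengths and node speeds are made up.
import numpy as np

rng = np.random.default_rng(1)
tasks = rng.uniform(100, 1000, size=20)      # task lengths (million instructions)
speeds = np.array([500.0, 750.0, 1000.0])    # fog node capacities (MIPS)

def makespan(position):
    nodes = np.clip(position.round().astype(int), 0, len(speeds) - 1)
    loads = np.zeros(len(speeds))
    for t, n in zip(tasks, nodes):
        loads[n] += t / speeds[n]
    return loads.max()

n_particles, n_iter = 30, 100
pos = rng.uniform(0, len(speeds) - 1, size=(n_particles, len(tasks)))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, len(speeds) - 1)
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best makespan (s):", round(pbest_val.min(), 2))
```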
RevDate: 2026-01-10
MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability.
Sensors (Basel, Switzerland), 26(1): pii:s26010314.
The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system's efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications.
Additional Links: PMID-41516748
Citation:
@article {pmid41516748,
year = {2026},
author = {Ai, Y and Zhu, Y and Jiang, Y and Deng, Y},
title = {MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010314},
pmid = {41516748},
issn = {1424-8220},
abstract = {The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system's efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications.},
}
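The cache-based decoupling of acquisition from transmission can be illustrated with a toy producer/consumer sketch in Python; the protocol names are simulated placeholders, not the MIGS implementation.

```python
# Toy illustration of decoupling southbound acquisition from northbound
# transmission with a shared queue (not the MIGS code; protocols are simulated).
import queue
import random
import threading
import time

cache = queue.Queue(maxsize=1000)

def southbound(protocol, n_readings=5):
    """Simulate one protocol instance producing readings into the cache."""
    for i in range(n_readings):
        cache.put({"protocol": protocol, "seq": i, "value": random.random()})
        time.sleep(0.01)

def northbound():
    """Drain the cache and 'forward' readings to the cloud (here: print)."""
    while True:
        item = cache.get()
        if item is None:          # sentinel: shut down
            break
        print("forwarding", item)

workers = [threading.Thread(target=southbound, args=(p,)) for p in ("modbus", "mqtt", "opcua")]
uplink = threading.Thread(target=northbound)
uplink.start()
for w in workers:
    w.start()
for w in workers:
    w.join()
cache.put(None)
uplink.join()
```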
RevDate: 2026-01-10
CmpDate: 2026-01-10
A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks.
Sensors (Basel, Switzerland), 26(1): pii:s26010229.
Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging; federated learning offers a way to train models across devices while keeping raw data local. When combined with edge, fog, or cloud computing, federated learning offers a way to support near-real-time mental health analysis while keeping raw data local. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.
Additional Links: PMID-41516665
Citation:
@article {pmid41516665,
year = {2025},
author = {Fiaz, I and Kanwal, N and Al-Said Ahmad, A},
title = {A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010229},
pmid = {41516665},
issn = {1424-8220},
mesh = {*Cloud Computing ; Humans ; *Mental Health ; *Mental Disorders/diagnosis ; Wearable Electronic Devices ; Mobile Applications ; Telemedicine ; },
abstract = {Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging; federated learning offers a way to train models across devices while keeping raw data local. When combined with edge, fog, or cloud computing, federated learning offers a way to support near-real-time mental health analysis while keeping raw data local. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.},
}
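A minimal sketch of the federated-averaging idea that underlies many of the reviewed systems: clients fit local models on private data and only model weights, scaled by sample counts, are aggregated. The linear models and client sizes below are synthetic, not any specific reviewed system.

```python
# Minimal federated-averaging sketch: clients fit a local linear model and only
# model weights (never raw data) are aggregated, weighted by sample count.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.2, 2.0])

def local_fit(n_samples):
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # local least-squares solution
    return w, n_samples

client_updates = [local_fit(n) for n in (40, 120, 300)]   # three simulated devices
total = sum(n for _, n in client_updates)
global_w = sum(w * n for w, n in client_updates) / total
print("federated estimate:", np.round(global_w, 3))
```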
RevDate: 2026-01-10
A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC.
Sensors (Basel, Switzerland), 26(1): pii:s26010217.
The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev-Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF-cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.
Additional Links: PMID-41516652
Citation:
@article {pmid41516652,
year = {2025},
author = {Cha, W and Lee, HJ and Kook, S and Kim, K and Won, D},
title = {A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010217},
pmid = {41516652},
issn = {1424-8220},
abstract = {The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev-Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF-cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.},
}
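As a toy illustration (not the PXRA protocol), the sketch below simulates a PUF challenge-response with an HMAC over a device secret, derives a session key, and seals telemetry with AES-GCM, binding session context as associated data; it uses the widely available cryptography package, and all key material is generated on the fly.

```python
# Toy challenge-response plus AEAD sketch (not the PXRA protocol). A real PUF
# response comes from device hardware; here an HMAC over a device secret stands in.
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

device_secret = os.urandom(32)            # stand-in for hardware variation

def puf_response(challenge: bytes) -> bytes:
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Server issues a challenge; both sides derive the same session key from the response.
challenge = os.urandom(16)
session_key = hashlib.sha256(puf_response(challenge) + challenge).digest()

aead = AESGCM(session_key)
nonce = os.urandom(12)
context = b"xr-session-42|server-A"       # associated data binds the session context
ciphertext = aead.encrypt(nonce, b"pose and gaze telemetry", context)
print(aead.decrypt(nonce, ciphertext, context))   # b'pose and gaze telemetry'
```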
RevDate: 2026-01-10
CmpDate: 2026-01-10
An Efficient Clinical Decision Support Framework Using IoMT Based on Explainable and Trustworthy Artificial Intelligence with Transformer Model and Blockchain-Integrated Chunking.
Diagnostics (Basel, Switzerland), 16(1): pii:diagnostics16010007.
Background/Objectives: The use of edge-cloud architectures has increased rapidly to move the analysis of AI-enabled health data to global environments. However, data security, communication overhead, cost-effectiveness, and data transmission losses are still important problems to be solved. Methods: In this paper, we propose a reliable, explainable, and energy-efficient stress detection framework supported by a cost-oriented blockchain-based content-defined chunking approach to minimise the losses during data transfer. In the proposed architecture, the Nurse Stress dataset represents IoMT data. While the chunking process reduces communication volume and storage costs by avoiding data duplication, blockchain technology eliminates the risks of unauthorised access and manipulation by ensuring the immutability and traceability of data blocks. Results: All Transformer-based models have demonstrated over 99% accuracy. The TimesNet model, in particular, has been designated as the system's reference model, exhibiting superior performance in terms of both stability and accuracy. The main contribution of this study lies in proposing one of the first integrated frameworks that jointly employs chunking-based data management, blockchain-enabled trust mechanisms, and edge-cloud computing with XAI to ensure secure and transparent IoMT data processing. The proposed system not only performs highly accurate stress detection, but also optimises the dimensions of reliable data transmission, energy and cost efficiency, and clinical reliability. Conclusions: In this respect, the study presents a scalable, reliable, and repeatable approach in health decision support systems by combining data security, integrity, and explainability issues, which are addressed separately in the literature, in a holistic manner.
Additional Links: PMID-41515502
Citation:
@article {pmid41515502,
year = {2025},
author = {Arslanoğlu, K and Karaköse, M},
title = {An Efficient Clinical Decision Support Framework Using IoMT Based on Explainable and Trustworthy Artificial Intelligence with Transformer Model and Blockchain-Integrated Chunking.},
journal = {Diagnostics (Basel, Switzerland)},
volume = {16},
number = {1},
pages = {},
doi = {10.3390/diagnostics16010007},
pmid = {41515502},
issn = {2075-4418},
abstract = {Background/Objectives: The use of edge-cloud architectures has increased rapidly to move the analysis of AI-enabled health data to global environments. However, data security, communication overhead, cost-effectiveness, and data transmission losses are still important problems to be solved. Methods: In this paper, we propose a reliable, explainable, and energy-efficient stress detection framework supported by a cost-oriented blockchain-based content-defined chunking approach to minimise the losses during data transfer. In the proposed architecture, the Nurse Stress dataset represents IoMT data. While the chunking process reduces communication volume and storage costs by avoiding data duplication, blockchain technology eliminates the risks of unauthorised access and manipulation by ensuring the immutability and traceability of data blocks. Results: All Transformer-based models have demonstrated over 99% accuracy. The TimesNet model, in particular, has been designated as the system's reference model, exhibiting superior performance in terms of both stability and accuracy. The main contribution of this study lies in proposing one of the first integrated frameworks that jointly employs chunking-based data management, blockchain-enabled trust mechanisms, and edge-cloud computing with XAI to ensure secure and transparent IoMT data processing. The proposed system not only performs highly accurate stress detection, but also optimises the dimensions of reliable data transmission, energy and cost efficiency, and clinical reliability. Conclusions: In this respect, the study presents a scalable, reliable, and repeatable approach in health decision support systems by combining data security, integrity, and explainability issues, which are addressed separately in the literature, in a holistic manner.},
}
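A minimal sketch of content-defined chunking plus a hash chain over the resulting chunks; the boundary window and mask are arbitrary choices for illustration, not the paper's parameters or its blockchain.

```python
# Sketch of content-defined chunking (a boundary is placed where a hash of the
# trailing window hits a mask) plus a hash chain over the resulting chunks.
import hashlib
import os

def chunk(data: bytes, window: int = 16, mask: int = 0x3FF):
    chunks, start = [], 0
    for i in range(window, len(data)):
        h = int.from_bytes(hashlib.sha256(data[i - window:i]).digest()[:4], "big")
        if (h & mask) == 0:                    # content-defined boundary (~1 in 1024 bytes)
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def hash_chain(chunks):
    prev, blocks = b"\x00" * 32, []
    for c in chunks:
        digest = hashlib.sha256(prev + c).digest()   # each block commits to its predecessor
        blocks.append(digest)
        prev = digest
    return blocks

data = os.urandom(20_000)
chunks = chunk(data)
print(len(chunks), "chunks; chain head:", hash_chain(chunks)[-1].hex()[:16])
```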
RevDate: 2026-01-09
Enhancing patient admission efficiency through a hybrid cloud framework for medical record sharing.
Scientific reports pii:10.1038/s41598-026-35014-6 [Epub ahead of print].
The fragmentation of patient data across multiple healthcare institutions presents a significant challenge to realizing timely and effective treatment. Although electronic medical records have replaced traditional paper records, they often remain isolated within individual hospital information systems, limiting data exchange and preventing physicians from accessing complete medical histories during patient admission. These restrictions hinder the efficiency of diagnosis and treatment, particularly in critical care settings, such as emergency departments. Cloud computing provides a promising solution by enabling controlled electronic medical record sharing, thereby improving the continuity and quality of care. This study presents a system-level, multi-layered hybrid cloud architecture framework designed to facilitate seamless and managed exchange of electronic medical records among healthcare organizations. To further enhance operational efficiency, the system integrates fingerprint authentication based on hashed identifiers for rapid patient identification and an Internet of Things bracelet for real-time monitoring of vital signs. System performance was evaluated using discrete-event simulation implemented in the OMNeT++ framework, with simulation parameters informed by real emergency department data from three hospitals in Saudi Arabia. The evaluation considers multiple workflow scenarios and incorporates repeated simulation runs to assess performance stability. The simulation results indicate consistent reductions in average patient waiting times, while treatment durations remain stable and patient throughput increases. These findings highlight the potential of the proposed framework to enhance electronic medical record management, streamline clinical workflows, and improve operational efficiency in time-critical environments.
Additional Links: PMID-41513951
Citation:
@article {pmid41513951,
year = {2026},
author = {Abughazalah, M and Alsaggaf, W and Saifuddin, S and Sarhan, S},
title = {Enhancing patient admission efficiency through a hybrid cloud framework for medical record sharing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35014-6},
pmid = {41513951},
issn = {2045-2322},
abstract = {The fragmentation of patient data across multiple healthcare institutions presents a significant challenge to realizing timely and effective treatment. Although electronic medical records have replaced traditional paper records, they often remain isolated within individual hospital information systems, limiting data exchange and preventing physicians from accessing complete medical histories during patient admission. These restrictions hinder the efficiency of diagnosis and treatment, particularly in critical care settings, such as emergency departments. Cloud computing provides a promising solution by enabling controlled electronic medical record sharing, thereby improving the continuity and quality of care. This study presents a system-level, multi-layered hybrid cloud architecture framework designed to facilitate seamless and managed exchange of electronic medical records among healthcare organizations. To further enhance operational efficiency, the system integrates fingerprint authentication based on hashed identifiers for rapid patient identification and an Internet of Things bracelet for real-time monitoring of vital signs. System performance was evaluated using discrete-event simulation implemented in the OMNeT++ framework, with simulation parameters informed by real emergency department data from three hospitals in Saudi Arabia. The evaluation considers multiple workflow scenarios and incorporates repeated simulation runs to assess performance stability. The simulation results indicate consistent reductions in average patient waiting times, while treatment durations remain stable and patient throughput increases. These findings highlight the potential of the proposed framework to enhance electronic medical record management, streamline clinical workflows, and improve operational efficiency in time-critical environments.},
}
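A tiny single-server queue simulation, sketched under invented arrival and service rates rather than the hospital-derived inputs used in the study, shows how faster record retrieval would translate into shorter average waits.

```python
# Minimal discrete-event sketch of an admission queue with a single desk,
# comparing mean waiting time for two record-retrieval service times.
# Arrival and service parameters here are invented for illustration.
import random

def simulate(mean_service, n_patients=5000, mean_interarrival=5.0, seed=7):
    random.seed(seed)
    t, free_at, waits = 0.0, 0.0, []
    for _ in range(n_patients):
        t += random.expovariate(1.0 / mean_interarrival)   # next arrival time
        start = max(t, free_at)                             # wait if the desk is busy
        waits.append(start - t)
        free_at = start + random.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)

print("mean wait, local records only :", round(simulate(mean_service=4.5), 2), "min")
print("mean wait, shared cloud records:", round(simulate(mean_service=3.5), 2), "min")
```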
RevDate: 2026-01-09
CmpDate: 2026-01-09
SNAP: Streamlined Nextflow Analysis Pipeline for Immunoprecipitation-Based Epigenomic Profiling of Circulating Chromatin.
bioRxiv : the preprint server for biology.
Epigenomic profiling of circulating chromatin is a powerful and minimally invasive approach for detecting and monitoring disease, but there are no bioinformatics pipelines tailored to the unique characteristics of cell-free chromatin. We present SNAP (Streamlined Nextflow Analysis Pipeline), a reproducible, scalable, and modular workflow specifically designed for immunoprecipitation-based methods for profiling cell-free chromatin. SNAP incorporates quality control metrics optimized for circulating chromatin, including enrichment score and fragment count thresholds, as well as direct estimation of circulating tumor DNA (ctDNA) content from fragment length distributions. It also includes SNP fingerprinting to enable sample identity verification. When applied to cfChIP-seq and cfMeDIP-seq data across multiple cancer types, SNAP's quality filters significantly improved classification performance while maintaining high data retention. Independent validation using plasma from patients with osteosarcoma confirmed the detection of tumor-associated epigenomic signatures that correlated with ctDNA levels and reflected disease biology. SNAP's modular architecture enables straightforward extension to additional cell-free immunoprecipitation-based assays, providing a robust framework to support studies of circulating chromatin broadly. SNAP is compatible with cloud and high-performance computing environments and is publicly available at https://github.com/prc992/SNAP/ .
Additional Links: PMID-41509217
Citation:
@article {pmid41509217,
year = {2025},
author = {Zhang, Z and Da Silva Cordeiro, P and Chhetri, SB and Fortunato, B and Jin, Z and El Hajj Chehade, R and Semaan, K and Gulati, G and Lee, GG and Hemauer, C and Bian, W and Sotudian, S and Zhang, Z and Osei-Hwedieh, D and Heim, TE and Painter, C and Nawfal, R and Eid, M and Vasseur, D and Canniff, J and Savignano, H and Phillips, N and Seo, JH and Weiss, KR and Freedman, ML and Baca, SC},
title = {SNAP: Streamlined Nextflow Analysis Pipeline for Immunoprecipitation-Based Epigenomic Profiling of Circulating Chromatin.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
pmid = {41509217},
issn = {2692-8205},
abstract = {Epigenomic profiling of circulating chromatin is a powerful and minimally invasive approach for detecting and monitoring disease, but there are no bioinformatics pipelines tailored to the unique characteristics of cell-free chromatin. We present SNAP (Streamlined Nextflow Analysis Pipeline), a reproducible, scalable, and modular workflow specifically designed for immunoprecipitation-based methods for profiling cell-free chromatin. SNAP incorporates quality control metrics optimized for circulating chromatin, including enrichment score and fragment count thresholds, as well as direct estimation of circulating tumor DNA (ctDNA) content from fragment length distributions. It also includes SNP fingerprinting to enable sample identity verification. When applied to cfChIP-seq and cfMeDIP-seq data across multiple cancer types, SNAP's quality filters significantly improved classification performance while maintaining high data retention. Independent validation using plasma from patients with osteosarcoma confirmed the detection of tumor-associated epigenomic signatures that correlated with ctDNA levels and reflected disease biology. SNAP's modular architecture enables straightforward extension to additional cell-free immunoprecipitation-based assays, providing a robust framework to support studies of circulating chromatin broadly. SNAP is compatible with cloud and high-performance computing environments and is publicly available at https://github.com/prc992/SNAP/ .},
}
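An illustrative QC gate in the spirit of SNAP's sample filters, applying enrichment-score and fragment-count thresholds to per-sample metrics; the metric values and thresholds below are hypothetical, not SNAP's actual defaults.

```python
# Illustrative QC gate; metric names, values, and thresholds are hypothetical.
samples = [
    {"id": "P01", "enrichment_score": 4.2, "fragments": 18_500_000},
    {"id": "P02", "enrichment_score": 1.1, "fragments": 22_000_000},
    {"id": "P03", "enrichment_score": 3.8, "fragments": 4_000_000},
]

MIN_ENRICHMENT = 2.0        # hypothetical threshold
MIN_FRAGMENTS = 10_000_000  # hypothetical threshold

passed = [s for s in samples
          if s["enrichment_score"] >= MIN_ENRICHMENT and s["fragments"] >= MIN_FRAGMENTS]
print("samples passing QC:", [s["id"] for s in passed])   # ['P01']
```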
RevDate: 2026-01-07
Credibility measurement of cloud services based on information entropy and Markov chain.
Scientific reports pii:10.1038/s41598-026-35346-3 [Epub ahead of print].
Despite the rapid advancement of cloud computing technologies, user skepticism about service credibility remains a major barrier to adoption of cloud services. At present, there is not a comprehensive and systematic understanding of the factors that affect the credibility of cloud services. In view of the uncertainty and correlation between the factors of cloud service credibility, this study analyzed the user's demand for credit and credibility. The cloud service credibility attributes were divided into six dimensions: cloud service visibility, controllability, security, reliability, cloud service provider viability and user satisfaction. A cloud service credibility measurement model combining information entropy and Markov chain was established, which could calculate the uncertainty of each factor in the attribute model. The degree of influence on the credibility of cloud service and the credibility level of cloud service provider are calculated in the model. The experimental validation demonstrates that the information entropy and Markov chain model achieves a 15% improvement in prediction accuracy compared to traditional AHP methods, with particularly notable enhancements in dynamic scenario adaptability, which helps users make informed decisions when selecting cloud services.
Additional Links: PMID-41501126
Citation:
@article {pmid41501126,
year = {2026},
author = {Ou, L and Yu, J},
title = {Credibility measurement of cloud services based on information entropy and Markov chain.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35346-3},
pmid = {41501126},
issn = {2045-2322},
support = {JAT230720//Science and Technology Project of Fujian Provincial Department of Education, China/ ; SHE2524//2025 Higher Education Research Project of Sanming University in China/ ; },
abstract = {Despite the rapid advancement of cloud computing technologies, user skepticism about service credibility remains a major barrier to adoption of cloud services. At present, there is not a comprehensive and systematic understanding of the factors that affect the credibility of cloud services. In view of the uncertainty and correlation between the factors of cloud service credibility, this study analyzed the user's demand for credit and credibility. The cloud service credibility attributes were divided into six dimensions: cloud service visibility, controllability, security, reliability, cloud service provider viability and user satisfaction. A cloud service credibility measurement model combining information entropy and Markov chain was established, which could calculate the uncertainty of each factor in the attribute model. The degree of influence on the credibility of cloud service and the credibility level of cloud service provider are calculated in the model. The experimental validation demonstrates that the information entropy and Markov chain model achieves a 15% improvement in prediction accuracy compared to traditional AHP methods, with particularly notable enhancements in dynamic scenario adaptability, which helps users make informed decisions when selecting cloud services.},
}
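A sketch of the two ingredients the model combines: entropy-based attribute weights and the stationary distribution of a Markov chain over credibility levels. The score matrix and transition matrix below are invented for illustration and do not reproduce the paper's data.

```python
# Entropy weights for credibility attributes and the long-run distribution of a
# credibility-level Markov chain; both matrices are invented for illustration.
import numpy as np

# Rows: cloud providers, columns: six credibility attributes (normalized scores).
scores = np.array([[0.8, 0.7, 0.9, 0.6, 0.7, 0.8],
                   [0.6, 0.9, 0.7, 0.8, 0.6, 0.7],
                   [0.9, 0.6, 0.8, 0.7, 0.9, 0.6]])
p = scores / scores.sum(axis=0)                      # column-wise proportions
entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(scores))
weights = (1 - entropy) / (1 - entropy).sum()        # less entropy -> more weight
print("attribute weights:", np.round(weights, 3))

# Transition matrix between credibility levels (low, medium, high).
T = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
eigvals, eigvecs = np.linalg.eig(T.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()
print("long-run credibility distribution:", np.round(stationary, 3))
```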
RevDate: 2026-01-07
CmpDate: 2026-01-07
The future of big data and artificial intelligence on dairy farms: A proposed dairy data ecosystem.
JDS communications, 6(Suppl 1):S9-S14.
The dairy sector should overcome challenges in productivity, sustainability, and data management by adopting intelligent, scalable, and privacy-preserving technological solutions. Adopting data and artificial intelligence (AI) technologies is essential to ensure efficient operations and informed decision making and to keep a competitive market advantage. This paper proposes an integrated, multimodal AI framework to support data-intensive dairy farm operations by leveraging big data principles and advancing them through AI technologies. The proposed architecture incorporates edge computing, autonomous AI agents, and federated learning to enable real-time, privacy-preserving analytics at the farm level and promote knowledge sharing and refinement through research farms and cloud collaboration. Farms collect heterogeneous data, which can be transformed into embeddings for both local inference and cloud analysis. These embeddings form the input of AI agents that support health monitoring, risk prediction, operational optimization, and decision making. Privacy is preserved by sharing only model weights or anonymized data externally. The edge layer handles time-sensitive tasks and communicates with a centralized enterprise cloud hosting global models and distributing updates. A research and development cloud linked to research farms ensures model testing and validation. The entire system is orchestrated by autonomous AI agents that manage data, choose models, and interact with stakeholders, and human oversight ensures safe decisions, as illustrated in the practical use case of mastitis management. This architecture could support data integrity, scalability, and real-time personalization, along with opening up space for partnerships between farms, research institutions, and regulatory bodies to promote secure, cross-sector innovation.
Additional Links: PMID-41497383
Citation:
@article {pmid41497383,
year = {2025},
author = {Hostens, M and Franceschini, S and van Leerdam, M and Yang, H and Pokharel, S and Liu, E and Niu, P and Zhang, H and Noor, S and Hermans, K and Salamone, M and Sharma, S},
title = {The future of big data and artificial intelligence on dairy farms: A proposed dairy data ecosystem.},
journal = {JDS communications},
volume = {6},
number = {Suppl 1},
pages = {S9-S14},
pmid = {41497383},
issn = {2666-9102},
abstract = {The dairy sector should overcome challenges in productivity, sustainability, and data management by adopting intelligent, scalable, and privacy-preserving technological solutions. Adopting data and artificial intelligence (AI) technologies is essential to ensure efficient operations and informed decision making and to keep a competitive market advantage. This paper proposes an integrated, multimodal AI framework to support data-intensive dairy farm operations by leveraging big data principles and advancing them through AI technologies. The proposed architecture incorporates edge computing, autonomous AI agents, and federated learning to enable real-time, privacy-preserving analytics at the farm level and promote knowledge sharing and refinement through research farms and cloud collaboration. Farms collect heterogeneous data, which can be transformed into embeddings for both local inference and cloud analysis. These embeddings form the input of AI agents that support health monitoring, risk prediction, operational optimization, and decision making. Privacy is preserved by sharing only model weights or anonymized data externally. The edge layer handles time-sensitive tasks and communicates with a centralized enterprise cloud hosting global models and distributing updates. A research and development cloud linked to research farms ensures model testing and validation. The entire system is orchestrated by autonomous AI agents that manage data, choose models, and interact with stakeholders, and human oversight ensures safe decisions, as illustrated in the practical use case of mastitis management. This architecture could support data integrity, scalability, and real-time personalization, along with opening up space for partnerships between farms, research institutions, and regulatory bodies to promote secure, cross-sector innovation.},
}
RevDate: 2026-01-06
Two-Tier heuristic search for ransomware-as-a-service based cyberattack défense analysis using explainable Bayesian deep learning model.
Scientific reports, 16(1):437.
Data security assurance is essential owing to the improving popularity of cloud computing and its extensive usage through several industries, particularly in light of the increasing number of cyber-security attacks. Ransomware-as-a-service (RaaS) attacks are prominent and widespread, allowing uniform individuals with minimum technology to perform ransomware processes. While RaaS methods have declined the access barriers for cyber threats, generative artificial intelligence (AI) growth might result in new possibilities for offenders. The high prevalence of RaaS-based cyberattacks poses essential challenges to cybersecurity, requiring progressive and understandable defensive mechanisms. Furthermore, deep or machine learning (ML) methods mainly provide a black box, giving no data about how it functions. Understanding the details of a classification model’s decision can be beneficial for understanding the work way to be identified. This study presents a novel Two-Tier Metaheuristic Algorithm for Cyberattack Defense Analysis using Explainable Artificial Intelligence based Bayesian Deep Learning (TTMCDA-XAIBDL) method. The main intention of the TTMCDA-XAIBDL method is to detect and mitigate ransomware cyber threats. Initially, the TTMCDA-XAIBDL method performs data preprocessing using Z-score normalization to ensure standardization and scalability of features. Next, the improved sand cat swarm optimization (ISCSO) technique is used for the feature selection. The Bayesian neural network (BNN) is employed to classify cyberattack defence. Moreover, the BNN’s hyperparameters are fine-tuned using the whale optimization algorithm (WOA) model, optimizing its performance for effective detection of ransomware threats. Finally, the XAI using SHAP is integrated to provide explainability, offering perceptions of the model’s decision-making procedure and adopting trust in the system. To demonstrate the effectiveness of the TTMCDA-XAIBDL technique, a series of simulations are conducted using a ransomware detection dataset to evaluate its classification performance. The performance validation of the TTMCDA-XAIBDL technique portrayed a superior accuracy value of 99.29% over the recent methods.
Additional Links: PMID-41490912
Citation:
@article {pmid41490912,
year = {2026},
author = {Almuflih, AS},
title = {Two-Tier heuristic search for ransomware-as-a-service based cyberattack défense analysis using explainable Bayesian deep learning model.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {437},
pmid = {41490912},
issn = {2045-2322},
abstract = {Data security assurance is essential owing to the improving popularity of cloud computing and its extensive usage through several industries, particularly in light of the increasing number of cyber-security attacks. Ransomware-as-a-service (RaaS) attacks are prominent and widespread, allowing uniform individuals with minimum technology to perform ransomware processes. While RaaS methods have declined the access barriers for cyber threats, generative artificial intelligence (AI) growth might result in new possibilities for offenders. The high prevalence of RaaS-based cyberattacks poses essential challenges to cybersecurity, requiring progressive and understandable defensive mechanisms. Furthermore, deep or machine learning (ML) methods mainly provide a black box, giving no data about how it functions. Understanding the details of a classification model’s decision can be beneficial for understanding the work way to be identified. This study presents a novel Two-Tier Metaheuristic Algorithm for Cyberattack Defense Analysis using Explainable Artificial Intelligence based Bayesian Deep Learning (TTMCDA-XAIBDL) method. The main intention of the TTMCDA-XAIBDL method is to detect and mitigate ransomware cyber threats. Initially, the TTMCDA-XAIBDL method performs data preprocessing using Z-score normalization to ensure standardization and scalability of features. Next, the improved sand cat swarm optimization (ISCSO) technique is used for the feature selection. The Bayesian neural network (BNN) is employed to classify cyberattack defence. Moreover, the BNN’s hyperparameters are fine-tuned using the whale optimization algorithm (WOA) model, optimizing its performance for effective detection of ransomware threats. Finally, the XAI using SHAP is integrated to provide explainability, offering perceptions of the model’s decision-making procedure and adopting trust in the system. To demonstrate the effectiveness of the TTMCDA-XAIBDL technique, a series of simulations are conducted using a ransomware detection dataset to evaluate its classification performance. The performance validation of the TTMCDA-XAIBDL technique portrayed a superior accuracy value of 99.29% over the recent methods.},
}
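A generic preprocessing and classification scaffold for the stages described above: Z-score normalization followed by feature selection, with plain scikit-learn components standing in for the paper's ISCSO selector and Bayesian neural network; the data are synthetic.

```python
# Generic scaffold: Z-score normalization plus feature selection, with ordinary
# scikit-learn pieces standing in for the paper's ISCSO selector and BNN classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 40))                       # synthetic behavioural features
y = (X[:, 0] - X[:, 5] + 0.5 * X[:, 9] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)
clf = make_pipeline(StandardScaler(),                 # Z-score normalization
                    SelectKBest(f_classif, k=10),     # simple feature selection
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```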
RevDate: 2026-01-04
Computing power network dynamic resource scheduling integrating time series mixing dynamic state estimation and hierarchical reinforcement learning.
Scientific reports pii:10.1038/s41598-025-32753-w [Epub ahead of print].
With the evolution of cloud computing towards a multi-cloud architecture, cross-cloud resource scheduling faces challenges such as heterogeneous environment adaptation and slow dynamic load response. How to improve resource utilization while ensuring service quality has become a core challenge in the field of cloud management. To address this need, we propose the TSL-HRL intelligent scheduling framework, which integrates time-series feature modeling and hierarchical reinforcement learning. The framework utilizes a time-series mixing module to deeply mine the periodic fluctuations and burst demand features of computing, storage, and network resources. It integrates a dynamic state estimation module with Kalman filtering to capture real-time changes in resource supply and demand. Additionally, it constructs a high-level planning - low-level response hierarchical reinforcement learning architecture: the high-level Q-learning algorithm formulates a global long-term resource allocation strategy to ensure optimal overall scheduling, while the low-level A2C algorithm adjusts the execution plan based on real-time network fluctuations and node load, enabling fast adaptation to dynamic changes, forming a macro-micro collaborative decision mechanism. In experiments on the Multi-Cloud Service Composition Dataset and Google 2019 Cluster dynamic node scenarios, TSL-HRL effectively balanced resource utilization efficiency and scheduling real-time performance with its three-level architecture design of time-series feature extraction - dynamic state perception - hierarchical strategy optimization. The study shows that TSL-HRL provides a systematic solution for resource management in multi-cloud environments. Future research will focus on lightweight extensions for edge-cloud collaborative scenarios, multi-objective energy consumption optimization frameworks, and meta-learning-driven rapid adaptation technologies, promoting the application and generalization of intelligent resource scheduling technologies in real-world complex scenarios.
Additional Links: PMID-41486178
Citation:
@article {pmid41486178,
year = {2026},
author = {Liu, H and Zhang, S and Li, L and Sun, T and Xue, W and Yao, X and Xu, Y},
title = {Computing power network dynamic resource scheduling integrating time series mixing dynamic state estimation and hierarchical reinforcement learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32753-w},
pmid = {41486178},
issn = {2045-2322},
abstract = {With the evolution of cloud computing towards a multi-cloud architecture, cross-cloud resource scheduling faces challenges such as heterogeneous environment adaptation and slow dynamic load response. How to improve resource utilization while ensuring service quality has become a core challenge in the field of cloud management. To address this need, we propose the TSL-HRL intelligent scheduling framework, which integrates time-series feature modeling and hierarchical reinforcement learning. The framework utilizes a time-series mixing module to deeply mine the periodic fluctuations and burst demand features of computing, storage, and network resources. It integrates a dynamic state estimation module with Kalman filtering to capture real-time changes in resource supply and demand. Additionally, it constructs a high-level planning - low-level response hierarchical reinforcement learning architecture: the high-level Q-learning algorithm formulates a global long-term resource allocation strategy to ensure optimal overall scheduling, while the low-level A2C algorithm adjusts the execution plan based on real-time network fluctuations and node load, enabling fast adaptation to dynamic changes, forming a macro-micro collaborative decision mechanism. In experiments on the Multi-Cloud Service Composition Dataset and Google 2019 Cluster dynamic node scenarios, TSL-HRL effectively balanced resource utilization efficiency and scheduling real-time performance with its three-level architecture design of time-series feature extraction - dynamic state perception - hierarchical strategy optimization. The study shows that TSL-HRL provides a systematic solution for resource management in multi-cloud environments. Future research will focus on lightweight extensions for edge-cloud collaborative scenarios, multi-objective energy consumption optimization frameworks, and meta-learning-driven rapid adaptation technologies, promoting the application and generalization of intelligent resource scheduling technologies in real-world complex scenarios.},
}
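The dynamic state estimation component can be illustrated with a one-dimensional Kalman filter tracking a noisy resource-demand signal; the demand trace and noise variances below are invented for the example.

```python
# One-dimensional Kalman filter sketch for tracking noisy resource demand,
# illustrating the dynamic state estimation step; noise variances are assumed.
import numpy as np

rng = np.random.default_rng(4)
true_demand = 50 + np.cumsum(rng.normal(0, 0.5, size=200))   # slowly drifting load
observed = true_demand + rng.normal(0, 5.0, size=200)        # noisy telemetry

q, r = 0.25, 25.0            # process and measurement noise variances (assumed)
x, p = observed[0], 1.0      # initial state estimate and variance
estimates = []
for z in observed:
    p += q                       # predict: variance grows with process noise
    k = p / (p + r)              # Kalman gain
    x += k * (z - x)             # update with measurement z
    p *= (1 - k)
    estimates.append(x)

err_raw = np.mean((observed - true_demand) ** 2)
err_kf = np.mean((np.array(estimates) - true_demand) ** 2)
print(f"MSE raw: {err_raw:.1f}  MSE filtered: {err_kf:.1f}")
```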
RevDate: 2026-01-04
Data security storage and transmission framework for AI computing power platforms.
Scientific reports pii:10.1038/s41598-025-31786-5 [Epub ahead of print].
In the era of rapidly expanding artificial intelligence (AI) applications, ensuring secure data storage and transmission within AI computing power platforms remains a critical challenge. This research presents a novel data security storage and transmission system, termed as secure artificial intelligence data storage and transmission (Secure AI-DST), tailored for AI computing environments. The proposed framework integrates a hybrid encryption mechanism that combines Amended Merkle Tree (AMerT) hashing with Secret Elliptic Curve Cryptography (SEllC) enhanced data confidentiality. For secure storage and decentralization, the system leverages blockchain with InterPlanetary File System (IPFS) integration, ensuring tamper-proof and scalable data handling. To classify various attack types, a novel deep learning model attention bidirectional gated recurrent unit-assisted residual network (Att-BGR) is deployed, offering accurate detection of intrusions. Simulation studies conducted in MATLAB® 2023b using both synthetic and real-time datasets show that the Secure AI-DST system reduces unauthorized access attempts by 92.7%, maintains data integrity with 99.98% accuracy under simulated cyberattacks, and achieves a packet validation success rate of 97.6% across edge-to-cloud transmissions. Furthermore, the proposed method introduces only a 4.3% computational overhead, making it highly suitable for real-time AI workloads. These outcomes confirm the effectiveness of Secure AI-DST in ensuring end-to-end data guard, resilience against cyber threats, and scalable presentation for next-generation AI computing substructures.
Additional Links: PMID-41484422
Citation:
@article {pmid41484422,
year = {2026},
author = {Chen, J and Lu, Z and Zheng, H and Ren, Z and Chen, Y and Shang, J},
title = {Data security storage and transmission framework for AI computing power platforms.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31786-5},
pmid = {41484422},
issn = {2045-2322},
abstract = {In the era of rapidly expanding artificial intelligence (AI) applications, ensuring secure data storage and transmission within AI computing power platforms remains a critical challenge. This research presents a novel data security storage and transmission system, termed as secure artificial intelligence data storage and transmission (Secure AI-DST), tailored for AI computing environments. The proposed framework integrates a hybrid encryption mechanism that combines Amended Merkle Tree (AMerT) hashing with Secret Elliptic Curve Cryptography (SEllC) enhanced data confidentiality. For secure storage and decentralization, the system leverages blockchain with InterPlanetary File System (IPFS) integration, ensuring tamper-proof and scalable data handling. To classify various attack types, a novel deep learning model attention bidirectional gated recurrent unit-assisted residual network (Att-BGR) is deployed, offering accurate detection of intrusions. Simulation studies conducted in MATLAB® 2023b using both synthetic and real-time datasets show that the Secure AI-DST system reduces unauthorized access attempts by 92.7%, maintains data integrity with 99.98% accuracy under simulated cyberattacks, and achieves a packet validation success rate of 97.6% across edge-to-cloud transmissions. Furthermore, the proposed method introduces only a 4.3% computational overhead, making it highly suitable for real-time AI workloads. These outcomes confirm the effectiveness of Secure AI-DST in ensuring end-to-end data guard, resilience against cyber threats, and scalable presentation for next-generation AI computing substructures.},
}
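A standard Merkle-root computation over data blocks, sketched with SHA-256; the paper's Amended Merkle Tree variant is not specified here, so this shows only the baseline idea of committing a block set to a single tamper-evident digest.

```python
# Plain Merkle-root computation over data blocks with SHA-256 (a standard tree,
# not the paper's "Amended Merkle Tree" variant).
import hashlib

def merkle_root(blocks: list[bytes]) -> bytes:
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"model weights shard 1", b"model weights shard 2", b"training log"]
print("Merkle root:", merkle_root(blocks).hex())
# Changing any block changes the root, making tampering detectable.
```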
RevDate: 2026-01-03
Ensemble deep learning approach for traffic video analytics in edge computing.
Scientific reports pii:10.1038/s41598-025-25628-7 [Epub ahead of print].
Video analytics is the new era of computer vision in identifying and classifying objects. Traffic surveillance videos can be analysed using computer vision to comprehend road traffic. Monitoring real-time road traffic is essential to controlling it. Computer vision helps in identifying the vehicles on the road, but present techniques perform the video analysis on either the cloud platform or the edge platform. The former introduces more processing delay when control is needed in real time, while the latter is less accurate in estimating the current road traffic. YOLO algorithms are the most notable ones for efficient real-time object detection. To make such object detections feasible in lightweight environments, its tinier version, called Tiny YOLO, is used. Edge computing is the efficient framework to have its computation done on the edge of the physical layer without the need to move data into the cloud, reducing latency. A novel hybrid model of vehicle detection and classification using Tiny YOLO and YOLOR is constructed at the edge layer. This hybrid model processes the video frames at a higher rate and produces the traffic estimate. The numerical traffic volume is sent to Ensemble Learning in Traffic Video Analytics (ELITVA), which uses F-RNN to make decisions that reduce traffic flow seamlessly. Experimental results on a drone dataset captured at road signals show increases in precision by 13.8%, accuracy by 4.8%, recall by 17.4%, F1 score by 19.9%, and frame-rate processing by 12.8% compared to other existing traffic surveillance systems, along with more efficient control of road traffic.
Additional Links: PMID-41484116
Citation:
@article {pmid41484116,
year = {2026},
author = {Sathyamoorthy, M and Rajasekar, V and Krishnamoorthi, S and Pamucar, D},
title = {Ensemble deep learning approach for traffic video analytics in edge computing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-25628-7},
pmid = {41484116},
issn = {2045-2322},
abstract = {Video analytics is the new era of computer vision for identifying and classifying objects. Traffic surveillance videos can be analysed with computer vision to understand road traffic. Monitoring road traffic in real time is essential for controlling it. Computer vision helps identify vehicles on the road, but present techniques perform the video analysis either on the cloud platform or on the edge platform: the former introduces processing delay when control is needed in real time, while the latter is less accurate in estimating the current road traffic. YOLO algorithms are among the most notable for efficient real-time object detection; to make such detection feasible in lightweight environments, a smaller variant called Tiny YOLO is used. Edge computing is an efficient framework for performing computation at the edge of the physical layer, reducing latency by avoiding data movement to the cloud. A novel hybrid model for vehicle detection and classification using Tiny YOLO and YOLOR is constructed at the edge layer. This hybrid model processes video frames at a higher rate and produces a traffic estimate. The numerical traffic volume is sent to Ensemble Learning in Traffic Video Analytics (ELITVA), which uses F-RNN to make decisions that smooth traffic flow. Experimental results on a drone dataset captured at road signals show increases in precision by 13.8%, accuracy by 4.8%, recall by 17.4%, F1 score by 19.9%, and frame-processing rate by 12.8% compared with existing traffic surveillance systems, along with effective control of road traffic.},
}
RevDate: 2026-01-04
CmpDate: 2026-01-02
Collaborative optimization of computational offloading and resource allocation based on Stackelberg game.
PloS one, 21(1):e0339955.
The exponential growth of the Internet of Things and mobile edge computing has intensified the need for substantial data processing and instantaneous response. Consequently, collaboration among the cloud, the edge, and the end devices has become a key computing paradigm. However, in this architecture task scheduling is complex, resources are heterogeneous and dynamic, and achieving low-latency, energy-efficient task processing remains a serious challenge. To address the lack of dynamic collaborative optimization in existing research, this paper introduces a collaborative optimization approach for computational offloading and resource allocation that uses a Stackelberg game to maximize the system's total utility. First, an overall utility model that integrates delay, energy consumption, and revenue is constructed for application scenarios involving multiple cloud servers, multiple edge servers, and multiple users. Subsequently, a three-tier Stackelberg game model is developed in which the cloud acts as the leader, setting resource pricing strategies; the edge acts as the sub-leader, tuning the allocation of computational resources in line with the cloud's strategy; and the mobile terminal acts as the follower, optimizing its computation offloading ratio in response to the strategies of the upper tiers. Next, through game equilibrium analysis, the existence and uniqueness of the Stackelberg equilibrium are proven. Finally, BI-PRO, a backward-induction-based resource pricing, allocation, and computation-offloading optimization algorithm, is proposed. The experimental findings indicate that the proposed Stackelberg game method optimizes the system's total revenue and maintains stable performance across various scenarios. These results confirm the superiority and robustness of the method.
Additional Links: PMID-41481652
@article {pmid41481652,
year = {2026},
author = {Li, L and Yu, Q and Wang, C and Zhao, J and Lv, J and Wang, S and Hu, C},
title = {Collaborative optimization of computational offloading and resource allocation based on Stackelberg game.},
journal = {PloS one},
volume = {21},
number = {1},
pages = {e0339955},
pmid = {41481652},
issn = {1932-6203},
mesh = {*Resource Allocation/methods ; *Game Theory ; *Cloud Computing ; Algorithms ; Cooperative Behavior ; Humans ; Models, Theoretical ; },
abstract = {The exponential growth of the Internet of Things and mobile edge computing has intensified the need for substantial data processing and instantaneous response. Consequently, collaboration among the cloud, the edge, and the end devices has become a key computing paradigm. However, in this architecture task scheduling is complex, resources are heterogeneous and dynamic, and achieving low-latency, energy-efficient task processing remains a serious challenge. To address the lack of dynamic collaborative optimization in existing research, this paper introduces a collaborative optimization approach for computational offloading and resource allocation that uses a Stackelberg game to maximize the system's total utility. First, an overall utility model that integrates delay, energy consumption, and revenue is constructed for application scenarios involving multiple cloud servers, multiple edge servers, and multiple users. Subsequently, a three-tier Stackelberg game model is developed in which the cloud acts as the leader, setting resource pricing strategies; the edge acts as the sub-leader, tuning the allocation of computational resources in line with the cloud's strategy; and the mobile terminal acts as the follower, optimizing its computation offloading ratio in response to the strategies of the upper tiers. Next, through game equilibrium analysis, the existence and uniqueness of the Stackelberg equilibrium are proven. Finally, BI-PRO, a backward-induction-based resource pricing, allocation, and computation-offloading optimization algorithm, is proposed. The experimental findings indicate that the proposed Stackelberg game method optimizes the system's total revenue and maintains stable performance across various scenarios. These results confirm the superiority and robustness of the method.},
}
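The entry above solves its three-tier game by backward induction. As a minimal, purely illustrative worked example (a toy two-level pricing game with made-up parameters, not the paper's BI-PRO model), the leader's resource price can be chosen by anticipating the follower's best-response offloading fraction:

import numpy as np

# Toy two-level Stackelberg pricing game; a, d, c are illustrative constants.
a, d, c = 10.0, 4.0, 1.0   # follower benefit, congestion cost, leader unit cost

def follower_best_response(p: float) -> float:
    """Follower maximizes U_f(x) = a*x - p*x - d*x**2 over offload fraction x in [0, 1]."""
    return float(np.clip((a - p) / (2 * d), 0.0, 1.0))

def leader_utility(p: float) -> float:
    """Leader anticipates the follower's reaction (backward induction)."""
    x = follower_best_response(p)
    return (p - c) * x

prices = np.linspace(0.0, a, 1001)
best_p = max(prices, key=leader_utility)
print(f"leader price p*  = {best_p:.3f}")
print(f"offload ratio x* = {follower_best_response(best_p):.3f}")
print(f"leader utility   = {leader_utility(best_p):.3f}")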
MeSH Terms:
show MeSH Terms
hide MeSH Terms
*Resource Allocation/methods
*Game Theory
*Cloud Computing
Algorithms
Cooperative Behavior
Humans
Models, Theoretical
RevDate: 2026-01-02
CmpDate: 2026-01-02
MorphoCloud: Democratizing Access to High-Performance Computing for Morphological Data Analysis.
ArXiv pii:2512.21408.
The digitization of biological specimens has revolutionized the field of morphology, creating large collections of 3D data, microCT scans in particular. This revolution was initially supported by the development of open-source software tools, specifically the SlicerMorph extension to the open-source image analytics platform 3D Slicer. Through SlicerMorph and 3D Slicer, biologists, morphologists, and scientists in related fields have all the tools they need to import, visualize, and analyze these large, complex datasets in a single platform that is flexible and expandable, without proprietary software that hinders scientific collaboration and sharing. Yet a significant "compute gap" remains: while data and software are now open and accessible, the high-end computing resources needed to run them are not equally accessible across institutions, and are particularly lacking at Primarily Undergraduate Institutions (PUIs) and in other educational settings. Here, we present MorphoCloud, an "IssuesOps"-based platform that leverages GitHub Actions and the JetStream2 cloud farm to provide on-demand, research-grade computing environments to researchers working with 3D morphological datasets. By delivering a GPU-accelerated full desktop experience via a web browser, MorphoCloud eliminates hardware barriers, enabling complex 3D analysis and AI-assisted segmentation. This paper describes the platform and its architecture, as well as the use cases it is designed to support.
Additional Links: PMID-41479453
@article {pmid41479453,
year = {2025},
author = {Maga, AM and Fillion-Robin, JC},
title = {MorphoCloud: Democratizing Access to High-Performance Computing for Morphological Data Analysis.},
journal = {ArXiv},
volume = {},
number = {},
pages = {},
pmid = {41479453},
issn = {2331-8422},
abstract = {The digitization of biological specimens has revolutionized the field of morphology, creating large collections of 3D data, microCT scans in particular. This revolution was initially supported by the development of open-source software tools, specifically the SlicerMorph extension to the open-source image analytics platform 3D Slicer. Through SlicerMorph and 3D Slicer, biologists, morphologists, and scientists in related fields have all the tools they need to import, visualize, and analyze these large, complex datasets in a single platform that is flexible and expandable, without proprietary software that hinders scientific collaboration and sharing. Yet a significant "compute gap" remains: while data and software are now open and accessible, the high-end computing resources needed to run them are not equally accessible across institutions, and are particularly lacking at Primarily Undergraduate Institutions (PUIs) and in other educational settings. Here, we present MorphoCloud, an "IssuesOps"-based platform that leverages GitHub Actions and the JetStream2 cloud farm to provide on-demand, research-grade computing environments to researchers working with 3D morphological datasets. By delivering a GPU-accelerated full desktop experience via a web browser, MorphoCloud eliminates hardware barriers, enabling complex 3D analysis and AI-assisted segmentation. This paper describes the platform and its architecture, as well as the use cases it is designed to support.},
}
RevDate: 2025-12-31
Scalable photonic reservoir computing for parallel machine learning tasks.
Nature communications pii:10.1038/s41467-025-67983-z [Epub ahead of print].
Neuromorphic photonics enables brain-inspired information processing with higher bandwidth and lower energy consumption than traditional electronics, addressing the growing computational demands of the Internet of Things, cloud services, and edge computing. However, even current state-of-the-art electronic and photonic platforms are incapable of delivering the scalable throughput, multitasking processing, and energy efficiency required by these applications. Here, we demonstrate a tunable photonic reservoir computing device based on a nonlinear amplifying loop mirror (NALM), leveraging a time-delayed, single-unit, all-optical architecture. By combining dense temporal encoding with wavelength-division multiplexing, the system supports concurrent multitasking across independent data channels, enabling scalable computational performance without additional hardware complexity. Experiments and theoretical validation on classification and prediction benchmarks demonstrate the device's performance, achieving a throughput of 20 tera-operations-per-second and an energy efficiency of 4.4 fJ per operation. These results highlight a promising path towards reconfigurable, compact, and high-performance photonic processors for real-time intelligent applications.
Additional Links: PMID-41476165
@article {pmid41476165,
year = {2025},
author = {Aadhi, A and Di Lauro, L and Fischer, B and Dmitriev, P and Alamgir, I and Mazoukh, C and Perron, N and Viktorov, EA and Kovalev, AV and Eshaghi, A and Vakili, S and Chemnitz, M and Roztocki, P and Little, BE and Chu, ST and Moss, DJ and Morandotti, R},
title = {Scalable photonic reservoir computing for parallel machine learning tasks.},
journal = {Nature communications},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41467-025-67983-z},
pmid = {41476165},
issn = {2041-1723},
abstract = {Neuromorphic photonics enables brain-inspired information processing with higher bandwidth and lower energy consumption than traditional electronics, addressing the growing computational demands of the Internet of Things, cloud services, and edge computing. However, even current state-of-the-art electronic and photonic platforms are incapable of delivering the scalable throughput, multitasking processing, and energy efficiency required by these applications. Here, we demonstrate a tunable photonic reservoir computing device based on a nonlinear amplifying loop mirror (NALM), leveraging a time-delayed, single-unit, all-optical architecture. By combining dense temporal encoding with wavelength-division multiplexing, the system supports concurrent multitasking across independent data channels, enabling scalable computational performance without additional hardware complexity. Experiments and theoretical validation on classification and prediction benchmarks demonstrate the device's performance, achieving a throughput of 20 tera-operations-per-second and an energy efficiency of 4.4 fJ per operation. These results highlight a promising path towards reconfigurable, compact, and high-performance photonic processors for real-time intelligent applications.},
}
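A minimal software analogue of the time-delay reservoir idea described in the entry above, assuming a random input mask, a tanh nonlinearity, and a ridge-regression readout on a toy one-step-ahead prediction task (all illustrative choices; this does not model the photonic NALM device):

import numpy as np

rng = np.random.default_rng(0)

# Time-delay ("single-unit") reservoir, heavily simplified: each input sample is
# spread over N virtual nodes by a fixed random mask, passed through a nonlinearity,
# and a linear (ridge) readout is trained on the node states.
N, alpha, beta, ridge = 50, 0.5, 1.0, 1e-6
mask = rng.uniform(-1, 1, N)

u = np.sin(0.1 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)  # input signal
target = np.roll(u, -1)                                               # one-step-ahead target

states = np.zeros((len(u), N))
x = np.zeros(N)
for t, ut in enumerate(u):
    x = np.tanh(alpha * x + beta * mask * ut)   # virtual-node update
    states[t] = x

split = 1500
X_tr, y_tr = states[:split], target[:split]
X_te, y_te = states[split:-1], target[split:-1]   # drop the wrapped last sample

# Ridge-regression readout: W = (X^T X + ridge*I)^-1 X^T y
W = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(N), X_tr.T @ y_tr)
pred = X_te @ W
print("test NMSE:", np.mean((pred - y_te) ** 2) / np.var(y_te))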
RevDate: 2026-01-02
CmpDate: 2025-12-31
MTBseq-nf: Enabling Scalable Tuberculosis Genomics "Big Data" Analysis Through a User-Friendly Nextflow Wrapper for MTBseq Pipeline.
Microorganisms, 13(12):.
The MTBseq pipeline, published in 2018, was designed to address bioinformatics challenges in tuberculosis (TB) research using whole-genome sequencing (WGS) data. It was the first publicly available tool on GitHub to perform full analysis of WGS data for Mycobacterium tuberculosis complex (MTBC) encompassing quality control through mapping, variant calling for lineage classification, drug resistance prediction, and phylogenetic inference. However, the pipeline's architecture is not optimal for analyses on high-performance computing or cloud computing environments that often involve large datasets. To overcome this limitation, we developed MTBseq-nf, a Nextflow wrapper that provides parallelization for faster execution speeds in addition to several other significant enhancements. The MTBseq-nf wrapper can run several instances of the same step in parallel, fully utilizing the available resources, unlike the linear, batched analysis of samples in the TBfull step of the MTBseq pipeline. For evaluation of scalability and reproducibility, we used 90 M. tuberculosis genomes (European Nucleotide Archive-ENA accession PRJEB7727) for the benchmarking analysis on a dedicated computational server. In our benchmarks, MTBseq-nf in its parallel mode is at least twice as fast as the standard MTBseq pipeline for cohorts exceeding 20 samples. Through integration with the best practices of nf-core, Bioconda, and Biocontainers projects MTBseq-nf ensures reproducibility and platform independence, providing a scalable and efficient solution for TB genomic surveillance.
Additional Links: PMID-41471889
@article {pmid41471889,
year = {2025},
author = {Sharma, A and Marcon, DJ and Loubser, J and Lima, KVB and van der Spuy, G and Conceição, EC},
title = {MTBseq-nf: Enabling Scalable Tuberculosis Genomics "Big Data" Analysis Through a User-Friendly Nextflow Wrapper for MTBseq Pipeline.},
journal = {Microorganisms},
volume = {13},
number = {12},
pages = {},
pmid = {41471889},
issn = {2076-2607},
support = {445784/2023-7//National Council for Scientific and Technological Development/ ; 3083687//Oracle Cloud credits/ ; PhD Scholarship//National Research Foundation/ ; },
abstract = {The MTBseq pipeline, published in 2018, was designed to address bioinformatics challenges in tuberculosis (TB) research using whole-genome sequencing (WGS) data. It was the first publicly available tool on GitHub to perform full analysis of WGS data for Mycobacterium tuberculosis complex (MTBC) encompassing quality control through mapping, variant calling for lineage classification, drug resistance prediction, and phylogenetic inference. However, the pipeline's architecture is not optimal for analyses on high-performance computing or cloud computing environments that often involve large datasets. To overcome this limitation, we developed MTBseq-nf, a Nextflow wrapper that provides parallelization for faster execution speeds in addition to several other significant enhancements. The MTBseq-nf wrapper can run several instances of the same step in parallel, fully utilizing the available resources, unlike the linear, batched analysis of samples in the TBfull step of the MTBseq pipeline. For evaluation of scalability and reproducibility, we used 90 M. tuberculosis genomes (European Nucleotide Archive-ENA accession PRJEB7727) for the benchmarking analysis on a dedicated computational server. In our benchmarks, MTBseq-nf in its parallel mode is at least twice as fast as the standard MTBseq pipeline for cohorts exceeding 20 samples. Through integration with the best practices of nf-core, Bioconda, and Biocontainers projects MTBseq-nf ensures reproducibility and platform independence, providing a scalable and efficient solution for TB genomic surveillance.},
}
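The speed-up reported in the entry above comes from scheduling per-sample steps in parallel instead of processing samples linearly in one batch. MTBseq-nf itself is written in Nextflow; the Python sketch below only illustrates the general scheduling idea with a hypothetical per-sample step and is not MTBseq-nf code.

from concurrent.futures import ProcessPoolExecutor
import time

def analyze_sample(sample_id: str) -> str:
    """Hypothetical stand-in for one per-sample pipeline step (e.g. QC + mapping)."""
    time.sleep(0.5)                      # pretend this is an expensive step
    return f"{sample_id}: done"

samples = [f"ERR{i:06d}" for i in range(8)]

if __name__ == "__main__":
    t0 = time.perf_counter()
    serial = [analyze_sample(s) for s in samples]             # linear, batched style
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:          # parallel, per-sample style
        parallel = list(pool.map(analyze_sample, samples))
    t_parallel = time.perf_counter() - t0

    print(f"serial:   {t_serial:.2f}s")
    print(f"parallel: {t_parallel:.2f}s")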
RevDate: 2026-01-03
CmpDate: 2025-12-31
Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases.
Sensors (Basel, Switzerland), 25(24):.
The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.
Additional Links: PMID-41471641
@article {pmid41471641,
year = {2025},
author = {Papanikolaou, A and Tziouvaras, A and Floros, G and Xenakis, A and Bonsignorio, F},
title = {Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471641},
issn = {1424-8220},
mesh = {*Deep Learning ; *Plant Diseases ; *Internet of Things ; Algorithms ; Neural Networks, Computer ; Crops, Agricultural ; Plant Leaves ; },
abstract = {The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.},
}
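The framework above aggregates client models with Federated Averaging (FedAvg). A minimal numpy sketch of the aggregation step, assuming flattened weight vectors and per-client sample counts (the paper's EfficientNet B0 training is not reproduced here):

import numpy as np

def fedavg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Federated Averaging: mean of client parameters weighted by local sample count."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                 # shape: (clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    global_w = rng.standard_normal(10)
    for r in range(3):
        local, sizes = [], []
        for c in range(4):
            # Stand-in for local training on private data: a small random update.
            local.append(global_w + 0.1 * rng.standard_normal(10))
            sizes.append(int(rng.integers(50, 200)))
        global_w = fedavg(local, sizes)                # server aggregates; raw data never leaves clients
        print(f"round {r}: first weights {np.round(global_w[:3], 3)}")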
RevDate: 2026-01-03
CmpDate: 2025-12-31
Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems.
Sensors (Basel, Switzerland), 25(24):.
The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.
Additional Links: PMID-41471577
@article {pmid41471577,
year = {2025},
author = {Ghani, NA and Bagustari, BA and Ahmad, M and Tolle, H and Kurnianingtyas, D},
title = {Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471577},
issn = {1424-8220},
support = {IMG005-2023//University of Malaya/ ; 01703/UN10.A0101/B/TU.01.00.1/2024//University of Brawijaya/ ; },
mesh = {*Computer Security ; *Health Information Exchange ; *Internet of Things ; Humans ; Confidentiality ; Delivery of Health Care ; Algorithms ; Cloud Computing ; },
abstract = {The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.},
}
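A minimal sketch of the ECC + HMAC-SHA256 + AES configuration named in the entry above, using the Python cryptography package; the curve, key lengths, HKDF label, and message framing are illustrative assumptions rather than the paper's exact protocol (AES-GCM already authenticates, so the extra HMAC here simply mirrors the configuration described in the abstract).

import os
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

device_priv = ec.generate_private_key(ec.SECP256R1())   # edge/IoT device key pair
server_priv = ec.generate_private_key(ec.SECP256R1())   # edge gateway key pair

def derive_keys(priv, peer_pub):
    shared = priv.exchange(ec.ECDH(), peer_pub)          # ECDH shared secret
    okm = HKDF(algorithm=hashes.SHA256(), length=64,
               salt=None, info=b"hie-demo").derive(shared)
    return okm[:32], okm[32:]                            # AES key, HMAC key

enc_key, mac_key = derive_keys(device_priv, server_priv.public_key())

record = b'{"patient":"anon-001","hr":72}'
nonce = os.urandom(12)
ciphertext = AESGCM(enc_key).encrypt(nonce, record, None)

mac = hmac.HMAC(mac_key, hashes.SHA256())                # integrity tag over nonce + ciphertext
mac.update(nonce + ciphertext)
tag = mac.finalize()

# Receiver side: derive the same keys, verify the HMAC, then decrypt.
dec_key, vmac_key = derive_keys(server_priv, device_priv.public_key())
check = hmac.HMAC(vmac_key, hashes.SHA256())
check.update(nonce + ciphertext)
check.verify(tag)                                        # raises InvalidSignature on tampering
print(AESGCM(dec_key).decrypt(nonce, ciphertext, None))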
RevDate: 2026-01-03
Two Novel Cloud-Masking Algorithms Tested in a Tropical Forest Setting Using High-Resolution NICFI-Planet Basemaps.
Sensors (Basel, Switzerland), 25(24):.
The high-resolution NICFI-Planet image collection on Google Earth Engine (GEE) promises fine-scale tropical forest monitoring, but persistent cloud cover, shadows, and haze undermine its value. Here, we present two simple, fully reproducible cloud-masking algorithms: (A) a Blue and Near-Infrared threshold and (B) a Sentinel-2-derived statistical thresholding approach that sets per-band cutoffs. Both are implemented end-to-end in GEE for operational use. The algorithms were first developed, tuned, and evaluated in the Sundarbans (Bangladesh) using strongly contrasting dry- and monsoon-season scenes. To assess their broader utility, we additionally tested them in two independent deltaic mangrove systems: the Bidyadhari Delta in West Bengal, India, and the Ayeyarwady Delta in Myanmar. Across all sites, Algorithm B consistently removes the largest share of cloud and bright-water pixels but tends to over-mask haze and low-contrast features. Algorithm A retains more usable pixels; however, its aggressiveness is region-dependent: it is more conservative in the Sundarbans but noticeably more over-inclusive in the India and Myanmar scenes. A map produced by a Random Forest classifier offers a useful reference, but that model depends on the quantity and quality of labeled samples. The novelty of the algorithms lies in their design specifically for NICFI-Planet basemaps and their ability to operate without labeled samples. Because they rely on simple, fully shareable GEE code, they can be readily applied across regions in a consistent manner. The two algorithms offer a pragmatic operational pathway: apply them as a first-pass filter, keeping in mind that their behavior may vary across environments.
Additional Links: PMID-41471553
@article {pmid41471553,
year = {2025},
author = {Islam, KMA and Abir, S and Kennedy, R},
title = {Two Novel Cloud-Masking Algorithms Tested in a Tropical Forest Setting Using High-Resolution NICFI-Planet Basemaps.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471553},
issn = {1424-8220},
support = {80NSSC23K0245//This work was supported by a grant from NASA's SERVIR program under agreement 80NSSC23K0245/ ; },
abstract = {The high-resolution NICFI-Planet image collection on Google Earth Engine (GEE) promises fine-scale tropical forest monitoring, but persistent cloud cover, shadows, and haze undermine its value. Here, we present two simple, fully reproducible cloud-masking algorithms: (A) a Blue and Near-Infrared threshold and (B) a Sentinel-2-derived statistical thresholding approach that sets per-band cutoffs. Both are implemented end-to-end in GEE for operational use. The algorithms were first developed, tuned, and evaluated in the Sundarbans (Bangladesh) using strongly contrasting dry- and monsoon-season scenes. To assess their broader utility, we additionally tested them in two independent deltaic mangrove systems: the Bidyadhari Delta in West Bengal, India, and the Ayeyarwady Delta in Myanmar. Across all sites, Algorithm B consistently removes the largest share of cloud and bright-water pixels but tends to over-mask haze and low-contrast features. Algorithm A retains more usable pixels; however, its aggressiveness is region-dependent: it is more conservative in the Sundarbans but noticeably more over-inclusive in the India and Myanmar scenes. A map produced by a Random Forest classifier offers a useful reference, but that model depends on the quantity and quality of labeled samples. The novelty of the algorithms lies in their design specifically for NICFI-Planet basemaps and their ability to operate without labeled samples. Because they rely on simple, fully shareable GEE code, they can be readily applied across regions in a consistent manner. The two algorithms offer a pragmatic operational pathway: apply them as a first-pass filter, keeping in mind that their behavior may vary across environments.},
}
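The exact GEE implementation and threshold values of Algorithm A are not given in the abstract above; the following numpy sketch only illustrates the general Blue/NIR thresholding idea, with placeholder reflectance cutoffs.

import numpy as np

def cloud_mask_blue_nir(blue: np.ndarray, nir: np.ndarray,
                        blue_thresh: float = 0.25, nir_thresh: float = 0.30) -> np.ndarray:
    """Toy analogue of a Blue/NIR threshold cloud mask: pixels bright in both bands
    are flagged as cloud. Thresholds are placeholders, not the NICFI-Planet values."""
    return (blue > blue_thresh) & (nir > nir_thresh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blue = rng.uniform(0.0, 0.2, (4, 4))
    nir = rng.uniform(0.1, 0.4, (4, 4))
    blue[1, 1], nir[1, 1] = 0.5, 0.6          # plant one bright "cloud" pixel
    mask = cloud_mask_blue_nir(blue, nir)
    print(mask.astype(int))
    print("usable pixels:", int((~mask).sum()), "of", mask.size)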
RevDate: 2026-01-03
The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication.
Sensors (Basel, Switzerland), 25(24):.
Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel and edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms. This approach significantly reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Ultimately, a content-aware deterministic message queue data distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which utilizes a "rules first" scheduling strategy and a dynamic resource allocation mechanism, guarantees low latency for key instructions even under the response loads of multiple data messages. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components, and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads, which involve over 100 heterogeneous data sources. These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward the development of real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains.
Additional Links: PMID-41471512
@article {pmid41471512,
year = {2025},
author = {Tian, J and Shang, C and Ren, T and Li, Z and Zhang, E and Yang, J and He, M},
title = {The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471512},
issn = {1424-8220},
support = {Grant No. U24A6005//National Natural Science Foundation of China/ ; },
abstract = {Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel and edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms. This approach significantly reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Ultimately, a content-aware deterministic message queue data distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which utilizes a "rules first" scheduling strategy and a dynamic resource allocation mechanism, guarantees low latency for key instructions even under the response loads of multiple data messages. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components, and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads, which involve over 100 heterogeneous data sources. These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward the development of real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains.},
}
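As a minimal illustration of the event-triggered Kalman filtering idea described in the entry above (a scalar random-walk example with fixed noise covariances; the paper's online covariance estimation and multi-threshold triggering are not reproduced):

import numpy as np

rng = np.random.default_rng(3)

# Scalar Kalman filter with an event-triggered measurement update: a measurement is
# only incorporated when the innovation exceeds a threshold, which is the general idea
# behind cutting computation and communication load. Q, R and the gate are fixed here.
Q, R, gate = 0.01, 0.25, 0.8     # process noise, measurement noise, trigger threshold

truth, x_hat, P, used = 0.0, 0.0, 1.0, 0
errors = []
for t in range(200):
    truth += rng.normal(0.0, np.sqrt(Q))          # random-walk ground truth
    meas = truth + rng.normal(0.0, np.sqrt(R))    # noisy sensor reading

    P = P + Q                                     # predict (state model: x_t = x_{t-1})
    innovation = meas - x_hat
    if abs(innovation) > gate:                    # event trigger
        K = P / (P + R)                           # Kalman gain
        x_hat += K * innovation
        P *= (1.0 - K)
        used += 1
    errors.append(abs(truth - x_hat))

print(f"updates used: {used}/200, mean abs error: {np.mean(errors):.3f}")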
RevDate: 2026-01-03
Adaptive Reinforcement Learning-Based Framework for Energy-Efficient Task Offloading in a Fog-Cloud Environment.
Sensors (Basel, Switzerland), 25(24):.
Ever-increasing computational demand introduced by the expanding scale of Internet of Things (IoT) devices poses significant concerns in terms of energy consumption in a fog-cloud environment. Due to the limited resources of IoT devices, energy-efficient task offloading becomes even more challenging for time-sensitive tasks. In this paper, we propose a reinforcement learning-based framework, namely Adaptive Q-learning-based Energy-aware Task Offloading (AQETO), that dynamically manages the energy consumption of fog nodes in a fog-cloud network. Concurrently, it considers IoT task delay tolerance and allocates computational resources while satisfying deadline requirements. The proposed approach dynamically determines energy states of each fog node using Q-learning depending on workload fluctuations. Moreover, AQETO prioritizes allocation of the most urgent tasks to minimize delays. Extensive experiments demonstrate the effectiveness of AQETO in terms of the minimization of fog node energy consumption and delay and the maximization of system efficiency.
Additional Links: PMID-41471511
@article {pmid41471511,
year = {2025},
author = {Mikavica, B and Kostic-Ljubisavljevic, A},
title = {Adaptive Reinforcement Learning-Based Framework for Energy-Efficient Task Offloading in a Fog-Cloud Environment.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471511},
issn = {1424-8220},
abstract = {Ever-increasing computational demand introduced by the expanding scale of Internet of Things (IoT) devices poses significant concerns in terms of energy consumption in a fog-cloud environment. Due to the limited resources of IoT devices, energy-efficient task offloading becomes even more challenging for time-sensitive tasks. In this paper, we propose a reinforcement learning-based framework, namely Adaptive Q-learning-based Energy-aware Task Offloading (AQETO), that dynamically manages the energy consumption of fog nodes in a fog-cloud network. Concurrently, it considers IoT task delay tolerance and allocates computational resources while satisfying deadline requirements. The proposed approach dynamically determines energy states of each fog node using Q-learning depending on workload fluctuations. Moreover, AQETO prioritizes allocation of the most urgent tasks to minimize delays. Extensive experiments demonstrate the effectiveness of AQETO in terms of the minimization of fog node energy consumption and delay and the maximization of system efficiency.},
}
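AQETO's internals are not given in the abstract above; the sketch below is only a generic tabular Q-learning loop on a toy offloading problem, illustrating the kind of update rule such a scheduler relies on (states, actions, and rewards are invented for the example).

import numpy as np

rng = np.random.default_rng(7)

# Toy problem: states are coarse fog-node load levels, actions are {run on fog, send to cloud},
# and the reward penalizes a made-up mix of energy use and delay.
n_states, n_actions = 3, 2          # load: low/medium/high; action: 0 = fog, 1 = cloud
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(state, action):
    reward = -(1.0 + state) if action == 0 else -2.0   # loaded fog node costs more; cloud adds fixed delay
    next_state = int(rng.integers(n_states))           # load fluctuates randomly
    return next_state, reward

state = 0
for _ in range(5000):
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update rule
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
print("policy per load level:", ["fog" if a == 0 else "cloud" for a in Q.argmax(axis=1)])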
RevDate: 2026-01-03
Edge Temporal Digital Twin Network for Sensor-Driven Fault Detection in Nuclear Power Systems.
Sensors (Basel, Switzerland), 25(24):.
The safe and efficient operation of nuclear power systems largely relies on sensor networks that continuously collect and transmit monitoring data. However, due to the high sensitivity of the nuclear power field and strict privacy restrictions, data among different nuclear entities are typically not directly shareable, which poses challenges to constructing a global digital twin with strong generalization capability. Moreover, most existing digital twin approaches tend to treat sensor data as static, overlooking critical temporal patterns that could enhance fault prediction performance. To address these issues, this paper proposes an Edge Temporal Digital Twin Network (ETDTN) for cloud-edge collaborative, sensor-driven fault detection in nuclear power systems. ETDTN introduces a continuous variable temporal representation to fully exploit temporal information from sensors, incorporates a global representation module to alleviate the non-IID characteristics among different subsystems, and integrates a temporal attention mechanism based on graph neural networks in the latent space to strengthen temporal feature learning. Extensive experiments on real nuclear power datasets from 17 independent units demonstrate that ETDTN achieves significantly better fault detection performance than existing methods under non-sharing data scenarios, obtaining the best results in both accuracy and F1 score. The findings indicate that ETDTN not only effectively preserves data privacy through federated parameter aggregation but also captures latent temporal patterns, providing a powerful tool for sensor-driven fault detection and predictive maintenance in nuclear power systems.
Additional Links: PMID-41471501
@article {pmid41471501,
year = {2025},
author = {Liu, S and Ye, G and Zhao, X},
title = {Edge Temporal Digital Twin Network for Sensor-Driven Fault Detection in Nuclear Power Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471501},
issn = {1424-8220},
abstract = {The safe and efficient operation of nuclear power systems largely relies on sensor networks that continuously collect and transmit monitoring data. However, due to the high sensitivity of the nuclear power field and strict privacy restrictions, data among different nuclear entities are typically not directly shareable, which poses challenges to constructing a global digital twin with strong generalization capability. Moreover, most existing digital twin approaches tend to treat sensor data as static, overlooking critical temporal patterns that could enhance fault prediction performance. To address these issues, this paper proposes an Edge Temporal Digital Twin Network (ETDTN) for cloud-edge collaborative, sensor-driven fault detection in nuclear power systems. ETDTN introduces a continuous variable temporal representation to fully exploit temporal information from sensors, incorporates a global representation module to alleviate the non-IID characteristics among different subsystems, and integrates a temporal attention mechanism based on graph neural networks in the latent space to strengthen temporal feature learning. Extensive experiments on real nuclear power datasets from 17 independent units demonstrate that ETDTN achieves significantly better fault detection performance than existing methods under non-sharing data scenarios, obtaining the best results in both accuracy and F1 score. The findings indicate that ETDTN not only effectively preserves data privacy through federated parameter aggregation but also captures latent temporal patterns, providing a powerful tool for sensor-driven fault detection and predictive maintenance in nuclear power systems.},
}
RevDate: 2026-01-03
AI-Enabled Dynamic Edge-Cloud Resource Allocation for Smart Cities and Smart Buildings.
Sensors (Basel, Switzerland), 25(24):.
The rapid expansion of IoT devices represents significant progress in areas such as smart buildings and smart cities, but the volume of data generated poses a challenge that can create real bottlenecks in data analysis and increase waiting times for end users. Cloud-based solutions may prove inefficient in some cases, as the bandwidth available for transmitting the data generated by IoT devices is limited. Integration with Edge computing mitigates this issue by bringing data processing closer to the resource that generates it. Edge computing plays a key role in improving cloud performance by offloading tasks closer to the data source and optimizing resource allocation. Achieving the desired performance requires a dynamic approach to resource management in which task execution can be prioritized, either at the Edge node or at the Cloud node, based on current load conditions. This paper proposes an approach based on the Seasonal Autoregressive Integrated Moving Average (SARIMA) model for seamlessly switching between the Cloud and Edge nodes in the event of a loss of connection between them, thereby keeping the command loop closed by transferring the task to the Edge node until the Cloud node becomes available again. In this way, a prediction that could underlie a command is not jeopardized by the lack of connection to the Cloud node. The method was evaluated using real-world resource utilization data and compared against a Simple Moving Average (SMA) baseline using standard metrics: RMSE, MAE, MAPE, and MSE. Experimental results demonstrate that SARIMA significantly improves prediction accuracy, achieving up to 64% improvement for CPU usage and 35% for RAM usage compared with SMA. These findings highlight the effectiveness of incorporating seasonality and autoregressive components in predictive models for edge computing, contributing to more efficient resource allocation and enhanced performance in smart city environments.
Additional Links: PMID-41471434
@article {pmid41471434,
year = {2025},
author = {Dumitru, MC and Caramihai, SI and Dumitrascu, A and Pietraru, RN and Moisescu, MA},
title = {AI-Enabled Dynamic Edge-Cloud Resource Allocation for Smart Cities and Smart Buildings.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471434},
issn = {1424-8220},
abstract = {The rapid expansion of IoT devices represents significant progress in areas such as smart buildings and smart cities, but the volume of data generated poses a challenge that can create real bottlenecks in data analysis and increase waiting times for end users. Cloud-based solutions may prove inefficient in some cases, as the bandwidth available for transmitting the data generated by IoT devices is limited. Integration with Edge computing mitigates this issue by bringing data processing closer to the resource that generates it. Edge computing plays a key role in improving cloud performance by offloading tasks closer to the data source and optimizing resource allocation. Achieving the desired performance requires a dynamic approach to resource management in which task execution can be prioritized, either at the Edge node or at the Cloud node, based on current load conditions. This paper proposes an approach based on the Seasonal Autoregressive Integrated Moving Average (SARIMA) model for seamlessly switching between the Cloud and Edge nodes in the event of a loss of connection between them, thereby keeping the command loop closed by transferring the task to the Edge node until the Cloud node becomes available again. In this way, a prediction that could underlie a command is not jeopardized by the lack of connection to the Cloud node. The method was evaluated using real-world resource utilization data and compared against a Simple Moving Average (SMA) baseline using standard metrics: RMSE, MAE, MAPE, and MSE. Experimental results demonstrate that SARIMA significantly improves prediction accuracy, achieving up to 64% improvement for CPU usage and 35% for RAM usage compared with SMA. These findings highlight the effectiveness of incorporating seasonality and autoregressive components in predictive models for edge computing, contributing to more efficient resource allocation and enhanced performance in smart city environments.},
}
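A minimal sketch of the SARIMA-versus-SMA comparison described in the entry above, using statsmodels on a synthetic seasonal CPU-usage series; the model orders, seasonal period, and data are illustrative, not the paper's configuration.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)

# Synthetic "CPU usage" series with a daily (24-step) seasonal pattern.
t = np.arange(24 * 14)
cpu = 50 + 15 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)
train, test = cpu[:-24], cpu[-24:]

# SARIMA forecast of the next 24 steps (orders are illustrative choices).
fit = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
sarima_pred = fit.forecast(steps=24)

# Simple Moving Average baseline: repeat the mean of the last 24-step window.
sma_pred = np.full(24, train[-24:].mean())

rmse = lambda p: float(np.sqrt(np.mean((p - test) ** 2)))
print(f"SARIMA RMSE: {rmse(sarima_pred):.2f}")
print(f"SMA RMSE:    {rmse(sma_pred):.2f}")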
RevDate: 2026-01-02
CmpDate: 2025-12-30
The Use of Industry 4.0 and 5.0 Technologies in the Transformation of Food Services: An Integrative Review.
Foods (Basel, Switzerland), 14(24):.
Industry 5.0 involves the integration of advanced technologies, collaboration between humans and intelligent machines, resilience and sustainability, all of which are essential for the advancement of the food services industry. This analysis reviews the scientific literature on Industries 4.0 and 5.0 technologies, whether experimental or implemented, focused on producing large meals in food service. The review has been conducted through a systematic search, covering aspects from consumer ordering and the cooking process to distribution while considering management, quality control, and sustainability. A total of thirty-one articles, published between 2006 and 2025, were selected, with the majority focusing on Industry 5.0 (71%) and a significant proportion on testing phases (77.4%). In the context of Food Service Perspectives, the emphasis has been placed on customer service (32.3%), highlighting the use of Artificial Intelligence (AI)-powered robots for serving customers and AI for service personalization. Sustainability has also received attention (29%), focusing on AI and machine learning (ML) applications aimed at waste reduction. In management (22.6%), AI has been applied to optimize production schedules, enhance menu engineering, and improve overall management. Big Data (BD) and ML were utilized for sales analysis, while Blockchain technology was employed for traceability. Cooking innovations (9.7%) centered on automation, particularly the use of collaborative robots (cobots). For Quality Control (6.4%), AI, along with the Internet of Things (IoT) and Cloud Computing, has been used to monitor the physical aspects of food. The study underscores the importance of strategic investments in technology to optimize processes and resources, personalize services, and ensure food quality, thereby promoting balance and sustainability.
Additional Links: PMID-41465025
@article {pmid41465025,
year = {2025},
author = {Cantarelli da Silva, R and Bacharini Lima, L and Batistela Dos Santos, E and Akutsu, RC},
title = {The Use of Industry 4.0 and 5.0 Technologies in the Transformation of Food Services: An Integrative Review.},
journal = {Foods (Basel, Switzerland)},
volume = {14},
number = {24},
pages = {},
pmid = {41465025},
issn = {2304-8158},
abstract = {Industry 5.0 involves the integration of advanced technologies, collaboration between humans and intelligent machines, resilience and sustainability, all of which are essential for the advancement of the food services industry. This analysis reviews the scientific literature on Industries 4.0 and 5.0 technologies, whether experimental or implemented, focused on producing large meals in food service. The review has been conducted through a systematic search, covering aspects from consumer ordering and the cooking process to distribution while considering management, quality control, and sustainability. A total of thirty-one articles, published between 2006 and 2025, were selected, with the majority focusing on Industry 5.0 (71%) and a significant proportion on testing phases (77.4%). In the context of Food Service Perspectives, the emphasis has been placed on customer service (32.3%), highlighting the use of Artificial Intelligence (AI)-powered robots for serving customers and AI for service personalization. Sustainability has also received attention (29%), focusing on AI and machine learning (ML) applications aimed at waste reduction. In management (22.6%), AI has been applied to optimize production schedules, enhance menu engineering, and improve overall management. Big Data (BD) and ML were utilized for sales analysis, while Blockchain technology was employed for traceability. Cooking innovations (9.7%) centered on automation, particularly the use of collaborative robots (cobots). For Quality Control (6.4%), AI, along with the Internet of Things (IoT) and Cloud Computing, has been used to monitor the physical aspects of food. The study underscores the importance of strategic investments in technology to optimize processes and resources, personalize services, and ensure food quality, thereby promoting balance and sustainability.},
}
RevDate: 2025-12-31
CmpDate: 2025-12-29
The Challenges of Data Privacy and Cybersecurity in Cloud Computing and Artificial Intelligence (AI) Applications for EQA Organizations.
EJIFCC, 36(4):599-604.
BACKGROUND: The adoption of cloud computing and Artificial Intelligence (AI) technologies offers significant advantages for External Quality Assessment (EQA) providers, including scalability, cost efficiency, and broader accessibility. However, these benefits come with substantial cybersecurity and data privacy challenges.
METHODOLOGY: We performed a systematic literature review on cybersecurity risks in healthcare cloud computing, consulted experts in bioinformatics and cybersecurity, and analyzed real-world hacking incidents targeting EQA organizations. A risk-focused framework was developed to outline key challenges and best practice mitigation strategies.
RESULTS: Ten key challenges were identified: 1. data breaches and unauthorized access, 2. compliance with regulations such as HIPAA and GDPR, 3. data sovereignty and jurisdictional issues, 4. shared infrastructure vulnerabilities, 5. insider threats, 6. data loss and availability concerns, 7. inadequate security measures by cloud providers, 8. application vulnerabilities, 9. limited visibility and control, and 10. the complexity of cloud security management.
CONCLUSION: To fully benefit from cloud computing and AI, EQA providers must implement robust security practices, ensure regulatory compliance, and continuously monitor their environments. Proactive cybersecurity strategies are essential to safeguarding sensitive laboratory data and maintaining operational continuity and accreditation.
Additional Links: PMID-41459181
@article {pmid41459181,
year = {2025},
author = {Haliassos, A and Kasvis, D and Karathanos, S},
title = {The Challenges of Data Privacy and Cybersecurity in Cloud Computing and Artificial Intelligence (AI) Applications for EQA Organizations.},
journal = {EJIFCC},
volume = {36},
number = {4},
pages = {599-604},
pmid = {41459181},
issn = {1650-3414},
abstract = {BACKGROUND: The adoption of cloud computing and Artificial Intelligence (AI) technologies offers significant advantages for External Quality Assessment (EQA) providers, including scalability, cost efficiency, and broader accessibility. However, these benefits come with substantial cybersecurity and data privacy challenges.
METHODOLOGY: We performed a systematic literature review on cybersecurity risks in healthcare cloud computing, consulted experts in bioinformatics and cybersecurity, and analyzed real-world hacking incidents targeting EQA organizations. A risk-focused framework was developed to outline key challenges and best practice mitigation strategies.
RESULTS: Ten key challenges were identified: 1. data breaches and unauthorized access, 2. compliance with regulations such as HIPAA and GDPR, 3. data sovereignty and jurisdictional issues, 4. shared infrastructure vulnerabilities, 5. insider threats, 6. data loss and availability concerns, 7. inadequate security measures by cloud providers, 8. application vulnerabilities, 9. limited visibility and control, and 10. the complexity of cloud security management.
CONCLUSION: To fully benefit from cloud computing and AI, EQA providers must implement robust security practices, ensure regulatory compliance, and continuously monitor their environments. Proactive cybersecurity strategies are essential to safeguarding sensitive laboratory data and maintaining operational continuity and accreditation.},
}
RevDate: 2025-12-27
Evaluation and optimization of resource matching for perception services in power communication networks.
Scientific reports pii:10.1038/s41598-025-31776-7 [Epub ahead of print].
In the cloud-edge-end communication architecture of the new power system, heterogeneous perception services face a fundamental and long-standing demand-supply mismatch with multi-dimensional resources (computing, storage, spectrum/bandwidth, and power) under QoS constraints such as delay, reliability, and accuracy. To uniformly measure and minimize this mismatch under resource-limited and time-varying network conditions-thereby enabling precise and efficient perception-this paper proposes an intelligent perception-service efficiency evaluation and optimization method for electric power information and communication networks based on fit entropy. First, based on the theory of information entropy, the fit entropy is defined for the degree of matching between the requirements of perception services such as delay and reliability and the provision of resources. Then, based on the fit entropy, a three-layer matching model of business domain- logical domain- physical domain is constructed, and then a many-to-many matching optimization problem between the business, service function chain and physical device is formed. Furthermore, a dynamic hypergraph neural network based on the gated attention mechanism is designed to solve this problem, where the multi-type aware service requests are dynamically mapped to cross-domain hyperedges, and the fit entropy is used as the weight of the hyperedges to quantify the global fit among the three domains. The fit entropy is optimized by adaptively adjusting the hypergraph structure and the weight of the hyperedges. The simulation results show that this method can significantly improve the quality of service of perceptive services and effectively balance the utilization of network resources and service adaptability.
Additional Links: PMID-41455713
@article {pmid41455713,
year = {2025},
author = {Wei, L and Shang, L and Zhang, M and Li, H and Zhu, X},
title = {Evaluation and optimization of resource matching for perception services in power communication networks.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31776-7},
pmid = {41455713},
issn = {2045-2322},
support = {J2024160//Research on Enhancing Support Capabilities and Optimizing Key Technologies for the Global Information Network/ ; },
abstract = {In the cloud-edge-end communication architecture of the new power system, heterogeneous perception services face a fundamental and long-standing demand-supply mismatch with multi-dimensional resources (computing, storage, spectrum/bandwidth, and power) under QoS constraints such as delay, reliability, and accuracy. To uniformly measure and minimize this mismatch under resource-limited and time-varying network conditions-thereby enabling precise and efficient perception-this paper proposes an intelligent perception-service efficiency evaluation and optimization method for electric power information and communication networks based on fit entropy. First, based on the theory of information entropy, the fit entropy is defined for the degree of matching between the requirements of perception services such as delay and reliability and the provision of resources. Then, based on the fit entropy, a three-layer matching model of business domain- logical domain- physical domain is constructed, and then a many-to-many matching optimization problem between the business, service function chain and physical device is formed. Furthermore, a dynamic hypergraph neural network based on the gated attention mechanism is designed to solve this problem, where the multi-type aware service requests are dynamically mapped to cross-domain hyperedges, and the fit entropy is used as the weight of the hyperedges to quantify the global fit among the three domains. The fit entropy is optimized by adaptively adjusting the hypergraph structure and the weight of the hyperedges. The simulation results show that this method can significantly improve the quality of service of perceptive services and effectively balance the utilization of network resources and service adaptability.},
}
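As a rough illustration of the fit-entropy idea in the entry above (not the authors' formulation), the sketch below treats each QoS dimension as a demand/supply pair, normalizes the per-dimension mismatches into a distribution, and uses its Shannon entropy scaled by the total mismatch as a single matching score. The variable names, normalization, and example numbers are assumptions made for illustration only.

    import math

    def fit_entropy(demand, supply):
        """Toy fit-entropy score: 0 when supply matches demand on every
        dimension, larger when the mismatch is big and unevenly spread.
        `demand` and `supply` map dimension name -> positive quantity."""
        # Per-dimension relative mismatch between requirement and provision.
        gaps = {k: abs(demand[k] - supply.get(k, 0.0)) / max(demand[k], 1e-9)
                for k in demand}
        total = sum(gaps.values())
        if total == 0:
            return 0.0  # perfect fit
        # Normalize gaps into a distribution and take its Shannon entropy,
        # weighted by the overall mismatch magnitude.
        probs = [g / total for g in gaps.values() if g > 0]
        entropy = -sum(p * math.log2(p) for p in probs)
        return total * entropy

    # Example: a perception service needing a 10 ms delay budget, 99.9%
    # reliability, and 4 CPU cores, matched against one candidate edge node.
    demand = {"delay_ms": 10, "reliability": 0.999, "cpu_cores": 4}
    supply = {"delay_ms": 14, "reliability": 0.995, "cpu_cores": 2}
    print(round(fit_entropy(demand, supply), 4))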
RevDate: 2025-12-26
NF-MORL: a neuro-fuzzy multi-objective reinforcement learning framework for task scheduling in fog computing environments.
Scientific reports pii:10.1038/s41598-025-32235-z [Epub ahead of print].
The proliferation of IoT devices has placed significant demand on computing systems to process data rapidly, efficiently, and in proximity to its source. Conventional cloud-based methods frequently fall short because of elevated latency and centralized constraints. Fog computing has emerged as a viable option by decentralizing computation to the edge; yet, successfully scheduling work in these dynamic and heterogeneous contexts continues to pose a significant difficulty. This research presents NF-MORL, a Neuro-Fuzzy Multi-Objective Reinforcement Learning framework that integrates neuro-fuzzy systems with multi-objective reinforcement learning to tackle task scheduling in fog networks. The concept is straightforward yet impactful: a Takagi-Sugeno fuzzy layer addresses uncertainty and offers interpretable priorities, while a multi-objective actor-critic agent learns to reconcile conflicting objectives (makespan, energy consumption, cost, and reliability) through practical experience. We assessed NF-MORL using empirical data from Google Cluster and EdgeBench. The findings were promising: relative to cutting-edge techniques, our methodology decreased makespan by up to 35%, enhanced energy efficiency by about 30%, reduced operational expenses by up to 40%, and augmented fault tolerance by as much as 37%. These enhancements persisted across various workload sizes, demonstrating that NF-MORL can effectively adjust to fluctuating conditions. Our research indicates that integrating human-like reasoning through fuzzy logic with autonomous learning via reinforcement learning can yield more effective and resilient schedulers for real fog deployments.
Additional Links: PMID-41453898
@article {pmid41453898,
year = {2025},
author = {Yu, X and Tang, L and Mi, J and Long, L and Qin, X and Li, X and Mo, Q},
title = {NF-MORL: a neuro-fuzzy multi-objective reinforcement learning framework for task scheduling in fog computing environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32235-z},
pmid = {41453898},
issn = {2045-2322},
abstract = {The proliferation of IoT devices has exerted significant demand on computing systems to process data rapidly, efficiently, and in proximity to its source. Conventional cloud-based methods frequently fail because of elevated latency and centralized constraints. Fog computing has emerged as a viable option by decentralizing computation to the edge; yet, successfully scheduling work in these dynamic and heterogeneous contexts continues to pose a significant difficulty. This research presents A Neuro-Fuzzy Multi-Objective Reinforcement Learning (NF-MORL), an innovative framework that integrates neuro-fuzzy systems with multi-objective reinforcement learning to tackle task scheduling in fog networks. The concept is straightforward yet impactful: a Takagi-Sugeno fuzzy layer addresses uncertainty and offers interpretable priorities, while a multi-objective actor-critic agent acquires the capacity to reconcile conflicting objectives makespan, energy consumption, cost, and reliability through practical experience. We assessed NF-MORL using empirical data from Google Cluster and EdgeBench. The findings were promising: relative to cutting-edge techniques, our methodology decreased makespan by up to 35%, enhanced energy efficiency by about 30%, reduced operational expenses by up to 40%, and augmented fault tolerance by as much as 37%. These enhancements persisted across various workload sizes, demonstrating that NF-MORL can effectively adjust to fluctuating situations. Our research indicates that integrating human-like reasoning through fuzzy logic with autonomous learning via reinforcement learning can yield more effective and resilient schedulers for actual fog deployments.},
}
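To make the Takagi-Sugeno layer mentioned in the entry above concrete, here is a minimal, hypothetical two-rule sketch that turns a task's deadline urgency and a node's load into a crisp scheduling priority. The membership functions, rule consequents, and weights are invented for illustration and are not the NF-MORL model.

    def tri(x, a, b, c):
        """Triangular membership function on [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def ts_priority(urgency, load):
        """Zero-order Takagi-Sugeno inference with two illustrative rules:
        R1: IF urgency is HIGH and load is LOW  THEN priority = 0.9
        R2: IF urgency is LOW  and load is HIGH THEN priority = 0.2
        Inputs are assumed to be normalized to [0, 1]."""
        urgency_high = tri(urgency, 0.4, 1.0, 1.6)
        urgency_low = tri(urgency, -0.6, 0.0, 0.6)
        load_high = tri(load, 0.4, 1.0, 1.6)
        load_low = tri(load, -0.6, 0.0, 0.6)

        w1 = min(urgency_high, load_low)   # rule firing strengths (min t-norm)
        w2 = min(urgency_low, load_high)
        if w1 + w2 == 0:
            return 0.5                      # neutral priority when no rule fires
        return (w1 * 0.9 + w2 * 0.2) / (w1 + w2)

    # Urgent task offered to a lightly loaded fog node -> high priority.
    print(round(ts_priority(urgency=0.8, load=0.3), 3))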
RevDate: 2026-01-05
CmpDate: 2025-12-26
Analyzing Large Connectome Graphs With BossDB Network Tools.
Current protocols, 5(12):e70273.
Modern connectomics enables large-scale, comparative network neuroscience across individuals, species, development, and evolution. The field now regularly produces extensive maps of neural connectivity exceeding hundreds of millions of synapses in continuous volumes. When connectomes are deposited in central archives such as BossDB with standardized metadata, researchers can pose previously intractable questions about neuronal networks. Here, we present step-by-step protocols for connectome dataset discovery and access, scalable graph construction and analysis, and reproducible comparative connectomics using BossDB, Motif Studio, DotMotif, Neuroglancer, neuPrint, and Python-based workflows. These protocols target bench neuroscientists and computational biologists and emphasize replicability, cloud-friendly options, and publication-quality visualization. © 2025 Wiley Periodicals LLC. Basic Protocol 1: Discovering connectome datasets and computing summary statistics with BossDB and Motif Studio. Basic Protocol 2: Writing queries with DotMotif. Basic Protocol 3: Querying known network motifs locally with DotMotif. Support Protocol 1: Provisioning ad hoc graph databases for large-scale graph analysis. Support Protocol 2: Querying structures and systems in the cloud with neuPrint. Basic Protocol 4: Viewing anatomical motif features with BossDB and Neuroglancer.
Additional Links: PMID-41451919
@article {pmid41451919,
year = {2025},
author = {Matelsky, JK and Martinez, H and Xenes, D and Robinette, M and Panigrahi, A and Wester, B},
title = {Analyzing Large Connectome Graphs With BossDB Network Tools.},
journal = {Current protocols},
volume = {5},
number = {12},
pages = {e70273},
doi = {10.1002/cpz1.70273},
pmid = {41451919},
issn = {2691-1299},
mesh = {*Connectome/methods ; Humans ; *Software ; Nerve Net/physiology ; Animals ; Databases, Factual ; Computational Biology/methods ; },
abstract = {Modern connectomics enables large-scale, comparative network neuroscience across individuals, species, development, and evolution. The field now regularly produces extensive maps of neural connectivity exceeding hundreds of millions of synapses in continuous volumes. When connectomes are deposited in central archives such as BossDB with standardized metadata, researchers can pose previously intractable questions about neuronal networks. Here, we present step-by-step protocols for connectome dataset discovery and access, scalable graph construction and analysis, and reproducible comparative connectomics using BossDB, Motif Studio, DotMotif, Neuroglancer, neuPrint, and Python-based workflows. These protocols target bench neuroscientists and computational biologists and emphasize replicability, cloud-friendly options, and publication-quality visualization. © 2025 Wiley Periodicals LLC. Basic Protocol 1: Discovering connectome datasets and computing summary statistics with BossDB and Motif Studio Basic Protocol 2: Writing queries with DotMotif Basic Protocol 3: Querying known network motifs locally with DotMotif Support Protocol 1: Provisioning ad hoc graph databases for large-scale graph analysis Support Protocol 2: Querying structures and systems in the cloud with neuPrint Basic Protocol 4: Viewing anatomical motif features with BossDB and Neuroglancer.},
}
MeSH Terms:
*Connectome/methods
Humans
*Software
Nerve Net/physiology
Animals
Databases, Factual
Computational Biology/methods
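The protocols above rely on BossDB, DotMotif, and neuPrint for motif queries; as a library-agnostic stand-in, the sketch below counts a simple directed three-cycle motif in a toy connectome graph with networkx. It only illustrates the kind of query those tools formalize; the graph contents and motif choice are invented.

    import networkx as nx
    from itertools import permutations

    # Toy directed "connectome": nodes are neurons, edges are synaptic connections.
    G = nx.DiGraph()
    G.add_edges_from([("n1", "n2"), ("n2", "n3"), ("n3", "n1"),
                      ("n2", "n4"), ("n4", "n3")])

    def count_three_cycles(graph):
        """Count directed 3-cycles (A->B->C->A), each unordered node triple counted once."""
        seen = set()
        for a, b, c in permutations(graph.nodes, 3):
            if graph.has_edge(a, b) and graph.has_edge(b, c) and graph.has_edge(c, a):
                seen.add(frozenset((a, b, c)))
        return len(seen)

    print(count_three_cycles(G))  # one triangle (n1 -> n2 -> n3 -> n1) in this toy graph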
RevDate: 2025-12-28
Scalable, reproducible, and cost-effective processing of large-scale medical imaging datasets.
Proceedings of SPIE--the International Society for Optical Engineering, 13411:.
Curating, processing, and combining large-scale medical imaging datasets from national studies is a non-trivial task due to the intense computation and data throughput required, variability of acquired data, and associated financial overhead. Existing platforms or tools for large-scale data curation, processing, and storage have difficulty achieving a viable cost-to-scale ratio of computation speed for research purposes, being either too slow or too expensive. Additionally, managing and processing large data consistently in a team-driven manner is a non-trivial task. We design a BIDS-compliant method for an efficient and robust data processing pipeline of large-scale diffusion-weighted and T1-weighted MRI data compatible with low-cost, high-efficiency computing systems. Our method accomplishes automated querying of data available for processing and runs processes in a consistent and reproducible manner with long-term stability, while using heterogeneous low-cost computational resources and storage systems for efficient processing and data transfer. We demonstrate how our organizational structure permits efficiency in a semi-automated data processing pipeline and show how our method is comparable in processing time to cloud-based computation while being almost 20 times more cost-effective. Our design allows for fast data throughput speeds and low latency to reduce the time for data transfer between storage servers and computation servers, achieving an average of 0.60 Gb/s compared to 0.33 Gb/s for cloud-based processing methods. The design of our workflow engine permits rapid process execution while maintaining flexibility to adapt to newly acquired data.
Additional Links: PMID-41450588
@article {pmid41450588,
year = {2025},
author = {Kim, ME and Ramadass, K and Gao, C and Kanakaraj, P and Newlin, NR and Rudravaram, G and Schilling, KG and Dewey, BE and Archer, D and Hohman, TJ and Li, Z and Bao, S and Landman, BA and Khairi, NM},
title = {Scalable, reproducible, and cost-effective processing of large-scale medical imaging datasets.},
journal = {Proceedings of SPIE--the International Society for Optical Engineering},
volume = {13411},
number = {},
pages = {},
pmid = {41450588},
issn = {0277-786X},
support = {UL1 TR000445/TR/NCATS NIH HHS/United States ; R01 EB017230/EB/NIBIB NIH HHS/United States ; U01 AG068057/AG/NIA NIH HHS/United States ; K01 EB032898/EB/NIBIB NIH HHS/United States ; U24 AG074855/AG/NIA NIH HHS/United States ; S10 OD023680/OD/NIH HHS/United States ; UL1 TR002243/TR/NCATS NIH HHS/United States ; K01 AG073584/AG/NIA NIH HHS/United States ; R01 AG059716/AG/NIA NIH HHS/United States ; S10 OD020154/OD/NIH HHS/United States ; },
abstract = {Curating, processing, and combining large-scale medical imaging datasets from national studies is a non-trivial task due to the intense computation and data throughput required, variability of acquired data, and associated financial overhead. Existing platforms or tools for large-scale data curation, processing, and storage have difficulty achieving a viable cost-to-scale ratio of computation speed for research purposes, either being too slow or too expensive. Additionally, management and consistency of processing large data in a team-driven manner is a non-trivial task. We design a BIDS-compliant method for an efficient and robust data processing pipeline of large-scale diffusion-weighted and T1-weighted MRI data compatible with low-cost, high-efficiency computing systems. Our method accomplishes automated querying of data available for processing and process running in a consistent and reproducible manner that has long-term stability, while using heterogenous low-cost computational resources and storage systems for efficient processing and data transfer. We demonstrate how our organizational structure permits efficiency in a semi-automated data processing pipeline and show how our method is comparable in processing time to cloud-based computation while being almost 20 times more cost-effective. Our design allows for fast data throughput speeds and low latency to reduce the time for data transfer between storage servers and computation servers, achieving an average of 0.60 Gb/s compared to 0.33 Gb/s for using cloud-based processing methods. The design of our workflow engine permits quick process running while maintaining flexibility to adapt to newly acquired data.},
}
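The pipeline described above automatically queries which data still await processing. A minimal, hypothetical version of that step for a BIDS-organized tree is sketched below: it lists subject/session anatomical scans and keeps the ones with no corresponding output under derivatives/. The directory layout and naming are assumptions, not the authors' code.

    from pathlib import Path

    def pending_t1w_scans(bids_root):
        """Return T1w NIfTI files that have no matching output under derivatives/."""
        bids_root = Path(bids_root)
        derivatives = bids_root / "derivatives" / "my_pipeline"
        pending = []
        for t1w in sorted(bids_root.glob("sub-*/ses-*/anat/*_T1w.nii.gz")):
            # Mirror the sub-*/ses-* structure inside the derivatives folder.
            expected = derivatives / t1w.relative_to(bids_root)
            if not expected.exists():
                pending.append(t1w)
        return pending

    if __name__ == "__main__":
        for scan in pending_t1w_scans("/data/study"):
            print("needs processing:", scan)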
RevDate: 2025-12-27
A scalable scheduling and resource management framework for cloud-native B2B applications.
Scientific reports, 15(1):44500.
In modern cloud computing environments, customers increasingly depend on on-demand resource provisioning to handle dynamic workloads. However, fluctuations in job arrival rates can result in prolonged queue times, which negatively affect overall system performance. Although existing scheduling algorithms provide efficient job management, they often fail to account for the combined impact of queue delays and the need for flexible resource provisioning, particularly in business-critical applications. In order to tackle these issues, the paper proposes a new Optimized Job Scheduling and Resource Scaling (OJSRS) algorithm designed to improve job execution efficiency and support elastic resource management in cloud environments. The OJSRS algorithm integrates two key components: Tree-based Job Scheduling (TJS) and Automated Resource Scaling and Scheduling (ARSS). The TJS component constructs a hierarchical structure that concurrently maps incoming jobs to the most suitable Virtual Machines (VMs), thereby minimizing queue delays. Meanwhile, ARSS adjusts resource allocation dynamically, increasing or decreasing capacity according to workload requirements and cloud service provider policies, enabling responsive and adaptive provisioning. Experimental results show that the OJSRS algorithm increases resource utilization by approximately 5-10% and accelerates job completion through proactive resource scaling. This approach provides a significant performance advantage for cloud-native business applications that require both efficiency and scalability.
Additional Links: PMID-41444306
@article {pmid41444306,
year = {2025},
author = {Komarasamy, D and Rajavel, R and Harimoorthy, K and Pitchai, A},
title = {A scalable scheduling and resource management framework for cloud-native B2B applications.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {44500},
pmid = {41444306},
issn = {2045-2322},
abstract = {In modern cloud computing environments, customers increasingly depend on on-demand resource provisioning to handle dynamic workloads. However, fluctuations in job arrival rates can result in prolonged queue times, which negatively affect overall system performance. Although existing scheduling algorithms provide efficient job management, they often fail to account for the combined impact of queue delays and the need for flexible resource provisioning-particularly in business-critical applications. In order to tackle these issues, the paper proposes a new Optimized Job Scheduling and Resource Scaling (OJSRS) algorithm designed to improve job execution efficiency and support elastic resource management in cloud environments. The OJSRS algorithm integrates two key components: Tree-based Job Scheduling (TJS) and Automated Resource Scaling and Scheduling (ARSS). The TJS component constructs a hierarchical structure that concurrently maps incoming jobs to the most suitable Virtual Machines (VMs), thereby minimizing queue delays. Meanwhile, ARSS adjusts resource allocation dynamically, increasing or decreasing capacity according to workload requirements and cloud service provider policies, enabling responsive and adaptive provisioning. Experimental results show that the OJSRS algorithm increases resource utilization by approximately 5-10% and accelerates job completion through proactive resource scaling. This approach provides a significant performance advantage for cloud-native business applications that require both efficiency and scalability.},
}
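As a rough sketch of the two ideas in the entry above (job-to-VM mapping plus reactive scaling), the toy scheduler below assigns each job to the VM with enough capacity and the earliest availability, and provisions an extra VM when the best achievable queue delay exceeds a threshold. It is not the OJSRS/TJS/ARSS algorithm; all thresholds and figures are invented.

    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        cpu: int
        busy_until: float = 0.0   # time at which the VM becomes free

    @dataclass
    class Job:
        name: str
        cpu: int
        runtime: float

    MAX_QUEUE_DELAY = 5.0  # invented SLA threshold (seconds)

    def schedule(jobs, vms, now=0.0):
        plan, fleet = [], list(vms)
        for job in jobs:
            candidates = [vm for vm in fleet if vm.cpu >= job.cpu]
            best = min(candidates, key=lambda vm: vm.busy_until, default=None)
            # Scale out when no VM fits or the wait would violate the delay target.
            if best is None or max(best.busy_until - now, 0.0) > MAX_QUEUE_DELAY:
                best = VM(name=f"vm-auto-{len(fleet)}", cpu=job.cpu)
                fleet.append(best)
            start = max(now, best.busy_until)
            best.busy_until = start + job.runtime
            plan.append((job.name, best.name, start))
        return plan

    jobs = [Job("j1", 2, 4.0), Job("j2", 4, 3.0), Job("j3", 2, 6.0), Job("j4", 2, 2.0)]
    vms = [VM("vm-0", 4), VM("vm-1", 2)]
    for job_name, vm_name, start in schedule(jobs, vms):
        print(f"{job_name} -> {vm_name} at t={start}")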
RevDate: 2025-12-23
CmpDate: 2025-12-23
Digital twins: A new paradigm for innovation in clinical research and medical affairs.
The Malaysian journal of pathology, 47(3):355-368.
Digital Twin (DT) technology, originally conceptualised in engineering, has recently emerged as a transformative paradigm in healthcare, promising to redefine the generation, interpretation, and application of biomedical evidence. DTs enable real-time simulation, prediction, and optimisation of clinical outcomes. The review aims to elucidate how DTs may enhance methodological efficiency, ethical standards, and strategic innovation in biomedical science, while addressing their epistemological and regulatory challenges. A DT is a dynamic, data-driven virtual replica of a biological entity or clinical process, continuously updated through real-time data to simulate, predict, and optimise outcomes. Originating in engineering, DTs are now entering healthcare as enablers of predictive, preventive, and precision medicine. Supported by Internet of Things (IoT) technologies, cloud computing, and machine learning, DTs integrate heterogeneous data (genomic, physiological, behavioural, and environmental) into adaptive models capable of mirroring and anticipating patient trajectories. In clinical research, they enable synthetic control arms and in silico trials, reducing recruitment barriers, improving statistical power, and addressing ethical issues associated with placebo use. The recent qualification of DT-based methodologies such as PROCOVA™ by the EMA and FDA confirms their growing scientific and regulatory credibility. DTs are redefining Medical Affairs, strengthening its role as a bridge between data science and clinical practice. They enable patient-level insights and personalised scientific communication, transforming Medical Affairs into a predictive, data-driven discipline that supports evidence-based and patient-centered decisions.
Additional Links: PMID-41432469
@article {pmid41432469,
year = {2025},
author = {Torresi, G and Verna, R},
title = {Digital twins: A new paradigm for innovation in clinical research and medical affairs.},
journal = {The Malaysian journal of pathology},
volume = {47},
number = {3},
pages = {355-368},
pmid = {41432469},
issn = {0126-8635},
mesh = {Humans ; *Biomedical Research/methods/trends ; Precision Medicine/methods ; Inventions ; },
abstract = {Digital Twin (DT) technology, originally conceptualised in engineering, has recently emerged as a transformative paradigm in healthcare, promising to redefine the generation, interpretation, and application of biomedical evidence. DTs enable real-time simulation, prediction, and optimisation of clinical outcomes. The review aims to elucidate how DTs may enhance methodological efficiency, ethical standards, and strategic innovation in biomedical science, while addressing their epistemological and regulatory challenges. A DT is a dynamic, data-driven virtual replica of a biological entity or clinical process, continuously updated through real-time data to simulate, predict, and optimise outcomes. Originating in engineering, DTs are now entering healthcare as enablers of predictive, preventive, and precision medicine. Supported by Internet of Things (IoT) technologies, cloud computing, and machine learning, DTs integrate heterogeneous data-genomic, physiological, behavioural, and environmental-into adaptive models capable of mirroring and anticipating patient trajectories. In clinical research, they enable synthetic control arms and in silico trials, reducing recruitment barriers, improving statistical power, and addressing ethical issues associated with placebo use. The recent qualification of DT-based methodologies such as PROCOVA™ by the EMA and FDA confirms their growing scientific and regulatory credibility. DTs are redefining Medical Affairs, strengthening its role as a bridge between data science and clinical practice. They enable patient-level insights and personalised scientific communication, transforming Medical Affairs into a predictive, data-driven discipline that supports evidence-based and patient-centered decisions.},
}
MeSH Terms:
Humans
*Biomedical Research/methods/trends
Precision Medicine/methods
Inventions
RevDate: 2025-12-20
Federated learning-based trust and energy-aware routing in Fog-Cloud computing environments for the Internet of Things.
Scientific reports pii:10.1038/s41598-025-32010-0 [Epub ahead of print].
The rapid convergence of Fog, Cloud, and Internet of Things (IoT) technologies has introduced a new era of distributed intelligence and real-time data processing. However, ensuring secure, reliable, and energy-efficient communication across heterogeneous and resource-constrained nodes remains a fundamental challenge. This paper introduces a novel framework entitled Federated Learning-Based Trust and Energy-Aware Routing (FL-TEAR), designed to enhance routing performance in hybrid Fog-Cloud-IoT environments through collaborative intelligence, adaptive trust management, and dynamic energy optimization. The FL-TEAR system replaces static trust evaluation with a federated learning paradigm, allowing IoT and fog nodes to cooperatively train a global trust-energy model without exposing raw data. Trust scores are continuously refined based on behavioral patterns, communication reliability, and residual energy, while routing paths are selected using a composite fitness function integrating trustworthiness, energy availability, latency, and link stability. The hierarchical architecture, spanning IoT, fog, and cloud layers, reduces communication overhead, supports scalability, and preserves privacy. Simulation results confirm that FL-TEAR significantly outperforms state-of-the-art baselines such as E-ODMA (Energy-Efficient On-Demand Multipath Adaptive) + AOMDV (Ad hoc On-Demand Multipath Distance Vector), TAGA (Trust-Aware Geographic Routing Algorithm), and EigenTrust, achieving approximately 23% higher trust accuracy, 23% lower energy consumption, approximately 13% greater packet delivery ratio, and 37% lower delay. These findings demonstrate that federated learning can effectively balance security, sustainability, and quality of service (QoS) in large-scale IoT ecosystems, establishing FL-TEAR as a viable pathway toward intelligent, secure, and energy-efficient next-generation networks.
Additional Links: PMID-41422288
@article {pmid41422288,
year = {2025},
author = {Wang, F and Wang, K},
title = {Federated learning-based trust and energy-aware routing in Fog-Cloud computing environments for the Internet of Things.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32010-0},
pmid = {41422288},
issn = {2045-2322},
abstract = {The rapid convergence of Fog, Cloud, and Internet of Things (IoT) technologies has introduced a new era of distributed intelligence and real-time data processing. However, ensuring secure, reliable, and energy-efficient communication across heterogeneous and resource-constrained nodes remains a fundamental challenge. This paper introduces a novel framework entitled Federated Learning-Based Trust and Energy-Aware Routing (FL-TEAR), designed to enhance routing performance in hybrid Fog-Cloud-IoT environments through collaborative intelligence, adaptive trust management, and dynamic energy optimization. The FL-TEAR system replaces static trust evaluation with a federated learning paradigm, allowing IoT and fog nodes to cooperatively train a global trust-energy model without exposing raw data. Trust scores are continuously refined based on behavioral patterns, communication reliability, and residual energy, while routing paths are selected using a composite fitness function integrating trustworthiness, energy availability, latency, and link stability. The hierarchical architecture, spanning IoT, fog, and cloud layers, reduces communication overhead, supports scalability, and preserves privacy. Simulation results confirm that FL-TEAR significantly outperforms state-of-the-art baselines such as E-ODMA (Energy-Efficient On-Demand Multipath Adaptive) + AOMDV (Ad hoc On-Demand Multipath Distance Vector), TAGA (Trust-Aware Geographic Routing Algorithm), and EigenTrust, achieving approximately 23% higher trust accuracy, 23% lower energy consumption, approximately 13% greater packet delivery ratio, and 37% lower delay. These findings demonstrate that federated learning can effectively balance security, sustainability, and quality of service (QoS) in large-scale IoT ecosystems, establishing FL-TEAR as a viable pathway toward intelligent, secure, and energy-efficient next-generation networks.},
}
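The routing step above selects paths with a composite fitness over trust, residual energy, latency, and link stability. A heavily simplified, non-federated version of such a score is sketched below; the weights, attribute scales, and candidate values are assumptions, not the FL-TEAR model.

    # Each candidate next hop is described by normalized attributes in [0, 1],
    # except latency_ms, which is normalized against a 100 ms budget below.
    CANDIDATES = {
        "fog-a": {"trust": 0.92, "energy": 0.61, "latency_ms": 18, "stability": 0.80},
        "fog-b": {"trust": 0.75, "energy": 0.90, "latency_ms": 9,  "stability": 0.95},
        "fog-c": {"trust": 0.40, "energy": 0.95, "latency_ms": 5,  "stability": 0.99},
    }
    WEIGHTS = {"trust": 0.4, "energy": 0.25, "latency": 0.2, "stability": 0.15}  # invented

    def fitness(attrs):
        latency_score = max(0.0, 1.0 - attrs["latency_ms"] / 100.0)  # lower latency is better
        return (WEIGHTS["trust"] * attrs["trust"]
                + WEIGHTS["energy"] * attrs["energy"]
                + WEIGHTS["latency"] * latency_score
                + WEIGHTS["stability"] * attrs["stability"])

    # The low-trust node is avoided even though it has the best latency.
    best = max(CANDIDATES, key=lambda node: fitness(CANDIDATES[node]))
    print(best, round(fitness(CANDIDATES[best]), 3))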
RevDate: 2025-12-20
High-resolution landfill characterization using SAR remote sensing and cloud-based processing.
Scientific reports pii:10.1038/s41598-025-32908-9 [Epub ahead of print].
Solid waste management in developing countries such as India faces persistent challenges due to weak monitoring systems and the absence of reliable reporting mechanisms for landfill statistics. To address this gap, this study develops a remote sensing methodology that integrates Python programming with the Sentinel Application Platform (SNAP) to generate Digital Elevation Models (DEMs) from Sentinel-1 synthetic aperture radar (SAR) imagery for quantifying landfill characteristics. Key parameters, including waste height and volumetric estimates, were extracted from satellite observations and processed through Google Earth Engine (GEE), enabling efficient large-scale analysis. A total of 80 landfill sites distributed across India were examined, providing the first nationwide assessment of landfill volume using a uniform and replicable framework. Field validation was conducted at two representative sites, Gondiya Landfill and Ujjain Ring Road Trenching Ground, through drone surveys and Differential Global Positioning System (DGPS) measurements. The evaluation showed deviations of 21.12% and 0.12% in height, 0.7% and 0.65% in area delineation, and 20.21% and 0.8% in volume for Gondiya and Ujjain, respectively, confirming the reliability of the proposed approach. These results demonstrate that SAR-based DEMs offer a cost-effective and scalable solution for systematic, near real-time monitoring of landfills across large regions. The framework not only supports capacity planning, environmental assessments, and policy formulation but also provides a pathway for developing countries to transition toward data-driven waste management strategies in the context of rapid urbanization and increasing waste generation.
Additional Links: PMID-41422276
@article {pmid41422276,
year = {2025},
author = {Agrawal, S and Rakkasagi, S and Goyal, MK},
title = {High-resolution landfill characterization using SAR remote sensing and cloud-based processing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32908-9},
pmid = {41422276},
issn = {2045-2322},
abstract = {Solid waste management in developing countries such as India faces persistent challenges due to weak monitoring systems and the absence of reliable reporting mechanisms for landfill statistics. To address this gap, this study develops a remote sensing methodology that integrates Python programming with the Sentinel Application Platform (SNAP) to generate Digital Elevation Models (DEMs) from Sentinel-1 synthetic aperture radar (SAR) imagery for quantifying landfill characteristics. Key parameters, including waste height and volumetric estimates, were extracted from satellite observations and processed through Google Earth Engine (GEE), enabling efficient large-scale analysis. A total of 80 landfill sites distributed across India were examined, providing the first nationwide assessment of landfill volume using a uniform and replicable framework. Field validation was conducted at two representative sites, Gondiya Landfill and Ujjain Ring Road Trenching Ground, through drone surveys and Differential Global Positioning System (DGPS) measurements. The evaluation showed deviations of 21.12% and 0.12% in height, 0.7% and 0.65% in area delineation, and 20.21% and 0.8% in volume for Gondiya and Ujjain, respectively, confirming the reliability of the proposed approach. These results demonstrate that SAR-based DEMs offer a cost-effective and scalable solution for systematic, near real-time monitoring of landfills across large regions. The framework not only supports capacity planning, environmental assessments, and policy formulation but also provides a pathway for developing countries to transition toward data-driven waste management strategies in the context of rapid urbanization and increasing waste generation.},
}
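Once a DEM of a landfill has been produced (in the study above, via Sentinel-1 processing in SNAP and Google Earth Engine), the volumetric step reduces to summing heights above a base surface over the delineated footprint. The numpy sketch below shows that arithmetic on a toy grid; the base-elevation choice, footprint rule, and pixel size are assumptions.

    import numpy as np

    def landfill_volume(dem, footprint_mask, base_elevation, pixel_area_m2):
        """Volume (m^3) of material above `base_elevation` inside the footprint."""
        heights = np.clip(dem - base_elevation, a_min=0.0, a_max=None)
        return float(np.sum(heights[footprint_mask]) * pixel_area_m2)

    # Toy 4x4 DEM (metres above sea level) with a waste mound in the centre.
    dem = np.array([[30.0, 30.2, 30.1, 30.0],
                    [30.1, 36.5, 37.0, 30.2],
                    [30.0, 35.8, 36.2, 30.1],
                    [30.0, 30.1, 30.0, 30.0]])
    mask = dem > 31.0                 # crude footprint: cells noticeably above terrain
    volume = landfill_volume(dem, mask, base_elevation=30.0, pixel_area_m2=10 * 10)
    print(f"estimated volume: {volume:.0f} m^3")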
RevDate: 2025-12-20
FDA-Approved AI Solutions in Dental Imaging: A Narrative Review of Applications, Evidence, and Outlook.
International dental journal, 76(1):109315 pii:S0020-6539(25)08598-3 [Epub ahead of print].
INTRODUCTION AND AIMS: Artificial intelligence (AI) has rapidly transformed dental imaging by enabling automated detection, diagnosis, and analysis of various dental conditions. However, a comprehensive synthesis of United States Food and Drug Administration (FDA)-cleared, clinically validated AI solutions in dental imaging remains limited. This review aims to catalog all standalone, cloud-based dental AI platforms with FDA clearance, highlighting their clinical applications, performance outcomes, and supporting evidence to guide evidence-based integration.
METHODS: A two-phase systematic search was conducted. In the first phase, searches of U.S. FDA regulatory databases (510[k], De Novo, and PMA) were performed through July 2025 to identify standalone, cloud-based dental AI imaging devices cleared or authorized for autonomous or semi-autonomous analysis. In the second phase, PubMed, Web of Science, and Google Scholar were systematically searched to retrieve studies assessing the performance or clinical utility of the identified platforms. Two independent reviewers performed data screening and extraction, with discrepancies resolved by a third reviewer.
RESULTS: Thirteen companies were identified as offering twenty-nine FDA-cleared AI products for dental imaging. These solutions addressed diverse clinical tasks, including caries detection, periodontal disease assessment, cephalometric analysis, multi-pathology diagnostics, automated dental charting, and three-dimensional segmentation. Performance outcomes reported by the FDA demonstrated high accuracy, sensitivity, and specificity across most platforms, particularly for caries detection, periodontal disease measurement, and cephalometric analysis. Among these, Relu Creator and WebCeph were supported by the highest number of peer-reviewed publications, whereas several newer platforms lacked independent clinical validation.
CONCLUSION: Standalone, FDA-cleared AI platforms represent a paradigm shift in dental imaging, providing clinically validated tools for diagnosis, treatment planning, and patient monitoring. By systematically cataloging these solutions, this review delivers an evidence-based reference for clinicians and researchers, supporting informed adoption and identifying areas for future investigation.
Additional Links: PMID-41421004
@article {pmid41421004,
year = {2025},
author = {Shujaat, S and Aljadaan, H and Alrashid, H and Aboalela, AA and Riaz, M},
title = {FDA-Approved AI Solutions in Dental Imaging: A Narrative Review of Applications, Evidence, and Outlook.},
journal = {International dental journal},
volume = {76},
number = {1},
pages = {109315},
doi = {10.1016/j.identj.2025.109315},
pmid = {41421004},
issn = {1875-595X},
abstract = {INTRODUCTION AND AIMS: Artificial intelligence (AI) has rapidly transformed dental imaging by enabling automated detection, diagnosis, and analysis of various dental conditions. However, a comprehensive synthesis of United States Food and Drug Administration (FDA)-cleared, clinically validated AI solutions in dental imaging remains limited. This review aims to catalog all standalone, cloud-based dental AI platforms with FDA clearance, highlighting their clinical applications, performance outcomes, and supporting evidence to guide evidence-based integration.
METHODS: A two-phase systematic search was conducted. In the first phase, searches of U.S. FDA regulatory databases (510[k], De Novo, and PMA) were performed through July 2025 to identify standalone, cloud-based dental AI imaging devices cleared or authorized for autonomous or semi-autonomous analysis. In the second phase, PubMed, Web of Science, and Google Scholar were systematically searched to retrieve studies assessing the performance or clinical utility of the identified platforms. Two independent reviewers performed data screening and extraction, with discrepancies resolved by a third reviewer.
RESULTS: Thirteen companies were identified as offering twenty-nine FDA-cleared AI products for dental imaging. These solutions addressed diverse clinical tasks, including caries detection, periodontal disease assessment, cephalometric analysis, multi-pathology diagnostics, automated dental charting, and three-dimensional segmentation. Performance outcomes reported by the FDA demonstrated high accuracy, sensitivity, and specificity across most platforms, particularly for caries detection, periodontal disease measurement, and cephalometric analysis. Among these, Relu Creator and WebCeph were supported by the highest number of peer-reviewed publications, whereas several newer platforms lacked independent clinical validation.
CONCLUSION: Standalone, FDA-cleared AI platforms represent a paradigm shift in dental imaging, providing clinically validated tools for diagnosis, treatment planning, and patient monitoring. By systematically cataloging these solutions, this review delivers an evidence-based reference for clinicians and researchers, supporting informed adoption and identifying areas for future investigation.},
}
RevDate: 2025-12-28
CmpDate: 2025-12-26
Democratising high performance computing for bioinformatics through serverless cloud computing: A case study on CRISPR-Cas9 guide RNA design with Crackling Cloud.
PLoS computational biology, 21(12):e1013819.
Organisations are challenged when meeting the computational requirements of large-scale bioinformatics analyses using their own resources. Cloud computing has democratised large-scale resources, and to reduce the barriers of working with large-scale compute, leading cloud vendors offer serverless computing, a low-maintenance and low-cost model that provides ample resources for highly scalable software applications. While serverless computing has broad use, its adoption in bioinformatics remains poor. Here, we demonstrate the most extensive use of high-performance serverless computing for bioinformatics by applying the available technologies to CRISPR-Cas9 guide RNA (gRNA) design. Our adaptation of the established gRNA design tool Crackling implements a novel, cloud-native, serverless high-performance computing environment using technologies made available by Amazon Web Services (AWS). The architecture, which is compatible with technologies from all leading cloud vendors, and the AWS implementation contribute to reducing the barrier to large computational capacity in bioinformatics and for CRISPR-Cas9 gRNA design. Crackling Cloud can be deployed to any AWS account, and is freely available on GitHub under the BSD 3-clause license: https://github.com/bmds-lab/Crackling-AWS.
Additional Links: PMID-41417859
@article {pmid41417859,
year = {2025},
author = {Bradford, J and Joy, D and Winsen, M and Meurant, N and Wilkins, M and Wilson, LOW and Bauer, DC and Perrin, D},
title = {Democratising high performance computing for bioinformatics through serverless cloud computing: A case study on CRISPR-Cas9 guide RNA design with Crackling Cloud.},
journal = {PLoS computational biology},
volume = {21},
number = {12},
pages = {e1013819},
pmid = {41417859},
issn = {1553-7358},
mesh = {*Cloud Computing ; *Computational Biology/methods ; *RNA, Guide, CRISPR-Cas Systems/genetics ; *CRISPR-Cas Systems/genetics ; Software ; },
abstract = {Organisations are challenged when meeting the computational requirements of large-scale bioinformatics analyses using their own resources. Cloud computing has democratised large-scale resources, and to reduce the barriers of working with large-scale compute, leading cloud vendors offer serverless computing, a low-maintenance and low-cost model that provides ample resources for highly scalable software applications. While serverless computing has broad use, its adoption in bioinformatics remains poor. Here, we demonstrate the most extensive use of high-performance serverless computing for bioinformatics by applying the available technologies to CRISPR-Cas9 guide RNA (gRNA) design. Our adaptation of the established gRNA design tool, named Crackling, implements a novel, cloud-native and serverless-based, high-performance computing environment using technologies made available by Amazon Web Services (AWS). The architecture, compatible with technologies from all leading cloud vendors, and the AWS implementation, contributes to an effort of reducing the barrier to large computational capacity in bioinformatics and for CRISPR-Cas9 gRNA design. Crackling Cloud can be deployed to any AWS account, and is freely available on GitHub under the BSD 3-clause license: https://github.com/bmds-lab/Crackling-AWS.},
}
MeSH Terms:
*Cloud Computing
*Computational Biology/methods
*RNA, Guide, CRISPR-Cas Systems/genetics
*CRISPR-Cas Systems/genetics
Software
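To illustrate the kind of per-sequence work that a serverless gRNA-design pipeline like the one above can farm out to cloud functions (without reproducing Crackling's actual scoring), here is a minimal AWS Lambda-style handler that scans an input sequence for 20-nt SpCas9 candidates followed by an NGG PAM. The event shape and the naive filters are assumptions; real tools apply trained efficiency and off-target scores.

    import re

    PAM_SITE = re.compile(r"(?=([ACGT]{20})[ACGT]GG)")  # 20-nt protospacer + NGG PAM

    def handler(event, context=None):
        """Lambda-style entry point: event = {"sequence": "<DNA string>"}."""
        seq = event["sequence"].upper()
        candidates = []
        for match in PAM_SITE.finditer(seq):
            guide = match.group(1)
            gc = (guide.count("G") + guide.count("C")) / len(guide)
            # Naive placeholder filters in place of real efficiency/off-target scoring.
            if 0.3 <= gc <= 0.8 and "TTTT" not in guide:
                candidates.append({"position": match.start(), "guide": guide, "gc": round(gc, 2)})
        return {"count": len(candidates), "candidates": candidates}

    if __name__ == "__main__":
        demo = {"sequence": "ATGCGTACGTTAGCATCGATCGGAGCTAGCTAGGCTAGCTAAGG"}
        print(handler(demo))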
RevDate: 2025-12-30
CmpDate: 2025-12-29
ImmunoNX: a robust bioinformatics workflow to support personalized neoantigen vaccine trials.
ArXiv.
Personalized neoantigen vaccines represent a promising immunotherapy approach that harnesses tumor-specific antigens to stimulate anti-tumor immune responses. However, the design of these vaccines requires sophisticated computational workflows to predict and prioritize neoantigen candidates from patient sequencing data, coupled with rigorous review to ensure candidate quality. While numerous computational tools exist for neoantigen prediction, to our knowledge, there are no established protocols detailing the complete process from raw sequencing data through systematic candidate selection. Here, we present ImmunoNX (Immunogenomics Neoantigen eXplorer), an end-to-end protocol for neoantigen prediction and vaccine design that has supported over 185 patients across 11 clinical trials. The workflow integrates tumor DNA/RNA and matched normal DNA sequencing data through a computational pipeline built with the Workflow Description Language (WDL) and executed via Cromwell on Google Cloud Platform. ImmunoNX employs consensus-based variant calling, in-silico HLA typing, and pVACtools for neoantigen prediction. Additionally, we describe a two-stage immunogenomics review process with prioritization of neoantigen candidates, enabled by pVACview, followed by manual assessment of variants using the Integrative Genomics Viewer (IGV). This workflow enables vaccine design in under three months. We demonstrate the protocol using the HCC1395 breast cancer cell line dataset, identifying 78 high-confidence neoantigen candidates from 322 initial predictions. Although demonstrated here for vaccine development, this workflow can be adapted for diverse neoantigen therapies and experiments. Therefore, this protocol provides the research community with a reproducible, version-controlled framework for designing personalized neoantigen vaccines, supported by detailed documentation, example datasets, and open-source code.
Additional Links: PMID-41415611
@article {pmid41415611,
year = {2025},
author = {Singhal, K and Schmidt, E and Kiwala, S and Goedegebuure, SP and Miller, CA and Xia, H and Cotto, KC and Li, J and Yao, J and Hendrickson, L and Richters, MM and Hoang, MH and Khanfar, M and Risch, I and O'Laughlin, S and Myers, N and Vickery, T and Davies, SR and Du, F and Mooney, TB and Coffman, A and Chang, GS and Hundal, J and Garza, JE and McLellan, MD and McMichael, JF and Maruska, J and Inabinett, WB and Hoos, WA and Karchin, R and Johanns, TM and Dunn, GP and Pachynski, RK and Fehniger, TA and Ward, JP and Foltz, JA and Gillanders, WE and Griffith, OL and Griffith, M},
title = {ImmunoNX: a robust bioinformatics workflow to support personalized neoantigen vaccine trials.},
journal = {ArXiv},
volume = {},
number = {},
pages = {},
pmid = {41415611},
issn = {2331-8422},
support = {T32 CA009621/CA/NCI NIH HHS/United States ; U01 CA248235/CA/NCI NIH HHS/United States ; U01 CA209936/CA/NCI NIH HHS/United States ; U01 CA231844/CA/NCI NIH HHS/United States ; T32 GM139774/GM/NIGMS NIH HHS/United States ; R00 HG007940/HG/NHGRI NIH HHS/United States ; P30 CA091842/CA/NCI NIH HHS/United States ; U24 CA237719/CA/NCI NIH HHS/United States ; P50 CA196510/CA/NCI NIH HHS/United States ; R01 CA240983/CA/NCI NIH HHS/United States ; UL1 TR002345/TR/NCATS NIH HHS/United States ; P50 CA272213/CA/NCI NIH HHS/United States ; K22 CA282364/CA/NCI NIH HHS/United States ; },
abstract = {Personalized neoantigen vaccines represent a promising immunotherapy approach that harnesses tumor-specific antigens to stimulate anti-tumor immune responses. However, the design of these vaccines requires sophisticated computational workflows to predict and prioritize neoantigen candidates from patient sequencing data, coupled with rigorous review to ensure candidate quality. While numerous computational tools exist for neoantigen prediction, to our knowledge, there are no established protocols detailing the complete process from raw sequencing data through systematic candidate selection. Here, we present ImmunoNX (Immunogenomics Neoantigen eXplorer), an end-to-end protocol for neoantigen prediction and vaccine design that has supported over 185 patients across 11 clinical trials. The workflow integrates tumor DNA/RNA and matched normal DNA sequencing data through a computational pipeline built with Workflow Definition Language (WDL) and executed via Cromwell on Google Cloud Platform. ImmunoNX employs consensus-based variant calling, in-silico HLA typing, and pVACtools for neoantigen prediction. Additionally, we describe a two-stage immunogenomics review process with prioritization of neoantigen candidates, enabled by pVACview, followed by manual assessment of variants using the Integrative Genomics Viewer (IGV). This workflow enables vaccine design in under three months. We demonstrate the protocol using the HCC1395 breast cancer cell line dataset, identifying 78 high-confidence neoantigen candidates from 322 initial predictions. Although demonstrated here for vaccine development, this workflow can be adapted for diverse neoantigen therapies and experiments. Therefore, this protocol provides the research community with a reproducible, version-controlled framework for designing personalized neoantigen vaccines, supported by detailed documentation, example datasets, and open-source code.},
}
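The two-stage review above prioritizes candidates before manual IGV inspection. As a purely illustrative, deliberately simplified stand-in for that prioritization step, the sketch below filters a table of predicted peptides by binding affinity, expression, and variant allele fraction; the record fields and cut-off values are assumptions, not the ImmunoNX/pVACview criteria.

    # Hypothetical per-peptide records; fields and thresholds are illustrative only.
    PREDICTIONS = [
        {"peptide": "KLDETFYSV", "ic50_nm": 38.0,  "tpm": 12.4, "dna_vaf": 0.31},
        {"peptide": "AMNQRTLLF", "ic50_nm": 820.0, "tpm": 55.0, "dna_vaf": 0.44},
        {"peptide": "SLYNTVATL", "ic50_nm": 112.0, "tpm": 0.3,  "dna_vaf": 0.27},
        {"peptide": "GILGFVFTL", "ic50_nm": 25.0,  "tpm": 8.9,  "dna_vaf": 0.08},
    ]

    def shortlist(preds, max_ic50=500.0, min_tpm=1.0, min_vaf=0.10):
        """Keep peptides predicted to bind (IC50), expressed (TPM), and clonal enough (VAF)."""
        keep = [p for p in preds
                if p["ic50_nm"] <= max_ic50 and p["tpm"] >= min_tpm and p["dna_vaf"] >= min_vaf]
        # Rank the strongest predicted binders first.
        return sorted(keep, key=lambda p: p["ic50_nm"])

    for cand in shortlist(PREDICTIONS):
        print(cand["peptide"], cand["ic50_nm"])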
RevDate: 2025-12-19
[Exploring the Spatial and Temporal Evolution of Fractional Vegetation Cover and Driving Factors in Zhahe Mining Area from 1987 to 2023].
Huan jing ke xue= Huanjing kexue, 46(12):7841-7852.
Coal mining significantly affects vegetation evolution, but the patterns of vegetation change and their driving factors in underground (shaft) mining areas remain underexplored. The Zhahe mining area in Huaibei City, China, was used as the study area to extract fractional vegetation cover (FVC) between 1987 and 2023 and to explore its underlying drivers. Relying on the Google Earth Engine cloud platform, a total of 734 scenes of Landsat-5, Landsat-7, and Landsat-8 satellite imagery acquired from 1987 to 2023 were used. Based on the pixel dichotomy model, the spatial and temporal changes in FVC in the Zhahe mining area over the 37-year period were quantitatively analyzed using trend analysis and stability analysis, and the impacts of nine driving factors covering climate, topography, and human activities were analyzed using the geographical detector (Geodetector). The results showed that: ① FVC in the Zhahe mining area has been decreasing over the past 37 years, with an average rate of change of 0.02% per year. The average FVC level in the area was high, with medium coverage and above accounting for 81.8% of the area, and a spatial distribution characterized as "high in the northeast and low in the southwest." ② The FVC of each coal mine in the Zhahe mining area was dominated by high-stability areas, accounting for more than 30% of the area, and the land use type was dominated by cultivated land and construction land, while the areas of lower stability were mainly concentrated in Shuoxi Lake, the subsidence zone of Zhong Lake, and the areas close to town roads in the study area. ③ Among the nine driving factors, the order of influence on FVC was as follows: land use type (0.41) > precipitation (0.164) > nighttime light (0.12) > temperature (0.095) > GDP (0.079) > population density (0.048) > elevation (0.043) > slope (0.040) > slope (0.021). The interaction between land use type and other factors had the strongest effect on the spatial variability of FVC.
Additional Links: PMID-41414004
@article {pmid41414004,
year = {2025},
author = {Gu, XR and Yang, KM and Zhang, C and Jiang, KG and Chen, XY and Peng, LS},
title = {[Exploring the Spatial and Temporal Evolution of Fractional Vegetation Cover and Driving Factors in Zhahe Mining Area from 1987 to 2023].},
journal = {Huan jing ke xue= Huanjing kexue},
volume = {46},
number = {12},
pages = {7841-7852},
doi = {10.13227/j.hjkx.202410215},
pmid = {41414004},
issn = {0250-3301},
abstract = {Coal mining significantly affects vegetation evolution, but the patterns of vegetation change and the driving factors behind them in shaft mining mines are less explored. The Zhahe mining area in Huaibei City, China, was used as the study area to extract the vegetation cover (FVC) between 1987 and 2023 and explore the deep-seated drivers. Relying on the Google Earth Engine cloud platform, a total of 734 scenes of Landsat-5, Landsat-7, and Landsat-8 satellite image data were acquired from 1987 to 2023. Based on the image element dichotomous model, the spatial and temporal changes in FVC in the Zhahe mining area during the 37 years period were quantitatively analyzed by using trend analysis and stability analysis, and the impacts of nine driving factors on FVC in three aspects, namely, climate, topography, and human activities, were analyzed by using the geodetic detector. The results showed that: ① FVC in the Zhahe mining area has been decreasing over the past 37 years, with an average rate of change of 0.02%·a[-1]. The average FVC level in the area was high, and the area with medium coverage and above accounted for 81.8%, with a spatial distribution characterized by "high in the northeast and low in the southwest." ② The FVC of each coal mine in the Zhahe mining area was dominated by high stability areas, accounting for more than 30% of the area, and the land use type was dominated by cultivated land and construction land, while the areas of lower stability were mainly concentrated in the Shuoxi Lake, the collapse zone of Zhong Lake, and the areas close to the town roads in the study area. ③ In the exploration of the influence of the nine driving factors on FVC, the order of the influence of each factor on FVC was as follows: land use type (0.41) > precipitation (0.164) > nighttime light (0.12) > temperature (0.095) > GDP (0.079) > population density (0.048) > elevation (0.043) > slope (0.040) > slope (0.021). The interaction between land use type and other factors had the strongest effect on the spatial variability of FVC.},
}
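The pixel dichotomy model used above estimates FVC from NDVI by linearly scaling each pixel between a bare-soil NDVI and a full-vegetation NDVI, commonly taken from low and high percentiles of the scene. A numpy sketch of that calculation follows; the percentile endpoints are a common convention, not necessarily the values used in the study.

    import numpy as np

    def fvc_dichotomy(ndvi, low_pct=5, high_pct=95):
        """Fractional vegetation cover via the pixel dichotomy model:
        FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1]."""
        ndvi_soil = np.nanpercentile(ndvi, low_pct)
        ndvi_veg = np.nanpercentile(ndvi, high_pct)
        fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
        return np.clip(fvc, 0.0, 1.0)

    # Toy NDVI tile standing in for one Landsat scene.
    ndvi = np.array([[0.12, 0.25, 0.48],
                     [0.61, 0.70, 0.33],
                     [0.05, 0.82, 0.57]])
    print(np.round(fvc_dichotomy(ndvi), 2))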
RevDate: 2025-12-21
CmpDate: 2025-12-18
Autonomous vehicles with augmented reality internet of things and edge intelligence system for industry 5.0 based on 6G.
PloS one, 20(12):e0339022.
In an era of rapidly evolving technology, traditional cloud computing struggles to meet the demands of resource-intensive smart devices. This necessitates a shift towards Edge Computing (EC), which brings computation and data storage closer to the network's edge, enhancing efficiency and reducing latency. This is particularly crucial for the Internet of Things (IoT), where support for mobility, location awareness, and real-time processing is paramount. However, the scalability of EC applications is significantly influenced by network parameters and the capabilities of the computing system. This paper proposes a novel system architecture for Industry 5.0 that leverages the synergy between 6G networks, autonomous vehicles, Augmented Reality (AR), IoT, and edge intelligence to revolutionize transportation systems. Our approach integrates AR for enhanced user interfaces, utilizes IoT for data acquisition and control, and employs edge computing for real-time decision-making. Our experimental results demonstrate a strong correlation between processing speed and network bandwidth, and increasing either parameter individually enhances overall system performance. The two-tier architecture, combined with the Entity Objects (EO) model, demonstrates superior scalability compared to traditional approaches. By distributing processing tasks and leveraging the resources of other edge servers, the system can handle increasing numbers of autonomous vehicles (AVs) and data loads without compromising performance.
Additional Links: PMID-41411273
@article {pmid41411273,
year = {2025},
author = {Ahmed, AA and Kadhim, AK and Hasan, MK and Al-Ghuribi, SM and Hamed Abd, D and Aliesawi, SA and Hezam Murshed, BA and Topham, L and Khan, W and Hussain, AJ},
title = {Autonomous vehicles with augmented reality internet of things and edge intelligence system for industry 5.0 based on 6G.},
journal = {PloS one},
volume = {20},
number = {12},
pages = {e0339022},
pmid = {41411273},
issn = {1932-6203},
mesh = {*Internet of Things ; *Augmented Reality ; *Industry ; Algorithms ; Cloud Computing ; *Artificial Intelligence ; Humans ; },
abstract = {In an era of rapidly evolving technology, traditional cloud computing struggles to meet the demands of resource-intensive smart devices. This necessitates a shift towards Edge Computing (EC), which brings computation and data storage closer to the network's edge, enhancing efficiency and reducing latency. This is particularly crucial for the Internet of Things (IoT), where supporting mobility, location awareness, and real-time processing are paramount. However, the scalability of EC applications is significantly influenced by network parameters and the capabilities of the computing system. This paper proposes a novel system architecture for Industry 5.0 that leverages the synergy between 6G networks, autonomous vehicles, Augmented Reality (AR), IoT, and edge intelligence to revolutionize transportation systems. Our approach integrates AR for enhanced user interfaces, utilizes IoT for data acquisition and control, and employs edge computing for real-time decision-making. Our experimental results demonstrate a strong correlation between processing speed and network bandwidth. While increasing either parameter individually enhances overall system performance. The two-tier architecture, combined with the Entity Objects (EO) model, demonstrates superior scalability compared to traditional approaches. By distributing processing tasks and leveraging the resources of other edge servers, the system can handle increasing numbers of AVs and data loads without compromising performance.},
}
MeSH Terms:
*Internet of Things
*Augmented Reality
*Industry
Algorithms
Cloud Computing
*Artificial Intelligence
Humans
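The reported coupling between processing speed and bandwidth in the entry above can be made concrete with a back-of-the-envelope latency model: total response time is roughly transfer time (payload divided by bandwidth) plus compute time (workload divided by processing rate), plus any fixed round-trip overhead. The sketch below compares an edge and a cloud placement under invented figures; it illustrates the trade-off only and is not the paper's EO model.

    def response_time_ms(payload_mb, bandwidth_mbps, workload_gops, gops_per_s, extra_rtt_ms=0.0):
        """Crude end-to-end latency: transfer + compute + fixed round-trip overhead."""
        transfer_ms = payload_mb * 8.0 / bandwidth_mbps * 1000.0
        compute_ms = workload_gops / gops_per_s * 1000.0
        return transfer_ms + compute_ms + extra_rtt_ms

    frame = dict(payload_mb=2.0, workload_gops=5.0)  # one AR/video frame, invented figures
    edge = response_time_ms(**frame, bandwidth_mbps=400, gops_per_s=50)                     # nearby edge server
    cloud = response_time_ms(**frame, bandwidth_mbps=100, gops_per_s=500, extra_rtt_ms=60)  # distant cloud
    print(f"edge: {edge:.1f} ms, cloud: {cloud:.1f} ms")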
RevDate: 2025-12-17
Research on cloud-edge-end distributed collaborative computing based on deep reinforcement learning.
Scientific reports pii:10.1038/s41598-025-32813-1 [Epub ahead of print].
Additional Links: PMID-41407848
@article {pmid41407848,
year = {2025},
author = {Wu, C and Ye, Q and Wang, Y and Zhang, D and Zhang, W and Jiang, X},
title = {Research on cloud-edge-end distributed collaborative computing based on deep reinforcement learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32813-1},
pmid = {41407848},
issn = {2045-2322},
support = {5700- 202358842A-4-3-WL//science and technology program of State grid Corporation of China/ ; },
}
RevDate: 2025-12-19
Blockchain-based secure MEC model for VANETs using hybrid networks.
Scientific reports, 15(1):43912.
Vehicular Ad-hoc Networks (VANETs) are a type of mobile ad-hoc network that enables vehicles to interact with one another and with roadside infrastructure. Multi-Access Edge Computing (MEC) provides a promising solution by positioning storage and computation resources closer to the network edge. This helps reduce latency and improve performance. The combination of MEC and blockchain enhances data processing and security. This integration improves privacy safeguards, prevents fraud, and supports trusted communication within VANETs. Consequently, this proposed model aims to develop an innovative approach that leverages these technologies. The main objective of the implemented technique is to create a blockchain architecture powered by deep learning, which ensures the safety of VANETs. The network architecture consists of three layers: perception, edge computing, and services. The main goal of the initial layer is to protect the privacy of VANET data through blockchain activities. The perception layer processes data using edge computing and cloud services. The service layer ensures data protection through blockchain technology and stores information in a public cloud. The last layer focuses on addressing user demands for throughput and Quality of Service (QoS). The proposed framework is well suited to assessing the dependability of vehicle nodes stored on the blockchain. To accomplish node authentication, an Adaptive and Dilated Hybrid Network (ADHyNet) is used. In this approach, the Residual Long Short-Term Memory (Res-LSTM) with Gated Recurrent Unit (GRU) forms the ADHyNet, where the Random Number Updated Skill Optimization Algorithm (RNU-SOA) is used to optimize the hyperparameters. Finally, the encryption process is carried out using Homomorphic Encryption combined with Elliptic Curve Cryptography (HECC) to secure data. This process ensures that confidential user information is protected against unauthorized access. The functionality of the system is thoroughly assessed and simulated. The suggested technique outperforms other approaches in terms of data security in VANETs.
Additional Links: PMID-41402461
@article {pmid41402461,
year = {2025},
author = {Goud, GV and Arunachalam, R and Shukla, SK and Saranya, K and Venugopal, S and Palanisamy, P},
title = {Blockchain-based secure MEC model for VANETs using hybrid networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {43912},
pmid = {41402461},
issn = {2045-2322},
abstract = {Vehicular Ad-hoc Networks (VANETs) are a type of mobile ad-hoc network that enables vehicles to interact with one another and roadside infrastructure. Multi-Access Edge Computing (MEC) provides a promising solution by positioning storage and computation resources closer to the network edge. This helps to reduce the latency and improve performance. The combination of MEC and blockchain enhances data processing and security. This integration improves privacy safeguards, prevents fraud, and supports trusted communication within VANETs. Consequently, this proposed model aims to develop an innovative approach that leverages these technologies. The main objective of the implemented technique is to create a blockchain architecture powered by deep learning, which ensures the safety of VANETs. The network architecture consists of three layers: perception, edge computing, and services. The main goal of the initial layer is to protect the privacy of VANET data through blockchain activities. The perception layer processes data using edge computing and cloud services. The service layer ensures data protection by through the blockchain technology and storing information in a public cloud. The last layer focuses on addressing user demands for throughput and Quality of Service (QoS). The proposed framework is good for assessing the dependability of vehicle nodes stored on the blockchain. To accomplish node authentication, an Adaptive and Dilated Hybrid Network (ADHyNet) is used. In this approach, the Residual Long Short-Term Memory (Res-LSTM) with Gated Recurrent Unit (GRU) forms the ADHyNet, where the Random Number Updated Skill Optimization Algorithm (RNU-SOA) is used to optimize the hyperparameters. Finally, the encryption process is carried out using Homomorphic Encryption combined with Elliptic Curve Cryptography (HECC) to secure data. This process ensures that confidential user information is protected against unauthorized access. The functionality of the system is thoroughly assessed and simulated. The suggested technique outperforms well than other approaches in terms of data security in VANET.},
}
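The entry above secures vehicle data with Homomorphic Encryption combined with Elliptic Curve Cryptography (HECC). That specific scheme is not reproduced here, but the additive-homomorphic idea it relies on can be illustrated with the python-paillier library; the readings and roles below are hypothetical, and Paillier is only a stand-in for the paper's HECC construction, not the authors' method.

# Minimal sketch of additively homomorphic encryption with python-paillier (pip install phe).
# Illustrates the homomorphic component only; it is NOT the HECC scheme of the entry above.
from phe import paillier

# A hypothetical roadside unit generates a keypair and shares the public key with vehicles.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two vehicles encrypt their (hypothetical) speed readings independently.
enc_speed_a = public_key.encrypt(62.5)
enc_speed_b = public_key.encrypt(57.0)

# An untrusted edge node can aggregate ciphertexts without ever seeing the plaintexts.
enc_sum = enc_speed_a + enc_speed_b
enc_mean = enc_sum * 0.5          # multiplication by a plain scalar is also supported

# Only the private-key holder can decrypt the aggregate.
print(private_key.decrypt(enc_mean))   # -> 59.75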
RevDate: 2025-12-16
Molecular crystal memristor-based edge AI platform for energy-efficient and real-time smart grid inspection.
Science bulletin pii:S2095-9273(25)01227-7 [Epub ahead of print].
Vast power grid infrastructure generates enormous volumes of inspection data from smart meters, unmanned aerial vehicle (UAV) patrols, and high-definition video monitoring. Meeting the demand for real-time analysis places stringent requirements on latency, energy efficiency, and on-device intelligence at the edge. Here, we present a molecular crystal memristor-based edge artificial intelligence (AI) hardware platform that can be directly deployed in inspection devices, enabling real-time grid monitoring with drastically reduced computational and storage overheads. The memristor exhibits highly controllable filamentary switching behavior, stable multi-level conductance states, femtowatt-scale power consumption, and outstanding retention. Leveraging these properties, the platform enables fully hardware-integrated convolution, achieving 97% feature-extraction accuracy and 67.75 TOPS/W energy efficiency, thereby substantially alleviating the computational and storage load of cloud servers. This work establishes a scalable and energy-efficient in-memory computing framework for smart grid inspection and provides a powerful foundation for broader edge AI applications.
Additional Links: PMID-41402193
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41402193,
year = {2025},
author = {Guan, P and Qin, L and Ning, K and Liu, J and Ouyang, D and Yu, Y and Wu, J and Lu, X and Fu, Y and Li, Y and Li, H and Zhai, T},
title = {Molecular crystal memristor-based edge AI platform for energy-efficient and real-time smart grid inspection.},
journal = {Science bulletin},
volume = {},
number = {},
pages = {},
doi = {10.1016/j.scib.2025.11.062},
pmid = {41402193},
issn = {2095-9281},
abstract = {Vast power grid infrastructure generates enormous volumes of inspection data from smart meters, unmanned aerial vehicle (UAV) patrols, and high-definition video monitoring. Meeting the demand for real-time analysis places stringent requirements on latency, energy efficiency, and on-device intelligence at the edge. Here, we present a molecular crystal memristor-based edge artificial intelligence (AI) hardware platform that can be directly deployed in inspection devices, enabling real-time grid monitoring with drastically reduced computational and storage overheads. The memristor exhibits highly controllable filamentary switching behavior, stable multi-level conductance states, femtowatt-scale power consumption, and outstanding retention. Leveraging these properties, the platform enables fully hardware-integrated convolution, achieving 97% feature-extraction accuracy and 67.75 TOPS/W energy efficiency, thereby substantially alleviating the computational and storage load of cloud servers. This work establishes a scalable and energy-efficient in-memory computing framework for smart grid inspection and provides a powerful foundation for broader edge AI applications.},
}
RevDate: 2026-01-03
CmpDate: 2026-01-03
AI-embedded IoT healthcare optimization with trust-aware mobile edge computing.
Scientific reports, 16(1):10.
Embedded technologies combined with the Internet of Things (IoT) have transformed healthcare monitoring systems into automated and responsive platforms. In recent decades, many existing approaches have been based on edge computing to reduce response time in patient monitoring and to provide a reliable method for interaction among the medical team and experts during disease diagnosis. Such approaches interconnect battery-powered devices and physical objects to capture physiological data streams for medical treatment and to facilitate personalized healthcare systems. However, because wireless devices have limited resources for fulfilling end-user requests, the accuracy of the medical system is affected, especially in the presence of malicious devices on the communication infrastructure. Under diverse network conditions, such solutions lower the reliability level of the devices and increase the likelihood of suspicious processes. Therefore, to address these concerns in IoT-based healthcare applications, trust and security should be adopted while collecting patients' data over an insecure medium. In this research study, we propose a model referred to as Edge-Cloud Trusted Intelligence (ECTI), which aims to decrease the computing overhead on the devices. Additionally, multi-level security is implemented to ensure privacy preservation by adopting trusted behavior when communicating in a distributed environment. The edges utilize resources efficiently by employing task-offloading strategies, enabling lightweight collaborative decision-making for routing in the healthcare domain. The performance results revealed notable improvements of the proposed model over related schemes across various network metrics.
Additional Links: PMID-41398195
PubMed:
Citation:
show bibtex listing
@article {pmid41398195,
year = {2025},
author = {Alamri, M and Haseeb, K and Humayun, M and Alshammeri, M},
title = {AI-embedded IoT healthcare optimization with trust-aware mobile edge computing.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {10},
pmid = {41398195},
issn = {2045-2322},
support = {DGSSR-2025-02-01298//Deanship of Graduate Studies and Scientific Research at Jouf University/ ; },
mesh = {*Internet of Things ; Humans ; Computer Security ; Cloud Computing ; Wireless Technology ; Trust ; *Artificial Intelligence ; Delivery of Health Care ; Telemedicine ; },
abstract = {Embedded technologies combined with the Internet of Things (IoT), have transformed healthcare monitoring systems into automated and responsive platforms. In recent decades, many existing approaches have been based on edge computing to reduce response time in patient monitoring and provide a reliable method for interaction among the medical team and experts during disease diagnosis. Such approaches are the interconnection of battery-powered devices and physical objects to capture the physiological data streams for medical treatment and facilitate personalized healthcare systems. However, as wireless devices have limited resources for fulfilling end-user requests, this affects the accuracy of the medical system, especially in the presence of malicious devices on the communication infrastructure. Under diverse network conditions, such solutions lower the reliability level of the devices and increase the likelihood of suspicious processes. Therefore, to keep these significant concerns in IoT-based healthcare applications, trust and security should be adopted while collecting patients' data over an insecure medium. In this research study, we propose a model referred to as Edge-Cloud Trusted Intelligence (ECTI), aiming to decrease the computing overhead on the devices. Additionally, multi-level security is implemented to ensure privacy preservation by adopting trusted behavior when communicating in a distributed environment. The edges utilize resources efficiently by employing task offloading strategies, enabling lightweight collaborative decision-making for routing in the healthcare domain. The performance results revealed notable improvement of the proposed model against related schemes in terms of various network metrics.},
}
MeSH Terms:
*Internet of Things
Humans
Computer Security
Cloud Computing
Wireless Technology
Trust
*Artificial Intelligence
Delivery of Health Care
Telemedicine
RevDate: 2025-12-13
A custom hash algorithm for hosting secure gray scale image repository in public cloud.
Scientific reports pii:10.1038/s41598-025-31792-7 [Epub ahead of print].
Nowadays, cloud computing is an essential platform for securing resources and effectively managing files. In digital systems, data leaks and breaches frequently occur during storage and transmission, and several techniques for secure image transmission have been developed by researchers worldwide. In traditional practice, data loss prevention (DLP) is the standard way to protect sensitive data from breaches, but storing such massive amounts of data is not feasible in existing storage systems, and ensuring data security remains a serious challenge. Cloud infrastructure provides a more robust, reliable, and scalable solution to withstand attacks in developing regions. The primary objective of cloud storage is to provide affordable and easy access to storage, with vast amounts of data stored across multiple cloud storage services. This paper proposes a custom block-based hash algorithm that generates a digital fingerprint from a grayscale image. The pivotal contribution of the proposed work lies in data integrity generation and validation, tamper detection, and accurate identification of the tampered region. The entire 256 × 256 image is considered for tamper-proofing, and the hash values are generated by the proposed algorithm. In the integrity validation process, the regenerated digest is compared with the original digest. The cloud environment provides scalable infrastructure for securely managing and storing the digital fingerprints, and user-level authentication is also incorporated into the proposed framework. Additionally, a Graphical User Interface (GUI) application has been developed for generating a hash and verifying whether an image has been tampered with, marking the tampered region with a bounding box. Various benchmark metrics are analysed to validate the performance of the proposed algorithm. The metrics, including quantitative and qualitative tests for integrity codes, the collision property, and the avalanche effect, show that the proposed algorithm performs well for integrity validation.
Additional Links: PMID-41390772
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41390772,
year = {2025},
author = {Murugesan, V and Chidambaram, N and Amirtharajan, R},
title = {A custom hash algorithm for hosting secure gray scale image repository in public cloud.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31792-7},
pmid = {41390772},
issn = {2045-2322},
support = {SR/FST/ET-I/2018/221(C)//DST FIST Fund/ ; },
abstract = {Nowadays, Cloud computing is an essential platform for securing resources and effectively managing files. In digital technology, many data leaks or breaches frequently occur during the storage and transmission process. Several techniques for secure image transmission have been developed by researchers worldwide. In the traditional method, data loss prevention (DLP) is the best way to protect sensitive data from breaches. The massive amount of data is not feasible in the existing storage system. However, ensuring data security remains a severe challenge. Cloud infrastructure provides a more robust, reliable, and scalable solution to overcome attacks in developing regions. The primary objective of cloud storage is to provide affordable and easy access to storage, with a vast amount of data stored across multiple cloud storage services. This paper proposed a custom block-based hash algorithm that generates a digital fingerprint from the grayscale-scale image. The pivotal contribution presented in the proposed work lies in emphasising data integrity generation and validation, tamper detection, and accurate identification of the tampered region. The entire 256 × 256 image is considered for tamper-proofing, and the hash values generated are based on the proposed work. In the integrity validation process, it compares the digest with the original digest. The cloud environment provides scalable infrastructure for securely managing and storing the digital fingerprint. User-level authentication is also incorporated into the proposed framework. Additionally, a Graphical User Interface (GUI) application has been developed for generating a hash and verifying whether the image has been tampered with or not, with the tampered region marked by a bounding box. Various benchmark metrics are analysed for validating the outfit of the proposed algorithm. The metrics, including quantitative and qualitative tests for integrity codes, collision property, and avalanche effect, were analysed, and the proposed algorithm exhibits a good ability towards integrity validation.},
}
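The custom hash of the entry above is not published here, but the general pattern of blockwise digesting and tamper localization for a 256 × 256 grayscale image can be sketched with standard tools. SHA-256 stands in for the paper's custom algorithm, and the 16 × 16 block size is an arbitrary illustrative choice.

# Sketch of blockwise hashing and tamper localization for a 256x256 grayscale image.
# SHA-256 is used as a stand-in for the custom hash described in the entry above.
import hashlib
import numpy as np

BLOCK = 16  # hypothetical block size

def block_digests(img: np.ndarray) -> dict:
    """Return {(row, col): hex digest} for each BLOCK x BLOCK tile of a uint8 image."""
    assert img.shape == (256, 256) and img.dtype == np.uint8
    digests = {}
    for r in range(0, 256, BLOCK):
        for c in range(0, 256, BLOCK):
            tile = img[r:r + BLOCK, c:c + BLOCK]
            digests[(r, c)] = hashlib.sha256(tile.tobytes()).hexdigest()
    return digests

def tampered_regions(original: dict, received: dict) -> list:
    """Top-left corners of blocks whose digests no longer match."""
    return [key for key, digest in original.items() if received[key] != digest]

# Hypothetical usage: store digests of the original image, then validate a received copy.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
reference = block_digests(img)

copy = img.copy()
copy[40:48, 200:208] ^= 0xFF          # simulate tampering inside one block
print(tampered_regions(reference, block_digests(copy)))  # expected: [(32, 192)]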
RevDate: 2025-12-18
BlueEdge neural network approach and its application to automated data type classification in mobile edge computing.
Scientific reports, 15(1):43823.
UNLABELLED: Owing to the increasing number of IoT devices and the growth of big data, massive amounts of diverse data now require proper preprocessing before they can be analyzed. Conventional methods send data directly to the cloud, where it is cleaned and sorted, resulting in a more congested network, increased latency, and a potential threat to users’ privacy. This paper presents an enhanced version of the BlueEdge framework, a neural network solution designed for automated classification of data types on edge devices. A feed-forward neural network with optimized features identifies the presence of 14 distinct data types, so input data can be preprocessed near its source rather than in the cloud. We utilized a comprehensive dataset comprising 1400 samples, encompassing various data formats from around the world. Compared with rule-based methods, the experimental assessment shows better performance, with a 62% reduction in data transmission, processing that is 78 times faster than cloud-based systems, and resource efficiency comparable to low-end mobile devices. Additionally, our strategy demonstrates strong performance under varied data conditions, achieving accuracy above 85% on datasets that include variations and noise levels as high as 20%. The approach is capable of processing data for IoT devices used in education, which can lead to more efficient connections with the cloud and better privacy preservation.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-30445-z.
Additional Links: PMID-41387763
PubMed:
Citation:
show bibtex listing
@article {pmid41387763,
year = {2025},
author = {Elmobark, N and El-Ghareeb, H and Elhishi, S},
title = {BlueEdge neural network approach and its application to automated data type classification in mobile edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {43823},
pmid = {41387763},
issn = {2045-2322},
abstract = {UNLABELLED: Owing to the increasing number of IoT gadgets and the growth of big data, we are now facing massive amounts of diverse data that require proper preprocessing before they can be analyzed. Conventional methods involve sending data directly to the cloud, where it is cleaned and sorted, resulting in a more crowded network, increased latency, and a potential threat to users’ privacy. This paper presents an enhanced version of the BlueEdge framework—a neural network solution designed for the automated classification of data types on edge devices. We achieve this by utilizing a feed-forward neural network and optimized features to identify the presence of 14 distinct data types. Because of this, input data can be preprocessed near its source, and not in the cloud. We utilized a comprehensive dataset comprising 1400 samples, encompassing various data formats from around the world. Compared with rule-based methods, experimental assessment achieves better performance, and results in reduced data transmission (reduced by 62%) and processing latency (78 times faster than cloud-based systems), with resource efficiency comparable to low-end mobile devices. Additionally, our strategy demonstrates strong performance under various data conditions, achieving accuracy levels of over 85% on datasets that may include variations and a noise level as high as 20%. The approach used here is capable of processing data for IoT devices used in education, which can lead to more efficient connections with the cloud and better privacy preservation.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-30445-z.},
}
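The entry above classifies data types on edge devices with a feed-forward network over optimized features. A minimal stand-in, assuming only a handful of handcrafted lexical features and four of the fourteen types, can be sketched with scikit-learn; the features, classes, and samples below are hypothetical.

# Minimal sketch of on-device data-type classification with a small feed-forward network.
# Features, classes, and samples are illustrative stand-ins for the 14-type scheme above.
import re
from sklearn.neural_network import MLPClassifier

def features(value: str) -> list:
    """A few cheap lexical features suitable for a resource-constrained edge device."""
    return [
        len(value),
        sum(ch.isdigit() for ch in value) / max(len(value), 1),
        sum(ch.isalpha() for ch in value) / max(len(value), 1),
        int("@" in value),
        int(bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", value))),
    ]

samples = ["2024-05-01", "1999-12-31", "alice@example.org", "bob@mail.com",
           "12345", "42", "hello world", "edge computing"]
labels = ["date", "date", "email", "email", "number", "number", "text", "text"]

clf = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=2000, random_state=0)
clf.fit([features(s) for s in samples], labels)

print(clf.predict([features("2025-01-20"), features("jane@host.io")]))
# expected: ['date' 'email']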
RevDate: 2025-12-14
PyEOGPR: A Python package for vegetation trait mapping with Gaussian Process Regression on Earth observation cloud platforms.
Ecological informatics, 92:103497.
Developed to efficiently quantify vegetation traits from satellite Earth Observation (EO) data, the PyEOGPR Python package presented here makes trained probabilistic Gaussian Process Regression (GPR) models readily accessible within cloud-computing platforms such as Google Earth Engine (GEE) and openEO. PyEOGPR provides a diversity of validated hybrid GPR models targeting common vegetation traits, as well as newer, more challenging ones such as canopy nitrogen content (CNC), applicable to Sentinel-2 (S2) and Sentinel-3 (S3) data. The package also enables users to incorporate newly trained GPR models for quantifying user-defined surface properties. A key advantage of GPR models is their provision of associated uncertainty estimates, significantly enhancing retrieval reliability. PyEOGPR streamlines large-scale vegetation analysis, facilitating quantitative map generation from local to global scales with customizable time windows and eliminating the need for local image downloads or processing. This paper outlines the complete processing pipeline and demonstrates the generation of landscape-scale maps of key vegetation traits using S2 (20 m resolution) data, and global trait maps using S3 data. PyEOGPR currently supports 27 generically applicable GPR models, aiding environmental monitoring and sustainable agroecological management with minimal coding expertise required. This integration democratizes access to advanced GPR models within cloud environments, making spatial analyses of vegetation dynamics accessible to a broader user base and improving the efficiency of EO data processing.
Additional Links: PMID-41383661
PubMed:
Citation:
show bibtex listing
@article {pmid41383661,
year = {2025},
author = {Kovács, DD and De Clerck, E and Verrelst, J},
title = {PyEOGPR: A Python package for vegetation trait mapping with Gaussian Process Regression on Earth observation cloud platforms.},
journal = {Ecological informatics},
volume = {92},
number = {},
pages = {103497},
pmid = {41383661},
issn = {1574-9541},
abstract = {Developed to efficiently quantify vegetation traits from satellite Earth Observation (EO) data, the here presented PyEOGPR Python package makes trained probabilistic Gaussian Process Regression (GPR) models readily accessible within cloud-computing platforms like Google Earth Engine (GEE) and openEO. PyEOGPR provides a diversity of validated hybrid GPR models targeting common vegetation traits, as well as newer, more challenging ones such as canopy nitrogen content (CNC), applicable to Sentinel-2 (S2) and Sentinel-3 (S3) data. The package also enables users to incorporate newly trained GPR models for quantifying user-defined surface properties. A key advantage of GPR models is their provision of associated uncertainty estimates, significantly enhancing retrieval reliability. PyEOGPR streamlines large-scale vegetation analysis, facilitating quantitative map generation from local to global scales with customizable time windows, eliminating the need for local image downloads or processing. This paper outlines the complete processing pipeline and demonstrates the generation of landscape-scale maps of key vegetation traits using S2 (20 m resolution) data, and global trait maps using S3 data. PyEOGPR currently supports 27 generically applicable GPR models, aiding environmental monitoring and sustainable agroecological management, with minimal coding expertise required. This integration democratizes access to advanced GPR models within cloud environments, making spatial vegetation dynamics analyses accessible to a broader user base and improving the efficiency of EO data processing.},
}
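PyEOGPR itself runs pre-trained models on Earth-observation cloud platforms, but the defining property the entry highlights, a predictive mean accompanied by an uncertainty estimate, can be illustrated locally with scikit-learn on synthetic data; the kernel and the 1-D inputs are purely illustrative.

# Sketch of Gaussian Process Regression with per-prediction uncertainty, the property the
# entry above highlights. Synthetic 1-D data stands in for satellite reflectances.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)
X_train = rng.uniform(0, 10, size=(30, 1))                          # e.g. a band reflectance
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)   # e.g. a trait value

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)   # prediction plus uncertainty estimate
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:4.1f}  trait={m:6.3f}  +/- {1.96 * s:.3f}")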
RevDate: 2025-12-14
CmpDate: 2025-12-11
Secure Fog Computing for Remote Health Monitoring with Data Prioritisation and AI-Based Anomaly Detection.
Sensors (Basel, Switzerland), 25(23):.
Smart remote health monitoring requires time-critical medical data of patients from IoT-enabled cyber-physical systems (CPSs) to be securely transmitted and analysed in real time for early interventions and personalised patient care. Existing cloud architectures are insufficient for smart health systems due to their inherent issues with latency, bandwidth, and privacy. Fog architectures using data storage closer to edge devices introduce challenges in data management, security, and privacy for effective monitoring of a patient's sensitive and critical health data. These gaps found in the literature form the main research focus of this study. As an initial modest step to advance research further, we propose an innovative fog-based framework which is the first of its kind to integrate secure communication with intelligent data prioritisation (IDP) integrated into an AI-based enhanced Random Forest anomaly and threat detection model. Our experimental study to validate our model involves a simulated smart healthcare scenario with synthesised health data streams from distributed wearable devices. Features such as heart rate, SpO2, and breathing rate are dynamically prioritised using AI strategies and rule-based thresholds so that urgent health anomalies are transmitted securely in real time to support clinicians and medical experts for personalised early interventions. We establish a successful proof-of-concept implementation of our framework by achieving high predictive performance measures with an initial high score of 93.5% accuracy, 90.8% precision, 88.7% recall, and 89.7% F1-score.
Additional Links: PMID-41374704
PubMed:
Citation:
show bibtex listing
@article {pmid41374704,
year = {2025},
author = {Fahd, K and Parvin, S and Di Serio, A and Venkatraman, S},
title = {Secure Fog Computing for Remote Health Monitoring with Data Prioritisation and AI-Based Anomaly Detection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374704},
issn = {1424-8220},
mesh = {Humans ; Monitoring, Physiologic/methods ; *Cloud Computing ; *Artificial Intelligence ; *Computer Security ; Wearable Electronic Devices ; Telemedicine ; Algorithms ; Remote Sensing Technology ; },
abstract = {Smart remote health monitoring requires time-critical medical data of patients from IoT-enabled cyber-physical systems (CPSs) to be securely transmitted and analysed in real time for early interventions and personalised patient care. Existing cloud architectures are insufficient for smart health systems due to their inherent issues with latency, bandwidth, and privacy. Fog architectures using data storage closer to edge devices introduce challenges in data management, security, and privacy for effective monitoring of a patient's sensitive and critical health data. These gaps found in the literature form the main research focus of this study. As an initial modest step to advance research further, we propose an innovative fog-based framework which is the first of its kind to integrate secure communication with intelligent data prioritisation (IDP) integrated into an AI-based enhanced Random Forest anomaly and threat detection model. Our experimental study to validate our model involves a simulated smart healthcare scenario with synthesised health data streams from distributed wearable devices. Features such as heart rate, SpO2, and breathing rate are dynamically prioritised using AI strategies and rule-based thresholds so that urgent health anomalies are transmitted securely in real time to support clinicians and medical experts for personalised early interventions. We establish a successful proof-of-concept implementation of our framework by achieving high predictive performance measures with an initial high score of 93.5% accuracy, 90.8% precision, 88.7% recall, and 89.7% F1-score.},
}
MeSH Terms:
Humans
Monitoring, Physiologic/methods
*Cloud Computing
*Artificial Intelligence
*Computer Security
Wearable Electronic Devices
Telemedicine
Algorithms
Remote Sensing Technology
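The framework above prioritises vital-sign readings with AI strategies and rule-based thresholds so urgent anomalies are transmitted first. A minimal sketch of the rule-based part is shown below; the thresholds are illustrative only, not clinical guidance and not the rules used by the cited framework.

# Sketch of rule-based data prioritisation for wearable vital signs, in the spirit of the
# IDP component above. Thresholds are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate: int        # beats per minute
    spo2: float            # percent
    breathing_rate: int    # breaths per minute

def priority(r: Reading) -> str:
    """Label a reading so urgent anomalies are transmitted first."""
    if r.spo2 < 90 or r.heart_rate > 130 or r.heart_rate < 40:
        return "urgent"
    if r.spo2 < 94 or r.heart_rate > 110 or r.breathing_rate > 24:
        return "elevated"
    return "routine"

stream = [Reading(72, 98.0, 14), Reading(118, 93.0, 22), Reading(135, 88.5, 28)]
# Urgent readings go to the secure real-time channel; routine ones can be batched.
order = ["urgent", "elevated", "routine"]
for r in sorted(stream, key=lambda r: order.index(priority(r))):
    print(priority(r), r)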
RevDate: 2025-12-14
Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT Intrusion Detection.
Sensors (Basel, Switzerland), 25(23):.
The rapid expansion of the Internet of Things (IoT) across critical sectors such as healthcare, energy, cybersecurity, smart cities, and finance has increased its exposure to cyberattacks. Conventional centralized machine learning-based Intrusion Detection Systems (IDS) face limitations, including data privacy risks, legal restrictions on cross-border data transfers, and high communication overhead. To overcome these challenges, we propose Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT intrusion detection, where fog nodes serve as intermediaries between IoT devices and the cloud, collecting and preprocessing local data and training models on behalf of IoT clusters. The framework incorporates Personalized Federated Learning (PFL) to handle heterogeneous, non-independent and identically distributed (non-IID) data and leverages differential privacy (DP) to protect sensitive information. Experiments on the RT-IoT 2022 and CIC-IoT 2023 datasets demonstrate that PP-HFFL achieves detection accuracy comparable to centralized systems, reduces communication overhead, preserves privacy, and adapts effectively across non-IID data. This hierarchical approach provides a practical and secure solution for next-generation IoT intrusion detection.
Additional Links: PMID-41374671
PubMed:
Citation:
show bibtex listing
@article {pmid41374671,
year = {2025},
author = {Islam, MM and Abdullah, WM and Saha, BN},
title = {Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT Intrusion Detection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374671},
issn = {1424-8220},
support = {CRG-SEED-2501-07//Concordia University of Edmonton/ ; },
abstract = {The rapid expansion of the Internet of Things (IoT) across critical sectors such as healthcare, energy, cybersecurity, smart cities, and finance has increased its exposure to cyberattacks. Conventional centralized machine learning-based Intrusion Detection Systems (IDS) face limitations, including data privacy risks, legal restrictions on cross-border data transfers, and high communication overhead. To overcome these challenges, we propose Privacy-Preserving Hierarchical Fog Federated Learning (PP-HFFL) for IoT intrusion detection, where fog nodes serve as intermediaries between IoT devices and the cloud, collecting and preprocessing local data, thus training models on behalf of IoT clusters. The framework incorporates a Personalized Federated Learning (PFL) to handle heterogeneous, non-independent, and identically distributed (non-IID) data and leverages differential privacy (DP) to protect sensitive information. Experiments on RT-IoT 2022 and CIC-IoT 2023 datasets demonstrate that PP-HFFL achieves detection accuracy comparable to centralized systems, reduces communication overhead, preserves privacy, and adapts effectively across non-IID data. This hierarchical approach provides a practical and secure solution for next-generation IoT intrusion detection.},
}
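The combination of federated averaging and differential privacy described above can be sketched in a few lines. The sketch below flattens the fog hierarchy of PP-HFFL into a single aggregator and uses a plain linear model with clipped, Gaussian-noised client updates; the clip bound, noise scale, and data are illustrative, not the paper's configuration.

# Simplified sketch of one federated-averaging round with differentially private client
# updates (clip + Gaussian noise). Constants and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
DIM, CLIP, SIGMA = 10, 1.0, 0.5

def local_update(global_w: np.ndarray, X: np.ndarray, y: np.ndarray, lr=0.1) -> np.ndarray:
    """One gradient step of least squares on a client's local (possibly non-IID) data."""
    grad = X.T @ (X @ global_w - y) / len(y)
    return global_w - lr * grad

def privatize(delta: np.ndarray) -> np.ndarray:
    """Clip the update to L2 norm CLIP and add Gaussian noise (Gaussian mechanism)."""
    delta = delta * min(1.0, CLIP / (np.linalg.norm(delta) + 1e-12))
    return delta + rng.normal(0.0, SIGMA * CLIP, size=delta.shape)

global_w = np.zeros(DIM)
clients = [(rng.normal(size=(50, DIM)), rng.normal(size=50)) for _ in range(5)]

deltas = []
for X, y in clients:
    new_w = local_update(global_w, X, y)
    deltas.append(privatize(new_w - global_w))   # only noised deltas leave the client/fog

global_w = global_w + np.mean(deltas, axis=0)    # server-side federated averaging
print(global_w.round(3))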
RevDate: 2025-12-14
A Framework for Integration of Machine Vision with IoT Sensing.
Sensors (Basel, Switzerland), 25(23):.
Automated monitoring systems increasingly leverage diverse sensing sources, yet a disconnect often persists between machine vision and IoT sensor pipelines. While IoT sensors provide reliable point measurements and cameras offer rich spatial context, their independent operation limits coherent environmental interpretation. Existing multimodal fusion frameworks frequently lack tight synchronization and efficient cross-modal learning. This paper introduces a unified edge-cloud framework that deeply integrates cameras as active sensing nodes within an IoT network. Our approach features tight time synchronization between visual and IoT data streams and employs cross-modal knowledge distillation to enable efficient model training on resource-constrained edge devices. The system leverages a multi-task learning setup with dynamically adjusted loss weighting, combining architectures like EfficientNet, Vision Transformers, and U-Net derivatives. Validation on environmental monitoring tasks, including classification, segmentation, and anomaly detection, demonstrates the framework's robustness. Experiments deployed on compact edge hardware (Jetson Nano, Coral TPU) achieved 94.8% classification accuracy and 87.6% segmentation quality (mIoU), and they also sustained sub-second inference latency. The results confirm that the proposed synchronized, knowledge-driven fusion yields a more adaptive, context-aware, and deployment-ready sensing solution, significantly advancing the practical integration of machine vision within IoT ecosystems.
Additional Links: PMID-41374611
PubMed:
Citation:
show bibtex listing
@article {pmid41374611,
year = {2025},
author = {Nwatuzie, G and Peyravi, H},
title = {A Framework for Integration of Machine Vision with IoT Sensing.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374611},
issn = {1424-8220},
abstract = {Automated monitoring systems increasingly leverage diverse sensing sources, yet a disconnect often persists between machine vision and IoT sensor pipelines. While IoT sensors provide reliable point measurements and cameras offer rich spatial context, their independent operation limits coherent environmental interpretation. Existing multimodal fusion frameworks frequently lack tight synchronization and efficient cross-modal learning. This paper introduces a unified edge-cloud framework that deeply integrates cameras as active sensing nodes within an IoT network. Our approach features tight time synchronization between visual and IoT data streams and employs cross-modal knowledge distillation to enable efficient model training on resource-constrained edge devices. The system leverages a multi-task learning setup with dynamically adjusted loss weighting, combining architectures like EfficientNet, Vision Transformers, and U-Net derivatives. Validation on environmental monitoring tasks, including classification, segmentation, and anomaly detection, demonstrates the framework's robustness. Experiments deployed on compact edge hardware (Jetson Nano, Coral TPU) achieved 94.8% classification accuracy and 87.6% segmentation quality (mIoU), and they also sustained sub-second inference latency. The results confirm that the proposed synchronized, knowledge-driven fusion yields a more adaptive, context-aware, and deployment-ready sensing solution, significantly advancing the practical integration of machine vision within IoT ecosystems.},
}
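The cross-modal knowledge distillation mentioned above is commonly implemented as a temperature-scaled soft-target loss between a large teacher and a compact edge student. The sketch below shows the generic formulation with PyTorch, not the paper's specific loss; the shapes, temperature, and weighting are illustrative.

# Sketch of a temperature-scaled knowledge-distillation loss: teacher logits (e.g. from a
# cloud model) guide student logits (e.g. from an edge model) alongside hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL divergence with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(8, 5, requires_grad=True)   # hypothetical edge-model outputs, 5 classes
teacher = torch.randn(8, 5)                       # hypothetical cloud-model outputs
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))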
RevDate: 2025-12-14
CmpDate: 2025-12-11
The B-Health Box: A Standards-Based Fog IoT Gateway for Interoperable Health and Wellbeing Data Collection.
Sensors (Basel, Switzerland), 25(23):.
In recent years, healthcare has been evolving to meet the needs of a growing and ageing population. To support better and more reliable care, a comprehensive and up-to-date Personal Health Record (PHR) is essential. Ideally, the PHR should contain all health-related information about an individual and be available for sharing with healthcare institutions. However, because of interoperability issues with medical and fitness devices, the PHR most of the time contains only the same information as the patient's Electronic Health Record (EHR). This results in a lack of health-related information (e.g., physical activity, working patterns) that is essential to address medical conditions, support prescriptions, and follow up on treatment. This paper introduces the B-Health IoT Box, a fog IoT computing framework for eHealth interoperability and data collection that enables seamless, secure integration of health and contextual data into interoperable health records. The system was deployed in real-world settings involving over 4500 users, successfully collecting and transmitting more than 1.5 million datasets. The validation showed that data were collected, harmonized, and properly stored in different eHealth platforms, enriching the personal EHR with data from mobile and wearable sensors. The solution supports real-time and near real-time data collection, fast prototyping, and secure cloud integration, offering a modular, standards-compliant gateway for digital health ecosystems. The health and health-related data are available in FHIR format, enabling interoperable eHealth ecosystems and more equal access to health and care services.
Additional Links: PMID-41374490
PubMed:
Citation:
show bibtex listing
@article {pmid41374490,
year = {2025},
author = {Marques, M and Delgado-Gomes, V and Januário, F and Lopes, C and Jardim-Goncalves, R and Agostinho, C},
title = {The B-Health Box: A Standards-Based Fog IoT Gateway for Interoperable Health and Wellbeing Data Collection.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {23},
pages = {},
pmid = {41374490},
issn = {1424-8220},
support = {826117, 857172, 872548, 101016000, 101092043//European Commission/ ; },
mesh = {Humans ; Electronic Health Records ; Telemedicine ; *Data Collection/methods ; Wearable Electronic Devices ; Health Records, Personal ; },
abstract = {In recent years, healthcare is evolving to meet the needs of a growing and ageing population. To support better and more reliable care, a comprehensive and up-to-date Personal Health Record (PHR) is essential. Ideally, the PHR should contain all health-related information about an individual and be available for sharing with healthcare institutions. However, due to interoperability issues of the medical and fitness devices, most of the times, the PHR only contains the same information as the patient Electronic Health Record (EHR). This results in lack of health-related information (e.g., physical activity, working patterns) essential to address medical conditions, support prescriptions, and treatment follow-up. This paper introduces the B-Health IoT Box, a fog IoT computing framework for eHealth interoperability and data collection that enables seamless, secure integration of health and contextual data into interoperable health records. The system was deployed in real-world settings involving over 4500 users, successfully collecting and transmitting more than 1.5 million datasets. The validation shown that data was collected, harmonized, and properly stored in different eHealth platforms, enriching data from personal EHR with mobile and wearable sensors data. The solution supports real-time and near real-time data collection, fast prototyping, and secure cloud integration, offering a modular, standards-compliant gateway for digital health ecosystems. The health and health-related data is available in FHIR format enabling interoperable eHealth ecosystems, and better equality of access to health and care services.},
}
MeSH Terms:
Humans
Electronic Health Records
Telemedicine
*Data Collection/methods
Wearable Electronic Devices
Health Records, Personal
RevDate: 2025-12-12
CmpDate: 2025-12-10
The Nextflow nf-core/metatdenovo pipeline for reproducible annotation of metatranscriptomes, and more.
PeerJ, 13:e20328.
Metatranscriptomics-the sequencing of community RNA-has become a popular tool in microbial ecology, proving useful for both in situ surveys and experiments. However, annotating raw sequence data remains challenging for many research groups with limited computational experience. Standardized and reproducible analyses are important to enhance transparency, comparability across studies, and long-term reproducibility. To simplify metatranscriptome processing for biologists, and to promote reproducible analyses, we introduce nf-core/metatdenovo, a Nextflow-based workflow. Nextflow pipelines run on different computing platforms, from standalone systems to high-performance computing clusters and cloud platforms (e.g., AWS, Google Cloud, Azure) and use container technology such as Docker or Singularity to reproducibly provision software. Biologists can access the pipeline using either the command line or the Seqera platform, which provides a web browser-based interface to Nextflow pipelines. Collaborating with nf-core ensures high-quality, documented, reproducible workflows. Our nf-core/metatdenovo pipeline adheres to these established standards, enabling FAIR metatranscriptome de novo assembly, quantification, and annotation.
Additional Links: PMID-41368505
PubMed:
Citation:
show bibtex listing
@article {pmid41368505,
year = {2025},
author = {Di Leo, D and Nilsson, E and Krinos, A and Pinhassi, J and Lundin, D},
title = {The Nextflow nf-core/metatdenovo pipeline for reproducible annotation of metatranscriptomes, and more.},
journal = {PeerJ},
volume = {13},
number = {},
pages = {e20328},
pmid = {41368505},
issn = {2167-8359},
mesh = {*Software ; Reproducibility of Results ; Workflow ; *Transcriptome ; *Computational Biology/methods ; *Molecular Sequence Annotation/methods ; *Metagenomics/methods ; },
abstract = {Metatranscriptomics-the sequencing of community RNA-has become a popular tool in microbial ecology, proving useful for both in situ surveys and experiments. However, annotating raw sequence data remains challenging for many research groups with limited computational experience. Standardized and reproducible analyses are important to enhance transparency, comparability across studies, and long-term reproducibility. To simplify metatranscriptome processing for biologists, and to promote reproducible analyses, we introduce nf-core/metatdenovo, a Nextflow-based workflow. Nextflow pipelines run on different computing platforms, from standalone systems to high-performance computing clusters and cloud platforms (e.g., AWS, Google Cloud, Azure) and use container technology such as Docker or Singularity to reproducibly provision software. Biologists can access the pipeline using either the command line or the Seqera platform, which provides a web browser-based interface to Nextflow pipelines. Collaborating with nf-core ensures high-quality, documented, reproducible workflows. Our nf-core/metatdenovo pipeline adheres to these established standards, enabling FAIR metatranscriptome de novo assembly, quantification, and annotation.},
}
MeSH Terms:
*Software
Reproducibility of Results
Workflow
*Transcriptome
*Computational Biology/methods
*Molecular Sequence Annotation/methods
*Metagenomics/methods
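A typical way to launch an nf-core pipeline such as the one above is a single Nextflow command. The sketch below wraps that command in Python; it assumes Nextflow and Docker are installed, and the '--input' and '--outdir' parameters follow general nf-core conventions, so they should be checked against the nf-core/metatdenovo documentation before use.

# Sketch of launching the nf-core/metatdenovo pipeline from Python via subprocess.
# Parameter names follow common nf-core conventions and are assumptions here.
import subprocess

cmd = [
    "nextflow", "run", "nf-core/metatdenovo",
    "-profile", "docker",              # or 'singularity', 'conda', an HPC/cloud profile
    "--input", "samplesheet.csv",      # hypothetical sample sheet listing FASTQ files
    "--outdir", "results",
    "-resume",                         # reuse cached work from a previous run
]
subprocess.run(cmd, check=True)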
RevDate: 2025-12-08
SLICED: A secure and adaptive cloud-iot framework for low-latency e-learning environments.
Scientific reports pii:10.1038/s41598-025-31428-w [Epub ahead of print].
Providing dependable, secure connectivity remains a persistent challenge in digital education, particularly in data-sensitive, remote learning environments. This study presents SLICED, which stands for Secure Learning Integration via Cloud and Edge Devices. It is a framework that integrates Internet of Things edge devices with Amazon Web Services (AWS) Cloud services. SLICED orchestrates AWS IoT Core, Lambda, and Key Management Service (KMS) to enable encrypted communication, user authentication, and real-time edge analytics. When compared to traditional AWS-IoT educational systems, this adaptive integration cuts down on latency and increases the level of data protection. The results of experiments conducted in simulated learning networks demonstrate that SLICED can achieve up to 27% lower latency and 33% greater security, thereby providing smart learning environments that are both scalable and safe.
Additional Links: PMID-41361235
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41361235,
year = {2025},
author = {Aswin, K and Shanmugapriya, N and Gopi, R},
title = {SLICED: A secure and adaptive cloud-iot framework for low-latency e-learning environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31428-w},
pmid = {41361235},
issn = {2045-2322},
abstract = {Providing dependable, secure connectivity remains a persistent challenge in digital education, particularly in data-sensitive, remote learning environments. This study presents SLICED, which stands for Secure Learning Integration via Cloud and Edge Devices. It is a framework that integrates Internet of Things edge devices with Amazon Web Services (AWS) Cloud services. SLICED orchestrates AWS IoT Core, Lambda, and Key Management Service (KMS) to enable encrypted communication, user authentication, and real-time edge analytics. When compared to traditional AWS-IoT educational systems, this adaptive integration cuts down on latency and increases the level of data protection. The results of experiments conducted in simulated learning networks demonstrate that SLICED can achieve up to 27% lower latency and 33% greater security, thereby providing smart learning environments that are both scalable and safe.},
}
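The central idea in the entry above, encrypting learner telemetry before it leaves the edge device, can be sketched without any cloud account. Fernet symmetric encryption from the cryptography package stands in for the AWS KMS-managed keys the SLICED framework orchestrates; the payload fields and their values are hypothetical.

# Sketch of encrypting an e-learning telemetry payload before it leaves an edge device.
# Fernet (pip install cryptography) stands in for KMS-managed keys; data is hypothetical.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in SLICED this role is played by a KMS-managed key
cipher = Fernet(key)

payload = json.dumps({"student_id": "s-001", "quiz_latency_ms": 182, "score": 0.85})
token = cipher.encrypt(payload.encode("utf-8"))

# The ciphertext, not the plaintext, would be published to the IoT message broker.
print(token[:40], b"...")
print(cipher.decrypt(token).decode("utf-8"))   # only holders of the key can recover it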
RevDate: 2025-12-11
CmpDate: 2025-12-08
Automated pipeline for operant behavior phenotyping for high-throughput data management, processing, and visualization.
NPP - digital psychiatry and neuroscience, 3(1):25.
Operant behavior paradigms are essential in preclinical models of neuropsychiatric disorders, such as substance use disorders, enabling the study of complex behaviors including learning, salience, motivation, and preference. These tasks often involve repeated, time-resolved interactions over extended periods, producing large behavioral datasets with rich temporal structure. To support genome-wide association studies (GWAS), the Preclinical Addiction Research Consortium (PARC) has phenotyped over 3000 rats for oxycodone and cocaine addiction-like behaviors using extended access self-administration, producing over 100,000 data files. To manage, store, and process this data efficiently, we leveraged Dropbox, Microsoft Azure Cloud Services, and other widely available computational tools to develop a robust, automated data processing pipeline. Raw MedPC operant output files are automatically converted into structured Excel files using custom scripts, then integrated with standardized experimental, behavioral, and metadata spreadsheets, all uploaded from Dropbox into a relational SQL database on Azure. The pipeline enables automated quality control, data backups, daily summary reports, and interactive visualizations. This approach has dramatically improved PARC's high-throughput phenotyping capabilities by reducing human workload and error, while improving data quality, richness, and accessibility. We here share our approach, as these streamlined workflows can deliver benefits to operant studies of any scale, supporting more efficient, transparent, reproducible, and collaborative preclinical research.
Additional Links: PMID-41360967
PubMed:
Citation:
show bibtex listing
@article {pmid41360967,
year = {2025},
author = {Kim, S and Huang, Y and Singla, U and Hu, A and Kalra, S and Morgan, AA and Sichel, B and Othman, D and Carrette, LLG},
title = {Automated pipeline for operant behavior phenotyping for high-throughput data management, processing, and visualization.},
journal = {NPP - digital psychiatry and neuroscience},
volume = {3},
number = {1},
pages = {25},
pmid = {41360967},
issn = {2948-1570},
abstract = {Operant behavior paradigms are essential in preclinical models of neuropsychiatric disorders, such as substance use disorders, enabling the study of complex behaviors including learning, salience, motivation, and preference. These tasks often involve repeated, time-resolved interactions over extended periods, producing large behavioral datasets with rich temporal structure. To support genome-wide association studies (GWAS), the Preclinical Addiction Research Consortium (PARC) has phenotyped over 3000 rats for oxycodone and cocaine addiction-like behaviors using extended access self-administration, producing over 100,000 data files. To manage, store, and process this data efficiently, we leveraged Dropbox, Microsoft Azure Cloud Services, and other widely available computational tools to develop a robust, automated data processing pipeline. Raw MedPC operant output files are automatically converted into structured Excel files using custom scripts, then integrated with standardized experimental, behavioral, and metadata spreadsheets, all uploaded from Dropbox into a relational SQL database on Azure. The pipeline enables automated quality control, data backups, daily summary reports, and interactive visualizations. This approach has dramatically improved PARC's high-throughput phenotyping capabilities by reducing human workload and error, while improving data quality, richness, and accessibility. We here share our approach, as these streamlined workflows can deliver benefits to operant studies of any scale, supporting more efficient, transparent, reproducible, and collaborative preclinical research.},
}
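The core step described above, loading structured spreadsheets of operant sessions into a relational SQL database, can be sketched with pandas and SQLAlchemy. The column names, values, and the SQLite target below are hypothetical placeholders; the real pipeline reads converted MedPC spreadsheets and writes to an Azure SQL database.

# Sketch of the 'structured Excel file into a relational SQL database' step described above.
# In the real pipeline the frame would come from pd.read_excel() on a converted MedPC file.
import pandas as pd
from sqlalchemy import create_engine

sessions = pd.DataFrame({
    "subject_id": ["rat042", "rat042", "rat107"],
    "session_start": pd.to_datetime(["2025-06-01 09:00", "2025-06-02 09:00", "2025-06-01 09:05"]),
    "active_lever_presses": [112, 134, 87],
    "infusions": [28, 31, 22],
})

# Basic automated quality control before loading.
sessions = sessions.dropna(subset=["subject_id", "session_start"])

# SQLite stands in for the Azure SQL database; swap the URL for an ODBC connection string.
engine = create_engine("sqlite:///parc_behavior.db")
sessions.to_sql("operant_sessions", engine, if_exists="append", index=False)

print(pd.read_sql("SELECT COUNT(*) AS n FROM operant_sessions", engine))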
RevDate: 2025-12-06
SlingBAG: point cloud-based iterative algorithm for large-scale 3D photoacoustic imaging.
Nature communications pii:10.1038/s41467-025-66855-w [Epub ahead of print].
Large-scale 3D photoacoustic imaging has become increasingly important for both clinical and pre-clinical applications. Limited by cost and system complexity, only systems with sparsely distributed sensors can be widely implemented, which necessitates advanced reconstruction algorithms to reduce artifacts. However, the high memory and time consumption of traditional iterative reconstruction (IR) algorithms is practically unacceptable for large-scale 3D photoacoustic imaging. Here, we propose a point cloud-based IR algorithm that reduces memory consumption by several orders of magnitude, wherein the 3D photoacoustic scene is modeled as a series of Gaussian-distributed spherical sources stored in the form of a point cloud. During the IR process, not only are the properties of each Gaussian source, including its peak intensity (initial pressure value), standard deviation (size), and mean (position), continuously optimized, but each Gaussian source also adaptively undergoes destruction, splitting, and duplication along the gradient direction. This method, named SlingBAG, the sliding Gaussian ball adaptive growth algorithm, enables high-quality large-scale 3D photoacoustic reconstruction with fast iteration and extremely low memory usage. We validated the SlingBAG algorithm in both simulation studies and in vivo animal experiments.
Additional Links: PMID-41353449
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41353449,
year = {2025},
author = {Li, S and Wang, Y and Gao, J and Kim, C and Choi, S and Zhang, Y and Chen, Q and Yao, Y and Li, C},
title = {SlingBAG: point cloud-based iterative algorithm for large-scale 3D photoacoustic imaging.},
journal = {Nature communications},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41467-025-66855-w},
pmid = {41353449},
issn = {2041-1723},
abstract = {Large-scale 3D photoacoustic imaging has become increasingly important for both clinical and pre-clinical applications. Limited by cost and system complexity, only systems with sparsely-distributed sensors can be widely implemented, which necessitates advanced reconstruction algorithms to reduce artifacts. However, the high computing memory and time consumption of traditional iterative reconstruction (IR) algorithms is practically unacceptable for large-scale 3D photoacoustic imaging. Here, we propose a point cloud-based IR algorithm that reduces memory consumption by several orders, wherein the 3D photoacoustic scene is modeled as a series of Gaussian-distributed spherical sources stored in form of point cloud. During the IR process, not only are properties of each Gaussian source, including its peak intensity (initial pressure value), standard deviation (size) and mean (position) continuously optimized, but also each Gaussian source itself adaptively undergoes destroying, splitting, and duplication along the gradient direction. This method, named SlingBAG, the sliding Gaussian ball adaptive growth algorithm, enables high-quality large-scale 3D photoacoustic reconstruction with fast iteration and extremely low memory usage. We validated the SlingBAG algorithm in both simulation study and in vivo animal experiments.},
}
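The parameterization used above, a point cloud in which each source carries a position (mean), a size (standard deviation), and a peak intensity, is easy to make concrete. The sketch below evaluates the resulting initial-pressure field from a few isotropic Gaussian sources on a small grid; the sources and grid are hypothetical, and this is not the SlingBAG optimizer itself.

# Sketch of evaluating a field from Gaussian-distributed spherical sources stored as a
# point cloud of (x, y, z, sigma, peak) tuples. Illustrative only; not the SlingBAG method.
import numpy as np

sources = np.array([
    [0.2, 0.3, 0.5, 0.05, 1.0],
    [0.7, 0.6, 0.4, 0.08, 0.6],
    [0.5, 0.8, 0.7, 0.03, 0.9],
])

grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 32)] * 3, indexing="ij"), axis=-1)  # (32,32,32,3)

pressure = np.zeros(grid.shape[:3])
for x, y, z, sigma, peak in sources:
    d2 = np.sum((grid - np.array([x, y, z])) ** 2, axis=-1)
    pressure += peak * np.exp(-d2 / (2.0 * sigma ** 2))

print(pressure.shape, float(pressure.max()))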
RevDate: 2025-12-05
Blockchain-based cryptographic framework for secure data transmission in IoT edge environments using ECaps-Net.
Scientific reports pii:10.1038/s41598-025-30906-5 [Epub ahead of print].
In the evolving landscape of the Internet of Things (IoT), the integration of interconnected devices and cloud computing has revolutionized data collection and processing. However, this connectivity poses numerous security challenges concerning data privacy and integrity. Traditional cloud-based security approaches are inadequate for managing the distributed and dynamic nature of IoT ecosystems. The emergence of the edge computing paradigm allows data processing and storage to move closer to local edge devices, but it introduces new vulnerabilities at the edge. Thus, an Intrusion Detection System (IDS) is required. An IDS built at the edge can quickly detect and mitigate possible attacks by continually monitoring network traffic, device interactions, and anomalies in real time. Therefore, in this study, we propose an enhanced Deep Learning (DL)-based IDS integrated with a blockchain-based cryptographic algorithm to ensure secure data transmission in an IoT edge computing environment. Initially, the intrusion dataset undergoes a preprocessing step to enhance its quality by eliminating unnecessary data and normalizing the dataset. Then, the pre-processed data is classified using an Enhanced Capsule Network (ECaps-Net), which incorporates a Squeeze and Excitation (SE) block to highlight important features and suppress less important ones. After classification, the classified normal data is grouped into blocks using blockchain technology, and every block is hashed using the Merkle-Damgard cryptographic construction to ensure data integrity and confidentiality. The proposed framework outperformed existing methods with maximum accuracies of 98.90% and 98.78% on the KDD Cup-99 and UNSW-NB 15 datasets, respectively. The proposed mechanism protects cloud servers and edge devices from malicious access, offering a reliable and efficient solution for secure data transmission in IoT edge environments.
Additional Links: PMID-41350368
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41350368,
year = {2025},
author = {Mohamed Meerasha, I and Syed Masood, JAI and P, T and R, AA},
title = {Blockchain-based cryptographic framework for secure data transmission in IoT edge environments using ECaps-Net.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-30906-5},
pmid = {41350368},
issn = {2045-2322},
abstract = {In the evolving landscape of Internet of Things (IoT), the integration of interconnected devices and cloud computing has revolutionized data collection and processing. However, this connectivity poses numerous security challenges about data privacy, integrity, and security. Traditional cloud-based security approaches inadequate for managing the distributed and dynamic nature of IoT ecosystems. The emergence of the edge computing paradigm allowed for the transfer of data processing and storage closer to local edge devices, but introduces new vulnerabilities at the edges. Thus, an Intrusion Detection System (IDS) is required in this situation. IDS built at the edge can quickly detect and mitigate possible attacks by continually monitoring network traffic, device interactions, and real-time anomalies. Therefore, in this study, we propose an Enhanced Deep Learning (DL)-based IDS integrated with a Blockchain-Based Cryptographic-Algorithm to ensure secure data transmission in an IoT edge computing environment. Initially, the intrusion dataset undergoes preprocessing step to enhance its quality by eliminating unnecessary data and normalizing the dataset. then, the pre-processed data is classified using an Enhanced Capsule Network (ECaps-Net), which incorporates a Squeeze and Excitation (SE) block to highlight important features and surpasses less important ones. After classification, the classified normal data is converted into blocks using Blockchain technology. Every block is hashed using the Merkle-Damgard cryptographic algorithm to ensure data integrity and confidentiality. The proposed framework outperformed existing methods with a maximum accuracy of 98.90% and 98.78% on the KDD Cup-99 and UNSW-NB 15 datasets, respectively. The proposed mechanism protects cloud server and edge devices from malicious access, offering a reliable and efficient solution for secure data transmission in IoT edge environments.},
}
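The blockchain step above hashes each block of classified-normal records and links it to its predecessor. A minimal sketch of that hash-chaining pattern is shown below; SHA-256, which is itself a Merkle-Damgard construction, stands in for the hashing step, and this is an illustration rather than the paper's full blockchain protocol.

# Minimal sketch of chaining classified-normal records into hash-linked blocks.
import hashlib
import json
import time

def make_block(records: list, prev_hash: str) -> dict:
    body = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    body_bytes = json.dumps(body, sort_keys=True).encode("utf-8")
    return {**body, "hash": hashlib.sha256(body_bytes).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every digest and check the prev_hash links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        body_bytes = json.dumps(body, sort_keys=True).encode("utf-8")
        if hashlib.sha256(body_bytes).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block([{"flow_id": 1, "label": "normal"}], prev_hash="0" * 64)]
chain.append(make_block([{"flow_id": 2, "label": "normal"}], prev_hash=chain[-1]["hash"]))

print(verify(chain))                          # True
chain[0]["records"][0]["label"] = "attack"    # tampering breaks the first digest
print(verify(chain))                          # False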
RevDate: 2025-12-03
Detecting continuous structural heterogeneity in single molecule localization microscopy data with a point cloud variational auto-encoder.
Scientific reports pii:10.1038/s41598-025-31201-z [Epub ahead of print].
The low degree of labeling and limited photon count of fluorescent emitters in single molecule localization microscopy results in poor quality images of macro-molecular complexes. Particle fusion provides a single reconstruction with high signal-to-noise ratio by combining many single molecule localization microscopy images of the same structure. The underlying assumption of homogeneity is not always valid, heterogeneity can arise due to geometrical shape variations or distinct conformational states. We introduce a Point Cloud Variational Auto-Encoder that works directly on 2D and 3D localization data, to detect multiple modes of variation in such datasets. The computing time is on the order of a few minutes, enabled by the linear scaling with dataset size, and fast network training in just four epochs. The use of lists of localization data instead of pixelated images leads to just minor differences in computational burden between 2D and 3D cases. With the proposed method, we detected radius variation in 2D Nuclear Pore Complex data, height variations in 3D DNA origami tetrahedron data, and both radius and height variations in 3D Nuclear Pore Complex data. In all cases, the detected variations were on the few nanometer scale.
Additional Links: PMID-41339750
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41339750,
year = {2025},
author = {Haghparast, S and Zhang, Y and Tao, Q and Stallinga, S and Rieger, B},
title = {Detecting continuous structural heterogeneity in single molecule localization microscopy data with a point cloud variational auto-encoder.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31201-z},
pmid = {41339750},
issn = {2045-2322},
support = {17046//Nederlandse Organisatie voor Wetenschappelijk Onderzoek/ ; },
abstract = {The low degree of labeling and limited photon count of fluorescent emitters in single molecule localization microscopy results in poor quality images of macro-molecular complexes. Particle fusion provides a single reconstruction with high signal-to-noise ratio by combining many single molecule localization microscopy images of the same structure. The underlying assumption of homogeneity is not always valid, heterogeneity can arise due to geometrical shape variations or distinct conformational states. We introduce a Point Cloud Variational Auto-Encoder that works directly on 2D and 3D localization data, to detect multiple modes of variation in such datasets. The computing time is on the order of a few minutes, enabled by the linear scaling with dataset size, and fast network training in just four epochs. The use of lists of localization data instead of pixelated images leads to just minor differences in computational burden between 2D and 3D cases. With the proposed method, we detected radius variation in 2D Nuclear Pore Complex data, height variations in 3D DNA origami tetrahedron data, and both radius and height variations in 3D Nuclear Pore Complex data. In all cases, the detected variations were on the few nanometer scale.},
}
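As a companion to the entry above, here is a minimal Python/PyTorch sketch of a permutation-invariant point-cloud encoder with the variational reparameterization step. The layer sizes, latent dimension, and input shapes are illustrative assumptions and do not reproduce the published network.

# Minimal sketch, not the published network: a per-point MLP, symmetric
# max-pooling over the localization list, and the VAE reparameterization.
import torch
import torch.nn as nn

class PointCloudVAEEncoder(nn.Module):
    def __init__(self, in_dim=3, hidden=64, latent=2):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)

    def forward(self, points):                 # points: (batch, n_points, 3)
        h = self.point_mlp(points)             # per-localization features
        h = h.max(dim=1).values                # symmetric pooling over the list
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z, mu, logvar

enc = PointCloudVAEEncoder()
z, mu, logvar = enc(torch.randn(4, 500, 3))    # 4 particles, 500 localizations each
print(z.shape)                                  # torch.Size([4, 2])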
RevDate: 2025-12-07
Identifying and assessing the cloud computing implementation drivers for sustainable building projects.
Scientific reports, 15(1):43122.
Sustainability must be considered at every phase of decision-making on construction project execution to obtain its full benefits without compromising the project objectives. Cloud computing (CC) has been a valued tool for successful and viable building processes in many countries over the past twenty years, and CC and its drivers have clearly supported the sustainability targets of quality, cost, and time. In Egypt, however, CC adoption by the building industry remains limited. The aim of this study is therefore to build a decision-support model for the drivers of CC adoption by analyzing the relationships among these drivers in the Egyptian building business. Candidate drivers were first derived from various sources in the literature and then assessed through a questionnaire survey of 106 building practitioners in Egypt. The study employed exploratory factor analysis (EFA) to validate the findings derived from the survey instrument. The results categorized the drivers into three groups: Technology Drivers, Client Support Drivers, and Organization Drivers. Structural equation modeling using partial least squares (PLS-SEM) was then applied to test the relationships and rank their influence. Findings indicate that Technology is the most significant driver of CC adoption (β = 0.378, p < 0.001), followed closely by Client Support (β = 0.372, p < 0.001) and Organization (β = 0.360, p < 0.001). These findings can serve as a baseline for decisions on improving the cost-effectiveness of CC and its capacity to increase efficiency in the building sector. The study therefore adds to the understanding of contemporary construction management and engineering by extending the existing literature on CC adoption drivers and their effects on the building industry.
Additional Links: PMID-41339371
PubMed:
Citation:
show bibtex listing
@article {pmid41339371,
year = {2025},
author = {Alkersh, M and Alhusban, M},
title = {Identifying and assessing the cloud computing implementation drivers for sustainable building projects.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {43122},
pmid = {41339371},
issn = {2045-2322},
support = {(PSAU/2024/01/ 29773)//Prince Sattam bin Abdulaziz University/ ; },
abstract = {Sustainability must be considered at every phase of decision-making on construction project execution to obtain its full benefits without compromising the project objectives. Cloud computing (CC) has been a valued tool for successful and viable building processes in many countries over the past twenty years, and CC and its drivers have clearly supported the sustainability targets of quality, cost, and time. In Egypt, however, CC adoption by the building industry remains limited. The aim of this study is therefore to build a decision-support model for the drivers of CC adoption by analyzing the relationships among these drivers in the Egyptian building business. Candidate drivers were first derived from various sources in the literature and then assessed through a questionnaire survey of 106 building practitioners in Egypt. The study employed exploratory factor analysis (EFA) to validate the findings derived from the survey instrument. The results categorized the drivers into three groups: Technology Drivers, Client Support Drivers, and Organization Drivers. Structural equation modeling using partial least squares (PLS-SEM) was then applied to test the relationships and rank their influence. Findings indicate that Technology is the most significant driver of CC adoption (β = 0.378, p < 0.001), followed closely by Client Support (β = 0.372, p < 0.001) and Organization (β = 0.360, p < 0.001). These findings can serve as a baseline for decisions on improving the cost-effectiveness of CC and its capacity to increase efficiency in the building sector. The study therefore adds to the understanding of contemporary construction management and engineering by extending the existing literature on CC adoption drivers and their effects on the building industry.},
}
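To make the analysis pipeline of the entry above concrete, the following Python sketch shows only the exploratory-factor-analysis step on simulated Likert-scale responses (the PLS-SEM stage would require a dedicated SEM package). The item count and random data are assumptions; only the three-factor structure mirrors the study.

# Minimal sketch of the EFA step only; survey responses are simulated.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(106, 12)).astype(float)  # 106 respondents, 12 Likert items

X = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings: rows = questionnaire items, columns = the three driver groups
# (e.g. Technology, Client Support, Organization in the study's terms).
print(np.round(fa.components_.T, 2))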
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Cloud-based Risk Stratification Platform for Cardiovascular Disease, Depression and Comorbidities.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
There is strong clinical evidence that patients with depression have a high probability of exhibiting cardiovascular disease (CVD) and vice versa. Thus, it is important to accurately identify these patients to provide optimal management of the comorbid conditions. Although the existing literature focuses on the development of artificial intelligence (AI) models for the diagnosis of CVD and/or depression, there is currently no reported tool or system that integrates such models for clinical practice. In this work, we present a cloud-based platform to enable easier, accurate, and cost-effective diagnosis of CVD and depression. The platform is an integrated cloud-enabled computing unit that executes AI algorithms and provides data exchange services using the REST (Representational State Transfer) architecture. It enables seamless and transparent interfacing of AI models and applications for end-users. During development, a variety of state-of-the-art technologies and architectural models were integrated, including a Payara Application Server, the Python 3 programming environment, and a MySQL database server. Java SDK 11 was used to develop the full-stack API of the user interfaces and the back-end logic, including the REST interfaces. The platform is hosted on a Linux Virtual Machine (VM). The development resulted in a cost-effective, accurate, and efficient tool for the risk stratification of depression and CVD. Clinical Relevance: This is a state-of-the-art cloud-based platform for the risk stratification of CVD and depression. For example, cardiologists and psychiatrists can use this platform to identify patients with CVD and depression and then prescribe more detailed examinations.
Additional Links: PMID-41335843
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41335843,
year = {2025},
author = {Kalatzis, F and Tsakanikas, V and Pezoulas, VC and Tassi, S and Tsarapatsani, K and Bourantas, G and Fotiadis, D and Sakellarios, A},
title = {A Cloud-based Risk Stratification Platform for Cardiovascular Disease, Depression and Comorbidities.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11253640},
pmid = {41335843},
issn = {2694-0604},
mesh = {*Cardiovascular Diseases/diagnosis/epidemiology ; Humans ; *Depression/diagnosis/epidemiology ; *Cloud Computing ; Comorbidity ; Risk Assessment ; Algorithms ; Artificial Intelligence ; },
abstract = {There is strong clinical evidence that patients with depression have a high probability of exhibiting cardiovascular disease (CVD) and vice versa. Thus, it is important to accurately identify these patients to provide optimal management of the comorbid conditions. Although the existing literature focuses on the development of artificial intelligence (AI) models for the diagnosis of CVD and/or depression, there is currently no reported tool or system that integrates such models for clinical practice. In this work, we present a cloud-based platform to enable easier, accurate, and cost-effective diagnosis of CVD and depression. The platform is an integrated cloud-enabled computing unit that executes AI algorithms and provides data exchange services using the REST (Representational State Transfer) architecture. It enables seamless and transparent interfacing of AI models and applications for end-users. During development, a variety of state-of-the-art technologies and architectural models were integrated, including a Payara Application Server, the Python 3 programming environment, and a MySQL database server. Java SDK 11 was used to develop the full-stack API of the user interfaces and the back-end logic, including the REST interfaces. The platform is hosted on a Linux Virtual Machine (VM). The development resulted in a cost-effective, accurate, and efficient tool for the risk stratification of depression and CVD. Clinical Relevance: This is a state-of-the-art cloud-based platform for the risk stratification of CVD and depression. For example, cardiologists and psychiatrists can use this platform to identify patients with CVD and depression and then prescribe more detailed examinations.},
}
MeSH Terms:
show MeSH Terms
*Cardiovascular Diseases/diagnosis/epidemiology
Humans
*Depression/diagnosis/epidemiology
*Cloud Computing
Comorbidity
Risk Assessment
Algorithms
Artificial Intelligence
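As an illustration of the REST-style interface described in the entry above, here is a minimal Python/Flask sketch. The route name, payload fields, and scoring rule are hypothetical placeholders, not the platform's actual API (which the abstract says is implemented in Java on a Payara server).

# Minimal sketch of a REST endpoint for risk stratification; the placeholder
# score stands in for the platform's AI models and is not a clinical rule.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/risk", methods=["POST"])
def risk_stratification():
    data = request.get_json(force=True)
    # Hypothetical inputs: a CVD marker in [0, 1] and a PHQ-9 depression score (0-27).
    score = 0.6 * float(data.get("cvd_marker", 0)) + 0.4 * float(data.get("phq9", 0)) / 27.0
    return jsonify({"risk_score": round(min(score, 1.0), 3)})

if __name__ == "__main__":
    app.run(port=8080)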
RevDate: 2026-01-03
CmpDate: 2026-01-02
Analysis of clinical, single cell, and spatial data from the Human Tumor Atlas Network (HTAN) with massively distributed cloud-based queries.
Research square.
Cancer research increasingly relies on large-scale, multimodal datasets that capture the complexity of tumor ecosystems across diverse patients, cancer types, and disease stages. The Human Tumor Atlas Network (HTAN) generates such data, including single-cell transcriptomics, proteomics, and multiplexed imaging. However, the volume and heterogeneity of the data present challenges for researchers seeking to integrate, explore, and analyze these datasets at scale. To this end, HTAN developed a cloud-based infrastructure that transforms clinical and assay metadata into aggregate Google BigQuery tables, hosted through the Institute for Systems Biology Cancer Gateway in the Cloud (ISB-CGC). This infrastructure introduces two key innovations: (1) a provenance-based HTAN ID table that simplifies cohort construction and cross-assay integration, and (2) the novel adaptation of BigQuery's geospatial functions for use in spatial biology, enabling neighborhood and correlation analysis of tumor microenvironments. We demonstrate these capabilities through R and Python notebooks that highlight use cases such as identifying precancer and organ-specific sample cohorts, integrating multimodal datasets, and analyzing single-cell and spatial data. By lowering technical and computational barriers, this infrastructure provides a cost-effective and intuitive entry point for researchers, highlighting the potential of cloud-based platforms to accelerate cancer discoveries.
Additional Links: PMID-41333415
PubMed:
Citation:
show bibtex listing
@article {pmid41333415,
year = {2025},
author = {Gibbs, DL and Pozhidayeva, D and Katariya, Y and Aguilar, B and Anton, K and Lau, C and Longabaugh, WJ and de Bruijn, I and Lash, A and Nikolov, M and Altreuter, J and Clayton, A and Gopalan, A and Taylor, AJ and Schultz, N and Cerami, E and Thorsson, V},
title = {Analysis of clinical, single cell, and spatial data from the Human Tumor Atlas Network (HTAN) with massively distributed cloud-based queries.},
journal = {Research square},
volume = {},
number = {},
pages = {},
pmid = {41333415},
issn = {2693-5015},
support = {U24 CA233243/CA/NCI NIH HHS/United States ; },
abstract = {Cancer research increasingly relies on large-scale, multimodal datasets that capture the complexity of tumor ecosystems across diverse patients, cancer types, and disease stages. The Human Tumor Atlas Network (HTAN) generates such data, including single-cell transcriptomics, proteomics, and multiplexed imaging. However, the volume and heterogeneity of the data present challenges for researchers seeking to integrate, explore, and analyze these datasets at scale. To this end, HTAN developed a cloud-based infrastructure that transforms clinical and assay metadata into aggregate Google BigQuery tables, hosted through the Institute for Systems Biology Cancer Gateway in the Cloud (ISB-CGC). This infrastructure introduces two key innovations: (1) a provenance-based HTAN ID table that simplifies cohort construction and cross-assay integration, and (2) the novel adaptation of BigQuery's geospatial functions for use in spatial biology, enabling neighborhood and correlation analysis of tumor microenvironments. We demonstrate these capabilities through R and Python notebooks that highlight use cases such as identifying precancer and organ-specific sample cohorts, integrating multimodal datasets, and analyzing single-cell and spatial data. By lowering technical and computational barriers, this infrastructure provides a cost-effective and intuitive entry point for researchers, highlighting the potential of cloud-based platforms to accelerate cancer discoveries.},
}
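To give a flavor of the BigQuery geospatial approach described in the entry above, here is a hedged Python sketch using the google-cloud-bigquery client. The project, dataset, and column names are hypothetical, and the coordinate handling is schematic: it assumes cell centroids have already been rescaled into the longitude/latitude range, so the distance threshold is expressed in the scaled (geodesic) units rather than microns.

# Minimal sketch of a BigQuery GIS neighborhood query; table and column
# names are placeholders, not the published HTAN tables.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials are configured

sql = """
SELECT a.cell_id,
       COUNT(b.cell_id) AS n_neighbors
FROM `my-project.htan_demo.cells` AS a
JOIN `my-project.htan_demo.cells` AS b
  ON a.cell_id != b.cell_id
 AND ST_DWITHIN(ST_GEOGPOINT(a.x_scaled, a.y_scaled),
                ST_GEOGPOINT(b.x_scaled, b.y_scaled),
                30)  -- neighborhood radius in the scaled coordinate space
GROUP BY a.cell_id
ORDER BY n_neighbors DESC
LIMIT 10
"""

for row in client.query(sql).result():
    print(row.cell_id, row.n_neighbors)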
RevDate: 2025-12-05
CmpDate: 2025-12-03
SZBC-AI4TCM: a comprehensive web-based computing platform for traditional Chinese medicine research and development.
Frontiers in pharmacology, 16:1698202.
INTRODUCTION: In recent years, the increasing complexity and volume of data in traditional Chinese medicine (TCM) research have rendered the conventional experimental methods inadequate for modern TCM development. The analysis of intricate TCM data demands proficiency in multiple programming languages, artificial intelligence (AI) techniques, and bioinformatics, posing significant challenges for researchers lacking such expertise. Thus, there is an urgent need to develop user-friendly software tools that encompass various aspects of TCM data analysis.
METHODS: We developed SZBC-AI4TCM, a comprehensive web-based computing platform for traditional Chinese medicine that embodies the "ShuZhiBenCao" (Digital Herbal) concept through artificial intelligence and is designed to accelerate TCM research and reduce costs by integrating advanced AI algorithms and bioinformatics tools.
RESULTS: Leveraging machine learning, deep learning, and big data analytics, the platform enables end-to-end analysis, from TCM formulation and mechanism elucidation to drug screening. Featuring an intuitive visual interface and hardware-software acceleration, SZBC-AI4TCM allows researchers without computational backgrounds to conduct comprehensive and accurate analyses efficiently. By using the TCM research in Alzheimer's disease as an example, we showcase its functionalities, operational methods, and analytical capabilities.
DISCUSSION: SZBC-AI4TCM not only provides robust computational support for TCM research but also significantly enhances efficiency and reduces costs. It offers novel approaches for studying complex TCM systems, thereby advancing the modernization of TCM. As interdisciplinary collaboration and cloud computing continue to evolve, SZBC-AI4TCM is poised to play a strong role in TCM research, foster its growth, and contribute to global health. SZBC-AI4TCM is publicly accessible at https://ai.tasly.com/ui/#/frontend/login.
Additional Links: PMID-41333020
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41333020,
year = {2025},
author = {Lang, J and Guo, K and Yang, J and Yang, P and Wei, Y and Han, J and Zhao, S and Liu, Z and Yi, H and Yan, X and Chen, B and Wang, C and Xu, J and Ge, J and Zhang, W and Zhou, X and Fang, J and Su, J and Yan, K and Hu, Y and Wang, W},
title = {SZBC-AI4TCM: a comprehensive web-based computing platform for traditional Chinese medicine research and development.},
journal = {Frontiers in pharmacology},
volume = {16},
number = {},
pages = {1698202},
pmid = {41333020},
issn = {1663-9812},
abstract = {INTRODUCTION: In recent years, the increasing complexity and volume of data in traditional Chinese medicine (TCM) research have rendered the conventional experimental methods inadequate for modern TCM development. The analysis of intricate TCM data demands proficiency in multiple programming languages, artificial intelligence (AI) techniques, and bioinformatics, posing significant challenges for researchers lacking such expertise. Thus, there is an urgent need to develop user-friendly software tools that encompass various aspects of TCM data analysis.
METHODS: We developed SZBC-AI4TCM, a comprehensive web-based computing platform for traditional Chinese medicine that embodies the "ShuZhiBenCao" (Digital Herbal) concept through artificial intelligence and is designed to accelerate TCM research and reduce costs by integrating advanced AI algorithms and bioinformatics tools.
RESULTS: Leveraging machine learning, deep learning, and big data analytics, the platform enables end-to-end analysis, from TCM formulation and mechanism elucidation to drug screening. Featuring an intuitive visual interface and hardware-software acceleration, SZBC-AI4TCM allows researchers without computational backgrounds to conduct comprehensive and accurate analyses efficiently. By using the TCM research in Alzheimer's disease as an example, we showcase its functionalities, operational methods, and analytical capabilities.
DISCUSSION: SZBC-AI4TCM not only provides robust computational support for TCM research but also significantly enhances efficiency and reduces costs. It offers novel approaches for studying complex TCM systems, thereby advancing the modernization of TCM. As interdisciplinary collaboration and cloud computing continue to evolve, SZBC-AI4TCM is poised to play a strong role in TCM research, foster its growth, and contribute to global health. SZBC-AI4TCM is publicly accessible at https://ai.tasly.com/ui/\#/frontend/login.},
}
RevDate: 2025-12-22
CmpDate: 2025-12-22
FERAL: A Video-Understanding System for Direct Video-to-Behavior Mapping.
bioRxiv : the preprint server for biology.
Animal behavior unfolds continuously in time, yet quantitative analyses often require segmenting it into discrete, interpretable states. Although manual annotation can achieve this, it remains slow, subjective, and difficult to scale. Most automated pipelines use tracked body parts to infer actions, but are limited by tracking quality, and discard much of the visual information contained in raw videos. Here we present FERAL (Feature Extraction for Recognition of Animal Locomotion), a supervised video-understanding toolkit that bridges this gap by mapping raw video directly to frame-level behavioral labels, bypassing the need for pose estimation. Across benchmarks, FERAL outperforms state-of-the-art pose- and video-based baselines: on a benchmarking dataset of mouse social interaction, it surpasses Google's Videoprism using just a quarter of the training data. FERAL generalizes across species, recording conditions, and levels of behavioral organization: from single-animal locomotion to complex social interactions and emergent collective dynamics. Released as a user-friendly, open-source package, FERAL overcomes the challenges of traditional approaches, integrates easily with existing analysis pipelines, and can be deployed locally or on cloud servers with a few clicks. By mapping raw video directly to annotated behavior, FERAL lowers the barrier to scalable, cross-species behavioral quantification and broadens the range of behavioral analyses possible in both the lab and the wild.
Additional Links: PMID-41332589
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41332589,
year = {2025},
author = {Skovorodnikov, P and Zhao, J and Buck, F and Kay, T and Frank, DD and Koger, B and Costelloe, BR and Couzin, ID and Razzauti, J},
title = {FERAL: A Video-Understanding System for Direct Video-to-Behavior Mapping.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
pmid = {41332589},
issn = {2692-8205},
support = {F31 NS132477/NS/NINDS NIH HHS/United States ; K99 DC021506/DC/NIDCD NIH HHS/United States ; T32 GM152349/GM/NIGMS NIH HHS/United States ; },
abstract = {Animal behavior unfolds continuously in time, yet quantitative analyses often require segmenting it into discrete, interpretable states. Although manual annotation can achieve this, it remains slow, subjective, and difficult to scale. Most automated pipelines use tracked body parts to infer actions, but are limited by tracking quality, and discard much of the visual information contained in raw videos. Here we present FERAL (Feature Extraction for Recognition of Animal Locomotion), a supervised video-understanding toolkit that bridges this gap by mapping raw video directly to frame-level behavioral labels, bypassing the need for pose estimation. Across benchmarks, FERAL outperforms state-of-the-art pose- and video-based baselines: on a benchmarking dataset of mouse social interaction, it surpasses Google's Videoprism using just a quarter of the training data. FERAL generalizes across species, recording conditions, and levels of behavioral organization: from single-animal locomotion to complex social interactions and emergent collective dynamics. Released as a user-friendly, open-source package, FERAL overcomes the challenges of traditional approaches, integrates easily with existing analysis pipelines, and can be deployed locally or on cloud servers with a few clicks. By mapping raw video directly to annotated behavior, FERAL lowers the barrier to scalable, cross-species behavioral quantification and broadens the range of behavioral analyses possible in both the lab and the wild.},
}
RevDate: 2025-12-02
Improved multi-strategy secretary bird optimization for efficient IoT task scheduling in fog cloud computing.
Scientific reports pii:10.1038/s41598-025-30918-1 [Epub ahead of print].
Applications designed for real-time IoT operations improve cloud-based service utilization due to their rapid scalability. Though cloud computing appears to be more effective for data processing and storage in a range of IoT applications, its real-time scalability presents issues in fulfilling the demands of network bandwidth and latency-sensitive applications. In this context, fog computing is shown to be a complementary paradigm to cloud computing, providing extra benefits and capabilities aimed at extending cloud services to end users and edge devices. Due to the restricted capabilities of fog nodes, only lightweight activities can be conducted locally, while jobs requiring more processing time are handled in the cloud. As a result, an Improved Multi-Strategy Enhanced Secretary Bird Optimization Algorithm using Reinforcement Learning (IMSESBOA + RL) for IoT Task Scheduling (TS) mechanism is presented to reduce data processing time and enhance Quality of Service (QoS) in fog-cloud computing. This IMSESBOA + RL approach is designed as an efficient scheduling model that investigates and processes various scalable quantities of tasks while minimizing latency and energy costs. It uses a multi-objective methodology based on the Secretary Bird Optimization Algorithm's (SBOA) balanced exploration and exploitation capabilities, whose multi-strategy enhancements help maximize the resource utilization rate and shorten the makespan. It further uses RL to adapt dynamically to new workloads, learning optimal strategies through trial-and-error interaction with the environment. The simulation findings of the IMSESBOA + RL approach verified that it reduced makespan by 19.42% and execution time by 18.32% compared to the baseline approaches with various jobs originating from IoT applications.
Additional Links: PMID-41331066
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41331066,
year = {2025},
author = {Sangeetha, K and Kanthimathi, M},
title = {Improved multi-strategy secretary bird optimization for efficient IoT task scheduling in fog cloud computing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-30918-1},
pmid = {41331066},
issn = {2045-2322},
abstract = {Applications designed for real-time IoT operations improve cloud-based service utilization due to their rapid scalability. Though cloud computing appears to be more effective for data processing and storage in a range of IoT applications, its real-time scalability presents issues in fulfilling the demands of network bandwidth and latency-sensitive applications. In this context, fog computing is shown to be a complementary paradigm to cloud computing, providing extra benefits and capabilities aimed at extending cloud services to end users and edge devices. Due to the restricted capabilities of fog nodes, only lightweight activities can be conducted locally, while jobs requiring more processing time are handled in the cloud. As a result, an Improved Multi-Strategy Enhanced Secretary Bird Optimization Algorithm using Reinforcement Learning (IMSESBOA + RL) for IoT Task Scheduling (TS) mechanism is presented to reduce data processing time and enhance Quality of Service (QoS) in fog-cloud computing. This IMSESBOA + RL approach is designed as an efficient scheduling model that investigates and processes various scalable quantities of tasks while minimizing latency and energy costs. It uses a multi-objective methodology based on the Secretary Bird Optimization Algorithm's (SBOA) balanced exploration and exploitation capabilities, whose multi-strategy enhancements help maximize the resource utilization rate and shorten the makespan. It further uses RL to adapt dynamically to new workloads, learning optimal strategies through trial-and-error interaction with the environment. The simulation findings of the IMSESBOA + RL approach verified that it reduced makespan by 19.42% and execution time by 18.32% compared to the baseline approaches with various jobs originating from IoT applications.},
}
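To make the scheduling objective in the entry above concrete, the following Python sketch assigns tasks to fog and cloud nodes with a greedy earliest-completion rule and reports the makespan. It is a simple baseline standing in for the paper's SBOA + RL search; node speeds and task sizes are invented numbers.

# Minimal sketch of makespan evaluation with a greedy baseline scheduler.
import numpy as np

rng = np.random.default_rng(1)
task_sizes = rng.uniform(5, 50, size=40)            # work units per IoT task
node_speed = np.array([2.0, 2.5, 3.0, 10.0, 12.0])  # 3 fog nodes + 2 cloud nodes

finish = np.zeros(len(node_speed))                   # per-node busy time
for size in task_sizes:
    completion = finish + size / node_speed          # finish time if the task goes to each node
    finish[np.argmin(completion)] = completion.min() # greedy: pick the earliest completion

print("makespan:", round(finish.max(), 2))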
RevDate: 2026-01-03
BHFVAL: Block chain-Enabled Hierarchical Federated Variational Auto encoder Framework for Secure Intrusion Detection in Vehicular Networks.
Scientific reports, 15(1):45742.
In modern vehicular systems, providing secure data processing with decentralized learning efficacy under limited computational resources and varying network conditions is challenging. This paper introduces an intelligent, effective, and secure learning model for the Internet of Vehicles (IoV) as a solution to the vulnerability of centralized architectures and the inefficiency of existing federated learning in adversarial environments. The Blockchain-Enabled Hierarchical Federated Variational Autoencoder Learning (BHFVAL) model uses a multilevel learning process on edge, fog, and cloud layers protected by a Reputation-Based Byzantine Fault Tolerance (RBFT) mechanism filtering out incorrect inputs during model aggregation. HFVAL is at its core, providing adaptive encoding and learning task assignments based on dynamic networks and resource status. To minimize communication latency, the platform employs a lightweight edge-computing (LEC) module to enable proximity-based processing. Hyperparameter optimization is enabled using the Osprey Optimization Algorithm (OOA) for maximum convergence effectiveness. Secure communication is achieved by implementing a Lightweight Secure Communication Protocol (LSCP) on Elliptic Curve-Based Homomorphic Encryption (ECHE) to enable encrypted V2X communication with minimal computational overhead and reduced latency. Extensive experimentation using the UNSW-NB15 and CIC-IDS-2017 datasets exhibited strong detection performance: UNSW-NB15 achieved 96.83% accuracy and 96.65% F1-score under IID, slightly declining to 95.74% accuracy and 95.40% F1-score under non-IID conditions. The CIC-IDS-2017 achieved 97.36% accuracy, 97.2% AUROC, and 97.1% F1-score under IID, slightly declining to 96.40% accuracy and 96.20% F1-score under non-IID conditions. The results attest to the dependability, adaptability, and efficacy of the framework in decentralized privacy-sensitive vehicular networks.
Additional Links: PMID-41326496
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41326496,
year = {2025},
author = {Visuvanathan, GE and Sayeed, MS and Yogarayan, S},
title = {BHFVAL: Block chain-Enabled Hierarchical Federated Variational Auto encoder Framework for Secure Intrusion Detection in Vehicular Networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {45742},
pmid = {41326496},
issn = {2045-2322},
abstract = {In modern vehicular systems, providing secure data processing with decentralized learning efficacy under limited computational resources and varying network conditions is challenging. This paper introduces an intelligent, effective, and secure learning model for the Internet of Vehicles (IoV) as a solution to the vulnerability of centralized architectures and the inefficiency of existing federated learning in adversarial environments. The Blockchain-Enabled Hierarchical Federated Variational Autoencoder Learning (BHFVAL) model uses a multilevel learning process on edge, fog, and cloud layers protected by a Reputation-Based Byzantine Fault Tolerance (RBFT) mechanism filtering out incorrect inputs during model aggregation. HFVAL is at its core, providing adaptive encoding and learning task assignments based on dynamic networks and resource status. To minimize communication latency, the platform employs a lightweight edge-computing (LEC) module to enable proximity-based processing. Hyperparameter optimization is enabled using the Osprey Optimization Algorithm (OOA) for maximum convergence effectiveness. Secure communication is achieved by implementing a Lightweight Secure Communication Protocol (LSCP) on Elliptic Curve-Based Homomorphic Encryption (ECHE) to enable encrypted V2X communication with minimal computational overhead and reduced latency. Extensive experimentation using the UNSW-NB15 and CIC-IDS-2017 datasets exhibited strong detection performance: UNSW-NB15 achieved 96.83% accuracy and 96.65% F1-score under IID, slightly declining to 95.74% accuracy and 95.40% F1-score under non-IID conditions. The CIC-IDS-2017 achieved 97.36% accuracy, 97.2% AUROC, and 97.1% F1-score under IID, slightly declining to 96.40% accuracy and 96.20% F1-score under non-IID conditions. The results attest to the dependability, adaptability, and efficacy of the framework in decentralized privacy-sensitive vehicular networks.},
}
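As a rough illustration of the reputation-based aggregation idea in the entry above, the following Python sketch down-weights client updates that stray far from the coordinate-wise median before averaging. It is a generic Byzantine-robust heuristic, not the paper's RBFT protocol.

# Minimal sketch of reputation-weighted federated aggregation with a crude
# Byzantine filter; updates and the outlier client are simulated.
import numpy as np

rng = np.random.default_rng(2)
updates = rng.normal(0, 0.1, size=(10, 5))   # 10 client updates, 5 parameters each
updates[0] += 5.0                            # one Byzantine client

median = np.median(updates, axis=0)
dist = np.linalg.norm(updates - median, axis=1)
reputation = 1.0 / (1.0 + dist)              # closer to consensus -> higher reputation
reputation[dist > 3 * np.median(dist)] = 0.0 # reject clear outliers entirely

weights = reputation / reputation.sum()
aggregate = weights @ updates                # filtered global update
print(np.round(aggregate, 3))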
RevDate: 2025-12-01
CmpDate: 2025-12-01
An Open-source Protocol for Deep Learning-based Segmentation of Tubular Structures in 3D Fluorescence Microscopy Images.
Journal of visualized experiments : JoVE.
Segmenting tubular structures in dense biological tissues from 3D fluorescence microscopy images is critical to study complex tissue but remains challenging due to image complexity, variability, and quality issues. Here, we introduce an open-source, user-friendly toolbox for end-to-end segmentation of tubular structures in 3D images, accessible to researchers without formal programming training. The toolbox features interactive Jupyter notebooks implementing two simple yet efficient deep learning architectures -- 3D U-Net and 3D U-Net with attention mechanisms -- for precise 3D segmentation of tubular networks. A key innovation is our simulation-based data augmentation strategy, which enhances model performance even with minimal training data (as few as one 3D image). Employing user-provided masks, the protocol generates artificial microscopy images with varying signal-to-noise ratios and simulates realistic imaging artifacts, including uneven staining, point spread function convolution, axial intensity variations, and Poisson and Gaussian noise. The protocol systematically guides users through data augmentation, model training, qualitative and quantitative evaluation on test sets, and inference on new images. We validate the toolbox by analyzing two morphologically distinct tubular networks in mouse liver tissue -- the bile canaliculi and sinusoidal networks -- demonstrating that both architectures perform well, with the attention U-Net slightly outperforming the standard U-Net when trained with augmented data. Our comprehensive toolbox, executable on local Graphics Processing Units (GPUs), high-performance computing clusters, or cloud platforms, contributes to the democratization of advanced image analysis for a broad spectrum of researchers.
Additional Links: PMID-41325317
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41325317,
year = {2025},
author = {Velasco, R and Pérez-Gallardo, C and Segovia-Miranda, F and Morales-Navarrete, H},
title = {An Open-source Protocol for Deep Learning-based Segmentation of Tubular Structures in 3D Fluorescence Microscopy Images.},
journal = {Journal of visualized experiments : JoVE},
volume = {},
number = {225},
pages = {},
doi = {10.3791/68004},
pmid = {41325317},
issn = {1940-087X},
mesh = {*Deep Learning ; Microscopy, Fluorescence/methods ; Animals ; *Imaging, Three-Dimensional/methods ; Mice ; Software ; Liver ; },
abstract = {Segmenting tubular structures in dense biological tissues from 3D fluorescence microscopy images is critical to study complex tissue but remains challenging due to image complexity, variability, and quality issues. Here, we introduce an open-source, user-friendly toolbox for end-to-end segmentation of tubular structures in 3D images, accessible to researchers without formal programming training. The toolbox features interactive Jupyter notebooks implementing two simple yet efficient deep learning architectures -- 3D U-Net and 3D U-Net with attention mechanisms -- for precise 3D segmentation of tubular networks. A key innovation is our simulation-based data augmentation strategy, which enhances model performance even with minimal training data (as few as one 3D image). Employing user-provided masks, the protocol generates artificial microscopy images with varying signal-to-noise ratios and simulates realistic imaging artifacts, including uneven staining, point spread function convolution, axial intensity variations, and Poisson and Gaussian noise. The protocol systematically guides users through data augmentation, model training, qualitative and quantitative evaluation on test sets, and inference on new images. We validate the toolbox by analyzing two morphologically distinct tubular networks in mouse liver tissue -- the bile canaliculi and sinusoidal networks -- demonstrating that both architectures perform well, with the attention U-Net slightly outperforming the standard U-Net when trained with augmented data. Our comprehensive toolbox, executable on local Graphics Processing Units (GPUs), high-performance computing clusters, or cloud platforms, contributes to the democratization of advanced image analysis for a broad spectrum of researchers.},
}
MeSH Terms:
show MeSH Terms
*Deep Learning
Microscopy, Fluorescence/methods
Animals
*Imaging, Three-Dimensional/methods
Mice
Software
Liver
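To illustrate the simulation-based augmentation strategy described in the entry above, here is a minimal Python sketch that blurs a toy binary mask with a Gaussian stand-in for the point spread function, imposes an axial intensity fall-off, and adds Poisson and Gaussian noise. All shapes and parameter values are illustrative assumptions, not the protocol's defaults.

# Minimal sketch of synthetic augmentation for 3D fluorescence stacks.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
mask = np.zeros((32, 64, 64), dtype=float)
mask[:, 28:36, :] = 1.0                                 # toy tubular structure

blurred = gaussian_filter(mask, sigma=(2.5, 1.0, 1.0))  # anisotropic PSF proxy
z_profile = np.linspace(1.0, 0.6, mask.shape[0])[:, None, None]
signal = 200.0 * blurred * z_profile                    # axial intensity variation

noisy = rng.poisson(signal).astype(float)               # shot (Poisson) noise
noisy += rng.normal(0, 3.0, size=noisy.shape)           # camera read (Gaussian) noise
print(noisy.shape, round(noisy.mean(), 1))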
RevDate: 2025-11-29
AIFS: an efficient face recognition method based on AI and enhanced few-shot learning.
Scientific reports pii:10.1038/s41598-025-29992-2 [Epub ahead of print].
The growing demand for real-time, adaptive facial recognition in resource-constrained environments like telemedicine, surveillance, and biometric authentication necessitates scalable AI solutions. Existing systems often falter under low-data conditions or limited computational resources. This paper introduces AIFS, an efficient and hybrid facial recognition framework that unifies traditional feature-based learning with modern few-shot deep learning under a shared Siamese architecture. The framework proposes two synergistic approaches: (1) a lightweight edge-oriented path using the Viola-Jones algorithm combined with Particle Swarm Optimization (PSO) for facial feature extraction within a Siamese network, optimized for low-power devices, and (2) a deep learning cloud-oriented path using a Siamese network with triplet loss, employing EfficientNetV2 and InceptionV3 as high-capacity feature encoders for enhanced generalization from limited examples. The proposed AIFS framework is validated across diverse platforms to simulate real-world deployment, with CPUs and Raspberry Pi representing resource-constrained edge devices, and GPUs representing high-capacity cloud environments. Tested on the Kaggle Face Recognition Dataset under a one-shot, low-data setting, AIFS achieves up to 99% accuracy. The results demonstrate a balance between latency, inference speed, and resource efficiency, confirming AIFS as a scalable and robust solution for real-time facial recognition in heterogeneous computing scenarios.
Additional Links: PMID-41318652
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41318652,
year = {2025},
author = {Nasralla, MM},
title = {AIFS: an efficient face recognition method based on AI and enhanced few-shot learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-29992-2},
pmid = {41318652},
issn = {2045-2322},
abstract = {The growing demand for real-time, adaptive facial recognition in resource-constrained environments like telemedicine, surveillance, and biometric authentication necessitates scalable AI solutions. Existing systems often falter under low-data conditions or limited computational resources. This paper introduces AIFS, an efficient and hybrid facial recognition framework that unifies traditional feature-based learning with modern few-shot deep learning under a shared Siamese architecture. The framework proposes two synergistic approaches: (1) a lightweight edge-oriented path using the Viola-Jones algorithm combined with Particle Swarm Optimization (PSO) for facial feature extraction within a Siamese network, optimized for low-power devices, and (2) a deep learning cloud-oriented path using a Siamese network with triplet loss, employing EfficientNetV2 and InceptionV3 as high-capacity feature encoders for enhanced generalization from limited examples. The proposed AIFS framework is validated across diverse platforms to simulate real-world deployment, with CPUs and Raspberry Pi representing resource-constrained edge devices, and GPUs representing high-capacity cloud environments. Tested on the Kaggle Face Recognition Dataset under a one-shot, low-data setting, AIFS achieves up to 99% accuracy. The results demonstrate a balance between latency, inference speed, and resource efficiency, confirming AIFS as a scalable and robust solution for real-time facial recognition in heterogeneous computing scenarios.},
}
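As a companion to the entry above, here is a minimal Python/PyTorch sketch of the triplet-loss objective used to train Siamese embeddings. The tiny linear encoder is a placeholder for EfficientNetV2/InceptionV3, and the margin value is an assumption.

# Minimal sketch of triplet loss over a shared (Siamese) embedding network.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # placeholder encoder

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Random stand-ins for anchor, positive, and negative face crops.
a, p, n = (embed(torch.randn(8, 3, 64, 64)) for _ in range(3))
loss = triplet_loss(a, p, n)
loss.backward()                       # gradients flow into the shared encoder
print(float(loss))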
RevDate: 2025-12-04
Smart transplant+: A HyCARE hybrid AI-cloud framework for intelligent donor-recipient matching, workflow automation, and post-transplant optimization.
Transplant immunology, 94:102332 pii:S0966-3274(25)00160-1 [Epub ahead of print].
Organ transplantation is a life-saving medical intervention to reverse end-stage organ failure. Despite its life-saving potential, organ transplantation faces inefficiencies like organ shortages, long wait times, and rejection risks due to manual, clinically limited donor-recipient matching. The rapid growth of AI and cloud computing offers new opportunities to enhance organ transplantation. This study proposes Smart Transplant+, a HyCARE system enabling intelligent matching, decision-making, and process automation. The architecture leverages a large Organ Transplant Dataset and advanced methods such as Feedforward Neural Networks and Genetic Algorithms to optimize donor-recipient matching. Gated Recurrent Units are utilized in pre-transplant risk prediction, and post-transplant care is augmented with real-time tracking by IoT-based wearable sensors. The system has been programmed using Python, along with software tools like TensorFlow for machine learning and AES encryption for secure data storage and transmission. The Smart Transplant+ system achieves 95-98 % accuracy, higher than existing methods, in identifying suitable donors and recipients and the likelihood of successful transplantation, and it greatly enhances organ transplant efficiency and success rates. This study illustrates the revolutionary potential of synergizing IoT, cloud technology, and AI to optimize transplant care and improve outcomes.
Additional Links: PMID-41317747
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41317747,
year = {2025},
author = {Pulakhandam, W and Chaluvadi, A and Vallu, VR and Padmavathy, R},
title = {Smart transplant+: A HyCARE hybrid AI-cloud framework for intelligent donor-recipient matching, workflow automation, and post-transplant optimization.},
journal = {Transplant immunology},
volume = {94},
number = {},
pages = {102332},
doi = {10.1016/j.trim.2025.102332},
pmid = {41317747},
issn = {1878-5492},
abstract = {Organ transplantation is a life-saving medical intervention to reverse end-stage organ failure. Despite its life-saving potential, organ transplantation faces inefficiencies like organ shortages, long wait times, and rejection risks due to manual, clinically limited donor-recipient matching. The rapid growth of AI and cloud computing offers new opportunities to enhance organ transplantation. This study proposes Smart Transplant+, a HyCARE system enabling intelligent matching, decision-making, and process automation. The architecture leverages a large Organ Transplant Dataset and advanced methods such as Feedforward Neural Networks and Genetic Algorithms to optimize donor-recipient matching. Gated Recurrent Units are utilized in pre-transplant risk prediction, and post-transplant care is augmented with real-time tracking by IoT-based wearable sensors. The system has been programmed using Python, along with software tools like TensorFlow for machine learning and AES encryption for secure data storage and transmission. The Smart Transplant+ system achieves 95-98 % accuracy, higher than existing methods, in identifying suitable donors and recipients and the likelihood of successful transplantation, and it greatly enhances organ transplant efficiency and success rates. This study illustrates the revolutionary potential of synergizing IoT, cloud technology, and AI to optimize transplant care and improve outcomes.},
}
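To illustrate the secure-storage step mentioned in the entry above, the following Python sketch encrypts a toy patient record with Fernet from the cryptography package (AES-128-CBC plus HMAC). It stands in for, and does not reproduce, the paper's AES configuration; the record fields are invented.

# Minimal sketch of symmetric encryption of a record before storage/transmission.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in practice the key comes from a key store
cipher = Fernet(key)

record = {"donor_id": "D-001", "hla": ["A*02:01", "B*07:02"], "spo2": 97}
token = cipher.encrypt(json.dumps(record).encode())   # ciphertext safe to store/send

restored = json.loads(cipher.decrypt(token).decode())
print(restored["donor_id"])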
RevDate: 2025-11-29
The construction of an integrated cloud network digital intelligence platform for rail transit based on artificial intelligence.
Scientific reports pii:10.1038/s41598-025-29732-6 [Epub ahead of print].
This study presents the design and validation of a closed-loop control platform for rail transit construction. The platform integrates multi-source data, enables real-time prediction, and supports AI-driven scheduling, with strategy execution and feedback implemented via digital twins. A three-layer architecture is constructed, comprising edge sensing, cloud computing, and intelligent interaction. The system incorporates data fusion middleware, an AI decision engine, and a 3D digital twins module. The operational workflow follows the perception-fusion-prediction/optimization-execution/feedback loop: edge devices collect on-site status, cloud middleware integrates and serves the data, the AI engine performs prediction and scheduling optimization, and the digital twins layer validates strategies and dispatches execution to the front end. At the data modeling level, a Transformer-Encoder-based multimodal temporal fusion model is designed, and graph attention networks are employed for heterogeneous structure modeling. Apache Kafka and Flink handle streaming data to achieve high-frequency, low-latency processing. The intelligent analysis layer integrates a Spatio-Temporal Graph Convolutional Network for passenger flow and construction period prediction, a Shifted Window Transformer for image recognition, and the Proximal Policy Optimization (PPO) algorithm for task scheduling optimization. Field tests in an urban rail construction project show that the platform maintains 91.6% accuracy in passenger flow prediction under high-concurrency conditions and achieves 98.2% accuracy in image recognition. PPO-based scheduling reduces average task completion time by 27.4%. The system sustains an average response latency of 280 ms, peak throughput of 27,000 messages per second, and over 95% closed-loop execution success rate. These results indicate that the platform meets its design targets in prediction accuracy, response latency, and scheduling efficiency under real-world conditions, providing a foundation for informatization and intelligent upgrading in urban rail transit.
Additional Links: PMID-41315657
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41315657,
year = {2025},
author = {Wang, K and Zhou, X and Guan, J},
title = {The construction of an integrated cloud network digital intelligence platform for rail transit based on artificial intelligence.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-29732-6},
pmid = {41315657},
issn = {2045-2322},
abstract = {This study presents the design and validation of a closed-loop control platform for rail transit construction. The platform integrates multi-source data, enables real-time prediction, and supports AI-driven scheduling, with strategy execution and feedback implemented via digital twins. A three-layer architecture is constructed, comprising edge sensing, cloud computing, and intelligent interaction. The system incorporates data fusion middleware, an AI decision engine, and a 3D digital twins module. The operational workflow follows the perception-fusion-prediction/optimization-execution/feedback loop: edge devices collect on-site status, cloud middleware integrates and serves the data, the AI engine performs prediction and scheduling optimization, and the digital twins layer validates strategies and dispatches execution to the front end. At the data modeling level, a Transformer-Encoder-based multimodal temporal fusion model is designed, and graph attention networks are employed for heterogeneous structure modeling. Apache Kafka and Flink handle streaming data to achieve high-frequency, low-latency processing. The intelligent analysis layer integrates a Spatio-Temporal Graph Convolutional Network for passenger flow and construction period prediction, a Shifted Window Transformer for image recognition, and the Proximal Policy Optimization (PPO) algorithm for task scheduling optimization. Field tests in an urban rail construction project show that the platform maintains 91.6% accuracy in passenger flow prediction under high-concurrency conditions and achieves 98.2% accuracy in image recognition. PPO-based scheduling reduces average task completion time by 27.4%. The system sustains an average response latency of 280 ms, peak throughput of 27,000 messages per second, and over 95% closed-loop execution success rate. These results indicate that the platform meets its design targets in prediction accuracy, response latency, and scheduling efficiency under real-world conditions, providing a foundation for informatization and intelligent upgrading in urban rail transit.},
}
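As an illustration of the streaming ingestion layer described in the entry above, here is a minimal Python sketch that publishes an edge-sensor reading to a Kafka topic with the kafka-python package. The broker address, topic name, and message fields are placeholders, not the platform's actual configuration.

# Minimal sketch: an edge sensor publishes status messages that cloud
# middleware (e.g. Flink jobs) consumes from the same topic.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

reading = {"site": "station-07", "sensor": "vibration", "value": 0.42, "ts": 1733210000}
producer.send("edge-status", reading)   # downstream stream-processing jobs read this topic
producer.flush()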
RevDate: 2025-12-01
A deep learning-based intelligent curriculum system for enhancing public music education: a case study across three universities in Southwest China.
Scientific reports, 15(1):42798.
Responding to national aesthetic education reforms, this study introduces a deep learning-driven platform to enhance public music education in Southwest China's universities. Utilizing LSTM and Transformer models, the system analyzes real-time student learning, predicts mastery trends, and delivers personalized feedback via a cloud-based interface. A semester-long experiment across Guizhou Minzu University, Guizhou University, and Xichang University compared three groups: traditional instruction, MOOC-based hybrid teaching, and AI-enhanced personalized learning. The AI group achieved 32% higher post-test mastery scores, with predictive models maintaining high accuracy (RMSE < 0.15). The platform supports adaptive assessments, intelligent feedback, and instructional decision-making, offering a scalable solution for AI integration in arts education, particularly in culturally diverse, data-scarce settings. This work informs policymakers and developers aiming to modernize aesthetic education through advanced computing.
Additional Links: PMID-41315653
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41315653,
year = {2025},
author = {Du, H and Butkaew, P},
title = {A deep learning-based intelligent curriculum system for enhancing public music education: a case study across three universities in Southwest China.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {42798},
pmid = {41315653},
issn = {2045-2322},
support = {YBS202324//Research on the Efficiency and Improvement Path of Public Music Education in Comprehensive Colleges and Universities in Southwest China/ ; },
abstract = {Responding to national aesthetic education reforms, this study introduces a deep learning-driven platform to enhance public music education in Southwest China's universities. Utilizing LSTM and Transformer models, the system analyzes real-time student learning, predicts mastery trends, and delivers personalized feedback via a cloud-based interface. A semester-long experiment across Guizhou Minzu University, Guizhou University, and Xichang University compared three groups: traditional instruction, MOOC-based hybrid teaching, and AI-enhanced personalized learning. The AI group achieved 32% higher post-test mastery scores, with predictive models maintaining high accuracy (RMSE < 0.15). The platform supports adaptive assessments, intelligent feedback, and instructional decision-making, offering a scalable solution for AI integration in arts education, particularly in culturally diverse, data-scarce settings. This work informs policymakers and developers aiming to modernize aesthetic education through advanced computing.},
}
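To sketch the kind of sequence model the entry above describes, the following Python/TensorFlow example trains a small LSTM to regress a mastery score from weekly activity features. The feature count, sequence length, and random data are assumptions; only the LSTM-plus-RMSE setup mirrors the study.

# Minimal sketch of an LSTM regressor for mastery prediction; data are random.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 12, 6).astype("float32")   # 200 students, 12 weeks, 6 features
y = np.random.rand(200, 1).astype("float32")       # end-of-term mastery score in [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12, 6)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))             # [mse, rmse]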
RevDate: 2026-01-01
Modifier guided resilient CNN inference enables fault-tolerant edge collaboration for IoT.
Scientific reports, 15(1):45458.
In resource-constrained Internet of Things (IoT) scenarios, implementing robust and accurate deep learning inference is problematic due to device failures, limited computing power, and privacy concerns. We present a resilient, completely edge-based distributed convolutional neural network (CNN) architecture that eliminates cloud dependencies while enabling accurate and fault-tolerant inference. At its core is a lightweight Modifier Module deployed at the edge, which synthesizes predictions for failing devices by pooling peer CNN outputs and weights. This dynamic mechanism is trained via a novel fail-simulation technique, allowing it to mimic missing outputs in real-time without model duplication or cloud fallback. We assess our methodology using MNIST and CIFAR-10 datasets under both homogeneous and heterogeneous data partitions, with up to five simultaneous device failures. The system delivers up to 1.5% absolute accuracy improvement, 30% error rate reduction, and stable operation even with over 80% device dropout, exceeding ensemble, dropout, and federated baselines. Our strategy combines strong statistical significance, low resource utilization (~15 KB per model), and real-time responsiveness, making it well-suited for safety-critical IoT installations where cloud access is infeasible.
Additional Links: PMID-41310049
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41310049,
year = {2025},
author = {Jamshidi, O and Abbasi, M and Ramazani, A and Salimi Shahraki, A and Taherkordi, A},
title = {Modifier guided resilient CNN inference enables fault-tolerant edge collaboration for IoT.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {45458},
pmid = {41310049},
issn = {2045-2322},
abstract = {In resource-constrained Internet of Things (IoT) scenarios, implementing robust and accurate deep learning inference is problematic due to device failures, limited computing power, and privacy concerns. We present a resilient, completely edge-based distributed convolutional neural network (CNN) architecture that eliminates cloud dependencies while enabling accurate and fault-tolerant inference. At its core is a lightweight Modifier Module deployed at the edge, which synthesizes predictions for failing devices by pooling peer CNN outputs and weights. This dynamic mechanism is trained via a novel fail-simulation technique, allowing it to mimic missing outputs in real-time without model duplication or cloud fallback. We assess our methodology using MNIST and CIFAR-10 datasets under both homogeneous and heterogeneous data partitions, with up to five simultaneous device failures. The system delivers up to 1.5% absolute accuracy improvement, 30% error rate reduction, and stable operation even with over 80% device dropout, exceeding ensemble, dropout, and federated baselines. Our strategy combines strong statistical significance, low resource utilization (~15 KB per model), and real-time responsiveness, making it well-suited for safety-critical IoT installations where cloud access is infeasible.},
}
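As a rough illustration of the failure-masking idea in the entry above, the following Python sketch synthesizes a failed device's class probabilities by reliability-weighted pooling of the surviving peers' softmax outputs. The fixed weighting is a stand-in for the paper's learned Modifier Module.

# Minimal sketch: pool surviving peers' softmax outputs to cover a failed device.
import numpy as np

rng = np.random.default_rng(4)
peer_probs = rng.dirichlet(np.ones(10), size=5)   # 5 devices, 10-class softmax outputs
alive = np.array([True, True, False, True, True]) # device 2 has failed

weights = rng.uniform(0.5, 1.0, size=5) * alive   # e.g. per-peer reliability scores; failed peer gets 0
weights /= weights.sum()

synthesized = weights @ peer_probs                # proxy prediction standing in for the failed device
print("predicted class:", int(synthesized.argmax()))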
RevDate: 2025-11-30
Hybrid modeling and rapid prototyping technology based on the geomagic system.
Scientific reports, 15(1):42456 pii:10.1038/s41598-025-26566-0.
The structural characteristics of gear parts are analyzed, and an appropriate point cloud processing flow is formulated. Taking the Geomagic system as the computing platform and taking spur gears and spiral bevel gears as examples, the forward and reverse hybrid modelling is carried out, and a solid model that meets the accuracy requirements is obtained, which verifies the effectiveness of the hybrid modelling. A 3D printing process is then carried out on the generated solid model, and the corresponding process parameters are set to obtain a feasible physical model. This hybrid modelling + rapid prototyping solution can effectively improve the design efficiency of products, reduce product development costs, and improve the competitiveness of enterprises.
Additional Links: PMID-41309939
Full Text:
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41309939,
year = {2025},
author = {Yin, H and Ding, Y and Long, C and Wang, L and Jiang, Z and Wang, Z and Zhang, J and Yang, Y and Wu, G and Li, X},
title = {Hybrid modeling and rapid prototyping technology based on the geomagic system.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {42456},
doi = {10.1038/s41598-025-26566-0},
pmid = {41309939},
issn = {2045-2322},
support = {KJQN202401341//Science and Technology Research Program of Chongqing Municipal Education Commission/ ; 2024yc-cxfz30079//Technology Innovation and Application Development Project of Chongqing Yongchuan District Science and Technology Bureau/ ; 2024yc-cxfz30073//Technology Innovation and Application Development Project of Chongqing Yongchuan District Science and Technology Bureau/ ; KJQN202001336//Technology Research Program of Chongqing Municipal Education Commission/ ; KJQN202301311//Technology Research Program of Chongqing Municipal Education Commission/ ; CSTB2022NSCQ-MSX0352//Natural Science Foundation of Chongqing/ ; },
abstract = {The structural characteristics of gear parts are analyzed, and an appropriate point cloud processing flow is formulated. Taking the Geomagic system as the computing platform and taking spur gears and spiral bevel gears as examples, the forward and reverse hybrid modelling is carried out, and a solid model that meets the accuracy requirements is obtained, which verifies the effectiveness of the hybrid modelling. A 3D printing process is then carried out on the generated solid model, and the corresponding process parameters are set to obtain a feasible physical model. This hybrid modelling + rapid prototyping solution can effectively improve the design efficiency of products, reduce product development costs, and improve the competitiveness of enterprises.},
}
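To give a concrete feel for the point-cloud processing flow mentioned in the entry above, here is a minimal Python sketch using the open3d package (rather than the Geomagic system) to downsample a scan, remove outliers, and estimate normals prior to surface reconstruction. The random points and parameter values are illustrative only.

# Minimal sketch of a generic point-cloud clean-up pipeline with open3d.
import numpy as np
import open3d as o3d

points = np.random.rand(5000, 3) * 0.05          # stand-in for a scanned gear patch (metres)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

pcd = pcd.voxel_down_sample(voxel_size=0.002)    # thin the raw scan
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.005, max_nn=30))
print(len(pcd.points), "points after clean-up")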
RevDate: 2025-12-15
CmpDate: 2025-11-27
GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.
JMIR formative research, 9:e79038.
BACKGROUND: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity.
OBJECTIVE: This study aimed to develop a secure, artificial intelligence-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements.
METHODS: GrantCheck (University of Massachusetts Chan Medical School) was built on a private Amazon Web Services virtual private cloud, integrating a rule-based natural language processing engine with large language models accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale.
RESULTS: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.00, recall of 0.73, and an F1-score of 0.84-outperforming general-purpose models including GPT-4o (OpenAI; F1=0.43), Deepseek R1 (High-Flyer; F1=0.40), Llama 3.1 (Meta AI; F1=0.27), Gemini 2.5 Flash (Google; F1=0.58), and even Gemini 2.5 Pro (Google; F1=0.72). Usability testing among 25 faculty and staff yielded a mean System Usability Scale score of 85.9 (SD 13.4), indicating high user satisfaction and strong workflow integration.
CONCLUSIONS: GrantCheck demonstrates the feasibility of deploying institutionally hosted, artificial intelligence-driven systems to support compliant and researcher-friendly grant writing. Beyond administrative efficiency, such systems can indirectly safeguard public health research continuity by minimizing grant delays and funding losses caused by language-related policy changes. By maintaining compliance without suppressing scientific rigor or inclusivity, GrantCheck helps protect the pipeline of research that advances biomedical discovery, health equity, and patient outcomes. This capability is particularly relevant for proposals in sensitive domains-such as social determinants of health, behavioral medicine, and community-based research-that are most vulnerable to evolving policy restrictions. As a proof-of-concept development study, our implementation is tailored to one institution's policy environment and security infrastructure, and findings should be interpreted as preliminary rather than universally generalizable.
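As a sanity check on the reported scores, and to illustrate the rule-based half of a hybrid pipeline of this kind, the sketch below flags watch-list terms with a regular expression and computes F1 from precision and recall; with precision 1.00 and recall 0.73 it reproduces the reported F1 of about 0.84. The watch list and function names are hypothetical placeholders, not GrantCheck's actual lexicon or code.

# Illustrative rule-based term flagging plus the F1 arithmetic; the watch list
# is a made-up placeholder, not GrantCheck's policy lexicon.
import re

WATCH_LIST = ["example-term-a", "example-term-b"]   # hypothetical flagged terms
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, WATCH_LIST)) + r")\b", re.IGNORECASE)

def flag_terms(text: str) -> list[str]:
    return [m.group(0) for m in PATTERN.finditer(text)]

def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(flag_terms("The draft mentions example-term-a twice: Example-Term-A."))
print(round(f1(1.00, 0.73), 2))   # -> 0.84, matching the reported score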
Additional Links: PMID-41308189
PubMed:
Citation:
show bibtex listing
@article {pmid41308189,
year = {2025},
author = {Shi, Q and Oztekin, A and Matthew, G and Bortle, J and Jenkins, H and Wong, SK and Langlois, P and Zaki, A and Coleman, B and Luzuriaga, K and Zai, AH},
title = {GrantCheck-an AI Solution for Guiding Grant Language to New Policy Requirements: Development Study.},
journal = {JMIR formative research},
volume = {9},
number = {},
pages = {e79038},
pmid = {41308189},
issn = {2561-326X},
mesh = {Humans ; *Artificial Intelligence ; *Natural Language Processing ; *Writing ; *Research Support as Topic ; },
abstract = {BACKGROUND: Academic institutions face increasing challenges in grant writing due to evolving federal and state policies that restrict the use of specific language. Manual review processes are labor-intensive and may delay submissions, highlighting the need for scalable, secure solutions that ensure compliance without compromising scientific integrity.
OBJECTIVE: This study aimed to develop a secure, artificial intelligence-powered tool that assists researchers in writing grants consistent with evolving state and federal policy requirements.
METHODS: GrantCheck (University of Massachusetts Chan Medical School) was built on a private Amazon Web Services virtual private cloud, integrating a rule-based natural language processing engine with large language models accessed via Amazon Bedrock. A hybrid pipeline detects flagged terms and generates alternative phrasing, with validation steps to prevent hallucinations. A secure web-based front end enables document upload and report retrieval. Usability was assessed using the System Usability Scale.
RESULTS: GrantCheck achieved high performance in detecting and recommending alternatives for sensitive terms, with a precision of 1.00, recall of 0.73, and an F1-score of 0.84-outperforming general-purpose models including GPT-4o (OpenAI; F1=0.43), Deepseek R1 (High-Flyer; F1=0.40), Llama 3.1 (Meta AI; F1=0.27), Gemini 2.5 Flash (Google; F1=0.58), and even Gemini 2.5 Pro (Google; F1=0.72). Usability testing among 25 faculty and staff yielded a mean System Usability Scale score of 85.9 (SD 13.4), indicating high user satisfaction and strong workflow integration.
CONCLUSIONS: GrantCheck demonstrates the feasibility of deploying institutionally hosted, artificial intelligence-driven systems to support compliant and researcher-friendly grant writing. Beyond administrative efficiency, such systems can indirectly safeguard public health research continuity by minimizing grant delays and funding losses caused by language-related policy changes. By maintaining compliance without suppressing scientific rigor or inclusivity, GrantCheck helps protect the pipeline of research that advances biomedical discovery, health equity, and patient outcomes. This capability is particularly relevant for proposals in sensitive domains-such as social determinants of health, behavioral medicine, and community-based research-that are most vulnerable to evolving policy restrictions. As a proof-of-concept development study, our implementation is tailored to one institution's policy environment and security infrastructure, and findings should be interpreted as preliminary rather than universally generalizable.},
}
MeSH Terms:
show MeSH Terms
Humans
*Artificial Intelligence
*Natural Language Processing
*Writing
*Research Support as Topic
RevDate: 2025-11-30
Edge-Computing Smart Irrigation Controller Using LoRaWAN and LSTM for Predictive Controlled Deficit Irrigation.
Sensors (Basel, Switzerland), 25(22):.
Enhancing sustainability in agriculture has become a significant challenge: in the current context of climate change, particularly in Mediterranean countries, the amount of water available for irrigation is becoming increasingly limited. Automating irrigation processes using affordable sensors can help save irrigation water and produce almonds more sustainably. This work presents an IoT-enabled edge computing model for smart irrigation systems focused on precision agriculture. The model combines IoT sensors, hybrid machine learning algorithms, and edge computing to predict soil moisture and manage Controlled Deficit Irrigation (CDI) strategies in high-density almond tree fields, applying reductions of 35% ETc (crop evapotranspiration). By gathering and analyzing meteorological, soil moisture, and crop data, a soft ML (Machine Learning) model has been developed to enhance irrigation practices and identify crop anomalies in real time without cloud computing. This methodology has the potential to transform agricultural practices by enabling precise and efficient water management, even in remote locations without internet access. This study represents an initial step toward implementing ML algorithms for irrigation CDI strategies.
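The entry above does not publish its model code or framework; as a minimal sketch of the kind of lightweight LSTM soil-moisture forecaster such an edge controller could run, the PyTorch snippet below maps a short window of sensor readings to the next moisture value. The choice of PyTorch, the three input features, and the layer sizes are all assumptions.

# Minimal LSTM regressor for next-step soil moisture (illustrative sketch;
# the feature set and layer sizes are assumptions, not the authors' model).
import torch
import torch.nn as nn

class MoistureLSTM(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, n_features) -> predicted moisture (batch, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

model = MoistureLSTM()
window = torch.randn(8, 24, 3)   # 8 samples, 24 hourly readings, 3 features
print(model(window).shape)       # torch.Size([8, 1])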
Additional Links: PMID-41305289
PubMed:
Citation:
show bibtex listing
@article {pmid41305289,
year = {2025},
author = {Baseca, CC and Dionísio, R and Ribeiro, F and Metrôlho, J},
title = {Edge-Computing Smart Irrigation Controller Using LoRaWAN and LSTM for Predictive Controlled Deficit Irrigation.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {22},
pages = {},
pmid = {41305289},
issn = {1424-8220},
abstract = {Enhancing sustainability in agriculture has become a significant challenge: in the current context of climate change, particularly in Mediterranean countries, the amount of water available for irrigation is becoming increasingly limited. Automating irrigation processes using affordable sensors can help save irrigation water and produce almonds more sustainably. This work presents an IoT-enabled edge computing model for smart irrigation systems focused on precision agriculture. The model combines IoT sensors, hybrid machine learning algorithms, and edge computing to predict soil moisture and manage Controlled Deficit Irrigation (CDI) strategies in high-density almond tree fields, applying reductions of 35% ETc (crop evapotranspiration). By gathering and analyzing meteorological, soil moisture, and crop data, a soft ML (Machine Learning) model has been developed to enhance irrigation practices and identify crop anomalies in real time without cloud computing. This methodology has the potential to transform agricultural practices by enabling precise and efficient water management, even in remote locations without internet access. This study represents an initial step toward implementing ML algorithms for irrigation CDI strategies.},
}
RevDate: 2025-11-30
Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid.
Sensors (Basel, Switzerland), 25(22):.
Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise maps. This paper presents a high-accuracy online mapping method comprising weight matching LiDAR-IMU-GNSS odometry and an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching. Weight feature point matching enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, which ultimately allows object-level highly dynamic point clouds to be filtered out during the online mapping process. Compared to the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy on the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meanwhile, the proposed highly dynamic point cloud filtering algorithm exhibits better detection precision than Removert and ERASOR. Furthermore, high-accuracy online maps are built from a real-time dataset with comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to the field of high-accuracy online mapping, especially the filtering of highly dynamic objects.
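The pseudo-occupancy idea in the entry above can be illustrated with a small NumPy sketch: points from the current frame and from the prior submap are binned into the same 2D grid, and cells that are far more occupied now than before are treated as candidate dynamic cells. The grid resolution, ratio threshold, and synthetic data are assumptions, not the paper's parameters.

# Toy pseudo-occupancy comparison between a current scan and a prior submap
# (illustrative; resolution, threshold, and data are arbitrary assumptions).
import numpy as np

def occupancy(points_xy: np.ndarray, res: float, shape: tuple[int, int]) -> np.ndarray:
    idx = np.floor(points_xy / res).astype(int)
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < shape[0]) & (idx[:, 1] >= 0) & (idx[:, 1] < shape[1])
    grid = np.zeros(shape)
    np.add.at(grid, (idx[ok, 0], idx[ok, 1]), 1.0)
    return grid

submap = np.random.rand(5000, 2) * 20                                # static background points (m)
frame = np.vstack([submap[:1000], np.array([[10.2, 10.3]] * 50)])    # background plus one mover
cur = occupancy(frame, 0.5, (40, 40))
prior = occupancy(submap, 0.5, (40, 40))
ratio = cur / (prior + 1e-6)                                         # pseudo-occupancy ratio per cell
dynamic_cells = np.argwhere((cur > 0) & (ratio > 5.0))               # flag newly crowded cells
print(len(dynamic_cells))                                            # -> 1 (the mover's cell)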
Additional Links: PMID-41305080
PubMed:
Citation:
show bibtex listing
@article {pmid41305080,
year = {2025},
author = {Zhao, X and Cao, X and Ding, M and Jiang, D and Wei, C},
title = {Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {22},
pages = {},
pmid = {41305080},
issn = {1424-8220},
support = {8KD006(2024)-2//the State Administration of Science. Technology and Industry for National Defense Project/ ; },
abstract = {Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise maps. This paper presents a high-accuracy online mapping method comprising weight matching LiDAR-IMU-GNSS odometry and an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching. Weight feature point matching enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, which ultimately allows object-level highly dynamic point clouds to be filtered out during the online mapping process. Compared to the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy on the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meanwhile, the proposed highly dynamic point cloud filtering algorithm exhibits better detection precision than Removert and ERASOR. Furthermore, high-accuracy online maps are built from a real-time dataset with comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to the field of high-accuracy online mapping, especially the filtering of highly dynamic objects.},
}
RevDate: 2025-11-30
CmpDate: 2025-11-27
Transforming Smart Healthcare Systems with AI-Driven Edge Computing for Distributed IoMT Networks.
Bioengineering (Basel, Switzerland), 12(11):.
The Internet of Medical Things (IoMT) with edge computing provides opportunities for the rapid growth and development of a smart healthcare system (SHM). It consists of wearable sensors, physical objects, and electronic devices that collect health data, perform local processing, and later forward it to a cloud platform for further analysis. Most existing approaches focus on diagnosing health conditions and reporting them to medical experts for personalized treatment. However, they overlook the need to provide dynamic approaches to address the unpredictable nature of the healthcare system, which relies on public infrastructure that all connected devices can access. Furthermore, the rapid processing of health data on constrained devices often leads to uneven load distribution and affects the system's responsiveness in critical circumstances. Our research study proposes a model based on AI-driven and edge computing technologies to provide a lightweight and innovative healthcare system. It enhances the learning capabilities of the system and efficiently detects network anomalies in a distributed IoMT network, without incurring additional overhead on a bounded system. The proposed model is verified and tested through simulations using synthetic data, and the obtained results demonstrate its efficacy over related solutions, with improvements of 53% in energy consumption, 46% in latency, 52% in packet loss rate, 56% in network throughput, and 48% in overhead.
Additional Links: PMID-41301188
PubMed:
Citation:
show bibtex listing
@article {pmid41301188,
year = {2025},
author = {Almufareh, MF and Humayun, M and Haseeb, K},
title = {Transforming Smart Healthcare Systems with AI-Driven Edge Computing for Distributed IoMT Networks.},
journal = {Bioengineering (Basel, Switzerland)},
volume = {12},
number = {11},
pages = {},
pmid = {41301188},
issn = {2306-5354},
support = {GSSR-2025-02-01292//Deanship of Graduate Studies and Scientific Research at Jouf University/ ; },
abstract = {The Internet of Medical Things (IoMT) with edge computing provides opportunities for the rapid growth and development of a smart healthcare system (SHM). It consists of wearable sensors, physical objects, and electronic devices that collect health data, perform local processing, and later forward it to a cloud platform for further analysis. Most existing approaches focus on diagnosing health conditions and reporting them to medical experts for personalized treatment. However, they overlook the need to provide dynamic approaches to address the unpredictable nature of the healthcare system, which relies on public infrastructure that all connected devices can access. Furthermore, the rapid processing of health data on constrained devices often leads to uneven load distribution and affects the system's responsiveness in critical circumstances. Our research study proposes a model based on AI-driven and edge computing technologies to provide a lightweight and innovative healthcare system. It enhances the learning capabilities of the system and efficiently detects network anomalies in a distributed IoMT network, without incurring additional overhead on a bounded system. The proposed model is verified and tested through simulations using synthetic data, and the obtained results demonstrate its efficacy over related solutions, with improvements of 53% in energy consumption, 46% in latency, 52% in packet loss rate, 56% in network throughput, and 48% in overhead.},
}
RevDate: 2026-01-01
Dynamic multi objective task scheduling in cloud computing using reinforcement learning for energy and cost optimization.
Scientific reports, 15(1):45387.
Efficient task scheduling in cloud computing is crucial for managing dynamic workloads while balancing performance, energy efficiency, and operational costs. This paper introduces a novel Reinforcement Learning-Driven Multi-Objective Task Scheduling (RL-MOTS) framework that leverages a Deep Q-Network (DQN) to dynamically allocate tasks across virtual machines. By integrating multi-objective optimization, RL-MOTS simultaneously minimizes energy consumption, reduces costs, and ensures Quality of Service (QoS) under varying workload conditions. The framework employs a reward function that adapts to real-time resource utilization, task deadlines, and energy metrics, enabling robust performance in heterogeneous cloud environments. Evaluations conducted using a simulated cloud platform demonstrate that RL-MOTS achieves up to 27% reduction in energy consumption and 18% improvement in cost efficiency compared to state-of-the-art heuristic and metaheuristic methods, while meeting stringent deadline constraints. Its adaptability to hybrid cloud-edge architectures makes RL-MOTS a forward-looking solution for next-generation distributed computing systems.
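The abstract above does not spell out the reward shaping; as a hedged sketch of a multi-objective scheduling reward of the kind described, the snippet below combines normalized energy, cost, and deadline terms with tunable weights. The weights and normalization constants are placeholders, not values from the paper.

# Illustrative multi-objective reward for a DQN task scheduler; weights and
# normalizers are placeholders, not values from the cited RL-MOTS paper.
from dataclasses import dataclass

@dataclass
class Outcome:
    energy_j: float      # energy consumed executing the task
    cost_usd: float      # monetary cost of the chosen VM
    finish_s: float      # completion time
    deadline_s: float    # task deadline

def reward(o: Outcome, w_e=0.4, w_c=0.3, w_d=0.3,
           energy_ref=500.0, cost_ref=0.05) -> float:
    lateness = max(0.0, o.finish_s - o.deadline_s)
    penalty = (w_e * o.energy_j / energy_ref
               + w_c * o.cost_usd / cost_ref
               + w_d * lateness / max(o.deadline_s, 1e-6))
    return -penalty   # the agent maximizes reward, i.e. minimizes the penalty

print(reward(Outcome(energy_j=320, cost_usd=0.02, finish_s=9.5, deadline_s=10.0)))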
Additional Links: PMID-41298680
PubMed:
Citation:
show bibtex listing
@article {pmid41298680,
year = {2025},
author = {Yu, X and Mi, J and Tang, L and Long, L and Qin, X},
title = {Dynamic multi objective task scheduling in cloud computing using reinforcement learning for energy and cost optimization.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {45387},
pmid = {41298680},
issn = {2045-2322},
abstract = {Efficient task scheduling in cloud computing is crucial for managing dynamic workloads while balancing performance, energy efficiency, and operational costs. This paper introduces a novel Reinforcement Learning-Driven Multi-Objective Task Scheduling (RL-MOTS) framework that leverages a Deep Q-Network (DQN) to dynamically allocate tasks across virtual machines. By integrating multi-objective optimization, RL-MOTS simultaneously minimizes energy consumption, reduces costs, and ensures Quality of Service (QoS) under varying workload conditions. The framework employs a reward function that adapts to real-time resource utilization, task deadlines, and energy metrics, enabling robust performance in heterogeneous cloud environments. Evaluations conducted using a simulated cloud platform demonstrate that RL-MOTS achieves up to 27% reduction in energy consumption and 18% improvement in cost efficiency compared to state-of-the-art heuristic and metaheuristic methods, while meeting stringent deadline constraints. Its adaptability to hybrid cloud-edge architectures makes RL-MOTS a forward-looking solution for next-generation distributed computing systems.},
}
RevDate: 2025-12-15
CmpDate: 2025-12-15
Advancements and challenges in bioinformatics tools for microbial genomics in the last decade: Toward the smart integration of bioinformatics tools, digital resources, and emerging technologies for the analysis of complex biological data.
Infection, genetics and evolution : journal of molecular epidemiology and evolutionary genetics in infectious diseases, 136:105859.
Over the past decade, microbial genomics has been transformed by advances in sequencing technologies and bioinformatics, enabling the transition from targeted gene markers to complete genome assemblies and ecological scale metagenomic surveys. This review presents a comprehensive overview of the bioinformatics pipelines that structure this field, from sample preparation, PCR amplification, and next-generation sequencing (NGS) to read preprocessing, genome assembly, polishing, structural and functional annotation, and submission to public databases. We highlight the major tools that have become standards at each stage, including FastQC, SPAdes, Prokka, Bakta, CARD, GTDB-Tk, QIIME 2, and Kraken2, while also emphasizing recent innovations such as hybrid assemblers, ontology-driven annotation frameworks, and automated workflows (nf-core, Bactopia). Applications extend across microbiology, from antimicrobial resistance surveillance and phylogenetic classification to ecological studies, exemplified here by three case studies: termite gut microbiota profiling by 16S metabarcoding, the description of new Bartonella species from bats, and the genomic characterization of rare Salmonella enterica serovars from primates. Despite these advances, persistent challenges remain, including incomplete and biased reference databases, computational bottlenecks, and economic disparities in sequencing and storage capacities. In response, international initiatives increasingly promote open, interoperable, and reusable bioinformatics infrastructures. Conforming to the Findable, Accessible, Interoperable, Reusable (FAIR) principles and global frameworks such as Global Alliance for Genomics and Health (GA4GH), these efforts are driving greater standardization, transparency, and data sharing across the microbial genomics community. Future perspectives point toward the integration of artificial intelligence, long-read and telomere-to-telomere (T2T) sequencing, cloud-native infrastructures, and even quantum computing, paving the way for a predictive, reproducible, and globally inclusive microbial genomics.
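To make two of the pipeline stages named above concrete, the sketch below chains FastQC (read QC) and SPAdes (assembly) from Python via subprocess. The file paths are placeholders, both tools must already be installed and on PATH, and the exact flags should be checked against your installed versions; treat this as an orientation sketch rather than a tested workflow.

# Minimal two-stage read-QC + assembly sketch using tools named in the review;
# paths are placeholders and the commands assume FastQC and SPAdes are on PATH.
import subprocess
from pathlib import Path

reads_1, reads_2 = Path("sample_R1.fastq.gz"), Path("sample_R2.fastq.gz")
qc_dir, asm_dir = Path("fastqc_out"), Path("spades_out")
qc_dir.mkdir(exist_ok=True)

subprocess.run(["fastqc", str(reads_1), str(reads_2), "-o", str(qc_dir)], check=True)
subprocess.run(["spades.py", "-1", str(reads_1), "-2", str(reads_2),
                "-o", str(asm_dir)], check=True)
print("assembly written to", asm_dir / "contigs.fasta")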
Additional Links: PMID-41297621
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41297621,
year = {2025},
author = {Houmenou, CT and Sokhna, C and Fenollar, F and Mediannikov, O},
title = {Advancements and challenges in bioinformatics tools for microbial genomics in the last decade: Toward the smart integration of bioinformatics tools, digital resources, and emerging technologies for the analysis of complex biological data.},
journal = {Infection, genetics and evolution : journal of molecular epidemiology and evolutionary genetics in infectious diseases},
volume = {136},
number = {},
pages = {105859},
doi = {10.1016/j.meegid.2025.105859},
pmid = {41297621},
issn = {1567-7257},
mesh = {*Computational Biology/methods ; *Genomics/methods ; High-Throughput Nucleotide Sequencing ; *Metagenomics/methods ; Humans ; Animals ; },
abstract = {Over the past decade, microbial genomics has been transformed by advances in sequencing technologies and bioinformatics, enabling the transition from targeted gene markers to complete genome assemblies and ecological scale metagenomic surveys. This review presents a comprehensive overview of the bioinformatics pipelines that structure this field, from sample preparation, PCR amplification, and next-generation sequencing (NGS) to read preprocessing, genome assembly, polishing, structural and functional annotation, and submission to public databases. We highlight the major tools that have become standards at each stage, including FastQC, SPAdes, Prokka, Bakta, CARD, GTDB-Tk, QIIME 2, and Kraken2, while also emphasizing recent innovations such as hybrid assemblers, ontology-driven annotation frameworks, and automated workflows (nf-core, Bactopia). Applications extend across microbiology, from antimicrobial resistance surveillance and phylogenetic classification to ecological studies, exemplified here by three case studies: termite gut microbiota profiling by 16S metabarcoding, the description of new Bartonella species from bats, and the genomic characterization of rare Salmonella enterica serovars from primates. Despite these advances, persistent challenges remain, including incomplete and biased reference databases, computational bottlenecks, and economic disparities in sequencing and storage capacities. In response, international initiatives increasingly promote open, interoperable, and reusable bioinformatics infrastructures. Conforming to the Findable, Accessible, Interoperable, Reusable (FAIR) principles and global frameworks such as Global Alliance for Genomics and Health (GA4GH), these efforts are driving greater standardization, transparency, and data sharing across the microbial genomics community. Future perspectives point toward the integration of artificial intelligence, long-read and telomere-to-telomere (T2T) sequencing, cloud-native infrastructures, and even quantum computing, paving the way for a predictive, reproducible, and globally inclusive microbial genomics.},
}
MeSH Terms:
show MeSH Terms
*Computational Biology/methods
*Genomics/methods
High-Throughput Nucleotide Sequencing
*Metagenomics/methods
Humans
Animals
RevDate: 2025-11-28
CmpDate: 2025-11-26
Sentinel-2-Based Forest Health Survey of ICP Forests Level I and II Plots in Hungary.
Journal of imaging, 11(11):.
Forest damage has been increasingly recorded over the past decade in both Europe and Hungary, primarily due to prolonged droughts, causing a decline in forest health. Within the ICP Forests framework, forest damage has been monitored for decades; however, this ground-based monitoring is labour-intensive and time-consuming. Satellite-based remote sensing offers a rapid and efficient method for assessing large-scale damage events in combination with the ground-based ICP Forests datasets. This study utilised cloud computing and Sentinel-2 satellite imagery to monitor forest health and detect anomalies. Standardised NDVI (Z NDVI) maps were produced for the period from 2017 to 2023 to identify disturbances in the forest. The research focused on seven active ICP Forests Level II plots and 78 Level I plots in Hungary. Z NDVI values were divided into five categories based on damage severity, and there was agreement between Level II field data and satellite imagery. In 2017, severe damage was caused by late frost and wind; however, the forest recovered by 2018. Another decline was observed in 2021 due to wind and in 2022 due to drought. Data from the ICP Forests Level I plots, which represent forest condition in Hungary, indicated that 80% of the monitored stands were damaged, with 30% suffering moderate damage and 15% experiencing severe damage. Z NDVI classifications aligned with the field data, showing widespread forest damage across the country.
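The standardized NDVI (Z NDVI) used above is essentially a per-pixel z-score against a reference period; a minimal NumPy sketch of that computation and a five-class severity binning is shown below. The class thresholds and synthetic data are illustrative, not the thresholds used in the study.

# Per-pixel standardized NDVI (Z NDVI) and a toy five-class severity binning;
# thresholds and synthetic inputs are illustrative only.
import numpy as np

def z_ndvi(current: np.ndarray, reference_stack: np.ndarray) -> np.ndarray:
    # reference_stack: (years, rows, cols) NDVI composites for the baseline period.
    mean = reference_stack.mean(axis=0)
    std = reference_stack.std(axis=0) + 1e-6
    return (current - mean) / std

reference = np.random.normal(0.75, 0.05, size=(6, 100, 100))   # baseline years
current = np.random.normal(0.65, 0.08, size=(100, 100))        # stressed year
z = z_ndvi(current, reference)
classes = np.digitize(z, bins=[-3.0, -2.0, -1.0, 0.0])         # 0 = most severe
print(np.bincount(classes.ravel(), minlength=5))               # pixels per class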
Additional Links: PMID-41295130
PubMed:
Citation:
show bibtex listing
@article {pmid41295130,
year = {2025},
author = {Molnár, T and Bolla, B and Szabó, O and Koltay, A},
title = {Sentinel-2-Based Forest Health Survey of ICP Forests Level I and II Plots in Hungary.},
journal = {Journal of imaging},
volume = {11},
number = {11},
pages = {},
pmid = {41295130},
issn = {2313-433X},
support = {TKP2021-NKTA-43//Ministry of Culture and Innovation of Hungary/ ; },
abstract = {Forest damage has been increasingly recorded over the past decade in both Europe and Hungary, primarily due to prolonged droughts, causing a decline in forest health. Within the ICP Forests framework, forest damage has been monitored for decades; however, this ground-based monitoring is labour-intensive and time-consuming. Satellite-based remote sensing offers a rapid and efficient method for assessing large-scale damage events in combination with the ground-based ICP Forests datasets. This study utilised cloud computing and Sentinel-2 satellite imagery to monitor forest health and detect anomalies. Standardised NDVI (Z NDVI) maps were produced for the period from 2017 to 2023 to identify disturbances in the forest. The research focused on seven active ICP Forests Level II plots and 78 Level I plots in Hungary. Z NDVI values were divided into five categories based on damage severity, and there was agreement between Level II field data and satellite imagery. In 2017, severe damage was caused by late frost and wind; however, the forest recovered by 2018. Another decline was observed in 2021 due to wind and in 2022 due to drought. Data from the ICP Forests Level I plots, which represent forest condition in Hungary, indicated that 80% of the monitored stands were damaged, with 30% suffering moderate damage and 15% experiencing severe damage. Z NDVI classifications aligned with the field data, showing widespread forest damage across the country.},
}
RevDate: 2025-11-28
Reinforcement learning based multi objective task scheduling for energy efficient and cost effective cloud edge computing.
Scientific reports, 15(1):41716.
The rapid proliferation of Internet of Things (IoT) devices and latency-sensitive applications has amplified the need for efficient task scheduling in hybrid cloud-edge environments. Traditional heuristic and metaheuristic algorithms often fall short in addressing the dynamic nature of workloads and the conflicting objectives of performance, energy efficiency, and cost-effectiveness. To overcome these challenges, this study introduces Reinforcement Learning-Based Multi-Objective Task Scheduling (RL-MOTS), a framework leveraging Deep Q-Networks (DQNs) for intelligent and adaptive resource allocation. The proposed model formulates scheduling as a Markov Decision Process, incorporating a priority-aware dynamic queueing mechanism and a multi-objective reward function that balances task latency, energy consumption, and operational costs. Additionally, the framework employs a state-reward tensor to capture trade-offs among objectives, enabling real-time decision-making across heterogeneous cloud and edge nodes. Comprehensive simulations using CloudSim validate the robustness of RL-MOTS under varying workload conditions. Compared to baseline strategies such as FCFS, Min-Min, and multi-objective heuristic models, RL-MOTS achieves up to 28% reduction in energy consumption, 20% improvement in cost efficiency, and significant reductions in makespan and deadline violations, while maintaining strict Quality of Service (QoS) requirements. The framework's adaptability to preemptive and non-preemptive scheduling further enhances its resilience and scalability. These findings establish RL-MOTS as a forward-looking solution for sustainable, cost-efficient, and performance-oriented computing in next-generation distributed systems. Future research will focus on integrating transfer learning and federated learning to increase scalability and privacy in large, decentralized environments, including those applicable to the medical industry.
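Complementing the reward sketch given for the related RL-MOTS entry above, the snippet below outlines the other half of such a scheduler: a small Q-network over a state vector (for example, per-node load plus task deadline and laxity features) and an epsilon-greedy choice of target node. The dimensions and state features are assumptions.

# Skeleton of a DQN-style scheduler head: state -> Q-values over candidate
# nodes, with epsilon-greedy selection. Sizes and features are assumptions.
import random
import torch
import torch.nn as nn

N_NODES, STATE_DIM = 6, 12   # candidate cloud/edge nodes, state feature count

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_NODES),          # one Q-value per node
)

def select_node(state: torch.Tensor, epsilon: float = 0.1) -> int:
    if random.random() < epsilon:            # explore
        return random.randrange(N_NODES)
    with torch.no_grad():                    # exploit the current Q estimates
        return int(q_net(state).argmax().item())

state = torch.randn(STATE_DIM)
print("dispatch task to node", select_node(state))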
Additional Links: PMID-41286289
PubMed:
Citation:
show bibtex listing
@article {pmid41286289,
year = {2025},
author = {Zhang, W and Ou, H},
title = {Reinforcement learning based multi objective task scheduling for energy efficient and cost effective cloud edge computing.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41716},
pmid = {41286289},
issn = {2045-2322},
abstract = {The rapid proliferation of Internet of Things (IoT) devices and latency-sensitive applications has amplified the need for efficient task scheduling in hybrid cloud-edge environments. Traditional heuristic and metaheuristic algorithms often fall short in addressing the dynamic nature of workloads and the conflicting objectives of performance, energy efficiency, and cost-effectiveness. To overcome these challenges, this study introduces Reinforcement Learning-Based Multi-Objective Task Scheduling (RL-MOTS), a framework leveraging Deep Q-Networks (DQNs) for intelligent and adaptive resource allocation. The proposed model formulates scheduling as a Markov Decision Process, incorporating a priority-aware dynamic queueing mechanism and a multi-objective reward function that balances task latency, energy consumption, and operational costs. Additionally, the framework employs a state-reward tensor to capture trade-offs among objectives, enabling real-time decision-making across heterogeneous cloud and edge nodes. Comprehensive simulations using CloudSim validate the robustness of RL-MOTS under varying workload conditions. Compared to baseline strategies such as FCFS, Min-Min, and multi-objective heuristic models, RL-MOTS achieves up to 28% reduction in energy consumption, 20% improvement in cost efficiency, and significant reductions in makespan and deadline violations, while maintaining strict Quality of Service (QoS) requirements. The framework's adaptability to preemptive and non-preemptive scheduling further enhances its resilience and scalability. These findings establish RL-MOTS as a forward-looking solution for sustainable, cost-efficient, and performance-oriented computing in next-generation distributed systems. Future research will focus on integrating transfer learning and federated learning to increase scalability and privacy in large, decentralized environments, including those applicable to the medical industry.},
}
RevDate: 2025-12-05
Enhancing IIoT security through blockchain-enabled workload analysis in fog computing environments.
Scientific reports, 15(1):42898.
Robots and software are utilized in industrial automation to run machinery and processes in a variety of sectors. Numerous applications incorporate machine learning, the Internet of Things (IoT), and other methods to offer intelligent features that enhance user experience. Businesses and individuals can successfully accomplish both commercial and noncommercial requirements with the help of such technologies. Due to the high risk and inefficiency of traditional procedures, organisations are expected to automate industrial processes. The aim of this research is to propose a novel technique for workload analysis in fog networks and a blockchain model for improving security in IIoT applications. Malicious activity in the IIoT network is analysed using a blockchain reinforcement Gaussian neural network, and manufacturing-industry workload analysis is carried out using a fog-cloud-based virtual machine multilayer perceptron model. The experimental analysis is carried out on various security datasets from the manufacturing industry in terms of latency, QoS, accuracy, reliability, and data integrity.
Additional Links: PMID-41286215
PubMed:
Citation:
show bibtex listing
@article {pmid41286215,
year = {2025},
author = {Samriya, JK and Kumar, A and Bhansali, A and Malik, M and Pan, SH and Arya, V and Alhalabi, W and Gupta, BB},
title = {Enhancing IIoT security through blockchain-enabled workload analysis in fog computing environments.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {42898},
pmid = {41286215},
issn = {2045-2322},
abstract = {Robots and software are utilized in industrial automation to run machinery and processes in a variety of sectors. Numerous applications incorporate machine learning, the Internet of Things (IoT), and other methods to offer intelligent features that enhance user experience. Businesses and individuals can successfully accomplish both commercial and noncommercial requirements with the help of such technologies. Due to the high risk and inefficiency of traditional procedures, organisations are expected to automate industrial processes. The aim of this research is to propose a novel technique for workload analysis in fog networks and a blockchain model for improving security in IIoT applications. Malicious activity in the IIoT network is analysed using a blockchain reinforcement Gaussian neural network, and manufacturing-industry workload analysis is carried out using a fog-cloud-based virtual machine multilayer perceptron model. The experimental analysis is carried out on various security datasets from the manufacturing industry in terms of latency, QoS, accuracy, reliability, and data integrity.},
}
RevDate: 2025-11-24
CmpDate: 2025-11-24
GWASHub: An Automated Cloud-Based Platform for Genome-Wide Association Study Meta-Analysis.
medRxiv : the preprint server for health sciences pii:2025.10.21.25338463.
Genome-wide association studies (GWAS) often aggregate data from millions of participants across multiple cohorts using meta-analysis to maximise power for genetic discovery. The increase in availability of genomic biobanks, together with a growing focus on phenotypic subgroups, genetic diversity, and sex-stratified analyses, has led GWAS meta-analyses to routinely produce hundreds of summary statistic files accompanied by detailed meta-data. Scalable infrastructures for data handling, quality control (QC), and meta-analysis workflows are essential to prevent errors, ensure reproducibility, and reduce the burden on researchers, allowing them to focus on downstream research and clinical translation. To address this need, we developed GWASHub, a secure cloud-based platform designed for the curation, processing and meta-analysis of GWAS summary statistics. GWASHub features i) private and secure project spaces, ii) automated file harmonisation and data validation, iii) GWAS meta-data capture, iv) customisable variant QC, v) GWAS meta-analysis, vi) analysis reporting and visualisation, and vii) results download. Users interact with the portal via an intuitive web interface built on Nuxt.js, a high-performance JavaScript framework. Data is securely managed through an Amazon Web Services (AWS) MySQL database and S3 block storage. Analysis jobs are distributed to AWS compute resources in a scalable fashion. The QC dashboard presents tabular and graphical QC outputs allowing manual review of individual datasets. Those passing QC are made available to the meta-analysis module. Individual datasets and meta-analysis results are available for download by project users with appropriate access permissions. In GWASHub, a "project" serves as a virtual workspace spanning an entire consortium, allowing individuals with different roles, such as data contributors (users) and project coordinators (main analysts), to collaborate securely under a unified framework. GWASHub has a flexible architecture to allow for ongoing development and incorporation of alternative quality control or meta-analysis procedures, to meet the specific needs of researchers. GWASHub was developed as a joint initiative by the HERMES Consortium and the Cardiovascular Knowledge Portal, and access to the platform is free and available upon request. GWASHub addresses a critical need in the genetics research community by providing a scalable, secure, and user-friendly platform for managing the complexity of large-scale GWAS meta-analyses. As the volume and diversity of GWAS data continue to grow, platforms like GWASHub may help to accelerate insights into the genetic architecture of complex traits.
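The meta-analysis module itself is not described algorithmically in the abstract; for orientation, the sketch below shows the standard fixed-effect inverse-variance weighting that such a module would typically apply per variant across cohorts. This is the textbook formula, not necessarily GWASHub's exact implementation.

# Standard fixed-effect inverse-variance-weighted meta-analysis for one variant
# across cohorts (textbook formula; not necessarily GWASHub's exact pipeline).
import numpy as np

def ivw_meta(betas: np.ndarray, ses: np.ndarray) -> tuple[float, float]:
    w = 1.0 / ses**2                                 # inverse-variance weights
    beta_meta = float(np.sum(w * betas) / np.sum(w))
    se_meta = float(np.sqrt(1.0 / np.sum(w)))
    return beta_meta, se_meta

betas = np.array([0.08, 0.11, 0.05])   # per-cohort effect sizes for one variant
ses = np.array([0.03, 0.04, 0.05])     # per-cohort standard errors
print(ivw_meta(betas, ses))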
Additional Links: PMID-41282854
Full Text:
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41282854,
year = {2025},
author = {Sunderland, N and Hite, D and Smadbeck, P and Hoang, Q and Jang, DK and Tragante, V and Jiang, JC and Shah, S and Paternoster, L and Burtt, NP and Flannick, J and Lumbers, RT},
title = {GWASHub: An Automated Cloud-Based Platform for Genome-Wide Association Study Meta-Analysis.},
journal = {medRxiv : the preprint server for health sciences},
volume = {},
number = {},
pages = {},
doi = {10.1101/2025.10.21.25338463},
pmid = {41282854},
abstract = {Genome-wide association studies (GWAS) often aggregate data from millions of participants across multiple cohorts using meta-analysis to maximise power for genetic discovery. The increase in availability of genomic biobanks, together with a growing focus on phenotypic subgroups, genetic diversity, and sex-stratified analyses, has led GWAS meta-analyses to routinely produce hundreds of summary statistic files accompanied by detailed meta-data. Scalable infrastructures for data handling, quality control (QC), and meta-analysis workflows are essential to prevent errors, ensure reproducibility, and reduce the burden on researchers, allowing them to focus on downstream research and clinical translation. To address this need, we developed GWASHub, a secure cloud-based platform designed for the curation, processing and meta-analysis of GWAS summary statistics. GWASHub features i) private and secure project spaces, ii) automated file harmonisation and data validation, iii) GWAS meta-data capture, iv) customisable variant QC, v) GWAS meta-analysis, vi) analysis reporting and visualisation, and vii) results download. Users interact with the portal via an intuitive web interface built on Nuxt.js, a high-performance JavaScript framework. Data is securely managed through an Amazon Web Services (AWS) MySQL database and S3 block storage. Analysis jobs are distributed to AWS compute resources in a scalable fashion. The QC dashboard presents tabular and graphical QC outputs allowing manual review of individual datasets. Those passing QC are made available to the meta-analysis module. Individual datasets and meta-analysis results are available for download by project users with appropriate access permissions. In GWASHub, a "project" serves as a virtual workspace spanning an entire consortium, allowing individuals with different roles, such as data contributors (users) and project coordinators (main analysts), to collaborate securely under a unified framework. GWASHub has a flexible architecture to allow for ongoing development and incorporation of alternative quality control or meta-analysis procedures, to meet the specific needs of researchers. GWASHub was developed as a joint initiative by the HERMES Consortium and the Cardiovascular Knowledge Portal, and access to the platform is free and available upon request. GWASHub addresses a critical need in the genetics research community by providing a scalable, secure, and user-friendly platform for managing the complexity of large-scale GWAS meta-analyses. As the volume and diversity of GWAS data continue to grow, platforms like GWASHub may help to accelerate insights into the genetic architecture of complex traits.},
}
RevDate: 2025-11-26
CmpDate: 2025-11-24
Hybrid artificial intelligence frameworks for otoscopic diagnosis: Integrating convolutional neural networks and large language models toward real-time mobile health.
Digital health, 11:20552076251395449.
BACKGROUND: Otitis media remains a significant global health concern, particularly in resource-limited settings where timely diagnosis is challenging. Artificial intelligence (AI) offers promising solutions to enhance diagnostic accuracy in mobile health applications.
OBJECTIVE: This study introduces a hybrid AI framework that integrates convolutional neural networks (CNNs) for image classification with large language models (LLMs) for clinical reasoning, enabling real-time otoscopic diagnosis.
METHODS: We developed a dual-path system combining CNN-based feature extraction with LLM-supported interpretation. The framework was optimized for mobile deployment, with lightweight models operating on-device and advanced reasoning performed via secure cloud APIs. A dataset of 10,465 otoendoscopic images (expanded from 2820 original clinical images through data augmentation) across 10 middle-ear conditions was used for training and validation. Diagnostic performance was benchmarked against clinicians of varying expertise.
RESULTS: The hybrid CNN-LLM system achieved an overall diagnostic accuracy of 97.6%, demonstrating the synergistic benefit of combining CNN-driven visual analysis with LLM-based clinical reasoning. The system delivered sub-200 ms feedback and achieved specialist-level performance in identifying common ear pathologies.
CONCLUSIONS: This hybrid AI framework substantially improves diagnostic precision and responsiveness in otoscopic evaluation. Its mobile-friendly design supports scalable deployment in telemedicine and primary care, offering a practical solution to enhance ear disease diagnosis in underserved regions.
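The dual-path design above can be outlined as follows: an on-device CNN produces a label and a confidence, and only a compact text summary is sent to a cloud LLM for reasoning. The sketch below shows that hand-off; query_llm is a placeholder for whatever secure cloud API is used, and the CNN and class list are stand-ins, not the authors' model.

# Outline of a CNN -> LLM hand-off for otoscopic triage. query_llm is a
# placeholder for a secure cloud LLM API; the CNN and classes are stand-ins.
import torch
import torch.nn as nn

CLASSES = ["normal", "acute otitis media", "otitis media with effusion"]   # example subset

cnn = nn.Sequential(   # stand-in for the lightweight on-device classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, len(CLASSES)),
)

def query_llm(prompt: str) -> str:
    return "[cloud LLM would reason over]: " + prompt   # placeholder, no real call

def diagnose(image: torch.Tensor) -> str:
    probs = torch.softmax(cnn(image.unsqueeze(0)), dim=1)[0]
    conf, idx = torch.max(probs, dim=0)
    prompt = ("Otoscopic image classified as '" + CLASSES[int(idx)] +
              f"' with confidence {conf.item():.2f}. Suggest next clinical steps.")
    return query_llm(prompt)

print(diagnose(torch.rand(3, 224, 224)))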
Additional Links: PMID-41278373
PubMed:
Citation:
show bibtex listing
@article {pmid41278373,
year = {2025},
author = {Chu, YC and Chen, YC and Hsu, CY and Kuo, CT and Cheng, YF and Lin, KH and Liao, WH},
title = {Hybrid artificial intelligence frameworks for otoscopic diagnosis: Integrating convolutional neural networks and large language models toward real-time mobile health.},
journal = {Digital health},
volume = {11},
number = {},
pages = {20552076251395449},
pmid = {41278373},
issn = {2055-2076},
abstract = {BACKGROUND: Otitis media remains a significant global health concern, particularly in resource-limited settings where timely diagnosis is challenging. Artificial intelligence (AI) offers promising solutions to enhance diagnostic accuracy in mobile health applications.
OBJECTIVE: This study introduces a hybrid AI framework that integrates convolutional neural networks (CNNs) for image classification with large language models (LLMs) for clinical reasoning, enabling real-time otoscopic diagnosis.
METHODS: We developed a dual-path system combining CNN-based feature extraction with LLM-supported interpretation. The framework was optimized for mobile deployment, with lightweight models operating on-device and advanced reasoning performed via secure cloud APIs. A dataset of 10,465 otoendoscopic images (expanded from 2820 original clinical images through data augmentation) across 10 middle-ear conditions was used for training and validation. Diagnostic performance was benchmarked against clinicians of varying expertise.
RESULTS: The hybrid CNN-LLM system achieved an overall diagnostic accuracy of 97.6%, demonstrating the synergistic benefit of combining CNN-driven visual analysis with LLM-based clinical reasoning. The system delivered sub-200 ms feedback and achieved specialist-level performance in identifying common ear pathologies.
CONCLUSIONS: This hybrid AI framework substantially improves diagnostic precision and responsiveness in otoscopic evaluation. Its mobile-friendly design supports scalable deployment in telemedicine and primary care, offering a practical solution to enhance ear disease diagnosis in underserved regions.},
}
RevDate: 2025-12-21
CmpDate: 2025-12-01
Exploring environmental sustainability of artificial intelligence in radiology: A scoping review.
European journal of radiology, 194:112558.
OBJECTIVE: Artificial intelligence (AI) is increasingly used in radiology, but its environmental implications have not yet been sufficiently studied. This study aims to synthesize the existing literature on the environmental sustainability of AI in radiology and to highlight strategies proposed to mitigate its impact.
METHODS: A scoping review was conducted following the Joanna Briggs Institute methodology. Searches across MEDLINE, Embase, CINAHL, and Web of Science focused on English and French publications from 2014 to 2024, targeting AI, environmental sustainability, and medical imaging. Eligible studies addressed environmental sustainability of AI in medical imaging. Conference abstracts, non-radiological or non-human studies, and unavailable full texts were excluded. Two independent reviewers assessed titles, abstracts, and full texts, while four reviewers conducted data extraction and analysis.
RESULTS: The search identified 3,723 results, of which 13 met inclusion criteria: nine research articles and four reviews. Four themes emerged: energy consumption (n = 10), carbon footprint (n = 6), computational resources (n = 9), and water consumption (n = 2). Reported metrics included CO2-equivalent emissions, training time, power use effectiveness, equivalent distance travelled by car, energy demands, and water consumption. Strategies to enhance sustainability included lightweight model architectures, quantization and pruning, efficient optimizers, and early stopping. Broader recommendations encompassed integrating carbon and energy metrics into AI evaluation, transitioning to cloud computing, and developing an eco-label for radiology AI systems.
CONCLUSIONS: Research on sustainable AI in radiology remains scarce but is rapidly growing. This review highlights key metrics and strategies to guide future research and practice toward more transparent, consistent, and environmentally responsible AI development in radiology.
ABBREVIATIONS: AI, Artificial intelligence; CNN, Convolutional neural networks; CT, Computed tomography; CPU, Central Processing Unit; DL, Deep learning; FLOP, Floating-point operation; GHG, Greenhouse gas; GPU, Graphics Processing Unit; LCA, Life Cycle Assessment; LLM, Large Language Model; MeSH, Medical Subject Headings; ML, Machine learning; MRI, Magnetic resonance imaging; NLP, Natural language processing; PUE, Power Usage Effectiveness; TPU, Tensor Processing Unit; USA, United States of America; ViT, Vision Transformer; WUE, Water Usage Effectiveness.
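Several of the metrics catalogued above (energy, PUE, CO2-equivalent emissions) combine in a simple way; as a back-of-the-envelope sketch under assumed numbers, the snippet below converts accelerator training time into an emissions estimate using the common energy times PUE times grid-intensity formula.

# Back-of-the-envelope training-emissions estimate (all numbers are assumed
# examples): CO2e = device power * hours * PUE * grid carbon intensity.
def training_co2e_kg(power_w: float, hours: float, pue: float,
                     grid_kgco2_per_kwh: float) -> float:
    energy_kwh = power_w * hours / 1000.0
    return energy_kwh * pue * grid_kgco2_per_kwh

# Assumed example: one 300 W GPU for 48 h in a PUE 1.5 data centre on a
# 0.4 kgCO2e/kWh grid.
print(round(training_co2e_kg(300, 48, 1.5, 0.4), 1), "kg CO2e")   # -> 8.6 kg CO2e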
Additional Links: PMID-41275851
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41275851,
year = {2026},
author = {Champendal, M and Lokaj, B and de Gevigney, VD and Brulé, G and Zaghir, J and Boiko, P and Lovis, C and Müller, H and Schmid, J and Ribeiro, RT},
title = {Exploring environmental sustainability of artificial intelligence in radiology: A scoping review.},
journal = {European journal of radiology},
volume = {194},
number = {},
pages = {112558},
doi = {10.1016/j.ejrad.2025.112558},
pmid = {41275851},
issn = {1872-7727},
mesh = {*Artificial Intelligence ; *Radiology/methods ; Humans ; *Conservation of Natural Resources ; Carbon Footprint ; *Diagnostic Imaging ; },
abstract = {OBJECTIVE: Artificial intelligence (AI) is increasingly used in radiology, but its environmental implications have not yet been sufficiently studied. This study aims to synthesize the existing literature on the environmental sustainability of AI in radiology and to highlight strategies proposed to mitigate its impact.
METHODS: A scoping review was conducted following the Joanna Briggs Institute methodology. Searches across MEDLINE, Embase, CINAHL, and Web of Science focused on English and French publications from 2014 to 2024, targeting AI, environmental sustainability, and medical imaging. Eligible studies addressed environmental sustainability of AI in medical imaging. Conference abstracts, non-radiological or non-human studies, and unavailable full texts were excluded. Two independent reviewers assessed titles, abstracts, and full texts, while four reviewers conducted data extraction and analysis.
RESULTS: The search identified 3,723 results, of which 13 met inclusion criteria: nine research articles and four reviews. Four themes emerged: energy consumption (n = 10), carbon footprint (n = 6), computational resources (n = 9), and water consumption (n = 2). Reported metrics included CO2-equivalent emissions, training time, power use effectiveness, equivalent distance travelled by car, energy demands, and water consumption. Strategies to enhance sustainability included lightweight model architectures, quantization and pruning, efficient optimizers, and early stopping. Broader recommendations encompassed integrating carbon and energy metrics into AI evaluation, transitioning to cloud computing, and developing an eco-label for radiology AI systems.
CONCLUSIONS: Research on sustainable AI in radiology remains scarce but is rapidly growing. This review highlights key metrics and strategies to guide future research and practice toward more transparent, consistent, and environmentally responsible AI development in radiology.
ABBREVIATIONS: AI, Artificial intelligence; CNN, Convolutional neural networks; CT, Computed tomography; CPU, Central Processing Unit; DL, Deep learning; FLOP, Floating-point operation; GHG, Greenhouse gas; GPU, Graphics Processing Unit; LCA, Life Cycle Assessment; LLM, Large Language Model; MeSH, Medical Subject Headings; ML, Machine learning; MRI, Magnetic resonance imaging; NLP, Natural language processing; PUE, Power Usage Effectiveness; TPU, Tensor Processing Unit; USA, United States of America; ViT, Vision Transformer; WUE, Water Usage Effectiveness.},
}
MeSH Terms:
show MeSH Terms
*Artificial Intelligence
*Radiology/methods
Humans
*Conservation of Natural Resources
Carbon Footprint
*Diagnostic Imaging
RevDate: 2025-11-27
An intelligent job scheduling and real-time resource optimization for edge-cloud continuum in next generation networks.
Scientific reports, 15(1):41534.
While cloud-edge infrastructures demand flexible and sophisticated resource management, 6G networks necessitate very low latency, high dependability, and broad connectivity. Cloud computing's scalability and agility enable it to prioritize service delivery at various levels of detail while serving billions of users. However, due to resource inefficiencies, virtual machine (VM) issues, response delays, and deadline violations, real-time task scheduling is challenging in these settings. This study develops an AI-powered task scheduling system based on the recently published Unfair Semi-Greedy (USG) algorithm, the Earliest Deadline First (EDF) algorithm, and the Enhanced Deadline Zero-Laxity (EDZL) algorithm. The system chooses the best scheduler based on load and task criticality by combining reinforcement learning adaptive logic with a dynamic resource table. Over 10,000 soft real-time task sets were utilized to evaluate the framework across various cloud-edge scenarios. Compared to standalone EDF and EDZL solutions, the proposed hybrid method reduced average response times by up to 26.3% and deadline violations by 41.7%. The USG component achieved 98.6% task schedulability under saturated edge settings, even under significant changes in workload. These findings suggest that the method might be useful for applications that need a rapid turnaround. This architecture is especially well-suited for autonomous systems, remote healthcare, and immersive media, all of which require low latency and dependability, and it may be extended to AI-native 6G networks.
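Of the schedulers combined in the entry above, EDF is the simplest to show; the sketch below picks the ready task with the earliest deadline and includes the laxity computation that EDZL-style policies use to promote zero-laxity tasks. The Task fields, units, and example values are illustrative.

# Minimal EDF selection with the laxity check used by EDZL-style policies;
# fields, units, and example tasks are illustrative.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float    # absolute deadline (s)
    remaining: float   # remaining execution time (s)

def laxity(t: Task, now: float) -> float:
    return t.deadline - now - t.remaining   # zero or negative laxity: must run now

def pick_edf(ready: list[Task]) -> Task:
    return min(ready, key=lambda t: t.deadline)

ready = [Task("video", 12.0, 3.0), Task("telemetry", 9.5, 1.0), Task("backup", 30.0, 5.0)]
now = 5.0
urgent = [t for t in ready if laxity(t, now) <= 0]   # EDZL would promote these first
print((urgent or [pick_edf(ready)])[0].name)         # -> telemetry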
Additional Links: PMID-41274945
PubMed:
Citation:
show bibtex listing
@article {pmid41274945,
year = {2025},
author = {Naeem, AB and Senapati, B and Rasheed, J and Baili, J and Osman, O},
title = {An intelligent job scheduling and real-time resource optimization for edge-cloud continuum in next generation networks.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41534},
pmid = {41274945},
issn = {2045-2322},
support = {RGP2/109/46//Deanship of Research and Graduate Studies, King Khalid University, Saudi Arabia/ ; },
abstract = {While cloud-edge infrastructures demand flexible and sophisticated resource management, 6G networks necessitate very low latency, high dependability, and broad connectivity. Cloud computing's scalability and agility enable it to prioritize service delivery at various levels of detail while serving billions of users. However, due to resource inefficiencies, virtual machine (VM) issues, response delays, and deadline violations, real-time task scheduling is challenging in these settings. This study develops an AI-powered task scheduling system based on the recently published Unfair Semi-Greedy (USG) algorithm, the Earliest Deadline First (EDF) algorithm, and the Enhanced Deadline Zero-Laxity (EDZL) algorithm. The system chooses the best scheduler based on load and task criticality by combining reinforcement learning adaptive logic with a dynamic resource table. Over 10,000 soft real-time task sets were utilized to evaluate the framework across various cloud-edge scenarios. Compared to standalone EDF and EDZL solutions, the proposed hybrid method reduced average response times by up to 26.3% and deadline violations by 41.7%. The USG component achieved 98.6% task schedulability under saturated edge settings, even under significant changes in workload. These findings suggest that the method might be useful for applications that need a rapid turnaround. This architecture is especially well-suited for autonomous systems, remote healthcare, and immersive media, all of which require low latency and dependability, and it may be extended to AI-native 6G networks.},
}
RevDate: 2025-11-22
AlphaFold Protein Structure Database 2025: a redesigned interface and updated structural coverage.
Nucleic acids research pii:8340156 [Epub ahead of print].
The AlphaFold Protein Structure Database (AFDB; https://alphafold.ebi.ac.uk), developed by EMBL-EBI and Google DeepMind, provides open access to hundreds of millions of high-accuracy protein structure predictions, transforming research in structural biology and the wider life sciences. Since its launch, AFDB has become a widely used bioinformatics resource, integrated into major databases, visualization platforms, and analysis pipelines. Here, we report the update of the database to align with the UniProt 2025_03 release, along with a comprehensive redesign of the entry page to enhance usability, accessibility, and structural interpretation. The new design integrates annotations directly with an interactive 3D viewer and introduces dedicated domains and summary tabs. Structural coverage has also been updated to include isoforms plus underlying multiple sequence alignments. Data are available through the website, FTP, Google Cloud, and updated APIs. Together, these advances reinforce AFDB as a sustainable resource for exploring protein sequence-structure relationships.
Additional Links: PMID-41273079
Publisher:
PubMed:
Citation:
show bibtex listing
@article {pmid41273079,
year = {2025},
author = {Bertoni, D and Tsenkov, M and Magana, P and Nair, S and Pidruchna, I and Querino Lima Afonso, M and Midlik, A and Paramval, U and Lawal, D and Tanweer, A and Last, M and Patel, R and Laydon, A and Lasecki, D and Dietrich, N and Tomlinson, H and Žídek, A and Green, T and Kovalevskiy, O and Lau, A and Kandathil, S and Bordin, N and Sillitoe, I and Mirdita, M and Jones, D and Orengo, C and Steinegger, M and Fleming, JR and Velankar, S},
title = {AlphaFold Protein Structure Database 2025: a redesigned interface and updated structural coverage.},
journal = {Nucleic acids research},
volume = {},
number = {},
pages = {},
doi = {10.1093/nar/gkaf1226},
pmid = {41273079},
issn = {1362-4962},
support = {20-BBSRC/NSF-BIO//BBSRC/ ; BB/Y000455/1//BBSRC/ ; BB/W018802/1//BBSRC/ ; BB/T019409/1//BBSRC/ ; BB/W008556/1//BBSRC/ ; 221327/Z/20/Z//Wellcome Trust/ ; 310300/Z/24/Z//Wellcome Trust/ ; RS-2020-NR049543//National Research Foundation/ ; RS-2021-NR061659//National Research Foundation/ ; RS-2021-NR056571//National Research Foundation/ ; RS-2024-00396026//National Research Foundation/ ; NNF24SA0092560//Creative-Pioneering Researchers Program and Novo Nordisk Foundation/ ; RS-2023-00250470//National Research Foundation of Korea/ ; //European Molecular Biology Laboratory/ ; //European Molecular Biology Laboratory/ ; },
abstract = {The AlphaFold Protein Structure Database (AFDB; https://alphafold.ebi.ac.uk), developed by EMBL-EBI and Google DeepMind, provides open access to hundreds of millions of high-accuracy protein structure predictions, transforming research in structural biology and the wider life sciences. Since its launch, AFDB has become a widely used bioinformatics resource, integrated into major databases, visualization platforms, and analysis pipelines. Here, we report the update of the database to align with the UniProt 2025_03 release, along with a comprehensive redesign of the entry page to enhance usability, accessibility, and structural interpretation. The new design integrates annotations directly with an interactive 3D viewer and introduces dedicated domains and summary tabs. Structural coverage has also been updated to include isoforms plus underlying multiple sequence alignments. Data are available through the website, FTP, Google Cloud, and updated APIs. Together, these advances reinforce AFDB as a sustainable resource for exploring protein sequence-structure relationships.},
}
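The record above notes that AFDB data are available through the website, FTP, Google Cloud, and updated APIs. The sketch below retrieves the prediction record for one UniProt accession and downloads its coordinate file; it assumes the publicly documented prediction endpoint https://alphafold.ebi.ac.uk/api/prediction/{accession} and JSON fields such as pdbUrl and cifUrl, which should be verified against the current API documentation.

# Minimal sketch: fetch an AlphaFold DB prediction record and its coordinate file.
# Endpoint path and JSON field names are assumptions based on the public docs.
import requests

AFDB_API = "https://alphafold.ebi.ac.uk/api/prediction/{accession}"

def fetch_prediction(accession: str) -> list:
    # Return the list of prediction entries AFDB reports for one accession.
    resp = requests.get(AFDB_API.format(accession=accession), timeout=30)
    resp.raise_for_status()
    return resp.json()

def download_model(entry: dict, path: str) -> None:
    # Download the coordinate file referenced by a prediction entry; the entry
    # is assumed to expose a 'pdbUrl' key, with 'cifUrl' as a fallback.
    url = entry.get("pdbUrl") or entry.get("cifUrl")
    if url is None:
        raise KeyError("no coordinate URL found in this entry")
    with requests.get(url, timeout=60, stream=True) as r:
        r.raise_for_status()
        with open(path, "wb") as fh:
            for chunk in r.iter_content(chunk_size=1 << 16):
                fh.write(chunk)

if __name__ == "__main__":
    entries = fetch_prediction("P69905")   # human haemoglobin alpha, as an example
    download_model(entries[0], "AF-P69905-model.pdb")
    print(f"retrieved {len(entries)} prediction entries; first model saved locally")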
RevDate: 2026-01-01
CmpDate: 2025-12-30
An integrated queuing and certainty factor theory model for efficient edge computing in remote patient monitoring systems.
Scientific reports, 15(1):44973.
Remote Patient Monitoring Systems (RPMS) require efficient resource management to prioritize life-critical data in latency-sensitive healthcare environments. This research introduces an Integrated Queuing and Certainty Factor Theory (IQCT) model aimed at optimizing bandwidth allocation and task scheduling within fog-edge-cloud architectures. IQCT prioritizes patient requests in real time by classifying them into emergency, warning, and normal categories using certainty factor (CF)-based urgency assessment. The model was simulated on Raspberry Pi fog nodes with the UCI Heart Disease dataset, and its performance was benchmarked against first-come-first-served (FCFS), priority queuing (PQ), and weighted fair queuing (WFQ) using metrics such as latency, energy consumption, and response time under varying workloads. IQCT reduced latency for emergency requests by 54.5% and improved network efficiency by 30.08% compared to FCFS. It also lowered response and execution times by 49.5% and 36%, respectively, and decreased fog-layer energy consumption by 30.8%. Scalability tests confirmed stable quality of service (QoS) under peak loads, demonstrating adaptability to dynamic demand. Combining PQ with CF theory thus yields more efficient and better-optimized performance in RPMS, with the IQCT model reducing emergency-request latency by 54.5% relative to existing models.
Additional Links: PMID-41272028
PubMed:
Citation:
show bibtex listing
@article {pmid41272028,
year = {2025},
author = {RahimiZadeh, K and Beheshti, A and Javadi, B and Yazdani, A},
title = {An integrated queuing and certainty factor theory model for efficient edge computing in remote patient monitoring systems.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {44973},
pmid = {41272028},
issn = {2045-2322},
support = {//Shiraz Transplant Research Center, Shiraz University of Medical Sciences/ ; },
mesh = {Humans ; Monitoring, Physiologic/methods ; *Models, Theoretical ; Cloud Computing ; Algorithms ; Telemedicine ; Remote Patient Monitoring ; },
abstract = {Remote Patient Monitoring Systems (RPMS) require efficient resource management to prioritize life-critical data in latency-sensitive healthcare environments. This research introduces an Integrated Queuing and Certainty Factor Theory (IQCT) model aimed at optimizing bandwidth allocation and task scheduling within fog-edge-cloud architectures. IQCT prioritizes patient requests in real time by classifying them into emergency, warning, and normal categories using certainty factor (CF)-based urgency assessment. The model was simulated on Raspberry Pi fog nodes with the UCI Heart Disease dataset, and its performance was benchmarked against first-come-first-served (FCFS), priority queuing (PQ), and weighted fair queuing (WFQ) using metrics such as latency, energy consumption, and response time under varying workloads. IQCT reduced latency for emergency requests by 54.5% and improved network efficiency by 30.08% compared to FCFS. It also lowered response and execution times by 49.5% and 36%, respectively, and decreased fog-layer energy consumption by 30.8%. Scalability tests confirmed stable quality of service (QoS) under peak loads, demonstrating adaptability to dynamic demand. Combining PQ with CF theory thus yields more efficient and better-optimized performance in RPMS, with the IQCT model reducing emergency-request latency by 54.5% relative to existing models.},
}
MeSH Terms:
show MeSH Terms
Humans
Monitoring, Physiologic/methods
*Models, Theoretical
Cloud Computing
Algorithms
Telemedicine
Remote Patient Monitoring
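Two ingredients of the IQCT model described above, certainty-factor combination and class-based priority queuing, can be sketched briefly. The Python below uses the classic MYCIN combination rule CF = CF1 + CF2 * (1 - CF1) for positive evidence and a three-class priority queue; the vital-sign rules, thresholds, and names are illustrative and are not taken from the paper.

# Minimal sketch of CF-based triage feeding a priority queue, in the spirit of
# the IQCT model above. Thresholds and vital-sign rules are illustrative only.
import heapq
from itertools import count

def combine_cf(cf1: float, cf2: float) -> float:
    # Combine two positive certainty factors: CF = CF1 + CF2 * (1 - CF1).
    return cf1 + cf2 * (1.0 - cf1)

def classify(vitals: dict) -> tuple:
    # Map evidence from vital signs to (priority, label); 0 is most urgent.
    cf = 0.0
    if vitals.get("heart_rate", 0) > 120:
        cf = combine_cf(cf, 0.7)
    if vitals.get("spo2", 100) < 90:
        cf = combine_cf(cf, 0.8)
    if cf >= 0.8:
        return 0, "emergency"
    if cf >= 0.4:
        return 1, "warning"
    return 2, "normal"

class RequestQueue:
    # Priority queue that serves emergency requests before warning/normal ones.
    def __init__(self):
        self._heap, self._seq = [], count()   # seq keeps FIFO order within a class

    def submit(self, patient_id: str, vitals: dict) -> None:
        priority, label = classify(vitals)
        heapq.heappush(self._heap, (priority, next(self._seq), patient_id, label))

    def next_request(self):
        priority, _, patient_id, label = heapq.heappop(self._heap)
        return patient_id, label

if __name__ == "__main__":
    q = RequestQueue()
    q.submit("patient-17", {"heart_rate": 135, "spo2": 86})   # combined CF 0.94 -> emergency
    q.submit("patient-03", {"heart_rate": 80,  "spo2": 97})   # CF 0.0 -> normal
    q.submit("patient-09", {"heart_rate": 125, "spo2": 95})   # CF 0.7 -> warning
    print(q.next_request())   # ('patient-17', 'emergency') is served first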
RevDate: 2025-11-24
Load balancing for cloud computing using optimized cluster based federated learning.
Scientific reports, 15(1):41328.
Task scheduling and load balancing in cloud computing represent challenging NP-hard optimization problems that often result in inefficient resource utilization, elevated energy consumption, and prolonged execution times. This study introduces a novel Cluster-based Federated Learning (FL) framework that addresses system heterogeneity by clustering virtual machines (VMs) with similar characteristics via unsupervised learning, enabling dynamic and efficient task allocation. The proposed method leverages VM capabilities and a derivative-based objective function to optimize scheduling. We benchmark the approach against established metaheuristic algorithms including Whale Optimization Algorithm (WOA), Butterfly Optimization (BFO), Mayfly Optimization (MFO), and Fire Hawk Optimization (FHO). Evaluated using makespan, idle time, and degree of imbalance, the Cluster-based FL model coupled with the COA algorithm consistently outperforms existing methods, achieving up to a 10% reduction in makespan, a 15% decrease in idle time, and a significant improvement in load balancing across VMs. These results highlight the efficacy of integrating clustering within federated learning paradigms to deliver scalable, adaptive, and resilient cloud resource management solutions.
Additional Links: PMID-41271909
PubMed:
Citation:
show bibtex listing
@article {pmid41271909,
year = {2025},
author = {Chennam, KK and V, UM and Aluvalu, R and Chinthaginjala, R and AbWahab, M and Zhao, X and Tolba, A},
title = {Load balancing for cloud computing using optimized cluster based federated learning.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41328},
pmid = {41271909},
issn = {2045-2322},
abstract = {Task scheduling and load balancing in cloud computing represent challenging NP-hard optimization problems that often result in inefficient resource utilization, elevated energy consumption, and prolonged execution times. This study introduces a novel Cluster-based Federated Learning (FL) framework that addresses system heterogeneity by clustering virtual machines (VMs) with similar characteristics via unsupervised learning, enabling dynamic and efficient task allocation. The proposed method leverages VM capabilities and a derivative-based objective function to optimize scheduling. We benchmark the approach against established metaheuristic algorithms including Whale Optimization Algorithm (WOA), Butterfly Optimization (BFO), Mayfly Optimization (MFO), and Fire Hawk Optimization (FHO). Evaluated using makespan, idle time, and degree of imbalance, the Cluster-based FL model coupled with the COA algorithm consistently outperforms existing methods, achieving up to a 10% reduction in makespan, a 15% decrease in idle time, and a significant improvement in load balancing across VMs. These results highlight the efficacy of integrating clustering within federated learning paradigms to deliver scalable, adaptive, and resilient cloud resource management solutions.},
}
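One ingredient of the framework above, grouping heterogeneous VMs by capability and dispatching tasks within the best-matching group, can be illustrated compactly. The sketch below uses scikit-learn's KMeans on made-up VM features and a least-loaded rule inside the chosen cluster; the federated-learning aggregation and the paper's derivative-based objective are not reproduced, and all numbers are synthetic.

# Minimal sketch: cluster VMs by capability, then send each task to the
# least-loaded VM of the cluster whose centroid best matches the task's demand.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Each VM described by (MIPS, RAM in GB, bandwidth in Mbps) -- illustrative features.
vms = rng.uniform(low=[500, 2, 100], high=[4000, 32, 1000], size=(20, 3))
vm_load = np.zeros(len(vms))          # accumulated work per VM (arbitrary units)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(vms)
vm_cluster = kmeans.labels_

def assign(task_demand: np.ndarray) -> int:
    # 1. pick the cluster whose centroid is closest to the task's demand vector
    cluster = int(np.argmin(np.linalg.norm(kmeans.cluster_centers_ - task_demand, axis=1)))
    members = np.flatnonzero(vm_cluster == cluster)
    # 2. within that cluster, pick the least-loaded VM (simple load balancing)
    chosen = members[int(np.argmin(vm_load[members]))]
    vm_load[chosen] += task_demand[0] / vms[chosen, 0]   # rough execution-time estimate
    return int(chosen)

if __name__ == "__main__":
    tasks = rng.uniform(low=[200, 1, 50], high=[3000, 16, 800], size=(50, 3))
    placements = [assign(t) for t in tasks]
    print("first 10 placements:", placements[:10])
    print("max/min VM load:", vm_load.max().round(3), vm_load.min().round(3))

Comparing the maximum and minimum accumulated loads gives a crude degree-of-imbalance measure in the spirit of the metrics the abstract reports.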
RevDate: 2025-11-24
CmpDate: 2025-11-21
Intelligent feature fusion with dynamic graph convolutional recurrent network for robust object detection to assist individuals with disabilities in a smart IoT edge-cloud environment.
Scientific reports, 15(1):41228.
Smart Internet of Things (IoT)-edge-cloud computing defines intelligent systems in which IoT devices generate data at the network's edge; the data are processed and analyzed on local edge devices before transmission to the cloud for deeper insights and storage. Visual impairment, such as blindness, has a profound effect on a person's psychological and cognitive functions, so assistive models can help mitigate these adverse effects and improve the quality of life for individuals who are blind. Much current research concentrates on mobility, navigation, and object detection (OD) in smart devices and advanced technologies for visually challenged people. OD is a core computer-vision task that involves locating and categorizing objects within an image, enabling applications such as augmented reality and image retrieval. Deep learning (DL) models have recently emerged as an effective technique for learning feature representations from data and have driven significant advances in OD; a DL model trained on many images of objects is therefore highly applicable to assisting visually impaired individuals. This paper presents an intelligent Feature Fusion with Dynamic Graph Convolutional Recurrent Network for Robust Object Detection (FFDGCRN-ROD) approach to assist individuals with disabilities. The aim is an intelligent OD framework for individuals with disabilities in a smart IoT edge-cloud environment that enables monitoring and assistive decision-making. First, the image pre-processing phase involves resizing, normalization, and image enhancement to remove noise and improve image quality. For the OD process, the FFDGCRN-ROD approach employs Faster R-CNN to automatically identify and locate specific targets within the images. The fusion models, namely CapsNet, SqueezeNet, and Inceptionv3, are used for feature extraction. Finally, the FFDGCRN-ROD model applies a dynamic adaptive graph convolutional recurrent network (DA-GCRN) to accurately detect and classify objects for visually impaired people. Experimental validation of the FFDGCRN-ROD methodology was performed on the Indoor OD dataset, and a comparative analysis demonstrated a superior accuracy of 99.65% over existing techniques.
Additional Links: PMID-41271840
PubMed:
Citation:
show bibtex listing
@article {pmid41271840,
year = {2025},
author = {Alohali, MA and Alanazi, F and Alsahafi, YA and Yaseen, I},
title = {Intelligent feature fusion with dynamic graph convolutional recurrent network for robust object detection to assist individuals with disabilities in a smart IoT edge-cloud environment.},
journal = {Scientific reports},
volume = {15},
number = {1},
pages = {41228},
pmid = {41271840},
issn = {2045-2322},
mesh = {Humans ; Deep Learning ; *Internet of Things ; Neural Networks, Computer ; *Cloud Computing ; *Persons with Disabilities ; *Persons with Visual Disabilities ; Self-Help Devices ; Algorithms ; Image Processing, Computer-Assisted/methods ; },
abstract = {Smart Internet of Things (IoT)-edge-cloud computing defines intelligent systems in which IoT devices generate data at the network's edge; the data are processed and analyzed on local edge devices before transmission to the cloud for deeper insights and storage. Visual impairment, such as blindness, has a profound effect on a person's psychological and cognitive functions, so assistive models can help mitigate these adverse effects and improve the quality of life for individuals who are blind. Much current research concentrates on mobility, navigation, and object detection (OD) in smart devices and advanced technologies for visually challenged people. OD is a core computer-vision task that involves locating and categorizing objects within an image, enabling applications such as augmented reality and image retrieval. Deep learning (DL) models have recently emerged as an effective technique for learning feature representations from data and have driven significant advances in OD; a DL model trained on many images of objects is therefore highly applicable to assisting visually impaired individuals. This paper presents an intelligent Feature Fusion with Dynamic Graph Convolutional Recurrent Network for Robust Object Detection (FFDGCRN-ROD) approach to assist individuals with disabilities. The aim is an intelligent OD framework for individuals with disabilities in a smart IoT edge-cloud environment that enables monitoring and assistive decision-making. First, the image pre-processing phase involves resizing, normalization, and image enhancement to remove noise and improve image quality. For the OD process, the FFDGCRN-ROD approach employs Faster R-CNN to automatically identify and locate specific targets within the images. The fusion models, namely CapsNet, SqueezeNet, and Inceptionv3, are used for feature extraction. Finally, the FFDGCRN-ROD model applies a dynamic adaptive graph convolutional recurrent network (DA-GCRN) to accurately detect and classify objects for visually impaired people. Experimental validation of the FFDGCRN-ROD methodology was performed on the Indoor OD dataset, and a comparative analysis demonstrated a superior accuracy of 99.65% over existing techniques.},
}
MeSH Terms:
show MeSH Terms
Humans
Deep Learning
*Internet of Things
Neural Networks, Computer
*Cloud Computing
*Persons with Disabilities
*Persons with Visual Disabilities
Self-Help Devices
Algorithms
Image Processing, Computer-Assisted/methods
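The detect-then-fuse pipeline described above can be outlined with standard components. The sketch below uses torchvision's Faster R-CNN for localization and concatenates globally pooled features from two backbones for each detected crop; SqueezeNet comes from the abstract, while ResNet-18 stands in for the Inceptionv3 and CapsNet branches and a linear layer stands in for the DA-GCRN classifier. A recent torchvision (0.13+) is assumed for the weights=None argument, the weights here are untrained, and this illustrates the structure only, not the authors' pipeline.

# Minimal sketch of detect-then-fuse: Faster R-CNN proposes boxes, and features
# from two backbones are concatenated per crop before a classifier head.
import torch
import torch.nn as nn
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resize

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.squeeze = torchvision.models.squeezenet1_1(weights=None).features  # 512-channel maps
        resnet = torchvision.models.resnet18(weights=None)
        resnet.fc = nn.Identity()                       # 512-d global feature
        self.resnet = resnet
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512 + 512, num_classes)   # fused feature -> class scores

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        f1 = self.pool(self.squeeze(crop)).flatten(1)   # (B, 512) from SqueezeNet
        f2 = self.resnet(crop)                          # (B, 512) from the stand-in backbone
        return self.head(torch.cat([f1, f2], dim=1))    # simple concatenation fusion

if __name__ == "__main__":
    detector = fasterrcnn_resnet50_fpn(weights=None).eval()
    classifier = FusionClassifier(num_classes=10).eval()
    image = torch.rand(3, 480, 640)                     # stand-in for a preprocessed frame
    with torch.no_grad():
        detections = detector([image])[0]               # dict with 'boxes', 'scores', 'labels'
        for box in detections["boxes"][:5]:             # untrained weights => arbitrary boxes
            x1, y1, x2, y2 = [int(v) for v in box]
            if x2 <= x1 or y2 <= y1:
                continue
            crop = resize(image[:, y1:y2, x1:x2], [224, 224]).unsqueeze(0)
            print(classifier(crop).argmax(dim=1).item())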