Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.
ESP: PubMed Auto Bibliography
Created: 05 Mar 2026 at 01:42

Cloud Computing
Wikipedia: Cloud Computing Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power during periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.
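As a toy illustration of how pay-as-you-go charges can outrun a fixed grant budget, the sketch below estimates a monthly bill; all rates and usage figures are hypothetical and do not reflect any real provider's pricing.

```python
# Hypothetical pay-as-you-go cost model. Rates are made up for
# illustration; real cloud pricing varies by provider and region.
HOURLY_RATE = 3.06      # $/hour for a large VM (hypothetical)
STORAGE_RATE = 0.023    # $/GB-month of object storage (hypothetical)

def monthly_cost(vm_hours, storage_gb):
    """Estimate one month's bill from compute hours and stored GB."""
    return vm_hours * HOURLY_RATE + storage_gb * STORAGE_RATE

baseline = monthly_cost(200, 500)        # planned, steady workload
runaway = monthly_cost(200 + 720, 500)   # one VM accidentally left running all month

print(f"baseline:          ${baseline:.2f}")
print(f"with forgotten VM: ${runaway:.2f}")
```

A single forgotten instance more than quadruples the bill in this toy scenario, which is exactly the kind of overrun a grant budget cannot easily absorb.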
Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion
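The query above can be reproduced programmatically against NCBI's E-utilities `esearch` endpoint. The sketch below only builds the request URL (the endpoint and parameter names are NCBI's documented ones; the term string is copied from the query above):

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint for PubMed
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

term = ('( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] '
        'OR google[TIAB] OR "microsoft azure"[TIAB]) ) '
        'NOT pmcbook NOT ispreviousversion')

params = {"db": "pubmed", "term": term, "retmax": 100, "retmode": "json"}
url = f"{BASE}?{urlencode(params)}"
print(url)  # fetch this URL (e.g. with urllib.request) to get matching PMIDs
```

Fetching the printed URL returns a JSON list of PMIDs, from which records like the ones below can be retrieved with the companion `efetch` endpoint.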
Citations: The Papers (from PubMed®)
RevDate: 2026-03-03
Feature Compression for Cloud-Edge Multimodal 3D Object Detection.
IEEE transactions on pattern analysis and machine intelligence, PP: [Epub ahead of print].
Machine vision systems, which can efficiently manage extensive visual perception tasks, are becoming increasingly popular in industrial production and daily life. Due to the challenge of simultaneously obtaining accurate depth and texture information with a single sensor, multimodal data captured by cameras and LiDAR is commonly used to enhance performance. Additionally, cloud-edge cooperation has emerged as a novel computing approach to improve user experience and ensure data security in machine vision systems. This paper proposes a pioneering solution to address the feature compression problem in multimodal 3D object detection. Given a sparse tensor-based object detection network at the edge device, we introduce two modes to accommodate different application requirements: Transmission-Friendly Feature Compression (T-FFC) and Accuracy-Friendly Feature Compression (A-FFC). In T-FFC mode, only the output of the last layer of the network's backbone is transmitted from the edge device. The received feature is processed at the cloud device through a channel expansion module and two spatial upsampling modules to generate multi-scale features. In A-FFC mode, we expand upon the T-FFC mode by transmitting two additional types of features. These added features enable the cloud device to generate more accurate multi-scale features. Experimental results on the KITTI dataset using the VirConv-L detection network showed that T-FFC was able to compress the features by a factor of 4933 with less than a 3% reduction in detection performance. On the other hand, A-FFC compressed the features by a factor of about 733 with almost no degradation in detection performance. We also designed optional residual extraction and 3D object reconstruction modules to facilitate the reconstruction of detected objects. The reconstructed objects effectively reflected the shape, occlusion, and details of the original objects.
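The compression factors reported above are ratios of raw feature size to transmitted size; as simple arithmetic (the byte counts below are hypothetical, chosen only to make the 4933x figure concrete):

```python
def compression_factor(raw_bytes, compressed_bytes):
    """Ratio of raw feature size to size actually sent over the network."""
    return raw_bytes / compressed_bytes

# Hypothetical sizes: a 4933x factor means roughly 20 MB of raw
# backbone features shrink to about 4 KB on the wire.
raw = 20_000_000
print(round(compression_factor(raw, raw / 4933)))
```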
Additional Links: PMID-41774639
@article {pmid41774639,
year = {2026},
author = {Tian, C and Li, Z and Yuan, H and Hamzaoui, R and Shen, L and Kwong, S},
title = {Feature Compression for Cloud-Edge Multimodal 3D Object Detection.},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TPAMI.2026.3669471},
pmid = {41774639},
issn = {1939-3539},
}
RevDate: 2026-03-03
Local-Global-Graph Network-Based Biokey Generation with Electrocardiogram Signal and Lightweight Authentication in Cloud-Based Internet of Medical Things Networks.
Critical reviews in biomedical engineering, 54(1):67-95.
The internet of medical things (IoMT) is regarded as a promising framework, which is used to expand and improve telemedicine services. Cloud-based IoMT refers to the integration of medical devices and sensors with cloud computing infrastructure, enabling real-time remote data collection, processing, storage, and analysis. This architecture supports the efficient management of patient health information and facilitates advanced telemedicine services by offering scalable, secure, and accessible healthcare solutions. Ensuring secure access and communication in such systems is critical, as vulnerabilities in the network can expose sensitive patient data to significant risks. Among various security measures, authentication using biomedical signals, particularly electrocardiogram (ECG) signals, is gaining attention due to their unique, individual-specific characteristics. Therefore, this paper develops a new approach called local-global-graph network-based biokey generation (LGGNet-BioKey) for authentication in Cloud-based IoMT. Initially, the Cloud-based IoMT network is simulated, and it includes three entities: a cloud server, a gateway, and a patient. First, the public key and security parameters are initialized, and then the entities are registered with the cloud server. Next, the key generation is done using LGGNet, and then the BioKey generation is performed using an ECG signal. Next, the lightweight authentication is done and lastly, attribute-based encryption and decryption are performed in the data preservation phase. Furthermore, the LGGNet-BioKey model measured an execution time, memory usage, and key generation time of 3.772 sec, 9.096 MB, and 3.771 sec, respectively.
Additional Links: PMID-41774489
@article {pmid41774489,
year = {2026},
author = {Nagarathinam, SKA and Bhukya, RN},
title = {Local-Global-Graph Network-Based Biokey Generation with Electrocardiogram Signal and Lightweight Authentication in Cloud-Based Internet of Medical Things Networks.},
journal = {Critical reviews in biomedical engineering},
volume = {54},
number = {1},
pages = {67-95},
doi = {10.1615/CritRevBiomedEng.2025058925},
pmid = {41774489},
issn = {1943-619X},
}
RevDate: 2026-03-02
Benchmarking multiple instance learning architectures from patches to pathology for prostate cancer detection and grading using attention-based weak supervision.
Scientific reports pii:10.1038/s41598-026-39196-x [Epub ahead of print].
Histopathological evaluation is necessary for the diagnosis and grading of prostate cancer, which is still one of the most common cancers in men globally. Traditional evaluation is time-consuming, prone to inter-observer variability, and challenging to scale. The clinical usefulness of current AI systems is limited by the need for comprehensive pixel-level annotations. The objective of this research is to develop and evaluate a large-scale benchmarking study on a weakly supervised deep learning framework that minimizes the need for annotation and ensures interpretability for automated prostate cancer diagnosis and International Society of Urological Pathology (ISUP) grading using whole slide images (WSIs). This study rigorously tested six cutting-edge multiple instance learning (MIL) architectures (CLAM-MB, CLAM-SB, ILRA-MIL, AC-MIL, AMD-MIL, WiKG-MIL), three feature encoders (ResNet50, CTransPath, UNI2), and four patch extraction techniques (varying sizes and overlap) using the PANDA dataset (10,616 WSIs), yielding 72 experimental configurations. The methodology used distributed cloud computing to process over 31 million tissue patches, implementing advanced attention mechanisms to ensure clinical interpretability through Grad-CAM visualizations. The optimum configuration (UNI2 encoder with ILRA-MIL, 256×256 patches, 50% overlap) achieved 78.75% accuracy and 90.12% quadratic weighted kappa (QWK), outperforming traditional methods and approaching expert pathologist-level diagnostic capability. Overlapping smaller patches offered the best balance of spatial resolution and contextual information, while domain-specific foundation models performed noticeably better than generic encoders. This work is the first large-scale, comprehensive comparison of weakly supervised MIL methods for prostate cancer diagnosis and grading. The proposed approach has excellent clinical diagnostic performance, scalability, practical feasibility through cloud computing, and interpretability using visualization tools.
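Quadratic weighted kappa, the agreement metric reported above for ISUP grading, can be computed directly from its standard definition; the sketch below is a minimal generic implementation, not the authors' code.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights over ordinal class labels."""
    # Observed agreement matrix: O[i, j] counts true class i rated as j.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: 0 on the diagonal, growing with distance.
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    # Expected matrix under independence, from the marginal histograms.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

# Perfect agreement yields kappa = 1; total disagreement is penalized
# more the further apart the ordinal grades are.
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))
```

`sklearn.metrics.cohen_kappa_score` with `weights="quadratic"` computes the same quantity and is the usual choice in practice.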
Additional Links: PMID-41771952
@article {pmid41771952,
year = {2026},
author = {Butt, NA and Sarwat, D and Noya, ID and Tutusaus, K and Samee, NA and Ashraf, I},
title = {Benchmarking multiple instance learning architectures from patches to pathology for prostate cancer detection and grading using attention-based weak supervision.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-39196-x},
pmid = {41771952},
issn = {2045-2322},
support = {PNURSP2026R746//Princess Nourah bint Abdulrahman University Researchers Supporting Project/ ; },
}
RevDate: 2026-03-02
Sovereignty-as-a-service: How big tech companies co-opt and redefine digital sovereignty.
Media, culture, and society, 48(2):416-424 pii:10.1177_01634437251395003.
This article introduces the concept of sovereignty-as-a-service to describe how Big Tech companies, specifically Microsoft, Amazon, and Google/Alphabet, are strategically redefining digital sovereignty through their programs of cloud infrastructure. Drawing on critical discourse analysis of official materials released between 2022 and 2023, we show how these companies respond to regulatory pressures, particularly in Europe, by offering modular and branded solutions that frame sovereignty as a technical, legal, and infrastructural matter. Rather than sovereignty being exercised over platforms, it is now provisioned by them, on their terms. We argue that sovereignty-as-a-service constitutes a form of discursive capture that empties the concept, aligning it with the ideological legacy of the Californian Ideology. In this reframing, digital sovereignty becomes a service to be purchased, configured, and optimized through proprietary platforms. By conceptualizing sovereignty as a site of contested meanings open to appropriation, this article contributes to critical debates on digital sovereignty and technology governance.
Additional Links: PMID-41769682
@article {pmid41769682,
year = {2026},
author = {Grohmann, R and Costa Barbosa, A},
title = {Sovereignty-as-a-service: How big tech companies co-opt and redefine digital sovereignty.},
journal = {Media, culture, and society},
volume = {48},
number = {2},
pages = {416-424},
doi = {10.1177/01634437251395003},
pmid = {41769682},
issn = {1460-3675},
}
RevDate: 2026-03-02
Monitoring environmental impacts of a designated aquaculture area in the Karaburun Peninsula using Google Earth Engine.
PeerJ, 14:e20873.
Satellite-based monitoring of aquaculture impacts remains constrained by the absence of standardized, reproducible methodologies capable of capturing long-term environmental dynamics. This study introduces a novel framework that integrates Difference-in-Differences (DiD) causal inference with multi-decadal Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data and Google Earth Engine (GEE) cloud computing to evaluate aquaculture-related changes in coastal ecosystems. Using 20 years of satellite observations (2002-2022) from the Karaburun Peninsula, İzmir, Türkiye, we compared three representative sites: an aquaculture zone, a coastal area influenced by human settlements, and an offshore reference site with minimal anthropogenic activity. The human-impacted coastal site consistently exhibited the highest concentrations of surface parameters, reflecting dominant background anthropogenic influences. However, DiD analysis revealed no statistically significant differences in chlorophyll-a (Chl-a), particulate organic carbon (POC), or other parameters between the aquaculture and control sites, indicating that potential aquaculture-related effects remained below the detection threshold of the 1 km MODIS resolution. Despite these null results, the study demonstrates the feasibility and limitations of combining causal inference and cloud-based remote sensing for aquaculture monitoring. This methodological integration provides a scalable, cost-effective, and transferable framework for detecting and interpreting environmental change across large spatial and temporal domains. By defining the sensitivity limits of satellite-based detection, this work lays a foundation for future applications that merge high-resolution sensors, in-situ validation, and process-based modeling in sustainable aquaculture management.
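In its simplest two-group, two-period form, the Difference-in-Differences estimator used in that framework reduces to a difference of mean changes; the sketch below is a generic illustration with made-up numbers, not the study's data.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-group, two-period DiD: (treated change) minus (control change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treat_post) - mean(treat_pre))
            - (mean(ctrl_post) - mean(ctrl_pre)))

# Hypothetical chlorophyll-a means (mg/m^3) before/after farm installation.
aqua_pre, aqua_post = [0.8, 0.9, 1.0], [1.1, 1.2, 1.3]   # aquaculture site
ref_pre, ref_post = [0.5, 0.6, 0.7], [0.7, 0.8, 0.9]     # offshore reference

# Treated change 0.3, control change 0.2 -> DiD effect of about 0.1.
print(did_estimate(aqua_pre, aqua_post, ref_pre, ref_post))
```

Subtracting the control site's change nets out shared background trends (seasonality, regional warming), isolating the change attributable to the treatment; the study's null result means this quantity was statistically indistinguishable from zero at 1 km MODIS resolution.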
Additional Links: PMID-41769407
@article {pmid41769407,
year = {2026},
author = {Tosun, DD},
title = {Monitoring environmental impacts of a designated aquaculture area in the Karaburun Peninsula using Google Earth Engine.},
journal = {PeerJ},
volume = {14},
number = {},
pages = {e20873},
pmid = {41769407},
issn = {2167-8359},
}
RevDate: 2026-03-02
CmpDate: 2026-03-02
Enhancing E-health system accuracy using Rendezvous Data Processing Model (RDPM) with IoT-cloud integration.
Digital health, 12:20552076251406312.
OBJECTIVE: The study's overarching goal is to improve E-health monitoring systems' precision and performance by developing and implementing a Rendezvous Data Processing Model (RDPM) that is compatible with IoT-cloud architecture. The approach solves a problem with current E-health systems: these systems frequently make incorrect or redundant suggestions because they depend too much on static analytic methods and isolated data augmentation.
METHODS: The recommended RDPM system improves real-time decision-making by digesting historical suggestions and present analytical flaws. Divided features and data streams allow it to validate new hypotheses by comparing them to earlier observations. The state learning process improves on earlier efforts; to avoid errors and data duplication, the model must distinguish intervening from non-intervening data. Internet-connected sensors collect massive volumes of patient and environment data. Cloud analytics evaluates the system's precision using these data.
RESULTS: Experimental results show that RDPM reduces data interruptions, analytical errors, and recommendation ratios while improving decision correctness. The model shows that it can quickly interpret many input streams without compromising accuracy. Compared to IoT-based healthcare analytics, the RDPM improves suggestion accuracy and reduces computing redundancy.
CONCLUSION: IoT-cloud technologies with the RDPM system establish an adaptive and scalable platform for sophisticated E-health monitoring. State learning and dynamic data validation allow RDPM to make more accurate and convenient health recommendations. This approach allows a healthcare system to self-improve, understand context, and manage massive, real-time datasets.
Additional Links: PMID-41767873
@article {pmid41767873,
year = {2026},
author = {Shahab, S and Kumar Dutta, A and Shaikh, ZA and Yousef, A and Anjum, M},
title = {Enhancing E-health system accuracy using Rendezvous Data Processing Model (RDPM) with IoT-cloud integration.},
journal = {Digital health},
volume = {12},
number = {},
pages = {20552076251406312},
pmid = {41767873},
issn = {2055-2076},
}
RevDate: 2026-02-27
IntelliScheduler: an edge-cloud computing environment hybrid deep learning framework for task scheduling based on learning.
Scientific reports pii:10.1038/s41598-026-41330-8 [Epub ahead of print].
Edge-cloud computing has emerged as an important paradigm for modern Internet of Things (IoT) workflow applications, enabling low latency and on-demand resource allocation. In scenarios with heterogeneous deadlines and varying workloads, SLA compliance requires efficient coordination between edge and cloud resources. However, cloud-centric scheduling and heuristic approaches tend to lack adaptability to rapidly changing system conditions and, as a result, experience long waiting times (the same applies to QoS). To tackle these issues, we present IntelliScheduler, a hybrid actor-critic deep reinforcement learning framework for adaptive task scheduling in an edge-cloud system. Our framework presents a runtime-aware state representation combined with a learning-based decision mechanism, backed by a multi-buffer experience replay architecture. In addition, a learning-based optimal task scheduling (LbOTS) algorithm is developed to minimise total task execution delay by discovering optimal deployment decisions across edge and cloud computational resources using latency-aware reward modelling. We assess the proposed approach by conducting extensive simulation experiments under different workloads. We evaluate LbOTS across various experimental scenarios and report up to 13% higher normalised reward, 67% lower training loss, 52-66% lower operational cost, and 80-90% lower rejection rate compared to PSO, MBO, and MOPSO baselines, achieving approximately 15-75% better QoE. Though the current assessment is simulation-based, the adaptive learning formulation is highly relevant for application in dynamic edge-cloud scheduling scenarios.
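The abstract does not specify the latency-aware reward itself; a generic shaping of such a reward (purely illustrative, and any resemblance to the authors' formulation is an assumption) might penalize execution delay, with an extra fixed penalty for a missed deadline:

```python
def latency_reward(exec_delay, deadline, miss_penalty=10.0):
    """Illustrative latency-aware reward for a scheduling agent:
    faster completion scores higher, and a task that overruns its
    deadline incurs a fixed additional SLA-violation penalty."""
    reward = -exec_delay          # shorter delay -> less negative reward
    if exec_delay > deadline:
        reward -= miss_penalty    # deadline (SLA) miss
    return reward

print(latency_reward(2.0, 5.0))   # deadline met
print(latency_reward(7.0, 5.0))   # deadline missed
```

Under a reward of this shape, an actor-critic agent is pushed both to finish tasks quickly and to avoid placements that risk missing heterogeneous deadlines.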
Additional Links: PMID-41760833
@article {pmid41760833,
year = {2026},
author = {Raju, LR and Reddy, MVK and Surukanti, SR and Sudhakar, G and Subrahmanya Sarma M, VV and Adepu, A},
title = {IntelliScheduler: an edge-cloud computing environment hybrid deep learning framework for task scheduling based on learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-41330-8},
pmid = {41760833},
issn = {2045-2322},
}
RevDate: 2026-02-27
The Evolving Cyberinfrastructure at the National Institutes of Health to Support Data and AI in Biomedical Research.
Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 31:859-864.
Technological advancements have made biomedicine rich in data. With the generation of enormous volumes of biomedical and clinical data, it has become imperative to support biomedical computing investigators to utilize this wealth of biologically meaningful information. Moreover, advancements in Artificial Intelligence (AI) techniques, in conjunction with improved capabilities in implementing large-scale data processing pipelines, have led to the development of robust computational techniques and algorithms to solve complex biological problems. However, there are many challenges associated with providing researchers with secure systems for accessing biomedical data and computational resources that must be addressed. Establishing and maintaining an impactful data and AI ecosystem to support efforts in advancing biomedical research requires effective, scalable, and standardized information technology solutions, funding programs, and technical guidance that facilitate researchers in utilizing the state-of-the-art. The U.S. National Institutes of Health (NIH) has established novel initiatives to implement a cyberinfrastructure that democratizes secure access to large biomedical datasets and cloud-based computing resources, equipping biocomputing scientists to pursue pioneering research. This workshop will highlight the major issues restraining researchers' access to biomedical datasets and computing infrastructures and will cover the key components of the NIH's cyberinfrastructure aimed at advancing data science and AI research for biomedical applications.
Additional Links: PMID-41758192
@article {pmid41758192,
year = {2026},
author = {Ramwala, OA and Weber, N and Mooney, SD},
title = {The Evolving Cyberinfrastructure at the National Institutes of Health to Support Data and AI in Biomedical Research.},
journal = {Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing},
volume = {31},
number = {},
pages = {859-864},
doi = {10.1142/9789819824755_0064},
pmid = {41758192},
issn = {2335-6936},
abstract = {Technological advancements have made biomedicine rich in data. With the generation of enormous volumes of biomedical and clinical data, it has become imperative to support biomedical computing investigators to utilize this wealth of biologically meaningful information. Moreover, advancements in Artificial Intelligence (AI) techniques, in conjunction with improved capabilities in implementing large-scale data processing pipelines, have led to the development of robust computational techniques and algorithms to solve complex biological problems. However, there are many challenges associated with providing researchers with secure systems for accessing biomedical data and computational resources that must be addressed. Establishing and maintaining an impactful data and AI ecosystem to support efforts in advancing biomedical research requires effective, scalable, and standardized information technology solutions, funding programs, and technical guidance that facilitate researchers in utilizing the state-of-the-art. The U.S. National Institutes of Health (NIH) has established novel initiatives to implement a cyberinfrastructure that democratizes secure access to large biomedical datasets and cloud-based computing resources, equipping biocomputing scientists to pursue pioneering research. This workshop will highlight the major issues restraining researchers' access to biomedical datasets and computing infrastructures and will cover the key components of the NIH's cyberinfrastructure aimed at advancing data science and AI research for biomedical applications.},
}
RevDate: 2026-02-27
CmpDate: 2026-02-27
Single-cell RNA sequencing data processing using cloud-based serverless computing.
bioRxiv : the preprint server for biology pii:2025.04.26.650787.
Single-cell RNA sequencing (scRNA-seq) has become a routine method for measuring cell activities. Processing large scRNA-seq datasets requires high-performance computing resources. The emergence of cloud computing allows us to leverage its on-demand capabilities without major investment in infrastructure. Serverless computing provides cost efficiency by allowing users to pay only for actual resource usage, eliminating the need to provision or pre-allocate server capacity in advance. We present a novel and generalizable methodology using serverless cloud computing to accelerate computationally intensive workflows. We create an on-demand "supercomputer" using rapidly deployable cloud serverless functions as automatically provisioned computation units. We tested our methodology by optimizing a scRNA-seq workflow with serverless functions on the cloud, using two publicly available peripheral blood mononuclear cell (PBMC) datasets. In addition, we demonstrate our approach using data generated by the NIH MorPhiC program, where we process a 450 GB human scRNA-seq dataset across 86 cell lines designed to study the temporal impact of perturbations on pancreatic differentiation. We compared the total execution time of the serverless scRNA-seq workflow with that of the traditional workflow without serverless functions, and demonstrate major speedups for large scRNA-seq datasets.
Additional Links: PMID-41756869
@article {pmid41756869,
year = {2026},
author = {Hung, LH and Nasam, N and Biju, C and Lloyd, W and Yeung, KY},
title = {Single-cell RNA sequencing data processing using cloud-based serverless computing.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2025.04.26.650787},
pmid = {41756869},
issn = {2692-8205},
abstract = {Single-cell RNA sequencing (scRNA-seq) has become a routine method for measuring cell activities. Processing large scRNA-seq datasets requires high-performance computing resources. The emergence of cloud computing allows us to leverage its on-demand capabilities without major investment in infrastructure. Serverless computing provides cost efficiency by allowing users to pay only for actual resource usage, eliminating the necessity for pre-allocated server capacities. Additionally, there is no requirement to set up servers in advance. We present a novel and generalizable methodology using serverless cloud computing to accelerate computationally intensive workflows. We create an on-demand "supercomputer" using rapidly deployable cloud serverless functions as automatically provisioned computation units. We tested our methodology of optimizing a scRNA-seq workflow by leveraging serverless functions on the cloud using two publicly available peripheral blood mononuclear cell (PBMC) datasets. In addition, we demonstrate our approach using data generated by the NIH MorPhiC program, where we process a 450 GB human scRNA-seq dataset across 86 cell lines designed to study the temporal impact of perturbations on pancreatic differentiation. We compared the total execution time of the scRNA-seq serverless workflow with the traditional workflow without using serverless functions, and demonstrate major speedup for large scRNA-seq datasets.},
}
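The "on-demand supercomputer" fan-out pattern the authors describe can be sketched as a map over independent data chunks. In this illustrative sketch (not the paper's implementation), a thread pool stands in for serverless function invocations and the per-chunk mean stands in for real per-cell analysis.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Placeholder per-chunk computation (e.g., QC metrics per cell)."""
    return sum(chunk) / len(chunk)

def serverless_map(data, chunk_size, max_workers=8):
    """Fan out chunks to concurrent workers; gather results in order."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_chunk, chunks))

# 100 items split into 4 chunks of 25, processed concurrently.
print(serverless_map(list(range(100)), chunk_size=25))
```

On a real serverless platform each `process_chunk` call would become one short-lived function invocation, so cost scales with actual compute used rather than with provisioned capacity.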
RevDate: 2026-02-27
Intelligent Water Quality Assessment and Prediction System for Public Networks: A Comparative Analysis of ML Algorithms and Rule-Based Recommender Techniques.
Sensors (Basel, Switzerland), 26(4): pii:s26041392.
An assessment and prediction system for the quality of public water networks was developed, using Timișoara, Romania, as a case study. The system was implemented on a Google Firebase cloud storage system and comprises twelve ML algorithms applied to test samples for drinkability and to predict upcoming samples. It compares 17 water quality parameters across 804 samples against World Health Organization standards and public reports on Timișoara drinking water. The system provides real-time data storage, drinkability prediction for the reservoir water system, and rule-based recommendations for elementary treatment of critical samples. Of the ML models, the decision tree algorithm was the most accurate and best calibrated, outperforming the random forest, gradient boosting, and logistic regression algorithms. The experimental findings also identify the regions of worst and best water quality and propose corresponding treatments. In contrast to previous research and systems, the paper demonstrates a stable, validated solution for smart water monitoring, connecting practical deployment with data-driven conclusions. The results contribute to enhancing public health, improving water management measures, and scaling the system to larger applications.
Additional Links: PMID-41755335
@article {pmid41755335,
year = {2026},
author = {Paliuc, C and Banu-Taran, P and Petruc, SI and Bogdan, R and Popa, M},
title = {Intelligent Water Quality Assessment and Prediction System for Public Networks: A Comparative Analysis of ML Algorithms and Rule-Based Recommender Techniques.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041392},
pmid = {41755335},
issn = {1424-8220},
abstract = {An assessment and prediction system for the quality of public water networks was developed, using Timișoara, Romania, as a case study. This was implemented on a Google Firebase cloud storage system and comprised twelve ML algorithms applied to test samples for drinkability and used in predictions of upcoming samples. The system compares 17 water quality parameters to the World Health Organization and public reports of Timișoara drinking water standards for 804 samples. The system provides real-time data storage, drinkability prediction for the reservoir water system, and rule-based critical water recommendations for elementary treatment in samples. The most accurate and best-calibrated against random forest, gradient boosting, and Logistic Regression algorithms was the decision tree algorithm of the ML models. The experimental findings also determine the regions of the worst and best water quality and propose respective treatment. In contrast to previous research and structures, the paper demonstrates an approved stable solution for smart water monitoring, correlating practical deployment with sophisticated data-based conclusions. The results contribute to enhancing public health, enhancing water management measures, and upscaling the system for larger-scale applications.},
}
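The rule-based recommender layer such a system describes can be sketched as a table of parameter limits with per-violation advice. The limit values and advice strings below are made up for the example; they are not the paper's rules or the WHO's exact figures.

```python
# Illustrative WHO-style limits: parameter -> (low, high) acceptable range.
WHO_STYLE_LIMITS = {
    "ph": (6.5, 8.5),
    "turbidity_ntu": (0.0, 5.0),
    "nitrate_mg_l": (0.0, 50.0),
}
# One elementary-treatment suggestion per rule that can be violated.
ADVICE = {
    "ph": "adjust pH (lime/CO2 dosing)",
    "turbidity_ntu": "filter or coagulate",
    "nitrate_mg_l": "dilute or use ion exchange",
}

def recommend(sample):
    """Return (drinkable, advice) for one sample dict; in this sketch,
    parameters missing from the sample are treated as in-range."""
    advice = [ADVICE[k] for k, (lo, hi) in WHO_STYLE_LIMITS.items()
              if not lo <= sample.get(k, lo) <= hi]
    return (not advice, advice)

print(recommend({"ph": 9.1, "turbidity_ntu": 2.0, "nitrate_mg_l": 10.0}))
```

A real deployment would attach the violated parameter's measured value and limit to each recommendation so operators can prioritize treatment.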
RevDate: 2026-02-27
Verifiable Differential Privacy Partial Disclosure for IoT with Stateless k-Use Tokens.
Sensors (Basel, Switzerland), 26(4): pii:s26041393.
Internet of Things (IoT) applications often require only minimal necessary information-such as threshold judgments, binning, or prefixes-yet they must control privacy leakage arising from multi-round and cross-entity access without exposing raw values. Existing solutions, however, frequently rely on ciphertext structures and server-side states, making it difficult to define a leakage upper bound for restricted answers in the sense of Differential Privacy (DP), or they lack unified information budgeting and k-use control. To address these challenges, this paper proposes a verifiable differential privacy partial disclosure scheme for IoT. We employ DP accounting to uniformly constrain the leakage of three types of operators: threshold, binning, and prefix. Furthermore, we design stateless k-use tokens based on Verifiable Random Functions (VRFs) and chained receipts to generate publicly verifiable compliance evidence for each response. We implemented an end-edge-cloud prototype system and evaluated its performance on two use cases: smart meter threshold alarms and industrial sensor out-of-bound detection. Experimental results demonstrate that compared with a baseline relying on server-state counting for k-use control, our stateless k-use mechanism improves throughput by approximately 25-37% under concurrency scales of 1, 8, and 16, and reduces p95 latency by an average of 15%. Meanwhile, in multi-party splicing attack experiments, the re-identification accuracy remains stable in the 0.50-0.52 range, approximating random guessing. These results validate that the proposed scheme possesses low-energy engineering feasibility and audit-friendliness while effectively suppressing splicing risks.
Additional Links: PMID-41755332
@article {pmid41755332,
year = {2026},
author = {Zheng, D and Shi, W and Pan, Y and Shu, S and Xu, C and Li, Z and Wang, B and Lin, Y and Liu, P},
title = {Verifiable Differential Privacy Partial Disclosure for IoT with Stateless k-Use Tokens.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041393},
pmid = {41755332},
issn = {1424-8220},
support = {2022YFB3305302//National Key Research and Development Program of China/ ; 202510423112X//China National University Student Innovation & Entrepreneurship Development Program/ ; },
abstract = {Internet of Things (IoT) applications often require only minimal necessary information-such as threshold judgments, binning, or prefixes-yet they must control privacy leakage arising from multi-round and cross-entity access without exposing raw values. Existing solutions, however, frequently rely on ciphertext structures and server-side states, making it difficult to define a leakage upper bound for restricted answers in the sense of Differential Privacy (DP), or they lack unified information budgeting and k-use control. To address these challenges, this paper proposes a verifiable differential privacy partial disclosure scheme for IoT. We employ DP accounting to uniformly constrain the leakage of three types of operators: threshold, binning, and prefix. Furthermore, we design stateless k-use tokens based on Verifiable Random Functions (VRFs) and chained receipts to generate publicly verifiable compliance evidence for each response. We implemented an end-edge-cloud prototype system and evaluated its performance on two use cases: smart meter threshold alarms and industrial sensor out-of-bound detection. Experimental results demonstrate that compared with a baseline relying on server-state counting for k-use control, our stateless k-use mechanism improves throughput by approximately 25-37% under concurrency scales of 1, 8, and 16, and reduces p95 latency by an average of 15%. Meanwhile, in multi-party splicing attack experiments, the re-identification accuracy remains stable in the 0.50-0.52 range, approximating random guessing. These results validate that the proposed scheme possesses low-energy engineering feasibility and audit-friendliness while effectively suppressing splicing risks.},
}
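Two of the building blocks named in the abstract, a leakage-bounded threshold operator and chained verification receipts, can be sketched as follows. This is a simplified illustration, not the paper's construction: randomized response stands in for the paper's DP accounting of the threshold operator, and plain HMAC receipts stand in for its VRF-based stateless k-use tokens.

```python
import hmac, hashlib, math, random

def dp_threshold(value, t, epsilon, rng=random.random):
    """Answer 'value > t', telling the truth with prob e^eps/(1+e^eps)
    (randomized response), which bounds leakage by epsilon per answer."""
    truth = value > t
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return truth if rng() < p_truth else not truth

def next_receipt(key, prev_receipt, answer):
    """Chain receipts: receipt_i = HMAC(key, receipt_{i-1} || answer_i),
    so a verifier can replay and check the whole answer sequence."""
    msg = prev_receipt + (b"1" if answer else b"0")
    return hmac.new(key, msg, hashlib.sha256).digest()

key, receipt = b"device-secret", b"genesis"
budget, spent = 3.0, 0.0        # total epsilon budget caps answers (k-use)
for reading in (41.2, 55.0, 60.3):
    if spent + 1.0 > budget:    # each answer spends epsilon = 1.0
        break
    answer = dp_threshold(reading, 50.0, epsilon=1.0)
    receipt = next_receipt(key, receipt, answer)
    spent += 1.0
```

The point of the chaining is that k-use enforcement needs no server-side counter: the number of valid receipts in the chain itself bounds how many answers were issued.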
RevDate: 2026-02-27
Two-Stage Wildlife Event Classification for Edge Deployment.
Sensors (Basel, Switzerland), 26(4): pii:s26041366.
Camera-based wildlife monitoring is often overwhelmed by non-target triggers and slowed by manual review or cloud-dependent inference, which can prevent timely intervention for high-stakes human-wildlife conflicts. Our key contribution is a deployable, fully offline edge vision sensor that achieves near-real-time, highly accurate wildlife event classification by combining detector-based empty-image suppression with a lightweight classifier trained with a staged transfer-learning curriculum. Specifically, Stage 1 uses a pretrained You Only Look Once (YOLO)-family detector for permissive animal localization and empty-trigger suppression, and Stage 2 uses a lightweight EfficientNet-based binary classifier to confirm pumas on detector crops and gate downstream actions. Our design is robust to low-quality nighttime monochrome imagery (motion blur, low contrast, illumination artifacts, and partial-body captures) and operates using commercially available components in connectivity-limited settings. In field deployments running since May 2025, end-to-end latency from camera trigger to action command is approximately 4 s. Ablation studies using a dataset of labeled wildlife images (puma vs. not puma) show that the two-stage approach substantially reduces false alarms in identifying pumas relative to a full-image classifier while maintaining high recall. On the held-out test set (N=1434 events), the proposed two-stage cascade achieves precision 0.983, recall 0.975, F1 0.979, accuracy 0.986, and balanced accuracy 0.983, with only 8 false positives and 12 false negatives. The system can be easily adapted for other species, as demonstrated by rapid retraining of the second stage to classify ringtails. Downstream responses (e.g., notifications and optional audio/light outputs) provide flexible actuation capabilities that can be configured to support intervention.
Additional Links: PMID-41755305
@article {pmid41755305,
year = {2026},
author = {Viswanathan, AS and Bock, A and Bent, Z and Peyton, MA and Tartakovsky, DM and Santos, JE},
title = {Two-Stage Wildlife Event Classification for Edge Deployment.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041366},
pmid = {41755305},
issn = {1424-8220},
support = {FA9550-24-1-0237//United States Air Force Office of Scientific Research/ ; DE-SC0023163//Office of Advanced Scientific Computing Research/ ; },
abstract = {Camera-based wildlife monitoring is often overwhelmed by non-target triggers and slowed by manual review or cloud-dependent inference, which can prevent timely intervention for high stakes human-wildlife conflicts. Our key contribution is a deployable, fully offline edge vision sensor that achieves near-real-time, highly accurate wildlife event classification by combining detector-based empty-image suppression with a lightweight classifier trained with a staged transfer-learning curriculum. Specifically, Stage 1 uses a pretrained You Only Look Once (YOLO)-family detector for permissive animal localization and empty-trigger suppression, and Stage 2 uses a lightweight EfficientNet-based binary classifier to confirm puma on detector crops and gate downstream actions. Our design is robust to low-quality nighttime monochrome imagery (motion blur, low contrast, illumination artifacts, and partial-body captures) and operates using commercially available components in connectivity-limited settings. In field deployments running since May 2025, end-to-end latency from camera trigger to action command is approximately 4 s. Ablation studies using a dataset of labeled wildlife images (pumas, not pumas) show that the two-stage approach substantially reduces false alarms in identifying pumas relative to a full-image classifier while maintaining high recall. On the held-out test set (N=1434 events), the proposed two-stage cascade achieves precision 0.983, recall 0.975, F1 0.979, accuracy 0.986, and balanced accuracy 0.983, with only 8 false positives and 12 false negatives. The system can be easily adapted for other species, as demonstrated by rapid retraining of the second stage to classify ringtails. Downstream responses (e.g., notifications and optional audio/light outputs) provide flexible actuation capabilities that can be configured to support intervention.},
}
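The two-stage gating logic can be sketched with stub models standing in for the YOLO-family detector and the EfficientNet crop classifier; the threshold values and stub scores below are illustrative, not the paper's.

```python
def two_stage(event, detect, classify, det_thresh=0.25, cls_thresh=0.5):
    """True only if both stages fire; empty frames exit at Stage 1."""
    det_score, crop = detect(event)
    if det_score < det_thresh:           # Stage 1: empty-trigger suppression
        return False
    return classify(crop) >= cls_thresh  # Stage 2: species confirmation

# Stub outputs: (detector confidence, crop contents) per trigger event.
events = {"empty": (0.05, None), "deer": (0.90, "deer"), "puma": (0.90, "puma")}
detect = lambda e: events[e]
classify = lambda crop: 0.95 if crop == "puma" else 0.10

print([e for e in events if two_stage(e, detect, classify)])  # only "puma" passes
```

A permissive Stage-1 threshold keeps recall high (few animals are dropped), while Stage 2 supplies the precision; species adaptation only requires retraining the second stage, matching the ringtail example in the abstract.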
RevDate: 2026-02-27
Dynamic Micro-Batch and Token-Budget Scheduling for IoT-Scale Pipeline-Parallel LLM Inference.
Sensors (Basel, Switzerland), 26(4): pii:s26041101.
Large language models in IoT-edge-cloud settings face bursty, heterogeneous requests that make pipeline-parallel inference prone to micro-batch imbalance and communication stalls, causing GPU idle time and SLO violations. We propose a runtime-adaptive scheduler that jointly tunes token budgets and micro-batch counts to balance prefill/decode workloads and minimize pipeline bubbles under changing compute and network conditions. On a four-node pipeline-parallel cluster across Llama-2-13b and Qwen2.5-14b at 100/1000 Mbps, our method outperforms vLLM and SGLang, reducing GPU idle time by up to 55% and improving throughput by up to 1.61 × while improving TTFT/ITL SLO satisfaction. These results show that dynamic scheduling is essential for scalable, latency-stable LLM inference in IoT-edge-cloud environments.
Additional Links: PMID-41755042
@article {pmid41755042,
year = {2026},
author = {Ahn, J and Son, Y and Kim, D and Park, S},
title = {Dynamic Micro-Batch and Token-Budget Scheduling for IoT-Scale Pipeline-Parallel LLM Inference.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {4},
pages = {},
doi = {10.3390/s26041101},
pmid = {41755042},
issn = {1424-8220},
support = {25DIH-32//Daegu Digital Innovation Promotion Agency(DIP)/ ; },
abstract = {Large language models in IoT-edge-cloud settings face bursty, heterogeneous requests that make pipeline-parallel inference prone to micro-batch imbalance and communication stalls, causing GPU idle time and SLO violations. We propose a runtime-adaptive scheduler that jointly tunes token budgets and micro-batch counts to balance prefill/decode workloads and minimize pipeline bubbles under changing compute and network conditions. On a four-node pipeline-parallel cluster across Llama-2-13b and Qwen2.5-14b at 100/1000 Mbps, our method outperforms vLLM and SGLang, reducing GPU idle time by up to 55% and improving throughput by up to 1.61 × while improving TTFT/ITL SLO satisfaction. These results show that dynamic scheduling is essential for scalable, latency-stable LLM inference in IoT-edge-cloud environments.},
}
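The trade-off such a scheduler tunes, fewer pipeline bubbles versus higher per-micro-batch overhead, can be sketched with the classic pipeline makespan model; the model and all numbers below are illustrative, not the paper's scheduler.

```python
# With m micro-batches over p stages, the batch finishes in roughly
#   (p + m - 1) * (work/m + overhead):
# more micro-batches shrink the (p - 1) bubble slots but pay more
# launch/communication overhead per micro-batch.

def makespan(total_work_s, p_stages, m, overhead_s):
    per_microbatch = total_work_s / m + overhead_s
    return (p_stages + m - 1) * per_microbatch

def best_microbatch_count(total_work_s, p_stages, overhead_s, m_max=64):
    """Pick the micro-batch count minimizing the modeled makespan."""
    return min(range(1, m_max + 1),
               key=lambda m: makespan(total_work_s, p_stages, m, overhead_s))

# Cheap communication favors many micro-batches; costly favors few.
print(best_microbatch_count(1.0, 4, overhead_s=0.001))
print(best_microbatch_count(1.0, 4, overhead_s=0.05))
```

A runtime-adaptive scheduler re-solves this choice (jointly with token budgets) as measured compute and network conditions change, instead of fixing m ahead of time.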
RevDate: 2026-02-27
Challenges and Opportunities in Multi-Omics Data Acquisition and Analysis: Toward Integrative Solutions.
Biomolecules, 16(2): pii:biom16020271.
In this perspective, we discuss the current challenges and opportunities in multi-omics, a rapidly evolving approach that integrates multiple molecular layers to advance our understanding of complex biological systems. As biomedical research moves toward precision medicine, the ability to correlate genotype, phenotype, and environmental contexts has never been more critical. Multi-omics enhances biomarker discovery and elucidates regulatory networks underlying health and disease. The dominant scientific paradigm for over a century was to take a reductionist approach, studying individual molecular components in isolation or as simplified systems. The advent of omics technologies in the 1990s enabled a systems paradigm, allowing holistic analyses of molecular networks. These early systems studies were constrained by technology and methodology to bulk tissue measurements and single-omics analyses. Recent advances in single-cell and spatial omics, high-throughput proteomics and metabolomics, cloud computing, and artificial intelligence now allow high-resolution, spatially contextualized multi-omics analyses. Despite these gains, challenges in data analysis and interpretation remain, including high dimensionality, missing or incomplete data, multiple batch effects, and method-specific variability. Emerging strategies-such as paired data collection, staged or joint integration, and latent factor or quasi-mediation frameworks-offer promising solutions, positioning multi-omics as a transformative tool for elucidating complex mechanisms and guiding personalized medicine. Continued refinement of these approaches may further enhance the utility of multi-omics for understanding complex biological systems.
Additional Links: PMID-41750340
@article {pmid41750340,
year = {2026},
author = {Hemme, CL and Atoyan, J and Cai, A and Liu, C},
title = {Challenges and Opportunities in Multi-Omics Data Acquisition and Analysis: Toward Integrative Solutions.},
journal = {Biomolecules},
volume = {16},
number = {2},
pages = {},
doi = {10.3390/biom16020271},
pmid = {41750340},
issn = {2218-273X},
support = {P20GM103430/GM/NIGMS NIH HHS/United States ; },
abstract = {In this perspective, we discuss the current challenges and opportunities in multi-omics, a rapidly evolving approach that integrates multiple molecular layers to advance our understanding of complex biological systems. As biomedical research moves toward precision medicine, the ability to correlate genotype, phenotype, and environmental contexts has never been more critical. Multi-omics enhances biomarker discovery and elucidates regulatory networks underlying health and disease. The dominant scientific paradigm for over a century was to take a reductionist approach, studying individual molecular components in isolation or as simplified systems. The advent of omics technologies in the 1990s enabled a systems paradigm, allowing holistic analyses of molecular networks. These early systems studies were constrained by technology and methodology to bulk tissue measurements and single-omics analyses. Recent advances in single-cell and spatial omics, high-throughput proteomics and metabolomics, cloud computing, and artificial intelligence now allow high-resolution, spatially contextualized multi-omics analyses. Despite these gains, challenges in data analysis and interpretation remain, including high dimensionality, missing or incomplete data, multiple batch effects, and method-specific variability. Emerging strategies-such as paired data collection, staged or joint integration, and latent factor or quasi-mediation frameworks-offer promising solutions, positioning multi-omics as a transformative tool for elucidating complex mechanisms and guiding personalized medicine. Continued refinement of these approaches may further enhance the utility of multi-omics for understanding complex biological systems.},
}
RevDate: 2026-02-27
CmpDate: 2026-02-27
Stroke Rehabilitation, Novel Technology and the Internet of Medical Things.
Brain sciences, 16(2): pii:brainsci16020124.
Stroke continues to impose an enormous morbidity and mortality burden worldwide. Stroke survivors often incur debilitating consequences that impair motor function, independence in activities of daily living and quality of life. Rehabilitation is a pivotal intervention to minimize disability and promote functional recovery following a stroke. The Internet of Medical Things, a network of connected medical devices, software and health systems that collect, store and analyze health data over the internet, is an emerging resource in neurorehabilitation for stroke survivors. Technologies such as asynchronous transmission to handle intermittent connectivity, edge computing to conserve bandwidth and lengthen device life, functional interoperability across platforms, security mechanisms scalable to resource constraints, and hybrid architectures that combine local processing with cloud synchronization help bridge the digital divide and infrastructure limitations in low-resource environments. This manuscript reviews emerging rehabilitation technologies such as robotic devices, virtual reality, brain-computer interfaces and telerehabilitation in the setting of neurorehabilitation for stroke patients.
Additional Links: PMID-41750125
@article {pmid41750125,
year = {2026},
author = {Costa, A and Schmalzried, E and Tong, J and Khanyan, B and Wang, W and Jin, Z and Bergese, SD},
title = {Stroke Rehabilitation, Novel Technology and the Internet of Medical Things.},
journal = {Brain sciences},
volume = {16},
number = {2},
pages = {},
doi = {10.3390/brainsci16020124},
pmid = {41750125},
issn = {2076-3425},
abstract = {Stroke continues to impose an enormous morbidity and mortality burden worldwide. Stroke survivors often incur debilitating consequences that impair motor function, independence in activities of daily living and quality of life. Rehabilitation is a pivotal intervention to minimize disability and promote functional recovery following a stroke. The Internet of Medical Things, a network of connected medical devices, software and health systems that collect, store and analyze health data over the internet, is an emerging resource in neurorehabilitation for stroke survivors. Technologies such as asynchronous transmission to handle intermittent connectivity, edge computing to conserve bandwidth and lengthen device life, functional interoperability across platforms, security mechanisms scalable to resource constraints, and hybrid architectures that combine local processing with cloud synchronization help bridge the digital divide and infrastructure limitations in low-resource environments. This manuscript reviews emerging rehabilitation technologies such as robotic devices, virtual reality, brain-computer interfaces and telerehabilitation in the setting of neurorehabilitation for stroke patients.},
}
RevDate: 2026-02-26
An Intelligent, low-cost water quality monitoring system with on-device machine learning and cloud integration.
Scientific reports pii:10.1038/s41598-026-37287-3 [Epub ahead of print].
Additional Links: PMID-41748630
@article {pmid41748630,
year = {2026},
author = {Sharma, S and Mishra, D and Yadav, A and Gami, B and Madhan, ES},
title = {An Intelligent, low-cost water quality monitoring system with on-device machine learning and cloud integration.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-37287-3},
pmid = {41748630},
issn = {2045-2322},
}
RevDate: 2026-02-26
Copernicus Data Space Ecosystem establishes public cloud processing for earth observation data.
Scientific data pii:10.1038/s41597-026-06765-8 [Epub ahead of print].
The Copernicus Data Space Ecosystem (CDSE) is the official data platform for the Copernicus Programme's satellites. CDSE combines instant access to satellite imagery with Application Programming Interfaces and virtual machine processing. Instead of downloading satellite imagery for local computation, CDSE utilizes cloud-optimized files to provide data according to the filtering and processing request of the user, facilitating large-scale scientific analysis. Cloud computing on CDSE eliminates the need for users to rely on their own data infrastructure. The incorporated standards support both Open Science and the commercialization of scientific tools and algorithms. CDSE serves all users, from beginners to professionals, and from the interactive visualization of imagery to custom ML algorithms. Acquiring the skills required to process Earth Observation data is facilitated by the open-source codebase and tutorials. Access to public cloud processing is expected to foster the uptake of Earth Observation across new domains. CDSE now provides the critical mass to serve as a tool for knowledge exchange and to influence commercial and public providers alike to support cloud processing.
Additional Links: PMID-41748608
@article {pmid41748608,
year = {2026},
author = {D Kovács, D and Musial, J and Bojanowski, J and Clarijs, D and de la Mar, J and Zlinszky, A},
title = {Copernicus Data Space Ecosystem establishes public cloud processing for earth observation data.},
journal = {Scientific data},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41597-026-06765-8},
pmid = {41748608},
issn = {2052-4463},
abstract = {The Copernicus Data Space Ecosystem is the official data platform for the Copernicus Programme's satellites. CDSE combines instant access to satellite imagery with Application Programming Interfaces and virtual machine processing. Instead of downloading satellite imagery for local computation, CDSE utilizes cloud-optimized files to provide data according to the filtering and processing request of the user, facilitating large-scale scientific analysis. Cloud computing on CDSE eliminates the need for users to rely on their own data infrastructure. The incorporated standards support both Open Science and commercialization of scientific tools and algorithms. CDSE serves all users from beginners to professionals, from the interactive visualization of imagery to custom ML algorithms. Acquiring the skills required to process Earth Observation data is facilitated by the open-source codebase and tutorials. Access to public cloud processing is expected to foster the uptake of Earth Observation across new domains. CDSE now provides the critical mass to serve as a tool for knowledge exchange and to influence commercial and public providers alike to support cloud processing.},
}
RevDate: 2026-02-26
Design and implementation of a comprehensive management platform for drilling engineering.
PloS one, 21(2):e0343700 pii:PONE-D-25-60433.
To enhance the efficiency, safety, and data accuracy of drilling engineering, this study developed an integrated business management platform for drilling engineering grassroots units based on the Business Model Driven (BMD) approach. The platform is built on a "five horizontal, three vertical" cloud computing architecture, establishing a five-layer system from the infrastructure layer to the user layer horizontally, and supported by standard specifications, safety, and maintenance systems vertically, enabling collaboration across multiple business scenarios and data integration. Currently, four major modules with over 20 functionalities have been developed, supporting applications such as task coordination, engineering supervision, data analysis, and accident handling. Operational results demonstrate that the platform effectively promotes integrated management of drilling engineering through real-time data sharing, full-process quality control, and intelligent decision-making, thereby enhancing operational quality and safety, reducing accident risks, and providing critical technological support for the digital transformation and upgrading of the drilling industry.
Additional Links: PMID-41746936
@article {pmid41746936,
year = {2026},
author = {Du, Y and Yang, Y and Wu, X and Gao, P and Ma, H},
title = {Design and implementation of a comprehensive management platform for drilling engineering.},
journal = {PloS one},
volume = {21},
number = {2},
pages = {e0343700},
doi = {10.1371/journal.pone.0343700},
pmid = {41746936},
issn = {1932-6203},
abstract = {To enhance the efficiency, safety, and data accuracy of drilling engineering, this study developed an integrated business management platform for drilling engineering grassroots units based on the Business Model Driven (BMD) approach. The platform is built on a "five horizontal, three vertical" cloud computing architecture, establishing a five-layer system from the infrastructure layer to the user layer horizontally, and supported by standard specifications, safety, and maintenance systems vertically, enabling collaboration across multiple business scenarios and data integration. Currently, four major modules with over 20 functionalities have been developed, supporting applications such as task coordination, engineering supervision, data analysis, and accident handling. Operational results demonstrate that the platform effectively promotes integrated management of drilling engineering through real-time data sharing, full-process quality control, and intelligent decision-making, thereby enhancing operational quality and safety, reducing accident risks, and providing critical technological support for the digital transformation and upgrading of the drilling industry.},
}
RevDate: 2026-02-26
Automatic Speech Recognition for Intelligibility Assessment in Children With Dysarthria.
Journal of speech, language, and hearing research : JSLHR [Epub ahead of print].
PURPOSE: Accurate assessment of speech intelligibility is critical for children with dysarthria secondary to cerebral palsy. Traditional assessment methods, such as human listeners' orthographic transcription and perceptual ratings (e.g., of ease of understanding [EoU]), are time consuming or subjective. Automatic speech recognition (ASR) may provide a more efficient, objective alternative, but its use for assessing intelligibility in this population is unexamined. This study evaluated the potential of ASR for intelligibility assessment in children with dysarthria and identified the most appropriate ASR systems for approximating human listeners' judgments.
METHOD: Five ASR systems transcribed speech samples from 20 children with dysarthria. Additionally, 168 adult listeners provided orthographic transcriptions and EoU ratings. Word recognition rate (WRR) was used as the metric for calculating ASR and human listeners' transcription accuracy. Spearman correlations were used to assess the relationship between ASR WRR and human WRR, as well as between ASR WRR and human EoU ratings.
RESULTS: The WRR yielded by four ASR systems (WhisperX-small, WhisperX-medium, WhisperX-large, and Google Cloud) showed strong correlations with human WRR, with WhisperX-medium demonstrating the strongest correlation. These four systems' WRRs also exhibited moderate-to-strong correlations with EoU ratings, with Google Cloud ASR showing the strongest correlation. In contrast, the WRR of Wav2Vec2 demonstrated a weak correlation with both human WRR and EoU ratings.
CONCLUSIONS: ASR shows promise for use in intelligibility assessment in children with dysarthria. Of the tested ASR systems, WhisperX-medium appears most promising for approximating human transcription accuracy, whereas Google Cloud ASR aligns best with perceptual ratings. Such differences in ASR performance highlight the need for careful system selection in clinical applications.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.31397457.
Additional Links: PMID-41746192
@article {pmid41746192,
year = {2026},
author = {Choi, J and Moya-Galé, G and Hwang, K and Hirschberg, J and Levy, ES},
title = {Automatic Speech Recognition for Intelligibility Assessment in Children With Dysarthria.},
journal = {Journal of speech, language, and hearing research : JSLHR},
volume = {},
number = {},
pages = {1-17},
doi = {10.1044/2025_JSLHR-25-00562},
pmid = {41746192},
issn = {1558-9102},
abstract = {PURPOSE: Accurate assessment of speech intelligibility is critical for children with dysarthria secondary to cerebral palsy. Traditional assessment methods, such as human listeners' orthographic transcription and perceptual ratings (e.g., of ease of understanding [EoU]), are time consuming or subjective. Automatic speech recognition (ASR) may provide a more efficient, objective alternative, but its use for assessing intelligibility in this population is unexamined. This study evaluated the potential of ASR for intelligibility assessment in children with dysarthria and identified the most appropriate ASR systems for approximating human listeners' judgments.
METHOD: Five ASR systems transcribed speech samples from 20 children with dysarthria. Additionally, 168 adult listeners provided orthographic transcriptions and EoU ratings. Word recognition rate (WRR) was used as the metric for calculating ASR and human listeners' transcription accuracy. Spearman correlations were used to assess the relationship between ASR WRR and human WRR, as well as between ASR WRR and human EoU ratings.
RESULTS: The WRR yielded by four ASR systems (WhisperX-small, WhisperX-medium, WhisperX-large, and Google Cloud) showed strong correlations with human WRR, with WhisperX-medium demonstrating the strongest correlation. These four systems' WRRs also exhibited moderate-to-strong correlations with EoU ratings, with Google Cloud ASR showing the strongest correlation. In contrast, the WRR of Wav2Vec2 demonstrated a weak correlation with both human WRR and EoU ratings.
CONCLUSIONS: ASR shows promise for use in intelligibility assessment in children with dysarthria. Of the tested ASR systems, WhisperX-medium appears most promising for approximating human transcription accuracy, whereas Google Cloud ASR aligns best with perceptual ratings. Such differences in ASR performance highlight the need for careful system selection in clinical applications.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.31397457.},
}
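The study above correlates ASR word recognition rates with human listeners' scores. As an illustration only, here is a minimal pure-Python sketch of that kind of analysis, using fabricated per-child scores and a simplified, order-insensitive WRR; the study's exact metric, data, and any tie handling in the rank correlation are not reproduced.

```python
def ranks(xs):
    # simple 1-based ranking (assumes no ties, sufficient for this sketch)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # Spearman rho via the rank-difference formula (valid without ties)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

def word_recognition_rate(reference, transcript):
    # fraction of reference words found in the transcript
    # (a simplification of WRR for illustration)
    ref = reference.lower().split()
    hyp = set(transcript.lower().split())
    return sum(w in hyp for w in ref) / len(ref) if ref else 0.0

# Hypothetical per-child scores: ASR WRR vs. mean human-listener WRR
asr_wrr   = [0.42, 0.55, 0.61, 0.30, 0.75]
human_wrr = [0.50, 0.70, 0.58, 0.35, 0.80]
rho = spearman(asr_wrr, human_wrr)  # 0.9 for these fabricated values
```

A strong rho here would indicate, as in the study, that the ASR system's transcription accuracy tracks human judgments in rank order even if absolute scores differ.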
RevDate: 2026-02-26
CmpDate: 2026-02-26
Accelerating Point Cloud Computation via Memory in Embedded Structured Light Cameras.
Journal of imaging, 12(2):.
Embedded structured light cameras have been widely applied in various fields. However, due to constraints such as insufficient computing resources, it remains difficult to achieve high-speed structured light point cloud computation. To address this issue, this study proposes a memory-driven computational framework for accelerating point cloud computation. Specifically, the point cloud computation process is precomputed as much as possible and stored in memory in the form of parameters, thereby significantly reducing the computational load during actual point cloud computation. The framework is instantiated in two forms: a low-memory method that minimizes memory footprint at the expense of point cloud stability, and a high-memory method that preserves the nonlinear phase-distance relation via an extensive lookup table. Experimental evaluations demonstrate that the proposed methods achieve comparable accuracy to the conventional method while delivering substantial speedups, and data-format optimizations further reduce required bandwidth. This framework offers a generalizable paradigm for optimizing structured light pipelines, paving the way for enhanced real-time 3D sensing in embedded applications.
Additional Links: PMID-41745455
@article {pmid41745455,
year = {2026},
author = {Zhang, Y and Meng, S and Wang, S and Ren, Y},
title = {Accelerating Point Cloud Computation via Memory in Embedded Structured Light Cameras.},
journal = {Journal of imaging},
volume = {12},
number = {2},
pages = {},
pmid = {41745455},
issn = {2313-433X},
support = {F2025302004//Natural Science Foundation of Hebei Province/ ; 25B601//Hebei Academy of Sciences/ ; },
abstract = {Embedded structured light cameras have been widely applied in various fields. However, due to constraints such as insufficient computing resources, it remains difficult to achieve high-speed structured light point cloud computation. To address this issue, this study proposes a memory-driven computational framework for accelerating point cloud computation. Specifically, the point cloud computation process is precomputed as much as possible and stored in memory in the form of parameters, thereby significantly reducing the computational load during actual point cloud computation. The framework is instantiated in two forms: a low-memory method that minimizes memory footprint at the expense of point cloud stability, and a high-memory method that preserves the nonlinear phase-distance relation via an extensive lookup table. Experimental evaluations demonstrate that the proposed methods achieve comparable accuracy to the conventional method while delivering substantial speedups, and data-format optimizations further reduce required bandwidth. This framework offers a generalizable paradigm for optimizing structured light pipelines, paving the way for enhanced real-time 3D sensing in embedded applications.},
}
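The high-memory method described above preserves a nonlinear phase-distance relation via a lookup table. A minimal sketch of that idea, using a placeholder calibration function, table size, and phase range (none taken from the paper): the nonlinear evaluation is precomputed once over quantized phase values, so runtime point cloud computation reduces to an index operation.

```python
import math

# Hypothetical nonlinear phase-to-distance calibration
# (a placeholder model, not the paper's actual calibration).
def phase_to_distance(phase, a=0.8, b=1200.0, c=35.0):
    return b / (phase + c) + a * phase

# Precompute a dense lookup table over quantized phase values once.
N_LEVELS = 4096
PHASE_MAX = 16 * math.pi  # assumed unwrapped-phase range
LUT = [phase_to_distance(PHASE_MAX * i / (N_LEVELS - 1)) for i in range(N_LEVELS)]

def distance_lut(phase):
    # runtime: replace the nonlinear evaluation with a table index
    idx = round(phase / PHASE_MAX * (N_LEVELS - 1))
    return LUT[max(0, min(N_LEVELS - 1, idx))]
```

The trade-off is exactly the one the abstract names: a larger table shrinks quantization error (better point cloud stability) at the cost of memory footprint.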
RevDate: 2026-02-25
Intelligent cloud-based RAS management: integration of DDPG reinforcement learning with AWS IoT for optimized aquaculture production.
Scientific reports pii:10.1038/s41598-025-33736-7 [Epub ahead of print].
While Deep Deterministic Policy Gradient (DDPG) reinforcement learning has demonstrated significant potential for optimizing aquaculture operations in laboratory and controlled environments, its practical deployment in commercial-scale Recirculating Aquaculture Systems (RAS) faces critical scalability and infrastructure challenges. This paper presents a novel cloud-edge hybrid architecture that enables the deployment of DDPG-based control systems across diverse commercial aquaculture operations, from small research facilities to large-scale production systems. Building upon our previous work in DDPG-based feeding rate optimization and energy management, we develop a comprehensive framework that addresses the practical challenges of deploying AI-based control systems in real-world aquaculture environments. The proposed architecture integrates AWS IoT Core for sensor connectivity, AWS Greengrass for edge intelligence, and a suite of cloud services for scalable model deployment and management. Edge optimization techniques, including 16-bit quantization and architecture pruning, reduced the DDPG model size by 74% (32 MB to 8.3 MB) while maintaining accuracy within 1.5% of the full-precision version, enabling real-time inference with 47 ± 8 ms latency across all deployment scales. Field validation in a commercial facility with 108 tanks (3,132 m[3] total volume) demonstrated exceptional scalability, with only 8.9% latency increase from small-scale (1,000 L) to large-scale (50,000 L) operations. The system achieved 99.97% IoT message delivery rates and maintained 98.7% reliability in critical parameter control, while comprehensive failsafe mechanisms ensured safe operation during network disruptions lasting up to 72 h. Network resilience testing validated robust performance under various connectivity challenges, maintaining 98.5% performance retention during minor network latency and 85.2% retention during 12-hour complete disconnections. 
This research establishes a practical blueprint for transitioning DDPG-based aquaculture management from research environments to commercial deployment, addressing critical gaps in scalability, reliability, and operational resilience that have previously limited the adoption of AI-based control systems in the aquaculture industry.
Additional Links: PMID-41741505
@article {pmid41741505,
year = {2026},
author = {Elmessery, WM and Shams, MY and El-Hafeez, TA and Eid, MH and Székács, A and Saeed, O and Ahmed, AF and Alhumedi, M and Elwakeel, AE},
title = {Intelligent cloud-based RAS management: integration of DDPG reinforcement learning with AWS IoT for optimized aquaculture production.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-33736-7},
pmid = {41741505},
issn = {2045-2322},
abstract = {While Deep Deterministic Policy Gradient (DDPG) reinforcement learning has demonstrated significant potential for optimizing aquaculture operations in laboratory and controlled environments, its practical deployment in commercial-scale Recirculating Aquaculture Systems (RAS) faces critical scalability and infrastructure challenges. This paper presents a novel cloud-edge hybrid architecture that enables the deployment of DDPG-based control systems across diverse commercial aquaculture operations, from small research facilities to large-scale production systems. Building upon our previous work in DDPG-based feeding rate optimization and energy management, we develop a comprehensive framework that addresses the practical challenges of deploying AI-based control systems in real-world aquaculture environments. The proposed architecture integrates AWS IoT Core for sensor connectivity, AWS Greengrass for edge intelligence, and a suite of cloud services for scalable model deployment and management. Edge optimization techniques, including 16-bit quantization and architecture pruning, reduced the DDPG model size by 74% (32 MB to 8.3 MB) while maintaining accuracy within 1.5% of the full-precision version, enabling real-time inference with 47 ± 8 ms latency across all deployment scales. Field validation in a commercial facility with 108 tanks (3,132 m[3] total volume) demonstrated exceptional scalability, with only 8.9% latency increase from small-scale (1,000 L) to large-scale (50,000 L) operations. The system achieved 99.97% IoT message delivery rates and maintained 98.7% reliability in critical parameter control, while comprehensive failsafe mechanisms ensured safe operation during network disruptions lasting up to 72 h. Network resilience testing validated robust performance under various connectivity challenges, maintaining 98.5% performance retention during minor network latency and 85.2% retention during 12-hour complete disconnections. 
This research establishes a practical blueprint for transitioning DDPG-based aquaculture management from research environments to commercial deployment, addressing critical gaps in scalability, reliability, and operational resilience that have previously limited the adoption of AI-based control systems in the aquaculture industry.},
}
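The paper's 74% model-size reduction combines 16-bit quantization with architecture pruning. A toy sketch of just the quantization half, using Python's struct module and fabricated stand-in weights: storing weights as half-precision floats halves the bytes at a bounded precision cost.

```python
import struct

# Fabricated stand-in weights (not DDPG parameters from the paper)
weights = [0.1234, -0.5678, 1.5, -3.25, 0.015625]

fp32 = struct.pack(f"{len(weights)}f", *weights)  # 4 bytes per weight
fp16 = struct.pack(f"{len(weights)}e", *weights)  # 2 bytes per weight

size_ratio = len(fp16) / len(fp32)  # 0.5: half the storage

# ...at bounded precision loss: round-trip through float16 and
# measure the worst-case absolute error for these weights.
roundtrip = struct.unpack(f"{len(weights)}e", fp16)
max_err = max(abs(a - b) for a, b in zip(weights, roundtrip))
```

Pruning removes parameters outright, which is how the reported reduction exceeds the 50% that quantization alone provides.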
RevDate: 2026-02-20
SLA aware deep reinforcement learning for adaptive EdgeCloud task scheduling.
Scientific reports pii:10.1038/s41598-026-40237-8 [Epub ahead of print].
Additional Links: PMID-41720948
@article {pmid41720948,
year = {2026},
author = {Yamsani, N and P, CR},
title = {SLA aware deep reinforcement learning for adaptive EdgeCloud task scheduling.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-40237-8},
pmid = {41720948},
issn = {2045-2322},
}
RevDate: 2026-02-18
A quantum-driven multi-objective scheduler for scalable task orchestration in fog-based cyber-physical-social systems.
Scientific reports, 16(1):6874.
Fog computing extends cloud capabilities toward the network edge, enabling low-latency Cyber-Physical-Social System (CPSS) services in domains such as smart cities and healthcare. However, multi-objective task scheduling in fog environments remains challenging due to conflicting goals (minimizing execution time, resource costs, and energy consumption), combined with the scalability limitations of classical evolutionary algorithms, which often converge slowly and produce poorly distributed Pareto fronts in large networks. To address these issues, this paper introduces FOG-QIEA, a quantum-inspired evolutionary algorithm designed for tri-objective fog scheduling. FOG-QIEA augments adaptive neighborhood mechanisms with quantum-inspired operators, including superposition-based population initialization, rotation-gate–driven updates, and measurement-guided selection, enabling faster and more diverse exploration of the solution space. The proposed model jointly optimizes total execution time, cost (including SLA violations), and energy efficiency while maintaining scalability across CPSS deployments with thousands of IoT tasks. Extensive simulations in iFogSim using realistic CPSS scenarios show that FOG-QIEA outperforms NSGA-II, MMPA-based approaches, and classical adaptive fog schedulers by 20–35% in convergence speed and 15–25% in energy reduction, and achieves significantly improved Pareto diversity. These results demonstrate the potential of FOG-QIEA as a sustainable and efficient scheduling framework, supporting future advancements toward quantum-hybrid optimization in fog and edge networks.
Additional Links: PMID-41622305
@article {pmid41622305,
year = {2026},
author = {Hammouda, NG and Shalaby, M and Alfilh, RHC and Singh, NSS},
title = {A quantum-driven multi-objective scheduler for scalable task orchestration in fog-based cyber-physical-social systems.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {6874},
pmid = {41622305},
issn = {2045-2322},
abstract = {Fog computing extends cloud capabilities toward the network edge, enabling low-latency Cyber-Physical-Social System (CPSS) services in domains such as smart cities and healthcare. However, multi-objective task scheduling in fog environments remains challenging due to conflicting goals (minimizing execution time, resource costs, and energy consumption), combined with the scalability limitations of classical evolutionary algorithms, which often converge slowly and produce poorly distributed Pareto fronts in large networks. To address these issues, this paper introduces FOG-QIEA, a quantum-inspired evolutionary algorithm designed for tri-objective fog scheduling. FOG-QIEA augments adaptive neighborhood mechanisms with quantum-inspired operators, including superposition-based population initialization, rotation-gate–driven updates, and measurement-guided selection, enabling faster and more diverse exploration of the solution space. The proposed model jointly optimizes total execution time, cost (including SLA violations), and energy efficiency while maintaining scalability across CPSS deployments with thousands of IoT tasks. Extensive simulations in iFogSim using realistic CPSS scenarios show that FOG-QIEA outperforms NSGA-II, MMPA-based approaches, and classical adaptive fog schedulers by 20–35% in convergence speed and 15–25% in energy reduction, and achieves significantly improved Pareto diversity. These results demonstrate the potential of FOG-QIEA as a sustainable and efficient scheduling framework, supporting future advancements toward quantum-hybrid optimization in fog and edge networks.},
}
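The abstract above names the standard quantum-inspired operators: amplitude-encoded candidate bits, measurement-based sampling, and rotation-gate updates toward an elite solution. A minimal sketch of that generic QIEA machinery (not FOG-QIEA's specific operators; the rotation step size and encoding are illustrative assumptions):

```python
import math

# One quantum-inspired "qubit" per task-to-node assignment bit, stored
# as an angle theta with P(bit = 1) = sin(theta)^2.

def measure(theta, rng):
    # sample a classical bit from the amplitude (rng: () -> [0, 1))
    return 1 if rng() < math.sin(theta) ** 2 else 0

def rotate(theta, best_bit, delta=0.05 * math.pi):
    # rotation-gate update: nudge the amplitude toward the bit value
    # of the best (elite) solution found so far
    target = math.pi / 2 if best_bit == 1 else 0.0
    return theta + delta if theta < target else theta - delta
```

In a full scheduler, a population of such angle vectors is measured each generation, measured solutions are evaluated against the three objectives, and all angles are rotated toward the non-dominated elites.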
RevDate: 2026-02-18
HoloQA: Full Reference Video Quality Assessor of Rendered Human Avatars in Virtual Reality.
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, PP: [Epub ahead of print].
We present HoloQA, a new state-of-the-art Full Reference Video Quality Assessment (VQA) model that was designed using principles of visual neuroscience, information theory, and self-supervised deep learning to accurately predict the quality of rendered digital human avatars in Virtual Reality (VR) and Augmented Reality (AR) systems. The growing adoption of VR/AR applications that aim to transmit digital human avatars over bandwidth-limited video networks has driven the need for VQA algorithms that better account for the kinds of distortions that reduce the quality of rendered and viewed avatars. As we will show, standard VQA models often fail to capture distortions unique to the rendering, transmission, and compression of videos containing human avatars. Towards solving this difficult problem, we adopt a multi-level Mixture-of-Experts approach. This involves computing distortion-aware perceptual features and high-level content-aware deep features that capture semantic attributes of human body avatars. The high-level features are computed using a self-supervised, pre-trained deep learning network. We show that HoloQA is able to achieve state-of-the-art performance on the recently introduced LIVE-Meta Rendered Human Avatar VQA database, demonstrating its efficacy in predicting the quality of rendered human avatars in VR. Furthermore, we demonstrate the competitive performance of HoloQA on other digital human avatar databases and on another synthetically generated video quality use case: cloud gaming. The code associated with this work will be made available on GitHub.
Additional Links: PMID-41706772
@article {pmid41706772,
year = {2026},
author = {Saha, A and Chen, YC and Hane, C and Bazin, JC and Katsavounidis, I and Chapiro, A and Bovik, AC},
title = {HoloQA: Full Reference Video Quality Assessor of Rendered Human Avatars in Virtual Reality.},
journal = {IEEE transactions on image processing : a publication of the IEEE Signal Processing Society},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/TIP.2026.3663930},
pmid = {41706772},
issn = {1941-0042},
abstract = {We present HoloQA, a new state-of-the-art Full Reference Video Quality Assessment (VQA) model that was designed using principles of visual neuroscience, information theory, and self-supervised deep learning to accurately predict the quality of rendered digital human avatars in Virtual Reality (VR) and Augmented Reality (AR) systems. The growing adoption of VR/AR applications that aim to transmit digital human avatars over bandwidth-limited video networks has driven the need for VQA algorithms that better account for the kinds of distortions that reduce the quality of rendered and viewed avatars. As we will show, standard VQA models often fail to capture distortions unique to the rendering, transmission, and compression of videos containing human avatars. Towards solving this difficult problem, we adopt a multi-level Mixture-of-Experts approach. This involves computing distortion-aware perceptual features and high-level content-aware deep features that capture semantic attributes of human body avatars. The high-level features are computed using a self-supervised, pre-trained deep learning network. We show that HoloQA is able to achieve state-of-the-art performance on the recently introduced LIVE-Meta Rendered Human Avatar VQA database, demonstrating its efficacy in predicting the quality of rendered human avatars in VR. Furthermore, we demonstrate the competitive performance of HoloQA on other digital human avatar databases and on another synthetically generated video quality use case: cloud gaming. The code associated with this work will be made available on GitHub.},
}
RevDate: 2026-02-18
CmpDate: 2026-02-18
Dataset on resource allocation and usage for a private cloud.
Data in brief, 65:112514.
While public cloud providers dominate the commercial landscape, private clouds are widely adopted by academic and research institutions to meet specific governance and operational requirements. There are multiple available datasets about resource usage of public clouds; however, datasets capturing usage patterns in private clouds remain scarce, which limits research in this area. This work presents a dataset comprising over 64 million records collected from a private OpenStack-based cloud operated by the Distributed Systems Laboratory at the Federal University of Campina Grande, Brazil. Data were gathered continuously over nearly twelve months (May 23, 2024 to May 16, 2025) by querying OpenStack APIs and monitoring services every five minutes. The dataset captures different aspects of the infrastructure: allocation quotas, user-to-project associations (as OpenStack groups users into projects), server (virtual machine) specifications, and resource utilization for users and projects. Entries are timestamped, enabling temporal analyses of system dynamics. Sensitive attributes, such as user names, project names, IP addresses, and server names, were protected, leaving only system-generated UUIDs. By offering a detailed, time-stamped view of a private cloud, this dataset provides a valuable resource for cloud computing research, helping to bridge the gap in publicly available datasets from non-commercial cloud environments. The dataset is valuable not only for academic institutions but also for companies considering cloud repatriation.
Additional Links: PMID-41704506
@article {pmid41704506,
year = {2026},
author = {Marques, P and Mendes, M and Pereira, TE and Farias, G},
title = {Dataset on resource allocation and usage for a private cloud.},
journal = {Data in brief},
volume = {65},
number = {},
pages = {112514},
pmid = {41704506},
issn = {2352-3409},
abstract = {While public cloud providers dominate the commercial landscape, private clouds are widely adopted by academic and research institutions to meet specific governance and operational requirements. There are multiple available datasets about resource usage of public clouds; however, datasets capturing usage patterns in private clouds remain scarce, which limits research in this area. This work presents a dataset comprising over 64 million records collected from a private OpenStack-based cloud operated by the Distributed Systems Laboratory at the Federal University of Campina Grande, Brazil. Data were gathered continuously over nearly twelve months (May 23, 2024 to May 16, 2025) by querying OpenStack APIs and monitoring services every five minutes. The dataset captures different aspects of the infrastructure: allocation quotas, user-to-project associations (as OpenStack groups users into projects), server (virtual machine) specifications, and resource utilization for users and projects. Entries are timestamped, enabling temporal analyses of system dynamics. Sensitive attributes, such as user names, project names, IP addresses, and server names, were protected, leaving only system-generated UUIDs. By offering a detailed, time-stamped view of a private cloud, this dataset provides a valuable resource for cloud computing research, helping to bridge the gap in publicly available datasets from non-commercial cloud environments. The dataset is valuable not only for academic institutions but also for companies considering cloud repatriation.},
}
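The collection methodology above (timestamped five-minute polling of OpenStack APIs) can be sketched as follows. The fetch function is a stand-in with fabricated figures, not a real OpenStack call (in practice one would query the compute usage/quota endpoints, e.g. via openstacksdk); the sleep hook exists so the loop can be exercised without waiting.

```python
import time
from datetime import datetime, timezone

POLL_SECONDS = 300  # the dataset was sampled every five minutes

def fetch_usage():
    # stand-in for OpenStack API queries; fabricated example figures
    return {"project": "c0ffee-uuid", "vcpus_used": 12, "ram_mb_used": 24576}

def sample_once(now=None):
    # each record carries a timestamp, enabling temporal analyses
    record = dict(fetch_usage())
    record["timestamp"] = (now or datetime.now(timezone.utc)).isoformat()
    return record

def poll(n_samples, sleep=time.sleep):
    # gather n_samples records, pausing POLL_SECONDS between queries
    records = []
    for _ in range(n_samples):
        records.append(sample_once())
        sleep(POLL_SECONDS)
    return records
```

Note the anonymization step described in the abstract would happen before publication: names and addresses dropped, only system-generated UUIDs (like the project field above) retained.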
RevDate: 2026-02-17
AsynDBT: asynchronous distributed bilevel tuning for efficient in-context learning with large language models.
Scientific reports pii:10.1038/s41598-026-39582-5 [Epub ahead of print].
With the rapid development of large language models (LLMs), an increasing number of applications leverage cloud-based LLM APIs to reduce usage costs. However, since the parameters and gradients of cloud-based models are inaccessible, users must adjust prompts manually or with heuristic algorithms to steer LLM outputs, which requires costly optimization procedures. In-context learning (ICL) has recently emerged as a promising paradigm that enables LLMs to adapt to new tasks using examples provided within the input, eliminating the need for parameter updates. Nevertheless, the advancement of ICL is often hindered by the lack of high-quality data, which is often sensitive and difficult to share. Federated learning (FL) offers a potential solution by enabling collaborative training of distributed LLMs while preserving data privacy. Despite this, previous FL approaches that incorporate ICL have struggled with severe straggler problems and with the challenges of heterogeneous, non-identically distributed data. To address these problems, we propose an asynchronous distributed bilevel tuning (AsynDBT) algorithm that optimizes both in-context learning samples and prompt fragments based on feedback from the LLM, thereby enhancing downstream task performance. Benefiting from its distributed architecture, AsynDBT provides privacy protection and adaptability to heterogeneous computing environments. Furthermore, we present a theoretical analysis establishing the convergence guarantees of the proposed algorithm. Extensive experiments conducted on multiple benchmark datasets demonstrate the effectiveness and efficiency of AsynDBT.
Additional Links: PMID-41702990
@article {pmid41702990,
year = {2026},
author = {Ma, H and Dou, S and Liu, Y and Xing, F and Feng, L and Pi, F},
title = {AsynDBT: asynchronous distributed bilevel tuning for efficient in-context learning with large language models.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-39582-5},
pmid = {41702990},
issn = {2045-2322},
support = {5105250183m//the Tianchi Talents - Young Doctor Program/ ; 2024B03028//Science and Technology Program of Xinjiang Uyghur Autonomous Region/ ; 202512120005//Regional Fund of the National Natural Science Foundation of China/ ; },
abstract = {With the rapid development of large language models (LLMs), an increasing number of applications leverage cloud-based LLM APIs to reduce usage costs. However, since the parameters and gradients of cloud-based models are inaccessible, users must adjust prompts manually or with heuristic algorithms to steer LLM outputs, which requires costly optimization procedures. In-context learning (ICL) has recently emerged as a promising paradigm that enables LLMs to adapt to new tasks using examples provided within the input, eliminating the need for parameter updates. Nevertheless, the advancement of ICL is often hindered by the lack of high-quality data, which is often sensitive and difficult to share. Federated learning (FL) offers a potential solution by enabling collaborative training of distributed LLMs while preserving data privacy. Despite this, previous FL approaches that incorporate ICL have struggled with severe straggler problems and with the challenges of heterogeneous, non-identically distributed data. To address these problems, we propose an asynchronous distributed bilevel tuning (AsynDBT) algorithm that optimizes both in-context learning samples and prompt fragments based on feedback from the LLM, thereby enhancing downstream task performance. Benefiting from its distributed architecture, AsynDBT provides privacy protection and adaptability to heterogeneous computing environments. Furthermore, we present a theoretical analysis establishing the convergence guarantees of the proposed algorithm. Extensive experiments conducted on multiple benchmark datasets demonstrate the effectiveness and efficiency of AsynDBT.},
}
RevDate: 2026-02-13
TropMol: a cloud-based web tool for virtual screening and early-stage prediction of acetylcholinesterase inhibitors using machine learning.
Organic & biomolecular chemistry [Epub ahead of print].
Alzheimer's disease (AD) is the most common type of dementia, accounting for at least two-thirds of dementia cases in people aged 65 and older. Numerous approaches have been studied for the treatment of this disease, including the cholinergic hypothesis. Acetylcholinesterase (AChE) is the most promising target studied within the cholinergic hypothesis for the treatment of AD. Therefore, it is necessary to develop predictive models for the identification of AChE inhibitors. Thus, general drug design models can assist chemical synthesis groups and biochemical testing laboratories by enabling virtual screening and drug design. In this work, the objective is to build a generic molecular screening prediction model for public, online and free use based on pIC50, using a random forest model (RF). For this, a dataset with approximately 16 000 compounds and 134 classes of descriptors was used, resulting in more than 2 000 000 calculated descriptors. Other algorithms were studied, such as gradient boosting, XGBoost, LightGBM, and RF with descriptors from principal component analysis (PCA), but none demonstrated significantly superior results compared to the RF model. The final model studied obtained an R[2] = 0.76 with a 15% test set and obtained an R[2] = 0.73 with a 30% test set, with rigorous Y-scrambling confirming the absence of chance correlation. External validation performed on an independent test set comprising 10% of the data yielded an R[2] of 0.77 and an RMSE of 0.67, statistically confirming that the model retains high predictive accuracy for novel chemical scaffolds and is free from overfitting. It is suggested that compounds containing oxime groups (RR'C = NOH) and those with high structural branching (higher Balaban index) tend to be less potent AChE inhibitors (negative correlation). 
In addition, some descriptors indicate that electronic charge distribution, molecular surface area, and hydrophobicity play important roles in correlating with the inhibitory activity (pIC50) of the compounds. The presence of linear alkane chains also seems relevant to activity (positive correlation and greater importance). The data and models are available at the following link: (https://colab.research.google.com/drive/1gMcuXAsrqTIBMNnsCEWG9xfkK7aaZAbn?usp=sharing).
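The target variable above, pIC50, is the negative base-10 logarithm of the molar IC50. A minimal conversion helper (illustrative only, not part of TropMol; function name is my own):

```python
import math

def pic50_from_ic50_nM(ic50_nM: float) -> float:
    """Convert an IC50 in nanomolar to pIC50 = -log10(IC50 in mol/L)."""
    ic50_M = ic50_nM * 1e-9  # nM -> mol/L
    return -math.log10(ic50_M)

# A 10 nM inhibitor corresponds to pIC50 = 8.0; a 1 uM (1000 nM) one to 6.0.
```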
Additional Links: PMID-41685429
@article {pmid41685429,
year = {2026},
author = {Doring, TH},
title = {TropMol: a cloud-based web tool for virtual screening and early-stage prediction of acetylcholinesterase inhibitors using machine learning.},
journal = {Organic & biomolecular chemistry},
volume = {},
number = {},
pages = {},
doi = {10.1039/d6ob00094k},
pmid = {41685429},
issn = {1477-0539},
abstract = {Alzheimer's disease (AD) is the most common type of dementia, accounting for at least two-thirds of dementia cases in people aged 65 and older. Numerous approaches have been studied for the treatment of this disease, including the cholinergic hypothesis. Acetylcholinesterase (AChE) is the most promising target studied within the cholinergic hypothesis for the treatment of AD. Therefore, it is necessary to develop predictive models for the identification of AChE inhibitors. Thus, general drug design models can assist chemical synthesis groups and biochemical testing laboratories by enabling virtual screening and drug design. In this work, the objective is to build a generic molecular screening prediction model for public, online and free use based on pIC50, using a random forest model (RF). For this, a dataset with approximately 16 000 compounds and 134 classes of descriptors was used, resulting in more than 2 000 000 calculated descriptors. Other algorithms were studied, such as gradient boosting, XGBoost, LightGBM, and RF with descriptors from principal component analysis (PCA), but none demonstrated significantly superior results compared to the RF model. The final model studied obtained an R[2] = 0.76 with a 15% test set and obtained an R[2] = 0.73 with a 30% test set, with rigorous Y-scrambling confirming the absence of chance correlation. External validation performed on an independent test set comprising 10% of the data yielded an R[2] of 0.77 and an RMSE of 0.67, statistically confirming that the model retains high predictive accuracy for novel chemical scaffolds and is free from overfitting. It is suggested that compounds containing oxime groups (RR'C = NOH) and those with high structural branching (higher Balaban index) tend to be less potent AChE inhibitors (negative correlation). 
In addition, some descriptors indicate that electronic charge distribution, molecular surface area, and hydrophobicity play important roles in correlating with the inhibitory activity (pIC50) of the compounds. The presence of linear alkane chains also seems relevant to activity (positive correlation and greater importance). The data and models are available at the following link: (https://colab.research.google.com/drive/1gMcuXAsrqTIBMNnsCEWG9xfkK7aaZAbn?usp=sharing).},
}
RevDate: 2026-02-13
CmpDate: 2026-02-13
Large language models for structured cardiovascular data extraction: a foundation for scalable research and clinical applications.
European heart journal. Digital health, 7(2):ztaf127.
AIMS: Automated extraction of information from cardiac reports would benefit both clinical reporting and research. Large language models (LLMs) hold promise for such automation, but their clinical performance and practical implementation across various computational environments remain unclear. This study aims to evaluate the feasibility and performance of LLM-based classification of echocardiogram and invasive coronary angiography reports, using real-world clinical data across local, high-performance computing and cloud-based platforms.
METHODS AND RESULTS: The angiography and echocardiography reports of 1000 patients, admitted with acute coronary syndrome, were labelled for multiple key diagnostic elements, including left ventricular function (LVF), culprit vessel, and acute occlusions. Report classification models were developed using LLMs via (i) prompt-based and (ii) fine-tuning approaches. Performance was assessed across different model types and compute infrastructures, with attention to class imbalance, ambiguous label annotations, and implementation costs. Large language models demonstrated strong performance in extracting structured diagnostic information from cardiac reports. Cloud-based models (such as GPT-4o) achieved the highest accuracy (0.87 for culprit vessel and 1.0 for LVF) and generalizability, but smaller models run on a local high-performance cluster also achieved reasonable accuracy, especially for less complex tasks (0.634 for culprit vessel and 0.984 for LVF). Classification was feasible with minimal pre-processing, enabling potential integration into electronic health record systems or research pipelines. Class imbalance, reflective of real-world prevalence, had a greater impact on fine-tuning approaches.
CONCLUSION: Large language models can reliably classify structured cardiology reports across diverse compute infrastructures. Their accuracy and adaptability support their use in clinical and research settings, particularly for scalable report structuring and dataset generation.
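Accuracy figures like those above are sensitive to the class imbalance the authors highlight: predicting only the majority class can look deceptively good. A minimal sketch (labels hypothetical) contrasting plain accuracy with class-balanced accuracy:

```python
from collections import defaultdict

def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to skewed class frequencies."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return sum(hits[c] / totals[c] for c in totals) / len(totals)

# Hypothetical culprit-vessel labels: 8 LAD cases, 2 RCA cases.
y_true = ["LAD"] * 8 + ["RCA"] * 2
y_pred = ["LAD"] * 10          # a model that always answers "LAD"
# plain accuracy 0.80, balanced accuracy 0.50
```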
Additional Links: PMID-41684376
@article {pmid41684376,
year = {2026},
author = {van der Loo, W and van der Valk, V and van den Broek, T and Atsma, D and Staring, M and Scherptong, R},
title = {Large language models for structured cardiovascular data extraction: a foundation for scalable research and clinical applications.},
journal = {European heart journal. Digital health},
volume = {7},
number = {2},
pages = {ztaf127},
pmid = {41684376},
issn = {2634-3916},
abstract = {AIMS: Automated extraction of information from cardiac reports would benefit both clinical reporting and research. Large language models (LLMs) hold promise for such automation, but their clinical performance and practical implementation across various computational environments remain unclear. This study aims to evaluate the feasibility and performance of LLM-based classification of echocardiogram and invasive coronary angiography reports, using real-world clinical data across local, high-performance computing and cloud-based platforms.
METHODS AND RESULTS: The angiography and echocardiography reports of 1000 patients, admitted with acute coronary syndrome, were labelled for multiple key diagnostic elements, including left ventricular function (LVF), culprit vessel, and acute occlusions. Report classification models were developed using LLMs via (i) prompt-based and (ii) fine-tuning approaches. Performance was assessed across different model types and compute infrastructures, with attention to class imbalance, ambiguous label annotations, and implementation costs. Large language models demonstrated strong performance in extracting structured diagnostic information from cardiac reports. Cloud-based models (such as GPT-4o) achieved the highest accuracy (0.87 for culprit vessel and 1.0 for LVF) and generalizability, but also smaller models run on a local high-performance cluster achieved reasonable accuracy, especially for less complex tasks (0.634 for culprit vessel and 0.984 for LVF). Classification was feasible with minimal pre-processing, enabling potential integration into electronic health record systems or research pipelines. Class imbalance, reflective of real-world prevalence, had a greater impact on fine-tuning approaches.
CONCLUSION: Large language models can reliably classify structured cardiology reports across diverse computed infrastructures. Their accuracy and adaptability support their use in clinical and research settings, particularly for scalable report structuring and dataset generation.},
}
RevDate: 2026-02-13
Privacy Protection Optimization Method for Cloud Platforms Based on Federated Learning and Homomorphic Encryption.
Sensors (Basel, Switzerland), 26(3): pii:s26030890.
With the wide application of cloud computing in multi-tenant, heterogeneous-node, and high-concurrency environments, model parameters frequently interact during distributed training, which easily leads to privacy leakage, communication redundancy, and decreased aggregation efficiency. To realize the collaborative optimization of privacy protection and computing performance, this study proposes the Heterogeneous Federated Homomorphic Encryption Cloud (HFHE-Cloud) model, which integrates federated learning (FL) and homomorphic encryption and constructs a secure and efficient collaborative learning framework for cloud platforms. Without exposing the original data, the model effectively reduces the performance bottleneck caused by encryption calculation and communication delay through hierarchical key mapping and a dynamic scheduling mechanism for heterogeneous nodes. The experimental results show that HFHE-Cloud is significantly superior in comprehensive performance to five baseline models, including Federated Averaging (FedAvg), Federated Proximal (FedProx), Federated Personalization (FedPer), Federated Normalized Averaging (FedNova), and Homomorphically Encrypted Federated Averaging (HE-FedAvg). In the dimension of privacy protection, the global accuracy is up to 94.25%, and the loss is stable within 0.09. In terms of computing performance, the encryption and decryption time is shortened by about one third, and the encryption overhead is controlled at 13%. In terms of distributed training efficiency, the number of communication rounds is reduced by about one fifth, and the node participation rate is stable at over 90%. The results verify the model's ability to achieve high security and high scalability in multi-tenant environments. This study aims to provide cloud service providers and enterprise data holders with a technical solution of high-intensity privacy protection and efficient collaborative training that can be deployed in real cloud platforms.
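The aggregation step that all of the federated baselines above share is weighted model averaging (FedAvg). A minimal sketch (my own illustration; in HFHE-style schemes the client vectors would arrive homomorphically encrypted, here they are aggregated in the clear):

```python
def fedavg(client_weights, client_sizes):
    """Weighted FedAvg: global_w[i] = sum_k (n_k / n) * w_k[i],
    where n_k is client k's local sample count and n their total."""
    n = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / n
            for i in range(dim)]

# Two clients with 1 and 3 samples; the larger client dominates the average.
global_w = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])  # -> [2.5, 3.5]
```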
Additional Links: PMID-41682403
@article {pmid41682403,
year = {2026},
author = {Wang, J and Wang, Y},
title = {Privacy Protection Optimization Method for Cloud Platforms Based on Federated Learning and Homomorphic Encryption.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030890},
pmid = {41682403},
issn = {1424-8220},
abstract = {With the wide application of cloud computing in multi-tenant, heterogeneous nodes and high-concurrency environments, model parameters frequently interact during distributed training, which easily leads to privacy leakage, communication redundancy, and decreased aggregation efficiency. To realize the collaborative optimization of privacy protection and computing performance, this study proposes the Heterogeneous Federated Homomorphic Encryption Cloud (HFHE-Cloud) model, which integrates federated learning (FL) and homomorphic encryption and constructs a secure and efficient collaborative learning framework for cloud platforms. Under the condition of not exposing the original data, the model effectively reduces the performance bottleneck caused by encryption calculation and communication delay through hierarchical key mapping and dynamic scheduling mechanism of heterogeneous nodes. The experimental results show that HFHE-Cloud is significantly superior to Federated Averaging (FedAvg), Federated Proximal (FedProx), Federated Personalization (FedPer) and Federated Normalized Averaging (FedNova) in comprehensive performance, Homomorphically Encrypted Federated Averaging (HE-FedAvg) and other five baseline models. In the dimension of privacy protection, the global accuracy is up to 94.25%, and the Loss is stable within 0.09. In terms of computing performance, the encryption and decryption time is shortened by about one third, and the encryption overhead is controlled at 13%. In terms of distributed training efficiency, the number of communication rounds is reduced by about one fifth, and the node participation rate is stable at over 90%. The results verify the model's ability to achieve high security and high scalability in multi-tenant environment. 
This study aims to provide cloud service providers and enterprise data holders with a technical solution of high-intensity privacy protection and efficient collaborative training that can be deployed in real cloud platforms.},
}
RevDate: 2026-02-13
Precision Farming with Smart Sensors: Current State, Challenges and Future Outlook.
Sensors (Basel, Switzerland), 26(3): pii:s26030882.
The agricultural sector, a vital industry for human survival and a primary source of food and raw materials, faces increasing pressure due to global population growth and environmental strains. Productivity, efficiency, and sustainability constraints are preventing traditional farming methods from adequately meeting the growing demand for food. Precision farming has emerged as a transformative paradigm to address these issues. It integrates advanced technologies to improve decision making, optimize yield, and conserve resources. This approach leverages technologies such as wireless sensor networks, the Internet of Things (IoT), robotics, drones, artificial intelligence (AI), and cloud computing to provide effective and cost-efficient agricultural services. Smart sensor technologies are foundational to precision farming. They offer crucial information regarding soil conditions, plant growth, and environmental factors in real time. This review explores the status, challenges, and prospects of smart sensor technologies in precision farming. The integration of smart sensors with the IoT and AI has significantly transformed how agricultural data is collected, analyzed, and utilized to optimize yield, conserve resources, and enhance overall farm efficiency. The review delves into various types of smart sensors used, their applications, and emerging technologies that promise to further innovate data acquisition and decision making in agriculture. Despite progress, challenges persist. They include sensor calibration, data privacy, interoperability, and adoption barriers. To fully realize the potential of smart sensors in ensuring global food security and promoting sustainable farming, the challenges need to be addressed.
Additional Links: PMID-41682397
@article {pmid41682397,
year = {2026},
author = {Manono, BO and Mwami, B and Mutavi, S and Nzilu, F},
title = {Precision Farming with Smart Sensors: Current State, Challenges and Future Outlook.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030882},
pmid = {41682397},
issn = {1424-8220},
abstract = {The agricultural sector, a vital industry for human survival and a primary source of food and raw materials, faces increasing pressure due to global population growth and environmental strains. Productivity, efficiency, and sustainability constraints are preventing traditional farming methods from adequately meeting the growing demand for food. Precision farming has emerged as a transformative paradigm to address these issues. It integrates advanced technologies to improve decision making, optimize yield, and conserve resources. This approach leverages technologies such as wireless sensor networks, the Internet of Things (IoT), robotics, drones, artificial intelligence (AI), and cloud computing to provide effective and cost-efficient agricultural services. Smart sensor technologies are foundational to precision farming. They offer crucial information regarding soil conditions, plant growth, and environmental factors in real time. This review explores the status, challenges, and prospects of smart sensor technologies in precision farming. The integration of smart sensors with the IoT and AI has significantly transformed how agricultural data is collected, analyzed, and utilized to optimize yield, conserve resources, and enhance overall farm efficiency. The review delves into various types of smart sensors used, their applications, and emerging technologies that promise to further innovate data acquisition and decision making in agriculture. Despite progress, challenges persist. They include sensor calibration, data privacy, interoperability, and adoption barriers. To fully realize the potential of smart sensors in ensuring global food security and promoting sustainable farming, the challenges need to be addressed.},
}
RevDate: 2026-02-13
EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification.
Sensors (Basel, Switzerland), 26(3): pii:s26030824.
Desertification is the impoverishment of fertile land, caused by various factors and environmental effects, such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, the classical AI algorithms usually suffer from falling into local optimum issues and consuming more energy. This research proposed an improved multi-objective routing protocol, namely, the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and a software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The methodology of the proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies, such as the Cloud, SDN, EC, and quantum technique-based ARO. This protocol increases the data analysis speed because of the suggested iterated quantum gates with the ARO, which can rapidly penetrate from the local to the global optimum. The protocol avoids desertification because of a new effective objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results established that the suggested EQARO-ECS procedure increases accuracy and improves network lifetime by reducing energy depletion compared to other algorithms.
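The paper's objective function combines energy consumption, communication cost, and desertification parameters. A weighted-sum sketch of such a multi-objective fitness (weights and normalization are my assumptions, not taken from the paper):

```python
def cluster_fitness(energy, comm_cost, desert_risk,
                    w_e=0.4, w_c=0.3, w_d=0.3):
    """Illustrative weighted-sum objective over terms normalized to [0, 1]:
    lower energy use, communication cost, and desertification risk all
    drive the fitness down, so the optimizer minimizes this value."""
    return w_e * energy + w_c * comm_cost + w_d * desert_risk

# A well-placed cluster head scores lower (better) than a poorly placed one.
good = cluster_fitness(0.2, 0.2, 0.2)
bad = cluster_fitness(0.9, 0.9, 0.9)
```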
Additional Links: PMID-41682340
@article {pmid41682340,
year = {2026},
author = {Al-Janabi, TA and Al-Raweshidy, HS and Zouri, M},
title = {EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030824},
pmid = {41682340},
issn = {1424-8220},
abstract = {Desertification is the impoverishment of fertile land, caused by various factors and environmental effects, such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, the classical AI algorithms usually suffer from falling into local optimum issues and consuming more energy. This research proposed an improved multi-objective routing protocol, namely, the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and a software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The methodology of the proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies, such as the Cloud, SDN, EC, and quantum technique-based ARO. This protocol increases the data analysis speed because of the suggested iterated quantum gates with the ARO, which can rapidly penetrate from the local to the global optimum. The protocol avoids desertification because of a new effective objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results established that the suggested EQARO-ECS procedure increases accuracy and improves network lifetime by reducing energy depletion compared to other algorithms.},
}
RevDate: 2026-02-13
A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges.
Sensors (Basel, Switzerland), 26(3): pii:s26030799.
The goal of the study presented in this work is to analyze recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. To this end, the rapid penetration of IoT technology in modern-era networks, along with associated applications, poses new challenges towards efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth generation (6G) networks are already taking place, which are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches to satisfy the aforementioned challenges and requirements are presented and analyzed. To this end, a proposed architectural approach is also presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios enabled by meta-OSs in the computing continuum are presented as well. Finally, open issues and related challenges are also discussed.
Additional Links: PMID-41682316
@article {pmid41682316,
year = {2026},
author = {Gkonis, PK and Giannopoulos, A and Nomikos, N and Sarakis, L and Nikolakakis, V and Patsourakis, G and Trakadas, P},
title = {A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {3},
pages = {},
doi = {10.3390/s26030799},
pmid = {41682316},
issn = {1424-8220},
abstract = {The goal of the study presented in this work is to analyze all recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. To this end, the rapid penetration of IoT technology in modern-era networks, along with associated applications, poses new challenges towards efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth generation (6G) networks are already taking place, which they are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches to satisfy the aforementioned challenges and requirements are presented and analyzed. To this end, a proposed architectural approach is also presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios that are leveraged from a meta-OSs in the computing continuum are presented as well. Finally, open issues and related challenges are also discussed.},
}
RevDate: 2026-02-12
FMLCA: explainable and privacy-preserving federated machine learning classification algorithms for predicting heart disease in patients.
European journal of medical research pii:10.1186/s40001-026-04023-6 [Epub ahead of print].
BACKGROUND: Heart disease is a global health concern that significantly contributes to worldwide mortality. Machine Learning (ML) models have emerged as a powerful tool for predicting Coronary Artery Disease (CAD), a type of heart disease, by utilizing clinical features for classification. Federated Learning (FL) offers a solution for collaborative training without sharing raw data, thus addressing privacy concerns.
METHODS: This study presents an innovative approach, Federated Machine Learning Classification Algorithms (FMLCA), which utilizes cloud computing, privacy preservation techniques, and ML classification algorithms, including Decision Tree (DT), Adaptive Boosting (AdaBoost), K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), to predict CAD. In addition, privacy preservation is provided through the k-anonymity technique, and the SHapley Additive exPlanations (SHAP) technique was utilized to identify the features most important to the model's decision-making process.
RESULTS: Compared to the other models, the proposed RF model obtained the best performance, achieving an accuracy of 83.21% with privacy preservation and 84.49% without it. Furthermore, the SHAP technique enhances transparency by attributing feature influences in predictions.
CONCLUSION: Implementing these models on a cloud platform results in efficient computational performance. This proposed approach represents a significant advancement in predictive healthcare tools, capable of accurately predicting CAD across distributed environments. By placing a strong emphasis on privacy and security, this approach underscores its importance and paves the way for a transformative healthcare ecosystem that centers on the needs of patients and healthcare providers.
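The k-anonymity property that FMLCA relies on is easy to state concretely: every combination of quasi-identifier values must be shared by at least k records. A minimal checker (field names and records are hypothetical, not taken from the study):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values
    occurs in at least k of the records."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers)
                     for r in records)
    return all(c >= k for c in counts.values())

rows = [{"age": "40-50", "sex": "M"},
        {"age": "40-50", "sex": "M"},
        {"age": "40-50", "sex": "F"}]
# Grouped on (age, sex): ("40-50", "M") occurs twice, ("40-50", "F") once,
# so the table is not 2-anonymous over both fields, but it is 3-anonymous
# over age alone.
```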
Additional Links: PMID-41680929
@article {pmid41680929,
year = {2026},
author = {Sorayaie Azar, A and Gholami, F and Sharifi, L and Asl Asgharian Sardroud, A and Bagherzadeh Mohasefi, J and Wiil, UK},
title = {FMLCA: explainable and privacy-preserving federated machine learning classification algorithms for predicting heart disease in patients.},
journal = {European journal of medical research},
volume = {},
number = {},
pages = {},
doi = {10.1186/s40001-026-04023-6},
pmid = {41680929},
issn = {2047-783X},
abstract = {BACKGROUND: Heart disease is a global health concern that significantly contributes to worldwide mortality. Machine Learning (ML) models have emerged as a powerful tool for predicting Coronary Artery Disease (CAD), a type of heart disease, by utilizing clinical features for classification. Federated Learning (FL) offers a solution for collaborative training without sharing raw data, thus addressing privacy concerns.
METHODS: This study presents an innovative approach, Federated Machine Learning Classification Algorithms (FMLCA), which utilizes cloud computing, privacy preservation techniques, and ML classification algorithms, including Decision Tree (DT), Adaptive Boosting (AdaBoost), K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost), to predict CAD. In addition, privacy preserving is considered through the k-anonymity technique, and SHapley Additive exPlanations (SHAP) technique was utilized to identify features important in the model decision-making process.
RESULTS: The proposed RF model, compared to other models, obtained better performance. This RF model achieved an accuracy of 83.21% with privacy preservation and 84.49% without it. Furthermore, the SHAP technique enhances transparency by attributing feature influences in predictions.
CONCLUSION: Implementing these models on a cloud platform results in efficient computational performance. This proposed approach represents a significant advancement in predictive healthcare tools, capable of accurately predicting CAD across distributed environments. By placing a strong emphasis on privacy and security, this approach underscores its importance and paves the way for a transformative healthcare ecosystem that centers on the needs of patients and healthcare providers.},
}
RevDate: 2026-02-11
WeMol: A Cloud-Based and Zero-Code Platform for AI-Driven Molecular Design and Simulation.
Journal of chemical information and modeling [Epub ahead of print].
Artificial intelligence (AI) has demonstrated remarkable potential in reshaping modern drug discovery, yet its widespread adoption is hindered by fragmented tools, high technical barriers, and the lack of user-friendly interfaces. Here, we present WeMol, an AI-driven one-stop molecular computing platform designed to streamline early-stage drug discovery. WeMol integrates a series of modules, covering molecular similarity search, structure-based and AI-enhanced docking, ADMET prediction, molecular generation, and molecular dynamics simulations. The platform features a zero-code, cloud-based interface that enables researchers without programming expertise to construct and execute comprehensive computational workflows. By integrating advanced AI algorithms with practical applications, WeMol lowers the entry barrier for nonexperts and provides a versatile, accessible, and reproducible solution to accelerate early drug design and discovery.
Additional Links: PMID-41668343
@article {pmid41668343,
year = {2026},
author = {Liu, H and Yan, X and Fang, H and Ge, H and Hou, X},
title = {WeMol: A Cloud-Based and Zero-Code Platform for AI-Driven Molecular Design and Simulation.},
journal = {Journal of chemical information and modeling},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jcim.6c00014},
pmid = {41668343},
issn = {1549-960X},
abstract = {Artificial intelligence (AI) has demonstrated remarkable potential in reshaping modern drug discovery, yet its widespread adoption is hindered by fragmented tools, high technical barriers, and the lack of user-friendly interfaces. Here, we present WeMol, an AI-driven one-stop molecular computing platform designed to streamline early-stage drug discovery. WeMol integrates a series of modules, covering molecular similarity search, structure-based and AI-enhanced docking, ADMET prediction, molecular generation, and molecular dynamics simulations. The platform features a zero-code, cloud-based interface that enables researchers without programming expertise to construct and execute comprehensive computational workflows. By integrating advanced AI algorithms with practical applications, WeMol lowers the entry barrier for nonexperts and provides a versatile, accessible, and reproducible solution to accelerate early drug design and discovery.},
}
RevDate: 2026-02-10
O-RAID: a satellite constellation architecture for ultra-resilient global data backup.
Scientific reports pii:10.1038/s41598-026-38784-1 [Epub ahead of print].
Growing global data volumes and the increasing frequency of climate-related and geopolitical threats highlight the need for ultra-resilient backup infrastructures. This paper proposes a novel Satellite-RAID architecture, named O-RAID, in which clusters of satellites operate as a distributed redundant array of independent disks (RAID), enabling large-scale cold and warm backup storage in Earth's orbit. Unlike previous work on space-based computing or satellite cloud relays, this research presents a formal design for orbital storage redundancy, inter-satellite parity exchange, latency-tolerant RAID protocols and power provisioning using a geostationary solar-energy beam. To establish a foundation for quantifying system resilience, we develop a reliability framework based on a Continuous-Time Markov Chain (CTMC) model, defining the states and transition rates for future survivability analysis of an orbital RAID equivalent. The paper provides a comprehensive analysis of the system architecture, its core components and the mathematical underpinnings for erasure coding and communication. An in-depth examination of system feasibility, survivability simulations, key constraints and communication overhead is presented, concluding that orbital backup storage presents a viable and promising paradigm for national archives, disaster-resilient storage and long-term scientific data preservation with technical readiness projected by 2035.
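The single-parity special case of the RAID-style erasure coding O-RAID generalizes can be shown in a few lines (my own illustration; the paper's scheme covers richer codes and orbital constraints):

```python
def xor_parity(blocks):
    """RAID-style parity: byte-wise XOR of equal-length data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild a single lost block: XOR of the survivors and the parity."""
    return xor_parity(list(surviving_blocks) + [parity])

# Three "satellite" data blocks and their parity; lose the middle block
# and reconstruct it from the two survivors plus parity.
data = [b"sat", b"ell", b"ite"]
p = xor_parity(data)
rebuilt = recover([data[0], data[2]], p)  # -> b"ell"
```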
Additional Links: PMID-41667810
@article {pmid41667810,
year = {2026},
author = {Meegama, RGN},
title = {O-RAID: a satellite constellation architecture for ultra-resilient global data backup.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-38784-1},
pmid = {41667810},
issn = {2045-2322},
abstract = {Growing global data volumes and the increasing frequency of climate-related and geopolitical threats highlight the need for ultra-resilient backup infrastructures. This paper proposes a novel Satellite-RAID architecture, named O-RAID, in which clusters of satellites operate as a distributed redundant array of independent disks (RAID), enabling large-scale cold and warm backup storage in Earth's orbit. Unlike previous work on space-based computing or satellite cloud relays, this research presents a formal design for orbital storage redundancy, inter-satellite parity exchange, latency-tolerant RAID protocols and power provisioning using a geostationary solar-energy beam. To establish a foundation for quantifying system resilience, we develop a reliability framework based on a Continuous-Time Markov Chain (CTMC) model, defining the states and transition rates for future survivability analysis of an orbital RAID equivalent. The paper provides a comprehensive analysis of the system architecture, its core components and the mathematical underpinnings for erasure coding and communication. An in-depth examination of system feasibility, survivability simulations, key constraints and communication overhead is presented, concluding that orbital backup storage presents a viable and promising paradigm for national archives, disaster-resilient storage and long-term scientific data preservation with technical readiness projected by 2035.},
}
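The CTMC reliability framework described in the O-RAID abstract can be illustrated with a small birth-death model: transient states count failed satellites, an absorbing state represents data loss once more devices fail than the parity scheme tolerates, and the expected time to absorption gives a mean time to data loss. A minimal sketch under stated assumptions — the paper's actual state space, rates, and parameters are not given here; `n`, `tolerate`, `lam`, and `mu` are illustrative placeholders:

```python
import numpy as np

def raid_mttf(n=6, tolerate=2, lam=1e-4, mu=1e-2):
    """Mean time to data loss for an n-device redundant array that
    tolerates `tolerate` concurrent failures, modeled as a birth-death
    CTMC with an absorbing state at tolerate+1 failures.
    lam = per-device failure rate, mu = per-device repair rate."""
    m = tolerate + 1                  # transient states 0..tolerate failures
    Q = np.zeros((m, m))              # generator restricted to transient states
    for i in range(m):
        if i + 1 < m:
            Q[i, i + 1] = (n - i) * lam   # one more device fails
        if i > 0:
            Q[i, i - 1] = i * mu          # one repair completes
        # diagonal: total outflow, including the leak into the absorbing state
        Q[i, i] = -((n - i) * lam + (i * mu if i > 0 else 0.0))
    # Expected absorption times t satisfy Q t = -1 (standard CTMC result)
    t = np.linalg.solve(Q, -np.ones(m))
    return t[0]                       # starting from the all-healthy state
```

With no repair and no redundancy the result collapses to the series-system formula 1/(n·lam), which provides a quick sanity check on the construction.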
RevDate: 2026-02-09
CmpDate: 2026-02-09
Big data in healthcare and medicine revisited: design and managerial challenges in the age of artificial intelligence.
Health information science and systems, 14(1):38.
A decade ago, we characterized big data in healthcare as a nascent field anchored in distributed computing paradigms. The intervening years have witnessed a transformation so profound that revisiting our original framework is essential. This paper critically examines the evolution of big data in healthcare and medicine, assessing the shift from Hadoop-centric architectures to cloud computing platforms and GPU-accelerated artificial intelligence, including large language models and the emerging paradigm of agentic AI. The landscape has been reshaped by landmark biobank initiatives, breakthrough applications such as AlphaFold's Nobel Prize-winning solution to protein structure prediction, and the rapid growth of FDA-cleared AI medical devices from fewer than ten in 2015 to over 1200 by mid-2025. AI has enabled advances across precision oncology, drug discovery, and public health surveillance. Yet new challenges have emerged: algorithmic bias perpetuating health disparities, opacity undermining clinical trust, environmental sustainability concerns, and unresolved questions of privacy, security, data ownership, and interoperability. We propose extending the original "4Vs" framework to accommodate veracity through explainability, validity through fairness, and viability through sustainability. The paper concludes with prescriptive implications for healthcare organizations, technology developers, policymakers, and researchers.
Additional Links: PMID-41659839
@article {pmid41659839,
year = {2026},
author = {Raghupathi, W and Raghupathi, V},
title = {Big data in healthcare and medicine revisited: design and managerial challenges in the age of artificial intelligence.},
journal = {Health information science and systems},
volume = {14},
number = {1},
pages = {38},
pmid = {41659839},
issn = {2047-2501},
abstract = {A decade ago, we characterized big data in healthcare as a nascent field anchored in distributed computing paradigms. The intervening years have witnessed a transformation so profound that revisiting our original framework is essential. This paper critically examines the evolution of big data in healthcare and medicine, assessing the shift from Hadoop-centric architectures to cloud computing platforms and GPU-accelerated artificial intelligence, including large language models and the emerging paradigm of agentic AI. The landscape has been reshaped by landmark biobank initiatives, breakthrough applications such as AlphaFold's Nobel Prize-winning solution to protein structure prediction, and the rapid growth of FDA-cleared AI medical devices from fewer than ten in 2015 to over 1200 by mid-2025. AI has enabled advances across precision oncology, drug discovery, and public health surveillance. Yet new challenges have emerged: algorithmic bias perpetuating health disparities, opacity undermining clinical trust, environmental sustainability concerns, and unresolved questions of privacy, security, data ownership, and interoperability. We propose extending the original "4Vs" framework to accommodate veracity through explainability, validity through fairness, and viability through sustainability. The paper concludes with prescriptive implications for healthcare organizations, technology developers, policymakers, and researchers.},
}
RevDate: 2026-02-09
CmpDate: 2026-02-09
Emerging trends and bibliometric analysis of internet of medical things for innovative healthcare (2016-2023).
Digital health, 12:20552076251395701.
BACKGROUND: The internet of medical things (IoMT) is revolutionizing digital health through continuous monitoring, real-time diagnostics, and remote care capabilities. Nonetheless, research in this domain remains disjointed, with a restricted comprehension of its growth trajectories, principal contributors, and thematic emphasis. A comprehensive evaluation is thus required to inform forthcoming research, policy, and advancements in resilient healthcare technologies.
METHODS: This study performed a bibliometric and literature-based analysis of IoMT research indexed in the Scopus database from 2016 to 2023. The dataset was optimized by keyword screening, resulting in 762 pertinent papers. Bibliometric indices, including publication and citation trends, authorship and institutional output, and funding patterns, were analyzed. Thematic evolution was examined by keyword co-occurrence and cluster mapping utilizing VOSviewer, complemented by a synthesis of literature.
RESULTS: A total of 762 publications on IoMT were identified, comprising 63.12% journal articles, 30.97% conference papers, and 5.91% review papers. The total publications rose from 1 in 2016 to 301 in 2023, indicating a 30,000% increase. Total citations reached 19,014, with an h-index of 171. The most prolific contributors were Mohsen M. Guizani, King Saud University, and India. Collaborations and funding, particularly from international agencies, were found to significantly drive research productivity. Keyword and cluster analyses revealed two dominant thematic areas: Smart Medical Diagnostics and Privacy-Driven Health Technologies. The literature further confirmed strong integration of machine learning, blockchain, sensor technologies, and cloud computing in IoMT applications.
CONCLUSION: This analysis consolidates fragmented IoMT research, providing a structured overview of its development, contributors, and thematic trajectories. The findings highlight the rapid growth, global collaborations, and integration of advanced technologies driving the field. By mapping benchmarks and research hotspots, the study offers valuable evidence to guide future investigations, interdisciplinary collaborations, and policy efforts aimed at strengthening secure and patient-centered digital health systems.
Additional Links: PMID-41659061
@article {pmid41659061,
year = {2026},
author = {Xin, H and Ajibade, SM and Alhassan, GN and Yilmaz, Y},
title = {Emerging trends and bibliometric analysis of internet of medical things for innovative healthcare (2016-2023).},
journal = {Digital health},
volume = {12},
number = {},
pages = {20552076251395701},
pmid = {41659061},
issn = {2055-2076},
abstract = {BACKGROUND: The internet of medical things (IoMT) is revolutionizing digital health through continuous monitoring, real-time diagnostics, and remote care capabilities. Nonetheless, research in this domain remains disjointed, with a restricted comprehension of its growth trajectories, principal contributors, and thematic emphasis. A comprehensive evaluation is thus required to inform forthcoming research, policy, and advancements in resilient healthcare technologies.
METHODS: This study performed a bibliometric and literature-based analysis of IoMT research indexed in the Scopus database from 2016 to 2023. The dataset was optimized by keyword screening, resulting in 762 pertinent papers. Bibliometric indices, including publication and citation trends, authorship and institutional output, and funding patterns, were analyzed. Thematic evolution was examined by keyword co-occurrence and cluster mapping utilizing VOSviewer, complemented by a synthesis of literature.
RESULTS: A total of 762 publications on IoMT were identified, comprising 63.12% journal articles, 30.97% conference papers, and 5.91% review papers. The total publications rose from 1 in 2016 to 301 in 2023, indicating a 30,000% increase. Total citations reached 19,014, with an h-index of 171. The most prolific contributors were Mohsen M. Guizani, King Saud University, and India. Collaborations and funding, particularly from international agencies, were found to significantly drive research productivity. Keyword and cluster analyses revealed two dominant thematic areas: Smart Medical Diagnostics and Privacy-Driven Health Technologies. The literature further confirmed strong integration of machine learning, blockchain, sensor technologies, and cloud computing in IoMT applications.
CONCLUSION: This analysis consolidates fragmented IoMT research, providing a structured overview of its development, contributors, and thematic trajectories. The findings highlight the rapid growth, global collaborations, and integration of advanced technologies driving the field. By mapping benchmarks and research hotspots, the study offers valuable evidence to guide future investigations, interdisciplinary collaborations, and policy efforts aimed at strengthening secure and patient-centered digital health systems.},
}
RevDate: 2026-02-09
CmpDate: 2026-02-09
IoMT-Fog-Cloud-based AI frameworks for chronic disease diagnosis: updated comparative analysis with recent AI-IoMT models (2020-2025).
Frontiers in medical technology, 8:1748964.
Chronic diseases such as diabetes and cardiovascular disease require frequent monitoring and timely clinical feedback to prevent complications. Internet of Medical Things (IoMT) systems increasingly combine near-patient sensing with Fog and Cloud computing so that time-critical preprocessing and inference can run close to the patient while compute-intensive training and population-level analytics remain in the Cloud. This review synthesizes primary studies published between 2020 and 2025 that implement AI-enabled IoMT, with an emphasis on systems that report both diagnostic performance and network quality-of-service (QoS). Following PRISMA 2020, we screened database records and included 14 primary studies; we focus the joint performance-QoS synthesis on six IoMT-Fog-Cloud frameworks for diabetes and cardiovascular disease and compare them with two recent multi-disease AI-IoMT models (DACL and TasLA). Diabetes-oriented implementations commonly report accuracy around 95%-96% using explainable or ensemble deep learning, whereas some cardiovascular frameworks report >99% accuracy in controlled settings; we therefore discuss plausible sources of optimistic performance, including small datasets, class imbalance, curated benchmarks, and potential leakage/overfitting in simulation-based evaluations. Across IoMT-Fog-Cloud studies, placing preprocessing and/or inference at the Fog layer repeatedly reduces end-to-end latency for streaming biosignals, but multi-Fog provisioning can increase energy and power demands. To support more reproducible comparisons, we organize 14 extracted metrics into (i) diagnostic performance (accuracy, precision, recall, F1-score, sensitivity, specificity) and (ii) system/network QoS (latency, jitter, throughput, bandwidth utilization, processing/execution time, network usage, energy consumption, power consumption), and we translate the evidence into study-linked design recommendations for future deployments.
Additional Links: PMID-41657732
@article {pmid41657732,
year = {2026},
author = {Locharoenrat, K},
title = {IoMT-Fog-Cloud-based AI frameworks for chronic disease diagnosis: updated comparative analysis with recent AI-IoMT models (2020-2025).},
journal = {Frontiers in medical technology},
volume = {8},
number = {},
pages = {1748964},
pmid = {41657732},
issn = {2673-3129},
abstract = {Chronic diseases such as diabetes and cardiovascular disease require frequent monitoring and timely clinical feedback to prevent complications. Internet of Medical Things (IoMT) systems increasingly combine near-patient sensing with Fog and Cloud computing so that time-critical preprocessing and inference can run close to the patient while compute-intensive training and population-level analytics remain in the Cloud. This review synthesizes primary studies published between 2020 and 2025 that implement AI-enabled IoMT, with an emphasis on systems that report both diagnostic performance and network quality-of-service (QoS). Following PRISMA 2020, we screened database records and included 14 primary studies; we focus the joint performance-QoS synthesis on six IoMT-Fog-Cloud frameworks for diabetes and cardiovascular disease and compare them with two recent multi-disease AI-IoMT models (DACL and TasLA). Diabetes-oriented implementations commonly report accuracy around 95%-96% using explainable or ensemble deep learning, whereas some cardiovascular frameworks report >99% accuracy in controlled settings; we therefore discuss plausible sources of optimistic performance, including small datasets, class imbalance, curated benchmarks, and potential leakage/overfitting in simulation-based evaluations. Across IoMT-Fog-Cloud studies, placing preprocessing and/or inference at the Fog layer repeatedly reduces end-to-end latency for streaming biosignals, but multi-Fog provisioning can increase energy and power demands. To support more reproducible comparisons, we organize 14 extracted metrics into (i) diagnostic performance (accuracy, precision, recall, F1-score, sensitivity, specificity) and (ii) system/network QoS (latency, jitter, throughput, bandwidth utilization, processing/execution time, network usage, energy consumption, power consumption), and we translate the evidence into study-linked design recommendations for future deployments.},
}
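The diagnostic-performance metric group that this review extracts (accuracy, precision, recall, F1-score, sensitivity, specificity) all derive from binary confusion-matrix counts. A minimal sketch of those standard definitions, for reference — illustrative only, not taken from any of the reviewed frameworks:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix
    counts: true/false positives (tp, fp) and false/true negatives
    (fn, tn). Recall equals sensitivity for the positive class."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0        # = sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```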
RevDate: 2026-02-06
Energy and makespan optimised task mapping in fog enabled IoT application: a hybrid approach.
Scientific reports, 16(1):5210.
The Internet of Things (IoT) refers to billions of connected devices that share data through the Internet. However, the increasing volume of data generated by IoT devices makes remote cloud data centers inefficient for delay-sensitive applications. In this regard, fog computing, which brings computation closer to the data source, plays a significant role in addressing the above issue. However, resource constraints in fog computing demand an effective task-scheduling technique to handle the enormous volume of data. Many researchers have proposed a variety of heuristic and meta-heuristic approaches for effective scheduling; however, there is still scope for improvement. In this paper, we propose EMAPSO (energy makespan-aware PSO). The simultaneous minimization of makespan and energy is presented as a bi-objective optimization problem. The approach also considers the load-balancing factor while assigning a task to a VM in a fog/cloud environment. The proposed algorithm, EMAPSO, is compared to standard PSO, Modified PSO (MPSO), Bird swarm optimization (BSO), and the Bee Life Algorithm (BLA). The experimental results show that the proposed method outperforms the compared algorithms in terms of resource utilization, makespan, and energy consumption.
Additional Links: PMID-41535366
@article {pmid41535366,
year = {2026},
author = {Tripathy, N and Sahoo, S and Alghamdi, NS and Viriyasitavat, W and Dhiman, G},
title = {Energy and makespan optimised task mapping in fog enabled IoT application: a hybrid approach.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {5210},
pmid = {41535366},
issn = {2045-2322},
abstract = {The Internet of Things (IoT) refers to billions of connected devices that share data through the Internet. However, the increasing volume of data generated by IoT devices makes remote cloud data centers inefficient for delay-sensitive applications. In this regard, fog computing, which brings computation closer to the data source, plays a significant role in addressing the above issue. However, resource constraints in fog computing demand an effective task-scheduling technique to handle the enormous volume of data. Many researchers have proposed a variety of heuristic and meta-heuristic approaches for effective scheduling; however, there is still scope for improvement. In this paper, we propose EMAPSO (energy makespan-aware PSO). The simultaneous minimization of makespan and energy is presented as a bi-objective optimization problem. The approach also considers the load-balancing factor while assigning a task to a VM in a fog/cloud environment. The proposed algorithm, EMAPSO, is compared to standard PSO, Modified PSO (MPSO), Bird swarm optimization (BSO), and the Bee Life Algorithm (BLA). The experimental results show that the proposed method outperforms the compared algorithms in terms of resource utilization, makespan, and energy consumption.},
}
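The bi-objective formulation in the EMAPSO abstract (jointly minimizing makespan and energy) is commonly handled by scalarizing the two objectives into a single fitness value that a PSO particle's candidate mapping can be scored on. A hedged sketch of one such fitness function — a generic weighted sum, not EMAPSO's actual model; the parameters `task_len`, `vm_mips`, `vm_power`, and `w` are illustrative assumptions:

```python
def makespan_energy_fitness(assign, task_len, vm_mips, vm_power, w=0.5):
    """Score one candidate task->VM mapping as a weighted sum of
    makespan and total busy-time energy.
    assign[i]   : VM index chosen for task i
    task_len[i] : task length in million instructions (MI)
    vm_mips[j]  : processing speed of VM j (MIPS)
    vm_power[j] : power draw of VM j while busy (watts)
    w           : trade-off weight between the two objectives."""
    busy = [0.0] * len(vm_mips)                 # per-VM busy time (s)
    for i, vm in enumerate(assign):
        busy[vm] += task_len[i] / vm_mips[vm]
    makespan = max(busy)                        # completion time of slowest VM
    energy = sum(b * p for b, p in zip(busy, vm_power))   # joules
    return w * makespan + (1 - w) * energy      # lower is fitter
```

A PSO loop would evaluate this fitness for each particle's mapping and move particles toward the best-scoring mappings found so far; a load-balancing term, as EMAPSO considers, could be added as a third weighted component.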
RevDate: 2026-02-07
DT-aided resource allocation via generative adversarial imitation learning in complex cloud-edge-end scenarios.
Scientific reports pii:10.1038/s41598-026-38367-0 [Epub ahead of print].
Traditional DRL-based resource allocation for cloud-edge-end computing primarily depends on known state parameters and real-time feedback rewards when making decisions. The traditional model, which heavily relies on prior knowledge and real-time feedback of the scene, faces challenges in delivering effective services in complex scenarios. We propose a DT-aided Expert-driven Generative Adversarial Imitation Learning (E-GAIL) model that leverages imitation learning capability to jointly allocate multiple constrained resources. Firstly, we introduce a single-expert trajectory generation algorithm based on Actor-Critic and Noisynet by using the rich historical data provided in DT Networks. This idea can enhance the fidelity of the imitated expert trajectory by utilizing the critic to update the network iteratively. Secondly, we fuse different single-expert trajectories into a multi-expert trajectory to expand the coverage area. We also employ the Nash equilibrium to identify the optimal equilibrium solution and reduce the conflicts among different experts. Finally, the parameters of the generator and discriminator in E-GAIL are updated according to the respective gradients to fit the multi-expert trajectory during the training process. Once the task is uploaded, the E-GAIL Agent in the edge server can rapidly obtain the resource allocation policy even without prior knowledge or real-time reward feedback. The experimental results indicate that E-GAIL can obtain the best-fit expert trajectory in large-scale noisy environments.
Additional Links: PMID-41654653
@article {pmid41654653,
year = {2026},
author = {Zhang, X and Xin, M and Li, Y and Fu, Q},
title = {DT-aided resource allocation via generative adversarial imitation learning in complex cloud-edge-end scenarios.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-38367-0},
pmid = {41654653},
issn = {2045-2322},
support = {JSQB2023206S005//National Defense Basic Scientiffc Research Project/ ; No.:220XQD061//University of South China Doctoral Research Start-up Fund Project/ ; },
abstract = {Traditional DRL-based resource allocation for cloud-edge-end computing primarily depends on known state parameters and real-time feedback rewards when making decisions. The traditional model, which heavily relies on prior knowledge and real-time feedback of the scene, faces challenges in delivering effective services in complex scenarios. We propose a DT-aided Expert-driven Generative Adversarial Imitation Learning (E-GAIL) model that leverages imitation learning capability to jointly allocate multiple constrained resources. Firstly, we introduce a single-expert trajectory generation algorithm based on Actor-Critic and Noisynet by using the rich historical data provided in DT Networks. This idea can enhance the fidelity of the imitated expert trajectory by utilizing the critic to update the network iteratively. Secondly, we fuse different single-expert trajectories into a multi-expert trajectory to expand the coverage area. We also employ the Nash equilibrium to identify the optimal equilibrium solution and reduce the conflicts among different experts. Finally, the parameters of the generator and discriminator in E-GAIL are updated according to the respective gradients to fit the multi-expert trajectory during the training process. Once the task is uploaded, the E-GAIL Agent in the edge server can rapidly obtain the resource allocation policy even without prior knowledge or real-time reward feedback. The experimental results indicate that E-GAIL can obtain the best-fit expert trajectory in large-scale noisy environments.},
}
RevDate: 2026-02-07
Adaptive and intelligent customized deep Q-network for energy-efficient task offloading in mobile edge computing environments.
Scientific reports pii:10.1038/s41598-025-34765-y [Epub ahead of print].
The rapid expansion of edge-cloud infrastructures and latency-sensitive Internet of Things (IoT) applications has intensified the challenge of intelligent task offloading in dynamic and resource-constrained environments. This paper presents an Adaptive and Intelligent Customized Deep Q-Network (AICDQN), a novel reinforcement learning-based framework for real-time, priority-aware task scheduling in mobile edge computing systems. The proposed model formulates task offloading as a Markov Decision Process (MDP) and integrates a hybrid Gated Recurrent Unit-Long Short-Term Memory (GRU-LSTM) load prediction module to forecast workload fluctuations and task urgency trends. This foresight enables a Dynamic Dueling Double Deep Q-Network [Formula: see text] agent to make informed offloading decisions across local, edge, and cloud tiers. The system models compute nodes using priority-aware M/M/1, M/M/c and M/M/∞ queuing systems, enabling delay-sensitive and queue-aware decision-making. A dynamic priority scoring function integrates task urgency, deadline proximity, and node-level queue saturation, ensuring real-time tasks are prioritized effectively. Furthermore, an energy-aware scheduling policy proactively transitions underutilized servers into low-power states without compromising performance. Extensive simulations demonstrate that AICDQN achieves up to 33.39% reduction in delay, 57.74% improvement in energy efficiency, and 81.25% reduction in task drop rate compared with existing offloading algorithms, including Deep Deterministic Policy Gradient (DDPG), Distributed Dynamic Task Offloading (DDTO-DRL), Potential Game based Offloading Algorithm (PGOA), and the User-Level Online Offloading Framework (ULOOF). These results validate AICDQN as a scalable and adaptive solution for next-generation edge-cloud systems requiring efficient, intelligent, and energy-constrained task offloading.
Additional Links: PMID-41654577
@article {pmid41654577,
year = {2026},
author = {Anand, J and Karthikeyan, B},
title = {Adaptive and intelligent customized deep Q-network for energy-efficient task offloading in mobile edge computing environments.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-34765-y},
pmid = {41654577},
issn = {2045-2322},
abstract = {The rapid expansion of edge-cloud infrastructures and latency-sensitive Internet of Things (IoT) applications has intensified the challenge of intelligent task offloading in dynamic and resource-constrained environments. This paper presents an Adaptive and Intelligent Customized Deep Q-Network (AICDQN), a novel reinforcement learning-based framework for real-time, priority-aware task scheduling in mobile edge computing systems. The proposed model formulates task offloading as a Markov Decision Process (MDP) and integrates a hybrid Gated Recurrent Unit-Long Short-Term Memory (GRU-LSTM) load prediction module to forecast workload fluctuations and task urgency trends. This foresight enables a Dynamic Dueling Double Deep Q-Network [Formula: see text] agent to make informed offloading decisions across local, edge, and cloud tiers. The system models compute nodes using priority-aware M/M/1, M/M/c and M/M/∞ queuing systems, enabling delay-sensitive and queue-aware decision-making. A dynamic priority scoring function integrates task urgency, deadline proximity, and node-level queue saturation, ensuring real-time tasks are prioritized effectively. Furthermore, an energy-aware scheduling policy proactively transitions underutilized servers into low-power states without compromising performance. Extensive simulations demonstrate that AICDQN achieves up to 33.39% reduction in delay, 57.74% improvement in energy efficiency, and 81.25% reduction in task drop rate compared with existing offloading algorithms, including Deep Deterministic Policy Gradient (DDPG), Distributed Dynamic Task Offloading (DDTO-DRL), Potential Game based Offloading Algorithm (PGOA), and the User-Level Online Offloading Framework (ULOOF). These results validate AICDQN as a scalable and adaptive solution for next-generation edge-cloud systems requiring efficient, intelligent, and energy-constrained task offloading.},
}
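The queue-aware decision-making in the AICDQN abstract builds on standard steady-state results for Markovian queues. A minimal sketch of the plain (non-priority) M/M/1 quantities a queue-aware scheduler might consult — illustrative only; the paper's priority-aware extensions and M/M/c / M/M/∞ variants are not reproduced here:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 quantities for arrival rate lam and service
    rate mu (both in jobs per unit time); requires lam < mu.
    Returns (utilization, mean number in system, mean sojourn time),
    with L and W linked by Little's law: L = lam * W."""
    assert lam < mu, "queue is unstable when lam >= mu"
    rho = lam / mu           # server utilization
    L = rho / (1 - rho)      # expected number of jobs in the system
    W = 1.0 / (mu - lam)     # expected time a job spends in the system
    return rho, L, W
```

A scheduler can compare `W` across local, edge, and cloud tiers (each with its own `lam` and `mu`) to estimate where a delay-sensitive task will finish soonest.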
RevDate: 2026-02-07
CmpDate: 2026-02-07
A cloud-edge reference architecture for intertwining health digital domains.
Health informatics journal, 32(1):14604582251383803.
Objective: In the present work, LinkAll is introduced as a novel architectural model designed to enable real-time monitoring and cross-referential data analysis in remote monitoring systems across human, animal, and environmental health domains. LinkAll leverages Edge-Computing and Internet of Things principles to handle data collection, processing, and presentation from various sources. Methods: Two sibling systems were implemented to demonstrate its capability, one for monitoring urban greenery and the other for elderly home care. These systems were evaluated based on their ability to integrate with existing information systems, collect biophysical parameters, and ensure data cross-referencing. Results: Both systems demonstrate effective pluggability and cross-referenceability performances, meeting the stakeholders' requirements. LinkAll's ability to integrate diverse sensors and devices into existing infrastructures while providing real-time, machine-actionable insights, is also underscored. Conclusion: Pluggability, cross-referenceability, and compliance with FAIR principles make the architectural model introduced a robust solution for integrating human, animal, and environmental health monitoring systems, enhancing decision-making and contributing to One (Digital) Health's strategic goals.
Additional Links: PMID-41653444
@article {pmid41653444,
year = {2026},
author = {Tramontano, A and Tamburis, O and Perillo, G and Iaccarino, G and Benis, A and Magliulo, M},
title = {A cloud-edge reference architecture for intertwining health digital domains.},
journal = {Health informatics journal},
volume = {32},
number = {1},
pages = {14604582251383803},
doi = {10.1177/14604582251383803},
pmid = {41653444},
issn = {1741-2811},
mesh = {Humans ; *Cloud Computing ; },
abstract = {Objective: In the present work, LinkAll is introduced as a novel architectural model designed to enable real-time monitoring and cross-referential data analysis in remote monitoring systems across human, animal, and environmental health domains. LinkAll leverages Edge-Computing and Internet of Things principles to handle data collection, processing, and presentation from various sources. Methods: Two sibling systems were implemented to demonstrate its capability, one for monitoring urban greenery and the other for elderly home care. These systems were evaluated based on their ability to integrate with existing information systems, collect biophysical parameters, and ensure data cross-referencing. Results: Both systems demonstrate effective pluggability and cross-referenceability performances, meeting the stakeholders' requirements. LinkAll's ability to integrate diverse sensors and devices into existing infrastructures while providing real-time, machine-actionable insights, is also underscored. Conclusion: Pluggability, cross-referenceability, and compliance with FAIR principles make the architectural model introduced a robust solution for integrating human, animal, and environmental health monitoring systems, enhancing decision-making and contributing to One (Digital) Health's strategic goals.},
}
MeSH Terms: Humans; *Cloud Computing
RevDate: 2026-02-07
DFDD: A Cloud-Ready Tool for Distance-Guided Fully Dynamic Docking in Host-Guest Complexation.
Journal of chemical information and modeling [Epub ahead of print].
Fully dynamic sampling of host-guest inclusion remains difficult because conventional docking and conventional molecular dynamics simulations can sample inclusion, but crystal-like binding is typically stochastic and difficult to reproduce. Here, we introduce DFDD (Distance-Guided Fully Dynamic Docking), a cloud-ready implementation of the LB-PaCS-MD framework designed to capture inclusion processes via unbiased molecular dynamics in explicit solvent. DFDD automates system setup, parameter generation, iterative short-cycle MD sampling, and trajectory analysis within a single workflow that runs on Google Colab without any installation. Progress toward complexation is guided only by the host-guest center-of-mass distance, allowing force-free exploration of insertion pathways and enabling the recovery of both stable and transient binding modes. Using β-cyclodextrin as a representative host, DFDD reproduces experimentally observed inclusion geometries within minutes and reveals intermediate states along the insertion route. Optional coupling with pKaNET-Cloud enables pH-aware, stereochemically consistent ligand protonation states prior to simulation, supporting robust host-guest modeling. This Application Note provides a transparent and accessible platform for efficient host-guest complexation studies. The DFDD framework is publicly available at https://github.com/nyelidl/DFDD.
Additional Links: PMID-41653112
@article {pmid41653112,
year = {2026},
author = {Hengphasatporn, K and Duan, L and Harada, R and Shigeta, Y},
title = {DFDD: A Cloud-Ready Tool for Distance-Guided Fully Dynamic Docking in Host-Guest Complexation.},
journal = {Journal of chemical information and modeling},
volume = {},
number = {},
pages = {},
doi = {10.1021/acs.jcim.5c02852},
pmid = {41653112},
issn = {1549-960X},
}
RevDate: 2026-02-06
Undergraduate medical students' perceptions of an interactive and collaborative cloud-based learning strategy: survey at a single institution.
BMC medical education pii:10.1186/s12909-026-08640-x [Epub ahead of print].
Additional Links: PMID-41645196
Citation:
@article {pmid41645196,
year = {2026},
author = {Cortes, C and Jackman, TD and Dersch, AM and Taylor, TAH},
title = {Undergraduate medical students' perceptions of an interactive and collaborative cloud-based learning strategy: survey at a single institution.},
journal = {BMC medical education},
volume = {},
number = {},
pages = {},
doi = {10.1186/s12909-026-08640-x},
pmid = {41645196},
issn = {1472-6920},
}
RevDate: 2026-02-04
Bridging the implementation gap: Challenges and opportunities for integrating whole genome sequencing in tuberculosis surveillance in low-resource settings.
Diagnostic microbiology and infectious disease, 115(1):117282 pii:S0732-8893(26)00032-5 [Epub ahead of print].
INTRODUCTION: Tuberculosis (TB) remains a major global health concern, particularly in low-income countries where the impact is greater. The lack of proper surveillance tools in these countries is a significant impediment to effective TB control. Whole-genome sequencing (WGS) has successfully been integrated into routine TB programs in high-income countries and transformed disease surveillance by providing rapid, high-resolution transmission insights, drug resistance profiling, and outbreak detection. However, its uptake in resource-limited settings where TB burden is most prevalent remains limited.
METHODS: This review examines how WGS is currently being utilised for TB surveillance and highlights the main obstacles to its adoption in limited-resource settings as well as the strategies that could improve its uptake. A literature search was conducted in PubMed, Google Scholar, and the World Health Organisation (WHO) databases with keywords "whole genome sequencing," "tuberculosis," "surveillance," "transmission," and "drug resistance." Studies published between 2015 and 2025 were prioritised, with a focus on applications in high-burden settings.
RESULTS: Key challenges identified include infrastructural issues, whereby 78% of high-burden countries lack adequate sequencing facilities according to WHO 2023 data; financial barriers, with recurring costs surpassing $150 per sample in low-resource settings compared to $80 in high-income countries; and a shortage of trained personnel, with an average of only 2.3 bioinformaticians per African country. Other hurdles involve concerns over data sovereignty, weak regulatory frameworks, and ethical dilemmas surrounding privacy and equitable data usage, with only 31% of low-resource countries having genomic data policies. Nevertheless, promising innovations such as portable sequencing devices, which achieve sensitivities of up to 92%, and cloud-based platforms, which reduce computational needs by 70%, offer scalable opportunities for equitable integration. We also highlight partnership models that blend WHO technical guidance, Global Fund financing, and South-South collaborations, which could enhance sustainability.
CONCLUSION: To realise the full potential of WGS in TB-endemic regions, a coordinated approach that combines technical advancements with policy changes, ethical data governance, and sustained investment is needed. Tackling these challenges is essential in achieving equitable, genomics-informed TB control that aligns with global TB elimination goals.
Additional Links: PMID-41637877
Citation:
@article {pmid41637877,
year = {2026},
author = {Micheni, LN and Wambua, S and Magutah, K and Nkaiwuatei, J and Bazira, J and Sande, C},
title = {Bridging the implementation gap: Challenges and opportunities for integrating whole genome sequencing in tuberculosis surveillance in low-resource settings.},
journal = {Diagnostic microbiology and infectious disease},
volume = {115},
number = {1},
pages = {117282},
doi = {10.1016/j.diagmicrobio.2026.117282},
pmid = {41637877},
issn = {1879-0070},
}
RevDate: 2026-02-04
CmpDate: 2026-02-04
Streamline Protocol for Bulk-RNA Sequencing: From Data Extraction to Expression Analysis.
Current protocols, 6(2):e70304.
Next-generation RNA sequencing (RNA-seq) allows researchers to study gene expression across the whole genome. However, its analysis often needs powerful computers and advanced command-line skills, which can be challenging when resources are limited. This protocol provides a simple, start-to-finish RNA-seq data analysis method that is easy to follow, reproducible, and requires minimal local hardware. It uses free tools such as SRA Toolkit, FastQC, Trimmomatic, BWA/HISAT2, Samtools, and Subread, along with Python and R for further analysis using Google Colab. The process includes downloading raw data from NCBI GEO/SRA, checking data quality, trimming adapters and low-quality reads, aligning sequences to reference genomes, converting file formats, counting reads, normalizing to TPM, and creating visualizations such as heatmaps, bar plots, and volcano plots. Differential gene expression is analyzed with pyDESeq2, and functional enrichment is done using g:Profiler. Troubleshooting in RNA-seq generally involves configuring essential tools, resolving path and dependency issues, and ensuring proper handling of paired-end reads during analysis. By running the heavy computational steps on cloud platforms, this workflow makes RNA-seq analysis affordable and accessible to more researchers. © 2026 Wiley Periodicals LLC. Basic Protocol 1: Extracting and processing a high-throughput RNA-seq dataset with the command prompt and Windows Subsystem for Linux. Basic Protocol 2: Normalization and visualization of processed RNA-seq dataset with Google Colab and Python 3.
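The TPM normalization step mentioned in this protocol follows a generic formula (reads per kilobase, rescaled so each sample sums to one million); a minimal sketch, not code from the protocol itself:

```python
import numpy as np

def counts_to_tpm(counts, lengths_bp):
    """Convert raw read counts to transcripts per million (TPM).

    counts: array of shape (genes, samples); lengths_bp: gene lengths in bp.
    Step 1: divide counts by gene length in kilobases (reads per kilobase).
    Step 2: rescale each sample column so it sums to one million.
    """
    counts = np.asarray(counts, dtype=float)
    rpk = counts / (np.asarray(lengths_bp, dtype=float)[:, None] / 1e3)
    return rpk / rpk.sum(axis=0) * 1e6
```

Because each column is rescaled independently, TPM values are comparable across genes within a sample but should still be paired with a method such as pyDESeq2's own normalization for differential expression.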
Additional Links: PMID-41637157
Citation:
@article {pmid41637157,
year = {2026},
author = {Mohit, AA and Das, NR and Jain, A and Alam, NB and Mustafiz, A},
title = {Streamline Protocol for Bulk-RNA Sequencing: From Data Extraction to Expression Analysis.},
journal = {Current protocols},
volume = {6},
number = {2},
pages = {e70304},
doi = {10.1002/cpz1.70304},
pmid = {41637157},
issn = {2691-1299},
mesh = {Software ; *Sequence Analysis, RNA/methods ; *High-Throughput Nucleotide Sequencing/methods ; *Gene Expression Profiling/methods ; *RNA-Seq/methods ; Humans ; *Computational Biology/methods ; },
}
MeSH Terms:
Software
*Sequence Analysis, RNA/methods
*High-Throughput Nucleotide Sequencing/methods
*Gene Expression Profiling/methods
*RNA-Seq/methods
Humans
*Computational Biology/methods
RevDate: 2026-02-03
CmpDate: 2026-02-03
AI-driven routing and layered architectures for intelligent ICT in nanosensor networked systems.
iScience, 29(2):114626.
This review examines the emerging integration of nanosensor networks with modern information and communication technologies to address critical needs in healthcare, environmental monitoring, and smart infrastructure. It evaluates how machine learning and artificial intelligence techniques improve data processing, energy management, real-time communication, and scalable system coordination within nanosensor environments. The analysis compares major learning approaches, including supervised, unsupervised, reinforcement, and deep learning methods, and highlights their effectiveness in data routing, anomaly detection, security, and predictive maintenance. The review also assesses new system architectures based on edge computing, cloud federated models, and intelligent communication protocols, focusing on performance indicators such as latency, throughput, and energy efficiency. Key challenges involving computational load, data privacy, and system interoperability are identified, and potential solutions inspired by biological systems, interpretable models, and quantum-based learning are explored. Overall, this work provides a unified framework for advancing intelligent and resource-efficient nanosensor communication systems with broad societal impact.
Additional Links: PMID-41630924
Citation:
@article {pmid41630924,
year = {2026},
author = {Yousif Dafhalla, AK and Attia Gasmalla, TA and Filali, A and Osman Sid Ahmed, NM and Adam, T and Elobaid, ME and Chandra Bose Gopinath, S},
title = {AI-driven routing and layered architectures for intelligent ICT in nanosensor networked systems.},
journal = {iScience},
volume = {29},
number = {2},
pages = {114626},
pmid = {41630924},
issn = {2589-0042},
}
RevDate: 2026-02-02
Improvements on Scalable and Reproducible Cloud Implementation of Numerical Groundwater Modeling.
Ground water [Epub ahead of print].
In the past decade the groundwater modeling industry has trended toward more computationally intensive methods that necessarily require more parallel computing power due to the number of model runs required for these methods. Groundwater modeling that requires many parallel model runs is often limited by numerical burden or by the modeler's access to computational resources. Over the last 15 years the evolution of the cloud in accelerating groundwater model solutions has progressed; however, there are no apparent literature reviews of MODFLOW and PEST cloud implementation, specifically with regard to open-source and efficient scalable solutions. Here we describe infrastructure as code used to develop the architecture for running PEST++ in parallel on the cloud using Docker containers and open-source software to allow simple and repeatable cloud execution. The architecture utilizes Amazon Web Services and Terraform to facilitate cloud deployment and monitoring. A publicly available MODFLOW-6 model was used to evaluate parallel performance locally and in the cloud. Local model runs were found to have a linear 12 s increase in model run time per agent on a typical office computer, compared to the cloud implementation's 0.02 s per model, indicating near-perfect scaling even at up to 200 concurrent model runs. A consulting groundwater model was calibrated with the cloud infrastructure, which enabled acceleration of project completion at minimal cost.
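The reported per-agent overheads imply a large practical difference at scale; a quick back-of-the-envelope check using only the figures quoted in the abstract:

```python
def added_runtime_s(n_agents, overhead_s_per_agent):
    """Extra wall-clock time accumulated across concurrent agents,
    assuming the reported linear per-agent overhead."""
    return n_agents * overhead_s_per_agent

local = added_runtime_s(200, 12.0)   # office workstation: 12 s per agent
cloud = added_runtime_s(200, 0.02)   # cloud architecture: 0.02 s per model
# At 200 concurrent runs, the local setup adds roughly 40 minutes of
# overhead where the cloud deployment adds only about 4 seconds.
```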
Additional Links: PMID-41626743
Citation:
@article {pmid41626743,
year = {2026},
author = {Roth, M and Grove, J and Davis, A and Cornell, J},
title = {Improvements on Scalable and Reproducible Cloud Implementation of Numerical Groundwater Modeling.},
journal = {Ground water},
volume = {},
number = {},
pages = {},
doi = {10.1111/gwat.70052},
pmid = {41626743},
issn = {1745-6584},
}
RevDate: 2026-02-02
CmpDate: 2026-02-02
TropMol-Caipora: A Cloud-Based Web Tool to Predict Cruzain Inhibitors by Machine Learning.
ACS omega, 11(3):4167-4174.
Chagas disease (CD) affects approximately 8 million people and is classified as a high-priority neglected tropical disease by the WHO research and development actions. One promising avenue for drug development for CD is the inhibition of cruzain, a crucial cysteine protease of T. cruzi and one of the most extensively studied therapeutic targets. This study aims to construct a generic molecular screening model for public, online, and free use, based on pIC50 cruzain predictions using a Random Forest model. For this, a data set with approximately 8,000 compounds and 168 classes of descriptors was used, resulting in more than a million calculated descriptors. The model achieved R² = 0.91 (RMSE = 0.33) for the training set and R² = 0.72 (RMSE = 0.55) for the test set. In 5-fold cross-validation, performance remained consistent (R² = 0.72 ± 0.01; RMSE = 0.57 ± 0.01). Some relevant insights were also observed. (1) Aromaticity was shown to be a key factor in inhibitory activity: compounds with nitrogenous aromatic rings are more likely to be effective inhibitors, and aromatic systems in general show correlation and structural relevance for effective inhibition. (2) Halogenation may favor activity: the positive correlation suggests that introducing halogen atoms may improve the activity of the compounds. (3) Bicyclic or very rigid structures may decrease the inhibition efficiency of the tested candidates. (4) Molecular accessibility and charge influence activity. Available at: https://colab.research.google.com/drive/1hotsXPddbJ6E0_hysLT9AqsXL-74Na-z?usp=sharing.
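The modeling setup described here (a Random Forest regressor over molecular descriptors, evaluated with R² and RMSE) can be sketched with scikit-learn; the descriptor matrix and response below are synthetic toy stand-ins, not the paper's data set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Toy stand-in for a descriptor matrix: the real model uses 168 classes
# of descriptors over ~8,000 compounds; here, 20 random features suffice.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = 1.5 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=400)  # synthetic "pIC50"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))  # held-out R², as in the abstract
```

Feature importances from such a model (`model.feature_importances_`) are one route to structure-activity observations like the aromaticity and halogenation insights reported above.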
Additional Links: PMID-41626475
Citation:
@article {pmid41626475,
year = {2026},
author = {Doring, TH},
title = {TropMol-Caipora: A Cloud-Based Web Tool to Predict Cruzain Inhibitors by Machine Learning.},
journal = {ACS omega},
volume = {11},
number = {3},
pages = {4167-4174},
pmid = {41626475},
issn = {2470-1343},
}
RevDate: 2026-02-02
Towards intelligent edge computing through reinforcement learning based offloading in public edge as a service.
Scientific reports, 16(1):4355.
Internet of Things (IoT) deployments face increasing challenges in meeting strict latency and cost requirements while ensuring efficient resource utilization in distributed environments. Traditional offloading often overlooks the role of intermediate regional layers and mobility, resulting in inefficiencies in real-world deployments. To address this gap, we propose Public Edge as a Service (PEaaS) as an intermediate tier and develop RegionalEdgeSimPy, a Python simulator to model and evaluate this framework. It uses a Proximal Policy Optimization (PPO) scheduler that models mobility and considers multiple input parameters (e.g., network latency, cost, congestion, and energy). Tasks are first evaluated at the serving Wireless Access Point (WAP) for feasibility under utilization thresholds. This decision uses action masking to restrict invalid options, and a reward function that integrates latency, cost, congestion, and energy to guide optimal offloading. Simulations were conducted with 10 to 3000 devices in a 10 × 10 km smart-city area. Results show that PPO prioritizes Edge processing until over-utilization, after which workloads are offloaded to the nearest PEaaS, with Cloud used sparingly. On average, Edge achieves 75.8% utilization, PEaaS stabilizes near 52.9%, and Cloud remains under 1.2% when active. These findings demonstrate that PPO scheduling significantly reduces delay, cost, and task failures, providing improved scalability for mobility in IoT big data processing.
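The decision step described here (a feasibility mask over offloading targets plus a reward aggregating latency, cost, congestion, and energy) can be sketched as follows; the weights and function signatures are illustrative assumptions, not values from the paper.

```python
def reward(latency, cost, congestion, energy, w=(0.4, 0.3, 0.2, 0.1)):
    """Negative weighted sum: lower latency/cost/congestion/energy is
    better. The weights here are illustrative, not taken from the paper."""
    return -(w[0] * latency + w[1] * cost + w[2] * congestion + w[3] * energy)

def masked_argmax(scores, mask):
    """Action masking: choose the best-scoring action among the feasible
    ones (e.g., Edge is masked out once it exceeds its utilization cap)."""
    best, best_s = None, float("-inf")
    for action, (s, feasible) in enumerate(zip(scores, mask)):
        if feasible and s > best_s:
            best, best_s = action, s
    return best
```

In a PPO setup the mask is typically applied to the policy's action logits so infeasible targets receive zero probability; the helper above shows the same idea for a greedy choice over per-action scores.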
Additional Links: PMID-41622280
Citation:
@article {pmid41622280,
year = {2026},
author = {Jalal, A and Farooq, U and Rabbi, I and Badshah, A and Khan, A and Alam, MM and Su'ud, MM},
title = {Towards intelligent edge computing through reinforcement learning based offloading in public edge as a service.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {4355},
pmid = {41622280},
issn = {2045-2322},
}
RevDate: 2026-01-30
Efficient workflow scheduling in fog-cloud collaboration using a hybrid IPSO-GWO algorithm.
Scientific reports pii:10.1038/s41598-025-34462-w [Epub ahead of print].
With the rapid advancement of fog-cloud computing, task offloading and workflow scheduling have become pivotal in determining system performance and cost efficiency. To address the inherent complexity of this heterogeneous environment, a novel hybrid optimization strategy is introduced, integrating the Improved Particle Swarm Optimization (IPSO) algorithm, enhanced by a linearly decreasing inertia weight, with the Grey Wolf Optimization (GWO) algorithm. This hybridization is not merely a combination but a synergistic fusion, wherein the inertia weight adapts dynamically throughout the optimization process. Such adaptation ensures a balanced trade-off between exploration and exploitation, thereby mitigating the risk of premature convergence commonly observed in standard PSO. To assess the effectiveness of the proposed IPSO-GWO algorithm, extensive simulations were carried out using the FogWorkflowSim framework, an environment specifically developed to capture the complexities of workflow execution within fog-cloud architectures. Our evaluation encompasses a range of real-world scientific workflows, scaling up to 1000 tasks, and benchmarks the performance against PSO, GWO, IPSO, and the Gravitational Search Algorithm (GSA). The Analysis of Variance (ANOVA) is employed to substantiate the results. The experimental results reveal that the proposed IPSO-GWO approach consistently outperforms existing baseline methods across key performance metrics, including total cost, average energy consumption, and overall workflow execution time (makespan) in most scenarios, with average reductions of up to 26.14% in makespan, 37.73% in energy consumption, and 12.52% in total cost. Beyond algorithmic innovation, this study contributes to a deeper understanding of workflow optimization dynamics in distributed fog-cloud systems, paving the way for more intelligent and adaptive task scheduling mechanisms in future computing paradigms.
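The linearly decreasing inertia weight that distinguishes IPSO from standard PSO follows a widely used schedule; a sketch with conventional bounds (w_max = 0.9 and w_min = 0.4 are common defaults, not necessarily the paper's values):

```python
def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight:
        w(t) = w_max - (w_max - w_min) * t / t_max
    High inertia early favors exploration; low inertia late favors
    exploitation, countering premature convergence in standard PSO."""
    return w_max - (w_max - w_min) * t / t_max
```

In each velocity update the previous velocity is scaled by `inertia(t, t_max)` before the cognitive and social terms are added, so the swarm's momentum shrinks as the search matures.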
Additional Links: PMID-41617810
Citation:
@article {pmid41617810,
year = {2026},
author = {Awad, S and Gamal, M and El Salam, KA and Abdel-Kader, RF},
title = {Efficient workflow scheduling in fog-cloud collaboration using a hybrid IPSO-GWO algorithm.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-34462-w},
pmid = {41617810},
issn = {2045-2322},
}
RevDate: 2026-01-29
Employing AI tools to predict features for dental care use in the United States during the global respiratory illness outbreak.
Frontiers in public health, 13:1692540.
Additional Links: PMID-41607911
Citation:
@article {pmid41607911,
year = {2025},
author = {Zanwar, PP and Kodan-Ghadr, HR and Thirumalai, V and Ghaddar, S and Huang, SJ and Harkness, B and Rey, E and Shah, R and Kurelli, SR and Patel, JS and Calzoni, L and Dede Yildirim, E and Duran, DG},
title = {Employing AI tools to predict features for dental care use in the United States during the global respiratory illness outbreak.},
journal = {Frontiers in public health},
volume = {13},
number = {},
pages = {1692540},
pmid = {41607911},
issn = {2296-2565},
}
RevDate: 2026-01-28
Robot Object Detection and Tracking Based on Image-Point Cloud Instance Matching.
Sensors (Basel, Switzerland), 26(2): pii:s26020718.
Effectively fusing the rich semantic information from camera images with the high-precision geometric measurements provided by LiDAR point clouds is a key challenge in mobile robot environmental perception. To address this problem, this paper proposes a highly extensible instance-aware fusion framework designed to achieve efficient alignment and unified modeling of heterogeneous sensory data. The proposed approach adopts a modular processing pipeline. First, semantic instance masks are extracted from RGB images using an instance segmentation network, and a projection mechanism is employed to establish spatial correspondences between image pixels and LiDAR point cloud measurements. Subsequently, three-dimensional bounding boxes are reconstructed through point cloud clustering and geometric fitting, and a reprojection-based validation mechanism is introduced to ensure consistency across modalities. Building upon this representation, the system integrates a data association module with a Kalman filter-based state estimator to form a closed-loop multi-object tracking framework. Experimental results on the KITTI dataset demonstrate that the proposed system achieves strong 2D and 3D detection performance across different difficulty levels. In multi-object tracking evaluation, the method attains a MOTA score of 47.8 and an IDF1 score of 71.93, validating the stability of the association strategy and the continuity of object trajectories in complex scenes. Furthermore, real-world experiments on a mobile computing platform show an average end-to-end latency of only 173.9 ms, while ablation studies further confirm the effectiveness of individual system components. Overall, the proposed framework exhibits strong performance in terms of geometric reconstruction accuracy and tracking robustness, and its lightweight design and low latency satisfy the stringent requirements of practical robotic deployment.
Additional Links: PMID-41600511
@article {pmid41600511,
year = {2026},
author = {Wang, H and Zhu, R and Ye, Z and Li, Y},
title = {Robot Object Detection and Tracking Based on Image-Point Cloud Instance Matching.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {2},
pages = {},
doi = {10.3390/s26020718},
pmid = {41600511},
issn = {1424-8220},
support = {20252BAC240200//Natural Science Foundation Projects of the Jiangxi Provincial Department of Education/ ; 2022KYQD027//Doctoral Research Start-up Project of Jiangxi University of Water Resources and Electric Power/ ; Gan Jiao Yan Zi [2025] No. 5, "Pattern Recognition and Machine Learning"//Jiangxi Provincial Graduate Professional Degree Teaching Case Project/ ; NGYJG-2024-001//2024 Degree and Graduate Education Teaching Reform Research Project/ ; Nan Gong Jiao Zi [2023] No. 29 - "Fundamentals of Robotics"//First-Class Course of Jiangxi University of Water Resources and Electric Power/ ; },
abstract = {Effectively fusing the rich semantic information from camera images with the high-precision geometric measurements provided by LiDAR point clouds is a key challenge in mobile robot environmental perception. To address this problem, this paper proposes a highly extensible instance-aware fusion framework designed to achieve efficient alignment and unified modeling of heterogeneous sensory data. The proposed approach adopts a modular processing pipeline. First, semantic instance masks are extracted from RGB images using an instance segmentation network, and a projection mechanism is employed to establish spatial correspondences between image pixels and LiDAR point cloud measurements. Subsequently, three-dimensional bounding boxes are reconstructed through point cloud clustering and geometric fitting, and a reprojection-based validation mechanism is introduced to ensure consistency across modalities. Building upon this representation, the system integrates a data association module with a Kalman filter-based state estimator to form a closed-loop multi-object tracking framework. Experimental results on the KITTI dataset demonstrate that the proposed system achieves strong 2D and 3D detection performance across different difficulty levels. In multi-object tracking evaluation, the method attains a MOTA score of 47.8 and an IDF1 score of 71.93, validating the stability of the association strategy and the continuity of object trajectories in complex scenes. Furthermore, real-world experiments on a mobile computing platform show an average end-to-end latency of only 173.9 ms, while ablation studies further confirm the effectiveness of individual system components. Overall, the proposed framework exhibits strong performance in terms of geometric reconstruction accuracy and tracking robustness, and its lightweight design and low latency satisfy the stringent requirements of practical robotic deployment.},
}
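The Kalman filter-based state estimator mentioned in this record is, in its simplest form, the textbook constant-velocity filter; a 1D pure-Python sketch under assumed values (dt, process noise q, and measurement noise r are illustrative, not the paper's settings):

```python
def kalman_cv_step(x, P, z, dt=0.1, q=1e-3, r=0.25):
    """One predict+update step of a 1D constant-velocity Kalman filter.
    State x = [position, velocity]; P is the 2x2 covariance (list of lists);
    z is a noisy position measurement."""
    # Predict: x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + Q.
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with H = [1, 0]: the innovation is on position only.
    y = z - xp[0]
    S = Pp[0][0] + r
    K = [Pp[0][0] / S, Pp[1][0] / S]
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

# Track a target moving at 1.0 unit/s from position readings
# (noiseless measurements here, to keep the sketch deterministic).
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 51):
    x, P = kalman_cv_step(x, P, z=k * 0.1)
```

A full multi-object tracker would run one such filter per track, with the data association module assigning each detection to a track before the update step.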
RevDate: 2026-01-28
End-Edge-Cloud Collaborative Monitoring System with an Intelligent Multi-Parameter Sensor for Impact Anomaly Detection in GIL Pipelines.
Sensors (Basel, Switzerland), 26(2): pii:s26020606.
Gas-insulated transmission lines (GILs) are increasingly deployed in dense urban power networks, where complex construction activities may introduce external mechanical impacts and pose risks to pipeline structural integrity. However, existing GIL monitoring approaches mainly emphasize electrical and gas-state parameters, while lightweight solutions capable of rapidly detecting and localizing impact-induced structural anomalies remain limited. To address this gap, this paper proposes an intelligent end-edge-cloud monitoring system for impact anomaly detection in GIL pipelines. Numerical simulations are first conducted to analyze the dynamic response characteristics of the pipeline under impacts of varying magnitudes, orientations, and locations, revealing the relationship between impact scenarios and vibration mode evolution. An end-tier multi-parameter intelligent sensor is then developed, integrating triaxial acceleration and angular velocity measurement with embedded lightweight computing. Laboratory impact experiments are performed to acquire sensor data, which are used to train and validate a multi-class extreme gradient boosting (XGBoost) model deployed at the edge tier for accurate impact-location identification. Results show that, even with a single sensor positioned at the pipeline midpoint, fusing acceleration and angular velocity features enables reliable discrimination of impact regions. Finally, a lightweight cloud platform is implemented for visualizing structural responses and environmental parameters with downsampled edge-side data. The proposed system achieves rapid sensor-level anomaly detection, precise edge-level localization, and unified cloud-level monitoring, offering a low-cost and easily deployable solution for GIL structural health assessment.
Additional Links: PMID-41600405
@article {pmid41600405,
year = {2026},
author = {Li, Q and Zeng, K and Zhou, Y and Xie, X and Tang, G},
title = {End-Edge-Cloud Collaborative Monitoring System with an Intelligent Multi-Parameter Sensor for Impact Anomaly Detection in GIL Pipelines.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {2},
pages = {},
doi = {10.3390/s26020606},
pmid = {41600405},
issn = {1424-8220},
support = {5200-202417104A-1-1-ZN//Science and Technology Project of State Grid Company Headquarters/ ; },
abstract = {Gas-insulated transmission lines (GILs) are increasingly deployed in dense urban power networks, where complex construction activities may introduce external mechanical impacts and pose risks to pipeline structural integrity. However, existing GIL monitoring approaches mainly emphasize electrical and gas-state parameters, while lightweight solutions capable of rapidly detecting and localizing impact-induced structural anomalies remain limited. To address this gap, this paper proposes an intelligent end-edge-cloud monitoring system for impact anomaly detection in GIL pipelines. Numerical simulations are first conducted to analyze the dynamic response characteristics of the pipeline under impacts of varying magnitudes, orientations, and locations, revealing the relationship between impact scenarios and vibration mode evolution. An end-tier multi-parameter intelligent sensor is then developed, integrating triaxial acceleration and angular velocity measurement with embedded lightweight computing. Laboratory impact experiments are performed to acquire sensor data, which are used to train and validate a multi-class extreme gradient boosting (XGBoost) model deployed at the edge tier for accurate impact-location identification. Results show that, even with a single sensor positioned at the pipeline midpoint, fusing acceleration and angular velocity features enables reliable discrimination of impact regions. Finally, a lightweight cloud platform is implemented for visualizing structural responses and environmental parameters with downsampled edge-side data. The proposed system achieves rapid sensor-level anomaly detection, precise edge-level localization, and unified cloud-level monitoring, offering a low-cost and easily deployable solution for GIL structural health assessment.},
}
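As an illustration of fusing acceleration and angular-velocity measurements into one classifier input, per-axis statistics such as RMS and peak amplitude could be concatenated; this feature set is a hypothetical stand-in (the paper's exact features are not listed here, and the XGBoost model itself is omitted):

```python
import math

def impact_features(acc_xyz, gyro_xyz):
    """Fuse triaxial acceleration and angular velocity into one feature
    vector of per-axis (RMS, peak) statistics.
    acc_xyz / gyro_xyz: lists of three equal-length sample lists."""
    def rms(sig):
        return math.sqrt(sum(v * v for v in sig) / len(sig))
    feats = []
    for axes in (acc_xyz, gyro_xyz):
        for sig in axes:
            feats.append(rms(sig))                  # energy of the axis
            feats.append(max(abs(v) for v in sig))  # peak amplitude
    return feats

# Toy 4-sample window from a hypothetical mid-pipeline sensor.
acc = [[0.0, 1.0, -1.0, 0.5], [0.2, 0.1, -0.3, 0.0], [0.0, 0.0, 2.0, -2.0]]
gyro = [[0.1, -0.1, 0.0, 0.0], [0.0, 0.5, -0.5, 0.0], [1.0, -1.0, 0.0, 0.0]]
fv = impact_features(acc, gyro)  # 6 axes x 2 statistics = 12 features
```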
RevDate: 2026-01-28
A Novel Architecture for Mitigating Botnet Threats in AI-Powered IoT Environments.
Sensors (Basel, Switzerland), 26(2): pii:s26020572.
The rapid growth of Artificial Intelligence of Things (AIoT) environments in various sectors has introduced major security challenges, as these smart devices can be exploited by malicious users to form Botnets of Things (BoT). Limited computational resources and weak encryption mechanisms in such devices make them attractive targets for attacks like Distributed Denial of Service (DDoS), Man-in-the-Middle (MitM), and malware distribution. In this paper, we propose a novel multi-layered architecture to mitigate BoT threats in AIoT environments. The system leverages edge traffic inspection, sandboxing, and machine learning techniques to analyze, detect, and prevent suspicious behavior, while using centralized monitoring and response automation to ensure rapid mitigation. Experimental results demonstrate the architecture's necessity and its superiority over, or parity with, existing models, providing early detection of botnet activity, reduced false positives, improved forensic capabilities, and scalable protection for large-scale AIoT environments. Overall, this solution delivers a comprehensive, resilient, and proactive framework to protect AIoT assets from evolving cyber threats.
Additional Links: PMID-41600368
@article {pmid41600368,
year = {2026},
author = {Memos, VA and Stergiou, CL and Bermperis, AI and Plageras, AP and Psannis, KE},
title = {A Novel Architecture for Mitigating Botnet Threats in AI-Powered IoT Environments.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {2},
pages = {},
doi = {10.3390/s26020572},
pmid = {41600368},
issn = {1424-8220},
abstract = {The rapid growth of Artificial Intelligence of Things (AIoT) environments in various sectors has introduced major security challenges, as these smart devices can be exploited by malicious users to form Botnets of Things (BoT). Limited computational resources and weak encryption mechanisms in such devices make them attractive targets for attacks like Distributed Denial of Service (DDoS), Man-in-the-Middle (MitM), and malware distribution. In this paper, we propose a novel multi-layered architecture to mitigate BoT threats in AIoT environments. The system leverages edge traffic inspection, sandboxing, and machine learning techniques to analyze, detect, and prevent suspicious behavior, while using centralized monitoring and response automation to ensure rapid mitigation. Experimental results demonstrate the architecture's necessity and its superiority over, or parity with, existing models, providing early detection of botnet activity, reduced false positives, improved forensic capabilities, and scalable protection for large-scale AIoT environments. Overall, this solution delivers a comprehensive, resilient, and proactive framework to protect AIoT assets from evolving cyber threats.},
}
RevDate: 2026-01-28
A Hybrid CNN-LSTM Architecture for Seismic Event Detection Using High-Rate GNSS Velocity Time Series.
Sensors (Basel, Switzerland), 26(2): pii:s26020519.
Global Navigation Satellite Systems (GNSS) have become essential tools in geomatics engineering for precise positioning, cadastral surveys, topographic mapping, and deformation monitoring. Recent advances integrate GNSS with emerging technologies such as artificial intelligence (AI), machine learning (ML), cloud computing, and unmanned aerial systems (UAS), which have greatly improved accuracy, efficiency, and analytical capabilities in managing geospatial big data. In this study, we propose a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) architecture for seismic detection using high-rate (5 Hz) GNSS velocity time series. The model is trained on a large synthetic dataset together with real high-rate GNSS non-event data. Model performance was evaluated using real event and non-event data through an event-based approach. The results demonstrate that a hybrid deep-learning architecture can provide a reliable framework for seismic detection with high-rate GNSS velocity time series.
Additional Links: PMID-41600316
@article {pmid41600316,
year = {2026},
author = {Başar, D and Çelik, RN},
title = {A Hybrid CNN-LSTM Architecture for Seismic Event Detection Using High-Rate GNSS Velocity Time Series.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {2},
pages = {},
doi = {10.3390/s26020519},
pmid = {41600316},
issn = {1424-8220},
support = {41251//Istanbul Technical University/ ; },
abstract = {Global Navigation Satellite Systems (GNSS) have become essential tools in geomatics engineering for precise positioning, cadastral surveys, topographic mapping, and deformation monitoring. Recent advances integrate GNSS with emerging technologies such as artificial intelligence (AI), machine learning (ML), cloud computing, and unmanned aerial systems (UAS), which have greatly improved accuracy, efficiency, and analytical capabilities in managing geospatial big data. In this study, we propose a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) architecture for seismic detection using high-rate (5 Hz) GNSS velocity time series. The model is trained on a large synthetic dataset together with real high-rate GNSS non-event data. Model performance was evaluated using real event and non-event data through an event-based approach. The results demonstrate that a hybrid deep-learning architecture can provide a reliable framework for seismic detection with high-rate GNSS velocity time series.},
}
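For contrast with the learned CNN-LSTM detector above, a classical baseline for event detection on a velocity time series is the short-term/long-term average (STA/LTA) trigger; this is not the authors' method, just a minimal sketch with assumed window lengths and threshold:

```python
def sta_lta_trigger(series, sta_len=5, lta_len=20, threshold=3.0):
    """Return sample indices where the ratio of short-term to long-term
    mean squared amplitude exceeds the trigger threshold."""
    triggers = []
    for i in range(lta_len, len(series)):
        sta = sum(v * v for v in series[i - sta_len:i]) / sta_len
        lta = sum(v * v for v in series[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background with a high-amplitude burst starting at sample 30.
sig = [0.01] * 30 + [0.5, -0.6, 0.7, -0.5, 0.6] + [0.01] * 15
hits = sta_lta_trigger(sig)
```

The appeal of a learned detector over this baseline is robustness to noise whose amplitude rivals the event signal, where a fixed STA/LTA threshold degrades.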
RevDate: 2026-01-28
Mobile Network Softwarization: Technological Foundations and Impact on Improving Network Energy Efficiency.
Sensors (Basel, Switzerland), 26(2): pii:s26020503.
This paper provides a comprehensive overview of mobile network softwarization, emphasizing the technological foundations and its transformative impact on the energy efficiency of modern and future mobile networks. In the paper, a detailed analysis of communication concepts known as software-defined networking (SDN) and network function virtualization (NFV) is presented, with a description of their architectural principles, operational mechanisms, and the associated interfaces and management frameworks that enable programmability, virtualization, and centralized control in modern mobile networks. The study further explores the role of cloud computing, virtualization platforms, distributed SDN controllers, and resource orchestration systems, outlining how they collectively support mobile network scalability, automation, and service agility. To assess the maturity and evolution of mobile network softwarization, the paper reviews contemporary research directions, including SDN security, machine-learning-assisted traffic management, dynamic service function chaining, virtual network function (VNF) placement and migration, blockchain-based trust mechanisms, and artificial intelligence (AI)-enabled self-optimization. The analysis also evaluates the relationship between mobile network softwarization and energy consumption, presenting the main SDN- and NFV-based techniques that contribute to reducing mobile network power usage, such as traffic-aware control, rule placement optimization, end-host-aware strategies, VNF consolidation, and dynamic resource scaling. Findings indicate that although fifth-generation (5G) mobile network standalone deployments capable of fully exploiting softwarization remain limited, softwarized SDN/NFV-based architectures provide measurable benefits in reducing network operational costs and improving energy efficiency, especially when combined with AI-driven automation. 
The paper concludes that mobile network softwarization represents an essential enabler for sustainable 5G and future beyond-5G systems, while highlighting the need for continued research into scalable automation, interoperable architectures, and energy-efficient softwarized network designs.
Additional Links: PMID-41600299
@article {pmid41600299,
year = {2026},
author = {Lorincz, J and Kukuruzović, A and Begušić, D},
title = {Mobile Network Softwarization: Technological Foundations and Impact on Improving Network Energy Efficiency.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {2},
pages = {},
doi = {10.3390/s26020503},
pmid = {41600299},
issn = {1424-8220},
abstract = {This paper provides a comprehensive overview of mobile network softwarization, emphasizing the technological foundations and its transformative impact on the energy efficiency of modern and future mobile networks. In the paper, a detailed analysis of communication concepts known as software-defined networking (SDN) and network function virtualization (NFV) is presented, with a description of their architectural principles, operational mechanisms, and the associated interfaces and management frameworks that enable programmability, virtualization, and centralized control in modern mobile networks. The study further explores the role of cloud computing, virtualization platforms, distributed SDN controllers, and resource orchestration systems, outlining how they collectively support mobile network scalability, automation, and service agility. To assess the maturity and evolution of mobile network softwarization, the paper reviews contemporary research directions, including SDN security, machine-learning-assisted traffic management, dynamic service function chaining, virtual network function (VNF) placement and migration, blockchain-based trust mechanisms, and artificial intelligence (AI)-enabled self-optimization. The analysis also evaluates the relationship between mobile network softwarization and energy consumption, presenting the main SDN- and NFV-based techniques that contribute to reducing mobile network power usage, such as traffic-aware control, rule placement optimization, end-host-aware strategies, VNF consolidation, and dynamic resource scaling. Findings indicate that although fifth-generation (5G) mobile network standalone deployments capable of fully exploiting softwarization remain limited, softwarized SDN/NFV-based architectures provide measurable benefits in reducing network operational costs and improving energy efficiency, especially when combined with AI-driven automation. 
The paper concludes that mobile network softwarization represents an essential enabler for sustainable 5G and future beyond-5G systems, while highlighting the need for continued research into scalable automation, interoperable architectures, and energy-efficient softwarized network designs.},
}
RevDate: 2026-01-28
CmpDate: 2026-01-28
IDeS + TRIZ: Sustainability Applied to DfAM for Polymer-Based Automotive Components.
Polymers, 18(2): pii:polym18020239.
This study aims to gather a sustainable understanding of additive manufacturing and other Manufacturing 4.0 approaches, such as horizontal and vertical integration and cloud computing techniques, with a focus on industrial applications. The DfAM will apply 4.0 tools to assess product feasibility and execution, with CAE-FEM analysis and CAM. This publication focuses on the redesign of a vehicle suspension arm. The main objective is to apply innovative design techniques that optimize component performance while minimizing cost and time. The IDeS method and TRIZ methodology were used, resulting in a composite element, aiming to make the FDM-sourced process a viable option, achieving a weight reduction of more than 80%, lower material consumption and, hence, lower vehicle energy consumption. The part obtained is holistically sustainable, as it was obtained by reducing the overall labor used and material/scrap generated, and the IDeS data sharing minimized rework and optimized the overall production time.
Additional Links: PMID-41599535
@article {pmid41599535,
year = {2026},
author = {Leon-Cardenas, C and Donnici, G and Liverani, A and Frizziero, L},
title = {IDeS + TRIZ: Sustainability Applied to DfAM for Polymer-Based Automotive Components.},
journal = {Polymers},
volume = {18},
number = {2},
pages = {},
doi = {10.3390/polym18020239},
pmid = {41599535},
issn = {2073-4360},
abstract = {This study aims to gather a sustainable understanding of additive manufacturing and other Manufacturing 4.0 approaches, such as horizontal and vertical integration and cloud computing techniques, with a focus on industrial applications. The DfAM will apply 4.0 tools to assess product feasibility and execution, with CAE-FEM analysis and CAM. This publication focuses on the redesign of a vehicle suspension arm. The main objective is to apply innovative design techniques that optimize component performance while minimizing cost and time. The IDeS method and TRIZ methodology were used, resulting in a composite element, aiming to make the FDM-sourced process a viable option, achieving a weight reduction of more than 80%, lower material consumption and, hence, lower vehicle energy consumption. The part obtained is holistically sustainable, as it was obtained by reducing the overall labor used and material/scrap generated, and the IDeS data sharing minimized rework and optimized the overall production time.},
}
RevDate: 2026-01-28
CmpDate: 2026-01-28
DS-CKDSE: A Dual-Server Conjunctive Keyword Dynamic Searchable Encryption with Forward and Backward Security.
Entropy (Basel, Switzerland), 28(1): pii:e28010025.
Dynamic Searchable Encryption (DSE) is essential for enabling confidential search operations over encrypted data in cloud computing. However, all existing single-server DSE schemes are vulnerable to Keyword Pair Result Pattern (KPRP) leakage and fail to simultaneously achieve forward and backward security. To address these challenges, this paper proposes a conjunctive keyword DSE scheme based on a dual-server architecture (DS-CKDSE). By integrating a full binary tree with an Indistinguishable Bloom Filter (IBF), the proposed scheme adopts a secure index: the leaf nodes store the keywords and the associated file identifiers, while the information of non-leaf nodes is encoded within the IBF. A random state update mechanism, a dual-state array for each keyword, and a timestamp trapdoor design jointly enable robust forward and backward security while supporting efficient conjunctive queries. The dual-server architecture mitigates KPRP leakage by separating secure index storage from trapdoor verification. The security analysis shows that the new scheme satisfies adaptive security under a defined leakage function. Finally, the performance of the proposed scheme is evaluated through experiments, and the results demonstrate that the new scheme enjoys high efficiency in both update and search operations.
Additional Links: PMID-41593932
@article {pmid41593932,
year = {2025},
author = {Sun, H and Liu, Y and Zhang, Y and Li, C},
title = {DS-CKDSE: A Dual-Server Conjunctive Keyword Dynamic Searchable Encryption with Forward and Backward Security.},
journal = {Entropy (Basel, Switzerland)},
volume = {28},
number = {1},
pages = {},
doi = {10.3390/e28010025},
pmid = {41593932},
issn = {1099-4300},
support = {252102210177//The Science and Technology Research Project of Henan Province/ ; 202308410421//The CSC Visiting Fellow Scholarship/ ; },
abstract = {Dynamic Searchable Encryption (DSE) is essential for enabling confidential search operations over encrypted data in cloud computing. However, all existing single-server DSE schemes are vulnerable to Keyword Pair Result Pattern (KPRP) leakage and fail to simultaneously achieve forward and backward security. To address these challenges, this paper proposes a conjunctive keyword DSE scheme based on a dual-server architecture (DS-CKDSE). By integrating a full binary tree with an Indistinguishable Bloom Filter (IBF), the proposed scheme adopts a secure index: the leaf nodes store the keywords and the associated file identifiers, while the information of non-leaf nodes is encoded within the IBF. A random state update mechanism, a dual-state array for each keyword, and a timestamp trapdoor design jointly enable robust forward and backward security while supporting efficient conjunctive queries. The dual-server architecture mitigates KPRP leakage by separating secure index storage from trapdoor verification. The security analysis shows that the new scheme satisfies adaptive security under a defined leakage function. Finally, the performance of the proposed scheme is evaluated through experiments, and the results demonstrate that the new scheme enjoys high efficiency in both update and search operations.},
}
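The Indistinguishable Bloom Filter used in DS-CKDSE extends the ordinary Bloom filter; as background, a plain (non-indistinguishable) Bloom filter can be sketched as follows (the array size, hash count, and hash construction are illustrative, and the IBF's cell-hiding structure is not shown):

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k hash positions per item over an m-cell array.
    Membership tests may yield false positives, never false negatives."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def might_contain(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for kw in ("cloud", "encryption", "keyword"):
    bf.add(kw)
```

In the paper's construction, each non-leaf index node carries such a filter over the keywords of its subtree, so conjunctive queries can prune whole branches; the "indistinguishable" variant additionally hides which cells encode real data from the servers.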
RevDate: 2026-01-28
CmpDate: 2026-01-26
Enhancing healthcare outcome with scalable processing and predictive analytics via cloud healthcare API.
Frontiers in digital health, 7:1687131.
This systematic literature review investigates the Google Cloud Healthcare API's role in transforming healthcare delivery through advanced analytics, machine learning, and cloud-based solutions. The study examines current features of cloud-based healthcare platforms in managing heterogeneous healthcare data formats, analyzes the effectiveness of cloud solutions in enhancing clinical outcomes, and compares Google Cloud Healthcare API with alternative platforms. The findings reveal that Google Cloud Healthcare API demonstrates notable advantages through its fully managed, serverless architecture, native support for healthcare standards (e.g., FHIR, HL7v2, DICOM), and seamless integration with advanced AI/ML services. Cloud-based predictive analytics platforms have proven effective in reducing hospital readmissions, addressing physician burnout, and enabling scalable telemedicine solutions. However, significant challenges persist, including data privacy concerns, regulatory compliance complexities, infrastructure dependencies, and potential vendor lock-in risks. The research demonstrates that healthcare organizations implementing comprehensive cloud-based solutions achieve measurable improvements in patient outcomes, operational efficiency, and care delivery models. While technical challenges around latency in medical imaging and interoperability remain, the evidence strongly supports cloud adoption for healthcare transformation, provided organizations address security, compliance, and implementation challenges through strategic planning and comprehensive change management approaches.
Additional Links: PMID-41586204
@article {pmid41586204,
year = {2025},
author = {Salehi, SS and Saadatfar, H and Oyelere, SS and Hussain, S and Hassannataj Joloudari, J and Taheri Ledari, M and Arslan, E and Barzegar, B},
title = {Enhancing healthcare outcome with scalable processing and predictive analytics via cloud healthcare API.},
journal = {Frontiers in digital health},
volume = {7},
number = {},
pages = {1687131},
pmid = {41586204},
issn = {2673-253X},
abstract = {This systematic literature review investigates the Google Cloud Healthcare API's role in transforming healthcare delivery through advanced analytics, machine learning, and cloud-based solutions. The study examines current features of cloud-based healthcare platforms in managing heterogeneous healthcare data formats, analyzes the effectiveness of cloud solutions in enhancing clinical outcomes, and compares Google Cloud Healthcare API with alternative platforms. The findings reveal that Google Cloud Healthcare API demonstrates notable advantages through its fully managed, serverless architecture, native support for healthcare standards (e.g., FHIR, HL7v2, DICOM), and seamless integration with advanced AI/ML services. Cloud-based predictive analytics platforms have proven effective in reducing hospital readmissions, addressing physician burnout, and enabling scalable telemedicine solutions. However, significant challenges persist, including data privacy concerns, regulatory compliance complexities, infrastructure dependencies, and potential vendor lock-in risks. The research demonstrates that healthcare organizations implementing comprehensive cloud-based solutions achieve measurable improvements in patient outcomes, operational efficiency, and care delivery models. While technical challenges around latency in medical imaging and interoperability remain, the evidence strongly supports cloud adoption for healthcare transformation, provided organizations address security, compliance, and implementation challenges through strategic planning and comprehensive change management approaches.},
}
RevDate: 2026-01-28
CmpDate: 2026-01-26
River plastic hotspot detection from space.
iScience, 29(2):114570.
Plastic pollution threatens terrestrial and aquatic ecosystems, and rivers play a central role in transporting and retaining plastics across landscapes. Effective mitigation requires scalable methods to identify riverine plastic accumulation hotspots. Here, we present a semi-automated, cloud-based pipeline that integrates satellite remote sensing and machine learning to detect river plastic hotspots. High-resolution PlanetScope imagery is used to annotate training regions, which are transferred to Sentinel-2 multispectral data to train Random Forest classifiers within Google Earth Engine. The approach is evaluated across three contrasting river systems, the Citarum (Indonesia), Motagua (Guatemala), and Odaw (Ghana), to assess transferability under diverse environmental conditions. Intra-river transfer achieves up to 99.5% accuracy, while optimized inter-river transfer yields a plastic F1-score of 79%, outperforming previously reported results of 69%. By providing an open-access Google Earth Engine application, this work enables reproducible, large-scale monitoring of riverine plastic pollution and supports the development of global, satellite-based assessment strategies.
Additional Links: PMID-41585480
@article {pmid41585480,
year = {2026},
author = {Pérez-García, Á and Amanda, G and López, JF and Rußwurm, M and van Emmerik, THM},
title = {River plastic hotspot detection from space.},
journal = {iScience},
volume = {29},
number = {2},
pages = {114570},
pmid = {41585480},
issn = {2589-0042},
abstract = {Plastic pollution threatens terrestrial and aquatic ecosystems, and rivers play a central role in transporting and retaining plastics across landscapes. Effective mitigation requires scalable methods to identify riverine plastic accumulation hotspots. Here, we present a semi-automated, cloud-based pipeline that integrates satellite remote sensing and machine learning to detect river plastic hotspots. High-resolution PlanetScope imagery is used to annotate training regions, which are transferred to Sentinel-2 multispectral data to train Random Forest classifiers within Google Earth Engine. The approach is evaluated across three contrasting river systems, the Citarum (Indonesia), Motagua (Guatemala), and Odaw (Ghana), to assess transferability under diverse environmental conditions. Intra-river transfer achieves up to 99.5% accuracy, while optimized inter-river transfer yields a plastic F1-score of 79%, outperforming previously reported results of 69%. By providing an open-access Google Earth Engine application, this work enables reproducible, large-scale monitoring of riverine plastic pollution and supports the development of global, satellite-based assessment strategies.},
}
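The transfer scores above can be read through the standard F1 definition. A minimal sketch, with confusion counts invented for illustration (not taken from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

# invented counts whose F1 lands near the reported 79%
p, r, f1 = precision_recall_f1(tp=790, fp=210, fn=210)  # f1 ≈ 0.79
```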
RevDate: 2026-01-22
Frag'n'Flow: automated workflow for large-scale quantitative proteomics in high performance computing environments.
BMC bioinformatics, 27(1):18.
BACKGROUND: Analysing large-scale mass spectrometry-based complex proteomics datasets often overwhelms desktop computational resources and requires manual configuration for analysis. While FragPipe delivers rapid peptide identification across diverse sample preparation and acquisition modes (DDA, DIA, TMT), it remains challenging to deploy at scale.
RESULTS: We introduce Frag’n’Flow, a Nextflow‐based pipeline that encapsulates FragPipe, automates input manifest and workflow generation, manages tool dependencies and includes downstream data analysis options to enable reproducible, high‐performance analyses on HPC, cloud, and cluster environments. Benchmarking against other workflow-based solutions shows that our pipeline maintains quantitative accuracy and cuts runtime nearly in half on a typical DIA dataset of ~58 GB, while alleviating memory and I/O bottlenecks. We validate Frag’n’Flow results across three representative datasets, label-free DDA, DIA, and TMT, successfully recapitulating published biological signatures with minimal user intervention.
CONCLUSIONS: By combining the sensitivity and speed of FragPipe with Nextflow’s orchestration, Frag’n’Flow enables the analysis of large‐scale proteomics data, empowering the scientific community, without extensive computational expertise, to extract new insights from existing MS datasets. Frag’n’Flow is available at: https://github.com/ronalabrcns/FragNFlow.
GRAPHICAL ABSTRACT: [Image: see text]
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-025-06305-y.
Additional Links: PMID-41486154
@article {pmid41486154,
year = {2026},
author = {Szepesi-Nagy, I and Borosta, R and Szabo, Z and Tusnady, GE and Pongor, LS and Rona, G},
title = {Frag'n'Flow: automated workflow for large-scale quantitative proteomics in high performance computing environments.},
journal = {BMC bioinformatics},
volume = {27},
number = {1},
pages = {18},
pmid = {41486154},
issn = {1471-2105},
support = {EKÖP-2024-124//Nemzeti Kutatási, Fejlesztési és Innovaciós Alap/ ; K-146314//Nemzeti Kutatási Fejlesztési és Innovációs Hivatal/ ; BO/00697/23//Magyar Tudományos Akadémia/ ; LP2023-15/2023//Magyar Tudományos Akadémia/ ; IG5670-2024//EMBO/ ; KSZF-143/2023//Hungarian Research Network/ ; },
abstract = {BACKGROUND: Analysing large-scale mass spectrometry-based complex proteomics datasets often overwhelms desktop computational resources and requires manual configuration for analysis. While FragPipe delivers rapid peptide identification across diverse sample preparation and acquisition modes (DDA, DIA, TMT), it remains challenging to deploy at scale.
RESULTS: We introduce Frag’n’Flow, a Nextflow‐based pipeline that encapsulates FragPipe, automates input manifest and workflow generation, manages tool dependencies and includes downstream data analysis options to enable reproducible, high‐performance analyses on HPC, cloud, and cluster environments. Benchmarking against other workflow-based solutions shows that our pipeline maintains quantitative accuracy and cuts runtime nearly in half on a typical DIA dataset of ~58 GB, while alleviating memory and I/O bottlenecks. We validate Frag’n’Flow results across three representative datasets, label-free DDA, DIA, and TMT, successfully recapitulating published biological signatures with minimal user intervention.
CONCLUSIONS: By combining the sensitivity and speed of FragPipe with Nextflow’s orchestration, Frag’n’Flow enables the analysis of large‐scale proteomics data, empowering the scientific community, without extensive computational expertise, to extract new insights from existing MS datasets. Frag’n’Flow is available at: https://github.com/ronalabrcns/FragNFlow.
GRAPHICAL ABSTRACT: [Image: see text]
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-025-06305-y.},
}
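Frag'n'Flow automates input manifest generation for FragPipe. A minimal sketch of what such a generator might look like, assuming FragPipe's tab-separated four-column manifest layout (path, experiment, bioreplicate, acquisition mode); the file paths are hypothetical:

```python
import csv
import io

def build_manifest(runs):
    """Emit a tab-separated run manifest with columns
    (path, experiment, bioreplicate, acquisition mode).
    The 4-column TSV layout is an assumption modeled on FragPipe
    manifest files, not Frag'n'Flow's actual code."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    for run in runs:
        writer.writerow(run)
    return buf.getvalue()

manifest = build_manifest([
    ("/data/sample_A1.raw", "control", 1, "DIA"),
    ("/data/sample_B1.raw", "treated", 1, "DIA"),
])
```

In a Nextflow setting, a generator like this would run once per batch so that downstream FragPipe processes receive a consistent manifest without manual editing.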
RevDate: 2026-01-22
SAIGE-GPU - Accelerating Genome- and Phenome-Wide Association Studies using GPUs.
Bioinformatics (Oxford, England) pii:8438945 [Epub ahead of print].
MOTIVATION: Genome-wide association studies (GWAS) at biobank scale are computationally intensive, especially for admixed populations requiring robust statistical models. SAIGE is a widely used method for generalized linear mixed-model GWAS but is limited by its CPU-based implementation, making phenome-wide association studies impractical for many research groups.
RESULTS: We developed SAIGE-GPU, a GPU-accelerated version of SAIGE that replaces CPU-intensive matrix operations with GPU-optimized kernels. The core innovation is distributing genetic relationship matrix calculations across GPUs and communication layers. Applied to 2,068 phenotypes from 635,969 participants in the Million Veteran Program (MVP), including diverse and admixed populations, SAIGE-GPU achieved a 5-fold speedup in mixed model fitting on supercomputing infrastructure and cloud platforms. We further optimized the variant association testing step through multi-core and multi-trait parallelization. Deployed on Google Cloud Platform and Azure, the method provided substantial cost and time savings.
AVAILABILITY: Source code and binaries are available for download at https://github.com/saigegit/SAIGE/tree/SAIGE-GPU-1.3.3. A code snapshot is archived at Zenodo for reproducibility (DOI: 10.5281/zenodo.17642591). SAIGE-GPU is available in a containerized format for use across HPC and cloud environments and is implemented in R/C++ and runs on Linux systems.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Additional Links: PMID-41572430
@article {pmid41572430,
year = {2026},
author = {Rodriguez, A and Kim, Y and Nandi, TN and Keat, K and Kumar, R and Conery, M and Bhukar, R and Liu, M and Hessington, J and Maheshwari, K and , and Begoli, E and Tourassi, G and Muralidhar, S and Natarajan, P and Voight, BF and Cho, K and Gaziano, JM and Damrauer, SM and Liao, KP and Zhou, W and Huffman, JE and Verma, A and Madduri, RK},
title = {SAIGE-GPU - Accelerating Genome- and Phenome-Wide Association Studies using GPUs.},
journal = {Bioinformatics (Oxford, England)},
volume = {},
number = {},
pages = {},
doi = {10.1093/bioinformatics/btag032},
pmid = {41572430},
issn = {1367-4811},
abstract = {MOTIVATION: Genome-wide association studies (GWAS) at biobank scale are computationally intensive, especially for admixed populations requiring robust statistical models. SAIGE is a widely used method for generalized linear mixed-model GWAS but is limited by its CPU-based implementation, making phenome-wide association studies impractical for many research groups.
RESULTS: We developed SAIGE-GPU, a GPU-accelerated version of SAIGE that replaces CPU-intensive matrix operations with GPU-optimized kernels. The core innovation is distributing genetic relationship matrix calculations across GPUs and communication layers. Applied to 2,068 phenotypes from 635,969 participants in the Million Veteran Program (MVP), including diverse and admixed populations, SAIGE-GPU achieved a 5-fold speedup in mixed model fitting on supercomputing infrastructure and cloud platforms. We further optimized the variant association testing step through multi-core and multi-trait parallelization. Deployed on Google Cloud Platform and Azure, the method provided substantial cost and time savings.
AVAILABILITY: Source code and binaries are available for download at https://github.com/saigegit/SAIGE/tree/SAIGE-GPU-1.3.3. A code snapshot is archived at Zenodo for reproducibility (DOI: 10.5281/zenodo.17642591). SAIGE-GPU is available in a containerized format for use across HPC and cloud environments and is implemented in R/C++ and runs on Linux systems.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.},
}
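The genetic relationship matrix (GRM) whose computation SAIGE-GPU distributes across GPUs is, in its textbook form, Z Z^T / M over column-standardized genotypes. A pure-Python sketch of that generic computation (not SAIGE's actual kernel, and with toy data):

```python
from statistics import mean, pstdev

def grm(genotypes):
    """Textbook genetic relationship matrix from a samples x variants
    matrix of 0/1/2 allele counts: standardize each variant column,
    then GRM = Z Z^T / M. Monomorphic variants (zero variance) must be
    removed beforehand to avoid division by zero."""
    n, m = len(genotypes), len(genotypes[0])
    cols = [tuple(row[j] for row in genotypes) for j in range(m)]
    stats = [(mean(c), pstdev(c)) for c in cols]
    z = [[(row[j] - stats[j][0]) / stats[j][1] for j in range(m)]
         for row in genotypes]
    return [[sum(z[i][k] * z[j][k] for k in range(m)) / m
             for j in range(n)] for i in range(n)]

# three samples, three toy variants
G = grm([[0, 1, 2], [2, 0, 1], [1, 2, 0]])
```

On a GPU this triple loop becomes one large matrix multiply, which is why distributing it across devices yields the speedups reported above.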
RevDate: 2026-01-22
Artificial intelligence-enabled pediatric radiology in low-resource settings: addressing resource constraints in the African healthcare system.
Pediatric radiology [Epub ahead of print].
Artificial intelligence (AI) holds immense promise in guiding clinical decision making in pediatric radiology, but its implementation in resource-constrained healthcare systems is limited by several significant challenges. The common AI methods, specifically deep learning models, used for image synthesis, reconstruction and segmentation require high-performance computers (HPC) and large memory capacities, which are often unavailable in low- and middle-income countries, especially in Sub-Saharan Africa. Long reconstruction times, inadequate hardware, and reliance on expensive commercial software further hinder adoption. These issues are compounded by the scarcity of annotated pediatric datasets, variability in imaging protocols, and limited data-sharing infrastructure, all of which widen the AI divide, particularly in pediatric imaging. Even when advanced AI models are developed, deploying them into clinical workflows remains difficult due to poor integration with existing picture archiving and communication systems (PACS) and the limited internet infrastructure for cloud-based solutions and data storage. Addressing these barriers will require intentional efforts to provide affordable high-performance computing resources, open-source pediatric datasets, federated learning approaches, and seamless workflow integration backed by robust region-specific AI regulations. This review sheds light on these barriers and highlights opportunities for AI-enabled solutions to become routine in pediatric radiology on the African continent.
Additional Links: PMID-41569331
@article {pmid41569331,
year = {2026},
author = {Nour, AS and Raymond, C and Zewdneh, D and Anazodo, U},
title = {Artificial intelligence-enabled pediatric radiology in low-resource settings: addressing resource constraints in the African healthcare system.},
journal = {Pediatric radiology},
volume = {},
number = {},
pages = {},
pmid = {41569331},
issn = {1432-1998},
abstract = {Artificial intelligence (AI) holds immense promise in guiding clinical decision making in pediatric radiology, but its implementation in resource-constrained healthcare systems is limited by several significant challenges. The common AI methods, specifically deep learning models, used for image synthesis, reconstruction and segmentation require high-performance computers (HPC) and large memory capacities, which are often unavailable in low- and middle-income countries, especially in Sub-Saharan Africa. Long reconstruction times, inadequate hardware, and reliance on expensive commercial software further hinder adoption. These issues are compounded by the scarcity of annotated pediatric datasets, variability in imaging protocols, and limited data-sharing infrastructure, all of which widen the AI divide, particularly in pediatric imaging. Even when advanced AI models are developed, deploying them into clinical workflows remains difficult due to poor integration with existing picture archiving and communication systems (PACS) and the limited internet infrastructure for cloud-based solutions and data storage. Addressing these barriers will require intentional efforts to provide affordable high-performance computing resources, open-source pediatric datasets, federated learning approaches, and seamless workflow integration backed by robust region-specific AI regulations. This review sheds light on these barriers and highlights opportunities for AI-enabled solutions to become routine in pediatric radiology on the African continent.},
}
RevDate: 2026-01-21
CmpDate: 2026-01-21
Network intrusion detection using a hybrid graph-based convolutional network and transformer architecture.
PloS one, 21(1):e0340997 pii:PONE-D-25-45283.
Cloud computing continues to expand rapidly due to its ability to provide internet-hosted services, including servers, databases, and storage. However, this growth increases exposure to sophisticated intrusion attacks that can evade traditional security mechanisms such as firewalls. As a result, network intrusion detection systems (NIDS) enhanced with machine learning and deep learning have become increasingly important. Despite notable advancements, many AI-based intrusion detection models remain limited by their dependence on extensive, high-quality attack datasets and their insufficient capacity to capture complex, dynamic patterns in distributed cloud environments. This study presents a hybrid intrusion detection model that combines a graph convolutional layer and a transformer encoder layer to form a deep neural network architecture. Using the CIC-IDS 2018 dataset, tabular network traffic data was transformed into computational graphs, enabling the model, called "GConvTrans", to leverage both local structural information and global context through graph convolutional layers and multi-head self-attention mechanisms, respectively. Experimental evaluation shows that the proposed GConvTrans obtained 84.7%, 96.75% and 96.94% accuracy on the training, validation and testing sets, respectively. These findings demonstrate that combining graph learning techniques with standard deep learning methods can be robust for detecting complex network intrusions. Future research will explore other datasets and continue refining the proposed architecture and its hyperparameters; another direction is to apply the architecture to other graph learning tasks such as link prediction.
Additional Links: PMID-41564087
@article {pmid41564087,
year = {2026},
author = {Appiahene, P and Berchie, SO and Botchway, E and Ayitey, MJ and Dawson, JK and Mettle, HN and Afrifa, S},
title = {Network intrusion detection using a hybrid graph-based convolutional network and transformer architecture.},
journal = {PloS one},
volume = {21},
number = {1},
pages = {e0340997},
doi = {10.1371/journal.pone.0340997},
pmid = {41564087},
issn = {1932-6203},
mesh = {*Neural Networks, Computer ; *Computer Security ; Deep Learning ; *Cloud Computing ; Algorithms ; Humans ; Machine Learning ; },
abstract = {Cloud computing continues to expand rapidly due to its ability to provide internet-hosted services, including servers, databases, and storage. However, this growth increases exposure to sophisticated intrusion attacks that can evade traditional security mechanisms such as firewalls. As a result, network intrusion detection systems (NIDS) enhanced with machine learning and deep learning have become increasingly important. Despite notable advancements, many AI-based intrusion detection models remain limited by their dependence on extensive, high-quality attack datasets and their insufficient capacity to capture complex, dynamic patterns in distributed cloud environments. This study presents a hybrid intrusion detection model that combines a graph convolutional layer and a transformer encoder layer to form a deep neural network architecture. Using the CIC-IDS 2018 dataset, tabular network traffic data was transformed into computational graphs, enabling the model, called "GConvTrans", to leverage both local structural information and global context through graph convolutional layers and multi-head self-attention mechanisms, respectively. Experimental evaluation shows that the proposed GConvTrans obtained 84.7%, 96.75% and 96.94% accuracy on the training, validation and testing sets, respectively. These findings demonstrate that combining graph learning techniques with standard deep learning methods can be robust for detecting complex network intrusions. Future research will explore other datasets and continue refining the proposed architecture and its hyperparameters; another direction is to apply the architecture to other graph learning tasks such as link prediction.},
}
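A single graph-convolution step of the kind such hybrid models stack can be sketched with the usual symmetric normalization. This is generic GCN math on a toy graph, not the paper's exact GConvTrans layer:

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution step, H' = ReLU(A_hat @ H @ W), where
    A_hat = D^-1/2 (A + I) D^-1/2 adds self-loops and symmetrically
    normalizes the adjacency matrix."""
    n = len(adj)
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    deg = [sum(row) for row in a]
    a_hat = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
             for i in range(n)]

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
                 for j in range(len(y[0]))] for i in range(len(x))]

    h = matmul(matmul(a_hat, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]  # ReLU

# toy 3-node path graph, 2-dim features, identity weights
out = gcn_layer(adj=[[0, 1, 0], [1, 0, 1], [0, 1, 0]],
                feats=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                weight=[[1.0, 0.0], [0.0, 1.0]])
```

Each output row mixes a node's features with its neighbors' (the "local structural information"); the transformer encoder then attends across all nodes for global context.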
RevDate: 2026-01-19
HED-ID: an edge-deployable and explainable intrusion detection system optimized via metaheuristic learning.
Scientific reports, 16(1):2313.
UNLABELLED: The increasing complexity of network traffic has heightened the demand for intrusion detection systems (IDS) that deliver high accuracy, interpretability, and efficiency in diverse computing environments, including edge devices. Traditional deep learning-based IDS models perform well but often suffer from feature redundancy, poor generalization, and limited adaptability to resource-constrained platforms. To address these challenges, we propose HED-ID: an edge-deployable and explainable IDS framework. The system utilizes a Stacked Bidirectional Gated Recurrent Unit (S-BiGRU)—a recurrent neural network variant that captures bidirectional temporal dependencies—with an attention mechanism to focus on critical patterns in traffic flows. Grey Wolf Optimization (GWO), a metaheuristic algorithm inspired by wolf hunting behavior, is employed for joint feature selection and hyperparameter tuning to improve efficiency. Finally, SHapley Additive exPlanations (SHAP), a game-theoretic approach for model interpretability, quantifies feature contributions, linking predictions to observable network attributes. Evaluations on the CICIDS-2017, UNSW-NB15, and ToN-IoT datasets show consistent detection performance in both cloud-like and edge-like settings, with inference latency of 18–22 ms and memory usage of 92–115 MB. These results highlight HED-ID’s balanced trade-off between accuracy, interpretability, and resource efficiency, making it suitable for real-world network security applications.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-32183-8.
Additional Links: PMID-41554793
@article {pmid41554793,
year = {2026},
author = {Nasir, K and Badri, SK and Alghazzawi, DM and Alghamdi, MY and Alkhozae, M and Almakky, A and Alhazmi, RM and Asghar, MZ},
title = {HED-ID: an edge-deployable and explainable intrusion detection system optimized via metaheuristic learning.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {2313},
pmid = {41554793},
issn = {2045-2322},
abstract = {UNLABELLED: The increasing complexity of network traffic has heightened the demand for intrusion detection systems (IDS) that deliver high accuracy, interpretability, and efficiency in diverse computing environments, including edge devices. Traditional deep learning-based IDS models perform well but often suffer from feature redundancy, poor generalization, and limited adaptability to resource-constrained platforms. To address these challenges, we propose HED-ID: an edge-deployable and explainable IDS framework. The system utilizes a Stacked Bidirectional Gated Recurrent Unit (S-BiGRU)—a recurrent neural network variant that captures bidirectional temporal dependencies—with an attention mechanism to focus on critical patterns in traffic flows. Grey Wolf Optimization (GWO), a metaheuristic algorithm inspired by wolf hunting behavior, is employed for joint feature selection and hyperparameter tuning to improve efficiency. Finally, SHapley Additive exPlanations (SHAP), a game-theoretic approach for model interpretability, quantifies feature contributions, linking predictions to observable network attributes. Evaluations on the CICIDS-2017, UNSW-NB15, and ToN-IoT datasets show consistent detection performance in both cloud-like and edge-like settings, with inference latency of 18–22 ms and memory usage of 92–115 MB. These results highlight HED-ID’s balanced trade-off between accuracy, interpretability, and resource efficiency, making it suitable for real-world network security applications.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-025-32183-8.},
}
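Grey Wolf Optimization, the metaheuristic HED-ID uses for joint feature selection and hyperparameter tuning, can be sketched on a toy objective. This is the generic GWO update rule, not HED-ID's tuned variant:

```python
import random

def gwo_minimize(f, dim, bounds, wolves=12, iters=60, seed=0):
    """Generic Grey Wolf Optimization: every wolf moves toward the
    three current best solutions (alpha, beta, delta) while the
    coefficient `a` decays linearly from 2 to 0, shifting the pack
    from exploration to exploitation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        leaders = [list(pack[0]), list(pack[1]), list(pack[2])]
        a = 2 - 2 * t / iters
        for w in range(wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    D = abs(C * leader[d] - pack[w][d])
                    x += leader[d] - A * D
                new.append(min(hi, max(lo, x / 3)))  # average of 3 pulls
            pack[w] = new
    return min(pack, key=f)

# minimize the sphere function in 3 dimensions
best = gwo_minimize(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

In HED-ID's setting the objective would instead score a feature subset and hyperparameter choice by validation performance.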
RevDate: 2026-01-19
A chronic kidney disease prediction system based on Internet of Things using walrus optimized deep learning technique.
Informatics for health & social care [Epub ahead of print].
The Internet of Things (IoT) and cloud computing (CC) are commonly incorporated in healthcare applications. In the healthcare industry, a huge quantity of patient data is generated by IoT devices, and the cloud's integrated storage and processing power are used to analyze it. The Internet of Medical Things (IoMT) combines health monitoring mechanisms with medical equipment and sensors to monitor patient records and offer smarter, more capable healthcare services. This paper proposes an effective, walrus-optimized deep learning (DL) technique for chronic kidney disease (CKD) prediction in IoT. To begin, the data are collected from the CKD dataset, and preprocessing procedures, such as missing value imputation, numerical conversion, and normalization, are performed to improve the quality of the dataset. Then, dataset balancing is done using the k-means (KM) clustering algorithm to prevent the model from making inaccurate predictions. After that, an enhanced residual network 50 (EResNet50) is utilized to extract more discriminative features from the dataset. From these, the optimal features are selected via the elite opposition and Cauchy distribution-based walrus optimization algorithm (ECWOA). Finally, classification uses the walrus-optimized bidirectional long short-term memory (WOBLSTM). The simulation outcomes demonstrated the effectiveness of our method over existing techniques, with a higher sensitivity of 99.89% for CKD prediction.
Additional Links: PMID-41553156
@article {pmid41553156,
year = {2026},
author = {M, S and M, T and G, ER and M, A},
title = {A chronic kidney disease prediction system based on Internet of Things using walrus optimized deep learning technique.},
journal = {Informatics for health & social care},
volume = {},
number = {},
pages = {1-21},
doi = {10.1080/17538157.2025.2610695},
pmid = {41553156},
issn = {1753-8165},
abstract = {The Internet of Things (IoT) and cloud computing (CC) are commonly incorporated in healthcare applications. In the healthcare industry, a huge quantity of patient data is generated by IoT devices, and the cloud's integrated storage and processing power are used to analyze it. The Internet of Medical Things (IoMT) combines health monitoring mechanisms with medical equipment and sensors to monitor patient records and offer smarter, more capable healthcare services. This paper proposes an effective, walrus-optimized deep learning (DL) technique for chronic kidney disease (CKD) prediction in IoT. To begin, the data are collected from the CKD dataset, and preprocessing procedures, such as missing value imputation, numerical conversion, and normalization, are performed to improve the quality of the dataset. Then, dataset balancing is done using the k-means (KM) clustering algorithm to prevent the model from making inaccurate predictions. After that, an enhanced residual network 50 (EResNet50) is utilized to extract more discriminative features from the dataset. From these, the optimal features are selected via the elite opposition and Cauchy distribution-based walrus optimization algorithm (ECWOA). Finally, classification uses the walrus-optimized bidirectional long short-term memory (WOBLSTM). The simulation outcomes demonstrated the effectiveness of our method over existing techniques, with a higher sensitivity of 99.89% for CKD prediction.},
}
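Two of the preprocessing steps named in the abstract, missing-value imputation and normalization, can be sketched for a single numeric column. A minimal illustration, not the paper's code:

```python
def impute_and_normalize(column):
    """Mean-impute missing values (None), then min-max scale the
    column to [0, 1] — the imputation + normalization steps described
    in the abstract, shown for one feature column."""
    observed = [v for v in column if v is not None]
    fill = sum(observed) / len(observed)          # mean of observed values
    filled = [fill if v is None else v for v in column]
    lo, hi = min(filled), max(filled)
    return [(v - lo) / (hi - lo) for v in filled]

scaled = impute_and_normalize([2.0, None, 4.0, 6.0])  # → [0.0, 0.5, 0.5, 1.0]
```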
RevDate: 2026-01-19
CmpDate: 2026-01-19
Efficient data replication in distributed clouds via quantum entanglement algorithms.
MethodsX, 16:103762.
In cloud computing, it remains difficult to keep data in a cloud service replicated and consistent across multiple data centers. Traditional replication systems work, but they are slow, generate heavy data transfers, and offer only eventual consistency. This work presents a new method, the Quantum Entanglement-Based Replication Algorithm (QERA), which uses quantum entanglement to synchronize cloud data quickly across all nodes. QERA encodes data changes at the primary cloud node onto quantum states and distributes entangled qubit pairs to the related replica nodes, so that any change is reflected on all replicas without the usual overhead and delay of message broadcasts. Simulations examine how QERA decreases latency, promotes consistency, and improves resource use in cloud environments. The paper builds a theoretical framework using the IBM Qiskit and Microsoft Quantum Development Kit simulators to compare QERA against classical baseline algorithms. The results show that QERA may greatly improve how updates and replication are managed across distributed cloud systems: it keeps replication highly synchronized among remote cloud nodes, uses entangled qubit pairs to minimize latency and bandwidth costs during updates, and combines quantum teleportation with non-invasive verification methods to preserve state integrity without disturbing the quantum system.
Additional Links: PMID-41551262
@article {pmid41551262,
year = {2026},
author = {B, PS and N, R and Ravi, J and C, V and J, G and I, G and K, DK and Muthusamy, E},
title = {Efficient data replication in distributed clouds via quantum entanglement algorithms.},
journal = {MethodsX},
volume = {16},
number = {},
pages = {103762},
pmid = {41551262},
issn = {2215-0161},
abstract = {In cloud computing, it remains difficult to keep data in a cloud service replicated and consistent across multiple data centers. Traditional replication systems work, but they are slow, generate heavy data transfers, and offer only eventual consistency. This work presents a new method, the Quantum Entanglement-Based Replication Algorithm (QERA), which uses quantum entanglement to synchronize cloud data quickly across all nodes. QERA encodes data changes at the primary cloud node onto quantum states and distributes entangled qubit pairs to the related replica nodes, so that any change is reflected on all replicas without the usual overhead and delay of message broadcasts. Simulations examine how QERA decreases latency, promotes consistency, and improves resource use in cloud environments. The paper builds a theoretical framework using the IBM Qiskit and Microsoft Quantum Development Kit simulators to compare QERA against classical baseline algorithms. The results show that QERA may greatly improve how updates and replication are managed across distributed cloud systems: it keeps replication highly synchronized among remote cloud nodes, uses entangled qubit pairs to minimize latency and bandwidth costs during updates, and combines quantum teleportation with non-invasive verification methods to preserve state integrity without disturbing the quantum system.},
}
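The entanglement QERA relies on can be illustrated by preparing a Bell pair in a tiny state-vector simulation. A conceptual sketch only; the paper itself works in the Qiskit and Microsoft QDK simulators:

```python
import math

def bell_pair():
    """Prepare the entangled Bell state (|00> + |11>)/sqrt(2): apply a
    Hadamard to the first qubit of |00>, then a CNOT with the first
    qubit as control, tracked in a 4-amplitude state vector ordered
    |00>, |01>, |10>, |11>."""
    s = [1.0, 0.0, 0.0, 0.0]
    h = 1 / math.sqrt(2)
    # Hadamard on qubit 0 mixes the |0x> and |1x> amplitudes
    s = [h * (s[0] + s[2]), h * (s[1] + s[3]),
         h * (s[0] - s[2]), h * (s[1] - s[3])]
    s[2], s[3] = s[3], s[2]  # CNOT: flip qubit 1 where qubit 0 is |1>
    return s

state = bell_pair()  # → amplitudes ≈ [0.707, 0.0, 0.0, 0.707]
```

Measuring either qubit of this state determines the other's outcome, which is the correlation a QERA-style scheme would exploit between primary and replica nodes.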
RevDate: 2026-01-18
An automated pipeline for efficiently generating standardized, child-friendly audiovisual language stimuli.
Developmental cognitive neuroscience, 78:101674 pii:S1878-9293(26)00006-X [Epub ahead of print].
Creating engaging language stimuli suitable for children can be difficult and time-consuming. To simplify and accelerate the process, we developed an automated pipeline that combines existing audio generation and animation tools to generate customizable audiovisual stimuli from text input. The pipeline consists of two components: the first uses Google Cloud Text-to-Speech to generate audio stimuli from text, and the second uses Adobe Character Animator to create video stimuli in which an animated character "speaks" the audio with speech-aligned mouth movements. We evaluated the pipeline with two stimulus sets, including an acoustic comparison between generated audio stimuli and existing human-recorded stimuli. The pipeline is efficient, taking less than 2 min to generate each audiovisual stimulus, and fewer than 9% of stimuli needed to be regenerated. The audio generation component is particularly fast, taking less than 1 s per stimulus. By leveraging automated tools for language stimulus creation, this pipeline can facilitate developmental research on language and other domains of cognition, especially in cognitive neuroscience studies that require large numbers of stimuli.
Additional Links: PMID-41548476
@article {pmid41548476,
year = {2026},
author = {Santi, B and Soza, M and Tuckute, G and Sathe, A and Fedorenko, E and Olson, H},
title = {An automated pipeline for efficiently generating standardized, child-friendly audiovisual language stimuli.},
journal = {Developmental cognitive neuroscience},
volume = {78},
number = {},
pages = {101674},
doi = {10.1016/j.dcn.2026.101674},
pmid = {41548476},
issn = {1878-9307},
abstract = {Creating engaging language stimuli suitable for children can be difficult and time-consuming. To simplify and accelerate the process, we developed an automated pipeline that combines existing audio generation and animation tools to generate customizable audiovisual stimuli from text input. The pipeline consists of two components: the first uses Google Cloud Text-to-Speech to generate audio stimuli from text, and the second uses Adobe Character Animator to create video stimuli in which an animated character "speaks" the audio with speech-aligned mouth movements. We evaluated the pipeline with two stimulus sets, including an acoustic comparison between generated audio stimuli and existing human-recorded stimuli. The pipeline is efficient, taking less than 2 min to generate each audiovisual stimulus, and fewer than 9% of stimuli needed to be regenerated. The audio generation component is particularly fast, taking less than 1 s per stimulus. By leveraging automated tools for language stimulus creation, this pipeline can facilitate developmental research on language and other domains of cognition, especially in cognitive neuroscience studies that require large numbers of stimuli.},
}
RevDate: 2026-01-17
Impact of Labeling Inaccuracy and Image Noise on Tooth Segmentation in Panoramic Radiographs using Federated, Centralized and Local Learning.
Dento maxillo facial radiology pii:8428117 [Epub ahead of print].
OBJECTIVES: Federated learning (FL) may mitigate privacy constraints, heterogeneous data quality, and inconsistent labeling in dental diagnostic artificial intelligence (AI). FL was compared with centralized (CL) and local learning (LL) for tooth segmentation in panoramic radiographs across multiple data corruption scenarios.
METHODS: An Attention U-Net was trained on 2066 radiographs from six institutions across four settings: baseline (unaltered data); label manipulation (dilated/missing annotations); image-quality manipulation (additive Gaussian noise); and exclusion of one faulty client with corrupted data. FL was implemented via the Flower AI framework. Per-client training and validation loss trajectories were monitored for anomaly detection, and a set of metrics (Dice, IoU, HD, HD95, and ASSD) was evaluated on a hold-out test set. Significance of differences in these metrics was assessed with the Wilcoxon signed-rank test. CL and LL served as comparators.
RESULTS: Baseline: FL achieved a median Dice of 0.94889 (ASSD: 1.33229), slightly better than CL at 0.94706 (ASSD: 1.37074) and LL at 0.93557-0.94026 (ASSD: 1.51910-1.69777). Label manipulation: FL maintained the best median Dice score at 0.94884 (ASSD: 1.46487) versus CL's 0.94183 (ASSD: 1.75738) and LL's 0.93003-0.94026 (ASSD: 1.51910-2.11462). Similar performance was observed when two faulty clients were introduced. Image noise: FL led with Dice at 0.94853 (ASSD: 1.31088); CL scored 0.94787 (ASSD: 1.36131); LL ranged from 0.93179 to 0.94026 (ASSD: 1.51910-1.77350). Similar performance was observed when two faulty clients were introduced, with CL performing slightly better than FL. Faulty-client exclusion: FL reached a Dice of 0.94790 (ASSD: 1.33113), better than CL's 0.94550 (ASSD: 1.39318). Loss-curve monitoring reliably flagged the corrupted site.
CONCLUSIONS: FL matches or exceeds CL and outperforms LL across corruption scenarios while preserving privacy. Per-client loss trajectories provide an effective anomaly-detection mechanism and support FL as a practical, privacy-preserving approach for scalable clinical AI deployment.
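Two of the overlap metrics reported above, Dice and IoU, are straightforward to compute from binary masks. The following is a minimal pure-Python illustration (not the study's evaluation code), with masks given as flat 0/1 lists:

```python
# Minimal sketch of two segmentation metrics named in the paper: Dice
# and IoU over binary masks (flat 0/1 lists). Illustrative only; the
# study's actual evaluation pipeline is not described at this level.
def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2.0 * inter / total

def iou(a, b):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return 1.0 if union == 0 else inter / union

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
print(round(dice(pred, truth), 3))  # 0.8
print(round(iou(pred, truth), 3))   # 0.667
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the two are reported together rather than interchangeably.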
Additional Links: PMID-41546377
@article {pmid41546377,
year = {2026},
author = {Andreas Balle Rubak, J and Naveed, K and Jain, S and Esterle, L and Iosifidis, A and Pauwels, R},
title = {Impact of Labeling Inaccuracy and Image Noise on Tooth Segmentation in Panoramic Radiographs using Federated, Centralized and Local Learning.},
journal = {Dento maxillo facial radiology},
volume = {},
number = {},
pages = {},
doi = {10.1093/dmfr/twag001},
pmid = {41546377},
issn = {1476-542X},
abstract = {OBJECTIVES: Federated learning (FL) may mitigate privacy constraints, heterogeneous data quality, and inconsistent labeling in dental diagnostic artificial intelligence (AI). FL was compared with centralized (CL) and local learning (LL) for tooth segmentation in panoramic radiographs across multiple data corruption scenarios.
METHODS: An Attention U-Net was trained on 2066 radiographs from six institutions across four settings: baseline (unaltered data); label manipulation (dilated/missing annotations); image-quality manipulation (additive Gaussian noise); and exclusion of one faulty client with corrupted data. FL was implemented via the Flower AI framework. Per-client training- and validation loss trajectories were monitored for anomaly detection and a set of metrics (Dice, IoU, HD, HD95 and ASSD) were evaluated on a hold-out test set. From these metrics significance results were reported through Wilcoxon signed-rank test. CL and LL served as comparators.
RESULTS: Baseline: FL achieved a median Dice of 0.94889 (ASSD: 1.33229), slightly better than CL at 0.94706 (ASSD: 1.37074) and LL at 0.93557-0.94026 (ASSD: 1.51910-1.69777). Label manipulation: FL maintained the best median Dice score at 0.94884 (ASSD: 1.46487) versus CL's 0.94183 (ASSD: 1.75738) and LL's 0.93003-0.94026 (ASSD: 1.51910-2.11462). Similar performance was observed when two faulty clients were introduced. Image noise: FL led with Dice at 0.94853 (ASSD: 1.31088); CL scored 0.94787 (ASSD: 1.36131); LL ranged from 0.93179-0.94026 (ASSD: 1.51910-1.77350). Similar performance was observed when two faulty clients were introduced, with CL performing slightly better than FL. Faulty-client exclusion: FL reached Dice at 0.94790 (ASSD: 1.33113) better than CL's 0.94550 (ASSD: 1.39318). Loss-curve monitoring reliably flagged the corrupted site.
CONCLUSIONS: FL matches or exceeds CL and outperforms LL across corruption scenarios while preserving privacy. Per-client loss trajectories provide an effective anomaly-detection mechanism and support FL as a practical, privacy-preserving approach for scalable clinical AI deployment.},
}
RevDate: 2026-01-15
Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness.
Scientific reports, 16(1):2064.
Distributed Denial of Service (DDoS) attacks represent one of the most strategically executed and severe threats in cloud computing, often leading to substantial data loss and significant financial damage for both cloud service providers and their users. Numerous studies have been conducted to enhance cloud security against such attacks through the application of machine learning techniques. This paper implements the Optimized CatBoost machine learning algorithm (OCML) with hyperparameter optimization using Optuna to achieve efficient training. Feature selection was conducted using the SHAP (SHapley Additive exPlanations) method, as the dataset contains over 80 features. The proposed model achieved an accuracy of 99.2% in detecting DDoS attacks in cloud virtual machines (VMs), enabling the system to filter out malicious jobs and allocate resources efficiently. The CICIDS 2019 dataset was used as the benchmark for evaluation. Furthermore, the robustness of the proposed model was assessed using adversarial attacks, specifically the Fast Gradient Sign Method (FGSM), the Carlini-Wagner (CW) attack, and Projected Gradient Descent (PGD); the CatBoost model achieves accuracies of 97%, 80%, and 71% against these attacks, respectively. In addition, robustness against time-series network traffic attacks using pulse wave, random burst, and slow ramp patterns reaches 80%, 83%, and 77%, respectively.
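The paper's hyperparameter optimization uses Optuna; the core idea — repeatedly sample a parameter configuration, score it, and keep the best — can be sketched with a stdlib-only random search over a toy objective. Everything here (the objective shape, the parameter names and ranges) is invented for illustration, not taken from the paper:

```python
import random

# Hedged stand-in for Optuna-style hyperparameter search: stdlib random
# search over a toy objective whose optimum is at depth=6, lr=0.1.
# Parameter names/ranges are invented, not the paper's search space.
def toy_objective(params):
    """Stand-in for cross-validated accuracy (higher is better, max 0)."""
    return -((params["depth"] - 6) ** 2) - 100 * (params["learning_rate"] - 0.1) ** 2

def random_search(objective, n_trials=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "depth": rng.randint(2, 10),
            "learning_rate": rng.uniform(0.01, 0.3),
        }
        score = objective(params)
        if score > best_score:          # keep the best trial seen so far
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(toy_objective)
print(best, round(score, 4))
```

Optuna's advantage over this sketch is that its samplers (e.g. TPE) use past trials to propose promising configurations instead of sampling uniformly, which matters when each trial is a full CatBoost training run.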
Additional Links: PMID-41540130
@article {pmid41540130,
year = {2026},
author = {Samy, H and Bahaa-Eldin, AM and Sobh, MA and Taha, A},
title = {Optimized CatBoost machine learning (OCML) for DDoS detection in cloud virtual machines with time-series and adversarial robustness.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {2064},
pmid = {41540130},
issn = {2045-2322},
abstract = {Distributed Denial of Service (DDoS) attacks represent one of the most strategically executed and severe threats in cloud computing, often leading to substantial data loss and significant financial damage for both cloud service providers and their users. Numerous studies have been conducted to enhance cloud security against such attacks through the application of machine learning techniques. This paper implements the Optimized Catboost machine learning algorithm (OCML) with hyperparameter optimization using Optuna to achieve efficient training. Feature selection was conducted using the SHAP (SHapley Additive exPlanations) method, as the dataset contains over 80 features. The proposed model achieved an accuracy of 99.2% in detecting Distributed Denial of Service (DDoS) attacks in cloud virtual machines (VMs), enabling the system to filter out malicious jobs and allocate resources efficiently. The CICIDS 2019 dataset was used as the benchmark for evaluation. Furthermore, the robustness of the proposed model was assessed using adversarial attacks, specifically the Fast Gradient Sign Method (FGSM), the Carlini-Wagner (CW) attack, and Projected Gradient Descent (PGD). The Catboost model achieves accuracies against these attacks 97%, 80% and 71% respectively. In addition, the robustness against time series network traffic attacks using pulse wave, random burst, and slow ramp achieves 80%, 83% and 77% respectively.},
}
RevDate: 2026-01-16
Smart irrigation-based internet of things and cloud computing technologies for sustainable farming.
Scientific reports pii:10.1038/s41598-026-35810-0 [Epub ahead of print].
Sustainable water management in agriculture is a major challenge, particularly in regions facing water scarcity and the growing impacts of climate change. The inefficiency of traditional irrigation methods often leads to water waste, reduced productivity, and increased pressure on natural resources. In this context, it is imperative to develop innovative solutions to optimize water use while maintaining agricultural performance. This paper proposes a smart irrigation system based on the internet of things (IoT) and cloud computing. The system incorporates several sensors to measure key environmental parameters, such as temperature, air humidity, soil moisture, and water level. An embedded ESP32 microcontroller collects and transmits the data to the ThingsBoard cloud platform, where it is analyzed in real time to determine precise irrigation needs. The system's algorithm automatically makes the necessary decisions to activate or deactivate the irrigation pump, ensuring optimal and accurate water management. Experimental results demonstrate that the system significantly reduces water waste while optimizing irrigation based on the actual needs of the soil and crops. Real-time measurements and automated decision-making ensure accurate and efficient irrigation that adapts to fluctuations in environmental conditions. Performance analysis shows that the proposed approach significantly improves water resource management compared to traditional methods. The integration of cloud computing and the IoT facilitates remote monitoring and automated decision-making, making the system adaptable to a variety of crops and agricultural lands. The estimated cost of implementing the smart irrigation system is approximately $44.00, confirming its economic feasibility and appeal to small and medium-sized farms seeking to optimize water use. This solution also helps to build farmers' resilience to climate change and water scarcity.
The system presented represents a significant advance in the field of smart and sustainable irrigation. By optimizing water use and improving agricultural productivity, the system directly contributes to food security, water resource conservation, and climate resilience. Thus, this study provides a replicable and adaptable model for the development of large-scale smart and sustainable agricultural solutions.
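The abstract does not specify the pump-control algorithm, but a common pattern for this kind of sensor-driven irrigation is a threshold rule with hysteresis: start the pump below a lower soil-moisture bound and stop it above an upper bound, so it does not chatter around a single setpoint. The sketch below is illustrative only, with invented thresholds:

```python
# Illustrative pump-decision rule (NOT the authors' algorithm, which is
# not given in the abstract): threshold control with hysteresis. The
# 30%/45% soil-moisture bounds are invented for the example.
def pump_command(soil_moisture_pct, pump_is_on, low=30.0, high=45.0):
    """Return True to run the pump, False to stop it."""
    if soil_moisture_pct < low:
        return True                 # too dry: start irrigating
    if soil_moisture_pct > high:
        return False                # wet enough: stop
    return pump_is_on               # inside the dead band: keep current state

# A dry reading starts irrigation; mid-band readings keep it running
# until the upper threshold is crossed.
state = False
for reading in [50, 28, 35, 44, 47]:
    state = pump_command(reading, state)
print(state)  # False: the final reading (47) exceeds the upper threshold
```

On an ESP32 the same logic would run on each sensor sample, with the state and readings also pushed to the cloud platform for remote monitoring.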
Additional Links: PMID-41545563
@article {pmid41545563,
year = {2026},
author = {Morchid, A and Qjidaa, H and Alami, RE and Mobayen, S and Skruch, P and Bossoufi, B},
title = {Smart irrigation-based internet of things and cloud computing technologies for sustainable farming.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35810-0},
pmid = {41545563},
issn = {2045-2322},
abstract = {Sustainable water management in agriculture is a major challenge, particularly in regions facing water scarcity and the growing impacts of climate change. The lack of efficiency of traditional irrigation methods often leads to water waste, reduced productivity, and increased pressure on natural resources. In this context, it is imperative to develop innovative solutions to optimize water use while maintaining agricultural performance. This paper proposes a smart irrigation system based on the internet of things (IoT) and cloud computing. The system incorporates several sensors to measure key environmental parameters, such as temperature, air humidity, soil moisture, and water level. An embedded ESP32 microcontroller collects and transmits the data to the thingsBoard cloud platform, where it is analyzed in real time to determine precise irrigation needs. The system's algorithm automatically makes the necessary decisions to activate or deactivate the irrigation pump, ensuring optimal and accurate water management. Experimental results demonstrate that the system significantly reduces water waste while optimizing irrigation based on the actual needs of the soil and crops. Real-time measurements and automated decision-making ensure accurate and efficient irrigation that adapts to fluctuations in environmental conditions. Performance analysis shows that the proposed approach significantly improves water resource management compared to traditional methods. The integration of cloud computing and the IoT facilitates remote monitoring and automated decision-making, making the system adaptable to a variety of crops and agricultural lands. The estimated cost of implementing the smart irrigation system is approximately $44.00, confirming its economic feasibility and appeal to small and medium-sized farms seeking to optimize water use. This solution also helps to build farmers' resilience to climate change and water scarcity. 
The system presented represents a significant advance in the field of smart and sustainable irrigation. By optimizing water use and improving agricultural productivity, the system directly contributes to food security, water resource conservation, and climate resilience. Thus, this study provides a replicable and adaptable model for the development of large-scale smart and sustainable agricultural solutions.},
}
RevDate: 2026-01-16
IoT-driven smart irrigation system to improve water use efficiency.
Scientific reports pii:10.1038/s41598-025-33826-6 [Epub ahead of print].
The agriculture sector is a cornerstone of many economies, playing a central role in ensuring food security and contributing to gross domestic product. Difficulties caused by traditional irrigation methods, population growth, and climate change are driving the development of modern irrigation systems. This study presents a smart irrigation system using techniques such as the Internet of Things (IoT), cloud computing, embedded systems, and sensors. The smart system integrates real-time monitoring and control during irrigation, fertilization, and biopesticide application. A mobile application is implemented to monitor and control the entire system. Results showed that using wood vinegar at low concentrations is an effective way to improve water use efficiency, increase lettuce yield, and optimize disease control compared to other concentrations. The 400 concentration achieved the best values of the evaluation criteria at 26% moisture content. The smart system reduces water consumption by 47% and achieves a 43% increase in yield, as well as the lowest disease severity index, at 7.78%. The proposed system features real-time monitoring and control, improving water use efficiency and supporting smart agriculture practices as well as contributing to food and water security.
Additional Links: PMID-41545460
@article {pmid41545460,
year = {2026},
author = {Mohamed, ZE and Afify, MK and Badr, MM and Omar, OA},
title = {IoT-driven smart irrigation system to improve water use efficiency.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-33826-6},
pmid = {41545460},
issn = {2045-2322},
abstract = {The agriculture sector is the cornerstone of many global economic entities, plays a central role in highly contributing to ensure food security and the gross domestic product. Difficulties caused by traditional irrigation methods, population growth, and climate change are leading to the development of current irrigation systems. This study presented a smart irrigation system using novel techniques like, Internet of Things (IoT), cloud computing, embedded system and sensors. The smart system integrates real-time monitoring and control during irrigation, fertilization, and biopesticides application. A mobile application is implemented to monitor and control the entire system. Results showed that using wood vinegar at low concentrations is an effective way to improve water use efficiency, increase lettuce yield, and optimize disease control compared to other concentrations. The impact of 400 concentration on the evaluation criteria was found to achieve the best values at 26% moisture content. The smart system reduces water consumption by 47% and achieving a 43% increase in yield as well the lowest level of disease severity index with a value of 7.78%. The system proposed features real-time monitoring and control, improving water use efficiency and supporting smart agriculture practices as well as contribute to food and water security.},
}
RevDate: 2026-01-16
CmpDate: 2026-01-16
Data Science Education for Residents, Researchers, and Students in Psychiatry and Psychology: Program Development and Evaluation Study.
JMIR medical education, 12:e75125 pii:v12i1e75125.
BACKGROUND: The use of artificial intelligence (AI) to analyze health care data has become common in behavioral health sciences. However, the lack of training opportunities for mental health professionals limits clinicians' ability to adopt AI in clinical settings. AI education is essential for trainees, equipping them with the literacy needed to implement AI tools in practice, collaborate effectively with data scientists, and develop skills as interdisciplinary researchers with computing skills.
OBJECTIVE: As part of the Penn Innovation in Suicide Prevention Implementation Research Center, we developed, implemented, and evaluated a virtual workshop to educate psychiatry and psychology trainees on using AI for suicide prevention research.
METHODS: The workshop introduced trainees to natural language processing (NLP) concepts and Python coding skills using Jupyter notebooks within a secure Microsoft Azure Databricks cloud computing and analytics environment. We designed a 3-hour workshop that covered 4 key NLP topics: data characterization, data standardization, concept extraction, and statistical analysis. To demonstrate real-world applications, we processed chief complaints from electronic health records to compare the prevalence of suicide-related encounters across populations by race, ethnicity, and age. Training materials were developed based on standard NLP techniques and domain-specific tasks, such as preprocessing psychiatry-related acronyms. Two researchers drafted and demonstrated the code, incorporating feedback from the Methods Core of the Innovation in Suicide Prevention Implementation Research to refine the materials. To evaluate the effectiveness of the workshop, we used the Kirkpatrick program evaluation model, focusing on participants' reactions (level 1) and learning outcomes (level 2). Confidence changes in knowledge and skills before and after the workshop were assessed using paired t tests, and open-ended questions were included to gather feedback for future improvements.
RESULTS: A total of 10 trainees participated in the workshop virtually, including residents, postdoctoral researchers, and graduate students from the psychiatry and psychology departments. The participants found the workshop helpful (mean 3.17 on a scale of 1-4, SD 0.41). Their overall confidence in NLP knowledge significantly increased (P=.002) from 1.35 (SD 0.47) to 2.79 (SD 0.46). Confidence in coding abilities also improved significantly (P=.01), increasing from 1.33 (SD 0.60) to 2.25 (SD 0.42). Open-ended feedback suggested incorporating thematic analysis and exploring additional datasets for future workshops.
CONCLUSIONS: This study illustrates the effectiveness of a tailored data science workshop for trainees in psychiatry and psychology, focusing on applying NLP techniques for suicide prevention research. The workshop significantly enhanced participants' confidence in conducting data science research. Future workshops will cover additional topics of interest, such as working with large language models, thematic analysis, diverse datasets, and multifaceted outcomes. This includes examining how participants' learning impacts their practice and research, as well as assessing knowledge and skills beyond self-reported confidence through methods such as case studies for deeper insights.
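The pre/post confidence comparisons above rest on the paired t statistic: the mean of the per-participant differences divided by its standard error. A minimal pure-Python version, using illustrative data (the study's raw ratings are not published in the abstract):

```python
import math

# Sketch of the paired t statistic used for pre/post comparisons:
# t = mean(d) / (sd(d) / sqrt(n)) over paired differences d.
# The ratings below are invented for illustration.
def paired_t(pre, post):
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

pre  = [1, 2, 1, 2, 1]   # hypothetical pre-workshop confidence ratings
post = [3, 3, 2, 4, 3]   # hypothetical post-workshop ratings
print(round(paired_t(pre, post), 3))  # 6.532
```

The resulting t is then compared against a t distribution with n-1 degrees of freedom to obtain the reported P values.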
Additional Links: PMID-41544003
@article {pmid41544003,
year = {2026},
author = {Donnelly, HK and Mandell, D and Hwang, S and Schriver, E and Vurgun, U and Neill, G and Patel, E and Reilly, ME and Steinberg, M and Calloway, A and Gallop, R and Oquendo, MA and Brown, GK and Mowery, DL},
title = {Data Science Education for Residents, Researchers, and Students in Psychiatry and Psychology: Program Development and Evaluation Study.},
journal = {JMIR medical education},
volume = {12},
number = {},
pages = {e75125},
doi = {10.2196/75125},
pmid = {41544003},
issn = {2369-3762},
mesh = {Humans ; *Psychiatry/education ; *Data Science/education ; *Psychology/education ; Program Evaluation ; Internship and Residency ; Program Development ; *Research Personnel/education ; Artificial Intelligence ; Suicide Prevention ; Natural Language Processing ; Students ; },
abstract = {BACKGROUND: The use of artificial intelligence (AI) to analyze health care data has become common in behavioral health sciences. However, the lack of training opportunities for mental health professionals limits clinicians' ability to adopt AI in clinical settings. AI education is essential for trainees, equipping them with the literacy needed to implement AI tools in practice, collaborate effectively with data scientists, and develop skills as interdisciplinary researchers with computing skills.
OBJECTIVE: As part of the Penn Innovation in Suicide Prevention Implementation Research Center, we developed, implemented, and evaluated a virtual workshop to educate psychiatry and psychology trainees on using AI for suicide prevention research.
METHODS: The workshop introduced trainees to natural language processing (NLP) concepts and Python coding skills using Jupyter notebooks within a secure Microsoft Azure Databricks cloud computing and analytics environment. We designed a 3-hour workshop that covered 4 key NLP topics: data characterization, data standardization, concept extraction, and statistical analysis. To demonstrate real-world applications, we processed chief complaints from electronic health records to compare the prevalence of suicide-related encounters across populations by race, ethnicity, and age. Training materials were developed based on standard NLP techniques and domain-specific tasks, such as preprocessing psychiatry-related acronyms. Two researchers drafted and demonstrated the code, incorporating feedback from the Methods Core of the Innovation in Suicide Prevention Implementation Research to refine the materials. To evaluate the effectiveness of the workshop, we used the Kirkpatrick program evaluation model, focusing on participants' reactions (level 1) and learning outcomes (level 2). Confidence changes in knowledge and skills before and after the workshop were assessed using paired t tests, and open-ended questions were included to gather feedback for future improvements.
RESULTS: A total of 10 trainees participated in the workshop virtually, including residents, postdoctoral researchers, and graduate students from the psychiatry and psychology departments. The participants found the workshop helpful (mean 3.17 on a scale of 1-4, SD 0.41). Their overall confidence in NLP knowledge significantly increased (P=.002) from 1.35 (SD 0.47) to 2.79 (SD 0.46). Confidence in coding abilities also improved significantly (P=.01), increasing from 1.33 (SD 0.60) to 2.25 (SD 0.42). Open-ended feedback suggested incorporating thematic analysis and exploring additional datasets for future workshops.
CONCLUSIONS: This study illustrates the effectiveness of a tailored data science workshop for trainees in psychiatry and psychology, focusing on applying NLP techniques for suicide prevention research. The workshop significantly enhanced participants' confidence in conducting data science research. Future workshops will cover additional topics of interest, such as working with large language models, thematic analysis, diverse datasets, and multifaceted outcomes. This includes examining how participants' learning impacts their practice and research, as well as assessing knowledge and skills beyond self-reported confidence through methods such as case studies for deeper insights.},
}
MeSH Terms:
Humans
*Psychiatry/education
*Data Science/education
*Psychology/education
Program Evaluation
Internship and Residency
Program Development
*Research Personnel/education
Artificial Intelligence
Suicide Prevention
Natural Language Processing
Students
RevDate: 2026-01-14
Desargues cloud TPS: a cloud-based automatic radiation treatment planning system for IMRT.
Biomedical engineering online pii:10.1186/s12938-026-01510-z [Epub ahead of print].
PURPOSE: To develop a cloud-based automated treatment planning system for intensity-modulated radiation therapy and evaluate its efficacy and safety for tumors in various anatomical sites under general clinical scenarios.
RESULTS: All plans satisfy the PTV prescription dose coverage requirement of at least 95% of the PTV volume. The mean HI of the plan A and plan B groups is 0.084 and 0.081, respectively, with no statistically significant difference from the plan C group. The mean CI, PQM, OOT and POT are 0.806, 77.55, 410 s and 185 s for the plan A group, and 0.841, 76.87, 515.1 s and 271.1 s for the plan B group; these were significantly superior to those of the plan C group, except for the CI of the plan A group. There is no statistically significant difference between the dose accuracies of the plan B and plan C groups.
CONCLUSIONS: The overall efficacy and safety of the Desargues Cloud TPS are not significantly different from those of Varian Eclipse, while some efficacy indicators of plans generated by automatic planning, with or without manual adjustments, are even significantly superior to those of fully manual plans from Eclipse. Cloud-based automatic treatment planning additionally increases the efficiency of the treatment planning process and facilitates the sharing of planning knowledge.
MATERIALS AND METHODS: The cloud-based automatic radiation treatment planning system, Desargues Cloud TPS, was designed and developed in a browser/server mode, where all computing-intensive functions were deployed on the server and user interfaces were implemented on the web. Communication between the browser and the server was through the local area network (LAN) of a radiotherapy institution. The automatic treatment planning module adopted a hybrid of knowledge-based planning (KBP) and protocol-based automatic iterative optimization (PB-AIO), consisting of three steps: beam angle optimization (BAO), beam fluence optimization (BFO) and machine parameter optimization (MPO). 53 patients from two institutions were enrolled in a multi-center self-controlled clinical validation. For each patient, three IMRT plans were designed. Plans A and B were designed on Desargues Cloud TPS using automatic planning without and with manual adjustments, respectively. Plan C was designed on Varian Eclipse TPS using fully manual planning. The efficacy indicators were the heterogeneity index, conformity index, plan quality metric, overall operation time and plan optimization time. The safety indicators were the gamma indices of dose verification.
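Two of the plan-quality indicators above can be sketched directly. Note the formula choices here are assumptions: HI is taken in the ICRU 83 form (D2% - D98%)/D50%, and CI in the RTOG form (prescription isodose volume / target volume); the paper may define them differently.

```python
# Hedged sketch of two plan-quality indicators. Formulas assumed:
#   HI = (D2% - D98%) / D50%           (ICRU 83 form; 0 = perfectly uniform)
#   CI = prescription isodose volume / target volume   (RTOG form)
def dose_percentile(doses, pct):
    """Dose received by at least pct% of the voxels (volume percentile)."""
    s = sorted(doses, reverse=True)
    idx = max(0, min(len(s) - 1, int(round(pct / 100.0 * len(s))) - 1))
    return s[idx]

def homogeneity_index(target_doses):
    d2 = dose_percentile(target_doses, 2)
    d98 = dose_percentile(target_doses, 98)
    d50 = dose_percentile(target_doses, 50)
    return (d2 - d98) / d50

def conformity_index(all_doses, target_mask, prescription):
    piv = sum(1 for d in all_doses if d >= prescription)  # prescription isodose volume
    tv = sum(target_mask)                                 # target volume
    return piv / tv

# Toy voxel data: a perfectly uniform 60 Gy target, and a plan whose
# 60 Gy isodose covers 80 of 100 target-sized voxels.
print(homogeneity_index([60.0] * 50))                                   # 0.0
print(conformity_index([61] * 80 + [59] * 40, [1] * 100 + [0] * 20, 60))  # 0.8
```

A real TPS computes these from 3D dose grids and structure sets; the voxel lists above only make the arithmetic visible.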
Additional Links: PMID-41530840
@article {pmid41530840,
year = {2026},
author = {Guo, J and Qin, S and Guo, C and Zhu, M and Zhou, Y and Wang, H and Xu, X and Zhan, W and Chen, L and Ni, J and Tang, Y and Chen, J and Shen, Y and Chen, H and Men, K and Liu, H and Pan, Y and Ye, J and Huan, J and Zhou, J},
title = {Desargues cloud TPS: a cloud-based automatic radiation treatment planning system for IMRT.},
journal = {Biomedical engineering online},
volume = {},
number = {},
pages = {},
doi = {10.1186/s12938-026-01510-z},
pmid = {41530840},
issn = {1475-925X},
support = {ZDXK202235//Jiangsu Provincial Medical Key Discipline/ ; 2024Z046//Key Research and Development Project of Ningbo/ ; 2024Z220//Key Research and Development Project of Ningbo/ ; 2025C02249(SD2)//Zhejiang Provincial Leading Goose Plan Project/ ; 2022YFC2402303//National Key Research and Development Program of China/ ; LCZX202351//Clinical Key Disease Diagnosis and Treatment Technology Project of Suzhou City/ ; },
abstract = {PURPOSE: To develop a cloud-based automated treatment planning system for intensity-modulated radiation therapy and evaluate its efficacy and safety for tumors in various anatomical sites under general clinical scenarios.
RESULTS: All the plans from both groups satisfy the PTV prescription dose coverage requirement of at least 95% of the PTV volume. The mean HI of plan A group and plan B group is 0.084 and 0.081, respectively, with no statistically significant difference from those of plan C group. The mean CI, PQM, OOT and POT are 0.806, 77.55. 410 s and 185 s for plan A group, and 0.841, 76.87, 515.1 s and 271.1 s for plan B group, which were significantly superior than those of plan C group except for the CI of plan A group. There is no statistically significant difference between the dose accuracies of plan B and plan C groups.
CONCLUSIONS: It is concluded that the overall efficacy and safety of the Desargues Cloud TPS are not significantly different to those of Varian Eclipse, while some efficacy indicators of plans generated from automatic planning without or with manual adjustments are even significantly superior to those of fully manual plans from Eclipse. The cloud-based automatic treatment planning additionally increase the efficiency of treatment planning process and facilitate the sharing of planning knowledge.
MATERIALS AND METHODS: The cloud-based automatic radiation treatment planning system, Desargues Cloud TPS, was designed and developed based on browser/server mode, where all the computing intensive functions were deployed on the server and user interfaces were implemented on the web. The communication between the browser and the server was through the local area network (LAN) of a radiotherapy institution. The automatic treatment planning module adopted a hybrid of both knowledge-based planning (KBP) and protocol-based automatic iterative optimization (PB-AIO), consisting of three steps: beam angle optimization (BAO), beam fluence optimization (BFO) and machine parameter optimization (MPO). 53 patients from two institutions have been enrolled in a multi-center self-controlled clinical validation. For each patient, three IMRT plans were designed. The plan A and B were designed on Desargues Cloud TPS using automatic planning without and with manual adjustments, respectively. The plan C was designed on Varian Eclipse TPS using fully manual planning. The efficacy indicators were heterogeneous index, conformity index, plan quality metric, overall operation time and plan optimization time. The safety indicators were gamma indices of dose verification.},
}
RevDate: 2026-01-12
CmpDate: 2026-01-12
Ferroelectric Optoelectronic Sensor for Intelligent Flame Detection and In-Sensor Motion Perception.
Nano-micro letters, 18(1):123.
Next-generation fire safety systems demand precise detection and motion recognition of flames. In-sensor computing, which integrates sensing, memory, and processing capabilities, has emerged as a key technology in flame detection. However, hardware-level functional demonstrations of artificial vision systems in the solar-blind ultraviolet (UV) band (200-280 nm) are hindered by weak detection capability. Here, we propose Ga2O3/In2Se3 heterojunctions for a ferroelectric (Fe) optoelectronic sensor (OES) array (5 × 5 pixels), which is capable of ultraweak UV light detection with ultrahigh detectivity through ferroelectric regulation and features configurable multimode functionality. The Fe-OES array can directly sense different flame motions and simulate the non-spiking gradient neurons of the insect visual system. Moreover, the flame signal can be effectively amplified in combination with leaky integrate-and-fire neuron hardware. Using this Fe-OES system and neuromorphic hardware, we successfully demonstrate three flame processing tasks: efficient flame detection across all time periods with terminal and cloud-based alarms; flame motion recognition with a lightweight convolutional neural network achieving 96.47% accuracy; and flame light recognition with 90.51% accuracy by means of a photosensitive artificial neural system. This work provides effective tools and approaches for addressing a variety of complex flame detection tasks.
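As a software illustration of the leaky integrate-and-fire neuron mentioned in this abstract (the paper implements it in hardware), here is a minimal sketch; the time constant, threshold, and reset values are assumed, not taken from the paper:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. Illustrative only:
# tau, v_thresh, and v_reset are assumed parameters, not the paper's.

def lif_run(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate an input trace; return the membrane trace and spike times."""
    v = v_reset
    spikes, trace = [], []
    for t, i_in in enumerate(inputs):
        # Leaky integration: dv/dt = (-v + i_in) / tau
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:       # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset         # hard reset after the spike
        trace.append(v)
    return trace, spikes
```

A sustained suprathreshold input produces periodic spikes, while subthreshold input only charges the membrane, which is the amplification behaviour the abstract alludes to.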
Additional Links: PMID-41526779
@article {pmid41526779,
year = {2026},
author = {Wei, J and Ma, G and Liang, R and Wang, W and Chen, J and Guan, S and Jiang, J and Zhu, X and Cheng, Q and Shen, Y and Xia, Q and Wu, S and Wan, H and Zeng, L and Li, M and Wang, Y and Shen, L and Han, W and Wang, H},
title = {Ferroelectric Optoelectronic Sensor for Intelligent Flame Detection and In-Sensor Motion Perception.},
journal = {Nano-micro letters},
volume = {18},
number = {1},
pages = {123},
pmid = {41526779},
issn = {2150-5551},
abstract = {Next-generation fire safety systems demand precise detection and motion recognition of flames. In-sensor computing, which integrates sensing, memory, and processing capabilities, has emerged as a key technology in flame detection. However, the implementation of hardware-level functional demonstrations based on artificial vision systems in the solar-blind ultraviolet (UV) band (200-280 nm) is hindered by the weak detection capability. Here, we propose Ga2O3/In2Se3 heterojunctions for the ferroelectric (abbreviation: Fe) optoelectronic sensor (abbreviation: OES) array (5 × 5 pixels), which is capable of ultraweak UV light detection with an ultrahigh detectivity through ferroelectric regulation and features in configurable multimode functionality. The Fe-OES array can directly sense different flame motions and simulate the non-spiking gradient neurons of insect visual system. Moreover, the flame signal can be effectively amplified in combination with leaky integration-and-fire neuron hardware. Using this Fe-OES system and neuromorphic hardware, we successfully demonstrate three flame processing tasks: achieving efficient flame detection across all time periods with terminal and cloud-based alarms; flame motion recognition with a lightweight convolutional neural network achieving 96.47% accuracy; and flame light recognition with 90.51% accuracy by means of a photosensitive artificial neural system. This work provides effective tools and approaches for addressing a variety of complex flame detection tasks.},
}
RevDate: 2026-01-12
A Personalized Point-of-Care Platform for Discovery and Validation of miRNA Targets Using AI and Edge Computing Supporting Personalized Cancer Therapy.
IEEE journal of biomedical and health informatics, PP: [Epub ahead of print].
The paradigm of cancer therapy is rapidly shifting towards personalized precision medicine, yet current diagnostic approaches remain constrained by centralized laboratory infrastructure, creating critical delays between sample collection and therapeutic intervention. To address this limitation, we present GTMIT, a novel point-of-care (POC) platform integrating artificial intelligence (AI) and edge computing for real-time discovery and validation of microRNA (miRNA) targets directly at the patient's bedside. Unlike traditional laboratory-centric models, GTMIT with edge computing operates on local hardware resources (e.g., portable sequencers and mobile devices), enabling POC decision-making without reliance on the cloud. Our framework combines three key innovations: 1) a Transformer-GNN hybrid architecture with Power Normalization for robust miRNA-mRNA interaction prediction; 2) SNP-adaptive Gapped Pattern Graph Convolutional Networks (GP-GCN) accounting for patient-specific genetic variations; and 3) edge therapeutic optimization incorporating regional cancer prevalence patterns and resource constraints. We evaluate our proposed platform on several clinical datasets. GTMIT demonstrates excellent performance on a range of metrics, achieving 94% AUC, 87% precision, and 79% recall on benchmark datasets. By bridging molecular diagnostics with immediate intervention at the POC, GTMIT reduces time-to-treatment from days to minutes, particularly benefiting resource-limited settings.
Additional Links: PMID-41525639
@article {pmid41525639,
year = {2026},
author = {Li, D and Li, C and Zhu, F and Chen, X and Mishra, S and Routray, S},
title = {A Personalized Point-of-Care Platform for Discovery and Validation of miRNA Targets Using AI and Edge Computing Supporting Personalized Cancer Therapy.},
journal = {IEEE journal of biomedical and health informatics},
volume = {PP},
number = {},
pages = {},
doi = {10.1109/JBHI.2025.3626933},
pmid = {41525639},
issn = {2168-2208},
abstract = {The paradigm of cancer therapy is rapidly shifting towards personalized precision medicine, yet current diagnostic approaches remain constrained by centralized laboratory infrastructure, creating critical delays between sample collection and therapeutic intervention. To address this limitation, we present GTMIT, a novel point-of-care (POC) platform integrating artificial intelligence (AI) and edge computing for real-time discovery and validation of microRNA (miRNA) targets directly at the patient's bedside. Unlike traditional laboratory-centric models, GTMIT with edge computing operates on local hardware resources (e.g., portable sequencers and mobile devices), enabling POC decision-making without reliance on the cloud. Our framework combines three key innovations: 1) A Transformer-GNN hybrid architecture with Power Normalization for robust miRNA-mRNA interaction prediction; 2) SNP-adaptive Gapped Pattern Graph Convolutional Networks (GP-GCN) accounting for patient-specific genetic variations; and 3) Edge therapeutic optimization incorporating regional cancer prevalence patterns and resource constraints. We evaluate our proposed platform on several clinical datasets. GTMIT demonstrates excellent performance on a range of metrics, achieving 94% AUC, 87% precision, and 79% recall on benchmark datasets. By bridging molecular diagnostics with immediate intervention at the POC, GTMIT reduces time-to-treatment from days to minutes, particularly benefiting resource-limited settings.},
}
RevDate: 2026-01-12
CmpDate: 2026-01-12
Data management for distributed computational workflows: An iRODS-based setup and its performance.
PloS one, 21(1):e0340757 pii:PONE-D-24-57570.
Modern data-management frameworks promise a flexible and efficient management of data and metadata across storage backends. However, such claims need to be put to a meaningful test in daily practice. We conjecture that such frameworks should be fit to construct a data backend for workflows which use geographically distributed high-performance and cloud computing systems. Cross-site data transfers within such a backend should largely saturate network bandwidth, in particular when parameters such as buffer sizes are optimized. To explore this further, we evaluate the "integrated Rule-Oriented Data System" iRODS with EUDAT's B2SAFE module as data backend for the "Distributed Data Infrastructure" within the LEXIS Platform for complex computing workflow orchestration and distributed data management. The focus of our study is on testing our conjectures-i.e., on construction and assessment of the data infrastructure and on measurements of data-transfer performance over the wide-area network between two selected supercomputing sites connected to LEXIS. We analyze limitations and identify optimization opportunities. Efficient utilization of the available network bandwidth is possible and depends on suitable client configuration and file size. Our work shows that systems such as iRODS nowadays fit the requirements for integration in federated computing infrastructures involving web-based authentication flows with OpenID Connect and rich on-line services. We are continuing to exploit these properties in the EXA4MIND project, where we aim at optimizing data-heavy workflows, integrating various systems for managing structured and unstructured data.
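The study's finding that transfer efficiency "depends on suitable client configuration and file size" can be illustrated with a toy throughput model: a fixed per-file overhead (connection setup, authentication, metadata) is amortized only by large files. The bandwidth and overhead figures below are assumed round numbers, not measurements from the study:

```python
# Toy model of wide-area transfer efficiency vs. file size.
# Illustrative only: link bandwidth and per-file overhead are assumptions.

def effective_throughput(file_size_mb, bandwidth_mbps=10_000, per_file_overhead_s=0.5):
    """Achieved rate in Mbit/s for one file, given a fixed per-file overhead."""
    transfer_s = (file_size_mb * 8) / bandwidth_mbps   # pure wire time
    return (file_size_mb * 8) / (per_file_overhead_s + transfer_s)

# Larger files amortize the fixed overhead and approach the link speed:
for size_mb in (1, 100, 10_000):
    print(size_mb, round(effective_throughput(size_mb)))
```

This is why bundling many small files into large archives (or tuning buffer sizes and parallel streams on the client) matters for saturating a wide-area link.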
Additional Links: PMID-41525253
@article {pmid41525253,
year = {2026},
author = {Hayek, M and Golasowski, M and Hachinger, S and García-Hernández, RJ and Munke, J and Lindner, G and Slaninová, K and Tunka, P and Vondrák, V and Kranzlmüller, D and Martinovič, J},
title = {Data management for distributed computational workflows: An iRODS-based setup and its performance.},
journal = {PloS one},
volume = {21},
number = {1},
pages = {e0340757},
doi = {10.1371/journal.pone.0340757},
pmid = {41525253},
issn = {1932-6203},
mesh = {*Workflow ; *Data Management/methods ; Cloud Computing ; Software ; },
abstract = {Modern data-management frameworks promise a flexible and efficient management of data and metadata across storage backends. However, such claims need to be put to a meaningful test in daily practice. We conjecture that such frameworks should be fit to construct a data backend for workflows which use geographically distributed high-performance and cloud computing systems. Cross-site data transfers within such a backend should largely saturate network bandwidth, in particular when parameters such as buffer sizes are optimized. To explore this further, we evaluate the "integrated Rule-Oriented Data System" iRODS with EUDAT's B2SAFE module as data backend for the "Distributed Data Infrastructure" within the LEXIS Platform for complex computing workflow orchestration and distributed data management. The focus of our study is on testing our conjectures-i.e., on construction and assessment of the data infrastructure and on measurements of data-transfer performance over the wide-area network between two selected supercomputing sites connected to LEXIS. We analyze limitations and identify optimization opportunities. Efficient utilization of the available network bandwidth is possible and depends on suitable client configuration and file size. Our work shows that systems such as iRODS nowadays fit the requirements for integration in federated computing infrastructures involving web-based authentication flows with OpenID Connect and rich on-line services. We are continuing to exploit these properties in the EXA4MIND project, where we aim at optimizing data-heavy workflows, integrating various systems for managing structured and unstructured data.},
}
MeSH Terms:
*Workflow
*Data Management/methods
Cloud Computing
Software
RevDate: 2026-01-10
CmpDate: 2026-01-10
A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning.
Foods (Basel, Switzerland), 15(1): pii:foods15010133.
As the global food industry expands and consumers demand higher food safety and quality standards, high-throughput detection technology utilizing digital intelligent optical sensors has emerged as a research hotspot in food testing due to its advantages of speed, precision, and non-destructive operation. Integrating cutting-edge achievements in optics, electronics, and computer science with machine learning algorithms, this technology efficiently processes massive datasets. This paper systematically summarizes the construction principles of intelligent optical sensors and their applications in food inspection. Sensors convert light signals into electrical signals using nanomaterials such as quantum dots, metal nanoparticles, and upconversion nanoparticles, and then employ machine learning algorithms including support vector machines, random forests, and convolutional neural networks for data analysis and model optimization. This enables efficient detection of target substances like pesticide residues, heavy metals, microorganisms, and food freshness. Furthermore, the integration of multiple detection mechanisms-including spectral analysis, fluorescence imaging, and hyperspectral imaging-has significantly broadened the sensors' application scenarios. Looking ahead, optical sensors will evolve toward multifunctional integration, miniaturization, and intelligent operation. By leveraging cloud computing and IoT technologies, they will deliver innovative solutions for comprehensive monitoring of food quality and safety across the entire supply chain.
Additional Links: PMID-41517198
@article {pmid41517198,
year = {2026},
author = {Wang, Y and Yang, Y and Liu, H},
title = {A Review of High-Throughput Optical Sensors for Food Detection Based on Machine Learning.},
journal = {Foods (Basel, Switzerland)},
volume = {15},
number = {1},
pages = {},
doi = {10.3390/foods15010133},
pmid = {41517198},
issn = {2304-8158},
support = {the National Key Research and Development Program (No. 2023YFF1104801)//Huilin Liu/ ; },
abstract = {As the global food industry expands and consumers demand higher food safety and quality standards, high-throughput detection technology utilizing digital intelligent optical sensors has emerged as a research hotspot in food testing due to its advantages of speed, precision, and non-destructive operation. Integrating cutting-edge achievements in optics, electronics, and computer science with machine learning algorithms, this technology efficiently processes massive datasets. This paper systematically summarizes the construction principles of intelligent optical sensors and their applications in food inspection. Sensors convert light signals into electrical signals using nanomaterials such as quantum dots, metal nanoparticles, and upconversion nanoparticles, and then employ machine learning algorithms including support vector machines, random forests, and convolutional neural networks for data analysis and model optimization. This enables efficient detection of target substances like pesticide residues, heavy metals, microorganisms, and food freshness. Furthermore, the integration of multiple detection mechanisms-including spectral analysis, fluorescence imaging, and hyperspectral imaging-has significantly broadened the sensors' application scenarios. Looking ahead, optical sensors will evolve toward multifunctional integration, miniaturization, and intelligent operation. By leveraging cloud computing and IoT technologies, they will deliver innovative solutions for comprehensive monitoring of food quality and safety across the entire supply chain.},
}
RevDate: 2026-01-10
CmpDate: 2026-01-10
Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems.
Sensors (Basel, Switzerland), 26(1): pii:s26010348.
Fog computing has revolutionized the world by providing its services close to the user premises, reducing communication latency for many real-time applications. This latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response times. Many applications, such as smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality, are delay-sensitive and require quick response times; response delays in critical healthcare applications can cause serious harm to patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to its close proximity to end users, fog computing is considered the most suitable computing platform for real-time applications. However, fog devices are resource-constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. Using the iFogSim toolkit, the proposed technique was evaluated in extensive simulations in terms of system response time, execution cost, and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.
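As a rough illustration of particle-swarm task scheduling of the kind this abstract describes, here is a generic PSO sketch that assigns tasks to fog nodes to minimize makespan. This is not the paper's MPSO; the task sizes, node speeds, and PSO coefficients are illustrative assumptions:

```python
import random

# Generic PSO task scheduler minimizing makespan (illustrative sketch,
# not the paper's MPSO). Continuous positions are decoded to node indices.

def makespan(assign, task_len, node_speed):
    load = [0.0] * len(node_speed)
    for t, n in enumerate(assign):
        load[n] += task_len[t] / node_speed[n]   # time contributed to node n
    return max(load)

def pso_schedule(task_len, node_speed, n_particles=20, iters=60, seed=0):
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_len), len(node_speed)
    decode = lambda p: [min(int(x), n_nodes - 1) for x in p]
    pos = [[rng.uniform(0, n_nodes) for _ in range(n_tasks)] for _ in range(n_particles)]
    vel = [[0.0] * n_tasks for _ in range(n_particles)]
    pbest = [row[:] for row in pos]
    pbest_f = [makespan(decode(p), task_len, node_speed) for p in pos]
    best_i = min(range(n_particles), key=lambda i: pbest_f[i])
    g, g_f = pbest[best_i][:], pbest_f[best_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_tasks):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social terms (assumed coefficients)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), n_nodes - 1e-9)
            f = makespan(decode(pos[i]), task_len, node_speed)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < g_f:
                    g, g_f = pos[i][:], f
    return decode(g), g_f
```

In the paper's setting the fitness would also fold in execution cost and response time; here only makespan is scored, to keep the sketch minimal.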
Additional Links: PMID-41516782
@article {pmid41516782,
year = {2026},
author = {Khan, S and Shah, IA and Loh, WK and Khan, JA and Mylonas, A and Pitropakis, N},
title = {Sensor Driven Resource Optimization Framework for Intelligent Fog Enabled IoHT Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010348},
pmid = {41516782},
issn = {1424-8220},
mesh = {Algorithms ; Humans ; *Cloud Computing ; },
abstract = {Fog computing has revolutionized the world by providing its services close to the user premises, which results in reducing the communication latency for many real-time applications. This communication latency has been a major constraint in cloud computing and ultimately causes user dissatisfaction due to slow response time. Many real-time applications like smart transportation, smart healthcare systems, smart cities, smart farming, video surveillance, and virtual and augmented reality are delay-sensitive real-time applications and require quick response times. The response delay in certain critical healthcare applications might cause serious loss to health patients. Therefore, by leveraging fog computing, a substantial portion of healthcare-related computational tasks can be offloaded to nearby fog nodes. This localized processing significantly reduces latency and enhances system availability, making it particularly advantageous for time-sensitive and mission-critical healthcare applications. Due to close proximity to end users, fog computing is considered to be the most suitable computing platform for real-time applications. However, fog devices are resource constrained and require proper resource management techniques for efficient resource utilization. This study presents an optimized resource allocation and scheduling framework for delay-sensitive healthcare applications using a Modified Particle Swarm Optimization (MPSO) algorithm. Using the iFogSim toolkit, the proposed technique was evaluated for many extensive simulations to obtain the desired results in terms of system response time, cost of execution and execution time. Experimental results demonstrate that the MPSO-based method reduces makespan by up to 8% and execution cost by up to 3% compared to existing metaheuristic algorithms, highlighting its effectiveness in enhancing overall fog computing performance for healthcare systems.},
}
MeSH Terms:
Algorithms
Humans
*Cloud Computing
RevDate: 2026-01-10
MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability.
Sensors (Basel, Switzerland), 26(1): pii:s26010314.
The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system's efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications.
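The Cache Component's role of decoupling data acquisition from transmission can be sketched with a bounded queue between two threads: the southbound side keeps acquiring while the northbound side drains at its own pace. The names and sizes below are illustrative, not the MIGS implementation:

```python
import queue
import threading

# Sketch of a cache decoupling acquisition from transmission: a bounded
# FIFO buffers samples so short uplink stalls do not drop data.
# Illustrative only; component names mirror the abstract, not real code.

cache = queue.Queue(maxsize=1000)   # bounded buffer between the two sides
sent = []                           # stands in for the cloud-side sink

def southbound_acquire(n_samples):
    for i in range(n_samples):
        cache.put(("sensor-1", i))  # blocks (back-pressure) if cache is full

def northbound_transmit(stop):
    # Drain until told to stop AND the cache is empty, preserving order.
    while not stop.is_set() or not cache.empty():
        try:
            sent.append(cache.get(timeout=0.1))  # "uplink" one sample
        except queue.Empty:
            pass

stop = threading.Event()
tx = threading.Thread(target=northbound_transmit, args=(stop,))
tx.start()
southbound_acquire(50)
stop.set()
tx.join()
```

Because the queue is bounded, a prolonged outage eventually exerts back-pressure on acquisition instead of exhausting memory, which is one common design choice for such gateways.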
Additional Links: PMID-41516748
@article {pmid41516748,
year = {2026},
author = {Ai, Y and Zhu, Y and Jiang, Y and Deng, Y},
title = {MIGS: A Modular Edge Gateway with Instance-Based Isolation for Heterogeneous Industrial IoT Interoperability.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010314},
pmid = {41516748},
issn = {1424-8220},
abstract = {The exponential proliferation of the Internet of Things (IoT) has catalyzed a paradigm shift in industrial automation and smart city infrastructure. However, this rapid expansion has engendered significant heterogeneity in communication protocols, creating critical barriers to seamless data integration and interoperability. Conventional gateway solutions frequently exhibit limited flexibility in supporting diverse protocol stacks simultaneously and often lack granular user controllability. To mitigate these deficiencies, this paper proposes a novel, modular IoT gateway architecture, designated as MIGS (Modular IoT Gateway System). The proposed architecture comprises four distinct components: a Management Component, a Southbound Component, a Northbound Component, and a Cache Component. Specifically, the Southbound Component employs instance-based isolation and independent task threading to manage heterogeneous field devices utilizing protocols such as Modbus, MQTT, and OPC UA. The Northbound Component facilitates reliable bidirectional data transmission with cloud platforms. A dedicated Cache Component is integrated to decouple data acquisition from transmission, ensuring data integrity during network latency. Furthermore, a web-based Control Service Module affords comprehensive runtime management. We explicate the data transmission methodology and formulate a theoretical latency model to quantify the impact of the Python Global Interpreter Lock (GIL) and serialization overhead. Functional validation and theoretical analysis confirm the system's efficacy in concurrent multi-protocol communication, robust data forwarding, and operational flexibility. The MIGS framework significantly enhances interoperability within heterogeneous IoT environments, offering a scalable solution for next-generation industrial applications.},
}
RevDate: 2026-01-10
CmpDate: 2026-01-10
A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks.
Sensors (Basel, Switzerland), 26(1): pii:s26010229.
Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging. Federated learning (FL) offers a way to train models across devices while keeping raw data local; when combined with edge, fog, or cloud computing, it can also support near-real-time mental health analysis. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.
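For context, the core aggregation step that federated-learning systems of this kind build on is FedAvg-style weighted averaging of client model weights: only parameters leave the device, weighted by local sample counts. This is a generic illustration, not taken from any reviewed study:

```python
# Generic FedAvg-style server aggregation: average per-client parameter
# vectors weighted by local dataset sizes; raw data never leaves a device.

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
        for d in range(dim)
    ]

# Two clients; the one holding 3x the samples pulls the global model its way:
print(fed_avg([[1.0, 0.0], [3.0, 2.0]], [1, 3]))  # -> [2.5, 1.5]
```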
Additional Links: PMID-41516665
@article {pmid41516665,
year = {2025},
author = {Fiaz, I and Kanwal, N and Al-Said Ahmad, A},
title = {A Systematic Review of Federated and Cloud Computing Approaches for Predicting Mental Health Risks.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010229},
pmid = {41516665},
issn = {1424-8220},
mesh = {*Cloud Computing ; Humans ; *Mental Health ; *Mental Disorders/diagnosis ; Wearable Electronic Devices ; Mobile Applications ; Telemedicine ; },
abstract = {Mental health disorders affect large numbers of people worldwide and are a major cause of long-term disability. Digital health technologies such as mobile apps and wearable devices now generate rich behavioural data that could support earlier detection and more personalised care. However, these data are highly sensitive and distributed across devices and platforms, which makes privacy protection and scalable analysis challenging; federated learning offers a way to train models across devices while keeping raw data local. When combined with edge, fog, or cloud computing, federated learning offers a way to support near-real-time mental health analysis while keeping raw data local. This review screened 1104 records, assessed 31 full-text articles using a five-question quality checklist, and retained 17 empirical studies that achieved a score of at least 7/10 for synthesis. The included studies were compared in terms of their FL and edge/cloud architectures, data sources, privacy and security techniques, and evidence for operation in real-world settings. The synthesis highlights innovative but fragmented progress, with limited work on comorbidity modelling, deployment evaluation, and common benchmarks, and identifies priorities for the development of scalable, practical, and ethically robust FL systems for digital mental health.},
}
MeSH Terms:
*Cloud Computing
Humans
*Mental Health
*Mental Disorders/diagnosis
Wearable Electronic Devices
Mobile Applications
Telemedicine
RevDate: 2026-01-10
A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC.
Sensors (Basel, Switzerland), 26(1): pii:s26010217.
The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev-Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms, sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems.
The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF-cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.
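To illustrate the challenge-response idea behind PUF-based authentication, here is a toy model in which a keyed hash stands in for the PUF. A real PUF derives its response from device-specific physical variation and, as the abstract notes, needs a fuzzy extractor to tolerate noise; all names below are hypothetical and this is not the PXRA protocol:

```python
import hashlib
import hmac
import secrets

# Toy challenge-response authentication. The device "PUF" is modeled as a
# keyed hash of the challenge; a hardware PUF would compute this response
# from physical variation instead of a stored key (hypothetical sketch).

DEVICE_SECRET = secrets.token_bytes(32)          # stands in for the PUF

def puf_response(challenge: bytes) -> bytes:
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# Enrollment: the server records one challenge-response pair (CRP).
challenge = secrets.token_bytes(16)
enrolled = puf_response(challenge)

# Authentication: the server replays the challenge; only a device that can
# regenerate the response is accepted (constant-time comparison).
def authenticate(device_reply: bytes) -> bool:
    return hmac.compare_digest(device_reply, enrolled)
```

The appeal for constrained XR terminals is that the device side needs only the PUF evaluation and a hash, while heavier public-key (e.g., ECC) operations can be offloaded to the cloud, as in the paper's design.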
Additional Links: PMID-41516652
@article {pmid41516652,
year = {2025},
author = {Cha, W and Lee, HJ and Kook, S and Kim, K and Won, D},
title = {A Lightweight Authentication and Key Distribution Protocol for XR Glasses Using PUF and Cloud-Assisted ECC.},
journal = {Sensors (Basel, Switzerland)},
volume = {26},
number = {1},
pages = {},
doi = {10.3390/s26010217},
pmid = {41516652},
issn = {1424-8220},
abstract = {The rapid convergence of artificial intelligence (AI), cloud computing, and 5G communication has positioned extended reality (XR) as a core technology bridging the physical and virtual worlds. Encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), XR has demonstrated transformative potential across sectors such as healthcare, industry, education, and defense. However, the compact architecture and limited computational capabilities of XR devices render conventional cryptographic authentication schemes inefficient, while the real-time transmission of biometric and positional data introduces significant privacy and security vulnerabilities. To overcome these challenges, this study introduces PXRA (PUF-based XR authentication), a lightweight and secure authentication and key distribution protocol optimized for cloud-assisted XR environments. PXRA utilizes a physically unclonable function (PUF) for device-level hardware authentication and offloads elliptic curve cryptography (ECC) operations to the cloud to enhance computational efficiency. Authenticated encryption with associated data (AEAD) ensures message confidentiality and integrity, while formal verification through ProVerif confirms the protocol's robustness under the Dolev-Yao adversary model. Experimental results demonstrate that PXRA reduces device-side computational overhead by restricting XR terminals to lightweight PUF and hash functions, achieving an average authentication latency below 15 ms sufficient for real-time XR performance. Formal analysis verifies PXRA's resistance to replay, impersonation, and key compromise attacks, while preserving user anonymity and session unlinkability. These findings establish the feasibility of integrating hardware-based PUF authentication with cloud-assisted cryptographic computation to enable secure, scalable, and real-time XR systems. 
The proposed framework lays a foundation for future XR applications in telemedicine, remote collaboration, and immersive education, where both performance and privacy preservation are paramount. Our contribution lies in a hybrid PUF-cloud ECC architecture, context-bound AEAD for session-splicing resistance, and a noise-resilient BCH-based fuzzy extractor supporting up to 15% BER.},
}
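The PXRA abstract above centers on PUF challenge-response authentication. A minimal sketch of that pattern, with a keyed hash standing in for the hardware PUF (a real PUF is a physical circuit, and all names here are illustrative, not PXRA's actual protocol):

```python
import hashlib
import hmac
import os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for a hardware PUF: a keyed hash models the device-unique,
    # unclonable mapping from challenge to response.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Enrollment: the server records challenge-response pairs for the device.
device_secret = os.urandom(32)        # models the device-unique physics (assumption)
challenge = os.urandom(16)
enrolled_response = puf_response(device_secret, challenge)

# Authentication: the server sends the challenge plus a fresh nonce; the
# device proves possession of the PUF without revealing the raw response,
# and the nonce prevents replay of an old proof.
nonce = os.urandom(16)
device_proof = hashlib.sha256(puf_response(device_secret, challenge) + nonce).digest()
server_check = hashlib.sha256(enrolled_response + nonce).digest()
assert hmac.compare_digest(device_proof, server_check)
```

The abstract's BCH-based fuzzy extractor would sit between the raw (noisy) PUF output and this hashing step, which assumes a noise-free response.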
RevDate: 2026-01-10
CmpDate: 2026-01-10
An Efficient Clinical Decision Support Framework Using IoMT Based on Explainable and Trustworthy Artificial Intelligence with Transformer Model and Blockchain-Integrated Chunking.
Diagnostics (Basel, Switzerland), 16(1): pii:diagnostics16010007.
Background/Objectives: The use of edge-cloud architectures has increased rapidly to move the analysis of AI-enabled health data to global environments. However, data security, communication overhead, cost-effectiveness, and data transmission losses are still important problems to be solved. Methods: In this paper, we propose a reliable, explainable, and energy-efficient stress detection framework supported by a cost-oriented blockchain-based content-defined chunking approach to minimise the losses during data transfer. In the proposed architecture, the Nurse Stress dataset represents IoMT data. While the chunking process reduces communication volume and storage costs by avoiding data duplication, blockchain technology eliminates the risks of unauthorised access and manipulation by ensuring the immutability and traceability of data blocks. Results: All Transformer-based models have demonstrated over 99% accuracy. The TimesNet model, in particular, has been designated as the system's reference model, exhibiting superior performance in terms of both stability and accuracy. The main contribution of this study lies in proposing one of the first integrated frameworks that jointly employs chunking-based data management, blockchain-enabled trust mechanisms, and edge-cloud computing with XAI to ensure secure and transparent IoMT data processing. The proposed system not only performs highly accurate stress detection, but also optimises the dimensions of reliable data transmission, energy and cost efficiency, and clinical reliability. Conclusions: In this respect, the study presents a scalable, reliable, and repeatable approach in health decision support systems by combining data security, integrity, and explainability issues, which are addressed separately in the literature, in a holistic manner.
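The content-defined chunking step this abstract relies on can be sketched as follows; the rolling hash and cut mask here are toy illustrations of the general technique, not the paper's actual parameters:

```python
import hashlib

def chunk_boundaries(data: bytes, mask: int = 0x3F, window: int = 16):
    """Content-defined chunking: cut wherever a rolling hash of recent
    bytes matches a boundary pattern, so identical content produces
    identical chunks regardless of its offset in the stream."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF   # toy rolling hash (illustrative)
        if i - start + 1 >= window and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_store(chunks):
    # Chunks are addressed by their hash, so duplicated content is stored
    # (and transmitted) only once -- the cost saving the abstract targets.
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

data = bytes(range(256)) * 8
chunks = chunk_boundaries(data)
assert b"".join(chunks) == data           # chunking is lossless
store = dedup_store(chunks)
```

In the paper's architecture, the chunk hashes (not shown here) would additionally be anchored in blockchain blocks to make tampering detectable.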
Additional Links: PMID-41515502
@article {pmid41515502,
year = {2025},
author = {Arslanoğlu, K and Karaköse, M},
title = {An Efficient Clinical Decision Support Framework Using IoMT Based on Explainable and Trustworthy Artificial Intelligence with Transformer Model and Blockchain-Integrated Chunking.},
journal = {Diagnostics (Basel, Switzerland)},
volume = {16},
number = {1},
pages = {},
doi = {10.3390/diagnostics16010007},
pmid = {41515502},
issn = {2075-4418},
abstract = {Background/Objectives: The use of edge-cloud architectures has increased rapidly to move the analysis of AI-enabled health data to global environments. However, data security, communication overhead, cost-effectiveness, and data transmission losses are still important problems to be solved. Methods: In this paper, we propose a reliable, explainable, and energy-efficient stress detection framework supported by a cost-oriented blockchain-based content-defined chunking approach to minimise the losses during data transfer. In the proposed architecture, the Nurse Stress dataset represents IoMT data. While the chunking process reduces communication volume and storage costs by avoiding data duplication, blockchain technology eliminates the risks of unauthorised access and manipulation by ensuring the immutability and traceability of data blocks. Results: All Transformer-based models have demonstrated over 99% accuracy. The TimesNet model, in particular, has been designated as the system's reference model, exhibiting superior performance in terms of both stability and accuracy. The main contribution of this study lies in proposing one of the first integrated frameworks that jointly employs chunking-based data management, blockchain-enabled trust mechanisms, and edge-cloud computing with XAI to ensure secure and transparent IoMT data processing. The proposed system not only performs highly accurate stress detection, but also optimises the dimensions of reliable data transmission, energy and cost efficiency, and clinical reliability. Conclusions: In this respect, the study presents a scalable, reliable, and repeatable approach in health decision support systems by combining data security, integrity, and explainability issues, which are addressed separately in the literature, in a holistic manner.},
}
RevDate: 2026-01-09
Enhancing patient admission efficiency through a hybrid cloud framework for medical record sharing.
Scientific reports pii:10.1038/s41598-026-35014-6 [Epub ahead of print].
The fragmentation of patient data across multiple healthcare institutions presents a significant challenge to realizing timely and effective treatment. Although electronic medical records have replaced traditional paper records, they often remain isolated within individual hospital information systems, limiting data exchange and preventing physicians from accessing complete medical histories during patient admission. These restrictions hinder the efficiency of diagnosis and treatment, particularly in critical care settings, such as emergency departments. Cloud computing provides a promising solution by enabling controlled electronic medical record sharing, thereby improving the continuity and quality of care. This study presents a system-level, multi-layered hybrid cloud architecture framework designed to facilitate seamless and managed exchange of electronic medical records among healthcare organizations. To further enhance operational efficiency, the system integrates fingerprint authentication based on hashed identifiers for rapid patient identification and an Internet of Things bracelet for real-time monitoring of vital signs. System performance was evaluated using discrete-event simulation implemented in the OMNeT++ framework, with simulation parameters informed by real emergency department data from three hospitals in Saudi Arabia. The evaluation considers multiple workflow scenarios and incorporates repeated simulation runs to assess performance stability. The simulation results indicate consistent reductions in average patient waiting times, while treatment durations remain stable and patient throughput increases. These findings highlight the potential of the proposed framework to enhance electronic medical record management, streamline clinical workflows, and improve operational efficiency in time-critical environments.
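The "fingerprint authentication based on hashed identifiers" described above can be sketched as a salted-hash index lookup; names and the record locator are hypothetical, and the sketch assumes the biometric has already been stabilized into a repeatable template (raw scans are noisy):

```python
import hashlib
import os

def hashed_id(fingerprint_template: bytes, salt: bytes) -> str:
    # The shared index stores only a salted hash, never the raw biometric.
    return hashlib.sha256(salt + fingerprint_template).hexdigest()

salt = os.urandom(16)                      # per-deployment salt (assumption)
index = {}                                 # hashed identifier -> record locator

# Registration at hospital A.
template = b"stabilized-minutiae-template"  # placeholder for a real template
index[hashed_id(template, salt)] = "hospital-A/emr/12345"

# Admission elsewhere: re-derive the hash from a fresh scan, look up the record.
record = index.get(hashed_id(template, salt))
assert record == "hospital-A/emr/12345"
```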
Additional Links: PMID-41513951
@article {pmid41513951,
year = {2026},
author = {Abughazalah, M and Alsaggaf, W and Saifuddin, S and Sarhan, S},
title = {Enhancing patient admission efficiency through a hybrid cloud framework for medical record sharing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35014-6},
pmid = {41513951},
issn = {2045-2322},
abstract = {The fragmentation of patient data across multiple healthcare institutions presents a significant challenge to realizing timely and effective treatment. Although electronic medical records have replaced traditional paper records, they often remain isolated within individual hospital information systems, limiting data exchange and preventing physicians from accessing complete medical histories during patient admission. These restrictions hinder the efficiency of diagnosis and treatment, particularly in critical care settings, such as emergency departments. Cloud computing provides a promising solution by enabling controlled electronic medical record sharing, thereby improving the continuity and quality of care. This study presents a system-level, multi-layered hybrid cloud architecture framework designed to facilitate seamless and managed exchange of electronic medical records among healthcare organizations. To further enhance operational efficiency, the system integrates fingerprint authentication based on hashed identifiers for rapid patient identification and an Internet of Things bracelet for real-time monitoring of vital signs. System performance was evaluated using discrete-event simulation implemented in the OMNeT++ framework, with simulation parameters informed by real emergency department data from three hospitals in Saudi Arabia. The evaluation considers multiple workflow scenarios and incorporates repeated simulation runs to assess performance stability. The simulation results indicate consistent reductions in average patient waiting times, while treatment durations remain stable and patient throughput increases. These findings highlight the potential of the proposed framework to enhance electronic medical record management, streamline clinical workflows, and improve operational efficiency in time-critical environments.},
}
RevDate: 2026-01-09
CmpDate: 2026-01-09
SNAP: Streamlined Nextflow Analysis Pipeline for Immunoprecipitation-Based Epigenomic Profiling of Circulating Chromatin.
bioRxiv : the preprint server for biology.
Epigenomic profiling of circulating chromatin is a powerful and minimally invasive approach for detecting and monitoring disease, but there are no bioinformatics pipelines tailored to the unique characteristics of cell-free chromatin. We present SNAP (Streamlined Nextflow Analysis Pipeline), a reproducible, scalable, and modular workflow specifically designed for immunoprecipitation-based methods for profiling cell-free chromatin. SNAP incorporates quality control metrics optimized for circulating chromatin, including enrichment score and fragment count thresholds, as well as direct estimation of circulating tumor DNA (ctDNA) content from fragment length distributions. It also includes SNP fingerprinting to enable sample identity verification. When applied to cfChIP-seq and cfMeDIP-seq data across multiple cancer types, SNAP's quality filters significantly improved classification performance while maintaining high data retention. Independent validation using plasma from patients with osteosarcoma confirmed the detection of tumor-associated epigenomic signatures that correlated with ctDNA levels and reflected disease biology. SNAP's modular architecture enables straightforward extension to additional cell-free immunoprecipitation-based assays, providing a robust framework to support studies of circulating chromatin broadly. SNAP is compatible with cloud and high-performance computing environments and is publicly available at https://github.com/prc992/SNAP/ .
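The per-sample quality gate the SNAP abstract describes (enrichment-score and fragment-count thresholds) amounts to a simple filter; the field names and cutoffs below are illustrative, not SNAP's actual defaults:

```python
def passes_qc(sample: dict, min_enrichment: float = 2.0,
              min_fragments: int = 1_000_000) -> bool:
    # Keep a sample only if both circulating-chromatin QC metrics clear
    # their thresholds (values here are hypothetical).
    return (sample["enrichment_score"] >= min_enrichment
            and sample["fragment_count"] >= min_fragments)

samples = [
    {"id": "s1", "enrichment_score": 3.1, "fragment_count": 4_200_000},
    {"id": "s2", "enrichment_score": 1.2, "fragment_count": 5_000_000},  # low enrichment
]
kept = [s["id"] for s in samples if passes_qc(s)]
assert kept == ["s1"]
```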
Additional Links: PMID-41509217
@article {pmid41509217,
year = {2025},
author = {Zhang, Z and Da Silva Cordeiro, P and Chhetri, SB and Fortunato, B and Jin, Z and El Hajj Chehade, R and Semaan, K and Gulati, G and Lee, GG and Hemauer, C and Bian, W and Sotudian, S and Zhang, Z and Osei-Hwedieh, D and Heim, TE and Painter, C and Nawfal, R and Eid, M and Vasseur, D and Canniff, J and Savignano, H and Phillips, N and Seo, JH and Weiss, KR and Freedman, ML and Baca, SC},
title = {SNAP: Streamlined Nextflow Analysis Pipeline for Immunoprecipitation-Based Epigenomic Profiling of Circulating Chromatin.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
pmid = {41509217},
issn = {2692-8205},
abstract = {Epigenomic profiling of circulating chromatin is a powerful and minimally invasive approach for detecting and monitoring disease, but there are no bioinformatics pipelines tailored to the unique characteristics of cell-free chromatin. We present SNAP (Streamlined Nextflow Analysis Pipeline), a reproducible, scalable, and modular workflow specifically designed for immunoprecipitation-based methods for profiling cell-free chromatin. SNAP incorporates quality control metrics optimized for circulating chromatin, including enrichment score and fragment count thresholds, as well as direct estimation of circulating tumor DNA (ctDNA) content from fragment length distributions. It also includes SNP fingerprinting to enable sample identity verification. When applied to cfChIP-seq and cfMeDIP-seq data across multiple cancer types, SNAP's quality filters significantly improved classification performance while maintaining high data retention. Independent validation using plasma from patients with osteosarcoma confirmed the detection of tumor-associated epigenomic signatures that correlated with ctDNA levels and reflected disease biology. SNAP's modular architecture enables straightforward extension to additional cell-free immunoprecipitation-based assays, providing a robust framework to support studies of circulating chromatin broadly. SNAP is compatible with cloud and high-performance computing environments and is publicly available at https://github.com/prc992/SNAP/ .},
}
RevDate: 2026-01-07
Credibility measurement of cloud services based on information entropy and Markov chain.
Scientific reports pii:10.1038/s41598-026-35346-3 [Epub ahead of print].
Despite the rapid advancement of cloud computing technologies, user skepticism about service credibility remains a major barrier to adoption of cloud services. At present, there is not a comprehensive and systematic understanding of the factors that affect the credibility of cloud services. In view of the uncertainty and correlation between the factors of cloud service credibility, this study analyzed the user's demand for credit and credibility. The cloud service credibility attributes were divided into six dimensions: cloud service visibility, controllability, security, reliability, cloud service provider viability and user satisfaction. A cloud service credibility measurement model combining information entropy and Markov chain was established, which could calculate the uncertainty of each factor in the attribute model. The degree of influence on the credibility of cloud service and the credibility level of cloud service provider are calculated in the model. The experimental validation demonstrates that the information entropy and Markov chain model achieves a 15% improvement in prediction accuracy compared to traditional AHP methods, with particularly notable enhancements in dynamic scenario adaptability, which helps users make informed decisions when selecting cloud services.
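The information-entropy half of the model above can be sketched with the standard entropy-weighting calculation: attributes whose scores vary more across providers carry lower entropy and therefore more weight. The score matrix is illustrative:

```python
import math

def entropy_weights(matrix):
    """matrix[i][j] is provider i's score on credibility attribute j.
    Returns one weight per attribute via the entropy-weight method."""
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        # Normalized Shannon entropy of the attribute's score distribution.
        h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1 - h)                  # lower entropy -> more informative
    s = sum(raw)
    return [w / s for w in raw]

# Attribute 1 differentiates providers; attribute 2 is uniform across them.
scores = [[0.9, 0.5], [0.8, 0.5], [0.2, 0.5]]
w = entropy_weights(scores)
assert w[0] > w[1]                         # the varying attribute dominates
```

The Markov-chain half of the paper's model (not sketched here) would then track transitions between credibility levels over time.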
Additional Links: PMID-41501126
@article {pmid41501126,
year = {2026},
author = {Ou, L and Yu, J},
title = {Credibility measurement of cloud services based on information entropy and Markov chain.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-026-35346-3},
pmid = {41501126},
issn = {2045-2322},
support = {JAT230720//Science and Technology Project of Fujian Provincial Department of Education, China/ ; SHE2524//2025 Higher Education Research Project of Sanming University in China/ ; },
abstract = {Despite the rapid advancement of cloud computing technologies, user skepticism about service credibility remains a major barrier to adoption of cloud services. At present, there is not a comprehensive and systematic understanding of the factors that affect the credibility of cloud services. In view of the uncertainty and correlation between the factors of cloud service credibility, this study analyzed the user's demand for credit and credibility. The cloud service credibility attributes were divided into six dimensions: cloud service visibility, controllability, security, reliability, cloud service provider viability and user satisfaction. A cloud service credibility measurement model combining information entropy and Markov chain was established, which could calculate the uncertainty of each factor in the attribute model. The degree of influence on the credibility of cloud service and the credibility level of cloud service provider are calculated in the model. The experimental validation demonstrates that the information entropy and Markov chain model achieves a 15% improvement in prediction accuracy compared to traditional AHP methods, with particularly notable enhancements in dynamic scenario adaptability, which helps users make informed decisions when selecting cloud services.},
}
RevDate: 2026-01-07
CmpDate: 2026-01-07
The future of big data and artificial intelligence on dairy farms: A proposed dairy data ecosystem.
JDS communications, 6(Suppl 1):S9-S14.
The dairy sector should overcome challenges in productivity, sustainability, and data management by adopting intelligent, scalable, and privacy-preserving technological solutions. Adopting data and artificial intelligence (AI) technologies is essential to ensure efficient operations and informed decision making and to keep a competitive market advantage. This paper proposes an integrated, multimodal AI framework to support data-intensive dairy farm operations by leveraging big data principles and advancing them through AI technologies. The proposed architecture incorporates edge computing, autonomous AI agents, and federated learning to enable real-time, privacy-preserving analytics at the farm level and promote knowledge sharing and refinement through research farms and cloud collaboration. Farms collect heterogeneous data, which can be transformed into embeddings for both local inference and cloud analysis. These embeddings form the input of AI agents that support health monitoring, risk prediction, operational optimization, and decision making. Privacy is preserved by sharing only model weights or anonymized data externally. The edge layer handles time-sensitive tasks and communicates with a centralized enterprise cloud hosting global models and distributing updates. A research and development cloud linked to research farms ensures model testing and validation. The entire system is orchestrated by autonomous AI agents that manage data, choose models, and interact with stakeholders, and human oversight ensures safe decisions, as illustrated in the practical use case of mastitis management. This architecture could support data integrity, scalability, and real-time personalization, along with opening up space for partnerships between farms, research institutions, and regulatory bodies to promote secure, cross-sector innovation.
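The federated-learning pattern in the abstract above, where farms share only model weights rather than raw data, reduces in its simplest form to size-weighted averaging of client weights (FedAvg). Weights are plain lists here; a production system would use a federated framework:

```python
def fedavg(client_weights, client_sizes):
    # Average each parameter across clients, weighted by local dataset size.
    # Only these weight vectors leave the farm -- never the raw records.
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
            for k in range(n_params)]

farm_a = [0.2, 0.8]   # local model weights after training on farm A's data
farm_b = [0.6, 0.4]
global_model = fedavg([farm_a, farm_b], client_sizes=[100, 300])
assert global_model == [0.5, 0.5]
```

The enterprise cloud in the proposed ecosystem would run this aggregation step and redistribute the updated global model to the edge layer.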
Additional Links: PMID-41497383
@article {pmid41497383,
year = {2025},
author = {Hostens, M and Franceschini, S and van Leerdam, M and Yang, H and Pokharel, S and Liu, E and Niu, P and Zhang, H and Noor, S and Hermans, K and Salamone, M and Sharma, S},
title = {The future of big data and artificial intelligence on dairy farms: A proposed dairy data ecosystem.},
journal = {JDS communications},
volume = {6},
number = {Suppl 1},
pages = {S9-S14},
pmid = {41497383},
issn = {2666-9102},
abstract = {The dairy sector should overcome challenges in productivity, sustainability, and data management by adopting intelligent, scalable, and privacy-preserving technological solutions. Adopting data and artificial intelligence (AI) technologies is essential to ensure efficient operations and informed decision making and to keep a competitive market advantage. This paper proposes an integrated, multimodal AI framework to support data-intensive dairy farm operations by leveraging big data principles and advancing them through AI technologies. The proposed architecture incorporates edge computing, autonomous AI agents, and federated learning to enable real-time, privacy-preserving analytics at the farm level and promote knowledge sharing and refinement through research farms and cloud collaboration. Farms collect heterogeneous data, which can be transformed into embeddings for both local inference and cloud analysis. These embeddings form the input of AI agents that support health monitoring, risk prediction, operational optimization, and decision making. Privacy is preserved by sharing only model weights or anonymized data externally. The edge layer handles time-sensitive tasks and communicates with a centralized enterprise cloud hosting global models and distributing updates. A research and development cloud linked to research farms ensures model testing and validation. The entire system is orchestrated by autonomous AI agents that manage data, choose models, and interact with stakeholders, and human oversight ensures safe decisions, as illustrated in the practical use case of mastitis management. This architecture could support data integrity, scalability, and real-time personalization, along with opening up space for partnerships between farms, research institutions, and regulatory bodies to promote secure, cross-sector innovation.},
}
RevDate: 2026-01-06
Two-Tier heuristic search for ransomware-as-a-service based cyberattack défense analysis using explainable Bayesian deep learning model.
Scientific reports, 16(1):437.
Data security assurance is essential owing to the improving popularity of cloud computing and its extensive usage through several industries, particularly in light of the increasing number of cyber-security attacks. Ransomware-as-a-service (RaaS) attacks are prominent and widespread, allowing uniform individuals with minimum technology to perform ransomware processes. While RaaS methods have declined the access barriers for cyber threats, generative artificial intelligence (AI) growth might result in new possibilities for offenders. The high prevalence of RaaS-based cyberattacks poses essential challenges to cybersecurity, requiring progressive and understandable defensive mechanisms. Furthermore, deep or machine learning (ML) methods mainly provide a black box, giving no data about how it functions. Understanding the details of a classification model’s decision can be beneficial for understanding the work way to be identified. This study presents a novel Two-Tier Metaheuristic Algorithm for Cyberattack Defense Analysis using Explainable Artificial Intelligence based Bayesian Deep Learning (TTMCDA-XAIBDL) method. The main intention of the TTMCDA-XAIBDL method is to detect and mitigate ransomware cyber threats. Initially, the TTMCDA-XAIBDL method performs data preprocessing using Z-score normalization to ensure standardization and scalability of features. Next, the improved sand cat swarm optimization (ISCSO) technique is used for the feature selection. The Bayesian neural network (BNN) is employed to classify cyberattack defence. Moreover, the BNN’s hyperparameters are fine-tuned using the whale optimization algorithm (WOA) model, optimizing its performance for effective detection of ransomware threats. Finally, the XAI using SHAP is integrated to provide explainability, offering perceptions of the model’s decision-making procedure and adopting trust in the system. 
To demonstrate the effectiveness of the TTMCDA-XAIBDL technique, a series of simulations are conducted using a ransomware detection dataset to evaluate its classification performance. The performance validation of the TTMCDA-XAIBDL technique portrayed a superior accuracy value of 99.29% over the recent methods.
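The Z-score normalization used in the preprocessing step of the TTMCDA-XAIBDL pipeline rescales each feature to zero mean and unit variance so no feature dominates by scale; a minimal sketch:

```python
import math

def z_score(column):
    # Population z-score: subtract the mean, divide by the standard deviation.
    mu = sum(column) / len(column)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in column) / len(column))
    return [(x - mu) / sigma for x in column]

values = [10.0, 20.0, 30.0]
z = z_score(values)
assert abs(sum(z)) < 1e-9                                # zero mean
assert abs(sum(v * v for v in z) / len(z) - 1) < 1e-9    # unit variance
```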
Additional Links: PMID-41490912
@article {pmid41490912,
year = {2026},
author = {Almuflih, AS},
title = {Two-Tier heuristic search for ransomware-as-a-service based cyberattack défense analysis using explainable Bayesian deep learning model.},
journal = {Scientific reports},
volume = {16},
number = {1},
pages = {437},
pmid = {41490912},
issn = {2045-2322},
abstract = {Data security assurance is essential owing to the improving popularity of cloud computing and its extensive usage through several industries, particularly in light of the increasing number of cyber-security attacks. Ransomware-as-a-service (RaaS) attacks are prominent and widespread, allowing uniform individuals with minimum technology to perform ransomware processes. While RaaS methods have declined the access barriers for cyber threats, generative artificial intelligence (AI) growth might result in new possibilities for offenders. The high prevalence of RaaS-based cyberattacks poses essential challenges to cybersecurity, requiring progressive and understandable defensive mechanisms. Furthermore, deep or machine learning (ML) methods mainly provide a black box, giving no data about how it functions. Understanding the details of a classification model’s decision can be beneficial for understanding the work way to be identified. This study presents a novel Two-Tier Metaheuristic Algorithm for Cyberattack Defense Analysis using Explainable Artificial Intelligence based Bayesian Deep Learning (TTMCDA-XAIBDL) method. The main intention of the TTMCDA-XAIBDL method is to detect and mitigate ransomware cyber threats. Initially, the TTMCDA-XAIBDL method performs data preprocessing using Z-score normalization to ensure standardization and scalability of features. Next, the improved sand cat swarm optimization (ISCSO) technique is used for the feature selection. The Bayesian neural network (BNN) is employed to classify cyberattack defence. Moreover, the BNN’s hyperparameters are fine-tuned using the whale optimization algorithm (WOA) model, optimizing its performance for effective detection of ransomware threats. Finally, the XAI using SHAP is integrated to provide explainability, offering perceptions of the model’s decision-making procedure and adopting trust in the system. 
To demonstrate the effectiveness of the TTMCDA-XAIBDL technique, a series of simulations are conducted using a ransomware detection dataset to evaluate its classification performance. The performance validation of the TTMCDA-XAIBDL technique portrayed a superior accuracy value of 99.29% over the recent methods.},
}
RevDate: 2026-01-04
Computing power network dynamic resource scheduling integrating time series mixing dynamic state estimation and hierarchical reinforcement learning.
Scientific reports pii:10.1038/s41598-025-32753-w [Epub ahead of print].
With the evolution of cloud computing towards a multi-cloud architecture, cross-cloud resource scheduling faces challenges such as heterogeneous environment adaptation and slow dynamic load response. How to improve resource utilization while ensuring service quality has become a core challenge in the field of cloud management. To address this need, we propose the TSL-HRL intelligent scheduling framework, which integrates time-series feature modeling and hierarchical reinforcement learning. The framework utilizes a time-series mixing module to deeply mine the periodic fluctuations and burst demand features of computing, storage, and network resources. It integrates a dynamic state estimation module with Kalman filtering to capture real-time changes in resource supply and demand. Additionally, it constructs a high-level planning - low-level response hierarchical reinforcement learning architecture: the high-level Q-learning algorithm formulates a global long-term resource allocation strategy to ensure optimal overall scheduling, while the low-level A2C algorithm adjusts the execution plan based on real-time network fluctuations and node load, enabling fast adaptation to dynamic changes, forming a macro-micro collaborative decision mechanism. In experiments on the Multi-Cloud Service Composition Dataset and Google 2019 Cluster dynamic node scenarios, TSL-HRL effectively balanced resource utilization efficiency and scheduling real-time performance with its three-level architecture design of time-series feature extraction - dynamic state perception - hierarchical strategy optimization. The study shows that TSL-HRL provides a systematic solution for resource management in multi-cloud environments. 
Future research will focus on lightweight extensions for edge-cloud collaborative scenarios, multi-objective energy consumption optimization frameworks, and meta-learning-driven rapid adaptation technologies, promoting the application and generalization of intelligent resource scheduling technologies in real-world complex scenarios.
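The high-level planner in the TSL-HRL abstract above uses Q-learning; its core is the tabular value update shown below. The load-level states and scaling actions are illustrative stand-ins, not the paper's actual state or action spaces:

```python
alpha, gamma = 0.1, 0.9    # learning rate and discount factor (illustrative)
ACTIONS = ("scale_out", "stay")
Q = {(s, a): 0.0 for s in ("high_load", "low_load") for a in ACTIONS}

def q_update(state, action, reward, next_state):
    # Standard Q-learning: move Q(s, a) toward the bootstrapped target
    # r + gamma * max_a' Q(s', a').
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

q_update("high_load", "scale_out", reward=1.0, next_state="low_load")
assert Q[("high_load", "scale_out")] == alpha * 1.0   # 0 + 0.1 * (1 + 0.9*0 - 0)
```

In the paper's two-tier design, this slow global update is paired with a low-level A2C policy that reacts to per-step load fluctuations.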
Additional Links: PMID-41486178
@article {pmid41486178,
year = {2026},
author = {Liu, H and Zhang, S and Li, L and Sun, T and Xue, W and Yao, X and Xu, Y},
title = {Computing power network dynamic resource scheduling integrating time series mixing dynamic state estimation and hierarchical reinforcement learning.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-32753-w},
pmid = {41486178},
issn = {2045-2322},
abstract = {With the evolution of cloud computing towards a multi-cloud architecture, cross-cloud resource scheduling faces challenges such as heterogeneous environment adaptation and slow dynamic load response. How to improve resource utilization while ensuring service quality has become a core challenge in the field of cloud management. To address this need, we propose the TSL-HRL intelligent scheduling framework, which integrates time-series feature modeling and hierarchical reinforcement learning. The framework utilizes a time-series mixing module to deeply mine the periodic fluctuations and burst demand features of computing, storage, and network resources. It integrates a dynamic state estimation module with Kalman filtering to capture real-time changes in resource supply and demand. Additionally, it constructs a high-level planning - low-level response hierarchical reinforcement learning architecture: the high-level Q-learning algorithm formulates a global long-term resource allocation strategy to ensure optimal overall scheduling, while the low-level A2C algorithm adjusts the execution plan based on real-time network fluctuations and node load, enabling fast adaptation to dynamic changes, forming a macro-micro collaborative decision mechanism. In experiments on the Multi-Cloud Service Composition Dataset and Google 2019 Cluster dynamic node scenarios, TSL-HRL effectively balanced resource utilization efficiency and scheduling real-time performance with its three-level architecture design of time-series feature extraction - dynamic state perception - hierarchical strategy optimization. The study shows that TSL-HRL provides a systematic solution for resource management in multi-cloud environments. 
Future research will focus on lightweight extensions for edge-cloud collaborative scenarios, multi-objective energy consumption optimization frameworks, and meta-learning-driven rapid adaptation technologies, promoting the application and generalization of intelligent resource scheduling technologies in real-world complex scenarios.},
}
RevDate: 2026-01-04
Data security storage and transmission framework for AI computing power platforms.
Scientific reports pii:10.1038/s41598-025-31786-5 [Epub ahead of print].
In the era of rapidly expanding artificial intelligence (AI) applications, ensuring secure data storage and transmission within AI computing power platforms remains a critical challenge. This research presents a novel data security storage and transmission system, termed as secure artificial intelligence data storage and transmission (Secure AI-DST), tailored for AI computing environments. The proposed framework integrates a hybrid encryption mechanism that combines Amended Merkle Tree (AMerT) hashing with Secret Elliptic Curve Cryptography (SEllC) enhanced data confidentiality. For secure storage and decentralization, the system leverages blockchain with InterPlanetary File System (IPFS) integration, ensuring tamper-proof and scalable data handling. To classify various attack types, a novel deep learning model attention bidirectional gated recurrent unit-assisted residual network (Att-BGR) is deployed, offering accurate detection of intrusions. Simulation studies conducted in MATLAB® 2023b using both synthetic and real-time datasets show that the Secure AI-DST system reduces unauthorized access attempts by 92.7%, maintains data integrity with 99.98% accuracy under simulated cyberattacks, and achieves a packet validation success rate of 97.6% across edge-to-cloud transmissions. Furthermore, the proposed method introduces only a 4.3% computational overhead, making it highly suitable for real-time AI workloads. These outcomes confirm the effectiveness of Secure AI-DST in ensuring end-to-end data guard, resilience against cyber threats, and scalable presentation for next-generation AI computing substructures.
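The Merkle-tree hashing underlying the integrity guarantees above (the paper uses an amended variant, AMerT; this sketch shows the plain construction) pairs hashes upward to a single root, so any change to a stored block changes the root and is detectable:

```python
import hashlib

def merkle_root(leaves):
    # Hash every data block, then repeatedly hash adjacent pairs until one
    # root remains; an odd node at any level is duplicated.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2"]
root = merkle_root(blocks)
tampered = merkle_root([b"block-0", b"block-X", b"block-2"])
assert root != tampered        # any altered block changes the root
```

In the proposed system, such a root would be anchored on-chain while the blocks themselves live in IPFS.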
Additional Links: PMID-41484422
@article {pmid41484422,
year = {2026},
author = {Chen, J and Lu, Z and Zheng, H and Ren, Z and Chen, Y and Shang, J},
title = {Data security storage and transmission framework for AI computing power platforms.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-31786-5},
pmid = {41484422},
issn = {2045-2322},
abstract = {In the era of rapidly expanding artificial intelligence (AI) applications, ensuring secure data storage and transmission within AI computing power platforms remains a critical challenge. This research presents a novel data security storage and transmission system, termed Secure Artificial Intelligence Data Storage and Transmission (Secure AI-DST), tailored for AI computing environments. The proposed framework integrates a hybrid encryption mechanism that combines Amended Merkle Tree (AMerT) hashing with Secret Elliptic Curve Cryptography (SEllC) for enhanced data confidentiality. For secure, decentralized storage, the system leverages blockchain with InterPlanetary File System (IPFS) integration, ensuring tamper-proof and scalable data handling. To classify attack types, a novel deep learning model, an attention bidirectional gated recurrent unit-assisted residual network (Att-BGR), is deployed, offering accurate intrusion detection. Simulation studies conducted in MATLAB® 2023b using both synthetic and real-time datasets show that the Secure AI-DST system reduces unauthorized access attempts by 92.7%, maintains data integrity with 99.98% accuracy under simulated cyberattacks, and achieves a packet validation success rate of 97.6% across edge-to-cloud transmissions. Furthermore, the proposed method introduces only a 4.3% computational overhead, making it highly suitable for real-time AI workloads. These outcomes confirm the effectiveness of Secure AI-DST in ensuring end-to-end data protection, resilience against cyber threats, and scalable performance for next-generation AI computing infrastructures.},
}
RevDate: 2026-01-03
Ensemble deep learning approach for traffic video analytics in edge computing.
Scientific reports pii:10.1038/s41598-025-25628-7 [Epub ahead of print].
Video analytics is the new era of computer vision for identifying and classifying objects. Traffic surveillance videos can be analysed using computer vision to comprehend road traffic. Monitoring road traffic in real time is essential to controlling it. Computer vision helps identify the vehicles on the road, but present techniques perform the video analysis either on the cloud platform or on the edge platform: the former introduces too much processing delay when control is needed in real time, while the latter is not accurate in estimating the current road traffic. YOLO algorithms are the most notable for efficient real-time object detection; to make such object detection feasible in lightweight environments, its smaller version, Tiny YOLO, is used. Edge computing is an efficient framework for performing computation at the edge of the physical layer, without moving data into the cloud, to reduce latency. A novel hybrid model of vehicle detection and classification using Tiny YOLO and YOLOR is constructed at the edge layer. This hybrid model processes video frames at a higher rate and produces a traffic estimate. The numerical traffic volume is sent to Ensemble Learning in Traffic Video Analytics (ELITVA), which uses F-RNN to make decisions that reduce traffic flow seamlessly. Experimental results on a drone dataset captured at road signals show an increase in precision by 13.8%, accuracy by 4.8%, recall by 17.4%, F1 score by 19.9%, and frame processing rate by 12.8% compared to other existing traffic surveillance systems, along with efficient control of road traffic.
Additional Links: PMID-41484116
@article {pmid41484116,
year = {2026},
author = {Sathyamoorthy, M and Rajasekar, V and Krishnamoorthi, S and Pamucar, D},
title = {Ensemble deep learning approach for traffic video analytics in edge computing.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-25628-7},
pmid = {41484116},
issn = {2045-2322},
abstract = {Video analytics is the new era of computer vision for identifying and classifying objects. Traffic surveillance videos can be analysed using computer vision to comprehend road traffic. Monitoring road traffic in real time is essential to controlling it. Computer vision helps identify the vehicles on the road, but present techniques perform the video analysis either on the cloud platform or on the edge platform: the former introduces too much processing delay when control is needed in real time, while the latter is not accurate in estimating the current road traffic. YOLO algorithms are the most notable for efficient real-time object detection; to make such object detection feasible in lightweight environments, its smaller version, Tiny YOLO, is used. Edge computing is an efficient framework for performing computation at the edge of the physical layer, without moving data into the cloud, to reduce latency. A novel hybrid model of vehicle detection and classification using Tiny YOLO and YOLOR is constructed at the edge layer. This hybrid model processes video frames at a higher rate and produces a traffic estimate. The numerical traffic volume is sent to Ensemble Learning in Traffic Video Analytics (ELITVA), which uses F-RNN to make decisions that reduce traffic flow seamlessly. Experimental results on a drone dataset captured at road signals show an increase in precision by 13.8%, accuracy by 4.8%, recall by 17.4%, F1 score by 19.9%, and frame processing rate by 12.8% compared to other existing traffic surveillance systems, along with efficient control of road traffic.},
}
RevDate: 2026-01-04
CmpDate: 2026-01-02
Collaborative optimization of computational offloading and resource allocation based on Stackelberg game.
PloS one, 21(1):e0339955.
The exponential growth of the Internet of Things and mobile edge computing has intensified the need for substantial data processing and instantaneous response. Consequently, collaboration among the cloud, the edge, and the end devices has become a key computing paradigm. However, in this architecture task scheduling is complex, resources are heterogeneous and dynamic, and achieving low-latency, energy-efficient task processing remains a serious challenge. Addressing the lack of dynamic collaborative optimization in existing research, this paper introduces a collaborative optimization approach for computational offloading and resource allocation that uses a Stackelberg game to maximize the system's total utility. First, an overall utility model integrating delay, energy consumption, and revenue is constructed for application scenarios involving multiple cloud servers, multiple edge servers, and multiple users. Subsequently, a three-tier Stackelberg game model is developed in which the cloud assumes the role of leader, setting resource pricing strategies; the edge operates as sub-leader, fine-tuning the allocation of computational resources in line with the cloud's strategy; and the mobile terminal acts as follower, optimizing the computation offloading ratio in response to the strategies of the higher tiers. Next, through game equilibrium analysis, the existence and uniqueness of the Stackelberg equilibrium are proven. Finally, BI-PRO, a backward-induction-based resource pricing, allocation, and computation-offloading optimization algorithm, is proposed. The experimental findings indicate that the proposed Stackelberg game method optimizes the system's total revenue and maintains stable performance across various scenarios, confirming its superiority and robustness.
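The backward-induction logic behind a leader-follower pricing game can be illustrated with a toy two-tier version; the utility functions below are hypothetical stand-ins, not the paper's three-tier model or BI-PRO algorithm:

```python
import numpy as np

# Toy Stackelberg game solved by backward induction: the leader (cloud) sets a
# resource price anticipating the follower's (terminal's) best response.
def follower_best_response(price: float) -> float:
    # Follower picks an offload ratio x in [0, 1] maximizing benefit minus payment
    # (concave benefit 10*sqrt(x), linear cost price*x -- illustrative utilities).
    xs = np.linspace(0.0, 1.0, 1001)
    utility = 10 * np.sqrt(xs) - price * xs
    return float(xs[int(np.argmax(utility))])

def leader_optimal_price() -> tuple[float, float]:
    # Backward induction: evaluate leader revenue against the anticipated response.
    prices = np.linspace(0.1, 20.0, 400)
    revenues = [p * follower_best_response(p) for p in prices]
    best = int(np.argmax(revenues))
    return float(prices[best]), float(revenues[best])

p_star, rev = leader_optimal_price()  # equilibrium price and leader revenue
```

With these utilities the follower offloads fully whenever the price is low, so the leader's revenue peaks where the follower's interior response just reaches the boundary; a grid search finds that equilibrium without closed-form derivation.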
Additional Links: PMID-41481652
@article {pmid41481652,
year = {2026},
author = {Li, L and Yu, Q and Wang, C and Zhao, J and Lv, J and Wang, S and Hu, C},
title = {Collaborative optimization of computational offloading and resource allocation based on Stackelberg game.},
journal = {PloS one},
volume = {21},
number = {1},
pages = {e0339955},
pmid = {41481652},
issn = {1932-6203},
mesh = {*Resource Allocation/methods ; *Game Theory ; *Cloud Computing ; Algorithms ; Cooperative Behavior ; Humans ; Models, Theoretical ; },
abstract = {The exponential growth of the Internet of Things and mobile edge computing has intensified the need for substantial data processing and instantaneous response. Consequently, collaboration among the cloud, the edge, and the end devices has become a key computing paradigm. However, in this architecture task scheduling is complex, resources are heterogeneous and dynamic, and achieving low-latency, energy-efficient task processing remains a serious challenge. Addressing the lack of dynamic collaborative optimization in existing research, this paper introduces a collaborative optimization approach for computational offloading and resource allocation that uses a Stackelberg game to maximize the system's total utility. First, an overall utility model integrating delay, energy consumption, and revenue is constructed for application scenarios involving multiple cloud servers, multiple edge servers, and multiple users. Subsequently, a three-tier Stackelberg game model is developed in which the cloud assumes the role of leader, setting resource pricing strategies; the edge operates as sub-leader, fine-tuning the allocation of computational resources in line with the cloud's strategy; and the mobile terminal acts as follower, optimizing the computation offloading ratio in response to the strategies of the higher tiers. Next, through game equilibrium analysis, the existence and uniqueness of the Stackelberg equilibrium are proven. Finally, BI-PRO, a backward-induction-based resource pricing, allocation, and computation-offloading optimization algorithm, is proposed. The experimental findings indicate that the proposed Stackelberg game method optimizes the system's total revenue and maintains stable performance across various scenarios, confirming its superiority and robustness.},
}
RevDate: 2026-01-02
CmpDate: 2026-01-02
MorphoCloud: Democratizing Access to High-Performance Computing for Morphological Data Analysis.
ArXiv pii:2512.21408.
The digitization of biological specimens has revolutionized the field of morphology, creating large collections of 3D data, microCT in particular. This revolution was initially supported by the development of open-source software tools, specifically the SlicerMorph extension to the open-source image analytics platform 3D Slicer. Through SlicerMorph and 3D Slicer, biologists, morphologists, and scientists in related fields have all the necessary tools to import, visualize, and analyze these complex and large datasets in a single platform that is flexible and expandable, without the need for proprietary software that hinders scientific collaboration and sharing. Yet a significant "compute gap" remains: while data and software are now open and accessible, the high-end computing resources needed to run them are not equally accessible across institutions, and are particularly lacking at Primarily Undergraduate Institutions (PUIs) and other educational settings. Here, we present MorphoCloud, an "IssuesOps"-based platform that leverages GitHub Actions and the JetStream2 cloud farm to provide on-demand, research-grade computing environments to researchers working with 3D morphological datasets. By delivering a GPU-accelerated full desktop experience via a web browser, MorphoCloud eliminates hardware barriers, enabling complex 3D analysis and AI-assisted segmentation. This paper explains the platform and its architecture, as well as the use cases it is designed to support.
Additional Links: PMID-41479453
@article {pmid41479453,
year = {2025},
author = {Maga, AM and Fillion-Robin, JC},
title = {MorphoCloud: Democratizing Access to High-Performance Computing for Morphological Data Analysis.},
journal = {ArXiv},
volume = {},
number = {},
pages = {},
pmid = {41479453},
issn = {2331-8422},
abstract = {The digitization of biological specimens has revolutionized the field of morphology, creating large collections of 3D data, and microCT in particular. This revolution was initially supported by the development of open-source software tools, specifically the development of SlicerMorph extension to the open-source image analytics platform 3D Slicer. Through SlicerMorph and 3D Slicer, biologists, morphologists and scientists in related fields have all the necessary tools to import, visualize and analyze these complex and large datasets in a single platform that is flexible and expandible, without the need of proprietary software that hinders scientific collaboration and sharing. Yet, a significant "compute gap" remains: While data and software are now open and accessible, the necessary high-end computing resources to run them are often not equally accessible in all institutions, and particularly lacking at Primarily Undergraduate Institutions (PUIs) and other educational settings. Here, we present MorphoCloud, an "IssuesOps"-based platform that leverages Github Actions and the JetStream2 cloud farm to provide on-demand, research-grade computing environments to researchers working with 3D morphological datasets. By delivering a GPU-accelerated full desktop experience via a web browser, MorphoCloud eliminates hardware barriers, enabling complex 3D analysis and AI-assisted segmentation. This paper explains the platform and its architecture, as well as use cases it is designed to support.},
}
RevDate: 2025-12-31
Scalable photonic reservoir computing for parallel machine learning tasks.
Nature communications pii:10.1038/s41467-025-67983-z [Epub ahead of print].
Neuromorphic photonics enables brain-inspired information processing with higher bandwidth and lower energy consumption than traditional electronics, addressing the growing computational demands of the Internet of Things, cloud services, and edge computing. However, even current state-of-the-art electronic and photonic platforms are incapable of delivering the scalable throughput, multitasking processing, and energy efficiency required by these applications. Here, we demonstrate a tunable photonic reservoir computing device based on a nonlinear amplifying loop mirror (NALM), leveraging a time-delayed, single-unit, all-optical architecture. By combining dense temporal encoding with wavelength-division multiplexing, the system supports concurrent multitasking across independent data channels, enabling scalable computational performance without additional hardware complexity. Experiments and theoretical validation on classification and prediction benchmarks demonstrate the device's performance, achieving a throughput of 20 tera-operations-per-second and an energy efficiency of 4.4 fJ per operation. These results highlight a promising path towards reconfigurable, compact, and high-performance photonic processors for real-time intelligent applications.
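The readout-training principle of reservoir computing, which the paper realizes optically in a NALM loop, can be sketched with a software echo state network; the reservoir size, spectral radius, and sine-prediction benchmark below are illustrative choices, not the paper's setup:

```python
import numpy as np

# Software analogue of reservoir computing: a fixed random recurrent network
# (the "reservoir") plus a trained linear readout -- only the readout is fit.
rng = np.random.default_rng(0)
N = 100                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input sequence u; return all states."""
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in[:, 0] * ut)   # nonlinear node dynamics
        states.append(x.copy())
    return np.array(states)

# Benchmark-style task: predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out = np.linalg.lstsq(X[200:], y[200:], rcond=None)[0]  # linear readout, after washout
pred = X[200:] @ W_out
```

The time-delayed single-unit architecture in the paper replaces the random network with one physical nonlinear node sampled at many virtual delay taps, but the training step is the same cheap linear regression.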
Additional Links: PMID-41476165
@article {pmid41476165,
year = {2025},
author = {Aadhi, A and Di Lauro, L and Fischer, B and Dmitriev, P and Alamgir, I and Mazoukh, C and Perron, N and Viktorov, EA and Kovalev, AV and Eshaghi, A and Vakili, S and Chemnitz, M and Roztocki, P and Little, BE and Chu, ST and Moss, DJ and Morandotti, R},
title = {Scalable photonic reservoir computing for parallel machine learning tasks.},
journal = {Nature communications},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41467-025-67983-z},
pmid = {41476165},
issn = {2041-1723},
abstract = {Neuromorphic photonics enables brain-inspired information processing with higher bandwidth and lower energy consumption than traditional electronics, addressing the growing computational demands of the Internet of Things, cloud services, and edge computing. However, even current state-of-the-art electronic and photonic platforms are incapable of delivering the scalable throughput, multitasking processing, and energy efficiency required by these applications. Here, we demonstrate a tunable photonic reservoir computing device based on a nonlinear amplifying loop mirror (NALM), leveraging a time-delayed, single-unit, all-optical architecture. By combining dense temporal encoding with wavelength-division multiplexing, the system supports concurrent multitasking across independent data channels, enabling scalable computational performance without additional hardware complexity. Experiments and theoretical validation on classification and prediction benchmarks demonstrate the device's performance, achieving a throughput of 20 tera-operations-per-second and an energy efficiency of 4.4 fJ per operation. These results highlight a promising path towards reconfigurable, compact, and high-performance photonic processors for real-time intelligent applications.},
}
RevDate: 2026-01-02
CmpDate: 2025-12-31
MTBseq-nf: Enabling Scalable Tuberculosis Genomics "Big Data" Analysis Through a User-Friendly Nextflow Wrapper for MTBseq Pipeline.
Microorganisms, 13(12):.
The MTBseq pipeline, published in 2018, was designed to address bioinformatics challenges in tuberculosis (TB) research using whole-genome sequencing (WGS) data. It was the first publicly available tool on GitHub to perform full analysis of WGS data for the Mycobacterium tuberculosis complex (MTBC), encompassing quality control through mapping, variant calling for lineage classification, drug resistance prediction, and phylogenetic inference. However, the pipeline's architecture is not optimal for analyses in high-performance or cloud computing environments, which often involve large datasets. To overcome this limitation, we developed MTBseq-nf, a Nextflow wrapper that provides parallelization for faster execution in addition to several other significant enhancements. The MTBseq-nf wrapper can run several instances of the same step in parallel, fully utilizing the available resources, unlike the linear, batched analysis of samples in the TBfull step of the MTBseq pipeline. To evaluate scalability and reproducibility, we used 90 M. tuberculosis genomes (European Nucleotide Archive (ENA) accession PRJEB7727) for benchmarking on a dedicated computational server. In our benchmarks, MTBseq-nf in its parallel mode is at least twice as fast as the standard MTBseq pipeline for cohorts exceeding 20 samples. Through integration with the best practices of the nf-core, Bioconda, and Biocontainers projects, MTBseq-nf ensures reproducibility and platform independence, providing a scalable and efficient solution for TB genomic surveillance.
Additional Links: PMID-41471889
@article {pmid41471889,
year = {2025},
author = {Sharma, A and Marcon, DJ and Loubser, J and Lima, KVB and van der Spuy, G and Conceição, EC},
title = {MTBseq-nf: Enabling Scalable Tuberculosis Genomics "Big Data" Analysis Through a User-Friendly Nextflow Wrapper for MTBseq Pipeline.},
journal = {Microorganisms},
volume = {13},
number = {12},
pages = {},
pmid = {41471889},
issn = {2076-2607},
support = {445784/2023-7//National Council for Scientific and Technological Development/ ; 3083687//Oracle Cloud credits/ ; PhD Scholarship//National Research Foundation/ ; },
abstract = {The MTBseq pipeline, published in 2018, was designed to address bioinformatics challenges in tuberculosis (TB) research using whole-genome sequencing (WGS) data. It was the first publicly available tool on GitHub to perform full analysis of WGS data for Mycobacterium tuberculosis complex (MTBC) encompassing quality control through mapping, variant calling for lineage classification, drug resistance prediction, and phylogenetic inference. However, the pipeline's architecture is not optimal for analyses on high-performance computing or cloud computing environments that often involve large datasets. To overcome this limitation, we developed MTBseq-nf, a Nextflow wrapper that provides parallelization for faster execution speeds in addition to several other significant enhancements. The MTBseq-nf wrapper can run several instances of the same step in parallel, fully utilizing the available resources, unlike the linear, batched analysis of samples in the TBfull step of the MTBseq pipeline. For evaluation of scalability and reproducibility, we used 90 M. tuberculosis genomes (European Nucleotide Archive-ENA accession PRJEB7727) for the benchmarking analysis on a dedicated computational server. In our benchmarks, MTBseq-nf in its parallel mode is at least twice as fast as the standard MTBseq pipeline for cohorts exceeding 20 samples. Through integration with the best practices of nf-core, Bioconda, and Biocontainers projects MTBseq-nf ensures reproducibility and platform independence, providing a scalable and efficient solution for TB genomic surveillance.},
}
RevDate: 2026-01-03
CmpDate: 2025-12-31
Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases.
Sensors (Basel, Switzerland), 25(24):.
The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.
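The FedAvg aggregation step at the heart of the framework reduces to a dataset-size-weighted average of client weights; the toy one-layer "models" below stand in for the EfficientNet-B0 parameters the paper actually trains:

```python
import numpy as np

# Minimal FedAvg sketch: the server averages per-layer client weights,
# weighted by local dataset size, and no raw data ever leaves a node.
def fed_avg(client_weights: list[list[np.ndarray]],
            client_sizes: list[int]) -> list[np.ndarray]:
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three IoT nodes with one-layer "models" and unequal data volumes.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])], [np.array([5.0, 6.0])]]
sizes = [100, 100, 200]
global_model = fed_avg(clients, sizes)  # node 3's update counts double
```

In the hierarchical pipeline described above, the same aggregation would simply be applied separately to the crop classifier and to each crop-specific disease model.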
Additional Links: PMID-41471641
@article {pmid41471641,
year = {2025},
author = {Papanikolaou, A and Tziouvaras, A and Floros, G and Xenakis, A and Bonsignorio, F},
title = {Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471641},
issn = {1424-8220},
mesh = {*Deep Learning ; *Plant Diseases ; *Internet of Things ; Algorithms ; Neural Networks, Computer ; Crops, Agricultural ; Plant Leaves ; },
abstract = {The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.},
}
RevDate: 2026-01-03
CmpDate: 2025-12-31
Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems.
Sensors (Basel, Switzerland), 25(24):.
The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.
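Of the three primitives, the HMAC-SHA256 integrity layer can be sketched with Python's standard library alone (the ECC and AES components require a cryptography library; the key and payload below are illustrative placeholders, not the paper's protocol):

```python
import hashlib
import hmac

# Encrypt-then-MAC style integrity check: the edge node tags the ciphertext,
# the receiver recomputes and compares the tag before decrypting.
def tag(key: bytes, ciphertext: bytes) -> bytes:
    return hmac.new(key, ciphertext, hashlib.sha256).digest()

def verify(key: bytes, ciphertext: bytes, received_tag: bytes) -> bool:
    # compare_digest is constant-time, resisting timing side channels
    return hmac.compare_digest(tag(key, ciphertext), received_tag)

key = b"shared-edge-key"                 # placeholder; would come from ECC key exchange
ct = b"...aes-encrypted-vitals..."       # placeholder AES ciphertext
t = tag(key, ct)
assert verify(key, ct, t)                # untampered message is accepted
assert not verify(key, ct + b"x", t)     # any modification is rejected
```

Performing this check at the edge, before any cloud hop, is what lets the framework cut both latency and cloud dependency for integrity verification.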
Additional Links: PMID-41471577
@article {pmid41471577,
year = {2025},
author = {Ghani, NA and Bagustari, BA and Ahmad, M and Tolle, H and Kurnianingtyas, D},
title = {Edge-Enabled Hybrid Encryption Framework for Secure Health Information Exchange in IoT-Based Smart Healthcare Systems.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471577},
issn = {1424-8220},
support = {IMG005-2023//University of Malaya/ ; 01703/UN10.A0101/B/TU.01.00.1/2024//University of Brawijaya/ ; },
mesh = {*Computer Security ; *Health Information Exchange ; *Internet of Things ; Humans ; Confidentiality ; Delivery of Health Care ; Algorithms ; Cloud Computing ; },
abstract = {The integration of the Internet of Things (IoT) and edge computing is transforming healthcare by enabling real-time acquisition, processing, and exchange of sensitive patient data close to the data source. However, the distributed nature of IoT-enabled smart healthcare systems exposes them to severe security and privacy risks during health information exchange (HIE). This study proposes an edge-enabled hybrid encryption framework that combines elliptic curve cryptography (ECC), HMAC-SHA256, and the Advanced Encryption Standard (AES) to ensure data confidentiality, integrity, and efficient computation in healthcare communication networks. The proposed model minimizes latency and reduces cloud dependency by executing encryption and verification at the network edge. It provides the first systematic comparison of hybrid encryption configurations for edge-based HIE, evaluating CPU usage, memory consumption, and scalability across varying data volumes. Experimental results demonstrate that the ECC + HMAC-SHA256 + AES configuration achieves high encryption efficiency and strong resistance to attacks while maintaining lightweight processing suitable for edge devices. This approach provides a scalable and secure solution for protecting sensitive health data in next-generation IoT-enabled smart healthcare systems.},
}
RevDate: 2026-01-03
Two Novel Cloud-Masking Algorithms Tested in a Tropical Forest Setting Using High-Resolution NICFI-Planet Basemaps.
Sensors (Basel, Switzerland), 25(24):.
The high-resolution NICFI-Planet image collection on Google Earth Engine (GEE) promises fine-scale tropical forest monitoring, but persistent cloud cover, shadows, and haze undermine its value. Here, we present two simple, fully reproducible cloud-masking algorithms: (A) a Blue and Near-Infrared threshold and (B) a Sentinel-2-derived statistical thresholding approach that sets per-band cutoffs. Both are implemented end-to-end in GEE for operational use. The algorithms were first developed, tuned, and evaluated in the Sundarbans (Bangladesh) using strongly contrasting dry- and monsoon-season scenes. To assess their broader utility, we additionally tested them in two independent deltaic mangrove systems: the Bidyadhari Delta in West Bengal, India, and the Ayeyarwady Delta in Myanmar. Across all sites, Algorithm B consistently removes the largest share of cloud and bright-water pixels but tends to over-mask haze and low-contrast features. Algorithm A retains more usable pixels; however, its aggressiveness is region-dependent: it is more conservative in the Sundarbans but noticeably more over-inclusive in the India and Myanmar scenes. A map produced by a Random Forest classifier offers a useful reference, but that model depends on the quantity and quality of labeled samples. The novelty of the algorithms lies in their design specifically for NICFI-Planet basemaps and their ability to operate without labeled samples. Because they rely on simple, fully shareable GEE code, they can be readily applied across regions in a consistent manner. The two algorithms offer a pragmatic operational pathway: apply them as a first-pass filter, keeping in mind that their behavior may vary across environments.
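Per-band statistical thresholding of the kind Algorithm B uses can be sketched as mean-plus-k-standard-deviations cutoffs; the k = 2 factor and the synthetic scene below are illustrative assumptions, not the published Sentinel-2-derived values:

```python
import numpy as np

# Sketch of per-band statistical cloud masking: a pixel is flagged as cloud
# when it is anomalously bright in every band simultaneously.
def cloud_mask(image: np.ndarray, k: float = 2.0) -> np.ndarray:
    """image: (rows, cols, bands) reflectance; returns True where cloudy."""
    mean = image.mean(axis=(0, 1))
    std = image.std(axis=(0, 1))
    cutoff = mean + k * std                # one cutoff per band
    return (image > cutoff).all(axis=2)    # bright in all bands => cloud

scene = np.full((50, 50, 4), 0.2)          # dull vegetated background
scene[10:15, 10:15] = 0.95                 # a bright, white "cloud" patch
mask = cloud_mask(scene)                   # True only over the patch
```

Because the cutoffs come from scene statistics rather than labeled samples, the same code runs unchanged on new regions, which mirrors the label-free property the abstract emphasizes.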
Additional Links: PMID-41471553
@article {pmid41471553,
year = {2025},
author = {Islam, KMA and Abir, S and Kennedy, R},
title = {Two Novel Cloud-Masking Algorithms Tested in a Tropical Forest Setting Using High-Resolution NICFI-Planet Basemaps.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471553},
issn = {1424-8220},
support = {80NSSC23K0245//This work was supported by a grant from NASA's SERVIR program under agreement 80NSSC23K0245/ ; },
abstract = {The high-resolution NICFI-Planet image collection on Google Earth Engine (GEE) promises fine-scale tropical forest monitoring, but persistent cloud cover, shadows, and haze undermine its value. Here, we present two simple, fully reproducible cloud-masking algorithms: (A) a Blue and Near-Infrared threshold and (B) a Sentinel-2-derived statistical thresholding approach that sets per-band cutoffs. Both are implemented end-to-end in GEE for operational use. The algorithms were first developed, tuned, and evaluated in the Sundarbans (Bangladesh) using strongly contrasting dry- and monsoon-season scenes. To assess their broader utility, we additionally tested them in two independent deltaic mangrove systems: the Bidyadhari Delta in West Bengal, India, and the Ayeyarwady Delta in Myanmar. Across all sites, Algorithm B consistently removes the largest share of cloud and bright-water pixels but tends to over-mask haze and low-contrast features. Algorithm A retains more usable pixels; however, its aggressiveness is region-dependent: it is more conservative in the Sundarbans but noticeably more over-inclusive in the India and Myanmar scenes. A map produced by a Random Forest classifier offers a useful reference, but that model depends on the quantity and quality of labeled samples. The novelty of the algorithms lies in their design specifically for NICFI-Planet basemaps and their ability to operate without labeled samples. Because they rely on simple, fully shareable GEE code, they can be readily applied across regions in a consistent manner. The two algorithms offer a pragmatic operational pathway: apply them as a first-pass filter, keeping in mind that their behavior may vary across environments.},
}
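As a rough illustration of the Blue + NIR thresholding idea described in the abstract above, the sketch below applies a joint two-band cutoff with NumPy. The reflectance values and thresholds are hypothetical placeholders, not the paper's tuned cutoffs, and the paper's actual implementation runs server-side in GEE rather than on local arrays.

```python
import numpy as np

def blue_nir_cloud_mask(blue, nir, blue_thresh=0.25, nir_thresh=0.35):
    """Flag pixels as cloud when both blue and NIR reflectance are high.

    Bright clouds reflect strongly in visible blue and near-infrared,
    while vegetation is dark in blue and water is dark in NIR, so
    requiring both bands to exceed their cutoffs reduces false positives.
    Thresholds here are illustrative, not the paper's values.
    """
    blue = np.asarray(blue, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (blue > blue_thresh) & (nir > nir_thresh)

# Toy 2x2 scene: [cloud, vegetation; water, hazy land]
blue = np.array([[0.40, 0.05], [0.10, 0.30]])
nir = np.array([[0.50, 0.45], [0.02, 0.20]])
mask = blue_nir_cloud_mask(blue, nir)
print(mask)  # only the bright-in-both-bands pixel is flagged
```

In GEE the same logic would be expressed with per-band `gt()` comparisons and `updateMask()` on an `ee.Image`, which is what lets the method run operationally over entire basemap mosaics.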
RevDate: 2026-01-03
The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication.
Sensors (Basel, Switzerland), 25(24):.
Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel, edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms. This approach reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Finally, a content-aware deterministic message-queue data-distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which utilizes a "rules first" scheduling strategy and dynamic resource allocation, guarantees low latency for critical instructions even when responding to many concurrent data messages. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads involving over 100 heterogeneous data sources. These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains.
Additional Links: PMID-41471512
@article {pmid41471512,
year = {2025},
author = {Tian, J and Shang, C and Ren, T and Li, Z and Zhang, E and Yang, J and He, M},
title = {The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication.},
journal = {Sensors (Basel, Switzerland)},
volume = {25},
number = {24},
pages = {},
pmid = {41471512},
issn = {1424-8220},
support = {Grant No. U24A6005//National Natural Science Foundation of China/ ; },
abstract = {Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel, edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms. This approach reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Finally, a content-aware deterministic message-queue data-distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which utilizes a "rules first" scheduling strategy and dynamic resource allocation, guarantees low latency for critical instructions even when responding to many concurrent data messages. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads involving over 100 heterogeneous data sources. These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains.},
}
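To make the event-triggered idea in the abstract above concrete, the sketch below shows a minimal scalar Kalman filter that only performs a measurement update when the innovation exceeds a trigger threshold. This is a heavy simplification of the paper's adaptive, multi-threshold scheme: the noise covariances `q` and `r`, the `trigger` value, and the random-walk state model are all illustrative assumptions, and the paper additionally estimates noise covariances online.

```python
import numpy as np

def event_triggered_kf(measurements, q=1e-3, r=0.04, trigger=0.3):
    """Scalar Kalman filter with event-triggered measurement updates.

    The predict step runs every tick, but a measurement is incorporated
    (i.e. "transmitted" from the sensor) only when it deviates from the
    prediction by more than `trigger`, reducing communication and
    computation relative to fixed-rate sampling.
    """
    x, p = measurements[0], 1.0   # initial state estimate and variance
    estimates, updates = [], 0
    for z in measurements:
        p = p + q                  # predict (random-walk state model)
        if abs(z - x) > trigger:   # event trigger: large innovation
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)    # measurement update
            p = (1 - k) * p
            updates += 1
        estimates.append(x)
    return estimates, updates

rng = np.random.default_rng(0)
z = 1.0 + 0.1 * rng.standard_normal(50)   # noisy constant signal
est, n_updates = event_triggered_kf(z)
print(n_updates, "updates out of", len(z), "samples")
```

With a quiet signal, few samples cross the trigger, so most ticks skip the update entirely; a multi-threshold variant like the paper's would choose among several trigger levels and adapt `q` and `r` online.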